The traditional regulatory and policy approach to flood risk in the US has been the optimization of benefits and costs, broadly mandated by federal policy. However, optimization may not be the best approach to flood risk management in light of the deep uncertainties we now face. A more incremental approach using a satisficing strategy may be. Flood risk is a function of the hydrologic factors that produce a hazard and the consequences of the hazard interfacing with the people and property exposed. Regrettably, both hydrologists and climatologists seem unable to provide the clairvoyant guidance needed by the water community facing major decisions on flood risk management in the coming years. As the seminal ‘Red Book’ noted, two things have become second nature to policy analysts and risk managers: absolute safety is unachievable, and it is necessary to distinguish between science and policy. The forcing elements and largest unknowns in determining risk rest with understanding the hydrologic factors involved in shaping the hazard.

  • Flood risk is a function of hazard, exposure, and vulnerability.

  • Hydrologists and climatologists seem unable to provide guidance for coming years.

  • The US operates with an inconsistent approach to dealing with hydrologic factors.

  • Quantitative risk analysis may not be the best approach considering the deep uncertainties such analyses face.

  • A more incremental and satisficing one may be.

The seminal ‘Red Book’, Risk assessment in the federal government: managing the process, published by the U.S. National Academies1 (NRC, 1983), warned of two things that have become second nature to most policy analysts and risk managers. These are:

  • Absolute safety is unachievable, and

  • It is necessary to distinguish between science and policy.

Each applies to flood risk management. Critics contend that the results of risk assessment are often seen as scientific findings by regulators and the public, whereas in fact they are based in part on other considerations. The Committee believed that guidelines can lead to risk assessments that clearly delineate the limits of current scientific knowledge and the policy basis for choosing among inference options. ‘Flood risk is a function of the hydrologic factors that produce flood waters and the consequences of the flood waters interfacing with the people and property exposed to the hazard’ (Figure 1). Given the extent of exposure to the flood hazard, the composition of the communities affected (vulnerability), and the steps taken to mitigate the hazard, the forcing elements and largest unknowns in determining risk rest with understanding the hydrologic factors that shape the hazard (Kron et al., 2019).

Fig. 1

The components of risk from Flood Risk Management: A Strategic Approach (Sayers et al., 2013).


Hydrology has been with us since the beginnings of civilization, probably even before. Hydrological science is more recent, but mature nonetheless. Yet, our ability to characterize hydrological risks remains imperfect, even in that stationary world in which few hydrologists or risk analysts still believe (Galloway, 2011). Some hydrologists are skeptical that current data yet indicate significant changes in river parameters. Climatologists, however, predict a climate-driven shift away from current conditions to a more turbulent flood and drought future.

Regrettably, both hydrologists and climatologists seem unable to provide guidance to the water community facing major decisions on flood risk management in the coming years. This is no one's fault; the challenge is hard. Nonetheless, there is a need for the hydrology and meteorology communities to provide analysts and officials the information they require now to operate today's water projects and to plan and design those of tomorrow. A revised way of thinking about flood risk and the hydrologic factors that influence it – a different paradigm – may be called for.

Flood frequency is the number of times a flood above a given discharge or stage is likely to occur over a given number of years (WMO, 1992).

Nearly all analyses of historical flood information to estimate future flood risk assume that future climate will be like the past, that is, that the world is stationary. Even so, observed intervals between floods differ widely from their average. A well-known example is Passau, where during the past 500 years the intervals between 50-year floods ranged from 4 to 192 years, and between 100-year floods from 37 to 192 years (Figure 2). These historical climate variations are unpredictable even at multi-decadal time scales. A home or a bridge built to serve for the next 50 years faces a risk that a new phase of long-term climate variability will arrive, and that uncertainty cannot be quantified from short historical records. Unlike Passau, US records seldom exceed 100 years, and 50 or so years is more common.
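Even under strict stationarity, wide scatter in the intervals between rare floods, as at Passau, is expected behavior. A minimal simulation sketches the point; the horizon length and seed are illustrative assumptions, not data from Passau:

```python
import random

# Illustrative sketch (not Passau data): under a stationary climate, the
# waiting time between exceedances of the 100-year flood is geometrically
# distributed with mean 100 years -- but its spread is enormous.
random.seed(1)

def intervals_between_floods(aep: float, n_years: int) -> list[int]:
    """Simulate annual flood occurrences and return the gaps (in years)
    between successive exceedances of the given AEP flood."""
    flood_years = [y for y in range(n_years) if random.random() < aep]
    return [b - a for a, b in zip(flood_years, flood_years[1:])]

gaps = intervals_between_floods(aep=0.01, n_years=50_000)
print(f"mean interval: {sum(gaps) / len(gaps):.0f} years")
print(f"shortest: {min(gaps)} years, longest: {max(gaps)} years")
```

The gaps average about 100 years, yet individual gaps of a few years or several centuries are routine, which is why a record only a century long says little about long-term flood frequency.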

Fig. 2

High-water scale (1501–2002) at the tower of the Altes Rathaus, Passau, Germany, photographed in September 2012 (Eychaner, 2015). Reproduced with permission of the Association of State Flood Plain Managers.


Most people think of flood risk as having to do with the frequency of severe storms or stream flows, leading to notions such as the 100-year flood. We measure these frequencies by an annual probability, for example, the p = 0.01 flood. It has become common to talk of flood recurrence in probabilities rather than in return periods, but one suspects that most people are more comfortable with the latter, as few among our citizenry have ever taken a course in probability2. The implication of these metrics is that the probability of a flood of a certain size has some particular value, and that this frequency is a property of nature. In risk jargon, it is an aleatory uncertainty. Granted, the probability may change with time, either because the natural processes are not stationary or because there are upstream or site-specific anthropogenic changes in the watershed. Nonetheless, the underlying concept is that flood frequency is a property of nature.

In the late 1960s, the US government decided to use the 1% annual exceedance probability (AEP) flood as the basis for the National Flood Insurance Program. The 1% AEP flood was thought to be a fair balance between protecting the public and overly stringent regulation, but the decision to use it for the National Flood Insurance Program (NFIP) was not a risk-based or an economic calculation. Because the 1% AEP flood has a 1-in-100 chance of being equaled or exceeded in any year, a flood that large or larger has an average recurrence interval of 100 years, and it also is referred to as the 100-year flood.

The US Department of Housing and Urban Development (HUD), which was responsible until 1979 for the flood insurance program3, asked a group of academic experts, headed by Gilbert White, to meet at the University of Chicago to examine what should be used as a national standard for flood insurance requirements. The group selected the 100-year flood as a standard they thought appropriate, representing a balance between a flood height that would prevent substantial damages and a standard that would be economically affordable. One attendee, John Sheaffer, a former student of Gilbert White, indicated that:

‘[…] the discussions focused on the selection of a flood frequency determination that would delineate the regulatory area. The 100-year flood emerged as the flood to be used in the flood insurance program. There was no data on 100-year floods, but it was stated that it had ‘a nice sound’ to it and would give an allusion [sic] of safety. As a result, all of the definitions developed at the Seminar, e.g., floodway, flood fringe, and regulatory area, were based on the 100-year flood. Also, the communities were to formulate land use regulations for the 100-year flood plain. Each locality was to regulate the extension of public facilities such as streets, sewers, gas, electricity and water in the 100-year floodplain regulatory area’ (Sheaffer, 2004).

Floods of other average recurrence intervals are defined in the same way: a 50-year flood has 1 chance in 50 (2%) of being exceeded in any 1 year, a 200-year flood has 1 chance in 200 (0.5%), and so on. The chance of an X-year flood occurring in a single year is 1 divided by X. More recently, people are talking about larger floods, such as the 500-year flood, as the citizenry's tolerance for risk has diminished and exposure of land use and development has increased. The ‘500-year flood’ corresponds to an AEP of 0.2%: a flood of that size or greater has a 0.2% chance of occurring in a given year. In 2007, the California Legislature set the acceptable risk criterion for urban levees in the Central Valley of that state at a 200-year return period (SB5, 2007).
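The annual-probability reading compounds over a structure's life, a point the return-period language tends to obscure. A short sketch, assuming independence between years:

```python
# Hedged sketch: the "1 chance in X per year" reading compounds over time.
# Probability of at least one X-year flood during an n-year horizon,
# assuming independent years:
def chance_of_flood(return_period: float, horizon_years: int) -> float:
    aep = 1.0 / return_period          # annual exceedance probability
    return 1.0 - (1.0 - aep) ** horizon_years

# A 100-year flood during a 30-year mortgage:
print(f"{chance_of_flood(100, 30):.0%}")   # roughly 26%
```

Over a 30-year mortgage, a property at the edge of the 100-year floodplain thus faces roughly a one-in-four chance of at least one flood, far from the remote event the phrase "100-year" suggests.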

In earlier times, other standards were used. Prior to the NFIP, the Tennessee Valley Authority (TVA) used the maximum probable flood, and the US Army Corps of Engineers (USACE) used the standard project flood (SPF). Neither of these was a probabilistic concept, and neither was particularly replicable by different engineers. According to Robinson (2004), other standards were also in use during this period. The Connecticut Resources Commission adopted 5–7 times the mean annual flood, equating to a 35- to 150-year level; the stated reason for this standard was that there was no uniform method for determining flood frequencies. The Soil Conservation Service (SCS) used the 25-year flood in rural areas and the 100-year flood in urban areas. The USGS at the time avoided the 100-year flood because most basins had much shorter record lengths. Nonetheless, by the 1970s, federal agencies began agreeing on the 100-year flood as the standard for floodplain management that would be required by FEMA and formalized with the NFIP. As far as one can tell, this criterion was not based on economic analysis. USACE continued to apply project-by-project criteria, including use of the SPF in their designs.

Famously, the Netherlands Ministry of Infrastructure and Water Management (Rijkswaterstaat) adopted a 10,000-year standard for its coastal flood defenses along the North Sea and a 2,000-year standard for its Rhine River levees, although their calculation of return periods differs somewhat from US practice. Unlike other regions of the world – specifically coastal America – the additional meters of elevation needed to provide this added protection along the Dutch coast are by comparison few. The derivative of elevation with respect to AEP is smaller in the Netherlands, so extra protection is comparatively less expensive (Jorrison et al., 2016)4.

The Dutch example illustrates the point that setting risk standards is a matter of trade-offs. If the cost of increased safety is small, then why not tighten the safety standard? If the cost is large, then one has to think more carefully about what is being bought and at what cost. This is the case in New Orleans where the derivative of elevation with risk reduction is larger than in the Netherlands (i.e., the levees have to be raised a great deal higher and at much greater cost to achieve even a 1,000-year elevation). Everything depends on trade-offs.

Risk analysts separate uncertainties into at least two types: aleatory due to randomness in nature, and epistemic due to the lack of knowledge. Aleatory uncertainties are properties of nature and are irreducible, but epistemic uncertainties are properties of the mind and are, in principle, reducible to zero with enough information. The common notion in engineering and planning is that flood frequencies are aleatory uncertainties. Actually, this is mostly a convenient assumption of the way we model floods. Pierre-Simon Laplace, the great Enlightenment probabilist, for example, thought there was no such thing as randomness in nature and that all uncertainty was epistemic (Laplace, 1814). From the view of risk analysis and planning, the uncertainties of flood discharge or river height are not simple frequencies of nature but more complicated probabilities of how we know things.

From a purely statistical view, the limited historical record of floods means that the sampling error in flood discharge and river height is reasonably large. Along the Rhine or Danube that history may be as much as 500 years – on the Nile at Cairo it is perhaps 1,400 years – but for North American streams, the length of record is much shorter. This means that the aleatory natural process of rainfall, runoff, and streamflow is supplemented by epistemic uncertainty in the parameters of the frequency distributions describing those natural processes.
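The effect of record length on sampling error can be illustrated with a small Monte Carlo experiment. The Gumbel population, its parameters, and the `estimate_q100` helper below are all invented for illustration; they are not values or procedures from the paper:

```python
import math
import random

# Sketch of sampling (epistemic) uncertainty: estimate the 100-year flood
# from short records drawn from a known Gumbel population. All numbers are
# illustrative assumptions.
random.seed(42)
MU, BETA = 1000.0, 300.0                      # "true" Gumbel parameters
TRUE_Q100 = MU - BETA * math.log(-math.log(0.99))

def estimate_q100(record: list[float]) -> float:
    """Method-of-moments Gumbel fit, then the 0.99 annual quantile."""
    n = len(record)
    mean = sum(record) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in record) / (n - 1))
    beta = sd * math.sqrt(6) / math.pi
    mu = mean - 0.5772 * beta
    return mu - beta * math.log(-math.log(0.99))

def gumbel() -> float:
    """One year's annual-maximum discharge from the 'true' population."""
    return MU - BETA * math.log(-math.log(random.random()))

# 2,000 hypothetical 50-year records, each yielding one Q100 estimate:
estimates = [estimate_q100([gumbel() for _ in range(50)]) for _ in range(2000)]
spread = max(estimates) - min(estimates)
print(f"true Q100 = {TRUE_Q100:.0f}; 50-year-record estimates span {spread:.0f}")
```

Even with 50 years of record, individual estimates of the 100-year discharge scatter widely around the true value; that scatter is the epistemic, parameter uncertainty discussed in the text.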

In the early 1990s, USACE, at its Hydraulic Engineering Center in Davis, California, began developing a ‘risk & uncertainty’ approach to flood hazard damage reduction studies (USACE, 2008). This resulted in a risk analysis methodology for dimensioning flood levee heights that incorporates uncertainty due both to the expected AEP of river discharge (i.e., the best estimate of the aleatory frequency of flooding) and to the parameter uncertainty in those exceedance probabilities (i.e., the epistemic or statistical uncertainty due to limited data). The distinction between these two uncertainties is seen in Figure 3. The standard deviations referred to in the figure are those arising from the limited numbers of data from which flood frequencies are estimated.

How should these two sorts of uncertainty be combined? USACE adopted the approach of using the expected (aleatory) flood discharge with some return period, say 100 years, and adding to it some fraction of the epistemic uncertainty. This results in what has been called a ‘conditional non-exceedance probability’ flood. The percentage is usually taken to be 95%. For example, in Figure 3, the best estimate of the discharge with a 100-year return period is about 6,000 cfs (170 m3/s). The mean plus two standard deviations of parameter uncertainty is about 12,000 cfs (340 m3/s). Presuming for illustration that the uncertainty is Normally distributed, a value two standard deviations above the mean has about a 98% chance of not being exceeded. So, one would say that the ‘100-year flood with a conditional non-exceedance probability of 98%’ is 12,000 cfs (340 m3/s). Many people find this confusing.
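The arithmetic behind the conditional non-exceedance probability can be reproduced with the standard Normal distribution. This sketch uses the Figure 3 numbers and assumes a standard deviation of 3,000 cfs, so that the mean plus two standard deviations matches the 12,000 cfs in the text:

```python
from statistics import NormalDist

# Sketch of the "conditional non-exceedance probability" (CNP) arithmetic.
# Best-estimate 100-year discharge ~6,000 cfs (from the text's Figure 3);
# the 3,000 cfs parameter-uncertainty standard deviation is an assumption
# chosen so that mean + 2 sd = 12,000 cfs, as in the text.
best_estimate = 6_000.0   # expected 100-year discharge, cfs
sigma = 3_000.0           # epistemic (parameter) uncertainty, cfs

dist = NormalDist(mu=best_estimate, sigma=sigma)

# Mean + 2 sd has about a 98% chance of not being exceeded:
print(f"CNP of 12,000 cfs: {dist.cdf(12_000):.1%}")        # ~97.7%

# The more common 95% CNP flood:
print(f"95% CNP flood: {dist.inv_cdf(0.95):,.0f} cfs")     # ~10,935 cfs
```

The same machinery gives the more usual 95% CNP flood, about 1.645 standard deviations above the best estimate.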

Fig. 3

Exceedance-probability function and error limit values for the South Fork Bear Creek (NRC, 2000). The middle (red) curve is the expected value, whereas the upper (blue) and lower (green) curves are plus and minus two standard deviations, respectively. Please refer to the online version of this paper to see this figure in color: https://doi.org/10.2166/wp.2021.269.


An alternative way of combining aleatory and epistemic uncertainty is by convolving them together to get a single probability. This is sometimes called an expected probability (NRC, 2000) or the Bayesian predictive probability (Mosleh & Bier, 1996), but it is just a simple probabilistic calculation, presuming the two types of uncertainty are interchangeable. Not all risk analysts accept this presumption.
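As a sketch of the convolution, the expected probability can be computed by integrating the aleatory exceedance probability over the epistemic distribution of a frequency parameter. The Gumbel model and every number below are illustrative assumptions, not values from the paper:

```python
import math
from statistics import NormalDist

# Sketch of the "expected (predictive) probability": the aleatory annual
# exceedance probability is averaged over epistemic uncertainty in a
# frequency parameter. Model and numbers are illustrative assumptions.
BETA = 300.0                        # Gumbel scale of annual-max discharge, cfs
MU_EP, SIGMA_EP = 1_000.0, 200.0    # epistemic uncertainty about location mu
mu_dist = NormalDist(MU_EP, SIGMA_EP)

def aep_given_mu(x: float, mu: float) -> float:
    """Aleatory model: annual-max discharge ~ Gumbel(mu, BETA)."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / BETA))

def predictive_aep(x: float, n_grid: int = 2001) -> float:
    """Convolve the aleatory AEP with the epistemic density on a grid."""
    lo, hi = MU_EP - 5 * SIGMA_EP, MU_EP + 5 * SIGMA_EP
    step = (hi - lo) / (n_grid - 1)
    return sum(aep_given_mu(x, lo + i * step) * mu_dist.pdf(lo + i * step)
               for i in range(n_grid)) * step

x = 2_380.0   # roughly the best-estimate 100-year discharge when mu = 1,000
print(f"AEP at the best-estimate parameter: {aep_given_mu(x, MU_EP):.4f}")
print(f"expected (predictive) AEP:          {predictive_aep(x):.4f}")
```

Notably, the predictive AEP comes out larger than the AEP at the best-estimate parameter: because exceedance probability is convex in the parameter here, epistemic uncertainty raises the expected risk rather than averaging out.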

The distinction between aleatory and epistemic uncertainty becomes especially important when considering the fragility term in the risk equation. Fragility describes the response of protective works to water loads, for example, levee performance. The geotechnical probabilities inherent in forecasting the performance of protective works seldom have to do with aleatory uncertainties, that is, natural frequencies in space or time. They have to do with the characterizations of geotechnical or constructed systems, with uncertainties about engineering parameters, as-built conditions, and models. That is, the uncertainties underlying fragility curves are mostly epistemic. These uncertainties are not describable as annualized values. If a levee is weak, it is not weak on so many days per year; it is either weak or it is not, one just does not know which. The annualized probability only arises in the hazard curve.

As noted, TVA at one time used a maximum probable flood and USACE used a standard project flood, neither of which was probabilistic. These were point-estimated, deterministic flood heights used for dimensioning flood protection (aka, flood risk management) structures.

A widely used concept in the US is the Probable Maximum Flood (PMF). The PMF is the most severe possible flood in a particular drainage area. It is calculated by combining information about the probable maximum precipitation (PMP), geography, and water management strategies of a particular area. According to the Federal Energy Regulatory Commission (FERC, 2001),

‘The Probable Maximum Flood (PMF) is the flood that may be expected from the most severe combination of critical meteorological and hydrologic conditions that are reasonably possible in a particular drainage area’.

‘[The] Probable Maximum Precipitation (PMP) is the greatest depth of precipitation for a given duration that is theoretically physically possible over a particular size storm area at a particular geographical location at a particular time of year’.

The SPF is the volume of streamflow expected to result from the most severe combination of meteorological and hydrologic conditions that are reasonably characteristic of the geographic region involved, excluding extremely rare combinations. It is generally taken to be about half the flow of the PMF.

For the Mississippi River, a project-specific design flood is used by USACE to aid in the design and execution of flood protection in the Mississippi Valley. The current project design flood, developed between 1954 and 1955, resulted from a thorough and cooperative effort by the Weather Bureau, USACE, and the Mississippi River Commission. That effort incorporated previously unavailable data on the sequence, severity, and distribution of past major storms, and investigated 35 different hypothetical combinations of actual storms that produced significant amounts of precipitation and runoff (Mississippi River Commission, 2007). For a drainage area and season of the year in which snowmelt is not a major consideration, the standard project storm estimate is intended to represent the most severe flood-producing rainfall depth–area–duration relationship and isohyetal pattern of any storm considered reasonably characteristic of the region in which the drainage basin is located, giving consideration to the runoff characteristics and the existence of water regulation structures in the basin. Where floods are predominantly the result of melting snow, the SPF estimate is based on estimates of the most critical combinations of snow, temperature, and water losses considered reasonably characteristic of the region (USACE, 1965).

USACE has traditionally used the SPF as its design event. The SPF does not have a fixed recurrence frequency such as 100 or 200 years but is a site-specific determination made on the basis of flood frequency, damage potential, and cost of construction. It is generally understood that the SPF ranges in the vicinity of a 200- to 500-year event (FEMA, 2020). The SPF is a design storm smaller than the PMF, applied where such a flood would nonetheless create significant consequences. The actual level of mitigation authorized for a dam or a levee is proposed by the Administration, after USACE study, and approved by the Congress5.

Beyond the concepts of aleatory and epistemic uncertainty is the question of whether the world is stationary. Can we rely on historical frequencies to forecast those of the future? Has the world ever been stationary? That is perhaps both a scientific and a philosophical question. The pronouncement in 2008 that ‘stationarity is dead’ (Milly et al., 2008), coupled with an increasing number of large storm events and growing recognition of the impacts of climate change, brought into question continuing reliance on stationarity in assessing flood risk6.

Predicting future flood risk from past floods assumes that the future will be like the past. Yet most geo-scientists probably assume that climate variability and change will bring a future unlike the past (IPCC, 2012). Historical data provide estimates only of historical recurrences. The extent to which flood risk will change in the future cannot be known with certainty. Some argue that this uncertainty is unquantifiable; others that it is at least difficult to quantify.

In the world of risk analysis, such uncertainties are called ‘deep’ (Cox, 2012). Simply, this means that the relationship between alternatives and their consequences is unknown even to a probabilistic description. Some deep uncertainties are of the type that Secretary of Defense Rumsfeld called ‘unknown-unknowns’ (USDOD, 2002), or that others have called ‘black swans’ (Aven, 2014).

In the realm of flood risk, these deep uncertainties are perhaps less enigmatic, but still difficult to forecast in probabilities. One sees this manifest in the reports of the IPCC in which projections are treated as scenarios rather than as probabilities. That is not to say that projections of climate change, increasing flood risk, or sea-level rise cannot be made probabilistically, but only that many scientists and politically prominent organizations seem reluctant to put probabilities to them. Given this widespread reluctance, there may be an implicit concurrence that quantitative risk analysis itself may not be the best approach to flood risk management in light of these deep uncertainties.

In the US, flood insurance was established by Congress through the National Flood Insurance Act of 1968. A series of subsequent amendments to the Act made insurance mandatory for properties in identified high-hazard or ‘A’ zones (100-year flood plains) financed by federally backed mortgages. These amendments also regulated new construction in high hazard zones. When catastrophic floods cause damages beyond those expected and in other than high hazard zones, low interest loans from the federal government may be available for repair and recovery (floodsmart.gov).

Two kinds of consequences are important here: economic and safety. Economic consequences are the financial losses that the catastrophe causes, including damages to structures, business interruption, and other things that cost money. Safety consequences are morbidity and mortality losses, of which the most relevant measure is the number of lives lost. The NFIP deals only with the former. Until the calamity that was Hurricane Katrina in New Orleans, USACE also only considered economic losses of floods. Since then, and in concordance with US Bureau of Reclamation practice since the failure of Teton Dam in 1976, potential loss of life has become a dominating criterion for dam and levee safety, and increasingly a concern for flood risk reduction projects (Figure 4).

Fig. 4

USACE societal risk guidelines for existing dams (Reclamation & USACE, 2015). ALARP means, ‘as low as reasonably practicable’.


Economic losses in the US water resource project history have traditionally been treated using benefit–cost analysis (BCA). In this approach, consequences, even if not obviously economic, are nonetheless monetized to provide commensurate metrics. The discounted sum of monetized project benefits is compared to the discounted sum of monetized project costs to generate a proportionality index. If this index is sufficiently larger than 1.0, the benefits are said to outweigh the costs and the project is deemed worthy of moving forward. For many project benefits and costs, there exist markets from which prices can be taken. For those for which markets do not exist or are difficult to identify, non-market proxy methods like surveys are sometimes used (NRC, 2004b).
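The proportionality index reduces to a ratio of discounted sums. A minimal sketch, using invented benefit and cost streams and an assumed 7% discount rate (not agency guidance):

```python
# Minimal sketch of a discounted benefit-cost ratio; the streams and the
# 7% discount rate are illustrative assumptions, not agency guidance.
def present_value(stream: list[float], rate: float) -> float:
    """Discount an annual cash-flow stream (years 1, 2, ...) to today."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream, start=1))

years = 50
benefits = [3.0] * years                 # expected annual damages avoided, $M
costs = [30.0] + [0.4] * (years - 1)     # construction, then O&M, $M
rate = 0.07

bcr = present_value(benefits, rate) / present_value(costs, rate)
print(f"benefit-cost ratio: {bcr:.2f}")
```

With these assumed streams the index comes out near 1.25, so the benefits would be said to outweigh the costs; whether 1.25 is ‘sufficiently larger than 1.0’ is itself a policy judgment.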

The willingness-to-pay (WTP) for some benefit or to avoid some risk is a common economic measure used in monetizing non-commensurate benefits and costs. The WTP is the maximum price at which a consumer will buy a unit of some product or service. There are a variety of indirect means for attempting to measure WTP and as might be expected there is always some uncertainty about the values inferred.

One approach to monetizing safety impacts is to attempt to quantify the WTP for marginal reductions in the risk to life. This is sometimes called the value of a statistical life (VSL). Engineers and planners are usually reluctant to quantify the proverbial ‘value of life’, and the VSL attempts to sidestep the issue by valuing risk reduction at the margin (Viscusi, 1993). This VSL can be inferred from decisions made by citizens in life-threatening situations, or by direct questioning. This is an approach common in regulatory use, such as the US Environmental Protection Agency (US EPA, 2014) or the US Department of Transportation (USDOT, 2012). It is not common in flood risk management or other water resource infrastructure decision making.
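To make the VSL idea concrete, a monetized life-safety benefit is the marginal risk reduction times the exposed population times the VSL. The figures below, including the VSL value itself, are illustrative assumptions, not regulatory or project values:

```python
# Hedged sketch of how a VSL would monetize a life-safety benefit.
# The VSL figure, population, and risk levels are illustrative assumptions.
VSL = 11.6e6            # value of a statistical life, $ (assumed)

population_at_risk = 20_000
annual_fatality_risk_before = 1e-4    # per person, per year (assumed)
annual_fatality_risk_after = 1e-5     # after the project (assumed)

delta_risk = annual_fatality_risk_before - annual_fatality_risk_after
statistical_lives_saved = delta_risk * population_at_risk   # per year
annual_benefit = statistical_lives_saved * VSL

print(f"statistical lives saved per year: {statistical_lives_saved:.1f}")
print(f"monetized annual benefit: ${annual_benefit:,.0f}")
```

The point of the construction is that no individual life is priced; what is valued is a small reduction in risk spread over a large population.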

In dam and levee safety, several major federal agencies in flood management (USACE, Reclamation, FERC, FEMA) have adopted a ‘tolerable risk criteria’ approach pioneered by the UK Health and Safety Executive for nuclear power (HSE, 2001; Figure 4). The history of this approach is summarized by Baecher et al. (2015). In essence, tolerable risk criteria attempt to establish the equivalent of standards for risk to life safety. They do not use trade-offs or economic considerations. Such criteria are, for the most part, not based on demonstrated social preferences but on the judgments of engineers and planners. Social perceptions of risk to life are known to be influenced by a variety of factors – voluntary vs. involuntary, dread vs. usual, occupational vs. non-occupational, public vs. private, and so on – few of which are reflected in the tolerable risk curves in water resource practice (Starr, 1969; Slovic, 1987).

In the recent past, flood risk management has relied upon either statistical analysis of historical flood frequencies or on deterministic bounding concepts such as the PMF or design flood. In an age of non-stationarity and evolving climate, neither of these meets the current challenge: how to make rational decisions in this age of change and indefiniteness?

The traditional flood risk management approach, faced either from the point of view of an economist or an engineer, is optimization. How can we best balance the probabilities of adverse natural events against the associated costs of protection and consequence? This has been the standard rational utilitarian approach since the Victorian age. But does it work in an age of vagueness? Perhaps not.

It seems that what is needed is more akin to the muddling-through approach of Charles Lindblom (1959), something that Herbert Simon (1956) would have called satisficing. Rather than seeking optimal solutions, perhaps our planning objective should be to find incremental, satisfactory solutions that are resilient against the vagaries of future climate, future land uses and development, and other factors that affect risk reduction and the return on investment in flood protection measures. Satisfactory plans lack the attractive property of optimality but benefit from robustness. The question is, what does it mean for a flood protection measure to be ‘satisfactory’ or ‘good enough’ given present knowledge? That is a less obvious question to answer than that of optimizing the trade-off between risk and cost.

This might be an approach to which Gilbert White (1961) could be sympathetic. He proposed thinking about natural resource decisions in the sense of satisficing. He cited March and Simon's (1958) book on decision making in organizations, as well as Firey's Man, Mind, and Land (1960), which present concepts along this line. Kates (1971) reprises White's thinking in the context of natural hazards and risk, as does the National Academy (NRC, 2004a) in its report on Adaptive Management for Water Resources Project Planning.

A more adaptive approach to flood risk management need postulate no ‘utility function’ for the organization or citizenry, nor does it require an elaborate procedure for calculating marginal rates of substitution among different costs and benefits. Natural hazard problems like flood risk are characterized by computational intractability and inadequate information, both of which inhibit the use of mathematical optimization. Simon observed in his Nobel Lecture that ‘decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a realistic world. Neither approach, in general, dominates the other and both have continued to co-exist in the world of management science’. This is similar to the approach adopted by USACE in the aftermath of Hurricane Katrina (Figure 5) called ‘stepping down risk’.

Fig. 5

Integrated flood risk management from a satisficing approach (Courtesy: USACE).


To the extent we are still trying to maximize something, it is our confidence of a good enough outcome even if things go poorly. There is no particular reason to assume that the solution that is best in a utility calculation is also the solution that is most robust to error in the data and assumptions underlying that calculation.

Our uncertainty in today's flood risk management is more extreme than the uncertainty in rolling dice. Knight (1921) distinguished between probabilistic risk, which can be measured and insured against, and non-probabilistic ‘true uncertainty’, which is the source of entrepreneurial profit (and loss) in the natural world. Here is an alternative to utility maximization: maximizing robustness to uncertainty of a satisfactory outcome. Robust satisficing is appropriate when probabilities are known only vaguely. The maximizer of utility seeks the answer to a single question: which option provides the highest benefit–cost ratio? The robust satisficer seeks the answer to two questions: what is a ‘good enough’ outcome; and, of the options that produce a good enough outcome, which one will be robust to the widest range of possible futures?
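The two-question logic of the robust satisficer can be made concrete. In this sketch the options, scenarios, and net benefits are all invented; the point is only the difference in the selection rule:

```python
# Sketch of robust satisficing vs. optimization, under invented numbers.
# Each option's net benefit ($M) is listed across four climate scenarios;
# neither the options nor the values come from the paper.
GOOD_ENOUGH = 10.0   # the "satisfactory" net-benefit threshold, $M

options = {
    "optimized levee": [60.0, 30.0, 6.0, -20.0],   # best on average, fragile
    "setback + levee": [25.0, 20.0, 14.0, 11.0],   # good enough everywhere
    "buyouts only":    [12.0, 11.0, 10.5, 9.0],    # robust but meager
}

def expected(vals):
    return sum(vals) / len(vals)

def robustness(vals):
    """Number of scenarios in which the option is still good enough."""
    return sum(v >= GOOD_ENOUGH for v in vals)

best_expected = max(options, key=lambda k: expected(options[k]))
most_robust = max(options, key=lambda k: robustness(options[k]))
print(f"utility maximizer picks: {best_expected}")
print(f"robust satisficer picks: {most_robust}")
```

The utility maximizer takes the option with the best expected value even though it fails badly in one future; the robust satisficer takes the option that stays above the ‘good enough’ threshold in the most futures.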

For the first two centuries of the nation's settlement of floodplains, the determination of the flood risk to be tolerated (level of protection to be provided or required) was based on the judgment of local officials responsible for carrying out the construction or controlling the development of the landscape. Decisions were made based on the knowledge available at the site and were tied to both engineering and political factors as well as statistics – what was known about the floods that had occurred and might occur in the future, the landscape, and the necessity for flood mitigation for social or economic purposes.

The 20th century brought more science into these judgments and saw the development of mathematically based methodologies for determining the recurrence interval of flood events and the use of climatology and meteorology to determine the characteristics of large floods. By the end of the century, both approaches were in play with the National Flood Insurance Program focusing its attention on the 100-year flood event as an insurance and floodplain management standard, and USACE developing risk standards relying on a combination of climatology, meteorology, and statistics.

After major climate-related disasters in the first decades of the 21st century, President Barack Obama proposed that all flood risk reduction projects undertaken by the federal government be required to accommodate the changes in flood levels that might occur under climate change and under physical changes in the project areas. The federal government examined what standards should govern support for new projects and the repair of flood-damaged ones, so that it would not build or rebuild projects whose designs had already proven, or would prove, inadequate against 21st century flood events. The government concluded that, in the face of these conditions, efforts needed to be made to accommodate the seemingly inevitable rise in flood elevations.

In 2015, President Obama issued Executive Order 13690 promulgating a Federal Flood Risk Management Standard (FFRMS) to ensure that agencies expand management from the current base flood level to a higher vertical elevation and corresponding horizontal floodplain, addressing current and future flood risk so that projects funded with taxpayer dollars last as long as intended. Under the new standard, agencies carrying out federally supported projects would have followed one of three options designed to adapt to future conditions:

  1. Using best-available, actionable data and methods that integrate current and future changes in flooding based on science;

  2. Adding two or three feet of elevation (depending on criticality) above the 100-year, or 1%-annual-chance, flood elevation; or

  3. Using the 500-year, or 0.2%-annual-chance, flood elevation.
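Under stated assumptions, the three options reduce to a simple elevation calculation. The site elevations below are hypothetical inputs chosen for illustration; 'criticality' determines the freeboard under the second option:

```python
# Hypothetical site elevations in feet (illustrative only).
bfe_100yr = 12.0         # 100-year (1%-annual-chance) flood elevation (BFE)
elev_500yr = 14.5        # 500-year (0.2%-annual-chance) flood elevation
climate_informed = 15.0  # science-based estimate of future flood elevation

def ffrms_elevation(option, critical=False):
    """Design flood elevation under each FFRMS option (sketch)."""
    if option == 1:   # climate-informed science approach
        return climate_informed
    if option == 2:   # freeboard approach: BFE + 2 ft (or + 3 ft if critical)
        return bfe_100yr + (3.0 if critical else 2.0)
    if option == 3:   # 500-year flood approach
        return elev_500yr
    raise ValueError("option must be 1, 2, or 3")

for opt in (1, 2, 3):
    print(opt, ffrms_elevation(opt, critical=True))
```

Note that which option is most protective depends entirely on the site: where good climate-informed projections exist, option 1 may exceed the fixed freeboard; elsewhere, the freeboard or 500-year elevation governs.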

The FFRMS would not apply beyond projects supported by the federal government; however, the presence of such an approach would influence local and state agencies.

During the period that federal agencies were putting the standard into use, President Trump was elected and rescinded Executive Order 13690. While there may no longer be a federal standard, much of the business world has carefully examined the FFRMS and voluntarily incorporated elements of it into its plans. In late 2019, the governor of the state of Virginia, facing significant sea-level rise along the Virginia coast and riverine climate change impacts throughout the state, issued State Executive Order 45: Floodplain Management Requirements and Planning Standards for State Agencies, Institutions, and Property, which mirrors much of the thrust of the FFRMS.

In addition, the Congress, through the 2018 National Defense Authorization Act in a section, Flood Risk Disclosure for Military Construction, directed the Secretary of Defense to require the secretaries of the military departments to:

‘[…] when mitigating the flood risk of a major or minor military construction project within or partially within the 100-year floodplain, the Secretary concerned shall require any mitigation plan to assume an additional two feet above the base flood elevation for non-mission-critical buildings, as determined by the Secretary; and three feet above the base flood elevation for mission-critical buildings, as determined by the Secretary’.

On January 20, 2021, his first day in office, President Joe Biden issued Executive Order 13990, Protecting Public Health and the Environment and Restoring Science To Tackle the Climate Crisis. Among other things, it rescinded Executive Order 13807 of August 15, 2017 (Establishing Discipline and Accountability in the Environmental Review and Permitting Process for Infrastructure Projects), which had rescinded the Executive Order establishing the FFRMS; efforts are now underway in federal agencies to establish agency guidelines for implementation of the standard.

Even within the federal role, the US operates with an inconsistent approach to flood risk: there is no consistent approach across federal agencies or among states and localities. No federal agency exercises direction, or even oversight, as the world moves into a rapidly changing climate and weather situation. As noted, optimizing risk may not be the best approach to flood risk management in light of the deep uncertainties we now face, but a more incremental and satisficing one may be.

Funding for this research was provided by the Water Center of the University of Maryland, College Park.

All relevant data are included in the paper or its Supplementary Information.

1. U.S. National Academies of Science, Engineering, and Medicine (NASEM).

2. There is substantial evidence that many of the public understand odds and pictures far more clearly than they do either probabilities or percentage frequencies (CDC, 2018).

3. The Federal Emergency Management Agency (FEMA) assumed these responsibilities from HUD.

4. In late 2016, the Dutch Parliament amended the national water law to include a risk-based standard of 1:100,000 years with a goal of reaching that level by 2050.

5. Agricultural areas may receive less protection.

6. The 2008 paper by Milly et al. was followed by another paper in 2015 (Milly et al., 2015) indicating that the problem was far from being solved.

Aven, T. (2014). Risk, Surprises and Black Swans: Fundamental Ideas and Concepts in Risk Assessment and Risk Management. Routledge, Abingdon, Oxon; New York, NY.

Baecher, G. B., Abedinisohi, F. & Patev, R. C. (2015). Societal Risk Criteria for Loss of Life: Concepts, History, and Mathematics. University of Maryland, College Park, MD.

CDC (2018). If the Material Uses Numeric Probability to Describe Risk, Is the Probability also Explained with Words or a Visual? The CDC Clear Communication Index. Centers for Disease Control and Prevention. Available at: https://www.cdc.gov/ccindex/tool/page-20.html (accessed January 11, 2020).

Cox, L. A. (2012). Confronting deep uncertainties in risk analysis. Risk Analysis 32(10), 1607–1629.

Eychaner, J. H. (2015). Lessons from a 500-Year Record of Flood Elevations. Technical Report 7, Association of State Floodplain Managers, Madison, p. 25.

FEMA (2020). Myths and Facts about the National Flood Insurance Program. Federal Emergency Management Agency.

FERC (2001). Determination of the probable maximum flood (Chapter VIII). In: Engineering Guidelines for the Evaluation of Hydropower Projects. Prepared by the Office of Energy Projects (OEP). United States Federal Energy Regulatory Commission, Washington, DC.

Firey, W. I. (1960). Man, Mind, and Land: A Theory of Resource Use. Free Press, Glencoe, IL.

Galloway, G. E. (2011). If stationarity is dead, what do we do now? JAWRA Journal of the American Water Resources Association 47(3), 563–570.

HSE (2001). Reducing Risks, Protecting People – HSE's Decision Making Process. UK Health and Safety Executive, HMSO, London.

IPCC (2012). Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change. Field, C. B. & Intergovernmental Panel on Climate Change (eds). Cambridge University Press, New York, NY.

Jorissen, R., Kraaij, E. & Tromp, E. (2016). Dutch flood protection policy and measures based on risk assessment. In: FLOODrisk 2016 – 3rd European Conference on Flood Risk Management. E3S Web of Conferences 7, 20016.

Knight, F. H. (1921). Risk, Uncertainty, and Profit. Hart, Schaffner & Marx; Houghton Mifflin Company, Boston, MA.

Kron, W., Eichner, J. & Kundzewicz, Z. W. (2019). Reduction of flood risk in Europe – Reflections from a reinsurance perspective. Journal of Hydrology 576, 197–209.

Laplace, P. S. (1814). Philosophical Essay on Probabilities. Dover Publications, Inc., New York.

Lindblom, C. E. (1959). The science of 'muddling through'. Public Administration Review 19(2), 79–88.

March, J. G. & Simon, H. A. (1958). Organizations. Wiley, New York.

Milly, P. C. D., Betancourt, J., Falkenmark, M., Hirsch, R. M., Kundzewicz, Z. W., Lettenmaier, D. P. & Stouffer, R. J. (2008). Stationarity is dead: Whither water management? Science 319, 573–574.

Milly, P. C. D., Betancourt, J., Falkenmark, M., Hirsch, R. M., Kundzewicz, Z. W., Lettenmaier, D. P., Stouffer, R. J., Dettinger, M. D. & Krysanova, V. (2015). On critiques of 'stationarity is dead: Whither water management?'. Water Resources Research 51(9), 7785–7789.

Mississippi River Commission (MRC) (2007). The Mississippi River and Tributaries Project: Controlling the Project Flood. MRC, Vicksburg, MS.

Mosleh, A. & Bier, V. M. (1996). Uncertainty about probability: a reconciliation with the subjectivist viewpoint. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 26(3), 1083–1090.

National Research Council (1983). Risk Assessment in the Federal Government: Managing the Process. National Academy Press, Washington, DC.

National Research Council (2004a). Adaptive Management for Water Resources Project Planning. National Academies Press, Washington, DC.

National Research Council (2004b). Valuing Ecosystem Services: Toward Better Environmental Decision-Making. National Academies Press, Washington, DC.

NRC (2000). Risk Analysis and Uncertainty in Flood Damage Reduction Studies. National Academies Press, Washington, DC.

Reclamation and USACE (2015). Best Practices and Risk Methodology. US Bureau of Reclamation, Denver, p. 20.

Robinson, M. F. (2004). History of the 1% annual chance flood standard. In: Olinger, L. W., Plasencia, D. & Galloway, G. E. (eds). Reducing Flood Losses: Is the 1% Chance (100-year) Flood Standard Sufficient? Association of State Floodplain Managers, Madison, WI, pp. 1–8.

Sayers, P., Li, Y., Galloway, G., Penning-Rowsell, E., Shen, F., Wen, K., Chen, Y. & Le Quesne, T. (2013). Flood Risk Management: A Strategic Approach. UNESCO, Paris.

SB5 (Senate Bill 5), State of California (2007). Sacramento.

Sheaffer, J. (2004). The evolution of the 100-year flood standard. In: Olinger, L. W., Plasencia, D. & Galloway, G. E. (eds). Reducing Flood Losses: Is the 1% Chance (100-year) Flood Standard Sufficient? Association of State Floodplain Managers, Madison, WI, pp. 9–11.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review 63(2), 129–138.

Slovic, P. (1987). Perception of risk. Science 236(4799), 280–285.

USACE (1965). Standard Project Flood Determination. US Army Corps of Engineers (USACE), Washington, DC.

USACE (2008). HEC-FDA Flood Damage Reduction Analysis: User's Manual CPD-72, Version 1.2.4. US Army Corps of Engineers, Hydrologic Engineering Center, Davis, CA.

USDOD (2002). DoD News Briefing – Secretary Rumsfeld and Gen. Myers. US Department of Defense News Transcript, Press Operations.

USDOT (2012). Economic Values Used in Analyses. US Department of Transportation.

USEPA (2014). Mortality Risk Valuation. US EPA, Overviews and Factsheets.

Viscusi, W. K. (1993). The value of risks to life and health. Journal of Economic Literature 31(4), 1912–1946.

White, G. F. (1961). The choice of use in resource management. Natural Resources Journal 1(1), 23–40.

WMO (1992). International Glossary of Hydrology. World Meteorological Organization, Geneva.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).