Computational decision analysis for flood risk management in an uncertain future

Flood risk management is a major expense in many countries, and while the returns on this investment, in terms of risk reduction, are high, the process of developing and choosing between management options is of critical importance. New sources of data and the falling cost of computation have made possible new approaches to options appraisal. The state of the art has a number of limitations, however. We present a comprehensive but parsimonious framework for computational decision analysis in flood risk management that addresses these issues. At its core is a simple but flexible model of change on the decadal time scale of typical option appraisals, including the management interventions that are the subject of decision along with influences, such as climate change, that are independent of the processes of flood risk management. A fully integrated performance model is developed, estimating both costs and benefits. Uncertainty analysis can thereby be applied to performance metrics of direct interest to stakeholders. We illustrate the framework with an implementation for a hypothetical flood risk management decision. We discuss possible variants of the framework that could be applied in fields other than flood risk management.


INTRODUCTION
The UK will spend £800 m on flood and coastal erosion risk management in England in 2010-2011 (Environment Agency ).
£570 m is allocated to the construction and maintenance of flood defence assets, the balance to development control, warning and various planning and operating activities. The return on this investment is considerable: it is estimated that in the long term £8 is saved for every £1 spent on flood and coastal risk management.
Computational models have long been used to support decision making in flood risk management. A common use is to estimate the level that a river would reach during a design flood, say a 1:100 year event, so that a flood defence scheme can be designed to accommodate this event. The model would be run several times during development to test variations in the scheme, but each of these runs would be set up and initiated by a modeller.
In recent years, computational risk analysis has been used with increasing frequency in flood risk management.
Here, the statistical expectation of an impact is estimated using numerical integration, a process that involves running many hundreds or thousands of simulations. Multiple runs enable analysis of floods and their consequences in a very wide range of possible conditions, more and less severe than the design flood. Risk analysis provides a framework for proper treatment of the joint probability of multiple flooding conditions and dike failure modes. Risk analysis became practical as the cost of processing power dropped, and was pursued because it provides information that in certain decision-making contexts is of much greater value than the results of individual model runs.
In parallel with the development and adoption of risk analysis methods, an appreciation of the potential impact of uncertainty in data and modelling has grown and become embedded in flood risk management practice.
Again, this is partly because advancing technology has made computational uncertainty analysis affordable. But again, it has been taken up as a matter of concern in flood risk management because of the evident impact of uncertainty on our ability to make decisions on the basis of the outputs of models.
The high profile issue of climate change and its clear relevance to flood risk management have raised a third issue to prominence: that of processes of change that operate over decadal time scales and that significantly alter flood risk.
These have always been present of course, and indeed many flood risk management measures are taken in response to one or more such processes moving or threatening to move a system out of an acceptable behavioural range.
Over the effective lifetimes of flood risk management measures, whether structural or non-structural, substantial changes will occur to the system in which they are embedded. Changes to catchment land use alter run-off processes, for example, while socio-economic change alters the value of assets at risk and vulnerability to flooding. The very dikes that we build to protect us from floods suffer gradual deterioration and settlement. It is increasingly expected that the design and justification of these measures will take these processes of change into account.
It is clear that these three issues, of risk, uncertainty and long-term change, are closely linked and must be addressed together in options appraisal. In this paper we introduce the notion of a fully integrated decision analysis and present a framework to guide the development of such an analysis. Building on the state of the art, we make a number of novel contributions.
• Impact assessment, risk analysis and intervention costing are brought together in a fully integrated model of option performance.
• Uncertainty analysis is applied to the performance estimator rather than the component parts of the analysis, generating information of high relevance in making decisions.
• Flood risk management interventions are modelled as functions that map from system state to system state.
Given a library of intervention functions, options can be defined by simply listing which interventions to apply when.
• Processes of long-term change are simulated, including the effects of management interventions alongside 'exogenous' change processes (processes over which flood risk managers have no control but to which they must respond).
• Interventions are costed when they are applied during simulation and these costs are aggregated to provide costs of each proposed sequence of management interventions. Cost models can depend on system state variables, which in turn can be subject to long-term change and uncertainty.
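To make the intervention-as-function idea concrete, here is a minimal Python sketch. All names, state variables and cost figures here are hypothetical illustrations chosen by us, not the paper's implementation:

```python
from dataclasses import dataclass, replace

# An intervention maps a system state to a (new state, cost) pair.
# State variables and costs below are illustrative assumptions.

@dataclass(frozen=True)
class State:
    crest_level: float   # dike crest level (m AOD)
    condition: float     # 1.0 = perfect, 5.0 = very poor

def repair(s):
    """Inspection-triggered repair: acts only if condition is 4.0 or worse."""
    if s.condition >= 4.0:
        return replace(s, condition=2.0), 50_000.0   # illustrative cost
    return s, 0.0

def raise_crest(x):
    """Intervention family: returns the intervention raising the crest to x m."""
    def intervention(s):
        if x > s.crest_level:
            cost = 200_000.0 * (x - s.crest_level)   # illustrative cost model
            return replace(s, crest_level=x, condition=1.0), cost
        return s, 0.0
    return intervention

def do_nothing(s):
    return s, 0.0

# Given such a library, an option is simply a sequence of interventions,
# one per interval of the appraisal period:
option = [repair, do_nothing, raise_crest(5.5), do_nothing]
```

Because every intervention shares the same state-to-(state, cost) signature, options can be assembled, compared and re-ordered without touching the intervention implementations themselves.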
The framework described is parsimonious. Although simple in essence, however, its implementation is not trivial. The analysis involves several layers of sample propagation and simulation, generating a combinatorial explosion of model runs and large, high-dimensional data sets. This poses challenges for traditional approaches to programming computational analyses. The practicality of the framework as a tool depends on the use of emerging highly scalable computing resources, in particular cloud computing (Harvey & Hall ), to implement its engine.
Those resources are not accessible to the average engineer or analyst, however. Introducing programmers and system administrators between the engineer and the computational resource will increase friction in the analysis process and thus the cost of change, to the detriment of decision quality.
The example described in this paper was implemented using Reframe, a web-based analysis tool described later in the paper. We then set out an example analysis for a decision between flood risk management options in a hypothetical coastal setting, and discuss the implementation of this analysis and the results generated. Finally, we consider some broader issues: the process of decision analysis development and how this fits within a decision-making process, issues to be addressed in implementing the framework at full scale, and opportunities for application of variants of the framework in settings other than flood risk management.
We refer frequently to options, a term commonly used in decision theory and analysis circles. In this paper an option is a candidate course of action: a proposed sequence of management interventions through time.

FRAMEWORK

Overview
Figure 1 shows the layered structure of the framework.
Figure 2 overlays on this structure the analysis data flow.
Small rectangular and oval boxes denote data sets and the transformations between data sets respectively.
The framework has a layered structure that is indicated by the dashed boxes enclosing subgraphs. We will summarise the role of each layer in turn, starting with the innermost (we recommend against approaching the design and implementation of a decision analysis in this order, however; this point is taken up in more detail in the section 'The process of decision analysis development' below).
Decision analysis is undertaken to inform decisions regarding the management of some system. A typical system might be an urban area exposed to flooding from a river, the sea, or both. The decisions made ultimately result in changes being made to that system, for example the construction of dikes, modification to planning regulation and the implementation of educational initiatives.
Meanwhile the system is subject to constant change irrespective of this process of active management: relative mean sea level is changing, dikes deteriorate, and development proceeds, albeit within (or nearly within) the restrictions of the enacted legislation.
At any given time the system has a state. We represent a system state as a system state vector s ∈ S, where S denotes the space of representable system states, and we use system state vectors at all layers of the framework. Table 1 lists the members of the system state vector used in the example, which gives an indication of the sort of variables that one might expect to find in the state vector, and thus of the scope of the notion of system state invoked by this framework.
System state as used here is a broad concept. It includes most of the usual inputs to the hydrodynamic and other modelling systems that are used in impact modelling, as well as the joint probability distribution over events used in risk analysis and parameters and state variables needed in simulating long-term change.
Our vector s, however large, cannot capture all of the myriad details of a real system. As usual in a modelling exercise, our goal is to capture as much of the relevant detail as possible and to make a fair assessment of the impact of what has been omitted. It is part of the process of decision analysis to establish just what is in fact relevant.
Layer 1: event-based impact estimation

The innermost layer implements a deterministic estimator d(s, x) of the impact or impacts of an event x in a system s. Impacts may be quantified according to a number of economic, social and environmental metrics including but not limited to property damage and loss of life.
Impact estimation and risk analysis (layer 2, below) in the framework as described are event based (we comment on adaptation of the framework to accommodate continuous simulation methods in the discussion section).The event vector x specifies any aspect of system behaviour that can only be characterised probabilistically.In a typical flood risk analysis including reliability analysis of dike failure, x specifies both the driving hydrological event (some combination of extreme tide, wave action and flow) and the dike system state.Given the event x, the calculation of impact is deterministic.
The impact model d will normally be implemented by choreographing a number of component models capturing the physical behaviour of the system and the socio-economic or environmental impact of that behaviour. In a typical flood impact model, the driving event is translated into flood depths and velocities by means of a number of hydrodynamic models, which may include models of river or estuary hydraulics, of flow past dikes and of floodplain inundation. Further models are then used to estimate the impact of those depths and velocities on people and property.
The framework requires that impacts be quantitatively modelled, but detailed process-based models are not assumed and may not always be appropriate. Indeed, not all impacts can be accurately and precisely modelled (for example, the impact on amenity of high flood walls). Each increase in complexity incurs substantial cost, which must be justified in terms of the decision to be made. Because uncertainty analysis is included as a fundamental component of the framework (in layer 5, below), simple impact generation models can safely be used as long as the model uncertainty introduced is properly captured. This may require error models to be introduced into d, the parameters to which can be set as part of the system state s by the layer 5 uncertainty analysis.
The impact model d must be implemented as a fully automated procedure, as d will be evaluated many thousands of times during a single run of the decision analysis, and the analysis itself will typically be iteratively refined and re-run many times during a decision-making process. This means that component models are required that can be run under software control, without a graphical user interface. It is unfortunate that not all commercial modelling packages support this mode of working, but many of them, and most simulation codes developed in an academic setting, do.
Layer 2: risk analysis

Estimates of the impact of particular events are of limited interest in the context of investment decision making. A more useful measure is the statistical expectation of the impact given a joint probability density function over the space of possible events, as found by risk analysis.
If d(s, x) is the impact of event x in system s (layer 1) and f(s, x) describes the probability of occurrence of x in s, the expected impact E(s) is given by:

E(s) = ∫_X d(s, x) f(s, x) dx    (1)

In flood risk assessment, d most commonly estimates economic damage and f is an extreme value distribution over annual maxima, so the expectation E is expected annual damage (EAD). In a multi-criterion analysis the impact model d will generate a vector of values and Equation (1) will result in a vector of expected values.
The function d is available as a procedure (the implementation of an algorithm), so symbolic integration is not possible. We therefore use numerical integration to obtain an estimate of E. If x[i] is the ith member of a sample from the space of possible events X and w(s, x[i]) is a weighting function, the expected impact is estimated by:

E(s) ≈ Σ_i w(s, x[i]) d(s, x[i])    (2)

If {x[i] | i = 1 … N_i} is a sample from f, setting w(s, x[i]) = 1/N_i for all i gives 'brute force' Monte Carlo integration. In a location protected by well-maintained dikes with a 1:100 year standard of protection, around 99% of model runs in a naïve Monte Carlo analysis will be wasted.
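A minimal sketch of the 'brute force' case of Equation (2) follows, with equal weights w = 1/N. The impact function and GEV distribution parameters are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Monte Carlo estimate of expected annual damage: sample annual-maximum
# water levels from a GEV via its quantile function, evaluate a toy impact
# model for each, and average with equal weights w = 1/N.

rng = np.random.default_rng(42)

def impact(s_crest, x_level):
    """Toy impact model d(s, x): damage grows with depth of overtopping."""
    return np.maximum(0.0, x_level - s_crest) * 1e6  # £ per metre (assumed)

N = 100_000
u = rng.uniform(size=N)
loc, scale, shape = 4.0, 0.3, -0.1  # illustrative GEV parameters
# GEV quantile function for shape != 0: x = loc + scale*((-ln u)^(-shape) - 1)/shape
x = loc + scale / shape * ((-np.log(u)) ** (-shape) - 1.0)

ead = np.mean(impact(4.8, x))  # expected annual damage for a 4.8 m crest
```

With a well-protected site most sampled events cause zero damage, which is exactly the inefficiency the paper notes; a weighted sampling scheme concentrates runs on the damaging tail instead.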
Weighted sampling schemes can be much more efficient (Dawson et al.  provide a detailed discussion of sampling for flood risk analysis).
If the results of the evaluations of the impact function d are retained, they can be reprocessed to construct the full impact exceedance curve (illustrated for the example in Figure 8).
Where components of the event vector x are sampled from conditional probability distributions, a multi-step sampling process will be required.These distributions may be conditional on other components of the event vector, or on the value of an impact model variable.The probability of failure of a dike, for example, is conditional on peak water level at that dike during the event, and this level might be directly sampled (if the dike is at the coast, as in the example analysis set out below) or generated by propagating a coastal tidal cycle through a river model.
The latter situation may require that the impact model is evaluated progressively while the event vector is populated.
One relatively clean way of achieving this is to divide the impact model up into pieces such that each (except the last) returns some of the values needed in the sampling process. The risk analysis layer can then call each in turn, using the results in sampling inputs for the next. It is important that, at the end of this process, coherent end-to-end runs of the impact model can be identified for well-defined, physically meaningful events. Otherwise there will be no way to establish whether the model underlying the entire analysis is adequate.
Layer 3: simulation of long-term change

The heart of the framework, and the core of its novelty, lies in the integrated treatment of long-term change. The expected impact calculated in layer 2 is for a given system state s.
System state, however, is changing continuously, so a given s and the expected impact E(s) are snapshots.
We recognise two types of long-term change. With reference to a given flood risk management system or process, long-term change can be endogenous or exogenous.
Endogenous change is deliberately induced as part of that process, and is the subject of our decision making. Exogenous change, in contrast, is change that 'happens to' the system under management and over which the flood risk management process does not exercise control.
Exogenous change processes might include sea level rise, changes in population density and demography, economic growth and deterioration and settlement of dikes.
Being relative to the management process in which a decision is embedded, this classification is context-dependent.
Intervention functions are applied at the start of intervals. The special case t_s = 1 gives us the option of applying an intervention at the very start of the appraisal period. We handle the initial condition in Equation (6) by introducing a ghost interval t_s = 0 with an ending state s_s[0, 1] equal to s_0 prior to the first interval of the appraisal period, as illustrated in Figure 3(a).
Intervention functions are instantaneous in effect, while the exogenous change function takes effect through the interval. Correspondingly, the exogenous change function takes duration as a parameter while the intervention functions do not. While clearly not a perfect representation of reality, the assumption that interventions take effect instantaneously greatly simplifies the modelling of long-term change. Interventions that will take several years to implement can, if the timing of expenditure or benefit is likely to be significant to the results of the analysis, be broken down into parts, each implemented at a different time.
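The interval recursion described above can be sketched in a few lines of Python. The helper names and the toy state are our own illustrative assumptions; each interval applies its intervention instantaneously at the interval start, then lets the exogenous change function act over the interval's duration:

```python
def simulate(s0, interventions, exogenous, dt):
    """Return the state at the end of each interval and per-interval costs.

    interventions: one state -> (state, cost) function per interval (use a
    'do nothing' function where no action is taken); exogenous: e(s, dt) -> s.
    """
    states, costs = [], []
    s = s0                        # 'ghost interval' end state s_s[0, 1]
    for n in interventions:
        s, c = n(s)               # intervention, instantaneous at interval start
        s = exogenous(s, dt)      # exogenous change acting through the interval
        states.append(s)
        costs.append(c)
    return states, costs

# Hypothetical example: constant sea level rise plus a one-off repair.
exo = lambda s, dt: {**s, "sea_level": s["sea_level"] + 0.003 * dt}
nothing = lambda s: (s, 0.0)
repair = lambda s: ({**s, "condition": 2.0}, 10_000.0)

states, costs = simulate({"sea_level": 0.0, "condition": 4.5},
                         [nothing, repair, nothing], exo, dt=5.0)
```

The returned cost series feeds the performance evaluation directly, while risk analysis need only be run for the subset of returned states where expected impact may have changed appreciably.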
The final results of this layer of analysis are time series of cost and expected impact. Costs result directly from Equation (6), but expected impacts must be calculated from system states using the risk analysis layer.
We wish to minimise the number of system states for which a computationally expensive risk analysis is run. Finally, we generate annual costs c_y[t_y] and expected impacts ρ_y[t_y], where t_y = 1 … T_y indexes years of the appraisal period (Figure 3(c)). Costs c_y[t_y] = c_s[t_s] where t_y is the start year of interval t_s, and 0 elsewhere. Expected impacts ρ_y[t_y] are estimated by interpolating into the values ρ_s.
The formulation presented is flexible in allowing intervention frequency to be decoupled from risk analysis frequency. As always when approximating a function through interpolation, care is required to ensure that discontinuities and rapid changes of gradient are adequately captured. To capture the profile of expected impact through time, risk analysis must be run immediately before and after any intervention is applied that substantially modifies the system, and additional runs may be necessary to capture changes in gradient. The choice of when to calculate risk will require some trial and error, and an understanding of the nature of the interventions applied (routine maintenance activities applied to a fraction of the total number of flood defence assets are unlikely to generate a step change in expected impact, for example) and of any discontinuities or other rapid variation in the exogenous change functions.
Figure 4 illustrates the effect of long-term change simulation on a selection of system state parameters and expected impact in the case of the example presented below.
Layer 4: evaluation of option performance

Our analysis is conducted to support some decision-making process, the purpose of which is to choose between a number of options. This layer of the framework calculates quantitative performance metrics by comparing the costs and expected impacts of a number of 'do something' options j = 1 … N_j with the expected impacts of a base case j = 0.
As set out above, we model options as sequences of interventions through time.
For the base case and each option, starting with the same initial state s_0 and using the same exogenous change function e, we apply layer 3 to generate a time series of expected impacts and one of incurred costs. We extend the arrays ρ_y and c_y defined above over a second dimension, with ρ_y[j, t_y] and c_y[j, t_y] being indexed by both option and year in the appraisal period.
Calculating option performance then involves some combination of weighting, aggregation and comparison.
Equations (8)-(10) give the general pattern. A positive value of benefit B represents an improvement between the base case and an option, while deterioration will result in a negative benefit. For multi-criterion analysis the array ρ_y[j, t_y] becomes an array of vectors of expected impacts and the operators Agg_b and Comp will process these into vector performance measures P[j].
We obtain Equation (11) from Equations (8) to (10) by making the following substitutions. The term (1 − r)^(t−1) in Equation (12) is the discount factor in year t.
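The discounting and aggregation step can be sketched as follows. This is not a reproduction of the paper's Equations (8)-(12), only a sketch consistent with the stated discount factor (1 − r)^(t−1); the function and argument names are our own:

```python
# NPV of an option relative to the base case: annual benefit is the base-case
# expected impact minus the option's expected impact, net of that year's cost,
# discounted by (1 - r)^(t - 1) and summed over the appraisal period.

def npv(base_impacts, option_impacts, option_costs, r):
    """Net present value of an option relative to the base case."""
    total = 0.0
    for t, (rho0, rho, c) in enumerate(
            zip(base_impacts, option_impacts, option_costs), start=1):
        factor = (1.0 - r) ** (t - 1)          # discount factor in year t
        total += factor * ((rho0 - rho) - c)   # discounted net benefit
    return total
```

A positive NPV indicates an improvement over the base case, matching the sign convention for the benefit B above.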
Layer 5: uncertainty analysis

Layers 1-4 of the framework define a fully integrated, risk-based option performance estimator. Used with a scalar (single criterion) performance metric, this estimator will provide an unambiguous ranking of options; however, it can only do so under the assumption that the state of the system now and the processes of change that are driving its evolution are perfectly known and modelled. Since this is not the case, we add a fifth and final layer to the framework in which we address epistemic uncertainty (uncertainty arising from lack of knowledge, Hall & Solomatine (), in contrast with aleatory uncertainty, which derives from natural variability and is accounted for in the layer 2 risk analysis).
A variety of techniques might be implemented in this layer of the framework. These generally conform to a pattern of generating a sample of initial system state vectors, evaluating the performance of each option for each member of the sample, and analysing the resulting distributions of performance. Careful analysis of robustness analysis results can suggest ways of modifying options to improve their robustness, making this particularly interesting as an option design tool.
Certain types of uncertainty, especially when dealing with the far future, are best captured using scenarios, which are readily accommodated by this framework.If uncertainty is represented solely using scenarios then each scenario simply generates an initial system state vector, and a performance is calculated for each option in each scenario.Scenarios may also be used in combination with other representations of uncertainty.
These are just some examples of the many analyses that become possible at this level of the framework once an integrated risk-based performance estimator is available.
Further subdividing this layer is also possible, as for example in the use of a heuristic search method such as a genetic algorithm to explore the option space for robust options.
Uncertainty analysis can be framed to answer questions about the analysis or about the system and options being studied.It is important to be clear when conducting such analysis what type of question is being asked, and to consider carefully the extent to which a computational experiment applied to a model can be informative about the behaviour of a real system.
We have structured this framework to enable the consideration of uncertainty in option performance.Modellers must continue to use all the techniques available to them to ensure the quality and appropriateness of the component models used.Uncertainty and sensitivity analysis techniques are among these and can profitably be deployed against many subcomponents of the framework described, including the impact model, its components and the risk analysis built around it.

INTERVENTION FUNCTIONS

Intervention function families
Many interventions occur in families.An example is a 'raise dike crest level to x m AOD' intervention, where each value for the (real-valued) parameter x generates a different intervention.We model these as higher-order functions (functions that take or, as in this case, return other functions).
An application of the higher-order function m to an appropriate parameter, such as m(5.0), results in a particular intervention function. By Equation (15), the result of this application has the same form as given for an intervention function n in Equation (4) above, and m(5.0)(s) is then an application of that particular intervention function to s.
The 'do nothing' intervention

The formulation of Equation (6) demands exactly one intervention per interval t_s. In order to accommodate the possibility of no intervention we introduce a 'do nothing' intervention n_0, the result of which is the unmodified system state and a cost of zero.

Intervention function composition
If more than one intervention is to be applied at the same time, a composite must be defined. This is similar to normal function composition, but because the return value of an intervention function is a (state, cost) pair, the standard function composition operator cannot be used. We introduce the intervention function composition operator ⊙ for this purpose: applying n_2 ⊙ n_1 to a state s applies n_1 to s, applies n_2 to the resulting state, and returns the final state together with the sum of the two costs.
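The composition operator ⊙ can be sketched directly; the function names are ours, and the semantics shown (thread the state through, sum the costs) follow the description above:

```python
# Compose two interventions into one: apply n1 first, feed its resulting
# state to n2, and return the final state with the summed costs.

def compose(n2, n1):
    """Return the intervention equivalent to n1 followed by n2 (n2 ⊙ n1)."""
    def composed(s):
        s1, c1 = n1(s)
        s2, c2 = n2(s1)
        return s2, c1 + c2
    return composed
```

Because the composite has the same state-to-(state, cost) signature as any other intervention, it can be placed in an option sequence or composed again without special handling.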

EXAMPLE ANALYSIS

Hypothetical situation
To illustrate the framework we consider a simple hypothetical situation (Figure 4) of a coastal location at risk of flooding from extreme tide events, for example resulting from the combination of a high spring tide with a storm surge. On the decadal time scale of the appraisal period, relative mean sea level is increasing, the condition of the dike is deteriorating and the economy is growing. These processes are altering the state of the system and thus changing the probability and consequence of flooding.
Relative sea level rise leads to increased frequency of overtopping and increased flooding in an event of a given frequency. Dike deterioration increases the probability of breaching during an event. The decision makers wish to compare options in terms of their costs and benefits. They choose to conduct a normal economic analysis in which the performance of an option (a sequence of interventions) is taken as its NPV of reduction in EAD relative to a 'do nothing' base case.

Impact model
In a real analysis, s will be very large. As an analysis is developed, it is important to clearly identify the role or roles of each variable. In Table 1 we show for each variable whether it is used in the impact model, the risk analysis, the exogenous change function and the intervention functions.
We also indicate whether a variable is modified or read by the exogenous change and intervention functions.In a more complex analysis, indicating which intervention functions use each variable might also be valuable.
We treat dike breaching rather simplistically for the purposes of this demonstration. We assume that if a breach occurs (x_d = 1), it will occur when the tide is at its peak level, such that the load on the dike and the rate of flow over it are at their maxima, and it will then develop instantaneously to its final dimensions. These breach dimensions are assumed to be deterministic functions of the dike dimensions (dike crest level and length and ground level behind the dike) and the peak tide water level during the event.
A complete tide hydrograph is estimated by scaling a typical tide hydrograph to match the peak surge tide level.
The volume of water that would enter the floodplain during an event is estimated as the sum of the volumes entering over the breached and unbreached lengths of dike.The dike is assumed to act as a broad-crested weir, with the weir crest taken as the dike crest level or breach invert level as appropriate.
The water depth in the floodplain is found by dividing the total volume entering the floodplain by the floodplain area, the floodplain being treated as a flat-bottomed, vertical-walled basin. Property damage is then estimated from this depth by linear interpolation into a depth-damage curve.
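The simplified impact model described in this section can be sketched as follows. The weir coefficient, hydrograph, curve values and time step are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy flood impact model: broad-crested weir flow over breached and
# unbreached dike lengths, a flat-bottomed vertical-walled basin for the
# floodplain, and linear interpolation into a depth-damage curve.

G = 9.81
CD = 1.7  # broad-crested weir coefficient (assumed, ~1.7 m^0.5/s)

def weir_volume(levels, crest, length, dt):
    """Inflow volume over a weir for a tide hydrograph sampled every dt seconds."""
    head = np.maximum(0.0, levels - crest)
    q = CD * length * head ** 1.5            # discharge at each time step (m^3/s)
    return np.sum(q) * dt

def flood_damage(levels, crest, breach_invert, breach_len, dike_len,
                 floodplain_area, depth_curve, damage_curve, dt=600.0):
    """Estimate property damage for one event (breach dimensions given)."""
    vol = (weir_volume(levels, crest, dike_len - breach_len, dt)
           + weir_volume(levels, breach_invert, breach_len, dt))
    depth = vol / floodplain_area            # flat-bottomed basin assumption
    return np.interp(depth, depth_curve, damage_curve)
```

Run once per sampled event, this plays the role of d(s, x) in the risk analysis layer.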

Risk analysis
Events are defined on a grid of values over the space of load x_l and dike condition s_q. Dike condition s_q is indicated by a value in the range s_q ∈ [1, 5], where s_q = 1 indicates perfect condition and s_q = 5.0 very poor condition. A dike with condition s_q = 5.0 will fail with high probability under even quite low loads, while condition s_q = 1.0 indicates that the dike will not fail with significant probability until the water level exceeds crest level (the failure being triggered by erosion during overtopping).
Since the impact model d(s, x) generates estimates of direct property damage and the x_l[i] are drawn from a Generalised Extreme Value distribution over annual maxima, the result of Equation (17) is EAD.

LONG-TERM CHANGE

Exogenous change
The exogenous changes considered in the example are sea level rise, dike deterioration and economic growth. The variables modified by and used by the exogenous change function in the example are indicated in the 'Exogenous change' column of Table 1. The exogenous change function is given by Equation (18).
e(s, Δ) = [s_q = s′_q, s_d1 = s′_d1, …]    (18)

where sea level rise modifies the location parameter s_el of the GEV distribution over annual maxima of peak tide water level. A constant rate of rise s_rs throughout the appraisal period is assumed. Dike deterioration results in dike condition s_q worsening over time at a constant rate s_rd. Economic growth takes place at a constant compound rate s_rg. The effect of economic growth is to cause the value of assets at risk in the floodplain to increase, as reflected in the damage caused by a given depth of water. This increase is assumed to be evenly distributed over the depth/damage curve.
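A minimal sketch of this exogenous change function follows. The dictionary field names mirror the paper's symbols but the representation is our own choice, and the cap at s_q = 5.0 is an assumption:

```python
# e(s, delta): constant-rate sea level rise on the GEV location parameter,
# constant-rate dike deterioration (s_q grows towards 5.0 = very poor), and
# compound economic growth scaling the damage curve uniformly.

def exogenous(s, delta):
    """Return a new state after delta years of exogenous change."""
    s = dict(s)                                            # don't mutate input
    s["s_el"] = s["s_el"] + s["s_rs"] * delta              # sea level rise
    s["s_q"] = min(5.0, s["s_q"] + s["s_rd"] * delta)      # deterioration
    growth = (1.0 + s["s_rg"]) ** delta                    # compound growth
    s["damage_curve"] = [d * growth for d in s["damage_curve"]]
    return s
```

Returning a fresh state rather than mutating the input keeps the function usable inside the layer-3 recursion, where earlier states may need to be revisited.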

Interventions
Three types of 'do something' intervention are considered.
The variables modified by and used by the intervention functions in the example are indicated in the 'Intervention' column of Table 1. Table 2 shows the example interventions and intervention families.
Dike repair n_r models inspection-triggered repairs to the surface of the dike that reduce the probability of breach during overtopping. If the condition of the dike is s_q < 4.0, the intervention does nothing (corresponding with an inspection report indicating that repair work is not needed). Otherwise, the condition of the dike is reset to s_q = 2.0, making the dike less likely to breach during an event. The cost of repair is a function of the height of the dike, its starting condition and the repair cost system state variables s_crm, s_crl and s_crs.
Crest level raising improves dike condition as a side effect of the extensive reconstruction involved, so it is not necessary to compose it with repair; because of the conditional nature of the repair intervention, doing so would in any case have no effect.
The options considered are set out in Table 3. The effects of long-term change simulation on a selection of system state variables and EAD for the base case and three options are illustrated in Figure 5.

Uncertainty analysis
For illustrative purposes we apply forward propagation of uncertainty.The 'Value' column in Table 1 shows the uncertain variables and the probability density functions used to characterise our uncertainty regarding their true values.
Not all variables are treated as being uncertain; Table 1 shows a single value for those that are not.
We assume that these uncertainties are fully independent. The sampling algorithm would be complicated by the existence of dependence, but no other part of the process would be affected. Given the assumption of independence, we can sample from each distribution separately.
We build a sample of initial system state vectors {s_0[k] : k = 1 … n} from the distributions given in Table 1 and run the performance estimator for each s_0[k]. This results in an array of NPVs P[k, j], one for each member of the sample from epistemic uncertainty and for each option. Points on the cumulative density function over performance can be estimated for each option j by sorting the P[k, j] (separately for each option j) into ascending order and taking quantiles. The probability density function can be estimated by building a histogram (an approximate probability density function) or, as here, using a kernel density estimator. These estimates become more accurate with increasing sample size, and smoother but less precise with a decreasing number of quantiles.
Table 2 lists the interventions used in the example:
• Repair dike if condition s_q is 4.0 or worse: n_r(s) = ⟨s′, c⟩ if s_q ≥ 4.0; ⟨s, 0⟩ otherwise.
• Rebuild dike with crest level x (if the crest level is already higher, don't change it); defence condition is raised to s_q = 1.0: m_c(x)(s) = ⟨s′, c⟩ if x > s_t; ⟨s, 0⟩ otherwise.
• Flood proof property; flood proofing is applied to proportion x of total property or the proportion not already proofed, whichever is higher: m_p(x)(s) = ⟨s′, c⟩ if x > 0.0; ⟨s, 0⟩ otherwise.
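The forward propagation of epistemic uncertainty described above can be sketched as follows. The marginal distributions, option set and the stand-in performance estimator are all hypothetical, chosen only to show the shape of the computation:

```python
import numpy as np

# Layer-5 forward propagation: sample n initial states from independent
# marginals, evaluate a toy performance estimator for each option, then
# summarise the NPV sample with quantiles per option.

rng = np.random.default_rng(0)
n = 1000

# Independent marginals for two uncertain initial-state variables (assumed):
crest = rng.normal(5.0, 0.1, size=n)        # initial crest level (m AOD)
rise = rng.uniform(0.002, 0.006, size=n)    # sea level rise rate (m/yr)

def performance(raise_by, crest, rise):
    """Toy stand-in for the layers 1-4 performance estimator (NPV)."""
    benefit = (raise_by + (crest - 5.0) - 50.0 * rise) * 1e5
    cost = raise_by * 2e5
    return benefit - cost

options = [0.0, 0.5, 1.0]                   # crest-raising options (m)
P = np.array([performance(j, crest, rise) for j in options])  # (options, n)

# Quantiles of performance for each option (rows: 5%, 50%, 95%):
q = np.quantile(P, [0.05, 0.5, 0.95], axis=1)
```

In a full analysis the inner call would invoke the complete layers 1-4 pipeline, which is precisely why sample sizes and run counts multiply so quickly.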

Example implementation
The example described has been implemented using the latest prototype of Reframe, a web-based tool for collaborative development of computational analyses (Harvey & Hall; Harvey et al.). Reframe can be found at http://reframe.org/; this analysis is included as an example application. Reframe allows users to build up calculations using example data in worksheets, then to convert these worksheets into reusable functions. Worksheets can also contain visualisations of data (Figure 5 is a screenshot from the Reframe user interface).
Reframe provides strong support for working with nested multi-dimensional arrays. This support is inspired by the APL programming language family (see the Wikipedia page on APL for an introduction: http://goo.gl/w7gu), especially J (http://www.jsoftware.com/), and by the NumPy extensions (http://numpy.scipy.org/) to the Python language (the prototype of Reframe is implemented in Python and NumPy). The nested multi-dimensional array is a flexible model of data (More) with much promise for spatial decision analysis applications.
In common with other array languages, many operations over arrays can be expressed in Reframe without explicit flow control structures such as loops, and array dimensions are automatically 'broadcast' (dimensions in function parameters that are not handled explicitly by the function are propagated into the output).
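Broadcasting can be illustrated with NumPy, on which the Reframe prototype is built; the figures below are arbitrary illustrative numbers, not results from the example analysis.

```python
import numpy as np

# Expected annual damage for 3 options over 4 appraisal years: shape (3, 4)
ead = np.array([[9.0, 9.5, 10.0, 10.5],
                [9.0, 6.0,  6.2,  6.4],
                [9.0, 4.0,  4.1,  4.2]])

# Discount factors for each year: shape (4,)
factors = 1.0 / 1.035 ** np.arange(4)

# The year dimension broadcasts across the option dimension: no explicit loop
pv = ead * factors              # shape (3, 4)
pv_total = pv.sum(axis=1)       # aggregate over time: one value per option
```

The per-year factors are applied to every option without a loop over either dimension, which is the style of expression the framework's layered dimensions (∀j, ∀k, …) rely on.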
Reframe also provides some simple support for functional programming. It is possible to construct data sets containing functions and to pass these as parameters to other functions. This enables a rather direct implementation of the long-term change model as set out in this paper, in which options are specified as an array of intervention functions.
These features support the expression of calculations in a manner that is compact and reusable; the complete analysis is expressed in a number of operations, each comparable to a typical programming language statement. This remarkable density highlights both the parsimony of the framework and the fact that the abstractions provided by Reframe are tuned to this kind of analysis.
These abstractions lend themselves to the graphical presentation of computations, a feature that is valuable in designing an analysis and in communicating its structure.
The structure of the implementation in Reframe follows the framework closely. Layers 2-5 of the framework each introduce a dimension to the analysis, which is propagated through the inner calculation. This is indicated by the labels ∀i, ∀j, etc. at the lower right-hand corner of the dashed boxes in Figure 2. The performance layer is evaluated ∀k, the long-term change layer ∀k, j, and so on. As a result, the innermost datasets are defined over each of the dimensions in the analysis; Figure 2 indicates this by listing these implied dimensions in grey.
Operations work over particular dimensions, in which case these dimensions are indicated on the (oval) node for the operation in Figure 2. The expected impact, for example, is found by taking a weighted sum over i, while costs and impacts are aggregated over t (through time).

Example results
Figure 7 shows the main result of the example analysis, a probability distribution over NPV Risk Reduction for an illustrative selection of options.
Figure 6 shows a single state variable (dike condition) in possible futures with different rates of dike deterioration.
Dike repair takes place at different times in the different futures, an effect achieved using a state-triggered intervention function.
The full impact exceedance curve behind the expected impact estimate for a given system state can provide valuable information. In Figure 8 we see that flood proofing provides a progressive increase in damage reduction up to a maximum, which then pertains at all higher return periods. Dike crest level raising eliminates damage at lower return periods without substantially altering the behaviour of extreme events.

THE PROCESS OF DECISION ANALYSIS DEVELOPMENT
The development of a decision analysis takes place within a broader decision-making process. Initially, those involved will have only a vague idea of the critical parameters of system behaviour, the benefit/cost characteristics of interventions and the space of available options. Even the performance criteria to be used to assess options may be subject to negotiation and refinement. Both the analysis development and the broader process should proceed iteratively, each informing and guiding the other at every stage.
The purpose of the broader decision-making process is not simply to select from a pre-specified option set. Rather, it is to develop an understanding of the behaviour of the system, identify shortcomings in that behaviour and design possible solutions. This notion of active design of options is captured in the term 'optioneering' (see, for example, Makropoulos et al.). The discipline of working from the outset with a complete analysis that is framed in terms of option performance and follows a clearly specified framework has a number of benefits.
• Directs focus from the outset to the decision and the options. Ensures that a model is developed that can represent the impacts of interest and is sufficiently flexible to represent the range of management interventions to be considered. Avoids prejudicing the decision-making process against interventions that cannot be adequately modelled.
• Ensures that a complete decision analysis is possible.
Focusing first on developing an impact or risk model without consideration of the whole framework can lead to problems in implementing higher framework layers.
Oversights in the design or implementation of the risk analysis engine, for example, can make uncertainty analysis difficult or impossible. Iterative refinement of a complete analysis exposes such problems early, when they are most easily corrected.
• Maximises relevant learning (about the decision, the options and the capabilities of the overall analysis) from early iterations. The earlier lessons are learned, the better that learning can be incorporated into the final analysis and taken into account in the decision-making process.
• Avoids unnecessary expense, such as expenditure on further refining a model that is already capable of distinguishing between the options to be considered.
• Enables development to be targeted where it is most likely to improve the final decision.Avoids excess investment in non-critical components (the critical components are rarely obvious and will change at each iteration).
At the end of each development iteration, and assuming that the analysis engine is implemented on appropriate infrastructure (implementation issues are discussed below), options can be specified simply by identifying which interventions to apply when.Introducing new types of intervention will often require that the impact model be modified, and defining intervention functions will in any case require deep understanding of how system state is represented and used in the impact model.Given a library of intervention functions, however, the option space can be explored without requiring further input from model developers.

OPERATIONAL ISSUES
A real decision analysis will be more elaborate than this example, but the framework is highly modular and the additional complexity is localised in the impact model, the exogenous change and intervention functions and the uncertainty analysis layer. None of these changes requires altering the overall structure, which remains applicable regardless of the complexity of the system being modelled and of the models themselves.
Although the framework is conceptually simple and its structure valid across many situations and decision problems, its implementation for a full-scale analysis will raise non-trivial operational issues. The layers of sampling and simulation generate a combinatorial explosion of impact model runs, resulting in punishing computational demands.
In the example analysis this problem was mitigated by the simplicity of the impact model, but a real impact model will be considerably more computationally complex.
The analysis also generates large volumes of data. The balance between keeping intermediate results and discarding them is delicate, as confidence in the results of analysis is undermined if it is not possible to 'drill down' to understand how they were generated. It is only at the level of individual impact model runs that results relate directly to physical processes. The impact model contains the assumptions of an otherwise deductive framework, and it is critical that these assumptions can be tested by examination.
Storage is cheaper than ever and the calculation is 'embarrassingly parallel' (in that thousands of entirely independent simulations must be run) and so lends itself to solution using a cluster of computers.Nonetheless, computer resources remain a limiting constraint.The episodic nature of decision-making processes exacerbates the problem.In order to support the iterative process of analysis development and option design outlined above, the analysis must run in the shortest possible time.The larger the cluster on which the analysis will run, the quicker it will finish, but an oversized cluster will sit idle more of the time, increasing the effective cost of the work it does do.Even then, compromises may be required to keep the cost of computer time and data storage down.In a given computer time budget a choice may be necessary between increasing the veracity of the impact modelling and exploring a broader space of scenarios and options.Similarly, we may find that we cannot afford to keep all intermediate data, which runs to several gigabytes even for the simple analysis described above.
This framework demands full automation of the decision analysis computation, from individual simulations up to uncertainty analysis. Previous approaches to this kind of analysis have relied on manual intervention at various levels, for example in the construction of data sets representing system states in different futures. As with any manual data manipulation, this increases the risk of error.
It is also time consuming, severely limiting the number of options that can be explored and making rigorous uncertainty analysis prohibitively expensive.
At the bottom level of the framework a very large number of simulations will be run with a wide range of models and boundary conditions.The failure of even one of these simulations to run to completion may invalidate the whole analysis.Modellers are used to manipulating individual models, diagnosing and working around problems encountered by the simulation engine on a case by case basis.This approach does not scale, however, and a new focus is required on ensuring stability across a range of conditions.
We hope that modelling systems will evolve to: provide user interfaces that assist the user in developing robust families of models; provide a simulation engine that runs without a graphical user interface and can be reliably controlled by other software (all output, including diagnostics, being presented in a consistent and readily software-readable format); and be made available under licensing conditions, and with licence control devices, that allow cost-effective deployment of the simulation engine on compute clusters.

VARIANTS
In the framework as described, the configuration of the impact estimation and risk analysis layers is based on the assumption of an event-based impact model. This approach is appropriate for typical applications in flood risk management. The sort of integrated decision analysis described has a wide range of possible applications, however, and event-based risk analysis is not as well suited to all of these.
In general, the event-based approach is adequate if the following conditions are met.
1. Both the driving conditions and the impacts of interest can be readily interpreted as events, and the impacts relate to individual events, not sequences.
2. Events are of short duration relative to the units into which appraisal time is discretised.
3. The effect of antecedent conditions can be disregarded.
Much tidal and fluvial flooding of defended urban areas satisfies these criteria reasonably well. High water levels last for hours or days and recur with return periods measured in decades to centuries. Recovery can take up to a few years, but the probability of occurrence of a further flood while recovery is under way is low, and the error introduced by disregarding this possibility is small.
Where these conditions are not met, a continuous simulation approach is likely to be needed. Two approaches to continuous simulation are possible. One is to run simulations for long periods with a stationary system state. As in the event-based formulation presented above, natural variability and long-term change are handled orthogonally.
This can be implemented within the framework by modifying only the innermost two layers. The interface to the risk analysis layer will remain the same: a system state is passed in and an expected impact returned. Rather than conducting an event-based numerical integration, the risk analysis layer will generate a long input series to the continuous simulation model and process the long output series to estimate expected impacts. Other variants are possible, but, as with those just discussed, these will share a great deal in common with the framework as presented. Risk analysis offers a rigorous approach to reasoning about natural variability, and any strategic investment decision must take account of both long-term change and uncertainty. The approach we present to handling these in an integrated decision analysis is broadly applicable.
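A minimal sketch of this variant's risk analysis layer follows. The one-line 'continuous simulation', the state representation and the Gaussian annual-maximum driving series are all illustrative assumptions standing in for a real simulation model; only the interface (state in, expected impact out) follows the framework.

```python
import random

random.seed(1)

def expected_impact(state, years=10_000):
    """Same interface as the event-based risk layer: a system state is
    passed in and an expected (annual) impact is returned."""
    # Generate a long synthetic driving series under the stationary state
    series = [random.gauss(state["level_mean"], state["level_sd"])
              for _ in range(years)]
    # Toy 'continuous simulation': damage whenever level exceeds the crest
    impacts = [max(0.0, (h - state["crest"]) * state["damage_rate"])
               for h in series]
    # Process the long output series into an expected impact
    return sum(impacts) / years

state = {"level_mean": 4.0, "level_sd": 0.3, "crest": 4.5, "damage_rate": 100.0}
ead = expected_impact(state)
```

Because the interface is unchanged, the outer layers (long-term change, performance, uncertainty) are unaffected by the switch from event-based integration to continuous simulation.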

CONCLUSIONS
We have presented a decision analysis framework for use in flood risk management decision making and provided a demonstration application of the framework to a complete decision analysis for a simple hypothetical flood risk management decision.
The novel features of the framework relate to two things.
We construct a fully integrated option performance estimator. Options can be defined that generate different patterns of intervention in different members of the sample from epistemic uncertainty, allowing management policies to be represented as well as fixed sequences of intervention.
Our goal is not only to move decision analysis to a clearer, more comprehensive and more rigorous basis, but also to enable a more iterative, exploratory approach to option design and analysis, leading to better, more defensible decisions. Given this, together with the close alignment with current practice and trends, we believe that the framework is well positioned for adoption in larger projects, where both the problems addressed and the resources available are greatest. Efficient implementation for larger numbers of smaller projects will follow as experience is gained and supporting tools are developed.
The analysis structure we propose is in some ways specific to the event-based impact modelling and risk analysis appropriate for flood risk management, and was developed as a clarification, refinement and extension of existing practice.
We have discussed a number of variant frameworks, however, and these variants share the same principles and significant structure.The proposed approach to long-term change modelling, in particular, is broadly applicable.

Notation. ⟨…⟩ denotes a tuple. s⟨…/…⟩ denotes the state s modified by the substitutions indicated to the left of the /; where state variables appear in expressions on the right of an equals sign, they take the value before substitution, in the manner familiar from programming languages. ∼N(μ, σ): normal probability distribution with mean μ and standard deviation σ. ∼Tri(a, b, c): triangular probability distribution with minimum a, mode b and maximum c. ∼U(a, b): uniform probability distribution from minimum a to maximum b. ⊙: composition operator for intervention functions. Agg: aggregation operator, e.g. ∑ (sum) or ∏ (product). b: bounds index; b = 1 is the start of the interval, b = 2 the end.

Natural variability, uncertainty and long-term change are closely connected. To date, however, only a few studies have tackled more than one of them at once. Two examples from the United Kingdom are the Foresight Flood and Coastal Defence study (Evans et al.) and the UK Environment Agency's recent strategic planning project, Thames Estuary 2100 (Environment Agency a). The former explored the impact on flood risk of scenarios of climate and socio-economic change but applied only limited uncertainty analysis and did not set out to explore management options. The latter examined the evolution of risk through the coming century under a selection of management options; while uncertainty and sensitivity analysis were conducted as part of the project (Environment Agency b, c; Hall & Harvey, in press), they were applied to individual risk estimates rather than being propagated through to the performance metrics upon which decisions were based.
The analysis is implemented using the latest prototype of the Reframe tool for web-based data analysis and visualisation (Harvey et al.), which was developed by the authors to address this problem. This paper is organised as follows. We first present the framework, working from the more familiar level of impact modelling, through risk analysis, long-term change simulation and option performance estimation, to uncertainty analysis, finally providing some further details on the modelling of management interventions as state-transforming functions.

Figure 1 | The layered structure of the decision analysis framework.

Figure 2 | Bipartite data flow graph of data sets (rectangular boxes) and operations (ovals) superimposed on the layered structure from Figure 1.
Methods of flood risk analysis are documented in more detail in the literature (Sayers et al.; Dawson; Hall et al.; Dawson et al.; Gouldby et al.).
It consists of sequences of management interventions made in the interests of managing flood risk. Such interventions might include dike repair, raising of dike crest levels, flood proofing of houses, implementation of flood warning and changes to planning regulations.
In the development of a national strategy, possible changes to planning law may reasonably be regarded as part of the set of possible management interventions, and thus endogenous. At a more local level, planning law is something to be complied with, and if changes to such law are anticipated then that change is exogenous. Management interventions and exogenous change are modelled as functions. If S is the space of possible system states, s, s′ ∈ S are the system states before and after the intervention is implemented respectively and c is the (positive real-valued) cost of implementing the intervention in the context of system state s, then an intervention function n has the form:

n : S → S × ℝ⁺, ⟨s′, c⟩ = n(s) (4)

Each intervention function embeds a cost model that has access to the prior state of the system. The cost of applying a 'raise crest level' intervention may vary depending on the current crest level and condition of the dike. System state variables may include cost model parameters such as unit rates, which can then be subject to long-term change and uncertainty. The effect of an intervention may also be influenced by the prior state of the system. A 'repair dike' intervention might model inspection-triggered maintenance by altering the system state (and incurring a cost) only if the dike condition is worse than some trigger level. If s, s′ ∈ S are the system states before and after exogenous change has acted for Δ years, the exogenous change function has the form:

e : S × ℕ → S, s′ = e(s, Δ) (5)

Exogenous change can be discontinuous. The duration parameter Δ to the exogenous change function e is defined here as belonging to the natural numbers ℕ. This restricts the external interface of e but not its implementation. If an exogenous change process is implemented by indexing into an externally generated data set, values need only be provided for integer indices. On the other hand, if a process can only be adequately represented by simulation in continuous time, this is also possible. If absolute time is needed, as it might be when indexing into a pre-computed time series, then it is maintained as a system state variable. The components of the exogenous change function e will be only as complex as is necessary to allow uncertainty in the current state and future evolution of the relevant phenomena to be explored. We may be concerned about the effects of sea level rise, for example, but while this is partly caused by thermal expansion of the oceans, we do not need to embed a general circulation model and downscaling apparatus in our exogenous change function. Instead, we use a simple parameterisation of the rise in mean sea level through time, based on the published results of more detailed model and empirical studies. In order to balance accuracy with computational cost, we work with three distinct discretisations of appraisal time. These discretisations are illustrated in Figure 3; each is introduced as needed in the following. A future is fully defined by an initial state s_0, a discretisation of the appraisal period into a sequence of consecutive intervals t_s = 1…T_s of durations Δ_s[t_s], an exogenous change function e and an array of intervention functions n_s[t_s]. Long-term change simulation is enacted by a function z, resulting in an array of system states s_s[t_s, b] and one of costs c_s[t_s], where b indexes the start (b = 1) and end (b = 2) of an interval. The subscript s, for 'state simulation', is used consistently to highlight the relationship of these arrays with the intervals t_s.

Long-term change simulation proceeds as illustrated in Figure 3(a): beginning with the initial state, the intervention and exogenous change functions are applied alternately.
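The alternating loop can be sketched as follows. This is a simplified reading of the z function: the dictionary state representation, the toy components and the order of application within an interval are illustrative assumptions.

```python
def simulate(s0, e, interventions, durations):
    """Long-term change simulation: returns system states at the start
    and end of each interval, plus the cost incurred in each interval."""
    states, costs = [], []
    s = s0
    for n, dt in zip(interventions, durations):
        start = s
        s = e(s, dt)            # exogenous change acts over the interval
        s, c = n(s)             # intervention applied at the interval bound
        states.append((start, s))
        costs.append(c)
    return states, costs

# Toy components: sea level rise and a 'raise crest' intervention family
def e(s, dt):
    return dict(s, sea_level=s["sea_level"] + 0.004 * dt)

def raise_crest(target):
    def n(s):
        if s["crest"] < target:     # no change if already high enough
            return dict(s, crest=target), 1000.0 * (target - s["crest"])
        return s, 0.0
    return n

states, costs = simulate({"sea_level": 0.0, "crest": 5.0}, e,
                         [raise_crest(5.0), raise_crest(5.5)], [5, 5])
```

An option is simply the array of intervention functions passed in; the `(start, end)` pairs correspond to the interval bounds b = 1, 2 of the arrays s_s[t_s, b].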

Figure 3 | Three distinct discretisations of appraisal time are used in the framework. Circles indicate nodes at which data are computed. (a) Long-term change simulation divides the appraisal period into intervals t_s, estimating system state at each bound of each interval by alternate application of the exogenous change function e and intervention functions n_s[t_s]. (b) Intervals from (a) are merged and risk analysis is conducted at the bounds of the merged intervals t_ρ. (c) Interpolation is used to estimate expected impacts for each year t_y in the appraisal period.
P[j] = Comp(B[j], C[j]) (8)

where

B[j] = Agg_b over t_y of w_b(t_y, ρ_y[j, t_y] − ρ_y[0, t_y]) (9)

C[j] = Agg_c over t_y of w_c(t_y, c_y[j, t_y]) (10)

B[j] and C[j] are aggregated benefit and cost respectively for option j, Comp is a comparison function, Agg_b and Agg_c are aggregation operators and w_b, w_c are weighting functions.
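With discounting as the weighting functions, summation as the aggregation operators and subtraction as Comp, Equations (8)-(10) reduce to a standard NPV calculation. A minimal sketch follows; the discount rate, the illustrative series and the sign convention (benefit taken as base-case risk minus option risk) are assumptions for the example.

```python
def npv(risk_base, risk_option, cost_option, rate=0.035):
    """P[j] = Comp(B[j], C[j]) with Agg = sum, w = discounting,
    Comp = subtraction; risks and costs are per-year series."""
    def w(t, x):
        return x / (1.0 + rate) ** t          # discount to present value
    B = sum(w(t, rb - ro)                     # benefit: annual risk reduction
            for t, (rb, ro) in enumerate(zip(risk_base, risk_option)))
    C = sum(w(t, c) for t, c in enumerate(cost_option))
    return B - C

# Option halves the risk from year 1 for an up-front cost in year 0
base = [100.0] * 4
option = [100.0, 50.0, 50.0, 50.0]
p = npv(base, option, [60.0, 0.0, 0.0, 0.0])
```

Other choices of Comp (e.g. benefit/cost ratio) and of weighting functions fit the same three-line structure.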
Uncertainty analysis consists of constructing a sample of alternative initial state vectors, running the performance estimator for each member of that sample, and processing the results into a form that provides insight into the behaviour of options. The nature of sampling and post-processing will vary between techniques. The alternative initial state vectors s_0[k] encapsulate all parameters of the analysis, which include the initial configuration of the system, parameters of the exogenous change model and cost model parameters. Uncertainty analysis can therefore explore uncertainty in all aspects of option performance and the evolution of system state through time. As noted in the discussion of long-term change simulation, interventions, and thus options, may behave differently in the different futures generated by uncertainty analysis. The example analysis set out below demonstrates feed-forward propagation of uncertainty, in which probability distributions on option performance are constructed by running the option performance estimator for each member of a pseudo-random sample from distributions over input parameter values. The results indicate the degree of confidence that can be justified in estimates of option performance and may, as in this case, indicate that a unique ranking of options in terms of performance is not possible. Reprocessing of the sample of performance estimates may enable the analyst to establish the conditions under which option rankings differ. Sensitivity analysis, in particular variance-based global sensitivity analysis (VBSA) methods (Saltelli et al.), can be used to establish which input uncertainties contribute most to output uncertainty. This information is invaluable in allocating resources where they will most efficiently reduce variance in the results of the feed-forward uncertainty analysis, thereby improving our ability to make a decision. VBSA uses a space-covering quasi-random sampling method coupled with a variance decomposition post-processing step. Robustness analysis methods, such as Info-gap (Ben-Haim), help to identify options that perform acceptably well over a wide range of possible conditions. Again, Info-gap analysis is implemented by propagating a sample and running a simple post-process over the result. Hall & Harvey describe the application of Info-gap robustness analysis in a flood risk management context, implemented using a precursor to the framework described in this paper.

Conditionality
Intervention functions can operate conditionally on the values of system state variables. Structuring intervention functions this way allows options to behave differently in different realisations of future uncertainty. This is illustrated in the example below by the 'repair dike' intervention, which has no effect unless the condition of the dike is below a threshold, simulating the process of inspection-driven maintenance.

Interaction between exogenous change and interventions
Exogenous change processes and interventions can interact by altering or depending on the same state variables. A simple example is the interaction of dike deterioration (exogenous change) and repair (intervention), both of which alter state parameters representing dike condition. This is illustrated in the example below. Change processes that cannot be allocated to one or other category should be subdivided into interacting components, one or more in each category. In a regional strategic planning exercise, land use change is not fully under the control of the flood risk management process, but nor is it entirely exogenous. It can be separated into three interacting parts, however, each of which is easily categorised. Regional population trends and associated pressures are modelled as exogenous processes. Planning regulations relating to floodplain development are represented in the system state and can be modified by management interventions. A further exogenous change process then translates regional trends into local land use changes, taking the planning regulations into account. A model of this nature is described by Hall et al.
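The deterioration/repair interaction can be sketched in Python. The condition scale here is a hypothetical one in which grade 1.0 is best and repair triggers at grade 4.0 (matching the example's trigger level); the rates and costs are illustrative only.

```python
def deteriorate(s, dt, rate=0.3):
    # Exogenous change: condition grade worsens (increases) with time
    return dict(s, condition=s["condition"] + rate * dt)

def repair_dike(s, trigger=4.0):
    # Conditional intervention: inspection-triggered maintenance acts
    # only when the condition grade has reached the trigger level
    if s["condition"] >= trigger:
        return dict(s, condition=1.0), 25.0   # restored, at a cost
    return s, 0.0

s, total_cost, trace = {"condition": 1.0}, 0.0, []
for _ in range(8):                            # eight five-year intervals
    s = deteriorate(s, 5)
    s, cost = repair_dike(s)
    total_cost += cost
    trace.append(s["condition"])
# trace exhibits the saw-tooth produced by deterioration and repair
```

Because repair depends on the deterioration rate through the shared condition variable, the timing and cost of repairs differ between futures with different rates, which is exactly the behaviour discussed above.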
Property in the floodplain is afforded some protection by a dike, the current 'standard of protection' being 1:200 years (there is a 1/200 annual probability of overtopping). Behind the dike, the floodplain is approximately flat-bottomed and vertical-walled. Flooding occurs by overtopping when water level exceeds the crest level of the dike. The dike may breach, effectively lowering the crest level over part of the dike length and increasing the volume of water entering the floodplain. Flooding may occur during an event in which peak water level does not exceed crest level if a breach forms during the event (as may happen as a result of piping failure, for example). When flooding occurs, the property in the floodplain sustains damage.

Figure 4 | Illustration of the hypothetical situation studied in the example analysis.
The organisation responsible for tidal flood risk management wishes to assess a variety of flood risk management options. The dike deterioration component of exogenous change also affects the dike condition state variable. This interaction between exogenous change and management intervention generates the saw-tooth form visible in Figure 5(b). The 'Raise dike crest level' family of interventions m_c is parameterised on the target crest level (in m AOD). A particular intervention m_c(x) sets the dike crest level to the target level x if the existing crest level is lower. Since crest level raising involves substantial reconstruction of the dike, the condition of the dike is improved to s_q = 1.0. Raising the dike crest level reduces the frequency of overtopping and the volume of water entering the floodplain in a given event. The cost of crest level raising is a function of target and existing crest level, ground level and a unit rate s_cc. Flood proofing of property reduces the amount of damage that will be caused by a given depth of water. The 'Flood proof property' family of intervention functions m_p captures this by modifying the depth/damage curve. The proportion of property already flood proofed is tracked in a system state variable s_p, and requests for greater than 100% flood proofing are capped. The cost of flood proofing is proportional to the fraction of property to be proofed and to the system state parameter s_cp.

Options
Interventions are applied at five-year intervals. The base case applies the 'do nothing' intervention throughout. A 'Maintain' option models regular inspection and maintenance work on the dike by applying the conditional 'repair dike' intervention in every interval. The remaining options combine regular maintenance with dike crest level or flood proofing interventions. The flood proofing intervention is composed with dike repair, as regular maintenance of the dike is not interrupted by activity elsewhere.

The implementation follows Figure 2 very closely, a few details being elided in the figure to improve clarity and conserve space. A separate Reframe worksheet is used to implement each of the five framework layers, indicated by dashed enclosing boxes. Further worksheets implement the exogenous change function and the intervention functions or function families. While the implementation is hierarchically structured as described, the Reframe execution engine reproduces the planar data flow that is visible running from top to bottom of Figure 2 if the dashed boxes are disregarded.
In general, the 'do something' options shown deliver, with high probability, a positive NPV. Furthermore, more expensive interventions, such as implementing flood proofing or rebuilding dikes with increased crest level (in addition to regular maintenance), can lead to a higher expected NPV. It is possible, however, to overspend. Relative to the maintenance-only option, rebuilding the dike with crest level 5.1 m in 2030 distributes mass from the peak predominantly in the direction of increased NPV. Rebuilding with crest level 6.8 m, however, gives little or no further increase in the probability of these higher returns. Instead, it redistributes mass in the direction of decreased NPV and substantially increases the probability of a negative NPV. When conducting this kind of analysis it is rarely sufficient to compute and present the primary result. The data resulting from intermediate steps in the analysis hold valuable insights into the behaviour of the analysis. The Reframe system retains intermediate data, which can be further processed and visualised, though a user interface to make this accessible has still to be developed. Some examples are presented in Figures 5, 6, 8 and 9.

Figure 5 shows time series of a selection of system state variables and EAD in a particular future for a selection of options. Implementation of an intervention generates a step change in the affected state variables.

Finally, scatter plots of initial state variables against Present Value Benefit and Present Value Cost (Figure 9), or against NPV, give an indication of the extent to which input uncertainties influence the results of the analysis. Run times are always difficult to interpret, and our use of a prototype framework for the implementation here means that no useful figures can be provided. Whatever tool, language or framework is used to implement an analysis such as this, as component model complexity increases the run time overhead of the tool will quickly shrink as a proportion of the whole. For realistic analyses the dominating factors will be impact model run time and the number of samples introduced by each layer of the analysis.

Figure 8 | Damage exceedance curves before and after applying interventions to a test system state.

Figure 7 | Probability density function over NPV risk reduction, estimated from a 500-member sample of parameter uncertainty using a Gaussian kernel density estimator.
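A density estimate like that described in the Figure 7 caption can be reproduced in outline with a Gaussian kernel density estimator. The following sketch is illustrative only: the sample values, the bandwidth rule (Silverman's rule of thumb) and the integration grid are assumptions, not details from the analysis.

```python
import math
import random

def gaussian_kde(sample, bandwidth=None):
    """Return a callable estimating the pdf of `sample` using Gaussian kernels."""
    n = len(sample)
    if bandwidth is None:
        # Silverman's rule of thumb for a univariate Gaussian kernel
        mean = sum(sample) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
        bandwidth = 1.06 * sd * n ** (-1 / 5)
    def pdf(x):
        return sum(
            math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in sample
        ) / (n * bandwidth * math.sqrt(2 * math.pi))
    return pdf

# Illustrative stand-in for a 500-member NPV sample (million GBP)
random.seed(1)
npv_sample = [random.gauss(2.0, 1.5) for _ in range(500)]
density = gaussian_kde(npv_sample)

# Probability of a negative NPV, by crude numerical integration of the density
p_negative = sum(density(-6 + 0.01 * i) * 0.01 for i in range(600))
```

From such a density, quantities of direct interest to stakeholders, such as the probability of a negative NPV above, can be read off without assuming a parametric form for the NPV distribution.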

Figure 9 | Scatter plots of Present Value Cost or Benefit vs. initial system state parameter value for a dike crest level raising option. Samples are from the distributions given in Table 1. (a) A higher shape parameter leads to more extreme events being sampled and thus greater benefit. (b) Scatter on PV Cost results from variation in other cost model parameters and maintenance interventions.

Makropoulos et al.) has motivated the design of the framework described in this paper. After scoping the problem, forming initial ideas about options, and establishing a list of available data, models and associated uncertainties, a first iteration of decision analysis can begin. In this first cycle the system state representation, impact model and long-term change functions used should be very simple, capturing the gross behaviour of the system in a way that is quick to develop and run. Simple as the models used are, the analysis even at this early stage should be complete. The uncertainty introduced by simplification should be realistically assessed and used to configure the uncertainty analysis layer. The limiting case of a 'model' in this framework is a value encoded directly in the system state, which can then be sampled from a distribution or interval at the uncertainty analysis level. In general, even the models used in the first development cycle are likely to be more involved than this, but such extreme simplicity may sometimes represent an honest assessment of the state of knowledge. The analysis should then be iteratively refined, with refinements reflecting insights gained from previous cycles about the nature of the problem and the limitations of the model and available data. At each cycle, a limited set of improvements must be selected from a wide range of possibilities: more or better data could be collected, component models improved, interactions between component models better captured, options refined, or a new class of interventions introduced. The choice of which improvements to make should be based on their cost and their likely contribution to the overall decision-making process. In guiding this process, global sensitivity analysis (Saltelli et al.) is a powerful tool that may usefully be applied at different levels of the framework.

Fortunately, it is now possible to have access to very large numbers of computers and large volumes of storage 'on demand', paying for resources only while they are in use. This is made possible by the utility computing or 'Infrastructure as a Service' model, in which service providers achieve very high utilisation of large data centres by aggregating demand (Carr). Harvey & Hall () describe the use of one such service, Amazon Web Services, to conduct uncertainty and sensitivity analyses of a flood risk estimator (in effect, uncertainty analysis applied to layers 1 and 2 of this framework). The experience described in that paper informed the design of this framework, which assumes the availability of such an economical, dynamically scalable compute resource.
Accommodating the risk-based approach to water resources planning proposed by Hall et al. () would require only slightly more reconfiguration. Water shortages arise when low rainfall is experienced for several years in succession. The driving conditions cannot be treated as consisting of independent events, and the time period over which water shortages develop is commensurate with the rate of action of long-term change processes (including management intervention). For each time series of system state (for brevity, each 'future') generated by layers 3-5 of the framework, risk analysis would proceed by running water resource simulations driven by a large set of synthetic rainfall time series consistent with the gradually changing climate specified by that future ('transient' rainfall scenarios, Burton et al. ). Water shortage events can then be identified in the output of these simulations, and the probability of occurrence of water shortage in each year of each future established.
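As a sketch of this variant, the following toy simulation drives a crude reservoir model with synthetic rainfall series whose mean declines through time (a stand-in for a 'transient' scenario), then counts shortage events to estimate a per-year shortage probability. All rates, capacities and demands here are invented for illustration and carry no hydrological authority.

```python
import random

def simulate_storage(rainfall, capacity=60.0, demand=10.0, initial=50.0):
    """Toy reservoir: monthly inflow proportional to rainfall, fixed demand.
    Returns, for each year, whether storage ever hit zero (a shortage)."""
    storage = initial
    shortage_years = []
    for year in rainfall:              # rainfall: list of years, 12 monthly totals each
        short = False
        for r in year:
            storage = min(capacity, storage + 0.1 * r) - demand
            if storage <= 0.0:
                storage, short = 0.0, True
        shortage_years.append(short)
    return shortage_years

def shortage_probability(n_series=200, n_years=30, trend=-1.0, seed=0):
    """P(shortage) per year, estimated over synthetic rainfall series whose
    monthly mean declines by `trend` mm per year of simulation."""
    rng = random.Random(seed)
    counts = [0] * n_years
    for _ in range(n_series):
        rain = [[max(0.0, rng.gauss(100.0 + trend * y, 30.0)) for _ in range(12)]
                for y in range(n_years)]
        for y, short in enumerate(simulate_storage(rain)):
            counts[y] += short
    return [c / n_series for c in counts]

p = shortage_probability()
```

Because shortages develop over multi-year droughts, the yearly probabilities are strongly autocorrelated; this is exactly why the driving conditions cannot be reduced to independent events, as noted above.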
tor, and we simulate the influence of long-term change, including management interventions and change external to the flood risk management system. By integrating all aspects of the estimation of option performance, the framework makes it possible to explore uncertainty regarding option performance. This is a prerequisite for conducting robustness analysis, and important in obtaining a true picture of uncertainty and its potential influence on the preference ordering of options. It makes it possible to account coherently for the influence of long-term change processes and uncertainty on the cost of interventions as well as their benefits. For integrated performance estimation to be possible, it is necessary to simulate futures given an initial state, a model of exogenous change and a set of options. Options are modelled as sequences of interventions. The effects of interventions and their cost of implementation are captured as functions. Exogenous change and management interventions can interact, as for example with dike deterioration and repair.
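A minimal sketch of this simulation structure: the system state as a dictionary, exogenous change and interventions as functions over that state, and an option as a mapping from years to lists of interventions. All variable names, rates and the 50-year horizon are hypothetical, chosen only to echo the dike example.

```python
def deteriorate(state):
    """Exogenous change: dike condition decays, sea level rises (illustrative rates)."""
    state["dike_condition"] *= 0.97          # 3% loss of condition per year
    state["mean_sea_level"] += 0.004         # metres per year
    return state

def maintain(state):
    """Intervention: partially restore dike condition (costs handled separately)."""
    state["dike_condition"] = min(1.0, state["dike_condition"] + 0.2)
    return state

def raise_crest(height):
    """Intervention family parameterised by the new crest level."""
    def apply(state):
        state["crest_level"] = height
        state["dike_condition"] = 1.0        # a rebuilt dike starts as new
        return state
    return apply

def simulate_future(initial_state, option, years=50):
    """option = {year: [interventions]}. Returns the trajectory of system states,
    letting exogenous change and interventions interact year by year."""
    state, trajectory = dict(initial_state), []
    for year in range(years):
        state = deteriorate(state)
        for intervention in option.get(year, []):
            state = intervention(state)
        trajectory.append(dict(state))
    return trajectory

initial = {"crest_level": 4.5, "dike_condition": 1.0, "mean_sea_level": 0.0}
# 'Maintain every 5 years, rebuild with crest 5.1 m in year 20' as a sequence
option = {y: [maintain] for y in range(5, 50, 5)}
option[20] = [raise_crest(5.1)]
traj = simulate_future(initial, option)
```

Specifying an option is then nothing more than editing the `{year: [interventions]}` mapping, which is the property that lets non-technical stakeholders propose new options.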
decisions. Key to this is minimising the cost of modifying an analysis, as a high marginal cost of change inhibits both the correction of shortcomings in the modelling and exploration of the option space. The proposed framework is highly modular, with clearly defined interfaces between modules, minimising the impact of changes made within components. Options are specified simply by listing the interventions to apply through time, allowing non-technical stakeholders to specify new options. If the framework is implemented on appropriate computational infrastructure and the automation requirement is met, then rerunning the analysis to accommodate any change will cost little more than the required CPU hours. The framework aligns with the increasing emphasis globally on risk-based decision making and proper analysis of uncertainty (for example, in the European Floods Directive). In the inner layers of impact and risk analysis it refines and formalises current practice in England and Wales, not just in philosophy but also in structure. The remaining layers (analysis of long-term change, option performance and uncertainty) go well beyond current practice, but they provide clarity and structure to issues that have been examined in a less formal manner in a few strategic investment planning projects. Some barriers to adoption remain. While the overall structure is given by the framework, each component (impact models, intervention and exogenous change functions, cost models, uncertainty estimates and scenarios) has a high complexity and expertise requirement. The shift from building models of single system states and costing particular options to constructing parametric models of families of systems and interventions goes beyond the experience of most flood risk management practitioners.
The term 'alternatives' is often used in the flood risk management literature with the same meaning. Similarly, our 'interventions' are sometimes referred to as 'measures'; the former choice allows us to refer to intervention functions without ambiguity, since 'measure function' is already used in mathematics.

Table 1 | System state variables for the example analysis, categorised by the layer of the analysis framework at which they are used. The symbol √ indicates use of a variable, while ← indicates that a variable is set. The column 'Value' indicates the initial value or the distribution from which this value is drawn during uncertainty analysis.

The impact model implements the function d(s, x). For the hypothetical situation described, x = ⟨x_l, x_d⟩ specifies an event as the combination of the peak water level x_l and the dike state x_d, and Table 1 lists the system state variables that make up s. Note that the system state vector s contains variables used at all layers of the framework. The column 'Impact model' in Table 1 indicates those variables used by the impact model.
x = ⟨x_l, x_d⟩ are sampled in two stages. First, a sample of peak tide water levels {x_l[i] : i = 1 … N_k} is drawn. Then, the dike state x_d is sampled conditional on x_l.
mated) Generalised Extreme Value distribution f over x_l are among the system state variables (see column 'Risk analysis' in Table 1 for the variables used by the risk analysis layer). A proposal distribution f′ is constructed by increasing the scale parameter of that distribution by a factor of five. Because in the example we have a single dike with only two possible states, we simply generate two events for each maximum water level, one with each possible state. If at the end of a tidal cycle with peak level x_l the dike will be in state x_d with probability p_d(s, x), the expected damage for system s is given by

  Σ_i Σ_{x_d} w(s, ⟨x_l[i], x_d⟩) d(s, ⟨x_l[i], x_d⟩)   (17)

where

  w(s, ⟨x_l[i], x_d⟩) = [f(s, x_l[i]) / (N_i f′(s, x_l[i]))] p_d(s, ⟨x_l[i], x_d⟩)

d(s, x) encapsulates these various influences and is implemented in this case by bilinear interpolation into a
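Equation (17) can be read as an importance-sampling estimator: peak levels are drawn from the proposal f′ (the fitted GEV with its scale parameter inflated fivefold) and each sample is weighted by f/(N f′) and by the conditional dike-state probability. The sketch below assumes illustrative GEV parameters and simple breach-probability and damage functions in place of the paper's interpolated damage table.

```python
import math
import random

def gev_pdf(x, mu, sigma, xi):
    """GEV density for shape xi != 0; zero outside the support."""
    z = (x - mu) / sigma
    if 1 + xi * z <= 0:
        return 0.0
    t = (1 + xi * z) ** (-1.0 / xi)
    return (t ** (1 + xi)) * math.exp(-t) / sigma

def gev_sample(rng, mu, sigma, xi):
    """Inverse-CDF sampling: F(x) = exp(-t(x))."""
    u = rng.random()
    return mu + sigma * ((-math.log(u)) ** (-xi) - 1) / xi

def expected_annual_damage(n=20000, seed=0):
    mu, sigma, xi = 4.0, 0.3, 0.1        # fitted GEV over peak level (illustrative)
    sigma_prop = 5.0 * sigma             # proposal f': scale inflated fivefold
    crest = 5.1
    rng = random.Random(seed)
    def p_breach(x_l):                   # illustrative P(breached state | peak level)
        return min(1.0, max(0.0, 0.5 * (x_l - crest)))
    def damage(x_l, breached):           # illustrative damage for event <x_l, x_d>
        return 10.0e6 if breached else 2.0e6 * max(0.0, x_l - crest)
    total = 0.0
    for _ in range(n):
        x_l = gev_sample(rng, mu, sigma_prop, xi)
        # importance weight f / (N f'), as in Equation (17)
        w = gev_pdf(x_l, mu, sigma, xi) / (n * gev_pdf(x_l, mu, sigma_prop, xi))
        pb = p_breach(x_l)
        # sum over both dike states, weighted by p_d
        total += w * (pb * damage(x_l, True) + (1 - pb) * damage(x_l, False))
    return total

ead = expected_annual_damage()
```

Inflating the proposal's scale concentrates samples in the damaging tail, so far fewer events are needed than under naive Monte Carlo for the same accuracy in the expected damage.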

Table 2 | Details of basic intervention families in the example analysis. Table 1 details all the system state parameters. The expression [substitution, …]/s evaluates to the vector s modified
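The substitution notation [substitution, …]/s can be read as a functional update of the state vector: a copy of s with the named variables replaced. A minimal sketch, assuming the state is held as a dictionary (names are illustrative):

```python
def substitute(state, **substitutions):
    """Evaluate [substitution, ...]/s: a copy of state s with the named
    variables replaced; the original state is left untouched."""
    new_state = dict(state)
    new_state.update(substitutions)
    return new_state

s = {"crest_level": 4.5, "dike_condition": 0.7}
s2 = substitute(s, crest_level=5.1)   # [crest_level := 5.1]/s
```

Leaving the original state untouched matters when many interventions are evaluated against the same sampled system state.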

Table 3 | The options used in the example analysis. Appraisal time is divided into 5-year intervals.