Bayesian inference via Markov Chain Monte Carlo (MCMC) sampling and sequential Monte Carlo (SMC) sampling are popular methods for uncertainty analysis in hydrological modelling; however, their application can incur significant computational costs. This study investigates the use of model pre-emption to improve the computational efficiency of MCMC and SMC samplers in the context of hydrological modelling. The proposed pre-emption strategy facilitates early termination of low-likelihood simulations and thereby avoids unnecessary simulation time steps. The approach is incorporated into two samplers and applied to the calibration of three rainfall–runoff models. Results show that overall pre-emption savings range from 5 to 21%. Furthermore, results indicate that pre-emption savings are greatest during the pre-convergence ‘burn-in’ period (between 8 and 39%) and decrease as the algorithms converge towards high-likelihood regions of parameter space. These savings are achieved with no change whatsoever in the posterior set of parameters.

## INTRODUCTION

This paper focuses on improving the computational efficiency of calibration and uncertainty analysis – two essential components of model assessment, defined as the use of robust procedures to determine the suitability of a given model for a given purpose (Matott *et al.* 2009). Investigations of uncertainty in hydrological modelling have emphasized the use of automatic calibration methods, which develop expressions for parameter uncertainty, ranging from simple Monte Carlo simulations such as GLUE (Beven & Binley 1992) to statistical approaches based on Bayesian inference (Box & Tiao 1973; Kuczera 1983). Owing to the complexity of large-scale hydrological models, Bayesian inference is facilitated through Markov Chain Monte Carlo (MCMC) sampling from parameters' posterior distributions (e.g., Kuczera & Parent 1998; Haario *et al.* 2001; Vrugt *et al.* 2003, 2009; Kavetski *et al.* 2006).

Sequential Monte Carlo (SMC) simulations have also become very attractive in hydrological modelling in recent years (Moradkhani *et al.* 2005; Hsu *et al.* 2009; Salamon & Feyen 2010; Jeremiah *et al.* 2011). SMC samplers combine data assimilation principles with a particle filtering strategy (e.g., Moradkhani *et al.* 2005; Smith *et al.* 2008), and generally resemble previously developed sampling importance resampling approaches (e.g., Del Moral *et al.* 2006). More recently, Jeremiah *et al.* (2011) compared an example SMC sampler with an adaptive MCMC sampler and found that both methods displayed robustness and convergence.

Despite the common use of MCMC and SMC approaches, their application can incur high computational costs. Therefore, strategies for improving the efficiency of such samplers are an ongoing area of research. In MCMC sampling, efforts to improve efficiency include utilizing prior information (Mertens *et al.* 2004; Sikorska *et al.* 2012), developing adaptive algorithms (e.g., Haario *et al.* 2001; Vrugt *et al.* 2003, 2009; Craiu *et al.* 2009), and using ‘limited-memory’ sampling (Kuczera *et al.* 2010). Efforts for overcoming practical SMC issues include using importance sampling (Cheng & Druzdzel 2000) and seeding initial solutions using empirical Bayes (Chen *et al.* 2004).

As a complementary approach to the aforementioned efforts, this study explores the use of the deterministic model ‘pre-emption’ concept, first introduced in Razavi *et al.* (2010), to improve the computational efficiency of MCMC and SMC samplers in the context of hydrological modelling. Model pre-emption is a relatively simple strategy for identifying low-quality simulations and terminating them early, before the entire simulation run time completes. It requires the continuous computation of a goodness-of-fit criterion as the simulation proceeds. Deterministic model pre-emption is the application of this concept within a sampling algorithm for which a threshold criterion value can be defined such that candidate solutions with goodness-of-fit measures beyond this value are known to have no impact on the search algorithm results. Previous research by Razavi *et al.* (2010) establishes that model pre-emption can yield substantial computational savings when applied to various optimization-based calibration strategies (DDS and PSO) and various ‘informal’ uncertainty-based calibration strategies, e.g., GLUE (Beven & Binley 1992) and DDS-AU (Tolson & Shoemaker 2007). In contrast, this study investigated model pre-emption for use within formal likelihood functions embedded within the MCMC and SMC sampling algorithms. These pre-emption-enabled formal samplers were then applied to the calibration and uncertainty analysis of three rainfall–runoff models. To the best of our knowledge, such an implementation has not been considered in previous studies on the use of MCMC and SMC sampling in hydrological modelling.

## METHODS

Model pre-emption was applied to two algorithms, i.e., an MCMC implementation known as differential evolution adaptive metropolis (DREAM) (Vrugt *et al.* 2009) and an SMC implementation described by Jeremiah *et al.* (2011) and referred to herein as JSMC. DREAM runs multiple Markov chains simultaneously to facilitate efficient global exploration of the parameter space and its convergence is monitored using the Gelman–Rubin metric ($\hat{R}$) (Gelman & Rubin 1992), where values less than 1.2 indicate convergence. The JSMC sampling process propagates a population of parameter vectors (or particles) of size *N* from an initial sampling distribution to the desired posterior distribution. For more information on the JSMC sampler refer to Jeremiah *et al.* (2011).
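As a minimal sketch of this convergence check, the function below computes the classical Gelman–Rubin $\hat{R}$ from the between-chain and within-chain variances. The function name and single-parameter interface are illustrative, not taken from the DREAM source.

```python
import numpy as np

def gelman_rubin(chains):
    """Classical Gelman-Rubin R-hat for a single parameter.

    `chains` is a 2-D array of shape (m, n): m parallel chains of n samples.
    Values below ~1.2 are commonly taken to indicate convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return float(np.sqrt(var_hat / W))
```

Chains drawn from a common distribution give $\hat{R}$ near 1, while chains stuck in separated modes inflate the between-chain term and push $\hat{R}$ well above 1.2.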

Consider *N* streamflow observations $y_1, \ldots, y_N$ (or *Y* in vector notation) used to calibrate a hydrologic model given its parameter vector $\theta$. Assuming the model errors are uncorrelated and Gaussian distributed with zero mean and variance $\sigma^2$, the posterior probability density function has the following form (after integrating out $\sigma^2$):

$$p(\theta \mid Y) \propto \left[\sum_{t=1}^{N} e_t^2\right]^{-N/2} \tag{1}$$

where $e_t = y_t - \hat{y}_t(\theta)$ is the residual at time step *t*. Equation (1) assumes that errors are uncorrelated, but this is not a very realistic assumption in the context of hydrologic modelling. One approach to account for auto-correlation is to use a first-order auto-regressive (AR) scheme for the error series (Sorooshian & Dracup 1980):

$$e_t = \rho\, e_{t-1} + \nu_t \tag{2}$$

where $\rho$ is the first-order correlation coefficient and $\nu_t$ is the remaining error, prescribed to have zero mean and constant variance $\sigma_\nu^2$. The resulting joint posterior distribution of $\theta$ and $\rho$ in this case would be:

$$p(\theta, \rho \mid Y) \propto (1-\rho^2)^{1/2} \left[(1-\rho^2)\, e_1^2 + \sum_{t=2}^{N} \left(e_t - \rho\, e_{t-1}\right)^2\right]^{-N/2} \tag{3}$$

It is observed in both Equations (1) and (3) that the posterior density monotonically decreases as residuals are incorporated time step by time step. Since the posterior densities calculated with Equations (1) and (3) monotonically degrade as the simulation proceeds through time, both formulations are suitable for adopting a model pre-emption approach (Razavi *et al.* 2010).
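This monotone behaviour can be made concrete by tracking the running value of the sum-of-squares form of Equation (1) as residuals accumulate. The sketch below uses our own function name and interface: because the cumulative sum of squared residuals only grows while the exponent $-N/2$ stays fixed, the running density can only decrease.

```python
import numpy as np

def running_log_density(residuals):
    """Unnormalised log of Equation (1), log p = -(N/2) * log(SSE_t),
    evaluated after each time step t, where SSE_t is the cumulative sum of
    squared residuals and N is the full record length."""
    residuals = np.asarray(residuals, dtype=float)
    N = residuals.size
    return -(N / 2.0) * np.log(np.cumsum(residuals ** 2))
```

Any partial evaluation of this quantity is therefore an upper bound on the final density, which is exactly what makes early termination safe.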

Pre-emption-enabled DREAM and JSMC sampling was applied to the calibration and uncertainty analysis of three different rainfall–runoff models. Table 1 provides summary information on these case studies and lists corresponding case study reference papers containing complete descriptions.

| Simulation model (type) | No. of para. | Catchment (area, km^{2}) | Forcing data | Calibration period | Refs for more information |
|---|---|---|---|---|---|
| HYMOD (lumped) | 5 | Leaf (1994) | Precipitation; temperature; PET^{a} | 1953–1954 | Boyle (2000); Vrugt *et al.* (2003) |
| WetSpa (semi-distributed) | 6 | Baron (965) | | 1995–2000 | Safari *et al.* (2009) |
| SWAT (semi-distributed) | 26 | Cannonsville (37) | Precipitation; temperature; PET^{a}; sol. radiation; rel. humidity | 1996–1998 | Tolson (2005); Tolson & Shoemaker (2007) |

^{a}Potential evapotranspiration.

### Model pre-emption

In deterministic model pre-emption (Razavi *et al.* 2010), model performance (in terms of some monotonically degrading calibration objective function) is monitored during simulation, and a given simulation is terminated early if it is recognized to be so poor that it will not contribute to guiding the search strategy. In the present study, the DREAM and JSMC sampling algorithms were modified to support deterministic model pre-emption. The first step in implementing pre-emption is to select an appropriate objective function. As noted previously, both Equations (1) and (3) are suitable choices for model pre-emption.

Another important factor in pre-emption implementation is the pre-emption threshold, i.e., a likelihood value such that solutions with likelihood values worse than this threshold would be rejected even if the simulation were carried out completely. Both DREAM and JSMC decide to jump from a current state ($\theta$) to a candidate state ($\theta^\ast$) based on the ratio of the posterior densities of the two states, i.e., $\alpha = p(\theta^\ast \mid Y)/p(\theta \mid Y)$. The candidate state is accepted if $\alpha \ge Z$, where *Z* is a random number uniformly distributed between 0 and 1; otherwise the sampler remains at the current state. A move to $\theta^\ast$ is therefore accepted only if $p(\theta^\ast \mid Y) \ge Z \cdot p(\theta \mid Y)$. Thus, the quantity $Z \cdot p(\theta \mid Y)$ can be considered as the pre-emption threshold so long as the random number *Z* is generated ‘prior’ to evaluating a given candidate solution. Algorithms can then determine, a priori, the minimum acceptable value of the candidate posterior density ($p_{\min} = Z \cdot p(\theta \mid Y)$) as the pre-emption threshold.

Defining a pre-emption-enabled version of DREAM and JSMC requires slight adjustment of the acceptance/rejection step in the Metropolis–Hastings algorithm, as illustrated in Figure 1. When a given parameter set is to be evaluated (box 1 in Figure 1), *Z* is generated and the pre-emption threshold $p_{\min}$ for candidate $\theta^\ast$ is identified (box 2). At any time step (*t*) of the model simulation, the likelihood accumulated up to *t* can be calculated and evaluated against $p_{\min}$ (boxes 3–6). If the evaluated density of any candidate solution becomes lower than $p_{\min}$ at any point through the simulation, the run is pre-empted (box 7); otherwise, the evaluation of $\theta^\ast$ terminates without any time saving (box 8). Note that a pre-empted candidate would never be accepted by the Metropolis–Hastings algorithm, even if the simulation had not been pre-empted. As such, the pre-emption strategy employed here is deterministic in that it has no influence, other than computational savings, on the behaviour of the algorithm.
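The adjusted acceptance/rejection step can be sketched as follows. This is a schematic Metropolis step in log space, not the DREAM or JSMC source code: `propose` and `run_model` are hypothetical callables, where `run_model` is assumed to monitor the running log-density and abort as soon as it falls below the supplied threshold.

```python
import numpy as np

def preemptive_metropolis_step(theta_cur, logp_cur, propose, run_model, rng=None):
    """One Metropolis-Hastings step with deterministic pre-emption.

    `propose(theta)` draws a candidate parameter set; `run_model(theta, log_thr)`
    simulates it while monitoring the running log-density, returning
    (log_density, preempted) and aborting with preempted=True as soon as the
    running density falls below log_thr.  Both callables are illustrative.
    """
    rng = rng or np.random.default_rng()
    Z = rng.uniform()                       # drawn BEFORE the model run
    log_threshold = np.log(Z) + logp_cur    # minimum acceptable candidate density
    theta_cand = propose(theta_cur)
    logp_cand, preempted = run_model(theta_cand, log_threshold)
    if preempted or logp_cand < log_threshold:
        return theta_cur, logp_cur          # reject; a pre-empted run would
                                            # have been rejected anyway
    return theta_cand, logp_cand            # accept
```

Drawing *Z* before the run is the key detail: it fixes the threshold in advance, so terminating a run that violates it cannot change which candidates the sampler accepts.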

For this study, pre-emption was implemented using a simple file-based hand-shaking mechanism between the MCMC or JSMC algorithm and the simulation model. Prior to invoking a simulation run, the algorithm would write the calculated pre-emption threshold () for that run to a file and the simulation model would read the value from the file as part of the model initialization process. During simulation, the objective function was compared against this threshold each time the objective function was updated (e.g., after each time step). Violation of the threshold would then trigger the model to gracefully terminate the simulation.
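A minimal sketch of this hand-shake, assuming the sum-of-squares likelihood of Equation (1) and an illustrative file name, might look like the following; the real study coupled an external simulation model rather than a Python function, so the two sides here simply share a working-directory file.

```python
import math
import os

THRESHOLD_FILE = "preemption_threshold.txt"   # illustrative file name

def write_threshold(log_threshold):
    """Sampler side: publish the pre-emption threshold before invoking a run."""
    with open(THRESHOLD_FILE, "w") as f:
        f.write(repr(log_threshold))

def simulate(residuals, N):
    """Model side: read the threshold at initialisation, then compare the
    running log-density of Equation (1) against it after every time step,
    terminating gracefully as soon as it is violated.
    Returns (log_density, steps_simulated, preempted)."""
    with open(THRESHOLD_FILE) as f:
        log_threshold = float(f.read())
    sse = 0.0
    for t, e in enumerate(residuals, start=1):
        sse += e * e
        log_density = -(N / 2.0) * math.log(sse)
        if log_density < log_threshold:
            return log_density, t, True    # pre-empted: graceful early exit
    return log_density, t, False           # ran to completion
```

A file-based protocol like this keeps the sampler and the simulation model loosely coupled, which is why it works with legacy executables that cannot be linked to the sampler directly.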

Computational savings achieved by pre-emption were quantified using the metric of Razavi *et al.* (2010):

$$S = \left(1 - \frac{n_p}{n}\right) \times 100$$

where *S* is the computational savings (in per cent), *n* is the total number of time steps in the calibration period, and $n_p$ is the number of time steps simulated before the simulation is terminated by pre-emption. Note that the pre-emption approach outlined in this section is applicable to any other MCMC or SMC samplers that utilize the Metropolis–Hastings acceptance/rejection approach for evaluating candidate moves.
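Computing these savings is a one-line affair; the helper names below are our own, implementing $S = (1 - n_p/n) \times 100$ per run together with an average over a set of runs.

```python
def preemption_savings(n, n_p):
    """Per-run savings S (%): the share of the calibration period's n time
    steps that was never simulated because the run stopped after n_p steps."""
    return (1.0 - n_p / n) * 100.0

def average_savings(n, steps_simulated):
    """Average S over a set of runs; runs that completed contribute n_p = n
    (and hence zero savings)."""
    return sum(preemption_savings(n, n_p) for n_p in steps_simulated) / len(steps_simulated)
```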

## RESULTS

### Non-pre-emptive experiments

A ‘standard’ (i.e., non-pre-emptive) DREAM implementation was applied to the three case studies, thereby establishing baseline computational costs for the algorithm. Preliminary investigation of model residuals indicated that the standard Bayesian formulation of Equation (1) was sufficient for calibrating the HYMOD case study. Conversely, the AR-based formulation of Equation (3) was required for the WetSpa and SWAT case studies to accommodate correlation among the residuals. The Gelman–Rubin convergence metric ($\hat{R}$-statistic) indicated that DREAM converged after 7,800, 4,000 and 161,000 simulations in the HYMOD, WetSpa and SWAT case studies, respectively. After convergence, 10,000 more samples were taken to form the posterior distribution.

A baseline set of non-pre-emptive JSMC sampling experiments was also applied to the three case studies. Although JSMC convergence and model residuals are treated differently, the same likelihood formulations as in the DREAM experiments were used, and the same computational budgets were considered. Note that the relative efficiencies of DREAM and JSMC were compared previously by Jeremiah *et al.* (2011), and such inter-algorithm comparisons were not pursued in the present study. Instead, the numerical experiments focussed on the application of pre-emption to reduce the computational burden of these methods.

### Application of model pre-emption

Pre-emption-enabled versions of DREAM and JSMC were applied to the same calibration problems described in the section Non-pre-emptive experiments. The pre-emptive DREAM and JSMC experiments used the same sequence of random numbers as the corresponding baseline experiments, and the same computational budgets were considered. These identical settings ensured that a given sampler's search behaviour was the same with and without pre-emption. As expected, the pre-emption-based DREAM and JSMC samplers yielded exactly the same sets of posterior parameter values as the corresponding baseline (i.e., non-pre-emptive) experiments; these identical parameter values are therefore not shown here. In other words, use of model pre-emption did not change the calibration results, and its only effect was a reduction in the required amount of computation.

Table 2 provides average computational savings (in per cent) for the pre-emption-based DREAM and JSMC experiments. The total average savings ranged from 5 to 21% in DREAM, and from 16 to 18% in JSMC. Extrapolating based on average simulation model runtimes and the percentage savings yields estimated wall-clock savings of up to 38 hours for our case studies. For more computationally demanding hydrologic models, such as fully distributed models requiring hours of simulation time, the wall-time savings afforded by pre-emption would be even more significant.

| Method | Time window | HYMOD | WetSpa | SWAT |
|---|---|---|---|---|
| MCMC | Burn-in period | 17 | 39 | 8 |
| | Post-burn-in period | 12 | 13 | 4 |
| | Entire simulations | 14 | 21 | 5 |
| JSMC | First half of the simulations | 21 | 28 | 25 |
| | Second half of the simulations | 11 | 8 | 9 |
| | Entire simulations | 16 | 18 | 17 |

For the selected algorithms (i.e., DREAM and JSMC), most of the pre-emption savings occurred during the initial sampling or ‘burn-in’ period, defined as the period before the Gelman–Rubin metric indicates convergence. As the samplers converge, candidate parameter sets ($\theta^\ast$) differ less and less from the current parameter sets ($\theta$). This in turn increases the likelihood ratio acceptance criterion, $\alpha$, and reduces the probability of pre-emption. To quantify this behaviour, the DREAM pre-emption savings were separated into burn-in and post-burn-in periods, and the JSMC results were likewise divided into two halves. The results are shown in Table 2.

Figure 2 illustrates the empirical cumulative distribution function of the simulation time at which model pre-emption terminated a given simulation. For DREAM in the post-burn-in period, almost all pre-emption occurred after 85% of the simulation was completed. This explains why the overall cumulative savings reported for DREAM in Table 2 are relatively low. However, unacceptable simulations were terminated much earlier during the burn-in period, and considerable computational savings were achieved at this stage. Fairly similar pre-emption behaviour was observed for the JSMC sampler (lower panel in Figure 2).

### Sensitivity of pre-emption savings to calibration period

The effectiveness of model pre-emption can be sensitive to the location of large storms within the calibration period (Razavi *et al.* 2010). For example, inferior parameter sets will generally trigger early exceedance of the pre-emption threshold if major events happen early in the calibration period. However, pre-emption will not help as much if a major storm occurs at the end of a calibration period. This is because the simulation will need to cover most of the calibration period before a pre-emption judgement can be made. As identified by Razavi *et al.* (2010), one approach to enhance efficiency gains due to pre-emption is to choose the calibration period such that the largest magnitude events/responses occur near the beginning of the calibration period.
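This sensitivity can be illustrated with a toy experiment (all numbers invented for illustration): the same residual series, with its single large-error event placed either first or last, is pushed through the running-density check of Equation (1).

```python
import math

def steps_before_preemption(residuals, log_threshold):
    """Time steps simulated before the running log-density of Equation (1)
    drops below the threshold (full length if it never does)."""
    N = len(residuals)
    sse = 0.0
    for t, e in enumerate(residuals, start=1):
        sse += e * e
        if -(N / 2.0) * math.log(sse) < log_threshold:
            return t
    return N

# One badly mis-simulated event (residual 10) among mild errors (residual 0.5);
# only the event's position in the record differs.
early = [10.0] + [0.5] * 99
late = [0.5] * 99 + [10.0]
threshold = -200.0   # illustrative pre-emption threshold
```

With the large event first, the threshold is violated on the very first time step; with the event last, all 100 steps must be simulated before the violation occurs, so pre-emption yields no saving.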

To explore the sensitivity of model pre-emption to the calibration period, the HYMOD case study was calibrated using pre-emption-enabled DREAM considering four different years from the observation period. Results showed the pre-emption savings varied according to the selected calibration period, and in some cases considerable savings were achieved. Overall pre-emption savings in these experiments ranged from 8 to 35% during the entire simulation and 10 to 39% during the burn-in period.

## DISCUSSION AND CONCLUSIONS

In view of the computational burden associated with samplers employed for Bayesian inference (e.g., DREAM or JSMC), a model pre-emption approach was investigated for saving computational time. The proposed approach (i.e., early termination of unnecessary simulations) yielded average computational savings of between 5 and 21% in the three selected case studies. Savings were larger during the initial stage of sampling, ranging from 8 to 39%, and varied according to the selected calibration period; in one case study, savings reached as high as 39% for a favourable calibration period. Such savings are considerable for simulation models that require several minutes or hours to complete. Implementing pre-emption did not change the calibration results relative to calibrating without pre-emption. Moreover, implementation was straightforward, and the approach is generally applicable to any sampler that utilizes the Metropolis–Hastings acceptance/rejection approach for evaluating candidate moves in the search space.

The case study results presented here provide strong empirical evidence that a model pre-emption approach is a good choice for application to other case studies involving formal Bayesian calibration. Pre-emption will be most useful in calibration problems where good solutions are hard to find and a great deal of time would otherwise be wasted fully evaluating bad solutions long after it is known that they will have no influence on the sampling algorithm. Moreover, our results suggest that pre-emption savings are most significant in cases where Bayesian samplers do not converge. In practice, convergence failure is relatively common during the initial phases of calibrating complex hydrological models, where multiple applications of a Bayesian sampler can be required; for example, refinement of the model, the model input forcings and/or the likelihood function is often needed before a satisfactory calibration result is obtained. The burn-in periods of the selected case studies are representative of these non-converged situations, and the corresponding results suggest that savings of up to 39% can be achieved. In this way, pre-emption can accelerate model development by helping modellers more quickly determine when an issue is preventing MCMC or SMC algorithm convergence.

## ACKNOWLEDGEMENTS

This research was supported with funding from Bryan Tolson's NSERC Discovery Grant (50%). The authors thank Dr De Smedt (Vrije Universiteit Brussel – VUB) and Dr Ali Safari (VUB) for providing the WetSpa case study (and associated input and data files), and Dr Vrugt (University of California at Irvine) for providing the DREAM source code.