Postprocessing of ensemble precipitation data reduces the bias and uncertainty introduced into numerical weather prediction (NWP) by perturbations of the initial conditions of atmospheric models. This study evaluates short-range quantitative precipitation forecasts from the NCMRWF ensemble prediction system, archived in TIGGE, for the Vishwamitri River Basin. The aim is to perform univariate statistical postprocessing with six parametric methods and to identify the most suitable approach among censored non-homogeneous logistic regression (cNLR), Bayesian model averaging (BMA), logistic regression (logreg), heteroscedastic logistic regression (hlogreg), heteroscedastic extended logistic regression (HXLR), and ordered logistic regression (OLR). The Brier score (BS), the area under the curve (AUC) of receiver operating characteristic (ROC) plots, Brier decomposition, and reliability plots were used to verify the probabilistic forecasts. According to the BS and AUC, the cNLR approach calibrated the forecasts well at all five grids and is preferred, whereas the BMA and hlogreg approaches performed relatively poorly for the Vishwamitri basin. The best post-processed ensemble precipitation will subsequently be used as input for generating hydrological forecasts in operational flood forecasting for the Vishwamitri River Basin.

  • Parametric postprocessing of ensemble precipitation forecasts using cNLR, BMA, logreg, hlogreg, HXLR, OLR for the Vishwamitri River Basin.

  • Evaluation of the post-processed ensemble forecasts to find the best-fit model for the calibration of the short-range ensemble precipitation forecasts.

  • Comparison of different postprocessing methods using verification metrics, viz. BS, Brier decomposition, reliability plots and AUC of ROC plots.

Graphical Abstract

Ensembles became part of the implementation suites at the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) in 1992, eventually bringing a fundamental change from deterministic to ensemble-based probabilistic forecasting (Demeritt et al. 2010; Buizza 2018). Perturbations of the initial conditions of the atmospheric variables produce a set of forecasts called ensemble members. Numerical weather prediction (NWP) by ensemble prediction systems (EPSs) is classified into short-range forecasts (lead times of 6 h to 3 days), medium-range forecasts (3–15 days), and long-range forecasts (up to a few weeks). Short- to medium-range forecasts show better skill and add value in hydrological forecasting.

TIGGE stood for ‘THORPEX Interactive Grand Global Ensemble’ when first introduced as a project; when the project was completed in 2014, the archive continued and is now known as ‘The International Grand Global Ensemble’ (Nobert et al. 2010). It provides a common platform for NWP datasets generated by different EPSs around the world, such as ECMWF (Europe), CMA (China), KMA (Korea), NCEP (USA), NCMRWF and IMD (India), BoM (Australia), CPTEC (Brazil), ECCC (Canada), JMA (Japan), Météo-France (France), and UKMO (United Kingdom) (Bao & Zhao 2012; Duan et al. 2012; Jingyue et al. 2012; Liu et al. 2017), with different numbers of ensemble members.

The skill of an operational probability-based flood forecasting system is driven mainly by two sources of uncertainty, namely meteorological and hydrological uncertainty (Bourgin et al. 2014). Propagating NWP forecasts as perturbed members, or ensembles, through hydrological modelling systems introduces meteorological uncertainty into the forecasting system (Buizza 2009). For a reliable forecasting system, it is therefore of utmost importance to use a well-calibrated precipitation dataset. Recent studies show the implementation of statistical postprocessing approaches to evaluate and improve the skill of ensemble flood forecasting systems (Schmeits & Kok 2010; Buizza 2018; Wilks 2018; Liu et al. 2019). Statistical postprocessing of the meteorological data (also called pre-processing with respect to the input of the hydrological model) addresses the bias and uncertainty of the ensemble forecasts in the verification period by exploiting the relationship between past forecasts and observations.

The development, operation, and constant improvement of an EPS require large resources, whereas improving forecast skill by postprocessing the ensemble numerical weather predictions is presumably more economical. Because EPS forecasts contain systematic errors in the mean and spread of their forecast distribution (Verkade et al. 2013), it is strongly recommended to postprocess the weather predictions before feeding them into a hydrological model (Olsson & Lindström 2008; Awol et al. 2021). Postprocessing corrects the bias and fine-tunes the dispersion to produce appropriately calibrated forecast probabilities. To rectify systematic biases and produce improved calibrated forecasts, statistical postprocessing (or model output statistics, MOS) utilises the relationship between a historical set of weather predictions and the corresponding observations.

Several parametric and non-parametric postprocessing methods for meteorological variables such as temperature, precipitation, and wind are used to correct forecasts and produce calibrated predictions based on past observed data. In contrast to the statistical postprocessing of variables like wind or temperature, postprocessing of ensemble precipitation is more challenging because of the high variability of rainfall magnitudes and the large number of zero-rainfall values. Heavy rainfall events are infrequent, so a large training dataset needs to be considered (Han & Coulibaly 2019). When setting up the model, the forecast uncertainty typically rises with the magnitude of precipitation considered. Recent studies show the implementation of the parametric methods, namely BMA (Raftery et al. 2005; Liang et al. 2013; Liu & Xie 2014), cNLR (Messner et al. 2014b; Scheuerer 2014; Gebetsberger et al. 2017), logreg (Hamill et al. 2008; Verkade et al. 2013; Gebetsberger et al. 2019; Medina et al. 2019), hlogreg (Messner et al. 2016), heteroscedastic extended logistic regression (HXLR) (Schmeits & Kok 2010; Scheuerer & Büermann 2014), and ordered logistic regression (OLR) (Gebetsberger et al. 2018) for postprocessing.

Messner et al. (2014a) compared extended logistic regression with three closely related regression models (censored logistic regression, OLR, and heteroscedastic OLR) for 10 European weather stations; censored logistic regression performed better than extended logistic regression. Liu & Xie (2014) compared the performance of the raw ensembles, BMA, and the logreg method, concluding that an advantage of BMA over logreg is the prediction of a full probability density function. Messner et al. (2014b) extended the logistic regression method to develop HXLR and demonstrated the advantage of using the ensemble spread effectively to improve forecasts. The performance criteria, based on the BS, showed good predictions by BMA and logreg compared with the raw ensembles. Hemri et al. (2016) postprocessed ensemble forecasts using logistic regression and OLR, with results showing a slight improvement from the OLR method. Williams et al. (2014) carried out a comparative study of different postprocessing methods and concluded that Bayesian model averaging (BMA) and non-homogeneous Gaussian regression perform similarly, while logistic regression performs less well. In the comparative study by Schmeits & Kok (2010), BMA and extended logistic regression were much more skilful than the raw EPS, but the difference in skill between the two postprocessing methods did not appear to be statistically significant for the Netherlands. To the authors' knowledge, a similar comparison of parametric postprocessing methods to find the most suitable one has not been conducted for Indian regions.

This paper addresses the challenge of identifying the most suitable postprocessing technique by assessing the performance of six parametric statistical postprocessing methods, i.e. cNLR, BMA, logreg, hlogreg, HXLR, and OLR, for the Vishwamitri River Basin using short-range precipitation forecasts from the NCMRWF EPS. Daily accumulated forecasts for 1-day and 3-day lead times retrieved from TIGGE were used for model fitting, prediction, and verification. The post-processed ensembles were verified with the Brier score (BS), the AUC of receiver operating characteristic (ROC) plots, reliability plots, and Brier decomposition, with the objective of comparing the methods and finding the best-fit model for the calibration of ensemble forecasts for the Vishwamitri River Basin.

Study area

The Vishwamitri River Basin is located between 73°0′0″E and 73°30′0″E longitude and 22°0′0″N and 22°45′0″N latitude. Hilly topography with elevations of 149–480 m above sea level is found in the northeastern part of the study area, while most of the basin has flat terrain. The Vishwamitri river originates at Pavagadh, 43 km northeast of Vadodara. The river's flow length is roughly 71 km, of which about 59 km flows through the Vadodara district. The river is seasonal but has a history of flooding almost every year during the monsoon. It flows through the heart of Vadodara City and, owing to insufficient stormwater drainage systems, rapid urbanisation, and frequent construction in the city (Sarthak et al. 2015), flooding arises nearly every monsoon. Vadodara City suffered major flood events in 2005, 2008, 2014, and 2019, which resulted in eight deaths and the evacuation of more than 5,000 people. The total rainfall recorded in the verification period considered is 932.7 mm, which is higher than the average annual rainfall of Vadodara (868 mm).

The high-resolution 0.25° × 0.25° gridded rainfall data are prepared by the India Meteorological Department (IMD) from quality-controlled records of more than 6,000 rain gauges across India (Roy Bhowmik et al. 2007; Deshpande et al. 2021; Seenu & Jayakumar 2021). The IMD daily gridded data at this resolution are prepared from data for the period 1901–2010 held at the National Data Centre, IMD, Pune, using the Shepard interpolation technique. The observed rainfall for the study area is derived from these high-resolution IMD gridded data for the years 2007–2021.

The NCMRWF ensemble precipitation data are available from August 2017 onwards and are downloaded from TIGGE (ECMWF | TIGGE Data Retrieval) at a resolution of 0.18° × 0.12° with 11 ensemble forecasts and one control forecast. The initial perturbation strategy used for the generation of the ensembles is the Ensemble Transform Kalman Filter (ETKF), with scaling of the perturbations using radiosonde and Advanced TIROS Operational Vertical Sounder (ATOVS) observations (Bowler et al. 2008). More details of the NCMRWF dataset can be found at Models – TIGGE – ECMWF Confluence Wiki. The NCMRWF dataset for the short-range forecasts (1- to 3-day lead times) is used for the evaluation at the grids in the Vishwamitri basin. The total ensemble data retrieved, downloaded, and extracted amount to 767 (days) × 11 (ensemble members) × 3 (lead days) × 5 (grids) = 126,555 values. The data are divided into two sets: a training set covering 1 August 2017 to 24 July 2021 and a verification period from 25 July 2021 to 30 September 2021, over which the calibrated predictions are verified. Data before August 2017 are not available on the TIGGE website for NCMRWF. Before calculating the ensemble mean and standard deviation, all precipitation values are square-root transformed (Wilks 2018): weather variables such as rainfall and wind speed are bounded below by zero and hence non-Gaussian (Messner 2018), and precipitation generally contains a large number of zero values. Since the majority of the Vishwamitri River Basin has flat terrain, the IMD data are regridded to the spatial resolution of the NCMRWF dataset using the bilinear interpolation technique. The regridding is carried out with the Climate Data Operator (CDO) by supplying the target projection and grid specification file (i.e. of NCMRWF), the IMD data in netCDF format, and an empty output netCDF file. More details about CDO can be found at https://code.mpimet.mpg.de/projects/cdo/. The grids of NCMRWF falling in the Vishwamitri River Basin are highlighted in Figure 1.
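As an illustration of the regridding step, the sketch below shows how the bilinear remapping could be invoked from R via CDO. The file names and the grid description file are assumptions for illustration, not a record of the exact files used in the study.

```r
# Hedged sketch: bilinear regridding of the IMD netCDF rainfall to the NCMRWF
# grid with CDO called from R. "ncmrwf_grid.txt" is an assumed CDO grid
# description of the target (NCMRWF) grid; all file names are placeholders.
system("cdo remapbil,ncmrwf_grid.txt imd_rain_0.25deg.nc imd_rain_ncmrwf.nc")
```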
Figure 1. Index map of the Vishwamitri River basin showing its location and the NCMRWF grids G1, G2, G3, G4, and G5 considered and falling in the study area.

The postprocessing for the improvement of the ensemble precipitation datasets is carried out in three phases. The first phase, data preparation, comprises retrieval of the NCMRWF dataset from the TIGGE portal in GRIB format and extraction of the data for the grids falling in the study region using the shapefile of the Vishwamitri River basin. The extraction is carried out in RStudio using the raster and rgdal packages. The grids falling in the region are G1 (73.14, 22.18), G2 (73.14, 22.30), G3 (73.32, 22.30), G4 (73.32, 22.42), and G5 (73.5, 22.42).
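A minimal sketch of this extraction step is given below. The GRIB and shapefile names, and the assumption that GDAL can read the TIGGE GRIB file directly, are illustrative rather than a record of the exact workflow.

```r
library(raster)  # raster::brick reads GRIB layers via rgdal/GDAL
library(rgdal)

# Assumed file names: one TIGGE GRIB file (one layer per member/lead time)
# and the Vishwamitri basin shapefile.
ens   <- brick("ncmrwf_tigge.grib")
basin <- readOGR("vishwamitri_basin.shp")

# Crop to the basin extent and extract values at the five NCMRWF grid points
ens_basin <- crop(ens, basin)
grids <- cbind(lon = c(73.14, 73.14, 73.32, 73.32, 73.50),
               lat = c(22.18, 22.30, 22.30, 22.42, 22.42))
vals  <- extract(ens_basin, grids)   # matrix: grid point x layer
```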

Excel sheets of the ensemble datasets for 24-h accumulated rainfall at 1-day and 3-day lead times, together with the corresponding regridded IMD observed rainfall, were prepared for all grids. The first column contains the date, followed by the regridded IMD observation column and the corresponding 11 ensemble members. In Phase 2, these files are the input for model fitting and prediction with the six parametric postprocessing methods, in order to test and find the most suitable postprocessing method for the Vishwamitri River basin using R (Messner 2018). The 24-h accumulated precipitation at the five grids for both 1-day and 3-day lead times is calibrated separately with each postprocessing method. Prior to postprocessing, the dataset is divided into training and testing periods, and the models are fitted over the defined training period. For cNLR, the crch package is used, employing a continuous distribution for the response with separate linear predictors for the ensemble mean (location model) and spread (scale model); further details of crch may be found at https://CRAN.R-project.org/package=crch. The BMA model is fitted with the fitBMA function of the ensembleBMA package, and the precipitation data are cube-root transformed while fitting this model. The logistic regression model, a member of the generalised linear model family, is fitted with the base R glm function with a binomial family, using the ensemble mean as the only predictor. The hlogreg model is fitted with the ensemble mean and standard deviation as predictors using the hetglm function of the glmx package. The HXLR model is developed with the hxlr function from the crch package; in HXLR, the ensemble spread is used directly to predict the dispersion of the predictive distribution. The clm function of the ordinal package is used to fit the OLR model.
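The model-fitting sketches in the following subsections assume a data frame prepared roughly as below. The sheet name and column names (date, imd_obs, m1–m11) are assumptions; only the layout (date, observation, 11 members) follows the description above.

```r
library(readxl)

# Read one grid/lead-time sheet: date | regridded IMD observation | 11 members
dat <- read_excel("G1_leadday1.xlsx")          # assumed file name
members <- as.matrix(dat[, paste0("m", 1:11)]) # assumed member column names

# Square-root transform before computing ensemble mean and spread (Wilks 2018)
dat$sqrt_obs <- sqrt(dat$imd_obs)
dat$ensmean  <- rowMeans(sqrt(members))
dat$enssd    <- apply(sqrt(members), 1, sd)

# Training and verification split used in this study
train <- subset(dat, as.Date(date) <  as.Date("2021-07-25"))
test  <- subset(dat, as.Date(date) >= as.Date("2021-07-25"))
```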

The parametric univariate postprocessing methods of the ensemble precipitation forecasts employed in this paper are discussed in detail.

Censored non-homogeneous logistic regression

The concept of censoring can be understood as follows: a certain minimum precipitation amount is chosen to define the occurrence of a rain event, and any precipitation falling below this minimum is recorded only as being at or below that minimum, so the data are censored at the minimum. If the model is censored at zero, any probability mass for rainfall below zero is assigned to zero rainfall.

A power transformation with exponent 1/p is applied to the observations and the ensemble predictions to reduce positive skewness (Box & Cox 1964); in this study, p is taken as 2, i.e. a square-root transformation. The zero left-censored non-homogeneous logistic regression is symbolised as cNLR, and left censoring at zero is set up in the code with the argument left = 0 (Messner 2018). The cNLR assumption for the transformed precipitation y can be stated as
(1) $\sqrt{y} \sim \mathcal{L}_{0}(\mu, \varsigma)$
where $\mathcal{L}_{0}$ denotes a logistic distribution left-censored at zero. The latent variable $y^{*}$ underlying the censored logistic distribution is understood as
(2) $y^{*} \sim \mathcal{L}(\mu, \varsigma)$
(3) $\sqrt{y} = \max(y^{*}, 0)$
where $\mathcal{L}$ is the logistic distribution, whose defining parameters are the location and the scale: the mean $\mu$ describes the location, and the width of the distribution is defined by the scale $\varsigma$ (Stauffer et al. 2017). The probabilities on the transformed scale are therefore quantified using Equation (4) as
(4) $P(\sqrt{y} \le q) = \Lambda\!\left(\dfrac{q - \mu}{\varsigma}\right)$
where $\Lambda$ is the cumulative distribution function of the standard logistic distribution and q is a threshold on the transformed scale. To use the cNLR approach for precipitation forecasts, allowing for the possibility that all ensemble members forecast zero precipitation, the logistic distribution parameters are linked to the ensemble statistics as
(5) $\mu = a + b\,\overline{\sqrt{x}}$
(6) $\log(\varsigma) = c + d\, s_{\sqrt{x}}$
where $\overline{\sqrt{x}}$ and $s_{\sqrt{x}}$ denote the mean and standard deviation of the square-root transformed ensemble members; when all ensemble members are dry (zero), the location reduces to a and the scale to exp(c). The historical observations and the ensemble forecasts available at every location in the study area are modelled, and grid-wise ensemble postprocessing is carried out by fitting the cNLR model at each observation grid using code written in R (Messner 2018).
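A minimal cNLR sketch with the crch package is shown below, assuming the train/test data frames from the data-preparation sketch; the 1 mm rain-event threshold follows the verification section later in the paper.

```r
library(crch)

# Censored (left = 0) logistic regression: location ~ ensemble mean,
# scale ~ ensemble spread, both on the square-root scale.
fit_cnlr <- crch(sqrt_obs ~ ensmean | enssd, data = train,
                 dist = "logistic", left = 0)

loc <- predict(fit_cnlr, newdata = test, type = "location")
scl <- predict(fit_cnlr, newdata = test, type = "scale")

# Probability of a rain event (>= 1 mm), cf. Equation (4)
p_cnlr <- 1 - plogis((sqrt(1) - loc) / scl)
```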

Bayesian model averaging

Sloughter et al. (2007) envisioned a BMA approach for quantitative precipitation forecasting in which, rather than Gaussian distributions, the component distributions are mixtures of gamma distributions and point masses at zero. The BMA predictive distribution can be articulated as
(7) $p(y_t \mid x_{t,1}, \ldots, x_{t,m}) = \sum_{i=1}^{m} w_i\, h_i(y_t \mid x_{t,i})$
where m is the total number of ensemble members, $w_i$ is the positive weight attached to the component probability density $h_i$ associated with the ith ensemble member $x_{t,i}$, and the weights sum to 1. The members are treated as exchangeable because they are generated from the same distribution.
Precipitation distributions are significantly more challenging to model because they typically comprise a discrete probability of exactly zero and a continuous probability density for non-zero precipitation (Wilks 2018). For such distributions, Sloughter et al. (2007) defined the probability of exactly zero rainfall with the logistic regression in Equation (8) and a gamma distribution for the non-zero precipitation, giving the mixed discrete-continuous component distributions in Equation (9):
(8) $\operatorname{logit} P(y_t = 0 \mid x_{t,i}) = a_{0i} + a_{1i}\, x_{t,i}^{1/3} + a_{2i}\, \delta_{t,i}$
(9) $h_i(y_t \mid x_{t,i}) = P(y_t = 0 \mid x_{t,i})\, \mathbb{1}[y_t = 0] + P(y_t > 0 \mid x_{t,i})\, g_i(y_t \mid x_{t,i})\, \mathbb{1}[y_t > 0]$
where $\delta_{t,i}$ equals 1 if the ith member forecasts zero precipitation and 0 otherwise, $\mathbb{1}[\cdot]$ is the indicator function, and $g_i$ is a gamma density fitted to the cube-root transformed non-zero precipitation.
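The sketch below indicates how the BMA mixture of Equations (7)–(9) could be fitted with the ensembleBMA package, again assuming the train data frame and member column names m1–m11 from the data-preparation sketch; the gamma0 model is the package's implementation of the Sloughter et al. (2007) formulation.

```r
library(ensembleBMA)

member_cols <- paste0("m", 1:11)          # assumed member column names
ed_train <- ensembleData(forecasts    = train[, member_cols],
                         observations = train$imd_obs,
                         dates        = format(as.Date(train$date), "%Y%m%d"),
                         forecastHour = 24,
                         initializationTime = "00")

# Mixture of a point mass at zero and gamma components (Sloughter et al. 2007)
fit_bma <- fitBMA(ed_train, model = "gamma0")
```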

Logistic regression (logreg)

Although logistic regression has a relatively long track record within the framework of generalised linear modelling (Nelder & Wedderburn 1984), Hamill et al. (2004) were the first to recommend it for postprocessing ensemble forecasts. Maximum likelihood is used to estimate the regression parameters. In an implementation in which only the two climatological terciles were used as prediction thresholds q, Hamill et al. (2004) employed the ensemble mean $\overline{x}$ as the sole predictor:
(10) $P(y \le q) = \dfrac{\exp(b_0 + b_1\,\overline{x})}{1 + \exp(b_0 + b_1\,\overline{x})}$

While fitting the logistic regression parameters, the training-data predictands are binary: one if the statement on the left-hand side of Equation (10) is true and zero if it is false.
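A sketch of the logreg fit corresponding to Equation (10) is given below, assuming the train/test data frames from the data-preparation sketch and using the 1 mm rain event as the threshold.

```r
# Binary logistic regression with the ensemble mean as sole predictor (glm is
# part of base R's stats package). The predictand is 1 if 1 mm or more fell.
train$event <- as.numeric(train$imd_obs >= 1)
fit_logreg  <- glm(event ~ ensmean, data = train, family = binomial)

# Predicted probability of a rain event in the verification period
p_logreg <- predict(fit_logreg, newdata = test, type = "response")
```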

Heteroscedastic logistic regression (hlogreg)

In the hlogreg method, the ensemble mean is taken as the regressor for the location and the ensemble standard deviation (or variance) as the regressor for the scale (Gneiting et al. 2005). A common slope parameter b in Equation (11) forces the regressions for all quantiles to be parallel, whereas individually fitted logistic regressions can cross, in some cases producing cumulative probabilities for smaller precipitation amounts that are larger than those for larger amounts. The hlogreg model is expressed on the log-odds scale as
(11) $\log\dfrac{P(y \le q)}{1 - P(y \le q)} = \dfrac{a + b\,\overline{x}}{\exp(c + d\, s)}$
where $\overline{x}$ and s are the ensemble mean and standard deviation.
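A minimal hlogreg sketch with glmx::hetglm follows, with the ensemble mean in the location part and the ensemble spread in the scale part; the data frames and the 1 mm threshold are the same assumptions as in the earlier sketches.

```r
library(glmx)

train$event <- as.numeric(train$imd_obs >= 1)
fit_hlog <- hetglm(event ~ ensmean | enssd, data = train,
                   family = binomial(link = "logit"))

# Predicted probability of a rain event in the verification period
p_hlog <- predict(fit_hlog, newdata = test, type = "response")
```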

Heteroscedastic extended logistic regression

HXLR (Messner et al. 2014b) extends the extended logistic regression introduced by Wilks (2009): it fits a single equation across all threshold values, with the transformed thresholds as an additional predictor, identical regression coefficients across thresholds, and the requirement that the regression intercepts be an increasing function of the target quantile,
(12) $P(y \le q) = \Lambda\!\left(\dfrac{g(q) - (a + b\,\overline{x})}{\exp(c + d\, s)}\right)$
where $g(q)$ is a monotonically increasing transformation of the threshold q (often its square root) and $\Lambda$ is the standard logistic cumulative distribution function. HXLR offers comprehensive continuous predictive distributions (CPDs), averts negative probabilities, and reduces the number of regression coefficients. The intercepts of HXLR are a linear function of g(q). The advantage of HXLR is that it employs the ensemble spread as the scale parameter to fine-tune the dispersion of the CPD; the method is thus capable of sufficiently exploiting the ensemble spread.
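The HXLR fit of Equation (12) can be sketched with crch::hxlr as below; the set of thresholds is an assumption chosen for illustration, given on the square-root scale to match the transformation used here.

```r
library(crch)

thresholds <- sqrt(c(0, 1, 5, 10, 20))   # assumed thresholds (mm), sqrt scale
fit_hxlr <- hxlr(sqrt_obs ~ ensmean | enssd, data = train,
                 thresholds = thresholds)

# Cumulative probabilities P(y <= threshold) for the verification period
p_cum_hxlr <- predict(fit_hxlr, newdata = test, type = "cumprob")
```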

Ordered logistic regression

OLR (Messner et al. 2014b; Wilks 2018) is another approach closely related to logreg that applies to a finite set of thresholds and prevents the regression lines from intersecting by requiring them to be parallel. OLR is very similar to extended logistic regression but avoids assuming a continuous distribution. The (homoscedastic) OLR forecasts are then formulated as
(13) $P(y \le q_j) = \Lambda(\theta_j - b\,\overline{x}), \qquad \theta_1 < \theta_2 < \cdots < \theta_J$
where $q_j$ are the chosen thresholds, $\theta_j$ the ordered intercepts, and $\Lambda$ the standard logistic cumulative distribution function.

OLR also provides coherent forecasts of category probabilities (Messner 2018). In contrast to HXLR, its intercepts are estimated freely (subject only to their ordering), which requires more coefficients. It further differs from HXLR in that no CPD is presumed or specified by the model; only probabilities of exceeding the chosen thresholds can be derived from OLR models, which do not facilitate density or quantile predictions.
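A corresponding OLR sketch with ordinal::clm is shown below; the category boundaries are assumptions, and the observed rainfall is binned into ordered categories before fitting, as Equation (13) requires.

```r
library(ordinal)

# Bin the (sqrt-transformed) observations into ordered categories
breaks         <- c(-Inf, sqrt(c(1, 5, 10)), Inf)   # assumed boundaries
train$rain_cat <- cut(train$sqrt_obs, breaks, ordered_result = TRUE)

fit_olr <- clm(rain_cat ~ ensmean, data = train)

# Cumulative category probabilities for the verification period
p_cum_olr <- predict(fit_olr, newdata = test, type = "cum.prob")
```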

Once the models are fitted for cNLR, BMA, logreg, hlogreg, HXLR, and OLR, the predictions are derived in R. The location and scale parameters of cNLR are derived for the test dataset, and different quantiles can be obtained with the predict function. The BMA predictions are generated with the cdf function of the ensembleBMA package, which computes the cumulative distribution function; the modelling functions estimate the model parameters from the training data via the EM algorithm for mixtures of gamma distributions with a point mass at 0 (appropriate for quantitative precipitation) applied to the cube-root transformed ensemble forecasts and observations. For binary or heteroscedastic logistic regression, the probabilities are computed with the predict function and type set to ‘response’; the type argument controls whether the location (‘response’/‘location’), scale (‘scale’), or quantiles (‘quantile’) are predicted. The sapply function can be used to loop the logistic regression prediction over several thresholds. For HXLR and OLR, the predictions are obtained with predict using type ‘cumprob’ and ‘cum.prob’, respectively; the crch package is used for the HXLR predictions and the ordinal package for the OLR models.
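For completeness, the sketch below shows the cdf call for the BMA fit from the earlier sketch; the verification ensembleData object mirrors the training one, and evaluating the CDF at 1 mm gives P(y ≤ 1 mm), whose complement is the rain-event probability.

```r
library(ensembleBMA)

ed_test <- ensembleData(forecasts    = test[, member_cols],
                        observations = test$imd_obs,
                        dates        = format(as.Date(test$date), "%Y%m%d"),
                        forecastHour = 24,
                        initializationTime = "00")

# Cumulative probability of at most 1 mm; its complement is P(rain event)
p_le1_bma <- cdf(fit_bma, ed_test, values = 1)
p_bma     <- 1 - p_le1_bma
```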

cNLR, BMA, and HXLR provide a full predictive distribution, whereas the other logistic regression methods only predict threshold probabilities.

The assessment of the degree of similarity between observed rainfall and forecasts is known as forecast verification. In this research, binary verification is carried out on the occurrence of rainfall, i.e. precipitation of 1 mm or more is considered a precipitation event. The calibrated forecasts are verified using the BS, the Brier decomposition components (reliability, resolution, and uncertainty), reliability plots for the probability of occurrence of rainfall, and the AUC of ROC plots; the detailed methodology followed in the paper is shown in Figure 2.
Figure 2. Methodology for postprocessing of short-range ensemble QPF of NCMRWF EPS.

Model verification metrics

Furthermore, in Phase 3 the verification scores were estimated for the univariate parametric statistically post-processed ensembles using the BS, the AUC of ROC plots, reliability diagrams, and Brier decomposition (metrics used for probabilistic forecasts). These scores enable a quantitative and comprehensive assessment of forecast performance. For this purpose, each day is classified as a rain event or a no-rain event based on the amount of rainfall observed and forecasted: when the precipitation recorded is 1 mm per day or more, it is designated as a rain event, otherwise as a no-rain event. The probabilistic forecasts are verified against the observed data by converting the observations to a binary (dichotomous) series based on the occurrence or non-occurrence of the event. The BS is fundamentally the mean squared error of the probabilistic forecast. The BS (Wilks & Hamill 2007), representing the errors of these binary events at a given threshold, is computed as
(14) $BS = \dfrac{1}{N}\sum_{t=1}^{N}\left(p_t - o_t\right)^2$
where N is the total number of forecast samples, $p_t$ is the predicted probability of the forecast event, and $o_t$ represents the occurrence of the observed event (0 for non-occurrence and 1 for occurrence) (Siddique et al. 2015). As the BS is an error score, the lower the BS, the better the forecast. The BS ranges from 0 to 1: values approaching 1 represent the worst forecasts, and a value of 0 indicates a perfect forecast.
The BS is decomposed into reliability, resolution, and uncertainty, called the Brier decomposition: the BS equals reliability minus resolution plus uncertainty. In the probabilistic verification procedure, ‘reliability’ is defined as the agreement between forecast probability and mean observed frequency; ‘resolution’ is defined as the capability of the forecast to resolve the set of sample events into subsets with characteristically different outcomes; ‘sharpness’ is defined as the tendency to forecast probabilities near 0 or 1, as opposed to values clustered around the mean; and ‘uncertainty’ describes the nature (in other words, the climatological frequency) of the specified event or category.
(15) $BS = \dfrac{1}{N}\sum_{k=1}^{K} n_k\left(p_k - \bar{o}_k\right)^2 \;-\; \dfrac{1}{N}\sum_{k=1}^{K} n_k\left(\bar{o}_k - \bar{o}\right)^2 \;+\; \bar{o}\left(1 - \bar{o}\right)$
where the forecasts are grouped into K probability bins, $n_k$ is the number of forecasts in bin k, $p_k$ the forecast probability of bin k, $\bar{o}_k$ the observed frequency in bin k, and $\bar{o}$ the overall climatological frequency of the event.

The first term in Equation (15) is reliability, the middle term is resolution, and the last term is uncertainty. Higher reliability contributes to a higher BS and thus poorer forecasts. The ROC assesses the performance of a forecast for the occurrence of a distinct event with reference to a threshold value (Regonda et al. 2013; Fan et al. 2014). For a probabilistic forecast, the ROC curve addresses the quality of a decision made on the basis of the forecast probability. The area under the ROC plot represents the forecast skill: a higher AUC signifies better skill, while the diagonal line (AUC of 0.5) represents no skill. Several verification tools were used for the post-processed precipitation outputs: reliability plots were generated with the ReliabilityDiagram function of the SpecsVerification package; the ROC plots and the computation of the area under the curve were carried out with the pROC package (an alternative is the roc.plot function of the verification package); the BS was estimated with the brier function of the verification package; and its decomposition was carried out with the BrierDecomp function of SpecsVerification in R.
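The verification calls described above can be sketched as follows for a single vector of event probabilities; the probability vector p_cnlr from the cNLR sketch and the 1 mm event definition are assumptions carried over from the earlier sketches.

```r
library(SpecsVerification)
library(pROC)
library(verification)

obs01 <- as.numeric(test$imd_obs >= 1)   # rain event: 1 mm or more
p     <- p_cnlr                          # e.g. the cNLR event probabilities

bs  <- brier(obs = obs01, pred = p)$bs   # Brier score (verification package)
dec <- BrierDecomp(p, obs01)             # reliability, resolution, uncertainty
ReliabilityDiagram(p, obs01, plot = TRUE)
auc_val <- auc(roc(obs01, p))            # area under the ROC curve
```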

Precipitation plays an important role in operational flood forecasting systems. The evaluation of NWP from NCMRWF is conducted for western India, where the major rainfall occurs during the monsoon period (June–September). The Vishwamitri basin region is mostly affected by the southwest monsoon, which brings around 85–90% of the annual rainfall between June and September (Karuna Sagar et al. 2017). Owing to the large number of zero-precipitation days in the non-monsoon period, the calibrated forecasts are validated over the period 25 July to 30 September 2021.

The quality of the forecasts from the postprocessing methods used in this paper is compared on the basis of verification metrics, namely the BS, reliability, resolution, the AUC of ROC plots, and reliability diagrams.

The spread-skill relationship of the raw ensemble for different intervals of the ensemble standard deviation is shown in Figure 3(a)–3(e) for the 1-day lead time and Figure 3(f)–3(j) for the 3-day lead time for grids G1, G2, G3, G4, and G5, respectively. The spread-skill relationship provides a measure of the dispersion of the ensemble members.
Figure 3. Boxplots representing the spread skill of absolute error for various intervals of ensemble standard deviation for 1-day and 3-day lead times.
The higher dispersion of the absolute error and ensemble standard deviation signifies higher uncertainty in the forecasts. For the 1-day forecasts, the upper whiskers of the absolute error vary between 4 and 6 for ensemble standard deviations of 1–4.5, whereas for the 3-day lead time the ensemble standard deviation is relatively larger, up to the range of 1–5.55, with absolute errors of 6–8.5 and higher. It can therefore be inferred that the absolute error and ensemble standard deviation increase with the lead time of the short-range forecasts (Pappenberger et al. 2009); with the increase in lead time, a relative increase of up to 50% in the ensemble standard deviation is noted. To reduce the absolute error, postprocessing with the different methods described in this paper is applied at each grid to account for the heterogeneity of the rainfall distribution over the Vishwamitri River Basin. Figure 4 shows the average BS of the raw ensemble from the NCMRWF EPS together with the post-processed ensemble forecasts using cNLR, BMA, logreg, hlogreg, HXLR, and OLR. The average BS at G1 (Figure 4(a)), G2 (Figure 4(b)), G3 (Figure 4(c)), G4 (Figure 4(d)), and G5 (Figure 4(e)) for 1-day and 3-day forecasts clearly shows the need for postprocessing of the ensemble precipitation from the NCMRWF EPS. The G5 1-day, G1 3-day, G2 3-day, and G3 3-day raw forecasts were improved by 60–63%, while the post-processed ensembles at G1 1-day, G4 3-day, and G5 3-day showed an improvement of 65–71% over the raw forecasts, demonstrating the significant benefit of postprocessing the raw precipitation forecasts. The average BS results confirm the added value of postprocessing the raw ensemble precipitation forecasts (Liu & Xie 2014). Hemri et al. (2016) found that the OLR model slightly outperformed the logreg model; similar results are found at grids G4 1-day, G1 3-day, G2 3-day, and G3 3-day.
Figure 4. Average BS at grids (a) G1, (b) G2, (c) G3, (d) G4, and (e) G5 for 1-day and 3-day forecasts.
At all the grid points, the average BS of the raw ensembles shows a decrease in forecast skill with increasing lead time. The improvements in skill at grids G2, G3, G4, and G5 from postprocessing are greater at the longer (3-day) lead time (Gomez et al. 2019). The mean BS of BMA is higher than that of logreg, indicating the slightly poorer performance of the BMA method at all grids for the 1-day and 3-day verification, as also noted by Liu & Xie (2014). The BS of the calibrated forecasts from the different approaches is represented using boxplots, shown in Figure 5 for grids G1–G5, to allow a thorough comparison of the six postprocessing methods applied to the short-range forecasts. For each boxplot, the bottom line of the box represents the first quartile (25th percentile, Q1), the line in the middle of the box the median (50th percentile, Q2), and the top line of the box the third quartile (75th percentile, Q3) of the Brier scores; the upper and lower whiskers signify the ‘maximum’ (Q3 + 1.5 × (Q3 − Q1)) and the ‘minimum’ (Q1 − 1.5 × (Q3 − Q1)) of the BS, respectively, and black circles outside the whiskers represent outliers.
Figure 5. Boxplot representation for BS comparison of six postprocessing methods computed at G1, G2, G3, G4, and G5 for 1-day and 3-day lead times.

For the 1-day lead time at grid G1, the BS lies in the range 0.10–0.22; Figure 5(a) shows a clear uplift of the BMA and hlogreg boxplots compared with the other postprocessing methods, indicating slightly poorer performance, while cNLR and logistic regression performed better than the rest. Figure 5(f) shows the better performance of hlogreg and cNLR for the G1 3-day forecasts, with BMA and logreg performing worse than the other approaches. At G2 (Figure 5(b) and 5(h)), logistic regression and cNLR performed comparatively better than the other postprocessing approaches, while BMA and hlogreg performed slightly poorly at both lead times. The BMA method also showed slightly poorer results than the others for G3 1-day (Figure 5(c)), G3 3-day (Figure 5(h)), and G4 1-day (Figure 5(d)). For G5 (Figure 5(e) and 5(j)), the BMA and OLR methods performed poorly for the 1-day and 3-day lead times, whereas the cNLR method performed best for both lead times, with the 25th and 75th percentiles ranging from 0.14 to 0.17 and the median near 0.16. Overall, for the short-range forecasts, the cNLR method outperforms the other postprocessing methods applied at the grids falling in the study area. The cNLR method uses each individual predictand value in the training dataset instead of selected category probabilities; Messner et al. (2014a) likewise found that cNLR provides good category probability forecasts while requiring fewer coefficients and additionally specifying full predictive distributions. According to the BS, hlogreg gave mixed performances at the different grids, so hlogreg is not recommended for the Vishwamitri River Basin. Overall, considering the performance determined by the BS, cNLR shows the best performance among the parametric methods compared.

Reliability, resolution, and uncertainty are the components of the BS (the relation is discussed in the data and methods section), and reliability is a measure of the conditional bias in the forecast. The reliability obtained at the different grids of the Vishwamitri River Basin with the different postprocessing methods for the short-range lead times is shown in Figure 6. For G1 (Figure 6(a)), the cNLR and hlogreg methods show the lowest reliability of 0.181 for the 1-day forecasts, and for the 3-day forecasts at G1 cNLR shows the best reliability score among the postprocessing methods. At G2 (Figure 6(b)), cNLR shows reliabilities of 0.019 and 0.013. The higher reliability of the BMA method at G1 (Figure 6(a)), G2 (Figure 6(b)), and G5 (Figure 6(e)) for both 1-day and 3-day lead times is the reason for the poor BS noted in the BS plots. The uncertainty remains the same, 0.246, at all five grids. G4 (Figure 6(d)) shows the poor performance of BMA for the 3-day calibrated forecasts relative to the other methods. At the majority of the grids, logreg shows higher reliability, resulting in poorer performance compared with HXLR. The cNLR, hlogreg, and HXLR methods use the ensemble spread along with the ensemble mean as an additional predictor to correct the dispersion, which helps improve (lower) the reliability of the post-processed forecasts relative to the logreg method (Messner 2018). Overall, despite the cube-root transformation in the BMA postprocessing method, it shows poor (higher in the graph) reliability and resolution of the post-processed forecasts; the cNLR method instead results in better calibration with the square-root transformation of the precipitation forecasts. Both methods have the advantage of estimating the complete cumulative distribution function.
Figure 6. Representation of Brier decomposition into reliability, resolution, and uncertainty for various parametric postprocessing methods considered for 1-day and 3-day lead times.
Figure 7 shows the reliability diagrams in relation to the BS and its decomposition. The x-axis represents the forecast probability and the y-axis the observed relative frequency; the solid diagonal line represents perfect reliability. The reliability of a forecast is determined by how closely the plotted curve follows the diagonal: on the diagonal, the forecast probability and observed relative frequency are equal, representing perfect skill. If the curve lies below the diagonal, the forecast probabilities are too high (over-forecasting); in most of the reliability plots the curve lies above the diagonal, indicating that the forecast probabilities are too low (under-forecasting). As the curve flattens, the resolution decreases. In Figure 7(a), for G1 at 1 day, cNLR and hlogreg lie nearer to the diagonal, indicating better performance than the other approaches, while logreg, HXLR, and OLR deviate more from the diagonal on the upper side, indicating under-forecasting by these methods. The curve closest to the diagonal identifies the best-fit postprocessing method for the forecast. The histogram at the bottom right corner shows the forecast frequency in each probability bin and hence the sharpness of the forecast. The performance of logreg, hlogreg, and OLR for the various rainfall events is limited because they can only forecast the likelihood of crossing a few discrete thresholds (Wilks 2018). In contrast, the advantage of cNLR, BMA, and HXLR is that they provide the entire predictive distribution and can therefore give probabilities of exceeding arbitrary precipitation amounts. The uncertainty information contained in the ensemble spread can be utilised effectively with HXLR, in contrast to the other logistic methods (Messner et al. 2014b). Based on these results, if the primary interest is the predictive probability density function, cNLR or HXLR should be preferred.
Figure 7. Reliability plots for the probability of occurrence of rainfall (i.e. Prob(r > 0)) for various postprocessing methods used at different grids.
Specificity and sensitivity are defined with respect to the occurrence of precipitation as the event of interest. Specificity, the true negative rate, is the probability of a negative forecast conditioned on the event truly not occurring, i.e. correctly predicted non-events; sensitivity, also known as the hit rate, is the probability of a positive forecast conditioned on the event truly occurring, i.e. correctly predicted events. Specificity is plotted on the horizontal axis and sensitivity on the vertical axis of the ROC plots. When the sum of sensitivity and specificity equals one, i.e. along the diagonal line (AUC of 0.5) of the ROC plot, the forecasts exhibit no skill. The area under the receiver operating characteristic (ROC) curve, denoted AUC, is shown grid-wise in Figure 8 for G1, G2, G3, G4, and G5 at 1-day and 3-day lead times. A higher AUC indicates better performance of the postprocessing method in calibration, and an AUC of 1 indicates a perfect forecast. The AUC for G1 at 1 day (Figure 8(a)) shows good performance by all postprocessing methods, with AUC greater than 0.9 except for hlogreg, which gave an AUC of 0.896. The cNLR method for G1 at 3 days shows the highest AUC of 0.839 (Figure 8(f)). Overall, at G1 the calibration of the 1-day forecast is better than that of the 3-day lead time. At G2, for 1 day and 3 days, all the postprocessing methods show almost the same results, with AUC above 0.87 (Figure 8(b) and 8(g)). A larger area under the ROC curve corresponds to a higher hit rate together with fewer false alarms, which results in good forecast skill.
Figure 8. AUC of receiver operating characteristic plots for the probability of occurrence of rainfall (i.e. Prob(r > 0)) computed for various postprocessing methods used.

The AUC varies from 0.857 to 0.876 at G4 for the 1-day and 3-day lead times (Figure 8(d) and 8(i)); the cNLR method shows the best AUC of 0.867 and 0.876 at 1 day and 3 days, respectively. Referring to Figure 8(e) and 8(j), the AUC shows good performance by all postprocessing approaches, with the best AUC given by cNLR: 0.871 and 0.908 for 1 day and 3 days, respectively. In comparison with 1 day, the 3-day forecasts are better calibrated at G5. At grid G1, the 1-day forecasts show better skill (higher hit rate) than the 3-day forecasts. Excluding grid G1, all grids show better performance (higher AUC at 3 days) of all the postprocessing methods for the longer lead-time forecasts (Gomez et al. 2019).

Based on the parametric postprocessing methods used worldwide for ensemble forecasts, this paper compares six postprocessing methods using different verification metrics for probabilistic forecasts, namely the BS, reliability plots, and the AUC of ROC plots, grid-wise for grids G1, G2, G3, G4, and G5 falling in the Vishwamitri River Basin. The results can be summarised as follows:

  • The average BS shows an overall improvement of 50–70% from raw to post-processed forecasts, signifying the necessity of statistical postprocessing of the ensemble forecasts for better and more reliable calibrated forecasts.

  • For overall performance at G1, G2, G3, G4, and G5 for the short-range forecasts (shown here for 1 day and 3 days; for brevity, the 2-day results are not presented in the paper), the cNLR method performed comparatively better than the other five postprocessing methods adopted in this study.

  • The BS of BMA at all grids showed relatively poorer performance than the other methods. The other methods performed moderately well in calibration, with cNLR outperforming them at G1, G2, G3, G4, and G5 for 1 day and 3 days, and logreg giving a good performance alongside cNLR for the G5 1-day, G2 3-day, and G3 3-day forecasts.

  • The reliability is evidently better for the forecasts predicted by cNLR and logreg at G1, G2, G4, and G5 for the 1-day forecasts.

  • The AUC of the ROC plots shows well-calibrated forecasts by cNLR at the different grids falling in the Vishwamitri River basin for the short-range ensemble forecasts from the NCMRWF EPS.

  • The cNLR method slightly outperformed the others and provides a full continuous predictive density function for future forecasts, instead of only predicting the probability of exceedance of specified thresholds as in the case of logreg, hlogreg, and OLR.

The study explored a wide range of R packages for the evaluation of ensemble precipitation forecasts, such as ensembleBMA, ensemblepp, ensembleMOS, ordinal, and gamlss, and verification packages such as easyVerification, pROC, s2dverification, scoringRules, SpecsVerification, and verification. It would be interesting to extend the evaluation to multi-model or grand ensembles and to evaluate the performance of short- to medium-range forecasts in future studies. Furthermore, the best-calibrated datasets will be used for the development of a flood forecasting system for the Vishwamitri River Basin, which will be very beneficial for addressing floods and for early preparedness in the case of a flood in Vadodara City.

All relevant data are available from an online repository or repositories. NCMRWF data can be downloaded from https://apps.ecmwf.int/datasets/data/tigge/levtype%3Dsfc/type%3Dcf/. Gridded IMD rainfall data can be downloaded from https://imdpune.gov.in/cmpg/Griddata/Rainfall_25_Bin.html.

The authors declare there is no conflict.

Awol F. S., Coulibaly P. & Tsanis I. 2021 Identification of combined hydrological models and numerical weather predictions for enhanced flood forecasting in a semiurban watershed. Journal of Hydrologic Engineering 26 (1), 04020057. https://doi.org/10.1061/(asce)he.1943-5584.0002018.

Bao H. & Zhao L. 2012 Development and application of an atmospheric-hydrologic-hydraulic flood forecasting model driven by TIGGE ensemble forecasts. Acta Meteorologica Sinica 26 (1), 93–102. https://doi.org/10.1007/s13351-012-0109-0.

Bourgin F., Ramos M. H., Thirel G. & Andréassian V. 2014 Investigating the interactions between data assimilation and post-processing in hydrological ensemble forecasting. Journal of Hydrology 519 (PD), 2775–2784. https://doi.org/10.1016/j.jhydrol.2014.07.054.

Bowler N. E., Arribas A., Mylne K. R., Robertson K. B. & Beare S. E. 2008 The MOGREPS short-range ensemble prediction system. Quarterly Journal of the Royal Meteorological Society 134, 703–722.

Box G. E. & Cox D. R. 1964 An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological) 26 (2), 211–243.

Buizza R. 2009 Current Status and Future Developments of the ECMWF Ensemble Prediction System. pp. 29–30.

Buizza R. 2018 Ensemble forecasting and the need for calibration. In: Statistical Postprocessing of Ensemble Forecasts. Elsevier Inc. https://doi.org/10.1016/b978-0-12-812372-0.00002-9.

Demeritt D., Nobert Ś., Cloke H. & Pappenberg F. 2010 Challenges in communicating and using ensembles in operational flood forecasting. Meteorological Applications 17 (2), 209–222. https://doi.org/10.1002/met.194.

Deshpande M., Kanase R., Phani Murali Krishna R., Tirkey S., Mukhopadhyay P., Prasad V. S., Johny C. J., Durai V. R., Devi S. & Mohapatra M. 2021 Global ensemble forecast system (GEFS T1534) evaluation for tropical cyclone prediction over the north Indian ocean. Mausam 72 (1), 119–128.

Duan M., Ma J. & Wang P. 2012 Preliminary comparison of the CMA, ECMWF, NCEP, and JMA ensemble prediction systems. Acta Meteorologica Sinica 26 (1), 26–40. https://doi.org/10.1007/s13351-012-0103-6.

Fan F. M., Collischonn W., Meller A. & Botelho L. C. M. 2014 Ensemble streamflow forecasting experiments in a tropical basin: the São Francisco river case study. Journal of Hydrology 519, 2906–2919. https://doi.org/10.1016/j.jhydrol.2014.04.038.

Gebetsberger M., Messner J. W., Mayr G. J. & Zeileis A. 2017 Fine-tuning nonhomogeneous regression for probabilistic precipitation forecasts: unanimous predictions, heavy tails, and link functions. Monthly Weather Review 145 (11), 4693–4708. https://doi.org/10.1175/MWR-D-16-0388.1.

Gebetsberger M., Messner J. W., Mayr G. J. & Zeileis A. 2018 Estimation methods for nonhomogeneous regression models: minimum continuous ranked probability score versus maximum likelihood. Monthly Weather Review 146 (12), 4323–4338. https://doi.org/10.1175/MWR-D-17-0364.1.

Gebetsberger M., Stauffer R., Mayr G. J. & Zeileis A. 2019 Skewed logistic distribution for statistical temperature post-processing in mountainous areas. Advances in Statistical Climatology, Meteorology and Oceanography 5 (1), 87–100. https://doi.org/10.5194/ascmo-5-87-2019.

Gneiting T., Raftery A. E., Westveld A. H. & Goldman T. 2005 Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Monthly Weather Review 133 (5), 1098–1118. https://doi.org/10.1175/MWR2904.1.

Gomez M., Sharma S., Reed S. & Mejia A. 2019 Skill of ensemble flood inundation forecasts at short- to medium-range timescales. Journal of Hydrology 568, 207–220. https://doi.org/10.1016/j.jhydrol.2018.10.063.

Hamill T. M., Whitaker J. S. & Wei X. 2004 Ensemble reforecasting: improving medium-range forecast skill using retrospective forecasts. Monthly Weather Review 132, 1434–1447. https://doi.org/10.1175/1520-0493(2004)132<1434:ERIMFS>2.0.CO;2.

Hamill T. M., Hagedorn R. & Whitaker J. S. 2008 Probabilistic forecast calibration using ECMWF and GFS ensemble reforecasts. Part II: precipitation. Monthly Weather Review 136 (7), 2620–2632. https://doi.org/10.1175/2007MWR2411.1.

Han S. & Coulibaly P. 2019 Probabilistic flood forecasting using hydrologic uncertainty processor with ensemble weather forecasts. Journal of Hydrometeorology 20 (7), 1379–1398. https://doi.org/10.1175/JHM-D-18-0251.1.

Hemri S., Haiden T. & Pappenberger F. 2016 Discrete postprocessing of total cloud cover ensemble forecasts. Monthly Weather Review 144 (7), 2565–2577. https://doi.org/10.1175/MWR-D-15-0426.1.

Jingyue D. I., Fuyou T. & Zhi W. 2012 Probabilistic flood prediction in the Upper Huaihe Catchment. Acta Meteorologica Sinica 26, 62–71. https://doi.org/10.1007/s13351-012-0106-3.1.

Karuna Sagar S., Rajeevan M. & Vijaya Bhaskara Rao S. 2017 On increasing monsoon rainstorms over India. Natural Hazards 85 (3), 1743–1757. https://doi.org/10.1007/s11069-016-2662-9.

Liang Z., Wang D., Guo Y., Zhang Y. & Dai R. 2013 Application of Bayesian model averaging approach to multimodel ensemble hydrologic forecasting. Journal of Hydrologic Engineering 18 (11), 1426–1436. https://doi.org/10.1061/(asce)he.1943-5584.0000493.

Liu J. & Xie Z. 2014 BMA probabilistic quantitative precipitation forecasting over the Huaihe Basin using TIGGE multimodel ensemble forecasts. Monthly Weather Review 142 (4), 1542–1555. https://doi.org/10.1175/MWR-D-13-00031.1.

Liu L., Gao C., Xuan W. & Xu Y. P. 2017 Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China. Journal of Hydrology 554, 233–250. https://doi.org/10.1016/j.jhydrol.2017.08.032.

Liu L., Gao C., Zhu Q. & Xu Y. P. 2019 Evaluation of TIGGE daily accumulated precipitation forecasts over the Qu River Basin, China. Journal of Meteorological Research 33 (4), 747–764. https://doi.org/10.1007/s13351-019-8096-z.

Medina H., Tian D., Marin F. R. & Chirico G. B. 2019 Comparing GEFS, ECMWF, and postprocessing methods for ensemble precipitation forecasts over Brazil. Journal of Hydrometeorology 20 (4), 773–790. https://doi.org/10.1175/JHM-D-18-0125.1.

Messner J. W. 2018 Ensemble postprocessing with R. In: Statistical Postprocessing of Ensemble Forecasts. Elsevier Inc. https://doi.org/10.1016/B978-0-12-812372-0.00011-X.

Messner J. W., Mayr G. J., Wilks D. S. & Zeileis A. 2014a Extending extended logistic regression: extended versus separate versus ordered versus censored. Monthly Weather Review 142 (8), 3003–3014. https://doi.org/10.1175/MWR-D-13-00355.1.

Messner J. W., Mayr G. J., Zeileis A. & Wilks D. S. 2014b Heteroscedastic extended logistic regression for postprocessing of ensemble guidance. Monthly Weather Review 142 (1), 448–456. https://doi.org/10.1175/MWR-D-13-00271.1.

Messner J. W., Mayr G. J. & Zeileis A. 2016 Heteroscedastic censored and truncated regression with crch. R Journal 8 (1), 173–181. https://doi.org/10.32614/rj-2016-012.

Nelder J. A. & Wedderburn R. W. 1984 Generalized linear models. European Journal of Operational Research 16 (3), 285–292. https://doi.org/10.1016/0377-2217(84)90282-0.

Nobert S., Demeritt D. & Cloke H. 2010 Informing operational flood management with ensemble predictions: lessons from Sweden. Journal of Flood Risk Management 3 (1), 72–79. https://doi.org/10.1111/j.1753-318X.2009.01056.x.

Olsson J. & Lindström G. 2008 Evaluation and calibration of operational hydrological ensemble forecasts in Sweden. Journal of Hydrology 350 (1–2), 14–24. https://doi.org/10.1016/j.jhydrol.2007.11.010.

Pappenberger F., Ghelli A., Buizza R. & Bódis K. 2009 The skill of probabilistic precipitation forecasts under observational uncertainties within the generalized likelihood uncertainty estimation framework for hydrological applications. Journal of Hydrometeorology 10 (3), 807–819. https://doi.org/10.1175/2008JHM956.1.

Raftery A. E., Gneiting T., Balabdaoui F. & Polakowski M. 2005 Using Bayesian model averaging to calibrate forecast ensembles. Monthly Weather Review 133 (5), 1155–1174. https://doi.org/10.1175/MWR2906.1.

Regonda S. K., Seo D. J., Lawrence B., Brown J. D. & Demargne J. 2013 Short-term ensemble streamflow forecasting using operationally-produced single-valued streamflow forecasts – a Hydrologic Model Output Statistics (HMOS) approach. Journal of Hydrology 497, 80–96. https://doi.org/10.1016/j.jhydrol.2013.05.028.

Roy Bhowmik S. K., Joardar D. & Hatwar H. R. 2007 Evaluation of precipitation prediction skill of IMD operational NWP system over Indian monsoon region. Meteorology and Atmospheric Physics 95 (3–4), 205–221. https://doi.org/10.1007/s00703-006-0198-3.

Sarthak K., Ripple V., Sanyukta M. & Manthan A. T. 2015 Vulnerability assessment of human settlement on river banks: a case study of Vishwamitri River, Vadodara, India. Journal of Environmental Research and Development 9 (3A), 1015–1023.

Scheuerer M. 2014 Probabilistic quantitative precipitation forecasting using Ensemble Model Output Statistics. Quarterly Journal of the Royal Meteorological Society 140 (680), 1086–1096. https://doi.org/10.1002/qj.2183.

Scheuerer M. & Büermann L. 2014 Spatially adaptive post-processing of ensemble forecasts for temperature. Journal of the Royal Statistical Society. Series C: Applied Statistics 63 (3), 405–422. https://doi.org/10.1111/rssc.12040.

Seenu P. Z. & Jayakumar K. V. 2021 Comparative study of innovative trend analysis technique with Mann-Kendall tests for extreme rainfall. Arabian Journal of Geosciences 14 (7). https://doi.org/10.1007/s12517-021-06906-w.

Siddique R., Mejia A., Brown J., Reed S. & Ahnert P. 2015 Verification of precipitation forecasts from two numerical weather prediction models in the Middle Atlantic Region of the USA: a precursory analysis to hydrologic forecasting. Journal of Hydrology 529, 1390–1406. https://doi.org/10.1016/j.jhydrol.2015.08.042.

Sloughter J. M. L., Raftery A. E., Gneiting T. & Fraley C. 2007 Probabilistic quantitative precipitation forecasting using Bayesian model averaging. Monthly Weather Review 135 (9), 3209–3220. https://doi.org/10.1175/MWR3441.1.

Stauffer R., Umlauf N., Messner J. W., Mayr G. J. & Zeileis A. 2017 Ensemble postprocessing of daily precipitation sums over complex terrain using censored high-resolution standardized anomalies. Monthly Weather Review 145, 955–969. https://doi.org/10.1175/MWR-D-16-0260.1.

Verkade J. S., Brown J. D., Reggiani P. & Weerts A. H. 2013 Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales. Journal of Hydrology 501, 73–91. https://doi.org/10.1016/j.jhydrol.2013.07.039.

Wilks D. S. 2009 Extending logistic regression to provide full-probability-distribution MOS forecasts. Meteorological Applications 16 (3), 361–368.

Wilks D. S. 2018 Univariate ensemble postprocessing. In: Statistical Postprocessing of Ensemble Forecasts. https://doi.org/10.1016/B978-0-12-812372-0.00003-0.

Wilks D. S. & Hamill T. M. 2007 Comparison of ensemble-MOS methods using GFS reforecasts. Monthly Weather Review 135 (6), 2379–2390. https://doi.org/10.1175/MWR3402.1.

Williams R. M., Ferro C. A. T. & Kwasniok F. 2014 A comparison of ensemble post-processing methods for extreme events. Quarterly Journal of the Royal Meteorological Society 140 (680), 1112–1120. https://doi.org/10.1002/qj.2198.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).