Abstract
The study aims to calibrate the storm water management model (SWMM) with the non-dominated sorting genetic algorithm-III (NSGA-III) for an urban catchment in Hyderabad, India. The SWMM parameters calibrated were Manning's roughness coefficient (N), depression storage for pervious and impervious areas (DP and Di), sub-catchment width (W), curve number (CN), drying time of soil (dry) and percentage of imperviousness (I). The efficacy of calibration was evaluated by comparing observed and simulated peak flows and runoff using goodness-of-fit indices. The calibration considered eight rainfall events, resulting in eight calibrated parameter sets. Weights of the goodness-of-fit indices were estimated, and the best calibrated set was further validated against five continuous rainfall/runoff records. Simulated runoff volume and peak runoff over the five continuous rainfalls deviated by 7–22% and 2–20%, respectively, from observed data. Results indicated that parameters calibrated for an event rainfall could be used for continuous rainfall-runoff modelling. The effect of catchment delineation scale on runoff was also studied. The study indicated that the model output was sensitive to variation in the infiltration and imperviousness parameter values.
INTRODUCTION
Rapid urbanisation and the resulting imperviousness in urban areas can increase runoff volume to two to six times that of normal runoff (Sowmya et al. 2015). Runoff discharged through storm drains causes overland flow and can significantly impact the hydrology of an area (Alaghmand et al. 2010). Modelling the rainfall-runoff pattern is therefore essential for planning and managing the storm drain network and for devising mitigation strategies. This can be achieved with hydrological models, which have evolved over time from lumped to semi-distributed and fully distributed formulations. This evolution has led to more complex model structures and an increase in the number of model parameters. Accurate and successful application of a hydrological model can be achieved only through efficient calibration of its parameters. Automatic calibration procedures can calibrate the parameters within an acceptable range in a short time using optimisation approaches (Barnhart et al. 2017).
Automatic calibration of hydrological models using multi-objective optimisation and related algorithms has been studied by various researchers. Zaghloul (1983) applied the generalised regression neural network to improve storm water management model (SWMM) simulation for impervious test areas in Canada, Australia and the USA; in sensitivity analysis (SA), conduit length was found to be most significant and conduit routing insignificant for small areas. Kanso et al. (2003) applied a Markov chain Monte Carlo sampling method to calibrate a storm water quality model for an urban catchment in Paris; the algorithm could identify the best parameter set. Barco et al. (2008) integrated SWMM and a geographic information system (GIS) with a complex constrained optimisation technique; SWMM was calibrated and validated for 10 rainfall events in the Ballona Creek catchment in California, and the calibration technique modelled the outputs with reasonable correlation to observed data. Zhang et al. (2009) compared several multi-objective optimisation algorithms and found that genetic algorithm-based approaches outperformed particle swarm optimisation (PSO) (Kennedy & Eberhart 1995), differential evolution (DE) (Storn & Price 1997) and the artificial immune system (Hunt & Cooke 1996). Krebs et al. (2013) employed the non-dominated sorting genetic algorithm-II (NSGA-II) to optimise representative best management scenarios in a catchment in Finland; optimisation yielded good performance in calibration and validation of SWMM. De Paola et al. (2016a) modelled pressure in water valves of a water distribution network in Spain using the harmony search based (HSB) optimisation algorithm, which mimics the behaviour of jazz musicians. They found that HSB was successful in finding the optimum valve pressures within the given hydraulic constraints; apart from providing the best solution, the HSB algorithm also suggested alternative solutions that remain available in the harmonic matrix after the simulations.
In a later study, De Paola et al. (2016b) coupled the jazz music-based algorithm with SWMM to calibrate runoff parameters for an urban catchment in Bologna, Italy. The calibration showed good correlation between modelled and observed runoff estimates. Bhesdadiya et al. (2016) revised NSGA-II to perform better while maintaining diversity among population members. The new version, the non-dominated sorting genetic algorithm-III (NSGA-III), is superior to NSGA-II in terms of its selection operator, Pareto solution set and diversified solutions (Deb & Jain 2014). However, NSGA-III has not yet been applied to the calibration of hydrological models.
Rainfall-runoff models can be calibrated for continuous and/or event-based rainfall, depending on the application. Calibration and simulation accuracy of a hydrological model can be increased by rigorously combining temporally fine-scale event rainfall and coarse-scale continuous rainfall in calibration (Chu & Steinman 2009). Event-based rainfall-runoff modelling captures the catchment's response to an isolated rainfall at fine time steps (hourly or sub-hourly). Continuous rainfall-runoff modelling analyses the catchment's response to several rainfall events over a long time span (i.e., days). Calibration at a finer time scale helps in understanding the hydrological processes at that scale, in contrast to continuous rainfall modelling at larger time steps (Chu & Steinman 2009; De Silva et al. 2013). Generally, parameters calibrated from an event rainfall are used to model runoff from continuous rainfall (Tramblay et al. 2010). However, it is always possible for a ‘calibrated parameter set’ to generate poor results for another type of rainfall event. Hence, the performance of a calibrated set is not guaranteed, and testing its performance prior to validation is necessary; this is investigated in the present study. The study explores event rainfall-based automatic calibration of SWMM using NSGA-III for a catchment in Hyderabad city, India. Additionally, the effect of catchment delineation scale on runoff is studied. The paper is organised in four sections: introduction, materials and methods, results and discussion, and conclusions.
MATERIALS AND METHODS
Study area
Hyderabad is the capital of Telangana, a south Indian state. The city is located at 17° 20′ N and 78° 30′ E. The city's infrastructure, including the storm water network, is governed by the Greater Hyderabad Municipal Corporation (GHMC). The storm water network of the city is divided into 16 storm water zones based on topography. Storm water zones 12 and 13 of Hyderabad city (Figure 1) were chosen for the study; these zones represent the most urbanised areas of the city. The length of the storm drain network is 66 km in zone 12 and 21 km in zone 13. Details of the study area are given in Table 1. The storm drain network consists mostly of open rectangular lined channels, with a few closed drains. The storm water networks of zones 12 and 13 discharge into Hussain Sagar Lake through the Kukatpally Nalla (Nalla means drain in the local language) and the Picket Nalla, respectively. A few storm drains of zone 13 discharge into the River Musi downstream. In the present study, only the storm drains discharging into Hussain Sagar Lake are considered, as shown in Figure 1 (general information on drains, unpublished report, GHMC 2007).
| Study area properties | Values/Range |
|---|---|
| Area | 270 km² |
| Altitude | 380–590 m |
| Average annual rainfall (1991–2013) | 840 mm |
| Storm water drain network length | Zone 12: 66 km; Zone 13: 21 km |
| Storm water drain width | Major (width > 2 m); Minor (0.45 m < width < 2 m); Street (width < 0.45 m) |
| % Pervious area | 30^a |
| % Impervious area | 70^a |
^a Approximately.
Methods
The present study considers event rainfall-based automatic calibration of SWMM using NSGA-III. The methodology (Figure 2) of the study comprises four phases as follows:
1. Automatic calibration of SWMM parameters with NSGA-III for eight event rainfalls, resulting in eight calibrated sets.
2. Evaluation of calibration efficacy using goodness-of-fit indices that quantify the deviation of simulated model results from observed data.
3. Selection of the best calibrated set using the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS).
4. Validation of SWMM with continuous rainfall events using the best calibrated set.
Continuous rainfall has been defined by several researchers, but there is no unique definition of the time gap between two rainfalls or of the number of rain events constituting a continuous rainfall (De Silva et al. 2013; Ratan & Venugopal 2013). In the present study, in view of data limitations, two consecutive rainfall events spanning wet and dry periods are considered as continuous rainfall. Rainfall occurring continuously within a span of 24 hours (one day) is considered as one event rainfall. Continuous rainfall data were available at hourly frequency. The methods used for analysis and the goodness-of-fit indices are described in this section.
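The event definition above can be sketched in code. The following is an illustrative Python sketch (not the authors' implementation): it groups an hourly rainfall series into events, treating all rain falling within the same 24-hour span as a single event.

```python
from itertools import groupby

def split_into_events(hourly_rain, event_span=24):
    """Group an hourly rainfall series (mm) into events.

    Following the definition used in the study, all rain falling within
    the same `event_span`-hour window (here, one day) is treated as a
    single event. Returns a list of (start_hour, total_depth) tuples.
    Illustrative sketch only.
    """
    events = []
    for day, hours in groupby(enumerate(hourly_rain),
                              key=lambda t: t[0] // event_span):
        depth = sum(r for _, r in hours)
        if depth > 0:
            events.append((day * event_span, depth))
    return events

# Toy series: rain in hours 2-4 of day 0 and hours 30-31 of day 1
series = [0.0] * 48
series[2:5] = [5.0, 10.0, 3.0]
series[30:32] = [7.0, 2.0]
print(split_into_events(series))  # two events: 18 mm and 9 mm
```

A gap-based definition (a minimum dry spell between events) could be substituted for the fixed daily window with a small change to the grouping key.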
Autoregressive integrated moving average
Autoregressive integrated moving average (ARIMA) modelling assumes that future rainfall values are linearly related to the immediate, or n, historic rainfall values. ARIMA modelling consists of three steps: data identification, parameter estimation and residual diagnostics. The historic data, i.e., the time series, are first analysed to identify stationarity, seasonality and trend components. After analysing the time series, the ARIMA model parameters are estimated. ARIMA has two forms. The non-seasonal form is ARIMA(p, d, q), where p, d and q represent the order of the autoregressive component, the order of differencing and the order of the moving-average component, respectively. The seasonal form is ARIMA(p, d, q)(P, D, Q)s, where (p, d, q) is the non-seasonal part and (P, D, Q) the seasonal part of the model. These orders can be estimated by trial and error or by using the ‘auto.arima’ function. Once the ARIMA orders are fixed, future rainfall is forecasted for a user-defined period. The final step is to examine the errors/residuals of the forecasted rainfall values with the Ljung-Box-Pierce (LBP) test. The residuals should satisfy the white noise principle: they must be independent of each other, without autocorrelation, and normally distributed with constant mean and finite variance (Reddy 1997; Maity 2018).
Modelling in SWMM
SWMM 5.1 is hydrological modelling software from the United States Environmental Protection Agency. Its primary purpose is rainfall-runoff computation in urban areas for both event-based and continuous rainfall (Rossman 2010). It has computational blocks to perform rainfall-runoff modelling. The Runoff block generates runoff from rainfall using the non-linear reservoir method, which treats each sub-catchment as a non-linear reservoir. Overland flow from each sub-catchment is computed by applying the law of conservation of mass (Rossman & Huber 2016). The Transport and Extran blocks route the runoff through storm water drains using kinematic and dynamic wave routing, respectively (Figure 3). Infiltration can be computed using the CN, Horton, modified Horton or Green-Ampt methods. In a model run, the computational blocks can work simultaneously or individually. In the present work, dynamic wave routing and the CN method were used for flow routing and infiltration, respectively. Although the CN method has its limitations, it was chosen for its simple data requirements and, according to Bales & Betson (1982), is capable of generating reasonable outputs despite those limitations. As the study area is in India, the CN values proposed for the Indian region by Vandersypen et al. (1972) were adopted.
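The non-linear reservoir method mentioned above combines mass conservation on the sub-catchment with a Manning-type outflow relation. A minimal explicit-Euler sketch follows, with assumed parameter values and a simplified constant infiltration rate; SWMM's own solver uses a more robust iterative scheme.

```python
def nonlinear_reservoir_step(d, rain, infil, dt, A, W, n, S, dp):
    """One explicit-Euler step of the non-linear reservoir.

    d      current ponded depth (m); rain, infil in m/s; dt in s
    A      sub-catchment area (m^2); W width (m); n Manning's roughness
    S      slope (m/m); dp depression storage (m)
    Returns (new_depth, outflow_m3s). Illustrative sketch only.
    """
    excess = max(d - dp, 0.0)                        # depth above depression storage
    Q = (W / n) * excess ** (5.0 / 3.0) * S ** 0.5   # Manning overland flow (m^3/s)
    d_new = max(d + (rain - infil - Q / A) * dt, 0.0)  # mass balance
    return d_new, Q

# One hour of 30 mm/h rain on a 10 ha sub-catchment, 30 s routing step
d, Q = 0.0, 0.0
for _ in range(120):
    d, Q = nonlinear_reservoir_step(d, rain=30e-3 / 3600, infil=5e-3 / 3600,
                                    dt=30, A=1e5, W=500, n=0.015, S=0.01, dp=2e-3)
print(d, Q)
```

The 30 s routing step mirrors the flow routing time step used later in the study; depth approaches an equilibrium where outflow balances rainfall excess.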
For catchment delineation, storm drain network, DEM and land use maps were processed in GIS. Catchment was delineated at a stream density of every 2 km drainage length resulting in 40 sub-catchments. Runoff from each sub-catchment was computed for every 1 hour and the flow routing time step was 30 seconds. Runoff from the sub-catchment will discharge into a single outlet as there is no provision of multiple outlets in SWMM. The outlet could be a storm drain, storage or a sub-catchment.
Goodness-of-fit indices
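The four indices used throughout the study are NSE, SSE, PEP and CC. Their equations are not reproduced in this excerpt; the sketch below assumes the standard definitions of these indices.

```python
import numpy as np

def goodness_of_fit(obs, sim):
    """Standard definitions of the four goodness-of-fit indices.

    NSE  Nash-Sutcliffe efficiency (max 1, higher is better)
    SSE  sum of squared errors (lower is better)
    PEP  percentage error in peak flow (lower is better)
    CC   Pearson correlation coefficient (higher is better)
    Definitions assumed standard; the paper's own equations are not
    reproduced in this excerpt.
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    sse = float(np.sum((obs - sim) ** 2))
    pep = 100.0 * abs(obs.max() - sim.max()) / obs.max()
    cc = float(np.corrcoef(obs, sim)[0, 1])
    return nse, sse, pep, cc

obs = [1.0, 3.0, 8.0, 5.0, 2.0]
print(goodness_of_fit(obs, obs))  # perfect fit gives NSE=1, SSE=0, PEP=0, CC≈1
```

Note that NSE can be negative (as in Table 3) when the simulation performs worse than the mean of the observations.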
SWMM parameters
In SWMM, the parameters physically define the properties of a study area. Parameters have to be adjusted to minimise errors in rainfall-runoff simulation, and are categorised as measured and inferred parameters. Measured parameters are actual measurements on the ground, such as conduit dimensions and rainfall depth. Inferred parameters are not measured directly on the ground but are extracted from spatial data, e.g., land use data; however, they are not well defined for a specific catchment (Choi & Ball 2002). In the present study, based on the available data, 10 inferred parameters were considered for SA (Table 2).
| Parameters | Notation | Range of values | Data source |
|---|---|---|---|
| Manning's roughness (impervious area) | Ni | 0.011–0.017 | Rossman (2010) |
| Manning's roughness (pervious area) | NP | 0.15–0.8 | Rossman (2010) |
| Manning's roughness (conduits) | Nc | 0.011–0.04 | Rossman (2010) |
| % Imperviousness | I | 60–85 | Calculated from Google maps and land use maps (Rossman 2010; Google Earth 2016) |
| Curve number | CN | 60–75 | Vandersypen et al. (1972) |
| Drying time of soil (days) | dry | 4–13 | Calculated as per Rossman (2010) |
| Catchment slope (%) | S | 0.01–3 | Calculated from DEM and contour map |
| Catchment width (m) | W | 100–1,500 | Calculated as per Rossman (2010) |
| Depression storage, impervious area (mm) | Di | 1.5–3 | ASCE (1993) |
| Depression storage, pervious area (mm) | DP | 2.5–5.5 | ASCE (1993) |
The parameters in Table 2 were selected for their unique importance in modelling, as explained below. Manning's roughness coefficient (N) represents the friction a land cover offers to overland flow; as surface roughness increases, the runoff rate decreases. Depression storages in pervious and impervious areas (DP and Di) store runoff. SWMM assumes each sub-catchment to be rectangular with a certain width (W) (Rossman 2010); since real sub-catchments are not rectangular, W needs to be calibrated. The smaller the catchment width, the longer the runoff flow path. CN and the drying time (dry) of soil describe the infiltration capacity of the soil. As the percentage of imperviousness (I) increases, peak runoff also increases. The identified sensitive parameters were calibrated using multi-objective optimisation with NSGA-III, as described below.
Calibration with NSGA-III
NSGA-III was applied to calibrate SWMM for the various rainfalls. The algorithm iteratively adjusts the model parameters until the best values of the objective functions are obtained. NSGA-III, proposed by Deb & Jain (2014), is an updated version of NSGA-II. It produces a set of optimal solutions satisfying all the objective functions, known as ‘non-dominated solutions’. The algorithm randomly initialises a population, which is distributed over a hyperplane of reference points, where each reference point corresponds to an objective function (Deb & Jain 2014). At every generation, the population produces an offspring population through crossover and mutation, and the new offspring population is also associated with the reference points. At each generation a non-dominated solution set is formed. The process continues until a population member is found for each reference point on the Pareto optimal front (the space representing all non-dominated solutions), or until the termination criterion is met.
NSGA-III is superior to NSGA-II in three respects: (1) selection operator: NSGA-III does not require a selection operator for the population to produce a new offspring population; (2) Pareto solution set: NSGA-III applies the reference point approach to obtain a well-distributed Pareto solution set; and (3) diversified solutions: the algorithm employs a pre-allocated reference set to maintain a variety of diversified solutions across the population (Deb & Jain 2014).
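The reference-point mechanism above can be illustrated by generating the structured (Das-Dennis) reference points that NSGA-III places on the unit simplex; this sketch assumes the standard construction described by Deb & Jain (2014).

```python
from math import comb

def reference_points(M, H):
    """Das-Dennis structured reference points for NSGA-III.

    Generates all points on the M-objective unit simplex whose
    coordinates are multiples of 1/H. NSGA-III associates population
    members with these points to preserve diversity. The number of
    points is C(H + M - 1, M - 1).
    """
    points = []

    def recurse(prefix, left):
        if len(prefix) == M - 1:
            points.append([p / H for p in prefix + [left]])
            return
        for i in range(left + 1):
            recurse(prefix + [i], left - i)

    recurse([], H)
    return points

# Example: 3 objectives, 4 divisions -> C(6, 2) = 15 reference points
pts = reference_points(M=3, H=4)
print(len(pts), comb(4 + 3 - 1, 3 - 1))
```

With the study's four objective functions (NSE, SSE, PEP and CC), M would be 4; the number of divisions H is a tuning choice.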
Coupling NSGA-III and SWMM for calibration
SWMM has open source code that can be modified (Rossman 2010), so desired functionality can be incorporated into the model without much difficulty. Coupling SWMM and NSGA-III requires three files: (1) a SWMM input file (.inp) containing the entire model setup; (2) a template file (.inp) containing the model parameter values and their ranges; and (3) a file (.doc) with observed runoff data. The computational library of SWMM was coupled with NSGA-III through code written in Python (Jones et al. 2014; Team 2016). Once the required files were created, NSGA-III was executed along with the SWMM computational module. NSGA-III reads the SWMM input files and generates model parameter values using the population hyperplane method (Deb & Jain 2014). The generated parameter values are written into the template file, which NSGA-III then uses to execute SWMM with the new values. The model outputs obtained with the new parameter values are stored in a database. NSGA-III compares the observed and simulated output values and calculates the objective functions; while deviations remain, NSGA-III proceeds to the next iteration, until the generation-based termination criterion is met.
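The coupling loop described above can be sketched as follows. `run_swmm_mock` and the parameter names are hypothetical stand-ins for the real SWMM call (which would write the template .inp file and invoke the SWMM library), and a simple random search replaces NSGA-III to keep the sketch short.

```python
import random

def run_swmm_mock(params):
    """Hypothetical stand-in for executing SWMM with a parameter set.

    A real coupling would write `params` into the template .inp file,
    invoke the SWMM computational library and read simulated runoff
    from the output file; here a quadratic error surface mimics the
    resulting objective (an SSE-like deviation from observations).
    """
    target = {"CN": 65.0, "I": 70.0, "W": 800.0}
    return sum((params[k] - v) ** 2 for k, v in target.items())

# Parameter ranges as in Table 2 (subset, for illustration)
ranges = {"CN": (60, 75), "I": (60, 85), "W": (100, 1500)}
random.seed(1)

best_params, best_sse = None, float("inf")
for generation in range(200):          # stands in for NSGA-III generations
    candidate = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    sse = run_swmm_mock(candidate)     # objective: deviation from observed
    if sse < best_sse:                 # keep the best parameter set so far
        best_params, best_sse = candidate, sse

print(best_sse)
```

In the actual workflow the candidate generation, selection and termination are handled by NSGA-III rather than random sampling.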
Entropy method
Entropy method was applied to find the weights of criteria (i.e., goodness-of-fit indices) from the payoff matrix. The payoff matrix is formulated with calibrated sets as alternatives against the goodness-of-fit indices as criteria (Hwang & Yoon 1981).
The approach consists of generation of entropy values for criteria, degree of diversification and normalisation of degree of diversification, which yields weights (Raju & Kumar 2014). The mathematical steps are as follows.
The higher the degree of diversification of a criterion, the larger the weight assigned to it.
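The mathematical steps can be sketched as follows; the payoff values are illustrative, not the paper's (note also that criteria with negative entries, such as negative NSE values, would first need shifting to positive values before the logarithmic step).

```python
import numpy as np

def entropy_weights(payoff):
    """Entropy method for criteria weights (Hwang & Yoon 1981).

    1. Normalise each criterion column so its entries sum to 1.
    2. Entropy e_j = -sum(p * ln p) / ln(m), m = number of alternatives.
    3. Degree of diversification d_j = 1 - e_j.
    4. Weights = d_j normalised to sum to 1.
    Illustrative sketch; assumes positive payoff entries.
    """
    X = np.asarray(payoff, float)
    m = X.shape[0]
    P = X / X.sum(axis=0)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)
    d = 1.0 - e                    # higher diversification -> larger weight
    return d / d.sum()

# Toy payoff: 4 calibrated sets (rows) x 3 indices (columns); the third
# column is nearly uniform, so it should receive the smallest weight
payoff = [[0.9, 10.0, 5.0],
          [0.8, 12.0, 5.1],
          [0.2, 50.0, 4.9],
          [0.5, 30.0, 5.0]]
w = entropy_weights(payoff)
print(w, w.sum())
```

This mirrors Table 4, where SSE (the most diversified criterion, 0.445) receives the largest weight (0.4890) and PEP (0.029) the smallest (0.0319).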
TOPSIS
After obtaining the weights of the goodness-of-fit indices, TOPSIS was used to select the best calibrated set (Hwang & Yoon 1981; Raju & Kumar 2014). TOPSIS is based on the idea that the best calibrated set should have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution. TOPSIS computes the separation measure S+ from the positive ideal solution, the separation measure S− from the negative ideal solution, and the closeness index Ci. The higher the Ci value, the better the calibrated set (Raju & Kumar 2014). Figure 4 presents the flow chart of TOPSIS. The best performing calibrated set was used for model validation. The following section presents the results obtained using the methodology (refer to Figure 2).
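The TOPSIS computation can be sketched as below. Vector normalisation is assumed (a common choice; the paper's exact variant is not shown in this excerpt), and the payoff values are illustrative rather than the Table 3 data.

```python
import numpy as np

def topsis(payoff, weights, benefit):
    """Rank alternatives by the TOPSIS closeness index.

    payoff   m x n matrix (alternatives x criteria)
    weights  criteria weights summing to 1
    benefit  True where larger is better (e.g. NSE, CC),
             False where smaller is better (e.g. SSE, PEP)
    Vector normalisation is assumed; other variants exist.
    """
    X = np.asarray(payoff, float)
    V = weights * X / np.linalg.norm(X, axis=0)   # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    s_plus = np.linalg.norm(V - ideal, axis=1)    # separation from ideal, S+
    s_minus = np.linalg.norm(V - anti, axis=1)    # separation from anti-ideal, S-
    return s_minus / (s_plus + s_minus)           # closeness index Ci

# Toy example: 3 calibrated sets x 2 indices (NSE to maximise, SSE to minimise)
payoff = [[0.9, 0.1],
          [0.5, 0.5],
          [0.1, 0.9]]
ci = topsis(payoff, weights=np.array([0.5, 0.5]),
            benefit=np.array([True, False]))
print(ci)  # the first alternative dominates and should rank best
```

In Table 3 this same computation gives C6 the highest Ci (0.9482) and hence rank 1.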
RESULTS, ANALYSIS AND DISCUSSION
Rainfall forecasting by ARIMA
Inflow data for Hussain Sagar were available for 2012, 2013 and 2014–2017, whereas hourly rainfall data for 2014–2017 were unavailable. In view of this data limitation, ARIMA was used to forecast the hourly rainfall data. The ARIMA procedure was as follows.
The historic time series (hourly rainfall data for 2000–2013) was analysed for seasonality and trend components by decomposition. The data showed consistent seasonality and no trend. Seasonal ARIMA can handle seasonality, and capturing it is required to achieve a forecast close to the original series. The next step was to estimate the model parameters. The hourly rainfall data for 2000 to 2010 were used to build the seasonal ARIMA(p, d, q)(P, D, Q)s model. The model parameters were estimated using the ‘auto.arima’ function in the R environment (Team 2016), which uses the Hyndman-Khandakar algorithm to estimate the seasonal and non-seasonal parameters (Reddy 1997; Maity 2018). The non-seasonal (p, d, q) was estimated as (4, 0, 4), i.e., autoregressive and moving-average orders of 4 with no differencing. The seasonal (P, D, Q) values were (0, 0, 1); hence, the seasonal autoregressive order is 0 and the seasonal moving-average order is 1.
The model order with the minimum Akaike information criterion (AIC) is considered the best. The hourly rainfall data for 2011 to 2013 were used for model verification, and the AIC was calculated. The AIC of the (4, 0, 4)(0, 0, 1) model was 3,559, the minimum among the models obtained by trial and error; it was therefore considered the best-fit model and was used to forecast hourly rainfall for 2014 to 2017.
The final step was to check the residuals of the forecasted values using LBP statistics. The LBP test reports a p-value for the null hypothesis that the residuals are independent (note that this p-value is distinct from the autoregressive order p of the ARIMA model). A p-value greater than 0.05 indicates independent residuals. The p-value for the forecasted data was 0.7, well above the 0.05 threshold; the residuals were therefore random and no further modelling was needed.
The rainfall events considered in this study are shown in Figure 5 and denoted R1 to R13. R1 to R8 are event rainfalls used for calibration; R9 to R13 are continuous rainfalls used for validation. The rainfalls were selected such that inflow data for Hussain Sagar were available on the respective day(s).
SA of SWMM parameters
The objective of the SA was to maximise NSE and CC, as these represent the overall runoff hydrograph. Approximately 1,200 runs were performed within the specified SWMM parameter ranges (refer to Table 2), starting from the initial values, to match the simulated and observed hydrographs closely. In the SA, the SWMM parameters were varied one at a time, and the corresponding NSE and CC were computed at the end of each run. The SA was performed for the single rainfall event of June 8, 2017 (R6), which had the highest magnitude of 130 mm. The model output was significantly affected by varying I, W and Di: I changed the NSE from 0.3 to 0.5 and the CC from 0.4 to 0.6; W changed the NSE from 0.3 to 0.42 and the CC from 0.4 to 0.57; and Di changed the CC from 0.4 to 0.5. The model output was slightly affected by varying CN and dry: CN changed the NSE from 0.3 to 0.38, whereas dry slightly altered the CC from 0.4 to 0.46. Similarly, the model output was slightly sensitive to S and DP. The model output showed no response to varying Ni, NP and Nc throughout the analysis. Nevertheless, all the parameters were retained for calibration.
Calibration of SWMM parameters using NSGA-III for 130 mm event rainfall
In the present study, calibration was performed for all the event rainfalls (R1–R8 in Figure 5). The rainfall event of June 8, 2017 (R6) is used for detailed presentation of the results; results for the other rainfall events are presented as graphs to minimise repetition. The ranges of the SWMM parameter values used for calibration are shown in Table 2. Before proceeding to SWMM calibration, hyper parameter tuning of NSGA-III was performed. The optimum hyper parameters of NSGA-III were obtained through the four objective functions and the tuning procedure discussed below.
Hyper parameter tuning of NSGA-III is required to enhance the performance of the algorithm and yield better-quality solutions (Basu 2011). The hyper parameters considered for tuning were: (1) population, (2) crossover, (3) mutation and (4) generations. Hyper parameter tuning requires extensive computational time. The tuning was performed by varying each hyper parameter over the following levels: population (7 levels: 50, 150, 200, 250, 300, 350, 400), crossover (3 levels: 0.2, 0.6, 0.9), mutation (3 levels: 0.2, 0.6, 0.9) and generations (3 levels: 56, 150, 350), resulting in 189 runs. Figure 6 shows the SA results corresponding to NSE, SSE, PEP and CC in the multi-objective framework.
The sensitivity analyses showed that NSE varied significantly with population. Referring to Figure 6(a), NSE increased with population up to 280 and then decreased gradually. NSE increased with the number of generations and, conversely, decreased with increasing mutation; it was least sensitive to mutation (Figure 6(a) and 6(b)). SSE varied with population, with a minimum at a population of 230; it was non-linearly proportional to crossover and mutation and increased with the number of generations (Figure 6(c) and 6(d)). PEP was affected by the population range, with a minimum at a population of 300; it was linearly proportional to mutation, non-linearly proportional to crossover and remained constant across all generation ranges (Figure 6(e) and 6(f)). CC increased with population up to 320 and then decreased gradually; it was non-linearly proportional to mutation, linearly proportional to crossover and least sensitive to generations (Figure 6(g) and 6(h)). Figure 6(a), 6(c), 6(e) and 6(h) show that NSE, SSE, PEP and CC are quite sensitive to variations in generations, while Figure 6(d), 6(f) and 6(g) indicate that SSE, PEP and CC are not very sensitive to crossover.
Each hyper parameter combination was run 10 times and the average value of the objective function was taken. The single objective function was to maximise NSE and CC and to minimise SSE and PEP, with equal weights given to the indices; the objective function was normalised using min-max normalisation. The optimum values of population, crossover, mutation and generations for the 130 mm rainfall (R6) were 250, 0.9, 0.2 and 150, respectively. These hyper parameters were also used for calibrating the remaining rainfall events.
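The tuning grid and min-max normalisation described above can be sketched as follows; the index values in the scoring example are illustrative, not the study's results.

```python
from itertools import product

# The tuning levels described in the text: 7 x 3 x 3 x 3 = 189 runs
populations = [50, 150, 200, 250, 300, 350, 400]
crossovers = [0.2, 0.6, 0.9]
mutations = [0.2, 0.6, 0.9]
generations = [56, 150, 350]

grid = list(product(populations, crossovers, mutations, generations))
print(len(grid))  # 189 hyper parameter combinations

def minmax(values, maximise):
    """Min-max normalise a list to [0, 1], flipping cost criteria so
    that 1 is always 'best'. Used to combine NSE, CC (maximise) with
    SSE, PEP (minimise) into a single equally weighted score."""
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if maximise else [1.0 - s for s in scaled]

# Toy index values for three candidate hyper parameter sets
nse, sse = [0.3, 0.5, 0.4], [2.0, 1.0, 1.5]
score = [0.5 * a + 0.5 * b for a, b in zip(minmax(nse, True), minmax(sse, False))]
print(score)  # the second candidate (index 1) scores highest
```

Each grid entry would be run 10 times, as described above, with the averaged normalised score used to pick the optimum combination.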
Calibration set was obtained for the entire catchment (consisting of 40 sub-catchments) resulting in 400 unique parameter values (10 parameters × 40 sub-catchments). The ranges of Ni, NP, Nc, Di, DP, I, CN, dry, S and W obtained are 0.012–0.016, 0.3–0.7, 0.02–0.035, 1.8–2.6, 1.8–2.6, 66–85, 60–70, 4–11, 0.6–1.2 and 492–1,470. Deviation of simulated runoff and peak runoff at the outfall (Hussain Sagar) with respect to the observed data was 27% and 18%, respectively.
Similar analysis was applied for the other seven event rainfalls (R1–R5, R7 and R8) resulting in seven calibrated sets. The runoff results for eight event rainfalls are shown in Figures 7 and 8. Simulated and observed hydrographs for all rainfalls were created but are not presented here due to space limitation.
In the case of peak runoff, the percentage deviation for the eight rainfall events varied between 2% and 33% (Figure 7); in the case of runoff volume, it varied between 11% and 54% (Figure 8).
It is observed that R8 had an error greater than 50% in runoff volume, and rainfall R4 had the largest error of 35% in simulating peak runoff. Rainfalls R4 and R7 showed large errors in runoff volume; both are relatively small magnitude rainfalls (<45 mm). Rainfalls R3 and R6 produced small errors (<10%) in runoff volume and peak runoff; these are high magnitude rainfalls of 120 mm and 130 mm, respectively. This suggests that calibration for small magnitude rainfalls is not as adequate as for high magnitude rainfalls. R3 had the lowest peak runoff error of 2%, and R6 the lowest runoff volume error of 10%. Otherwise, no specific pattern was observed across the rainfalls. The average values of the objective functions NSE, SSE, PEP and CC over all rainfalls in the calibration process were −4.6, 1.07, 21 and 0.5, respectively.
Selection of best calibrated parameter sets with TOPSIS
One option is to use all the calibrated sets (C1 to C8) for validation. However, the performance of a calibrated set can vary with each validation rainfall, and with five validation rainfall events, finding the ‘best performing calibrated set’ across all of them would be a matter of trial and error. Awol et al. (2018) suggested using a ‘final parameter set’ with the best average performance across all rainfall events, but cautioned that this could result in under- or over-estimation of runoff, and that it can compromise the goal of obtaining a parameter set that satisfies all rainfall events.
Hence, TOPSIS was applied to select the best performing calibrated set among the eight calibrated sets C1 to C8. In the calibration process, the goodness-of-fit indices corresponding to the eight calibrated sets were obtained; the payoff matrix comprises the eight calibrated sets (C1 to C8) against these goodness-of-fit indices (Table 3). The entropy method was used to find the weights of the indices from the payoff matrix; the weights for NSE, SSE, PEP and CC are 0.4363, 0.4890, 0.0319 and 0.0429, respectively (Equations (6)–(8), Table 4). The outcomes of TOPSIS are shown in Table 3 (Srikrishan et al. 2014). A sample calculation for entropy and TOPSIS is given in Appendix A (available with the online version of this paper).
| Calibrated parameter sets | NSE | SSE | PEP | CC | S+ | S− | Ci | Rank |
|---|---|---|---|---|---|---|---|---|
| C1 | −0.02 | 0.98 | 24.9 | 0.6 | 0.0897 | 0.5119 | 0.8509 | 4 |
| C2 | −1.7 | 0.01 | 30 | 0.8 | 0.0330 | 0.5525 | 0.9437 | 2 |
| C3 | −0.6 | 0.46 | 16 | 0.7 | 0.0431 | 0.5363 | 0.9256 | 3 |
| C4 | −0.5 | 2.52 | 29.18 | 0.2 | 0.2315 | 0.4264 | 0.6481 | 6 |
| C5 | −21 | 0.02 | 30 | 0.6 | 0.3917 | 0.4180 | 0.5162 | 7 |
| C6 | −1.56 | 0.04 | 18.42 | 0.58 | 0.0302 | 0.5521 | 0.9482 | 1 |
| C7 | −9.61 | 0.01 | 9.52 | 0.89 | 0.1790 | 0.4700 | 0.7242 | 5 |
| C8 | −2.53 | 4.57 | 16 | 0.29 | 0.4216 | 0.3447 | 0.4498 | 8 |
| Direction | Max | Min | Min | Max | | | | |
| Goodness-of-fit index | Entropy value | Degree of diversification | Weight |
|---|---|---|---|
| NSE | 0.603 | 0.397 | 0.4363 |
| SSE | 0.555 | 0.445 | 0.4890 |
| PEP | 0.971 | 0.029 | 0.0319 |
| CC | 0.961 | 0.039 | 0.0429 |
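The weights in Table 4 follow from the entropy values: the degree of diversification is d_j = 1 − E_j and the weight is w_j = d_j / Σ d_j. A minimal check, using the entropy values from Table 4:

```python
# Entropy-method weights: d_j = 1 - E_j, w_j = d_j / sum(d_j).
entropy = {"NSE": 0.603, "SSE": 0.555, "PEP": 0.971, "CC": 0.961}
diversification = {k: 1.0 - e for k, e in entropy.items()}
total = sum(diversification.values())
weights = {k: d / total for k, d in diversification.items()}
```

Indices with low entropy (high diversification), here NSE and SSE, carry most of the weight.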
In TOPSIS, ranks were assigned to the calibrated sets based on the closeness index (Raju & Kumar 2014). Calibrated set C6, with the highest closeness index, was selected and used for validation with continuous rainfall-runoff modelling.
Validation with continuous rainfalls
The calibrated set C6 derived from event rainfall was used to model runoff from five continuous rainfalls (Figures 9 and 10).
The percentage change in peak runoff for R9 to R13 varied between 2% and 20% (Figure 9), while the percentage deviation in runoff volume varied between 7% and 22% (Figure 10). For rainfalls R11 and R12, peak runoff was well modelled by the calibrated set, with errors of 2% and 9%, respectively. Deviation in runoff volume ranged from 4% to 16%, and in peak runoff from 13% to 24%. In a similar study, Di Pierro et al. (2006) obtained a peak flow deviation of 49%, whereas Huber et al. (1988) observed peak flow deviations of 4% to 20%; both sets of authors considered such ranges satisfactory. Accordingly, the simulated runoff is in good agreement with the observed runoff.
Apart from rainfall-runoff data availability and the calibration approach, runoff estimation also depends on the catchment delineation scale. This aspect is discussed in detail in the following section.
Effect of catchment delineation scale on runoff
Based on the catchment area and data availability, only two delineation scales were possible: (1) fine delineation (Cf), in which the catchment was delineated at high stream density, at every 2 km of drainage length, resulting in 40 finer sub-catchments; and (2) coarse delineation (Cc), in which the catchment was delineated at relatively low stream density, at every 5 km of drainage length, producing 18 coarser sub-catchments. Cf and Cc also correspond to the mini (0–100 hectares) and milli (1,000–10,000 hectares) sub-catchments proposed by Singh (2000). Cf aims to reduce the heterogeneity within each sub-catchment relative to Cc. Supportive studies on two-scale catchment delineation include Sun et al. (2012), Stephenson (1989), Zaghloul (1983) and Metcalf & Eddy (1971).
In both delineation scales, the storm water drain network detail was retained at the original scale. The calibration and validation procedure discussed earlier was based on the fine delineated catchment. A similar procedure was applied to the coarse delineated catchment for one event rainfall (R6) and one continuous rainfall (R10). Simulations for the other rainfall events are presented in Appendix B (available online).
Event rainfall-runoff
Event rainfall R6 was considered for analysis as it is a relatively high-magnitude rainfall (130 mm). The calibrated and observed runoff hydrographs are shown in Figure 11.
Figure 11 shows a significant difference between the runoff hydrographs simulated by Cf and Cc. To understand this effect of delineation scale, hydrological responses such as peak runoff, runoff volume and infiltration were compared for Cf and Cc. There was little difference in runoff volume (5%) and infiltration (3%), but peak runoff varied significantly (18%) between Cf and Cc, indicating that peak runoff is sensitive to delineation scale. As observed from Figure 11, Cf under-simulated and Cc over-simulated peak runoff at 15 hours compared to observed data, and a false peak appears at 17 hours. The deviation in peak runoff could be due to runoff routing, which is a combination of overland flow and conduit routing.
The sub-catchment width (W) is obtained by dividing the sub-catchment area by the length of overland flow (LOF), where LOF is the total length of the storm water network within the sub-catchment, calculated using GIS. LOF is also an indicator of runoff storage in a sub-catchment (Ghosh & Hellweger 2011). From Equation (9), runoff is non-linearly proportional to LOF; hence, LOF can be sensitive to catchment delineation scale. When several sub-catchments are aggregated into a single sub-catchment (as in Cc), the runoff storage may decrease or increase.
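The width parameter supplied to SWMM can be derived as a simple ratio; a minimal sketch, with hypothetical sub-catchment areas and GIS-derived network lengths:

```python
def subcatchment_width(area_m2, lof_m):
    """Sub-catchment width W = area / length of overland flow (LOF)."""
    if lof_m <= 0:
        raise ValueError("LOF must be positive")
    return area_m2 / lof_m

# Hypothetical sub-catchments: (area in m2, LOF in m from GIS)
subcatchments = {"S1": (500_000, 2_000), "S2": (1_250_000, 5_000)}
widths = {name: subcatchment_width(a, l) for name, (a, l) in subcatchments.items()}
```

Because W appears in SWMM's overland flow computation, aggregating sub-catchments (which changes both area and LOF) changes W and hence the routed runoff.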
To confirm the sensitivity of LOF to delineation scale, the LOFs of Cf and Cc were reduced to their average values, with the other model parameters unaltered. Peak runoff decreased in Cc (by 2%) and increased in Cf (by 3%) compared to the observed values (Figure 7). This trend indicates that LOF is sensitive to delineation scale. However, the effect of conduit routing also has to be analysed.
The larger sub-catchments in Cc discharge more runoff into the conduits than the smaller sub-catchments in Cf, which may cause peak flow attenuation. To illustrate the effect of delineation scale, the dimensions (length and breadth) of the conduits in the entire storm water network were uniformly increased by 10%, with all other model parameters kept the same. Peak runoff decreased by 3% in Cf and by 6% in Cc. Since increasing the conduit dimensions decreased peak runoff in both delineation scales, conduit routing is insensitive to delineation scale.
Continuous rainfall-runoff
LOF is sensitive to delineation scale for event rainfall, and this is assumed to hold for continuous rainfall-runoff as well. According to Ghosh & Hellweger (2011), a long continuous rainfall occurs over a period of wet and dry days and has varied infiltration. Hence, for continuous rainfall, the sensitivity of infiltration to catchment delineation scale has to be analysed. The continuous rainfall R11 was considered for analysis.
From Figure 12, Cf and Cc simulated peak runoffs of 0.31 and 0.41 m3/s, respectively, and maximum infiltration depth losses of 14 mm and 9 mm. For Cc, runoff volume and peak runoff increased relative to Cf while infiltration decreased, which points out that infiltration can be sensitive to delineation scale.
In SWMM, infiltration is computed using the CN method, and the spatial variability of infiltration is modified by the delineation scale, affecting runoff. For example, if a sub-catchment consists of two sub-areas with different land use, it will have either two CNs or a single composite CN. Due to the relatively finer delineation in Cf, a small area is treated as a separate sub-catchment with its individual CN, whereas in Cc the smaller areas are aggregated into one sub-catchment with an averaged CN. This spatial variability of infiltration affects the runoff generated from the sub-catchment.
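A common way to form the averaged CN of an aggregated sub-catchment is area weighting; a minimal sketch, with hypothetical sub-areas and CN values:

```python
def composite_cn(subareas):
    """Area-weighted composite curve number.

    subareas: list of (area, cn) tuples for the sub-areas being aggregated.
    """
    total_area = sum(a for a, _ in subareas)
    return sum(a * cn for a, cn in subareas) / total_area

# Hypothetical: 2 ha pervious land use (CN 70) and 3 ha paved area (CN 90)
cn = composite_cn([(2.0, 70.0), (3.0, 90.0)])  # -> 82.0
```

Because the CN-runoff relation is non-linear, a coarse sub-catchment with this single composite CN generally does not produce the same runoff as two fine sub-catchments modelled with CN 70 and CN 90 separately, which is the scale effect discussed above.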
It is unknown whether infiltration alone affects runoff or whether it acts in combination with other parameters. The scenarios investigated with respect to peak runoff, and the corresponding results, are as follows.
Scenario 1: The original calibrated parameter values were used in modelling – the peak runoff simulated by Cf and Cc was 0.31 and 0.41 m3/s.
Scenario 2: The average values were assigned to the parameters – the peak runoff simulated by Cf and Cc was 0.18 and 0.20 m3/s.
Scenario 3: CN and other parameters were varied in combinations until the peak runoffs of Cf and Cc matched closely. The combinations were: CN and S, CN and I, CN and Dp, and CN and Di (Ghosh & Hellweger 2011). For example, for the combination of CN and S, both parameters were varied together while the other parameters were kept constant. This scenario showed the following:
When only CN was varied and other parameters were kept constant – the peak runoff simulated by Cf and Cc was 0.22 and 0.33 m3/s.
In the combination of CN and S – the peak runoff simulated by Cf and Cc was 0.20 and 0.22 m3/s.
In the combination of CN and I – the peak runoff simulated by Cf and Cc was 0.27 and 0.39 m3/s.
In the combination of CN and Dp – the peak runoff simulated by Cf and Cc was 0.19 and 0.21 m3/s.
In the combination of CN and Di – the peak runoff simulated by Cf and Cc was 0.19 and 0.21 m3/s.
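The relative difference in peak runoff between Cc and Cf for each scenario can be checked directly from the simulated values above:

```python
def pct_diff(cf_peak, cc_peak):
    """Relative difference of Cc peak runoff w.r.t. Cf, in percent."""
    return (cc_peak - cf_peak) / cf_peak * 100.0

# (Cf, Cc) peak runoff in m3/s for each scenario, from the results above
scenarios = {
    "average parameters": (0.18, 0.20),
    "CN only":            (0.22, 0.33),
    "CN and S":           (0.20, 0.22),
    "CN and I":           (0.27, 0.39),
    "CN and Dp":          (0.19, 0.21),
    "CN and Di":          (0.19, 0.21),
}
diffs = {name: round(pct_diff(cf, cc), 1) for name, (cf, cc) in scenarios.items()}
```

These differences reproduce the pattern reported below: roughly 10–11% for the averaged parameters and for CN varied with S, Dp or Di, versus 40–50% for CN alone and CN with I.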
From these results, when all parameters were assigned average values, the peak runoffs of Cf and Cc matched within a deviation of about 10%, indicating that the spatial variability of the parameters was only slightly modified by the catchment delineation scale. A small difference (10–11%) in peak runoff was observed when CN varied in combination with S, Dp or Di, whereas a large difference (40–50%) was observed when CN was varied alone or in combination with I. Therefore, CN and I were found to be the most sensitive to delineation scale.
CONCLUSIONS AND FUTURE DIRECTIONS
In our opinion, this is the first application of NSGA-III for automatic calibration of SWMM for an urban catchment in Hyderabad, India. To perform autocalibration, the computational library of SWMM was coupled with NSGA-III through code written in Python (Jones et al. 2014; Team 2016). The calibration aimed to minimise the deviation between model-simulated and observed runoff, and its efficacy was evaluated using NSE, SSE, PEP and CC. The conclusions from the study are as follows:
From the SA of SWMM parameters, it was evident that imperviousness, catchment width and depression storage for impervious areas affected runoff volume and peak flow, whereas Manning's roughness coefficients for overland flow and conduits showed no effect on either.
The calibrated SWMM simulated runoff volume with a deviation of 11–54% and peak flow with a deviation of 2–33% compared to observed data in event-based rainfalls.
In validation with continuous rainfalls, the best calibrated set simulated runoff volume with a deviation of 7–22% and peak runoff with a deviation of 2–20%. Additionally, the average values of the objective functions over all continuous rainfalls are 9.85, 0.46, 35 and 0.73 for NSE, SSE, PEP and CC, respectively. The low values of SSE and PEP and the high CC indicate that parameters calibrated for an event rainfall can be used for continuous rainfall-runoff modelling, as opined by Tramblay et al. (2010).
The fine-scale delineation decreased peak runoff while the coarse-scale delineation increased it; this is attributed to the effect of delineation on overland flow routing.
Further work could consider a greater number of objective functions, rainfall events, model parameters and delineation scales, subject to data availability and quality. The methodology itself can be applied to any other case study, which is a principal aim of this paper.
ACKNOWLEDGEMENTS
This work is supported by Information Technology Research Academy (ITRA), Government of India under ITRA-water grant ITRA/15(68)/water/IUFM/01. The authors thank the officials of Greater Hyderabad Municipal Corporation (GHMC) for providing storm water network data, brainstorming discussions, valuable suggestions and facilitating field surveys. Special acknowledgements go to Mr D. Ravinder, Superintending Engineer, GHMC; Ms P. Sumasri, Deputy Executive Engineer, GHMC; Mr K. Suresh Kumar, Chief Engineer, GHMC for providing data and facilitating valuable thought-provoking discussions. We thank Department of Civil Engineering, NIT Warangal for providing rainfall data. We thank Prof. N. V. Umamahesh for providing valuable suggestions in improving the manuscript. We extend our thanks to Hussain Sagar Lake Division (HSLD) for providing inflow data. We thank the United States Environmental Protection Agency (US EPA) for making the storm water management model (SWMM) an open source software. The authors also thank Google Earth for its open source images. The first author thanks Mr Santosh P, IBM Bangalore for helping in modifying the R code as per requirements.