The precise evaluation of the Muskingum model (MM) parameters is critical for routing flood waves and achieving flood control in open channels. The MM is one of the most popular techniques adopted for flood routing. Estimating the MM parameters so as to provide the best fit between the observed and computed flow values is a global optimization problem. Several optimization techniques have been adopted in the past for this purpose, but efficient optimization algorithms are needed to overcome local optima issues and improve accuracy. In this paper, the efficiency of three optimization algorithms, namely Jaya, the Covariance Matrix Adaptation Evolution Strategy (CMAES) and self-adaptive differential evolution (SaDE), is assessed for the evaluation of the Muskingum parameters. The sum of the squared deviations between the observed and computed outflows (SSQ) is taken as the objective of this MM optimization problem. A constraint is also proposed in this paper to help the optimization algorithms find globally optimal solutions. The simulation results show that the SSQ was the least for SaDE, followed by CMAES.

HIGHLIGHTS

  • Precise evaluation of Muskingum model (MM) parameters is quite critical for routing flood waves.

  • Efficient optimization algorithms are needed to overcome local optima issues in the estimation of the Muskingum parameters.

  • Jaya, the Covariance Matrix Adaptation Evolution Strategy (CMAES) and self-adaptive differential evolution (SaDE) have been assessed.

  • SaDE shows the best performance, followed by CMAES.

Graphical Abstract
Flood routing has a vital role in the determination of the profile of flood waves in open channel flows. Generally, two basic methods of flood routing are adopted, namely hydrologic routing and hydraulic routing. The Muskingum routing method is one of the hydrologic approaches to flood routing, where the storage is a function of inflow and outflow. Muskingum routing models are of two types: linear and nonlinear. In the linear Muskingum model (LMM), the following equations of continuity and storage are adopted:
$$\frac{dS_t}{dt} = I_t - O_t \qquad (1)$$
$$S_t = K\,[x\,I_t + (1 - x)\,O_t] \qquad (2)$$
where $S_t$ is the channel storage at time t ($L^3$), $I_t$ and $O_t$ are the rates of inflow and outflow at time t ($L^3 T^{-1}$), K is the storage-time constant for the river reach (T), x is the dimensionless weighting factor (ranging between 0 and 0.3 for stream channels and between 0 and 0.5 for reservoir storage) and Δt is the time step (T). The parameters K and x are traditionally evaluated graphically using a trial-and-error procedure. Yoon & Padmanabhan (1993) showed that the association between $[x\,I_t + (1 - x)\,O_t]$ and $S_t$ is nonlinear, so a nonlinear model is more appropriate. The following representation of the MM is recommended to account for the nonlinearity (Gill 1978):
$$S_t = K\,[x\,I_t + (1 - x)\,O_t]^m \qquad (3)$$

The nonlinear equation possesses a supplementary parameter m, which can be evaluated using alternative parameter-evaluation techniques. The values of K and x in the nonlinear model differ from those of the linear model (Barati 2011).

Several approaches have been recommended in the literature for the evaluation of the hydrologic parameters of the nonlinear Muskingum model (NMM). Gill (1978) adopted the least squares method (LSM) for the determination of the values of K, x and m, but his strategy suffered from heavy time consumption. Tung (1985) advocated methods to evaluate the parameters using different curve-fitting techniques, namely simple linear regression (LR), conjugate gradient (CG) and Davidon–Fletcher–Powell (DFP) algorithms in conjunction with the Hooke–Jeeves (HJ) pattern search method. Yoon & Padmanabhan (1993) suggested a technique involving an iterative procedure and nonlinear least squares regression. Mohan (1997) proposed the genetic algorithm (GA) for the evaluation of the MM parameters and reported GA to be better than the earlier models. Kim et al. (2001) proposed the harmony search (HS) algorithm and found that the HS evaluation was superior to the GA; its drawback was that careful attention was needed in setting up the harmony memory. Das (2004) evaluated the parameters using the Lagrange multiplier by transforming the constrained parameter optimization problem into an unconstrained one, but the results were not good compared with other methods (Luo & Xie 2010). Geem (2006) used the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to evaluate the parameters of the model; the only drawback of BFGS is the requirement of an initial guess of K, x and m. Chu & Chang (2009) adopted the particle swarm optimization (PSO) method to evaluate the parameters of the NMM. PSO does not necessitate a preliminary estimate of any of the parameters, but its results were inferior to those of HS and BFGS. Luo & Xie (2010) proposed the immune clonal selection algorithm (ICSA) for finding the parameters of the NMM; this algorithm can combat prematurity and slow convergence, but a sensitivity analysis has to be performed to set the algorithm parameters. Zhengxiang & Ling (2010) used three intelligent algorithms, namely the GA, PSO and simulated annealing (SA) algorithms, for the parameter calibration of the LMM. Barati (2011) used the Nelder–Mead Simplex (NMS) algorithm for the evaluation of the Muskingum parameters and found it to perform on par with BFGS and HS. Various other algorithms and techniques have been adopted to evaluate the parameters of the NMM. Xu et al. (2012) adopted differential evolution. Talatahari et al. (2013) used a combination of the charged system search and PSO to identify the parameters of the LMM. Ouyang et al. (2014) adopted PSO hybridized with the NMS algorithm; the hybrid model was found to have a higher degree of accuracy and faster convergence, combining the advantages of both PSO and NMS. Sheng et al. (2014) used a Newton-type trust region algorithm to obtain the parameters of the MM. Niazkar & Afzali (2015) adopted the modified honey bee mating optimization (MHBMO) algorithm. Ouyang et al. (2015) used the hybrid invasive weed optimization (HIWO) algorithm for the evaluation of the parameters of the NMM. Haddad et al. (2015) used a hybrid method combining the shuffled frog leaping algorithm (SFLA) and the NMS. The generalized reduced gradient (GRG) method was adopted by Hirpurkar & Ghare (2014) and Easa (2015). Zhang et al. (2016) adopted an adaptive GA method to evaluate the parameters of the NMM and also solved the model equations considering the lateral flow.
Niazkar & Afzali (2016) used a hybrid technique combining the modified HBMO and GRG algorithms. Yuan et al. (2016) used an improved backtracking search algorithm (BSA) for the parameter evaluation. Kang & Zhang (2016) used a newly developed elitist-mutated PSO (EMPSO) technique and an improved gravitational search algorithm (IGSA) for the parameter evaluation of the NMM, with and without lateral flow. Kang et al. (2017) used a hybrid algorithm combining an improved real-coded adaptive GA and the NMS for the evaluation of the parameters of two versions of the NMM. Khalifeh et al. (2020) used the grasshopper optimization algorithm (GOA) for the determination of the parameters of the NMM for the flood routing of the Kardeh river. Mohammad et al. (2018) adopted the bat algorithm to optimize the NMM parameters for flood routing.

In this study, the performance of Jaya, self-adaptive differential evolution (SaDE) and the Covariance Matrix Adaptation Evolution Strategy (CMAES) is examined for the evaluation of the parameters of the NMM. The Jaya optimization algorithm was introduced by Rao (2016) for solving constrained and unconstrained optimization problems; its advantages are a reduction in computational effort and the avoidance of local optima (Rao 2016). Qin & Suganthan (2005) proposed a method, named SaDE, which adaptively selects a mutation strategy from a pool of strategies and updates the crossover rate (CR) during the optimization process. The CMAES was introduced by Hansen and Ostermeier in 1996; it explores the search space using a covariance matrix that is adapted during the optimization process and offers better convergence ability than many recent algorithms. The distinct characteristic of the CMAES is its reliability in adapting an arbitrarily oriented scaling of the search space even with small populations (Hansen et al. 2003). To the best of the authors' knowledge, the performance of the Jaya, SaDE and CMAES algorithms for the evaluation of the parameters of the NMM is yet to be examined. Hence, in this study, their performance has been compared with PSO, GA, differential evolution (DE), BFGS and nonlinear least squares regression (NONLR).

First, rearranging Equation (3), the rate of outflow $Q_t$ can be expressed as:
$$Q_t = \left(\frac{1}{1-x}\right)\left(\frac{S_t}{K}\right)^{1/m} - \left(\frac{x}{1-x}\right) I_t \qquad (4)$$
Second, substituting Equation (4) into Equation (1) gives the time rate of change of storage:
$$\frac{\Delta S_t}{\Delta t} = -\left(\frac{1}{1-x}\right)\left(\frac{S_t}{K}\right)^{1/m} + \left(\frac{1}{1-x}\right) I_t \qquad (5)$$
where Δt is the time interval.
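As a concrete illustration, the two working equations can be coded directly. Below is a minimal Python sketch assuming the reconstruction above; the function names are illustrative, not from the paper.

```python
def outflow(S, I, K, x, m):
    """Equation (4): outflow Q_t from the current storage S_t and inflow I_t."""
    return (1.0 / (1.0 - x)) * (S / K) ** (1.0 / m) - (x / (1.0 - x)) * I

def storage_rate(S, I, K, x, m):
    """Equation (5): time rate of change of storage, dS_t/dt."""
    return -(1.0 / (1.0 - x)) * (S / K) ** (1.0 / m) + (1.0 / (1.0 - x)) * I
```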

Performance metrics

Performance metrics are used to evaluate the routed model by comparing it against the actual (observed) flows. In this work, the residual sum of squares (SSQ) is used as the performance metric for evaluating the routed model; it also serves as the fitness function in the optimization problem formulation. SSQ quantifies the difference between the routed and actual outflows and is given in Equation (6):
$$SSQ = \sum_{t=1}^{T} \left(Q_t - \hat{Q}_t\right)^2 \qquad (6)$$
where $Q_t$ and $\hat{Q}_t$ are the actual outflow and the computed outflow at time t, respectively, and T is the number of hydrograph ordinates.

The fitness evaluation for solving the flood routing problem is given in Algorithm 1.

Algorithm 1: Fitness Evaluation for the Muskingum model

Input: The parameters K, x, m
Step 1: The initial storage is calculated using Equation (3), in which the initial outflow is taken equal to the initial inflow
Step 2: The time rate of change of storage volume is calculated using Equation (5)
Step 3: The new storage is determined using $S_{t+1} = S_t + (\Delta S_t / \Delta t)\,\Delta t$
Step 4: The next outflow is calculated using Equation (4)
Step 5: Steps 2–4 are repeated till the entire flood hydrograph is routed
Step 6: SSQ is evaluated using Equation (6)
Output: SSQ
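Below is a minimal Python sketch of Algorithm 1 (names and the default dt = 6, which matches the hydrograph of Table 4, are illustrative assumptions). Note how the fractional power can return a complex value once the storage term goes negative, which is exactly the situation the constraint of the next section guards against.

```python
import numpy as np

def muskingum_ssq(params, inflow, observed, dt=6.0):
    """A sketch of Algorithm 1.

    params   : candidate solution [K, x, m]
    inflow   : inflow hydrograph I_t
    observed : observed outflow hydrograph Q_t
    """
    K, x, m = params
    computed = [float(inflow[0])]    # Step 1: initial outflow = initial inflow
    S = K * (x * inflow[0] + (1 - x) * computed[0]) ** m   # initial storage, Eq. (3)
    for t in range(len(inflow) - 1):
        # Step 2: time rate of change of storage, Equation (5)
        dSdt = -(1 / (1 - x)) * (S / K) ** (1 / m) + (1 / (1 - x)) * inflow[t]
        # Step 3: new storage
        S = S + dSdt * dt
        # Step 4: next outflow, Equation (4); a negative (S/K) makes this complex
        computed.append((1 / (1 - x)) * (S / K) ** (1 / m)
                        - (x / (1 - x)) * inflow[t + 1])
    # Step 6: SSQ, Equation (6); the result is complex if any ordinate is
    residual = np.asarray(computed) - np.asarray(observed, dtype=float)
    return np.sum(residual ** 2)
```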

Problem formulation

In order to find the optimal values of the parameters K, x and m of the NMM, a problem is formulated with the objective of minimizing the SSQ, the squared difference between the actual and routed outflows. Thus, minimizing SSQ brings the routed outflows as close as possible to the actual outflows. The formulated problem is given in Equation (7):
$$\min_{X}\ SSQ(X), \quad X = [K, x, m], \quad \text{subject to}\ X^{L} \le X \le X^{U}\ \text{and}\ SSQ(X) \notin CN \qquad (7)$$
where X is the solution vector containing the parameters, X = [K, x, m]; $X^L$ and $X^U$ are the lower and upper bounds of the parameters [K, x, m], respectively; and CN denotes the set of complex numbers. It is observed that, during the optimization process, the fitness value sometimes becomes a complex number for some solutions due to the randomness of the algorithms, which degrades the search ability of the algorithms. Hence, a constraint is added to the formulated problem to avoid complex numbers. In this work, penalty-based constraint handling is implemented: if the fitness value is a complex number, the fitness is set to a very high value such as 10,000. Owing to this high fitness value, the corresponding solution is automatically rejected in the current generation. The optimization algorithms minimize the objective function to find the optimal solution, i.e. the parameters X = [K, x, m], as shown in the sketch below.
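A minimal sketch of this penalty-based handling, assuming the muskingum_ssq routine sketched after Algorithm 1 and the penalty value of 10,000 stated above:

```python
import numpy as np

PENALTY = 1.0e4   # high value used to reject infeasible solutions

def fitness(params, inflow, observed):
    """Objective of Equation (7) with penalty-based constraint handling."""
    ssq = muskingum_ssq(params, inflow, observed)
    # A complex (or non-finite) SSQ marks an infeasible solution: return the
    # penalty so the optimiser discards it in the current generation.
    if np.iscomplexobj(ssq) or not np.isfinite(ssq):
        return PENALTY
    return float(ssq)
```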

Differential evolution

Differential evolution was developed based on the concepts of evolutionary algorithms (EA), namely population-based mutation, crossover and selection. DE (Price et al. 2006) can be applied to many real-world engineering problems owing to its simple procedure. DE initially generates N solution vectors $X_{i,G}$, i = 1…N, randomly and creates offspring solution vectors with the help of DE mutation strategies and the crossover operation. Then, the target solution vectors for the next generation are selected by comparing each parent solution vector with its offspring: if the offspring is better than the corresponding parent, the parent is replaced by the offspring. This procedure is repeated until the stopping condition is reached. Even though DE is a simple and powerful technique for solving real-world problems, its performance depends on choosing a suitable mutation strategy and control parameters (i.e. the mutation scaling factor F and CR). Many researchers have addressed this issue and proposed adaptive DE algorithms; among these, the SaDE algorithm is popular and performs better than many other adaptive DE variants. A compact sketch of the classic DE loop is given below.
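For reference, here is a compact Python sketch of the classic DE/rand/1/bin loop for this bound-constrained problem; F = 0.5 and CR = 0.9 are common textbook defaults, not values reported in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(fit, lb, ub, N=10, F=0.5, CR=0.9, max_fes=600):
    """Classic DE sketch: fit is the fitness function, lb/ub the bounds."""
    dim = len(lb)
    X = lb + rng.random((N, dim)) * (ub - lb)      # random initial population
    f = np.array([fit(x) for x in X])
    fes = N
    while fes < max_fes:
        for i in range(N):
            r1, r2, r3 = rng.choice([j for j in range(N) if j != i],
                                    3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])        # mutation (DE/rand/1)
            jr = rng.integers(dim)                  # binomial crossover
            U = np.where((rng.random(dim) <= CR) | (np.arange(dim) == jr),
                         V, X[i])
            U = np.clip(U, lb, ub)                  # keep within bounds
            fu = fit(U); fes += 1
            if fu < f[i]:                           # greedy selection
                X[i], f[i] = U, fu
    return X[np.argmin(f)], float(f.min())
```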

Self-adaptive differential evolution

SaDE adaptively chooses a suitable DE mutation strategy from a pool of K mutation strategies and updates the CR value based on the learning from previous generations (Qin & Suganthan 2005). Even though DE is simple and effective, it cannot perform well across different types of complex optimization problems, because its exploration capability is limited by the predefined DE mutation strategy, CR and F. As the problem changes, different DE mutation strategies and different values of CR and F are required for effectively exploring the problem landscape. Thus, the suitable DE mutation strategy must be selected and the parameters (CR and F) fine-tuned according to the type of problem. The SaDE algorithm has the ability to choose a suitable DE mutation strategy and fine-tune the parameters CR and F based on the learning experience of previous generations. Hence, SaDE does not require a particular DE mutation strategy or initial parameter settings for CR and F. Owing to this ensemble learning, SaDE produces consistent performance in finding optimal solutions for different types of problems, such as unimodal, multi-modal, nonlinear, non-convex and non-smooth problems.

Four (i.e. K = 4) DE mutation strategies are used in SaDE and are given as follows:

(1) DE/rand/1/bin:
$$V_{i,G} = X_{r1,G} + F\,(X_{r2,G} - X_{r3,G}) \qquad (8)$$
(2) DE/rand-to-best/2/bin:
$$V_{i,G} = X_{i,G} + F\,(X_{best,G} - X_{i,G}) + F\,(X_{r1,G} - X_{r2,G}) + F\,(X_{r3,G} - X_{r4,G}) \qquad (9)$$
(3) DE/rand/2/bin:
$$V_{i,G} = X_{r1,G} + F\,(X_{r2,G} - X_{r3,G}) + F\,(X_{r4,G} - X_{r5,G}) \qquad (10)$$
(4) DE/current-to-rand/1:
$$U_{i,G} = X_{i,G} + K'\,(X_{r1,G} - X_{i,G}) + F\,(X_{r2,G} - X_{r3,G}) \qquad (11)$$
where r1, r2, r3, r4 and r5 are mutually exclusive indices generated randomly in the population, $X_{best,G}$ is the best solution of generation G, $K'$ is a random coefficient in [0, 1] (written $K'$ to avoid confusion with the Muskingum parameter K), $V_{i,G}$ is the mutated vector, U represents the offspring (trial) solution vector and u represents an element of U. Strategies (1)–(3) generate the offspring by applying the binomial crossover of Equation (13) to $V_{i,G}$, while strategy (4) produces the trial vector directly. During the optimization process, all four strategies initially have equal chances (i.e. equal probability) of generating the offspring. Then, for every learning period of LP generations, the success and failure counts of each DE strategy are updated, and the probability of selecting each mutation strategy is recalculated from its ability to generate promising solutions, using Equation (12):
$$p_{k,G} = \frac{S_{k,G}}{\sum_{j=1}^{K} S_{j,G}}, \qquad S_{k,G} = \frac{ns_{k,G}}{ns_{k,G} + nf_{k,G}} + \epsilon \qquad (12)$$
where k indicates the kth strategy and $\epsilon$ is a small constant, generally fixed as 0.01 to avoid a zero probability; $ns_{k,G}$ and $nf_{k,G}$ denote the success and failure counts of the kth strategy, respectively. Here, F is not adapted during the optimization process: it is sampled from a normal distribution with mean 0.5 and standard deviation 0.3 for each solution vector $X_{i,G}$. CR for the kth strategy is initially sampled from a normal distribution with mean 0.5 and standard deviation 0.1; after every LP generations, the mean is adapted to the median of the successful CR values, which are stored in a memory from the previous generations. A sketch of the probability update follows.
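A short sketch of the probability update of Equation (12); the tiny guard against an all-zero count is an implementation detail, not part of the equation.

```python
import numpy as np

def strategy_probabilities(ns, nf, eps=0.01):
    """Strategy-selection probabilities of Equation (12) (a sketch).

    ns, nf : success/failure counts of the K strategies over the last
             LP generations (arrays of length K); eps avoids a zero probability.
    """
    S = ns / (ns + nf + 1e-12) + eps   # 1e-12 guards against an all-zero count
    return S / S.sum()                  # normalise to probabilities
```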

Crossover

Crossover operations are embedded in the binomial ('bin') DE mutation strategies of Equations (8)–(10); Equation (11) produces the trial vector directly. For better understanding, the crossover operation is explained separately in this section. In the crossover operation, information from the parent and mutated solutions is exchanged based on the CR value, as per Equation (13):
$$u_{j,i,G} = \begin{cases} v_{j,i,G}, & \text{if } rand_j \le CR \\ x_{j,i,G}, & \text{otherwise} \end{cases} \qquad (13)$$
where $v_{j,i,G}$ is an element of the mutated vector and $x_{j,i,G}$ is an element of the parent solution vector. If the random value $rand_j$ is less than or equal to the current CR value, the mutated-vector element is copied into the offspring vector U; otherwise, the parent element is retained. A minimal sketch follows.
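A minimal sketch of the binomial crossover of Equation (13); forcing at least one element to come from the mutated vector is a standard DE convention assumed here, not something stated in the equation.

```python
import numpy as np

rng = np.random.default_rng()

def binomial_crossover(x, v, CR):
    """Binomial crossover of Equation (13): each offspring element comes from
    the mutated vector v when rand_j <= CR, otherwise from the parent x."""
    mask = rng.random(len(x)) <= CR
    mask[rng.integers(len(x))] = True   # standard DE convention (an assumption)
    return np.where(mask, v, x)
```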

Selection

After the generation of the offspring vector $U_{i,G}$, the objective function of Equation (7) is evaluated for both the offspring vector $U_{i,G}$ and the parent vector $X_{i,G}$. If the offspring vector has the better (i.e. smaller) fitness, it is chosen for the next generation, as per Equation (14):
$$X_{i,G+1} = \begin{cases} U_{i,G}, & \text{if } f(U_{i,G}) < f(X_{i,G}) \\ X_{i,G}, & \text{otherwise} \end{cases} \qquad (14)$$
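Vectorised over the whole population, the selection of Equation (14) reduces to a few lines (array names are illustrative):

```python
import numpy as np

def select(X, U, f_X, f_U):
    """Selection of Equation (14): keep the offspring wherever it attains a
    smaller fitness. X, U are (N, dim) arrays; f_X, f_U are length-N arrays."""
    better = f_U < f_X
    return np.where(better[:, None], U, X), np.where(better, f_U, f_X)
```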

Algorithm 2: Self-Adaptive Differential Evolution

Input: Population size (N), maximum function evaluations (mFes)
Step 1: Set the initial ranges for F and CR
Step 2: Initialize the population of N solutions $X_{i,G}$, i = 1 to N
Step 3: Set equal probabilities for the DE strategies in generating the mutated vectors
Step 4: Generate the mutated solution vectors using the four DE strategies
Step 5: Generate the trial (offspring) vectors by applying the crossover operation to each mutated vector
Step 6: Evaluate the objective function as described in Algorithm 1 for the parent vector $f(X_{i,G})$ and the offspring vector $f(U_{i,G})$
Step 7: Apply the selection operation as in Equation (14) to select the better solutions
Step 8: For every LP generations, update the probability of each DE strategy and the CR values
Step 9: Repeat Steps 4–8 until the stopping condition is reached
Output: Optimal solution $X_{i,G}$
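Putting the pieces together, below is a compact Python sketch following Algorithm 2. It reuses Equations (8)–(14); the learning period LP = 5, the random seed and the bound handling are assumptions, since the paper does not report them.

```python
import numpy as np

rng = np.random.default_rng(1)

def sade(fit, lb, ub, N=10, max_fes=600, LP=5):
    """A compact SaDE sketch following Algorithm 2 (LP = 5 is an assumption)."""
    dim, n_strat = len(lb), 4
    X = lb + rng.random((N, dim)) * (ub - lb)
    f = np.array([fit(x) for x in X]); fes = N
    p = np.full(n_strat, 1.0 / n_strat)     # equal initial probabilities
    cr_mean = np.full(n_strat, 0.5)         # initial CR mean per strategy
    ns = np.zeros(n_strat); nf = np.zeros(n_strat)
    cr_success = [[] for _ in range(n_strat)]   # memory of successful CRs
    gen = 0
    while fes < max_fes:
        gen += 1
        best = X[np.argmin(f)]
        for i in range(N):
            k = rng.choice(n_strat, p=p)    # pick a mutation strategy
            F = rng.normal(0.5, 0.3)        # F ~ N(0.5, 0.3), not adapted
            CR = np.clip(rng.normal(cr_mean[k], 0.1), 0, 1)
            idx = rng.choice([j for j in range(N) if j != i], 5, replace=False)
            r1, r2, r3, r4, r5 = (X[j] for j in idx)
            if k == 0:                       # DE/rand/1, Eq. (8)
                V = r1 + F * (r2 - r3)
            elif k == 1:                     # DE/rand-to-best/2, Eq. (9)
                V = X[i] + F * (best - X[i]) + F * (r1 - r2) + F * (r3 - r4)
            elif k == 2:                     # DE/rand/2, Eq. (10)
                V = r1 + F * (r2 - r3) + F * (r4 - r5)
            else:                            # DE/current-to-rand/1, Eq. (11)
                V = X[i] + rng.random() * (r1 - X[i]) + F * (r2 - r3)
            if k < 3:                        # binomial crossover, Eq. (13)
                jr = rng.integers(dim)
                mask = (rng.random(dim) <= CR) | (np.arange(dim) == jr)
                U = np.where(mask, V, X[i])
            else:
                U = V                        # Eq. (11) needs no crossover
            U = np.clip(U, lb, ub)           # bound handling (an assumption)
            fu = fit(U); fes += 1
            if fu < f[i]:                    # selection, Eq. (14)
                X[i], f[i] = U, fu
                ns[k] += 1; cr_success[k].append(CR)
            else:
                nf[k] += 1
        if gen % LP == 0:                    # learning-period update
            S = ns / (ns + nf + 1e-12) + 0.01            # Eq. (12)
            p = S / S.sum()
            for k in range(n_strat):
                if cr_success[k]:
                    cr_mean[k] = float(np.median(cr_success[k]))
            ns[:] = 0; nf[:] = 0
            cr_success = [[] for _ in range(n_strat)]
    return X[np.argmin(f)], float(f.min())
```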

All the simulations were performed in MATLAB on an Intel Core i7 desktop with 4 GB RAM. The SaDE procedure described in Algorithm 2 was executed for the problem described in the problem formulation section. The SaDE, Jaya (Rao 2016), PSO (Chu & Chang 2009) and Covariance Matrix Adaptation Evolution Strategy (Auger & Hansen 2005) algorithms are compared based on their performance. Owing to their stochastic nature, every algorithm is executed for 20 independent runs. Except for PSO, the algorithms (SaDE, CMAES and Jaya) are parameter-less. The parameter settings of the algorithms are given in Table 1, and a sketch of the 20-run protocol is given below.
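The paper's simulations were run in MATLAB; purely for illustration, the sketch below mirrors the 20-run protocol in the same Python style as the earlier snippets, using the hydrograph of Table 4 and the bounds of Table 1.

```python
import numpy as np

# Hydrograph data from Table 4 and the bounds/budget from Table 1.
inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47,
                   39, 32, 28, 24, 22, 21, 20, 19, 19, 18], dtype=float)
observed = np.array([22, 21, 21, 26, 34, 44, 55, 66, 75, 82, 85, 84,
                     80, 73, 64, 54, 44, 36, 30, 25, 22, 19], dtype=float)
lb = np.array([0.01, 0.01, 1.0])
ub = np.array([1.2, 0.5, 2.5])

results = []
for run in range(20):                       # 20 independent runs
    _, best = sade(lambda X: fitness(X, inflow, observed),
                   lb, ub, N=10, max_fes=600)
    results.append(best)
results = np.asarray(results)
# Statistics analogous to Table 2 (exact values will differ, since seeds
# and implementation details differ from the paper's MATLAB runs).
print(f"min {results.min():.4f}  mean {results.mean():.4f}  "
      f"std {results.std(ddof=1):.4f}  max (worst) {results.max():.4f}")
```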

Table 1

Parameter setting of the algorithms

General parameters
  Population size: 10
  Maximum number of function evaluations: 600
  Number of design variables: 3 (i.e. K, x, m)
  Lower bound of design variables: [0.01, 0.01, 1]
  Upper bound of design variables: [1.2, 0.5, 2.5]
PSO
  Acceleration factor (c1): 2
  Acceleration factor (c2): 2
  Inertia weight (wmax): 0.9
  Inertia weight (wmin): 0.4

All the algorithms were applied to the formulated problem described in the earlier section. The statistical results over 20 runs are given in Table 2, from which it can be seen that SaDE attains the minimum fitness value among the algorithms. Furthermore, SaDE has the minimum standard deviation, which indicates that it provides consistent results in solving the MM problem. Based on the fitness value and consistency (standard deviation: 102.18), the performance of PSO is the worst among the compared algorithms. CMAES shows performance equivalent to SaDE in identifying the optimal fitness value; however, it does not provide consistent results (its standard deviation, 1.1982, is comparatively high) when solving the proposed problem.

Table 2

Statistical results of the algorithms

Statistical parameter   SaDE      Jaya      PSO       CMAES
Minimum                 36.7679   36.7940   44.8281   36.7686
Mean                    37.0446   38.0346   87.5153   37.8705
Standard deviation      0.6284    1.7740    102.18    1.1982
Maximum (worst)         39.2914   44.1997   602.82    41.2769

Table 3 shows the optimal design parameters (i.e. [K, x, m]) obtained by the algorithms used in this work, together with the design parameters of existing algorithms such as DE, NONLR, GA and BFGS for performance comparison; the results of DE, NONLR, GA and BFGS are adopted from Xu et al. (2012). From Table 3, it can be concluded that SaDE attains the minimum SSQ (i.e. objective or fitness) value in comparison with the other algorithms, which shows that SaDE performs best in identifying the optimal values of the proposed problem.

Table 3

Optimal design parameters

Algorithm   Design parameters [K, x, m]   SSQ
SaDE        [0.5171, 0.2869, 1.8683]      36.7679
CMAES       [0.5148, 0.2869, 1.8692]      36.7686
Jaya        [0.5041, 0.2860, 1.8739]      36.7940
PSO         [0.9127, 0.2891, 1.7425]      44.8281
DE^a        [0.5175, 0.2869, 1.8680]      36.77
GA^a        [0.1033, 0.2873, 1.8282]      38.23
NONLR^a     [0.0600, 0.2700, 2.3600]      43.26
BFGS^a      [0.0863, 0.2869, 1.8679]      36.77

^a Results adopted from Xu et al. (2012).

The data regarding time, inflows, observed outflows (i.e. actual outflows) and the outflows computed in cubic metres per second (cms) using PSO, Jaya, CMAES and SaDE are given in Table 4.

Table 4

Data for the computed outflow from the algorithms

Time   Inflow (cms)   Observed outflow (cms)   Computed outflow (cms)
                                                PSO       Jaya      CMAES     SaDE
0      22             22                        22.0000   22.0000   22.0000   22.0000
6      23             21                        22.0000   22.0000   22.0000   22.0000
12     35             21                        22.3386   22.4260   22.4230   22.4223
18     71             26                        25.8616   26.6461   26.6188   26.6121
24     103            34                        33.2684   34.5246   34.4683   34.4566
30     111            44                        43.4831   44.2257   44.1753   44.1660
36     109            55                        56.8548   56.8735   56.8570   56.8532
42     100            66                        68.5584   68.0402   68.0559   68.0568
48     86             75                        77.7975   77.0259   77.0660   77.0698
54     71             82                        83.9852   83.2588   83.3123   83.3171
60     59             85                        86.2862   85.8435   85.8969   85.9008
66     47             84                        84.5431   84.4949   84.5354   84.5373
72     39             80                        80.1979   80.5606   80.5833   80.5827
78     32             73                        73.0457   73.7145   73.7154   73.7127
84     28             64                        64.6408   65.4294   65.4124   65.4088
90     24             54                        55.3604   56.0306   56.0020   55.9990
96     22             44                        46.3884   46.6977   46.6689   46.6684
102    21             36                        37.9744   37.7702   37.7506   37.7538
108    20             30                        31.1391   30.4658   30.4614   30.4679
114    19             25                        26.1369   25.2095   25.2190   25.2270
120    19             22                        22.6383   21.7140   21.7301   21.7375
126    18             19                        20.6155   19.9769   19.9888   19.9934

Performance analysis of the algorithms with various search spaces

This section analyses the performance of the algorithms in different search spaces and with different maximum function evaluations (mFes). For this, three cases are considered, as listed below:

Case 1: Search space: XL = [0 0 0], XU = [5 5 5], mFes = 600.

Case 2: Search space: XL = [0 0 0], XU = [10 10 10], mFes = 600.

Case 3: Search space: XL = [0 0 0], XU = [10 10 10], mFes = 3,000.

Cases 1 and 2 have larger search spaces than the actual search space of the proposed problem (i.e. XL = [0.01, 0.01, 1]; XU = [1.2, 0.5, 2.5]). Case 3 considers the large search space (i.e. XL = [0, 0, 0]; XU = [10, 10, 10]) with more function evaluations (i.e. mFes = 3,000) than the original problem. Table 5 shows the statistical results of the algorithms for Case 1. It can be concluded from Table 5 that, for Case 1, SaDE is capable of finding the optimal value within the maximum function evaluations (mFes = 600), unlike the other algorithms. SaDE has only a 10% success rate in identifying the optimal values, but the other algorithms have a 0% success rate. This shows that, even in the large search space, SaDE finds the optimal values with minimum mFes.

Table 5

Statistical results of the algorithms for Case 1

Statistical parameter   SaDE      Jaya     PSO      CMAES
Minimum                 36.8561   125.31   800.92   50.7007
Mean                    1367      16649    2627.5   22818
Standard deviation      5607.5    25267    6039.7   7777.2
Maximum (worst)         25187     11953    25564    25587
Success rate            2/20      0/20     0/20     0/20

Table 6 shows the statistical results of the algorithms for Case 2. From Table 6, it can be observed that all the algorithms failed to find the optimal value in the large search space (i.e. XL = [0 0 0], XU = [10 10 10]) within 600 function evaluations. Furthermore, in this scenario also, SaDE finds the minimum objective value in comparison with the other algorithms.

Table 6

Statistical results of the algorithms for Case 2

Statistical parameter   SaDE      Jaya      PSO       CMAES
Minimum                 65.8055   19302     19302     19303
Mean                    3407      19313     19343     19580
Standard deviation      6937      31.7137   77.2081   143.92
Maximum (worst)         19320     19440     19670     19670
Success rate            0/20      0/20      0/20      0/20

Case 3 is considered to check the performance of the algorithms in the large search space (i.e. XL = [0 0 0], XU = [10 10 10]) with increased function evaluations (i.e. mFes = 3,000). Table 7 shows the statistical results of the algorithms for Case 3. From Table 7, it can be observed that SaDE has a 100% success rate in identifying the optimal solutions. Also, based on its minimum standard deviation (0.6817), it can be concluded that SaDE provides consistent results for Case 3. The other algorithms, namely PSO, Jaya and CMAES, have success rates of 0, 10 and 65%, respectively. In addition, based on their standard deviations and means, they do not provide consistent results for Case 3.

Table 7

Statistical results of the algorithms for Case 3

Statistical parameter   SaDE      Jaya      PSO          CMAES
Minimum                 36.7679   36.7684   251.21e+02   36.7684
Mean                    36.9731   16421     929.47       6805.9
Standard deviation      0.6817    7058.5    2916.9       9464.7
Maximum (worst)         39.8254   19460     19615        19670
Success rate            20/20     2/20      0/20         13/20

The convergence behaviour of the CMAES, PSO, Jaya and SaDE algorithms is shown in Figures 1 and 2. CMAES starts with very high fitness values in the initial generations; hence, to avoid scaling and display issues, the convergence of CMAES is plotted in a separate figure. It can be seen from Figure 1 that in the initial generations CMAES finds very large fitness values, and it identifies the optimal region after approximately 20 generations. Figure 2 shows the convergence behaviour of the SaDE, Jaya and PSO algorithms. SaDE finds the minimum value in the initial generations, ahead of PSO and Jaya, which shows that SaDE possesses strong exploration ability even in the early generations. Jaya also converges quickly to a small value, but it does not reach the optimal value by the end of the generations. PSO explores the search space slowly over the generations and identifies the optimal region only after approximately 35 generations.
Figure 1: Convergence of CMAES.

Figure 2: Convergence of SaDE, PSO and Jaya.

Constraint handling

This section analyses the impact of the proposed constraint on identifying the optimal solutions. Table 8 shows the performance of the algorithms with the constraint: all the algorithms possess a 100% success rate in identifying the optimal solution. Table 9 presents the performance of the algorithms without the constraint.

Table 8

Performance of the algorithms with constraint

Statistical parameter   SaDE      Jaya      PSO       CMAES
Minimum                 36.7679   36.7940   44.8281   36.7686
Success rate            20/20     20/20     20/20     20/20
Table 9

Performance of the algorithms without constraint

Statistical parameter   SaDE             Jaya     PSO       CMAES
Minimum                 Complex number   8750.8   38.6359   36.8432
Success rate            0/20             0/20     1/20      20/20

From Table 9, it can be noted that CMAES has excellent performance even without the constraint, i.e. a 100% success rate in identifying the optimal solutions. Meanwhile, SaDE and Jaya fail to identify the optimal solutions without the constraint, i.e. both algorithms have a 0% success rate, and PSO identifies the optimal solution in only one out of 20 runs, i.e. a 5% success rate. This shows that the proposed constraint helps the SaDE, Jaya and PSO algorithms to identify the feasible region of solutions very quickly, as is also evident from Case 2.

In the current study, the performance of Jaya, CMAES and SaDE has been compared for the evaluation of the parameters of the NMM, alongside PSO, GA, DE, NONLR and BFGS. It is found that SaDE provides the smallest sum of squared deviations between the observed and computed outflow values. Moreover, SaDE is capable of finding the optimal value within the allowed maximum function evaluations, unlike the other optimization algorithms considered, which shows that SaDE is efficient in determining the optimal values with minimum mFes even in a large search space. Further, the results show that the SaDE, Jaya and PSO algorithms perform well in identifying feasible solutions with the help of the proposed constraint in the formulated problem.

All relevant data are included in the paper or its Supplementary Information.

The authors declare there is no conflict.

Auger, A. & Hansen, N. 2005 A restart CMA evolution strategy with increasing population size. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Edinburgh, UK, pp. 1769–1776.

Chu, H. J. & Chang, L. C. 2009 Applying particle swarm optimisation to parameter estimation of the nonlinear Muskingum model. Journal of Hydrologic Engineering 14 (9), 1024–1027.

Das, A. 2004 Parameter estimation for Muskingum models. Journal of Irrigation and Drainage Engineering 130 (2), 140–147.

Easa, M. S. 2015 Versatile Muskingum flood model with four variable parameters. Proceedings of the Institution of Civil Engineers: Water Management 168 (3), 139–148.

Geem, Z. W. 2006 Parameter estimation for the nonlinear Muskingum model using BFGS technique. Journal of Irrigation and Drainage Engineering 132 (5), 474–478.

Gill, M. A. 1978 Flood routing by Muskingum method. Journal of Hydrology 36 (3–4), 353–363.

Haddad, O. B., Hamedi, F., Fallah-Mehdipour, E., Orouji, H. & Marino, M. A. 2015 Application of a hybrid optimization method in Muskingum parameter estimation. Journal of Irrigation and Drainage Engineering 141 (12), 04015026.

Hansen, N. & Ostermeier, A. 1996 Convergence properties of evolution strategies with the derandomized covariance matrix adaptation: CMA-ES. In: EUFIT'97, 5th European Congress on Intelligent Techniques and Soft Computing, Proceedings (Zimmermann, H.-J., ed.). Verlag, Aachen, Germany, pp. 650–654.

Hirpurkar, P. & Ghare, A. D. 2014 Parameter estimation for the nonlinear forms of the Muskingum model. Journal of Hydrologic Engineering 20 (8), 04014085.

Khalifeh, S., Esmaili, K., Khodashenas, S. & Akbarifard, S. 2020 Data on optimisation of the non-linear Muskingum flood routing in Kardeh river using Goa algorithm. Data in Brief 30, 105398.

Kim, J. H., Geem, Z. W. & Kim, E. S. 2001 Parameter estimation of the nonlinear Muskingum model using harmony search. Journal of the American Water Resources Association 37 (5), 1131–1138.

Mohammad, E., Karami, H., Mousavi, S.-F., Saeed, F. & Saeed, S. 2018 Evaluation of the performance of bat algorithm in optimisation of nonlinear Muskingum model parameters for flood routing. Iranian Journal of Ecohydrology 4 (4), 1025–1032.

Ouyang, A., Li, K., Truong, T. K., Sallam, A. & Sha, E. H.-M. 2014 Hybrid particle swarm optimization for parameter estimation of Muskingum model. Neural Computing and Applications 25, 1785–1799.

Ouyang, A., Liu, L.-B., Sheng, Z. & Wu, F. 2015 A class of parameter estimation methods for nonlinear Muskingum model using hybrid invasive weed optimisation algorithm. Mathematical Problems in Engineering 2015, 1–15.

Price, K., Storn, R. M. & Lampinen, J. A. 2006 Differential Evolution: A Practical Approach to Global Optimization. Springer-Verlag, Berlin.

Qin, A. K. & Suganthan, P. N. 2005 Self-adaptive differential evolution algorithm for numerical optimization. IEEE Congress on Evolutionary Computation 2, 1785–1791.

Rao, R. V. 2016 Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems. International Journal of Industrial Engineering Computations 7, 19–34.

Sheng, Z., Ouyang, A., Liu, L.-B. & Yuan, G. 2014 A novel parameter estimation method for Muskingum model using new Newton-type trust region algorithm. Mathematical Problems in Engineering 2014, 1–7.

Talatahari, S., Sheikholeslami, R., Azar, B. F. & Daneshpajouh, H. 2013 Optimal parameter estimation for Muskingum model using a CSS-PSO method. Advances in Mechanical Engineering 2013, 1–6.

Tung, Y. K. 1985 River flood routing by nonlinear Muskingum method. Journal of Hydraulic Engineering 113 (1), 61–79.

Xu, D.-M., Qiu, L. & Chen, S.-Y. 2012 Estimation of nonlinear Muskingum model parameter using differential evolution. Journal of Hydrologic Engineering 17 (2), 348–353.

Yoon, J. W. & Padmanabhan, G. 1993 Parameter estimation of linear and nonlinear Muskingum models. Journal of Water Resources Planning and Management 119 (5), 600–610.

Yuan, X., Wu, X., Tian, H., Yuan, Y. & Adnan, R. M. 2016 Parameter identification of nonlinear Muskingum model with backtracking search algorithm. Water Resources Management 30, 2767–2783.

Zhengxiang, Y. & Ling, K. 2010 Application and comparison of several intelligent algorithms on Muskingum routing model. In: 2nd IEEE International Conference on Information and Financial Engineering, 17–19 September, Chongqing, China.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/licenses/by-nc-nd/4.0/).