Abstract
The precise evaluation of the Muskingum model (MM) parameters is critical for routing flood waves to achieve flood control in open channels. The MM is one of the popular techniques adopted for flood routing. Estimating the MM parameters so as to provide the best fit between the observed and computed flow values is a global optimization problem. Several optimization techniques have been adopted in the past for this purpose, but efficient optimization algorithms are needed to overcome local optima and improve accuracy. In this paper, the efficiency of three optimization algorithms, namely Jaya, the Covariance Matrix Adaptation Evolution Strategy (CMAES) and self-adaptive differential evolution (SaDE), is assessed in the evaluation of the Muskingum parameters. The sum of the squared deviations between the observed and computed outflows (SSQ) is taken as the objective in this MM optimization problem. A constraint is also proposed in this paper to help the optimization algorithms find globally optimal solutions. The simulation results show that the SSQ was the least for SaDE, followed by CMAES.
HIGHLIGHTS
Precise evaluation of Muskingum model (MM) parameters is quite critical for routing flood waves.
Efficient optimization algorithms are needed to overcome local optima issues in the estimation of the Muskingum parameters.
Jaya, the Covariance Matrix Adaptation Evolution Strategy (CMAES) and self-adaptive differential evolution (SaDE) have been assessed.
SaDE shows the best performance, followed by CMAES.
INTRODUCTION
The nonlinear Muskingum equations possess an additional parameter, m, which must be evaluated together with K and x using suitable parameter estimation techniques. The values of K and x in the nonlinear model differ from those of the linear model (Barati 2011).
Several approaches have been recommended in the literature for the evaluation of the hydrologic parameters of the nonlinear Muskingum model (NMM). Gill (1978) adopted the least squares method (LSM) for the determination of the values of K, x and m, but his approach was computationally time-consuming. Tung (1985) evaluated the parameters using different curve-fitting methods, namely simple linear regression (LR), conjugate gradient (CG) and Davidson–Fletcher–Powell (DFP) algorithms in conjunction with the Hooke–Jeeves (HJ) pattern search method. Yoon & Padmanabhan (1993) suggested a technique involving an iterative procedure and nonlinear least squares regression. Mohan (1997) proposed the genetic algorithm (GA) for the evaluation of the MM parameters and reported GA to be better than the other models. Kim et al. (2001) proposed the harmony search (HS) algorithm and found that the HS evaluation was superior to the GA; its drawback was that the harmony memory required careful attention. Das (2004) evaluated the parameters using the Lagrange multiplier by transforming the constrained parameter optimization problem into an unconstrained one; his results did not compare well with those of other methods (Luo & Xie 2010). Geem (2006) used the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to evaluate the parameters of the model; the only drawback of BFGS is the requirement of initial guesses for K, x and m. Chu & Chang (2009) adopted the particle swarm optimization (PSO) method to evaluate the parameters of the NMM. PSO does not require preliminary estimates of any of the parameters, but its results were inferior to those of HS and BFGS. Luo & Xie (2010) proposed the immune clonal selection algorithm (ICSA) for finding the parameters of the NMM.
This algorithm can combat prematurity and slow convergence, but a sensitivity analysis has to be performed to evaluate the algorithm parameters. Zhengxiang & Ling (2010) used three intelligence algorithms, namely GA, PSO and the simulated annealing (SA) algorithm, for the parameter calibration of the LMM. Barati (2011) used the Nelder–Mead Simplex (NMS) algorithm for the evaluation of the Muskingum parameters, and the algorithm was found to perform on par with BFGS and HS. Various other algorithms and techniques have been adopted to evaluate the parameters of the NMM. Xu et al. (2012) adopted differential evolution. Talatahari et al. (2013) used a combination of the charged system search and PSO to identify the parameters of the LMM. Ouyang et al. (2014) adopted PSO hybridized with the NMS algorithm; the proposed model was found to have higher accuracy and faster convergence, combining the advantages of both PSO and NMS. Sheng et al. (2014) used the Newton-type trust region algorithm to obtain the parameters of the MM. Niazkar & Afzali (2015) adopted the modified honey bee mating optimization (MHBMO) algorithm. Ouyang et al. (2015) used the hybrid invasive weed optimization (HIWO) algorithm for the evaluation of the parameters of the NMM. Haddad et al. (2015) used a hybrid method combining the shuffled frog leaping algorithm (SFLA) and the NMS. The generalized reduced gradient (GRG) method was adopted by Hirpurkar & Ghare (2014) and Easa (2015). Zhang et al. (2016) adopted the adaptive GA method to evaluate the parameters of the NMM and also solved the model equations considering the lateral flow. Niazkar & Afzali (2016) used a hybrid technique combining the modified HBMO and GRG algorithms. Yuan et al. (2016) used the improved backtracking search algorithm (BSA) for the parameter evaluation.
Kang & Zhang (2016) used a newly developed elitist-mutated PSO (EMPSO) technique and an improved gravitational search algorithm (IGSA) for the parameter evaluation of the NMM, with and without lateral flow. Kang et al. (2017) used a hybrid algorithm combining an improved real-coded adaptive GA and the NMS for the evaluation of the parameters of two versions of the NMM. Khalifeh et al. (2020) used the grasshopper optimization algorithm (GOA) for the determination of the parameters of the NMM for the flood routing of the Kardeh river. Mohammad et al. (2018) adopted the bat algorithm to optimize the NMM parameters for flood routing.
In this study, the performance of Jaya, self-adaptive differential evolution (SaDE) and the Covariance Matrix Adaptation Evolution Strategy (CMAES) is examined in the evaluation of the parameters of the NMM. The Jaya optimization algorithm was introduced by Rao in 2016 for solving constrained and unconstrained optimization problems; its advantages are reduced computational effort and the avoidance of locally optimal solutions (Rao 2016). Qin & Suganthan (2005) proposed a method, named SaDE, which adaptively selects a mutation strategy from a pool of strategies and updates the crossover rate (CR) during the optimization process. CMAES was introduced by Hansen and Ostermeier in 1996; it explores the search space using a covariance matrix that is adapted during the optimization process. CMAES provides better convergence ability than many recent algorithms, and its distinct characteristic is its reliability in adapting an arbitrarily oriented scaling of the search space with small populations (Hansen et al. 2003). To the best of the authors' knowledge, the performance of the Jaya, SaDE and CMAES algorithms for the evaluation of the parameters of the NMM is yet to be examined. Hence, in this study, the performance of Jaya, SaDE and CMAES is compared with PSO, GA, differential evolution (DE), BFGS and nonlinear least squares regression (NONLR).
MUSKINGUM ROUTING
Performance metrics
The objective used to compare the algorithms is the sum of the squared deviations between the observed and computed outflows (Equation (6)): SSQ = Σt (Ot − Ôt)², where Ot and Ôt denote the observed and computed outflows at time t, and the sum runs over all hydrograph ordinates.
The fitness evaluation for solving the flood routing problem is given in Algorithm 1.
Algorithm 1: Fitness Evaluation for the Muskingum model
Input: The parameters K, x, m |
Step 1: The initial storage is calculated using Equation (3), in which the initial outflow is taken equal to the initial inflow |
Step 2: The time rate of change of storage volume is calculated using Equation (5) |
Step 3: The new storage is determined from the current storage and its rate of change, St+1 = St + Δt (dS/dt)t |
Step 4: The next outflow is calculated using Equation (4) |
Step 5: Steps 2–4 are repeated until the outflows for the entire flood hydrograph are determined |
Step 6: The SSQ is evaluated using Equation (6) |
Output: SSQ |
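Algorithm 1 can be sketched in Python as follows. Since Equations (3)–(6) are not reproduced in this excerpt, the sketch assumes the standard nonlinear Muskingum storage relation S = K[xI + (1 − x)O]^m, inverted for the outflow, with a forward Euler storage update; the routing interval `dt` and the guard against negative storage are assumptions.

```python
import numpy as np

def muskingum_ssq(params, inflow, outflow_obs, dt=1.0):
    """SSQ fitness for nonlinear Muskingum routing (sketch of Algorithm 1).

    Assumed storage relation: S = K * [x*I + (1 - x)*O]**m, inverted for the
    outflow, with a forward Euler storage update. dt is an assumed routing
    interval; Equations (3)-(6) of the paper are not reproduced here.
    """
    K, x, m = params
    inflow = np.asarray(inflow, float)
    outflow_obs = np.asarray(outflow_obs, float)
    n = inflow.size
    O = np.zeros(n)
    O[0] = inflow[0]                                  # initial outflow = initial inflow
    S = K * (x * inflow[0] + (1 - x) * O[0]) ** m     # initial storage
    for t in range(n - 1):
        dS = inflow[t] - O[t]                         # time rate of change of storage
        S = S + dt * dS                               # new storage (Euler step)
        base = S / K
        if base <= 0:                                 # fractional power of a negative
            return 1e12                               # base would be complex: penalize
        O[t + 1] = ((base ** (1.0 / m)) - x * inflow[t + 1]) / (1 - x)
    return float(np.sum((outflow_obs - O) ** 2))      # SSQ objective
```

With the Table 4 hydrograph and the SaDE parameters, this sketch returns a finite, positive SSQ; because the discretization details are assumed, the value will differ from the 36.77 reported in Table 3.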
Problem formulation
OPTIMIZATION ALGORITHMS
Differential evolution
Differential evolution was developed based on the concepts of evolutionary algorithms (EAs), namely population-based mutation, crossover and selection. DE was proposed by Price et al. (2006) and can be applied to many real-world engineering problems owing to its simple procedure. DE initially generates N solution vectors Xi,G, i = 1…N, randomly and generates offspring solution vectors with the help of the DE mutation strategies and the crossover operation. Then, the solution vectors for the next generation are selected by comparing each parent solution vector with its offspring solution vector: if the offspring solution is better than the corresponding parent solution, the parent vector is replaced with the offspring vector. This procedure is repeated until the stopping condition is reached. Even though DE is a simple and powerful technique for solving real-world problems, its performance depends on the choice of a suitable mutation strategy and of the control parameters (i.e. the mutation scaling factor F and the crossover rate CR). Many researchers have addressed this issue and proposed adaptive DE algorithms. The SaDE algorithm is popular and performs better than other adaptive DE algorithms.
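As a minimal illustration of the DE loop just described (mutation, binomial crossover, greedy selection), the sketch below implements the classic DE/rand/1/bin variant; the function name and parameter defaults are illustrative, not taken from the paper.

```python
import numpy as np

def de_minimize(f, lower, upper, N=20, F=0.5, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin sketch: mutation, binomial crossover and
    greedy selection. Names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    D = lower.size
    X = lower + rng.random((N, D)) * (upper - lower)   # random initial population
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(N):
            # mutation: three distinct random indices, all different from i
            r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
            V = np.clip(X[r1] + F * (X[r2] - X[r3]), lower, upper)
            # binomial crossover with one guaranteed mutant component
            jrand = rng.integers(D)
            mask = (rng.random(D) < CR) | (np.arange(D) == jrand)
            U = np.where(mask, V, X[i])
            # greedy selection: keep the offspring only if it is no worse
            fu = f(U)
            if fu <= fx[i]:
                X[i], fx[i] = U, fu
    best = fx.argmin()
    return X[best], fx[best]
```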
Self-adaptive differential evolution
SaDE adaptively chooses a suitable DE mutation strategy from a pool of K mutation strategies and updates the CR value based on the previous generations' learning (Qin & Suganthan 2005). Even though DE is simple and effective, it cannot perform well on all types of complex optimization problems, because its exploration capability is limited by the predefined DE mutation strategy, CR and F. As the problem changes, different DE mutation strategies, CR and F values are required to explore the problem landscape effectively. Thus, it is necessary to select a suitable DE mutation strategy and fine-tune the parameters (CR and F) based on the type of problem. The SaDE algorithm has the ability to choose a suitable DE mutation strategy and to fine-tune the parameters CR and F based on the learning experience of previous generations. Hence, SaDE does not require a particular DE mutation strategy or initial settings of CR and F. Owing to its ensemble learning, SaDE produces consistent performance in finding the optimal solutions for different types of problems, such as unimodal, multi-modal, nonlinear, non-convex and non-smooth problems.
Four (i.e. K = 4) DE mutation strategies are used in SaDE and are given as follows:
- (1) DE/rand/1: Ui = Xr1 + F(Xr2 − Xr3)
- (2) DE/rand-to-best/2: Ui = Xi + F(Xbest − Xi) + F(Xr1 − Xr2) + F(Xr3 − Xr4)
- (3) DE/rand/2: Ui = Xr1 + F(Xr2 − Xr3) + F(Xr4 − Xr5)
- (4) DE/current-to-rand/1: Ui = Xi + rand·(Xr1 − Xi) + F(Xr2 − Xr3)
where r1, r2, r3, r4 and r5 represent mutually distinct indices in the population which are generated randomly, Xbest is the best solution found so far, U represents the offspring solution vector and u represents an element in the offspring solution vector U. During the optimization process, all four strategies initially have equal chances (i.e. equal probability) of generating the offspring solution. Then, for every LP (learning period) generations, the success and failure counts of each DE strategy are updated. The selection probability of each mutation strategy is calculated from its ability to generate promising solutions using Equation (12).
- pk,G = Sk,G / Σk Sk,G, where Sk,G = nSk,G/(nSk,G + nfk,G) + ε, k indicates the corresponding kth strategy and ε is a small constant. Generally, ε can be fixed at 0.01 to avoid a zero probability, and nSk,G and nfk,G denote the success and failure counts of the kth strategy, respectively. Here, F is not adapted during the optimization process; it is sampled from a normal distribution with a mean of 0.5 and a standard deviation of 0.3 for each solution vector Xi,G. For every LP generations, CR for the kth strategy is sampled from a normal distribution with a mean of 0.5 and a standard deviation of 0.1. After each learning period, the CR mean of each strategy is adapted to the median of the successful CR values. The successful CR values of previous generations are stored in a memory for updating CR every LP generations.
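The success-rate update described above can be sketched as follows; the exact form of Equation (12) is assumed here to follow the SaDE literature (per-strategy success rate plus a small ε, normalised to probabilities).

```python
import numpy as np

def strategy_probabilities(nS, nf, eps=0.01):
    """Success-rate based strategy selection probabilities (assumed form of
    Equation (12), following the SaDE literature).

    nS[k] / nf[k]: success / failure counts of strategy k over the last
    learning period; eps keeps every probability strictly positive.
    """
    nS = np.asarray(nS, float)
    nf = np.asarray(nf, float)
    total = nS + nf
    rate = np.divide(nS, total, out=np.zeros_like(nS), where=total > 0)
    S = rate + eps                 # per-strategy success rate, floored by eps
    return S / S.sum()             # normalise to probabilities
```

A strategy that produced no improvements in the last learning period still receives a small, non-zero probability through eps, so no strategy is permanently discarded.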
Crossover
The binomial crossover forms the trial vector Ui,G from the mutated vector Vi,G and the parent Xi,G (Equation (13)): ui,j = vi,j if randj ≤ CRk or j = jrand; otherwise ui,j = xi,j, where jrand is a randomly chosen index that guarantees at least one component is inherited from the mutated vector.
Selection
The greedy selection (Equation (14)) retains the better of the parent and trial vectors: Xi,G+1 = Ui,G if f(Ui,G) ≤ f(Xi,G); otherwise Xi,G+1 = Xi,G.
Algorithm 2: Self-Adaptive Differential Evolution
Input: Population size (N), maximum function evaluations (mFes) |
Step 1: Set the initial ranges for F and CR |
Step 2: Initialize the population of N solutions Xi,G, i = 1 to N |
Step 3: Assign equal probability to the DE strategies for generating the mutated vectors |
Step 4: Generate the mutated solution vectors using the four DE strategies |
Step 5: Generate the trial (offspring) vectors by applying the crossover operation to each mutated vector |
Step 6: Evaluate the objective function as described in Algorithm 1 for the parent vector f(Xi,G) and the offspring vector f(Ui,G) |
Step 7: Apply the selection operation as in Equation (14) to retain the better solutions |
Step 8: For every LP generations, update the probability of each DE strategy and the CR values |
Step 9: Repeat Steps 4–8 until the stopping condition is reached |
Output: Optimal solution Xi,G |
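A compact, simplified sketch of Algorithm 2 is given below. For brevity it uses a pool of two strategies rather than four, and the learning-period bookkeeping is illustrative; it is not the authors' implementation.

```python
import numpy as np

def sade_minimize(f, lower, upper, N=10, mfes=600, LP=10, seed=0):
    """Simplified SaDE-style sketch of Algorithm 2 (illustrative, not the
    authors' code): a pool of two strategies (DE/rand/1 and
    DE/current-to-rand/1), success-rate strategy probabilities, and CR means
    adapted from the medians of successful CR values every LP generations."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    D = lower.size
    X = lower + rng.random((N, D)) * (upper - lower)
    fx = np.array([f(x) for x in X])
    fe = N                                   # function-evaluation counter
    K = 2
    p = np.full(K, 1.0 / K)                  # equal initial strategy probabilities
    CRm = np.full(K, 0.5)                    # per-strategy CR means
    nS, nf = np.zeros(K), np.zeros(K)
    cr_success = [[] for _ in range(K)]
    gen = 0
    while fe < mfes:
        gen += 1
        for i in range(N):
            k = int(rng.choice(K, p=p))      # pick a strategy by probability
            F = rng.normal(0.5, 0.3)         # F ~ N(0.5, 0.3), not adapted
            r = rng.choice([j for j in range(N) if j != i], 3, replace=False)
            if k == 0:                       # DE/rand/1
                V = X[r[0]] + F * (X[r[1]] - X[r[2]])
            else:                            # DE/current-to-rand/1
                V = X[i] + rng.random() * (X[r[0]] - X[i]) + F * (X[r[1]] - X[r[2]])
            CR = float(np.clip(rng.normal(CRm[k], 0.1), 0.0, 1.0))
            jrand = rng.integers(D)
            mask = (rng.random(D) < CR) | (np.arange(D) == jrand)
            U = np.clip(np.where(mask, V, X[i]), lower, upper)
            fu = f(U)
            fe += 1
            if fu <= fx[i]:                  # greedy selection
                X[i], fx[i] = U, fu
                nS[k] += 1
                cr_success[k].append(CR)
            else:
                nf[k] += 1
            if fe >= mfes:
                break
        if gen % LP == 0:                    # learning-period update
            total = nS + nf
            rate = np.divide(nS, total, out=np.zeros(K), where=total > 0) + 0.01
            p = rate / rate.sum()
            for k in range(K):
                if cr_success[k]:
                    CRm[k] = float(np.median(cr_success[k]))
            nS[:], nf[:] = 0, 0
            cr_success = [[] for _ in range(K)]
    best = fx.argmin()
    return X[best], fx[best]
```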
RESULTS AND DISCUSSION
All the simulations were performed in MATLAB on an Intel Core i7 desktop with 4 GB RAM. The SaDE procedure described in Algorithm 2 was executed for the formulated problem described in the problem formulation section. The SaDE, Jaya (Rao 2016), PSO (Chu & Chang 2009) and Covariance Matrix Adaptation Evolution Strategy (Auger & Hansen 2005) algorithms were compared on the basis of their performance. Owing to their stochastic nature, every algorithm was executed for 20 independent runs. Except for PSO, the other algorithms, namely SaDE, CMAES and Jaya, are parameter-less. The parameter settings of the algorithms are given in Table 1.
Parameter setting of the algorithms
General parameters | Population size: 10 |
Maximum number of function evaluation: 600 | |
Number of design variables: 3 (i.e. K, x, m) | |
Lower bound of design variables [0.01, 0.01, 1] | |
Upper bound of design variables [1.2, 0.5, 2.5] | |
PSO | Acceleration factor (c1): 2 |
Acceleration factor (c2): 2 | |
Inertia weight (wmax): 0.9 | |
Inertia weight (wmin): 0.4 |
All the algorithms were run on the formulated problem described in the earlier section. The statistical results of the 20 runs are given in Table 2. From Table 2, it can be seen that SaDE identifies the minimum fitness value compared with the other algorithms. Furthermore, SaDE has the minimum standard deviation, which indicates that it provides consistent results in solving the MM problem. Based on the fitness value and consistency (high standard deviation: 102.18), the performance of PSO is the worst among the algorithms considered. CMAES shows performance equivalent to SaDE in identifying the optimal fitness value; however, it does not provide consistent results (standard deviation: 1.1982) when solving the proposed problem.
Statistical results of the algorithms
Statistical parameter | SaDE | Jaya | PSO | CMAES |
---|---|---|---|---|
Minimum | 36.7679 | 36.7940 | 44.8281 | 36.7686 |
Mean | 37.0446 | 38.0346 | 87.5153 | 37.8705 |
Standard deviation | 0.6284 | 1.7740 | 102.18 | 1.1982 |
Maximum (worst) | 39.2914 | 44.1997 | 602.82 | 41.2769 |
Table 3 shows the optimal design parameters (i.e. [K, x, m]) obtained by the algorithms used in this work. The design parameters of existing algorithms such as DE, NONLR, GA and BFGS are also considered for performance comparison; their results are adopted from Xu et al. (2012). From Table 3, it can be concluded that SaDE has the minimum SSQ (i.e. objective or fitness) value in comparison with the other algorithms. This shows that SaDE performs better than the others in identifying the optimal values for the proposed problem.
Optimal design parameters
Algorithms | Design parameter [K, x, m] | SSQ |
---|---|---|
SaDE | [0.5171, 0.2869, 1.8683] | 36.7679 |
CMAES | [0.5148, 0.2869, 1.8692] | 36.7686 |
Jaya | [0.5041, 0.2860, 1.8739] | 36.7940 |
PSO | [0.9127, 0.2891, 1.7425] | 44.8281 |
DEa | [0.5175, 0.2869, 1.8680] | 36.77 |
GAa | [0.1033, 0.2873, 1.8282] | 38.23 |
NONLRa | [0.0600, 0.2700, 2.3600] | 43.26 |
BFGSa | [0.0863, 0.2869, 1.8679] | 36.77 |
Note:aXu et al. (2012).
The time, inflows, observed (i.e. actual) outflows and the outflows computed using PSO, Jaya, CMAES and SaDE, in cubic meters per second (cms), are given in Table 4.
Data for the computed outflow from the algorithms
Time (h) | Inflows (cms) | Observed outflows (cms) | PSO (cms) | Jaya (cms) | CMAES (cms) | SaDE (cms) |
---|---|---|---|---|---|---|
0 | 22 | 22 | 22.0000 | 22.0000 | 22.0000 | 22.0000 |
6 | 23 | 21 | 22.0000 | 22.0000 | 22.0000 | 22.0000 |
12 | 35 | 21 | 22.3386 | 22.4260 | 22.4230 | 22.4223 |
18 | 71 | 26 | 25.8616 | 26.6461 | 26.6188 | 26.6121 |
24 | 103 | 34 | 33.2684 | 34.5246 | 34.4683 | 34.4566 |
30 | 111 | 44 | 43.4831 | 44.2257 | 44.1753 | 44.1660 |
36 | 109 | 55 | 56.8548 | 56.8735 | 56.8570 | 56.8532 |
42 | 100 | 66 | 68.5584 | 68.0402 | 68.0559 | 68.0568 |
48 | 86 | 75 | 77.7975 | 77.0259 | 77.0660 | 77.0698 |
54 | 71 | 82 | 83.9852 | 83.2588 | 83.3123 | 83.3171 |
60 | 59 | 85 | 86.2862 | 85.8435 | 85.8969 | 85.9008 |
66 | 47 | 84 | 84.5431 | 84.4949 | 84.5354 | 84.5373 |
72 | 39 | 80 | 80.1979 | 80.5606 | 80.5833 | 80.5827 |
78 | 32 | 73 | 73.0457 | 73.7145 | 73.7154 | 73.7127 |
84 | 28 | 64 | 64.6408 | 65.4294 | 65.4124 | 65.4088 |
90 | 24 | 54 | 55.3604 | 56.0306 | 56.0020 | 55.9990 |
96 | 22 | 44 | 46.3884 | 46.6977 | 46.6689 | 46.6684 |
102 | 21 | 36 | 37.9744 | 37.7702 | 37.7506 | 37.7538 |
108 | 20 | 30 | 31.1391 | 30.4658 | 30.4614 | 30.4679 |
114 | 19 | 25 | 26.1369 | 25.2095 | 25.2190 | 25.2270 |
120 | 19 | 22 | 22.6383 | 21.7140 | 21.7301 | 21.7375 |
126 | 18 | 19 | 20.6155 | 19.9769 | 19.9888 | 19.9934 |
Performance analysis of the algorithms over various search spaces
This section analyses the performance of the algorithms in different search spaces with different maximum function evaluations (mFes). For this, three different cases are considered and are listed as follows:
Case 1: Search space: XL = [0 0 0], XU = [5 5 5], mFes = 600.
Case 2: Search space: XL = [0 0 0], XU = [10 10 10], mFes = 600.
Case 3: Search space: XL = [0 0 0], XU = [10 10 10], mFes = 3,000.
Cases 1 and 2 have larger search spaces than the actual search space of the proposed problem (i.e. XL = [0.01 0.01 1]; XU = [1.2 0.5 2.5]). Case 3 considers the large search space (i.e. XL = [0 0 0]; XU = [10 10 10]) with more function evaluations (i.e. mFes = 3,000) than the original problem. Table 5 shows the statistical results of the algorithms for Case 1. It can be concluded from Table 5 that, for Case 1, only SaDE is capable of finding the optimal value within the maximum function evaluations (i.e. mFes = 600). It has only a 10% success rate in identifying the optimal values, but the other algorithms have a 0% success rate. This shows that, even in a large search space, SaDE finds the optimal values with minimum mFes.
Statistical results of the algorithm for Case 1
Statistical parameter | SaDE | Jaya | PSO | CMAES |
---|---|---|---|---|
Minimum | 36.8561 | 125.31 | 800.92 | 50.7007 |
Mean | 1367 | 16649 | 2627.5 | 22818 |
Standard deviation | 5607.5 | 25267 | 6039.7 | 7777.2 |
Maximum (worst) | 25187 | 11953 | 25564 | 25587 |
Success Rate | 2/20 | 0/20 | 0/20 | 0/20 |
Table 6 shows the statistical results of the algorithms for Case 2. From Table 6, it can be observed that all the algorithms failed to find the optimal value in the large search space (i.e. XL = [0 0 0], XU = [10 10 10]) within 600 function evaluations. Even in this scenario, SaDE finds the minimum objective value in comparison with the other algorithms.
Statistical results of the algorithm for Case 2
Statistical parameter | SaDE | Jaya | PSO | CMAES |
---|---|---|---|---|
Minimum | 65.8055 | 19302 | 19302 | 19303 |
Mean | 3407 | 19313 | 19343 | 19580 |
Standard deviation | 6937 | 31.7137 | 77.2081 | 143.92 |
Maximum (worst) | 19320 | 19440 | 19670 | 19670 |
Success rate | 0/20 | 0/20 | 0/20 | 0/20 |
Case 3 is considered to check the performance of the algorithms in the large search space (i.e. XL = [0 0 0], XU = [10 10 10]) with increased function evaluations (i.e. mFes = 3,000). Table 7 shows the statistical results of the algorithms for Case 3. From Table 7, it can be observed that SaDE has a 100% success rate in identifying the optimal solutions. Also, based on its minimum standard deviation (i.e. 0.6817), it can be concluded that SaDE provides consistent results for Case 3. The other algorithms, namely PSO, Jaya and CMAES, have success rates of 0, 10 and 65%, respectively. In addition, based on their standard deviations and means, they do not provide consistent results for Case 3.
Statistical results of the algorithm for Case 3
Statistical parameter | SaDE | Jaya | PSO | CMAES |
---|---|---|---|---|
Minimum | 36.7679 | 36.7684 | 25121 | 36.7684 |
Mean | 36.9731 | 16421 | 929.47 | 6805.9 |
Standard deviation | 0.6817 | 7058.5 | 2916.9 | 9464.7 |
Maximum (worst) | 39.8254 | 19460 | 19615 | 19670 |
Success Rate | 20/20 | 2/20 | 0/20 | 13/20 |
Constraint handling
This section analyses the impact of the proposed constraint on identifying the optimal solutions. Table 8 shows the performance of the algorithms with the constraint. From Table 8, it can be concluded that all the algorithms possess a 100% success rate in identifying the optimal solution. Table 9 presents the performance of the algorithms without the constraint.
Performance of the algorithms with constraint
Statistical parameter | SaDE | Jaya | PSO | CMAES |
---|---|---|---|---|
Minimum | 36.7679 | 36.7940 | 44.8281 | 36.7686 |
Success rate | 20/20 | 20/20 | 20/20 | 20/20 |
Performance of the algorithms without constraint
Statistical parameter | SaDE | Jaya | PSO | CMAES |
---|---|---|---|---|
Minimum | Complex number | 8750.8 | 38.6359 | 36.8432 |
Success rate | 0/20 | 0/20 | 1/20 | 20/20 |
From Table 9, it can be noted that CMAES performs excellently, with a 100% success rate in identifying the optimal solutions without the constraint. Meanwhile, SaDE and Jaya fail to identify the optimal solutions without the constraint, i.e. both algorithms have a 0% success rate. PSO identifies the optimal solution in only one run out of 20, i.e. it has only a 5% success rate. This shows that the proposed constraint helps the SaDE, Jaya and PSO algorithms to identify the feasible region of solutions very quickly, as is also evident from Case 2.
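The exact constraint proposed in the paper is not reproduced in this excerpt. A plausible reading of Table 9 (SaDE's unconstrained "minimum" being a complex number) is that negative storage raised to the fractional power 1/m leaves the real domain; a generic, hypothetical penalty wrapper of the following kind illustrates how such infeasible evaluations can be rejected so that the search is steered back into the feasible region.

```python
import numpy as np

def constrained(fitness, penalty=1e12):
    """Hypothetical penalty wrapper (the paper's actual constraint is not
    shown in this excerpt): parameter sets whose fitness comes out complex
    or non-finite -- e.g. negative storage raised to the fractional power
    1/m -- are assigned a large penalty."""
    def wrapped(params):
        val = fitness(params)
        if np.iscomplexobj(val):
            if np.imag(val) != 0:       # complex-valued routing: infeasible
                return penalty
            val = np.real(val)
        return penalty if not np.isfinite(val) else float(val)
    return wrapped
```

Any of the optimizers above can then minimize `constrained(f)` instead of `f` without further changes.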
CONCLUSION
In the current study, the performance of Jaya, CMAES and SaDE has been compared in the evaluation of the parameters of the NMM, alongside PSO, GA, DE, NONLR and BFGS. It is found that SaDE provides the minimum sum of squared deviations between the observed and computed outflow values. Moreover, SaDE is capable of finding the optimal value within the maximum function evaluations when the other optimization algorithms cannot. This shows that the SaDE algorithm is efficient in determining the optimal values with minimum mFes, even in a large search space. Further, the results show that the SaDE, Jaya and PSO algorithms perform well in identifying feasible solutions with the help of the proposed constraint in the formulated problem.
DATA AVAILABILITY STATEMENT
All relevant data are included in the paper or its Supplementary Information.
CONFLICT OF INTEREST
The authors declare there is no conflict.