Abstract
The hydraulic jump, a rapid flow-regime transition, is an important phenomenon for the dissipation of energy and can occur in many different types of stilling basins. In this study, hydraulic jump characteristics such as the relative jump length and the sequent depth ratio occurring in a suddenly expanding stilling basin were estimated using hybrid extreme learning machine (ELM) models. To hybridize ELM, the imperialist competitive algorithm (ICA), firefly algorithm (FA) and particle swarm optimization (PSO) metaheuristic algorithms were implemented. In addition, six different models were established to determine the effective dimensionless (relative) input variables. A new data set was constructed by adding the data obtained from the experimental study in the present research to data obtained from the literature. The performance of each model was evaluated using k-fold cross-validation. Results showed that ICA hybridization slightly outperformed the FA and PSO methods. Considering the relative input parameters, Froude number (Fr), expansion ratio (B) and relative sill height (S), the effective input combinations were Fr–B–S and Fr–B for the prediction of the sequent depth ratio (Y) and the relative hydraulic jump length (Lj/h1), respectively.
HIGHLIGHTS
Suddenly expanding stilling basins were examined both experimentally and using AI.
Hydraulic jump characteristics were estimated using hybrid extreme learning machine.
New laboratory data was modeled using novel machine learning algorithms.
Among optimization algorithms, ICA was superior to PSO and FA.
The performance of each model was evaluated using k-fold cross-validation.
INTRODUCTION
The hydraulic jump is an important phenomenon that is widely used in hydraulic engineering. It is a rapid transition from a high-velocity supercritical flow to a slower subcritical flow. It usually occurs downstream of dam spillways, in streams and rivers, and in industrial channels. There are many different types of stilling basins; according to plan geometry, they are classified as gradually expanding or suddenly expanding.
In the literature, the hydraulic jump has been widely studied. Bakhmeteff & Matzke (1936) proposed dimensionless free-surface profiles and presented experimental data for the sequent depths and the length of the jump. Bradley & Peterka (1957) developed a stilling basin type in which the hydraulic jump occurs. Rajaratnam & Murahari (1971) presented an experimental study of forced hydraulic jumps formed with two-dimensional baffles or baffle walls. Hager studied hydraulic jumps in horizontal, rectangular and non-prismatic channels (Hager 1985) and in U-shaped channels (Hager 1989). Gharangik & Chaudhry (1991) solved the one-dimensional Boussinesq equations to simulate a hydraulic jump in a rectangular channel. Bremen & Hager (1993) comprehensively studied the T-jump, one of the hydraulic jump types in abruptly expanding stilling basins. In addition, Bremen & Hager (1994) studied expanding stilling basins and the use of sills for energy dissipation, and proposed the central sill as the most efficient arrangement. However, they tested expansion ratios of B = 0.33–0.67 and only suggested estimated values for B = 0.25. Zare & Doering (2011) examined the forced hydraulic jump in abruptly expanding stilling basins.
Artificial intelligence methods have proved to be attractive modeling tools (Ebtehaj et al. 2016; Saghebian 2019; Dutta et al. 2020). Kisi (2005) examined the streamflow–suspended sediment relationship with an artificial neural network (ANN) and a neuro-fuzzy system (NF). Machine learning methods are widely used for the estimation of hydrological parameters. Kişi (2006) modeled the daily pan evaporation process using NF and ANN techniques. Artificial intelligence is also widely used in the modeling of hydraulic structures. Paoli et al. (2010) used the multi-layer perceptron (MLP) network, the most widely used ANN architecture, both for renewable energy and for time series forecasting. Roushangar et al. (2014) modeled energy dissipation in nappe and skimming flow regimes through a stepped spillway with ANN and genetic expression programming (GEP) techniques. Azimi et al. (2018) modeled hydraulic jump characteristics in a rough channel bed using an ANFIS model optimized with the firefly algorithm (FA). Ebtehaj & Bonakdari (2016) used the evolutionary algorithms particle swarm optimization (PSO) and the imperialist competitive algorithm (ICA) to predict non-deposition sediment transport. In particular, the extreme learning machine (ELM) has many significant features that distinguish it from the other aforementioned machine learning algorithms. Azimi et al. (2017) used ELM with sensitivity analysis to investigate the factors affecting the discharge coefficient in trapezoidal channels; they stated that ELM has advantages in terms of training speed and generalization performance. Ebtehaj et al. (2017) used a self-adaptive extreme learning machine to model the maximum scour depth around bridge piers.
In the context of the hydraulic jump, Güven et al. (2006) modeled pressure fluctuations beneath a type of hydraulic jump (B-jump) using a multilayer feed-forward neural network with a back-propagation learning algorithm. Karbasi & Azamathulla (2016) used GEP to predict the characteristics of a hydraulic jump over a rough bed, comparing the performance of the GEP model with traditional equations and common artificial intelligence techniques (ANN and support vector regression). Roushangar et al. (2017) examined suddenly expanding stilling basins and modeled hydraulic jump characteristics such as the sequent depth ratio (h2/h1) and the relative jump length (Lj/h1). The data used in their modeling were obtained from Bremen (1990). They compared GEP with existing empirical equations in the literature and found that GEP had the best performance. Senthil Kumar et al. (2013) modeled streamflow at Kasol in India using decision tree, MT, fuzzy logic and ANN methods, and concluded that the decision tree algorithm (REPTree) consistently performed better in terms of the selected performance criteria.
Prediction of hydraulic jump characteristics such as the jump length, width, roller length and sill height is of great importance in hydraulic engineering. The aim of this study is to estimate the characteristics of the hydraulic jump occurring in a suddenly expanding stilling basin with a central sill, for many different variable combinations, using hybrid algorithms. Some of the data used in the modeling were obtained from the literature, and the rest came from a new experimental study performed in this research. The new experimental data cover the expansion ratio B = 0.25 for the first time. Novel hybrid algorithms, ELM-PSO, ELM-FA and ELM-ICA, were used for the modeling.
The current study presents two significant novelties. First, the evolutionary algorithms PSO, FA and ICA are employed to investigate the modeling performances; second, new experimental data collected by the authors are combined with existing data in the literature for the machine learning procedure. The data are divided into ten subsets for k-fold cross-validation, i.e. ten-fold cross-validation is performed, and the new experimental data are placed in the tenth fold. The performance of this fold is particularly important because it covers a new data range, and a machine learning algorithm cannot be expected to predict accurately outside the range learned in the training phase. This makes k-fold cross-validation especially important for the broad range of data considered in the study, including the new experimental data added to the available literature data, which is an innovative aspect of the present research.
MATERIALS AND METHODS
Data set
A total of 165 data points were used to model the suddenly expanding stilling basin, ten of which were new. These data were obtained from Bremen & Hager (1994) and from a new experimental study conducted in the Hydraulic Laboratory of Inonu University. The experiments of Bremen were carried out at the Laboratoire de Constructions Hydrauliques of the Ecole Polytechnique Federale de Lausanne (EPFL) and were designed for expanding channels with a central sill (Figure 1(a)). The new experiments, summarized in Table 1 (Q is the flow rate), used an experimental setup with b1 = 30 cm, b2 = 120 cm and xs = 18 cm, in which the flow was supplied by pumps controlled by a PLC automation system (Figure 1(b)). Discharge was measured by an electromagnetic flow meter with an accuracy of ±0.01 m3/s. The water level was measured with a Mitutoyo digital meter with an accuracy of ±0.01 mm.
New experimental results used in the modeling
Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---
Q (l/s) | 40 | 35 | 30 | 25 | 20 | 40 | 35 | 30 | 25 | 20 |
s (cm) | 5 | 5 | 5 | 5 | 5 | 7.5 | 7.5 | 7.5 | 7.5 | 7.5 |
Lj (cm) | 211.3 | 200.3 | 186.4 | 175.4 | 168.6 | 190.3 | 184.6 | 174.7 | 156.9 | 143 |
h1 (cm) | 3.98 | 3.55 | 3.03 | 2.61 | 2.22 | 3.98 | 3.55 | 3.03 | 2.61 | 2.22 |
A view of suddenly expanding stilling basin with central sill: (a) schematic and (b) plan view of the new experimental setup.
In Figure 1, b1 is the width of the first section before the sudden expansion, b2 is the width of the expanded second section, bs is the width of the central sill, x1 is the length of the hydraulic jump before the sudden expansion and xj is the length of the hydraulic jump that occurs within the sudden expansion; adding xj to x1 gives the jump length Lj. Y is the ratio of the water depth after the hydraulic jump (h2) to the water depth before the hydraulic jump (h1), and s is the height of the step or sill.
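For concreteness, the relative (dimensionless) parameters used as model inputs can be computed from the raw measurements in Table 1. The sketch below assumes the standard definitions Fr = V1/(g·h1)^0.5 with V1 = Q/(b1·h1), B = b1/b2 and S = s/h1, which the text does not restate explicitly:

```python
import math

def dimensionless(Q, b1, b2, s, h1, Lj, g=9.81):
    """Compute the relative hydraulic jump parameters.

    Q in m^3/s, lengths in m. Returns the inlet Froude number Fr,
    expansion ratio B, relative sill height S and relative jump
    length Lj/h1 (standard definitions assumed).
    """
    V1 = Q / (b1 * h1)            # mean approach velocity in the inlet section
    Fr = V1 / math.sqrt(g * h1)   # inlet Froude number
    return {"Fr": Fr, "B": b1 / b2, "S": s / h1, "Lj/h1": Lj / h1}

# Run 1 of the new experiments: Q = 40 l/s, s = 5 cm, h1 = 3.98 cm, Lj = 211.3 cm
p = dimensionless(Q=0.040, b1=0.30, b2=1.20, s=0.05, h1=0.0398, Lj=2.113)
# p["B"] is 0.25, the new expansion ratio contributed by this study
```

All resulting values fall inside the ranges reported in Table 2.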

Statistical indices for experimental tests
Data Type | Minimum | Maximum | Mean | Std Deviation | Skewness
---|---|---|---|---|---
Fr | 2.97 | 9.02 | 6.05 | 1.96 | − 0.026 |
B | 0.25 | 0.67 | 0.48 | 0.15 | 0.014 |
S | 0.66 | 3.33 | 1.44 | 0.73 | 1.08 |
Y | 2.89 | 10.63 | 6.29 | 2.14 | 0.12 |
Lj/h1 | 34.43 | 118.25 | 71.99 | 23.37 | 0.1873 |
Different combinations of the parameters
Model Name | Selected Parameters | Predicted Parameter
---|---|---
Model 1 | Fr, B, S | Y=h2/h1 |
Model 2 | Fr, S | |
Model 3 | Fr, B | |
Model 4 | Fr, B, S | Lj/h1 |
Model 5 | Fr, S | |
Model 6 | Fr, B |
Extreme learning machine
ELM is a neural network with a single hidden layer (Huang et al. 2006). The learning structure of ELM has many advantages over the classical BP algorithm. Whereas the conventional BP algorithm is a gradient-based learning process that iteratively tunes all network parameters, ELM begins by generating random weight and bias values for the network. ELM consists of three layers: an input layer, a hidden layer and an output layer. These layers form a single-hidden-layer feed-forward network in which linear algebra is used to obtain the optimum output-layer weights. The training process is extremely fast and the generalization potential is high in ELM (Li et al. 2019).
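As a minimal illustration (a sketch, not the authors' implementation), the ELM training scheme — random hidden-layer weights and biases followed by a closed-form least-squares solution for the output weights — can be written as:

```python
import numpy as np

def elm_train(X, y, n_hidden=15, rng=None):
    """Train a single-hidden-layer ELM: the hidden weights and biases are
    random and fixed; the output weights come from the Moore-Penrose
    pseudo-inverse of the hidden-layer output matrix."""
    rng = rng or np.random.default_rng(0)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1, 1, n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a simple nonlinear target to show the idea
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
W, b, beta = elm_train(X, y, n_hidden=20)
err = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

There is no iterative weight tuning: a single linear-algebra step replaces the whole BP training loop, which is the source of ELM's speed.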
Particle swarm optimization
Particle swarm optimization (PSO) is a metaheuristic algorithm inspired by the social behavior of birds and fish, developed by Kennedy & Eberhart (1995). PSO, a population-based approach to stochastic optimization, starts with a random population of solutions (particles) in the search area and updates the optima iteratively. The consequence of this simulation of social behavior is a search mechanism by which particles travel toward appropriate locations. Particles learn from each other in the swarm and, based on the information gained, move toward better neighbors. At each step, a particle adjusts its location in the search space according to the best position it has found so far and the best position in its neighborhood. Particle i is represented by a position vector and a velocity vector in the n-dimensional search space. The particle update uses two guiding solutions: the best solution each particle has found so far, called 'pbest', and the best solution found by the whole swarm, called 'gbest'. The PSO algorithm structure is shown in Figure 3.
The first step in Figure 3 is to initialize the population with random velocities and positions. The next step is to evaluate each particle with the fitness (objective) function. Once the best fitness value satisfies the stopping criterion, the scheme can be stopped and the identified parameters exported. If the stopping criterion is not met, the particle velocities and positions are updated and the evaluation is repeated (Kennedy & Eberhart 1995; Shi & Eberhart 1999).
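The velocity and position updates follow the standard PSO formulation, v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x) and x ← x + v. A minimal sketch (with illustrative parameter values, not those of Table 4):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: inertia term plus attraction toward each particle's
    own best position (pbest) and the swarm's best position (gbest)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v                                               # position update
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                                   # improved personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()                    # global best
    return g, float(pbest_f.min())

# Minimize the sphere function; the optimum is at the origin
best, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```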
Firefly algorithm
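In Yang's standard FA formulation (assumed here), each firefly moves toward brighter (better) fireflies with an attractiveness that decays with distance, plus a random mutation term. A minimal sketch of one movement step, using the coefficient values listed in Table 4 (light absorption γ = 0.9, attraction base β0 = 2, mutation α = 0.3):

```python
import numpy as np

def firefly_step(X, fitness, beta0=2.0, gamma=0.9, alpha=0.3, rng=None):
    """One movement step of the standard firefly algorithm: each firefly
    moves toward every brighter (lower-cost) firefly with attractiveness
    beta0*exp(-gamma*r^2), plus a random mutation of magnitude alpha."""
    rng = rng or np.random.default_rng(0)
    X = X.copy()
    f = np.array([fitness(x) for x in X])
    for i in range(len(X)):
        for j in range(len(X)):
            if f[j] < f[i]:                          # firefly j is brighter
                r2 = float(np.sum((X[i] - X[j]) ** 2))
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
                f[i] = fitness(X[i])
    return X

# One step on the sphere function
X0 = np.random.default_rng(1).uniform(-2, 2, (10, 2))
X1 = firefly_step(X0, lambda x: float(np.sum(x ** 2)))
```

In the full algorithm this step is iterated while the mutation coefficient is damped each iteration (the damp rate of Table 4).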
Imperialist competitive algorithm
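In the imperialist competitive algorithm (Atashpaz-Gargari & Lucas 2007), the best solutions become imperialists and the remaining solutions become colonies that are assimilated toward them. A minimal sketch of the assimilation step only (empire competition and revolution are omitted), using the assimilation coefficient of Table 4:

```python
import numpy as np

def assimilate(colonies, imperialist, beta=1.5, rng=None):
    """ICA assimilation: each colony moves a random fraction (uniform in
    [0, beta]) of the distance toward its imperialist; in expectation
    this brings the colonies closer to the imperialist."""
    rng = rng or np.random.default_rng(0)
    d = imperialist - colonies                      # direction toward the imperialist
    step = rng.uniform(0, beta, size=colonies.shape)
    return colonies + step * d

rng = np.random.default_rng(2)
colonies = rng.uniform(-5, 5, (8, 3))
imperialist = np.zeros(3)
moved = assimilate(colonies, imperialist)
```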

k-fold cross validation
Cross-validation tests the performance of a predictive model and is applied to a specific data set in statistical analyses. Several kinds of cross-validation are available, including repeated random sub-sampling validation, k-fold cross-validation and the Monte Carlo method, as seen in Figure 5 (Bengio & Grandvalet 2004; Rohani et al. 2018).
In the first stage, the data set to be evaluated is divided into k equal subsets. In each round, k−1 subsets are selected as training data for the model and the remaining subset (fold t) is selected as test data. The accuracy calculated for fold t is added to the cross-validation (CV) array. This process is repeated k times, once for each subset. The accuracies calculated over all rounds are then averaged, and either this average or the lowest accuracy is used to indicate the performance of the model (see Figure 6).
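The procedure above can be sketched generically as follows (illustrative code; the study itself uses k = 10 with the new experimental data in the tenth fold):

```python
import numpy as np

def k_fold_cv(X, y, train_and_score, k=10):
    """Split the data into k folds; each fold serves once as test data
    while the remaining k-1 folds are used for training. Returns the
    per-fold accuracies (the CV array) and their average."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, k)
    cv = []
    for t in range(k):
        test = folds[t]
        train = np.concatenate([folds[j] for j in range(k) if j != t])
        cv.append(train_and_score(X[train], y[train], X[test], y[test]))
    return cv, float(np.mean(cv))

# Toy check with a scorer that just reports the test-fold size fraction
X = np.arange(100).reshape(-1, 1).astype(float)
y = X[:, 0] * 2.0
scores, avg = k_fold_cv(X, y, lambda Xtr, ytr, Xte, yte: len(Xte) / 100, k=10)
```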
RESULTS AND DISCUSSION
Hybridization
The aforementioned optimization algorithms (PSO, FA and ICA) were implemented for the hybridization of ELM. The number of hidden-layer neurons was selected between 10 and 20 for each model to provide the optimum performance. First, the bias and weight values were collected in a vector and the initial population of the metaheuristic was created. Second, the population was searched for the best solution, and the test data were evaluated using the best weight and bias values. This process was performed in each fold. Table 4 shows the initial parameters of the evolutionary algorithms.
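A generic sketch of this hybridization: the fitness function decodes a flat candidate vector into ELM input weights and biases, solves the output weights in closed form and scores the candidate by training RMSE. A simple random search stands in for PSO/FA/ICA here:

```python
import numpy as np

def make_fitness(X, y, n_hidden):
    """Fitness for the metaheuristic: decode a flat vector into the ELM
    input weights and hidden biases, solve the output weights by least
    squares and return the training RMSE."""
    n_in = X.shape[1]
    def fitness(vec):
        W = vec[: n_in * n_hidden].reshape(n_in, n_hidden)
        b = vec[n_in * n_hidden :]
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ y
        return float(np.sqrt(np.mean((H @ beta - y) ** 2)))
    return fitness

# Random search stands in for PSO/FA/ICA: keep the best weight/bias vector
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (40, 3))                 # e.g. Fr, B, S as inputs
y = X @ np.array([1.0, -2.0, 0.5])             # synthetic target
fit = make_fitness(X, y, n_hidden=10)
best = min((rng.uniform(-1, 1, 3 * 10 + 10) for _ in range(20)), key=fit)
```

Replacing the random candidate generation with the PSO update, the firefly movement or the ICA assimilation step yields ELM-PSO, ELM-FA and ELM-ICA, respectively.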
Initial parameters for PSO, FA and ICA evolutionary algorithms
Algorithm | Parameters (name = value)
---|---
PSO | Maximum Number of Iterations = 100; Population Size = 200; Inertia Weight = 0.1; Inertia Weight Damp Rate = 0.99; Personal Learning Coefficient = 0.05; Global Learning Coefficient = 2
FA | Maximum Number of Iterations = 100; Population Size = 20; Light Absorption Coefficient = 0.9; Attraction Coefficient Base Value = 2; Mutation Coefficient = 0.3; Mutation Coefficient Damp Rate = 0.99
ICA | Maximum Number of Iterations = 100; Population Size = 100; Number of Empires = 10; Selection Pressure = 1; Assimilation Coefficient = 1.5; Revolution Rate = 0.1
Evaluation of model performance
Comparison of models
All the simulations were conducted in the MATLAB 2016 environment running on a PC with a 2.67 GHz CPU and 4 GB of memory. ELM was tuned using PSO, FA and ICA during the training phase, with RMSE as the objective function, and the optimization was run for 100 iterations. The stability of machine learning models depends strongly on the properties of the data. As can be seen in Figure 7, the stability across folds is better for Y than for Lj/h1; judging by the VAF results in Table 5, this may be due to the lower variance of Y. ELM-PSO, ELM-FA and ELM-ICA became almost stable after the 20th iteration. When the behavior of each fold is analyzed, almost all folds lead to very similar training error rates for the Y prediction, while there are differences between the folds for the Lj/h1 prediction (Figure 7). Table 6 summarizes the CPU time of each fold for ELM-PSO, ELM-FA and ELM-ICA during the training phase. In terms of time consumption, ELM-ICA outperformed ELM-PSO and ELM-FA for every fold and every model.
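The performance criteria reported in Tables 5–7 can be computed with their standard definitions (assumed here, as the paper does not restate the formulas):

```python
import numpy as np

def performance(obs, pred):
    """Standard goodness-of-fit criteria (definitions assumed): correlation
    R, variance accounted for (VAF, %), RMSE, scatter index SI, MAE, mean
    absolute and signed relative errors, bias, and Nash-Sutcliffe efficiency."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    e = pred - obs
    rmse = float(np.sqrt(np.mean(e ** 2)))
    return {
        "R": float(np.corrcoef(obs, pred)[0, 1]),
        "VAF": float((1 - np.var(e) / np.var(obs)) * 100),
        "RMSE": rmse,
        "SI": rmse / float(np.mean(obs)),            # RMSE relative to the mean
        "MAE": float(np.mean(np.abs(e))),
        "MARE": float(np.mean(np.abs(e) / obs)),
        "MRE": float(np.mean(e / obs)),
        "BIAS": float(np.mean(e)),
        "Nash": float(1 - np.sum(e ** 2) / np.sum((obs - obs.mean()) ** 2)),
    }

m = performance([2.0, 4.0, 6.0], [2.0, 4.0, 6.0])   # perfect prediction
```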
Performance criteria for modeling with different models
Models | R | VAF | RMSE | SI | MAE | MARE | MRE | BIAS | Nash
---|---|---|---|---|---|---|---|---|---
ELM-PSO | |||||||||
Model 1 | 0.99 | 97.99 | 0.19 | 0.03 | 0.15 | 0.03 | 0.002 | 0.005 | 0.96 |
Model 2 | 0.99 | 97.95 | 0.19 | 0.03 | 0.15 | 0.03 | 0.0009 | − 0.004 | 0.96 |
Model 3 | 0.99 | 96.18 | 0.51 | 0.08 | 0.44 | 0.07 | 0.010 | 0.034 | 0.90 |
Model 4 | 0.92 | 72.13 | 16.42 | 0.23 | 14.81 | 0.21 | 0.056 | 0.899 | 0.48 |
Model 5 | 0.92 | 74.99 | 15.92 | 0.23 | 14.46 | 0.20 | 0.055 | 0.861 | 0.50 |
Model 6 | 0.92 | 71.32 | 10.56 | 0.15 | 8.85 | 0.12 | 0.034 | 0.828 | 0.72 |
ELM-FA | |||||||||
Model 1 | 0.99 | 97.99 | 0.19 | 0.03 | 0.15 | 0.03 | 0.002 | 0.007 | 0.97 |
Model 2 | 0.99 | 97.83 | 0.21 | 0.03 | 0.17 | 0.03 | 0.004 | 0.008 | 0.96 |
Model 3 | 0.99 | 96.26 | 0.50 | 0.08 | 0.44 | 0.07 | 0.010 | 0.029 | 0.91 |
Model 4 | 0.92 | 75.48 | 16.12 | 0.23 | 14.67 | 0.21 | 0.056 | 0.791 | 0.48 |
Model 5 | 0.92 | 75.54 | 15.62 | 0.22 | 14.07 | 0.19 | 0.051 | 0.662 | 0.50 |
Model 6 | 0.92 | 71.99 | 10.44 | 0.15 | 8.71 | 0.12 | 0.034 | 0.769 | 0.72 |
ELM-ICA | |||||||||
Model 1 | 0.99 | 97.67 | 0.24 | 0.04 | 0.19 | 0.03 | 0.0008 | 0.004 | 0.97 |
Model 2 | 0.99 | 97.60 | 0.24 | 0.04 | 0.19 | 0.03 | − 0.0006 | − 0.008 | 0.96 |
Model 3 | 0.99 | 96.47 | 0.50 | 0.08 | 0.44 | 0.08 | 0.011 | 0.028 | 0.91 |
Model 4 | 0.92 | 77.62 | 15.19 | 0.21 | 13.70 | 0.19 | 0.051 | 0.502 | 0.50 |
Model 5 | 0.92 | 76.21 | 14.67 | 0.21 | 13.16 | 0.18 | 0.042 | 0.143 | 0.53 |
Model 6 | 0.92 | 71.05 | 10.35 | 0.15 | 8.60 | 0.12 | 0.033 | 0.702 | 0.72 |
Time consumption during the training phase
Fold No | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Sum
---|---|---|---|---|---|---|---|---|---|---|---
ELM-PSO | |||||||||||
Model 1 | 17.74 | 27.84 | 22.48 | 17.42 | 16.60 | 18.38 | 13.52 | 18.97 | 13.37 | 12.06 | 178.4 |
Model 2 | 14.56 | 17.51 | 13.34 | 12.47 | 12.43 | 12.80 | 12.29 | 12.67 | 12.26 | 13.13 | 133.5 |
Model 3 | 14.23 | 12.41 | 12.64 | 11.92 | 11.88 | 11.86 | 11.82 | 13.34 | 13.08 | 15.73 | 128.9 |
Model 4 | 16.97 | 14.73 | 21.81 | 19.76 | 13.00 | 13.27 | 12.84 | 12.93 | 12.70 | 12.86 | 150.9 |
Model 5 | 14.90 | 14.62 | 14.03 | 12.81 | 12.49 | 12.76 | 13.11 | 12.54 | 12.96 | 13.22 | 133.4 |
Model 6 | 14.42 | 12.65 | 12.78 | 12.02 | 12.04 | 12.47 | 12.92 | 12.58 | 14.00 | 12.47 | 128.4 |
ELM-FA | |||||||||||
Model 1 | 15.61 | 12.82 | 12.62 | 12.74 | 12.61 | 12.68 | 13.36 | 13.07 | 12.97 | 13.17 | 131.7 |
Model 2 | 14.39 | 12.65 | 12.41 | 12.50 | 12.43 | 12.38 | 12.42 | 12.44 | 12.39 | 12.53 | 126.5 |
Model 3 | 14.50 | 12.77 | 12.44 | 12.40 | 12.38 | 12.34 | 12.30 | 12.22 | 12.32 | 12.42 | 126.1 |
Model 4 | 14.50 | 13.02 | 12.67 | 12.69 | 12.84 | 12.67 | 12.67 | 12.62 | 12.67 | 12.69 | 129.0 |
Model 5 | 14.71 | 12.69 | 12.45 | 12.35 | 12.49 | 13.01 | 12.59 | 12.58 | 12.63 | 13.59 | 129.1 |
Model 6 | 15.28 | 12.89 | 12.68 | 12.58 | 12.64 | 12.57 | 12.52 | 12.53 | 12.53 | 12.67 | 128.9 |
ELM-ICA | |||||||||||
Model 1 | 11.59 | 10.25 | 10.01 | 9.97 | 10.06 | 10.01 | 9.91 | 9.89 | 10.02 | 10.14 | 101.9 |
Model 2 | 13.78 | 13.93 | 11.92 | 10.78 | 10.75 | 10.55 | 10.55 | 10.40 | 10.64 | 10.66 | 113.9 |
Model 3 | 12.18 | 10.82 | 10.66 | 10.52 | 10.60 | 10.57 | 12.33 | 13.61 | 10.85 | 10.74 | 112.9 |
Model 4 | 12.19 | 10.95 | 10.70 | 10.73 | 10.77 | 10.66 | 10.58 | 10.70 | 10.87 | 10.92 | 109.1 |
Model 5 | 12.54 | 11.04 | 10.81 | 10.60 | 11.03 | 10.65 | 10.75 | 10.74 | 10.66 | 10.94 | 109.7 |
Model 6 | 12.04 | 10.89 | 10.59 | 11.48 | 10.64 | 10.55 | 10.53 | 12.30 | 10.64 | 10.71 | 110.4 |
RMSE for each iteration in the training phase.
In Table 5, the results are summarized for the models that predicted Y and Lj/h1. Results for the prediction of Y indicate that Model 1 was superior to the other models for ELM-PSO, ELM-FA and ELM-ICA, with RMSE = 0.19, RMSE = 0.19 and RMSE = 0.24, respectively. Comparing the machine learning algorithms, ELM-FA outperformed ELM-PSO and ELM-ICA for Model 1, with Nash = 0.97, Nash = 0.96 and Nash = 0.97, respectively. Results for the relative hydraulic jump length, Lj/h1, indicate that Model 6 was superior to the other models for ELM-PSO, ELM-FA and ELM-ICA, with RMSE = 10.56, RMSE = 10.44 and RMSE = 10.35, respectively; here ELM-ICA was slightly better than ELM-PSO and ELM-FA. Scatter plots of all models are presented in Figure 8. As can be seen in Figure 8, the Y predictions fall within the shaded area (±10% confidence intervals), while the Lj/h1 predictions are more scattered. Despite the addition of experimental data with a new range, very good results were obtained. Training and testing data are very important for machine learning methods, which predict best within the range of data learned during training; therefore, the distribution of the training and test data matters greatly. In this study, unlike the suddenly expanding energy dissipation models in the literature, both the existing experimental data in the literature and the collected data with a new range were used and evaluated with the reliable k-fold cross-validation method. Despite these difficulties, the hybrid models ELM-PSO, ELM-FA and ELM-ICA generated almost the same results as SVM and GEP. Roushangar et al. (2017) and Roushangar & Ghasempour (2018) modeled the data from Bremen & Hager (1994) via SVM and GEP, respectively; their studies covered the expansion range B = 0.33–0.75. In the estimation of Y, SVM outperformed GEP, with R = 0.993 for SVM compared with R = 0.948 for GEP. SVM was also superior to GEP for Lj/h1, with R = 0.93 for SVM compared with R = 0.82 for GEP.
Scatter plots for all models with ±10% confidence intervals (shaded area).
Experimental validation
The performance of the various models for the new experimental data obtained in the present research, for the sequent depth ratio and the relative jump length, is presented in Table 7. Model 3 and Model 5, the superior models among the machine learning algorithms, were used to assess the fit to the new experimental data (Figure 10). ELM-PSO, ELM-FA and ELM-ICA outperformed the conventional regression equations of Herbrand (1973), Bremen & Hager (1994) and Zare & Doering (2011) for the estimation of Y, with RMSE = 0.63, RMSE = 0.64, RMSE = 0.61, RMSE = 1.77, RMSE = 0.68 and RMSE = 1.67, respectively (Table 7). Measuring the hydraulic jump length is very difficult because the jump length is dynamic; therefore, the variance of the Lj/h1 values was higher than that of the Y values (Table 7).
Performance of various regression equations and superior machine learning models for the new experimental data in the present research
Models for Y | R | VAF | RMSE | SI | MAE | MARE | MRE | BIAS | Nash
---|---|---|---|---|---|---|---|---|---
Herbrand (1973) | 0.93 | 60.02 | 1.77 | 0.31 | 1.71 | 0.30 | 0.30 | 1.71 | − 0.04 |
Bremen & Hager (1994) | 0.97 | 90.15 | 0.68 | 0.12 | 0.64 | 0.11 | − 0.11 | − 0.64 | 0.58 |
Zare & Doering (2011) | 0.93 | 61.09 | 1.67 | 0.29 | 1.61 | 0.28 | 0.28 | 1.61 | − 0.04 |
ELM-PSO | |||||||||
Model 1 | 0.94 | 13.84 | 0.37 | 0.07 | 0.27 | 0.05 | 0.01 | 0.07 | 0.73 |
Model 2 | 0.94 | 8.49 | 0.40 | 0.07 | 0.27 | 0.05 | 0.02 | 0.15 | 0.69 |
Model 3 | 0.95 | 54.5 | 0.63 | 0.1 | 0.58 | 0.10 | − 0.092 | − 0.56 | 0.50 |
ELM-FA | |||||||||
Model 1 | 0.94 | 12.22 | 0.37 | 0.07 | 0.28 | 0.05 | 0.004 | 0.04 | 0.73 |
Model 2 | 0.94 | 0.12 | 0.41 | 0.08 | 0.28 | 0.05 | 0.03 | 0.18 | 0.67 |
Model 3 | 0.95 | 53.55 | 0.64 | 0.10 | 0.59 | 0.10 | − 0.09 | − 0.56 | 0.49 |
ELM-ICA | |||||||||
Model 1 | 0.94 | − 6.56 | 0.38 | 0.07 | 0.31 | 0.06 | − 0.006 | − 0.02 | 0.71 |
Model 2 | 0.94 | 18.89 | 0.38 | 0.07 | 0.25 | 0.04 | 0.021 | 0.13 | 0.71 |
Model 3 | 0.97 | 72.36 | 0.61 | 0.10 | 0.56 | 0.09 | − 0.092 | − 0.56 | 0.53 |
Models for Lj/h1 | |||||||||
Herbrand (1973) | 0.88 | 54.16 | 21.40 | 0.36 | 20.74 | 0.34 | 0.34 | 20.74 | − 0.04 |
Bremen & Hager (1994) | 0.78 | 40.23 | 6.02 | 0.10 | 4.69 | 0.08 | − 0.01 | − 0.11 | − 4.53 |
Zare & Doering (2011) | 0.88 | 57.72 | 17.72 | 0.30 | 16.98 | 0.28 | 0.28 | 16.98 | − 0.05 |
ELM-PSO | |||||||||
Model 4 | 0.88 | − 10.88 | 12.99 | 0.18 | 12.18 | 0.17 | − 0.17 | − 12.19 | 0.19 |
Model 5 | 0.88 | − 36.03 | 14.47 | 0.20 | 13.70 | 0.19 | − 0.19 | − 13.70 | 0.16 |
Model 6 | 0.56 | 27.63 | 26.87 | 0.32 | 25.42 | 0.30 | − 0.30 | − 25.42 | − 0.02 |
ELM-FA | |||||||||
Model 4 | 0.88 | − 36.86 | 14.31 | 0.20 | 13.52 | 0.19 | − 0.19 | − 13.52 | 0.16 |
Model 5 | 0.88 | − 36.22 | 13.84 | 0.19 | 13.02 | 0.18 | − 0.18 | − 13.02 | 0.17 |
Model 6 | 0.56 | 27.64 | 26.32 | 0.31 | 24.78 | 0.29 | − 0.29 | − 24.79 | − 0.03 |
ELM-ICA | |||||||||
Model 4 | 0.88 | − 47.48 | 12.88 | 0.18 | 11.98 | 0.17 | − 0.17 | − 11.98 | 0.19 |
Model 5 | 0.88 | − 41.50 | 11.21 | 0.16 | 10.43 | 0.15 | − 0.15 | − 10.17 | 0.23 |
Model 6 | 0.52 | 24.6 | 26.03 | 0.31 | 24.13 | 0.28 | − 0.28 | − 24.13 | − 0.05 |
CONCLUSIONS
In this study, the hydraulic jump characteristics for a suddenly expanding stilling basin, the sequent depth ratio (Y) and the relative hydraulic jump length (Lj/h1), were estimated using novel hybrid machine learning algorithms. This study presents a novel method to examine the consistency of new experimental data with those previously presented in the literature. A new experimental data range (B = 0.25) that is not in the literature was modeled using the novel machine learning algorithms ELM-PSO, ELM-FA and ELM-ICA by adding it to the existing data. The classical approach examines the suitability of new experimental data using the regression equations proposed in the literature; in this study, the consistency of the new data with previous studies was examined with both classical regression equations and the k-fold cross-validation method. As a result, it was shown that the tenth fold, which contains the new experimental data presented in this study, fits very well with the folds containing the data previously presented in the literature. According to the Y and Lj/h1 modeling results for ELM-ICA, which was superior among the modeling methods, the best input combinations are Model 1 and Model 6, i.e. Fr–B–S and Fr–B, respectively. The machine learning model results were better than the classical regression results. Among the optimization algorithms, ICA was superior to PSO and FA.
ACKNOWLEDGEMENTS
This research was supported by IUBAP (Inonu University Scientific Projects Unit) under the project numbers FCD-2018-1324 and FBG-2018-1474.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
All relevant data are included in the paper or its Supplementary Information.