In this study, hydraulic jumps over expanding beds with artificial roughness were simulated in FLOW-3D across Froude numbers ranging from 4.34 to 9.37. The simulations covered both smooth and rough beds, with roughness in the form of half-spheres 3, 4, and 5 cm in diameter, and divergence angles of 7°, 14°, and 90°. The results showed that, for the maximum discharge in the suddenly divergent channel, a rough bed with 5-cm elements reduced the flow depth by 19.77% compared with a smooth bed. In all scenarios, the ratio y2/y1 increased as the Froude number increased. In the second phase, soft computing models – Linear Regression, Support Vector Regression, Decision Tree, Random Forest, Bagging, Gradient Boosting, MLP, and Stacking – were employed to model the relationships between the input parameters (Fr1, θ, D/b1, and Kb) and the outputs (y2/y1 and Lj/y1). For y2/y1, the R2 coefficient of the Stacking model was 0.978 in the training stage and 0.988 in the testing stage; for Lj/y1, it was estimated at 0.971 and 0.987, respectively.

  • This study investigates the combined influence of expanding basins and bed roughness on energy dissipation in alluvial channels using numerical and intelligent modeling methods.

  • The research focuses on hydraulic jump characteristics in expanding beds with artificial roughness, considering parameters such as Froude number, roughness diameter, and divergence angle.

Natural flows sometimes cause damage, such as bed erosion and harm to downstream structures like bridge foundations, because of their high energy. Therefore, as far as possible, this energy should be controlled and the flow calmed. Usually, stilling basins are placed in the flow path to reduce the energy and velocity of the flow and prevent further damage. The hydraulic jump is a common phenomenon downstream of hydraulic structures: a rapidly varied, irreversible flow in which supercritical flow is converted into subcritical flow over a relatively short distance, dissipating energy. A review of past research shows that bed roughness or expansion has an important effect in reducing the sequent depth and length of the hydraulic jump and increasing energy dissipation. The simultaneous effect of expansion and bed roughness can therefore reduce the dimensions and cost of stilling basins. Ead & Rajaratnam (2002) first investigated the hydraulic jump on a rough bed and proposed an equation representing the shear force on the rough bed. Gohari & Farhoudi (2009) used rectangular wooden strips with heights of 25 and 15 mm and spacings of 75, 60, 45, 30, and 15 mm as bed roughness in the range of Froude numbers 2.98–9.89. They conducted 210 experiments in a flume 12 m long, 25 cm wide, and 50 cm high and concluded that y2/y1 decreased with increasing roughness spacing and inflow depth, while roughness height had no significant effect on the depth ratio. Velioglu et al. (2015) investigated the hydraulic jump on a rough bed numerically and experimentally and evaluated its characteristics, including the sequent depth, jump length, and energy loss.
The results showed that roughness had a positive effect on the hydraulic jump characteristics: the sequent depth decreased by 18–20% and the jump length by about 20–25% compared with the classical jump. Parsamehr et al. (2017) investigated the effect of a rough bed and channel slope on hydraulic jump characteristics in laboratory studies. They used cubic elements with heights of 0.014 and 0.028 m, and the channel floor was set once horizontal, once with an adverse slope of 1.5%, and once with an adverse slope of 2.5%. Comparing the results of the experiments, they concluded that increasing the height of the roughness elements and steepening the adverse slope increased energy dissipation and decreased the hydraulic jump length.

Azimi et al. (2018) conducted a laboratory study on roughness elements of different shapes in the Froude number range of 3–7.5. They used rectangular, triangular, sinusoidal, and trapezoidal roughness with slopes of 45° and 60° and concluded that bed roughness reduced the jump length and sequent depth compared with the jump on a smooth bed. Palermo & Pagliara (2018), examining the properties of the hydraulic jump on smooth and rough beds of a rectangular channel with zero and adverse slopes, concluded that the amount of energy dissipation depends heavily on the bed roughness and the slope of the channel floor. Nikmehr & Aminpour (2020), through numerical simulation of the hydraulic jump on a rough bed and investigation of the effect of roughness spacing and height in the Froude number range of 5.01–13.7, concluded that bed roughness reduces the sequent depth and length of the hydraulic jump. Maleki & Fiorotto (2021), analyzing the properties of the hydraulic jump on a rough bed, concluded that the jump length depends on the sequent depth and not on the Froude number or the dimensionless roughness ratio; moreover, increasing the roughness size decreases the sequent depth. Felder et al. (2021) investigated, under laboratory conditions, the effect of inflow conditions on the hydraulic jump. The results showed that free-surface profiles, roller lengths, and jump toes are heavily influenced by the inflow conditions, and that hydraulic jumps with fully developed inflow conditions have a relatively shorter length than partially developed ones.

Ziari et al. (2023) investigated the effect of divergence and bed roughness on hydraulic jump characteristics in FLOW-3D. The results show that the EL/E1 ratio increases when bed roughness is applied. With roughness 5 cm in diameter, EL/E1 increases by about 5% on average compared with a smooth bed, and with a sudden divergence, EL/E1 increases by an average of 4.58% compared with a bed diverging at an angle of 7°. Here, EL is the energy difference between the beginning and end of the hydraulic jump and E1 is the energy at the beginning of the jump. A review of previous studies makes clear that bed roughness or divergence plays an important role in reducing the sequent depth and length of the hydraulic jump. Since new construction ideas change the performance of stilling basins, the simultaneous effect of expansion and bed roughness on energy dissipation is very important. Therefore, in this research, a laboratory case was first simulated in the software, then new cases were modeled, and the modeling results were used in soft computing.

Today, with the increase in computing power, researchers have addressed complex, non-linear, indeterminate, and ambiguous problems by implementing soft computing in data-driven models, enabling accurate estimation and saving time and money. Soft computing has been used in many studies related to hydraulic science, such as water quality prediction (Najah et al. 2012), suspended sediment load prediction (Banadkooki et al. 2020), and groundwater level prediction (Sheikh Khozani et al. 2022). In the field of hydraulic jumps and soft computing, the following past studies can be mentioned. Karbasi & Azamathulla (2016) compared the characteristics of hydraulic jumps over rough beds using a Gene Expression Programming (GEP) model and an artificial neural network (ANN); the GEP model demonstrated superior performance, yielding more accurate results than the ANN. Azimi et al. (2018), to estimate the roller length of the hydraulic jump over a rough bed, used adaptive neuro-fuzzy inference system (ANFIS) and Firefly Algorithm–Adaptive Neuro-Fuzzy Inference System (FA-ANFIS) models, treating that parameter as a function of the Froude number, the ratio of bed roughness to the flow depth upstream of the jump, and the ratio of the two depths. According to the results, the FA-ANFIS model was the best for estimating the length of the hydraulic jump. Alizadeh et al. (2020), using a Genetic Algorithm–Adaptive Neuro-Fuzzy Inference System (GA-ANFIS) model, estimated the hydraulic jump length under roughness and slope conditions. The results showed that the inflow Froude number is the most effective parameter in hydraulic jump modeling.
Zounemat-Kermani & Mahdavi-Meymand (2021) evaluated the learning ability and performance of five meta-heuristic optimization algorithms in training forward and recurrent fuzzy-based machine learning models, such as ANFIS and RANFIS (recurrent ANFIS), to predict the downstream flow depth (h2) and jump length (Lj). The results show that the embedded ANFIS and RANFIS models are more accurate than the empirical relations proposed by previous researchers, and comparing the embedded RANFIS and ANFIS methods in predicting Lj indicates the superiority of the RANFIS models. Dasineh et al. (2021) investigated the characteristics of free and submerged hydraulic jumps on triangular bed roughness at various T/I ratios using Computational Fluid Dynamics (CFD) modeling and compared the results using artificial intelligence methods. The results show that the optimal gamma for predicting the length ratio of free jumps (Ljf/y2) is γ = 10 and for submerged jumps (Ljs/y2) it is γ = 0.60. Based on sensitivity analysis, the Froude number has the greatest effect on predicting y3/y1 compared with the submergence factor (SF) and T/I, where y3 is the submerged jump depth. Hassanpour et al. (2022) considered the following dependent variables: the sequent depth ratio, the relative length of the jump, the relative roller length of the jump, the relative energy dissipation, and the water surface profile. They developed a set of formulations based on regression analysis and Sugeno Fuzzy Logic (SFL) to predict these variables from experimental data. The results show that the SFL prediction residuals are homoscedastic for all hydraulic parameters investigated except the water surface profile, whereas the residuals of the regression equations are heteroscedastic. Pourabdollah et al.
(2022) evaluated the accuracy of ANFIS and Particle Swarm Optimization (PSO)-ANFIS models in estimating hydraulic jump characteristics against laboratory results. The results showed that these models can estimate the characteristics of the hydraulic jump with high accuracy, although the ANFIS model was slightly more accurate than the PSO-ANFIS model in estimating the sequent depth ratio, relative jump length, relative roller length, and relative energy loss. Daneshfaraz et al. (2024) used laboratory data and analyzed hydraulic jump characteristics with artificial intelligence. The results showed that the Elman Neural Network (NN) outperformed the other models in estimating the ratio of hydraulic jump length to initial depth (Lj/y1), while the Hammerstein–Wiener Model (HWM) performed more weakly than the other models in estimating Lj/y1.

It appears that the simultaneous effect of bed expansion and roughness has not been studied in previous research. In this research, their simultaneous effect on the hydraulic jump characteristics was investigated with FLOW-3D. This work addresses the existing gaps in studies of hydraulic jump characteristics and their impact on the economic design of stilling basins: most previous studies examined the effects of bed roughness or divergence separately, whereas this research investigates the two factors together. The main innovation of this study lies in the use of machine learning and soft computing techniques to predict hydraulic jump characteristics under various conditions. These models, with their ability to capture non-linear behavior, enhance the accuracy of the predictions and allow a more precise evaluation of the effects of roughness and divergence. Combined with a thorough analysis of the data and the numerical simulations, this yields more reliable outcomes than previous studies, improves the design of hydraulic systems and the performance of stilling basins, and provides a resource for future studies in this field. In this research, the characteristics of the hydraulic jump in a diverging channel with different divergence angles and rough beds with different roughness sizes were investigated in FLOW-3D, and the ratio of the sequent depth to the primary depth (y2/y1) and the ratio of the jump length to the primary depth (Lj/y1) were modeled with soft computing methods.
The input data comprised the divergence angle (θ), the ratio of roughness diameter to upstream channel width (D/b1), the rough-to-smooth bed ratio (Kb), and the Froude number before the divergence (Fr1).

The relationship used to calculate the water depth after the hydraulic jump is known as the Bélanger equation (Henderson 1966) (Equation (1)):
\frac{y_2}{y_1} = \frac{1}{2}\left(\sqrt{1 + 8Fr_1^2} - 1\right) \quad (1)

In this relationship, y1 and y2 are the flow depths before and after the hydraulic jump, and Fr1 is the upstream Froude number.
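As a quick numerical illustration, Equation (1) can be evaluated for the five inflow Froude numbers used in this study (a minimal sketch; note that these are the classical smooth-bed ratios, whereas the rough, expanding beds simulated here produce smaller depth ratios):

```python
import math

def belanger_ratio(fr1: float) -> float:
    """Sequent-depth ratio y2/y1 of a classical jump, Equation (1)."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

# Sequent-depth ratios for the inflow Froude numbers used in this study
for fr1 in (4.34, 5.71, 6.95, 8.17, 9.37):
    print(f"Fr1 = {fr1:.2f} -> y2/y1 = {belanger_ratio(fr1):.2f}")
```

The ratio grows monotonically with Fr1, consistent with the observation that y2/y1 increased with the Froude number in all scenarios.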

The characteristics of the hydraulic jump on a rough bed are a function of the flow characteristics and channel conditions. In the analysis of the hydraulic jump downstream of the gate and in the divergent channel with a rough bed, the following factors are effective:

Supercritical flow velocity (v1), primary depth (y1), secondary depth (y2), hydraulic jump length (Lj), kinematic viscosity (ν), gravitational acceleration (g), roughness average diameter (d50), specific mass of water (ρ), upstream channel width (b1), divergence angle (θ).

The effective parameters of the hydraulic jump on a rough bed can be expressed as Equation (2):
f_1\left(v_1, y_1, y_2, \nu, g, d_{50}, \rho, b_1, \theta\right) = 0 \quad (2)
In this function, there are nine parameters involving three fundamental dimensions: length, mass, and time. Using Buckingham's dimensional analysis and taking ρ, v1, and y1 as the repeating variables, the following dimensionless function is obtained (Equation (3)):
f_2\left(\frac{y_2}{y_1}, Fr_1, Re_1, \frac{d_{50}}{b_1}, \theta\right) = 0 \quad (3)
Since the flow is fully turbulent, the effect of the Reynolds number can be neglected and Equation (4) can be proposed:
f_3\left(\frac{y_2}{y_1}, Fr_1, \frac{d_{50}}{b_1}, \theta\right) = 0 \quad (4)
By rearranging, the dimensionless depth ratio y2/y1 can be defined as a function of the other effective parameters (Equation (5)):
\frac{y_2}{y_1} = f_4\left(Fr_1, \frac{d_{50}}{b_1}, \theta\right) \quad (5)
If the jump length is instead taken as the dependent variable, a similar argument yields Equation (6):
\frac{L_j}{y_1} = f_5\left(Fr_1, \frac{d_{50}}{b_1}, \theta\right) \quad (6)
Since there are no roughness elements on the smooth bed, Equations (7) and (8) reduce to the following:
\frac{y_2}{y_1} = f_6\left(Fr_1, \theta\right) \quad (7)
\frac{L_j}{y_1} = f_7\left(Fr_1, \theta\right) \quad (8)

FLOW-3D

FLOW-3D is a suitable model for solving complex fluid dynamics problems and can simulate a wide range of fluid flows. The software can analyze two- and three-dimensional flow fields and has a wide range of applications in fluid mechanics.

The equations used in FLOW-3D are:

  • 1. The continuity equation, whose general form for incompressible flow is as follows (Equation (9)):
    \frac{\partial}{\partial x}\left(uA_x\right) + \frac{\partial}{\partial y}\left(vA_y\right) + \frac{\partial}{\partial z}\left(wA_z\right) = 0 \quad (9)
  • where (u, v, w) are the velocity components and (Ax, Ay, Az) are the fractional areas open to flow in the (x, y, z) directions.

  • 2. The momentum equations, or equations of motion, for the fluid velocity components (u, v, w) in the three coordinate directions – in other words, the Navier–Stokes equations – are as follows in the x direction (Equation (10)):
    \frac{\partial u}{\partial t} + \frac{1}{V_F}\left(uA_x\frac{\partial u}{\partial x} + vA_y\frac{\partial u}{\partial y} + wA_z\frac{\partial u}{\partial z}\right) = -\frac{1}{\rho}\frac{\partial p}{\partial x} + G_x + f_x \quad (10)
    where V_F is the fractional volume open to flow, G_x is the body acceleration, and f_x is the viscous acceleration in the x direction.
  • 3. The free-surface profile is estimated using the volume-of-fluid function F(x, y, z), which expresses the fraction of fluid volume in each computational cell (Equation (11)):
    \frac{\partial F}{\partial t} + \frac{1}{V_F}\left[\frac{\partial}{\partial x}\left(FuA_x\right) + \frac{\partial}{\partial y}\left(FvA_y\right) + \frac{\partial}{\partial z}\left(FwA_z\right)\right] = 0 \quad (11)

Simulated model specifications

The present study was conducted according to preliminary data from the laboratory model built in the University of Maragheh laboratory, Iran. Experiments were carried out in a laboratory flume with Plexiglas walls and floor, 5 m long, 0.3 m wide, and 0.45 m high, with zero longitudinal slope. To create supercritical flow, a steel gate 0.65 m high and 3 mm thick with an opening height of 1.7 cm was used for the non-prismatic channel with an expansion ratio of 0.33. To create the symmetric opening ratio of 0.33, glass boxes 50 cm long, 20 cm high, and 10 cm wide were placed on both sides of the flume, and for the rough bed, sands with an average diameter of 1.9 cm were laid over a length of 120 cm so that their upper surface was flush with the floor of the upstream and downstream channels. Figure 1 shows the simulated model with the rough bed.
Figure 1

Roughness bed: (a) laboratory model and (b) FLOW-3D model.


For numerical simulation of the experiments in FLOW-3D, only 2.3 m of the channel length was modeled, comprising 60 cm upstream of the gate, 50 cm from the gate to the beginning of the expansion, and the 120-cm-long stilling basin, because according to the laboratory results the flow surface beyond this distance was almost uniform in all experiments. Three mesh blocks were used: from the beginning to 0.55 m, the cell dimensions were 1 cm in all three directions, and from 0.55 m to the end of the channel they were 0.5 cm, giving a total of 687,600 cells. To select the turbulence model, five experiments were modeled with both the RNG and k-ε turbulence models, and by comparing the results the k-ε model was finally selected as the optimal one. The upstream boundary condition was set as volume flow rate, the downstream boundary as outflow, the side walls and bed as wall, and the upper boundary at the water surface as symmetry. Symmetry was also applied at the junctions of the mesh blocks. Given the four bed types (smooth, and rough beds with hemispherical elements 3, 4, and 5 cm in diameter), three expansion angles (7°, 14°, and 90°), and five initial Froude numbers (4.34, 5.71, 6.95, 8.17, and 9.37), 4 × 3 × 5 = 60 different simulation experiments were performed. The simulation results were then used for soft computing.
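The full factorial design above can be sketched in a few lines (an illustration only; the bed labels are placeholders for the four configurations described in the text):

```python
from itertools import product

beds = ["smooth", "rough D=3 cm", "rough D=4 cm", "rough D=5 cm"]
angles_deg = [7, 14, 90]
froude_numbers = [4.34, 5.71, 6.95, 8.17, 9.37]

# Full factorial design: 4 bed types x 3 divergence angles x 5 Froude numbers
runs = list(product(beds, angles_deg, froude_numbers))
print(len(runs))  # 60 simulation experiments
```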

Data description

Table 1 shows a summary of the dataset used in this study. As shown in the table, all variables have 60 records. The target variable y2/y1 has a mean of 5.119780, a minimum of 3.000000, and a maximum of 8.661417.

Table 1

Data description

         Fr1        θ (rad)    D/b1       Kb         y2/y1      Lj/y1
Count    60.000000  60.000000  60.000000  60.000000  60.000000  60.000000
Mean     6.908000   0.645667   0.300000   0.877500   5.119780   63.740053
Std      1.786143   0.661740   0.188662   0.087151   1.249317   14.901369
Min      4.340000   0.122000   0.000000   0.770000   3.000000   33.333333
25%      5.710000   0.122000   0.225000   0.815000   4.168841   52.009223
50%      6.950000   0.244000   0.350000   0.870000   5.072464   64.841604
75%      8.170000   1.571000   0.425000   0.932500   5.774875   74.382606
Max      9.370000   1.571000   0.500000   1.000000   8.661417   93.700787
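The summary statistics of Fr1 follow directly from the factorial design: each of the five Froude numbers occurs 12 times (4 bed types × 3 divergence angles), which reproduces the mean of 6.908 reported in Table 1 (a minimal consistency check, not a recomputation of the full dataset):

```python
froude_numbers = [4.34, 5.71, 6.95, 8.17, 9.37]

# Each Froude number occurs 12 times (4 bed types x 3 divergence angles)
records = froude_numbers * 12
mean_fr1 = sum(records) / len(records)
print(len(records), round(mean_fr1, 3))  # 60 records, mean 6.908
```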

Figure 2 shows the correlation between these variables. Correlation measures the degree of similarity between the output estimated by a regression model and the actual output: if the model works well, the estimated outputs will closely match the actual outputs, and the correlation between them will be close to one. Correlation values lie between −1 and 1; the closer the value is to one, the more similar the estimated outputs are to the real outputs, while a value close to −1 means the two outputs behave in completely opposite ways. The correlation analysis illustrated in Figure 2 provides insight into the relationships between the target variables (Lj/y1 and y2/y1) and the predictor features (Fr1, θ, D/b1, and Kb). The variable Lj/y1 shows a moderate-to-strong positive correlation with Fr1, indicating that an increase in this parameter is likely to result in higher Lj/y1 values. Similarly, y2/y1 exhibits a positive correlation with Fr1, though slightly weaker than that of Lj/y1. Conversely, both target variables show a weaker correlation with Kb, suggesting that this parameter has a relatively limited influence on the target outcomes. The negative correlation between Kb and both Lj/y1 and y2/y1 highlights an inverse relationship, where increased bed roughness tends to reduce the jump length. These correlations provide a foundational understanding of the predictive relationships and guide the feature importance analysis in the subsequent modeling process.
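The Pearson correlation underlying Figure 2 can be illustrated on toy arrays (the values below are hypothetical stand-ins for Fr1 and Lj/y1, not the study's data):

```python
import numpy as np

# Hypothetical, roughly linear stand-ins for Fr1 and Lj/y1
fr1 = np.array([4.34, 5.71, 6.95, 8.17, 9.37])
lj_y1 = np.array([40.0, 52.0, 63.0, 74.0, 86.0])

# Pearson correlation coefficient, as plotted in the correlation matrix
r = np.corrcoef(fr1, lj_y1)[0, 1]
print(round(r, 3))  # close to +1 for a strong positive relationship
```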
Figure 2

Correlation of variables.


Implementation of prediction model

This section discusses how the proposed approaches were implemented. In this study, open-source libraries were used in the Python environment. The models were developed using the Keras toolkit written in Python and TensorFlow, an open-source software library provided by Google. A number of other essential packages for data processing, manipulation, and visualization, including scikit-learn, NumPy, Pandas, and Matplotlib, were also used. The system used had a Core i7-11800H processor, 16 GB of RAM, and an RTX 3060 graphics card; all calculations were performed on the CPU.

Prediction integrated model

Ensemble machine learning is an approach in which multiple weak learners or base models are trained to address a problem and are combined to generate more accurate and robust models, enhancing the overall performance of the system (Tutsoy & Polat 2022). This study employs stacking. Stacking, or stacked generalization, is an ensemble learning technique that combines multiple predictive models to enhance overall performance by leveraging the strengths of each individual model. It involves training various base models, which may include diverse algorithms such as decision trees, support vector machines, and neural networks, on the same dataset. The predictions generated by these base models are then used as inputs for a metamodel, a higher-level model that learns how to integrate these predictions into a final output. By capturing complex relationships in the data that single models might overlook, stacking aims to improve predictive accuracy and robustness; the final prediction combines the metamodel and the base models (Alizadeh et al. 2023). Building this model involves three steps. First, the data are divided into two parts: training data and test data. Next, the candidate models are evaluated in order to select the most accurate ones as base and meta models for the stacking algorithm. In this study, the candidate regression models were initially trained on the training set and evaluated on the test set; they include Linear Regression, Support Vector Regression (SVR), Decision Tree, Random Forest, Bagging, Gradient Boosting, and MLP. The top three most accurate models were then selected as the base and meta-regressor models for the stacking algorithm. Figure 3 shows the structure of this algorithm. In this study, Bagging and Gradient Boosting serve as base models and a multi-layer perceptron (MLP) serves as the metamodel.
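The described stack (Bagging and Gradient Boosting as base models, an MLP as metamodel) can be sketched with scikit-learn. This is an illustration on synthetic data, not the study's dataset or tuned configuration; the feature ranges loosely mimic Table 1 and the target is a made-up function:

```python
import numpy as np
from sklearn.ensemble import (StackingRegressor, BaggingRegressor,
                              GradientBoostingRegressor)
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 60 records, 4 features (Fr1, theta, D/b1, Kb)
rng = np.random.default_rng(0)
X = rng.uniform([4.3, 0.12, 0.0, 0.77], [9.4, 1.57, 0.5, 1.0], size=(60, 4))
y = 0.55 * X[:, 0] + X[:, 1] - 2.0 * X[:, 3] + 3.0  # hypothetical target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Base models feed their predictions to the MLP metamodel
stack = StackingRegressor(
    estimators=[
        ("bagging", BaggingRegressor(n_estimators=50, random_state=0)),
        ("gbr", GradientBoostingRegressor(n_estimators=60, max_depth=4,
                                          random_state=0)),
    ],
    final_estimator=MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                 random_state=0),
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print(pred.shape)  # one prediction per test sample
```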
Figure 3

Structure of the proposed stacking model.


In this study, we used cross-validation to tune hyperparameters and select the best model structure for each algorithm. Cross-validation ensures that the models are generalized and avoids overfitting, as it splits the dataset into training and validation subsets to test the performance of the models across various configurations. Below are the selected hyperparameters and mathematical formulations for each model.
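The fold mechanics behind this cross-validation can be shown with a minimal NumPy-only sketch (K and the record count of 60 match this study; the splitter itself is a generic illustration, not the exact scikit-learn routine used):

```python
import numpy as np

# Minimal K-fold split: shuffle indices, hold out one fold per iteration
def kfold_indices(n_samples, k, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    splits = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        splits.append((train, val))
    return splits

splits = kfold_indices(60, 5)
print([len(val) for _, val in splits])  # five validation folds of 12 records
```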

Bagging

Ensemble machine learning is one of the main and most advanced methods used to solve both classification and regression problems (Tutsoy & Sumbul 2024). One type of ensemble method is the Bagging technique, which uses the bootstrap aggregating concept to create different estimates, reducing the learning error by using a set of similar, homogeneous machine learning models and achieving a more robust final model. When we train a model, whether for classification or regression, we obtain a function that takes an input and returns an output given the training set. The idea of Bagging is simple: several independent models are trained and their predictions are combined to obtain a model with less variance.

In general, Bagging uses a single base learning algorithm. Bagging assumes that the training dataset is representative of the population under investigation and that all realizable states of the population can be simulated by resampling from this dataset; resampling into different datasets therefore provides the required diversity. Bagging thus has a number of identical base models trained in different ways. These base models are combined coherently so that, in classification, a majority vote among them determines the class of a new example; in regression problems, the final output is obtained by simple averaging of the base models' outputs (Seilsepour et al. 2022).

Although Bagging is one of the algorithms used within stacking and both are types of ensemble machine learning, stacking differs from Bagging in two ways. First, stacking uses heterogeneous base models, combining different machine learning algorithms, while Bagging uses a homogeneous base model. Second, stacking combines the base models using a metamodel, while Bagging uses deterministic rules to combine them.

  • Hyperparameters for y2/y1: {'max_features': 1.0, 'max_samples': 1.0, 'n_estimators': 400, 'oob_score': True}

  • Hyperparameters for Lj/y1: {'max_features': 1.0, 'max_samples': 1.0, 'n_estimators': 600, 'oob_score': True}
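The bootstrap-and-average idea can be sketched without scikit-learn (a toy NumPy illustration with simple linear learners standing in for the base models; the data and target are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(4.3, 9.4, 60)
y = 0.9 * x + rng.normal(0.0, 0.3, 60)  # noisy hypothetical target

def bagged_predict(x_new, n_estimators=400):
    preds = []
    for _ in range(n_estimators):
        sample = rng.integers(0, len(x), len(x))       # bootstrap resample
        slope, intercept = np.polyfit(x[sample], y[sample], 1)
        preds.append(slope * x_new + intercept)
    return float(np.mean(preds))                        # simple averaging

print(round(bagged_predict(7.0), 2))  # close to 0.9 * 7.0 = 6.3
```

Averaging over the resampled learners is what reduces the variance of the final prediction relative to any single fit.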

Gradient Boosting

Another type of machine learning model that can be used for both regression and classification problems is Gradient Boosting. Gradient Boosting consists of a loss function, a weak learner, and an ensemble model: the loss function is used for optimization, the weak learner for prediction, and the ensemble model adds weak learners so as to minimize the loss function. In this algorithm, several models are trained successively, and each new model gradually minimizes the loss function using the gradient descent method.

Gradient Boosting uses shallow decision trees as its weak learners. The nodes of these trees consider different feature splits to choose the best prediction, which means the trees are not identical. The trees are created sequentially, and each new tree takes into account the errors of the last one: the decision of each successive tree is based on the errors of the previous tree, and each new tree is trained on the data that the previous tree predicted incorrectly.

The objective function of Gradient Boosting is shown in Equation (12). In this equation, the loss function is denoted by L, which measures the difference between the label of the ith sample and the prediction from the previous step plus the output of the current tree. The term Ω represents regularization and reduces the complexity of the new tree by imposing a penalty.
Obj^{(t)} = \sum_{i=1}^{n} L\left(y_i,\; \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega\left(f_t\right) \quad (12)
  • Hyperparameters for y2/y1: {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 60}

  • Hyperparameters for Lj/y1: {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 60}
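The stage-wise residual fitting behind Equation (12) can be sketched for squared loss with NumPy only (one-split "stumps" stand in for the shallow trees; the learning rate and number of stages echo the listed hyperparameters, but the data are synthetic):

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a one-split piecewise-constant learner to the residuals."""
    best = None
    for t in np.quantile(x, [0.25, 0.5, 0.75]):      # candidate split points
        left, right = residual[x <= t].mean(), residual[x > t].mean()
        sse = ((residual - np.where(x <= t, left, right)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left, right)
    _, t, left, right = best
    return lambda xq: np.where(xq <= t, left, right)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, 80)

pred = np.full_like(y, y.mean())        # stage 0: constant prediction
for _ in range(60):                     # n_estimators = 60
    stump = fit_stump(x, y - pred)      # each stage fits the residuals
    pred += 0.1 * stump(x)              # learning_rate = 0.1

print(round(float(((y - pred) ** 2).mean()), 3))  # training MSE shrinks
```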

Multi-layer perceptron

The MLP uses a loss function to evaluate neuron performance and an optimization algorithm to drive the loss toward zero. In other words, the goal of the MLP is to minimize the error function by optimizing the weights; a gradient descent algorithm is used to do this (Alizadeh et al. 2021).

In this algorithm, the data are processed in a forward direction until the target is estimated, then processed in the backward direction to adjust the weights and minimize the loss. This forward-and-backward cycle continues until the optimal values are reached: based on the learning algorithm, the weights change until the loss is minimal. This process is shown in Figure 4.
Figure 4

Neural network learning process.

The structure of the MLP consists of an input layer, several hidden layers, and an output layer. The hidden layers lie between the input and output layers. Figure 5 shows the MLP structure. The layers close to the input layer are usually called the bottom layers, and those close to the output layer the top layers. Except for the output layer, each layer has a bias. The more hidden layers an MLP has, the stronger its learning ability, and the greater the risk of overfitting.
Figure 5

The structure of MLP.

Equation (13) gives the activation of the neurons. In this equation, yj is the output of hidden-layer neuron j, f is the activation function, wij is the weight of the connection from input i to neuron j, and bj is the bias of neuron j.
y_j = f\left(\sum_{i} w_{ij} x_i + b_j\right) \quad (13)

The selected MLP model for this study is designed with a focus on capturing complex non-linear relationships between the input features and the target variables. The architecture consists of five hidden layers, each employing the ReLU (Rectified Linear Unit) activation function, which is known for its efficiency in handling non-linearities while avoiding vanishing gradient problems. The input layer contains 90 neurons, corresponding to the feature space dimensions. Subsequent hidden layers are structured with 90, 50, 100, and 70 neurons, respectively, reflecting a deep and comprehensive architecture capable of learning intricate patterns in the data. The output layer comprises a single neuron with a ReLU activation function, suitable for regression tasks where the target is non-negative. The model is trained using the mean squared error (MSE) as the loss function and optimized with the Adam optimizer, which ensures faster convergence through adaptive learning rates. Training is performed for 150 epochs with a batch size of 4, balancing computational efficiency and model convergence. This hyperparameter configuration was chosen after rigorous experimentation and cross-validation to achieve optimal performance for predicting both y2/y1 and Lj/y1.
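A forward pass through this architecture can be sketched with NumPy, applying Equation (13) layer by layer (random weights stand in for the trained ones, and the example input is a made-up feature vector; this illustrates the data flow, not the trained model):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# 4 input features -> hidden layers of 90, 90, 50, 100, 70 -> 1 output
rng = np.random.default_rng(0)
sizes = [4, 90, 90, 50, 100, 70, 1]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    a = x
    for w, b in zip(weights, biases):
        a = relu(a @ w + b)   # Equation (13): f(sum(w * x) + b), f = ReLU
    return a

out = forward(np.array([6.95, 0.244, 0.35, 0.87]))  # hypothetical record
print(out.shape, float(out[0]) >= 0.0)  # single non-negative output
```

The final ReLU keeps the regression output non-negative, matching the text's rationale for the output activation.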

Linear regression

Linear regression is one of the simplest regression models. It assumes a linear relationship between the input features (X) and the target variable (y) and minimizes the error by fitting a straight line to the data. While computationally efficient, it often struggles with complex, non-linear relationships. The model is fitted by solving Equation (14):
(14) y = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \varepsilon
where β0 is the intercept, βi are the coefficients, and ε is the error term.
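As a minimal illustration of Equation (14), an ordinary least-squares fit recovers the intercept and coefficients from data generated with known values (the data and coefficient values here are purely hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Generate data following y = b0 + sum(bi * xi) + error, with known b0 = 1.5.
rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 4))
true_coef = np.array([2.0, -1.0, 0.5, 3.0])
y = 1.5 + X @ true_coef + rng.normal(scale=0.01, size=60)

lr = LinearRegression().fit(X, y)
print(round(lr.intercept_, 1))  # close to the true intercept 1.5
```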

Support vector regression

SVR is a type of regression algorithm based on support vector machines (Ghazvinian & Karami 2024). It attempts to find a hyperplane in a high-dimensional space that fits the data within a specified margin. SVR is effective in handling non-linear relationships, thanks to its use of kernel functions.

SVR finds a hyperplane that fits the data within a specified margin by minimizing the loss function (Equation (15)):
(15) \min_{w,\,b,\,\xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^{*})
where C is the penalty term, w is the weight vector, and ξi, ξi* are slack variables.
  • Hyperparameters for y2/y1: {'C': 3, 'gamma': 'scale', 'kernel': 'poly'}

  • Hyperparameters for Lj/y1: {'C': 5, 'gamma': 'scale', 'kernel': 'poly'}
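The listed hyperparameters map directly onto scikit-learn's SVR (assuming that implementation; the data below are synthetic stand-ins, not the study's dataset):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for the 4-feature input space (Fr1, theta, D/b1, Kb).
rng = np.random.default_rng(2)
X = rng.uniform(size=(80, 4))
y = X[:, 0] ** 2 + X[:, 1]  # hypothetical smooth target

# Reported configuration for the y2/y1 target; the Lj/y1 run used C=5 instead.
model = SVR(C=3, gamma="scale", kernel="poly")
model.fit(X, y)
print(model.predict(X[:2]).shape)  # (2,)
```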

Decision tree

Decision Trees use a tree-like model of decisions to predict the target variable (Ghazvinian et al. 2021). While they are interpretable and can handle non-linear data, they are prone to overfitting, especially when the tree becomes too deep. Decision Trees predict the target variable by splitting the data at nodes based on the chosen criterion (e.g., absolute error) to minimize impurity. For classification, the Gini impurity criterion is (Equation (16)):
(16) G = 1 - \sum_{i} p_i^2
where pi is the probability of class i.
  • Hyperparameters for y2/y1 and Lj/y1: {'criterion': 'absolute_error', 'max_depth': 7, 'min_samples_split': 2}

Random Forest

Random Forest is an ensemble learning method that builds multiple decision trees and averages their predictions. By reducing variance through bagging, it often provides better generalization than a single decision tree.

Random Forest averages the predictions of multiple decision trees, and its objective combines bagging with impurity reduction (Equation (17)):
(17) \hat{y}(x) = \frac{1}{T} \sum_{t=1}^{T} f_t(x)
where ft(x) is the prediction of the t-th tree and T is the number of trees.
  • Hyperparameters for y2/y1: {'criterion': 'absolute_error', 'max_depth': 6, 'n_estimators': 140}

  • Hyperparameters for Lj/y1: {'criterion': 'friedman_mse', 'max_depth': 6, 'n_estimators': 1000}

Model evaluation

In this study, several regression metrics have been employed to assess the accuracy and performance of the machine learning models in predicting (y2/y1) and (Lj/y1). These metrics include MSE, mean absolute error (MAE), mean absolute percentage error (MAPE), R-squared (R2), and percentage bias (P-BIAS) (Mousapour Mamoudan et al. 2023).

R2 measures the quality of the fit of a regression model to a dataset. A model is considered well-fitted when the difference between the actual output and the predicted output is both small and unbiased (Karami et al. 2023). Unbiased here means that the estimated output is not systematically larger or smaller than the actual output (Karami & Ghazvinian 2022). R2 represents the explanatory power of the model; in other words, it indicates how well the model can account for the variability of the dependent variable (Ghazvinian & Karami 2023; Ghazvinian et al. 2024). It is calculated according to Equation (18).

While R2 is a relative measure used to assess the model's fit to the dependent variables, MSE is an absolute measure for this purpose. MSE is the average of the squared differences between the predicted and actual values, and it is calculated according to Equation (19). MSE indicates how much the results produced by the model deviate from the actual values. This criterion aids in selecting the best model. A lower MSE indicates stronger model performance, as it reflects a smaller discrepancy between the predicted and actual outputs.

MAE serves a similar purpose to MSE, with the key difference that instead of squaring the error (the difference between the estimated output and the actual output), it takes its absolute value (Dadrasajirlou et al. 2022). Compared to MSE, MAE provides a more direct representation of the total error (Dehghanipour et al. 2021a, b). The two measures weight errors differently: squaring greatly inflates large errors while leaving small errors nearly unchanged, whereas MAE treats all errors equally by taking the absolute value of the difference regardless of its size (Samii et al. 2023). It is calculated as Equation (20).

MAPE is similar to MAE, but instead of the absolute error, it uses relative error, which is calculated according to Equation (21). Since this measure yields a unitless result, it is useful for reporting outcomes.

The P-BIAS (percentage bias) metric is a statistical indicator used to assess the accuracy of predictive models against observational data, particularly in hydrology and the environmental sciences (Dehghanipour et al. 2021a, b). It is expressed as a percentage and calculated using Equation (22). A P-BIAS of 0% indicates a perfectly unbiased model, a positive P-BIAS indicates that the model has generally over-predicted, and a negative P-BIAS indicates under-prediction. Generally, a P-BIAS below −10% signifies significant under-prediction, values between −10 and 10% indicate acceptable accuracy, and values above 10% suggest significant over-prediction. This metric helps evaluate the performance of models in predicting actual values. In all Equations (18) to (22), yi represents the predicted value, xi represents the actual (observed) value, and x̄ is the mean of the actual values.
(18) R^2 = 1 - \frac{\sum_{i=1}^{n}(x_i - y_i)^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}
(19) MSE = \frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2
(20) MAE = \frac{1}{n}\sum_{i=1}^{n}\left|x_i - y_i\right|
(21) MAPE = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{x_i - y_i}{x_i}\right|
(22) P\text{-}BIAS = 100 \times \frac{\sum_{i=1}^{n}(y_i - x_i)}{\sum_{i=1}^{n}x_i}
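The five criteria in Equations (18)–(22) can be computed directly in NumPy; following the notation above, x holds the actual values and y the predictions (the sample arrays are illustrative only):

```python
import numpy as np

def metrics(x, y):
    """Compute R2, MSE, MAE, MAPE, and P-BIAS for actual x and predicted y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ss_res = np.sum((x - y) ** 2)
    ss_tot = np.sum((x - x.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MSE": np.mean((x - y) ** 2),
        "MAE": np.mean(np.abs(x - y)),
        "MAPE": np.mean(np.abs((x - y) / x)),         # unitless relative error
        "P-BIAS": 100.0 * np.sum(y - x) / np.sum(x),  # > 0 means over-prediction
    }

m = metrics([2.0, 4.0, 6.0], [2.0, 4.0, 6.0])
print(m["R2"], m["P-BIAS"])  # 1.0 0.0 for a perfect prediction
```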

In this study, the stacking approach is proposed for predicting (y2/y1) and (Lj/y1). Since stacking combines base models with a metamodel, several machine learning algorithms were first applied separately to the dataset, and the three algorithms most accurate in predicting (y2/y1) and (Lj/y1) were then selected as the base models and metamodel. These three algorithms are Gradient Boosting, Bagging, and MLP. Table 2 compares the algorithms for both the (y2/y1) and (Lj/y1) targets.
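The stacking setup can be sketched with scikit-learn's StackingRegressor. The paper states only that Gradient Boosting, Bagging, and MLP serve as the meta- and base models, so the exact configuration below (Gradient Boosting as the meta-learner, default settings, synthetic data) is an assumption for illustration:

```python
import numpy as np
from sklearn.ensemble import (StackingRegressor, GradientBoostingRegressor,
                              BaggingRegressor)
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.uniform(size=(150, 4))   # stand-in for (Fr1, theta, D/b1, Kb)
y = X[:, 0] ** 2 + X[:, 1]       # synthetic stand-in for y2/y1

stack = StackingRegressor(
    estimators=[
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("bag", BaggingRegressor(random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(50,), max_iter=500,
                             random_state=0)),
    ],
    # Meta-learner choice is an assumption, not stated explicitly in the text.
    final_estimator=GradientBoostingRegressor(random_state=0),
)
stack.fit(X, y)  # base models are fitted; the meta-model combines their outputs
print(stack.predict(X[:3]).shape)  # (3,)
```

Internally, StackingRegressor trains the base models with cross-validation and feeds their out-of-fold predictions to the meta-learner, matching the three-step procedure described later in the text.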

Table 2

Results to select metamodels and basic models

Model                      |        Results of (y2/y1)           |        Results of (Lj/y1)
                           | MSE    MAE     MAPE   R2     P-BIAS | MSE    MAE    MAPE   R2     P-BIAS
Linear regression          | 3.233  15.599  0.054  0.919  3.432  | 0.218  0.080  0.051  0.932  3.253
Support vector regression  | 2.841  13.148  0.047  0.932  2.765  | 0.226  0.066  0.048  0.944  2.683
Decision Tree              | 4.665  25.880  0.079  0.866  5.553  | 0.424  0.258  0.081  0.781  6.521
Random Forest              | 2.465  9.556   0.050  0.951  2.114  | 0.205  0.061  0.048  0.948  3.242
Bagging                    | 2.418  9.554   0.049  0.951  1.893  | 0.201  0.055  0.046  0.953  2.531
Gradient Boosting          | 1.823  3.949   0.033  0.980  1.474  | 0.154  0.036  0.034  0.969  2.321
MLP                        | 4.447  1.690   0.031  0.977  1.654  | 0.020  0.128  0.027  0.983  1.924

In the next step, this study used the three algorithms Gradient Boosting, Bagging, and MLP as the metamodel and base models in the stacking algorithm to predict (y2/y1) and (Lj/y1). The results show that the stacking algorithm outperformed the other machine learning algorithms, as reported in Table 3. The comparison between the algorithms on the R2, MAPE, MAE, and MSE criteria is also shown in Figure 6.
Table 3

Prediction results

Model                      |        Results of (y2/y1)           |        Results of (Lj/y1)
                           | MSE    MAE     MAPE   R2     P-BIAS | MSE    MAE    MAPE   R2     P-BIAS
Linear regression          | 3.233  15.599  0.054  0.919  2.428  | 0.218  0.080  0.051  0.932  2.429
Support vector regression  | 2.841  13.148  0.047  0.932  2.286  | 0.226  0.066  0.048  0.944  2.582
Decision Tree              | 4.665  25.880  0.079  0.866  4.619  | 0.424  0.258  0.081  0.781  5.217
Random Forest              | 2.465  9.556   0.050  0.951  1.626  | 0.205  0.061  0.048  0.948  2.792
Bagging                    | 2.418  9.554   0.049  0.951  1.739  | 0.201  0.055  0.046  0.953  3.316
Gradient Boosting          | 1.823  3.949   0.033  0.980  1.248  | 0.154  0.036  0.034  0.969  2.514
MLP                        | 4.447  1.690   0.031  0.977  1.562  | 0.020  0.128  0.027  0.983  1.859
Stacking                   | 1.103  1.456   0.024  0.988  1.135  | 0.019  0.028  0.021  0.987  1.562
Figure 6

Comparison of the results for (y2/y1).

As shown in Figures 6 and 7, stacking achieves R2 values of 0.988 and 0.987 for predicting (y2/y1) and (Lj/y1), respectively, which is more accurate than the other algorithms; a higher value of this criterion indicates stronger model performance. For predicting (y2/y1), stacking scores 0.024, 1.456, and 1.103 on the MAPE, MAE, and MSE criteria, respectively, slightly lower than the other prediction models. For predicting (Lj/y1), it scores 0.021, 0.028, and 0.019 on the same criteria. Since regression models are used to predict (y2/y1) and (Lj/y1), the corresponding regression diagrams are also shown in Figures 8 and 9.
Figure 7

Comparison of the results for (Lj/y1).

Figure 8

Comparison of the results of regression models for (y2/y1).

Figure 9

Comparison of the results of regression models for (Lj/y1).


The following points can be considered for the sensitivity analysis of the current research:

  • Sensitivity analysis of the model's input variables (Fr1, θ, D/b1, and Kb) with respect to the output variables (y2/y1 and Lj/y1) can determine the relative importance of each variable; this information is valuable for optimizing the model and gaining a deeper understanding of the underlying relationships.

  • Sensitivity analysis can identify the input variables to which the model is most sensitive, i.e., those with the greatest impact on the model's outputs; this information can benefit the design and development of future models.

  • The results of sensitivity analysis can deepen the understanding of the causal relationships between input and output variables and may help explain the underlying physical phenomena more clearly.

  • Various methods can be employed, such as correlation coefficient analysis, analysis of variance, and regression analysis; each may provide different insights useful for interpreting the results.

  • Finally, the relationship between the sensitivity analysis results and a deeper understanding of the studied phenomenon (i.e., hydraulic jump characteristics) should be clarified, which can help confirm and strengthen the conclusions drawn in the study.

As observed, in the present study the Stacking model provided the highest accuracy, with R2 coefficients of 0.988 for (y2/y1) and 0.987 for (Lj/y1) in the testing phase. The study by Daneshfaraz et al. (2024) examined the effect of sand-roughened beds on hydraulic jump characteristics using models such as Emotional Neural Networks (EANNs), Elman Neural Networks (ENNs), the HWM, and Extreme Learning Neural Networks (ExLNNs); the ENN performed best among them, achieving an R2 of 0.9999 and an MSE of 0.065 in the testing phase. Compared with Daneshfaraz et al. (2024), the present study also achieved high predictive accuracy: the Stacking model here (R2 = 0.988) and the Elman NN there (R2 = 0.9999) showed close performance, although the Elman NN slightly outperformed the Stacking model. The present study employed a wider range of machine learning models, including ensemble techniques such as Stacking, which combine the predictions of several models for improved accuracy. In both studies, roughness had a significant impact on hydraulic jump behavior: the present study demonstrated that increasing roughness decreases flow depth, while Daneshfaraz et al. (2024) reported a reduction in jump length due to the roughened sand bed. Both studies confirmed that rough beds lead to increased energy dissipation and reduced jump length and depth.

The study by Maleki & Fiorotto (2021) introduces a new method for designing stilling basins over rough beds, considering both the Froude number and bed roughness. This analysis is based on experimental data, with the aim of improving existing guidelines for hydraulic jump design. The study emphasizes that rough elements significantly reduce both the subcritical depth and the jump length, leading to increased energy dissipation. A reduction of 30–35% in subcritical depth was observed, enhancing the efficiency of stilling basins. Similar results were shown in the present research, where roughness increased energy dissipation by reducing flow depth. With the use of rough elements, a 19.77% reduction in flow depth was achieved. In Maleki & Fiorotto (2021), the highest error in predicting the subcritical depth was 6–8%, and a 35% reduction in jump length was measured. This method provided results with a 5% expected error in real-world applications. As mentioned, in this study, the Stacking model achieved high predictive accuracy, with R2 values of 0.988 and 0.987, indicating a close match between predicted and actual hydraulic jump characteristics. Both studies emphasize the importance of roughness in reducing jump length and depth, which leads to increased energy dissipation. Consequently, while both studies achieved comparable results in reducing jump length and increasing energy dissipation, Maleki & Fiorotto (2021) focuses on developing practical guidelines based on physical experiments, whereas the present study uses intelligent methods for precise predictive modeling. Both approaches offer new insights for optimizing hydraulic jump design over rough beds.

In the study by Xu et al. (2024), several machine learning models, including Physics-Informed Neural Networks (PINNs), Convolutional Neural Network (CNN), and Deep Neural Network (DNN), were used to predict the hydraulic jump length. The results indicate that PINNs performed the best, with an R2 score of 0.8818 and an RMSE of 4.4627 cm. Comparing the results of this study with the present research, it can be observed that using more complex models like PINNs and Stacking can contribute to more accurate analyses in the design of hydraulic structures and the prediction of hydraulic jumps. Furthermore, by utilizing advanced machine learning models, high levels of accuracy were achieved, making this approach a valuable reference for researchers and engineers in this field.

In this study, in order to predict (y2/y1) and (Lj/y1), one of the ensemble learning algorithms, stacking, is proposed. To create an ensemble, the base models and algorithms must first be selected so that they can be combined coherently. In other words, this method tries to improve the performance of prediction models by combining multiple learners and aggregating their predictions.

Building this model involves three steps. First, the data are divided into training and test sets so that they can be used once the base models are determined. A prediction is then made with each base model, and an algorithm is chosen as a metamodel to combine the models' results. In this study, a number of learning algorithms were first applied to the dataset, after which the base models and metamodel were selected: Bagging, Gradient Boosting, and MLP. The results show that the proposed model was the most accurate in predicting both (y2/y1) and (Lj/y1), with Stacking achieving R2 values of 0.988 and 0.987 for (y2/y1) and (Lj/y1), respectively. The following suggestions are provided for future research:

  • Investigating the effect of different types of bed materials on hydraulic jump characteristics.

  • Investigating the effect of adverse slope of bed on hydraulic jump characteristics.

  • Use of new optimization models.

All relevant data are included in the paper or its Supplementary Information.

The authors declare there is no conflict.

Alizadeh, A., Yosefvand, F. & Rajabi, A. (2020) Modeling hydraulic jump length on sloping rough beds using adaptive neuro fuzzy inference systems-genetic algorithm, Water and Soil Science, 29 (4), 175–187.
Alizadeh, M., Mousavi, S. E., Beheshti, M. T. H. & Ostadi, A. (2021) 'Combination of feature selection and hybrid classifier as to network intrusion detection system adopting FA, GWO, and BAT optimizers', 2021 7th International Conference on Signal Processing and Intelligent Systems (ICSPIS). IEEE, 1–7.
Alizadeh, M., Beheshti, M. T. H., Ramezani, A. & Bolouki, S. (2023) An optimized hybrid methodology for short-term traffic forecasting in telecommunication networks, Transactions on Emerging Telecommunications Technologies, 34 (12), e4860.
Banadkooki, F. B., Ehteram, M., Ahmed, A. N., Teo, F. Y., Ebrahimi, M., Fai, C. M., Huang, Y. F. & El-Shafie, A. (2020) Suspended sediment load prediction using artificial neural network and ant lion optimization algorithm, Environmental Science and Pollution Research, 27 (30), 38094–38116.
Dadrasajirlou, Y., Ghazvinian, H., Heddam, S. & Ganji, M. (2022) Reference evapotranspiration estimation using ANN, LSSVM, and M5 tree models (Case study: of Babolsar and Ramsar Regions, Iran), Journal of Soft Computing in Civil Engineering, 6 (3), 103–121.
Daneshfaraz, R., Sammen, S. S., Norouzi, R., Abba, S. I., Salem, A., Mirzaee, R., Sihag, P. & Elbeltagi, A. (2024) Estimating the effect of sand-roughened bed on hydraulic jump characteristics using heuristic models, Results in Engineering, 23, 102724.
Dehghanipour, M. H., Ghazvinian, H. & Dehghanipour, A. (2021a) Evaluation of the efficiency of artificial intelligence models for simulating evaporation in arid, semi-arid, and very-wet climates of Iran, Iran-Water Resources Research, 17 (1), 318–327.
Dehghanipour, M. H., Karami, H., Ghazvinian, H., Kalantari, Z. & Dehghanipour, A. H. (2021b) Two comprehensive and practical methods for simulating pan evaporation under different climatic conditions in Iran, Water, 13 (20), 2814.
Ead, S. A. & Rajaratnam, N. (2002) Hydraulic jumps on corrugated beds, Journal of Hydraulic Engineering, 128 (7), 656–663.
Felder, S., Montano, L., Cui, H., Peirson, W. & Kramer, M. (2021) Effect of inflow conditions on the free-surface properties of hydraulic jumps, Journal of Hydraulic Research, 59 (6), 1004–1017.
Ghazvinian, H. & Karami, H. (2023) Effect of rainfall intensity and slope at the beginning of sandy loam soil runoff using rain simulator (Case study: Semnan city), JSTNAR, 26 (4), 319–334.
Ghazvinian, H. & Karami, H. (2024) Investigating the effect of climatic parameters predicting the mortality rate due to cardiovascular and respiratory disease with soft computing methods, Computational Engineering and Physical Modeling, 7 (4), 1–21.
Ghazvinian, H. R., Karami, H. & Dadrasajirlou, Y. (2024) Field comparison studies of the rate of evaporation between Colorado sunken evaporation pan and class A evaporation pans in the arid areas (Case study: Semnan city), JWSS-Isfahan University of Technology, 28 (2), 45–65.
Gohari, A. & Farhoudi, J. (2009) 'The characteristics of hydraulic jump on rough bed stilling basins', 33rd IAHR Congress, Water Engineering for A Sustainable Environment. Vancouver, British Columbia, 9–14.
Henderson, F. M. (1966) Open Channel Flow. Macmillan Publishing Co., Inc., New York.
Karami, H. & Ghazvinian, H. (2022) A practical and economic assessment regarding the effect of various physical covers on reducing evaporation from water reservoirs in arid and semi-arid regions (Experimental study), Iranian Journal of Soil and Water Research, 53 (6), 1297–1313.
Karbasi, M. & Azamathulla, H. M. (2016) GEP to predict characteristics of a hydraulic jump over a rough bed, KSCE Journal of Civil Engineering, 20, 3006–3011.
Maleki, S. & Fiorotto, V. (2021) Hydraulic jump stilling basin design over rough beds, Journal of Hydraulic Engineering, 147 (1), 4020087.
Mousapour Mamoudan, M., Ostadi, A., Pourkhodabakhsh, N., Fathollahi-Fard, A. M. & Soleimani, F. (2023) Hybrid neural network-based metaheuristics for prediction of financial markets: a case study on global gold market, Journal of Computational Design and Engineering, 10 (3), 1110–1125.
Najah, A. A., El-Shafie, A., Karim, O. A. & Jaafar, O. (2012) Water quality prediction model utilizing integrated wavelet-ANFIS model with cross-validation, Neural Computing and Applications, 21 (5), 833–841.
Nikmehr, S. & Aminpour, Y. (2020) Numerical simulation of hydraulic jump over rough beds, Periodica Polytechnica Civil Engineering, 64 (2), 396–407.
Parsamehr, P., Farsadizadeh, D., Hosseinzadeh Dalir, A., Abbaspour, A. & Nasr Esfahani, M. J. (2017) Characteristics of hydraulic jump on rough bed with adverse slope, ISH Journal of Hydraulic Engineering, 23 (3), 301–307.
Pourabdollah, N., Abedi Koupai, J., Heidarpour, M. & Akbari, M. (2022) Evaluation of ANFIS and ANFIS-PSO models for estimating hydraulic jump characteristics, JWSS-Isfahan University of Technology, 25 (4), 253–266.
Samii, A., Karami, H., Ghazvinian, H., Safari, A. & Dadrasajirlou, Y. (2023) Comparison of DEEP-LSTM and MLP models in estimation of evaporation pan for arid regions, Journal of Soft Computing in Civil Engineering, 7 (2), 155–175.
Seilsepour, A., Alizadeh, M., Ravanmehr, R., Beheshti, M. T. H. & Nassiri, R. (2022) 'Self-supervised sentiment classification based on semantic similarity measures and contextual embedding using metaheuristic optimizer', 2022 8th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS). IEEE, 1–7.
Tutsoy, O. & Sumbul, H. E. (2024) A novel deep machine learning algorithm with dimensionality and size reduction approaches for feature elimination: thyroid cancer diagnoses with randomly missing data, Briefings in Bioinformatics, 25 (4), bbae344. Available at: https://academic.oup.com/bib/article/doi/10.1093/bib/bbae344/7713726.
Velioglu, D., Tokyay, N. D. & Dincer, A. I. (2015) 'A numerical and experimental study on the characteristics of hydraulic jumps on rough beds', E-proceedings of the 36th IAHR World Congress, 1–9.
Ziari, M., Karami, H. & Daneshfaraz, R. (2023) Investigating the simultaneous effect of divergence and bed roughness on hydraulic jump characteristics, Journal of Hydraulics (jhyd.iha.ir), 18 (2), 53–65. https://doi.org/10.30482/jhdy.2023.362465.1619.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).