## Abstract

In this paper, the trap efficiency (TE) of retention dams was investigated using laboratory experiments. To map the relation between TE and the involved parameters, artificial intelligence (AI) methods, including artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), were utilized. The experiments indicated that TE varies between 30 and 98%; hence, this structure can be recommended for controlling sediment transport in watershed management plans. The experiments also showed that TE decreases as the longitudinal slope of the stream increases; the same trend was observed for the *V _{f}/V_{s}* parameter. TE increases with the mean grain diameter (D_{50}) and the specific gravity of the sediments (G_{s}). All applied AI models demonstrated suitable performance; however, the minimum data dispersivity was observed in the SVM outcomes. Notably, the best-performing transfer, membership and kernel functions were tansig, gaussmf and the radial basis function (RBF) for the ANN, ANFIS and SVM models, respectively.

## NOTATION

- AI
artificial intelligence

- ANN
artificial neural network

- *C*
concentration of sediment

- CFD
computational fluid dynamics

- D_{50}
mean diameter of sediments

- DDR
developed discrepancy ratio

- gaussmf
Gaussian curve membership function

- GP
genetic programming

- G_{s}
specific gravity

- h_{max}
maximum depth of flow

- logsig
log-sigmoid transfer function

- MARS
multivariate adaptive regression splines

- MFs
membership functions

- MLPNN
multilayer perceptron neural network

- purelin
linear transfer function

- Q_{in}
discharge of inflow

- Q_{out}
discharge of outflow

- R^{2}
coefficient of determination

- radbas
radial basis transfer function

- RBF
radial basis function

- RMSE
root mean square error

- *S*
longitudinal slope of river

- SVM
support vector machine

- tansig
hyperbolic tangent sigmoid transfer function

- T_{E}
trap efficiency

- V_{f}
volume of flood

- V_{s}
volume of reservoir

- wtaver
weighted average

- *γ*_{s}
density of sediment

- *γ*_{w}
density of flow

## INTRODUCTION

Deposition of sediments in dam reservoirs reduces their useful volume and, at the entrance of intake structures, this phenomenon causes difficulties in flow diversion (Moradinejad *et al.* 2017). Sediment transport in hydrosystems sometimes causes damage to hydraulic structures, specifically hydro-mechanical installations such as pump stations, bottom outlets and sluice gates (Madadi *et al.* 2016). Sediment deposition in water conveyance structures such as irrigation channels reduces their discharge capacity and changes their hydraulic characteristics, such as the Manning roughness factor (Depeweg *et al.* 2014; Parsaie & Haghiabi 2016a). Recently, the effect of sediment deposition on the hydraulic efficiency of weirs has been investigated; based on the reports, sediment deposition significantly decreases the discharge capacity of weirs (Fahmy 2015). One way to control sediment transport is to develop watershed plans involving structures such as retention dams (Cao *et al.* 2011). By reducing the flood peak and flow velocity, retention dams decrease the power of the flow, which leads to deposition of sediments and a reduction of flow erosivity (Fiener *et al.* 2005; Camnasio & Becciu 2011; Vorogushyn *et al.* 2012; Del Giudice *et al.* 2014; Yazdi & Salehi Neyshabouri 2014; Liu *et al.* 2015; Nikoo *et al.* 2015; Yazdi *et al.* 2016). The performance of a retention dam is defined by the trap efficiency (TE) factor, calculated as the ratio of the sediment retained in the reservoir to the sediment entering it. Based on reports, retention dams have a high ability to reduce and delay the flood peak. This approach has been proposed as a rational idea among watershed management projects to improve the lifetime of high dam projects (Parsaie *et al.* 2017a). The main points related to retention dams are site selection and defining their heights.
To select the site, geographic information systems and Google Earth have been proposed (Yazdi & Salehi Neyshabouri 2012; Madadi *et al.* 2015). To study the feasibility of retention dams, in addition to site selection, measuring and estimating the sediment load is necessary (Parsaie & Haghiabi 2017a, 2017b; Parsaie *et al.* 2017b). To estimate the flow discharge and sediment load, in addition to field measurements, advanced numerical methods, including artificial intelligence (AI) techniques and computational hydraulic packages such as GSTARS and HEC-RAS, have been proposed (Hassan-Esfahani & Banihabib 2016; Parsaie *et al.* 2016a; Parsaie & Haghiabi 2016a). AI techniques, including artificial neural network (ANN), support vector machine (SVM), genetic programming and adaptive neuro-fuzzy inference system (ANFIS), have been successfully utilized all around the world to predict river flow and sediment loads (Kiat *et al.* 2008; Ghani *et al.* 2011; Baghbanpour & Kashefipour 2012; Azamathulla *et al.* 2013; Ghani & Azamathulla 2014). Based on the important role of retention dams in controlling floods and sediment transport, in this study the performance of retention dams in terms of sediment deposition is investigated. To this end, the parameters involved in the performance of retention dams are derived using dimensional analysis and a series of experiments is programmed. To accurately model the relation between the influencing parameters and TE, the experimental results are modeled and predicted using AI techniques, including ANFIS, ANN and SVM.

## MATERIALS AND METHODS

### Dimensional analysis

*C* is the concentration of sediment; *S* is the longitudinal slope of the river; and T_{E} is the trap efficiency. Using the Buckingham theorem and choosing the repeating parameters, the dimensionless parameters are derived as Equation (2).

*C* is a function of the slope, retention dams are constructed in mountainous areas, and the discharge of outflow depends on the upstream flow head; hence, three dimensionless factors are removed from Equation (2), and the parameters involved in the TE of retention dams are presented in Equation (3).

### Experimental setup

Regarding Equation (3), a series of experiments was programmed and conducted at the laboratory of the Soil Conservation and Watershed Management Research Institute (SCWMRI), Iran. The experiments were carried out in a flume 0.25 m wide, 0.25 m deep and 6 m long. At the entrance of the flume, two reservoirs were provided: one for clear water representing the flood and one for the sediment load. Three sediment types with different properties were utilized: G_{s} = 2.65 with D_{50} = 0.178 mm, G_{s} = 1.291 with D_{50} = 0.271 mm, and G_{s} = 1.523 with D_{50} = 0.243 mm. The concentration of sediments was evaluated to define the performance of the retention dams. For example, when the flood volume was equal to 0.3 m^{3}, a sediment concentration of 10% means that the volume of sediments was equal to 0.03 m^{3}. Figure 1 shows a schematic sketch of the laboratory model used to study the T_{E} of retention dams.
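For concreteness, the sediment bookkeeping described above can be sketched as follows; the outflow sediment volume used here is a hypothetical value, included only to show how T_{E} is obtained from the measured volumes:

```python
# Example from the text: a flood volume of 0.3 m3 with a sediment
# concentration of 10% carries 0.03 m3 of sediment into the reservoir.
flood_volume = 0.3        # m3
concentration = 0.10      # sediment concentration (10%)
sediment_in = flood_volume * concentration   # 0.03 m3

# Hypothetical measured sediment volume leaving the reservoir
# (illustrative value, not from the paper).
sediment_out = 0.004      # m3

# Trap efficiency: fraction of the inflowing sediment retained, in percent.
TE = (sediment_in - sediment_out) / sediment_in * 100
print(round(TE, 2))
```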

### AI techniques

#### Artificial neural networks

ANN is an artificial intelligence technique whose idea was taken from the nervous system of the human brain. An ANN consists of a group of interconnected neurons. In such a system, input data are introduced to the neurons and, through a series of mathematical operations, are mapped to the output. A simple form of ANN is shown in Figure 2. As shown in this figure, an ANN consists of neurons arranged in one or more layers. The first layer is the input layer, whose main task is to introduce the input information; no mathematical operations are performed in this layer. The next layer(s), categorized as hidden layer(s), are the main part of the ANN, where the main mathematical operations are performed. In this part of the network, the inputs are multiplied by weights and then summed with a constant value called the bias; each neuron has its own weights and bias, as shown in Figure 2. The result of these operations is passed through a function called the transfer function. Different types of transfer functions have been introduced, and the types most widely used in hydrology engineering are summarized by Araghinejad (2013). The last layer, called the output layer, summarizes the mathematical process of the network. The outputs of the network are compared with the observed data and, to minimize the difference between them, the values of the weights and biases are adjusted. For this purpose, conventional approaches such as the Levenberg–Marquardt technique or advanced optimization methods have been proposed (Heddam 2016a, 2016b, 2016c, 2016d, 2016e).
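The forward pass described above (weights, bias and transfer function per layer) can be sketched as follows. The weights are random placeholders rather than trained values; the 4-5-3-1 layout mirrors the structure reported in this study, and tansig/purelin follow the notation list:

```python
import numpy as np

rng = np.random.default_rng(0)

def tansig(x):
    # hyperbolic tangent sigmoid transfer function (MATLAB's tansig)
    return np.tanh(x)

def purelin(x):
    # linear transfer function (MATLAB's purelin)
    return x

# Hypothetical (untrained) weights and biases for the 4-5-3-1 structure:
# 4 inputs (S, Gs, Vf/Vs, D50/Vf^0.33), hidden layers of 5 and 3 neurons,
# one output (TE).
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
W3, b3 = rng.normal(size=(1, 3)), rng.normal(size=1)

def mlp_forward(x):
    h1 = tansig(W1 @ x + b1)      # first hidden layer: weights, bias, tansig
    h2 = tansig(W2 @ h1 + b2)     # second hidden layer
    return purelin(W3 @ h2 + b3)  # output layer: purelin

# One hypothetical input pattern (S, Gs, Vf/Vs, D50/Vf^0.33).
te_raw = mlp_forward(np.array([0.04, 2.65, 1.5, 0.0002]))
```

In practice the weights and biases would be fitted, e.g. with the Levenberg–Marquardt technique mentioned above, against the observed TE values.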

#### Adaptive neuro fuzzy inference systems

*A*_{1}, *A*_{2}, *B*_{1} and *B*_{2} are the membership functions acting on the inputs *x* and *y*, respectively; *p*_{1}, *q*_{1}, *r*_{1} and *p*_{2}, *q*_{2}, *r*_{2} are the parameters of the output function. The structure of ANFIS is presented in Figure 3. In the first layer, all input variables take grades of membership; in layer 2, all membership grades are multiplied by each other; in layer 3, all grades of membership are normalized; and in layer 4, the consequent of each rule is computed. In the last layer, the output is obtained as the weighted average of the rule outputs (Azamathulla *et al.* 2008, 2009; Noori *et al.* 2015; Parsaie & Haghiabi 2016b; Parsaie *et al.* 2016b, 2016c).
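The five ANFIS layers described above can be sketched for a two-input, two-rule Sugeno system; all membership centres, widths and consequent parameters (p, q, r) are hypothetical values chosen only to illustrate the computation:

```python
import numpy as np

def gaussmf(x, c, sigma):
    # Gaussian curve membership function (gaussmf)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def sugeno_two_rules(x, y):
    # Layer 1: membership grades of the two inputs
    # (hypothetical centres and widths, for illustration only).
    a1, a2 = gaussmf(x, 0.0, 1.0), gaussmf(x, 1.0, 1.0)
    b1, b2 = gaussmf(y, 0.0, 1.0), gaussmf(y, 1.0, 1.0)
    # Layer 2: firing strength of each rule (product of grades).
    w1, w2 = a1 * b1, a2 * b2
    # Layer 3: normalization of the firing strengths.
    w1n, w2n = w1 / (w1 + w2), w2 / (w1 + w2)
    # Layer 4: first-order consequents f_i = p_i*x + q_i*y + r_i
    # (p, q, r values are hypothetical).
    f1 = 1.0 * x + 0.5 * y + 0.1
    f2 = 0.2 * x + 1.5 * y - 0.3
    # Layer 5: weighted average of the rule outputs (wtaver).
    return w1n * f1 + w2n * f2
```

The output always lies between the smallest and largest rule consequent, since it is a convex combination of them.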

#### Support vector regression

SVMs are a set of related supervised learning methods used for classification and regression. In many applications, a non-linear classifier provides better accuracy. In SVM, the input *x* is first mapped onto an m-dimensional feature space using some fixed (non-linear) mapping, and then a linear model is constructed in this feature space. The naive way of making a non-linear classifier out of a linear classifier is to map the data from the input space X to a feature space F using a non-linear function. The commonly used kernel functions are:

- I.
Linear kernel: K(x, x′) = x·x′

- II.
Polynomial kernel: K(x, x′) = (*γ* x·x′ + *r*)^{d}

- III.
RBF kernel: K(x, x′) = exp(−*γ*‖x − x′‖^{2})

- IV.
Sigmoid kernel: K(x, x′) = tanh(*γ* x·x′ + *r*)

Here *γ*, *r* and *d* are kernel parameters. It is well known that SVM generalization performance (estimation accuracy) depends on a good setting of the meta-parameters, the regularization constant C and the insensitivity margin ε, as well as the kernel parameters. The choices of C and ε control the complexity of the prediction (regression) model. The problem of optimal parameter selection is further complicated, since the SVM model complexity (and hence its generalization performance) depends on all three parameters. Kernel functions are used to change the dimensionality of the input space so that the classification can be performed (Azamathulla *et al.* 2016; Haghiabi *et al.* 2017; Parsaie & Haghiabi 2017a, 2017b).
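The four kernels listed above can be written out directly; the feature vectors below are hypothetical values used only to illustrate the kernel computations:

```python
import numpy as np

def linear_kernel(x, z):
    # I. linear: K(x, z) = x . z
    return x @ z

def polynomial_kernel(x, z, gamma=1.0, r=1.0, d=3):
    # II. polynomial: K(x, z) = (gamma * x . z + r)^d
    return (gamma * (x @ z) + r) ** d

def rbf_kernel(x, z, gamma=1.0):
    # III. RBF: K(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def sigmoid_kernel(x, z, gamma=1.0, r=0.0):
    # IV. sigmoid: K(x, z) = tanh(gamma * x . z + r)
    return np.tanh(gamma * (x @ z) + r)

# Two hypothetical feature vectors (e.g. S, Gs, Vf/Vs, D50/Vf^0.33).
x = np.array([0.040, 2.65, 1.50, 0.0002])
z = np.array([0.025, 1.29, 1.10, 0.0005])
k = rbf_kernel(x, z, gamma=0.5)
```

Note that the RBF kernel of a point with itself is always 1, which is a quick sanity check on an implementation.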

## RESULTS AND DISCUSSION

### Results of experiments

The results obtained from the laboratory experiments are presented in Table 1. The variation of TE with the slope and *V _{f}/V_{s}* is shown in Figure 4. As shown in this figure, TE decreases as the slope and *V _{f}/V_{s}* increase. Increasing the slope of the main channel leads to an increase in the flow velocity and shear stress. Increasing the flow velocity reduces the time available for sediment deposition, and the increased shear stress, in addition to reducing sediment deposition, sometimes causes the flushing phenomenon. As shown in Figure 4, increasing the concentration of the sediment load also decreases the performance of the retention dams in terms of sediment deposition (TE), since increasing the sediment concentration decreases the fall velocity. Figure 4 also shows that TE increases with the mean grain size of the sediment (D_{50}); the same finding holds for the specific gravity (G_{s}). In other words, increasing G_{s} improves the performance of retention dams regarding sediment deposition. Increasing G_{s} and D_{50} increases the fall velocity; hence, TE increases. The solid lines in Figure 4 show the trend of TE versus the longitudinal slope and *V _{f}/V_{s}*, considering the mean grain size of the sediment and the specific gravity, respectively.

| Run no. | S | G_{s} | D_{50}/(V_{f})^{0.33} | V_{f}/V_{s} | T_{E} (%) | Run no. | S | G_{s} | D_{50}/(V_{f})^{0.33} | V_{f}/V_{s} | T_{E} (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.025 | 1.6 | 0.0005 | 1.32 | 37.235 | 26 | 0.045 | 2.65 | 0.0002 | 1.457 | 96.564 |
| 2 | 0.05 | 1.6 | 0.0005 | 1.23 | 37.688 | 27 | 0.06 | 1.29 | 0.0005 | 1.268 | 80.66 |
| 3 | 0.05 | 1.29 | 0.0005 | 1.90 | 68.939 | 28 | 0.025 | 1.6 | 0.0005 | 1.212 | 36.25 |
| 4 | 0.045 | 1.6 | 0.0005 | 1.3 | 36.755 | 29 | 0.05 | 2.65 | 0.0002 | 1.967 | 90.409 |
| 5 | 0.055 | 1.29 | 0.0004 | 1.478 | 43.786 | 30 | 0.04 | 2.65 | 0.0002 | 1.264 | 90.566 |
| 6 | 0.04 | 1.29 | 0.0004 | 1.667 | 45.184 | 31 | 0.04 | 1.29 | 0.0004 | 1.322 | 58.094 |
| 7 | 0.04 | 1.6 | 0.0005 | 1.236 | 37.855 | 32 | 0.06 | 1.6 | 0.0003 | 1.936 | 98.556 |
| 8 | 0.05 | 1.6 | 0.0005 | 1.375 | 37.821 | 33 | 0.025 | 1.29 | 0.0004 | 1.108 | 71.262 |
| 9 | 0.04 | 2.65 | 0.0002 | 1.512 | 92.346 | 34 | 0.045 | 2.65 | 0.0002 | 1.765 | 97.425 |
| 10 | 0.055 | 1.29 | 0.0005 | 1.404 | 59.275 | 35 | 0.05 | 1.29 | 0.0004 | 2.083 | 38.729 |
| 11 | 0.06 | 1.29 | 0.0004 | 1.667 | 43.576 | 36 | 0.025 | 2.65 | 0.0002 | 1.068 | 94.654 |
| 12 | 0.025 | 2.65 | 0.0002 | 1.231 | 94.666 | 37 | 0.05 | 1.6 | 0.0005 | 1.222 | 37.775 |
| 13 | 0.045 | 2.65 | 0.0002 | 1.661 | 92.363 | 38 | 0.025 | 1.6 | 0.0005 | 1.196 | 36.5 |
| 14 | 0.045 | 2.65 | 0.0002 | 1.562 | 92.171 | 39 | 0.045 | 2.65 | 0.0002 | 1.361 | 90.412 |
| 15 | 0.045 | 2.65 | 0.0003 | 1.475 | 96.878 | 40 | 0.05 | 2.65 | 0.0002 | 1.750 | 90.264 |
| 16 | 0.04 | 1.29 | 0.0005 | 1.367 | 77.459 | 41 | 0.055 | 1.29 | 0.0004 | 2.112 | 36.597 |
| 17 | 0.06 | 1.29 | 0.0004 | 2.341 | 31.844 | 42 | 0.045 | 2.65 | 0.0002 | 1.325 | 95.365 |
| 18 | 0.05 | 1.29 | 0.0004 | 1.448 | 36.698 | 43 | 0.025 | 1.29 | 0.0005 | 0.889 | 80.557 |
| 19 | 0.06 | 2.65 | 0.0003 | 1.812 | 97.777 | 44 | 0.025 | 1.29 | 0.0004 | 1.084 | 77.795 |
| 20 | 0.025 | 1.29 | 0.0005 | 1.11 | 81.365 | 45 | 0.04 | 1.6 | 0.0005 | 1.238 | 37.888 |
| 21 | 0.05 | 1.29 | 0.0004 | 1.999 | 38.709 | 46 | 0.04 | 1.6 | 0.0005 | 1.227 | 36.72 |
| 22 | 0.035 | 1.6 | 0.0005 | 1.225 | 36.95 | 47 | 0.025 | 1.6 | 0.0005 | 1.312 | 37.431 |
| 23 | 0.04 | 2.65 | 0.0002 | 1.666 | 92.872 | 48 | 0.05 | 2.65 | 0.0002 | 1.448 | 88.325 |
| 24 | 0.06 | 1.29 | 0.0005 | 1.329 | 64.549 | 49 | 0.035 | 1.6 | 0.0005 | 1.211 | 36.55 |
| 25 | 0.025 | 2.65 | 0.0002 | 1.659 | 97.665 | 50 | 0.03 | 1.6 | 0.0005 | 1.285 | 36.778 |


### Results of AI models

The recommendations of *et al.* (2017) were considered for developing the models. They stated that, to develop the MLPNN model, at the first stage one hidden layer is designed with a number of neurons equal to the number of input features. At this stage, the performance of different types of transfer functions is tested. After selecting the transfer function, to increase the accuracy of the developed model, increasing the number of hidden layers and/or the number of neurons in each hidden layer may be considered. This development approach leads to an optimal structure for the MLPNN and avoids an increase in computational cost. In this study, the performance of different transfer functions, including tansig, logsig, radbas and purelin, was tested. The optimal structure achieved is shown in Figure 5. As shown in this figure, the developed MLPNN model consists of two hidden layers, with five neurons in the first and three in the second. The best-performing transfer function for the hidden layers was tansig and, for the output layer, purelin. The results of the MLPNN model during the training and testing stages are shown in Figure 7. The approach considered for designing the structure of the MLPNN model was also used for developing the ANFIS model. The structure of the ANFIS model is presented in Table 2. As presented in this table, the Sugeno type was considered for developing the ANFIS model and the weighted average was utilized for defuzzification. The best performance among the tested membership functions was obtained with gaussmf. The results of the ANFIS model in the development stages (training and testing) are shown in Figure 7. The approach for developing the SVM was the same as for the MLPNN and ANFIS. The structure of the prepared SVM is shown in Figure 6. To develop the SVM model, the different types of kernel functions introduced in the Materials and Methods section were tested, and the best accuracy was obtained with the RBF kernel function.

The performance of the SVM in the calibration and validation stages is shown in Figure 7. Reviewing Figure 7 demonstrates that all applied models have acceptable accuracy in terms of the standard error indices, including the coefficient of determination (R^{2}) and the root mean square error (RMSE). To provide more information about the performance of the applied models, another index, proposed by Noori *et al.* (2011), was used. This index is called the developed discrepancy ratio (DDR) and is defined as Equation (11). The results of DDR in the training and testing stages are shown in Figure 8. Evaluating the performance of the applied models in terms of the DDR index shows that the least data dispersivity is related to the SVM; this means that the results of the SVM are more reliable.
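A minimal sketch of the three indices used here (R^{2}, RMSE and DDR) is given below; the observed/predicted values are illustrative only, and the DDR form (predicted/observed − 1) is an assumption based on the cited definition:

```python
import numpy as np

def r2(obs, pred):
    # coefficient of determination
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(obs, pred):
    # root mean square error
    return np.sqrt(np.mean((obs - pred) ** 2))

def ddr(obs, pred):
    # developed discrepancy ratio (assumed form, after Noori et al. 2011);
    # DDR = 0 indicates a perfect prediction
    return pred / obs - 1.0

# Hypothetical observed/predicted TE values (%), for illustration only.
obs = np.array([37.2, 96.6, 80.7, 36.3, 90.4])
pred = np.array([38.0, 95.0, 79.5, 37.1, 91.2])
scores = (r2(obs, pred), rmse(obs, pred), ddr(obs, pred))
```

Less scatter of the DDR values around zero corresponds to the lower "data dispersivity" discussed above.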

| Parameter | N | MF | And method | Or method | Defuzz method | Agg method | Type |
|---|---|---|---|---|---|---|---|
| S | 5 | gaussmf | prod | max | wtaver | max | sugeno |
| G_{s} | 5 | gaussmf | prod | max | wtaver | max | |
| V_{f}/V_{s} | 5 | gaussmf | prod | max | wtaver | max | |
| D_{50}/(V_{f})^{0.33} | 5 | gaussmf | prod | max | wtaver | max | |


## CONCLUSION

Controlling sediment transport in rivers, especially in the upstream areas of the river basin, is a rational approach to improving the lifetime of large dams and water engineering projects such as irrigation and drainage networks. Retention dams have been proposed as one of the main hydraulic structures for this purpose. In this paper, the TE of retention dams was investigated using experiments and numerical methods. The experimental results indicated that retention dams have a high impact on the deposition of sediment loads; hence, using them upstream of rivers, especially in catchments, is strongly recommended for controlling sediment transport. The experiments showed that TE decreased with increasing longitudinal slope of the channel; the same was observed for the variation of TE versus *V _{f}/V_{s}*. These two parameters are vital for the feasibility study of retention dams: the longitudinal slope is derived from surveying operations and *V _{f}/V_{s}* is representative of the river sediment regime. In this study, to map the relation between TE and the involved parameters, AI techniques including ANN, ANFIS and SVM were utilized. The results of the applied AI models indicated that they have suitable performance for mapping and predicting TE; however, the results of the SVM were more reliable.

## CONFLICT OF INTEREST

The authors declare no conflict of interest.

## REFERENCES

Araghinejad, S. 2013 *Data-Driven Modeling: Using MATLAB^{®} in Water Resources and Environmental Engineering*. Springer, Dordrecht.