Abstract
Dam deformation monitoring and prediction are crucial for evaluating reservoir safety. Several factors influence dam deformation; however, their combined effects are not always linear. As opposed to a single-kernel extreme learning machine, which suffers from poor generalization performance and instability, in this study we propose an improved bat algorithm for dam deformation prediction based on a hybrid-kernel extreme learning machine. To combine the generalization ability of the global kernel with the learning ability of the local kernel, we combined a global kernel function (the polynomial kernel) with a local kernel function (the Gaussian kernel). Moreover, a Lévy flight bat optimization algorithm (LBA) was proposed to overcome the shortcomings of the standard bat algorithm. The results showed that our model outperformed the comparison models, which indicates that the proposed algorithm and methods can be applied to dam deformation monitoring and prediction in different projects and regions.
HIGHLIGHTS
Lévy flight was incorporated into the bat algorithm to overcome its shortcomings in convergence speed and effectiveness.
We integrated the advantages of the Gaussian and polynomial functions into a hybrid kernel for KELM (PGKELM).
LBA-PGKELM outperformed LBA-SVM and the gradient-descent-based BPNN.
INTRODUCTION
The role of a dam in the functioning of a reservoir is undeniable, and any dam failure threatens human lives and causes significant damage to downstream regions (Milillo et al. 2016). Therefore, building a dam deformation model is important for dam safety and for preventing financial losses (Mohanty et al. 2020). Dam deformation monitoring and prediction is an efficient way to discover potential risks in the construction of dams (Chongshi et al. 2011). Due to the complexity of dam structures, the relationship between influencing factors (e.g., water pressure, aging, and temperature) and dam deformation is non-linear (Xu et al. 2012). Therefore, traditional dam deformation models (e.g., multiple linear regression and other linear models) (Yu et al. 2018; Li et al. 2019) may not capture the true relationship between explanatory variables and dam deformation because of their linearity assumption.
Machine learning methods have shown their ability in different fields, such as epidemiological studies (Chen et al. 2021), air quality forecasting (Karimian et al. 2017), and ecological security (Gong et al. 2017). In recent years, machine learning models have also shown their feasibility in dam deformation prediction due to their high prediction performance and capability to handle complex, non-linear problems (Salazar et al. 2015; Al-musaylh et al. 2018; Gupta et al. 2020). The support vector machine (SVM) (Song et al. 2011), artificial neural network (ANN) (Stojanovic et al. 2016), and random forest regression (RFR) (Dai et al. 2018) are some of the widely used models. Su et al. (2018) combined an improved particle swarm optimization (PSO) algorithm with the wavelet SVM and the radial kernel function-based SVM; they claimed that this combination improves parameter selection, which reduces the number of iterations, shortens the computation time, and avoids local optima. Bui et al. (2018) used a neural fuzzy inference system to create a regression model, with PSO employed to search for the best model parameters; the result proved to be an effective tool for modeling the non-linear, time-varying behavior of a dam. Machine learning methods applied to dam deformation can detect patterns between variables and resist noise in the monitoring data without presuppositions, which makes them particularly suitable for interpreting dam behavior (Lin 2019).
The extreme learning machine (ELM) is a machine learning method that, because of its excellent generalization ability and fast learning speed, has been adopted in various fields, e.g., dam displacement prediction (Cheng & Xiong 2017), water quality prediction (Lima et al. 2015), and software fault prediction (Mao et al. 2019). However, such algorithms have their own limitations. First, while pursuing the global optimum, local optima may be ignored. Second, the accuracy of an algorithm is improved at the price of diminished processing speed (Reddy et al. 2018). Third, if there are outliers in the training samples, the hidden layer output matrix becomes ill-posed, which affects the generalization ability and robustness of the model (Zhang & Zhang 2016). Finally, due to the random mapping of ELM, even with the same set of inputs the outputs will differ between runs (Barzegar et al. 2016). The kernel extreme learning machine (KELM) can improve the stability and accuracy of the prediction and can overcome the limitations of ELM's random mapping (Cao et al. 2020). Kang et al. (2017) proposed a prediction model based on a Gaussian kernel function and the extreme learning machine; they proved that KELM has high learning efficiency and can adapt well to the complexity of dam deformation prediction. Maimaitiyiming et al. (2019) optimized an ELM model by mixing dual activation functions, and the results showed that the optimized KELM improves model accuracy. Moreover, the generalization performance of KELM largely depends on the choice of kernel function parameters, and the selection of kernel functions remains a challenging task.
As the number of kernels increases, parameter optimization becomes necessary to maximize the performance of a model. Optimization is an essential component of machine learning. Among optimization techniques, swarm intelligence algorithms have shown strong optimization performance (Igiri et al. 2019). They include the cuckoo search algorithm (Wong et al. 2015), the fruit fly optimization algorithm (FOA), particle swarm optimization (PSO), and the bat algorithm (BA) (Degang & Ping 2019). Owing to its simplicity, strong searching ability, and fast convergence speed, BA has been widely applied to gray image edge detection (Dhar et al. 2017), capacitated vehicle routing problems (Zhou et al. 2016), and power systems (Sathya & Mohamed Thameem Ansari 2015). The BA is initialized with a set of random solutions, which enhances its local searching ability and processing speed.
This paper proposes a novel framework based on KELM and a Lévy flight bat algorithm (LBA-PGKELM). We propose a hybrid kernel for KELM, and investigate the role of an intelligent optimization algorithm in improving KELM's performance. To the best of our knowledge, this is the first application of a hybrid kernel function that exploits the advantages of both the Gaussian and polynomial functions as the kernel of KELM. Considering the high generalization ability and robustness of KELM models, we propose a KELM model to handle the non-linear characteristics of the dam deformation process. For this purpose, the global kernel (polynomial function) and local kernel (Gaussian function) are combined to form a hybrid kernel (PGKELM). The hybrid kernel exploits the generalization ability of the global kernel and the learning ability of the local kernel, does not suffer from the parameter sensitivity of a single-kernel KELM, and consequently improves the prediction accuracy of the model. Moreover, for efficient parameter selection, we use the Lévy flight bat algorithm (LBA), which aims to reduce the long searching time and solve the problem of optimal parameter selection. Results are also compared with those of the backpropagation neural network (BPNN) and SVM models.
MATERIALS AND METHODS
Data process
Hybrid kernel extreme learning machine (KELM)
From Equation (10), it can be inferred that the ELM uses kernel mapping instead of random mapping. Besides the aforementioned advantages of our proposed method, there is no need to set the dimension of the hidden-layer feature mapping; instead, pointwise evaluations of the kernel function are used, which reduces the computational complexity.
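As a concrete illustration of this kernel-mapping formulation, the following is a minimal NumPy sketch of a kernel ELM. It uses an RBF kernel and an illustrative regularization coefficient `C` on a toy 1-D signal; none of these parameter values come from this paper, and the variable names are our own.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=1.0):
    # Closed-form KELM output weights: beta = (I/C + K)^(-1) y,
    # where C is a regularization coefficient (illustrative value)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    # f(x) = k(x, X_train) @ beta -- no hidden-layer dimension needs to be set
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Toy usage: fit a smooth 1-D signal
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
beta = kelm_fit(X, y, C=1e4, gamma=10.0)
pred = kelm_predict(X, beta, X, gamma=10.0)
```

Note that, unlike a random-mapping ELM, repeated fits on the same data produce identical outputs, since no random hidden weights are involved.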
Lévy flight bat algorithm
The bat algorithm is a swarm-intelligence-based heuristic search algorithm proposed by Yang (2010, 2011, 2013), and it is an effective method for searching for global optima.
In the above, ε is a random number, and Āt represents the average loudness of the same generation of bats.
However, from the BA algorithm, it can be inferred that the searching capability mainly depends on the interaction and influence between bat individuals. Due to the lack of a mutation mechanism, once an individual is trapped in a local extremum, it is difficult for it to escape. Under this circumstance, the Lévy flight is applied.
In the above, s is the random step, and α is the index parameter. Figure 1 shows the simulated trajectory of a Lévy flight with particle flight speed v = 1, number of moving steps n = 10,000, and index parameter α = 1.5. We can see that the Lévy flight consists of many short-distance steps and a few large jumps.
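For illustration, the heavy-tailed step lengths underlying such a trajectory can be generated with Mantegna's algorithm, a common way to sample Lévy-stable steps. The sketch below assumes α = 1.5 as in Figure 1; the function name and seed are our own choices, not taken from the paper.

```python
import math
import numpy as np

def levy_steps(n, alpha=1.5, seed=0):
    # Mantegna's algorithm for Levy-stable step lengths with index alpha
    rng = np.random.default_rng(seed)
    num = math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
    den = math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2)
    sigma_u = (num / den) ** (1 / alpha)
    u = rng.normal(0.0, sigma_u, n)   # numerator: widened Gaussian
    v = rng.normal(0.0, 1.0, n)       # denominator: standard Gaussian
    return u / np.abs(v) ** (1 / alpha)

steps = levy_steps(10_000)
```

The distribution of `steps` reproduces the qualitative behavior described above: most steps are short, while a few jumps are orders of magnitude larger.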
- 1.
Initialize the algorithm parameters: set the number of bats, the individual maximum pulse frequency, the maximum pulse intensity, the frequency enhancement coefficient, the sound intensity attenuation coefficient, and the maximum number of iterations or the search accuracy.
- 2.
Randomly initialize the bat positions and find the best position in the population.
- 3.
Generate a random number. If the update condition is met, update the current position of the bat according to Equation (21); otherwise, update it with a disturbed position obtained by randomly perturbing the current position of the bat.
- 4.
Generate a random number. If the acceptance condition is met and the candidate position improves on the bat's current position, move the bat to the updated position.
- 5.
If bat i, after updating its position, is superior to the best bat in the group, replace the best individual and adjust the loudness and pulse emission rate according to Equations (18) and (19).
- 6.
Evaluate the bat population and find out the spatial position of the best bat.
- 7.
Once the desired accuracy is met or the maximum number of iterations is reached, go to the last step; otherwise, return to step 3 for a new search.
- 8.
Output the global extreme points and optimal individual values.
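The steps above can be sketched in code on a toy sphere objective. This is a minimal illustration under our own assumptions, not the paper's implementation: the frequency range, loudness and pulse-rate updates follow the standard bat algorithm, and the Lévy perturbation scale (0.01), population size, and bounds are assumed constants rather than the paper's tuned values.

```python
import math
import numpy as np

def levy(dim, alpha=1.5, rng=None):
    # Mantegna's algorithm for a Levy-distributed step vector
    rng = np.random.default_rng(rng)
    num = math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
    den = math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2)
    sigma = (num / den) ** (1 / alpha)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / alpha)

def lba(obj, dim=2, n_bats=20, n_iter=200, seed=1):
    rng = np.random.default_rng(seed)
    f_lo, f_hi = 0.0, 2.0                  # step 1: pulse-frequency range
    A = np.full(n_bats, 1.0)               # loudness
    r = np.full(n_bats, 0.5)               # pulse emission rate
    att, enh = 0.9, 0.9                    # attenuation / enhancement coefficients
    x = rng.uniform(-5, 5, (n_bats, dim))  # step 2: random initial positions
    v = np.zeros((n_bats, dim))
    fit = np.array([obj(p) for p in x])
    b = int(fit.argmin())
    best, best_val = x[b].copy(), fit[b]
    for t in range(n_iter):                # steps 3-7
        for i in range(n_bats):
            f = f_lo + (f_hi - f_lo) * rng.random()
            v[i] += (x[i] - best) * f
            cand = x[i] + v[i]
            if rng.random() > r[i]:
                # Levy-flight local walk around the current best (step 3)
                cand = best + 0.01 * levy(dim, rng=rng)
            fc = obj(cand)
            if fc < fit[i] and rng.random() < A[i]:  # step 4
                x[i], fit[i] = cand, fc
                A[i] *= att                          # step 5: loudness / pulse-rate updates
                r[i] = 0.5 * (1.0 - math.exp(-enh * (t + 1)))
            if fit[i] < best_val:                    # step 6
                best, best_val = x[i].copy(), fit[i]
    return best, best_val                            # step 8

best, best_val = lba(lambda z: float((z ** 2).sum()))
```

The Lévy perturbation replaces the plain random walk of the standard BA, providing the mutation-like long jumps that help escape local extrema.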
Model building
Based on our proposed algorithm, a dam deformation prediction model, LBA-PGKELM, is constructed. The core of our model is the optimization of PGKELM through the LBA algorithm. For the combined KELM, parameter optimization is more complicated because the parameters of multiple kernel functions are superposed; the LBA algorithm solves this problem.
After preprocessing of the existing data, the input samples are prepared, and the dam deformation prediction model based on the LBA-PGKELM algorithm is established (Figure 2).
Based on the above flow chart, the steps of our proposed algorithm are as follows:
- 1.
The data are divided randomly into a training set (80%) and a test set (20%). The mean and standard deviation of the training data are calculated for normalization and dimensionality reduction of the data.
- 2.
The polynomial kernel and the Gaussian RBF kernel are combined by weighting to construct a PGKELM kernel function with improved performance.
- 3.
Build the PGKELM model. The PGKELM kernel function is used to realize the mapping transformation of the feature values.
- 4.
Several parameters are taken as the optimization variables of LBA, and minimizing the sum of the MSEs on the training and test sets is taken as the optimization criterion.
- 5.
Validate the model performance on the test dataset using the mean square error (MSE) and the coefficient of determination (R²).
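Step 2, the weighted combination of the two kernels, can be sketched as follows. The mixing weight `w` and the kernel parameters `gamma`, `c0` and `d` stand in for the quantities that LBA would optimize; the values shown are illustrative assumptions, not the tuned values from this study.

```python
import numpy as np

def pg_kernel(A, B, w=0.5, gamma=1.0, c0=1.0, d=2):
    # Hybrid PG kernel: w * Gaussian (local) + (1 - w) * polynomial (global).
    # w, gamma, c0 and d are illustrative placeholders for LBA-tuned parameters.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    gauss = np.exp(-gamma * d2)
    poly = (A @ B.T + c0) ** d
    return w * gauss + (1.0 - w) * poly

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
K = pg_kernel(X, X)
```

A convex combination of two positive semi-definite kernels is itself a valid positive semi-definite kernel, so the hybrid can be dropped into the closed-form KELM solution unchanged.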
Verifying models
To verify the performance of our proposed LBA-PGKELM algorithm, we compared its performance with the BPNN and the LBA-SVM model.
Neural network models are flexible and allow modeling of complex, highly non-linear phenomena. The backpropagation neural network (BPNN) (Zou et al. 2018) is one such algorithm and realizes a non-linear mapping between input and output; its parameters are optimized with the gradient descent algorithm (GDA) to reduce the error and improve the results. The SVM (Vapnik 1998) was proposed in the 1990s. Following the principle of structural risk minimization, a linear classifier with the maximum-margin decision boundary is designed to minimize the generalization error in the worst case. In SVM, the data are mapped to a high-dimensional space through a kernel function; this converts the non-linear problem into a linear one and avoids the training becoming trapped in a local minimum.
Model performance analysis
CASE STUDY
We validated the capability and performance of the LBA-PGKELM model step by step. First, the optimization performance of the LBA algorithm was compared with that of the BA algorithm on the same data. Second, the performance of the LBA-PGKELM model was compared with LBA-RBFKELM and LBA-POLKELM; the single kernel functions used for these KELM variants were the RBF kernel and the polynomial kernel, respectively. Third, the prediction accuracy of the LBA-PGKELM model was compared with the BPNN and the LBA-optimized support vector machine (LBA-SVM).
Study site and data
The study area is the LiShan reservoir, located in Zhejiang Province, China (Figure 3). It is used mainly for irrigation, water supply, and flood control; the specifications of the reservoir are provided in Table 1. We utilized the daily-averaged data of two observation points, A2 and A3, from December 2018 to January 2021. The observations include the vertical displacements and temperature. Because the water level was constant during our study period, we did not consider it in our model. In addition, considering all monitoring stations collectively does not provide better results than using a single station (Kang et al. 2017); therefore, in our study, the data record at each monitoring point was modeled independently. To verify the ability of our proposed model, we utilized data from two monitoring stations. The dataset was cleaned by excluding outliers, after which the daily averages were calculated. According to the Pearson correlation analysis, the average correlation coefficients across the two monitoring stations between time and dam settlement and between temperature and dam settlement are R = 0.72 and R = 0.99, respectively. This indicates that time and temperature are two critical factors in dam settlement. The input samples are set according to Equation (7). Part of the data is shown in Table 2. In the experiment, the maximum number of iterations was 500, the volume attenuation coefficient was 0.9, and the search frequency enhancement coefficient was set to 0.9.
| Project | Value |
| --- | --- |
| Catchment area | 1.74 km² |
| Flood level | 83.67 m |
| Check flood level | 84.47 m |
| Total capacity | 684,600 m³ |
| Normal storage level | 81.8 m |
| Normal capacity | 512,000 m³ |
| Reservoir engineering level | V |
| Number (day) | Temperature (°C) | Settlement (mm) | Number (day) | Temperature (°C) | Settlement (mm) |
| --- | --- | --- | --- | --- | --- |
| 1 | 3.4 | 759.8 | 90 | 13.4 | 788.8 |
| 5 | 6.2 | 760.5 | 95 | 14.0 | 790.6 |
| 10 | 12.6 | 761.7 | 100 | 10.4 | 790.7 |
| 15 | 9.2 | 761.3 | 105 | 15.0 | 792.4 |
| 20 | 2.4 | 760.2 | 110 | 13.4 | 793.1 |
| 25 | 5.6 | 761.2 | 115 | 22.2 | 794.5 |
| 30 | 6.2 | 777.2 | 120 | 17.2 | 794.0 |
| 35 | 6.3 | 778.1 | 125 | 19.4 | 794.9 |
| 40 | 5.6 | 777.8 | 130 | 21.2 | 794.7 |
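The Pearson analysis described above can be reproduced in a few lines. The series below are the sampled values listed in Table 2, which are only a subset of the record, so the resulting coefficient will differ from the full-record R = 0.99 reported above; the function name is our own.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length series
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Sampled values from Table 2 (temperature in deg C, settlement in mm)
temp = [3.4, 6.2, 12.6, 9.2, 2.4, 5.6, 6.2, 6.3, 5.6,
        13.4, 14.0, 10.4, 15.0, 13.4, 22.2, 17.2, 19.4, 21.2]
sett = [759.8, 760.5, 761.7, 761.3, 760.2, 761.2, 777.2, 778.1, 777.8,
        788.8, 790.6, 790.7, 792.4, 793.1, 794.5, 794.0, 794.9, 794.7]
r = pearson_r(temp, sett)
```

Even on this subset, the coefficient is strongly positive, consistent with temperature being a critical factor in dam settlement.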
Results and discussion
LBA algorithm versus BA algorithm
In the first experiment, the PG kernel was used as the kernel function of KELM and the parameter optimization was carried out based on the BA algorithm and LBA algorithm, respectively.
To compare the convergence of the improved bat algorithm (LBA) and the bat algorithm (BA), the relationship between the fitness value and the number of iterations is shown in Figure 4; the maximum number of iterations was 200 and the population size was 100, over the same test dataset. The optimization results of the LBA and BA algorithms were estimated by the fitness function (Ragalo & Pillay 2018a, 2018b); since the fitness here is an MSE-based criterion, a lower fitness value indicates a stronger optimization ability. As can be seen in Figure 4, the difference between the LBA-best (best performance) and LBA-worst (worst performance) curves is smaller than that between the BA-best and BA-worst curves (Figure 4(b)). The best fitness value of the LBA algorithm is 0.6, which is better than the optimal result of the BA algorithm, 0.8 (Figure 4(a) and 4(b)). In the two tests, the LBA algorithm needed only 100 iterations to obtain relatively satisfactory results, whereas the convergence speed of the BA algorithm was significantly slower. This is because the searching capability of the BA algorithm mainly depends on the interaction and influence between bat individuals, and, due to the lack of a mutation mechanism, once a bat individual is trapped in a local extremum it is difficult for it to escape. In the searching process of the LBA algorithm, by contrast, the combination of frequent short-distance local searches and occasional long-distance global jumps enhances the local search effect and improves the optimization ability. From our results, it can be inferred that the LBA algorithm outperforms the BA algorithm in both convergence result and convergence speed.
Hybrid kernel algorithm versus single kernel algorithm
Unlike most studies (Cao et al. 2020), we applied a combination of two functions as the kernel function of KELM. We compared the PG kernel function with the polynomial kernel and the RBF kernel function (Tables 3 and 4; Figures 5 and 6). At both stations, the PGKELM mixed kernel showed better performance, with an average overall MSE of about 0.5 mm², than the single-kernel functions (average overall MSE of 0.73 mm² and 1.35 mm² for RBFKELM and POLKELM, respectively). Although the difference in R² among the models is not large, R² is not the only indicator for evaluating model performance. The polynomial kernel has strong generalization ability but weak local learning ability, whereas the Gaussian kernel has strong local learning ability but weak generalization ability. By combining these two kernel functions in the hybrid PGKELM kernel, better generalization performance and learning ability than either single kernel function are obtained.
| Types | Metric | PGKELM | RBFKELM | POLKELM |
| --- | --- | --- | --- | --- |
| Test set | R² | **0.9989** | 0.9923 | 0.9923 |
| Test set | MSE (mm²) | **0.5259** | 2.0800 | 2.0390 |
| Training set | R² | **0.9982** | 0.9957 | 0.9944 |
| Training set | MSE (mm²) | **0.4947** | 1.1883 | 1.4224 |
| Overall dataset | R² | 0.9981 | **0.9982** | 0.9973 |
| Overall dataset | MSE (mm²) | 0.5403 | **0.4947** | 0.7225 |
The values in bold indicate the best results.
| Types | Metric | PGKELM | RBFKELM | POLKELM |
| --- | --- | --- | --- | --- |
| Test set | R² | **0.9702** | 0.8997 | 0.9128 |
| Test set | MSE (mm²) | **0.2050** | 0.8121 | 0.7644 |
| Training set | R² | **0.9867** | 0.9347 | 0.9614 |
| Training set | MSE (mm²) | **0.9623** | 1.4137 | 2.5159 |
| Overall dataset | R² | **0.9797** | 0.9148 | 0.9355 |
| Overall dataset | MSE (mm²) | **0.4969** | 0.9678 | 1.9684 |
The values in bold indicate the best results.
LBA-PGKELM versus other algorithms
To verify the performance of our proposed LBA-PGKELM algorithm, we compared it with the BPNN and the LBA-SVM model. The BPNN comprised a hidden layer with 80 neurons, the learning rate was 0.01, the maximum number of iterations was set to 1,000, and the sigmoid function was selected as the activation function. It is worth mentioning that the optimization parameters of LBA-SVM, including the SVM type, kernel function type, loss function, and gamma parameter, were chosen similarly to those of the KELM model.
As can be seen in Tables 5 and 6 and Figures 7 and 8, the LBA-PGKELM shows the best performance among the three models, with a test MSE of 0.7225 mm². Considering the significant differences between the MSE values of the models, we can infer that the LBA algorithm improves the capability of searching for optimal parameters. The BPNN showed the worst performance, possibly because the gradient descent algorithm can become trapped in a local minimum, and because the BPNN has no optimization method to improve its generalization and global optimization capability. Thanks to the superior optimization ability of the LBA algorithm, the LBA-PGKELM and LBA-SVM models showed better prediction accuracy. On the one hand, the LBA algorithm has better generalization; on the other hand, LBA-PGKELM and LBA-SVM have a clear advantage in solving small-sample, non-linear problems due to their solid theoretical basis. Moreover, under the same swarm-intelligence-optimized KELM framework, the PGKELM algorithm performs better than the non-hybrid kernel algorithms, which suggests that the mixed kernel function fully exploits the advantages of the two kernel functions and improves the prediction accuracy of the model. Our proposed model focuses on improving prediction accuracy: compared with LSTM (Yang et al. 2020), its prediction accuracy on non-linear, small-sample data is higher than that of the time series method, with test MSEs of 0.2050 mm² and 1.06 mm², respectively. The comparison yields a conclusion consistent with the concrete dam case; the performance of the proposed LBA-PGKELM model is superior to that of BPNN, LBA-SVM, and LSTM. Therefore, the effectiveness and universality of the proposed methodology are verified.
| Types | Metric | LBA-PGKELM | LBA-SVM | BPNN |
| --- | --- | --- | --- | --- |
| Test set | R² | **0.9973** | 0.9941 | 0.9940 |
| Test set | MSE (mm²) | **0.7225** | 1.5356 | 6.4120 |
| Training set | R² | **0.9982** | 0.9972 | 0.9949 |
| Training set | MSE (mm²) | **0.4947** | 0.7804 | 5.7209 |
| Overall set | R² | **0.9981** | 0.9967 | 0.9945 |
| Overall set | MSE (mm²) | **0.5403** | 0.9314 | 5.8639 |
The values in bold indicate the best results.
| Types | Metric | LBA-PGKELM | LBA-SVM | BPNN |
| --- | --- | --- | --- | --- |
| Test set | R² | **0.9702** | 0.9147 | 0.8547 |
| Test set | MSE (mm²) | **0.3205** | 0.8606 | 1.0450 |
| Training set | R² | **0.9867** | 0.9544 | 0.9036 |
| Training set | MSE (mm²) | **0.8642** | 1.2437 | 1.9484 |
| Overall set | R² | **0.9797** | 0.9366 | 0.8751 |
| Overall set | MSE (mm²) | **0.6594** | 0.9865 | 1.2452 |
The values in bold indicate the best results.
CONCLUSIONS
The LBA-PGKELM algorithm proved to be an effective and simple method for dam deformation prediction. The hybrid kernel function for KELM and the Lévy flight optimized bat algorithm were combined to improve the dam deformation prediction accuracy based on machine learning. The effectiveness and superiority of the proposed methodology are demonstrated by application to a real concrete dam with two observation points and by comparison with the BPNN and SVM algorithms. The main conclusions are as follows.
The present study investigated a two-kernel KELM model, formed by exploiting the advantages of both the Gaussian and polynomial functions as the kernel of KELM. The proposed hybrid kernel not only avoids the instability of the traditional ELM, but also improves the generalization and learning ability of the model.
The LBA algorithm, our modification of the BA algorithm, overcomes the disadvantages of the BA algorithm, such as slow convergence speed, low convergence precision, and the tendency to fall into local minima.
Finally, compared with a conventional single-kernel KELM model, the PGKELM has stronger learning and generalization abilities. The high performance of the proposed method indicates that the selection, processing, and coding of the input variables were carried out successfully.
According to statistics, there are 98,822 reservoirs in China. The high accuracy of our proposed model demonstrates its feasibility in dam deformation prediction, and it can be applied to other reservoirs. In future work, we will connect all the monitoring stations of the whole dam and consider the spatiotemporal diversity of deformation behavior, aiming to construct a more competitive model for dam deformation prediction.
AUTHOR CONTRIBUTIONS
Conceptualization, Youliang Chen and Gang Xiao; Methodology, Youliang Chen; Software, Gang Xiao; Validation, Youliang Chen; Formal analysis, Gang Xiao; Data curation, Xiangjun Zhang; Writing – original draft preparation, Xiangjun Zhang; Writing – review and editing, Hamed Karimian; Visualization, Xiangjun Zhang; Supervision, Youliang Chen and Hamed Karimian; Project administration and funding acquisition, Jinsong Huang.
ACKNOWLEDGEMENTS
This research is supported by the National Dam Center Open Fund Project of China (CX2019B07), the Science and Technology Project of Jiangxi Provincial Department of Education (GJJ170522), and the Ganzhou Key R&D Project.
CONFLICTS OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
Data cannot be made publicly available; readers should contact the corresponding author for details.