Long-term inflow forecasting is extremely important for reasonable dispatch schedules of hydropower stations and efficient utilization plans of water resources. In this paper, a novel forecast framework, the meteorological data long short-term memory neural network (M-LSTM), which uses a meteorological dataset as input and adopts LSTM, is proposed for monthly inflow forecasting. First, the meteorological dataset, which provides more effective information for runoff prediction, is obtained by inverse distance weighting (IDW). Second, the maximal information coefficient (MIC) can adequately measure the degree of correlation between meteorological data and inflow; therefore, the MIC can distinguish key attributes from massive meteorological data and further reduce the computational burden. Last, LSTM is chosen as the prediction method due to its powerful nonlinear predictive capability, which can couple historical inflow records and meteorological data to forecast inflow. The Xiaowan hydropower station is selected as the case study. To evaluate the effectiveness of the M-LSTM for runoff prediction, several methods, including LSTM, the meteorological data backpropagation neural network (M-BPNN) and meteorological data support vector regression (M-SVR), are employed for comparison with the M-LSTM, and six evaluation criteria are used to compare their performance. The results reveal that the M-LSTM outperforms the other tested methods for long-term inflow prediction.

  • A highly accurate long-term inflow forecasting framework is developed using meteorological data based on long short-term memory neural network.

  • The meteorological dataset of the study area is obtained using inverse distance weighting.

  • The maximal information coefficient can adequately measure the degree of correlation between meteorological data and inflow.

  • The proposed framework demonstrates superior forecast performance.

In recent decades, researchers have devoted efforts to hydrological long-term runoff prediction, which is extremely important for water resource planning (Liu et al. 2021), reservoir operations (Maddu et al. 2022), risk management (Huang et al. 2022), flood control (Bahramian et al. 2023) and water abandonment (Jiang et al. 2018), especially in areas with concentrated rainfall (Liang et al. 2017). The main challenges in inflow forecasting include structurally complex prediction methods, low-accuracy inflow predictions and numerous influential meteorological factors, which stem from a changing climate, intensive human activities and complex, variable natural runoff.

To address these challenges, researchers have devoted efforts to monthly runoff prediction and its prediction methods. Generally, the research approaches can be divided into three categories: statistical methods (Taormina & Chau 2015), physical methods (Duan 1992; Robertson et al. 2013) and machine learning methods (Wang et al. 2023a). However, no single method has been identified that is universally appropriate for runoff prediction in every scenario because the hydrological characteristics of river basins and regions change with variations in time and space (Cheng et al. 2015).

Statistical methods, represented by autoregressive models, are usually based on historical inflow records and assume that the inflow series are stationary, linear and accurate. In other words, these methods assume a simple relationship between input and output. Nevertheless, real inflow series are complex, nonlinear and chaotic in nature (Dhanya & Nagesh Kumar 2011), and it is difficult to obtain high-accuracy predictions using statistical methods based on real inflow data. Physical methods such as the Soil and Water Assessment Tool (SWAT) (Ebod E 2023) have a clear physical mechanism of inflow generation and confluence. Such methods can reflect the characteristics of the study site but are highly dependent on the initial conditions and input data (Bennett et al. 2016). In addition, their parameters are not easy to determine, and their predictive ability is limited in many situations.

To further improve runoff prediction results and accuracy, machine learning methods, such as artificial neural networks (ANNs) (Samantaray et al. 2022), support vector machines (SVMs) (Xu et al. 2023) and evolutionary algorithms (EAs) (Chadalawada et al. 2020; Cai et al. 2022), have been proposed because they have shown excellent performance with respect to inflow prediction and are effective in handling the nonlinear relationship between input and output. Herath et al. (2021) utilized genetic programming (GP) as a rainfall–runoff method, preserving the prediction power of the short-term forecasting approach while benefiting from a better understanding of the catchment runoff dynamics. Sedighi et al. (2016) used SVR and ANN in rainfall–runoff modeling. Humphrey et al. (2016) carried out streamflow estimation using an ANN. However, despite the use of the above methods, several limitations and drawbacks still exist. For instance, ANNs are prone to being trapped in local minima and cannot avoid the long-term dependency problem. To address these shortcomings, the long short-term memory neural network (LSTM) was developed; its central idea is a memory cell that can maintain its state over time, combined with nonlinear gating units that regulate the information flow into and out of the cell (Greff et al. 2017).

In recent years, LSTM has been applied in many areas due to its strong nonlinear prediction ability, short convergence time and ability to capture long-term correlations in time series. For example, Shahid et al. (2020) proposed deep learning methods to predict COVID-19, and the results can be exploited for pandemic prediction for better planning and management. Jiang et al. (2022) uncovered the flooding mechanisms across the contiguous United States through interpretive deep learning represented by LSTM on representative catchments. Zhang et al. (2020) studied LSTM to predict public environmental emotions and then used a variety of error assessment methods to quantitatively analyze the prediction results, verifying LSTM's predictive performance. Qu et al. (2020) constructed an M-B-LSTM hybrid network to predict short-term traffic flow, and the results showed that the proposed method better handles uncertainty and overfitting problems. Wang et al. (2021) presented an ensemble hybrid forecasting method for annual runoff, which provided higher accuracy and consistency for annual runoff prediction. These studies proved the competitiveness of LSTM in many fields. Thus, LSTM is employed for monthly inflow prediction in this paper. For comparison purposes, the backpropagation neural network (BPNN) and support vector regression (SVR) are employed to forecast monthly inflow and are considered benchmark methods.

Moreover, a single prediction method still cannot accurately predict runoff, and approaches that incorporate advances in both meteorological understanding and observations have been proposed (Abbs 1999) to attain a higher inflow prediction accuracy. For example, Yu et al. (2017) constructed a method that uses meteorological and hydrological data as inputs to forecast monthly inflow, but only the monthly average air temperature was considered. Gauch et al. (2021) investigated two multitimescale LSTM architectures to forecast rainfall runoff by adding meteorological data, but the meteorological variables closely related to rainfall runoff were not filtered. However, it is not enough to use unscreened meteorological data series as input, because strictly relevant and sufficient potential input factors are a prerequisite for obtaining reliable and accurate prediction results.

To select effective inputs accurately and quickly, the maximal information coefficient (MIC), a robust measure of the degree of correlation between two variables (Sun et al. 2018), can be employed to select input factors for inflow forecasting. Therefore, many researchers have proposed adding meteorological data filtered via the MIC to further enhance the accuracy of inflow forecasting. For instance, Liao et al. (2020) used the ERA-Interim reanalysis dataset as input and further adopted gradient-boosting regression trees and the MIC to forecast daily inflow.

To elevate the inflow forecasting accuracy, meteorological data generated by the European Centre for Medium-Range Weather Forecasts (ECMWF) within the THORPEX Interactive Grand Global Ensemble (TIGGE) (Tao et al. 2014) were used as input (Saedi et al. 2019). The TIGGE network is a World Meteorological Organization project that seeks to capture other sources of uncertainty associated with the meteorological model structure and the ensemble size (Velázquez et al. 2011). The meteorological dataset has the advantages of long time series and wide spatial coverage and can greatly compensate for the uneven spatiotemporal distribution and scarcity of observation data.

Motivated by the above discussion, a novel forecast framework, the M-LSTM, aims to provide a reliable runoff prediction method for monthly inflow forecasting. The forecast framework adopts meteorological data as input, which ensures that ample information is supplied to depict inflow. Inverse distance weighting (IDW) was employed to obtain meteorological data. The MIC is used to identify effective features from massive features to reduce the computational burden. LSTM has good robustness and strong nonlinear fitting ability and is used as a prediction method to improve the monthly inflow forecasting accuracy.

This paper is organized as follows: Section 2 describes a case study and collected data. Section 3 introduces the theory and process of the methods used, including IDW, MIC and LSTM. Section 4 shows the results and discussion of the data, followed by the conclusions in Section 5.

Study area

The Lancang River originates on the Geladandong peak on the Qinghai–Tibet Plateau, runs through Qinghai, Tibet and Yunnan, and flows out of Xishuangbanna in southern Yunnan. The Lancang River is approximately 2,130 km long in China and has a drainage area of 113,300 km2 above the Xiaowan hydropower station, which has abundant hydropower resources and is the main controlling hydropower station on the Lancang River. Therefore, the Xiaowan hydropower station, chosen as the study site (Figure 1), is rather important for the security of the water resource management and operation of the middle and lower reaches of the Lancang River.
Figure 1: Location of the Xiaowan hydropower station.

Collected data

This study utilizes a meteorological dataset and observed monthly inflow data from the Xiaowan hydropower station over 14 years (January 2007 to December 2020). The meteorological dataset can be downloaded from https://apps.ecmwf.int/datasets/data/tigge/xia1, and it is available for every day on a 0.5° × 0.5° spatial grid. Figure 2 depicts the monthly inflow series. For machine learning methods, such as the recurrent neural network (RNN) and LSTM, the overtraining problem is likely to occur, which means that the method performs excellently on the training data but does not fit new data well (Cheng et al. 2015). To prevent the overtraining problem, Chau et al. (2005) suggested dividing the data into three subsets: a training set for method training, a validation set for monitoring the training process and a testing set for method testing. Hence, the available data are divided into these three datasets in this study. The data from January 2007 to December 2014 (96 months, approximately 57.1% of the whole dataset), from January 2015 to December 2017 (36 months, approximately 21.4%) and from January 2018 to December 2020 (36 months, approximately 21.4%) are used as the training, validation and testing sets, respectively. Based on expert knowledge and the available literature, 11 near-surface variables from the meteorological data are considered potential predictors for monthly inflow forecasting. More details regarding the meteorological dataset are presented in Table 1.
Table 1: Description and notations of potential meteorological data

No. | Variable | Description | Units
1 | tcw | Total column water | kg/m2
2 | 2mdt | 2-m dewpoint temperature | K
3 | tcc | Total cloud cover | –
4 | 2mt | 2-m temperature | K
5 | skt | Skin temperature | K
6 | st | Soil temperature top 20 cm | K
7 | sm | Soil moisture top 20 cm | kg/m3
8 | 10u | 10-m U-wind component | m/s
9 | msl | Mean sea level pressure | Pa
10 | sp | Surface pressure | Pa
11 | 10v | 10-m V-wind component | m/s
Figure 2: Monthly inflow series of the Xiaowan hydropower station.

Data preprocessing

Data normalization is a necessary process for the raw data before applying the forecasting methods to eliminate the dimensional effect, which can increase the prediction flexibility. The conversion function is as follows:
(1) $x' = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$

where $x'$ and $x$ indicate the scaled data and original data, respectively, and $x_{\max}$ and $x_{\min}$ represent the maximum and minimum of the inflow series, respectively.
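As an illustration, the normalization in Equation (1) can be sketched in Python as follows; the function and variable names are ours rather than the study's, and in practice the minimum and maximum are typically taken from the training set and reused for the validation and testing sets.

```python
import numpy as np

def min_max_scale(x, x_min=None, x_max=None):
    """Scale a series to [0, 1] following Equation (1)."""
    x = np.asarray(x, dtype=float)
    x_min = np.min(x) if x_min is None else x_min
    x_max = np.max(x) if x_max is None else x_max
    return (x - x_min) / (x_max - x_min), x_min, x_max

def inverse_scale(x_scaled, x_min, x_max):
    """Map scaled values back to the original units (e.g., m3/s for inflow)."""
    return np.asarray(x_scaled, dtype=float) * (x_max - x_min) + x_min
```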

Inverse distance weighting

IDW is a deterministic interpolation method by which unknown values are computed as a linear combination of values at known points (Sapna et al. 2018). IDW produces surfaces by establishing a neighborhood search of points and weighting these points by a power function. As the power increases, the influence of more distant points diminishes; a smaller power distributes the weights more uniformly among neighboring points (Poshtmasari et al. 2012). For example, to obtain the meteorological data of the Xiaowan hydropower station, a planar rectangular coordinate system is first constructed with the Xiaowan hydropower station as the center, longitude as the X-axis and latitude as the Y-axis. Second, according to the latitude and longitude of the Xiaowan station, the four nearest grid points are A (100°0′0″E, 24°30′0″N), B (100°30′0″E, 24°30′0″N), C (100°0′0″E, 25°0′0″N) and D (100°30′0″E, 25°0′0″N). Finally, IDW is used to calculate the meteorological data of the Xiaowan hydropower station from these four nearest points. The meteorological data calculation formula is:
(2) $z_0 = \dfrac{\sum_{i=1}^{k} z_i / d_i^{\,p}}{\sum_{i=1}^{k} 1 / d_i^{\,p}}$

where $z_0$ is the meteorological value at the unknown point; $z_i$ is the meteorological value at known point $i$; $d_i$ is the distance from known point $i$ to the unknown point; and $p$ is the weight (power) index. When p = 0, Equation (2) reduces to the arithmetic average method; when p = 1, it is the simple inverse distance method; and when p = 2, it is the widely used inverse distance squared method.
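A minimal Python sketch of Equation (2) is given below; the grid coordinates and the total-column-water values in the usage example are purely illustrative, not values from the study.

```python
import numpy as np

def idw_interpolate(known_xy, known_values, target_xy, p=2):
    """Inverse distance weighting (Equation (2)): estimate the value at an
    unknown point from nearby grid points; p=2 gives the widely used
    inverse distance squared method."""
    known_xy = np.asarray(known_xy, dtype=float)          # shape (k, 2)
    known_values = np.asarray(known_values, dtype=float)  # shape (k,)
    d = np.linalg.norm(known_xy - np.asarray(target_xy, dtype=float), axis=1)
    if np.any(d == 0):                                    # target coincides with a grid point
        return known_values[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * known_values) / np.sum(w)

# Hypothetical example: four surrounding 0.5-degree grid points around Xiaowan
# and an illustrative total-column-water value at each point for one month.
grid = [(100.0, 24.5), (100.5, 24.5), (100.0, 25.0), (100.5, 25.0)]
tcw = [31.2, 30.8, 29.5, 29.9]
print(idw_interpolate(grid, tcw, target_xy=(100.1, 24.7)))
```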

Feature selection via the MIC

The MIC (Reshef et al. 2011) is a relatively new measure for associations between two variables that applies mutual information (MI) to continuously distributed random variables (Lu et al. 2021). The calculation of the MIC is based on concepts of MI (Kinney & Atwal 2014), and the process is described as follows:

Given two variables X, such as observed inflow, and Y, such as meteorological data, the MIC between them is computed as follows:

Step 1: Calculate the MI of X and Y as:

(3) $I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2 \dfrac{p(x, y)}{p(x)\, p(y)}$

where $p(x, y)$ is the joint probability density of variables X and Y, and $p(x)$ and $p(y)$ are the marginal probability densities of variables X and Y, respectively.
Step 2: Consider a given dataset D, including variables X and Y with a sample size n. The scatter plot of X and Y is partitioned by an $a \times b$ grid G, and $I(D|G)$ denotes the MI of D under grid G. The normalized maximum MI can be expressed as:

(4) $M(D)_{a,b} = \dfrac{\max I(D|G)}{\log_2 \min\{a, b\}}$

where $\max I(D|G)$ is the maximum MI of dataset D over all grids G of size $a \times b$.
Step 3: The MIC is introduced as the maximum value of the characteristic matrix, and the specific calculation formula is as follows:

(5) $\mathrm{MIC}(X, Y) = \max_{a b < B(n)} M(D)_{a,b}$

where $B(n)$ is the upper bound of the grid size, a function of the sample size that is usually taken as $B(n) = n^{0.6}$. A higher value of MIC(X, Y) indicates a stronger correlation between X and Y.

We perform feature selection from the meteorological data in two steps via the MIC. First, we compute the MIC between each meteorological variable and the observed inflow. Then, we sort the features by MIC in descending order and determine the optimum inputs using a trial-and-error procedure.
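A sketch of this two-step screening is given below, assuming the third-party minepy package as the MIC implementation (the parameter values follow the defaults suggested by Reshef et al. (2011)); the function and variable names are ours.

```python
import numpy as np
from minepy import MINE  # third-party MIC implementation (assumed available)

def rank_features_by_mic(features, inflow):
    """Compute the MIC between each candidate meteorological series and the
    observed inflow, then return the features sorted by MIC in descending order."""
    mine = MINE(alpha=0.6, c=15)
    scores = {}
    for name, series in features.items():
        mine.compute_score(np.asarray(series, dtype=float),
                           np.asarray(inflow, dtype=float))
        scores[name] = mine.mic()
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The ranked list is then screened by a threshold (MIC > 0.4 in this study),
# and the final input set is fixed by trial and error on the validation set.
```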

Long Short-Term Memory

LSTM is a widely used RNN architecture in the field of deep learning that can avoid long-term dependency problems by changing the states of its gates. LSTM consists of an input layer, a hidden layer and an output layer. Each hidden layer contains the state variables $h_t$ and $C_t$, which save the short-term state and long-term state, respectively. The method structure of LSTM is shown in Figure 3. LSTM controls $C_t$ via an input gate and a forget gate: the forget gate determines how much of the previous cell state needs to be forgotten, the input gate decides what new information is going to be remembered by adding it to the cell state (Xiang et al. 2020), and the output gate learns when to let the activation signal out of the cell (Chen et al. 2021). In the forget gate, only the information that passes the gate's screening is retained, while nonconforming information is forgotten; thus, the problem of long-term dependence is effectively mitigated. The calculation method is as follows:
  • (1) The forget gate
    (6) $f_t = \sigma\!\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)$
    where $f_t$ is the forget gate parameter and $\sigma$ is the sigmoid activation function, which controls the filtering percentage of the gate: when the gate is 0, it closes completely, and when it is 1, it opens completely. $h_{t-1}$ is the previous output, $x_t$ is the current input, $W_f$ is the weight of the forget gate, and $b_f$ is the bias of the forget gate.
  • (2) The input gate
    (7) $i_t = \sigma\!\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)$
    where $i_t$ is the input gate parameter, $W_i$ is the weight of the input gate, and $b_i$ is the bias of the input gate.
  • (3) The output gate
    (8) $o_t = \sigma\!\left(W_o \cdot [h_{t-1}, x_t] + b_o\right)$
    where $o_t$ is the output gate parameter, $W_o$ is the weight of the output gate, and $b_o$ is the bias of the output gate.
  • (4) The long-term state of the current input
    (9) $\tilde{C}_t = \tanh\!\left(W_C \cdot [h_{t-1}, x_t] + b_C\right)$
    where $\tilde{C}_t$ is the long-term state of the current input, $W_C$ is the corresponding weight, $\tanh$ is the input activation function, and $b_C$ is the corresponding bias.
  • (5) The long-term state of the current moment
    (10) $C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$
    where $C_t$ is the long-term state of the current moment.
  • (6) The final output result
    (11) $h_t = o_t \odot \tanh(C_t)$
    where $\tanh$ is the output activation function.
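For readers who prefer code to equations, the following numpy sketch performs one LSTM time step consistent with Equations (6)–(11); the dictionaries W, U and b, holding the input weights, recurrent weights and biases of the four transformations, are our notation rather than the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step following Equations (6)-(11); keys 'f', 'i', 'o', 'g'
    index the forget, input, output and candidate transformations."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])   # forget gate, Eq. (6)
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])   # input gate, Eq. (7)
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])   # output gate, Eq. (8)
    g_t = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])   # candidate state, Eq. (9)
    c_t = f_t * c_prev + i_t * g_t                           # long-term state, Eq. (10)
    h_t = o_t * np.tanh(c_t)                                 # output, Eq. (11)
    return h_t, c_t
```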
Figure 3: Method structure of LSTM.

Evaluation criteria of the runoff prediction method

To test the performance of prediction methods, the root-mean-squared error (RMSE), the Pearson correlation coefficient (CORR), the mean absolute error (MAE), the Nash–Sutcliffe efficiency (NSE), the Kling–Gupta efficiency scores (KGE) and the index of agreement (IA) are used to evaluate the performance on the basis of the forecasting and fitted values of the method compared with observed data.

The RMSE is the square root of the mean of the squared deviations of the forecasted values from the observed values. The MAE is the mean of the absolute errors and reflects the actual magnitude of the forecast error. They are calculated using Equations (12) and (13), respectively:
(12) $\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(\hat{Q}_i - Q_i\right)^2}$

(13) $\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}\left|\hat{Q}_i - Q_i\right|$

where $\hat{Q}_i$ and $Q_i$ are the inflow estimation and the observed value at time i, respectively, and n is the number of samples.
The CORR is a statistical index of linear correlation between two variables. The NSE (Nash & Sutcliffe 1970) is used as an index for the coincidence degree between the predicted value and the observed data, and the closer the value is to 1, the better the forecasted results are (Chadalawada & Babovic 2019; Wang et al. 2023b). They are calculated using Equations (14) and (15), respectively:
(14) $\mathrm{CORR} = \dfrac{\sum_{i=1}^{n}\left(Q_i - \bar{Q}\right)\left(\hat{Q}_i - \bar{\hat{Q}}\right)}{\sqrt{\sum_{i=1}^{n}\left(Q_i - \bar{Q}\right)^2}\,\sqrt{\sum_{i=1}^{n}\left(\hat{Q}_i - \bar{\hat{Q}}\right)^2}}$

(15) $\mathrm{NSE} = 1 - \dfrac{\sum_{i=1}^{n}\left(Q_i - \hat{Q}_i\right)^2}{\sum_{i=1}^{n}\left(Q_i - \bar{Q}\right)^2}$

where $\bar{Q}$ and $\bar{\hat{Q}}$ are the mean values of the observed and predicted inflow, respectively. The CORR ranges from −1.0 to 1.0, and values close to 1.0 demonstrate a perfect predicted result.
The KGE (Knoben 2019) is also a widely used evaluation index; a value of 1 indicates perfect agreement between estimations and observations. It is computed following Equations (16)–(18):

(16) $\mathrm{KGE} = 1 - \sqrt{\left(\mathrm{CORR} - 1\right)^2 + \left(\alpha - 1\right)^2 + \left(\beta - 1\right)^2}$

(17) $\alpha = \dfrac{\sigma_{\hat{Q}}}{\sigma_{Q}}$

(18) $\beta = \dfrac{\bar{\hat{Q}}}{\bar{Q}}$

where $\sigma_{Q}$ is the standard deviation of the observed values and $\sigma_{\hat{Q}}$ is the standard deviation of the inflow estimation.
The IA (Willmott 1981) ranges from 0.0 to 1.0 and can be interpreted as similar to the coefficient of determination, with a higher value indicating better agreement between the predicted value and observed data. The calculation formula is as follows:
(19) $\mathrm{IA} = 1 - \dfrac{\sum_{i=1}^{n}\left(Q_i - \hat{Q}_i\right)^2}{\sum_{i=1}^{n}\left(\left|\hat{Q}_i - \bar{Q}\right| + \left|Q_i - \bar{Q}\right|\right)^2}$
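All six criteria can be computed directly from the forecasted and observed series; the short sketch below follows Equations (12)–(19), with function and variable names of our own choosing.

```python
import numpy as np

def evaluate(obs, sim):
    """Compute RMSE, MAE, CORR, NSE, KGE and IA per Equations (12)-(19)."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))                               # Eq. (12)
    mae = np.mean(np.abs(sim - obs))                                        # Eq. (13)
    corr = np.corrcoef(obs, sim)[0, 1]                                      # Eq. (14)
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)    # Eq. (15)
    alpha = sim.std() / obs.std()                                           # Eq. (17)
    beta = sim.mean() / obs.mean()                                          # Eq. (18)
    kge = 1 - np.sqrt((corr - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) # Eq. (16)
    ia = 1 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)         # Eq. (19)
    return dict(RMSE=rmse, MAE=mae, CORR=corr, NSE=nse, KGE=kge, IA=ia)
```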

Overview of framework

The overall structure of the framework for monthly runoff prediction is given in Figure 4. The structure consists of two major methods: LSTM and M-LSTM.
Figure 4: The overall structure of the framework.

The LSTM method:

Step 1: Lag selection of observed inflow

We measure the relevance of the different lags in observed inflow using the partial autocorrelation function (PACF) and further select appropriate lags as predictors for the method using hypothesis testing and trial-and-error procedures.
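A minimal sketch of this lag-screening step is shown below, assuming the statsmodels package; the significance bound is the usual approximate 95% band, and the final lag set is still confirmed by the trial-and-error procedure described above.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

def significant_lags(inflow, max_lag=12, z=1.96):
    """Return the lags whose partial autocorrelation falls outside the
    approximate 95% confidence band +/- 1.96 / sqrt(n)."""
    inflow = np.asarray(inflow, dtype=float)
    coeffs = pacf(inflow, nlags=max_lag)   # coeffs[0] corresponds to lag 0
    bound = z / np.sqrt(len(inflow))
    return [lag for lag in range(1, max_lag + 1) if abs(coeffs[lag]) > bound]
```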

Step 2: Normalizing the data

The data are normalized to improve the speed of the method and accuracy of prediction, and the dataset is transformed into supervised learning (in Section 2.3). Furthermore, the dataset is divided into a training set, validation set, and testing set according to the length of each dataset specified in advance (in Section 2.2).

Step 3: Obtaining the forecasted inflow of the LSTM method

For the LSTM, a grid search algorithm is used to guide the optimization of the method parameters by evaluating the validation set via the MAE, and then the prediction results are evaluated on the testing set.

The method structure of LSTM is as follows:
(20) $\hat{Q}_t = f_{\mathrm{LSTM}}\!\left(Q_{t-1}, Q_{t-2}, \ldots, Q_{t-p};\, \theta_t\right)$

where $\hat{Q}_t$ is the forecasted value of LSTM at the current time t, $\theta_t$ denotes the hyperparameters of LSTM at the current time t, $Q_{t-p}$ is the inflow of the reservoir at time t − p, and p is the lag of observed inflow determined via the PACF.

M-LSTM uses the screened meteorological dataset by the MIC as input and adopts LSTM to forecast monthly inflow.

The M-LSTM method:

The M-LSTM fully incorporates the advantages of LSTM and meteorological data to forecast inflow. Specifically, effective features are identified via the MIC from the large number of meteorological features obtained by IDW. Similarly, the MAE on the training and validation sets is used as the fitness of the M-LSTM, which determines the ideal hyperparameters and realizes the optimal search.

The method structure of the M-LSTM is as follows:
(21) $\hat{Q}_t = f_{\mathrm{M\text{-}LSTM}}\!\left(Q_{t-1}, \ldots, Q_{t-p}, M_1, \ldots, M_q;\, \theta_t\right)$

where $\hat{Q}_t$ is the forecasted value of the M-LSTM at the current time t, $\theta_t$ denotes the hyperparameters of the M-LSTM at the current time t, $M_1, \ldots, M_q$ are the features from the meteorological data, and q is the number of features from the meteorological data determined via the MIC.

To compare the performance of the M-LSTM, the meteorological data backpropagation neural network (M-BPNN) and meteorological data support vector regression (M-SVR), obtained by replacing LSTM in the framework with BPNN and SVR, respectively, are also employed for monthly inflow forecasting. As mentioned previously, six indices, RMSE, CORR, MAE, NSE, KGE and IA, are used to evaluate the performance of these methods (Liao et al. 2020). In addition, the feature importance based on the M-LSTM is explored. All computations carried out in this paper were performed on a personal computer with a 1.70 GHz processor and 16 GB RAM.
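To make the network structure in Equation (21) concrete, a Keras/TensorFlow sketch of an M-LSTM-style model is given below. The single LSTM layer, layer width and Adam optimizer are illustrative choices of ours; the study reports three hidden layers, tanh/sigmoid activations, a batch size of 1 and 50 epochs selected by grid search.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_m_lstm(n_features, units=32):
    """Sketch of an M-LSTM-style network whose input at each step stacks the
    nine inflow lags and the nine screened meteorological features (Table 4)."""
    model = Sequential([
        LSTM(units, activation='tanh', recurrent_activation='sigmoid',
             input_shape=(1, n_features)),   # tanh/sigmoid activations as in the study
        Dense(1)                             # forecasted monthly inflow (scaled)
    ])
    model.compile(optimizer='adam', loss='mae')  # MAE guides model selection
    return model

# Usage (shapes only): X has shape (samples, 1, 18) and y has shape (samples,).
# model = build_m_lstm(n_features=18)
# model.fit(X_train, y_train, epochs=50, batch_size=1,
#           validation_data=(X_val, y_val), verbose=0)
```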

Feature selection

For time series forecasting, the PACF is used to diagnose the order of the autoregressive process and determine the input vector of the method (Cheng et al. 2015). Figure 5 shows the PACF and the corresponding 95% confidence interval from Lag 1 to Lag 12. The PACF shows significant autocorrelation from Lag 1 to Lag 8 and at Lag 11 and Lag 12, with Lag 11 exhibiting a stronger correlation than Lag 12. As a result, inflow lags of 1–8 months and 11 months are selected as the inputs of the method based on the PACF in this paper. Furthermore, a trial-and-error procedure is used to determine the optimal selection of hyperparameters (Chicco 2017), and the trial results are shown in Table 2. The results indicate that the best performance is obtained when epoch = 50.
Table 2: The MAE under different epochs in LSTM

Method | Epoch 5 | Epoch 10 | Epoch 20 | Epoch 50 | Epoch 100 | Epoch 200
LSTM | 262.56 | 223.69 | 219.12 | 210.18 | 213.64 | 212.46

Note: The bold numbers represent the values of performance criterion for the best fitted methods.

Figure 5: The runoff PACF coefficient of Xiaowan.
Accordingly, epoch = 50 is selected as the initial input condition of the M-LSTM. The MIC was then used to select meteorological data from the initial set of candidates. In this paper, the 10u component has a greater impact on the inflow than the 10v component owing to significant differences in geopotential at the Xiaowan station, and variables with a MIC greater than 0.4 were selected as candidate inputs (Figure 6). Therefore, a total of 10 input structures are tested, as shown in Table 3. Based on the above results, the 10 input structures are each tested 40 times, and the trial results are shown in Figure 7. The results indicate that the 9th input structure shows the best performance; thus, the ninth input combination in Table 3 is selected as the method input.
Table 3: The candidate inputs selected from the meteorological data by the MIC

Number | Input
1 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw
2 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt
3 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc
4 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt
5 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt, skt
6 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt, skt, st
7 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt, skt, st, sm
8 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt, skt, st, sm, 10u
9 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt, skt, st, sm, 10u, msl
10 | Qt-1, Qt-2, Qt-3, Qt-4, Qt-5, Qt-6, Qt-7, Qt-8, Qt-11, tcw, 2mdt, tcc, 2mt, skt, st, sm, 10u, msl, sp
Figure 6: The MIC of meteorological data and observed runoff data.
Figure 7: The experimental results of 10 input combinations of meteorological data. (Note: the bold numbers represent the values of the performance criterion for the best fitted methods.)

Finally, a total of 18 variables, including nine observed variables and nine meteorological variables, are selected as the method inputs (Table 4). As shown in Table 4, Nos. 10–18 are meteorological variables, and the range of the MIC of the meteorological variables selected is 0.92 to 0.48. Moreover, No. 10 and No. 11 are variables related to the water content of the atmosphere. The total column water (No. 10) is the sum of water vapor, liquid water, cloud ice, rain and snow in a column extending from the surface of the Earth to the top of the atmosphere. The 2-meter dewpoint temperature (No. 11) is a measure of the humidity of the air; in general, it can be used to calculate the relative humidity combined with temperature and pressure. Nos. 12 and 16 are variables related to rainfall. The total cloud cover (No. 12) is the fraction of the sky covered by all visible clouds. Cloud cover can refer to a genus, species, variety, layer, or a certain combination of clouds. The soil moisture in the top 20 cm (No. 16) is the volumetric soil moisture in the top 20 cm of the soil layer. Nos. 13–15 are variables related to temperature. The 2-meter temperature (No. 13) is the temperature of air at 2 m above the surface of land, sea or inland waters, which is calculated by interpolating between the lowest model level and the Earth's surface. The skin temperature (No. 14) is the temperature of the surface of the Earth, which represents the temperature of the uppermost surface layer. The soil temperature in the top 20 cm (No. 15) is the average soil temperature in the top 20 cm of the soil layer. In summary, all selected predictors are interpretable and have a good physical connection with inflow.

Table 4: List of M-LSTM input factors

No. | Description | Index | Unit | MIC
1 | Inflow at month t-1 | Qt-1 | m3/s | –
2 | Inflow at month t-2 | Qt-2 | m3/s | –
3 | Inflow at month t-3 | Qt-3 | m3/s | –
4 | Inflow at month t-4 | Qt-4 | m3/s | –
5 | Inflow at month t-5 | Qt-5 | m3/s | –
6 | Inflow at month t-6 | Qt-6 | m3/s | –
7 | Inflow at month t-7 | Qt-7 | m3/s | –
8 | Inflow at month t-8 | Qt-8 | m3/s | –
9 | Inflow at month t-11 | Qt-11 | m3/s | –
10 | Total column water | tcw | kg/m2 | 0.92
11 | 2-m dewpoint temperature | 2mdt | K | 0.91
12 | Total cloud cover | tcc | – | 0.83
13 | 2-m temperature | 2mt | K | 0.65
14 | Skin temperature | skt | K | 0.65
15 | Soil temperature top 20 cm | st | K | 0.62
16 | Soil moisture top 20 cm | sm | kg/m3 | 0.61
17 | 10-m U-wind component | 10u | m/s | 0.56
18 | Mean sea level pressure | msl | Pa | 0.48

Hyperparameter optimization

Every machine learning system has hyperparameters, which are parameters set before training that cannot be directly learned from the routine training process. It is imperative to tune the hyperparameters to improve the performance of the machine learning method. The grid search method is used to optimize the hyperparameters of LSTM, the M-LSTM, the M-BPNN and the M-SVR. Tables 5 and 6 show the results for various hyperparameter values of the different methods.

Table 5: Tuning parameters for M-LSTM and M-BPNN (MAE under different epochs)

Method | Epoch 5 | Epoch 10 | Epoch 20 | Epoch 50 | Epoch 100 | Epoch 200
M-LSTM | 218.14 | 210.68 | 196.79 | 185.72 | 192.44 | 199.31
M-BPNN | 367.07 | 277.12 | 289.75 | 270.71 | 267.74 | 265.22

Note: The bold numbers represent the values of performance criterion for the best fitted methods.

Table 6: Tuning parameters for M-SVR

Method | No. | C | RBF parameter | RBF parameter | MAE (m3/s)
M-SVR | 1 | 5,000 | 0.2 | 0.2 | 659.04
 | 2 | 3,000 | 0.2 | 0.3 | 645.40
 | 3 | 1,000 | 0.2 | 0.35 | 650.05
 | 4 | 100 | 0.2 | 0.25 | 624.88
 | 5 | 10 | 0.15 | 0.2 | 543.94
 | 6 | 30 | 0.05 | 0.1 | 498.71

Note: The bold numbers represent the values of performance criterion for the best fitted methods.

The M-LSTM is trained as a neural network with the hidden layers fixed at three layers. The activation functions are tanh and sigmoid, and the batch size is set to 1. To select the optimal parameters, 40 LSTMs are trained for each parameter combination to alleviate the influence of the random initialization of weights. The optimal epoch is determined by selecting the minimal MAE on the validation set. Table 5 shows the result of the hyperparameter optimization. It is clear that the optimum result is obtained when epoch = 50 for the M-LSTM. Hyperparameter optimization of the M-BPNN is similar to that of the M-LSTM, and its optimum result is obtained when epoch = 200.

For the M-SVR, the radial basis function (RBF) is selected as the kernel function for inflow simulation according to Lin et al. (2006), in which the RBF outperformed other kernel functions for runoff modeling; therefore, the RBF kernel is used in the comparison method in this study. Three parameters need to be tuned. First, an appropriate parameter search range is determined by a trial-and-error procedure. Then, the MAE is used to optimize these parameters with a grid search algorithm to obtain their optimal values. The optimal tuning parameters of the SVR are shown in Table 6. The results show that the optimum combination is C = 30 with the two remaining RBF parameters equal to 0.05 and 0.1 (the last row of Table 6).
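A sketch of this grid search, assuming scikit-learn, is given below; the candidate grids and the synthetic stand-in data are illustrative only, and the PredefinedSplit simply reproduces the fixed training/validation split used for model selection.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, PredefinedSplit

# Candidate values are illustrative, loosely mirroring the ranges in Table 6.
param_grid = {'C': [10, 30, 100, 1000],
              'gamma': [0.05, 0.1, 0.15, 0.2],
              'epsilon': [0.1, 0.2, 0.3]}

# Synthetic stand-ins for the scaled training (96 months) and validation (36 months) samples.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((96, 18)), rng.random(96)
X_val, y_val = rng.random((36, 18)), rng.random(36)

# PredefinedSplit keeps the fixed split instead of random cross-validation:
# -1 marks training rows, 0 marks validation rows.
test_fold = np.r_[np.full(len(X_train), -1), np.zeros(len(X_val))]
search = GridSearchCV(SVR(kernel='rbf'), param_grid,
                      scoring='neg_mean_absolute_error',
                      cv=PredefinedSplit(test_fold))
search.fit(np.vstack([X_train, X_val]), np.concatenate([y_train, y_val]))
print(search.best_params_)   # parameters minimizing the validation MAE
```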

Input comparison

To avoid local minimum problems, the M-LSTM and LSTM are trained 40 times, and the average of the 40 runs is taken as the final forecast. Figure 8(a) provides the monthly inflow forecasted results of LSTM and the M-LSTM. It can be seen that the M-LSTM performs noticeably better than LSTM. To further analyze the prediction performance of the M-LSTM, it was compared with LSTM for monthly runoff prediction. Figure 8(b) shows the forecasted inflow of LSTM and the M-LSTM versus the observed inflow in the testing set (January 2018–December 2020). The slope and R2 of the fitting curve are 0.94 and 0.90 for the M-LSTM and 0.90 and 0.88 for LSTM, respectively, which demonstrates that the forecasts based on the M-LSTM deviate less from and correlate more strongly with the observations. In other words, the forecasted inflow of the M-LSTM is closer to the observed inflow. In conclusion, this reveals that the monthly runoff prediction performance of LSTM can be elevated by adding the meteorological data selected by the MIC.
Figure 8: Inflow forecasts of LSTM and M-LSTM for the testing set: (a) comparison of the observed and forecasted inflow and (b) observed versus forecasted inflow.

The M-LSTM, M-SVR and M-BPNN with the optimal hyperparameters are employed for monthly inflow forecasting. The summarized results for the six indices on the testing set are presented in Table 7. The M-LSTM clearly outperforms the M-BPNN and M-SVR. The RMSE and MAE of the M-LSTM are reduced by 7.15 and 3.19%, respectively, compared with M-SVR, and by 30.58 and 30.41% compared with M-BPNN. The CORR, NSE, KGE and IA of the M-LSTM increase by 0.68, 2.03, 0.98 and 0.38% compared with M-SVR and by 1.43, 15.41, 50.56 and 6.78% compared with M-BPNN, respectively. In general, the M-LSTM has higher prediction accuracy than the M-SVR and M-BPNN methods.

Table 7: Comparison of performance indices of different methods

Method | M-LSTM | M-BPNN | M-SVR
RMSE (m3/s) | 247.89 | 357.06 | 266.97
CORR (%) | 94.85 | 93.50 | 94.21
MAE (m3/s) | 186.29 | 267.71 | 192.43
NSE (%) | 88.95 | 77.07 | 87.18
KGE (%) | 91.95 | 61.07 | 91.06
IA (%) | 97.14 | 90.98 | 96.78

Note: The bold numbers represent the values of performance criterion for the best fitted methods.

To compare these methods more intuitively, the radar chart in Figure 9 is utilized to indicate the degree of overall spread in the observed and forecasted runoff series. In summary, the M-LSTM is the most effective method in terms of accurately forecasting monthly runoff for the testing set.
Figure 9: Comparison of performance indices of M-LSTM, M-BPNN, and M-SVR for the testing set.

The M-LSTM is employed to make monthly inflow forecasts, and the M-BPNN and M-SVR are used for comparison with the M-LSTM in this study. The meteorological data were obtained by IDW and further screened by the MIC. These methods are compared using six evaluation criteria: RMSE, CORR, MAE, NSE, KGE and IA. We find that the M-LSTM outperforms the M-BPNN and M-SVR in monthly inflow forecasting. A comparison of the forecasted results of LSTM and the M-LSTM shows that the M-LSTM can be used for more accurate and reliable inflow forecasting and that the meteorological data selected by the MIC greatly improve LSTM forecasting. Moreover, the feature importance obtained by the M-LSTM demonstrates that the total amount of water vapor in a column, the dewpoint temperature near the ground and the fraction of the sky covered by all visible clouds contribute to increasing the prediction accuracy of inflow. In summary, the research results are of great significance for guiding hydropower stations to formulate reservoir management schedules, improve water resource planning and reduce water abandonment. Another possibility to improve the results may be the consideration of a parallel algorithm to optimize the method parameters, which could search for optimal parameters more quickly.

This research is supported by the National Natural Science Foundation of China (No. 52379004 and No. 51979023). We are grateful for meteorological data provided by European Centre for Medium-Range Weather Forecasts.

Data cannot be made publicly available; readers should contact the corresponding author for details.

The authors declare there is no conflict.

Abbs D. J. 1999 A numerical modeling study to investigate the assumptions used in the calculation of probable maximum precipitation. Water Resources Research 35 (3), 785–796. https://doi.org/10.1029/1998WR900013.
Bahramian K., Nathan R., Western A. W. & Ryu D. 2023 Probabilistic conditioning and recalibration of an event-based flood forecasting model using real-time streamflow observations. Journal of Hydrologic Engineering 28 (4), 04023003. https://doi.org/10.1061/(ASCE)HE.1943-5584.00022.
Bennett J. C., Wang Q. J., Li M., Robertson D. E. & Schepen A. 2016 Reliable long-range ensemble streamflow forecasts: Combining calibrated climate forecasts with a conceptual runoff model and a staged error model. Water Resources Research 52 (10), 8238–8259. https://doi.org/10.1002/2016WR019193.
Cai H., Liu S., Shi H., Zhou Z., Jiang S. & Babovic V. 2022 Toward improved lumped groundwater level predictions at catchment scale: Mutual integration of water balance mechanism and deep learning method. Journal of Hydrology 613, 128495. https://doi.org/10.1016/j.jhydrol.2022.128495.
Chadalawada J. & Babovic V. 2019 Review and comparison of performance indices for automatic model induction. Journal of Hydroinformatics 21 (1), 13–31. https://doi.org/10.2166/hydro.2017.078.
Chadalawada J., Herath H. M. V. V. & Babovic V. 2020 Hydrologically informed machine learning for rainfall-runoff modeling: A genetic programming-based toolkit for automatic model induction. Water Resources Research 56 (4), e2019WR026933. https://doi.org/10.1029/2019WR026933.
Chau K. W., Wu C. L. & Li Y. S. 2005 Comparison of several flood forecasting models in Yangtze River. Journal of Hydrologic Engineering 10 (6), 485–491. https://doi.org/10.1061/(ASCE)1084-0699(2005)10:6(485).
Chen Y., Cui S., Chen P., Yuan Q., Kang P. & Zhu L. 2021 An LSTM-based neural network method of particulate pollution forecast in China. Environmental Research Letters 16 (4), 44006. https://doi.org/10.1088/1748-9326/abe1f5.
Cheng C., Feng Z., Niu W. & Liao S. 2015 Heuristic methods for reservoir monthly inflow forecasting: A case study of Xinfengjiang Reservoir in Pearl River, China. Water 7 (12), 4477–4495. https://doi.org/10.3390/w7084477.
Chicco D. 2017 Ten quick tips for machine learning in computational biology. BioData Mining 10 (1). https://doi.org/10.1186/s13040-017-0155-3.
Dhanya C. T. & Nagesh Kumar D. 2011 Predictive uncertainty of chaotic daily streamflow using ensemble wavelet networks approach. Water Resources Research 47 (6). https://doi.org/10.1029/2010WR010173.
Duan Q. S. S. G. 1992 Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resources Research 28 (4), 1015–1031. https://doi.org/10.1029/91WR02985.
Gauch M., Kratzert F., Klotz D., Nearing G., Lin J. & Hochreiter S. 2021 Rainfall-runoff prediction at multiple timescales with a single long short-term memory network. Hydrology and Earth System Sciences 25 (4), 2045–2062. https://doi.org/10.5194/hess-25-2045-2021.
Greff K., Srivastava R. K., Koutnik J., Steunebrink B. R. & Schmidhuber J. 2017 LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems 28 (10), 2222–2232. https://doi.org/10.1109/TNNLS.2016.2582924.
Herath H. M. V. V., Chadalawada J. & Babovic V. 2021 Genetic programming for hydrological applications: To model or to forecast that is the question. Journal of Hydroinformatics 23 (4), 740–763. https://doi.org/10.2166/hydro.2021.179.
Huang X., Xu B., Zhong P., Yao H., Yue H., Zhu F., Lu Q., Sun Y., Mo R., Li Z. & Liu W. 2022 Robust multiobjective reservoir operation and risk decision-making model for real-time flood control coping with forecast uncertainty. Journal of Hydrology 605, 127334. https://doi.org/10.1016/j.jhydrol.2021.127334.
Humphrey G. B., Gibbs M. S., Dandy G. C. & Maier H. R. 2016 A hybrid approach to monthly streamflow forecasting: Integrating hydrological model outputs into a Bayesian artificial neural network. Journal of Hydrology 540, 623–640. https://doi.org/10.1016/j.jhydrol.2016.06.026.
Jiang Z., Li R., Li A. & Ji C. 2018 Runoff forecast uncertainty considered load adjustment model of cascade hydropower stations and its application. Energy 158, 693–708. https://doi.org/10.1016/j.energy.2018.06.083.
Jiang S., Zheng Y., Wang C. & Babovic V. 2022 Uncovering flooding mechanisms across the contiguous United States through interpretive deep learning on representative catchments. Water Resources Research 58 (1), e2021WR030185. https://doi.org/10.1029/2021WR030185.
Kinney J. B. & Atwal G. S. 2014 Equitability, mutual information, and the maximal information coefficient. Proceedings of the National Academy of Sciences – PNAS 111 (9), 3354–3359. https://doi.org/10.1073/pnas.1309933111.
Knoben W. J. M. F. 2019 Technical note: Inherent benchmark or not? Comparing Nash–Sutcliffe and Kling–Gupta efficiency scores. Hydrology and Earth System Sciences 23, 4323–4331. https://doi.org/10.5194/hess-23-4323-2019.
Liang J., Yuan X., Yuan Y., Chen Z. & Li Y. 2017 Nonlinear dynamic analysis and robust controller design for Francis hydraulic turbine regulating system with a straight-tube surge tank. Mechanical Systems and Signal Processing 85, 927–946. https://doi.org/10.1016/j.ymssp.2016.09.026.
Liao S., Liu Z., Liu B., Cheng C., Jin X. & Zhao Z. 2020 Multistep-ahead daily inflow forecasting using the ERA-Interim reanalysis data set based on gradient-boosting regression trees. Hydrology and Earth System Sciences 24 (5), 2343–2363. https://doi.org/10.5194/hess-24-2343-2020.
Lin J., Cheng C. & Chau K. 2006 Using support vector machines for long-term discharge prediction. Hydrological Sciences Journal 51 (4), 599–612.
Liu B., Wang Y., Xia J., Quan J. & Wang J. 2021 Optimal water resources operation for rivers-connected lake under uncertainty. Journal of Hydrology 595, 125863. https://doi.org/10.1016/j.jhydrol.2020.125863.
Lu P., Lin K., Xu C., Lan T., Liu Z. & He Y. 2021 An integrated framework of input determination for ensemble forecasts of monthly estuarine saltwater intrusion. Journal of Hydrology 598, 126225. https://doi.org/10.1016/j.jhydrol.2021.126225.
Maddu R., Pradhan I., Ahmadisharaf E., Singh S. K. & Shaik R. 2022 Short-range reservoir inflow forecasting using hydrological and large-scale atmospheric circulation information. Journal of Hydrology 612, 128153. https://doi.org/10.1016/j.jhydrol.2022.128153.
Nash J. E. & Sutcliffe J. V. 1970 River flow forecasting through conceptual models part I – A discussion of principles. Journal of Hydrology 10 (3), 282–290. https://doi.org/10.1016/0022-1694(70)90255-6.
Poshtmasari H. K., Sarvestani Z. T., Kamkar B., Shataei S. & Sadeghi S. 2012 Comparison of interpolation methods for estimating pH and EC in agricultural fields of Golestan province (north of Iran). International Journal of Agriculture and Crop Sciences (IJACS) 4 (4), 157–167.
Qu Z. W., Li H. T., Li Z. H. & Tao Z. 2020 Short-term traffic flow forecasting method with M-B-LSTM hybrid network. IEEE Transactions on Intelligent Transportation Systems 23 (1), 225–235. https://doi.org/10.1109/TITS.2020.3009725.
Reshef D. N., Reshef Y. A., Finucane H. K., Grossman S. R., McVean G., Turnbaugh P. J., Lander E. S., Mitzenmacher M. & Sabeti P. C. 2011 Detecting novel associations in large data sets. Science 334 (6062), 1518–1524.
Robertson D. E., Pokhrel P. & Wang Q. J. 2013 Improving statistical forecasts of seasonal streamflows using hydrological model output. Hydrology and Earth System Sciences 17 (2), 579–593. https://doi.org/10.5194/hess-17-579-2013.
Saedi A., Saghafian B., Moazami S. & Aminyavari S. 2019 Performance evaluation of sub-daily ensemble precipitation forecasts. Meteorological Applications 27 (1). https://doi.org/10.1002/met.1872.
Samantaray S., Sawan Das S., Sahoo A. & Prakash Satapathy D. 2022 Monthly runoff prediction at Baitarani river basin by support vector machine based on Salp swarm algorithm. Ain Shams Engineering Journal 13 (5), 101732. https://doi.org/10.1016/j.asej.2022.101732.
Sapna K., Thangavelu A., Mithran S. & Shanthi K. 2018 Spatial analysis of river water quality using inverse distance weighted interpolation in Noyyal Watershed in Coimbatore, Tamilnadu, India. Life Science Informatics Publications 4 (1), 150–161. https://doi.org/10.26479/2018.0401.13.
Sedighi F., Vafakhah M. & Javadi M. R. 2016 Rainfall-runoff modeling using support vector machine in snow-affected watershed. Arabian Journal for Science and Engineering 41 (10), 4065–4076. https://doi.org/10.1007/s13369-016-2095-5.
Shahid F., Zameer A. & Muneeb M. 2020 Predictions for COVID-19 with deep learning models of LSTM, GRU and Bi-LSTM. Chaos, Solitons & Fractals 140, 110212. https://doi.org/10.1016/j.chaos.2020.110212.
Sun G., Li J., Dai J., Song Z. & Lang F. 2018 Feature selection for IoT based on maximal information coefficient. Future Generation Computer Systems 89, 606–616. https://doi.org/10.1016/j.future.2018.05.060.
Tao Y., Duan Q., Ye A., Gong W., Di Z., Xiao M. & Hsu K. 2014 An evaluation of post-processed TIGGE multimodel ensemble precipitation forecast in the Huai river basin. Journal of Hydrology 519, 2890–2905. https://doi.org/10.1016/j.jhydrol.2014.04.040.
Taormina R. & Chau K. 2015 Neural network river forecasting with multi-objective fully informed particle swarm optimization. Journal of Hydroinformatics 17 (1), 99–113. https://doi.org/10.2166/hydro.2014.116.
Velázquez J. A., Anctil F., Ramos M. H. & Perrin C. 2011 Can a multi-model approach improve hydrological ensemble forecasting? A study on 29 French catchments using 16 hydrological model structures. Advances in Geosciences 29, 33–42. https://doi.org/10.5194/adgeo-29-33-2011.
Wang W., Du Y., Chau K., Xu D., Liu C. & Ma Q. 2021 An ensemble hybrid forecasting model for annual runoff based on sample entropy, secondary decomposition, and long short-term memory neural network. Water Resources Management 35 (14), 4695–4726. https://doi.org/10.1007/s11269-021-02920-5.
Wang W., Cheng Q., Chau K., Hu H., Zang H. & Xu D. 2023a An enhanced monthly runoff time series prediction using extreme learning machine optimized by salp swarm algorithm based on time varying filtering based empirical mode decomposition. Journal of Hydrology 620, 129460. https://doi.org/10.1016/j.jhydrol.2023.129460.
Wang S., Jiang Z., Tang Z., Zhang H. & Wang P. 2023b Evaluation of an inflow forecast correction method based on multi-scenarios division. Journal of Hydrology 618, 129162. https://doi.org/10.1016/j.jhydrol.2023.129162.
Willmott C. J. 1981 On the validation of models. Physical Geography 2 (2), 184–194. http://dx.doi.org/10.1080/02723646.1981.10642213.
Xiang Z., Yan J. & Demir I. 2020 A rainfall-runoff model with LSTM-based sequence-to-sequence learning. Water Resources Research 56 (1). https://doi.org/10.1029/2019WR025326.
Xu D., Hu X., Wang W., Chau K. & Zang H. 2023 An enhanced monthly runoff forecasting using least squares support vector machine based on Harris hawks optimization and secondary decomposition. Earth Science Informatics, 1–21. https://doi.org/10.1007/s12145-023-01018-3.
Yu Y., Wang P., Wang C., Qian J. & Hou J. 2017 Combined monthly inflow forecasting and multiobjective ecological reservoir operations model: Case study of the Three Gorges Reservoir. Journal of Water Resources Planning and Management 143 (8). https://doi.org/10.1061/(ASCE)WR.1943-5452.0000786.
Zhang Q., Gao T., Liu X. & Zheng Y. 2020 Public environment emotion prediction model using LSTM network. Sustainability 12 (4), 1665. https://doi.org/10.3390/su12041665.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/licenses/by-nc-nd/4.0/).