ABSTRACT
Machine learning (ML) models offer advantages over process-based models for real-time reservoir operation modelling, yet the impact of input variable selection (IVS) and data pre-processing on model performance remains underexplored. This study investigates various input variables for simulating daily reservoir outflow, using the Sirikit reservoir in Thailand as a case study. The datasets include the daily storage and inflow of the Sirikit reservoir, the outflow of the neighbouring Bhumibol reservoir, downstream discharge, and temporal factors (month and day of the week). Time series decomposition and correlation analyses were used to assess relationships in the data. We tested seven ML models: multiple linear regression, support vector machine, K-nearest neighbour, classification and regression tree, random forest, multi-layer perceptron, and recurrent neural network (RNN). The optimal input set comprised the previous day’s storage, inflow from 2 days before to 2 days after, and the month. With these inputs, all ML models simulated outflow adequately (training KGE = 0.42–1.0; testing KGE = 0.46–0.56), with the RNN showing the most potential for improvement. Input scaling significantly enhanced model performance, reducing training RMSE by 44 m³ s⁻¹ and testing RMSE by 14 m³ s⁻¹. The novelty of this study lies in its comprehensive insights into IVS and data scaling, highlighting their critical roles in enhancing the application of ML models for operational reservoir simulations.
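To make the selected input set concrete, the Python sketch below assembles the features described above (previous day’s storage, inflow from two days before to two days after, and month) from a hypothetical daily time series and fits one of the tested model types (a random forest). This is a minimal illustration, not the study’s code: the file name sirikit_daily.csv, the column names, the 70/30 chronological split, and the KGE helper are assumptions introduced here.

```python
# Minimal sketch (not the study's code): build the selected input set and fit one
# of the tested ML models. The CSV file and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def kling_gupta_efficiency(obs, sim):
    """Kling-Gupta efficiency (KGE, 2009 formulation) of simulated vs. observed flow."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = np.std(sim) / np.std(obs)   # variability ratio
    beta = np.mean(sim) / np.mean(obs)  # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily series with columns: date, storage, inflow, outflow
df = pd.read_csv("sirikit_daily.csv", parse_dates=["date"]).set_index("date")

# Input set reported as optimal: storage at t-1, inflow at t-2..t+2, and month
features = pd.DataFrame({
    "storage_lag1": df["storage"].shift(1),
    "inflow_lag2":  df["inflow"].shift(2),
    "inflow_lag1":  df["inflow"].shift(1),
    "inflow_t":     df["inflow"],
    "inflow_lead1": df["inflow"].shift(-1),
    "inflow_lead2": df["inflow"].shift(-2),
    "month":        df.index.month,
})
data = features.join(df["outflow"]).dropna()

# Simple chronological split into training and testing periods (assumed 70/30)
split = int(len(data) * 0.7)
X_train, X_test = data.iloc[:split, :-1], data.iloc[split:, :-1]
y_train, y_test = data.iloc[:split, -1], data.iloc[split:, -1]

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("KGE (testing):", kling_gupta_efficiency(y_test.values, model.predict(X_test)))
```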
HIGHLIGHTS
Machine learning (ML) models can adequately simulate the daily outflow of the Sirikit reservoir using (1) past storage, (2) past and future inflow, and (3) the month of the year.
The month of the year effectively represents the operating rule curves of the Sirikit reservoir in all selected ML algorithms.
Scaling the input data improves the accuracy of the Sirikit outflow simulations across all selected ML algorithms.
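As an illustration of the scaling highlight, the sketch below trains the same model once on raw inputs and once with the inputs rescaled inside a pipeline. The choice of MinMaxScaler and of a multi-layer perceptron as the example model are assumptions for illustration only; it reuses X_train, X_test, y_train, and y_test from the feature-construction sketch above and does not reproduce the study’s reported RMSE reductions.

```python
# Minimal sketch (illustrative only): same model with and without input scaling.
# Reuses X_train, X_test, y_train, y_test from the earlier feature-construction sketch.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def rmse(obs, sim):
    return float(np.sqrt(mean_squared_error(obs, sim)))

unscaled = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
scaled = make_pipeline(
    MinMaxScaler(),  # rescale each input variable to [0, 1] before fitting
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)

for name, estimator in [("raw inputs", unscaled), ("scaled inputs", scaled)]:
    estimator.fit(X_train, y_train)
    print(f"{name}: testing RMSE = {rmse(y_test, estimator.predict(X_test)):.1f} m3/s")
```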