An ANN-based, storage-oriented concurrent flow forecasting model was developed. River flow parameters in unsteady flow must be modeled using a formulation that learns the storage change variable and the instantaneous storage rate change. Multiple input-multiple output (MIMO) and multiple input-single output (MISO) models in three variants were used to forecast flow rates in the Tar River Basin in the United States. Gamma memory neural networks, as well as MLP and TDNN models, are used in this study. Storage variables for river flow must be considered when issuing a forecast, which is why this study includes them. The proposed model can provide real-time flow forecasting while honoring mass balance flow. The results obtained are validated using statistical criteria such as the RMS error and the coefficient of correlation; coefficient of correlation values above 0.96 indicate good results for the models. While honoring the mass balance flow, the results show flow fluctuations corresponding to explicitly and implicitly provided storage variations.

  • Use of gamma memory along with storage in river flow prediction.

  • Comparison of various ANN models to identify the best model for real-time scenarios.

  • Incorporation of the storage rate change along with flow values such as discharge and gauge height.

  • Use of mass balance flow and the continuity equation in river flow studies.

  • Practical applicability of ANN-based models in real-world situations.

Channel flooding is a complex dynamic process characterized by spatial and temporal variations in flow parameters. River flow and flooding are highly complicated processes that depend on space and time, necessitating space- and time-dependent functions to represent them. Researchers have investigated the efficacy of river flow modeling techniques that rely on artificial neural networks (ANNs). Giles et al. (1997) utilized one such model, the multilayer perceptron (MLP), a feedforward network, to predict flow variables. Many investigators have used the MLP, but being a static network it does not account for memory elements and hence cannot identify time-related variations in input sequences. Memory in an ANN can be provided in two ways, namely feedback and feedforward delays. Memory by feedback delays is provided by the self-recurrent network of Elman & Zipser (1988), while memory by feedforward delays is provided by the time-delay neural network (TDNN), as in Singh et al. (2021b).

In 2015, Choudhury and co-workers used the gamma memory neural network to track time-variant patterns in input sequences, since it has an adjustable memory parameter that assimilates both feed-forward and feedback delays. In situations where the temporal patterns of the input data set are unknown, ANNs with adaptable and updatable memory characteristics are significantly better and more efficient than static ANNs. Choudhury & Ullah (2015) used a focused ANN, the multiple gamma memory neural network (MGMNN), which can spontaneously select the best memory parameters, such as memory depth, and hence exploit the updating characteristics of time-varying river flow patterns in the input flow sequences (Singh et al. 2021a).

The flow rate and depth of an unsteady flow change continuously through time. The interconnected flow rate and storage are governed by river reach and upstream drainage parameters that depend on the geomorphologic structure of the river. The principles of continuity and mass balance flow are always important in river flow modeling. In much of the literature (Than et al. 2021; Zakaria et al. 2021), the routing-type ANN models used do not account for storage variation and hence do not comply with the continuity law. Sil & Choudhury (2016) used fractional storage to formulate upstream and downstream flow forecasting models, whereas Choudhury & Roy (2015) used flow rate and flow depth, based on learning the characteristics of actual and fractional-storage variation that comply with mass balance flow, to forecast concurrent flows in a river reach. They implicitly incorporated the storage rate change, which is contemporaneous with the flow rate change, into their study. In river flow and flooding studies, the utilization of storage variables is critical. The goal of this study is to explicitly include storage change factors as well as flow rates at a specific time interval. This study is a continuation of Choudhury & Roy's work in which storage is considered both explicitly and implicitly in the flow forecasting technique for river channels; storage, the most significant factor in river flow studies, cannot be overlooked. The instantaneous storage rate change and storage factors, as well as the flow rate and flow depth, are utilized in forecasting river flow. The multiple input-multiple output model forms (MIMO-1 and MIMO-2) that Choudhury & Roy (2015) built on static and dynamic ANNs (MLP, TDNN and GMNN) are extended here by employing them in conjunction with the storage model form.

In the paper of Than et al. (2021), the storage variable and the flow rates are interlinked and governed by the following equation:
$S_t = f(I_t, Q_t)$
(1)
where
  • $S_t$ = storage parameter, calculated explicitly at time t

  • $I_t$ = discharge at the upstream section, calculated at time t

  • $Q_t$ = flow rate/discharge obtained at the downstream section at time t

  • $f$ = function representing the river basin characteristics
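To make the storage-flow link concrete, the sketch below (not from the paper) integrates the mass balance dS/dt = I - Q over the 2-h sampling interval used later in the study; the flow numbers and variable names are illustrative.

```python
import numpy as np

# Illustrative concurrent records (cfs) at an upstream and a downstream section.
inflow = np.array([1200.0, 1350.0, 1500.0, 1480.0, 1400.0])
outflow = np.array([1100.0, 1180.0, 1300.0, 1420.0, 1450.0])

dt = 2 * 3600.0                        # 2-h sampling interval, in seconds

# Mass balance dS/dt = I(t) - Q(t); a forward difference gives the
# storage change over each interval (cubic feet).
storage_change = (inflow - outflow) * dt

# Cumulative storage relative to an assumed initial state S(0) = 0.
storage = np.concatenate([[0.0], np.cumsum(storage_change)])
print(storage_change)
print(storage)
```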

Under characteristic flow variation, the flows at the upstream and downstream stations that produce no flow after a time interval Δt can be written as:
formula
(2)
formula
(3)

Equations (2) and (3), giving the discharge at time t + Δt for the upstream and downstream bounding sections in a river system, follow the work of Choudhury & Roy (2015). They do not explicitly account for storage rate change variables when forecasting river flow. Storage, an important element for making predictions in a basin channel, must be taken into account when modeling a river system.

Similarly, for the Muskingum model in a river reach, the equations for the flow at the upstream and downstream sections are given by Choudhury (2007), Sil & Choudhury (2016) and Barbetta et al. (2017):
formula
(4)
formula
(5)

Here, c1 and c3 are Muskingum model parameters that represent river flow properties at a common section in an equivalent flow, while the upstream hydrograph evolution parameters define the initial flow conditions at the upstream and downstream stations that produce no downstream flow after a time interval Δt (Barbetta et al. 2017; Hadiyan et al. 2020; Zakaria et al. 2021).
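For orientation, the following sketch implements textbook single-reach Muskingum routing with the standard coefficient formulas; it is not the multiple-inflow variant of Choudhury (2007), and K, x, dt and the inflow series are assumed values.

```python
import numpy as np

def muskingum_route(inflow, K=4.0, x=0.2, dt=2.0):
    """Route an inflow hydrograph through a single reach.

    K: storage constant (h), x: weighting factor, dt: time step (h).
    The standard coefficients satisfy c0 + c1 + c2 = 1.
    """
    denom = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / denom
    c1 = (dt + 2.0 * K * x) / denom
    c2 = (2.0 * K * (1.0 - x) - dt) / denom

    outflow = np.empty_like(inflow)
    outflow[0] = inflow[0]             # assume an initial steady state
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

print(muskingum_route(np.array([100.0, 300.0, 680.0, 500.0, 320.0, 180.0])))
```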

Now, from Equation (1), the storage over the interval (t, t + Δt) can be written as:
formula
(6)
which is equivalent to a term defined by the section reach properties on which the upstream and downstream flows depend (Cartuyvels et al. 2021; DeVries & Principe 1992).

Under characteristic flow variation, the storage rate change splits into two complementary parts: one for the equivalent inflow and one for the downstream flow.

Equation (4) can be split into N different parts, while Equation (5) depicts no flow at all upstream gauging stations at time t + Δt, with the initial flow state given by the upstream flow shift factors; the complementary fraction at the downstream section depicts the fractional storage in the river system.

In terms of fractional storage change, Choudhury & Roy's relationship between upstream and downstream discharge in a river system can be stated as a function of the channel reach parameters (Choudhury & Roy 2015).
formula
(7)
formula
(8)

The fractional-storage changes are complementary and sum to the actual storage change. The forecasting models are the MIMO ANN models of Choudhury (2007) and Choudhury & Roy (2015), which predict upstream and downstream flows and the storage rate change. For predicting flow at an upstream station, an ANN with equal numbers of input and output nodes can be trained with the concurrent flows at all sections as inputs and, as the desired outputs, the flow at the forecasting section together with zero flow at the remaining sections. For prediction at the downstream flow section of a river channel, the same inputs can be used, with the downstream flow retained and zeros assigned elsewhere in the outputs. These prediction models are described as MIMO-1 ANNs. In addition to river flow, however, gauge height and storage rate change characteristics can be analyzed at the same time. Storage here refers to the average of the gauge heights at the inflow and outflow stations, and the storage rate change parameter is represented by the average mean depth of all gauging stations. Combining two MIMO-1 ANNs so that the network learns the actual storage variation gives the MIMO-2 ANN model of Choudhury & Roy (2015). MISO ANNs, on the other hand, forecast a single station and learn an arbitrary storage change, with the flow at the forecast section as input and its next flow value as the single output.
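A minimal sketch of how the MIMO-1 and MISO training pairs described above might be assembled from a concurrent flow record; the array layout and function names are assumptions, not the authors' code.

```python
import numpy as np

def mimo1_pairs(flows, target):
    """MIMO-1 pairs: concurrent flows at all sections at time t as inputs;
    desired outputs at the next step keep only the forecasting section and
    assign zero flow to every other section.

    flows: array of shape (n_times, n_sections).
    """
    X = flows[:-1]                         # all sections at time t
    Y = np.zeros_like(flows[1:])           # zero flow everywhere ...
    Y[:, target] = flows[1:, target]       # ... except the target section
    return X, Y

def miso_pairs(flows, target):
    """MISO pairs: the forecast section's own flow as input, its flow
    at the next time step as the single desired output."""
    return flows[:-1, [target]], flows[1:, target]

flows = np.random.rand(786, 4)             # four sections at 2-h spacing
X, Y = mimo1_pairs(flows, target=0)        # forecast the first section
```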

Three major components govern the memory element in a dynamic ANN: depth, order and resolution, denoted D, P and μ. Memory depth at an instant, as the name implies, determines how far into the past the input can be stored in the system; it refers to the size of a window into the past. Memory order is the number of delay sections with transfer function G(z). The first tap is always initialized with the current input and assigned as tap zero, so the current input I(t) has a memory of P + 1 taps. Memory depth sets the number of taps and the delay between each tap (tap delay) in the delay line input, and the length of the memory window in samples is the product of these two quantities. The order P of the memory is always the number of taps minus one. Memory resolution, on the other hand, refers to the fineness with which information is stored in the individual taps. According to Choudhury & Roy (2015), it can be expressed in terms of memory taps and interpreted as taps per unit time step; for example, the resolution is one-fourth if the data are stored at one tap per four time steps. The resolution in a TDNN is always unity and cannot be changed, so memory order and depth are always equal. In the gamma memory, by contrast, the resolution fluctuates during training and is updated over time as resolution = P/D.
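The tap bookkeeping can be illustrated with a short sketch, assuming a resolution of one tap per time step as in the TDNN; the class below is hypothetical and not tied to any particular ANN package.

```python
from collections import deque

class TapDelayLine:
    """Tapped delay line feeding a TDNN: tap 0 holds the current input
    and taps 1..P hold progressively older samples, so the memory
    order is P and the window covers P + 1 samples at resolution 1."""

    def __init__(self, order_p):
        self.taps = deque([0.0] * (order_p + 1), maxlen=order_p + 1)

    def step(self, x):
        self.taps.appendleft(x)        # shift the history by one sample
        return list(self.taps)         # [x(t), x(t-1), ..., x(t-P)]

line = TapDelayLine(order_p=2)         # P = 2, as in the TDNN rows of Table 1(b)
for value in [1.0, 2.0, 3.0, 4.0]:
    print(line.step(value))
```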

To optimize the memory depth for a given input, the gamma memory can update its memory parameter. Being recursive in nature, the gamma parameter adapts the memory depth during learning: the gamma memory chooses an appropriate memory depth/resolution ratio, which is very important in dynamic modeling with neural networks. Initially, the gamma parameter is set to one. During adaptation it decreases, searching for a deeper memory; the memory depth is the ratio of the number of taps to the gamma parameter. The gamma memory structure becomes stable when the parameter reaches 0.5.
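A sketch of the discrete gamma memory update consistent with DeVries & Principe (1992): tap zero holds the current input and each later tap leaks toward its predecessor at rate μ, so the effective depth grows as μ decreases (D ≈ P/μ) and μ = 1 recovers the plain delay line. The μ value and the impulse input are illustrative.

```python
import numpy as np

def gamma_step(taps, x_new, mu=0.5):
    """One update of a gamma memory line with taps [x_0, ..., x_P].

    Tap zero is clamped to the current input; for p >= 1,
    x_p(t) = (1 - mu) * x_p(t-1) + mu * x_{p-1}(t-1), so the
    effective memory depth is about P / mu.
    """
    prev = taps.copy()
    taps[0] = x_new
    for p in range(1, len(taps)):
        taps[p] = (1 - mu) * prev[p] + mu * prev[p - 1]
    return taps

taps = np.zeros(3)                     # P = 2, as in Table 1(b)
for x in [1.0, 0.0, 0.0, 0.0]:         # unit impulse, then silence
    print(gamma_step(taps, x, mu=0.5)) # impulse spreads and decays over taps
```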

Time-delay neural network

Here, the transfer function g(t) = z⁻¹ generates the kernel. It is the unit delay operator, which, acting on the discrete signal I(t), gives the delayed version I(t−1).

Equation (9) gives the convolution sum that processes the input I(t) from the input layer to a hidden-layer neuron with memory order P (Choudhury & Ullah 2015; Singh et al. 2021a):
$u_i(t) = \sum_{p=0}^{P} w_{i,p}\, I(t-p) + b_i$
(9)
Here the $w_{i,p}$ form the weight vector applied to the vector of P delayed copies of the input; they represent synaptic weights, while $b_i$ is the bias. The output of neuron i is a function f[u_i(t)], which may activate neuron i.
Hadiyan et al. (2020) gave the equation for calculating the output of neuron j, where j = 1, 2, …, N + 1, for a network with N + 1 input nodes and a single hidden layer of m nodes with memory order P:
$y_j(t) = F\Big(\sum_{i=1}^{m} w_{j,i}\, f[u_i(t)] + b_j\Big)$
(10)
where
$u_i(t) = \sum_{r=1}^{N+1} \sum_{p=0}^{P} w_{i,r,p}\, I_r(t-p) + b_i$
(11)

Here $u_i(t)$ is the input to hidden node i for the multiple input variables at time t, with r = 1, 2, 3, …, N + 1. f is the activation function of hidden node i, while F(·) is the activation function of output neuron j; $b_i$ is the bias and $w_{i,r,p}$ is the weight connecting the pth tap of the rth input node to the ith neuron.
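A compact sketch of the focused-TDNN forward pass in Equations (9)-(11) as reconstructed above; the tanh hidden activation, linear output activation and weight shapes are assumptions for illustration.

```python
import numpy as np

def tdnn_forward(I_hist, W_h, b_h, W_o, b_o):
    """Focused TDNN forward pass following Equations (9)-(11).

    I_hist: (N+1, P+1) array with I_hist[r, p] = I_r(t - p)
    W_h:    (m, N+1, P+1) tap weights w_{i,r,p};  b_h: (m,) biases
    W_o:    (n_out, m) output weights;            b_o: (n_out,) biases
    """
    u = np.einsum('irp,rp->i', W_h, I_hist) + b_h   # Eq. (11)
    h = np.tanh(u)                                  # f[u_i(t)], assumed tanh
    return W_o @ h + b_o                            # Eq. (10), linear F assumed

rng = np.random.default_rng(0)
N, P, m, n_out = 5, 2, 4, 6
y = tdnn_forward(rng.random((N + 1, P + 1)),
                 rng.standard_normal((m, N + 1, P + 1)), np.zeros(m),
                 rng.standard_normal((n_out, m)), np.zeros(n_out))
print(y.shape)  # (6,) -- one output per section in a MIMO setting
```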

The TDNN's use of the back-propagation technique makes it convenient for river flow investigations. However, its inability to update memory elements during training to learn the actual storage properties is a drawback, resulting in poor outcomes and sub-optimal solutions.

In the gamma memory there is a connection between a local recurrent network and the feed-forward network; local recurrent networks have feedback connections that are restricted to processing units at the local level. Arslan (2021) emphasized the use of a gamma memory of order P with a single input and multiple outputs based on a linear structure, and described the continuous-time impulse response of the pth tap as
$g_p(t) = \dfrac{\mu^{p}\, t^{\,p-1}}{(p-1)!}\, e^{-\mu t}, \quad t \ge 0$
(12)
Choudhury & Roy (2015) depicted the gamma memory as a particular case of the generalized feed-forward filter. Figure 1 depicts this structure, in which the response of the pth tap can be computed recursively from the (p − 1)th tap with one time lag, and the filter weights can be updated by feed-forward adaptation. The response of the pth tap in a focused gamma memory neural network of order P can be given by
$x_p(t) = g(t) * x_{p-1}(t), \quad p = 1, \ldots, P$
(13)
where the first tap response $x_0(t) = I(t)$ is the input.
Figure 1

(a) MIMO-1 MLP for prediction at Enfield station with storage rate change variable forecast; (b) MIMO-1 TDNN for prediction at Enfield station with storage rate change variable forecast; (c) MIMO-1 GMNN for prediction at Enfield station with storage rate change variable forecast.


The recursive relation can be obtained from the following equation:
$\dfrac{dx_p(t)}{dt} = -\mu\, x_p(t) + \mu\, x_{p-1}(t)$
(14)
with μ being always positive.
The initial conditions of x remain zero for p = 2, 3, …, P, which is the same as the convolution memory model suggested by DeVries & Principe (1992), Arslan (2021) and Singh et al. (2021b). Replacing the derivative with a first-order forward difference, the following equation is obtained:
$x_p(t+1) = (1-\mu)\, x_p(t) + \mu\, x_{p-1}(t)$
(15)
The neuron input receiving the weighted outputs from the taps in the gamma memory filter can be written as (Cartuyvels et al. 2021)
$u_i(t) = \sum_{p=0}^{P} w_{i,p}\, x_p(t)$
(16)
  • $w_{i,p}$ = weight of the connection that joins neuron i to the pth tap in the memory filter.

Figure 2(a) depicts focused gamma memory networks.

Figure 2

(a) MIMO-1 MLP for prediction at Hilliardstone Station with storage rate change variable forecast; (b) MIMO-1 TDNN for prediction at Hilliardstone Station with storage rate change variable forecast; (c) MIMO-1 GMNN for prediction at Hilliardstone Station with storage rate change variable forecast.


The tap activations can be written as
$x_{r,p}(t) = (1-\mu_r)\, x_{r,p}(t-1) + \mu_r\, x_{r,p-1}(t-1)$
(17)
where p is the tap and r is the input node, with p = 1, 2, 3, …, P and r = 1, 2, 3, …, (N + 1), and the first tap activation is given by
$x_{r,0}(t) = I_r(t)$
The mapping network fed by these activations is usually a feed-forward MLP. The output in the first hidden layer is written as
$y_i(t) = f_i\Big(\sum_{r=1}^{N+1} \sum_{p=0}^{P} w_{i,r,p}\, x_{r,p}(t)\Big)$
(18)
where $f_i$ is the transfer function of neuron i in the hidden layer and $w_{i,r,p}$ is the synaptic weight indexed by the pth tap, the rth input node and the ith neuron.

The input layers in a focused MGMNN are recursive, so the memory parameter is trained with a special form of back-propagation through time.

In estimating the weights while training the ANN, the criterion in Equation (19), as given by DeVries & Principe (1992) and Arslan (2021), is minimized locally:
$E = \dfrac{1}{2}\sum_{t}\big[d(t) - y(t)\big]^2$
(19)
where d(t) is the desired output and y(t) the corresponding network output. In the case of the MLP and TDNN, which are feed-forward networks, the mapping is instantaneous and the error gradient does not depend on time, so the network weights are updated by applying the back-propagation technique (Cartuyvels et al. 2021). The simple partial derivative is used to update the network weights during training, as given by Werbos (1990) in Equation (20):
$\Delta w_{i,j} = -\eta\, \dfrac{\partial E}{\partial w_{i,j}}$
(20)
Here $net_i(t) = \sum_{j=1}^{N_1} w_{i,j}\, x_j(t)$, where N1 is the number of nodes in the previous layer. The weight increment is the negative of the product of the learning rate η and the simple partial derivative, i.e. $\Delta w_{i,j} = -\eta\, \partial E/\partial w_{i,j}$. The rate of increment in the weight can also be computed using the ordered derivative of the error function with respect to the weight, $\partial^{+}E/\partial w_{i,j}$, which is the product of $\partial E/\partial net_i(t)$ and $x_j(t)$.

For feed-forward networks, $net_i(t)$ is a function of the current activations of nodes j only, whereas for a recurrent network such as the gamma memory, $net_i(t)$ can be given as (Than et al. 2021):
formula
(21)
The net error gradient is derived by computing instantaneous gradients and aggregating their effects over time in NeuroSolutions, which is analogous to the static delta rule. The BPTT algorithm, as expressed in Equation (22), is used by the gamma memory neural network:
$\Delta w_{i,j} = -\eta \sum_{t=1}^{T} \dfrac{\partial E}{\partial net_i(t)}\, x_j(t)$
(22)

Here T is the trajectory length; evaluating $\partial E/\partial net_i(t)$ involves f′[$net_i(t)$], the derivative of the transfer function with respect to $net_i(t)$.
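A minimal numeric sketch of the over-time gradient aggregation in Equation (22), shown for a single linear neuron; it illustrates only the summation over the trajectory length T and omits the recurrent credit assignment through the gamma taps. All names and values are illustrative.

```python
import numpy as np

def bptt_update(w, x_seq, d_seq, eta=0.01):
    """One weight update for a linear neuron y(t) = w . x(t), summing
    the instantaneous gradients of E = 0.5 * (d(t) - y(t))^2 over the
    whole trajectory t = 1..T before changing w (cf. Equation (22))."""
    grad = np.zeros_like(w)
    for x, d in zip(x_seq, d_seq):     # t = 1 .. T
        y = w @ x
        grad += -(d - y) * x           # dE/dw at time t
    return w - eta * grad              # single update per trajectory

w = np.zeros(3)
x_seq = [np.array([1.0, 0.5, 0.0]), np.array([0.0, 1.0, 0.5])]
d_seq = [1.0, 0.5]
w = bptt_update(w, x_seq, d_seq)
```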

Model application and results

The use of ANNs to predict concurrent flows in the Tar-Pamlico River in the United States has been investigated. Figure 2(c) depicts the study area; all gauging stations are shown as dots, with Enfield, Hilliardstone and Rockymount as upstream flow stations and Tarboro as the downstream flow station. Data were collected from the USGS streamflow archive (https://waterdata.usgs.gov/nc/nwis/current/?type=flow&group_key=basin_cd), from which concurrent flow records for these gauging stations from 29 July 2004 to 1 October 2004 were utilized. The study used 786 consecutive records of streamflow and gauge height at 2-h intervals. Peak flow rates of 3,400, 1,420 and 6,280 cfs at the Enfield, Hilliardstone and Rockymount stations, and 14,133 cfs at the Tarboro outflow station, were assessed using the peak flow criterion, indicating that the flood episodes were moderate to low in the assigned time period.

The model architecture is chosen through trial and error, and the network designs determined by training are detailed in Table 1(b). The first 65% of the data sets, selected randomly or sequentially, are used for training; 15% are utilized for cross-validation, while the remaining 20% are used for network performance testing. The river network with four flow measuring sections is treated as a river system with three upstream flow sections, located at Enfield, Hilliardstone and Rockymount, and Tarboro as the common downstream outflow section in forecasting the concurrent flow of the Tar basin. MIMO-1 networks are trained to learn the fractional-storage rate change using the flow rates at all four sections as inputs, with the flow at time t + Δt at the forecasting section and zero flow at the remaining sections as the intended output. The MIMO-2 ANNs are trained to learn the actual storage characteristics using the concurrent flow data sets at 2-h intervals for all sections as inputs and outputs. The MISO ANN models, which are based on learning arbitrary storage fluctuations, are trained using only the section being predicted as input and its flow rate at t + Δt as the desired output. Equation (22) is used to calculate the rate of change of error with respect to the weights from the observed and forecasted flow rates at section r at time t.

Model performances in forecasting the testing data sets with a lead time of 2 h are given in Table 1(a) for the MIMO-2 models. The greatest RMSE values estimated at the gauging sites, namely Enfield, Hilliardstone, Rockymount and Tarboro, are less than 10% of the measured mean flow rate. Forecasted flow rates for the various sections, including the explicitly calculated storage rate change, with a lead time of 2 h using models such as the MGMNN, are shown in Figures 1(c), 3(c), 4(c) and 5. The graphs clearly show that the highest deviation of the predicted peaks from the observed data sets is less than 280 cfs, which is smaller than the deviation of the peaks estimated using the software mentioned earlier. The R values obtained are mostly above 0.90, which indicates satisfactory results given that R = 1 is a perfect fit.
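A sketch of the 65/15/20 sequential split and the two criteria (RMSE and coefficient of correlation R) used to score the forecasts; the column layout and sequential selection are assumptions.

```python
import numpy as np

def split_series(data, train=0.65, cval=0.15):
    """Sequential 65/15/20 split into training, cross-validation, test."""
    n = len(data)
    i, j = int(train * n), int((train + cval) * n)
    return data[:i], data[i:j], data[j:]

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def correlation(obs, pred):
    """Coefficient of correlation R (R = 1 is a perfect fit)."""
    return float(np.corrcoef(obs, pred)[0, 1])

flows = np.random.rand(786)            # 786 records at 2-h intervals
train, cv, test = split_series(flows)
print(len(train), len(cv), len(test))  # 510 118 158
```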

Table 1

(a) MIMO-2 model form for prediction at all the gauging stations using MLP; (b) network architecture of the ANN models used in this research for predicting flows in the Tar River system, learning storage characteristics implicitly and explicitly from the data sets

Performance | Enfield | Hilliardstone | Rockymount | Tarboro | Storage rate change | Average storage
(a)
MSE | 4017.92 | 4367.16 | 47,145.18 | 155,231 | 44,829.61 | 43,854.74
NMSE | 0.0646 | 0.63503 | 0.2833 | 0.2066 | 0.07879 | 0.07535
MAE | 52.8300 | 53.8488 | 172.95 | 341.690 | 171.165 | 169.291
Min Abs Error | 0.0673 | 1.53316 | 1.5096 | 0.66911 | 1.54003 | 0.79339
Max Abs Error | 167.181 | 146.563 | 572.23 | 983.924 | 543.570 | 503.691
R | 0.98268 | 0.96226 | 0.9203 | 0.94103 | 0.9866 | 0.98802
(b)
MISO | MLP | P = 0 | 6–9–1 | 35 | 2,000 | Static back-propagation
MISO | TDNN | P = 2 | 12–6–1 | 65 | 2,000 |
MISO | MGMNN | P = 2 | 12–6–1 | 92 | 2,000 | Back-propagation through time, momentum = 0.7, learning rate = 0.01
MIMO | MLP | P = 0 | 6–9–6 | 42 | 5,000 | Static back-propagation
MIMO | TDNN | P = 2 | 12–4–6 | 98 | 10,000 |
MIMO | MGMNN | P = 2 | 12–4–6 | 98 | 10,000 | Back-propagation through time, momentum = 0.7, learning rate = 0.01

In the architecture column (a–b–c): a = input nodes; b = hidden nodes; c = output nodes.

Figure 3

(a) MIMO-1 MLP for prediction at Rockymount Station with storage rate change variable forecast; (b) MIMO-1 TDNN for prediction at Rockymount station with storage rate change variable forecast; (c) MIMO-1 MGMNN for prediction at Rockymount Station with storage rate change variable forecast.


Figure 4

(a) MIMO-1 MLP for prediction at Tarboro Station with storage rate change variable forecast; (b) MIMO-1 TDNN for prediction at Tarboro Station with storage rate change variable forecast; (c) MIMO-1 GMNN for prediction at Tarboro Station with storage rate change variable forecast.


Figure 5

MIMO-2 GMNN for prediction at all gauging stations with storage rate change variable forecast.


The network architectures employed and other essential aspects of training the model forms are presented in Table 1(b); the model architecture was selected by a trial and error approach in the research reported here. The memory order P indicates whether the data enter as the present observation or in lagged form: P = 0 denotes the current observation only, while a value of 1 means the current observation plus one lagged input. Figures 1(a), 3(a) and 4(a) depict the observed and predicted flows at individual inflow gauging stations using the MIMO-1 model form with MLP memory, including the storage rate change variable. Figures 1(b), 3(b) and 4(b) present the corresponding TDNN-memory MIMO-1 forecasts, including those for the outflow station. The corresponding MLP-memory values are presented in Tables 2(a), 3(a) and 4(a) for the inflow stations and in Table 5(a) for the outflow station, while Tables 2(b), 3(b) and 4(b) show the TDNN values for the inflow gauging stations and Table 5(b) those for the outflow gauging station.

Table 2

(a) MIMO-1 MLP model performances; (b) MIMO-1 TDNN model performances; (c) MIMO-1 MGMNN model performances for predicting at Enfield station

Performance | Enfield | Hilliardstone | Rockymount | Tarboro | Storage rate change | Average storage
(a)
MSE | 2,206.416 | 2.79 × 10−7 | 0.00012 | 0.00027 | 2,173.492 | 4,750.32
NMSE | 0.0350 | – | – | – | 0.034 | 0.0189
MAE | 32.9729 | 0.0004 | 0.00887 | 0.0137 | 33.049 | 42.527
Min Abs Error | 0.3006 | 1.233 × 10−5 | 2.66 × 10−5 | 0.0001 | 0.0574 | 0.6218
Max Abs Error | 136.8921 | 0.0008 | 0.0208 | 0.0292 | 135.143 | 199.804
R | 0.9952 | – | – | – | 0.995 | 0.994
(b)
MSE | 4,481.87 | 0.0001 | 9.21 × 10−5 | 4.65 × 10−5 | 4,074.788 | 62,873.20
NMSE | 0.0710 | – | – | – | 0.0646 | 0.2503
MAE | 59.013 | 0.0118 | 0.0090 | 0.0058 | 56.5728 | 189.57
Min Abs Error | 0.0329 | 1.621 × 10−5 | 3.74155 × 10−5 | 4.2 × 10−5 | 3.3458 | 5.8211
Max Abs Error | 167.66 | 0.0167 | 0.0136 | 0.0157 | 162.5547 | 700.67
R | 0.9784 | – | – | – | 0.9784 | 0.974
(c)
MSE | 4,907.743 | 0.0006 | 0.0001 | 9.6 × 10−5 | 3,261.432 | 26,296.15
NMSE | 0.0778 | – | – | – | 0.0517 | 0.1047
MAE | 60.7313 | 0.0248 | 0.0126 | 0.0093 | 49.286 | 104.04
Min Abs Error | 0.9153 | 0.0137 | 0.0044 | 0.0026 | 0.4009 | 0.1678
Max Abs Error | 136.1445 | 0.0330 | 0.0204 | 0.0168 | 117.5207 | 528.048
R | 0.9629 | – | – | – | 0.9829 | 0.9842
Table 3

(a) MIMO-1 MLP model performances; (b) MIMO-1 TDNN model performances; (c) MIMO-1 MGMNN model performances for predicting at Hilliardstone station

Performance | Enfield | Hilliardstone | Rockymount | Tarboro | Storage rate change | Average storage
(a)
MSE | 0.0002 | 423.6829 | 2.13 × 10−5 | 0.0001 | 480.7132 | 2027.12
NMSE | – | 0.0603 | – | – | 0.0685 | 0.0095
MAE | 0.0122 | 15.1370 | 0.00359 | 0.0094 | 15.5219 | 42.0944
Min Abs Error | 3.8 × 10−5 | 0.3035 | 1.54294 × 10−5 | 1.35 × 10−6 | 0.1037 | 5.5711
Max Abs Error | 0.0383 | 64.4334 | 0.01051 | 0.0322 | 68.1354 | 82.6840
R | – | 0.9841 | – | – | 0.9812 | 0.99
(b)
MSE | 7.1 × 10−5 | 1917.0373 | 6.43 × 10−5 | 0.0003 | 3109.76 | 23,499.64
NMSE | – | 0.2732 | – | – | 0.4433 | 0.1109
MAE | 0.0062 | 34.1655 | 0.006545688 | 0.0181 | 40.3894 | 118.1489
Min Abs Error | 9.8 × 10−5 | 0.0352 | 0.000154039 | 0.0003 | 0.3941 | 1.9747
Max Abs Error | 0.0250 | 107.6675 | 0.020841308 | 0.030 | 129.8926 | 448.3595
R | – | 0.9570 | – | – | 0.9600 | 0.9816
(c)
MSE | 4.3 × 10−5 | 2295.4190 | 0.0004 | 0.0005 | 2329.384 | 20,813.30
NMSE | – | 0.3272 | – | – | 0.3320 | 0.0982
MAE | 0.0048 | 35.7521 | 0.0193 | 0.0220 | 35.8428 | 112.26
Min Abs Error | 1.2 × 10−6 | 0.4672 | 0.0077 | 0.0010 | 0.1181 | 2.1977
Max Abs Error | 0.0151 | 125.7821 | 0.0282 | 0.0298 | 122.1344 | 391.7531
R | – | 0.9603 | – | – | 0.95818 | 0.9888
Table 4

(a) MIMO-1 MLP model performances; (b) MIMO-1 TDNN model performances; (c) MIMO-1 GMNN model performances for predicting at Rockymount station

Performance | Enfield | Hilliardstone | Rockymount | Tarboro | Storage rate change | Average storage
(a)
MSE | 7.1 × 10−5 | 0.0001 | 4672.28 | 4.3 × 10−5 | 4862.02 | 11,556.10
NMSE | – | – | 0.02796 | – | 0.02910 | 0.0400
MAE | 0.0068 | 0.0066 | 44.5391 | 0.0056 | 44.8619 | 78.133
Min Abs Error | 0.0001 | 1.15 × 10−5 | 0.21484 | 9.82 × 10−5 | 0.2759 | 0.1942
Max Abs Error | 0.0198 | 0.0265 | 232.025 | 0.01454 | 225.264 | 283.233
R | – | – | 0.98835 | – | 0.9885 | 0.9970
(b)
MSE | 0.0003 | 0.0007 | 22,048.51 | 0.0002 | 24,310.4 | 13,625.06
NMSE | – | – | 0.1319 | – | 0.1455 | 0.04724
MAE | 0.0157 | 0.0271 | 121.5309 | 0.0162 | 126.202 | 99.6741
Min Abs Error | 0.0069 | 0.0126 | 2.5753 | 0.0114 | 1.4478 | 0.8670
Max Abs Error | 0.0376 | 0.0338 | 388.0812 | 0.0264 | 418.652 | 251.1397
R | – | – | 0.9554 | – | 0.9608 | 0.9887
(c)
MSE | 8.84 × 10−5 | 0.0003 | 11,379.90 | 1.5 × 10−5 | 19,707.56 | 19,458.30
NMSE | – | – | 0.0681 | – | 0.1179 | 0.0674
MAE | 0.0088 | 0.0185 | 82.042 | 0.0027 | 104.8487 | 104.71
Min Abs Error | 0.0004 | 0.0006 | 0.1881 | 8.8 × 10−6 | 0.2549 | 0.22
Max Abs Error | 0.0172 | 0.0349 | 337.59 | 0.0110 | 367.049 | 326.006
R | – | – | 0.9694 | – | 0.97637 | 0.9911
Table 5

(a) MIMO-1 MLP model performances; (b) MIMO-1 TDNN model performances; (c) MIMO-1 MGMNN model performances for predicting at Tarboro station

Performance | Enfield | Hilliardstone | Rockymount | Tarboro | Storage rate change | Average storage
(a)
MSE | 4.3 × 10−5 | 5.10 × 10−6 | 6.20 × 10−6 | 6213.19 | 6011.07 | 3761.24
NMSE | – | – | – | 0.0072 | 0.00700 | 0.0055
MAE | 0.0053 | 0.0020 | 0.0021 | 64.1414 | 62.8454 | 53.2630
Min Abs Error | 1.6 × 10−5 | 3.38172 × 10−5 | 1.57 × 10−5 | 0.54994 | 0.03488 | 0.1343
Max Abs Error | 0.0130 | 0.0048 | 0.0054 | 202.781 | 207.766 | 123.888
R | – | – | – | 0.9973 | 0.997 | 0.99911
(b)
MSE | 7.6 × 10−5 | 0.0001 | 0.0001 | 122,791.9 | 145,073.2 | 80,668.77
NMSE | – | – | – | 0.14310 | 0.16907 | 0.1200
MAE | 0.0060 | 0.0083 | 0.0103 | 308.1076 | 339.253 | 251.4257
Min Abs Error | 1.7 × 10−6 | 9.52 × 10−6 | 5.54 × 10−5 | 4.7350 | 0.0090 | 3.83098
Max Abs Error | 0.0224 | 0.0265 | 0.0304 | 744.564 | 807.648 | 564.30
R | – | – | – | 0.9687 | 0.9510 | 0.9734
(c)
MSE | 3.4 × 10−5 | 0.0003 | 0.0004 | 90,508.8 | 108,300.3 | 30,266.9
NMSE | – | – | – | 0.1054 | 0.1262 | 0.0450
MAE | 0.0054 | 0.0147 | 0.0160 | 227.888 | 219.009 | 148.994
Min Abs Error | 0.0001 | 2.48 × 10−5 | 0.0001 | 0.450210 | 0.7565 | 0.0600
Max Abs Error | 0.0081 | 0.0404 | 0.0464 | 698.8216 | 797.253 | 310.112
R | – | – | – | 0.946 | 0.9438 | 0.9906

The results of the MISO ANN models are also encouraging, but because they rely on arbitrary flow matching approaches they do not comply with mass-balance continuum flow mechanics. The performance of the TDNN is very similar to that of the MGMNN. Figure 6 shows that the observed values and the MGMNN predictions are almost identical. These models are also useful in circumstances where real-time flow forecasting is required.

Figure 6

(a) Gamma memory neural network; (b) gamma memory unit in a focused GMNN; (c) map of the study area, Tar River basin, North Carolina.


The MIMO-1 models, by matching zero flow rates and flow depths at sections other than the forecasting section, accurately match the observed flow rate when forecasting that section. This is especially important when there is provision for matching known flow rates, which gives an extra advantage when assessing forecast accuracy. In the MISO ANN model form, the connection weights joining the last hidden node to the output node are set to zero except for the forecasting section, which incorporates undefined storage and flow variation; hence, in the spatial-temporal domain, these models do not comply with mass balance flow in river flow studies. Although the performances of the MISO and MIMO models are nearly identical, the MISO model's connection weights are less meaningful. The results presented in this paper suggest that storage parameters should be used explicitly as well as implicitly when issuing a forecast, and show that storage is just as important as other flow parameters such as flow rate or flow depth. Consequently, when training MIMO and MISO ANN models for flow rate forecasting, including the instantaneous and average storage is just as important as observing the continuity norm. Flows at multiple sections of the Tar River Basin in the United States have been forecasted using both static and dynamic ANNs, and the models' performance is satisfactory when measured using statistical metrics such as the RMSE and CE. The focused GMNN is appropriate for river flow investigations with varying temporal patterns. Other memory structures, such as the Laguerre memory, should be investigated further. Furthermore, understanding the physics of the model will require an understanding of the connection weights.

All relevant data are available from https://waterdata.usgs.gov/nc/nwis/current/?type=flow.

Choudhury, P. 2007 Multiple inflows Muskingum routing model. Journal of Hydrologic Engineering 12(5), 473–481.

Choudhury, P. & Roy, P. 2015 Forecasting concurrent flows in a river system using ANNs. Journal of Hydrologic Engineering 20(8), 06014012.

Choudhury, P. & Ullah, N. 2015 Downstream flow top width prediction in a river system. Water SA 40(3), 481–490.
DeVries, B. & Principe, J. C. 1992 The gamma model – a new neural model for temporal processing. Neural Networks 5(4), 565–576.

Elman, J. L. & Zipser, D. 1988 Discovering the hidden structure of speech. Journal of the Acoustical Society of America 83, 1615–1626.

Giles, C. L., Lawrence, S. & Tosi, A. C. 1997 Rule inference for financial prediction using recurrent neural networks. In: Proceedings of IEEE Conference on Computational Intelligence for Financial Engineering, New York, pp. 253–259.

Hadiyan, P. P., Moeini, R. & Ehsanzadeh, E. 2020 Application of static and dynamic artificial neural networks for forecasting inflow discharges, case study: Sefidroud Dam reservoir. Sustainable Computing: Informatics and Systems 27, 100401.

Sil, B. S. & Choudhury, P. 2016 Muskingum equation based downstream sediment flow simulation models for a river system. International Journal of Sediment Research 31(2), 139–148.

Singh, A., Singh, R. M., Senthil Kumar, A. R., Kumar, A., Hanwat, S. & Tripathi, V. K. 2021a Evaluation of soft computing and regression-based techniques for the estimation of evaporation. Journal of Water and Climate Change 12(1), 32–43. https://doi.org/10.2166/wcc.2019.101.

Werbos, P. J. 1990 Back propagation through time: what it does and how to do it. Proceedings of the IEEE 78(10), 1550–1558.

Zakaria, M. N. A., Malek, M. A., Zolkepli, M. & Ahmed, A. N. 2021 Application of artificial intelligence algorithms for hourly river level forecast: a case study of Muda River, Malaysia. Alexandria Engineering Journal 60(4), 4015–4028.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).