The objective of this study is to develop hybrid models that combine data-driven models, namely support vector machines (SVM) and generalized regression neural networks (GRNN), with wavelet decomposition for aggregation and disaggregation of rainfall. The wavelet-based support vector machines (WSVM) and wavelet-based generalized regression neural networks (WGRNN) models are built using the mother wavelets db8, db10, sym8, sym10, coif6, and coif12. The developed models are evaluated in the Bocheong-stream catchment, an International Hydrological Program representative catchment, Republic of Korea. The WSVM and WGRNN models with mother wavelet db10 yield the best performance among the tested mother wavelets for estimating areal and disaggregated rainfalls. Among the 12 rainfall stations, the SVM, GRNN, WSVM (db10 and sym10), and WGRNN (db10 and sym10) models provide the best accuracies for estimating the disaggregated rainfalls at Samga (No. 7) station and the worst accuracies at Yiweon (No. 11) station. Results obtained from this study indicate that the combination of data-driven models and wavelet decomposition is a useful tool for estimating areal and disaggregated rainfalls satisfactorily, and yields better efficiency than data-driven models alone.

## INTRODUCTION

Rainfall modeling is a complex task. The use of conventional approaches in modeling rainfall time series is far from trivial, since hydrometeorologic processes are complex and involve various factors, such as landscape and climatic factors, which are still not well understood (Wu *et al.* 2010).

Areal rainfall is the average rainfall over a region and is estimated by one of several established methods, such as arithmetic mean, Thiessen polygon, isohyetal, spline, kriging, and copula, among others (Chow *et al.* 1988; Goovaerts 2000; AghaKouchak *et al.* 2010). The arithmetic mean method is the simplest for determining areal rainfall. The Thiessen polygon method assumes a linear variation in rainfall between two neighboring stations and constructs polygons that are essentially areal weights; it is considered more accurate than the arithmetic mean method. The isohyetal method involves construction of isohyets using observed depths at rainfall stations and assumes a linear variation between two adjacent isohyets (Chow *et al.* 1988; Singh 1992). The spline method is an interpolation method that divides the interpolation interval into small subintervals, each of which is interpolated using a third-degree polynomial (Apaydin *et al.* 2004; Tait *et al.* 2006). The kriging method is an optimal interpolator, based on regression against observed rainfall values of surrounding rainfall points, weighted according to spatial covariance values (Goovaerts 2000; Ly *et al.* 2011). The copula method can be employed to describe the dependencies among *n* random variables on the *n*-dimensional unit cube with uniform margins. Description of the spatial dependence structure independently of the marginal distribution is one of the most attractive features of copulas (Genest *et al.* 2007; Zhang & Singh 2007). In this study, rainfall aggregation means the estimation of areal rainfall using conventional approaches such as the arithmetic mean, Thiessen polygon, isohyetal, spline, kriging, and copula methods.
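The weighting idea shared by the arithmetic mean (equal weights) and Thiessen polygon (polygon-area weights) methods can be sketched as follows; the function name, station count, and weight values here are illustrative, not taken from the study.

```python
# Sketch of areal-rainfall aggregation by station weights (hypothetical
# weights; real Thiessen weights come from the polygon area of each gauge).
import numpy as np

def areal_rainfall(point_rainfall, weights):
    """Weighted average of point rainfall (mm) over a catchment.

    point_rainfall: array of shape (n_hours, n_stations)
    weights: array of shape (n_stations,), summing to 1
    """
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "areal weights must sum to 1"
    return np.asarray(point_rainfall, dtype=float) @ weights

# Hypothetical 3-station example: two hours of depths, polygon-area weights.
rain = np.array([[2.0, 0.0, 1.0],
                 [4.0, 3.0, 2.0]])
w = [0.5, 0.3, 0.2]
print(areal_rainfall(rain, w))  # -> [1.2 3.3]
```

With `w = [1/3, 1/3, 1/3]` the same routine reproduces the arithmetic mean method.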

Rainfall disaggregation can be both temporal and spatial. Temporal rainfall disaggregation entails disaggregating hourly, daily or longer duration rainfall into shorter time rainfall, and many techniques for temporal rainfall disaggregation have been proposed (Hershenhorn & Woolhiser 1987; Ormsbee 1989; Koutsoyiannis & Xanthopoulos 1990; Glasbey *et al.* 1995; Connolly *et al.* 1998; Olsson 1998; Olsson & Berndtsson 1998; Durrans *et al.* 1999; Sivakumar *et al.* 2001; Socolofsky *et al.* 2001; Gyasi-Agyei 2005; Zhang *et al.* 2008; Knoesen & Smithers 2009). However, relatively limited research has been reported on spatial rainfall disaggregation (Perica & Foufoula-Georgiou 1996; Venugopal *et al.* 1999; Sharma *et al.* 2007) as compared with temporal rainfall disaggregation.

Data-driven models, including artificial neural networks (ANNs), neuro-fuzzy, and genetic programming, are computational methods that have been primarily used for pattern recognition, classification, and prediction (Haykin 2009). During the past decades, various data-driven models have been developed and applied for temporal rainfall disaggregation (Burian *et al.* 2000, 2001; Burian & Durrans 2002). Burian *et al.* (2000) evaluated ANNs for disaggregation of hourly rainfall into subhourly time increments. Results have shown that ANNs are comparable to other disaggregation methods, and improve the prediction of maximum incremental rainfall intensity. Burian *et al.* (2001) investigated the training performance of various ANN models’ characteristics including data standardization, the geographic location of training data, quantity of training data, the number of training iterations, and the number of hidden neurons in ANNs. Burian & Durrans (2002) examined how the errors in the disaggregated rainfall hyetograph translate to errors in the prediction of the runoff hydrograph. However, research on the development and application using data-driven models for spatial rainfall disaggregation (Kim & Singh 2015) has been limited compared with temporal rainfall disaggregation. Kim & Singh (2015) developed ANN models, including multilayer perceptron (MLP) and Kohonen self-organizing feature map (KSOFM), for spatial disaggregation of areal rainfall in the Wi-stream catchment, an International Hydrological Program (IHP) representative catchment, Republic of Korea. Results showed that MLP and KSOFM models could disaggregate areal rainfall into individual point rainfall with spatial concepts successfully.

In recent years, wavelet decomposition and data-driven models have been combined and successfully implemented in hydrological applications, including rainfall, streamflow, water stage, evapotranspiration, groundwater, reservoir inflow, and sediment load (Wang & Ding 2003; Cannas *et al.* 2006; Wang *et al.* 2009; Adamowski & Sun 2010; Kisi 2010; Rajaee 2010; Tiwari & Chatterjee 2010; Kisi & Cimen 2011; Adamowski & Chan 2011; Rajaee *et al.* 2011; Adamowski & Prasher 2012; Nejad & Nourani 2012; Okkan 2012; Wei *et al.* 2012; Okkan & Serbes 2013; Seo *et al.* 2015). Wavelet decomposition is a data-preprocessing method that analyzes a signal in both time and frequency, thereby overcoming the drawbacks of the conventional Fourier transform. It permits an effective decomposition of time series, so that the decomposed data increase the performance of hydrological models by capturing useful information at different decomposition levels (Nourani *et al.* 2009, 2011).

Adamowski & Sun (2010) proposed a method combining discrete wavelet transforms and ANNs (WA-ANN) for streamflow forecasting in non-perennial rivers. They found that the WA-ANN models provided more accurate streamflow forecasts than the ANN models. Tiwari & Chatterjee (2010) developed a hybrid wavelet-bootstrap-ANN (WBANN) model to investigate the potential of wavelet and bootstrapping techniques for developing an accurate and reliable ANN model for hourly flood forecasting. They found that the WBANN model improved the reliability of flood forecasting with greater confidence. Adamowski & Prasher (2012) compared support vector regression (SVR) and wavelet networks (WN) for daily streamflow forecasting in a mountainous watershed. They found that the best WN model performed slightly better than the best SVR model. Okkan & Serbes (2013) developed models combining the discrete wavelet transform (DWT) with different data-driven models, including multiple linear regression (MLR), feed forward neural networks (FFNN), and least square-support vector machines (LS-SVM), for reservoir inflow modeling. They found that the DWT-FFNN model performed better than the other models in terms of mean square error (MSE) and coefficient of determination (R^{2}). Nourani *et al.* (2014) recently reviewed the definition and advantages of wavelet-based models, as well as the history and potential future of their application in hydrology to predict important processes of the hydrologic cycle.

Although the combination of data-driven models and wavelet decomposition has been investigated, its application to aggregation and disaggregation of rainfall has been limited. Mathematical formulas relating areal and individual rainfalls on a catchment cannot be derived using conventional methods such as simple regression analysis. The strongly nonlinear behavior involved in aggregation and disaggregation of rainfall can, however, be handled successfully by combining data-driven models with wavelet decomposition.

The objective of this study, therefore, is to develop and apply two different hybrid models, wavelet-based support vector machines (WSVM) and wavelet-based generalized regression neural networks (WGRNN), for aggregation and disaggregation of rainfall and evaluate them in the Bocheong-stream catchment, an IHP representative catchment, Republic of Korea. The paper is organized as follows: the second part describes the methodology including wavelet decomposition, support vector machines (SVM), generalized regression neural networks (GRNN), and WSVM and WGRNN, respectively. The third part describes the study area and data, and the fourth part presents the results and discussion. Conclusions are presented in the last part of the paper.

## METHODOLOGY

### Wavelet decomposition

The continuous wavelet transform (CWT) of a signal $x(t)$ is defined as (Nourani *et al.* 2009):

$$W(a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt$$

where $a$ = the scale parameter, $b$ = the translation parameter, * = the complex conjugate, and $\psi(t)$ = the mother wavelet. CWT necessitates a large amount of computation time and resources, while the discrete wavelet transform (DWT) requires less computation time and is simpler to implement. DWT involves choosing scales and positions, called dyadic scales and positions, based on powers of two. This is achieved by modifying the wavelet representation as (Mallat 1989; Nourani *et al.* 2009):

$$\psi_{j,k}\!\left(\frac{t-b}{a}\right) = \frac{1}{\sqrt{a_{0}^{j}}}\, \psi\!\left(\frac{t - k b_{0} a_{0}^{j}}{a_{0}^{j}}\right)$$

where *j* and *k* = the integers that control the wavelet dilation and translation, respectively; $a_{0}$ = a fixed dilation step; and $b_{0}$ = the location parameter. The most common and simplest choice of parameters is $a_{0} = 2$ and $b_{0} = 1$ (Nourani *et al.* 2009). Using this wavelet discretization, the time scale can be sampled at discrete levels.

Multiresolution analysis by Mallat's algorithm is a procedure to obtain ‘approximations’ and ‘details’ for a given time series signal. An approximation holds the general trend of the original signal, while a detail depicts its high-frequency components. A multilevel decomposition process (Figure 1) can be achieved, where the original signal is broken down into lower resolution components (Catalão *et al.* 2011). Detailed information on Mallat's algorithm can be found in Nason (2010).
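The multilevel approximation/detail split can be illustrated with PyWavelets (assumed available as `pywt`; the synthetic signal and the level are illustrative): a series is decomposed to level 2 and then perfectly reconstructed from its coefficients.

```python
# Two-level discrete wavelet decomposition with PyWavelets, mirroring
# Mallat's approximation/detail split (signal here is synthetic).
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(256))      # synthetic random-walk signal

# wavedec returns [cA2, cD2, cD1]: the level-2 approximation coefficients
# plus the detail coefficients at levels 2 and 1.
cA2, cD2, cD1 = pywt.wavedec(x, 'db10', level=2)

# waverec inverts the transform; the round trip reproduces the signal.
x_hat = pywt.waverec([cA2, cD2, cD1], 'db10')
print(np.allclose(x, x_hat[:len(x)]))        # True
```

Note that the coefficient arrays are downsampled at each level, which is why the approximation at level 2 is much shorter than the level-1 detail.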

### SVM

SVM models have found wide application in several areas, including pattern recognition, regression, multimedia, bio-informatics, and artificial intelligence. An SVM model is a classifier motivated by two concepts. First, transforming data into a high-dimensional space can convert complex problems into simpler ones that admit linear discriminant functions. Second, the SVM model is motivated by the concept of training with only those inputs that are near the decision surface (Principe *et al.* 2000; Tripathi *et al.* 2006; Vapnik 2010). The solution of traditional neural network models may fall into a local optimum, whereas a global optimum is guaranteed for the SVM model (Haykin 2009). The current study uses an *ɛ*-support vector regression (*ɛ*-SVR) model, which has been successfully applied for modeling hydrological processes (Tripathi *et al.* 2006; Kim *et al.* 2012, 2013a, 2013b). During *ɛ*-SVR training, the purpose is to find a nonlinear function that minimizes a regularized risk function. This is achieved for the least value of the desired error criterion (e.g., root mean square error (RMSE)) over the constant parameters *C* and *ɛ* and over various kernel functions with various constant *σ* values. Detailed information on the SVM model can be found in Vapnik (2010), Principe *et al.* (2000), Tripathi *et al.* (2006), and Kim *et al.* (2012, 2013a, 2013b).
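A minimal *ɛ*-SVR sketch follows, using scikit-learn's `SVR` as an assumed stand-in for the study's implementation; the data and the values of C, ɛ, and the RBF width are illustrative, not the study's calibrated parameters.

```python
# Minimal epsilon-SVR sketch with scikit-learn: RBF kernel with constants
# C, epsilon, and gamma (gamma ~ 1/(2*sigma^2)) as discussed in the text.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))                # illustrative inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)  # noisy target

model = SVR(kernel='rbf', C=10.0, epsilon=0.01, gamma=0.5)
model.fit(X, y)

# Training RMSE should land near the injected noise level (~0.1).
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(round(rmse, 3))
```

In practice the constants would be tuned, as in the paper, by minimizing RMSE on held-out data rather than accepted at their defaults.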

### GRNN

GRNN is a neural network model based on nonlinear regression theory. As a universal approximator for smooth functions, the GRNN model is capable of solving any smooth function approximation problem, and the GRNN modeling process avoids the problem of local minima (Specht 1991; Sudheer *et al.* 2003). GRNN is composed of four layers: the input layer, the hidden layer, the summation layer, and the output layer. The input layer, the hidden layer, and the summation layer neurons are completely connected, whereas the output layer neuron is connected only with some of the summation layer neurons. The summation layer is composed of two types of neurons: several summation neurons and one division neuron. Each output layer neuron is connected to the summation neuron and division neuron of the summation layer, and no connection weights exist between the summation layer and the output layer (Specht 1991; Wasserman 1993; Tsoukalas & Uhrig 1997).

GRNN training is very different from the training used in the MLP. Training between the input and hidden layers is unsupervised, as in the radial basis function (RBF) network; it therefore requires a special clustering algorithm, such as the K-means or orthogonal least squares (OLS) algorithms, and the radius of the clusters must be set before training starts. Training between the hidden and summation layers is supervised, based on minimizing the mean square error of the output from the hidden layer. Therefore, the parameters to be optimized during training are the centers, widths/spreads, and connection weights. The RBF is widely used as the transfer function of the hidden layer (Wasserman 1993; Tsoukalas & Uhrig 1997; Kim & Kim 2008b). GRNN models have been successfully developed and investigated for hydrological modeling (Kisi 2006; Kim & Kim 2008b; Kim *et al.* 2012, 2013b). Detailed information on the GRNN model can be found in Tsoukalas & Uhrig (1997), Kim & Kim (2008b), and Kim *et al.* (2012, 2013b).
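At prediction time, a GRNN reduces to a kernel-weighted average of the training targets (Specht 1991): the hidden layer evaluates an RBF at each stored pattern, and the summation layer forms the weighted-sum and normalizing-sum terms. A minimal NumPy sketch with a single spread parameter σ (class and variable names are illustrative):

```python
# Minimal GRNN sketch: kernel-weighted average of training targets,
# with one spread parameter sigma (an illustrative simplification).
import numpy as np

class GRNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X = np.asarray(X, float)   # stored training patterns
        self.y = np.asarray(y, float)   # stored training targets
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        # squared Euclidean distances: queries vs. training patterns
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))   # hidden-layer RBF outputs
        # summation layer: weighted sum of targets / sum of weights
        return (w @ self.y) / w.sum(axis=1)

X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X).ravel()
pred = GRNN(sigma=0.05).fit(X, y).predict([[0.25]])
print(pred)   # close to sin(pi/2) = 1
```

The single σ plays the role of the widths/spreads the text says must be optimized during training.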

### WSVM and WGRNN

WSVM is a combination of wavelet decomposition and SVM, whereas WGRNN is a combination of wavelet decomposition and GRNN. The wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to SVM and GRNN for WSVM and WGRNN models, respectively. The application of WSVM and WGRNN models in hydrology and water resources can be found from the literature (Kisi 2011; Kisi & Cimen 2011, 2012).

The first step is to decompose an input time series *x*(*t*) into D_{1}, D_{2}, and A_{2}, where D_{1} and D_{2} are details and A_{2} is an approximation. D_{1}, D_{2}, and A_{2} are then used as inputs to SVM and GRNN. The second step corresponds to the training and testing phases using SVM and GRNN, respectively. Figure 2 shows the flowchart for rainfall aggregation using WSVM and WGRNN.
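The two-step scheme can be sketched as follows, under the assumption (common in the cited literature) that each coefficient band is reconstructed to full length by zeroing the other bands before being fed to the regressor; PyWavelets and scikit-learn are assumed available, and the series, wavelet handling, and hyperparameters are illustrative.

```python
# Sketch of a WSVM input pipeline: decompose a series to level 2,
# rebuild full-length D1, D2, A2 components, and feed them to an SVR.
import numpy as np
import pywt
from sklearn.svm import SVR

def wavelet_components(x, wavelet='db10', level=2):
    """Full-length [A2, D2, D1] columns via zeroed-coefficient reconstruction."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    parts = []
    for i in range(len(coeffs)):
        masked = [c if j == i else np.zeros_like(c)
                  for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(masked, wavelet)[:len(x)])
    return np.column_stack(parts)            # shape (len(x), level + 1)

rng = np.random.default_rng(2)
rain = np.abs(rng.standard_normal(300))      # synthetic point-rainfall series
target = np.convolve(rain, [0.4, 0.3, 0.3], mode='same')  # synthetic target

X = wavelet_components(rain)                 # A2, D2, D1 as model inputs
model = SVR(kernel='rbf', C=10.0, epsilon=0.01).fit(X, target)
print(X.shape)                               # (300, 3)
```

Because reconstruction is linear, the three components sum back to the original series, so no information is lost in the split.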

## STUDY AREA AND DATA

The Bocheong-stream catchment has a drainage area of approximately … km^{2}, a channel length of approximately 49.0 km, a channel slope of approximately 0.582%, a shape factor of approximately 0.166, and a river density of approximately 0.111. The catchment is short from east to west and long from south to north. There are 5 river stage stations, 5 groundwater stations, 12 rainfall stations, and 12 evaporation stations in the catchment (Ministry of Construction & Transportation 1982–2007). Hydrological data, including rainfall, river stage, discharge, and groundwater table, were recorded from 1982 to 2007.

To estimate areal rainfall using the Thiessen polygon, spline and kriging methods in the Bocheong-stream catchment, hourly rainfall data from 12 rainfall stations were used: Myogeum (No. 1), Cheongsan (No. 2), Neungweol (No. 3), Jungnyul (No. 4), Kwangi (No. 5), Pyeongon (No. 6), Samga (No. 7), Songjug (No. 8), Samsan (No. 9), Dongjeong (No. 10), Yiweon (No. 11), and Annae (No. 12). Only Myogeum (No. 1) and Annae (No. 12) stations are located outside the Bocheong-stream catchment. Since the stations are spread almost uniformly, the areal rainfall estimated using the Thiessen polygon, spline and kriging methods can capture the natural patterns of individual point rainfall in the catchment. For data-driven models to generalize about rainfall, sufficient rainfall data must be available (Kim & Kim 2008a), and rainfall events must be recorded over 24 hours, including non-rainfall hours. Twelve rainfall events (events 1–12), comprising six flood and six typhoon events, were chosen from the mid-1980s to the mid-1990s to meet this condition. Since the kriging method incorporates considerably more information in estimating areal rainfall than the Thiessen polygon and spline methods, the areal rainfall estimated using the kriging method was taken as the observed areal rainfall.

For the data-driven model, data were split into training, cross-validation, and testing data. The training data were used for optimizing the connection weights and bias of the data-driven model, the cross-validation data were used to select the model variant that provides the best level of generalization, and the testing data were used to evaluate the chosen model against unseen data (Dawson & Wilby 2001; Izadifar & Elshorbagy 2010). The cross-validation method provides a rigorous test of a data-driven model's skill (Dawson & Wilby 2001) and is generally used to overcome the overfitting problem inherent in the data-driven models (Haykin 2009). This technique has often been applied at the end of training performance in the literature (Smith 1993; Haykin 2009) and is also employed for data-driven model selection (Stone 1974).

The training data consist of the rainfall events resulting in river floods, and the cross-validation and testing data consist of rainfall events when typhoons passed over and affected the Republic of Korea. In all of these applications, 47% of the data (events 1, 4, 7, 9, 11, and 12, *N* = 459 hours) were applied for training, 25% (events 5, 8, and 10, *N* = 245 hours) for cross-validation, and the remaining 28% (events 2, 3, and 6, *N* = 280 hours) for testing. Since floods and typhoons occur frequently during the summer season, the hourly rainfall data are sufficient to explain the rainfall patterns for floods and typhoons. Data length alone, however, does not significantly determine the performance of data-driven models: Tokar & Johnson (1999) indicated that data length has less effect than data quality on the performance of a neural network model, and Sivakumar *et al.* (2002) indicated that it is imperative to select good training data from the available data series, the best way being to include most of the extreme events, such as very high and very low values, in the training data.

Table 1 summarizes statistical indices of training, cross-validation, and testing data. In Table 1, X_{mean}, X_{max}, X_{min}, S_{x}, C_{v}, C_{sx}, and SE denote the mean, maximum, minimum, standard deviation, coefficient of variation, skewness coefficient and standard error values for training, cross-validation, and testing data, respectively. Songjug (No. 8) and Dongjeong (No. 10) stations show high variation (see C_{v} values in Table 1) in training and cross-validation data. Pyeongon (No. 6), Songjug (No. 8), Dongjeong (No. 10), and Yiweon (No. 11) stations show high skewed distributions (see C_{sx} values in Table 1) in training and cross-validation data.

| Statistical indices | Data | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | No. 6 | No. 7 | No. 8 | No. 9 | No. 10 | No. 11 | No. 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| X_{mean} | Training (N = 459) | 1.25 | 1.04 | 1.13 | 1.56 | 1.38 | 1.40 | 1.20 | 0.91 | 1.36 | 0.60 | 0.97 | 1.25 |
|  | Cross-validation (N = 245) | 1.17 | 1.19 | 1.11 | 1.80 | 1.28 | 2.65 | 1.26 | 0.69 | 0.96 | 0.30 | 1.22 | 0.69 |
|  | Testing (N = 280) | 2.10 | 1.61 | 1.19 | 1.66 | 1.61 | 1.52 | 2.18 | 2.42 | 1.88 | 1.51 | 1.90 | 2.31 |
| X_{max} | Training (N = 459) | 26.00 | 30.00 | 26.50 | 49.00 | 32.00 | 53.00 | 34.50 | 47.00 | 37.00 | 23.00 | 58.00 | 33.00 |
|  | Cross-validation (N = 245) | 18.00 | 24.50 | 24.00 | 38.00 | 17.50 | 111.00 | 35.50 | 44.50 | 15.00 | 22.00 | 42.00 | 19.00 |
|  | Testing (N = 280) | 65.00 | 34.00 | 21.00 | 30.00 | 31.50 | 25.00 | 37.00 | 34.00 | 24.00 | 17.00 | 25.00 | 69.00 |
| X_{min} | Training (N = 459) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
|  | Cross-validation (N = 245) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
|  | Testing (N = 280) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| S_{x} | Training (N = 459) | 3.85 | 3.38 | 3.81 | 4.71 | 4.54 | 4.42 | 3.97 | 4.21 | 4.31 | 2.72 | 3.82 | 4.34 |
|  | Cross-validation (N = 245) | 3.30 | 3.49 | 3.55 | 5.30 | 3.27 | 9.12 | 4.29 | 3.45 | 2.68 | 1.91 | 3.89 | 2.84 |
|  | Testing (N = 280) | 6.08 | 3.78 | 2.63 | 3.80 | 3.52 | 3.31 | 4.85 | 5.19 | 4.00 | 3.03 | 3.92 | 5.97 |
| C_{v} | Training (N = 459) | 3.07 | 3.26 | 3.37 | 3.02 | 3.29 | 3.15 | 3.30 | 4.64 | 3.17 | 4.55 | 3.95 | 3.48 |
|  | Cross-validation (N = 245) | 2.81 | 2.93 | 3.19 | 2.95 | 2.56 | 3.44 | 3.40 | 4.98 | 2.80 | 6.44 | 3.20 | 4.10 |
|  | Testing (N = 280) | 2.89 | 2.35 | 2.21 | 2.29 | 2.19 | 2.18 | 2.23 | 2.15 | 2.13 | 2.00 | 2.07 | 2.58 |
| C_{sx} | Training (N = 459) | 3.99 | 4.26 | 4.50 | 4.96 | 4.46 | 5.62 | 4.29 | 6.97 | 4.81 | 5.20 | 9.00 | 4.46 |
|  | Cross-validation (N = 245) | 3.42 | 4.01 | 3.99 | 3.90 | 2.85 | 7.61 | 4.56 | 9.67 | 3.23 | 8.81 | 6.02 | 4.57 |
|  | Testing (N = 280) | 6.32 | 4.10 | 3.20 | 4.08 | 3.89 | 3.74 | 3.69 | 3.11 | 3.09 | 3.21 | 2.88 | 6.36 |
| SE | Training (N = 459) | 0.18 | 0.16 | 0.18 | 0.22 | 0.21 | 0.21 | 0.19 | 0.20 | 0.20 | 0.13 | 0.18 | 0.20 |
|  | Cross-validation (N = 245) | 0.21 | 0.22 | 0.23 | 0.34 | 0.21 | 0.58 | 0.27 | 0.22 | 0.17 | 0.12 | 0.25 | 0.18 |
|  | Testing (N = 280) | 0.36 | 0.23 | 0.16 | 0.23 | 0.21 | 0.20 | 0.29 | 0.31 | 0.24 | 0.18 | 0.23 | 0.36 |

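The indices reported in Table 1 can be reproduced for any series with a short routine; this sketch uses the adjusted sample skewness and the standard error of the mean, which is our assumption about the exact estimator variants the authors used.

```python
# Summary indices as in Table 1: mean, max, min, standard deviation,
# coefficient of variation, skewness coefficient, standard error.
import numpy as np

def summary_indices(x):
    x = np.asarray(x, float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)       # sample standard deviation
    return {
        'Xmean': mean,
        'Xmax': x.max(),
        'Xmin': x.min(),
        'Sx': sd,
        'Cv': sd / mean,                     # coefficient of variation
        # adjusted sample skewness (assumed estimator)
        'Csx': n / ((n - 1) * (n - 2)) * np.sum(((x - mean) / sd) ** 3),
        'SE': sd / np.sqrt(n),               # standard error of the mean
    }

print(summary_indices([0.0, 1.0, 2.0, 9.0]))
```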

Table 2 summarizes statistical indices of the areal rainfall data estimated using the Thiessen polygon, spline and kriging methods. It is seen from Table 2 that the three methods yield similar values for the training, cross-validation, and testing data. The estimated rainfall values were compared with observed ones using five performance evaluation criteria: the correlation coefficient (CC), RMSE, Nash–Sutcliffe coefficient (NS) (Nash & Sutcliffe 1970; ASCE 1993), mean absolute error (MAE), and average performance error (APE). Although CC is one of the most widely used criteria for calibration and evaluation of hydrological models against observed data, it alone cannot discriminate among models: the standardization inherent in CC, as well as its sensitivity to outliers, yields high CC values even when model performance is far from perfect. Legates & McCabe (1999) therefore suggested that multiple evaluation criteria (e.g., RMSE, MAE, NS, and APE) be used to evaluate model performance. Table 3 shows the mathematical expressions of the performance evaluation criteria used in this study.

| Data | Methods | X_{mean} | X_{max} | X_{min} | S_{x} | C_{v} | C_{sx} | SE |
|---|---|---|---|---|---|---|---|---|
| Training | Thiessen polygon | 1.16 | 20.01 | 0.00 | 2.77 | 2.38 | 3.22 | 0.13 |
|  | Spline | 1.16 | 20.72 | 0.00 | 2.83 | 2.44 | 3.29 | 0.13 |
|  | Kriging | 1.17 | 18.71 | 0.00 | 2.76 | 2.37 | 3.16 | 0.13 |
| Cross-validation | Thiessen polygon | 1.21 | 17.24 | 0.00 | 2.93 | 2.42 | 3.31 | 0.19 |
|  | Spline | 1.16 | 16.63 | 0.00 | 2.77 | 2.38 | 3.19 | 0.18 |
|  | Kriging | 1.19 | 17.61 | 0.00 | 2.94 | 2.46 | 3.33 | 0.19 |
| Testing | Thiessen polygon | 1.79 | 18.60 | 0.00 | 2.85 | 1.59 | 2.41 | 0.17 |
|  | Spline | 1.80 | 18.39 | 0.00 | 2.87 | 1.60 | 2.39 | 0.17 |
|  | Kriging | 1.82 | 19.66 | 0.00 | 2.95 | 1.62 | 2.63 | 0.18 |


| Evaluation criteria | Equation |
|---|---|
| CC | $\mathrm{CC} = \dfrac{\sum_{i=1}^{n}(y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}\,\sqrt{\sum_{i=1}^{n}(\hat{y}_i - \bar{\hat{y}})^2}}$ |
| RMSE | $\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$ |
| NS | $\mathrm{NS} = 1 - \dfrac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$ |
| MAE | $\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert$ |
| APE | $\mathrm{APE} = \dfrac{\mathrm{RMSE}}{\bar{y}} \times 100\ (\%)$ |


where $y_i$ = the observed hourly rainfall (mm); $\hat{y}_i$ = the estimated hourly rainfall (mm); $\bar{y}$ = the mean of the observed hourly rainfall (mm); $\bar{\hat{y}}$ = the mean of the estimated hourly rainfall (mm); and *n* = the total number of hourly rainfall values considered.
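The five criteria can be computed directly from the observed and estimated series; in this sketch APE is taken as the RMSE expressed as a percentage of the observed mean, which is an assumed definition of "average performance error" on our part, and the example values are illustrative.

```python
# The five evaluation criteria in NumPy: y observed, y_hat estimated.
import numpy as np

def criteria(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y - y_hat
    rmse = np.sqrt(np.mean(err ** 2))
    return {
        'CC': np.corrcoef(y, y_hat)[0, 1],                      # correlation
        'RMSE': rmse,                                           # mm
        'NS': 1.0 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2),
        'MAE': np.mean(np.abs(err)),                            # mm
        'APE': 100.0 * rmse / y.mean(),   # assumed: relative RMSE, in %
    }

obs = [0.0, 1.0, 3.0, 2.0]   # illustrative hourly depths (mm)
est = [0.1, 0.9, 2.8, 2.2]
print(criteria(obs, est))
```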

## RESULTS AND DISCUSSION

### Rainfall aggregation using data-driven models

Conventional data-driven models adopt one hidden layer for model construction, since it is well known that one hidden layer is enough to represent the nonlinear complex relationship (Kumar *et al.* 2002; Makarynskyy *et al.* 2005). The number of hidden nodes of the data-driven models for rainfall aggregation was determined using a trial-and-error approach. Figure 4(a) shows the developed structure of SVM (12-12-1), comprising input (12 nodes), hidden (12 nodes), and output (1 node) layers, for estimating areal rainfall in this study. Figure 4(b) shows the developed structure of GRNN (12-12-2-1), comprising input (12 nodes), hidden (12 nodes), summation (1 summation and 1 division node), and output (1 node) layers, for estimating areal rainfall in this study.

The decomposition level plays an important role in wavelet-based modeling (Nourani *et al.* 2009; Tiwari & Chatterjee 2010; Adamowski & Chan 2011; Nejad & Nourani 2012). In this study, the decomposition level was determined using the following empirical equation (Nourani *et al.* 2009):

$$L = \operatorname{int}[\log(N)]$$

where *L* = the decomposition level, *N* = the number of time series data, and int[·] = the integer-part function. In this study, two decomposition levels were obtained. Thus, input time series were decomposed using different mother wavelets, and the detail modes (D_{1}, D_{2}) and approximation mode (A_{2}) for individual input data were obtained for the training, cross-validation, and testing periods.
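The empirical rule L = int[log(N)], with the symbols as defined in the text, can be checked against this study's training length of N = 459 hours, for which it gives the two decomposition levels used here (the base-10 logarithm is our assumption, consistent with the reported result):

```python
# Decomposition-level rule L = int[log(N)], checked for N = 459 hours.
import math

def decomposition_level(n):
    return int(math.log10(n))   # base-10 log, truncated to an integer

print(decomposition_level(459))   # -> 2
```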

The choice of mother wavelet also affects the performance of wavelet-based models. Daubechies, Symmlet, and Coiflet wavelets provide compact support (Vonesch *et al.* 2007; Mathworks 2014), indicating that the wavelets have non-zero basis functions over a finite interval, as well as full scaling and translational orthonormality properties (Popivanov & Miller 2002; de Artigas *et al.* 2006). These features are important for localizing events in time-dependent signals (Popivanov & Miller 2002). Based on these features, Daubechies, Symmlet, and Coiflet wavelets were selected as mother wavelets in this study. Figure 5 shows an example of the original time series and the sub-time series (D_{1}, D_{2}, and A_{2}) decomposed using the db10 wavelet for the training period.

Selection of effective wavelet components is important for model performance. Previous studies selected effective wavelet components using the CC between wavelet components and observed values (Alikhani 2009; Tiwari & Chatterjee 2010; Kisi & Cimen 2011). Effective wavelet components have also been selected using other methods, including Mallows' *C_p* (Okkan 2012; Okkan & Serbes 2013), CC, mutual information, Shannon entropy (Khanghah *et al.* 2012), and the self-organizing map (Nourani *et al.* 2012). Several researchers have also used all decomposed components as effective wavelet components (Adamowski & Sun 2010; Adamowski & Chan 2011; Kisi 2011; Adamowski & Prasher 2012).

To construct new input time series from the wavelet components, several methods have been used, including summing the effective components (Partal & Cigizoglu 2008; Alikhani 2009; Kisi 2010; Kisi & Cimen 2011), summing the components for different levels (Adamowski & Chan 2011; Adamowski & Prasher 2012), using all components for different levels without summing the components (Adamowski & Sun 2010; Kisi 2011), and using only effective components without summing them (Okkan 2012). Based on the modeling strategies, one SVM, one GRNN, six WSVM, and six WGRNN models were developed for the rainfall aggregation.

Table 4 summarizes the statistical results of the rainfall aggregation models for the testing period. It is clear from Table 4 that all the models generally perform well. Comparison across the different mother wavelets indicates that the WSVM models outperform the SVM model, and that the WGRNN models likewise outperform the GRNN model. Furthermore, Table 4 shows that the SVM and WSVM models perform better than the GRNN and WGRNN models, respectively, for each mother wavelet. Comparison of the mother wavelets reveals that db10 yields the best accuracy for rainfall aggregation for both the WSVM and WGRNN models, indicating that wavelet decomposition with db10 improves the performance of the SVM and GRNN models more than the other mother wavelets do. These results are consistent with those reported by Seo *et al.* (2015).

**Table 4** | Evaluation criteria of the rainfall aggregation models for the testing period

| Models | CC | RMSE (mm) | NS | MAE (mm) | APE (%) |
|---|---|---|---|---|---|
| SVM | 0.950 | 0.958 | 0.895 | 0.516 | 31.347 |
| WSVM_db8 | 0.967 | 0.770 | 0.932 | 0.486 | 29.493 |
| WSVM_db10 | 0.972 | 0.711 | 0.942 | 0.437 | 25.546 |
| WSVM_sym8 | 0.954 | 0.914 | 0.905 | 0.508 | 29.771 |
| WSVM_sym10 | 0.962 | 0.881 | 0.913 | 0.486 | 28.843 |
| WSVM_coif6 | 0.952 | 0.941 | 0.901 | 0.515 | 30.766 |
| WSVM_coif12 | 0.955 | 0.924 | 0.903 | 0.510 | 30.889 |
| GRNN | 0.891 | 1.460 | 0.756 | 0.905 | 56.532 |
| WGRNN_db8 | 0.935 | 1.119 | 0.858 | 0.525 | 34.540 |
| WGRNN_db10 | 0.944 | 1.052 | 0.874 | 0.460 | 27.765 |
| WGRNN_sym8 | 0.903 | 1.350 | 0.792 | 0.821 | 51.078 |
| WGRNN_sym10 | 0.919 | 1.190 | 0.838 | 0.682 | 42.136 |
| WGRNN_coif6 | 0.899 | 1.393 | 0.778 | 0.885 | 54.276 |
| WGRNN_coif12 | 0.904 | 1.338 | 0.796 | 0.771 | 47.550 |
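The five criteria in Table 4 can be computed with standard definitions, sketched below. These forms (in particular APE as the total absolute error divided by the total observed rainfall, times 100) are assumptions, since the paper's exact formulas are not reproduced in this excerpt.

```python
import numpy as np

def evaluate(obs, sim):
    """Assumed standard forms of the five goodness-of-fit criteria."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    cc = np.corrcoef(obs, sim)[0, 1]                               # correlation coefficient
    rmse = np.sqrt(np.mean(err ** 2))                              # root-mean-square error
    ns = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
    mae = np.mean(np.abs(err))                                     # mean absolute error
    ape = 100.0 * np.sum(np.abs(err)) / np.sum(obs)                # absolute percentage error
    return {"CC": cc, "RMSE": rmse, "NS": ns, "MAE": mae, "APE": ape}

# Toy observed/simulated areal rainfall (mm), for illustration only.
obs = np.array([2.0, 0.0, 5.0, 1.0, 3.0])
sim = np.array([1.5, 0.5, 4.5, 1.5, 3.5])
metrics = evaluate(obs, sim)
```

Higher CC and NS (upper bound 1) and lower RMSE, MAE, and APE indicate better agreement, which is how the rankings in Table 4 are read.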


### Disaggregation of areal rainfall using data-driven models

The rainfall stations with highly skewed distributions in the training and cross-validation data (e.g., No. 6, No. 8, No. 10, and No. 11) generally did not show satisfactory results in the performance evaluation criteria (CC and RMSE (mm)) for the SVM and GRNN models. However, wavelet decomposition can effectively overcome this weakness of the SVM and GRNN models. This observation indicates that data quality can affect the performance of data-driven models, in agreement with Tokar & Johnson (1999) and Sivakumar *et al.* (2002). Comparison of the SVM and GRNN models indicates that the SVM model is better than the GRNN model for disaggregating the areal rainfall. Similarly, the WSVM models with mother wavelets db10 and sym10 outperform the corresponding WGRNN models for disaggregating the areal rainfall.
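The skewness diagnosis mentioned above can be reproduced with a simple sample-skewness computation; the two series below are synthetic stand-ins for a near-symmetric and a highly skewed station record, not the actual station data.

```python
import numpy as np

def sample_skewness(x):
    """Fisher-Pearson coefficient of skewness (biased sample form)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 3) / s ** 3

rng = np.random.default_rng(7)
# Near-symmetric record: skewness close to 0.
mild_station = rng.normal(loc=5.0, scale=1.0, size=1000)
# Heavy right tail (many small values, few extremes): large positive skewness.
skewed_station = rng.gamma(shape=0.3, scale=10.0, size=1000)
```

Stations whose training records show large positive skewness (as at No. 6, 8, 10, and 11) are the candidates for the degraded SVM and GRNN performance noted above.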

## CONCLUSIONS

This study develops and evaluates the combination of wavelet decomposition and data-driven models for aggregation and disaggregation of rainfall in the Bocheong-stream catchment, an IHP representative catchment, Republic of Korea.

The SVM and GRNN models are used to estimate areal rainfall and individual point rainfalls. Wavelet decomposition is employed, and the resulting sub-components are used as inputs to the SVM and GRNN models to obtain the WSVM and WGRNN models, respectively. For every mother wavelet tested, the WSVM models outperform the SVM model, and the WGRNN models outperform the GRNN model.

The WSVM model with mother wavelet db10 yields the best performance for rainfall aggregation among the SVM and WSVM models with different mother wavelets. Likewise, the WGRNN model with mother wavelet db10 yields the best performance among the GRNN and WGRNN models with different mother wavelets.

The SVM, GRNN, WSVM (db10 and sym10), and WGRNN (db10 and sym10) models are used for estimating the disaggregated rainfall and are generally found to be sensitive to the individual rainfall stations. Among the 12 rainfall stations, the disaggregated rainfall at Samga (No. 7) station yields the best results, and that at Yiweon (No. 11) station yields the worst results, for the SVM, GRNN, WSVM, and WGRNN models.

Comparison of the SVM and GRNN models indicates that the SVM model is better than the GRNN model for disaggregating the areal rainfall. Similarly, the WSVM models with mother wavelets db10 and sym10 outperform the corresponding WGRNN models. The WSVM and WGRNN models with mother wavelet db10 are found to be the optimal models for disaggregating the areal rainfall in this study.