Incorporating river basin simulation models in heuristic optimization algorithms can help modelers address complex, basin-scale water resource problems. We have developed a hybrid optimization-simulation model by linking a stretching particle swarm optimization (SPSO) algorithm and the MODSIM river basin decision support system (DSS), and have used the SPSO-MODSIM model to optimize water allocation at basin scale. Due to the high computational cost of the SPSO-MODSIM model, we have subsequently used four types of meta-models, namely artificial neural networks (ANN), support vector machines (SVM), kriging and polynomial response functions, to replace the MODSIM DSS in an adaptively learning meta-modeling approach. The performances of the meta-models are first compared on two benchmark function optimization problems (Ackley and Dejong), and the meta-models are then evaluated by solving the Atrak river basin water allocation optimization problem in Iran. The results demonstrate that, independent of the meta-model type, the sequentially space-filling meta-modeling approach can improve the performance of meta-models in the course of optimization by adaptively locating the promising regions of the search space where more samples need to be generated. However, the ANN and SVM meta-models perform better than the others at reducing the number of costly, original objective function evaluations.

## INTRODUCTION

Modeling water resources at basin scale calls for different kinds of mathematical models, representing the complex hydrologic, legal-administrative, socio-economic and management processes taking place in a river basin. Most of the mathematical river basin management models, e.g. MODSIM (Labadie 1995), ACRES (Sigvaldson 1976), RIBASIM (Hydraulics 1991), CALSIM (Draper *et al.* 2004), MikeBasin (DHI Water & Environment 2006), RiverWare (Zagona *et al.* 2001), HEC-5 (US Corps of Engineers; Bonner 1989) or its modified version HEC-ResSim, and WEAP (SEI 1999), have been developed for simulation purposes. They typically make use of pre-defined reservoir operation rules or single-period optimization schemes to optimally allocate available water resources among competing users, so they are not able to identify the optimal size and operation of the systems' components by using multi-period optimization. There have been some cases where large-scale, multi-reservoir system management has been approached by fully dynamic, multi-period optimization algorithms with a greater degree of simplification, e.g. linearization (Kuczera & Diment 1988; Cai *et al.* 2002; Jenkins *et al.* 2004).

The purpose of modeling a river basin system could also go beyond simulating the operation of a multiple reservoir system or even optimizing water allocations for multiple supply-demand systems. This is because a basin system consists of various components such as reservoirs, aquifers, pumping systems, hydroelectric power plants, diversions, and water transfer systems. Therefore, various processes and inter-relationships take place within or among the components, e.g. erosion, sedimentation, runoff and flow routing, recharge, pollution and contaminant transport, and eutrophication, represented by highly non-linear, non-convex, discontinuous and non-algebraic equations, the solution of which demands more detailed, computationally intensive simulation models.

Simulation-optimization is an alternative approach for solving large-scale river basin optimization problems, in which a simulation model is linked to a global heuristic optimization algorithm (Shourian *et al.* 2008). One of the main advantages of the approach is that it does not require the variables, functions, relations and the related computer codes, simulating the system's processes, to be continuous, differentiable, algebraic and even completely accessible by users. These features provide the potential use of more detailed simulation models that can better represent the real processes being simulated. However, the main problem with this approach is the computational load needed to run the simulation model.

Although meta-heuristic, nature-inspired stochastic search methods, which use random elements to transform one candidate solution into a hopefully better solution, are efficient optimization tools, they need a large number of function evaluations, and each evaluation requires running the aforementioned complex simulation model. This makes the resulting simulation-optimization approach computationally intensive. Surrogate modeling, also known as meta-modeling (Blanning 1975), has evolved and is extensively used to produce computationally efficient surrogates of high-fidelity models (Sacks *et al.* 1989; Jones 2001; Razavi *et al.* 2012a). In this technique, a cheaper-to-run surrogate, which is much faster to evaluate than the original simulation model, is used instead of the exact model. Artificial neural networks (ANN), support vector machines (SVM), kriging and polynomial response functions are some of the most commonly used approximation techniques in meta-modeling that have been applied to various engineering optimization problems.

Robinson & Keane (1999) presented a case for employing variable-fidelity analysis models and approximation techniques to improve the efficiency of evolutionary optimization for complex design tasks. This problem was revisited by El-Beltagy *et al.* (1999), who argued that the issue of balancing the concerns of optimization with those of design of experiments must be addressed. Farhang-Mehr & Azarm (2005) employed a sequential approach with adaptation to irregularities in the response behavior for Bayesian meta-modeling in engineering design simulations. Computational frameworks for integrating a class of single-point approximation models with evolutionary algorithms (EAs) were proposed by Nair & Keane (2001) for a domain-specific class of approximation models, and for more general models by Ratle (2001), who integrated an EA with kriging models. EAs and global optimization algorithms have also been coupled with local search and quadratic response surface methods (Liang *et al.* 2000), stochastic response surface models (Regis & Shoemaker 2007), radial basis functions (RBFs) (Shoemaker *et al.* 2007) and ANN (Jin *et al.* 2002). Simpson *et al.* (2004) made a detailed survey of the state-of-the-art in meta-modeling.

Mousavi & Shourian (2010) presented the adaptive sequentially space filling (ASSF) meta-modeling approach in which the sub-problems of design of experiments, function approximation and function optimization in a surrogate optimization problem were sequentially solved in a feedback loop. Tsoukalas & Makropoulos (2015) compared the performance of state-of-the-art evolutionary and surrogate-based multi-objective optimization algorithms within the parameterization-simulation-optimization framework for robust multi-reservoir rules generation under hydrological uncertainty.

In spite of extensive work done in meta-modeling for decreasing the computational burden of global optimization techniques with computationally intensive function evaluations, the comparison of different meta-models, particularly in hydro-systems, is limited. Razavi *et al.* (2012b) listed 32 studies in water-resources-related problems and pointed out that only a few of them focus on comparing the performances of different meta-models. In this study, we aim to compare ANN, SVM, kriging and polynomial models in a surrogate optimization-based water allocation problem at the Atrak river basin in Iran. The surrogate models replace the MODSIM decision support system (DSS) in the particle swarm optimization (PSO) algorithm using the ASSF approach already developed by Mousavi & Shourian (2010).

The paper is structured as follows: first, a brief overview of the ASSF approach, the PSO algorithm and the principal features of the four mentioned meta-modeling techniques is presented. Next, the results of applying the ASSF approach to two benchmark function optimization problems with different meta-models are reported to assess the performance of the proposed tools. We then apply the meta-modeling techniques to a real-world, basin-scale water allocation case study in the north-east of Iran, and the results are presented and discussed, followed by an overall evaluation of the considered meta-modeling techniques in the summary and conclusion section.

## ASSF META-MODELING

Consider a simulation-based optimization problem, referred to as problem (1), in which the objective function *f(x)* is evaluated by running a high-fidelity simulation model over the search space *X*. Some of the mathematical functions of the objective and constraints could be non-smooth or can make the feasible space of the problem non-convex. Facing these difficulties, meta-heuristic and evolutionary optimization techniques are promising in solving these types of optimization problems. They can easily be linked with any simulation model without the need to have access to the computer codes or details of the function *f*. Nevertheless, they typically need thousands of objective function evaluations before converging to a good or near-optimum solution. Since each function evaluation needs the high-fidelity simulation model to run, problem (1) could become computationally very difficult to solve. To deal with this difficulty, a meta-model replacing the simulation model may be used.

In the ASSF approach, instead of the original function *f*, an approximate function *f̂* is optimized; this is referred to as problem (2) (Mousavi & Shourian 2010). Because *f̂* is evaluated instead of the original function *f*, the first issue in surrogate optimization is how to determine the approximate, surrogate function with the smallest possible number of evaluations of *f*. An adequately accurate function approximation satisfies |*f(x)* − *f̂(x)*| ≤ *ɛ* for all *x* ∈ *X*, where *ɛ* is the accuracy parameter, *f* − *f̂* is the approximation error and *X* is the search space of problem (1). A reasonable way to secure the required precision for the approximate function in the optimization problem (2) is to design a sufficient number of experiments to fill the search space uniformly. These experiments are then implemented to construct a meta-model. Therefore, if *D* experiments are needed for constructing the meta-model, the function *f* has to be evaluated *D* times. In other words, to determine the approximate function in problem (2), a set of experiments must be generated. This problem, dealing with the design of experiments, is referred to as problem (3).

The ASSF approach starts with generating a relatively small number of experiments in the search space, just enough to construct a preliminary approximate surrogate model. These experiments are designed randomly, and the function *f* is evaluated for each of them by running the original, exact simulation model (design of experiments and function evaluation, problem (3)). The two main factors affecting the initial number of experiments needed in the search space (the size of the initial experiment set) are the non-linearity of function *f* and the dimension of the input vector (Mousavi & Shourian 2010). The minimum initial number of data points is selected so as not to have an over-fitted function approximation model. A meta-model, such as an ANN, is then trained by using the generated data. This stage approximates the function *f* (the function approximation sub-problem, referred to as problem (2a)).

Subsequently, the minimum or optimum value of the approximate function *f̂* is determined by using an evolutionary optimization algorithm. This stage is in fact the approximate function optimization stage represented as problem (2). If the function approximation stage has been undertaken precisely, one can expect that the minimum value of the approximate function *f̂*, reached by solving problem (2), is almost the same as the true minimum value of the original function *f*. In order to check whether or not the optimum of problem (2) can be considered as the solution of problem (1), Mousavi & Shourian (2010) suggested checking two criteria: (1) the existence of a predefined required number of experiments in the neighborhood of the solution in the training set; this criterion is used to check whether the meta-model has learned well to approximate the region around the solution; and (2) the approximation error at the solution being smaller than a predefined threshold error *ɛ*; this criterion is set to evaluate the error of the online approximation in terms of optimization.

If both criteria are met, the solution is accepted as the optimum of *f*; otherwise, it is considered a false solution, located in a region that is not filled by adequate data samples (experiments), where the meta-model has therefore not learned the actual (original) function. The algorithm stores the solution and marks it so that the search mechanism never finds it again. The EA then converges to another point, for which the two criteria are checked again, and the process is continued up to a predefined maximum number of iterations. Once the iteration count exceeds the maximum, a set of non-perfect solutions, considered as center points of the gap regions, has been located, and additional experiments are designed in the neighborhood of each center point. These new experiments are added to the previously generated training data to construct the new, updated training set. The meta-model is then re-trained using the new training data set, and a new run of the EA, in which the re-trained meta-model is used, is performed to find new solutions. It is highly likely that the new EA run skips the false solutions already found. This procedure continues until the EA no longer converges to a false solution. The flow diagram of the proposed approach is shown in Figure 1. More details of the ASSF approach may be found in Mousavi & Shourian (2010). In our study, PSO and MODSIM are the evolutionary optimization algorithm and the simulation model, respectively.
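The feedback loop described above can be sketched in a few lines of code. The following is a minimal, illustrative Python version in which a simple nearest-neighbour lookup stands in for the ANN/SVM/kriging/polynomial meta-models and random multistart stands in for the EA; all helper names, sample sizes and tolerances are hypothetical choices, not values from this study:

```python
import random

def assf_optimize(expensive_f, dim, lo, hi, n_init=20, n_neighbors=5,
                  radius=0.5, eps=0.1, max_outer_iters=30, seed=0):
    """Sketch of the ASSF loop: design of experiments (problem 3), surrogate
    training (problem 2a), surrogate optimization (problem 2), and the two
    acceptance criteria with gap filling around false solutions."""
    rng = random.Random(seed)
    rand_point = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    dist = lambda a, b: max(abs(u - v) for u, v in zip(a, b))

    # Problem (3): initial, randomly designed experiments
    data = [(x, expensive_f(x)) for x in [rand_point() for _ in range(n_init)]]
    tabu, best = [], min(data, key=lambda p: p[1])
    for _ in range(max_outer_iters):
        # Problem (2a): "train" the surrogate (here a 1-nearest-neighbour lookup)
        surrogate = lambda q: min(data, key=lambda p: dist(p[0], q))[1]
        # Problem (2): cheap surrogate optimization, skipping stored false solutions
        cands = [c for c in (rand_point() for _ in range(200))
                 if all(dist(c, t) > radius for t in tabu)]
        x_star = min(cands, key=surrogate)
        y_true = expensive_f(x_star)
        if y_true < best[1]:
            best = (x_star, y_true)
        # Acceptance criteria: enough neighbouring samples AND small error
        dense = sum(dist(x, x_star) <= radius for x, _ in data) >= n_neighbors
        if dense and abs(surrogate(x_star) - y_true) <= eps:
            return best
        tabu.append(x_star)  # false solution: fill its gap region with new samples
        for _ in range(n_neighbors):
            x_new = [min(hi, max(lo, v + rng.uniform(-radius, radius)))
                     for v in x_star]
            data.append((x_new, expensive_f(x_new)))
    return best
```

In the actual SPSO-MODSIM application, `expensive_f` corresponds to a full MODSIM simulation run, which is why keeping the count of such calls small is the central concern.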

### Stretching particle swarm optimization algorithm

PSO is a stochastic search algorithm introduced by Kennedy & Eberhart (1995). The algorithm, simulating the behavior of a bird flock, serves as an optimization algorithm for nonlinear problems. In PSO, each particle represents a single intersection of all search dimensions. The particles change their positions according to the objective function while sharing memories of their ‘best’ positions to adjust their velocities as well as their positions. In a *D*-dimensional search space, each particle *i* of the swarm is represented by a *D*-dimensional position vector *x _{i}*. In each iteration, the particle remembers its previous best position, referred to as *p _{i}*. The velocity of each particle is represented by another *D*-dimensional vector, *v _{i}*. The velocity of any particle is updated by Equation (4) so that the particle moves toward its own previous best position, *p _{i}*, and the best position among all other particles' positions, *p _{g}*; the position is then updated by Equation (5).

In Equations (4) and (5), *N* is the size of the swarm, *χ* is a constriction factor that is used in constrained optimization problems in order to control the magnitude of the velocity, *c _{1}* and *c _{2}* are two positive constants called the cognitive and social parameters, respectively, *r _{1}* and *r _{2}* are random numbers uniformly distributed in [0,1], *n* is the iteration number, and *w* is the inertia weight that can be updated dynamically in every iteration by using Equation (6), in which *w ^{n}* is the inertia weight at iteration *n*, *n _{max}* is the maximum number of iterations, and *w _{max}* and *w _{min}* are, respectively, the maximum and minimum inertia weights (Parsopoulos & Vrahatis 2002). The PSO algorithm starts with a set of randomly generated solutions. The swarm is then updated by using Equations (4) and (5) in each iteration. This process is continued until the stopping criteria are met.
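As a concrete illustration, the update rules of Equations (4)–(6) can be written as follows; this is a minimal sketch with the constriction factor, the cognitive/social parameters and the inertia bounds set to commonly used textbook values rather than the ones calibrated in this study, and the dictionary-based swarm layout is purely illustrative:

```python
import random

def inertia(n, n_max, w_max=0.9, w_min=0.4):
    """Equation (6): inertia weight decreasing linearly with iteration n."""
    return w_max - (w_max - w_min) * n / n_max

def pso_step(swarm, w, rng, chi=0.73, c1=2.0, c2=2.0):
    """One sweep of Equations (4) and (5) over the whole swarm."""
    g = min(swarm, key=lambda s: s["p_val"])["p"]  # best position found so far
    for s in swarm:
        for d in range(len(s["x"])):
            r1, r2 = rng.random(), rng.random()
            # Equation (4): velocity update with constriction factor chi
            s["v"][d] = chi * (w * s["v"][d]
                               + c1 * r1 * (s["p"][d] - s["x"][d])
                               + c2 * r2 * (g[d] - s["x"][d]))
            # Equation (5): position update
            s["x"][d] += s["v"][d]

def evaluate(swarm, f):
    """Refresh each particle's personal best after a move."""
    for s in swarm:
        val = f(s["x"])
        if val < s["p_val"]:
            s["p"], s["p_val"] = list(s["x"]), val
```

A swarm for a 2D test function could then be initialized with random positions and zero velocities and advanced for, say, 100 iterations, with `inertia(n, 100)` supplying the decreasing weight at each step.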

One of the main problems encountered by optimization methods, including the PSO algorithm, is that of local minima, especially in multimodal functions; PSO, despite being an efficient method, also suffers from this problem. Function stretching is one of the techniques that can help eliminate undesirable local minima of the objective function. The idea behind function stretching is to perform a two-stage transformation of the original objective function, applied immediately after a local minimum has been detected. The first transformation stage elevates the objective function and eliminates all the local minima that are located above the detected local minimum. The second stage stretches the neighborhood of the local minimum upwards by assigning higher function values to those points. Neither stage alters the local minima located below the detected minimum; thus, the location of the global minimum is left unaltered. More details on stretching PSO (SPSO) can be found in Parsopoulos & Vrahatis (2002) and Mousavi & Shourian (2010).
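The two transformation stages can be sketched compactly. The code below follows the general form of the transformations given by Parsopoulos & Vrahatis (2002); the parameter values shown for γ1, γ2 and μ are the commonly quoted defaults from that line of work, not necessarily the ones used in this study:

```python
import math

def _sign(v):
    return (v > 0) - (v < 0)

def stretch(f, x_bar, gamma1=1e4, gamma2=1.0, mu=1e-10):
    """Return the stretched objective H built around a detected local
    minimum x_bar; points with f(x) <= f(x_bar) are left unaltered."""
    f_bar = f(x_bar)

    def G(x):
        # Stage 1: elevate f and remove local minima above f(x_bar)
        step = _sign(f(x) - f_bar) + 1
        return f(x) + gamma1 * math.dist(x, x_bar) * step / 2

    def H(x):
        fx = f(x)
        if fx <= f_bar:
            return fx  # minima below the detected one are untouched
        # Stage 2: stretch the neighbourhood of x_bar upwards
        step = _sign(fx - f_bar) + 1
        return G(x) + gamma2 * step / (2 * math.tanh(mu * (G(x) - G(x_bar))))

    return H
```

Because `tanh` of a tiny positive argument is tiny, the second-stage term is very large near `x_bar`, which is what pushes the swarm away from the already-detected local minimum.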

In this study, we have made use of the SPSO algorithm with the following function approximation techniques as meta-models in the ASSF approach.

## FUNCTION APPROXIMATION TECHNIQUES

### ANN

A neural network, which was first introduced by McCulloch & Pitts (1943), is a computational model that is loosely based on the neuron cell structure of the biological nervous system. Given a training set of data, the network can learn the data with a learning algorithm; once properly trained, the network provides a data-driven model that is capable of giving reasonable answers when presented with input vectors that have not been encountered during the training process.

We have used a multi-layer, feed-forward network (FFN) in which the nodes are arranged in layers, with connections from nodes in layer *i* only to nodes in layer *i + 1* (Figure 2). Each of the hidden layer nodes (neurons) computes a weighted sum of its inputs, passes the sum through the transfer (activation) function and presents the result to the next layer until the output layer is reached.

We have used the back propagation (BP) method with the Levenberg–Marquardt training algorithm. More details on FFNs and the BP algorithm may be found in different references (Hornik *et al.* 1989; Werbos 1994).
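To make the layer-by-layer computation concrete, a minimal FFN forward pass can be written as below. This is only a sketch of the evaluation step, not the trained networks used in the study; the tanh hidden activation and linear output are a common choice for function approximation:

```python
import numpy as np

def ffn_forward(x, weights, biases):
    """Feed-forward pass: each hidden layer computes a weighted sum of its
    inputs and applies a tanh transfer function; the output layer is linear."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)            # hidden layers
    return weights[-1] @ a + biases[-1]   # linear output layer
```

Training with Levenberg–Marquardt then amounts to adjusting `weights` and `biases` so as to minimize the MSE of this forward pass over the training set.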

### SVM

In support vector regression (SVR), the *ɛ-insensitive* loss function is used to produce a sparseness property for SVR (Vapnik 2000): predictions of the approximate function *f̂* whose corresponding errors are less than the *ɛ*-boundary (*ɛ*-tube) are not penalized (Figure 3). For linear function approximation, all the linear functions of the input vector *x* have the representation *f̂(x)* = ⟨*w*, *x*⟩ + *b*, where the angle brackets indicate the inner (or dot) product of two vectors in a Hilbert space. To find *f̂*, it is necessary to minimize the regularized risk functional, defined as the weighted sum of the empirical error *R _{emp}* of the training data, computed in the *ɛ-insensitive* loss function framework, and a regularization term penalizing the norm of *w*. The coefficient *C* in Equation (10) controls this trade-off and is a complexity indicator of the function *f̂*.
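The ɛ-insensitive loss itself is simple to state in code (a sketch):

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """Vapnik's epsilon-insensitive loss: residuals inside the eps-tube cost
    nothing; outside it the penalty grows linearly with the excess."""
    return [max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred)]
```

Only the training points whose residuals fall outside the tube end up as support vectors, which is the source of the sparseness property mentioned above.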

### Kriging

In kriging, the approximation is built from the spatial correlation among the sampled points. Here *R* is the correlation matrix, whose entries give the correlation between any two of the samples *x _{i}* and *x _{j}* through a correlation function with unknown parameters *θ*. Note that *R* is a symmetric matrix with diagonal elements equal to 1, and the form of the correlation function can be chosen by the user. Among the variety of correlation functions proposed in the literature, the Gaussian correlation function is the most frequently used; it is defined as *R(x _{i}, x _{j})* = exp(−Σ _{k} *θ _{k}* (*x _{ik}* − *x _{jk}*)²), where *θ* is the vector of correlation parameters. The kriging prediction of the fitness function value at any point *x* is given by *ŷ(x)* = *β̂* + *r(x)^{T}R^{−1}*(*y* − *Fβ̂*), where *ŷ* is the estimated value of *f(x)*, *r(x)* is the correlation vector between *x* and the sample locations, *y* is the vector of responses at the sample locations, and *F* is the regression design matrix (a column of ones for a constant trend). In this study, the kriging models are constructed by means of the DACE Toolbox. This environment provides both the kriging predictions and the related mean squared error (MSE) estimations (Lophaven *et al.* 2002).
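The Gaussian correlation and the resulting predictor can be sketched directly from the expressions above. This is ordinary kriging with a constant trend and a fixed θ; the DACE Toolbox additionally estimates θ by maximum likelihood, which is omitted here:

```python
import numpy as np

def gauss_corr(xi, xj, theta):
    """Gaussian correlation: exp(-sum_k theta_k * (xi_k - xj_k)^2)."""
    d = np.asarray(xi, float) - np.asarray(xj, float)
    return float(np.exp(-np.sum(theta * d ** 2)))

def kriging_predict(x, X, y, theta):
    """Ordinary-kriging prediction at x given samples X with responses y."""
    m = len(X)
    R = np.array([[gauss_corr(X[i], X[j], theta) for j in range(m)]
                  for i in range(m)])            # correlation matrix, 1s on diagonal
    r = np.array([gauss_corr(x, X[i], theta) for i in range(m)])
    F = np.ones(m)                               # constant-trend design vector
    beta = (F @ np.linalg.solve(R, np.asarray(y, float))) / (F @ np.linalg.solve(R, F))
    return beta + r @ np.linalg.solve(R, np.asarray(y, float) - beta * F)
```

Because `r(x)` coincides with a row of `R` whenever `x` is one of the sampled points, the predictor interpolates the training data exactly, which is a characteristic property of kriging.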

### Polynomial response surface models

In polynomial response surface modeling, the original function is written as the sum of a polynomial approximation and a random error *ɛ*, which is assumed to be an independent, normally distributed variable with zero mean and constant variance. The polynomial function used to approximate *f* is typically a low-order polynomial, which is assumed to be either linear, Equation (23), or quadratic, Equation (24); the parameters of the polynomials in Equations (23) and (24) are determined using the least-squares regression method.
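For instance, fitting the one-dimensional quadratic case of Equation (24) by least squares takes only a design matrix and a solver (a sketch):

```python
import numpy as np

def fit_quadratic_1d(x, y):
    """Least-squares estimate of [b0, b1, b2] in y_hat = b0 + b1*x + b2*x^2."""
    x = np.asarray(x, float)
    X = np.column_stack([np.ones_like(x), x, x ** 2])  # design matrix
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coef
```

The multi-dimensional quadratic of Equation (24) works the same way, with cross-product columns added to the design matrix.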

### Experimental setup

The accuracy of the meta-models is measured by the MSE between the meta-model predictions and the values of the original function *f*. The MSE provides a general evaluation of the overall prediction accuracy. We have used the MSE index in the training, testing and validation stages.

The parameter values of the meta-models, e.g. the regularization parameters in SVR and kriging, can have a considerable impact on their predictive ability. A simple, yet effective way of selecting parameter values is to use a *k*-fold cross-validation scheme. In *k*-fold cross-validation, the data are first partitioned into *k* equally (or nearly equally) sized folds. Subsequently, *k* iterations of training and validation are performed such that, within each iteration, a different fold of the data is held out for validation, and the remaining (*k* − 1) folds are used for learning. We have used a 10-fold cross-validation scheme for selecting the best parameter values for each of the four meta-models. Under this scheme, the sample data are divided into 10 equally sized subsets, and 10 iterations of training and validation tasks are then performed. In every iteration, the meta-models are trained with different parameter settings, and the parameter settings are evaluated on the validation set. A grid search is used to explore the entire parameter space, and the parameter set with the minimum average error on the validation set is found for each of the meta-models.
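The scheme amounts to the following loop; this is a sketch in which `fit` and `mse` are hypothetical callables standing in for a meta-model's training and error-evaluation routines:

```python
def k_fold_cv(data, k, fit, mse):
    """Average held-out error over k folds of a list of (x, y) pairs."""
    folds = [data[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        train = [pair for j, fold in enumerate(folds) if j != i for pair in fold]
        model = fit(train)                   # train on the k-1 remaining folds
        errors.append(mse(model, folds[i]))  # validate on the held-out fold
    return sum(errors) / k

def grid_search(data, grid, make_fit, mse, k=10):
    """Pick the parameter setting with the minimum average validation error."""
    return min(grid, key=lambda params: k_fold_cv(data, k, make_fit(params), mse))
```

With `k = 10` and an exhaustive `grid`, this reproduces the selection procedure described above.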

## RESULTS

### Benchmark optimization problems

The proposed methods are first tested in optimizing two well-known benchmark functions that have widely been used for evaluating the performance of meta-heuristic optimization algorithms and function approximation techniques while doing surrogate optimization. In fact, it is important to know how the performances of the considered meta-modeling techniques are affected by moving from a simpler two-dimensional (2D) optimization problem to a problem whose dimension is comparable with the real engineering design problem under study. Note that because of the need to calibrate and tune the parameter sets of any optimization or meta-modeling approaches, these types of tests are much more difficult to do on real engineering problems with computationally intensive simulation models, whereas they can easily be applied to benchmark optimization problems with fast objective function evaluations.

The Dejong (sphere) and Ackley benchmark functions are used in their standard forms, where *n* is the dimension of the functions. The global minimum of both functions is located at the origin, *x** = [0] _{1×n}, where both functions equal zero. Figure 4 illustrates the 2D Dejong (Dejong-2D) and 2D Ackley (Ackley-2D) functions:
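In the forms commonly used in the benchmarking literature (assumed here, since the exact formulations are standard), the two functions can be coded as:

```python
import math

def dejong(x):
    """Dejong's (sphere) function: convex, unimodal, minimum 0 at the origin."""
    return sum(v * v for v in x)

def ackley(x):
    """Ackley's function: many local minima, global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

Both functions evaluate to zero at the origin in any dimension, which is the optimum the ASSF runs reported below are trying to reach.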

The selected benchmark problems to be solved by the ASSF approach are the Dejong-2D, Dejong-8D, Ackley-2D and Ackley-8D function optimization problems. The PSO swarm sizes were 10 and 30, and the PSO maximum iterations were 100 and 300, for the 2D and 8D problems, respectively. More than one solution was determined by the ASSF approach because the solution with the best surrogate objective value reached by the approach may not be the one having the best original objective value. Therefore, Table 1 presents the best solutions among 10 runs of the SPSO algorithm. In this table, the approximate objective function value, the original objective function value and the related approximation error are reported.

| | Optimum point located | Approximate obj. function | Original obj. function | MSE | Number of evaluations of the original function (NFE) |
|---|---|---|---|---|---|
| Dejong-2D | | | | | |
| SPSO | [0 0] | – | 0 | – | 1,000 |
| SVM | [0 0] | −0.01 | 0 | 0.01 | 341 |
| ANN | [0 0] | 0.0419 | 0 | −0.0419 | 627 |
| Kriging | [0 0] | −8.84 × 10^{−7} | 0 | −8.84 × 10^{−7} | 346 |
| Polynomial | [0 0] | −4.34 × 10^{−15} | 0 | 4.34 × 10^{−15} | 306 |
| Dejong-8D | | | | | |
| SPSO | [0 0 0 0 0 0 0 0] | – | 0 | – | 9,000 |
| SVM | [0 0 0 0 0 0 0 0] | −0.0408 | 0 | 0.0408 | 1,159 |
| ANN | [0 0 0 0 0 0 0 0] | 8.14 × 10^{−4} | 0 | −0.0114 | 2,021 |
| Kriging | [−0.0636 −0.029 −0.0596 −0.1245 0.0166 0.0023 0.006 −0.0270] | −5.2683 × 10^{−4} | 0.8854 | 0.8859 | 2,346 |
| Polynomial | [0 0 0 0 0 0 0 0] | −2.329 × 10^{−3} | 0 | −2.329 × 10^{−3} | 1,201 |
| Ackley-2D | | | | | |
| SPSO | [0 0] | – | 0 | – | 1,000 |
| SVM | [0 0] | 0.1672 | 0 | −0.1672 | 256 |
| ANN | [0 0] | 0.159 | 0 | −0.159 | 494 |
| Kriging | [0 0] | 0.0896 | 0 | −0.0896 | 667 |
| Polynomial | [0 0] | 0.9987 | 0 | −0.998 | 2,893 |
| Ackley-8D | | | | | |
| SPSO | [0 0 0 0 0 0 0 0] | – | 0 | – | 9,000 |
| SVM | [0 0 0 0 0 0 0 0] | 0.00275 | 0 | 0.00275 | 1,956 |
| ANN | [0 0 0 0 0 0 0 0] | 0.2025 | 0 | −0.2025 | 6,215 |
| Kriging | [−0.0175 0.0779 −0.0046 −0.006 0.05 0.0776 −0.0125 0.1071] | 0.07981 | 0.07869 | −0.012 | 8,345 |
| Polynomial | [−0.2548 −0.2732 −0.0562 −0.4618 0.1307 −0.1127 0.3673 −0.1004] | 4.7988 | 2.7613 | −2.0376 | 9,689 |


The Dejong function is a well-behaved, convex, unimodal function. For the Dejong-2D function, the ASSF approach with all of the meta-models successfully located the optimum point (*x** = [0] _{1×2}), with the minimum and maximum NFE associated with the polynomial and ANN meta-models, respectively; SVM and kriging required almost the same NFE, meaning that these two meta-models performed almost identically. The polynomial, SVM and kriging meta-models were able to find the global minimum with fewer than 350 function evaluations, whereas ANN needed more. The larger NFE required by ANN to optimize such a simple function is consistent with the fact that larger sets of design points are typically required to train neural networks (Yan & Minsker 2006; Zou *et al.* 2007), while at the same time the adaptive updating mechanism of the ASSF approach plays a less significant role in low-dimensional search spaces.

For the Dejong-8D function, the SVM, ANN and polynomial meta-models successfully located the optimum point (*x** = [0] _{1×8}), with the minimum NFE and a 4.08% error belonging to SVM (Table 1). For a smooth function such as Dejong's, simple learning techniques such as polynomial response functions are sufficient, and the more complicated techniques of ANN and kriging, which require more data points to be properly trained, are not necessary.

The Ackley function is a more complex, non-smooth function containing numerous local minima with no detectable trend toward the global region of attraction. For the Ackley-2D function, the polynomial meta-model required a larger NFE to reach the optimum point (*x** = [0] _{1×2}), while SVM outperformed the others with the smallest NFE, equal to 256, followed by ANN with 494. Therefore, the SVM and ANN surrogates performed better on the more complex function optimization problem. For the Ackley-8D case, the polynomial meta-model failed to reach the optimum point, partly owing to the limitation in the random access memory (RAM) of the computer used.

### Atrak river basin water allocation problem

#### SPSO-MODSIM model

The Atrak river basin lies mostly inside Iran, with a small part within the territory of Turkmenistan (Sheikh 2014). Figure 7 is a schematic map of the river basin showing the locations of the reservoirs. There are 12 reservoirs, among which Ghordanlu, Darband, Garmab, Amand, Chaily, Chandir and Sumbar have not yet been constructed; therefore, it is necessary to decide their capacities.

MODSIM allocates water by solving a minimum-cost network flow problem (NFP) that minimizes the total cost Σ *c _{l}q _{l}* over all links, where *A* is the set of all arcs or links in the network, *N* is the set of all nodes, *O _{i}* is the set of links originating at node *i* (i.e. outflow links), *I _{i}* is the set of links terminating at node *i* (i.e. inflow links), *q _{l}* is the integer-valued flow rate in link *l*, *c _{l}* is the cost, weighting factor or priority per unit flow rate in link *l*, *l _{l}* is the lower bound on the flow in link *l*, and *u _{l}* is the upper bound on the flow in link *l*. Further, the mass balance equation, requiring the outflow minus the inflow to balance at each node, must be satisfied at every node of the model. It is possible in MODSIM to define a cost coefficient for each connection link per unit of water transferred; a negative coefficient represents a benefit and a positive coefficient represents a cost of the water transferred to the ending node of the link.
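The structure of the NFP can be illustrated on a toy network. The brute-force solver below is purely illustrative, with hypothetical node names, bounds and costs; MODSIM itself uses an efficient network flow solver rather than enumeration:

```python
from itertools import product

def min_cost_flow(links, nodes, net_supply):
    """Enumerate integer flows on a tiny network and return the cheapest one
    that satisfies mass balance (outflow - inflow = net supply) at every node.
    `links` maps (from_node, to_node) -> (lower, upper, unit_cost)."""
    arcs = list(links)
    ranges = [range(links[a][0], links[a][1] + 1) for a in arcs]
    best, best_cost = None, float("inf")
    for q in product(*ranges):
        flow = dict(zip(arcs, q))
        balanced = all(
            sum(f for (i, _), f in flow.items() if i == n)
            - sum(f for (_, j), f in flow.items() if j == n) == net_supply[n]
            for n in nodes)
        if balanced:
            cost = sum(links[a][2] * flow[a] for a in arcs)
            if cost < best_cost:
                best, best_cost = flow, cost
    return best, best_cost
```

Negative unit costs play the role of benefits (priorities) in such a formulation, steering water toward high-priority demand nodes, which is exactly how MODSIM's link cost coefficients are used.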

Despite all of MODSIM's remarkable capabilities, it is not a fully dynamic, multi-period optimization model. Consequently, MODSIM by itself is not able to determine optimum designs or operation policies. On the other hand, one of the features of MODSIM's latest versions is the ability to prepare customized code in the VB.NET or C#.NET (used in this study) languages, compiled with MODSIM through the .NET Framework. Therefore, it is possible to link MODSIM with the PSO algorithm and use it as an efficient simulation-optimization approach for solving a variety of large-scale, basin-wide water resource problems (Shourian *et al.* 2008, 2009). Taking advantage of MODSIM's custom coding features, one can embed MODSIM in the SPSO algorithm and build an efficient simulation-optimization tool to solve the Atrak river basin water allocation optimization problem. Additionally, because MODSIM is a relatively high-fidelity simulation model with several NFPs to be solved in a large, complex river basin, the resulting SPSO-MODSIM model is computationally intensive; thus, it can be considered an appropriate application of the ASSF approach.

Determining the sizes of water resource development projects and deciding how to allocate available water resources among different demand nodes over time and space, while considering the coordinated operation of the system components, are important to basin management. In the Atrak river basin system, which is under development through the construction of a number of dams and irrigation projects while already-constructed reservoirs are being operated, these issues can be dealt with by formulating the problem as a large-scale simulation-optimization model.

Relative priority numbers assigned to target reservoir storage levels can be taken as operational variables, according to which the NFP solver employed in MODSIM determines whether to release water stored in a reservoir in each time period to meet that period's water demand or to keep it in the reservoir for future use. Therefore, there are two main types of decision variables: design variables, consisting of the capacities of unconstructed reservoirs or water transfer components, and operational variables, i.e. the relative priority numbers of the target storage levels of both unconstructed and constructed dams.

In the case of the Atrak river basin, we have considered a six-dimensional problem with the capacities of the Darband, Garmab and Ghordanlu reservoirs as design decision variables and the priorities of these three dams as operational variables. The other reservoirs have smaller potential capacities, so their capacities are assumed to be known. Considering the geologic, topographic and hydrologic conditions, the capacities range from 5 million cubic meters (mcm) to 10 mcm for the Darband and Garmab reservoirs and from 5 mcm to 50 mcm for the Ghordanlu reservoir (Table 2). Municipal and agricultural water demands have been estimated based on projections of future population growth and agricultural land and crop patterns for the year 2025. Other hydrologic input data, including natural and sub-basin river flows and evaporation from and precipitation into reservoir lakes, have been gathered and introduced to the model over a historical 20-year period (1976 to 1996) of available records to account for hydrologic variability.

| Dam | Darband | Garmab | Ghordanlu |
|---|---|---|---|
| Maximum | 10 mcm | 10 mcm | 50 mcm |
| Minimum | 4 mcm | 5 mcm | 5 mcm |
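For illustration, the six-dimensional decision vector and its box constraints (capacity bounds from Table 2 and the priority-number range 1 to 10 stated later in the text) could be encoded as below; the variable names, ordering and helper functions are our own illustrative choices, not the authors' implementation:

```python
import random

# Illustrative bounds for the six-dimensional Atrak decision vector:
# three design variables (reservoir capacities, mcm) followed by
# three operational variables (target storage priority numbers).
LOWER = [4.0, 5.0, 5.0, 1.0, 1.0, 1.0]    # Darband, Garmab, Ghordanlu; priorities
UPPER = [10.0, 10.0, 50.0, 10.0, 10.0, 10.0]

def random_particle(lower=LOWER, upper=UPPER, rng=random):
    """Sample one candidate solution uniformly within the box constraints."""
    return [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]

def clip_to_bounds(x, lower=LOWER, upper=UPPER):
    """Project a particle back into the feasible box after a PSO update."""
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, lower, upper)]
```

A swarm would then be a list of such particles, with each velocity update followed by a projection step to keep capacities and priorities within their admissible ranges.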

where:

*SUPPLY_{Kh_Shomali}*: water volume supplied to demand nodes in Khorasan-Shomali Province;

*SUPPLY_{Golestan}*: water volume supplied to demand nodes in Golestan Province;

*TARGET_{Kh_Shomali}*: target water demand volume of Khorasan-Shomali Province;

*TARGET_{Golestan}*: target water demand volume of Golestan Province.

Although *TARGET_{Kh_Shomali}* and *TARGET_{Golestan}* are just two input parameters in the above function, the optimization model results, and therefore the shares of water allocated to each of the provinces, are quite sensitive to these parameters' values. On the other hand, the competition between the upstream (Khorasan-Shomali) and downstream (Golestan) provinces, as the main water consumers of the system, is a source of conflict over these parameter values. Each provincial party does its best to convince the members of a national, third-party committee, among whom the second author is included, to recognize its water requirements as being as large as possible. In fact, each province may tend to report its water demand as larger than it really is in order to attract more water to the province's demand sites. Specifically, there are a number of newly developed irrigation lands in Khorasan-Shomali Province whose water supply is severely criticized by the downstream Golestan provincial administration. In other words, due to severe water shortages in the downstream Golestan Province, stakeholders there insist on the cessation of any development projects in the upstream Khorasan-Shomali Province, including the irrigation schemes as well as any under-study dam reservoirs.

Facing such a critical dispute, we, as part of the national, third-party committee, ran the MODSIM model over a 20-year historical period under a water demand condition in which none of the elements of the irrigation and surface reservoir development projects were put into operation. Under this condition, the volumes of water supplied to the demand nodes of the two provinces, Khorasan-Shomali and Golestan, were determined as 756.5 mcm and 195 mcm, respectively. These amounts were then set as the *TARGET_{Kh_Shomali}* and *TARGET_{Golestan}* input parameters to the model in subsequent runs under an increased water demand condition due to the new irrigation development projects as well as the new dam reservoirs to be optimally sized. In this condition, the total average annual demands of the two provinces were equal to 1,202.3 and 531.6 mcm, respectively. However, a higher priority was given to meeting the portion of each province's water demand that had been established historically (first-priority demands) than to the portion arising from more recently developed projects (second-priority demands).

By the above-described mechanism, the SPSO outer objective function (Equation (29)) drives the search toward the solutions that best meet the two objectives of minimizing the capacities of new dam projects and the shortages in supplying water to first-priority demands, while the one-period, minimum-cost objective functions of MODSIM's NFPs guide the model to deliver any excess water to second-priority water demands. This procedure is, in fact, a methodological setting of the first-in-time, first-in-right water allocation mechanism combined with the riparian allocation mechanism in our modeling exercise. By this approach, the national, third-party committee was able to moderately reduce some disagreements over actual water demands between the two main provinces in the Atrak river basin.
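The structure of such a two-term outer objective can be sketched as follows; this is a simplified illustration in the spirit of Equation (29), not a reproduction of it, and the function name, argument layout and default weights are hypothetical:

```python
def outer_objective(capacities, first_priority_shortages,
                    w_capacity=1.0, w_shortage=100.0):
    """Hypothetical sketch of a penalty-style outer objective: penalize the
    capacities of new dam projects and, much more heavily, any shortages in
    supplying water to first-priority demands. The shortages would come from
    a simulation run (here MODSIM) for the candidate design. The weights are
    illustrative stand-ins, not the values of Equation (29)."""
    return (w_capacity * sum(capacities)
            + w_shortage * sum(first_priority_shortages))
```

With a heavy shortage weight, the search prefers meeting first-priority demands first and only then shrinks the new reservoir capacities, mirroring the mechanism described above.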

The SPSO-MODSIM model was then used to solve the six-dimensional problem presented above. The values of the five weights in Equation (29) were set to 1, 1, 100, 100 and 1, respectively, using a trial-and-error procedure. The number of particles and the maximum number of iterations were set to 30 and 3,000, respectively. The stopping criteria were either reaching the maximum number of iterations or no improvement of the SPSO best objective function value over 200 successive iterations. The upper and lower bounds of the priority numbers in MODSIM were 10 and 1, respectively. Table 2 presents the minimum and maximum values of the design variables introduced to the model. Note that the system used was a PC with a 3.33 GHz Intel(R) Core(TM)2 Duo CPU and 10 GB RAM, on which each function evaluation (MODSIM run) took about 20 seconds.
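The combined stopping rule described above (iteration cap, or stagnation of the best objective value over a window of iterations) can be sketched as a small helper; the function name and signature are our own:

```python
def should_stop(iteration, best_history, max_iter=3000, patience=200, tol=0.0):
    """Stopping rule as described in the text: stop at max_iter iterations,
    or when the swarm's best objective value (minimization) has not improved
    over `patience` successive iterations. `best_history` holds the best
    objective value recorded at each iteration so far."""
    if iteration >= max_iter:
        return True
    if len(best_history) > patience:
        recent_best = min(best_history[-patience:])
        earlier_best = min(best_history[:-patience])
        # No improvement: the last `patience` iterations did not beat
        # the best value found before the window started.
        return earlier_best - recent_best <= tol
    return False
```

Called once per SPSO iteration, this reproduces the behavior reported later: with 30 particles, termination by the stagnation criterion after 1,000 iterations yields the 30,000 function evaluations quoted for the exact model.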

Table 3 reports the best solution obtained by the SPSO-MODSIM model, with a total of 30,000 function evaluations owing to the dominance of the second stopping criterion. Due to the stochastic nature of the PSO algorithm, the results reported in this section (Tables 3 and 4) are the best solutions among 10 runs of the SPSO algorithm.

| Optimized variables | Optimized values |
|---|---|
| Darband reservoir capacity (mcm) | 5 |
| Garmab reservoir capacity (mcm) | 5 |
| Ghordanlu reservoir capacity (mcm) | 40 |
| Darband target storage level's priority number | 1 |
| Garmab target storage level's priority number | 3 |
| Ghordanlu target storage level's priority number | 10 |
| Objective function | 137,837 |

| Method | SPSO-MODSIM | SPSO-MODSIM ∼ ANN | SPSO-MODSIM ∼ SVM | SPSO-MODSIM ∼ KRIGING | SPSO-MODSIM ∼ POLYNOMIAL |
|---|---|---|---|---|---|
| Approximate best obj. func. value | – | 137,844 | 137,830 | 137,836 | 137,577 |
| Original obj. func. value | 137,837 | 137,837 | 137,837 | 137,837 | 137,837 |
| MSE | – | −5.08 × 10^{−5} | 5.37 × 10^{−5} | 1.05 × 10^{−5} | 0.00188 |
| Number of original obj. function evaluations | 30,000 | 482 | 477 | 1,500 | 1,667 |
| Darband reservoir capacity (mcm) | 5 | 5 | 5 | 5 | 5 |
| Garmab reservoir capacity (mcm) | 5 | 5 | 5 | 5 | 5 |
| Ghordanlu reservoir capacity (mcm) | 40 | 40 | 40 | 40 | 40 |
| Darband target storage level's priority number | 1 | 1 | 3 | 6 | 4 |
| Garmab target storage level's priority number | 3 | 3 | 4 | 8 | 6 |
| Ghordanlu target storage level's priority number | 10 | 5 | 3 | 4 | 1 |

### Analysis of meta-models in Atrak river basin problem

According to Table 4, all the meta-models found solutions similar to that of SPSO-MODSIM, with the same best objective function value of 137,837, using far fewer original function evaluations. As in the Ackley-8D problem, SVM and ANN outperformed the other two meta-models with respect to the NFE required, whereas kriging and polynomial needed a larger NFE to reach the solution. Although the polynomial meta-model, with its higher MSE value, was not as accurate as the other models, it is a relatively simple, efficient technique requiring much less learning time.

## SUMMARY AND CONCLUSIONS

Incorporating a river basin simulation model in a heuristic optimization algorithm can help modelers address basin-scale water resource management problems more efficiently via an integrated simulation-optimization approach. The stretching PSO (SPSO) algorithm and the MODSIM river basin DSS were linked, and the SPSO-MODSIM tool was used to solve a large-scale, complex water allocation problem in the Atrak river basin in Iran, where there is a major dispute over water between upstream and downstream stakeholders. Using MODSIM, we estimated the amounts of water supplied historically to the two competing provincial water consumers over a long period and treated these as the first-priority water demands; demands in excess of that part, related to more recently established water resource development projects, were given a lower priority. The SPSO objective function drives the search toward the solutions that best meet the two objectives of minimizing the capacities of new dam construction projects and the shortages in supplying water to first-priority demands, while the one-period, minimum-cost objective functions of the MODSIM NFPs guide the solutions to deliver any excess water to second-priority water demands.

Because of the high computational cost of the SPSO-MODSIM model, we subsequently considered the adaptive sequentially space-filling (ASSF) meta-modeling approach, in which MODSIM was replaced by four types of meta-models: ANN, SVM, kriging and polynomial response functions. To test how well these approximation techniques perform, their performances were first compared on two benchmark function optimization problems. The meta-models were then evaluated by solving the real-world Atrak river basin water allocation optimization problem in Iran. The results of the ASSF approach with the different meta-models were compared with those of the exact SPSO-MODSIM model to test whether the proposed meta-models can replace the computationally intensive SPSO-MODSIM model.
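The core surrogate-assisted loop behind this kind of approach can be sketched as follows. This is a deliberately simplified illustration under our own assumptions, not the authors' ASSF implementation: `fit_metamodel` stands in for training any of the ANN/SVM/kriging/polynomial surrogates, and `optimize_on` stands in for the SPSO search over the cheap approximation:

```python
def surrogate_optimize(expensive_f, sample_x, fit_metamodel, optimize_on,
                       n_rounds=10):
    """Sketch of adaptive surrogate-assisted optimization: fit a meta-model
    to the evaluated samples, optimize on the cheap surrogate, verify the
    surrogate's proposal with the expensive original function, add that
    point to the training set, and refit. Each round thus concentrates new
    samples in the currently promising region of the search space."""
    X = list(sample_x)                      # initial design points
    y = [expensive_f(x) for x in X]         # costly original evaluations
    for _ in range(n_rounds):
        surrogate = fit_metamodel(X, y)     # train meta-model on all samples
        x_new = optimize_on(surrogate)      # search the cheap approximation
        y_new = expensive_f(x_new)          # verify with the original model
        X.append(x_new)                     # enrich the training set near
        y.append(y_new)                     # the promising region and refit
    best = min(range(len(y)), key=y.__getitem__)
    return X[best], y[best]
```

The NFE savings reported in Table 4 come from exactly this substitution: only one original (MODSIM-equivalent) evaluation per round, with the many objective evaluations inside the swarm search redirected to the surrogate.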

In the benchmark problems, all the meta-models achieved accurate solutions while saving function evaluations (NFE) for both the Ackley and Dejong benchmark functions. The exact global optimum was found by all the meta-models for the 2D problems; however, for the 8D problems, kriging was only able to locate a good solution, not the global optimum. Additionally, the polynomial and SVM meta-models performed better than the others by locating the global optimum of the Dejong functions (2D and 8D) with fewer NFEs, whereas for the more complex Ackley functions (2D and 8D), the SVM and ANN meta-models were the best.

For the Atrak river basin problem, all the meta-models were able to locate the best solution reached by the exact, computationally intensive SPSO-MODSIM model with significantly fewer NFEs. This is mainly because the ASSF approach sequentially improves the performance of any type of meta-model in the course of optimization by adaptively locating the promising regions of the search space where more samples need to be generated for meta-model training. However, ANN and SVM performed better than the other meta-models in saving costly, original function evaluations; their performances were almost the same, with SVM slightly better.

The results show that simpler meta-models such as the polynomial, which does not require a sophisticated training algorithm, can perform well enough for less complex functions. However, meta-models with more specialized training algorithms, such as ANN and SVM, are needed for more complex problems with more decision variables and highly non-linear response functions. Moreover, ANN requires a large number of design points for training even when learning a relatively simple function (Yan & Minsker 2006; Zou *et al.* 2007). Therefore, the adaptively updating mechanism of the ASSF approach is less significant for simple, low-dimensional function optimization problems. That is why ANN needed about 62% and 22% of the exact NFE for the Dejong-2D and Dejong-8D functions, respectively, compared with 30.6% and 13.4% for the polynomial meta-model.

Another point to bear in mind is that ANN meta-models require the selection of appropriate network structures with a large number of parameters (weights and biases) to be adjusted (Maier & Dandy 2000), which demands more expertise. SVM and kriging also have regularization parameters with considerable impact on their performance that must be set by the user prior to the training process. It is worth noting that one reason for kriging's performance not being as good as that of the other meta-models, as mentioned by Razavi *et al.* (2012a), could be that the DACE toolbox involves a global search method that is not efficient at improving the solution in the main region of attraction, and mostly focuses on surpassing the misleading valleys over the non-informative regions. Moreover, for the Ackley-8D problem, the polynomial meta-model failed to reach the optimum point due to the limited RAM of the PC used. This difficulty can, however, be addressed with PCs of higher RAM capacity, although a larger NFE would be required.

Finally, more research is needed to explore the potential advantages of using other sampling techniques, such as Latin hypercube, and other meta-models including RBF.

## ACKNOWLEDGEMENTS

The authors would like to thank Iran Water Resources Center of Water Research Institute for providing the data of the case study and M. Shourian for his help regarding implementation of the ASSF approach.