This article presents a new methodology for waste load allocation (WLA) in a riverine system that considers uncertainty and seeks the lowest inequity index, cost, and fuzzy risk of standard violation. To find a surface of undominated solutions, a modified version of the PAWN method, originally designed for sensitivity analysis, was developed and coupled with a simulation-optimization process using the multi-objective particle swarm optimization (MOPSO) algorithm, so that the uncertainty of all influential variables and parameters is captured through their probability distributions. The proposed methodology was applied to the Sefidrood River in northern Iran. The graph model for conflict resolution (GMCR), a subset of game theory, was implemented to reach a compromise on WLA among the stakeholders of the river's water quality in Iran: the Department of Environment, Municipal Waste Water, and the Private Sector. Several undominated solutions were used in the GMCR model, and modeling the conflict among decision makers revealed that their preferences and the status quo do not lead to a single stable equilibrium; thus, the intervention of a ruler as arbitrator led them to a compromise on a scenario with a median FRVS and cost. Sensitivity analysis was carried out using the PAWN method to assess the sensitivity of the three intended objectives to all variables and parameters.
WLA in a river with three decision makers was performed by minimizing inequity, cost, and fuzzy risk of standard violation.
The optimum solutions were found using a stochastic simulation-optimization model.
The sensitivity of the objective functions' outputs to the inputs was determined using the PAWN method.
Game theory and the GMCR+ model were used to discover the option agreed upon by the parties.
Cost and standard violation are most sensitive to the upstream flow, and inequity is most sensitive to the dischargers' BOD.
Increasing human demand for water has made this vital element noteworthy, and the rights to use it have become more contested. Using water bodies, especially rivers, for industrial purposes such as cooling plants and assimilating wastewater is one of the consequences of the industrialization era. The dwindling ecological capacity of rivers to treat wastewater, population growth (which produces more wastewater), and the boom of industries located along rivers have reduced the quality of river water. This reduction is due to the drainage of wastewater from riverside dischargers into it. The need to maintain water quality in compliance with the approved standards, while preserving the river's capability to receive the maximum flow of industrial wastewater, has made allocating wastewater-discharge shares to the dischargers a controversial and problematic issue. In other words, waste load allocation (WLA) in rivers refers to determining the required pollutant treatment levels for a group of pollution dischargers to ensure that water quality standards are maintained along the river (Ghosh & Mujumdar 2010). Determining the share of each discharger (industries, municipal wastewater treatment facilities, or non-point sources along the river border) can be accomplished using various methods. Optimization (Yandamuri et al. 2006; Cho & Lee 2014), water quality trading (Breetz et al. 2004; Sarang et al. 2008), and game theory (Niksokhan et al. 2009; Nikoo et al. 2012) are among the most common.
Application of game theory to water quality management issues is preferred to conventional decision analysis methods for two reasons. First, unlike multi-criteria decision making (MCDM), this approach addresses decision-maker behavior and finds solutions that will not be refused by them. Second, unlike MCDM, it is capable of expressing and reflecting the various characteristics of water resource problems, such as engineering, socio-economic, and political issues (Mirchi et al. 2010).
Optimal waste load allocation implies that the treatment levels not only maintain water quality at the standard level but also yield the best value of the objective function defined for the water quality management problem in the riverine system (Burn & Lence 1992). Chadderton et al. (1981) introduced and discussed various basic theoretical approaches to apportioning waste load and put forward 'percentage of equal treatments' as the best option among those 20 methods. Different scholars with different points of view have tried to allocate waste load among dischargers using various methods and tools: Hathhorn & Tung (1989), Burn & Lence (1992), Kerachian & Karamouz (2005), Saadatpour & Afshar (2007), Qin et al. (2009), Li et al. (2014), Allam et al. (2016), Mohan & Kumar (2016), Xu et al. (2017), Rafiee et al. (2017), and Afshar et al. (2018) are among the extensive range of them.
Burn & Yulianti (2001) presented a methodology containing three models, two of which are pairwise models (cost versus standard violation, cost versus equity), to allocate waste load in a riverine system. Feizi Ashtiani et al. (2015) studied an approach to meet environmental, economic, and equity objectives simultaneously; the waste loads were allocated according to a discharge permit market. They used two discrete graphs of cost versus violation and cost versus inequity to select four scenarios based on these two models' Pareto Fronts. Saberi & Niksokhan (2017) proposed a methodology using the decision support system (DSS) for the graph model for conflict resolution II (GMCR II), multi-criteria decision making (MCDM) analysis, and the multi-objective particle swarm optimization (MOPSO) algorithm, considering cost and standard violation, to allocate waste load among dischargers.
The absence of equity as a third objective in previous optimization models motivates the setup of a three-objective simulation-optimization model here. The difference between the current work and those mentioned above is that this work produces a surface of undominated solutions, where every point on that surface is optimal in terms of the intended objectives, whereas the others use two-axis diagrams in which the feasible space of answers is a curve. In addition, instead of the deterministic models used previously, this study applies a stochastic model that uses the distribution functions of all variables and parameters taking part in the simulation-optimization process.
In this article, a novel methodology is proposed to allocate waste load among dischargers located adjacent to a riverine system equitably and cost-effectively, while also encompassing the lowest level of fuzzy risk of violating the standard (FRVS) for biochemical oxygen demand (BOD). What is required is a precise allocation of removal percentages such that the three minimization objectives are attained in proper proportion, taking into account the uncertainty of the variables and parameters affecting the simulation model. Three different criteria (objectives) for allocating the waste load among three dischargers with conflicting preferences lead to the use of game theory instead of conventional optimization approaches.
A stochastic simulation-optimization process was run, applying the MOPSO optimization algorithm and a quality simulation derived from fate-and-transport equations, to find undominated solutions. In the last section, the GMCR+ model was utilized to find a stable equilibrium for the conflict over selecting a specific scenario among three decision makers: the Department of Environment (DOE), Municipal Waste Water (MWW), and the Private Sector (PS). The impossibility of attaining one specific equilibrium through conflict resolution prompted the entrance of the judiciary; this intervention led to one particular equilibrium out of the six scenarios selected from the simulation-optimization process. A sensitivity analysis (SA) by the PAWN method (Pianosi & Wagener 2015) was carried out to assess the outputs' sensitivity to the uncertainty of the input parameters and variables in the simulation process.
An integrated simulation-optimization that considers all goals simultaneously to find the Pareto Front had not been attempted before, since previous studies usually used two separate simulation-optimization processes, each considering two goals (Burn & Yulianti 2001; Feizi Ashtiani et al. 2015). In this article, an integrated model was set up that also considers the uncertainty of all parameters.
How to summarize the output series obtained from the stochastic simulation-optimization reliably and flexibly (whether the median, the average, or another statistic should be used) was one of the difficulties that led the authors to adopt a risk-based index, the FRVS. How to perform sensitivity analysis for the three intended objective functions was another challenge, which forced us to run a separate simulation-optimization process that uses the standard violation function as a substitute for the FRVS when assessing output sensitivity to the inputs. Given the existing decision makers and the status quo, the conflict did not lead to a single stable equilibrium; thus, an arbitrator intervened in the conflict resolution process and was forced to omit some of the available scenarios to achieve a unique stable equilibrium.
The current article includes four main sections: introduction, methodology, results, and conclusion. The introduction comprises a summary of works done in the field of WLA. The methodology section has eight subsections. The proposed model is the first subsection of this part; four different objective functions are presented and discussed. The optimization algorithm, the simulation-optimization model, and sensitivity analysis are the next subsections. At the end of this section, the PAWN method, which is used for sensitivity analysis, conflict resolution, and the case study are described in detail. The third section, results, covers the simulation-optimization, conflict resolution, and sensitivity analysis results. At the end of the article, a conclusion is presented.
In this section, the four objective functions, the optimization algorithm, uncertainty analysis, simulation-optimization, sensitivity analysis, the PAWN method, and conflict resolution are discussed; finally, the case study to which the methodology is applied is described. As depicted in Figure 1, the proposed WLA model is made up of three parts: simulation, optimization, and conflict resolution. In the first part, the parameters and variables used to solve the problem are defined. These are collected from official reports and previous studies. It is vital to determine an acceptable range for the variables through a literature review and field study. Although considering seasonal uncertainty and extreme events is noteworthy and would help achieve a cost-effective allocation, the lack of precise data series led to considering the worst situation that covers all plausible conditions. The worst situation for water quality issues is the lowest flow, defined by the 7Q10 method, combined with the highest temperature.
The river water quality simulation is first run in deterministic mode. To identify sensitive parameters and variables, a sensitivity analysis of the water quality simulation was carried out, and sensitive inputs were reassessed to ensure they lie in their correct, acceptable range. In the optimization part, a stochastic simulation-optimization is run first, and the optimal removal percentages are defined. The undominated surface (Pareto Front) of solutions is then depicted based on the three objective functions to be achieved. The generated Pareto Front presents the feasible space of the problem's answers, and each point selected on this surface has equal worth compared with the other points on it. In the next step, six scenarios are chosen from the 100 scenarios that generate the Pareto Front; each of these points contains a removal percentage for each discharger. Finally, a stochastic simulation model is run against the selected scenarios of removal percentages to define the value of standard violation for the chosen scenarios.
In the last section of the proposed methodology, a conflict resolution approach is used to select the scenario most acceptable to the stakeholders, who are called players in this method. After defining the players and their preferences based on the six selected scenarios, the impossible states and the status quo are set for the conflict resolution model. Stable equilibria are then identified by analyzing the game. If there is no stable equilibrium, a third party intervenes in the conflict as an arbitrator and imposes its preferences on the game to lead to a sole stable equilibrium and put an end to the conflict.
Three objectives have been considered to allocate waste load for this river in stochastic mode; in deterministic mode, one of these objectives is replaced by a fourth function for reasons presented in the following parts. (1) Minimizing the amount of violation of the national standard for contaminant concentration in water bodies (standard violation) (Equation (1)), (2) minimizing the total cost of wastewater treatment and standard-violation penalties (Equations (10)–(13)), and (3) minimizing the inequity index (Equation (14)) are used in the deterministic model; in the stochastic model, in addition to the latter two, (4) minimizing the FRVS (Equation (15)) is used instead of minimizing the violation of the national standard. The equations used in the simulation process can be found in the next sections.
Here L is the concentration of oxidizable matter, which here is BOD in mg/L (expressed in oxygen level); L0 is the initial concentration of oxidizable matter in mg/L at the upstream; kr is the total removal rate in 1/day, which is made up of the decomposition rate (kd) and the settling rate (ks); ka is the reaeration rate (1/day); os is the concentration of saturated DO (mg/L); o is the concentration of DO in the river (mg/L); D is the oxygen deficit (mg/L); D0 is the initial oxygen deficit (mg/L); x is the distance from the previous discharger (m); u is the contamination velocity (m/s); and the DO and BOD standards are those established by the Iran National Standards Organization (INSO) in mg/L. Since the standard requires BOD and DO levels to comply at the checkpoints simultaneously, the proposed model always checks whether the DO concentration at a checkpoint satisfies this requirement. Based on INSO, the DO concentration at checkpoints should be more than 5 mg/L.
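The BOD decay and oxygen-deficit relations described above follow the classic Streeter-Phelps formulation. A minimal sketch, assuming the standard form of those equations; the function name, parameter values, and saturated-DO value are illustrative, not taken from the article:

```python
import numpy as np

def streeter_phelps(L0, D0, kd, ks, ka, x, u):
    """Classic Streeter-Phelps profile downstream of a discharger.

    L0 : initial BOD at the mixing point (mg/L)
    D0 : initial oxygen deficit (mg/L)
    kd : decomposition rate (1/day), ks : settling rate (1/day)
    ka : reaeration rate (1/day)
    x  : distance downstream (m), u : flow velocity (m/s)
    """
    kr = kd + ks                        # total BOD removal rate (1/day)
    t = x / u / 86400.0                 # travel time in days
    L = L0 * np.exp(-kr * t)            # first-order BOD decay
    D = (kd * L0 / (ka - kr)) * (np.exp(-kr * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)          # oxygen-sag (deficit) equation
    return L, D

# Check the INSO requirement DO >= 5 mg/L at a checkpoint (illustrative values):
o_sat = 8.0                                         # assumed saturated DO (mg/L)
L, D = streeter_phelps(L0=20.0, D0=1.0, kd=0.30, ks=0.05, ka=0.60, x=5000.0, u=0.4)
do_ok = (o_sat - D) >= 5.0
```

With these numbers the deficit stays under 3 mg/L, so the checkpoint constraint holds; in the proposed model this check is repeated at every checkpoint for every candidate allocation.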
In the abovementioned equations, the first term is the cost paid for the removal-percentage treatment at the ith discharger, and the second is the penalty applied to that discharger for violating the standard at the checkpoint. Equation (11) is used to estimate the cost of municipal wastewater treatment, and Equation (12) gives the industries' cost of treating their wastewater. In these two equations, ci is the total payment for treating one cubic meter of wastewater in one year (y), in USD.
Fuzzy risk of violating standard
Risk assessment is used extensively in water resources management fields such as water shortage (Feng & Huang 2008; Buurman et al. 2020), supply chain (Schaefer et al. 2019), drought (Shahid & Behrawan 2008; Chou et al. 2019), climate change (Dessai & Hulme 2007; Ronco et al. 2017), quality management (Mujumdar & Sasikumar 2002; Pullan et al. 2016; Lin et al. 2018; Liu et al. 2018), and waste load allocation (Rehana & Mujumdar 2009).
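The article's FRVS is defined by Equation (15), which is not reproduced here. Purely as an illustration of a fuzzy risk computation, the sketch below assumes a linear membership function for the degree of standard violation and averages it over a Monte Carlo sample of simulated checkpoint concentrations; all names and numbers are hypothetical:

```python
import numpy as np

def violation_membership(c, c_std, c_max):
    """Fuzzy degree of standard violation: 0 at or below the standard,
    rising linearly to 1 at c_max (an assumed membership shape; the
    article's Equation (15) may differ)."""
    return np.clip((c - c_std) / (c_max - c_std), 0.0, 1.0)

def fuzzy_risk(concentrations, c_std, c_max):
    """Fuzzy risk of violating the standard: expected membership over a
    Monte Carlo sample of simulated BOD concentrations."""
    return float(np.mean(violation_membership(np.asarray(concentrations),
                                              c_std, c_max)))

# A handful of simulated checkpoint BOD values (mg/L); in the article this
# would be the 200 Monte Carlo outputs per scenario. Standard 30, full
# violation at 45 (both hypothetical).
risk = fuzzy_risk([25, 28, 31, 35, 40, 50], c_std=30.0, c_max=45.0)
```

A single number in [0, 1] like this is what lets the optimizer compare whole output distributions, which is the difficulty the FRVS was introduced to resolve.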
The use of evolutionary algorithms for multi-objective optimization problems has increased and has led to the development of various algorithms; NSGA-II (Deb et al. 2002), PAES (Knowles & Corne 2000), microGA (Coello & Pulido 2001), and MOPSO (Coello et al. 2004) are some of these evolutionary multi-objective optimization algorithms.
Particle swarm optimization (PSO) is a single-objective optimization algorithm inspired by the choreography of a bird flock. It has been extended to multi-objective problems and used in a wide range of applications such as multipurpose multi-reservoir operation (Fallah-Mehdipour et al. 2011), the study of ships' principal parameters (Lei 2011), chemical engineering (Taha et al. 2011), and textiles (Taheri et al. 2013). It is essential to couple an optimization algorithm with the simulation model to minimize the total cost of the system as well as the other two objectives. The MOPSO model was used since it is easy to implement and use, and its efficiency has been proven in studies carried out in different fields (Coello et al. 2004; Goudos & Sahalos 2006; Durillo et al. 2009). The structure and principles of the MOPSO algorithm can be found in Coello et al. (2004).
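The core of MOPSO's repository logic is the Pareto-dominance test used to keep only undominated solutions. A minimal sketch for three minimization objectives; the function names and sample objective vectors are illustrative:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only non-dominated solutions, as the MOPSO repository does."""
    if any(dominates(a, candidate) for a in archive):
        return archive                          # candidate is dominated: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)                   # candidate joins the Pareto set
    return archive

# Three objectives per solution: (cost, inequity index, FRVS), all minimized
archive = []
for sol in [(5.0, 0.3, 0.10), (4.0, 0.4, 0.08), (6.0, 0.5, 0.20)]:
    archive = update_archive(archive, sol)
# (6.0, 0.5, 0.20) is dominated by (5.0, 0.3, 0.10) and never enters the archive
```

The first two vectors trade off against each other, so both survive; this is exactly why the methodology ends with a surface of 100 equally worthy points rather than a single optimum.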
Outcomes or events which cannot be predicted with certainty are often called risky or uncertain. The term ‘risk’ is usually reserved to describe situations where probabilities are available to explain the likelihood of various events or outcomes. If the probabilities of multiple events or outcomes cannot be quantified, or if the events themselves are unpredictable, some would say the problem is one of uncertainty, and not of risk (Loucks et al. 2005). Uncertainty and associated terms such as error, risk, and ignorance are defined and interpreted by different authors, see Walker et al. (2003) for a review.
Refsgaard et al. (2007) adopted a subjective interpretation of uncertainty based on Klauer & Brown's (2004) definition in which the degree of confidence that a decision maker has about possible outcomes and/or probabilities of these outcomes is the central focus. Thus, according to the definition, a person is uncertain if s/he lacks confidence about the specific outcomes of an event. Reasons for this lack of confidence may include a judgment of the information as incomplete, blurred, inaccurate, unreliable, inconclusive, or potentially false. Similarly, a person is certain if s/he is confident about the outcomes of an event. It is possible that a person feels certain but has misjudged the information (i.e. his/her judgment is wrong).
Loucks et al. (2005) described two important classifications of uncertainty. The first divides uncertainty into knowledge uncertainty, decision uncertainty, and natural variability. In the second classification, they defined the categories as follows:
Informational uncertainties, which are made of imprecision in specifying the boundary and initial conditions as well as imprecision in measuring observed output variable values.
Model uncertainties, which are made of the uncertain model structure and parameter values, as well as the variability of observed input and output values over a region smaller than the spatial scale of the model. The variability of observed model input and output values within a period shorter than the model's time scale, and errors in linking models of different spatial and temporal scales, also constitute model uncertainties.
Numerical errors or errors in the model solution algorithm are specific sources of uncertainty.
While uncertainty analysis examines the lack of knowledge about the real values of model parameters, sensitivity analysis (SA) attempts to distinguish the ordinal or cardinal ranking of the variables' effects on the output functions. Uncertainty can sometimes be reduced through further study and by collecting additional data. Uncertainty analysis tries to determine the characteristics of the distributions of the model's inputs and outputs, as well as of functions of those random output variables that serve as performance indicators or measures. Many tools and methods have been developed to take uncertainty into account. Refsgaard et al. (2007) listed 14 different methodologies and tools suitable for supporting uncertainty assessment, with guidance on their applicability. The Data Uncertainty Engine (DUE) (Refsgaard et al. 2005), Error Propagation Equations (e.g. Mandel (1984)), Expert Elicitation (e.g. Spetzler & Stael von Holstein (1975)), Inverse Modelling (parameter estimation, freeware like PEST (Doherty 2003) and UCODE (Poeter & Hill 1998)), Monte Carlo Analysis (free software packages like @risk (Corporation 2009) and SimLab (Saltelli et al. 2004)), NUSAP (Funtowicz & Ravetz 1990; Van der Sluijs et al. 2004), and the Uncertainty Matrix (Janssen et al. 2003; Walker et al. 2003) are among these methodologies and tools.
This article used the SAFE Toolbox (Pianosi et al. 2015) and its embedded methods for sensitivity analysis to take uncertainty modeling into account. Since the first phase of both sensitivity and uncertainty analysis is generating a large number of inputs from their acceptable ranges, and the second phase is evaluating the generated inputs against a fitness (objective) function, the two initial steps of the PAWN method's code were used to generate input variables and parameters within their acceptable ranges and evaluate them. In the final phase, the outputs computed through the objective functions in the previous step are used to generate the three objective functions' distributions. The averages of the cost and inequity-index distributions, as well as the FRVS as a single value, are assessed in the optimization process.
Based on the suggestion by Nikoo et al. (2012) to take uncertainty into account in the simulation model, the flow (Q), BOD concentration, and temperature of the upstream, tributaries, and dischargers, as well as ka and kr (76 parameters and variables in all), are considered.
A set of 100 random series is generated, i.e., 100 different vectors of removal fractions (x1, x2, …, x12), one per discharger, each element in the range 0–1. A set of 76 input parameters and variables is then generated randomly from their acceptable ranges to calculate the three output values: cost, inequity index, and standard violation. Therefore, in the initial phase there are 100 undominated solutions in the repository, each of which represents one series (x1, x2, …, x12) and has three values for the three abovementioned objective functions.
In the second phase, a new set of 100 series of x is generated randomly by the MOPSO algorithm and evaluated against the objective functions. If any of the new series dominates one of the previous 100 series, it substitutes the old, dominated one. To account for the uncertainty of the variables and parameters in the simulation-optimization process, for each set of 100 different x, 200 series of input variables and parameters are generated by the Latin-Hypercube method, based on a normal distribution over their allowable space. Thus, for the 200 sets of inputs generated by Latin-Hypercube and the 100 sets of x generated by MOPSO in the previous step, the simulation gives three sets of 200 outputs for each x series; so for the 100 different x there are three distributions, each a set of 200 numbers.
These 200 numbers lead to a probability distribution for each of the three objectives. To compare each new set of 100 x with the previous set in the repository, MOPSO compares the averages of cost and inequity index and the FRVS of the 100 new x with those of the prior set in the repository. This process is repeated 500 times to achieve 100 different sets of undominated x.
Eventually, at the end of 100,000 runs of the simulation-optimization process, an undominated surface is generated, made of 100 sets of undominated x. Each of these points has a coordinate of three values: cost, inequity index, and FRVS; the first two are the averages of distributions of 200 values each. It is essential to note that each of these 100 points on the undominated surface, which is a Pareto Front, has the same worthiness, and all of them are optimal with respect to the three objective functions.
One may use MCDM to select some of these optimal values, but the authors use game theory to choose one distinct point, which yields one set of x; the twelve dischargers are then required to treat their wastewater up to those levels before discharging into the river.
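The Latin-Hypercube sampling step used above can be sketched as follows, assuming uniform marginals for simplicity; the article samples from normal distributions, which would additionally require mapping the stratified uniforms through the inverse normal CDF, and the bounds shown are hypothetical:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin-Hypercube sample: each input's range is split into n_samples
    equal strata and exactly one point is drawn per stratum, with the
    strata shuffled independently per input."""
    rng = np.random.default_rng(rng)
    n_inputs = len(bounds)
    # one stratified, independently shuffled column per input, in [0, 1)
    strata = np.tile(np.arange(n_samples), (n_inputs, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, n_inputs))) \
        / n_samples
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)

# e.g. upstream flow (m3/s), upstream BOD (mg/L), temperature (C);
# illustrative bounds, and 200 samples as in the article
samples = latin_hypercube(200, [(8.0, 20.0), (2.0, 10.0), (15.0, 30.0)], rng=1)
```

Stratification is what lets 200 samples cover a 76-dimensional input space far more evenly than plain random draws of the same size.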
Sensitivity analysis attempts to determine the changes in output values caused by changes in inputs. In other words, it aims to comprehend the extent of change in output related to change in the inputs, and it evaluates the importance of the uncertainty or inaccuracy of the model's inputs for the modeling or decision-making process. Sensitivity analysis can determine the change in a system's optimal efficiency with respect to changes in different parameters, and also how an optimal decision would change due to changes in resource constraints or desired outputs. Various techniques have been developed to determine the sensitivity of a model's outputs to a variety of input values. Most methods examine the effect of changing a single parameter or input variable while keeping the remaining parameters or inputs constant. Sensitivity analysis can also be extended to investigate the composite effects of multiple sources of error (Loucks et al. 2005).
Sensitivity analysis is made up of three fundamental phases, which can be summarized as: (1) generation of input samples across their acceptable ranges; (2) model evaluation for the sets of generated input values; and (3) post-processing of the input and output samples to estimate the sensitivity index (Pianosi & Wagener 2015).
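The three phases can be condensed into a small skeleton. In this sketch the post-processing phase uses a simple correlation-based index merely as a stand-in for the PAWN statistic, and the model and bounds are toy examples:

```python
import numpy as np

def sensitivity_workflow(model, bounds, n=1000, rng=None):
    """The three SA phases in miniature:
    (1) sample inputs from their ranges, (2) evaluate the model,
    (3) post-process into one sensitivity index per input."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((n, len(bounds))) * (hi - lo)      # phase 1: sampling
    Y = np.array([model(x) for x in X])                    # phase 2: evaluation
    # phase 3: |Pearson r| of each input against the output (a crude index)
    return np.array([abs(np.corrcoef(X[:, j], Y)[0, 1])
                     for j in range(len(bounds))])

# Toy model whose output is dominated by the first input
idx = sensitivity_workflow(lambda x: 10 * x[0] + x[1], [(0, 1), (0, 1)], rng=0)
```

Any density- or variance-based estimator slots into phase 3 without touching phases 1 and 2, which is why the article could reuse the first two steps of the PAWN code for uncertainty propagation as well.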
Factor prioritization (FP), factor fixing (FF) or screening, and factor mapping (FM) are the three primary purposes of sensitivity analysis. Ranking the model inputs by their contribution to output uncertainty is the aim of FP, determining the inputs that make no contribution to output uncertainty is the aim of FF, and discovering the regions of the input space that create specific output values is the aim of FM (Saltelli et al. 2008). FP and FF are carried out using the sensitivity index, a synthetic index that quantifies the relative contribution of each input to output uncertainty. Where a parameter's sensitivity index is zero, that parameter makes no contribution to output uncertainty; the larger the index, the more substantial the parameter's contribution to output sensitivity (Pianosi & Wagener 2015).
Four main categories can be distinguished for sensitivity analysis: (1) local and global; (2) mathematical, statistical, and graphical; (3) screening and refined; and (4) qualitative and quantitative. Song et al. (2015) compiled a summary table of these basic categories of sensitivity analysis methods, containing descriptions, characteristics, and application cases, as well as summary definitions of sensitivity analysis in different fields. In a local sensitivity analysis, the uncertainty caused by varying input values around a specific point, such as the average, is assessed. Global sensitivity analysis determines the effect of varying input values across the entire feasible space; the same distinction applies to the outputs, meaning either the entire feasible space or a specific region of an output can be investigated (Pianosi & Wagener 2015).
Global sensitivity analysis methods are usually recommended in hydrological modeling applications because they have certain advantages over local sensitivity analysis methods. The elementary effects test (EET, or Morris method; Morris 1991), regional sensitivity analysis (RSA; Spear & Hornberger 1980; Wagener & Kollat 2007), variance-based sensitivity analysis (VBSA, or Sobol' method; e.g. Saltelli et al. 2008), the Fourier amplitude sensitivity test (FAST; Cukier et al. 1973), dynamic identifiability analysis (DYNIA; Wagener et al. 2003), and a novel density-based sensitivity method, PAWN (Pianosi & Wagener 2015), are embedded in the SAFE Toolbox (Pianosi et al. 2015).
A sound sensitivity index should be global, quantitative, model-independent, unconditional on any assumed input value, easy to interpret, easy to compute, stable, and moment-independent (Liu & Homma 2009). While variance-based sensitivity indices satisfy all of these features except the last, the PAWN method meets them all (Pianosi & Wagener 2015).
In this article, the authors use the PAWN method (named after Pianosi and Wagener) to perform sensitivity analysis. PAWN is a global sensitivity analysis method whose central idea is to characterize output distributions by their cumulative distribution functions (CDFs), which are easier to derive than probability density functions (PDFs). The method uses a sensitivity index to quantify the outputs' sensitivity to the inputs' variability. In the first phase, the SAFE Toolbox and PAWN can use different methods, such as Latin-Hypercube (McKay et al. 1979) and one-factor-at-a-time sampling, to generate random input series. PAWN builds an N*M matrix of input values within their acceptable space, containing N different values for each of M model inputs. In the second phase, the series of N values for the M input parameters and variables enter the simulation model to calculate a series of N values for the outputs Y. In the final phase, the model's output series are assessed by a sensitivity analysis method such as Morris, Sobol', or PAWN to analyze the sensitivity (Pianosi et al. 2015).
PAWN measures the distance between the conditional and unconditional CDFs of the output for the specific input whose sensitivity index is being estimated. To obtain the unconditional CDF of the output y, all parameters and variables are sampled stochastically, so that every value within the allowable input space has a chance of being selected. To create a conditional CDF, the parameter x whose influence on the output is being measured is fixed at a specific value, while the selection of the other parameters and variables remains stochastic. Since one of the fundamental properties a 'good' global sensitivity index should satisfy is being unconditional on any assumed input value (Liu & Homma 2009), PAWN uses n different values of x to estimate n conditional CDFs and address this issue (Pianosi & Wagener 2015).
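A simplified sketch of this index: fix input x_j at several conditioning values, rebuild the conditional output CDF each time, and take the maximum Kolmogorov-Smirnov distance from the unconditional CDF. Pianosi & Wagener also consider statistics such as the median over conditioning values, and the brute-force sampling here is illustrative rather than the SAFE Toolbox implementation:

```python
import numpy as np

def pawn_index(model, bounds, j, n=2000, n_cond=10, rng=None):
    """PAWN-style index for input j: max KS distance between the
    unconditional output CDF and CDFs conditional on x_j being fixed."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((n, len(bounds))) * (hi - lo)
    y_u = np.sort([model(x) for x in X])               # unconditional sample
    ks_max = 0.0
    for xj in np.linspace(lo[j], hi[j], n_cond):       # n_cond conditioning values
        Xc = lo + rng.random((n, len(bounds))) * (hi - lo)
        Xc[:, j] = xj                                  # fix input j only
        y_c = np.sort([model(x) for x in Xc])
        grid = np.union1d(y_u, y_c)
        F_u = np.searchsorted(y_u, grid, side='right') / n   # empirical CDFs
        F_c = np.searchsorted(y_c, grid, side='right') / n
        ks_max = max(ks_max, np.max(np.abs(F_u - F_c)))
    return ks_max

# A dummy model where x0 matters and x1 does not:
s0 = pawn_index(lambda x: x[0] ** 2, [(0, 1), (0, 1)], j=0, rng=0)
s1 = pawn_index(lambda x: x[0] ** 2, [(0, 1), (0, 1)], j=1, rng=0)
```

Fixing an influential input collapses the conditional CDF away from the unconditional one (large KS distance), while fixing a non-influential input leaves the CDFs nearly identical, so its index stays near zero.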
The existence of disagreement among stakeholders has led to the development of methods to address conflicts in different fields such as social science, systems engineering, psychology, and operations research. A mathematical approach to modeling the behavior of decision makers (DMs), here called players, in situations where the preferences and decisions of one player may affect those of the others, is known as 'game theory'. Although each decision maker has some power over the outcome, no single decision maker has full control (Hipel & Fang 2005). Rationality is the fundamental assumption of this method: the players making decisions are intelligent, so while pursuing their distinct objectives they understand and take into account the other decision makers' rationality and tune their preferences accordingly (Parrachino 2006).
According to the scope of problems, a variety of game theory methods have been developed and extended. Madani & Hipel (2011) depicted two general categories of games and their subcategories: (1) quantitative procedures, which include normal form, extensive form, and cooperative game theory; (2) nonquantitative approaches which include metagame analysis and this, in turn, includes conflict analysis and drama theory.
Conflict resolution, a subset of non-quantitative approaches, has a supportive history of methods such as non-cooperative game theory (Von Neumann & Morgenstern 1947), metagame analysis (Howard 1971), conflict analysis (Fraser & Hipel 1984), drama theory (Howard 1994a, 1994b), the theory of moves (Brams 1994), the theory of fuzzy moves (Kandel et al. 1998; Li et al. 2001), hypergame analysis, and the graph model for conflict resolution (Kilgour et al. 1987).
Conflict resolution has been used in different fields such as tribal issues (Pereira & Athparia 2017), air traffic management (Hwang et al. 2007; Hou et al. 2017), automated guided vehicle (Reveliotis 2000), public–private partnerships (Osei-Kyei et al. 2019) and so on. In the case of water resources management, this method was used in both interstate (Eleftheriadou & Mylopoulos 2008; Mianabadi et al. 2015; He et al. 2018; Tayia 2019) and intrastate disputes (Amakali 2005; Schlager & Heikkila 2014), specifically for waste allocation (Karamouz et al. 2006).
Kinsara et al. (2015) introduced the fundamental steps of GMCR as modeling and analysis (Figure 4). The modeling step aims to determine the DMs, all possible options, the feasible states, the allowable state transitions, and the relative preferences. The analysis step, in turn, ascertains the possible equilibria, or outcomes of the conflict, under a common stability definition. The differences among stability definitions reflect differences in human behavior in dealing with strategic risk and in insight into the future. Four common stability definitions are Nash stability (R) (Nash 1950, 1951), general meta-rationality (GMR), symmetric meta-rationality (SMR) (Howard 1971), and sequential stability (SEQ) (Fraser & Hipel 1984). Three models have been developed for GMCR: GMCR I, GMCR II, and GMCR+. Although GMCR I (Hipel et al. 1990) lacks a user interface, it implements conflict modeling and analysis. GMCR II added an interface and allowed the results to be interpreted and analyzed. With the introduction of the inverse graph model for conflict resolution concept, the GMCR+ model was presented by Kinsara et al. (2015).
To analyze a conflict in the GMCR+ model, the infeasible states must be omitted, the preferences prioritized, the direct ranking performed, and the reachability matrix created. Preference prioritization produces a list of each player's (DM's) preferences based on its payoffs from the corresponding states. GMCR+ weights each DM's preferred states and also offers manual prioritization, in which users can sort a DM's preferred states by hand (Kinsara et al. 2015).
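These modeling ingredients can be illustrated with a minimal sketch, assuming a hypothetical two-player, three-option conflict (the structure and names are illustrative, not GMCR+ internals): each DM's unilateral moves populate one row of the reachability matrix, and direct ranking sorts the states by that DM's payoff.

```python
from itertools import product

# Hypothetical mini-conflict: two DMs, each picking one of three scenarios.
# A state is the tuple of both DMs' choices.
states = list(product(range(3), repeat=2))

def reachable(state, dm):
    """Unilateral moves: states the DM reaches by changing only its own
    choice (one row of the reachability matrix)."""
    moves = []
    for option in range(3):
        if option != state[dm]:
            nxt = list(state)
            nxt[dm] = option
            moves.append(tuple(nxt))
    return moves

def rank_states(payoff, dm):
    """Direct ranking: order all states from most to least preferred for a
    DM, based on its payoff in each state."""
    return sorted(states, key=lambda s: payoff[dm][s], reverse=True)
```

With toy payoffs in which DM 0's payoff equals its own choice index, the ranking places every state in which DM 0 chooses option 2 at the top.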
Sakamoto et al. (2005) showed that in cases where a stable equilibrium is not achieved, third-party intervention can alter the status of the game. A third party can play different roles: changing the preferences and priorities of other decision makers (coordinator), leading other players to exclude some of their preferences (arbitrator), and bringing the relevant parties to the negotiation table. Zanjanian et al. (2018) indicated that the entrance of the judiciary as an arbitrator in Iran could lead to a stable equilibrium.
The proposed methodology was applied to the Sefidrood river in Gilan province, northern Iran, to assess its applicability and efficiency. The river runs 114.95 km from the Sefidrood Dam outlet to its mouth at the Caspian Sea and is divided into nine reaches according to its hydraulic characteristics. Twelve dischargers, comprising industries and municipal wastewater treatment facilities, discharge into the river; in this study, each discharger is assumed to treat its wastewater with its own facilities. Figure 5 depicts the location of the Sefidrood and the three tributaries that join its mainstream: Tootakbon, Tarikrood, and Zilakirood.
This section provides the results of different processes and models that are used in the proposed methodology in three subsections. In the first subsection, simulation-optimization results are presented, then the results of sensitivity analysis are described, and ultimately, the final solution for the WLA is presented through a conflict resolution subsection.
One hundred undominated solutions, shown in Figure 6(a), were obtained through the stochastic simulation-optimization model, which used MOPSO as the evolutionary optimization algorithm. Figure 6(b)–6(d) display the undominated solutions in 2D views from different perspectives. Six scenarios (Table 1) were selected so as to be as widely distributed as possible along the three axes, which are the objective functions.
To obtain a comprehensive view of how the outputs of the objective functions vary, Figure 7 depicts their normalized values in the range of zero to one. In this chart, a decline in one objective does not necessarily lead to a decrease in another. For the S1 scenario, the inequity index is at its highest level, while the FRVS and cost values lie around the median of their allowable ranges. The cost function reaches its highest level in the S3 scenario, which nevertheless has the lowest FRVS in its acceptable range and a low inequity index. The S4 scenario, which represents a highly risky situation, has a low inequity index and a median cost. Reducing the FRVS, however, leads to an increase in the system's cost.
In the current study, all 76 parameters and variables used in the stochastic simulation-optimization process were assessed in the sensitivity analysis. A critical point is that instead of analyzing the FRVS, the authors used the standard violation as the objective function whose sensitivity was assessed; the reason relates to the different natures of these two functions, despite their similarity. The second phase of the sensitivity analysis generates a series of N values that can be used to build a CDF or PDF, and comparing the output CDFs or PDFs estimated from different inputs constitutes the sensitivity analysis.
In uncertainty analysis, a considerable number of inputs are entered into the model to obtain a large number of output values, which in turn are used to generate CDFs or PDFs of the model outputs. The definition of FRVS (Equation (15)) indicates that the final answer of this function results from assessing the occurrence probability of the different concentrations that could be measured at the checkpoint, and is therefore a single scalar value rather than a series of numbers: the model output's PDF must be used to calculate the fuzzy risk. In the second phase of the sensitivity analysis, however, a PDF or CDF of the output is required, which is not possible with a scalar. Hence, instead of the FRVS, which is the final, scalar outcome of the uncertainty analysis, a function must be used whose uncertainty-analysis outcome is a series of scalar values. For this reason, the standard violation function, which has the same type as p(c) in the FRVS (Equation (7)), was used. The results of the uncertainty analysis are depicted in Figures 8–10. In these figures, Q, T, and BOD represent the flow, temperature, and biochemical oxygen demand of the different flows discharged into the riverine system, which include the industries, the municipal wastewater treatment facilities, and the river's tributaries: the upstream of the Sefidrood, Tootakbon, Tarikrood, and Zilakirood. The subscripts of the above-mentioned parameters and variables refer to the number of each discharger; for instance, Q1 and Q12 are the discharges of the first and twelfth dischargers, respectively, which can be industrial or municipal wastewater dischargers. In these figures, u, ka, and kd are the stream velocity, reaeration rate, and deoxygenation rate in the river's nine reaches, respectively.
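To illustrate why the standard violation lends itself to this treatment, the sketch below draws Monte Carlo samples of flows and BOD concentrations, propagates them through a first-order BOD decay of the Streeter-Phelps type, and returns one scalar violation per draw, so that n draws together form an empirical output CDF. All numeric ranges, the travel time, and the 5 mg/L standard are assumed values for illustration, not the paper's data.

```python
import numpy as np

def bod_at_checkpoint(q_river, bod_river, q_waste, bod_waste, kd, travel_time):
    """BOD after complete mixing at the outfall and first-order decay
    (the BOD term of a Streeter-Phelps-type model)."""
    l0 = (q_river * bod_river + q_waste * bod_waste) / (q_river + q_waste)
    return l0 * np.exp(-kd * travel_time)

def violation_samples(n, standard=5.0, seed=0):
    """Monte Carlo sample of the standard-violation function: each input
    draw yields one scalar, so the n draws form an output CDF/PDF, which
    is what the PAWN phase needs (unlike the scalar FRVS)."""
    rng = np.random.default_rng(seed)
    q_river = rng.uniform(0.18, 0.55, n)      # assumed upstream flow (CMS)
    bod_river = rng.uniform(2.0, 6.0, n)      # assumed upstream BOD (mg/L)
    q_waste = rng.uniform(0.01, 0.05, n)      # assumed discharger flow (CMS)
    bod_waste = rng.uniform(50.0, 200.0, n)   # assumed discharger BOD (mg/L)
    kd = rng.uniform(0.2, 0.4, n)             # assumed deoxygenation rate (1/day)
    c = bod_at_checkpoint(q_river, bod_river, q_waste, bod_waste,
                          kd, travel_time=1.0)
    return np.maximum(c - standard, 0.0)      # exceedance above the standard
```

The resulting array can be sorted directly into an empirical CDF, which is the object PAWN compares across conditioning values.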
The sensitivity analysis results (Figure 8) show that the standard violation is most sensitive to the upstream flow (QUpstream), the BOD concentration of the ninth discharger, the BOD concentrations of Tarikrood and Zilakirood, and the Tarikrood flow, respectively. Owing to the direct effect of flow (Q) on the BOD concentration in the riverine system, this objective function has the highest sensitivity to the river's flow, which comprises the upstream, Tarikrood, Zilakirood, and Tootakbon tributary flows. The relatively high sensitivity of the standard violation to the concentration of the ninth discharger is due to this discharger's vicinity to the checkpoint and its higher Q in contrast to the other dischargers: at a constant BOD concentration, a higher discharger Q brings a larger waste load into the river.
In the case of the cost function, the results of sensitivity analysis (Figure 9) show that this function has the highest sensitivity to upstream flow (QUpstream), BOD concentration of Tarikrood tributary, BOD concentration of ninth discharger (BOD9), and its flow (Q9), respectively. The reason for this ordering is the same as the reasons mentioned above for standard violation.
Based on the sensitivity indices derived from the PAWN method, which are depicted in Figure 10, the inequity index function has the highest sensitivity to the ninth discharger's BOD concentration and flow (BOD9 and Q9) and the seventh discharger's flow (Q7), respectively. The ordering of the sensitivity indices indicates that, after these, the function is next most sensitive to the other dischargers' Q and BOD concentrations. It should be noted that the upstream discharge (QUpstream) has the smallest sensitivity index among all parameters and variables. Since the definition of inequity (Equation (14)) depends only on the amount of waste load and the removal percent, this ordering of the sensitivity indices is justified.
Because the sensitivity index (the average of the Ti index) is dimensionless, averaging it across the objectives reveals that the proposed methodology is most sensitive to the upstream flow (QUpstream), the ninth discharger's BOD concentration and flow (BOD9 and Q9), and the seventh discharger's flow (Q7). It is least sensitive to the reaeration rate in the second reach of the river. An overall view of the sensitivity analysis for all 76 inputs is displayed in Figure 11.
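Because the index is dimensionless, this averaging across the three objectives is a simple column mean. The sketch below uses hypothetical index values chosen only to mirror the reported ordering; they are not the study's numbers.

```python
import numpy as np

# Hypothetical, dimensionless sensitivity indices (rows: objectives,
# columns: inputs); the values are illustrative, not the study's results.
inputs = ["Q_upstream", "BOD9", "Q9", "Q7", "ka_2"]
T = np.array([
    [0.60, 0.35, 0.30, 0.20, 0.05],   # standard violation
    [0.55, 0.30, 0.28, 0.18, 0.04],   # cost
    [0.05, 0.50, 0.45, 0.40, 0.06],   # inequity index
])

# Averaging across objectives is meaningful because the index is dimensionless.
overall = T.mean(axis=0)
ranking = [inputs[i] for i in np.argsort(overall)[::-1]]
```

With these toy values, the overall ranking starts with Q_upstream and ends with the reaeration rate of the second reach, consistent with the ordering reported in the text.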
To give a comprehensive view of the variability of the output's CDF, Figures 12 and 13 show this variability with respect to changes in the velocity of the seventh reach of the river (u7) and the upstream flow (QUpstream), the least and the most effective inputs on the standard violation function, respectively. In these figures, the red line indicates the unconditional CDF of the output values, while the gray lines depict the conditional CDFs. The black-and-white spectrum on the right side of the graphs indicates the conditioning value: the velocity in the seventh reach in Figure 12 and the upstream flow (QUpstream) in Figure 13. Figure 13 shows that when the upstream flow takes its largest conditional value, Q = 0.551 CMS, the standard violation takes its smallest value; this CDF, the first gray CDF from the left side of the graph, shows the lowest variability of the standard-violation CDF. On the other hand, the smallest upstream discharge, Q = 0.1766 CMS, causes the largest violations of the specified standard; the CDF for this conditional output is drawn in black at the far end of the graph.
Two of the intended objective functions, cost and FRVS, conflict with each other, so that achieving better performance in one requires moving away from the best performance in the other. Reducing the FRVS requires a higher treatment percentage and hence increases the dischargers' costs; conversely, reducing the system's total cost leads to a higher FRVS.
Two different entities govern water management issues in Iran. The Ministry of Energy (MOE) is responsible for supplying water to all consumers as well as providing wastewater facilities. The Department of Environment (DOE) is responsible for protecting the environmental quality of water bodies. Regulations issued by the DOE are not mandatory for the MOE because of institutional problems, so there is a conflict between these parties over water quality. The Municipal Waste Water (MWW), as an affiliate of the MOE that focuses specifically on wastewater affairs, tries to reduce its costs of constructing and operating wastewater treatment facilities and thus decrease the cost of treating wastewater. Although the Iranian government is responsible for building and operating wastewater treatment facilities, the MOE and its affiliate, MWW, are its representatives.
On the other hand, the DOE considers only reducing the waste load discharged from the municipal and private wastewater treatment facilities into water bodies, so as to increase water quality, specifically at the checkpoints.
The third player in this conflict is the private sector (PS), which represents a coalition of industries surrounding the river owned by the non-governmental sector. This player aims to maximize its profit, or minimize its cost, and thus has specific preferences that may contrast with the other players'. Table 2 reveals the players' payoffs for the various scenarios obtained from the simulation-optimization section and summarized in Table 1. The differences among the scenarios' payoffs for each player reflect the differences in their preferences.
The players' preferences, provided in Table 3 based on their payoffs and interests, were introduced into the GMCR+ model. The DOE tries to reduce the FRVS, so its choices are sorted from the minimum amount of this objective up to the maximum. MWW and PS both try to reduce their costs, so they sort their preferences based on the cost they must pay under each scenario.
In the GMCR+ model, a player must decide about each option explicitly: Yes or No. The combination of scenarios (options) selected by the players creates a state. For instance, one state can be (S3, S5, S2), which means the DOE selects scenario S3, MWW selects S5, and PS chooses S2. The presence of three players, each answering Yes or No to six options, leads to 2^18 = 262,144 states in this conflict. Defining the impossible states reduced these to 216 feasible states. The impossible states are those in which a player does not decide about a specific option (null status), a player selects No for all of its options, or a player selects more than one option.
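The state counts can be checked directly: with three players each answering Yes/No to six options there are 2^18 raw combinations, and restricting each player to exactly one selection leaves 6^3 = 216 feasible states. A short sketch (player and option names follow the text):

```python
from itertools import product

players = ["DOE", "MWW", "PS"]
options = ["S1", "S2", "S3", "S4", "S5", "S6"]

# Raw states: each player answers Yes/No to each of its six options.
total_states = 2 ** (len(players) * len(options))      # 2**18

# Feasible states: each player says Yes to exactly one option, which rules
# out the null status, the all-No answers, and multiple selections.
feasible_states = list(product(options, repeat=len(players)))
```

Here `total_states` evaluates to 262,144 and `feasible_states` has 216 members.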
Analyzing the conflict's stability yields 125 stable equilibrium states, of which the status quo is one. In the status quo, which represents the worst situation of the river in terms of quality, the dischargers treat as little waste load as possible in order to pay the lowest treatment cost. It should be noted that GMCR+ finds stable equilibria solely on the basis of the equilibrium definitions and does not consider the status quo. After the stable equilibria are found, the status quo is selected among the states in the model, and the final stable equilibrium, the ultimate resolution of the conflict, is detected by tracing a tree graph. In the status quo of this conflict, MWW picks option S5, PS selects S2, and it is inevitable for the DOE to choose S4. The GMCR+ model numbered this state 64. Analysis of the graph extracted from the model (Figure 14) reveals that, from the current status quo, the conflict does not reach a single stable equilibrium.
In this figure, blue lines show the DOE's movements, green lines depict MWW's movements, and yellow lines indicate PS's movements. Since each player prefers to move to a state with a higher payoff for itself than the status quo, the DOE can change its selected state from state 64 to state 61, 62, 63, 65, or 66. MWW prefers to choose one of states 40, 46, 52, 58, or 70 to achieve a higher payoff, while PS's best payoffs lie in moving to one of states 28, 100, 136, 172, or 208. By changing its selection, each player motivates the two other players to improve their choices; thus, the current status quo, alongside the players' preferences, does not lead to a common selection, and a stable equilibrium is not achieved. For instance, PS realizes that state 58 is in its favor and thus selects it, but the two other players then try to choose other states with higher payoffs for themselves than state 58.
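This instability amounts to each player having at least one unilateral improvement from the status quo. A generic sketch of Nash (R) stability on a toy two-player game follows; the payoffs are illustrative, not the conflict's actual payoffs.

```python
def improvements(state, dm, reachable, prefers):
    """Unilateral improvements: reachable states the DM strictly prefers."""
    return [s for s in reachable(state, dm) if prefers(dm, s, state)]

def nash_stable(state, dms, reachable, prefers):
    """Nash (R) stability: no DM has any unilateral improvement."""
    return all(not improvements(state, dm, reachable, prefers) for dm in dms)

# Toy two-player coordination game (payoffs are illustrative only).
payoff = {(0, 0): (2, 2), (1, 1): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0)}

def reach(state, dm):
    nxt = list(state)
    nxt[dm] = 1 - nxt[dm]          # each DM can only flip its own choice
    return [tuple(nxt)]

def prefers(dm, a, b):
    return payoff[a][dm] > payoff[b][dm]
```

In this toy game, both matched states are Nash stable while the mismatched states are not; this mirrors how a conflict can admit many stable equilibria (125 here), with the one actually realized depending on the status quo and the players' moves.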
The intervention of a third party was assessed to determine whether it could resolve the conflict. Hence, the judiciary entered the conflict as an arbitrator that can forbid players from selecting specific options: S2 and S4, the options with the highest FRVS, and S1, the option with the highest inequity index.
Removing the three aforementioned options changes the status quo, and the selected options move to the nearest remaining ones. In this state, which represents the worst remaining situation and is numbered 23 by GMCR+, MWW still selects S5, the DOE's selection moves from S4 to S5, and PS selects S6.
Analyzing the new conflict resulting from the arbitrator's entrance shows that the possible stable equilibria decline to nine. Considering the new status quo, the GMCR+ outputs show that PS will move from S6 to S5; the two other players, realizing that moving from S5 to another option would not increase their payoffs, do not change their selections. The model numbers this state 14. Therefore, a single move by PS from state 23 to state 14 leads to a stable equilibrium, which is the final resolution (Figure 15). Table 4 shows the twelve dischargers' removal percentages at the stable equilibrium, which is the final WLA based on the predefined objective functions and the three decision makers' preferences.
S5 is thus the option that the arbitrator's entrance causes to be accepted by all parties to the conflict. This option entails a total cost of 1,298,112 USD per year, of which MWW pays 1,137,180 USD and PS pays 160,932 USD. Selecting it also yields an inequity value of 14.65 and an FRVS of 11.29.
Three objectives were considered simultaneously in this article to allocate waste load quotas among the dischargers located along a river. Minimizing the dischargers' cost of wastewater treatment, together with the penalty for violating the standard, was one of these objectives. The fuzzy risk of violating the standard (FRVS), representing the risk that the discharged wastewater causes the riverine system to infringe the predefined BOD level at the checkpoint, was the second. The inequity index function was considered to increase fairness among the dischargers, which comprise the Municipal Waste Water treatment facilities and the private-sector industries. The allowable ranges of all variables were taken into account to capture the variability and uncertainty that affect WLA. In the first phase, the MOPSO algorithm was coupled with the Streeter–Phelps equation in Matlab to find the surface of undominated solutions, the Pareto front. Six of the 100 undominated solutions were then selected from the Pareto front so as to be as widely distributed as possible over the feasible space. Although the solution procedure can perform impressively in theory, its limited applicability in real-world cases can lead to its rejection by decision makers. Therefore, given the successful implementation of game theory in different fields, the graph model for conflict resolution was utilized to find a stable equilibrium, or ultimate resolution, among the selected scenarios. Modeling the scenarios in the GMCR+ model indicates that the three decision makers (DOE, MWW, and PS) could not attain a single stable equilibrium unless an arbitrator intervened in the conflict. In this study, a sensitivity analysis of the intended functions was also carried out with the PAWN method. Its results indicate that the discharge of the upstream of the Sefidrood river, the case study, has the greatest influence on the sensitivity of the cost and the standard violation.
The results also show that the inequity index was not sensitive to the upstream discharge; rather, it was sensitive to the discharge and BOD level of one of the dischargers nearest to the checkpoint. Using qualitative approaches of game theory could compensate for the drawback of confining the players' selectable scenarios to a limited number of points on the Pareto front. Instead of the stochastic simulation-optimization developed in this article, a robust optimization algorithm could be used to deal with seasonal uncertainties and extreme events and to find robust solutions. Considering a wider range of pollutants, such as phosphorus and nitrogen, in the WLA, using each discharger's treatment cost function, and compelling the agriculture sector to take part in reducing the load discharged into the water body, thereby reducing the system's total cost, are among the other extensions for further development.