A practical methodology to perform global sensitivity analysis for 2D hydrodynamic computationally intensive simulations

Sensitivity analysis is a commonly used technique in hydrological modeling for different purposes, including identifying the influential parameters and ranking them. This paper proposes a simplified sensitivity analysis approach by applying the Taguchi design and the ANOVA technique to 2D hydrodynamic flood simulations, which are computationally intensive. This approach offers an effective and practical way to rank the influencing parameters, quantify the contribution of each parameter to the variability of the outputs, and investigate the possible interactions between the input parameters. A number of 2D flood simulations were carried out using the combinations proposed by the Taguchi method (L27 and L9 orthogonal arrays) to investigate the influence of four key input parameters, namely mesh size, runoff coefficient, roughness coefficient, and precipitation intensity. The results indicate that the methodology is adequate for sensitivity analysis and that the precipitation intensity is the dominant parameter. Furthermore, model calibration based on local variables (cross-sectional water level) can be inaccurate for simulating global variables (flooded area).


INTRODUCTION
In hydrologic modeling, the use of 2D hydrodynamic models is common in applications like floods or water quality assessment (e.g., Yu & Lane 2006; Park et al. 2014; Zischg et al. 2018). Those models, based on the numerical solution of the 2D shallow water equations (SWEs), use different types of input parameters with complex domain spaces (e.g., hydrological data, floodplain and channel geometry, initial and boundary conditions, and roughness). Most of these parameters cannot be measured directly and can only be inferred by calibration to observed system responses (Ghasemizade et al. 2017; Zadeh et al. 2017). As a result, input parameters are one of the main sources of uncertainty in flood simulation processes, and a considerable amount of literature has investigated the relative importance of the inputs of interest and their contributions to the numerical models' outputs.
In recent years, owing to advances in computational power and the accessibility of graphics processing units (GPUs), hydrodynamic models have become increasingly complex, aiming to represent inputs and processes closer to what happens in nature, such as spatially/temporally varying precipitation, high spatial resolution data that capture more detail, and infiltration/drainage processes. One example is the study conducted by Sauer & Ortlepp (2021), in which different rainfall scenarios were defined (time and space invariant, space invariant and time varying, and space and time varying) over a small area (12 km²), and the relative influence of hydraulic roughness coefficients and DTM resolution on the models' output was roughly assessed with a simple sensitivity analysis (SA). They concluded that rainfall data, as well as the spatial resolution of the digital elevation model, have a strong influence on the surface runoff dynamics in terms of water levels in space and time. The effect of different input parameters, including spatial resolution, rainfall inputs, and the parameters linked with land-use information such as roughness and drainage, was investigated in a hydrodynamic urban flood simulation by Xing et al. (2021) using a detailed SA for a very small catchment (0.59 km²). They reported that model sensitivity to the input parameters can vary depending on the choice of outputs. They found that the influence of the spatial resolution is more tightly related to the flood flow movements, whereas that of the rainfall inputs is more relevant to the flood water volume.
Most of the hydrodynamic simulations in the above-mentioned studies were performed in small areas or limited specific river reaches. Few studies were made at a larger catchment scale. Some examples are the study by Fernández-Pato et al. (2016), which investigated the influence of infiltration, and the studies by Bellos et al. (2020) and Zischg et al. (2018), which analyzed the effect of precipitation spatial patterns. According to the literature, little attention has been paid to the influence of parameters such as infiltration, runoff coefficient, spatially/temporally varying precipitation, and the possible interactions between them.
Besides the importance of input parameters in the modeling process, one of the key preliminary steps for the application of physically based distributed modeling based on the 2D SWEs is the generation of a computational mesh (Ferraro et al. 2020). In this context, many studies have investigated the effect of mesh structure, shape, and size on the accuracy of the simulated results and on computation time (Caviedes-Voullième et al. 2012; Gibson et al. 2016; Hu et al. 2018). Some of these studies reported that there is no optimal mesh type for 2D flood modeling, because each element type (e.g., triangle and quadrilateral) is advantageous under different circumstances (e.g., Kim et al. 2014), while others tried to address ways to generate the optimal computational mesh (acceptable degree of accuracy and computational cost) (e.g., Bomers et al. 2019; Ferraro et al. 2020). Most of these studies were performed at small scales, such as the flood plain scale or urban areas, while few studies have assessed the issue at the catchment scale with regard to possible interactions between land-related features (i.e., friction, infiltration, and runoff coefficient) (e.g., Caviedes-Voullième et al. 2012; Costabile & Costanzo 2021).
The quest for developing accurate and computationally efficient simulations prompts the need for understanding the models' behavior under different conditions, which can be offered by SA methods.
SA methods can be broadly classified into local and global methods (van Griensven et al. 2006; Campolongo et al. 2011; Tian 2013; Song et al. 2015). Local sensitivity measures, often referred to as 'one at a time' measures, assess how variation in one input affects the model output while keeping the other inputs fixed at a nominal value (Campolongo et al. 2011). The main drawback of local SA methods is that interactions among inputs cannot be detected (Campolongo et al. 2011), which can render unreliable results for complex and nonlinear systems (Yang 2011), where input factors usually interact with each other. In contrast, global SA methods provide a reliable approach to investigate the influence of inputs on outputs while all inputs are varied simultaneously over their entire domain space. Song et al. (2015) reviewed various types of global SA methods in the field of hydrological modeling and provided a framework for SAs in hydrological modeling. Pianosi et al. (2016) also provided a comprehensive review of existing SA methods, classified by end purpose, such as uncertainty assessment, model calibration, diagnostic evaluation, dominant control analysis, and robust decision-making. SA methods have been used in hydrodynamic modeling, mainly focusing on the uncertainties arising from model parameters and inputs.
Most of the studies mentioned above use global SA methods. The main drawback of global SA methods is that they are computationally intensive and time-consuming, requiring a very large number of simulations. Global SA is therefore often difficult to apply in engineering practice, and to overcome this the number of required simulations must be drastically reduced. In that path, the methods and expertise accumulated in the field of design of experiments (DoE) (dating back to Rao 1947) can be useful for developing a time- and cost-effective design for performing SA under various conditions. DoE is a statistical approach to reaction optimization that allows the variation of multiple factors simultaneously in order to screen the 'reaction space' for a particular process (Murray et al. 2016). Making an analogy between experiments and simulations, statistical DoE can be implemented as a method to effectively reduce the number of simulations (experiments) and explore the relationship between the input parameters (predictor variables) and the generated output (response variables), as well as various interactions that may exist between the input variables (Yuangyai & Nembhard 2010; Khang et al. 2017; Whitford et al. 2018). This approach can be classified as a global SA method, since several parameters are varied systematically and simultaneously to obtain sufficient information (Whitford et al. 2018). Taguchi (1986) developed a methodology for the application of designed experiments based on orthogonal design to reduce the number of experimental combinations. A large number of studies report the successful application of the Taguchi method in laboratory experimental practices, manufacturing, and design processes (e.g., Rao et al. 2004; Sadeghi et al. 2012; Hadi et al. 2017). However, no previous study has used the Taguchi method in hydrodynamic flood simulations.
The present study presents a global SA of flood simulations solving the 2D SWEs at the catchment scale (for a large catchment, 1,767 km²), involving various parameters at different levels, in which each simulation takes a long time (about 3 h using 8× NVIDIA Tesla P100 (16 GB) GPUs on a node with 2× Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20 GHz). Considering these features, the Taguchi method has been chosen. The main purpose of this paper is, therefore, to evaluate the application of the Taguchi method as a feasible engineering SA tool for 2D hydrodynamic simulations at the catchment scale and to identify the most influential parameters.

CASE STUDY
Tovdal river, located in Agder province (Norway), is selected as the case study. The computational domain is the part of the catchment upstream of the small lake named Flaksvann, just beside a town called Birkeland (Figure 1). In order to avoid outlet boundary interference, the domain was extended 11 km downstream of the lake. The length of the main river is approximately 130 km, and the catchment, with an area of about 1,767 km², is dominated by forests (about 74%). The elevation ranges from 10.00 to 872.34 m.a.s.l., and the average slope of the catchment along the river is about 0.65%. The mean annual precipitation is approximately 1,261 mm, with most of the rainfall occurring between October and March (about 60%). On 2 October 2017, an extreme flood event occurred in this area, the highest flood ever recorded in this river. Recorded data of the event (Table 1) (including recorded water level, discharge, flood hydrograph at the Flaksvann cross-section (CS), and flood maps) and a 2D hydrodynamic model were used to simulate the event and validate the methodology. For the calibration process, the weighted average of the precipitation intensities which occurred 1.5 days before the 2017 flood event was used as the reference precipitation (Table 1). Subsequently, by adjusting the friction factor and the runoff coefficient values for each land class in a trial-and-error process, calibration was performed such that the simulated water level and discharge resembled the event's recorded information (Table 1).

Hydrodynamic model
In the current study, a GPU-based 2D horizontal hydrodynamic model named HiSTAV was used to simulate floods. The model was originally proposed by Ferreira et al. (2009) and optimized by Conde et al. (2020). The core of the model is a hyperbolic system of partial differential equations expressing mass and momentum conservation principles for the flow. The model is closed in terms of flow resistance and a specific return-to-capacity parameter by a modified closure model. The SWEs are solved using a finite volume scheme applied on a spatial discretization using unstructured meshes. The discretization method allows body-fitted unstructured grids, which can optimize the computational workload by using large cells where flow gradients are expected to be mild and small cells in specified areas. Detailed information about HiSTAV and the model structure can be found in the study presented by Conde et al. (2020).

Simulation set-up and approach
In order to simulate the flow, the model solves the equations based on the following elements:

1. Input variables
2. Computational mesh
3. Initial and boundary conditions
The 2D model used here requires different data sets drawn from two main sources: (a) raster data structures including the topo-bathymetric dataset, Strickler roughness coefficient values, and runoff coefficient values, and (b) precipitation records including the spatial distribution of rainfall measurement stations.
Raster data structures and grids are used to present the catchment discretization and to describe spatially distributed terrain parameters (i.e., elevation, bathymetry, land use, etc.). A 10 × 10 m² DTM including the river bathymetry information is used to represent topographic features and derive hydrologic characteristics (i.e., slope, flow direction, flow accumulation, stream network, computational cascade for flow routing, etc.). Hydraulic roughness is inserted in the model in the form of a grid-structure raster file (100 × 100 m² resolution), in which each cell represents a Strickler roughness value. The spatial distribution of roughness values is determined based on 100 × 100 m² land cover maps (obtained from https://land.copernicus.eu/ - 'Corine Land Cover 2018, Version 20'). Subsequently, using typical Strickler roughness coefficient tables, the roughness values were assigned for each cell (Arcement & Schneider 1989; Dorn et al. 2014). Different land types are classified in Table 2, and possible roughness variation ranges are assigned for each class.
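The land-cover-to-roughness assignment described above amounts to a per-cell raster lookup. A minimal sketch follows; the class codes and Strickler values below are illustrative placeholders, not the values of Table 2:

```python
import numpy as np

# Hypothetical mapping from Corine land-cover class codes to Strickler
# roughness coefficients K_S (illustrative values only; real assignments
# come from standard roughness tables as described in the text).
STRICKLER_BY_CLASS = {
    311: 10.0,  # broad-leaved forest (rough)
    312: 12.0,  # coniferous forest
    231: 25.0,  # pastures
    512: 40.0,  # water bodies (smooth)
}

def roughness_raster(land_cover: np.ndarray, default: float = 20.0) -> np.ndarray:
    """Map each land-cover cell code to its Strickler coefficient."""
    ks = np.full(land_cover.shape, default, dtype=float)
    for code, value in STRICKLER_BY_CLASS.items():
        ks[land_cover == code] = value
    return ks
```

Cells with a class code missing from the table fall back to a default value, mirroring the use of a typical coefficient when no specific table entry applies.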
The runoff coefficient C is a dimensionless factor that is used by HiSTAV to convert the rainfall amounts to runoff (effective precipitation h_p = C · i_P, with i_P the precipitation intensity and C given by Equation (1)). In this study, we used the runoff coefficient as a parameter which reflects the lumped effect of several processes, such as the antecedent conditions of the catchment and infiltration, land cover effects, and spatial variability of the rainfall intensity. The concept of event runoff coefficients is widely used in hydrologic modeling (e.g., Merz et al. 2006; Viglione et al. 2009; Chen et al. 2019). The choice of runoff coefficient values was based on the average antecedent conditions for the catchment and land features. The values were then calibrated so that correspondence between the event's rainfall and the 2017 flood flow (level and velocity) was achieved. Similar to the hydraulic roughness values, the runoff coefficient is used in the form of raster data.
In this study, a combination of two different approaches to calculate the runoff coefficient was used. Firstly, by using land cover maps and recommended values of the runoff coefficient for different types of areas (Subramanya 2013), a range was determined for each cell. Secondly, since the HiSTAV model assumes a uniform spatial distribution of precipitation as input, the runoff coefficient was used as an artifact to introduce spatial variability of the precipitation. For this purpose, focusing on the flood of 2 October 2017, daily precipitation data were obtained from 37 stations (Figure 2). The rainfall observations at the stations were interpolated to delineate the spatial distribution of precipitation, using the inverse distance weighting interpolation method, giving precipitation patterns similar to the one presented by the Norwegian Meteorological Institute (http://www.senorge.no/). Thereafter, precipitation zones were delineated (Figure 2). The combination of the two mentioned approaches was used to calculate the final runoff coefficient in each cell, C, as follows:

C = C_initial · (i_P,cell / i_P,max)    (1)

where i_P,max is the maximum precipitation intensity among the recorded values, i_P,cell is the recorded precipitation intensity in each cell, and C_initial is the runoff coefficient with values displayed in Table 2 for each land cover class.
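Assuming the combination of the two approaches takes the multiplicative form C = C_initial · (i_P,cell / i_P,max), consistent with the variables defined above, a minimal per-cell sketch is:

```python
import numpy as np

def runoff_coefficient(c_initial, ip_cell, ip_max):
    """Final per-cell runoff coefficient: the land-cover-based C_initial
    scaled by the local share of the maximum recorded precipitation
    intensity (assumed multiplicative form of Equation (1))."""
    return np.asarray(c_initial, dtype=float) * (
        np.asarray(ip_cell, dtype=float) / ip_max
    )
```

Because the arguments can be full rasters, the same call converts a C_initial grid and an interpolated precipitation grid into the final C grid in one vectorized step.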
In order to define a reliable and consistent range covering different flood magnitudes, three different rainfall intensities, i_P, ranging from 3 to 9 mm/h, with a duration equal to 2 days, were considered for the simulations (Table 3). The range is obtained based on the average precipitation intensity that occurred in the flood event (6 mm/h) with ±50% bounds. This means that the precipitation is treated as time invariant (or as a block), which is a rather coarse and simplistic approximation of the real conditions. Nevertheless, since the main goal of this study is to check the feasibility of using a simplified method (the Taguchi method) for performing a global SA, we focus on reducing the number of inputs to a minimum while keeping input ranges that cover the full input-output space. This was needed because, for such a big catchment, the inclusion of more inputs (like synthetic hyetographs defined by several parameters) would require many simulations.

HiSTAV employs adaptable triangular meshes in order to discretize the catchment and balance acceptable computational workloads, different detail levels, and complex geometry fitting capabilities (Conde et al. 2015). In this study, following Godunov's finite volume approach (LeVeque 2002), a mesh of triangular cells is constructed in three sizes over the computational domain by Gmsh (Geuzaine & Remacle 2009), a free finite element grid generator with a built-in CAD engine and post-processor. Small cells (finer resolution) are assigned where flow gradients are expected to be large, such as the river channel; medium-sized cells (average resolution) cover the flood plain; and coarser cells cover the other parts of the catchment, where flow gradients are expected to be mild (Figure 3). In the post-processing step, the constructed meshes were checked for shape and size quality.
The initial conditions for the equations are zero water depth and zero discharge everywhere (dry surface conditions). Water enters the domain only through rainfall; hence there are no inlet boundaries. The only open boundary is at the outlet (downstream), where free outflow is assumed. Warm-up simulations were performed until the normal water level was reached in the river network. A 9-day precipitation with an intensity of 2 mm/h was found to be long enough to fill the river basin and the lakes and establish steady flow over the main water courses.

Taguchi method
Taguchi developed a system of tabulated designs (arrays) that allows the maximum number of main effects to be estimated in an unbiased manner while using only a minimum number of experimental tests (Taguchi 1986). He introduced his approach using experimental design with the aim of developing processes that are robust to component variation and that minimize variation around a target value (Ross & Ross 1988). The process of Taguchi analysis can be summarized into seven steps (Bao et al. 2013). The flow chart of the whole process is illustrated in Figure 4.
In this study, the local water level and the total flooded area are selected as objective parameters. To calculate these parameters, nine different CS are specified (Figure 5). Several different CS were chosen to cover the changes in the hydrodynamic behavior of the river, and the water level is observed at the time step equal to 34 h (this time step is selected based on the duration of the 2017 flood event). The CS were selected at different locations, such as areas of key importance, adjacent to bridge structures, adjacent to the flow measurement stations (level and velocity), and at locations that represent different geometries of the river. In addition, two of the CS were located on the lake, where the geometry and flow conditions differ from other parts of the river, to assess the flow behavior and water-level sensitivity.
The total flooded area was delineated as the subarea starting from the Birkeland city region and extending to the end of the computational domain, which represents around 11 km along the main river. The most important stage in the design of an experiment is the selection of the control factors. In this study, we selected the following four control factors: mesh size, hydraulic roughness parameter (K_S), runoff coefficient (C), and precipitation intensity (i_P). These factors constitute the data set that HiSTAV uses as input to simulate the flow. We assigned three values to each of these control factors, as shown in Table 3. The three levels reflect the possible variation range for each parameter.
In the full factorial design, all possible combinations for a given set of factors are considered (Fanchi 2005), which, depending on the number of factors and associated levels, can result in a large number of experiments. The Taguchi method uses an orthogonal design to reduce the number of combinations. Using orthogonal arrays allows us to collect the necessary information by testing specific combinations instead of examining all possible combinations. The orthogonal arrays have special properties: (i) all the levels appear an equal number of times (balancing property), (ii) all the levels of the parameters (factors) are used for conducting the experiments, and (iii) each factor column is mutually orthogonal to every other column (Kacker et al. 1991). Accordingly, the sequence of levels for conducting the experiments follows a standard order and cannot be changed. There are many standard orthogonal arrays available, depending on the number of parameters and the levels of variation for each parameter. Detailed information on the construction of orthogonal arrays can be found in the works presented by Plackett & Burman (1946) and Taguchi (1986).
The full combination of the experiments (full factorial design), considering four parameters with three levels, corresponds to 81 simulations. However, to effectively reduce the number of experiments, the Taguchi method proposes two possible orthogonal arrays, namely L27 and L9. The L27 consists of 27 sets of combinations and assesses the main effects and the interaction between the parameters, while the L9 consists of nine combinations, which only concentrates on main effects and ignores the interactions.
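For illustration, the standard L9(3^4) orthogonal array can be written down explicitly and translated into parameter combinations. The level values below are placeholders for illustration, not the actual Table 3 values:

```python
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels.
# Each level appears 3 times per column, and any two columns contain all
# 9 level pairs exactly once (balancing and orthogonality properties).
L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])

# Hypothetical factor levels (illustrative only, not the paper's Table 3).
levels = {
    "mesh": [10, 20, 40],     # characteristic cell size
    "C":    [0.5, 0.7, 0.9],  # runoff coefficient
    "K_S":  [10, 20, 30],     # Strickler roughness coefficient
    "i_P":  [3, 6, 9],        # precipitation intensity (mm/h)
}

def combinations(array, levels):
    """Translate an orthogonal array of 1-based level indices into
    concrete parameter sets, one per simulation run."""
    names = list(levels)
    return [
        {name: levels[name][row[j] - 1] for j, name in enumerate(names)}
        for row in array
    ]
```

With this design, only the 9 listed runs are simulated instead of the 81 of the full factorial, at the cost of not resolving interactions.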
Given the importance of the interactions between the parameters in SA studies, this study mainly focuses on the L27 array. However, the results for the L9 array are also briefly presented to assess the possibility of performing the SA with fewer simulations. The L9 and L27 orthogonal arrays (see the first five columns in Tables 4 and 5) were constructed using the aforementioned factors (Table 3) based on the Taguchi method (Kacker et al. 1991). Following the combinations constructed in the orthogonal tables, the simulations were then performed, and the identified objective outputs were calculated as the system's response.
The Taguchi design uses the signal-to-noise ratio (SNR) to measure the deviation of the response from the mean value (Wang et al. 2017). Based on the purpose of the experiment, SNRs can be classified into three types: smaller the better (minimize the response), larger the better (maximize the response), and nominal the best (based on the SNR of the standard deviation). The SNR for the larger the better type is calculated as:

SNR = -10 log10[(1/n) Σ_{i=1..n} (1/Y_i²)]    (2)

where Y_i represents the measured result in the ith experiment/simulation and n is the number of experiments/simulations. To investigate the conditions that result in the major flood, the goal is defined as larger the better; therefore, larger values for the water level and the flooded area are desirable, and the SNR is calculated by Equation (2). To analyze the effect of each factor on the outputs, the average of the SNR values is calculated for each factor j at level l as:

SNR_jl = (1/n_jl) Σ_{i: factor j at level l} SNR_i    (3)

where n_jl is the number of experiments that have factor j at level l and i represents the ith experiment with factor j at level l. The significant parameters were then identified as those producing the highest difference in the mean SNR and mean response (target output) values. To rank the effect of each parameter in the Taguchi analysis results, a parameter Δ_j is defined as the difference between the maximum and minimum mean SNR over all levels of factor j (see Figure 8 and the example in Figure 6(a)). Higher values of Δ_j indicate that the model has a higher sensitivity to factor j.
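The larger the better SNR, the level means, and the Δ_j ranking can be sketched as follows (a minimal illustration of the computations, not the study's implementation):

```python
import numpy as np

def snr_larger_the_better(y):
    """'Larger the better' signal-to-noise ratio over the n measured
    results y of one run: SNR = -10 log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def mean_snr_by_level(snr, level_column, n_levels=3):
    """Average the per-run SNR values over the runs where a given factor
    sits at each of its levels (levels are 1-based indices)."""
    snr = np.asarray(snr, dtype=float)
    level_column = np.asarray(level_column)
    return np.array([snr[level_column == l].mean()
                     for l in range(1, n_levels + 1)])

def delta(mean_snr):
    """Delta = max - min of the level means; a larger delta means the
    model is more sensitive to that factor."""
    return mean_snr.max() - mean_snr.min()
```

With a single simulation per run (n = 1), Equation (2) reduces to SNR = 20 log10(Y), so a single large response directly yields a large SNR.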

Analysis of variance
In addition to the SNR analysis, an analysis of variance (ANOVA) was performed to identify the most significant factors and their potential interactions. This method focuses on the analysis of the variance explained by each factor and is accomplished by estimating Fisher's test value (F-value). The impact of any factor is explained by its F-value and the corresponding sum of squares that represents its variance. A higher F-value and sum of squares for a factor indicate its relative importance in the process of the response (Karmakar et al. 2018). Moreover, to explore the impacts of individual factors, the percentage contribution (PC) of each factor is calculated as (Yang & Tarng 1998; Sadrzadeh & Mohammadi 2008):

PC_j = (SS_j / SS_T) × 100    (4)

where SS_T is the total sum of squares and SS_j is the sum of squared deviations for each factor j. Both can be calculated using the following equations, respectively:

SS_T = Σ_{i=1..n} (Y_i - Y̅)²    (5)

SS_j = Σ_l n_jl (Y̅_jl - Y̅)²    (6)

where n is the number of experiments/simulations, Y_i represents the measured result in the ith experiment/simulation, Y̅ is the average of the results, Y̅_jl is the mean of the results for factor j at level l, and n_jl is the number of experiments that have factor j at level l (Stahle & Wold 1989).
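The percentage contributions of Equations (4)-(6) can be computed directly from the simulation results and the matrix of factor levels. A minimal sketch (note that, without the interaction and error sums of squares, the main-factor PCs alone need not sum to 100%):

```python
import numpy as np

def percent_contribution(y, levels_matrix):
    """Percentage contribution PC_j = 100 * SS_j / SS_T for each factor,
    where SS_T is the total sum of squares of the responses y and SS_j
    sums n_jl * (mean at level l - grand mean)^2 over the levels of
    factor j (columns of levels_matrix)."""
    y = np.asarray(y, dtype=float)
    levels_matrix = np.asarray(levels_matrix)
    y_bar = y.mean()
    ss_t = np.sum((y - y_bar) ** 2)          # Equation (5)
    pcs = []
    for j in range(levels_matrix.shape[1]):
        col = levels_matrix[:, j]
        ss_j = sum(                           # Equation (6)
            (col == l).sum() * (y[col == l].mean() - y_bar) ** 2
            for l in np.unique(col)
        )
        pcs.append(100.0 * ss_j / ss_t)       # Equation (4)
    return np.array(pcs)
```

In the degenerate case where the response depends on only one factor, that factor's PC approaches 100% and the others approach 0%, which makes the ranking easy to check.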

RESULTS AND DISCUSSION
Following the combinations constructed in the Taguchi tables (Tables 4 and 5), 9 and 27 sets of simulations were performed. Two different output parameters, namely local water level and total flooded area, were considered as the system response to investigate the variation resulting from each combination and subsequently to explore the significance of each input parameter in the simulations. The calculated SNR values (based on Equation (2)) for the water level in each CS and the flooded area are presented in Tables 4 and 5.
The calculated standard deviation of the SNR values for each of the local responses (CS water level) ranges between 0.29 and 0.69 for L27 (see Table 5), while for the global response (flooded area) it is equal to 1.36. The coefficient of variation, defined as the ratio of the standard deviation to the mean, is calculated to measure the degree of variation in each of the result series (Tables 4 and 5). The higher coefficient of variation values for the flooded area indicate that it is much more sensitive than the local water level. This result is important because, in most applications, hydrodynamic models are calibrated based on local variables (e.g., water level) with some known inputs (e.g., precipitation intensity) and some particular combination of calibrated inputs (e.g., runoff coefficient, roughness coefficient, and mesh), which can correspond to quite a different flooded area if a different combination of calibrated inputs were used.
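The coefficient of variation used here is simply the standard deviation divided by the mean; as a one-line sketch:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = standard deviation / mean, used to compare the relative spread
    of the local (water level) and global (flooded area) SNR series."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=0) / values.mean()
```

Because the CV is dimensionless, it allows the spread of series with different units and magnitudes (water levels versus flooded areas) to be compared directly.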
From Table 5, it can be seen that the highest SNR results from combination 22 (1 3 1 3) for both the water level and the flooded area, corresponding to the fine mesh size, highest runoff coefficient, smallest roughness coefficient, and highest precipitation intensity. This result is not surprising, since the SNR is computed with the larger the better criterion, i.e., the highest water level or flooded area gives the highest SNR.
In order to quantify the significance of each parameter, the mean of the SNR for each factor at each level was calculated using Equation (3) and is presented in Figures 6 and 7.
The results are compared and ranked based on Δ parameter for the L9, the L27, and the full factorial design (L81) in Figure 8. Moreover, the PC of each factor obtained from ANOVA is presented for the mentioned designs in Figure 9 (further details of the Taguchi and ANOVA analyses results are presented in the Supplementary Material).
As shown in Figures 8 and 9, for both responses (water level (panel a) and flooded area (panel b)) and in all types of designs, the precipitation intensity is the most influential parameter, followed by the runoff coefficient, the roughness coefficient, and lastly the mesh. The only exception can be seen for CS1 and CS9 in the L9 design, in which the mesh size and roughness effects have equal ΔSNR values (0.12 and 0.3, respectively) and are, as a result, equally ranked third. However, comparing the L27 design results with the full factorial design (L81), there are no large differences between the results, which means that the results of the ANOVA and Taguchi analyses using the L27 combinations are consistent with those obtained from the full factorial design (L81). This finding confirms the effectiveness of the L27 design. Although the L9 design requires fewer experiments and provides results almost in accordance with the full factorial design results, caution must be applied, as the findings in two of the CS (CS1 and CS9) showed that the mesh size effect is overestimated by L9.
The results in Figure 9 indicate that precipitation intensity is by far the most important factor, contributing about 80%, while the runoff coefficient and roughness coefficient contribute about 11 and 6%, respectively, and the mesh accounts for only a 1% contribution. This result highlights the great importance of either estimating the precipitation intensity adequately for a given return period (which possibly has a large uncertainty due to climate change) for flood mapping, or of predicting the precipitation intensity accurately (which has the uncertainty associated with the stochastic nature of weather) for flood forecasting systems. Also interesting is the fact that the runoff coefficient is slightly more relevant than the roughness coefficient. Usually, hydrology modelers attribute more relevance to the runoff coefficient, whereas hydraulic modelers focus on the importance of the roughness coefficient. From the joint hydrologic and hydraulic modeling, it seems that the runoff coefficient is quite relevant, which gives an indication to hydraulic modelers that the input discharge (usually coming from hydrological modeling, involving precipitation and runoff coefficient) influences their final results much more than the roughness coefficient.
The results in Figures 6 and 7 show that finer meshes (level 1) lead to slightly larger SNR values than coarser ones (levels 2 and 3). Nevertheless, performance improvement with mesh refinement does not always hold, since the SNR value for the coarser mesh (level 3) is higher than that for the intermediate one (level 2). This shows that refining unstructured meshes over complex underlying topography can lead to no improvement. As stated before, for the runoff coefficient, roughness coefficient, and precipitation intensity, the performance results follow what is usually expected, i.e., an increase in the runoff coefficient (levels 1-3) increases the SNR values, an increase in the Strickler coefficient (i.e., a decrease in roughness, levels 1-3) reduces the SNR values, and an increase in the precipitation intensity (levels 1-3) results in higher SNR values. This is confirmed by the maximum and minimum flooded areas, corresponding to combinations 22 and 15, respectively, which are presented in Figure 10.
In complex and nonlinear systems, the joint effects of input parameters can have a significant influence on the outputs. Thus, in order to have a complete SA, the interactions among parameters are analyzed using the ANOVA technique. In this regard, CS4, an important CS located in Flaksvann lake where a hydrological station exists, was selected to explore the interactions between the parameters. The interactions are analyzed for both L27 and L81 at the specified CS. Since the results were similar, the interaction matrix of the parameters is only presented for the L27 design in Figure 11. If the curves are parallel to each other, there are no interaction effects between the two parameters (Wang et al. 2017).
The intersecting lines in Figure 11(b) imply that the mesh size interacts mostly with the roughness coefficient, presenting similar performances for the rougher case (smaller Strickler coefficient, i.e., level 1), but a much more mixed behavior for the less rough cases (levels 2 and 3). As can be seen in Figure 11(a)-11(c), and as already mentioned, mesh refinement does not necessarily improve performance (i.e., the SNR). In Figure 11(b), mesh refinement always improves the performance for the smoothest case (roughness at level 3), but not for the intermediate case (level 2), where a mesh refinement from level 3 to level 2 produces worse results. As stated before, this is related to the unstructured mesh generation, in particular near the river main channel, where the roughness coefficient varies. From those figures, it is seen that, in general, the intermediate mesh (level 2) presents the lowest performance, and the finer mesh (level 1) presents slightly better results than the coarser mesh (level 3). The latter becomes more evident for high-intensity precipitation (see level 3 in Figure 11(c)), which indicates that for simulating extreme flood events, refining the mesh can yield better performance.
The runoff coefficient interacts slightly with the precipitation intensity (Figure 11(e)). It is clear that the runoff coefficient becomes more important for model performance as the precipitation intensity level increases. This is logical, since the runoff coefficient represents the percentage of precipitation converted into runoff, which, for the same level (i.e., percentage) and a higher precipitation intensity, corresponds to a much larger amount of water flowing through the catchment (i.e., a higher SNR). Nevertheless, this points to the importance of a correct calibration of the runoff coefficient, especially for the high-intensity precipitation events that originate floods.
Based on the mean response variation range (see the blue dashed circles in Figures 11(e) and 11(f)), it can be observed that the importance of the parameters changes depending on the precipitation intensity. At the lower intensity (level 1), the roughness parameter corresponds to a higher variation than the other parameters, whereas, under moderate and high intensities (levels 2 and 3), the runoff coefficient is the most influencing parameter, corresponding to the higher variation range in the mean water level.

CONCLUSION
In this study, the application of the Taguchi technique and ANOVA was investigated to identify and rank the most influencing parameters in a 2D hydrodynamic flood simulation model. In general, this approach has several advantages: (i) it can effectively reduce the number of simulations needed for SA, (ii) it quantifies the contribution of each parameter and ranks them based on the associated variation in target outputs, and (iii) it explores the interaction between input parameters.
The main conclusions of this study are:
• The implemented Taguchi-ANOVA approach can be used as an efficient SA method in 2D flood modeling to quantify and rank the influencing parameters. However, the results suggest that the minimum orthogonal array (L9) should be used cautiously and that an intermediate orthogonal array (L27) is preferred, if affordable.
• The considered global response (flooded area) is more sensitive to the variation of the input parameters than the local response (cross-sectional water level). This shows the necessity of considering both local- and global-scale calibrations for flood simulation purposes.
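For reference, the L9(3⁴) orthogonal array mentioned above accommodates four 3-level factors in nine runs; its defining property is that, in every pair of columns, each ordered pair of levels appears exactly once. A small sketch that states the standard array and verifies this balance:

```python
# Standard Taguchi L9(3^4) orthogonal array (levels coded 1-3):
# 9 runs covering four 3-level factors instead of the 3^4 = 81 full factorial.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(array):
    """Check that every column pair contains each ordered level pair exactly once."""
    n_cols = len(array[0])
    for i in range(n_cols):
        for j in range(i + 1, n_cols):
            pairs = {(row[i], row[j]) for row in array}
            if len(pairs) != len(array):  # 9 runs must give 9 distinct pairs
                return False
    return True
```

This balance is what lets main effects be estimated independently from so few runs; it is also why such a minimal array cannot resolve interactions, consistent with the preference for L27 stated above.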
The following secondary conclusions, which require further validation with a more complex hydrological approach, can be drawn from the results presented in this paper:
• The precipitation intensity is identified as the most significant parameter in flood modeling, accounting for about 80% of the contribution, followed by the runoff coefficient (11%), the roughness coefficient (6%), and the mesh size (1%). This highlights the great importance of accurate estimation of the precipitation intensity for flood modelers and decision makers, which can be a major challenge due to the uncertainties resulting from climate change and the stochastic nature of the weather.
• Generally, among the proposed mesh sizes, the finer mesh (level 1) led to slightly better results than the other mesh sizes (levels 2 and 3); the difference is clearer under high-intensity precipitation (level 3). However, mesh refinement is not always a solution to improve the results, due to the underlying complex gridded data, including topography, roughness, and runoff coefficient values. Moreover, the strong interaction between the roughness coefficient and the mesh size provides further support for the hypothesis that mesh refinement can, in some cases, negatively affect the results.
• The importance of the input parameters can change depending on the hydrological conditions. Under low-intensity precipitation (level 1), the roughness parameter affects the results more than the other parameters, whereas, under moderate and high-intensity precipitation (levels 2 and 3), the runoff coefficient is the most influencing parameter, corresponding to the higher variation in the outputs.