## Abstract

To improve computational efficiency without loss of accuracy in surface flow simulation, a dynamic bidirectional coupling model based on multigrid was applied. The flood-prone areas, where detailed inundation information is needed, were discretized using finer grids, while the rest of the domain was assigned coarser grids to reduce computation time. Different time steps were adopted on the coarse and fine grids. This paper presents an approach to quantitatively assess the computational efficiency under different domain discretization and grid generation conditions. Three test cases were considered to demonstrate the performance of the applied model. The results showed that the model achieved high computational efficiency while maintaining satisfactory simulation accuracy. The computational efficiency of the model was related not only to the size ratio of coarse to fine grids but also to the area ratio of coarse grids to the entire computational domain. If the area ratio of coarse grids to the computational domain was constant, the computational efficiency improved exponentially with increasing size ratio. If the size ratio of coarse to fine grids was constant, the computational efficiency improved linearly with increasing area ratio. The proposed method offers a promising option for quantitatively assessing the computational efficiency of models.

## HIGHLIGHTS

A dynamically coupled hydrologic-hydrodynamic model is applied.

A parameter is proposed to quantitatively assess the computational efficiency.

The efficiency exponentially improves with the increase of the grid size ratio.

The efficiency linearly improves with the increase of the area ratio of coarse grids to the entire domain.

## INTRODUCTION

Numerical models play a major role in predicting flood processes and the associated pollutant and sediment transport (Saadatpour *et al.* 2019; Jelti & Boulerhcha 2022). These models are capable of providing accurate solutions and have received much attention in recent years (Guan *et al.* 2014; Fernández-Pato *et al.* 2020; Li *et al.* 2021). Two main issues normally raise concerns with these models, i.e., numerical accuracy and computational efficiency. Many attempts have been made to improve numerical accuracy, such as developing coupled models (Feistl *et al.* 2014; Salheddine *et al.* 2020; Gomes *et al.* 2021) and discretizing the domain with refined grids (Schumann *et al.* 2013, 2014; Li *et al.* 2016; David & Schmalz 2021). These methods increase numerical precision, but are not conducive to improving computational efficiency. For example, if a large domain is divided using fine uniform grids, many computational nodes are involved, which costs substantially more computational resources and time. This essentially restricts the application of numerical models to realistic flood simulations, which usually involve large domains.

Several methods have been proposed to improve the computational efficiency of numerical models. Schumann *et al.* (2013) and Bates *et al.* (2010) developed a simplified partial inertial model ignoring the dynamic terms in the full two-dimensional (2D) shallow water equations (SWEs), and the results showed that the simplified model can save considerable computational time compared with the full 2D SWEs. However, since the dynamic terms are not considered, the model cannot reliably reproduce the physical processes or predict the velocity of rapidly varying flows, such as dam breaks.

To further improve computational efficiency, many researchers have optimized the grid resolution during flood simulations, and adaptive mesh refinement (AMR) is the most common method (Hu *et al.* 2018; Ghazizadeh *et al.* 2020). The computational domain is decomposed into coarse grids that are linked with sub-grid sets of various sizes. These dynamically adaptive meshes can adjust the grid density in accordance with the domain features or flow conditions, which allows local grid refinement. Liang *et al.* (2015) used AMR to divide the computational domain, where high-resolution simulation was carried out in areas in which the surface water flow needed to be detailed. The results indicated that complex grid adaptation tends to cost more computation time and resources, although it improves the computational efficiency in comparison with uniformly fine grids. In addition to the intricate grid generation techniques, large storage requirements are also involved, which further restricts its application, especially in larger watersheds. Besides, mass and momentum conservation may not be ensured at the same time when using a dynamically adaptive grid to divide the computational domain (Bermudez & Ma 1994). It is well known that mass conservation is a critical element to be satisfied in models for practical applications, while momentum conservation influences the numerical stability of models and is thereby important for preserving their robustness.

Static nonuniform grids for surface water flow simulations have received increasing attention in recent years. The flood-prone areas, where detailed inundation information is needed, are discretized using finer grids, while the rest of the region is covered with coarse grids. This method easily achieves local grid refinement and effectively avoids the troublesome dynamic links between numerical solutions obtained on different grids in AMR. Furthermore, compared with AMR, static nonuniform grids have the advantage of a simplified grid generation procedure and may save more computation time. Hou *et al.* (2018) proposed a static nonuniform grid system to identify and refine the crucial regions where a high-resolution grid is essential for accurate surface flow simulation, which saves more computation cost and time than AMR. However, the computational efficiency is limited by the grid sizes, since a 2:1 rule is introduced and no cell is allowed to have a neighbor that is more than twice as large or as small. A multigrid dynamic bidirectional coupling model (M-DBCM) was developed by Shen *et al.* (2021), where different size ratios of coarse to fine grids were used to divide the computational domain; the computational efficiency improved to various degrees depending on the size ratio. However, only a superficial assessment of computational efficiency was carried out to evaluate the performance of the M-DBCM.

It is also crucial to evaluate the computational efficiency of a numerical model. Hou *et al.* (2018) evaluated the simulation accuracy of their model using the root mean square error, but only the computation time was reported; no further discussion or analysis of the improvement in computational efficiency was provided. Other researchers, e.g., Shen *et al.* (2021), Yu & Duan (2017), Liu *et al.* (2019) and Jiang *et al.* (2021), have quantitatively assessed numerical accuracy by comparing simulation results with measured or analytical data, but computational efficiency was only reported and lacked strictly quantitative analysis. At present, the computational efficiency of models is simply assessed by comparing the computation times of different algorithms, without further discussion and analysis of the influencing factors and the degree of improvement. Nevertheless, determining the factors that influence computational efficiency and assessing the sensitivity to these factors is significant for improving the performance of models, and the degree of improvement in computing efficiency is also important for studying model performance. The underlying question is how to quantitatively evaluate the computational efficiency of a numerical model and analyze the factors influencing efficiency.

Based on the M-DBCM proposed by our team, this work is devoted to proposing a method to quantitatively evaluate the computational efficiency of models. The remainder of this paper is arranged as follows: the multigrid technique is introduced first; the dynamically coupled hydrologic-hydrodynamic model used in this study is then detailed; thirdly, an assessment parameter is proposed and its derivation described in detail; Section 3 quantitatively demonstrates the computational efficiency of the M-DBCM based on three cases; and Section 4 presents the conclusions.

## METHODS

In this study, a dynamic bidirectional coupled hydrologic-hydrodynamic model based on multigrid, proposed by our team, is applied. The M-DBCM consists of two components: a hydrologic model that simulates overland flow in the upstream area and a hydrodynamic model that details flood processes in the flood-vulnerable downstream area. For the model setup, a multigrid system is designed to discretize the computational domain. The Fortran programming language is adopted to implement the coupling model.

### Multigrid generation

The balance between the need for improved resolution in simulating flood processes and the computational cost makes it desirable to refine the model grid locally. A structured multigrid system is developed in M-DBCM. The grid system provides fine grid cells in those parts of the domain where high-resolution representation is crucial for flood simulation, while the rest of the domain is discretized into coarse cells. The areas where high-resolution representation is crucial can be determined from topographic data: generally speaking, low-lying areas, such as rivers and lakes, are vulnerable to floods. Therefore, the parts requiring high-resolution representation are first marked; the marked areas are then refined while the rest of the domain is coarsened.
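As a minimal illustration of this marking step, low-lying cells can be flagged for refinement directly from a topographic raster. The elevation-quantile criterion, the 0.2 cut-off and all names below are assumptions made for this sketch, not taken from the paper:

```python
import numpy as np

def mark_refinement(elevation, quantile=0.2):
    """Flag low-lying cells (e.g., river corridors) as flood-prone.

    Cells at or below the given elevation quantile are marked for
    fine-grid refinement; the 0.2 cut-off is purely illustrative.
    """
    threshold = np.quantile(elevation, quantile)
    return elevation <= threshold

# A tilted plane with a low channel along the middle column:
x = np.abs(np.linspace(-2.0, 2.0, 10))   # valley-shaped cross-section
y = np.linspace(5.0, 0.0, 10)            # downhill trend
z = y[:, None] + x[None, :]              # bed elevation (m)
refine = mark_refinement(z)              # True where fine grids go
```

In practice the mask would drive the grid generator: marked cells keep the fine resolution, unmarked cells are merged into coarse cells.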

The computational domain therefore consists of three parts: one coarse-grid region and two fine-grid regions. A hydrologic model is applied to simulate the rainfall runoff in the coarse-grid region. The fine grids are divided into two parts, i.e., inundation and noninundation regions, and a water depth threshold offers a simple way to distinguish them. If the water depth in a grid cell is higher than the predefined threshold, the cell belongs to the inundation region, where the hydrodynamic model is applied; conversely, if the water depth is lower than the threshold, the cell belongs to the noninundation region, where the hydrologic model is applied. When the rain intensity increases, the inundation regions expand because of the gradual accumulation of surface water, and the inflow discharge positions, flow paths and discharge values change accordingly. Therefore, a moving coupling interface (Line-2) forms between the inundation and noninundation regions. The hydrologic and hydrodynamic models are coupled in a two-way manner, and the coupling boundary is moving and time-dependent.
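The threshold test that moves the coupling interface can be sketched as follows. The 0.01 m threshold value and the function name are assumptions of this sketch; the paper only states that a predefined depth threshold is used:

```python
import numpy as np

H_THRESHOLD = 0.01  # predefined water depth threshold (m); value assumed

def classify_fine_cells(h):
    """Classify fine-grid cells at each time step.

    True  -> inundated cell: solve the hydrodynamic model (2D SWEs).
    False -> noninundated cell: solve the hydrologic model.
    The boundary between the two masks is the moving interface (Line-2).
    """
    return np.asarray(h) > H_THRESHOLD

# As rainfall accumulates, depths rise and the inundated mask expands:
h_early = np.array([0.000, 0.004, 0.008, 0.020])
h_late = h_early + 0.01
mask_early = classify_fine_cells(h_early)   # only the deepest cell
mask_late = classify_fine_cells(h_late)     # interface has moved
```

Because the mask is recomputed every step, the hydrologic/hydrodynamic partition follows the flood front without any remeshing.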

### The dynamic coupled hydrologic-hydrodynamic model

#### Hydrologic model

The hydrologic model is based on the diffusive wave approximation of the shallow water equations (*et al.* 2004), as follows:

$$\frac{\partial h}{\partial t}+\frac{\partial q_x}{\partial x}+\frac{\partial q_y}{\partial y}=i \quad (1)$$

$$q_x=-\frac{h^{5/3}}{n}\,\frac{\partial Z/\partial x}{\left|\nabla Z\right|^{1/2}} \quad (2)$$

$$q_y=-\frac{h^{5/3}}{n}\,\frac{\partial Z/\partial y}{\left|\nabla Z\right|^{1/2}} \quad (3)$$

where *h* is the water depth (m); *u* and *v* are the velocities in the *x* and *y* directions (m/s); $q_x = uh$ and $q_y = vh$ are the unit discharges in the *x* and *y* directions (m^{2}/s), respectively, which are calculated by Equations (2) and (3); *i* equals the rainfall intensity minus the infiltration rate (m/s); $\partial Z/\partial x$ and $\partial Z/\partial y$ are the water level gradients in the *x* and *y* directions, respectively, where $Z=z+h$ and $\left|\nabla Z\right|=\sqrt{(\partial Z/\partial x)^2+(\partial Z/\partial y)^2}$; *z* is the bottom elevation (m); and *n* is Manning's roughness coefficient.
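The closure in Equations (2) and (3) can be transcribed directly; this is a sketch, and the regularization term `eps` is an assumption added here to avoid division by zero on flat water surfaces:

```python
import numpy as np

def diffusive_wave_flux(h, dZdx, dZdy, n_manning=0.025, eps=1e-8):
    """Unit discharges q_x, q_y (m^2/s) of the diffusive wave model.

    Implements q = -(h^(5/3)/n) * grad(Z) / |grad(Z)|^(1/2), where Z is
    the water level; flow is driven down the water surface gradient.
    """
    grad_mag = np.sqrt(dZdx**2 + dZdy**2) + eps  # |grad Z|, regularized
    coeff = h**(5.0 / 3.0) / (n_manning * np.sqrt(grad_mag))
    return -coeff * dZdx, -coeff * dZdy

# 1 m deep flow on a 1% water-surface slope in x only:
qx, qy = diffusive_wave_flux(h=1.0, dZdx=0.01, dZdy=0.0)
```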

#### Hydrodynamic model

The hydrodynamic model solves the full 2D SWEs, written in conservative vector form as

$$\frac{\partial \mathbf{U}}{\partial t}+\frac{\partial \mathbf{F}}{\partial x}+\frac{\partial \mathbf{G}}{\partial y}=\mathbf{S}$$

$$\mathbf{U}=\begin{bmatrix} h \\ q_x \\ q_y \end{bmatrix},\quad \mathbf{F}=\begin{bmatrix} q_x \\ uq_x+gh^2/2 \\ uq_y \end{bmatrix},\quad \mathbf{G}=\begin{bmatrix} q_y \\ vq_x \\ vq_y+gh^2/2 \end{bmatrix},\quad \mathbf{S}=\begin{bmatrix} i \\ gh(S_{0x}-S_{fx}) \\ gh(S_{0y}-S_{fy}) \end{bmatrix}$$

where $S_{0x}$ and $S_{0y}$ are the bed slopes, $S_{fx}$ and $S_{fy}$ are the friction slopes estimated with Manning's formula, and *g* is the gravity acceleration (m/s^{2}).

#### The coupling of hydrologic-hydrodynamic model

In fine-grid areas, the dynamically coupled hydrologic-hydrodynamic model is applied to simulate the flooding process. The hydrologic and hydrodynamic models are coupled by a moving coupling interface. The key issue of the model coupling is to develop a reasonable method to determine the fluxes passing through the coupling interface, which should integrate the effect of the current flow state obtained from these two models on both sides of the coupling interface.

In these flux expressions, the subscript *i*, *j* refers to a grid cell in the noninundation region, while *i* + 1, *j* denotes the adjacent grid cell in the inundation region.

Based on Equation (9), only mass conservation through the coupling boundary can be ensured in this situation.

Based on Equations (10) and (11), both mass and momentum conservation through the coupling boundary can be achieved, and flow and discharge information in the inundation and noninundation regions can affect each other.

In the presented dynamic bidirectional coupling approach, overland flow can be discharged from the inundation regions back to the noninundation regions, and the effects of floodplain overflow and backwater at confluences are also considered.

where the velocities in the *x* and *y* directions (m/s) and the water depth (m) are those evaluated on the fine grids.

The time step of the coarse grids ($\Delta t_c$) is determined from that of the fine grids ($\Delta t_f$). If the size of the coarse grid is *n* times that of the fine grid, the time step of the coarse grid is taken as $\Delta t_c = n\Delta t_f$.
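The coarse/fine time marching can be organized as below. This is a sketch; the function names and the position of the synchronization point are assumptions, as the paper only specifies that the coarse-grid step is *n* times the fine-grid step:

```python
def advance_one_coarse_step(n, dt_f, step_coarse, step_fine, exchange):
    """Advance both grid levels to the same time level.

    The coarse grid takes one step of dt_c = n * dt_f while the fine
    grid takes n substeps of dt_f; afterwards the coupled variables are
    exchanged across the coarse/fine interface.
    """
    dt_c = n * dt_f
    step_coarse(dt_c)        # hydrologic update on the coarse grids
    for _ in range(n):
        step_fine(dt_f)      # coupled update on the fine grids
    exchange()               # synchronize interface variables
    return dt_c

# Record the call pattern for n = 5:
log = []
dt_c = advance_one_coarse_step(
    5, 0.1,
    step_coarse=lambda dt: log.append(("coarse", dt)),
    step_fine=lambda dt: log.append(("fine", dt)),
    exchange=lambda: log.append(("exchange",)),
)
```

Both levels reach the same time before variables are exchanged, which is what allows the two models to remain consistent at the grid interface.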

### The assessment of computational efficiency

where *A* is the area of the entire computational domain; *A*_{1} is the area of the fine-grid regions; and *A*_{2} is the area of the coarse-grid regions.

In addition to the computation time of the hydrologic and hydrodynamic models, time is required for other operations (i.e., the coupling of the hydrologic and hydrodynamic models and the variable exchange between coarse and fine grids). The total computation time of M-DBCM is the sum of these components.

*C* is an assessment parameter indicating the computation-time reduction ratio, which is used to evaluate the computational efficiency of M-DBCM. A smaller *C* value means higher computational efficiency.

From Equation (21), the computational efficiency of M-DBCM is related not only to the size ratio of coarse to fine grids but also to the area ratio of coarse grids to the entire domain. If the area of the coarse-grid regions is much greater than that of the fine-grid regions, i.e., $A_2 \gg A_1$, the assessment parameter tends to $C \approx 1/n^{3}$, indicating that the computational efficiency of M-DBCM improves rapidly with increasing size ratio. If the size of the coarse grids is much larger than that of the fine grids, i.e., $n \gg 1$, the assessment parameter tends to $C \approx A_1/A = 1 - A_2/A$, indicating that the computational efficiency of M-DBCM improves linearly with the increasing area ratio of the coarse grids to the entire domain.
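The cost accounting behind the parameter can be sketched as follows, under a simplification that neglects the coupling/exchange overhead (an assumption of this sketch): a coarse cell of size *n*Δx covers *n*² fine cells and takes an *n*-times larger time step, so the coarse-grid share of the cost scales as 1/*n*³:

```python
def efficiency_ratio(A1, A2, n):
    """Computation-time reduction ratio C of M-DBCM vs. uniform fine grids.

    A1: area of fine-grid regions, A2: area of coarse-grid regions,
    n: size ratio of coarse to fine grids (overhead neglected).
    """
    A = A1 + A2
    return A1 / A + A2 / (n**3 * A)

# Limiting behaviours discussed in the text:
# coarse grids dominate (A2 >> A1): C -> 1/n**3, decaying rapidly with n;
# large size ratio (n large):       C -> A1/A, linear in the area ratio.
c_all_coarse = efficiency_ratio(A1=0.0, A2=100.0, n=2)   # = 1/8
c_half = efficiency_ratio(A1=50.0, A2=50.0, n=10)
```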

## CASE STUDY

The performance of the proposed M-DBCM is demonstrated by applying it to three cases. The results are compared with those obtained from the model that has similar numerical schemes but runs on uniform fine grids, referred to as DBCM in the following sections. Besides, the HM2D on fine uniform grids is also applied to the same cases for comparison with the proposed model.

### Rainfall runoff over a slope plane

In this case, rainfall runoff on a constant slope (Kim *et al.* 2012) is used to evaluate the performance of M-DBCM. A sketch of the case is shown in Figure A.1 in the Supplementary Material. The length of the plane is 182.88 m, Manning's coefficient is 0.025 s/m^{1/3}, and the bed slope is 0.016. The constant rainfall intensity is 50.8 mm/h, the rainfall duration is 1,800 s, and the simulation is performed for 3,600 s in total. Different cases with various grid size ratios were designed to evaluate the performance of the M-DBCM, as listed in Table B.1 in the Supplementary Material. In M-DBCM, the size of the fine grids is 1.83 m, while the rest of the domain is coarsened by different factors, as presented in Figure A.2 in the Supplementary Material. Fine uniform grids with a resolution of 1.83 m are also adopted for comparative simulations.

The total computation time of the different cases on a desktop computer with an Intel i7-11700K CPU and 64 GB RAM is presented in Table 1. Since uniform fine grids were applied in case00 and case01, the computation time of both cases was higher than that of the others; case01 cost more computational time than case00 owing to the solution of the 2D SWEs over the whole region. In case12, case15 and case10, most areas were discretized into coarse grids and only a small part of the region was divided into fine grids, so the number of cells was much lower than in case00 and case01. As a result, the computation time of M-DBCM based on nonuniform grids was lower than that of the others.

| *n* | Case | Number of grids | Runtime (s) |
| --- | --- | --- | --- |
| – | case00 | 1,000 | 94.422 |
| – | case01 | 1,000 | 99.812 |
| 0.3 | case12 | 820 | 83.719 |
| 0.3 | case15 | 784 | 79.281 |
| 0.3 | case10 | 772 | 77.344 |
| 0.5 | case12 | 655 | 65.704 |
| 0.5 | case15 | 544 | 59.609 |
| 0.5 | case10 | 527 | 58.775 |
| 0.7 | case12 | 505 | 52.875 |
| 0.7 | case15 | 352 | 45.797 |
| 0.7 | case10 | 329 | 44.484 |


The assessment parameter *C* of the different cases is shown in Figure 4. Lower *C* values mean higher computational efficiency. If the size ratio of coarse to fine grids is constant, *C* values decreased linearly with increasing *n*; higher *n* means larger coarse-grid regions, and simulations on coarse grids involve few computational nodes and cost substantially less computation time. The figure also shows that, if *n* is constant, *C* values decreased with increasing size ratio of coarse to fine grids. The size of the fine grids is the same (1.83 m × 1.83 m) in case12, case15 and case10, but the coarse grids are twice, 5 times and 10 times the size of the fine grids, respectively. The numbers of computational grids in these three cases therefore rank as case12 > case15 > case10, and fewer computational grids require less calculation time. The computational efficiency could thus be further improved by increasing the size ratio of the coarse grids to the fine grids.

The *C* values of case12, case15 and case10 were 0.59, 0.56 and 0.55, respectively, at the smaller *n*, while they were 0.36, 0.28 and 0.25, respectively, at the largest *n*. The variation of the *C* values thus differed as *n* changed: when *n* is small, the fine-grid regions are large, and the total computation time mainly depends on that of the fine grids.

### Rainfall runoff over a plane with variable slope and roughness

In this case, a sloping plane measuring 500 m × 400 m was designed, with constant slopes along the *x* and *y* directions (Jaber & Mohtar 2003), as presented in Figure A.3 in the Supplementary Material. Manning's coefficient varies across the plane and, together with the rainfall input, follows Jaber & Mohtar (2003); the rainfall intensity is given by a symmetric triangular hyetograph. The simulations are run for 14,400 s. Except for a transmissive boundary at the watershed outlet, the other boundaries were reflective. Different cases, as shown in Table B.2 in the Supplementary Material, were developed to simulate the rainfall runoff. In M-DBCM, the size of the fine grids was 1 m × 1 m, while the rest of the domain was coarsened by different factors, as presented in Figure A.4 in the Supplementary Material. Fine uniform grids with a resolution of 1 m × 1 m were also adopted for comparative simulations, as in case00 and case01 shown in Table B.2.

The IFMS (*et al.* 2018; Yang *et al.* 2021) and a model developed by Jaber & Mohtar (2003) were used to simulate the rainfall runoff process under the same conditions, in order to assess the performance of the M-DBCM. The hydrographs obtained from the different models are shown in Figure 5. Very fine grids were used to divide the computational domain in the model of Jaber & Mohtar (2003) to obtain more accurate results, so their results can be regarded as an approximate analytical solution. The hydrographs showed good agreement, indicating that the simulation accuracy of the M-DBCM was satisfactory. The peak discharge and receding limb simulated by IFMS slightly under-predicted the discharge. In IFMS, the 2D SWEs were solved to simulate the flooding process over the whole region, while the M-DBCM switched from the hydrodynamic model to the hydrologic model when the water depth fell below the threshold. When the M-DBCM switched from the hydrodynamic to the hydrologic model, the approach to calculating the velocity also changed accordingly, producing the discharge difference between IFMS and M-DBCM. The outlet flow in M-DBCM was at first slightly larger and later slightly smaller, ensuring that the overall mass was conserved.

The runtimes of the different cases are presented in Table 2. A comparison was first made between the fine uniform grids and nonuniform grids to check the difference between the two mesh representations. The computation time of case00 and case01 was higher than that of the others: a total of 200,000 grids were generated on the fine uniform grids, whereas on the nonuniform grid system most of the area was discretized into coarse grids and only a small part into fine grids, so the number of calculation grids was much lower than in case00 and case01. Consequently, the computation time on fine uniform grids was much larger than on nonuniform grids. Furthermore, case01 cost more computational time and resources than the other cases owing to the solution of the 2D SWEs over the whole region.

| *n* | Case | Number of grids | Runtime (s) |
| --- | --- | --- | --- |
| – | case00 | 200,000 | 6,404.828 |
| – | case01 | 200,000 | 7,871.844 |
| 0.4 | case12 | 142,090 | 4,651.515 |
| 0.4 | case15 | 128,088 | 4,404.905 |
| 0.4 | case10 | 127,463 | 4,328.813 |
| 0.6 | case12 | 115,127 | 4,025.687 |
| 0.6 | case15 | 92,122 | 3,594.938 |
| 0.6 | case10 | 91,779 | 3,415.014 |
| 0.9 | case12 | 66,062 | 2,863.735 |
| 0.9 | case15 | 29,405 | 2,259.016 |
| 0.9 | case10 | 25,670 | 2,009.016 |


The *C* values were calculated to assess the computational efficiency of M-DBCM, as shown in Figure 6. When *n* = 0.4, the *C* values ranged from 0.5 to 0.6, indicating that the computational efficiency improved by about 40% with the multigrid discretization. *C* decreased linearly with increasing *n*. When *n* = 0.9, the coarse-grid regions accounted for 90% of the total watershed, and the *C* values ranged between 0.25 and 0.40, indicating that the computational efficiency improved by about 70% when the computational domain was meshed with multi-grids. When *n* is constant, *C* values decreased with increasing size ratio. The size of the fine grids is the same in all the cases; in case12 the coarse grid is twice the size of the fine grid, while it is 5 times and 10 times the fine-grid size in case15 and case10, respectively. The numbers of grid cells rank as case12 > case15 > case10, so case12 cost more computation time than case15 and case10. The computation time decreased as the size of the coarse grids increased, resulting in diminishing *C* values.

In addition, an interesting phenomenon was observed in this figure. When *n* = 0.4, the *C* values of case12, case15 and case10 differed only slightly, but the differences between cases grew with increasing *n*: when *n* = 0.9, the *C* values of case12, case15 and case10 were 0.36, 0.29 and 0.25, respectively. When *n* is small, the coarse-grid regions account for a small proportion of the entire computational domain and the number of fine grids is large, so the total runtime mainly depends on the computation time of the fine grids; consequently, the *C* values of the different cases were close. The proportion of coarse grids in the computational domain increases with *n*; when *n* = 0.9, the number of grids varied greatly between cases, and so did the computation time. It is concluded that the area ratio of the coarse-grid regions to the entire domain has a great influence on the computational efficiency.

### Flood simulation in a natural watershed

The Goodwin Creek watershed is located in Panola County, Mississippi, USA, and covers an area of 21.3 km^{2}. The terrain gradually falls from northeast to southwest, consistent with the trend of the main channel, and the elevation ranges from 71 to 128 m. The computational basin and bed elevations are shown in Figure A.5 in the Supplementary Material. Land use in this watershed is divided into four classes, i.e., forest, water, cultivated and pasture, with Manning coefficients of 0.05, 0.01, 0.03 and 0.04, respectively (Sánchez 2002). The infiltration coefficients of the different soil types were determined according to Blackmarr (1995).

The rainfall duration was 4.8 h and the simulations were performed for 12 h. Different cases with various grid resolutions were developed to verify the computational efficiency and numerical accuracy of M-DBCM, as listed in Table B.3 in the Supplementary Material. In M-DBCM, the rivers were covered by fine grid cells of 10 m × 10 m, while the rest of the domain was coarsened by different factors, as presented in Figure A.6 in the Supplementary Material. Fine uniform grids with a resolution of 10 m were also adopted for comparative simulations.

In terms of computational efficiency, the total execution time for simulating the surface flow in the different cases is presented in Table 3. Uniform fine grids were used to divide the computational domain in case00 and case01, where the number of calculation grids was 207,198; consequently, both cases had a high computational burden and cost more computation time. In case01, the 2D SWEs were solved over all regions, which required a large amount of computational time and resources, so its simulation time was higher than that of case00. In the multigrid system, the number of computation grids was much lower than in case00, which saved considerable computation time.

| *n* | Case | Number of grids | Runtime (h) |
| --- | --- | --- | --- |
| – | case00 | 207,198 | 40.13 |
| – | case01 | 207,198 | 42.91 |
| 0.5 | case12 | 151,063 | 22.65 |
| 0.5 | case15 | 102,560 | 22.08 |
| 0.5 | case10 | 93,640 | 21.80 |
| 0.7 | case12 | 104,555 | 21.82 |
| 0.7 | case15 | 80,109 | 20.69 |
| 0.7 | case10 | 76,114 | 19.71 |


The *C* values of the different cases were calculated, as shown in Figure 8. When *n* = 0.5, the *C* values of case12, case15 and case10 were 0.528, 0.515 and 0.508, respectively; in contrast, when *n* = 0.7, they were 0.509, 0.482 and 0.459, respectively. The *C* values decreased with increasing *n*, which indicates that the area ratio of coarse grids to the entire computational domain has a great influence on the computational efficiency: larger coarse-grid regions mean less computation time since there are fewer calculation grids. If *n* is constant, *C* values decreased with increasing size ratio of coarse to fine grids; a larger size ratio involves fewer calculation grids and requires less calculation time, so the computational efficiency is improved.

However, it is noted that the *C* values of the different cases changed only slightly when *n* = 0.5, compared with when *n* = 0.7. This demonstrates that the computational efficiency is affected by the area ratio of coarse grids to the entire computational domain. When *n* = 0.7, the coarse-grid regions were much larger than the fine-grid regions, and the total runtime mainly depended on the computation time on the coarse grids; the number of calculation grids decreased considerably with increasing size ratio of coarse to fine grids, and the computation time was reduced accordingly. When *n* = 0.5, the total runtime mainly depended on the computation time on the fine grids; high-resolution simulations on fine grids inevitably involve a large number of computational nodes and substantially increase the computational cost.

## CONCLUSIONS

A dynamic bidirectional coupling model with nonuniform grids was applied in this study to improve computational efficiency while preserving simulation accuracy, and an assessment parameter was developed to quantitatively assess the computational efficiency of models. The regions of the computational domain where high-resolution representation was needed were divided into fine grids, while the rest of the regions were covered with coarse grids to reduce the computational burden. The hydrologic model was applied to the coarse-grid regions, while on the fine grids the hydrologic and hydrodynamic models were coupled in a two-way manner to ensure mass and momentum conservation. The computation times on the coarse and fine grids were analyzed separately, and on this basis a parameter for assessing computational efficiency was proposed.

The performance of M-DBCM was validated against three test cases. The numerical results were compared with those predicted on fine uniform grids. The M-DBCM was demonstrated to effectively simulate the flow process and ensure reliable simulations. While producing numerical results of similar accuracy, the M-DBCM saved much computation time compared with the DBCM or HM2D on fine grids. The computational efficiency was quantitatively evaluated based on the assessment parameter; lower *C* values mean higher computational efficiency, and the *C* values varied between cases. Three important conclusions were obtained from the *C* values. Firstly, if the area ratio of coarse grids to the entire domain was constant, only the size ratio of coarse to fine grids influenced the computational efficiency; the computation time decreased dramatically with increasing size ratio. Secondly, if the size ratio of coarse to fine grids was constant, only the proportion of coarse grids in the entire computational domain affected the computational efficiency; the computation time decreased linearly with the increasing area ratio of coarse grids to the entire computational domain. Lastly, if the coarse-grid regions were small, the *C* values decreased only slightly with increasing size ratio of coarse to fine grids, because the total runtime then mainly depended on the computation time on the fine grids.

The proposed M-DBCM is able to accurately and efficiently reproduce flooding processes and therefore has the potential for a wide range of practical applications. In future work, unstructured grids could be used to divide the computational domain, since they provide great flexibility in conforming to complex domain geometries, and GPUs could be used to further accelerate the computation.

## ACKNOWLEDGEMENTS

This study was supported by the National Science Foundation of China (No. 52179068) and the Key Laboratory of Hydroscience and Engineering (No. 2021-KY-04). The authors thank the anonymous reviewers for their valuable comments.

## DATA AVAILABILITY STATEMENT

All relevant data are included in the paper or its Supplementary Information.

## CONFLICT OF INTEREST

The authors declare there is no conflict.

## REFERENCES

*GIS-Based Upland Erosion Modeling, Geovisualization and Grid Size Effects on Erosion Simulations with CASC2D-SED*. PhD Thesis.