Flooding is one of the most frequent natural hazards and causes more economic loss than any other natural hazard. Fast and accurate flood prediction is significant for preserving lives, minimizing economic damage, and reducing public health risks. However, current methods cannot achieve speed and accuracy simultaneously. Numerical methods can provide high-fidelity results, but they are time-consuming, particularly when high accuracy is pursued. Conversely, neural networks can provide results in seconds, but all existing approaches have shown low accuracy in flood map generation. This work combines the strengths of numerical methods and neural networks to build a framework that can quickly and accurately model high-fidelity flood inundation maps with detailed water depth information. In this paper, we employ U-Net and generative adversarial network (GAN) models to recover the physics and information lost in ultra-fast, low-resolution numerical simulations, ultimately presenting high-resolution, high-fidelity flood maps as the end result. In this study, both the U-Net and GAN models have proven their ability to reduce the computation time for generating high-fidelity results from 7–8 h to 1 min, while maintaining notably high accuracy.

  • In our study area, our models have demonstrated the capability to dramatically decrease the computation time required to generate high-fidelity results, reducing it from 7–8 h to 1 min.

  • The GAN model displays a lower sensitivity to changes in input resolution compared with the U-Net model.

  • The proposed method effectively recovers the information lost to the large grid size of the low-resolution geometry.

Flooding stands out not only as the most prevalent natural hazard (Parhi 2018; Rentschler & Salhab 2020), but also as the natural hazard that produces more annual damage than any other weather-related occurrence in the United States (NOAA 2016). From 1998 to 2017, flooding affected over two billion people, more than any other natural hazard (CRED 2018). The consequences of severe floods can be dire, encompassing substantial economic losses and endangering human lives (Haltas et al. 2021). Hurricane Harvey hit Houston in August 2017, resulting in over $125 billion in economic losses (Kousky et al. 2020). Similarly, a record-breaking rainstorm struck Henan, China, in July 2021, claiming 398 lives and causing $17.8 billion in economic losses (He et al. 2023). More recently, in August 2023, Typhoon Doksuri slammed northeast China, triggering severe floods that displaced over a million people. Beyond the direct threats to human lives and commerce (Alexander et al. 2019), floods contribute to environmental and public health risks (Okaka & Odhiambo 2018; Rivett et al. 2022). Moreover, global warming and climate change have amplified the occurrence and intensity of heavy rainfall events (Donat et al. 2016; NASA 2017), leading to a higher frequency of severe floods in recent decades (Popescu & Bărbulescu 2023). Thus, fast and accurate flood prediction holds immense significance for preserving lives, minimizing economic damage, and reducing environmental and public health risks.

In recent years, numerical methods have been widely used in the field of hydrology and hydraulics for flood inundation map generation. Prominent models and software applications, such as HEC-RAS (USACE 2018), FLO-2D (FLO-2D 2018), and SRH-2D (Lai 2010), have been developed to numerically solve the two-dimensional (2D) shallow water equations. These models and software have been extensively utilized by researchers to simulate multiple flood scenarios across various floodplains (Rangari et al. 2019; Ongdas et al. 2020; Iroume et al. 2022; Pathan et al. 2022; Shaikh et al. 2023). Although numerical methods are widely accepted because of their high accuracy and reliability, it is important to acknowledge that achieving high-fidelity simulations can be computationally intensive and time-consuming (He et al. 2023). This is mainly due to the nature of the numerical methods, which often involve considerations of scheme complexity, mesh convergence, and sometimes the requirement of a full momentum solver instead of a diffusion wave solver.

In the current era of big data, deep learning models have in recent years also achieved remarkable success in hydrology and hydraulics (Assem et al. 2017; Hosseiny et al. 2020; Cai et al. 2022; Jiang et al. 2022; Park et al. 2022; Shi et al. 2023; Yin et al. 2023). These models all concentrate on predicting water depths at specific observation points, as those points provide their training datasets. However, because flood mapping outputs are two-dimensional, acquiring an appropriate training dataset solely from observational data at water stations is challenging. This issue fundamentally poses a barrier to the effective implementation of deep learning models for the flood mapping problem. As a consequence, the use of deep learning models for generating spatially varied flood maps remained largely unexplored until around 2022, with only a limited number of studies delving into this area (Bentivoglio et al. 2022). To overcome the issue, some earlier research transformed the problem into a classification task, employing geographic information system digital elevation data to predict the likelihood of flooding for individual cells or pixels (Bui et al. 2020; Nemni et al. 2020; Muñoz et al. 2021). Although this approach may overcome the issue and simplify the problem, it also substantially reduces the amount of information conveyed by the flood maps.

A novel approach has emerged in the field of fluid dynamics, gaining rapid popularity, which integrates super-resolution networks with numerical modeling (Pourbagian & Ashrafizadeh 2022; Bao et al. 2023; Long et al. 2023; Xu et al. 2023; Yasuda & Onishi 2023). However, the application of this super-resolution method in hydrology and hydraulics has remained almost nil. In the context of flood map prediction, the work of He et al. (2023) stands as the only paper that employs this methodology. He et al. (2023) use 2D hydrodynamic models to generate data for both coarse and fine grids, then apply a deep learning model to enhance the resolution of the coarse-grid results to match the fine grid. However, their approach has two noteworthy limitations. First, the grid resolution used in their 2D hydrodynamic model is too coarse, even in the finest grid setting of 30 m resolution. The authors themselves acknowledge that such coarse grids might fail to capture certain important flow physics, potentially limiting the precision of their super-resolution outcomes. Second, their study area is a medium-sized watershed in a rural region with an elevation difference exceeding 900 m. Despite this considerable topographic variation, the study still encounters notable prediction errors. In most urban areas, however, elevation differences are considerably smaller, so whether this method can still accurately capture a street-level flood map at high resolution warrants further investigation. Therefore, the exploration of super-resolution techniques for generating high-fidelity flood maps in urban areas remains incomplete.

The primary objective of this study is to introduce an approach capable of fast and accurate modeling of urban riverine flood maps with water depth information. A rapid and accurate response is crucial when preparing for a real hurricane or flood event. The proposed methodology combines the strengths of numerical methods and neural networks to establish a framework that consistently enhances low-fidelity simulation results into high-fidelity outcomes. To achieve this, first, we construct a U-Net architecture and assess its performance in urban flooding scenarios. Second, since the generative adversarial network (GAN) model has shown superior performance in computer vision, we adapt and evaluate the GAN model's performance in the context of urban flooding. Lastly, we investigate the models' sensitivity to input resolution, probing whether the proposed methods can maintain high accuracy and effectiveness with lower-resolution input data.

Study area

The Miami River, one of the region's most significant waterways, originates from the Everglades, flows through Downtown Miami, and joins the Biscayne Bay system in southern Florida, USA. A large number of businesses and industries are built along the river, making it one of the most densely populated urban waterway systems. The downstream Miami River region lies in the coastal zone, which is notably vulnerable to severe hurricanes accompanied by heavy precipitation and flooding (Azzi et al. 2020). Given its significant economic importance, dense population, and elevated hurricane risk, it is a suitable case study for this paper. The geographic location of the downstream Miami River is shown in Figure 1.
Figure 1

Geographic map of the downstream Miami River.

Figure 2 illustrates the layout of the study region. The underlying framework of Figure 2 is a digital elevation model (DEM) obtained from the Florida Geographic Information Office. In our study area, shown in the red box, elevations span from −2 to 40 ft. The bathymetric data, sourced from the South Florida Water Management District (SFWMD), are contained within a distinct DEM. The two DEMs are integrated within the RAS Mapper. Our study area contains 5.6 miles of the downstream Miami River and two tributaries: C4 (upper) and C6 (lower). Five water stations, indicated by red triangles, are situated in our study area. These stations serve as boundary conditions and validation points for our numerical model, HEC-RAS. The upstream water stations, namely, S25A, S25B, and S26, have multiple hydraulic structures, including spillway gates, culverts, and pumps. The flow regulated by these structures is logged at 3-min intervals, and these recorded flow rates serve as boundary conditions in our HEC-RAS model. Additionally, water station S1, located at the center of the study area, records water stages and serves as a validation location for our HEC-RAS model.
Figure 2

Schematic representations of the study area.


Data preparation

Data generation and extraction

Both low-resolution and high-resolution simulation results are computed by 2D HEC-RAS in this paper. The locations of the boundary conditions are explained in the previous section, and all time-series boundary conditions and validation data are acquired from DBHYDRO, maintained by the SFWMD. In this study, we planned to use 40 training cases to ensure a sufficiently large training dataset; however, finding a comparable number of significant flooding events from the past 10 to 20 years is challenging. Therefore, we decided to artificially generate some boundary conditions for training purposes. For the three upstream flow conditions, we selected the 10 largest flow-rate hydrographs at each location from the past 10 years and applied a Gaussian distribution to generate 40 sets of different flow-rate hydrograph inputs. Notably, the 10 flow hydrographs differ in length because each event is defined by its corresponding rainfall data. Regarding the downstream water stage conditions, manipulating the time-series pattern is not feasible because it reflects the actual tide wave. Thus, we chose the highest annual tide stage from each of the past 10 years. These tide stage hydrographs were then replicated three times and randomly distributed across the 40 training sets. For the testing dataset, we used two historical hurricane events: Hurricane Irma from 2017 and Tropical Storm Isaias from 2020. Both high-resolution test-case simulations are validated at water station S1.
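The Gaussian-based generation of synthetic boundary conditions can be sketched as follows. The paper does not specify the noise model, so multiplicative Gaussian noise with a 10% coefficient of variation is an assumption here, and `perturb_hydrographs` is a hypothetical helper name:

```python
import numpy as np

def perturb_hydrographs(hydrographs, n_sets=40, cv=0.1, seed=0):
    """Generate synthetic flow-rate hydrographs by applying Gaussian
    perturbations to historical events (one plausible reading of the
    paper's procedure; the exact noise model is an assumption).

    hydrographs : list of 1-D arrays, the largest historical events
    n_sets      : number of synthetic boundary-condition sets to produce
    cv          : coefficient of variation of the multiplicative noise
    """
    rng = np.random.default_rng(seed)
    synthetic = []
    for k in range(n_sets):
        base = hydrographs[k % len(hydrographs)]       # cycle through events
        noise = rng.normal(1.0, cv, size=base.shape)   # multiplicative Gaussian
        synthetic.append(np.clip(base * noise, 0.0, None))  # flow stays non-negative
    return synthetic

# Toy hydrographs of different lengths (events are defined by their rainfall data)
events = [np.array([10., 50., 120., 80., 20.]),
          np.array([5., 60., 90., 30.]),
          np.array([8., 40., 150., 100., 60., 15.])]
sets = perturb_hydrographs(events, n_sets=40)
```

Because the hydrographs differ in length, each synthetic set keeps the duration of the historical event it was derived from.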

The mesh size used for the low-resolution simulations is 150 ft, while the mesh size for the high-resolution simulations is set to 20 ft. This results in a significant difference in total cell count, with the high-resolution count being 56 times larger than the low-resolution count. Additionally, we added a refinement along the riverbank to enhance the reliability of the high-resolution simulation. Regarding the numerical solver, it is essential to use a full momentum equation solver for situations influenced by tidal conditions (USACE 2018). Therefore, we used the full momentum equation solver to uphold the high-fidelity nature of the high-resolution cases, while we used the diffusion wave equation solver in the low-resolution simulations to ensure computational efficiency. The outputs of the 2D HEC-RAS simulations at nodes and faces are automatically stored in HDF5 format by default, so we employed a Python script to retrieve the saved simulation results from all cases.
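The reported 56-fold cell-count ratio follows directly from the two mesh sizes and is consistent with the grid dimensions reported later (161 × 96 coarse versus 1208 × 719 fine); a quick arithmetic check:

```python
# Refining each dimension by 150/20 = 7.5 multiplies the cell count by 7.5**2,
# matching the paper's "56 times larger" claim.
coarse_size_ft = 150.0
fine_size_ft = 20.0
ratio = (coarse_size_ft / fine_size_ft) ** 2   # 56.25

# Cross-check against the stated grid dimensions.
coarse_cells = 161 * 96      # 15,456 cells
fine_cells = 1208 * 719      # 868,552 cells
dims_ratio = fine_cells / coarse_cells   # ~56.2 (riverbank refinement adds more)
```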

Data preprocessing

The primary determinant of flooding in a given area is the water depth; therefore, the water depth from the high-fidelity simulation is our prediction variable. Although the water depth derived from low-fidelity simulations might not be entirely accurate, it can still contribute some basic information, so it is used as one of the input variables to the neural networks. Furthermore, including elevation and slope data provides additional useful information and improves the neural networks' performance. This reflects the general trend of lower water depths at higher elevations and the intuitively higher acceleration of water flow in steeper areas. Hence, the high-resolution elevation and slope data are also integrated as input parameters. Obtaining and calculating high-resolution elevation and slope data does not require any simulation, since these are fixed characteristics of the study area. The total dataset thus includes X and Y coordinates, water depths from both low and high resolutions, the elevation data from the DEM, and the slope calculated from the elevation data. The slope is calculated from the elevation data as follows:
$$\text{slope} = \arctan\!\left(\sqrt{\left(\frac{\partial z}{\partial x}\right)^{2} + \left(\frac{\partial z}{\partial y}\right)^{2}}\right) \quad (1)$$

$$\frac{\partial z}{\partial x} = \frac{\left(z_{i+1,j-1} + 2z_{i+1,j} + z_{i+1,j+1}\right) - \left(z_{i-1,j-1} + 2z_{i-1,j} + z_{i-1,j+1}\right)}{w_x \,\Delta x} \quad (2)$$

$$\frac{\partial z}{\partial y} = \frac{\left(z_{i-1,j+1} + 2z_{i,j+1} + z_{i+1,j+1}\right) - \left(z_{i-1,j-1} + 2z_{i,j-1} + z_{i+1,j-1}\right)}{w_y \,\Delta y} \quad (3)$$

where $x$ is the horizontal coordinate, $y$ is the vertical coordinate, $z$ is the elevation value at each pixel, $i$ is the horizontal index, $j$ is the vertical index, $\Delta x$ and $w_x$ are the horizontal cell size and weighted count of valid cells, and $\Delta y$ and $w_y$ are the vertical cell size and weighted count of valid cells ($w_x = w_y = 8$ when all eight neighbors are valid).
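The weighted-difference stencil above corresponds to Horn's method, the standard GIS slope formulation; a minimal numpy sketch, assuming all cells are valid ($w_x = w_y = 8$) and excluding border cells for brevity:

```python
import numpy as np

def horn_slope(z, dx=1.0, dy=1.0):
    """Slope (radians) on interior cells via Horn's 3x3 weighted-difference
    stencil, assumed to match Equations (1)-(3). Rows are the vertical index j,
    columns the horizontal index i; all cells are assumed valid."""
    # dz/dx: weighted (1, 2, 1) column differences across each interior cell
    dzdx = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:])
            - (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8.0 * dx)
    # dz/dy: weighted (1, 2, 1) row differences across each interior cell
    dzdy = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:])
            - (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8.0 * dy)
    return np.arctan(np.sqrt(dzdx ** 2 + dzdy ** 2))
```

For a plane tilted only in the x-direction with gradient 0.3, every interior cell should return arctan(0.3), which is a convenient sanity check.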
Thus far, the input and output variables do not share the same dimensions. Such inconsistent input dimensions yield imbalanced data, making it difficult for neural networks to generate accurate predictions. Relying solely on convolutional neural networks to reconcile inconsistent dimensions between input and output variables is also challenging: in most practical applications, the output dimensions are not integer multiples of the input dimensions, and padding large numbers of zero values significantly degrades model performance. It is worth noting that even the high-resolution water depth data might still require cubic interpolation to align with their own nominal resolution; this step is necessary wherever refinement grids have been used. The detailed data processing framework is shown in Figure 3.
Figure 3

Data preprocessing and the neural network framework.


As presented in Figure 3, the lower-resolution water depth data derived from the low-fidelity simulations have dimensions of 161 × 96, whereas the elevation data from the DEM and the slope data calculated from them are on an even finer raster than the high-fidelity simulation grid. Therefore, it is imperative to interpolate all inputs onto consistent dimensions, as indicated by the blue arrow. Following this interpolation, the three input variables are combined into three channels, akin to the RGB (red, green, and blue) channels used in computer vision. During the training phase, the high-resolution water depth data serve as labels for computing the loss against the model's predicted values so that the model can keep updating its learnable parameters.
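The interpolate-and-stack step can be sketched as below. The paper uses cubic interpolation; bilinear is shown here to keep the example dependency-free, and the array sizes are scaled-down stand-ins for the real grids:

```python
import numpy as np

def bilinear_resize(a, out_shape):
    """Resize a 2-D array to out_shape with bilinear interpolation
    (a simpler stand-in for the cubic interpolation used in the paper)."""
    h, w = a.shape
    H, W = out_shape
    ys = np.linspace(0, h - 1, H)          # sample rows in the source grid
    xs = np.linspace(0, w - 1, W)          # sample columns in the source grid
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = a[y0][:, x0] * (1 - wx) + a[y0][:, x1] * wx
    bot = a[y1][:, x0] * (1 - wx) + a[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Coarse depth plus finer elevation/slope rasters, all brought onto one target
# grid and stacked as three channels, analogous to RGB in computer vision.
target = (120, 72)                       # stand-in for the 1208 x 719 fine grid
depth_lr  = np.random.rand(16, 10)       # stand-in for the 161 x 96 coarse grid
elevation = np.random.rand(240, 144)     # DEM raster, finer than the fine grid
slope     = np.random.rand(240, 144)
x = np.stack([bilinear_resize(depth_lr, target),
              bilinear_resize(elevation, target),
              bilinear_resize(slope, target)], axis=0)   # shape (3, 120, 72)
```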

Neural network model architecture

U-Net architecture

The U-Net architecture, known for its encoder–decoder structure, consistently demonstrates its ability to produce favorable results efficiently and swiftly across a range of vision tasks. A major advantage of the U-Net lies in its capacity to perform well without the need for a large training dataset or extensive graphics processing unit (GPU) memory. A key factor contributing to the success of the U-Net is its incorporation of data augmentation techniques, such as elastic deformation, enabling the deep neural network to effectively learn from diverse input data variations, even when working with a limited number of annotated images.

Figure 4 provides an overview of the U-Net architecture, so named for its distinctive U-shaped design. This architecture consists of three core elements: (1) the left side corresponds to the contracting path, featuring a convolutional neural network (CNN) structure with multiple consecutive convolution layers, each incorporating a residual block (ResBlock); (2) the right side represents the expansive path, characterized by upsampling layers that replace pooling layers to generate higher-resolution outputs from lower-level features; and (3) the skip connections between the contracting and expansive paths, which create alternative short paths between the two sides. Within the contracting path, each ResBlock combines its input with its convolution output through an identity skip, integrating features across levels to extract more comprehensive information. By contrast, the expansive path employs up-convolutions, concatenation operations, and skip connections to create a symmetric U-shaped structure, ultimately yielding segmentations at full resolution.
Figure 4

U-Net structure.


In this paper, we adopt the Res-U-Net structure, which employs ResBlocks to enhance information flow and address the vanishing-gradient problem. Our implementation is organized into four distinct resolution steps on each side, denoted by the blue-colored layers in Figure 4. Each ResBlock consists of a pair of consecutive 3 × 3 convolutions. On the right side, each layer includes a 2 × 2 transpose convolution layer along with two 3 × 3 convolutions. The rectified linear unit (ReLU) serves as the activation function throughout the framework.
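The ResBlock's identity skip, which is what eases gradient flow, can be illustrated with a minimal single-channel numpy sketch (a real implementation would use a deep learning framework with learned multi-channel kernels; the function names here are illustrative):

```python
import numpy as np

def conv3x3_same(x, k):
    """Single-channel 3x3 'same' convolution with zero padding, written out
    explicitly for clarity."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def resblock(x, k1, k2):
    """Residual block as in the Res-U-Net: two consecutive 3x3 convolutions
    with ReLU, plus an identity skip that adds the block's input to its
    output, easing gradient flow through deep stacks."""
    h = np.maximum(conv3x3_same(x, k1), 0.0)   # first conv + ReLU
    h = conv3x3_same(h, k2)                    # second conv
    return np.maximum(x + h, 0.0)              # identity skip, then ReLU
```

With zero kernels the convolutions contribute nothing and the block reduces to ReLU of its input, which makes the skip connection's role explicit.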

Generative adversarial networks architecture

The GAN architecture comprises two main deep neural network (DNN) structures: the generator and the discriminator. The generator acts analogously to the U-Net, taking low-resolution images as input and transforming them into high-resolution output; the generator architecture used in this paper is shown in Figure 4. The discriminator is a novel component that compares generated maps against real high-resolution data, classifies images as real (close to the real flood map) or fake (far from the real flood map), and feeds this signal back to the generator. As illustrated in Figure 5, the discriminator in this work consists of five convolution layers, each comprising a convolution operation followed by a parametric ReLU (PReLU) activation.
Figure 5

Discriminator structure.

Previous objective functions in DNN structures typically focused on optimizing the mean squared reconstruction error, overlooking perceptual quality and image fidelity (Ledig et al. 2017). The GAN model addresses this by incorporating two loss functions during optimization: (1) a content (perceptual) loss and (2) an adversarial loss. The total loss calculation is given in Equations (4) to (6). The content loss measures the Euclidean distance between the produced data and the label data, while the adversarial loss assesses how realistic the generator's output appears. This sets up a competition between the generator and the discriminator: the generator creates a high-resolution image and tries to convince the discriminator, while the discriminator rejects any output it can distinguish from the real data. This competition can potentially yield a more realistic high-resolution flood map.
$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{content}} + \lambda \, \mathcal{L}_{\text{adv}} \quad (4)$$

$$\mathcal{L}_{\text{content}} = \frac{1}{n} \sum_{i=1}^{n} \left(\hat{y}_i - y_i\right)^{2} \quad (5)$$

$$\mathcal{L}_{\text{adv}} = -\sum_{i=1}^{n} \log D\!\left(G(x)_i\right) \quad (6)$$

where $n$ is the total number of cell points, $\hat{y}_i = G(x)_i$ stands for the generator-predicted value on each cell, $y_i$ represents the ground truth value on each cell, $D(\cdot)$ is the discriminator's estimated probability that its input is real, and $\lambda$ weights the adversarial term.
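The combined generator objective can be sketched as follows. The adversarial weight of 1e-3 follows the value popularized by SRGAN (Ledig et al. 2017) and is an assumption here, as is the function name:

```python
import numpy as np

def gan_generator_loss(pred, truth, d_out, lam=1e-3, eps=1e-8):
    """Generator objective sketched from Equations (4)-(6): pixel-wise MSE
    content loss plus an adversarial term rewarding outputs the discriminator
    judges to be real. lam = 1e-3 is an assumed SRGAN-style weight.

    pred, truth : predicted and ground-truth depth maps (same shape)
    d_out       : discriminator probabilities D(G(x)) in (0, 1)
    """
    content = np.mean((pred - truth) ** 2)        # Euclidean (MSE) content loss
    adversarial = -np.sum(np.log(d_out + eps))    # -log D(G(x)) over samples
    return content + lam * adversarial
```

A generator output that fools a confident discriminator (D near 1) contributes almost nothing to the loss, while a low D inflates the loss and pushes the generator toward more realistic maps.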

Results visualization

Hurricane Irma produced the most substantial rainfall and the most severe flooding in the past 10 years. Figure 6 presents the machine learning predicted flood map under Hurricane Irma's impact. The background of Figure 6 is digital elevation information, where darker shading means higher elevation. In terms of flood depth, the low-resolution simulation has a resolution of 161 by 96, while the fine-grid simulation has a resolution of 1208 by 719, roughly a 56-fold resolution increase. The issues with the simulated water depth from the coarse grid include not only a lack of resolution but also a significant overestimation of the flooded area. It is also evident that a substantial portion of the flow physics cannot be captured in the low-resolution simulation, mainly for two reasons. First, low-resolution simulations lack sufficient grid resolution to accurately resolve flow characteristics in local areas. Second, the low-resolution simulations are constrained to solve the diffusion wave equation, while a full momentum equation solver is essential in this domain, where tides have a significant influence. Therefore, our machine learning model must not only enhance output resolution but also recover the missing physics.
Figure 6

Simulated and predicted flood map for the Hurricane Irma event: (a) simulated water depth from a coarse grid; (b) simulated water depth from a fine grid; (c) U-Net model, predicted water depth; and (d) GAN model, predicted water depth.


As Figure 6(c) and 6(d) present, both the U-Net and GAN models successfully enhance the solution resolution and accurately fill in the missing information. The difference between the flood map generated by the machine learning models and the high-resolution simulations (considered ground truth) is extremely small, and it is challenging to distinguish even by eye. Furthermore, the machine learning models have captured all the intricate details of flood depths at the street level, making it feasible to identify which streets are affected by flooding in the machine learning-generated map.

Tropical Storm Isaias, categorized as a destructive Category 1 hurricane, brought with it considerably less rainfall and caused fewer flooding issues. The simulated and machine learning predicted flood map for Tropical Storm Isaias is shown in Figure 7. The potential flood risk is only concentrated along the riverbank. Similar to the results observed during Hurricane Irma, the low-resolution simulation tends to overestimate water depth in this case as well. Notably, the low-resolution simulation exhibits a significant misprediction at the middle part of the river. However, this problematic data point does not undermine the accuracy of our machine learning model. Our machine learning model successfully rectified the issue and exhibited a good agreement with the fine-resolution simulation in this minor flood scenario.
Figure 7

Simulated and predicted flood map for the Tropical Storm Isaias event: (a) simulated water depth from the coarse grid; (b) simulated water depth from the fine grid; (c) U-Net model predicted water depth; and (d) GAN model predicted water depth.

Figure 8 presents the simulated and predicted water depths along the river and its tributaries. In Figure 8, the left column depicts water depths during the Hurricane Irma scenario, while the right column illustrates water depths during the Tropical Storm Isaias scenario. The first, second, and third rows correspond to water depths along the main riverbed, upper tributary, and lower tributary, respectively. The x-axis represents the distance along the river or tributary.
Figure 8

Simulated and predicted water depth along the river and tributaries: (a) main river water depth under Hurricane Irma; (b) main river water depth under Tropical Storm Isaias; (c) upper tributary under Hurricane Irma; (d) upper tributary under Tropical Storm Isaias; (e) lower tributary under Hurricane Irma; and (f) lower tributary under Tropical Storm Isaias.


There is a significant difference between the low-resolution and high-resolution simulation results, particularly noticeable in the two tributaries. The water depth produced by the low-resolution simulation tends to be unstable due to the limited number of grid points. This issue is apparent in Figures 6 and 7, which reveal that only one to two computational cells span the river, while the elevation changes rapidly across those cells; this is the major reason for the significant mismatch. Nevertheless, our machine learning models effectively address this issue. As shown in Figure 8, both the U-Net and GAN models provide water depth results closely aligned with those from the high-resolution simulation.

Performance and error analysis

As demonstrated in the previous section, the difference between the predicted flood maps from both the U-Net and GAN models and the simulated water depth based on fine grids is minimal and not easily distinguished through visual inspection alone. Therefore, the predicted versus actual plot is presented in Figure 9. The majority of the discrepancies occur within the 0–2 ft range. It is anticipated that predicting water depths during a Hurricane Irma-like event will yield a larger margin of error compared with a Tropical Storm Isaias-like event, primarily because predicting near-zero depths is inherently more challenging. Nevertheless, the overall predicted outcomes exhibited strong alignment with all test datasets.
Figure 9

Predicted versus actual scatter. Top: Hurricane Irma. Bottom: Tropical Storm Isaias. Left: U-Net model. Right: GAN model.

Accuracy, precision, and recall serve as well-known metrics for evaluating flood map predictions (Anbarasan et al. 2020). These parameters are computed from the counts of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). TP corresponds to the total count of cells that were both simulated as flooding and correctly predicted as flooding. FP is the total count of cells that were simulated as not flooding but incorrectly predicted as flooding. FN is the total count of cells that were simulated as flooding but incorrectly predicted as not flooding. TN is the total count of cells that were simulated as not flooding and correctly predicted as not flooding. The values of these four counts for both models under the two test events are illustrated in Figure 10.
Figure 10

True negative, false positive, true positive, and false negative values for the test case.

Accuracy is a measure of the overall correctness of flood detection, expressed as Equation (7). Precision evaluates the reliability of the predicted flooding cells: it is the ratio of correctly predicted flooding cells to the total number of cells predicted as flooding, as written in Equation (8). Recall, by contrast, assesses how completely the flooded cells are retrieved: it is the ratio of successfully predicted flooding cells to the total number of flooding cells, calculated as Equation (9).
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (7)$$

$$\text{Precision} = \frac{TP}{TP + FP} \quad (8)$$

$$\text{Recall} = \frac{TP}{TP + FN} \quad (9)$$
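Computing these metrics from a pair of depth maps can be sketched as below; the flooding threshold (depth greater than zero) and the function name are assumptions, as the paper does not state a cutoff:

```python
import numpy as np

def flood_classification_metrics(sim_depth, pred_depth, threshold=0.0):
    """Accuracy, precision, and recall per Equations (7)-(9), treating a cell
    as 'flooding' when its depth exceeds the threshold (threshold choice is
    an assumption here)."""
    sim = np.asarray(sim_depth) > threshold    # simulated (ground-truth) mask
    pred = np.asarray(pred_depth) > threshold  # predicted mask
    tp = np.sum(sim & pred)                    # flooding, predicted flooding
    tn = np.sum(~sim & ~pred)                  # dry, predicted dry
    fp = np.sum(~sim & pred)                   # dry, predicted flooding
    fn = np.sum(sim & ~pred)                   # flooding, predicted dry
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall
```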

Table 1 presents the performance of the proposed method in the test scenarios. The GAN model exhibited superior performance compared with the U-Net model in terms of mean absolute error (MAE) and root mean square error (RMSE). This is expected, since the GAN model's generator is based on the U-Net architecture, and the discriminator network provides additional feedback that helps the generator perform better. However, the GAN model lagged behind the U-Net model in precision, indicating that the GAN model tends to slightly overestimate the flooded area. This overestimation is not necessarily detrimental in engineering practice, as it provides a higher safety factor.
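The two depth-error metrics in Table 1 are computed over all cells; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def mae_rmse(pred, truth):
    """Mean absolute error and root mean square error over all cells, the two
    depth-error metrics reported in Table 1 (units follow the inputs, ft in
    the paper)."""
    err = np.asarray(pred, dtype=float) - np.asarray(truth, dtype=float)
    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))
```

RMSE penalizes large local errors more heavily than MAE, which is why the two metrics can rank models differently on maps with isolated mispredictions.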

Table 1

Performance of the proposed method under the real test storm events

                 Hurricane Irma          Tropical Storm Isaias
                 U-Net      GAN          U-Net      GAN
MAE [ft]         0.00219    0.00133      0.00048    0.00046
RMSE [ft]        0.01075    0.00847      0.00409    0.00393
Accuracy         0.99890    0.99912      0.99985    0.99984
Precision        0.99867    0.99715      0.99947    0.99923
Recall           0.99020    0.99392      0.99537    0.99505

Coarse-grid resolution study

The selection of the low-resolution simulation grid size can impact the performance of our proposed model, because a coarser grid provides the neural network with less input information, in terms of both quantity and accuracy. To investigate this effect, we conducted tests using three additional, coarser grid resolutions: 96 by 57, 60 by 36, and 40 by 24. Figure 11 compares the original low-resolution simulation results with the simulation results obtained using these three additional grid sizes, visualized for the Hurricane Irma event. As the mesh size increases, the overestimation of the flooded area grows, and the flow physics becomes less defined.
Figure 11

Simulated low-resolution water depth with different grid resolutions: (a) original resolution: 161 by 96; (b) 96 by 57; (c) 60 by 36; and (d) 40 by 24.


The performance of the proposed method with different coarse-grid sizes under the Hurricane Irma event is summarized in Table 2. Both the U-Net and GAN models lose performance as the input low-resolution mesh becomes coarser, but the decline is more pronounced for the U-Net model: its MAE and RMSE with the coarsest mesh are roughly four times higher than with the original mesh. The GAN model's performance drops when the grid is first coarsened but remains stable as the grid coarsens further.

Table 2

Performance of the proposed method with different coarse-grid sizes under the Hurricane Irma event

                   161 by 96   96 by 57    60 by 36   40 by 24
U-Net  MAE [ft]    0.00219     0.00260     0.00698    0.00997
       RMSE [ft]   0.01075     0.011385    0.02570    0.03670
       R2          0.99998     0.99998     0.99990    0.99981
GAN    MAE [ft]    0.00133     0.00263     0.00269    0.00280
       RMSE [ft]   0.00847     0.01410     0.01389    0.01398
       R2          0.99999     0.99997     0.99997    0.99997

Advantages of the proposed method

The biggest advantage of the proposed approach is that it combines speed with high accuracy. As Table 3 shows, a low-resolution simulation takes around 40 s but lacks accuracy and reliability. By contrast, a high-fidelity, high-resolution simulation consumes 7–8 h, which can severely compress decision-making and response time when a major storm event is approaching. With numerical methods alone, it is therefore nearly impossible to achieve speed and accuracy at the same time. The proposed method acts as a 'converter' that consistently transforms low-resolution simulation outcomes into high-resolution results. The total time required to obtain high-fidelity results equals the time spent on the low-resolution simulation plus the neural network processing time, approximately 50 s in our case. This makes it feasible to attain a resolution on the order of a million grid points within a minute, opening the door to real-time prediction and digital twins after further development.

Table 3

Comparison of computation time

                   Average low-resolution   Average high-resolution   Proposed method
                   simulation               simulation
Computation time   40 s                     7.5 h                     40 + 10 s
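The arithmetic behind Table 3 can be made explicit. Using the timings reported there, the end-to-end speedup of the proposed pipeline over a direct high-resolution simulation works out as follows:

```python
# Timing figures from Table 3.
coarse_sim_s = 40            # average low-resolution HEC-RAS simulation
inference_s = 10             # neural-network processing step
high_res_sim_s = 7.5 * 3600  # average high-resolution simulation (7.5 h)

total_s = coarse_sim_s + inference_s  # 50 s end to end
speedup = high_res_sim_s / total_s    # roughly 540x faster
```

The roughly 540-fold reduction is what turns an overnight run into a sub-minute one and makes real-time use plausible.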

Another notable advantage of the proposed method is its efficient information representation. Unlike conventional neural networks, which require a large number of boundary conditions and operational variables, our method relies on just three matrices of physical variables as input. Boundary conditions and operational variables such as rainfall patterns, tide patterns, and hydraulic structure operation states typically exhibit extremely high variability, and training a neural network to capture all possible patterns in these variables is exceedingly challenging; it usually demands a huge amount of training data. High-quality data in the hydrology and hydraulics fields are limited, so training with billions of input samples, as is common in computer vision or natural language processing, is impractical for these problems. In our framework, however, the input to the neural networks can be understood as input 'preprocessed' by HEC-RAS, which greatly reduces its variability: the coarse HEC-RAS model absorbs the high-variability variables and converts them into low-variability flood map information. The range of possible flood map patterns along the riverine system is limited and follows physical rules; for instance, water consistently accumulates at lower elevations. The neural networks can rapidly capture these simpler patterns and deliver highly accurate results. This is also the main reason for the success achieved with just 40 sets of training data, a quantity typically deemed far too small in conventional deep learning frameworks.

Why did the proposed method succeed in flooding prediction?

There are two major reasons why the proposed method achieves such high accuracy with a very limited training dataset.

The primary factor is efficient information representation, as discussed above. In contrast to traditional super-resolution tasks, such as enhancing the resolution of 10,000 different types of images, our target, the flood map, exhibits much lower variability. With the assistance of HEC-RAS, the initially high-variability input data are transformed into flood map data, whose potential shapes and dimensions are highly constrained. Therefore, if the training data cover a wide range of flood levels and are well distributed, extrapolation issues can be largely eliminated, and interpolation predictions tend to yield much higher accuracy in almost all machine learning settings.

The second major reason is that, from the perspective of the neural networks' functionality, we effectively converted a super-resolution problem into a denoising problem. Nearly all conventional super-resolution networks in computer vision deal with three-channel RGB images, whereas our framework operates directly on scalar matrices. RGB pixel values are often unsuitable for numerical interpolation, but our 2D scalar matrices allow such operations. Consequently, most super-resolution frameworks require upsampling within the network to increase resolution, whereas ours does not: the input and output dimensions are identical. This uniformity lets us pass low-level feature maps to the later stages of the network through skip connections, which preserves more low-level information and often yields a significant improvement in model performance.
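Because the input and output grids share the same dimensions, the coarse simulation result must be interpolated to the target resolution before entering the network. The paper does not specify the interpolation scheme, so the sketch below assumes bilinear interpolation on a 2D scalar depth matrix as one plausible preprocessing step:

```python
import numpy as np

def upsample_bilinear(coarse, shape):
    """Bilinearly interpolate a coarse scalar grid (e.g. simulated water
    depths) to the target high-resolution shape, so the network's input
    and output dimensions match and no in-network upsampling is needed."""
    h, w = coarse.shape
    H, W = shape
    ys = np.linspace(0, h - 1, H)          # fractional row coordinates
    xs = np.linspace(0, w - 1, W)          # fractional column coordinates
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]                # row interpolation weights
    wx = (xs - x0)[None, :]                # column interpolation weights
    c00 = coarse[np.ix_(y0, x0)]
    c01 = coarse[np.ix_(y0, x0 + 1)]
    c10 = coarse[np.ix_(y0 + 1, x0)]
    c11 = coarse[np.ix_(y0 + 1, x0 + 1)]
    return (c00 * (1 - wy) * (1 - wx) + c01 * (1 - wy) * wx
            + c10 * wy * (1 - wx) + c11 * wy * wx)

# In the denoising view, the network then predicts a correction on this
# already-upsampled field, e.g. refined = upsampled + net(upsampled),
# with skip connections carrying low-level features forward.
```

Because the upsampled field already has the target dimensions, the network only has to remove interpolation artefacts rather than synthesize resolution, which is the denoising framing described above.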

Limitations of this work

There are several limitations to this work. First, the study area examined here is tide-dominant: the magnitude and gradient of the downstream tide condition have more influence than those of the upstream flow rates. Flow rate-dominant domains were not investigated owing to length and time constraints, and they could yield different model performance. Second, although the methodology offers rapid training and testing, it demands a certain amount of GPU memory, which scales directly with the number of grid points at the target resolution; in our experience, an 8 GB GPU can accommodate up to about one million grid points. Super-resolution of very large riverine systems therefore calls for GPUs with larger memory capacities or alternative methods. Lastly, the current work falls under the category of 'small models' in computer science and artificial intelligence: each model works only in a specific domain. Generalizing these findings into a 'large model' capable of addressing all types of riverine systems remains a formidable challenge with the existing architecture and may require structures such as the Swin Transformer, Vision Transformer (ViT), or diffusion models.

This paper presents a super-resolution-assisted framework that can rapidly and accurately model riverine flood maps. The proposed method combines the strengths of numerical simulation and neural networks: it produces a high-fidelity result from a low-resolution numerical simulation, shortening the total computation time from many hours to approximately 1 min. This drastic reduction makes real-time prediction feasible, which is critically important in real hurricane or flood preparation scenarios. The model's performance has been evaluated, and accuracy sufficient for engineering practice is achieved. The findings are as follows:

  1. In our study area, both the U-Net and GAN models have demonstrated the capability to dramatically decrease the computation time required to generate high-fidelity results, reducing it from 7–8 h to 1 min. Furthermore, the accuracy of both models is notably high, as evidenced by the MAE values: the U-Net model yielded MAEs ranging from 0.00048 to 0.00219 ft, while the GAN model produced MAEs within the range of 0.00046–0.00133 ft.

  2. The GAN model displays a lower sensitivity to changes in input resolution than the U-Net model, although both models exhibit a decrease in performance as the input resolution decreases.

  3. The proposed method extends the application of super resolution beyond its traditional use in computer vision. It effectively recovers information lost because of the large grid size of the low-resolution geometry.

The authors gratefully acknowledge the financial support from the National Science Foundation under Grant CBET 2203292. Also, the authors are grateful to the anonymous reviewers for their constructive comments, which helped to significantly improve the quality of the manuscript.

All relevant data are included in the paper or its Supplementary Information.

The authors declare there is no conflict of interest.

Anbarasan, M., Muthu, B., Sivaparthipan, C. B., Sundarasekar, R., Kadry, S., Krishnamoorthy, S. & Dasel, A. A. 2020 Detection of flood disaster system based on IoT, big data and convolutional deep neural network. Computer Communications 150, 150–157.
Assem, H., Ghariba, S., Makrai, G., Johnston, P., Gill, L. & Pilla, F. 2017 Urban water flow and water level prediction based on deep learning. In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2017, Skopje, Macedonia, September 18–22, 2017, Proceedings, Part III 10. Springer International Publishing, pp. 317–329.
Azzi, Z., Matus, M., Elawady, A., Zisis, I., Irwin, P. & Gan Chowdhury, A. 2020 Aeroelastic testing of span-wire traffic signal systems. Frontiers in Built Environment 6, 111.
Bao, K., Zhang, X., Peng, W. & Yao, W. 2023 Deep learning method for super-resolution reconstruction of the spatio-temporal flow field. Advances in Aerodynamics 5 (1), 1–16.
Bentivoglio, R., Isufi, E., Jonkman, S. N. & Taormina, R. 2022 Deep learning methods for flood mapping: A review of existing applications and future research directions. Hydrology and Earth System Sciences 26 (16), 4345–4378.
Bui, D. T., Hoang, N. D., Martínez-Álvarez, F., Ngo, P. T. T., Hoa, P. V., Pham, T. D., Samui, P. & Costache, R. 2020 A novel deep learning neural network approach for predicting flash flood susceptibility: A case study at a high frequency tropical storm area. Science of The Total Environment 701, 134413.
Cred, U. 2018 Economic Losses, Poverty & Disasters 1998–2017. Université Catholique de Louvain (UCL), Brussels, Belgium, p. 33.
Donat, M. G., Lowry, A. L., Alexander, L. V., O'Gorman, P. A. & Maher, N. 2016 More extreme precipitation in the world's dry and wet regions. Nature Climate Change 6 (5), 508–513.
FLO-2D 2018 FLO-2D Pro Version: Two-Dimensional Flood Routing Model.
Haltas, I., Yildirim, E., Oztas, F. & Demir, I. 2021 A comprehensive flood event specification and inventory: 1930–2020 Turkey case study. International Journal of Disaster Risk Reduction 56, 102086.
Hosseiny, H., Nazari, F., Smith, V. & Nataraj, C. 2020 A framework for modeling flood depth using a hybrid of hydraulics and machine learning. Scientific Reports 10 (1), 8222.
Iroume, J. Y. A., Onguéné, R., Djanna Koffi, F., Colmet-Daage, A., Stieglitz, T., Essoh Sone, W., Bogning, S., Olinga, J. M. O., Ntchantcho, R., Ntonga, J.-C., Braun, J.-J., Briquet, J.-P. & Etame, J. 2022 The 21st August 2020 flood in Douala (Cameroon): A major urban flood investigated with 2D HEC-RAS modeling. Water 14 (11), 1768.
Jiang, S., Zheng, Y., Wang, C. & Babovic, V. 2022 Uncovering flooding mechanisms across the contiguous United States through interpretive deep learning on representative catchments. Water Resources Research 58 (1), e2021WR030185.
Kousky, C., Palim, M. & Pan, Y. 2020 Flood damage and mortgage credit risk: A case study of Hurricane Harvey. Journal of Housing Research 29 (sup1), S86–S120.
Lai, Y. G. 2010 Two-dimensional depth-averaged flow modeling with an unstructured hybrid mesh. Journal of Hydraulic Engineering 136 (1), 12–23.
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z. & Shi, W. 2017 Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690.
Long, D., McMurdo, C., Ferdian, E., Mauger, C. A., Marlevi, D., Nash, M. P. & Young, A. A. 2023 Super-resolution 4D flow MRI to quantify aortic regurgitation using computational fluid dynamics and deep learning. The International Journal of Cardiovascular Imaging 39, 1–14.
Muñoz, D. F., Muñoz, P., Moftakhari, H. & Moradkhani, H. 2021 From local to regional compound flood mapping with deep learning and data fusion techniques. Science of the Total Environment 782, 146927.
NASA 2017 Earth Observatory: How Will Global Warming Change Earth?
Nemni, E., Bullock, J., Belabbes, S. & Bromley, L. 2020 Fully convolutional neural network for rapid flood segmentation in synthetic aperture radar imagery. Remote Sensing 12 (16), 2532.
NOAA 2016 U.S. Climate Resilience Toolkit: Inland Flooding.
Okaka, F. O. & Odhiambo, B. 2018 Relationship between flooding and outbreak of infectious diseases in Kenya: A review of the literature. Journal of Environmental and Public Health 2018.
Ongdas, N., Akiyanova, F., Karakulov, Y., Muratbayeva, A. & Zinabdin, N. 2020 Application of HEC-RAS (2D) for flood hazard maps generation for Yesil (Ishim) river in Kazakhstan. Water 12 (10), 2672.
Parhi, P. K. 2018 Flood management in Mahanadi Basin using HEC-RAS and Gumbel's extreme value distribution. Journal of The Institution of Engineers (India): Series A 99 (4), 751–755.
Pourbagian, M. & Ashrafizadeh, A. 2022 Super-resolution of low-fidelity flow solutions via generative adversarial networks. Simulation 98 (8), 645–663.
Rangari, V. A., Umamahesh, N. V. & Bhatt, C. M. 2019 Assessment of inundation risk in urban floods using HEC RAS 2D. Modeling Earth Systems and Environment 5, 1839–1851.
Rentschler, J. & Salhab, M. 2020 People in Harm's Way: Flood Exposure and Poverty in 189 Countries. The World Bank.
Rivett, M. O., Tremblay-Levesque, L. C., Carter, R., Thetard, R. C., Tengatenga, M., Phoya, A. & Kalin, R. M. 2022 Acute health risks to community hand-pumped groundwater supplies following Cyclone Idai flooding. Science of the Total Environment 806, 150598.
Shaikh, A. A., Pathan, A. I., Waikhom, S. I., Agnihotri, P. G., Islam, M. N. & Singh, S. K. 2023 Application of latest HEC-RAS version 6 for 2D hydrodynamic modeling through GIS framework: A case study from coastal urban floodplain in India. Modeling Earth Systems and Environment 9 (1), 1369–1385.
Shi, J., Yin, Z., Myana, R., Ishtiaq, K., John, A., Obeysekera, J., Leon, A. & Narasimhan, G. 2023 Deep learning models for water stage predictions in South Florida. arXiv preprint arXiv:2306.15907.
USACE 2018 HEC-RAS River Analysis System (Version 5.0.6).
Xu, W., Grande Gutierrez, N. & McComb, C. 2023 MegaFlow2D: A parametric dataset for machine learning super-resolution in computational fluid dynamics simulations. In: Proceedings of Cyber-Physical Systems and Internet of Things Week 2023, pp. 100–104.
Yin, Z., Bian, L., Hu, B., Shi, J. & Leon, A. S. 2023 Physic-informed neural network approach coupled with boundary conditions for solving 1D steady shallow water equations for riverine system. In: World Environmental and Water Resources Congress, pp. 280–288.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).