Forecasting flood inundation in urban areas is challenging due to the lack of validation data. Recent developments have led to new types of data sources, such as images and videos from smartphones and CCTV cameras. If the reference dimensions of objects in the images, such as bridges or buildings, are known, the images can be used to estimate water levels using computer vision algorithms. Such algorithms employ deep learning and edge detection techniques to identify the water surface in an image, and the resulting water levels can be used as additional validation data for forecasting inundation. In this study, a methodology is presented for flood inundation forecasting that integrates validation data generated with the assistance of computer vision. Six equifinal models are run simultaneously, and the one with the best goodness-of-fit (least error) with respect to the validation data is selected for forecasting. Image collection and processing are done offline on a regular basis or following a flood event. The results show that the accuracy of inundation forecasting can be improved significantly using the additional validation data.

Forecasting flood inundation in real-time is challenging due to the lack of validation data and the high computational time required by two-dimensional (2D) inundation models to produce flood inundation maps. Researchers have therefore focused on alternatives to 2D inundation models. A straightforward approach is to generate a large database of inundation maps, using either 2D inundation models (Disse et al. 2018) or historical satellite images (Bhatt et al. 2017), and to create rules for selecting the most likely inundation map based on forecasted discharges or flood stages (Bhola et al. 2018). However, the uncertainty associated with this approach is too large (Henonin et al. 2013). Another alternative is the use of surrogate models (Bermúdez et al. 2018), which replace expensive 2D inundation models with data-driven models or more simplified model structures (Razavi et al. 2012).

Inundation models are available with various levels of simplification (Néelz & Pender 2009; Bach et al. 2014). A widely used option is the diffusive wave model, which simplifies the full dynamic equations to reduce computational time (Leandro et al. 2014). These models are suitable when inertial terms are not important, which is often the case for flood inundation in urban areas (Martins et al. 2017). Inundation models are typically calibrated, often via Manning's coefficient, to reproduce a set of observations, e.g. water levels or inundation extent. This coefficient represents the resistance to flood flows in the model domain. Various studies point out that inundation models can be very sensitive to these coefficients, which leads to a higher degree of uncertainty (Oubennaceur et al. 2018). Despite these uncertainties, a single calibrated model is typically used in operational forecast applications (Henonin et al. 2013) instead of multiple models run in forecasting mode.

Validation of inundation forecasts is essential to evaluate their accuracy and predictive capability. However, spatial and temporal flood validation data in urban areas are scarce (Leandro et al. 2011). Fortunately, recent developments in technology and crowdsourcing have led to new sources of data. A few researchers have used remote sensing data to validate inundation maps with satellite images (Poser & Dransch 2010; McDougall 2012). There have also been attempts to gather crowdsourced hydrological measurements using smartphones and to develop a low-cost, practical method of data collection that can be used to predict floods (Kampf et al. 2018).

Computer vision algorithms, such as edge detection and image segmentation, have been used to extract information from images (Zhai et al. 2008) and have been applied to many new areas of research (Uma et al. 2016). For instance, Jaehyoung & Hernsoo (2010) determined the water level by measuring the water surface height with reference to an indicator (an invariant feature in the image). Techniques such as the Scale-Invariant Feature Transform (SIFT) and automatic adaptive selection of a region of interest have been used to detect edges and water lines in an image (Hies et al. 2012; Narayanan et al. 2014). In addition, Nair & Rao (2017) estimated flood depth by segmenting humans from a flood scene and detecting their faces and gender using deep learning algorithms.

Recent studies have integrated crowdsourced data into inundation modelling, using images and video recordings from smartphones to investigate hindcasted flood events (Triglav-Čekada & Radovan 2013; Kutija et al. 2014; Dapeng et al. 2016). In another example, Wang et al. (2018) used a manual approach to detect objects in images, such as lamp posts and pavement fences, to identify the boundary of the flood extent. Lowry & Fienen (2013) encouraged citizen scientists to participate in capturing stream flows and evaluated the accuracy of the citizen measurements. Although several applications of crowdsourced data exist, they are limited to hindcasting flood events. Hence, there remains a need to use these validation data to improve forecasting and to establish back communication from the crowdsourced data to the inundation forecasts.

In this paper, we present a methodology that integrates additional validation data, which are extracted from an image with the assistance of a computer vision algorithm. The main focus is to improve the accuracy of the inundation forecasting by using water levels obtained from images, which are collected on a regular basis or following a flood event. The methodology is tested on three historical flood events and is applied to the city of Kulmbach, Germany.

Kulmbach

The study area is the city of Kulmbach (Figure 1), which is located in the Upper Main river catchment in the north-east of the Free State of Bavaria in southern Germany. The city has around 26,000 inhabitants. With a population density of 280 inhabitants per km² in an area of 92.8 km², it is categorized as a great district city. Traditionally, it has been a manufacturing base for the food and beverage industry. On 28th May 2006, intense rainfall of up to 80 L/m² occurred, and within a few hours all the streams and rivers were filled (Tvo 2015). The incident prompted decision makers to revisit the flood protection measures for the city.

Figure 1

The location and land use classes of the study area in the city of Kulmbach, Germany (Data source: Bavarian Environment Agency and Water Management Authority, Hof). The river flows from east to west.


Hydrological data

Three hydrological events are used to assess the methodology. The hydrographs of the events upstream of the city at gauges Ködnitz on the river White Main and Kauerndorf on the river Schorgast are presented in Figure 2. Hydrological measurement data for the events were collected by the Bavarian Hydrological Services.

Figure 2

Discharge hydrographs at upstream gauges Ködnitz (in black) and Kauerndorf (in grey) for three events, (a) Event I on 14th January 2011, (b) Event II on 13th April 2017, and (c) Event III on 7th December 2017 (Data source: Bavarian Hydrological Service, www.gkd.bayern.de, accessed 16 March 2018).


The winter flood in January 2011 (event I) was one of the largest in terms of magnitude and corresponded to a discharge with a 100-year return period at gauge Kauerndorf and a 10-year return period at gauge Ködnitz (Figure 2(a)). Intense rainfall and snowmelt in the Fichtel mountains caused floods in several rivers of Upper Franconia. Within 5 days, two peak discharges were recorded: the first occurred on 9th January, and the second, measured 5 days later on 14th January, caused even higher discharges and water levels. The maximum discharge was 92.5 m³/s at gauge Kauerndorf and 75.3 m³/s at gauge Ködnitz. Agricultural land and traffic routes were flooded, but no serious damage was reported. In Kulmbach, a dyke in the region of Burghaig was close to collapse due to the large volume of water. The Water Management Authority opened the weir in Kulmbach, which prevented potential damage (Hof 2011).

Events II and III, which occurred on 13th April 2017 and 7th December 2017 respectively, were of smaller magnitude than event I and corresponded to the mean annual low flow (MNQ) and the mean flow (MQ), respectively (Figure 2(b) and 2(c)). During these events, the water was well contained within the floodplains and thus no inundation was recorded in the urban area.

Measured water levels and available images

The images and water levels were collected in three phases. In the first phase (event I), the Water Management Authority in Hof, Germany collected data during the winter flood and recorded water levels at eight bridges in Kulmbach. Figure 1 shows the locations of the bridges and Figure 3 shows the images taken. Based on their locations, the sites are categorized into four groups: sites 1, 2, and 3 at the river White Main; site 4 at the Dobrach canal in the north; site 5 at a side canal; and sites 6, 7, and 8 at the Mühl canal. Reference dimensions of the bridges were taken from the SIB-Bauwerke database, which is developed by the German Federal Highway Research Institute (Bundesanstalt für Straßenwesen) (Bauwerke 2016). The database contains the design and detailed measurements of the structures. The water levels were measured using a levelling instrument, Ni 2 (Faig & Kahmen 2012). The instrument was chosen for its availability and high accuracy; the associated uncertainties were therefore not evaluated in this study. The event was used for calibrating the 2D inundation model and identifying the model parameter sets for the inundation forecast.

Figure 3

Images taken during event I on 14th January 2011 for the eight sites (Source: Water Management Authority in Hof, Germany).


For the second phase (event II), images were taken to increase the computer vision data set (Figure 4). For the third phase (event III), both images and water depths were recorded (Figure 5). During the event, the water surface heights were recorded using an electrical contact gauge, which is a measuring tape connected to an electric sensor used to detect water depth in tanks. The heights were measured from the tops of the bridges and converted to water levels using the reference dimensions of the bridges. Event III was used in validating the 2D inundation model.

Figure 4

Images taken during event II on 13th April 2017 for the eight sites.

Figure 5

Images taken during event III on 7th December 2017 for the eight sites.


Topography and land use

The quality of inundation maps mainly depends on the topography of the study area. Topography data for this study were provided by the Water Management Authority, Hof. In the digital elevation model, the terrain is determined by airborne laser scanning and airborne photogrammetry, whereas the river bed is mostly recorded by terrestrial survey (Skublics 2014).

The land use of the model domain consists mainly of agricultural land, specifically floodplains and grasslands, which covers up to 62% of the total model area. Water bodies make up 7% and include river channels and lakes. The urban area covers around 26% and includes industrial and residential areas as well as transport infrastructure, whereas forests form barely 5% of the total area.

This section describes the methodology used for flood inundation forecasting: first the overall forecasting framework, then the 2D inundation model HEC-RAS used for generating inundation maps, then the computer vision algorithm used to extract water levels from images, and finally the goodness-of-fit measures used for model calibration and performance analysis.

Flood inundation forecasting

The conceptual flow chart of the flood inundation forecast, which integrates the validation data obtained with the assistance of computer vision algorithms, is shown in Figure 6. The methodology is an extension of the FloodEvac tool (Leandro et al. 2017), in which discharges are forecasted in real-time at the upstream gauging stations. The calibrated inundation model (MCal), determined based on a pre-selected event, is then run with the forecasted discharges as input boundary conditions, and its results are issued as forecasted inundation maps. The contribution here is the incorporation of n + 1 models as well as a computer vision algorithm to improve the selection of flood inundation maps. In real-time, n different model parameter sets are run simultaneously with the calibrated model parameter set (n + 1 in total). This is motivated by the concept of equifinality (Beven & Binley 1992), according to which multiple models can represent the modelled system equally well. If an image becomes available in the model domain, the computer vision methodology is applied and the goodness-of-fit is calculated between the n + 1 model results and the computer vision results. The model that produces the least error is selected for inundation forecasting. If no image is available, the calibrated model (MCal) is used as the default.
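
For illustration, a minimal sketch of this selection step is given below (in Python, the language used later for the image processing). The function name and data structures are illustrative assumptions and not part of the FloodEvac implementation.

```python
# Minimal sketch of the model-selection step described above; the data structures
# and function name are illustrative assumptions, not part of the FloodEvac tool.
def select_forecast_model(cv_levels, model_levels, calibrated_id="MCal"):
    """Select one of the n + 1 equifinal models for the inundation forecast.

    cv_levels    -- {site_id: water level in m asl} extracted by computer vision
                    (empty dict if no image is available)
    model_levels -- {model_id: {site_id: simulated water level in m asl}}
    """
    if not cv_levels:
        return calibrated_id  # no image available: fall back to the calibrated model
    errors = {
        model_id: sum(abs(levels[site] - cv_levels[site]) for site in cv_levels)
        for model_id, levels in model_levels.items()
    }
    return min(errors, key=errors.get)  # least-error model is used for the forecast
```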

Figure 6

Conceptual flow chart of flood inundation forecasts incorporating a computer vision algorithm for n + 1 models.


2D Flood inundation model: HEC-RAS

The 2D flood inundation maps were generated using HEC-RAS 2D, a non-commercial hydrodynamic model developed by the U.S. Army Corps of Engineers that has been used widely for various flood inundation applications (Moya Quiroga et al. 2016; Patel et al. 2017; Bhola et al. 2018). The model employs an implicit finite difference scheme to discretize time derivatives and hybrid approximations, combining finite differences and finite volumes, to discretize spatial derivatives. The implicit method allows for larger computational time steps compared to an explicit method. HEC-RAS solves either the 2D Saint Venant or the 2D diffusion wave equations. The latter allow faster calculation and greater stability owing to their less complex numerical schemes (Martins et al. 2017). Due to these advantages and their suitability for real-time inundation forecasts (Henonin et al. 2013), the diffusive wave equations are used in this study. The diffusive wave approximation assumes that the inertial terms are small compared with the gravity, friction, and pressure terms; flow movement is driven by the barotropic pressure gradient balanced by bottom friction (Brunner 2016). The equations of mass and momentum conservation are as follows:
\[
\frac{\partial H}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} + q = 0 \qquad (1)
\]
\[
c_f\,u = -g\,\frac{\partial H}{\partial x} \qquad (2)
\]
\[
c_f\,v = -g\,\frac{\partial H}{\partial y} \qquad (3)
\]
\[
c_f = \frac{g\,|V|}{M^{2}\,R^{4/3}} \qquad (4)
\]
where H is the surface elevation (m); h is the water depth (m); u and v are the velocity components in the x- and y-directions respectively (ms−1); q is a source/sink term; g is the gravitational acceleration (ms−2); cf is the bottom friction coefficient (s−1); R is the hydraulic radius (m); |V| is the magnitude of the velocity vector (ms−1); and M is the inverse of Manning's n (m(1/3) s−1).
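
As an illustration, the sketch below evaluates the diffusive wave balance of Equations (2)–(4) as reconstructed above: eliminating c_f gives a flow speed of M R^(2/3) |∇H|^(1/2) directed down the water-surface slope. The function and the example values are our own assumptions, not output of HEC-RAS.

```python
# Illustrative evaluation of the diffusive wave balance in Equations (2)-(4):
# solving c_f*V = -g*grad(H) with c_f = g|V|/(M^2 R^(4/3)) yields
# |V| = M * R^(2/3) * sqrt(|grad H|), directed down the water-surface slope.
import numpy as np

def diffusive_wave_velocity(dHdx, dHdy, R, M):
    """Return (u, v) in m/s for surface slope components dHdx, dHdy (-),
    hydraulic radius R (m) and inverse Manning's n M (m^(1/3)/s)."""
    slope = np.hypot(dHdx, dHdy)
    if slope == 0.0:
        return 0.0, 0.0
    speed = M * R ** (2.0 / 3.0) * np.sqrt(slope)
    return -speed * dHdx / slope, -speed * dHdy / slope

# e.g. a floodplain cell: slope of 0.001 in x, R = 0.5 m, M = 25 (Manning's n = 0.04)
print(diffusive_wave_velocity(0.001, 0.0, 0.5, 25.0))   # approximately (-0.50, 0.00) m/s
```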

The model was set up for the city of Kulmbach using the gathered data; Table 1 summarizes the model properties and the details of the mesh size in the model domain. The model parameters consist of the roughness coefficient, Manning's M, for five land use classes. Aronica et al. (1998) suggested using extreme feasible upper and lower ranges for the parameters because a simple model structure does not reflect the true distribution of the parameters in the basin. Hence, literature-based extreme ranges of Manning's M were set as: 9.1–40.0 for agriculture, which covers a range from short grass to medium-dense brush; 6.7–66.7 for water bodies, from very weedy reaches to rough asphalt; 5.0–9.1 for forest, for dense trees (Chow 1959); 50.0–83.3 for transportation, from firm soil to concrete; and 12.5–25.0 for urban areas, from cotton fields to small boulders (Arcement & Schneider 1989). A sensitivity analysis of the model was performed using 1,000 uniformly distributed model parameter sets for event I, as sketched below.
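
The sketch below shows one way the 1,000 uniformly distributed parameter sets could be drawn from the ranges listed above; the exact sampling scheme and random seed used in the study are not specified, so both are assumptions.

```python
# Hedged sketch of drawing 1,000 uniformly distributed Manning's M parameter sets
# from the literature-based extreme ranges quoted in the text.
import numpy as np

rng = np.random.default_rng(seed=42)          # seed is an arbitrary assumption
manning_m_ranges = {                          # Manning's M ranges in m^(1/3)/s
    "agriculture":    (9.1, 40.0),
    "water_bodies":   (6.7, 66.7),
    "forest":         (5.0, 9.1),
    "transportation": (50.0, 83.3),
    "urban":          (12.5, 25.0),
}
n_sets = 1000
parameter_sets = {
    land_use: rng.uniform(low, high, size=n_sets)
    for land_use, (low, high) in manning_m_ranges.items()
}
# parameter_sets["urban"][k] is the urban Manning's M of the k-th HEC-RAS 2D run
```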

Table 1

2D Hydrodynamic model properties of the HEC-RAS 2D model

Data                                     Value
Model area                               11.5 km2
Total number of cells                    430,485
Δt                                       20 s
Minimum cell area                        6.8 m2
Maximum cell area                        59.8 m2
Average cell area                        24.8 m2
Downstream boundary condition slope      0.0096

Computer vision

The work flow of the computer vision algorithm used to estimate the water level is shown in Figure 7. The input images consist of reference and target images. The reference images are collected over a period of time at known locations, and relevant objects such as bridges and buildings are identified in them. The dimensions of these objects are marked in the reference images using the SIB-Bauwerke database.

Figure 7

Work flow of water level estimation algorithm and annotated image of a flood scene. The reference level (b) taken from the SIB-Bauwerke database in metres above mean sea level (m asl), the thickness of the slab/object (a) in m, and the distance between water surface and reference level (c).


Target images are obtained as described in the section Study site and data. Based on their locations, the target images are compared with the reference images and the relevant edges of the objects are mapped in them. The relevant edges to be mapped from the reference image are two horizontal edges corresponding to a known dimension of the bridge and a vertical edge corresponding to a vertical railing on the bridge (Figure 7). The water surface line in the target image is then detected. In order to estimate the water levels, the work flow steps include: (1) mapping the relevant edges of the object from the reference image to the target image, and identifying the water line in the target image; (2) measuring the pixel distance between the relevant edges in the target image; (3) correlating the pixel distance with the real-world dimension of the object and calculating the ratio; and (4) estimating the water surface height in metres based on the ratio and conversion to water level in metres above mean sea level (m asl). The procedure was fully automatized except for step 1.

The image processing is coded in the programming language Python using OpenCV, an open-source library for real-time computer vision. One of the key aspects of the algorithm is mapping pixel dimensions to physical dimensions in the target image. This ratio differs for each target image and is obtained from the known physical dimensions of the bridge, taken from the reference image, and the corresponding reference dimensions in pixels, taken from the target image (see Figure 7).
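
A minimal OpenCV sketch of the (manual) annotation in step 1 is given below. The pixel coordinates, the synthetic image and the file name are invented placeholders rather than values from the study's image set; they only illustrate how the mapped edges yield the pixel distances used in Equation (5) below.

```python
# Sketch of the manual edge annotation in step 1 using OpenCV; coordinates are placeholders.
import cv2
import numpy as np

img = np.zeros((720, 960, 3), dtype=np.uint8)        # stand-in for the target photograph
top_edge    = ((120, 310), (840, 305))               # upper edge of the bridge slab
bottom_edge = ((118, 362), (838, 358))               # lower edge of the slab (reference level)
water_line  = ((115, 520), (835, 515))               # manually identified water surface line

for (p1, p2), colour in [(top_edge, (0, 0, 255)),
                         (bottom_edge, (0, 0, 255)),
                         (water_line, (255, 0, 0))]:
    cv2.line(img, p1, p2, colour, 2)                 # overlay the mapped edges on the image
cv2.imwrite("site_annotated.png", img)

a_pixel = abs(bottom_edge[0][1] - top_edge[0][1])    # slab thickness in pixels
c_pixel = abs(water_line[0][1] - bottom_edge[0][1])  # water surface to reference level in pixels
print(a_pixel, c_pixel)                              # pixel distances used in Equation (5)
```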

To estimate the water level in an image, the parameters marked in Figure 7, namely the thickness of the bridge slab (a) in m and the reference elevation of the bridge (b) in m asl, were used as input to the code. In order to reduce the perspective distortion of the image, a vertical line was drawn to calculate the ratio of pixels to the physical dimension. The line must align with a vertical railing on the bridge to ensure that it is perpendicular to the horizontal edges, even though it may not appear perpendicular in the image due to the perspective. The perspective distortion was further reduced by restricting the drawn edges to coincide with the edges of the bridge, in both the horizontal and vertical directions. The distance between the water surface and the reference level (c) in m was obtained using Equation (5):
\[
c = a\,\frac{c_{pixel}}{a_{pixel}} \qquad (5)
\]
where a_pixel and c_pixel are the pixel distances of the bridge slab and of the water surface below the reference level in the image. The ratio c_pixel/a_pixel was calibrated for each image by manually detecting the edges; in this approach, ten iterations per image were used to calibrate the ratio. The water level in m asl was then calculated as the difference between b and c, as illustrated in the sketch below.
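
The sketch below applies Equation (5) to one image. Only a = 0.43 m (site 2) is quoted in the text; the reference level b and the ten manually drawn ratios are illustrative placeholders.

```python
# Sketch of Equation (5) for a single image with ten manual iterations of the ratio.
import numpy as np

a = 0.43                                   # bridge slab thickness in m (site 2)
b = 305.00                                 # reference level in m asl (placeholder value)
ratios = np.array([4.61, 4.58, 4.65, 4.60, 4.63,
                   4.57, 4.62, 4.59, 4.64, 4.61])   # c_pixel / a_pixel, ten manual iterations

c = a * ratios                             # Equation (5): water surface below reference level, in m
water_level = b - c.mean()                 # water level in m asl
print(f"water level = {water_level:.2f} m asl, SD = {c.std(ddof=1):.3f} m")
```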

A set of requirements was developed to minimize the error in estimating the water levels. A suitable input image must meet the following three requirements: (1) the edges of the bridge and the water line should be clearly visible in the image; (2) the camera should be placed in front of the bridge to capture the image such that the edges of the bridge and water line appear as three parallel lines, which is important to minimize the perspective distortion; and (3) the image should be taken in proper lighting conditions.

Evaluation metrics

Model selection

For the real-time forecasting, n + 1 model parameter sets were selected from 1,000 uniformly distributed parameter sets based on the sum of the absolute errors between the simulated and the measured water levels at the eight sites (Figure 1). The goodness-of-fit (e) was calculated using Equation (6), which returns an array of 1,000 values. The values were sorted and the n + 1 parameter sets with the least error were selected for the inundation forecast:
\[
e(r) = \sum_{i=1}^{p} \left| M_i - S_i(r) \right|, \qquad r = 1, \ldots, 1000 \qquad (6)
\]
where r is the model index, p is the number of sites, M_i is the measured water level and S_i(r) is the water level of the rth model at the ith site.
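
A brief sketch of Equation (6) and of the selection of the n + 1 least-error parameter sets is given below; the random placeholder data only illustrate the array shapes (1,000 runs, 8 sites) and are not results of the study.

```python
# Sketch of Equation (6) and the selection of the n + 1 least-error parameter sets.
import numpy as np

def select_parameter_sets(simulated, measured, n_plus_1=6):
    """simulated: (1000, p) simulated water levels; measured: (p,) measured water levels."""
    e = np.sum(np.abs(measured[None, :] - simulated), axis=1)  # Equation (6), one value per model
    return np.argsort(e)[:n_plus_1]                            # indices of the least-error models

rng = np.random.default_rng(0)                                 # placeholder data for illustration
simulated = rng.uniform(300.0, 303.0, size=(1000, 8))
measured = rng.uniform(300.0, 303.0, size=8)
print(select_parameter_sets(simulated, measured))
```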

Comparison of inundation maps

To evaluate the agreement between predicted and reference inundation extents, the Fit-Statistic (F) was used. It is widely applied to cell-based models (Moya Quiroga et al. 2016) and varies between 1 for a perfect fit and 0 when no overlap exists. It is defined in Equation (7):
\[
F = \frac{A_0}{A_{cal} + A_{sel} - A_0} \qquad (7)
\]
where Acal is the area of flooded cells in the calibrated model (MCal), Asel is the area of flooded cells in the selected model and A0 is the overlap of Acal and Asel. A cell is defined as flooded if its water depth exceeds 0.10 m (Leandro et al. 2011). In our application, a value of 1 indicates no difference introduced by computer vision, whereas 0 indicates very large differences. The root-mean-square error (RMSE) was also calculated to compare the selected and calibrated models. It is calculated over the flooded cells using Equation (8):
\[
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( m_i - s_i \right)^2} \qquad (8)
\]
where n is the number of flooded cells, mi and si are the water depths in the calibrated and selected models, respectively.
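
The sketch below implements Equations (7) and (8) on two water-depth rasters. Two details are assumptions on our part: cell counts are used as a proxy for area (for the unstructured mesh the cells would be area-weighted), and the RMSE is taken over the union of flooded cells, which the text does not specify.

```python
# Sketch of the Fit-Statistic (Equation (7)) and RMSE (Equation (8)) on depth rasters.
import numpy as np

def fit_statistic(depth_cal, depth_sel, threshold=0.10):
    flooded_cal = depth_cal > threshold                 # flooded if depth > 0.10 m
    flooded_sel = depth_sel > threshold
    a0 = np.sum(flooded_cal & flooded_sel)              # overlap A0 (cell count as area proxy)
    return a0 / np.sum(flooded_cal | flooded_sel)       # A0 / (Acal + Asel - A0)

def rmse_flooded(depth_cal, depth_sel, threshold=0.10):
    flooded = (depth_cal > threshold) | (depth_sel > threshold)   # assumed: union of flooded cells
    return np.sqrt(np.mean((depth_cal[flooded] - depth_sel[flooded]) ** 2))

depth_cal = np.array([[0.00, 0.20], [0.50, 0.05]])      # tiny illustrative rasters
depth_sel = np.array([[0.00, 0.30], [0.40, 0.15]])
print(fit_statistic(depth_cal, depth_sel), rmse_flooded(depth_cal, depth_sel))  # 0.667, 0.1
```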

Calibration and validation of the HEC-RAS 2D model

The water levels measured in event I were used to calibrate the model parameters. Table 2 presents the measured and simulated water levels, along with the maximum water depth, at the eight sites. The calibrated inundation model results were in good agreement with the measured data. The sites located at the river White Main (sites 1, 2, and 3) showed a good match, with a maximum difference of 0.13 m (measured water depth of 2.93 m) at site 3. A slight over-prediction of 0.08 m (in 1.43 m) was observed at site 4 (Dobrach canal). The model over-predicted the water level at site 5 (side canal) by 0.16 m (in 1.75 m). Water levels at the sites located at the Mühl canal (6, 7, and 8) were under-predicted, with a reasonable agreement of 0.15 m (in 2.31 m) and 0.14 m (in 2.36 m) at sites 7 and 8. However, a significant under-prediction of 0.24 m (in 0.89 m) was observed at site 6.

Table 2

The performance of the calibration model parameter MCal for event I, on 14th January 2011, and event III, on 7th December 2017

Site no.   Event I, 14th January 2011                                          Event III, 7th December 2017
           Time    Measured vs. HEC-RAS 2D (m)   Measured water depth (m)      Time    Measured vs. HEC-RAS 2D (m)   Measured water depth (m)
1          14:09   −0.01                         2.78                          10:02   0.09                          1.41
2          14:18   0.01                          2.90                          10:22   0.27                          1.57
3          14:23   −0.13                         2.93                          10:58   0.40                          2.03
4          14:26   −0.08                         1.43                          11:10   0.40                          1.03
5          13:27   −0.16                         1.75                          11:43   −0.10                         0.04
6          14:01   0.24                          0.89                          12:35   −0.01                         0.60
7          14:35   0.15                          2.31                          13:02   –                             –
8          14:35   0.14                          2.36                          13:02   −0.02                         0.96

The table shows the time at which the images were captured, measured water depth in m and the difference between measured and calibrated water levels in m. The positive values show an under-prediction, whereas the negative values represent over-prediction of the water level by the model.

Validation of the model was carried out using event III, the non-flood event measured on 7th December 2017. Site 7, located at the Mühl canal, was under construction, hence it was not possible to gather a measured water level for that site. Nevertheless, an excellent agreement was observed at the two other sites (6 and 8) at the Mühl canal. A reasonable agreement was also observed at site 1, with an under-prediction of 0.09 m (in 1.41 m). However, substantial under-predictions of 0.27 m (in 1.57 m) and 0.40 m (in 2.03 m) were observed downstream at sites 2 and 3, respectively. An under-prediction of 0.40 m (in 1.03 m) was also observed at site 4. At site 5, almost no inundation was measured (0.04 m water depth), but the model over-predicted the water level by 0.10 m.

The maximum inundation around the eight sites is shown in Figure 8 for the three events. In event I, the floodplains were flooded but, as mentioned before, no damage occurred as the flood did not overflow the side banks of the White Main. The street Theodor-Heuss-Allee at site 5 was flooded, as was the motorway B 289, and the dykes were at full capacity.

Figure 8

Maximum flood inundation maps for the three events.


In general, the inundation areas were predicted with good accuracy. Most of the inundated areas were within the floodplains, and the inundation extent matched the observation images and the field survey.

No inundation outside the main channel was observed during the non-flood events II and III (Figure 8(b) and 8(c)). As mentioned before, these events were of smaller magnitude than event I (Figure 2(b) and 2(c)), and the simulations were in line with the measurements. Overall, considering the simple model structure of HEC-RAS 2D, which disregards the sewer network and key urban features (Leandro et al. 2016), the results were considered satisfactory. In addition, they show the robustness of the model, as it was able to simulate both high- and low-flow events within acceptable limits.

This section presents the water levels extracted from the images using computer vision and the models selected for flood inundation forecasting.

Water levels obtained by computer vision

A computer vision algorithm was applied to the images collected from the three events at the eight sites in the city of Kulmbach. The images that were suitable for computer vision are shown in Figures 9–11. Images of event II were used as the reference images and those of events I and III as the target images. The water levels obtained from the algorithm were compared with the measured water levels.

Figure 9

Available sites for application of computer vision for event I on 14th January 2011.

Figure 10

Available sites for application of computer vision for event II on 13th April 2017.

Figure 11

Available sites for application of computer vision for event III on 7th December 2017.


A box plot of the pixel distance ratio, i.e. the distance between the water surface and the referenced object in pixels (c_pixel) divided by the reference dimension of the object in pixels (a_pixel), is shown in Figure 12. The ratio was calculated using ten iterations for each image by manually drawing the edges. The figure also shows the mean and standard deviation (SD) of the iterations, which indicate the error in estimating the ratio.

Figure 12

Box plot showing the pixel distance ratio of the distance between the water surface and the referenced object in pixels (c_pixel) and the height of the referenced object in pixels (a_pixel) for ten iterations.


The heights of the referenced objects (a) are the same for all three events: 0.43, 0.35, and 0.30 m for sites 2, 5, and 6, respectively. The water surface height (c) was therefore calculated in m using Equation (5), and the mean of c was converted to a water level. The differences between the measured water levels and those predicted by computer vision for the seven images are shown in Table 3. As mentioned before, no measurements were performed for event II, hence those differences cannot be calculated.

Table 3

Difference between the measured and the computer vision water levels predicted for events I and III

Event   Site no.   Measured vs. computer vision (m)   Measured water depth (m)
I       2          −0.01                              2.90
I       5          0.17                               1.75
I       6          −0.07                              0.89
III     2          0.04                               1.57
III     6          0.12                               0.60

Flood inundation forecasting

The total number of models that can be simulated in real-time is restricted by the computational resources. With a large computing infrastructure, a large number of models could be run with this methodology; in our case, however, the existing resources limited the number of models to six (1 + 5). Hence, out of the 1,000 models, the six that produced the least error in the water levels were selected. As mentioned above, the sensitivity analysis was performed for a single event (event I) based on the pre-determined ranges of the 2D inundation model parameters. Figure 13 presents the six models that return the least error in the water levels at the eight sites. The radar plot shows the variability of Manning's M for each land use class. It is evident from the figure that the parameter space is different in each model, which results in different outputs. The output of the models is presented in Table 4, which shows the differences between the measured water levels and those simulated by the six models for event I. A threshold value of ±0.15 m is used for highlighting the differences in the model results.

Table 4

Difference in water levels between measured and six HEC-RAS 2D models in m

Site no.   Measured water depth (m)   Measured vs. MCal (m)   Measured vs. M1 (m)   Measured vs. M2 (m)   Measured vs. M3 (m)   Measured vs. M4 (m)   Measured vs. M5 (m)
1          2.78                       0.01                    0.01                  0.02                  0.07                  0.03                  0.00
2          2.90                       0.01                    0.06                  0.04                  0.01                  0.01                  0.05
3          2.93                       0.13                    0.06                  0.11                  0.02                  0.12                  0.05
4          1.43                       0.08                    0.11                  0.06                  0.08                  0.07                  0.10
5          1.75                       −0.16                   0.08                  0.15                  0.09                  0.13                  0.08
6          0.89                       0.24                    0.44                  0.23                  0.45                  0.28                  0.44
7          2.31                       0.15                    0.60                  0.16                  0.62                  0.21                  0.57
8          2.36                       0.14                    0.58                  0.14                  0.60                  0.19                  0.56

Differences within the threshold of ±0.15 m are italicized in the table.

Figure 13

Six model parameter sets for five land use classes. The figure shows Manning's M in m(1/3)/s resulting from the sensitivity analysis of 1000 HEC-RAS 2D model runs.


To select the most suitable of the six models, the water levels obtained using computer vision are used as validation data. The goodness-of-fit (Equation (6)) is calculated for the six models for each of the three events, and the least-error model is selected for the real-time forecast of that event. If no validation data were available, the inundation maps of the calibrated model (MCal) would be used as the final forecast.

To assess the difference between the calibrated and selected models, the Fit-Statistic (F) in percentage and the root-mean-square error (RMSE) in m (Equations (7) and (8)) are presented in Table 5. For events I and III, the calibrated model was not the selected model, hence the differences are reported. For event II, the calibrated model produced the least error with respect to the computer vision water levels. Large differences were found in event I; the spatial distribution of the error for this event is shown in Figure 14.

Table 5

Selected model and goodness-of-fit between the calibrated and the selected model (Mcal) for the peak inundation time step

Event   Model selected   Fit-statistics (%)   RMSE (m)
I       M3               89                   0.40
II      MCal             100                  –
III     M4               99                   0.03
Figure 14

Difference in the water depths between the calibrated and the selected model using computer vision for event I (14th January 2011).


Computer vision

For event I, only the images of sites 2, 5, and 6 were amenable to analysis using computer vision. The images from the other sites did not satisfy the specified requirements (section Methodology – Computer vision): in Figure 3(a) (site 1), the water line is not clearly visible in the image; in Figure 3(d) and 3(g) (sites 4 and 7), the reference lines of the bridges are not visible; and in Figure 3(c) and 3(h) (sites 3 and 8), the reference lines and the water line are not parallel to each other. In Figure 3(f) (site 6), the railings are directly on top of the vertical embankment of the river, so the three reference lines lie practically in the same vertical plane and the image can therefore be used for computer vision. For events II and III, only the images in Figure 4(b) and 4(f) and Figure 5(b) and 5(f) (sites 2 and 6) were deemed suitable for computer vision; the water level is very low in Figures 4(e) and 5(e) (site 5) and thus the water line is not clearly visible. These examples indicate that local conditions may constrain the application of computer vision, and it should be ensured that the requirements are met while capturing images.

Uncertainty was quantified based on the edge detection in an image. We assumed an error of ±0.50 mm in the detection of each reference line, which results in an error of ±1 mm in the estimation of a reference dimension. The images used have a high resolution of 300 dpi, so each millimetre corresponds to about 11.8 pixels. With the ratio of physical dimension to pixels of 0.011 m per pixel, this error translates into approximately ±0.13 m in the physical dimension. Therefore, the uncertainty in the computer vision-based water level was estimated at ±0.13 m. In some cases, the side surface used as the reference for drawing the edges is not entirely vertical, as at site 6 (Figures 9(c), 10(b) and 11(b)), which could introduce additional errors since the algorithm assumes the surface to be entirely vertical.
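
The short sketch below reproduces this uncertainty estimate using the values quoted above; only the propagation steps are ours.

```python
# Reproducing the uncertainty estimate stated above (all input values taken from the text).
line_error_mm = 0.5                      # assumed detection error per reference line
dimension_error_mm = 2 * line_error_mm   # two lines bound a dimension -> +/- 1 mm
pixels_per_mm = 300 / 25.4               # 300 dpi corresponds to about 11.8 pixels per mm
metres_per_pixel = 0.011                 # ratio of physical dimension to pixels for these images
error_m = dimension_error_mm * pixels_per_mm * metres_per_pixel
print(f"water level uncertainty: +/- {error_m:.2f} m")   # about +/- 0.13 m
```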

The calibrated ratio c_pixel/a_pixel and the dimension a were used to estimate c for the ten iterations. The maximum standard deviation in the value of c was ±0.18 m, observed for Figure 10(a) (event II, site 2), and ±0.11 m for Figure 9(a) (event I, site 2). The mean values were converted to water levels in m asl. These standard deviations compare well with the uncertainty of ±0.13 m estimated previously.

A reasonable match was found between the measured and the computer vision water levels for five images: sites 2, 5, and 6 in event I and sites 2 and 6 in event III. The water levels predicted by computer vision for event II (Figure 10(a) and 10(b)) were 1.12 and 0.92 m for sites 2 and 6, respectively. In the absence of measured water levels, the calibrated HEC-RAS model results at those sites, 1.2 and 0.65 m, can be used as estimates to evaluate the performance of computer vision. The image for site 2 is more in line with the requirements than the image for site 6, which potentially explains the better result for site 2. If the images are captured as per the requirements, computer vision has the potential to be a good validation tool for flood inundation forecasting.

One limitation of these methodologies (as in Wang et al. 2018) is the manual approach used to map the edges from the reference images and to detect the water surface line, which is not automatized. However, this step would only be critical if it had to be run continuously in real-time. This is not the case in this study, since the procedure for selecting the forecast model (section Methodology – Flood inundation forecasting) can be performed offline on a regular basis or following a flood event. In our methodology, the model that produces the least error is selected for inundation forecasting only if images become available; otherwise, the calibrated model (MCal) is used as the default.

If locations that have not been referenced are included in this procedure, it may be difficult to generate the reference elevation or measurements. The images would first need to be referenced manually using the database and the location can then be used as a target location in our methodology. The locations could include either hotspots in the city or major bridges that are easily accessible and regularly monitored via social media or CCTV cameras.

Flood inundation forecasting

It can be seen from the model parameter distribution (Figure 13) that six different sets of parameters were selected based on the least error. Equifinality can be observed in Table 4, where multiple model parameter sets represent the modelled system equally well and all six models can be accepted. However, depending on the sites where computer vision is applied (i.e. where images are available), equifinality will be reduced, because this now becomes the main criterion for selecting the model used for forecasting inundation. The additional validation ensures that the number of false alarms issued by the forecasting framework can be reduced significantly. The error can be minimized using back communication from computer vision to the inundation forecast. If no computer vision data are available, the calibrated model MCal is used.

Comparison between the 2D model and computer vision was carried out at the available sites (see Table 4). For event I, model M3 was selected based on the least error (Table 5). For event II, there was no change: based on the comparison, the calibrated model remained selected. For event III, the selected model was M4. To assess the differences in the forecasted inundation extents with and without computer vision, the Fit-Statistic (F) and root-mean-square error (RMSE) are used (see section Evaluation metrics). Larger differences can be observed in event I (Figure 14), for which F is 89% and the RMSE is 0.40 m. The selected model generally had higher water depths than the calibrated model, as 24.7% of the total flooded cells contained higher water depths (differences of −0.10 to −0.50 m). Furthermore, 72.8% of the flooded cells had a minimal difference, in the range of −0.10 to 0.10 m, compared with the calibrated model. Very few cells showed water depths smaller than in the calibrated model.

For event III, the model selected using computer vision was very similar to the calibrated model, hence the differences were minimal. This can be explained by the similar Manning's M values of MCal and M4 in the main channel and the floodplain. As the discharges were considerably low in event III, the water did not leave the main channel and hence not much difference was observed. In event II, there was no change in the selected model by applying computer vision.

These examples show that the inclusion of computer vision can produce changes in the forecasted inundation extent. In this study, we assumed that computer vision was the prevailing source of accurate data.

We present a methodology for real-time flood inundation forecasts that incorporates additional crowd-sourced validation data generated with the assistance of a computer vision algorithm. Six 2D diffusive wave models (HEC-RAS 2D) are run in parallel. The selection of the models used for the inundation forecasting is based on 1,000 model runs for a single event. In this study, validation of the methodology is carried out using three events at eight sites located in the Kulmbach inner city. Model selection (one out of six) for flood forecasting is based on the least error with respect to the computer vision water levels at the available sites. The computer vision algorithm is used to estimate the water levels from images that meet the requirements of the proposed guidelines. The algorithm uses specific features, such as bridges and water surfaces, to estimate water levels in the images. Since the procedure is not fully automated, we suggest collecting images on a regular basis or following a flood event for model selection.

The major advantage of the forecast framework is its fast run-time and easy application to other study areas. The framework of the back communication from computer vision to the forecasts shows how alternative data sources can improve inundation forecasts. Furthermore, equifinality can be reduced by employing computer vision validation for the selection of the appropriate model for forecasting inundations. The validation data can be in the form of georeferenced images captured by citizens (Lowry & Fienen 2013), security cameras or the fire fighters at referenced locations.

The results obtained from computer vision can be used as additional point-source validation data and substantially improve flood inundation forecasting. However, the procedure is not yet entirely automated, requiring the user to detect the edges manually. In future, edge detection should be automatized using, for example, SIFT or image segmentation algorithms as described by Narayanan et al. (2014), Nair & Rao (2017) and Geetha et al. (2017). Moreover, the method should include image enhancement techniques, such as power-law and logarithmic transformations (Maini & Aggarwal 2010), to deal with poor lighting conditions in an image; such enhancement would relax the requirement concerning proper lighting conditions and allow more images to be processed. Furthermore, setting up a network of pre-installed CCTV cameras that fulfil the requirements should be explored.
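
As a hedged illustration of the enhancement techniques mentioned above, the sketch below applies a power-law (gamma) and a logarithmic transformation with NumPy/OpenCV; the synthetic dark image and the chosen gamma value are placeholders, not settings from the study.

```python
# Sketch of power-law (gamma) and logarithmic image enhancement for dark flood scenes.
import cv2
import numpy as np

img = np.clip(np.random.default_rng(0).normal(40, 15, (480, 640)), 0, 255).astype(np.uint8)
img_norm = img.astype(np.float64) / 255.0                          # scale intensities to [0, 1]

gamma = 0.5                                                        # gamma < 1 brightens dark scenes
power_law = np.uint8(255 * img_norm ** gamma)                      # s = 255 * r**gamma

log_transformed = np.uint8(255 * np.log1p(img_norm) / np.log(2.0)) # s = c * log(1 + r), scaled

cv2.imwrite("power_law.png", power_law)
cv2.imwrite("log_transformed.png", log_transformed)
```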

The inundation model should be extended to simulate urban pluvial flooding (Arnbjerg-Nielsen et al. 2016) in future by including a 1D-2D sewer/overland flow coupled-model structure. With ever-increasing computational performance and the introduction of cloud computing, the integration of more complex models will become feasible. In addition, analysing additional model outputs, such as flow velocities and hazards, should improve the existing forecasting framework by incorporating flood risk assessments.

We would like to acknowledge Sino-German Cooperation Group (GZ912) for supporting the research. The research project FloodEvac is financed by the German Federal Ministry of Education and Research (BMBF, FKZ 13N13196) and the Department of Science and Technology (DST) of the Government of India. The authors would like to thank all contributing project partners, funding agencies, politicians and stakeholders in different functions in Germany. A very special thanks to the Bavarian Water Authority and Bavarian Environment Agency in Hof for providing us with the quality data to conduct the research. The Indian authors would like to acknowledge the support and encouragement for this project provided by their Chancellor, Dr Mata Amritanandamayi Devi, a world-renowned humanitarian leader popularly known as Amma. The authors are grateful to the anonymous reviewers for their helpful comments in improving the clarity of the paper.

Arcement, G. J., Jr & Schneider, V. R. 1989 Guide for Selecting Manning's Roughness Coefficients for Natural Channels and Flood Plains, Water-Supply Paper 2339. United States Department of Transportation, Denver, USA.

Arnbjerg-Nielsen, K., Langeveld, J. & Marsalek, J. 2016 Urban drainage research and planning. Quo vadis? In: Global Trends & Challenges in Water Science, Research and Management: A Compendium of Hot Topics and Features From IWA Specialist Groups (Li, H., ed.). IWA Publishing, London, pp. 133–135.

Bach, P. M., Rauch, W., Mikkelsen, P. S., McCarthy, D. T. & Deletic, A. 2014 A critical review of integrated urban water modelling – urban drainage and beyond. Environ. Model. Softw. 54, 88–107.

Bauwerke, S. 2016 SIB Bauwerke. WPM-Ingenieure 2016.

Bermúdez, M., Ntegeka, V., Wolfs, V. & Willems, P. 2018 Development and comparison of two fast surrogate models for urban pluvial flood simulations. Water Resour. Manage. 32 (8), 2801–2815.

Brunner, G. W. 2016 HEC-RAS River Analysis System: Hydraulic Reference Manual, Version 5.0. US Army Corps of Engineers, Institute for Water Resources, Hydrologic Engineering Center, Davis, CA.

Chow, V. T. 1959 Development of uniform flow and its formulas. In: Open-channel Hydraulics, Chapter 5 (Davis, H. E., ed.). McGraw-Hill Book Company, USA, pp. 89–114.

Dapeng, Y., Jie, Y. & Min, L. 2016 Validating city-scale surface water flood modelling using crowd-sourced data. Environ. Res. Lett. 11, 1–21.

Disse, M., Konnerth, I., Bhola, P. K. & Leandro, J. 2018 Unsicherheitsabschätzung für die Berechnung von Dynamischen Überschwemmungskarten – Fallstudie Kulmbach (Uncertainty assessment for the calculation of dynamic flood maps – case study Kulmbach). In: Vorsorgender und Nachsorgender Hochwasserschutz (Preventative and Aftercare Flood Protection) (Heimerl, S., ed.). Springer Fachmedien Wiesbaden, Wiesbaden, Germany, pp. 350–357.

Faig, W. & Kahmen, H. 2012 Differential levelling. In: Surveying (Kahmen, H. & Faig, W., eds). De Gruyter, Berlin, pp. 321–386.

Geetha, M., Manoj, M., Sarika, A. S., Mohan, M. & Rao, S. N. 2017 Detection and estimation of the extent of flood from crowd sourced images. In: International Conference on Communication and Signal Processing (ICCSP). IEEE, Chennai, India, pp. 0603–0608.

Henonin, J., Russo, B., Mark, O. & Gourbesville, P. 2013 Real-time urban flood forecasting and modelling – a state of the art. J. Hydroinform. 15, 717–736.

Hies, T. B., Parasuraman, S., Wang, Y., Duester, R., Eikaas, H. & Tan, K. M. 2012 Enhanced water-level detection by image processing. In: 10th International Conference on Hydroinformatics, Hamburg, Germany.

Hof, W. 2011 Flood Events, January 2011, Area Main.

Jaehyoung, Y. U. & Hernsoo, H. 2010 Remote detection and monitoring of a water level using narrow band channel. J. Inf. Sci. Eng. 26, 71–82.

Kampf, S., Strobl, B., Hammond, J., Anenberg, A., Etter, S., Martin, C., Puntenney-Desmond, K., Seibert, J. & van Meerveld, I. 2018 Testing the waters: mobile apps for crowdsourced streamflow data. Eos 99. DOI: 10.1029/2018EO096355.

Kutija, V., Bertsch, R., Glenis, V., Alderson, D., Parkin, G., Walsh, C., Robinson, J. & Kilsby, C. 2014 Model validation using crowd-sourced data from a large pluvial flood. In: Informatics and the Environment: Data and Model Integration in a Heterogeneous Hydro World. 11th International Conference on Hydroinformatics (Piasecki, M., ed.). Curran Associates Inc., New York.

Leandro, J., Djordjević, S., Chen, A. S., Savić, D. A. & Stanić, M. 2011 Calibration of a 1D/1D urban flood model using 1D/2D model results in the absence of field data. Water Sci. Technol. 64, 1016–1024.

Leandro, J., Konnerth, I., Bhola, P., Amin, K., Köck, F. & Disse, M. 2017 FloodEvac Interface zur Hochwassersimulation mit integrierten Unsicherheitsabschätzungen (FloodEvac interface for flood simulation with integrated uncertainty assessments). In: Forum für Hydrologie und Wasserbewirtschaftung, Issue 38.17, 185.

Maini, R. & Aggarwal, H. 2010 A comprehensive review of image enhancement techniques. J. Comput. 2–3, 8–13.

Martins, R., Leandro, J., Djordjević, S. & Chen, A. 2017 A comparison of three dual drainage models: Shallow Water vs. Local Inertial vs. Diffusive Wave. J. Hydroinform. 19, 331–348.

McDougall, K. 2012 An assessment of the contribution of volunteered geographic information during recent natural disasters. In: Spatially Enabling Government, Industry and Citizens: Research and Development Perspectives (Rajabifard, A. & Coleman, D., eds). GSDI Association Press, Needham, MA, USA, pp. 201–214.

Moya Quiroga, V., Kure, S., Udo, K. & Mano, A. 2016 Application of 2D numerical simulation for the analysis of the February 2014 Bolivian Amazonia flood: application of the new HEC-RAS version 5. RIBAGUA Rev. Iberoam. Agua 3, 25–33.

Nair, B. B. & Rao, S. N. 2017 Flood monitoring using computer vision. In: The 15th ACM International Conference on Mobile Systems, Applications, and Services (ACM Moises 2017). ACM, New York.

Narayanan, R., Lekshmy, V. M., Rao, S. & Sasidhar, K. 2014 A novel approach to urban flood monitoring using computer vision. In: 5th International Conference on Computing, Communications and Networking Technologies (ICCCNT). IEEE, Hefei, China, pp. 1–7.

Néelz, S. & Pender, G. 2009 Desktop Review of 2D Hydraulic Modelling Packages. Technical report SC120002. Environment Agency, Bristol, UK.

Oubennaceur, K., Chokmani, K., Nastev, M., Tanguy, M. & Raymond, S. 2018 Uncertainty analysis of a two-dimensional hydraulic model. Water 10, 1–19.

Poser, K. & Dransch, D. 2010 Volunteered geographic information for disaster management with application to rapid flood damage estimation. Geomatica 64 (1), 89–98.

Razavi, S., Tolson, B. A. & Burn, D. H. 2012 Review of surrogate modeling in water resources. Water Resour. Res. 48, 1–32.

Skublics, D. A. 2014 Grossräumige Hochwassermodellierung im Einzugsgebiet der bayerischen Donau: Retention, Rückhalt, Ausbreitung (Large-Scale Flood Modeling in the Catchment Area of the Bavarian Danube: Retention, Backwater, Widening). PhD thesis, Chair of Hydraulic and Water Resources Engineering, Technical University of Munich, Germany.

Triglav-Čekada, M. & Radovan, D. 2013 Using volunteered geographical information to map the November 2012 floods in Slovenia. Nat. Hazards Earth Syst. Sci. 13, 2753–2762.

Tvo 2015 Hochwasserschutz Kulmbach: Neugestaltung der Flutmulde (Flood Protection Kulmbach: Redesign of the Flood Basin). TV Upper Franconia, Hof, Germany (accessed 16.03.2018).

Uma, G., Narayanan, R., Rangan, P. V. & Hariharan, B. 2016 Re-orchestration of remote teaching environment in eLearning. In: 18th International Conference on Enterprise Information Systems (ICEIS 2016). SCITEPRESS, Portugal, pp. 223–229.

Wang, Y., Chen, A., Fu, G., Djordjević, S., Zhang, C. & Savic, D. 2018 An integrated framework for high-resolution urban flood modelling considering multiple information sources and urban features. Environ. Model. Softw. 107, 85–95.

Zhai, L., Dong, S. & Ma, H. 2008 Recent methods and applications on image edge detection. In: International Workshop on Education Technology and Training and International Workshop on Geoscience and Remote Sensing. IEEE, Shanghai, China, pp. 332–335.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).