Fully remote surface flow measurements are crucial for flow monitoring during floods and in difficult-to-access areas. Recently, optics-based surface flow monitoring has been enabled through a permanent gauge-cam station on the Tiber River, Rome, Italy. Therein, a system of lasers and an internet protocol camera equipped with two optical modules afford video acquisitions of the river surface every 10 minutes. In this work, we establish a standard video-processing protocol by analyzing more than 10 GB of footage captured during a low discharge regime from May 2nd to 11th, 2015, through particle tracking velocimetry (PTV). We show that good image-based velocity data can be obtained throughout the day – from 6 am to 8 pm – despite the challenging experimental settings (direct sunlight illumination, mirror-like river surface, and overlying bridge shadow). Further, we demonstrate that images captured with a 27° angle of view optical sensor lead to average velocity measurements in agreement with available radar data. Consistent with similar optical methods, PTV is not applicable under adverse illumination or at night; however, it is more robust to inhomogeneous distributions of floaters in the field of view.
The comprehension and forecasting of hydrological phenomena largely rely on the availability of diverse and accurate data. Specifically, data should portray the temporal evolution of hydrological processes and should capture the complex intertwining of the multiple events occurring at heterogeneous spatial scales in natural catchments (Hrachowitz et al. 2013). In recent years, several efforts have been devoted to promoting the advancement of approaches and technologies available to scientists for hydrological observations. Among others, these initiatives have led to the establishment of dedicated working groups (see, for instance, the Measurements & Observations in the 21st Century Working Group of the International Association of Hydrological Sciences (MOXXI 2016)) and the rise of a multidisciplinary perspective to tackle hydrological phenomena (Selker et al. 2006; Haberlandt & Sester 2010; Hut et al. 2010; Allamano et al. 2015).
In the realm of surface hydrology, flow monitoring has traditionally been addressed through the installation of gauging stations equipped with water level meters at selected cross sections in riverine environments (Creutin et al. 2003). More recently, radar technology has been introduced to provide information on surface flow kinematics over areas of limited extent (Costa et al. 2006; Fulton & Ostrowski 2008). Technological advances and the availability of high-performance optics at reasonable prices have fostered the use of image-based approaches in surface hydrological monitoring (Gunawan et al. 2012). Optics-based approaches offer several advantages with respect to traditional water gauges. First, they enable fully remote measurements over surface areas. Velocity measurements can be executed in challenging settings, such as during high flow regimes and in difficult-to-access areas. Further, they are inherently suited for continuous observations and enable measurements in diverse ecosystems, spanning from small-scale rills to channel environments.
Recently, image-based approaches have been coupled with high-visibility surface tracers (Leibundgut et al. 2009) to monitor surface flow velocity in hillslope rills (Tauro et al. 2012b), mountainous streams (Tauro et al. 2012a), and medium- to large-scale rivers (Hauet et al. 2008b, 2009). Generally, raw images are orthorectified, georeferenced, and analyzed through large-scale particle image velocimetry (LSPIV) to generate surface flow velocity maps (Fujita et al. 1997; Muste et al. 2008). In particular, LSPIV has been adopted to characterize flow patterns in lakes (Admiraal et al. 2004), rivers (Kantoush et al. 2011), and estuaries (Bechle et al. 2012). In Hauet et al. (2008b), a fixed LSPIV station has been installed on the Iowa River, and similar implementations have afforded monitoring of flood events (LeBoursicaud et al. 2015; Ran et al. 2016).
Despite their advantages, image-based approaches often require the acquisition of ground control points through Global Positioning System (GPS) or total stations, thus limiting their implementation to areas that are accessible to operators (Tauro 2015). Further, they are affected by illumination conditions, which may negatively impact image quality (Hauet et al. 2008a). In addition to such limitations, LSPIV also tends to be highly sensitive to the presence and spatial distribution of floating tracers (Kim 2006). For instance, in Tauro et al. (2014, 2016a), images of the Tiber River analyzed through LSPIV are shown to lead to consistently underestimated surface flow velocities due to the meager occurrence of tracers. To partially mitigate such issues, a portable experimental apparatus hosting a system of green lasers for remote image calibration has been proposed in Tauro et al. (2014). In this set-up, image orthorectification is circumvented by placing the camera axis perpendicular with respect to the water surface. Similar implementations have also been integrated onboard aerial platforms for acquisitions in difficult-to-access environments (Tauro et al. 2015, 2016b).
The promise of the experimental set-up in Tauro et al. (2014) for fully remote flow measurements has inspired the design and development of a permanent gauge-cam station located on the Tiber River at Ponte del Foro Italico, in the center of Rome, Italy. Since December 2014, the gauge-cam station has been continuously capturing videos of an area of approximately 20 × 15 m2 of the river water surface. Affording remote and distributed measurements on the Tiber River, the gauge-cam station offers a unique opportunity to explore the river dynamics and to refine image-based approaches for hydrological monitoring. In this regard, in Tauro et al. (2016c), three videos recorded in January and February 2015 were analyzed through two alternative algorithms, namely particle tracking velocimetry (PTV) and LSPIV, to generate surface flow velocity maps. Based on those experimental findings, velocities estimated through PTV are in stronger agreement with the available RVM20 speed surface radar measurements than those estimated through LSPIV.
Building on previous findings in Tauro et al. (2016c), in this paper, we demonstrate the potential of the gauge-cam station by systematically analyzing videos recorded over 10 consecutive days in May 2015 through PTV. Specifically, we establish standard protocols to process the captured videos and to extract average surface flow velocities over the recorded field of view. Further, we demonstrate the station's continuous operation and explore the relationship between image-based data and radar measurements. By comparing the two independent datasets, we evaluate the performance of the approach and suggest possible improvements for the implementation of similar monitoring schemes in riverine ecosystems.
The gauge-cam station
The gauge-cam station is located on the Tiber River underneath Ponte del Foro Italico in the center of Rome. The station design is inspired by the portable apparatus in Tauro et al. (2014) and comprises a control unit and a sensing platform (Figure 1). The control unit is based on the advanced Multi-Hazard System technology developed by CAE S.p.a. for integrated environmental monitoring (CAE S.p.a. 2016). The sensing platform comprises a Mobotix FlexMount S15 weatherproof internet protocol camera (Mobotix 2016) and two < 20 mW green lasers (532 nm in wavelength) installed 50 cm apart from the camera axes. The camera includes two miniature optical modules with independent sensors and lenses. The right-side L25 lens (82° angle of view and 4 mm focal length) captures a large area of the river surface (about 20 × 15 m2), whereas the left-side L76 lens (27° angle of view and 12 mm focal length) enables finer observations in the center of the L25 field of view (about 7 × 5 m2). Left- and right-side lenses share the same pixel resolution; therefore, objects captured in left-side videos appear larger than in right-side videos. Both modules are placed with their optical axes perpendicular to the water surface. The Tiber River's bed slope in the proximity of the station is quite regular and mild; therefore, the water surface can be assumed horizontal. The river's gentle slope enables the use of the lasers for remote image calibration and obviates time-consuming on-site acquisition of ground reference points.
Surface flow observations at high temporal resolution are obtained by capturing 1-minute long videos every 10 minutes. The frame acquisition frequency during the recordings is automatically set based on the illumination conditions sensed by the optical modules. Image resolution for both optical modules is set to 1,024 × 768 pixels. Laser modules are operated for 20 s at the beginning of each video recording. Videos are stored in the MxPEG audio/video container format, which guarantees a synchronous stream of good-quality images at efficient compression. Videos are stored in a nested folder structure on an external hard drive. Further details on the gauge-cam station can be found in Tauro et al. (2016c).
Next to the gauge-cam station, an existing monitoring apparatus managed by Centro Funzionale–Regione Lazio includes a ULM 20 ultrasonic meter by CAE S.p.a., which records water levels near the midspan of the bridge every 15 minutes. Further, an RVM20 speed surface radar sensor by CAE S.p.a., which operates in the 0.30 to 15 m/s velocity range with an accuracy of ±0.02 m/s, records surface velocity every 15 minutes over an area of a few square centimeters.
We analyzed 70 videos captured from May 2nd to May 11th, 2015. Out of the total footage captured in this period, the analyzed videos account for 11 GB of data and are manually selected based on the presence of homogeneously distributed, naturally occurring tracers. Due to improved tracer visibility, most videos are captured from 6 to 7 am and from 6 to 8 pm; a few are recorded from 12 to 5 pm. Videos acquired after 8 pm are not analyzed due to insufficient external illumination.
Footage data are analyzed as follows. MxPEG videos are converted to ‘.avi’ files through the Mobotix Control Center software. Then, images are extracted from the files through an ad hoc Matlab script; at this stage, the frame acquisition frequency is also estimated. Extracted images are separated into two sequences of 1,024 × 768 pixels frames depicting the left- and right-side fields of view. To emphasize lighter particles against a dark background, images are gamma-corrected to darken midtones (Forsyth & Ponce 2011). Right-side sequences are fish-eye undistorted using the Adobe Photoshop ‘Lens correction’ filter. Figure 2 details the image preparation phases. Both sequences of images are processed by mean intensity subtraction to further highlight floating tracers against homogeneous backgrounds. Image calibration factors (to convert pixel velocities to metric velocities) are determined from the average frame acquisition rate of each video and by measuring, in pixels, the known metric distance between the lasers’ traces on the water surface.
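The pre-processing chain above (gamma correction, mean-intensity subtraction, and laser-based calibration) was implemented in Matlab; a minimal NumPy sketch of the same steps is given below for illustration. The gamma value and the 1 m metric spacing between the laser traces are assumptions for this sketch, not values stated in the paper.

```python
import numpy as np

def gamma_correct(frame, gamma=1.8):
    """Darken midtones to emphasize bright tracers (gamma > 1)."""
    norm = frame.astype(np.float64) / 255.0
    return (norm ** gamma * 255.0).astype(np.uint8)

def subtract_mean_intensity(frames):
    """Remove the static background by subtracting the temporal mean image."""
    mean_img = np.mean(frames, axis=0)
    return [np.clip(f - mean_img, 0, 255).astype(np.uint8) for f in frames]

def calibration_factor(laser_px_distance, laser_m_distance=1.0):
    """Metres per pixel from the known spacing of the laser traces
    (1 m assumed here, i.e., two lasers 50 cm either side of the camera)."""
    return laser_m_distance / laser_px_distance

def pixel_to_metric_velocity(v_px_per_frame, fps, m_per_px):
    """Convert a velocity in pixels/frame to m/s using the frame rate."""
    return v_px_per_frame * fps * m_per_px
```

At the reported average frame rate of 8.7 Hz, a tracer moving 10 pixels per frame at a scale of 5 mm/pixel would correspond to roughly 0.44 m/s.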
Sequences of ‘.bmp’ images from the left- and right-side sensors are analyzed using a modified version of PTVlab (Brevis et al. 2011). Specifically, we develop a command-line interface version of the PTVlab toolbox that facilitates serial analysis of massive quantities of data. In this approach, particle detection is enabled through a Gaussian mask procedure, and tracking is based on cross-correlation between pairs of subsequent images. Streamwise and cross-stream velocity components are computed at the nodes of a 10 × 10 pixels cell grid overlaid on the images. Surface flow velocity maps are computed by interpolating over the sequence of grids. To compare image-based data to radar velocities, we compute the space-averaged velocity and standard deviation of the maps.
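PTVlab couples Gaussian-mask particle detection with cross-correlation matching; the sketch below substitutes a simpler local-maximum detector and nearest-neighbour pairing to illustrate the detect-then-track logic of PTV. The intensity threshold and search radius are illustrative assumptions, not PTVlab defaults.

```python
import numpy as np

def detect_particles(img, thresh=128):
    """Return (row, col) of pixels brighter than `thresh` that are local
    maxima in their 3x3 neighbourhood (crude stand-in for the Gaussian
    mask detector used in PTVlab)."""
    pts = []
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = img[r, c]
            if v >= thresh and v == img[r - 1:r + 2, c - 1:c + 2].max():
                pts.append((r, c))
    return pts

def match_nearest(pts0, pts1, max_disp=15):
    """Pair each particle in frame k with the closest candidate in frame
    k+1 within `max_disp` pixels; return (position, displacement) pairs.
    PTVlab instead scores candidates by local cross-correlation."""
    vectors = []
    for p in pts0:
        best, best_d = None, max_disp
        for q in pts1:
            d = np.hypot(q[0] - p[0], q[1] - p[1])
            if d <= best_d:
                best, best_d = q, d
        if best is not None:
            vectors.append((p, (best[0] - p[0], best[1] - p[1])))
    return vectors
```

The resulting displacement vectors, divided by the inter-frame time and scaled by the pixel-to-metric factor, yield the velocity samples that are then interpolated onto the 10 × 10 pixels grid.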
PTV is executed on subregions of the original images (upper and lower borders display date, time, illumination intensity, and recording parameters and are, therefore, excluded from the analysis). Parameters adopted for left- and right-side images are identified upon a preliminary sensitivity analysis and reported in Table 1.
Table 1 columns: Corr. thresh. | Radius [pix] | Inten. thresh. | Int. area [pix] | Min. corr. | Sim. neigh. [%]
The image dataset
Analyzed images account for approximately 14 GB of data (data size increases upon frame extraction from the videos). On average, each left- and right-side sequence comprises 514 frames; the minimum and maximum numbers of frames per video are 336 and 604, respectively. The average frame acquisition frequency over the 70 videos is 8.7 Hz (minimum and maximum of 7.4 Hz and 9.6 Hz, respectively). Illumination intensity of the videos ranges from less than 10 lux up to approximately 7,000 lux.
The image dataset presents several challenges: frames are captured in low discharge conditions; the river generally shows a mirror-like surface; and the overlying bridge permanently creates two regions of different intensity in the images. The Tiber River's low discharge conditions generally correspond to a meager quantity of tracers transiting in the field of view; this led to the selection of a subset of videos out of the total number of files captured from May 2nd to 11th, 2015 at the gauge-cam station. Further, the mirror-like surface is detrimental to particle visibility, whereas overlying shapes, such as clouds, are clearly recognizable in the images. Finally, the bridge shadow generates a dark lower image region where tracer visibility is enhanced. On the other hand, in the upper half of the images, noise due to direct sunlight reflections negatively affects particle detection and tracking.
To characterize the amount of tracers in the selected videos, we estimate the particle seeding density a posteriori through a Gaussian detection procedure. Specifically, the number of round particles of a selected diameter is automatically identified in the experimental images. To reduce computational time, the procedure is applied to a subset (one image in 50) of each experimental image sequence. Particle detection is performed through cross-correlation with a Gaussian kernel (set to 10 pixels and to 7 pixels in the left- and right-side images, respectively). On average, left- and right-side images present particle densities equal to 1.30 × 10−4 and 3.09 × 10−4, respectively, whereby particle density is evaluated as the ratio of the total number of particles to the total number of image pixels. With respect to left-side sequences, minimum and maximum densities of 2.02 × 10−5 and 1.09 × 10−3 are observed. In the case of right-side images, particle densities range from 2.92 × 10−4 to 2.75 × 10−3. Given the larger field of view, particle density is generally higher in right-side images.
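The a-posteriori density estimate can be sketched as follows: particles are counted on one frame in every 50 and the count is normalized by the number of image pixels. The local-maximum counter below is a simple stand-in for the Gaussian cross-correlation detector, and the intensity threshold is an assumption.

```python
import numpy as np

def count_bright_particles(img, thresh=128):
    """Count pixels brighter than `thresh` that are local maxima in a
    3x3 window (proxy for the Gaussian detection used in the paper)."""
    h, w = img.shape
    n = 0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if img[r, c] >= thresh and img[r, c] == img[r - 1:r + 2, c - 1:c + 2].max():
                n += 1
    return n

def seeding_density(frames, step=50, thresh=128):
    """Particles per image pixel, averaged over one frame in every `step`."""
    sampled = frames[::step]
    counts = [count_bright_particles(f, thresh) for f in sampled]
    return float(np.mean(counts)) / frames[0].size
```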
Compared to standard image-based laboratory applications, particle density is low and, therefore, high-speed cross-correlation algorithms, such as LSPIV, may be difficult to implement. Interestingly, the density of tracers in the field of view is relatively unaffected by illumination conditions, as it remains approximately constant in videos captured at different times of the day. This suggests that image quality is acceptable despite highly varying external light settings. In the case of right-side images, however, slightly higher standard deviations are observed (4.39 × 10−4 against 2.14 × 10−4 for left-side sequences).
With regard to the bridge shadow, both left- and right-side sequences display a marked difference in intensity between the upper and lower halves of the images. This phenomenon is inevitable for fixed optical equipment installed underneath cableways and/or bridges, and it can be mitigated through the use of aerial platforms (Tauro et al. 2015, 2016b, 2016d). The intensity difference between the upper and lower image regions is quantitatively estimated by computing the average upper and lower intensities for one image of each sequence. Generally, differences between upper and lower image regions are higher in right-side sequences and vary throughout the day. In particular, the intensity difference (on the 0–255 grayscale) spans from 100 to 180 in images captured at 6–7 am, ranges from 10 to 80 at 12–5 pm, and varies from 80 to 160 at 7–8 pm. Left-side images present lower intensity differences: from 40 to 140 at 6–7 am, from 0 to 20 at 12–5 pm, and from 40 to 150 at 7–8 pm. Higher differences in right-side sequences often lead to unrealistically high velocity estimates in the upper part of the images. This is attributed to light reflections on the water surface and is confirmed by velocity vectors directed opposite to the flow. To circumvent this issue, we herein compute space-averaged velocities from PTV results obtained in the lower half of right-side images. Conversely, space-averaged velocities for left-side sequences are computed over the entire image.
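The upper/lower intensity diagnostic and the cropping of right-side frames can be sketched as below, assuming 8-bit grayscale frames and a shadow boundary at the image mid-height (in practice the boundary position depends on sun angle and bridge geometry):

```python
import numpy as np

def shadow_contrast(img):
    """Difference between the mean grey level of the upper (sunlit) and
    lower (bridge-shadowed) halves of a frame, on the 0-255 scale."""
    h = img.shape[0] // 2
    return img[:h].mean() - img[h:].mean()

def crop_to_shadow(img):
    """Keep only the lower (shadowed) half, as done for right-side frames
    before computing space-averaged velocities."""
    return img[img.shape[0] // 2:]
```

A large positive contrast flags frames in which the upper half is likely dominated by sunlight reflections and should be excluded from the spatial average.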
RESULTS AND DISCUSSION
Figure 3 depicts surface flow velocity maps for two representative left-side (Figure 3(a) and 3(b)) and right-side (Figure 3(c) and 3(d)) videos. Colored contours indicate velocity magnitude, and black arrows show velocity directions. Figure 3(a) and 3(c) represent two optimal cases: tracers are homogeneously distributed across the field of view and surface velocities are estimated in the entire region of interest. In Figure 3(a), velocity values are constant in the region, whereas in Figure 3(c), higher values are found in the top right corner of the field of view. Further, in the top region of Figure 3(c), some areas depict velocity vectors opposite to the flow direction. This is due to noise introduced by light reflections outside the bridge shadow. Figure 3(b) displays a map for a left-side video that is negatively affected by light reflections in the upper half of the image. Specifically, when tracers move from the lower to the upper half, the algorithm fails to detect them, resulting in empty areas. The transversal line-shaped high-velocity area in the center of the image lies at the border between the lower dark and upper brighter regions. Such higher velocities are likely unrealistic artifacts of surface ripples and water reflections. Figure 3(d) clearly shows the severe effect of water reflections in the upper half of the image outside the shadowed region. In this case, velocities in the upper half exceed 1.5 m/s.
A comparison of image-based data against benchmark radar data is presented in Figure 4. Herein, raw radar data are displayed with a black solid line. Left- and right-side velocities are computed from maps similar to those reported in Figure 3. Specifically, time-averaged velocities are further averaged in space over the region of interest of each sensor. Such values are shown as colored markers in Figure 4. While radar data are regularly sampled every 15 minutes, image-based measurements are not computed at a regular time interval, and each value is obtained from a minute-long video. Generally, image-based measurements lie close to the radar data. However, right-side values tend to consistently overestimate surface flow velocity.
We quantify the difference between image-based and radar data by aggregating benchmark radar and image-based measurements hourly and computing the error between the time series. The root mean squared error is 0.11 m/s and 0.24 m/s for left- and right-side data, respectively.
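The hourly aggregation and error computation can be sketched as follows; binning samples by the integer part of their fractional hour is our assumption, as the paper does not detail the aggregation scheme.

```python
import numpy as np

def hourly_average(timestamps_h, values):
    """Aggregate irregularly sampled values into hourly means.
    timestamps_h: sample times expressed as fractional hours of the day."""
    hours = np.floor(np.asarray(timestamps_h)).astype(int)
    return {h: float(np.mean([v for t, v in zip(hours, values) if t == h]))
            for h in np.unique(hours)}

def rmse(series_a, series_b):
    """Root mean squared error over the hours common to both series."""
    common = sorted(set(series_a) & set(series_b))
    diffs = np.array([series_a[h] - series_b[h] for h in common])
    return float(np.sqrt(np.mean(diffs ** 2)))
```

For example, three image-based samples at 6.1, 6.5, and 7.2 h collapse into two hourly means before being compared with the hourly-averaged radar series.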
In Figure 5, we further explore differences between image-based and radar data by plotting simultaneous measurements aggregated at the hourly scale obtained with both approaches. For clarity, in the plot, we remove radar data that do not correspond to simultaneous image-based measurements. Left-side data (top) are in very good agreement with the benchmark data, whereas right-side data (bottom) display higher variations. According to the Wilcoxon test, the p-value for left-side data is well above 0.05, whereas the p-value for right-side measurements is 3.48 × 10−4. Therefore, radar and left-side measurements can be regarded as datasets with equal medians, whereas the difference between right-side data and radar measurements is statistically significant.
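The paper does not state which Wilcoxon variant was used; for paired hourly samples, the signed-rank test is the natural choice. A self-contained sketch with a normal-approximation p-value (ignoring tie corrections) is given below; scipy.stats.wilcoxon provides a production implementation of the same statistic.

```python
import math
import numpy as np

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test for paired samples.
    Zero differences are discarded; ties in |d| are not corrected for.
    Returns the positive-rank sum W+ and a normal-approximation p-value."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]
    n = len(d)
    # Double argsort yields 0-based ranks of |d|; shift to 1..n.
    ranks = np.argsort(np.argsort(np.abs(d))) + 1
    w_plus = ranks[d > 0].sum()
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return w_plus, p
```

A p-value above 0.05, as observed for the left-side data, means the median difference between the paired series cannot be distinguished from zero.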
Discrepancies between left- and right-side velocity estimates can be noted in Figure 6. Left-side velocities are, on average, lower than 1 m/s, and maximum standard deviations are within 0.4 m/s. Conversely, right-side values display higher fluctuations (maximum average value equal to 1.64 m/s) and maximum standard deviations greater than 0.7 m/s. By visually inspecting images, we attribute such higher inaccuracy in right-side image sequences to the detrimental effect of water reflections.
In Figure 7, the dependence of image-based average velocity on the time of day is investigated for left- and right-side sequences (left and right graphs in Figure 7). The color bar reports illumination intensity. Left-side estimates are generally lower than right-side values, and their standard deviations tend to be lower. Further, average velocities are relatively insensitive to illumination intensity. For instance, velocities estimated from 12 to 5 pm are consistent regardless of illumination intensity, which varies from 2,000 to more than 6,000 lux. In addition, measurements executed in similar illumination conditions (for instance, early in the morning and at dusk) present different velocity values. Since illumination intensity is highly dependent on cloud cover and seasonality, and can be critical for image quality, the robustness of the presented approach to this parameter is promising and supports future implementations of gauge-cam systems in riverine environments.
Based on our findings, left-side sequences lead to surface flow velocity estimates in better agreement with radar data than right-side sequences. This is because right-side images depict a larger field of view and, therefore, tend to be more severely affected by illumination conditions. In particular, light reflections on the water surface act as noise, often resulting in unrealistically high velocity estimates in the opposite direction of flow. In similar future implementations, this behavior may be accounted for by integrating light filters and optics (for instance, polarizers) in the camera system. Similar to our approach, evaluating the difference in intensity between subareas of the field of view may help reveal potential criticalities related to illumination conditions. If secondary currents are present, velocity vectors in the opposite direction of flow may also be observed; however, areas depicting stationary currents can be consistently identified in the images and then excluded from the computation.
Our results generally support the use of PTV for surface flow measurements in hydrological systems. As already pointed out in Tauro et al. (2014, 2016a), alternative algorithms, such as LSPIV, largely rely on homogeneous and abundant tracer density. In the case of the low seeding density of our videos, LSPIV has led to velocity estimates that are often underestimated compared to radar data (Tauro et al. 2016c), and we expect that it would lead to similarly underestimated results for the videos presented herein. On the other hand, despite its robustness to seeding density, PTV is more user-assisted than LSPIV, and this yields longer processing times.
In this work, we process videos captured for 10 consecutive days in May 2015 from the fully remote optics-based gauge-cam station located on the Tiber River, Rome, Italy, and previously presented in Tauro et al. (2016c). Videos are analyzed through PTV, and estimated surface velocities are compared to simultaneous radar data. The algorithm successfully enables particle tracking in a wide range of illumination intensities from 6 am to 8 pm. On the other hand, consistent with similar optical methods, PTV is not applicable to videos recorded at night.
Images captured with the two optical sensors are affected by the shadow created by the overlying bridge, with improved tracer visibility in the darker area of the field of view. Based on our findings, PTV leads to velocity measurements in better agreement with radar data for images captured with the 27° angle of view optical sensor. On the other hand, images recorded with the fisheye lens yield less accurate velocities and tend to be severely affected by illumination conditions.
Our analysis suggests that PTV may be successful in a multitude of experimental settings (including low flow regimes) and is promising for similar future gauge-cam implementations. Different from radar instrumentation, gauge-cam stations are expected to provide distributed velocity measurements and, potentially, information on river morphology over larger areas of interest and at competitive costs. Possible improvements to the current approach include preliminary video pre-processing through unsupervised procedures to rapidly estimate particle density and assess image quality. Finally, similar image-based measurement stations should integrate additional optics on the fisheye lens to mitigate the effect of light reflections on video quality.
This work was supported by POR-FESR 2014–2020 n. 737616 INFRASAFE and by the UNESCO Chair in Water Resources Management and Culture. The authors gratefully thank Dr Salvatore Grimaldi, Alessia Boni, Eloisa Petricca, and members of the Mechanical Engineering for Hydrology and Water Science Laboratory (www.mechydrolab.org) at University of Tuscia for support and help with the experiments. Further, support from CAE S.p.a. in the development of the gauge-cam station and help from Centro Funzionale–Regione Lazio for radar data availability are acknowledged.