Fully remote surface flow measurements are crucial for flow monitoring during floods and in difficult-to-access areas. Recently, optics-based surface flow monitoring has been enabled through a permanent gauge-cam station on the Tiber River, Rome, Italy. There, a system of lasers and an internet protocol camera equipped with two optical modules afford video acquisitions of the river surface every 10 minutes. In this work, we establish a standard video-processing protocol by analyzing more than 10 GB of footage captured during a low discharge regime from May 2nd to 11th, 2015, through particle tracking velocimetry (PTV). We show that good image-based velocity data can be obtained throughout the day, from 6 am to 8 pm, despite the challenging experimental settings (direct sunlight illumination, mirror-like river surface, and overlying bridge shadow). Further, we demonstrate that images captured with a 27° angle of view optical sensor lead to average velocity measurements in agreement with available radar data. Consistent with similar optical methods, PTV is not applicable under adverse illumination and at night; however, it is more robust to inhomogeneous distributions of floaters in the field of view.
The comprehension and forecasting of hydrological phenomena largely rely on the availability of diverse and accurate data. Specifically, data should portray the temporal evolution of hydrological processes and should capture the complex intertwining of the multiple events occurring at heterogeneous spatial scales in natural catchments (Hrachowitz et al. 2013). In recent years, several efforts have been devoted to promoting the advancement of approaches and technologies available to scientists for hydrological observations. Among others, these initiatives have led to the establishment of dedicated working groups (see, for instance, the Measurements & Observations in the 21st Century Working Group of the International Association of Hydrological Sciences (MOXXI 2016)) and the rise of a multidisciplinary perspective to tackle hydrological phenomena (Selker et al. 2006; Haberlandt & Sester 2010; Hut et al. 2010; Allamano et al. 2015).
In the realm of surface hydrology, flow monitoring has been traditionally addressed through the installation of gauging stations equipped with water level meters at selected cross sections in riverine environments (Creutin et al. 2003). More recently, radar technology has been introduced to provide information on surface flow kinematics over areas of limited extent (Costa et al. 2006; Fulton & Ostrowski 2008). Technological advances and the availability of high-performance optics equipment at reasonable prices have fostered the use of image-based approaches in surface hydrological monitoring (Gunawan et al. 2012). Optics-based approaches offer several advantages with respect to traditional water gauges. First, they enable fully remote measurements over surface areas. Velocity measurements can be executed in challenging settings, such as during high flow regimes and in difficult-to-access areas. Further, they are inherently suited for continuous observations and enable measurements in diverse ecosystems, spanning from small-scale rills to channel environments.
Recently, image-based approaches have been coupled with high-visibility surface tracers (Leibundgut et al. 2009) to monitor surface flow velocity in hillslope rills (Tauro et al. 2012b), mountainous streams (Tauro et al. 2012a), and medium- to large-scale rivers (Hauet et al. 2008b, 2009). Generally, raw images are orthorectified, georeferenced, and analyzed through large-scale particle image velocimetry (LSPIV) to generate surface flow velocity maps (Fujita et al. 1997; Muste et al. 2008). In particular, LSPIV has been adopted to characterize flow patterns in lakes (Admiraal et al. 2004), rivers (Kantoush et al. 2011), and estuaries (Bechle et al. 2012). In Hauet et al. (2008b), a fixed LSPIV station has been installed on the Iowa River, and similar implementations have afforded monitoring of flood events (LeBoursicaud et al. 2015; Ran et al. 2016).
Despite their advantages, image-based approaches often require the acquisition of ground control points through Global Positioning System (GPS) or total stations, thus limiting their implementation to areas that are accessible to operators (Tauro 2015). Further, they are affected by illumination conditions, which may negatively impact image quality (Hauet et al. 2008a). In addition to such limitations, LSPIV also tends to be highly sensitive to the presence and spatial distribution of floating tracers (Kim 2006). For instance, in Tauro et al. (2014, 2016a), images of the Tiber River analyzed through LSPIV are shown to yield consistently underestimated surface flow velocities due to the meager occurrence of tracers. To partially mitigate such issues, a portable experimental apparatus hosting a system of green lasers for remote image calibration was proposed in Tauro et al. (2014). In this set-up, image orthorectification is circumvented by placing the camera axis perpendicular to the water surface. Similar implementations have also been integrated onboard aerial platforms for acquisitions in difficult-to-access environments (Tauro et al. 2015, 2016b).
The promise of the experimental set-up in Tauro et al. (2014) for fully remote flow measurements has inspired the design and development of a permanent gauge-cam station on the Tiber River at Ponte del Foro Italico, in the center of Rome, Italy. Since December 2014, the gauge-cam station has been continuously capturing videos of an area of approximately 20 × 15 m² of the river water surface. By affording remote and distributed measurements on the Tiber River, the gauge-cam station offers a unique opportunity to explore the river dynamics and to refine image-based approaches for hydrological monitoring. With regard to the latter, in Tauro et al. (2016c), three videos recorded in January and February 2015 were analyzed through two alternative algorithms, namely particle tracking velocimetry (PTV) and LSPIV, to generate surface flow velocity maps. Based on the experimental findings in Tauro et al. (2016c), videos analyzed through PTV agree more closely with the available RVM20 surface speed radar measurements than those analyzed through LSPIV.
Building on previous findings in Tauro et al. (2016c), in this paper we demonstrate the potential of the gauge-cam station by systematically analyzing videos recorded over 10 consecutive days in May 2015 through PTV. Specifically, we establish standard protocols to process the captured videos and to extract average surface flow velocities over the recorded field of view. Further, we demonstrate the station's continuous operation and explore the relationship between image-based data and radar measurements. By comparing the two independent datasets, we evaluate the performance of the approach and suggest possible improvements for the implementation of similar monitoring schemes in riverine ecosystems.
The gauge-cam station
Surface flow observations at high temporal resolution are obtained by capturing 1-minute long videos every 10 minutes. The frame acquisition frequency during the recordings is automatically set based on the illumination conditions sensed by the optical sensors. Image resolution for both optical sensors is set to 1,024 × 768 pixels. Laser modules are operated for 20 s at the beginning of each video recording. Videos are stored in the MxPEG audio/video container format, which guarantees a synchronous stream of good quality images at efficient compression. Videos are saved in a nested folder structure on an external hard drive. Further details on the gauge-cam station can be found in Tauro et al. (2016c).
Next to the gauge-cam station, an existing monitoring apparatus managed by Centro Funzionale–Regione Lazio includes a ULM 20 ultrasonic meter by CAE S.p.a., which records water levels proximal to the midspan of the bridge every 15 minutes. Further, an RVM20 surface speed radar sensor by CAE S.p.a., which operates in the 0.30 to 15 m/s velocity range with an accuracy of ±0.02 m/s, records surface velocity every 15 minutes over an area of a few square centimeters.
We analyzed 70 videos captured from May 2nd to May 11th, 2015. Of the total footage captured in this period, the analyzed videos account for 11 GB of data and are manually selected based on the presence of homogeneously distributed, naturally occurring tracers. Due to the improved visibility of the tracers, most videos are captured from 6 to 7 am and from 6 to 8 pm. A few videos are recorded from 12 to 5 pm. Videos acquired after 8 pm are not analyzed due to insufficient external illumination.
Sequences of ‘.bmp’ images from the left- and right-side sensors are analyzed using a modified version of PTVlab (Brevis et al. 2011). Specifically, we develop a command-line interface version of the PTVlab toolbox that facilitates serial analysis of massive quantities of data. In this approach, particle detection is enabled through a Gaussian mask procedure, and tracking is based on cross-correlation between pairs of subsequent images. Velocity components along and perpendicular to the flow direction are computed at the nodes of a 10 × 10 pixel cell grid overlaid on the images. Surface flow velocity maps are computed by interpolating over the sequence of grids. To compare image-based data to radar velocities, we compute the space-averaged velocity and standard deviation of the maps.
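The gridding and spatial averaging step can be sketched as follows (an illustrative Python reconstruction, not the authors' modified PTVlab code; the function name and the linear interpolation scheme are assumptions):

```python
import numpy as np
from scipy.interpolate import griddata

def space_averaged_velocity(x, y, u, v, shape, cell=10):
    """Interpolate sparse PTV vectors onto a regular pixel grid and
    return the space-averaged streamwise velocity and its standard
    deviation.

    x, y  : pixel coordinates of the tracked particles
    u, v  : streamwise and transverse velocity components
    shape : (height, width) of the image in pixels
    cell  : grid spacing in pixels (10 px, as in the text)
    """
    gy, gx = np.mgrid[0:shape[0]:cell, 0:shape[1]:cell]
    # Linear interpolation of the scattered vectors onto the grid nodes;
    # nodes outside the convex hull of the data remain NaN and are
    # ignored by the nan-aware statistics below.
    ug = griddata((x, y), u, (gx, gy), method="linear")
    return np.nanmean(ug), np.nanstd(ug)
```

Feeding uniform synthetic vectors returns the imposed velocity with zero spread, which is a convenient sanity check before processing experimental sequences.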
PTV is executed on subregions of the original images (the upper and lower borders display date, time, illumination intensity, and recording parameters and are, therefore, excluded from the analysis). Parameters adopted for left- and right-side images are identified through a preliminary sensitivity analysis and reported in Table 1.
Corr. thresh. | Radius [pix] | Inten. thresh. | Int. area [pix] | Min. corr. | Sim. neigh. [%]
The image dataset
Analyzed images account for approximately 14 GB of data (data size increases upon frame extraction from the videos). On average, each left- and right-side sequence comprises 514 frames; the minimum and maximum numbers of frames per video are 336 and 604, respectively. The average frame acquisition frequency over the 70 videos is 8.7 Hz (minimum and maximum of 7.4 Hz and 9.6 Hz, respectively). Illumination intensity of the videos ranges from less than 10 lux up to approximately 7,000 lux.
The image dataset presents several challenges: frames are captured in low discharge conditions; the river generally shows a mirror-like surface; and the overlying bridge permanently creates two regions of different intensity in the images. Low discharge conditions in the Tiber River generally correspond to a meager quantity of tracers transiting in the field of view, which led to the selection of a subset of the videos captured from May 2nd to 11th, 2015 at the gauge-cam station. Further, the mirror-like surface is detrimental to particle visibility, as reflected overlying shapes, such as clouds, are clearly recognizable in the images. Finally, the bridge shadow generates a dark lower image region where tracer visibility is enhanced. On the other hand, in the upper half of the images, noise due to direct sunlight reflections negatively affects particle detection and tracking.
To characterize the amount of tracers in the selected videos, we estimate particle seeding density a posteriori through a Gaussian detection procedure. Specifically, the number of round particles of a selected diameter is automatically identified in the experimental images. To reduce computational time, the procedure is applied to a subset (one image in 50) of each experimental image sequence. Particle detection is performed through cross-correlation with a Gaussian kernel (set to 10 pixels and to 7 pixels in the left- and right-side images, respectively). Particle density is evaluated as the ratio of the total number of detected particles to the total number of image pixels. On average, left- and right-side images present particle densities of 3.09 × 10−4 and 1.30 × 10−3, respectively. In left-side sequences, minimum and maximum densities of 2.02 × 10−5 and 1.09 × 10−3 are observed; in right-side images, particle densities range from 2.92 × 10−4 to 2.75 × 10−3. Given the larger field of view, particle density is generally higher in right-side images.
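The a posteriori density estimate can be sketched as follows (a simplified Python analogue; matched Gaussian filtering with local-maxima counting stands in for the cross-correlation procedure, and the kernel width and threshold are assumed values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def particle_density(img, sigma=3.0, threshold=0.5):
    """Rough seeding-density estimate for a grayscale frame.

    Bright round particles are enhanced by Gaussian (matched) filtering,
    counted as local intensity maxima above a normalized threshold, and
    the count is divided by the total number of image pixels.
    """
    f = gaussian_filter(img.astype(float), sigma)
    f = (f - f.min()) / (np.ptp(f) + 1e-12)  # normalize to [0, 1]
    # A pixel is a particle candidate if it is the maximum of its
    # neighborhood and bright enough to stand out from the background.
    peaks = (f == maximum_filter(f, size=int(6 * sigma))) & (f > threshold)
    return peaks.sum() / img.size
```

Applied once every 50 frames, such a routine yields density values directly comparable to the ratios reported above.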
Compared to standard image-based laboratory applications, particle density is low and, therefore, high-speed cross-correlation algorithms, such as LSPIV, may be difficult to implement. Interestingly, the density of the tracers in the field of view is relatively unaffected by illumination conditions, as it remains constant in videos captured at different times of the day. This suggests that image quality is acceptable despite highly varying external light settings. In the case of right-side images, however, slightly higher standard deviations are observed (4.39 × 10−4 against 2.14 × 10−4 in the case of left-side sequences).
With regard to the bridge shadow, both left- and right-side sequences display a marked difference in intensity between the upper and lower halves of the images. This phenomenon is inevitable with fixed optical equipment installed underneath cableways and/or bridges, and it has been shown that it can be mitigated through the use of aerial platforms (Tauro et al. 2015, 2016b, 2016d). The intensity difference between upper and lower image regions is quantitatively estimated by computing the average upper and lower intensities for one image of each sequence. Generally, differences between upper and lower image regions are higher in right-side sequences and vary throughout the day. In particular, the intensity difference spans from 100 to 180 in images captured at 6–7 am, ranges from 10 to 80 at 12–5 pm, and varies from 80 to 160 at 7–8 pm. Left-side images present lower intensity differences: from 40 to 140 at 6–7 am, from 0 to 20 at 12–5 pm, and from 40 to 150 at 7–8 pm. Higher differences in right-side sequences often lead to unrealistically high velocity results in the upper part of the images. This is attributed to light reflections on the water surface and is confirmed by velocity vectors directed opposite to the flow. To circumvent this issue, herein we compute space-averaged velocities from PTV results obtained in the lower half of right-side images, whereas space-averaged velocities for left-side sequences are computed over the entire images.
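The intensity-based screening can be sketched as follows (illustrative Python; the threshold for discarding the upper half is an assumed value, not one reported here):

```python
import numpy as np

def half_intensity_difference(img):
    """Mean-intensity gap between the upper (sunlit) and lower
    (bridge-shadowed) halves of a grayscale frame."""
    h = img.shape[0] // 2
    return img[:h].mean() - img[h:].mean()

def region_for_ptv(img, max_gap=80):
    """Return the image region to pass to PTV: the full frame when
    the two halves are comparable, the lower (shadowed) half when the
    gap exceeds max_gap (an assumed threshold)."""
    if half_intensity_difference(img) <= max_gap:
        return img
    return img[img.shape[0] // 2:]
```

Such a rule reproduces the choice made here for right-side sequences, where only the lower half of the frames contributes to the space-averaged velocities.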
RESULTS AND DISCUSSION
We quantify the difference between image-based and radar data by aggregating the benchmark radar and image-based measurements hourly and computing the error between the resulting time series. The root mean squared error is equal to 0.11 m/s and 0.24 m/s for left- and right-side data, respectively.
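The hourly aggregation and error computation can be sketched as follows (a minimal Python/pandas version; the function name and resampling call are illustrative):

```python
import numpy as np
import pandas as pd

def hourly_rmse(image_series, radar_series):
    """RMSE between image-based and radar velocities after hourly
    averaging; both inputs are pandas Series indexed by timestamp."""
    img_h = image_series.resample("1h").mean()
    rad_h = radar_series.resample("1h").mean()
    # Keep only the hours present in both series before computing the error.
    diff = (img_h - rad_h).dropna()
    return float(np.sqrt((diff ** 2).mean()))
```

The image-based series (one value per 10-minute video) and the radar series (one value per 15 minutes) thus become directly comparable on a common hourly grid.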
Based on our findings, left-side sequences lead to surface flow velocity estimates in better agreement with radar data than right-side sequences. This is due to the fact that right-side images depict a larger field of view and, therefore, tend to be more severely affected by illumination conditions. In particular, light reflections on the water surface act as noise, often resulting in unrealistically high velocity estimates directed opposite to the flow. In similar future implementations, this behavior may be counteracted by integrating light filters and optics (for instance, polarizers) in the camera system. Similar to our approach, evaluating the difference in intensity between subareas of the field of view may help reveal potential issues related to illumination conditions. In the presence of secondary currents, velocity vectors directed opposite to the flow may also be observed; however, areas exhibiting such stationary currents can be consistently identified in the images and then excluded from the computation.
Our results generally support the use of PTV for surface flow measurements in hydrological systems. As already pointed out in Tauro et al. (2014, 2016a), alternative algorithms, such as LSPIV, largely rely on a homogeneous and abundant tracer density. Given the low seeding density of our videos, LSPIV has led to inaccurate velocity estimates that are often underestimated as compared to radar data (Tauro et al. 2016c), and we expect that it would have led to similarly underestimated results on the experimental videos presented herein. On the other hand, despite its robustness to low seeding density, PTV is more user-assisted than LSPIV, which results in longer processing times.
In this work, we process videos captured over 10 consecutive days in May 2015 from the fully remote, optics-based gauge-cam station on the Tiber River, Rome, Italy, previously presented in Tauro et al. (2016c). Videos are analyzed through PTV, and the estimated surface velocities are compared to simultaneous radar data. The algorithm successfully enables particle tracking over a wide range of illumination intensities from 6 am to 8 pm. On the other hand, consistent with similar optical methods, PTV is not applicable to videos recorded at night.
Images captured with the two different optical sensors are affected by the shadow created by the overlying bridge, with improved tracer visibility in the darker area of the field of view. Based on our findings, PTV leads to velocity measurements in better agreement with radar data in the case of images captured with the 27° angle of view optical sensor. On the other hand, images recorded with the fisheye lens yield less accurate velocities and tend to be severely affected by illumination conditions.
Our analysis suggests that PTV may be successful in a multitude of experimental settings (including low flow regimes) and is promising for similar future gauge-cam implementations. Unlike radar instrumentation, gauge-cam stations are expected to provide distributed velocity measurements and, potentially, information on river morphology over larger areas of interest and at competitive costs. Possible improvements to the current approach may entail preliminary video pre-processing through unsupervised procedures to rapidly estimate particle density and assess image quality. Finally, similar image-based measurement stations should integrate additional optics on the fisheye lens to mitigate the effect of light reflections on video quality.
This work was supported by POR-FESR 2014–2020 n. 737616 INFRASAFE and by the UNESCO Chair in Water Resources Management and Culture. The authors gratefully thank Dr Salvatore Grimaldi, Alessia Boni, Eloisa Petricca, and members of the Mechanical Engineering for Hydrology and Water Science Laboratory (www.mechydrolab.org) at University of Tuscia for support and help with the experiments. Further, support from CAE S.p.a. in the development of the gauge-cam station and help from Centro Funzionale–Regione Lazio for radar data availability are acknowledged.