Accurate, widespread, and real-time water level monitoring is crucial for the advancement of environmental research, specifically in the areas of climate change, water distribution, and natural disaster preparedness and management. The current state of the monitoring network requires an immediate solution for low-cost and accurate water level measurement sensors. This research presents a novel methodology for intelligent stream stage measurement, creating a distinct opportunity for a low-cost, camera-based embedded system that measures water levels and shares surveys to support environmental monitoring and decision-making. It is implemented as a stand-alone device that utilizes a registry of structures and points of interest (POI) along with the core modules of the application logic: (1) deep-learning powered water segmentation; (2) visual servoing; and (3) POI geolocation computation. The implementation relies on a Raspberry-Pi with a motorized camera for automated measurements and is supported by a Proportional–Integral–Derivative controller and multiprocessing. As future work, the camera supports further use cases such as recognizing objects (e.g., debris, trees, humans, and boats) on the water surface. Additionally, the presented method can be implemented as a Progressive Web Application (PWA) for smartphones to enable crowdsourced citizen science applications for environmental monitoring.

  • A methodology is proposed for intelligent stream stage measurements to support widespread environmental monitoring.

  • It is implemented on a Raspberry-Pi for affordable and continuous water level monitoring.

  • The system is applicable worldwide, and its functionality has been validated for operational viability.


Water-related natural hazards, including flooding and droughts, caused over $86 billion in damage and killed over 57,000 people around the world between 2009 and 2019 (CRED 2020). Water resources support hundreds of billions of dollars in commerce, provide safe drinking water, and support recreation, irrigation, power generation, and manufacturing (Loucks & Beek 2017; Seo et al. 2019). Reliable and real-time monitoring of water resources is critical to minimize the loss of life and property from water-related hazards and to effectively manage water resources (Guo 2010; Sit et al. 2019). The high cost of deploying and maintaining monitoring sensors limits data coverage, which is crucial to tackle vital issues like natural disaster mitigation (Yildirim & Demir 2021), water resource management (Demir et al. 2009), and climate change (Khan et al. 2020). For instance, the 30 million stream reaches in the USA are covered by an insufficient monitoring network of just 8,300 sensors (Marcarelli 2019). Currently, in the USA, federal and state agencies are using stage sensors that cost from $3,000 to $15,000, with annual maintenance costs ranging from $1,000 to $15,000 (Hennigan 2011). Furthermore, a Federal Emergency Management Agency (FEMA)-sponsored report shows that for every $1 spent on disaster mitigation efforts from federal grants, $6 is saved from disaster damages on average (Multihazard Mitigation Council 2018). These statistics underscore that the current state of water monitoring requires an immediate solution for low-cost and accurate water level measurement sensors.

The United Nations (UN) Sustainable Development Goals (SDG) identify the most important challenges currently facing humanity, including the pursuit of clean water, combatting climate change impacts, and creating disaster-resilient communities (UN 2016). Recently, the UN Interagency Task Team on Science, Technology, and Innovation for the SDGs (IATT 2021) published a report outlining how the rapid development of technology can aid in achieving the SDGs, particularly the utilization of autonomous sensors for environmental monitoring purposes (Demir et al. 2015; Boesl et al. 2021). A flexible approach is needed that can be realized on a variety of devices and applied in a multitude of water level measurement scenarios, including in situ placement of stand-alone sensors as well as portable devices (Abolghasemi & Anisi 2021).

Various approaches have been taken to tackle the issues of scalable and affordable stream stage measurement (Lin et al. 2017). One of the most widely studied and implemented methods of utilizing cameras in automated stage measurements is the use of staff gauges or known structures and shapes at the point of interest (POI) (Leduc et al. 2018; Chapman et al. 2022). While establishing and installing a reference point proves useful for accurate measurements, it requires substantial investment, both financially and in terms of acquiring appropriate permits (Sabbatini et al. 2021). Depth estimation via stereo imaging as well as the use of low-cost LIDAR sensors have also been studied for water monitoring (Mordvintsev & Abid 2013). However, the major challenge with such approaches is their tendency to yield substantially increasing margins of error as the range (i.e., distance) increases (Zhao et al. 2020). More recently, Karegar et al. (2022) presented and demonstrated a novel, low-cost approach that utilizes Global Navigation Satellite System (GNSS) interferometric reflectometry to identify the water level: a Raspberry-Pi-powered antenna installed by a river yielded successful stage approximations, although the authors acknowledge that the technique may not perform well for rivers narrower than 50 m, rivers that are nearly dry or at low levels, and locations where satellite signals may be blocked.

This research presents a novel methodology for intelligent stream stage measurement that utilizes prevalent sensors commonly found in smart devices. The methodology creates a distinct opportunity for a low-cost, camera-based embedded system that will measure water levels and share surveys to support environmental monitoring and decision-making (Teague et al. 2021). The presented intelligent stage sensing is implemented as an in situ installation of a complete single-board computer (i.e., a stand-alone sensor), which utilizes a registry of structures and POIs along with the core modules of the application logic: (1) deep-learning powered water segmentation module and (2) geometric POI calculation module. The implementation relies on a Raspberry-Pi with a motorized camera for automated measurements and is supported by a Proportional–Integral–Derivative (PID) controller and multiprocessing. An administrative web panel is developed to manage, monitor, and calibrate intelligent stage sensors upon deployment.

One of the main motivations behind this study is to demonstrate the need for and value of intelligent stream monitoring approaches and to establish a plethora of research and application areas to propel future work. Furthermore, the presented affordable sensor solution, along with the potential for the methodology's expansion into a smartphone application, can provide benefits, especially to underserved nations and communities (Cosgrove & Rijsberman 2014) and facilitate collaboration with data scientists and geoscientists (Ebert-Uphoff et al. 2017). Given water data's utility in various environmental tasks (e.g., flood mitigation; Alabbad et al. 2022), intelligent sensing approaches can support the SDGs as established in the aforementioned UN report.

The remainder of this paper is organized as follows: Section 2 describes the proposed methodology along with a thorough description of the system architecture. Section 3 presents the preliminary results of the experiment, both as a simulation and in operation. Finally, Section 4 concludes the article with a concise summary of findings and outlines future work and recommendations to tackle the challenges of intelligent stream monitoring.

Background

Within the scope of addressing the scarcity of stream stage measurement points worldwide, five geometry-based approaches have been proposed by Sermet et al. (2020) to measure water levels utilizing inexpensive sensors. Previously, the 3-Cone Intersection method was implemented and tested as a mobile application to promote crowdsourcing. While it proved to be highly flexible and readily applicable worldwide without requiring any site-specific preprocessing, the error margins introduced by human error are not trivial given the measurements' use in environmental tasks (e.g., flood forecasting and reservoir governance).

This research investigates the realization of another approach, entailing the convergence of a vector with a known structure at the POI, in pursuit of achieving a reliable, accurate, and actionable data flow (Demir & Sermet 2021). It is applicable for sites that have a clearly visible structure (e.g., building, infrastructure, and land) that intersects with the water body and requires only a single survey (i.e., taking a picture of the water intersection). The mechanism behind this approach can be summarized as assessing the water level at a POI by establishing a virtual vector with the initial point represented by the sensor geolocation and the absolute orientation of the camera at the time of the survey. The constructed vector is then intersected with the 3D model of the feature on-site to yield the altitude of the current water level.

The presented methodology creates a distinct opportunity for two innovative approaches for stage measurement: (1) a smartphone application and (2) a camera-equipped embedded system. As a citizen science practice, the public can use the mobile application to perform and share surveys to support environmental research and decision-making (Ewing & Demir 2021). For continuous and reliable monitoring of selected sites, stand-alone water level measurement sensors that are comprised of low-cost, camera-based single-board computers can be utilized as a significantly low-cost complement to existing systems. As part of this study, stand-alone sensor implementation is explored.

Sensor design and implementation

The presented approach relies on several independent core components, including a registry of geographic objects representing the POIs, image processing to determine water–structure intersection, and geometric calculations based on an earth model. Though they are implemented at the sensor level to support edge computing, their independent and flexible nature allows their migration to a centralized web server to enable intelligent stream stage sensing via various devices (e.g., smartphones) that are capable of producing the defined signals.

The method is realized as a single-board computer (e.g., Raspberry-Pi) equipped with a single camera, a servomotor, an Inertial Measurement Unit (IMU), a GPS receiver, and a battery. In order to increase accuracy and allow different use cases, the system supports additional optional hardware, including a cell modem, a LIDAR sensor, a laser sensor, and a solar panel. In order to achieve the lowest cost (e.g., component cost, the effort required for deployment, and personnel cost for maintenance) possible with viable accuracy, the implementation in this study does not depend on any of the optional hardware and they were not considered for the simulation. Figure 1 depicts the system architecture along with components and their interactions.
Figure 1. System architecture for implementation on a single-board computer.

POI registry

The presented approach aims to find the water level by virtually constructing a 3D vector between the camera and the water's intersection with a structure. It requires a clearly visible structure (e.g., a bridge column or building) with a modellable surface and a known geolocation that intersects with the water. The 3D model of the structure on the measurement site is defined before deployment and saved in a relational database accessible via an Application Programming Interface (API), with functionality allowing the device to query by location and orientation to retrieve the desired POI in close proximity. The 3D model consists of a combination of geometrical objects with geolocations representing a simplified version of the intersecting structure, whose location and shape can be assessed through a variety of strategies, including retrieving structure plans and data from building owners or from the city, taking manual on-site measurements using precise land surveying equipment, and utilizing high-accuracy or upscaled Digital Elevation Models (DEMs) (Demiray et al. 2021). However, for the sake of rapid prototyping, worldwide applicability, convenience, and low cost, Google Earth has been used to extract the geolocations of the objects of interest and define the POI.
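The registry schema itself is not prescribed by the methodology, so the sketch below only illustrates one plausible shape for a POI record and a proximity query; the dataclass fields, the in-memory list standing in for the relational database and its API, and the haversine helper are illustrative assumptions.

```python
# Minimal sketch of a POI registry record and a proximity query.
# The field names, the in-memory list standing in for the relational
# database/API, and the haversine helper are illustrative assumptions.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class POI:
    poi_id: str
    lat: float            # latitude of a reference point on the structure (deg)
    lon: float            # longitude (deg)
    plane_points: tuple   # simplified 3D model: three (lat, lon, alt) points defining a plane


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (spherical approximation)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000.0 * asin(sqrt(a))


def query_nearby(registry, sensor_lat, sensor_lon, max_range_m=120.0):
    """Return POIs within range of the sensor, closest first."""
    candidates = [(haversine_m(sensor_lat, sensor_lon, p.lat, p.lon), p) for p in registry]
    return [p for d, p in sorted(candidates, key=lambda c: c[0]) if d <= max_range_m]
```

In the deployed system, this lookup would be served by the API in front of the relational database: the device would submit its GPS fix and camera orientation and receive the simplified 3D model of the nearest visible structure.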

Water segmentation

The sensor leverages a custom image segmentation model to automatically detect the intersection of the water with the structure using neural networks. The sensor performs the recognition at a predefined frequency to monitor changes in the water level. If the water level changes, the water intersection identified by the segmentation drives the sensor camera movement, making it possible to calculate the new stage. Within the scope of this paper, an open-source and well-established obstacle detection model for autonomous boats has been adopted, and its underlying U-Net-based convolutional neural network is utilized to determine the waterline (Steccanella et al. 2020).
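The adopted model outputs a water/non-water segmentation of each frame; the sketch below shows one simple way to turn such a binary mask into waterline pixels (the topmost water pixel in each image column). The mask source and the column-wise rule are assumptions for illustration, not the exact post-processing of the adopted model.

```python
# Sketch: deriving a waterline from a binary segmentation mask.
# `mask` is assumed to be an HxW array with 1 = water and 0 = non-water,
# as produced by a U-Net style water segmentation model. The column-wise
# "topmost water pixel" rule is an illustrative simplification.
import numpy as np


def extract_waterline(mask: np.ndarray):
    """Return (col, row) pixel coordinates of the top edge of the water region."""
    waterline = []
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col] > 0)
        if rows.size:                                  # column contains water pixels
            waterline.append((col, int(rows.min())))   # topmost water pixel
    return waterline
```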

POI calculation

For geometrical calculations, all geolocations (referenced by default to the WGS84 datum/ellipsoid) are converted to the Earth-Centered, Earth-Fixed (ECEF) Cartesian coordinate system to reduce the error margin. Thus, latitude, longitude, and altitude are represented as x, y, and z. The direction of the camera is expressed in terms of pitch and yaw angles. The vector is limited to a length of 120 m, as the measurement site is expected to be closer than that distance for accuracy. Thus, the 3D vector can be defined by its origin and end points (Equation (1)). Once the 3D vector is generated, it is intersected with the geometrical objects representing the structure on the measurement site. For simplicity, during the case study, planes are used as the default geometrical object, each of which can be defined by three points on its surface. Once the intersection is calculated, its ECEF coordinates (x, y, z) are converted back to latitude, longitude, and altitude, of which the resulting altitude is the water level. For geometrical operations against the Karney ellipsoidal earth model, an open-source geodesy library is utilized (PyGeodesy 2022).
$$V = \{O,\ E\}, \qquad E = O + 120\,\hat{d} \tag{1}$$

where $O = (x_o, y_o, z_o)$ is the sensor geolocation in ECEF coordinates and $\hat{d}$ is the unit direction vector derived from the camera's pitch and yaw angles.
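The paper performs these operations with the PyGeodesy library against the Karney ellipsoidal earth model; the self-contained sketch below reproduces the same pipeline with explicit WGS84 formulas instead (geodetic-to-ECEF conversion, construction of the 120 m viewing ray from pitch and yaw, ray-plane intersection, and recovery of the altitude of the intersection). The function names and the ENU-based direction construction are assumptions for illustration and are not PyGeodesy's API.

```python
# Sketch of the geometric POI calculation: geodetic -> ECEF, build the 120 m
# viewing ray from pitch/yaw, intersect it with a plane defined by three ECEF
# points, and recover the altitude of the intersection (the water level).
# Standard WGS84 formulas are used; the paper itself relies on PyGeodesy.
import numpy as np

A = 6378137.0            # WGS84 semi-major axis (m)
E2 = 6.69437999014e-3    # WGS84 first eccentricity squared


def geodetic_to_ecef(lat, lon, alt):
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    return np.array([(n + alt) * np.cos(lat) * np.cos(lon),
                     (n + alt) * np.cos(lat) * np.sin(lon),
                     (n * (1 - E2) + alt) * np.sin(lat)])


def ecef_to_altitude(p):
    """Iteratively recover geodetic altitude from an ECEF point."""
    x, y, z = p
    rho = np.hypot(x, y)
    lat = np.arctan2(z, rho * (1 - E2))
    for _ in range(5):
        n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
        alt = rho / np.cos(lat) - n
        lat = np.arctan2(z, rho * (1 - E2 * n / (n + alt)))
    return alt


def view_ray(lat, lon, alt, pitch, yaw, length=120.0):
    """Origin and end point of the camera ray (angles in radians, yaw clockwise from north)."""
    lat_r, lon_r = np.radians(lat), np.radians(lon)
    east = np.array([-np.sin(lon_r), np.cos(lon_r), 0.0])
    north = np.array([-np.sin(lat_r) * np.cos(lon_r), -np.sin(lat_r) * np.sin(lon_r), np.cos(lat_r)])
    up = np.array([np.cos(lat_r) * np.cos(lon_r), np.cos(lat_r) * np.sin(lon_r), np.sin(lat_r)])
    d = np.cos(pitch) * np.sin(yaw) * east + np.cos(pitch) * np.cos(yaw) * north + np.sin(pitch) * up
    origin = geodetic_to_ecef(lat, lon, alt)
    return origin, origin + length * d


def intersect_plane(origin, end, p1, p2, p3):
    """Intersect the camera segment with the plane through p1, p2, p3 (all ECEF)."""
    normal = np.cross(p2 - p1, p3 - p1)
    direction = end - origin
    denom = normal.dot(direction)
    if abs(denom) < 1e-9:
        return None                    # ray is parallel to the plane
    t = normal.dot(p1 - origin) / denom
    if not 0.0 <= t <= 1.0:
        return None                    # intersection lies outside the 120 m segment
    return origin + t * direction      # water level = ecef_to_altitude(intersection)
```

Applying ecef_to_altitude to the returned intersection point yields the altitude that the sensor reports as the water level.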

Automatic controller

To let the findings from the camera feed drive the camera movements, an implementation of visual servoing is required. Two control techniques for visual servoing are considered, namely Image-based (2D) and Pose-based (3D) (Chaumette & Hutchinson 2006). Since pose does not hold significance in the monitoring of surface water, the Image-based approach has been adopted. We implemented a PID control system (Rosebrock 2019) to automate the process of keeping track of the water intersection at the POI. As widely established in the literature, PID controllers function broadly to minimize the error between a setpoint and the process variable as part of a feedback loop (Pawar et al. 2018). In the proposed sensor's context, the desired condition is to center the water intersection line in the camera perspective, the process variable is a point selected on the extracted perimeter subject to the conditions described below, and the error is the pixel-wise distance between the variable and the image center.

The main course of action taken by the controller is tilting the camera over the x-axis (i.e., up and down) to track the changes in water level, though the controller supports panning over the y-axis (i.e., left and right) for use during initialization and in rare cases where water visibility shifts as the stage fluctuates. The employed water segmentation method readily identifies the water-representing pixel groups defined by their perimeter. To contextualize the output for use in the controller, a point must be selected dynamically as the target for guiding the servo sensors. Considering the high variance introduced by the sampling process (e.g., unsteady water surface, image noise, and component error margins), care must be taken in the point selection to eliminate infinite loops. Hence, the PID controller must enforce satisfaction criteria. Taking continuous monitoring into account, the process variable is determined as the point on the perimeter that (1) is closest to the image center and (2) does not have another perimeter point below it in its orthographic projection. Because the ground truth changes with each observation (i.e., water is in motion), a fail-safe mechanism is needed to prevent another type of possible infinite loop, one caused by the intrinsic features of the monitored resource rather than the controller. As a solution, the system keeps records of all the error terms, e(t), as well as the resulting camera orientations, in order to construct a trendline and assess the noise based on a threshold. Hence, the system estimates the true stage by minimizing the negative effects that may be caused by waves and debris. Given that the sensor operation is automated and does not involve user interaction, the entire workflow is executed as concurrent processes using Python's multiprocessing package, which manipulates the same variables shared over the server process maintained by the manager component (Marowka 2018).
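The sketch below only illustrates the structure of this image-based feedback loop: the error is the vertical pixel distance between the selected waterline point and the image center, and the PID output is applied as a tilt adjustment. The gains, the loop period, the tolerance, and the get_waterline_row/set_tilt_delta callbacks are illustrative assumptions rather than the deployed controller's parameters.

```python
# Sketch of the image-based PID loop driving the tilt servo. Gains, loop
# period, tolerance, and the waterline/servo callbacks are illustrative
# assumptions; in the deployed sensor this loop runs as one of the
# concurrent processes sharing state through the multiprocessing manager.
import time


class PID:
    def __init__(self, kp=0.02, ki=0.001, kd=0.005):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def track_waterline(get_waterline_row, set_tilt_delta, image_height=480,
                    tolerance_px=5, period_s=0.5):
    """Keep the selected waterline point vertically centered in the frame."""
    pid = PID()
    error_history = []                     # retained to build the noise/trend estimate
    while True:
        row = get_waterline_row()          # row of the selected perimeter point
        error = image_height / 2 - row     # pixel-wise distance from the image center
        error_history.append(error)
        if abs(error) > tolerance_px:      # satisfaction criterion not yet met
            set_tilt_delta(pid.update(error, period_s))
        time.sleep(period_s)
```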

Administrative web panel

A web server is created to serve as an administrative panel for the setup and maintenance of the sensor. The Flask web framework is utilized to expose the core functionalities offered by the unit to initialize, fine-tune, and monitor the measurements as well as to handle site-specific anomalies. An intuitive user interface is designed with features including observing the real-time feed from the camera, manually aligning the camera, setting orientational offsets (e.g., pitch and yaw), and manually setting the deployed sensor geolocation (Figure 2). It should be noted that these features are optional and the sensor is capable of fully automated operation. For security as well as privacy reasons (e.g., water data quality for research and decision-making, camera access), the control panel can only be accessed from devices connected to the same network. However, depending on the requirements, the server can be exposed to the internet (e.g., via ngrok), opening up use cases for remotely managing a network of sensors from a centralized application in the field.
Figure 2. Admin panel for the sensor.
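A minimal sketch of how such a panel might expose the sensor's controls with Flask is given below; the route names, the shared state dictionary, and the payload fields are illustrative assumptions rather than the actual panel's API.

```python
# Minimal sketch of a Flask-based admin panel. Route names, the shared state
# dictionary, and payload fields are illustrative assumptions; in the real
# system the state would live in the multiprocessing manager's shared variables.
from flask import Flask, jsonify, request

app = Flask(__name__)

state = {"lat": None, "lon": None, "pitch_offset": 0.0, "yaw_offset": 0.0,
         "last_stage_m": None}


@app.route("/status")
def status():
    """Report the current calibration and the latest measurement."""
    return jsonify(state)


@app.route("/offsets", methods=["POST"])
def set_offsets():
    """Manually set orientational offsets (pitch/yaw, in degrees)."""
    payload = request.get_json(force=True)
    state["pitch_offset"] = float(payload.get("pitch", state["pitch_offset"]))
    state["yaw_offset"] = float(payload.get("yaw", state["yaw_offset"]))
    return jsonify(state)


@app.route("/location", methods=["POST"])
def set_location():
    """Manually override the deployed sensor geolocation."""
    payload = request.get_json(force=True)
    state["lat"], state["lon"] = float(payload["lat"]), float(payload["lon"])
    return jsonify(state)


if __name__ == "__main__":
    # Bound to the local network by default; exposure (e.g., via ngrok) is optional.
    app.run(host="0.0.0.0", port=8080)
```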

System initialization and continuous monitoring

The stage sensor is first mounted at a location with a clear sight to a POI that features a water resource (e.g., a river) intersecting with an immovable object such as a bridge, a building, or terrain; the latter should be evaluated case by case depending on the error margin it may introduce due to factors such as erosion. Upon booting, the sensor measures and records its geolocation and rotates the camera to the water intersection automatically. This rotation is measured and saved in the form of servomotor steps as well as IMU measurements to accurately calculate the 3D orientation of the camera. Hence, the sensor is autonomously aimed at a potential POI candidate visible to the camera during the installation, upon which the user can customize the parameters manually via the admin panel. The 3D model of the POI structure or terrain is retrieved once during initialization via a REST API from a centralized repository, with the selection criterion that the POI be contained within the spherical wedge defined by a portion of a sphere that has the sensor geolocation as its center and a radius of 120 m (i.e., the practical limit of camera range with respect to measurement accuracy). Of all the POIs returned from the repository for this query, the one closest to the sensor is selected.
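A sketch of this selection criterion is given below: candidate POIs returned by the repository are filtered to those within 120 m of the sensor and inside the camera's horizontal field of regard, and the closest one is kept. The bearing helper, the half-angle parameter, and the poi.lat/poi.lon record fields are illustrative assumptions.

```python
# Sketch of the POI selection criterion used during initialization: keep the
# closest candidate within 120 m of the sensor that lies inside the camera's
# horizontal field of regard (the spherical wedge). The bearing helper, the
# half-angle parameter, and the record fields are illustrative assumptions.
from math import radians, degrees, sin, cos, atan2, asin, sqrt

RANGE_M = 120.0   # practical limit of camera range for accurate measurements


def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg) from point 1 to point 2."""
    p1, p2, dlon = radians(lat1), radians(lat2), radians(lon2 - lon1)
    a = sin((p2 - p1) / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    dist = 2 * 6371000.0 * asin(sqrt(a))
    brg = degrees(atan2(sin(dlon) * cos(p2),
                        cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dlon)))
    return dist, brg % 360.0


def select_poi(candidates, sensor_lat, sensor_lon, camera_yaw_deg, half_angle_deg=90.0):
    """Return the nearest candidate POI inside the wedge, or None."""
    best, best_dist = None, float("inf")
    for poi in candidates:                       # e.g., records returned by the registry API
        dist, brg = distance_and_bearing(sensor_lat, sensor_lon, poi.lat, poi.lon)
        off_axis = abs((brg - camera_yaw_deg + 180.0) % 360.0 - 180.0)
        if dist <= RANGE_M and off_axis <= half_angle_deg and dist < best_dist:
            best, best_dist = poi, dist
    return best
```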

By this point, the system has been using absolute orientation, which is prone to errors and noise due to environmental factors. Therefore, once the POI is defined, the system goes through a calibration round to mitigate the noise by switching to relative orientation. A singular point on the POI is selected to mathematically calculate the pitch and yaw angles required to aim the camera (Equation (2)). Once the absolute orientation is identified (in radians) to constitute the reference point, the sensor performs another round of automated recognition to rotate the camera to the water intersection, this time recording relative movements. Finally, a 3D vector is created from the camera orientation, which is expected to intersect the reference object at a point. The altitude of this intersection represents the water level, which the system records along with a compressed picture of the site at the moment of measurement.
$$\psi = \operatorname{atan2}(\Delta E,\ \Delta N), \qquad \theta = \operatorname{atan2}\!\left(\Delta U,\ \sqrt{\Delta E^{2} + \Delta N^{2}}\right) \tag{2}$$

where $(\Delta E, \Delta N, \Delta U)$ is the offset from the sensor to the selected POI point in the local east-north-up frame, $\psi$ is the required yaw angle, and $\theta$ is the required pitch angle.
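A compact sketch of this aiming step, consistent with the form assumed for Equation (2), is shown below; the flat-earth east-north-up approximation is an assumption made for brevity and is reasonable over the short (under 120 m) distances involved.

```python
# Sketch of the aiming calculation behind Equation (2): the pitch and yaw
# needed to point the camera at a reference point on the POI. A local
# flat-earth east-north-up approximation is assumed for brevity.
from math import radians, cos, atan2, hypot

EARTH_R = 6371000.0  # mean earth radius (m)


def aim_angles(sensor_lat, sensor_lon, sensor_alt, poi_lat, poi_lon, poi_alt):
    """Return (pitch, yaw) in radians; yaw measured clockwise from north."""
    d_north = radians(poi_lat - sensor_lat) * EARTH_R
    d_east = radians(poi_lon - sensor_lon) * EARTH_R * cos(radians(sensor_lat))
    d_up = poi_alt - sensor_alt
    yaw = atan2(d_east, d_north)
    pitch = atan2(d_up, hypot(d_east, d_north))
    return pitch, yaw
```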
Once the initialization process is complete, the system performs continuous monitoring at predefined time intervals and relays the survey results to a centralized system for storage and visualization (Figure 3). During continuous monitoring, the system prioritizes tracking in a vertical fashion to measure the fluctuations in the water level. Hence, the yaw angle is fixed while the pitch angle stays as a variable, resulting in the creation of a 2D vector to assess the stage.
Figure 3. Flowchart describing the continuous monitoring process upon deployment (i.e., the happy path).
The presented intelligent stage sensing methodology has been validated with a simulation conducted for a virtual sensor located at the IIHR–Hydroscience and Engineering building, with the measurement site (POI) being the Iowa Power Plant, which intersects with the Iowa River (Figure 4). The measurement site has been defined by a 3D plane for simplicity, described by three points on its surface. After the simulation data were supplied to the software, the water stage was calculated as 188.72 m, in comparison to the ground truth of 189 m, meaning that our approach was able to estimate the water level with an error of 0.28 m for a POI that is approximately 90 m away. Upon the methodology's validation, an implementation was realized as a stand-alone sensor for in situ installation in the same locale as the simulation. The stage sensor is placed in a room that has a direct view of the water resource, a Wi-Fi internet connection, and electricity to power the sensor (i.e., plugged into an electrical outlet) (Figure 4(a)). The sensor was calibrated through the control panel and provided real-time measurements based on the video feed, proving its functionality in the operational setting, though the measurements exhibited fluctuating accuracy due to the aggressive waves caused by the dam near the POI (Figure 4(b)).
Figure 4. System validation and case study: (a) survey setup; (b) close-up picture of the POI (not from the sensor camera); and (c) simulation parameters and result.

This study details the methodology and its implementation, along with a simulated case study to showcase functional accuracy. However, a systematic experiment needs to be designed and executed at viable locales to study its feasibility in a real-life setting. The experiment can be conducted at places with existing USGS sensors to serve as reference points, focusing on the analysis of (1) the role that different weather conditions (e.g., fog, precipitation, nighttime, temperature, and humidity) play in image processing and sensor reliability; (2) identification of the ideal interval for measurements and discretization; (3) assessing and comparing the accuracy of singular point measurements against running average trendlines; and (4) calibration of critical parameters, including camera specifications, signal filters, and power usage.

This study presents a novel methodology for water level measurement that utilizes prevalent sensors commonly found in smart devices and showcases its utility by realizing a low-cost, camera-based embedded system for on-site surveys. It utilizes static (e.g., a known structure that intersects with the water body) and dynamic parameters for the deployment site to power the presented geometry-based liquid level estimation algorithm. Its main purpose is to serve as a complementary solution that fills the substantial data gap caused by unmonitored sites and to support environmental research and decision-making by providing reliable and consistent stage measurements. An experiment has been conducted to validate the implemented system's functional operation and stability, along with a simulation that validated the degree of error margin under optimal conditions. Thus, while the methodology is validated, future case studies are needed to assess measurement accuracy under varying environmental conditions and sensor combinations.

The findings of this study reveal numerous opportunities for future work, both from an application and research standpoint, some of which are identified below:

  • (a)

    The accuracy and cost needed for the real-world application of stage measurements vary among different organizations (e.g., federal and state organizations, insurance companies, research groups, and property owners). For instance, federal organizations with decision-making motivation prioritize accuracy to the maximum extent possible, whereas increased coverage is just as valuable in the case of determining flood insurance costs for real estate. Similarly, different use cases may favor a higher initial deployment cost to decrease the long-term cost of maintenance, while others require minimizing upfront costs. In future work, such use cases should be carefully studied to design optimal stage sensor variations based on optional hardware and pre-deployment activities.

  • (b)

    Another area of future work is utilizing neural networks to support or replace the PID controller. Currently, the stage sensor functions on a feedback loop to continuously adjust the camera orientation based on the waterline identification until a satisfaction threshold is reached. As an enhancement, each sensor can be equipped with a predictive model that is continuously trained on the pixel-wise difference in the waterline and the resulting servo updates (in addition to other pertinent parameters) to calculate the required degree of change. Each sensor would have its own model calibrated to its site (e.g., POI distance and environmental parameters). Once satisfactory precision is achieved, the model can replace the PID controller and allow the camera to aim at the new water intersection in a single step (i.e., a single servo update). While the training can be computationally expensive and power-hungry initially, it may, in fact, increase the power efficiency of the sensor in the long term by minimizing the overhead of servo operation caused by the PID controller.

  • (c)

    The proposed stage sensor relies on a workflow of physically aiming the camera at the water intersection, thereby including a dynamic component that may complicate long-term maintenance and may suffer from the imprecision of the aiming process at greater distances, which would be even more pronounced if the methodology were implemented as a smartphone application. This issue highlights the need to derive an equation that calculates changes in water level from pixel differences without requiring the rotation of the camera, given that the POI remains visible. The equation would require the incorporation of additional parameters such as the camera specifications and highly precise structure and terrain models, which may be particularly difficult in cases of irregular surfaces.

  • (d)

    Depending on the installation location, it is very likely that the sensor camera will be subject to obstructions and noise. In the case of the presented experiment, the windows are often covered with dust, spider webs, bugs, raindrops, and mud splashes. Deep-learning powered noise elimination approaches can be integrated into the image processing module. For instance, Liu et al. (2020) presented a method based on deep convolutional neural networks to automatically recover images through obstructions such as reflections, raindrops, and fences. The repository is open-source and directly applicable to the problem at hand.

  • (e)

    The proposed methodology can be implemented as a Progressive Web Application (PWA) to pave the way to effective and convenient crowdsourced stage measurements running entirely using smartphones. One of the main advantages of the PWA approach is that the application can function on different types of devices (e.g., iOS, Android) utilizing the smartphone sensors and camera without requiring native development. Furthermore, the PWA approach enables the system's utilization all around the world without individual sensor manufacturing, deployment, permit, and maintenance costs. A centralized POI repository can be developed and continuously expanded with support from local communities and administrative units. To encourage the public and ensure a continuous flow of stage measurements from the community, gamification methods can be employed (e.g., collecting points, feedback mechanisms, and recognition).

  • (f)

    As a way to increase the effectiveness and interoperability of water level data collected via a wide range of devices and locations, water data sharing platforms and workflows can be researched, particularly based on blockchain technology. Water monitoring collaboration among institutions based on widespread IoT devices and immutable and verifiable records can enable holistic analysis for flood forecasting, circumventing the well-established challenges of water management (Lin et al. 2017).

  • (g)

    On-site availability of internet-enabled and camera-equipped devices with high computational power opens a plethora of opportunities for future enhancements and alternative applications. In addition to water level measurement, the presence of the camera enables further monitoring scenarios such as recognizing objects (e.g., debris, trees, humans, and boats) on the water surface using deep learning and supplying annotated data for use in hydrological processes including surface water modeling, streamflow estimation, and flood prediction (Sit et al. 2020).

This project is based upon work supported by the University of Iowa and partially funded as GR-015130-00019 under the provisions of section 104 of the Water Resources Research Act of 1984 annual base grants (104b) distributed through the Iowa Water Center.

All relevant data are included in the paper or its Supplementary Information.

Ibrahim Demir and Yusuf Sermet have patent #US20210310807A1 pending to University of Iowa Research Foundation UIRF.

References

Abolghasemi V. & Anisi M. H. 2021 Compressive sensing for remote flood monitoring. IEEE Sensors Letters 5 (4), 1–4.
Alabbad Y., Yildirim E. & Demir I. 2022 Flood mitigation data analytics and decision support framework: Iowa Middle Cedar Watershed case study. Science of the Total Environment 814, 152768.
Boesl D. B., Haidegger T., Khamis A., Mai V. & Mörch C. 2021 Automating the Achievement of SDGs: Robotics Enabling & Inhibiting the Accomplishment of the SDGs. In: IATT Report for the STI Forum 2021. UN Interagency Task Team on STI for the SDGs, New York, NY.
Chapman K. W., Gilmore T. E., Chapman C. D., Birgand F., Mittlestet A. R., Harner M. J., Mehrubeoglu M. & Stranzl J. E. 2022 Open-source software for water-level measurement in images with a calibration target. Water Resources Research 58 (8), e2022WR033203.
Chaumette F. & Hutchinson S. 2006 Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine 13 (4), 82–90.
Cosgrove W. J. & Rijsberman F. R. 2014 World Water Vision: Making Water Everybody's Business. Routledge, London, UK.
CRED (Centre for Research on the Epidemiology of Disasters) 2020 Natural Disasters 2019: Now is the Time to not Give up. CRED, Brussels.
Demir I. & Sermet M. Y. 2021 Camera-Based Liquid Stage Measurement. U.S. Patent Application No. 17/223,270.
Demir I., Jiang F., Walker R. V., Parker A. K. & Beck M. B. 2009 Information systems and social legitimacy scientific visualization of water quality. In: 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, 11-14 October 2009. IEEE, New York, NY, pp. 1067–1072.
Demir I., Conover H., Krajewski W. F., Seo B. C., Goska R., He Y., McEniry M. F., Graves S. J. & Petersen W. 2015 Data-enabled field experiment planning, management, and research using cyberinfrastructure. Journal of Hydrometeorology 16 (3), 1155–1170.
Demiray B. Z., Sit M. & Demir I. 2021 D-SRGAN: DEM super-resolution with generative adversarial networks. SN Computer Science 2 (1), 1–11.
Ebert-Uphoff I., Thompson D. R., Demir I., Gel Y. R., Karpatne A., Guereque M., Kumar V., Cabral-Cano E. & Smyth P. 2017 A vision for the development of benchmarks to bridge geoscience and data science. In: 7th International Workshop on Climate Informatics, Boulder, CO, 20-22 September 2017 (Lyubchich V., Oza N. C., Rhines A. & Szekely E., eds.). OpenSky.
Guo H. 2010 Understanding global natural disasters and the role of earth observation. International Journal of Digital Earth 3 (3), 221–230.
Hennigan G. 2011 Water Watchers: Sensors Monitor Flood Threats in Eastern Iowa.
IATT 2021 Emerging Science, Frontier Technologies, and the SDGs – Perspectives From the UN System and Science and Technology Communities. United Nations Interagency Task Team on Science, Technology and Innovation for the Sustainable Development Goals, New York. Available from: http://sdgs.un.org/tfm/.
Karegar M., Kusche J., Geremia-Nievinski F. & Larson K. 2022 Raspberry Pi reflector (RPR): a low-cost water-level monitoring system based on GNSS interferometric reflectometry. Water Resources Research 58, e2021WR031713.
Leduc P., Ashmore P. & Sjogren D. 2018 Stage and water width measurement of a mountain stream using a simple time-lapse camera. Hydrology and Earth System Sciences 22 (1), 1–11.
Lin Y. P., Petway J. R., Anthony J., Mukhtar H., Liao S. W., Chou C. F. & Ho Y. F. 2017 Blockchain: the evolutionary next step for ICT e-agriculture. Environments 4 (3), 50.
Liu Y. L., Lai W. S., Yang M. H., Chuang Y. Y. & Huang J. B. 2020 Learning to see through obstructions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14215–14224.
Loucks D. P. & Beek E. V. 2017 Water resources planning and management: an overview. In: Water Resource Systems Planning and Management (Loucks D. P. & Beek E. V., eds.). Springer, Cham, Switzerland, pp. 1–49.
Marcarelli A. 2019 Swept Away: Stream Gauges Essential to Storm Resilience. AGU Bridges.
Marowka A. 2018 On parallel software engineering education using python. Education and Information Technologies 23 (1), 357–372.
Mordvintsev A. & Abid K. 2013 Depth Map From Stereo Images.
Multihazard Mitigation Council 2018 Natural Hazard Mitigation Saves: 2018 Interim Report. Principal Investigator: Porter, K. National Institute of Building Sciences, Washington, DC.
Pawar K. S., Palwe M. V., Ellath S. B. & Sondkar S. Y. 2018 Comparison of performance of PID controller and state feedback controller for flow control loop. In: 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16-18 August 2018. IEEE, New York, NY, pp. 1–5.
PyGeodesy 2022 The PyGeodesy Python Library Implementation and Description. Available from: https://github.com/mrJean1/PyGeodesy (accessed 9 February 2022).
Rosebrock A. 2019 Pan/tilt face tracking with a Raspberry Pi and OpenCV. PyImageSearch.
Sabbatini L., Palma L., Belli A., Sini F. & Pierleoni P. 2021 A computer vision system for staff gauge in river flood monitoring. Inventions 6 (4), 79.
Seo B. C., Keem M., Hammond R., Demir I. & Krajewski W. F. 2019 A pilot infrastructure for searching rainfall metadata and generating rainfall product using the big data of NEXRAD. Environmental Modelling & Software 117, 69–75.
Sermet Y., Villanueva P., Sit M. A. & Demir I. 2020 Crowdsourced approaches for stage measurements at ungauged locations using smartphones. Hydrological Sciences Journal 65 (5), 813–822.
Sit M., Sermet Y. & Demir I. 2019 Optimized watershed delineation library for server-side and client-side web applications. Open Geospatial Data, Software and Standards 4 (1), 1–10.
Sit M., Demiray B. Z., Xiang Z., Ewing G. J., Sermet Y. & Demir I. 2020 A comprehensive review of deep learning applications in hydrology and water resources. Water Science and Technology 82 (12), 2635–2670.
Steccanella L., Bloisi D. D., Castellini A. & Farinelli A. 2020 Waterline and obstacle detection in images from low-cost autonomous boats for environmental monitoring. Robotics and Autonomous Systems 124, 103346.
Teague A., Sermet Y., Demir I. & Muste M. 2021 A collaborative serious game for water resources planning and hazard mitigation. International Journal of Disaster Risk Reduction 53, 101977.
United Nations (UN) 2016 Transforming Our World: The 2030 Agenda for Sustainable Development.
Zhao C., Sun Q., Zhang C., Tang Y. & Qian F. 2020 Monocular depth estimation based on deep learning: an overview. Science China Technological Sciences 63 (9), 1612–1627.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).