Thanks to the rapid development of internet technology and computer hardware, it is now possible to use web services to provide visual simulations of flow field calculation results. Visualization technology can display complex water flow data and the laws that govern water flow through graphical means, and can be used to solve scientific and engineering problems related to water conservancy. In this study, we developed a platform for visualizing flow in a watershed based on the Cesium rendering framework with Browser/Server (B/S) architecture that used isosurface, particle, texture-based, and dynamic flow visualization techniques to visualize scalar field, vector field, and dynamic flow field data. Furthermore, by employing frame interpolation and viewpoint-based dynamic rendering techniques, the platform achieves rendering performance that meets the practical application requirements for visualizing large-scale flow fields. The results from testing the water flow visualization platform in the Beijiang River Basin, Guangdong Province, China, demonstrated that the platform performed well on different devices and that the running frame rate reached 50–60 fps. These findings can be used to guide further development and applications of web-side flow field visualization technology.

  • The visualization of multiple flow field calculations on the web side is achieved based on Cesium.

  • Frame interpolation technology and dynamic viewpoint-based rendering strategies are used to optimize the visualization of large-scale flow field data.


With the rapid development of computer hardware and software in recent years, researchers have become increasingly interested in simulating water flow in watersheds. Flow visualization technology uses computer graphics and image processing technology to convert the data computed by hydrodynamic models into images or animations, so that complex abstract data for water flows and the laws of water flow can be expressed in an intuitive and vivid way. A visualization platform that combines water flow visualization technology with geospatial data, station monitoring information, satellite remote sensing data, and data from various other sources can be established to support integrated watershed management (Worley et al. 2023). With their ability to manage information and provide a continuous flow of visual information, such platforms can support decisions about river basin scheduling and planning for disaster relief (Zhang et al. 2019; Li et al. 2020).

Water flow visualization techniques are generally classified into two categories: visual effect visualization and scientific visualization (Zhang et al. 2017). Visual effect visualization focuses on rendering the realistic appearance of the water flow, while scientific visualization focuses on rendering its physical state. Water flow visualization can be further divided into vector field visualization and scalar field visualization based on the characteristics of the visualized data, and into static field visualization and dynamic field visualization depending on how the data change over time. However, vector and dynamic flow fields are more difficult to visualize than scalar and static fields because they carry more information.

Reasonable visual mapping methods can improve the visualization of the flow field characteristics from different types of data. There are four main types of visual mapping methods for water flow data (Chen & Leitte 2011): (1) direct, which uses icons or color coding to represent the flow field; (2) geometric, which visualizes the movement of the water using vector lines and vector surfaces (Stalling et al. 1997); (3) texture-based, which generates dense and high-quality texture images and provides a clear and continuous display of detailed information about the flow field space (Cabral & Leedom 1993; Wijk 2002), and (4) feature-based, which highlights feature areas in the flow field by extracting features (Xu et al. 2011).

Recently, numerous scholars have been involved in researching, developing, and applying new technology for visualizing water flow. Zhang et al. (2016, 2017) developed a platform to support integrated watershed management based on the OpenSceneGraph (OSG) graphic engine that combined various visualization methods, including arrow visualization, Image-based Flow Visualization (IBFV), and Line Integral Convolution (LIC), and achieved good results when they applied it to a river navigation management situation. Zhang et al. (2020) visualized flow field data by dynamic texture mapping and particle tracking with World Wind open-source software and then developed a three-dimensional (3D) visualization software to manage river shipping based on the Client/Server (C/S) architecture that combined the flow model with ship animation. The above-mentioned water flow visualization applications are all based on C/S architecture, which makes cross-platform operation and system maintenance difficult and is not sufficiently flexible or efficient to support watershed management. As web technology has developed, B/S architecture, with effective cross-platform features, convenient maintenance, and expansion capability, has gradually replaced C/S architecture.

At the same time, WebGL technology also provides strong support for the scientific visualization of flow field data on the Web (Congote et al. 2011). Van Ackere et al. (2016) developed a 3D flood visualization application with B/S architecture based on WebGIS technology, which was able to display flood inundation effects and visualize property damage during flood events. Zhao et al. (2019) developed a 3D flow visualization platform based on Web Virtual Reality (WebVR) and displayed the flow field superimposed on 3D terrain in the browser. However, while desktop visualization applications are relatively common, there are few applications of water flow visualization technology on the web because of a lack of graphics libraries and poor performance, meaning that there are few ways to simulate different types of water flow data. Moreover, the vast majority of applications do not integrate geospatial data or cannot be combined with the virtual earth environment, and so are not conducive to combining information in the basin.

Refined simulations of large-scale flow fields rely on extensive data support (Bernardon et al. 2007), so slow data loading, low display frame rates, and non-smooth interactions occur when massive data are visualized on the web side (Lin et al. 2022). Existing research, therefore, is focused on optimizing large-scale data by (1) improving the visualization algorithm, extracting features by analyzing visual data, and simplifying the amount of data to improve the rendering efficiency (Li et al. 2022), and (2) using the Levels of Detail (LOD) strategy (Lindstrom et al. 1996), in which the grid is layered and divided into blocks to establish a multi-resolution hierarchical structure model, and the data are called and rendered dynamically according to the viewpoint change. These optimization methods have achieved good results in visualizing large-scale terrain and ocean current data (Hou & Zhang 2012). However, it is more difficult to visualize large-scale river flow data and provide precise simulations of the river, because the boundaries are mostly fitted by unstructured grids and the topology is complex, meaning that it is difficult to build a spatial hierarchy (Ma et al. 2011). Moreover, if the flow field data are simplified, river features may be omitted, thereby affecting the visualization. To date, few researchers have managed to optimize the display when visualizing the enormous volume of large-scale river flow data, and it is difficult to maintain a smooth display and ensure a highly accurate simulation.

In this study, therefore, in an attempt to address the challenges mentioned above, we developed a visualization platform for water flow in a watershed based on B/S architecture and then tested it in the Beijiang River Basin using scalar field, vector field, and dynamic flow field visualization techniques to support the visualization of different types of flow field data on the web. An attempt has also been made to optimize the display method to improve the efficiency of the visual rendering of large-scale data plotted in a flow field. It is expected that the results of this study and the methodology used will help demonstrate how flow field visualization techniques can be used to support engineering and scientific applications on the web.

In this study, the development of a watershed visualization platform based on B/S architecture is described, which is specifically designed for visualizing water flow. In line with the logical relationship for business processing, the platform is divided into three levels, namely the data layer, application layer, and display layer. The overall framework of the platform is shown in Figure 1.
Figure 1

Watershed flow visualization platform framework based on B/S architecture.


The data layer, for data storage and retrieval, is managed by the database server. In this study, the monitoring data, water flow simulation results, and geospatial data were stored in the data layer. The monitoring data, including water level and flow data, were mainly sourced from hydrological stations and were transmitted to the data server in real time through the communication network. A hydrodynamic model was used to simulate the water flow and produce an output from the monitoring station data. Geospatial data, including image, terrain, vector layer, and scalar layer data, were processed and loaded to establish a 3D environment in the basin.

The application layer connects the data layer and the display layer, processes and converts the data, and is where the business logic in the platform is implemented. The application layer includes three modules, namely the flow calculation module, the flow visualization module, and the 3D visualization module. The flow calculation module reads the station monitoring data in the data layer as the initialization conditions of the two-dimensional hydrodynamic model, simulates the river flow information, and stores the simulation results to the database, according to different attributes. The flow visualization module is an important part of the platform. Depending on the types of data input for the water flow, this module provides visualization methods for three different types of flow field data, namely vector field visualization, scalar field visualization, and dynamic field visualization. The data from the model calculation results are rendered into color texture information using shader technology. In the 3D visualization module, the digital elevation model (DEM) data are rendered into a terrain grid through the graphics rendering engine, and the images, annotations, and model data are loaded to form a virtual environment of the basin.

The display layer is the user display interface where the interactive instructions of the client and the rendering of the results are processed. The display effect of the client is rendered in the browser. When the user sends an interactive request to the server side through the operation interface, the application server responds to the request, calls the relevant module, and returns the result to the client side for display.

The platform was developed in front- and back-end separation mode, with open-source permissions for the languages and toolkits used. In the front end, Bootstrap was used as the basic UI (User Interface) framework, the water flow and 3D visualization parts used CesiumJS as the rendering engine (Müller et al. 2016), and the graphics shaders were based on OpenGL ES 2.0. In the back end, Flask was used as the web server framework for monitoring and responding to requests. Functional modules were deployed on the server to provide call interfaces. The data were stored in a MySQL database. Access to large unstructured data, such as high-precision images and videos, was provided in the form of file servers.

Scalar field visualization

The scalar field data, including water depth and water level, are used to simulate the basin flow. These values have a magnitude but no direction. The scalar field is often displayed with isosurfaces, using methods such as Marching Cubes and Marching Tetrahedra (Lorensen & Cline 1987; Doi & Koide 1991). In this study, the web-side isosurface was drawn through the GPU rendering pipeline using the raster interpolation feature of the shader. The scalar data were converted to textures and then fitted to the terrain to build the isosurface in the rendering pipeline. The rendering pipeline is divided into four stages: (1) application, (2) geometry processing, (3) rasterization, and (4) pixel processing, as shown in Figure 2.
Figure 2

The process of drawing the isosurface in the rendering pipeline.

Firstly, in the application stage, the isosurface primitive is created according to the primitive interface in Cesium, which contains the geometry and the appearance. The geometry defines the basic structure of the primitive and stores information about, e.g., vertices. The appearance defines the coloring of the primitives, including shader programs and some rendering states. Then, the data are transmitted from the CPU to GPU by a draw call, and the vertices are assembled into triangle primitives through primitive assembly in the rasterization stage. The isosurface color is drawn in the fragment shader. After rasterization, fragments are generated, and each fragment obtains an independent scalar value by interpolating the vertex attributes in the primitive. In the shader program, color is computed according to the scalar value, and a color mapping relationship for this study was defined by Equation (1):
c = f(h), h ∈ (h_min, h_max) (1)
where f denotes the color mapping function, h denotes the scalar value of the fragment, c is a Vec3 variable that contains the RGB information, and (h_min, h_max) is the search range of the scalar value. The fragments calculated in this way are candidate pixels. After further steps, such as the transparency test, depth test, and blending, the pixels are stored in the frame buffer and finally displayed on the screen.
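The color mapping of Equation (1) can be sketched on the CPU side as a small JavaScript function. The two-stop blue-to-red ramp and the function name are illustrative assumptions; in the platform itself this mapping runs per fragment in the GLSL shader, but the arithmetic is the same:

```javascript
// Sketch of the Equation (1) colour mapping: scalar value h in the search
// range (hMin, hMax) is mapped to a Vec3-like RGB triple in [0, 1].
// A simple two-stop blue (low) to red (high) ramp is assumed here.
function scalarToColor(h, hMin, hMax) {
  // Clamp the normalized scalar into [0, 1].
  const t = Math.min(1, Math.max(0, (h - hMin) / (hMax - hMin)));
  // Linear blend between blue and red.
  return [t, 0.0, 1.0 - t];
}
```

For example, the midpoint of the range maps to an even blue/red blend, and values outside the range are clamped to the ramp endpoints.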

Vector field visualization

Particle-based visualization of the slope confluence

Watersheds are generally divided into two systems, slope and channel. During rainfall, the water on the slope surface pools together and is transported to the river channel. By analyzing and calculating the slope DEM, the water flow is simulated as it converges from the slope to the river channel, which constitutes the complete process of visualizing the flow on the surface of the watershed.

The flow direction factor, the direction of the flow toward the lowest outlet point in the basin, is one of the most important parts of the analysis of the slope confluence vector field. Methods for calculating the flow direction are divided into two categories, single-flow direction and multi-flow direction algorithms (Jenson & Domingue 1988; Tarboton 1997). The commonly used D8 algorithm can be computed efficiently and is well suited to complex terrain areas such as depressions (Freeman 1991), and so was selected to calculate the flow direction in this study. The slope convergence was visualized as follows:

  • (1)
    Calculating the flow direction. The maximum descent gradient was calculated from the elevation values of the DEM grid at the selected point and its surrounding grid cells with Equation (2).
    x_f = argmax_{x_i} ((H(x) − H(x_i)) / D(x, x_i)) (2)
    where x_f denotes the final flow cell, x and x_i denote the central grid cell and the surrounding grid cells, H is the elevation of the cell, and D is the distance from the cell x_i to the center of the cell x.
  • (2)

    Initializing particles. Particles in vector fields generally have a life cycle, motion properties, and appearance properties. The life cycle of a particle refers to the length of time a particle exists in the vector field before it dies out and a new particle is created. The motion properties of a particle include the magnitude and direction of the velocity; the direction of the particle velocity determines the flow direction of the particle and is calculated by Equation (2). The appearance of the particle includes size, shape, and color properties. The appearance is initialized by reading a texture, and adjusting color properties such as transparency and contrast can simulate morphological changes during the particle movement.

  • (3)

    Motion of particles. The particle initialization information is read in Cesium and the particles are added to the vector field as Entity. The initial position of the particle is determined from the coordinates of the central grid, which is calculated in step (1), and its motion is controlled by the velocity and flow direction. By updating the particle properties with the CallbackProperty function, the particle motion can be plotted in real time. Once the life cycle of a particle exceeds a specified time, or the particle moves outside the vector field region, the particle will immediately die out and a new particle will be generated in the vector field.
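The steepest-descent selection in step (1) can be sketched in JavaScript. The grid layout and function name are illustrative assumptions; only the D8 choice of receiving cell from Equation (2) is shown:

```javascript
// Minimal D8 flow-direction sketch for one cell of a DEM grid (Equation (2)):
// pick the neighbour with the steepest descent, where diagonal neighbours
// are farther from the cell centre by a factor of sqrt(2).
function d8FlowDirection(dem, row, col, cellSize) {
  const h = dem[row][col];
  let best = null;     // [row, col] of the receiving cell x_f
  let bestSlope = 0;   // only strictly positive descents count
  for (let dr = -1; dr <= 1; dr++) {
    for (let dc = -1; dc <= 1; dc++) {
      if (dr === 0 && dc === 0) continue;
      const r = row + dr, c = col + dc;
      if (r < 0 || r >= dem.length || c < 0 || c >= dem[0].length) continue;
      const dist = cellSize * Math.hypot(dr, dc);  // D(x, x_i)
      const slope = (h - dem[r][c]) / dist;        // (H(x) - H(x_i)) / D
      if (slope > bestSlope) { bestSlope = slope; best = [r, c]; }
    }
  }
  return best; // null for a pit or flat cell with no descending neighbour
}
```

A particle's initial direction of motion in step (3) then follows the vector from the cell centre toward the returned receiving cell.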

IBFV-based river flow visualization

The IBFV algorithm is frequently used to visualize vector fields and shows the motion characteristics of vector fields with high frame-rate animation simulations. The IBFV algorithm is based on the principle of texture advection (displayed in Figure 3), whereby the previous frame image texture is mixed with the noise texture to generate a new background image, by distorting the texture mesh to achieve the effect of texture particle flow.
Figure 3

The process of implementing the principle of the IBFV algorithm.

The particle tracks in the IBFV vector field are described by the following equation:
F_k(p) = (1 − α) F_{k−1}(p − v·Δt) + α G_k(p) (3)
where p is the position of the particle, k is the number of frames drawn, F denotes the color value of the particle, v is the flow velocity, G is the noise texture, and α is the blending parameter with a value between 0 and 1. The equation shows that the color value of frame F_k is generated by advecting the color of the previous frame F_{k−1} along the flow and mixing it with the noise texture G. As the mixing ratio α increases, the noise texture becomes more prominent and the vector characteristics of the water flow become less distinct. The implementation of the web-side IBFV algorithm involves the following steps:
  • (1)
    Texture mesh warping. The background image is mapped onto the mesh by means of texture mapping. As the vertices move, the mesh is deformed, which causes the textures on the mesh to flow. The mesh warping is based on the offset motion of the mesh vertices, and the coordinates of the new mesh vertices resulting from the deformation are calculated as shown in Equation (4).
    (x_1, y_1) = (x_0 + u·t, y_0 + v·t) (4)
    where (x_0, y_0) and (x_1, y_1) refer to the coordinate values of the vertices before and after the deformation, u and v are the flow velocities in the x-axis and y-axis directions at the grid nodes, and t is the number of frames.
  • (2)

    Noise texture generation. Because a single noise texture can only produce a static image in an IBFV flow field, the noise textures for each advection must be different. The dynamic effect of the vector fields is obtained by generating a set of random white noise textures with periodic cyclic characteristics in the shader, which are called at each frame drawing.

  • (3)

    Texture blending. From Equation (3), the image of the new frame is generated by mixing the noise texture with the image of the previous frame, so the rendering result of each frame must be saved for use in the next rendering. However, a WebGL texture cannot be sampled and rendered to in the same rendering pass. This problem can be solved by combining the RTT (Render to Texture) and ping-pong techniques used in GPU computing. First, two frame buffer objects (FBOs) are created, and the rendered result texture is stored in an FBO during drawing using the RTT technique. The two FBOs are then read and written alternately in adjacent frames, so that the texture written in one frame can be sampled in the next. The uniformMap is an object in Cesium that contains the uniform variables used in the rendering computation of the fragment shader. Each time a frame is drawn, the uniformMap object is pushed to the fragment shader to update the attribute values in the fragment computation. Through this uniform-update mechanism in Cesium, the two FBOs can be continuously and alternately pushed to the fragment shader for rendering, thus making the previous frame's texture available.
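The advect-and-blend core of the steps above can be sketched in JavaScript on a one-dimensional array standing in for the texture. The nearest-neighbour back-trace and all names are illustrative simplifications of the GPU texture pass:

```javascript
// CPU sketch of one IBFV step (Equation (3)) on a 1-D colour field:
// each cell back-traces along the velocity, takes the previous frame's
// colour there, and blends in noise with weight alpha.
function ibfvStep(prev, velocity, noise, alpha, dt) {
  const n = prev.length;
  const next = new Array(n);
  for (let p = 0; p < n; p++) {
    // Back-trace p - v*dt with a nearest-neighbour lookup, clamped to the field.
    let src = Math.round(p - velocity[p] * dt);
    src = Math.min(n - 1, Math.max(0, src));
    // F_k(p) = (1 - alpha) * F_{k-1}(p - v dt) + alpha * G_k(p)
    next[p] = (1 - alpha) * prev[src] + alpha * noise[p];
  }
  return next;
}
```

With alpha = 0 the field is purely advected downstream; as alpha grows, the injected noise dominates, matching the trade-off described after Equation (3).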

Dynamic field visualization

Water movement in actual rivers is complex and variable. Static visualization describes only the instantaneous state of the flow field, so the spatial and temporal characteristics of the water flow cannot be demonstrated, meaning that the results lack objectivity (Haller 2015). With dynamic flow field visualization, the changing state of the unsteady flow field is simulated in the real environment by plotting a series of time-varying flow field data that show the spatial and temporal changes as the water flows. In a static flow field drawing, the water flow file is stored in the GPU buffer as vertex attributes, and the rendering pipeline converts the data into pixel colors. In dynamic field rendering, there are a large number of time-step files: while the topology of the grid does not change with time, the vertex attribute values and the grid positions both change over time. Therefore, a time-varying flow field can be drawn by replacing the data in the rendering pipeline and updating the attribute data in the vertex buffer. The specific implementation steps are as follows.

  • (1)

    Water flow file reading and processing. The results from calculating the water flow field are stored at a fixed time step. Because the files are discrete in time, the frames plotted between two adjacent moments describe the flow field result at the earlier moment. The simulation can be made to approach the effect of real-time drawing by compressing the time step, so that the file-update drawing frequency accelerates; however, this lowers the drawing efficiency by increasing the I/O (Input/Output) and rendering pressure.

To ensure that the changes in the dynamic flow field are smooth and continuous, the flow field data over a short period of time are interpolated to fill the animation frames in the adjacent moments. The interpolation principle is shown in Figure 4, and the water flow results are interpolated with Equation (5):
δ_0 = (Q_{t_1} − Q_{t_0}) / (t_1 − t_0) (5)
where Q_{t_0} and Q_{t_1} are the results of the flow field calculations at t_0 and t_1, and δ_0 is the gradient matrix of the flow field cells in the time period t_0 to t_1. Therefore, for any moment t in the time period t_0 to t_1, the flow field results are described by Equation (6):
Q_t = Q_{t_0} + δ_0 (t − t_0) (6)
  • (2)

    Updating the flow field data and plotting status. First, the flow files for the current moment t_0 and the next moment t_1 are read to initialize the flow field, and the calculated gradient matrix δ_0 is passed into the vertex buffer at the same time. Then, during the rendering of the flow field, the fragment shader program obtains the frame time and the gradient matrix of the current time period, and the flow field result is updated using Equation (6). When the time reaches the moment t_1, the gradient matrix changes, and the results file for the next moment is read to calculate the new gradient matrix δ_1, according to Equation (5). DrawCommand, which includes drawing data and drawing behavior, is a class wrapped by Cesium based on WebGL and plays a key role in Cesium's rendering process. When rendering water flow primitives, the renderer updates and executes the contents of DrawCommand: the vertex data, uniform data, and some rendering states in the rendering pipeline are updated from the draw data, and the renderer compiles and links the shader program in the draw behavior into the rendering pipeline. The gradient matrix of the flow field file is updated through the push mechanism of DrawCommand to refresh the vertex buffer data and then draw the dynamic flow field.
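The frame interpolation of Equations (5) and (6) can be sketched as a per-cell linear blend in JavaScript. The array-based representation stands in for the vertex buffer, and the function name is illustrative:

```javascript
// Sketch of the frame interpolation in Equations (5)-(6): given the
// flow-field results q0 and q1 stored at times t0 and t1, fill any frame
// time t in between with a per-cell linear gradient.
function interpolateFlowField(q0, q1, t0, t1, t) {
  // (t - t0) / (t1 - t0) applies delta_0 = (Q_t1 - Q_t0) / (t1 - t0)
  // scaled by the elapsed time, i.e. Q_t = Q_t0 + delta_0 * (t - t0).
  const scale = (t - t0) / (t1 - t0);
  return q0.map((v, i) => v + (q1[i] - v) * scale);
}
```

In the platform this blend happens in the shader from the gradient matrix pushed into the vertex buffer, so only two time-step files need to be resident per interval.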

Figure 4

Principle of the dynamic time-varying flow field data frame interpolation.


Large-scale flow field data have a complex structure and change with time (Hiep et al. 2011). In B/S rendering, data reading is limited by the transmission between the server side and the user side. When the user interacts with the scene, frequent file requests can block the browser, resulting in, e.g., black or frozen screens. At the same time, real-time updates when drawing large-scale data also place demands on the GPU Video RAM (VRAM). In this study, to reduce file read and write delays and the heavy rendering pressure of large-scale data rendering on the B/S side, we adopted a viewpoint-based dynamic rendering strategy that reduces the amount of data called and rendered and thereby improves the rendering efficiency. In our method, the visualization process is optimized in two stages, namely file loading and flow field rendering.

First, the grid data and water flow data are pre-processed to divide the blocks and to establish the indexing relationships between the files and blocks, so that the large-scale flow field is partitioned into multiple smaller areas. There is less data transfer congestion on the web side when the individual files are smaller. Then, in the file loading stage, the area to be drawn is determined according to the location of the camera viewpoint, and the flow field file in the corresponding area is requested to achieve on-demand loading. Finally, in the flow field rendering stage, the texture bound in the RTT camera is updated in real time according to the area to be drawn, so that it only draws the area within the viewport range, thereby reducing the rendering load further while maintaining the high-precision texture effect. This method is implemented as follows: (1) the range of the viewport is obtained by listening to the camera viewpoint position change, (2) the file index is queried according to the viewport range, and the water flow and grid file of the corresponding area are loaded and then read into the memory, and (3) the vertex buffer is refreshed and the draw command is executed to update the texture in the RTT. At the same time, the adjacent region file is asynchronously requested and read into the cache before the call. The pre-screening of the rendering area serves to decrease the amount of data called by the rendering, thereby saving the overhead of the system. When the viewpoint changes, the frame-rate fluctuation is small, and the water flow simulation effect is also good.
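The block-lookup step of this strategy can be sketched in JavaScript. The rectangle-based block index and all names are illustrative assumptions, not the platform's actual data structure:

```javascript
// Sketch of the viewpoint-based block lookup used for on-demand loading:
// the flow field is pre-partitioned into rectangular blocks, and only the
// blocks intersecting the current viewport rectangle are requested.
// viewport and each block are {west, south, east, north} rectangles.
function visibleBlocks(viewport, blocks) {
  return blocks
    .filter(b => b.east > viewport.west && b.west < viewport.east &&
                 b.north > viewport.south && b.south < viewport.north)
    .map(b => b.id);
}
```

A camera-changed listener would recompute this set and request only the listed block files, while the blocks adjacent to the viewport are fetched asynchronously into the cache.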

Performance test

Performance is an important indicator of how well the water flow visualization works and is also critical for the user experience. The frame rate in frames per second (fps) is a common concern in interactive display interfaces (Liu et al. 2015). The fps reflects the drawing performance of the system in real time: as the fps increases, the smoothness of the graphics improves, giving a better interactive experience. Here, the performance was evaluated from the fps, and the drawing efficiency of the animation was compared for flow fields at different scales and for the different visualization methods.

With the IBFV texture flow field as an example, multiple grid scenarios of different orders of magnitude were set up to test whether the platform was capable of rendering a large-scale flow field. The tests were run on a machine configured with an AMD Ryzen 7 5800H CPU and an RTX 3060 Laptop GPU, and the results are shown in Table 1. In the experiment, the rendering frame rate of the IBFV flow field was generally stable at 60 fps when rendering 3 million meshes, and the rendering efficiency was better than that of the texture flow fields reported by Zhang et al. (2020) and Zhao et al. (2019). As the number of triangles increases, the amounts of vertex and flow field data also increase; when the VRAM of the GPU is full, the data to be drawn must be swapped between main memory and VRAM, which decreases the rendering efficiency. In a recent study on water flow visualization, Zhao et al. (2019) visualized the flow field in the Mou River estuary with more than 30,000 grids in the study area and a web flow field rendering frame rate of 54 fps. Zhang et al. (2020) reported a frame rate of over 30 fps in a hybrid rendering with a rendered file size of over 300 megabytes. Compared with these results, our platform has an advantage in terms of both mesh rendering scale and efficiency.

Table 1

Rendering frame rate of the IBFV flow field for different amounts of meshes

Case | Number of vertices | Number of triangles | Vertex file size (MB) | Flow field file size (MB) | Average frame rate (fps)
1    | 200,004            | 398,021             | 15                    | –                         | 60
2    | 700,194            | 1,396,415           | 54                    | 27                        | 60
3    | 1,147,873          | 1,728,926           | 78                    | 45                        | 60
4    | 2,001,549          | 3,371,295           | 148                   | 78                        | 59
5    | 4,383,788          | 8,757,647           | 363                   | 117                       | 52
6    | 5,521,046          | 11,032,163          | 459                   | 216                       | 44
7    | 6,325,435          | 12,639,145          | 527                   | 247                       | 40
8    | 8,751,535          | 17,484,490          | 731                   | 342                       | 28
9    | 9,908,946          | 19,799,965          | 829                   | 378                       | 24
10   | 12,378,489         | 24,742,804          | 1,044                 | 484                       | 18

Four test projects were also designed to evaluate the performance of the different visualization techniques described earlier in practical applications, including water depth isosurface rendering, slope particle flow rendering, river IBFV rendering, and flood dynamic evolution rendering. Different devices, namely a host, mobile phone, and tablet, were selected as client devices for testing to reflect the compatibility and portability of the B/S architecture water flow visualization platform. In the test, the water flow visualization platform application was deployed on the server side, while the user side used Google Chrome for access through the local area network. The specific configuration information is shown in Table 2.

Table 2

Rendering performance test equipment information

Device        | Operating system | CPU                      | GPU
Server        | Windows 10       | Intel Xeon E5-2609 v4 ×2 | NVIDIA Quadro P2000
User (host)   | Windows 10       | AMD Ryzen 7 5800H        | GeForce RTX 3060 Laptop
User (phone)  | Android 12       | Qualcomm Snapdragon 888  | Qualcomm Adreno 660
User (tablet) | iPadOS 15        | Apple A12X               | Apple A12X Bionic GPU

Human visual research shows that an animation appears smooth when the rendering frame rate is greater than 25 fps. The test results are shown in Figure 5. The water depth isosurface, IBFV, and particle visualizations reached 50–60 fps on all the devices, meaning that the visualization performance was good. Moreover, the performance test results vary at different viewing heights: at higher viewpoints, more objects must be rendered within the viewport, so the rendering pressure increases. The frame rates reported in this paper are average values.
Figure 5

Evaluation of the performance of the different water flow visualization projects in the platform.

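The viewpoint-dependent rendering load observed in these tests is what viewpoint-based dynamic rendering strategies aim to control: before each frame, only grid cells near the camera are submitted to the renderer. The following is a minimal sketch of such distance-based culling; the function name, cell layout, and thresholds are illustrative assumptions, not the platform's actual code:

```javascript
// Hypothetical viewpoint-based culling: keep only grid cells whose
// planar distance to the camera is within maxDistance, so distant
// cells are not rendered at all.
function cullCellsByViewpoint(cells, camera, maxDistance) {
  return cells.filter((cell) => {
    const dx = cell.x - camera.x;
    const dy = cell.y - camera.y;
    return Math.hypot(dx, dy) <= maxDistance;
  });
}

// Example: with a small cull radius only nearby cells survive; raising
// the radius (as at a higher viewpoint) keeps more cells and therefore
// increases the rendering load, matching the measured frame-rate drop.
const cells = [{ x: 0, y: 0 }, { x: 500, y: 0 }, { x: 2000, y: 0 }];
const nearOnly = cullCellsByViewpoint(cells, { x: 0, y: 0 }, 1000);
console.log(nearOnly.length); // 2
```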

For the dynamic visualization, the input file had a time step of 5 min. Within each time step, the water flow data were interpolated and the rendering data were updated every 40 ms. The rendered flow field was smooth over time, with no obvious frame skipping. The frame rate reached 60 fps on high-performance devices, and the static frame rate reached 25–30 fps on low-performance devices. However, the frame rate may fluctuate considerably when the viewpoint changes, so further optimization is needed to visualize dynamic large-scale flow fields.
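The frame-interpolation idea can be sketched as follows: model results arrive every 5 min (300 s), the render data refresh every 40 ms, and node values for an intermediate frame are linearly blended between the two bracketing time steps. Function and variable names are illustrative, not the platform's actual code:

```javascript
// Linear temporal interpolation between two flow field snapshots.
const STEP_SECONDS = 300; // model output interval (5 min)

function interpolateFrame(prevValues, nextValues, t) {
  // t: seconds elapsed since prevValues' time step, 0 <= t <= STEP_SECONDS
  const alpha = t / STEP_SECONDS;
  return prevValues.map((v, i) => v + (nextValues[i] - v) * alpha);
}

// Example: node water depths halfway between two snapshots.
const depthsAt0 = [1.0, 2.0, 4.0];
const depthsAt300 = [2.0, 2.0, 0.0];
console.log(interpolateFrame(depthsAt0, depthsAt300, 150)); // [1.5, 2, 2]
// At a 40 ms refresh, one 5-min step yields about 7,500 interpolated frames.
```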

Application

We tested the web-based water flow visualization platform using data for the 46-km river reach between the Feilaixia Water Conservancy Hub and the Qingyuan Water Conservancy Hub in the Beijiang River Basin. The Beijiang River Basin lies mainly in the northern part of Guangdong Province and is the second largest water system of the Pearl River. The Feilaixia Water Conservancy Hub, the largest comprehensive water conservancy hub on the Beijiang River, plays an important role in flood control, navigation, power generation, and ecology, and is a key project in the comprehensive management of the basin. In such management, a web-based flow field visualization platform can provide powerful decision support for different departments. For example, the platform can show the flood inundation range in detail by visualizing water depth data, which benefits flood prevention, and for inland navigation, the river flow distribution combined with flow velocity data can support ship routing.

The platform constructed a 3D Earth map from image and terrain data obtained online through the Map World web map server. Vector models of hydraulic structures such as the spur dikes and bridges were loaded into the scene as Geographic JavaScript Object Notation (GeoJSON) files. Models in other formats, such as OSGB, OBJ, and FBX, must be converted to the 3D Tiles format supported by Cesium before distribution. An overview of the study area is shown in Figure 6(a) and 6(c). The white line indicates the boundary of the river area, the white-striped parts of the river are the spur dikes, and the red-striped parts are the bridges.
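A GeoJSON structure file of this kind is a standard FeatureCollection. The sketch below shows how such polygons might be read out of it; the `properties.type` field, the coordinates, and the function name are hypothetical examples, not the platform's actual data schema:

```javascript
// Minimal GeoJSON FeatureCollection with two hypothetical hydraulic
// structures (a spur dike and a bridge) as polygon features.
const geojson = {
  type: 'FeatureCollection',
  features: [
    {
      type: 'Feature',
      properties: { type: 'spur_dike' },
      geometry: {
        type: 'Polygon',
        coordinates: [[[113.0, 23.7], [113.1, 23.7], [113.1, 23.8], [113.0, 23.7]]],
      },
    },
    {
      type: 'Feature',
      properties: { type: 'bridge' },
      geometry: {
        type: 'Polygon',
        coordinates: [[[113.2, 23.9], [113.3, 23.9], [113.25, 23.95], [113.2, 23.9]]],
      },
    },
  ],
};

// Return the outer ring of each polygon feature of the requested type.
function polygonsOfType(collection, structureType) {
  return collection.features
    .filter((f) => f.properties.type === structureType && f.geometry.type === 'Polygon')
    .map((f) => f.geometry.coordinates[0]);
}

console.log(polygonsOfType(geojson, 'spur_dike').length); // 1
```

In Cesium itself, such a file would typically be added to the scene with `viewer.dataSources.add(Cesium.GeoJsonDataSource.load(url))`, after which the entities can be styled individually.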
Figure 6

Overview map of the study area. (a) Study area scope: Feilaixia to Qingyuan; (b) the grid model of the river; and (c) river vector building model: spur dike and bridges.


The flow field files contain the nodes, topological relationships, and node attributes of the flow field grid, and their format can be customized. The water flow in the study area was numerically simulated using an unstructured triangular grid (shown in Figure 6(b)) with a total of 266,007 triangular cells and 134,060 grid nodes. The grid was refined around the spur dikes and bridges in the river channel to improve the accuracy of the local flow characteristics. The flow field visualization data were organized as a scheme library, which was indexed by the upstream inflow and the downstream water level and divided into 63 working conditions. The flow conditions under the different scenarios were calculated by a two-dimensional hydrodynamic model, and the results were stored in the database.
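A scheme library indexed this way can be queried by picking the precomputed working condition closest to the current upstream inflow and downstream water level. The sketch below is a hypothetical illustration; the scaling constants and condition values are assumptions, not the platform's actual scheme library:

```javascript
// Select the nearest precomputed working condition to the query point
// (q: upstream inflow in m^3/s, z: downstream water level in m).
function nearestCondition(conditions, q, z) {
  let best = null;
  let bestDist = Infinity;
  for (const c of conditions) {
    // Scale the two axes so neither dominates the distance measure:
    // flows span thousands of m^3/s, levels span a few metres.
    const dq = (c.q - q) / 1000;
    const dz = c.z - z;
    const dist = dq * dq + dz * dz;
    if (dist < bestDist) {
      bestDist = dist;
      best = c;
    }
  }
  return best;
}

// Three of the (hypothetical) 63 working conditions.
const schemeLibrary = [
  { id: 1, q: 1000, z: 11.0 },
  { id: 2, q: 3000, z: 12.5 },
  { id: 3, q: 8000, z: 14.0 },
];
console.log(nearestCondition(schemeLibrary, 2800, 12.3).id); // 2
```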

The isosurface of the water depth in the scalar field is shown in Figure 7(a). The dark blue areas represent deep water, which is concentrated in the canyon area and at the ends of the spur dikes. The IBFV-based animation of the vector field visualizes the flow velocity, flow direction, and vortices in the flow field, as shown in Figure 7(b). The visualization of the slope confluence in the vector field is shown in Figure 7(c), in which the flow of water from the slope into the channel was represented as particle motion. In addition, in the visualization of the dynamic flow field, the flow data were updated and the flow field animation was drawn in real time according to the simulation results, as shown in Figure 8. The flood inundation state was expressed at different moments, and the law of flood movement was reflected intuitively. This type of information can be used to help prevent floods, provide safety warnings before floods occur, and support relief after disasters in the basin. The above case ran smoothly on a machine with an Intel Xeon E5-2609 CPU and an NVIDIA Quadro P2000 GPU, with frame rates of around 50–60 fps.
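The depth colouring underlying an isosurface rendering like Figure 7(a) amounts to assigning each node's depth to a contour band, with all nodes in a band sharing one colour (the deepest band rendering dark blue). The band edges below are illustrative assumptions, not the platform's actual thresholds:

```javascript
// Map a water depth (m) to a contour-band index; higher indices mean
// deeper water and would be drawn in darker blue.
const DEPTH_BANDS = [0.5, 1.0, 2.0, 5.0]; // band upper edges in metres

function depthBandIndex(depth, bands = DEPTH_BANDS) {
  for (let i = 0; i < bands.length; i++) {
    if (depth <= bands[i]) return i;
  }
  return bands.length; // deeper than the last edge: darkest band
}

console.log([0.2, 1.5, 6.0].map((d) => depthBandIndex(d))); // [0, 2, 4]
```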
Figure 7

Effect of the flow field visualization in the platform. (a) Visualization of the water depth isosurface in a scalar field. (b) IBFV-based flow velocity visualization in a vector field. (c) Particle-based slope convergence in a vector field.

Figure 8

Simulation of the flood evolution in dynamic fields. (a), (b), (c), and (d) are the flow field states at 0, 30, 60, and 90 min, respectively.


Future work

In watershed simulations, high-precision mesh rendering reveals richer and more complex details and enhances the overall visual impact, but it also leads to higher rendering pressure, and thus, further research on large-scale mesh rendering optimization is necessary.

Furthermore, owing to the limitations of existing hardware and algorithms, the flow field calculation results were interpolated during the visualization of the dynamic flow field to fill in the flow field variations between output time steps. To achieve real-time simulation of water flow, the model computation needs to be closely linked to the visualization process, which requires further research on graphical display and parallel computation on GPUs in water flow simulation.

Water flow visualization technology has an important role in watershed management and river science research. In this study, a watershed flow visualization platform based on B/S architecture was developed. Various visualization techniques and display optimization techniques were investigated and then tested with data from the Beijiang River Basin. The following conclusions were drawn:

  • (1)

    Based on the Cesium open-source rendering engine, a watershed virtual environment was constructed; scalar field, vector field, and dynamic field visualization techniques were studied; and flow field visualization was integrated with digital geospatial visualization on the web side. Various types of flow field data were visualized through WebGL technology, allowing users to view flow field simulation results remotely, which contributes to the efficiency of watershed management.

  • (2)

    Interpolation techniques and viewpoint-based dynamic rendering strategies were used to optimize the large-scale data before visualization, thereby decreasing the number of data calls and the rendering data volume and improving the rendering efficiency. Performance tests of the water flow visualization platform with data from a stretch of the Beijiang River showed that most of the devices used in the evaluation could run at a frame rate of 50–60 fps. This indicates that the platform performs well in practical applications and can help solve related engineering and scientific problems.

This study was supported by the National Key Research and Development Program of China (2022YFC3202004) and the National Natural Science Foundation of China (51979105). We thank Liwen Bianji (Edanz) (www.liwenbianji.cn) for editing the English text of a draft of this manuscript.

Data cannot be made publicly available; readers should contact the corresponding author for details.

The authors declare there is no conflict.

Bernardon, F. F., Callahan, S. P., Comba, J. L. D. & Silva, C. T. 2007 An adaptive framework for visualizing unstructured grids with time-varying scalar. Parallel Computing 33, 391–405. https://doi.org/10.1016/j.parco.2007.02.015.

Cabral, B. & Leedom, L. C. 1993 Imaging vector fields using line integral convolution. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 263–270. https://doi.org/10.1145/166117.166151.

Chen, M. & Leitte, H. 2011 An information-theoretic framework for visualization. IEEE Transactions on Visualization and Computer Graphics 16, 1206–1215. https://doi.org/10.1109/TVCG.2010.132.

Congote, J., Segura, A., Kabongo, L., Moreno, A., Posada, J. & Ruiz, O. 2011 Interactive visualization of volumetric data with WebGL in real-time. In: Proceedings of the 16th International Conference on 3D Web Technology (Web3D), pp. 137–146. https://doi.org/10.1145/2010425.2010449.

Doi, A. & Koide, A. 1991 An efficient method of triangulating equivalued surfaces by using tetrahedral cells. IEICE Transactions on Information and Systems 74 (1), 214–224.

Freeman, T. G. 1991 Calculating catchment area with divergent flow based on a regular grid. Computers & Geosciences 17, 413–422. https://doi.org/10.1016/0098-3004(91)90048-I.

Haller, G. 2015 Lagrangian coherent structures. Annual Review of Fluid Mechanics 47, 137–162.

Hiep, V., Labatut, P., Pons, J. P. & Keriven, R. 2011 High accuracy and visibility-consistent dense multiview stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 889–901. https://doi.org/10.1109/TPAMI.2011.172.

Hou, H.-d. & Zhang, J.-f. 2012 Research on real-time visualization of large-scale 3D terrain. Procedia Engineering 29, 1702–1706. https://doi.org/10.1016/j.proeng.2012.01.198.

Jenson, K. & Domingue, O. 1988 Extracting topographic structure from digital elevation data for geographic system analysis. Photogrammetric Engineering and Remote Sensing 54 (11), 1593–1600.

Li, S., Yang, J. & Zhang, Z. 2020 Research on 3D international river visualization simulation based on human-computer interaction. Wireless Communications & Mobile Computing 2020. https://doi.org/10.1155/2020/8838617.

Li, Z., Xu, B., Li, Y., Gong, K. & Liu, G. 2022 Visualization of ocean flow field based on unstructured triangular mesh. Journal of Graphics 43 (3), 486–495 (in Chinese).

Lin, W., Cui, B., Tong, W., Wang, J., Wang, X. & Zhang, J. 2022 Development and application of three-dimensional intelligent monitoring system for rolling quality of earth-rock dam under B/S framework. Journal of Hohai University (Natural Sciences) 50 (5), 131–138 (in Chinese).

Lindstrom, P., Koller, D., Ribarsky, W., Hodges, L., Faust, N. & Turner, G. 1996 Real-time, continuous level of detail rendering of height fields. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), pp. 109–118. https://doi.org/10.1145/237170.237217.

Liu, P., Gong, J. & Yu, M. 2015 Visualizing and analyzing dynamic meteorological data with virtual globes: a case study of tropical cyclones. Environmental Modelling & Software 64, 80–93. https://doi.org/10.1016/j.envsoft.2014.11.014.

Lorensen, W. E. & Cline, H. 1987 Marching cubes: a high resolution 3D surface construction algorithm. ACM SIGGRAPH Computer Graphics 21, 163. https://doi.org/10.1145/37401.37422.

Ma, Q., Liu, T., Wang, P., Liu, Y. & Li, S. 2011 High-efficiency volume rendering of unstructured-grid time-varying flows using temporal and spatial coherence. Journal of Computer-Aided Design & Computer Graphics 23 (11), 1816–1824.

Müller, R. D., Qin, X., Sandwell, D. T., Dutkiewicz, A., Williams, S. E., Flament, N., Maus, S. & Seton, M. 2016 The GPlates portal: cloud-based interactive 3D visualization of global geophysical and geological data in a web browser. PLoS ONE 11, e0150883. https://doi.org/10.1371/journal.pone.0150883.

Stalling, D., Zockler, M. & Hege, H. C. 1997 Fast display of illuminated field lines. IEEE Transactions on Visualization and Computer Graphics 3, 118–128. https://doi.org/10.1109/2945.597795.

Van Ackere, S., Glas, H., Beullens, J., Deruyter, G., De Wulf, A. & De Maeyer, P. 2016 Development of a 3D dynamic flood WEB GIS visualisation tool. International Journal of Safety and Security Engineering 6, 560–569. https://doi.org/10.2495/SAFE-V6-N3-560-569.

Wijk, J. 2002 Image based flow visualization. In: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, pp. 745–754. https://doi.org/10.1145/566654.566646.

Worley, L., Underwood, L., Diehl, R., Matt, J., Lawson, K., Seigel, R. & Rizzo, D. M. 2023 Balancing multiple stakeholder objectives for floodplain reconnection and wetland restoration. Journal of Environmental Management 326, 116648. https://doi.org/10.1016/j.jenvman.2022.116648.

Xu, H., Li, S., Ma, Q. L. & Cai, X. 2011 Fuzzy description and extracting methods of complex feature regions in flow fields. Journal of Software 22, 1960–1972. https://doi.org/10.3724/SP.J.1001.2011.03851.

Zhang, S., Zhang, T., Wu, Y. & Yi, Y. 2016 Flow simulation and visualization in a three-dimensional shipping information system. Advances in Engineering Software 96, 29–37. https://doi.org/10.1016/j.advengsoft.2016.01.004.

Zhang, S., Li, W., Lei, X., Ding, X. & Zhang, T. 2017 Implementation methods and applications of flow visualization in a watershed simulation platform. Advances in Engineering Software 112, 66–75. https://doi.org/10.1016/j.advengsoft.2017.06.016.

Zhang, Z., Hu, H., Yin, D., Kashem, S., Li, R., Cai, H., Perkins, D. & Wang, S. 2019 A cyberGIS-enabled multi-criteria spatial decision support system: a case study on flood emergency management. International Journal of Digital Earth 12, 1364–1381. https://doi.org/10.1080/17538947.2018.1543363.

Zhang, X., Liu, J., Hu, Z. & Zhong, M. 2020 Flow modeling and rendering to support 3D river shipping based on cross-sectional observation data. ISPRS International Journal of Geo-Information 9 (3), 156. https://doi.org/10.3390/ijgi9030156.

Zhao, S., Jin, S., Ai, C. & Zhang, N. 2019 Visual analysis of three-dimensional flow field based on WebVR. Journal of Hydroinformatics 21. https://doi.org/10.2166/hydro.2019.101.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/licenses/by-nc-nd/4.0/).