Haar wavelet-based adaptive finite volume shallow water solver

This paper presents the formulation of an adaptive finite volume (FV) model for the shallow water equations. A Godunov-type reformulation incorporating the Haar wavelet is achieved to enable solution-driven resolution adaptivity (both coarsening and refinement) governed by the wavelet's threshold value. The ability to properly model irregular topographies and wetting/drying is transferred from the (baseline) uniform mesh FV model, with no notable extra effort. Selected hydraulic tests are employed to analyse the performance of the Haar wavelet FV shallow water solver, considering adaptivity and practical issues including the choice of the threshold value driving the adaptivity, a mesh convergence study, and shock and wet/dry front capturing abilities. Our findings show that Haar wavelet-based adaptive FV solutions offer great potential to improve the reliability of multiscale shallow water models.

doi: 10.2166/hydro.2015.039

Dilshad A. Haleem, Georges Kesserwani (corresponding author), Daniel Caviedes-Voullième
Pennine Water Group, Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK
E-mail: g.kesserwani@sheffield.ac.uk


INTRODUCTION
Within computational hydraulics, the finite volume (FV) Godunov-type method has been extensively used for modelling shallow water flows due to its desirable properties, such as locality, numerical mass conservation and the ability to inherently capture flow discontinuities. Traditional mesh adaptation techniques, however, suffer from well-known drawbacks. For instance, they require error sensors and multiple user-chosen parameters (e.g., for setting up grid resolution coarsening vs. refinement), which introduce sensitivity (e.g., they can lead to inadequate or excessive resolution), inflexibility and problem dependency (e.g., due to the need to tune many parameters for each simulation problem).
They also lack a rigorous strategy to accommodate flow data transfer and recovery between various inner resolution scales (given the changing nature of the mesh). Therefore, the design of a more integrated adaptation approach, which can address such problems, is still a crucial challenge (Hovhannisyan et al. ) and is the aim of this work.
The classical wavelet theory for local decomposition/reconstruction of (self-similar) functions (by translation and dilation of a single mother function, or wavelet, at finer resolution) offers a natural way for adaptive compression of data. Mathematicians and engineers have widely used the multiresolution capabilities of wavelets in many applications (e.g., signal processing and denoising) (Gargour et al. ), including the solution of partial differential equations. The true power of such a multiresolution approach arises from the fact that local piecewise-constant data can be described as a set of scaling coefficients encapsulating higher-resolution details, thereby allowing (Harten ; Keinert ): (a) genuine information exchange between (heterogeneous) elements with matching resolution (promotion and demotion of the details via application of high- and low-pass filters); (b) achievement of grid resolution adaptivity by selecting certain details from the local compression dataset; and (c) quantitative control of the variation of the adaptive mesh solution with reference to the underlying uniform mesh solution at the finest resolution.
In the context of FV Godunov-type modelling, a couple of papers have successfully integrated the wavelet theory for the adaptive solution of homogeneous conservation laws (scalar and system) (Harten ; Müller ). However, the implementation and implications of this idea in addressing practical aspects of shallow water flow simulation are as yet unexplored. To fill this gap, the Haar wavelet basis is used to reformulate a FV Godunov-type method into a new adaptive multiresolution scheme, which will be referred to as the Haar wavelet finite volume (HWFV) scheme. The technical development of the HWFV scheme solving the one-dimensional (1D) SWE with source term and wetting and drying is described. Particular focus is placed on how wavelet-based adaptivity is achieved within the HWFV SWE numerical solver. Selected test cases are used to systematically verify the performance of the HWFV scheme, addressing issues of adaptivity parameterization, modelling of irregular topography with/without wetting and drying, accuracy preservation and mesh convergence.
The rest of the paper is organized as follows: a brief overview of 1D SWE is first presented, followed by a section describing the baseline FV method that will be used.
Next, the multiresolution analysis (MRA) and its mathematical properties are introduced, with particular focus on how the Haar wavelet basis and its scaling basis are incorporated into the FV method. Finally, the performance of the HWFV model is tested, analysed and discussed, and conclusions are drawn.

SWE
The 1D SWE with the bed source term can be cast in a conservative matrix form:

\partial_t U + \partial_x F(U) = S \quad (1)

U = (h, \; q)^T, \quad F(U) = (q, \; q^2/h + g h^2/2)^T, \quad S = (0, \; -g h \, \partial_x z)^T \quad (2)

where t is the time (s), x is space (m), and U, F(U) and S are the vectors containing the conserved variables, the fluxes and the bed source terms, respectively, in which h is the water depth (m), q is the flow rate per unit width (m^2/s), g is the acceleration due to gravity (m/s^2) and z is the bed elevation (m).
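For concreteness, the flux and source vectors above can be evaluated as in the following minimal Python sketch (ours, not from the paper; function names are illustrative):

```python
import numpy as np

G = 9.81  # acceleration due to gravity (m/s^2)

def flux(U):
    """Physical flux F(U) = (q, q^2/h + g*h^2/2)^T for U = (h, q)^T."""
    h, q = U
    return np.array([q, q**2 / h + 0.5 * G * h**2])

def bed_source(h, dzdx):
    """Bed-slope source term S = (0, -g*h*dz/dx)^T."""
    return np.array([0.0, -G * h * dzdx])

# still water of depth 1 m carries no mass flux, only hydrostatic pressure
F_still = flux((1.0, 0.0))  # -> [0.0, 4.905]
```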

OVERVIEW OF THE FV GODUNOV-TYPE FRAMEWORK
In this section, the Godunov-type FV method is briefly presented (Garcia-Navarro & Vazquez-Cendon ; Fraccarollo et al. ; Bouchut ). The computational domain is divided into N_0 uniform and non-overlapping cells of size \Delta x. Integrating Equation (1) over each cell yields the following conservative discrete form of the SWE:

U_i^{t_{k+1}} = U_i^{t_k} - \frac{\Delta t}{\Delta x} \left( F_{i+1/2} - F_{i-1/2} \right) + \Delta t \, S_i^{t_k} \quad (3)

where U_i^{t_{k+1}} and U_i^{t_k} are the piecewise-constant averages representing the local numerical solution at the next and the present time level, respectively; F_{i\pm 1/2} are the numerical fluxes at the cell interfaces, which are approximated according to a two-argument numerical flux function \tilde F, based on the approximate Riemann solver of Roe (Roe ); and S_i^{t_k} includes the bed slope source term. Alternatively, Equation (3) can be rewritten in the following semi-discrete form:

\frac{d U_i}{dt} = L_i \quad (4)

L_i = -\frac{1}{\Delta x} \left( F_{i+1/2} - F_{i-1/2} \right) + S_i \quad (5)

The techniques adopted for the well-balanced discretization of the bed slope source term with wetting and drying are inherited from the baseline FV uniform mesh model.

MRA

Any function f supported on a baseline interval V_0 can be described on a dyadic sub-division of that interval (Harten ). In particular, this property is valid for a function f that can be spanned by a scaling basis φ (Figure 1). From the father basis φ, it is possible to span any sub-space V_n via dilation and translation (Keinert ):

\varphi_j^n(x) = 2^{n/2} \, \varphi(2^n x - j) \quad (6)

where n is the dilation index and j is the translation index.
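As a numerical sanity check of the dilation/translation rule in Equation (6), the sketch below (ours; the midpoint-rule quadrature is an assumption of the illustration) builds the dilated/translated Haar scaling basis and verifies its unit L2 norm:

```python
import numpy as np

def phi(x):
    """Haar father (scaling) function: indicator of [0, 1)."""
    return np.where((x >= 0.0) & (x < 1.0), 1.0, 0.0)

def phi_nj(x, n, j):
    """Dilated/translated scaling basis: phi_j^n(x) = 2^(n/2) * phi(2^n x - j)."""
    return 2.0 ** (n / 2) * phi(2.0 ** n * x - j)

# midpoint-rule check of the unit L2 norm of phi_1^2 (support [1/4, 1/2))
num = 1000
x = (np.arange(num) + 0.5) / num
norm2 = np.sum(phi_nj(x, 2, 1) ** 2) / num   # -> 1.0
```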
The wavelet sub-spaces W_n (n \geq 0) come into play as the orthogonal complement of V_n in V_{n+1}, and they satisfy the conditions:

V_{n+1} = V_n \oplus W_n, \qquad W_n \perp V_n \quad (7)

Taking the Haar wavelet ψ as a mother basis that spans W_0 (Figure 2), any sub-space W_n can also be spanned via its dilation and translation:

\psi_j^n(x) = 2^{n/2} \, \psi(2^n x - j) \quad (8)

From Equation (6), the function f can be described in any space V_n as a linear combination of scaling bases \varphi_j^n and scaling coefficients s_j^n:

f(x) \approx \sum_j s_j^n \, \varphi_j^n(x) \quad (9)

This is called the single-scale expansion of f at resolution n, where s_j^n can be computed (or initialized) as:

s_j^n = \int f(x) \, \varphi_j^n(x) \, dx \quad (10)

Owing to the nested sequence of subspaces and the use of Equations (6), (7), (8) and (10), an alternative expression of f can be obtained by means of the coarse description in V_0 (i.e., at resolution n = 0) and its complementary details d_j^m in the spaces W_m (m = 0, \dots, n-1):

f(x) \approx s_0^0 \, \varphi_0^0(x) + \sum_{m=0}^{n-1} \sum_j d_j^m \, \psi_j^m(x) \quad (11)

This representation is called the multiresolution expansion of f at resolution n, where the detail coefficients d_j^m can be computed (or initialized) as:

d_j^m = \int f(x) \, \psi_j^m(x) \, dx \quad (12)

In the expansion of Equation (11), the representation of the function f at higher resolution (n > 0) is enabled by the detail coefficients d_j^m when they are non-zero.
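The initialization of scaling and detail coefficients can be mimicked numerically; the following Python sketch (ours, using a midpoint-rule quadrature on [0, 1]) evaluates the integrals against the Haar bases and illustrates that the details of a constant function vanish:

```python
import numpy as np

def haar_phi(x):
    """Haar father function: indicator of [0, 1)."""
    return np.where((x >= 0.0) & (x < 1.0), 1.0, 0.0)

def haar_psi(x):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1)."""
    return haar_phi(2.0 * x) - haar_phi(2.0 * x - 1.0)

def scaling_coeff(f, n, j, num=4096):
    """s_j^n = integral of f(x) * 2^(n/2) phi(2^n x - j) dx (midpoint rule)."""
    x = (np.arange(num) + 0.5) / num
    return np.sum(f(x) * 2.0 ** (n / 2) * haar_phi(2.0 ** n * x - j)) / num

def detail_coeff(f, n, j, num=4096):
    """d_j^n = integral of f(x) * 2^(n/2) psi(2^n x - j) dx (midpoint rule)."""
    x = (np.arange(num) + 0.5) / num
    return np.sum(f(x) * 2.0 ** (n / 2) * haar_psi(2.0 ** n * x - j)) / num

# a constant lives entirely in V_0: its detail coefficients vanish
d0 = detail_coeff(lambda x: np.ones_like(x), 0, 0)   # -> 0.0
s0 = scaling_coeff(lambda x: np.ones_like(x), 0, 0)  # -> 1.0
```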

Two-scale transforming coefficients
Owing to the orthonormality and compact support of \varphi_j^n and \psi_j^n, rigorous exchange of data across two resolution levels can be obtained by the so-called two-scale transformation. To do so, two types of filter bank coefficients (a common term in the field of signal processing) are needed. For simplicity, the two-scale transformation is explained for data transfer between resolutions (0) and (1). The first type of coefficients, {k_0, k_1}, is associated with low-pass filtering, i.e., Equation (13); whereas the second type, {g_0, g_1}, is associated with high-pass filtering, i.e., Equation (14):

k_0 = k_1 = \frac{1}{\sqrt{2}} \quad (13)

g_0 = \frac{1}{\sqrt{2}}, \qquad g_1 = -\frac{1}{\sqrt{2}} \quad (14)

The low-pass filter coefficients {k_0, k_1} are used to merge the two scaling coefficients at resolution (1) into one scaling coefficient at resolution (0) (this is referred to as demoting), while the high-pass filter coefficients {g_0, g_1} are used to obtain (and store) the complement details (or detail coefficients) of the scaling coefficients at resolution (1) in resolution (0):

s_0^0 = k_0 s_0^1 + k_1 s_1^1, \qquad d_0^0 = g_0 s_0^1 + g_1 s_1^1 \quad (15)

Combined use of both filter coefficients allows the scaling coefficients at resolution (1) to be computed from the scaling coefficient at resolution (0) and its (stored) complement detail coefficient (this is referred to as promoting):

s_0^1 = k_0 s_0^0 + g_0 d_0^0, \qquad s_1^1 = k_1 s_0^0 + g_1 d_0^0 \quad (16)

By recursive application of the two-scale transform equations, the expansion of scaling coefficients describing a function f can be demoted or promoted across any resolutions (Figure 3). Consequently, promoting and demoting the coefficients across different resolutions, i.e., by using Equations (15) and (16), is the keystone of the HWFV scheme.
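The two-scale transformation can be sketched in a few lines of Python (ours; the array layout and function names are illustrative, and k_0 = k_1 = 1/√2, g_0 = -g_1 = 1/√2 are the standard orthonormal Haar filter values):

```python
import numpy as np

K0 = K1 = 1.0 / np.sqrt(2.0)                       # low-pass coefficients {k0, k1}
G0, G1 = 1.0 / np.sqrt(2.0), -1.0 / np.sqrt(2.0)   # high-pass coefficients {g0, g1}

def demote(s_fine):
    """Merge pairs of scaling coefficients at one resolution into coarser
    scaling coefficients, storing the complement detail coefficients."""
    s_even, s_odd = s_fine[0::2], s_fine[1::2]
    s_coarse = K0 * s_even + K1 * s_odd
    details = G0 * s_even + G1 * s_odd
    return s_coarse, details

def promote(s_coarse, details):
    """Recover the finer scaling coefficients from the coarse ones and
    their stored detail coefficients."""
    s_fine = np.empty(2 * len(s_coarse))
    s_fine[0::2] = K0 * s_coarse + G0 * details
    s_fine[1::2] = K1 * s_coarse + G1 * details
    return s_fine

# the round trip promote(demote(.)) recovers the data (up to round-off)
s = np.array([1.0, 2.0, 3.0, 4.0])
s_back = promote(*demote(s))
```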

HWFV METHOD
This section shows how to exploit the multiresolution description of a function f (recall the section 'MRA') to decompose the local piecewise-constant solution U_i (recall the section 'Overview of the FV Godunov-type framework') into a compressed data set of the solution's information, which drives adaptivity. The strategy is to first generate a FV discretization at the highest resolution (herein presented for n = 2 without loss of generality) to produce a fine uniform reference mesh. Then, over this mesh, MRA is performed to select the resolution up to which the local numerical solution needs to be described (i.e., to form a heterogeneous grid for the actual FV calculations). In doing so, three main steps are needed: a prediction step for careful mesh refinement, a multiresolution FV update, and a thresholding step for mesh coarsening through solution decompression.

Multiresolution FV formulation
Following the discretization presented in the 'Overview of the FV Godunov-type framework' section, a higher resolution mesh is introduced: each cell of the baseline coarse mesh is subdivided into 2^n cells, indexed by j. Figure 4 shows the multiresolution stencil. In this multiresolution setting, the FV formulation described in Equations (4) and (5) can be rewritten as:

\frac{d U_{i,j}^n}{dt} = L_{i,j}^n \quad (17)

L_{i,j}^n = -\frac{2^n}{\Delta x} \left( F_{i,j+1/2}^n - F_{i,j-1/2}^n \right) + S_{i,j}^n \quad (18)

Here, the translation index j is related to the resolution level n of the local cell. Equations (17) and (18) need only be applied at the local resolution level (i.e., on the adaptive mesh). This mesh can be selected by, first, casting each local solution as in Equation (11). This requires demoting high-resolution data in order to store the detail coefficients and produce the scaling coefficient at level (0). Then, a mesh prediction step is needed to locally promote the solution and refine the grid. This yields the adaptive mesh on which the calculation is performed. Finally, hard thresholding is performed to reduce the density of the mesh by discarding all non-significant details that fall below a certain threshold value (details in the sections below).

Prediction step for mesh refinement
Since the flow field evolves in time, the prediction step must be performed after each time step to guarantee that no significant flow feature is missed by the adaptive mesh. As part of this step, the normalized gradient α between the local cell and its neighbour cell is computed (see Figure 5(a)), and the decision for the local mesh refinement takes the following form according to the value of α: if α ≥ 0.1, the adaptive mesh is as in Figure 5(c); if 0.1 > α > 0.05, the adaptive mesh is as in Figure 5(b); and if 0.05 > α, the mesh keeps the form illustrated in Figure 5(a).
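Reading the three configurations of Figure 5(a)-(c) as local refinement levels 0, 1 and 2 (our interpretation), the decision rule above can be sketched as:

```python
def refinement_level(alpha):
    """Map the normalized gradient alpha between a cell and its neighbour
    to a local refinement level, using the thresholds quoted in the text."""
    if alpha >= 0.1:
        return 2   # fully refined, as in Figure 5(c)
    if alpha > 0.05:
        return 1   # intermediate refinement, as in Figure 5(b)
    return 0       # mesh kept as in Figure 5(a)

levels = [refinement_level(a) for a in (0.01, 0.07, 0.2)]  # -> [0, 1, 2]
```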

Thresholding step for the decompression of mesh
Thresholding is applied to the detail coefficients after each update to decompress the mesh. The values of the detail coefficients D_{i,j}^n become small where the numerical solution is smooth. Therefore, they can be cancelled without substantially affecting the accuracy of the numerical solution. To do so, all detail coefficients whose absolute values are below a normalized, level-dependent threshold value ε_n (derived from a user-chosen ε) are discarded, i.e., if they satisfy |D_{i,j}^n| ≤ ε_n. The components of D_{i,j}^n are associated with the components of the conserved variables in U_{i,j}^n (i.e., h and q). In real computations, it is impossible to know the optimal threshold value, but a range of options is feasible (Gerhard et al. ).
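A hard-thresholding step can be sketched as below (ours). The paper states only that the threshold is normalized and level-dependent; the particular scaling eps_n = 2**(n - n_max) * eps used here is an assumed Harten-type form, not taken from the paper:

```python
import numpy as np

def threshold_details(details, eps, n, n_max=2):
    """Zero out detail coefficients at level n whose magnitude falls below a
    level-dependent threshold eps_n (assumed form: eps_n = 2**(n - n_max) * eps)."""
    eps_n = 2.0 ** (n - n_max) * eps
    return np.where(np.abs(details) > eps_n, details, 0.0)

d = np.array([1e-4, 0.5, -0.003])
kept = threshold_details(d, eps=0.01, n=2)  # -> [0.0, 0.5, 0.0]
```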

Adaptivity of topography
The topography is projected onto the highest resolution (n = 2) in the same way as the conserved variables. Thus, the compressed data set of bed information over each cell is obtained. Finally, the adaptive bed mesh is obtained by applying the thresholding step. However, the topography adaptation is performed only once (i.e., initially, when t = 0) and the bed mesh remains constant throughout the simulation time.
To ensure the preservation of an accurate water surface elevation and mass conservation across levels, the bed mesh should be considered as a reference; this means that the HWFV scheme does not allow demoting the local numerical solution to coarsen the mesh to a resolution level lower than the topography refinement, even if the local numerical solution allows for a lower refinement level.

NUMERICAL RESULTS
This section shows the ability of the new HWFV scheme to solve 1D shallow water problems with fewer cells than the reference FV scheme, while retaining accuracy, total mass in the system, well-balancing and positivity of the water depth. Five well-known benchmark problems are considered for validation of the HWFV scheme. Furthermore, the root mean square error (RMSE) and the relative mass error (RME) are computed. Two of the test cases are idealized dam-breaks over frictionless beds (regular and irregular), the third and fourth test cases consider steady flow over a hump (transcritical flow with a shock, and supercritical flow), and the fifth test case offers a deeper analysis of the oscillatory flow in a parabolic bowl.

Idealized dam-break
The purpose of this case is to test the capability of the HWFV scheme to efficiently and accurately solve the homogenous SWE. The solution is compared to the exact solution, using the RMSE and the maximum error metrics.
A 1D channel with a horizontal frictionless bed is considered. The channel is 2,000 m long and an imaginary dam is located 1,000 m from the upstream end. Initially, the upstream water level is 20 m whereas, downstream, the water level is 5.0 m. The initial discharge is zero throughout the channel. Further quantitative analysis is performed by calculating the RMSE for both water surface and flow rate, i.e.:

\text{RMSE}_h = \sqrt{ \frac{1}{AC} \sum_{i,j} \left( h_{i,j}^n - \hat h_{i,j}^n \right)^2 }, \qquad \text{RMSE}_q = \sqrt{ \frac{1}{AC} \sum_{i,j} \left( q_{i,j}^n - \hat q_{i,j}^n \right)^2 }

in which AC is the total number of active cells forming the adaptive mesh, h_{i,j}^n, q_{i,j}^n are the numerical results and \hat h_{i,j}^n, \hat q_{i,j}^n are the reference data (from the analytical solutions). The RMSE statistics are summarized in Table 1. These results show a decreasing trend for the means, accompanied by a faster reduction in their SDs, with deeper refinement levels in the adaptive scheme. Therefore, the HWFV framework performs the adaptivity process without introducing additional errors to the numerical solution and is able to sensibly decide the resolution level in line with the dynamics of the flow. For this test, the adaptive simulation costs around 78% less, in computational effort, than the simulation on the finest uniform mesh.
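The RMSE over the active cells can be computed as in this short Python sketch (ours; the inputs are flattened arrays of the active-cell values):

```python
import numpy as np

def rmse(numerical, reference):
    """Root mean square error over the AC active cells of the adaptive mesh:
    sqrt( (1/AC) * sum (numerical - reference)^2 )."""
    numerical = np.asarray(numerical, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((numerical - reference) ** 2)))
```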
In the following test cases, the highest level will be fixed to n = 2 and the focus will be placed on exploring applied aspects that are relevant to hydraulic modelling.

Steady flow over a hump

Channel geometry and bed topography are identical to the previous test case, but the unit inflow rate and the water surface elevation at the upstream end of the channel are set to q = 25.0567 m^2/s and h + z = 2 m, respectively.

Dam-break over a triangular hump
This test case is employed to verify the capability of HWFV to preserve the total mass in the system in order to ensure that the adaptivity process does not introduce or lose mass even in the presence of shocks and wet/dry fronts. A hypothetical dam-break over a triangular hump test is set up (see Figure 11).
The horizontal flume is 38 m long, and the sudden collapse of the dam releases a wave that overtops and interacts with the obstacle, creating a reflected wave that will again be reflected by the boundary walls.
Since the system is closed, the initial mass of water should be conserved. During the simulation, the total mass of water at time t (M_t) is computed and compared with the total initial discrete mass (M_0) according to the RME below:

\text{RME} = \frac{M_t - M_0}{M_0}

The results in Figure 12 show that the adaptive HWFV models conserve the initial total mass in the system and that the magnitude of the RME is within the range of machine precision (1 × 10^{-15}) throughout the entire computational time. However, since in the HWFV scheme the mesh resolution is not fixed, the real physical mass (M_R) is also used as a reference, to show the ability of the HWFV to preserve the same amount of mass as the corresponding uniform mesh; this has been done according to Equation (25) below:

\text{RME} = \frac{M_t - M_R}{M_R} \quad (25)

In Figure 13, the results show that the RME profiles for both the adaptive mesh levels and their uniform mesh counterparts are the same. This means the HWFV does not introduce any additional mass error beyond that of the discretization on the fine reference uniform mesh.
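The mass bookkeeping can be sketched as follows (ours; the level-dependent cell width Δx_0/2^n follows from the dyadic subdivision, and the function names are illustrative):

```python
import numpy as np

def total_mass(h, levels, dx0):
    """Discrete water mass on an adaptive mesh: an active cell at level n
    has width dx0 / 2**n."""
    h = np.asarray(h, dtype=float)
    widths = dx0 / 2.0 ** np.asarray(levels, dtype=float)
    return float(np.sum(h * widths))

def relative_mass_error(m_t, m_ref):
    """RME of the total mass at time t against a reference mass (M0 or MR)."""
    return (m_t - m_ref) / m_ref

# one coarse cell and one half-width cell, both holding 1 m of water
m = total_mass([1.0, 1.0], [0, 1], dx0=1.0)  # -> 1.5
```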

Oscillatory flow in a parabolic bowl
This case is well known and recognized as challenging for numerical models because it involves both moving wet/dry interfaces and uneven topography. It is selected here to study the accuracy and mesh convergence abilities of the adaptive HWFV scheme, and to explore the sensitivity of the HWFV model in performing the adaptivity process under various choices of the threshold value parameter and of the baseline coarsest mesh.
where B = 5 is a constant and s = \sqrt{8 g h_0}/(2a) is the frequency. Under these conditions, the oscillation period is T = 2\pi/s.
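The frequency/period relation can be evaluated directly; the bowl half-length a = 3000 m and centre depth h_0 = 10 m below are assumed example values (typical for this benchmark, not stated in the excerpt):

```python
import math

def bowl_frequency(a, h0, g=9.81):
    """Oscillation frequency s = sqrt(8*g*h0) / (2*a) of the parabolic bowl."""
    return math.sqrt(8.0 * g * h0) / (2.0 * a)

s = bowl_frequency(a=3000.0, h0=10.0)
T = 2.0 * math.pi / s   # oscillation period, ~1,346 s for these values
```

With these assumed values, T/5 ≈ 269 s and 1.5T ≈ 2,019 s, consistent with the output instants quoted in the results below.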

Threshold sensitivity
To understand the effect of the threshold value parameter on the adaptivity process, a baseline mesh with N_0 = 40 cells is fixed, while considering the following threshold values: ε = 0.0, 0.001, 0.01, 0.1 and 1.0. In Figure 14, the numerical water surface profiles at t = 275 s (T/5) and t = 2,020 s (1.5T) are compared with the analytical solution.
As seen in Figure 14, the HWFV scheme refines in the region through which the wet/dry interface is moving, while other parts of the domain stay at the coarsest and intermediate levels of refinement. Furthermore, varying the threshold value leads to different refinement patterns. In particular, as ε increases, fewer cells are refined to higher levels during the simulation, since more of the small detail coefficients are selectively omitted. In the limit ε = 0.0, no details are discarded and all computational cells go to the highest level, thus resulting in a uniform mesh. Figure 14(b) shows an almost uniform mesh prediction when ε = 0.001. Hence, it is not reasonable, in terms of efficiency, to use a threshold value ε < 0.001. A sensible refinement is obtained with ε = 0.01 (Figure 14(c)). For ε = 0.1 and ε = 1.0 (shown in Figure 14(d) and 14(e), respectively), poor predictions at the wet/dry interfaces appear, which may be caused by the fact that a large threshold value omits some small detail coefficients, relevant to the wet/dry front, during the promotion process.

Baseline meshes
The choice of a baseline mesh with N_0 = 40 is rather arbitrary: it might already be too fine or, on the contrary, never allow a fine enough mesh for this test case, which may affect the performance of the adaptivity process. Therefore, the influence of the baseline mesh should be studied. Figure 15 shows that ε = 0.01 is the most sensitive to N_0. Owing to this sensitivity, this threshold value allows a wide, automatic response of the adaptive process (contrary to, for example, ε = 0.1) and is therefore likely to be the best for performing adaptivity in a prompt way. It is clear that an inefficient adaptive process is obtained with ε = 0.001, except in Figure 15(e), when a very fine baseline mesh is used.

Mesh convergence
Mesh convergence is also studied in terms of the L_1-norm as defined in Equation (28). Simulations were performed up to 2,020 s, and the instants t = 270 s (T/5) and t = 2,020 s (1.5T) are selected for analysis. Several baseline meshes at the coarse level, N_0 = 10, 20, 30, 40, 80, 160 and 320, are introduced, using the same threshold values as defined previously.
It can be seen in Figure 16 that the behaviour of the L_1-norm is asymptotic regardless of N_0 and ε. The uniform mesh (ε = 0.0) is taken as the reference curve for comparison; each point within a curve is associated with one baseline mesh. The differences between the convergence curve for ε = 0.001 and the reference are relatively small. For ε = 0.01 the convergence is slightly better than all others, as shown in Figure 16(a) and 16(b). The large error obtained with ε = 1.0 and N_0 = 20, 30 cells is due to too few cells being activated during the adaptivity process (over-filtering), which degrades the quality of the numerical solution. Nevertheless, the magnitude of the error becomes smaller as N_0 increases, although it remains larger than the error of the reference curve. Better results are obtained for ε = 0.1, but with appreciable differences compared to ε = 1.0. The magnitude of the L_1-norm increases from Figure 16(a) to 16(b); this is merely because of numerical diffusion, which is an anticipated issue for first-order FV schemes. In Figure 17, the same analysis as the one reported in Figure 16 is considered to illustrate the relative performance in CPU time (RPCPU).

CONCLUSIONS

Numerical evidence confirms that the proposed HWFV scheme can accurately solve the SWE with source term.
Adaptive solutions are shown to be mass conservative. Notably, the new model is proven to be able to self-decide appropriate resolution levels following the dynamics of the flow, including shocks, wet/dry fronts and the presence of non-flat topography. Future research will consist of integrating the friction source term in a well-balanced manner, exploring sensitivity issues in relation to increasing the resolution levels with varying threshold values and extending the approach to 2D.