Sediment thickness and bedrock topography are vital for the terrestrial hydrosphere. In this study, we estimated sediment thickness by using information from digital elevation models, geological maps, and public databases. We discuss two different approaches: First, the horizontal distances to the nearest bedrock outcrop were used as a secondary function in kriging and cokriging. Second, we applied Poisson's equation to estimate the local trend of the sediment thickness where bedrock outcrops were used as boundary conditions. Differences between point observations and the parabolic surface from Poisson's equation were minimized by inverse modelling. Ordinary kriging was applied to the residuals. These two approaches were evaluated with data from the Øvre Eiker area, Norway. Estimates derived from Poisson's equation gave the smallest mean absolute error, and larger soil depths were reproduced better if the local trend was included in the estimation procedure. An independent cross-validation was undertaken. The results showed the best accuracy and precision for kriging on the residuals from Poisson's equation. Solutions of Poisson's equation are sensitive to the boundary conditions, which in this case were the locations of the bedrock outcrops. Bedrock outcrops are available for direct observation; hence, the quality of the estimates can be improved by updating input from high-resolution mapping.
Estimation of bedrock topography.
Spatial distribution of sediment thickness.
Use of Poisson's equation and kriging.
Utilization of the GRANADA well database.
Use of digital elevation models and Quaternary maps.
Questions are put forward as to whether current conceptual models in hydrology are able to predict the impact of environmental change on the terrestrial hydrosphere (Beven 2006; Wagner 2007; Gupta et al. 2012). A fundamental part of the terrestrial environment is the interface between the atmosphere and the unweathered bedrock – labelled the Critical Zone (CZ) in Earth Science (NRC 2001; Giardino & Houser 2015; Richardson 2017). Water saturation in the CZ is a key variable for rainfall-runoff processes (Dunne & Black 1970; Dunne 1978). Evaporation and transpiration from this domain are important for the terrestrial water flux of moisture to the atmosphere (Trenberth 1999; Bierkens & van den Hurk 2008). A major part of the CZ is the unconsolidated material above the bedrock – for simplicity, called the sediment thickness in this paper. The spatial distribution of the sediment thickness and the bedrock topography is interesting in many different applications. In this article, we do not focus on processes in the CZ, but we address different approaches to estimate the geometry of sediment thickness and bedrock topography by utilizing publicly available databases. Before looking more closely into the estimation problem, it is interesting to review briefly some studies that focus on the role of bedrock topography and sediment thickness for rainfall-runoff processes.
Graham et al. (2010) and Graham & McDonnell (2010) used field experiments and modelling results to demonstrate the importance of bedrock topography and permeability for the threshold response in runoff events. Above a threshold, runoff increased dramatically for a given rainfall intensity. In such cases, a major fraction of the subsurface flow took place at the interface between the sediments and the less permeable bedrock. Hopp & McDonnell (2009) used a synthetic dataset to show the importance of surface topography, soil depth, and bedrock topography for simulating the rainfall-runoff response. They concluded that the subsurface connectivity of saturated areas explained the runoff characteristics. Gabrielli et al. (2012) studied the relation between runoff and groundwater in sediments and bedrock in two hillslope catchments in New Zealand and Oregon, USA. In both hillslopes, the bedrock influenced the streamflow, but rapid subsurface stormflow was apparently most important if the bedrock had low permeability compared with the soil. Based on a large dataset, Tromp-van Meerveld & McDonnell (2006a) showed that pipe flow, matrix flow, and total flow had a threshold response to precipitation. Total flow increased by two orders of magnitude when precipitation exceeded a specified threshold. From their analysis, they concluded that the bedrock topography, not the surface topography as usually assumed, determined the upstream contributing area. In a follow-up study, Tromp-van Meerveld & McDonnell (2006b) showed that micro depressions in the bedrock surface were filled before they contributed to subsurface runoff. Freer et al. (2002) studied the relation between the hydrological response of a storm event and the bedrock topography in the Panola Mountain Research Watershed in Georgia, USA.
They concluded that in cases where the bedrock had low permeability, the bedrock topography was important for the groundwater level, which in turn explained the large-scale hydraulic connectivity of the subsurface. Connectivities of macrostructures are important for the estimation of peak flow and recession of water in the catchment. Several other studies also elucidate the importance of the bedrock topography to understand hydrological and geochemical response (Genereux et al. 1993a, 1993b; Hinton et al. 1993; Mulholland 1993; Jencso et al. 2009). In summary, these studies indicate relations between runoff response, sediment properties and bedrock topography that deserve some more attention.
Methods for mapping and/or estimation of sediment thickness or bedrock topography are determined by the purpose of the study. Test drilling usually yields the best point precision, but for resource mapping or hazard assessments, geophysical prospecting methods might give the required precision. In the case of water balance, integrated grid estimates might be sufficient. A thorough review of geotechnical and geophysical methods is beyond the scope of this study, but a short summary of the most relevant estimation approaches is appropriate.
Rempe & Dietrich (2014) suggested modelling the general trend of sediment thickness for a hillslope by the analytical solution of differential equations where the transport of sediments was expressed as a diffusive process. The assessment of sediment budgets has been explored in fluvial and glacial environments (Walling & Collins 2008; Bogen 2017) and for large mountain areas (Hinderer 2001; Otto et al. 2009; Straumann & Korup 2009), but the aims of such works are usually lumped estimates of large-scale processes, not estimates of the sediment thickness as a function in space. Sulebak & Hjelle (2003) applied Spline Algorithms (SAs) to model landscape geometry with different spatial resolutions. Spline methods are robust smoothing algorithms and therefore work best for areas with gentle variations in the bedrock topography. Clarke et al. (2009) applied an Artificial Neural Network (ANN) to estimate glacial bedrock topography. They evaluated the method by using synthetic input data where the (synthetic) ice thickness was calculated by solving the shallow ice equations. The spatial reproduction of the bedrock topography had a maximum misfit of more than 400 m with a root-mean-square error of 70 m. The relative error of ice volume varied from −45 to +220%. Mey et al. (2015) applied ANN in the same way as Clarke et al. (2009). They used a digital elevation model (DEM) to estimate the sediment thickness in the Rhone valley and reported an average estimation error in sediment fill thickness of less than 27%. The ANN method assumes a ‘sameness’ between the training data and the output data. The assumption implies that the geomorphic signature of the bedrock topography is similar above and below the elevation where the bedrock is covered by sediments or ice. The ANN approach is, therefore, sensitive to the statistical homogeneity of the training dataset.
A practical challenge arises in cases where the (homogeneous) training data cover only a minor part of the area of estimation. Neither SA nor ANN provides conditional uncertainties as part of the methodology. From a practical point of view, this is a drawback because, in many cases, the uncertainty is as important as the most likely value.
Against this background, Poisson's equation offers some interesting properties (Kazhdan et al. 2006; Calakli & Taubin 2011; Liu 2018). First, the solution of Poisson's equation yields parabolic functions in space, which are readily adaptable to U-shaped valleys. Second, Poisson's equation is sensitive to the boundary conditions. This sensitivity can be utilized if the altitude is known. Today, DEMs are available for large parts of the world; utilizing DEMs for this purpose is, therefore, an interesting application. Third, in areas covered by sediments, inverse modelling of Poisson's equation can be used to fit a parabolic surface to the point observations of the sediment thickness.
Kazhdan et al. (2006) applied numerical solutions of Poisson's equation as a filtering method for surface reconstruction. A conventional application of Poisson's equation assumes that the reconstruction domain is small compared with the observed surface. The opposite is more common in geo-applications: the unknown domain is usually large compared with the exposed bedrock. One way to overcome this problem is to optimize the source term in Poisson's equation by inverse modelling. The residuals between the observed sediment thickness and the optimal parabolic function can then be modelled as a stochastic function in space.
Using L in the estimation procedure was inspired by an intuitive practice among drill operators: based on previous experience from a given area, it is often possible to judge the sediment thickness D at a location u by simple visual inspection of the horizontal distance L to the nearest bedrock outcrop (Figure 1). The statistical relation between D and L was used in cokriging and ordinary kriging to evaluate the effect on the estimation variance.
The quality of publicly available data is interesting to explore because the volume of information is increasing every year. In this study, D was taken from the Norwegian well database GRANADA (NGU 2020a). Data from mainland Norway (denoted the national or large scale in the following) were used to estimate covariance and cross-covariance functions of D and L. Local boreholes were used for conditional estimation and inverse modelling of the constant parameter in Poisson's equation.
The geostatistical methods we evaluate are available in open source and commercial software, but it is not standard procedure to include the bedrock outcrop as a secondary function in kriging or as boundary conditions in Poisson's equation. Most of the algorithms we applied were, therefore, tailored to the data material we used.
Two different types of cross-validation were performed: First, cross-validation was undertaken by leaving one observation out. In this procedure, all available point observations were included. The second cross-validation was done by dividing the observation material into two groups. The first group contained boreholes recorded before 2010, and the second group was sedimentary wells and boreholes recorded after 2010. Because the second cross-validation was done on independent data not included in the parameter inference, this procedure was called jackknife cross-validation.
The results were evaluated according to three simple criteria: the mean absolute error, the accuracy, and the precision (Goovaerts et al. 2005). In addition, we applied a simple probability score of the observations and the continuous ranked probability score (CRPS; Gneiting & Raftery 2004). Before presenting the methods and the results, it is necessary to describe the data material we applied to evaluate the estimation procedures.
As indicated above, we employed three sources of information for the current study: (i) point observations of sediment thickness, (ii) digital Quaternary maps (DQMs), and (iii) a DEM of the surface. In addition, we also had access to geological maps of the bedrock for the study area. Even though the locations of the different bedrock types were not utilized in the current study, a brief description of the most prominent geological structures in the study area is interesting as background information.
The study area Øvre Eiker (9 km × 7.1 km) is located about 60 km west of Oslo (Figure 2). Øvre Eiker is a part of the Vestfold graben, which is one of the three main graben structures in the Oslo Rift. Large parts of the Vestfold graben stem from batholithic intrusions and consist mainly of biotite granite. At the highest altitudes in the eastern parts of the study area, the bedrock belongs to the Glitrevann caldera, which was formed as part of the plutonic intrusion in Permian time (299–252 mill. years BP). At the eastern side of the valley, the bedrock consists mainly of shales, marbles, and limestones from the Ordovician (488–440 mill. years BP). The bedrock at the western side of the valley consists mainly of quartzites (estimated deposition time about 1,475 mill. years BP) and different kinds of gneiss (estimated intrusion time about 1,500 mill. years BP). At large, the topography of the study area mirrors the character of the bedrock, with hard, resistant bedrock at high altitudes and softer bedrock in the valleys. More information on the bedrock geology can be found in Andersen et al. (2008) and NGU (2020b).
Quaternary maps and surface elevation
DQMs (NGU 2020c) were used to identify locations where the bedrock was exposed to the atmosphere or covered with a thin and patchy layer of organic matter (37% of the study area). For simplicity, the area where D = 0 is referred to as exposed bedrock in this article. The remaining area, where D > 0 (63%), was covered by unconsolidated material (Figure 3). Marine deposits are the most frequent sediment type in the area, covering about 27% of the study area. The marine limit was levelled to 194 m above present mean sea level (Bargel 1987). More than 70% of the surface area covered by sediments was below the marine limit, while more than 75% of the exposed bedrock was located above this altitude (Figure 3).
Till deposits are the second most frequent surface sediment, covering about 13% of the study area. Most of the tills are deposited as a relatively thin layer above the bedrock, and more than 95% of the exposed tills are located above the marine limit. For simplicity, we also included front moraines in the till category for the present study, which implies that the sediment thickness may be larger in some locations. Glaciofluvial sediments are deposited below 200 m a.m.s.l. and fluvial sediments below 120 m a.m.s.l. (Figure 3). About 8% of the study area is covered by colluvial or weathered material. Peat and swamps are most frequent at higher altitudes and cover less than 2% of the area. A minor fraction of the area (0.14%) is anthropogenic material.
Sediment thickness and the horizontal distance to bedrock outcrop
Point observations of sediment thickness for mainland Norway were taken from the public well database GRANADA (NGU 2020a). The DQM was used to calculate the horizontal distance to the nearest bedrock outcrop, L, for all GRANADA recordings. Semivariogram (covariance and cross-covariance) model parameters used in the current study were based on the GRANADA data registered before 2010 from all over Norway. At that time, the total number of reported boreholes with D > 0 was 19,682 (Kitterød 2017).
GRANADA recordings after 2010 located within the Øvre Eiker study area were included in this study for primary cross-validation by leaving one observation out. A second cross-validation was done by dividing the dataset into two groups. This procedure is called jackknife cross-validation because it was done on an independent dataset. In addition to point observations of D, we used the DQM to calculate L in a regular grid with a spatial resolution of 25 m × 25 m. The purpose was to evaluate whether the modelling results were improved or not if local information on L was included as a secondary function. The local relation between D and L with the currently available data can be seen in Figure 4.
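The gridded computation of L can be sketched as follows. This is a minimal Python sketch, not the study's own tooling: the function name is hypothetical, and the brute-force nearest-neighbour search is only practical for small grids (a spatial index would be needed for a national 25 m grid).

```python
import numpy as np

def distance_to_outcrop(exposed, cell_size=25.0):
    """Brute-force horizontal distance L (in metres) from each grid cell
    to the nearest cell where the DQM maps exposed bedrock.
    exposed: 2-D boolean array (True where bedrock is exposed, D = 0)."""
    # coordinates (m) of all outcrop cells
    out = np.column_stack(np.nonzero(exposed)) * cell_size
    ii, jj = np.indices(exposed.shape)
    cells = np.stack([ii.ravel(), jj.ravel()], axis=1) * cell_size
    # pairwise distances: (n_cells, n_outcrops), then the minimum per cell
    d = np.sqrt(((cells[:, None, :] - out[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).reshape(exposed.shape)
```

In outcrop cells the distance is zero by construction, consistent with the definition of L used for the GRANADA recordings.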
The statistical methods we applied for this study were based on the Gaussian theory, which is well documented in textbooks (Isaaks & Srivastava 1989; Journel & Huijbregts 1989; Deutsch & Journel 1998). The equations we apply will, therefore, not be reproduced here, but some more details are given in the Supplementary Appendix.
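Although the kriging equations themselves are left to the Supplementary Appendix, the structure of an ordinary kriging estimate can be illustrated with a minimal sketch. The exponential covariance model and the parameter values below are placeholders, not the fitted GRANADA parameters of Table 1:

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x0, sill=1.0, rng=1000.0):
    """Ordinary-kriging estimate and variance at location x0, using an
    exponential covariance C(h) = sill * exp(-3h/rng) as an example model."""
    x_obs = np.atleast_2d(x_obs).astype(float)
    n = len(x_obs)
    cov = lambda h: sill * np.exp(-3.0 * h / rng)
    h = np.linalg.norm(x_obs[:, None, :] - x_obs[None, :, :], axis=2)
    # OK system: covariance matrix bordered by the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(h)
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(x_obs - np.asarray(x0, float), axis=1))
    w = np.linalg.solve(A, b)           # kriging weights + Lagrange multiplier
    est = w[:n] @ np.asarray(z_obs, float)
    var = sill - w @ b                  # C(0) - sum(w_i c_i) - mu
    return est, var
```

The sketch reproduces the two textbook properties relied on in this study: exact interpolation at data points (zero variance) and convergence toward the local mean with growing variance far from the data.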
Ordinary kriging, cokriging, and bedrock kriging of sediment thickness in the study area were based on semivariogram and cross-semivariogram parameters derived from the national GRANADA database. The estimation parameters should, therefore, be considered as average values for mainland Norway (Kitterød 2017). These estimation parameters are not stationary in space, and local estimation parameters should, in principle, give better results. In this case, the derivation of local parameters was not feasible due to the scarcity of point observations in the study area. For this reason, we preferred to perform local kriging with average large-scale semivariogram model parameters instead of less reliable local model parameters. Before the modelling took place, the GRANADA data required some pre-processing.
Boreholes and wells are usually located in populated areas, which means that most of the GRANADA recordings were from urban areas, while fewer recordings were from rural areas. If such data are used to calculate average properties, uneven sampling will affect the statistics because oversampled domains will dominate. To suppress such clustering effects, each observation received a weight calculated as a function of the distance to other boreholes. Boreholes located close to each other were given less weight than boreholes located far from each other. The calculation of declustering weights was done according to the moving grid algorithm suggested by Deutsch (1989) and applied in Kitterød (2017). It should be noted that declustering weights do not change the value of the observations that we used for conditional estimation. The weights were only applied for the calculation of experimental semivariograms and statistical moments.
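The cell (moving grid) declustering idea can be sketched as follows; Deutsch (1989) additionally scans a range of cell sizes and grid origins, which is omitted here for brevity:

```python
import numpy as np

def cell_declustering_weights(xy, cell_size):
    """Cell-declustering weights: each sample is weighted by
    1 / (number of samples sharing its grid cell), then normalised
    so the weights sum to the number of samples n."""
    xy = np.asarray(xy, float)
    cells = np.floor(xy / cell_size).astype(int)
    # count how many samples fall in each occupied cell
    _, inv, counts = np.unique(cells, axis=0,
                               return_inverse=True, return_counts=True)
    inv = inv.reshape(-1)  # guard against numpy versions that keep dims
    w = 1.0 / counts[inv]
    return w * len(xy) / w.sum()
```

Two samples sharing a cell each receive half the weight of an isolated sample, which is exactly the down-weighting of clustered (urban) boreholes described above.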
We applied two methods to approach Gaussian distributions of the involved variables. For ordinary kriging and cokriging, we utilized the normal score transform (Supplementary Appendix A1), while bedrock kriging was done on logarithmic values. For the sake of completeness, ordinary kriging was also performed on logarithmic values. After the calculations, the results were back-transformed to sediment depth in metric units for cross-validation, either by the inverse normal score transform or by the inverse of the lognormal cumulative distribution function.
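A minimal sketch of the normal score transform (Supplementary Appendix A1) might look like this. The use of declustering weights in the empirical cdf is one common variant and is an assumption here, not necessarily the exact implementation used in the study:

```python
import numpy as np
from statistics import NormalDist

def normal_score(z, weights=None):
    """Normal score transform: map data to standard-normal quantiles via
    the (optionally declustering-weighted) empirical cdf."""
    z = np.asarray(z, float)
    w = np.ones_like(z) if weights is None else np.asarray(weights, float)
    order = np.argsort(z)
    # cdf value at each datum: cumulative weight minus half its own weight
    cw = np.cumsum(w[order]) - 0.5 * w[order]
    p = np.empty_like(z)
    p[order] = cw / w.sum()
    nd = NormalDist()
    return np.array([nd.inv_cdf(pi) for pi in p])
```

The back-transform interpolates in the reverse direction through the same empirical cdf, which is how estimates were returned to metric sediment depths for cross-validation.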
Ordinary kriging and cokriging
Three types of boundary conditions can be applied to solve Equation (4): (i) a Dirichlet condition, where the elevation of the bedrock surface or the soil depth is given; (ii) a Neumann condition, where the gradient is given; or (iii) a Cauchy condition, where a weighted average of (i) and (ii) is given (Gosses et al. 2018; Liu 2018). In this study, we applied the Dirichlet condition in two slightly different ways. Because the bedrock surface is identical to the terrain surface in outcrop locations, we have two options: First, the terrain elevation (m a.m.s.l.) can be used directly as the boundary condition; in that case, the solution is identical to the local trend in the bedrock topography, given by Equation (6). Second, the soil depth (m) can be used as the boundary condition because, by definition, the soil depth is zero in outcrop locations; in that case, the solution is identical to the local trend of the sediment thickness, and the local trend of the bedrock topography is obtained by subtracting this trend from the terrain elevation.
We applied an explicit finite difference numerical scheme of second order to solve Equation (5). The explicit scheme implemented the boundary conditions in a very simple manner, and the matrix operations in MATLAB gave fast convergence to stable solutions. A full evaluation of the numerical scheme was beyond the scope of this study, but no numerical instability was revealed, and high numerical precision was achieved at a modest cost in CPU time.
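As an illustration of the numerical approach (not the authors' MATLAB code), a second-order finite-difference iteration for Poisson's equation with Dirichlet conditions at the outcrop cells might look like this:

```python
import numpy as np

def solve_poisson(mask, boundary, f, dx=25.0, tol=1e-10, max_iter=50_000):
    """Jacobi iteration for Poisson's equation  d2T/dx2 + d2T/dy2 = f
    on a regular grid with spacing dx. `mask` is True where the solution
    is unknown (sediment-covered cells); everywhere else T is held fixed
    at `boundary` (the Dirichlet condition at bedrock outcrops).
    f is the constant source ('load') term."""
    T = boundary.astype(float).copy()
    for _ in range(max_iter):
        Tn = T.copy()
        # second-order 5-point stencil: T = (sum of neighbours - dx^2 f) / 4
        interior = (T[:-2, 1:-1] + T[2:, 1:-1]
                    + T[1:-1, :-2] + T[1:-1, 2:] - dx * dx * f) / 4.0
        Tn[1:-1, 1:-1] = np.where(mask[1:-1, 1:-1], interior, Tn[1:-1, 1:-1])
        if np.max(np.abs(Tn - T)) < tol:
            return Tn
        T = Tn
    return T
```

With a negative load parameter and zero depth at the outcrops, the solution is a dome-shaped (parabolic) trend surface of sediment thickness, which is the behaviour exploited in the inverse modelling below.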
To keep the model evaluation as simple and consistent as possible, we decided to present the uncertainties in terms of probability density functions by calculation of percentiles (Supplementary Appendix A5).
The subscript m denotes the estimation model, where m refers to ordinary kriging, cokriging, bedrock kriging, or Poisson kriging; u is the estimation location; the percentiles are given as fractional values; and the conditioning denotes all available information.
By this definition, the percentile intervals include the model uncertainty, which means that in locations with less information, and thus larger uncertainty, the model can be accurate (i.e. the observations fall within the given percentile intervals) but not very precise, because high uncertainties yield wide percentile intervals. An additional criterion for precision is, therefore, necessary.
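The three point criteria can be formalized as in the following sketch. The exact definitions in Equations (9)–(11) may differ in detail (e.g. how the percentile interval enters the precision measure), so the functions below are illustrative rather than the authors' definitions:

```python
import numpy as np

def mae(obs, est):
    """Mean absolute error between observations and estimates."""
    return np.mean(np.abs(np.asarray(obs, float) - np.asarray(est, float)))

def accuracy(obs, lo, hi):
    """Fraction of observations that fall inside the estimated
    percentile interval [lo, hi] (the 'accurate' estimates)."""
    obs, lo, hi = (np.asarray(a, float) for a in (obs, lo, hi))
    return np.mean((obs >= lo) & (obs <= hi))

def precision(lo, hi, inside):
    """Mean width of the percentile interval, evaluated only for the
    accurate (covered) estimates, as discussed in the text: a model can
    be accurate yet imprecise if its intervals are very wide."""
    lo, hi, inside = np.asarray(lo, float), np.asarray(hi, float), np.asarray(inside)
    return np.mean((hi - lo)[inside])
```
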
Gneiting & Raftery (2004) introduced CRPS to measure the success of probabilistic forecasts and estimates. Mean CRPS was calculated by linear interpolation in the MATLAB script provided by Shrestha (2020). CRPS values were calculated for each cross-validation point for 10,000 realizations.
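For an ensemble of realizations, the CRPS can be computed from the identity CRPS = E|X − y| − ½E|X − X′| (Gneiting & Raftery). The sketch below is an alternative to the MATLAB script by Shrestha (2020) referred to above, not a transcription of it:

```python
import numpy as np

def crps_ensemble(obs, ens):
    """CRPS for a single observation `obs` against an ensemble `ens`
    of realizations, via CRPS = E|X - obs| - 0.5 * E|X - X'|."""
    ens = np.asarray(ens, float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2
```

For a perfect deterministic forecast the score is zero, and in the cross-validation it would be averaged over all left-out boreholes and their 10,000 realizations.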
Finally, linear regression was included to indicate the deviation between cross-validated estimates and observations. Linear regression cannot be regarded as a formal evaluation method in this case because the assumption of independent residuals (i.e. the differences between the regression model and the estimates) is not fulfilled. The correlation coefficients R should, therefore, be interpreted with caution. The slope coefficient of the linear regression model indicates underestimation (slope < 1) or overestimation (slope > 1). A successful estimation method should have a slope close to 1.
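The slope criterion amounts to a one-line least-squares fit of the estimates on the observations; the function below is illustrative:

```python
import numpy as np

def regression_slope(obs, est):
    """Least-squares slope of estimates regressed on observations.
    A slope below 1 indicates average underestimation, above 1
    average overestimation."""
    slope, _intercept = np.polyfit(np.asarray(obs, float),
                                   np.asarray(est, float), 1)
    return slope
```
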
The modelling results were evaluated by comparing the solutions with respect to the evaluation criteria explained above. This was done first in a primary cross-validation procedure by leaving one observation out, and second by jackknife cross-validation, where the GRANADA observations were divided into two groups: boreholes recorded before 2010, and sedimentary wells and boreholes recorded after 2010. The first group was used to calculate conditional estimates at the locations of the second group. For all cases presented below, we applied a grid resolution of 25 m.
Semivariogram model parameters
The national GRANADA data were used to calculate experimental semivariograms for ordinary kriging, cokriging, and bedrock kriging. Optimal parameters were obtained by using a simulated annealing algorithm (Kitterød 2017; MATLAB 2020). The semivariogram for Poisson kriging was based on GRANADA boreholes located within the Øvre Eiker study area. The experimental semivariograms from logarithmic values are shown in Figure 5. For ordinary kriging and cokriging, the semivariograms were derived from normal score transformed data (cf. Tables 3 and 4, case F in Kitterød (2017)). Estimated model parameters are given in Table 1.
a, and the semivariogram model are given by , where indicates either normal score or lognormal transformed sediment thickness , or the horizontal distance to the bedrock outcrop , , and X denotes the residuals between Poisson's equation and observations (Equation (7)). The practical range = log(0.05) for all models.
Inverse modelling of Poisson's equation
Poisson's equation was applied for trend analysis of the local bedrock topography. This was done by inverse modelling of the constant load parameter in Equation (5), which means that we searched for the load parameter that minimizes the mean absolute error (Equation (9)). Since only one parameter was involved, this was obtained by simple incremental stepping of the load parameter. The sensitivity was evaluated by successively leaving one observation out and, finally, by including all observations. This procedure was repeated twice: first with all local observations included (35 boreholes), and second for the jackknife cross-validation, where the observations were limited to boreholes recorded before 2010 (10 boreholes). The results show that a minimum could be obtained independently of the dataset included, but the minimum mean absolute error varied between 10 and 11.5 m with all data included, and between 2 and 3 m for boreholes recorded before 2010 (Figure 6). The optimal load parameter varied between 4.0 × 10−5 and 4.5 × 10−5 with all data included and between 4.5 × 10−5 and 6 × 10−5 for boreholes recorded before 2010.
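The incremental-stepping inversion can be sketched as a one-dimensional grid search. Here `solve_trend` stands in for a Poisson solver that returns a gridded trend surface for a given load parameter; it is a hypothetical callable, not part of the study's code:

```python
import numpy as np

def calibrate_load(f_values, solve_trend, obs_cells, obs_depth):
    """One-dimensional grid search over the constant load parameter f in
    Poisson's equation: pick the f whose trend surface minimizes the mean
    absolute error against the borehole observations.
    obs_cells: list of (row, col) grid indices of the boreholes."""
    best_f, best_mae = None, np.inf
    for f in f_values:
        trend = solve_trend(f)
        pred = np.array([trend[i, j] for i, j in obs_cells])
        err = np.mean(np.abs(pred - np.asarray(obs_depth, float)))
        if err < best_mae:
            best_f, best_mae = f, err
    return best_f, best_mae
```

Leaving one borehole out of `obs_cells` and repeating the search reproduces the sensitivity analysis of the optimal load parameter described above.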
All available point observations of sediment thickness within the study area were used for the primary cross-validation analysis. This cross-validation was performed by successively leaving one observation out and using the remaining boreholes for conditional modelling of the cdf at the location of the left-out borehole. For ordinary kriging, only observations of sediment thickness were used for conditional modelling, while for cokriging and bedrock kriging, the secondary variable L was also used at the location of the left-out borehole. The DQM was used to identify grid locations where bedrock was exposed to the atmosphere or covered by deposits (Figure 7).
The primary cross-validation results in Figure 8 show that, in general, observations smaller than the lumped mean sediment thickness were overestimated and observations larger than the lumped mean were underestimated. Most of the estimates were within about one standard deviation of the lumped mean observation. The cokriging and bedrock-kriging estimates gave slightly better results, but the best result was obtained from the Poisson-kriging estimates (i.e. kriging on the residuals from Poisson's equation, Equation (7)). The optimal solution of Poisson's equation alone (Equations (5) and (6)) gave significant deviations from the observations (black squares in Figure 8).
A summary of the cross-validation results by leaving one observation out is given in Table 2. Ordinary kriging gave a mean absolute error of 6.51 m for normal score transformed data and 7.46 m for the lognormal transform. The mean absolute errors for cokriging and bedrock kriging were 6.47 and 7.26 m, respectively, 11.30 m for inverse modelling of Poisson's equation, and 2.15 m for kriging on the residuals from Poisson's equation (Poisson kriging). The least accuracy was obtained for estimates on log-transformed variables. The Poisson-kriging estimates had the highest accuracy, with 86% of the cross-validated observations within the predefined percentiles. The precision was somewhat lower, but the precision is calculated only for the accurate estimates, and it was 61% for the Poisson-kriging estimates.
Mean absolute error (Equation (9)); accuracy (Equation (10)); precision (Equation (11)); probability score (Equation (12)); continuous ranked probability score (Gneiting & Raftery 2004); correlation coefficient between observations and estimates; slope coefficient estimated by linear regression.
The mean probability score for the cross-validated observations was close to 0.5 for all methods, but this criterion must be judged together with its standard deviation, which was smallest for the Poisson-kriging estimates. The mean CRPS values show a similar pattern, with the smallest values for the Poisson-kriging estimates. The standard deviation of the CRPS values was also smallest for the Poisson-kriging estimates.
Even though a simple linear regression is not a formal evaluation method, the slope coefficients in Table 2 show an average underestimation, with slopes between 0.23 and 0.31 for the ordinary kriging and cokriging methods and 0.37 for bedrock kriging. For the Poisson-kriging method, the slope was 0.89, i.e. the regression model gave on average estimates that were 11% less than the observations. The correlation coefficients R in Table 2 cannot be interpreted as a ‘success factor’, but they indicate the deviation between a linear regression line and the estimates. In this case, the Poisson-kriging method had estimates closest to the regression line (R = 0.99).
Conditional cdfs were calculated in all grid points within the study area for jackknife cross-validation. These cdfs were obtained by using the parameters listed in Table 1. Poisson's equation was solved by using the optimal load parameter of 5.5 × 10−5, where all boreholes recorded before 2010 were used for inverse modelling (Figure 6(b)). All estimates were conditioned on local observations of sediment thickness from boreholes in the study area recorded before 2010, with the horizontal distance to the nearest bedrock outcrop, L, as secondary information (Figure 7). The results of the estimated cdfs are presented in terms of percentiles given as fractional values.
Before presenting the jackknife cross-validation results, some characteristic features in Figure 9 should be noted. Figure 9(a) and 9(c) show the local median of the two variables. The estimates shown in Figure 9(e) were dominated by the recorded sediment depth close to the boreholes, while outside the range of the boreholes, the estimates were dominated by L. This effect can be seen in the southwestern part of the image, where L was large and where the distances to the nearest boreholes were larger than the range (Figure 9(e)). The results also showed the spatial structure of the secondary variable L, but less prominently than the estimates in areas distant from the borehole observations (Figure 9(c)). This can be seen in the west–east cross-section that goes through the two boreholes located south of the study area (Figure 10). In general, the estimation uncertainty differed between the methods, with greater estimation variances for locations close to the bedrock outcrops but distant from the nearest observation of D. A V-shaped valley profile was evident in the kriging estimates without a trend, while the Poisson-kriging estimates had the characteristic U-shape, as expected (Figure 9(f)).
Figure 10 clearly shows the differences in the percentile distribution around the expected median value. The ordinary, co-, and bedrock-kriging estimates had non-symmetric percentiles around the median values due to the non-Gaussian distribution of the sediment thickness. The Poisson-kriging percentiles, on the other hand, were symmetric around the median value because these estimates were derived as residuals from the local trend. The cross-sections also clearly illustrate that the Poisson-kriging results had the smallest estimation variance.
Sedimentary wells and boreholes recorded after 2010 were used for independent cross-validation (Figures 2 and 4). The jackknife cross-validation results show, in general, that small sediment thicknesses were overestimated and large sediment thicknesses were underestimated (Figure 11). It should be noted that there were only minor differences between the parabolic surface obtained by Poisson's equation and the Poisson-kriging estimates in this case.
Cumulative histograms of probability scores (Equation (12)) summarize the results (Figure 12). Perfect estimates yield a probability score of 0.5 with an estimation variance equal to zero, which corresponds to the vertical bold dashed line in Figure 12. Cross-validation results that overestimated the observations had probability scores below 0.5, and vice versa. In the cross-validation by leaving one out, between 20% and 30% of the observations fell within the predefined percentile interval for the ordinary, co-, and bedrock-kriging estimates. The Poisson-kriging estimates had 86% of the observations within the interval (corresponding to the accuracy of 0.86 in Table 2). In the jackknife cross-validation, this number was reduced to 80% for the Poisson-kriging estimates and to about 25% for the other estimates (i.e. 35% overestimations and 40% underestimations).
The cross-validation results demonstrate that the assumption of second-order stationarity of sediment thickness or bedrock topography is not valid. This conclusion was supported by the relatively low performance of the kriging cases without a trend (the ordinary, co-, and bedrock-kriging results in Table 2). We suggested applying Poisson's equation to estimate the trend in sediment thickness. For this purpose, we used bedrock outcrops as boundary conditions. Ordinary kriging on the residuals between the observations and the parabolic surface obtained from Poisson's equation (labelled Poisson kriging in Table 2) reduced the estimation uncertainty. All evaluation criteria applied in this study were consistent with this result. Hence, the method was superior to the other methods.
In this study, we explored the use of public data to estimate the bedrock topography as a continuous function in space. Previous research has underlined the importance of the bedrock topography in rainfall-runoff modelling (Genereux et al. 1993a, 1993b; Hinton et al. 1993; Mulholland 1993; Freer et al. 2002; Jencso et al. 2009; Graham & McDonnell 2010; Graham et al. 2010; Gabrielli et al. 2012).
For the runoff response, the relation between sediment thickness and bedrock topography is most relevant in areas with sparse sediment cover and low bedrock permeability. Focusing of flow and threshold effects due to undulations in the bedrock topography may cause non-linear hysteresis in rainfall-runoff relations (Tromp-van Meerveld & McDonnell 2006a; Graham & McDonnell 2010; Graham et al. 2010; Gabrielli et al. 2012). To represent such non-linear effects in hydrological models, estimates of sediment thickness and bedrock topography need to be included as auxiliary functions.
We modelled the estimation uncertainty of the bedrock topography (Equation (2)) by using input data from the GRANADA database and digital information from the local study area (Øvre Eiker area, Figure 2). The purpose was to minimize the uncertainty by using DQMs, DEMs, and point observations of sediment thickness D. The methods we applied rely on multi-Gaussian statistics, which is a simplification of reality. We also assumed that the information in the DQM could be reduced to a spatial indicator function that distinguishes areas of exposed bedrock (D = 0 m) from areas covered by deposits (D > 0 m). Below, we briefly discuss some implications of the statistical and geological simplifications we made, but first it is necessary to comment on some general problems of using public data sources, like the GRANADA database, for geostatistical modelling.
Clustering and bias of empirical data
Like most geo- and environmental databases, the GRANADA database contains clustered observations. Spatial data are usually sampled for a specific purpose, which implies a large number of observations in a few limited areas, while most of the sampling domain is sampled less frequently. In this case, boreholes and sedimentary wells were drilled in populated locations, which means that there is a high frequency of recordings in some places and a low frequency in others. Uneven sampling, known as clustering, ought to be corrected before statistical inference is undertaken. In this project, declustering was done by the moving grid method (Deutsch 1989; Deutsch & Journel 1998). Results from declustering of the sediment thickness D recorded in the GRANADA database show that small values of D are more clustered than large values. Small sediment thicknesses should, therefore, receive smaller weights than large sediment thicknesses.
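The core idea of cell declustering in the spirit of Deutsch (1989) can be sketched as follows. This is a simplified, single-cell-size version (the full moving grid method scans several cell sizes and grid origins and picks the most informative one); the coordinates below are hypothetical.

```python
def cell_decluster(xs, ys, cell_size):
    """Cell-declustering weights: each datum is weighted inversely to
    the number of data sharing its grid cell, so clustered observations
    are down-weighted.  Weights are normalized to sum to 1."""
    cells = [(int(x // cell_size), int(y // cell_size)) for x, y in zip(xs, ys)]
    counts = {}
    for c in cells:
        counts[c] = counts.get(c, 0) + 1
    raw = [1.0 / counts[c] for c in cells]   # 1 / (data in own cell)
    total = sum(raw)
    return [r / total for r in raw]

# Three clustered points share one cell; the fourth point is isolated:
w = cell_decluster([0.1, 0.2, 0.3, 5.0], [0.1, 0.2, 0.1, 5.0], cell_size=1.0)
```

The three clustered points each get one third of the weight of the isolated point, which is exactly the behaviour needed when, as here, small sediment thicknesses are over-represented in densely sampled areas.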
In addition to simple clustering, the GRANADA database most likely has another bias effect. The primary purpose of the GRANADA database was groundwater management, not sampling of sediment thicknesses. Because boreholes and sedimentary wells are usually drilled for economic reasons, locations with large sediment thickness are usually avoided due to the casing costs. Thus, boreholes may be abandoned for economic reasons if the sediment thickness is too large. The opposite is true for sedimentary wells. They are usually drilled for domestic water supply, which implies a desire for high water extraction capacity. The water flux into the well is governed by the hydraulic conductivity and the thickness of the water-conductive layer. Preferred locations for such wells are, therefore, areas with large sediment thicknesses. Thus, there are different preferences with respect to the sediment thickness D for boreholes and sedimentary wells. The recordings of D are, therefore, prone to bias, with an over-representation of small D for boreholes and vice versa for sedimentary wells. These preferences also have an impact on the location of boreholes and sedimentary wells with respect to the horizontal distance to the nearest bedrock outcrop L, and the preferred Quaternary deposit. These effects are illustrated in the dataset from Øvre Eiker (Figure 2). Only one of the boreholes was drilled at a location where L exceeded 1 km, and 11 of the 25 boreholes had L less than 100 m (Figure 4). Most of the sedimentary wells were also located quite close to a bedrock outcrop (L less than 100 m), but the sedimentary wells were located where D was expected to be large. All sedimentary wells within the study area were drilled in fluvial deposits. The nine clustered wells belonged to the same waterworks. The information attached to the well recordings indicates that these wells were located after a seismic survey, which means that a larger area was scanned and the most useful locations were selected.
Since the water pumping capacity is proportional to the product of the permeability and the thickness of the sediment, the preferred locations had large sediment thicknesses. Thus, if the D–L relation is going to be utilized for modelling of the bedrock topography, data sources in addition to the GRANADA data should be considered. The GRANADA recordings are still useful for local conditioning, and because more and more boreholes and wells are registered, the value of the database will increase.
Cross-validation of observations from the study area confirmed that kriging on the residuals from Poisson's equation gave the smallest mean absolute error and the best accuracy and precision. The estimation uncertainty was reduced by including locations of the bedrock outcrops, either as the boundary condition in Poisson's equation or as a secondary function in cokriging or bedrock kriging. The uncertainties were still significant, as illustrated in Figures 8 and 11. This is not unexpected, because the magnitude of the small-scale variance (the nugget) was large compared with the total variance (the sill). This fact explains the minor differences between a simple (regression) average and the kriging results. The large nugget also explains the large spread in the percentile values at the points of observations. If the small-scale variance had been zero (no nugget effect), all percentile values would have been equal to the observed value (Figure 10).
Bedrock kriging and cokriging rely on the relation between the sediment thickness D and the horizontal distance to the nearest outcrop L. The observations from the study area, however, show a large variance in the D–L relation (Figure 4). This explains the large uncertainties in the bedrock kriging and cokriging estimation results. These uncertainties were confirmed by the jackknife cross-validation exercise. It should also be noted that the D–L relation in the national GRANADA database had a similarly high variability. The declustered national D/L ratio (i.e. corrected for clustering effects) had a median value of 4.7 [1.2, 18.0] m/km (0.25 and 0.75 percentile values in square brackets, Figure 13). In comparison, data from the study area Øvre Eiker had a median of 30 m/km, which corresponds to the 0.8 percentile value of the national dataset. This indicates that local estimates should be utilized instead of the large-scale national percentiles. The histogram of D from the national GRANADA dataset shows that, at a horizontal distance of 1 km to the bedrock outcrop, 50% of the boreholes had a sediment thickness below 4.7 m (Figure 13). The non-declustered median value of the same data material was 6.7 m, which shows that there is a clustering of small values in the GRANADA database. The declustered data show that 25% of the wells at L = 1 km had a sediment thickness above 18.0 m, and 10% above 60.8 m. The corresponding values for the raw (not declustered) data were 20.2 and 54.7 m. For Øvre Eiker, the median value was 30.7 m/km. This shows that local conditioning should be used for modelling of the bedrock topography if L is used as a secondary variable, which was the motivation for using cokriging and bedrock kriging in the present study. Even though the variance in the D–L relation is high, the observations shown in Figure 4 indicate a physical relation: where L is large, small D does not occur, and vice versa.
The cumulative histograms of the probability score (Equation (12)) indicate that the observed sediment thickness was not estimated well by ordinary kriging, bedrock kriging, or cokriging (Figure 12). Poisson kriging, however, gave better results. The main reason is that Poisson's equation utilizes more of the available information than the other kriging methods. In bedrock kriging and cokriging, only information from a single point was utilized for estimation (i.e. the distance from the estimation point to the nearest outcrop), while in Poisson kriging, the total geometry of the outcrops was included. This ability of Poisson's equation is due to the fact that the boundary condition is incorporated in the solution. Another reason for using Poisson's equation is that only one parameter – the load factor in Equation (4) – needs to be optimized to find a parabolic function that can be adapted to the variable of interest. In this context, we regard the load factor as a stationary stochastic function, which is equivalent to the assumption of 'sameness' referred to in the ANN application (Clarke et al. 2009; Mey et al. 2015). In this case, the variable of interest is the result of glacial and fluvial processes, which have been modelled by others as diffusive processes. Thus, the parabolic functions from Poisson's equation are intuitively convenient to apply.
A final warning might be added on the application of Poisson's equation: if the load factor is given as a deterministic function, and the point observations are used as boundary conditions in the same way as the outcrop locations, it is possible to fit a parabolic surface that passes perfectly through all the point observations. This is bad modelling practice because the uncertainty in the observations is ignored. Thus, we re-emphasize that instead of striving for a 'perfect' fit, it is better modelling practice to appreciate the uncertainty because, in most cases, the uncertainty is equally (or more) important than the best fit.
Public data are always affected by uncertainties. The areal information of DQMs is not always consistent with independent point information. In at least one case, the bedrock surface was registered 7 m below the ground surface in the GRANADA database, while according to the DQM, the borehole was located on exposed bedrock. Thus, geographical uncertainties should be included as part of the estimation procedure and not hidden in the results.
Modelling of the bedrock topography was done first by conditioning on normal score transformed data, and second by using the lognormal transform. The normal score transform ranks the observations from minimum to maximum and finds the corresponding entries in a standard normal pdf (Deutsch & Journel 1998). If the empirical dataset does not cover extreme values, the normal score transform relies on extrapolation. Hence, the normal score transform is sensitive to the extreme values in the empirical data. The lognormal transform is by definition independent of the empirical data and should, therefore, yield more robust results with regard to extreme values. The estimation variance, however, usually becomes too large because the expected value is part of the variance (Supplementary Appendix, Equations (A13) and (A14)). This drawback explains why the lognormal modelling results had low precision. In the present case study, the modelling results based on the normal score transform did not differ much from the lognormal results. One reason is that even though the univariate pdf is by definition perfectly Gaussian, the same is not automatically true for the bivariate pdf. This problem is visualized in Figure 14, where the bivariate normal score transform of D and L is compared with a synthetic bivariate Gaussian variable with the covariance structure given in Table 1 for a separation distance of 0 m. As can be seen, the Gaussian character is roughly preserved, but the bivariate pdf is noisy.
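The rank-based normal score transform described above can be sketched with the Python standard library alone. This is a minimal version: it uses the common (rank − 0.5)/n plotting position and breaks ties by input order, whereas production implementations (e.g. GSLIB) offer more careful tie handling and despiking.

```python
from statistics import NormalDist

def normal_score(values):
    """Normal score transform: map each observation to the standard
    normal quantile of its empirical cumulative frequency."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r + 1                      # ranks 1..n
    nd = NormalDist()                         # standard normal, mean 0, sd 1
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]

# A strongly skewed sample (e.g. sediment thicknesses in metres)
# becomes symmetric around zero after the transform:
z = normal_score([0.5, 2.0, 7.0, 30.0])
```

Note that the transform only guarantees a Gaussian univariate distribution; as discussed above, the bivariate distribution of two transformed variables need not be bi-Gaussian.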
It should be underlined that the percentiles of the sediment thickness and the bedrock topography yield a smoothed version of reality (Figure 10). Thus, the percentiles do not give a realistic picture of the bedrock topography itself. The percentiles are, however, used to visualize the estimation uncertainty of the bedrock topography. If the physical structure of the bedrock topography or the sediment thickness plays an important role, then equally probable realizations should be generated. This subject was beyond the scope of the present paper and is, therefore, left for further studies.
Multi-Gaussian methods maximize entropy, which implies that the connectivity of extreme values is underestimated (Gómez-Hernández & Wen 1998). Subsurface canyons, for example, which are extremes with respect to topography, will not be captured well by multi-Gaussian modelling. This is a drawback in areas with glacial erosion. Glacial flow may give rise to significant differences in topography over short distances perpendicular to the flow direction, and the longitudinal extent of the valleys may be significant. In geological terms, this means that steep and narrow valleys cannot be simulated unconditionally by multi-Gaussian methods.
Geological maps hold vast amounts of information that might be utilized to estimate the bedrock topography. In this study, however, the only information we used was the indicator function that marks the locations where the bedrock is exposed to the atmosphere. Another relevant question is the relation between the geological setting (i.e. the sediment type) and the bedrock topography. It is common knowledge that the glacial history of the area explains the main character of the landscape. The weight of the glacier suppressed the continental crust, and the isostatic rebound after the melting explains why marine deposits cover the bedrock in the lower part of the study area. Glacial transgression and retreat explain the deposits of the area, but surface mapping of Quaternary deposits does not usually include the thickness of the sediments. Some sediment types (in specific areas) usually have significant thickness (like marine sediments), while other kinds of deposits are usually patchy and sparse (like colluvial or weathered material). Thus, a relevant project for further research is to match the GRANADA recordings with DQMs to find statistical relations between sediment category and sediment thickness. This kind of information might be utilized for future (soft) conditioning of the bedrock topography. For sediment category and sediment thickness D within the study area, it can be noted that boreholes located in fluvial deposits have the largest sediment thickness D. If there exists a statistically significant relation between sediment category and D, then modelling of D might be improved by using sediment category as an auxiliary function.
With this paper, we wanted to draw attention toward modelling of sediment thickness and the associated bedrock topography (Equation (2)). We explored the opportunities of combining information from the national well database (GRANADA), DQMs, and DEMs to reduce the estimation uncertainty.
Several MATLAB scripts were tailored to process the dataset for this study. This included declustering of the national data, normal score transformation, semivariogram calculations, and routines for inverse modelling of Poisson's equation. In this case, the small-scale variance (the nugget) was large compared with the total variance (the sill), which explains why the mean absolute error and the estimation variance were large. The large-scale national semivariogram model for the sediment thickness showed a correlation length of about 2 km, and the cross-semivariogram between the sediment thickness and the horizontal distance to the bedrock outcrop had a correlation length of more than 5.5 km (Table 1). These correlation lengths indicate that estimation of the bedrock topography or the sediment thickness can capitalize on observations in the GRANADA database if the distance to the nearest GRANADA recording is less than 2–5.5 km, depending on the estimation procedure.
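The semivariogram calculation mentioned above (done in MATLAB in the study) follows the classical Matheron estimator: for each lag bin, half the mean squared difference over all observation pairs whose separation falls in that bin. A minimal Python sketch with hypothetical coordinates and values:

```python
import itertools
import math

def empirical_semivariogram(points, values, lag, n_lags):
    """Matheron estimator of the empirical semivariogram with
    equal-width lag bins of size `lag`."""
    sums = [0.0] * n_lags
    counts = [0] * n_lags
    for i, j in itertools.combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        h = math.hypot(x1 - x2, y1 - y2)      # separation distance
        k = int(h // lag)                     # lag bin index
        if k < n_lags:
            sums[k] += 0.5 * (values[i] - values[j]) ** 2
            counts[k] += 1
    return [s / c if c else float('nan') for s, c in zip(sums, counts)]

gamma = empirical_semivariogram(
    [(0, 0), (1, 0), (2, 0), (3, 0)], [1.0, 2.0, 4.0, 8.0],
    lag=1.5, n_lags=2)
```

A model (e.g. spherical or exponential, with nugget and sill) is then fitted to the binned values; the lag at which the model levels off is the correlation length reported in Table 1.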
Poisson's equation was used to estimate the local trend in the sediment thickness. This was done by inverse modelling of the load parameter in Poisson's equation (equivalent to the sink/source term in groundwater modelling). By this approach, we fitted a parabolic function to the bedrock topography, which minimized the differences between the point observations and the numerical solutions of Poisson's equation. At this stage, we solved Poisson's equation using Dirichlet boundary conditions and the sediment thickness as the primary variable. Identical results can be obtained by using the terrain elevation as the Dirichlet boundary instead of the sediment thickness. Alternatively, a Neumann condition might be applied by using the gradient of the exposed bedrock topography, but this approach was not evaluated in the current study.
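The inverse modelling step is simple because, with homogeneous Dirichlet conditions, the solution is linear in the load factor, so the least-squares fit has a closed form. The sketch below solves a 1-D analogue (the study solved the 2-D problem in MATLAB); the grid, observation nodes, and observed thicknesses are all made up for illustration.

```python
def solve_unit_load(n, dx):
    """Solve -D'' = 1 on a 1-D grid with D = 0 at both ends (bedrock
    outcrops as Dirichlet boundaries), using the Thomas algorithm."""
    m = n - 2                                  # interior nodes
    a, b, c = [-1.0] * m, [2.0] * m, [-1.0] * m
    d = [dx * dx] * m
    for i in range(1, m):                      # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):             # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]

n, dx = 11, 1.0                                # 10 m section, 1 m grid
base = solve_unit_load(n, dx)                  # parabola for load q = 1
obs = {3: 9.8, 7: 10.1}                        # node -> observed D (m), hypothetical
# D(q) = q * base, so the least-squares load factor is closed-form:
q = (sum(base[i] * d for i, d in obs.items())
     / sum(base[i] ** 2 for i in obs))
trend = [q * v for v in base]                  # fitted parabolic trend
residuals = {i: d - trend[i] for i, d in obs.items()}  # input to kriging
```

The residuals, not the observations themselves, are then passed to ordinary kriging, which is the Poisson kriging procedure evaluated above. With several independent load patches the fit becomes a small linear least-squares problem instead of a single ratio.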
Local trend analysis improves the mean absolute error, but the independent (jackknife) cross-validation shows that the estimation uncertainty is still large. Because the number of recorded boreholes, wells, and geotechnical test drillings increases every year, the conditional estimation uncertainties will be reduced over time. Point databases (such as GRANADA and/or NADAG (NGU 2020d)) in combination with line and/or areal estimates (e.g. from geophysical surveys) will provide plentiful information in coming years. Thus, an important task for the future is to develop simple and robust estimation procedures that can benefit from the continuously increasing amount of information.
The main part of the work was done by N.O.K. with significant contributions from É.L.
The authors declare no conflict of interests.
Thanks to the Geological Survey of Norway (NGU) for providing the dataset and to the Norwegian Institute of Bioeconomy Research (NIBIO) for supporting the data harvest. Thanks to anonymous and non-anonymous reviewers for critical and constructive comments, and finally big thanks to Camille Jouin who prepared the data.
DATA AVAILABILITY STATEMENT
All relevant data are available from the Geological Survey of Norway (geo.ngu.no/).