Inferred rainfall sequences generated by a novel method of inverting a continuous time transfer function show a smoothed profile when compared to the observed rainfall; however, streamflow generated using the inferred catchment rainfall is almost identical to observed streamflow (*R _{t}^{2}* > 97%). This paper compares the effective rainfall inferred by the regularised inversion process proposed by the authors (termed inferred effective rainfall, IER) with effective rainfall derived from the observed catchment rainfall (termed observed effective rainfall, OER) in both the time and frequency domains, in order to confirm that, by using the dominant catchment dynamics in the inversion process, the main characteristics of catchment rainfall are being captured by the IER estimates. Estimates of the resolution of the IER are found in the time domain by comparison with aggregated sequences of OER, and in the frequency domain by comparing the amplitude spectra of the OER and IER. The temporal resolution of the rainfall estimates is affected by the slow time constant of the catchment, reflecting the presence of slow hydrological pathways (for example, aquifers), and by the rainfall regime (for example, dominance of convective or frontal rainfall). It is also affected by the goodness-of-fit of the original forward rainfall–streamflow model.

## INTRODUCTION

Rainfall is the key driver of catchment processes and is usually the main input to rainfall–streamflow models. If the rainfall and/or streamflow data used to identify or calibrate a model are wrong or disinformative, the model will be wrong and cannot be used to predict the future with any certainty. Blöschl *et al.* (2013) state that if the dominant pathways, storage and time-scales of a catchment are well defined then a model should potentially reproduce the catchment dynamics under a range of conditions. It is often the case that hydrological variables, such as rainfall and streamflow, are measured at hourly or sub-hourly intervals and then aggregated to a coarser resolution before being used as input to rainfall–streamflow models, resulting in the loss of information about the finer detail of the catchment processes (Littlewood & Croke 2008, 2013; Littlewood *et al.* 2010). Kretzschmar *et al.* (2014) have proposed a method for inferring catchment rainfall from sub-hourly streamflow data. The resulting rainfall record is smoothed to a coarser resolution than the original data but should still retain the most pertinent information.

This paper investigates the implications of the reduced resolution and the potential loss of information introduced by the regularisation process in both the time and frequency domains. Both temporal and spatial aggregation are incorporated in the transfer function model; however, only the temporal aspect is considered here. The effect of spatial rainfall distribution using sub-catchments will be the subject of a future publication.

The method developed and tested by Kretzschmar *et al.* (2014) – termed the RegDer method – inverts a continuous time transfer function (CT-TF) model using a regularised derivative technique to infer catchment effective rainfall from streamflow, with the aim of improving estimates of catchment rainfall. The argument is that a model that is well-fitting and invertible is likely to be robust in terms of replicating the catchment system. In the context of this study, observed catchment rainfall (which may be derived from one or more rain-gauges by any suitable method, e.g., Thiessen polygons) is converted to observed effective rainfall (OER) by a non-linear transform designed to render the relationship between the rainfall input and streamflow output (via a CT-TF) linear. The inversion process takes the catchment streamflow and, using regularisation, infers effective rainfall (IER), which is then converted to inferred catchment rainfall (ICR) by the reverse of the non-linear transform.

The effective rainfall (both OER and IER) may be termed scaled rainfall (related to Andrews *et al.* (2010)) as it is derived from the overall catchment rainfall.

The classical approach to inverse (as opposed to reverse) modelling involves the estimation of non-linearity (rainfall or baseflow separation) and the unit hydrograph (UH), which is an approximation to the impulse response of the catchment. Boorman (1989) and Chapman (1996) use sets of event hydrographs to estimate the catchment UH. Boorman (1989) superimposed event data before applying a separation technique and concluded that the data required may be more coarsely sampled than might be expected because one rain-gauge is unlikely to be representative of the whole catchment. Chapman (1996) used an iterative procedure to infer rainfall patterns for individual events before applying baseflow separation. The resultant UHs had higher peaks and shorter rise times and durations than those obtained by conventional methods. He viewed the effective rainfall as the output from a non-linear store. Duband *et al.* (1993) and Olivera & Maidment (1999) used deconvolution to identify mean catchment effective rainfall, which was redistributed using relative runoff coefficients while Young & Beven (1994) based a method for inferring effective rainfall patterns on the identification of a linear transfer function. In that study, a gain parameter varying with time accounted for the non-linearity in the relationship between rainfall and streamflow.

In recent years, a range of different approaches has been used to explore reverse modelling in hydrology, that is, estimating effective rainfall from streamflow. Notable publications include Croke (2006, 2010), Kirchner (2009), Andrews *et al.* (2010), Young & Sumisławska (2012), Brocca *et al.* (2013, 2014) and Kretzschmar *et al.* (2014). Kirchner's method links rainfall, evapotranspiration and streamflow through a sensitivity function, making assumptions which allow rainfall to be inferred from the catchment streamflow. The method has been applied by Teuling *et al.* (2010) and Krier *et al.* (2012) to catchments in Switzerland and Luxembourg and has been found to work for catchments with simple storage–streamflow relationships and limited hysteresis. Brocca *et al.* (2013) employed a similar method based on the water balance equation but inferred the rainfall series from soil moisture. In a further study, Brocca *et al.* (2014) used satellite-derived soil moisture to infer global rainfall estimates. Rusjan & Mikoš (2015) applied Kirchner's simple dynamic system concept to a catchment in south-west Slovenia characterised most of the time by subsurface storage but showing a response that by-passed this storage after intense rainfall. They combined two separate sensitivity functions to enable the simulation of a range of contrasting hydrological conditions. Croke (2006) derived an event-based UH from streamflow alone but his approach was limited to ephemeral quick-flow-dominant catchments, while Andrews *et al.* (2010) and Young & Sumisławska (2012) used a discrete model formulation inverted directly or via a feedback model (which could be adapted to CT formulation). Croke (2010) explored a similar approach to the one presented in Andrews *et al.* (2010) for several catchments, in the context of slow flow, recharge and quickflow separation, relating the derived general model to existing ones (such as IHACRES). He also included measures to constrain the rainfall estimate uncertainty. The flow components are estimated as individual discrete-time transfer functions separated using a relaxation procedure. The equivalent effective rainfall estimate is then obtained as a form of inverse discrete-time transfer function with the separated flow components as inputs. The approach proposed by Kretzschmar *et al.* (2014) combined a CT-TF model with regularised derivative estimates to infer the catchment rainfall from sub-hourly streamflow data, including comparisons to the direct inverse of a discrete transfer function model, similar to those used by Croke (2010) and Andrews *et al.* (2010).

Littlewood (2007) applied the IHACRES model (e.g., Jakeman *et al.* 1990) to the River Wye gauged at Cefn Brwyn showing that the values for the model parameters for that catchment changed substantially as the data time-step used for model calibration decreased. Littlewood & Croke (2008) extended this work to include a second catchment and found that as the time-step decreased the parameter values approached an asymptotic level (on a semi-log plot) concluding that, at small enough time-steps, parameters become independent of the sampling interval. They suggested further investigation using data-based mechanistic (DBM) modelling methods as described by Young & Romanowicz (2004) and Young & Garnier (2006) for estimating CT models from discrete input data. Such models generate parameter values independent of the input sampling rate – as long as the sampling rate is sufficiently high in comparison to the dominant dynamics of the system. Advantages of using the CT formulation include allowing a much larger range of system dynamics to be modelled, e.g., ‘stiff’ systems that have a wide range of time constants (TC), typical of many hydrological systems. The outputs from such a model can be sampled at any time-step, including non-integer, and the parameters have a direct physical interpretation (Young 2010).

Krajewski *et al.* (1991) compared the results from a semi-distributed model and a lumped model and concluded that catchment response is more sensitive to rainfall resolution in time than space, while a study by Holman-Dodds *et al.* (1999) demonstrated that models calibrated using a smoothed rainfall signal (due to coarse sampling) may result in underestimation of streamflow. Further calibration, required to compensate, leads to the loss of physical meaning of parameters. They also concluded that parameters estimated at one sampling interval were not transferable to other intervals; a conclusion echoed by Littlewood (2007) and Littlewood & Croke (2008).

Studies by Clark & Kavetski (2010) showed that, in some cases, numerical errors due to the time-step are larger than model structural errors and can even balance them out to produce good results. The follow-up study by Kavetski & Clark (2010) examined the impact of time-step choice on sensitivity analysis, parameter optimisation and Monte Carlo uncertainty analysis. They concluded that use of an inappropriate time-step can lead to erroneous and inconsistent estimates of model parameters and obscure the identification of hydrological processes and catchment behaviour. Littlewood & Croke (2013) found that a discrete model using daily data overestimated the TCs for the River Wye gauged at Cefn Brwyn when compared to those estimated from hourly data, confirming that parameter values were dependent on the time-step. They discussed the loss of information due to the effect of time-step on time constants and suggested that plots of parameter values against time-step could be used as a model assessment tool. In an earlier study, Littlewood & Croke (2008) compared the sensitivity of parameters for two catchments with respect to time-step and discussed the role of time-step dependency in the reduction of uncertainty. They also suggested CT-TF modelling using sub-hourly data to derive sampling-rate-independent parameter values. Littlewood *et al.* (2010) introduced the concept of the Nyquist–Shannon (N–S) sampling theorem, which defines the upper bound on the size of sampling interval required to identify the CT signal without aliasing, and consequently its effect on the frequency of sampling required to specify a rainfall–streamflow model. Given a frequent enough sampling rate, the CT model is time-step independent and can be interpreted at any interval.

Further understanding may be gained by transforming rainfall and streamflow series from the time domain to the frequency domain and using spectral analysis. Several potential uses of spectral analysis in hydrology have been explored including modelling ungauged catchments, modelling karst systems and seasonal adjustment of hydrological data series. A maximum likelihood method for model calibration based on the spectral density function (SDF) has been suggested by Montanari & Toth (2007). The SDF can be inferred from sparse historic records in the absence of other suitable data making it a potentially useful tool for modelling ungauged catchments. They also suggest that spectral analysis may provide a means of choosing between different apparently behavioural models. Cuchi *et al.* (2014) used ‘black box’ modelling and frequency analysis to study the behaviour of a karst system (located at Fuenmajor, Huesca, Spain). They concluded that the method works well for a linear system and that Fuenmajor has a linear hydrological response to rainfall at all except high frequencies. They suggest that the non-linearity issues might be addressed using appropriate techniques such as wavelets or neural networks. Szolgayová *et al.* (2014) utilised wavelets to deseasonalise a hydrological time-series and suggested that the technique had potential for modelling series showing long-term dependency (interpreted as containing low frequency components).

The method introduced by Kretzschmar *et al.* (2014) showed that, provided the rainfall–streamflow model captures the dynamics of the catchment system, the high frequency detail of the rainfall distribution is not necessary for the prediction of streamflow due to the damping (or low-pass filter) effect of the catchment response. The numerical properties of the regularisation as applied to the inversion process place a mathematical constraint of smoothness on the inferred rainfall time-series, balanced against a loss of some temporal resolution. The degree of regularisation, and therefore smoothing, is controlled through the Noise Variance Ratio (*NVR*), optimised as part of the process, and regularisation is only applied when necessary, i.e., when the analytically inverted catchment transfer function model is improper (has a numerator order higher than the denominator order).

## APPLICATION CATCHMENTS

RegDer has been tested on two headwater catchments with widely differing rainfall and response characteristics – Baru in humid, tropical Borneo and Blind Beck, in humid temperate UK.

### Baru – tropical catchment

The Baru catchment (Figure 1(a)) is situated in the headwaters of the Segama river located in Sabah on the northern tip of Borneo, East Malaysia (4° 58′ N, 117° 49′ E). The climate is equatorial with a 26 year (1985–2010) mean rainfall of 2,849 mm (Walsh *et al.* 2011), showing no marked seasonality but tending to fall in short (<15 min) convective events with high spatial variability and intensities much higher than those of temperate UK (Bidin & Chappell 2003, 2006). Due to the high spatial variability, a network of six automatic rain-gauges (13.6 gauges per km^{2}) was used to derive the catchment-average rainfall using the Thiessen polygon method. Haplic alisols, typically 1.5 m in depth and with a high infiltration capacity (Chappell *et al.* 1998), are underlain by relatively impermeable mudstone bedrock, resulting in the dominance of comparatively shallow sub-surface pathways in this basin (Chappell *et al.* 2006, 2007). As a result of the high rainfall intensity and shallow water pathways, the stream response is very flashy (i.e., rapid recession in the impulse response function). Vegetation cover is lowland, evergreen dipterocarp forest, which was subject to selective logging during 1988–1989 (Greer *et al.* 1993). The data used in the analysis are from February 1996, sampled at 5 min intervals (Figure 1(b)), and have been modelled previously by Chappell *et al.* (1999) and Walsh *et al.* (2011).

### Blind Beck – temperate catchment

The Blind Beck catchment lies in the headwaters of the Eden basin in northwest England, UK (54.51° N, 2.38° W). The basin's response shows evidence of deep hydrological pathways due to the presence of deep limestone (62%) and sandstone (38%) aquifers, resulting in a damped hydrograph response (Mayes *et al.* 2006; Ockenden & Chappell 2011; Ockenden *et al.* 2014). Winter rainfall in this basin is derived from frontal systems with typically lower intensities than the convective systems in the tropics (Reynard & Stewart 1993). Data from a single tipping bucket rain-gauge (i.e., 0.1 gauges per km^{2}) located in the middle of the catchment were used in this study. The data used in the analysis cover the period from 26th December 2007 at 16:45 to 31st December 2007 at 21:45, sampled at 15 min intervals (Figure 2(b)), and were previously modelled by Ockenden & Chappell (2011) using an aggregated hourly time-step.

The choice of these two experimental catchments, therefore, allowed the initial evaluation of the estimation of catchment rainfall from streamflow for the end-member extremes of a basin with tropical convective rainfall and shallow flow pathways to a basin with temperate frontal rainfall (i.e., much lower intensity) and deep flow pathways (i.e., much greater basin damping or temporal integration).

## MODEL FORMULATION AND PHYSICAL INTERPRETATION

This study investigated the limits of inferred catchment effective rainfall estimation from streamflow. CT-TF models identified from the observed data using DBM modelling approaches (Young & Beven 1994; Young & Garnier 2006) are inverted using the RegDer method (Kretzschmar *et al.* 2014) and used to transform catchment streamflow into estimates of catchment inferred rainfall.

The non-linear transform (Equation (1)) is a power law in which *P* is the observed rainfall, *Q* the observed streamflow and α is a parameter estimated from the data; *P _{e}* is the observed effective rainfall (OER) and *Q* is used as a surrogate for catchment wetness. Both catchments used in this study proved to be predominantly linear in their behaviour, so the transformation (Equation (1)) was not used. In the initial study, a wide range of possible models was identified using algorithms from the Captain Toolbox for Matlab (Taylor *et al.* 2007). The models selected were a good fit to the data and were suitable for inversion. The Nash–Sutcliffe efficiency (NSE or *R _{t}^{2}*) is commonly used to compare the performance of hydrological models. Often, several models can be identified that fit the data well (the equifinality concept of Beven (2006)). From these, models with few parameters to be estimated that inverted well were selected. In this study, a second-order linear model was found to fit both catchments. The output from the RegDer process is an IER series to which the inverse of the power law is then applied, if necessary, to construct an ICR sequence. The process is illustrated in Figure 3.

The inversion process follows Kretzschmar *et al.* (2014). It involves transition from the transfer function catchment model:

$$Q(s) = \frac{B(s)}{A(s)}\,U(s) \qquad (2)$$

where $A(s) = \sum_{i=0}^{n} a_i s^i$ is of order *n* and $B(s)$ is of order *m* < *n*, to the direct inverse (in general non-realisable):

$$U(s) = \frac{A(s)}{B(s)}\,Q(s) \qquad (3)$$

which is then implemented using regularised streamflow derivatives in the form of:

$$\hat{U}(s) = \frac{1}{B(s)} \sum_{i=0}^{n} a_i\, \widehat{Q^{(i)}}(s) \qquad (4)$$

where $\widehat{Q^{(i)}}(s)$ is the Laplace transform of the optimised regularised estimate of the *i*th time derivative of *Q*. The regularised derivative estimates replace the higher order derivatives in Equation (3), which otherwise make Equation (3) unrealisable (improper) – this is the core of the method in Kretzschmar *et al.* (2014). In the implementation, the higher order derivatives in Equation (4) are not all estimated directly; instead, advantage is taken of the filtering with the denominator polynomial, whereby only (*n* − *m*)th order regularised derivative estimates of *Q* are required in combination with a proper transfer function.

Streamflow generated using the inferred effective rainfall is almost identical to the observed streamflow (*R _{t}^{2}* > 97%). This indicates that the catchment dynamics, as captured by the transfer function model, render the differences between observed and inferred rainfall immaterial. The reason for this becomes clear when looking at the frequency domain analysis of the inversion process shown in this paper.
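The shape of this argument can be sketched numerically. The following is a minimal illustration, not the authors' RegDer implementation: a first-order linear store stands in for the catchment transfer function, and a simple moving-average smoothing of the flow before differentiation stands in for the NVR-optimised regularised derivatives. All parameter values and pulse shapes are illustrative assumptions.

```python
import numpy as np

def simulate_store(u, tc, gain, dt):
    """Forward model: first-order linear store tc*dq/dt + q = gain*u,
    integrated with an explicit Euler scheme."""
    q = np.zeros_like(u)
    for k in range(1, len(u)):
        q[k] = q[k - 1] + dt * (gain * u[k - 1] - q[k - 1]) / tc
    return q

def invert_store(q, tc, gain, dt, win=9):
    """Inverse model: u = (q + tc*dq/dt) / gain, with the derivative taken
    on a smoothed flow series -- a crude stand-in for the regularised
    derivative estimates of the RegDer method."""
    q_smooth = np.convolve(q, np.ones(win) / win, mode='same')
    dq = np.gradient(q_smooth, dt)
    return (q_smooth + tc * dq) / gain

dt, tc, gain = 0.25, 6.3, 1.0           # 15 min step; TC as for Blind Beck's TCq
u = np.zeros(400)
u[50:60], u[200:230] = 1.0, 0.4         # two illustrative storm pulses
q = simulate_store(u, tc, gain, dt)

u_hat = invert_store(q, tc, gain, dt)   # smoothed, inferred rainfall
q_back = simulate_store(u_hat, tc, gain, dt)
nse = 1 - np.sum((q - q_back) ** 2) / np.sum((q - q.mean()) ** 2)
```

Driving the store with the smoothed `u_hat` reproduces the original flow almost exactly (high `nse`) even though `u_hat` lacks the sharp edges of `u`, mirroring the observation that the fine temporal detail of the rainfall is immaterial to the streamflow response.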

The resolution of the IER is estimated by comparing it with the OER aggregated to progressively coarser resolutions (using *R _{t}^{2}* or *R*). Two methods of aggregation have been used: (1) averaging over a range of time-steps and (2) moving average over varying time-scales. Two measures are used to assess the correspondence between the IER and the aggregated effective rainfall: (1) *R _{t}^{2}*, the coefficient of determination, and (2) *R*, the instantaneous Pearson correlation coefficient. They are given by:

$$R_t^2 = 1 - \frac{\sum_k \left( ER_k - IER_k \right)^2}{\sum_k \left( ER_k - \overline{ER} \right)^2} \qquad (5a)$$

$$R = \frac{\sum_k \left( ER_k - \overline{ER} \right)\left( IER_k - \overline{IER} \right)}{\sqrt{\sum_k \left( ER_k - \overline{ER} \right)^2 \sum_k \left( IER_k - \overline{IER} \right)^2}} \qquad (5b)$$

where *ER _{k}* indicates a value from the aggregated effective rainfall sequence with mean $\overline{ER}$ and *IER _{k}* is the corresponding value from the IER sequence with mean $\overline{IER}$. Both *R _{t}^{2}* and *R* values tend towards a maximum value as aggregation increases. The aggregation level at which the maximum is reached is identified and taken as an estimate of the resolution of the inferred effective rainfall series. This value is then compared to the system fast time constant (*TC _{q}*) and the N–S sampling limit.
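Both fit measures are straightforward to compute; the sketch below uses made-up six-sample sequences purely to illustrate the definitions.

```python
import numpy as np

def r_t_squared(obs, sim):
    """Coefficient of determination (Nash-Sutcliffe efficiency):
    1 - SSE / variance of the observed series about its mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(x, y):
    """Instantaneous Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))

er  = np.array([0.0, 1.0, 3.0, 2.0, 0.5, 0.0])   # aggregated OER (illustrative)
ier = np.array([0.1, 0.9, 2.7, 2.1, 0.6, 0.1])   # inferred ER (illustrative)
fit, corr = r_t_squared(er, ier), pearson_r(er, ier)
```

Note that *R _{t}^{2}* penalises bias and amplitude errors as well as shape errors, while *R* measures shape agreement only, which is why both are reported.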

### Continuous model formulation

The second-order CT-TF model takes the form:

$$y(t) = \frac{b_0 + b_1 s}{1 + a_1 s + a_2 s^2}\, u(t - \tau) \qquad (6)$$

where *y* is the measured streamflow at time *t*, *s* is the derivative operator, τ is the transport delay and *u* is the effective rainfall at time *t* − τ. If the denominator can be factorised and has real roots, Equation (6) can be rewritten as:

$$y(t) = \frac{b_0 + b_1 s}{(1 + TC_q s)(1 + TC_s s)}\, u(t - \tau) \qquad (7)$$

where *TC _{q}* and *TC _{s}* are the system time constants and are often significantly different – a ‘stiff’ system. Decomposing the model into a parallel form gives:

$$y(t) = \left[ \frac{g_q}{1 + TC_q s} + \frac{g_s}{1 + TC_s s} \right] u(t - \tau) \qquad (8)$$

where *g _{q}* and *TC _{q}* are the steady state gain and time constant of the fast response component and *g _{s}* and *TC _{s}* are the steady state gain and time constant of the slow response component. The steady state gain of the system as a whole is given by:

$$G = g_q + g_s \qquad (9)$$

so the fraction of the total streamflow along each pathway can be calculated from:

$$SFI = \frac{g_s}{g_q + g_s}, \qquad \frac{g_q}{g_q + g_s} = 1 - SFI \qquad (10)$$

The fraction of streamflow attributed to the slow response component is sometimes termed the slow flow index (SFI) (Littlewood *et al.* 2010). The example shown here uses a second-order model but the general principle can be extended to higher order models. Details of the inversion and regularisation processes can be found in Kretzschmar *et al.* (2014).
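The parallel decomposition and SFI described above can be computed directly from estimated transfer function coefficients. The sketch below uses illustrative numerator coefficients chosen so that the time constants match those reported for Blind Beck; it is not the Captain Toolbox implementation.

```python
import numpy as np

def parallel_decomposition(b0, b1, a1, a2):
    """Split y = (b0 + b1*s) / (1 + a1*s + a2*s^2) * u into two parallel
    first-order pathways g / (1 + TC*s); assumes real, distinct roots."""
    roots = np.roots([a2, a1, 1.0]).real   # denominator roots are s = -1/TC
    tc_q, tc_s = sorted(-1.0 / roots)      # fast (small) and slow (large) TCs
    g_q = (b1 - b0 * tc_q) / (tc_s - tc_q) # gains found by matching the
    g_s = (b0 * tc_s - b1) / (tc_s - tc_q) # numerator coefficients
    sfi = g_s / (g_q + g_s)                # slow flow index
    return tc_q, tc_s, g_q, g_s, sfi

# Illustrative coefficients built from Blind Beck's reported TCs (6.3 h, 22.1 h)
tc_q, tc_s, g_q, g_s, sfi = parallel_decomposition(
    b0=1.0, b1=14.674, a1=6.3 + 22.1, a2=6.3 * 22.1)
```

With these assumed coefficients `sfi` evaluates to about 0.47, consistent with the SFI reported for Blind Beck in Table 1, and the whole-system steady state gain is `g_q + g_s = b0`.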

### Sampling frequency

The N–S frequency gives the upper limit on the size of the sampling interval, *Δt*, that will enable the system dynamics to be represented without distortion (aliasing – Bloomfield 1976, p. 21). Aliasing occurs when a system is measured at an insufficient sampling rate to adequately define the signal from the data.
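For concreteness, the guideline intervals quoted later in Tables 1 and 2 can be reproduced as follows. The constants here are assumptions inferred from the tabulated values – an N–S limit of π·*TC _{q}*, and roughly 10 and 12 samples per bandwidth cycle for the Ljung (1999) and Young (2010) rules of thumb – not formulas stated by the authors.

```python
import math

def sampling_guidelines(tc_q):
    """Largest 'safe' sampling intervals (hours) for a system whose fastest
    time constant is tc_q. Constants are assumptions reverse-engineered
    from the values in Tables 1 and 2."""
    ns_limit = math.pi * tc_q          # Nyquist-Shannon limit
    ljung = 2 * math.pi * tc_q / 10    # ~10 samples per bandwidth cycle
    young = 2 * math.pi * tc_q / 12    # ~12 samples per bandwidth cycle
    return ns_limit, ljung, young

ns, lj, yo = sampling_guidelines(6.3)  # Blind Beck, TCq = 6.3 h
```

For Blind Beck this gives roughly 19.8, 3.96 and 3.30 h, matching the tabulated 19.9, 3.98 and 3.32 h to rounding.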

### Temporal aggregation of effective rainfall

Two methods for aggregating ER were used to estimate the time resolution of the IER. Rainfall is the total volume accumulated over the sampling interval, so the ER was aggregated over progressively longer sampling periods of 2 to 24 times the base sampling period and averaged to form a new smoothed sequence that could be compared with the IER. For comparison, aggregation was also performed via a moving average process utilising the convolution method available in Matlab. Both methods may be affected by the aggregation starting point and edge effects. The aggregated ER sequences were compared to the IER using the coefficient of determination (*R _{t}^{2}*) and the correlation (*R*). Both *R _{t}^{2}* and *R* tend towards a maximum value as aggregation increases. The aggregation time-step at which this value is established is used to estimate the resolution of the IER.
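Both aggregation schemes are simple to express. The sketch below uses NumPy rather than Matlab (the paper used Matlab's convolution routine) and an illustrative twelve-sample rainfall vector; it is a sketch of the two schemes, not the authors' code.

```python
import numpy as np

def aggregate_resample(x, m):
    """Block-average x over non-overlapping windows of m samples, then
    repeat each block value so the output keeps the original length."""
    n = (len(x) // m) * m                  # drop any incomplete final block
    blocks = x[:n].reshape(-1, m).mean(axis=1)
    return np.repeat(blocks, m)

def aggregate_moving_average(x, m):
    """Centred moving average over m samples; as noted in the text,
    values near the series edges are affected by boundary effects."""
    return np.convolve(x, np.ones(m) / m, mode='same')

rain = np.array([0, 0, 4, 8, 2, 0, 0, 1, 3, 1, 0, 0], float)  # illustrative ER
block = aggregate_resample(rain, 3)
moving = aggregate_moving_average(rain, 3)
```

Block averaging conserves total volume exactly (when the record length is a multiple of the window), whereas the moving average leaks volume at the edges – one reason both methods are tried and compared.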

### Spectral analysis

Periodograms of the amplitude spectra of the observed and modelled series were plotted to test whether the ER and IER have the same dynamics in the critical frequency range, despite the loss of time resolution (related to low pass filtering due to regularisation). A periodogram is the frequency domain representation of a signal; transforming the signal into the frequency domain may reveal information that is not visible in the time domain. A transfer function shown in its equivalent frequency domain form describes the mapping between the input and the output signals' spectra for the linear dynamic systems used here. Signals may be easily transformed between the time and frequency domains (Wickert 2013).

Periodograms are obtained using the Matlab implementation of the fast Fourier transform and smoothed using the integrated random walk (e.g. Young *et al.* 1999); the same regularisation approach as used in the calculation of the IER, implemented in the Captain Toolbox (Taylor *et al.* 2007). Periodograms of ER, IER and catchment streamflow are compared on a single plot showing how the spectral properties of the inversion process are used to obtain the IER. The streamflow spectrum is the result of mapping the rainfall spectrum by the catchment dynamics. To make a full inversion of that mapping would involve very strong amplification of high frequencies with all the negative consequences discussed by Kretzschmar *et al.* (2014). The most significant implications of full inversion include the introduction of high amplitude, high frequency noise artefacts into the rainfall estimates. The regularisation of estimated derivatives introduces the effect of low-pass filtering into the inversion process, avoiding the excessive high frequency noise. Regularisation does not introduce any lag into the process, unlike traditional low-pass filtering.
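A raw amplitude spectrum of this kind can be computed with a standard FFT. The sketch below omits the integrated-random-walk smoothing step (which is specific to the Captain Toolbox) and applies the transform to a synthetic two-frequency signal sampled at the Blind Beck rate; signal and frequencies are illustrative assumptions.

```python
import numpy as np

def amplitude_spectrum(x, dt):
    """Single-sided amplitude spectrum of a uniformly sampled series.
    Returns frequencies (cycles per hour) and normalised amplitudes."""
    x = np.asarray(x, float) - np.mean(x)  # remove the mean (zero-frequency term)
    amp = np.abs(np.fft.rfft(x)) / len(x)
    freq = np.fft.rfftfreq(len(x), d=dt)
    return freq, amp

dt = 0.25                                  # 15 min sampling, in hours
t = np.arange(0, 96, dt)                   # four days of record
x = np.sin(2 * np.pi * t / 24) + 0.3 * np.sin(2 * np.pi * t / 2)
freq, amp = amplitude_spectrum(x, dt)      # peaks at 1/24 and 1/2 cycles/hr
```

Plotting `amp` against `freq` for the ER, IER and streamflow series on one set of axes is exactly the comparison described in the text: the streamflow spectrum is the rainfall spectrum attenuated at high frequencies by the catchment's low-pass behaviour.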

## RESULTS AND DISCUSSION

Figure 4 illustrates the smoothed rainfall distribution of the IER sequence obtained using the RegDer method. Similar streamflow sequences are generated using either the observed rainfall or ICR sequences as model input (see Kretzschmar *et al.* 2014). The implication is that the catchment system dynamics are being captured despite the apparent difference in the rainfall distribution and that the detail of the rainfall series in time may not be important when modelling the dominant mode of streamflow dynamics.

As aggregation increases, the goodness of fit (*R _{t}^{2}*) and the correlation (*R*) between the aggregated sequence and the IER tend to a maximum and then decrease as aggregation time increases further – ultimately the variation in the sequence would be completely smoothed out. The point at which the maximum value is reached is taken as an estimate of the resolution of the IER. Plots of *R _{t}^{2}* and *R* values are shown in Figures 6 (aggregation by resampling) and 7 (moving average estimate). Time resolution estimates are shown in Table 1 and compared with the fast time constant (*TC _{q}*) and the N–S sampling limit.

**Table 1** | Time constants, slow flow index, N–S limits and time resolution estimates

| Catchment | Sampling interval (hr) | *TC _{q}* (hr) | *TC _{s}* (hr) | SFI | N–S limit (hr) | Time resolution: aggregation by resampling | Time resolution: aggregation by moving average | Cut-off point (hr) |
|---|---|---|---|---|---|---|---|---|
| Blind Beck | 0.25 | 6.3 | 22.1 | 47% | 19.9 | 2.5 h (10 time periods) | 2.25 h (9 time periods) | 3.8 |
| Baru | 0.083 | 1.1 | 18.7 | 62% | 3.4 | 0.9–1 h (11–12 time periods) | 1 h (12 time periods) | 1.7 |

Table 1 shows that the estimated resolution of the IER sequence for Blind Beck is around 9–10 time periods (i.e., 2.25–2.5 h) and for Baru it is 11–12 time periods (i.e., 55 min–1 hr). Both estimates are within the N–S safe sampling limit and below the fast time constant for both catchments indicating that even though resolution has been lost – the regularisation trade-off for numerical stability – the dominant mode of the rainfall–streamflow dynamics has been captured. Table 2 shows that the estimated resolution of the IER for both catchments is well within the Nyquist limit and, while the Blind Beck resolution is within the safe limits suggested by Ljung (1999) and Young (2010), the estimated resolution for Baru is close to the fast TC and outside the suggested limits. The estimates of resolution of the inferred sequence made from the aggregation plots are not always well defined and may be dependent on the length of record which will affect the number of aggregation periods that may be meaningfully calculated given the finite length of the data series. A better means of estimation of resolution may be achieved by examining the frequency spectra of the rainfall and streamflow sequences.

**Table 2** | Estimated IER resolution compared with sampling guidelines

| Catchment | *TC _{q}* (hr) | Nyquist limit (hr) | Ljung interval (hr) | Young interval (hr) | Estimated resolution (hr) |
|---|---|---|---|---|---|
| Blind Beck | 6.3 | 19.9 | 3.98 | 3.32 | 2.25–2.5 |
| Baru | 1.1 | 3.4 | 0.68 | 0.57 | 0.91–1.0 |

Table 1 lists the time constants, SFIs and cut-off points for both catchments. The cut-off point for Blind Beck (3.8 h) is outside the range of the catchment time constants (6.3–22.1 h), probably reflecting the frontal rainfall regime, which is relatively uniform in time and space. Flow along the two pathways is almost evenly split, indicating that both are important in terms of flow generation. On the other hand, Baru's *TC _{q}* (1.1 h) is beyond the cut-off point (1.7 h), in the region where the spectra contain little power or information, indicating why the catchment's variable, high intensity, high frequency, highly localised convective rainfall may not be easy to estimate. It is worth noting that the forward rainfall–streamflow model does not fit the Baru catchment (88%), characterised by its highly variable (both spatially and temporally) rainfall, as well as it fits the Blind Beck catchment (98%) with its relatively uniform, predominantly frontal rainfall (Kretzschmar *et al.* 2014).

## CONCLUSIONS

A combination of time and frequency domain techniques has been used to show that the IER time-series generated by the RegDer inversion method does, indeed, approximate the direct inverse of a transfer function to a high degree of accuracy within the frequency range which includes the dominant modes of the rainfall–streamflow dynamics. The direct inverse exaggerates low-amplitude, high frequency noise, which is filtered out by the regularisation process involved in the RegDer method. The smoothing of the signal resulting from regularisation is quantified in the time domain by comparison with aggregated observed input data using standard model fit measures – the coefficient of determination, *R _{t}^{2}*, and the correlation coefficient, *R* – and analysed as a low-pass filtering process in the frequency domain. The smoothing effect is minimised, within the constraints of the available data and catchment dynamics, through optimisation of the regularisation constants (NVRs) to obtain the best fit of the inversion process where both rainfall and discharge data are available.

## ACKNOWLEDGEMENTS

The authors would like to thank Mary Ockenden for the collection and quality assurance of the rainfall and streamflow records for the Blind Beck catchment (NERC grant number NER/S/A/2006/14326), Jamal Mohd Hanapi and Johnny Larenus for the collection of the rainfall and streamflow records for the Baru catchment, and Paul McKenna for their quality assurance (NERC grant number GR3/9439). This work has been partly supported by the Natural Environment Research Council (Consortium on Risk in the Environment: Diagnostics, Integration, Benchmarking, Learning and Elicitation (CREDIBLE)) grant number NE/J017299/1.