Automatic meter reading (AMR) provides real-time consumption data, making it possible to collect a large amount of information about daily, even hourly, consumption. Combined with supplied-volume data, lost volumes on the network can then easily be assessed. However, because water losses have multiple components and metering and calculation are subject to inaccuracies, the occurrence of new (detectable) leaks is hard to identify. This paper therefore proposes a user-friendly statistical tool that helps to detect new leakage quickly and reliably. A process control chart (such as the exponentially weighted moving average) enables us to detect changes in the water loss time series, in particular the occurrence of a new leak.

## INTRODUCTION

The division of the network into areas called DMAs (district metered areas) has made it possible to monitor these sectors, and in particular to follow up losses. Supplied volumes, measured with flowmeters (Morrison 2004), can indeed be used to detect leaks on the network by applying the minimum night flow (MNF) method (Amoatey *et al.* 2014). However, this method rests on the hypothesis of a constant night consumption, which depends on the network (Alkasseh *et al.* 2013) and might not hold, especially if industrial units or public gardens are present in the area.

Automatic meter reading (AMR) has brought new information on daily (and in some cases hourly) consumption. This information can be useful for network monitoring as it enables, together with supplied-volume data, a reliable assessment of the lost volume on the network, according to the IWA Water Balance (Lambert *et al.* 1999; Fanner 2003).

Yet, even if the calculation of losses is more accurate with AMR data than with the MNF calculation, some inaccuracies can hamper the reading of losses (inaccuracies of the consumption meters and/or of the flowmeter). Losses calculated directly as the difference between supplied and consumed volumes therefore need statistical treatment to be fully exploitable.

The purpose of this study is therefore to propose a statistical tool enabling an easy and reliable reading and monitoring of daily losses calculated from AMR and flowmeter data. The use of a control chart, such as the exponentially weighted moving average (EWMA) chart (Roberts 1959), enables real-time leakage assessment and detection based on statistical process control theory. Building this chart consists in calculating and plotting the EWMA statistic together with two control limits. Reading the chart should make it easy to detect out-of-control events, which correspond to leaks on the network.

## MATERIAL AND METHODS

Noting *L _{t}* the value of the process at time *t*, the method consists in calculating a statistic *Z _{t}* such that

*Z _{t}* = *λ* *L _{t}* + (1 − *λ*) *Z _{t−1}*

where *λ* ∈ (0; 1] is a factor that weights the past values. This statistic is then monitored in a chart with two control limits LCL and UCL (Montgomery 2009):

UCL = *μ* + *k* *σ _{L}* √[(*λ*/(2 − *λ*)) (1 − (1 − *λ*)^{2t})]

LCL = *μ* − *k* *σ _{L}* √[(*λ*/(2 − *λ*)) (1 − (1 − *λ*)^{2t})]

where *μ* is the level of the process when it is in control (without any disturbance), *k* is the widening factor of the control limits and *σ _{L}* is the standard deviation of *L _{t}*. As *t* grows, the limits tend to the asymptotic values *μ* ± *k* *σ _{L}* √(*λ*/(2 − *λ*)). This standard deviation (representing the inaccuracy in the losses) can be due to meter inaccuracies but also to estimation error if consumption is estimated in the case of a partial AMR deployment (Claudio *et al.* 2015).
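The recursion and control limits above can be coded in a few lines. The sketch below is an illustrative implementation (the function name and the default parameter values are ours, not from the study):

```python
import numpy as np

def ewma_chart(losses, mu, sigma_l, lam=0.18, k=2.3, z0=None):
    """EWMA statistic and control limits for a daily loss series.

    losses  : daily lost volumes L_t
    mu      : in-control level of the process (e.g. background losses)
    sigma_l : standard deviation of the losses under normal conditions
    lam, k  : EWMA weight and control-limit widening factor
    """
    losses = np.asarray(losses, dtype=float)
    z = np.empty_like(losses)
    prev = mu if z0 is None else z0          # common choice: start at the target mu
    for t, l in enumerate(losses):
        prev = lam * l + (1 - lam) * prev    # Z_t = lam*L_t + (1-lam)*Z_{t-1}
        z[t] = prev
    # time-varying limits: mu +/- k*sigma_L*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))
    t_idx = np.arange(1, len(losses) + 1)
    half_width = k * sigma_l * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t_idx)))
    ucl, lcl = mu + half_width, mu - half_width
    out_of_control = (z > ucl) | (z < lcl)
    return z, lcl, ucl, out_of_control
```

On a simulated series where the mean loss jumps from 100 to 130 m³/day, the shifted points are flagged within a few days, which is the behaviour the chart is designed for.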

The EWMA statistic *Z _{t}* is plotted on a chart with the two control limits, and an outlier is detected when *Z _{t}* ∉ [LCL; UCL]. The two parameters *λ* and *k* can be chosen according to the targeted average run length (ARL). ARL(*δ*) is the mean time required by the model to detect a shift of *δσ _{L}*. Concretely, ARL(0) is the mean time for the model to signal an abnormality (leakage occurrence) when in reality no leak has occurred. The objective is to maximise ARL(0) (reduction of false alarms) while minimising ARL(*δ*) for any *δ* ≠ 0. Lucas & Saccucci (1990) provide the associated couple (*λ*, *k*) according to a value of *δ* and the targeted ARL(0).
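The trade-off between ARL(0) and ARL(*δ*) can be checked by simulation. The sketch below estimates ARL(*δ*) by Monte Carlo for a standardised process (*μ* = 0, *σ _{L}* = 1) using the asymptotic control limits; it is an illustration of the concept, not the procedure of Lucas & Saccucci (1990), who tabulate exact values:

```python
import numpy as np

def average_run_length(delta, lam, k, n_runs=500, max_t=2000, seed=1):
    """Monte Carlo estimate of ARL(delta): mean time until the EWMA
    statistic leaves the asymptotic control limits, for a process
    shifted by delta standard deviations. delta=0 gives the in-control
    ARL, i.e. the mean time between false alarms."""
    rng = np.random.default_rng(seed)
    limit = k * np.sqrt(lam / (2 - lam))     # asymptotic +/- limit for mu=0, sigma=1
    run_lengths = []
    for _ in range(n_runs):
        z = 0.0
        for t in range(1, max_t + 1):
            z = lam * rng.normal(delta, 1.0) + (1 - lam) * z
            if abs(z) > limit:
                run_lengths.append(t)
                break
        else:
            run_lengths.append(max_t)        # censored run
    return float(np.mean(run_lengths))
```

With the parameters used later in the paper (*λ* = 0.18, *k* = 2.3), a 1*σ* shift is detected in roughly ten days on average, while the in-control run length is much longer.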

*μ* is the value of the process under normal conditions. One might argue that normal conditions mean no leakage on the network, so that *μ* should be equal to 0. However, it is almost impossible to find a network without any losses. According to the BABE (background and burst estimate) concept (Lambert 1994; Farley & Trow 2003), real losses can be divided into three parts, as shown in Figure 1.

We therefore define *μ* as the volume of undetectable background losses (UBL). We consider that the loss process *L _{t}* is under normal circumstances if there is only undetectable background leakage, and we try to trigger an alert when a detectable leak has occurred on the network. An estimate of the daily UBL volume is given by the following formula (Melato *et al.* 2009):

UBL (m³/day) = (20 × *Lm* + 1.25 × *Ns*) × (*AZNP*/50)^{1.5} × 24/1,000

where *Lm* stands for the length of water mains (km), *Ns* is the number of service connections (from main to property limit) and *AZNP* is the average zone night pressure (m).
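As a numerical illustration, the estimate can be coded directly. The coefficients below (20 L/km of main per hour and 1.25 L per service connection per hour at 50 m of pressure, with a 1.5 pressure exponent) are the background-loss rates commonly quoted alongside the BABE concept, and the example figures are hypothetical:

```python
def daily_ubl_m3(lm_km, ns, aznp_m):
    """Daily undetectable background losses (m3/day).

    Uses the background rates commonly associated with the BABE
    concept: 20 L/km of main/h and 1.25 L/service connection/h at
    50 m of pressure, corrected by (AZNP/50)^1.5. These coefficients
    are an assumption of this sketch, not quoted from Melato et al.
    """
    litres_per_hour = (20.0 * lm_km + 1.25 * ns) * (aznp_m / 50.0) ** 1.5
    return litres_per_hour * 24.0 / 1000.0   # L/h -> m3/day

# Hypothetical example: 41.5 km of mains, 1,822 connections, AZNP = 50 m
mu = daily_ubl_m3(41.5, 1822, 50.0)          # about 74.6 m3/day
```

The value obtained this way would serve as the in-control level *μ* of the EWMA chart.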

## RESULTS AND DISCUSSION

We apply the method to the DMA of Canéjan (France). It is a simple network of 1,822 meters, all equipped with AMR for contractual reasons. The average daily supplied volume is 850 m^{3}/day and the network length is about 41.5 km.

*L _{t}* models the daily lost volume at time *t*. We decide that we want to detect any shift in the process of magnitude *σ _{L}* (i.e. *δ* = 1). We set ARL(0) equal to 100, which means that the model will take, on average, 100 days to raise a false alarm. This value is greater than the mean time between occurrences of detectable leaks, which keeps false alerts low. According to these values, the advised parameters are *λ* ∈ [0.16; 0.19] and *k* ∈ [2.298; 2.346]. We then choose *λ* = 0.18 and *k* = 2.3. Once all the parameters have been selected, it is possible to plot an EWMA control chart as presented in Figure 2. We add to the chart information on the network interventions that occurred during the period.

We define as an ‘out of control event’ any point *Z _{t}* exceeding the upper control limit UCL. Two interventions occurred during the studied period: the repair of a burst on a service connection (01/13/2011 and 01/14/2011) and the repair of an invisible leak on a main (03/31/2011). We pay particular attention to the second event. As can be seen in Figure 2, the main was repaired on March 31 whereas the leak seems to have begun on January 20. We estimate the duration of the leak to be more than two months, whereas the model detects an irregularity in the process on January 26, i.e. approximately one week after its occurrence.

Thanks to the model, the leak is detected quickly, which means that the awareness time for the leak could have been of the order of a week. Assuming that leak location and repair are carried out promptly, the duration of the leak could have been shortened by about two months.

Regarding model accuracy, it is difficult to assess the numerical precision of the model, since the exact moment at which the invisible leak appeared is unknown. Yet we can assess the sensitivity of the model through the number of ‘out of control events’ (OOC) generated according to the values of *λ* and *k*.

The number of OOC is more sensitive to changes in *k* than to changes in *λ*. Yet, if we stay within the ranges of values proposed by Lucas & Saccucci (1990) (*λ* ∈ [0.16; 0.19] and *k* ∈ [2.298; 2.346]), there is no significant change in the number of OOC. As can be seen in Figure 3 and Table 1, within the proposed range of values, the number of OOC varies only between 72 and 73. Therefore, provided the parameters are chosen following Lucas & Saccucci (1990), the model is not very sensitive to the choice of *λ* and *k*.

| | λ = 0.1 | λ = 0.16 | λ = 0.18 | λ = 0.19 | λ = 0.2 | λ = 0.3 |
|---|---|---|---|---|---|---|
| k = 2 | 80 | 77 | 78 | 78 | 78 | 76 |
| k = 2.2 | 73 | 75 | 75 | 75 | 75 | 76 |
| k = 2.3 | 73 | 72 | 73 | 73 | 73 | 75 |
| k = 2.4 | 73 | 72 | 72 | 73 | 73 | 73 |
| k = 3 | 71 | 70 | 70 | 70 | 71 | 72 |

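The sensitivity analysis behind Table 1 can be reproduced in spirit on synthetic data (the Canéjan series itself is not reproduced here), by counting OOC points while sweeping (λ, k) over the same grid. The series and the resulting counts below are therefore illustrative only:

```python
import numpy as np

def count_ooc(losses, mu, sigma_l, lam, k):
    """Count 'out of control' points: EWMA values above the upper limit."""
    z, n_ooc = mu, 0
    for t, l in enumerate(losses, start=1):
        z = lam * l + (1 - lam) * z
        ucl = mu + k * sigma_l * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        n_ooc += z > ucl
    return int(n_ooc)

# Synthetic year of losses: 200 in-control days, then a sustained leak
rng = np.random.default_rng(42)
losses = np.concatenate([rng.normal(100, 10, 200), rng.normal(125, 10, 165)])

# Sweep the (lambda, k) grid of Table 1
grid_lam = (0.1, 0.16, 0.18, 0.19, 0.2, 0.3)
grid_k = (2.0, 2.2, 2.3, 2.4, 3.0)
table = {k: [count_ooc(losses, 100, 10, lam, k) for lam in grid_lam] for k in grid_k}
```

As in Table 1, widening the limits (larger k) removes OOC points, while varying λ within the advised range barely changes the counts.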

This chart is a simple analysis of the loss signal; a possible evolution of this work is to go deeper into the decomposition of the lost flow. According to the BABE concept, there are three possible states for a leak, which can be transposed to the network components (water mains and service connections):

State 1: the component suffers no leaks, or only undetectable ones;

State 2: the component suffers an unreported burst;

State 3: the component suffers a reported burst.

The purpose is a decomposition: starting from the overall leakage flow, to split the volume according to these three states. The modelling phase consists in building two sub-models.

Noting *Y _{i}*(*t*) the state of the component *i* at time *t*, a survival function is associated with each state, involving a vector of explanatory covariates *Z _{i}* = (*Z _{0i}*, *Z _{1i}*) for the component *i*: *α _{j}* is the parameter associated with the state *j*, *β _{0}* is a vector of parameters associated with the set of covariates *Z _{0i}* (summarising the laying conditions, e.g. laying period, diameter, material used, etc.) and *β _{1}* is a vector of parameters associated with the set of covariates *Z _{1i}* (summarising the ageing factors, e.g. length, diameter, number of previous failures, etc.). From the survival functions, it is easy to estimate the probability of being in each state. This sub-model is stratified by material, which means that the estimation of the set of parameters is carried out for each material. Applying this sub-model to our study makes it possible to generate a chart of the state probabilities, as represented in Figure 4.
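To make the state-probability idea concrete, here is a deliberately simplified stand-in: a progressive three-state model in which a component moves from state 1 to state 2 at one constant rate and from state 2 to state 3 at another (covariates could scale these rates through a term such as exp(*β*′*Z _{i}*)). The functional form and the rates are assumptions made for this illustration, not the calibrated survival sub-model:

```python
import math

def state_probabilities(t, lam12, lam23):
    """Probabilities of the three leak states for one component at time t,
    under a simple progressive Markov assumption:
    state 1 -> state 2 at rate lam12, state 2 -> state 3 at rate lam23.
    This is an illustrative stand-in, not the authors' calibrated model."""
    p1 = math.exp(-lam12 * t)                # still leak-free (or UBL only)
    if math.isclose(lam12, lam23):
        p2 = lam12 * t * math.exp(-lam12 * t)
    else:
        p2 = lam12 / (lam23 - lam12) * (math.exp(-lam12 * t) - math.exp(-lam23 * t))
    p3 = 1.0 - p1 - p2                       # burst has become reported
    return p1, p2, p3

# Hypothetical rates: a burst every ~20 years, reported after ~5 years on average
p1, p2, p3 = state_probabilities(5.0, lam12=0.05, lam23=0.2)
```

Plotting these probabilities over time for each component gives a chart of the same kind as Figure 4.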

Using the observed global leakage flow *L*(*t*), estimated from supplied volumes and AMR data, we calibrate the second sub-model, which expresses *L*(*t*) as a combination of the state probabilities and of the leakage flow at time *t* associated with each state *j*, the latter including the effect of pressure and of the diameter.

Once these two sub-models are calibrated, it will be possible to assess the leakage flow in each state and thus to improve network monitoring by targeting sectors for active leakage control.

## CONCLUSION

We emphasise in this paper the need for smart AMR data management, which can be useful not only for customer management but also for operational purposes.

AMR is the missing piece for real-time monitoring of water losses on the network. However, the reliability of a real-time measurement greatly depends on the metering devices and their accuracy. Thus, the basic calculation of losses (subtracting consumed volumes from supplied volumes) can lead to a misreading of what really happens on the network. Therefore, we proposed in this paper a user-friendly tool that enables a better reading of water losses and supplies visual support for the detection of leakage appearance.

This tool, called a control chart, is based on a statistical treatment that enables transformation of the raw loss data into reliable and smoothed information, enabling better leakage control and detection.

One limitation of this work is that, as efficient as this tool may be for leakage detection, it cannot localise leaks. We do not propose to replace existing technologies (acoustic sensors for instance) but present a methodology to use AMR as a complement to them.

A prospect of this research is to deepen the analysis of the leakage flow. Decomposing the leakage flow into the three degradation states of a leak described by the BABE concept would allow the operator to improve knowledge of the network and to adapt intervention means (active leakage control, repair or renewal). AMR would thus enhance real-time monitoring of the network.