The last century has been marked by major advances in the understanding of microbial disease risks from water supplies and significant changes in expectations of drinking water safety. The focus of drinking water quality regulation has moved progressively from simple prevention of detectable waterborne outbreaks towards adoption of health-based targets that aim to reduce infection and disease to a level well below detection limits at the community level. This review outlines the changes in understanding of community disease and waterborne risks that prompted development of these targets, and also describes their underlying assumptions and current context. Issues regarding the appropriateness of selected target values, and how continuing changes in knowledge and practice may influence their evolution, are also discussed.
INTRODUCTION
In the 160 years since demonstration of the linkage between faecal contamination of water and human disease, our understanding of microbial disease risks and expectations of drinking water safety have changed markedly. From the initial focus on prevention of waterborne outbreaks, water quality regulations have moved towards adoption of health-based targets to limit infection and disease at the community level. This transition represents a change of several orders of magnitude in disease incidence, and as more countries begin to incorporate health-based targets into national regulations and guidelines, it is timely to examine the origins and current context of these targets.
We begin by outlining how current disease surveillance systems operate, the relationship between detected outbreaks and disease patterns in the community, and how understanding of the infection process has changed over recent decades. We then summarise the role of traditional microbial water quality indicators in reducing levels of waterborne disease in the first half of the 20th century, the subsequent emergence of viral and protozoal pathogens as significant causes of waterborne outbreaks, and increasing recognition of the need for a new approach to address these risks. The origins of the two most widely used health-based targets (the US Environmental Protection Agency (USEPA) annual infection risk and the World Health Organization (WHO) annual disability adjusted life years (DALY) burden) are outlined, and we examine the data and assumptions that underpin them. Finally, we discuss the context of current health-based water quality targets in relation to broader public health considerations, and how they may require further adaptation in the future.
DETECTION OF WATERBORNE DISEASE
Surveillance systems and waterborne disease
The predominant illness caused by waterborne pathogens is gastroenteritis, characterised primarily by diarrhoea and often accompanied by other symptoms including vomiting, abdominal cramps, nausea, or fever. Gastroenteritis remains a major cause of morbidity and mortality in the developing world, especially among young children. In developed nations, the health impacts are much less severe, but gastroenteritis remains a relatively common illness in the community, with estimated incidence rates varying from 0.1 to 3.5 episodes per person per year (Roy et al. 2006). This illness may be caused by a wide range of enteric pathogens, all of which can be acquired by multiple routes of infection including person-to-person transmission, contaminated food, drinking water, and recreational water. The source of infection for an individual case of disease cannot usually be determined except in the context of an outbreak investigation.
Surveillance for gastroenteritis pathogens and other infectious diseases relies predominantly on laboratory identification of individual pathogens through the healthcare system and reporting of these cases of infection to health agencies. The pathogens for which reporting is mandated are specified by the regulations of the relevant government agency. Routine monitoring of these data for evidence of disease outbreaks may consist simply of noting case numbers, and scanning for temporal or geographical clustering, which then triggers further investigation. For selected pathogens, a more active level of surveillance may be implemented by contacting the affected individuals and collecting information about recent exposures (e.g. food, water, and international travel) to seek evidence of a shared source. Routine surveillance systems detect only a small fraction of the pathogen infections that occur in the community, because they require: firstly, that the infected person experiences symptoms that are sufficiently severe to cause them to seek medical care; secondly, that the physician obtains an appropriate clinical specimen from the patient and orders relevant pathology tests; thirdly, that a positive test result is obtained by the laboratory; and finally, that the appropriate authority is notified of the positive result. The diminishing number of events at each stage of this process is typically depicted as a ‘reporting pyramid’ (Figure 1) (CDC 2014a).
The healthcare system may capture the number of cases in the upper portion of the pyramid (from seeking medical attention upwards), but the causative pathogen can only be identified among the subgroup of cases for whom an appropriate pathology test is ordered. Even when such a test is performed, the pathogen responsible for the illness may not be detected due to limitations in test sensitivity, and not all positive tests are reported to surveillance systems even when reporting is mandated. The lower levels of the pyramid may be investigated using epidemiological studies comparing the number of cases identified by normal clinical practice with disease incidence at a community level. Direct enumeration of the number of asymptomatic or mild infections in the community is particularly difficult as it requires obtaining and analysing faecal specimens from people who are either not ill or not sufficiently ill to seek healthcare. Routine surveillance systems are limited to pathogens for which laboratory tests are available and widely used, meaning that some important and common gastroenteritis pathogens (notably norovirus) are much less likely to be detected. Most surveillance systems also contain a provision for reporting of suspected foodborne or waterborne disease even where the pathogen is unknown.
The ratio between the number of cases notified to surveillance systems and the number of symptomatic cases in the community varies between pathogens according to symptom severity, and the nature of the healthcare and surveillance systems. A recent epidemiological study in the UK found that the overall ratio between identified enteric pathogen cases reported to the national surveillance system and symptomatic cases of gastroenteritis in the community was one reported pathogen per 147 community cases (Tam et al. 2012a). Another study in Canada estimated an average ratio of 313 community cases of infectious gastrointestinal illness for every case reported to the provincial surveillance system (Majowicz et al. 2005), while comparison of the total number of gastrointestinal pathogens reported to Australia's national surveillance system (NCDC 2002) and a national survey of gastroenteritis in 2002 (Hall et al. 2006) suggested a ratio of about 500 community cases to one notified pathogen. Even in research studies where an extensive range of pathogen tests are carried out on faecal specimens, a large proportion (commonly 50–60%) of community gastroenteritis cases do not have a pathogen identified (de Wit et al. 2001; Hellard et al. 2001; Tam et al. 2012b). Such cases may be attributable to known pathogens that are present but unable to be detected due to limitations in test methods, pathogens that are as yet undiscovered, or non-infectious causes of gastroenteritis.
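To illustrate the scale of under-ascertainment these ratios imply, a simple calculation is sketched below; the incidence and reporting ratio used are round illustrative values rather than figures from the studies cited above.

```python
# Illustrative reporting-pyramid arithmetic using assumed round numbers,
# not figures from Tam et al., Majowicz et al., or Hall et al.

def expected_reported_cases(community_cases: float, ratio: float) -> float:
    """Expected number of surveillance reports, given one report per `ratio` community cases."""
    return community_cases / ratio

population = 100_000
episodes_per_person_year = 1.0   # assumed community gastroenteritis incidence
ratio = 300                      # assumed community-to-reported ratio (studies cite roughly 150-500)

community_cases = population * episodes_per_person_year
print(f"Community cases per year:  {community_cases:,.0f}")
print(f"Expected reports per year: {expected_reported_cases(community_cases, ratio):,.0f}")

# A hypothetical contamination event causing 150 extra community cases would, on
# average, generate less than one additional report - well below any cluster threshold.
event_cases = 150
print(f"Expected extra reports from a 150-case event: "
      f"{expected_reported_cases(event_cases, ratio):.2f}")
```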
Outbreaks and endemic disease
A disease outbreak is generally defined as a significant increase in the number of cases of a specific disease in a localised area over a short period of time. This definition is flexible, and individual jurisdictions may apply various criteria or algorithms to detect unusual spatial and/or temporal clustering of pathogen reports, which trigger further investigation. The characteristics of the pathogen and the nature of the illness influence the likelihood of detecting an outbreak, with unusual pathogen species/serotypes or rare/severe symptoms (e.g. bloody diarrhoea or illness requiring hospitalisation) more likely to trigger an investigation. Sometimes, an outbreak may be recognised as a cluster of gastroenteritis cases even before any attempt is made to identify a pathogen, particularly when the cases have an identifiable relationship linked to a common exposure (e.g. attendees at a social event or residents of a healthcare facility). Identification of the source of an outbreak (e.g. contaminated water or food) may rely only on epidemiological evidence (a significantly higher rate of illness in those with a particular exposure compared to those without it) or may be supplemented by detection of pathogens or other evidence of contamination in the suspect transmission vehicle. The ability to recognise outbreaks and identify their source and causation is constrained by the human and technical resources available to public health agencies for such investigations.
Although the definition of an outbreak requires identification of as few as two cases associated with a common exposure source, consideration of the reporting pyramid suggests that somewhere between 50 and 150 gastroenteritis cases are probably required at the community level before an outbreak would be detected by routine surveillance systems. Once an outbreak is recognised, active case finding through contact with physicians and hospitals, local organisations, or media publicity will lead to identification of gastroenteritis cases at steps further down the pyramid. Most of these will be classified as outbreak cases only on the basis of symptoms and exposure to the suspect source during the relevant time period, with laboratory confirmation of pathogen infection usually being performed in only a minority of cases.
Cases linked to recognised gastroenteritis outbreaks make up only a small fraction of all reported enteric pathogen infection cases, and it is acknowledged that many outbreaks may pass undetected due to the low sensitivity of surveillance systems. Most cases of gastroenteritis in the community, however, probably do not arise from simultaneous exposure of a group of people to a common infection source, but are acquired independently by separate individuals at different times. Some pathogen infections are present continuously in a population at a low but fairly stable rate (endemic disease) while others appear intermittently (sporadic disease). The relationship between detected outbreaks, undetected outbreaks, and rates of endemic disease in the community is illustrated in Figure 2 (Frost et al. 1996). For obvious reasons, detected outbreaks are often said to represent the ‘tip of the iceberg’, while the vast bulk of disease cases exist well below the threshold of detection by routine surveillance.
Characterising the infection process
Understanding of the relationship between enteric pathogen exposure and infection was developed initially through investigation of foodborne disease outbreaks and later by experimental studies in animals, cell culture systems, and human subjects. The term ‘minimum (or minimal) infectious dose’ (MID) is often used in early publications, but the meaning of this term has always been imprecise (Ward & Akin 1984). The MID is commonly defined as ‘the smallest number of pathogens capable of causing an infection’, but this description lacks any quantitative measure of the exposed population (i.e. does it mean the dose required to infect one subject among 10 exposed, or perhaps one among 100 or even one among 1,000,000?). The term also conveys the implication that there exists a dose threshold for any given pathogen, below which infection does not occur. An early review of the minimum infective doses of human enteric viruses suggested that the MID should be defined as the dose required to infect 5% of subjects (ID5) or even 1% of subjects (ID1), but noted that the low number of human subjects (or tissue culture replicates) in experimental studies meant that the ID50 was usually reported as the MID (Plotkin & Katz 1965). Similarly, a review of data on the infectious dose for Salmonella infection found that the lowest dose tested in human studies was 10³ cells, and even when no infections were observed at this dose, the small number of subjects meant that the true infection risk could have been as high as 23% (Blaser & Newman 1982). Furthermore, examination of available data from Salmonella outbreaks suggested infection and illness had sometimes resulted from doses as low as 17–30 organisms. Another virus review in 1984 found little had changed, and the ID50 remained the most frequently quoted statistic for describing the MID (Ward & Akin 1984). Use of the ID50 provides a benchmark to enable comparisons between different studies and between strains of pathogens, but clearly does not correspond to the popular perception of what is meant by the term ‘MID’.
Another question related to the concept of the MID is whether pathogenic microorganisms act cooperatively to establish infection (consistent with existence of a dose threshold), or whether each organism acts independently. Experimental work on Salmonella infections performed in the 1950s provided strong support for the independent action model (Meynell 1957; Meynell & Stocker 1957), but the assumption of cooperative action prevailed in the literature well into the 1980s despite accumulation of data from other bacterial genera, which also supported the independent action hypothesis (Rubin 1987). The mathematical modelling methods used to describe dose–response relationships for microbial pathogens subsequently evolved into the formal discipline of Quantitative Microbial Risk Assessment (QMRA) (Haas et al. 1999), utilising a four-step conceptual framework analogous to that previously developed for assessment of health risks from chemical exposures (NAS 1983). The current body of evidence from QMRA studies of experimental and outbreak data strongly supports the independent action (single-organism) hypothesis for pathogen infection (Haas et al. 1999).
Given appropriate input data on human exposure (infectious pathogen concentrations in water, volume of water ingested daily), the pathogen dose–response relationship (likelihood of infection for a given pathogen dose), and susceptibility (percentage of the exposed population not immune to the pathogen), QMRA permits an estimate to be made of infection risks for a human population consuming pathogen-contaminated drinking water. If information is available on the proportion of infected people who develop symptoms, the number of cases of illness can also be estimated.
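A minimal QMRA calculation of this kind is sketched below. It uses the exponential single-hit dose–response model and arbitrary illustrative values for pathogen concentration, ingestion volume, dose–response parameter, susceptibility, and the probability of illness given infection; none of these figures is drawn from the cited literature.

```python
import math

# Minimal single-hit QMRA sketch with assumed, illustrative parameter values.
conc_per_litre = 1e-4        # assumed infectious pathogens per litre of finished water
volume_litres = 1.0          # assumed unboiled tap water ingested per person per day
r = 0.005                    # assumed exponential dose-response parameter
susceptible_fraction = 1.0   # assume the whole population is susceptible
p_ill_given_inf = 0.5        # assumed probability that infection leads to illness

mean_daily_dose = conc_per_litre * volume_litres

# Exponential (single-hit) model: probability of infection from one day's exposure.
p_inf_daily = 1.0 - math.exp(-r * mean_daily_dose)

# Annual risk from 365 independent daily exposures.
p_inf_annual = 1.0 - (1.0 - p_inf_daily) ** 365

p_ill_annual = p_inf_annual * p_ill_given_inf * susceptible_fraction

print(f"Daily infection risk:  {p_inf_daily:.2e}")
print(f"Annual infection risk: {p_inf_annual:.2e}")
print(f"Annual illness risk:   {p_ill_annual:.2e}")
```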
The first applications of this approach to estimation of disease risks from exposure to waterborne pathogens focussed on recreational water quality (Fuhs 1974; Dudley et al. 1976). Later, as concern grew about the possibility that treated drinking water might still contain low concentrations of infectious pathogens (see below), the technique was widely used to model risks from drinking water supplies (Regli et al. 1991; Rose & Gerba 1991).
WATER QUALITY AND DISEASE
Indicator organisms and waterborne disease risks
The link between faecal pollution of drinking water and transmission of diseases such as cholera was first proposed in the 1850s, but it was not until three decades later that the ‘germ theory’ of disease was widely accepted and effective measures to reduce the incidence of infectious diseases began to be implemented in developed nations (Hrudey & Hrudey 2004). Although it was possible at that time to detect and identify some waterborne pathogens, the diversity of pathogens, the complexity of test methods, and the intermittent presence of individual species in water made direct testing for pathogens impractical for routine water quality testing. Instead, methods were developed to monitor the more abundant non-pathogenic species of faecal bacteria and use these as ‘indicators’ to assess levels of faecal pollution in water supplies (Gleeson & Gray 1997).
The bacterium Escherichia coli (at that time called Bacillus coli) was known to be one of the most numerous bacterial species in human faeces, but the lack of a simple one-step test for this organism led to the widespread adoption of the total coliform group as the routine microbial indicator in the early decades of the 20th century (Allen & Geldreich 1978). During this period, the frequency of waterborne outbreaks in developed nations fell markedly as basic water disinfection and treatment methods (chlorination and sand filtration) were progressively implemented, together with improved living conditions, sanitation, and better protection of source waters from human waste. The use of coliform indicator bacteria played a key role in reducing the risk of outbreaks by providing a means to assess faecal pollution in source waters, and evaluate the efficacy of disinfection and water treatment processes. Over time, tests for thermotolerant coliforms were added to monitoring programmes to provide a more focussed measure of faecal contamination, and then in the 1990s, defined substrate technology tests that permitted rapid detection and enumeration of both E. coli and total coliforms were widely adopted (Edberg et al. 1988).
During the 1970s and 1980s, there was an apparent increase in the number of drinking water-related outbreaks reported in both the USA and the UK (Craun 1978; Hunter 1997). It was also noted that the proportion of waterborne outbreaks attributable to bacterial pathogens appeared to be decreasing, while the proportion attributable to Giardia lamblia and enteric viruses was increasing. Indeed, Giardia had become the most commonly identified cause of outbreaks associated with surface water supplies in the USA, accounting for more than half of outbreaks between 1971 and 1985 where a causative agent was identified (Craun 1988). It is not clear whether this apparent increase in the number of outbreaks reflected a real change in incidence, or whether it was at least partially attributable to more effective disease surveillance systems. Concurrent improvements in detection methods for viral and protozoal pathogens in both clinical and environmental samples permitted the identification of causative agents for outbreaks that in previous times would have been classified as having unknown aetiology. Underlying factors such as absent, interrupted, or inadequate water treatment and disinfection could be identified as the cause in many outbreaks (Craun 1988), but several virus outbreaks were documented in the USA and other countries in water supplies where coliform bacteria were not detected, and free chlorine residuals were maintained throughout the outbreak period (Hejkal et al. 1982; Bosch et al. 1991). Only a few years later, the recognition of several waterborne outbreaks caused by Cryptosporidium heralded the emergence of another significant pathogen with even higher levels of chlorine resistance than enteric viruses (D'Antonio et al. 1985; Hayes et al. 1989; Rush et al. 1990).
In parallel with the apparent upwards trend in reported waterborne outbreaks, evidence had been accumulating that culturable human viruses could be detected at low concentrations in apparently well operated, fully treated water supplies that complied with relevant water quality standards (Payment 1981; Keswick et al. 1984). These developments brought into question the prevailing belief that the absence of coliform bacteria was a reliable marker of ‘safe’ drinking water. There was growing knowledge that the persistence of viral and protozoal pathogens in the environment and their responses to water treatment and disinfection processes were significantly different from those of bacterial pathogens, and therefore, elimination of coliforms from treated water was not a guarantee that all classes of pathogen had been effectively removed. Many attempts have since been made to identify indicator organisms for protozoal and viral pathogens that could serve with the same utility as E. coli does for bacterial enteric pathogens. Candidate organisms have included faecal streptococci and enterococci, sulphite-reducing Clostridium species and several types of bacteriophage, but none has gained widespread acceptance for routine use in monitoring drinking water quality (Ashbolt et al. 2001).
The question of endemic waterborne disease
Early reports of infectious viruses in treated drinking water provoked debate about whether such low concentrations (generally averaging one tissue culture infectious virus dose per several hundred litres of water) should be considered a public health risk (Plotkin & Katz 1965). The prevailing idea of an ‘MID’ for pathogens led many to conclude that such low exposures would be unable to initiate infections, but as QMRA techniques developed and the single-organism concept became more widely accepted, this view changed. It was predicted that even with very low pathogen concentrations, the large size of exposed populations and repeated daily exposures could potentially result in many cases of infection and illness arising annually from water supplies that had previously been considered ‘safe’.
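The scale of the predicted effect can be illustrated with a back-of-envelope calculation; the concentration, ingestion volume, per-organism infection probability, and population size below are assumptions chosen only to show the order of magnitude involved.

```python
# Back-of-envelope estimate; all values are illustrative assumptions, not measurements.
conc = 1.0 / 300.0        # assumed: one infectious virus per 300 litres of treated water
volume = 1.0              # litres of tap water ingested per person per day
per_virus_p_inf = 0.1     # assumed probability that a single ingested virus causes infection
population = 1_000_000    # size of the exposed population

viruses_ingested_per_person_year = conc * volume * 365
# For small per-exposure risks, expected infections scale almost linearly with exposure:
expected_infections = population * viruses_ingested_per_person_year * per_virus_p_inf

print(f"Viruses ingested per person-year: {viruses_ingested_per_person_year:.2f}")
print(f"Expected infections per year in {population:,} people: {expected_infections:,.0f}")
```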
These predictions prompted a number of epidemiological studies that attempted to detect evidence of endemic waterborne disease from drinking water supplies, including several studies undertaken as part of a research programme instituted by the Centers for Disease Control and Prevention and the USEPA (CDC 2014b). The body of evidence on endemic waterborne disease was summarised in a Supplement to the Journal of Water and Health in 2006. Observational studies of various designs that assessed illness rates or markers of infection in communities with differing water supplies or significant changes in water treatment gave mixed results, with some supporting the idea of significant waterborne disease transmission, while others did not (Craun & Calderon 2006). In addition, several randomised intervention trials have compared self-reported gastroenteritis rates in groups of people randomly allocated to drink tap water with or without additional point-of-use treatment to remove (presumed) residual pathogens. Studies of this design provide the strongest level of evidence for human disease, because they are able to control for underlying differences in non-water sources of gastrointestinal disease, which may influence the results of observational studies. The intervention trials have shown variable results, with some finding evidence that the intervention significantly reduced rates of disease (Payment et al. 1991, 1997; Borchardt et al. 2012) while others did not (Hellard et al. 2001; Colford et al. 2005). This may reflect different risk levels in the different water supplies being examined or different susceptibility to infection in the target population groups, but also may be at least partially attributable to limitations in some study designs (i.e. lack of blinding to water treatment allocation). Information from five such studies conducted before 2006 was used to construct an estimate of the fraction of gastroenteritis attributable to public drinking water supplies in the USA (Colford et al. 2006). A number of different scenarios were explored in regard to levels of risk from surface or groundwater sources, and effects of source water contamination, inadequate water treatment, or contamination in the distribution system. This analysis produced a median estimate that 12% of gastroenteritis among the immunocompetent population in the USA could be attributable to community drinking water systems. However, due to lack of specific information, many of the assumptions were necessarily arbitrary in nature. Another estimate by USEPA researchers using similar information produced a slightly lower figure of 8.5% for waterborne illness from community drinking water systems (Messner et al. 2006).
Although epidemiological studies can measure actual disease rates in a community, the size of the population that can be included (and consequently the statistical power of the study to detect differences between exposure groups) is limited by resource and logistical constraints. Increases in statistical power require a disproportionately large increase in sample size, and it has been calculated that a randomised trial capable of detecting 100 additional cases of gastroenteritis annually in a population of 10,000 (corresponding to roughly a 1% increase in gastroenteritis incidence) would require enrolment of around 416,000 people (Eisenberg et al. 2006). Randomised studies of this size are not feasible, and the most stringent resolution yet achieved by this type of study was around 10% of the overall gastroenteritis rate (Colford et al. 2005). Therefore, the potential existence of lower rates of waterborne disease in communities can only be addressed using QMRA for specific pathogens or modelling using information from the few randomised studies available.
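The sample-size arithmetic behind such statements can be sketched with a standard normal-approximation formula for comparing two incidence rates; the baseline rate, power, and significance level below are assumptions for illustration and are not necessarily those used by Eisenberg et al. (2006).

```python
from statistics import NormalDist

# Sketch of a two-group sample-size calculation for comparing gastroenteritis rates.
# All inputs are assumptions for illustration only.
alpha = 0.05          # two-sided significance level
power = 0.90
baseline_rate = 1.00  # assumed episodes per person-year in the control group
excess_rate = 0.01    # 100 additional cases per 10,000 person-years
followup_years = 1.0

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_beta = NormalDist().inv_cdf(power)

# Normal approximation for comparing two Poisson rates with equal group sizes.
lam1 = baseline_rate
lam2 = baseline_rate + excess_rate
n_per_group = (z_alpha + z_beta) ** 2 * (lam1 + lam2) / (followup_years * (lam1 - lam2) ** 2)

print(f"Participants per group: {n_per_group:,.0f}")
print(f"Total enrolment:        {2 * n_per_group:,.0f}")  # comparable in magnitude to the ~416,000 cited above
```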
HEALTH-BASED TARGETS
Alternative approaches to health-based targets
Recognition of the limited utility of traditional indicator organisms for assessing risks from non-bacterial pathogens, and the inability of epidemiological studies to detect small differences in illness rates, led regulatory authorities to develop QMRA-based approaches to assess drinking water safety and set regulatory targets. These targets have been formulated to set upper limits on the adverse health effects that may be suffered by consumer populations as a result of microbial contamination of drinking water. Internationally, two main approaches have been used to define microbial safety targets for water, as follows:
USEPA annual infection risk target: the USEPA used QMRA to develop water treatment requirements for G. lamblia and enteric viruses in the Surface Water Treatment Rule (SWTR) (USEPA 1989). Subsequent changes to the rule have been aimed at enhancing pathogen removal capability for poor quality source waters and reducing risks of Cryptosporidium infection. The specified treatment requirements are consistent with limiting waterborne pathogen infections to a rate of one per 10,000 people per year, although this target figure has not been officially adopted into USEPA policy (Regli et al. 1999).
WHO DALY target: the WHO adopted a tolerable risk level expressed in terms of DALYs in the 3rd edition of the drinking-water guidelines (WHO 2004). The DALY is a summary measure of the health impact of a disease that incorporates both fatal and non-fatal (mortality and morbidity) outcomes. One DALY can be thought of as one lost year of ‘healthy’ life. WHO has set the health-based target for microbial drinking water quality at one DALY per million persons per year (i.e. 10⁻⁶ DALYs per person per year).
The USEPA infection risk target and the WHO DALY target rely on the same data and models for QMRA calculations to predict infection risks from pathogens in drinking water. In theory, this process could be carried out for many pathogens, but in practice, the limitations of data on dose–response relationships, occurrence of pathogens in water, and their removal by water treatment processes mean that modelling is limited to a relatively small group of ‘reference pathogens’. These comprise representatives of the three major pathogen categories (viruses, bacteria, and protozoa) selected on the basis of demonstrated waterborne transmission, relatively high infectivity and severity of illness, as well as the availability of sufficient data to perform QMRA. The DALY approach then uses additional clinical and epidemiological information on the severity and duration of symptoms and risks of fatal outcomes to compute the health burden. The infection risk approach results in water treatment requirements corresponding to equal risks of infection for each category of pathogen, while the DALY approach aims to achieve water quality that would produce an equal health burden for each category of pathogen.
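The practical difference between the two formulations can be illustrated by converting each target into a tolerable annual infection risk for two hypothetical pathogens; the per-case disease burdens and illness probabilities below are invented placeholders, not values tabulated by the WHO.

```python
# Illustration of the difference between the two target formulations, using two
# hypothetical pathogens; per-case burdens and illness probabilities are invented.

who_daly_target = 1e-6     # DALYs per person per year
usepa_target = 1e-4        # infections per person per year (same for every pathogen)

pathogens = {
    # name: (DALYs per case of illness, probability of illness given infection)
    "mild, self-limiting illness":  (1e-3, 0.5),
    "severe illness with sequelae": (1e-1, 0.5),
}

for name, (daly_per_case, p_ill) in pathogens.items():
    tolerable_illness = who_daly_target / daly_per_case
    tolerable_infection = tolerable_illness / p_ill
    print(f"{name}: tolerable annual infection risk "
          f"{tolerable_infection:.1e} (DALY target) vs {usepa_target:.1e} (infection target)")
```

With these placeholder values, the DALY target tolerates an annual infection risk above the USEPA figure for the mild pathogen and below it for the severe one, whereas the infection-risk target treats both pathogens identically.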
Selection of target values
In the SWTR, the USEPA expresses the belief that public water supplies should provide a much greater level of protection than simply that necessary to avoid outbreaks (citing estimated infection rates of 50 in 10,000 people or greater in reported Giardia outbreaks in the USA), and states that ‘providing treatment to ensure less than one case of microbiologically caused illness per year per 10,000 people is a reasonable goal’. The one in 10,000 annual risk target is also described as ‘comparable to other acceptable microbiological risk levels’ (Regli et al. 1991). This reference in turn cites the transcript of an expert panel discussion at the 1987 Calgary Giardia Conference (Regli et al. 1988). The expert panel canvassed different waterborne risk estimates (reported Giardia outbreaks, estimated symptomatic Giardia cases in the community, and gastroenteritis from recreational water use) as well as possible targets from QMRA modelling. These estimates ranged over several orders of magnitude, and the expert panel did not attempt to develop a consensus position on a suitable target for drinking water regulation.
Another line of reasoning in support of the one in 10,000 annual infection risk figure has also been presented (Macler & Regli 1993). These authors calculated that an annual infection risk of one in 10,000 for Giardia would be equivalent to approximately a one in 10 cumulative risk of waterborne infection over a 70-year lifetime. This was derived from a study that estimated the total number of ‘clinically significant infections’ for a range of pathogens in the USA in 1985 and the proportion attributable to various sources (food, water, zoonotic transmission, etc.) (Bennett et al. 1987). Giardia was estimated to cause a total of 120,000 cases of illness annually, with 60% being attributable to waterborne transmission. This pathogen was believed to be responsible for 8% of all waterborne infections, and there was an average 10% lifetime risk of microbial infection from drinking water (Macler & Regli 1993). The 95% upper-bound risk for this estimate was approximately one, and assuming the risk of death from waterborne illness in the USA is 0.1% of all cases (Bennett et al. 1987), this would give an estimated lifetime risk of death from waterborne infection of one in 1,000. Alternatively, if one assumes that only 10% of infections result in significant illness, and uses the mean lifetime risk of infection (rather than the upper-bound estimate), then the risk of death from waterborne infection would be about one in 100,000 over a lifetime. These figures are in the same range as lifetime risks of cancer that are considered by the USEPA to be acceptable for chemical contaminants in water (two in 100,000 to two in 10,000,000 theoretical upper-bound), thus giving broadly similar tolerable fatality risk levels for chemical and microbial contaminants.
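The fatality arithmetic described in this paragraph can be reconstructed approximately as follows, using only the figures quoted above; the calculation is a reading of the published reasoning rather than a reproduction of the original analysis.

```python
# Rough reconstruction of the lifetime fatality arithmetic described by
# Macler & Regli (1993), using only the figures quoted in the text above.

p_death_given_case = 0.001           # 0.1% of waterborne illness cases fatal (Bennett et al. 1987)

# Upper-bound scenario: lifetime risk of waterborne infection approximately one,
# with every infection counted as a case of illness.
lifetime_infection_upper = 1.0
lifetime_death_upper = lifetime_infection_upper * p_death_given_case
print(f"Upper-bound lifetime death risk: {lifetime_death_upper:.0e}")   # about one in 1,000

# Mean scenario: ~10% lifetime infection risk, with only 10% of infections
# producing significant illness.
lifetime_infection_mean = 0.10
p_illness_given_infection = 0.10
lifetime_death_mean = lifetime_infection_mean * p_illness_given_infection * p_death_given_case
print(f"Mean-scenario lifetime death risk: {lifetime_death_mean:.0e}")  # about one in 100,000
```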
This calculation is potentially open to question, however, as the fatality rate estimated by Bennett et al. (1987) was based on cases of ‘clinically significant infections’ estimated by the Centers for Disease Control and Prevention. No definition is given for the term ‘clinically significant infection’, and it is not clear what proportion of community illness is encompassed by this term. A comparison of the overall enteric illness rate (0.11 illness cases per person per year) in the 1985 study (Bennett et al. 1987) with a more recent estimate (0.79 illness cases per person per year) in 1997 (Mead et al. 1999) suggests that the earlier study significantly underestimated the endemic disease rate. The estimated fatality rates in the two studies are also markedly different: 0.04% for all enteric cases and 0.10% for waterborne cases in the 1985 study, versus 0.003% for all enteric cases in the 1997 estimate. This disparity may be partially explained by changes in the relative prevalence of different pathogens and advances in clinical treatment in the intervening period, as well as the inclusion of all endemic cases in the denominator of the more recent study.
The 1987 Bennett et al. study has been cited by a number of authors (LeChevallier & Buckley 2007) as the source of the one in 10,000 annual waterborne disease target figure. In this interpretation, the target is described as the rate of waterborne infections already tolerated in the USA in 1987 (cited as 25,000 cases of waterborne disease in a population of about 250 million). However, the number actually stated in the monograph is 940,000 annual cases of waterborne disease, or about 38 cases per 10,000 people per year. This was derived by multiplying the estimated waterborne fraction for several individual enteric pathogens by the estimated total number of cases of each pathogen. It is not evident how the numerical estimates given in the Bennett publication (either collectively or individually) could subsequently have been interpreted to derive a rate of one in 10,000 per person per year for waterborne disease rather than this higher figure.
The DALY was developed by Harvard University for the World Bank to provide a consistent framework to quantify and compare the health burden of a wide range of diseases and injuries on populations (World Bank 1993). This measure was developed as an alternative approach to simply using the number of deaths (mortality) or illnesses (morbidity) to rank the effects of diseases on populations. The DALY integrates disease impacts including premature death, degree of disability caused by an illness, and the length of time lived with disability into a single measure, which can be used to compare the importance of different diseases, injuries, and risk factors as part of health decision-making and planning processes. The DALY was used by the WHO in the first Global Burden of Disease Study in 1990 (Murray & Lopez 1997) and has become an established metric to quantify and compare the population health burden of diseases between countries, regions, and population groups. The DALY has also been widely used for priority setting and evaluating the impacts of specific public health interventions on reducing disease burdens (WHO 2009). However, its use to set a fixed regulatory target for a specific route of pathogen exposure (drinking water) is a novel application (Gibney et al. 2013).
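In its standard form the DALY is the sum of years of life lost to premature mortality (YLL) and years lived with disability (YLD). A minimal sketch of the calculation, omitting the age-weighting and discounting options used in some Global Burden of Disease iterations and using invented input values, is shown below.

```python
# Minimal DALY calculation: DALY = YLL + YLD (no age-weighting or discounting).
# All input values are invented for illustration.

def yll(deaths: float, life_expectancy_at_death: float) -> float:
    """Years of life lost to premature mortality."""
    return deaths * life_expectancy_at_death

def yld(cases: float, disability_weight: float, duration_years: float) -> float:
    """Years lived with disability."""
    return cases * disability_weight * duration_years

# Hypothetical illness burden in a population: 1,000 illness cases and one death.
burden = (yll(deaths=1, life_expectancy_at_death=35)
          + yld(cases=1000, disability_weight=0.1, duration_years=7 / 365))

print(f"Total burden: {burden:.1f} DALYs")
```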
The concept of applying a health-based target using the DALY metric to drinking water quality was discussed in a 2003 background document (Havelaar & Melse 2003; WHO 2004) in the lead-up to formulation of the 3rd edition of the WHO Guidelines for Drinking-water Quality. This 2003 report presents estimates of the DALY burden for several microbial pathogens that may be transmitted by drinking water, and for cancers potentially caused by two chemical contaminants (naturally occurring arsenic and bromate generated by ozone disinfection). In discussing selection of an appropriate ‘reference level of risk’ (health target), a lifetime risk of one excess cancer death per million people is mentioned as ‘a widely used threshold for environmental cancer risk assessment’. In relation to translating this fatality risk to DALYs, the authors note that the average number of life years lost per cancer death in the Netherlands (for all types of cancer) is 13.8 years. Therefore, counting only mortality and disregarding any contribution of morbidity to the health burden, an equivalent target of 13.8 DALYs per million people over a lifetime of exposure (70 years), or about 0.2 × 10⁻⁶ DALYs per person per year, can be derived on this basis, although the calculation is not presented in this publication.
The level of cancer risk used in the above example, however, is 10-fold lower than that conventionally adopted by the WHO for exposures to genotoxic carcinogens. Accordingly, in the 3rd edition of the WHO guidelines (WHO 2004), the target for microbial risk was selected by analogy to the reference level of a 10⁻⁵ lifetime excess cancer risk (one excess case of cancer per 100,000 population ingesting drinking water containing the carcinogen at the guideline value over a lifetime). This is an upper-bound estimate for cancer risk, approximating the 95th percentile limit. The specific cancer cited as an example is renal cell cancer, which may arise from exposure to bromate in drinking water. A figure of 11.4 DALYs per cancer case is said to be derived from a publication comparing microbial and cancer risks (Havelaar et al. 2000), but this number does not actually appear in that reference; rather, a median value of 10 DALYs per cancer case is given in the text. However, using a value of 11.4 DALYs per cancer case and a tolerable cancer risk of 10⁻⁵ per 70-year lifespan produces an estimate of 1.6 × 10⁻⁶ DALYs per person per year, and this is then rounded down to 1.0 × 10⁻⁶ DALYs per person per year. As the DALY impact of illness varies from one pathogen to another, adoption of a uniform DALY target for each of the three categories of waterborne pathogen results in different rates of illness (and thus different rates of infection) being tolerated for each category. The WHO guidelines also note that appropriate target values should be based on local circumstances, and that setting a stringent target for water quality may have little effect on the overall disease burden if high rates of pathogen transmission occur by other routes of exposure. Health targets in the range of one DALY per 100,000 to one DALY per 10,000 people per year may be suitable as an initial target in such circumstances.
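The two derivations described above can be reconstructed from the quoted figures as follows; the arithmetic is a reconstruction, since neither source document sets the calculation out in full.

```python
# Reconstruction of the DALY-target arithmetic described above, using only the
# figures quoted in the text.

lifetime_years = 70

# Havelaar & Melse (2003) example: 10^-6 lifetime cancer death risk, 13.8 life-years
# lost per cancer death (mortality only).
risk_1 = 1e-6
daly_per_death = 13.8
target_1 = risk_1 * daly_per_death / lifetime_years
print(f"Mortality-only target: {target_1:.1e} DALYs per person per year")   # ~0.2 x 10^-6

# WHO (2004) derivation: 10^-5 lifetime excess cancer risk, 11.4 DALYs per cancer case.
risk_2 = 1e-5
daly_per_case = 11.4
target_2 = risk_2 * daly_per_case / lifetime_years
print(f"Derived target:        {target_2:.1e} DALYs per person per year")   # ~1.6 x 10^-6, rounded to 10^-6
```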
Are the current target levels appropriate?
The setting of a target level for the safety of drinking water (or any other potential hazard to which humans are exposed) recognises that a level of zero risk cannot be achieved, and therefore, some low level of risk must be acknowledged as being ‘tolerable’ or ‘acceptable’. In a background publication for the 2003 WHO Guidelines for Drinking-water Quality, various approaches that may be used to derive tolerable risk levels for regulatory purposes were discussed (Hunter & Fewtrell 2001). These include:
the risk falls below an arbitrarily defined probability;
the risk falls below some level that is already tolerated;
the risk falls below an arbitrarily defined attributable fraction of total disease burden in the community;
the cost of reducing the risk would exceed the costs saved;
the cost of reducing the risk would exceed the costs saved when the ‘costs of suffering’ are also factored in;
the opportunity costs would be better spent on other, more pressing, public health problems;
public health professionals say it is acceptable;
the general public say it is acceptable (or more likely, do not say it is not); and
politicians say it is acceptable.
In the case of cancer risks from chemical contaminants in drinking water, the established guideline levels for both the USEPA and WHO may be viewed as being ‘below an arbitrarily defined probability’. The USEPA uses a target range of one in 10,000 to one in 1,000,000 (10⁻⁴ to 10⁻⁶) for carcinogens in drinking water (Cotruvo 1988), while the WHO sets guideline values for genotoxic carcinogens that are consistent with an upper-bound estimate of an excess lifetime cancer risk of one in 100,000 (10⁻⁵) (WHO 1993). Cancer risks in this range are considered to be negligible, and not to require further regulatory consideration. To place these risks in context, the current lifetime risk of a person being diagnosed with cancer is of the order of 300,000–400,000 in 1,000,000 (American Cancer Society 2014; Cancer Research UK 2014).
Both the USEPA and WHO microbial health-based targets can be related at least approximately to current regulatory targets for carcinogenic chemicals in water, and just as the target levels for carcinogens correspond to risk levels several orders of magnitude lower than the cancer risks that already exist in the population, so too the target levels for waterborne microbial risk are much lower than the rates of gastroenteritis already experienced in the community. Well operated water supplies in developed nations are likely to have waterborne illness levels below those that can be measured by routine surveillance systems or even by targeted epidemiological studies, and thus, any health gains from improvements to meet the health-based targets can only be inferred by QMRA modelling and not demonstrated by changes in illness rates or health service utilisation.
The stringency of the USEPA target has been questioned on the basis that it was formulated at a time when the magnitude of endemic gastroenteritis was not fully appreciated (Haas 1996), and the WHO target has also been criticised more recently as being overprotective and unlikely to provide quantifiable health benefits despite the potential for considerable public expenditure on water treatment (Mara 2011). If a current level of community gastroenteritis of one episode per person per year is assumed, then the USEPA target would restrict waterborne illness to less than 0.01% of all gastroenteritis (assuming all drinking water supplies operate at the target level), while the WHO target permits a slightly higher level. While it is not argued here that current levels of community gastroenteritis should be considered tolerable, it is legitimate to question whether the balance between waterborne gastroenteritis risks and risks from all other sources implied by these targets is appropriate in terms of regulatory effort, public expenditure, and achievable health benefits.
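The figures quoted above follow directly from the target values, as the following sketch shows; the per-case disease burden used for the WHO calculation is an illustrative assumption (of the general order tabulated by WHO for reference pathogens), since the tolerable illness rate under the DALY approach depends on the pathogen.

```python
# Waterborne illness permitted by each target, expressed as a fraction of an assumed
# community gastroenteritis rate of one episode per person per year.

community_rate = 1.0                 # assumed episodes per person per year

usepa_annual_infection_risk = 1e-4   # at most one infection (and hence at most one
                                     # illness) per 10,000 persons per year
usepa_fraction = usepa_annual_infection_risk / community_rate
print(f"USEPA target: <= {usepa_fraction:.2%} of community gastroenteritis")   # <= 0.01%

who_daly_target = 1e-6               # DALYs per person per year
daly_per_case = 1.5e-3               # illustrative assumed per-case burden for a mild illness
who_tolerable_illness_rate = who_daly_target / daly_per_case
who_fraction = who_tolerable_illness_rate / community_rate
print(f"WHO target:   ~ {who_fraction:.2%} of community gastroenteritis")      # ~0.07%
```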
Implications of changes in knowledge or practice
The scientific and clinical data that underpin both the USEPA target and the WHO target represent the best information available at the time each target was formulated. However, as knowledge increases and clinical practice changes, it may become necessary to revise these targets or reconsider how they are constructed. Both targets may be affected by changes in the data available for use in QMRA for the reference pathogens. For example, QMRA for Cryptosporidium was initially limited to human dose–response data for a single strain of Cryptosporidium parvum (the IOWA strain) (DuPont et al. 1995). In subsequent years, more human feeding trials have been performed and data are now available from two additional C. parvum strains (Okhuysen et al. 1999) and one C. hominis strain (Chappell et al. 2006). These studies showed a range of ID50 values for C. parvum isolates from 9 to 1,042 oocysts, illustrating the high variability between strains. In this situation, it is unclear whether the QMRA model used for target setting should be revised to use the ‘worst’ strain (most highly infective) in order to provide maximum protection, or perhaps a strain from the middle of the infectivity range. Perhaps the model should remain unaltered, given the high degree of health protection already built in to the current target level. Alternatively, the additional knowledge could be incorporated by deriving a dose–response curve for a mixture of strains.
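Under the exponential single-hit model commonly applied to Cryptosporidium, the dose–response parameter r is related to the ID50 by r = ln(2)/ID50, so the reported range of ID50 values translates directly into a wide range of low-dose infection risks. The sketch below uses the strain ID50 values quoted above and an arbitrary one-oocyst dose for illustration.

```python
import math

# Exponential single-hit model: P(infection | dose d) = 1 - exp(-r * d), with r = ln(2)/ID50.
# The ID50 values are those quoted in the text; the one-oocyst dose is an arbitrary
# illustration of low-level exposure.

id50_values = {"most infective strain": 9, "least infective strain": 1042}
dose = 1  # oocyst

for label, id50 in id50_values.items():
    r = math.log(2) / id50
    p_inf = 1 - math.exp(-r * dose)
    print(f"{label} (ID50 = {id50:>4}): P(infection | 1 oocyst) = {p_inf:.4f}")
```

With these values, the per-oocyst infection probability differs by roughly two orders of magnitude between the two strains, which is the crux of the model-selection question raised above.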
The basis of the WHO DALY target may also be affected by changes in clinical practice or increasing knowledge about the health impacts of infections. Rotavirus was selected as the reference virus because of its relatively high clinical impact in terms of severe illness and mortality rate in young children. However, an effective rotavirus vaccine has been available since the mid-2000s, and is being progressively incorporated into childhood vaccination programmes worldwide. This has resulted in a significant decline in both morbidity and mortality for rotavirus, which in turn has reduced the rotavirus DALY burden (Gibney et al. 2014). Should the DALY value for rotavirus used in derivation of the WHO target be revised (and water treatment requirements therefore relaxed) in view of this change? Or should the current target be retained as being representative of a ‘plausible worst case’ viral pathogen, which may emerge in the future?
Another area of knowledge that may have a significant impact on the DALY values for enteric pathogens is the accumulating evidence of clinical sequelae (long-term effects after the initial infection), which develop in some patients after an episode of gastroenteritis. The current WHO DALY calculation for Campylobacter infection includes the recognised sequelae of reactive arthritis and Guillain–Barré syndrome, and these illnesses accounted for 39% of the calculated average health burden for cases of symptomatic Campylobacter infection (WHO 2011). Recent research indicates that Campylobacter infection is also associated with irritable bowel syndrome (IBS), and the health burden for an average case would increase more than four-fold if this illness is also included in DALY calculations (Gibney et al. 2014). Evidence for other enteric pathogens is less extensive, but there are indications that IBS may also occur after Giardia infections (Wensaas et al. 2012) and viral infections (Zanini et al. 2012). If IBS is included in calculations of health burden while retaining the one DALY per million people per year target, then pathogen removal requirements for water supplies would increase correspondingly. This would also have the effect of reducing the tolerable level of waterborne disease as a proportion of all gastroenteritis in the community. On the other hand, progressive improvements in the treatment of acute gastroenteritis or sequelae may counterbalance these factors by reducing morbidity and mortality and thus reduce the average health impact for some or all pathogens.
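The effect of adding a sequela to a per-case burden can be illustrated with a calculation structured in the same way as the WHO approach (the probability of each outcome multiplied by its burden); the probabilities and disability burdens below are invented placeholders and are not the values used by WHO (2011) or Gibney et al. (2014).

```python
# Illustrative per-case DALY calculation for an enteric infection with sequelae.
# Outcome probabilities and per-outcome burdens are invented placeholders.

outcomes = {
    # outcome: (probability per symptomatic case, DALYs if the outcome occurs)
    "acute gastroenteritis":    (1.000, 0.005),
    "reactive arthritis":       (0.020, 0.050),
    "Guillain-Barre syndrome":  (0.0003, 5.0),
    "irritable bowel syndrome": (0.090, 0.300),   # candidate sequela discussed in the text
}

def burden(included):
    return sum(p * d for name, (p, d) in outcomes.items() if name in included)

without_ibs = burden({"acute gastroenteritis", "reactive arthritis", "Guillain-Barre syndrome"})
with_ibs = burden(outcomes.keys())

print(f"Per-case burden without IBS: {without_ibs:.4f} DALYs")
print(f"Per-case burden with IBS:    {with_ibs:.4f} DALYs "
      f"({with_ibs / without_ibs:.1f}-fold increase)")
```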
DISCUSSION
The concept of endemic disease from drinking water had not been explicitly considered by health regulatory agencies prior to the 1980s, and thinking about water safety centred on outbreak prevention. However, developments in detection methods for viral and protozoal pathogens, together with evidence from some waterborne outbreaks, gradually led to acceptance of the concept that pathogens in drinking water considered ‘safe’ by then-current standards could be contributing to the ‘background’ or endemic rate of gastroenteritis in the community. The development of QMRA techniques permitted estimation of the number of infections that might be caused by exposure to very low concentrations of pathogens in treated drinking water.
Subsequently, the USEPA developed an annual infection risk target to limit the risks of endemic waterborne illness, and the WHO later developed a target based on the health burden of waterborne infections. Both targets can be roughly equated to health targets for carcinogenic chemical contaminants for the corresponding regulations or guidelines. As is the case for chemical contaminants, the disease levels incorporated in these targets are far below the levels that actually occur in the community, and therefore, the health benefits associated with achieving the target for water supplies that are already well operated can only be modelled and not measured.
Whether to adopt an infection risk or a DALY health target for water-related exposures, as well as which numerical values to choose, are decisions for individual jurisdictions to make based on economic, environmental, social, and cultural conditions. Knowledge of the origins of the numbers currently suggested as target values helps in understanding the limitations of each quantitative choice and sheds light on the data and assumptions incorporated in developing these figures. Understanding the origin of the infection and DALY health targets for water-related exposures, and the estimated contribution of waterborne disease to the overall level of gastroenteritis in the community, helps to determine the relevance and applicability of target values to individual settings. Consideration also needs to be given to how increasing scientific knowledge and changes in disease impacts may influence chosen target values. The information presented in this paper provides a context for decision-making about health-based target selection and highlights more generally the relative and judgemental nature of defining tolerable risk.
ACKNOWLEDGEMENTS
Joanne O'Toole holds a National Health and Medical Research Council (NHMRC) Training Fellowship, and Karin Leder holds an NHMRC Career Development Fellowship. Katherine Gibney is the recipient of the NHMRC Gustav Nossal Postgraduate Scholarship sponsored by CSL and a Faculty of Medicine, Nursing, and Health Sciences, Monash University postgraduate excellence award.