are available, provisional thresholds are derived. If there are no data from particu-
larly sensitive species, or if there are data for fewer than two trophic levels, then no
criteria are derived (Lepper 2002).
The German methodology requires chronic toxicity data from four trophic levels
(bacteria/reducers, green algae/primary producers, small crustaceans/primary con-
sumers, and fish/secondary consumers) to derive criteria. If chronic NOECs are avail-
able for at least two trophic levels, acute data may be used to fill trophic level
gaps, but each acute value must be multiplied by an acute-to-chronic extrapolation
factor of 0.1; the result is regarded as a tentative criterion. If chronic data from at least two
trophic levels are not available, no criterion can be derived (Lepper 2002; BMU
2001; Irmer et al. 1995).
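To make the gap-filling step concrete, the short Python sketch below applies the 0.1 acute-to-chronic factor and flags the result as tentative; the function name, data layout, example concentrations, and the use of the lowest value as the criterion are assumptions for illustration only, not the published German procedure.

    # Illustrative sketch only: trophic-level bookkeeping and the 0.1
    # acute-to-chronic extrapolation factor described above. Example values
    # and the choice of the lowest value as the criterion are assumptions.
    ACUTE_TO_CHRONIC_FACTOR = 0.1
    TROPHIC_LEVELS = ("bacteria", "algae", "crustaceans", "fish")

    def derive_criterion(chronic_noecs, acute_values):
        """Both arguments map trophic level -> concentration in mg/L."""
        if len(chronic_noecs) < 2:
            return None, "no criterion: chronic data for fewer than two trophic levels"
        effective = dict(chronic_noecs)
        tentative = False
        for level in TROPHIC_LEVELS:
            if level not in effective and level in acute_values:
                # Fill a gap with an acute value scaled by the 0.1 factor.
                effective[level] = acute_values[level] * ACUTE_TO_CHRONIC_FACTOR
                tentative = True
        value = min(effective.values())
        return value, "tentative criterion" if tentative else "criterion"

    # Chronic NOECs for algae and fish; an acute datum fills the crustacean gap.
    print(derive_criterion({"algae": 0.32, "fish": 0.85}, {"crustaceans": 2.0}))
    # -> (0.2, 'tentative criterion')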
In Spain, aquatic life criteria are derived from acute or chronic data for at least
three species; species must include algae, invertebrates, and fish (Lepper 2002). The
UK requires acute or chronic data for algae or macrophytes, arthropods, nonarthropod
invertebrates, and fish to derive aquatic life criteria (Zabel and Cole 1999). Neither of
these methodologies specifies how many data points of each kind are required.
Several current derivation methodologies allow water quality guideline values to
be derived by applying AFs even when experimental toxicity data are absent; in that
case, derivations are based on QSAR estimates. If enforceable criteria, which can be used directly in
setting water quality standards, are sought, a large, diverse ecotoxicity database is
required. The Canadian guidelines (CCME 1999) require at least six types of data;
others do not specify a number, but leave much to professional judgment. In all
cases, as the number and diversity of data increase, AFs decrease, thus reducing the
uncertainty-driven conservatism in criteria values.
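The relationship between data availability and AF size can be illustrated with a minimal Python sketch; the factor values, thresholds, and function names below are placeholders chosen for illustration and are not those of any cited guideline (e.g., CCME 1999).

    # Illustrative sketch only: how assessment factors (AFs) shrink as data
    # become more numerous and diverse. All numeric factors are placeholders.
    def assessment_factor(n_taxonomic_groups, has_chronic_data):
        if has_chronic_data and n_taxonomic_groups >= 3:
            return 10      # large, diverse chronic dataset -> small AF
        if has_chronic_data:
            return 100     # limited chronic data -> intermediate AF
        return 1000        # acute-only or sparse data -> large, conservative AF

    def guideline_value(lowest_effect_conc, n_taxonomic_groups, has_chronic_data):
        """lowest_effect_conc in mg/L; returns a guideline value in mg/L."""
        return lowest_effect_conc / assessment_factor(n_taxonomic_groups, has_chronic_data)

    print(guideline_value(2.0, 3, True))    # 0.2 mg/L
    print(guideline_value(2.0, 1, False))   # 0.002 mg/L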
For statistical extrapolations by parametric techniques, data requirements
range from n = 4 to 10. In discussing the use of statistical extrapolations for very
small data sets, Aldenberg and Luttik (2002) noted that sample sizes as small as
n = 2 can be used; however, values derived from only 2-3 data points are of little
practical use because of their very high uncertainty.
Wheeler et al. (2002) analyzed the influence of data quantity, data quality, and
model choice on SSD outcomes. They found that a minimum of n = 10 was
required to obtain a reliable estimate of a particular endpoint (e.g., an HC5, the
hazardous concentration potentially harmful to 5% of species). Okkerman et al.
(1991) concluded that, although seven kinds of data would be ideal, five are ade-
quate for the SSD procedure described by Van Straalen and Denneman (1989).
According to Aldenberg and Slob (1993), the risk that a 50th percentile confi-
dence limit estimate of the HC5 will result in underprotection decreases consider-
ably as sample size is increased from 2 to 5, but less so as it is increased from 5
to 10 and from 10 to 20.
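As a minimal illustration of the parametric approach, the Python sketch below fits a log-normal SSD by moments to invented NOEC values and reads off a median HC5 estimate as the 5th percentile of the fitted distribution; the confidence-limit extrapolation constants of Aldenberg and Slob (1993) are not reproduced.

    # Minimal parametric SSD sketch: fit a log-normal distribution to chronic
    # NOECs (invented values) and take the 5th percentile as a median HC5
    # estimate. The one-sided confidence limits used in practice are omitted.
    import math
    from statistics import NormalDist, mean, stdev

    noecs_mg_per_l = [0.12, 0.35, 0.48, 0.90, 1.6, 2.3, 4.1, 5.5, 8.2, 12.0]  # n = 10

    logs = [math.log10(c) for c in noecs_mg_per_l]
    mu, sigma = mean(logs), stdev(logs)          # method-of-moments fit on log scale
    z05 = NormalDist().inv_cdf(0.05)             # ~ -1.645
    hc5 = 10 ** (mu + z05 * sigma)               # concentration hazardous to 5% of species

    print(f"median HC5 estimate: {hc5:.3g} mg/L")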
Jagoe and Newman (1997) proposed using bootstrapping techniques with SSDs
to avoid the issue of fitting available data to a particular distribution. Later, Newman
et al. (2000) found that the minimum sample sizes required for a bootstrapping
method ranged from 15 to 55. In a similar analysis, Newman et al. (2002) found
that 40-60 samples were required to derive an HC5 with an acceptable level of
precision. Van Der Hoeven (2001) described a nonparametric SSD method that
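The bootstrap idea can be sketched as follows in Python; this is a simplified resampling of the empirical 5th percentile with invented toxicity values, not a reproduction of the specific methods of Jagoe and Newman (1997) or Newman et al. (2000, 2002).

    # Nonparametric bootstrap sketch: resample the toxicity data with
    # replacement and take the empirical 5th percentile of each resample as
    # an HC5 estimate, avoiding any distributional assumption.
    import random
    from statistics import quantiles

    random.seed(1)
    noecs_mg_per_l = [0.12, 0.25, 0.35, 0.48, 0.70, 0.90, 1.1, 1.6, 2.3,
                      3.0, 4.1, 5.5, 6.4, 8.2, 12.0]   # n = 15 (invented)

    def empirical_hc5(sample):
        # First cut point of the 20-quantiles = empirical 5th percentile.
        return quantiles(sample, n=20, method="inclusive")[0]

    estimates = sorted(
        empirical_hc5(random.choices(noecs_mg_per_l, k=len(noecs_mg_per_l)))
        for _ in range(5000)
    )
    median_hc5 = estimates[len(estimates) // 2]
    lower_bound = estimates[int(0.05 * len(estimates))]   # 5% lower bootstrap bound
    print(f"bootstrap HC5: median {median_hc5:.3g} mg/L, "
          f"5% lower bound {lower_bound:.3g} mg/L")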