order to avoid violating them. Specific assumptions are highlighted under each
method described in this chapter.
2.2.2 Precision
When sampling a population, values are not estimated with certainty, but with
error (the inverse of precision). There are two sources of error that influence
precision. Process (sampling) error is the result of the spatial distribution or
other characteristics of the population. Uncertainty arises here because the
individual sites or organisms we happen to have selected may by chance fail to be
representative of the population, even if the method used is theoretically unbiased.
Observation error, which may be present either as well as or instead of process
error, results from uncertainties in the way in which the population is observed.
It is important to recognise the dual sources of error because it can help to clarify
the most promising way to improve precision. For example, it may be pointless
exerting huge effort to reduce process error by sampling more sites if the real
problem is that estimates of detectability at a given site are hopelessly imprecise
(i.e. observation error is high).
Lack of precision in estimates is a problem because it can obscure real differences,
such as a significant decline in a population due to harvesting. If one is setting out to
detect differences, it is therefore worth first defining the magnitude of difference
that you would like to be able to detect, and knowing the degree of precision that will
be sufficient to do this. Very good precision (a coefficient of variation of around 3%)
is required to detect a change in population size of 10%. Typically, population sur-
veys achieve a coefficient of variation in the region of 10-20%, which only allows
the confident detection of a 40-80% change in population size.
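The relationship between the coefficient of variation and the smallest confidently detectable change can be sketched with a standard two-sample power approximation (the formula and the 80% power level are assumptions of this sketch, not given in the text):

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_change(cv, alpha=0.05, power=0.8):
    """Approximate smallest proportional change distinguishable between
    two survey estimates, each with coefficient of variation `cv`, using
    a two-sided z-test. This is a conventional power approximation, used
    here only to illustrate the CV figures quoted in the text."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    # sqrt(2) because the variance of a difference of two independent
    # estimates is the sum of their variances
    return (z_alpha + z_beta) * sqrt(2) * cv

for cv in (0.03, 0.10, 0.20):
    print(f"CV {cv:.0%}: detectable change ~ {min_detectable_change(cv):.0%}")
```

Under these assumptions a CV of 10% gives a detectable change of roughly 40%, and a CV of 20% roughly 80%, in line with the figures quoted above.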
In order to decide how much confidence to place in a given estimate (either
qualitatively, or when formally testing whether there is a significant difference
from some baseline), we need some way to measure precision. The classical
parametric approach assumes a certain underlying distribution and estimates
the parameters of this distribution from the data. For example, given a normal
distribution with mean 10 and sampling variance 4, statistical theory allows us to
calculate that there is a 95% chance that a value randomly selected from the distri-
bution will lie between 6.08 and 13.92. This gives a confidence interval for the
mean that can be compared with other values. This approach has the benefit of an
exact statistical formulation, allowing easy calculation, and is the basis of most
measures of precision. We therefore provide equations for calculating parameter
standard errors throughout this book, and describe relationships between the standard error and
other measures of precision in Box 2.1. The major constraint with the parametric
approach is that it requires the data to at least approximate the assumed dis-
tribution, an assumption that will often not hold. In this case, it may be safer to use
non-parametric bootstrapping (Box 2.1), which makes no assumptions about
the underlying distribution of the data. This is a computer-intensive approach,
requiring some ability to write simple programs, although existing software makes
the technique relatively accessible these days (see Section 2.7).
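Both approaches can be sketched briefly. The first block reproduces the parametric interval from the worked example (normal distribution, mean 10, sampling variance 4, so standard deviation 2); the second shows a simple percentile bootstrap for the mean of a hypothetical sample (the data values are invented for illustration only):

```python
import random
from statistics import NormalDist, mean

# Parametric 95% interval, assuming a normal distribution with
# mean 10 and variance 4 (standard deviation 2), as in the text.
z = NormalDist().inv_cdf(0.975)          # ~1.96
lo, hi = 10 - z * 2, 10 + z * 2
print(f"parametric 95% interval: ({lo:.2f}, {hi:.2f})")  # (6.08, 13.92)

# Non-parametric bootstrap 95% CI for the mean: resample the data
# with replacement many times, then take percentiles of the
# resampled means. No distributional assumption is needed.
data = [8, 12, 9, 11, 10, 7, 13, 10, 9, 11]  # hypothetical counts
random.seed(1)
boot_means = sorted(
    mean(random.choices(data, k=len(data))) for _ in range(10_000)
)
ci = (boot_means[249], boot_means[9_749])    # 2.5th and 97.5th percentiles
print(f"bootstrap 95% CI for the mean: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The percentile bootstrap shown here is the simplest variant; Box 2.1 and the software discussed in Section 2.7 cover more refined options.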