The methods for determining timeliness include peak comparison, aberration
detection comparison, and correlation. The peak comparison method is
retrospective and can be used as a preliminary measure to determine the
potential timeliness of one data source compared to another. However,
comparing the peaks of two time series does not address the question of when
an outbreak would be detected, and the peak is not always the feature of
interest. The peak comparison method also does not account for the size or
the width of the peaks in each time series. An earlier peak in one data
source does not necessarily translate to a timelier source of data when a
detection algorithm is applied in a prospective setting. For example, an
algorithm may alert first in one data source even though that source peaks
later than another.
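This limitation can be seen in a small sketch: two hypothetical, noise-free
count series in which the source with the earlier peak is also the broader,
flatter one, so the peak dates alone say nothing about prospective detection.
All data, shapes, and day indices below are illustrative.

```python
# A minimal sketch of the peak comparison method on two hypothetical
# (noise-free, illustrative) daily count series.
import numpy as np

days = np.arange(60)

# Source A: earlier but broader, lower peak; source B: later but sharper.
source_a = 10 + 8 * np.exp(-0.5 * ((days - 25) / 8) ** 2)
source_b = 10 + 20 * np.exp(-0.5 * ((days - 30) / 3) ** 2)

peak_a = int(np.argmax(source_a))  # day 25
peak_b = int(np.argmax(source_b))  # day 30
print(f"Source A peaks on day {peak_a}; source B peaks on day {peak_b}")

# The earlier peak in A does not mean a prospective algorithm would
# alert first on A: B rises far more steeply and could cross an
# alerting threshold sooner.
```

Comparing the two peak dates is a one-line retrospective summary; it captures
neither peak size nor width, which is exactly the weakness described above.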
The aberration detection method is used to answer the fundamental question
of when an outbreak would be detected. However, when comparing timeliness
results, a weakness of this method is the bias associated with algorithm
selection. Numerous algorithms can be applied to surveillance data, and for
each algorithm, parameter selection affects the time at which the algorithm
will alert. Hence, the apparent timeliness of one data source in a
particular study location can be affected by the appropriateness of the
algorithm and the parameters of the model. Therefore, algorithm parameters
should be set to detect a defined increase that will trigger an algorithmic
alert in the data and reflect the temporal features (day-of-week effects,
seasonality) of a specific location. A sensitivity analysis should also be
conducted to investigate the effect of algorithm parameters.
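As a sketch of how parameter selection shifts the alert date, the toy
detector below flags any day whose count exceeds the mean plus k standard
deviations of the previous seven days (a simple illustrative threshold rule,
not any specific published algorithm). It is applied to a hypothetical
series with a day-of-week cycle and an outbreak ramp beginning on day 36;
varying k changes, and can entirely determine, when the first alert fires.

```python
# Sensitivity of alert timing to the threshold parameter k,
# using a simple illustrative mean + k*sd detector.
import numpy as np

def first_alert_day(counts, window=7, k=2.0):
    """First day on which the count exceeds mean + k*sd of the
    preceding `window` days, or None if no alert ever fires."""
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        if counts[t] > baseline.mean() + k * baseline.std(ddof=1):
            return t
    return None

days = np.arange(70)
# Illustrative series: baseline of 10, a day-of-week cycle, and a
# steady outbreak ramp starting on day 36.
counts = (10
          + 3 * np.sin(2 * np.pi * days / 7)
          + 1.5 * np.maximum(0, days - 35))

for k in (1.0, 2.0, 3.0):
    print(f"k = {k}: first alert on day {first_alert_day(counts, k=k)}")
# A low k alerts on ordinary day-of-week variation long before the
# outbreak begins; a high k may alert late or not at all.
```

A sensitivity analysis of this kind, rerunning the detector over a grid of
parameter values, makes explicit how much of a source's measured timeliness
is an artifact of the chosen threshold.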
Several studies that used the aberration detection method compared the date
of alert in one data source with the date of the peak in the gold-standard
or reference data source. This introduces bias, as prediagnostic data
sources would be expected to alert earlier than the peak in the gold
standard. The bias can be compounded by setting a low threshold value, which
produces an earlier date of detection relative to the peak and hence
inflates the apparent timeliness of the data source (see Figure 1.5). To
obtain a better estimate of the relative timeliness of two data sources, the
alerts generated by an algorithm applied to both sources should be compared
if a well-established indicator does not exist.
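An alert-to-alert comparison can be sketched as follows: the same simple
threshold detector (illustrative, not a specific published algorithm) is
applied to both series, and timeliness is taken as the difference between
the two alert dates rather than between an alert and the reference peak.
The series, lag, and parameters are hypothetical; the prediagnostic source
carries the same noise-free step outbreak three days earlier than the
reference source.

```python
# Alert-to-alert timeliness: apply the same detector to both sources
# and compare alert dates (illustrative, noise-free step outbreaks).
import numpy as np

def first_alert_day(counts, window=7, k=2.0):
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        if counts[t] > baseline.mean() + k * baseline.std(ddof=1):
            return t
    return None

days = np.arange(60)
# Hypothetical sources: the prediagnostic signal appears on day 40,
# the reference (e.g. laboratory-confirmed) signal three days later.
prediagnostic = 20.0 + np.where(days >= 40, 25, 0)
reference = 5.0 + np.where(days >= 43, 25, 0)

alert_pre = first_alert_day(prediagnostic, k=3.0)
alert_ref = first_alert_day(reference, k=3.0)
print(f"Prediagnostic alerts on day {alert_pre}; reference on day {alert_ref}")
print(f"Estimated timeliness gain: {alert_ref - alert_pre} days")
```

Because both alert dates come from the same algorithm and parameters, the
difference between them is not inflated by comparing a threshold crossing
against a peak.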
An alert may also result from an increased daily count due to variability
within the time series. This variability may be known, such as seasonal or
day-of-week effects, or unknown.
As discussed, the basis of outbreak detection by statistical methods is the
process of interpreting and responding to alerts. These may be true alerts
during an epidemic or false alerts generated by artifacts within the data.
Aberrations in the data are not necessarily caused by an outbreak
(Hutwagner et al. 2003). Ultimately, once a statistical aberration is detected, it
should be evaluated to determine its epidemiological or clinical significance
(Hutwagner et al. 2003). The development of new algorithms for outbreak
detection continues, and with it the bias associated with algorithm
selection. However, prospectively, alerts should not be dismissed purely on