availability and accessibility of information relevant to epidemiologists in the light of an exponentially increasing volume and diversity of data and data sources. In this discussion we consider both empirical data collected from experiments, surveys, etc., and information in publications and databases that has already been processed to some extent. Much of what we say applies equally to both. At the more practical end of
epidemiology, in forecasting disease (see also Chapter 9), we highlight recent uses of Bayesian methods to evaluate the performance of decision tools (modern examples of which often rely heavily on IT infrastructure). One of the main advantages
of these methods, as we will show, is that they allow epidemiologists to identify
those diseases for which the potential exists to change grower behaviour by
providing disease or disease risk forecasts. The unifying theme across these areas
of discussion is the use and flow of information in plant disease epidemiology
under uncertainty.
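As a concrete illustration (not taken from the work discussed in this chapter), the sketch below, in Python, shows one common Bayesian calculation used when evaluating a disease forecaster: combining a prior probability that treatment is needed with the forecaster's sensitivity and specificity to obtain the posterior probability of disease given a positive or a negative forecast. The function name and all numerical values are illustrative assumptions.

def posterior_probabilities(prior, sensitivity, specificity):
    # Posterior P(disease | positive forecast) and P(disease | negative forecast),
    # obtained by multiplying the prior odds by the likelihood ratio of the forecast.
    prior_odds = prior / (1.0 - prior)
    lr_positive = sensitivity / (1.0 - specificity)   # likelihood ratio of a positive forecast
    lr_negative = (1.0 - sensitivity) / specificity   # likelihood ratio of a negative forecast
    odds_pos = prior_odds * lr_positive
    odds_neg = prior_odds * lr_negative
    return odds_pos / (1.0 + odds_pos), odds_neg / (1.0 + odds_neg)

# Illustrative values: 20% prior probability that treatment is needed,
# forecaster sensitivity 0.90 and specificity 0.80.
p_pos, p_neg = posterior_probabilities(prior=0.20, sensitivity=0.90, specificity=0.80)
print(f"P(disease | positive forecast) = {p_pos:.2f}")   # about 0.53
print(f"P(disease | negative forecast) = {p_neg:.2f}")   # about 0.03

If neither posterior probability shifts far enough from the prior to cross a grower's action threshold, the forecast is unlikely to change behaviour, however accurate it is; calculations of this kind are one way of approaching the question of which diseases offer that potential.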
12.2 DEFINITION OF INFORMATION TECHNOLOGY IN PLANT DISEASE
EPIDEMIOLOGY
The definition we have adopted is 'the use of computer-based technology to collate,
process and disseminate information and knowledge for application to the study of
plant disease epidemiology'. This includes existing technologies such as computer
databases, statistical and field trial design packages (e.g. Genstat, SAS, S-Plus, Agrobase), automated quantitative diagnostic systems, Laboratory Information Management Systems (LIMS) which may utilise bar-coding tools, and the newer
ones referred to below, but the definition is broad enough to allow the inclusion of
new developments not yet envisaged. Fig. 12.1 summarises the processes involved
in the use of IT to develop epidemiological knowledge from data. Collation,
processing and dissemination of information may operate separately or in an
integrated manner depending upon the purpose. It is important to note that empirical
data will consist of a mixture of facts (i.e. things that are true) and errors (i.e. things that are not true). The extent to which the information comprises only facts will
depend on the quality of the mechanism by which the data are collected and the
quality of the processing (if any) that is applied to the data to filter out errors. Some
databases may simply collate information such as disease incidence or pathotype
distribution. Both knowledge arising from the analysis of information (for example
by statistical methods) and the information itself might be made available for
practical use through some IT medium. In either case, validation by successful use will
ultimately separate those IT-based systems which are useful from those which are not.
In theory, the process of validation-by-use should provide a means for further filtering
of errors from facts and an improvement (if that is possible in the particular context) in
the quality of available information and knowledge. Clearly, the content of databases
and the way in which they are accessed have important consequences for the quality of
information and knowledge that results from their use.
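To make the filtering of errors from facts more concrete, the following sketch applies some very simple plausibility checks to hypothetical disease incidence records before they are collated into a database. The record structure and the rules are assumptions for illustration only; a real system would use checks appropriate to how its data were collected.

# Hypothetical raw records; field names and values are illustrative only.
raw_records = [
    {"field_id": "F01", "date": "2004-06-12", "incidence_pct": 12.5},
    {"field_id": "F02", "date": "2004-06-12", "incidence_pct": 130.0},  # out of range, presumably an error
    {"field_id": "F03", "date": None, "incidence_pct": 4.0},            # incomplete record
]

def is_plausible(record):
    # Reject incomplete records and incidence values outside 0-100%.
    if record["field_id"] is None or record["date"] is None:
        return False
    return 0.0 <= record["incidence_pct"] <= 100.0

accepted = [r for r in raw_records if is_plausible(r)]
rejected = [r for r in raw_records if not is_plausible(r)]
print(f"accepted {len(accepted)} record(s), rejected {len(rejected)}")

Checks of this kind cannot turn errors into facts; they only remove records that are demonstrably implausible, which is one reason why validation by use remains the ultimate filter.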