incidence within a geographical area. For example, ten fields in an area are
inspected for disease and six are found to be infected; the disease prevalence for that
area is 60%.
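The arithmetic is simply the proportion of affected units expressed as a
percentage; as a minimal sketch in Python (the counts below repeat the
hypothetical example above, not survey data):

    def prevalence(n_infected, n_inspected):
        # Disease prevalence (or incidence): percentage of units affected.
        if n_inspected <= 0:
            raise ValueError("number of inspected units must be positive")
        return 100.0 * n_infected / n_inspected

    print(prevalence(6, 10))  # 60.0, as in the example above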
Most assessment keys have been designed to measure disease severity using
either descriptive or pictorial keys. With either type of key, it is essential
that standardization is maintained and that arbitrary categories such as slight,
moderate or severe are avoided. Such broad categories take no account of the
fact that the eye apparently assesses diseased areas in logarithmic steps, as stated by
the Weber-Fechner law for visual acuity (for appropriate stimuli, visual response is
proportional to the logarithm of the stimulus). Thus, below 50% disease severity
the eye assesses the diseased tissue, whereas above this value it judges the
remaining healthy tissue. Horsfall and
Barratt (1945) therefore proposed a logarithmic scale for the measurement of plant
disease severity, in which grades were allotted according to the leaf area diseased:
1 = nil, 2 = 0-3%, 3 = 3-6%, 4 = 6-12% and so on, the class width doubling up
to 50% and narrowing symmetrically above it, to 11 = 97-100% and 12 = 100%. This scale
reads the diseased tissues in logarithmic units below 50% and healthy tissue in the
same units above 50%. Thus, if the Horsfall-Barratt hypothesis is correct, the least
reliable estimates of severity should occur at the 50% level. Forbes and Jeger (1987)
found that the greatest overestimation of severity occurred at levels below 25%,
suggesting that the Horsfall-Barratt hypothesis over-simplifies the stimulus response
relationship of visual disease severity assessment. Hebert (1982) pointed out that
some visual estimates might not obey the Weber-Fechner law. Nutter and Esker
(2006) revisited the Weber-Fechner law using a classical method developed in the
field of psychophysics (the method of comparison of stimuli) and concluded that
although Weber's law appeared to hold true, Fechner's law did not. Furthermore, the
relationship between actual disease severity and estimated severity was found to be
linear rather than logarithmic as proposed by Horsfall and Barratt (1945). There is,
therefore, no single accepted method of making visual estimates of disease severity,
and a linear percentage scale is often used.
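For illustration only, a minimal Python sketch of grading with the
Horsfall-Barratt scale described above (the intermediate class boundaries
follow the commonly tabulated form of the scale and are not given in the text;
boundary values such as exactly 3% are assigned to the lower grade in this
sketch):

    HB_BOUNDS = [0, 3, 6, 12, 25, 50, 75, 88, 94, 97, 100]

    def horsfall_barratt_grade(severity_pct):
        # Map a visually estimated % leaf area diseased to a grade 1-12.
        # Class widths roughly double towards 50% and narrow again above
        # it, mirroring the eye's logarithmic response.
        if not 0 <= severity_pct <= 100:
            raise ValueError("severity must lie between 0 and 100%")
        if severity_pct == 0:
            return 1    # grade 1 = nil
        if severity_pct == 100:
            return 12   # grade 12 = 100% diseased
        for grade, upper in enumerate(HB_BOUNDS[1:], start=2):
            if severity_pct <= upper:
                return grade

    print(horsfall_barratt_grade(42))  # 6, i.e. the 25-50% class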
Chaube and Singh (1991) and James (1983) identified the advantages of the
percentage scale as: the upper and lower limits are always uniquely defined; the
scale is flexible and can be divided and subdivided; it is universally known and can
be used to measure the incidence and severity of disease caused by a foliar or root pathogen; and it can
easily be transformed for epidemiological analysis, e.g. transformation to logits for
calculation of r, the apparent infection rate. The best-known descriptive key to
utilize the percentage scale was that published by the British Mycological Society
(Anon., 1947) for measuring potato late blight (Table 2.3).
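As an illustration of the logit transformation mentioned above, a minimal
Python sketch of the apparent infection rate r (in Vanderplank's sense)
computed from two assessments; the severities and dates are hypothetical:

    import math

    def logit(x):
        # Logit transform of a disease proportion, valid for 0 < x < 1.
        return math.log(x / (1.0 - x))

    def apparent_infection_rate(x1, t1, x2, t2):
        # r = (logit(x2) - logit(x1)) / (t2 - t1), in units of 1/time.
        return (logit(x2) - logit(x1)) / (t2 - t1)

    # Severity rising from 5% to 40% of leaf area over 14 days:
    print(apparent_infection_rate(0.05, 0, 0.40, 14))  # about 0.18 per day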
The pictorial disease assessment key uses standard area diagrams that illustrate
the developmental stages of a disease on small simple units (leaves, fruits) or on
large composite units such as branches or whole plants. Such standard diagrams are
derived from a series of disease symptom pictures that may be in the form of line
drawings, photographs or even preserved specimens. The assessment scale of Cobb
(1892) for wheat rust was among the first to use standard area diagrams; it has
since been joined by numerous others for disease assessment on a wide range of crops (e.g.
Dixon and Doodson, 1971; James, 1971) (Fig. 2.5). Campbell and Madden (1990a)
provided a useful tabular summary of pictorial disease assessment keys available for
measuring disease severity on a range of hosts using the principle of standard area diagrams.