Van Genderen and Lock (1977) and Van Genderen et al. (1978) argued that only maps with 95 percent confidence intervals (i.e., β = 0.05) should be accepted and proposed a sample size of 30. Ginevan (1979) pointed out that Van Genderen et al. (1978) made no allowance for incorrectly rejecting an accurate map. Ginevan's more conservative approach requires a larger sample but, in return, reduces the chance of rejecting an acceptable map. Hay (1979) concluded that the minimum sample size should be 50, larger than the 30 proposed by Van Genderen and Lock (1977).
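This debate is essentially one of binomial acceptance sampling: each checked pixel is either correct or not, and the two risks being traded off are rejecting an accurate map and accepting an inaccurate one. The following sketch illustrates the tradeoff in Python; the 95 and 85 percent accuracy levels and the "accept if at most k errors" rule are illustrative assumptions, not figures from the papers cited above.

# A minimal sketch of the binomial tradeoff behind these sample sizes.
# The accuracy levels and acceptance rule are illustrative assumptions,
# not taken from Van Genderen and Lock (1977), Ginevan (1979), or Hay (1979).
from scipy.stats import binom

def risks(n, k, acc_good=0.95, acc_bad=0.85):
    """Accept the map iff at most k of n checked pixels are misclassified.

    Returns the probability of rejecting a map whose true accuracy is
    acc_good (producer's risk) and of accepting a map whose true
    accuracy is acc_bad (consumer's risk).
    """
    producers_risk = 1 - binom.cdf(k, n, 1 - acc_good)
    consumers_risk = binom.cdf(k, n, 1 - acc_bad)
    return producers_risk, consumers_risk

# A larger sample with a proportionally relaxed threshold lowers both
# risks here, reducing the chance of rejecting an acceptable map.
for n, k in [(30, 3), (50, 5)]:
    print(n, k, risks(n, k))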
Given a fixed budget for the sampling operation, the number of samples must be traded off against the area covered by each sample unit. Curran and Williamson (1986) asked whether many small-area samples or a few large-area samples should be taken. The answer depends on the cover type being mapped: a highly variable cover type such as rainforest is better suited to many small-area samples, whereas for more homogeneous cover types it is more efficient to take fewer large-area samples.
Generally, mapping of heterogeneous classes such as forest and residential land is more accurate at 80-m resolution than at finer resolutions such as 30 m, whereas more homogeneous classes such as agricultural land and rangeland are more accurately mapped at 30 m than at 80 m (Toll 1984). The reason is the tradeoff between ground element size and image pixel resolution: coarse pixels average out the within-class variability of heterogeneous classes, while fine pixels resolve homogeneous classes with fewer mixed boundary pixels.
Based on the error matrix, different measures of accuracy can be derived. A commonly cited measure of mapping accuracy is the overall accuracy, which is the number of correctly classified pixels (i.e., the sum of the major diagonal cells in the error matrix) divided by the total number of pixels checked (table 11.3). Anderson et al. (1976) suggested that the minimum level of interpretation accuracy in the identification of land use and land cover categories should be 85 percent.
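To make the computation concrete, overall accuracy can be read directly off an error matrix. The following is a minimal sketch assuming NumPy, with rows holding the classified labels and columns the reference labels; the matrix values are invented for illustration and are not the contents of table 11.3.

import numpy as np

# Hypothetical 3-class error matrix: rows = classified, columns = reference.
error_matrix = np.array([
    [65,  4, 22],
    [ 6, 81, 11],
    [ 0,  5, 85],
])

correct = np.trace(error_matrix)    # sum of the major diagonal cells
total = error_matrix.sum()          # total number of pixels checked
overall_accuracy = correct / total  # 231 / 279, about 82.8 percent here
print(f"Overall accuracy: {overall_accuracy:.1%}")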
Classification accuracy can also be reported for each class, as the ratio of the number of correctly classified pixels in a class to the total number of pixels in that class (Kalensky and Scherk 1975).
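Continuing the sketch above, the per-class version divides each diagonal cell by a class total. Whether the row or the column total is used distinguishes what the remote sensing literature calls user's and producer's accuracy; the orientation chosen here follows the row-classified convention assumed above.

# Per-class accuracy from the same hypothetical error matrix.
producers_accuracy = np.diag(error_matrix) / error_matrix.sum(axis=0)  # by reference totals
users_accuracy = np.diag(error_matrix) / error_matrix.sum(axis=1)      # by classified totals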
Cohen (1960) and Bishop et al. (1975) defined a measure of overall agreement between image data and the reference (ground truth) data called Kappa, or K. K ranges in value from 0 (no association; that is, any agreement between the two images equals chance agreement) to 1 (full association, or perfect agreement between the two images). K can also be negative, which signifies less than chance agreement.
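K can be computed from the error matrix using the standard formula K = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e the proportion of agreement expected by chance from the row and column marginals. Continuing the hypothetical sketch above:

# Cohen's kappa from the same hypothetical error matrix.
n = error_matrix.sum()
p_observed = np.trace(error_matrix) / n                                     # observed agreement
p_expected = (error_matrix.sum(axis=1) @ error_matrix.sum(axis=0)) / n**2  # chance agreement
kappa = (p_observed - p_expected) / (1 - p_expected)                        # about 0.74 here
print(f"Kappa: {kappa:.3f}")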
The methods just discussed for quantifying error in raster images are
equally applicable to quantifying error in vector polygons. Instead of checking