determined the most appropriate label for the site, and the label was entered in the appropriate box
under the “classification” column in the form. This label determined in which row of the matrix
the site would be tallied and was used for calculation of the deterministic error matrix. After
assigning the label for the site, the remaining possible map labels were evaluated as “good,”
“acceptable,” or “poor” candidates for the site's label. For example, a site might fall near the
classification scheme margin between forest and shrub/scrub. In this instance, the analyst might
rate forest as most appropriate but shrub/scrub as “acceptable.” As each site was interpreted, the
deterministic and fuzzy assessment reference labels were entered into the accuracy assessment
software for creation of the error matrix.
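Conceptually, each interpreted site carries one deterministic (most appropriate) label plus fuzzy ratings for the remaining classes. The Python sketch below illustrates one way such a record might be structured; the class and field names are hypothetical and are not those of the actual accuracy assessment software.

```python
from dataclasses import dataclass, field

# Hypothetical record for one accuracy assessment site, mirroring the form
# described above: one most appropriate reference label plus a fuzzy rating
# ("good", "acceptable", or "poor") for each remaining candidate class.
@dataclass
class ReferenceSite:
    site_id: int
    map_label: str      # label assigned by the map being assessed
    best_label: str     # most appropriate reference label (deterministic)
    ratings: dict = field(default_factory=dict)  # other class -> rating

# Example: a site near the forest/shrub-scrub classification margin.
site = ReferenceSite(
    site_id=101,
    map_label="shrub/scrub",
    best_label="forest",
    ratings={"shrub/scrub": "acceptable", "water": "poor"},
)
```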
12.3.4 Compilation of the Deterministic and Fuzzy Error Matrix
Following reference site labeling, the error matrix was automatically compiled in the accuracy
assessment software. Each accuracy assessment site was tallied in the matrix in the column (based
on the map label) and row (based on the most appropriate reference label). The deterministic (i.e.,
traditional) overall accuracy was calculated by dividing the total of the diagonal by the total number
of accuracy assessment sites. The producer's and user's accuracies were calculated by dividing the number of correctly classified sites on the diagonal by the total number of reference sites for the class (producer's accuracy) or map sites for the class (user's accuracy). That is, from the map producer's viewpoint: of all the accuracy assessment sites referenced as a particular class, what proportion was mapped correctly? Conversely, class accuracy by column represents “user's” class accuracy: for a particular class on the map, it estimates the probability that a site labeled as that class on the map actually belongs to that class.
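As a minimal illustration of these calculations, the sketch below computes the overall, producer's, and user's accuracies from a small error matrix laid out as in the text (rows index the reference label, columns the map label). The classes and counts are invented for illustration, not the study's data.

```python
import numpy as np

# Illustrative error matrix: rows = reference labels, columns = map labels.
classes = ["forest", "shrub/scrub", "water"]
matrix = np.array([
    [50,  9,  1],   # reference: forest
    [12, 30,  2],   # reference: shrub/scrub
    [ 0,  3, 40],   # reference: water
])

total = matrix.sum()
overall = np.trace(matrix) / total        # diagonal total / all sites

# Producer's accuracy: correct sites / reference (row) total per class.
producers = np.diag(matrix) / matrix.sum(axis=1)
# User's accuracy: correct sites / map (column) total per class.
users = np.diag(matrix) / matrix.sum(axis=0)

print(f"overall deterministic accuracy = {overall:.1%}")
for name, p, u in zip(classes, producers, users):
    print(f"{name}: producer's = {p:.1%}, user's = {u:.1%}")
```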
Nondiagonal cells in the matrix contain two tallies, which can be used to distinguish class labels
that are uncertain or that fall on class margins from class labels that are most probably in error.
The first number represents those sites in which the map label matched a “good” or “acceptable”
reference label in the fuzzy assessment (Table 12.3). Therefore, even though the label was not
considered the most appropriate, it was considered acceptable given the fuzziness of the classification system and the minimal quality of some of the reference data. These sites are considered a
“match” for estimating fuzzy assessment accuracy. The second number in the cell represents those
sites where the map label was considered poor (i.e., an error).
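One way to picture such a nondiagonal cell is as a pair of counters, sketched below; the FuzzyCell name and fields are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical structure for a nondiagonal matrix cell: the first tally
# counts sites whose map label was rated "good" or "acceptable" in the
# fuzzy assessment (a match); the second counts sites rated "poor" (an error).
@dataclass
class FuzzyCell:
    matches: int = 0   # map label rated "good" or "acceptable"
    errors: int = 0    # map label rated "poor"

    def total(self) -> int:
        return self.matches + self.errors
```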
The fuzzy assessment overall accuracy was estimated as the percentage of sites where the “best,”
“good,” or “acceptable” reference label(s) matched the map label. Individual class accuracy was estimated by dividing the number of matches in that class's row or column by the row or column total. Class accuracy by row represents “producer's” class accuracy.
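The sketch below works through these fuzzy estimates on a small illustrative matrix, storing each cell as a (matches, errors) pair as described above. Diagonal cells are deterministic matches, so their error tally is zero by construction; all counts are invented.

```python
# Illustrative 2x2 fuzzy matrix: rows = reference, columns = map.
# Each cell is (fuzzy_matches, errors); diagonal cells are exact matches.
matrix = [
    [(50, 0), (20, 4)],   # reference: class A
    [( 3, 9), (30, 0)],   # reference: class B
]

def cell_total(c):
    return c[0] + c[1]

total = sum(cell_total(c) for row in matrix for c in row)
# Fuzzy matches: all diagonal sites plus off-diagonal "good"/"acceptable" tallies.
matched = sum(
    cell_total(c) if i == j else c[0]
    for i, row in enumerate(matrix)
    for j, c in enumerate(row)
)
print(f"fuzzy overall accuracy = {matched / total:.1%}")

# Producer's fuzzy accuracy for class A: matches in row 0 / row 0 total.
row = matrix[0]
row_matches = cell_total(row[0]) + sum(c[0] for c in row[1:])
print(f"class A producer's (fuzzy) = {row_matches / sum(map(cell_total, row)):.1%}")
```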
12.4 RESULTS
Table 12.3 reports both the deterministic and fuzzy assessment accuracies. The overall and
individual class accuracies and the Kappa statistic are displayed. Overall accuracy is estimated in
a deterministic way by summing the diagonal and dividing by the total number of sites. For this
matrix, overall deterministic accuracy would be estimated at 48.6% (151/311). However, this
approach ignores any variation in the interpretation of reference data and the inherent fuzziness at
class boundaries. Including the “good” and “acceptable” ratings, overall accuracy is estimated at
74% (230/311). The large difference between these two estimates reflects the difficulty in distin-
guishing several of the classes, both from TM imagery and from the NTM. For example, a total
of 31 sites were labeled as evergreen forest on the map and deciduous forest in the reference data.
However, 24 of those sites were labeled as acceptable, meaning they were either at or near the
class break or were inseparable from the TM and/or NTM data (Appendix A).
The Kappa statistic was 0.37. The Kappa statistic adjusts the estimate of overall accuracy for
the accuracy expected from a purely random assignment of map labels and is useful for comparing the error matrices produced by different maps or classification methods.
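For reference, Cohen's Kappa is computed from the error matrix as K = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement (the deterministic overall accuracy) and p_e is the chance agreement derived from the row and column totals. The sketch below applies this standard formula to an illustrative matrix; it is not the study's code or data.

```python
import numpy as np

# Cohen's Kappa from an error matrix (rows = reference, columns = map).
def kappa(matrix: np.ndarray) -> float:
    n = matrix.sum()
    p_o = np.trace(matrix) / n                      # observed agreement
    # Chance agreement: sum of (row total x column total) over n squared.
    p_e = (matrix.sum(axis=1) * matrix.sum(axis=0)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

m = np.array([[50, 9, 1], [12, 30, 2], [0, 3, 40]])  # illustrative counts
print(f"Kappa = {kappa(m):.2f}")
```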