classification errors associated with transitions between cover types? How accurate are the classifi-
cations within relatively large homogeneous areas of the map? Deriving a spatial representation of
classification error is another relevant, but supplemental, objective that places additional requirements
on the accuracy assessment analysis that may not have been planned for at the design stage.
Over time, objectives, or the priorities among them, may change. Such changes need not pose a
major problem for accuracy assessment projects, but they do occur: one example is revising the
classification scheme once it is recognized that certain LC classes cannot be mapped well. Another
example occurs when the map is revised (updated) while the accuracy assessment is in progress.
Some of the additional analyses described for Principle 2 also represent a change in objectives.
Insufficient budget is a common affliction of accuracy assessments (Scepan, 1999). Resource
allocation is dominated by the mapping activity, with scant resources available for accuracy assess-
ment. Adequate resources may exist to obtain reasonably precise, class-specific estimates of accu-
racy over broad spatial regions. For example, the NLCD accuracy assessment provides relatively
low standard errors for class-specific accuracy for each of 10 large regions of the U.S. However,
once Principle 2 manifests itself, data that serve well for regional estimates may look woefully
inadequate for subregional accuracy objectives. Edwards et al. (1998) and Scepan (1999) recognized
these phenomena for state-level and global mapping. In the former case, resources were inadequate
to estimate class-specific accuracy with acceptable precision for all three ecoregions found in the
state of Utah. In the global application, the data were too sparse to provide precise class-specific
estimates for each continent.
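To make concrete what a "precise class-specific estimate" involves, the sketch below computes user's accuracy for a single map class, together with its approximate standard error, from an error matrix. It assumes the reference data come from a simple random sample (the binomial approximation to the standard error of a proportion); the class names, counts, and the helper function are hypothetical, not drawn from the NLCD or any project discussed here.

```python
import math

def users_accuracy_and_se(error_matrix, map_class):
    """User's accuracy for one map class and its approximate standard
    error, assuming the error matrix tallies a simple random sample of
    reference sites.  (Hypothetical helper for illustration only.)"""
    row = error_matrix[map_class]       # counts by reference class
    n_row = sum(row.values())           # sample size for this map class
    correct = row[map_class]            # sites where map and reference agree
    ua = correct / n_row
    # Binomial approximation to the standard error of a proportion
    se = math.sqrt(ua * (1.0 - ua) / n_row)
    return ua, se

# Hypothetical error matrix: rows are map classes, columns reference classes
matrix = {
    "forest": {"forest": 90, "urban": 5, "water": 5},
    "urban":  {"forest": 8,  "urban": 40, "water": 2},
    "water":  {"forest": 2,  "urban": 1,  "water": 17},
}
ua, se = users_accuracy_and_se(matrix, "forest")   # 0.90 with SE 0.03
```

The standard error shrinks with the square root of the per-class sample size, which is why data adequate for regional estimates can become imprecise once they are subdivided among subregions: each subregion retains only a fraction of the sites for a given class.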
Timeliness of accuracy assessment reporting is hampered by the need for the map to be
completed prior to drawing an appropriately targeted sample, and any accuracy assessment activity
concurrent with map production detracts from timely completion of the map. Managing and quality-
checking data is a time-consuming, tedious task for the large datasets of accuracy assessment, and
the statistical analysis is not trivial when the design is complex and standard errors are required.
Lastly, neither the time nor the financial resources are usually available to support research that
would allow tailoring the sampling design to specifically target objectives and characteristics of
each individual mapping project. Comparing different sampling designs using data directly relevant
to the specific mapping project requires both time and money. Instead of this focused research
approach, design choices must often be based on judgment and experience, without hard data
to support the decision.
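The kind of design comparison that budgets rarely permit can at least be prototyped by simulation. The sketch below contrasts simple random sampling with stratified random sampling (equal allocation by map class) for estimating the user's accuracy of a rare class, using the empirical standard deviation of the estimate over repeated draws as the criterion. The population, class labels, accuracy rates, and function names are all hypothetical, intended only to show the shape of such a comparison.

```python
import random
import statistics

def rare_class_se(population, design, n, reps=1000, rare="wetland", seed=42):
    """Monte Carlo sketch: empirical standard deviation of the user's
    accuracy estimate for a rare map class under a given sampling design.
    (Hypothetical helper; not from any project cited in the text.)"""
    rng = random.Random(seed)
    classes = sorted({c for c, _ in population})
    strata = {c: [u for u in population if u[0] == c] for c in classes}
    estimates = []
    for _ in range(reps):
        if design == "srs":
            sample = rng.sample(population, n)
        else:  # stratified random sampling, equal allocation per map class
            per = n // len(classes)
            sample = [u for c in classes for u in rng.sample(strata[c], per)]
        hits = [ok for c, ok in sample if c == rare]
        if hits:  # user's accuracy of the rare class in this sample
            estimates.append(sum(hits) / len(hits))
    return statistics.stdev(estimates)

# Hypothetical population: (map class, correctly classified?) per pixel,
# with "wetland" both rare (10% of the map) and less accurately mapped.
gen = random.Random(1)
population = ([("forest", gen.random() < 0.90) for _ in range(9000)]
              + [("wetland", gen.random() < 0.70) for _ in range(1000)])

sd_srs = rare_class_se(population, "srs", n=300)
sd_strat = rare_class_se(population, "stratified", n=300)
```

Equal allocation guarantees the rare class a fixed share of the sample sites, so the stratified estimate is typically more stable; proportional allocation would behave much like simple random sampling for this purpose. In practice such comparisons require project-specific data and analyst time, which is precisely the resource the text notes is seldom available.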
Sampling design is one of the core challenges facing accuracy assessment, and future devel-
opments in this area will contribute to more successful assessments. The goal is to implement a
statistically defensible sampling design that is cost-effective and addresses the multitude of objec-
tives that multiple users and applications of the map generate. The future direction of sampling
design in accuracy assessment must go beyond the basic designs featured in textbooks (Campbell,
1987; Congalton and Green, 1999) and repeated in several reviews of the field (Congalton, 1991;
Janssen and van der Wel, 1994; Stehman, 1999; McGwire and Fisher, 2001; Foody, 2002). While
these designs are fundamentally sound and introduce most of the basic structures required of good
design (e.g., stratification, clusters, randomization), they are inadequate for assessing large-area
maps given the reality of budgetary and practical constraints.