option is to only do comparisons in areas of relatively 'pristine' or 'pure' classes where confidence in the ground
measurements is very high.
A more pragmatic issue with using hyperspectral imagery is the computer processing power and memory needed for digital image processing. Long hyperspectral flight lines are both spatially extensive and contain hundreds of individual layers (the spectral channels), and each additional processing step generates yet more layers of comparable size. This was a larger problem in the 'early days' of hyperspectral remote sensing of rivers (the 1990s). Though the problem is fairly easily dealt with today through careful planning of computing resources, it can become serious in cases where many orthorectified datasets are joined together.
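To give a sense of the data volumes involved, the short calculation below estimates the size of a single hyperspectral flight line and of a multi-line project; the dimensions and counts are illustrative assumptions only, not figures from any particular survey.

# Rough, illustrative estimate of hyperspectral data volumes.
# All dimensions below are assumptions, not values from the text.
rows, cols = 20_000, 1_000        # along-track and across-track pixels
bands = 128                       # spectral channels
bytes_per_sample = 2              # 16-bit radiance values

raw_gb = rows * cols * bands * bytes_per_sample / 1e9
print(f"One flight line: {raw_gb:.1f} GB")    # about 5 GB

# Each processing step (atmospheric correction, orthorectification,
# derived products) tends to add layers of comparable size, and joining
# many orthorectified lines multiplies the total again.
n_lines, n_derived_products = 10, 3
print(f"Project total: {raw_gb * n_lines * (1 + n_derived_products):.0f} GB")
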
The final problem with hyperspectral use in fluvial
environments is not really a problem, but rather a matter
of project specification. There are very few situations
today where we have long-term historical hyperspectral
data that we can compare with modern hyperspectral
data. If change detection over long time spans is what is
required for a particular project, hyperspectral imaging is
perhaps better supplanted by more traditional imagery.
4.5 Image processing techniques
The conversion of hyperspectral imagery into maps quantitatively showing important fluvial environments has been the primary area of research in the field during the past fifteen years. While there are many different processing techniques, they can be divided into two broad groups. The first are statistical-empirical techniques; these include algorithms that compare image data to reference data (ground or lab-based), and these comparisons allow generalisations that can be applied to entire images. The second group is the physically-based approaches. Physically-based radiative transfer models such as Hydrolight use knowledge of absorption, scattering and backscattering to model light propagation through the medium of interest (e.g. a water column). This second approach has the advantage of not necessarily needing ground reference data, but comes at the price of being more technical and complex, requiring specialised software (such as Hydrolight) and more experienced personnel. It has also been used more rarely in the fluvial environment, though it is much more standard in the ocean water science community. Some techniques attempt to combine statistical and physically-based methods. Unfortunately for the river manager community, there is no broad consensus on exactly which techniques ought to be used in a given fluvial situation.
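Before turning to the statistical-empirical methods in detail, the sketch below gives a flavour of the physically-based route. It implements a highly simplified shallow-water reflectance model of the kind that full radiative transfer codes such as Hydrolight treat far more rigorously; the coefficient values are invented purely for illustration.

import numpy as np

# Highly simplified shallow-water reflectance model (a crude stand-in for
# full radiative-transfer codes such as Hydrolight). All coefficients are
# invented for illustration only.
K_d   = 0.45    # effective attenuation coefficient of the water column (1/m)
R_b   = 0.30    # reflectance of the river bed at this wavelength
R_inf = 0.02    # reflectance of optically deep water

def reflectance(depth_m):
    """Reflectance of a water column of the given depth over a bright bed."""
    return R_inf + (R_b - R_inf) * np.exp(-2.0 * K_d * depth_m)

def invert_depth(r_observed):
    """Invert the model to recover depth from an observed reflectance."""
    return -np.log((r_observed - R_inf) / (R_b - R_inf)) / (2.0 * K_d)

print(round(reflectance(0.5), 3))                # shallow water looks 'bright'
print(round(invert_depth(reflectance(1.2)), 2))  # recovers the 1.2 m depth

Because such a model is driven by physical coefficients rather than by a calibration dataset, depth can in principle be recovered without ground reference data, which is precisely the appeal of this family of methods noted above.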
Statistical-empirical techniques include such meth-
ods as supervised classification (automated classification
based on user-input training targets), unsupervised clas-
sification (automated production of mapped clusters
or categories based on statistical differentiation), and
regression (statistically-constructed continuous functions
between an image variable and a ground variable). More
advanced statistical-empirical approaches may use a number of such techniques in sequence or combination.
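As a concrete illustration of the supervised, statistical-empirical route, the sketch below trains a Gaussian maximum-likelihood style classifier (of the kind discussed in the next paragraph) on user-supplied training spectra and then classifies every pixel in an image; all arrays, class names and dimensions are synthetic stand-ins, not data from any cited study.

import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n_bands = 128

# Synthetic training spectra digitised over known habitat patches:
# 200 pixel spectra per class (values invented for illustration).
classes = ["pool", "riffle", "glide"]
X_train = np.vstack([rng.normal(loc=i, scale=1.0, size=(200, n_bands))
                     for i, _ in enumerate(classes)])
y_train = np.repeat(classes, 200)

# With equal numbers of training pixels per class, QDA assigns each pixel
# to the class whose fitted multivariate Gaussian gives it the highest
# likelihood, i.e. a maximum likelihood supervised classification.
mlc = QuadraticDiscriminantAnalysis()
mlc.fit(X_train, y_train)

# Classify a synthetic hyperspectral scene (rows x cols x bands).
cube = rng.normal(loc=1.0, scale=1.5, size=(50, 60, n_bands))
habitat_map = mlc.predict(cube.reshape(-1, n_bands)).reshape(50, 60)
print(habitat_map.shape, np.unique(habitat_map))
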
The most commonly used statistical-empirical meth-
ods in river remote sensing are supervised classification
methods. While there are several different numerical
approaches to supervised classification, all basically try
to find separation rules that group pixels whose spectral signatures are similar to those of the training targets a researcher has input as 'true' examples on the ground. One
such approach, maximum likelihood classification, is very
commonly used with multispectral data, and is also commonly used by practitioners with hyperspectral data. Maximum likelihood supervised classification (Mar-
cus, 2002) yielded producer's accuracies of 85%-91% for
different in-stream habitats using hyperspectral image
data. Mapping of these habitats in Yellowstone National
Park (Figure 4.5) included features such as pools, glides,
and riffles and used 128 channel hyperspectral imagery
at 1m spatial resolution. Further work with this same
imagery (Marcus et al., 2003) used a combination of
maximum likelihood classification and principal com-
ponents analysis (PCA) to map in-stream habitats over
a wider range of scales, as well as map woody debris
occurrence and water depths. Calibration depths were
measured in the field, and these were regressed (using
multiple linear regression) against several PCA compo-
nent images to produce water depth maps with R² values
of 0.28 to 0.99 depending on the size of the stream and
on the habitat type. These researchers were able to map
woody debris by using a Matched Filter technique trained
on field maps of fluvial wood. While overall accuracies using this technique were quite high (83%), it was apparent that the technique was working even better than the validation statistics suggested, because the imagery was picking up wood that field crews had missed owing to the small size of some of the pieces. Tech-
niques such as PCA are very useful for visualising the high data dimensionality associated with hyperspectral imagery, and for visually separating various riverscape
elements (Figure 4.6). Hyperspectral imagery and supervised classification have also been extended to estuary and tidal environments.
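The depth-mapping workflow described above, combining PCA with multiple linear regression against field-measured calibration depths, can be sketched as follows; the cube, calibration points and depths are synthetic stand-ins rather than the Yellowstone data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for a georeferenced hyperspectral cube (rows, cols, bands).
cube = rng.random((80, 100, 128))
pixels = cube.reshape(-1, 128)

# Compress the 128 spectral channels into a few principal-component images.
pca = PCA(n_components=5)
scores = pca.fit_transform(pixels)      # (n_pixels, 5) component scores

# Field-surveyed calibration points: pixel indices and measured depths in m
# (values invented for illustration).
calib_idx = rng.choice(pixels.shape[0], size=40, replace=False)
calib_depths = rng.uniform(0.1, 1.5, size=40)

# Multiple linear regression of depth against the PCA component scores.
reg = LinearRegression().fit(scores[calib_idx], calib_depths)
print("R^2 at calibration points:",
      round(reg.score(scores[calib_idx], calib_depths), 2))

# Apply the fitted relation to every pixel to produce a continuous depth map.
depth_map = reg.predict(scores).reshape(cube.shape[:2])
print(depth_map.shape)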