is in a different class (class 1) to that of the next two nearest neighbours (class 2); a different result occurs when there is more smoothing in the feature space (when the value of k is increased).
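By way of illustration (a minimal sketch using hypothetical two-dimensional feature vectors, not data from any of the studies cited here), the k-nearest neighbour rule can be written as follows; the test vector's single nearest neighbour belongs to class 1 while its next two nearest neighbours belong to class 2, so the assigned label changes from class 1 to class 2 as k increases from 1 to 3.

import numpy as np

def knn_classify(train, labels, sample, k):
    # k-nearest neighbour rule: majority class among the k closest training vectors.
    order = np.argsort(np.linalg.norm(train - sample, axis=1))[:k]
    classes, counts = np.unique(labels[order], return_counts=True)
    return classes[np.argmax(counts)]

# Hypothetical 2-D texture feature vectors for two classes.
train = np.array([[1.4, 1.4], [0.6, 0.7], [0.5, 0.5],    # class 1
                  [1.7, 1.6], [1.6, 1.7], [2.2, 2.1]])   # class 2
labels = np.array([1, 1, 1, 2, 2, 2])

sample = np.array([1.5, 1.5])    # nearest neighbour is class 1,
for k in (1, 3):                 # next two nearest are class 2
    print(k, knn_classify(train, labels, sample, k))   # k = 1 gives class 1, k = 3 gives class 2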
The Brodatz database actually contains 112 textures, but few descriptions have been evaluated on the whole database; most studies concentrate on a subset. It has been shown that the SGF description can afford better classification capability than the co-occurrence matrix and the Fourier transform features (Liu's features) (Chen, 1995). In the experimental procedure, the Brodatz pictures were scanned into 256 × 256 images, each of which was split into sixteen 64 × 64 sub-images. Nine of the sub-images were selected at random and the results were
classified using leave-one-out cross-validation (Lachenbruch, 1968). Leave-one-out refers to a procedure in which one of the samples is selected as the test sample and the others form the training data (this is the leave-one-out rule); cross-validation means that the test is repeated for all samples, so that each sample becomes the test data once. In the comparison, the eight
optimal Fourier transform features were used (Liu, 1990), and the five most popular
measures from the co-occurrence matrix. The correct classification rate, the proportion of samples attributed to the correct class, showed better performance by the combination of statistical and geometric features (86%) than by the use of single measures. The enduring capability of the co-occurrence approach was reflected by its performance (65%) in comparison with the Fourier features (33%, whose poor performance is rather surprising). An independent
study (Walker, 1996) has confirmed the experimental advantage of SGF over the co-
occurrence matrix, based on a (larger) database of 117 cervical cell specimen images.
Another study (Ohanian, 1992) concerned the features which optimised classification rate
and compared co-occurrence, fractal-based, Markov random field and Gabor-derived features.
By analysis of synthetic and real imagery, using the k-nearest neighbour rule, the results
suggested that co-occurrence offered the best overall performance. More recently (Porter,
1996), wavelets, Gabor wavelets and Gaussian Markov random fields have been compared
(on a limited subset of the Brodatz database) to show that the wavelet-based approach had
the best overall classification performance (in noise as well) together with the smallest
computational demand.
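As a minimal sketch of this experimental procedure (assuming hypothetical feature vectors and the nearest-neighbour rule in place of the actual SGF, co-occurrence and Fourier measures), leave-one-out cross-validation and the correct classification rate can be computed as follows.

import numpy as np

def knn_classify(train, labels, sample, k):
    # k-nearest neighbour rule, as in the earlier sketch.
    order = np.argsort(np.linalg.norm(train - sample, axis=1))[:k]
    classes, counts = np.unique(labels[order], return_counts=True)
    return classes[np.argmax(counts)]

def leave_one_out_rate(features, labels, k=1):
    # Each sample is the test sample once; all the others form the training data.
    correct = 0
    for i in range(len(features)):
        train = np.delete(features, i, axis=0)
        train_labels = np.delete(labels, i)
        if knn_classify(train, train_labels, features[i], k) == labels[i]:
            correct += 1
    # Correct classification rate: fraction of samples assigned to their true class.
    return correct / len(features)

# Hypothetical feature vectors standing in for texture measures of sub-images.
features = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.2], [1.1, 1.0],
                     [2.0, 2.1], [2.2, 1.9], [1.9, 2.2], [2.1, 2.0]])
labels = np.array([1, 1, 1, 1, 2, 2, 2, 2])
print(leave_one_out_rate(features, labels, k=3))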
8.4.2 Other classification approaches
Classification is the process by which we attribute a class label to a set of measurements.
Essentially, this is the heart of pattern recognition: intuitively, there must be many approaches.
These include statistical and structural approaches: a review can be found in Shalkoff
(1992) and a more modern view in Cherkassky and Mulier (1998). One major approach is
to use a neural network, which is a common alternative to using a classification rule.
Essentially, modern approaches centre around using multi-layer perceptrons with artificial
neural networks in which the computing elements aim to mimic properties of neurons in
the human brain. These networks require training, typically by error back-propagation, which aims to minimise the classification error on the training data. After training, the network should be able to recognise the test data (the aim is to learn its structure): the output of a neural network can be arranged to be class labels. One approach (Muhamad, 1994) shows how texture metrics can be used with neural nets as classifiers; another uses cascaded neural nets for texture extraction (Shang, 1994). Neural networks belong to a research field that has shown immense growth in the past two decades; further details may be found in Michie (1994), Bishop (1995) (often a student favourite), and more
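To give a flavour of the multi-layer perceptron classifier described above (a rough, self-contained sketch with hypothetical feature vectors and a single hidden layer, not the network used in any of the cited studies), the weights can be adjusted by error back-propagation to reduce the classification error on the training data, with the class label taken as the output unit of largest activation.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Hypothetical texture feature vectors and one-hot class labels (two classes).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
              [0.8, 0.9], [0.9, 0.8], [0.85, 0.75]])
T = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], dtype=float)

def add_bias(a):
    # Append a constant input of 1 so each layer has a bias weight.
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(scale=0.5, size=(3, 4))   # (2 inputs + bias) -> 4 hidden units
W2 = rng.normal(scale=0.5, size=(5, 2))   # (4 hidden + bias) -> 2 output units

eta = 0.5                                  # learning rate
for _ in range(5000):
    # Forward pass through the two layers of sigmoid units.
    H = sigmoid(add_bias(X) @ W1)
    Y = sigmoid(add_bias(H) @ W2)
    # Error back-propagation of the squared classification error.
    dY = (Y - T) * Y * (1 - Y)                # output-layer deltas
    dH = (dY @ W2[:-1].T) * H * (1 - H)       # hidden-layer deltas (bias row excluded)
    W2 -= eta * add_bias(H).T @ dY            # gradient-descent weight updates
    W1 -= eta * add_bias(X).T @ dH

# The class label is the output unit with the largest activation.
H = sigmoid(add_bias(X) @ W1)
print(np.argmax(sigmoid(add_bias(H) @ W2), axis=1))   # should print [0 0 0 1 1 1]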