
7.4 Further Information

The grass-fire algorithm can be modified to also operate on gray-scale and color images. The first modification is that the algorithm does not scan the entire image, but instead starts at a so-called seed point, often defined interactively by a user. The second modification is that an object pixel is a pixel within a certain gray-scale or color range. The range can, for example, be defined as the value of the seed point ± a small value. A more robust approach is to define the range based on the statistics of the pixels located in the vicinity of the seed point, see Appendix C. The effect of this algorithm is that a region centered around the seed point is selected. One might think of the algorithm as a combination of thresholding and connected component analysis. The algorithm is known as region growing and can, for example, be applied to remove the red-eye effect in pictures.
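The idea can be sketched in a few lines of Python. This is a minimal illustration, not the book's code: the function name, the 4-connectivity choice, and the fixed seed ± tolerance acceptance rule are assumptions made for the example.

```python
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    gray-scale value lies within seed value +/- `tolerance`."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    visited = [[False] * w for _ in range(h)]
    visited[seed[0]][seed[1]] = True
    region = []
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny][nx]:
                if abs(image[ny][nx] - seed_val) <= tolerance:
                    visited[ny][nx] = True
                    queue.append((ny, nx))
    return region
```

Replacing the fixed tolerance test with one based on the local mean and standard deviation around the seed gives the more robust statistical variant mentioned above.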

The grass-fire algorithm is not the only connected component analysis algorithm that exists. But no matter which algorithm is used, it is very often combined with the feature extraction process, since both need to process each pixel in a BLOB. Combining them will speed up the system. Many features other than those described in this chapter exist, especially more advanced shape features such as Hu moments. Furthermore, many new features can be defined/optimized with respect to a concrete application.
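Combining labeling and feature extraction into a single pass can be sketched as follows. This is an illustrative assumption about how one might structure it (the function name and the choice of area and bounding box as features are mine), not the book's implementation:

```python
from collections import deque

def label_with_features(binary):
    """Grass-fire labeling that accumulates per-BLOB features
    (area, bounding box) in the same pass over the pixels."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    features = {}           # label -> {"area": ..., "bbox": ...}
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                area, y0, y1, x0, x1 = 0, y, y, x, x
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    area += 1                       # feature: area
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            if binary[ny][nx] and labels[ny][nx] == 0:
                                labels[ny][nx] = next_label
                                queue.append((ny, nx))
                features[next_label] = {"area": area, "bbox": (y0, x0, y1, x1)}
    return labels, features
```

Because every BLOB pixel is visited exactly once, the features come essentially for free compared to labeling first and then re-scanning each BLOB.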

A common question when doing BLOB classification is whether a simple box classifier is sufficient. The answer depends on the application. If the feature vectors of the non-object BLOBs and the object BLOBs are far apart in the feature space, then the exact position and shape of the decision region is not critical and hence a box classifier will suffice. This is the situation in Fig. 7.7. The accuracy of the box classifier goes down as the feature vectors become similar. This is illustrated in Fig. 7.9, where it can be seen that the weighted Euclidean distance classifier outperforms the box classifier.

Another line of argument is that the number of parameters that need to be defined in the box classifier (the shape of the rectangle) increases as the number of features increases. In the weighted Euclidean distance classifier only one parameter (a threshold on the distance) has to be decided, independent of how many features are used.
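The parameter-count difference can be made concrete with a small sketch. Both functions below are illustrative assumptions (names and signatures are mine): the box classifier needs two parameters (min, max) per feature, while the weighted Euclidean distance classifier needs only the single distance threshold.

```python
def box_classify(x, box):
    """Box classifier: accept if every feature lies inside its interval.
    `box` holds one (lo, hi) pair per feature -> 2N parameters."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, box))

def weighted_euclidean_classify(x, prototype, stds, threshold):
    """Weighted Euclidean distance classifier: accept if the distance to
    the prototype, with each axis scaled by its standard deviation,
    is below a single threshold, regardless of the number of features."""
    d = sum(((v - m) / s) ** 2 for v, m, s in zip(x, prototype, stds)) ** 0.5
    return d <= threshold
```

Note that the prototype and standard deviations are measured from training examples, so the threshold really is the only decision left to the designer.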

Sometimes we will have features that are dependent. Dependency means that if we know something about one feature, we can say something about another feature. If, for example, our features are area and perimeter, then it is very likely that the value of the perimeter increases as the area increases. Dependency in data can result in the point cloud having an orientation that is neither vertical nor horizontal, see Fig. 7.9(c). In these cases both the box classifier and the weighted Euclidean distance classifier will fail. Instead we must use the Mahalanobis distance classifier.

It is a statistical classifier measuring the distance between an unknown feature vector and the prototype. So, like the two other statistical classifiers presented above, it only requires one parameter to be defined no matter how many features are used. In fact, the Euclidean distance classifier and the weighted Euclidean distance classifier are both special cases of the Mahalanobis distance classifier. In Fig. 7.9(c) the decision

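The Mahalanobis distance, and its relation to the two simpler classifiers, can be sketched as follows. The function name is illustrative and the inverse covariance matrix is passed in precomputed; this is an assumption-laden sketch, not the book's code:

```python
def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance: sqrt((x - mean)^T * cov_inv * (x - mean)).
    With cov_inv = identity it reduces to the Euclidean distance; with a
    diagonal cov_inv (entries 1/variance per feature) it reduces to the
    weighted Euclidean distance."""
    d = [xi - mi for xi, mi in zip(x, mean)]
    # quadratic form (x - mean)^T * cov_inv * (x - mean)
    s = sum(d[i] * cov_inv[i][j] * d[j]
            for i in range(len(d)) for j in range(len(d)))
    return s ** 0.5
```

The off-diagonal entries of the covariance matrix are what capture the dependency between features, which is why this classifier can follow a point cloud whose orientation is neither vertical nor horizontal.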