Noise reduction, and to a certain extent edge detection, are used globally on a
complete image. However, when shapes, colours and textures need to be compared,
a global view may obscure detail in specific regions in an image. Hence, segmentation
of images needs to be considered.
8.3.3
Segmentation
Segmentation of an image into smaller homogeneous regions allows a more local,
and hence more specific, match to be made between an example and an image in
the database. One simple example of segmentation is the separation of an image
into a background and foreground [2]. Segmentation based on human interaction
is reliable, but impractical for large databases, and hence automatic unsupervised
segmentation is desirable.
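To make the background/foreground example mentioned above concrete, the following minimal Python sketch separates a greyscale image into foreground and background with a simple global intensity threshold; the function name split_foreground and the mean-intensity default are illustrative assumptions and are not the method used in [2].

import numpy as np

def split_foreground(gray, threshold=None):
    """Split a greyscale image into boolean foreground/background masks.

    A plain global threshold is used purely for illustration; if no threshold
    is given, the mean intensity of the image is taken.
    """
    gray = np.asarray(gray, dtype=float)
    if threshold is None:
        threshold = gray.mean()
    foreground = gray > threshold   # pixels brighter than the threshold
    return foreground, ~foreground

# Example: a dark image containing one bright square "object".
img = np.zeros((64, 64))
img[20:40, 20:40] = 200.0
fg, bg = split_foreground(img)
print(fg.sum(), "foreground pixels,", bg.sum(), "background pixels")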
As a representative example of a CA model of unsupervised segmentation, the
unsupervised grow-cut (UGC) CA model of Ghosh et al. [11] is discussed below.
Each cell in the CA has a 3-tuple associated with it, containing a label, the pixel
intensity, and a so-called cell strength (between 0 and 1). The labels are updated
on each evolution, to indicate to which equivalence class a cell belongs. Initially,
random pixels are selected to form the first equivalence classes (these pixels get cell
strength 1), and the different segments of the image are represented by the different
equivalence classes on completion of the process. Suppose that p is the current cell,
and consider each of its neighbours q. If p and q are not in the same equivalence
class, and the difference in intensity of p and q is too large, then the equivalence
class of q is updated to include p and the strength of p is decreased.
The intensity difference between two cells is calculated as follows: let I_p and I_q
denote the intensities of p and q respectively, and let φ_p indicate the cell strength of
cell p. Then

\left(1 - \frac{|I_p - I_q|}{\max(I)}\right)\varphi_q
represents the intensity difference. If cells p and q are not in the same equivalence
class, and the intensity difference exceeds a previously defined fixed threshold, then
the label of p is updated so that p is added to the equivalence class of q. In addition, the
strength of p becomes the intensity difference as calculated above.
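The rule above translates directly into a per-cell update loop. The sketch below, in Python/NumPy, initialises a few random seed cells with strength 1 and runs one synchronous evolution; the Moore neighbourhood, the number of seeds, the threshold value, and the extra comparison with p's current strength are illustrative assumptions rather than details fixed by the text or by Ghosh et al. [11].

import numpy as np

def ugc_step(intensity, labels, strength, threshold):
    """One synchronous evolution of an unsupervised grow-cut style CA.

    intensity : 2-D array of pixel intensities
    labels    : 2-D integer array of equivalence-class labels (-1 = unlabelled)
    strength  : 2-D array of cell strengths in [0, 1]
    threshold : fixed threshold that the weighted term must exceed
    """
    h, w = intensity.shape
    max_i = intensity.max()
    new_labels, new_strength = labels.copy(), strength.copy()
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):               # Moore neighbourhood (assumed)
                for dx in (-1, 0, 1):
                    qy, qx = y + dy, x + dx
                    if (dy == 0 and dx == 0) or not (0 <= qy < h and 0 <= qx < w):
                        continue
                    lq = labels[qy, qx]
                    if lq < 0 or lq == labels[y, x]:
                        continue
                    # the weighted term from the formula above
                    g = (1.0 - abs(intensity[y, x] - intensity[qy, qx]) / max_i) * strength[qy, qx]
                    # keeping only the strongest attacking neighbour is an assumption
                    if g > threshold and g > new_strength[y, x]:
                        new_labels[y, x] = lq        # p joins q's equivalence class
                        new_strength[y, x] = g       # p's strength becomes the value above
    return new_labels, new_strength

# Initialisation: a few random seed pixels form the first equivalence classes.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
labels = np.full(img.shape, -1, dtype=int)
strength = np.zeros(img.shape)
for k, (sy, sx) in enumerate(rng.integers(0, 32, size=(4, 2))):
    labels[sy, sx] = k
    strength[sy, sx] = 1.0
labels, strength = ugc_step(img, labels, strength, threshold=0.5)
print(np.unique(labels))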
Ghosh et al. show that the CA method for unsupervised segmentation compares
favourably with non-CA methods as far as results are concerned.
8.3.4
Colour Matching and Histograms
Image analysis based on colour is done by means of histograms. Colours are classified
as belonging to a so-called bin, and the number of pixels in the image whose colour
falls into each bin is counted to form the histogram. Note that the number of bins
may become excessively large, as an image can have millions of different colours.
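One common way of keeping the number of bins manageable is to quantise each colour channel coarsely before counting. The Python sketch below is one such illustration, assuming 8-bit RGB input and 8 bins per channel (512 bins in total); the function name and the bin count are arbitrary choices, not a prescribed scheme.

import numpy as np

def colour_histogram(rgb, bins_per_channel=8):
    """Coarse RGB histogram: each channel is quantised into a few bins, so the
    histogram has bins_per_channel**3 entries rather than one per distinct
    colour (which could run into the millions)."""
    rgb = np.asarray(rgb)
    # map each 8-bit channel value to a bin index in [0, bins_per_channel)
    q = (rgb.astype(int) * bins_per_channel) // 256
    # combine the three per-channel indices into a single bin number
    flat = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    return np.bincount(flat.ravel(), minlength=bins_per_channel ** 3)

# Example: histogram of a random 64x64 RGB image.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
hist = colour_histogram(img)
print(hist.shape, hist.sum())   # (512,) bins, 64*64 = 4096 pixels counted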