Independent component representation captures the essential structure of data and makes it more visible or accessible in many applications, such as feature extraction and signal separation.
Appearance-based feature extraction methods [14, 21] rely on techniques from statistical analysis to find the relevant characteristics of images. They transform a recognition problem into an image/signal-space analysis problem (principal or independent components), where well-known methods such as PCA [39], LDA [28], and ICA [33, 34, 38] in the real domain have been tried out. This requires sufficient representative data to sample the underlying distribution successfully. Feature extraction with PCA deals with second-order statistics only and removes correlation from the data (decorrelates the data), while ICA accounts for higher-order statistics and aims at the removal of higher-order dependence. For Gaussian data it is simple to find independent components with PCA, because uncorrelated Gaussian components are always independent. Real-world data, however, do not necessarily follow a Gaussian distribution, so uncorrelatedness by itself is not enough to separate the components; ICA is then preferred because it finds a non-Gaussian representation whose components are as statistically independent as possible. Thus, ICA provides a more powerful data representation (basis vectors) than PCA [40, 41] and has proved more effective than PCA for feature analysis [33, 38] in the literature. Hence, it gives better results under partial occlusion or in noisy environments.
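To make the second-order versus higher-order distinction concrete, the following minimal sketch mixes two independent non-Gaussian (uniform) sources and compares PCA whitening with FastICA. The mixing matrix, sample size, and the fourth-order dependence check are illustrative choices for this demonstration, not part of the methods cited above:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)

# Two independent non-Gaussian (uniform) sources, linearly mixed.
n_samples = 2000
S = rng.uniform(-1.0, 1.0, size=(n_samples, 2))   # independent sources
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                        # illustrative mixing matrix
X = S @ A.T                                       # observed mixtures

# PCA only decorrelates (second-order statistics): the whitened
# components are uncorrelated but generally still dependent.
pca = PCA(n_components=2, whiten=True)
X_pca = pca.fit_transform(X)

# ICA exploits higher-order statistics (non-Gaussianity) and recovers
# components that are as statistically independent as possible.
ica = FastICA(n_components=2, random_state=0)
X_ica = ica.fit_transform(X)

def cross_dependence(U):
    # Fourth-order check: E[u^2 v^2] - E[u^2] E[v^2] vanishes for
    # independent components but not for merely uncorrelated ones.
    u, v = U[:, 0], U[:, 1]
    return np.mean(u**2 * v**2) - np.mean(u**2) * np.mean(v**2)

print("PCA higher-order dependence:", cross_dependence(X_pca))
print("ICA higher-order dependence:", cross_dependence(X_ica))
```

For the PCA output the fourth-order cross term remains clearly nonzero (uncorrelated yet dependent components), while the ICA output drives it close to zero.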
This topic focuses on complex-domain neurocomputing; it is therefore highly desirable to develop feature extraction techniques in the complex domain that are compatible with the tools of complex-domain neurocomputing. It will be advantageous to develop a feature extraction technique using ICA in the complex domain (CICA) and explore its performance against other statistical techniques. Complex PCA [8, 42] and complex ICA [9, 43, 44] have been introduced in the literature to analyze 2D vector fields and complex data. They have also been used to analyze real data complexified first by the Hilbert transformation [12, 42, 45]. This topic develops the feature extraction algorithms necessary for machine recognition using concepts from these statistical methods. This chapter is devoted to exploring the capability and effectiveness of the 'feature space' computed with the feature selection schemes in the complex domain. The new representation of images in the feature space may be used directly for classification with a suitable classifier in the complex domain.
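As an illustration of the complexification step mentioned above, the sketch below turns real-valued data into analytic (complex) signals with the Hilbert transform and then applies a plain complex PCA via the Hermitian covariance matrix. The data shapes and the number of retained components are assumed for illustration; this is not the book's CICA algorithm itself:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)

# Toy real-valued data: 100 observations, each a 64-sample feature row.
X_real = rng.standard_normal((100, 64))

# Complexify each row with the analytic signal: x + j * Hilbert(x).
X = hilbert(X_real, axis=1)                   # complex-valued data matrix

# Complex PCA: eigendecomposition of the Hermitian covariance matrix.
Xc = X - X.mean(axis=0)
C = (Xc.conj().T @ Xc) / (Xc.shape[0] - 1)    # Hermitian, positive semidefinite
eigvals, eigvecs = np.linalg.eigh(C)          # real eigenvalues, complex eigenvectors

# Keep the k leading components (largest eigenvalues) as complex basis vectors.
k = 8
W = eigvecs[:, ::-1][:, :k]                   # reorder to descending eigenvalues
features = Xc @ W                             # complex feature representation

print(features.shape, features.dtype)         # (100, 8) complex128
```

Because the covariance matrix of complex data is Hermitian, its eigenvalues are real and np.linalg.eigh applies directly; the resulting eigenvectors serve as the complex basis vectors of the feature space.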
7.1.2 Classifier Design
Classifiers play a significant role in the performance of a recognition system, apart from the feature extraction techniques. The statistical behavior of the feature vectors (the new representation of patterns in a lower-dimensional subspace) is exploited to define decision regions corresponding to the different classes. A variety of discriminant functions (decision surfaces, distance metrics, separating hyperplanes, threshold functions, etc.) are available for classification. But in view of machine recognition, the
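As one concrete instance of the distance-metric discriminants listed above, a nearest-class-mean rule in a complex feature space might look like the following sketch. The function name and the usage names are hypothetical, and this is not the specific classifier developed in this chapter:

```python
import numpy as np

def nearest_class_mean(train_feats, train_labels, test_feats):
    """Assign each test vector to the class with the closest mean.

    Works for complex-valued features, since |z|^2 distances are real.
    """
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0)
                      for c in classes])
    # Squared Euclidean distance in the complex feature space.
    d = np.abs(test_feats[:, None, :] - means[None, :, :]) ** 2
    return classes[d.sum(axis=2).argmin(axis=1)]

# Hypothetical usage with complex features from the previous sketch:
# labels_pred = nearest_class_mean(features_train, y_train, features_test)
```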