as a pattern recognition problem, which could be analyzed qualitatively, using the patterns produced by these various sensory data sources, and quantitatively, by computing each semantic component's contribution to the knowledge variance. It is envisaged that data processing and pattern analysis would provide a better understanding of the generated data and perhaps greater robustness. Pattern recognition algorithms have been a critical component in the implementation, development, and successful application of this knowledge processing architecture. Data normalization was used to prepare the integrated sensor response matrix for the subsequent signal preprocessing paradigms on a local or global level.
x'_{ij} = \frac{x_{ij}}{\left( \sum_{i=1}^{n} x_{ij}^{2} \right)^{1/2}}    (15.2)
Equation 15.2 was employed to normalize the responses, to compensate for differences in signal magnitudes, and to reduce the effects of noise. This normalization model was used to linearize the responses and to increase the relative contribution of the sensors and the overall dynamic range. The data produced by this layer were more linear and thus simpler and easier to process than the original raw data. The most widely used local method is vector normalization (Equation 15.2), in which each feature vector (sample) is divided by its norm so that it is forced to lie on a hypersphere of unit radius. This has the beneficial effect of compensating for sample-to-sample variations due to signal-to-noise dependencies and other correlated sensor drift. Sensor response normalization was preferred because it ensured that sensor magnitudes were comparable, preventing the signal preprocessing techniques from being overwhelmed by sensors with arbitrarily large values.
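To make the local method concrete, the following minimal NumPy sketch applies Equation 15.2 row by row, assuming the response matrix is stored with one sample per row (the transpose of the x_ij indexing above, where i runs over the n sensors); the function name and the example data are illustrative assumptions, not part of the original architecture.

import numpy as np

def vector_normalize(responses):
    """Divide each sample (row) by its Euclidean norm (Equation 15.2),
    forcing every feature vector onto a hypersphere of unit radius."""
    norms = np.linalg.norm(responses, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # guard against all-zero rows
    return responses / norms

# Example: three samples from four sensors with very different magnitudes.
raw = np.array([[120.0, 3.0, 45.0, 0.7],
                [ 95.0, 2.1, 51.0, 0.5],
                [130.0, 3.6, 39.0, 0.9]])
normalized = vector_normalize(raw)
print(np.linalg.norm(normalized, axis=1))  # every row now has norm 1.0

After this step the large-magnitude sensor no longer dominates the feature vector, which is exactly the comparability property described above.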
Principal components analysis (PCA) is an unsupervised, linear, nonparametric projection method, which was incorporated in this layer for dimension reduction and feature extraction. It is a multivariate statistical method based on the Karhunen-Loève expansion (Equation 15.3). The method consists of expressing the response vectors r_ij as linear combinations of orthogonal vectors and is sometimes referred to as vector decomposition. Each orthogonal vector, or principal component (PC), accounts for a certain amount of variance in the data, with a decreasing degree of importance. The scalar product of the orthogonal vectors with the response vector gives the score of the pth principal component:
X_p = \alpha_{1p} r_{1j} + \alpha_{2p} r_{2j} + \cdots + \alpha_{ip} r_{ij} + \cdots + \alpha_{np} r_{nj}    (15.3)
The variance of each PC score, X_p, is maximized under the constraint that the sum of the squared coefficients of each orthogonal vector, or eigenvector, α_p = (α_{1p}, α_{2p}, …, α_{np}), is set to unity and that the vectors are mutually uncorrelated. PCA was applied to extract the least correlated attributes from the integrated attribute matrix and to capture the maximum data variances. The output from this layer was a dynamic list of variables, namely the largest contributors to the knowledge variance in a particular instance. The variances are represented in a dynamically projected score matrix of the whole integrated data matrix [10,21,25,28,29,34,54,73].
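The projection can likewise be sketched in a few lines. The NumPy code below computes PC scores by eigendecomposition of the covariance matrix, with the unit-norm eigenvectors playing the role of the α_p vectors in Equation 15.3; the function name, the random example data, and the particular decomposition routine are illustrative assumptions rather than the chapter's actual implementation.

import numpy as np

def pca_scores(responses, n_components):
    """Project mean-centered response vectors onto the leading
    eigenvectors of the covariance matrix (Equation 15.3)."""
    centered = responses - responses.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # descending variance
    components = eigvecs[:, order[:n_components]]  # unit-norm alpha_p vectors
    scores = centered @ components                 # X_p = sum_i alpha_ip * r_ij
    explained = eigvals[order[:n_components]] / eigvals.sum()
    return scores, components, explained

# Example: keep the two least correlated attributes of a response matrix.
rng = np.random.default_rng(0)
data = rng.normal(size=(50, 6))
scores, components, explained = pca_scores(data, n_components=2)
print(explained)  # fraction of total variance captured by each PC

Because eigh returns eigenvalues in ascending order, the eigenvector columns are reordered so that the first PC captures the largest variance, mirroring the decreasing degree of importance described above.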