The diagonal terms represent the average variances of the individual random variables, while the off-diagonal
terms represent the average covariances between two random variables. The first term of the first row in
the matrix is the variance of the first point for each input vector, the second term is the covariance
between first and second sampled points for each vector, the third term is the covariance between the
first and third sampled points for each input vector, etc. Note that if the input vectors are not zero-mean,
these terms are interpreted as the mean-squared deviations from the mean vector.
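As a concrete illustration of this construction, the short sketch below (Python with NumPy; the dimensions and random data are illustrative assumptions, not taken from the text) stacks $M$ input vectors of dimension $N$ as columns of a data matrix and averages the outer products of their deviations from the mean vector:

import numpy as np

# M input vectors, each with N sampled points, stacked as columns of X (N x M).
rng = np.random.default_rng(0)
N, M = 4, 3
X = rng.normal(size=(N, M))

# Mean vector: the average of the M input vectors (one entry per sampled point).
X_a = X.mean(axis=1, keepdims=True)

# Zero-mean data: deviations of each sampled point from the mean vector.
Z = X - X_a

# N x N covariance matrix as the average of outer products of the deviations.
# (Dividing by M matches the "average" reading above; M - 1 is the unbiased variant.)
S = (Z @ Z.T) / M

# Diagonal terms: variances of each sampled point across the input vectors.
# Off-diagonal terms: covariances between pairs of sampled points.
print(np.round(S, 4))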
The resulting $N \times N$ covariance matrix is real and symmetric, and contains the mean-squared deviations of the sampled points from the mean vector; if the input vectors are zero-mean, these reduce to the mean-squared products of the sampled points themselves. As a result, the matrix contains all of the variational structure of the input data. The eigenvectors of this matrix provide the principal axes of the variational structure, presented in the form of basis functions representing the dominant patterns in the data. The eigenvectors are used here to determine the dominant patterns in the monitored signal. The eigenvalues of the covariance matrix are computed using $\det(S - \lambda I) = 0$, where $I$ is the identity matrix; the $\lambda_i$ are the roots of the characteristic polynomial, representing the eigenvalues of the covariance matrix. The eigenvectors $\phi_i$ of the matrix are then computed using $(S - \lambda_i I)\phi_i = 0$. We will show that these eigenvectors are dependent on the deviations of each point of the input vectors from the mean vector.
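Numerically, these eigenvalues and eigenvectors are not obtained by expanding the characteristic polynomial by hand; because $S$ is real and symmetric, a symmetric eigensolver applies. A minimal sketch, reusing the assumed variables from the previous snippet:

# S is real and symmetric, so a symmetric eigensolver is appropriate.
eigvals, eigvecs = np.linalg.eigh(S)

# Sort in descending order so the first eigenvector is the dominant pattern.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Since the rank of S is limited by the number of input vectors M,
# only a few eigenvalues are nonzero when M < N.
print(np.round(eigvals, 6))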
Even though the covariance matrix has dimension $N$, the rank of the matrix is determined by the number of input vectors $M$, resulting in $M$ independent eigenvectors of dimension $N$. Each eigenvector $\phi_i$ has a coefficient vector $y_i$ associated with it, computed using $y_i = \phi_i^T X$. The coefficient vectors of dimension $M$ are the projection of the original data matrix $X$ onto the corresponding eigenvectors, and hence represent the weight of each input vector in the new transform domain spanned by the eigenvectors $\phi_i$. As a result, the coefficient vectors are used here to monitor the changes over time for each dominant eigenvector.
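The projection step can be sketched the same way, continuing from the snippets above; the number of retained eigenvectors k is an illustrative choice:

# Coefficient vector for each retained eigenvector: y_i = phi_i^T X.
# Each y_i has dimension M, i.e., one weight per input vector.
k = 2  # number of dominant eigenvectors kept (illustrative)
Y = eigvecs[:, :k].T @ X  # row i holds the coefficient vector y_i

for i, y in enumerate(Y):
    print(f"coefficients along eigenvector {i + 1}:", np.round(y, 4))

Tracking how these coefficients evolve as new input vectors arrive is what allows each dominant pattern to be monitored over time.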
Decomposing Complex Signals into Orthogonal Functions
To investigate the ability of the Karhunen-Loève transform to decompose signals, we assume that a general function $g(x, t)$ represents a manufacturing signal, such as a surface profile. This function is composed of a multitude of functions $f_1(x, t)$, $f_2(x, t)$, etc. In reality, the exact shape of the functions $f_i$ and the exact nature of the interactions between these functions are not known a priori. The decomposition of $g(x, t)$ into individual functions $f_i$ will ultimately enable accurate monitoring.
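To make this setup concrete, the sketch below builds a hypothetical composite signal $g(x, t)$ from two assumed components; the functional forms of f1 and f2 are pure illustrations, since the text emphasizes that the true $f_i$ are not known a priori:

import numpy as np

def f1(x, t):
    # Assumed periodic component (e.g., surface waviness), growing slowly with t.
    return np.sin(2 * np.pi * x) * (1.0 + 0.1 * t)

def f2(x, t):
    # Assumed slow drift component.
    return 0.05 * x * t

def g(x, t):
    # The measured signal is a superposition of components whose shapes
    # and interactions are unknown in practice.
    return f1(x, t) + f2(x, t)

x = np.linspace(0.0, 1.0, 64)
G = np.column_stack([g(x, t) for t in range(5)])  # 64 x 5 matrix of signal snapshots
print(G.shape)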
Simple Linear Functions
To demonstrate the mechanics of the Karhunen-Loève method, we first study the case of simple linear vectors as our general function [70]. Linear vectors are the simplest form of deterministic functions. Two sample input vectors $X_1$ and $X_2$, each with two sampled points, $(x_{11}\ x_{12})^T$ and $(x_{21}\ x_{22})^T$ respectively, are collected, as shown in Fig. 1.10 ($M = 2$, $N = 2$). The input vectors represent pure linear trends (straight lines) of increasing slopes. The mean vector $X_a = (x_{a1}\ x_{a2})^T$ is also a straight line, with two sampled points. $\delta_{ij}$ is assumed to be the deviation of the $i$th sampled point of the $j$th input vector from the mean vector. In this case, since we only have two input vectors, the average vector is equidistant from each input vector, i.e., $\delta_{11} = -\delta_{12}$ and $\delta_{21} = -\delta_{22}$, as shown in Fig. 1.10. We first show that, given $M = 2$ linear vectors with $N = 2$ sampled points each, the Karhunen-Loève transform results in a single fundamental eigenvector $\phi = [\phi_1\ \phi_2]^T$, which is a linear vector.

To demonstrate this result, let $X_1 = [x_{11}\ x_{12}]^T$ and $X_2 = [x_{21}\ x_{22}]^T$ be two linear vectors. The mean vector is

$$X_a = (x_{a1}\ x_{a2})^T = \left( \frac{x_{11} + x_{21}}{2} \quad \frac{x_{12} + x_{22}}{2} \right)^T.$$

Subtracting the mean vector from the input vectors, we obtain the zero-mean input vectors, used to compute the covariance matrix:
$$Z_1 = X_1 - X_a = [x_{11} - x_{a1} \quad x_{12} - x_{a2}]^T \qquad \text{and} \qquad Z_2 = X_2 - X_a = [x_{21} - x_{a1} \quad x_{22} - x_{a2}]^T.$$

Assuming $\delta_{ij}$ to be the deviations of the input vectors from the mean vector, the zero-mean input vectors become $Z_1 = [\delta_{11}\ \delta_{21}]^T$ and $Z_2 = -[\delta_{11}\ \delta_{21}]^T$, which results in the data matrix $Z = [Z_1\ Z_2]$.
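The result can be verified numerically. The sketch below uses two arbitrarily chosen straight-line vectors with $N = 2$ points each, forms the data matrix, and confirms both that $Z_2 = -Z_1$ and that a single eigenvector, itself a linear vector, carries all of the variance:

import numpy as np

x = np.array([1.0, 2.0])         # sampling locations (illustrative)
X1 = 1.0 * x                     # straight line, slope 1
X2 = 3.0 * x                     # straight line, slope 3
X = np.column_stack([X1, X2])    # data matrix, shape (2, 2)

X_a = X.mean(axis=1, keepdims=True)    # mean vector, itself a straight line
Z = X - X_a                            # zero-mean data Z = [Z1 Z2]
print(np.allclose(Z[:, 1], -Z[:, 0]))  # True: Z2 = -Z1, as derived above

S = (Z @ Z.T) / 2.0
eigvals, eigvecs = np.linalg.eigh(S)
print(np.round(eigvals, 6))            # exactly one nonzero eigenvalue
phi = eigvecs[:, np.argmax(eigvals)]
print(phi)                             # proportional to Z1 (up to sign): a linear vector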