4.2.4.5 Principal Components
In the principal component modeling methods an N × m sample measurement matrix X is formed, in which each row is a vector x^T(j) containing the m measurements x_1(j), x_2(j), ..., x_m(j) performed in the plant at sample times j = 1, ..., N. Then each column of matrix X may be considered a realization of a random sequence in the time window corresponding to j = 1, ..., N. The m eigenvalues λ_i and eigenvectors p_i of the symmetric matrix X^T X are found. Since X^T X is symmetric, all eigenvectors are orthogonal to each other and are normalized to Euclidean length 1. Letting V be the matrix whose columns are the eigenvectors p_i, it turns out that V V^T = V^T V = I. The columns of the matrix Z = XV are the principal components z_1(j), z_2(j), ..., z_m(j) corresponding to the measurements x_1(j), x_2(j), ..., x_m(j). Principal components are mutually orthogonal, i.e., z_i^T z_k = 0 for i ≠ k, and span the principal component space. Furthermore ||z_i||^2 = z_i^T z_i = λ_i [56-58]. A subset of principal components that explains most of the data variation is selected, e.g., by choosing those having the largest eigenvalues. In this way a space of sometimes considerably reduced dimensions is obtained, which facilitates modeling, fault detection, and the solution of other problems.
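As an illustration, a minimal NumPy sketch of this construction might proceed as follows; the data are synthetic and the 95% variance threshold is an arbitrary illustrative choice, not a value taken from the text:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # hypothetical N = 200 samples of m = 6 measurements
X = X - X.mean(axis=0)                 # mean-centre each measurement column

eigvals, V = np.linalg.eigh(X.T @ X)   # eigenvalues and eigenvectors of the symmetric matrix X^T X
order = np.argsort(eigvals)[::-1]      # sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]

Z = X @ V                              # principal components z_1(j), ..., z_m(j)

explained = np.cumsum(eigvals) / eigvals.sum()
l = int(np.searchsorted(explained, 0.95)) + 1   # smallest subset explaining 95% of the variation
Z_reduced = Z[:, :l]                   # reduced-dimension representation of the data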
Principal components have been used in the estimation of a grindability index by Gonzalez et al. [22]. Here an original 48-dimensional space was reduced to an 8-dimensional space, in which Fisher discriminant analysis [57] was made simpler for the identification of a grindability index, which corresponds to the soft sensor output.
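Purely as an illustrative sketch, and not the actual procedure or data of [22], such a pipeline could be arranged as a PCA reduction from 48 to 8 dimensions followed by Fisher (linear) discriminant analysis; the measurements and class labels below are hypothetical stand-ins:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X_raw = rng.normal(size=(500, 48))     # hypothetical stand-in for the 48 raw measurements
y = rng.integers(0, 3, size=500)       # hypothetical grindability classes (e.g. low / medium / high)

X_pca = PCA(n_components=8).fit_transform(X_raw)    # reduce 48 dimensions to 8
lda = LinearDiscriminantAnalysis().fit(X_pca, y)    # Fisher discriminant analysis in the reduced space
grindability_index = lda.predict(X_pca)             # soft sensor output (predicted class)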
Using a PCA method, a missing measurement may be reconstructed [34, 59].
But more than one measurement may be reconstructed, depending on the dimension
of the principal component space and the number q of sensors. Hence several soft
sensors are implicit in the PCA modeling method and the set of secondary measure-
ments for any of them depends on which of the q measurements are missing. Here
the model structure is contained in the principal component space and is determined
from the sample correlation matrix X^T X.
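One common way to realize such a reconstruction, sketched below under the assumption that the data are mean-centred and that a loading matrix P holding the retained eigenvectors is available, is to fit the known entries of a sample to the principal subspace by least squares and read the missing entries off the reconstruction (the helper name reconstruct_missing is ours, not from the cited references):

import numpy as np

def reconstruct_missing(x, P, missing):
    # x:       one sample vector of length m (mean-centred), with arbitrary values where missing
    # P:       m x l loading matrix whose columns are the retained eigenvectors
    # missing: boolean mask marking the missing measurements
    known = ~missing
    # scores t that best explain the available measurements: x[known] ~ P[known, :] @ t
    t, *_ = np.linalg.lstsq(P[known, :], x[known], rcond=None)
    x_hat = x.copy()
    x_hat[missing] = P[missing, :] @ t   # reconstructed values for the missing sensors
    return x_hat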
In principal component regression [56, 58] a regression model is determined by
regressing the soft sensor variable on the principal components of the secondary
measurements or basis functions. Here matrix X is similar to the one in PCA, but
contains only the secondary measurements or basis functions used to model the soft
sensor.
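A compact sketch of principal component regression along these lines, with synthetic secondary measurements and an arbitrary choice of four retained components, could be:

import numpy as np

rng = np.random.default_rng(2)
X_sec = rng.normal(size=(300, 10))                             # hypothetical secondary measurements
y = X_sec @ rng.normal(size=10) + 0.1 * rng.normal(size=300)   # hypothetical soft sensor variable

Xc = X_sec - X_sec.mean(axis=0)
yc = y - y.mean()

eigvals, V = np.linalg.eigh(Xc.T @ Xc)
V = V[:, np.argsort(eigvals)[::-1][:4]]        # keep, say, the 4 largest-eigenvalue directions
Z = Xc @ V                                     # principal components of the secondary data

beta, *_ = np.linalg.lstsq(Z, yc, rcond=None)  # regress the soft sensor variable on the components
y_hat = Z @ beta + y.mean()                    # soft sensor estimate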
Projection to latent structures, an extension of PCA, has been used in the design
of a soft sensor for concentrate grade in a rougher flotation plant. Figure 4.8 shows
a test of the soft sensor with the same data used in Figure 4.2 [31].
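For reference, a hedged sketch of a PLS-based soft sensor of this kind, using scikit-learn's PLSRegression on synthetic data rather than the flotation data of [31], might read:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 12))                              # hypothetical secondary measurements
grade = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=400)   # hypothetical concentrate grade

pls = PLSRegression(n_components=4).fit(X, grade)   # the number of latent variables is a tuning choice
grade_estimate = pls.predict(X)                     # soft sensor output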
4.2.4.6 Neural Networks
Neural network models may be considered as general adaptable nonlinear function generators whose parameters (weights) may be determined so as to minimize indexes reflecting the error between the model output and the plant measurement, following the general modeling approach [1, 12, 14, 27, 31]. Being nonlinear, neu-