The MI algorithm has been widely used in many research fields. Focusing on DM, methods to increase the robustness of MI [19], to alleviate the parameter selection process [35] and to improve Rubin's rules for aggregating models [86] have been proposed. New extensions to new problems like one-class [48] can be found, as well as hybridizations with innovative techniques such as Gray System Theory [92]. Implementing MI is not trivial, and reputed implementations can be found in
4.4.3 Bayesian Principal Component Analysis (BPCA)
The MV estimation method based on BPCA [62] consists of three elementary processes. They are (1) principal component (PC) regression, (2) Bayesian estimation, and (3) an EM-like repetitive algorithm. In the following we describe each of these processes.
4.4.3.1 PC Regression
For the time being, we consider a situation where there is no MV. PCA represents the variation of D-dimensional example vectors y as a linear combination of principal axis vectors w_l (1 ≤ l ≤ K) whose number K is relatively small (K < D):

y = Σ_{l=1}^{K} x_l w_l + ε    (4.21)
The linear coefficients x_l (1 ≤ l ≤ K) are called factor scores. ε denotes the residual error. Using a specifically determined number K, PCA obtains x_l and w_l such that the sum of squared error ‖ε‖² over the whole data set Y is minimized.
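The decomposition in Eq. (4.21) can be sketched numerically. The following is a minimal NumPy illustration (variable names and the synthetic data are my own, not from the source): principal axes are obtained from the eigendecomposition of the covariance matrix, and the factor scores x_l are the least-squares coefficients that minimize the squared residual ‖ε‖² for each example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 200, 5, 2          # examples, dimension, number of axes (K < D)

# Synthetic rank-3 data with no MV, centered so the mean vector is zero
Y = rng.normal(size=(N, 3)) @ rng.normal(size=(3, D))
Y -= Y.mean(axis=0)

# Covariance matrix S and its eigendecomposition (cf. Eq. 4.22)
S = (Y.T @ Y) / N
lam, U = np.linalg.eigh(S)           # eigh returns ascending order
lam, U = lam[::-1], U[:, ::-1]       # sort descending: lam_1 >= lam_2 >= ...

# Principal axis vectors w_l = sqrt(lam_l) * u_l, stacked as columns
W = U[:, :K] * np.sqrt(lam[:K])

# Factor scores x_l: least-squares solution of y ~ sum_l x_l w_l
X = Y @ W @ np.linalg.inv(W.T @ W)

# Reconstruction and residual error eps = y - y_hat (Eq. 4.21)
Y_hat = X @ W.T
sse = np.sum((Y - Y_hat) ** 2)
```

Because the synthetic data has rank 3, truncating to K = 2 axes leaves a nonzero residual, while K = 3 would reconstruct Y exactly.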
When there is no MV, x_l and w_l are calculated as follows. A covariance matrix S for the example vectors y_i (1 ≤ i ≤ N) is given by

S = (1/N) Σ_{i=1}^{N} (y_i − μ)(y_i − μ)^T,    (4.22)

where μ is the mean vector of y: μ = (1/N) Σ_{i=1}^{N} y_i. T denotes the transpose of a vector or a matrix. For description convenience, Y is assumed to be row-wisely normalized by a preprocess, so that μ = 0 holds. With this normalization, the result by PCA is identical to that by SVD.
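The PCA–SVD equivalence stated above can be verified directly: after centering, the eigenvalues of S equal the squared singular values of Y divided by N, and the eigenvectors of S match the right singular vectors of Y up to sign. A short NumPy check (the data and names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 100, 4

# Row-wise normalization as a preprocess, so that mu = 0 holds
Y = rng.normal(size=(N, D))
Y -= Y.mean(axis=0)

# Eigendecomposition of the covariance matrix S (Eq. 4.22)
S = (Y.T @ Y) / N
lam, U = np.linalg.eigh(S)
lam, U = lam[::-1], U[:, ::-1]       # descending eigenvalues

# SVD of the centered data matrix: singular values are already descending
_, s, Vt = np.linalg.svd(Y, full_matrices=False)

# lam_l == s_l**2 / N, and columns of U match rows of Vt up to sign
```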
Let λ_1 ≥ λ_2 ≥ ··· ≥ λ_D and u_1, u_2, ..., u_D denote the eigenvalues and the corresponding eigenvectors, respectively, of S. We also define the l-th principal axis vector by w_l = √λ_l u_l. With these notations, the l-th factor score for an example vector