Taking the expectation of both sides of eq. (2.182), the following relation holds:

$$w_0 = -w^T \bar{e} \tag{2.185}$$

which, substituted into eq. (2.184), yields

$$\Gamma w - \lambda w = 0, \qquad \lambda = r(w, \Gamma) \tag{2.186}$$
where $\Gamma = R - \bar{e}\bar{e}^T$ is the covariance matrix of the data set. The MCA neurons can be used directly as optimal TLS fitting analyzers if $\bar{e} = 0$. The parameters are given by the unit vector in the direction of the final solution (i.e., a division is needed). In the case of $\bar{e} \neq 0$, these neurons need a preprocessing of the data (see Section 2.8.2.3). Noticing that $\Gamma = R - \bar{e}\bar{e}^T = E\left[(x - \bar{e})(x - \bar{e})^T\right]$, it suffices to subtract $\bar{e}$ from each data point. An extra operation is still needed after the neuron learning: the parameter $w_0 = -w^T \bar{e}$ must be computed.
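The batch counterpart of this procedure can be stated in closed form: center the data, take the eigenvector of $\Gamma$ associated with the smallest eigenvalue as $w$, and recover $w_0 = -w^T \bar{e}$. The following is a minimal Python/NumPy sketch, not the book's code; the line coefficients match the benchmark used later, while the noise level, sample size, and seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy data from the line a1*x + a2*y = 1, a1 = 0.25, a2 = 0.5
a1, a2 = 0.25, 0.5
x = rng.uniform(0.0, 5.0, 500)
y = (1.0 - a1 * x) / a2
pts = np.column_stack([x, y]) + rng.normal(0.0, 0.1, (500, 2))

# Preprocessing: subtract the mean e_bar, so Gamma = E[(x - e_bar)(x - e_bar)^T]
e_bar = pts.mean(axis=0)
gamma = np.cov(pts - e_bar, rowvar=False, bias=True)

# TLS solution: unit eigenvector of Gamma with the smallest eigenvalue
eigvals, eigvecs = np.linalg.eigh(gamma)   # eigh sorts eigenvalues ascending
w = eigvecs[:, 0]                          # minor component, already unit norm

# Extra operation after the fit: w0 = -w^T e_bar
w0 = -w @ e_bar

# Rescale so the hyperplane reads a1*x + a2*y = 1 for comparison
a_est = -w / w0
print(a_est)   # close to [0.25, 0.5]
```

Note that `a_est` is invariant to the arbitrary sign of the eigenvector, since flipping `w` also flips `w0`.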
Remark 79 (Output of the Neuron) After the preprocessing, if the weight modulus is equal to 1, then after the presentation of the ith sample point, the absolute value of the output of the linear MCA neuron represents the orthogonal distance from the sample point to the current fitted hyperplane. Furthermore, the sign of the output detects which side of the hyperplane the point is on.
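As a small numerical illustration of this remark (all values below are hypothetical, not taken from the book):

```python
import numpy as np

# Unit-norm weights of a hypothetical fitted hyperplane w^T x + w0 = 0
w = np.array([0.25, 0.5])
w = w / np.linalg.norm(w)
e_bar = np.array([2.5, 1.25])   # hypothetical data mean
p = np.array([3.0, 2.0])        # a hypothetical sample point

# Neuron output on the centered (preprocessed) sample:
# w^T (p - e_bar) = w^T p + w0, since w0 = -w^T e_bar
out = w @ (p - e_bar)

dist = abs(out)        # orthogonal distance from p to the hyperplane
side = np.sign(out)    # which side of the hyperplane p lies on
```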
2.10 SIMULATIONS FOR THE MCA EXIN NEURON
The first simulations, presented here to illustrate the application of Section 2.9, deal with the benchmark problem in [195]: a line model $a_1 x + a_2 y = 1$ is fitted to an observation data set by the estimation of its parameters. Here, $a_1 = 0.25$ and $a_2 = 0.5$. First, 500 points are taken from this line with $0 < x < 5$ by uniform sampling. Then the noisy observation set is generated by adding Gaussian noise with zero mean and variance $\sigma^2$ to these points. As seen before, the points are first preprocessed. Then the MCA EXIN, OJA, OJAn, and LUO learnings are compared (at each step the input vectors are taken from the training set with equal probability). All neurons have the same initial conditions $[0.1, 0.1]^T$ and the same learning rate [i.e., it starts at 0.01, is reduced linearly to 0.001 over the first 500 iterations, and is then kept constant]. In the end, the estimated solution must be renormalized.
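A hedged sketch of this benchmark follows. It uses an OJA-type anti-Hebbian MCA update, $\Delta w = -\alpha\, y (x - y w)$, as a stand-in for the four learnings compared here (the MCA EXIN rule itself differs and is not reproduced); the noise standard deviation, seed, and total iteration count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical benchmark data: line 0.25*x + 0.5*y = 1 plus Gaussian noise
x = rng.uniform(0.0, 5.0, 500)
pts = np.column_stack([x, (1.0 - 0.25 * x) / 0.5])
pts += rng.normal(0.0, 0.1, pts.shape)   # noise level is an assumption

# Preprocessing: center the data
e_bar = pts.mean(axis=0)
data = pts - e_bar

# OJA-type anti-Hebbian MCA update (stand-in for the compared learnings)
w = np.array([0.1, 0.1])                 # initial conditions [0.1, 0.1]^T
for t in range(2000):
    # learning rate: 0.01 -> 0.001 linearly over 500 steps, then constant
    lr = 0.01 - (0.01 - 0.001) * min(t, 500) / 500
    xt = data[rng.integers(len(data))]   # sample with equal probability
    yt = w @ xt                          # neuron output
    w = w - lr * yt * (xt - yt * w)

# Renormalize the estimated solution, then recover w0 and the line parameters
w = w / np.linalg.norm(w)
w0 = -w @ e_bar
a_est = -w / w0                          # rescale to a1*x + a2*y = 1 form
print(a_est)
```

Because `a_est` divides out both the sign and the scale of `w`, it can be compared directly with the true parameters $(0.25, 0.5)$.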
must be renormalized. The index parameter, defined as

$$\rho(t) = \sum_{i=1}^{n} \left[ w_i(t) - \bar{w}_i \right]^2 \tag{2.187}$$

[$w_i(t)$ are the $n$ components of the neuron weight vector and $\bar{w}_i$ are the $n$ components of the desired parameters], is used as a measure of the accuracy.
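Reading eq. (2.187) as the sum of squared component errors between the learned and desired weights, a minimal helper (the book's exact normalization may differ):

```python
import numpy as np

def rho(w_t, w_bar):
    """Accuracy index: sum over the n components of (w_i(t) - w_bar_i)^2."""
    w_t, w_bar = np.asarray(w_t, float), np.asarray(w_bar, float)
    return float(np.sum((w_t - w_bar) ** 2))

# Hypothetical values: a perfect estimate gives rho = 0
print(rho([0.25, 0.5], [0.25, 0.5]))   # 0.0
```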
In the first experiment the noise level is low ($\sigma^2 = 0.1$). Figure 2.15 shows