the process is operating. Nevertheless, the adaptive character of the problem
requires a new framework, which will be presented in the next section.
4.3 On-Line Adaptive Identification and Recursive
Prediction Error Method
4.3.1 Recursive Estimation of Empirical Mean
Let us consider first the elementary problem of computing the mean of a data
series. This problem can be formulated as a linear regression problem of order
zero, x_k = a + v_k, where (v_k) is a numerical white noise and the parameter
a is a scalar. We look for a good estimate of a; this amounts to computing
the mean of a sequence of independent, identically distributed random variables.
The minimization of the cost function
$$J_N(a) = \frac{1}{2N} \sum_{k=1}^{N} (x_k - a)^2$$
with respect to a is a well-known problem. Its solution is the empirical mean
$$a_N = \frac{1}{N} \sum_{k=1}^{N} x_k .$$
That estimate has all the desirable properties of standard linear regression
estimators: consistency, unbiasedness, and minimal variance among unbiased
estimates. Its consistency (i.e., its convergence towards a when the sample
size goes to infinity) is called the law of large numbers. It intuitively expresses
that the arithmetic mean of a sequence of independent random measurements
provides an accurate estimate of the expectation value of the random variable
that models the phenomenon of interest.
A simple rewriting of the previous definition gives
$$(N+1)\, a_{N+1} = \sum_{k=1}^{N} x_k + x_{N+1} = N a_N + x_{N+1} .$$
The following recursive definition follows immediately:
$$a_{N+1} = a_N + \frac{1}{N+1} \left( x_{N+1} - a_N \right) .$$
This recursive formulation of the definition of the empirical mean allows
an adaptive estimation. A single observation is sufficient to initialize the
algorithm. To update the estimate, it is not necessary that all the observations
be available: the previous estimate and the current observation suffice to
perform the update. The coefficient $\gamma_{N+1} = 1/(N+1)$ is called the
learning rate.
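The recursion above can be sketched as follows; this is a minimal illustration, not the book's own code. The first function implements the update a_{N+1} = a_N + (x_{N+1} - a_N)/(N+1); the second replaces the decreasing learning rate by a small constant, which turns the estimator into a first-order filter able to track a slowly drifting mean.

```python
def recursive_mean(samples):
    """Recursive empirical mean.

    Each step applies a_{N+1} = a_N + (x_{N+1} - a_N) / (N+1):
    only the previous estimate and the current observation are needed,
    and the first observation alone initializes the algorithm.
    """
    a = 0.0
    for n, x in enumerate(samples, start=1):
        a += (x - a) / n  # learning rate gamma_n = 1/n
    return a


def filtered_mean(samples, gamma=0.1):
    """Same update with a small constant learning rate.

    a <- a + gamma * (x - a) is a first-order filter: it forgets old
    observations exponentially, so it can track slow variations of the
    mean when the model is not stationary.
    """
    a = samples[0]  # initialize with the first observation
    for x in samples[1:]:
        a += gamma * (x - a)
    return a


print(recursive_mean([2.0, 4.0, 6.0, 8.0]))  # 5.0, the ordinary mean
print(filtered_mean([2.0, 4.0, 6.0, 8.0], gamma=0.5))
```

Note that `recursive_mean` reproduces the ordinary empirical mean exactly, while `filtered_mean` trades some accuracy on stationary data for the ability to follow a drifting parameter.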
Another advantage of the recursive estimation is that it allows tracking
slow variations of the estimated parameter when the model is not stationary:
the estimation is adaptive. In that case, one has to replace the slowly
decreasing learning rate by a small constant learning rate; the estimation
then amounts to filtering (in that case, a first-order filter). In order to