[24, 28, 34]. The rows show the Lyapunov exponents in decreasing order, i.e., λ_1 > λ_2 > ... > λ_n, for given initial conditions. The algorithm by Wolf et al. [34] only gives us an estimate of the largest exponent. As mentioned earlier, a positive Lyapunov exponent measures sensitive dependence on initial conditions, i.e., how much our forecasts can diverge for different estimates of the starting conditions. Another way to view Lyapunov exponents is as a rate of loss of predictability as we look forward in time. It is therefore useful to quantify this information loss, to avoid possible misinterpretations.
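To make this notion of information loss concrete, the sketch below (not from the chapter; the numbers are purely illustrative) converts a largest exponent λ_1 given in nats/s into bits/s and into a rough predictability horizon, assuming an initial uncertainty eps0 grows like exp(λ_1 t) until it reaches a tolerance tol.

```python
import math

def predictability_horizon(lambda1_nats, eps0, tol):
    """Time until an initial uncertainty eps0 grows to tol,
    assuming exponential error growth exp(lambda1 * t)."""
    return math.log(tol / eps0) / lambda1_nats

lambda1 = 2.0                                    # nats/s, illustrative value only
print(lambda1 * math.log2(math.e))               # same rate in bits/s (1 nat ~ 1.44 bits)
print(predictability_horizon(lambda1, eps0=1e-6, tol=1e-2))  # horizon in seconds
```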
If we assume that the true starting point x_0 of a time series is possibly displaced by an ε, we know only an information area I_0 around the starting point. After some steps the time series lies in the information area I_t at time t. The information about the true position of the data decreases as the information area grows; consequently, predictability deteriorates. The largest Lyapunov exponent can be used to describe this average information loss; λ_1 > 0 leads to poor predictability. Therefore, the exponent values in Table 16.2 are given in units of nats/s.⁴
Of all the N displacement vectors found inside the sphere of radius ε, only the five to seven vectors with the smallest norm are chosen. This has practically no noticeable effect on the exponent values, but it speeds up the algorithm. The algorithm is further enhanced by introducing another constraint, which enables us to search for displacement vectors
close in phase space (Equation (16.13)), but far away in time:

|t_i − t_j| > ε_δt,   for all i, j, i ≠ j.   (16.27)
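A minimal sketch of this neighbour-selection step, assuming a delay-embedded orbit X of shape (T, m) with one row per sample; the function name, the k_min/k_max bounds, and reading the time-separation threshold of Equation (16.27) as a minimum index gap are assumptions made for illustration.

```python
import numpy as np

def select_displacements(X, j, eps, min_time_sep, k_max=7, k_min=5):
    """Displacement vectors y_i = X[i] - X[j] with |y_i| < eps and
    |t_i - t_j| > min_time_sep; keep at most k_max smallest-norm vectors."""
    d = np.linalg.norm(X - X[j], axis=1)      # distances to the reference point
    times = np.arange(len(X))
    ok = (d < eps) & (np.abs(times - j) > min_time_sep)
    idx = times[ok]
    idx = idx[np.argsort(d[ok])][:k_max]      # five to seven nearest by norm
    if len(idx) < k_min:
        return None                           # too few neighbours; enlarge eps
    return X[idx] - X[j], idx

# eps chosen as a small fraction of the attractor's horizontal extent,
# as in the chapter (0.02 * L_A), with L_A computed from the data itself:
# X = ...                                    # delay-embedded trajectory, shape (T, m)
# L_A = X[:, 0].max() - X[:, 0].min()
# y, idx = select_displacements(X, j=100, eps=0.02 * L_A, min_time_sep=50)
```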
The Gauss-Newton algorithm is used to solve the nonlinear least-squares problem in Equation (16.16), while Sano [28] uses a linear approach to solve the same problem. Examining Table 16.2, we see that there is hardly any difference in the estimated exponent values between Pardalos's algorithm and the one described by Sano. This behavior is due to the small values of the evolution time Δt. During this short evolution, the mapping between t_j and t_j + Δt does not show any strongly nonlinear behavior; therefore, the results are similar. The value Δt should be kept small enough that orbital divergence is monitored at least a few times⁵ per (mean) orbit. A larger Δt has been shown to increase the difference between these two algorithms, as expected.
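To illustrate the contrast, here is a hedged sketch of the linear variant: estimate a local map A from displacement pairs (y_i at time t_j, z_i at t_j + Δt) by ordinary least squares, then accumulate exponents with repeated QR factorizations. The chapter's Gauss-Newton fit of Equation (16.16) would replace the lstsq call with an iterative nonlinear fit; the names and array shapes here are assumed.

```python
import numpy as np

def local_jacobian(Y, Z):
    """Least-squares estimate of A in z_i ~ A y_i (the linear variant):
    rows of Y are vectors before evolution, rows of Z after time dt."""
    A_T, *_ = np.linalg.lstsq(Y, Z, rcond=None)   # solves Y @ A_T ~ Z
    return A_T.T

def lyapunov_from_maps(As, dt):
    """Accumulate log-stretching factors of successive local maps via
    repeated QR decompositions; returns exponents in nats per second."""
    m = As[0].shape[0]
    Q = np.eye(m)
    logs = np.zeros(m)
    for A in As:
        Q, R = np.linalg.qr(A @ Q)
        logs += np.log(np.abs(np.diag(R)))        # per-direction stretching
    return logs / (len(As) * dt)
```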
The displacement vectors y_i have been chosen to lie inside a sphere of radius ε = 0.02 L_A, where L_A is the horizontal extent of the attractor. This choice of ε is good as long as we fulfill the condition of finding a minimum of five vectors inside the sphere. Though theory says this value should be infinitesimal, the optimization algorithm described in Section 16.2 is robust against small increases in ε. Figure 16.3 shows how the Lyapunov exponents for the examined systems converge.
The results from Pardalos's and Sano's algorithms, though different from the
estimated values computed by the Wolf algorithm, are in good agreement with other
numerical experiments performed in [30, 27, 4, 26, 8].
⁴ 1 nat/s ≈ 1.44 bits/s.
⁵ We have computed Equation (16.16) between 30 and 40 times per mean orbit.