mathematical definition, this is true no matter how small the noise component is; however, we will show later on that Lyapunov exponents of an underlying deterministic system can in fact be measured.
When a system has a positive Lyapunov exponent, there is a time horizon beyond
which prediction breaks down.
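As a rough quantitative illustration of this horizon (the symbols δ₀ and Δ below are ours, not the chapter's): a small initial uncertainty δ₀ is stretched on average as δ(t) ≈ δ₀ e^{λ₁ t}, so a prediction stays within a tolerance Δ only up to about

\[
t_{\text{horizon}} \;\approx\; \frac{1}{\lambda_1} \ln \frac{\Delta}{\delta_0},
\]

which grows only logarithmically as the initial uncertainty is reduced.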
Wolf et al. [34] proposed the first algorithm for calculating the largest Lyapunov exponent. First, the phase space is reconstructed and the nearest neighbor of one of the first embedding vectors is found. A restriction must be imposed on this search: the neighbor must be sufficiently separated in time, so that successive vectors of the same trajectory are not taken as nearest neighbors. Without this correction, the estimated Lyapunov exponents could be spurious due to the temporal correlation of the neighbors. Once the neighbor is found and the initial distance L is determined, the system is evolved forward for some fixed time (the evolution time) and the new distance L' is calculated. This evolution is repeated, computing the successive distances, until the separation exceeds a certain threshold. Then a new vector (the replacement vector) is sought as close as possible to the reference vector and with approximately the same orientation as the original neighbor. Finally, the largest Lyapunov exponent can be estimated using the following formula:
\[
\lambda_1 \;=\; \frac{1}{t_M - t_0} \sum_{k=1}^{M} \ln \frac{L'(t_k)}{L(t_{k-1})}, \qquad (16.2)
\]
where M is the number of propagation (replacement) steps, L(t_{k-1}) is the separation at the start of the k-th propagation interval, and L'(t_k) is the separation at its end.
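To make the procedure concrete, here is a minimal Python sketch of a simplified Wolf-style estimator built around Eq. (16.2). All names and parameter values (embed, wolf_lle, min_time_sep, evolve_steps, threshold) are our own illustrative choices rather than part of the original algorithm, and the replacement step simply takes the nearest neighbor outside a temporal exclusion window instead of performing the orientation-preserving search described above.

import numpy as np


def embed(x, dim, delay):
    """Time-delay embedding of a scalar series x into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])


def _nearest(vecs, i, min_time_sep):
    """Index of the nearest neighbor of vecs[i] at least min_time_sep samples away."""
    dist = np.linalg.norm(vecs - vecs[i], axis=1)
    dist[np.abs(np.arange(len(vecs)) - i) < min_time_sep] = np.inf
    return int(np.argmin(dist))


def wolf_lle(x, dim=3, delay=1, dt=1.0, min_time_sep=10,
             evolve_steps=2, threshold=None):
    """Estimate the largest Lyapunov exponent of a scalar series via Eq. (16.2)."""
    x = np.asarray(x, dtype=float)
    vecs = embed(x, dim, delay)
    n = len(vecs)
    if threshold is None:
        threshold = 0.1 * np.ptp(x)          # ad hoc separation threshold

    i = 0                                     # reference point on the trajectory
    j = _nearest(vecs, i, min_time_sep)       # initial neighbor, separated in time
    log_sum, elapsed = 0.0, 0.0

    while i + evolve_steps < n and j + evolve_steps < n:
        d0 = np.linalg.norm(vecs[i] - vecs[j])    # L(t_{k-1})
        i += evolve_steps
        j += evolve_steps
        d1 = np.linalg.norm(vecs[i] - vecs[j])    # L'(t_k)
        if d0 > 0.0 and d1 > 0.0:
            log_sum += np.log(d1 / d0)
            elapsed += evolve_steps * dt
        if d1 > threshold:                        # separation too large: replace neighbor
            j = _nearest(vecs, i, min_time_sep)

    return log_sum / elapsed if elapsed > 0.0 else float("nan")


# Usage sketch: logistic map x_{n+1} = 4 x_n (1 - x_n), whose exact largest
# exponent is ln 2 ≈ 0.693; this crude estimator only approximates that value.
series = [0.4]
for _ in range(5000):
    series.append(4.0 * series[-1] * (1.0 - series[-1]))
print(wolf_lle(series, dim=2, delay=1, dt=1.0, evolve_steps=1))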
The Wolf algorithm only estimates the largest Lyapunov exponent and not the whole spectrum of exponents. It is also reported to be sensitive to the number of observations as well as to the degree of measurement or system noise in the observations. These shortcomings motivated the search for new algorithms with improved finite-sample properties. Sano and Sawada [28], Eckmann et al. [5], Abarbanel et al. [1], Rosenstein et al. [27], and Pardalos and Yatsenko [24], among others, proposed improved algorithms for calculating the Lyapunov exponents from observed data.
16.3 An Optimization Approach
In the previous section, we mentioned a number of algorithms that have been proposed for estimating the Lyapunov exponents from a scalar time series. The problem of calculating these exponents can be reformulated as an optimization problem (see Pardalos and Yatsenko [24]), and in the following sections we present an algorithm for its solution that is globally and quadratically convergent. Here, we use well-established numerical techniques for dealing with the optimization problem. We also discuss the computational aspects of this method and the difficulties that inevitably arise when estimating Lyapunov exponents from time-delay embeddings. Using numerically generated data sets, we consider the influence of the system parameters and of the optimization algorithm on the quality of the estimates.