Rosenblatt's perceptron or the backpropagation algorithm, which is based on gradient descent in the error function of the classification result. An example of an unsupervised neural network is the self-organizing feature map, which maps high-dimensional data onto low-dimensional feature maps. AI and CI methods can be applied to a huge variety of domains; e.g., Kramer et al. [76] recently created a taxonomy of problem classes in the field of AI and music.
2.1.2 Optimization Problems and Stochastic Convergence
One of the most important tasks in computer science and AI is optimization. Intuitively speaking, optimization denotes the improvement of the parameters of a system. The formal definition is given as follows.
Definition 2.1 (Optimization Problem)
Let $f : X \to \mathbb{R}$ be the fitness function to be minimized with search set $X$.¹ Find an element $x^* \in X$ such that $f(x^*) \le f(x)$ for all $x \in X$.
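To make the definition concrete, the following sketch minimizes a fitness function by pure random search. The sphere function, the bounds of the search set, and the budget of 1,000 steps are illustrative assumptions, not taken from the text; it is a minimal sketch, not a definitive implementation.

```python
import random

# Hypothetical fitness function f: X -> R to be minimized (the sphere
# function, a common test problem; an assumption for illustration).
def f(x):
    return sum(xi ** 2 for xi in x)

# Pure random search over the assumed search set X = [-5, 5]^dim:
# sample candidates uniformly and keep the best one found so far.
def random_search(f, dim=2, steps=1000, seed=0):
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(steps):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

best_x, best_f = random_search(f)
print(best_x, best_f)  # approaches the optimum x* = (0, 0) with f(x*) = 0
```

Note that such a blind search only approximates $x^*$; whether and how fast a stochastic method reaches the optimum is exactly what the following definitions formalize.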
Let $X_t := (x_{t,1}, x_{t,2}, \ldots, x_{t,n})$ be a random population of size $n > 0$ at step $t \ge 0$ and $F_t := \min\{f(x_{t,1}), \ldots, f(x_{t,n})\}$ the best fitness value in the population at step $t$. A desirable property of an optimization method is to find the optimum $x^*$ with fitness $f^*$ within a finite number of steps. This is formalized in the following definition [125].
Definition 2.2 (Finding an Optimum with Probability One)
Let the random variable $T := \min\{t \ge 0 : F_t = f^*\}$ denote the first hitting time of the global optimum. An EA is said to visit the global optimum in finite time with probability one if $P(T < \infty) = 1$, regardless of the initialization.
In some cases an EA does not find $x^*$, but approximates the optimal solution or converges to $f^*$. As the deterministic concepts of convergence are not appropriate here, it is important to introduce the stochastic concepts of convergence as defined by Lukacs [87].
Definition 2.3 (Stochastic Convergence)
Let $D_0, D_1, \ldots$ be non-negative random variables defined on a probability space $(\Omega, \mathcal{A}, P)$. The sequence $(D_t : t \ge 0)$ is said to
– converge completely to zero if $\sum_{t=0}^{\infty} P(D_t > \varepsilon) < \infty$ for any $\varepsilon > 0$,
– converge with probability 1 or almost surely to zero if $P(\lim_{t \to \infty} D_t = 0) = 1$,
– converge in probability to zero if $P(D_t > \varepsilon) = o(1)$ as $t \to \infty$ for any $\varepsilon > 0$,
– and to converge in mean to zero if $E[D_t] = o(1)$ as $t \to \infty$.
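To illustrate one of these modes, the following sketch estimates $P(D_t > \varepsilon)$ by Monte Carlo for a hand-picked sequence $D_t$ that converges in probability to zero. The sequence $D_t = |Z|/(t+1)$ with $Z \sim N(0,1)$, the threshold $\varepsilon = 0.1$, and the run counts are assumptions chosen for the demonstration.

```python
import random

# Illustrative random sequence D_t = |Z| / (t + 1) with Z ~ N(0, 1);
# this choice is an assumption for demonstration, not from the text.
def sample_D(t, rng):
    return abs(rng.gauss(0.0, 1.0)) / (t + 1)

def prob_exceeds(t, eps=0.1, runs=10_000, seed=0):
    """Monte Carlo estimate of P(D_t > eps) over independent runs."""
    rng = random.Random(seed)
    return sum(sample_D(t, rng) > eps for _ in range(runs)) / runs

for t in (0, 10, 100, 1000):
    print(t, prob_exceeds(t))  # the estimates decay toward zero as t grows
```

Watching the estimated exceedance probability vanish as $t$ grows corresponds to convergence in probability; the other modes differ in how strongly they constrain the whole sequence.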
With this concept, Rudolph [125] defines the term convergence for EAs.
¹ The problems we consider in this work are minimization problems. If we speak of high fitness, we mean low values.