Table 3.3 Probabilistic optimization algorithms

  Probabilistic methods
  ---------------------
  Random search method
  Tabu search
  Simulated annealing
  Genetic methods
  Evolution strategy
the same input, deterministic algorithms always produce the same results. They do
not rely on random numbers at any stage of execution, and within the algorithm there
is at most one possible way to proceed at each step. This description ignores the fact
that the random sequences used in numerical optimization are generated deterministically
by a pseudorandom number generator and are therefore not truly random.
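As a brief illustration of this point, seeding a pseudorandom number generator makes an otherwise "random" search fully reproducible. The following sketch in Python uses a hypothetical one-dimensional objective and is an assumption-laden example, not taken from the text.

```python
import random

def noisy_search(seed, steps=5):
    """Tiny random search on the hypothetical objective f(x) = (x - 2)**2."""
    rng = random.Random(seed)               # pseudorandom, hence deterministic for a fixed seed
    best_x, best_f = 0.0, (0.0 - 2.0) ** 2
    for _ in range(steps):
        x = best_x + rng.uniform(-1.0, 1.0)  # "random" perturbation of the best point
        f = (x - 2.0) ** 2
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Two runs with the same seed yield identical results: the sequence is not truly random.
assert noisy_search(seed=42) == noisy_search(seed=42)
```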
Besides the above classification concerning the method of operation, optimization
methods may be further categorized, according to the required information, into
direct (numerical) search methods and gradient (indirect, analytic) methods.
Indirect methods describe analytical approaches that make use of objective
function values and of information about the first partial derivatives (the gradient
vector ∇U(p)) of the objective function (first-order methods). They decide on
search directions according to these first partial derivatives. Direct methods rely
exclusively on values of the objective function itself. Direct search methods are
thus often referred to as zero-order, non-gradient, or derivative-free methods,
although, as emphasized in (Lewis et al. 2000), these descriptions do not fully
characterize what constitutes the term "direct search". A more distinct description,
given by Hooke and Jeeves (1961), is that direct search methods involve the
comparison of each trial solution with the best previous solution. Direct search
methods thus operate only with the relative ranks of objective function values and
work iteratively towards a solution. Objective function values are evaluated at
various parameter vectors, and conclusions are drawn from this information,
usually employing heuristics, to improve the value of the objective function towards
an optimal solution.
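To make the distinction concrete, the following sketch (in Python, with a hypothetical quadratic objective) implements a simple compass-style pattern search in the spirit of Hooke and Jeeves: every trial point is compared only against the best point found so far, so only the relative ranking of objective values matters and no derivatives are used.

```python
import numpy as np

def compass_search(objective, p0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Zero-order (derivative-free) search: probe each coordinate direction and
    keep a trial point only if it beats the best previous solution."""
    p_best = np.asarray(p0, dtype=float)
    f_best = objective(p_best)
    for _ in range(max_iter):
        improved = False
        for i in range(p_best.size):
            for sign in (+1.0, -1.0):
                trial = p_best.copy()
                trial[i] += sign * step
                if objective(trial) < f_best:      # only relative ranks are used
                    p_best, f_best = trial, objective(trial)
                    improved = True
        if not improved:
            step *= shrink                         # no direction improved: refine the step size
            if step < tol:
                break
    return p_best, f_best

# Hypothetical objective U(p): a shifted quadratic bowl with minimum at (1, -2).
U = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
print(compass_search(U, p0=[5.0, 5.0]))
```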
Powell's method is an example of an efficient optimization method that can be
positioned between direct and gradient methods. Although no gradient information
is needed, the model function must be given explicitly, and several function
evaluations are required for the minimization of each particular one-dimensional
function (line searches).
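As a usage illustration (not part of the original text), Powell's method is available in SciPy; the call below reuses the same hypothetical quadratic objective U as above and is a minimal sketch rather than a definitive setup.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective with minimum at (1, -2); no gradient is supplied.
U = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

# Powell's method performs successive one-dimensional line minimizations
# along a set of search directions, using only function evaluations.
result = minimize(U, x0=np.array([5.0, 5.0]), method="Powell")
print(result.x, result.fun)
```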
In addition to the above-mentioned category of gradient methods, one further
distinguishes Newton methods (second-order methods), which, besides the first partial
derivatives, make use of the second partial derivatives (the Hessian matrix ∇²U(p)) of the
continuous, twice-differentiable objective function. In all methods, of zero, first and
second order, the attained optimum depends on the choice of the starting point
U₀(p₀). Maxima, minima or saddle points, as well as global or local optima, are
thereby not distinguished.
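For illustration, a minimal Newton iteration on the same hypothetical quadratic objective is sketched below; the gradient and Hessian are written out by hand here, which is an assumption made for this example rather than a prescription from the text. Because this bowl is convex, the iteration reaches its single minimum from any p₀, but for a general non-convex U(p) the stationary point found (minimum, maximum, or saddle point; local or global) depends on that starting point.

```python
import numpy as np

# Hypothetical twice-differentiable objective U(p) = (p1 - 1)^2 + (p2 + 2)^2.
def U(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

def grad_U(p):                       # first partial derivatives, the gradient ∇U(p)
    return np.array([2.0 * (p[0] - 1.0), 2.0 * (p[1] + 2.0)])

def hess_U(p):                       # second partial derivatives, the Hessian ∇²U(p)
    return np.array([[2.0, 0.0], [0.0, 2.0]])

def newton(p0, tol=1e-10, max_iter=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess_U(p), grad_U(p))   # solve ∇²U(p) Δp = ∇U(p)
        p = p - step
        if np.linalg.norm(step) < tol:                 # converged to a stationary point
            break
    return p

print(newton([5.0, 5.0]))   # approx. [1, -2], the minimum of this convex example
```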