(without the error term) is set to zero and rearranged to get Equation B.162. Thus, a succession of guesses is generated using Equation B.163.

$$0 = g(x) + g'(x)\,dx$$

$$dx = x_1 - x_0 = -\frac{g(x)}{g'(x)} \qquad \text{(B.162)}$$

$$x_n = x_{n-1} - \frac{g(x_{n-1})}{g'(x_{n-1})} \qquad \text{(B.163)}$$
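A minimal sketch of the one-dimensional iteration of Equation B.163; the test function g(x) = x² − 2 and the tolerance are illustrative assumptions, not from the text:

```python
# Newton-Raphson iteration: x_n = x_{n-1} - g(x_{n-1}) / g'(x_{n-1})
def newton_1d(g, g_prime, x0, tol=1e-10, max_iter=50):
    """Repeat the update of Equation B.163 until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / g_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: find the root of g(x) = x^2 - 2, i.e. sqrt(2).
root = newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ~1.41421356
```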
In the current case, $f'(x) = g(x)$. If $x$ is multidimensional, as it usually is in computer graphics applications, then $g(x)$ is the gradient of $f(x)$ (Eq. B.164) and the division by $g'(x)$ becomes multiplication by the inverse of the Hessian (Eq. B.165), but the same strategy holds and becomes Equation B.166.
$$\nabla f(x) = \left( \frac{\partial f}{\partial x_1}(x),\; \frac{\partial f}{\partial x_2}(x),\; \ldots,\; \frac{\partial f}{\partial x_n}(x) \right) \qquad \text{(B.164)}$$

$$H(x) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j}(x) \right] \qquad \text{(B.165)}$$

$$x_n = x_{n-1} - \left[ H(x_{n-1}) \right]^{-1} \nabla f(x_{n-1}) \qquad \text{(B.166)}$$
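A sketch of one multidimensional Newton step (Eq. B.166), assuming callables for the gradient (Eq. B.164) and Hessian (Eq. B.165) are available; the quadratic test function below is an illustrative choice:

```python
import numpy as np

def newton_step(x, grad, hessian):
    """x_n = x_{n-1} - H(x_{n-1})^{-1} grad f(x_{n-1}).

    np.linalg.solve is used rather than forming the explicit inverse."""
    return x - np.linalg.solve(hessian(x), grad(x))

# Example: f(x) = x^T A x / 2 - b^T x is quadratic, so one Newton
# step from any starting point lands exactly on the minimum.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hessian = lambda x: A
x = newton_step(np.zeros(2), grad, hessian)
print(x)  # solves A x = b
```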
The negative of the gradient is used because the gradient points in the direction of greatest increase; stepping in the opposite direction moves toward a minimum. This is called the gradient descent method.
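A minimal gradient descent sketch; the fixed step size and the test function are illustrative assumptions, since the text does not specify a step-size rule:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Step opposite the gradient, the direction of greatest decrease."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= lr * g
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 2 y^2.
x_min = gradient_descent(lambda p: np.array([2 * (p[0] - 1), 4 * p[1]]),
                         [0.0, 0.0])
print(x_min)  # ~[1, 0]
```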
It has been shown that always taking steps in the direction of the negative gradient can converge very slowly. Convergence improves if each step is taken in a direction conjugate to (a generalization of orthogonal to) the previous steps. This is called the conjugate gradient method.
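In practice the conjugate gradient method is usually taken from a library; a sketch using SciPy's nonlinear CG on its built-in Rosenbrock test function (the choice of library and test function is an assumption, not from the text):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Nonlinear conjugate gradients with an analytic gradient (jac).
result = minimize(rosen, x0=np.zeros(5), jac=rosen_der, method="CG")
print(result.x)  # ~[1, 1, 1, 1, 1], the Rosenbrock minimum
```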
An alternative way to approach the minimization problem is to fit a quadratic approximation to the function f at x, then solve that quadratic for its minimum point and use the result as the next iterate. The Taylor series expansion of f is shown in Equation B.167.

$$f(x + dx) = f(x) + f'(x)\,dx + f''(x)\,\frac{(dx)^2}{2} + O\big((dx)^3\big) \qquad \text{(B.167)}$$
The quadratic approximation can easily be solved for its minimum value. This is used as the next guess for the minimum and the procedure iterates. In this way, steps are taken toward the minimum of the function using the first and second derivatives of the objective function.
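Setting the derivative of the quadratic model to zero gives the update $x_n = x_{n-1} - f'(x_{n-1})/f''(x_{n-1})$; a one-dimensional sketch, with an illustrative test function of my choosing:

```python
# Repeatedly jump to the vertex of the local quadratic fit.
def newton_minimize_1d(f1, f2, x0, tol=1e-10, max_iter=50):
    """f1 and f2 are the first and second derivatives of f."""
    x = x0
    for _ in range(max_iter):
        step = f1(x) / f2(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^4 - 3x^2 + x, so f'(x) = 4x^3 - 6x + 1
# and f''(x) = 12x^2 - 6; started near the right-hand local minimum.
x_star = newton_minimize_1d(lambda x: 4 * x**3 - 6 * x + 1,
                            lambda x: 12 * x**2 - 6, x0=1.0)
print(x_star)  # ~1.15, a local minimum of f
```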
So far, constraints on the parameter space have not been considered; this is called unconstrained optimization. Often, however, there are constraints on valid input values, such as requiring a parameter to be greater than zero.
Hard and soft constraints, equality constraints, and inequality constraints
The simplest constrained optimization arises when both the objective function and the constraints are linear; this is referred to as linear programming. Think of a tilted panel representing the linear objective function, with a frame of 2 × 4s, the constraints, bounding a region of the panel. A well-known method of solving a linear programming problem is the simplex method. Essentially, this method relies on the fact that the constrained minimum must occur at a corner point of the frame. The simplex method progresses from corner to corner, always moving in a direction that reduces the objective function.
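A toy linear program in SciPy, illustrating that the optimum lands on a corner of the feasible region; the data are made up, and note that modern SciPy uses the HiGHS solver rather than a classic simplex tableau:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c^T x subject to A_ub x <= b_ub and x >= 0.
c = np.array([-1.0, -2.0])                 # i.e., maximize x + 2y
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])  # the "frame" of constraints
b_ub = np.array([4.0, 6.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # [3, 1], a corner point of the feasible region
```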