1. Calculate the energy $U^{(1)}$ for an initial geometry $\mathbf{X}^{(1)}$, and at positive and negative displacements for each of the coordinates.
2. Fit a quadratic for each of the coordinates according to the formula
$$U(\mathbf{X}) = U_k + \sum_{i=1}^{p}\left( g_i\,\Delta X_i + \tfrac{1}{2} H_{ii}\,(\Delta X_i)^2 \right) \qquad (5.8)$$
(this essentially gives numerical estimates of the gradient and the Hessian; I have dropped the brackets round the iteration count for clarity).
3. Find a minimum of this expression; we have
$$\frac{\partial U}{\partial(\Delta X_i)} = g_i + H_{ii}\,\Delta X_i = 0 \quad\Rightarrow\quad \Delta X_i = -\frac{g_i}{H_{ii}} \qquad (5.9)$$
The last term gives the correction to coordinate $X_i$; if these corrections are small enough then stop.
4. Calculate the energy at points $\mathbf{X}_k$, $\mathbf{X}_k + \mathbf{c}_k$ and $\mathbf{X}_k + 2\mathbf{c}_k$, where the components of $\mathbf{c}_k$ are the corrections $c_i = \Delta X_i$.
5. Fit a quadratic to these three points, as above.
6. Find the energy minimum, as above. This gives point $\mathbf{X}_{k+1}$ on the surface.
7. Calculate the gradient $\mathbf{g}_{k+1}$ at this point, increase the iteration count and go back to step 3 (a sketch of the whole loop is given below).
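To make the recipe concrete, here is a minimal sketch of steps 1 to 7 in Python. Everything in it beyond the equations is an assumption for illustration: the name diagonal_newton_minimise, the displacement h, the tolerance and the iteration cap are invented, and energy stands for any user-supplied function returning the energy of a coordinate vector.

```python
import numpy as np

def diagonal_newton_minimise(energy, x, h=1e-4, tol=1e-6, max_iter=100):
    """Illustrative sketch of steps 1-7 with a diagonal Hessian."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        # Steps 1-2: displace each coordinate by +/- h and fit a quadratic;
        # this amounts to central-difference estimates of g_i and H_ii (Eq. 5.8).
        u0 = energy(x)
        g = np.empty_like(x)
        H = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            up, um = energy(x + e), energy(x - e)
            g[i] = (up - um) / (2.0 * h)
            H[i] = (up - 2.0 * u0 + um) / h**2
        # Step 3 (Eq. 5.9): corrections c_i = -g_i / H_ii; stop when small.
        # (No safeguard here against H_ii <= 0, which a robust code would need.)
        c = -g / H
        if np.max(np.abs(c)) < tol:
            return x
        # Steps 4-6: fit a quadratic through U(X_k), U(X_k + c_k), U(X_k + 2c_k)
        # and move to the minimum of that fit along the direction c_k.
        u1, u2 = energy(x + c), energy(x + 2.0 * c)
        denom = u2 - 2.0 * u1 + u0          # twice the fitted curvature
        alpha = 1.0 if denom <= 0.0 else 0.5 + (u0 - u1) / denom
        # Step 7: take the new point X_{k+1} and iterate.
        x = x + alpha * c
    return x
```

Applied to a simple quadratic surface, for example energy = lambda x: (x**2).sum(), the loop converges in essentially one iteration, since Equation (5.9) is then exact.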
5.9 Choice of Method
The choice of algorithm is dictated by a number of factors, including the storage and
computing requirements, the relative speeds at which the various parts of the calculation
can be performed and the availability of an analytical gradient and Hessian. Analytic first
and second derivatives are easily evaluated for MM force fields; the only problem might be
the physical size of the Hessian. For this reason, MM calculations on large systems are often
performed using steepest descent and conjugate gradients. The Newton-Raphson method
is popular for smaller systems, although the method can have problems with structures that
are far from a minimum. For this reason, it is usual to perform a few iterations using (for
example) steepest descent before switching to Newton-Raphson. The terms 'large' and
'small' when applied to the size of a molecule are of course completely relative and are
dependent on the computer power available when you do the calculation.
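The switching strategy just mentioned can be sketched in a few lines. This is only an illustration under stated assumptions: grad and hess are taken to be callables returning the analytic gradient vector and Hessian matrix, and the names, step length and iteration counts are invented for the example.

```python
import numpy as np

def hybrid_minimise(grad, hess, x, n_sd=5, sd_step=0.01, tol=1e-8, max_nr=50):
    """Illustrative only: a crude steepest-descent start far from the
    minimum, followed by a Newton-Raphson finish close to it."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(n_sd):                  # robust while far from the minimum
        x = x - sd_step * grad(x)
    for _ in range(max_nr):                # rapidly convergent near the minimum
        dx = np.linalg.solve(hess(x), grad(x))
        x = x - dx
        if np.max(np.abs(dx)) < tol:
            break
    return x
```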
In cases where the differentiation cannot be done analytically, it is always possible to
estimate a gradient numerically; for example in the case of a function of one variable
$$\left(\frac{\mathrm{d}f}{\mathrm{d}x}\right)_{x=x_1} \approx \frac{f(x_1 + \Delta) - f(x_1)}{\Delta}$$
where Δ is small. Algorithms that rely on numerical estimates of the derivatives need more
function evaluations than would otherwise be the case, so there is a delicate trade-off in
computer time. It is generally supposed that gradient methods are superior to nongradient
methods, and it is also generally thought to be advantageous to have an analytical expression
for the gradient and Hessian.
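As a concrete illustration of the one-variable estimate above, the following sketch builds the full gradient vector by forward differences; the name numerical_gradient and the default displacement of $10^{-6}$ are assumptions for the example, not from the text.

```python
import numpy as np

def numerical_gradient(f, x, delta=1e-6):
    """Forward-difference estimate of each df/dx_i at the point x."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)                  # one evaluation shared by all coordinates
    g = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += delta         # displace coordinate i by Delta
        g[i] = (f(xp) - f0) / delta
    return g
```

Each such gradient estimate costs $p + 1$ function evaluations for $p$ coordinates (or $2p$ for the more accurate central-difference form), which is the extra expense the trade-off above refers to.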
 