worthwhile, therefore, to carefully think through what you want to measure, and
why, before running anything. In particular, if you are trying to judge the merits
of competing models, effort put into figuring out how and where they are most
different will generally be well-rewarded. The theory of experimental design
offers extensive guidance on how to devise informative series of experiments,
both for model comparison and for other purposes, and by and large the princi-
ples apply to simulations as well as to real experiments.
6.1.2. Monte Carlo Methods
Monte Carlo is the name of a broad, slightly indistinct family of methods for using
random processes to estimate deterministic quantities, especially the properties
of probability distributions. A classic example will serve to illustrate the basic
idea, on which there are many, many refinements.
Consider the problem of determining the area A under a curve given by a
known but irregular function f(x). In principle, you could integrate f to find this
area, but suppose that numerical integration is infeasible for some reason. (We
will come back to this point presently.) A Monte Carlo solution to this problem
is as follows: pick points at random, uniformly over a square of side s that encloses
the curve. The probability p that a point falls in the region under the curve is equal
to the fraction of the square occupied by that region: p = A/s^2. If we pick n points
independently, and x of them fall under the curve, then x/n → p (by the law of large
numbers), and s^2 x/n → A. Thus s^2 x/n provides us with a stochastic estimate of the
integral. Moreover, this is a
probably approximately correct (§2.1.3) estimate, and we can expect, from basic
probability theory, that the standard deviation of the estimate around its true
value will be proportional to n^{-1/2}, which is not bad.16 However, when faced with
such a claim, one should always ask what the proportionality constant is, and
whether it is the best achievable. Here it is not: the equally simple, if less visual,
scheme of just picking values of x uniformly and averaging the resulting values
of f(x) always has a smaller standard deviation (140, ch. 5).
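To make the comparison concrete, here is a minimal Python sketch (not from the text; the test function f, the bounding square of side s, and the sample size n are arbitrary illustrative choices) that computes both the hit-or-miss estimate s^2 x/n and the sample-mean estimate of the same one-dimensional integral:

```python
# A minimal sketch (assumptions: f, s, and n are arbitrary illustrative choices)
# comparing the two Monte Carlo estimators discussed above.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # A hypothetical "irregular" function, bounded between 0 and s on [0, s].
    return 0.5 + 0.3 * np.sin(7 * x) * np.exp(-x)

s = 1.0        # side of the bounding square
n = 100_000    # number of random points

# Hit-or-miss estimate: s^2 times the fraction of uniform points under the curve.
px = rng.uniform(0.0, s, size=n)
py = rng.uniform(0.0, s, size=n)
hit_or_miss = s**2 * np.mean(py < f(px))

# Sample-mean estimate: average of f over uniform x, scaled by the interval length.
sample_mean = s * np.mean(f(rng.uniform(0.0, s, size=n)))

print(hit_or_miss, sample_mean)
```

Repeating either estimate many times and looking at the spread of the results shows the sample-mean version scattering less around the true area, which is the smaller-variance claim above.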
This example, while time-honored and visually clear, does not show Monte
Carlo to its best advantage; there are few one-dimensional integrals that cannot
be done better by ordinary, non-stochastic numerical methods. But numerical
integration becomes computationally intractable when the domain of integration
has a large number of dimensions, where "large" begins somewhere between
four and ten. Monte Carlo is much more indifferent to the dimensionality of the
space: we could replicate our example with a 999-dimensional hypersurface in a
1000-dimensional space, and we'd still get estimates that converged like n^{-1/2}, so
achieving an accuracy of ε will require evaluating the function f only O(ε^{-2})
times.
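A short sketch of the same idea in a higher-dimensional setting follows (the integrand exp(-|x|^2) on the unit hypercube and the modest d = 10 are arbitrary choices, made so a closed-form answer is available for comparison; the text's 1000-dimensional example would work the same way):

```python
# A minimal sketch (assumed integrand and dimension, chosen so the exact
# answer is known): sample-mean Monte Carlo on the d-dimensional unit cube.
import math
import numpy as np

rng = np.random.default_rng(1)
d = 10
# Exact value of the integral of exp(-|x|^2) over [0, 1]^d, for comparison.
exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** d

def integrand(x):
    return np.exp(-np.sum(x**2, axis=-1))

for n in (1_000, 10_000, 100_000):
    samples = rng.uniform(0.0, 1.0, size=(n, d))
    estimate = integrand(samples).mean()
    print(f"n={n:>7}  estimate={estimate:.5f}  error={abs(estimate - exact):.5f}")
```

Quadrupling n roughly halves the error, regardless of the dimension, which is the n^{-1/2} convergence, or equivalently the O(ε^{-2}) cost, described above.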
Our example was artificially simple in another way, in that we used a uni-
form distribution over the entire space. Often, what we want is to compute the