An example should make the MLE principle clear. For instance, we might wish to generate random graphs. We suppose that each edge is present with probability $p$ and not present with probability $1-p$, with the presence or absence of each edge chosen independently. The only parameter we can adjust is $p$. For each value of $p$ there is a small but nonzero probability that the graph generated will be exactly the one we see. Following the MLE principle, we shall declare that the true value of $p$ is the one for which the probability of generating the observed graph is the highest.
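To make the generative model concrete, here is a minimal sketch in Python (the function name and parameters are illustrative, not from the text) that produces a graph by including each of the $\binom{n}{2}$ possible edges independently with probability $p$.

```python
# A minimal sketch of the generative model described above (the function
# name and parameters are illustrative, not from the text): each of the
# C(n, 2) possible edges is included independently with probability p.
import itertools
import random

def random_graph(n, p, seed=None):
    """Return the edge set of a random graph on nodes 0..n-1, where each
    possible edge appears independently with probability p."""
    rng = random.Random(seed)
    return {(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p}

# With n = 15 and p = 23/105 we expect about 23 edges on average.
print(len(random_graph(15, 23 / 105, seed=1)))
```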
EXAMPLE 10.21 Consider the graph of Fig. 10.19. There are 15 nodes and 23 edges. As there are $\binom{15}{2} = 105$ pairs of 15 nodes, we see that if each edge is chosen with probability $p$, then the probability (likelihood) of generating exactly the graph of Fig. 10.19 is given by the function $p^{23}(1-p)^{82}$. No matter what value $p$ has between 0 and 1, that is an incredibly tiny number. But the function does have a maximum, which we can determine by taking its derivative and setting it to 0. That is:
$$23p^{22}(1-p)^{82} - 82p^{23}(1-p)^{81} = 0$$
We can group terms to rewrite the above as
$$p^{22}(1-p)^{81}\left(23(1-p) - 82p\right) = 0$$
The only way the left side can be 0 is if $p$ is 0 or 1, or the last factor,
$$23(1-p) - 82p$$
is 0. When $p$ is 0 or 1, the value of the likelihood function $p^{23}(1-p)^{82}$ is minimized, not maximized, so it must be the last factor that is 0. That is, the likelihood of generating the graph of Fig. 10.19 is maximized when
$$23 - 23p - 82p = 0$$
that is, $105p = 23$, or $p = 23/105$.
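As a quick sanity check (a sketch, not part of the original example), the following Python snippet scans a grid of values of $p$ and confirms that the likelihood $p^{23}(1-p)^{82}$ peaks at the ratio of observed edges to possible edges, 23/105.

```python
# A quick numerical check (illustrative only): the likelihood
# p**23 * (1 - p)**82 is largest at p = 23/105, the number of observed
# edges divided by the number of possible edges.
from math import comb

edges, nodes = 23, 15
pairs = comb(nodes, 2)        # 105 possible edges
non_edges = pairs - edges     # 82 pairs without an edge

def likelihood(p):
    return p**edges * (1 - p)**non_edges

# Scan a fine grid of values of p and keep the maximizer.
best_p = max((i / 10000 for i in range(1, 10000)), key=likelihood)
print(best_p, edges / pairs)  # both are approximately 0.219
```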
Prior Probabilities
When we do an MLE analysis, we generally assume that the parameters can take any value in their range, and that there is no bias in favor of particular values. If that is not the case, then we can multiply the formula we get for the probability of the observed artifact being generated, as a function of the parameter values, by the function that gives the prior probability that those parameter values are the true ones. The exercises offer examples of MLE with assumptions about the prior distribution of the parameters.
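As one hedged illustration of the idea in this box (the choice of prior is an assumption, not taken from the text), suppose we put a Beta$(a, b)$ prior on $p$. Multiplying the likelihood $p^{23}(1-p)^{82}$ by a prior density proportional to $p^{a-1}(1-p)^{b-1}$ and maximizing the product gives a simple closed form:

```python
# A hedged sketch of the idea in the box above. The Beta(a, b) prior is
# an illustrative assumption, not taken from the text: multiplying the
# likelihood p**23 * (1 - p)**82 by a prior density proportional to
# p**(a - 1) * (1 - p)**(b - 1) and maximizing the product yields the
# closed form below.
def map_estimate(edges, pairs, a, b):
    """Maximizer of p**(edges + a - 1) * (1 - p)**(pairs - edges + b - 1)."""
    return (edges + a - 1) / (pairs + a + b - 2)

print(map_estimate(23, 105, 1, 1))  # uniform prior: the MLE, 23/105
print(map_estimate(23, 105, 5, 5))  # a prior that pulls the estimate toward 1/2
```

With a uniform prior ($a = b = 1$) the extra factor is constant and the estimate reduces to the MLE; a stronger prior pulls the estimate toward $a/(a+b)$.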