From here, we can check the consistency of our estimate against intuitive situations: if $w$ is not correlated with $z$, the estimate is 0, which is the mean value of the marginal p.d.f. of $w$. In this case, knowledge of $z$ brings no advantage, and the best estimate remains the a priori expected value of $w$.
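As a one-line check, for jointly Gaussian $w$ and $z$ the conditional-mean estimate has the standard closed form below ($\Lambda_{wz}$ and $\Lambda_z$ denote the cross- and auto-covariances; this notation is ours, introduced for illustration):
$$\hat{w} = E(w) + \Lambda_{wz}\,\Lambda_z^{-1}\,\bigl(z - E(z)\bigr), \qquad \Lambda_{wz} = 0 \;\Rightarrow\; \hat{w} = E(w).$$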
Maximum Likelihood Estimate
Another reasonable approach is to compute the estimate as the value of $w$ that maximizes the probability of measuring $z$. In other words, we get
$$w_{ML} = \arg\max_{w}\; p_{Z|W}(z \mid w) \qquad \text{or} \qquad \left.\frac{\partial p_{Z|W}}{\partial w}\right|_{w_{ML}} = 0.$$
The p.d.f. $p_{Z|W}$ is a measure of the likelihood that $z$ is measured, so this estimate is called maximum likelihood.
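To see the definition at work, here is a minimal numerical sketch, assuming a hypothetical scalar model $z_i = w + v_i$ with $v_i \sim N(0, \sigma^2)$; the model and all names below are our own illustration:

```python
import numpy as np

# Hypothetical model: z_i = w + v_i, v_i ~ N(0, sigma^2).
# Up to a constant, the log-likelihood of the sample is
# -(1/(2 sigma^2)) * sum_i (z_i - w)^2, and w_ML maximizes it.
rng = np.random.default_rng(0)
w_true, sigma = 2.0, 0.5
z = w_true + sigma * rng.standard_normal(100)

# Grid search over candidate values of w (a sketch; this model also
# admits the closed-form solution w_ML = sample mean).
w_grid = np.linspace(0.0, 4.0, 2001)
log_lik = np.array([-0.5 * np.sum((z - w) ** 2) / sigma**2 for w in w_grid])
w_ml = w_grid[np.argmax(log_lik)]

print(w_ml, z.mean())  # the two values agree to grid resolution
```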
It is interesting to establish a relation between this estimator and the previous
ones. We do this for the case of Gaussian variables, even though the same conclusion
holds in the general case.
We know that the estimator $w_{MAP}$ is such that
$$\left.\frac{\partial p_{W|Z}}{\partial w}\right|_{w_{MAP}} = 0.$$
Then, thanks to Bayes' theorem, we have
$$p_{W|Z} = \frac{p_{WZ}}{p_Z} = \frac{p_{Z|W}\, p_W}{p_Z}.$$
Maximizing $p_{W|Z}$ is equivalent to maximizing its logarithm, so we have
$$\frac{\partial p_{W|Z}}{\partial w} = 0 \;\Rightarrow\; \frac{\partial \ln p_{W|Z}}{\partial w} = \frac{\partial}{\partial w} \ln \frac{p_{Z|W}\, p_W}{p_Z} = 0$$
$$\Rightarrow\; \frac{\partial \ln p_{Z|W}}{\partial w} + \frac{\partial \ln p_W}{\partial w} - \frac{\partial \ln p_Z}{\partial w} = \frac{\partial \ln p_{Z|W}}{\partial w} + \frac{\partial \ln p_W}{\partial w} = 0,$$
since $p_Z$ is independent of $w$.
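To make this relation concrete, here is a minimal scalar sketch; the Gaussian prior $w \sim N(\mu_w, s_w^2)$, the measurement model $z \mid w \sim N(w, s_v^2)$, and every name in the code are our assumptions for illustration:

```python
# Assumed scalar model: prior w ~ N(mu_w, s_w^2), measurement z|w ~ N(w, s_v^2).
mu_w, s_w, s_v, z = 1.0, 2.0, 0.5, 3.0

# The stationarity condition d ln p_{Z|W}/dw + d ln p_W/dw = 0 reads
# (z - w)/s_v^2 - (w - mu_w)/s_w^2 = 0, whose solution is:
w_map = (s_w**2 * z + s_v**2 * mu_w) / (s_w**2 + s_v**2)

# Check the condition numerically; w_ML = z for this likelihood.
grad = (z - w_map) / s_v**2 - (w_map - mu_w) / s_w**2
print(w_map, grad)  # w_map lies between z (ML) and mu_w (prior mean); grad ~ 0
```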
Now, for a Gaussian variable such that
$$p_W = \frac{1}{\sqrt{(2\pi)^n\, |\Lambda|}} \exp\left(-\frac{1}{2}\,(w - E(w))^T \Lambda^{-1} (w - E(w))\right),$$
we have
$$\frac{\partial \ln p_W}{\partial w} = -\frac{1}{2}\, \frac{\partial\left[(w - E(w))^T \Lambda^{-1} (w - E(w))\right]}{\partial w} = -\Lambda^{-1}\,(w - E(w)).$$
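This gradient formula is easy to verify numerically; the following sketch uses an arbitrary 2-D mean and covariance of our choosing and a central finite difference:

```python
import numpy as np

# Finite-difference check of d(ln p_W)/dw = -Lambda^{-1} (w - E[w]);
# the mean and covariance values are arbitrary test data.
mu = np.array([1.0, -1.0])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])
Lam_inv = np.linalg.inv(Lam)

def log_pw(w):
    d = w - mu
    return -0.5 * d @ Lam_inv @ d  # log-density up to a constant in w

w = np.array([0.5, 0.2])
eps = 1e-6
num_grad = np.array([(log_pw(w + eps * e) - log_pw(w - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(num_grad, -Lam_inv @ (w - mu))  # the two gradients agree
```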
When the variance of a random variable is large, this means that our a priori knowledge is not trustworthy. The limit case of $\Lambda^{-1} \to 0$ corresponds to a total lack of a priori knowledge: the prior term $\partial \ln p_W / \partial w$ vanishes, and the MAP estimate coincides with the ML estimate.
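The same limit can be seen numerically by letting the prior variance grow in the hypothetical scalar model used above:

```python
# Reuses the assumed scalar Gaussian model from the earlier sketch.
z, mu_w, s_v = 3.0, 1.0, 0.5
for s_w in [1.0, 10.0, 100.0, 1000.0]:
    # As s_w grows, Lambda^{-1} = 1/s_w^2 -> 0 and w_MAP -> w_ML = z.
    w_map = (s_w**2 * z + s_v**2 * mu_w) / (s_w**2 + s_v**2)
    print(s_w, w_map)  # approaches z = 3.0
```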