Image Processing Reference
In-Depth Information
have been given the observation ζ. We would like to estimate θ, i.e., to find the best possible value of θ from the given observation ζ. Firstly, we want to compute the posterior probability of θ given the observation ζ as in Eq. (5.1).

P(θ | ζ) ∝ P(ζ | θ) P(θ).    (5.1)
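As a concrete sketch of Eq. (5.1), the unnormalized posterior P(ζ | θ) P(θ) can be evaluated on a grid of candidate θ values and then normalized. The Gaussian forms, variances, and function names below are illustrative assumptions, not part of the text.

```python
import math

# Assumed scalar model (for illustration only): zeta ~ N(theta, noise_var),
# prior theta ~ N(prior_mean, prior_var).
def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def grid_posterior(zeta, thetas, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
    # Likelihood times prior, per Eq. (5.1), evaluated at each grid point
    unnorm = [gaussian_pdf(zeta, t, noise_var) * gaussian_pdf(t, prior_mean, prior_var)
              for t in thetas]
    total = sum(unnorm)
    return [u / total for u in unnorm]  # normalize so the grid sums to 1

thetas = [i / 100 for i in range(-500, 501)]
post = grid_posterior(2.0, thetas)
peak = thetas[post.index(max(post))]  # posterior mode on the grid
print(peak)
```

For these particular Gaussian choices the posterior mode sits halfway between the observation and the prior mean, which the grid search recovers.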
The first term in this expression refers to the conditional probability of the occurrence of the observation ζ given θ, the underlying variable of interest. The term P(ζ | θ) is known as the likelihood of θ. Likelihood functions play an important role in Bayesian estimation.
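The point that P(ζ | θ) is a function of θ for a fixed observation can be seen in a small sketch; the Gaussian model and the names here are our own assumptions, not the text's.

```python
import math

# Illustrative assumption: zeta ~ N(theta, var).  For a fixed observation
# zeta, the likelihood L(theta) = P(zeta | theta) varies with theta.
def likelihood(theta, zeta, var=1.0):
    return math.exp(-(zeta - theta) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

zeta = 2.0
L_at_2 = likelihood(2.0, zeta)  # theta matching the observation
L_at_0 = likelihood(0.0, zeta)  # theta far from the observation
print(L_at_2 > L_at_0)  # True: the likelihood favors theta near zeta
```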
The second term P(θ) in Eq. (5.1) is the probability density function (pdf) of the variable of interest, θ. Although exact or precise information about the variable θ is not available, some knowledge about it is generally available. It could be obtained from the physical nature of the system, its history, or some known constraints on the system. This information reduces some of the uncertainty associated with θ, and hence it is also referred to as the prior probability, or simply the prior.
The prior information P(θ) may not always be available. Depending upon whether the prior information is available or not, a Bayesian framework can be categorized into the following two estimation methods:
1. Maximum likelihood (ML).
2. Maximum a posteriori (MAP).
The ML estimator neglects the prior probability of the quantity of interest, and
maximizes only the likelihood function, i.e., the probability defined by the model.
The principle of the maximum likelihood (ML) estimator is to find the value of the variable θ that maximizes the likelihood function, i.e.,

θ_ML = argmax_θ P(ζ | θ).    (5.2)
It is a common practice to deal with the logarithmic value of the likelihood function
due to mathematical convenience. However, it does not alter the solution as the log
is a monotonically increasing function. One of the most commonly used examples of ML estimation is the least squares method, which arises when the perturbations are Gaussian distributed.
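The Gaussian case above can be sketched directly: maximizing the log-likelihood of observations ζ_i ~ N(θ, σ²) over θ is equivalent to minimizing the sum of squared residuals, and the closed-form solution is the sample mean. The data and function names below are illustrative.

```python
# Sketch (assumed Gaussian model): ML estimation of the mean theta from
# observations zeta_i ~ N(theta, sigma^2).
def ml_estimate_gaussian_mean(observations):
    # argmax_theta sum_i log P(zeta_i | theta) reduces to the sample mean
    return sum(observations) / len(observations)

obs = [1.9, 2.1, 2.0, 1.8, 2.2]  # hypothetical observations
theta_ml = ml_estimate_gaussian_mean(obs)

# Numerical check of the least-squares connection: the same theta minimizes
# the sum of squared residuals over a fine grid of candidates.
sse = lambda t: sum((z - t) ** 2 for z in obs)
candidates = [i / 1000 for i in range(1500, 2501)]
best = min(candidates, key=sse)
print(theta_ml, best)
```

Both the closed form and the grid search land on the same value, illustrating why least squares is the ML estimator under Gaussian noise.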
The maximum a posteriori (MAP) estimation is closely related to the ML estima-
tion, but incorporates the prior distribution of the quantity to be estimated. Consider
the expression for the ML estimation given by Eq. ( 5.2 ). The expression for MAP
is obtained by augmenting the objective with the prior P(θ). The MAP estimation thus refers to estimating the value of θ that maximizes the expression in Eq. (5.1), and can be formulated as Eq. (5.3).

θ_MAP = argmax_θ P(ζ | θ) P(θ).    (5.3)
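For one commonly assumed special case, a Gaussian likelihood with a Gaussian prior, Eq. (5.3) has a closed form: a precision-weighted average of the observation and the prior mean. The model choice and helper names below are our own illustration, not the text's.

```python
# Sketch under assumed Gaussian forms: zeta ~ N(theta, noise_var),
# prior theta ~ N(prior_mean, prior_var).
def map_estimate(zeta, prior_mean, prior_var, noise_var):
    # Maximizing P(zeta | theta) P(theta) is equivalent to minimizing
    # (zeta - theta)^2 / noise_var + (theta - prior_mean)^2 / prior_var,
    # whose minimizer is a precision-weighted average:
    w = prior_var / (prior_var + noise_var)
    return w * zeta + (1 - w) * prior_mean

# A very broad prior makes MAP approach the ML estimate (the observation);
# a prior as informative as the data pulls the estimate toward its mean.
print(map_estimate(2.0, 0.0, 1e6, 1.0))  # nearly 2.0 (prior almost flat)
print(map_estimate(2.0, 0.0, 1.0, 1.0))  # halfway between prior mean and data
```

This also shows the relationship stated in the text: dropping the prior term (letting it become flat) recovers the ML estimator of Eq. (5.2).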
 