Thus, using Table 2.1 we get
$$
P(\text{variety}, \text{taste} = \text{plum}) = P(\text{taste} = \text{plum} \mid \text{variety})\, P(\text{variety}) =
\begin{cases}
0.18, & \text{for variety = Cabernet-Sauvignon}\\
0, & \text{for variety = Tannat}\\
0.11, & \text{for variety = Malbec}\\
0.06, & \text{for variety = Merlot.}
\end{cases}
$$
Thus, in this case, even though the Malbec wine provides the highest probability of finding the flavor of plum, the MAP estimator changes the answer to Cabernet-Sauvignon. This is because the taster now takes into account the marginal probabilities P(Cabernet-Sauvignon) = 0.4 and P(Malbec) = 0.2, which increases the probability that the observed flavor of plum originated from a Cabernet-Sauvignon wine.
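To make the comparison concrete, the sketch below contrasts the maximum-likelihood choice (largest P(taste = plum | variety)) with the MAP choice (largest product P(taste = plum | variety) P(variety)). Only the Cabernet-Sauvignon and Malbec priors are given in the text; the remaining priors and likelihoods are hypothetical fill-ins, chosen merely to be consistent with the products above.

```python
# MAP vs. ML choice of wine variety given the observed flavor "plum".
# Priors for Cabernet-Sauvignon (0.4) and Malbec (0.2) come from the text;
# the Tannat and Merlot priors are hypothetical placeholders.
priors = {
    "Cabernet-Sauvignon": 0.4,
    "Tannat": 0.1,               # hypothetical
    "Malbec": 0.2,
    "Merlot": 0.3,               # hypothetical
}

# Likelihoods P(taste = plum | variety): the Cabernet-Sauvignon and Malbec
# values are implied by the products 0.18 and 0.11; the Merlot value is
# illustrative.
likelihoods = {
    "Cabernet-Sauvignon": 0.45,
    "Tannat": 0.0,
    "Malbec": 0.55,
    "Merlot": 0.20,              # hypothetical
}

# ML keeps only the likelihood; MAP weighs the likelihood by the prior.
ml_choice = max(likelihoods, key=likelihoods.get)
map_choice = max(priors, key=lambda v: likelihoods[v] * priors[v])

print("ML estimate: ", ml_choice)   # Malbec (largest likelihood, 0.55)
print("MAP estimate:", map_choice)  # Cabernet-Sauvignon (0.45 * 0.4 = 0.18)
```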
2.5.4.2 Minimum Mean-Squared Error
As mentioned before, the estimation error is directly related to the efficiency
of the estimator. For multiple parameters, we can define the error vector
$$
\tilde{\boldsymbol{\theta}} = \boldsymbol{\theta} - \hat{\boldsymbol{\theta}} \tag{2.144}
$$
Whenever the set of parameters θ to be estimated is random, we may consider some measure of closeness between θ and its estimate. The statistical average of the estimation error is not, per se, a suitable candidate, since a zero-mean error may still have a significant variance; in other words, the estimator may be unbiased but not efficient. A suitable option is to work with the statistical average of the square of the error, i.e., with the mean-squared error (MSE). This option gives rise to the MMSE estimation method, which consists in finding the estimate of θ that minimizes
$$
J_{\mathrm{MSE}}(\hat{\boldsymbol{\theta}}) = E\!\left[\left\|\tilde{\boldsymbol{\theta}}\right\|^{2}\right] = E\!\left[\left\|\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right\|^{2}\right] \tag{2.145}
$$
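As a numerical illustration of (2.145), the sketch below approximates J_MSE by averaging the squared error over many realizations of a scalar parameter. The particular model (a zero-mean Gaussian θ observed in additive Gaussian noise) and the linear candidate estimators are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: theta ~ N(0, sigma_theta^2), observed as x = theta + n,
# with n ~ N(0, sigma_n^2) independent of theta.
sigma_theta, sigma_n = 1.0, 0.5
N = 100_000

theta = rng.normal(0.0, sigma_theta, N)
x = theta + rng.normal(0.0, sigma_n, N)

# Evaluate J_MSE = E[(theta - theta_hat)^2] for linear candidate estimators
# theta_hat = a * x, replacing the expectation by a sample average.
for a in (0.25, 0.5, 0.8, 1.0):
    theta_hat = a * x
    j_mse = np.mean((theta - theta_hat) ** 2)
    print(f"a = {a:4.2f}   J_MSE ~ {j_mse:.4f}")

# For this Gaussian model, the smallest MSE among these candidates occurs
# near a = sigma_theta^2 / (sigma_theta^2 + sigma_n^2) = 0.8.
```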
In (2.145) it should be emphasized that, since θ is random, the expectation operator is taken with respect to the joint pdf p(x, θ), which means that
$$
J_{\mathrm{MSE}}(\hat{\boldsymbol{\theta}}) = \int_{X}\!\int \left\|\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right\|^{2} p(\mathbf{x}, \boldsymbol{\theta})\, d\mathbf{x}\, d\boldsymbol{\theta} = \int_{X} \left[\int \left\|\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right\|^{2} p(\boldsymbol{\theta} \mid \mathbf{x})\, d\boldsymbol{\theta}\right] p_{X}(\mathbf{x})\, d\mathbf{x} \tag{2.146}
$$
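The decomposition in (2.146) can be checked numerically: for a jointly Gaussian pair (θ, x) the inner integral reduces to the posterior variance plus the squared distance between the posterior mean and the estimate, so averaging it over realizations of x must reproduce the MSE computed directly from joint samples. The Gaussian model and the fixed estimator theta_hat(x) = 0.5x used below are assumptions made only for this check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: theta ~ N(0, 1), x = theta + n with n ~ N(0, 0.5^2).
sigma_theta2, sigma_n2 = 1.0, 0.25
N = 200_000

theta = rng.normal(0.0, np.sqrt(sigma_theta2), N)
x = theta + rng.normal(0.0, np.sqrt(sigma_n2), N)

# A fixed (deliberately suboptimal) estimator used for the check.
theta_hat = 0.5 * x

# Left-hand side of (2.146): squared error averaged over joint samples.
lhs = np.mean((theta - theta_hat) ** 2)

# Right-hand side: for this Gaussian model p(theta | x) is Gaussian with known
# mean and variance, so the inner integral equals
# Var(theta | x) + (E[theta | x] - theta_hat)^2, averaged over samples of x.
post_mean = sigma_theta2 / (sigma_theta2 + sigma_n2) * x
post_var = sigma_theta2 * sigma_n2 / (sigma_theta2 + sigma_n2)
rhs = np.mean(post_var + (post_mean - theta_hat) ** 2)

print(f"direct average over (x, theta): {lhs:.4f}")
print(f"average of inner integral:      {rhs:.4f}")  # matches closely
```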