Minimum variance estimators provide relatively consistent results from
sample to sample. While minimum variance is desirable, it may be of
practical value only if the estimator is also unbiased. For example, an
estimator that always returns the value 6 has minimum (zero) variance,
but offers few other advantages.
Plug-in estimators, in which one substitutes the sample statistic for
the population statistic (the sample mean for the population mean, or
the sample's 20th percentile for the population's 20th percentile), are
consistent, but they are not always unbiased or minimum loss.
Always choose an estimator that will minimize losses.
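
To make the plug-in idea concrete, here is a minimal sketch in Python with
NumPy; the exponential population, the sample size, and the seed are
illustrative assumptions, not taken from the text. It computes plug-in
estimates of a mean and a 20th percentile, then lets the sample size grow to
illustrate consistency.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: exponential with mean 1 (an assumption for
# this example).
population_mean = 1.0
population_p20 = -np.log(1 - 0.20)   # true 20th percentile of this population

sample = rng.exponential(scale=1.0, size=50)

# Plug-in estimates: substitute the sample statistic for the population statistic.
plug_in_mean = sample.mean()
plug_in_p20 = np.percentile(sample, 20)

print(f"plug-in mean: {plug_in_mean:.3f} (true {population_mean:.3f})")
print(f"plug-in 20th percentile: {plug_in_p20:.3f} (true {population_p20:.3f})")

# Consistency: with larger samples the plug-in estimates settle toward
# the population values.
for n in (50, 500, 5000, 50000):
    s = rng.exponential(scale=1.0, size=n)
    print(n, round(s.mean(), 3), round(np.percentile(s, 20), 3))

The printed estimates approach the population values as n grows, which is the
consistency described above; the sketch says nothing about bias or loss.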
Myth of Maximum Likelihood
The popularity of the maximum likelihood estimator is hard to comprehend.
This estimator may be completely unrelated to the loss function and has as
its sole justification that it corresponds to that value of the parameter
that makes the observations most probable—provided, that is, they are drawn
from a specific predetermined distribution. The observations might have
resulted from a thousand other a priori possibilities.
A common and lamentable fallacy is that the maximum likelihood estimator
has many desirable properties—that it is unbiased and minimizes the
mean-squared error. But this is true only for the maximum likelihood
estimator of the mean of a normal distribution.2
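
As a small illustration of this point, the following simulation sketch
(Python with NumPy; the normal population N(0, 4), the sample size of 10,
and the number of replications are assumptions made for the example)
compares the maximum likelihood estimator of a normal mean, which is
unbiased, with the maximum likelihood estimator of the variance, which
divides by n rather than n - 1 and is therefore biased downward.

import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0                   # assumed population variance for the example
n, reps = 10, 100_000

mle_means = np.empty(reps)
mle_vars = np.empty(reps)
for i in range(reps):
    x = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=n)
    mle_means[i] = x.mean()      # MLE of the normal mean (unbiased)
    mle_vars[i] = x.var(ddof=0)  # MLE of the variance; divides by n, not n - 1

print("average MLE of the mean:    ", round(mle_means.mean(), 3))  # close to 0
print("average MLE of the variance:", round(mle_vars.mean(), 3))   # close to 3.6, not 4.0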
Statistics instructors would be well advised to avoid introducing
maximum likelihood estimation and to focus instead on methods for
obtaining minimum loss estimators for a wide variety of loss functions.
INTERVAL ESTIMATES
Point estimates are seldom satisfactory in and of themselves. First, if the
observations are continuous, the probability is zero that a point estimate
will exactly equal the parameter being estimated. Second, we still require
some estimate of the precision of the point estimate.
In this section, we consider one form of interval estimate derived from
bootstrap measures of precision. A second form, derived from tests of
hypotheses, will be considered in the next chapter.
Nonparametric Bootstrap
The bootstrap can help us obtain an interval estimate for any aspect of
a distribution—a median, a variance, a percentile, or a correlation
coefficient—if the observations are independent and all come from distributions
2 It is also true in some cases for very large samples. How large the sample must be in each
case will depend both upon the parameter being estimated and upon the distribution from
which the observations are drawn.
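
To fix ideas, here is a minimal sketch of a nonparametric bootstrap
percentile interval for a median (Python with NumPy; the simulated data,
the 30 observations, the 5,000 resamples, and the 90% level are all
illustrative assumptions): resample the observations with replacement,
recompute the statistic on each resample, and take percentiles of the
resampled statistics as the interval estimate.

import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=1.0, size=30)   # hypothetical sample for the example

n_boot = 5000
boot_medians = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)
    boot_medians[b] = np.median(resample)

# 90% percentile interval for the population median
lo, hi = np.percentile(boot_medians, [5, 95])
print(f"plug-in median:                    {np.median(data):.3f}")
print(f"90% bootstrap percentile interval: ({lo:.3f}, {hi:.3f})")

Other aspects of the distribution (a variance, a percentile, a correlation
coefficient) are handled in the same way by swapping np.median for the
corresponding statistic.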