3.2.3 Lilliefors Test
The 'Lilliefors test' was named after Hubert Lilliefors, a professor of statistics
at George Washington University, and is an adaptation of the 'Kolmogorov-
Smirnov test' (Lilliefors, 1967, 1969). The 'Lilliefors test' compares the
empirical cumulative distribution of the data to the expected cumulative normal
distribution. The 'Lilliefors test' differs from the 'K-S test' in that the unknown
population parameters are estimated from the sample, while the test statistic is
the same. The critical values of the two tests are different, which can lead to
different decisions (Mendes and Pala, 2003). The 'Lilliefors test' is more
powerful than the 'chi-square test' for large sample sizes and is recommended
by the US Environmental Protection Agency (USEPA, 1996).
The 'Lilliefors test' is used to test the null hypothesis that the data come
from a normally distributed population whose mean and variance are estimated
from the sample. The procedure for applying the test is as follows:
Step 1: Estimate the population mean and variance of the time series data.
Step 2: Find the maximum discrepancy between the empirical distribution
function and the cumulative distribution function (CDF) of the normal
distribution with the estimated mean and estimated variance. The
test statistic for the 'Lilliefors test' is the same as that for the 'K-S test'
shown in Eqn. (3).
Step 3: Finally, the null hypothesis is rejected if the maximum
discrepancy is large enough to be statistically significant, which is
the same criterion used for testing the null hypothesis in the 'K-S test'.
In the 'Lilliefors test', since the population parameters and the CDF are
estimated from the sample data, the hypothesized CDF moves closer to the data
themselves. As a result, the computed maximum discrepancy is
smaller than it would have been if the null hypothesis had singled out
just one normal distribution. Thus, the distribution of the
'Lilliefors test' statistic, assuming the null hypothesis is true, is stochastically
smaller than that of the Kolmogorov-Smirnov statistic. This is the
Lilliefors correction to the 'K-S test'. To date, tables for this distribution
have been prepared by Monte Carlo methods only (Lilliefors, 1967,
1969).
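The three steps above can be sketched in Python. This is an illustrative implementation, not code from the original reference: the function names are my own, and the Monte Carlo loop only approximates the critical values that Lilliefors (1967) tabulated.

```python
import numpy as np
from scipy.stats import norm

def lilliefors_statistic(x):
    """Maximum discrepancy D between the ECDF and the fitted normal CDF."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Step 1: estimate the population mean and variance from the sample
    mu, sigma = x.mean(), x.std(ddof=1)
    # Step 2: fitted normal CDF at each ordered data point
    cdf = norm.cdf(x, loc=mu, scale=sigma)
    ecdf_hi = np.arange(1, n + 1) / n  # ECDF value at/just after each point
    ecdf_lo = np.arange(0, n) / n      # ECDF value just before each point
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

def lilliefors_critical_value(n, alpha=0.05, n_sim=2000, seed=1):
    """Monte Carlo approximation of the critical value (cf. Lilliefors, 1967)."""
    rng = np.random.default_rng(seed)
    sims = [lilliefors_statistic(rng.standard_normal(n)) for _ in range(n_sim)]
    return float(np.quantile(sims, 1.0 - alpha))

# Step 3: reject normality if D exceeds the simulated critical value
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=3.0, size=50)  # synthetic example data
d = lilliefors_statistic(sample)
cv = lilliefors_critical_value(len(sample))
print(f"D = {d:.4f}, critical value (alpha=0.05) = {cv:.4f}, reject = {d > cv}")
```

Because the parameters are estimated in each simulated replicate as well, the simulated critical values embody the Lilliefors correction described above; they are noticeably smaller than the corresponding K-S critical values for the same sample size.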
3.2.4 Anderson-Darling Test
The Anderson-Darling test is used to test whether a sample of data came from a
population with a normal distribution. It is a modification of the Kolmogorov-
Smirnov (K-S) test that gives more weight to the tails of the distribution than
the K-S test does (Stephens, 1974). The K-S test is distribution-free in the sense
that its critical values do not depend on the specific distribution being tested.
In the Anderson-Darling test, however, the critical values depend on the
hypothesized distribution, which makes it a more sensitive test; the
disadvantage is that critical values must be calculated for each distribution.
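As an illustration (not part of the original text), the test can be run with SciPy's `anderson` function, which implements the Anderson-Darling test for several hypothesized distributions and returns the distribution-specific critical values discussed above. The sample here is synthetic.

```python
import numpy as np
from scipy.stats import anderson

# Synthetic sample; in practice this would be the time series under study
rng = np.random.default_rng(42)
sample = rng.normal(loc=5.0, scale=2.0, size=200)

result = anderson(sample, dist='norm')
print(f"A^2 statistic = {result.statistic:.3f}")
# SciPy reports critical values at the 15%, 10%, 5%, 2.5% and 1% levels
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > cv else "fail to reject"
    print(f"  alpha = {sl:4.1f}%: critical value = {cv:.3f} -> {verdict} H0")
```

Note that the critical values returned are specific to the normal distribution; requesting `dist='expon'` or another supported distribution yields a different set, which is exactly the distribution dependence described above.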