$$\bar{y}_{\cdot j \cdot} = \frac{\sum_{i=1}^{a}\sum_{k=1}^{n} y_{ijk}}{an} \qquad \text{(Column average)} \tag{18.A.2}$$

$$\bar{y}_{ij\cdot} = \frac{\sum_{k=1}^{n} y_{ijk}}{n} \qquad \text{(Treatment or cell average)} \tag{18.A.3}$$

$$\bar{y}_{\cdots} = \frac{\sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n} y_{ijk}}{abn} \qquad \text{(Overall average)} \tag{18.A.4}$$
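The averages in Eqs. (18.A.2)–(18.A.4) can be sketched numerically. The following is a minimal illustration with NumPy; the $2 \times 3$ data set with $n = 2$ replicates per cell is hypothetical, invented purely for this example.

```python
import numpy as np

# Hypothetical balanced two-factor data: y[i, j, k] is replicate k in
# cell (i, j), with a = 2 levels of A, b = 3 levels of B, n = 2 replicates.
y = np.array([
    [[12.0, 14.0], [15.0, 13.0], [20.0, 18.0]],
    [[11.0, 10.0], [16.0, 17.0], [22.0, 21.0]],
])
a, b, n = y.shape

col_avg = y.mean(axis=(0, 2))   # ybar_.j. : average over i and k, Eq. (18.A.2)
cell_avg = y.mean(axis=2)       # ybar_ij. : average over k,       Eq. (18.A.3)
overall_avg = y.mean()          # ybar_... : average over all abn, Eq. (18.A.4)
```

Averaging over tuples of axes mirrors the nested summations directly: each dot in the subscript corresponds to one axis averaged out.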
It can be shown that:
$$
\underbrace{\sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n} \left(y_{ijk} - \bar{y}_{\cdots}\right)^2}_{SS_T}
= \underbrace{bn\sum_{i=1}^{a} \left(\bar{y}_{i\cdot\cdot} - \bar{y}_{\cdots}\right)^2}_{SS_A}
+ \underbrace{an\sum_{j=1}^{b} \left(\bar{y}_{\cdot j\cdot} - \bar{y}_{\cdots}\right)^2}_{SS_B}
+ \underbrace{n\sum_{i=1}^{a}\sum_{j=1}^{b} \left(\bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y}_{\cdots}\right)^2}_{SS_{AB}}
+ \underbrace{\sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n} \left(y_{ijk} - \bar{y}_{ij\cdot}\right)^2}_{SS_E}
\tag{18.A.5}
$$
Or simply:

$$SS_T = SS_A + SS_B + SS_{AB} + SS_E \tag{18.A.6}$$
As depicted in Figure 18.A.1, $SS_T$ denotes the "total sum of squares," a measure of the total variation in the whole data set. $SS_A$ is the sum of squares due to factor A, a measure of the variation caused by the main effect of A; $SS_B$ is the sum of squares due to factor B, a measure of the variation caused by the main effect of B. $SS_{AB}$ is the sum of squares due to the interaction of factors A and B (denoted AB), a measure of the variation caused by that interaction. $SS_E$ is the sum of squares due to error, a measure of the variation attributable to error.
2. Test the null hypotheses for the significance of the factor A main effect, the factor B main effect, and their interaction. The test vehicle is the mean square: the mean square of a source of variation is calculated by dividing that source's sum of squares by its degrees of freedom. The actual amount of variability in the response data depends on the data size. A convenient way to express this dependence is to say that each sum of squares has degrees of freedom (DF) equal to the size of its corresponding variability source reduced by one. The number of degrees of freedom associated with each sum of squares is shown in Table 18.A.1.
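The full computation, from the decomposition in Eq. (18.A.5) through the mean squares, can be sketched as follows. This is a minimal illustration under assumed data: the $2 \times 3$ design with $n = 2$ replicates per cell is hypothetical, and the DF formulas used are the standard ones for a balanced two-factor design.

```python
import numpy as np

# Hypothetical balanced data: y[i, j, k], a = 2, b = 3, n = 2.
y = np.array([
    [[12.0, 14.0], [15.0, 13.0], [20.0, 18.0]],
    [[11.0, 10.0], [16.0, 17.0], [22.0, 21.0]],
])
a, b, n = y.shape

row_avg = y.mean(axis=(1, 2))    # ybar_i..
col_avg = y.mean(axis=(0, 2))    # ybar_.j.
cell_avg = y.mean(axis=2)        # ybar_ij.
grand = y.mean()                 # ybar_...

# Sums of squares, term by term as in Eq. (18.A.5).
ss_t = ((y - grand) ** 2).sum()
ss_a = b * n * ((row_avg - grand) ** 2).sum()
ss_b = a * n * ((col_avg - grand) ** 2).sum()
ss_ab = n * ((cell_avg - row_avg[:, None] - col_avg[None, :] + grand) ** 2).sum()
ss_e = ((y - cell_avg[..., None]) ** 2).sum()
assert np.isclose(ss_t, ss_a + ss_b + ss_ab + ss_e)   # Eq. (18.A.6)

# Degrees of freedom (standard for a balanced two-factor design),
# mean squares, and the F-ratios against MS_E that drive the tests.
df_a, df_b = a - 1, b - 1
df_ab, df_e = (a - 1) * (b - 1), a * b * (n - 1)
ms_a, ms_b = ss_a / df_a, ss_b / df_b
ms_ab, ms_e = ss_ab / df_ab, ss_e / df_e
f_a, f_b, f_ab = ms_a / ms_e, ms_b / ms_e, ms_ab / ms_e
```

Note that the degrees of freedom partition exactly as the sums of squares do: $(a-1) + (b-1) + (a-1)(b-1) + ab(n-1) = abn - 1$, matching Table 18.A.1.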