Content-related evidence of validity refers to the content and format of the instrument: Does the sample of items represent the content to be assessed? Is the format of the instrument appropriate?
Criterion-related evidence of validity refers to the relationship between scores obtained using the instrument and scores obtained using one or more other instruments or criteria. How strong is this relationship? How well do the scores estimate present performance or predict future performance of a certain type?
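As a rough illustration (not taken from the source; the scores are invented), a criterion-related validity coefficient is commonly computed as the Pearson correlation between scores on the instrument and scores on a criterion measure:

import numpy as np

# Hypothetical data: scores on the instrument under study and on an external
# criterion (e.g., an established test taken at the same time, or a later outcome).
instrument_scores = np.array([72, 85, 60, 90, 78, 66, 81])
criterion_scores = np.array([70, 88, 58, 93, 75, 70, 79])

# The validity coefficient is the correlation between the two sets of scores;
# values near 1.00 indicate that instrument scores closely track (concurrent
# validity) or predict (predictive validity) the criterion.
r = np.corrcoef(instrument_scores, criterion_scores)[0, 1]
print(f"criterion-related validity coefficient: r = {r:.2f}")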
Construct-related evidence of validity refers to the nature of the psychological construct or characteristic being measured by the instrument. How well does a measure of the construct explain differences in the behavior of individuals or their performance on certain tasks? There are usually three steps in obtaining construct-related evidence of validity: (1) the variable being measured is
clearly defined; (2) hypotheses, based on a theory underlying the variable, are
formed about how people will behave in a particular situation; and (3) the
hypotheses are tested both logically and empirically.
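As one possible illustration of step (3) (not from the source; the groups and scores are invented), a hypothesized group difference can be tested empirically, for example with an independent-samples t-test:

from scipy.stats import ttest_ind

# Suppose the theory predicts that students who have completed a laboratory
# course (group A) will score higher on an inquiry-skills instrument than
# students who have not (group B). Scores are hypothetical.
group_a = [34, 38, 31, 40, 36, 35]
group_b = [28, 30, 27, 33, 29, 26]

# Step (3): test the hypothesis empirically. A significant difference in the
# predicted direction provides one piece of construct-related evidence.
t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")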
Reliability means that scores from an instrument are stable and consistent.
Scores should be nearly the same when the researchers administer the instrument
multiple times on different occasions. Reliability estimates provide researchers
with an idea of how much variation to expect, measured in terms of the reliability
coefficient that ranges from 0.00 to 1.00, with no negative values (Fraenkel & Wallen, 2006). The three best-known ways to obtain a reliability coefficient are the test-retest method, the equivalent-form method, and the internal-consistency method:
1. The test-retest method involves administering the same test twice to the same
group after a certain time interval has elapsed. The reliability coefficient is then
calculated to indicate the relationship between the two sets of scores obtained.
2. The equivalent-form method is used when two different but equivalent forms of an instrument are administered to the same group of individuals during the same time period. A reliability coefficient is then calculated between the two sets of scores
obtained.
3. The internal-consistency method consists of several procedures for estimating reliability, all of which require only a single administration of the instrument. The procedures, illustrated in the brief computational sketch after this list, are:
(a) The split-half procedure involves scoring two halves (usually odd items versus even items) of a test separately for each person and then calculating a correlation coefficient for the two sets of scores; the Spearman-Brown prophecy formula is then applied to this coefficient to estimate the reliability of the full test.
(b) Kuder-Richardson approaches, particularly formulas KR20 and KR21, are the most frequently used formulas for determining internal consistency.
(c) The alpha coefficient (frequently called Cronbach's alpha) is another check on the internal consistency of an instrument.
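The brief sketch below (illustrative only; the item scores are invented and no code appears in the source) shows one way to compute these three internal-consistency estimates for a small set of dichotomously scored items:

import numpy as np

# Invented 5-examinee x 4-item matrix of dichotomous scores (1 = correct).
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
], dtype=float)

k = scores.shape[1]           # number of items
totals = scores.sum(axis=1)   # total score for each examinee
var_total = totals.var()      # variance of total scores (N in the denominator)

# (a) Split-half: correlate odd-item and even-item half scores, then apply the
#     Spearman-Brown prophecy formula r_full = 2r / (1 + r) to estimate the
#     reliability of the full-length test.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)
r_halves = np.corrcoef(odd_half, even_half)[0, 1]
split_half = 2 * r_halves / (1 + r_halves)

# (b) KR20 = k/(k-1) * (1 - sum(p*q)/var_total) for dichotomously scored items,
#     where p is the proportion answering each item correctly and q = 1 - p.
p = scores.mean(axis=0)
kr20 = (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / var_total)

# (c) Cronbach's alpha replaces p*q with the item variances, so it also applies
#     to non-dichotomous (e.g., Likert-type) items; for dichotomous data it
#     coincides with KR20.
alpha = (k / (k - 1)) * (1 - scores.var(axis=0).sum() / var_total)

print(f"split-half (Spearman-Brown corrected): {split_half:.2f}")
print(f"KR20: {kr20:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")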
Internal-consistency estimates of reliability for good affective scales tend to fall in the 0.80s (Anderson & Anderson, 1982); such values indicate a high degree of internal consistency.