collected based on temporal stability, internal consistency, test content, response
processes, internal structure, and relations to other variables (Arjoon et al., 2013).
Test content evidence is typically established by asking a panel of domain
experts to judge whether the items appropriately sample the domain of interest.
Cognitive interviews are often used for gathering response process evidence,
providing insight into whether thought processes invoked by test items are those
intended by the test developer. Constructed response items can also be useful tools
for examining this sort of validity evidence. Respondents need to first understand
the nuances of the item and then mentally retrieve relevant information in order to
make a decision about how to respond to the item; response process evidence
demonstrates respondents' understanding of an item by illuminating their thinking
about that item. Relational validity evidence is typically inferred from statistical
analysis, such as confirmatory factor analysis and correlation analysis. The internal
structure of an instrument, or how the items in the instrument relate to each other, is
important because an instrument usually specifies the intended construct as
unidimensional or multidimensional, with specific item sets measuring different
aspects of the construct in the latter case. Evidence based on internal structure
establishes the degree to which the item scores for the instrument conform to the
hypothetical construct. Evidence based on relations to other variables concerns
hypothesized relationships between the construct measured by the instrument and
other variables within a specific theoretical framework. Accumulating this evidence
requires information about the other variables of interest, gathered via additional
tests or surveys of the respondents.
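To make the internal consistency evidence mentioned above concrete, the following sketch computes Cronbach's alpha, a common internal consistency statistic, for a small set of invented Likert-scale responses. The data, the function name, and the item values are all hypothetical and chosen only for illustration; they are not taken from any of the instruments discussed here.

```python
# Sketch: Cronbach's alpha for a hypothetical 4-item subscale.
# All response data below are invented for illustration only.

def cronbach_alpha(items):
    """items: one list of scores per item; each list holds one
    score per respondent, all lists the same length."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Sum of per-item variances
    item_var = sum(variance(it) for it in items)
    # Variance of respondents' total scores across all items
    totals = [sum(it[r] for it in items) for r in range(n)]
    total_var = variance(totals)
    # Standard formula: alpha = k/(k-1) * (1 - sum(item var) / total var)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point ratings from six respondents on four items
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Values near 1 indicate that the items co-vary strongly, which is taken as evidence that they measure a common construct; published evaluations of attitude instruments typically report this statistic per subscale.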
All the sources of evidence mentioned above provide support for instrument
function from multiple perspectives. Gathering evidence, even for an existing
instrument, is a long and iterative process and should never be viewed as complete.
Instead of developing a new instrument from scratch, it is desirable to use and
evaluate an established instrument for respondents in different contexts. While
many instruments relating to attitude toward chemistry are available, five have
been specifically evaluated with respect to published validity evidence in the
college chemistry context (Arjoon et al., 2013): the Cognitive Expectations for
Learning Chemistry Survey (CHEMX) (Grove & Bretz, 2007), Colorado Learning
Attitudes about Science Survey (CLASS) (Barbera, Adams, Wieman, & Perkins,
2008), Chemistry Self-Concept Inventory (CSI) (Bauer, 2005), and Attitude toward
the Subject of Chemistry Inventory (ASCI) (Bauer, 2008) and its shortened version
ASCIv2. Among these instruments, ASCIv2 has the advantage of clear connection
with the attitude definition and framework in psychology (Rosenberg & Hovland,
1960), which is supported by empirical data from student samples at multiple sites
(Xu et al., 2012; Xu & Lewis, 2011). The ASCIv2 retains eight items from ASCI in
two subscales, “intellectual accessibility” (items 1, 4, 5, and 10 from ASCI) about
the difficulty of chemistry and “emotional satisfaction” (items 7, 11, 14, and
17 from ASCI) about how satisfied students feel about chemistry in general,
which are congruent with two components (cognitive and affective, respectively)
of attitude theory. This study focuses on quantitative analysis of internal structure,
internal consistency, and some qualitative evidence based on response processes for