1. What are the dimensions (number of rows and number of columns) of
the two objects-by-observations matrices used to compute these results?
2. Is there any evidence of rater tendency errors (leniency, stringency, or
central tendency) in these data?
3. Viewing this as a measurement study, what would you be inclined to
conclude about the measurement process? Consider reliability and validity
issues.
4. Viewing this as a demonstration study, what would you be inclined to
conclude about the accuracy of TraumAID's advice?
Items Facet
As defined earlier, items are the individual elements of an instrument used
to record ratings, opinions, knowledge, or perceptions of an individual we
generically call a “respondent.” Items usually take the form of questions.
The instruments containing the items can be self-administered, read to the
respondent in a highly structured interview, or completed interactively at a
computer. For the same reason that a single task cannot be the basis for
reliable assessment of an information resource's performance, a single
item cannot reliably measure the respondent's beliefs or degree of
knowledge. The strategy for obtaining accurate measurement
is always the same: use multiple independent observations (in this case,
items) and pool the results for each object (in this case, respondents) to
obtain the best estimate of the value of the attribute for that object. If the
items forming a set are shown to be “well behaved” in an appropriate
measurement study, we can say that they constitute a scale.
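The pooling strategy described above can be sketched with a small, hypothetical objects-by-observations matrix. The respondents, the item responses, and the choice of Cronbach's alpha as the reliability coefficient are illustrative assumptions, not drawn from the text; alpha is simply one common way to check whether a set of items is "well behaved" enough to form a scale.

```python
# Hypothetical data: 4 respondents (objects) answer a 5-item attitude
# scale (observations), each item scored 1-5. Rows are respondents,
# columns are items -- an objects-by-observations matrix.
from statistics import mean, pvariance

responses = [
    [4, 5, 4, 4, 5],   # respondent 1
    [2, 2, 3, 2, 2],   # respondent 2
    [3, 4, 3, 3, 4],   # respondent 3
    [5, 5, 4, 5, 5],   # respondent 4
]

# Pool the items for each respondent: the mean across items is the
# best estimate of that respondent's attribute value.
pooled = [mean(row) for row in responses]

# Cronbach's alpha, a standard reliability coefficient:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
k = len(responses[0])
item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
total_var = pvariance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(pooled)           # per-respondent pooled scores
print(round(alpha, 3))  # reliability estimate for the item set
```

With these made-up data the items covary strongly across respondents, so alpha is high; a low alpha in a real measurement study would suggest the items do not behave as a single scale.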
When people (health care providers, researchers, students, or patients)
are the object class of interest, investigators frequently use multi-item forms
to assess the personal attitudes, beliefs, or knowledge of these people. This
technique generates a basic one-facet measurement problem with items as
the observations and persons as the objects. Items can also form a facet of
a more complex measurement problem when, for example, multiple judges
complete a multi-item form to render their opinions about multiple case
problems. A vast array of item types and formats is in common use. In
settings where items are used to elicit beliefs or attitudes, there is usually no
correct answer to the items; however, in tests of knowledge, a particular
response is identified by the item developer as correct. We explore a few of
the more common item formats here and discuss some general principles
of item design that work to reduce measurement error.
Almost all items consist of two parts, whether they are used to assess
knowledge or personal beliefs, or to judge performance. The first part is a
stem, which elicits a response; the second provides a structured format for
the individual completing the instrument to respond to the stem. Responses
can be elicited using graphical or visual analog scales, as shown in Figure
6.6. Alternatively, responses can be elicited via a discrete set of options, as
shown in Table 6.4. The semantics of the response options themselves may