of care) and you start collecting data, in the hopes that
you will fail to prove it. Everything stays carefully
blinded. The investigators have no idea what they are
administering and the patients have no idea what
they're taking until a predetermined endpoint - to do
otherwise would destroy the statistics.
Notice the significance of "blinding" in clinical trials, particularly "double blinding," where both subjects and investigators are unaware of the assigned intervention - which means that findings cannot straightforwardly be fed back for program improvement. 6 In contrast to the logic of blinding, the actual conduct of blinding in randomized clinical trials (RCTs) has been assessed in several recent studies, including Boutron et al. 7 and Fergusson et al. 8
This position has been held by researchers outside the field of clinical trials as well: Michael Brooks 9 states that continuous feedback of evaluative findings ". . . has the unfortunate effect of tossing a monkey-wrench into the research design constructed at the program's outset." Daniel Stufflebeam, a leading figure in the program evaluation community, describes the development of his own position:
I had to reject basically everything I had thought necessary for evaluating educational projects, including behavioral objectives, experimental designs, and standardized tests. Instead, I advised educators to key evaluations to provide information for decision-making. 10
The argument against the "experimental method" is methodological, not practical. 11 The critics of