approval of the patients whose management it may indirectly influence or,
conversely, to withhold advice of an information resource from the physi-
cians caring for a control group? Within the scope of this volume, it is only
possible to alert investigators to this broad category of concerns and to
emphasize the necessity of requesting the approval of the appropriate IRB
(or body with analogous authority) before undertaking a function or impact
study. Studies of information technology applied to health care, research,
and education invoke these human subjects' considerations in slightly dif-
ferent ways, but no application domain is exempt from them. The IRBs will
offer investigators specific instructions regarding, for example, from whom
informed consent must be obtained and how. These instructions ultimately make
the investigator's life easier by removing these difficult considerations from
the sphere of the investigator's sole judgment, allowing the evaluation study
to proceed with confidence that appropriate ethical procedures are being
followed.
The final ethical issue discussed here concerns the evaluator's integrity
and professionalism [14]. Evaluators are in a strong position to bias the
collection, interpretation, and reporting of study data in such a way as to
favor—or disfavor—the information resource and its developers. One
mechanism to address this concern would restrict the pool of potential
evaluators to independent agents, commissioned by an independent organization
with no predisposition toward, and no profit to be made from, any particular
outcome of the evaluation. While there is a role for evaluations conducted
with this extreme level of detachment, it is impractical and perhaps
suboptimal as a general strategy. The more removed the investigators are from
the environment in which a resource is developed or deployed, the steeper
their learning curve in mastering the key issues that should drive the
generation of the evaluation questions. Some
“incest” in evaluation is often inevitable, and, some would argue, desirable
to enhance the relevance and the legitimacy of the study.
In the extreme case, where the developers of an information resource are
the sole evaluators of their own work, the credibility of the study can be
preserved through an external audit of decisions taken and data collected.
Otherwise, no matter how careful the methods and clear the results, there
may remain a suspicion that the reported study attesting to the statistically
significant benefits of an information resource was the 20th such study
undertaken, the 19 negative studies before it having been suppressed. When the
evaluation group for a project includes many of the same
people as the development group, it is advisable to create an advisory
committee for the evaluation to perform an auditing, validation, and
legitimating function. A recent systematic review of 100 randomized trials of
clinical decision support systems underscores these concerns: about three
quarters of the trials conducted by system developers reported improvements in
clinical practice, compared with only one quarter of those conducted by
independent evaluators [15].