demands. It is often the case that the variables that are most readily obtain-
able and most accurately assessed (e.g., length of hospital stay), and which
therefore are employed as outcome measures in studies, are difficult to
relate directly to the effects of a biomedical information resource because
there are numerous intervening or confounding factors. Studies may have
null results, not because there are no effects but because these effects are
not manifest in the outcome measures pursued. In other circumstances, out-
comes cannot be unambiguously assigned a positive value. For example, a
finding that use of a computer-based tutorial program raises medical students'
national licensure examination scores, which are readily obtained and
highly reliable, usually does not settle the argument about the value of
the tutorial program. Instead, it may only kindle a new argument about the
validity of the examination used as an outcome measure. In the most
general case, a resource produces several effects: some positive and some
negative. Unless the reasons for these mixed effects can somehow be
explored further, the impact of a resource cannot be comprehensively
understood, or it may be seriously misestimated. When there are mixed
results, the resource is often judged entirely by the single result of
most interest to the group holding the greatest power. A resource that actu-
ally improves nursing care may be branded a categorical failure because it
proved to be more expensive than anticipated.
Comparison-based studies are also limited in their ability to explain dif-
ferences that are detected or to shed light on why, in other circumstances,
no differences are found. Consider, for example, a resource developed to
identify “therapeutic misadventures”—problems with drug therapy of hos-
pitalized patients—before these problems can become medical emergen-
cies.28 Such a resource would employ a knowledge base encoding rules of
proper therapeutic practice and would be connected to a hospital informa-
tion system containing the clinical data about in-patients. When the
resource detected a difference between the rules of proper practice and the
data about a specific patient, it would issue an advisory to the clinicians
responsible for the care of that patient. If a comparison-based study of this
system's effectiveness employed only global outcome measures, such as
length of stay or morbidity and mortality, and the study yielded null results,
it would not be clear what to conclude. It may be that the resource is having
no beneficial effect, but it also may be that a problem with the implemen-
tation of the system—which, if detected, can be rectified—is accounting for
the null results. The failure of the system to deliver the advisories in a visible
place in a timely fashion could account for an apparent failure of the
resource. In this case, a study using the decision-facilitation approach or
the responsive/illuminative approach might reveal the problem with the
resource and, from the perspective of the evaluator's mindset, be a much
more valuable study.
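To make the example more concrete, the sketch below shows one way such a rule-based advisory check could be structured. It is a minimal illustration only: the rule format, the patient-record fields, and the issue_advisory routine are assumptions made for this sketch, not details of the system described above or of the cited work.

```python
# Illustrative sketch of a rule-based "therapeutic misadventure" check.
# The rule structure, patient fields, and advisory routing are hypothetical;
# a real system would draw rules from a curated knowledge base and patient
# data from the hospital information system.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Patient:
    """Simplified in-patient record pulled from the hospital information system."""
    patient_id: str
    active_drugs: List[str]
    creatinine_mg_dl: float   # renal function marker
    care_team: List[str]      # clinicians responsible for this patient


@dataclass
class Rule:
    """One encoded rule of proper therapeutic practice."""
    name: str
    applies: Callable[[Patient], bool]  # True when the patient's data violate the rule
    advisory: str


# Example rule: flag a renally cleared drug when kidney function is impaired.
RULES = [
    Rule(
        name="renal-dose-adjustment",
        applies=lambda p: "drug_x" in p.active_drugs and p.creatinine_mg_dl > 2.0,
        advisory="Drug X ordered with elevated creatinine; consider dose adjustment.",
    ),
]


def issue_advisory(patient: Patient, rule: Rule) -> None:
    """Deliver the advisory to the responsible clinicians.

    How visibly and how quickly this delivery happens is exactly the kind of
    implementation detail a purely outcome-based study would never see.
    """
    for clinician in patient.care_team:
        print(f"ADVISORY to {clinician} re {patient.patient_id}: {rule.advisory}")


def screen_patient(patient: Patient) -> None:
    """Compare the patient's data against each rule and advise on any mismatch."""
    for rule in RULES:
        if rule.applies(patient):
            issue_advisory(patient, rule)


if __name__ == "__main__":
    screen_patient(
        Patient(
            patient_id="MRN-0001",
            active_drugs=["drug_x"],
            creatinine_mg_dl=2.4,
            care_team=["attending_physician", "clinical_pharmacist"],
        )
    )
```

The point for evaluation is that everything inside issue_advisory, such as where and how promptly the advisory reaches the care team, is invisible to a study that measures only global outcomes, yet it can fully account for a null result.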
The existence of multiple alternatives to the comparison-based approach
also stems from features of biomedical information resources and from the