medical informatics, should also move toward an evidence-based model. This would require us to be clear about the question before we start a research or implementation project, and either to search for relevant results of evaluation studies already completed or to propose new studies, taking care to avoid threats to external and internal validity. In an evidence-based informatics model, we would also adopt an appropriately skeptical view toward the results of individual studies, and seek instead systematic reviews that combine the results of all rigorous studies, whether positive or negative, objectivist or subjectivist, to generate the best evidence to address a question of interest. Systematic review methods 7 can also be valuable in uncovering insights about which classes of information resources generate positive results. As an example, Table 12.5 depicts results from the systematic review by Garg et al. 15 of the impact of clinical decision support on health professional actions. In this review the investigators identified 100 randomized controlled trials (RCTs) covering the period 1974 to 2002. Performance improved in 62 (64%) of the 97 RCTs in which health care provider behavior was the focus, while patient outcomes improved in only 7 (13%) of the 52 RCTs in which outcomes were studied. The table shows the proportion of trials in which a statistically significant improvement in health professional behavior was observed, by type of DSS. Absent this systematic review, it would have been difficult or impossible to predict, for example, the results for diagnostic DSSs, which appear to be about half as likely to be effective as preventive care systems.
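That closing comparison, 40% (4/10) of diagnostic DSS trials versus 76% (16/21) of preventive care trials, also illustrates why small trial counts deserve caution. A minimal sketch of how the per-category proportions from the review could be examined with Wilson score confidence intervals (the function and variable names here are illustrative, not from any cited source):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Trial counts by DSS focus, as reported in Table 12.5 (Garg et al.)
trials = {
    "Diagnosis": (4, 10),
    "Prescribing, drug dosing": (19, 29),
    "Disease management": (23, 37),
    "Preventive care": (16, 21),
}

for focus, (k, n) in trials.items():
    lo, hi = wilson_ci(k, n)
    print(f"{focus}: {k}/{n} = {k/n:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

The wide, partly overlapping intervals that result from such small numbers of trials are themselves an argument for the skeptical, review-based reading of evidence that this chapter advocates.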
Finally, certain systematic reviewing methods, specifically meta-regression, 7 can be used to improve our evaluation methods by uncovering evidence about which methods lead to study bias. Table 12.6 shows an example from a related domain, because relevant data for informatics are not yet available. The table summarizes the results of a systematic review by Lijmer et al., 18 looking at the effect of various study faults on the results of 218 evaluations of laboratory tests. In the table, a high figure for the relative diagnostic odds ratio suggests that the class of studies is overestimating the accuracy of the test in question. The table shows, for example, that case-control studies and those with verification bias (different reference standards for positive and negative test results) were biased, as they were
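The relative diagnostic odds ratio behind the Lijmer et al. analysis builds on the diagnostic odds ratio (DOR) of a single test evaluation. A minimal sketch of both quantities, using invented 2x2 counts purely for illustration (none of these numbers come from the review itself):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN): the odds of a positive test among the
    diseased divided by the odds of a positive test among the non-diseased."""
    return (tp / fn) / (fp / tn)

# Hypothetical counts for the same test evaluated under two study designs
# (numbers invented for illustration, not drawn from Lijmer et al.)
dor_cohort = diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80)       # sound design
dor_case_control = diagnostic_odds_ratio(tp=95, fp=15, fn=5, tn=85)  # flawed design

# The relative DOR compares flawed with unflawed studies; a value well
# above 1 means the flawed design inflates the test's apparent accuracy.
relative_dor = dor_case_control / dor_cohort
print(f"DOR (cohort design)       = {dor_cohort:.1f}")
print(f"DOR (case-control design) = {dor_case_control:.1f}")
print(f"relative DOR              = {relative_dor:.2f}")
```

In a meta-regression, this ratio is estimated across many studies while adjusting for other study features, which is what allows a fault such as verification bias to be linked to systematic overestimation of accuracy.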
TABLE 12.5. The probability of a decision support system (DSS) leading to
improved health professional behavior, by focus of the DSS.

Target behavior for the DSS     Percentage (number) of the randomized trials
                                showing improvement in clinical practice
Diagnosis                       40% (4/10)
Prescribing, drug dosing        66% (19/29)
Disease management              62% (23/37)
Preventive care                 76% (16/21)

Source: Data redrawn from Garg et al. 15