small, and there remains considerable variability in the model, it is not a valid representation of the actual
outcomes. Therefore, we need to consider the differential between the predicted and actual outcomes.
An assumption is made that if the actual mortality is less than what is predicted, then the hospital must be doing something to prevent deaths that would otherwise be expected to occur. If the difference (predicted minus actual) is negative, so that the actual mortality is larger than predicted, then the hospital appears to be doing something wrong that increases mortality.
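The differential just described can be sketched as follows. The patient data, the severity-to-risk function, and the hospital groupings below are all hypothetical stand-ins for a fitted severity model; they illustrate the computation, not the actual methodology.

```python
import numpy as np

# Hypothetical patient-level data: severity scores and hospital assignments.
rng = np.random.default_rng(0)
n = 1000
severity = rng.uniform(0, 10, size=n)
hospital = rng.integers(0, 5, size=n)  # hypothetical hospital IDs

# Assumed severity-based risk model (a stand-in for a fitted logistic
# regression): the predicted risk of death rises with severity.
predicted = 1 / (1 + np.exp(-(severity - 8)))
died = (rng.uniform(size=n) < predicted).astype(int)

# Per-hospital differential: predicted minus actual mortality rate.
# Positive => fewer deaths than the model predicts; negative => more.
for h in np.unique(hospital):
    mask = hospital == h
    diff = predicted[mask].mean() - died[mask].mean()
    print(f"hospital {h}: differential {diff:+.3f}")
```

Note that the differential inherits all the noise in both the model and the observed deaths, which is exactly the concern raised below.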
Construct validity deals with an examination of the causal relationship. In our example, there should be a relationship between patient severity and mortality, and that relationship must reflect the patient's actual severity. However, as we have seen, one severity measure can put a patient in the least severe category while a second measure puts the same patient in the most severe category. Because of this lack of consistency across measures, the construct validity must remain in doubt.
In addition to defining severity, we must see how that severity measure relates to the outcome. There does not appear to be a cause-and-effect relationship between actual and predicted mortality. Since regression and predictive models focus on predicting outcomes accurately, should we work with a measure that is defined by the difference between predicted and actual outcomes? Generally, such differences are considered residual or random error rather than a useful measure of difference. Yet we are using random error to define the quality of hospitals. In addition, the R² values tend to be small, and lift functions defined from predictive models indicate that only a portion of the population can be predicted accurately.
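The lift functions mentioned here can be sketched as follows, using hypothetical predicted-risk scores: patients are ranked by predicted risk, and each decile's event rate is divided by the overall event rate. Lift well above 1 in only the top deciles is what the text means by accurate prediction for "only a portion of the population."

```python
import numpy as np

# Hypothetical predicted-risk scores and outcomes correlated with them.
rng = np.random.default_rng(1)
n = 1000
score = rng.uniform(size=n)
event = (rng.uniform(size=n) < score).astype(int)

order = np.argsort(-score)      # rank patients from highest to lowest risk
overall_rate = event.mean()

# Lift per decile: event rate in the decile divided by the overall rate.
lifts = [chunk.mean() / overall_rate
         for chunk in np.array_split(event[order], 10)]
for d, lift in enumerate(lifts, start=1):
    print(f"decile {d}: lift {lift:.2f}")
```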
Unfortunately, there is no absolute standard against which to compare a severity index in order to validate it. Because validation is so difficult, reliability is often substituted. A good way to validate would be to compare a severity measure to patient outcomes. However, if outcomes are used to define the severity index, they cannot then be used to validate it. In the absence of such an analysis, how can validity be established?
We have been looking into the issue of validity throughout this topic. When we examined risk adjustment by mortality, we found that providers with a zero mortality rate can actually rank low in terms of the differential between actual and predicted mortality (since that difference is close to zero); this indicates a problem with validity. At the same time, it was also shown that a provider with a zero mortality rate generally does not have patients who are as severe as those of providers with higher mortality rates; for this reason, the differential between actual and predicted values is low. Therefore, there must be a better way to assign quality than comparing this differential. Clearly, a provider with zero mortality should rank high in a measure that uses mortality as an outcome measure.
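A toy numeric example (figures invented for illustration, not taken from the text) shows how the differential can rank a zero-mortality provider below one with deaths:

```python
# Made-up figures, for illustration only.
# Provider A: low-severity patients; predicted mortality 1%, actual 0%.
diff_a = 0.01 - 0.00
# Provider B: high-severity patients; predicted mortality 10%, actual 6%.
diff_b = 0.10 - 0.06
# Ranking providers by this differential places B above A,
# even though A had zero deaths.
print(diff_a, diff_b, diff_b > diff_a)
```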
But if there is difficulty even when comparing actual mortality to predicted mortality, just how can we define a risk-adjusted method to compare provider quality, and how can we validate this method? Without
some decisions as to how we define the quality of care, no measure will be valid. Any measure must
also have some adjustment for the severity of the patient's condition since it is clear that a patient with
a more severe condition will have a higher risk compared to a patient with a lower degree of severity.
The methodology cannot focus just on inputs since inputs are insufficient to compare patient outcomes.
We also need to consider just what outcome we need to examine. Certainly, a high rate of patient falls is disconcerting; however, again, the patient's condition may create a higher risk of falls. Therefore, adverse events and errors also need to be risk-adjusted, and we still need to validate the adjusted risk.
In addition, we need to consider hospital errors and adverse outcomes. If a hospital has a high rate of resistant infection, it could mean that the hospital treats a high proportion of community-acquired infections, or that it has an epidemic of resistant infection because of improper infection control procedures.