from predefined lists to force what he considered the necessary level of precision.
This approach was not popular with his house staff, whom I spoke with, because it
took more time than traditional charting, a problem that, as we've said, is still largely
unresolved today.
The second reason is that computer systems in use for purposes such as
claims processing were usually written years ago and were designed to handle
structured and consistent terminology. Humans, on the other hand, are remarkably adept
at free text. We deal with ambiguity and the context sensitivity of word meanings
without even thinking about it. For example, consider these phrases: “I swatted the
fly” and “I hit a fly ball”. The word “fly” has entirely different meanings which we
understand subconsciously because the context sensitivity of language is apparently
wired into our brains. Until relatively recently computers were not particularly good
at this. IBM's Watson, the winner of the popular game show Jeopardy, against
expert human competition, is evidence of how far the technology has come. The
first commercial application of Watson may be to answer questions posed by
physicians at the point-of-care. [19]
Getting paid for clinical services is important, so everyone pays close attention
to assigning the correct ICD and CPT codes to the billing records for each patient
encounter. For the same reason, National Drug Codes (NDC) are assigned to
medications by pharmacies, and Logical Observation Identifiers Names and Codes
(LOINC) are assigned to laboratory studies. These codes are all widely used, but
each has typically been a simple list of terms for its domain of interest. While they
certainly add more precision to medical and billing records, they each encompass
only a part of the complete health domain; they don't code in great detail, so they
may artificially group patients who are not necessarily identical; and they don't
express clinical relationships, so that, for example, a computer with only these codes
may not be able to easily search for any patient who has had their left thumb
amputated rather than just patients who have had an amputation.
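The difference can be made concrete with a small sketch. The concept names, codes, and "is-a" hierarchy below are invented purely for illustration (they are not actual ICD, CPT, or SNOMED content): a flat code list can only match the exact term recorded, while a vocabulary that records "is-a" relationships lets a query for "amputation" also find the more specific "amputation of left thumb".

```python
# Hypothetical is-a hierarchy: each child concept maps to its parent.
# All concept names here are invented for this sketch.
IS_A = {
    "amputation of left thumb": "amputation of thumb",
    "amputation of thumb": "amputation of digit",
    "amputation of digit": "amputation",
}

def is_descendant_of(concept, ancestor):
    """Walk up the is-a chain to see whether concept falls under ancestor."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)  # None once we reach the top
    return False

# Toy patient records coded with (hypothetical) concepts.
patients = {
    "patient-1": ["amputation of left thumb"],
    "patient-2": ["amputation of digit"],
    "patient-3": ["fracture of ulna"],
}

# Query: every patient with any kind of amputation. A flat list would
# match only records coded with the literal term "amputation".
matches = [name for name, concepts in patients.items()
           if any(is_descendant_of(c, "amputation") for c in concepts)]
print(matches)  # ['patient-1', 'patient-2']
```

The same traversal, run in the other direction, is what lets a system answer the narrower question ("left thumb amputations only") without recoding any records.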
That is because, for the most part, we have not applied the same level of preci-
sion to the information we code as we do when we record it in charts. Doing this
will require a different approach. The current transition from ICD-9 to ICD-10 is, in
large part, about introducing this extra precision. [20]
The earliest attempt to introduce significant structure into coding of medical data
may have been the Systematized Nomenclature of Pathology (SNOP) begun in 1965
by Dr. Arnold W. Pratt at the National Institutes of Health (NIH). Dr. Pratt's vision was
that computers could “read” free text and convert it into a hierarchical structured lan-
guage. The terms would be standardized and the hierarchy would represent clinical
relationships so that, for example, the computer would “know” that the ulna is a bone
in the arm or that glomerulonephritis is an inflammatory disease of the kidney. The end
goal was to use routinely collected clinical data to support research and other purposes
beyond patient care. Today we call that “secondary use” of clinical data and it is still
one of the objectives of the national effort to deploy health information technology.
Later on, under the leadership of two other pathologists, Dr. Roger Cote at the
University of Sherbrooke and Dr. David J. Rothwell at the Medical College of
Wisconsin, the concept was expanded to the Systematized Nomenclature of Medicine