Chapter 6
Visualizing Uncertainty in Predictive Models
Penny Rheingans, Marie desJardins, Wallace Brown, Alex Morrow,
Doug Stull and Kevin Winner
Abstract Predictive models are used in many fields to characterize relationships
between the attributes of an instance and its classification. While these models can
provide valuable support to decision-making, they can be challenging to understand
and evaluate. Although they provide predicted classifications, they do not generally
include any indication of confidence in those predictions. A typical quality measure for
a predictive model is the percentage of predictions that it makes correctly. Such
measures give some insight into how often the model is correct, but provide little
help in understanding under what conditions the model performs well (or poorly).
We present a framework for improving understanding of predictive models based on
the methods of both machine learning and data visualization. We demonstrate this
framework on models that use attributes about individuals in a census data set to
predict other attributes of those individuals.
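The abstract's point about overall accuracy can be illustrated with a short sketch. This is not the authors' code; the attribute names and toy data below are hypothetical stand-ins for census-style instances, chosen only to show how a single accuracy number can hide uneven performance across conditions.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical toy data: each instance has one attribute ("group")
# and a true binary label.
instances = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]
preds = [1, 0, 1, 1, 1, 0]  # hypothetical model output
labels = [i["label"] for i in instances]

# Overall accuracy: 4 of 6 correct.
print(accuracy(preds, labels))

# Conditioning on an attribute reveals where the model fails:
# group A is predicted perfectly, group B not at all.
for g in ("A", "B"):
    idx = [k for k, inst in enumerate(instances) if inst["group"] == g]
    print(g, accuracy([preds[k] for k in idx], [labels[k] for k in idx]))
```

Conditioning the same measure on an instance attribute, as in the loop above, is one simple way to expose the kind of condition-dependent behavior that the proposed visualization framework aims to make understandable.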
P. Rheingans (B) · M. desJardins · W. Brown · A. Morrow · D. Stull · K. Winner
University of Maryland Baltimore County, Baltimore, MD, USA
e-mail: rheingan@cs.umbc.edu
M. desJardins
e-mail: mariedj@cs.umbc.edu
W. Brown
e-mail: brown1@umbc.edu
A. Morrow
e-mail: amo3@umbc.edu
D. Stull
e-mail: ds10@umbc.edu
K. Winner
e-mail: winnerk1@umbc.edu