Evaluating Information Visualizations
Sheelagh Carpendale
Department of Computer Science, University of Calgary,
2500 University Dr. NW, Calgary, AB, Canada T2N 1N4
sheelagh@ucalgary.ca
1 Introduction
Information visualization research is becoming more established, and as a result, it is
becoming increasingly important that research in this field is validated. With the gen-
eral increase in information visualization research there has also been an increase,
albeit disproportionately small, in the amount of empirical work directly focused on
information visualization. The purpose of this paper is to increase awareness of empirical research in general and of its relationship to information visualization in particular; to emphasize its importance; and to encourage thoughtful application of a greater variety of evaluative research methodologies in information visualization.
One reason that it may be important to discuss the evaluation of information visu-
alization, in general, is that it has been suggested that current evaluations are not con-
vincing enough to encourage widespread adoption of information visualization tools
[57]. Reasons given include that information visualizations are often evaluated using
small datasets, with university student participants, and using simple tasks. To en-
courage interest by potential adopters, information visualizations need to be tested
with real users, real tasks, and also with large and complex datasets. For instance, it is
not sufficient to know that an information visualization is usable with 100 data items
if 20,000 is more likely to be the real-world case. Running evaluations with full data
sets, domain specific tasks, and domain experts as participants will help develop
much more concrete and realistic evidence of the effectiveness of a given information
visualization. However, choosing such a realistic setting will make it difficult to get a
large enough participant sample, to control for extraneous variables, or to get precise
measurements. This makes it difficult to make definite statements or generalize from
the results. Rather than looking to a single methodology to provide an answer, it will
probably will take a variety of evaluative methodologies that together may start to
approach the kind of answers sought.
The paper is organized as follows. Section 2 discusses the challenges in evaluating
information visualizations. Section 3 outlines different types of evaluations and dis-
cusses the advantages and disadvantages of different empirical methodologies and the
trade-offs among them. Section 4 focuses on empirical laboratory experiments and the
generation of quantitative results. Section 5 discusses qualitative approaches and the
different kinds of advantages offered by pursuing this type of empirical research.
Section 6 concludes the paper.