visualization is often an interplay between an expert's meta-knowledge and knowl-
edge of other sources as well as information from the visualization in use.
While all of the above are important, a question that lies at the heart of the success
of a given information visualization is whether it sheds light on or promotes insight
into the data [55, 63]. Often, the information processing and analysis tasks are com-
plex and ill-defined, such as discovering the unexpected, and are often long term or
on-going. What exactly insight is probably varies from person to person and instance
to instance; thus it is hard to define, and consequently hard to measure. Plaisant [57]
describes this challenge as “answering questions you didn't know you had.” While it
is possible to ask participants what they have learned about a dataset after use of an
information visualization tool, it strongly depends on the participants' motivation,
their previous knowledge about the domain, and their interest in the dataset [55, 63].
Development of insight is difficult to measure because in a realistic work setting it is
not always possible to trace whether a successful discovery was made through the use
of an information visualization since many factors might have played a role in the
discovery. Insight is also temporally elusive in that insight triggered by a given visu-
alization may occur hours, days, or even weeks after the actual interaction with the
visualization. In addition, these information processing tasks frequently involve
teamwork and include social factors, political considerations, and external pressures,
as in emergency response scenarios. However, there are other fields of research
that are also grappling with doing empirical research in complex situations. In particu-
lar, ecologists are faced with conducting research towards increasing our understand-
ing of complex adaptive systems. Considering the defining factors of complex adap-
tive systems may help to shed some light on the difficulties facing empirical research
in information visualization. These factors include non-linearity, holarchy and
internal causality [37, 49]. When a system is non-linear, its behaviour arises only
from the whole: the system cannot be understood by decomposing it into its component
parts, which are then reunited in some definitive way. When a system is holarchical,
it is composed of holons, which are each both a whole and a part.
That is, the system is mutually inter-nested. While it is not yet common to discuss
information analysis processes in terms of mutual nesting, in practice many informa-
tion analysis processes are mutually nested. For instance, consider the processes of
search and verification: when in the midst of searching, one may well stop to verify a
find; and during verification of a set of results, one may well need to revert to search
again. Internal causality indicates that the system is self-organizing and can be charac-
terized by goals, positive and negative feedback, emergent properties and surprise.
Since a team of information workers using a suite of visualization and other
software tools is likely itself some type of complex adaptive system, more holistic
approaches to evaluation may be needed.
Already from this brief overview, one can see that useful research advice on the
evaluation of information visualization can be gathered from perceptual psychology,
cognitive reasoning research, and human-computer interaction research. Many,
but not enough, information visualization researchers are already actively engaged in
this pursuit. The purpose of this paper is to applaud them, to encourage more such
research, and to suggest that the research community be more welcoming of a
greater variety of these types of research results.