Collaborative visualization can similarly be viewed as a process of peer pro-
duction of information goods. Stages in this process include uploading data sets,
creating visualizations, and conducting analysis. To support this process, it is
important to identify the specific forms of contribution (modules) that users
might make and how to integrate these contributions. Existing frameworks for
aiding this task include structural models of visualization design and sensemak-
ing processes [17]. As shown in Fig. 3, each of these models suggests tasks that
contribute to collaborative analysis, including data cleaning, moderation, visual-
ization specification, sharing observations, positing hypotheses, and marshaling
evidence. These concerns are given further treatment in [43].
Once modules have been identified, one can then attempt designs which re-
duce the cost structure of these tasks. Consider the issue of scale. Most of the
examples in the previous section use sequential text comments to conduct ana-
lytic discussion. However, it is unclear how well this form of communication will
scale to massive audiences. An open research problem is the creation of new forms
of managed conversation that have a lower cost of integration, enabling people to
understand and contribute to analysis without having to wade through hundreds
of individual comments. For example, Wikipedia relies on human editing coupled
with a revision management system to integrate and moderate contributions. Al-
ternatively, systems with highly structured input such as NASA ClickWorkers
[7] or von Ahn's (2006) “games with a purpose” [98] rely on purely automated
techniques. Some middle ground between these approaches should be possible
for collaborative analysis, such as argumentation systems that model hypotheses
and evidence as first-class objects. One example of such a system is CACHE
[11], which maintains a matrix of hypotheses and evidence, with collaborators
providing numerical measures of the reliability of evidence and the degree to
which evidence confirms or disconfirms the hypotheses. These scores can then
be averaged to form a group assessment. Other possibilities include augment-
ing graphical workspaces such as the Analysis Sandbox [107] with collaborative
authoring features or automatic merging of representations (cf. [13]).
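The CACHE-style group assessment described above can be sketched in a few lines. This is a minimal illustration, assuming a simple reliability-weighted average; the class and method names are hypothetical and do not reflect CACHE's actual implementation or API.

```python
from collections import defaultdict

class HypothesisMatrix:
    """Toy hypothesis-evidence matrix in the spirit of CACHE [11].
    Structure and names are illustrative, not the actual system's."""

    def __init__(self, hypotheses, evidence):
        self.hypotheses = list(hypotheses)
        self.evidence = list(evidence)
        # scores[(hypothesis, evidence)] -> list of (reliability, support)
        # ratings from individual collaborators. reliability is in [0, 1];
        # support is in [-1, 1], negative disconfirms, positive confirms.
        self.scores = defaultdict(list)

    def rate(self, hypothesis, evidence, reliability, support):
        """Record one collaborator's judgment of how strongly a piece of
        evidence, weighted by its perceived reliability, bears on a hypothesis."""
        self.scores[(hypothesis, evidence)].append((reliability, support))

    def group_assessment(self, hypothesis):
        """Average the reliability-weighted support over all evidence and
        all collaborators, yielding one group score for the hypothesis."""
        ratings = [r * s
                   for e in self.evidence
                   for (r, s) in self.scores[(hypothesis, e)]]
        return sum(ratings) / len(ratings) if ratings else 0.0

# Two collaborators rating one hypothesis against two pieces of evidence.
m = HypothesisMatrix(["H1"], ["E1", "E2"])
m.rate("H1", "E1", reliability=0.9, support=1.0)   # strong confirmation
m.rate("H1", "E1", reliability=0.8, support=0.5)
m.rate("H1", "E2", reliability=0.6, support=-1.0)  # disconfirming evidence
print(round(m.group_assessment("H1"), 2))          # prints 0.23
```

The weighted average is only one possible aggregation rule; a real deployment might instead use medians or outlier-robust statistics to limit the influence of any single rater.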
Engagement and Incentives: If collaborators are professionals working
within a particular context (e.g., financial analysts or research scientists), there
may be existing incentives, both financial and professional, for conducting col-
laborative work. In a public goods scenario, incentives such as social visibility or
sense of contribution may be motivating factors. Incorporating incentives into
the design process of collaborative visualization systems may increase the level
of user contributions, and could provide additional motivation even in situations
that already have well-established incentive systems.
Benkler posits an incentive structure for collaborative work consisting of mon-
etary, hedonic, and social-psychological incentives [7]. Monetary incentives refer
to material compensation such as a salary or cash reward. Hedonic incentives
refer to well-being or engagement experienced intrinsically in the work. Social-
psychological incentives involve perceived benefits such as increased status or
social capital.