One of the major changes we will introduce in the
next evaluation, therefore, is to enact an open
authorship rule, such as that used in Wikipedia,
and to rely mostly on intrinsic incentives (like
voluntarism and reputation).
Another open issue is the rating procedure, which can play a critical role in promoting high-quality contributions and convergence in collective deliberation. In the experiment presented in this article, the rating tool was extremely simple: users' votes expressed how much they liked a post on a five-point scale ranging from 1 (poor) to 5 (excellent). Further research will concern the design of a more articulated rating procedure aimed at evaluating idea and argument quality, author reputation, and community consensus.
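As an illustration, such a composite rating could blend these signals into a single score. The following Python sketch is a minimal example under assumed weights and field names; the reputation score and the consensus measure (one minus the normalized spread of votes) are our own placeholders, not part of the platform as tested.

from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class Post:
    """A post in the deliberation map, rated on the 1-5 scale."""
    votes: list[int] = field(default_factory=list)  # each vote in 1..5
    author_reputation: float = 0.5                  # hypothetical score in 0..1

def composite_rating(post: Post,
                     w_quality: float = 0.6,
                     w_reputation: float = 0.2,
                     w_consensus: float = 0.2) -> float:
    """Blend average vote, author reputation, and vote agreement.

    The weights and the consensus measure are illustrative
    assumptions, not the Deliberatorium's actual design.
    """
    if not post.votes:
        return 0.0
    quality = (mean(post.votes) - 1) / 4       # rescale mean vote from 1..5 to 0..1
    consensus = 1.0 - pstdev(post.votes) / 2   # pstdev of 1..5 votes is at most 2
    return (w_quality * quality
            + w_reputation * post.author_reputation
            + w_consensus * consensus)

A production design would of course need to calibrate any such weights against moderator judgments; the sketch only shows how the three evaluation dimensions named above could be combined.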
The experiment involved a relatively small
number of users, by Internet standards, and the
way students approached the experiment was
distorted, no doubt, by the fact that they were
co-located peers. Social pressure, for instance, coupled with the students' low expertise in the topic, may have played a role in limiting the number of cons relative to pros and in reducing the number of poor ratings. The experiment also ran,
perforce, over a limited time window. Further
evaluations will aim to remove these artificial
constraints by assessing the platform with much
larger, truly open, and more intrinsically motivated
user communities. Increased scale will probably
require qualitative changes in design choices and
user incentives. Among the most critical improvements, we highlight: designing mechanisms and rules that can generate a self-organized hierarchy of user roles (readers, authors, and moderators), as sketched after this paragraph; improving the platform's browsing, information visualization, and retrieval; providing online support to users (such as online help and training tools); and building tools to increase moderators' productivity. We are currently identifying other possible contexts for assessing and applying the Deliberatorium, ranging from problem solving within companies and professional communities of practice to learning and education with communities of young students.
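To make the self-organized role hierarchy concrete, the sketch below shows one possible promotion rule. The role names (reader, author, moderator) follow the article; the activity and rating thresholds are hypothetical assumptions for illustration only.

from enum import Enum

class Role(Enum):
    READER = 1
    AUTHOR = 2
    MODERATOR = 3

def promote(role: Role, accepted_posts: int, avg_rating: float) -> Role:
    """Advance a user one role when activity and quality cross thresholds.

    The thresholds (5 accepted posts; 50 posts with an average rating
    of 4.0 on the 1-5 scale) are illustrative assumptions only.
    """
    if role is Role.READER and accepted_posts >= 5:
        return Role.AUTHOR
    if role is Role.AUTHOR and accepted_posts >= 50 and avg_rating >= 4.0:
        return Role.MODERATOR
    return role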
Research Implications and Next Steps
The experiment presented in this article was the very first test of the Deliberatorium. The main aim of the test was to observe users' first-hand reactions to large-scale, Internet-based argumentative debate, in order to improve the design of the platform for subsequent experiments, in which our aim will be to compare the Deliberatorium's performance with other current knowledge-sharing tools based on different technologies. A first attempt was made in a second test, scheduled for the late spring of 2008,
with a group of 300 students at the University
of Zurich. The aim of this test was to compare
the new release of the Deliberatorium with more traditional technologies, in particular forums and wikis, under the same structure of purely intrinsic incentives. For this purpose, we
created three groups of users debating the same topic but using, respectively, the Deliberatorium, a forum, and a wiki. The analysis of the data from this test was still in progress while this article was being written and reviewed; the results will be the subject of a future publication.
Starting from the empirical results and lessons learned in the Naples test, in this section we present several research hypotheses for the next test, clustered into three groups: effects on users' skills and participation, effects on quality and quantity of knowledge contents, and effects on group deliberation.
Effects on Users' Skills and Participation
H1: A large-scale collaborative argumentation platform improves users' critical thinking skills compared to forums and wikis.
While in forums and wikis people express themselves freely, the Deliberatorium requires