there are three roles: moderators, authors, and readers/voters. Moderators were charged with the usual tasks of filtering out noise and rejecting off-topic posts. In addition, they were charged with ensuring that the argument map was well-structured, i.e., that all posts were properly divided into individual and non-redundant issues, ideas, and arguments, and were located in the relevant branch of the argument map. This involved classifying and sometimes editing posts, offering suggestions to authors, aggregating similar arguments, and occasionally reorganizing the overall argument map so that related topics were grouped into the same branch. A team of four student moderators was selected and trained in argument mapping before the test. One of the authors also joined the moderation team.
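The map structure the moderators maintained (posts divided into issues, ideas, and arguments, each kept in its relevant branch) can be sketched as a small tree of typed posts. This is only an illustrative model; the class and function names are not taken from the experiment's actual software:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """One node of the argument map: an issue, an idea, or a pro/con argument."""
    kind: str   # "issue", "idea", or "argument"
    text: str
    children: list = field(default_factory=list)

    def add(self, child: "Post") -> "Post":
        """Attach a post under its parent, i.e. place it in that branch of the map."""
        self.children.append(child)
        return child

def count_by_kind(root: Post) -> dict:
    """The kind of usage statistic collected during the experiment (number of
    ideas, number of arguments, etc.), computed by walking the whole map."""
    counts = {"issue": 0, "idea": 0, "argument": 0}
    stack = [root]
    while stack:
        node = stack.pop()
        counts[node.kind] += 1
        stack.extend(node.children)
    return counts

# A tiny illustrative map: one issue, two ideas, one supporting argument.
root = Post("issue", "How can the museum attract more young visitors?")
idea = root.add(Post("idea", "Open an evening science cafe"))
idea.add(Post("argument", "Pro: similar cafes elsewhere increased attendance"))
root.add(Post("idea", "Run school partnerships"))
counts = count_by_kind(root)   # {'issue': 1, 'idea': 2, 'argument': 1}
```

In this model, a moderator's "re-organizing the map" amounts to detaching a subtree from one parent and re-attaching it under another, which is why non-redundant, well-classified posts matter: each post can sit in only one branch.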
The on-line argumentation process developed as follows:

1. Authors posted and edited questions, ideas, and pro/con arguments, producing an argument map similar to that in figure 3. While questions and ideas could be posted only as single short sentences, arguments were posted through an on-line form that helped authors structure their post in argument form (conclusion, argument scheme and critical questions, argument content, and the possibility to attach links, references, and documents). The form was introduced because otherwise people tended to bundle a mishmash of issues, ideas, and arguments within individual issues and ideas.

2. All users (including moderators, authors, and readers) rated arguments and ideas and could send comments to authors through threaded discussion forums associated with each post, like wiki talk pages. Rating was anonymous.

3. Posts were initially given a status of "pending" and could be certified only by moderators. Until a post was certified, it could not be rated, and nobody except its author could link other posts to it. We also explained that only certified posts would appear in the final, publicly available version of the argument map. Moderators also left comments and edited, moved, trashed, and classified posts; usually they would leave a comment to explain their changes. Authors received an alert email when their post was modified or trashed (but the trash was never emptied).

In the experiment we established a single authorship rule: nobody except moderators was allowed to edit a post authored by someone else.

Several countermeasures and incentives were set up to limit the negative effects of the limited scale and of social and informational pressures that are usually absent or limited in Internet communities. In particular, thanks to the support of the Naples City Science Museum, we used several extrinsic incentives, such as minor awards and five scholarships for the best participants, with the aim of improving post quality. To limit the negative influence of social pressure on the rating process, a kind of prediction-market incentive was set up for voters, whereby votes would be converted at the end of the experiment into awards financed by the sponsor organization in the following way: at the end of phase 2, a team of independent external experts would identify and rank the best posts; voters would then be assigned a score based on how closely their votes correlated with the expert ratings; and the voters with the highest correlation scores would be awarded educational gadgets.

Preliminary results

Since phase 2 terminated at the end of December 2007, at the time of writing this article the data analysis had just begun. We are currently collecting and analyzing three types of data:

1. statistics about tool usage and information accumulation (number of ideas, number of arguments, total volume of inputs, etc.),
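The voter-scoring incentive described in this section (score each voter by how closely their votes track the independent expert ratings, then rank voters) can be sketched as follows. The article does not say which correlation measure was used; Pearson correlation and the sample data below are assumptions made purely for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating vectors.
    Assumes neither vector is constant (non-zero variance)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_voters(expert, votes):
    """Score every voter by correlation with the expert ratings, best first."""
    scores = {voter: pearson(ratings, expert) for voter, ratings in votes.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative data: expert ratings for five posts, plus two voters' ratings
# of the same five posts.
expert = [5, 3, 4, 1, 2]
votes = {
    "alice": [4, 3, 5, 1, 2],   # tracks the experts closely
    "bob":   [1, 5, 2, 4, 3],   # mostly disagrees with the experts
}
ranking = rank_voters(expert, votes)   # alice ranks first
```

Scoring against an independent expert panel, rather than against the crowd's own consensus, is what gives the scheme its prediction-market flavor: a voter gains nothing by conforming to peers and is rewarded only for accurate judgments of post quality.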