individuals (task owners) submit tasks they need solved by a crowd (e.g., annotation
of an image collection). When the tasks are submitted, the Mechanical Turk organizes
contributors to solve them, following the instructions given by the task owner (e.g.,
how many times a particular task instance should be solved, what criteria a contributor
must fulfill). A significant feature of the Mechanical Turk is micropayments to
contributors (usually units of cents) for each task solved. Micropayments are an
important motivation for contributors to participate in the process. They are sometimes
secondary to other, primary motivations, but they are necessary. A good example of this
is participation in a crowd-based scientific experiment evaluation (e.g., validation of
resource metadata): the contributor is sympathetic to the cause, but the definitive
incentive to join the process is the money (although small) he receives for the job [58].
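The task model described above can be sketched as a small data structure. This is an illustrative simplification only; the class and field names are invented and do not correspond to the real Mechanical Turk API:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of a crowdsourced task: the task owner
# specifies how many times each instance should be solved, the micropayment
# per solution, and a criterion a contributor must fulfill.
@dataclass
class Task:
    description: str          # what the contributor should do
    assignments: int          # how many times each instance is solved
    reward_cents: int         # micropayment per solved instance
    required_skill: str = ""  # criterion a contributor must fulfill (if any)

    def accepts(self, contributor_skills: set) -> bool:
        """A contributor qualifies if they meet the required criterion."""
        return not self.required_skill or self.required_skill in contributor_skills

task = Task("Annotate image with tags", assignments=3, reward_cents=5,
            required_skill="english")
print(task.accepts({"english", "tagging"}))  # True
print(task.accepts({"german"}))              # False
```

The `assignments` field captures the redundancy the task owner asks for (solving each instance several times allows answers to be cross-checked), while `reward_cents` models the micropayment incentive discussed above.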
Apart from this model, there are also crowdsourcing approaches that employ
contributors without the need to motivate them monetarily. Almost exclusively,
the semantics are then only a side-product of the user activity, which is primarily
focused on the users' own needs (e.g., social bookmarks, comments). Sometimes, users
do not even know they are contributing to some knowledge base. As a consequence, the
kinds of semantics we can collect via crowdsourcing are limited by the types of
activities users usually perform on the Web (although there is always a possibility
of attracting their attention to some new activity). For example, common users of the
Web upload and annotate (textually, with tags) images. They do this because they want
to have them organized, always available, and shareable with friends (e.g., the
Flickr 6 image gallery), not because the images should be annotated. In social
networks like Facebook, users mark the exact positions of persons in images. However,
we can hardly expect them to be motivated to locate non-living things (which also
deserve such annotations).
2.5.3 Delicious
A typical case of semantics acquisition via crowdsourcing is the bookmarking portal
Delicious. 7 Users submit URLs they want to visit later or simply have at hand for
some reason, similarly to web browser bookmarks. Here, however, the bookmarks are
stored online, so users do not have to recreate them on different workstations;
moreover, they decorate them with tags (the submission procedure prompts the user to
provide some tags). Using the tags, the URLs can be easily filtered, and even large
sets of bookmarks are relatively easy to browse (e.g., by using tag clouds).
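The tag-based filtering and tag-cloud browsing just described can be sketched with an inverted index from tags to URLs. The bookmarks and tags below are invented for illustration and are not Delicious data:

```python
from collections import Counter, defaultdict

# Each bookmark (URL) carries a list of user-assigned tags.
bookmarks = {
    "https://en.wikipedia.org/wiki/Grizzly_bear": ["bear", "wildlife"],
    "https://www.w3.org/RDF/": ["semanticweb", "rdf", "standards"],
    "https://www.w3.org/TR/owl2-overview/": ["semanticweb", "owl", "standards"],
}

# Inverted index (tag -> set of URLs) makes filtering by tag cheap.
by_tag = defaultdict(set)
for url, tags in bookmarks.items():
    for tag in tags:
        by_tag[tag].add(url)

# Filtering: all bookmarks tagged "semanticweb".
print(sorted(by_tag["semanticweb"]))

# Tag cloud: tag frequencies would determine font sizes in the cloud.
cloud = Counter(tag for tags in bookmarks.values() for tag in tags)
print(cloud.most_common(2))  # [('semanticweb', 2), ('standards', 2)]
```

A tag cloud is essentially a rendering of these frequency counts, with more frequent tags displayed larger, so a user can spot the dominant topics of a large bookmark collection at a glance.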
From the Semantic Web perspective, the Delicious users do two useful things:
1. They decorate URLs, i.e., web resources, with tags, thereby annotating them. Unfortu-
nately, they do it with respect to themselves, i.e., they write tags whose meaning
they understand, but this meaning can be proprietary to them only and therefore
confusing or inaccurate for the rest of the world. For example, someone book-
marks the Wikipedia page about grizzly bears but decorates it with the tag “55 km/hr”,
6 http://www.flickr.com
7 http://delicious.org