What do users do in the process (how they solve the tasks)?
What technique do users employ to contribute (e.g., tagging, rating, reviewing)? How cognitively or skill-demanding is the task? Here, applications that ask simpler questions retrieve more answers from more workers, but may also demand more validation (e.g., when workers validate existing metadata through dichotomous yes-no options). Is the semantics being retrieved common-sense knowledge or specialized domain knowledge (which is obviously harder to obtain due to the smaller pool of potential contributors)?
How are the partial results combined?
And how is the problem decomposed prior to that? In semantics acquisition, many approaches tend to collect atomic pieces of information (tags, triplets), which then (automatically) constitute more complex structures. A contrast to this are, for example, contributors to Wikipedia, who compose complex structures (texts) themselves and where combining the contributions is a demanding human intelligence task in itself.
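As an illustration of how such atomic pieces might be automatically combined, the following minimal Python sketch (with hypothetical contribution data, not taken from any of the cited systems) merges independently contributed subject-predicate-object triplets into a simple graph structure:

```python
from collections import defaultdict

def aggregate_triples(triples):
    """Merge atomic (subject, predicate, object) contributions into a
    simple graph: subject -> predicate -> set of objects."""
    graph = defaultdict(lambda: defaultdict(set))
    for subj, pred, obj in triples:
        graph[subj][pred].add(obj)
    return graph

# Hypothetical atomic facts contributed independently by different workers
contributions = [
    ("cat", "is_a", "animal"),
    ("cat", "has", "fur"),
    ("dog", "is_a", "animal"),
    ("cat", "is_a", "animal"),  # duplicate contribution, merged away
]

graph = aggregate_triples(contributions)
print(sorted(graph["cat"]["is_a"]))  # ['animal']
```

Because each contribution is atomic, duplicates collapse naturally during aggregation, which is exactly what makes the redundancy-based quality control discussed below feasible.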
How is the output evaluated?
Common for semantics acquisition are redundant task solving and collaborative filtering, techniques possible mainly due to the “atomicity” of the acquired information. Apart from these, however, other techniques exist, such as rating of contributions created by other users, post hoc cleaning by domain experts, or detection of suspicious worker behavior (and thus, malicious contributions).
At approximately the same time as Doan et al., Quinn and Bederson offered a different conceptualization and design space of the “combined” fields. Although they focused primarily on human computation (rather than crowdsourcing) [55] and offered a classification of human computation approaches, their classification dimensions strongly apply to crowdsourcing too. For each dimension, Quinn and Bederson also name several values, i.e., typical design patterns or features utilized by human computation systems. At the same time, they declare the list open, waiting to be filled with new alternatives.
Motivation.
What motivates people to contribute? This dimension maps directly to Doan's “recruitment and retention”. As major incentive forms, Quinn and Bederson identify pay, reputation, altruism, enjoyment, and “implicit” work (covered by Doan's “nature of collaboration”).
Quality control.
Another “recurring” dimension of Doan's: “output evaluation”. However, Quinn and Bederson offer a finer-grained set of patterns. Many of them represent some kind of redundant task solving, common for semantics acquisition: output and input agreement (a reference to Luis von Ahn's semantics acquisition games [1]), redundancy, and statistical filtering.
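The output agreement pattern can be sketched as follows. This is an illustrative simplification of the ESP-game idea, assuming each player's contribution is just a set of tags; a label is accepted only when both players, working independently, produce it:

```python
def output_agreement(tags_a, tags_b):
    """Accept only the labels that two independent players both produced
    for the same item (ESP-game-style output agreement)."""
    return set(tags_a) & set(tags_b)

# Hypothetical independent tag sets from two players viewing the same image
accepted = output_agreement({"dog", "grass", "ball"}, {"dog", "ball", "sky"})
print(sorted(accepted))  # ['ball', 'dog']
```

Agreement between independent players serves the same purpose as redundancy on microtask markets: it filters out idiosyncratic or malicious labels without any expert involvement.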
Aggregation.
Describes how individual worker contributions are combined. This dimension maps directly to Doan's “partial result combination”. The authors identify a variety of approaches, including iterative improvement (or validation) of existing artifacts, searching for positive cases (e.g., visual scanning of a large set of satellite images for evidence), or evaluation of phenotypes of genetic algorithms. More characteristic of semantics acquisition, however, is the simple collection of partial contributions into a larger structure (e.g., atomic facts into an ontology),