feeding training data to machine learning approaches (e.g., a training set of images
with metadata) or using no aggregation at all.
• Human skill. What type of cognitive activity are the workers performing? The
authors mention visual recognition, language understanding and human communication.
Visual recognition, together with aural recognition, is often the case in multimedia
metadata acquisition approaches. As another human skill category, we recognize
the application of "common sense", which is the subject of several knowledge
acquisition projects [39]. We see two counterparts to this dimension in Doan's work:
the "target problem (type)" and "how do workers solve the task" (what tools or
techniques they use). In both cases, however, Doan et al. focus on the "outer"
characteristics of the job, whereas Quinn and Bederson focus on the mental skills
themselves. An attempt to categorize human skills used in human computation was
also made in later work by Parshotam [54], who identifies them as human perception
(sensing), cognition, knowledge, common sense, visual processing, anomaly detection
or context identification.
• Process order. For this dimension, the authors identify three roles found in every
human computation system: the requester, the worker and the computer. Several
classes of systems are then presented based on the order in which these roles perform
their work. Sometimes the computational task is first attempted by a computer and
then corrected or complemented by a human, e.g., computer-worker-requester for
ReCAPTCHA.4 In other cases, the human contribution precedes the computer
processing, e.g., the semantics acquisition game Peekaboom, where players identify
visual objects by circular regions in images, which are then automatically folded
together to form the true (i.e., non-circular) boundaries of these objects [2] (a sketch
of this folding step follows the list below). For semantics acquisition, both cases are
common. Moreover, computer processing (whether prior or posterior) is often
essential not only for mediation but also for handling the quantity of tasks, which is
high even for crowd processing.
• Task-request cardinality. How many workers are necessary to finish one task?
(An aggregation sketch illustrating cardinality greater than one also follows this list.)
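Since the source does not give Peekaboom's actual folding procedure, the following is a minimal illustrative sketch of the idea in Python: each player's revealed region is assumed to be recorded as a disk (center, radius), and pixels covered by enough disks are kept as the object mask. The function name, data layout and voting threshold are assumptions for the example, not Peekaboom's real algorithm.

```python
# Illustrative sketch of folding circular player regions into a non-circular
# object mask (a stand-in for Peekaboom's folding step, not its real code).
import numpy as np

def fold_regions(disks, shape, min_votes=2):
    """Merge circular regions into a binary object mask.

    disks: iterable of (cx, cy, r) tuples, one disk per player action.
    shape: (height, width) of the image.
    min_votes: disks that must cover a pixel for it to count as 'object'.
    """
    votes = np.zeros(shape, dtype=int)
    ys, xs = np.indices(shape)  # pixel coordinate grids
    for cx, cy, r in disks:
        votes += ((xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2).astype(int)
    return votes >= min_votes  # boolean object mask

# Three players circled roughly the same object; their disks are folded.
mask = fold_regions([(40, 30, 12), (43, 32, 10), (38, 29, 14)], shape=(64, 96))
print(mask.sum(), "pixels classified as object")
```

Requiring min_votes > 1 doubles as a simple quality-control measure: a single stray disk from one player does not make it into the final boundary.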
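A cardinality greater than one likewise implies some aggregation of the workers' answers. The sketch below (an illustrative assumption, not part of the classification itself) finishes a task only after a fixed number of workers have responded and then takes a majority vote:

```python
# Illustrative sketch of task-request cardinality with majority-vote
# aggregation; the threshold and tie handling are example choices.
from collections import Counter

def aggregate(answers, cardinality=3):
    """Return the majority answer once `cardinality` workers have responded."""
    if len(answers) < cardinality:
        return None  # task not finished; assign more workers
    winner, count = Counter(answers).most_common(1)[0]
    return winner if count > len(answers) / 2 else None  # None on a tie

print(aggregate(["cat", "cat", "dog"]))  # -> 'cat'
```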
The authors encourage further experimentation with the classification by combining
various dimensions and their values to imagine new systems.
Based on the literature review, (1) the role of incentives (motivation) and (2) quality
control receive the most attention from researchers in crowdsourcing and human
computation ([19, 57, 64] and [3, 19, 45, 73], respectively).
2.5.2 Mechanical Turk
Wikipedia is often presented as a demonstration, and the single most renowned
product (and, at the same time, approach), of crowdsourcing. Much more characteristic
of crowdsourcing principles, however, is the Amazon Mechanical Turk.5 (A sketch of
posting a task through its API follows below.) It is a generic platform for controlled
crowdsourcing, where companies or
4 http://www.google.com/reCAPTCHA
5 https://www.mturk.com
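To make the platform's mechanics concrete, here is a minimal sketch of posting a task (a "HIT") through the Mechanical Turk API with the boto3 SDK. It assumes configured AWS credentials and targets the requester sandbox endpoint; the title, reward and abbreviated question payload are illustrative only. Note how MaxAssignments directly expresses the task-request cardinality discussed above.

```python
# Minimal sketch of creating a Mechanical Turk HIT with boto3 (sandbox).
import boto3

client = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Abbreviated HTMLQuestion payload; a real task UI must post its answers
# back to MTurk's submit endpoint.
question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body><p>Does this image contain a cat? (task UI abbreviated)</p></body></html>
  ]]></HTMLContent>
  <FrameHeight>300</FrameHeight>
</HTMLQuestion>"""

hit = client.create_hit(
    Title="Image labeling: is there a cat?",
    Description="Answer a yes/no question about one image.",
    Reward="0.05",                    # payment per assignment, in USD
    MaxAssignments=3,                 # task-request cardinality: 3 workers
    LifetimeInSeconds=3600,           # how long the HIT stays available
    AssignmentDurationInSeconds=300,  # time limit per worker
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```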
 