To summarize from the perspective of problem decomposition and task difficulty, SAGs follow one of two possible models:
1. All tasks are equal in complexity and relatively easy to solve.
2. The complexity of tasks increases gradually.
7.2.3 Task Distribution and Player Competences
Each SAG operates with a pool of available players and a pool of unsolved (or
partially solved) tasks. The way SAGs assign tasks to players may greatly
influence their outcome, both qualitatively and quantitatively.
First, we might consider the quantitative effectiveness of the game. Let us assume
that
1. The SAG requires more than one player to solve a particular task because of the
mutual validation needed for the majority of problems.
2. Only limited work-power is available (i.e., we have a limited number of
players with limited average play time). This work-power is less than the total
work-power needed to solve all task instances in the pool (i.e., the number
of tasks times the number of redundant solutions needed for validation).
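These two assumptions can be made concrete with a quick back-of-envelope calculation; all numbers below are purely illustrative, not drawn from any particular game:

```python
# Purely illustrative numbers: 1000 task instances, each needing
# 3 redundant solutions for mutual validation, and 200 players who
# each contribute 10 solutions on average during their play time.
tasks = 1000
redundancy = 3
players = 200
solutions_per_player = 10

needed_work = tasks * redundancy                  # 3000 solutions required
available_work = players * solutions_per_player   # 2000 solutions available

# The available work-power covers only a fraction of what is needed;
# random assignment spreads it thinly, so even fewer task instances
# than this bound suggests reach full validation.
coverage = available_work / needed_work           # ~0.67
```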
If the SAG assigns tasks to players randomly under such conditions, its effective-
ness in using the players' work will be very small: only a fraction of task instances
will be solved a sufficient number of times. Therefore, a random scheme is almost never
used (the exception might be games that use exact artifact validation, where no
further redundancy in task instance solving is needed once a first correct solution
is found). Instead, SAGs apply a greedy strategy, which selects for solving those task
instances that are closest to reaching the required number of solutions.
This way, the work (solutions, artifacts) of the players never goes "in vain", as it
always contributes to the artifact validation.
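The greedy strategy can be sketched as follows; the data structure and function names are hypothetical, not taken from any concrete SAG implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TaskInstance:
    # Hypothetical structure: the fields are illustrative only.
    task_id: str
    required_solutions: int              # redundancy needed for validation
    solutions: list = field(default_factory=list)

    def remaining(self) -> int:
        return self.required_solutions - len(self.solutions)

def pick_task_greedy(pool):
    """Return the open task instance closest to reaching its required
    number of solutions, so that no submitted solution is wasted."""
    open_tasks = [t for t in pool if t.remaining() > 0]
    if not open_tasks:
        return None
    return min(open_tasks, key=lambda t: t.remaining())
```

For example, given a pool where one instance needs just one more solution for validation while others need two or three, the strategy always hands out the one missing a single solution first.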
For the task assignment in SAGs, the greedy strategy can be considered
a baseline. It is usually modified by secondary task-picking criteria, such as:
• Not assigning the same task instance to the same player multiple times (to prevent
the player from getting bored).
• Preferring certain tasks according to some measure of their value (e.g., how important
the resource to be annotated is) or to some existing data stubs (e.g., ontology-driven
selection [11]).
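Such secondary criteria can be layered on top of the greedy rule. A minimal sketch, assuming each task instance records which players have already solved it (the dictionary layout and the value function are hypothetical):

```python
def pick_task(pool, player_id, value=None):
    """Greedy selection refined by two secondary criteria: never hand
    the same task instance to the same player twice, and break ties
    between equally 'close' instances by a task-value measure.
    All names here are illustrative, not from a concrete SAG."""
    value = value or (lambda t: 0.0)
    candidates = [
        t for t in pool
        if len(t["solvers"]) < t["required"]   # still needs solutions
        and player_id not in t["solvers"]      # new to this player
    ]
    if not candidates:
        return None
    # Primary key: fewest missing solutions; secondary: highest value.
    return min(candidates,
               key=lambda t: (t["required"] - len(t["solvers"]), -value(t)))
```

Note that the value measure only breaks ties: the number of missing solutions remains the primary sorting key, so the baseline greedy behavior is preserved.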
The approach above optimizes the number of tasks being solved and influences
(through the secondary criteria) which tasks are solved with higher priority. On the
other hand, it cannot influence the quality of the SAG outcome. The overall "abstract"
quality of the SAG output depends on the quality of individual, concrete solutions,
and these depend on the "quality" of the players who create them (i.e., how good
the solutions of a particular player are).