is problem-dependent: not all SAGs can use pre-recorded game sessions, because of
the need for inter-player interaction.
Another type of artifact validation approach, on which many SAGs rely, is bootstrapping. With bootstrapping, part of the player's output is evaluated against existing data (and the player is scored according to it), while the rest is taken as new artifacts with a high chance of being correct, because the player does not know which of his actions (artifacts) can be evaluated and which cannot.
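A minimal sketch of this mixing-and-scoring scheme might look as follows (the task and answer structures, thresholds, and function names are our illustrative assumptions, not a prescribed implementation):

```python
import random

def build_session(gold_tasks, unknown_tasks, n_gold=3, n_unknown=7):
    """Mix evaluable and non-evaluable tasks so the player cannot tell them apart."""
    session = random.sample(gold_tasks, n_gold) + random.sample(unknown_tasks, n_unknown)
    random.shuffle(session)
    return session

def process_answers(session, answers, known_answers, accept_threshold=0.8):
    """Score the player on gold tasks; harvest the rest as candidate artifacts."""
    correct = total_gold = 0
    candidates = []
    for task, answer in zip(session, answers):
        if task in known_answers:                     # evaluable task with known answer
            total_gold += 1
            correct += (answer == known_answers[task])
        else:                                         # potential new artifact
            candidates.append((task, answer))
    accuracy = correct / total_gold if total_gold else 0.0
    # Accept new artifacts only when the player performed honestly on gold tasks.
    accepted = candidates if accuracy >= accept_threshold else []
    return accuracy, accepted
```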
Exercising the bootstrapping approach, an interesting framework comprising a SAG for image annotation was created by Seneviratne and Izquierdo [17]. It is a single-player game. The authors solved the artifact validation issue as follows: as the game input, they mix non-annotated and fully annotated images, without the player knowing which is which. At first, the game asks the player to tag images with existing annotations; then it introduces non-labeled images with an occasional presence of labeled ones. By tracking the player's behavior patterns (using Markov models), the game is able to determine whether the player's behavior is honest and to assess the relevance of the annotations he provides.
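Such behavior tracking could, for instance, rest on a first-order Markov model of player actions. The following sketch is our own illustration of the general idea, not the actual model of [17]: it compares a session against transition probabilities learned from trusted players.

```python
import math
from collections import defaultdict

def train_transitions(honest_sessions):
    """Estimate action-transition probabilities from sequences of trusted players."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in honest_sessions:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

def avg_log_likelihood(seq, trans, floor=1e-6):
    """Average per-step log-likelihood of a session under the honest-player model."""
    steps = list(zip(seq, seq[1:]))
    ll = sum(math.log(trans.get(a, {}).get(b, floor)) for a, b in steps)
    return ll / max(len(steps), 1)

def looks_honest(seq, trans, threshold=-3.0):
    """Flag sessions whose behavior diverges too much from honest patterns."""
    return avg_log_likelihood(seq, trans) >= threshold
```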
Another interesting case of the bootstrapping validation model in a SAG is the Akinator, a game in which players answer questions about famous persons and the game "guesses" who the person is. The game uses its existing knowledge base to be a solid opponent for the player, while it collects new knowledge as the player answers the questions. If the game "guesses" the correct person at the end, the answers provided by the player are used to strengthen that person's attributes in the knowledge base. If the player wins, the newly introduced character can be immediately provided with some knowledge (the answers that the player provided in the game session).
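A possible shape of such a knowledge-base update (the names and data layout are our assumptions, not the Akinator's internals) is:

```python
from collections import defaultdict

def make_kb():
    # kb[person][question][answer] -> count of players who gave that answer
    return defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

def record_session(kb, person, session_answers):
    """Strengthen (or newly seed) a person's attributes from one game session.

    Called when the game guessed correctly, or when the player won and then
    revealed the character, so the session answers seed the new entry."""
    for question, answer in session_answers.items():
        kb[person][question][answer] += 1

kb = make_kb()
record_session(kb, "Ada Lovelace", {"Is your character real?": "yes",
                                    "Is your character a scientist?": "yes"})
```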
In special cases (depending on the problem being solved), SAGs are able to validate the player artifacts automatically. We identify two possible forms of automated validation.
The exact automatic validation. In this case, the game is able to compute the exact value of an artifact according to some metric (e.g., a real fitness value). For example, this is possible when a game solves an NP-hard problem whose candidate solutions can be verified by an algorithm of polynomial complexity [5]. Yet, we have observed such a scheme only outside the SAG domain, in other crowdsourcing games (e.g., testing FPGA layouts [23] or protein structures [4]).
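For illustration, a candidate solution to the vertex cover problem (an NP-hard optimization problem) can be verified and scored exactly in time linear in the number of edges; the scoring function below is a hypothetical example of such an exact metric:

```python
def is_vertex_cover(edges, candidate):
    """True iff every edge has at least one endpoint in the candidate set."""
    cover = set(candidate)
    return all(u in cover or v in cover for u, v in edges)

def exact_score(edges, candidate):
    """Exact fitness: valid covers score higher the smaller they are."""
    if not is_vertex_cover(edges, candidate):
        return 0.0
    return 1.0 / len(candidate) if candidate else 1.0

edges = [(0, 1), (1, 2), (2, 3)]
print(exact_score(edges, [1, 2]))   # valid cover -> 0.5
print(exact_score(edges, [0, 3]))   # not a cover -> 0.0
```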
The approximative automatic validation. In this case, the game is also able to measure the artifact correctness automatically, but with a certain bias, guaranteeing only a partial correlation with the true artifact value. The bias introduces a theoretical risk that players would be misled into producing wrong artifacts. In practice, SAGs counter this with not-so-transparent scoring functions, so that players are not able to optimize their solutions against them, or the approximation is simply good enough to keep the player "on the right track". As typical examples of approximative artifact validation, we consider our games Little Search Game and CityLights. There, background corpora of not-so-good metadata serve as sources for approximative artifact evaluations. A typical phenomenon occurring when this model is used is that the truly valuable artifacts emerge just when the player thinks he was wrong (e.g., the hidden relationships of the LSG, or negative feedback on tags in CityLights).
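As a sketch of such biased scoring (the corpus format and scoring function are our assumptions, not the actual LSG or CityLights mechanics), a player's tag could be scored by co-occurrence with an item's existing tags in a noisy background corpus; a genuinely good but rare tag may then score low, producing exactly the "player thinks he was wrong" phenomenon noted above:

```python
def cooccurrence_score(new_tag, existing_tags, corpus):
    """Fraction of corpus documents containing new_tag together with a known tag."""
    hits = sum(1 for doc in corpus
               if new_tag in doc and any(t in doc for t in existing_tags))
    total = sum(1 for doc in corpus if new_tag in doc)
    return hits / total if total else 0.0

corpus = [{"cat", "pet", "fur"}, {"cat", "whiskers"}, {"dog", "pet"}]
print(cooccurrence_score("cat", {"pet"}, corpus))  # 1 of 2 "cat" docs also says "pet" -> 0.5
```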