a set of possible solutions for meeting these requirements. The solutions are not always optimal, so we also outline their drawbacks. We later build on our findings regarding the design aspects and present our own improvements to SAG design, demonstrating them on our games.
7.2.1 Validation of Player Output (Artifacts)
Every SAG has to solve the issue of validating player output (inferred from the set of actions he performs in the game) in order to give him score feedback. The score must correlate with the value of his output from the purpose perspective; otherwise, the player would tend to produce outputs with no value in the future. This means the game has to be able to evaluate the value of the user's output, and has to do it immediately after the game ends, so the player receives feedback and stays motivated to play again.
But how can we evaluate an artifact that has been created by the player for the first time? In other words, if the purpose of the game is to create new artifacts, and creating those artifacts is only within the power of a human, then who, apart from a human, can validate the correctness of the output?
Many games (like ESP, TagATune and others) [2, 8, 9, 12, 13, 28, 29] rely on the mutual agreement of two simultaneously playing players, cooperating or opposing each other (anonymous to each other in the case of cooperating players). Its logic is simple: as the players produce artifacts, these are matched against each other, and if they are the same, they are with high probability correct (assuming they have been created independently).
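To make the agreement mechanism concrete, the following is a minimal sketch of how a game server might score a round by intersecting the outputs of two independent players. All names, the normalization step, and the scoring constant are our illustrative assumptions, not details of any cited game.

```python
def normalize(artifact: str) -> str:
    """Canonicalize a free-text artifact before comparison.
    (The exact normalization is an assumption; real games may
    also apply stemming or synonym matching.)"""
    return artifact.strip().lower()

def validate_by_agreement(output_a: list[str], output_b: list[str]) -> set[str]:
    """Artifacts produced by both players independently are,
    with high probability, correct."""
    return {normalize(a) for a in output_a} & {normalize(b) for b in output_b}

def score_round(output_a: list[str], output_b: list[str],
                points_per_match: int = 10) -> tuple[int, set[str]]:
    """Score a finished round immediately, so the player receives
    feedback right after the game ends."""
    agreed = validate_by_agreement(output_a, output_b)
    return len(agreed) * points_per_match, agreed

# Example: two mutually anonymous players label the same image.
score, validated = score_round(["Dog", "grass ", "ball"],
                               ["dog", "frisbee", "Ball"])
print(score, validated)  # 20 and the agreed artifacts {'dog', 'ball'}
```

Only the agreed artifacts would typically be stored as validated output; the unmatched ones can be retained as unverified candidates for later rounds.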
The mutual agreement mechanism, however, introduces the cold-start problem: there has to be a large enough pool of mutually anonymous players wanting to play at the same time; otherwise, the game cannot even start. This also hinders the desired iterative process of SAG development: it is harder to get a large enough group of players for testing and user evaluation after every iteration than to get individual players.
Some SAGs solve this issue by using bot players (based on previously recorded sessions) that validate some of the player's output and provoke him to introduce new facts. A good example of bot use is Vickrey's Free Association SAG, whose authors claim a relatively large mass of explored metadata relationships (800 thousand term relationship suggestions). This is thanks to the use of game bots, which simulated the opposing player in cases when only a single player wanted to play (this approach was also successfully used by von Ahn in the ESP Game [28]). In fact, almost all games were played in “human-bot” mode. Anecdotally, the authors found that most of the players were not even aware that they were playing against a bot [26], a strong argument to consider the possible use of bots in general SAG design.
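As an illustration of the replay idea, here is a minimal sketch of a bot that simulates the opposing player by replaying artifacts recorded from earlier human sessions on the same game item. The class, the pairing helper, and the human-like delay range are hypothetical; the cited games may implement this differently.

```python
import random
import time

class ReplayBot:
    """Simulates the opposing player by replaying outputs recorded
    from earlier human sessions on the same game item (a sketch;
    a real bot might also withhold some answers to provoke the
    human player into contributing new facts)."""

    def __init__(self, recorded_sessions: dict[str, list[str]]):
        # item_id -> artifacts an earlier human produced for that item
        self.recorded_sessions = recorded_sessions

    def play(self, item_id: str, delay_range=(1.0, 4.0)):
        """Yield recorded artifacts one by one with human-like delays,
        so the session feels like a live partner."""
        for artifact in self.recorded_sessions.get(item_id, []):
            time.sleep(random.uniform(*delay_range))
            yield artifact

def find_partner(waiting_players: list, bot: ReplayBot):
    """Pair a player with a live partner if one is waiting;
    fall back to the bot otherwise."""
    return waiting_players.pop() if waiting_players else bot
```

Note that this fallback only works once enough human sessions have been recorded, which is exactly the limitation discussed next.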
However (in connection to the cold start), bots tend to help more in the later stages of SAG deployment. Even if the SAG uses them, it is prone to the cold-start problem in the beginning, when not enough games have been pre-recorded. Additionally, use of bots