algorithm would measure the trend under normal conditions. Once a “stalemate” is
detected, the tag would be featured in the game under “special conditions”, meaning that
the game would purposefully provoke the player into expressing explicit opinions on
this tag. The tag could also be assigned to “expert players” (see below). If the
stalemate still persists, the tag would no longer be featured in the game. Such
an approach to sparing player effort may not necessarily increase the correctness
of our method, but it would certainly improve its output quantity.
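The stalemate handling described above can be sketched as a simple decision rule. All concrete values below (the 10-vote evidence minimum, the 40–60 % split that counts as a stalemate, the round limit) and the function name are illustrative assumptions, not the deployed implementation:

```python
def next_mode(votes_for, votes_against, special_rounds, max_rounds=3):
    """Decide how a tag should be featured after a round of play.

    Returns one of: "normal", "special", "retired", "resolved".
    Thresholds are hypothetical; a "stalemate" is modeled as a
    near-even vote split after enough evidence has been gathered.
    """
    total = votes_for + votes_against
    if total < 10:                      # not enough evidence yet
        return "normal"
    ratio = votes_for / total
    if 0.4 <= ratio <= 0.6:             # stalemate: no clear trend
        if special_rounds >= max_rounds:
            return "retired"            # cease featuring the tag
        return "special"                # provoke explicit opinions / experts
    return "resolved"                   # a clear trend emerged
```

Retiring a tag after a bounded number of special rounds is what spares player effort: work is no longer spent on tags the crowd cannot decide.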
Lack of player experience in the particular music domain. In our experiments,
we deployed the game limited to certain music genres. Therefore, some players
had trouble recognizing the features of the music and mapping them to tags.
Some specific tags posed problems by themselves: the players (as some of them
reported) simply did not understand the tags, as the tags were part of a jargon
the players were not familiar with.
One solution is to let the player choose the genre he is most familiar with
(the only issue then is having enough players for each category). Another is to
measure the player's competence for individual genres implicitly, from his actions
in the game (a type of approach we discuss in the second part of this topic). With
this measure, we can sort the players according to their skill levels for particular
music domains and eventually weight the support-value changes they impose
accordingly (assuming that “expert players” give correct feedback more often).
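Such skill-based weighting could look like the following sketch, assuming a per-genre skill score in [0, 1] and a simple linear weighting of support-value changes. The function names, the linear form, and the ranking helper are assumptions for illustration only:

```python
def update_support(support, delta, skill):
    """Apply a player's support-value change, weighted by his estimated
    skill (0..1) for the track's genre; "expert players" count more.
    The linear weighting is an illustrative assumption."""
    return support + delta * skill


def experts(skills, genre, top_n=2):
    """Rank players by per-genre skill and return the top candidates,
    e.g. to assign a stalemated tag to "expert players"."""
    ranked = sorted(skills, key=lambda p: skills[p].get(genre, 0.0),
                    reverse=True)
    return ranked[:top_n]
```

A usage example: with skills `{"ann": {"jazz": 0.9}, "bob": {"jazz": 0.2}}`, a jazz stalemate would be routed to `ann` first, and her feedback would move the support value almost at full strength.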
“Too dirty” dataset. It is possible that the usability of our method is limited to datasets
with only a certain ratio of correct to wrong tags. If, for example, the majority
of tags within the dataset is wrong, it could cause too much confusion among the
players, resulting in more biased results. Unfortunately, determining this would
require much more experimentation with differently “spoiled” metadata sets.
One more improvement that could help the output of our method is taking more
of the player's behavior into account. What could be considered is the amount of
time the player needs to make his decision, or how many times he replays the
music track: possible indicators of hesitation that could be reflected in a lower
change of the tag's support value.
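A minimal sketch of such a hesitation discount, assuming decision time measured in seconds and a hypothetical normalization constant `t_norm`; the functional form and all names are assumptions, not part of the method as deployed:

```python
def hesitation_weight(decision_time, replays, t_norm=10.0):
    """Discount factor in (0, 1]: longer decision times and more track
    replays suggest hesitation, so the player's answer counts for less.
    t_norm is an assumed "typical" decision time in seconds."""
    return 1.0 / (1.0 + decision_time / t_norm + replays)


def apply_change(support, delta, decision_time, replays):
    """Apply a support-value change, scaled down by the player's
    apparent hesitation."""
    return support + delta * hesitation_weight(decision_time, replays)
```

An instant, single-listen answer passes through at full weight, while an answer given after 10 seconds and one replay counts for only a third as much.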
 