symbolic representations (still in RDF) [42]. This is a more supervised, ontology-driven approach than the "keyword" approach of Lu and Hanjalic, which is reminiscent of unsupervised TF-IDF.
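As a reminder of the contrast, TF-IDF weights a term by its frequency within one document against its rarity across the whole collection, with no supervision or ontology involved. A minimal sketch (the toy corpus of clip descriptions is purely illustrative, not taken from the cited works):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each term in each document by term frequency times
    inverse document frequency (unsupervised keyword weighting)."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

# Toy corpus: tokenized textual descriptions of three audio clips.
corpus = [
    ["guitar", "solo", "rock", "guitar"],
    ["piano", "solo", "classical"],
    ["rock", "solo", "drums"],
]
print(tf_idf(corpus)[0])  # 'guitar' scores highest; 'solo', present everywhere, scores 0
```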
The preprocessed audio streams are subjected to further analysis, which detects more complex features and patterns of the music and eventually yields the desired metadata about their aural characteristics. Unsupervised approaches produce unlabeled features (used mainly in query-by-example) using mostly statistical process modeling and machine learning [40, 50, 56]. There are also supervised, ontology-driven feature identification approaches [65]. Apart from content-based approaches, context-based approaches are also used [26, 61].
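To make the unsupervised case concrete, the sketch below extracts MFCC features with the librosa library and groups clips with k-means. This is one possible illustrative shape of such a pipeline, not the method of any of the cited works; the file names, feature choice, and cluster count are assumptions.

```python
import numpy as np
import librosa                      # audio content analysis
from sklearn.cluster import KMeans  # unsupervised clustering

def clip_features(path):
    """Summarize one audio clip as the mean of its MFCC frames."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dimensional vector per clip

# Hypothetical clip paths; any local audio files would do.
paths = ["clip_a.wav", "clip_b.wav", "clip_c.wav", "clip_d.wav"]
X = np.stack([clip_features(p) for p in paths])

# Group the unlabeled feature vectors; retrieving the members of a
# query clip's cluster is the essence of query-by-example.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(dict(zip(paths, labels)))
```

The resulting cluster labels are not human-readable metadata; they only assert that clips within a cluster sound alike, which is exactly why such features serve example querying rather than labeled description.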
2.5 Crowdsourcing
Crowdsourcing. The term itself was first coined in 2005 by Howe [29]. In 2008, Daren C. Brabham defined it as "an online, distributed problem-solving and production model". Crowdsourcing (and crowd-based approaches to semantics acquisition) emerged along with the Web 2.0 phenomenon, which enabled masses of Web users to become contributors of Web content. Crowdsourcing often comprises human computation and is focused on solving human intelligence tasks: tasks that are hard or impossible for computers to solve, but relatively easy for humans. As Quinn and Bederson remind us, these two terms should not be confused [55]. While "crowdsourcing" primarily designates the distribution of a task to a wide and open mass of people, "human computation" designates the use of human power to solve a problem of a computational nature (i.e., a problem that may be solved by computers at some point in the future).
Semantics acquisition involves many tasks performed via crowdsourcing. Web users are, from time to time (and in various contexts), motivated to disclose descriptive information about the web resources they encounter. They comment on and rate images or videos, and manage their personal content applications, galleries, and bookmarks. By collecting this information and tracking user behavior, crowdsourcing techniques produce resource descriptions and even lightweight domain models.
If crowdsourced semantics originates from human work, then how does it differ from the expert approaches we mentioned earlier? The answer is the different quality assurance mechanisms. While manual approaches rely on the expertise of an individual, crowd-based approaches rely on the agreement principle: if many, even uninitiated, people independently express the same fact, it is probably true (e.g., the same photo gets decorated with the same tag by multiple users). This allows crowdsourcing to produce relatively precise outputs even when the input is noisy (an individual uninitiated user may produce many untrue suggestions).
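A minimal sketch of the agreement principle as a filter: accept a tag for a resource only when enough distinct users have independently suggested it. The threshold and the sample data are illustrative assumptions.

```python
from collections import Counter

def agreed_tags(suggestions, min_support=3):
    """Keep only tags independently given by at least min_support
    distinct users (the agreement principle)."""
    # Deduplicate (user, tag) pairs so one user counts once per tag.
    counts = Counter(tag for _user, tag in set(suggestions))
    return {tag for tag, n in counts.items() if n >= min_support}

# Hypothetical (user, tag) suggestions for a single photo.
suggestions = [
    ("u1", "sunset"), ("u2", "sunset"), ("u3", "sunset"),
    ("u1", "beach"), ("u4", "ufo"),  # noisy, unsupported suggestions
]
print(agreed_tags(suggestions))  # {'sunset'}: the noise is filtered out
```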
The advantage of crowdsourcing approaches over expert-based approaches is the much greater scale of discovered semantics. First, the quantity of potential lay (non-expert) contributors is larger (even when they are used redundantly). On the