architecture is that it provides tools for efficient storage of structural musical data and for performing content-based queries on such data.
The overall architecture of the Musical Data
Management module is illustrated in Figure 19.
Figure 18. Embedding of MIR metadata in an XML environment
<?xml version="1.0" encoding="UTF-8"?>
<mx>
<general>
<description>
<movement_title>Inventio #4</movement_title>
</description>
</general>
<structural>
<themes>
<theme id="theme0">
<thm_spineref endref="v1_12" partref="part0" stafref="v1_0" voiceref="voice0"/>
<thm_spineref endref="v2_12" partref="part1" stafref="v2_0" voiceref="voice0"/>
<invariants order="7" size="12" complexity="54"/>
</theme>
</themes>
</structural>
<logic>
<spine>
<event id="timesig_0" timing="0" hpos="0"/>
<event id="keysig_0" timing="0" hpos="0"/>
<event id="clef_0" timing="0" hpos="0"/>
<event id="clef_1" timing="0" hpos="0"/>
<event id="p1v1_0" timing="0" hpos="0"/>
<event id="p1v1_1" timing="256" hpos="256"/>
<event id="p1v1_2" timing="256" hpos="256"/>
<event id="p1v1_3" timing="256" hpos="256"/>
<event id="p1v1_4" timing="256" hpos="256"/>
<event id="p1v1_5" timing="256" hpos="256"/>
<event id="p1v1_6" timing="256" hpos="256"/>
<event id="p1v1_7" timing="256" hpos="256"/>
<event id="p1v1_8" timing="256" hpos="256"/>
<event id="p1v1_9" timing="256" hpos="256"/>
<event id="p1v1_10" timing="256" hpos="256"/>
<event id="p1v1_11" timing="256" hpos="256"/>
<event id="p1v1_12" timing="256" hpos="256"/>
<event id="p2v1_0" timing="0" hpos="0"/>
<event id="p2v1_1" timing="256" hpos="256"/>
<event id="p1v1_13" timing="256" hpos="256"/>
<event id="p2v1_2" timing="0" hpos="0"/>
<event id="p2v1_3" timing="256" hpos="256"/>
<event id="p1v1_14" timing="256" hpos="256"/>
<event id="p2v1_4" timing="0" hpos="0"/>
<event id="p2v1_5" timing="256" hpos="256"/>
<event id="p1v1_15" timing="256" hpos="256"/>
<event id="p2v1_6" timing="0" hpos="0"/>
<event id="p2v1_7" timing="256" hpos="256"/>
<event id="p1v1_16" timing="256" hpos="256"/>
<event id="p2v1_8" timing="0" hpos="0"/>
<event id="p2v1_9" timing="256" hpos="256"/>
<event id="p1v1_17" timing="256" hpos="256"/>
<event id="p2v1_10" timing="0" hpos="0"/>
<event id="p2v1_11" timing="256" hpos="256"/>
<event id="p1v1_18" timing="256" hpos="256"/>
<event id="p2v1_12" timing="0" hpos="0"/>
<event id="p1v1_19" timing="256" hpos="256"/>
<event id="p1v1_20" timing="256" hpos="256"/>
<event id="p2v1_13" timing="0" hpos="0"/>
<event id="p1v1_21" timing="256" hpos="256"/>
<event id="p1v1_22" timing="256" hpos="256"/>
<event id="p2v1_14" timing="0" hpos="0"/>
<event id="p1v1_23" timing="256" hpos="256"/>
</spine>
...
</mx>
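To make the layout of Figure 18 concrete, the short sketch below reads the spine of such an MX document with Python's standard ElementTree module and turns the timing values into absolute positions. The file name, the running-sum interpretation of timing, and the printed lookup are illustrative assumptions, not details fixed by the text above.

import xml.etree.ElementTree as ET

# Hypothetical file containing the MX document of Figure 18.
tree = ET.parse("inventio4_mx.xml")
root = tree.getroot()

# Assume each <event> carries a timing value relative to the previous event;
# a running sum then yields its absolute position in the piece.
absolute = 0
positions = {}
for event in root.find("logic/spine"):
    absolute += int(event.get("timing", "0"))
    positions[event.get("id")] = absolute

print(positions["p1v1_12"])  # absolute position of one event of part 1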
The module consists of two main environments: the Musical Storage Environment and the Musical Query Environment. The Musical Storage Environment represents musical information in the database in a form that makes content-based queries efficient. The Musical Query Environment provides methods for performing content-based queries on music scores, starting from a score or an audio fragment given as input.
The matching between the input and the scores stored in the database is performed in several steps, illustrated graphically in Figure 19. The input can be either an audio file or a score fragment, played by the user on a keyboard or sung or whistled into a microphone connected to the computer (Haus & Pollastri, 2000).
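The individual matching steps are shown in Figure 19 rather than spelled out here; as a rough illustration of the kind of approximate comparison involved, the sketch below ranks stored note-number sequences against a transcribed input fragment. The toy data, the function names, and the use of difflib's SequenceMatcher as the similarity measure are assumptions for illustration only.

from difflib import SequenceMatcher

def similarity(query, candidate):
    # Ratio in [0, 1]; 1.0 means identical note-number sequences.
    return SequenceMatcher(None, query, candidate).ratio()

def rank_scores(query, database):
    # database maps a score identifier to its note-number sequence.
    return sorted(database,
                  key=lambda sid: similarity(query, database[sid]),
                  reverse=True)

db = {"inventio4": ["62:256", "64:256", "65:256", "67:256"],  # toy data
      "inventio1": ["60:256", "62:256", "64:256", "65:256"]}
print(rank_scores(["62:256", "64:256", "65:256"], db))  # best match first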
Musical Storage Environment: From the audio files, note-like attributes are extracted by converting the input into a sequence of note numbers, that is, the concatenation of the pitch and duration of each input note. This step is performed by the Symbolic Music Code Extractor module (Figure 19).
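A minimal sketch of this note-number encoding, assuming MIDI-style pitch values and a ":" separator (both illustrative choices; the chapter does not fix a concrete syntax here):

def to_note_numbers(notes):
    # notes: iterable of (midi_pitch, duration) pairs, as produced by a
    # transcription step such as the Symbolic Music Code Extractor.
    return [f"{pitch}:{duration}" for pitch, duration in notes]

print(to_note_numbers([(62, 256), (64, 256), (65, 256)]))
# -> ['62:256', '64:256', '65:256']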