In contrast, the research concerning generative video composition presented in this
paper is based on the idea of providing variable music videos, which can be generated
in real time without requiring viewer interaction and which therefore depend on
processes that are to some extent beyond the director's control [3]. Thus, the
methods presented here aim to provide a new creative tool for video directors, as well
as an engaging viewing experience for audiences.
2 Music Video Generation
This demonstration presents a prototype application of Dividation, a software
application for generative music video editing that assembles video sequences in
real time from individually prepared pools of video footage, using algorithmic
decision-making processes based on the creator's clip classifications.
Sequence generation is achieved using stochastic processes based on Markov
chains and probabilistic structures that describe the editing dynamics, drawing on
a database of annotated original video footage. Taking the individual shot to be
the basic building block of these videos, a sequence is assembled shot by shot
according to the video's key characteristics as defined by the video director.
Additional structuring methods, such as underlying ordered lists, help to ensure
narrative coherence by specifying forced shots and defining temporal windows
within which the stochastic selections are constrained.
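The selection process described above can be sketched as follows. This is a minimal illustration, not the actual Dividation implementation: the clip categories, transition probabilities, and the `forced` list of (temporal window, clip) pairs are all assumed for the example.

```python
import random

# Clip pool annotated by the creator; category names are illustrative.
clips = {
    "perf_wide": ["p01", "p02", "p03"],
    "perf_close": ["c01", "c02"],
    "narrative": ["n01", "n02", "n03", "n04"],
}

# Markov transition probabilities between clip categories,
# describing the editing dynamics (values are assumptions).
transitions = {
    "perf_wide": {"perf_close": 0.6, "narrative": 0.4},
    "perf_close": {"perf_wide": 0.5, "narrative": 0.5},
    "narrative": {"perf_wide": 0.7, "perf_close": 0.3},
}

# Forced shots: (temporal window in seconds, clip that must appear).
forced = [((30.0, 35.0), "n01")]

def next_category(current, rng):
    """Draw the next clip category from the Markov chain."""
    cats, probs = zip(*transitions[current].items())
    return rng.choices(cats, weights=probs, k=1)[0]

def generate_sequence(duration, shot_len, seed=0):
    """Assemble a shot list, shot by shot, for `duration` seconds of music."""
    rng = random.Random(seed)
    t, category, sequence = 0.0, "perf_wide", []
    while t < duration:
        # A forced shot takes priority when its temporal window opens;
        # otherwise the stochastic selection applies.
        clip = next((c for (start, end), c in forced if start <= t < end), None)
        if clip is None:
            category = next_category(category, rng)
            clip = rng.choice(clips[category])
        sequence.append((round(t, 2), clip))
        t += shot_len
    return sequence
```

Here a fixed shot length stands in for the paper's algorithmic cutting; the point is only how forced shots and windowed stochastic selection combine into one sequence.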
3 Demonstration
Two music videos are used to demonstrate the video generation method described
above: Majestic and Eloise.
The music video Majestic follows a simple narrative, in which the protagonist is
drawn into the televised action of the musicians' performances. Six parameters are
used to categorise the video content and are mapped to temporal probabilities,
which describe how prominently each visual feature should appear during
particular sections of the song. This music video uses a pool of roughly
two hundred clips ranging in length from less than one second up to three
minutes. An algorithmic decision-making process is then employed both for the
automatic sequencing of the video footage and for cutting the footage into shots
of appropriate lengths.
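A section-wise mapping of visual parameters to temporal probabilities might look like the following sketch. The section boundaries, parameter names, and weights are invented for illustration; the paper's six actual parameters are not specified here.

```python
import random

# Assumed structure: each song section assigns a probability weight to
# each visual parameter, describing how often that feature should be
# seen during that section of the song.
sections = [
    # (start_sec, end_sec, {parameter: probability})
    (0.0, 20.0, {"protagonist": 0.7, "band": 0.2, "tv_set": 0.1}),
    (20.0, 50.0, {"protagonist": 0.3, "band": 0.5, "tv_set": 0.2}),
    (50.0, 80.0, {"protagonist": 0.1, "band": 0.8, "tv_set": 0.1}),
]

def pick_feature(t, rng=random):
    """Select a visual feature for time t according to the section weights."""
    for start, end, weights in sections:
        if start <= t < end:
            feats, probs = zip(*weights.items())
            return rng.choices(feats, weights=probs, k=1)[0]
    raise ValueError(f"time {t} lies outside the annotated sections")
```

Sampling features this way biases each part of the video toward the content the director wants on screen during that section, while leaving the individual choices stochastic.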
In comparison, the music video Eloise uses a slightly more detailed narrative.
Here, the protagonist, Eloise, is seen inside her home, while the musicians
perform the song in her living room. A second protagonist, a singer, urges the
woman to leave her house and give up her bad habits, which she finally does. For
this video, the editing dynamics are based on three visual parameters and a
predefined narrative structure, which provides an additional guide to the order of
the shots. The video uses approximately 550 pre-cut clips ranging in length from
less than one second up to ten seconds. In this example, the algorithmic
decision-making process is only applied to the sequencing of the video content.
The selected shots are played in full.