a parametric re-synthesis filter, and by feeding this filter with signals having the
temporal density and envelopes calculated during the analysis.
Especially innovative and rewarding in this modeling approach was its tight
interactivity with non-musical sound events. Not only did this system allow straight-
forward connection of floor interfaces such as sensing mats; it also made a palette of
controls available to users, who could manipulate the synthesis parameters to trim
the results of the analysis and, furthermore, impose their own taste on the footstep
sounds. A similar interaction design approach was followed by Fontana and Bresin
one year later, in the form of C external code for the Puredata realtime environment,
limited to the interactive simulation of aggregate grounds [30]: as opposed to Cook,
their model was completely independent of pre-recorded material, relying instead
on a physics-based impact model simulating a point-wise mass colliding against a
resonant object through a nonlinear spring. This model was employed to
generate bursts of micro impacts in real time, whose individual amplitude and tempo-
ral density followed stochastic processes taken from respective physical descriptions
of crumpling events. Such descriptions expose macro parameters (respectively of
amplitude and temporal density) that, for the purpose of this model, could be used
for user control. Finally, an amount of potential energy could be set that was pro-
gressively consumed by the micro impacts during each footstep: this feature made
it possible to trigger a footstep on a specific floor directly, i.e. with no further infor-
mation needed, and allowed the authors to reproduce the slowing down that occurs
at the end of a run, based on assumptions about human movement and its links to
musical performance.
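The micro-impact scheme just described can be sketched in a few lines. This is an illustrative reconstruction, not the published model: the function name, the exponential inter-impact times, the uniform amplitude factors, and the geometric consumption of the energy budget are all assumptions made here for clarity.

```python
import random

def footstep_burst(energy=1.0, density=80.0, seed=None):
    """Generate (time, amplitude) pairs for the micro impacts of one
    footstep on an aggregate ground.

    Inter-impact times follow a Poisson-like process whose rate is
    `density` (impacts per second); each impact's amplitude is a random
    fraction of the remaining potential energy, which it then consumes.
    The burst stops when the energy budget is exhausted, so the impact
    train dies out by itself, as in the model described above.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while energy > 1e-3:
        t += rng.expovariate(density)        # stochastic inter-impact time
        amp = energy * rng.uniform(0.05, 0.3)  # fraction of remaining energy
        energy -= amp                          # budget shrinks geometrically
        events.append((t, amp))
    return events
```

Because each impact amplitude is tied to the remaining energy, the event list naturally exhibits the decaying envelope of a crumpling burst; raising the initial `energy` yields longer, denser footsteps.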
Both models imposed the closed-loop interaction paradigm on the specific area
of interactive walking simulation. This paradigm is even more constraining in the
case of acoustic rendering, where only a few milliseconds are allowed for the system
to display the response signal following an action of the foot in contact with a
sensing floor, or wearing an instrumented shoe. Since then, further work has aimed
at refining the mappings linking foot actions to the synthesized sound. In particular,
an attempt to integrate some biomechanical parameters of locomotion, particularly
the ground reaction force, into a real-time footstep sound synthesizer was made by
Farnell in 2007 [27]. The result was a patch for Puredata, furthermore intended to
provide an audio engine for computer games in which walking is interactively
sonified.
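A mapping of this kind can be sketched as follows. The double-Gaussian stand-in for the characteristic two-peaked vertical ground reaction force of a walking step, and the two control parameters it drives, are hypothetical choices made for this sketch; they do not reproduce Farnell's actual Puredata patch.

```python
import math

def grf_envelope(t, duration=0.6):
    """Idealized vertical ground reaction force of one walking step,
    normalized to body weight = 1.  Two Gaussian lobes approximate the
    classic M-shaped curve, peaking near heel strike and toe-off."""
    if not 0.0 <= t <= duration:
        return 0.0
    x = t / duration                  # normalized stance phase, 0..1
    return (1.1 * math.exp(-((x - 0.25) / 0.12) ** 2)
            + 1.05 * math.exp(-((x - 0.75) / 0.12) ** 2))

def control_params(t):
    """Map the instantaneous force to two hypothetical synthesis
    controls: output gain (clipped to unity) and micro-impact rate,
    both assumed to grow with the applied force."""
    f = grf_envelope(t)
    gain = min(1.0, f)                # louder under higher load
    rate = 40.0 + 120.0 * f           # impacts per second (assumed range)
    return gain, rate
```

In a real-time setting the same mapping would be fed by force data streamed from a sensing floor or instrumented shoe rather than by the idealized envelope used here.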
12.2.2.2 Current Approaches to Walking Sound Synthesis
The synthesis of walking sounds has recently centered on multimodal, inter-
active contexts where users are engaged in a perception and action task. In fact,
given the aforementioned lack of robust mappings linking biomechanical data of
human walking to dynamic contact laws between the foot and grounds having
different properties, if the listener is not physically walking then the synthesis
model can be conveniently replaced by a good dataset of footstep sounds recorded
over a multiplicity of grounds, managed by an intelligent agent capable of understanding the