and b(t) for the key frames are set, the functions a(t) and b(t) can be interpolated throughout the time span of interest. Combined, these functions establish a function y(t0) that satisfies the key-frame and time warp constraints by warping the original function.
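For concreteness, this construction can be sketched in code. The sketch below is a minimal illustration, not taken from the text: it assumes a scale-and-offset style warp, y'(t) = a(t)*y(t) + b(t), with a(t) and b(t) linearly interpolated between the values set at the key frames; the function names and the choice of linear interpolation are assumptions.

import numpy as np

def warp_motion(t, y, key_times, key_a, key_b):
    """Warp an original motion signal y(t) with a scale a(t) and offset b(t).
    a(t) and b(t) are fixed at the key frames and linearly interpolated in
    between (any smoother interpolation could be substituted)."""
    a = np.interp(t, key_times, key_a)   # interpolated scale function a(t)
    b = np.interp(t, key_times, key_b)   # interpolated offset function b(t)
    return a * y + b                     # warped signal y'(t) = a(t)*y(t) + b(t)

# Example: adjust the curve around t = 0.5 while leaving the endpoints untouched.
t = np.linspace(0.0, 1.0, 101)
y = np.sin(2.0 * np.pi * t)                        # original motion signal
y_warped = warp_motion(t, y,
                       key_times=[0.0, 0.5, 1.0],  # key-frame times
                       key_a=[1.0, 1.2, 1.0],      # scale values at the keys
                       key_b=[0.0, 0.1, 0.0])      # offset values at the keys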
6.7.2 Retargeting the motion
What happens if the synthetic character doesn't match the dimensions (e.g., limb length) of the cap-
tured subject? Does the motion have to be recaptured with a new subject that does match the synthetic
figure better? Or does the synthetic figure have to be redesigned to match the dimensions of the cap-
tured subject? One solution is to map the motion onto the mismatched synthetic character and then
modify it to satisfy important constraints.
This is referred to as motion retargeting [2]. Important constraints include such things as avoiding foot penetration of the floor, avoiding self-penetration, not letting feet slide on the floor when walking, and so forth. A new motion is constructed, as close to the original motion as possible, while enforcing the constraints. Finding the new motion is formulated as a space-time, nonlinear constrained optimization problem. This technique is beyond the scope of this topic and the interested reader is encouraged to consult the work by Michael Gleicher (e.g., [2]).
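As a rough illustration of this formulation (an assumption for the sake of example, not Gleicher's actual method), the sketch below keeps a single degree of freedom as close as possible to the captured data in a least-squares sense while enforcing one important constraint, no penetration of the floor, at every frame. SciPy's general-purpose constrained optimizer stands in for a specialized space-time solver.

import numpy as np
from scipy.optimize import minimize

# Toy retargeting problem: one DOF (foot height) over n frames.
# After mapping onto a mismatched character, the trajectory dips below the floor.
n = 30
original = 0.05 * np.sin(np.linspace(0.0, 2.0 * np.pi, n)) - 0.02  # hypothetical data

def objective(x):
    # Stay as close to the original motion as possible (least squares over all frames).
    return np.sum((x - original) ** 2)

constraints = [
    # Important constraint enforced at every frame: no floor penetration (x[i] >= 0).
    {"type": "ineq", "fun": lambda x: x}
]

result = minimize(objective, x0=np.maximum(original, 0.0),
                  method="SLSQP", constraints=constraints)
retargeted = result.x   # new motion: close to the original, constraints satisfied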
6.7.3 Combining motions
Motions are usually captured in segments lasting a few minutes each. Often, a longer sequence of activity is needed in order to animate a specific action. The ability to assemble motion segments into longer actions makes motion capture a much more useful tool for the animator. The simplest, and least aesthetically pleasing, way to combine motion segments is to start and stop each segment in a neutral position, such as standing. Motion segments can then be easily combined, but with the annoying attribute of the figure returning to the neutral position between segments.
More natural transitions between segments are possible by blending the end of one segment into the beginning of the next. Such transitions may look awkward unless the portions to be blended are similar. Similarity can be defined as the sum of the differences over all DOFs over the time interval of the blend. Both motion signal processing and motion warping can be used to isolate, overlap, and then blend two motion segments together. Automatic techniques to identify the best subsegments for transitions are the subject of current research [4].
To animate a longer activity, motion segments can be strung together into an extended sequence. Recent research has produced techniques, such as motion graphs, that identify good transitions between segments in a motion database [3][5][7]. When the system is confronted with a request for a motion task, the sequence of motions that satisfies the task is constructed from the segments in the motion capture database. Preprocessing can identify good points of transition among the motion segments and can cluster similar motion segments to reduce search time. Allowing minor modifications to the motion segments, such as small changes in duration and distance traveled, can help improve the usefulness of specific segments. Finally, selecting among alternative solutions usually means evaluating a motion sequence based on obstacle avoidance, distance traveled, time to completion, and so forth.
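The motion-graph idea can be sketched as follows; the database contents, the transition threshold, and the task (covering a minimum distance) are all assumptions made for illustration. Preprocessing records an edge wherever the pose difference between the end of one segment and the start of another is small, and a simple search then strings segments together to satisfy the task.

# Hypothetical motion database: each segment stores the distance it travels and
# flattened start/end poses used to score transitions.
segments = {
    "walk1": {"dist": 1.2, "start": [0.0, 0.1], "end": [0.0, 0.2]},
    "walk2": {"dist": 1.5, "start": [0.0, 0.2], "end": [0.1, 0.1]},
    "turn":  {"dist": 0.4, "start": [0.1, 0.1], "end": [0.0, 0.1]},
}

def transition_cost(a, b):
    # Pose difference between the end of segment a and the start of segment b.
    return sum(abs(x - y) for x, y in zip(segments[a]["end"], segments[b]["start"]))

# Preprocessing: keep only good transitions (cost below a threshold).
THRESHOLD = 0.15
edges = {a: [b for b in segments if b != a and transition_cost(a, b) < THRESHOLD]
         for a in segments}

def find_sequence(start, min_distance, max_len=6):
    """Depth-first search for a segment sequence covering at least min_distance."""
    stack = [([start], segments[start]["dist"])]
    while stack:
        path, dist = stack.pop()
        if dist >= min_distance:
            return path
        if len(path) < max_len:
            for nxt in edges[path[-1]]:
                stack.append((path + [nxt], dist + segments[nxt]["dist"]))
    return None

print(find_sequence("walk1", min_distance=3.0))   # e.g., ['walk1', 'walk2', 'turn']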