movie production until James Cameron's stereoscopic film Avatar (2009). Weta
Digital, the company responsible for the effects in the film, wanted to make the
process more interactive, as well as to produce more realistic rendered results. The
goal was to provide the director with the ability to see in real time a visualization
of an actor's full performance as it would appear in the final CG character. In this
sense, the actors and director could work as if they were filming on a virtual stage.
One of the most notable technologies that reflected this philosophy was the
real-time facial capture system. Weta Digital switched from traditional (optical)
marker-based facial capture to an image-based facial capture. In this arrange-
ment, a helmet camera is mounted in front of an actor's mouth to capture the
displacements of green dots placed on the face. The resulting displacement data
is converted to a weighted summation of facial expression bases and fed into a
system that sculpts a 3D face model with the given facial expression data.⁵
(Note that this “image-based” motion capture is different from the techniques intro-
duced in Section 5.3.3: it does not recover a 3D model using stereo reconstruction.)
MotionBuilder, a commercially available software package, was used to sculpt
a 3D face model using the given 2D facial expression data. Once a time series
of 3D facial expression models is obtained in this way, the 3D facial animation
from arbitrary camera positions can be constructed. Dejan Momcilovic and Mark
Sagar at Weta Digital developed the technology to make this process work in
real time, which enabled the director to see the facial performances streamed
live while working with the actors. The same system and pipeline were used to
create the final facial animation that appears in the film, but with more accurate
tracking and data analysis.
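To make the expression representation concrete, the sketch below (Python with
NumPy; the toy mesh, basis names, and weights are hypothetical stand-ins, not
Weta's data) builds a face as a neutral pose plus a weighted summation of
expression bases, i.e., the linear blendshape model described above and in
footnote 5.

    import numpy as np

    # Toy stand-in data: a neutral "mesh" of 4 vertices and a few
    # expression bases (action units), each an (n_vertices, 3) array
    # of per-vertex offsets from the neutral pose.
    neutral = np.zeros((4, 3))
    bases = {
        "jaw_open":   np.array([[0.0, -1.0, 0.0]] * 4),
        "smile":      np.array([[1.0,  0.0, 0.0]] * 4),
        "brow_raise": np.array([[0.0,  1.0, 0.0]] * 4),
    }

    def sculpt_face(weights):
        """Return the mesh for a weighted summation of expression bases."""
        face = neutral.copy()
        for name, w in weights.items():
            face += w * bases[name]
        return face

    # In the capture pipeline, weights like these would be fitted per
    # frame from the tracked dot displacements; playing back a time
    # series of such weight vectors yields the facial animation.
    frame = sculpt_face({"jaw_open": 0.3, "smile": 0.7, "brow_raise": 0.1})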
In Avatar , the entire world was represented in CG, and James Cameron wanted
the photorealism to exceed that of ordinary live-action films as well as to introduce
a new level of believability. The ability to see the effects of global illumination
with each change of lighting, and to evaluate surface quality, was indispensable to
satisfying his demands. However, considering the complexity of the CG environments
in this work, this was very difficult to achieve with existing approaches. Weta
Digital decided that a highly efficient precomputation approach was the best
choice. Precomputed radiance transfer (PRT) was introduced for this purpose, and
a new lighting system was developed by Martin Hill, Nick McKenzie, and Jon Allitt.
The PRT in Avatar
5 The idea of using facial expression bases, commonly called action units (AUs), came
from the facial action coding system (FACS), originally developed in the 1970s. Mark Sagar
at Weta Digital adapted this idea to establish a new facial animation pipeline for the movie
King Kong (2005). Each AU corresponds to a part of the face and is associated with a few
facial muscles that determine the appearance of that part. Approximating the captured
numerical data with a linear combination of AUs produces more natural facial expressions.
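As a rough illustration of why precomputation makes interactive relighting
feasible, the sketch below (Python with NumPy; the coefficient counts and random
stand-in data are assumptions, not Weta's actual system) shows diffuse PRT
shading: the expensive light-transport integral is baked offline into per-vertex
spherical-harmonic transfer vectors, so that evaluating global illumination
under new lighting reduces to one dot product per vertex.

    import numpy as np

    N_COEFFS = 9        # 3 spherical-harmonic (SH) bands
    N_VERTICES = 100000

    # Offline precomputation (slow, done once per asset): per-vertex
    # transfer vectors baking visibility and the cosine term into SH
    # coefficients. Random stand-ins here; a real system integrates
    # incoming light over the hemisphere at each vertex.
    transfer = np.random.rand(N_VERTICES, N_COEFFS)

    # Runtime (fast, repeated whenever the lighting changes): project
    # the current environment lighting into the same SH basis.
    light_sh = np.random.rand(N_COEFFS)

    # Shading is now a single dot product per vertex, cheap enough to
    # show the effect of each lighting change interactively.
    radiance = transfer @ light_sh   # shape: (N_VERTICES,)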