bands going across the data. Systems that tie the projector timing into the camera's
frame acquisition timing help alleviate, but not entirely eliminate, this artifact.
RJR: Have multi-view stereo techniques made an impact in 3D data acquisition for
visual effects yet?
Chapman: We've been exploring the photogrammetric solution for quite some time
and think that it's trending toward the point that it's likely going to replace most of the
other processes very soon. However, in movies there are a lot of black sets, costumes,
and shiny things for aesthetic reasons—Batman's outfit or TRON's sets, for example.
If you try to use multi-view stereo to capture that you might end up with just the
edges of objects. We've attempted to aid the algorithms by projecting a pattern onto
the object so it's sort of a mix of structured light and multi-view stereo. We often have
to quickly improvise with materials on set, like powdering a sarcophagus in order to
read the reflective gold, taping lines onto a shiny helicopter, or even kicking up some
dirt onto a black waxed pickup truck.
If we need to do something even grander than we could handle with time-of-flight
LiDAR, we'd likely use a photo modeling technique, but today the results are often
simply not yet good enough to deliver as-is to a visual effects company. We have to
do a lot of work to make it presentable. I used one such tool — PhotoSynth — on
a project where we needed to model the Statue of Liberty. We didn't have the time
or money to do the job in person, and even if we did, getting permission to scan it
would have been very difficult for security reasons. We used PhotoSynth to get the
essential proportions of the statue, and discovered roughly where and how big things
needed to be, but we still needed a sculptor to go in and recreate the accurate likeness
underneath.
On a movie set, we definitely take as much video footage and supplemental pho-
tography as we can and catalog it for reference, since we never know when we'll scan
something and find that somebody moved it the next day or even destroyed it. We
have terabytes of photos that were once intended solely for reference but that now
might be reprocessed through multi-view stereo software to derive new information.
RJR: What techniques do you use for registering multiple scans?
Chapman: When we started doing LiDAR in the 1990s, we needed to place registration
spheres all over the set, similar to little magnetic pool balls. We would find the centers
of the spheres in the data and use them to do a three-point alignment, and then do a
best-fit registration automatically from that. It took a lot of time to climb around the
set and place these targets, which meant we could take fewer scans.
Since then, commercial software has evolved so that we can quickly pick three
points in one scan, pick roughly the same points in another scan, and the software
will automatically register them. Currently we use custom software to greatly reduce
the data for registration purposes. We usually scan one pass at the farthest possible
distance from the scene to act as a key alignment pass, to which all of the other scan
passes will be aligned. We often devote a single LiDAR scanner solely to perform this
“master” scan while we use other scanners to do the remaining multiple viewpoints.
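The three-point alignment Chapman describes amounts to estimating a rigid transform from a handful of point correspondences, which the software then refines with a best-fit pass. A minimal sketch of the correspondence step, using the standard Kabsch/Procrustes solution (this assumes NumPy and is an illustration, not the production software he mentions):

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t mapping src onto dst.

    Kabsch/Procrustes fit from point correspondences -- the kind of
    three-point alignment used to seed scan registration. src and dst
    are (N, 3) arrays of corresponding points, N >= 3, not collinear.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src.mean(axis=0)           # centroids
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: three picked points, rotated 90 degrees about z and shifted.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
assert np.allclose(src @ R.T + t, dst)
```

In practice this rough alignment from hand-picked points only seeds the registration; an iterative best-fit step (e.g. ICP) over the full, decimated point clouds then snaps the scans together.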