It's definitely a million times better than doing it by hand, but as far as the percentage of shots that track well right out of the gate, without any user interaction, it's surprisingly inconsistent. Some shots I think, oh, this'll be easy, like a day of my time with an hour of tracking, and it turns out I'll have to tweak the software for a day and a half just to track the shot. And some shots I think, wow, this is going to be really hard, and I send it through the autotracker and it's like click, I'm done. A lot of it has to do with the amount of parallax going on in the shot, how deep the distance is; sometimes if the shot is really shallow the software gets confused.
RJR: Once you get the camera track, are the 3D point locations that were simultaneously estimated useful?
Capton: Sometimes. We worked on the visual effects for the last season of Lost. There were a couple of shots in the final episode where a cliffside was supposed to be crumbling and falling away. Based on the camera track we had a good idea of the 3D locations of a sparse set of points on the cliffside. Those enabled us to build a simple 3D model, so we could create small rocks that appeared to bounce off the nooks and crannies of that cliff all the way down. We probably could have done that by eye, but having a sparse point cloud to base it on definitely helped, instead of having to guess how far away the cliff was from the camera. When the autotracker gives you a decent point cloud, it's almost like a poor man's LiDAR scan.
As another example, say we tracked a room and need to create an artificial semi-reflective object in the middle of it. We could use the 3D points from the autotracker to build really simple geometry to mimic the room, project the plate back onto the geometry, and use that to create reflections onto the object. It's kind of a cheat to create a sense of the environment when we don't have any other reference information about the scene.
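The trick Capton describes is commonly called camera projection or projective texturing: proxy geometry built from the sparse point cloud is textured by pushing its vertices through the solved camera, so the filmed plate appears "pinned" to the geometry and can be rendered into reflections. As a rough illustration only (none of this comes from the interview; the camera values and vertices below are made up), a minimal Python/NumPy sketch of the core projection step might look like this:

    import numpy as np

    # Hypothetical camera recovered by the matchmove solve.
    K = np.array([[1500.0,    0.0, 960.0],   # focal length and principal point (pixels)
                  [   0.0, 1500.0, 540.0],
                  [   0.0,    0.0,   1.0]])
    R = np.eye(3)                             # world-to-camera rotation
    t = np.zeros(3)                           # world-to-camera translation

    def project_to_plate(vertices, K, R, t, width=1920, height=1080):
        """Map 3D proxy-geometry vertices to UV coordinates in the plate."""
        cam = vertices @ R.T + t              # world -> camera coordinates
        pix = cam @ K.T                       # camera -> homogeneous pixel coordinates
        pix = pix[:, :2] / pix[:, 2:3]        # perspective divide
        return pix / np.array([width, height])  # normalize to [0, 1] texture space

    # Toy proxy geometry standing in for points from the autotracker.
    verts = np.array([[-2.0, 0.0, 5.0], [2.0, 0.0, 5.0], [0.0, 2.0, 6.0]])
    print(project_to_plate(verts, K, R, t))

Texturing the proxy with these UVs reproduces the plate from the solved camera's viewpoint, which is exactly what makes the cheap reflection environment plausible.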
6.8 NOTES AND EXTENSIONS
Burtch [79] summarized the history of photogrammetry from da Vinci up to the digital age. A brief historical overview also appears at the end of Triggs et al. [500]. In particular, Duane Brown, working for the U.S. Air Force in the 1950s, is widely credited with developing the theory of bundle adjustment [69] as well as many techniques for camera calibration. Kraus's textbook [256] covers modern digital photogrammetric techniques using image and range data.
As mentioned previously, the books by Hartley and Zisserman [188] and Faugeras and Luong [137] are excellent references on the theory of projective geometry as it relates to computer vision. Faugeras and Luong's book [137] is oriented more toward the deep theory of projective geometry, and has several useful tables of invariants at different levels of reconstruction and degrees of freedom in different problems. Hartley and Zisserman's book [188] offers more practical advice for approaching real-world multi-view geometry problems, including many easy-to-follow algorithms and notes on implementation. All four of these authors have written many landmark articles on issues related to structure from motion, summarized conveniently in these books.