… illustrates the shooting of a room in the "Palais du TAU" [26] in which we placed a virtual rabbit.
6 Conclusion
This work models the geometrical distortions between the shot scene and its multiscopically viewed avatar. These distortions depend on the geometrical parameters of both the shooting and the rendering devices or systems. The model enables quantitative, objective assessments of the geometrical reliability of any multiscopic shooting and rendering pair.
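As a concrete illustration, the sketch below evaluates a simplified, symmetric two-view instance of such a distortion model: it predicts the depth at which a scene point is perceived, given the shooting parameters (camera baseline, focal length, sensor width, convergence distance) and the rendering parameters (screen width, viewing distance, interocular distance). The function names, parameters, and the restriction to a centered stereoscopic geometry are our own simplifications, not the chapter's full multi-observer formulation.

    # Simplified symmetric stereo model (our illustration, not the
    # chapter's full multiscopic formulation): parallel cameras with
    # shifted sensors converging at distance z_conv.

    def screen_disparity(z, baseline, focal, sensor_w, z_conv, screen_w):
        """On-screen disparity (m) of a point at depth z (m) from the cameras.

        Points at the convergence distance z_conv have zero disparity;
        points farther away yield positive (uncrossed) disparity.
        """
        magnification = screen_w / sensor_w      # sensor-to-screen scaling
        return magnification * focal * baseline * (1.0 / z_conv - 1.0 / z)

    def perceived_depth(disparity, view_dist, eye_sep=0.065):
        """Depth (m from the viewer) at which the point is perceived.

        From triangle similarity between the eyes and the two homologous
        points on the screen: z_p = e * D / (e - d).
        """
        if disparity >= eye_sep:                 # eyes would diverge
            return float('inf')
        return eye_sep * view_dist / (eye_sep - disparity)

    # Example: 65 mm baseline, 35 mm lens, 36 mm sensor, converged at 3 m,
    # rendered on a 1 m wide screen viewed from 2 m.
    for z in (2.0, 3.0, 5.0, 10.0):
        d = screen_disparity(z, 0.065, 0.035, 0.036, 3.0, 1.0)
        print(f"scene depth {z:5.1f} m -> perceived {perceived_depth(d, 2.0):6.2f} m")

In this simplified setting, a point at the convergence distance is perceived exactly on the screen plane; depths on either side are compressed or stretched depending on how the shooting and rendering parameters interact, which is precisely the kind of distortion the model quantifies.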
The formulas expressing the distortion parameters from the geometrical characteristics of the shooting and rendering devices have subsequently been inverted in order to express the shooting layout that yields a chosen distortion scheme on a chosen rendering device. This design scheme ensures a priori that the 3D experience will meet the chosen requirements at each expected observer position. Such a scheme may prove highly valuable for applications needing reliable, accurate 3D perception or specific distortion effects.
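Under the same simplified two-view model as above, this inversion reduces to solving two linear relations: requiring that scene depths z_near and z_far be perceived at two chosen depths determines both the camera baseline and the convergence distance. The sketch below is again a hedged illustration of this design step under our simplifying assumptions, not the chapter's general derivation.

    def design_shooting(z_near, z_far, zp_near, zp_far,
                        focal, sensor_w, screen_w, view_dist, eye_sep=0.065):
        """Solve for (baseline, convergence distance) so that scene depths
        z_near/z_far are perceived at zp_near/zp_far on the given display.

        Combines d = e * (1 - D / z_p), the disparity required for a
        perceived depth z_p, with d = M * f * b * (1/z_conv - 1/z),
        the disparity produced by the rig.
        """
        magnification = screen_w / sensor_w
        d_near = eye_sep * (1.0 - view_dist / zp_near)   # required disparities
        d_far = eye_sep * (1.0 - view_dist / zp_far)
        baseline = (d_far - d_near) / (magnification * focal
                                       * (1.0 / z_near - 1.0 / z_far))
        z_conv = 1.0 / (d_near / (magnification * focal * baseline)
                        + 1.0 / z_near)
        return baseline, z_conv

    # Example: map scene depths 2..10 m onto perceived depths 1.5..3 m
    # for the display of the previous sketch (1 m wide, viewed from 2 m).
    b, zc = design_shooting(2.0, 10.0, 1.5, 3.0, 0.035, 0.036, 1.0, 2.0)
    print(f"baseline = {b * 1000:.1f} mm, convergence = {zc:.2f} m")

Feeding the resulting baseline and convergence distance back into the forward sketch reproduces the requested perceived depths, which is the round-trip guarantee the design scheme provides at each expected observer position.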
From this design scheme we have derived several shooting technologies ensuring the desired depth effect on a predetermined rendering device. The proposed technologies cover the full range of multi-viewpoint scene shooting needs (real/virtual, still/animated, photo/video).
This work opens several perspectives. We are developing a configurable camera with flexible geometric parameters, able to adapt to a chosen rendering device and a desired depth effect. This would let us test different depth distortions for the same scene and, moreover, produce high-quality 3D content for several rendering devices from a single camera box.
We also need to conduct perceptual experiments to validate that the perceived geometry conforms to our expectations. This demanding validation will require a sizable panel of viewers, as well as the definition and set-up of a perception test that precisely quantifies the distances between characteristic points of the perceived scene.
Acknowledgements. We would like to thank the ANRT, the French National Agency for Research and Technology, for its financial support. The work reported in this chapter was supported as part of the CamRelief project by the French National Research Agency. This project is a collaboration between the University of Reims Champagne-Ardenne and TeleRelief. We would like to thank Michel Frichet, Florence Debons, and the staff for their contribution to the project.
References
1. Sanders, W.R., McAllister, D.F.: Producing anaglyphs from synthetic images. In: Proc.
SPIE Stereoscopic Displays and Virtual Reality Systems X, Santa Clara, CA, USA
(2003)