DepthMovie {
  field SFVec2f       fieldOfView   0.785398 0.785398
  field SFFloat       nearPlane     10
  field SFFloat       farPlane      100
  field SFBool        orthographic  TRUE
  field SFTextureNode texture       NULL
}
The upper four fields of the DepthMovie node are the same as those of the
DepthImage node [21] and specify the camera parameters. The texture field stores a
depth image sequence as geometry through a MovieTexture node, which usually carries
2D video. The corresponding color image sequence is then stored in the texture field
of the Appearance node. In this way, these nodes describe a 3D surface. The following
example describes video-plus-depth using the DepthMovie node; here, "colorVideo.h264"
and "depthVideo.h264" are the compressed versions of the color and depth image
sequences, respectively.
Shape {
  appearance Appearance {
    texture MovieTexture { url "colorVideo.h264" }
  }
  geometry DepthMovie {
    texture MovieTexture { url "depthVideo.h264" }
  }
}
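To make concrete how the camera fields and the depth sequence combine into a 3D surface, the sketch below back-projects one depth frame into camera-space 3D points. It is a minimal illustration rather than part of the MPEG-4 reference software: the linear mapping of depth values between nearPlane and farPlane, the restriction to the perspective (non-orthographic) case, and all function and variable names are assumptions made for this example.

import numpy as np

def backproject_depth(depth, field_of_view, near_plane, far_plane):
    """Back-project a normalized depth frame (values in [0, 1]) to 3D points.

    A minimal sketch assuming a perspective camera: depth 0 maps to near_plane,
    depth 1 maps to far_plane, and field_of_view holds the horizontal and
    vertical view angles in radians, as in the DepthMovie node above.
    """
    h, w = depth.shape
    z = near_plane + depth * (far_plane - near_plane)

    # Normalized image coordinates in [-1, 1], taken at pixel centers.
    u = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    v = 1.0 - (np.arange(h) + 0.5) / h * 2.0
    u, v = np.meshgrid(u, v)

    # Scale by depth along the viewing ray to obtain camera-space x and y.
    x = u * np.tan(field_of_view[0] / 2.0) * z
    y = v * np.tan(field_of_view[1] / 2.0) * z

    return np.stack([x, y, z], axis=-1)  # (h, w, 3) grid of 3D points

# Example: one 480x640 depth frame with the field values of the node above.
points = backproject_depth(
    np.random.rand(480, 640),
    field_of_view=(0.785398, 0.785398),
    near_plane=10.0,
    far_plane=100.0,
)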
In general, computer graphics models are represented by a mesh structure and
described using predefined nodes in MPEG-4 BIFS. The MPEG-4 BIFS data, including
the scene description information and the computer graphics model data, are coded by
the BIFS encoder provided by the MPEG-4 system. Thereafter, the compressed video-
plus-depth and MPEG-4 BIFS bitstreams are multiplexed into an MP4 file, which is
designed to contain media data in the MPEG-4 representation. The MP4 file can be
played from a local hard disk or streamed over existing IP networks, so users can
enjoy 3D video content in a video-on-demand setting.
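As one possible realization of this multiplexing step, the sketch below drives GPAC's MP4Box tool from Python to pack a textual BIFS scene and the two compressed video streams into a single MP4 file. The choice of MP4Box and the file names (scene.bt, scene.mp4) are assumptions made for illustration; the text does not name a particular multiplexer.

import subprocess

# Encode the textual BIFS scene (scene.bt, hypothetical file name) into an MP4
# file containing a BIFS scene-description track.
subprocess.run(["MP4Box", "-mp4", "scene.bt"], check=True)

# Add the compressed color and depth video bitstreams as further tracks of the
# same MP4 file, which can then be played locally or streamed over IP networks.
subprocess.run(
    ["MP4Box", "-add", "colorVideo.h264",
               "-add", "depthVideo.h264",
               "scene.mp4"],
    check=True,
)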
5 Experimental Analysis
5.1 Evaluation of Depth Accuracy
For this experiment, as shown in Fig. 1, we set up a hybrid camera system with
two HD cameras (Canon XL-H1) as a stereoscopic camera and one Z-Cam as a