6.3.4 Recoloring
All of the previous refinement steps involve changing the depth map Z_v, which is normally (due to the plane sweep) jointly linked to the image color in I_v. To restore this link, the refined depth map is used to recolor the interpolated image with the updated depth values. As opposed to other, more geometrically correct approaches [17], we thereby significantly enhance the subjective visual quality. The system is currently able to recolor the image using two different approaches, each with its particular effect on the resulting quality.
6.3.4.1 N-Camera Recoloring
The simplest and fastest recoloring solution is similar to the plane sweeping mechanism because it recomputes each pixel of the image I_v with an updated T_j matrix (Eq. 6.3) according to the refined depth information. The interpolated pixel color is then again obtained by averaging all N cameras.
This approach generates very smooth transitions between the input images in the synthesized result, at the expense of a loss of detail (see Fig. 6.9e).
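As a rough illustration of this averaging step, the sketch below recolors one virtual-view image from its refined depth. The helper names (backproject_fn, project_fns, cam_images) are assumptions standing in for the backprojection and the updated T_j projections of Eq. 6.3, not part of the original system.

```python
# Hypothetical sketch of the N-camera recoloring step (helper names and data
# layout are assumptions, not the chapter's code): each pixel of the
# interpolated image is backprojected with its refined depth, reprojected into
# every input camera, and the sampled colors are averaged.
import numpy as np

def recolor_average(refined_depth, cam_images, project_fns, backproject_fn):
    """refined_depth: (H, W) refined depth map Z_v.
    cam_images: list of N input images, each (Hc, Wc, 3).
    project_fns[j](world_pt) -> (uj, vj, visible), a stand-in for the updated
    T_j projection of Eq. 6.3.
    backproject_fn(u, v, z) -> world-space point of the virtual-view pixel."""
    H, W = refined_depth.shape
    out = np.zeros((H, W, 3), dtype=np.float32)
    for v in range(H):
        for u in range(W):
            world = backproject_fn(u, v, refined_depth[v, u])
            samples = []
            for img, project in zip(cam_images, project_fns):
                uj, vj, visible = project(world)
                if visible:                      # camera j actually sees the point
                    samples.append(img[vj, uj])
            if samples:                          # average over all contributing cameras
                out[v, u] = np.mean(samples, axis=0)
    return out
```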
6.3.4.2 Confident Camera Recoloring
For each pixel f_v of the image I_v, the second recoloring solution determines which input camera C_i is closest in angle to the virtual camera C_v, and stores the camera index in a color map H_v according to Eq. 6.8, where h_i denotes the vector from f to C_i, and h_v the vector from f to C_v:

$$
H_v = \operatorname*{arg\,max}_{i \in \{1 \ldots N\}} \cos(h_v, h_i) \tag{6.8}
$$
We assume C_i to represent the optical image center of the camera, and f is the image point f_v backprojected to world space according to Eq. 6.3, again with an updated T_j matrix. This recoloring scheme is illustrated in Fig. 6.9f, together with the resulting color map H_v.
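A compact way to compute this color map is sketched below; it is only an illustration of Eq. 6.8 under assumed array shapes, and the function and argument names are not taken from the chapter. For every backprojected pixel f it selects the camera whose direction h_i is closest in angle to h_v.

```python
# Hedged sketch of the color-map selection of Eq. 6.8 (array layout assumed):
# pick, per pixel, the input camera closest in angle to the virtual camera.
import numpy as np

def confident_camera_map(world_points, cam_centers, virtual_center):
    """world_points: (H, W, 3) backprojected pixels f (Eq. 6.3, updated T_j).
    cam_centers: (N, 3) optical centers C_i.
    virtual_center: (3,) optical center C_v.
    Returns H_v, an (H, W) map holding the index of the selected camera."""
    h_v = virtual_center - world_points                      # vectors f -> C_v
    h_v = h_v / np.linalg.norm(h_v, axis=-1, keepdims=True)
    best_cos = np.full(world_points.shape[:2], -np.inf)
    H_v = np.zeros(world_points.shape[:2], dtype=np.int32)
    for i, center in enumerate(cam_centers):
        h_i = center - world_points                          # vectors f -> C_i
        h_i = h_i / np.linalg.norm(h_i, axis=-1, keepdims=True)
        cos_angle = np.sum(h_v * h_i, axis=-1)               # cos of angle(h_v, h_i)
        better = cos_angle > best_cos                        # arg max over i in {1..N}
        H_v[better] = i
        best_cos[better] = cos_angle[better]
    return H_v
```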
Selecting the color from a single camera, as defined by H_v, ensures a sharply detailed synthesized image. However, the quality is sensitive to deviating colors between the input cameras caused by variations in illumination and color calibration (see Fig. 6.9f).
6.3.5 Concurrent Eye Tracking
To restore eye contact between the video chat participants, the virtual camera C_v needs to be correctly positioned. Eye tracking can be performed robustly and more efficiently on the CPU, and is therefore executed concurrently with the main processing of the system.
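The chapter does not list the tracking code; the snippet below is only a minimal sketch of how such a CPU eye tracker could run concurrently with the main synthesis loop, assuming a hypothetical track_eyes(frame) routine that returns the tracked eye position used to place C_v.

```python
# Minimal concurrency sketch (not the authors' implementation): the CPU eye
# tracker runs in its own thread and publishes the latest eye position, which
# the main loop reads to position the virtual camera C_v.
import queue
import threading

def eye_tracker_worker(frames, eye_positions, track_eyes):
    """frames/eye_positions: single-slot queues; track_eyes(frame) -> (x, y, z)
    is a placeholder for the actual CPU tracker."""
    while True:
        frame = frames.get()
        if frame is None:                    # sentinel value shuts the worker down
            break
        eye_positions.put(track_eyes(frame))

# Usage (track_eyes is hypothetical and must be supplied by the application):
# frames = queue.Queue(maxsize=1)
# eye_positions = queue.Queue(maxsize=1)
# threading.Thread(target=eye_tracker_worker,
#                  args=(frames, eye_positions, track_eyes), daemon=True).start()
```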