1. For the n-th view, the differences between consecutive frames are obtained by calculating the pixel-wise absolute differences between the depth, D, and texture, C, frames at time instants t−1 and t:

   ΔC(i, j) = |C(i, j, t−1, n) − C(i, j, t, n)| ,
   ΔD(i, j) = |D(i, j, t−1, n) − D(i, j, t, n)| .   (14)
   Then, using these difference frames, the global segmentation map is calculated as follows: a pixel at location (i, j) is assigned to the background if it satisfies the condition given by (15), and to the foreground otherwise:

   ΔC + λ ΔD < T_CD ,   (15)

   where λ (typically 0.5) and the threshold T_CD are constants.
2. After the global segmentation map is obtained, the average background depth values of the depth maps D(i, j, t−1, n) and D(i, j, t, n) are calculated by simply averaging the depth values of the background pixels.
3. Finally, the depth values of the pixels at time instants t−1 and t, i.e. D(i, j, t−1, n) and D(i, j, t, n), are compared to the average background depth values calculated in Step 2, and the foreground pixels are determined as those whose depth values differ from the average by more than a certain threshold. A compact sketch of these three steps is given below.
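As a rough illustration, the following is a minimal NumPy sketch of the three steps above, assuming the texture frames and depth maps are 2-D arrays for a single view at times t−1 and t. The function name segment_foreground and the numeric values of T_CD and the depth threshold are placeholder assumptions, not values from the text; only λ = 0.5 follows the typical value quoted above.

import numpy as np

def segment_foreground(C_prev, C_curr, D_prev, D_curr,
                       lam=0.5, T_CD=20.0, T_depth=5.0):
    # Step 1: pixel-wise absolute differences (Eq. 14)
    dC = np.abs(C_prev.astype(np.float64) - C_curr.astype(np.float64))
    dD = np.abs(D_prev.astype(np.float64) - D_curr.astype(np.float64))
    # Global segmentation map (Eq. 15): True marks a background pixel
    background = (dC + lam * dD) < T_CD

    # Step 2: average depth of the background pixels in each depth map
    avg_bg_prev = D_prev[background].mean()
    avg_bg_curr = D_curr[background].mean()

    # Step 3: a pixel is foreground if its depth differs from the
    # average background depth by more than a threshold
    fg_prev = np.abs(D_prev - avg_bg_prev) > T_depth
    fg_curr = np.abs(D_curr - avg_bg_curr) > T_depth
    return background, fg_prev, fg_curr

In a multi-view setting, such a function would simply be applied independently to every view n for each pair of consecutive frames.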
Fig. 10 illustrates the obtained global, previous and current segmentation maps for the Akko-Kayo sequence. It should be noted that most of the available multi-view sequences have static camera arrays, which lets this simple algorithm yield satisfactory results. If the camera is not stationary, more complex segmentation algorithms such as [39] should be utilized. The static/dynamic camera distinction can simply be made using the ratio of pixels in motion. Furthermore, when there are multiple objects in the scene, connected component labeling is crucial for segmentation.
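The last two points lend themselves to equally simple checks. The sketch below assumes boolean NumPy masks as input and uses scipy.ndimage.label for the connected-component step; the 0.5 motion-ratio cut-off and both function names are illustrative assumptions rather than values from the text.

import numpy as np
from scipy import ndimage

def camera_is_static(motion_mask, ratio_threshold=0.5):
    # Fraction of pixels flagged as moving; the 0.5 cut-off is an assumption
    return float(np.mean(motion_mask)) <= ratio_threshold

def label_foreground_objects(foreground_mask):
    # Connected-component labeling: every separate object in the
    # foreground mask receives its own integer label
    labels, num_objects = ndimage.label(foreground_mask)
    return labels, num_objects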
Fig. 10 (a) Global, (b) previous and (c) current segmentation maps for the Akko-Kayo sequence
 