object segmentation algorithm to automatically separate the OOIs from the background and identify individual objects in the initial frame of a multiview video. The segmentation algorithm is developed for two different scenarios: spatially separated objects and overlapped objects.
5.2.3.1 Automatic Object Extraction
Semiautomatic object extraction algorithms, which require user-supplied priors such as brush strokes [2, 8] and bounding boxes [25, 37], are not preferable for MVI/V because of the large quantities of data that would require user intervention; thus, a fully automatic algorithm is in high demand. Automatic object extraction remains a challenging problem, especially when no prior information (e.g., a background image) is provided and no semantic cues can be extracted from the scene.
In our recent work [47], to automatically extract OOI patches for initialization of the segmentation process, a saliency model is employed to compute a saliency map for the key view (the middle view of the 5 views) of the initial frame, utilizing the higher-level features of depth and motion estimated off-line. These two features are selected for the following reasons: human attention is generally drawn more to moving objects than to static ones in a video, and an OOI tends to have similar depth values in the 3D scene, thus forming a uniform distribution in the depth field. By thresholding, morphological operations, and connected-component analysis on the saliency map, initial OOIs can be automatically extracted to trigger the subsequent segmentation process. Figure 5.3 shows the saliency maps and initial OOIs in two key-view images, which are used to model the foreground and background distributions.
Fig. 5.3 Automatic object extraction. Left: input image; middle: saliency map; right: initial OOIs. Top: Reading sequence; bottom: Calling sequence
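The thresholding, morphological-opening, and connected-component steps used to obtain the initial OOIs can be sketched as follows. This is a minimal illustration, not the authors' implementation: the saliency map is assumed to be already computed and normalized to [0, 1], and the threshold and minimum-area values are arbitrary placeholder parameters.

```python
import numpy as np
from scipy import ndimage

def extract_initial_oois(saliency_map, threshold=0.5, min_area=50):
    """Extract initial object-of-interest (OOI) masks from a saliency map
    via thresholding, morphological opening, and connected-component
    analysis. `threshold` and `min_area` are illustrative values."""
    # 1. Threshold the normalized saliency map to a binary mask.
    binary = saliency_map > threshold
    # 2. Morphological opening removes small spurious responses.
    binary = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    # 3. Connected-component analysis separates individual objects.
    labels, num = ndimage.label(binary)
    masks = []
    for i in range(1, num + 1):
        mask = labels == i
        if mask.sum() >= min_area:  # discard tiny components
            masks.append(mask)
    return masks

# Synthetic saliency map with two spatially separated salient blobs.
sal = np.zeros((100, 100))
sal[10:40, 10:40] = 0.9
sal[60:90, 60:90] = 0.8
oois = extract_initial_oois(sal)
print(len(oois))  # -> 2
```

Each returned mask can then serve as a seed region to model the foreground distribution for the subsequent segmentation step, with the remaining pixels modeling the background.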