tracking is to track the segmented objects over an image sequence, although the
extension of the rigidity constraint to multiple frames is nontrivial. Motion segmen-
tation aims at the motion layers of a scene rather than the moving objects. For ex-
ample, if a moving object contains multiple motions at a moment, it may be divided
into several motion layers. When these motion layers share the same motion, they
could be merged into a single layer. Hence, motion segmentation usually uses the
information from a few successive frames. In contrast, object tracking focuses on a
moving object in a scene, and it utilizes information from the whole image sequence. Motion
segmentation thus serves as a fundamental module in motion analysis and tracking.
The work in [14] presented a subspace segmentation method to estimate the motion models
of the motion layers based on two successive frames. Building on this subspace segmentation
method, this paper further addresses two other basic problems of motion
segmentation, namely the detection of motion layer boundaries and depth ordering based
on two successive frames. The basic idea is to refine a global segmentation to solve
these two problems. We first describe this subspace segmentation approach for motion
model estimation. We then combine it with intensity edge information
in a post-processing procedure, which refines the layer boundaries and infers the
layer order between two successive frames. These two procedures form a complete
algorithm for motion segmentation. Our specific contributions in this paper are:
1) the Polysegment algorithm (a special case of the generalized PCA [12]) is employed
to detect the layer boundaries in our post-processing procedure, and 2) cues
from the intensity edges of the images are utilized in the detection of the layer
boundaries and in depth ordering.
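For context, the motion model of a layer is commonly a 2D affine map of pixel coordinates between two successive frames. The sketch below shows a generic least-squares affine fit to point correspondences within a single layer; it is an illustrative baseline, not the subspace segmentation method of [14], and the function name and synthetic data are our own:

```python
import numpy as np

def fit_affine_motion(pts1, pts2):
    """Least-squares fit of a 2D affine motion model x' = A x + t.

    pts1, pts2: (N, 2) arrays of corresponding pixel coordinates in two
    successive frames (N >= 3, non-collinear). Returns A (2x2) and t (2,).
    """
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    # Design matrix [x y 1] per point; solve M @ P = pts2 for P (3x2).
    M = np.hstack([pts1, np.ones((len(pts1), 1))])
    P, *_ = np.linalg.lstsq(M, pts2, rcond=None)
    A = P[:2].T   # linear part
    t = P[2]      # translation part
    return A, t

# Synthetic check: points moved by a known affine transform are recovered.
rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 100, size=(20, 2))
A_true = np.array([[1.02, 0.01], [-0.01, 0.98]])
t_true = np.array([3.0, -1.5])
pts2 = pts1 @ A_true.T + t_true
A, t = fit_affine_motion(pts1, pts2)
```

In practice the correspondences come from optical flow or feature tracking, and robust variants (e.g. RANSAC) are used to cope with points assigned to the wrong layer.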
1.1 Previous Work
Although motion segmentation has long been an active area of research, many
issues remain open in computer vision, such as the estimation of multiple motion
models [1,2], layered motion descriptions [3,4], occlusion detection and depth
ordering [5-7].
Most popular approaches to motion segmentation revolve around parsing the
optical flow field in an image sequence. Because of the well-known aperture prob-
lem, the motion vector from optical flow computation can only be determined in
the direction of the local intensity gradient. To obtain a complete optical
flow field, the motion is therefore assumed to be locally smooth. However, depth
discontinuities and multiple independently moving objects usually produce
discontinuities in the optical flow. The usual approach is to parameterize the optical
flow field and fit a separate model (e.g. a 2D affine model) to each moving object,
such as the layered representation of the motion field [3]. The challenges of the
optical flow-based techniques involve identifying motion layers (or pixel group-
ing), detecting layer boundaries, and depth ordering. Previous research can mostly
be grouped into two categories. The first category is to determine all of the motion
models simultaneously. This can be achieved by parameterizing the motions and
the segmentation, and using sophisticated statistical techniques to infer the most
probable solution. For example, Smith et al. in [6] presented a layered motion
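The aperture problem mentioned above can be made concrete with the brightness-constancy constraint I_x u + I_y v + I_t = 0: a single constraint determines only the flow component along the local intensity gradient (the "normal flow"), leaving the component tangential to the gradient unobservable. A minimal sketch with hypothetical gradient values (the function name is our own):

```python
import numpy as np

def normal_flow(Ix, Iy, It):
    """Flow component recoverable from one brightness-constancy
    constraint Ix*u + Iy*v + It = 0: the normal flow along the gradient."""
    g = np.array([Ix, Iy], dtype=float)
    n2 = g @ g
    if n2 == 0.0:
        raise ValueError("zero gradient: no flow information at this pixel")
    # All flows (u, v) satisfying the constraint share the same projection
    # onto the gradient direction; the perpendicular component is free
    # (the aperture problem).
    return (-It / n2) * g

# Gradient purely in x: only the horizontal component u = -It/Ix is fixed.
v = normal_flow(1.0, 0.0, -2.0)
```

Recovering the full flow vector requires combining constraints from a neighborhood, which is exactly where the local-smoothness assumption enters.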