[14]. However, the resulting motion model estimation can only yield a coarse motion segmentation, i.e., the boundaries of the motion layers are very blurry. Our basic idea is to further refine the boundaries of the resulting motion layers by a post-processing procedure. Before introducing this post-processing procedure, we first briefly review the motion model estimation approach of [14]. The two algorithms it uses, the GPCA-PDA algorithm and the Polysegment algorithm, can be found in [12]. (We also briefly introduce these two algorithms in the Appendix.)
The first problem in motion segmentation is to obtain the layered motion models corresponding to the independently moving regions in a scene (i.e., layer segmentation). We adopt the algebraic approach presented in [14], which operates on a known optical flow field. Its distinct advantage over other approaches is that it can determine all motion layers simultaneously.
Given $N$ measurements of the optical flow $\{(u_i, v_i)\}_{i=1}^{N}$ at the $N$ pixels $\{(x_1, x_2)_i\}_{i=1}^{N}$, we can describe them through an affine motion as follows,
$$a_{11} x_1 + a_{12} x_2 + a_{13} - u = 0,$$
$$a_{21} x_1 + a_{22} x_2 + a_{23} - v = 0.$$
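To make these constraints concrete, the following minimal sketch (our own illustration; the matrix A holds arbitrary assumed affine parameters, not values from [14]) generates synthetic optical flow from a single affine model and checks that every measurement satisfies both equations.

```python
import numpy as np

# Our own illustrative affine parameters (not values from [14]):
# u = a11*x1 + a12*x2 + a13,  v = a21*x1 + a22*x2 + a23
A = np.array([[0.02, -0.01, 1.5],     # a11, a12, a13
              [0.005, 0.03, -0.7]])   # a21, a22, a23

N = 100
rng = np.random.default_rng(0)
x = rng.uniform(0, 320, size=(N, 2))               # pixel coordinates (x1, x2)
uv = np.hstack([x, np.ones((N, 1))]) @ A.T         # affine flow (u, v) per pixel

# Both constraints vanish at every measurement:
res_u = A[0, 0]*x[:, 0] + A[0, 1]*x[:, 1] + A[0, 2] - uv[:, 0]
res_v = A[1, 0]*x[:, 0] + A[1, 1]*x[:, 1] + A[1, 2] - uv[:, 1]
print(np.allclose(res_u, 0), np.allclose(res_v, 0))  # True True
```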
In terms of the hyperplane representation in the Appendix, the solution to the multiple independent affine models can be rephrased as follows. Let $x = (x_1, x_2, 1, u, v)^T \in \mathbb{R}^5$ and let the hyperplane $S$ be defined by the basis $b_1 = (a_{11}, a_{12}, a_{13}, a_{14})^T$ and $b_2 = (a_{21}, a_{22}, a_{23}, a_{24})^T$. We need to segment a mixture of hyperplanes of dimension $d = 3$ in $\mathbb{R}^5$, which are expressed as
$$S_i = \{x \in \mathbb{R}^5 : (b_1, b_2)_i^T\, x = 0\}.$$
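As a sketch of this representation (again with assumed affine parameters, and not the algorithm of [14]), the embedded measurement vectors $(x_1, x_2, 1, u, v)$ of a single affine motion span only a three-dimensional subspace of $\mathbb{R}^5$, so a basis $(b_1, b_2)$ of its orthogonal complement can be recovered from the two smallest right singular vectors of the stacked data matrix:

```python
import numpy as np

# Sketch only: recover the basis (b1, b2) of one affine motion from the
# embedded vectors x = (x1, x2, 1, u, v) in R^5.  Affine parameters are
# arbitrary assumptions, not taken from [14].
rng = np.random.default_rng(0)
A = np.array([[0.02, -0.01, 1.5],
              [0.005, 0.03, -0.7]])
pts = rng.uniform(0, 320, size=(200, 2))
uv = np.hstack([pts, np.ones((200, 1))]) @ A.T
X = np.hstack([pts, np.ones((200, 1)), uv])        # rows are (x1, x2, 1, u, v)

# The rows span a 3-dimensional subspace of R^5 whose orthogonal complement
# is spanned (up to scale) by b1 = (a11, a12, a13, -1, 0) and
# b2 = (a21, a22, a23, 0, -1); the two smallest right singular vectors of X
# therefore give an estimate of the basis (b1, b2).
_, s, Vt = np.linalg.svd(X)
B = Vt[-2:]                                        # estimated basis (b1, b2)
print(np.round(s, 4))                              # last two values are ~0
print(np.allclose(X @ B.T, 0, atol=1e-6))          # True: b1, b2 annihilate X
```

With several motions present, the measurements fill a mixture of such subspaces, which is exactly the segmentation problem stated above.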
The original optical flow equations accomplish the projection from $x \in \mathbb{R}^5$ to two individual subspaces of $\mathbb{R}^4$ in a natural way, i.e., each new hyperplane in $\mathbb{R}^4$ can be expressed as
$$(a_{11}, a_{12}, a_{13}, a_{14})\,(x_1, x_2, x_3, x_4)^T = 0,$$
where $(x_1, x_2, x_3, x_4)$ stands for $(x_1, x_2, 1, u)$ in the first equation and $(x_1, x_2, 1, v)$ in the second.
Applying the scheme of Eqs. (A1-A4) in the Appendix then yields the desired basis $B^{(i)} = (b_1, b_2)_i$ for each hyperplane $S_i$ in $\mathbb{R}^5$.
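The scheme of Eqs. (A1-A4) itself is left to the Appendix. As a rough illustration of the idea (and emphatically not the GPCA-PDA or Polysegment code of [12]), the sketch below segments two synthetic affine motions from the $u$-constraint alone, i.e. two hyperplanes in $\mathbb{R}^4$: it fits the product polynomial that vanishes on all embedded measurements, takes its gradient at each point as that point's hyperplane normal, and clusters the normals. All numerical values are arbitrary assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

# Sketch of the idea only (not the GPCA-PDA / Polysegment code of [12]):
# segment two affine motions using the u-constraint alone, i.e. two
# hyperplanes b^T y = 0 with y = (x1, x2, 1, u) in R^4.
rng = np.random.default_rng(1)
A1 = np.array([0.02, -0.01, 1.5])      # assumed u-row of motion 1
A2 = np.array([-0.03, 0.015, -2.0])    # assumed u-row of motion 2

pts = rng.uniform(0, 320, size=(400, 2))
labels_true = rng.integers(0, 2, size=400)
u1 = pts @ A1[:2] + A1[2]
u2 = pts @ A2[:2] + A2[2]
u = np.where(labels_true == 0, u1, u2)
Y = np.column_stack([pts, np.ones(400), u])        # embedded data y in R^4

# Degree-2 Veronese map: all monomials y_i*y_j with i <= j (10 of them).
idx = list(combinations_with_replacement(range(4), 2))
V = np.column_stack([Y[:, i] * Y[:, j] for i, j in idx])

# The product polynomial p(y) = (b1^T y)(b2^T y) vanishes on every sample,
# so its coefficients form the smallest right singular vector of V.
c = np.linalg.svd(V)[2][-1]

# Write p(y) = y^T C y (C symmetric); then grad p(y) = 2*C*y, which at a
# sample point is proportional to the normal of that point's own hyperplane.
C = np.zeros((4, 4))
for coeff, (i, j) in zip(c, idx):
    C[i, j] += coeff / 2
    C[j, i] += coeff / 2
normals = Y @ (2 * C)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Group the (sign-ambiguous) normals by absolute cosine similarity.
labels = (np.abs(normals @ normals[0]) < 0.99).astype(int)
agreement = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
print(f"segmentation agreement: {agreement:.2f}")   # ~1.00 on clean data
```

Because every sample contributes to one polynomial fit, the normals of all motions are obtained at once, which is the sense in which the algebraic approach determines all motion layers simultaneously.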
Up to now, one can obtain an initial estimation of all of the motion layers simultaneously. This is still insufficient for motion segmentation, since we also need to determine the layer boundaries and the occlusion relationships. Besides that, it can be observed that each segmented layer contains some small, isolated spurious regions, and the resulting layer boundaries wander around the real ones. This makes the detection of the layer boundaries difficult. The occluded regions occur in the neighborhood of the layer boundaries. If the occluding edges can be determined correctly, the occluded regions can be segmented correctly. Furthermore, the resulting motion layers can also be linked to the occluded regions through the occluding edges for depth ordering. Hence, it is a crucial step to