matrix on both sides, and denote the rows of this inverse by $r$'s:
$$
\begin{bmatrix} \alpha_i \\ \alpha_i \beta_i \\ (1-\alpha_i)\gamma_i \end{bmatrix}
= \begin{bmatrix} F_2 - B_2 & F_1 - F_2 & B_1 - B_2 \end{bmatrix}^{-1} (I_i - B_2)
= \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} (I_i - B_2)
\qquad (2.23)
$$
Taking just the first element of the vector on both sides, we see that
$$
\alpha_i = r_1 I_i - r_1 B_2 \qquad (2.24)
$$
which corresponds to Equation (2.20) with $a = r_1$ and $b = -r_1 B_2$.
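As a quick numeric check of Equations (2.23) and (2.24), we can composite a pixel from two color lines and recover its $\alpha$ exactly. All colors and mixing weights below are made up for illustration; only the algebra comes from the text.

```python
import numpy as np

# Hypothetical colors: F1, F2 span the foreground color line and
# B1, B2 span the background color line within a window.
F1, F2 = np.array([0.9, 0.2, 0.1]), np.array([0.8, 0.4, 0.3])
B1, B2 = np.array([0.1, 0.3, 0.7]), np.array([0.2, 0.2, 0.5])

# A pixel whose foreground and background lie on those lines.
alpha, beta, gamma = 0.6, 0.3, 0.8
F = beta * F1 + (1 - beta) * F2      # foreground color of the pixel
B = gamma * B1 + (1 - gamma) * B2    # background color of the pixel
I = alpha * F + (1 - alpha) * B      # compositing equation

# Columns of the 3x3 matrix inverted in Equation (2.23).
M = np.column_stack([F2 - B2, F1 - F2, B1 - B2])
r1 = np.linalg.inv(M)[0]             # first row of the inverse

# Equation (2.24): alpha is an affine function of the observed color.
alpha_rec = r1 @ I - r1 @ B2
print(np.isclose(alpha_rec, alpha))  # True
```

The recovery is exact (up to floating point) whenever the three difference vectors forming $M$ are linearly independent, i.e., the two color lines are non-degenerate.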
2.4.2 The Matting Laplacian
The assumption that the $\alpha$ values and colors inside a window are related by Equation (2.20) leads to a natural cost function for the matting problem:
$$
J(\{\alpha_i, a_i, b_i,\ i = 1, \ldots, N\}) = \sum_{j=1}^{N} \sum_{i \in w_j} \bigl(\alpha_i - (a_j^\top I_i + b_j)\bigr)^2 \qquad (2.25)
$$
This cost function expresses the total error of the linearity assumption in Equation (2.20) over each window. We want to minimize $J$ to find $\alpha_i$ at each pixel as well as the coefficients $a_i$ and $b_i$ for every window $w_i$ around pixel $i$. For brevity, we'll write the left-hand side as $J(\alpha, a, b)$, where $\alpha$ is an $N \times 1$ vector that collects all the $\alpha$ values in the image, and $a$ and $b$ represent the collections of affine coefficients for each window. Since the windows between adjacent pixels overlap, the $\alpha$ estimates at each pixel are not independent. We also add a regularization term to Equation (2.25):
$$
J(\alpha, a, b) = \sum_{j=1}^{N} \left[ \sum_{i \in w_j} \bigl(\alpha_i - (a_j^\top I_i + b_j)\bigr)^2 + \varepsilon \, \|a_j\|^2 \right] \qquad (2.26)
$$
This term acts to bias the mattes toward being constant, since if $a_i = 0$, then $\alpha_i = b_i$ within the whole window $w_i$. Usually $\varepsilon$ is chosen to be on the order of $10^{-7}$ if each color channel is in the range $[0, 1]$.
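To make Equation (2.26) concrete, here is a small sketch that evaluates one window's term of the cost. The window data are synthetic (random colors and alphas, not from the text); it shows that an affine fit lowers the cost relative to a constant matte, while the $\varepsilon$ term stays negligible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for a single 3x3 window w_j (9 pixels, RGB in [0, 1]).
I_win = rng.random((9, 3))        # pixel colors, one row per pixel
alpha_win = rng.random(9)         # alpha values at those pixels
eps = 1e-7                        # regularization weight from Equation (2.26)

def window_cost(a, b):
    """One window's term of Equation (2.26): fit error plus eps * ||a||^2."""
    residual = alpha_win - (I_win @ a + b)
    return np.sum(residual**2) + eps * np.sum(a**2)

# Constant matte in the window (a = 0): the regularizer vanishes and the
# cost is just the deviation of alpha from the constant b.
cost_const = window_cost(np.zeros(3), alpha_win.mean())

# Affine fit alpha ~ a^T I + b by least squares lowers the cost.
A = np.hstack([I_win, np.ones((9, 1))])
x, *_ = np.linalg.lstsq(A, alpha_win, rcond=None)
cost_fit = window_cost(x[:3], x[3])
print(cost_fit < cost_const)      # True
```

The full cost $J$ simply sums `window_cost` over all $N$ overlapping windows.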
On first glance, this formulation doesn't seem to help us solve the matting problem, since we still have many more unknowns than pixels (i.e., the five values of $\alpha$, $a$, and $b$ at each pixel). However, by a clever manipulation, we can reduce the number of unknowns to exactly the number of pixels. First, we rearrange Equation (2.26) as a matrix equation:
$$
J(\alpha, a, b) = \sum_{j=1}^{N} \left\| \begin{bmatrix} I_j^W & \mathbf{1} \\ \sqrt{\varepsilon}\, I_{3 \times 3} & 0_{3 \times 1} \end{bmatrix} \begin{bmatrix} a_j \\ b_j \end{bmatrix} - \begin{bmatrix} \alpha_j^W \\ 0_{3 \times 1} \end{bmatrix} \right\|^2 \qquad (2.27)
$$
where $I_j^W$ stacks the colors of the pixels in window $w_j$ as rows, $\mathbf{1}$ is a column of ones, and $\alpha_j^W$ stacks the corresponding $\alpha$ values.
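The following sketch (again with synthetic window data, not taken from the text) confirms that the stacked matrix form of Equation (2.27) reproduces the corresponding term of Equation (2.26):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical window data: 9 pixels of a color image, as in Equation (2.27).
I_win = rng.random((9, 3))
alpha_win = rng.random(9)
eps = 1e-7

# Stacked matrix of Equation (2.27): data rows on top, sqrt(eps)*I below.
G = np.block([
    [I_win, np.ones((9, 1))],
    [np.sqrt(eps) * np.eye(3), np.zeros((3, 1))],
])
target = np.concatenate([alpha_win, np.zeros(3)])

# Arbitrary coefficients (a_j, b_j) for the check.
a, b = rng.random(3), 0.5
x = np.concatenate([a, [b]])

# The squared norm reproduces one window's term of Equation (2.26):
# data residual plus eps * ||a||^2.
lhs = np.sum((G @ x - target) ** 2)
rhs = np.sum((alpha_win - (I_win @ a + b)) ** 2) + eps * np.sum(a ** 2)
print(np.isclose(lhs, rhs))       # True
```

Because each $(a_j, b_j)$ now appears only in an ordinary linear least-squares problem, it can be solved for in closed form given $\alpha$, which is what allows $a$ and $b$ to be eliminated, leaving only the $N$ values of $\alpha$ as unknowns.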