The mean μ_B and covariance matrix Σ_B can be computed from the collection of N_B background sample locations {B_i} in B using:

$$\mu_B = \frac{1}{N_B} \sum_{i=1}^{N_B} I(B_i) \qquad\qquad (2.14)$$

$$\Sigma_B = \frac{1}{N_B} \sum_{i=1}^{N_B} \big(I(B_i) - \mu_B\big)\big(I(B_i) - \mu_B\big)^\top$$
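As a concrete illustration, here is a minimal NumPy sketch of these sample statistics, together with the Gaussian quadratic form used in Equation (2.15) below. The function and array names are our own, chosen for illustration; they are not part of the original formulation.

```python
import numpy as np

def sample_statistics(colors):
    """Sample mean and covariance of pixel colors, as in Equation (2.14).

    colors: (N, 3) array with one RGB color I(B_i) per row.
    """
    mu = colors.mean(axis=0)
    diff = colors - mu
    sigma = diff.T @ diff / colors.shape[0]   # 3x3 covariance, normalized by N
    return mu, sigma

def gaussian_log_prior(x, mu, sigma):
    """Quadratic form -(x - mu)^T Sigma^{-1} (x - mu) of Equation (2.15),
    with the constant terms omitted."""
    d = x - mu
    return -d @ np.linalg.solve(sigma, d)
```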
We can do the same thing for the foreground pixels in the trimap. Therefore, we can obtain estimates for the prior distributions in Equation (2.10) as:

$$\log P(B) \approx -(B - \mu_B)^\top \Sigma_B^{-1} (B - \mu_B) \qquad\qquad (2.15)$$

$$\log P(F) \approx -(F - \mu_F)^\top \Sigma_F^{-1} (F - \mu_F)$$
where we've omitted constants that don't affect the optimization. For the moment, let's also assume P(α) is constant (we'll relax this assumption shortly). Then substituting Equation (2.12) and Equation (2.15) into Equation (2.10) and setting the derivatives with respect to F, B, and α equal to zero, we obtain the following simultaneous equations:
$$
\begin{bmatrix}
\Sigma_F^{-1} + \dfrac{\alpha^2}{\sigma_d^2} I_{3\times 3} & \dfrac{\alpha(1-\alpha)}{\sigma_d^2} I_{3\times 3} \\[1ex]
\dfrac{\alpha(1-\alpha)}{\sigma_d^2} I_{3\times 3} & \Sigma_B^{-1} + \dfrac{(1-\alpha)^2}{\sigma_d^2} I_{3\times 3}
\end{bmatrix}
\begin{bmatrix} F \\ B \end{bmatrix}
=
\begin{bmatrix}
\Sigma_F^{-1}\mu_F + \dfrac{\alpha}{\sigma_d^2} I \\[1ex]
\Sigma_B^{-1}\mu_B + \dfrac{1-\alpha}{\sigma_d^2} I
\end{bmatrix}
\qquad (2.16)
$$

$$
\alpha = \frac{(I - B)\cdot(F - B)}{(F - B)\cdot(F - B)} \qquad\qquad (2.17)
$$
Equation (2.16) is a 6 × 6 linear system for determining the optimal F and B for a given α; I_{3×3} denotes the 3 × 3 identity matrix. Equation (2.17) is a direct solution for the optimal α given F and B. This suggests a simple strategy for solving the Bayesian matting problem. First, we make a guess for α at each pixel (for example, using the input trimap). Then, we alternate between solving Equation (2.16) and Equation (2.17) until the estimates for F, B, and α converge.
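A minimal per-pixel sketch of this alternation is given below, assuming the observed color I, the sample statistics from Equation (2.14), and the noise parameter σ_d are available as NumPy arrays. The function names, the convergence tolerance, and the clamping of α to [0, 1] are implementation choices of ours, not details specified above.

```python
import numpy as np

def solve_F_B(I_obs, mu_F, sigma_F, mu_B, sigma_B, alpha, sigma_d):
    """Assemble and solve the 6x6 linear system of Equation (2.16)
    for F and B at a fixed alpha."""
    eye3 = np.eye(3)
    inv_F = np.linalg.inv(sigma_F)
    inv_B = np.linalg.inv(sigma_B)
    s2 = sigma_d ** 2

    A = np.zeros((6, 6))
    A[:3, :3] = inv_F + (alpha ** 2 / s2) * eye3
    A[:3, 3:] = (alpha * (1.0 - alpha) / s2) * eye3
    A[3:, :3] = (alpha * (1.0 - alpha) / s2) * eye3
    A[3:, 3:] = inv_B + ((1.0 - alpha) ** 2 / s2) * eye3

    b = np.concatenate([
        inv_F @ mu_F + (alpha / s2) * I_obs,
        inv_B @ mu_B + ((1.0 - alpha) / s2) * I_obs,
    ])

    x = np.linalg.solve(A, b)
    return x[:3], x[3:]          # F, B estimates

def solve_alpha(I_obs, F, B):
    """Direct solution for alpha from Equation (2.17),
    clamped to [0, 1] (our own choice)."""
    d = F - B
    denom = d @ d
    if denom < 1e-12:            # F and B nearly identical; alpha is undefined
        return 0.0
    return float(np.clip((I_obs - B) @ d / denom, 0.0, 1.0))

def bayesian_matting_pixel(I_obs, mu_F, sigma_F, mu_B, sigma_B,
                           sigma_d, alpha_init=0.5, max_iters=20, tol=1e-4):
    """Alternate between Equations (2.16) and (2.17) until alpha converges."""
    alpha = alpha_init
    F = B = None
    for _ in range(max_iters):
        F, B = solve_F_B(I_obs, mu_F, sigma_F, mu_B, sigma_B, alpha, sigma_d)
        alpha_new = solve_alpha(I_obs, F, B)
        if abs(alpha_new - alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return F, B, alpha
```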
2.3.2 Refinements and Extensions
In typical natural image matting problems, it's difficult to accurately model the foreground and background distributions with a simple pdf. Furthermore, these distributions may have significant local variation in different regions of the image. For example, Figure 2.9a illustrates the sample foreground and background distributions for a natural image. We can see that the color distributions are complex, so using a simple function (such as a single Gaussian distribution) to create pdfs for the foreground and background is a poor model. Instead, we can fit multiple Gaussians to each sample distribution to get a better representation. These Gaussian Mixture Models (GMMs) can be learned using the Expectation-Maximization (EM) algorithm [45] or using vector quantization [356]. Figure 2.9b shows an example of multiple Gaussians fit to the same sample distributions as in Figure 2.9a. The overlap between