    F(α^(t)) − F(α⋆) ≤ 8 c₂ R² / t²    otherwise.                     (7.22)
For the nonframe case (c₁ = 0), the convergence rate is O(1/t²), which is much
better than the O(1/t) rate of equation (7.20). Even for the case of frames,
Algorithm 21 enjoys a linear convergence rate with a faster rate function
compared to equation (7.19). However, an iteration of Algorithm 21 is twice the
computational cost of one iteration of Algorithm 20.
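To make the rate gap concrete, here is a minimal Python sketch on an illustrative ℓ₁-regularized least-squares problem (the problem data, step size, and the plain/accelerated proximal-gradient updates below are standard stand-ins in the spirit of Algorithms 20 and 21, not their exact statements from the chapter):

```python
import numpy as np

# Illustrative problem: min 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
b = A @ np.append(np.ones(4), np.zeros(56))  # sparse ground truth
lam = 0.05
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient

def prox_l1(x, t):
    # Proximity operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def obj(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()

def plain_prox_grad(iters):
    # O(1/t) rate: gradient step on the smooth term, prox step on the l1 term.
    x = np.zeros(60)
    for _ in range(iters):
        x = prox_l1(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

def accelerated_prox_grad(iters):
    # O(1/t^2) rate: same prox-gradient step applied at a Nesterov
    # extrapolation point z, combining the two latest iterates.
    x = z = np.zeros(60)
    t = 1.0
    for _ in range(iters):
        x_new = prox_l1(z - (A.T @ (A @ z - b)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

Note the doubled cost per iteration of the accelerated variant in general settings: it maintains both the iterate and the extrapolation sequence, mirroring the remark about Algorithm 21 versus Algorithm 20.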
7.3.3 Splitting Algorithms
Recall from equation (7.2) that our goal is the minimization of functions of the form
F = F₁ + F₂, where F₁, F₂ ∈ Γ₀(H) are such that their domains have a nonempty
intersection. Let G be the set of solutions of problem (P), that is,

    G = { α ∈ H : F(α) = min F } .
From the minimality condition and Definition 1 of the proximity operator, α ∈ G if
and only if

    0 ∈ ∂F(α) ⟺ 0 ∈ ∂(βF)(α)  for any β > 0
              ⟺ α ∈ α + ∂(βF)(α)
              ⟺ α = prox_{βF}(α),                                  (7.23)

where β is a positive scalar known as the proximal step size. The proximal-type
algorithm constructed as

    α^(t+1) = prox_{βF}(α^(t))
is a fundamental algorithm for finding a global minimum of F . In Martinet (1972), it
was proved that α^(t) converges to a point α ∈ G. The main difficulty with the method is that
prox_{βF} may be hard to compute in general, depending on the function F. This is, for
instance, the case in most inverse problems arising in image and signal processing.
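As a sanity check on the fixed-point characterization (7.23), the sketch below (Python, with the illustrative choice F = ‖·‖₁, whose prox is the familiar soft-thresholding rule) runs the proximal iteration α^(t+1) = prox_{βF}(α^(t)) and verifies that its limit is a fixed point of prox_{βF}:

```python
import numpy as np

def prox_l1(alpha, beta):
    # prox_{beta*||.||_1}: componentwise soft-thresholding.
    return np.sign(alpha) * np.maximum(np.abs(alpha) - beta, 0.0)

# Proximal point iteration for F = ||.||_1, whose unique global
# minimizer is the origin; each step shrinks every entry by beta.
beta = 0.5
alpha = np.array([3.0, -1.5, 0.2])
for _ in range(20):
    alpha = prox_l1(alpha, beta)

# At convergence, alpha is the minimizer and satisfies
# alpha = prox_{beta F}(alpha), exactly as in (7.23).
```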
7.3.3.1 The Gist of Splitting
Splitting methods for problem ( P ) are algorithms that do not attempt to evaluate
the proximal operator prox_{βF} of the combined function F, but instead perform
a sequence of calculations involving separately the individual proximal operators
prox_{βF₁} and prox_{βF₂}. The latter are, it is hoped, easier to evaluate, and this turns
out to be true in many situations such as those we have already discussed in equa-
tions (7.3), (7.4), and (7.5) and those that we will consider later in Section 7.4.
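For instance, the two individual proxes of a typical inverse-problem objective are each tractable on their own, as the Python sketch below illustrates (the quadratic data-fidelity and ℓ₁ terms and their closed forms are standard consequences of Definition 1, used here as illustrative assumptions rather than the chapter's specific equations):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 12))  # illustrative linear operator
y = rng.standard_normal(8)
beta = 0.7

def prox_quadratic(alpha):
    # prox of F1 = 0.5*||y - H a||^2: minimizing beta*F1(x) + 0.5*||x - alpha||^2
    # reduces to the linear system (I + beta H^T H) x = alpha + beta H^T y.
    n = H.shape[1]
    return np.linalg.solve(np.eye(n) + beta * H.T @ H, alpha + beta * H.T @ y)

def prox_l1(alpha):
    # prox of F2 = ||.||_1: componentwise soft-thresholding.
    return np.sign(alpha) * np.maximum(np.abs(alpha) - beta, 0.0)
```

Even here the quadratic prox requires a linear solve in H, which is costly when H is a large imaging operator; splitting schemes such as the forward-backward method sidestep it by using only applications of H and its adjoint.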
Splitting methods for monotone operators have numerous applications in con-
structing decomposition algorithms for convex optimization and monotone vari-
ational inequalities. Splitting algorithms have an extensive literature (see, e.g.,
Eckstein (1989) for a review), all of which can, in essence, be categorized into
three main classes: the forward-backward (Gabay 1983; Tseng 1991, 2000), the
Douglas/Peaceman-Rachford (Lions and Mercier 1979) (although their roots go
back to numerical methods for PDEs (Douglas and Rachford 1956)), and the