The next step consists of solving (9.20) via convex optimization. To this end, let
us introduce the polynomials
$$
\begin{aligned}
b_r(a,z) &= \gamma - a^2 - \sum_{(i,j,k)\in\mathcal{I}} u_{i,j,k}(a,z)\, h_{i,j,k}(a,z) \\
b_t(a,z) &= (1+a^2)^2\,\gamma - z^2 - \sum_{(i,j,k)\in\mathcal{I}} u_{i,j,k}(a,z)\, h_{i,j,k}(a,z)
\end{aligned}
\qquad (9.21)
$$
where $u_{i,j,k}(a,z)$ are auxiliary polynomials and $\gamma \in \mathbb{R}$ is an auxiliary scalar to be determined. Let us exploit the SMR of polynomials described in Section 9.2.3. Let $v_b(a,z)$ be a vector containing any base for the polynomials $b_r(a,z)$ and $b_t(a,z)$, and let $v_u(a,z)$ be a similar vector for the polynomials $u_{i,j,k}(a,z)$. Then, the polynomials $b_r(a,z)$, $b_t(a,z)$ and $u_{i,j,k}(a,z)$ can be expressed as
$$
\begin{cases}
b_r(a,z) = v_b(a,z)^T B_r\, v_b(a,z) \\
b_t(a,z) = v_b(a,z)^T B_t\, v_b(a,z) \\
u_{i,j,k}(a,z) = v_u(a,z)^T U_{i,j,k}\, v_u(a,z)
\end{cases}
\qquad (9.22)
$$
where $B_r$, $B_t$ and $U_{i,j,k}$ are any symmetric matrices of suitable dimensions satisfying (9.22).
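The representation (9.22) is in general not unique: different symmetric matrices can represent the same polynomial, and this freedom is exactly what the parametrization $L(\alpha)$ introduced next captures. The following is a minimal SymPy sketch of this non-uniqueness, using a toy one-variable polynomial and base in place of $b_r(a,z)$ and $v_b(a,z)$:

```python
# Minimal sketch of the SMR idea behind (9.22): the same polynomial can be
# written as v^T B v for different symmetric matrices B, which differ by a
# matrix L with v^T L v = 0 identically.  Toy example, not the b_r/b_t above.
import sympy as sp

x, t = sp.symbols('x t')
v = sp.Matrix([1, x, x**2])          # toy base vector

# One SMR matrix of p(x) = 1 + 2x + 3x^2 + 2x^3 + x^4
B = sp.Matrix([[1, 1, 0],
               [1, 3, 1],
               [0, 1, 1]])

# A symmetric L(t) with v^T L(t) v = 0 for every x (the freedom L(alpha) encodes)
L = t * sp.Matrix([[0, 0, 1],
                   [0, -2, 0],
                   [1, 0, 0]])

p1 = sp.expand((v.T * B * v)[0])
p2 = sp.expand((v.T * (B + L) * v)[0])
print(p1)                      # 1 + 2*x + 3*x**2 + 2*x**3 + x**4
print(sp.simplify(p2 - p1))    # 0: both matrices represent the same polynomial
```

Any two SMR matrices of the same polynomial differ by such an $L$, which is why the free parameter $\alpha$ appears in the LMIs below.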
Let $L(\alpha)$ be any linear parametrization of the linear set
$$
\mathcal{L} = \left\{ L = L^T :\; v_b(a,z)^T L\, v_b(a,z) = 0 \;\; \forall a, z \right\}
$$
where $\alpha$ is a free vector, and let us define the optimization problems
$$
\begin{aligned}
\gamma_r &= \inf_{\gamma,\,\alpha,\,U_{i,j,k}} \gamma
\quad \text{s.t.} \;\;
\left\{\begin{aligned}
& B_r + L(\alpha) \ge 0 \\
& U_{i,j,k} \ge 0 \quad \forall (i,j,k) \in \mathcal{I}
\end{aligned}\right. \\[2ex]
\gamma_t &= \inf_{\gamma,\,\alpha,\,U_{i,j,k}} \gamma
\quad \text{s.t.} \;\;
\left\{\begin{aligned}
& B_t + L(\alpha) \ge 0 \\
& U_{i,j,k} \ge 0 \quad \forall (i,j,k) \in \mathcal{I}
\end{aligned}\right.
\end{aligned}
\qquad (9.23)
$$
These optimization problems are convex since the cost functions are linear and the feasible sets, being defined by LMIs, are convex. In particular, these problems belong to the class of eigenvalue problems (EVPs), also known as semidefinite programming [2].
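To illustrate how an EVP of this structure can be set up numerically, the following is a minimal CVXPY sketch with toy, hand-picked matrices standing in for $B_r$, the parametrization $L(\alpha)$ and a single multiplier matrix $U_{i,j,k}$; it is not the actual SMR construction of Section 9.2.3.

```python
# Minimal sketch of an EVP of the same structure as (9.23), solved with CVXPY.
# B0, B1, L1, L2 below are toy placeholders: B0 + gamma*B1 stands in for B_r,
# and alpha[0]*L1 + alpha[1]*L2 stands in for the parametrization L(alpha).
import cvxpy as cp
import numpy as np

n = 3
B0 = np.array([[0.0,  1.0,  0.0],
               [1.0, -2.0,  0.0],
               [0.0,  0.0, -1.0]])
B1 = np.eye(n)
L1 = np.array([[0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0]])
L2 = np.array([[0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])

gamma = cp.Variable()                      # scalar gamma being minimized
alpha = cp.Variable(2)                     # free vector parametrizing L(alpha)
U = cp.Variable((n, n), symmetric=True)    # one multiplier matrix U_{i,j,k}
M = cp.Variable((n, n), symmetric=True)    # M plays the role of B_r + L(alpha)

constraints = [
    M == B0 + gamma * B1 + alpha[0] * L1 + alpha[1] * L2 - U,  # toy affine coupling
    M >> 0,   # B_r + L(alpha) >= 0
    U >> 0,   # U_{i,j,k} >= 0
]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)
print("gamma_r (toy):", gamma.value)
```

In the actual problems (9.23), the matrices $B_r$ and $B_t$ can be chosen to depend affinely on $\gamma$ and on the $U_{i,j,k}$ through (9.21)-(9.22), which is what makes the constraints LMIs in the decision variables.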
Let us observe that the constraints in (9.23) ensure that
$$
\left.
\begin{aligned}
b_r(a,z) &\ge 0 \\
b_t(a,z) &\ge 0 \\
u_{i,j,k}(a,z) &\ge 0
\end{aligned}
\right\}
\quad \forall a, z
$$
from which one obtains
$$
\left.
\begin{aligned}
\gamma_r &\ge a^2 \\[1mm]
\gamma_t &\ge \frac{z^2}{(1+a^2)^2}
\end{aligned}
\right\}
\quad \forall a, z:\; h_{i,j,k}(a,z) \ge 0 \;\; \forall (i,j,k) \in \mathcal{I}.
$$
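To spell out the step for $\gamma_r$: for any feasible point of the first problem in (9.23) and any $(a,z)$ with $h_{i,j,k}(a,z) \ge 0$ for all $(i,j,k) \in \mathcal{I}$,
$$
0 \le b_r(a,z) = \gamma - a^2 - \sum_{(i,j,k)\in\mathcal{I}} u_{i,j,k}(a,z)\, h_{i,j,k}(a,z) \le \gamma - a^2
$$
because every term of the sum is nonnegative; hence $\gamma \ge a^2$ for every feasible $\gamma$, and in particular $\gamma_r \ge a^2$. The bound on $\gamma_t$ follows in the same way from $b_t(a,z) \ge 0$.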