From Eq. (6.66) it therefore follows that
$$H(\mathbf{y}) = \ln\det(\pi e\,\mathbf{C}_{yy}) = \ln\det\big(\pi e\,(\mathbf{H}\mathbf{C}_{xx}\mathbf{H}^\dagger + \sigma_q^2\mathbf{I})\big) \qquad (6.84)$$
and, furthermore,
$$H(\mathbf{q}) = \ln\det(\pi e\,\sigma_q^2\mathbf{I}). \qquad (6.85)$$
Substituting into Eq. (6.81), the result (6.80) follows.
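The cancellation behind this substitution can be checked numerically: the $\pi e$ and $\sigma_q^2$ factors divide out of the ratio of determinants, leaving $\log\det(\mathbf{I} + \sigma_q^{-2}\mathbf{H}\mathbf{C}_{xx}\mathbf{H}^\dagger)$. A minimal numpy sketch, where the matrices $\mathbf{H}$, $\mathbf{C}_{xx}$ and the noise variance are arbitrary illustrative examples, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 2, 3                       # channel output / input dimensions (example values)
sigma_sq = 0.5                    # noise variance sigma_q^2 (example value)

H = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
C_xx = A @ A.conj().T             # any Hermitian positive semidefinite covariance

C_yy = H @ C_xx @ H.conj().T + sigma_sq * np.eye(p)

# H(y) - H(q): Eq. (6.84) minus Eq. (6.85)
h_y = np.log(np.linalg.det(np.pi * np.e * C_yy).real)
h_q = np.log(np.linalg.det(np.pi * np.e * sigma_sq * np.eye(p)).real)

# Direct evaluation of log det(I + H C_xx H^dagger / sigma_q^2)
mi = np.log(np.linalg.det(np.eye(p) + H @ C_xx @ H.conj().T / sigma_sq).real)

print(h_y - h_q, mi)              # the two values agree
```

The agreement holds for any dimensions, since $\det(\pi e(\mathbf{A}+\sigma_q^2\mathbf{I})) = \det(\pi e\,\sigma_q^2\mathbf{I})\det(\mathbf{I}+\mathbf{A}/\sigma_q^2)$.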
The problem of maximizing the mutual information (6.80) can be reformulated
by considering Fig. 6.7. Here the vector $\mathbf{s}$ is restricted to be a zero-mean circularly symmetric complex Gaussian with covariance matrix $\mathbf{C}_{ss} = \mathbf{I}$.
The vector $\mathbf{x}$, which is the output of the linear transformation $\mathbf{F}$, is also circularly symmetric complex Gaussian for any $\mathbf{F}$ (Lemma 6.1, Sec. 6.6). Since the covariance of $\mathbf{x}$ is
$$\mathbf{C}_{xx} = \mathbf{F}\mathbf{F}^\dagger, \qquad (6.86)$$
we can realize any covariance matrix by appropriate choice of $\mathbf{F}$.
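Eq. (6.86) is easy to confirm empirically: drawing samples of $\mathbf{s}$ with identity covariance and passing them through a matrix $\mathbf{F}$ should give an empirical covariance close to $\mathbf{F}\mathbf{F}^\dagger$. A sketch in numpy, where $\mathbf{F}$ is just a random example matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 200_000   # vector length, number of samples (example values)

# Arbitrary example precoder F (any complex matrix works).
F = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

# Zero-mean circularly symmetric complex Gaussian s with C_ss = I
# (real and imaginary parts each have variance 1/2).
s = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

x = s @ F.T                      # x = F s, one sample per row
C_emp = x.T @ x.conj() / N       # empirical E[x x^dagger]
C_theory = F @ F.conj().T        # C_xx = F F^dagger, Eq. (6.86)

print(np.max(np.abs(C_emp - C_theory)))   # small sampling error
```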
So the problem of maximizing the mutual information $I(\mathbf{x};\mathbf{y})$ between $\mathbf{x}$ and $\mathbf{y}$ can be solved by fixing $\mathbf{s}$ to be a zero-mean circularly symmetric complex Gaussian with covariance $\mathbf{I}$, and optimizing $\mathbf{F}$. Since the mutual information can now be written as
$$I(\mathbf{x};\mathbf{y}) = \log\det\Big(\mathbf{I} + \frac{1}{\sigma_q^2}\,\mathbf{H}\mathbf{F}\mathbf{F}^\dagger\mathbf{H}^\dagger\Big), \qquad (6.87)$$
we only have to maximize this by optimizing $\mathbf{F}$ subject to the power constraint, which now becomes
$$\mathrm{Tr}\,(\mathbf{F}\mathbf{F}^\dagger) = p_0. \qquad (6.88)$$
The same optimization problem also arises in a different context, namely that
of optimizing the precoder in a decision feedback transceiver without the zero-
forcing constraint (Sec. 19.4).
6.7.2 Solution to the maximum mutual information problem
At this point it is convenient to represent the channel $\mathbf{H}$ and the precoder $\mathbf{F}$ using their singular value decompositions (Appendix C):
$$\mathbf{F} = \mathbf{U}_f\boldsymbol{\Sigma}_f\mathbf{V}_f^\dagger \quad\text{and}\quad \mathbf{H} = \mathbf{U}_h\boldsymbol{\Sigma}_h\mathbf{V}_h^\dagger, \qquad (6.89)$$
where $\mathbf{U}_f$, $\mathbf{V}_f$, $\mathbf{U}_h$, and $\mathbf{V}_h$ are unitary matrices, and $\boldsymbol{\Sigma}_f$ and $\boldsymbol{\Sigma}_h$ are diagonal matrices with non-negative diagonal elements (the singular values of $\mathbf{F}$ and $\mathbf{H}$). Note that $\mathbf{H}$ and $\boldsymbol{\Sigma}_h$ are rectangular matrices; all other matrices are square. Since
$$\mathbf{F}\mathbf{F}^\dagger = \mathbf{U}_f\boldsymbol{\Sigma}_f^2\mathbf{U}_f^\dagger,$$
the power constraint (6.88) becomes
$$\sum_{k=0}^{P-1} \sigma_{f,k}^2 = p_0, \qquad (6.90)$$
where $\sigma_{f,k}$ denotes the $k$th singular value of $\mathbf{F}$.