Note that we use a simplified notation H(x)³ in this topic to indicate the entropy of the probability distribution p(x), which should formally be written as H[p(x)].
C.2 Complex Gaussian Distribution
We derive the probability distribution when the random variable is a complex-valued Gaussian. The arguments in this section follow those of Neeser and Massey [1].
Let us define a column vector of N complex random variables as z, and define z = x + iy, where x = ℜ(z), y = ℑ(z), and x and y are real-valued N-dimensional Gaussian random vectors. Here, we assume that E(z) = 0, and as a result, E(x) = 0 and E(y) = 0. The covariance matrix of z is defined as Σ_zz = E(zz^H).
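As a concrete illustration of this definition, the following Python sketch draws samples of a zero-mean complex Gaussian vector and estimates Σ_zz = E(zz^H) from them. The dimension N, the sample count, and the mixing matrix A are arbitrary choices made for the illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                 # dimension of z (arbitrary for this illustration)
num_samples = 200_000

# Draw x and y as zero-mean real Gaussian vectors and form z = x + i y.
A = rng.standard_normal((N, N))               # arbitrary mixing matrix
x = rng.standard_normal((num_samples, N)) @ A.T
y = rng.standard_normal((num_samples, N)) @ A.T
z = x + 1j * y                                # samples of the complex vector z

# Sample estimate of the covariance matrix Sigma_zz = E[z z^H];
# for this construction the true value is 2 A A^T.
Sigma_zz = (z.T @ z.conj()) / num_samples
print(np.round(Sigma_zz, 2))
```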
We next define a (2N × 1) vector ẑ such that ẑ = [x^T, y^T]^T. The covariance matrix of ẑ, Σ_ẑẑ, is given by

\Sigma_{\hat{z}\hat{z}} = E\left[ \begin{bmatrix} x \\ y \end{bmatrix} \begin{bmatrix} x^T & y^T \end{bmatrix} \right] = \begin{bmatrix} \Sigma_{xx} & \Sigma_{yx}^T \\ \Sigma_{yx} & \Sigma_{yy} \end{bmatrix},    (C.10)
where Σ_xx = E(xx^T), Σ_yx = E(yx^T), and Σ_yy = E(yy^T). We assume that the joint distribution of x and y is given by
p(x, y) = p(\hat{z}) = \frac{1}{(2\pi)^N \, |\Sigma_{\hat{z}\hat{z}}|^{1/2}} \exp\left( -\frac{1}{2} \hat{z}^T \Sigma_{\hat{z}\hat{z}}^{-1} \hat{z} \right).    (C.11)
In this section, we show that this joint distribution is equal to
p(z) = \frac{1}{\pi^N \, |\Sigma_{zz}|} \exp\left( -z^H \Sigma_{zz}^{-1} z \right),    (C.12)
which is called the complex Gaussian distribution.
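Equation (C.12) is straightforward to evaluate numerically once Σ_zz is known. The sketch below is one possible NumPy implementation of this density; the function name complex_gaussian_pdf and the example inputs are illustrative assumptions, not part of the text.

```python
import numpy as np

def complex_gaussian_pdf(z, Sigma_zz):
    """Evaluate the complex Gaussian density of Eq. (C.12):
    p(z) = exp(-z^H Sigma_zz^{-1} z) / (pi^N |Sigma_zz|)."""
    N = Sigma_zz.shape[0]
    quad = np.real(np.conj(z) @ np.linalg.solve(Sigma_zz, z))  # z^H Sigma_zz^{-1} z
    norm = np.pi ** N * np.real(np.linalg.det(Sigma_zz))
    return np.exp(-quad) / norm

# Example: a 2-dimensional proper complex Gaussian with identity covariance.
Sigma_zz = np.eye(2, dtype=complex)
z = np.array([0.3 + 0.4j, -0.1 + 0.2j])
print(complex_gaussian_pdf(z, Sigma_zz))
```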
To show this equality, we assume a property, called “proper”, on the complex random variable z. The complex random variable z is proper if its pseudo covariance Σ̃_zz is equal to zero. The pseudo covariance is defined such that Σ̃_zz = E(zz^T), which is written as
\tilde{\Sigma}_{zz} = E\left[ (x + iy)(x + iy)^T \right] = \Sigma_{xx} - \Sigma_{yy} + i\left( \Sigma_{yx} + \Sigma_{yx}^T \right).
Therefore, Σ̃_zz = 0 is equivalent to the relationships

\Sigma_{xx} = \Sigma_{yy} \quad \text{and} \quad \Sigma_{yx} = -\Sigma_{yx}^T.    (C.13)
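As a numerical sanity check on the equality of (C.11) and (C.12) under the conditions (C.13), the following sketch builds a covariance satisfying Σ_xx = Σ_yy with a skew-symmetric Σ_yx, forms Σ_ẑẑ and Σ_zz, and evaluates both densities at an arbitrary point. The specific matrices, random seed, and test point are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3  # dimension (arbitrary)

# Covariance blocks satisfying the properness conditions (C.13):
# Sigma_xx = Sigma_yy (symmetric positive definite), Sigma_yx skew-symmetric.
C = rng.standard_normal((N, N))
Sigma_xx = C @ C.T + N * np.eye(N)
K = rng.standard_normal((N, N))
Sigma_yx = 0.2 * (K - K.T)                    # Sigma_yx = -Sigma_yx^T
Sigma_yy = Sigma_xx.copy()

# Covariance of the stacked real vector [x; y], Eq. (C.10).
Sigma_hat = np.block([[Sigma_xx, Sigma_yx.T],
                      [Sigma_yx, Sigma_yy]])

# Covariance of z = x + i y:
# Sigma_zz = E[z z^H] = Sigma_xx + Sigma_yy + i (Sigma_yx - Sigma_yx^T).
Sigma_zz = Sigma_xx + Sigma_yy + 1j * (Sigma_yx - Sigma_yx.T)

# Arbitrary test point.
x = rng.standard_normal(N)
y = rng.standard_normal(N)
z_hat = np.concatenate([x, y])
z = x + 1j * y

# Real 2N-dimensional Gaussian density, Eq. (C.11).
quad_hat = z_hat @ np.linalg.solve(Sigma_hat, z_hat)
p_real = np.exp(-0.5 * quad_hat) / ((2 * np.pi) ** N
                                    * np.sqrt(np.linalg.det(Sigma_hat)))

# Complex Gaussian density, Eq. (C.12).
quad_z = np.real(np.conj(z) @ np.linalg.solve(Sigma_zz, z))
p_complex = np.exp(-quad_z) / (np.pi ** N * np.real(np.linalg.det(Sigma_zz)))

print(p_real, p_complex)  # the two values should agree to numerical precision
```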
³ The notation H(x) may look as if the entropy H(x) is a function of x, but the entropy is a functional of the probability distribution p(x), not a function of x.