Digital Signal Processing Reference
We then introduce:

$$z(k) = \begin{bmatrix} x(k) \\ x(k+1) \end{bmatrix} = \mathbf{A}\,s(k) + \mathbf{n}(k) \qquad [8.64]$$
where:
()
⎡ ⎤
b
k
A
()
A
=
and
n
k
=
[8.65]
⎢ ⎥
(
)
b
k
+
1
A
Φ
⎣ ⎦
It is thus the structure of the stacked matrix $\mathbf{A}$, of dimension 2M × P, which will be exploited to estimate Φ without having to know A.
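The rotational-invariance structure of the stacked matrix can be checked numerically. The following is a minimal sketch (not from the source); the values of M, P and the frequencies `omega` are arbitrary assumptions, and A is taken to be the Vandermonde steering matrix of P complex sine waves:

```python
import numpy as np

# Hypothetical example: P = 2 complex sine waves observed over M = 6 samples.
M, P = 6, 2
omega = np.array([0.4, 1.1])           # assumed angular frequencies (rad/sample)
Phi = np.diag(np.exp(1j * omega))      # diagonal rotation operator Phi

# Vandermonde steering matrix A (M x P): A[m, p] = exp(1j * omega_p * m)
A = np.exp(1j * np.outer(np.arange(M), omega))

# Rotational invariance: advancing each row by one sample multiplies A by Phi,
# which is exactly why the stacked matrix in [8.65] has the form [A ; A Phi].
assert np.allclose(A[1:], A[:-1] @ Phi)
```

The assertion holds for any choice of distinct frequencies, which is what makes Φ recoverable from the subspace structure alone.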
It is easy to see that the covariance matrix Γ_zz of all the observations contained in z(k) is of dimension 2M × 2M and is written:

$$\Gamma_{zz} = E\!\left\{ z(k)\,z(k)^H \right\} = \mathbf{A}\,\Gamma_{ss}\,\mathbf{A}^H + \sigma^2 I \qquad [8.66]$$
where Γ_ss is the covariance matrix, of dimension P × P, of the amplitudes of the complex sine waves and I is the identity matrix of dimension 2M × 2M. The structure of this covariance matrix is thus identical to that of the observation [8.4], and we can apply the eigendecomposition theorem and the definition of the signal and noise subspaces. In particular, let V_s be the matrix, of dimension 2M × P, of the eigenvectors of the covariance matrix Γ_zz associated with the eigenvalues strictly greater than the noise variance σ²; it follows that the columns of V_s and the columns of the stacked matrix $\mathbf{A}$ span the same signal subspace. There thus exists a unique operator T such that:
$$V_s = \mathbf{A}\,T \qquad [8.67]$$
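The eigen-structure claimed above can be verified on a synthetic covariance matrix. This is a numerical sketch under assumed values (M, P, the frequencies `omega`, the source powers in `Gamma_ss` and the noise variance `sigma2` are all illustrative, not from the source):

```python
import numpy as np

M, P = 5, 2
sigma2 = 0.3                                           # assumed noise variance
omega = np.array([0.5, 1.3])                           # assumed frequencies
A = np.exp(1j * np.outer(np.arange(M), omega))         # M x P steering matrix
Phi = np.diag(np.exp(1j * omega))
A_stack = np.vstack([A, A @ Phi])                      # stacked matrix of [8.65], 2M x P

Gamma_ss = np.diag([2.0, 1.0])                         # assumed source powers
Gamma_zz = A_stack @ Gamma_ss @ A_stack.conj().T + sigma2 * np.eye(2 * M)

# Eigendecomposition of [8.66]: the 2M - P smallest eigenvalues equal the
# noise variance, and the top-P eigenvectors span the columns of A_stack.
eigvals = np.linalg.eigvalsh(Gamma_zz)                 # ascending order
assert np.allclose(eigvals[:2 * M - P], sigma2)
```

Separating the P eigenvalues above σ² from the 2M − P eigenvalues equal to σ² is precisely how V_s is identified in practice.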
By decomposing V s as:
$$V_s = \begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} A\,T \\ A\,\Phi\,T \end{bmatrix} \qquad [8.68]$$
where the matrices V_x and V_y are of dimension M × P, we can see that the subspaces span(V_x) = span(V_y) = span(A) coincide. Let Ψ be the unique P × P change-of-basis matrix from the columns of V_x to the columns of V_y; we have:
$$V_y = V_x\,\Psi \qquad [8.69]$$