Proof The cost function given in (6.29) can be rewritten as

$$
\begin{aligned}
J(k,P_{FA}) &= E\bigl[\mathrm{tr}\bigl\{\|x(k)-\hat{x}(k|k)\|^{2}\bigr\} \mid Z^{k-1}\bigr]\\
&= E\bigl[\mathrm{tr}\bigl\{(x(k)-\hat{x}(k|k))^{T}(x(k)-\hat{x}(k|k))\bigr\} \mid Z^{k-1}\bigr]\\
&= E\bigl[\mathrm{tr}\bigl\{(x(k)-\hat{x}(k|k))(x(k)-\hat{x}(k|k))^{T}\bigr\} \mid Z^{k-1}\bigr]\\
&= \mathrm{tr}\bigl\{E\bigl[(x(k)-\hat{x}(k|k))(x(k)-\hat{x}(k|k))^{T} \mid Z^{k-1}\bigr]\bigr\}\\
&= \mathrm{tr}\bigl\{E\bigl[E\bigl[(x(k)-\hat{x}(k|k))(x(k)-\hat{x}(k|k))^{T} \mid Z^{k}\bigr] \mid Z^{k-1}\bigr]\bigr\}\\
&= \mathrm{tr}\bigl\{E\bigl[P(k|k) \mid Z^{k-1}\bigr]\bigr\}\\
&= \mathrm{tr}\bigl\{P_{\mathrm{MRE}}(k|k)\bigr\}\\
&= \mathrm{tr}\bigl\{P(k|k-1)\bigr\} - q_{2}\bigl(\lambda(k)V(k),P_{D}\bigr)\,\mathrm{tr}\bigl\{W(k)S(k)W^{T}(k)\bigr\},
\end{aligned}
$$
where the first equality is due to the property that the trace of a scalar is itself, the third one is due to the property that tr{AB} = tr{BA}, the fourth one is due to the linearity of the tr{·} and E[·] operators, and the fifth one follows from the smoothing property [9] of expectations. Note that W(k)S(k)W^T(k) ≥ 0 implies tr{W(k)S(k)W^T(k)} ≥ 0, and q_2(λ(k)V(k), P_D) is the only term that depends on P_FA. Hence the minimization of J(k, P_FA) can be achieved by maximizing q_2(λ(k)V(k), P_D) over P_FA, which completes the proof.
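As a quick numerical sanity check (an illustration added here, not part of the original argument), the following Python sketch verifies the two identities the chain of equalities relies on: that the mean squared estimation error equals the trace of the error covariance, and that tr{AB} = tr{BA}. The covariance matrix P below is an arbitrary stand-in for P(k|k).

```python
import numpy as np

# Monte Carlo check of the identities used in the proof above.
rng = np.random.default_rng(0)

# Arbitrary stand-in for the error covariance P(k|k); any PSD matrix works here.
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])

# Draw zero-mean errors e = x(k) - xhat(k|k) with covariance P.
e = rng.multivariate_normal(mean=np.zeros(2), cov=P, size=200_000)

# E[||e||^2] estimated by the sample mean should match tr{P}.
print(np.mean(np.sum(e**2, axis=1)), np.trace(P))

# Cyclic property of the trace (third equality): tr{a a^T} = a^T a.
a = rng.standard_normal(3)
print(np.trace(np.outer(a, a)), a @ a)
```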
Remark 6.2 We experimentally observe that choosing any other scalar measure for the function f_S[·] from the set {|·|, ‖·‖_1, ‖·‖_2, ‖·‖_∞, ‖·‖_F} results in the same optimization problem given in (6.30), where the elements of the set are the determinant, the 1-norm (the largest column sum), the 2-norm (the largest singular value), the ∞-norm (the largest row sum), and the Frobenius norm of a matrix, respectively.
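For concreteness, the short sketch below (an added illustration, not from the source) evaluates the five scalar measures listed in the remark on an arbitrary symmetric test matrix; the matrix S is only a placeholder for whatever matrix f_S[·] is applied to.

```python
import numpy as np

# Placeholder symmetric matrix standing in for the argument of f_S[.].
S = np.array([[4.0, 1.0],
              [1.0, 3.0]])

measures = {
    "determinant":    abs(np.linalg.det(S)),       # |.|
    "1-norm":         np.linalg.norm(S, 1),        # largest column sum
    "2-norm":         np.linalg.norm(S, 2),        # largest singular value
    "inf-norm":       np.linalg.norm(S, np.inf),   # largest row sum
    "Frobenius norm": np.linalg.norm(S, "fro"),
}
for name, value in measures.items():
    print(f"{name:>15s}: {value:.4f}")
```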
Due to mathematical intractability, the problem given in (6.30) was solved by utilizing line search algorithms that require only evaluations of the cost function (e.g., the Golden-Section or Fibonacci search methods) [22]. We refer to this scheme as DYNAMIC-MRE-LS in Fig. 6.2.
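As a rough sketch of how such a derivative-free search could be set up (not the authors' implementation), the Python snippet below implements a generic golden-section maximizer and applies it to a placeholder objective standing in for q_2(λ(k)V(k), P_D) as a function of P_FA; the surrogate function and the search interval are assumptions made purely for illustration.

```python
import math

def golden_section_maximize(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi];
    it needs only evaluations of the objective, as noted in the text."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi, about 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                                 # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                       # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Placeholder surrogate for q_2(lambda(k)V(k), P_D) as a function of P_FA.
# The true q_2 of the MRE is not reproduced here; any unimodal surrogate
# illustrates the mechanics of the search.
q2_surrogate = lambda p_fa: p_fa * math.exp(-8.0 * p_fa)

p_fa_star = golden_section_maximize(q2_surrogate, 1e-6, 1.0)
print(f"approximate maximizer P_FA* = {p_fa_star:.6f}")    # analytic optimum: 0.125
```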
Lemma 6.2 (A Closed-Form Solution [ 5 ]) An approximate closed-form solution
for the MRE-based dynamic threshold optimization can be found for a special type