diagonal $\Sigma_n$. This is recognized as a "rank-1 factor analysis" model in multivariate analysis theory [18, 26]. Given $R$, we can solve for $g$ and $\Sigma_n$ in several ways [4, 5, 48]. For example, any submatrix away from the diagonal depends only on $g$ and is rank 1, which allows direct estimation of $g$. This property is related to the gain and phase closure relations often used in the radio astronomy literature for calibration (in particular, these relations express that the determinant of any $2 \times 2$ submatrix away from the main diagonal is zero, which is the same as saying that this submatrix is rank 1).
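To see the closure property concretely, write the noise-free rank-1 term as $\sigma g g^H$ for a single calibrator with power $\sigma$ (our notation for the rank-1 term; the index choice below is illustrative). For row indices $i \neq k$ and column indices $j \neq l$ chosen such that all four entries avoid the main diagonal, $\Sigma_n$ drops out and

$$ \begin{vmatrix} r_{ij} & r_{il} \\ r_{kj} & r_{kl} \end{vmatrix} = \sigma^2 \left( g_i \bar{g}_j \, g_k \bar{g}_l - g_i \bar{g}_l \, g_k \bar{g}_j \right) = 0 \,, $$

so that ratios such as $r_{ij}/r_{kj} = g_i/g_k$ are independent of $j$ and directly reveal the relative gains.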
In general, there are more calibrator sources ($Q$) in the field of view, and we have to solve (35). We resort to an Alternating Least Squares approach. If $\Gamma$ were known, then we could correct $R$ for it, so that we have precisely the same problem as we considered before, (27), and we can solve for $\Sigma$ and $\Sigma_n$ using the techniques discussed in Sect. 5.3. Alternatively, with $\Sigma$ known, we can say we know a reference model $R_0 = A \Sigma A^H$, and the problem is to identify the element gains $\Gamma = \operatorname{diag}(g)$ from a model of the form

$$ R = \Gamma R_0 \Gamma^H + \Sigma_n \,, $$
or, after applying the $\operatorname{vec}(\cdot)$-operation,

$$ \operatorname{vec}(R) = \operatorname{diag}(\operatorname{vec}(R_0)) \, (\bar{g} \otimes g) + \operatorname{vec}(\Sigma_n) \,. $$
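The vectorized form follows from the standard Kronecker identity $\operatorname{vec}(ABC) = (C^T \otimes A)\operatorname{vec}(B)$; as a short check (our derivation, using only definitions already introduced):

$$ \operatorname{vec}(\Gamma R_0 \Gamma^H) = (\bar{\Gamma} \otimes \Gamma)\operatorname{vec}(R_0) = \operatorname{diag}(\bar{g} \otimes g)\operatorname{vec}(R_0) = \operatorname{diag}(\operatorname{vec}(R_0)) \, (\bar{g} \otimes g) \,, $$

where the second step uses that a Kronecker product of diagonal matrices is diagonal, and the last step swaps the roles of the two vectors in a diagonal-matrix-times-vector product.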
This leads to the Least Squares problem

$$ \hat{g} = \arg\min_g \left\| \operatorname{vec}(R - \Sigma_n) - \operatorname{diag}(\operatorname{vec}(R_0)) \, (\bar{g} \otimes g) \right\|^2 . $$
This problem cannot be solved in closed form. Alternatively, we can first solve an unstructured problem: define $x = \bar{g} \otimes g$ and solve

$$ \hat{x} = \operatorname{diag}(\operatorname{vec}(R_0))^{-1} \operatorname{vec}(R - \Sigma_n) \,, $$

or equivalently, if we define $X = g g^H$,

$$ \hat{X} = (R - \Sigma_n) \oslash R_0 \,, $$
where $\oslash$ denotes an entrywise matrix division. After estimating the unstructured $X$, we enforce the rank-1 structure $X = g g^H$ via a rank-1 approximation, and find an estimate for $g$. The pointwise division can lead to noise enhancement; this is remedied by using the result only as an initial estimate for a Gauss-Newton iteration [13] or by formulating a weighted least squares problem instead [45, 48].
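As an illustration, the following NumPy sketch implements the unstructured estimate and the rank-1 truncation (the function name and the eigendecomposition-based rank-1 step are our choices, not prescribed by the text; since $X = g g^H$ is Hermitian, its scaled principal eigenvector is the natural rank-1 factor):

import numpy as np

def estimate_gains(R, R0, Sigma_n):
    # Unstructured estimate Xhat = (R - Sigma_n) entrywise-divided by R0;
    # assumes R0 has no zero entries.
    X = (R - Sigma_n) / R0
    # Enforce the rank-1 structure X ~ g g^H via the dominant eigenpair
    # of the Hermitian part of X.
    Xh = 0.5 * (X + X.conj().T)
    w, V = np.linalg.eigh(Xh)                # eigenvalues in ascending order
    g = np.sqrt(max(w[-1], 0.0)) * V[:, -1]  # scaled principal eigenvector
    # g is identifiable only up to a unit-modulus phase; fix it by making
    # the first entry real and nonnegative.
    g = g * np.exp(-1j * np.angle(g[0]))
    return g

In line with the remark above, the result should be treated only as an initial estimate, to be refined by a Gauss-Newton or weighted least squares step.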
With $g$ known, we can again estimate $\Sigma$ and $\Sigma_n$, and iterate. Overall, we then obtain an Alternating Least Squares solution. A solution closer to the optimum can be found by solving the overall problem (35) as a covariance matching problem with a suitable parametrization; the more general algorithms in [31] lead to an asymptotically unbiased and statistically efficient solution.
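A compact sketch of the resulting alternation is given below, reusing numpy and estimate_gains from the previous sketch. It assumes $\Sigma$ is a diagonal matrix of calibrator powers, and estimate_powers is our crude stand-in for the Sect. 5.3 step; neither function name comes from the text.

def estimate_powers(Rc, A):
    # Stand-in for the Sect. 5.3 step: least squares source powers from
    # the off-diagonal entries, noise powers from the diagonal residual.
    p, Q = A.shape
    # Each column of M is vec(a_q a_q^H), so vec(Rc) ~ M sigma + vec(Sigma_n).
    M = np.stack([np.outer(A[:, q], A[:, q].conj()).ravel()
                  for q in range(Q)], axis=1)
    off = ~np.eye(p, dtype=bool).ravel()     # discard the noisy diagonal
    sigma, *_ = np.linalg.lstsq(M[off], Rc.ravel()[off], rcond=None)
    Sigma = np.diag(sigma.real)
    Sigma_n = np.diag(np.diag(Rc - A @ Sigma @ A.conj().T).real)
    return Sigma, Sigma_n

def als_calibration(R, A, n_iter=20):
    p = R.shape[0]
    g = np.ones(p, dtype=complex)            # initial gain estimate
    for _ in range(n_iter):
        # Step 1: correct R for the current gains; what remains is the
        # gain-free problem (27), solved for Sigma and Sigma_n.
        Ginv = np.diag(1.0 / g)
        Rc = Ginv @ R @ Ginv.conj().T
        Sigma, Sigma_nc = estimate_powers(Rc, A)
        # Correcting R scales the noise powers by 1/|g_i|^2; undo that.
        Sigma_n = np.diag(np.abs(g)**2) @ Sigma_nc
        # Step 2: with Sigma known, rebuild the reference model and
        # re-estimate the gains from the rank-1 structure.
        R0 = A @ Sigma @ A.conj().T
        g = estimate_gains(R, R0, Sigma_n)
    return g, Sigma, Sigma_n

As in the text, the loop alternates between the two partial problems; convergence to the global optimum is not guaranteed, which is one motivation for the covariance matching formulation mentioned above.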