Definition 32 (Multivariate Linear EIV Model)
Let $A_0 \in \mathbb{R}^{m \times n}$, $B_0 \in \mathbb{R}^{m \times d}$, and $\alpha \in \mathbb{R}^{d}$, and consider

$$B_0 = 1_m \alpha^T + A_0 X_0, \qquad A = A_0 + \Delta A, \qquad B = B_0 + \Delta B \tag{1.52}$$

where $1_m = [1, \ldots, 1]^T$ and $X_0$ is the $n \times d$ matrix of the true but unknown parameters to be estimated. The intercept vector $\alpha$ is either zero (no-intercept model) or unknown (intercept model) and must be estimated.
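As an illustration only, the following Python sketch generates synthetic data according to the intercept model (1.52); the sizes, the noise level, and all variable names (A0, X0, alpha, sigma_nu, and so on) are assumptions chosen for the example, not quantities prescribed by the text.

```python
# Minimal sketch: synthetic data drawn from the intercept EIV model (1.52).
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 500, 3, 2          # observations, regressors, responses (illustrative)
sigma_nu = 0.1               # common noise standard deviation (illustrative)

A0 = rng.normal(size=(m, n))                      # true, error-free regressors
X0 = rng.normal(size=(n, d))                      # true parameter matrix to estimate
alpha = rng.normal(size=d)                        # intercept vector
B0 = np.ones((m, 1)) * alpha + A0 @ X0            # B0 = 1_m alpha^T + A0 X0

# Observed data: both sides are perturbed by i.i.d. zero-mean errors
A = A0 + sigma_nu * rng.normal(size=(m, n))       # A = A0 + Delta A
B = B0 + sigma_nu * rng.normal(size=(m, d))       # B = B0 + Delta B
```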
Proposition 33 (Strong Consistency) If, in the EIV model, it is assumed that the rows of $[\Delta A; \Delta B]$ are i.i.d. with common zero-mean vector and common covariance matrix of the form $\Sigma = \sigma_\nu^2 I_{n+d}$, where $\sigma_\nu^2 > 0$ is unknown, then the TLS method is able to compute strongly consistent estimates of the unknown parameters $X_0$, $A_0$, $\alpha$, and $\sigma_\nu$.
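To make the proposition concrete, the following hedged Python sketch estimates $X_0$ and $\alpha$ using the standard SVD-based TLS solution computed from the centered data (this closed form is standard but is not derived in this section), and shows the estimation error shrinking as $m$ grows. The helper name tls_intercept and all numerical values are illustrative assumptions.

```python
# Sketch of strong consistency: the TLS estimate approaches (X0, alpha) as m grows.
import numpy as np

def tls_intercept(A, B):
    """TLS fit of B ~ 1_m alpha^T + A X via the SVD of the centered data."""
    a_bar, b_bar = A.mean(axis=0), B.mean(axis=0)
    Z = np.hstack([A - a_bar, B - b_bar])          # centered [A, B]
    n, d = A.shape[1], B.shape[1]
    V = np.linalg.svd(Z, full_matrices=False)[2].T
    V12, V22 = V[:n, n:], V[n:, n:]                # blocks of the d smallest right singular vectors
    X_hat = -V12 @ np.linalg.inv(V22)              # standard multivariate TLS solution
    alpha_hat = b_bar - X_hat.T @ a_bar            # intercept recovered from the column means
    return X_hat, alpha_hat

rng = np.random.default_rng(1)
n, d, sigma_nu = 3, 2, 0.1
X0, alpha = rng.normal(size=(n, d)), rng.normal(size=d)
for m in (100, 1_000, 10_000):                     # the error shrinks as m grows
    A0 = rng.normal(size=(m, n))
    A = A0 + sigma_nu * rng.normal(size=(m, n))
    B = np.ones((m, 1)) * alpha + A0 @ X0 + sigma_nu * rng.normal(size=(m, d))
    X_hat, alpha_hat = tls_intercept(A, B)
    print(m, np.linalg.norm(X_hat - X0), np.linalg.norm(alpha_hat - alpha))
```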
EIV models are useful when:
1. The primary goal is to estimate the true parameters of the model generating the data rather than to make predictions, and there is no a priori certainty that the observations are error-free.
2. The goal is the application of TLS to eigenvalue-eigenvector analysis or the SVD (TLS gives the hyperplane that passes through the intercept and is parallel to the plane spanned by the first right singular vectors of the data matrix [174]).
3. It is important to treat the variables symmetrically (i.e., there are no independent and dependent variables); a small numerical sketch of this symmetry follows the list.
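The symmetry in item 3 can be seen in the simplest unidimensional case ($n = d = 1$): ordinary LS gives two different lines depending on which variable is declared dependent, whereas the TLS (orthogonal regression) line, taken here from the smallest right singular vector of the centered data, treats both variables in the same way. The numbers below are purely illustrative assumptions.

```python
# Sketch: LS is asymmetric in the roles of the variables, TLS is not.
import numpy as np

rng = np.random.default_rng(2)
m = 5000
a0 = rng.normal(size=m)
b0 = 0.7 * a0 + 0.3                           # assumed true relation b0 = 0.7 a0 + 0.3
a = a0 + 0.2 * rng.normal(size=m)             # both coordinates observed with error
b = b0 + 0.2 * rng.normal(size=m)

C = np.cov(a, b)                              # sample covariance matrix
slope_b_on_a = C[0, 1] / C[0, 0]              # LS of b on a: attenuated toward zero
slope_a_on_b = C[1, 1] / C[0, 1]              # LS of a on b, rewritten as a slope in the (a, b) plane

# TLS line: its normal direction is the smallest right singular vector of the centered data
Z = np.column_stack([a - a.mean(), b - b.mean()])
v = np.linalg.svd(Z, full_matrices=False)[2][-1]
slope_tls = -v[0] / v[1]

print(slope_b_on_a, slope_a_on_b, slope_tls)  # the TLS slope lies between the two LS fits
```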
The ordinary LS solution $X$ of (1.52) is generally an inconsistent estimate of the true parameters $X_0$ (i.e., LS is asymptotically biased). Large errors (large $\sigma_\nu$), an ill-conditioned $A_0$, and, in the unidimensional case, a solution oriented close to the lowest right singular vector $v_n$ of $A_0$ all increase the bias and make the LS estimate more and more inaccurate. If the error covariance matrix $\Sigma$ is known, the asymptotic bias can be removed and a consistent estimator, called corrected least squares (CLS), can be derived [60,106,168]. CLS and TLS asymptotically yield the same consistent estimator of the true parameters [70,98]. Under the given assumption about the errors of the model, the TLS estimators $\hat{X}$, $\hat{\alpha}$, $\hat{A}$, and

$$\hat{\sigma}_\nu^2 = \frac{1}{mt} \sum_{i=n+1}^{n+d} \sigma_i^2, \qquad t = \min\{m-n, d\},$$

are, with probability 1, the unique maximum likelihood estimators of $X_0$, $\alpha$, $A_0$, and $\sigma_\nu^2$ [70].
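A hedged numerical sketch of the two points above (the CLS correction and the TLS noise-variance estimate) is given below for the no-intercept case with $\Sigma = \sigma_\nu^2 I$. The CLS formula $(A^T A - m\sigma_\nu^2 I)^{-1} A^T B$ is the standard correction for this particular noise model and is used here as an assumption rather than taken from the text; all sizes and values are illustrative.

```python
# Sketch: LS bias, the CLS correction (known sigma_nu), and the TLS noise-variance estimate.
import numpy as np

rng = np.random.default_rng(3)
m, n, d, sigma_nu = 20_000, 3, 2, 0.3
A0 = rng.normal(size=(m, n))
X0 = rng.normal(size=(n, d))
A = A0 + sigma_nu * rng.normal(size=(m, n))
B = A0 @ X0 + sigma_nu * rng.normal(size=(m, d))

# Ordinary LS: asymptotically biased because E[A^T A] = A0^T A0 + m sigma_nu^2 I
X_ls = np.linalg.lstsq(A, B, rcond=None)[0]

# Corrected LS (known sigma_nu): subtract the expected error contribution
X_cls = np.linalg.solve(A.T @ A - m * sigma_nu**2 * np.eye(n), A.T @ B)

# Noise-variance estimate from the d smallest singular values of [A, B]
s = np.linalg.svd(np.hstack([A, B]), compute_uv=False)
t = min(m - n, d)
sigma2_hat = np.sum(s[n:n + d] ** 2) / (m * t)

print(np.linalg.norm(X_ls - X0))    # a noticeable bias remains
print(np.linalg.norm(X_cls - X0))   # much closer to X0
print(sigma2_hat, sigma_nu**2)      # close to the true noise variance
```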
Remark 34 (Scaling) The assumption about the errors seems somewhat restrictive: It requires that all measurements in $A$ and $B$ be affected by errors and, moreover, that these errors be uncorrelated and equally sized. If these conditions are not satisfied, the classical TLS solution is no longer a consistent estimate
of the model parameters. Provided that the error covariance matrix is known up to a scalar factor, however, the data can be scaled so that the transformed errors satisfy the above assumption and consistency is restored.