Recall that in LS, the multiple linear regression model is hypothesized as
$$y(n) = W_o \Phi(n) + v(n) \qquad (4.1)$$
where $W_o$ represents the model weights, $\Phi(n)$ is the identity or a nonlinear function depending on whether the model is linear or nonlinear, and $v(n)$ is the modeling uncertainty. We will use $\Phi_n$ to mean $\Phi(x_n)$, where $x$ is the neuronal input.
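For concreteness, here is a minimal Python sketch of model (4.1) on toy data, assuming the identity map for $\Phi$ (a linear model) and Gaussian noise for $v(n)$; the weights, dimensions, and variable names are illustrative, not from the text:

```python
import numpy as np

# Toy instance of model (4.1): y(n) = W_o Phi(n) + v(n), with Phi the identity map.
rng = np.random.default_rng(0)
N, d = 100, 3                                 # N samples, d-dimensional input
W_o = np.array([0.5, -1.0, 2.0])              # "true" model weights (illustrative)
X = rng.standard_normal((N, d))               # row i is the neuronal input x_i^T
y = X @ W_o + 0.1 * rng.standard_normal(N)    # v(n): small Gaussian uncertainty
```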
Denote the transformed data matrix as $D_\Phi^T = [\Phi(x_1), \Phi(x_2), \ldots, \Phi(x_N)]$, the correlation matrix by

$$R_\Phi = D_\Phi^T D_\Phi / N \qquad (4.2)$$

the cross-correlation vector

$$p_\Phi = \frac{1}{N} \sum_{i=1}^{N} \Phi(x_i) y_i = D_\Phi^T y / N \qquad (4.3)$$

and the Gram matrix

$$G_\Phi = D_\Phi D_\Phi^T = [\kappa(x_i, x_j)]_{N \times N}$$
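With the toy data above, these quantities can be formed directly in NumPy; the variable names are chosen to mirror the notation in (4.2)-(4.3):

```python
D_Phi = X                        # transformed data matrix (rows Phi(x_i)^T), N x d
R_Phi = D_Phi.T @ D_Phi / N      # correlation matrix (4.2), d x d
p_Phi = D_Phi.T @ y / N          # cross-correlation vector (4.3), length d
G_Phi = D_Phi @ D_Phi.T          # Gram matrix [kappa(x_i, x_j)], N x N
```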
Then, the LS solution is known to satisfy the following normal equations [ 8 ]:

$$R_\Phi \hat{W} = p_\Phi \qquad (4.4)$$
Unfortunately, the LS problem is not always well posed. According to Hadamard [ 8 ], a problem is well posed if its solution exists, is unique, and depends smoothly on the data and parameters. More specifically, the following theorems show how to obtain a well-posed LS algorithm.
Theorem 4.1: (uniqueness theorem of LS [ 12 ]) The least-squares estimate $\hat{W}$ is unique if and only if the correlation matrix $R_\Phi$ is nonsingular, and the solution is given by

$$\hat{W} = R_\Phi^{-1} p_\Phi \qquad (4.5)$$
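Continuing the sketch, when $R_\Phi$ is nonsingular the estimate (4.5) follows from solving the normal equations (4.4); solving the linear system is numerically preferable to forming $R_\Phi^{-1}$ explicitly:

```python
# Solve R_Phi @ W_hat = p_Phi (4.4), i.e., W_hat = R_Phi^{-1} p_Phi (4.5).
W_hat = np.linalg.solve(R_Phi, p_Phi)
print(W_hat)                     # close to W_o = [0.5, -1.0, 2.0] for the toy data
```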
Theorem 4.2: (pseudo-inverse [ 12 ]) If $R_\Phi$ is singular, there are infinitely many solutions to the normal equations ( 4.4 ). Of them, there is a unique minimum-norm solution given by

$$\hat{W} = D_\Phi^+ y \qquad (4.6)$$
Here, $D_\Phi^+$ is the general pseudo-inverse of $D_\Phi$, given by

$$D_\Phi^+ = P \begin{bmatrix} S^{-1} & 0 \\ 0 & 0 \end{bmatrix} Q^T \qquad (4.7)$$

where $Q$ and $P$ are the orthonormal matrices and $S$ is the diagonal matrix of nonzero singular values in the singular value decomposition $D_\Phi = Q \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} P^T$.
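The following sketch builds the minimum-norm solution (4.6) from the SVD as in (4.7), inverting only the nonzero singular values; NumPy's `np.linalg.pinv` performs an equivalent construction (the tolerance rule used here is a common convention, assumed rather than taken from the text):

```python
# Minimum-norm LS solution W_hat = D_Phi^+ y (4.6), with D_Phi^+ from the SVD (4.7).
Q, s, PT = np.linalg.svd(D_Phi, full_matrices=False)   # D_Phi = Q diag(s) P^T
tol = max(D_Phi.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)    # invert only the nonzero singular values
D_pinv = PT.T @ np.diag(s_inv) @ Q.T       # D_Phi^+ as in (4.7)
W_min_norm = D_pinv @ y
assert np.allclose(D_pinv, np.linalg.pinv(D_Phi))      # matches NumPy's pseudo-inverse
```

For the full-rank toy data this coincides with the unique solution of (4.5); the distinction matters only when $R_\Phi$ is singular and (4.4) has infinitely many solutions.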
 