In general, when the data set contains n data points, the matrices P and
Y* will be:
$$
P =
\begin{bmatrix}
\dfrac{dG(r_0;\,X_1)}{dr} \\[6pt]
\dfrac{dG(r_0;\,X_2)}{dr} \\[4pt]
\vdots \\[4pt]
\dfrac{dG(r_0;\,X_n)}{dr}
\end{bmatrix}
\quad\text{and}\quad
Y^{*} =
\begin{bmatrix}
Y_1 - G(r_0;\,X_1) \\[2pt]
Y_2 - G(r_0;\,X_2) \\[2pt]
\vdots \\[2pt]
Y_n - G(r_0;\,X_n)
\end{bmatrix}.
\tag{8-29}
$$
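For concreteness, the column matrices $P$ and $Y^{*}$ of Eq. (8-29) can be assembled in code. The sketch below is an illustration, not the book's example: it assumes a hypothetical one-parameter model $G(r; X) = e^{-rX}$, whose derivative is $dG/dr = -X e^{-rX}$.

```python
import numpy as np

# Hypothetical one-parameter model (an assumption, not from the text):
# G(r; X) = exp(-r * X), so dG/dr = -X * exp(-r * X)
def G(r, X):
    return np.exp(-r * X)

def dG_dr(r, X):
    return -X * np.exp(-r * X)

# Synthetic data set of n = 5 points generated with r = 0.5
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = G(0.5, X)

r0 = 0.3  # initial guess for the parameter r

# The n x 1 column matrices of Eq. (8-29)
P = dG_dr(r0, X).reshape(-1, 1)        # derivatives dG(r0; X_i)/dr
Ystar = (Y - G(r0, X)).reshape(-1, 1)  # residuals Y_i - G(r0; X_i)
```

Each row of `P` holds the derivative of the model at one data point, evaluated at the current guess `r0`, and each row of `Ystar` holds the corresponding residual.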
To solve Eq. (8-28) for $\varepsilon$, we multiply both sides by the transposed matrix $P^T$:

$$
P^T P\,\varepsilon = P^T Y^{*}.
\tag{8-30}
$$
Now, $P^T P$ is a square matrix, and, if it is invertible, we can solve Eq. (8-30) and obtain

$$
\varepsilon = (P^T P)^{-1}(P^T Y^{*}).
\tag{8-31}
$$
Because $\varepsilon = r - r_0$, the next guess is calculated from $r = \varepsilon + r_0$. We call this improved guess $r_1$, use it in place of $r_0$ in Eq. (8-26), and then iterate. Schematically, the process can be represented as

$$
\text{guess} + \varepsilon \;\Rightarrow\; \text{better guess}.
$$
The process terminates when the calculated value for better guess is the same as guess; that is, when $\varepsilon = 0$. We have then found the answer for the parameter $r$.
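The guess $\Rightarrow$ better-guess loop can be sketched as follows. The model $G(r; X) = e^{-rX}$, the data, and the starting guess are illustrative assumptions, not part of the text; only the update rule $\varepsilon = (P^T P)^{-1}(P^T Y^{*})$, $r \leftarrow r + \varepsilon$ comes from Eqs. (8-29) and (8-31).

```python
import numpy as np

# Illustrative one-parameter model (an assumption): G(r; X) = exp(-r * X)
def G(r, X):
    return np.exp(-r * X)

def dG_dr(r, X):
    return -X * np.exp(-r * X)

def gauss_newton_1d(X, Y, r0, tol=1e-10, max_iter=50):
    """Iterate r <- r + eps with eps from Eq. (8-31) until eps is ~0."""
    r = r0
    for _ in range(max_iter):
        P = dG_dr(r, X)              # column of derivatives (Eq. 8-29)
        Ystar = Y - G(r, X)          # column of residuals (Eq. 8-29)
        eps = (P @ Ystar) / (P @ P)  # Eq. (8-31); P^T P is 1x1 here
        r = r + eps                  # guess + eps => better guess
        if abs(eps) < tol:           # eps = 0: answer found
            break
    return r

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = np.exp(-0.5 * X)                 # data generated with r = 0.5
r_hat = gauss_newton_1d(X, Y, r0=0.3)
```

With a single parameter, $P^T P$ is a $1 \times 1$ matrix, so the inverse in Eq. (8-31) reduces to an ordinary division.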
The Gauss-Newton method just described is not based upon minimizing
the SSR defined in Eq. (8-22), so how is it a least-squares procedure?
The answer is found in Eq. (8-31). When $\varepsilon = 0$, we have $(P^T P)^{-1}(P^T Y^{*}) = 0$. Because $(P^T P)^{-1}$ cannot be zero, as the matrix $(P^T P)$ was inverted, it must be that $P^T Y^{*} = 0$. For the matrices $P^T$ and $Y^{*}$ defined in Eq. (8-29), we then have, for $r$ = answer, the product

$$
P^T Y^{*} = \sum_i \frac{dG(r;\,X_i)}{dr}\,\bigl[\,Y_i - G(r;\,X_i)\,\bigr] = 0.
$$
On the other hand, differentiating Eq. (8-22) gives

$$
\frac{dSSR(r)}{dr} = -2 \sum_i \frac{dG(r;\,X_i)}{dr}\,\bigl[\,Y_i - G(r;\,X_i)\,\bigr].
$$

Thus, when $r$ is such that $P^T Y^{*} = 0$, we will also have $dSSR(r)/dr = 0$. This shows that we have found the least-squares estimate for the parameter $r$.
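This equivalence can be checked numerically. The sketch below fits the same illustrative model $G(r; X) = e^{-rX}$ (an assumption, not the book's example) to noisy data and confirms that $P^T Y^{*}$, and hence $dSSR/dr$, vanishes at the fitted $r$.

```python
import numpy as np

# Illustrative model (an assumption): G(r; X) = exp(-r * X)
def G(r, X):
    return np.exp(-r * X)

def dG_dr(r, X):
    return -X * np.exp(-r * X)

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
rng = np.random.default_rng(0)
Y = G(0.5, X) + 0.01 * rng.standard_normal(X.size)  # noisy data

# Gauss-Newton iteration until eps is ~0 (Eqs. 8-29 and 8-31)
r = 0.3
for _ in range(50):
    P, Ystar = dG_dr(r, X), Y - G(r, X)
    eps = (P @ Ystar) / (P @ P)
    r += eps
    if abs(eps) < 1e-12:
        break

# At the answer, P^T Y* = sum_i [dG/dr][Y_i - G] = 0,
# and therefore dSSR/dr = -2 * P^T Y* = 0 as well.
PtY = dG_dr(r, X) @ (Y - G(r, X))
dSSR = -2.0 * PtY
```

Because the data are noisy, the residuals themselves do not vanish at the fitted $r$; only their derivative-weighted sum $P^T Y^{*}$ does, which is exactly the least-squares stationarity condition.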