where we have used the fact that
\[
\frac{d\xi_j}{dQ} = -1
\]
for all $j$. The constraint (1.12) applied to (1.13) is the mathematical rendition of the desirability of having the average value as the most probable value of the measured variable.
We now solve (1.13) subject to the constraint
\[
\sum_{j=1}^{N} \xi_j = 0 \tag{1.14}
\]
by assuming that the derivative of the logarithm of the probability density with respect to $\xi_j$ can be expanded as a polynomial in the random error
\[
\frac{\partial \ln P(\xi_j)}{\partial \xi_j} = \sum_{k=0}^{\infty} C_k \xi_j^k , \tag{1.15}
\]
where the set of constants $\{C_k\}$ is determined by the equation of constraint
\[
\sum_{j=1}^{N} \sum_{k=0}^{\infty} C_k \xi_j^k = 0 . \tag{1.16}
\]
All the coefficients in (1.16) vanish except the one for $k = 1$, since by definition the fluctuations satisfy the constraint equation (1.14), so the coefficient $C_1 \neq 0$ still satisfies the constraint.
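This step can be checked numerically. The sketch below uses invented measurement values (not from the text) to show that the fluctuations about the arithmetic mean sum to zero, so the $k = 1$ term of (1.16) vanishes for any choice of $C_1$:

```python
# Why the k = 1 term of (1.16) imposes no restriction on C1: fluctuations
# about the arithmetic mean sum to zero, so C1 * sum(xi_j) vanishes for
# any C1.  The measurement values are invented for this illustration.
measurements = [9.8, 10.1, 10.3, 9.9, 10.4]
Q = sum(measurements) / len(measurements)   # average value Q
xi = [q - Q for q in measurements]          # fluctuations xi_j = Q_j - Q
print(abs(sum(xi)) < 1e-12)                 # True: constraint (1.14) holds
C1 = -2.0                                   # any nonzero C1 would do
print(abs(C1 * sum(xi)) < 1e-12)            # True: k = 1 term of (1.16) vanishes
```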
Thus, we obtain the equation for the probability density
\[
\frac{\partial \ln P(\xi_j)}{\partial \xi_j} = C_1 \xi_j , \tag{1.17}
\]
which integrates to
\[
P(\xi_j) \propto \exp\left[ \frac{C_1}{2} \xi_j^2 \right] . \tag{1.18}
\]
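A quick numerical sketch of the solution (1.18), using an assumed illustrative value $C_1 = -1$ (the derivation below only requires $C_1 < 0$), confirms that the density peaks at zero error and has a finite integral:

```python
import math

# Numerical sketch of (1.18): P(xi) proportional to exp(C1 * xi**2 / 2).
# C1 = -1.0 is an illustrative choice; the text only requires C1 < 0.
C1 = -1.0

def P(xi):
    """Unnormalized probability density of the random error xi."""
    return math.exp(C1 * xi ** 2 / 2)

# The extreme value sits at xi = 0 and is a maximum when C1 < 0.
print(all(P(0.0) > P(x) for x in (-0.5, -0.1, 0.1, 0.5)))  # True

# The integral over the real line is finite, so P can be normalized.
# A trapezoid rule on [-10, 10] reproduces the closed form sqrt(2*pi/(-C1)).
n, a, b = 20000, -10.0, 10.0
h = (b - a) / n
Z = h * (0.5 * (P(a) + P(b)) + sum(P(a + i * h) for i in range(1, n)))
print(abs(Z - math.sqrt(2 * math.pi / (-C1))) < 1e-6)  # True
```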
The first thing to notice about this solution is that its extreme value occurs at $\xi_j = 0$, that is, at $Q_j = Q$ as required. For this to be a maximum, as Gauss required and Simpson speculated, the constant must be negative, $C_1 < 0$, so that the second derivative of $P$ at the extremum is negative. With a negative constant the function decreases symmetrically to zero on either side, allowing the function to be normalized,
\[
\int_{-\infty}^{\infty} P(\xi_j) \, d\xi_j = 1 , \tag{1.19}
\]
and because of this normalization the function can be interpreted as a probability
density. Moreover, we can calculate the variance to be
\[
\sigma^2 = \int_{-\infty}^{\infty} \xi_j^2 \, P(\xi_j) \, d\xi_j , \tag{1.20}
\]
allowing us to express the normalized probability density as
 