Functions c_j are supposed to be continuous and differentiable over the whole region of the argument values. Note that in the method of penalty functions, unlike the linear case (4.16), under conditions (4.26) the relation between the number of constraints J and the number of parameters K can be arbitrary; in particular, J > K is admissible (and, certainly, particular functions c_j need not depend on all the arguments at once). Instead of minimizing discrepancy R (4.13), the method of penalty functions searches for the minimum of the following value:
$$
R_C = R^2 + R_H^2 = \frac{\sum_{i=1}^{N} w_i (y_i - \bar{y}_i)^2}{\sum_{i=1}^{N} w_i} + \frac{\sum_{j=1}^{J} h_j c_j^2(x_1, \ldots, x_K)}{\sum_{i=1}^{N} w_i}\,,
\qquad (4.27)
$$
where h_j is a certain constant. The idea of the method is elementary. Indeed, the additional sum R_H^2 in (4.27) with functions c_j contributes nothing to discrepancy R^2 if conditions (4.26) are satisfied exactly. The worse the constraint conditions (4.26) are satisfied (i.e. the farther the values of c_j are from zero), the greater the contribution of the additional sum to the total value R_C. This contribution acts as a penalty for the violation of the constraint conditions, hence the name of the method (the penalty functions are the expressions h_j c_j^2(x_1, ..., x_K)). During the search for the minimum of R_C the solution tends to the parameter values for which the additional contribution of the conditions is minimal, i.e. to the most exact satisfaction of constraint conditions (4.26). The choice of the constants h_j, j = 1, ..., J in (4.27) is rather arbitrary. It is clear that the greater they are, the more exactly the constraint conditions (4.26) are satisfied by the solution. Theoretically, the constants h_j have to tend to infinity (Vasilyev F 1988), but in practice the greater h_j are, the more nonlinear the problem becomes and the more difficult it is to adapt the calculation algorithm to it. Thus, it is necessary to select the constants h_j carefully in practice. Usually all constants h_j are chosen equal to each other, i.e. the algorithm is managed by a single penalty-function parameter h = h_j.
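The trade-off controlled by h can be seen in a one-dimensional toy problem. The sketch below (the data value, the constraint c(x) = x − 1, and the crude grid search are illustrative assumptions, not taken from the book) minimizes a penalized discrepancy for several values of h; the larger h is, the closer the minimizer sits to the constraint value.

```python
# Toy illustration of the penalty-function idea of (4.27):
# minimize R_C(x) = (y - x)^2 + h * c(x)^2 with one constraint c(x) = x - 1 = 0.
# (y, c, and the scalar parameter x are illustrative, not the book's problem.)

def r_c(x, y, h):
    """Penalized discrepancy: data misfit plus the penalty term h*c(x)^2."""
    c = x - 1.0                      # constraint function; c = 0 when satisfied
    return (y - x) ** 2 + h * c ** 2

def minimize_scalar(f, lo, hi, steps=40000):
    """Crude grid search: adequate for a 1-D demonstration."""
    best_x, best_f = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x

y = 3.0
for h in (1.0, 10.0, 1000.0):
    x_star = minimize_scalar(lambda x: r_c(x, y, h), 0.0, 4.0)
    # larger h pushes the minimizer toward the constraint value x = 1
    print(h, round(x_star, 3))
```

For this quadratic case the minimizer is (y + h)/(1 + h), so the constraint is only satisfied exactly in the limit h → ∞, which is the theoretical requirement quoted above.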
To solve the problem of searching for the minimum of value (4.27) we apply linearization: at first the solution is obtained for linear functions g_i and c_j, and then the nonlinear case is reduced to the linear one. From the equation system $\partial R_C / \partial x_k = 0$, $k = 1, \ldots, K$, the following is obtained with linear dependences (4.9) and (4.16) instead of (4.14):
$$
\sum_{j=1}^{K} x_j \left[ \sum_{i=1}^{N} w_i g_{ij} g_{ik} + \sum_{l=1}^{J} h_l c_{lj} c_{lk} \right]
= \sum_{i=1}^{N} (y_i - g_{i0}) w_i g_{ik} - \sum_{l=1}^{J} h_l c_{l0} c_{lk}\,,
\qquad k = 1, \ldots, K.
\qquad (4.28)
$$
Introducing the vector and matrix of the constraints $C_0 = (c_{j0})$, $C = (c_{jk})$, $j = 1, \ldots, J$, $k = 1, \ldots, K$, and also the diagonal matrix $H = (h_{jl})$, $h_{jj} = h_j$, $h_{jl} = 0$ for $j \ne l$, analogous to matrix W, the solution of system (4.28) is obtained:
$$
X = (G^+ W G + C^+ H C)^{-1} \left( G^+ W (Y - G_0) - C^+ H C_0 \right).
\qquad (4.29)
$$
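Formula (4.29) is a direct linear-algebra computation once the matrices are assembled. The NumPy sketch below implements it for an illustrative straight-line model with one linear constraint (the data values, weights, and constraint are assumptions for the example, not taken from the book); with h = 0 it reduces to the ordinary weighted least-squares fit, and with large h the constraint is enforced almost exactly.

```python
# Minimal sketch of the penalty solution (4.29):
#   X = (G^+ W G + C^+ H C)^(-1) (G^+ W (Y - G0) - C^+ H C0),
# where "+" denotes transposition. Data, model, and constraint are illustrative.
import numpy as np

def penalty_solution(G, W, Y, G0, C, H, C0):
    """Solve the linearized penalized normal equations (4.28)/(4.29)."""
    A = G.T @ W @ G + C.T @ H @ C
    b = G.T @ W @ (Y - G0) - C.T @ H @ C0
    return np.linalg.solve(A, b)

# Straight-line model y ~ x1 + x2*t at N = 5 points t = 0..4:
t = np.arange(5.0)
G = np.column_stack([np.ones_like(t), t])   # derivatives g_ik
G0 = np.zeros(5)                            # g_i0 = 0 (model linear from start)
Y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])     # "measurements" (invented)
W = np.eye(5)                               # equal weights

# One linear constraint c(x) = c0 + x1 + x2 = 0 with c0 = -2, i.e. x1 + x2 = 2:
C = np.array([[1.0, 1.0]])
C0 = np.array([-2.0])

for h in (0.0, 1e6):
    H = np.array([[h]])
    X = penalty_solution(G, W, Y, G0, C, H, C0)
    print("h =", h, "X =", X, "constraint residual:", C0 + C @ X)
```

Note that H enters only through the products C^+HC and C^+HC_0, so very large h degrades the conditioning of the matrix to be inverted; this is the numerical counterpart of the remark above that h must be selected carefully.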