of the corresponding outputs, and m is a column vector of parameters which must
be determined to provide the optimal mapping from X to y, such that
$$
\begin{pmatrix}
x_{11} & x_{12} & x_{13} & \cdots & x_{1d} \\
x_{21} & x_{22} & x_{23} & \cdots & x_{2d} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{p_{\max}1} & x_{p_{\max}2} & x_{p_{\max}3} & \cdots & x_{p_{\max}d}
\end{pmatrix}
\begin{pmatrix}
m_1 \\ m_2 \\ m_3 \\ \vdots \\ m_d
\end{pmatrix}
=
\begin{pmatrix}
y_1 \\ y_2 \\ \vdots \\ y_{p_{\max}}
\end{pmatrix}
\tag{4.15}
$$
The rank r of the matrix X is the number of linearly independent rows, which
will affect the existence or uniqueness of solutions for m.
If the matrix X is square and non-singular, then the unique solution to (4.14) is $m = X^{-1}y$. If X is not square or singular, then (5.6) is modified and an attempt is made to find a vector m which minimizes

$$
\left| Xm - y \right|^{2}
\tag{4.16}
$$
This was proved by Penrose [71], and the unique solution to this problem is provided by $m = X^{\#}y$, where $X^{\#}$ is the pseudo-inverse matrix [71, 72].
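As an illustrative sketch (not provided in the source), the minimizer of (4.16) can be computed numerically with NumPy; the array shapes and random data below are assumptions chosen for demonstration only.

```python
import numpy as np

# Synthetic design matrix X (p_max x d) and output vector y (p_max,).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)

# m = X# y: the Moore-Penrose pseudo-inverse gives the least-squares
# (and, for rank-deficient X, minimum-norm) solution of |Xm - y|^2.
m = np.linalg.pinv(X) @ y

# Equivalent, and usually numerically preferable in practice:
m_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(m, m_lstsq)
```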
In the case of LLR modeling, the input training data are organized in a kd-tree, which can be built with time complexity O(M log M). A kd-tree (short for k-dimensional tree) is a space-partitioning data structure for organizing points in a k-dimensional space, so that the LLR algorithm can be implemented using a minimum number of direct evaluations. More details on the theoretical aspects of the kd-tree can be found in Jones [41] and Durrant [23].
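To make the connection concrete, the sketch below combines a kd-tree neighbour search with the pseudo-inverse least-squares fit in an LLR-style prediction. It is not the authors' implementation; the function name, the neighbourhood size k and the synthetic data are assumptions for illustration, and SciPy's cKDTree is used for the neighbour search.

```python
import numpy as np
from scipy.spatial import cKDTree

def llr_predict(X_train, y_train, x_query, k=20):
    """Predict at x_query from a local linear model fitted to the
    k nearest training points, found with a kd-tree."""
    tree = cKDTree(X_train)             # O(M log M) construction; in practice
                                        # the tree is built once and reused
    _, idx = tree.query(x_query, k=k)   # indices of the k nearest neighbours
    X_loc, y_loc = X_train[idx], y_train[idx]
    A = np.hstack([X_loc, np.ones((k, 1))])         # add an intercept column
    m, *_ = np.linalg.lstsq(A, y_loc, rcond=None)   # pseudo-inverse solution
    return np.append(x_query, 1.0) @ m

# Example with synthetic data (shapes and values are illustrative only).
rng = np.random.default_rng(1)
X_train = rng.uniform(size=(500, 3))
y_train = X_train @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=500)
print(llr_predict(X_train, y_train, rng.uniform(size=3)))
```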
4.3 Artificial Neural Networks Model
The story of ANNs started in the early 1940s, when McCulloch and Pitts developed the first computational representation of a neuron [58]. Later, Rosenblatt proposed the idea of perceptrons [75], which used single-layer feed-forward networks of McCulloch-Pitts neurons and focused on computational tasks with the help of weights and a training algorithm. The applications of ANNs are based on their ability to mimic the human mental and neural structure in order to construct a good approximation of the functional relationships between past and future values of a time series. Supervised learning is the most commonly used form of ANN, in which the input is presented to the network along with the desired output, and the weights are adjusted so that the network attempts to produce the desired output. There are different learning algorithms; a popular one is the back-propagation algorithm, which employs gradient descent and gradient descent with momentum. These are often too slow for practical problems because they require small learning rates for stable learning.
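The weight update behind gradient descent with momentum can be sketched in a few lines. The code below is an illustrative example rather than the algorithm used in the source; the function name, learning rate and momentum coefficient are assumptions chosen only for demonstration.

```python
import numpy as np

def momentum_update(w, grad, velocity, lr=0.01, beta=0.9):
    """One gradient-descent-with-momentum step for a weight array.

    The new update is the negative gradient scaled by the learning rate
    plus a fraction (beta) of the previous update, which damps
    oscillations and speeds progress along shallow directions.
    """
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

# Toy example: minimize E(w) = |w|^2 / 2, whose gradient is simply w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = momentum_update(w, grad=w, velocity=v)
print(w)  # approaches the minimum at the origin
```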
Algorithms such as conjugate gradient (CG), quasi-Newton and Levenberg-Marquardt provide faster alternatives.
 