have used 2D images of human faces. However, 2D face recognition techniques are
known to suffer from the inherent problems of illumination and structural variation,
and are sensitive to factors such as background, change in human expression, pose,
and aging [ 7 ]. Utilizing 3D face information was shown to improve face recognition
performance, especially with respect to these variations [ 8 , 9 ].
The complexity of an ANN depends on the number of neurons and on the learning algorithm: the higher the complexity, the more computation- and memory-intensive the network becomes. The number of neurons needed in an ANN is a function of the mapping or classifying power of the neuron itself [2, 10]. Therefore, for high-dimensional problems it is imperative to look for a higher-dimensional neuron model that can directly process high-dimensional information; such a neuron serves as the building block of a powerful ANN with fewer neurons. Various researchers have independently proposed extensions of the real-valued neuron (one dimension) to higher dimensions [2, 11]. Most of them follow the natural extension of number fields, e.g., real numbers (one dimension), complex numbers (two dimensions), 3D real-valued vectors (three dimensions), quaternions (four dimensions), etc., to represent higher-dimensional neurons. It is therefore worthwhile to explore the capabilities of 3D vector-valued neurons in function mapping and pattern classification problems in 3D space. The activation function for a 3D vector-valued neuron can be
defined as the 3D extension of a real activation function. Let V = [Vx, Vy, Vz]^T be the net internal potential of a neuron; then its output is defined as:

    Y = f(V) = [f(Vx), f(Vy), f(Vz)]^T    (6.1)
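As a small sketch of Eq. (6.1), the 3D activation simply applies a real activation function to each component of the net potential. The choice of tanh below is an illustrative assumption, not mandated by the text, and the function name `f3d` is hypothetical:

```python
import math

def f3d(v, f=math.tanh):
    """Apply a real activation f componentwise to a 3D vector V = [Vx, Vy, Vz],
    giving Y = [f(Vx), f(Vy), f(Vz)] as in Eq. (6.1)."""
    return [f(c) for c in v]

# Example: net internal potential of one 3D vector-valued neuron
V = [0.5, -1.0, 2.0]
Y = f3d(V)  # [tanh(0.5), tanh(-1.0), tanh(2.0)]
```

Any bounded real nonlinearity could be substituted for tanh; the componentwise application is what makes the neuron "3D vector-valued".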
6.1.1 Learning Rule
In our multilayer network we consider three layers: an input layer, a single hidden layer, and an output layer. A three-layer network can approximate any continuous nonlinear mapping. In a 3D vector-valued neural network, the bias values and the input-output signals are all 3D real-valued vectors, while the weights are 3D orthogonal matrices; all operations in such a network are therefore scalar, vector, and matrix operations. A 3D vector-valued back-propagation algorithm (3DV-BP) is considered here for training the multilayer network. It is a natural extension of the complex-valued back-propagation algorithm [10, 12], and it can learn 3D motion just as complex-BP can learn 2D motion [13].
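The forward pass described above can be sketched as follows, using a 3x3 rotation matrix as an example of an orthogonal weight and a 3D vector bias. All names, the single-input setup, and the 45-degree rotation are illustrative assumptions, not details from the text:

```python
import math

def matvec(W, x):
    """Multiply a 3x3 matrix W by a 3D vector x."""
    return [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]

def rot_z(theta):
    """A 3x3 rotation about the z-axis: one example of an orthogonal weight matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def neuron_3d(inputs, weights, bias, f=math.tanh):
    """One 3D vector-valued neuron: V = sum_l W_l I_l + bias,
    then Y = [f(Vx), f(Vy), f(Vz)] componentwise."""
    V = list(bias)
    for W, I in zip(weights, inputs):
        V = [v + w for v, w in zip(V, matvec(W, I))]
    return [f(c) for c in V]

# One input vector rotated 45 degrees about z, with zero bias
I1 = [1.0, 0.0, 0.0]
Y = neuron_3d([I1], [rot_z(math.pi / 4)], [0.0, 0.0, 0.0])
```

Because the weight is a rotation, it moves the input vector in 3D space without changing its length, which is what lets such a network represent 3D motion.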
In a three-layer network (L-M-N), the first layer has L inputs (I_l), where l = 1 ... L, and the second and output layers consist of M and N vector-valued neurons, respectively. By convention, w_lm is the weight that connects the l-th neuron to the m-th neuron, α_m = [α_mx, α_my, α_mz]^T is the bias weight of the m-th neuron, η ∈ [0, 1] is the learning rate, and f' is the derivative of the non-linear function f. Let V be the net internal potential and Y the output of a neuron. Let e_n = [e_nx, e_ny, e_nz]^T be the difference between the desired and actual value at the n-th output, so that |e_n|^2 = e_nx^2 + e_ny^2 + e_nz^2.
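The per-output error and its squared magnitude can be checked with a small sketch; the desired and actual values below are made-up numbers and the function names are hypothetical:

```python
def error_3d(desired, actual):
    """e_n = desired - actual, componentwise, for 3D vector outputs."""
    return [d - a for d, a in zip(desired, actual)]

def sq_mag(e):
    """|e_n|^2 = e_nx^2 + e_ny^2 + e_nz^2."""
    return sum(c * c for c in e)

e = error_3d([1.0, 0.0, 0.5], [0.5, 0.25, 0.5])
# sq_mag(e) = 0.25 + 0.0625 + 0.0 = 0.3125
```

Summing |e_n|^2 over all N outputs gives the scalar quantity that gradient-descent training, such as 3DV-BP, minimizes.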