[Figure 4.5: Neuron with Three Inputs. Inputs x1, x2, x3 are weighted by w1, w2, w3 to produce out = f(sum of w_j * x_j for j = 1 to 3).]
on a [0,1] scale. To get the actual predicted value of a network, outputs from the
output layer must be denormalized.
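The denormalization step can be sketched as follows. This is a minimal illustration assuming simple min-max normalization was used; the function name and the range values are hypothetical, not from the text.

```python
# Sketch: mapping a network output from the [0, 1] scale back to the
# original units of the target variable (inverse of min-max scaling).
def denormalize(y_scaled, y_min, y_max):
    """Invert min-max normalization: [0, 1] -> [y_min, y_max]."""
    return y_min + y_scaled * (y_max - y_min)

# Example: a target variable that ranged from 10 to 50 in the training data.
print(denormalize(0.25, 10.0, 50.0))  # -> 20.0
```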
The internal structure of an individual neuron is depicted in Figure 4.5. The
output value of the neuron is the weighted sum of the inputs applied to a
function f. Note that if the weights for a given neuron sum to one and
all inputs are in the range [0, 1], then the weighted sum of the inputs will also lie in
[0, 1]. It is the weights, unique to each artificial neuron, that are equivalent to the
synapse strengths in the human brain. The method used to determine values for
these weights will be presented shortly.
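The neuron of Figure 4.5 can be sketched in a few lines. This is a minimal illustration, not the book's code; the function name is hypothetical, and the identity activation and sample values are chosen only to show the weighted-sum mechanics.

```python
# Sketch of the neuron in Figure 4.5: the output is an activation
# function f applied to the weighted sum of the inputs.
def neuron_output(inputs, weights, f):
    """Compute f(sum of w_j * x_j) for one artificial neuron."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return f(total)

# Weights sum to one and inputs lie in [0, 1], so the weighted sum
# also stays in [0, 1], as the text notes. Identity activation here.
out = neuron_output([0.2, 0.9, 0.5], [0.5, 0.3, 0.2], f=lambda s: s)
```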
The function f of the neuron is known as the activation function. It is
specified when the network is constructed and can be as simple as the identity
function. An activation function frequently employed in ANN implementations
is the S-shaped logistic or sigmoid function:
f(x) = 1 / (1 + e^(-a(2x - 1)))
where a defines the steepness of the curve. See Figure 4.6. A desirable feature
of this function is that inputs in the [0, 1] range generate outputs in the same
range. The S-shaped curve is desirable because it somewhat mimics the
“firing” of neurons in the brain. At lower input levels the output remains very
low, then suddenly begins to rise as input reaches a threshold (about 0.4 in
Figure 4.6).
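The logistic activation can be sketched as below. The exact exponent is reconstructed from a garbled equation, so treat the centered form a(2x - 1) as an assumption; the function name and the sample steepness value are hypothetical.

```python
import math

# Sketch of the S-shaped logistic (sigmoid) activation, with a
# steepness parameter a; the centering term keeps inputs in [0, 1]
# mapped to outputs in (0, 1), crossing 0.5 at x = 0.5.
def sigmoid(x, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * (2.0 * x - 1.0)))

# A larger a makes the S-curve steeper: low inputs stay near 0,
# and the output rises sharply once a threshold is reached.
low, mid, high = sigmoid(0.0, 5.0), sigmoid(0.5, 5.0), sigmoid(1.0, 5.0)
```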
The weights of the network are computed using the training dataset containing
known values for both input and output variables. The process is as follows:
1. Construct a network. If the ANN is to be used for regression, create just one
output layer neuron. When used for classification, create one output layer
neuron for each possible output value.
2. Assign random values to each of the neuron weights.
3. For each observation in the training set,