from functions like the logistic function described earlier to the Heaviside
step function. Figure 11.3 shows three popular choices of activation function
in neural networks: the logistic function, the hyperbolic tangent, and the
Heaviside step function.
Figure 11.3: The logistic function, the hyperbolic tangent, and the Heaviside step function.
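As a concrete illustration, the three activation functions of Figure 11.3 can be written in a few lines of Python. This is a minimal sketch using NumPy; the function names are illustrative, not taken from the text.

```python
import numpy as np

def logistic(x):
    """Logistic (sigmoid) function: smooth and saturating, output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def hyperbolic_tangent(x):
    """Hyperbolic tangent: smooth and saturating, output in (-1, 1)."""
    return np.tanh(x)

def heaviside_step(x):
    """Heaviside step: hard threshold at zero, output is 0 or 1."""
    return np.where(x >= 0.0, 1.0, 0.0)
```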
The units are then arranged into layers. There is an input layer, an output
layer, and some number of hidden layers. The input layer drives the
activation functions of the units in the first hidden layer (if any). The first
hidden layer's units in turn drive the activation functions of the next layer,
and so on, until the last hidden layer drives the activation functions of the
output layer.
There is often a bias unit—which plays the same role as the intercept in
linear models—that provides a constant input to each neuron.
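A single layer with a bias unit can be sketched as follows. This is a minimal Python/NumPy sketch under the assumptions above; the names are illustrative, not from the text.

```python
import numpy as np

def layer_output(inputs, weights, bias, activation=np.tanh):
    """Compute one layer's outputs.

    Each unit receives every output of the previous layer (inputs),
    plus a constant contribution from the bias unit, which plays the
    same role as the intercept in a linear model.
    """
    return activation(weights @ inputs + bias)
```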
A typical neural network arrangement is shown in Figure 11.4. In this case,
the input layer and the hidden layer are the same size, and the output layer
contains only two units. In general, there is no restriction on the number
of units in a layer relative to any other layer. Because each unit is connected
to all the units of the previous layer, all of the units will in theory learn
different aspects of the input signal.
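To make the arrangement concrete, the sketch below wires up a network like the one in Figure 11.4: every unit is fully connected to the previous layer, the hidden layer matches the input layer in size, and the output layer has two units. The choice of three inputs, random initial weights, and the logistic activation throughout are illustrative assumptions, not details given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed sizes: the text only fixes the hidden layer to match the
# input layer and the output layer to two units; three inputs is a guess.
n_in, n_hidden, n_out = 3, 3, 2

# Fully connected layers: each unit sees every output of the previous
# layer, plus a constant bias term per unit.
W1 = rng.normal(size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    h = logistic(W1 @ x + b1)      # input layer drives the hidden layer
    return logistic(W2 @ h + b2)   # hidden layer drives the output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```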