Figure 9.4 Hidden or output layer unit $j$: The inputs to unit $j$ are outputs from the previous layer. These are multiplied by their corresponding weights ($w_{1j}, w_{2j}, \ldots, w_{nj}$) to form a weighted sum, which is added to the bias associated with unit $j$. A nonlinear activation function $f$ is applied to the net input. (For ease of explanation, the inputs to unit $j$ are labeled $y_1, y_2, \ldots, y_n$. If unit $j$ were in the first hidden layer, then these inputs would correspond to the input tuple $(x_1, x_2, \ldots, x_n)$.)
its output, $O_j$, is equal to its input value, $I_j$. Next, the net input and output of each unit
in the hidden and output layers are computed. The net input to a unit in the hidden or
output layers is computed as a linear combination of its inputs. To help illustrate this
point, a hidden layer or output layer unit is shown in Figure 9.4. Each such unit has
a number of inputs to it that are, in fact, the outputs of the units connected to it in
the previous layer. Each connection has a weight. To compute the net input to the unit,
each input connected to the unit is multiplied by its corresponding weight, and this is
summed. Given a unit, $j$, in a hidden or output layer, the net input, $I_j$, to unit $j$ is

$$I_j = \sum_i w_{ij} O_i + \theta_j, \qquad (9.4)$$
where $w_{ij}$ is the weight of the connection from unit $i$ in the previous layer to unit $j$; $O_i$ is the output of unit $i$ from the previous layer; and $\theta_j$ is the bias of the unit. The bias acts as a threshold in that it serves to vary the activity of the unit.
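As a concrete illustration, the weighted sum of Eq. (9.4) takes only a few lines of Python. The weight, output, and bias values below are made-up examples, not data from the text:

```python
# Net input to unit j: I_j = sum_i(w_ij * O_i) + theta_j  (Eq. 9.4).
# Illustrative values only; any real network would learn these.
weights = [0.2, -0.3, 0.4]   # w_ij for each unit i in the previous layer
outputs = [1.0, 0.5, 0.8]    # O_i: outputs of the previous layer's units
bias = -0.4                  # theta_j: bias of unit j

net_input = sum(w * o for w, o in zip(weights, outputs)) + bias
print(net_input)  # close to -0.03 for these values
```

Each previous-layer output is paired with the weight on its connection, the products are summed, and the bias is added last, mirroring the equation term by term.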
Each unit in the hidden and output layers takes its net input and then applies an activation function to it, as illustrated in Figure 9.4. The function symbolizes the activation of the neuron represented by the unit. The logistic, or sigmoid, function is used. Given the net input $I_j$ to unit $j$, then $O_j$, the output of unit $j$, is computed as
$$O_j = \frac{1}{1 + e^{-I_j}}. \qquad (9.5)$$
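Continuing the sketch above, Eq. (9.5) can be written as a small helper function; the input value passed in is an arbitrary example:

```python
import math

def sigmoid(net_input):
    """Logistic (sigmoid) activation: O_j = 1 / (1 + e^(-I_j))  (Eq. 9.5)."""
    return 1.0 / (1.0 + math.exp(-net_input))

# The sigmoid squashes any net input into the open interval (0, 1);
# a net input of 0 sits exactly at the midpoint.
print(sigmoid(0.0))  # 0.5, since e^0 = 1
```

Because the output always lies strictly between 0 and 1, it can feed directly into the next layer's weighted sums, which is why this function is a common choice for hidden and output units here.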