Fig 1. (A) ANN "single perceptron" model. The inset shows the two mathematical terms that describe a neuron k, where x_1, x_2, …, x_m are the input signals (e.g. amino acids in the sequence); w_k1, w_k2, …, w_km are the synaptic weights of neuron k; u_k is the linear combiner output due to the input signals; b_k is the bias; ϕ(·) is the activation function; and y_k is the output signal of the neuron. The use of the bias b_k has the effect of applying an affine transformation to the output u_k of the linear combiner in the model, as shown by v_k = u_k + b_k. (B) ANN supervised learning model implementation. The ANN learns to map a set of signal inputs to specified outputs in the training data. The adaptation (value changes) of the weights is achieved through a cost function for error minimization and through the training algorithm during the training epochs.
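The supervised scheme of Fig 1B can be illustrated with a minimal sketch: a single sigmoid neuron y_k = ϕ(Σ_i w_ki x_i + b_k) whose weights and bias are adapted by gradient descent on a squared-error cost over repeated training epochs. The training function, learning rate, epoch count, and the toy AND-gate data below are all illustrative assumptions, not the source's actual implementation.

```python
import math
import random

def sigmoid(v):
    # logistic activation phi(v); squashes the net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def train(samples, epochs=5000, lr=1.0):
    """Fit a single neuron y = phi(sum_i w_i * x_i + b) by gradient descent
    on the squared-error cost E = (y - target)^2 / 2, epoch by epoch."""
    n = len(samples[0][0])
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # synaptic weights w_k
    b = 0.0                                            # bias b_k
    for _ in range(epochs):
        for x, target in samples:
            u = sum(wi * xi for wi, xi in zip(w, x))   # linear combiner u_k
            y = sigmoid(u + b)                         # output y_k = phi(v_k)
            grad = (y - target) * y * (1.0 - y)        # dE/dv_k
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

# hypothetical toy training data: the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

After training, the neuron's thresholded outputs reproduce the target mapping, which is the "map a set of signal inputs to specified outputs" behavior the caption describes.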
neuron can adopt either a positive or a negative value. Second, the weighted input signals are summed. Third, an activation function ϕ(·) limits the amplitude range of the output of the neuron to a finite value. For example, the typical output of a neuron falls within either the closed interval [0, 1] or, alternatively, [−1, 1]. Many types of activation functions can be used to train an ANN. 113,114 The model also includes an externally applied bias, denoted by b_k. The bias b_k has the effect of increasing or lowering the net input (u_k) to the activation function, depending on whether it is positive or negative, respectively.
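These three steps can be sketched directly: compute the linear combiner u_k, add the bias b_k, and pass v_k through a bounded activation ϕ. The helper function, input values, and weights below are hypothetical; the logistic function bounds the output in (0, 1) and tanh in (−1, 1), matching the intervals above, and a positive bias raises the net input while a negative bias lowers it.

```python
import math

def neuron_output(x, w, b, phi):
    """Single-neuron forward pass: u_k = sum_i w_i * x_i, v_k = u_k + b_k,
    y_k = phi(v_k). A sketch of the model described in the text."""
    u = sum(wi * xi for wi, xi in zip(w, x))  # step 1-2: weight and sum inputs
    return phi(u + b)                          # step 3: bounded activation

def logistic(v):
    # output confined to the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

# math.tanh confines the output to (-1, 1) instead.

x = [0.5, -1.0, 2.0]   # hypothetical input signals x_1..x_3
w = [0.4, 0.3, -0.2]   # hypothetical synaptic weights

y_pos_bias = neuron_output(x, w, +1.0, logistic)
y_neg_bias = neuron_output(x, w, -1.0, logistic)
# the positive bias increases the net input, so y_pos_bias > y_neg_bias
```

Swapping `logistic` for `math.tanh` changes only the output range, not the structure of the computation, which is why the choice of activation is an independent design decision.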
The architecture (i.e. layout and connections between the neurons)
that can be adopted into the design of an ANN depends on the task and
the function for which the system is designed. In the most common