not exactly match the perfectly scripted 'E'. In this case, the gradient descent method would classify the
input as an 'E' by sending the ball into the local 'E' minimum. Likewise, if the imperfect letter were closer
to the ideal 'F', the ball would fall into the 'F' minimum.
The analogy we have built up so far is for a network that has already been trained. To train our
sheet-ball system, we would begin with a perfectly straight sheet and then present a perfect 'E' as an
input. To ensure that the sheet would remember the 'E' in the future, we would change the contour to
create a local minimum at exactly this point. The change in sheet topology is analogous to changing
the weights in a neural network. The effect would be to warp the sheet around the local minimum such
that even a distorted 'E' input would roll to the lowest point. The same type of process could be used to
generate other local minima to correspond to 'F', 'G', and 'H'. In the end, the sheet would be trained to
recognize distorted letters.
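The "ball rolling downhill" picture can be made concrete with a short sketch: gradient descent on a toy one-dimensional energy landscape with two wells standing in for the 'E' and 'F' minima. The function names and the landscape here are illustrative assumptions, not anything from the text.

```python
def descend(f, x, lr=0.01, steps=2000, h=1e-5):
    """Roll 'downhill' on f by repeatedly stepping against the
    numerical gradient (central difference)."""
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * grad
    return x

# A toy "sheet" with two minima, standing in for the 'E' and 'F' wells.
energy = lambda x: (x**2 - 1)**2      # minima at x = -1 and x = +1

print(round(descend(energy, 0.8), 2))   # a distorted input near +1 settles at 1.0
print(round(descend(energy, -0.3), 2))  # an input nearer -1 settles at -1.0
```

Each starting point rolls to whichever minimum it is closest to, which is exactly how the trained sheet classifies a distorted letter.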
7.4.5 Network Structure and Connectivity
A consideration when building a neural network is how many perceptrons are needed to make the desired
classification. If the network is too small, the error even at the global minimum may be unacceptable,
and the network is an underfit. On the other hand, if the network is very large (an overfit), the increased
computation may yield only a small gain in accuracy. A more recent solution to this problem is to
use tiling. A tiling algorithm begins with a small network that is trained to classify inputs using the
methods discussed above. When the global minimum is reached, the error is assessed. If the error is not
acceptable, more perceptrons are added and new connections are made. The training then resumes on the
larger network. This iterative process can be continued until the network produces results that are within
a desired error tolerance.
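The grow-and-retrain loop at the heart of tiling can be sketched as follows. The helper names (`tile_until_fit`, `train_fn`, `error_fn`) and the toy bin-averaging "network" are stand-ins invented for illustration; a real tiling algorithm would train and extend an actual perceptron network.

```python
def tile_until_fit(train_fn, error_fn, data, tol, start_units=2, step=1, max_units=64):
    """Schematic tiling loop: train a small model, assess the error,
    add units and retrain until the error tolerance is met."""
    units = start_units
    while units <= max_units:
        model = train_fn(units, data)
        err = error_fn(model, data)
        if err <= tol:
            return model, units, err
        units += step                    # add "perceptrons", then retrain
    raise RuntimeError("error tolerance not reached")

# Toy stand-in: approximate y = x^2 on [0, 1] with a k-bin lookup table;
# more "units" (bins) means a lower approximation error.
data = [(i / 100, (i / 100) ** 2) for i in range(101)]

def train_fn(k, data):
    bins = [[] for _ in range(k)]
    for x, y in data:
        bins[min(int(x * k), k - 1)].append(y)
    return [sum(b) / len(b) for b in bins]   # mean output per bin

def error_fn(model, data):
    k = len(model)
    return max(abs(model[min(int(x * k), k - 1)] - y) for x, y in data)

model, units, err = tile_until_fit(train_fn, error_fn, data, tol=0.05)
print(units, round(err, 3))
```

The loop stops at the smallest capacity that meets the tolerance, which is the practical appeal of tiling over guessing the network size up front.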
7.4.6 Recurrent and Unsupervised Networks
Thus far, we have only considered supervised training of a neural network. Although supervision may
have different interpretations, generally it means that the user specifies known inputs paired with
desired target outputs, T. A learning algorithm, such as Eq. (7.10), can then be used to adjust the weights.
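Supervised weight adjustment of this kind can be sketched with a generic delta rule, in which each weight is nudged in proportion to the error between the target T and the current output. This is an illustrative rule of the same family as Eq. (7.10); the exact form in the text may differ.

```python
def delta_rule_step(w, x, target, lr=0.1):
    """One supervised update: move the weights so the linear output
    y = w . x shifts toward the target T (generic delta rule)."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]

# Repeatedly presenting one input/target pair drives the output to T.
w = [0.0, 0.0]
for _ in range(50):
    w = delta_rule_step(w, [1.0, 2.0], target=1.0)
print([round(wi, 2) for wi in w])   # -> [0.2, 0.4], giving w . x = 1.0
```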
In an unsupervised network, T may not be given and it is up to the network to develop classification
patterns that pair similar inputs. The neural networks discussed thus far have been of a class called
feedforward because information flows from one layer to the next in one direction. For a network to
develop classification patterns, it must be able to adjust weights through feedback connections. Networks
with built-in feedback are often called recurrent networks and can be used not only to classify patterns but
also to store distributed and adaptable memories of information and processes. The two most commonly
cited types of recurrent networks are shown in Fig. 7.12, but the number of network variations has grown
rapidly and is still growing. Although we will not discuss these types of networks, they have been shown
to perform some incredible feats, such as unaided pattern recognition, storage of short-term memory, and
planning.
7.5 NUMERICAL METHODS
Although it is possible to generate the computer code needed to simulate networks of neurons and
artificial neural networks, a number of software packages exist which drastically reduce the start-up time. To
simulate networks of neurons, two programs have established themselves as the gold standards of neural
compartment simulations: GENESIS and NEURON. GENESIS ( http://www.genesis-sim.org )
and NEURON ( http://neuron.duke.edu ) were born out of the need for a common platform for