function for a structure prediction problem.
Neural Networks
Neural networks are simulations loosely patterned after biological neurons. They are said to learn, or to be trainable: in molecular biology, they learn to associate input patterns with output patterns in a way that allows them to categorize new patterns and to extrapolate trends from data. In operation, a neural network is presented with a pattern on its input nodes, and it produces an output pattern based on what it has learned.
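This input-to-output mapping can be sketched in a few lines of Python. The network size, weight values, and sigmoid activation below are illustrative assumptions, not taken from the text: a two-input, two-hidden-node, one-output network whose weights are simply hard-coded as if training had already occurred.

```python
import math

def sigmoid(x):
    # Squashing activation: maps any real value into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """Propagate an input pattern through one hidden layer to the output layer.

    w_hidden: one weight vector per hidden node.
    w_output: one weight vector per output node.
    """
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden))) for ws in w_output]

# Hypothetical "learned" weights, chosen only for illustration.
w_hidden = [[6.0, -6.0], [-6.0, 6.0]]
w_output = [[8.0, 8.0]]

# Present a pattern on the input nodes; the network produces an output pattern.
print(forward([1.0, 0.0], w_hidden, w_output))
```

In a real application the weights would be set by a training procedure rather than by hand; the point here is only that recognition is a cheap feed-forward computation once the weights exist.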
The power of neural networks is that they can apply this learning to new input patterns. For this
reason, neural networks, like genetic algorithms, are often referred to as a form of "soft" or "fuzzy"
computing because the answers or pattern matching provided by these methods represent best
guesses, based on the data available for analysis. Neural networks always produce an output pattern
when presented with an input pattern. However, the resultant categorization isn't necessarily the
best answer. The best answer, computed using traditional algorithms, may require weeks of
computing time on a desktop workstation. In comparison, a neural network may be able to
categorize the data in a few seconds using the same hardware.
The inner workings of a neural network are independent of the problem domain, in that the same
neural network configuration (with different training) can be used to recognize a nucleotide triplet, or
a critical pattern on a patient's EKG tracing, or a potential mid-air collision when used with radar
data. It's up to the researcher to determine what the input and output patterns represent. That said,
neural networks, like other fuzzy systems, work best in a narrowly defined domain in which input
patterns are likely to follow the same progression or logic. As the number and complexity of the
possible input patterns increases, the ability of a neural network to classify input patterns
deteriorates. For example, a neural network that works well classifying proteins within a given
protein family will likely fail to classify the universe of known proteins, despite additional training.
An increase in the number and complexity of input patterns typically requires reconfiguring or
rewriting a neural network with more layers and different interconnections. For example, the simple
three-layer neural network shown in Figure 7-9 may have to be replaced by a four-layer neural
network with double the number of interconnections. As a result, training time—the time required for
a neural network to consistently associate an input pattern with an output pattern correctly—may be
extended from a few minutes to several hours, even on high-performance hardware. Recognition
time should be relatively unaffected.
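The training process described above, repeatedly adjusting internal weights until input patterns consistently produce the correct output patterns, can be illustrated with a deliberately tiny example. Everything here is an assumption for demonstration: a single sigmoid unit (not the multi-layer networks discussed above), a toy training set encoding logical OR, and an arbitrary learning rate and epoch count.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: associate each 2-bit input pattern with a target output.
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
rate = 0.5  # learning rate: how far each weight moves per correction

# Training loop: each pass (epoch) over the patterns nudges the weights
# toward values that reproduce the target outputs.
for epoch in range(5000):
    for inputs, target in patterns:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # Delta rule: error scaled by the sigmoid's derivative, out * (1 - out).
        err = (target - out) * out * (1 - out)
        weights = [w + rate * err * x for w, x in zip(weights, inputs)]
        bias += rate * err

# After training, the unit associates each input pattern with its target.
for inputs, target in patterns:
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, round(out))
```

Even this one-unit example hints at why training time grows with network size: every epoch touches every weight for every pattern, so adding layers, nodes, and patterns multiplies the work, while recognition remains a single forward pass.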
Figure 7-9. Neural Network. One of the limitations of a neural network is
that the significance of the strength of the internal interconnections is
unknown. As a result, as a pattern recognizer or categorizer, the neural
network can be treated as a black box.