There are at least two roles for models in a generic nonadaptive weld process controller. First, a model can be used to define the initial equipment parameters of the process: the welder specifies the desired DWP, such as weld bead width and penetration, and the model is used to arrive at suitable IWP, such as welding current and travel speed. Second, a model can be executed in parallel with the actual process to compute DWP that cannot be measured directly in real time. Thus, at any time a weld model can provide the controller with an estimate of the weld bead penetration, even though penetration may not be measurable in real time. Adaptive controllers may also require a process model to compare against the physical process, so that the resulting errors can drive parameter adaptation.
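As a rough sketch of these two roles, the fragment below pairs a toy forward model with a brute-force inverse search for initial IWP and with a parallel estimate of an unmeasured DWP. The model, its coefficients, and the specific parameter names (welding current, travel speed, bead width, penetration) are illustrative assumptions, not relations given in the text.

```python
# Hypothetical illustration only: the model form, coefficients, and search
# ranges are assumptions, not values from the text.

def weld_model(current_A, speed_mm_s):
    """Toy forward model: IWP (current, travel speed) -> DWP (width, penetration)."""
    width_mm = 0.02 * current_A - 0.5 * speed_mm_s
    penetration_mm = 0.01 * current_A - 0.2 * speed_mm_s
    return width_mm, penetration_mm

def select_initial_iwp(target_width, target_pen):
    """Role 1: search the model for IWP that best match the desired DWP."""
    best, best_err = None, float("inf")
    for current in range(80, 301, 5):                        # candidate currents (A)
        for speed in [s / 10 for s in range(10, 101, 5)]:    # travel speeds (mm/s)
            w, p = weld_model(current, speed)
            err = (w - target_width) ** 2 + (p - target_pen) ** 2
            if err < best_err:
                best, best_err = (current, speed), err
    return best

def estimate_unmeasured_dwp(current_A, speed_mm_s):
    """Role 2: run the model in parallel with the process to estimate
    penetration, which cannot be sensed directly in real time."""
    _, penetration = weld_model(current_A, speed_mm_s)
    return penetration

iwp = select_initial_iwp(target_width=6.0, target_pen=2.0)
print("initial IWP (current, speed):", iwp)
print("estimated penetration:", estimate_unmeasured_dwp(*iwp))
```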
7.6 Neural Networks
Recent successes in employing artificial neural network models for solving various computationally
difficult problems have inspired renewed research in the area. Early work by McCulloch [55] and Widrow [56]
focused largely on mathematical modeling, while more recent research has augmented theoretical analysis
with computer simulations and implementation demonstrations. Numerous variants of pattern classifiers
using neural networks have been studied by Hopfield [60] and Lippmann [59]. Introductory texts on
the subject may be found in [57] and [58].
As the name indicates, a neural network resembles, to a certain degree, biological nervous systems as we currently understand them. While most traditional computers rely on one or a few computational units to perform their tasks, a neural network typically consists of a relatively large number of computational units connected by an even larger number of communication links. The underlying principle is to examine numerous hypotheses simultaneously and to process data in a distributed fashion. In essence, a neural network is a self-adaptive structure that incrementally alters its inner workings until it achieves the desired performance.
The fundamental building block of an artificial neural network is the perceptron, introduced by Rosenblatt [54]. Originally designed for pattern recognition and as a research tool for modeling brain-like functions of a biological system, the perceptron pattern-mapping architecture generalizes to the processing element found in the back-error-propagation systems used today. These processing elements (shown in Fig. 7.3) are linked together with variable weights into a massively parallel environment. The capacity of a neural net to approach human-like performance on certain tasks is commonly attributed to its ability to entertain many competing hypotheses simultaneously.
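A minimal sketch of one such processing element follows, assuming a sigmoid as the nonlinear transfer function; the text does not fix a particular choice, and the input and weight values are arbitrary.

```python
import math

def processing_element(inputs, weights, transfer=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """One processing element as in Fig. 7.3: weight each input, sum the
    products, and pass the sum through a nonlinear transfer function."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return transfer(s)

# Example: three inputs with arbitrary illustrative weights.
print(processing_element([0.5, -1.0, 2.0], [0.8, 0.3, -0.1]))
```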
A neural network and its back-propagation adaptation procedure are best illustrated by an example.
Figure 7.4 shows a small neural network consisting of eight nodes, arranged in two hidden layers of three nodes each and one output layer of two nodes. Each node i in the first hidden layer produces a single numeric output, which we denote x_i^(1). Similarly, the nodes of the second hidden layer are labeled x_0^(2), x_1^(2), and x_2^(2). The three inputs and two outputs of the network are x_0 through x_2 and y_0 through y_1, respectively. Each node accepts numeric data through a number of input links, each of which multiplies the incoming value by a connection weight (see Fig. 7.3).
FIGURE 7.3 The processing element of a neural network: N inputs are multiplied by weights w_1 through w_N, summed, and passed through a nonlinear transfer function to produce the output.
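As a concrete companion to Figure 7.4, the sketch below wires up the same topology (three inputs, two hidden layers of three nodes, an output layer of two nodes) and performs one forward pass. The sigmoid transfer function and the random initial weights are assumptions for illustration, and the back-propagation adaptation step itself is not shown here.

```python
import math
import random

random.seed(0)  # reproducible illustrative weights

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def make_layer(n_inputs, n_nodes):
    """Random connection weights for one layer of nodes (no bias term, as in Fig. 7.3)."""
    return [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_nodes)]

def layer_output(weights, inputs):
    """Each node weights its inputs, sums the products, and applies the transfer function."""
    return [sigmoid(sum(wi * xi for wi, xi in zip(w, inputs))) for w in weights]

# The Fig. 7.4 topology: 3 inputs, two hidden layers of 3 nodes, 2 outputs.
hidden1 = make_layer(3, 3)   # produces x_0^(1) .. x_2^(1)
hidden2 = make_layer(3, 3)   # produces x_0^(2) .. x_2^(2)
output  = make_layer(3, 2)   # produces y_0 and y_1

x = [0.2, 0.7, -0.4]          # inputs x_0 .. x_2
x1 = layer_output(hidden1, x)
x2 = layer_output(hidden2, x1)
y = layer_output(output, x2)  # outputs y_0, y_1
print("y =", y)
```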