To the best of our knowledge, these are the best published results on the
problem. The detail of the results, together with an application of wavelet
networks to the same problem, can be found in [Oussar 1998].
2.7.5 Casting Dynamic Models into a Canonical Form
In the previous sections, we assumed that no prior knowledge of the process
was available to the model designer, so that the form of the algebraic or differ-
ential equations that would be derived from a physical analysis was unknown.
That is a typical black-box modeling situation.
In the next section, we show that any prior knowledge, available under the
form of algebraic or differential equations, can be embodied into the structure
of a neural network. The model thus designed is a “gray-box” or “semiphysical
model”. The design of such a model may lead to a complex recurrent network
structure, which is neither an input-output representation nor a state-space
representation. Since the training algorithms described in the previous
sections apply to state-space models or input-output models, how can one
train networks that are neither? Should one design a special training
algorithm for each specific architecture?
Similarly, Chap. 4 presents a set of network “models” (where the term
model does not have its scientific meaning, but its commercial meaning, as
in car or TV model), which are generally named after their authors (Hopfield
model, Jordan model, Elman model, etc.), and whose structures differ from
the architectures discussed above. Again, one may ask whether each specific
architecture requires a specific training algorithm.
The answer to that question takes advantage of the following property.
Property. Any recurrent neural network, however complex, can be cast into
a minimal state-space form, called canonical form, to which the training algo-
rithms discussed in the previous section can be applied. The latter are therefore
fully generic, since they can train any recurrent network, provided it has been
cast into a canonical form.
Therefore, the present section shows how the canonical form of an arbi-
trary recurrent neural network, stemming, for instance, from semiphysical
modeling, can be derived. That task is performed in two steps:
- derivation of the order of the network,
- derivation of a state vector and of the corresponding canonical form.
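To make the target of those two steps concrete, the canonical form amounts to a state equation x(k+1) = phi(x(k), u(k)) together with an output equation y(k) = psi(x(k)), where the dimension of the state vector x is the order of the network. The following sketch illustrates that structure in Python; the single-layer maps phi and psi and their weights are illustrative assumptions, not a model taken from the text.

```python
import numpy as np

# Sketch of a recurrent network written in canonical (state-space) form:
#   x(k+1) = phi(x(k), u(k))   state equation; order n = dim(x)
#   y(k)   = psi(x(k))         output equation
# The weights below are arbitrary, fixed for the demonstration only.

rng = np.random.default_rng(0)
n, m = 2, 1                       # assumed state order and input dimension
Wx = rng.normal(size=(n, n + m))  # state-update weights (illustrative)
Wy = rng.normal(size=(1, n))      # output weights (illustrative)

def phi(x, u):
    """State equation: next state from current state and input."""
    return np.tanh(Wx @ np.concatenate([x, u]))

def psi(x):
    """Output equation: output computed from the state alone."""
    return Wy @ x

# Simulate the canonical form over a short input sequence.
x = np.zeros(n)
outputs = []
for k in range(5):
    u = np.array([np.sin(0.5 * k)])
    outputs.append(float(psi(x)[0]))
    x = phi(x, u)
```

Whatever the original architecture of a recurrent network, training it once it has been cast into this form only requires the algorithms of the previous section, applied to phi and psi.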
A reminder: when designing a purely black-box model, without any prior
knowledge, the model is sought directly in a canonical form.
2.7.5.1 Definition
The canonical form of a recurrent neural network is the minimal state-space
representation