to analyze time-dependent data, since the network elements have special
interconnections. These networks are particularly suitable for time series
predictions (Rowe and Roberts, 1998). This means that once the input
data are transferred to a certain network element (e.g. hidden layer of
neurons), they are memorized and integrated with subsequent inputs.
Past information is therefore used to predict both current and future
system states. It can be expected that analysis and modeling of time-
series and time-dependent processes are most appropriately achieved
using DNNs.
DNNs are often called recurrent networks, owing to their interconnected
architecture. The flexibility of DNNs comes from the use of different
processing elements that contain feedback and delay line taps to express
dynamic behavior (Panerai et al., 2004). Neural networks used for time
series analysis include Memory Neuron Networks (MNN), Dynamic Neural
Units (DNU), Feedback Networks (FN), and others (Shaw et al., 1997).
Connections between the neurons can be set up to have a memory,
which is important for dynamic networks. The order of the memory
specifies by how many time steps the signal is delayed. Treatment of
dynamic data requires these kinds of temporal dependencies in signal
channeling. The network topology (architecture), together with the control
system for the time delay of the signal, forms a complete system. Correction
of weights in a DNN is somewhat more complicated than in static
neural networks. It is possible to use the technique called back
propagation through time, where the BP signal is buffered and reversed,
which enables the forward and BP signals to be synchronized in time
(Petrović et al., 2009).
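To illustrate the notion of memory order described above: a connection with memory of order k simply buffers the signal for k time steps before passing it on. A minimal sketch in Python (the helper name make_delay_line and the zero initial state are choices for illustration, not from the source):

```python
from collections import deque

def make_delay_line(order):
    # Tapped delay line: returns a step function whose output at time t
    # is the input that was fed in `order` time steps earlier.
    # The buffer is assumed to start filled with zeros.
    buf = deque([0.0] * order, maxlen=order)
    def step(x):
        if order == 0:
            return x
        y = buf[0]      # oldest sample = input delayed by `order` steps
        buf.append(x)   # deque with maxlen drops the oldest automatically
        return y
    return step

delay2 = make_delay_line(2)
outputs = [delay2(x) for x in [1.0, 2.0, 3.0, 4.0, 5.0]]
# the first two outputs come from the zero initial state,
# after which the input reappears delayed by two steps
```

In a dynamic network, such delayed connections are what let past inputs influence the current output.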
Gamma memory is a specific short-term recurrent memory structure, which
preserves temporal information about the system. A distinctive feature
of gamma memory is the number of taps (the number of signal delays). For
a given number of taps, Gamma memory remembers previous system
states and integrates them with the current ones. Gamma memory is
schematically represented in Figure 5.3. From the point of view of signal
transmittance, Gamma memory can be seen as a recursive low-pass filter
(each output gives a more filtered version of the original signal), which
acts as an infinite impulse response filter. It is ideal for adaptive systems, since its
interpolation weight µ can be adapted using common algorithms.
The interpolation weight controls the depth of the Gamma memory, and
stability is guaranteed when 0 < µ < 2. Gamma memory is actually a
combination of a Tapped-Delay-Line (TDL) and a simple feedback neuron.
Therefore the signals g_k(t) at the taps k at time t of the Gamma memory
are convolutions of the input with the impulse response of tap k (Petrović et al., 2009).
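The tap structure described above can be sketched as follows. This assumes one common formulation of the Gamma memory update, g_k(t) = (1 − µ)·g_k(t−1) + µ·g_{k−1}(t−1) with tap 0 carrying the input; the function name and array layout are illustrative choices, not from the source:

```python
import numpy as np

def gamma_memory(x, num_taps, mu):
    # Gamma memory: tap 0 carries the input signal; each deeper tap is a
    # leaky (low-pass filtered) version of the previous tap, so deeper taps
    # hold older, smoother information. Stability requires 0 < mu < 2.
    T = len(x)
    g = np.zeros((num_taps + 1, T))
    g[0] = x
    for t in range(1, T):
        for k in range(1, num_taps + 1):
            g[k, t] = (1.0 - mu) * g[k, t - 1] + mu * g[k - 1, t - 1]
    return g

# With mu = 1 the recursion reduces to a pure tapped delay line:
# tap k reproduces the input delayed by k steps.
signal = np.arange(8.0)
taps = gamma_memory(signal, 3, 1.0)
```

For µ = 1 each tap is an exact delay (the TDL limit), while for 0 < µ < 1 the memory trades temporal resolution for depth, with each tap averaging over a longer stretch of the past; adapting µ therefore adapts the memory depth, as noted above.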