Models that involve lateral interactions can be found in the neural networks literature as well. In the remainder of this section, some of these models are reviewed.
Linear threshold networks with local excitation and global inhibition. Among the simplest models of lateral interaction are models with global inhibition. Globally inhibitory networks can, for instance, implement winner-take-all dynamics. Hahnloser [89] analyzed networks of excitatory linear threshold units and a single inhibitory unit that computes the sum of the excitatory activity. When the excitatory units implement a perfect autapse, a self-excitatory connection that exactly maintains a unit's activity, only network states with a single active neuron are stable. This neuron is the one that receives the strongest external input. All other units have an output of exactly zero, because the global feedback lowers their activity below the threshold of the transfer function.
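This single-winner behavior can be sketched in a few lines of code (a minimal sketch of the described dynamics, not Hahnloser's published equations; the inhibition strength beta and the input values are illustrative assumptions):

```python
# Hypothetical sketch: excitatory linear threshold units with perfect
# autapses (self-weight 1) and a global inhibitory unit that feeds back
# the summed excitatory activity. beta and b are illustrative assumptions.
def simulate_wta(b, beta=1.0, dt=0.1, steps=500):
    """Euler-integrate dx_i/dt = -x_i + max(0, b_i + x_i - beta * sum(x))."""
    x = list(b)
    for _ in range(steps):
        total = sum(x)  # global inhibitory unit
        x = [xi + dt * (-xi + max(0.0, bi + xi - beta * total))
             for xi, bi in zip(x, b)]
    return x

x = simulate_wta([0.5, 1.0, 0.8])
```

Only the unit with the strongest input survives the competition; with beta = 1 its activity settles at its external input, while the outputs of all other units are driven to exactly zero by the global feedback.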
The behavior of the network is more complex if the excitatory units interact
directly. Hahnloser et al. [90] designed a chip consisting of a ring of neurons with
local excitatory connections. A single neuron computed the average activity and
provided global inhibitory feedback.
The analysis of the network demonstrated the coexistence of digital selection
and analog sensitivity. The authors identified two types of neuron subsets in the
network. The activity of forbidden sets is not stable, while persistent activity of a
permitted set can be maintained by the network. It was shown that all subsets of
permitted sets are also permitted and all supersets of forbidden sets are forbidden.
Local excitatory connections widen the set of active units in the winner-take-all dynamics from a single unit to a patch of consecutive units with a blob-shaped activity profile. In the network, the amplitude of the blob depends linearly on the level of uniform background input.
If more than one unit receives external input, the network places the blob at the location of the strongest stimulus. The network also shows hysteresis: an already selected stimulus continues to win the competition even when a different unit receives a slightly larger input. Only when the difference between the two stimuli exceeds a threshold does the activity blob jump to the stronger stimulus.
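The blob selection can be sketched in a small simulation (a hypothetical sketch: the ring size, connection weights, and stimuli below are my own illustrative choices, not parameters of the actual chip):

```python
# Hypothetical sketch of a ring of linear threshold units with self- and
# nearest-neighbor excitation plus uniform global inhibitory feedback.
# All parameter values are illustrative assumptions.
def simulate_ring(s, w_self=0.5, w_near=0.3, beta=0.4, dt=0.1, steps=600):
    n = len(s)
    x = [0.0] * n
    for _ in range(steps):
        total = sum(x)  # global inhibitory neuron: pools all activity
        x = [xi + dt * (-xi + max(0.0,
                 s[i] + w_self * xi
                 + w_near * (x[(i - 1) % n] + x[(i + 1) % n])
                 - beta * total))
             for i, xi in enumerate(x)]
    return x

s = [0.0] * 21
s[4], s[5], s[6] = 0.5, 0.7, 0.5       # weaker stimulus
s[14], s[15], s[16] = 0.75, 1.0, 0.75  # stronger stimulus
x = simulate_ring(s)
```

With these weights, a contiguous blob of active units settles around the stronger stimulus at unit 15, while the weaker stimulus is completely suppressed by the global feedback.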
Neural Fields. Amari [7] was among the first to analyze networks with lateral connectivity. He simplified the analysis by using the linear threshold activation function f(x) = max(0, x) and by generalizing the discrete neurons to a continuous field. The simplest case of such a model is a one-dimensional field consisting of a single layer:
τ ∂u(x, t)/∂t = −u(x, t) + ∫ w(x, x′) f[u(x′, t)] dx′ + h + s(x, t),
where u(x, t) is the membrane potential at position x, and h < 0 determines the resting potential. Amari assumed space-invariant symmetric lateral connectivity w(x, x′) = ω(|x − x′|). For constant input s(x), he proved the existence of five types of pattern dynamics:
- monostable field in which all excitations will die out,
- monostable field which is entirely excited,
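The first two of these regimes can be illustrated by discretizing the field equation on a one-dimensional grid (a rough sketch; the Gaussian kernel, grid resolution, and parameter values are illustrative assumptions, not taken from Amari's analysis):

```python
import math

# Illustrative Euler discretization of the field equation
#   tau * du(x,t)/dt = -u + integral w(x,x') f[u(x')] dx' + h + s(x,t)
# with a purely excitatory Gaussian kernel of net gain ~0.5 (an assumption).
def simulate_field(h, n=40, dx=0.5, tau=1.0, dt=0.1, steps=400):
    f = lambda v: max(0.0, v)                   # linear threshold output
    w = lambda d: 0.2 * math.exp(-d * d / 2.0)  # kernel omega(|x - x'|)
    u = [0.0] * n
    for step in range(steps):
        t = step * dt
        # transient input bump in the middle, switched off at t = 10
        s = [1.0 if abs(i - n // 2) <= 2 and t < 10.0 else 0.0
             for i in range(n)]
        rec = [sum(w(abs(i - j) * dx) * f(uj) for j, uj in enumerate(u)) * dx
               for i in range(n)]
        u = [ui + dt / tau * (-ui + rec[i] + h + s[i])
             for i, ui in enumerate(u)]
    return u

# negative resting level: the stimulus-evoked excitation dies out;
# positive resting level: the entire field ends up excited
dead = simulate_field(h=-0.2)
alive = simulate_field(h=0.2)
```

Because the recurrent gain is below one, the field is monostable in both runs: with h = −0.2 the whole field relaxes below zero after the stimulus is removed, while with h = +0.2 every position settles at a positive potential.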