here, because nothing has been done so far about using such devices in "standard" neural computing applications, such as pattern recognition. Several open problems and research topics will be mentioned below (a long list of such topics, prepared for the Fifth Brainstorming Week on Membrane Computing, Seville, January 29-February 2, 2007, can be found in (Păun, 2007)), but the most important one is probably this: connecting SN P systems with neural computing and, more generally, looking for applications of SN P systems.
AN INFORMAL PRESENTATION OF SN P SYSTEMS

Very briefly, an SN P system consists of a set of neurons (cells, consisting of only one membrane) placed in the nodes of a directed graph and sending signals (spikes, denoted in what follows by the symbol a) along synapses (arcs of the graph). Thus, the architecture is that of a tissue-like P system, with only one kind of object present in the cells. The objects evolve by means of spiking rules, which are of the form E/a^c → a; d, where E is a regular expression over {a} and c, d are natural numbers, c ≥ 1, d ≥ 0. The meaning is that a neuron containing k spikes such that a^k belongs to the language L(E) identified by the expression E, with k ≥ c, can consume c spikes and produce one spike, after a delay of d steps. This spike is sent to all neurons to which a synapse exists outgoing from the neuron where the rule was applied. There are also forgetting rules, of the form a^s → λ, with the meaning that s ≥ 1 spikes are removed, provided that the neuron contains exactly s spikes. We say that such rules "cover" the neuron: all spikes are taken into consideration when using a rule.

The system works in a synchronized manner, i.e., in each time unit, each neuron which can use a rule must do so, but the work of the system is sequential in each neuron: at most one rule is used in each neuron.
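To make the semantics above concrete, here is a minimal sketch, in Python, of one synchronized step of such a system. The dictionary encoding, the helper names applicable and step, and the simplified treatment of delays are my own illustrative choices, not constructions taken from the cited papers.

```python
import re
import random

# Illustrative encoding: a neuron holds a number of spikes and a list of
# rules; a spiking rule is (E, c, d), standing for E/a^c -> a; d, and a
# forgetting rule a^s -> lambda is encoded as (None, s, 0).
neurons = {
    "n1":  {"spikes": 3, "rules": [("a(aa)*", 1, 0)]},   # fires on an odd number of spikes
    "n2":  {"spikes": 3, "rules": [(None, 3, 0)]},        # forgets exactly 3 spikes
    "out": {"spikes": 0, "rules": [("a", 1, 0)]},         # fires on exactly one spike
}
synapses = {"n1": ["n2", "out"], "n2": ["out"], "out": []}

def applicable(rule, k):
    E, c, _d = rule
    if E is None:                 # forgetting rule: needs exactly s spikes
        return k == c
    # spiking rule: a^k must belong to L(E) and at least c spikes must be present
    return k >= c and re.fullmatch(E, "a" * k) is not None

def step(neurons, synapses):
    """One synchronized step; rule delays (d > 0) are ignored for brevity."""
    chosen = {}
    for name, n in neurons.items():
        options = [r for r in n["rules"] if applicable(r, n["spikes"])]
        if options:                                # each neuron uses at most one rule,
            chosen[name] = random.choice(options)  # picked nondeterministically
    spiking = []
    for name, (E, c, _d) in chosen.items():
        neurons[name]["spikes"] -= c               # consume c spikes
        if E is not None:
            spiking.append(name)                   # this neuron emits one spike
    for name in spiking:                           # the spike goes along every
        for target in synapses[name]:              # outgoing synapse simultaneously
            neurons[target]["spikes"] += 1
    return "out" in spiking                        # did the designated neuron spike?

train = [step(neurons, synapses) for _ in range(5)]
print(train)   # -> [False, True, False, False, False]
```

Calling step repeatedly and recording, at each time unit, whether the designated output neuron spiked yields exactly the 0/1 spike train introduced next.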
One of the neurons is considered to be the output neuron, and its spikes are also sent to the environment. The moments of time when a spike is emitted by the output neuron are marked with 1, the other moments are marked with 0. The binary sequence obtained in this way is called the spike train of the system; it might be infinite if the computation does not stop.

In the spirit of spiking neurons, in the basic variant of SN P systems introduced in (Ionescu et al., 2006), the result of a computation is defined as the distance between consecutive spikes sent into the environment by the (output neuron of the) system. In the initial paper, only the distance between the first two spikes of a spike train was considered; then, in (Păun et al., 2006a), several extensions were examined: the distance between the first k spikes of a spike train, or the distances between all consecutive spikes, taking into account all intervals or only intervals that alternate, all computations or only halting computations, etc.
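As a small illustration of this convention, the snippet below (purely illustrative; the function name and representation are not taken from the cited papers) reads a spike train as a 0/1 sequence and returns the distance between its first two spikes, i.e., the number computed under the basic definition.

```python
def result_from_spike_train(spike_train):
    """Distance between the first two 1s of a 0/1 spike train;
    returns None if fewer than two spikes occur."""
    times = [t for t, bit in enumerate(spike_train) if bit == 1]
    if len(times) < 2:
        return None
    return times[1] - times[0]

# Example: spikes at steps 2 and 7 encode the number 5.
print(result_from_spike_train([0, 0, 1, 0, 0, 0, 0, 1, 0]))  # -> 5
```

The extensions mentioned above are obtained in the same way, for instance by returning the list of differences between all consecutive spike times instead of only the first difference.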
Systems working in the accepting mode were also considered: a neuron is designated as the input neuron and two spikes are introduced into it, at an interval of n steps; the number n is accepted if the computation halts.

Two main types of results were obtained: computational completeness in the case when no bound was imposed on the number of spikes present in the system, and a characterization of semilinear sets of numbers in the case when a bound was imposed (hence for finite SN P systems).

Another attractive possibility is to consider the spike trains themselves as the result of a computation, and then we obtain a (binary) language generating device. We can also consider input neurons, and then an SN P system can work as a transducer. Such possibilities were investigated in (Păun et al., 2006b). Languages, even over arbitrary alphabets, can also be obtained in other ways: by following the path of a designated spike across neurons, or by using extended rules, i.e., rules of the form E/a^c → a^p; d, where all components are as above and p ≥ 1 is the number of spikes produced by the rule.
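To illustrate the language-generating view in its simplest form, the fragment below turns a spike train into a word. The second function reflects one natural convention for extended rules, namely reading "i spikes emitted at a step" as a distinct symbol; this mapping and both function names are illustrative assumptions, not definitions quoted from the cited papers.

```python
def word_from_spike_train(spike_train):
    """Read a 0/1 spike train directly as a binary word."""
    return "".join(str(bit) for bit in spike_train)

def word_from_outputs(outputs, symbols):
    """With extended rules the output neuron may emit several spikes per
    step; map 'i spikes at a step' to symbols[i], skipping silent steps."""
    return "".join(symbols[i] for i in outputs if i > 0)

print(word_from_spike_train([0, 1, 1, 0, 1]))                         # -> "01101"
print(word_from_outputs([0, 2, 1, 0, 3], {1: "a", 2: "b", 3: "c"}))   # -> "bac"
```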