copy of the current activation x' into a tube T, pour it over the weight chip W, and allow
enough time and reaction conditions for hybridization of D_i to C_i to occur; next,
PCR extension is used on the weight template, now primed by D_i, to obtain a
complementary product V_ij = d_i r d_j r d_j r d_j attached to W_ij in a double strand. This
copy V_ij is first detached by heating, then primed with copies of all the C_i's, extended,
and digested separately by the restriction enzyme in a clean separate tube T, which
now contains a molecular representation of the net input (i.e., the product W x(t), but
in complementary form). In order to produce the next activation vector, the satura-
tion function is now computed as in [35], by pouring the net input back on recently
washed chip A, allowing time for hybridization, and flushing the chip in order to
eliminate any excess strands beyond saturation levels and to preserve the accuracy
of the process in successive reuse/iterations. The concentration of double-stranded
remaining oligonucleotides is the new activation x(t+1) of the neural network at
the next step (at time t + 1). Figure 14.2 illustrates the saturation function. Step 1
creates the DNA of the net input. In Step 2, the DNA is poured over the chip A.
The remaining DNA is then passed over chip W for saturation in Step 3. The entire
procedure is then repeated as many times as desired to iterate the Hopfield network
until a stable activation state is reached. Several variants of this design are possible,
provided they preserve the basic ideas presented above.
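In silico, one pass of this procedure amounts to forming the net input W x(t) and clipping it to saturation levels. The sketch below is a minimal Python rendering of that update; the sign-type threshold (with ties keeping the previous state) stands in for the concentration-level saturation on the chip, and W, x, and the round limit are illustrative assumptions.

```python
# Minimal in-silico sketch of the update the protocol implements:
# compute the net input W x(t) (Steps 1-2), then saturate (Step 3)
# to obtain x(t+1). The sign-type threshold is an assumption standing
# in for the concentration-level saturation on the chip.
import numpy as np

def hopfield_step(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """One parallel update: net input, then saturation to +/-1 levels."""
    net = W @ x                                            # net input W x(t)
    return np.where(net > 0, 1, np.where(net < 0, -1, x))  # ties keep state

def iterate(W: np.ndarray, x0: np.ndarray, rounds: int = 10) -> np.ndarray:
    """Repeat the update until a stable activation state or the round limit."""
    x = np.asarray(x0)
    for _ in range(rounds):
        x_next = hopfield_step(W, x)
        if np.array_equal(x_next, x):    # fixed point reached
            return x
        x = x_next
    return x
```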
In order to verify the reliability of this design experimentally, the discrete Hop-
field net example 14.4.2 from [19, p. 690], with three units and three memories, was
seeded with three sets of inputs and allowed to run for 10 rounds (10 transitions),
beginning at state x(0) and ending at state x(10). The total state of the Hopfield
memory was recorded at the end of each round. The experiment was performed sev-
eral times for each of the three inputs. The first input was ideal and should cause
the network to instantly recognize the memory and converge to the same stable state
immediately (in one round). The second input contained one mismatch and should
converge toward the same stable state as the first input after several rounds. The last input
contained nothing but errors and should converge to the complementary fixed point
of the ideal input.
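A hedged in-silico reconstruction of these runs is sketched below. The single stored memory (-1, 1, -1) is chosen to match the fixed point reported next; example 14.4.2 in [19] stores three memories, so its exact trajectories (such as convergence in four rounds) differ from this simplification.

```python
# Hedged reconstruction of the verification runs: a discrete Hopfield
# net iterated for 10 rounds from three inputs (ideal, one mismatch,
# fully corrupted). The single memory, Hebbian weights, and
# tie-breaking rule are illustrative assumptions, not the exact
# setup of example 14.4.2 in [19].
import numpy as np

memory = np.array([-1, 1, -1])       # matches the reported fixed point
W = np.outer(memory, memory)         # Hebbian outer-product weights
np.fill_diagonal(W, 0)               # no self-connections

inputs = {
    "ideal":     np.array([-1,  1, -1]),   # the memory itself
    "mismatch":  np.array([ 1,  1, -1]),   # one flipped component
    "corrupted": np.array([ 1, -1,  1]),   # every component flipped
}

for name, x in inputs.items():
    for t in range(10):                    # 10 rounds / transitions
        net = W @ x                        # net input W x(t)
        x = np.where(net > 0, 1, np.where(net < 0, -1, x))  # saturate
    print(f"{name:10s} x(10) = {x}")
# ideal      x(10) = [-1  1 -1]
# mismatch   x(10) = [-1  1 -1]
# corrupted  x(10) = [ 1 -1  1]   <- complementary fixed point
```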
Figure 14.3 shows the Hopfield memory with ideal input. This memory con-
verges within one round to the fixed point of (-1, 1, -1), as expected for this mem-
ory. Figure 14.4 further shows that the same memory with one mismatch converges
to the ideal output in the fourth round. Again, this behavior is entirely consistent
with the behavior of Hopfield memories implemented in silico. Figure 14.5 shows
that the Hopfield memory converges away from the correct output when the input is
totally corrupted. This behavior is again consistent with Hopfield memories imple-
mented in silico.
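This last behavior is expected from the odd symmetry of the update rule: for a symmetric weight matrix W and a sign-type saturation, W(-x) = -(W x) and hence sat(W(-x)) = -sat(W x), so whenever x* is a fixed point its complement -x* is one as well, and a fully complemented input is drawn to it.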
The noncross-hybridizing property of the neuronal code set guarantees that
similar behavior will be observed with much larger neuronal ensembles in a
parallel computing environment, either in vitro or in silico. Note that only a modest
amount of hardware (two DNA chips) is required, the chips are reusable, and the
updates can be easily automated, even at microscales using microfluidics [28], in a
parallel fashion that is largely independent of the number of neurons. This
is a particularly interesting property for potential applications of these systems.