current state resulting from previous changes, in order to reflect the interaction rules
of the objects themselves and/or with their environment.
In the actual implementation of this conceptual model, EdnaCo divides the
computer's memory into a number of discrete segments, each running on a different
processor, which allows multiple interactions to take place at once.
When entities move, they either change positions within a segment, or they may
migrate across processor boundaries. Thus, the container of the VTT is a discrete
3D space residing inside a digital computer, the entities are instantiated as objects
in classes in a programming language (C++), and the interactions are appropriate
functions and methods associated with these objects. These methods either leave
the objects unperturbed or make the appropriate deletions and insertions to reflect
the reactions they simulate. The communication between processors is implemented
using a message-passing interface, such as MPI, on a cluster of personal comput-
ers or a high-performance cluster (in our case, an IBM cluster containing 112 dual
processors). The architecture of EdnaCo allows it to be scaled to an arbitrarily large
number of processors and to be portable to any other cluster supporting C++ and
MPI. The results of these simulations are thus guaranteed to be reproducible on any
other system running the same software as the original simulation. Further details
of this simulation environment can be found in [11].
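The segmented design described above can be illustrated with a minimal single-process sketch. All type and function names here are hypothetical, and the thread of execution is sequential: in EdnaCo itself each segment runs on its own processor and hands entities across boundaries via MPI messages, whereas this sketch models migration simply as moving an entity between in-memory segments.

```cpp
// Minimal single-process sketch of a segmented virtual test tube.
// Hypothetical names; EdnaCo's segments run on separate MPI processes.
#include <cassert>
#include <cstdlib>
#include <vector>

struct Entity { int id; int x, y, z; };   // position in a discrete 3D space

constexpr int kSegments = 4;
constexpr int kWidth    = 8;   // segment s owns x in [s*kWidth, (s+1)*kWidth)

using Tube = std::vector<std::vector<Entity>>;   // one vector per segment

// Keep an entity inside the tube's x extent.
int clampx(int x) {
    int hi = kSegments * kWidth - 1;
    return x < 0 ? 0 : (x > hi ? hi : x);
}

// One simulation step: every entity takes a crude random (Brownian-like)
// unit step, then is reassigned to the segment owning its new x slab.
// The reassignment is where an MPI hand-off would occur in EdnaCo.
void step(Tube& tube) {
    std::vector<Entity> all;
    for (auto& seg : tube) {
        all.insert(all.end(), seg.begin(), seg.end());
        seg.clear();
    }
    for (auto& e : all) {
        e.x = clampx(e.x + (std::rand() % 3) - 1);
        e.y += (std::rand() % 3) - 1;
        e.z += (std::rand() % 3) - 1;
        tube[e.x / kWidth].push_back(e);   // possible boundary migration
    }
}
```

Gathering and redistributing all entities each step keeps the sketch simple; a distributed implementation would instead exchange only the entities that actually cross a boundary, since everything else stays local to its processor.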
The striking property of the VTT is that the programming stops at the level of
local interactions. Nothing else is programmed; every other observable is an
emergent property of the simulation. In particular, if the equivalent molecules in
Adleman's original experiment are placed in the tube, they will seem to be moved about
randomly by Brownian motion. Nevertheless, encounters between vertex and edge
molecules may create longer and longer paths, and eventually a Hamiltonian one if
one is possible. This is indeed the case with very high probability (a reliability of
99.6% with no false negatives has been reported [12]). In other words, the solution
to an optimization problem has been captured in silico by a simple simulation of the
most relevant properties of the natural phenomena occurring inside a test tube con-
taining DNA oligonucleotides with the appropriate characteristics. The scalability
of this solution is clear in two different directions. First, the parallelism of EdnaCo
provides parallelism of the type inherent in chemistry. Second, the size of problems
solvable by this method is only limited by our ability to find sets of oligonucleotides
large enough to code for the vertices and edges without causing any undesirable
interactions between them. Although it was initially suggested that random encodings
would provide a sufficient stock of oligonucleotides [1], later work determined
that care must be exercised in selecting oligonucleotides that are inert to
cross-hybridization, given the uncertain thermodynamic nature of DNA hybridization.
Although this problem has proven to be in itself NP-complete [29], similar methods
can be used to provide nearly optimal solutions, both in vitro by the PCR selection
protocol of [5, 4, 7], based on an extension of the polymerase chain reaction, and
in silico by its simulation [16]. So-called DNA code sets, i.e., sets of
noncross-hybridizing (nxh) molecules, are now readily available to solve large
problems systematically, on the order of tens to hundreds of thousands of
noncross-hybridizing 20-mers, for example, as seen in Fig. 14.1 [16, 14].
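The selection of noncross-hybridizing words can be sketched with a greedy filter. This is not the PCR selection protocol or its simulation cited above; it is a hypothetical illustration that substitutes a minimum Hamming distance to reverse complements as a crude combinatorial proxy for the thermodynamic hybridization criteria real code design must use.

```cpp
// Greedy construction of a small nxh code set under a Hamming-distance
// proxy for cross-hybridization. Illustrative only; real DNA code design
// relies on thermodynamic models of duplex stability.
#include <cassert>
#include <string>
#include <vector>

char complement(char b) {
    switch (b) { case 'A': return 'T'; case 'T': return 'A';
                 case 'C': return 'G'; default:  return 'C'; }
}

std::string revcomp(const std::string& s) {
    std::string r(s.rbegin(), s.rend());
    for (auto& b : r) b = complement(b);
    return r;
}

int hamming(const std::string& a, const std::string& b) {
    int d = 0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] != b[i]);
    return d;
}

// Under this proxy, two words cross-hybridize if either is too similar
// to the reverse complement of the other.
bool crosshyb(const std::string& a, const std::string& b, int min_d) {
    return hamming(a, revcomp(b)) < min_d || hamming(b, revcomp(a)) < min_d;
}

// Keep each candidate that avoids cross-hybridization with all words
// accepted so far (and with itself, which excludes palindromic words).
std::vector<std::string> greedy_nxh(const std::vector<std::string>& cand,
                                    int min_d) {
    std::vector<std::string> code;
    for (const auto& w : cand) {
        bool ok = !crosshyb(w, w, min_d);
        for (const auto& c : code)
            if (crosshyb(w, c, min_d)) { ok = false; break; }
        if (ok) code.push_back(w);
    }
    return code;
}
```

Because the underlying selection problem is NP-complete, a greedy pass like this yields only a feasible, not maximal, code set; the methods cited above aim at nearly optimal ones.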