phenomena. The predictive power of this software can vary from the very qualitative (its results show general trends that are also present in the real world) to the very quantitative (the numbers produced by the computer may be compared with those produced by the real phenomena we are seeking to model). Even in their most qualitative form, and simply because they have to be translated into an algorithmic structure, these programs often allow a deep and careful examination of the mechanisms known to be responsible for the observed patterns of behaviour. Making these mechanisms explicit and writing them down in an algorithmic structure is already a guarantee of an advanced understanding that everyone can accept. Algorithmic writing is an essential stage in formalising the elements of the model and in rendering them less subjective. John Holland wrote about one definitive virtue of computer models: “The assumptions underlying the predictions are made explicit, so others can use or modify the assumptions enriching the overall enterprise” [19].
In a commentary very recently published in “Nature” and entitled “Can computers help to explain biology?” [4], we can read the following extracts: “Today, by contrast with descriptions of the physical world, the understanding of biological systems is most often represented by natural-language papers and text books. This level of understanding is adequate for many purposes (including medicine and agriculture) and is being extended by contemporary biologists with great panache. But insofar as biologists wish to attain deeper understanding, they will need to produce biological knowledge and operate on it in ways that natural language does not allow … Biology narratives of cause and effects are readily systematizable by computers …”.
Although algorithmic writing is less demanding than mathematical writing (the qualitative agents found in agent-based models or in cellular automata are less precise than the quantitative variables found in differential equations), it requires a great deal of rigour, and therefore a much sharper clarification of mechanisms that the biological literature still describes in rather ambiguous terms. The more the model is able to integrate what we know about the reality being reproduced, the detailed structure of its objects and the relationships between them, the more its predictions will move from merely indicating trends to being quantitative and precise, and the easier it will be to validate the model according to Popper's falsificationism, the way in which physicists wish to see biology evolve. More important still, new and original mechanisms may be discovered, since it is their repeated iteration in time and space, made possible only by the computer, that allows us to understand how they underlie the observed emergent behaviour. This is indeed the territory of “emergent” phenomena and functionalities that, beyond nature itself, only software can produce.
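As a purely illustrative sketch of what “algorithmic writing” demands (my own example in Python, not code taken from the chapter), consider an elementary one-dimensional cellular automaton. Even in such a toy model, everything that prose can leave vague, the neighbourhood each cell consults, the boundary condition, the synchronous update, has to be stated explicitly before the program will run, and the intricate triangular motifs that Wolfram's rule 30 is known to generate emerge from nothing more than this handful of explicit choices.

# Illustrative sketch only: the rule number, lattice size and number of steps are arbitrary.

def step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton with periodic boundaries."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # encode the neighbourhood as a 3-bit number
        new.append((rule >> index) & 1)               # look the cell's next state up in the rule table
    return new

if __name__ == "__main__":
    cells = [0] * 63
    cells[31] = 1                                      # a single active cell in the middle
    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)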
In the 1950s, when Alan Turing discovered that a simple diffusion process, spreading at different speeds depending on whether it is subject to a negative or a positive influence, produces zebra stripes or other alternating motifs, he had a considerable effect on a whole branch of biology studying the genesis of forms (animal skins, the shells of sea creatures).
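The following sketch (again my own Python illustration, not taken from the text) simulates a one-dimensional Gray-Scott system, a widely used modern relative of Turing's reaction-diffusion idea: two substances react with each other and diffuse at different speeds (Du > Dv), and with suitable parameters localised perturbations do not simply smooth away but organise themselves into persistent structures. The parameter values are illustrative choices commonly seen in such demonstrations, not values given in the chapter.

import numpy as np

def laplacian(a):
    """Discrete 1-D Laplacian with periodic boundaries."""
    return np.roll(a, 1) + np.roll(a, -1) - 2.0 * a

def gray_scott_1d(n=256, steps=10000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Explicit Euler integration of the 1-D Gray-Scott reaction-diffusion equations."""
    u = np.ones(n)                        # substrate, initially present everywhere
    v = np.zeros(n)                       # activator, initially absent
    for centre in (n // 4, n // 2, 3 * n // 4):
        u[centre - 5:centre + 5] = 0.50   # a few localised perturbations
        v[centre - 5:centre + 5] = 0.25
    for _ in range(steps):
        uvv = u * v * v                   # the autocatalytic reaction term
        u += Du * laplacian(u) - uvv + F * (1.0 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return u, v

if __name__ == "__main__":
    u, _ = gray_scott_1d()
    # Crude text rendering: '#' marks regions where the substrate has been depleted.
    print("".join("#" if x < 0.5 else "." for x in u))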
When Kauffman discovered how the number of attractors in a Boolean network or a neural network scales with the number of units in the network, the result could equally well be applied to the number of cell types expressed as the dynamic attractors of a genetic network or to the amount of information that can be stored in a neural network.
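As an illustration (once more my own sketch in Python, with arbitrary choices of N, K and random seed), the code below builds small random Boolean networks in Kauffman's style, giving each node K = 2 randomly chosen inputs and a random Boolean rule, and counts the attractors by following every state of the synchronously updated network until it falls onto a cycle. Kauffman's classic estimate for such K = 2 networks was that the typical number of attractors grows roughly like the square root of the number of nodes, a scaling he compared to the way the number of cell types grows with genome size.

import itertools
import random

def random_boolean_network(n, k, rng):
    """Build a random Boolean network: K inputs and a random Boolean rule per node."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        # Synchronous update: every node reads its inputs and applies its own rule.
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def count_attractors(step, n):
    """Enumerate the whole state space and count the distinct cycles it falls onto."""
    attractors = set()
    for state in itertools.product((0, 1), repeat=n):
        seen = set()
        while state not in seen:
            seen.add(state)
            state = step(state)
        # 'state' is now on the cycle; collect the cycle and keep a canonical representative.
        cycle = [state]
        s = step(state)
        while s != state:
            cycle.append(s)
            s = step(s)
        attractors.add(min(cycle))
    return len(attractors)

if __name__ == "__main__":
    rng = random.Random(1)
    for n in (6, 8, 10, 12):
        net = random_boolean_network(n, k=2, rng=rng)
        print(f"N={n:2d}  attractors={count_attractors(net, n)}")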
When some physicists recently observed non-uniform connectivity in many networks, whether social, technological or biological, with a small number of nodes carrying a very large number of connections and a far greater number of nodes carrying far fewer, and when, in addition, they