McCulloch 2003; Pollak 1990). These methods were the foundation for a new science called evoinformatics (Krapivin and Potapov 2002; Nitu et al. 2004). This science lies at the boundary of disciplines such as neurocybernetics, cognitive psychology, artificial intelligence, theory of systems, theory of survivability, and systemology.
Recent advances in informatics and information technology have enabled
mathematical modeling and computer technology applications to enter such
domains as ecology, biophysics, and medicine. The concept of the experimental model lost its novelty long ago and has been replaced by the term computing experiment, which is used in many studies covering a vast range of subjects, including biospheric experiments (Kondratyev et al. 2003b, 2004b). All such works presuppose the availability of a more or less adequate model implemented as a set of tools in an algorithmic language. To manipulate a model and carry out a series of specific computing experiments, one needs a general-purpose computer. It is at this point that the researcher may face insurmountable difficulties caused by constraints on the computer's memory and speed of operation.
Experience with many such endeavors indicates that modern hardware can handle relatively complex models. Yet the same experience prompts the need for constant improvement of modeling techniques, because the researcher runs into a conflict between the desire to enhance the accuracy of a model and the limited capabilities of the computer. Building a model that is completely representative of a real-world entity is clearly not feasible: on the one hand, taking account of all the parameters of the entity leads to the curse of multidimensionality; on the other hand, simple models that cope with only a small number of parameters are not up to the task of simulating the complex entities under consideration. Besides, such projects involve ocean physics, geophysics, global ecology, socio-economics, etc.
Building a fully adequate model is in principle impossible because complete information is unattainable. Such systems can only deal with applied problems from the domains of global ecology, biophysics, and medicine. Furthermore, difficulties in these subject areas arise at the early stages of research (e.g., when attempting to formulate a model).
What is to be done when the currently available knowledge does not allow for the synthesis of a mathematical model of an entity or a process? The answer to this question is given by the theory of learning computers of the evolutionary type. We shall retain the term model, although it is used here in a somewhat different sense. What it implies here is the description of entities that change over time in an unpredictable manner and, by virtue of this, ensure that information uncertainty cannot be removed at any moment. Such are the natural systems studied, e.g., in global ecology, geophysics, biophysics, and medicine. Consequently, a model treated in this broad sense must provide for continuous adaptation to the changing behavior and structure of the observed entity. It is clear that universal models can be built only through the synthesis of particular models. Models of this kind are implemented for problems of recognition and prediction.
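The idea of a model that adapts continuously to an entity with an unknown and drifting rule of operation can be illustrated with a minimal sketch. The class below is a hypothetical example, not an algorithm from the source text: it predicts the next value of an observed series and corrects its internal state with every new observation, so that information uncertainty is never assumed to be removed once and for all. The class name, the `rate` parameter, and the exponential-smoothing update rule are all assumptions chosen for brevity.

```python
# Illustrative sketch only: an online predictor that adapts as observations
# arrive; the update rule (exponential smoothing) is an assumed stand-in for
# the evolutionary-type learning schemes discussed in the text.

class AdaptivePredictor:
    """Predicts the next value of a series whose generating algorithm is
    unknown, adapting its internal state with each new observation."""

    def __init__(self, rate=0.3):
        self.rate = rate        # adaptation rate: how quickly old history is forgotten
        self.estimate = None    # current internal model state

    def predict(self):
        # Current best guess for the next value of the series.
        return self.estimate

    def observe(self, value):
        # Move the internal state toward each new observation, so the model
        # tracks an entity whose behavior may drift unpredictably over time.
        if self.estimate is None:
            self.estimate = float(value)
        else:
            self.estimate += self.rate * (value - self.estimate)


# Usage: feed a finite prior history, then ask for a prediction.
history = [1.0, 1.2, 1.1, 1.4, 1.5]
p = AdaptivePredictor(rate=0.5)
for v in history:
    p.observe(v)
print(p.predict())
```

The point of the sketch is not the particular update rule but the structure: prediction and observation alternate indefinitely, so the model never stops adapting, which is the defining property of the evolutionary-type learning systems described above.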
Thus, let a real-world object A have some unknown algorithm of operation, with only a previous history of its operation of finite length known. We need to