an interesting way of pruning cognitive function processing by using internal drives as filters. Next, the main properties necessary to build an efficient artificial mind structure are discussed.

Character Artificial Mind

Minsky (1986) proposed a theory of human cognition that he called the Society of Mind. The core of his theory is that the mind is not the result of a single, intelligent, unified process but, on the contrary, is produced by the work of thousands of different specialized sub-systems of the brain. Minsky uses the term agent to refer to the simplest units of the mind, which can be connected to compose larger systems, or societies. What we know as mind functions are performed by these societies of agents, each of them executing a different, specialized task. Unlike Newell (1990) with his General Problem Solver, Minsky does not believe that a general thinking algorithm, method, or procedure could describe human intelligent performance. Instead, he understands human thought as the result of the combined activity of more specialized cognitive processes. Each of them has limited powers and no significant intelligence of its own, and is capable of interacting with only certain others. However, Minsky considers that the human mind emerges from their interactions. Minsky does not distinguish between "intellectual" and "affective" abilities; he considers both to be mental abilities emerging from societies of agents organized in hierarchies. The Mental Society would then work as a human organization does: on the largest scale there are gross divisions, within which each subspecialist performs smaller-scale tasks.

According to Franklin (1995), cognition emerges as the result of interactions between relatively independent modules, and the criterion for evaluating such a mechanism is the fulfillment of the agent's needs in the current environment. Artificial mind architectures have to produce sequences of adaptive actions emerging from a non-monolithic structure of interacting modules with different functionalities serving internal drives. This structure specifies possibilities and restrictions of both perception and action in a known, changing environment. Perception and action are not considered independent of one another; rather, they are seen as constantly giving mutual feedback to control each other's processing. That close relation must be part of an efficient model of both processes, as has already been stated in other studies (Vilela, 1998, 2000). Both processes also depend on knowledge and on internal states: transitory priorities may change the way of perceiving and acting in the current environment, in a dynamic interaction process.

Internal forces establish goals that orient action selection, and may be thought of as motivations. Their intensities determine goal priorities and their dynamics, allowing proactive behavior. Sevin and Thalmann (2005a, 2005b) consider motivations as essentially representing the quantitative side of decision-making. For them, proactivity includes the ability to behave opportunistically by taking advantage of new situations, especially when they permit the satisfaction of more than one motivation. Usually there are many self-generated, concurrently active motivations, even though one of them may be more prominent. There are thus several elements to be considered in the resolution process:

• Priority: search for satisfaction of the most relevant motivation.
• Opportunism: ability to change plans when new circumstances present interesting possibilities of motivation satisfaction.
• Compromise actions: ability to change the course of action if it is possible to satisfy more than one motivation.
• Quick response time: action selection must occur in real time.
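The four properties above can be illustrated with a minimal sketch of motivation-weighted action selection. This is not the mechanism of Sevin and Thalmann (2005a, 2005b); all names here (Motivation, Action, select_action, and the example drives) are illustrative assumptions, showing only how intensity-weighted scoring yields priority, opportunism, and compromise in a single real-time scan.

```python
# Minimal sketch (illustrative, not from the cited works) of an
# action-selection step driven by motivation intensities.
from dataclasses import dataclass, field

@dataclass
class Motivation:
    name: str
    intensity: float  # determines goal priority; varies over time

@dataclass
class Action:
    name: str
    satisfies: set = field(default_factory=set)  # motivations it serves

def select_action(motivations, actions):
    """Pick the action scoring highest against the active motivations.

    - Priority: intensities weight the score, so the most relevant
      motivation dominates selection.
    - Opportunism / compromise: an action satisfying several active
      motivations can outscore one serving only the top motivation.
    - Quick response: one linear scan over the candidate actions.
    """
    intensity = {m.name: m.intensity for m in motivations}
    def score(action):
        return sum(intensity.get(n, 0.0) for n in action.satisfies)
    return max(actions, key=score)

# Example: drinking at a fountain serves both thirst and heat, so this
# compromise action outscores eating, even though hunger alone is the
# single strongest motivation (0.5 + 0.4 > 0.6).
motivations = [Motivation("hunger", 0.6),
               Motivation("thirst", 0.5),
               Motivation("heat", 0.4)]
actions = [Action("eat", {"hunger"}),
           Action("drink_at_fountain", {"thirst", "heat"})]
print(select_action(motivations, actions).name)  # drink_at_fountain
```

Because intensities change dynamically, rerunning the same scan each frame lets the character switch plans when a new opportunity appears, which is the opportunistic behavior described above.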