and practitioners showed that robots were able to operate autonomously with minimal human intervention. They could also carry out complex tasks successfully. However, they are still unable to learn autonomously from their own experience. The next generation of autonomous robots should surely have this skill. In our opinion, this is an inflection point in robot development. The most promising methodologies for achieving this ability seem to be the ones provided by ER. They try to emulate natural or bio-inspired behaviours. Furthermore, they have the additional attractive feature of facilitating robot interaction, enabling multi-robot problem-solving approaches.
In what follows, the remaining sections of this chapter are devoted to presenting our work within this research area.
EVOLUTIONARY ROBOTICS APPROACH

Working Assumptions for the Autonomous Robot Application
In this case study, the following working conditions were assumed: (a) the robot moves on flat ground; (b) inertial effects, as well as non-holonomic characteristics of the mobile robot, are not taken into account; (c) the robot moves without slipping; (d) the environment is structured but unknown, with some elements fixed (e.g., walls, corridors, passages, doors) while others, like goal position references (light sources) and obstacles, can be modified for each run; (e) variable environmental conditions (e.g., light influence from other sources) are not managed directly but are considered as perturbations to the controller.
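Assumptions (a)-(c) reduce the robot model to pure kinematics. As a minimal sketch of the kind of differential-drive pose update they imply (the wheel base and time step below are illustrative values, not parameters taken from the chapter):

```python
import math

# Illustrative parameters (assumed, not from the chapter).
WHEEL_BASE = 0.053   # distance between wheels in metres (Khepera-sized)
DT = 0.1             # integration time step in seconds

def kinematic_step(x, y, theta, v_left, v_right):
    """Advance the robot pose one step under assumptions (a)-(c):
    flat ground, no inertial effects, no slipping."""
    v = (v_left + v_right) / 2.0              # forward speed of the robot centre
    omega = (v_right - v_left) / WHEEL_BASE   # turning rate from wheel-speed difference
    x += v * math.cos(theta) * DT
    y += v * math.sin(theta) * DT
    theta += omega * DT
    return x, y, theta
```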
Construction of the Neurocontrollers
An initial set of neurocontrollers, called the population, is mapped into individuals through their representing genotypes. These genotypes are made of a constant number of chromosomes. Individual neurocontrollers were implemented for each simple behaviour using feed-forward neural networks, without recurrence at any level and with a fixed number of neurons. A genetic algorithm based on Harvey's proposal (Harvey, 1992) was used to fit the neurocontrollers' weights, a process referred to as the learning algorithm in the context of artificial neural networks. This learning stage proceeds as follows. The chromosomes of the genotype encode the sign and strength of each synaptic weight. Each genotype, representing a specific neurocontroller, is then awarded a fitness measurement according to its observed performance; this measurement is used as a comparison parameter to establish a ranking. After that, the genotypes situated in the lower half of this ranking are discarded as individuals for the next generation, and copies of the individuals in the upper half replace them.
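As a rough illustration of the selection scheme just described, the sketch below evolves a population of fixed-length weight genotypes by ranking them on fitness, discarding the lower half, and refilling it with (mutated) copies of the upper half. The genotype length, population size, mutation settings, and the fitness callable are assumptions for illustration; they are not the chapter's actual parameters, nor Harvey's exact algorithm.

```python
import random

N_WEIGHTS = 24   # fixed network size -> fixed genotype length (assumed)
POP_SIZE = 20    # population size (assumed)

def random_genotype():
    # Each gene carries both the sign and the strength of one synapse.
    return [random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)]

def mutate(genotype, rate=0.1, scale=0.2):
    # Small random perturbation of some weights; rate and scale are assumed values.
    return [w + random.gauss(0.0, scale) if random.random() < rate else w
            for w in genotype]

def evolve(fitness, generations=50):
    """Rank the population by fitness, discard the lower half, and refill it
    with mutated copies of the upper half."""
    population = [random_genotype() for _ in range(POP_SIZE)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        upper = ranked[: POP_SIZE // 2]      # survivors (upper half of the ranking)
        clones = [mutate(g) for g in upper]  # copies replace the discarded lower half
        population = upper + clones
    return max(population, key=fitness)
```

Here `fitness` stands for a user-supplied evaluation that loads a genotype's weights into the network, runs the controller on the robot or a simulator for one trial, and returns a score; the chapter's actual fitness measures for each simple behaviour are not reproduced in this sketch.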
In order to develop an architecture for autonomous navigation based on ER, a control strategy was designed and implemented on a Khepera© robot. This strategy acted directly on the robot's wheels, taking into account the sensor readings and the mission objectives. It was based on the generation of independent behavioural modules supported by neurocontrollers, together with a coordination structure. The evolutionary approach adopted in this experiment progressively produced the guidance and control systems once a target position was determined. In this sense, the mission planner is static. However, if one of the simple behaviours to be scaled up is an obstacle-avoidance behaviour, or if the target is a moving point, the final complex behaviour after the scaling-up process will exhibit features of a dynamic mission planner. In effect, as obstacles appear, the target positions change dynamically on-line as the robot moves.
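The sketch below illustrates one way the behavioural modules and the coordination structure described above could fit together: each module maps sensor readings to a wheel command, and an arbitration rule decides which command reaches the wheels. The priority-based arbitration, the sensor names, and the numeric constants are assumptions for illustration; the chapter does not spell out its exact coordination rule here.

```python
def avoid_obstacle(sensors):
    # Hypothetical neurocontroller output: steer away when proximity is high.
    left, right = sensors["prox_left"], sensors["prox_right"]
    if max(left, right) < 0.5:
        return None                          # nothing to avoid; defer to other modules
    return (0.2, -0.2) if left > right else (-0.2, 0.2)

def go_to_light(sensors):
    # Hypothetical neurocontroller output: steer toward the brighter side (the goal).
    left, right = sensors["light_left"], sensors["light_right"]
    turn = 0.3 * (right - left)
    return (0.5 - turn, 0.5 + turn)

MODULES = [avoid_obstacle, go_to_light]      # ordered by priority (assumed)

def coordinate(sensors):
    """Return the wheel command of the highest-priority module that responds."""
    for module in MODULES:
        command = module(sensors)
        if command is not None:
            return command
    return (0.0, 0.0)                        # no module active: stop
```

With this kind of arbitration, the obstacle-avoidance module overrides goal seeking whenever an obstacle is close, which is consistent with the dynamic re-targeting behaviour described above.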