Belief Management for Autonomous Robots Using
History-Based Diagnosis
Stephan Gspandl, Ingo Pill, Michael Reip, and Gerald Steinbauer
Institute for Software Technology
Graz University of Technology
Inffeldgasse 16b/2, A-8010 Graz, Austria
{sgspandl,ipill,mreip,steinbauer}@ist.tugraz.at
Abstract. Non-deterministic reality is a severe challenge for autonomous robots. Malfunctioning actions, inaccurate sensor perception, and exogenous events easily lead to inconsistencies between an actual situation and its internal representation. For a successful mission in dynamic environments, a robot is thus required to cope with such inconsistencies efficiently.
In this paper, we present a belief management system based on the well-known agent programming language IndiGolog and on history-based diagnosis. Extending the language's default mechanisms, we add a belief management system that is capable of handling several fault types that lead to belief inconsistencies. First experiments in the domain of service robots show the effectiveness of our approach.
Keywords: belief management, diagnosis, history-based diagnosis, autonomous robot.
1 Introduction
There is increasing interest in deploying autonomous mobile robots for complex tasks in dynamic (non-deterministic) environments. Related target applications range from simple transportation services via visitor guidance in a museum to autonomous car driving [1,2]. The complexity of such application domains raises the demands on the autonomous reasoning capabilities of deployable systems. Appropriate robots have to consider, for instance, a large and changing number of entities and objects, including their complex spatial relations. In this context, inherent challenges a robot has to live up to are facing novel scenarios and coping with situations where its assumptions about a situation conflict with reality.
Imagine, for instance, a delivery robot whose task is to move objects between rooms, and assume that it perceives some object that is known to be at some other place. Obviously, there is more than one explanation for this crucial inconsistency: for instance, (1) the robot drove to the wrong room (execution fault), (2) a sensing fault, or (3) someone moved the object (unobserved exogenous event). A robot that is unable to handle such inconsistencies would neither be able to successfully finish the given

The authors are listed in alphabetical order. The work has been partly funded by the Austrian Science Fund (FWF) under grants P22690 and P22959.
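To make the three explanation types concrete, the following is a minimal, hypothetical sketch of enumerating fault hypotheses for the delivery-robot scenario; the names `Fault` and `candidate_explanations` are our own illustrative choices and not part of the authors' IndiGolog-based system.

```python
from enum import Enum


class Fault(Enum):
    """The three fault types that can explain a belief inconsistency."""
    EXECUTION = "execution fault (e.g., the robot drove to the wrong room)"
    SENSING = "sensing fault (the robot misperceived the object)"
    EXOGENOUS = "unobserved exogenous event (someone moved the object)"


def candidate_explanations(believed_location, perceived_location):
    """Return candidate fault hypotheses when belief and perception disagree."""
    if believed_location == perceived_location:
        return []  # belief is consistent with the observation; nothing to explain
    # Any of the three fault types could account for the mismatch;
    # a diagnosis engine would then rank or eliminate them using the history.
    return [Fault.EXECUTION, Fault.SENSING, Fault.EXOGENOUS]


# Example: the robot believes box1 is in room_a but perceives it in room_b.
hypotheses = candidate_explanations("room_a", "room_b")
```

A history-based diagnosis engine, as discussed in the remainder of the paper, would go further and discriminate between these candidates using the recorded action history rather than merely enumerating them.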
 