the route formation does not emerge from a globally coordinated process, but
from multiple local interactions among agents.
3.3 Support for Robot Navigation
When a robot is following a route, each camera on the route supports the robot's movement towards the FOV of the next camera on the route. Bear in mind that every camera keeps information not only about its neighbouring cameras but also about the neighbourhood regions (overlapping FOV areas).
First of all, the camera that sees the robot initiates communication with it (C2R HELLO in Fig. 3). If the robot is attending a call event (state ON-ROUTE), it replies to the camera with its state and the route it is following (C2R INFORMSTATE in Fig. 6). The camera receiving this message then estimates the direction that the robot should follow to reach the FOV of the next camera on the route (essentially, the robot should move towards the Neighbourhood Region shared with the next camera on the route), and sends this information to the robot (C2R UPDATEMOV in Fig. 3). Once the robot receives the direction it should follow, it treats this direction as an attractive force in the potential-fields-based navigation method described in Sec. 2.2. In this manner, the robot moves from the FOV of one camera to the next along the route.
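As a minimal sketch of this handoff, the fragment below models the camera-side decision and the robot-side attractive force. The message tags, the route and region data structures, and the linear unit-gain attractor are assumptions for illustration; the paper does not specify these details.

    import math

    # Hypothetical message tags, mirroring the names used in Fig. 3.
    C2R_HELLO = "C2R HELLO"
    C2R_INFORMSTATE = "C2R INFORMSTATE"
    C2R_UPDATEMOV = "C2R UPDATEMOV"

    def camera_handoff(robot_state, robot_route, my_id, region_centres):
        """Camera side: if the robot is ON-ROUTE, point it towards the
        Neighbourhood Region shared with the next camera on its route."""
        if robot_state != "ON-ROUTE":
            return None                 # robot is not attending a call event
        idx = robot_route.index(my_id)
        if idx + 1 >= len(robot_route):
            return None                 # last camera on the route: the goal is in our own FOV
        nxt = robot_route[idx + 1]      # next camera on the route
        return (C2R_UPDATEMOV, region_centres[nxt])  # centre (x, y) of the shared region

    def attractive_force(robot_pos, goal_pos, gain=1.0):
        """Robot side: turn the received direction into an attractive force,
        here a linear potential-field attractor (the gain is an assumption)."""
        dx, dy = goal_pos[0] - robot_pos[0], goal_pos[1] - robot_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            return (0.0, 0.0)
        return (gain * dx / dist, gain * dy / dist)

    # e.g. camera "B" on route A-B-C-D points the robot towards the B/C region:
    msg = camera_handoff("ON-ROUTE", ["A", "B", "C", "D"], "B", {"C": (4.0, 2.0)})
    force = attractive_force((1.0, 1.0), msg[1])

Keeping the handoff decision on the camera side is consistent with the paper's design: each camera only needs its own neighbourhood information, so no agent requires a global map.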
4 Implementation and Experimental Results
We have implemented both kinds of agents: robots and cameras. The robot used was a Pioneer P3DX with a SICK LMS200 laser. Each camera agent used either a Unibrain Fire-i camera or a PointGrey Chameleon CMLN-13S2C with a FUJINON (FUJIFILM) vari-focal CCTV lens (omnidirectional). The processing units were either a DELL Latitude E550 (Intel(R) Core(TM) 2 Duo P8600 @ 2.4 GHz, 4 GB RAM) or a Toshiba Satellite A100-497 (Intel(R) Core(TM) 2 Duo T5600 @ 1.83 GHz, 4 GB RAM). The controllers were implemented using the Player (v2.0.3)/Stage (v2.0.4) platform for the robot, and the OpenCV 2.0 library for the camera agents. Finally, messages were passed over an IEEE 802.11g local wireless network via UDP.
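For illustration, passing such messages over UDP can be as simple as the sketch below; the port number, the plain-text payload encoding, and the peer address are assumptions, since the text does not describe the actual wire format.

    import socket

    AGENT_PORT = 5005  # assumed port; the text does not specify one

    def send_message(payload: str, peer_ip: str) -> None:
        """Send one datagram to another agent over the 802.11g network."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload.encode("utf-8"), (peer_ip, AGENT_PORT))

    # e.g. a camera greeting a robot it has just detected:
    send_message("C2R HELLO", "192.168.1.42")

UDP fits this protocol well: the messages are small, periodic, and tolerate occasional loss (a missed C2R UPDATEMOV is simply superseded by the next one), so connection-oriented delivery would add overhead without benefit.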
4.1 Experimental Setup and Results
We deployed a multi-agent network in the Department of Electronics and Computing of the University of Santiago de Compostela. As represented in Fig. 7, the network consisted of a robot-agent (R), four camera-agents (A, B, C, D), and a call event (M). D sighted the call event (M), started the robot call process, and triggered the route formation (see Sec. 3.2). Upon reception of the route, R started navigating through the network towards M, supported by A, B, C, and D (in this order, from left to right and top to bottom in the figure), while avoiding detected obstacles. The robot's trajectory and the map of the