Reactive obstacle avoidance controllers have been based on image-space nearness
fields computed from optical flow [25] and trained from human behavior via imitation
learning [50]. Recent deliberative planners have used techniques including anytime,
incremental A* for nonsymmetric vehicles moving slowly in cluttered spaces [35],
and RRT* with path optimization [48] and lattice search with precomputed motion
primitives [45] for fast, aggressive maneuvers.
4.3.2 Vision-Based Autonomous Navigation System
Traditional deliberative motion planning approaches usually implement a 3D
grid-based world representation to expand trajectories and check for collisions [5,
18, 56], and generally assume that a planned trajectory will be accurately followed
by a relatively slow-moving vehicle. Applying such an approach to a micro
air vehicle raises several issues. Computational resources usually do not permit
processing large 3D grid representations in reasonable time, and the agility of the
system requires a complex planning approach that incorporates additional vehicle
states to reflect fast vehicle dynamics. Our approach introduces two key features
to mitigate these issues. To reduce the complexity of path verification, our system uses an
image-based world representation generated from stereo vision with a fixed memory
footprint, and to allow planning in a low-dimensional planning space, trajectories
are planned over closed-loop vehicle dynamics.
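To illustrate the second feature, the following is a minimal sketch of planning over closed-loop dynamics, assuming a first-order closed-loop velocity response with time constant tau. The function name, the model, and all parameter values are illustrative assumptions, not the authors' implementation; the point is that each candidate trajectory is identified by a single velocity command rather than a full state-space path, which keeps the planning space low-dimensional.

```python
import numpy as np

def simulate_closed_loop(p0, v0, v_cmd, tau=0.5, dt=0.05, horizon=2.0):
    """Forward-simulate an assumed first-order closed-loop velocity
    response: the inner-loop controller drives the velocity toward
    v_cmd with time constant tau. The planner searches over commands
    (one vector per candidate) and collision-checks the resulting
    position trajectories, instead of expanding a high-dimensional
    state lattice."""
    p = np.asarray(p0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    v_cmd = np.asarray(v_cmd, dtype=float)
    traj = [p.copy()]
    for _ in range(int(horizon / dt)):
        # forward-Euler step of the closed-loop model dv/dt = (v_cmd - v)/tau
        v += (v_cmd - v) * (dt / tau)
        p += v * dt
        traj.append(p.copy())
    return np.array(traj)
```

Each candidate command yields one dynamically feasible trajectory by construction, so the collision checker never sees a path the vehicle cannot track.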
Figure 4.7 gives an overview of our system architecture. 3D perception follows
a stereo vision pipeline. When images are acquired with a forward-looking stereo
camera head, a stereo disparity map is calculated with a real-time stereo algorithm,
and then expanded into configuration space (C-space), which is used for collision
checking. Stereo, C-space expansion, and collision checking all take place within
an image-based representation: 3D world points are characterized by their polar-
perspective image coordinates in the frame of the reference camera and an assigned
stereo disparity value (disparity space). The resulting 2.5D inverse-depth representation
is very well suited for fast obstacle avoidance: close range where accurate object …

Fig. 4.7 Autonomous navigation system architecture. Blocks: image acquisition, pose estimation, stereo disparity, C-space expansion, collision checker, motion planner, vehicle controller, and goal; the figure distinguishes the components operating in disparity space (2.5D) from those operating in world space (3D)