Fig. 4.21 Outdoor experiment environment and data set images: a aerial view of the flight area and target building, with take-off point and flight path marked; b the flat area on the roof is the landing target; c raw input image with good texture on the roof; d image with a saturated area that leads to missing stereo data
4.5 Conclusion and Future Work
Vision-based navigation algorithms have the potential to become an enabling
technology for micro air vehicle autonomy. With the advent of small, low-power
processing units and miniature camera modules from the cell-phone sector, low
SWaP computing for vision applications is ready to be deployed, enabling fully
autonomous navigation of very small platforms for the first time. In this chapter,
we presented three different fundamental building blocks for platform autonomy:
vision-based pose estimation, onboard obstacle avoidance, and autonomous landing.
Fast pose estimation that is independent of external sensor inputs is the basis for
safe MAV operations. Our approach fuses accurate map-based localization with a
fast map-free approach to estimate vehicle velocities in emergency situations when
a map-based approach might fail.
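The fallback logic described above can be sketched in a few lines. This is a simplified illustration, not the chapter's implementation: the function names, the boolean health flag, and the constant-velocity dead-reckoning model are all assumptions introduced here for clarity.

```python
import numpy as np

def fuse_pose(map_pose, map_ok, velocity, prev_pose, dt):
    """Hypothetical fusion fallback: trust the accurate map-based pose
    when localization succeeds; otherwise dead-reckon from the fast,
    map-free velocity estimate so the vehicle stays controllable."""
    if map_ok:
        return np.asarray(map_pose, dtype=float)
    # Map-based localization failed (e.g., tracking loss): propagate
    # the last known pose using the map-free velocity estimate.
    return np.asarray(prev_pose, dtype=float) + np.asarray(velocity, dtype=float) * dt
```

In a real system the switch would typically be replaced by a probabilistic filter that weights both sources by their covariances, but the sketch captures the key idea: the map-free estimate keeps the state bounded during emergencies when the map-based estimate drops out.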
Obstacle avoidance is a key capability for flights in highly cluttered environments or close to the ground. We use a frontal stereo vision approach that provides a polar-perspective, inverse-range world representation for obstacle detection and collision checking with low computational complexity, and deploy a closed-loop motion
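A minimal sketch of how a polar, inverse-range representation supports cheap collision checking follows. The data layout (one inverse-range value per angular bin) and all function names are assumptions made for illustration; the chapter's actual representation is more elaborate, but the constant-time per-bin lookup is the property of interest.

```python
import math

def build_inv_range_map(obstacles, n_bins=64, fov=math.pi / 2):
    """Per angular bin, keep the largest inverse range, i.e. the
    nearest obstacle seen in that direction. Inverse range is natural
    for stereo, since disparity is proportional to 1/range."""
    inv = [0.0] * n_bins
    for ang, rng in obstacles:  # (bearing [rad], range [m])
        b = int((ang + fov / 2) / fov * n_bins)
        if 0 <= b < n_bins:
            inv[b] = max(inv[b], 1.0 / rng)
    return inv

def is_colliding(inv_map, ang, rng, margin=0.0, fov=math.pi / 2):
    """A candidate waypoint collides if the nearest obstacle in its
    angular bin lies at or inside the waypoint's range plus margin.
    This is a single array lookup per query."""
    b = int((ang + fov / 2) / fov * len(inv_map))
    return inv_map[b] >= 1.0 / (rng + margin)
```

For example, with an obstacle at bearing 0 and 2 m range, a waypoint straight ahead at 3 m is flagged as colliding, while one at 1 m (in front of the obstacle) is not.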