Rotorcraft can use their hovering ability to position a sensor payload in 3D space, constrained only by the mission profile. However, to be deployable, such a platform needs a human-machine interface that allows an operator to control it easily. The ingredient that most facilitates operation is autonomy, since an autonomous vehicle can execute high-level commands without further human interaction.
This requires the vehicle to know its position within the environment and to be able to avoid collisions in flight as well as during takeoff and landing.
All processes enabling such autonomy have to be implemented onboard, without
requiring any external sensor input.
Miniature rotorcraft (e.g., quadrotors) offer very high maneuverability and agility but require a high control rate because of their natural instability. Consequently, sensor signals and images used for accurate pose estimation and for control input need to be processed fast. Since the platform has to be self-contained and payload capacities on micro air vehicles (MAVs) are in general very limited, only lightweight and low-power sensors and processing units can be used onboard the vehicle. This favors vision-based solutions that use small, lightweight cameras and microelectromechanical systems (MEMS) inertial sensors. As recent developments in multicore smartphone processors are driven by the same size, weight, and power (SWaP) constraints, MAVs can directly benefit from new products that provide more computational resources at lower power budgets and weight. This enables the miniaturization of aerial platforms that are able to perform navigation tasks fully autonomously. In the subsequent sections, we introduce our autonomous navigation framework with a focus on pose estimation, collision avoidance, and an example of a high-level navigation task that builds on these lower-level functions: autonomous landing.
Viable solutions for GPS-independent pose estimation from visual and inertial sensor inputs have been proposed in the literature [29, 41]. However, a major algorithmic challenge is to process the sensor information at a high rate in order to provide vehicle control and high-level tasks with real-time information about position and other vehicle states.
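To make this concrete, below is a minimal sketch of a loosely coupled visual-inertial scheme: high-rate inertial propagation is corrected by a lower-rate visual pose measurement through a constant-gain (complementary-filter-style) update. The 1-D state, rates, noise levels, and gain are illustrative assumptions, not the estimators of [29, 41].

```python
import numpy as np

# Minimal 1-D loosely coupled visual-inertial fusion sketch.
# A noisy, biased accelerometer propagates the state at high rate;
# a slower "visual" pose measurement corrects the accumulated drift
# with a constant gain. All rates, noise levels, and the gain are
# illustrative assumptions.

IMU_RATE = 200.0                 # Hz, assumed inertial update rate
CAM_RATE = 20.0                  # Hz, assumed visual pose rate
GAIN = 0.3                       # constant correction gain (assumption)
DT = 1.0 / IMU_RATE
STEPS_PER_FRAME = int(IMU_RATE // CAM_RATE)

rng = np.random.default_rng(42)

# Ground truth: a sinusoidal acceleration profile over 2 s of "flight".
t = np.arange(0.0, 2.0, DT)
acc_true = np.sin(2.0 * np.pi * 0.5 * t)
vel_true = np.cumsum(acc_true) * DT
pos_true = np.cumsum(vel_true) * DT

pos_est = vel_est = 0.0
for k in range(t.size):
    # High-rate propagation with accelerometer bias and noise.
    acc_meas = acc_true[k] + 0.05 + rng.normal(0.0, 0.02)
    vel_est += acc_meas * DT
    pos_est += vel_est * DT

    # Low-rate correction from a (noisy) visual pose measurement.
    if k % STEPS_PER_FRAME == 0:
        pos_vis = pos_true[k] + rng.normal(0.0, 0.01)
        pos_est += GAIN * (pos_vis - pos_est)

print(f"position error after 2 s: {abs(pos_est - pos_true[-1]):.3f} m")
```

Real systems replace the constant gain with an extended Kalman filter and estimate the sensor biases explicitly, but the structure, fast inertial propagation corrected by slow visual updates, carries over.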
In Sect. 4.2, we approach the issue of processing the vast camera information in real time, rendering the camera a 6 degrees of freedom (DoF) pose sensor or a 3-DoF velocity sensor. We discuss two methods representing two flavors of vision-based MAV state estimation. The first is a map-based approach using feature matches over long periods. The second is a map-free, and thus inherently fail-safe, approach that does not use any kind of feature history. We will argue that the first approach is more suitable for local drift-free navigation, while the latter is useful as a fall-back to keep the MAV airborne if the map becomes corrupted. We will show that such an approach can quickly stabilize a thrown MAV and keep it at a constant heading and distance to the scene, even though only two consecutive images and no feature history are used.
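As a rough illustration of the map-free flavor (a simplified sketch assuming OpenCV, not the actual method of Sect. 4.2), the snippet below estimates lateral velocity from only two consecutive frames: corners detected in the first frame are tracked into the second with pyramidal Lucas-Kanade optical flow, and the mean pixel displacement is converted to metric velocity under a pinhole model with an assumed scene distance and focal length.

```python
import numpy as np
import cv2

# Map-free, two-frame velocity sketch (illustrative, not the algorithm
# of Sect. 4.2): track corners from frame k to frame k+1 with pyramidal
# Lucas-Kanade optical flow, then convert the mean pixel displacement
# to a metric lateral velocity via a pinhole camera approximation.

def two_frame_velocity(prev_img, next_img, dt, depth_m, focal_px):
    """Estimate lateral velocity (m/s, x/y) from two consecutive frames."""
    # Detect corners in the previous frame only; no feature history kept.
    pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    flow_px = (nxt[good] - pts[good]).reshape(-1, 2).mean(axis=0)
    # Pinhole approximation: lateral velocity ~= flow * depth / (f * dt).
    return flow_px * depth_m / (focal_px * dt)

# Synthetic test: two crops of one texture, offset by 3 px along x.
rng = np.random.default_rng(0)
tex = cv2.GaussianBlur((rng.random((240, 340)) * 255).astype(np.uint8),
                       (5, 5), 0)
frame0 = np.ascontiguousarray(tex[:, 3:323])
frame1 = np.ascontiguousarray(tex[:, 0:320])

v = two_frame_velocity(frame0, frame1, dt=1 / 30, depth_m=2.0,
                       focal_px=300.0)
print("estimated lateral velocity (m/s):", v)   # expect roughly [0.6, 0.0]
```

Because no feature history is kept, a tracking failure in one frame pair cannot corrupt later estimates; the price is that such a signal drifts over time, which suits it to short-term stabilization rather than navigation.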
Once pose estimation is available, higher-level autonomous navigation tasks that leverage and require this information can be executed. Examples of such tasks are obstacle avoidance, autonomous landing, ingress, surveillance, and exploration.
In order to maneuver safely in highly cluttered environments and at low altitude,
a MAV needs the ability to detect and avoid obstacles in its flight path autonomously.