A filter-based approach not only allows estimating the pose of the vehicle for
control but can also estimate calibration parameters such as IMU biases, camera-IMU
extrinsics, and visual drifts. Such self-calibration is crucial for long-term missions
and renders the system power-up-and-go, without the need for pre-mission calibration
procedures. With a map-free, inherently fail-safe vision module, as discussed below,
it is further possible to eliminate the visual initialization procedure and its failure modes.
This renders the MAV throw-and-go: it can be powered on and
immediately thrown in the air to deploy it.
4.2.2 Visual-Inertial State Estimation Approaches
A camera can be used in various ways to compute its pose in 3D space. We
discuss two approaches, one from each of two main categories, which we
call map-based and map-free. The first is a keyframe-based visual odometry approach
using a local map to estimate the arbitrarily scaled 6DoF pose of the camera. The second
approach does not use temporal information or features (i.e., a local map) but only
the current optical flow measurement to estimate the arbitrarily scaled 3DoF
camera velocity vector, the 3DoF attitude of the MAV, and the distance to the current scene.
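The map-free idea can be illustrated with a minimal sketch (a hypothetical helper, not the method of the cited work): assuming a downward-looking camera over a roughly planar scene, subtracting the rotation-induced flow predicted from the gyroscope leaves a translational flow whose average is proportional to the metric velocity divided by the scene distance, i.e., a velocity known only up to scale.

```python
import numpy as np

def derotated_mean_flow(pts, flow, omega):
    """De-rotate measured optical flow with the gyro rate and average it.

    pts   : (N, 2) normalized image coordinates (x, y)
    flow  : (N, 2) measured flow at those points [1/s]
    omega : (3,) body rates from the gyroscope [rad/s]

    For a downward camera over a planar scene and points roughly centered
    in the image, the result approximates -(vx, vy) / d: velocity over
    scene distance, i.e., an arbitrarily scaled velocity.
    """
    wx, wy, wz = omega
    trans = np.empty_like(flow)
    for i, (x, y) in enumerate(pts):
        # rotation-induced flow predicted from the gyro (pinhole motion field)
        rot = np.array([x * y * wx - (1 + x**2) * wy + y * wz,
                        (1 + y**2) * wx - x * y * wy - x * wz])
        trans[i] = flow[i] - rot
    return trans.mean(axis=0)
```

Because the translational flow scales with 1/d, only the ratio of velocity to scene distance is observable from flow alone; the metric scale must come from the IMU.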
4.2.2.1 Map-Based Approach
The first approach is described in detail in [ 63 , 67 ], which show that, by fusing with an
IMU, we can navigate a MAV in large environments and at high altitude with visual
and inertial cues only. Because of its robustness, real-time performance, and position
accuracy, the keyframe-based solution proposed in [ 31 ] was selected and tailored to
run on our embedded architecture. Our implementation uses a downward-looking
camera and executes a sliding-window, vision-based self-localization and mapping
(VSLAM) feature-tracking approach to extract pose estimates from visual inputs,
maintaining constant computational complexity. This method is viable for large out-
door environments and long missions, limited only by battery lifetime and not
by processing power or memory. We show how the algorithm proposed in [ 63 ] can
be implemented on a 12 g, 5 W processing unit while still running at 50 Hz. This
renders even a very lightweight MAV truly power-on-and-go.
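The constant computational complexity stems from bounding the local map: only a fixed number of recent keyframes and their features are retained, so per-frame cost does not grow with trajectory length. A minimal sketch of such a bounded keyframe window (class and field names are hypothetical, not taken from the cited implementation):

```python
from collections import deque

class KeyframeWindow:
    """Fixed-size local map: keeps at most `capacity` keyframes.

    Old keyframes (and their features) are dropped as new ones arrive,
    which bounds per-frame matching and optimization cost regardless of
    mission length -- the source of the constant complexity claimed above.
    """
    def __init__(self, capacity=5):
        self.frames = deque(maxlen=capacity)  # oldest keyframe evicted automatically

    def add_keyframe(self, pose, features):
        self.frames.append({"pose": pose, "features": features})

    def local_map_features(self):
        # Features visible in the current window; matching cost is O(capacity),
        # independent of how long the MAV has been flying.
        return [f for kf in self.frames for f in kf["features"]]
```

The price of dropping old keyframes is that absolute position drifts over time; the filter discussed next accounts for such visual drifts explicitly.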
The 6DoF pose estimate of the VSLAM algorithm is fused with the inertial measurements
of an IMU using an Extended Kalman Filter (EKF); more details are given in [ 63 , 67 ].
An EKF framework consists of a prediction and an update step. The computational
load required by these two steps is distributed among the different processing units of the MAV
as described in [ 64 ]. The state of the filter is composed of the position p_w^i, the attitude
quaternion q_w^i, and the velocity v_w^i of the IMU in the world frame. The gyroscope
and accelerometer biases b_w and b_a, as well as the missing visual scale factor
lambda, are also included in the state vector. For completeness, the extrinsic calibration parameters
describing the relative rotation q_c^i and position p_c^i between the IMU and the camera
are also added to the state.
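As a rough illustration of the prediction/update structure (a simplified sketch, not the implementation of [ 63 , 64 ]: attitude, gyro bias, and the camera-IMU extrinsics are omitted, keeping only position, velocity, accelerometer bias, and the visual scale lambda), the prediction step propagates the state with bias-corrected accelerometer readings, and the update step corrects it with the arbitrarily scaled VSLAM position:

```python
import numpy as np

class PoseScaleEKF:
    """Reduced EKF: state x = [p (3), v (3), b_a (3), lambda (1)].

    The full filter additionally carries attitude, gyro bias, and the
    camera-IMU extrinsics; they are left out here for brevity.
    """
    def __init__(self):
        self.x = np.zeros(10)
        self.x[9] = 1.0          # visual scale lambda, initialized to 1
        self.P = np.eye(10)      # state covariance

    def predict(self, a_meas, dt, q_acc=0.1, q_bias=1e-4):
        p, v, b_a = self.x[0:3].copy(), self.x[3:6].copy(), self.x[6:9].copy()
        a = a_meas - b_a                         # bias-corrected acceleration
        self.x[0:3] = p + v * dt + 0.5 * a * dt**2
        self.x[3:6] = v + a * dt
        F = np.eye(10)                           # state-transition Jacobian
        F[0:3, 3:6] = np.eye(3) * dt
        F[0:3, 6:9] = -0.5 * np.eye(3) * dt**2
        F[3:6, 6:9] = -np.eye(3) * dt
        Q = np.zeros((10, 10))                   # process noise on v and b_a
        Q[3:6, 3:6] = np.eye(3) * q_acc * dt
        Q[6:9, 6:9] = np.eye(3) * q_bias * dt
        self.P = F @ self.P @ F.T + Q

    def update(self, p_vis, r_vis=0.01):
        # Measurement model: VSLAM reports position up to scale, z = lambda * p
        lam = self.x[9]
        H = np.zeros((3, 10))
        H[:, 0:3] = np.eye(3) * lam
        H[:, 9] = self.x[0:3]
        z_pred = lam * self.x[0:3]
        S = H @ self.P @ H.T + np.eye(3) * r_vis
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (p_vis - z_pred)
        self.P = (np.eye(10) - K @ H) @ self.P
```

Because the visual position enters the measurement multiplied by lambda, the scale becomes observable once the IMU excites the system with accelerations, which is what makes the self-calibrating, power-up-and-go behavior described above possible.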