frames were also added. This yields the 24-element state vector

$$\mathcal{X} = \left\{\, p_i^{w\,T},\ v_i^{w\,T},\ q_i^{w\,T},\ b_\omega^{T},\ b_a^{T},\ \lambda,\ p_c^{i\,T},\ q_c^{i\,T} \,\right\} \qquad (4.1)$$

comprising the IMU position $p_i^w$, velocity $v_i^w$, and attitude $q_i^w$ in the world frame, the gyroscope and accelerometer biases $b_\omega$ and $b_a$, the visual scale factor $\lambda$, and the camera-IMU extrinsics $p_c^i$ and $q_c^i$.
Details about the EKF prediction and update equations can be found in [63]. A nonlinear observability analysis [62] reveals that all state variables are observable, including the intersensor calibration parameters $p_c^i$ and $q_c^i$. Note that the VSLAM pose estimates are prone to drift in position, attitude, and scale with respect to the world-fixed reference frame. Since these quantities become observable when fusing with an IMU (notably roll, pitch, and scale), gravity-aligned metric navigation becomes possible even in long-term missions. This holds as long as the robot excites the IMU accelerometers and gyroscopes sufficiently, as discussed in [29]. Additionally, since the gravity vector measured by the IMU is always vertically aligned during hovering, the MAV will not crash due to gravity misalignment, even during long-term operations.
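To make the structure of such a filter concrete, the following is a minimal sketch of the IMU-driven prediction step for the state in (4.1), assuming simple Euler integration and Hamilton quaternions; all function and variable names are illustrative and not taken from [63].

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world frame, z-axis pointing up

def quat_mult(q, r):
    # Hamilton product of two quaternions [w, x, y, z].
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1,
        w0 * x1 + x0 * w1 + y0 * z1 - z0 * y1,
        w0 * y1 - x0 * z1 + y0 * w1 + z0 * x1,
        w0 * z1 + x0 * y1 - y0 * x1 + z0 * w1,
    ])

def quat_to_rot(q):
    # Rotation matrix (IMU frame -> world frame) of the unit quaternion
    # [w, x, y, z].
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def predict(p, v, q, b_w, b_a, gyro, acc, dt):
    # One Euler step of the IMU-driven propagation of p_i^w, v_i^w, q_i^w.
    # The biases, the scale lambda, and the camera-IMU extrinsics have no
    # deterministic dynamics and therefore stay constant in the prediction.
    w_hat = gyro - b_w                  # bias-corrected angular rate
    a_hat = acc - b_a                   # bias-corrected specific force
    R = quat_to_rot(q)
    p_new = p + v * dt
    v_new = v + (R @ a_hat + GRAVITY) * dt
    dq = np.concatenate(([1.0], 0.5 * w_hat * dt))  # small-angle increment
    q_new = quat_mult(q, dq)
    return p_new, v_new, q_new / np.linalg.norm(q_new)

A complete filter would additionally propagate the covariance and correct all states in the update step from the VSLAM pose measurement; the biases, the scale, and the extrinsics have no deterministic dynamics and are typically modeled as random walks.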
4.2.2.2 Map-Free Approach
The map-based approach described above is locally drift free. However, it requires redetecting the same features over several camera frames. This redetection is prone to failures and mismatches and can corrupt the local map, which in turn can crash the MAV through a wrong state estimate based on the corrupted map.
In [65, 66], we present an approach which uses only two consecutive camera images and inertial cues for MAV navigation. This inertial-optical flow (IOF)-based approach does not use any kind of history that can be corrupted and does not require finding the same features again in later frames. In [66], we show that we can still estimate the metric velocity of the MAV, its metric distance to the scene, and its full attitude (roll, pitch, yaw) drift free while maintaining a self-calibrating system. That is, in addition to the states used for control, we can estimate the IMU biases and the camera-IMU extrinsics, and do not need specific calibration steps prior to launch. In fact, in this work, we show that this state estimation is robust and fast enough that the MAV can be deployed by simply tossing it into the air, rendering it a throw-and-go system.
The state vector

$$\mathcal{X} = \left\{\, p_i^{w\,T},\ v_i^{w\,T},\ q_i^{w\,T},\ b_\omega^{T},\ b_a^{T},\ \lambda,\ p_c^{i\,T},\ q_c^{i\,T},\ \alpha \,\right\} \qquad (4.2)$$
contains the IMU-centered MAV position $p_i^w$, velocity $v_i^w$, and attitude $q_i^w$ with respect to the world frame. It also contains the IMU biases on the gyroscopes $b_\omega$ and accelerometers $b_a$, the common visual scale factor $\lambda$, and the 6D transformation between the IMU and the camera in translation $p_c^i$ and rotation $q_c^i$. The system can additionally estimate the inclination $\alpha$ of the scene plane it currently observes (see Fig. 4.2).
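As a rough intuition for how the metric quantities become observable, consider the following toy 1-D computation: optical flow over a planar scene only measures the ratio of velocity to scene distance, and comparing its derivative against the metric acceleration sensed by the IMU recovers the distance. This is a hypothetical illustration under a constant-distance assumption, not the estimator of [65, 66].

import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)

d_true = 1.5                      # true distance to the scene plane (m)
v_true = 0.8 * np.sin(2.0 * t)    # true lateral velocity (m/s)
a_imu = np.gradient(v_true, dt)   # metric acceleration, as an IMU senses it

flow = v_true / d_true            # optical flow only observes v/d (rad/s)

# Differentiating the flow and comparing it with the metric acceleration
# recovers the scene distance: a = d * d(flow)/dt  =>  d = a / flow_rate.
flow_rate = np.gradient(flow, dt)
mask = np.abs(flow_rate) > 0.05   # use only well-excited samples
d_est = np.median(a_imu[mask] / flow_rate[mask])

print(f"distance estimate: {d_est:.2f} m (true: {d_true} m)")
print(f"metric velocity at t=0.5 s: {d_est * flow[50]:+.2f} m/s "
      f"(true: {v_true[50]:+.2f} m/s)")

Note that the division is only well conditioned when the acceleration is sufficiently large, which mirrors the excitation requirement discussed earlier.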
A nonlinear observability analysis reveals that all states are observable except two dimensions in position. This is expected, since neither optical flow nor inertial measurements carry information about the absolute position of the vehicle: the metric distance to the observed scene plane can be recovered, but the position within the plane cannot.
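The structure of the measurement model makes this plausible. The following is one standard form of the motion-field equations for a calibrated camera (after Longuet-Higgins and Prazdny), not the exact measurement equation used in [65, 66]: for a point at normalized image coordinates $(x, y)$ and depth $Z$, with camera linear velocity $v$ and angular velocity $\omega$ expressed in the camera frame,

$$\begin{aligned}
\dot{x} &= \frac{x\,v_z - v_x}{Z} + x y\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z,\\
\dot{y} &= \frac{y\,v_z - v_y}{Z} + (1 + y^2)\,\omega_x - x y\,\omega_y - x\,\omega_z.
\end{aligned}$$

For a planar scene, $Z$ is a function of the plane distance $d$ and inclination $\alpha$ alone. The absolute position $p_i^w$ never enters the model; translation appears only through the scaled ratio $v/Z$, whose metric scale is in turn anchored by the accelerometer.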