We ported the initial estimator implementation from the AscTec Mastermind to
the U2, applying system-specific changes to speed up execution on the SoC. We
used a highly ARM-customized Ubuntu version as the operating system and the
Robot Operating System (ROS) [49] for interprocess communication. Our VSLAM
implementation consists primarily of a tracking part and a mapping part, which we
force to execute on separate cores. Tracking is the most critical part, since it yields
the instantaneous pose measurements used to generate filter updates. Running this
part on a dedicated core therefore ensures uninterrupted pose handling at all times.
Mapping is responsible for pose refinement and windowed bundle adjustment, and
is thus less time critical. Note that these adjustments are refinements; we do not use
global loop-closure techniques, which avoids large and abrupt pose changes. Since
the mapping task runs at a lower frequency and is less time critical, it shares its
dedicated core with other system tasks. After optimization, the vision front end
produced visual pose estimates at a stable 50 Hz rate.
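As an illustration, forcing a thread onto a dedicated core on Linux can be done via CPU affinity. The following minimal C++ sketch shows the mechanism; the core indices and function names are illustrative assumptions, not our exact implementation.

```cpp
#include <pthread.h>
#include <sched.h>
#include <thread>

// Pin the calling thread to a single CPU core (Linux-specific).
// Returns true on success.
bool pinThreadToCore(int core_id) {
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(core_id, &cpuset);
    return pthread_setaffinity_np(pthread_self(),
                                  sizeof(cpu_set_t), &cpuset) == 0;
}

void trackingLoop() {
    pinThreadToCore(0);  // tracking alone on core 0: uninterrupted pose handling
    // ... per-frame pose tracking feeding filter updates ...
}

void mappingLoop() {
    pinThreadToCore(1);  // mapping shares core 1 with other system tasks
    // ... pose refinement and windowed bundle adjustment at lower rate ...
}

int main() {
    std::thread tracker(trackingLoop);
    std::thread mapper(mappingLoop);
    tracker.join();
    mapper.join();
}
```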
4.2.3.2 Map-Free Approach
Our inertial-optical flow (IOF)-based approach is designed to keep the MAV airborne
at all times in a fail-safe manner. Thus, it has to have low computational cost, require
few system resources, and be fail safe.
We implement IOF on our 12 g Odroid-U2 platform and use NEON optimizations
similar to those for the map-based approach explained above. We use the same FAST
feature extraction method but simplify the matching process by not warping patches.
Since we only use two consecutive images at a high frame rate, the distortion between
them is small and warping is not required.
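To make this simplification concrete, the following is a minimal sketch of unwarped patch matching between two consecutive frames. The names, patch size, and search radius are illustrative assumptions (the actual implementation is NEON-optimized), and image border handling is omitted.

```cpp
#include <cstdint>
#include <climits>

struct Corner { int x, y; };  // FAST corner in the previous frame
struct Flow   { int dx, dy; };

// Sum of squared differences over a raw (unwarped) P x P patch.
// Images are row-major grayscale with the given stride.
int patchSSD(const uint8_t* a, const uint8_t* b, int stride, int P = 8) {
    int ssd = 0;
    for (int r = 0; r < P; ++r)
        for (int c = 0; c < P; ++c) {
            int d = int(a[r * stride + c]) - int(b[r * stride + c]);
            ssd += d * d;
        }
    return ssd;
}

// Search a small window in the current frame around the corner's old
// position. At high frame rate the inter-frame motion is small, so the
// raw patch comparison works without perspective warping.
Flow matchCorner(const uint8_t* prev, const uint8_t* cur, int stride,
                 Corner c, int radius = 4) {
    int best = INT_MAX;
    Flow flow{0, 0};
    const uint8_t* ref = prev + c.y * stride + c.x;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            const uint8_t* cand = cur + (c.y + dy) * stride + (c.x + dx);
            int ssd = patchSSD(ref, cand, stride);
            if (ssd < best) { best = ssd; flow = {dx, dy}; }
        }
    return flow;  // optical flow vector for this corner
}
```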
Computing the normalized camera velocity vector requires normalizing all optical
flow vectors with their scene depth. As detailed in [65], this normalization uses a
computationally complex SVD per feature $i$ involving the optical flow $\dot{x}_i(t)$, the
feature direction vector $x_i(t)$, and the camera velocity direction vector $v(t)$. The
unknowns are the feature scale factor and its change $\lambda_i(t)$, $\dot{\lambda}_i(t)$, and the velocity
normalization factor $\mu$:

$$\dot{\lambda}_i(t)\, x_i(t) + \lambda_i(t)\, \dot{x}_i(t) = \mu\, v(t). \qquad (4.3)$$

These per-feature equations can be stacked into a block-sparse matrix $M$ containing
the optical flow and velocity vector measurements $(x_i(t), \dot{x}_i(t), v(t))$, with the
solution vector $\lambda$ containing a scale factor pair per feature $(\lambda_i(t), \dot{\lambda}_i(t))$ and the
scale factor $\mu$ for the velocity vector. This solution is defined only up to an arbitrary
global scale (which will be estimated in the EKF using the IMU). Thus, without loss
of generality, we can set $\mu = 1$ and use the block sparsity of $M$ to efficiently compute
the SVD in a block-wise parallel fashion on the Odroid-U2. The optimized code runs
at 50 Hz with an image resolution of 752 × 480 (WVGA) on the Odroid-U2, using
only about 20 % of the overall computation capacity of the system.
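Fixing $\mu = 1$ decouples the stacked system: each feature contributes an independent 3 × 2 block of Eq. (4.3) that can be solved by a small SVD, which is the block-wise parallelism exploited above. The following Eigen-based sketch illustrates the per-feature solve under that assumption; the function and variable names are ours, not the flight code.

```cpp
#include <Eigen/Dense>
#include <vector>

struct FeatureScales { double lambda_dot, lambda; };

// Solve Eq. (4.3) per feature with the velocity normalization fixed to
// mu = 1: lambda_dot_i * x_i(t) + lambda_i * x_dot_i(t) = v(t).
// Each feature is an independent 3x2 least-squares problem, so the
// blocks can be solved in parallel.
std::vector<FeatureScales> solveScales(
    const std::vector<Eigen::Vector3d>& x,      // feature directions x_i(t)
    const std::vector<Eigen::Vector3d>& x_dot,  // optical flow vectors
    const Eigen::Vector3d& v)                   // velocity direction v(t)
{
    std::vector<FeatureScales> out(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        Eigen::Matrix<double, 3, 2> A;
        A.col(0) = x[i];      // multiplies lambda_dot_i
        A.col(1) = x_dot[i];  // multiplies lambda_i
        Eigen::Vector2d s = A.jacobiSvd(Eigen::ComputeFullU |
                                        Eigen::ComputeFullV).solve(v);
        out[i] = {s(0), s(1)};
    }
    return out;
}
```

The recovered $\lambda_i(t)$ are the per-feature depth scales used to normalize the flow vectors; the overall scale of the result remains arbitrary and is resolved in the EKF using the IMU.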