reconstruction is essential, is resolved with the highest accuracy, whereas accuracy decreases for far distances. At the same time, the polar-perspective character of an image-based representation significantly reduces the memory footprint of a world representation, making this method suitable for small hardware platforms.
The vehicle navigation system follows a standard control loop scheme: the motion
planner module plans 3D vehicle trajectories in world space based on vehicle pose
and a predefined goal input, and issues control commands to a vehicle controller,
which maneuvers the vehicle. For collision checking, 3D trajectory segments are
projected into disparity space and verified using the C-space map.
In our simulated experiments, we use simulated vehicle positions as pose estimation inputs. Our onboard implementation uses a vision-aided pose estimation approach [65] to provide pose. Since any pose estimation framework on a real system will incorporate pose errors, we evaluate the robustness of our planning approach in Sect. 4.3.7.
In the following, we describe the individual parts of the approach in more detail.
4.3.3 Image-Based Collision Checking
For efficiency reasons, collision checking is performed directly in disparity space.
When a new disparity image is obtained from stereo, C-space expansion is applied
in the disparity domain, allowing the MAV to be treated as a single point in space
for planning purposes. During motion planning, small trajectory segments are verified
by projecting them into disparity space and comparing the reconstructed disparity
values along the segment with the corresponding C-space disparity values to detect
collisions.
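The check described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the pinhole projection with principal point (cx, cy) and the "larger disparity means closer" convention are assumed, and the C-space map is a plain 2D array.

```python
def segment_collides(samples, cspace, f, b_s, cx, cy):
    """Check trajectory samples (camera frame, metres) against a C-space
    disparity map. A sample is flagged as colliding if it projects onto a
    pixel whose C-space disparity is at least as large as the sample's own
    disparity, i.e. the expanded obstacle lies at the same depth or closer."""
    h, w = len(cspace), len(cspace[0])
    for (x, y, z) in samples:
        if z <= 0.0:
            continue                      # behind the image plane: not checkable here
        u = int(round(f * x / z + cx))    # assumed pinhole projection
        v = int(round(f * y / z + cy))
        if not (0 <= u < w and 0 <= v < h):
            continue                      # outside the camera's field of view
        d_sample = f * b_s / z            # disparity of the trajectory sample
        if cspace[v][u] >= d_sample:
            return True                   # sample at or behind an expanded obstacle
    return False
```

In practice such a check would also have to handle pixels with unknown disparity; that policy (treat as free or as occupied) is a safety trade-off the sketch leaves out.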
4.3.3.1 C-Space Expansion
C-space expansion is implemented as an image processing function (Fig. 4.8). To illustrate this operation, we first project a pixel of the stereo disparity map, $p = (u, v, d)^T$, into world coordinates using the stereo base $b_s$ and the focal length $f$ (in pixels), assuming rectified images and a disparity map that corresponds to the left camera view:
\[
z_w = -\frac{f\, b_s}{d} \tag{4.4}
\]
\[
P = (x_w,\, y_w,\, z_w)^T = \left[\, \frac{u\, z_w}{f},\ \frac{v\, z_w}{f},\ z_w \,\right]^T \tag{4.5}
\]
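Equations (4.4) and (4.5) amount to a few lines of code. The sketch below follows the sign convention exactly as printed in Eq. (4.4); the parameter names are taken from the text.

```python
def pixel_to_world(u, v, d, f, b_s):
    """Back-project a disparity-map pixel (u, v, d) into world coordinates.

    Implements Eqs. (4.4)-(4.5): z_w = -f*b_s/d, then x_w = u*z_w/f and
    y_w = v*z_w/f. f is the focal length in pixels, b_s the stereo base."""
    z_w = -f * b_s / d                      # Eq. (4.4)
    return (u * z_w / f, v * z_w / f, z_w)  # Eq. (4.5)
```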
Considering an expansion sphere $S$ around $P = (x_w, y_w, z_w)^T$ with the expansion radius $r_v$, we calculate the position of the rectangle that perfectly hides the sphere from the viewpoint of the camera (Fig. 4.9) and assign to it a disparity value that corresponds to the distance to the point on $S$ that is closest to the camera origin.
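The effect of this expansion can be illustrated with a simplified per-pixel sketch. It is not the authors' exact geometry: it works with the unsigned depth |z_w|, approximates the closest-point distance on S by z − r_v instead of ||P|| − r_v, and returns only the rectangle's half-extent rather than its exact position from Fig. 4.9.

```python
import math

def expand_pixel(d, f, b_s, r_v):
    """Simplified C-space expansion for one disparity pixel.

    Moves the pixel's depth toward the camera by the expansion radius r_v and
    returns the resulting (larger) C-space disparity together with a
    conservative half-size, in pixels, of the rectangle covering the
    expansion sphere's silhouette."""
    z = f * b_s / abs(d)           # unsigned depth of the pixel, from Eq. (4.4)
    z_near = z - r_v               # approx. distance to the closest point on S
    if z_near <= 0.0:
        return float('inf'), None  # sphere reaches the camera: always blocked
    d_cspace = f * b_s / z_near    # expanded disparity (Eq. (4.4) inverted)
    half_px = int(math.ceil(f * r_v / z_near))  # rectangle half-extent in pixels
    return d_cspace, half_px
```

Because disparity grows as depth shrinks, the expanded disparity is always larger than the measured one, which is what makes the later "C-space disparity ≥ sample disparity" collision test conservative.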