in [5], wherein an AdaBoost-based learning algorithm is used to extract lane features
in challenging road scenarios.
The detected lane features are then passed through an outlier-removal stage, in which
road models and clustering algorithms are applied to the features extracted in the
first step. This is done in two different domains. The first is the image domain,
with its perspective effect: the HVS perceives lanes as usually straight (curving,
if at all, only in the far view from the host vehicle) and directed toward a
vanishing point. The second is the inverse perspective map (IPM) view, i.e., the
top view of the road, in which the lanes are perceived as parallel lines that are
either straight or follow a clothoid model [10]. These visual perceptions are
translated into computer vision algorithms such as the Hough transform, a
straight-line detector [1, 10], and RANSAC, an outlier rejection algorithm
based on a road model [1]. The third step in the lane estimation process is lane tracking,
which is usually performed using techniques like Kalman filters and particle filters
[1, 5, 10, 16]. Lane tracking mirrors the HVS's expectation that lane positions in
the current frame can be predicted from their history in previous frames together
with the vehicle dynamics. In addition to these three steps, other
visual perception cues are also used for efficient lane estimation. For example, in
[16], vehicle detection was used as an additional cue to locate lanes robustly. This is
inspired by the HVS's expectation that vehicles travel within lanes, and hence
lanes can be localized near the detected vehicles.
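The Kalman-filter prediction-and-update cycle used in the tracking step can be sketched as follows. This is a minimal, hypothetical illustration with a one-dimensional state (a lane's lateral offset and its per-frame drift); a practical tracker would estimate full lane-model parameters and fold in the vehicle dynamics. The function name, motion model, and noise values here are assumptions, not taken from the text.

```python
# Minimal sketch of Kalman-filter lane tracking (pure Python).
# Hypothetical 1-D state: lateral lane position x and its per-frame
# rate of change v, with a constant-velocity motion model.

def kalman_track(measurements, q=0.01, r=0.5):
    """Smooth noisy per-frame lane positions; q/r are assumed noise levels."""
    x, v = measurements[0], 0.0          # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    estimates = [x]
    for z in measurements[1:]:
        # Predict: propagate state through x' = x + v, and P' = F P F^T + Q
        x, v = x + v, v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # Update: blend the prediction with the new measurement z
        s = p[0][0] + r                  # innovation covariance
        k0, k1 = p[0][0] / s, p[1][0] / s   # Kalman gain
        y = z - x                        # innovation (measurement residual)
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        estimates.append(x)
    return estimates

# A lane drifting laterally, observed with measurement noise:
noisy = [0.0, 0.11, 0.19, 0.32, 0.38, 0.52, 0.61, 0.69]
smooth = kalman_track(noisy)
```

The prediction step is what lets the tracker bridge frames where feature extraction momentarily fails, since the state carries the lane's recent motion forward.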
Although a number of computer-vision-based lane analysis methods have been
reported in the literature, as shown by recent works [1, 5-7, 10], most of them
address the robustness of the vision algorithms in different road scenarios. However,
as Stein points out in [18], "The challenge of putting vision algorithms into a
car," there is a need to explore lane analysis approaches suited to embedded realization.
Attempts have been made to realize embedded solutions for lane estimation and
tracking [9, 15], but as indicated in [9], most of them have been architectural
translations of some parts of existing lane detection algorithms.
In this paper, we propose a lane feature extraction method that addresses some of
these issues related to embedded realization.
10.2 Lane Analysis and Embedded Vision
Different variants of lane analysis techniques have been proposed in the
literature, e.g., [1, 5, 10], as shown in Table 10.1. Detailed surveys of lane
analysis methods are presented in [10] and [7]. An effective lane analysis
method [7, 10] comprises three main steps: (1) lane feature extraction,
(2) outlier removal or postprocessing, and (3) lane tracking. Pixel-level
filtering operations, such as steerable filters, are applied to the entire image
or to regions of interest (usually the lower half of the input image) to extract
lane features. Postprocessing and outlier removal are then performed using
techniques such as RANSAC [1] and the Hough transform [13] to improve
robustness. Inverse perspective mapping (IPM) of the input image is
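The RANSAC-based outlier removal mentioned above can be sketched as follows, assuming lane feature points roughly follow a straight line in the image or IPM plane. The function name, point coordinates, iteration count, and inlier threshold are illustrative assumptions, not values from the text.

```python
import random

# Minimal RANSAC sketch for rejecting outliers among lane feature points,
# assuming the lane model is a straight line y = a*x + b.

def ransac_line(points, iters=200, thresh=0.5, seed=0):
    """Fit a line to (x, y) points while tolerating gross outliers.

    Returns (a, b, inliers): slope, intercept, and the points that
    agree with the best model within `thresh`.
    """
    rng = random.Random(seed)
    best = (0.0, 0.0, [])
    for _ in range(iters):
        # Hypothesize a model from a minimal sample of two points
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                     # skip degenerate vertical pairs
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Score the model by counting points close to the line
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best[2]):
            best = (a, b, inliers)
    return best

# Lane-like points on y = 2x + 1, contaminated with a few gross outliers:
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5), (5, 25)]
a, b, inliers = ransac_line(pts)
```

Because each hypothesis is scored only by its inlier count, a handful of spurious feature points (shadows, cracks, neighboring vehicles) cannot pull the fitted lane model away from the consensus line.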