other sensors such as vehicle controller area network (CAN) data in such a
framework is explained in detail in [23], wherein vision systems that enhance
driver safety by looking in and looking out of the vehicle are proposed. It is
established that it is important not only to sense the environment outside the vehicle,
such as obstacles (vehicles, pedestrians), but also to monitor the dynamics of the driver
(and possibly other passengers) inside the vehicle. Such holistic sensing
also enables predicting driver intentions and taking the necessary control/alarm
actions well in time to mitigate dangerous situations [4].
The vehicle surround analysis modules include operations such as lane analy-
sis [19] and vehicle detection [20], which are the front-end modules for
capturing and analyzing the vehicle surroundings. The information from these mod-
ules is then further analyzed to assess the criticality of the situation and predict
driver intentions and behavior before the driver makes any decision or maneuver [21,
22]. This assessment and prediction can be used either to warn/alert the driver of
unsafe maneuvers or as input to automatic control systems such as
Adaptive Cruise Control (ACC). Therefore, the vehicle surround analysis
modules play a vital role in deciding the effectiveness of the IDAS because they are
the primary modules that sense and analyze data from outside and inside the vehicle
to extract meaningful information for the rest of the IDAS.
Among the different modules in an active driver safety framework, lane analysis
using monocular cameras contributes to its effectiveness in multiple ways. First, lane
analysis, i.e., lane estimation and tracking, aids in localizing the ego-vehicle motion,
which is one of the first and primary steps in most IDAS, such as lane
departure warning (LDW) and lane change assistance [11, 23]. Lane analysis
has also been shown to aid other vehicle surround analysis modules. For example, in [17],
lanes are used to detect vehicles more robustly because vehicles are assumed to be
localized to their ego-lanes. Similarly, lane detection is shown to play a significant
role in predicting driver intentions before lane changes occur [11, 23].
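Since lane departure warning is among the most direct uses of this ego-vehicle localization, a minimal sketch may help fix ideas. The snippet below computes a time-to-lane-crossing (TLC) style warning from the vehicle's lateral offset and drift speed relative to the lane boundary; the function names and the 1-second threshold are illustrative assumptions, not taken from the cited systems.

```python
def time_to_lane_crossing(offset_m, lateral_speed_mps):
    """Seconds until the vehicle reaches the lane boundary.

    offset_m: remaining lateral distance to the boundary (>= 0).
    lateral_speed_mps: drift speed toward the boundary; <= 0 means
    the vehicle is holding its lane or moving away from the boundary.
    """
    if lateral_speed_mps <= 0:
        return float("inf")  # not drifting toward the boundary
    return offset_m / lateral_speed_mps


def should_warn(offset_m, lateral_speed_mps, tlc_threshold_s=1.0):
    """Raise a lane-departure warning when the predicted crossing time
    falls below the threshold (1 s is an illustrative choice)."""
    return time_to_lane_crossing(offset_m, lateral_speed_mps) < tlc_threshold_s
```

For example, a vehicle 0.4 m from the boundary drifting toward it at 0.5 m/s crosses in 0.8 s and would trigger the warning, whereas the same offset at 0.1 m/s (4 s to crossing) would not.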
By lane estimation, we refer to the process of detecting and tracking lane markings
that define the lanes in a road scene. A detailed survey of various lane estimation
techniques is presented in [7, 10]. Though a number of lane estimation methods
have been proposed in the literature [1, 2, 5, 7, 8, 10, 16], variations in the road
scene make lane estimation challenging [5, 7, 10]: shadows cast by trees and
vehicles, skid marks, varying tar color and road surfaces, and changing ambient
lighting conditions. Figure 10.1 shows some of these challenging
scenarios.
The lane estimation methods in [1, 5, 7, 8, 10, 16] usually comprise three main
steps: (1) lane feature extraction, (2) outlier removal, and (3) lane tracking [10].
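As a concrete illustration of these three steps, the sketch below detects a lane stripe in a synthetic image: a crude intensity threshold stands in for the feature extractors discussed next, a RANSAC fit to a straight-line lane model x = a*y + b removes outliers, and simple recursive smoothing stands in for the tracking filter typically used. The straight-line model, thresholds, and synthetic data are simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(img, thresh=0.5):
    """Step 1: crude feature extraction -- bright pixels as lane candidates.
    (Real systems use steerable/Gabor filters or learned detectors.)"""
    ys, xs = np.nonzero(img > thresh)
    return np.column_stack([ys, xs])  # rows of (y, x)

def ransac_line(points, n_iter=200, tol=2.0):
    """Step 2: outlier removal by fitting the straight-line lane model
    x = a*y + b with RANSAC; non-lane features become outliers."""
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), 2, replace=False)
        (y1, x1), (y2, x2) = points[i], points[j]
        if y1 == y2:
            continue  # degenerate pair, cannot define x = a*y + b
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = np.sum(resid < tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model

def track(prev, new, alpha=0.7):
    """Step 3: recursive smoothing of lane parameters across frames
    (a simple stand-in for the Kalman filters usually employed)."""
    if prev is None:
        return new
    return tuple(alpha * n + (1 - alpha) * p for n, p in zip(new, prev))

# Synthetic frame: a bright lane stripe x = 0.3*y + 20 plus noise clutter
img = np.zeros((100, 100))
ys = np.arange(100)
img[ys, (0.3 * ys + 20).astype(int)] = 1.0
noise = rng.integers(0, 100, size=(30, 2))
img[noise[:, 0], noise[:, 1]] = 1.0

pts = extract_features(img)
model = ransac_line(pts)       # recovers (a, b) close to (0.3, 20)
state = track(None, model)     # first frame: initialize the track
```

Despite the clutter pixels, the RANSAC step rejects them as outliers because far more candidate features agree with the true stripe than with any line through noise points.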
Lane feature extraction techniques are usually based on properties of lane markings
such as directionality, intensity gradients, texture, and color. Techniques such as steerable
filters [3, 10, 16], adaptive thresholding [1, 12], and Gabor filters [8] are used to extract
lane features; learning-based approaches are also employed for this purpose in [5].
The detected lane features are then filtered for outliers in the second
step of the lane estimation process. This step usually involves fitting the detected
features to a known road/lane model, thereby eliminating non-lane features. This