also performed to transform the input image into the world coordinate system (WCS)
[10]. In addition, lane models and vehicle dynamics from CAN data are used to track
lanes across time using techniques such as Kalman filtering.
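To make the tracking step concrete, the following is a minimal sketch of a Kalman filter applied to a single lane parameter. The state choice (lateral offset and its rate of change), the constant-velocity motion model, and all noise values are illustrative assumptions, not the parameters of the cited systems, which track richer lane-model states and fuse CAN vehicle dynamics.

```python
import numpy as np

class LaneKalman:
    """Minimal constant-velocity Kalman filter for one lane parameter:
    lateral offset (m) and its rate of change. All matrices and noise
    levels below are illustrative assumptions only."""

    def __init__(self, dt=0.05):
        self.x = np.zeros(2)                          # state: [offset, offset_rate]
        self.P = np.eye(2)                            # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
        self.H = np.array([[1.0, 0.0]])               # only the offset is measured
        self.Q = 1e-3 * np.eye(2)                     # process noise
        self.R = np.array([[0.05]])                   # measurement noise

    def predict(self):
        # Propagate state and covariance one frame forward in time.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Correct the prediction with a per-frame offset detection z.
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain (2x1)
        self.x = self.x + (K @ y)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

kf = LaneKalman()
for z in (0.10, 0.12, 0.15, 0.16):   # noisy per-frame offset detections (m)
    kf.predict()
    kf.update(z)
```

Because the filter carries a rate term in its state, it can coast through frames in which feature extraction fails, which is precisely why tracking improves robustness across time.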
Considering that IDAS are implemented on battery-powered embedded platforms
inside a car, attempts have been made to implement lane detection systems on embed-
ded platforms, e.g., in [9, 15]. However, as indicated previously, most of these are par-
tial systems, with the exception of the full system implemented in [9]. For example,
in [15], lane detection is implemented using steerable filters on an FPGA platform;
however, this is only the lane feature extraction module of the comprehensive and
robust lane analysis method called VioLET in [10]. One of the very few complete
lane analysis systems is reported in [9], which includes a pipelined architecture for
lane feature extraction, lane model fitting, and tracking, implemented on an FPGA
platform using the DSP48 cores of Spartan FPGAs.
In [18], the different kinds of embedded constraints that determine the feasibility
of deploying a computer vision task in a car, which is an excellent example of a
complex embedded system, are elaborated. These constraints bring together the
requirements of two different disciplines: computer vision and embedded engineering.
In other words, robustness is the key performance index for a computer vision
algorithm, whereas real-time operation, limited hardware resource utilization, and
energy efficiency are the key metrics for an embedded realization. When the two are
brought together in an active driver safety framework, ensuring the reliability and
dependability of computer vision algorithms running on resource-constrained
computing platforms is a further challenge that must be met.
10.3 Feature Extraction Method for Context-Aware Lane
Analysis
Lane feature extraction is one of the key steps in real-time lane analysis, which
includes both lane estimation and tracking. The robustness of the entire lane analysis
system depends directly on reliable lane features extracted from the road scene. This
also implies a direct relationship between the efficiency of the lane feature extraction
process and the robustness of the system: adding more computer vision algorithms
for lane feature extraction in order to improve robustness can directly impact the
efficiency of the system. Moreover, the robustness, and hence the efficiency, of this
feature extraction step depends on vehicle surround conditions such as road type,
weather conditions (fog, wet roads, etc.), environmental changes in the road scene
(shadows, road surface, etc.), and the availability of other data sources such as road
maps. These factors, namely the application requirements (e.g., safety-critical systems
demand higher robustness), environmental and weather conditions, road information,
lane types, etc., constitute the context in which lane analysis is to be performed.
Therefore, this context plays an important role in the robustness and efficiency of
the lane feature extraction step. A detailed exploration of the lane