Digital Signal Processing Reference
The chapter is organized as follows: Section 18.2 gives a brief overview of
previous work related to the study presented in this chapter. Section 18.3 describes
the protocol used to collect the database. Section 18.4 presents subjective
evaluations to quantify the perceived distractive behaviors. Section 18.5 reports
our analysis of features extracted from the CAN-bus signal, a frontal camera, and a
microphone. We study the changes in behaviors observed when the driver is
engaged in secondary tasks. Section 18.6 demonstrates that the multimodal features
can be used to recognize drivers engaged in secondary tasks, and to infer the
distraction level of the drivers. Section 18.7 concludes the chapter with discussion
and future directions.
18.2 Related Work
Several studies have attempted to detect inattentive drivers. These studies have proposed different sensing technologies, including controller area network bus (CAN-bus) data [7, 10, 11], video cameras facing the driver [12-14], microphones [10], and invasive sensors that capture biometric signals [8, 15, 16]. Some studies have analyzed data from real driving scenarios [7, 11, 17], while others have considered car simulators [16, 18, 19]. The studies also differ in the secondary tasks considered in the analysis. Bach et al. presented an exhaustive review of 100 papers that have considered the problem of understanding, measuring, and evaluating driver attention [20].
This section gives a brief overview of the current approaches to detect driver
distractions.
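Several of the approaches reviewed in this section classify driver state from vehicle-dynamics features using generative models such as GMMs. As a hedged illustration only, the sketch below trains one Gaussian mixture per driver state on synthetic per-window features and classifies by log-likelihood; the feature choices (steering-angle variability, speed), the data, and all parameter values are assumptions for illustration, not the cited systems.

```python
# Minimal sketch of GMM-based driver-state classification from
# vehicle-dynamics features. All data is synthetic and the features
# (steering-angle std, speed) are illustrative assumptions; the works
# cited in this section use richer features and a UBM adaptation step.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic per-window features: [steering-angle std (deg), speed (km/h)].
# Distracted driving is modeled here as more erratic steering.
attentive = rng.normal(loc=[2.0, 60.0], scale=[0.5, 5.0], size=(200, 2))
distracted = rng.normal(loc=[4.0, 55.0], scale=[1.5, 8.0], size=(200, 2))

# One GMM per class; a GMM-UBM system would instead MAP-adapt a shared
# background model per driver, which is omitted in this sketch.
gmm_att = GaussianMixture(n_components=2, random_state=0).fit(attentive)
gmm_dis = GaussianMixture(n_components=2, random_state=0).fit(distracted)

def classify(window):
    """Label a feature window by which class GMM scores it higher."""
    x = np.atleast_2d(window)
    return "distracted" if gmm_dis.score(x) > gmm_att.score(x) else "attentive"
```

In practice, such classifiers are applied to short sliding windows of CAN-bus signals, and the likelihood-ratio threshold can be tuned to trade off false alarms against missed detections.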
18.2.1 Modalities
Features derived from the vehicle, such as speed, acceleration, and steering wheel angle, are valuable in assessing driver behaviors [13, 18, 19, 21-23]. Relevant information can be extracted from CAN-bus data. Sathyanarayana et al. used CAN-bus signals to model driver behaviors [21]. They extracted the steering wheel angle and the gas and brake pedal pressures. These features were used to detect driving maneuvers such as turns, stops, and lane changes. After maneuver recognition, they detected distraction using driver-dependent Gaussian mixture model-universal background models (GMM-UBM). Unfortunately, accessing CAN-bus information is not always possible, since car manufacturers protect this information. Accessing car information is easier in studies that use car simulators.
These interfaces usually provide detailed information about the car. For example,
Tango and Botta used features such as the steering angle, lateral position, lateral acceleration, and speed of the host vehicle to predict the reaction time of the drivers [19]. Along with other features, Liang et al. used the steering wheel position, steering error, and lane position to assess cognitive distraction [13]. Ersal et al. built a