Eye Model Analysis
Our system should be capable of reliably distinguishing between open and closed
eyes. For that purpose, a Support Vector Machine classifier [ 3 ] was chosen. During
the training stage, the classifier is fed with examples consisting of grayscale images
of equal size that were cropped from frames used in the initialization stage, so as to
contain only the (open or closed) eyes portion of the frame.
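The training step above can be sketched as follows. The chapter only states that an SVM classifier [3] is trained on fixed-size grayscale eye crops; the library (scikit-learn), the crop size, and the synthetic stand-in data below are illustrative assumptions, not details from the prototype.

```python
# Sketch of the eye-state training step, assuming scikit-learn's SVC.
# The crop size and the synthetic "crops" are placeholders for the
# real grayscale images gathered during initialization.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
CROP = (24, 16)  # hypothetical fixed crop size (rows, cols)

# Stand-ins for eye crops: "open" crops brighter on average than "closed".
open_eyes = rng.normal(170, 10, size=(20, *CROP)).clip(0, 255)
closed_eyes = rng.normal(80, 10, size=(20, *CROP)).clip(0, 255)

X = np.vstack([open_eyes, closed_eyes]).reshape(40, -1) / 255.0  # flatten, scale
y = np.array([1] * 20 + [0] * 20)  # 1 = open, 0 = closed

clf = SVC(kernel="rbf").fit(X, y)

def eye_state(crop):
    """Classify one grayscale crop: 1 = open, 0 = closed."""
    return int(clf.predict(crop.reshape(1, -1) / 255.0)[0])
```

At runtime, each crop taken from the candidate eye region is flattened the same way and passed to `eye_state`.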
Head Position Analysis
Our prototype uses two main factors to determine if the driver is in a state that may
require issuing a warning: (i) the duration of the driver's “closed eyes” state; and
(ii) the detection of a characteristic type of nodding while the eyes are closed. We are
interested in the type of nodding associated with dozing, which typically consists of
a rapid vertical head drop with slow recovery back up. During the initialization
stage, when the system interacts briefly with the driver for the sake of initial
calibration, an analysis of the relative location of the driver's head within the frame
allows the calculation of all the necessary parameters for correct functioning of the
nod tracking method. Those parameters define two thresholds, and the nod tracking method compares the relative vertical position of the driver's head against them. In a rested state, the driver keeps his or her head in a fairly stationary position above the upper threshold. The lower threshold can be derived from the physical properties of the human body: the flexibility of the neck and the range of motion of the head are known, so it can be determined how far the head can physically lean forward while nodding. The lower threshold is then set statistically, as a value beneath which we can claim with a high degree of certainty that the head has nodded.
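The two-threshold nod check can be sketched as below. The threshold values, frame rate, and the definition of "rapid" (a maximum drop time) are illustrative assumptions; in the prototype they would come from the calibration analysis described above.

```python
# Minimal sketch of the two-threshold nod detector. A nod is declared
# when the head falls from the rested zone (above the upper threshold)
# to below the lower threshold within a short time window, matching the
# rapid-drop pattern of dozing. Values here are illustrative.
UPPER = 0.40   # normalized vertical position of the rested head
LOWER = 0.60   # position below which a nod is declared (y grows downward)

def detect_nod(head_y, fps=30, max_drop_s=0.5):
    """head_y: per-frame normalized vertical head positions.
    Return True if a rapid drop from rested zone to below LOWER occurs."""
    max_drop_frames = int(max_drop_s * fps)
    last_rested = None
    for i, y in enumerate(head_y):
        if y < UPPER:                     # head in the rested zone
            last_rested = i
        elif y > LOWER:                   # head below the lower threshold
            if last_rested is not None and i - last_rested <= max_drop_frames:
                return True               # fast drop -> characteristic nod
    return False
```

The slow recovery back up, also mentioned above, could be checked symmetrically by timing how long the head takes to rise back above the upper threshold.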
5.2.2 Regular Stage: Eye Tracking with Eye-State Analysis
At the end of the initialization stage, our system has successfully created the skin
color model, the eye state model, and the head position model for the current driver.
It is now ready to start actively tracking the driver's eyes and monitoring the driver's
drowsiness level.
If the eyes are properly located in the initialization stage, tracking eye position
through subsequent frames is a relatively straightforward task. In the current
prototype, the tracking area is dynamically defined based on the speed and
direction of eye movement in previous frames. In essence, we apply the detection
method based on the Viola-Jones algorithm to a candidate area where the eyes are
expected to be found in subsequent frames. Since this corresponds to a small region
of the overall frame, tracking is significantly faster compared to initial detection.
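One way to realize the dynamically defined tracking area is to extrapolate the eye's position from its recent velocity and widen the search margins with speed. The function name, the margin formula, and the parameter values below are illustrative assumptions, not details given in the chapter.

```python
# Sketch of a dynamically sized search window: center the next candidate
# region at the position extrapolated from the eye's per-frame velocity,
# and grow the margins when the eye is moving fast, so the Viola-Jones
# detector only scans a small region instead of the whole frame.
def predict_search_roi(prev, curr, base_margin=15, gain=2.0):
    """prev, curr: (x, y) eye centers in the last two frames.
    Returns (x0, y0, x1, y1) of the region to scan next."""
    vx, vy = curr[0] - prev[0], curr[1] - prev[1]   # per-frame velocity
    cx, cy = curr[0] + vx, curr[1] + vy             # extrapolated center
    mx = base_margin + gain * abs(vx)               # wider when moving fast
    my = base_margin + gain * abs(vy)
    return (cx - mx, cy - my, cx + mx, cy + my)
```

The returned rectangle would be clamped to the frame bounds before being handed to the detector.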