Let us consider the technical challenges for feature- and semantic-level
algorithms. Challenges in detecting moving objects include: (1) stabilizing jittery
or on-the-move video so that independently moving entities can be detected reliably, and
(2) eliminating false alarms due to extraneous factors, e.g., swaying trees, shadows,
and other illumination changes. Challenges for tracking objects include: (1) handling
objects that come to a stop and blend into the background, and (2) predicting
object movement through partial and full occlusions. Challenges for object classification
include: (1) distinguishing objects with similar features, and (2) obtaining sufficient
views of the same object to extract distinguishing features. All of these algorithms
would benefit from increased sensor sensitivity, spatially, spectrally, and temporally,
in order to reduce the false alarms that arise when semantic inferences are drawn from the
video.
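As a concrete illustration of the second detection challenge, the sketch below suppresses shadow pixels before extracting moving-object candidates. It uses OpenCV's MOG2 background subtractor; the library choice, thresholds, and minimum blob area are illustrative assumptions rather than a method prescribed here.

```python
# Minimal sketch: moving-object detection with shadow suppression.
# The MOG2 subtractor and all thresholds below are illustrative
# assumptions; the text does not prescribe a specific method.
import cv2

MIN_AREA = 200  # assumed minimum blob area (pixels) to reject noise

subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

def detect_moving_objects(frame):
    """Return bounding boxes of candidate independently moving regions."""
    fg = subtractor.apply(frame)
    # MOG2 marks shadow pixels as 127; keep only confident foreground (255)
    # to suppress false alarms from shadows and mild illumination change.
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
    fg = cv2.morphologyEx(
        fg, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_AREA]
```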
13.5.1 Sensor Adaptive Algorithms
Detection, tracking, and classification algorithms directly interact with
each other in order to formulate an output result based on an understanding of the
scene. For example, a detection algorithm would provide motion and location parameters
to a tracking algorithm. The tracking algorithm would sort among the detected
objects to associate a label or track identification. It would maintain a history of tracks
and other features of the object to overcome occlusion. The track and feature
information is used by the classification algorithm to infer object type based on a trained
database of features and activities.
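The sketch below makes this data flow concrete: detections carry location and motion parameters, a nearest-neighbor tracker associates them with track identifiers and keeps a per-track history that survives short occlusions, and a stubbed classifier consumes that history. The data structures, gating distance, and classifier rule are hypothetical; the chapter does not specify these details.

```python
# Illustrative sketch of the detect-track-classify handoff; the data
# structures, gating distance, and classifier stub are assumptions.
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Detection:
    x: float        # location parameters from the detection algorithm
    y: float
    vx: float       # motion parameters
    vy: float

@dataclass
class Track:
    track_id: int
    history: list = field(default_factory=list)  # per-track feature history
    missed: int = 0                               # frames without a match (occlusion)

class Tracker:
    GATE = 30.0        # assumed association distance (pixels)
    MAX_MISSED = 15    # assumed frames a track survives occlusion

    def __init__(self):
        self.tracks, self.next_id = [], 0

    def update(self, detections):
        """Associate detections with existing tracks (nearest neighbor)."""
        unmatched = list(detections)
        for trk in self.tracks:
            if not unmatched:
                trk.missed += 1
                continue
            last = trk.history[-1]
            best = min(unmatched, key=lambda d: hypot(d.x - last.x, d.y - last.y))
            if hypot(best.x - last.x, best.y - last.y) < self.GATE:
                trk.history.append(best)
                trk.missed = 0
                unmatched.remove(best)
            else:
                trk.missed += 1            # likely occluded; keep the track alive
        for det in unmatched:              # spawn new tracks for leftover detections
            self.tracks.append(Track(self.next_id, [det]))
            self.next_id += 1
        self.tracks = [t for t in self.tracks if t.missed <= self.MAX_MISSED]
        return self.tracks

def classify(track):
    """Stub: infer object type from the track's feature history
    (a trained feature/activity database would be queried here)."""
    speed = hypot(track.history[-1].vx, track.history[-1].vy)
    return "vehicle" if speed > 5.0 else "pedestrian"
```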
We will leverage our understanding of the interactions within feature- and semantic-
level processing to create low-latency sensor adaptive algorithms. In Fig. 13.9, we
show small elements of the detect-track-classify algorithms within the sensor adaptive
algorithm module that drives input into a Sensor Control block. Each of the
feature- and semantic-level algorithms has its own requirements on the imagery that
would improve its task, but not all of them can be satisfied concurrently. For example,
a detection algorithm typically wants a global view of the scene because it is tasked
with finding all moving targets; imagery captured within a single frame therefore
weights all pixels in the field of view equally. In contrast, a tracking algorithm is
focused on a single object, so it would prefer to allocate more resources (time, power,
dwell time) to a particular region of interest. Similarly, a classification algorithm
would prefer to suppress spurious data points (e.g., noisy sampled features), so it
might prefer to collect more data in a particular modality (e.g., a longer
exposure for higher dynamic range to gather more distinguishing features).
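To make this conflict concrete, the sketch below has each algorithm emit a hypothetical sensor request (priority, region of interest, exposure) and a simple priority-weighted arbiter reconcile them into one imager command. The request fields and arbitration rule are illustrative assumptions, not the chapter's Adaptive Task-Specific Model.

```python
# Illustrative arbitration of conflicting sensor requests from the
# detect-track-classify algorithms; the fields and priority rule are
# assumptions, not the chapter's actual Sensor Control design.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SensorRequest:
    source: str                                       # "detect", "track", or "classify"
    priority: float                                   # task urgency, 0..1
    roi: Optional[Tuple[int, int, int, int]] = None   # x, y, w, h; None = full frame
    exposure_ms: float = 10.0                         # requested integration time

def arbitrate(requests):
    """Take the ROI from the highest-priority requester, but never let the
    exposure exceed what the detector needs to keep its global view."""
    if not requests:
        return SensorRequest("default", 0.0)
    winner = max(requests, key=lambda r: r.priority)
    detect_reqs = [r for r in requests if r.source == "detect"]
    exposure = min([winner.exposure_ms] + [r.exposure_ms for r in detect_reqs])
    return SensorRequest(winner.source, winner.priority, winner.roi, exposure)

# Example: tracking wants a small ROI with long dwell, detection wants the
# whole frame at a short exposure, classification wants a long exposure
# for higher dynamic range.
command = arbitrate([
    SensorRequest("detect",   priority=0.4, roi=None, exposure_ms=5.0),
    SensorRequest("track",    priority=0.8, roi=(320, 240, 64, 64), exposure_ms=20.0),
    SensorRequest("classify", priority=0.6, roi=(320, 240, 64, 64), exposure_ms=40.0),
])
print(command)  # ROI from the tracker; exposure capped by the detector's need
```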
Running concurrently with these algorithms, the Sensor Control block contains an
Adaptive Task-Specific Model that organizes the inputs from the detect-track-classify
algorithms to maintain a consistent and robust control loop with the imager. The
task-specific model can be reinitialized for different tracks, or when tracks are lost in
some frames. Lower rate information from the back-end processing chain (see Fig. 13.9)