Fig. 13.8 Task interaction and data flow, showing tasks performed on compressed measurements
Detection/Classification: Implement discriminative modulator patterns and directly measure linear projections of the incoming radiance for detection and classification. The discriminative patterns are learned via offline training and refined by online updates.
Tracking: Use a Kalman filter to track the detected objects of interest. If a track is lost, the detector relocates the object and reinitializes tracking.
Learning: The detection results (i.e., the measured discriminative linear projections), once verified against tracked frames, are used to update the posterior probability distribution and the modulator patterns, and to refine the atmospheric models (e.g., fog, haze, heat scintillation).
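Taken together, the three tasks form a single loop over compressed measurements. The sketch below is a minimal illustration under assumed names and models (a random sensing matrix `Phi`, a linear score `w` standing in for the learned discriminative patterns, and a 1-D constant-velocity Kalman model); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensing setup: each frame x (length n) is observed only
# through m << n linear projections y = Phi @ x (the modulator patterns).
n, m = 256, 16
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Detection/classification acting directly on the projections y; the weight
# vector w stands in for a discriminative pattern learned offline.
w = rng.standard_normal(m)

def detect(y, bias=0.0):
    """Classify a projection vector y as object / no object."""
    return float(w @ y + bias) > 0.0

# Tracking: 1-D constant-velocity Kalman filter on the object coordinate.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-3 * np.eye(2)                     # process noise covariance
R = np.array([[1e-1]])                   # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle; z=None models a missing track."""
    x = F @ x
    P = F @ P @ F.T + Q
    if z is None:            # coast on the prediction; the detector
        return x, P          # will relocate the object and reinitialize
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

On a lost track (`z=None`) the filter coasts on its prediction, matching the reinitialization step described above.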
The key enabler for this architectural framework is identifying functions that can be implemented with tight coupling between the algorithms for feature and semantic analysis and the lower-level computational imaging parameters for coded illumination, aperture, and exposure. This coupling creates a feedback loop between the application task and the sensor, increasing sensitivity and potentially lowering power consumption, rather than simply running the conventional processing pipeline faster.
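One way to picture this feedback loop, in a purely illustrative sketch with every quantity assumed, is an LMS-style online update that nudges a single modulator pattern toward a unit response on the radiance signature the task cares about, with no full-frame reconstruction anywhere in the loop:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                               # scene dimension (assumed)
pattern = np.zeros(n)                # one modulator pattern, initially uninformative
target = rng.standard_normal(n)      # radiance signature the task cares about
lr = 0.01                            # small step size keeps the LMS update stable

for _ in range(200):
    scene = target + 0.1 * rng.standard_normal(n)  # noisy incoming radiance
    y = pattern @ scene              # the single measured projection
    # Task-level feedback: drive the measured response toward 1.0 on
    # verified detections (a stand-in for the online pattern update).
    pattern += lr * (1.0 - y) * scene
```

After the loop, `pattern @ target` sits near 1.0: the sensor-side pattern has adapted to the task even though no image was ever reconstructed.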
While current camera systems already include feedback mechanisms that let the image capture subsystem adapt based on the image processing system, significant feedback latency limits their effectiveness. Because the processing chain from raw pixels to detection, tracking, and classification of objects is long, many image frames often elapse before any dynamic settings can be computed. Furthermore, the settings typically apply to the entire frame rather than at a finer granularity, because there are no effective algorithms for region-of-interest control of sensor parameters. Here, we explore a camera system in which the two subsystems (image capture and processing) are tightly coupled at a very fine grain, yielding low-latency control at the sub-frame level. Such a system would be analogous to today's auto-focus functionality, but driven instead by the needs of the feature and semantic analysis.
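As a concrete, entirely hypothetical sketch of such sub-frame control, consider per-region exposure driven by a task-level feedback signal rather than by whole-frame statistics (the grid size, brightness thresholds, and gain are all assumptions made for the example):

```python
import numpy as np

GRID = 4                                  # frame split into a 4x4 grid of ROIs
E_MIN, E_MAX = 1.0, 32.0                  # exposure limits (ms, assumed)
exposure = np.full((GRID, GRID), 8.0)     # per-ROI exposure times

def task_feedback(frame_regions):
    """Stand-in for the semantic analysis: flag ROIs that are too dark or
    too saturated for reliable detection (thresholds are assumptions)."""
    return np.where(frame_regions < 0.2, 1,           # underexposed: lengthen
           np.where(frame_regions > 0.8, -1, 0))      # saturated: shorten

def control_step(exposure, frame_regions, gain=1.5):
    """Adjust each ROI's exposure multiplicatively from the task feedback,
    without waiting for a full-frame processing pass."""
    adj = task_feedback(frame_regions)
    return np.clip(exposure * gain ** adj, E_MIN, E_MAX)
```

Unlike a global auto-exposure loop, each region converges independently to the operating range the analysis needs, which is the fine-grain, low-latency coupling described above.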