Fig. 13.9 Sensor-adaptive algorithms in the pixel domain can more actively control the sensor
without the latency of full feature- and semantic-level processing. Key elements of the detect-
track-classify algorithms provide cues to the task-specific model, enabling high-rate sensor control
based on scene understanding
Our proposed architecture revolves around selecting pieces of object detection,
tracking, and classification algorithms to process earlier in the image processing
pipeline. As shown in Fig. 13.9, this approach offers a sensor-adaptive module
that tightly couples feature- and semantic-level processing to camera control.
This approach is radically different from the traditional approach of simply speeding
up the entire image processing pipeline to enable a faster control loop. Motion and
other semantic cues are evaluated at a high sub-frame rate to reduce the latency of
sensor control. The needs of feature and semantic processing for object detection,
tracking, and classification are evaluated quickly to provide near-instant feedback to
the sensor. By being task-adaptive, this approach can improve image quality and
SNR on target objects for higher-quality actionable intelligence.
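To make this coupling concrete, the sketch below outlines one possible interface for such a sensor-adaptive module: low-level pixel-domain cues flow in at sub-frame rate, and sensor commands flow back out without waiting on the full pipeline. The class and field names (PixelDomainCues, SensorCommand, SensorAdaptiveModule) are illustrative assumptions, not an existing API.

    # A minimal interface sketch of the sensor-adaptive module of Fig. 13.9.
    # All names and fields are hypothetical; they illustrate the coupling only.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    Box = Tuple[int, int, int, int]   # (x, y, w, h) in pixel coordinates

    @dataclass
    class PixelDomainCues:
        """Low-level cues extracted early in the pipeline, at sub-frame rate."""
        motion_energy: float               # coarse motion estimate for the latest readout slice
        salient_boxes: List[Box]           # candidate detections from early-stage processing
        class_hint: Optional[str] = None   # tentative class from the task-specific model

    @dataclass
    class SensorCommand:
        """Settings pushed back to the sensor before the frame completes."""
        frame_rate_hz: float
        exposure_ms: float
        readout_roi: Optional[Box] = None  # restrict readout to the targeted region

    class SensorAdaptiveModule:
        """Couples pixel-domain cues directly to camera control."""
        def __init__(self, task_model):
            self.task_model = task_model   # task-specific model supplying the cue-to-command mapping

        def on_cues(self, cues: PixelDomainCues) -> SensorCommand:
            # Bypass the full pipeline: map cues straight to a sensor command.
            return self.task_model(cues)

The essential design choice in this sketch is that on_cues bypasses the frame-rate pipeline entirely, so control latency is bounded by the cost of cue extraction rather than by full detection, tracking, and classification.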
Sensor adaptive algorithms can improve the overall sensitivity of the camera
system for dynamic task-specific needs. That is, by adapting sensor parameters to
the dynamic understanding of the scene and targeted objects, the capture process can
be made more efficient, acquiring the desired signal at the appropriate time, duration,
and location.
The proposed architecture requires the mapping of sensor parameters to video
analytic parameters within the developed simulation framework. It is important to
explore sensor control parameters such as frame rate, resolution, exposure, and readout
times, and how they can be tuned based on video analytic parameters, such as the
type of detected objects, number of salient targets, and predicted motion of targets.
With a semantic understanding of the scene and detected objects, the capture process
can adapt accordingly to the scene content.
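As a rough illustration of such a mapping, the rule-based sketch below tunes frame rate, resolution, exposure, and readout time from a handful of analytic parameters. The object categories, thresholds, and settings are placeholder assumptions for illustration, not values from the simulation framework.

    # An illustrative mapping from video analytic parameters to sensor parameters.
    # Thresholds and settings are placeholder assumptions.
    from dataclasses import dataclass

    @dataclass
    class AnalyticState:
        """Video analytic parameters describing the current scene understanding."""
        object_type: str              # e.g. "vehicle", "person", "unknown"
        num_salient_targets: int      # how many targets currently matter to the task
        predicted_speed_px_s: float   # predicted target motion, in pixels per second

    @dataclass
    class SensorSettings:
        """Sensor control parameters to be tuned."""
        frame_rate_hz: float
        resolution_scale: float       # 1.0 = full resolution
        exposure_ms: float
        readout_time_ms: float

    def tune_sensor(state: AnalyticState) -> SensorSettings:
        """Map scene-level analytics to capture settings (illustrative rules only)."""
        if state.num_salient_targets == 0:
            # Empty scene: drop the rate, integrate longer for SNR, save bandwidth.
            return SensorSettings(10.0, 0.5, 20.0, 8.0)
        if state.predicted_speed_px_s > 200.0:
            # Fast-moving targets: raise the frame rate and shorten exposure to limit blur.
            return SensorSettings(120.0, 1.0, 2.0, 2.0)
        # Slow or stationary targets: favor resolution and exposure over rate.
        return SensorSettings(30.0, 1.0, 10.0, 5.0)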
The key to effectively migrating feature and semantic level algorithms into the
“front end” pixel processing domain is to maintain a low-latency control loop at the
sub-frame level such that sensor control can adjust quickly. This means that we will
algorithmically make an inference on detected objects based on low-level analytics.
While we efficiently and intuitively bridge the semantic gap between higher-level
algorithms and sensor controls, we must maintain the robustness of the camera system
to mission-specific functions.
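The timing sketch below illustrates this constraint: each readout slice of a frame is processed against a per-slice latency budget, and any sensor correction is issued before the frame completes. The slice count, budget, threshold, and analytics stub are assumptions made for illustration only.

    # A timing sketch of a sub-frame control loop. Slice counts, budgets, and the
    # analytics stub are illustrative assumptions; real values depend on the sensor.
    import time

    SLICES_PER_FRAME = 8       # frame read out in 8 horizontal slices (assumed)
    SLICE_BUDGET_S = 0.001     # 1 ms per-slice budget for analytics plus control (assumed)

    def low_level_analytics(slice_pixels) -> float:
        """Stub for a cheap pixel-domain cue, e.g. summed frame-difference energy."""
        return 0.0

    def control_loop(read_slice, send_sensor_command):
        """Adjust the sensor at sub-frame rate while staying inside the latency budget."""
        overruns = 0
        while True:
            for s in range(SLICES_PER_FRAME):
                start = time.perf_counter()
                pixels = read_slice(s)                 # partial readout of the current frame
                motion = low_level_analytics(pixels)   # low-level inference on detected activity
                if motion > 0.5:
                    # Push the correction before the frame finishes reading out.
                    send_sensor_command({"exposure_ms": 2.0, "frame_rate_hz": 120.0})
                if time.perf_counter() - start > SLICE_BUDGET_S:
                    overruns += 1                      # budget missed: fall back to simpler cues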