can also feed into the task-specific model to enhance the robustness by prioritizing
among multiple tracked objects.
The proposed algorithmic framework also allows new algorithms to be created and integrated. For example, as part of the detection task, we can include a filter that measures the level of motion blur. Because blur can imply object motion, we can use it to infer the object type in the scene. The algorithm can then feed this input into the task-specific model to decrease the integration time for that region of interest, in anticipation of the needs of the tracking task. Using motion blur in this way improves on typical optical flow analysis because the algorithm measures the level of blur over several pixels rather than finding correspondences between pixels in different frames.
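As a minimal sketch of such a blur-level filter, the variance of the Laplacian is a common proxy for blur: strongly blurred regions lose high-frequency content, so their Laplacian variance is low. The threshold value and the prioritization policy below are illustrative assumptions, not the chapter's specific algorithm.

```python
import numpy as np

def blur_level(region: np.ndarray) -> float:
    """Estimate the blur level of a grayscale image region.

    Returns the variance of the Laplacian response: a low value
    suggests weak high-frequency content, i.e. a blurred region.
    (Illustrative proxy only, not the chapter's exact method.)
    """
    # 3x3 Laplacian kernel, applied by direct shifted-sum convolution
    # over the valid interior (no padding) to avoid border effects.
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=np.float64)
    r = region.astype(np.float64)
    h, w = r.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * r[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def prioritize_regions(regions, threshold=50.0):
    """Flag regions whose low sharpness score hints at motion blur,
    so the tracker could shorten integration time there.
    The threshold is a hypothetical tuning parameter."""
    return [blur_level(r) < threshold for r in regions]
```

Note that, unlike optical flow, this measure is computed from a single frame per region, which is what removes the need for inter-frame pixel correspondence.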
13.6 Conclusion
Traditional embedded vision systems are bounded in performance because they are limited by the quality of the images they work with. With a computational imaging approach, these vision systems can “see” better because they are no longer limited to a single lens focusing light onto a focal-plane image sensor. In this chapter, we provided an overview of the research elements of computational imaging. We showed two example applications of this technology area using coded aperture and coded exposure. We then presented a proposed architectural framework for a computational imaging system that is adaptive and task specific for back-end video analytics processing.
There is certainly much work left to do, as there remain technical challenges in achieving low latency, high robustness, and low power. All of these can be addressed by adapting our experience in accelerating embedded vision algorithms on hardware such as GPUs, FPGAs, and DSPs. The vision community in both academia and industry is already heading down this path to reach significant gains in image quality and analytics performance. Are you doing the same with your vision system?