A single camera can cover a wide area, but at low resolution for any part of the scene. Imagery from multiple cameras can be
combined to provide both extended coverage and adequate resolution. Distributed
smart cameras combine physically distributed cameras and distributed algorithms.
Early approaches to distributed-camera-based computer vision used server-based,
centralized algorithms. While such algorithms are often easier to conceive and
implement, they do not scale well. Properly-designed distributed algorithms scale to
handle much larger camera networks. VLSI technology has aided both the image-
gathering and computational abilities of distributed smart camera systems. Moore's
Law has progressed to the point where very powerful multiprocessors can be put on
a single chip at very low cost [30]. The same technology has also provided cheap
and powerful image sensors, particularly in the case of CMOS image sensors [31].
Distributed smart cameras have been used for a variety of applications, including
tracking, gesture recognition, and target identification. Networks of several hundred
cameras have been tested. Over time, we should expect to see much larger networks
both tested and deployed. Surveillance is one application that comes to mind.
While surveillance and security are a large application—analysts estimate that 25
million security cameras are installed in the United States—that industry moves
at a relatively slow pace. Health care, traffic analysis, and entertainment are other
important applications of distributed smart cameras. After starting in the mid-
1990s, research on distributed smart cameras has progressed rapidly over the past
decade. A recent special issue of Proceedings of the IEEE [ 24 ] presented a variety
of recent results in the field. The International Conference on Distributed Smart
Cameras is devoted to the topic. We start with a review of some techniques from
computer vision that were not specifically developed for distributed systems but
have been used as components in distributed systems. Section 3 reviews early
research in distributed smart cameras. Section 4 considers the types of challenges
created by distributed smart cameras. We next consider calibration of cameras in
Sect. 5, followed by algorithms for tracking in Sect. 6 and gesture recognition in
Sect. 7. Section 7.1 discusses computing platforms suitable for real-time distributed
computer vision.
2 Approaches to Computer Vision
Several algorithms used in traditional computer vision problems such as
tracking also play important roles as components in distributed computer vision
systems. In this section we briefly review some of those algorithms. Tracking
refers to the target or object of interest as foreground and non-interesting objects
as background (even though this usage is at variance with the terminology of
theater). Many tracking algorithms assume that the background is relatively static
and use a separate step, known as background elimination or background subtraction,
to eliminate backgrounds. The simplest background subtraction algorithm
simply compares each pixel in the current frame to the corresponding pixel in a
reference frame that does not include any targets. If the current-frame pixel differs
sufficiently from the corresponding reference pixel, it is classified as foreground.
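As an illustration, the following sketch implements this simple per-pixel background subtraction with NumPy. The function name, the grayscale 8-bit frame format, the threshold value of 30, and the synthetic test frames are assumptions chosen for the example, not details drawn from the text.

```python
# Minimal sketch of per-pixel background subtraction against a
# target-free reference frame. Assumes grayscale uint8 frames; the
# threshold of 30 is an illustrative choice.
import numpy as np

def background_subtraction(frame: np.ndarray,
                           reference: np.ndarray,
                           threshold: int = 30) -> np.ndarray:
    """Return a boolean foreground mask.

    A pixel is marked as foreground when its absolute difference from
    the corresponding pixel in the reference frame exceeds the threshold.
    """
    # Use a signed type so the subtraction cannot wrap around at 0/255.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold

# Example usage with synthetic frames: an empty reference scene and a
# current frame containing one bright 20x20 "target".
reference = np.zeros((240, 320), dtype=np.uint8)
frame = reference.copy()
frame[100:120, 150:170] = 200
mask = background_subtraction(frame, reference)
print(mask.sum(), "foreground pixels")   # prints 400
```

In practice the reference frame is often updated over time (e.g. by running averages or per-pixel statistical models) so that slow illumination changes are not misclassified as foreground, but the fixed-reference version above captures the basic idea described here.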