Digital Signal Processing Reference
In their architecture, each sensor node ran one particle filter, with synchronized random number generators on each node. They described two approaches to the problem of distributing observations over the network, one parametric and one based on adaptive encoding. Sheng et al. [26] developed two distributed particle filter algorithms: one that exchanges information between cliques (nearby sensors whose signals tend to be correlated) and another in which cliques compute partial estimates from local information and forward those estimates to a fusion center.
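The second scheme can be pictured as follows: each clique reduces its local measurements to a partial estimate, and a fusion center combines the partial estimates. This is a minimal sketch of that pattern only; the function names, the variance-based confidence weight, and the averaging rule are illustrative assumptions, not the algorithm of Sheng et al.

```python
# Hypothetical sketch of clique-to-fusion-center estimation: each clique
# of nearby sensors summarizes its local measurements, and the fusion
# center combines the partial estimates. The confidence weighting below
# is an illustrative assumption, not the published algorithm.

def clique_partial_estimate(measurements):
    """Average a clique's local measurements; confidence shrinks with spread."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((m - mean) ** 2 for m in measurements) / n
    return mean, 1.0 / (1.0 + var)  # (estimate, confidence weight)

def fuse_at_center(partials):
    """Confidence-weighted combination of the clique estimates."""
    total_w = sum(w for _, w in partials)
    return sum(est * w for est, w in partials) / total_w

# Two cliques of correlated nearby sensors observing the same quantity.
cliques = [[2.9, 3.1, 3.0], [2.7, 3.3, 3.0]]
partials = [clique_partial_estimate(c) for c in cliques]
fused = fuse_at_center(partials)
```

Forwarding only the `(estimate, weight)` pairs, rather than raw samples, is what keeps the communication cost low in such schemes.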
3 Early Work in Distributed Smart Cameras
The DARPA-sponsored Video Surveillance and Monitoring (VSAM) program was one of the first efforts to develop distributed computer vision systems. Researchers at Carnegie Mellon University [8] built a cooperative tracking system in which tracking targets were handed off from camera to camera. Each sensor processing unit (SPU) classified targets into categories such as human or vehicle. At the MIT Media Lab, Mallet and Bove [18] developed a distributed camera network that could hand off tracking targets in real time. Their camera network consisted of small cameras mounted on tracks in the ceiling of a room. The cameras would move to improve their view of subjects based on information from other cameras as well as their own analysis. Lin et al. [17] developed a distributed system for gesture recognition that fuses data after some image processing using a peer-to-peer protocol. That system is described in more detail in Sect. 7. The distributed tracker of Bramberger et al. [5] handed off a tracking task from camera to camera as the target moved through the scene. Each camera ran its own tracker. Handoffs were controlled by a peer-to-peer protocol.
4 Challenges
A distributed smart camera is a data fusion system—samples from cameras are
captured, features are extracted and combined, and results are classified. There is
more than one way to perform these steps, providing a rich design space. We can
identify several axes on which the design space of distributed smart cameras can be
analyzed:
• How abstract is the data being fused: pixel, small-scale feature, shape, etc.? What methods are used to fuse data?
• What groups/cliques of sensors combine their data? For example, groups may be formed by network connectivity, location, or signal characteristics. How does the group structure evolve as the scene changes?
• How sparse in time and space is the data?
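The capture, feature-extraction, fusion, and classification steps named above can be sketched as a minimal pipeline. The choice of feature (mean intensity), fusion rule (averaging across cameras), and threshold classifier are illustrative assumptions standing in for one point in the design space, not any specific system from the text.

```python
# Minimal sketch of a data fusion pipeline for a distributed smart
# camera system: per-camera frames are reduced to features, the
# features are combined, and the fused result is classified. All
# concrete choices here are illustrative assumptions.

def extract_feature(frame):
    """Small-scale feature: mean pixel intensity of one camera's frame."""
    return sum(frame) / len(frame)

def fuse(features):
    """Combine per-camera features; here, a simple average."""
    return sum(features) / len(features)

def classify(fused, threshold=100):
    """Classify the fused result, e.g. scene 'occupied' vs 'empty'."""
    return "occupied" if fused > threshold else "empty"

frames = [[120, 130, 110], [140, 150, 160]]  # one pixel list per camera
label = classify(fuse([extract_feature(f) for f in frames]))
```

Moving the fusion step earlier (exchanging pixels) or later (exchanging labels) shifts the design along the abstraction axis described in the first bullet, trading bandwidth against the information available to the fusion rule.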