the embedded analysis system met the operational objectives: directing search and rescue efforts [3].
While full motion video (FMV) offers more fidelity, any data lost during a network outage cannot be recovered until after the UAV has landed. Cropped detections, on the other hand, are small enough that a backlog caused by network congestion or an outage can be worked off while still in flight. Embedded, onboard processing "at the edge" of the network thus makes more efficient use of the network by passing limited amounts of processed information in place of large amounts of raw data.
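To make the trade-off concrete, the back-of-envelope sketch below compares the onboard data rate of raw frames with that of cropped detection chips and estimates how long a backlog from a one-minute outage would take to drain. All numbers (frame size, chip size, link rate, detection rate) are assumptions chosen for illustration, not measurements from the system described here.

```python
# Back-of-envelope comparison of raw imagery vs. cropped detections.
# All constants are illustrative assumptions, not values from this chapter.

RAW_FRAME_BYTES = 5_000_000      # assumed ~5 MB per compressed full frame
CHIP_BYTES = 50_000              # assumed ~50 kB per cropped detection chip
FRAME_RATE_HZ = 1.0              # assumed one still image per second
DETECTIONS_PER_FRAME = 2         # assumed average detections per frame
LINK_RATE_BPS = 1_000_000        # assumed 1 Mbit/s usable downlink

def backlog_after_outage(outage_s: float, bytes_per_s: float) -> float:
    """Data (bytes) queued on board during a link outage."""
    return outage_s * bytes_per_s

def drain_time(backlog_bytes: float, link_rate_bps: float,
               bytes_per_s: float) -> float:
    """Seconds to clear the backlog once the link returns, while new data
    keeps arriving at bytes_per_s. Returns inf if it never drains."""
    spare_bytes_per_s = link_rate_bps / 8 - bytes_per_s
    return float("inf") if spare_bytes_per_s <= 0 else backlog_bytes / spare_bytes_per_s

for label, bytes_per_s in [
    ("raw frames", RAW_FRAME_BYTES * FRAME_RATE_HZ),
    ("cropped detections", CHIP_BYTES * DETECTIONS_PER_FRAME * FRAME_RATE_HZ),
]:
    backlog = backlog_after_outage(60.0, bytes_per_s)   # one-minute outage
    print(f"{label}: {bytes_per_s / 1e3:.0f} kB/s, "
          f"60 s backlog drains in {drain_time(backlog, LINK_RATE_BPS, bytes_per_s):.1f} s")
```

Under these assumed numbers the raw stream exceeds the link capacity and its backlog never drains, while the cropped-detection backlog clears within a few minutes of the link returning.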
9.6 Conclusions
Embedded, onboard analysis of imagery offers a solution to the mismatch between the limited bandwidth of small unmanned vehicles and their steadily improving camera capabilities. Prerequisites are an automated detection method for the object or scene of interest and the ability to discard unimportant image or full motion video (FMV) areas. Still, adopting a computer vision algorithm for use in an embedded environment involved detailed planning and customization:
- Analysis of flight and platform characteristics (nadir shooting possible, flight altitude, jitter, required image resolution, available bandwidth, etc.),
- Selection of a suitable computer vision method, based on speed and accuracy,
- Method training, modifications, and tuning (e.g., for rotation invariance),
- Selection of embedded hardware that meets payload demands (USB ports, power consumption, heat dissipation, CPU speed, memory),
- System integration (connections to hardware and software components), including adaptations for the specific hardware and operating system, and
- Validation of the entire system as part of the operational workflow.
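The following minimal sketch illustrates what such an onboard detect-crop-transmit loop could look like, assuming OpenCV with a generic, hypothetical cascade model file and a local camera; the detector, parameters, and queue-based downlink shown here are stand-ins, not the method or interfaces used in the work described in this chapter.

```python
import queue
import cv2

# Hypothetical detector and camera; the chapter's actual method and
# platform interfaces are not reproduced here.
detector = cv2.CascadeClassifier("haarcascade_model.xml")
camera = cv2.VideoCapture(0)
send_queue: "queue.Queue[bytes]" = queue.Queue()   # backlog survives outages

def process_frame() -> None:
    ok, frame = camera.read()
    if not ok:
        return
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect objects of interest; parameters are illustrative.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=4):
        chip = frame[y:y + h, x:x + w]            # crop the detection
        ok, jpg = cv2.imencode(".jpg", chip)      # compress the chip only
        if ok:
            send_queue.put(jpg.tobytes())         # hand off to the downlink

# A separate downlink thread would drain send_queue whenever the network
# is available, so an outage only delays (rather than loses) detections.
```

Because each detection chip is compressed and queued individually, a temporary loss of the downlink grows the queue instead of dropping imagery.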
While smaller bandwidth needs are advantageous even in a fully functional and
reliable network, onboard processing dramatically increases information throughput
when the network is only intermittently available, and preserves the real-time value
of the UAV.
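One way to realize this behavior is a simple store-and-forward policy on the downlink side; the hypothetical worker below (the transmit callable and queue are assumptions, not part of the described system) retries each queued detection until the link accepts it, so an outage delays delivery rather than discarding data.

```python
import queue
import time

def downlink_worker(send_queue: "queue.Queue[bytes]",
                    transmit,                  # callable wrapping the radio/IP link
                    retry_delay_s: float = 1.0) -> None:
    """Drain queued detections, retrying while the link is down.

    `transmit` is a hypothetical function returning True on success;
    nothing is discarded, so an outage only delays delivery."""
    while True:
        item = send_queue.get()           # oldest detection first
        while not transmit(item):         # link down or congested
            time.sleep(retry_delay_s)     # wait and retry the same item
```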
The advantages of the described system extend beyond network aspects into human performance. The human image analysts' responsibility changes from a repetitive, error-prone detection task to a lower-volume, less taxing detection verification task. The demonstrated prefiltering elevates the operator's role to making decisions based on the information passed from the UAV. While these experiments were conducted on a UAV using still imagery, the approach could easily be extended to embedded analysis of full motion video.
Acknowledgments We would like to thank the NPS unmanned systems community for their help and support, particularly Prof. Tim Chung, Prof. Kevin Jones, and Prof. Vladimir Dobrokhodov.