the high sensitivity to wind caused by the loss of the reflected signal. Lidar heads,
on the contrary, have been extensively used as perception units of automated vehi-
cles, mainly for safety in navigation. The driverless car that won the DARPA Grand
Challenge competition in 2005, for example, featured five roof-mounted lidar units.
Lidar rangefinders are optical sensors that calculate ranges by measuring the time
of flight of a light beam, usually a laser beam because of its coherence. In order to cover a
significant portion of the vicinity of a vehicle, the light emitter needs to rotate back
and forth in a scanning motion that yields polar range data. When this mechanical
rotation occurs under off-road conditions, which often involve dusty environments
and rough terrain, it places lidar heads at a clear disadvantage compared with
visual sensors, which can capture the entire scene at 30 frames/s with no moving
components.
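The time-of-flight principle above can be sketched in a few lines: the range is half the round-trip time multiplied by the speed of light, and a sweep of beam angles yields polar points that convert to Cartesian coordinates in the sensor frame. The following is a minimal illustration; the function names are hypothetical and not taken from any lidar vendor's API:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_s: float) -> float:
    """Range from a time-of-flight measurement: the pulse travels out
    and back, so the target distance is half the round-trip path."""
    return C * round_trip_s / 2.0

def polar_to_cartesian(scan):
    """Convert a polar scan [(angle_rad, range_m), ...] into (x, y)
    points in the sensor frame."""
    return [(r * math.cos(a), r * math.sin(a)) for a, r in scan]

# A 1-microsecond round trip corresponds to roughly 150 m of range.
r = tof_range(1e-6)
points = polar_to_cartesian([(0.0, r), (math.pi / 4, r)])
```

The factor of two is the usual source of error when implementing this conversion by hand, since the measured time covers the outbound and return paths.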
The systematic arrangement of bulk crops in equally spaced rows provides useful
features for a vision system to position a vehicle with respect to the rows in order
to follow a guidance directrix automatically. This idea has been in use for more
than 20 years (Reid and Searcy 1991; Rovira-Más et al. 2003), and remains under
active development because of the difficulty of building a robust system capable
of adapting to any lighting situation. In general, the basic objective of vision-based
automatic guidance consists of placing a monocular camera centered in the vehicle
and looking ahead, so that crop rows converging to a vanishing point in the hori-
zon form the standard image to process (Figure 12.1). An onboard computer applies
image analysis techniques to estimate the offset and heading error of the vehicle, and
ultimately to calculate the turning angle of the front wheels. Images are acquired, pro-
cessed, and discarded; therefore, no map is constructed along the way. The perception
system is constantly aware of the semistructured terrain ahead of the vehicle, and
processing speed must be high enough to allow conventional traveling velocities of
the vehicle. Unexpected obstacles and row ends (headlands) are usually detected
by complementary systems although more sophisticated algorithms may well com-
prise both safety and guidance functions. Monocular cameras acquire 2-D images
that represent 3-D spaces, which makes the perception of depth quite a delicate issue.
Consequently, the image-to-world transformation matrices need to be determined
as precisely as possible during the camera calibration procedure. It is important to
keep in mind that being aware of the environment not only means getting perceptual
information but also interpreting it efficiently. Further details on the calibration of
monocular cameras and the analysis of guidance images for navigation are given by
Rovira-Más et al. (2010).
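As a rough sketch of the offset and heading-error estimation, suppose the image-to-world transformation has already mapped the detected crop-row pixels onto the ground plane, in a vehicle-centered frame (x lateral, y forward, in meters). Fitting a line to those points yields both errors at once. The function below is an illustrative assumption, not the algorithm of the cited works:

```python
import math

def guidance_errors(row_points):
    """Estimate lateral offset (m) and heading error (rad) from
    ground-plane points (x lateral, y forward) of a detected crop row,
    in a vehicle-centered frame. A least-squares line x = m*y + b is
    fitted: b is the offset at the vehicle origin, and atan(m) is the
    heading error of the vehicle relative to the row direction."""
    n = len(row_points)
    sy = sum(y for _, y in row_points)
    sx = sum(x for x, _ in row_points)
    syy = sum(y * y for _, y in row_points)
    sxy = sum(x * y for x, y in row_points)
    m = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - m * sy) / n
    return b, math.atan(m)

# Synthetic row 0.2 m to the right, with the vehicle yawed 5 degrees:
offset, heading = guidance_errors(
    [(0.2 + math.tan(math.radians(5)) * y, y) for y in (1, 2, 3, 4, 5)]
)
# offset ≈ 0.2 m, heading ≈ 0.0873 rad (5 degrees)
```

A steering controller would then combine these two errors to compute the front-wheel turning angle mentioned above.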
The idea of precision agriculture (PA) has its roots in the problem of spatial
variability within farm fields, and tries to increase efficiency by applying just what
is needed, only where it is needed, and as soon as it is necessary. This archetype of
perfect management requires two basic capabilities: crops have to be
continuously positioned in the field, and they need to be monitored in real time. The
former has been universally achieved with the Global Positioning System (GPS), but
the latter depends on each application, and therefore solutions greatly differ. Yet,
local perception has proved to be particularly effective in the observation and track-
ing of parameters related to crop status. The following paragraphs describe some of
the solutions devised to elaborate PA maps. These maps typically depict a top view