canopy such as oranges. In robotic fruit harvesting, machine vision has become one
of the most popular sensing systems for fruit identification. A basic machine vision
system includes a camera, optics, lighting, data acquisition system, and an image
processor, usually a personal computer. Vision systems are capable of determining
either the two-dimensional (2-D) or 3-D position of the fruit depending on the hardware/software implementation.
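As a minimal sketch of how a 3-D fruit position can be recovered once a 2-D image detection and a depth estimate are available, the snippet below back-projects a pixel through the standard pinhole camera model; the focal length, principal point, and depth values are illustrative assumptions, not parameters of any system described here.

```python
# Minimal sketch: recovering a fruit's 3-D position from its 2-D image
# location using the pinhole camera model. All intrinsic parameters and
# the depth value are illustrative placeholders.

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at a known depth into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

if __name__ == "__main__":
    # Assumed intrinsics for a 640 x 480 camera (hypothetical values).
    fx = fy = 800.0          # focal length in pixels
    cx, cy = 320.0, 240.0    # principal point
    # A detected fruit centroid at pixel (410, 215), 1.8 m from the camera.
    print(pixel_to_camera_xyz(410, 215, 1.8, fx, fy, cx, cy))
```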
In their pioneering research, Parrish and Goksel (1977) demonstrated the techni-
cal feasibility of using machine vision to guide a spherical robot for apple harvesting.
In this research, a black and white camera was used to detect the apple fruits. A red
filter was fitted in front of the camera to enhance the contrast between the fruit and
the background. A few years later, Tutle (1985) developed a machine vision-based
orange harvester, which used a photodiode array for image acquisition. Two filters
were used with the photodiode; one filter was between 600 and 700 nm, which covers
the chlorophyll absorption band, and the other filter permitted wavelengths between
750 and 850 nm, which is the water absorption band. Grand D'Esnon et al. (1987)
used a color-based machine vision system for detecting apples. The image process-
ing algorithm was able to detect the red colored fruit; however, problems in variable
lighting conditions were encountered. At the University of Florida, Slaughter and
Harrell (1989) developed an orange fruit detection system with a 15-bit color camera, using hue, saturation, and intensity to separate the fruit from the leaf canopy.
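The hue-and-saturation approach can be illustrated with a short segmentation sketch. The code below uses OpenCV's HSV color space to separate orange-colored pixels from green canopy; the threshold values and the file name `canopy.jpg` are illustrative assumptions and would need tuning for real field images, so this is a generic color-segmentation example rather than the published method.

```python
# Sketch of hue/saturation-based fruit segmentation, in the spirit of the
# color-space approaches described above. Thresholds are assumed values.
import cv2
import numpy as np

image = cv2.imread("canopy.jpg")                 # hypothetical input image
if image is None:
    raise FileNotFoundError("canopy.jpg not found")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Orange fruit tends to fall in a low hue range with high saturation,
# while green leaves occupy higher hues. These bounds are illustrative.
lower = np.array([5, 100, 80])
upper = np.array([25, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean up small speckles and keep sizable connected regions as fruit candidates.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
fruit_candidates = [c for c in contours if cv2.contourArea(c) > 200]
print(f"{len(fruit_candidates)} candidate fruit regions")
```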
According to Sarig (1993), “While major progress has been made with the iden-
tification of fruit on the tree and determination of its location, only 85% of the total
fruits on the tree are claimed to be identified.” There are three major problem areas
associated with the use of machine vision-based sensing: (1) partially and totally occluded fruit are difficult to detect accurately; (2) light variability can result in low
detection rates of actual fruit as well as high levels of false detections; and (3) the
computational time required to process images influences real-time control.
Fujiura (1997) developed robots with a 3-D machine vision system for crop rec-
ognition. The vision system illuminated the crop using red and infrared laser diodes
and used three position sensitive devices to detect the reflected light. The sensors
selected were suitable for agricultural robots required to measure the 3-D shape and
size of targets within a limited measuring range. Jiminez et al. (2000) developed a
laser-based vision system for automatic fruit recognition to be applied to an orange
harvesting robot. The machine vision system was based on an infrared laser range-
finder sensor that provides range and reflectance images and was designed to detect
spherical objects in a nonstructured environment. The sensor output included 3-D
position, radius, and surface reflectivity of each spherical target, and had good clas-
sification performance.
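As a rough illustration of how spherical targets can be extracted from range data of this kind, the sketch below fits a sphere to a cluster of 3-D points by linear least squares, recovering a center and radius. This is not the algorithm of Jiminez et al., only a generic approximation of the idea, and the test values are synthetic.

```python
# Sketch: fitting a sphere (center and radius) to 3-D range points by
# linear least squares. A generic stand-in for spherical-target detection,
# not the published algorithm.
import numpy as np

def fit_sphere(points):
    """points: (N, 3) array of x, y, z samples on (roughly) one fruit surface."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Algebraic form: x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + d
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x**2 + y**2 + z**2
    (cx, cy, cz, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(d + cx**2 + cy**2 + cz**2)
    return np.array([cx, cy, cz]), radius

if __name__ == "__main__":
    # Synthetic test: noisy points on a 40 mm radius sphere centered at (0.5, 0.2, 1.5) m.
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = np.array([0.5, 0.2, 1.5]) + 0.04 * dirs + rng.normal(scale=0.001, size=(500, 3))
    center, r = fit_sphere(pts)
    print(center, r)
```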
Plebe and Grasso (2001) presented a color-based algorithm for detecting oranges
and determining the target centers. They also applied stereo imaging to these pro-
cessed images to determine the range to the detected fruit. Their algorithm correctly
identified 87% of the oranges, while 15% of the detected regions were falsely
classified as oranges. Their approach had difficulty with both
brightly and poorly lit oranges, brightly lit leaves, and certain types of occlusion.
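The stereo-ranging step can be summarized by the standard disparity-to-depth relation Z = f·B/d for a rectified pair. The short sketch below applies it to a matched fruit center in the left and right images; the focal length and baseline values are assumptions for illustration, not the calibration used by Plebe and Grasso.

```python
# Sketch of stereo range estimation for a detected fruit center.
# Focal length and baseline are assumed, not the authors' calibration.

def stereo_depth_m(x_left_px, x_right_px, focal_px, baseline_m):
    """Depth from horizontal disparity in a rectified stereo pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a rectified pair")
    return focal_px * baseline_m / disparity

if __name__ == "__main__":
    # Fruit centroid at x = 352 px in the left image and 318 px in the right image,
    # with an 800 px focal length and a 0.12 m baseline (illustrative values).
    print(f"range: {stereo_depth_m(352, 318, 800.0, 0.12):.2f} m")
```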
Bulanon et al. (2001) presented an algorithm that used a 240 × 240-pixel color image to detect apples. The apples were detected by thresholding the image using both the