Machine Vision in Agricultural Robotics – A Short Overview

Emil Segerblad
Mälardalen University, School of Innovation, Design and Engineering
Box 883, Västerås, Sweden
esd08004@student.mdh.se

Björn Delight
Mälardalen University, School of Innovation, Design and Engineering
Box 883, Västerås, Sweden
bdt08001@student.mdh.se
ABSTRACT
The demand for food rises in the world as the total population of Earth grows and the human life span increases. The global agricultural industry must rise to meet the increasing demand despite a decreasing work force, through increased production and agricultural efficiency. Automation increases efficiency and decreases costs, as well as freeing up human resources for other tasks. The introduction of agricultural robots is not a new development and has already made significant progress, but there are still several complications to overcome in order to leverage their full advantage. One of the major challenges within the field is the construction of reliable and accurate vision systems used for navigation and targeting.

In this paper the authors present and discuss some of the basic concepts of machine vision systems for agricultural robotics. The article covers sensor selection, including the pros and cons of laser and camera based systems. In addition it describes the machine vision process: image acquisition, segmentation and feature extraction. This is followed by a presentation and discussion of some recent developments in the field, such as the implementation and adaptation of stereo cameras, laser based machine vision and image processing algorithms designed for use in agricultural robotics.
Keywords
Agricultural robotics, vision systems, computer vision.
1. INTRODUCTION
As the world's population increases, so does the demand for food and other life-essential products. In order to face and overcome this problem, agriculture must drastically increase its efficiency and output of food. As the cost of labor increases, the profitability of agriculture decreases, so there is a potential profit to be made in automating the more work intensive parts of agriculture. In addition, in many parts of the world an aging population has created, or will create, a scarcity of labor while food demand is maintained.

One solution is the automation of all industries, including the introduction of agricultural robots. Robots of this kind have existed for quite some time now and come in a wide variety of shapes and sizes. There are several advantages to such robots, for example: lower labor costs, increased efficiency and the possibility to work under almost any weather conditions.

An important aspect of designing an agricultural robot is deciding which kind of sensory input the robot should have. Different applications require different solutions. For instance, a fruit picking robot must be able to determine the fruit's position and then remove it from the plant, which requires color vision (since fruits are most easily located by their bright colors) and 3D vision (to guide the robot in harvesting), while a wheat-harvesting robot need only follow the rows of reaped wheat [10]. The complexity of the agricultural process and the environment in which this process occurs often forces the robot to incorporate some kind of vision based system. A vision based system consists of several equally important parts that together navigate, map and interpret the environment. The most common parts of such systems will be discussed at length later in this paper. In this paper the authors will also present some recent implementations of vision based systems for agricultural robotics.

This paper is divided into 6 sections. In section 2 the basic concepts and algorithms behind vision systems are introduced. In section 3 relevant work and implementations are presented. In section 4 the works presented in section 3 are discussed. In section 5 conclusions are presented.
2. BASIC CONCEPTS OF VISION SYSTEMS
2.1 Basic design of vision systems
The design of vision based systems differs depending on the application, but they often share some common features. In figure 1 some of these common features are presented [12]. The authors will now give an introduction to each of the components that make up a typical machine vision system.
Figure 1: An example of a machine vision system and its components.
2.2.1 Image acquisition
To begin with, the vision system must have some kind of method to acquire a reading of its environment. Cameras, ultrasonic sensors and radar are just some examples of devices used for image acquisition. Different devices produce different kinds of output, ranging from a simple distance reading in integer form to a 3D image of the environment.

Several of the papers sampled in this review used stereo cameras [11], in some cases aided by GPS [11], though one used three cameras; a few used 3D LIDAR (LIght Detection And Ranging) [14] and one used a single camera in conjunction with lasers. The primary problem with lasers is their high cost, though they are capable of giving an almost perfect picture of the world in terms of obstacles. Cameras, though cheap, have more trouble finding objects and measuring the environment in a 3D fashion, though they can find objects by color, such as brightly colored fruit or flowers, as shown in [9].
Usually we can model the camera as a pinhole camera, that is to say a camera with a pinhole as its lens. In a pinhole camera all light passes through a single point and is projected onto a screen; this screen then represents the image acquired by the camera. If we represent the environment as a space, then the picture received from the camera is a projection through the "pinhole" onto a light sensor. That is to say, the outside world is a field of points that are linearly projected onto our image. [4]
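As a concrete illustration (our own, not taken from the surveyed papers): under an ideal pinhole model with focal length f, a point (X, Y, Z) in camera coordinates projects to the image coordinates x = fX/Z and y = fY/Z, which is the linear projection referred to above.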
Using a single camera for machine vision can be very effective if the application does not require 3D vision or precise distance measurement [10]. If one wants to achieve 3D vision or measure distances accurately using cameras, one must resort to using more than one camera. The simplest setup in that case is a stereo camera solution. In order to identify the distance to an object in such a system, the object must be identified in both cameras. For example, if one finds an object, say the red dot marked X in figure 2, with the left camera, we know the right camera will find the same point somewhere on the line defined by OI-Xl (shown in green). If we find the object in the left camera and we know the position of the two cameras, we can find the three dimensional position of the object using triangulation: we know two angles and a side, which allows us to recreate the entire triangle using some trigonometry. [13]
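For a rectified, parallel stereo pair this triangulation reduces to a disparity computation. The following Python sketch is only illustrative: the function name, the assumption that image coordinates are measured from the principal point and the requirement of a positive disparity are ours, not details from [13].

def triangulate(xl, yl, xr, f, baseline):
    """Recover the 3D position of a point seen at column xl (left image)
    and column xr (right image) on the same row yl of a rectified pair.
    f is the focal length in pixels, baseline the camera separation."""
    disparity = xl - xr               # horizontal shift between the two views
    Z = f * baseline / disparity      # depth: larger disparity means a closer object
    X = xl * Z / f                    # back-project the pixel into 3D
    Y = yl * Z / f
    return X, Y, Z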
2.2.2 Preprocessing
Once the image is acquired, some preprocessing is required to make it useful to the computer. Some common methods applied here are noise reduction/removal and filtering algorithms, such as Fourier transforms. In addition, smoothing algorithms such as SNN (Symmetric Nearest Neighbor) are applied. SNN begins by selecting a pixel and grouping its neighbors (those adjacent to it) into pairs. Each pair is then colored the same as the member of the pair whose color is closest to that of the center pixel. [15]
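A minimal grayscale sketch of the SNN idea follows; this is a common formulation that averages the selected neighbors, which may differ in detail from the variant used in [15].

import numpy as np

def snn_filter(img):
    """Symmetric Nearest Neighbor smoothing of a grayscale image: every
    opposing pair of 8-neighbors contributes the member whose value is
    closer to the center pixel, and the output is the mean of the
    selected values."""
    out = img.astype(np.float32)
    h, w = img.shape
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, -1), (0, 1))]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = float(img[y, x])
            picks = []
            for (dy1, dx1), (dy2, dx2) in pairs:
                a = float(img[y + dy1, x + dx1])
                b = float(img[y + dy2, x + dx2])
                picks.append(a if abs(a - c) <= abs(b - c) else b)
            out[y, x] = sum(picks) / len(picks)
    return out.astype(img.dtype)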
2.2.3 Feature extraction
Feature extraction is the process in which some predefined feature, such as lines or other shapes, is retrieved from the acquired picture. There are several interesting algorithms in the domain of feature extraction, for example the Hough transform.

The Hough transform involves matching the lines found in an image to rotations of a model (though it often ignores line length). It attempts all rotations (of some quantum) and "votes" with the number of matches. The rotation with the most votes is accepted. [4] Due to this voting system the algorithm is resistant to noise. An example of output from a machine vision system that utilizes the Hough transform can be seen in figure 3.
Figure 3: The top picture shows the input to a machine vision system utilizing the Hough transform to find lines. At the bottom is the output from the same system; the lines found are colored red.
Figure 2: An example of how to locate objects using stereo cameras. An object, X, can be mapped in 3D space by knowing the distance between the cameras and using trigonometry to complete the triangle.
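As an illustration of line finding with the Hough transform, the sketch below uses OpenCV; the library choice, the input file name and the vote threshold of 100 are our assumptions and are not taken from the surveyed papers.

import cv2
import numpy as np

img = cv2.imread("crop_rows.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
edges = cv2.Canny(img, 50, 150)                           # edge pixels cast the votes
# Every edge pixel votes for all (rho, theta) lines passing through it;
# lines receiving at least 100 votes are accepted.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print("line at angle %.2f rad, %.1f px from the origin" % (theta, rho))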
An alternative method for finding lines is RANSAC (RANdom SAmple Consensus) [14]. RANSAC randomly generates an estimate of (a guess at) a line from a set of data points and then tries to confirm it: it counts all the "inliers" that agree with the estimate and ignores the outliers, and the estimate with the largest consensus is selected. [8] The strength of this algorithm is its resistance to noise [14].
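A minimal sketch of RANSAC line fitting is given below; the iteration count and the inlier tolerance are illustrative assumptions, and the code is not taken from [8] or [14].

import numpy as np

def ransac_line(points, n_iter=200, tol=2.0):
    """Fit a line to an N x 2 array of points: repeatedly guess a line
    through two random points, count the inliers within tol pixels and
    keep the guess with the largest consensus."""
    best_count, best_line = 0, None
    rng = np.random.default_rng()
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        length = np.linalg.norm(d)
        if length == 0:
            continue
        n = np.array([-d[1], d[0]]) / length     # unit normal of the candidate line
        dist = np.abs((points - p1) @ n)         # point-to-line distances
        count = int(np.count_nonzero(dist < tol))
        if count > best_count:
            best_count, best_line = count, (p1, p2)
    return best_line, best_count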
2.2.4 Detection/Segmentation
After features have been extracted from the picture, the detection/segmentation stage begins. In this stage the machine vision system decides which extracted features are relevant for further processing. Within the field of agricultural robotics, common features to be extracted from the input data are fruits, flowers, vegetables and geographic features. The simplest segmentation method is separating regions based on a brightness threshold (for example [15]). One could keep every pixel with a red value over 200 and remove the rest from the acquired image.

A more advanced version of the threshold method is the Otsu method [7], which changes the threshold dynamically in order to minimize region overlap (see page 3 of [7] for more detail).
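Both variants can be sketched in a few lines; the example below uses OpenCV on a hypothetical image file and is not code from [7] or [15].

import cv2

img = cv2.imread("tomatoes.png")                     # hypothetical input image
red = img[:, :, 2]                                   # OpenCV stores images as BGR
# Fixed threshold: keep every pixel with a red value over 200.
_, fixed = cv2.threshold(red, 200, 255, cv2.THRESH_BINARY)
# Otsu's method: the threshold is chosen automatically from the histogram.
_, otsu = cv2.threshold(red, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)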
More complex applications might add or subtract the red, green and blue channels from each other in order to build a gray-scale picture that better captures a certain feature. One might even use a different representation of color; [7] for example uses Hue, Saturation, Intensity.
Epipolar geometry could be considered to fall within the category of detection/segmentation, but it is often used in conjunction with the methods presented here (we still need to identify the objects before we can place them in 3D space).
The complication that arises with thresholding is figuring out which red pixels are parts of which tomato: we need to divide the red pixels into objects. In region growing [15] the image is first divided into clusters of pixels with the same color value. Each cluster then searches its neighborhood for similarly colored regions (within a certain threshold), which are merged, increasing the error, until the image becomes one large region or an error threshold is reached [4].
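The grouping step can also be illustrated with plain connected-component labelling, a simpler relative of region growing; the sketch below is our own and is not the scheme of [4] or [15].

import cv2

binary = cv2.imread("red_mask.png", cv2.IMREAD_GRAYSCALE)    # hypothetical thresholded mask
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                                        # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    if area > 50:                                            # discard tiny noise blobs
        print("object %d: %d pixels, centre %s" % (i, area, centroids[i]))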
An alternative method for segmenting the image into objects is edge detection. An edge is defined as "a location of rapid intensity variation" [13]; that is to say, an edge is an area where the color of the image changes quickly. The same idea can also be applied to other parameters, such as brightness, and can be used to find certain patterns or objects in images.
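A gradient-based sketch of this idea follows; the Sobel operator and the threshold of 100 are our illustrative choices rather than something specified in the surveyed papers.

import cv2

gray = cv2.imread("plant.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)                 # horizontal intensity change
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)                 # vertical intensity change
magnitude = cv2.magnitude(gx, gy)
edges = magnitude > 100                                # rapid variation marks an edge pixel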
Some authors divide the process of computer vision into a greater number of steps than described here. For instance, the information supplied by the above mentioned sub-systems is often used by some decision making system. Such methods fall outside the scope of this article, hence their exclusion.
3. EXAMPLES
3.1 Weiss and Biber, navigation in a field
An example of an implementation of a 3D sensor in agricultural robotics is [14]. Weiss and Biber used a MEMS (Micro-electromechanical system) based LIDAR sensor, the FX6, in order to detect and classify plants. Further, they map the location of these plants onto a global map in order to create a 3D model of the entire "field". Weiss and Biber also compared the performance of this device with some other "state-of-the-art 3D sensor technologies". The FX6 is, as stated above, a 3D laser scanner developed by Nippon Signal. It has a resolution of 29 x 59 pixels and a frame rate of 16 fps. The FX6 was mounted on a small robot and the setup was tested both in the field and on several models of plants. Figure 4 describes the algorithm used in this vision based system.
First the 3D scanner scans the environment and returns a point cloud (a point cloud is simply a set of points) to the ground detection algorithm. This algorithm uses the RANSAC algorithm to find the ground plane's equation. This equation is modified in order to compensate for the velocity of the robot. The system then removes the ground plane from the point cloud and merges the points that are close together. The merged points are then considered to be a single object. In order to determine whether the object is a plant or not, a statistical model is used:
(1)

In (1) the standard deviations, the coordinates of the measured "bounding box" and the measured number of points enter the model. In (2) each of the probability values is multiplied together and the result is the probability that the current set of merged points is a plant. After this step the local position of the plant is stored and later added to the global map of all the plants. Figure 5 shows, from left to right: the model used in the laboratory, the depth image of said model, the intensity image and the point cloud mapping.
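A hedged reading of this model, assuming an independent Gaussian-style score per measured feature, could look like the sketch below; the exact expressions (1) and (2) in [14] may differ.

import math

def plant_probability(features, means, sigmas):
    """Score a merged point set against an expected plant model: each
    measured feature (bounding box dimensions, number of points) gets a
    Gaussian-style score, and the scores are multiplied as in (2).
    This is an assumption-laden sketch, not the model from [14]."""
    p = 1.0
    for x, mu, sigma in zip(features, means, sigmas):
        p *= math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    return p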
Weiss and Biber show that their setup performs well both in theory and in real life, and conclude their article by stating that even though the FX6 has a limited resolution it can still be used with good accuracy in agricultural applications. The authors of this article would like to note that Weiss and Biber's system could easily be extended to perform SLAM (Simultaneous Localization And Mapping); for more information about SLAM, see [3].
Figure 4: An overview of Weiss and Biber's system [14].
3.2 Rovira-Más et al., navigation in a field
Rovira-Más et al. [11] also focus on creating three-dimensional field maps, but unlike Weiss and Biber, this group uses a stereo camera as their image acquisition device, in conjunction with a GPS device. The team placed stereo cameras on the front of a tractor. The cameras used were the MEGA-D, made by Videre Design, and the Tyzx Inc. 3Daware DeepSea Development System. These systems ran at 2 Hz with a resolution of 320 x 240 pixels. Furthermore, they incorporated a Real-Time Kinematic GPS and a fiber optic gyroscope. These instruments were used to gather positional information for the map building algorithm. The designed system acquires a picture and then applies a transformation of the coordinates in order to create a more intuitive coordinate system. This transformation can be seen in figure 6.
The team argues that changing from a coordinate system with the camera as origin to a coordinate system with the closest ground point to the camera as origin is preferable. This is done with the transformation matrix in (3), where theta is the tilt angle of the camera; Xc, Yc and Zc are the coordinates from the camera; Hc is the height of the camera; and X, Y and Z are the new coordinates. The team applies some advanced stereo-matching algorithms that use all of the sensory input in order to generate the 3D map. Rovira-Más et al. conclude that their solution is functional, but some problems were observed during development, such as problems with computational performance and inaccuracy of the sensors.
For more information on linear transformations and matrix operations, see [2].
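A transformation of this general kind (undo the camera tilt, then offset by the camera height so that the origin lies on the ground) can be sketched as follows; the axis conventions and signs are our assumptions and the matrix is not the exact one used in [11].

import numpy as np

def camera_to_ground(p_cam, theta, Hc):
    """Rotate a point given in camera coordinates by the tilt angle theta
    about the lateral axis, then add the camera height Hc to the vertical
    coordinate so that the new origin sits on the ground below the camera."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,  -s],
                  [0.0,   s,   c]])               # rotation about the x (lateral) axis
    x, y, z = R @ np.asarray(p_cam, dtype=float)
    return np.array([x, y, z + Hc])               # assumes z points upward after the rotation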
3.3 Ortiz and Olivares, navigation in a field
Ortiz and Olivares [10] used a single camera to navigate in an agricultural environment, in this case a field with crops growing on it. The team used a pre-built four-wheeled robot called "Yeti". The "Yeti" featured a camera with a resolution of 320 x 240, infrared sensors and a compass. The acquired image is first white balanced in the pre-processing stage, and then the green pixels are extracted from the picture and stored in a binary image file. Using standard vector notation, a pixel in an RGB image can be described as p(x, y) = (R, G, B), where R, G and B are the amounts of red, green and blue in the pixel at position (x, y). Then, by looking at the sum of differences between the green channel, G, and the other two channels, that is S = (G - R) + (G - B), one can easily determine which pixels are close to the true green value of (0, 255, 0). A large value of S indicates a genuine green pixel, while a small value indicates the reverse. One calculates S for each pixel in the picture and assigns the pixel the value 1 whenever S is larger than some predefined limit, in this case 80; otherwise it is assigned the value 0. The output of this process is the binary image file mentioned earlier.
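This extraction can be sketched as follows; the function and array names are ours, and only the rule itself, with its limit of 80, comes from the description in [10].

import numpy as np

def green_mask(img):
    """img is an H x W x 3 RGB array; returns a binary image in which 1
    marks the pixels judged to be genuinely green."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    s = (g - r) + (g - b)             # sum of differences against the green channel
    return (s > 80).astype(np.uint8)  # 1 where the pixel is close to true green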
The authors correctly point out that this process only works if the color sought is parallel to one of the RGB base vectors; otherwise one must use the Gram-Schmidt process to find a new set of base vectors, and so forth. Further, the authors explain that this algorithm is light-sensitive and that in some cases it might be preferable to use the HSV/HSB (Hue, Saturation, Value/Brightness) model instead. The binary image file is then analyzed and all contiguous pixels are grouped together into "objects". The number of pixels in each "object" is then counted, and if this number is greater than some predefined limit the object is kept in the binary file; otherwise it is deleted. Ortiz and Olivares then determine the path of the robot by using an algorithm that finds the outlines of the green pixels and declares that the path of the robot must lie in the middle of those two lines. The team was also interested in map building and states that this can be done with some relatively simple projections. Ortiz and Olivares conclude that "the vision system is very robust" and "The robot is able to navigate by itself in the plantation [...]".
3.4 Nielsen et al., 3D sensors and peach trees
Nielsen et al. [9] investigate the possibility of using a stereo camera system to retrieve 3D data about peach trees. This data is then used in the thinning process of peach trees (removing some flowers so that the tree invests more energy in the remaining ones). The team uses three ten megapixel cameras in a so called L-setup, with an individual resolution of 2592 x 3872 and 24 bit color depth. The output from this system can be seen in figure 7.
Figure 5: To the left, the image input to the system in [14]; to the right, the final result generated by the system, a 3D point cloud.
Figure 6: Transformation done in [11].
The cameras were used in conjunction with a total station, a device that reads distances. In addition, stereo matching was performed with a variant of the epipolar algorithm described above (the extension to three cameras is trivial, and enabled the robot to see flowers that would otherwise be concealed). A strong flash light was mounted between the cameras in order to distribute light evenly to all cameras, and the pictures were taken at night to decrease light related noise. Nielsen et al. argue that stereo vision is superior to LIDAR in the sense that the original true-color RGB values are transferred into the 3D point cloud, which can be a helpful feature when dealing with colorful plants. Some problems were observed during the implementation, such as the inability to handle occlusions and faulty blossom mapping, which resulted in poor accuracy of the positional data. These errors seem to stem from the complexity of the environment in which this vision system works.

The team still concludes that their work shows good results within blossom thinning, as most of the blossoms were found during testing, as well as within other agricultural applications where 3D point clouds may be needed.
3.5 Irie et al., locating asparagus
Irie et al. used a computer vision system to locate and harvest asparagus taller than 230 mm. They used two laser projectors (placed 230 mm apart, apparently the same as the ideal height for an asparagus) in conjunction with a TV camera (mounted between the two laser projectors) and a personal computer to find ripe asparagus in a greenhouse. These image acquisition devices were mounted on a trolley which also featured a robotic arm designed to harvest the asparagus. This robot ran along a track between the rows of asparagus plants. Irie et al. used a transformation matrix to change the raster (pixel) coordinates obtained by the camera into 3D geometric coordinates, presumably by some triangulation variant. This could then be used by the robot to identify asparagus of the correct length and harvest them. The method showed promising results, as the 3D vision system performed well enough to be used in real life. Irie et al. conclude their article by declaring that the next step is to develop a faster and more compact robot. [6]
3.6 Gang Wu et al., navigation in a field
Gang Wu et al. attempted to navigate a combine harvester through a wheat field using a single color camera, with a resolution of 640 x 480, in combination with a GPS device. After acquiring a picture, the system looks at the red channel of the picture, with some smoothing done using a digital sieve. A digital sieve (figure 8) consists of a border of white pixels that is moved through the image, one pixel width at a time. If all the border pixels are white, then all of the central pixels (the gray pixels in the figure) are turned from black to white, which greatly reduces noise in a black and white image. They then use a variant of the Hough transform, which they had developed themselves, to find lines in the wheat being harvested. They were further required to find the ends of the lines in order to determine where the field ended. [5]
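The sieve idea can be sketched as below; the window size is our assumption and the code is illustrative rather than the authors' implementation [5].

import numpy as np

def digital_sieve(binary, size=5):
    """Slide a square window over a black-and-white (0/255) image one pixel
    at a time; wherever every pixel on the window border is white, force
    the interior pixels to white as well, removing small black specks."""
    out = binary.copy()
    h, w = binary.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            win = binary[y:y + size, x:x + size]
            border = np.concatenate([win[0, :], win[-1, :],
                                     win[1:-1, 0], win[1:-1, -1]])
            if np.all(border == 255):                        # all border pixels white
                out[y + 1:y + size - 1, x + 1:x + size - 1] = 255
    return out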
3.7 Xiao-lin et al., tomato identification
Xiao-lin et al. used a stereo camera system to identify tomatoes. Once they had procured images they used two different types of picture adjustment algorithms. The first, using a standard RGB model, subtracted the green value from the red value in each pixel, that is R - G. The other, based on the Hue, Saturation and Intensity color model, used the hue strength of each pixel. A median filter (a neighbor algorithm not so different from SNN [14]) as well as an application of the Otsu method was used to clean up noise. After finding the centers of the tomatoes using central moments, they used epipolar geometry to match the tomato centroids from both cameras, thereby positioning them in 3D space. The accuracy according to Xiao-lin et al. in their tests was within 15 mm. [7]
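The centroid step can be illustrated with image moments; the sketch below uses OpenCV on a hypothetical binary tomato mask and is not the authors' code [7].

import cv2

mask = cv2.imread("tomato_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask
m = cv2.moments(mask, binaryImage=True)
# Centroid of the white pixels (assumes the mask is not empty).
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print("tomato centre at (%.1f, %.1f)" % (cx, cy))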
3.8 Yang et al., tomato identification
Yang et al. also searched for tomatoes with a color stereo camera system, in this case the PGR BumbleBee2. The images are passed through a low pass filter, rectification (for lens distortion) and SNN. Edge detection is also applied so that any difference in brightness between the two cameras does not affect identification. The images are then cleaned up and region growing is employed to identify objects, after which the images are stereo matched (presumably through epipolar geometry). Yang et al. employed masking, that is to say a small neighborhood of pixels around prospective points, when comparing between cameras. In addition they attempt some kind of texture correction, apparently by removing regions with too little texture (texture is identified by the contrast between pixels within the mask). The candidate masked regions are then judged by comparing the sum of differences between pixels. [15]
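A sum-of-absolute-differences comparison of this kind can be sketched as below; the window size and the surrounding details are our assumptions, not specifics from [15].

import numpy as np

def patch_cost(left, right, y, xl, xr, half=3):
    """Sum of absolute differences between a small mask around (y, xl) in
    the left image and (y, xr) in the right image; a smaller cost means a
    better stereo match between the two candidate regions."""
    a = left[y - half:y + half + 1, xl - half:xl + half + 1].astype(int)
    b = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(int)
    return int(np.abs(a - b).sum())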
4. DISCUSSION
In this paper we have presented several possible implementations of vision based systems for agricultural robotics. The articles described in this paper used a variety of image acquisition devices: stereo cameras, single cameras and laser based 3D detection. In addition, GPS and similar systems were used to aid navigation and information gathering.
Figure 7: Output from Nielsen et al.'s system [9].
Figure 8: The digital sieve used by Gang Wu et al. [5].
The goals of agricultural robot vision can largely be divided into two categories: object identification and navigation. When it comes to navigation, even a single camera can be very effective. Fields can usually be navigated through the use of Hough transforms and basic image manipulation. Ortiz et al. [10] used a single camera, color subtraction and some basic grouping to create two binary fields, between which they could safely navigate.
Object identification (several applications of object detection within agricultural robotics have been shown in this article, for example asparagus, tomatoes and peach blossoms) is somewhat more problematic, since the distance from the camera is often required. LIDAR, though effective, can be quite expensive [14]. Another possible solution to this problem is to perform stereo vision with cameras. This solution has the advantage of being able to identify objects by color, but can be difficult to implement in more complex environments, as Nielsen's problems suggest [9]. In simpler environments, attempting to locate groups of brightly colored objects seems to be quite possible, if we look at some of the examples provided in this article: [7] [9] [15] (the common vein in these papers is that they are searching for brightly colored targets, such as flowers or fruit).
Hybrid solutions, such as Irie et al.'s dual laser with a camera [6], also exist and are able to combine the advantages of both systems. With the widespread use of the Microsoft Kinect in robot vision [1], one wonders how it will fare when it inevitably is applied to the field, as it has some interesting properties, such as its low cost and its combination of range sensor and camera.
5. CONCLUSIONS
The progress within the field of agricultural robotics during the last ten years has been impressive, but further improvement is needed before a full scale robotic revolution within the field can occur. One important factor that the authors of this article can identify is the importance of having good and accurate image acquisition devices in order to receive correct representations of the real world.

There are several methods used in machine vision for agricultural robotics, each with its advantages and disadvantages. Not all of the methods are suited to every case; for example, a single camera cannot be used in a situation where 3D mapping is required. It is the demands and restrictions of the application that determine which image acquisition devices and image processing algorithms should be used.
As more ambitious agricultural robotics tasks are attempted, more methods will surface and further advantages and disadvantages will arise. What works in a greenhouse in Iceland might not be equally suited to a tropical coconut plantation. One thing is certain: with the current trends in population growth and aging, greater automation and efficiency will be required of the agricultural field.
6. REFERENCES
[1] Ackerman, "Top 10 Robotic Kinect Hacks", IEEE Spectrum, March 2011, http://spectrum.ieee.org/automaton/robotics/diy/top-10-robotic-kinect-hacks
[2] Anton; Rorres, Elementary Linear Algebra, Ninth Edition, Wiley, 2005, ISBN 978-0-471-66959-3.
[3] Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M., "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Transactions on Robotics and Automation, vol. 17, no. 3, pp. 229-241, June 2001.
[4] Faugeras, Olivier, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, 1993, pages 474-475, 485-488.
[5] Gang Wu; Yu Tan; Yongjun Zheng; Shumao Wang, "Walking Goal Line Detection Based on Machine Vision on Harvesting Robot," Circuits, Communications and System (PACCS), 2011 Third Pacific-Asia Conference on, pp. 1-4, 17-18 July 2011.
[6] Irie, N.; Taguchi, N.; Horie, T.; Ishimatsu, T., "Asparagus harvesting robot coordinated with 3-D vision sensor," Industrial Technology, 2009. ICIT 2009. IEEE International Conference on, pp. 1-6, 10-13 Feb. 2009.
[7] Lv Xiao-lian; Lv Xiao-rong; Lu Bing-fu, "Identification and Location of Picking Tomatoes Based on Machine Vision," Intelligent Computation Technology and Automation (ICICTA), 2011 International Conference on, vol. 2, pp. 101-107, 28-29 March 2011.
[8] Ma, Yi; Soatto, Stefano; Kosecka, Jana; Sastry, Shankar S., An Invitation to 3-D Vision: From Images to Geometric Models, Springer, 2004, pages 388, 165-167.
[9] Nielsen, M.; Slaughter, D.; Gliever, C., "Vision-based 3D Peach Tree Reconstruction for Automated Blossom Thinning," IEEE Transactions on Industrial Informatics, vol. PP, no. 99, pp. 1.
[10] Ortiz, Jose Manuel; Olivares, Manuel, "A Vision Based Navigation System for an Agricultural Field Robot," Robotics Symposium, 2006. LARS '06. IEEE 3rd Latin American, pp. 106-114, 26-27 Oct. 2006.
[11] Rovira-Más, F.; Qin Zhang; Kise, M.; Reid, J.F., "Agricultural 3D Maps with Stereovision," Position, Location, And Navigation Symposium, 2006 IEEE/ION, pp. 1045-1053, April 25-27, 2006.
[12] Szeliski, Richard, Computer Vision: Algorithms and Applications, Springer London, 2011, ISBN 978-1-84882-934-3.
[13] Turek, Fred D., "Machine Vision Fundamentals: How to Make Robots 'See'", NASA Tech Briefs, June 2011, http://www.greentechbriefs.net/about-dtb/10531.
[14] Weiss, U.; Biber, P., "Plant detection and mapping for agricultural robots using a 3D LIDAR sensor," Robotics and Autonomous Systems [0921-8890], vol. 59, no. 5, pp. 339-345, 2011.
[15] Yang, L.; Dickinson, J.; Wu, Q.M.J.; Lang, S., "A fruit recognition method for automatic harvesting," Mechatronics and Machine Vision in Practice, 2007. M2VIP 2007. 14th International Conference on, pp. 152-157, 4-6 Dec. 2007.