
ROBOT VISION

Jaskaran Singh
Master of Science Graduate Student

Submitted in Partial Completion of the Requirements of
IEM 5303
Advanced Manufacturing Systems Design

Fall 2000

This paper was developed by the above named graduate student in partial fulfillment of course requirements. No warranty of any kind is expressed or implied. Readers of this document bear sole responsibility for verification of its contents and assume any/all liability for any/all damage or loss resulting from its use.





Table of Contents

INTRODUCTION TO ROBOT VISION
PURPOSE OF A MACHINE VISION SYSTEM
FUNDAMENTAL TASKS OF ROBOT VISION
    Image Transformation
    Image Analysis
    Image Understanding
GENERAL PURPOSE ROBOT VISION
    Object Verification and Tracking
    Fast Extraction of Stable Image Features
    Object Model Acquisition
    Efficient Indexing of the Model Database
VISION SENSORS
    Special Vision Sensors
        Eye-in-Hand Vision System
        Solid-State Color Sensor
        Fiber Optic Sensors
        Laser Sensors
    Beyond Light Sensitive Vision
    Neuromorphic Sensors
STEREO VISION
    Active/Dynamic Stereo Vision
    Advantages of Active Vision
RESEARCH ISSUES
    Sensor Technology for Artificial Intelligence
    Approach to Human Vision
    Neural Networks
CONCLUSIONS
Bibliography

List of Figures

Figure 1. Symbolic Image Description of a Vision System [1]
Figure 2. Minimization of Perpendicular Errors [3]





Abstract

This paper presents a brief introduction to robot vision and the various issues related to this area. General-purpose robot vision and the various sensors used in robot vision systems are briefly discussed. A brief summary of stereo and active stereo vision is presented. At the end, some of the research work related to this area is discussed.





INTRODUCTION TO ROBOT VISION

Vision is our most powerful sense. It provides us with a remarkable amount of information about our surroundings and enables us to interact intelligently with the environment, all without direct physical contact. Through it we learn the positions and identities of objects and the relationships between them. Attempts have been made to give machines a sense of vision almost since the time that digital computers first became generally available [1]. The use of vision and other sensing schemes is motivated by the continuing need to increase the flexibility and scope of applications of robotic systems. Although proximity, touch, and force sensing play a significant role in the improvement of robot performance, vision is recognized as the most powerful robot sensory capability [2].


Robot vision may be defined as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world. Broadly classified, this process is also known as machine vision or computer vision [2].

Vision for robots requires the ability to identify and accurately determine the positions of all relevant three-dimensional objects within the robot workspace. These positions must be updated in real time. A vision system must be able to gather this information as quickly as needed and to update it as objects undergo motion at moderate speeds. It should function without restrictive assumptions regarding the locations or orientations in which objects may occur [3]. It is not surprising that many attempts to provide machines with a sense of vision have ended in failure. Significant progress has been made nevertheless, and today one can find vision systems that successfully deal with a variable environment as parts of machines [1].



PURPOSE OF A MACHINE VISION SYSTEM


A machine vision system analyzes images and produces descriptions of what is imaged (Fig. 1). This description may then be used to direct the interaction of a robotic system with its environment [1].


Figure 1. Symbolic Image Description of a Vision System [1]




The input to a machine vision system is an image, or several images, while its output is a description that must satisfy two criteria:

- It must bear some relationship to what is being imaged.
- It must contain all the information needed for the given task [1].





FUNDAMENTAL TASKS OF ROBOT VISION

Image Transformation


It is the process of electronically digitizing light images using image devices. An image device is the front end of a vision system, which acts as an image transducer to convert light energy to electrical energy. An image device can be a camera, photodiode array, charge-coupled device (CCD) array, or charge-injection device; the output of an image device is a continuous analog signal that is proportional to the amount of light reflected from an image. In order to analyze the image with a computer, the analog signals must be converted and stored in digital form. A rectangular image array is divided into small regions called picture elements, or pixels. With photodiodes or CCD arrays, the number of pixels equals the number of photodiodes or CCD devices. The pixel arrangement provides a sampling grid for an analog-to-digital (A/D) converter. At each pixel, the analog signal is sampled and converted to a digital value. With an 8-bit A/D converter, the converted pixel value will range from 0 for white to 255 for black. Different shades of gray are represented by values between these two extremes. This is the reason why the term gray value is often used in conjunction with the converted values. These gray-level values are stored in a memory matrix, which is called a picture matrix [2].
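A minimal sketch of this quantization step is shown below, assuming the samples arrive as a NumPy array of reflected-light fractions; the function name and example grid are illustrative, not taken from [2]:

```python
import numpy as np

def digitize(analog_samples, levels=256):
    """Quantize sampled analog intensities (0.0-1.0 reflected-light
    fractions) into gray values, following the text's convention that
    0 represents white and 255 represents black."""
    clipped = np.clip(analog_samples, 0.0, 1.0)
    # Invert (more reflected light -> lower gray value) and scale.
    gray = np.round((1.0 - clipped) * (levels - 1)).astype(np.uint8)
    return gray  # the "picture matrix"

# A hypothetical 4x4 sampling grid from a photodiode or CCD array.
samples = np.array([[1.0, 0.8, 0.5, 0.0],
                    [0.8, 0.5, 0.0, 0.5],
                    [0.5, 0.0, 0.5, 0.8],
                    [0.0, 0.5, 0.8, 1.0]])
print(digitize(samples))
```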

Image Analysis


A computer needs to locate the edges of an object in order to construct drawings of the object within a scene. The line drawings provide a basis for image understanding, as they define the shapes of objects that make up a scene. Thus, the basic reason for edge detection is that edges lead to line drawings, which lead to shapes, which lead to image understanding [2].
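The text does not name a specific edge operator; the sketch below uses the common Sobel operator as one plausible choice (the function name and threshold are assumptions):

```python
import numpy as np

def sobel_edge_map(gray, threshold=128.0):
    """Locate edge pixels by thresholding the Sobel gradient magnitude.
    `gray` is a 2-D array of gray values; returns a boolean edge map
    for the interior (the 1-pixel border is not evaluated)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    edges = np.zeros((h - 2, w - 2), dtype=bool)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)   # horizontal intensity change
            gy = np.sum(ky * patch)   # vertical intensity change
            edges[i, j] = np.hypot(gx, gy) > threshold
    return edges
```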


Image Understanding


The final task of robot vision is to interpret the information obtained during the image-analysis process. This is called image understanding, or machine perception [2].


GENERAL PURPOSE ROBOT VISION

According to David G. Lowe, general-purpose vision for robots requires the ability to identify and accurately determine the positions of all relevant three-dimensional objects within the robot workspace [3]. Here the important thing is to capture the relevant data. As objects move, the system should be able to update the information. In the following sections, four steps to satisfy the goals of general-purpose robot vision, as proposed by David G. Lowe, are presented. He has assumed that the input data for the robot consists of ordinary gray-scale images taken under normal workplace illumination. The four steps are as follows [3]:

Object Verification and Tracking


The most basic capability required for robot vision is the ability to verify the presence of an object and accurately determine its location when given only an approximate estimate of its position. Given an object model and an estimate for its position in an image, it is straightforward to project the model from the estimated position to predict the locations of its features in the image. The position estimate may have a significant degree of error, so there are likely to be many ambiguities in potential matches between image features and predicted locations of model features [3]. A probabilistic incremental matching technique is used to select the most reliable match [4]. For refining the viewpoint from the given image information, Newton's method is used to determine the best least-squares fit between a three-dimensional model and some two-dimensional image features. This method is illustrated in Fig. 2 for minimizing the perpendicular errors between some image line segments and corresponding edges of a three-dimensional model. The advantages of this technique are that it is fast, robust, and capable of solving for a wide range of image and model parameters [3,5].








Figure 2. Minimization of Perpendicular Errors [3]
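Lowe's method operates on a full three-dimensional model; as a simplified planar analogue of the same least-squares idea, the hypothetical sketch below runs Gauss-Newton iterations in which each residual is the signed perpendicular distance from a transformed model point to a measured image line:

```python
import numpy as np

def refine_pose(model_pts, line_pts, line_normals, pose, iters=10):
    """Gauss-Newton refinement of a planar pose (tx, ty, theta).

    Each 2-D model point is matched to a measured image line given in
    normal form (a point `a` on the line, a unit normal `n`); the
    residual is the signed perpendicular distance n . (p - a) of the
    transformed model point p from that line."""
    tx, ty, th = pose
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        rows, residuals = [], []
        for m, a, n in zip(model_pts, line_pts, line_normals):
            p = R @ m + np.array([tx, ty])      # transformed model point
            residuals.append(n @ (p - a))       # perpendicular error
            dp_dth = np.array([-s * m[0] - c * m[1],
                               c * m[0] - s * m[1]])
            rows.append([n[0], n[1], n @ dp_dth])
        J, r = np.array(rows), np.array(residuals)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]  # least-squares step
        tx, ty, th = tx + step[0], ty + step[1], th + step[2]
    return tx, ty, th
```

With three or more non-degenerate correspondences the linear step is well determined, which mirrors the claim that the method solves for a wide range of image and model parameters.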

Fast Extraction of Stable Image Features


While the extraction of linked edges is sufficient for motion tracking, in which the position of an object in each frame can be quite accurately predicted, these features are at a level that is too primitive for more general object recognition. An individual edge segment is too ambiguous and could match many potentially corresponding features on each object. So the detection of image features that are less ambiguous in their interpretation, and yet are stable over changes in viewpoint, is the important goal of general-purpose robot vision. The author suggests the linking of straight-line segments based on collinearity, parallelism, and proximity of end points. The features participating in a grouping must be indexed according to location at various scales of resolution, depending upon the size of the feature. An economical technique is to simply index each feature under a few relevant properties such as location, orientation, and size. After all features have been indexed, we can search away from each feature for other features that fall within the relevant parameter bounds to form a significant grouping with the first feature. In this manner, little computation is wasted on the consideration of relationships that are not present in the actual data [3].
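A minimal sketch of such an index follows, assuming each feature records its location, orientation, and size; bucketing by coarse location keeps the search for grouping partners local instead of comparing all pairs of features:

```python
from collections import defaultdict

def build_index(features, cell=32.0):
    """Bucket line-segment features by coarse image location; each
    feature is a dict with 'x', 'y', 'orientation', 'length' keys."""
    index = defaultdict(list)
    for f in features:
        index[(int(f['x'] // cell), int(f['y'] // cell))].append(f)
    return index

def grouping_candidates(index, f, cell=32.0):
    """Search outward from feature `f`: return features in the 3x3
    neighborhood of its cell, to be screened for collinearity,
    parallelism, or end-point proximity."""
    cx, cy = int(f['x'] // cell), int(f['y'] // cell)
    return [g
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for g in index[(cx + dx, cy + dy)]
            if g is not f]
```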

Object Model Acquisition


One of the requirements for representing an object is to describe its visual appearance, which has been explored in the field of computer graphics. For the purposes of recognition, it is more important to represent the relationships between particular features and structures that can be detected in the image and corresponding components of the model. One basic role for object models is to provide fast predictions for the locations of image features from a particular viewpoint for use during object verification and tracking. The most reliable and straightforward method for identifying the associations between image and model features is to gather data from actual images of the object. There are a number of successful systems that can generate three-dimensional models from a set of images [3].


Efficient Indexing of the Model Database


For general problems of object recognition, there may be a large library of potential objects that could appear in an image and no prior information regarding their positions. Therefore, initial stages of image processing often proceed without use of knowledge regarding particular objects. At later stages, it is necessary to use the features derived from the image to access particular object models for final matching and verification. This requires an indexing mechanism that can associate low-level features with the object models that are most likely to be present given those features. Because any particular image feature will not always result from only one possible object model, probabilistic methods should be used. The most straightforward and accurate method for determining these probabilities is to simply measure them from a sample of images and to continuously improve their accuracy as more instances of recognition are performed. For this, the author has suggested using Bayes' theorem [3].
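As a toy illustration of the suggested use of Bayes' theorem (the names here are assumptions, not from [3]), the posterior probability of each model given an observed feature can be estimated from measured co-occurrence counts:

```python
def model_posteriors(feature, pair_counts, model_counts, total_images):
    """Bayes' theorem from measured frequencies:
    P(M | f) = P(f | M) P(M) / sum_j P(f | M_j) P(M_j).

    pair_counts[(feature, model)] -- training images of `model` that
                                     showed `feature`
    model_counts[model]           -- training images of each model"""
    scores = {}
    for model, n in model_counts.items():
        likelihood = pair_counts.get((feature, model), 0) / n  # P(f|M)
        prior = n / total_images                               # P(M)
        scores[model] = likelihood * prior
    evidence = sum(scores.values())                            # P(f)
    if evidence > 0.0:
        scores = {m: s / evidence for m, s in scores.items()}
    return scores  # models most likely to be present given `feature`
```

Updating the counts after each successful recognition is what lets the probabilities improve as more instances of recognition are performed.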

VISION SENSORS

A vision sensor has the capability to sense, store, and reconstruct a graphic image that is close to the original. The use of vision sensors has sparked the most interest by far and is the most active research area [6].

Special Vision Sensors


Some vision sensors are specifically designed for robotic applications. Examples of these types of sensors are described below.


Eye-in-Hand Vision System


This is used in relation to hole location, pick-and-place, and jig location tasks. It has applications in line, edge, and contour following, height sensing, and dimension checking [7].


Solid-State Color Sensor


It has the capability to determine both the average color and the intensity of the incident light within the visible part of the spectrum, and without additional color filters. This color determination feature provides an industrial robot with an extra capability for environmental data extraction [7].

Fiber Optic Sensors



An eye-in-hand robot has several advantages over the use of a fixed external camera, but the movement of the gripper is restricted by the weight of the camera. Coherent fiber-optic bundles can be used for carrying light from an object to be imaged, and photodiodes are used to convert this light into electrical signals for processing [7].


Laser Sensors


These are based on the same principle as used in CD players. A laser beam is broken into sub-beams through the use of a diffraction grating. The main benefit of the laser-diffraction system is that it is easier to write software for it than it is to write software that attempts to recognize shapes and patterns. For many machine-vision applications, it is not as important for the robot to recognize the actual shape of an object as it is for it to navigate around or manipulate the shape [8].

Beyond Light Sensitive Vision


Like a cave bat, a robot can use high-frequency sound to navigate its surroundings. Ultrasonic transducers are common in Polaroid instant cameras, electronic tape-measuring devices, automotive backup alarms, and security systems. Radar systems work on the same principle as ultrasonic ones, but instead of high-frequency sound, radar uses high-frequency radio waves. Radar is less often found in robotic systems because of its higher cost compared to ultrasonics. Passive infrared sensors, which are used in security and automatic outdoor-lighting systems, detect the natural heat radiated by all objects. That heat is in the form of infrared, a form of light that is beyond the limits of human vision. The simplest passive infrared system merely detects a rapid change in heat reaching the sensor; such a change usually represents movement [8].
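The underlying range calculation for such a time-of-flight sensor is simple; a hedged sketch, assuming sound travels at roughly 343 m/s in air:

```python
def echo_distance_m(round_trip_s, speed_of_sound=343.0):
    """Range from an ultrasonic ping: the pulse travels to the obstacle
    and back, so the one-way distance is half the round-trip path."""
    return speed_of_sound * round_trip_s / 2.0

print(echo_distance_m(0.010))  # a 10 ms echo ~ 1.7 m to the obstacle
```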


Neuromorphic Sensors


Neuromorphic sensors are specialized sensory processing functions implemented by analog electronic circuits that are inspired by biological systems. These circuits are particularly good candidates for the construction of artificial sensory systems that attempt to emulate biological vision. The real-time processing of the continuous, high-dimensional input signals provided by vision sensors is challenging both in terms of the computational power required and the sophisticated algorithms required to extract behaviorally relevant information. These are helpful in the construction of systems that attempt to emulate biological vision [9]. The visual system of a jumping spider is very complex and advanced. Most spiders don't see very well and rely heavily on vibratory, tactile, and chemical cues to perceive their world. The jumping spider is an exception. Jumping spiders have one of the most sophisticated visual systems amongst the invertebrates. Their eyesight rivals the eyesight of humans, though it is quite different. The only other invertebrates with eyesight as good as the jumping spider's are the octopus and the squid. In fact, the brain of the jumping spider includes a fairly large region for visual processing. A similar visual system can easily be adapted for robotics purposes [10]. The neuromorphic approach to artificial perceptive sensors implements specialized sensory processing functions inspired by biological systems in analog electronic circuits. These circuits are parallel and asynchronous, and they respond in real time. Surprisingly useful results have been obtained in replicating insect visual homing and chemo- and phonotaxis strategies, using simple off-the-shelf analog components interfaced to robots [9,11].

STEREO VISION

Stereo vision is basically inferring scene geometry from two or more images taken simultaneously from slightly different viewpoints. These different perspectives lead to slight relative displacements of objects, called disparities, in the two monocular views of the scene. Using these disparities, the vision system is able to calculate depth information about a scene or object [12]. Stereo vision has long been accepted as a valid technique of three-dimensional data acquisition and surface description. For centuries, engineers and architects have used various views of an object or structure to describe it in three-dimensional space [13]. A frequently used method for this purpose is triangulation. It determines the depth of an object point by directing toward it the light from two sources that are a known distance apart. The object point will be at the intersection of the two light lines. Triangulation methods concentrate on the third dimension. Once all three dimensions are known, a stereo image of the object can be computed [14].
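For the common parallel-axis stereo arrangement (an assumption here, not a geometry specified in the text), triangulation reduces to the relation Z = fB/d, where f is the focal length, B the baseline between the cameras, and d the disparity; a minimal sketch:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point seen by two parallel cameras: Z = f * B / d,
    where d = x_left - x_right is the disparity in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0.0:
        raise ValueError("expected positive disparity")
    return focal_px * baseline_m / disparity

# f = 800 px, baseline = 0.12 m, disparity = 16 px  ->  depth = 6.0 m
print(depth_from_disparity(800.0, 0.12, 412.0, 396.0))
```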


Active/Dynamic Stereo Vision


An important feature of the current state-of-the-art is the view that sufficiently efficient interpretation of complex scenes can only be implemented using an adaptive model structure. In the infancy of computer vision, it was believed that objects of interest could unequivocally be separated from the background using a few standard operations applied over the entire image. It turns out, however, that this simple methodology works only on simple images having a good separation between object and background. In the case of more difficult problems with noise, ambiguities, and disturbances of different types, more sophisticated algorithms are required, with provisions to adapt themselves to the image content. A further extension of this adaptability is active vision [15]. Active vision seeks to gather scene information dynamically and selectively by probing and exploring the entire visual field for the information that is salient to the particular task at hand [16]. One of the most interesting aspects of the active vision paradigm is the use of motion (and in general the ability of the observer to act and interact with the environment) to guide continuous image data acquisition and enrich the measurements of the scene. Overall this implies an increase in the amount of incoming data, which, in turn, may require an appropriate data-reduction strategy. On the other hand, this strategy can enormously simplify the computational schema underlying a given visual task or measurement. This is very important when designing working systems, where simpler processes allow the implementation of real-time systems that can perform several measurements over time [17].

Advantages of Active Vision


The advantages that active vision offers include a large effective field of view, an increase in the spatial resolution of the vision system, and a reduction in the computational burden through intelligent selection of regions of interest from the scene. In this way the active vision system can process only that information which is relevant to the task and is not reliant on processing all data uniformly. Other advantages include the ability to stabilize images, aiding motion estimation, figure-ground separation, better range estimates fused from stereo, focusing, and sensor geometry, and lessening the effect of occlusion [16].

One use of active vision is in robot navigation. Basically, robots can be bolted to tables or slabs on the floor, or they can move around factory sites and elsewhere. The robots that move around are said to navigate when they have path choices that must be decided between in carrying out tasks efficiently without bumping into things or people. For navigation, range sensors (visual and ultrasonic) and touch sensors are used [18]. Visual navigation is a challenging issue in automated robot control. In many robot applications, such as object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context the detection of corridors of free space along the robot trajectory is a very important capability, which requires non-trivial visual processing. In most cases it is possible to take advantage of the active control of the cameras [17].


RESEARCH ISSUES

Sensor Technology for Artificial Intelligence


In early AI research, one of the most important problems was to determine whether or not a line in a line drawing corresponds to a discontinuous edge, an edge between two adjacent faces, or a shadow boundary. To solve this problem, Yoshiaki Shirai proposed using a range finder that measures the depth of every pixel in an image directly by active triangulation, and applied it to polyhedral object recognition (Shirai and Suwa, 1971). In AI and robot vision, however, for more than a decade the range finder or a modified version (e.g., a laser range finder) was used as the single technique to reliably obtain the depth of a scene. Today, commercial range-finding systems are available, and they are used not only for industrial applications but also for modeling artistic objects in museums or measuring the geometry of a plant. The cost and size of infrared TV systems are decreasing; however, it is still hard to obtain sonar images with high resolution. The global positioning system (GPS), which is useful for obtaining an approximate position, is insufficient for vehicle dead reckoning. New sensor technology is needed to identify new strategies and realize better robot vision [19].

Approach to Human Vision


Image data reduction achieved by the retina of primates provides a wide field of view and high resolution where needed, without creating a large amount of data at the outset. To design a sensory system that is biologically motivated, a data reduction model is required [20]. Although our foveae cover only some ten-thousandth of the visual field, humans manage to achieve fairly good vision. The strategy is to have our eyes continually on the move, pointing the foveae at whatever we wish to see. Binocular stereo requires that the foveae simultaneously converge on the object of interest, a process called binocular fixation, to maximally exploit the foveal acuity for depth perception [21]. The human eye has its highest resolution at the center of the optical axis, and it decays towards the periphery. There are a number of advantages in such an arrangement:

- Data reduction compared to having the whole field of view in full resolution.
- High resolution is combined with a broad field of view.
- The foveae mark the area of interest, and disturbing details in the surround are blurred.

These advantages can be utilized in a robot vision system as well. There are a number of research projects developing both hardware and algorithms for heterogeneously sampled image arrays, implementing the foveae concept in one form or another, as the sketch after this paragraph illustrates [15].
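The following crude, hypothetical foveation sketch is illustrative only; the ring radii and subsampling factors are arbitrary assumptions chosen to show the data reduction:

```python
import numpy as np

def foveate(image, cx, cy, rings=((32, 4), (96, 16))):
    """Keep full resolution near the fixation point (cx, cy); farther
    out, keep only every f-th sample and replicate it over an f x f
    block, so the periphery carries far less data."""
    out = image.astype(float)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - cy, xs - cx)        # distance from fixation
    for radius, f in rings:
        coarse = image[::f, ::f]             # sparse peripheral samples
        blocky = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)
        mask = dist >= radius
        out[mask] = blocky[:h, :w][mask]
    return out
```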
Currently under development at Johns Hopkins University (Baltimore) is a robot that can see, like humans, under real-life conditions. With the aid of a chip-based vision system, a toy car was able to follow a line around a track, avoiding obstacles along the way. The new system is based on a single 6.4 x 6.8 mm chip. The chip's design borrows from nature, modeling the behavior of the primate visual tracking system, namely the retina and parts of the brain. The retinal portion of the chip uses photodiodes in place of the cells in the retina. The chip is interfaced with an eight-bit microcomputer to implement fast, autonomous navigation. Applications could include surveillance, surgery, manufacturing, and videoconferencing [22].

Neural Networks


For several years, A.B. Bonds, professor of electrical and computer engineering and biomedical engineering, has been observing the language of the cells that form the pathway from the eyes to the thalamus, or midbrain, and finally to the visual cortex at the back of the head. In order to design and build new brain models for computers, called artificial neural networks, Bonds is working on understanding how individual brain cells work within the network and how they hook up to other cells. His research involves stimulating the receptive field of a cell with a visual image and independently stimulating other areas that the cell does not directly "see". He then measures the cell's response. He has found that stimulating other areas in the visual field clearly influences the response of a cell looking at a particular place, which indicates the existence of interactions between cells at a given level of processing. In one of his projects he has studied the powerful feedback pathway from the visual cortex back to the thalamus, or midbrain. He suggests that this feedback pathway has something to do with parts of the visual field being more interesting than others. We are bombarded by visual signals constantly. Megabytes of information per second are coming into the eyes, and about 99 percent of this is of no use to us. So there is something coming down from the higher levels of the brain to the midbrain that suppresses dull information in the field to the advantage of what is interesting. This is of particular interest to those involved in robot vision research, because making a robot that must pay full attention to the whole visual field would take an enormous amount of computing power. But if we had some way to zero in on a particular part of the field that was interesting and then to enhance that area, we wouldn't need so much computing power [23]. So we can say that there are many opportunities in the field of neural networks for robot vision.

CONCLUSIONS

Robot vision is one of the most challenging fields from a research viewpoint. While research in vision is maturing, much remains to be investigated. Current topics include object recognition techniques, stereo vision, and sophisticated sensor technology. Neuromorphic sensors are among the most sophisticated sensors, and with the development of computer technology and research in the field of neural networks, it is possible to imagine a robot having the same vision power as a human being.














BIBLIOGRAPHY

1. Horn, B.K.P., Robot Vision, The MIT Press, Cambridge, MA, 1986.

2. "Computer Vision", www.ite.his.se/ite/research/automation/course/vision.htm.

3. Lowe, D.G., "Four Steps Towards General-Purpose Robot Vision", Robotics Research: The Fourth Symposium, 221-228, The MIT Press, Cambridge, MA, 1988.

4. Lowe, D.G., "The Viewpoint Consistency Constraint", International Journal of Computer Vision, 1, 1, 57-72, 1987.

5. Lowe, D.G., Perceptual Organization and Visual Recognition, Kluwer Academic Publishers, Boston, MA, 1985.

6. "Vision Sensors", www.eng.morgan.edu/~tsjoseph/robotics.html.

7. Pugh, A., "Robot Sensors: Vol. 1 - Vision", IFS Publications Ltd, Bedford, UK, 1986.

8. McComb, G., "Vision Systems", Popular Electronics, 16, 8, 77-84, Gernsback Publications, 1999.

9. Indiveri, G., "Robotic Vision: Neuromorphic Vision Sensors", Science, 288, 5469, 1189-1191, American Association for the Advancement of Science, 2000.

10. Baker, C.; Bumett, J.; Kozuki, T., "Question 1: Animal Vision", www.contrib.andrew.cmu.edu/user/cbaker/robotics/assign2/animal.html.

11. Webb, B., Neural Networks, 11, 1479, 1998.

12. "Stereoscopic Vision", www.ee.ic.ac.uk/eee2proj/khl98/vision.html.

13. Beni, G.; Hackwood, S., Recent Advances in Robotics, John Wiley and Sons, 1985.

14. Holzbock, W.G., Robotic Technology: Principles and Practice, Van Nostrand Reinhold Company Inc., New York, NY, 1986.

15. Granlund, G.H.; Knutsson, H.; Westelius, C.-J.; Wiklund, J., "Issues in Robot Vision", Image and Vision Computing, 12, 3, 131-148, Butterworth-Heinemann Ltd, Oxford, UK, 1994.

16. Pretlove, J., "Stereo Vision", The Industrial Robot, 21, 2, 24-27, MCB University Press Limited, AL, USA, 1994.

17. Grosso, E.; Tistarelli, M., "Active/Dynamic Stereo Vision", IEEE Transactions on Pattern Analysis and Machine Intelligence, 17, 9, 868-879, IEEE Computer Society, NY, USA, 1995.

18. Jarvis, R., "Robot Navigation", The Industrial Robot, 21, 2, 3-10, MCB University Press Limited, AL, USA, 1994.

19. Shirai, Y., "Robot Vision Research: Past and Future Roles", International Journal of Robotics Research, 18, 12, 1185-1200, SAGE Publications, Thousand Oaks, CA, USA, 1999.

20. Bolduc, M.; Levine, M.D., "A Foveated Retina for Robotic Vision", Research in Computer and Robot Vision, 93-116, World Scientific Publishing Company, River Edge, NJ, USA, 1995.

21. Tong, F.; Li, Z.N., "Reciprocal-Wedge Transform in Active Stereo", International Journal of Pattern Recognition and Artificial Intelligence, 13, 1, 25-48, World Scientific Publishing Company, River Edge, NJ, USA, 1999.

22. Anonymous, "Robots that can See like Humans", Robotics Today, 12, 4, 5-7, Society of Manufacturing Engineers, Dearborn, MI, USA, 1999.

23. "Neural Networks: A Guide to Robot Vision Research", www.vanderbilt.edu/News/research/ravs96_3.htm/.