Omni-vision based autonomous mobile robotic platform


Zuoliang Cao*a, Jun Hu*a, Jin Cao**b, Ernest L. Hall***c

aTianjin University of Technology, Tianjin, China
bFatwire Corporation, New York, USA
cUniversity of Cincinnati, Cincinnati, USA



ABSTRACT


As a laboratory demonstration platform, the TUT-I mobile robot provides various experimentation modules to demonstrate the robotics technologies involved in remote control, computer programming, and teach-and-playback operations. Typically, the teach-and-playback operation has proved to be an effective solution, especially in structured environments. Path generation in the teach mode and real-time path correction using path error detection in the playback mode are demonstrated. A vision-based image database is generated as the representation of the given path in the teaching procedure. An online image positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omnidirectional vision (omni-vision) system is used for localization and navigation. The omnidirectional vision system involves an extremely wide-angle lens and processes a dynamic omni-vision image in real time to provide the widest possible view during movement. Beacon guidance is realized by observing the locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.


Keywords: Mobile robot, omnidirectional vision, navigation, teach-and-playback, image database.




1. INTRODUCTION


The robotics laboratory is a major research division of Tianjin University of Technology (TUT). It is equipped with an industrial robot work cell and a mobile robotic platform with a series of lab-demo system modules. The TUT mobile robotic platform, as a wheel-based demo system, was developed to adapt to current education and research needs. The emphasis of the design concepts is placed on the following considerations.


(1) Mobile robotic devices promise a wide range of applications for industrial automation in which stationary robots cannot produce satisfactory results because of limited working space. Such applications include material transfer and tool handling in factories, and automatically piloted carts for delivering loads such as mail around office buildings and for service in hospitals. Most advanced manufacturing systems use some kind of computer transportation system.

*Zlcao@eyou.com; phone 86 22 23688585; fax 86 22 23362948; http://www.tjut.edu.cn; Tianjin University of Technology, Tianjin, 300191, China; **Jincao@farwire.com; phone 1 516 3289473; fax 1 516 739-5069; http://www.Fatwire.com; Fatwire Co., 330 Old Country Road, Suite 207, Mineola, New York, USA 11501; ***hallel@email.uc.edu; phone 1 513 5562730; fax 1 513 5563390; http://www.uc.edu; Univ. of Cincinnati, Cincinnati, OH 45242, USA


(2) The facilities of the laboratory support the education of Robotics and Mechatronics, which includes a series of primary and typical robotics courses such as Kinematics, Dynamics, Motion Control, Robot Programming, and Machine Intelligence.


(3) Laboratory experiments are the major resources for studying and researching future technologies. Some particular application demonstrations are necessary to establish basic concepts and approaches in new development areas and to meet the needs for research experience and capability.


Navigation is one of the most common and important topics in robotics research. The use of a fixed guidance method is currently the most common and reliable technique. One such method uses a signal-carrying wire buried in the floor with an on-board inductive coil, or a reflective stripe painted on the floor with an on-board photoelectric device. In our laboratory, a magnetic strip is laid on the floor and the vehicle is equipped with a detecting sensor that directs the vehicle to follow it. A more recent and preferred approach employs laser beam reflection or digital imaging units [1, 2] to navigate a free-ranging vehicle.


It is feasible and desirable to use a sensor-driven, computer-controlled and software-programmed automatic system for AGV development. A series of special sensors may be employed for mobile robots. Various methods have been tested for mobile control using ultrasonic [3, 4], infrared, optical and laser-emitting units. Data fusion [5], fuzzy logic [6, 7], neural network [5] and machine intelligence technologies [8-10] have been developed.


Machine vision capability may have a significant effect on many robotics applications. An intelligent machine, such as an autonomous mobile robot, must be equipped with a vision system to collect the visual information it uses to adapt to its environment. Because of its limited field of view, a conventional imaging system restricts the performance of the robotic system. The development of an omnidirectional vision system appears to offer more advantages; the omnidirectional vision navigation program [11, 12] is an example.


The mobile robotic platform, as a lab-demo system, is characterized by its hybrid nature (in the sense of a variety of actuators and sensors), reconfigurability, easy accessibility, changeable controller and other advanced features. Both laboratory hardware modules and advanced computer software packages provide an appropriate environment, supplemented by a graphics simulation system. The two laboratory phases form an experimentation base ranging from the fundamentals of robot operation to integration with new control technologies. The advanced experiment is divided into two phases as follows:

Phase one: robotic control, navigation, obstacle avoidance, sensory fusion.

Phase two: computer vision, image processing, path tracking and planning, machine intelligence.


The TUT-I mobile robotic platform can be operated in four modes: human remote control, programmed control, teach-and-playback, and automatic path planning in structured environments. Typically, the main function is teach-and-playback operation for indoor transportation. Autonomous mobile robots can be considered automated path-planning, free-ranging vehicles. However, the cooperation of autonomous capability with a supervising operator appears to be the engineering compromise that provides a framework for the application of mobile robots, especially in structured environments. The operation mode of path teaching and playback is an effective solution not only for manipulator arms but also for vehicles. The vehicle records the beacon data progressively in an on-board computer memory during a manually driven teaching trip along the desired course. On subsequent unmanned trips, the vehicle directs itself along the chosen course by observing the beacon and comparing the data. The steering correction allows the vehicle to follow the taught course automatically. Path generation in the teaching mode and path correction by real-time path error detection in the playback mode are demonstrated by the TUT-I robotic system. The vision-based image database is generated as the representation of the given path in the teaching procedure. The image processing algorithm is performed for path following. The path tracking involves an online positioning method.


Advanced sensory capability is employed to provide environment perception. A unique omni-vision system is used for localization and navigation. The omnidirectional vision system consists of an extremely wide-angle lens with a CCD camera, so that a dynamic omni-vision image can be processed in real time. Beacon guidance is realized by observing the locations of points derived from overhead features such as a predefined light array in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed to avoid obstacles. Other sensors are utilized for system reliability. A multi-sensory data fusion technique is developed for collision-free trajectory piloting.


The TUT-I mobile platform can be decomposed into five distinct subsystems: locomotion with control and man-machine interface, sensors for obstacle avoidance, an omni-vision module with image processor, a navigator with central controller, and power source units. This paper provides a concise series of descriptions of path planning, obstacle avoidance, control strategies, navigation, vision systems, ranging systems and various application modules.



2. CONFIGURATION OF THE OMNIDIRECTIONAL VISION NAVIGATION UNITS


The platform is an unmanned vehicle comprising two driven wheels and two front free-swiveling wheels. The chassis contains the elements necessary to power, propel and steer the other on-board equipment. A number of ultrasonic ranging sensors are mounted near the chassis. The ultrasonic devices sense the presence of objects within the vehicle's path and prevent a collision. As a further precaution, safety switches stop the vehicle if it contacts anything.


Omnidirectional vision means that an entire hemispherical field of view is seen simultaneously. This is realized by means of an extremely wide-angle optical imaging device, called a fisheye lens, with a CCD camera. Omnidirectional vision guidance is a new and unique navigation technique, and omni-vision appears to have definite significance in navigation applications for various autonomous guided vehicles. The omni-vision automated guiding system, referenced to overhead lights, consists of the following five components, shown in Fig. 1.



(1) Fisheye lens and CCD camera with an automated electronic shutter. The electronic shutter control system uses a single-chip microcomputer and is suitable for the TK-60 CCD camera. When the illumination changes, the system still enables the camera to produce a good output image.

(2) A camera stand with 5 coordinate degrees of freedom.

(3) Beacon tracker: an image-processing computer for real-time data acquisition of the targets.

(4) Omni-image distortion corrector.

(5) Navigator, which includes three function modules: Path Generator, Path Error Detector and Corrector.


The system performs path planning and tracking in the teach-and-playback mode by referring to overhead lights. It guides itself by referring to overhead visual targets that are universally found in buildings. A group of predefined overhead lights is usually selected as the landmark; it is not necessary to install any special features. Since at least two points are needed to determine the vehicle's position and orientation, the beacon group should consist of at least two lights as guiding targets in each image frame. However, the algorithms may be adjusted to handle any specified number of targets greater than two, which would produce a more accurate and robust system: even if a target is lost from time to time, the vehicle would still have at least the minimum number of targets for guidance.
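To make the two-light requirement concrete, the following sketch shows how a vehicle pose could be recovered from two beacons. It is a minimal illustration under assumed conventions (known beacon positions in a world frame, measured positions in the vehicle frame), not the system's actual algorithm, and the function name is hypothetical.

```python
import math

def vehicle_pose_from_two_beacons(world_a, world_b, meas_a, meas_b):
    """Estimate vehicle (x, y, heading) from two overhead beacons.

    world_a, world_b: known beacon positions in the world (floor-plan) frame.
    meas_a, meas_b:   the same beacons measured in the vehicle frame, e.g. after
                      the omni-image has been converted to target coordinates.
    Hypothetical helper for illustration only.
    """
    # Heading: angle of the beacon baseline in the world frame minus its
    # angle as seen from the vehicle.
    ang_world = math.atan2(world_b[1] - world_a[1], world_b[0] - world_a[0])
    ang_meas = math.atan2(meas_b[1] - meas_a[1], meas_b[0] - meas_a[0])
    heading = ang_world - ang_meas

    # Position: rotate the measured vector to beacon A into the world frame
    # and subtract it from A's known world position.
    c, s = math.cos(heading), math.sin(heading)
    x = world_a[0] - (c * meas_a[0] - s * meas_a[1])
    y = world_a[1] - (s * meas_a[0] + c * meas_a[1])
    return x, y, heading

# Example: two ceiling lights 2 m apart along the world X axis.
print(vehicle_pose_from_two_beacons((0.0, 0.0), (2.0, 0.0),
                                    (1.0, 0.5), (1.0, -1.5)))
```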


The tracker is a real-time digital image processor. Although multiple targets could be multiplexed through a single target tracker, a multi-target tracker is used to eliminate the need for multiplexing software. Up to four image windows, or gates, can be used to track the apparent movement of the overhead lights simultaneously. The beacon tracker is the major on-board hardware component. It outputs the coordinates of a pair of lights that are shown on the monitor screen and places a tracking gate around each of them.
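As a rough software analogue of such a gate, the sketch below re-centers a window on the bright spot it tracks. The thresholded-centroid approach, window size and threshold are illustrative assumptions rather than the tracker hardware's actual method.

```python
import numpy as np

def update_gate(image, gate, size=20, threshold=200):
    """Re-center one tracking gate on the bright overhead light it follows.

    image:     2-D grayscale array (the omni-vision frame).
    gate:      (row, col) current gate center.
    size:      half-width of the square gate window, in pixels.
    threshold: intensity above which a pixel is treated as part of the light.
    Returns the new gate center, or None if the target has left the gate.
    """
    r, c = gate
    r0, r1 = max(r - size, 0), min(r + size, image.shape[0])
    c0, c1 = max(c - size, 0), min(c + size, image.shape[1])
    window = image[r0:r1, c0:c1]

    rows, cols = np.nonzero(window > threshold)
    if rows.size == 0:
        return None                               # target lost: select a new beacon
    return r0 + rows.mean(), c0 + cols.mean()     # centroid of the bright spot
```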



3. TEACHING AND PLAYBACK OPERATION MODE BASED ON IMAGE DATABASE


For industrial manipulators, the teach-and-playback operation mode, performed through a teach pendant handled by an operator, is a traditional technique. For an automated guided vehicle it appears to be an unusual method, but it is still an effective solution. The vision-based image database is employed as the desired path generator. In the teaching mode, the operator manually drives the vehicle forward along the desired path at a selected speed. The selected targets and their gates move downward on the monitor screen as the vehicle passes underneath the overhead lights, and the data are recorded in the reference database described below. When a target is nearly out of the field of view, the frame is terminated and a new pair of targets is selected; the process is then repeated. This combination of vision frames and position tracking continues until the desired path has been generated and the reference data for that track are recorded in memory. In the playback mode thereafter, the vehicle automatically keeps itself on the desired path by comparing the recorded data with the vehicle's actual movement and making steering corrections to bring the path errors to zero: Ex, the lateral error; Ey, the error along the path; and Ea, the angular orientation error. This keeps the vehicle on the intended course. As the value of Ey diminishes and approaches zero, the next reference frame is called up, and the cycle is repeated. This procedure causes the vehicle to follow the desired path.
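The playback control flow described above can be summarized in a short Python sketch. All helper callables (observation, error computation, steering) are hypothetical stand-ins injected as parameters, and the tolerance value is an assumption.

```python
def playback(reference_frames, get_observation, path_errors, steer, ey_tol=0.05):
    """Playback-mode loop: follow the taught path frame by frame.

    reference_frames: the recorded target data from the teaching run.
    get_observation:  callable returning the currently tracked target data.
    path_errors:      callable mapping (observed, reference) to (Ex, Ey, Ea).
    steer:            callable applying a steering correction from (Ex, Ea).
    All helpers are hypothetical; only the control flow described above
    (advance frames as Ey approaches zero) is modelled here.
    """
    for ref in reference_frames:
        while True:
            obs = get_observation()
            ex, ey, ea = path_errors(obs, ref)
            steer(ex, ea)              # correction drives Ex and Ea toward zero
            if abs(ey) < ey_tol:       # Ey near zero: call up the next frame
                break
```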


A database with the tree structure shown in Fig. 2 is built up to record the data stream. The vision-based image database has a three-layer data structure. Each layer is a group of array pointers, and each array element points to a node in the next layer. The array index is the serial number of the sample, and the three layers represent frames, fields and records respectively. This method creates an exclusive property: the desired path is defined by an image database.
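A minimal sketch of this three-layer structure follows, assuming for illustration that each record simply stores a pair of reference target coordinates (an assumption; the paper does not list the record fields).

```python
# Three-layer path database: frame -> field -> record, each layer an array
# whose index is the sampling serial number.

path_db = []                          # layer 1: frames along the taught path

def add_frame():
    path_db.append([])                # each frame holds an array of fields
    return len(path_db) - 1

def add_field(frame_idx):
    path_db[frame_idx].append([])     # each field holds an array of records
    return len(path_db[frame_idx]) - 1

def add_record(frame_idx, field_idx, rx, ry):
    path_db[frame_idx][field_idx].append((rx, ry))   # reference coordinates

# Teaching run: record one sample into frame 0, field 0.
f = add_frame()
v = add_field(f)
add_record(f, v, 312.0, 148.5)
print(path_db[0][0][0])               # the desired path is defined by this database
```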


There are three coordinate systems: the beacon coordinate system, the vehicle coordinate system and the image coordinate system. On the basis of the principle of coordinate conversion, the path errors can be easily calculated when the vehicle departs from the desired path, as detected from the recorded locations of geometric points derived from the various elements of the existing pattern. At this point it is possible to compare the observed target coordinates (Tx, Ty) with the reference target coordinates (Rx, Ry) and determine the angle error Ea, the lateral position error Ex and the longitudinal position error Ey.
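One plausible way to realize this comparison for a two-target frame is sketched below. The particular decomposition (baseline angle difference for Ea, midpoint displacement components for Ex and Ey) is an assumption chosen to match the screen geometry described earlier, not the paper's exact derivation.

```python
import math

def path_errors(observed, reference):
    """Compute (Ex, Ey, Ea) from one pair of observed and reference targets.

    observed, reference: each a list of two (x, y) target coordinates.
    Illustrative assumption: x is the lateral (across-path) direction and
    y the along-path direction, as the targets drift down the screen.
    """
    (ox1, oy1), (ox2, oy2) = observed
    (rx1, ry1), (rx2, ry2) = reference

    # Angular error: difference between the two baseline orientations.
    ea = math.atan2(oy2 - oy1, ox2 - ox1) - math.atan2(ry2 - ry1, rx2 - rx1)

    # Displacement of the target-pair midpoint relative to the reference.
    ex = (ox1 + ox2) / 2 - (rx1 + rx2) / 2    # lateral error
    ey = (oy1 + oy2) / 2 - (ry1 + ry2) / 2    # along-path error
    return ex, ey, ea
```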



4. LENS DISTORTION AND PIXEL CORRECTION


For vision-based measurement, lens distortion and camera chip pixel distortion corrections have to be considered in order to obtain the real-world position of the vehicle. The target image coordinates supplied by the video tracker require correction for two inherent errors before they can provide accurate input for guiding the vehicle: the lens distortion error and the pixel scale error between the X and Y directions.



The wide-angle lens used in this system has considerable distortion; consequently, the image coordinates of the overhead lights reported by the tracker are not proportional to the true real-world coordinates of those lights. The lens distortion causes the image range Ir to vary as a function of the zenith angle B and, depending on the lens used, the function may be linear, parabolic or trigonometric. The distortion is determined experimentally by laboratory measurements from which the function is fitted. For the lens used here, the distortion correction follows the linear formula shown in Fig. 3:

Ir = K B

where K is a constant derived from the distortion measurement. The correction factor for a specific lens distortion is provided by the lens correction software.


The pixels in the CCD chip have different scales in the X and Y directions, which makes the target coordinates differ along the X and Y axes. It is necessary to divide the Y coordinates by a factor R, the ratio of pixel length to height, to bring the X and Y coordinates to the same scale. The following equations give this conversion:

dx = xi - xc

dy = (yi - yc) / R

where (xc, yc) are the coordinates of the camera center. The image range Ir is then known.

The image coordinates (dx, dy) on the focal plane are converted into the target coordinates (Tx, Ty) at the known height H, as shown in Fig. 3:

Tr = H tan B

Since B = Ir / K,

Tr = H tan (Ir / K)

where Tr is the target range in the real-world coordinate system. Since the azimuth angle C to a target point is invariant between the image and real-world coordinate systems, the target world coordinates (Tx, Ty) can be calculated as:

Tx = dx Tr / Ir

Ty = dy Tr / Ir


In order to transfer the image coordinates from an origin at the camera centerline to the vehicle centerline, the calibration of the three coordinate systems discussed above is necessary to determine the center point of the coordinates.


5. THE MOTION CONTROL MODULES


A programmable two-axis motor controller is used; a non-programmable model may also be chosen, with the necessary programming modules incorporated into the vehicle computer. The controller must selectively control the velocity or position of the two motors, using appropriate feedback such as optical encoders and tachogenerators coupled to either the motors or the wheels, forming a two-degree-of-freedom closed-loop servo system.


The vehicle has two self-aligning front wheels and two driving rear wheels. Velocity-mode control is used when the vision system is guiding the vehicle. In this mode, steering is accomplished by the two driving motors turning at different speeds according to the inputs VL and VR (the left and right wheel velocities) from the computer. The wheel encoders provide local feedback for the motor controller to maintain the velocity profile programmed during the teaching mode.


By speeding up one wheel and slowing the other by an equal amount dV, the motor control strategy steers the vehicle back to its desired path. Since the sampling frequency is high enough, we can treat the system as a continuous feedback system. In practice, a conventional PID compensator can be designed to achieve the desired performance specifications. A simple control formula is used as follows:

dV = k1 Ex + k2 Ea

where Ex and Ea are the outputs of the path error measurement circuit, and k1 and k2 are constants that can be calculated mathematically from significant parameters of the vehicle dynamics and kinematics or determined experimentally.


VL = Vm + dV

VR = Vm - dV

where Vm is the velocity at the centerline of the vehicle. The sign and magnitude of dV determine the turning direction and the turn radius. The control formula brings the vehicle back onto course in the shortest possible time without overshooting. The following block diagram represents the closed-loop transfer function; for the output Ex, it involves PD compensation, as shown in Fig. 4.




Where D is the diameter of the rear drive wheel, w is the distance between two bear wheel, k
n
is a constant related to
motor

output, V
0

is the velocity of the mass center of the vehicle. The right diagram is a simplified unit feedback s
ystem
derived from left diagram. We could select a pair of the optimal values of K
1

and K
2

through calculation or experiments
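For reference, the steering law and wheel-velocity split can be written out as below. The gain values and sign convention are placeholders, since the paper determines k1 and k2 from vehicle dynamics and kinematics or by experiment.

```python
import math

def wheel_velocities(ex, ea, vm, k1=0.8, k2=1.5):
    """Compute (VL, VR) from the steering law dV = k1*Ex + k2*Ea.

    The gain values and the sign convention are placeholders; the paper
    selects k1 and k2 from vehicle dynamics and kinematics or by experiment.
    """
    dv = k1 * ex + k2 * ea          # steering correction
    return vm + dv, vm - dv         # VL = Vm + dV, VR = Vm - dV

# Example: 5 cm lateral error, 2 degrees of heading error, 0.5 m/s cruise speed.
vl, vr = wheel_velocities(0.05, math.radians(2.0), 0.5)
print(round(vl, 3), round(vr, 3))
```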









Fig. 5: TUT-I Omni-vision based autonomous mobile robotic platform





6. CONCLUSION


Photographs of the TUT-I omni-vision based autonomous mobile robotic platform are shown in Fig. 5. The technical points of the omni-vision based autonomous mobile robotic platform include the following properties:


(1) The omni-vision system provides an entire scene using a fisheye lens. It appears useful in a variety of robotics applications. An overall view is always required for safe and reliable operation of a vehicle. While the conventional method of camera scanning appears generically deficient, dynamic omni-vision is considered a definite advantage, particularly for mobile navigation.


(2) The preferred approach is for an unmanned vehicle to guide itself by referring to overhead visual targets such as predefined overhead lights. Overhead lights are universally found in structured environments and are not easily blocked by floor obstacles. The guidance system does not require the installation of any special equipment in the work area. The point-matrix pattern of the beacon, used as an environment map, is very simple to process and understand.


(3) The teach-and-playback operation mode seems appropriate not only for robotic manipulators but also for other vehicles. The vision-based image database, as a teaching path record or a desired path generator, creates a unique technique for expanding robot capability.



ACKNOWLEDGMENTS

The author gratefully acknowledges the support of the K.C. Wong Education Foundation, Hong Kong.




REFERENCES


1. Xiaoqun Liao, Jin Cao, Ming Cao, Tayib Samu, and Ernest Hall, "Computer vision system for an autonomous mobile robot," Proc. SPIE Intelligent Robots and Computer Vision Conference, Boston, November 1998.

2. E.L. Hall, "Fundamental principles of robot vision," Handbook of Pattern Recognition and Image Processing: Computer Vision, Academic Press, New York, pp. 543-575, 1994.

3. Qing-hao Meng, Yicai Sun, and Zuoliang Cao, "Adaptive extended Kalman filter (AEKF)-based mobile robot localization using sonar," Robotica, Vol. 18, pp. 459-473, 2000.

4. Gordon Kao and Penny Probert, "Feature extraction from a broadband sonar sensor for mapping structured environments efficiently," The International Journal of Robotics Research, Vol. 19, No. 10, pp. 895-913, 2000.

5. Minglu Zhang, Shangxian Peng, and Zuoliang Cao, "The artificial neural network and fuzzy logic used for the avoiding of mobile robot," China Mechanical Engineering, Vol. 18, pp. 21-24, 1997.

6. Hong Xu and Zuoliang Cao, "A three-dimension-fuzzy wall-following controller for a mobile robot," ROBOT, Vol. 18, pp. 548-551, 1996.

7. T.I. Samu, N. Kelkar, and E.L. Hall, "Fuzzy logic system for three dimension line following for a mobile robot," Proc. of the Adaptive Distributed Parallel Computing Symposium, Dayton, OH, pp. 137-148, 1996.

8. Zuoliang Cao, "Region filling operations with random obstacle avoidance for mobile robot," Journal of Robotic Systems, 5(2), pp. 87-102, 1998.

9. Zvi Shiller, "Online suboptimal obstacle avoidance," The International Journal of Robotics Research, Vol. 19, No. 5, pp. 480-497, 2000.

10. Alain Lambert and Nadine Le Fort-Piat, "Safe task planning integrating uncertainties and local maps federations," The International Journal of Robotics Research, Vol. 19, No. 6, pp. 597-611, 2000.

11. Liming Zhang and Zuoliang Cao, "Teach-playback based beacon guidance for autonomic guided vehicles," Journal of Tianjin Institute of Technology, Vol. 12, No. 1, pp. 28-31, 1996.

12. Liming Zhang and Zuoliang Cao, "Mobile path generating and tracking for beacon guidance," 2nd Asian Conference on Robotics, 1994.
