Omni-vision based autonomous mobile robotic platform
Jun Hu*, Ernest L. Hall
Tianjin University of Technology, Tianjin, China
Fatwire Corporation, New York, USA
University of Cincinnati, Cincinnati, USA
ABSTRACT

As a laboratory demonstration platform, the TUT-I mobile robot provides various experimentation modules to demonstrate remote control, computer programming, and teach-playback operation. Typically, the teach-playback operation has proved to be an effective solution, especially in structured environments. The path generated in the teach mode and the path correction by real-time path error detection in the playback mode are demonstrated. A vision-based image database is generated as the given path representation in the teaching procedure. An online image-positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omni-vision (omnidirectional vision) system is used for localization and navigation. The omni-vision involves an extremely wide-angle lens, which has the feature that a dynamic omnidirectional image is processed in real time to provide the widest view during movement. The beacon guidance is realized by observing locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.
1. INTRODUCTION

The robotics laboratory is a major research division of Tianjin University of Technology (TUT). It is equipped with an industrial robot work cell and a mobile robotic platform with a series of lab demo system modules. The TUT mobile robotic platform, as a wheeled system, was developed to adapt to current education and research needs. The emphasis of the design concepts is placed on the following considerations.
(1) Mobile robotic devices promise a wide range of applications for industrial automation in which stationary robots cannot produce satisfactory results because of limited working space. Such applications include material transfer and tool handling in factories, automatically piloted carts for delivering loads such as mail around office buildings, and service in hospitals. Most of the advanced manufacturing systems use some kind of computer-controlled transport.

*phone 86 22 23688585; fax 86 22 23362948; Tianjin University of Technology, Tianjin, China
phone 1 516 3289473; fax 1 516739; Fatwire Co., 330 Old Country Road, Suite 207, Mineola, New York, USA 11501
phone 1 513 5562730; fax 1 513 5563390; Univ. of Cincinnati, Cincinnati, OH 45242, USA
(2) The facilities of the laboratory support the education of Robotics and Mechatronics, which includes a series of primary and typical robotics courses such as kinematics, dynamics, motion control, robot programming, and machine vision.
(3) The laboratory experiments are major resources for studying and researching future technologies. Some particular application demonstrations are necessary to create basic concepts and approaches in new development areas, to cope with the needs for research experience and capability.
The guidance technique is one of the most common and important factors in robotics research. The use of a fixed-path guidance method is currently the most common and reliable technique. An alternative method uses a signal wire buried in the floor with an on-board inductive coil, or a reflective stripe painted on the floor with an on-board photoelectric device. In the laboratory, a magnetic strip is paved on the floor and the vehicle is equipped with a magnetic sensor to direct the vehicle to follow it. A recently preferred approach employs laser beam reflection or digital mapping to navigate a free-ranging vehicle.
It is feasible and desirable to use a sensor-driven, computer-controlled and software-programmed automatic system for AGV development. A series of special sensors may be employed for mobile robots. Various methods have been tested for mobile control using ultrasonic, infrared, optical and laser-emitting units. Data fusion, fuzzy logic and machine intelligence technologies have been developed.
Machine vision capability may have a significant effect on many robotics applications. An intelligent machine, such as an autonomous mobile robot, must be equipped with a vision system to collect vision information, which is used to adapt to its environment. With regard to the limitation of the view scope, the current imaging system affects the performance of the robotics system. The development of the omni-vision system appears to have more advantages. The omnidirectional vision navigation program is an example.
The mobile robotic platform, as a lab demo system, is featured by its hybrid structure (in the sense of a variety of actuators and sensors), a flexible, easily accessible, changeable controller and some advanced features. Both laboratory hardware modules and advanced computer software packages provide an appropriate environment, supplemented by graphics simulation system support. The two laboratory phases form an experimentation base from the fundamentals of robot operation through integration with new control technologies. The advanced experiment is divided into two phases as follows:

Phase one: robotic control, navigation, obstacle avoidance, sensory data fusion.
Phase two: computer vision, image processing, path tracking and planning, machine intelligence.
The TUT-I mobile robotic platform can be operated through four modes: human remote control, programmed control, teach-playback, and automatic path planning in structured environments. Typically, the main function is the teach-playback operation for indoor transportation. Autonomous mobile robots can be considered as automated, path-planning, free-ranging vehicles. However, the cooperation of autonomous capability with a supervising operator appears to be the engineering compromise that provides a framework for the applications of mobile robots, especially in structured environments. The operation mode of teaching path and playback is an effective solution not only for
manipulator arms but also for vehicles. The vehicle records the beacon data progressively in an on-board memory during a manually given teaching-mode trip along a desired course. On subsequent unmanned trips, the vehicle directs itself along the chosen course by observing the beacon and comparing the data. The steering correction will allow the vehicle to follow the taught course automatically. The path generation by the teaching mode and the path correction by real-time path error detection in the playback mode are demonstrated by the TUT-I robotic system. The vision-based image database is generated as the given path representation in the teaching procedure. The image-processing algorithm is performed for path following. The path tracking involves an online positioning method.
Advanced sensory capability is employed to provide environment perception. A unique omni-vision system is used for localization and navigation. The omni-vision involves an extremely wide-angle lens with a CCD camera, which has the feature that a dynamic omnidirectional image can be processed in real time. The beacon guidance is realized by observing locations of points derived from overhead features such as a predefined light array in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed to avoid obstacles. The other sensors are utilized for system reliability. A multi-sensory data fusion technique is developed for collision-free trajectory piloting.
The TUT-I mobile platform can be decomposed into five distinct subsystems: locomotion with control/man-machine interface, sensors for obstacle avoidance, omni-vision module with image processor, navigator with central computer, and power source units. This paper provides a concise series of descriptions of path planning, obstacle avoidance, control strategies, navigation, vision systems, ranging systems and various application modules.
2. CONFIGURATION OF THE OMNIDIRECTIONAL VISION NAVIGATION UNITS
The TUT-I is an unmanned vehicle, which is comprised of two driven wheels and two front free-swiveling wheels. The chassis contains the necessary elements to power, propel and steer the other on-board units. Near the chassis, a number of ultrasonic ranging sensors are mounted. The ultrasonic devices act to sense the presence of objects within the vehicle's path and prevent collision. As a further precaution, safety switches will stop the vehicle if it contacts an obstacle.
Omni-vision means that an entire hemispherical field of view is seen simultaneously. This is realized by means of an extremely wide-angle optical imaging device, called a fisheye lens, with a CCD camera. Omni-vision guidance is a new and unique navigation technique. The omni-vision appears to have a definite significance in navigation applications for various autonomous guided vehicles. The omni-vision automated guiding system referenced by overhead light consists of the following five components, shown in Fig. 1:
(1) Fisheye lens and CCD camera with an automated electric shutter. The electronic shutter control system uses a single-chip microcomputer and is suitable for the 60 CCD camera. When the illumination changes, the system still enables the camera to produce a better output image.
(2) A camera stand with 5-coordinate adjustment.
(3) Tracker: an image-processing computer for tracking of the targets.
(4) An image distortion corrector.
(5) Navigator, which includes three function modules: Path Generator, Path Error Detector and Corrector.
The system performs path planning and tracking in the playback mode by referring to overhead lights. The system guides itself by referring to overhead visual targets that are universally found in buildings. A group of predefined overhead lights is usually selected as the landmark; it is not necessary that they have any special features at all. Since at least two points can determine the vehicle's position and orientation, the beacon group should consist of two lights as guiding targets in each frame of the image. However, the algorithms may be adjusted to a specified number of targets, where the number is greater than two. This would produce a more accurate and robust system. Even if a target is lost from time to time, the vehicle would still have at least the minimum number of targets for guidance.
The tracker is a real-time digital image processor. Although multiple targets can be multiplexed through a single-target tracker, a multi-target tracker is used to eliminate the need for the multiplexing software. Up to four image windows, or gates, can be used to track the apparent movement of the overhead lights simultaneously. The beacon tracker is the major on-board hardware. It outputs the coordinates of a pair of lights that are shown on the monitor screen and places a tracking gate around each of them.
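The gate-update step performed by such a tracker can be sketched in software as follows. This is an illustrative model only, not the TUT-I tracker hardware: the gate half-size, the brightness threshold, and all function names are assumptions. Each gate is re-centered on the centroid of the bright pixels (the overhead light) found inside it.

```python
import numpy as np

GATE = 12  # half-size of a square tracking gate, in pixels (assumed value)

def update_gate(frame, cx, cy, half=GATE, thresh=200):
    """Re-center one tracking gate on the bright-light centroid inside it.

    frame : 2-D uint8 image; (cx, cy) : current gate center (col, row).
    Returns the new (cx, cy), or the old one if no bright pixel is found.
    """
    h, w = frame.shape
    x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
    y0, y1 = max(cy - half, 0), min(cy + half + 1, h)
    win = frame[y0:y1, x0:x1]
    ys, xs = np.nonzero(win >= thresh)       # pixels belonging to the light
    if len(xs) == 0:
        return cx, cy                        # target lost: keep last position
    return x0 + int(round(xs.mean())), y0 + int(round(ys.mean()))

def track(frame, gates):
    """Up to four gates, one per overhead light, updated independently."""
    return [update_gate(frame, cx, cy) for cx, cy in gates[:4]]

# Synthetic frame with two "lights" centered at (40, 30) and (90, 70).
frame = np.zeros((120, 160), dtype=np.uint8)
frame[28:33, 38:43] = 255
frame[68:73, 88:93] = 255
print(track(frame, [(37, 27), (92, 72)]))  # gates snap onto the two lights
```

When a light drifts toward the gate boundary, the centroid pulls the gate along with it, which is what lets the gates follow the apparent downward motion of the lights as the vehicle moves.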
3. TEACHING AND PLAYBACK OPERATION MODE BASED ON IMAGE DATABASE
For industrial manipulators, the teaching and playback operation mode through a teach pendant handled by an operator is a traditional technique. For an automated guided vehicle, it appears to be an unusual method, but it is still an effective solution. The vision-based image database is employed as the desired path generator. In the teaching mode, the operator manually causes the vehicle to move forward along the desired path at a selected speed. The selected targets with their gates will move downward on the monitor screen as the vehicle passes underneath the overhead lights. The data are then recorded in the reference database as described in this section. When a target is nearly out of the field of view, the frame will be terminated and a new pair of targets will be selected. The process will be repeated. Some combination of vision frames and position tracking will continue until the desired path has been generated and the reference data for that track are recorded in memory. In the playback mode thereafter, the vehicle automatically maintains itself on the desired path by comparing the analogous record with the vehicle's path of movement; steering corrections are made to bring the path errors (Ex, the lateral position error; Ey, the error along the path; and Ea, the angle orientation error) to zero, thereby keeping the vehicle on the intended course. As the value of Ey diminishes and approaches zero, the next reference frame will be called up. The cycle will be repeated. This procedure will cause the vehicle to follow the desired path.
A database with the tree structure shown in Fig. 2 is built up to record the data stream. The vision-based image database is a three-layer data structure. Each layer's structure is a group of array pointers, and each array element points to the next layer's node. The array index is the serial number of the sampling, and the three layers represent frames, fields and records respectively. The method creates an exclusive property: the desired path is defined by an image sequence.
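The frame/field/record layering can be sketched with nested arrays as below. Only the three-layer structure and index-as-serial-number convention come from the text; the record contents (target and reference coordinates) and all names are assumptions for illustration.

```python
# Minimal sketch of the three-layer (frame -> field -> record) path database.
class PathDatabase:
    def __init__(self):
        self.frames = []                       # layer 1: array of frame nodes

    def new_frame(self):
        self.frames.append([])                 # layer 2: fields of this frame
        return len(self.frames) - 1            # index doubles as serial number

    def add_field(self, frame_no):
        self.frames[frame_no].append([])       # layer 3: records of this field
        return len(self.frames[frame_no]) - 1

    def add_record(self, frame_no, field_no, target_xy, reference_xy):
        # Record contents are hypothetical; the paper specifies only layering.
        self.frames[frame_no][field_no].append(
            {"target": target_xy, "reference": reference_xy})

    def lookup(self, frame_no, field_no, record_no):
        """Retrieve one record by its three sampling serial numbers."""
        return self.frames[frame_no][field_no][record_no]

db = PathDatabase()
f = db.new_frame()
fld = db.add_field(f)
db.add_record(f, fld, (12.0, 34.0), (10.0, 30.0))
print(db.lookup(0, 0, 0)["target"])            # -> (12.0, 34.0)
```

Because every node is reached purely by array index, replaying the taught path reduces to walking the frame indices in order, which matches the sequential playback described above.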
There are three coordinate systems: the beacon coordinate system, the vehicle coordinate system and the image coordinate system. On the basis of the principle of coordinate conversion, the path errors can be easily calculated when the vehicle departs from the desired path, as detected from the recorded locations of geometric points derived from the various elements of the existing pattern. At this point it is possible to compare the observed target coordinates (Tx, Ty) with the reference target coordinates (Rx, Ry) and determine the angle error Ea, the lateral position error Ex and the longitudinal position error Ey.
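One way to compute the three errors from a pair of observed and reference targets is sketched below. The paper defines Ex, Ey and Ea but not their exact computation, so the along/across decomposition relative to the reference beacon direction, and the sign conventions, are assumptions.

```python
import math

def path_errors(t1, t2, r1, r2):
    """Compare the observed beacon pair (t1, t2) with the recorded reference
    pair (r1, r2), all in vehicle coordinates, and return (Ex, Ey, Ea)."""
    # Angle orientation error Ea: rotation between the two beacon segments.
    a_obs = math.atan2(t2[1] - t1[1], t2[0] - t1[0])
    a_ref = math.atan2(r2[1] - r1[1], r2[0] - r1[0])
    ea = a_obs - a_ref
    # Midpoint displacement between observed and reference beacon pairs.
    mx = (t1[0] + t2[0]) / 2 - (r1[0] + r2[0]) / 2
    my = (t1[1] + t2[1]) / 2 - (r1[1] + r2[1]) / 2
    # Resolve it along the reference direction (Ey) and across it
    # (Ex, taken positive to the left of the path here).
    ex = -mx * math.sin(a_ref) + my * math.cos(a_ref)
    ey = mx * math.cos(a_ref) + my * math.sin(a_ref)
    return ex, ey, ea

# Path runs along +y; the observed pair is shifted 0.5 to the right of it,
# so Ex is nonzero while Ey and Ea stay near zero.
print(path_errors((0.5, 0.0), (0.5, 1.0), (0.0, 0.0), (0.0, 1.0)))
```

This also illustrates why at least two targets per frame are needed: a single point fixes the midpoint displacement but leaves the orientation error Ea undetermined.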
4. LENS DISTORTION AND PIXEL CORRECTION
For vision-based measurement, in order to obtain the real-world position of the vehicle, the lens distortion and camera-chip pixel distortion corrections have to be considered. The target image coordinates supplied by the video tracker require correction for two inherent errors before they can serve as accurate input for guiding the vehicle. One is the lens distortion error and the other is the pixel direction error.
The wide-angle lens used in this system has considerable distortion, and consequently the image coordinates of the overhead lights as reported by the tracker are not proportional to the true real-world coordinates of those lights. The lens distortion causes the image range Ir to vary as a function of the zenith angle B; depending on the lens used, the function may be linear, parabolic, etc. The distortion is determined experimentally by laboratory measurements, from which the function is determined. For the lens used herein, the distortion correction refers to a linear formula, as shown in Fig. 3:

Ir = K B

where K is a constant derived from the distortion measurement. The correction factor for a specific lens distortion is provided by the lens correction software.
The pixels in the CCD chip have different scales in the X and Y directions. This causes the target coordinates to differ between the X and Y axes. It is necessary to divide the Y coordinate by a factor R, which is the ratio of pixel length to height. This brings the X and Y coordinates to the same scale. The following equations give this conversion:

dx = xi - xc
dy = (yi - yc) / R

where (xc, yc) are the origin coordinates of the camera center. The image range Ir is then known from (dx, dy).
The image coordinates (dx, dy) on the focal plane are converted into the target coordinates (Tx, Ty) at the known height H, as shown in Fig. 3:

Tr = H tan B

Since it is known that B = Ir / K,

Tr = H tan(Ir / K)

where Tr is the target range in the real-world coordinate system. Since the azimuth angle C to the target point is invariant between the image and real-world coordinate systems, the target world coordinates (Tx, Ty) can be calculated as:

Tx = dx Tr / Ir
Ty = dy Tr / Ir
In order to transfer the image coordinates from an origin at the camera centerline to the vehicle centerline, the calibration of the three coordinate systems as discussed above is necessary to determine the center point of the coordinates.
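The pixel-to-world conversion of this section can be collected into one function, as below. The formulas (dx, dy scaling, the linear model B = Ir / K, and Tr = H tan B) come from the text; all numeric calibration values (K, R, H, and the camera center) are hypothetical placeholders that would come from the laboratory distortion measurement.

```python
import math

# Assumed calibration values, for illustration only.
K = 80.0          # lens constant: pixels per radian of zenith angle (Ir = K*B)
R = 1.2           # pixel length-to-height ratio
H = 3.0           # height of the overhead lights above the lens, in meters
XC, YC = 320.0, 240.0   # camera-center origin in pixel coordinates

def image_to_target(xi, yi):
    """Convert tracker pixel coordinates (xi, yi) into real-world target
    coordinates (Tx, Ty), following the corrections of Section 4."""
    dx = xi - XC               # pixel-scale correction
    dy = (yi - YC) / R
    ir = math.hypot(dx, dy)    # image range from the camera center
    if ir == 0.0:
        return 0.0, 0.0        # target exactly on the optical axis
    b = ir / K                 # zenith angle from the linear lens model
    tr = H * math.tan(b)       # target range on the overhead plane
    # The azimuth angle is invariant, so (dx, dy) is simply rescaled by Tr/Ir.
    return dx * tr / ir, dy * tr / ir

print(image_to_target(400.0, 240.0))   # a light 80 pixels right of center
```

With these numbers, 80 pixels of image range corresponds to a zenith angle of 1 rad and a target range of H tan(1) ≈ 4.67 m, showing how strongly the fisheye compresses the periphery of the view.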
5. THE MOTION CONTROL MODULES
A programmable two-axis motor controller is used. A non-programmable model may also be chosen; in that case the necessary programming modules can be incorporated into the vehicle computer. The controller must selectively control the velocity or position of the two motors, utilizing appropriate feedback, such as from optical encoders and coupled tachogenerators on either the motors or the wheels, which involves a two-degree-of-freedom closed-loop servo system.
The vehicle has two self-aligning front wheels and two driving rear wheels. Velocity-mode control is used when the vision system is guiding the vehicle. In this mode, steering is accomplished by the two driving motors turning at different speeds due to the inputs of VL and VR (the left and right wheel velocities) from the computer. The wheel encoders provide local feedback for the motor controller to maintain the velocity profile programmed during the teaching mode. By speeding up one wheel and slowing the other equally by an amount dV, the motor control strategy steers the vehicle back to its desired path. Since the sample frequency is large enough, we can consider the system as a continuous feedback system. In practice, a conventional PID compensator can be designed to achieve the desired performance specifications. A simple control formula is used as follows:
dV = k1 Ex + k2 Ea

where Ex and Ea are the outputs from the path error measurement circuit, and k1 and k2 are constants which can be mathematically calculated from significant parameters of the vehicle dynamics and kinematics or be determined experimentally.

VL = Vm + dV
VR = Vm - dV
where Vm is the velocity at the centerline of the vehicle. The sign and the magnitude of dV determine the turning direction by giving a turn radius. The control formula will bring the vehicle back onto course in the shortest possible time without overshooting. The following block diagram represents the closed-loop transfer function. For the output Ex, it involves a PD compensation, shown in Fig. 4, where D is the diameter of the rear drive wheel, w is the distance between the two rear wheels, k is a constant determined by the system, and Vm is the velocity of the mass center of the vehicle. The right diagram is a simplified unit-feedback system derived from the left diagram. We could select a pair of optimal values of k1 and k2 through calculation or experiments.
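The steering law above can be sketched directly in code. Only the formulas dV = k1 Ex + k2 Ea, VL = Vm + dV and VR = Vm - dV come from the text; the gain values and the sign convention for the errors are illustrative assumptions.

```python
# Hypothetical gains; in practice k1, k2 would be derived from the vehicle
# dynamics/kinematics or tuned experimentally, as the text notes.
K1, K2 = 0.8, 1.5

def wheel_speeds(vm, ex, ea, k1=K1, k2=K2):
    """Return (VL, VR) for centerline speed vm and path errors (ex, ea)."""
    dv = k1 * ex + k2 * ea          # steering correction dV = k1*Ex + k2*Ea
    return vm + dv, vm - dv         # speed one wheel up, slow the other

vl, vr = wheel_speeds(0.5, ex=0.1, ea=0.05)   # vehicle off and angled off path
# dv is about 0.155: one wheel speeds up, the other slows by the same amount,
# so the mean (centerline) speed Vm is unchanged while the vehicle turns.
```

Keeping the correction antisymmetric is the key design point: the turn radius is set by dV alone, and the taught velocity profile Vm is preserved during the correction.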
Fig. 5: TUT-I omni-vision based autonomous mobile robotic platform
The photographs of the TUT-I omni-vision based autonomous mobile robotic platform are shown in Fig. 5. The technical points of the application for the omni-vision based autonomous mobile robotic platform include the following:
(1) The omni-vision provides an entire scene using a fisheye lens. It appears useful in a variety of applications for robotics. An overall view is always required for the safe and reliable operation of a vehicle. While the conventional method with camera scanning appears generically deficient, dynamic omni-vision is considered a definite advantage, particularly for mobile navigation.
(2) The preferred approach is that an unmanned vehicle guides itself by referring to overhead visual targets such as predefined overhead lights. These are universally found in structured environments and are not easily obscured by floor obstacles. The guidance system does not require the installation of any special equipment in the work area. The point-matrix pattern of the beacon as an environment map is very simple to process and understand.
(3) The teach-playback operation mode seems appropriate not only for robotic manipulators but also for vehicles as well. The vision-based image database, as a teaching path record or a desired path generator, creates a unique technique to expand robot capability.
ACKNOWLEDGMENTS

The author gratefully acknowledges the support of the K. C. Wong Education Foundation, Hong Kong.
REFERENCES

1. Xiaoqun Liao, Jin Cao, Ming Cao, Tayib Samu, and Ernest Hall, "Computer vision system for an autonomous mobile robot," Intelligent Robots and Computer Vision, Boston, November 1998.
2. "Fundamental principles of robot vision," Handbook of Computer Vision, Academic Press, New York, pp. 543.
3. hao Meng, Yicai Sun, and Zuoliang Cao, "Adaptive extended Kalman filter (AEKF) based mobile robot localization using sonar."
4. Gordon Kao and Penny Probert, "Feature extraction from a broadband sonar sensor for mapping structured environments," International Journal of Robotics Research.
5. Shangxian Peng and Zuoliang Cao, "The artificial neural network and fuzzy logic used for the obstacle avoiding of mobile robot," China Mechanical Engineering.
6. Hong Xu and Zuoliang Cao, "A three-dimensional line following controller for a mobile robot."
7. T. I. Samu, N. Kelkar, and E. L. Hall, "Fuzzy logic system for three dimensional line following for a mobile robot," Adaptive Distributed Parallel Computing Symposium, Dayton, OH, pp. 137.
8. Zuoliang Cao, "Region filling operations with random obstacle avoidance for mobile robot."
9. Zvi Shiller, "suboptimal obstacle avoidance," The International Journal of Robotics Research.
10. Alain Lamber and Nadine Le Fort-Piat, "Safe task planning integrating uncertainties and local maps federations," The International Journal of Robotics Research, 611, 2000.
11. Liming Zhang and Zuoliang Cao, "Teach-playback based beacon guidance for autonomic guided vehicle."
12. Liming Zhang and Zuoliang Cao, "Mobile path generating and tracking for beacon guidance," 2nd Asian Conference on Robotics, 1994.