
A VISUAL SERVOING ARCHITECTURE FOR CONTROLLING ELECTROMECHANICAL SYSTEMS

R. Garrido, A. Soria, P. Castillo, I. Vásquez

CINVESTAV-IPN, Departamento de Control Automático
Av. IPN No. 2508, C.P. 07360, México, D.F., MEXICO
fax: (52) 57 47 70 89
e-mail: garrido@ctrl.cinvestav.mx


Keywords.- Direct visual servoing. Low-cost visual architecture. Electromechanical control system. Planar robot control.


Abstract.- Most of the architectures employed in visual servoing research use specialised hardware and software. The high cost of the specialised hardware and the engineering skills required to develop the software complicate the set-up of visually controlled systems. The present-day costs of the key elements of computer vision, such as cameras, frame grabbers and computers, continue to fall, making low-cost visual architectures feasible. In this paper we present a visual servoing architecture for controlling electromechanical systems based on standard off-the-shelf hardware and software. The proposed scheme allows control of a wide class of systems, including robot manipulators. The programming environment is based on MATLAB/Simulink, which makes it possible to take advantage of the graphical programming facilities of Simulink. Two experimental evaluations are presented: a linear motion cart controlled using direct visual servoing, and a planar robot arm controlled in a look and move framework.


I.- INTRODUCTION

The sense of sight provides most of the information received by a human being, allowing him to interact with his environment and to survive. In the case of a robot manipulator, computer vision is a useful sensor since it mimics the human sense of sight and allows non-contact measurement of the environment, a feature which could give it more versatility with respect to robots endowed only with optical encoders and limit switches. This feature could potentially allow a robot to deal with problems related to flexibility and backlash in the robot transmission system, and with partial knowledge of the robot kinematics. The task in visual control of robots is to use visual information to control the pose of the robot end-effector relative to a target object or a set of target features. The first works dealing with computer vision applied to robot manipulators appeared in the seventies [1], [5]. However, technical limitations at that time prevented an important development of this subject. During the eighties and nineties, the availability of more powerful processors and the development of digital electronics gave a strong impulse to visual control of robots and mechanical systems. Examples of works involving visual control of robot manipulators are [3], [6], [7], [8], [10], [11], [12], [13]. For other mechanical systems see for example [2], where visual control is applied to an inverted pendulum, and [9], where a flexible arm is controlled using a vision system.


In most of the references cited above, the experiments were executed using specialised hardware and software, which may increase the time required to set up an experiment. One of the first computer vision architectures was the BVV1 [2], which contained up to 15 Intel 8085A 8-bit microprocessors. In the same paper, another architecture, the BVV2, was also proposed. The main difference between the aforementioned architectures is that the latter employs Intel 8086 16-bit processors. Almost all the software was written in machine code and the architectures were tailored to suit the computing needs of each experiment. Both systems are able to work at a visual sampling period of 17 ms. Unfortunately, the authors do not mention whether other kinds of interfaces are available with these systems, e.g. analog-to-digital converters, interfaces for optical encoders, etc. In the case of visual control of robot manipulators, Corke [1] proposed an architecture in which image processing is done using a Datacube card and visual control is performed through a VME-based computer. The visual sampling period was 20 ms and custom-made software, ARCL, was employed for programming. In some experiments the robot controller shares the control law execution. In Hashimoto and Kimura [4], vision processing was done using a Parsytec card mounted in a personal computer, and another personal computer hosting a transputer network was employed for control. The vision sampling period was 85 ms and the period for joint robot control was 1 ms. Papanikolopoulos and Khosla [8] use an architecture in which an IDAS/150 board performs image processing and 6 Texas Instruments TMS320 DSP processors control the joints of a CMU DDArm II robot. All the cards were connected through a VME bus hosted in a Sun workstation. The vision sampling period was 100 ms and the robot controller has a period of 3.33 ms. The whole system runs under the Chimera-II real-time software environment. Another interesting architecture is proposed in [13], where a custom-made board based on a Texas Instruments TMS320C15 fixed-point processor is employed for image processing. The sampling period for image acquisition and processing was 16.7 ms. Control at the visual level is executed using a network of RISC processors with a sampling period of 7.3 ms. Joint control was left to the robot controller. A serial protocol is employed to connect the robot to the personal computer hosting the RISC and image processors. In [10], a personal computer hosts a Texas Instruments TMS320C31 DSP processor for joint control of a direct-drive two-degree-of-freedom robot and a Data Translation DT3851-4 card for image processing. The sampling period was 2.5 ms for joint control and 50 ms for the visual feedback part.

From the above non-exhaustive review, it can be concluded that in most cases data processing is executed using specialised and sometimes high-cost boards, and it is not always possible to modify the image processing algorithms since in some boards these algorithms are implemented in hardware. The control part also relies on specialised hardware such as transputers and DSPs. It is also worth remarking that programming is done using machine code or the C language. This may be adequate for researchers with good programming skills, but other users would need some time to acquire a good level of familiarity with the system and to set up an experiment. Motivated by the remarks made above, in this work we propose a visual servoing architecture for controlling electromechanical systems based on personal computers and standard off-the-shelf hardware and software. This paper is organised in the following way. Section II describes the visual servoing architecture. In Section III, the proposed architecture is tested through two experiments, namely direct visual servoing of a linear motion cart and look and move control of a two-degree-of-freedom robot arm. Finally, in Section IV some concluding remarks are given and future work is discussed.


II.- VISUAL SERVOING ARCHITECTURE.

a. Overview.

From the review presented in the introduction, it is clear that in most visual servoing architectures the image processing and control parts are executed using separate processors. This philosophy is reasonable if one takes into account the computing burden associated with image processing. In some instances, the visual servoing algorithms in the robot control part may also require significant computing resources. However, as pointed out above, specialised hardware is employed for these aims. In order to benefit from the advantages of this philosophy and, at the same time, avoid the excessive costs associated with highly specialised components, it is attractive to integrate off-the-shelf hardware and software in a single architecture. This allows, on the one hand, a user-friendly control algorithm programming environment and, on the other hand, a performance comparable with the architectures proposed in the visual servoing literature. In our case, the proposed architecture achieves these goals by separating the visual servoing task into 3 different components, each having a specific function (see Figure 1):



- A programming, algorithm development and data logging environment component.
- A control component that can interact with the vision component to fulfil the control goals.
- A vision component capable of perceiving the environment.
b. Programming, algorithm development and data logging environment.

The computer devoted to programming, development and data logging, which we will call in the sequel the Server, hosts MATLAB/Simulink from The MathWorks Inc., Wincon from Quanser Consulting Inc. and MS Visual C++ software, all running under Windows 95. Simulink is devoted to programming the control algorithms, and the graphical code produced under Simulink is compiled. Wincon (Server part) downloads the code to the real-time control computer, which we will call the Client. Once the code has been downloaded, it is possible from the Server to start and stop the experiment, to change controller parameters on the fly and to log data from the Client. Interconnection between the Server and the Client is made through an Ethernet network. Further details can be found in [15]. In our set-up, the Server is a Pentium computer running at 200 MHz.


c. Real-time control.

For the Client, we use a computer with a Pentium processor running at 350 MHz under Windows 95. Wincon is employed to run the code generated at the Server. Data acquisition is performed using a Servotogo S8 card, which is able to handle optical encoders and analog voltage inputs and outputs.
Figure 1: Block diagram for the proposed visual servoing architecture. The programming and data logging computer (MATLAB/Simulink, Wincon Server, MS Visual C++) communicates over Ethernet with the real-time control computer (Wincon Client), which drives the electromechanical system through the Servotogo S8 data acquisition card (optical encoders, A/D and D/A) on the ISA bus. The image acquisition and processing computer (Borland C) receives RS-170 video from the camera through the National Instruments PCI-1408 frame grabber on the PCI bus and sends its measurements to the real-time computer over RS-232.

The sampling time depends on the processing power of the computer. In the experiments we set the sampling frequency at 1 kHz.


d. Image acquisition and processing.

Image acquisition and processing is performed using a Pentium-based computer running at 450 MHz under Windows NT. For image acquisition we use a Pulnix camera, model 9710, which outputs a video signal in the RS-170 standard. The image is converted to digital data using a frame grabber from National Instruments, model PCI-1408. Image processing was done using the Borland C language and consists of image thresholding and detection of the object position through the determination of its centroid. The object is the part of the electromechanical system which needs to be controlled, for example the tip of a robot arm. When the centroid of the object of interest is computed, it is transmitted to the Client via an RS-232 link at 115.4 kbaud. Visual information is available in a Simulink diagram as a block, in the same way as an optical encoder or a digital-to-analog converter. It is worth noting that once the program for image acquisition and processing is launched, the user does not need to take care of it. The visual sampling rate is 50 Hz (20 ms) and is a function of the time required for image acquisition (16.7 ms), processing (2.3 ms) and sending the data to the Client (1 ms).
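The image processing stage described above (thresholding followed by centroid computation) can be sketched as follows. This is an illustrative reconstruction, not the authors' Borland C code; the row-major image layout and the threshold value are assumptions.

```c
#include <stddef.h>

/* Compute the centroid (in pixels) of all pixels darker than `threshold`
 * in a grayscale image stored row-major. Returns 0 on success, -1 if no
 * pixel is below the threshold (object not visible). */
int dark_centroid(const unsigned char *img, int width, int height,
                  unsigned char threshold, double *cx, double *cy)
{
    long count = 0, sum_x = 0, sum_y = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            /* Thresholding: a pixel belongs to the object if it is dark,
             * matching the black-circle-on-white set-up of the paper. */
            if (img[(size_t)y * width + x] < threshold) {
                sum_x += x;
                sum_y += y;
                ++count;
            }
        }
    }
    if (count == 0)
        return -1;
    *cx = (double)sum_x / count;  /* centroid abscissa in pixels */
    *cy = (double)sum_y / count;  /* centroid ordinate in pixels */
    return 0;
}
```

In the architecture described above, the resulting pair (cx, cy) is what would be sent over the RS-232 link to the Client every 20 ms.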


e. Platform development.

In Table 1 we distinguish the key elements of the architecture. We have divided them as follows: hardware, standard software and non-standard software. Here, we regard as non-standard software the drivers or programs developed to integrate the standard hardware and software elements to achieve the proposed visual servoing scheme.



Development-Data Logging:
- Hardware: PC, Ethernet card.
- Standard software: MATLAB/Simulink, Wincon Server, C/C++ compiler.

Vision:
- Hardware: PC, camera, frame grabber, RS-232.
- Standard software: frame grabber setup software.
- Non-standard software: RS-232 communication in C/C++, image processing algorithm in C/C++.

Real-Time Control:
- Hardware: PC, I/O card, Ethernet card.
- Standard software: C/C++ compiler, Wincon Client.
- Non-standard software: I/O RTW Servotogo card driver, RS-232 RTW port driver.

Table 1: Key elements of the proposed architecture.

III.- EXPERIMENTAL EVALUATION

Two experimental evaluations were made to test the platform. In the first experiment, a linear motion cart is controlled under the direct visual servoing philosophy [12], which means that the measurement used for feedback comes only from the vision subsystem. In the second experiment, a two-degree-of-freedom robot is controlled using the look and move philosophy [12]; here the control law uses measurements from both the vision subsystem and the robot optical encoders. In both cases we employed image-based controllers, which means that position measurements are in pixels, thus avoiding calibration problems. See [1] for further details on image-based control. Programming of the control laws took less than an hour in each case. Moreover, tuning was easy because of the capability, offered by Wincon, of changing parameters on the fly in real time.


a. Linear motion cart.

The cart consists of a solid aluminium body driven by a DC motor; it slides along a stainless steel shaft on linear bearings. The prototype was covered with white stripes and a black circle of 7 cm diameter was attached to the cart. In this experiment we used a discrete PID controller in a direct visual servoing scheme (see Fig. 2). Note that the only information employed for feedback comes from the vision system. The experimental result is depicted in Figure 3. The desired output in the x axis is a square wave signal of 30 pixels amplitude and a frequency of 0.05 Hz. This result shows several typical features found in electromechanical systems controlled using visual information. Firstly, note that there is an overshoot in the response. This behaviour is due to the fact that the simple PID controller employed in the experiment does not take into account the time delay introduced by the visual measurement; hence, increasing the proportional gain increases the overshoot. On the other hand, decreasing the gain avoids overshoot but increases the steady-state error. This problem may be alleviated by using integral control, but high integral gains produce oscillatory behaviour. Another factor that affects the steady-state position error is the quantization introduced by the camera, which may be coarser than the quantization introduced by an optical encoder.
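A discrete PID law of the kind used in this experiment can be sketched as below; this is a generic textbook form, and the gains and sampling period shown are placeholders, not the values used by the authors. Note that in this architecture the controller runs at the Client's 1 kHz rate while the vision subsystem refreshes the centroid only every 20 ms, so between visual samples the controller simply reuses the last received measurement.

```c
typedef struct {
    double kp, ki, kd; /* PID gains */
    double T;          /* controller sampling period in seconds */
    double integral;   /* accumulated integral of the error */
    double prev_err;   /* error at the previous step */
} pid_ctrl;

/* One step of a discrete PID controller. `err` is the position error in
 * pixels (desired minus measured centroid); the return value is the
 * control signal applied to the DC motor through the D/A converter. */
double pid_step(pid_ctrl *c, double err)
{
    c->integral += err * c->T;                 /* rectangular integration */
    double deriv = (err - c->prev_err) / c->T; /* backward difference */
    c->prev_err = err;
    return c->kp * err + c->ki * c->integral + c->kd * deriv;
}
```

The derivative term acting on a delayed, quantized pixel measurement is what makes high proportional gains produce the overshoot discussed above.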


Figure 2. Block diagram for the experiment with the linear motion cart.





Figure 3. Experimental result for the linear motion cart.

b. Robot arm.

The robotic system considered in this experiment is composed of an in-house-built planar robot manipulator with two degrees of freedom moving in the vertical plane. In this approach, the vision system provides the centroid position measured directly on the image plane in pixels. In order to obtain a good binary image, a metallic circle at the robot tip was painted black and the rest of the robot was painted white. Figure 4 depicts a block diagram of the closed-loop control system used for the robot. In this case we use a look and move structure because optical encoders are employed for measuring the joint position q, which is subsequently used for estimating the joint speed numerically using a high-pass filter. The PD plus gravity compensation control law is
τ = Jᵀ(q) R(θ) Kp x̃ − Kv q̇ + g(q)    (1)

where q is the vector of joint displacements, q̇ its time derivative, τ is the n × 1 vector of applied joint torques, J(q) is the Jacobian matrix, Kp and Kv are linear 2 × 2 symmetric positive definite matrices, R(θ) is the rotation matrix generated by rotating the camera about its optical axis by θ radians, g(q) is the gravitational torque vector and x̃ is the vector error in the image plane in pixels. Further details about this algorithm can be found in [4]. In the experimental set-up, the centre of the robot first axis coincides with the origin of the image plane and the camera was aligned so that θ = 0; then R(θ) may be considered the identity matrix. Two experiments were performed. In the first case, the set point for the x axis was a square wave signal of 30 pixels amplitude centred at 85 pixels, with a frequency of 0.05 Hz. The robot response for the x axis is depicted in Figure 5. In the second experiment (also Figure 5), the reference for the y axis was a square wave signal of the same frequency and 20 pixels amplitude, centred at 0 pixels.
In the first experiment, the set point was reached in 2 s. For the second experiment the settling time was longer than in the first experiment. These results point out the non-linear nature of the closed-loop system. The gains were set such that the responses do not exhibit overshoot. As in the experiment with the cart, higher gains produce overshoot, a behaviour due essentially to the time delay introduced by the vision system. Note also that in the response for the y axis there are some small oscillations. This phenomenon is due to the interlaced nature of the RS-170 standard.

Figure 4. Block diagram for the experiment with the robot arm.

Figure 5. Experimental results for the robot arm.
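Control law (1) for the two-degree-of-freedom arm can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it takes the paper's case R(θ) = I, the gravity vector g(q) is passed in as an argument because the link parameters are not given, and a backward-difference estimator stands in for the high-pass filter mentioned above.

```c
/* PD plus gravity compensation in image space, eq. (1) with R = I:
 *   tau = J(q)^T * Kp * x_err - Kv * qdot + g(q)
 * All vectors have length 2; J, Kp and Kv are 2x2 matrices, row-major. */
void pd_gravity_tau(const double J[4], const double Kp[4], const double Kv[4],
                    const double x_err[2], const double qdot[2],
                    const double g[2], double tau[2])
{
    /* p = Kp * x_err: proportional action on the image-plane error (pixels) */
    double p0 = Kp[0] * x_err[0] + Kp[1] * x_err[1];
    double p1 = Kp[2] * x_err[0] + Kp[3] * x_err[1];
    /* tau = J^T * p - Kv * qdot + g */
    tau[0] = J[0] * p0 + J[2] * p1 - (Kv[0] * qdot[0] + Kv[1] * qdot[1]) + g[0];
    tau[1] = J[1] * p0 + J[3] * p1 - (Kv[2] * qdot[0] + Kv[3] * qdot[1]) + g[1];
}

/* Joint speed estimated numerically from two encoder readings taken T
 * seconds apart (a simple stand-in for the high-pass filter in the text). */
void estimate_qdot(const double q[2], const double q_prev[2], double T,
                   double qdot[2])
{
    qdot[0] = (q[0] - q_prev[0]) / T;
    qdot[1] = (q[1] - q_prev[1]) / T;
}
```

This reflects the look and move structure: the outer loop consumes the image-plane error in pixels while the damping term uses the joint speed derived from the encoders.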




IV.- CONCLUDING REMARKS.

In this work, a visual servoing architecture for controlling electromechanical systems based on off-the-shelf hardware and software is proposed. Programming, development and data logging are performed through MATLAB/Simulink software, providing a user-friendly interface and a performance comparable with other visual servoing architectures. Modularity is a key characteristic of the proposed platform since, in order to incorporate new hardware, it is only necessary to write the drivers in the MATLAB/Simulink environment. Moreover, if more computing power is needed, personal computers with increased capabilities may be employed without changing the software. Two experiments were presented to test the capabilities of the architecture. Future work includes adding a second frame grabber and another video camera to perform stereo vision, and the inclusion of a motorised platform.


REFERENCES

[1] CORKE, P.- Visual Control of Robots: High Performance Visual Servoing. Taunton, Somerset, England: Research Studies Press, 1996.

[2] DICKMANNS, E. & GRAEFE, V.- "Applications of Dynamic Monocular Machine Vision", Machine Vision and Applications, vol. 1, pp. 241-261.

[3] FEDDEMA, J. & LEE, C.- "Adaptive image feature prediction and control for visual tracking with hand-eye coordinated camera", IEEE Trans. on Systems, Man and Cybernetics, vol. 20, nº 5, 1990, pp. 1172-1183.

[4] HASHIMOTO, K. & KIMURA, H.- "LQR optimal and non-linear approaches to visual servoing", pp. 165-198 in HASHIMOTO, K. (Ed.), Visual Servoing. Singapore: World Scientific, 1993.

[5] HUTCHINSON, S.; HAGER, G. & CORKE, P.- "A Tutorial on Visual Servo Control", IEEE Trans. on Robotics and Automation, vol. 12, nº 5, October 1996, pp. 651-670.

[6] KELLY, R.- "Robust Asymptotically Stable Visual Servoing of Planar Robots", IEEE Trans. on Robotics and Automation, vol. 12, nº 5, October 1996, pp. 759-766.

[7] MARUYAMA, A. & FUJITA, M.- "Robust Control for Planar Manipulators with Image Feature Parameter Potential", Advanced Robotics, vol. 12, nº 1, 1998, pp. 67-80.

[8] PAPANIKOLOPOULOS, N. & KHOSLA, P.- "Adaptive Robotic Visual Tracking: Theory and Experiments", IEEE Trans. on Automatic Control, vol. 38, nº 3, March 1993, pp. 429-444.

[9] TANG, P.; WANG, H. & LU, S.- "A vision-based position control system for a one-link flexible arm", J. of the Chinese Inst. of Eng., vol. 18, nº 4, 1995, pp. 565-573.

[10] REYES, F. & KELLY, R.- "Experimental Evaluation of Fixed-Camera Direct Visual Controllers on a Direct-Drive Robot", Proc. of the 1998 IEEE International Conference on Robotics and Automation (Leuven, Belgium, May 16-20). New York: IEEE, 1998, pp. 2327-2332.

[11] RICHARDS, C. & PAPANIKOLOPOULOS, N.- "Detecting and Tracking for Robotic Visual Servoing Systems", Robotics & Computer-Integrated Manufacturing, vol. 13, nº 2, 1997, pp. 101-120.

[12] WEISS, L.; SANDERSON, A. & NEUMAN, C.- "Dynamic Sensor-Based Control of Robots with Visual Feedback", IEEE J. Robot. Automation, vol. RA-3, October 1987, pp. 404-417.

[13] WILSON, W.; WILLIAMS HULLS, C. & BELL, G.- "Relative End-Effector Control Using Cartesian Position Based Visual Servoing", IEEE Trans. on Robotics and Automation, vol. 12, nº 5, October 1996, pp. 684-696.

[14] CASTILLO-GARCÍA, P.- Plataforma de control visual para servomecanismos. M.Sc. Thesis: Departamento de Control Automático, CINVESTAV-IPN, August 2000.

[15] QUANSER CONSULTING.- Wincon 3.0.2a Manual.