Haptic Guidance in a Collaborative Robotic System


Fernando Ribeiro 1 and António M. Lopes 2

1 Institute of Mechanical Engineering, LAETA, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
2 Institute of Mechanical Engineering, UISPA, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
Tel.: (+351) 22 508 17 58; Fax: (+351) 22 508 14 45; aml@fe.up.pt

Abstract. The development of the notion of virtual fixture gave rise to the broader concept of robot haptic guidance, a recent technology to support motor learning. It has been applied in many areas, namely automotive assembly, medical rehabilitation and training of healthy people. The implementation of virtual fixtures depends on the robot mechanical and drive system, namely the type of actuators and transmissions. If non-backdriven transmissions are used, the operator controls the robot through the forces he/she exerts on the robot handle (which are measured by a force transducer incorporated at the robot end-effector). These robots usually use motors with large reduction ratios and are admittance controlled. In this paper we implement haptic guidance in a non-backdriven, three degree-of-freedom (dof) heavy robot. An admittance low-level controller is first designed based on an IP (Integral, Proportional) velocity controller. Two types of virtual fixtures are implemented and the effectiveness of the proposed approach is illustrated experimentally.

Keywords: Collaborative robotic system, virtual fixtures, admittance control.

1 Introduction

In a collaborative robotic system a human operator and a robot interact in real-time while performing a given task [1]. This interaction aims to replicate the physical movements (upper and/or lower limbs, hands or fingers) of the operator, possibly through some kind of mapping between the operator and robot workspaces. Usually, the term collaborative, applied to robotic systems, distinguishes between two models: cooperative manipulation and telemanipulation.

In telemanipulation there is an indirect interaction between the operator and the robotic system. The operator interacts with a local joystick (master system) to control a remote robot (slave system), responsible for the execution of the task. In cooperative manipulation the operator is physically positioned in the robot's workspace and interacts directly with it via some type of handle. Collaborative systems have been developed in addition to industrial robotic systems, which could not act in coordination with humans for safety reasons. Moving the robot directly in its workspace has the advantage of preserving the operator's kinaesthesia (the sensation of movement or strain in muscles, tendons, and joints), making cooperative manipulation more intuitive than telemanipulation. In both cases, there is some level of shared control between the human operator and the robot [1]. Sharing control through haptics implies that the operator experiences additional forces via the control interface he/she is grabbing to control the system [9]. Those forces can be repulsive, if they are used to create forbidden regions in the robot workspace [10], meaning that the closer the operator gets to the boundaries of these regions, the larger the repulsive forces become. On the other hand, the feedback forces can be used to keep the operator on a given optimal (programmed) trajectory [11], being, in this case, attractive forces.

It is important to distinguish these two models of human-machine interaction (cooperative manipulation and telemanipulation) from machine-machine interaction, usually called "cooperative systems", which denotes collaboration between two or more robots [19].

The concept of virtual fixture corresponds to overlaying abstract sensory information on a given workspace for the purpose of conditioning the system response [1-7]. This conditioning, applied to collaborative systems, generates a sensory stimulation on the operator in order to facilitate the execution of a task. This concept is rather broad and assumes that the sensory information is not only positional, but also haptic (feeling of accelerations), visual and/or auditory.

An important application of virtual fixtures is assistive assembly, especially when dealing with heavy and large objects [1]. One solution would be to implement virtual points, curves, surfaces or volumes, such as a funnel in 3D space, to guide the parts to a given desired point in the workspace. This virtual funnel would have two purposes: to avoid collisions during the movement and to guide the operator and the load to the mounting location. Virtual fixtures are not limited to simulating rigid (solid) environments; they can also simulate surfaces with lower stiffness (e.g., linear/non-linear spring type), repulsive/attractive surfaces and friction contacts [1-8].

The implementation of virtual fixtures depends on the robot mechanical and drive system, namely the type of actuators and transmissions. If backdriven transmissions are used, the robot actuators apply forces to the operator whenever he/she tends to violate a virtual restriction, for example. Such robots typically use direct-drive motors or small reductions. Non-backdriven transmissions allow the operator to control the robot through the forces he/she exerts on the robot handle (which are measured by a force transducer incorporated at the robot end-effector). These robots usually use motors with large reduction ratios. In the former case the robot will be impedance controlled, and in the latter case it will use admittance control [1, 3].

Several pioneering works [1-8] report the use of virtual fixtures. For example, Abbott et al. [3] analyze virtual fixtures in the context of both cooperative manipulation and telemanipulation systems, considering issues related to stability, passivity, human modeling and applications. They present the design, analysis, and implementation of two categories of virtual fixtures: guidance virtual fixtures, which assist the user in moving the manipulator along desired paths, and forbidden-region virtual fixtures, which prevent the manipulator from entering forbidden regions of the robot workspace. Taylor et al. [8] propose a robotic system designed to extend the operator's ability to perform small-scale manipulation tasks. In their approach, the tools are held simultaneously by the operator and the robot. Forces exerted by the operator on the tool and by the tool on the environment are measured and used by the controller to offer smooth, tremor-free and precise positional control and force scaling. Payandeh and Stanisic [4] use virtual fixtures in telemanipulation and training environments. They found that virtual fixtures could improve the speed and precision of the operator and reduce the operator workload and the duration of the training phase for novice operators. Many other studies deal with virtual fixtures in cooperative manipulation and telemanipulation systems of both the impedance and admittance types [4-7].


During the last two decades, the development of the notion of virtual fixture gave rise to the broader concept of robot haptic guidance, a recent technology to support motor learning. It has been applied in many areas, namely medical rehabilitation [12-13] and training of healthy people [14, 18]. For example, Marchal-Crespo and Reinkensmeyer [12] review control strategies for robotic therapy devices. Several strategies have been proposed, including assistive, challenge-based, haptic simulation and coaching. Takahashi et al. [15] investigate how a robot can improve motor function, concluding that robot-based therapy yields improvements in hand motor function after chronic stroke. Emken and Reinkensmeyer [16] studied robot-enhanced motor learning in human locomotion. They concluded that motor learning of a novel dynamic environment can be accelerated by exploiting the error-based learning mechanism of internal model formation. Reinkensmeyer and Patton [17] show how robotic devices can temporarily alter task dynamics in ways that contribute to the motor learning experience, suggesting possible applications in rehabilitation and sports training. Ben-Pazi et al. [14] investigate the effect of the mechanical properties of a pen on the quality of handwriting in children. A pen was attached to a robot and its effective weight (inertia) and viscosity were programmed. Increased inertia and viscosity improved handwriting quality in 85% of the children. Abbink et al. [9] argue that haptic shared control is a promising approach to meet the commonly voiced design guidelines for human-automation interaction, especially for automotive applications.

In this paper we implement haptic guidance in a non-backdriven, three degree-of-freedom (dof) heavy robot. An admittance low-level controller is first designed based on an IP (Integral, Proportional) velocity controller. Two types of virtual fixtures are implemented and the effectiveness of the proposed approach is illustrated experimentally. Bearing these ideas in mind, the paper is organized as follows. In Section 2 the robot is introduced and the admittance controller is presented. Section 3 describes the virtual fixtures. Section 4 presents their implementation and experimental testing. Finally, in Section 5, the main conclusions are presented.

2 Robotic System and Low-level Admittance Controller

The robotic system consists of an existing Cartesian manipulator and a PC-based digital controller. This robot was not designed for human interaction; it was primarily designed to carry heavy loads at high speeds. It has three linear axes powered by brushless AC servomotors. Ball-screw based transmissions convert the motors' rotation into linear motion. The axes' linear position and acceleration are monitored via incremental encoders and accelerometers, respectively. A 6-axis force/torque transducer is mounted between the robot end-point and the operator handle. The torque signals were not used. The controller runs under Matlab / Simulink / xPC Target.

Each robot axis may be modeled as a mass-damper system and its dynamics is approximately given by:

T = K_T I   (1)

T = J dω/dt + B ω   (2)

x = K_LA θ   (3)

The drive current, I, for the motor produces the torque, T, at the motor shaft, which is rigidly connected to the load. Constant J is the total inertia of the parts and B the total friction (essentially viscous), both referred to the motor shaft. Parameter K_T is the torque constant and K_LA represents the transmission ratio.

We implemented an IP velocity controller that includes an integrator anti-windup loop. This loop prevents the integrator from saturating by feeding back the difference between the actuator output and the control action signal through a tracking time constant. If the integrator saturates, this error feedback resets it.
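As a concrete illustration, the sketch below simulates one axis, modelled as in (1)-(3), under a discrete-time IP velocity controller with the tracking anti-windup loop just described. All numerical values (inertia, gains, saturation limit, sample time) are illustrative assumptions, not the parameters of the real robot.

```python
import numpy as np

# Illustrative parameters (assumed, not the robot's actual values)
J, B = 0.01, 0.002          # inertia [kg*m^2] and viscous friction [N*m*s/rad]
K_T, K_LA = 0.59, 0.003     # torque constant [N*m/A] and transmission ratio [m/rad]
K_P, K_I = 5.0, 200.0       # IP velocity-controller gains (assumed)
T_t = 0.02                  # anti-windup tracking time constant [s]
I_MAX = 10.0                # drive current saturation [A]
dt = 1e-3                   # sample time [s]

def ip_velocity_step(v_ref, v, integ):
    """One sample of the IP controller with tracking anti-windup.

    The integral term acts on the velocity error and the proportional term on
    the measured velocity only (IP structure).  The difference between the
    saturated and unsaturated current commands is fed back into the
    integrator, scaled by 1/T_t, so the integrator is reset whenever the
    drive saturates.
    """
    u = integ - K_P * v                 # unsaturated current command
    u_sat = np.clip(u, -I_MAX, I_MAX)   # actuator output after saturation
    integ += dt * (K_I * (v_ref - v) + (u_sat - u) / T_t)
    return u_sat, integ

# Simulate the axis of (1)-(3): J*dw/dt + B*w = K_T*I,  xdot = K_LA*w
w, integ, t_end = 0.0, 0.0, 0.5
for _ in range(int(t_end / dt)):
    v = K_LA * w                        # measured linear velocity [m/s]
    I_cmd, integ = ip_velocity_step(v_ref=0.05, v=v, integ=integ)
    w += dt * (K_T * I_cmd - B * w) / J # motor-shaft dynamics
print("final linear velocity [m/s]:", K_LA * w)
```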

The admittance controller relies on the velocity controller and a linear relationship between the force imposed by the operator and the robot velocity, as given by (Figure 1):

ẋ = c f   (4)

where c > 0 is an admittance gain that acts like the inverse of a damping coefficient. Thus, the admittance controller transfer function, G_A(s), is given by:
G_A(s) = sX(s)/F(s) = c K_LA K_T K_I / (J s² + (B + K_LA K_T K_P) s + K_LA K_T K_I) = c ω_n² / (s² + 2ζω_n s + ω_n²)   (5)


It should be noted that c acts as a steady-state gain that does not affect the setting of the parameters of the velocity controller. Moreover, the natural frequency of the controlled system was chosen equal to ω_n = 60 rad/s, based on simulations and experiments involving velocity/admittance control, and the damping coefficient was set to ζ = 1, fixing the controller gains, K_I and K_P, respectively. The values of all parameters are shown in Table 1.
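Assuming the closed-loop admittance dynamics takes the second-order form reconstructed in (5), the velocity-controller gains follow from matching denominator coefficients. The sketch below illustrates this pole-placement step; J and K_LA are placeholder assumptions (the corresponding Table 1 entries are not fully legible), so the resulting gains are only indicative.

```python
# Pole placement for the IP velocity loop, assuming the closed-loop form of (5):
#   G_A(s) = c*wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
# Matching the denominator J*s^2 + (B + K_LA*K_T*K_P)*s + K_LA*K_T*K_I with
# J*(s^2 + 2*zeta*wn*s + wn^2) gives the gains below.

K_T  = 0.59        # torque constant (Table 1)
B    = 0.002       # viscous friction (Table 1)
K_LA = 1.0 / 314   # transmission ratio, axis X (assumed to be the inverse of the
                   # catalogue figure of 314, i.e. about 3.2 mm per rad of rotation)
J    = 0.01        # total inertia referred to the motor shaft (assumed)

wn, zeta = 60.0, 1.0   # desired natural frequency [rad/s] and damping ratio

K_I = J * wn**2 / (K_LA * K_T)
K_P = (2 * zeta * wn * J - B) / (K_LA * K_T)
print(f"K_I = {K_I:.1f}, K_P = {K_P:.2f}")
```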

Figure 2 shows the response to an arbitrary force command imposed by the operator (ω_n = 60 rad/s; ζ = 1; c = 1 m·s⁻¹/N). It can be seen that the robot responds to the force profile with a velocity. As expected, force and velocity are numerically equal, as the admittance gain was, in this case, set equal to unity.



Table 1. Values of the system and controller parameters.

Parameter | Value
K_T | 0.590 (from manufacturer catalog)
J | axis X: …; axis Y: …; axis Z: … (calculated, taking into account the axes' inertias and masses)
B | 0.002 (0.001 from manufacturer catalog, plus 0.001 estimated to correspond to approximately 10% of the available torque at maximum speed)
K_LA | axis X: 314; axis Y: 314; axis Z: 157 (from manufacturer catalog)
ω_n | 60 rad/s (adjusted by simulation)
ζ | 1 (adjusted in order to have no overshoot)


Fig. 1. Block diagram of the admittance controller.



Fig. 2. Response to a given force command imposed by the operator (ω_n = 60 rad/s; ζ = 1; c = 1 m·s⁻¹/N).

3 High-level Virtual Fixtures

Generally speaking, by adopting an admittance controller we establish the relationship between the force imposed by the operator and the motion of the robot end-effector, as given by [5]:

v = Φ(f)   (6)

where f ∈ ℝ³ is the force imposed by the user and v ∈ ℝ³ is the velocity of the robot end-effector, both expressed in the Cartesian space. The admittance function Φ establishes the relationship between f and v. If this relationship is linear and the same in all directions, we can write:

v = c f   (7)

Thus, it is understood that the velocity of the robot in a given direction is proportional to the force exerted in that direction, and the robot has an isotropic behavior in terms of velocity.

A virtual fixture generalizes the previous model by adding anisotropy conditions to the robot workspace. To do this, the time-dependent 3×n (0 < n < 3) matrix δ = δ(t) is introduced, according to the notation used by Bettini et al. [5]. Intuitively, δ represents the preferred directions of motion of the robot end-effector. Using matrix δ, we can set up the projection operator,

D = δ (δᵀ δ)⁻¹ δᵀ   (8)

allowing the decomposition of the force vector (exerted by the operator) into two components,

f_δ = D f   (9)

f_τ = f − f_δ   (10)

meaning that f_δᵀ f_τ = 0.

We can now introduce a new admittance coefficient c_τ ∈ [0, 1] that will attenuate the system response along the non-preferred components of force, f_τ. Consequently, it results in:

v = c [D + c_τ (I − D)] f   (11)

The coefficient c may be regarded as the general admittance of the system. Imposing 0 < c_τ < 1, a virtual constraint is added to the robot motion in the directions orthogonal to δ. In the limit case, c_τ = 0, a rigid virtual fixture is imposed. It should also be noted that c_τ = 1 results in an isotropic robot behavior.
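A minimal numerical sketch of (8)-(11) is given below: from a preferred-direction matrix δ it builds the projector D and maps an operator force into an end-effector velocity command. The sample direction, force and gain values are arbitrary illustrations, not values from the experiments.

```python
import numpy as np

def fixture_velocity(delta, f, c=1.0, c_tau=0.1):
    """Anisotropic admittance of (11): v = c * (D + c_tau*(I - D)) @ f.

    delta : (3, n) matrix of preferred directions (0 < n <= 3)
    f     : operator force in Cartesian space [N]
    c     : overall admittance gain [m/(s*N)]
    c_tau : attenuation along non-preferred directions, in [0, 1]
    """
    delta = np.atleast_2d(delta).reshape(3, -1)
    # Projection operator of (8): D = delta (delta^T delta)^-1 delta^T
    D = delta @ np.linalg.inv(delta.T @ delta) @ delta.T
    return c * (D + c_tau * (np.eye(3) - D)) @ f

# Example: preferred direction along x, operator pushes diagonally in x-y.
delta = np.array([1.0, 0.0, 0.0])
f = np.array([2.0, 2.0, 0.0])
print(fixture_velocity(delta, f))   # the y component is attenuated by c_tau
```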

4 Experimental Results

In this section two different situations are implemented, using the Cartesian robot and the low-level admittance controller described in Section 2. Afterwards, the experimental results are discussed.

4.1 Motion Along a Curve

In this case the robot end-effector can be moved along a curve. It is assumed that the virtual constraint is given by the parametric expression [5]:






p(s) = [x(s) y(s) z(s)]ᵀ,  0 ≤ s ≤ 1   (12)

Defining p(s_a) as the point of the virtual curve closest to the robot end-effector's actual position, x_a, the preferred direction of motion δ is given by the normalized vector tangent to the curve at that point:

t(x_a) = dp(s)/ds |_(s = s_a)   (13)

δ(x_a) = t(x_a) / ‖t(x_a)‖   (14)

Nevertheless, if the robot end-effector does not start on the desired curve it will tend to move along a direction parallel to the curve. This means that an attractor must be defined to redirect the robot to the desired path. This can be done using:

δ(x_a) = sign(fᵀ δ(x_a)) δ(x_a) + k_c e(x_a)   (15)

e(x_a) = p(s_a) − x_a   (16)

where e(x_a) is the Cartesian error and k_c is the parameter that controls the rate of convergence to the desired curve.
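The sketch below shows how the preferred direction of (13)-(15) might be computed for a generic parametric curve p(s). The brute-force closest-point search and the finite-difference tangent are implementation choices made here for illustration; the paper does not specify them, and the curve, force and gain values are assumptions.

```python
import numpy as np

def preferred_direction(p, x_a, f, k_c=0.05, n_samples=2000, h=1e-4):
    """Preferred direction for a curve fixture, following (13)-(16).

    p   : callable, p(s) -> (3,) point on the curve, s in [0, 1]
    x_a : current end-effector position
    f   : force applied by the operator (used only through its sign along delta)
    """
    # (12)/(16): locate the closest curve point by sampling the parameter s
    s_grid = np.linspace(0.0, 1.0, n_samples)
    pts = np.array([p(s) for s in s_grid])
    s_a = s_grid[np.argmin(np.linalg.norm(pts - x_a, axis=1))]
    e = p(s_a) - x_a                                   # Cartesian error (16)
    # (13)-(14): normalized tangent at the closest point (finite difference)
    t = (p(min(s_a + h, 1.0)) - p(max(s_a - h, 0.0))) / (2 * h)
    delta = t / np.linalg.norm(t)
    # (15): orient delta with the operator force and add the attractor term
    return np.sign(f @ delta) * delta + k_c * e

# Illustrative use with a straight-line "curve" from the origin to (1, 0, 0) m
line = lambda s: np.array([s, 0.0, 0.0])
print(preferred_direction(line, x_a=np.array([0.3, 0.05, 0.0]),
                          f=np.array([1.0, 0.0, 0.0])))
```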

Figure 3a illustrates the robot end-effector being moved by the user and guided along a curve. In this case, a helix curve is defined as the desired path, as given by:


x(s) = x_c + r cos(2πs)   (17)

y(s) = y_c + r sin(2πs)   (18)

z(s) = z_c + 2πb s   (19)

with center x_c = [x_c y_c z_c]ᵀ = [0 0 0]ᵀ, radius r = 100 mm and pitch b = 15 mm. The direction δ(s) at every point, s, is given by:






δ(s) = [x′(s) y′(s) z′(s)]ᵀ / ‖[x′(s) y′(s) z′(s)]ᵀ‖   (20)
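For reference, the helix of (17)-(19) and the normalized direction of (20) can be written out directly, as sketched below. The 2πb pitch term follows the reconstruction in (19) and should be taken as an assumption.

```python
import numpy as np

r, b = 0.100, 0.015            # radius and pitch [m], as in the experiment
x_c = np.zeros(3)              # helix centre, [0 0 0]^T

def helix(s):
    """Parametric helix of (17)-(19), s in [0, 1]."""
    return x_c + np.array([r * np.cos(2 * np.pi * s),
                           r * np.sin(2 * np.pi * s),
                           2 * np.pi * b * s])

def delta(s):
    """Normalized tangent direction of (20)."""
    d = np.array([-2 * np.pi * r * np.sin(2 * np.pi * s),
                   2 * np.pi * r * np.cos(2 * np.pi * s),
                   2 * np.pi * b])
    return d / np.linalg.norm(d)

print(helix(0.25), delta(0.25))
```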

The control parameters c = 1 m·s⁻¹/N and k_c = 0.… are used.

As can be seen, while at the beginning the robot end-effector is away from the desired path, it rapidly converges to the curve and stays approximately on it thereafter. Figure 3b depicts the modulus of the error. We can see the approach phase, on the left of the graph, where the error diminishes quickly. Subsequently, the error is kept below 0.6 mm.


Fig. 3. (a) Motion along a helix curve; (b) time evolution of the error.

4.2 Motion Inside a Tube

With this type of restriction the robot end-effector can be freely moved inside the tube; once in the tube, it must stay there. In this case the task can be described by a parametric curve p(s) representing the axis of a tube with radius r_t. The boundary surface of the tube is a switching surface between a free-motion region (inside) and a virtually attractive region. Further, we define a transition region of width ε, 0 < ε < r_t, within which the gain discontinuity is smoothed [5].
c_tu(x_a) = 1,  if ‖e(x_a)‖ ≤ r_t − ε;
c_tu(x_a) = c_τ + (1 − c_τ) ((r_t − ‖e(x_a)‖)/ε)ⁿ,  if r_t − ε < ‖e(x_a)‖ < r_t;
c_tu(x_a) = c_τ,  in all other cases.   (21)

where n ≥ 1 is a scalar that shapes the switching surface. If the end-effector is outside the tube the surface is virtually attractive; if it is inside the tube the surface is virtually repulsive.

The reference direction is given by

δ(x_a) = sign(fᵀ δ(x_a)) δ(x_a) + k_tu e_t(x_a)   (22)
e_t(x_a) = 0,  if ‖e(x_a)‖ ≤ r_t;
e_t(x_a) = e(x_a) (‖e(x_a)‖ − r_t)/‖e(x_a)‖,  if ‖e(x_a)‖ > r_t.   (23)
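A possible coding of the piecewise gain of (21) and the surface error of (23), as reconstructed above, is sketched below. The exact smoothing expression is an assumption consistent with the text rather than the published formula, and the numerical values are only illustrative.

```python
import numpy as np

def tube_gain(e, r_t, eps, c_tau, n):
    """Smoothed tube admittance of (21) as a function of the distance to the axis.

    Inside the tube the motion is unconstrained (gain 1); within a band of
    width eps inside the wall the gain blends towards c_tau with exponent n;
    at and beyond the wall the reduced gain c_tau applies.
    """
    d = np.linalg.norm(e)
    if d <= r_t - eps:
        return 1.0
    if d < r_t:
        return c_tau + (1.0 - c_tau) * ((r_t - d) / eps) ** n
    return c_tau

def surface_error(e, r_t):
    """Error towards the tube surface, (23): zero inside, radial excess outside."""
    d = np.linalg.norm(e)
    return np.zeros(3) if d <= r_t else e * (d - r_t) / d

# A point 14 mm from the axis of a 12 mm tube: the wall attracts the tool back
e = np.array([0.014, 0.0, 0.0])
print(tube_gain(e, r_t=0.012, eps=0.010, c_tau=0.1, n=3),
      surface_error(e, r_t=0.012))
```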


We used the helix defined in the previous example for the axis of the tube. The tube radius was set to r_t = 12 mm and the control parameters are c = 1 m·s⁻¹/N, c_τ = 0.…, n = 3 and ε = 10 mm. Figure 4a shows the tube and the robot trajectory.
It can be seen that the robot starts outside the tube and rapidly converges to the inside volume. Once there, the robot can be freely moved, explaining the almost "random" trajectory observed in the graph. Figure 4b depicts the distance between the robot end-effector and the axis of the tube. As in the previous example, we can see the approach phase, on the left of the graph, where the distance diminishes quickly. Subsequently, the distance is at most approximately 12 mm, meaning that the end-effector gets close to the tube surface, but always stays inside it.


Fig. 4. (a) Motion inside a tube; (b) time evolution of the error.


Figure 5a depicts the robot handle actuated by the operator. Figures 5b and 5c illustrate an example of haptically guided motion. The desired trajectory is a circumference, executed without assistance (Figure 5b) and with assistance (Figure 5c). As can be seen, while in the former case it is almost impossible to keep the robot on the desired trajectory, in the latter case the trajectory is easily executed.





Fig. 5. Images taken during haptic guidance along a circle.

5 Conclusions

In this paper haptic guidance was implemented using a non-backdriven, three degree-of-freedom robot. This heavy-duty robot was not originally designed for human interaction. We synthesized a low-level admittance controller based on an IP velocity controller. Two types of virtual fixtures were implemented, based on the formalism adopted in reference [5], and the effectiveness of the proposed approach was illustrated experimentally. The admittance controller was shown to satisfy the requirements imposed by the implementation of virtual guidance control: the human can control the robot in a virtualized environment that restricts or aids the human arm motion.



References

1. M. A. Peshkin, J. E. Colgate, W. Wannasuphoprasit, C. A. Moore, R. B. Gillespie and P. Akella: "Cobot Architecture", IEEE Transactions on Robotics and Automation, Vol. 17, No. 4, pp. 377-390, 2001.

2. L. B. Rosenberg: "Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation", Proc. IEEE Virtual Reality Int. Symp. (VRAIS'93), pp. 76-82, 1993.

3. J. J. Abbott, P. Marayong and A. M. Okamura: "Haptic Virtual Fixtures for Robot-Assisted Manipulation", Springer Tracts in Advanced Robotics, Vol. 28, pp. 49-64, 2007.

4. S. Payandeh and Z. Stanisic: "On application of virtual fixtures as an aid for telemanipulation and training", Proc. 10th Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, pp. 18-23, 2002.

5. A. Bettini, P. Marayong, S. Lang, A. M. Okamura and G. D. Hager: "Vision-assisted control for manipulation using virtual fixtures", IEEE Transactions on Robotics, Vol. 20, No. 6, pp. 953-966, 2004.

6. S. Park, R. D. Howe and D. F. Torchiana: "Virtual fixtures for robotic cardiac surgery", Proc. 4th Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, pp. 1419-1420, 2001.

7. N. Turro and O. Khatib: "Haptically augmented teleoperation", Proc. 7th Int. Symposium on Experimental Robotics, pp. 1-10, 2000.

8. R. Taylor, P. Jensen, L. Whitcomb, A. Barnes, R. Kumar, D. Stoianovici, P. Gupta, Z. Wang, E. Dejuan and L. Kavoussi: "Steady-hand robotic system for microsurgical augmentation", Int. J. Robotics Research, Vol. 18, No. 12, pp. 1201-1210, 1999.

9. D. A. Abbink, M. Mulder and E. R. Boer: "Haptic shared control: smoothly shifting control authority?", Cognition, Technology & Work, Vol. 14, No. 1, pp. 19-28, 2012.

10. P. Marayong and A. M. Okamura: "Speed-accuracy characteristics of human-machine cooperative manipulation using virtual fixtures with variable admittance", Human Factors, Vol. 46, No. 3, pp. 518-532, 2004.

11. P. G. Griffiths and R. B. Gillespie: "Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits", Human Factors, Vol. 47, No. 3, pp. 574-590, 2005.

12. L. Marchal-Crespo and D. J. Reinkensmeyer: "Review of control strategies for robotic movement training after neurologic injury", Journal of Neuroengineering and Rehabilitation, Vol. 6, No. 20, 2009.

13. L. E. Kahn, M. L. Zygman, W. Z. Rymer and D. J. Reinkensmeyer: "Robot-assisted reaching exercise promotes arm movement recovery in chronic hemiparetic stroke: A randomized controlled pilot study", Journal of Neuroengineering and Rehabilitation, Vol. 3, No. 12, 2006.

14. H. Ben-Pazi, A. Ishihara, S. Kukke and T. D. Sanger: "Increasing viscosity and inertia using a robotically controlled pen improves handwriting in children", Journal of Child Neurology, Vol. 25, No. 6, pp. 674-680, 2009.

15. C. D. Takahashi, L. Der-Yeghiaian, V. Le, R. R. Motiwala and S. C. Cramer: "Robot-based hand motor therapy after stroke", Brain, Vol. 131, No. 2, pp. 425-437, 2007.

16. J. L. Emken and D. J. Reinkensmeyer: "Robot-enhanced motor learning: Accelerating internal model formation during locomotion by transient dynamic amplification", IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 13, No. 1, pp. 33-39, 2005.

17. D. J. Reinkensmeyer and J. L. Patton: "Can robots help the learning of skilled actions?", Exercise and Sport Sciences Reviews, Vol. 37, No. 1, pp. 43-51, 2009.

18. J. Lüttgen and H. Heuer: "The influence of haptic guidance on the production of spatio-temporal patterns", Human Movement Science, Vol. 31, No. 3, pp. 519-528, 2012.

19. P. Marayong: "Motion Control Methods for Human-Machine Cooperative Systems", PhD Thesis, Johns Hopkins University, 2007.