Agent-based Human-Robot Interaction


Reuven Granot


Center for Computational Mathematics and Scientific Computation, University of Haifa


Abstract


A small-scale supervised autonomous bulldozer operating at a remote site was developed to study agent-based human intervention. The model, built from a Lego Mindstorms kit, represents earth-moving equipment whose job performance does not require high accuracy. It enables evaluation of system response to different operator interventions. Automation is introduced as an improved Man-Machine Interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the software (SW) environment. The controlling agents' function, or goal, is to perform a simple task without the need to use joysticks to manually control each of the system's activities. Control is improved because the agent is capable of negotiating with other elements of the SW environment. For human intervention at the low layers of the control hierarchy we suggest a task-oriented control agent that takes care of a fluent transition between the state in which the robot operates and the one imposed by the human. This transition should account for the imperfections responsible for improper operation of the robot, by disconnecting them or adapting them to the new situation.



Introduction


Although great achievements in distant operation of systems have been reported in recent years, intelligent operation of complex systems is still some years ahead (1, 2). Since some of these autonomous missions are very challenging (3), we suggest developing semiautonomous, human-supervised telerobotic systems rather than concentrating full effort on autonomous solutions to complex tasks (4, 5). Experience with teleoperation in recent years has been reported regarding operator loss of spatial awareness (disorientation, loss of context), cognitive errors (inadequate mental model of what is really out there), perceptual errors (distance judgment, display interpretation), poor performance (imprecise control, obstacle detection), as well as simulator sickness and fatigue. Communications between the robot and the teleoperation unit suffer from latency, bandwidth and reliability limitations. As a consequence, it is not possible to send the human operator as much information as the system may gather, so some data will always be kept at the remote site. We shall argue in this paper that in some cases a remote intelligent machine may even perform better than a standard human operator. Such an intelligent machine will use controlling agents to perform autonomous tasks.

Off-road equipment is designed to operate in unstructured and rapidly changing environments by taking advantage of its mechanical design properties, as well as the capabilities of a human operator to predict and control reactions caused by machine-soil interaction. It is not possible to model this interaction so as to predict the input at all levels of control, as required by a fully autonomous system. Earthmoving equipment is usually large and performs its tasks in a quite relaxed atmosphere, enabling the telerobotics researcher to restrict the scope of the research to basic agent relations rather than being constrained by the capabilities of sensor measurement and analysis rates or the high-bandwidth response of robot actuators. The major problem of automation is how to deal with unexpected contingent events. The equipment is large and powerful, which in practice allows all needed hardware to be easily mounted on it. In the design of this system there are no hard limitations on volume, weight or energy, as there are in most other applications. We decided to focus attention on intelligent operation of a semiautonomous bulldozer. On the one hand we have a non-trivial control problem; on the other hand it is characterized by relatively slow reaction, since the bulldozer moves slowly enough to enable perception of relevant changes in its environment.


The operational environment of a bulldozer is semi-structured, so a task-level control methodology is recognized to be an adequate design decision. The Global Goal of the mission, or main task, can easily be decomposed into a hierarchical tree of subtasks. A representative mission for a bulldozer is covering a ditch. Let's assume that the ditch is an obstacle which has to be removed, and that a road traversable by off-road vehicles has to be built in order to enable traffic across the obstacle. The ditch may or may not have a small hill on its opposite rim. This main task can be decomposed into a) pushing earth into the ditch and collapsing the near rim, b) entering the ditch and climbing the opposite rim, and c) pushing earth from the distant rim into the ditch. This task decomposition may continue until the servo level of control is defined. The task described above is quite general, but typical for earthmoving, mining or military environments. It is characterized by moderate requirements in agility and accuracy.

Following a classical approach, to guide an off-road machine (a bulldozer) through a non-trivial task, a comprehensive plan should be developed and subtasks scheduled. These subtasks may be associated with behavioral responses of the system, very much like the reactive responses of a well-trained human operator. Since the need to deal with contingent events is widely recognized, each of these tasks must be executed according to real-time sensory data measurements.

The behavioral decomposition suffers from a lack of deliberation: to ensure completion of the main task on time and in accordance with performance measures, some amount of deliberation is required. The known hybrid architectures deliver the required hierarchical structure to enable task decomposition that controls the performance measure, but bridging the different time scales is still a difficulty. Bandwidth (time-scale) separation enables designers of hierarchies to benefit from a gradual reduction of sampling frequencies. In these hierarchies, layered by bandwidth, a separate controller is used for each bandwidth, and the control outputs are superimposed within each controlled task. Contingent events may be treated by an effective method of re-planning based on real-time sensory information whenever a contingent event occurs. A logical compromise would be a distribution of activities between collaborative behaviors, such that the functionality of each behavior may be achieved by a controlling agent based on feedback controllers within a limited state space (5, 9).
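The bandwidth-layered idea can be illustrated with a toy scheduling loop; the layer names and the factor-of-ten periods below are assumptions for illustration, not the paper's actual controller rates.

```python
# Illustrative sketch of bandwidth (time-scale) separation: each layer
# samples an order of magnitude slower than the one below it, and the
# control outputs of the active layers are superimposed.

layers = [
    {"name": "servo",     "period": 1},    # fastest loop
    {"name": "primitive", "period": 10},   # 10x slower
    {"name": "task",      "period": 100},  # 100x slower
]

def step(tick, log):
    output = 0.0
    for layer in layers:
        if tick % layer["period"] == 0:
            log.append((tick, layer["name"]))
            output += 1.0   # each active layer superimposes its correction
    return output

log = []
for tick in range(100):
    step(tick, log)

print(sum(1 for _, n in log if n == "servo"))   # 100
print(sum(1 for _, n in log if n == "task"))    # 1
```

The servo layer runs at every tick while the task layer runs two orders of magnitude less often, which is exactly the gradual reduction of sampling frequency the hierarchy exploits.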



A Reference Model Architecture based on the Real-time Control System (RCS) (6, 7) was developed by NIST and widely adopted as a standard by NASA in its teleoperation missions.

In order to achieve the system's Global Goal, in RCS (7) the global task is decomposed according to the time scale of the mission. This generates a hierarchically controlled, layered architecture. The control bandwidth, as well as the perceptual resolution of spatial and temporal patterns, is planned so that it decreases bottom-up by about an order of magnitude at each layer. The layers are built of elements (nodes) of intelligence; each one independently processes its relevant sensory data, makes decisions and controls its action. The elements of intelligence in the RCS architecture sense their environment and update their World Model representation. Each one has its own restricted World Model representation, locally developed and maintained. Agent-based architectures were recently included in the Reference Model (8). We found the Reference Model Architecture a relevant framework for developing the required architecture for each specific application.
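A node of intelligence in this style can be sketched roughly as follows; the class, field and action names are hypothetical, chosen only to show the sense / update-world-model / decide cycle each node runs independently.

```python
# Rough sketch of an RCS-style node of intelligence: each node keeps
# its own restricted World Model, updated from its own sensory data,
# and makes decisions locally.

class RcsNode:
    def __init__(self, name):
        self.name = name
        self.world_model = {}          # local, restricted representation

    def sense(self, readings):
        """Update the local World Model from this node's sensors."""
        self.world_model.update(readings)

    def decide(self):
        """Pick an action based on the local World Model only."""
        if self.world_model.get("obstacle"):
            return "replan"
        return "continue"

servo = RcsNode("servo")
servo.sense({"obstacle": False})
print(servo.decide())   # continue
servo.sense({"obstacle": True})
print(servo.decide())   # replan
```

No node consults another node's world model directly; coordination happens through the layered hierarchy.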




Supervision and intervention by a human would provide the advantages of on-line fault correction and debugging, and would relax the amount of structure needed in the environment, since a human supervisor could anticipate and account for many unexpected situations. Operators are able to use their intelligence to decide upon task-execution strategies for the important built-in tasks that a robot can perform. A telerobot can use shared control, in which the instructions given by the Human Operator and by the robot are combined, or strict supervisory control (4), in which the Human Operator instructs the robot and then observes its autonomous actions. Under strict supervisory control, a human operator specifies task-level commands to a remote robot. The set of tasks given to a robot will be well designed if it allows the human enough freedom to sufficiently vary the actions of the robot. The guiding metaphor for interaction between operator and robot is to consider the robot as an assistant. In supervisory control, problems are caused by relatively long time delays, due to human operator reaction or to the capabilities of the communication links. Most solutions to the long-time-delay problem involve designing high-level tasks that can be executed reliably in an autonomous way. This type of control system works well in structured and so-called semi-structured environments, where parts of the environment have guaranteed properties.

The controlling agent


An agent is a computer system capable of autonomous action in some environment. A general way in which the term agent is used is to denote a hardware- or software-based computer system that enjoys the following properties (10): autonomy, social ability, reactivity and pro-activeness (taking the initiative). Being autonomous, reactive and pro-active, agents differ from objects, which merely encapsulate some state, and they are more than expert systems, since they are situated in their environment and take action instead of just advising what to do. We need to build agents that carry out tasks without the need to tell them how to perform these tasks.


Such an agent is motivated by intention (goal-oriented) and modulated by attention (it decides about priorities as a function of static information as well as continuously measured internal and external dynamic data). The agent in charge of some controlling task (a control agent) is the realization/encapsulation of a behavior in a behavior-based system. The task is decomposed into subtasks of a behavior-based nature: as a reaction to a perceived change in the environment, or as the output action of a decision procedure. The action may be a change of the environment, or a message sent to another behavior telling it to take action or informing it of something. The agent is in charge of executing the task according to the behavioral expectation, so each agent must have at least one active thread of control.

Behaviors are feedback controllers designed to achieve specific tasks/goals (11). Behaviors are typically executed in parallel, according to the distributed paradigm. Behavior-Based Systems can store distributed representations of world models. Behaviors can directly connect sensors and effectors, and can also take inputs from other behaviors and send outputs to other behaviors. So, when assembled into distributed representations, behaviors/agents can be used to look ahead on a time scale comparable with the rest of the system.
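A behavior in this sense can be sketched as a feedback controller running in its own thread of control and communicating through messages; the blade-height example, the gain value and the sensor stub below are all our own illustrative assumptions.

```python
# Sketch: a behavior as a feedback controller with its own thread of
# control, sending its output as a message to other behaviors.
import queue
import threading

def blade_height_behavior(setpoint, sensor, out, gain=0.5):
    """One step of a simple proportional feedback controller."""
    error = setpoint - sensor()
    out.put(("blade_motor", gain * error))   # message to the effector side

msgs = queue.Queue()
t = threading.Thread(target=blade_height_behavior,
                     args=(10.0, lambda: 6.0, msgs))
t.start()
t.join()
print(msgs.get())   # ('blade_motor', 2.0)
```

In a full behavior-based system many such threads would run in parallel, each consuming sensor data and producing messages for effectors or for other behaviors.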


The agent is a control subassembly. It may be built upon a primitive task or composed of an assembly of subordinate agents. The agent hierarchy for a specific task is preplanned, or defined by the human operator as part of the preparation for execution of the task. The final sequence of operation is deduced from the hierarchy or negotiated between agents in the hierarchy.


After the start of task execution, the human operator monitors the activities and performance of the assembly of agents. Being responsible for the completion of the major task (the global goal), the human operator may interfere by sending change orders. These change orders are of three different natures: emergency, "as-is" or normal. The emergency and "as-is" change orders are executed immediately and as given; the emergency change orders will additionally take the shortest procedure, even if the execution of the task may suffer as a result. The normal change orders are first checked by the interface agent, which is in charge of negotiating their execution with other agents in order to optimize execution performance. In this responsibility, the interface agent acts like most other defined agents, aiming to complete execution of the task as efficiently as possible and with the highest possible performance value. The negotiation procedures are well defined and known to the human operator. In case of conflict between activities of different agents, the conflict-resolution algorithm used will be the default one, the one defined by the human operator in the change order, or one suggested to the operator by a simplified decision-support algorithm.
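The three change-order classes can be sketched as a small dispatch function; the function names and the result fields are illustrative assumptions, not the system's actual interfaces.

```python
# Sketch of the three change-order classes: emergency and "as-is"
# orders execute immediately as given; normal orders go through the
# interface agent for negotiation first.

def handle_change_order(order, negotiate):
    kind = order["kind"]
    if kind in ("emergency", "as-is"):
        # executed immediately and exactly as given; an emergency order
        # additionally takes the shortest available procedure
        return {"action": order["action"],
                "shortest_procedure": kind == "emergency"}
    # normal orders: the interface agent negotiates timing and execution
    return negotiate(order)

def interface_agent(order):
    """Stand-in for the negotiating interface agent."""
    return {"action": order["action"], "shortest_procedure": False,
            "scheduled": "next best time interval"}

print(handle_change_order({"kind": "emergency", "action": "stop"},
                          interface_agent))
# {'action': 'stop', 'shortest_procedure': True}
```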


Control agent performance evaluation


The field of robotics, like AI and intelligent systems in general, is an emerging scientific field which still lacks metrics and quantifiable measures of performance. However, NIST (12, 13) is leading an international effort to fulfill this need, in order to enable scientific evaluation of presented results. The science of intelligent systems needs standard units of measure to enable measurement of results against a standard metric. Until this or a similar effort concludes with a widely accepted standard, we can only suggest methods and architectures which may be evaluated against common sense and qualitative experimental results. The difficulty with this current trend is the inability to check generic applications against meaningful statistical results. Each robotic system is somewhat different from similar ones, and is quite expensive to build and maintain, which makes any statistically meaningful evaluation impractical. Meanwhile, the legitimacy of transferring conclusions across different scales of application or different implementations remains to be decided by specific designs.


Although the implementation of a combat bulldozer as a small-scale model differs in mechanical, perceptual and control elements from the full-scale application, we believe that it may still help to identify the unusual situations the software agent must be capable of dealing with. Full-scale machines may be tested only at field ranges, which is time-consuming and very expensive. A small-scale model may be tested in an office environment, enabling the software developers to shorten test cycles by orders of magnitude.

Now, let's see what should be tested in order to convince ourselves of the feasibility of implementing control agents to serve as a man-machine interface. The paradigm states that the system (bulldozer) will perform autonomously as ordered (a task previously preplanned by the human operator) until some action taken autonomously by the system is seen by the human operator as performed in an unsatisfactory manner. In such a case the operator is expected to send a message to the system instructing it how to perform better. This message will actually adjust the currently running task to an updated task, probably not very different from the one being done. In a remote-control mode of operation the new order is effective "immediately", while in a supervised autonomous control mode of operation the system is assumed to intelligently adapt the last order to better achieve the same global goal. In our model this adaptation will be conducted by the control agent, which serves as the interface between the human operator and the rest of the system.


Rather than evaluate the effectiveness of this mode of operation on a full-scale, massive and complex bulldozer, which would be quite inconvenient, we decided to test the basic ideas on a small-scale model. Our goal is now reduced to a qualitative evaluation of operating the bulldozer on a task consisting of moving earth in order to close a ditch, so that following off-road vehicles are capable of crossing the area. Due to the unpredictable disturbances caused by the earth's reaction on the bulldozer's blade, this task is not trivial.


Expected situations to be tested are:

- The bulldozer moves forward with the blade placed too low. The human operator, based on his experience and some knowledge about the rock content of the working area, may decide that the blade should be placed higher, so that less earth is cut but the task may still be completed (over a longer period of time).

- The bulldozer experiences too much resistance to push the earth forward, and the human operator would prefer it to withdraw and attack the soil from a new position behind. Since the human operator is distant and cannot see how far the bulldozer is from the ditch, in case the bulldozer is really very close to the ditch it is assumed to be better practice to move backwards only after completing the action.

- After moving backwards, and because of strong disturbances, the bulldozer places itself in a position parallel to, but shifted/displaced from, the partially completed track. The maneuver needed to reposition the bulldozer is not simple, and we assume it was not implemented in the autonomous mode of operation. In this situation the human operator is requested to interfere and remotely control the vehicle while it moves to an improved starting point of operation.


In our model of operation, the primitive agents are control tasks and perform continuously in parallel, as is assumed of simple behaviors. The coordination mechanism is performed by a higher-level agent, so forming an assembly of agents. The human operator is represented by an agent which translates linguistic orders into tasks and negotiates their execution with the highest-level agent in charge of the task. At the start, the negotiation agent will build the task from primitives; later it will negotiate the change order with the relevant subassembly. The agent will be responsible for replacing the current task with a modified one, as well as for informing the higher-level agents of the change of task.
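The operator agent's translation step can be sketched as a small vocabulary lookup; the vocabulary and the (subsystem, direction) task encoding below are purely hypothetical, chosen to show the linguistic-order-to-primitives mapping.

```python
# Toy sketch of the operator agent translating linguistic orders into
# primitive tasks, to be negotiated with the highest-level agent in
# charge of the running task. The vocabulary is a made-up example.

PRIMITIVES = {
    "raise blade": ("blade", +1),
    "lower blade": ("blade", -1),
    "move back":   ("tracks", -1),
}

def translate(orders):
    """Map recognized linguistic orders to primitive tasks; unknown
    phrases are silently dropped (a real agent would ask back)."""
    return [PRIMITIVES[w] for w in orders if w in PRIMITIVES]

print(translate(["raise blade", "move back"]))
# [('blade', 1), ('tracks', -1)]
```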


For example, the human operator, while monitoring the activity, may decide to turn off a specific agent. His motivation is attention-driven, a response to unexpected events or some unpredicted detail in completing the task. If this change order is of the emergency type, it will be executed at the next time interval. If the change order is normal, it will be executed at the next best time interval. Rather than causing an agent failure or malfunction, the interface agent will negotiate the best time to make the replacement. This procedure enables the human operator to take care of all possible dead-end situations, in which autonomous activities would stop as a result of catastrophic events and remotely controlled vehicles would be able to react properly only under a very experienced human operator.


The model


Testing a full-scale bulldozer in a test range requires, apart from the investment in very expensive hardware, logistics which reduce the test repetition rate to once per month or less. At this stage we are more interested in testing the concept itself and learning about the intelligence needed to complete the task. We decided to develop an inexpensive and very small-scale model. The requirement is to enable repetitive testing in an office environment, thus enabling very short repetition times between consecutive debugging and testing procedures. Our choice was to build a model using a Lego Mindstorms (14) set. The bulldozer model is constructed basically from the tracks and blade, rigidly placed on a simple construction. After several rounds of trial-and-error development, the model (Figure 1) is capable of earth moving and displacement at a generally satisfactory level.

The model has a set of drawbacks as a consequence of using DC motors of relatively weak power, as well as of its small dimensions (length: 15 cm, width: 14.5 cm, height: 13 cm, plus a light-sensor tower of 14 cm), which strongly reduced the choice of suitable sensory equipment. Because of the low-power constraint (the mechanical design would be very clumsy with more than one gear ratio) we could not control the vehicle speed, and so we had to restrict our testing to controlling the vehicle's rotation around a perpendicular axis and raising the blade at some low speed. Nevertheless it was possible to move earth by manipulating the vehicle according to sensed data received from a simulated beacon and a CMUcam (15) placed above. The camera is a simulation of the "Flying Eye" concept of FCS (16). In a simplified mode of testing we replaced the CMUcam's outputs by direct input from the operator into the teleoperator display, as shown in Figure 4. In this simplified mode, the operator makes the change on the display according to his estimation. The model communicates with a desktop computer through the IR tower of the Lego Mindstorms set. The software running the model receives the change of the measured data in the same way it would be received from the CMUcam when connected to the serial port.


Test procedure


We decided to test the model on a simple but meaningful task of closing a ditch. This example is of practical use and enables sophisticated manipulation. The primitive agents (tasks) are: move forward/backward (for each track), raise/lower the blade (10 values), and stop. The measured variables were: direction of the beacon (accuracy of approximately 10 degrees), main-axis direction (accuracy of 15 degrees), speed (indication of move/not move), distance from the ditch (accuracy of 2 cm), and the lateral displacement of the model relative to the perpendicular to the ditch (accuracy of half the model width).
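The primitive tasks and measured variables above can be written out as a small command/state vocabulary; this is our own sketch of such a table, and the identifier names are assumptions, not those of the actual software.

```python
# The primitive command set and measured state of the model, written
# as a small vocabulary (names are illustrative).

COMMANDS = {
    "left_track":  ("forward", "backward"),
    "right_track": ("forward", "backward"),
    "blade":       tuple(range(10)),     # 10 discrete blade height values
    "stop":        (),
}

MEASURED = {                             # variable: stated accuracy
    "beacon_direction_deg": 10,
    "axis_direction_deg":   15,
    "moving":               None,        # binary move / not move
    "ditch_distance_cm":    2,
    "lateral_offset":       "half model width",
}

print(len(COMMANDS["blade"]))   # 10
```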


Conclusions


The project was carried out as an undergraduate class project without any external budget. We intend to upgrade the testing environment by replacing the Lego Mindstorms set with a vehicle of somewhat larger size, to enable at least the implementation of PID controllers for the DC motors' duty cycle.
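The planned upgrade amounts to a textbook discrete PID loop driving the duty cycle; the following is only a generic sketch, with arbitrary example gains, not a controller tuned for the actual motors.

```python
# Textbook discrete PID controller for a DC motor duty cycle.
# Gains and time step are arbitrary illustrative values.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}
    def pid(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        duty = kp * err + ki * state["integral"] + kd * deriv
        return max(0.0, min(1.0, duty))   # clamp to a valid duty cycle
    return pid

pid = make_pid(kp=0.1, ki=0.02, kd=0.01, dt=0.05)
print(pid(setpoint=100.0, measured=60.0))   # large error: clamps to 1.0
```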


The controlling agents' function, or goal, is to perform a simple task without the need to use joysticks to manually control each of the system's activities. Control is improved because the agent is capable of negotiating with other elements of the SW environment. The automation is introduced as an improved Man-Machine Interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the SW environment.

Figure 2: the model
Figure 3: model and test settlement
Figure 4

The very-small-scale experimentation, and the constraints that followed from the size and in particular from the inability to control motor power, were found to be non-negligible. Nevertheless, this very simple and inexpensive equipment enabled us to demonstrate the importance and capabilities of the multi-agent control paradigm in enabling a natural man-machine interface for any human-supervised autonomous control environment.


References

1. Unmanned Ground Vehicle Technology VI, SPIE Conference 5422, Orlando, Florida, 12-16 April 2004.
2. 1st NATO Workshop on short-term realizable (multi-)robot systems in military domains, 20-22 September 2004, Bonn, Germany.
3. DARPA Grand Challenge, http://www.darpa.mil
4. Sheridan, T. B., Telerobotics, Automation, and Human Supervisory Control, MIT Press, 1992.
5. Granot, R., "Architecture for Human Supervised Autonomously Controlled Off-road Equipment," Automation Technology for Off-road Equipment, p. 24, ASAE, Chicago, IL, USA, July 26-28, 2002.
6. Albus, J. S., 4D/RCS: A Reference Model Architecture for Demo III, NIST, March 1997.
7. Albus, J. S., "The Engineering of Mind," Information Sciences 117: 1-18, Elsevier Science Inc., 1999.
8. Meystel, A. M. and Albus, J. S., "Intelligent Systems: Architecture, Design, and Control," John Wiley & Sons Inc., 2002.
9. Granot, R., "An agent based Human Robot Interaction in a supervised autonomous system," SPIE AeroSense, 21-25 April 2003, Orlando, FL, paper #5083-45.
10. Wooldridge, M., "Intelligent Agents: Theory and Practice," http://www.csc.liv.ac.uk/~mjw/pubs/ker95/
11. Mataric, M. J., "Behavior-Based Control: Examples from Navigation, Learning and Group Behavior," Journal of Experimental and Theoretical Artificial Intelligence, special issue on Software Architectures for Physical Agents, 9(2-3), H. Hexmoor, I. Horswill, and D. Kortenkamp, eds., 323-336, 1997.
12. http://www.isd.mel.nist.gov/research_areas/
13. http://www.isd.mel.nist.gov/projects/USAR/index.html
14. Lego Mindstorms, http://mindstorms.lego.com/eng/default.asp
15. CMUcam, http://www-2.cs.cmu.edu/~cmucam/
16. Stentz, T., Kelly, A., Herman, H., Rander, P., Amidi, O., Mandelbaum, R., "Integrated Air/Ground Vehicle System for Semi-Autonomous Off-Road Navigation," AUVSI Symposium, July 9-11, 2002.




