

Nov 14, 2013


Human Robot Collaboration: A
literature review and Augmented
Reality approach in design.


Scott A. Green, Mark Billinghurst, XiaoQi Chen & J. Geoffrey
Chase


Gijs Hofstee

Contents



Communication and collaboration


Human Robot Interaction


Robots in collaborative tasks


Augmented reality for human-robot collaboration


Research directions in human-robot collaboration

Communication and collaboration



Understanding human-human communication is the first step to
understanding how a robot should communicate.

Human-Human collaboration


Verbal


speech


Non-verbal


Gaze


Gesture


Looking before pointing

Human-Human collaboration


Grounding


Reaching common ground: a shared understanding.


This is an absolute requirement for any
communication during collaboration.

Human-Human collaboration model


3 channels:


Audio


Voice/sounds


Visual


Imagery


Environmental


The world


Human Robot Interaction


Usage of robots:


Tools


Guides


Assistants


Robots as tools


Usually partially autonomous (to varying
degrees)


When not running autonomously, they are
teleoperated.


Tends to suffer from a lack of situational
awareness


Ohba et al. (1999) built a system to avoid
collisions in a multi-robot, multi-operator
environment by using a predictive graphic
that draws the arms bigger than they
actually are.



Chong, Kotoku et al. (2001) used force
feedback to achieve the same.
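The "draw the arms bigger than they are" idea above can be sketched as inflated bounding spheres checked against predicted positions. This is only an illustrative sketch, not the authors' actual system; the safety margin, the linear motion prediction, and all names here are assumptions.

```python
import math

# Illustrative sketch: each arm is approximated by one bounding sphere,
# inflated by a safety margin so a warning fires before real contact.
SAFETY_MARGIN = 0.15  # metres of "drawn bigger" padding (assumed value)

def predicted_position(pos, vel, dt):
    """Linear prediction of where a sphere centre will be after dt seconds."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def spheres_collide(c1, r1, c2, r2, margin=SAFETY_MARGIN):
    """True if the two inflated spheres overlap."""
    return math.dist(c1, c2) < (r1 + margin) + (r2 + margin)

def predictive_warning(arm_a, arm_b, dt=0.5):
    """Check the *predicted* poses, so operators are warned ahead of time."""
    pa = predicted_position(arm_a["pos"], arm_a["vel"], dt)
    pb = predicted_position(arm_b["pos"], arm_b["vel"], dt)
    return spheres_collide(pa, arm_a["radius"], pb, arm_b["radius"])
```

Because the check runs on predicted rather than current positions, two arms converging on the same point trigger a warning while they are still apart.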


Robots as guides


Experiments show that humans spend half
the time looking at the “face” of the robot.


It has also been shown that robots are
considered more "efficient" at communicating
when they mimic human behavior.


Cero shows the importance of a human-like
appearance

Cero

Hosting Robots


Looks at audience


Follows the gaze of conversation partners


Points at objects


Can use regular speech

Humanoid robots


Should be able to learn


Mimic human behavior


Communicate both verbally and non-verbally.


Should be able to understand what is
being pointed at.


Example: Leonardo

Leonardo

Robots in collaborative tasks


Robots need to understand the difference
between “your” right and “my” right.


Additionally, they need to understand "left
of", "here", and "there".


Fernandez et al. (2001) used force
feedback to share intentions between
robot and human.
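The "your right" vs "my right" problem is, at its core, a frame-of-reference transformation: a body-relative direction only becomes unambiguous once it is rotated into a shared world frame. A minimal sketch, assuming 2D yaw-only headings (all names and conventions here are illustrative, not from the paper):

```python
import math

# Sketch: resolve a body-relative direction ("my right") into the world
# frame by rotating it through the speaker's heading (yaw, in radians).
def body_to_world(direction, heading):
    """Rotate a body-frame (x = forward, y = left) vector into the world frame."""
    x, y = direction
    c, s = math.cos(heading), math.sin(heading)
    return (c * x - s * y, s * x + c * y)

RIGHT = (0.0, -1.0)  # "my right" in my own body frame (y axis points left)

# A human facing east (heading 0) and a robot facing west (heading pi):
human_right = body_to_world(RIGHT, 0.0)      # points one way in the world
robot_right = body_to_world(RIGHT, math.pi)  # points the opposite way
```

The two results are opposite world directions, which is exactly why a robot must know whose frame a spatial instruction is expressed in before acting on it.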

Augmented Reality (AR) for human-robot
collaboration


Key characteristics of Augmented Reality
interfaces:


Combine real and virtual objects


Virtual objects appear registered to the real
world


Virtual objects can be interacted with in real
time.
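The "registered to the real world" characteristic can be made concrete with a pinhole-camera sketch: a virtual object has a fixed world position, and its pixel location is recomputed from the current camera pose every frame, which is what keeps the overlay anchored. The intrinsics and the translation-only pose below are simplifying assumptions for illustration.

```python
# Sketch of AR registration with a pinhole camera model.
# Intrinsics (fx, fy, cx, cy) are made-up illustrative values.
def project(point_cam, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a camera-frame 3D point to pixel coordinates."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera: nothing to draw
    return (fx * x / z + cx, fy * y / z + cy)

def world_to_camera(point_world, cam_pos):
    """Translation-only pose for brevity; a full system also applies rotation."""
    return tuple(p - c for p, c in zip(point_world, cam_pos))

virtual_obj = (0.0, 0.0, 2.0)  # anchored 2 m in front of the world origin
pixel = project(world_to_camera(virtual_obj, (0.0, 0.0, 0.0)))
```

As the camera moves, the same world point lands on different pixels, so the virtual object appears fixed in the real scene rather than glued to the screen.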

AR qualities


The ability to enhance reality


Seamless interaction between real and
virtual environments


The ability to share remote views or a
bird's-eye view.


Spatial cues, for local and remote
collaboration


Support for moving smoothly from reality
into virtuality.

AR qualities 2


AR allows for spatial dialog and gestures,
even when robots and humans cannot
physically see each other.

AR in collaborative applications


Makes use of a head-mounted display.


Can make use of cards with special
images to modify the environment
(ARToolKit)


Sometimes allows switching from AR to
Virtual Reality (VR)

AR in collaborative applications 2

Other uses include:


Virtual meetings


The sharing of an object (like a map, or a
diagram) to look at

Demonstration video


http://www.youtube.com/watch?v=4XZC76lQ2hc&feature=related


Mobile AR


AR can also be applied almost anywhere.


It does, however, require detailed
localization of the user (GPS + camera).


Possible uses include a real/virtual tour in
a city, or route planning.


The fastest route without ever
looking at maps again.
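The GPS side of mobile AR navigation reduces to computing distances from the user's fix to route waypoints, so the system knows which virtual arrow to show next. A minimal sketch using the standard haversine great-circle formula; the coordinates and function names are illustrative assumptions.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def nearest_waypoint(fix, waypoints):
    """Index of the route waypoint closest to the current GPS fix."""
    return min(range(len(waypoints)),
               key=lambda i: haversine(*fix, *waypoints[i]))
```

A real system would fuse this coarse GPS position with the camera-based tracking mentioned above, since GPS alone is far too imprecise to register overlays at street level.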

Possible issues


Data management


The device should contain all local data


A proper way to display only the relevant part
needs to be determined


The system should be able to obtain more data


Registration of objects


Detection must work under different lighting
conditions, etc.


The system must still work if it loses its
connection.
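The data-management bullets above can be sketched as a small cache layer: the device holds all local data, shows only the part near the user, and keeps serving from the cache once the connection drops. This is a hedged sketch under those assumptions; the class and field names are made up for illustration.

```python
import math

class PoiStore:
    """Device-side store of points of interest for a mobile AR client."""

    def __init__(self, local_data):
        self.cache = list(local_data)  # "the device should contain all local data"
        self.online = True

    def fetch_more(self, new_data):
        """Extend the cache while online; a no-op once the link is lost."""
        if self.online:
            self.cache.extend(new_data)

    def visible(self, user_pos, radius):
        """Display only the part of the data near the user."""
        ux, uy = user_pos
        return [p for p in self.cache
                if math.hypot(p["x"] - ux, p["y"] - uy) <= radius]
```

Because `visible` reads only from the local cache, losing connectivity degrades the system to stale data rather than failure, which is the behavior the slide calls for.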


Using AR in human-robot collaboration

Going back to the most important point of
human-human communication: grounding.


Robots can use AR to help this process by
using the environment channel to show
where they plan to go, and what they plan
to do, in addition to the audio channel.


AR also lets robots use the environment
itself to better show their intentions.

Research directions in human-robot collaboration


What are interesting directions of
research?


These are, of course, closely related to the
three channels (audio, visual, and
environmental).

Audio channel


There are already many speech
recognition and speech synthesis
programs; however, there seems to be a
lack of dialog management. (What to say,
when to say it, and in relation to what?)


The ability to give feedback on a situation
is also important here.


Being able to understand humor greatly
helps, but is even harder to do.

Environmental channel


A lot of work has already been done here,
but a lot of fine-tuning still awaits.


Pointing at objects, references to objects,
"left of", "here", are all things that come
naturally to humans but are still hard for
robots to do properly.


Interacting with objects to show intentions
or explain things falls in this category too.

Visual Channel


Different views and transitions from reality
to virtuality have already been developed.


Facial expression and body language,
however, are still areas with much
to gain.

General research in AR


Effective audio and video transmission.


Proper ways to handle (hardware)
communication failures


Tracking techniques need to improve, and
become less affected by light and shadows.


To be useful, AR needs to work in
unprepared environments ("in the wild"). A lot of
testing and experimenting still needs to be done
here.


Questions?