HCC: Human-Driven Spatial Language for Human-Robot Interaction


Agency: NSF
Funding: $499,512 (HCC Program)
PI: Skubic
Dates: 9/1/2010 to 8/31/2013
Award #: IIS-1017097

MOTIVATION

When people communicate with each other about spatially oriented tasks, they more often use qualitative spatial references rather than precise quantitative terms, for example, “Your eyeglasses are behind the lamp on the table to the left of the bed in the bedroom.” Although natural for people, such qualitative references are problematic for robots that “think” in terms of mathematical expressions and numbers. Yet, providing robots with the ability to understand and communicate using these spatial references has great potential for creating a more natural interface mechanism for robot users. This would allow users to interact with a robot much as they would with another human, and is especially critical if robots are to provide assistive capabilities in unstructured environments occupied by people.
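
To make the contrast concrete, the sketch below shows one hypothetical way a robot could ground a qualitative reference such as “the object to the left of the lamp” in numeric scene coordinates. The object names, coordinates, reference-frame convention, and scoring rule are illustrative assumptions only, not the spatial referencing algorithms proposed here.

# Illustrative sketch only: grounding "the object to the left of the lamp" in
# scene geometry. All names, coordinates, and the scoring rule are invented
# for illustration; they are not the project's algorithms.
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    x: float  # meters, robot-centered frame; robot at the origin facing +x
    y: float

def left_of_score(target: SceneObject, reference: SceneObject) -> float:
    """Score (0..1) how well 'target is to the left of reference' fits,
    taking 'left' as +y when the robot faces +x (an assumed convention)."""
    angle = math.atan2(target.y - reference.y, target.x - reference.x)
    deviation = abs(math.atan2(math.sin(angle - math.pi / 2),
                               math.cos(angle - math.pi / 2)))
    return max(0.0, 1.0 - deviation / (math.pi / 2))

# Resolve the reference by scoring every candidate object against the lamp.
lamp = SceneObject("lamp", 2.0, 0.0)
candidates = [SceneObject("eyeglasses", 2.0, 0.7),
              SceneObject("book", 2.1, -0.6),
              SceneObject("cup", 3.0, 0.1)]
best = max(candidates, key=lambda obj: left_of_score(obj, lamp))
print(best.name)  # -> eyeglasses

A real system would of course need to handle many relations, 3D geometry, and the reference-frame and object-feature effects examined in the human subject studies described below.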

PROPOSED RESEARCH

The proposed project will bring together an interdisciplinary team from Notre Dame and the University of Missouri, leveraging previous NSF-funded work. Within the discipline of Psychology, Carlson has studied the human understanding of spatial terms. Within Computer Science and Engineering, Skubic has investigated spatial referencing algorithms and spatial language for human-robot interfaces. We propose to leverage this independent prior work to further a joint effort towards a deeper understanding of human spatial language and the creation of human-like 3D spatial language interfaces for robots. Human subject experiments will be conducted with college students and elderly participants to explore spatial descriptions in a fetch task; results will drive the development of robot algorithms, which will be evaluated using a similar set of assessment experiments in virtual and physical environments.


Project Objectives:

• To empirically capture and characterize the key components of spatial descriptions that indicate the location of a target object in a 3D immersive task embedded in an eldercare scenario, focusing particularly on the integration of spatial language with reference frames, reference object features, complexity, and speaker/addressee assumptions.

• To develop and refine algorithms that enable the robot to produce and comprehend descriptions containing these empirically determined key components within this scenario (an illustrative sketch of description generation follows this list).

• To assess and validate the robot spatial language algorithms in virtual and physical environments.
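
As an illustration of the generation side of the second objective, the hypothetical sketch below ranks a few candidate spatial terms against scene geometry and phrases a description from the best-fitting one. The terms, the assumed robot-centered reference frame, and the phrasing template are assumptions for illustration only, not the algorithms to be developed.

# Illustrative sketch only: producing a qualitative description by ranking
# candidate spatial terms. Terms, frame, and scoring are assumptions.
import math

def term_scores(target_xy, reference_xy):
    """Score four coarse terms for how well each describes the target's
    position relative to the reference. Assumes the robot sits at the
    origin and the reference object lies straight ahead along +x, so
    'behind' means on the far side of the reference from the robot."""
    angle = math.atan2(target_xy[1] - reference_xy[1],
                       target_xy[0] - reference_xy[0])
    ideals = {"behind": 0.0, "to the left of": math.pi / 2,
              "in front of": math.pi, "to the right of": -math.pi / 2}
    scores = {}
    for term, ideal in ideals.items():
        deviation = abs(math.atan2(math.sin(angle - ideal),
                                   math.cos(angle - ideal)))
        scores[term] = max(0.0, 1.0 - deviation / (math.pi / 2))
    return scores

def describe(target_name, target_xy, reference_name, reference_xy):
    """Return a short description built from the best-fitting term."""
    scores = term_scores(target_xy, reference_xy)
    best_term = max(scores, key=scores.get)
    return f"The {target_name} is {best_term} the {reference_name}."

# Example: cup at (3.0, 0.1), lamp at (2.0, 0.0), robot facing +x.
print(describe("cup", (3.0, 0.1), "lamp", (2.0, 0.0)))
# -> "The cup is behind the lamp."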

INTELLECTUAL MERIT

The proposed 3-year project will advance the state of the art in (1) the fundamental understanding of the influence of reference frames, reference object features, complexity, and speaker/addressee assumptions on human spatial language; (2) the development and validation of robot algorithms for the generation and comprehension of 3D spatial language in a class of assistive tasks, driven directly by human subject studies; and (3) a better understanding of how robots can and should be used for this class of assistive tasks in an eldercare scenario.

BROADER IMPACTS

The proposed project will train graduate students in an interdisciplinary setting. Undergraduate students will play an important role in robotics work (at Missouri) and in conducting human subject experiments (at Notre Dame). Research results will be incorporated into courses and thus impact numerous other students. The PIs will continue their history of recruiting women and underrepresented minorities, mentoring students through the McNair program, and providing excellent role models for students interested in science and technology. The work will provide a unique dataset for human-robot interaction. The research results will likely have applicability to help people with various disabilities.


Keywords: spatial language, spatial cognition, human-robot interaction