
SMC, Oct. 2003

Jean Scholtz

Siavosh Bahrami





Human-Robot Interaction:
Development of an Evaluation
Methodology for the Bystander
Role of Interaction



Our research premise


One key element of HRI is the information
that the user needs for effective and efficient
interactions


Different roles of interaction require different
information


Investigating the needs of different roles is a
way to subdivide the research



Roles


Supervisor


Monitors one or more robots and determines when any given robot
needs attention. Tasks the robot at a high level.


Operator


Needs an inside-the-robot view to help the robot accomplish a
particular task.


Mechanic or developer


Applies a software or hardware fix to the robot. Interacts to test the
solution.


Teammate


Interacts with the robot at a task level.


Bystander


No training, but needs to co-exist in the same environment as the
robot.


Our Research Efforts


Supervisory role


both in the search and rescue
and in the driving domain (off- and on-road)


Operator role


search and rescue and off-road driving


Bystander role


discussed in this paper


Bystanders


Exist to some extent in desktop computing as
well


Computerized ordering


Grocery self-checkout


ATMs


Interactions are directed either by the system itself or by posted
directions


HRI Bystander Evaluation


Mental or conceptual model


Identifies parts of the system


Explains what components do


Allows prediction of behavior


Mental models that users build up are NOT
complete and may be erroneous


Designers of systems also have a mental
model


These two models rarely match!



Metrics for our Evaluation


Predictability of behavior


Degree of match between user’s model of behavior and
actual behavior of robot


Capability awareness


Degree of match between user’s model and actual
functionality of robot


Interaction awareness


Degree of match between users’ model and actual set of
interactions possible


User satisfaction


How did the user like interacting with the robot?
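
The first three metrics are each a "degree of match" between a user's mental model and the robot's reality. As a minimal sketch, such a match could be scored as set overlap; the interaction names and the Jaccard-overlap scoring rule here are illustrative assumptions, not the scoring actually used in the paper:

```python
# Hypothetical sketch: scoring a "degree of match" metric as Jaccard overlap
# between the interaction set a user believes exists and the robot's actual
# interaction set. Names and scoring rule are assumptions for illustration.

def match_score(user_model, actual):
    """Jaccard overlap: 1.0 means the user's model matches reality exactly."""
    if not user_model and not actual:
        return 1.0
    return len(user_model & actual) / len(user_model | actual)

actual_interactions = {"voice", "buttons", "vision"}  # the robot's real interface
reported = {"voice", "vision", "remote control"}      # one user's (partly wrong) model

print(match_score(reported, actual_interactions))  # 2 shared / 4 total = 0.5
```

The same function could score predictability (sets of behaviors) or capability awareness (sets of functions) by changing what the sets contain.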




Experimental Design

Behavior type               | Examples
expected, consistent (EC)   | walking; playing with a pink ball; sitting down
unexpected, consistent (UC) | talking; dancing; waving
expected, inconsistent (EI) | same as expected, consistent, but with a certain degree of random behavior
unexpected, inconsistent (UI) | same as unexpected, consistent, but with a certain degree of random behavior

Sony Aibo
- Voice commands
- Buttons
- Visual interaction

Determine effects of expectations and
consistency on ability to form mental models



Experiment


20 participants


Randomly assigned to one of 4 behaviors


Ages 19–25, 10 males, 10 females


SURF participants


5 had experience with limited interactive robot toys


Protocol


Asked participants how they thought they could interact


Explained to them the actual interactions


Gave them 10 minutes to play with robot


Asked them to explain what interactions resulted in what
behaviors


User satisfaction questionnaire
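
The random assignment above can be sketched as follows. A balanced 5-per-condition split and the seeded RNG are assumptions for illustration; the slide only says assignment was random:

```python
# Hypothetical sketch: randomly assigning 20 participants to the four
# behavior conditions (EC, UC, EI, UI), balanced at 5 per condition.
import random

CONDITIONS = ["EC", "UC", "EI", "UI"]

def assign(participants, seed=0):
    rng = random.Random(seed)
    # Build one slot per participant, 5 of each condition, then shuffle.
    slots = CONDITIONS * (len(participants) // len(CONDITIONS))
    rng.shuffle(slots)
    return dict(zip(participants, slots))

groups = assign([f"P{i:02d}" for i in range(1, 21)])
print(list(groups.values()).count("EC"))  # 5 participants per condition
```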


Interaction Awareness

Interactions    | Number of participants
Voice           | 11
Buttons         | 6
Vision          | 10
Touch/pet       | 4
Remote control  | 7
Smell           | 1

Number of interactions correctly identified | Number of participants
0 | 3
1 | 9
2 | 6
3 | 2


Predictability

[Figure: bar chart of participants' predictability scores (0–20 scale) by condition: EC, EI, UC, UI]

User Satisfaction


16/20 reported they enjoyed the experience


2/20 reported the experience was okay for a short
period of time


2/20 said it was boring


Positive comments


Voice interaction


Negative comments


More dog-like behaviors


Better voice understanding


Multiple commands


Cancel a command


Issues


Robustness of interactions


Unexpected behaviors


Participants asked how to get the dog to do more dog-like things


Visual interaction was difficult


Knowing when robot was no longer tracking ball


Difficult to make sure participants tested all
interactions


Difficult for participants to describe behaviors


Difficult to recall behaviors in the inconsistent conditions


Discussion


Methodology looks appropriate for measuring mental
models


Interaction awareness


Capability awareness (not called out in this paper)


Predictability of behavior


Reran experiment this summer


Eliminated visual interactions


Developed logging scheme to track interactions


Initial experiment to rank doglike vs. undoglike behaviors


Added sounds to experiment
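
The logging scheme mentioned above could look something like this minimal sketch: timestamped records of each interaction a participant attempts, so the experimenter can verify that every interaction type was tested. The CSV format and field names are assumptions, not the authors' actual logger:

```python
# Hypothetical sketch of an interaction log: one timestamped CSV row per
# interaction attempt, keyed by participant and interaction type.
import csv
import io
import time

def log_interaction(writer, participant, interaction):
    writer.writerow([f"{time.time():.3f}", participant, interaction])

buf = io.StringIO()
log = csv.writer(buf)
log_interaction(log, "P01", "voice")
log_interaction(log, "P01", "buttons")

rows = buf.getvalue().splitlines()
print(len(rows))  # two logged interaction records
```

Replaying such a log after a session shows which of the robot's interactions a participant never tried, addressing the "difficult to make sure participants tested all interactions" issue noted earlier.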




New Results


Consistent groups scored 10–20% better at
recall and matching than other groups


Consistent groups also felt that robot was
more predictable and easier to interact with


For inconsistent behaviors, adding expected
sounds increased realism of behaviors


Conclusions / Plans


Evaluation methodology for bystander role
shows promise


Next steps are to move this to robots with
different form factors where there may not be
an “expected” set of behaviors