Adaptive Automation for Robotic Systems: Theoretical and Human Performance Issues



HFM-135

NATO/PFP UNCLASSIFIED


Keryl Cosenzo

US Army Research Laboratory

AMSRD-ARL-HR-SC

APG, MD 21015-5425

kcosenzo@arl.army.mil

Raja Parasuraman

George Mason University

MS 3F5, 4400 University Dr

Fairfax, VA 22030-4444

rparasur@gmu.edu

Michael Barnes

US Army Research Laboratory

AMSRD-ARL-HR-MY

2250 Healy Ave, Ste 1172

Ft. Huachuca, AZ 85613-7069

mbarnes@arl.army.mil

Abstract


Modern warfare systems have increased in complexity in response to a progressively more multifaceted, unpredictable, and dangerous world. In particular, ground and aerial automated systems have changed the tenor of the battlefield. Robotic systems are becoming an essential part of the Army's force. They are intended to extend manned capabilities, be force multipliers, and, most importantly, save lives; however, the addition of robotic systems will likely increase, or certainly change, the Soldier's cognitive workload. Automation is a possible solution to this cognitive workload issue. We propose the use of adaptive systems that use flexible automation strategies to account for the ever-changing combat environment. This paper presents supporting research, results from two multitasking studies in human-robot interaction, and our rationale for the implementation of adaptive automation in this environment. Finally, we discuss ongoing research in terms of its theoretical and Soldier performance implications for designing adaptive algorithms as part of the crew interface for remote targeting with robotic systems.

1.0 INTRODUCTION

Modern warfare systems have increased in complexity (i.e., enhanced capabilities and broader operating requirements) in response to increasingly unpredictable and dangerous world circumstances. Ground and aerial automated systems have become salient elements on the battlefield and thus have been part of the changing tenor of the battlefield. With automated systems, Soldiers can be out of harm's way and will be able to access portions of the battlespace by proxy. Using robotic systems, they will be able to venture into areas of the battlespace previously unavailable, effectively increasing their firepower and multiplying their intelligence gathering capabilities. However, technology has its price. Despite the prevalent use of the term "unmanned" to describe robotic systems, the implied assumption that such systems are fully autonomous is not the case: manning requirements will remain to perform robotic system supervision and control tasks. Robotic systems will increase, or change, rather than decrease Soldiers' cognitive workload on the battlefield. The Soldier will have to perform driving, security, reconnaissance, and combat tasks while coordinating, and in some cases controlling, multiple robotic systems. This multitasking environment will not only make it difficult to conduct robotic tasks but will also decrease the Soldier's cognitive capacity needed to attend to the immediate environment [1, 2, 18].

A possible solution to the cognitive workload issue is to automate segments of the Soldier's tasks; however, automation decisions are difficult because of the volatility and unpredictability of the battlefield environment. As the environment changes, optimal automation strategies must change as well. As a result, static automation

put in place at the system design phase may not be sufficiently robust to contextual changes in the environment. We are proposing the use of adaptive systems to employ flexible automation strategies as a function of the changing environment and, importantly, the changing operator state as well. Adaptive automation refers to an automation design concept in which task allocation and coordination between human operators and automated systems are flexible and context dependent [15, 22, 23, 29]. In this paper we outline the motivations behind our development program of adaptive automation for robotic systems, explicate the theoretical issues that drive it, compare various adaptive options, and discuss preliminary results that will be used to establish performance guidelines for implementing adaptive automation as a mitigation strategy.

2.0 MULTITASKING AND MULTIPLE ROBOTIC SYSTEMS IN COMBAT ENVIRONMENTS

In the future, the military force will include a variety of unmanned systems, ranging from small teleoperated ground robots and utility vehicles for logistics support and casualty extraction to 6-ton vehicular robots with gun systems and smaller unmanned aerial vehicles (UAVs). Integrating this "team" of robotic systems with the Soldier teams will be a key component of their tactical deployment. The systems are expected to be most useful when they are used synergistically. For example, a UAV has a panoramic view of the battlespace, whereas a small unmanned ground vehicle (SUGV) can view spaces hidden from both the operator and the UAV. Armed robotic vehicles (ARVs) can augment both systems by providing defense with their automated weapons and anti-tank capacity. Our experimental focus is on control of the approximately 6-8 ton ARV and a Class I or Class II UAV, all of which can be controlled either from a dismounted position or from an armored vehicle. We are assuming that operators are heavily loaded with these robotic control or management tasks, including the communications necessary to coordinate activities. Additionally, a critical consideration will be ensuring local security, which requires continuous awareness of the threats to the operator's own vehicle. Mitchell [18] modeled the cognitive workload involved in mounted control of ARVs and concluded that the gunner/ARV operator would be heavily loaded when local security was a priority in addition to the ARV tasks. Operators would be engaged in using the ARV for remote targeting when they were most vulnerable to threats in their own vehicle's immediate environment. Adaptively automating some of the gunner's tasks may alleviate the multitasking workload problem.

2.1 Supporting Research

There are a number of human-robot interaction issues that need to be addressed to allow operators to conduct their mission while using multiple robotic systems. An often repeated goal is that one operator will control/monitor multiple robotic assets. To explore this issue, Rehfeld and colleagues at the University of Central Florida created a scale model replica of an Iraqi urban area with multiple roads, buildings, vehicles, and crowd scenes [28]. The general task was to monitor a robot's progress via a video feed sent from the robotic vehicle and report on specific pre-briefed targets in the urban scenes. Different combinations of number of operators and number of robots were examined. The Reserve Officers' Training Corps (ROTC) participants had a difficult time reporting the military targets; even the more senior ROTC students found it difficult. More surprisingly, the ideal ratio was two operators to one vehicle; the participants in this condition found nearly 200% more targets than a single operator, and teams always performed more effectively than single operators. Note that control of the robot was not an issue, since the operator simply planned the robot routes through the urban streets and then the robots were automatically "driven" on the planned route. In general, affording the operators a second vehicle did not improve performance [28]. Results suggest that with one operator it is difficult to interpret all the information received by the robot. By having additional operators, target identification performance was improved; multiple operators were viewing, sharing information, and

interpreting the same video feed from the robot. However, when the ratio was two operators to two robots, performance was no better than in the one operator to one robot condition. Each operator chose a robot to monitor, and in essence the task was completed the same as in the one operator to one robot condition. This finding also reaffirmed real world data collected during rescue missions for the victims in the collapsed World Trade Center [20]. The key factor in the successful rescue mission was understanding the video information received from the rescue robots concerning locations of possible victims. Thus, as found in Rehfeld et al.'s research, the more operators involved, the more information that can be gathered about a situation using a single robotic asset.

Chen, Durlach, Sloan, and Bowens [5] obtained similar findings in a simulation study on controlling single or multiple unmanned assets (i.e., UGV, UAV) from a mounted armored vehicle interface. Participants used either one (UGV or UAV) or three (mixed) unmanned assets to complete a remote targeting task. In addition to the number of assets available, Chen et al. manipulated the control modality for the UGV asset only; participants controlled the UGV by teleoperating it with a joystick or by waypoint control. Not surprisingly, when the UGV was the only asset, teleoperation was less effective than waypoint control. The mixed condition showed a pattern similar to the Rehfeld et al. [28] study. Participants were not effective at using multiple assets, and having additional assets for targeting resulted in only minimal improvements in targeting (Figure 1). Even in the mixed condition, the preferred tactic was to rely on the UAV alone rather than to attempt an integrated strategy with the UGV. Both studies are worrisome. They indicate that even if robotic assets have minimal control requirements (pre-mission waypoint control), remote targeting is a difficult task for the operator. What is more, neither study factored in the realistic multitasking requirements for operators or the stress of combat, in which case performance would likely be worse. Based on Mitchell's [18] modeling results, Chen and her colleagues recently completed a second study in which they investigated having the gunner/ARV operator perform remote targeting with a UGV and UAV in more realistic multitasking environments. The findings showed that robotic control and targeting are difficult and that mitigation strategies such as automation need to be developed if robotic operations are to be accomplished by a single operator in multitasking environments.





Figure 1. Mean Targets Lased As a Function of Control Modality and Number of Assets

2.2 Adaptive Automation

Both for engineering and future army force efficiency, there is a premium on reducing crew size for armored vehicles. A possible solution is to automate as many operator tasks and functions as


possible; however, research indicates that automation introduces its own problems. Improperly implemented automation can have disastrous consequences, and the literature is replete with examples of misuse and disuse of automated systems [10, 19, 26]. In particular, complacency effects, misuse of systems, and performance decrements caused by the operator being out of the loop all suggest that automation should be implemented carefully to ensure that operators maintain an awareness of the battlespace and their robotic assets and do not overly or improperly rely on the automated systems [8, 11, 25, 30]. Supporting a high level of awareness is crucial in a battlefield environment.

Adaptive automation has been proposed as a mitigating strategy to minimize the potential human performance costs of automation [23]. Adaptive automation uses mitigation criteria that drive an invocation mechanism or "trigger" to maintain an effective mixture of operator engagement and automation for a dynamic multitask environment (Figure 2). The invocation mechanism is triggered by whatever measurement process is used to represent the current state of the operator and/or task. If properly instrumented, the results of the measurement process should be displayed to operators in order to keep them informed of the state of the invocation process.

To be effective, the invocation process must be sensitive to the operator's combined tasking environment, which depends on interactions among tasks, the environment, and the operator state (e.g., workload).











Figure 2. Example of closed-loop adaptation for A (automated), A/M (automated/manual), and M (manual) task sets
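The closed loop in Figure 2 (measurement of task effects feeding an invocation mechanism that reallocates tasks among the A, A/M, and M sets) can be illustrated with a minimal sketch. The miss-rate workload proxy, the threshold values, and the function names below are illustrative assumptions, not part of the actual system:

```python
# Hypothetical sketch of the closed-loop adaptation in Figure 2.
# The workload proxy and thresholds are illustrative assumptions.

def measure_task_effects(missed_events: int, total_events: int) -> float:
    """Proxy for operator workload: fraction of recent events missed."""
    return missed_events / total_events if total_events else 0.0

def invocation_mechanism(workload: float) -> str:
    """Map the measured state onto an allocation: M, A/M, or A."""
    if workload < 0.2:
        return "M"      # operator handles the task manually
    elif workload < 0.5:
        return "A/M"    # shared control: automation assists the operator
    else:
        return "A"      # automation takes over the task

# One pass around the loop: measurement -> invocation -> allocation.
workload = measure_task_effects(missed_events=6, total_events=10)
allocation = invocation_mechanism(workload)
print(allocation)  # a high miss rate shifts the task toward "A"
```

In a fielded system the measurement stage would of course draw on richer performance and physiological indices than a simple miss rate.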


The effects of adaptive automation have been examined in a number of studies [13, 14, 16, 22, 32]. These studies have shown that, compared to static or non-adaptive automation, adaptive automation can reduce or eliminate some of the human performance costs of automation, including unbalanced workload [15], complacency and reduced situation awareness [25], and cognitive skill loss [12]. However, with a few exceptions [e.g., 24], adaptive automation has been examined in the context of aviation and process control, not human-robot interaction. Accordingly, there is a need to examine its efficacy for the specific problems faced by human operators supervising multiple robotic systems.

The method of invocation is an important issue in adaptive automation. There are four major invocation methods for automation [23]. In the critical-events method, automation is invoked only when certain tactical environmental events occur [1]. For example, in an aircraft air defense system, the beginning of a "pop-up" weapon delivery sequence leads to the automation of all defensive measures of the aircraft. If the critical events do not occur, the automation is not invoked. This method is tied to current tactics and doctrine


established during mission planning. A disadvantage of the method is its possible insensitivity to dynamic real-time system and human operator performance. The critical-events method will invoke automation irrespective of whether or not the pilot requires it when the critical event occurs. One potential way to overcome this limitation is to measure operator performance and physiological activity [23]. In the operator performance measurement and operator physiological assessment methods, operator mental states (e.g., mental workload, or more ambitiously, operator intentions) may be inferred from performance or other measures. The measures are used as inputs for the adaptive logic [e.g., 4, 27]. The Defense Advanced Research Projects Agency Augmented Cognition program is currently attempting to validate the use of such physiological techniques for real-time adaptive automation based on assessment of operator states [34]. In the human operator modeling method, the operator states and performance are modeled theoretically and the adaptive algorithm is driven by the model parameters. The hybrid method combines one or more of these different invocation techniques, so that the relative merits of each method can be exploited in order to minimize operator workload and maximize performance.
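As a concrete illustration, the critical-events and operator-performance measurement methods could be combined into a hybrid trigger along the following lines. This is a hypothetical sketch: the event names, the miss-rate measure, and the threshold value are assumptions, not parameters taken from any of the cited systems:

```python
# Hypothetical hybrid invocation trigger combining the critical-events
# method with operator performance measurement, as described above.
# Event names and the threshold value are illustrative assumptions.
from typing import Optional

CRITICAL_EVENTS = {"pop_up_threat", "weapon_delivery_sequence"}

def hybrid_trigger(event: Optional[str], miss_rate: float,
                   miss_threshold: float = 0.4) -> bool:
    """Invoke automation when a critical tactical event occurs OR the
    operator's measured performance has degraded past a threshold."""
    critical_event = event in CRITICAL_EVENTS    # critical-events method
    degraded = miss_rate > miss_threshold        # performance method
    return critical_event or degraded

print(hybrid_trigger("pop_up_threat", 0.1))  # True: critical event fires
print(hybrid_trigger(None, 0.5))             # True: performance degraded
print(hybrid_trigger(None, 0.2))             # False: neither condition met
```

Even in this toy form, the appeal of the hybrid method is visible: either information source can compensate for the other's blind spots.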

2.3 Experimentation

The adaptive automation process is more complex than simply unloading (or engaging) the operator of a task, irrespective of the invocation process. To be effective, the invocation process must be sensitive to the operator's combined tasking environment, which depends on interactions among tasks as well as overall workload, stress, and safety considerations [36]. The effectiveness of automation is examined by looking at its impact on human performance, mental workload, and situation awareness [22].


To gain an understanding of the operator constraints in the supervision of robotic assets and adaptive automation, the Army Research Laboratory and George Mason University developed a simulation test bed, Robotic NCO (Figure 3). The Robotic NCO simulation isolates the cognitive requirements of just the tasks in which robotic assets are involved from the larger military environment. The simulation requires operators to complete three military tasks from the same display space: UAV sensor use for target detection, UGV monitoring, and communications. Operators perform either the UGV or UAV task, switching between the displays using the designated buttons when one task or the other demands their attention. During the simulation, the UAV gets electronic intelligence hits from possible targets, which are displayed in the UAV view as white squares. When the operator sees a target, he or she zooms in on that location and identifies the possible target, which is then displayed on the situation awareness (SA) map. At the same time, the UGV moves through the area via pre-planned waypoints. The UGV stops at various times and the UGV status bar flashes. The operator then switches to the UGV display. The UGV stops for two reasons: it has reached a reconnaissance point or an unknown obstacle. The operator either resumes the UGV along its pre-planned path or reroutes the UGV, depending on the reason for the stop. Communications are also continuously received during the simulation. The operator is prompted (both visually and auditorily) at various times for UGV/UAV status updates and locations of targets. The operator also has to perform a communications task in which he or she hears call signs at random intervals that are either ignored or, if the operator's own call sign, acknowledged.










Figure 3. Robotic NCO Simulation

2.3.1 Effect of Task Difficulty on Performance and Situation Awareness in the Robotic NCO Simulation

Through experimentation with the Robotic NCO simulation we are investigating the effects of controlling multiple unmanned systems on workload and on performance of primary and secondary tasks. Cosenzo, Parasuraman, Novak, and Barnes [7] had participants conduct a reconnaissance mission using the Robotic NCO simulation. As in the Rehfeld et al. study [28], active (telerobotic) robotic control was not a factor, since operators used waypoints to direct the UGV. The participants used the UAV and UGV to identify enemy units in the area and to respond to communications. With the information received from the UAV and UGV, operators were asked to choose a safe path for a platoon to take through the reconnaissance area.

The first experiment examined the effects of task difficulty on performance, workload, and SA (operationalized according to Kaber & Endsley [16]) in the Robotic NCO simulation. Three potential drivers of task load were manipulated: the number of UAV targets to be identified (low = 10 targets, high = 20 targets), the number of UGV stops and requests for operator assistance (low = 5 stops/requests, high = 7 stops/requests), and the uncertainty associated with the number of high-priority communications (16 high-priority messages out of a total of 20 in the low uncertainty condition, 4 out of 20 in the high uncertainty condition). During the mission, the communications window also issued SA probe questions to the participant. At the end of each simulated mission, participants were asked to use the information received from the UAV and UGV to choose a safe path for a platoon through the reconnaissance area. In summary, the experiment was a 2 x 2 x 2 within subjects design, the factors manipulated being the number of UAV targets, the number of UGV requests, and the uncertainty of high-priority communications. Multivariate analyses of variance and subsequent analyses of variance were conducted to examine the effects of task load on performance.
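The 2 x 2 x 2 within-subjects design can be enumerated directly; the factor levels follow the description above, while the variable names are illustrative:

```python
# Enumerate the 2 x 2 x 2 within-subjects design of Experiment 1.
# Factor levels follow the text; variable names are illustrative.
from itertools import product

uav_targets = {"low": 10, "high": 20}     # UAV targets to identify
ugv_requests = {"low": 5, "high": 7}      # UGV stops/requests
comm_uncertainty = ("low", "high")        # high-priority comms uncertainty

# Cartesian product of the factor levels: every participant sees all cells.
conditions = list(product(uav_targets, ugv_requests, comm_uncertainty))
print(len(conditions))  # 8 experimental conditions
```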

Overall, the results showed that participants were good at integrating information received from the UAV and UGV to choose the platoon path. However, performance on the individual tasks was diminished due to the multitasking requirements of the Robotic NCO simulation. The results in Figure 4 show that participants generally took longer to respond to high-priority communications when they also had to identify many UAV targets (UAV Targets x UGV Requests x Communication, F(1,16) = 5.73, p = .02). Additionally, when the uncertainty of high-priority communications was high, participants took longer to respond to communications



when they had many UAV targets and UGV requests. This result appears to indicate that high-priority but infrequently occurring communications pose a particularly high monitoring load on the operator, as suggested by recent studies of vigilance and monitoring in semi-automated systems [32]. Participants also took longer to respond to UGV obstacles when there were many UAV targets to identify (UAV Targets x UGV Requests, F(1,15) = 17.96, p = .00), indicating that the combined load of processing both tasks was a major contributor to the operator workload. An additional result was that when there were fewer targets to be identified with the UAV, overall SA was higher (UAV Targets x UGV Requests x Communications, F(1,15) = 12.91, p = .00). In addition, trends in the data showed that comprehension of the situation (Level 2 SA) was better when the uncertainty of communications was low. In general, performance and situation awareness were compromised when task load was high (i.e., many UAV targets and UGV requests).

Figure 4. Mean Reaction Time to Low and High-Uncertainty Communications As a Function of UAV and UGV Load

2.3.2 Effects of Task Difficulty on Change Detection Performance in the Robotic NCO Simulation

In the first experiment we identified the drivers of task load in the Robotic NCO simulation and showed that the task difficulty manipulations provided some evidence of performance decrement on individual tasks. Results from the first experiment did not yield evidence for major areas of human performance that could be mitigated with automation. One reason, of course, is that the Robotic NCO simulation already implicitly includes considerable automation. For example, operators used waypoint control rather than teleoperation for the UGV task. Also, the operator did not have to plan the paths, so in effect, operators were assisted with an automated path planner. In addition, the UAV task was largely automated, apart from target identification. Another possibility is that the performance measures (e.g., overall accuracy and speed of UAV target identification) that we used were not sufficiently sensitive to transient or dynamic changes in workload and SA that might be better captured with other measures. One such measure is change detection performance. People often fail to notice changes in visual displays when they occur at the same time as various forms of visual transients [33]. Savage-Knepshield and Martin [31] and Durlach [9] have shown that this "change blindness" phenomenon, which has typically been demonstrated for basic laboratory tasks or for staged real-world activities such as sports or social interaction, also occurs with complex visual displays used in various military command and control environments. In Experiment 2 a change detection measure was included to assess operator attention allocation during a Robotic NCO mission.

In Experiment 2 we dropped the UGV request manipulation, using only what was previously the high UGV condition (7 requests), and varied only the number of UAV targets (low = 10, high = 20) and the uncertainty of the high-priority communications (low or high) as before. We embedded a change detection measure into


the SA map of the Robotic NCO interface. At unpredictable times during the simulated mission, and after the SA map had been populated to a degree, an icon on the SA map changed its location. Participants were instructed that such changes might occur and that if they noticed them, to press the space bar. Only a simple detection response was required, not identification or recognition. On the basis of the extensive change blindness literature [33], we predicted that change detection performance would be especially poor when a visual transient was present, in the present case, when the UGV stopped and requested assistance from the operator, in which case the UGV status bar flashed. However, in a complex visual display where many items compete for attention, change detection performance may be poor even without such visual transients, due to the need for attention to be allocated to many different sub-tasks, windows, and display locations. We therefore predicted that change detection accuracy would be low even in the absence of an explicit display transient, although not as low as with the UGV flash event. To test this prediction, we also included change events during the UAV task, when no explicit visual transient was present. Multivariate analyses of variance and subsequent analyses of variance were conducted to examine the effects of task load on performance.

Results showed that change detection accuracy was extremely low, ranging from 9% to 44%. More specifically, change detection performance was low in the UAV task (averaging about 35%) but significantly higher than in the transient UGV task, where it averaged about 13% (main effect for UAV Targets, F(2,14) = 4.83, p = .02). Subsequent ANOVAs showed a significant effect of UAV Targets on change detection performance for UAV/COMMS related changes, F(1,15) = 6.16, p = .02, but not for UGV related changes, F < 1.0. Third, change detection in the UAV task was lower when the high-difficulty, high-uncertainty communications condition (low number of priority messages) was combined with a high number of UAV targets (see Figure 5). In other words, UAV target load reduced change detection performance when the operator was also loaded with monitoring infrequent communications messages.

In summary, these results show that the change detection measure was sensitive to the hypothesized effects, being greater for UGV transients than for changes during non-transient events (e.g., the UAV task). Moreover, the measure was sensitive to UAV target load. Overall, the results suggest that the change detection measure could be used to assess the possible enhancing effect of automating one of the Robotic NCO tasks, as there is considerable "room for improvement" with these performance levels.



Figure 5. Mean Number of Changes Detected As a Function of UAV and Communications Uncertainty


3.0 CONCLUSIONS

3.1 Future Research

The research with the Robotic NCO simulation by Cosenzo et al. [6, 7] and the simulation research by Chen and colleagues [5] have identified several tasks to consider for automation or aiding: target identification and robotic control. We are currently evaluating the effects of the various automation methods (i.e., operator performance method, modeling method, critical-events method) on operator performance. That is, we are examining the influence of model-based, performance-based, and critical-events-based adaptive automation on operator performance. In the model-based adaptive automation, an automatic target recognition (ATR) system is invoked after a fixed number of missed events. The participant receives a message indicating that the ATR is being invoked and that they will no longer be responsible for identifying targets. The mission continues as before, with the operator carrying out the UGV, communications, and change detection tasks. In the performance-based adaptive automation, the ATR is invoked for the UAV task when a predefined performance threshold is met, but not otherwise. In the critical-events-based automation, the automation (i.e., ATR) is invoked based on task difficulty. Follow-on studies will compare the effects of the various types of automation on Soldier performance while we vary the multitasking environment, including the reliability of the proposed aids [30]. Change detection performance will be used as one measure of the automation's effectiveness. If the automation does function as predicted and decreases the operators' workload, this should in turn free up resources so that they can attend to other tasks or changes in their environment.
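The three invocation rules being compared might be sketched as follows; the threshold values and level names here are illustrative assumptions, not the parameters used in the actual studies:

```python
# Hypothetical sketch of the three ATR invocation rules described above.
# Thresholds and difficulty labels are illustrative assumptions.

def invoke_atr(method: str, missed_events: int = 0,
               hit_rate: float = 1.0, task_difficulty: str = "low") -> bool:
    """Return True when the automatic target recognition (ATR) system
    should take over the UAV targeting task."""
    if method == "model":
        # model-based: fire after a fixed number of missed events
        return missed_events >= 3
    if method == "performance":
        # performance-based: fire when measured accuracy degrades
        return hit_rate < 0.5
    if method == "critical_events":
        # critical-events: fire when the tactical situation is demanding
        return task_difficulty == "high"
    raise ValueError(f"unknown invocation method: {method}")

print(invoke_atr("model", missed_events=4))                   # True
print(invoke_atr("performance", hit_rate=0.8))                # False
print(invoke_atr("critical_events", task_difficulty="high"))  # True
```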

Another important issue we will evaluate is the question of who is in control of adaptation. When adaptive changes are initiated by the system, the result is an adaptive system; when they are implemented by the human operator, the system is termed adaptable [21, 32]. There is an ongoing debate over whether adaptive or adaptable automation is more efficient and more acceptable to users [3, 17, 32, 35], and empirical research will be needed to decide the issue. On the one hand, research indicates that adaptive (system-initiated) automation yields significant system and human performance benefits, but some users may be unwilling to accede to the authority of the system. On the other hand, Vanderhaegen et al. [35] showed that in an air traffic control setting, controllers preferred using a decision aiding tool at times of their own choosing (adaptable automation), but system performance was better when the system provided the tool during expected periods of high workload (adaptive automation). If these results generalize to robotic systems (and it is not clear that they do, because so little research has been done), they pose a dilemma: adaptive systems can boost system performance, but operators may dislike them and may disable them; users may prefer adaptable automation, but such a system may not provide optimal benefits. The key to resolving this dilemma may lie in developing adaptable automation that places only moderate demands on the operator for managing allocation decisions. Miller and Parasuraman [17] have proposed the concept of delegation interfaces, in which operators have the flexibility to choose different operating points along the adaptable-adaptive continuum, depending on contextual demands, workload, and other factors. Preliminary evidence in support of the delegation interface concept was reported by Parasuraman et al. [24] in a simulation study of multi-robot supervision.
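To make the continuum concrete, the sketch below models a delegation interface as an operator-selected authority level that gates when a decision aid may engage. It is purely illustrative: the level names, the 0-1 workload scale, and the 0.7 threshold are our own assumptions, not elements of the delegation interfaces studied in the work cited above.

```python
from dataclasses import dataclass
from enum import Enum

class DelegationLevel(Enum):
    """Hypothetical operating points along the adaptable-adaptive continuum."""
    MANUAL = 0       # operator invokes all automation (fully adaptable)
    CONSENT = 1      # system proposes an aid; operator must approve it
    VETO = 2         # system engages the aid unless the operator vetoes
    AUTONOMOUS = 3   # system engages the aid on its own (fully adaptive)

@dataclass
class AidRequest:
    engage: bool          # should the decision aid turn on now?
    needs_approval: bool  # must the operator confirm before it acts?

def request_aid(level: DelegationLevel, workload: float,
                threshold: float = 0.7) -> AidRequest:
    """Decide whether a decision aid should engage, given the
    operator-chosen delegation level and an estimated workload in [0, 1]."""
    high_load = workload >= threshold
    if level is DelegationLevel.MANUAL:
        # Fully adaptable: the system never self-initiates.
        return AidRequest(engage=False, needs_approval=True)
    if level is DelegationLevel.CONSENT:
        return AidRequest(engage=high_load, needs_approval=True)
    # VETO and AUTONOMOUS both self-initiate; a veto window, if any,
    # would be handled by the interface, not by this decision rule.
    return AidRequest(engage=high_load, needs_approval=False)
```

The operator moves toward the adaptive end of the continuum simply by selecting a higher delegation level; the decision rule itself never changes.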

For these reasons, we are in the process of developing an adaptive automation architecture for future armored vehicles that will control ARVs and monitor UAVs, among other important military functions. The objective of the architecture is to partition the multitasking environment into operator tasks that should remain manual, tasks that can be fully automated, and tasks that are candidates for adaptive automation. Based on the architecture, we intend to conduct both laboratory research and field exercises with trained operators to investigate the performance effects of various adaptive schemes and compare them to fully automated and manual implementations. Performance measures will include not only the various multitasking functions, including remote targeting with robotic assets, but also the operator's ability to maintain both overall and selective situation awareness.
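The partitioning idea can be sketched as a simple triage over candidate tasks. The task names, ratings, and thresholds below are hypothetical illustrations, not part of the actual architecture:

```python
def partition_tasks(tasks):
    """Partition tasks by how safety-critical they are and how reliably
    automation performs them (both rated 0-1; thresholds are illustrative)."""
    manual, automated, adaptive = [], [], []
    for name, criticality, automation_reliability in tasks:
        if automation_reliability < 0.5:
            manual.append(name)        # automation too unreliable to trust
        elif criticality < 0.3:
            automated.append(name)     # low-stakes: automate fully
        else:
            adaptive.append(name)      # hand off only under high workload
    return manual, automated, adaptive

# Hypothetical tasks with made-up ratings (criticality, reliability):
tasks = [
    ("fire authorization", 0.9, 0.4),
    ("route replanning",   0.2, 0.9),
    ("UAV target search",  0.6, 0.8),
]
```

The adaptive bucket is the interesting one for this program: those are the tasks whose allocation would shift between operator and automation at run time.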

3.2 General Conclusions

The studies conducted to date have identified some of the major issues, and the preliminary results indicate that adaptive automation may be a useful mitigation strategy for offsetting the potentially deleterious effects of high cognitive load on Army robotic operators in a multitasking environment. Several important issues remain. In addition to identifying candidate tasks for automation, a strategy for implementing adaptive automation needs to be developed. Under what conditions should automation be used? What should trigger the automation? How much authority should be assigned to the automation versus the operator? Are reliable physiological triggers practical in our environment? We will frame our adaptive architecture around these research issues. We intend to develop increasingly realistic simulations as we come to understand the efficacy of adaptable and adaptive options in multitasking environments. Based on our laboratory results, we intend to validate promising adaptive candidates using prototype crew stations during realistic field exercises. The goal of the program is to allow future combat vehicle operators to conduct remote targeting with aerial and ground robotic systems in a multitasking, high-stress environment while maintaining sufficient combat awareness to ensure their survival.
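As one illustration of what a workload-based trigger might look like, the sketch below engages an aid when a smoothed workload estimate (which could be fed by a normalized physiological index) crosses a high threshold, and releases it only after the estimate drops below a lower one; the hysteresis prevents rapid on/off cycling that would itself disrupt the operator. The thresholds and smoothing constant are assumptions for illustration, not values from our studies.

```python
class WorkloadTrigger:
    """Illustrative adaptive-automation trigger with hysteresis."""

    def __init__(self, engage_at=0.75, release_at=0.55, smoothing=0.3):
        self.engage_at = engage_at    # engage aid above this level
        self.release_at = release_at  # release aid below this level
        self.smoothing = smoothing    # exponential moving-average weight
        self.level = 0.0              # smoothed workload estimate, 0-1
        self.engaged = False

    def update(self, sample: float) -> bool:
        """Feed one workload sample (0-1); return True while the aid
        should be engaged."""
        # Exponential smoothing damps momentary spikes in the raw signal.
        self.level += self.smoothing * (sample - self.level)
        if not self.engaged and self.level >= self.engage_at:
            self.engaged = True
        elif self.engaged and self.level <= self.release_at:
            self.engaged = False
        return self.engaged
```

In a real system the sample stream, thresholds, and smoothing would all have to be calibrated per operator and per task, which is precisely one of the open research questions listed above.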

4.0 REFERENCES

[1] Barnes, M., & Grossman. The intelligent assistant for electronic warfare systems (NWC TP 5885). China Lake, CA: US Naval Weapons Center; 1985.

[2] Barnes, M., Parasuraman, R., & Cosenzo, K. Adaptive automation for robotic military systems. Submitted chapter in The Emerging Force in Uninhabited Military Vehicles (UMVs). North Atlantic Treaty Organization, Human Factors and Medicine Panel (Committee on UMVs); 2006.

[3] Billings, C.E., & Woods, D. Concerns about adaptive automation in aviation systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Current research trends. Lawrence Erlbaum Associates, Hillsdale, NJ; 1994: 264-269.

[4] Byrne, E.A., & Parasuraman, R. Psychophysiology and adaptive automation. Biological Psychology 1996; 42: 249-268.

[5] Chen, J. Y. C., Durlach, P. J., Sloan, J. A., & Bowens, L. D. Robotic operator performance in simulated reconnaissance missions (ARL Technical Report). APG, MD: Army Research Laboratory; in press.

[6] Cosenzo, K., Parasuraman, R., Bhimdi, T., & Novak, A. Adaptive automation for control of unmanned vehicles: Simulation platform for rapid prototyping and experimentation. Proceedings of the 1st Augmented Cognition Conference; July 26-28, 2005.

[7] Cosenzo, K.A., Parasuraman, R., Novak, A., & Barnes, M. Implementation of adaptive automation for control of robotic systems (ARL-TR-3808). APG, MD: Army Research Laboratory; 2006.

[8] Dixon, S., & Wickens, C. D. Imperfect automation in unmanned aerial vehicle flight control (AHFD-03-1/MAAD-03-1). Savoy, IL: University of Illinois Research Lab; 2003.

[9] Durlach, P. Change blindness and its implications for complex monitoring and control systems design and operator training. Human-Computer Interaction 2004; 19: 423-451.

[10] Dzindolet, M.T., Pierce, L.G., Beck, H.P., & Dawe, L.A. The perceived utility of human and automated aids in a visual detection task. Human Factors 2002; 44: 79-94.

[11] Endsley, M. Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and application. Erlbaum Associates, Mahwah, NJ; 1996: 163-179.

[12] Farrell, S., & Lewandowsky, S. A connectionist model of complacency and adaptive recovery under automation. Journal of Experimental Psychology: Learning, Memory, and Cognition 2000; 26: 395-410.

[13] Hancock, P. A., Chignell, M. H., & Lowenthal, A. An adaptive human-machine system. Proceedings of the IEEE Conference on Systems, Man and Cybernetics 1985; 15: 627-629.

[14] Hilburn, B., Jorna, P.G., Byrne, E.A., & Parasuraman, R. The effect of adaptive air traffic control (ATC) decision aiding on controller mental workload. In M. Mouloua & J. Koonce (Eds.), Human-automation interaction: Research and practice. Erlbaum Associates, Mahwah, NJ; 1997: 84-91.

[15] Inagaki, T. Adaptive automation: Sharing and trading of control. In E. Hollnagel (Ed.), Handbook of cognitive task design. Erlbaum Associates, Mahwah, NJ; 2003.

[16] Kaber, D. B., & Endsley, M. The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science 2004; 5: 113-153.

[17] Miller, C., & Parasuraman, R. Designing for flexible interaction between humans and automation: Delegation interfaces for supervisory control. Human Factors; in press.

[18] Mitchell, D. K. Soldier workload analysis of the Mounted Combat System (MCS) platoon's use of unmanned assets (ARL-TR-3476). APG, MD: Army Research Laboratory; 2005.

[19] Mosier, K.L., & Skitka, L.J. Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and application. Lawrence Erlbaum Associates, Mahwah, NJ; 1996: 163-176.

[20] Murphy, R. R. Human-robot interaction in rescue robotics. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, special issue on human-robot interaction 2004; 34.

[21] Opperman, R. (Ed.). Adaptive user support. Lawrence Erlbaum Associates, Hillsdale, NJ; 1994.

[22] Parasuraman, R. Designing automation for human use: Empirical studies and quantitative models. Ergonomics 2000; 43: 931-951.

[23] Parasuraman, R., Bahri, T., Deaton, J.E., Morrison, J.G., & Barnes, M. Theory and design of adaptive automation in aviation systems (Technical Report No. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division; 1992.

[24] Parasuraman, R., Galster, S., Squire, P., Furukawa, H., & Miller, C. A flexible delegation-type interface enhances system performance in human supervision of multiple robots: Empirical studies with RoboFlag. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 2005; 35: 481-493.

[25] Parasuraman, R., Molloy, R., & Singh, I. L. Performance consequences of automation-induced "complacency." The International Journal of Aviation Psychology 1993; 3: 1-23.

[26] Parasuraman, R., & Riley, V. Humans and automation: Use, misuse, disuse, abuse. Human Factors 1997; 39: 230-253.

[27] Prinzel, L.J., Freeman, F.G., Scerbo, M.W., Mikulka, P.J., & Pope, A.T. A closed-loop system for examining psychophysiological measures for adaptive automation. International Journal of Aviation Psychology 2000; 10: 393-410.

[28] Rehfeld, S. A., Jentsch, F. G., Curtis, M., & Fincannon, T. Collaborative teamwork with unmanned ground vehicles in military missions. Proceedings of the 1st Augmented Cognition Conference; July 26-28, 2005.

[29] Rouse. Adaptive aiding for human-computer control. Human Factors 1988; 30: 431-441.

[30] Rovira, E., McGarry, K., & Parasuraman, R. Effects of imperfect automation on decision making in a simulated command and control task. Human Factors; in press.

[31] Savage-Knepshield, P. A., & Martin, J. A human factors field evaluation of a handheld GPS for dismounted Soldiers. Proceedings of the Human Factors and Ergonomics Society 39th Annual Meeting; 2005.

[32] Scerbo, M. Adaptive automation. In W. Karwowski (Ed.), International encyclopedia of human factors and ergonomics. Taylor and Francis, London, England; 2001.

[33] Simons, D. J., & Ambinder, M. S. Change blindness: Theory and consequences. Current Directions in Psychological Science 2005; 14: 44-48.

[34] St. John, M., Kobus, D. A., Morrison, J. G., & Schmorrow, D. Overview of the DARPA augmented cognition technical integration experiment. International Journal of Human-Computer Interaction 2004; 17: 131-149.

[35] Vanderhaegen, F., Crévits, I., Debernard, S., & Millot, P. Human-machine cooperation: Toward an activity regulation assistance for different air traffic control levels. International Journal of Human-Computer Interaction 1994; 6: 65-104.

[36] Wickens, C., & Hollands, J. Engineering psychology and human performance. Upper Saddle River, NJ: Prentice Hall; 2000.