Life in the Atacama: Remote Science Investigation Tools and Human Performance [DRAFT]



Peter W. Coppin

Robotics Institute

Carnegie Mellon University

5000 Forbes Ave., Pittsburgh,
PA, 15213, USA

David Wettergreen

Robotics Institute

Carnegie Mellon University

5000 Forbes Ave., Pittsburgh,
PA, 15213, USA

Geb Thomas

Department of Industrial and
Mechanical Engineering

University of Iowa

2404 Seamans Center, Iowa
City, IA

52242, USA


Abstract

In this paper we describe an overall method for interacting with an autonomous exploration rover to enable geologists, biologists and other non-engineers to engage in scientific exploration. From the EventScope Lab in Pittsburgh, Pennsylvania, USA, a science team communicated exploration requests to Zoë and explored data returned during traverses through the Atacama Desert of northern Chile. We developed innovative ways for scientists to browse data and to communicate exploration requests within orbital 3D models of the desert. Innovations included the ability to create and record “Exploration Templates” to represent different exploration strategies; tools to track data back to original requests through each stage of the exploration process, enabling increased communication between science and engineering teams; and a triangulation tool to estimate the position of the rover based on landmarks visible from the rover. We observed the existing work practices of the science team to inspire our design and incorporated past techniques used for other rover missions/expeditions such as NASA’s Mars Exploration Rover mission. Observation and analysis reveal that many tools increased science effectiveness. The paper concludes with lessons learned and future directions to increase rover science effectiveness, such as blends between automatic feature detection systems and Exploration Template tools, the idea of studying scientist-generated information graphics/illustrations to inspire better interfaces, and depth information symbols for 2D panoramic imagery.

Categories and Subject Descriptors

H.5.2 [User Interfaces]

General Terms

Management, Measurement, Documentation, Performance,
Design, Economics, Reliability, Experimentation, Human
Factors, Standardization, Verification.


Keywords

Human-robot interaction, Human-computer interaction, Telepresence, Virtual presence, Remote experience, Science interface, Information interface



Introduction

The ‘Life in the Atacama’ (LITA) field experiment presented us with a unique set of challenges. Scientists in the EventScope Lab in Pittsburgh (Fig. 1) needed an overall method for interacting with an autonomous rover in Chile, called Zoë (Fig. 2), to enable them to remotely search for life and habitats in the Atacama Desert [Cabrol et al., 2007; Wettergreen et al., 2007]. This application paper describes our solutions to the particular problem of interacting with this exploration rover over a distance and under its constraints, and explains what methods and interfaces were developed and how they were used.

We have created a method for a remote science team to interact with a remote exploration rover by enabling them to (1) determine where it is located, (2) examine its surroundings and observations, (3) identify and organize its measurements, and (4) produce a complete plan and route for it to follow in the next exploration cycle.

Our goal was to design an intuitive interface that would let
scientists focus on science and not on the logistical difficulties
of communicating with Zoë.

We have determined, through quantitatively based observation and analysis as well as qualitative observation and anecdotal evidence, that science effectiveness was increased through these information interfaces [Pudenz et al., 2006].

Unique challenges: The LITA field experiment introduced new challenges not faced by past rover missions or terrestrial field experiments. First, unlike in past and current Mars exploration missions, Zoë was traversing “beyond the horizon”, beyond the visual range of its panoramic sensors and without human intervention [Wettergreen et al., 2007]. Scientists, therefore, were forced at all times except during morning pre-drive operations to select exploration areas using orbital imagery rather than ground-based imagery returned from Zoë, which only communicated with the science team once per day [Cabrol et al., 2007]. A significant part of our challenge was developing interfaces to support this new type of rover exploration.

Figure 1. The science operations center (left), and Figure 2.
The Zoë rover (right).

Second, not all science team members were familiar with rover exploration techniques. With primary specialties in biology, bioengineering, ecology, geology, and geophysics, we sought to build intuitive human-robot interfaces that mirrored scientists’ existing work habits.


Background: description of rover science

Several groups supported science operations. The science team worked in the teleoperations center, the engineering team worked in the desert with the rover, and the EventScope team worked primarily with the science team in the teleoperations center designing the communication interfaces they would use to communicate with Zoë and its engineers. One to two engineers contributed to the activities associated with the daily downlink to assist the science team and respond to technical questions related to the rover operations if necessary.

The science team began a typical day of rover science operations by reviewing both orbital data products of the region surrounding the rover and data returned by the rover the previous day. In late afternoon/evening, data was downlinked from the field and loaded into various visualization tools in the teleoperations center.

The science team reviewed the data and created a science plan
within the EventScope uplink interface. The plan was then
reviewed by
the team and, once consensus was reached,
uplinked to the field.


Background: the science operations center

The EventScope center was designed to support several types of science investigation, including remote biology and geology. The centerpiece of the center was a Science Activity Station, modeled after MER Science Activity Stations [Norris et al., 2002], that consisted of two desktop workstations and a dual projector system. There was space for the EventScope support team, members of the rover engineering team, and observers. The center was fitted with video and audio recording equipment used by onsite observers, wireless Internet access, and teleconferencing units.


Tool Development

The development of exploration tools for a rover exploration seeking microbial life was an iterative process founded on interaction, observation, and discussion with the science team. Although we used observations of other rover information interfaces [Backes et al., 2003; Edwards et al., 2005] as starting points, LITA introduced the challenge of navigating an autonomous rover over long distances by selecting exploration areas from orbital imagery rather than from ground-based imagery returned from the rover [Wettergreen et al., 2007]. In other words, most destinations were far outside of the visual range of the rover’s panoramic images. Therefore, our main goal was to understand how to best communicate exploration goals to a semi-autonomous rover and retrieve the information it returned, and how to optimize mission productivity in searching for life with a rover. Answers were provided by observing the way scientists worked using their existing tools (such as maps, spreadsheets, PowerPoint presentations and reports) and by observing scientists using tools that we created. For example, the science team used maps and orbital images of the Atacama, transparent overlays and pins to record observations and to mark areas that they wanted the rover to search. These concepts formed the basis for the “virtual pin” paradigm used in the uplink interface to send requests to the rover that will be described later in this paper.

We observed and interacted with scientists throughout a three-year development and testing cycle for the rover exploration system. The first year, scientists deployed the previously developed Hyperion rover as part of an initial “shakedown trial” of the LITA project [Wettergreen et al., 2005]. Zoë, a new rover, was constructed for the second and third years of the project [Wettergreen et al., 2007]. Though some interface features were built based on prior observations of other rover interfaces, many advances occurred by observing and interacting with the science team over the three-year period.

In the following sections of this paper, we describe our techniques for implementing and measuring the efficacy of each of the four tools we developed. Some results were based on qualitative anecdotal evidence and personal observations. However, by the second year of LITA, the GROK Lab (Graphical Representation of Knowledge Lab at the University of Iowa) initiated a project that specifically measured and analyzed science effectiveness in rover field experiments [Thomas et al., 2007]. Science effectiveness can be defined as a quantitative measure of the ability to increase scientific productivity [Pudenz et al., 2006].

Due to LITA’s interface development schedule, the EventScope team initiated and modified most tools based on direct observation or discussion with science team members and other observers. GROK Lab’s analysis is important for understanding exactly how the tools were used from a quantitative perspective and how future tools should be designed.



After watching scientists collaborate and create hypotheses using paper map print-outs of orbital data products during the year one shakedown trials, we conjectured that a human-robot interface based on a push-pin map metaphor (Fig. 3 and 4) might effectively mirror their current working practices.

Figure 3. Virtual pins (top), and Figure 4. Using pins to direct the rover through a Digital Elevation Model (bottom).

Through the interface, scientists placed "pins" into proposed search locations on a computer-based 3D Digital Elevation Model (DEM) of the exploration area. By right-clicking on the pin with a mouse, science team members could assign rover actions to each pin (e.g., "take panoramic image," "take microscopic image," etc.). By specifying locations with pins, and by assigning actions to the pins/locations, the scientists could designate proposed search areas to Zoë and then later review the retrieved data using our Science Web Site in our lab and on their own computers as shown in Fig. 1.

The interface allowed scientists not only to assign actions to pins but also to edit pin actions, to “copy and paste” actions from one pin to another, and to “copy” a rover locale (a pin). Copying pins enabled the science team to create exploration “templates”, or sequences of actions that could be re-used. Similar to SAP for MER (Cabrol, personal communication, September 14, 2006), a progress bar at the bottom of the interface predicted the amount of data and time required for a specific sequence of exploration request actions. This updated as pins and actions were assigned within the interface.

Our software converted pin locations and actions assigned to the 3D virtual pins into rover coordinates and placed them into four discrete files: a machine-readable .xml file for Zoë, a human-readable text file for the rover engineering team, a screenshot of the pin locations on the orbital image for the science team, and an EventScope 3D Virtual Environment Terrain file for the EventScope team. Clicking a button in EventScope would transfer all of these files into a web-accessible repository and then to the engineering team and Zoë in the Atacama.
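The conversion from pins to plan products can be sketched as follows. This is a minimal illustration, not LITA's actual file schema: the `Pin` structure and the element names (`plan`, `locale`, `action`) are assumptions, and only two of the four products (the machine-readable XML and the human-readable text plan) are shown.

```python
from dataclasses import dataclass, field
from xml.etree import ElementTree as ET

@dataclass
class Pin:
    """A virtual pin: a proposed rover locale plus the actions assigned to it."""
    pin_id: str
    x: float          # locale position in rover coordinates (converted from the DEM)
    y: float
    actions: list = field(default_factory=list)

def export_plan(pins):
    """Emit two of the four plan products described above: a machine-readable
    XML plan for the rover and a human-readable text plan for the engineers."""
    root = ET.Element("plan")
    text_lines = []
    for p in pins:
        locale = ET.SubElement(root, "locale", id=p.pin_id, x=str(p.x), y=str(p.y))
        for action in p.actions:
            ET.SubElement(locale, "action").text = action
        text_lines.append(f"{p.pin_id} ({p.x}, {p.y}): " + "; ".join(p.actions))
    return ET.tostring(root, encoding="unicode"), "\n".join(text_lines)
```

Keeping both products derived from the same pin list is what lets a single button click stay consistent across the rover, engineering, and science audiences.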


Uplink interface results

Most participants agreed that the creation and design of an uplink interface in the second year dramatically improved the science team’s ability to communicate exploration requests to the rover, thereby increasing science effectiveness.

We conducted a two-week study of science operations in the second year to quantitatively measure the science effectiveness of our interfaces. In the first week, scientists did not have access to the “pin” interface or the templates; in the second week, they did. Without these interfaces, the science team indicated exploration requests to the rover engineering team in a text file.

Although we recorded no significant change in the amount of time spent programming the rover during the study, the number of tasks created in the rover plan was significantly higher in the second week than in the first week of the mission after scientists gained the ability to create exploration templates. The time-to-task ratio was substantially lower for the second week, by approximately a factor of four [Pudenz et al., 2006].

We can infer that templates significantly increased the ability to plan, since the science team created tasks 4 times faster in week 2 than in week 1 [Pudenz et al., 2006]. Variables that could disrupt this inference would be the new ability to use the “pin” interface, the fact that not all team members participated in science operations during the first and second week, and the learning curve of gaining familiarity with interface tools. However, it appears that the template had the most effect on saving time during science operations [E. Pudenz, personal communication, August, 2006].

Our measurement of scientific statements during the two-week study provides further evidence for this inference. We defined a statement as a scientifically significant utterance made by a member of the science team. For example, a comment like “That rock is red” would count as one statement in our measurements [E. Pudenz, personal communication, September 14, 2006]. Scientists generated nearly the same number of statements in week 2 of the study as in week 1, but in that time the number of bytes returned by the rover dropped by approximately a factor of 2. Thus, the number of statements per byte increased from week 1 to week 2 by approximately a factor of 2. Additionally, there was a clear shift in the scientists’ attention away from the orbital imagery information towards the rover information [Pudenz et al., 2006].
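The statements-per-byte proxy reduces to simple arithmetic; a sketch with illustrative numbers (the study's actual counts are not given here, only the week-over-week ratios):

```python
def statements_per_byte(statement_count, bytes_returned):
    """The proxy discussed above: scientifically significant utterances
    divided by the volume of data the rover returned."""
    return statement_count / bytes_returned

# Illustrative numbers only: similar statement counts in both weeks,
# but roughly half the data volume in week 2.
week1 = statements_per_byte(400, 2_000_000)
week2 = statements_per_byte(400, 1_000_000)
ratio = week2 / week1   # roughly 2, matching the factor-of-two increase reported
```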




Once data was returned, the science team needed a method to quickly and easily access data, often from their own computers, during the expedition, while they were off-site, or after the expedition. Our downlink interface solution underwent three distinct phases of development.

A public version of LITA’s final downlink interface can be
viewed at the Science Data


The Atacama Science Web Site

After attempting several different strategies for data presentation, in addition to techniques to link pin requests with returned data, in the third year we fully integrated a web-based solution called the Atacama Science Web Site. In designing the site, we were inspired by the presentation of science data in several science team members’ science presentations [P. Coppin, observational photograph of presentations by L. Marinangelie et al., Summer 2004]. The Atacama Science Web Site (Fig. 5) runs on most standard computers that use commercially available web browsers such as Firefox, Mozilla and Internet Explorer. We organized the site to support several browsing modes, such as: by location (called “Locales”), by day, and by data type (panoramic images, microscopic images, fluorescence images, spectra, etc.). Some data was downloadable as prepackaged zip files for use in specialized tools on science team members’ laptops, and more data was also available in RAW (non-.jpg) format.

Iterative testing of the system during Operational Readiness Tests (ORTs) and during actual rover operations revealed the need for additional tools, which were then built into the site.

Figure 5. The Atacama Science Website.



In Phase 3 we introduced key innovations in the interface for panoramic images returned from Zoë. In Phase 2, science team members had to click each individual low-resolution preview image to download the high-resolution version, a time-consuming process that may have led to less thorough scrutiny of the tiles. Data logs from the Phase 2 web interface show that during this time the team looked at a limited amount of high resolution data in initial passes [Glasgow et al., 2005]. In response, the EventScope team developed the “Scanorama” interface (Fig. 6) so that the science team could browse full resolution data by simply dragging a square over a lower resolution version of the entire pan, which caused higher resolution images to load in a separate window. Thus, the team was more likely to look at and process more data initially, allowing them to make more informed choices about the next day's plan [E. Pudenz, personal communication, August, 2006].
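The core of a Scanorama-style viewer is mapping the drag-square on the preview to the set of high-resolution tiles to fetch. A minimal sketch, assuming the full pan is cut into fixed-size square tiles; the actual EventScope tiling scheme is not described in the text:

```python
def tiles_for_window(window, preview_size, full_size, tile_size):
    """Map a drag-square on the low-resolution preview to the indices of the
    full-resolution tiles that must be fetched and shown in the zoom window.
    window is (left, top, right, bottom) in preview pixels."""
    scale_x = full_size[0] / preview_size[0]
    scale_y = full_size[1] / preview_size[1]
    left, top, right, bottom = window
    # Corners of the selection in full-resolution pixel coordinates.
    fx0, fy0 = left * scale_x, top * scale_y
    fx1, fy1 = right * scale_x, bottom * scale_y
    return [(tx, ty)
            for ty in range(int(fy0 // tile_size), int(fy1 // tile_size) + 1)
            for tx in range(int(fx0 // tile_size), int(fx1 // tile_size) + 1)]
```

Fetching only the tiles under the square is what makes dragging across a multi-hundred-megapixel pan feel immediate over a slow link.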

Scanorama was in part a direct response to an event observed in Phase 2, in which rare but prominent biological features (e.g., plants) escaped the notice of scientists until well after Zoë had driven past them, rendering initial observations of the plant site almost impossible.

Figure 6. The "Scanorama" interface.

One caveat: Glasgow and Pudenz [personal communication, September 14, 2006] theorize that while initial interface design may have been a factor in missing the plant, another factor may have been “inattentional blindness”, a naturally occurring perceptual phenomenon [Mack and Rock, 1998] which happens to “almost” everyone whether exploring an environment through their own eyes or through an information interface [Pudenz, personal communication, September 14, 2006]. In other words, inattentional blindness arises from the ability to focus/attend to certain phenomena in an environment while not attending to other phenomena [Mack and Rock, 1998], both enabling exploration and causing people sometimes not to see things [Pudenz, personal communication, September 14, 2006].

Glasgow theorizes that if the science team had specifically been looking for plants, they would have spotted the plant in the lower resolution preview image during initial passes [Glasgow, personal communication, Sept. 14, 2006]. This theory is supported by the fact that the plant was spotted later in the same resolution preview images where it was missed before [Glasgow, personal communication, Sept. 14, 2006]. Nevertheless, it is likely that the “blurring” of the original preview image was a contributing factor in the missed plant [Glasgow and Pudenz, personal communication, September 14, 2006].


Identifying and organizing measurements

During both the second year and Operational Readiness Tests in preparation for the third year of LITA, we observed that scientists reviewing life detection data from the onboard microscopic imager frequently needed the FI engineering specialist to intervene. This meant scientists had less time to analyze the data. In response, we developed a web-based tool called the “FI Stacker” (Fig. 6), enabling team members to independently review microscope imagery using various filters and configurations and eliminating the need for specialized tools.

As shown in Figure 6, a pull-down menu (A) enables a user to load images such as RGB images, or images of surfaces sprayed with liquids to detect lipids, proteins, or chlorophyll, for example. The controller (B) enables a user to move through an image stack, or to autoplay/cycle through images automatically. The menu (C and D) shows what image is selected and enables a user to select another image from the stack.


Downlink interface results

The Atacama Science Web Site was an unforeseen need and a major success both during and after it was fully developed.

While no quantitative assessment of the site’s contribution to science effectiveness has been completed, we can infer science effectiveness was increased from qualitative observation and the comments of science team members who used the site and still use it for post-expedition analysis [K. Warren-Rhodes, personal communication, September 14, 2006].

We can also infer that the Phase 3 “Scanorama” interface, while certainly not likely to eliminate inattentional blindness, reduced the factors that contributed to missed phenomena. This inference is supported by comments from onsite observers as well [Pudenz and Glasgow, personal communication, September 14, 2006].

Figure 6. The Fluorescence Image Stacker tool in the Atacama Science Website.



During the year one rover trials, we observed that the science team had trouble correlating data returned from the rover with the original requests sent to the rover. For example, if a science team member witnessed a gray speck in an orbital image and sent Zoë to investigate, the rover may return an image of a rock field. But it would be difficult for a scientist to correlate this rock field back to the initially observed gray speck without further information. Our solution was a “Request ID” system to track the request throughout the rover exploration process, from the placement of the pin into the DEM until the retrieval of the data in the operations room.

We originally designed the ID so that scientists could more intuitively correlate pin requests with returned data by clicking that same pin in EventScope the next day. But EventScope team members and engineering team members came to rely on the ID as a signifier to track data throughout the process. Therefore, request IDs were made more intuitively human-readable in the third year.
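The mechanism can be sketched as an ID plus a per-request stage log. The ID format below is hypothetical (the text says only that year-three IDs were made human-readable, not the exact scheme), as are the stage names:

```python
def make_request_id(sol, locale, action_index):
    """Hypothetical human-readable ID: mission day, locale, action number."""
    return f"sol{sol:03d}-{locale}-act{action_index:02d}"

def record_stage(tracker, request_id, stage, note=""):
    """Log each stage a request passes through (pin placed, uplinked,
    executed, data posted), so returned data can be traced back to the
    original request at any point in the exploration process."""
    tracker.setdefault(request_id, []).append((stage, note))
    return tracker
```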


The human-readable science plan: tracking robot/human understanding throughout the exploration process

The human-readable science plan generated by the uplink interface made it possible not only to communicate exploration requests from the information interface, but also for the science team to append notes or instructions for the engineering team in the field, who could in turn append notes of their own (including when data was not collected and the reasons why). Finally, this document became a clickable web page that enabled the science team not only to understand what happened in the field that day, but to click their requests (as request IDs) in order to obtain data retrieved by the rover (Fig. 4).

The human-readable document was structured and processed as follows:

1. EventScope created a document/text file when scientists placed pins and assigned rover actions.

2. The science team appended comments, questions or instructions for the Atacama rover engineering team if necessary.

3. The file was sent to the engineering team.

4. The engineering team used the human-readable file to understand what tasks the rover was to accomplish that day.

5. After daily rover activities, the engineering team appended comments to the file: information about rover faults, explanations for uncompleted actions, etc.

6. The final annotated file was placed into the science web site. Each request was hyperlinked to specific files returned by the rover.
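Step 6, turning request IDs in the annotated plan into links to returned data, can be sketched as a text substitution. The `REQ-###` ID pattern and the URL layout here are assumptions for illustration, not the actual EventScope conventions:

```python
import re

def hyperlink_requests(plan_text, data_index):
    """Replace each request ID in the annotated plan with an HTML link to
    the data it produced; IDs with no returned data are left as plain text.
    data_index maps request IDs to relative URLs."""
    def link(match):
        rid = match.group(0)
        url = data_index.get(rid)
        return f'<a href="{url}">{rid}</a>' if url else rid
    return re.sub(r"REQ-\d+", link, plan_text)
```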

Request ID system results

No quantitative data was analyzed to determine the science effectiveness of the request ID system, although direct observation indicates that the system was heavily used during each day, and possibly even each hour, of the expedition.



A critical part of rover science is localization, or finding out where the rover is positioned within an orbital image. Images returned from a rover are used to identify features/landmarks that can also be found within an orbital image. This correlation is used to localize the position of the rover at the time that these images were recorded. This process is called triangulation. Typically, triangulation consumed a considerable amount of time and led to uncertainty if mistakes were made. During the first year of rover science operations, scientists were still using rulers, protractors and other analogue measurement devices on paper printouts of orbital images.

Figure 7. The Triangulation Tool.

In response to this, we developed a triangulation tool in an attempt to increase accuracy and save time.

Figure 7 shows triangulation in the EventScope Triangulation Tool. A science team member enters virtual “flags” to mark three or more features into the orbital image DEM (right in Figure 7). Next, the science team member marks the same features in a panoramic image (left in Figure 7). Using angles recorded by EventScope during this process, a series of circles are automatically drawn in the DEM. The location of the rover is where the circles intersect (right in Figure 7). This feature helped the science team to find the position of the rover within the orbital image DEM.
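The geometry the tool exploits is the inscribed-angle theorem: each pair of flagged landmarks plus the angle between them measured in the panorama constrains the rover to a circle through both landmarks, and the circles intersect at the rover. A minimal numerical sketch of the same constraint, solved by searching the DEM region for the best-fitting position rather than by explicitly drawing and intersecting circles as the tool did:

```python
import math

def subtended_angle(pos, lm_a, lm_b):
    """Angle between two landmarks as seen from a candidate rover position."""
    a = math.atan2(lm_a[1] - pos[1], lm_a[0] - pos[0])
    b = math.atan2(lm_b[1] - pos[1], lm_b[0] - pos[0])
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def locate_rover(landmarks, observed, extent, step=0.25):
    """Brute-force search of the DEM region for the position whose pairwise
    landmark angles best match those measured in the panorama.
    observed maps landmark index pairs to measured angles (radians);
    extent is (xmin, xmax, ymin, ymax) in DEM coordinates."""
    best, best_err = None, float("inf")
    nx = int((extent[1] - extent[0]) / step) + 1
    ny = int((extent[3] - extent[2]) / step) + 1
    for i in range(nx):
        for j in range(ny):
            pos = (extent[0] + i * step, extent[2] + j * step)
            err = sum((subtended_angle(pos, landmarks[a], landmarks[b]) - ang) ** 2
                      for (a, b), ang in observed.items())
            if err < best_err:
                best, best_err = pos, err
    return best
```

With only two landmarks the solution is ambiguous (two circle intersections), which is why the tool asked for three or more flags.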


Panoramic image presentation in the triangulation tool

Panoramic images were mapped into a navigable 3D virtual environment half-sphere (left in Figure 7) that could transition between two views that created two different projections of the panoramic image:

Immersive view: By orienting the sphere so that the user was “immersed” inside of it, as they would be through a QuickTime VR, this view approximated the vantage point of the rover and was intended for spotting distant objects.

Fisheye view: Alternatively, the user could re-orient the half-sphere to view the panoramic image from outside of the half-sphere, enabling a fisheye projection of the image. In a fisheye projection, objects and markers/flags that were distant from one another could be viewed simultaneously in a 360-degree angle of view. Users could track markers between the fisheye view and the immersive view by re-orienting back and forth. They could also rotate the half-sphere to views that fell somewhere in between the immersive view and the fisheye view.

We believe that the capability to transition between these views using a 3D half-sphere assisted in the process of finding visible features that corresponded to features that could be identified in the orbital image DEM, because unlike the web-based rectangular (equirectangular projection) pan (Figure 7), the fisheye view matched the visible arrangement of angles in the orbital image DEM.
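The relationship between the two projections can be sketched as a coordinate mapping. This is a sketch of the geometry, not the EventScope renderer, and the image conventions (row 0 at the zenith, azimuth preserved) are assumptions:

```python
import math

def equirect_to_fisheye(u, v, pan_w, pan_h, out_radius):
    """Map an equirectangular panorama pixel (u, v) to a point in the
    overhead fisheye view: azimuth is preserved and elevation becomes
    radial distance, so the angular layout of features matches their
    arrangement around the rover in the orbital DEM."""
    azimuth = 2 * math.pi * u / pan_w
    elevation = (math.pi / 2) * (1 - v / pan_h)       # zenith .. horizon
    r = out_radius * (1 - elevation / (math.pi / 2))  # zenith maps to the centre
    return (r * math.cos(azimuth), r * math.sin(azimuth))
```

Because azimuth survives the mapping unchanged, a flag placed in one view lands at the same bearing in the other, which is what makes tracking markers between views workable.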


Triangulation tool results

A science team member reported that the triangulation tool decreased overall planning time, thus freeing up time for more science investigation [N. Cabrol, personal communication, September 14, 2006].

An analysis using techniques such as those developed by the GROK Lab [Thomas et al., 2007] would increase information about the usefulness of this tool.



Table 1 shows a matrix of 1) the initial issues observed by the EventScope team, 2) the innovations we made to the information interfaces in response to the problems, 3) the observations that inspired the interface designs and 4) the impact of the information interfaces on remote science effectiveness.




Building the Uplink interface into the Atacama Science Web Site

Future versions could incorporate the functionality of the EventScope Uplink Interface into the Atacama Science Web Site. Another approach could be to maintain the functionality of the EventScope Uplink Interface while at the same time enabling duplicate uplink tools to be built into future versions of the Atacama Science Web Site, thus providing science team members with multiple options for planning science activities, especially off-site.


Representations of Scientific Data

We believe the reported success of the Atacama Science Web Site makes a strong case for modeling future visual interface innovations after existing scientific presentations.

A review of scientist-created illustrations, figures and information graphics produced during and after the LITA project, such as examples in this special issue, suggests several new intriguing directions for interface development. Figure 11 in Cabrol et al. [2007] introduces a way of representing rover transects. Figure 4: Site D BioMap in Warren-Rhodes et al. [2007] introduces a way of presenting rover-scale biological information in orbital image data. Also in Warren-Rhodes et al. [2007], Figure 5: Habitat Mapping could inspire a new way of representing and providing input to the rover’s science autonomy system.


Adding depth cues to 2D panoramic imagery

During the 2005 ground truth visit of all rover exploration sites at the end of the experiment [Thomas et al., 2007], an EventScope interface developer had the chance to observe the remote Atacama terrain alongside the science team who had explored it through the rover. The science team commented on the differences between what they perceived in 2D panoramic imagery and what they perceived in the field.

It was observed that some of the terrain shape, such as slopes, was not communicated through the 2D pans delivered by the rover. This could be addressed by augmenting panoramic imagery with markers that would communicate depth information. Unfortunately, many of the features discussed by the science team were at distances that fell outside of the effective range of stereo algorithms used by LITA [Wettergreen et al., 2007] and even MER [Goldberg et al., 2002; Vona et al., 2003; Edwards et al., 2005].

We could, however, overlay lines that matched the contours of terrain shapes onto two-dimensional panoramic imagery to more clearly illustrate the contours of terrain and other features. We could generate these contour lines from position information derived from rover telemetry. In the two-dimensional panoramic images, this would resemble a “trail” that matched the path of the rover and that traced the contour of the terrain, adding even more contour and shape information into the panoramic imagery.

Table 1. Issues, innovations, observations, and impact.
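Generating such a trail amounts to projecting each telemetry sample into panorama pixel coordinates. A minimal sketch under assumed conventions (equirectangular pan, site-frame coordinates, row 0 at the top of the image); the actual LITA telemetry frames are not specified in the text:

```python
import math

def project_to_pan(point, cam, pan_w, pan_h):
    """Project one telemetry sample (x, y, z in the site frame) into
    equirectangular panorama pixels for a camera at position cam.
    Applied along the whole traverse, this yields the contour-hugging
    'trail' overlay proposed above."""
    dx, dy, dz = (point[i] - cam[i] for i in range(3))
    azimuth = math.atan2(dy, dx) % (2 * math.pi)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    u = pan_w * azimuth / (2 * math.pi)
    v = pan_h * (0.5 - elevation / math.pi)   # +pi/2 elevation maps to row 0
    return (u, v)
```

Unlike stereo range data, telemetry positions stay accurate at any distance along the traverse, which is what makes this overlay feasible where stereo fails.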



Acknowledgments

LITA information interfaces and analysis were made possible by a partnership between the LITA Project at Carnegie Mellon (under NASA grant NAG5-12890), the EventScope Project at Carnegie Mellon, the University of Iowa (through NASA’s Applied Information Systems Research Program under NASA grant NAGW5-11981G) and visualization tools created by Platform Digital, LLC (under NASA contract NAS1).

We would also like to thank the following people who
contributed to this work:

E. Myers
, D. Lu
, M. Wagner
, N.A. Cabrol
E.A. Grin
, K.
, E. Pudenz
, G. Glasgow
, D. Seneker
, M.
, T. Smith

D. Thompson
, D. Jonack
, A.S.
, S. Weinstein
, D. J. Dohm
, G. Fisher
, A.N.
, J. Piatek
, K. Warren

L. Ernst

and K.

CMU EventScope Lab, Pittsburgh, PA; CMU Robotics Institute, Pittsburgh, PA; Ames Research Center, MS 3, Moffett Field, CA 94035; Univ. of Iowa, IA; MBIC, Pittsburgh, PA; Univ. of Arizona, Tucson, AZ; …, CA; Univ. of Tennessee, Knoxville, TN; UC Berkeley, Berkeley, CA.




References

Backes, P.G., J.S. Norris, M.W. Powell, M.A. Vona, R. Steinke, J. Wick (2003), The Science Activity Planner for the Mars Exploration Rover Mission: FIDO Field Test Results, paper presented at IEEE Aerospace Conference.


Cabrol, N.A., et al. (2007), Life in the Atacama: Searching for Life with Rovers (Science Overview), J. Geophys. Res., Vol. 112.


Edwards, L., M. Sims, C. Kunz, D. Lees, J. Bowman (2005), Photo-realistic terrain modeling and visualization for Mars Exploration Rover science operations, paper presented at IEEE Conference on Systems, Man and Cybernetics.


Glasgow, J., E. Pudenz, G. Thomas, P. Coppin, N. Cabrol, and D. Wettergreen (2005), Observations of a Science Team During an Advanced Planetary Rover Prototype Mission, paper presented at 14th IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN.


Goldberg, S.B., M.W. Maimone, L. Matthies (2002), Stereo Vision and Rover Navigation Software for Planetary Exploration, paper presented at IEEE Aerospace Conference.


Norris, J.S., R. Wales, M.W. Powell, P.G. Backes, R.C.
Steinke (2002), Mars mission science operations facilities
design, paper presented at IEEE Aerospace Conference.


Norris, J.S., M.W. Powell, M.A. Vona, P.G. Backes, J.V. Wick (2005), Mars exploration rover operations with the science activity planner, paper presented at IEEE Conference on Robotics and Automation, Barcelona, Spain, 18-22 April.


Pudenz, E., G. Thomas, J. Glasgow, P. Coppin, D. Wettergreen, and N. Cabrol (2006), Searching for a quantitative proxy for rover science effectiveness, paper presented at the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, Utah, 2-4 March. HRI '06. ACM Press, New York, NY, 18-25.


Thomas, G., I. Ukstins Peate, N. Cabrol, et al. (2007), Life in the Atacama: Ground Truth Evaluation and Accuracy of Rover Data Analysis, J. Geophys. Res., Vol. 112.


Vona, M.A., P.G. Backes, J.S. Norris, M.W. Powell (2003), Challenges in 3D visualization for Mars Exploration Rover mission science planning, paper presented at IEEE Aerospace Conference.


Wagner, M.D., S. Heys, D. Wettergreen, J. Teza, D. Apostolopoulos, G.A. Kantor, and W.L. Whittaker (2005), paper presented at 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space.


Warren-Rhodes, K.A., K. Rhodes, S. Pointing, S. Ewing, D. Lacap, B. Gómez-Silva, R. Amundson, E.I. Friedmann, and C.P. McKay (2006), Hypolithic cyanobacteria, dry limit of photosynthesis and microbial ecology in the hyperarid Atacama Desert, Microbial Ecology.


Warren-Rhodes, K., S. Weinstein, D. Pane, C. Cockell, J.M. Dohm, J. Piatek, L.A. Ernst, E. Minkley, G. Fisher, S. Emani, D.S. Wettergreen, M. Wagner, N.A. Cabrol, A.S. Waggoner (2005), Mars analog habitat survey and the search for microbial life remotely with an autonomous astrobiology rover, paper presented at NAI Biennial Meeting, University of Colorado, Boulder, Center for Astrobiology (abstract #861).


Warren-Rhodes, K.A., S. Weinstein, et al. (2007), Life In The Atacama: Habitat Survey and the Distribution of Microbial Life (1), J. Geophys. Res., Vol. 112.


Wettergreen, D., N. Cabrol, J. Teza, P. Tompkins, C. Urmson, V. Verma, M.D. Wagner, and W.L. Whittaker (2005), First Experiments in the Robotic Investigation of Life in the Atacama Desert of Chile, paper presented at IEEE Conference on Robotics and Automation.


Wettergreen, D.S., N.A. Cabrol, et al. (2007), Life in the Atacama: Rover Design and Technical Results, J. Geophys. Res., Vol. 112.