Coping With Uncertainty in Map Learning

Kenneth Basye    Thomas Dean*    Jeffrey Scott Vitter†
Department of Computer Science, Brown University
Box 1910, Providence, RI 02912

Abstract
In many applications in mobile robotics, it is important for a robot to explore its environment in order to construct a representation of space useful for guiding movement. We refer to such a representation as a map, and the process of constructing a map from a set of measurements as map learning. In this paper, we develop a framework for describing map-learning problems in which the measurements taken by the robot are subject to known errors. We investigate two approaches to learning maps under such conditions: one based on Valiant's probably approximately correct learning model, and a second based on Rivest and Sloan's reliable and probably almost always useful learning model. Both methods deal with the problem of accumulated error in combining local measurements to make global inferences. In the first approach, the effects of accumulated error are eliminated by the use of reliable and probably useful methods for discerning the local properties of space. In the second, the effects of accumulated error are reduced to acceptable levels by repeated exploration of the area to be learned. Finally, we suggest some insights into why certain existing techniques for map learning perform as well as they do.
1 Introduction
Many of the problems faced by robots navigating in the environment can be facilitated by using expectations in the form of explicit models of objects and the spaces that they occupy. We use the term map to refer to any model of large-scale space used for purposes of navigation. Map learning involves exploring the environment, making observations, and then using the observations to construct a map. The construction of useful maps is complicated by the fact that observations involving the position, orientation, and identification of spatially remote objects are invariably error prone. In this paper, we explore a number of problems involved in constructing useful maps from measurements taken with sensors subject to known errors.

*This work was supported in part by the National Science Foundation under grant IRI-8612644 and by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Air Force Office of Scientific Research under Contract No. F49620-88-C-0132.

†This work was supported in part by a National Science Foundation Presidential Young Investigator Award CCR-8846714 with matching funds from IBM, and by National Science Foundation research grant CCR-8403613.
In previous work [Dean, 1988], we have looked at various optimization problems related to constructing maps (e.g., construct the most accurate map consistent with a set of measurements). Even in cases involving only a single dimension, such optimization problems can turn out to be NP-hard [Yemini, 1979]. In this paper, rather than look at problems that involve doing the best with what you have, we consider problems that involve going out and getting what you need to generate useful representations. In particular, we consider a form of reliable and probably almost always useful learning [Rivest and Sloan, 1988] in which the robot gathers information to ensure that it nearly always (with probability 1 − δ) can provide a guaranteed perfect path from one location to another. A prerequisite to this sort of learning is that the robot, in moving around in its environment, can discern the local properties of space with absolute certainty with high probability, having expended an amount of effort polynomial in 1/δ and n, where n is some measure of the size of the environment.
By eliminating local uncertainty, small errors incurred in making local measurements are not allowed to propagate, rendering global queries unacceptably inaccurate. In general, local uncertainty accumulates as the product of the distance in generating global estimates. One way to avoid this sort of accumulation is to establish strategies such that the robot can discern properties of its environment with certainty. Most existing map learning schemes exploit this sort of certainty in one way or another (see Section 4). The rehearsal strategies of Kuipers [1988] are one example of how a robot might plan to eliminate uncertainty. Once we have a method for eliminating uncertainty, the problem then reduces to one of planning out and executing the necessary experiments to extract certain information about the environment.

In situations in which it is not possible to eliminate local uncertainty completely, it is still possible to reduce
Basye, Dean and Vitter 663
the effects of accumulated errors to acceptable levels by performing repeated experiments. To support this claim, we describe a map-learning technique based on Valiant's probably approximately correct learning model [Valiant, 1984] that, given small δ > 0, constructs a map to answer global queries such that the answer provided in response to any given query is correct with probability 1 − δ. The techniques presented apply to a wide range of map-learning problems of which the specific problems addressed in this paper are meant to be merely illustrative.
2 Spatial Representation
We model the world, for the purposes of studying map learning, as a graph with labels on the edges at each vertex. In practice, a graph will be induced from a set of measurements by identifying a set of distinctive locations in the world, and by noting their connectivity. For example, we might model a city by considering intersections of streets to be distinguished locations, and this will induce a grid-like graph. Kuipers [1988] develops a mapping based on locations distinguished by sensed features like those found in buildings (see Figure 1). Figure 2 shows a portion of a building and the graph that might be induced from it. Levitt [1987] develops a mapping based on locations in the world distinguished by the visibility of landmarks at a distance.
In general, different mappings result in graphs with different characteristics, but there are some properties common to most mappings. For example, if the mapping is built for the purpose of navigating on a surface, the graph induced will almost certainly be planar and cyclic. Other properties may include regularity or bounded degree. In what follows, we will always assume that the graphs induced are connected and undirected; any other properties will be explicitly noted.
Following [Aleliunas et al., 1979], a graph model consists of a graph, G = (V, E), a set L of labels, and a labeling, Φ : V × E → L, where we may assume that L has a null element ⊥ which is the label of any pair (v ∈ V, e ∈ E) where e is not an edge from v. We will frequently use the word direction to refer to an edge and its associated label from a given vertex. With this notation, we can describe a path in the graph as a sequence of labels indicating the edges to be taken at each vertex. We can describe a procedure to follow as a function from V to L indicating the preferred direction at each location.
If the graph is a regular tessellation, we may assume
that the labeling of the edges at each vertex is consistent,
i.e., there is a global scheme for labeling the edges and
the labels conform to this scheme at every vertex. For
example, in a grid tessellation, it is natural to label the
edges at each vertex as North, South, East, and West.
In general, we do not require a labeling scheme that is
globally consistent. You can think of the labels on edges
emanating from a given vertex as local directions. Such
local directions might correspond to the robot having
a compass that is locally consistent but globally inac­
curate, or local directions might correspond to locally
distinctive features visible from intersections in learning
the map of a city.
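The labeled-graph model above lends itself to a compact data structure: store the labeling Φ as a per-vertex dictionary from local directions to neighbors, and follow a path by looking up one label at a time. The sketch below is illustrative only; the class and method names are assumptions, not constructs from the paper.

```python
class LabeledGraph:
    """A graph G = (V, E) with a labeling on (vertex, edge) pairs,
    stored as a per-vertex dictionary from local direction labels to
    neighbors; absent labels play the role of the null element."""
    def __init__(self):
        self.adj = {}

    def add_edge(self, u, label_u, v, label_v):
        # An undirected edge with a (possibly different) local label
        # at each endpoint.
        self.adj.setdefault(u, {})[label_u] = v
        self.adj.setdefault(v, {})[label_v] = u

    def follow_path(self, start, labels):
        """A path is a sequence of labels naming the edge to take at
        each successive vertex; returns None when a label is undefined
        (the null element) at the current vertex."""
        v = start
        for label in labels:
            v = self.adj.get(v, {}).get(label)
            if v is None:
                return None
        return v

# A 2 x 2 grid fragment with globally consistent N/S/E/W labels.
g = LabeledGraph()
g.add_edge((0, 0), "E", (1, 0), "W")
g.add_edge((0, 0), "N", (0, 1), "S")
g.add_edge((1, 0), "N", (1, 1), "S")
g.add_edge((0, 1), "E", (1, 1), "W")
```

Nothing here requires the labels to be globally consistent; only the per-vertex lookup matters, which is exactly the locality of the directions discussed above.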
In the following, we identify three sources of uncertainty in map learning. First, there may be uncertainty in the movement of the robot. In particular, the robot may occasionally move in an unintended direction. We refer to this as directional uncertainty, and we model this type of uncertainty by introducing a probabilistic movement function from V × L to V. The intuition behind this function is that for any location, one may specify a desired edge to traverse, and the function gives the location reached when the move is executed. For example, if G is a grid with the labeling given above, and we associate the vertices of G with points (i, j) in the plane,
we might define a movement function as follows:
where the ". . ." indicate the distribution governing
movement in the other three directions. The probabili­
ties associated with each direction sum to ]. If all direc­
tions are equally likely regardless of the intended direc­
tion, then the movement function is said to be random.
Throughout this paper, we will assume that movement
in the intended direction takes place with probability
better than chance.
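The directional-uncertainty model just described can be sketched as a probabilistic movement function on the grid; the particular success probability (0.9) and all names below are assumptions for illustration, not quantities from the paper.

```python
import random

# Local direction labels and their effect on a grid position.
DIRS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def move(pos, intended, gamma=0.9, rng=random):
    """Probabilistic movement function from V x L to V on an unbounded
    grid: traverse the intended edge with probability gamma (assumed
    0.9 here, i.e. better than chance), otherwise a uniformly chosen
    other edge."""
    if rng.random() < gamma:
        label = intended
    else:
        label = rng.choice([l for l in DIRS if l != intended])
    dx, dy = DIRS[label]
    return (pos[0] + dx, pos[1] + dy)
```

Setting gamma to 0.25 would make every direction equally likely, giving the random movement function; the analyses below assume gamma is strictly better than chance.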
A second source of uncertainty involves sensors, and
in particular recognizing locations that have been seen
before. The robot's sensors have some error, and this
664 Machine Learning
can cause error in the recognition of places previously visited; the robot might either fail to recognize some previously visited location, or it might err by mistaking some new location for one seen in the past. We refer to this type of uncertainty as recognition uncertainty, and model it by partitioning the set of vertices into equivalence classes. We assume that the robot is unable to distinguish between elements of a given class using only its sensors.
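A minimal way to realize these equivalence classes is through a perceptual signature: two vertices with the same signature fall in the same class. The particular signature used below (vertex degree) and the names are illustrative assumptions, not choices made in the paper.

```python
def sensor_class(vertex, adj):
    """The perceptual signature of a vertex: what the robot's sensors
    report. Using the vertex degree is an illustrative assumption."""
    return len(adj[vertex])

def distinguishable(u, v, adj):
    """Two vertices lie in the same equivalence class, and so cannot be
    told apart by the sensors alone, exactly when their signatures match."""
    return sensor_class(u, adj) != sensor_class(v, adj)

# A path graph a - b - c: the endpoints a and c look alike (degree 1),
# so they fall in the same equivalence class.
sensed = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```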
A third source of error involves another manifestation of sensor error. In representing the world using a graph, some mapping must be established from a set of distinguished locations in the world to V. Error in the sensors could cause the robot to fail to notice a distinguished location some of the time. For example, a robot taxi might use intersections as distinguished locations, leading to a grid-like graph. But if sensor error causes the robot not to notice that it is passing through an intersection, its map will become flawed. In exploring an office environment, the point in a hallway in front of a door may correspond to a vertex in the induced graph. If the door is closed, there is some chance that the robot will not recognize the vertex in traversing the hall. We model this type of uncertainty by introducing a probabilistic movement function that can skip over vertices. We refer to this type of movement function as discontinuous and to the type of uncertainty modeled as continuity uncertainty.
Apparently, the three types of uncertainty described above are orthogonal in the sense that none implies or precludes the others. The issues involved in modeling and reasoning about continuity uncertainty are complex and will not be treated further in this paper. In the following, we are concerned with directional and recognition uncertainty.
3 Map Learning
For our purposes, a map is a data structure that facilitates queries concerning connectivity, both local and global. Answers to queries involving global connectivity will generally rely on information concerning local connectivity, and hence we regard the fundamental unit of information to be a connection between two nearby locations (i.e., an edge between two vertices in the induced undirected graph). We say that a graph has been learned completely if for every location we know all of its neighbors and the directions in which they lie (i.e., we know every triple of the form (u, l, v) where u and v are vertices and l is the label at u of an edge in G from u to v). We assume that the information used to construct the map will come from exploring the environment, and we identify two different procedures involved in learning maps: exploration and assimilation. Exploration involves moving about in the world gathering information, and assimilation involves using that information to construct a useful representation of space. Exploration and assimilation are generally handled in parallel, with assimilation performed incrementally as new information becomes available during exploration. In this section, we are concerned with the conditions under which a graph can be completely learned, and how much time will be required for the exploration and assimilation.
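A map in the sense above can be sketched as a set of (u, l, v) triples, with complete learning checked against the true edge set of the environment; the class and function names below are illustrative assumptions, not constructs from the paper.

```python
class Map:
    """A map as a set of (u, l, v) triples: the edge labeled l at u
    leads to v."""
    def __init__(self):
        self.triples = set()

    def record(self, u, label, v):
        self.triples.add((u, label, v))

    def neighbor(self, u, label):
        """Local connectivity query: which vertex does `label` lead to
        from u, according to the map so far?"""
        for (x, l, v) in self.triples:
            if x == u and l == label:
                return v
        return None

def learned_completely(m, true_triples):
    """The graph has been learned completely when every (u, l, v)
    triple of the true environment appears in the map."""
    return true_triples <= m.triples

m = Map()
m.record("hall", "E", "door")
m.record("door", "W", "hall")
```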
3.1 Tessellation Graphs
It's not hard to see that any connected, undirected graph can be completely learned easily if there is no uncertainty; [Kuipers and Byun, 1988] describes a way of doing this by building up an agenda consisting of unexplored paths leading out of locations and then moving about so as to eventually explore all such paths. Nothing about the graph need be known before the exploration begins. Introducing the kinds of uncertainty described in Section 2 complicates things considerably. If, however, the graph has additional structure, then that structure can often be exploited to eliminate uncertainty. In the following, we sketch a proof that it is possible to efficiently learn maps that correspond to regular tessellations with boundaries. It turns out that the exploration component of learning regular tessellations is quite simple; random walks suffice for polynomial-time performance. In the longer version of this paper, we describe an efficient incremental assimilation procedure that is called whenever the robot encounters a location during exploration, and then prove the following.¹
Lemma 1 The assimilation algorithm provided will learn a finite tessellation completely if the exploration tour traverses every edge in the graph. The overall cost of assimilation is O(m), where m is the length of the tour.
We now have to ensure that during exploration the robot traverses each edge in the graph at least once with high probability. The following two lemmas establish that, for any connected, regular, undirected graph G and any δ > 0, a random walk of length polynomial in 1/δ and the size of G is sufficient for traversing every edge in G with probability 1 − δ.
Lemma 2 For any d ≥ 1, there exists a polynomial p(d, 1/δ) of order O(d log(1/δ)) such that, with probability 1 − δ, p visits to a vertex of order d result in traversing all edges out of the vertex at least once.
Lemma 3 For any connected, regular, undirected graph G = (V, E) with order d, any δ > 0, and any m ≥ 1, there exists a polynomial p(|E|, m, 1/δ) such that, with probability 1 − δ, a random tour on G of length p visits every vertex in V at least m times.
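The random-walk exploration behind these lemmas can be illustrated with a small simulation: on a regular graph, a walk of modest length traverses every edge with high probability. The example graph (the 3-regular cube graph Q3) and the walk length are assumptions chosen for the demo, not bounds derived from the lemmas.

```python
import random

def random_walk_covers(adj, steps, rng):
    """Take `steps` random-walk steps from an arbitrary start vertex
    and return the set of undirected edges traversed."""
    v = next(iter(adj))
    traversed = set()
    for _ in range(steps):
        u = rng.choice(adj[v])
        traversed.add(frozenset((v, u)))
        v = u
    return traversed

# A small connected 3-regular graph: the cube graph Q3 on 8 vertices,
# with vertices as bit strings and edges flipping one bit.
cube = {v: [v ^ 1, v ^ 2, v ^ 4] for v in range(8)}
all_edges = {frozenset((v, u)) for v in cube for u in cube[v]}
```

For Q3 (12 edges), a few hundred steps already traverse every edge with overwhelming probability, consistent with the polynomial bounds of Lemmas 2 and 3.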
In most cases, we can do better than random exploration. If the robot moves in the direction it is pointing with probability better than chance, then the robot can traverse every edge in the graph with high probability in time linear in the size of the graph. Using the above three lemmas it is easy to prove the following.

Theorem 1 Any finite regular tessellation G = (V, E) can be reliably, probably almost always usefully learned.
The lemmas and form of the proof described above provide a framework for proving that other kinds of graphs can be reliably probably almost always usefully learned in a polynomial number of steps. In general, all we require is that a polynomial number of visits to every vertex provides enough information to learn the graph. Perhaps the most important lesson to extract from this exercise is that the effects of multiplicative error in learning maps of large-scale space can be eliminated if there is a reliable method for eliminating local uncertainty that works with high probability. The above approach to map learning was inspired by Rivest's model of learning [Rivest and Sloan, 1988], in which complex problems are broken down into simple subproblems that can be learned independently. In order to learn a useful representation of the global structure of its environment, it is sufficient that a robot have reliable and usually effective methods for sensing the local structure of its environment and a method for composing the local structure to generate an accurate global structure. The sensing methods need not always provide useful answers; they need only guarantee that the answer returned is not wrong. The problem then becomes largely one of determining a sequence of sensing and movement tasks that will provide useful answers with high probability. There are situations, however, in which reliable sensing methods are not available, and it is still possible to learn useful maps of large-scale space.

¹To meet the submission length requirements, all proofs have been omitted. The longer version of the paper, including all proofs [Basye et al., 1989], is available upon request.
3.2 General Graphs
The next problem we look at involves both recognition and directional uncertainty with general undirected graphs. We show that a form of Valiant's probably approximately correct learning is possible when applied to learning maps. In this section, we consider the case in which movement in the intended direction takes place with probability better than chance, and in which, upon entering a vertex, the robot knows with certainty the local name of the edge upon which it entered. We call the latter requirement reverse movement certainty. Results for related models are summarized in the next section.

At any point in time, the robot is facing in a direction defined by the label of a particular edge/vertex pair: the vertex being the location of the robot and the edge being one of the edges emanating from that vertex. We assume that the robot can turn to face in the direction of any of the edges emanating from the robot's location. We also assume that upon entering a vertex the robot can determine with certainty the direction in which it entered. Directional uncertainty arises when the robot attempts to move in the direction it is pointing. Let γ > 0.5 be the probability that the robot moves in the direction it is currently pointing. More than 50% of the time, the robot ends up at the other end of the edge defining its current direction, but some percentage of the time it ends up at the other end of some other edge emanating from its starting vertex. While the robot won't know that it has ended up at some unintended location, it will know the direction to follow in trying to return to its previous location.
To model recognition uncertainty, we assume that the vertices V are partitioned into two sets, the distinguishable vertices D and the indistinguishable vertices I. We are able to distinguish only vertices in D. We refer to the vertices in D as landmarks and to the graph as a landmark graph. We define the landmark distribution parameter, r, to be the maximum distance from any vertex in I to its nearest landmark (if r = 0, then I is empty and all vertices are landmarks). We say that a procedure learns the local connectivity within radius r of some v ∈ D if it can provide the shortest path between v and any other vertex in D within a radius r of v. We say that a procedure learns the global connectivity of a graph G within a constant factor if, for any two vertices u and v in D, it can provide a path between u and v whose length is within a constant factor of the length of the shortest path between u and v in G.
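In the error-free case, learning the local connectivity within a radius amounts to a truncated breadth-first search from a landmark, recording shortest paths to the other landmarks reached. The sketch below and its names are illustrative assumptions, not the paper's procedure (which must also cope with movement errors).

```python
from collections import deque

def local_connectivity(adj, v, landmarks, r):
    """Breadth-first search from v truncated at depth r: returns a map
    from each landmark u (other than v) within distance r of v to a
    shortest path from v to u."""
    paths = {v: [v]}
    frontier = deque([(v, 0)])
    while frontier:
        x, d = frontier.popleft()
        if d == r:
            continue  # do not expand beyond the radius
        for y in adj[x]:
            if y not in paths:
                paths[y] = paths[x] + [y]
                frontier.append((y, d + 1))
    return {u: p for u, p in paths.items() if u in landmarks and u != v}

# A path graph a-b-c-d with landmarks {a, d}; every non-landmark vertex
# is within distance 1 of a landmark, so the distribution parameter is 1.
path_adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
```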
We begin by showing that the multiplicative error incurred in trying to answer global path queries can be kept low if the local error can be kept low, that the transition from a local uncertainty measure to a global uncertainty measure does not increase the complexity by more than a polynomial factor, and that it is possible to build a procedure that directs exploration and map building so as to answer global path queries that are accurate and within a small constant factor of optimal with high probability.
Lemma 4 Let G be a landmark graph with distribution parameter r, and let c be some integer greater than 2. Given a procedure that, for any δl > 0, learns the local connectivity within cr of any landmark in G in time polynomial in 1/δl with probability 1 − δl, there is a procedure that learns the global connectivity of G with probability 1 − δg for any δg > 0, in time polynomial in 1/δg and the size of the graph. Any global path returned as a result will be at most c/(c − 2) times the length of the optimal path.
The procedure presented in the proof of Lemma 4 searches outward from a vertex v ∈ D to a distance cr, and then uses the edges found while entering vertices on the outward path to attempt to return to v. The directions used on the way out form an expectation for the labels observed on the way back. When these expectations are not met, the traversal is said to have failed, and the procedure tries again. The procedure keeps track of the edge/vertex labels associated with vertices visited during exploration in order to ensure that it explores all paths of length cr or less emanating from each vertex in D with high probability.
There is a possibility that some combination of movement errors could result in false positive or false negative tests. But we show by exploiting reverse certainty that we can statistically distinguish between the true and false test results. By attempting enough traversals, the procedure can ensure with high probability that the most frequently occurring sets of directions corresponding to perceived traversals actually correspond to paths in G. What is required, then, is for the learning procedure to do enough exploration to identify all paths of length cr or less in G with high probability.
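The statistical filtering just described can be sketched as a majority vote over repeated traversal attempts; the confirmation probabilities and the simple-majority threshold below are assumptions for illustration, not quantities derived in the paper.

```python
import random

def confirmed(is_real_path, success_prob, trials, rng):
    """Simulate `trials` out-and-back traversal attempts. A real path is
    confirmed on each attempt with probability success_prob; a spurious
    candidate (an artifact of movement errors) only with probability
    1 - success_prob. Accept the candidate on a simple majority."""
    p = success_prob if is_real_path else 1 - success_prob
    hits = sum(rng.random() < p for _ in range(trials))
    return hits > trials / 2
```

With enough trials, the vote separates real paths from spurious ones with high probability, mirroring the role repeated traversals play in the procedure; a Chernoff-style bound makes the required number of trials polynomial in the relevant parameters.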
Lemma 5 There exists a procedure that, for any δl > 0, learns the local connectivity within cr of a vertex in any landmark graph with probability 1 − δl, in time polynomial in 1/δl, 1/(2γ − 1), and the size of G, and exponential in r.
3.3 Related Models
We can get the same results as in the last section if we allow movement uncertainty in the reverse direction, but demand forward movement certainty. The algorithms are similar, the justifications different. In this case, the graph can be reliably navigated by the same agent that did the map learning.

We are also investigating ways to remove the requirement of either reverse certainty or forward certainty. Reverse certainty is used in the last section to help distinguish probabilistically between true and false results in our testing procedures. We can show, for example, that if r(1 − γ) is bounded by a small constant, then efficient map learning is possible without either the reverse certainty or forward certainty requirement. Another way around this restriction is to allow the exploring agent to drop pebbles or beacons to remember where it has been.
4 Related Work
There have been many approaches to dealing with uncertainty in spatial reasoning [Brooks, 1984, Davis, 1986, Durrant-Whyte, 1988, Kuipers, 1978, Lozano-Perez, 1983, McDermott and Davis, 1982, Moravec and Elfes, 1985, Smith and Cheeseman, 1986], but most of these methods suffer from the effects of multiplicative error in estimating relative position and orientation. This paper is concerned with eliminating the effects of multiplicative error by either eliminating local uncertainty altogether or by taking enough measurements to ensure that such effects are reduced to tolerable levels. In this section, we consider several related approaches.
Kuipers defines the notion of "place" in terms of a set of related visual events [Kuipers, 1978]. This notion provides a basis for inducing graphs from measurements. In Kuipers' framework [1988], locations are arranged in an unrestricted planar graph. There is recognition uncertainty, but there is no directional uncertainty (if a robot tries to traverse a particular hall, then it will actually traverse that hall; it may not be able to measure exactly how long the hall is, but it will not mistakenly move down the wrong hall). Kuipers goes to some length to deal with recognition uncertainty. To ensure correctness, he has to assume that there is some reference location that is distinguishable from all other locations. Since there is no directional uncertainty, any two locations can be distinguished by traversing paths to the reference location. Given a procedure that is guaranteed to uniquely identify a location if it succeeds, and succeeds with high probability, we can show that a Kuipers-style map can be reliably probably almost always usefully learned using an analysis similar to that of Section 3. In fact, we do not require that the edges emanating from each vertex be labeled, just that they are cyclically ordered.

Dudek et al. [1988] consider the problem of learning a graph in which all vertices are indistinguishable and upon entering a vertex the robot can leave by any arc indexed from the one it entered on. The robot can always
retrace its steps if it remembers the directions it took at each point during exploration. The authors show that the problem is unsolvable in general, but that by providing the robot with a number of distinct markers (k ≥ 1) the robot can learn the graph in time polynomial in the graph's size. In order to place a marker on a particular vertex, the robot must visit that vertex; in order to recover the marker at a later time, the robot must return to the vertex. A vertex with a marker on it acts as a temporary landmark. No assumption is made regarding the planarity of the graph. The problem with a single marker that can be placed once but not recovered is also unsolvable, but, if you allow a compass in addition, the problem can be solved in polynomial time.
Levitt et al. [1987] describe an approach to spatial reasoning that avoids multiplicative error by introducing local coordinate systems based on landmarks. Landmarks correspond to environmental features that can be acquired and, more importantly, reacquired in exploring the environment. Given that landmarks can be uniquely identified, one can induce a graph whose vertices correspond to regions of space defined by the landmarks visible in that region. The resulting problem involves neither recognition nor movement uncertainty. Our results in Section 3 bear directly on any extension of Levitt's work that involves either recognition or movement uncertainty.
5 Conclusion
This paper examines the role of uncertainty in map learning. We assume an environmental model that provides for a finite set of distinctive locations that can be reliably detected and repeatedly found. Under this assumption, the problem of map learning reduces to one of extracting the structure of a graph through a process of exploration in which only small parts of the structure can be sensed at a time and sensing is subject to error. We are particularly interested in showing that cumulative errors in reasoning about the global properties of the environment based on local measurements can be reduced to acceptable levels using a polynomial (in the size of the graph) amount of exploration. The results in this paper shed light on several existing approaches to map learning by showing how they might be extended to handle various types of uncertainty. Our basic framework is general enough to be applied to a wide variety of map learning problems. We have identified one particular source of uncertainty, namely continuity uncertainty (see Section 2), that we believe is of particular interest in learning maps of buildings and other environments possessing an easily discernible structure.
References
[Aleliunas et al., 1979] Romas Aleliunas, Richard M. Karp, Richard J. Lipton, Laszlo Lovasz, and Charles Rackoff. Random walks, universal traversal sequences, and the complexity of maze problems. In Proceedings of the 20th Symposium on the Foundations of Computer Science, pages 218-223, 1979.
[Basye et al., 1989] Kenneth Basye, Thomas Dean, and Jeffrey Scott Vitter. Coping with uncertainty in map learning. Technical Report CS-89-27, Brown University Department of Computer Science, 1989.
[Brooks, 1984] Rodney A. Brooks. Aspects of mobile robot visual map making. In H. Hanafusa and H. Inoue, editors, Second International Symposium on Robotics Research, pages 325-331, Cambridge, Massachusetts, 1984. MIT Press.
[Davis, 1986] Ernest Davis. Representing and Acquiring Geographic Knowledge. Morgan Kaufmann, Los Altos, California, 1986.
[Dean, 1988] Thomas Dean. On the complexity of integrating spatial measurements. In Proceedings of the SPIE Conference on Advances in Intelligent Robotic Systems. SPIE, 1988.
[Dudek et al., 1988] Gregory Dudek, Michael Jenkins, Evangelos Milios, and David Wilkes. Robotic exploration as graph construction. Technical Report RBCV-TR-88-23, University of Toronto, 1988.
[Durrant-Whyte, 1988] Hugh F. Durrant-Whyte. Integration, Coordination and Control of Multi-Sensor Robot Systems. Kluwer Academic Publishers, 1988.
[Kuipers and Byun, 1988] Benjamin J. Kuipers and Yung-Tai Byun. A robust, qualitative method for robot spatial reasoning. In Proceedings AAAI-88, pages 774-779. AAAI, 1988.
[Kuipers, 1978] Benjamin Kuipers. Modeling spatial knowledge. Cognitive Science, 2:129-153, 1978.
[Levitt et al., 1987] Tod S. Levitt, Daryl T. Lawton, David M. Chelberg, and Philip C. Nelson. Qualitative landmark-based path planning and following. In Proceedings AAAI-87, pages 689-694. AAAI, 1987.
[Lozano-Perez, 1983] Tomas Lozano-Perez. Spatial planning: A configuration space approach. IEEE Transactions on Computers, 32:108-120, 1983.
[McDermott and Davis, 1982] Drew V. McDermott and Ernest Davis. Planning routes through uncertain territory. Artificial Intelligence, 22:107-156, 1982.
[Moravec and Elfes, 1985] H. P. Moravec and A. Elfes. High resolution maps from wide angle sonar. In IEEE International Conference on Robotics and Automation, pages 138-145, March 1985.
[Rivest and Sloan, 1988] Ronald L. Rivest and Robert Sloan. Learning complicated concepts reliably and usefully. In Proceedings AAAI-88, pages 635-640. AAAI, 1988.
[Smith and Cheeseman, 1986] Randall Smith and Peter Cheeseman. On the representation and estimation of spatial uncertainty. The International Journal of Robotics Research, 5:56-68, 1986.
[Valiant, 1984] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27:1134-1142, 1984.
[Yemini, 1979] Yechiam Yemini. Some theoretical aspects of position-location problems. In Proceedings of the 20th Symposium on the Foundations of Computer Science, pages 1-7, 1979.