Ontological diversity for supporting intelligent shared-control spatial-assistance robots


ASK-IT International Conference, October 2006



John A. Bateman
University of Bremen, Bremen 28334, Germany
tel: +49/421-218-9483
fax: +49/421-218-4283
bateman@uni-bremen.de



Abstract


This paper illustrates some of the problems that immediately arise when trying to support users carrying out spatially-embedded tasks such as navigation, movement within known and unknown environments, and the like. The crucial role of communication between user and supporting device is shown and basic problems with effective spatial communication are demonstrated. A solution relying on foundational axiomatized ontologies and ontological diversity supported by modular ontology definitions is then proposed. It is argued that such an approach is also necessary for integrating the broad range of knowledge that is relevant for intelligent spatial behaviour overall. The paper concludes with a consideration of how such mechanisms can then also be applied to intelligent environments as such. Environments under this view become another kind of spatially-aware agent and we can apply much that we have learnt concerning agent-centred spatial support to environment-based support also.



Keywords

Spatial assistance systems, formal ontology, human-robot interaction, spatial language, intelligent environments




1. Introduction and Preliminaries

One important aspect of mobility concerns space and spatial awareness. Spatial awareness plays a role when acting in space, when reasoning about space, and when communicating about space. Deepening our understanding of these three distinct but overlapping issues can therefore be seen as one significant source of input for attempts to provide mobility assistance. In the Collaborative Research Center on Spatial Cognition (SFB/TR8: http://www.sfbtr8.uni-bremen.de) of the Universities of Bremen and Freiburg, we are conducting an intensive programme of research involving all three of these issues. Several of our research activities also focus particularly on spatial assistance and these will provide the main background that I draw on for this paper.

One distinctive feature of our approach is an extensive reliance on ontological engineering and, in particular, on formal axiomatized ontologies (Guarino & Poli, 1995; Guarino, 1998). This foundation appears essential not only for doing justice to the complexity of the spatial domain but also for constructing systems capable of dealing intelligently with spatial problems.

The paper is structured as follows. First, I informally sketch some illustrative examples where problems of space and spatial 'interpretation' become an issue. These will serve to motivate the particular kinds of work that we see ontologies taking on in the spatial assistance domain. Second, I describe the distinct kinds of ontological organization we have found necessary for dealing with these problems and our conclusions to date concerning requirements for adequate formalizations. Finally, I point towards a further possible extension of the ideas presented to take in supportive intelligent environments as well as individual agents: all can potentially benefit from detailed and well-founded ontologies of space and the commonsense world.

2. Examples of problems of spatial communication

To begin, I will consider spatial assistance largely from the perspective of robotic assistance systems although, in the conclusion, I will broaden this perspective somewhat and consider assistance 'agents' more generally. The examples I present show some of the work that needs to be supported by intelligent assistance systems in the spatial domain. Several of the main research scenarios in our Collaborative Research Center currently involve autonomous wheelchairs, in particular the Bremen autonomous wheelchair Rolland (Lankenau, Meyer & Krieg-Brückner, 1998), and similar robotic assistance systems; the examples of this section will therefore assume this as their area of application. We are also working on non-robotic assistance devices, such as way-finding and route-planning assistants, and these also contribute to the general frameworks for 'spatial intelligence' that are being developed.

We will see that much of the complexity that arises for providing intelligent spatial assistance revolves around issues of effective and efficient communication between the user and their assisting device. As such devices grow in sophistication, the possibilities of misunderstandings multiply. In the best case, this may simply be annoying; in the worst case, however, it can be dangerous and even life-threatening. Finding appropriate solutions in this area therefore represents an important issue in its own right.

In most practical approaches to robotic assistance, relatively simple approaches are taken to the issue of making the device do what the user wants it to do. With direct control systems, where the user is required, for example, to steer the device directly by means of some control interface, the device is relatively constrained in terms of the possible 'misunderstandings' that might occur concerning what the user intended. Such misunderstandings are still possible, of course, even when such a direct relationship is assumed between the user and the actions to be performed. If the device, for example a wheelchair, is controlled by a joystick, then the gestures made by the user may still need to be interpreted as to their intent: was this movement to the left an instruction or, possibly, a motoric problem of the user resulting in a spurious, non-intended joystick movement? To meet this situation, devices are now developed which mediate between the immediate input received and what they 'interpret' the intention of the user to be. A gesture might indicate, for example, that a U-turn is to be initiated rather than iconically depicting some path that the wheelchair is to follow; other kinds of user-triggered 'basic behaviours', such as wall-following, might also be employed to take the moment-by-moment load of directing the wheelchair off the user.

As this process becomes more sophisticated, we move away from direct control and into the area of shared control systems. Systems of this kind solve problems or carry out tasks, such as getting from A to B, with the user. One example of a fairly complex system of this sort is provided by modern passenger aircraft: due to their complexity, the pilot is not always in direct control of every aspect of the machine's behaviour; there is shared control. Our autonomous wheelchair is also a shared control device (Lankenau & Röfer, 2000): the user may initiate certain tasks that the wheelchair then performs autonomously. Moreover, the device may be performing tasks on its own agenda independently of explicit instruction from the user, for example, avoiding obstacles.

With ever more sophisticated devices, it is natural to consider how they can best be made aware of the intentions of their users. Here, a relatively obvious extension is to make the device responsive to voice commands, such as 'go forward' or 'go left'. To begin, the voice command may be seen simply as a replacement, or alternative, for moving the joystick in a particular direction. However, with more complex intentions, the metaphor of direct control quickly breaks down: users see themselves as interacting with the device rather than simply directing it. And here, issues of dialogue and communication about and in space come to the fore.

Designers of systems employing more sophisticated verbal interaction need to be aware of this. As long as a voice command is mimicking the movement of a joystick, there is little to go wrong; this is, however, a very unnatural interpretation of such commands. The natural use of even such simple instructions as 'go forward' is not, in general, or even often, analogous to the movement of a joystick. The more that the commands accepted by a device appear to be uses of natural language, the more a user automatically expects the device to interpret them as he or she does. Such users will quickly be disappointed, however. In moving to natural language interaction, an entirely different set of expectations are raised on the part of the user.



Figure 1: Route descriptions at various types of junctions


Consider a simple command such as 'go left' and the sequence of junction configurations shown in Figure 1. Configuration (a) should not create any problems; configuration (b) leaves a little more room for doubt (since, geometrically, there are two turnings that are 'on the left'); but configuration (c) might well give considerable pause for thought: for example, if the situation is that this is a navigation assistant and the main road is the one marked M, then the geometrically 'straight ahead' option is much more likely to be the intended 'left' turn. If it is unclear to either the device or the user whether this is the case, then there is considerable opportunity for confusion. A detailed study of just how users conceptualize and describe such junctions when navigating is given by Klippel et al. (2005).
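As a minimal illustration of why such interpretation cannot be purely geometric, the following Python sketch (entirely hypothetical; neither the numbers nor the representation come from the systems described in this paper) scores candidate branches at a junction for a 'go left' instruction in two different frames of reference: relative to the vehicle's heading and relative to the continuation of the main road. In a configuration like (c) the two frames pick out different branches.

    # Hypothetical sketch: interpreting 'go left' at a junction.  Branches
    # are bearings in degrees, measured counter-clockwise from the current
    # direction of travel (0 = straight ahead, +90 = hard left).

    def pick_left(branch_bearings, reference_bearing=0.0):
        """Return the branch whose bearing, measured against the given
        reference direction, is closest to an ideal 'left' of +90 degrees."""
        def badness(b):
            return abs((b - reference_bearing) - 90.0)
        return min(branch_bearings, key=badness)

    # Invented configuration in the spirit of Figure 1(c): the main road M
    # bends to the right (-40), a minor branch continues roughly straight
    # ahead (+5), and another heads sharply left (+110).
    branches = [-40.0, 5.0, 110.0]

    geometric_choice  = pick_left(branches, reference_bearing=0.0)    # vehicle frame
    functional_choice = pick_left(branches, reference_bearing=-40.0)  # main-road frame

    print(geometric_choice, functional_choice)   # 110.0 vs 5.0: different answers

The point of the sketch is only that which branch counts as 'left' shifts with the frame adopted; which frame is the right one is exactly the kind of contextual knowledge the device must bring to bear.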

A similar range of difficulties occurs in spatial configurations such as those shown in Figure 2. Imagine that the speaker (perhaps using the wheelchair that he or she is communicating with so as to avoid obvious questions such as 'your left or mine?') is moving in the direction shown and wishes to make reference to the objects shown to the left. This might be to go towards them, or to pick them up, or just to say what they are (for example, in a 'learning' phase where a 'household help' robot might be being informed of its new home). In configurations (a) and (b), the designation 'on the left' is unproblematic. But for configuration (c), the simple use of 'on the left' suddenly becomes less appropriate for object A even though the identically placed object in configuration (b) presented no problem.



Figure 2: Identifying objects by location


The situation is in fact precisely analogous to that of Figure 1 in that expressions such as 'left' are rarely interpretable purely in terms of geometric position. A detailed empirical study of what users do when confronted with different spatial configurations is presented in Tenbrink (2005). What 'left' really means is more like: 'left' picks out an object of some agreed upon and relevant type that is 'more on the left' than any other competing object. If there are competing objects, then natural descriptions tend to employ other ways of referring to the objects or directions in question ('the nearest one'), or modify the description ('a little to the left'). Problems also occur when it is unclear whether there is competition or not. If A is a table and B is a chair, then the 'chair on the left' is unproblematic; but if A and B are both chairs, then the 'chair on the left' will remain ambiguous.
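A minimal Python sketch (hypothetical; not the resolution procedure of any of the systems discussed here) of this kind of reference resolution might look as follows: filter by the agreed, task-relevant type, then check whether one candidate is clearly 'more on the left' than its competitors, and otherwise report the reference as ambiguous, which is the cue to ask back or to choose a different description.

    # Hypothetical sketch: resolving 'the chair on the left'.  Objects carry
    # a perceived type and a lateral offset in metres (positive = to the
    # speaker's left).  All names and numbers are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class PerceivedObject:
        name: str
        obj_type: str
        lateral: float     # positive = left of the direction of travel

    def resolve(description_type, objects, margin=0.5):
        candidates = [o for o in objects
                      if o.obj_type == description_type and o.lateral > 0]
        if not candidates:
            return None, "no candidate of that type on the left"
        candidates.sort(key=lambda o: o.lateral, reverse=True)
        if len(candidates) > 1 and \
           candidates[0].lateral - candidates[1].lateral < margin:
            return None, "ambiguous: competing objects equally far to the left"
        return candidates[0], "ok"

    # Configuration (c) of Figure 2, roughly: A and B both lie on the left.
    scene = [PerceivedObject("A", "chair", 1.2),
             PerceivedObject("B", "chair", 1.0)]
    print(resolve("chair", scene))   # ambiguous while both are chairs
    scene[0].obj_type = "table"      # if A is in fact a table...
    print(resolve("chair", scene))   # ...'the chair on the left' is B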

In the case of a robotic assistance device, it may not be immediately obvious just what the type of the object under discussion is: this depends on the perceptual capabilities of the device. A chair and a table may not be immediately distinguishable. Indeed, and just as bad, the table might even turn out to be invisible (if the perceptual device is a laser scanner set at a low height), in which case the user might perceive a competing object that the device does not. The opposite case is also perfectly possible: perhaps the human user is also not able to perceive the environment so well. In general, therefore, the robotic device and its human user may have differing perceptual capabilities, and both are variable across different robotic systems and also across different users. Therefore generic solutions need to be developed which do not rely on given, fixed capabilities on behalf of either the robot or the human user.

Once interaction begins to break down between user and device, it can be difficult to regain the user's confidence. In a series of experiments reported in Tenbrink, Fischer & Moratz (2002), it was found that once users had found that a 'goal-oriented' command (such as 'go to the kitchen') had not been understood, they resorted to basic movement commands (such as 'go left') instead. And if those commands also were not understood, users reverted even further to commands such as 'rotate your wheels', even if the robot did not understand such commands. After this point, users often failed to get the robot in the experiment to do very much at all. Users clearly have very firm ideas about what is 'simpler' to understand and what is 'more difficult'. Once they had pigeonholed the robot as not being able to understand one level of difficulty, they did not bother trying any kind of instruction 'higher' on their complexity scale. Since, however, the abilities of robots, including their abilities to understand instructions, are not known to users, this strategy can go decidedly wrong.




Putting some of these potential issues together now, consider the more complex and naturalistic set-up shown in Figure 3. This is a real floor plan and the situation reported is a genuine confusion that occurred. I will use this to move from the more informal discussion of problems to a more precise account of just where problems are arising. The situation was that there was going to be a meeting in the room indicated as 'target room'; the relevant user was situated at the point marked with a cross, was moving in the direction shown, and wanted to know where the meeting would be. The instructions received were "in the last room on the right": this was understood as the room indicated in the figure by 'understood room'. What went wrong?

In order to pick this apart, we need to resolve the semantics of the received instructions in more detail. First, we need to identify some set of entities in the world that may be designated as rooms. Second, we need to identify in the world an ordered sequence of such entities that allows the designation of one of their members as last in that sequence. In the present case, this can be provided by the perception of the environment as a corridor; such entities are by and large 'line-like' and so, when travelled along, can be used to define sequences. Finally, this ordered sequence of entities needs to be found rightwards of the direction of travel of the user at that point in time. The problem here was due to an unrealized disagreement about the ordered sequence that was in question: the corridor along which the user was moving has a glass door, but that door is usually open. For the user, this was not considered a relevant aspect of the environment and therefore the corridor apparently ended at the open area marked A. The 'last room' on the right was therefore the one understood. For the information-giver, who may have been more familiar with the building, the corridor ends at the glass door and everything on the other side of the glass door is associated more with the open area. Therefore, 'the last room' was an unambiguous reference to the room just before the glass door. Neither interlocutor was aware of any competing entities for the designation 'last room on the right' and so did not see the need for any further information.

Figure 3: A misunderstood spatial description


For a robotic device, the situation is even worse because in order to interpret the intended meaning of the utterance correctly, it would need to have a clear notion of just what is to count as a room or not. In this case, since the goal of the instruction was to find the way to a meeting, it was sensible to restrict the meaning of "room" to offices: this means that the cupboard that comes after the room after the glass door, which contains cleaning equipment and materials, was not considered as a possible option by anyone in the scene. This adds the final complication: even to interpret the relevant entities that will be picked out by some natural language term, it is in general necessary to know just what the goals of the speakers concerned are. Fortunately, this can sometimes be retrieved from the interactional context; for example, answering "the last room on the right" to the question "I've just spilled coffee everywhere, where can I get a mop?" might well trigger a different preferred reading of "room".

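To make these interpretation steps concrete, here is a small Python sketch (entirely hypothetical; not the representation used in our systems, and with invented distances) that walks through the three steps identified above for "the last room on the right": collect candidate entities of the goal-relevant type, order them along the perceived corridor, and take the last one on the right-hand side. The corridor_end and allowed_types parameters make explicit the two points at which the interlocutors of Figure 3 silently diverged.

    # Hypothetical sketch of interpreting "the last room on the right".
    # Rooms are given by their distance along the corridor (metres in the
    # direction of travel), the side they are on, and a coarse type.

    def last_room_on_the_right(rooms, corridor_end, allowed_types=("office",)):
        # Step 1: candidate entities of the agreed, goal-relevant type.
        candidates = [r for r in rooms if r["type"] in allowed_types]
        # Step 2: order them along the corridor, up to its perceived end.
        ordered = sorted((r for r in candidates if r["along"] <= corridor_end),
                         key=lambda r: r["along"])
        # Step 3: restrict to the right-hand side and take the last one.
        right_side = [r for r in ordered if r["side"] == "right"]
        return right_side[-1]["name"] if right_side else None

    rooms = [
        {"name": "room 1",          "along": 5.0,  "side": "right", "type": "office"},
        {"name": "target room",     "along": 12.0, "side": "right", "type": "office"},
        # the glass door is taken to be at roughly 15 m; beyond it:
        {"name": "understood room", "along": 20.0, "side": "right", "type": "office"},
        {"name": "cupboard",        "along": 24.0, "side": "right", "type": "storage"},
    ]

    # Information-giver: the corridor ends at the (usually open) glass door.
    print(last_room_on_the_right(rooms, corridor_end=15.0))   # -> 'target room'
    # User: the open glass door is ignored; the corridor ends at open area A.
    print(last_room_on_the_right(rooms, corridor_end=20.0))   # -> 'understood room'
    # A different goal ("where can I get a mop?") changes what counts as a room:
    print(last_room_on_the_right(rooms, corridor_end=25.0,
                                 allowed_types=("storage",))) # -> 'cupboard'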
The notions of goal and purpose turn out to be crucial for much interpretation. They allow us, for example, to ignore possible small and narrow lanes that might also have run off to the left in Figure 1 (unless we were trying to hide); they also allow us to restrict attention to relevant entities so that the potentially competing objects in Figure 2 are automatically restricted: there are always some objects on the left and the right, but the ones that we need to discriminate are dependent on the task. It is as if, instead of having a straightforward semantics associated with each linguistic item, human interpreters provide an additional 'wrapper' around each use of a term which brings with it an implicit instruction of the form: 'do something sensible with this according to my current needs'. We can see this very clearly in our next and final example, where the very sense and range of application of spatial relations can also be seen to be dependent on task in precisely this way.

Consider again our autonomous wheelchair and the situation of its having received the instruction "take me in front of the refrigerator": one result is shown in Figure 4 (taken from our empirical experiments which investigated just how users communicate with various types of robots, for various kinds of tasks and with respect to various kinds of spatial configurations (Ross et al., 2005; Tenbrink, Shi & Fischer, 2006)). If the user wanted to clean the front of the door of the fridge, then this may be an appropriate position; in the probably more frequent situation that the user wants to get something out of the fridge, the response of the wheelchair would be considered significantly less helpful.

Figure 4: 'In front of' the fridge?
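As a rough illustration of how task knowledge can change what counts as a good 'in front of' position, the following sketch (hypothetical; all numbers, names and placement rules are invented for illustration only and are not taken from our experiments) selects a goal position for the wheelchair depending on whether the user wants to wipe the door or to open it and reach inside.

    # Hypothetical sketch: choosing a goal position for "in front of the
    # fridge" as a function of the user's task.  The fridge frame has the
    # door face along the x-axis; y is the distance out from the door face.

    FRIDGE_WIDTH = 0.6          # metres, door face from x = 0 to x = 0.6
    DOOR_SWING_RADIUS = 0.65    # clearance needed for the door to open

    def goal_position(task):
        """Return (x, y) in the fridge frame for the requested task."""
        if task == "clean door":
            # Squarely centred on the door, close enough to reach its surface.
            return (FRIDGE_WIDTH / 2, 0.4)
        if task == "fetch item":
            # Off to the hinge-free side, outside the volume swept by the
            # door, so that the door can be opened and the interior reached.
            return (FRIDGE_WIDTH + 0.3, DOOR_SWING_RADIUS + 0.1)
        raise ValueError(f"no placement rule for task: {task}")

    print(goal_position("clean door"))   # (0.3, 0.4)  - directly in front
    print(goal_position("fetch item"))   # (0.9, 0.75) - beside the door swing

The same phrase thus maps to quite different target positions once the purpose of the request is taken into account.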

The key result to sum up the kinds of problems exhibited in these scenarios, therefore, is that the interpretation of linguistic terms depends on their functional deployment in contexts of tasks and goals. In order then to provide effective assistance that can bring in more sophisticated mobility issues than those afforded by direct control of the supporting device, it is necessary to communicate with that device. But effective and appropriate action depends crucially on appropriate interpretations; and these interpretations, especially but not only in the domain of space, rely on a range of distinct kinds of information that are by no means immediately obvious.


3. Modelling necessary knowledge of the world


When we turn from informal descriptions of the kinds of things that can go wrong in providing interpretations of spatial information to the knowledge that is necessary to prevent such misunderstandings (or to resolve them as effectively as possible when they do occur), we are drawn quickly to issues of ontology. Moreover, as we will see, we are drawn to ontologies that maintain detailed knowledge of their domains of concern. An entire range of structures currently goes under the heading of 'ontology', ranging from more or less simple taxonomies of terms at one extreme right up to richly axiomatized ontologies at the other. Work on the semantic web, for example, has largely been oriented towards less detailed, 'shallow' or 'lightweight' ontologies due to the complexity of constructing richly axiomatized ontologies for large-scale bodies of information (such as the web: cf. Kim, 2002). For supportive applications and effective communication concerning space, however, we require detailed information of several kinds.

First, we require a generic foundational backbone for the kinds of entities, relations and activities that we encounter. As we saw from our examples, we need to be able to characterize everyday commonsense objects (such as 'rooms') as well as their locations. A detailed overview of the varieties of ontologies currently available is given in Farrar & Bateman (2004); there we recommend use of the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE: Masolo et al., 2003). This ontology is richly axiomatized and provides general categories, such as physical objects, social agents, events and so on. It also provides a detailed model of how such entities relate to space without restricting just how space is to be formalized. This has allowed us to develop a 'plug-and-play' approach to spatial knowledge as suggested in Figure 5. Any physical object (DOLCE: Physical Endurant, PED) necessarily has, by virtue of being a physical object, certain qualities (Physical Qualities, PQ). One of these is its 'being located'-ness. The locations of objects can, however, be set into relation with one another in a variety of ways. DOLCE does not specify how this should be done precisely and leaves us free to entertain a variety of spatial characterizations as motivated by our empirical investigations.




Figure 5: 'Plug and play' spatial ontology relating objects and their locations
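The following Python sketch is a toy rendering of the plug-and-play idea (illustrative only; DOLCE itself is an axiomatized first-order theory, not a class hierarchy in a programming language, and the module names below are invented). The object layer knows only that every physical endurant carries a location quality; the module that interprets those locations (region-based, direction-based, metric, and so on) can be swapped without touching the object layer.

    # Toy sketch of the arrangement suggested in Figure 5 (illustrative only).
    from abc import ABC, abstractmethod

    class PhysicalEndurant:
        """DOLCE-style physical object with a 'being located' quality."""
        def __init__(self, name, location):
            self.name = name
            self.location = location      # interpreted by a spatial module

    class SpatialModule(ABC):
        @abstractmethod
        def relate(self, a: PhysicalEndurant, b: PhysicalEndurant) -> str:
            """Return a qualitative spatial relation between two objects."""

    class RegionModule(SpatialModule):
        # Locations are 1-D intervals (start, end); relations are mereotopological.
        def relate(self, a, b):
            (a1, a2), (b1, b2) = a.location, b.location
            if a2 < b1 or b2 < a1:
                return "disconnected"
            if a1 >= b1 and a2 <= b2:
                return "part of"
            return "overlaps"

    class DirectionModule(SpatialModule):
        # Locations are bearings in degrees from a shared reference direction.
        def relate(self, a, b):
            return "left of" if a.location > b.location else "right of"

    # The same object layer can be combined with either spatial perspective.
    table = PhysicalEndurant("table", (2.0, 3.0))
    room  = PhysicalEndurant("room",  (0.0, 6.0))
    print(RegionModule().relate(table, room))    # 'part of'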


There is a long and detailed history of formal qualitative approaches to formalizing space that we can draw on here. In such approaches, space is seen not as a pure Euclidean geometric space, but instead as entities and relations with particular qualitative properties. One of the most well-known treatments is the Region Connection Calculus (RCC) of Randell, Cui & Cohn (1992). This formalizes space in terms of regions and whether or not such regions are connected. The RCC family of spatial calculi can be made more or less expressive depending on refinements of the connection relation. Another well-known treatment, called the Doublecross calculus (Freksa, 1992), is based on a qualitative characterization of direction (in front, behind, front left, front right, etc.). An overview of such qualitative approaches to space and the reasoning that they support is given by Cohn & Hazarika (2001); a more introductory overview with explicit relations to treatments of space found in computational ontologies is given in Bateman & Farrar (2004).
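As a flavour of what such a qualitative treatment looks like, here is a small sketch (illustrative only; real RCC is defined axiomatically from a connection relation over arbitrary regions, whereas the example restricts itself to closed intervals on a line standing in for regions) that classifies which of the eight RCC-8 base relations holds between two such 'regions'.

    # Illustrative sketch: the eight base relations of RCC-8, computed for
    # closed intervals on a line.  DC = disconnected, EC = externally
    # connected, PO = partial overlap, TPP/NTPP = (non-)tangential proper
    # part, TPPi/NTPPi their inverses, EQ = equal.

    def rcc8(a, b):
        (a1, a2), (b1, b2) = a, b
        if a2 < b1 or b2 < a1:
            return "DC"
        if a2 == b1 or b2 == a1:
            return "EC"
        if (a1, a2) == (b1, b2):
            return "EQ"
        if b1 <= a1 and a2 <= b2:                        # a inside b
            return "TPP" if a1 == b1 or a2 == b2 else "NTPP"
        if a1 <= b1 and b2 <= a2:                        # b inside a
            return "TPPi" if a1 == b1 or a2 == b2 else "NTPPi"
        return "PO"

    # Invented example regions, loosely echoing the corridor of Figure 3.
    corridor  = (0.0, 15.0)
    room      = (10.0, 14.0)
    open_area = (15.0, 25.0)

    print(rcc8(room, corridor))        # NTPP: the room lies inside the corridor
    print(rcc8(corridor, open_area))   # EC: they touch at the glass door
    print(rcc8((0, 12), (10, 20)))     # PO: partial overlap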

Several central directions of research within the Collaborative Research Center revolve around qualitative spatial calculi; these include constructing common formalizations (Wölfl & Mossakowski, 2005), demonstrating formal complexity results for the different classes of calculi (Renz & Nebel, 1999), and developing new calculi specifically tailored for particular classes of spatial problems (Dylla & Wallgrün, 2006). One of the main motivations for using such calculi is that their qualitative nature makes them very much more like the kinds of descriptions and representations that people appear to use. An account in terms of a qualitative description ('in front of the fridge') is seen to be more natural than geometric or metric descriptions ('0.5 metres along an axis passing through the centre of gravity of the fridge parallel to the floor orthogonal to the wall'). Providing support for human users is therefore more likely to benefit from qualitative representations than from quantitative.

However, and of particular importance for our use of such calculi for spatial assistance, particular tasks that a user needs to have performed can render particular qualitative treatments of space more appropriate than others. For example, it is natural when discussing routes and giving directions to use calculi such as the Doublecross calculus: the region-based viewpoint is not so helpful. But, when considering where entities, objects and other people are located, then a region-based view can be more useful. In general, different kinds of linguistic spatial expression can 'activate' distinct kinds of spatial calculi, as suggested graphically in Figure 6. And, as a consequence, our plug-and-play embedding of models of space into a generic ontology of entities appears just what is required. Relating between spatial descriptions, for example, when we know that turning in a particular qualitative direction ('left') will place us in a particular qualitative region ('a corridor'), is then handled formally by treating the individual spatial models as formal theories and defining explicit mappings (theory morphisms) between them. This degree of reasoning is not possible if we do not have a sufficiently detailed axiomatized ontology at hand.


Figure 6: Distinct kinds of ontologies of space
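A theory morphism is, informally, a symbol-to-symbol translation that is required to preserve the axioms of the source theory in the target theory. The sketch below is purely illustrative (in the project such mappings are written in specification languages such as CASL, with associated proof obligations, rather than in Python, and all the place names are invented); it only shows the shape of such a translation between a tiny direction vocabulary and a tiny region vocabulary, of the kind needed to connect 'turn left' with 'end up in the corridor'.

    # Illustrative sketch of a theory-morphism-style translation between a
    # direction-based vocabulary (Doublecross-like: 'front', 'left', ...) and
    # a region-based one (which region is entered when moving that way).

    # Facts of the region theory: region adjacent to a place in a given
    # qualitative direction (invented example data).
    ADJACENT_REGION = {
        ("hallway", "front"):  "corridor",
        ("hallway", "left"):   "kitchen",
        ("corridor", "left"):  "target room",
    }

    # Signature mapping: direction-theory symbols -> region-theory symbols.
    DIRECTION_TO_REGION = {
        "turn-left":   "left",
        "go-straight": "front",
    }

    def region_after(place, direction_symbol):
        """Interpret a direction-theory instruction in the region theory."""
        side = DIRECTION_TO_REGION[direction_symbol]
        return ADJACENT_REGION.get((place, side))

    print(region_after("hallway", "turn-left"))    # 'kitchen'
    print(region_after("corridor", "turn-left"))   # 'target room'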




In short, we are now developing methods for dealing with the range of spatially-relevant 'knowledges' that the examples of spatial assistance problems discussed above show to be necessary. Our approach argues that the sheer diversity of kinds of knowledge required argues strongly for a modular approach to ontological diversity. That is, the most effective way of achieving the knowledge-rich systems that will truly be able to contribute to intelligent assistance is to provide distinct ontological modules, each of which provides alternative perspectives on the issues being modelled, and to relate these formally using mathematical tools involving theory morphisms and category theory.


With this framework in place, we can then go further. There are other sources of information that are also important for supportive assistance systems and many of these are now increasingly also being organized in terms of ontologies. This includes information concerning local facilities, the availability of particular forms of access (e.g., rampways) to buildings, and much more. An indicative range of such knowledge sources is listed in Figure 7. Accepting such ontological diversity and embedding it within a formalized framework for relating distinct perspectives appears to be the most promising approach for leveraging the information explosion and utilising it for increasingly sophisticated and effective spatial assistance systems.

Figure 7: Relevant kinds of knowledge for spatial intelligence




4. Towards the future: spatially intelligent environments


I have discussed some of the distinct kinds of spatial knowledge that need to be maintained for effective spatial assistance and how ontological engineering can provide appropriate mechanisms for this. Once we have such knowledge, we can consider other contexts where the sophisticated spatial awareness supported can be utilised.

One particularly challenging application lies in a radical extension of just what 'agents' we can consider to maintain spatial knowledge. For example, it has become relatively common to talk of environments becoming more 'intelligent' in various senses and across various disciplines. This intelligence ranges from simple environmental responses, such as lowering blinds when rooms are in direct sunlight or controlling room temperature, to more exploratory attempts to provide dynamically configurable office buildings that are able to change room size and function to respond to changing demands. Devices for adding user-adapted way-finding instructions into the environment, i.e., individualized signage, are also under investigation. Within this general paradigm, research directions into spatial intelligence open up a significant new perspective that may offer a theoretically well-articulated and unifying foundation for the notion of 'intelligent environments' as such. The crucial addition here is in the explicit treatment of space as a resource about which intelligent systems can reason and communicate. This leads directly to a conceptualization of environments as spatially-aware agents: that is, for an environment to be 'intelligent', it must take on some of the essential properties developed for spatially intelligent agents described above. Intelligent environments then involve reasoning, action, and interaction in and concerning space, and draw on cognitively well-founded models of space and its use.

A common question when dealing with agents distributed in space, for example, is that of where knowledge about aspects of their environment is to be situated: i.e., which knowledge belongs where? Environmental knowledge is often unevenly distributed. For example, differences in spatial knowledge about the environment form the basis both for the necessity of route assistance and for the issues raised in multi-robot exploration. When we add the environment itself into the situation as a spatially aware agent, we can ask under which circumstances the spatial knowledge of that environment is best maintained by the environment itself and when by its users. Infrequent visitors to an environment (e.g., visitors to an office or hospital) will not be knowledgeable about the environment and so an intelligent environment should see the need for interaction concerning the intended actions of the users within its scope. Individualized signage then offers one set of appropriate technologies, as well as individualized maps (e.g., providing information only about the spatial location of the departments or people to be visited and significant landmarks required to find them), individualized route descriptions, or dedicated mobility assistance (e.g., wheelchairs, walkers, etc.).

Further interaction concerning the space can also be envisaged between the individual components, e.g., between the environment (with overall knowledge about its current state) and assistance robots (with localized route knowledge necessary for a specific task). As users of an environment move on from being infrequent or single visitors, their knowledge of the environment also increases and so their need for guidance from the environment changes. This responsiveness to partially shared (and therefore potentially outdated) knowledge is one crucial contribution to interacting intelligently with users.

I suggested in the examples given above that issues of functionality are critical in determining representations and appropriate reasoning about space: spatial awareness requires consideration of what the goals and tasks of an agent are within its space. This carries over naturally to an intelligent environment's treatment of its space. In the simplest cases, knowledge that an agent intends to reach a specific place within the environment (e.g., a particular person in a building or a particular station in a transportation network) usefully constrains the situations where interaction between environment and agent is necessary. For example, a particular person that is being looked for may be located in a different building to that expected (when in, e.g., an institution scattered over several buildings) and so interaction will be necessary to communicate this fact. An intelligent environment is then one which has become an active agent concerned with the situated mobility of the individual agents and groups of agents located within it.

In short, this suggests that Pollack's (2005) recent classification of the kinds of activities that assistance robots can perform into 'assurance', 'compensation' and 'assessment' can be applied directly to intelligent environments when these are viewed as agents in their own right. The more complex an environment is, the more support can sensibly be envisaged for the inhabitants of that environment. In architectural or navigation contexts, for example, we can see this as ranging from assurance that one is on the right path or that a particular goal is where we thought, through compensation for overly complex or misleading architectural design that is difficult for a user to interpret in a way supportive of their goals, to assessment of designs for buildings or outdoor networks (city streets, transportation systems, etc.) in terms of their cognitively plausible effects on distinct groups of intended users, with perhaps differing capabilities and difficulties. For all such extended applications, the kinds of basic spatial representations and flexible relations with knowledge of the world and the goals and purposes of users supported by foundational modular ontologies will play a central role.
of users supported by foundational modular ontologies will play a central role.


Acknowledgements

The work reported in this paper is part of the SFB/TR8, Collaborative Research Center 'Spatial Cognition', funded by the Deutsche Forschungsgemeinschaft (DFG). Particular thanks are due to all the members of the Ontospace, SharC and SPIN projects of the Centre.



References

Bateman, J. & Farrar, S. (2004). Spatial ontology baseline, SFB/TR8 internal report I1-[OntoSpace]: D2, Collaborative Research Center for Spatial Cognition, University of Bremen, Germany.

Cohn, A. & Hazarika, S. (2001). 'Qualitative spatial representation and reasoning: an overview', Fundamenta Informaticae 43, 2-32.

Dylla, F. & Wallgrün, J. O. (2006). On generalizing orientation information in OPRA_m, in 'Proceedings of the 29th German Conference on Artificial Intelligence (KI 2006)', Lecture Notes in Artificial Intelligence, Springer, Berlin.

Farrar, S. & Bateman, J. (2004). General ontology baseline, SFB/TR8 internal report I1-[OntoSpace]: D1, Collaborative Research Center for Spatial Cognition, University of Bremen, Germany.

Freksa, C. (1992). Using orientation information for qualitative spatial reasoning, in A. U. Frank, I. Campari & U. Formentini, eds., 'Theories and methods of spatio-temporal reasoning in geographic space', Vol. 639 of LNCS, Springer, Berlin, pp. 162-178.

Guarino, N. (1998). Formal ontology and information systems, in N. Guarino, ed., 'Formal Ontology in Information Systems', IOS Press, Amsterdam, pp. 3-18.

Guarino, N. & Poli, R. (1995). 'The role of formal ontology in the information technology', International Journal of Human and Computer Studies 43, 623-624.

Kim, H. (2002). 'Predicting how ontologies for the semantic web will evolve', Communications of the ACM 45(2), 48-54.

Klippel, A., Tappe, T., Kulik, L. & Lee, P. U. (2005). 'Wayfinding choremes - a language for modeling conceptual route knowledge', Journal of Visual Languages and Computing 16(4), 311-329.

Lankenau, A., Meyer, O. & Krieg-Brückner, B. (1998). Safety in robotics: The Bremen autonomous wheelchair, in 'Proceedings of AMC98, 5th Int. Workshop on Advanced Motion Control', pp. 524-529.

Lankenau, A. & Röfer, T. (2000). The role of shared control in service robots - the Bremen autonomous wheelchair as an example, in T. Röfer, A. Lankenau & R. Moratz, eds., 'Service Robotics - Applications and Safety Issues in an Emerging Market. Workshop Notes. European Conference on Artificial Intelligence 2000 (ECAI 2000)', pp. 27-31.

Masolo, C., Borgo, S., Gangemi, A., Guarino, N. & Oltramari, A. (2003). Ontologies library (final), WonderWeb Deliverable D18, ISTC-CNR, Padova, Italy.

Pollack, M. E. (2005). 'Intelligent technology for an aging population: the use of AI to assist elders with cognitive impairment', AI Magazine, pp. 9-24.

Randell, D., Cui, Z. & Cohn, A. (1992). A spatial logic based on regions and connection, in 'Proceedings of the 3rd International Conference on Knowledge Representation and Reasoning', Morgan Kaufmann, San Mateo, pp. 165-176.


Renz, J. & Nebel, B. (1999). 'On the complexity of qualitative spatial reasoning: a maximal tractable fragment of the region connection calculus', Artificial Intelligence 108(1-2), 69-123.

Ross, R., Shi, H., Vierhuff, T., Krieg-Brückner, B. & Bateman, J. (2005). Towards dialogue based shared control of navigating robots, in C. Freksa, M. Knauff, B. Krieg-Brückner, B. Nebel & T. Barkowsky, eds., 'Spatial Cognition IV: Reasoning, Action, Interaction. International Conference Spatial Cognition 2004, Frauenchiemsee, Germany, October 2004, Proceedings', Springer, Berlin, Heidelberg, pp. 478-499.

Tenbrink, T. (2005). Identifying objects on the basis of spatial contrast: an empirical study, in C. Freksa, M. Knauff, B. Krieg-Brückner, B. Nebel & T. Barkowsky, eds., 'Spatial Cognition IV: Reasoning, Action, Interaction. International Conference Spatial Cognition 2004, Frauenchiemsee, Germany, October 2004, Proceedings', Springer, Berlin, Heidelberg, pp. 124-146.

Tenbrink, T., Fischer, K. & Moratz, R. (2002). Spatial strategies in linguistic human-robot communication, in C. Freksa, ed., 'KI-Themenheft Spatial Cognition', arenDTaP Verlag.

Tenbrink, T., Shi, H. & Fischer, K. (2006). Route instruction dialogues with a robotic wheelchair, in 'Proc. BranDial 2006: The 10th Workshop on the Semantics and Pragmatics of Dialogue. University of Potsdam, Germany; September 11th-13th 2006'.

Wölfl, S. & Mossakowski, T. (2005). CASL specifications of qualitative calculi, in A. G. Cohn & D. M. Mark, eds., 'Proceedings of Spatial Information Theory: International Conference, COSIT 2005', number 3693 in 'LNCS', Springer, Berlin, pp. 200-217.