Ontological diversity for supporting intelligent shared-control spatial-assistance robots

IT International Conference

October 2006



John A. Bateman

University of Bremen
Bremen 28334, Germany

tel: +49/421

fax: +49/421



This paper illustrates some of the problems that immediately arise when trying to support users carrying out spatially embedded tasks such as navigation, movement within known and unknown environments, and the like. The crucial role of communication between user and supporting device is shown and basic problems with effective spatial communication are demonstrated. A solution relying on foundational axiomatized ontologies and ontological diversity supported by modular ontology definitions is then proposed. It is argued that such an approach is also necessary for integrating the broad range of knowledge that is relevant for intelligent spatial behaviour overall. The paper concludes with a consideration of how such mechanisms can then also be applied to intelligent environments as such. Environments under this view become another kind of spatially aware agent and we can apply much that we have learnt concerning agent-centred spatial support to environment-based support also.


Keywords: Spatial assistance systems, formal ontology, human-robot interaction, spatial language, intelligent environments




1. Introduction and Preliminaries

One important aspect of mobility concerns space and spatial awareness. Spatial awareness plays a role when acting in space, when reasoning about space, and when communicating about space. Deepening our understanding of these three distinct but overlapping issues can therefore be seen as one significant source of input for attempts to provide mobility assistance. In the Collaborative Research Center on Spatial Cognition (SFB/TR8: http://www.sfbtr8.uni-bremen.de) of the Universities of Bremen and Freiburg, we are conducting an intensive programme of research involving all three of these issues. Several of our research activities also focus particularly on spatial assistance and these will provide the main examples that I draw on for this paper.

One distinctive feature of our approach is an extensive reliance on ontological engineering and, in particular, axiomatized ontologies (Guarino, 1998; Guarino & Poli, 1995). This foundation appears essential not only for doing justice to the complexity of the spatial domain but also for constructing systems capable of dealing intelligently with spatial information.

The paper is structured as follows. First, I informally sketch some illustrative examples where problems of space and spatial 'interpretation' become an issue. These will serve to motivate the particular kinds of work that we see ontologies taking on in the spatial assistance domain. I then describe the distinct kinds of ontological organization we have found necessary for dealing with these problems and our conclusions to date concerning requirements for adequate formalizations. Finally, I point towards a further possible extension of the ideas presented to take in supportive intelligent environments as well as individual agents: all can potentially benefit from detailed and well-founded ontologies of space and the commonsense world.

2. Examples of problems of spatial communication

To begin, I will consider spatial assistance largely from the perspective of robotic assistance systems although, in the conclusion, I will broaden this perspective somewhat and consider assistance 'agents' more generally. The examples I present show some of the work that needs to be supported by intelligent assistance systems in the spatial domain. Several of the main research scenarios in our Collaborative Research Center currently involve autonomous systems, in particular the Bremen autonomous wheelchair (Lankenau, Meyer & Brückner, 1998), and similar robotic assistance systems; the examples of this section will therefore assume this as their area of application. We are also working on non-robotic assistance devices, such as way-finding and route-planning assistants, and these also contribute to the general frameworks for 'spatial intelligence' that are being developed.

We will see that much of the complexity that arises for providing intelligent spatial assistance revolves around issues of effective and efficient communication between the user and their assisting device. As such devices grow in sophistication, the possibilities of misunderstandings multiply. In the best case, this may simply be annoying; in the worst case, however, it can be dangerous and even life-threatening. Finding appropriate solutions in this area therefore represents an important issue in its own right.

In most practical approaches to robotic assistance, relatively simple approaches are taken to the issue of making the device do what the user wants it to do. With direct control systems, where the user is required, for example, to steer the device directly by means of some control interface, the device is relatively constrained in terms of the possible 'misunderstandings' that might occur concerning what the user intended. Such misunderstandings are still possible, of course, even when such a direct relationship is assumed between the user and the actions to be performed. If the device, for example a wheelchair, is controlled by a joystick, then the gestures made by the user may still need to be interpreted as to their intent: was this movement to the left an instruction or, possibly, a motoric problem of the user resulting in a spurious, non-intended joystick movement? To meet this situation, devices are now developed which mediate between the immediate input received and what they 'interpret' the intention of the user to be. A gesture might indicate, for example, that a U-turn is to be initiated rather than iconically depicting some path that the wheelchair is to follow; other kinds of user support, such as 'basic behaviours' like wall-following, might also be employed to take the moment-to-moment load of directing the wheelchair off the user.

As this process becomes more sophisticated, we move away from direct control and into the area of shared control systems. Systems of this kind solve problems or carry out tasks, such as getting from A to B, together with the user. One example of a fairly complex system of this sort is provided by modern passenger aircraft: due to their complexity, the pilot is not always in direct control of every aspect of the machine's behaviour; there is shared control. Our autonomous wheelchair is also a shared control device (Lankenau & Röfer, 2000): the user may initiate certain tasks that the wheelchair then performs autonomously. Moreover, the device may be performing tasks on its own agenda independently of explicit instruction from the user, for example, avoiding obstacles.

With ever more sophisticated devices, it is natural to consider how they can best be made aware of the intentions of their users. Here, a relatively obvious extension is to make the device responsive to voice commands, such as 'go forward' or 'go left'. To begin, the voice command may be seen simply as a replacement, or alternative, for moving the joystick in a particular direction. However, with more complex intentions, the metaphor of direct control quickly breaks down: users see themselves as communicating with the device rather than simply controlling it. And here, issues of dialogue and communication about and in space come to the fore. Designers of systems employing more sophisticated verbal interaction need to be aware of this. As long as a voice command is mimicking the movement of a joystick, there is little to go wrong; this is, however, a very unnatural interpretation of such commands. The natural use of even such simple instructions as 'go forward' is not, in general, or even often, analogous to the movement of a joystick. The more that the commands accepted by a device appear to be uses of natural language, the more a user automatically expects the device to interpret them as he or she does. Such users will quickly be disappointed, however. In moving to natural language interaction, an entirely different set of expectations is raised on the part of the user.

Figure 1: Route descriptions at various types of junctions

Consider a simple command such as 'go left' and the sequence of junction configurations shown in Figure 1. Configuration (a) should not create any problems; configuration (b) leaves a little more room for doubt (since, geometrically, there are two turnings that are 'on the left'); but configuration (c) might well give considerable pause for thought: for example, if the assumption is that this is a navigation assistant and the main road is the one marked M, then the geometrically 'straight ahead' option is much more likely to be the intended 'left' turn. If it is unclear to either the device or the user if this is the case, then there is considerable opportunity for confusion. A detailed study of just how users conceptualize and describe such junctions when navigating is given by Klippel et al. (2005).

A similar range of difficulties occurs in spatial configurations such as those shown in Figure 2. Imagine that the speaker (perhaps using the wheelchair that he or she is communicating with so as to avoid obvious questions such as 'your left or mine?') is moving in the direction shown and wishes to make reference to the objects shown to the left. This might be to go towards them, or to pick them up, or just to say what they are (for example, in a 'learning' phase where a 'household help' robot might be being informed of its new home). In configurations (a) and (b), the designation 'on the left' is unproblematic. But for configuration (c), the simple use of 'on the left' suddenly becomes less appropriate for object A even though the identically placed object in configuration (b) presented no problem.

Figure 2: Identifying objects by location

The situation is in fact precisely analogous to that of Figure 1 in that expressions such as 'left' are rarely interpretable purely in terms of geometric position. A detailed empirical study of what users do when confronted with different spatial configurations has also been carried out in our related experiments. What 'left' really means is more like: 'left' picks out an object of some agreed-upon relevant type that is 'more on the left' than any other competing object. If there are competing objects, then natural descriptions tend to employ other ways of referring to the objects or directions in question ('the nearest one'), or modify the description ('a little to the left'). Problems also occur when it is unclear whether there is competition or not. If A is a table and B is a chair, then the 'chair on the left' is unproblematic; but if A and B are both chairs, then the 'chair on the left' will remain ambiguous.
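The competition-based reading of 'left' sketched above can be made concrete. The following is a minimal illustrative sketch, not taken from the paper; the object names, the bearing encoding and the 20-degree 'clearly more left' margin are all assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SceneObject:
    name: str
    kind: str       # perceived type, e.g. 'chair', 'table'
    bearing: float  # degrees relative to heading: 0 = ahead, +90 = hard left

def resolve_on_the_left(kind: str, scene: List[SceneObject],
                        margin: float = 20.0) -> Optional[SceneObject]:
    """Pick the object of the requested kind that is 'more on the left'
    than any competitor; return None when the reference stays ambiguous."""
    # Only objects of the agreed-upon relevant type compete at all.
    candidates = [o for o in scene if o.kind == kind and o.bearing > 0]
    if not candidates:
        return None
    ranked = sorted(candidates, key=lambda o: o.bearing, reverse=True)
    if len(ranked) == 1:
        return ranked[0]
    # With competitors, the winner must be clearly 'more left' (margin),
    # mirroring the observation that speakers otherwise switch to a
    # modified description ('a little to the left', 'the nearest one').
    if ranked[0].bearing - ranked[1].bearing >= margin:
        return ranked[0]
    return None
```

With a table at bearing 80 and a chair at 70, 'the chair on the left' resolves uniquely; with two chairs at those bearings, the function returns None, i.e. the reference remains ambiguous, just as in the text.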

In the case of a robotic assistance device, it may not be immediately obvious just what the type of the object under discussion is: this depends on the perceptual capabilities of the device. A chair and a table may not be immediately distinguishable. Indeed, and just as bad, the table might even turn out to be invisible (if the perceptual device is a laser scanner set at a low height) in which case the user might perceive a competing object that the device does not. The opposite case is also perfectly possible: perhaps the human user is also not able to perceive the environment so well. In general, therefore, the robotic device and its human user may have differing perceptual capabilities, and both are variable across different robotic systems and also across different users. Therefore generic solutions need to be developed which do not rely on given, fixed capabilities on behalf of either the robot or the human user.

Once interaction begins to break down between user and device, it can be difficult to regain the user's confidence. In a series of experiments reported in Tenbrink, Fischer & Moratz, it was found that once users had found that a 'goal-oriented' command (such as 'go to the kitchen') had not been understood, they resorted to basic movement commands (such as 'go left') instead. And if those commands also were not understood, users reverted even further to commands such as 'rotate your wheels', even if the robot did not understand such commands either. After this point, users often failed to get the robot in the experiment to do very much at all. Users clearly have very firm ideas about what is 'simpler' to understand and what is 'more difficult'. Once they had pigeonholed the robot as not being able to understand one level of difficulty, they did not bother trying any kind of instruction 'higher' on their complexity scale. Since, however, the abilities of robots, including their abilities to understand instructions, are not known to users, this strategy can go decidedly wrong.

Putting some of these potential issues together now, consider the more complex and naturalistic set-up shown in Figure 3. This is a real floor plan and the situation reported is a genuine confusion that actually occurred. I will use this to move from the more informal discussion of problems to a more precise account of just where problems are arising. The situation was that there was going to be a meeting in the room indicated as 'target room'; the relevant user was situated at the point marked with a cross, was moving in the direction shown, and wanted to know where the meeting would be. The instructions received were "in the last room on the right": this was understood as the room indicated in the figure by 'understood room'. What went wrong?

In order to pick this apart, we need to resolve the semantics of the received instructions in more detail. First, we need to identify some set of entities in the world that may be designated. Second, we need to identify in the world an ordered sequence of such entities that allows the designation of one of their members as the last in that sequence. In the present case, this can be provided by the perception of the environment as a collection of corridors: such entities are by and large 'line-like' and so, when travelled along, can be used to define sequences. Finally, this ordered sequence of entities needs to be found relative to the direction of travel of the user at that point in time. The problem here was due to an unrealized disagreement about the ordered sequence that was in question: the corridor along which the user was moving has a glass door, but that door is usually open. For the user, this was not considered a relevant aspect of the environment and therefore the corridor apparently ended at the open area marked A. The 'last room' on the right was therefore the one understood. For the information giver, who may have been more familiar with the building, the corridor ends at the glass door and everything on the other side of the glass door is associated more with the open area. Therefore, 'the last room' was an unambiguous reference to the room just before the glass door. Neither interlocutor was aware of any competing entities for the designation 'last room on the right' and so did not see the need for any further information.
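The three-step resolution just described (identify the entities, order them along the corridor, pick the last member relative to the direction of travel) can be sketched in a few lines. All room names and distances below are hypothetical, chosen only to reproduce the glass-door confusion:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Room:
    name: str
    along: float  # metres along the corridor in the direction of travel
    side: str     # 'left' or 'right', relative to the direction of travel

def last_room_on_right(rooms: List[Room], corridor_end: float) -> Optional[str]:
    """Resolve 'the last room on the right': order the rooms along the
    travel direction, keep those on the right within the perceived
    corridor, and take the final member of that sequence."""
    on_right = [r for r in rooms if r.side == "right" and r.along <= corridor_end]
    if not on_right:
        return None
    return max(on_right, key=lambda r: r.along).name

rooms = [Room("office-1", 5.0, "right"),
         Room("target room", 12.0, "right"),      # just before the glass door
         Room("understood room", 20.0, "right")]  # beyond the open glass door

# The two interlocutors disagree about where the corridor ends:
hearer_reading = last_room_on_right(rooms, corridor_end=25.0)   # door ignored
speaker_reading = last_room_on_right(rooms, corridor_end=14.0)  # door counts
```

With the open glass door treated as irrelevant the sequence extends further, so the hearer picks the 'understood room'; with the door closing off the corridor, the speaker's 'last room' is the target room. The disagreement lives entirely in the `corridor_end` parameter, which is exactly the unrealized disagreement described above.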

Figure 3: A misunderstood spatial description



For a robotic device, the situation is even worse because in order to interpret the intended meaning of the utterance correctly, it would need to have a clear notion of just what is to count as a room or not. In this case, since the goal of the instruction was to find the way to a meeting, it was sensible to restrict the meaning of "room" to offices: this means that the cupboard that comes after the room after the glass door, which contains cleaning equipment and materials, was not considered as a possible option by anyone in the scene. This adds the final complication: even to interpret the relevant entities that will be picked out by some natural language term, it is in general necessary to know just what the purposes of the speakers concerned are. Fortunately, this can sometimes be retrieved from the interactional context; for example, answering "the last room on the right" to the question "I've just spilled coffee everywhere, where can I get a mop?" might well trigger a different preferred reading of 'room'.

The notions of goal and purpose turn out to be crucial for much interpretation. They allow us, for example, to ignore possible small and narrow lanes that might also have run off to the left in Figure 1 (unless we were trying to hide); they also allow us to restrict attention to relevant entities so that the potentially competing objects in Figure 2 are automatically restricted: there are always objects on the left and the right, but the ones that we need to discriminate are dependent on task. It is as if, instead of having a fixed semantics associated with each linguistic item, human interpreters provide an additional 'wrapper' around each use of a term which brings with it an implicit instruction of the form: 'do something sensible with this according to my current needs'. We can see this very clearly in our next and final example, where the very sense and range of application of spatial relations can also be seen to be dependent on task precisely in this way.

Consider again our autonomous wheelchair and the situation of its having received the instruction "take me in front of the refrigerator": one result is shown in Figure 4 (taken from our empirical experiments, which investigated just how users communicate with various types of robots, for various kinds of tasks and with respect to various kinds of spatial configurations). If the user wanted to clean the front of the door of the fridge, then this may be an appropriate position; in the probably more frequent situation that the user wants to get something out of the fridge, the response of the wheelchair would be considered significantly less helpful.

Figure 4: 'In front of' the fridge?

The key result to sum up the kinds of problems exhibited in these scenarios, therefore, is that the interpretation of linguistic terms depends on their functional deployment in contexts of tasks and goals. In order then to provide effective assistance that can bring in more sophisticated mobility issues than that afforded by direct control of the supporting device, it is necessary to communicate with that device. But effective and appropriate action depends crucially on appropriate interpretations; and these interpretations, especially but not only in the domain of space, rely on a range of distinct kinds of information that are by no means immediately obvious.
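The task-dependent 'wrapper' around 'in front of the fridge' can be illustrated as a simple dispatch from task to goal pose. The tasks, offsets and headings below are invented for illustration; a real system would derive them from the robot's kinematics and a model of the appliance:

```python
from typing import Dict, Tuple

# A hypothetical sketch of the task 'wrapper': the same phrase
# 'in front of the fridge' maps to different target poses per task.
def target_pose(task: str, fridge_xy: Tuple[float, float],
                door_swing_m: float = 1.0) -> Dict[str, float]:
    """Return a goal pose (x, y, heading in degrees) for the wheelchair.
    All offsets are illustrative assumptions, not measured values."""
    fx, fy = fridge_xy
    if task == "clean":
        # Face the door squarely, close enough to reach its surface.
        return {"x": fx, "y": fy - 0.5, "heading": 90.0}
    if task == "fetch":
        # Stand beside the door axis so the door can swing open freely.
        return {"x": fx + door_swing_m, "y": fy - 0.5, "heading": 135.0}
    # Default: a neutral position facing the appliance.
    return {"x": fx, "y": fy - 1.0, "heading": 90.0}
```

The point is not the particular numbers but the shape of the interface: the spatial term is resolved only after the current task is known, which is exactly the 'do something sensible with this according to my current needs' instruction described above.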

3. Modelling necessary knowledge of the world

When we turn from informal descriptions of the kinds of things that can go wrong in providing interpretations of spatial information to the knowledge that is necessary to prevent such misunderstandings (or to resolve them as effectively as possible when they do occur), we are drawn quickly to issues of ontology. Moreover, as we will see, we are drawn to ontologies that maintain detailed knowledge of their domains of concern. An entire range of structures currently goes under the heading of 'ontology', ranging from more or less simple taxonomies of terms at one extreme right up to richly axiomatized ontologies on the other. Work on the semantic web, for example, has largely been oriented towards less detailed, 'shallow' or 'lightweight' ontologies due to the complexity of constructing richly axiomatized ontologies for large-scale bodies of information (such as the web: cf. Kim, 2002). For supportive applications and effective communication concerning space, however, we require detailed formalizations of several kinds.

First, we require a generic foundational backbone for the kinds of entities, relations and activities that we encounter. As we saw from our examples, we need to be able to characterize everyday commonsense objects (such as 'rooms') as well as their locations. A detailed overview of the varieties of ontologies currently available is given in Farrar & Bateman; there we recommend use of the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) (Masolo et al., 2003). This ontology is richly axiomatized and provides general categories, such as physical objects, social agents, events and so on. It also provides a detailed model of how such entities relate to space without restricting just how space is to be formalized. This has allowed us to develop a 'plug-and-play' approach to spatial knowledge as suggested in Figure 5. Any physical object (DOLCE: Physical Endurant, PED) necessarily has by virtue of being a physical object certain qualities (Physical Qualities, PQ). One of these is its 'being located'-ness. The locations of objects can, however, be set into relation with one another in a variety of ways. DOLCE does not specify how this should be done precisely and leaves us free to entertain a variety of spatial characterizations as motivated by our empirical findings.



Figure 5: 'Plug and play' spatial ontology relating objects and their locations
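As an illustration of the plug-and-play idea, one can keep the foundational category fixed while swapping the spatial module that relates locations. The class and method names below are mine, not DOLCE's formal vocabulary, and the modules are deliberately toy-sized:

```python
from abc import ABC, abstractmethod

# A DOLCE-style physical endurant carries a location quality; the
# *theory* relating locations to one another is a swappable module.
class SpatialModule(ABC):
    @abstractmethod
    def relate(self, loc_a, loc_b) -> str:
        """Return a qualitative relation between two locations."""

class RegionModule(SpatialModule):
    def relate(self, loc_a, loc_b) -> str:
        # Locations as (xmin, xmax) intervals; a toy region-based view.
        a0, a1 = loc_a
        b0, b1 = loc_b
        return "overlaps" if a0 < b1 and b0 < a1 else "disconnected"

class DirectionModule(SpatialModule):
    def relate(self, loc_a, loc_b) -> str:
        # Locations as x-coordinates; a toy direction-based view.
        return "left-of" if loc_a < loc_b else "right-of"

class PhysicalEndurant:
    def __init__(self, name, location):
        self.name = name
        self.location = location  # the 'being located' quality
```

The foundational object model never commits to one spatial characterization; only the chosen `SpatialModule` does, which is the sense in which the spatial theory can be 'plugged in'.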

There is a long and detailed history of formal approaches to formalizing space that we can draw on here. In such approaches, space is seen not as a pure Euclidean geometric space, but instead as entities and relations with particular qualitative properties. One of the most well-known treatments is the Region Connection Calculus (RCC) of Randell, Cui & Cohn (1992). This formalizes space in terms of regions and whether or not such regions are connected. The RCC family of spatial calculi can be made more or less expressive depending on refinements of the connection relation. Another well-known treatment, the Doublecross calculus (Freksa, 1992), is based on a qualitative characterization of direction (in front, behind, front left, front right, etc.). An overview of such qualitative approaches to space and the reasoning that they support is given by Cohn & Hazarika (2001); a more introductory overview with explicit relations to treatments of space found in computational ontologies is given in Bateman & Farrar.
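As a toy illustration of the region-based view, the RCC-8 base relations can be computed exactly for one-dimensional intervals. This is my simplification for exposition: real RCC-8 is defined axiomatically over arbitrary regions via a connection relation C(x, y), not over intervals:

```python
def rcc8_intervals(a, b):
    """Classify the RCC-8 relation between closed intervals a=(a0,a1), b=(b0,b1)."""
    a0, a1 = a
    b0, b1 = b
    if a == b:
        return "EQ"     # identical regions
    if a1 < b0 or b1 < a0:
        return "DC"     # disconnected
    if a1 == b0 or b1 == a0:
        return "EC"     # externally connected (touching at a boundary)
    if b0 < a0 and a1 < b1:
        return "NTPP"   # a strictly inside b (non-tangential proper part)
    if b0 <= a0 and a1 <= b1:
        return "TPP"    # a inside b, sharing a boundary point
    if a0 < b0 and b1 < a1:
        return "NTPPi"  # b strictly inside a
    if a0 <= b0 and b1 <= a1:
        return "TPPi"   # b inside a, sharing a boundary point
    return "PO"         # partial overlap
```

The qualitative character is visible immediately: the answer is one of eight discrete relations, not a distance, which is the kind of description people themselves tend to produce.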



Several central directions of research within the Collaborative Research Center revolve around qualitative spatial calculi; these include constructing common formalizations, demonstrating formal complexity results for the different classes of calculi, and developing new calculi specifically tailored for particular classes of spatial problems (Dylla & Wallgrün, 2006). One of the main motivations for using such calculi is that their qualitative nature makes them very much more like the kinds of descriptions and representations that people appear to use. An account in terms of a qualitative description ('in front of the fridge') is seen to be more natural than geometric or metric descriptions ('0.5 metres along an axis passing through the centre of gravity of the fridge parallel to the floor orthogonal to the wall'). Providing support for human users is therefore more likely to benefit from qualitative representations than from quantitative ones.

However, and of particular importance then for our use of such calculi for spatial assistance, particular tasks that a user needs to have performed can render particular qualitative treatments of space more appropriate than others. For example, it is natural when discussing routes and giving directions to use calculi such as the Doublecross calculus: here a direction-based viewpoint is helpful. But, when considering where entities, objects and other people are located, then a region-based view can be more useful. In general, different kinds of linguistic spatial expression can 'activate' distinct kinds of spatial calculi, as suggested graphically in Figure 6. And, as a consequence, our plug-and-play embedding of models of space into a generic ontology of entities appears just what is required. Relating distinct descriptions, for example, when we know that turning in a particular direction ('left') will place us in a particular qualitative region ('a corridor'), is then handled formally by treating the individual spatial models as formal theories and defining explicit mappings (theory morphisms) between them. This degree of reasoning is not possible if we do not have a sufficiently detailed axiomatized ontology at hand.

Figure 6: Distinct kinds of ontologies of space
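A theory morphism of the kind just mentioned can be sketched, very schematically, as a signature mapping applied to the facts of one spatial theory to yield facts of another. The relation names below are invented, and a genuine morphism must additionally be shown to preserve the axioms of the source theory, which this sketch does not attempt:

```python
# Facts of a toy direction-based theory, as (subject, relation, object) triples.
direction_facts = {("robot", "left-of", "door")}

# The morphism maps each direction-theory relation to a region-theory
# relation; here a coarse mapping for a corridor-like scene.
morphism = {
    "left-of": "in-region-left",
    "right-of": "in-region-right",
    "in-front-of": "in-region-ahead",
}

def translate(facts, mapping):
    """Apply the signature mapping to every fact of the source theory."""
    return {(s, mapping[rel], o) for (s, rel, o) in facts}

region_facts = translate(direction_facts, morphism)
```

Even this skeletal version shows the payoff described in the text: a conclusion reached in the direction-based theory ('turning left') becomes available to reasoning in the region-based theory ('we are now in the left region, a corridor') through an explicit, inspectable mapping rather than ad hoc glue code.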



In short, we are now developing methods for dealing with the range of spatially oriented 'knowledges' that the examples of spatial assistance problems discussed above show to be necessary. Our approach argues that the sheer diversity of kinds of knowledge required strongly argues for a modular approach to ontological diversity. That is, the most effective way of achieving the knowledge-rich systems that will be able to contribute to intelligent assistance is to provide distinct ontological modules, each of which provides alternative perspectives on the issues being modelled, and to relate these formally using mathematical tools involving theory morphisms and category theory.

With this framework in place, we can then go further. There are other sources of information that are also important for supportive assistance systems and many of these are now increasingly also being organized in terms of ontologies. This ranges from information concerning local facilities, through the availability of particular forms of access (e.g., rampways) to buildings, and much more. An indicative range of such knowledge sources is listed in Figure 7. Accepting such ontological diversity and embedding it within a formalized framework for relating distinct perspectives appears to be the most promising approach for leveraging off the information explosion and utilising this for increasingly sophisticated and effective spatial assistance systems.

4. Towards the future: spatially intelligent environments

I have discussed some of the distinct kinds of spatial knowledge that need to be maintained for effective spatial assistance and how ontological engineering can provide appropriate mechanisms for this. Once we have such knowledge, we can consider other contexts where the sophisticated spatial awareness supported can be utilised.

One particularly challenging application lies in a radical extension of just what 'agents' we can consider to maintain spatial knowledge. For example, it has become relatively common to talk of environments becoming more 'intelligent' in various senses and across various disciplines. This intelligence ranges from simple environmental responses, such as lowering blinds when rooms are in direct sunlight or controlling room temperature, to more exploratory attempts to provide dynamically configurable office buildings that are able to change room size and function to respond to changing demands. Devices for adding user-specific instructions into the environment, i.e., individualized signage, are also under investigation. Within this general paradigm, research directions into spatial intelligence open up a significant new perspective that may offer a theoretically well-articulated and unifying foundation for the notion of 'intelligent environments' as such. The crucial addition here is in the explicit treatment of space as a resource about which intelligent systems can reason and act. This leads directly to a conceptualization of environments as agents: that is, for an environment to be 'intelligent', it must take on some of the essential properties developed for spatially intelligent agents described above. Intelligent environments then involve reasoning, action, and interaction in and concerning space, and draw on cognitively well-founded models of space and its use.

Figure 7: Relevant kinds of knowledge for spatial intelligence

A common question when dealing with agents distributed in space, for example, is that of where knowledge about aspects of their environment is to be situated: i.e., which knowledge belongs where? Environmental knowledge is often unevenly distributed. For example, differences in spatial knowledge about the environment form the basis both for the necessity of assistance and for the issues raised in multi-robot exploration. When we add the environment itself into the situation as a spatially aware agent, we can ask under which circumstances the spatial knowledge of that environment is best maintained by the environment itself and when by its users. Infrequent visitors to an environment (e.g., visitors to an office or hospital) will not be knowledgeable about the environment and so an intelligent environment should see the need for interaction concerning the intended actions of the users within its scope. Individualized signage then offers one set of appropriate technologies, as well as individualized maps (e.g., providing information only about the spatial location of the departments or people to be visited and significant landmarks required to find them), individualized route descriptions, or dedicated mobility assistance (e.g., wheelchairs, walkers, etc.). Further interaction concerning the space can also be envisaged between the individual components, e.g., between the environment (with overall knowledge about its current state) and assistance robots (with localized route knowledge necessary for a specific task). As users of an environment move on from being infrequent or single visitors, their knowledge of the environment also increases and so their need for guidance from the environment changes. This responsiveness to partially shared (and therefore potentially outdated) knowledge is one crucial contribution to interacting intelligently with users.

I suggested in the examples given above that issues of goal and purpose are critical in determining representations and appropriate reasoning about space: spatial awareness requires consideration of what the goals and tasks of an agent are within its space. This carries over naturally for an intelligent environment's treatment of its space. In the simplest cases, knowledge that an agent intends to reach a specific place within the environment (e.g., a particular person in a building or a particular station in a transportation network) usefully constrains the situations where interaction between environment and agent is necessary. For example, a particular person that is being looked for may be located in a different building to that expected (when in, e.g., an institution scattered over several buildings) and so interaction will be necessary to communicate this fact. An intelligent environment is then one which has become an active agent concerned with the situated mobility of the individual agents and groups of agents located within it.

In short, this suggests that Pollack's recent classification of the kinds of activities that assistance robots can perform into 'assurance', 'compensation' and 'assessment' can be applied directly to intelligent environments when these are viewed as agents in their own right. The more complex an environment is, the more support can sensibly be envisaged for the inhabitants of that environment. In architectural or navigation contexts, for example, we can see this as ranging from assurance that one is on the right path and that a particular goal is where we thought, through compensation for overly complex or misleading architectural design that is difficult for a user to interpret in a way supportive of their goals, to assessment of designs for buildings or outdoor networks (city streets, transportation systems, etc.) in terms of their cognitively plausible effects on distinct groups of intended users, with perhaps differing capabilities and difficulties. For all such extended applications, the kinds of basic spatial representations and flexible relations with knowledge of the world and the goals and purposes of users supported by foundational modular ontologies will play a central role.


Acknowledgments

The work reported in this paper is part of the SFB/TR8, Collaborative Research Center
'Spatial Cognition', funded by the Deutsche Forschungsgemeinschaft (DFG). Particular thanks
are due to all the members of the OntoSpace, SharC and SPIN projects of the Center.

References

Bateman, J. & Farrar, S. (2004), Ontology baseline, SFB/TR8 internal report I1-[OntoSpace]:
D2, Collaborative Research Center for Spatial Cognition, University of Bremen, Germany.

Cohn, A. & Hazarika, S. (2001), 'Qualitative spatial representation and reasoning: an
overview', Fundamenta Informaticae 46(1-2), 1-29.

Dylla, F. & Wallgrün, J. O. (2006), On generalizing orientation information in OPRA, in
'Proceedings of the 29th German Conference on Artificial Intelligence (KI 2006)',
Lecture Notes in Artificial Intelligence, Springer, Berlin.

Farrar, S. & Bateman, J. (2004), Ontology baseline, SFB/TR8 internal report I1-[OntoSpace]:
D1, Collaborative Research Center for Spatial Cognition, University of Bremen, Germany.

Freksa, C. (1992), Using orientation information for qualitative spatial reasoning, in A. U.
Frank, I. Campari & U. Formentini, eds, 'Theories and methods of spatio-temporal
reasoning in geographic space', Vol. 639 of Lecture Notes in Computer Science, Springer,
Berlin, pp. 162-178.

Guarino, N. (1998), Formal ontology and information systems, in N. Guarino, ed., 'Formal
Ontology in Information Systems', IOS Press, Amsterdam, pp. 3-15.

Guarino, N. & Poli, R. (1995), 'The role of formal ontology in the information technology',
International Journal of Human and Computer Studies 43(5-6), 623-624.

Kim, H. (2002), 'Predicting how ontologies for the semantic web will evolve',
Communications of the ACM 45(2), 48-54.

Klippel, A., Tappe, T., Kulik, L. & Lee, P. U. (2005), 'Wayfinding choremes: a language for
modeling conceptual route knowledge', Journal of Visual Languages and Computing
16(4), 311-329.

Lankenau, A., Meyer, O. & Krieg-Brückner, B. (1998), Safety in robotics: the Bremen
autonomous wheelchair, in 'Proceedings of AMC'98, 5th Int. Workshop on Advanced
Motion Control', pp. 524ff.

Lankenau, A. & Röfer, T. (2000), The role of shared control in service robots: the Bremen
autonomous wheelchair as an example, in T. Röfer, A. Lankenau & R. Moratz, eds,
'Service Robotics: Applications and Safety Issues in an Emerging Market. Workshop
Notes, European Conference on Artificial Intelligence 2000 (ECAI 2000)', pp. 27ff.

Masolo, C., Borgo, S., Gangemi, A., Guarino, N. & Oltramari, A. (2003), Ontologies library
(final), WonderWeb Deliverable D18, ISTC-CNR, Padova, Italy.

Pollack, M. E. (2005), 'Intelligent technology for an aging population: the use of AI to assist
elders with cognitive impairment', AI Magazine 26(2), 9-24.

Randell, D., Cui, Z. & Cohn, A. (1992), A spatial logic based on regions and connection, in
'Proceedings of the 3rd International Conference on Knowledge Representation and
Reasoning', Morgan Kaufmann, San Mateo, pp. 165-176.


Renz, J. & Nebel, B. (1999), 'On the complexity of qualitative spatial reasoning: a maximal
tractable fragment of the region connection calculus', Artificial Intelligence 108(1-2),
69-123.


Ross, R., Shi, H., Vierhuff, T., Krieg-Brückner, B. & Bateman, J. (2005), Towards dialogue-
based shared control of navigating robots, in C. Freksa, M. Knauff, B. Krieg-Brückner,
B. Nebel & T. Barkowsky, eds, 'Spatial Cognition IV: Reasoning, Action, Interaction.
International Conference Spatial Cognition 2004, Frauenchiemsee, Germany, October
2004, Proceedings', Springer, Berlin, Heidelberg, pp. 478ff.

Tenbrink, T. (2005), Identifying objects on the basis of spatial contrast: an empirical study, in
C. Freksa, M. Knauff, B. Krieg-Brückner, B. Nebel & T. Barkowsky, eds, 'Spatial
Cognition IV: Reasoning, Action, Interaction. International Conference Spatial Cognition
2004, Frauenchiemsee, Germany, October 2004, Proceedings', Springer, Berlin,
Heidelberg, pp. 124ff.

Tenbrink, T., Fischer, K. & Moratz, R. (2002), 'Spatial strategies in linguistic human-robot
communication', in C. Freksa, ed., 'KI-Themenheft Spatial Cognition', arenDTaP Verlag.

Tenbrink, T., Shi, H. & Fischer, K. (2006), Route instruction dialogues with a robotic
wheelchair, in 'Proc. BranDial 2006: The 10th Workshop on the Semantics and
Pragmatics of Dialogue, University of Potsdam, Germany; September 11th-13th 2006'.

Wölfl, S. & Mossakowski, T. (2005), CASL specifications of qualitative calculi, in A. G.
Cohn & D. M. Mark, eds, 'Proceedings of Spatial Information Theory: International
Conference, COSIT 2005', number 3693 in Lecture Notes in Computer Science, Springer,
Berlin, pp. 200-217.