

Thomas Laengle, Tim C. Lueth, Ulrich Rembold
Institute for Real-Time Computer Systems and Robotics (IPR)
University of Karlsruhe, D-76128 Karlsruhe, F.R. Germany
The main advantage of distributed-control robots and subsystems is the decentralized task
execution by the system components. This way, properties desirable for the design of flexible
control architectures, like modularity, fault-tolerance, integrability, and extendibility, are
easily obtained; further, it is possible to use the concepts of distributed knowledge and
decentralized world representation. On the other hand, coordination between the
components, for example path planning for collision avoidance between both manipulators
in a two-arm system, is very difficult to guarantee. To explain these concepts, the
Karlsruhe Autonomous Mobile Robot KAMRO is considered which is being developed at
IPR. The robot system consists of several subcomponents like two manipulators, hand-eye-
cameras, one overhead-camera and a mobile platform. Extensions to the distributed control
architecture KAMARA (KAMROs Multi Agent Robot Architecture) are described that are
responsible to overcome coordination problems, for example caused by the independent task
execution of both manipulator systems. Further, it is explained in which way the decentralized
world representation can be used for parallel task execution. The described intelligent
control architecture will replace the former control architecture of the autonomous robot.
1.Introduction
During the past few years, the need for large-scale and complex systems has become
obvious. They are necessary to intelligently carry out tasks in the area of transportation,
manufacturing, and maintenance. Examples are automatically guided transport systems
containing many vehicles, complex flexible manufacturing cells, or eventually mobile
manipulator systems, which could be used for autonomous service applications in an
industrial setting [1, 2]. The main problem is mostly the design of the intelligent control
structure (ICS) for such complex systems, and also for system components if the overall
system consists of several separate systems. Up to now, the control structures of such
systems were usually designed as a hierarchical and centralized structure with a top-down
process for planning and decision making. The number and complexity of the hierarchical
layers determine the time the system requires for a reaction and the quality of a chosen
action. In most cases, additional actuators or sensors have to be added during the
development cycle to improve the capability of the overall system. In this case, and if the
integration of component capabilities is required, the disadvantages of the hierarchical and
centralized approach become obvious in comparison to the advantages it offers during the
initial system design process.
In contrast to this approach, distributed or decentralized control architectures [3 - 12] reveal
their main advantages when it becomes necessary to enhance the system, to integrate
components, and to maintain the system. Most of the following properties are well known
from the area of computer architectures:
• Modularity: By having a predefined framework for information exchange processes,
it is possible to develop and test single parts of the system independently (e.g., the
drive and navigation module for a mobile platform).
• Decentralized knowledge bases: Each subsystem of the overall system is allowed to
use a local knowledge base that stores relevant information for the subsystem and
is capable of exchanging this information with other subsystems if required (e.g.,
for mobile manipulators, information about obstacles on the platform's path is not
important for the manipulator itself).
• Fault-tolerance and redundancy: If system-inherent redundancy exists, this
redundancy should be usable without any error model in case of a broken-down
subsystem or another error situation (e.g., if one of several vehicles in a
transportation system is damaged, nothing should happen other than a slower task
execution).
• Integrability: Without any change in the control architecture, cooperation of
subsystems is possible and all synergetic effects can be used (e.g., any kinematic
chain in a multi-manipulator system, or support of a manipulator by an existing
mobile platform).
• Extendibility: New system components can be added to the original system, and any
improvements (such as reduced task completion time) come about without any
change in the system architecture. New components inform other components about
their existence and capabilities (e.g., extension of a sensor system).
The main disadvantage of non-centralized architectures is the difficulty of ensuring that the
system fulfils an overall or global goal. Moreover, the independent task execution by the
system components causes problems in the area of coordination between the system agents.
In this area, centralized control architectures show their main advantage. It will take quite a
while to integrate all these properties into an intelligent control architecture, but distributed
and decentralized concepts will be the main approach toward this goal.
In the second section, a taxonomy for intelligent control architectures is given. In section
three, the distributed control architecture KAMARA for the mobile assembly robot KAMRO
is briefly presented, whereby a new agent model with local world representation is used.
This new model, together with the communication forms, is explained in the fourth section. In
distributed systems, task negotiation between the system components is necessary; this is
treated in section five. In section six, an example is discussed which explains the concepts in
detail. Section seven deals with coordination between system components. The article closes
with an evaluation of the advantages of this new approach and conclusions for future work.
2.Intelligent Control Architectures for Multi-Agent Systems
In principle, complex systems, which consist of several executive subsystems (agents), can
be divided into three different design classes:
• Centralized Systems: A decision is made by a central mechanism for the system goal
and transmitted to the executive components (Fig. 1a).
• Distributed Systems: The decision is made by a negotiation process among the
executive components and executed by them (Fig. 1b). This way, it is possible to
fulfil a global goal [13].
• Decentralized Systems: Each executive component makes its own decisions to fulfil
the individual goal of the component and executes only these decisions (Fig. 1c).
Independent of the individual goals of the components, it is also possible to derive a
common goal of the whole system if this goal is obtained by a combination of the
individual ones.
Fig. 1: Execution view of multi-agent systems: a) centralized, b) distributed, c) decentralized
From the definition of an agent, it is possible to describe and explain hierarchical systems
(Fig. 2). An agent consists of three parts: communicator, head (for planning and action
selection), and body (for action execution) [14].
Fig. 2: Elements of an agent
The communicator connects the head to other agents on the same communication level or
higher. The head is responsible for the action selection process for the body (centralized
approach), organizes the communication among the body's agents actively or passively
(distributed approach), or is only a frame to find a principle of order for the decentralized
approach. The body itself consists of one or more executive components, which can be
similarly considered as agents.
The executive components can be divided into three classes, as well as the components of
the process for planning and action selection, i.e., the head of an agent (Fig. 3).
Fig. 3: Three different ways to make decisions, plan, or select actions
The classification is as follows:
• Centralized action selection: Available information is centrally processed by a
decision-making component and transformed into an action for the agent's body
(linear planner).
• Distributed action selection: Available information is processed by several decision-making
components, which communicate and negotiate to come to a decision.
Afterwards the information is transformed locally and globally into an action for the
agent's body (Blackboard [15], Whiteboard [16]).
• Decentralized action selection: The available information is processed independently
by several decision-making components and transformed locally into their own
action decisions for the agent's body (Motor schema [17]).
From an execution-oriented point of view, the presented taxonomy not only allows the
classification of already described ICS for planning and control. It also describes multi-
agent systems in a similar way as single systems.
3.Two Architectures for One Intelligent System
This concept for describing an intelligent control architecture can be used to explain two
different approaches toward controlling the Karlsruhe Autonomous Mobile Robot KAMRO
(Fig. 4). A centralized and a distributed architecture for this two-arm robot will be explained
in the following sections and the advantages and disadvantages will become clearer.
The robot receives assembly tasks from a cell controller, which are represented by assembly
precedence graphs. Afterwards, the robot travels to a workstation, searches for the
necessary assembly parts with its camera system and performs the assembly autonomously.
The control architecture FATE [18] used so far consists of a blackboard planning level that
generates situation-dependent, manipulator-specific elementary operations.
Fig. 4: The mobile robot KAMRO
The real-time robot control system RT-RCS executes the elementary operations. The real-
time controller is able to control the manipulators independently or in a closed kinematic
chain. Therefore, the overall system can be described as a centralized execution architecture
with distributed action selection (Fig. 5a).
For the independent movement of the manipulators and for the kinematic chain, two
different kinds of elementary operations are necessary. It has already been shown that this
control architecture is principally suitable for solving the problem of autonomous assembly
tasks by robots. On the other hand, many difficulties have arisen from the extension of the
system with miniature hand-eye cameras and from the integration of the mobile platform and
the manipulators for mobile manipulation. Similarly, the automatic takeover of a damaged
manipulator's task through cooperation of the platform and the functioning manipulator is
only realizable by completely redesigning the centralized control architecture.
These difficulties are avoidable if a new architecture is implemented which doesn't only
have one executive agent (as FATE uses RT-RCS), but multiple agents like the image
processing system, the two manipulators, and the mobile platform as well (Fig. 5b).
Fig. 5: a) Centralized control architecture FATE, and b) distributed control architecture KAMARA
These agents have to communicate and negotiate among each other to collect the missing
information, that is required for an autonomous assembly, and to perform the desired task.
The new control architecture, KAMARA (KAMROs Multi Agent Robot Architecture), for
distributed intelligent robot systems and their components allows easier control in many
directions and also easier component integration. Different types of cooperation for coupled
agents, like closed kinematic chains or camera-manipulator coupling, are also considered in
this architecture. The main topic in the following sections will be the problem of task
distribution between the executive agents.
4.Agent Model and Communication Mechanism
As mentioned before, an agent A consists of a communicator, a head, and a body. In our
system description, an agent, like a manipulator, is only capable of performing one task at a
time. The reason for this is that its body is implemented as a single procedure. On the other
hand, a head with a communicator not only has to control the body, but also has to
communicate and negotiate with other agents or heads. An important reason for
communication is determining which agent executes an elementary operation. This means
that the head (and the communicator) has to deal with several different tasks at one time.
Therefore, head and communicator are implemented as variable sets H and C of equal,
independent processes H (head) for planning or action selection and C (communicator)
for communication and negotiation (Fig. 6):
A = (C, H, B)
Fig. 6: Head and communicator can be several processes
The communication mechanism for all agents and for task distribution or task allocation is
blackboard-like. Hereby, the channel of the distributed system (Fig. 1) holds all executable
missions m in a mission set:
M = {m1, m2, ..., mk}
M receives new missions MP,A from the cell planner P or from the agents A of KAMRO:
M = M ∪ MP,A
Whether or not this communication mechanism (channel) is implemented as Blackboard,
Whiteboard, or token ring, etc., is an implementation-oriented question (here, a
Blackboard-architecture with a contract-net-like protocol is used).
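The blackboard-like mission set described above can be sketched as follows. This is an illustrative reconstruction, not the original KAMARA implementation; the class and method names are assumptions.

```python
# Minimal sketch of the blackboard-like mission set M (names are
# illustrative, not taken from the KAMARA implementation).

class MissionSet:
    """Shared channel holding all executable missions m_i."""

    def __init__(self):
        self.missions = {}  # mission identification number n -> mission data

    def post(self, n, mission):
        # A cell planner P or any agent may append new missions: M = M ∪ M_new
        self.missions[n] = mission

    def read(self, n):
        # Agents inspect the blackboard without removing entries.
        return self.missions.get(n)

    def delete(self, n):
        # The last interested receiver removes the mission (see section 5.1).
        self.missions.pop(n, None)


board = MissionSet()
board.post(1, {"task": "PICK(sideplate)"})
assert board.read(1)["task"] == "PICK(sideplate)"
```

Whether the channel is realized as a blackboard, whiteboard, or token ring only changes the implementation of this shared container, not the agents' view of it.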
In principle, this multi-agent architecture is also useful on the cell level. In this case the
communication mechanism of one KAMRO robot is the head of a KAMRO-Agent
(distributed action selection architecture), and it is possible to use more than one KAMRO
robot for complex tasks like carrying a large object with several robots or loaning one
manipulator to a second robot (Fig. 7).
Considering the distributed control architecture, it's easy to see that between the system
components, different cooperation forms are needed:
1. independent task execution,
2. asynchronous communication (teams), and
3. synchronous communication (special agents).
Fig. 7: Distributed cell controller view
Teams, as sets of cooperating agents, are dynamic in nature; the number as well as the kind
of agents may change during task execution. An example is the exchange of parts
between both manipulators: for a defined space of time, cooperation is necessary to reach
this goal. The communication for this kind of cooperation is done on a high abstraction
level by the agent's communicator. As an example, a team of the components camera and
manipulator of the autonomous mobile assembly robot KAMRO can be considered. The
cooperation of manipulator and camera system is important if grasping and joining
operations are to be performed by the robot. In this case, the camera system must be able
to correct the position of the manipulator if needed. Another example is the exchange of an
object between both manipulators, or a regrasping operation to change the gripping
configuration of an object. In the first case, the camera and a manipulator must build a team
to solve the described problem; in the second case, both manipulators build a team together.
Because there is only brief information exchange that does not have to be synchronized
within a specific time interval, team building (see Fig. 8) is sufficient. The communication
form between the system agents is asynchronous. On that account, it is not possible to
guarantee real-time communication.
Fig. 8: Team: asynchronous communication
If two manipulators grasp a large or heavy part and thereby close a kinematic chain,
asynchronous communication between these system components is not sufficient. In this
case, depending on the desired control concept for the kinematic chain, a decentralized
architecture (simple reflexive behaviour), a distributed architecture (master-slave tasks), or a
centralized architecture is required. For the last case, it is necessary to extend the distributed
control concept by introducing special executive agents (centralized approach). These
special agents SA have, like all other agents, a head H and a communicator C. The body is
allowed to allocate the bodies of other agents, if available, and to control them over special
communication channels with high transfer rates (see Fig. 9). During this time, the normal
agents have no access to their bodies, since they are being used by the special agent.
Fig. 9: Special agent: centralized planning for other agent bodies
In other words: if the information exchange between agents of the same team increases so
dramatically that it causes a bottleneck in the communication channels, then these
agents have the possibility of forming a closer internal relationship (see Fig. 10). Because
special agents change the structure of the control architecture while they are active, they
should only be used if no other type of cooperation is suitable.
Fig. 10: Communication for high transfer rate
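The special-agent mechanism can be sketched as a temporary transfer of body ownership. The class and attribute names below are assumptions for illustration, not part of the paper's implementation.

```python
# Illustrative sketch (not from the paper) of the special-agent concept:
# a special agent temporarily allocates the bodies of normal agents and
# controls them directly, e.g. to drive a closed kinematic chain.

class Agent:
    def __init__(self, name):
        self.name = name
        self.body_owner = self  # normally, an agent controls its own body

class SpecialAgent(Agent):
    def __init__(self, name):
        super().__init__(name)
        self.allocated = []

    def allocate(self, agents):
        # Take over the bodies of other agents, if they are available.
        self.allocated = [a for a in agents if a.body_owner is a]
        for a in self.allocated:
            a.body_owner = self  # the normal agent loses access to its body

    def release(self):
        # Restore the original control structure when cooperation ends.
        for a in self.allocated:
            a.body_owner = a
        self.allocated = []

left, right = Agent("Mp1"), Agent("Mp2")
chain = SpecialAgent("TwoArmChain")
chain.allocate([left, right])
assert left.body_owner is chain and right.body_owner is chain
chain.release()
assert left.body_owner is left
```

The `release` step reflects the requirement that special agents change the control structure only while they are active.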
5.Task Negotiation in KAMARA
If a new mission is assigned from the mission set M, it is possible that many agents are able
to work on that task. As a consequence, an important problem that has to be solved is the
negotiation among these agents that compete for the mission. This negotiation process can
be performed by
• a centralized mediator (only one mediating agent exists in the system),
• a selected candidate (every agent has the ability to mediate), or
• all (or many) candidates.
The implementation of a centralized mediator conflicts with important goals of the new
system architecture: in this case, a disadvantageous and inflexible information flow goes
unavoidably through a slow centralized planning system, which has to negotiate with many
system components. Instead, similar to human behavior, a selected candidate is used. In this
concept, the mediating agent demands a mission-solving evaluation from all other
competing candidates, compares these offers, and chooses the best one.
5.1 Task Description
Because all agents are able to negotiate with the competing agents and the communication is
managed by a blackboard architecture (mission set), it is not only necessary to represent the
task itself, but other information as well.
First of all, it is important to store the agent identification of the mediator and a list of all
other candidates that compete for the mission. This way, the mediator may be identified,
and the mediator knows which other agents are involved in the negotiating process. Since
the blackboard structure controls the information exchange among the system components,
the evaluations of the competing agents are also stored in the task information block. As
mentioned before, all agents that work on the subtasks have to send their solutions (or a
signal) to the responsible initiators. Thus, the mission representation must also indicate the
agent that appended the subtask to the mission set. To be specific, it is necessary to store the
identification number of the desired head and communicator, since many of these heads and
communicators may exist for an agent. Since an agent head that needs further information to
perform its task is blocked until the desired information is present, a correspondence field
informs this agent that the solution is stored on the blackboard structure. To identify the
solution, the mission identification number n is used as a reference in the solution
representation. Because the solution representation can be very complex and is not always
known in advance, it should not be integrated into the mission description. Therefore, a
mission is represented as a tuple
m=(n, I, R, P, t, A, V, E)
The field I contains a list of the mission initiators, t represents the task itself, R is a set of
receivers which are interested in the mission solution, A is a list of the competing agents and
V their valuations. The first candidate in the candidate list A is the mediator between the
competing agents. All other entries are the candidates. When the mediator selects the agent
that has to perform the task, a corresponding message is sent to all agents through the
execution set E. In this field, all competing agents can see whether they are chosen to work
on the mission or not. The set P contains signals to the initiators or receivers, which indicate
that the mission solution is presented on the blackboard structure. The blocked initiator or
receiver head which is waiting for information has to examine this field until a signal is
presented. In order to overcome problems with older messages on the blackboard while still
holding the information available as long as necessary for other interested agents, the last
receiver that fetches its information from the blackboard has to delete the mission and the
corresponding answer from the mission set. This way, information is available as long as
necessary and is deleted if all interested agents have received the solution.
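The mission tuple m = (n, I, R, P, t, A, V, E) from this section can be sketched as a simple record. The field types below are assumptions for illustration; the paper does not specify concrete data structures.

```python
# Hedged sketch of the mission tuple m = (n, I, R, P, t, A, V, E) from
# section 5.1; the field types are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class Mission:
    n: int                                  # mission identification number
    I: list                                 # mission initiators
    R: list                                 # receivers interested in the solution
    P: list                                 # post signals: solution is on the blackboard
    t: str                                  # the task itself, e.g. "PICK(sideplate)"
    A: list = field(default_factory=list)   # candidate list; A[0] is the mediator
    V: dict = field(default_factory=dict)   # valuations of the competing agents
    E: dict = field(default_factory=dict)   # execution flags: chosen or not

m1 = Mission(n=1, I=["M"], R=["M"], P=[], t="PICK(sideplate)")
m1.A = ["Mp1", "Mp2"]   # the first registered candidate mediates
assert m1.A[0] == "Mp1"
```

The last receiver that fetches the answer would call a delete operation on the mission set, keeping the information available exactly as long as some agent in R still needs it.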
5.2 Selection of the Mediator
The negotiation procedure starts when a new mission appears in the mission set. The
communicator of every agent, whether the body of this agent is already performing another
mission or not, searches for tasks which it can solve in the mission set. One of these
competing agents should negotiate with the rest of them. If the candidate list A is empty, the
first competing agent head acts as mediator and stores its identification number into the first
position of the field A. When another agent becomes interested in mission solving, it is
obvious that this agent head should evaluate its problem solving ability and send it to the
mediator by writing the information into the corresponding position of the valuation field V.
The mediator calculates its own ability to work on the mission and waits an a priori defined
time τ for the evaluations of all other agents ai; it then compares these evaluations and
chooses the best agent aj of the entire candidate list A = (a1, a2, ..., ak) to work on the
mission. This way, candidates that are not able to calculate their evaluation fast enough,
or that are disabled, are not involved in the negotiation process. Because all agents that have
the ability to work on the mission are integrated in this selection procedure, it may occur
that an agent which was previously working on a different task enters the competing process
later, when its body is "free".
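The mediator's selection rule can be sketched as follows: collect valuations for at most τ seconds, then pick the best candidate among those that answered in time. The function and parameter names are assumptions for illustration.

```python
# Sketch (assumed names) of the mediator's selection rule from section 5.2:
# collect valuations for at most tau seconds, then choose the best candidate.

import time

def mediate(valuations, tau, poll, candidates):
    """valuations: dict filled asynchronously by the candidates (field V).
    poll: stand-in for reading new blackboard entries."""
    deadline = time.monotonic() + tau
    while time.monotonic() < deadline and len(valuations) < len(candidates):
        poll()  # in KAMARA this would be a blackboard update
    # Candidates that missed the deadline are simply not considered.
    return max(valuations, key=valuations.get)

V = {}
def fake_poll():
    V.update({"Mp1": 0.30, "Mp2": 0.60})  # illustrative valuations

winner = mediate(V, tau=0.01, poll=fake_poll, candidates=["Mp1", "Mp2"])
assert winner == "Mp2"
```

A candidate whose body is busy can still enter the valuation dictionary later, matching the observation that agents may join the competition once their body is free.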
5.3 State Transition Diagram for the Agent Head
To briefly describe the above-mentioned negotiation process, an internal state transition
diagram for the agent head is presented. The state transition diagram of the agent head
shown in Fig. 11 consists of four states: mediate, calculate, ready, and not existing. If a
new mission appears in the mission set, the communicator of an interested agent initiates a
new process copy of the agent's head (state transition not existing to ready). The mediator
head then changes its state to mediate, and all other candidates have to change their state to
calculate.
In both states, calculate and mediate, the agent calculates its ability to solve the mission. All
candidates that are in the state calculate store their calculated value in the corresponding
position in the evaluation field V. When this field is complete or the defined time constant τ
is reached, the mediator selects the candidate that has the best ability to complete the mission
and stores the execution information in the execution field E. Thereby, all candidates are
informed whether they are chosen to work on the mission or not. The heads of all
candidates that are not selected to work on the mission change their states back to not
existing, and the chosen candidate changes its state to ready. If two (or more) agents send
equivalent valuations that are also the highest in the negotiating process, the mediator
randomly selects one of these competing agents. In contrast to the multiple head processes,
the physical agent is only capable of performing one task at a time; thus, the agent body is
implemented as a single procedure.
Fig. 11: State transition diagram for the agent head
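The four-state head behaviour can be sketched as a transition table. The table below is reconstructed from the text of section 5.3, not from the original figure, so the event names are assumptions.

```python
# Sketch of the agent-head state machine from Fig. 11; the transition table
# is reconstructed from the text of section 5.3, not from the figure itself.

TRANSITIONS = {
    ("not existing", "new mission"): "ready",     # communicator spawns a head
    ("ready", "first candidate"): "mediate",      # becomes the mediator
    ("ready", "later candidate"): "calculate",
    ("calculate", "selected"): "ready",           # chosen to execute the mission
    ("calculate", "rejected"): "not existing",
    ("mediate", "selected"): "ready",
    ("mediate", "rejected"): "not existing",
}

def step(state, event):
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = step("not existing", "new mission")   # head process is created
s = step(s, "later candidate")            # another head already mediates
s = step(s, "rejected")                   # not chosen for the mission
assert s == "not existing"
```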
5.4 Evaluation
The main goal of the described protocol is to fulfil the properties of fault-tolerance and system
extendibility. On that account, all information exchange has to be performed by use of the
mission set (direct agent communication is not allowed). As a consequence, it is neither
necessary to recognize a damaged subsystem nor to reconfigure the control architecture to
work with damaged components. By the use of several agent heads, parallel planning is
possible when changes in the environment information do not have to be integrated into
the action selection process. Furthermore, new components can easily be integrated into the
system architecture.
Comparing both control architectures (the centralized system FATE, and the distributed
concept KAMARA), it can be shown that the temporal resolution of a coordinated effort is
better in the distributed approach, because independent task execution and task planning is
often possible.
6.Agents and Tasks in KAMARA
In the KAMARA system, several agents exist that work together to perform the desired
task. Consequently, a communication protocol between the agents is required. This
language consists of operations that request an agent to perform a task, and it can be used
by agents to involve other agents in the solution process. In KAMARA, the agents
perform the following operations:
• Manipulator: A manipulator is able to perform the implicit elementary operations
PICK and PLACE.
• Two-arm manipulator: A two-arm manipulator is also able to perform the implicit
elementary operations PICK and PLACE. Because this agent consists of two
independent actuators which make up a superagent, the mission valuation of this
agent is much higher than the calculated value of a single manipulator when picking
up a heavy or large object.
• Manager: This agent is responsible for the interpretation of a complex mission the
system has to perform. It decomposes a complex task into its executable parts.
• Database: The database is able to offer world-state information the agents need to
perform their tasks.
• Overhead camera: This agent type is able to determine the position of objects by
examining a wide environmental area.
• Hand camera: This sensor type is able to determine exact relative object positions
based on an inexact absolute position estimation. This information is, for example,
necessary for a manipulator just before performing a grasping operation. It can also
be used to extract object positions like the overhead camera.
• State controller: This agent is responsible for the blackboard structure and the state of
the other system components. One important task is to control the period of time a
mission is held on the blackboard for execution. If the time stamp increases above a
given threshold, the state controller can search the protocols to determine whether a
system component has recently performed a similar mission. If so, this component
may be damaged, overloaded with tasks, or blocked. Thus, the mission could be
given to the cell controller, so that another system independent of KAMRO's control
can perform the task. The state controller also monitors system component evaluation:
if a system component estimates its ability to perform a mission as very high, but
execution of the mission often fails, the state controller is able to inform the
corresponding agent and this way reduce its evaluation coefficient to zero.
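The agent roles above can be summarized as a capability table that a communicator consults before competing for a mission. This mapping is an illustrative assumption, not taken from the paper.

```python
# Illustrative mapping (assumed, not from the paper) of KAMARA agent types
# to the task patterns they compete for on the mission set.

CAPABILITIES = {
    "manipulator":  {"PICK", "PLACE"},
    "two_arm":      {"PICK", "PLACE"},
    "overhead_cam": {"find_position"},
    "hand_cam":     {"find_position"},
    "database":     {"find_weight"},
}

def interested(agent_type, task):
    """An agent's communicator checks whether it can compete for a task."""
    op = task.split("(", 1)[0]   # e.g. "PICK(sideplate)" -> "PICK"
    return op in CAPABILITIES.get(agent_type, set())

assert interested("manipulator", "PICK(sideplate)")
assert not interested("database", "find_position(sideplate)")
```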
The communication and negotiation concept which exists between agents in order to
perform a mission will be demonstrated in the following example of an assembly task, the
Cranfield Assembly Benchmark.
An assembly task is represented by a precedence graph. The nodes consist of individual
subtasks. Therefore, all possible sequences of subtasks by which a given task can be
performed are represented. For the Cranfield Benchmark shown in Fig. 12, the assembly
description is given in Fig. 13. This precedence graph only describes the goals the system
has to reach, whereas the executing agent has to decide how these goals can be achieved
depending on the environment at execution time. Therefore, the agent head uses the
system's sensor information to expand this implicit representation to an explicit one.
Fig. 12: Cranfield Assembly Benchmark
Fig. 13: Assembly Precedence Graph
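The precedence-graph interpretation described above can be sketched as follows: a subtask becomes executable once all of its predecessors are finished. The graph fragment and task names below are illustrative assumptions, not the actual Cranfield Benchmark graph.

```python
# Sketch: an assembly precedence graph where nodes are subtasks and an entry
# (u in preds of v) means u must be finished before v can start. Executable
# subtasks are those whose predecessors are all done.

def executable(graph, done):
    """graph: dict mapping subtask -> list of predecessor subtasks."""
    return [t for t, preds in graph.items()
            if t not in done and all(p in done for p in preds)]

# A fragment of a Cranfield-like benchmark (illustrative, not the real graph):
graph = {
    "PICK(sideplate)": [],
    "PLACE(sideplate)": ["PICK(sideplate)"],
    "PICK(crossbar)": [],
    "INSERT(crossbar)": ["PLACE(sideplate)", "PICK(crossbar)"],
}

assert "PICK(sideplate)" in executable(graph, done=set())
assert executable(graph, done={"PICK(sideplate)"}) == [
    "PLACE(sideplate)", "PICK(crossbar)"]
```

The graph only describes goals; how a goal is achieved is decided by the executing agent from sensor information at execution time, which is the expansion from the implicit to an explicit representation mentioned in the text.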
Interpretation of a complex mission is only performed by the manager agent. This agent
therefore competes for it. As a consequence, a new manager head process is created in the
system. This head is responsible for the whole execution process of this assembly mission.
Thus, it starts with the task decomposition, taking the precedence graph into account.
Referring to the example described above, only the operation PICK(sideplate) can be
performed and is then appended to the mission set:
m1 = (1, {M}, {M}, nil, PICK(sideplate), nil, nil, nil)
This mission, one implicit elementary operation, can be performed by many system
components, for example all manipulators, perhaps also a special agent or a team of other
agents. All these agents are involved in the solution process and have to calculate their
ability to solve the mission. For example, both manipulators of KAMRO are interested in
the mission, and the manipulator Mp1 is the first candidate that competes for it:
m1 = (1, {M}, {M}, nil, PICK(sideplate), (Mp1, Mp2), nil, nil)
As a consequence, two new processes of the agents' heads are started; the state transition to
mediate is performed by the head of agent Mp1, and the second agent head switches to the
state calculate. In these states, both manipulators start to calculate their mission solution
valuation. For this, these system components need further information, for example the
position and the weight of the sideplate. Because both manipulators are not able to
determine the position information, other agent types have to be involved. If, for example,
the agent Mp1 is the initiator of the command, a new mission is appended to the mission set:
m2 = (2, {Mp1}, {Mp1}, nil, find_position(sideplate), nil, nil, nil)
After a short time, manipulator Mp2 is also interested in the object position:
m2 = (2, {Mp1}, {Mp1, Mp2}, nil, find_position(sideplate), nil, nil, nil)
Both agents also need weight information:
m3 = (3, {Mp1}, {Mp1, Mp2}, nil, find_weight(sideplate), nil, nil, nil)
In mission m2 there are two types of system components which are able to calculate this
position and are therefore interested: the overhead camera OK and both hand-eye cameras
HK1 and HK2. As described above, a higher problem-solving valuation for OK is calculated:
m2 = (2, {Mp1}, {Mp1, Mp2}, nil, find_position(sideplate), (HK1, OK, HK2), (10%, 95%, 10%), nil)
On that account, the mediator HK1 negotiates between the candidates and modifies E to
inform the competing agents whether they are chosen to work on the mission or not:
m2 = (2, {Mp1}, {Mp1, Mp2}, nil, find_position(sideplate), (HK1, OK, HK2), (10%, 95%, 10%), (0, 1, 0))
The component OK is able to calculate the position without further information. Because
both Mp1 and Mp2 are waiting for the position and are registered in I and R, both agents
are appended to P:
m4 = (4, {OK}, nil, {Mp1, Mp2}, Answer(2, (3.5; 5.3; 6)), nil, nil, nil)
Mp1 examines the blackboard, recognizes that there is information available, and deletes its
identification number from P:
m4 = (4, {OK}, nil, {Mp2}, Answer(2, (3.5; 5.3; 6)), nil, nil, nil)
The initiating mission m2 stays on the blackboard structure so that another agent that
demands this information can find the desired information immediately by use of the
mission identification number. When Mp2 fetches the information from the blackboard, the
post list is empty, and Mp2 thereby deletes message m4 and mission m2 from the blackboard.
The database DB is able to calculate the object weight in an analogous way:
m5 = (5, {DB}, nil, {Mp1, Mp2}, Answer(3, 17g), nil, nil, nil)
Now, the competing manipulators are able to determine their ability to perform the desired
PICK task under consideration of the distance to the object, the weight of the object, and
perhaps other information. The mediator compares all offers and starts the best-qualified
manipulator to perform the task, i.e. Mp2, by use of the execution field:
m1 = (1, {M}, {M}, nil, PICK(sideplate), (Mp1, Mp2), (30%, 60%), (0, 1))
Agent Mp2 gets the execution signal and starts the PICK operation. Thereby, it sends a sequence of executable operations to its agent body. Immediately before executing the grasping operation, it is necessary to determine the exact obstacle position. The integration of the hand camera to get the exact object position is also performed by the above described algorithm. This way, manipulator Mp2 holds all needed information to execute the grasping operation and signals completion:
m6 = (6, {Mp2}, nil, {M}, Finished(1), nil, nil, nil)
After execution of the PICK operation, the manager receives a signal that mission m1 is completed, and appends the next executable implicit elementary operation to the mission set.
When the precedence graph is finished, the manager sends a signal to the cell planning
system and leaves the system.
7.Coordination between system agents
In a distributed system, coordination between system components is not easy to obtain. For example, the manipulator agents are not able to avoid collisions when they independently evaluate their action selection functions, because only the positions in the environment can be perceived through the sensor information. On that account, these agents need further information that cannot be calculated by their sensor systems. This further information can either be
1.information where other system agents are positioned, or
2.information about the plans of other system agents.
By the use of an action selection function that depends on the internal world representation, sensor data, and information processed and offered by other system components, it is possible to integrate the behaviour of other system components into the own action selection function.
For coordination between the components, the special agent concept presented above can be used. If, for example, the spatial distance between both manipulators is so small that asynchronous information exchange is not sufficient to guarantee collision avoidance, these agents have the possibility to enter a closer internal relationship. This way, the special agent acts as a single component in the system. This is the case if, for example, both manipulators need the same spatial area at the same time to carry out their missions.
On that account, three steps of collision avoidance can be distinguished. First of all, both manipulators can carry out their missions independently. Therefore, they use their internal world model, the sensor information, and the information offered by the other system components to calculate the next agent action. The information exchange is hereby asynchronous by use of the mission board (information channel). Normally, this information exchange is sufficient for the action selection function to avoid collisions.
If the trajectories of both manipulators lead through a common space, the action selection function of the agent introduces new tasks t1 and t2 on the mission board which force both manipulators to build a special agent (SA):
t1 = (manipulator1: stop, SA)
t2 = (manipulator2: stop, SA)
This way, the agent bodies are blocked until the communicators of both agents have built a special agent. In the special agent, one planning component and thereby one action selection function is used to calculate the operations for both manipulators synchronously:
t = (t1, t2)
As a consequence, it is also possible to stop one manipulator for a time interval or to reduce its velocity to avoid collisions. For example, one manipulator can wait at a rest position until the other system agent has performed its operation. This approach is also used in the centralized control architecture FATE. If the critical spatial area is passed, both agents (A) are reintroduced to work on their individual plans:
t = ((manipulator1: stop, A), (manipulator2: stop, A))
This way, the special agent leaves the multi-agent system, and the assembly task can be finished by use of two independently working manipulators.
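The synchronous planning inside a special agent can be illustrated with a toy scheduler: whenever the next operations of both arms need the shared spatial area, one arm is made to wait at a rest position. All names and the data layout here are assumptions for illustration only:

```python
def plan_synchronously(tasks_a, tasks_b, shared_zone):
    """One action-selection step for a special agent controlling both arms.
    If the next operations of both arms need the shared zone, one arm waits.
    Illustrative sketch, not KAMARA's actual planner."""
    plan = []
    for op_a, op_b in zip(tasks_a, tasks_b):
        if op_a in shared_zone and op_b in shared_zone:
            plan.append((op_a, "wait"))   # arm B pauses at a rest position
            plan.append(("wait", op_b))   # then arm A pauses in turn
        else:
            plan.append((op_a, op_b))     # operations can run in parallel
    return plan

plan = plan_synchronously(["approach", "grasp"], ["approach", "place"],
                          shared_zone={"grasp", "place"})
# → [("approach", "approach"), ("grasp", "wait"), ("wait", "place")]
```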
In real complex robot systems, unexpected situations often occur that can't be taken into
consideration in advance. For example, it is possible that a system component or a
subsystem of another system obstructs the scene to be examined by the overhead camera. In
this case, a centralized system structure must start an error recovery procedure to determine
components which block the system, but if a component of another robot covers the scene,
it is impossible to solve this situation. In KAMARA, a mission is appended to the mission
set by the overhead camera requesting the blocking component to leave the scene. If the
state controller which is responsible for the blackboard recognizes that a mission cannot be
performed by the system itself, it delivers the mission to the cell planning system, which
involves other systems in the solution process.
It is also possible that a whole system or a system component can no longer work. Centralized control architectures do not have the fault tolerance to overcome these problems: at first, the system is often unable to recognize a damaged component; furthermore, it is not easy to reconfigure the control architecture to work around damaged components. In KAMARA, damaged agent heads need not be recognized explicitly because these components simply no longer compete for missions. If the agent body is damaged, its head calculates a valuation that is too high for the real ability. The state controller perceives that this agent often fails during task execution and sends a message to the agent head.
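The failure monitoring by the state controller can be sketched as simple rate bookkeeping over observed task executions; the threshold and all names are illustrative assumptions:

```python
def flag_failing_agents(failures, attempts, threshold=0.5):
    """State-controller sketch: flag agents whose observed failure rate
    during task execution exceeds a threshold (all names illustrative)."""
    suspects = []
    for agent, fails in failures.items():
        if attempts[agent] and fails / attempts[agent] > threshold:
            suspects.append(agent)   # would trigger a message to the agent head
    return suspects

# Mp1's body fails in 4 of 5 executions; its head's valuation is too optimistic.
suspects = flag_failing_agents({"Mp1": 4, "Mp2": 0}, {"Mp1": 5, "Mp2": 5})
```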
The integration of a new system component into a complex centralized system is often very
difficult because it is not obvious where the system must be modified. In KAMARA, new
system components can easily be added to the system because the system structure doesn't
have to be modified. The integration is performed by the negotiation process.
This research work was performed at the Institute for Real-Time Computer Systems and Robotics (IPR), Prof. Dr.-Ing. U. Rembold and Prof. Dr.-Ing. R. Dillmann, Faculty for Computer Science, University of Karlsruhe, and was funded by the national research project on artificial intelligence (SFB 314) of the German Research Foundation.
10.References
1.Rembold, U.; Lueth, T.C.; Hörmann, A. Advancement of Intelligent Machines. ICAM JSME Int'l Conf. on Advanced Mechatronics, Tokyo, Japan, August 2-4, 1993, pp.
2.Schraft, R.D. Vom Industrieroboter zum Serviceroboter - Stand der Technik, Chancen
und Grenzen. ISRR VDI/VDE-GMA Intelligente Steuerung und Regelung von
Robotern, Langen, Nov. 9.-10., 1993
3.Beni, G.; Hackwood, S. Coherent swarm motion under distributed control. DARS Int. Symp. on Distributed Autonomous Robotic Systems, Wako, Japan, Sept. 21-22, 1992, pp. 39-52.
4.Camargo, R.F.; Chatila, R.; Alami, R. A Distributed Evolvable Control Architecture
for Mobile Robots. ICAR Int. Conf. on Advanced Robotics, Pisa, Italy, June, 4-6,
1991, pp. 1646-1649.
5.Drogoul, A. When Ants Play Chess. MAAMAW Europ. WS on Modelling Autonomous Agents in a Multi-Agent World, Neuchatel, Switzerland, August 24-27.
6.Kotosaka, S.; Asama, H.; Kaetsu, H.; Endo, I., Sato, K.; Okada, S.; Nakayama, R.
Fault Tolerance of a Functionally Adaptive and Robust Manipulator. IROS IEEE/RSJ
Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993,
pp. 294-300.
7.Lueth, T.C.; Laengle, Th. Task Description, Decomposition and Allocation in a Distributed Autonomous Multi-Agent Robot System. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Munich, Germany, Sept. 12-16, 1994, pp. 1516-1523.
8.Noreils, F.R. An Architecture for Cooperative and Autonomous Mobile Robots. ICRA
IEEE Int. Conf. on Robotics and Automation, Nice, France, 1992, pp. 2703-2710.
9.Ohmori, Y.; Nakauchi, Y.; Itoh, Y.; Anzai, Y. A Task Allocation Algorithm for Multiple Mobile Robot Environment. 2nd ICARV Int. Conf. on Automation, Robotics and Computer Vision, Singapore, September 16-18, 1992, pp. RO-15.3
10.Ozaki, K.; Asama, H.; Ishida, Y.; Matsumoto, A.; Kaetsu, H.; Endo, I. The Cooperation among Multiple Mobile Robots using Communication System. DARS Int. Symp. on Distributed Autonomous Robotic Systems, Wako, Japan, Sept. 21-22, 1992.
11.Tigli, J.Y.; Occello, M.; Thomas, M.C. A Reactive Multi-Agents System As Mobile Robot Controller. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993, pp. 2008-2014.
12.Yuta, S.; Premvuti, S. Coordinating Autonomous and Centralized Decision Making to
Achieve Cooperative Behaviors between Multiple Mobile Robots. IROS IEEE/RSJ
Int. Conf. on Intelligent Robots and Systems, Raleigh, NC (USA), July 7-10, 1992,
p. 1566 ff
13.Demazeau, Y.; Mueller, J.-P. Decentralized Artificial Intelligence, North-Holland, 1992.
14.Steiner, D. A Preliminary Agent Model: Cooperating Agents. Deliverable D-II.2.ii. IMAGINE, Esprit Project 5362, Siemens, 1991.
15.Hayes-Roth, B. A blackboard architecture for control. Artificial Intelligence 26 (1985), pp. 251-321.
16.Thorpe, C.E., Ed. Vision and Navigation - The Carnegie Mellon Navlab, Kluwer
Academic Publishers, Boston, 1990.
17.Arkin, R.C. Motor Schema-Based Mobile Robot Navigation. The International Journal
of Robotics Research, 8, 4 (1989), pp. 92-112.
18.Hrmann,A.; Meier,W.; Schloen,J. A control architecture for an advanced fault-
tolerant robot system. Robotics and Autonomous Systems, 7 (1991), pp. 211-225.