Reusable Software Components in a Human Robot Interface

João Xavier and Urbano Nunes
Institute of Systems and Robotics - ISR,University of Coimbra,Portugal
Abstract—This article introduces a framework for the development of software components for Human Robot Interfaces (HRI). In order to couple applications that work in two and three dimensions, with different levels of interactivity, research was done to understand the canonical way of interacting with the environment. From this research, reusable components were developed for use in composable HRI applications. We explain the architecture and design rationale of the framework, and demonstrate its usage with the following example applications: multiple robot teleoperation, environment perception, a computer vision application for 3D reconstruction using the GPU and, finally, voice control of components. The developed framework is available online under a GPL licence.
Experimenting is important for getting an idea of how things work and how processes can be interconnected: it plays an important role in the cognitive process. To experiment with a system, users need graphical user interfaces (GUIs) that permit them to interact with the system and get immediate feedback on how the system responds to their changes.
In this article we propose and give insights into both a component-based Human Robot Interface (HRI) - called Player Viewer 3D (PV3D) - and the framework that was used to build it - called the Experimental Robotics Framework (ERF).
To access the mobile robot sensors and actuators, the Player [1] framework is used. Because Player is the communication layer, we can also use PV3D to access robots and sensors from the Stage and Gazebo simulators along with the real-world ones provided by Player.
Because collaboration is an essential part of modern research which improves quality and problem-solving time, and speeds technology transfer, this work is available for free under a GPL [2] license for others to contribute to. The framework and videos demonstrating its usage are available for download on the "Modules for Intelligent Autonomous Robot Navigation" (MIARN) site [3].
When doing robotics research, we found ourselves writing new Graphical User Interfaces (GUIs) every time there was a need to visualize the evolution of an algorithm, or when a process required intervention from the user - e.g. teleoperation of robots.
This work is partially supported by FCT Grant POSI/SRI/41618/2001 to João Xavier and Urbano Nunes.
The concept of a 3D GUI based on component frameworks emerged with the development of software for robot perception [4] of the surrounding environment, where we noticed that both the mobile robot navigation and the machine vision code used similar concepts, and therefore could be shared in a single API. Examples of this case are machine vision algorithms that can be applied to grid-based maps as well, e.g. Voronoi diagrams, mathematical morphology operations, Hough transforms, etc. So we proposed to create a framework for common components used in interactive robotics-related experiments. Examples of research fields that benefit from this work are robot perception, debugging of algorithms, machine/computer vision, agent behaviour, and teleoperation of robots.
At the moment there is little bibliography or software on frameworks for GUI components useful for robotics research. This made us approach this problem with the increased responsibility of studying state-of-the-art technology to develop the framework.
B. Concepts used in this article
To ensure better communication with the reader, some concepts that are fundamental to the understanding of the article need to be clarified, namely the concepts of Framework, Component and Plugin.
- A framework is usually considered a kind of software "infrastructure" that is the prerequisite to an expandable system, i.e. the environment where components live. A framework for components enables an application to use a component it has never heard of - and was not specifically adapted for - because both the application and the component comply with the framework and know what to expect from each other.
- A software component is a system element offering a predefined service and able to communicate with other components. Components are reusable, non-context-specific and composable with other components. The main purpose of creating components is to enable use of the same component in applications needing similar functionality. An existing component can be replaced with a new implementation of the same functionality, without changing a single line of code in the application, because the interface remains the same. From the programmer's point of view in ERF, the role of a Component is to provide an Extensible Markup Language (XML) input/output for encapsulated C++ methods and data access. The ERF component implementation was inspired by the component features available in the Objective-C programming language [5]. ERF component features are the following: message passing between components, dynamic loading, and introspection (allowing one to query what services/data the component provides). When developing components it is crucial to verify that new features do not break the existing framework, that components interoperate seamlessly, and that authors provide documentation.
- A Plugin is a set of functionality that is to be merged into a composite application. In ERF the Components inherit from the base Plugin class, augmenting the Plugin with new functionalities like XML communication.
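The Plugin/Component relationship described above can be sketched in C++. This is an illustrative reconstruction, not the actual ERF API: the class names, the `handleXML` method and the `services` introspection map are assumptions made for the example.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: a base Plugin, and a Component that augments it
// with XML message passing and introspection, as the text describes.
class Plugin {
public:
    virtual ~Plugin() {}
    virtual std::string name() const = 0;
};

class Component : public Plugin {
public:
    // Message passing: receive an XML command, return an XML reply.
    virtual std::string handleXML(const std::string& xmlCommand) = 0;
    // Introspection: advertise the services this component provides.
    virtual std::map<std::string, std::string> services() const = 0;
};

// Example concrete component: a camera accepting a "frames" command.
class CameraComponent : public Component {
    int fps_ = 25;
public:
    std::string name() const override { return "camera"; }
    std::map<std::string, std::string> services() const override {
        return {{"frames", "set the camera frame rate"}};
    }
    std::string handleXML(const std::string& xmlCommand) override {
        // A real implementation would parse the XML; we only sketch the idea.
        if (xmlCommand.find("<command name=\"frames\"") != std::string::npos)
            fps_ = 30;
        return "<reply status=\"ok\"/>";
    }
    int fps() const { return fps_; }
};
```

Because the application only talks to the `Component` interface, a `CameraComponent` could be swapped for another implementation without touching application code, which is exactly the replaceability argument made above.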
C. The composable HRI PV3D and the framework ERF
PV3D composes a specific application from a set of available plugins, so that the application is adapted to the desired case study.
To provide the components of PV3D with a shared code base, the ERF framework was created. The ERF framework design goals are the following:

- Unleash the developer's creativity.
- Be generic, avoiding the use of strict robotics-related syntax in documentation and code, so it fits a broader variety of research fields.
- Describe the world and the interaction using generic canonical concepts: 3D Objects that have Tags for their classification - e.g. ground, robot, person - and can also receive User Input - e.g. drag mouse to select, goto, keys for robot control, etc.
- Use OpenGL for 3D graphics to take advantage of the 3D acceleration offered by today's graphics cards.
- Model and implement GUI interactions in a way that is natural to the user.
- Keep HRI applications simple (just use the components you need) and their code easy to understand due to the reduced complexity.
- Keep components generic to encourage their reuse between applications.
- Conceive the world camera view and behavior so that navigation in the world feels natural.
- Encourage fast prototyping by combining available components to compose an application, or forking the code of the available components to devise new ones.
- Allow components to be used in non-graphical, non-GUI applications.
- Distribute the software under a GPL license [2], to avoid duplication of efforts.
To the authors' knowledge, ERF is the only framework with this design philosophy and features, and PV3D is the only 3D composable HRI with support for multiple robots developed on a component-based framework.
The rest of this article is organized as follows. The related work is described in the following section. The layout of ERF and PV3D is then described, followed by the modelling of the HRI interactions. Examples of composed applications using PV3D in the domains of robotics and machine vision are shown next. Because ERF is the first software of its kind, the process of selecting its foundations was careful. We conclude with a brief review of our work.
In the HRI bibliography, many good practices are proposed and applications described. Good practices in usability must be followed to ensure that users have a good experience with the HRI. Some articles discussing and evaluating existing HRIs are [6]–[13]. Usability issues of home-service robots are explored in [14], in particular a map-based user interface for instructing home-service robots in a home environment. An elevated 2D map interface was favored by the test subjects when they were asked to report the ease-of-use of the interfaces. The advantages of 3D interfaces for controlling robots are discussed in [15].
A taxonomy for categorizing human-robot interaction is presented in [16]. Of the models available in this taxonomy, our model would be in the "one operator to multiple robots" category. The task of interfacing with swarms of robots is discussed in [17], where the author introduces the concept of a robot society. Both [18], [19] have developed 3D HRIs for interfacing a single user to multiple robots, with support for motion planning and robot trajectory generation for target tracking. Their interfacing strategy is the following: first the operator selects which robots to use, then the operator selects which objects are to be acted on, and finally the operator selects a task to perform. A Graphical User Interface (GUI) to operate GPS-enabled robots is described in [20]. They raise an interesting issue: the problem of objects appearing twice if sensed by different robots. To solve it, they use heuristics to avoid drawing the same object twice if observed by more than one robot. The previous three references use the OpenGL User Interface library (GLUI) for widgets and the OpenGL C++ Toolkit (GLT) for loading textures and object interaction or "picking". In [21] the conditions are studied under which collaborative human involvement in a shared HRI will not jeopardize the scalability of a network of robots. In our work this issue is not handled, as we consider it a concern of the sensor proxyfication layer, which in our case is Player [1].
Context Acquisition is described as the mechanisms used to quickly understand the robot's situation; it is first described in [7], but similar functionality can be identified in [8], [22], which introduces the concept of Situational Awareness. In PV3D the concept of Context Acquisition can be encapsulated in the components.
At the moment the modalities supported by PV3D are graphics, mouse, keyboard and also speech. Other GUI solutions that also use multimodality appeared in the fields of teaching [23] and alternative interfaces for handicapped people [24].
Other fields that use HRI are Virtual Reality (VR) and Augmented Reality (AR). VR environment-based systems for the navigation of underwater robots are described in [25]–[27]. Multi-robot teleoperation using an intermediate functional representation of the real remote world by means of VR is present in [28]–[32]. Some VR environments for remote operations in hazardous sites are described in [33]–[35]. A VR-based operator interface was developed by NASA in [36] to remotely control complex robotic mechanisms. The authors concluded that VR interfaces can improve the operator's situational awareness and provide valuable tools to help understand and analyze the vehicle's surroundings and plan command sequences. A multi-agent system (MAS) infrastructure that combines HRI with a simulation environment for human search and rescue (HSR) operations is described in [37]. Also from the HSR research area comes the topic of adjustable autonomy, described in [38]. In this research, the robot sensor framework used was the FAST architecture [39]. Ar-Dev [40] is an AR application that superimposes graphics of a robot's sensor readings over a real-world environment.
Commercial solutions that provide feature-rich HRIs are Webots and Mobile-Eyes. Webots [41], [42] aims for fast prototyping of robotics algorithms and also provides a 3D simulator. Mobile-Eyes [43] is an HRI with a focus on area surveillance; it has the following features: autonomous navigation to goals, patrol route set-up and scheduling, teleoperation and remote video control of a pan-tilt-zoom camera.
All the previous references are developed with the C and C++ languages and libraries. One example of an HRI that uses Java, Java3D and CORBA is Avenue UI [44]. In Avenue UI the server exports the data and control API interfaces to clients over CORBA. This approach is flexible in the sense that the client does not need to have a description of the robot controls.
A comparison table of the features of PV3D and ERF against the best HRIs we found is shown in Table.
The architecture of the system is divided in two major components: the ERF framework and the PV3D application composer. In a nutshell: 1) ERF provides a shared library that contains a collection of useful C++ classes that are available to PV3D and its components and can also be used in other programs; 2) PV3D is a program that parses an XML configuration file to select, initialize and run components that will be joined together to form the final application. The next subsections examine both software components in more detail.
A. Overview of the classes available in the Experimental Robotics Framework
What follows is a description of the principal classes available in ERF, which are also depicted in the mind map of Fig.
1) The Managers: These entities are responsible for the bookkeeping of the structures they manage. They use reference-counting techniques so that duplicate instances are avoided whenever possible, ensuring frugal resource management. Examples of this are the sharing of access to a Player proxy by more than one plugin, or the reuse of a single OpenGL display list to draw the same object by multiple components. All Managers can be created as singletons, a software design pattern that ensures they are unique and have lazy initialization (they are only instantiated once, the first time they are needed).
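The singleton-plus-reference-counting idea described above can be sketched as follows. The class name `DisplayListManager` and its methods are illustrative assumptions, not the real ERF manager API; `std::shared_ptr` stands in for whatever counting scheme ERF actually uses.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Hypothetical reference-counted manager of cached OpenGL display lists.
class DisplayListManager {
    // model name -> shared display-list handle (an int id in this sketch)
    std::map<std::string, std::shared_ptr<int>> lists_;
    DisplayListManager() {}  // private: instances only via instance()
public:
    // Singleton with lazy initialization: created once, on first use.
    static DisplayListManager& instance() {
        static DisplayListManager mgr;
        return mgr;
    }
    // Plugins asking for the same model share one cached display list.
    std::shared_ptr<int> acquire(const std::string& model) {
        auto it = lists_.find(model);
        if (it != lists_.end()) return it->second;
        auto id = std::make_shared<int>(static_cast<int>(lists_.size()) + 1);
        lists_[model] = id;
        return id;
    }
    // How many plugins currently share the list for this model.
    long users(const std::string& model) const {
        auto it = lists_.find(model);
        return it == lists_.end() ? 0 : it->second.use_count() - 1;
    }
};
```

Two plugins acquiring the `"nomad"` model would receive the same handle, so the geometry is uploaded to the GPU only once.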
2) Manager of Plugins: contains a list of all the loaded plugins and is responsible for dynamically (un)loading plugins from libtool [45] shared libraries.
3) Manager of OpenGL display lists: contains the OpenGL display lists that represent the objects in the 3D scene. A display list is an OpenGL set of cached procedures that renders 3D objects.
4) Manager of Event Handlers: contains a map of all the Event Handlers available for interaction. It is responsible for selecting the Event Handlers (EH) and also for delivering the events to them. The event propagation acts like the chain of responsibility software design pattern, illustrated in Fig. and further explained in a later section.
5) Item Tree: shows a hierarchical tree of the loaded plugins for browsing. The branches of the tree are the plugins and the leaves are their services and data. By clicking on each service, a callback can be executed that performs an action, e.g. showing a popup sub-menu of the component or giving instructions to a robot.
6) Window 3D: is where the drawing of the world happens. This acts like the visitor design pattern, in which the "visitor" is the window and the "visited" are the plugins. The number of windows is not limited in any way. To save CPU/GPU cycles, the window is redrawn only when a sensor displayed in the window receives new data or on input events.
7) Model Loader: adds the capability of loading common 3D model formats. At the moment the only loader available is for the OBJ format, described in Sec.
8) Camera: the provided camera has two view modes: a top orthographic mode (like the Stage [1] simulator) and a perspective projection mode (like the Gazebo [1] simulator). The user can switch the view mode by pressing the space key. The camera was designed for mobile robotics, where we normally view from the top to give commands and move around the world, and for the popular Simultaneous Localization And Mapping (SLAM) tasks. The center of the screen is common to both views; this means that if the user moves the scene center while in top view, and then switches to perspective view, the center of the screen in perspective view is updated to the top view center. The operations available in the top view are pan and zoom in Cartesian coordinates; in the perspective view they are also pan and zoom, but in spherical coordinates (around the scene center).
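The shared scene-center behavior can be made concrete with a small sketch: both view modes derive their eye position from the same center point, the perspective eye from spherical coordinates around it and the top eye from a height above it. The function and struct names here are assumptions for illustration, not the actual ERF camera classes.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Perspective view: eye at radius r, azimuth and elevation angles
// (radians) measured around the common scene center.
Vec3 perspectiveEye(const Vec3& center, double r,
                    double azimuth, double elevation) {
    return { center.x + r * std::cos(elevation) * std::cos(azimuth),
             center.y + r * std::cos(elevation) * std::sin(azimuth),
             center.z + r * std::sin(elevation) };
}

// Top orthographic view: eye straight above the same center, so panning
// the center in one mode moves the view in the other mode as well.
Vec3 topEye(const Vec3& center, double height) {
    return { center.x, center.y, center.z + height };
}
```

Because both functions take the same `center`, panning in top view and then pressing space lands the perspective view on the same spot, which is the behavior the text describes.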
9) Plugin: plugins extend the functionality of the application and are identified either by their library name or by a unique id field defined in the XML configuration file. Plugins are provided with default configurations so that they can run out of the box, imposing a minimal need to know the parameters of how they work. As there is a clear and simple interface for plugins, development is possible without the need to read and understand the source of the rest of the composite application. A plugin can contain calls to OpenGL, to FLTK [46], or anything the developer wants to encapsulate in it. Plugins are free to do anything, like adding widgets to the HRI, adding new entities to the world, communicating with the Player server, accessing other plugins' data, etc. Every component of PV3D inherits from the base Plugin C++ class. Plugins read their states from the XML files and, when PV3D finishes, their states can be serialized (saved) again to disk. The plugin class also contains some information about the plugin: the author, the webpage, a description, and the license.
During the plugin's lifetime, the following methods are called:
- initialize - called after the windows are shown and their OpenGL contexts are created.
- run - called at each iteration; the method will draw or process data.
- clean - called just before the plugin object is removed from the framework.
The Plugin is implemented as a libtool [45] shared library; this way it is accessible within multiple operating systems.
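The lifecycle above can be sketched as a base class plus a driver loop. The method names (`initialize`, `run`, `clean`) follow the text; the `TracePlugin` and `runLifecycle` harness around them are hypothetical, added only to show the calling order.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Base class mirroring the lifecycle described in the text.
class Plugin {
public:
    virtual ~Plugin() {}
    virtual void initialize() {}  // after windows/GL contexts exist
    virtual void run() {}         // every iteration: draw or process data
    virtual void clean() {}       // just before removal from the framework
};

// Test double that records which hooks were invoked, and in what order.
class TracePlugin : public Plugin {
public:
    std::vector<std::string> calls;
    void initialize() override { calls.push_back("initialize"); }
    void run() override        { calls.push_back("run"); }
    void clean() override      { calls.push_back("clean"); }
};

// Hypothetical driver showing the order the framework invokes the hooks.
void runLifecycle(Plugin& p, int iterations) {
    p.initialize();
    for (int i = 0; i < iterations; ++i) p.run();
    p.clean();
}
```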
10) XML: the XML class is used for parsing XML configuration files and for object persistence (saving the actual state of a world). The class can parse basic types, like integers, floats, strings and booleans. ERF uses XML for the following tasks:
- configuration files, where it solves the problem of syntax and provides auto-completion and validation.
- communication between components and the framework.
- component introspection/reflection, to expose the component interface, i.e. inputs and outputs.
- application state persistence and serialization.
B. How PV3D works
The objective of PV3D is to load a set of plugins from an XML configuration file and associate them with Player proxies, in order to create a composite application. The next subsections provide some insight into the major parts of PV3D.
1) Manager of Player proxies: Player [1] is deployed in a client/server architecture. On the client side we have proxies that mirror the data and methods available on the server robot. PV3D provides a new manager to the ERF framework: the manager of Player proxies. It contains a list of all the proxies available in the Player Client and provides functions for managing them. PV3D accesses these local proxies to visualize the remote data and exert control over the remote robot.
2) Configuration files: A configuration file describes the plugins that will be loaded in the composite application. There are two combinable approaches to writing a configuration file:
- the user specifies what plugins are going to be loaded and what Player proxies they will access;
- the user provides device handlers that specify a correspondence between one Player device and one or more PV3D plugins.
The handler approach can still be insufficient for specific experiments, so another option is to make the program output the final configuration file generated by the handlers. The user can then further customize this configuration file to his needs.
The handler approach works by specifying a mask using the syntax of a Player device, with the addition of wildcards in place of the driver name or interface index, for the selection of multiple devices - e.g. laser:sicklms200:* means all the devices with a laser interface, a specific driver called sicklms200 and all possible indexes. If the handler mask matches any plugins, they are appended to a final XML configuration file with the information of the Player proxy they are associated with. This concept is useful, for example, to load plugins for a simulator world full of robots without having to specify proxy-to-plugin associations for each robot.
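A configuration file combining the two approaches might look like the fragment below. The element and attribute names are assumptions for illustration; the actual PV3D schema (validated by the RELAX NG files shipped with it) may differ.

```xml
<!-- Hypothetical PV3D configuration sketch, not the actual schema. -->
<pv3d>
  <!-- Explicit association: one plugin bound to one Player proxy. -->
  <plugin library="position_control" id="robot">
    <proxy device="position2d:0"/>
  </plugin>
  <!-- Handler: a wildcard mask matching every sicklms200 laser index,
       so one line covers a simulated world full of robots. -->
  <handler mask="laser:sicklms200:*" plugin="laser_view"/>
</pv3d>
```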
We avoid the need for users to remember the syntax of configuration files by providing RELAX NG schemas for them. This way the user can use real-time auto-completion and validation using the Emacs editor in nXML mode. An example of this feature is provided in a video on the site [3].
3) Player Viewer 3D Initialization: After configuration, to run PV3D the user only needs to give the configuration file as an argument to PV3D.
The lifetime of PV3D can be separated into four steps: XML parsing and generation, plugin initialization, plugin execution loop, and plugin termination.
The initialization is the most elaborate step. It uses ERF to: 1) contact the Player Server to retrieve a list of available devices; 2) parse the XML configuration file; 3) resolve Player device Handlers to generate a final XML file with all the plugins to load; 4) launch Player Clients and Proxies associated with the Player Devices that the application plugins need to subscribe to.
4) Components available in PV3D: Table lists the components that are available in PV3D; they can all be combined to achieve different composite applications.
An interaction in ERF is defined as a communication of will between the user and the HRI, which normally comes in the form of input from the mouse, the keyboard or the speech recognition plugin. An example of an interaction is: pressing the Left Mouse Button (LMB) to select a robot - which is both an Object3D and an Event Handler (EH) - makes it the Active Event Handler (AEH); then clicking on a person - which is just an Object3D - delivers the event to the robot because it is the AEH. The robot EH can then display a popup with the actions that the robot can perform on the clicked person. A simplified explanation of how interaction is processed is demonstrated as the chain of responsibility represented in the rightmost diagram of Fig.
To make any C++ class an EH, the developer only has to inherit from the EH virtual base class. When it becomes the AEH (normally because a user clicked it when no other AEH was active), the input of the user is then redirected to the AEH. From the information of the user input - keyboard or mouse - the AEH can decide what actions to take. The AEH knows the type of Object3D clicked by looking at its Tags, which can be attributes, names, etc.
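The robot/person example above can be sketched as a small chain-of-responsibility dispatcher. The class names mirror the text (`Object3D`, `EventHandler`), but the code is an illustrative reconstruction, not the actual ERF implementation; the `Dispatcher` and the returned strings are assumptions made for the example.

```cpp
#include <cassert>
#include <string>

// A world object carrying a classification Tag, as described in the text.
struct Object3D {
    std::string tag;  // e.g. "robot", "person", "ground"
    virtual ~Object3D() {}
};

// Virtual base class a developer inherits from to receive events.
struct EventHandler {
    virtual ~EventHandler() {}
    virtual std::string onEvent(const Object3D& clicked) = 0;
};

// A robot is both a world object and an event handler.
struct Robot : Object3D, EventHandler {
    Robot() { tag = "robot"; }
    std::string onEvent(const Object3D& clicked) override {
        // The AEH decides what to do from the clicked object's Tag.
        if (clicked.tag == "person") return "show person actions popup";
        if (clicked.tag == "ground") return "goto clicked waypoint";
        return "ignore";
    }
};

// Chain-of-responsibility dispatch: clicking an EH activates it; clicks
// on plain Object3Ds are forwarded to the currently active handler.
struct Dispatcher {
    EventHandler* active = nullptr;
    std::string click(Object3D& obj) {
        if (auto* eh = dynamic_cast<EventHandler*>(&obj)) {
            active = eh;
            return "activated " + obj.tag;
        }
        return active ? active->onEvent(obj) : "no active handler";
    }
};
```

Selecting the robot first and the person second reproduces the interaction narrated above: the second click reaches the robot because it is the AEH.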
The major guidelines when developing the interaction part of the framework were to not impose too many constraints on the developer, while ensuring a minimum set of rules to assert that the composed applications have a consistent behavior. The interaction logic is all included in ERF, keeping the components clean of interaction code and further eliminating the possibility of unpredictable behavior.
This section demonstrates the capabilities of ERF with typical PV3D applications from distinct fields in robotics. In all tasks the first step is to launch the Player server with the proxies we want to access. Then the user edits a PV3D XML configuration file detailing the composite application. The composed applications demonstrated in this section are the teleoperation of multiple robots, robot perception with scene interpretation and people tracking, control of multiple robots and components using speech recognition, and finally a computer vision application for sparse 3D reconstruction.
A. Teleoperation of multiple robots
This experiment concerns multiple robot teleoperation in a given map. Here we accessed three mobile robots in the Stage simulator (also possible with real robots). Each robot has either a laser or a sonar sensor on top. Two robots have 3D models, the Nomad and the Robchair (a robotic wheelchair); the third robot only shows its geometry, which is also its collision envelope. Note in this example the widget for position control; also present are spin buttons which can establish limits on the velocities of the robot or provide a joystick-like interface to the robot. The robot position component also provides an EH for controlling the robot in the form of a target. This is achieved by dragging the target over the Ground Object3D, which gives a waypoint for the robot to navigate to (if in position mode) or interfaces with the velocities of the robot like a joystick (if in velocity mode).
B. A robot's perception of the World
One of PV3D's original uses was drawing the perception that a robot has of the world. For this task we require the visualization of a laser device and two scene interpretation plugins, one for people and another for geometric features. Both scene interpretation plugins are associated with Player Fiducial devices that were developed in a previous project [4]. One fiducial device outputs the location of persons, while the other outputs building structures such as benches and columns. Due to the reusable nature of the ERF framework, the components of the previous example could be combined with the ones in this example to enable the teleoperation of the robots while perceiving the scene.
C. Voice control of components
In this experiment the user controls robots or components through the exchange of XML messages using ERF. The speech recognition component separates utterances (a single sentence) into words, and then puts them in an XML message to deliver to the requested component. Two examples of possible sentences are "camera frames thirty" and "robot goto lab". The first word is the unique id of the component that will receive the rest of the utterance, in this case "camera" or "robot". The second word is the command we want to run on that component, and the rest are arguments of the command. The speech recognition component then gets the unique id of the component (provided by the user in the configuration file) that it wants to communicate with, and sends the XML command to the target component.
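The word-splitting scheme described above (id, command, arguments) can be sketched as a small conversion function. The XML shape used here (`<message>`, `<command>`, `<arg>`) is an assumption; the actual ERF message format is not reproduced.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical sketch: turn a recognized utterance into an XML message.
// Word 1 is the target component id, word 2 the command, the rest arguments.
std::string utteranceToXML(const std::string& utterance) {
    std::istringstream in(utterance);
    std::string id, command, word;
    in >> id >> command;
    std::ostringstream out;
    out << "<message to=\"" << id << "\">"
        << "<command name=\"" << command << "\">";
    while (in >> word)                 // remaining words become arguments
        out << "<arg>" << word << "</arg>";
    out << "</command></message>";
    return out.str();
}
```

Feeding in "camera frames thirty" yields a message addressed to the component with id "camera", carrying the "frames" command and one argument, matching the sentence structure described in the text.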
The conversion from the plain text obtained from the speech recognition engine to XML requires no modification either to the voice recognition component's C++ class or to any other component, requiring only interpretation of the XML data. This is one major advantage of the pragmatic approach behind ERF and is unique to it.
D. Sparse 3D Reconstruction and Camera Egomotion
An example of a demanding computer vision application is real-time 3D reconstruction and camera egomotion estimation using SIFT [47] features. This application benefited from the ERF OpenGL classes that ease the loading and processing of GLSL shaders computed on the Graphics Processing Unit (GPU). By combining the components of teleoperation demonstrated in the first example with these ones, it is possible to compose an application that combines visual and spatial landmarks with SIFT features while teleoperating the robot, voice-controlling it and perceiving a scene.
This article described an Open Source, freely available framework for developing HRIs - the Experimental Robotics Framework (ERF) - and the HRI composer that we developed with this framework - Player Viewer 3D (PV3D).
The main contributions of this article lie in (1) a framework for HRI called ERF that encourages fast prototyping, modularity and code reuse, and makes use of state-of-the-art technology; (2) a composable HRI application that provides a wide selection of already available components; (3) a demonstration of the usefulness of this software by giving examples of composed applications for use cases like robot teleoperation, scene perception, voice control and 3D reconstruction. We explain the model of the interactions, which imposes a minimal set of rules for interaction so that the developer is free to complement these rules with his own.
ERF provides a common framework where a community of researchers can share common interfaces and algorithms. Because the component paradigm is simple to understand and very expandable, the chance that their work gets exposed and continued by others is greater. Things that are normally a challenge and a project per se, like visualizing sensors and interacting with a 3D world, are easy with ERF.
In the classroom this software can encourage learning due to its interactivity, graphics, ease of use and LaTeX formula display, which can even be used to present algorithms in an interactive way. With the availability of this framework for free on the Internet, the process of designing 3D interactive GUIs for robotics-related sciences is now much more accessible to everybody. The developed software is having wide acceptance and good feedback from the Player robotics community. Because the developed software is updated on a regular basis, the reader is invited to visit the site [3] for an updated version.
[1] B. Gerkey, R. T. Vaughan, and A. Howard, "The Player/Stage project: Tools for multi-robot and distributed sensor systems," in Proc. 11th International Conference on Advanced Robotics, Coimbra, Portugal.
[2] "Free Software Foundation - GNU General Public License."
[3] "Modules for Intelligent Autonomous Robot Navigation." [Online].
[4] J. Xavier, M. Pacheco, D. Castro, A. Ruano, and U. Nunes, "Fast line, arc/circle and leg detection from laser scan data in a Player driver," in Proc. IEEE Int. Conf. on Robotics and Automation, Barcelona, 2005.
[5] "The Objective-C programming language." [Online]. Available:
[6] A. Steinfeld, "Interface lessons for fully and semi-autonomous mobile robots," 2004. [Online]. Available:
[7] T. Fong, D. Kaber, et al., "Common metrics for human-robot interaction," Sendai, Japan, 2004. [Online]. Available:
[8] J. Scholtz, B. Antonishek, and J. Young, "Evaluation of a human-robot interface: Development of a situational awareness methodology," Hawaii International Conference on System Sciences (HICSS), vol. 05, 2004.
[9] M. J. McDonald, "Active research topics in human machine interfaces," Intelligent Systems and Robotics Center, Sandia National Laboratories.
[10] S. S. Lakshmi, "Graphical user interfaces for mobile robots," 2002.
[11] A. Persson, "Multi-robot operator interface for rescue operations," Master's thesis, Örebro University, 2005.
[12] T. W. Fong and C. Thorpe, "Vehicle teleoperation interfaces," Autonomous Robots, vol. 11, no. 1, pp. 09–18, July 2001. [Online].
[13] D. Song, "Systems and algorithms for collaborative teleoperation," Ph.D. dissertation, Department of Industrial Engineering and Operations Research, University of California, Berkeley, 2004.
[14] H. Ryu and W. Lee, "Where you point is where the robot is," in CHINZ '06: Proceedings of the 6th ACM SIGCHI New Zealand chapter's international conference on Computer-human interaction. New York, NY, USA: ACM Press, 2006, pp. 33–42.
[15] I.-S. Lin, F. Wallner, and R. Dillmann, "Interactive control and environment modelling for a mobile robot based on multisensor perceptions," Robotics and Autonomous Systems, vol. 18, no. 3, pp. 301–310, 1996.
[16] H. A. Yanco and J. Drury, "Classifying human-robot interaction: An updated taxonomy," in IEEE Conference on Systems, Man and Cybernetics, October 2004. [Online]. Available:
[17] A. Halme, P. Jakubik, T. Schonberg, and M. Vainio, "Controlling the operation of a robot society through distributed environment sensing."
[18] C. Clark and E. Frew, "An integrated system for command and control of cooperative robotic systems," in International Conference on Advanced Robotics, June 2003.
[19] H. Jones and P. Hinds, "Extreme work groups: Using SWAT teams as a model for coordinating distributed robots," in Conference on Computer Supported Cooperative Work, November 2002.
[20] M. S. Hank Jones, "Operating GPS-enabled robots with an OpenGL GUI," in Dr. Dobb's Journal, January 2003, pp. 16–24.
[21] A. Makarenko, T. Kaupp, B. Grocholsky, and H. Durrant-Whyte, "Human-robot interactions in active sensor networks."
[22] B. Graves, "A generalized teleautonomous architecture using situation-based action selection," Texas A&M University, College Station, 1995.
[23] H. S. Mosaru Ishii, "A step toward a human-robot cooperative system," Artificial Life and Robotics, 1997.
[24] H. Xu, H. V. Brussel, and R. Moreas, "Designing a user interface for service operations of an intelligent mobile manipulator," Telemanipulator and Telepresence Technologies IV, vol. 3206, no. 1.
[25] M. Bailey and D. P. Brutzman, "The NPS platform foundation," Prague, Czech Republic, June 1995. [Online]. Available:
[26] C. K. Quentin Lin, "A virtual environment-based system for the navigation of underwater robots," Virtual Reality, 1998.
[27] A. Monferrer and D. Bonyuet, "Cooperative robot teleoperation through virtual reality interfaces," iv, p. 243, 2002.
[28] A. Kheddar, P. Coiffet, T. Kotoku, and K. Tanie, "Multi-robots teleoperation - analysis and prognosis," Sendai, Japan, pp. 166–171, September 1997. [Online]. Available:
A.Kheddar,J.Fontaine,and P.Coiffet,“Mobile robot teleoperation
in virtual reality,” Nabeul-Hammamet,TUNISIE,April 1998.[Online].
N.Aucoin,O.Sandbekkhaug,and M.Jenkin,“Immersive 3d user
interface for mobile robot control,” Orlando,pp.1–4,1996.[Online].
R.E.K.Stuart,G.Chapman,“Interactive visualization for sensor-based
robotic programming,” in Systems,Man,and Cybernetics,vol.12-15,
F.Michael Schmitt,“Virtual reality-based navigation of a mobile robot,”
in 7th IFAC/IFIP/IFORS/IEA Symposium on Analysis,Design and
Evaluation of Man-Machine Systems.
F.S.Junior,G.Thomas,and T.Blackmon,“An operator interface
for a robot-mounted,3d camera system:Project pioneer,” in VR ’99:
Proceedings of the IEEE Virtual Reality.Washington,DC,USA:IEEE
Computer Society,1999,p.126.
R.J.Anderson,“Smart:A modular architecture for robotics and tele-
operation.” in ICRA,1993,pp.416–421.
J.Tourtellott,“Interactive computer-enhanced remote viewing system,”
in American Nuclear Society Seventh Topical Meeting on Robotics and
Remote Systems,1997.
L.Nguyen and M.Bualat,“Virtual reality interfaces for visualization
and control of remote vehicles,” in IEEE Conference on Robotics and
S.Burion,“Human-robot teaming for search and rescue,” 2005.
A.Birk and M.Pfingsthorn,“A hmi supporting adjustable autonomy of
rescue robots,” in RoboCup 2005:Robot Soccer World Cup IX,ser.
Lecture Notes in Artificial Intelligence (LNAI),I.Noda,A.Jacoff,
A.Bredenfeld,and Y.Takahashi,Eds.Springer,2006,vol.4020,pp.
267 – 278.
H.Kenn,S.Carpin,M.Pfingsthorn,B.Hepes,C.Ciocov,and A.Birk,
“FAST-Robots:a rapid-prototyping framework for intelligent mobile
robotics,” in Artificial Intelligence and Applications Conference,2003.
T.Collett,“Augmented reality interface for player.” [Online].Available:
“Webots:Professional mobile robot simulation.” [Online].Available:
[42] Meneses and O.Michel,“Vision sensors on the webots
simulator,” in VW’98:Proceedings of the First International Conference
on Virtual Worlds.London,UK:Springer-Verlag,1998,pp.264–273.
“Mobile robots inc.” [Online].Available:
E.Gold,“Avenueui - a comprehensive visualization teleoperation
application and development framework for multiple mobile robot,”
Master’s thesis,CS Dept,Columbia University,2000.[Online].
“GNU autoconf,automake and libtool.” [Online].Available:
“The fast light toolkit.” [Online].Available:
D.G.Lowe,“Distinctive image features from scale-invariant keypoints,”
“Wavefront obj format.” [Online].Available:
“Wings 3d modeller.” [Online].Available:
[Fig. 1 diagram: a composed application with GUI connects Player proxies (Position 2D, Laser, Camera) on each real or simulated robot to reusable graphical plugin components (Robot Pose, Laser, Camera, OpenGL Lights). Each plugin bundles a C++ algorithm library, XML configuration, and author information, and communicates via XML-RPC, shared memory, or a shared OpenGL context. User interaction follows a chain of responsibility: input is delivered to the active event handler (AEH) if one is set; otherwise, clicking a 3D element that is also an event handler makes it the AEH, and when more than one element lies in the selection ray the user picks the correct one. Remaining input moves the OpenGL camera, switches fullscreen, records movies, etc.; when new data arrives, the window that contains the plugin is redrawn.]
Fig. 1. Layout of a composite application of ERF and the user interaction chain of responsibility (on the right)
Fig. 2. Mind map of the Experimental Robotics Framework
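The interaction chain of responsibility of Fig. 1 can be sketched as follows. This is a minimal illustration only, not ERF's actual API: the `World`, `Element3D`, `deliver`, and `release` names are hypothetical.

```cpp
#include <string>

// Hypothetical sketch of the Fig. 1 interaction chain: a 3D element may
// also be an event handler (EH); clicking such an element makes it the
// active event handler (AEH), which then receives subsequent user input.
struct Element3D {
    std::string name;
    bool isEventHandler;
};

class World {
public:
    // Deliver user input: if an AEH is set, it consumes the event;
    // otherwise a clicked element that is an EH becomes the AEH.
    // Returns the name of whoever handled the event.
    std::string deliver(Element3D* clicked) {
        if (active_) return active_->name;      // AEH consumes the event
        if (clicked && clicked->isEventHandler) {
            active_ = clicked;                  // clicked element becomes AEH
            return active_->name;
        }
        return "world";                         // default world-level handling
    }
    void release() { active_ = nullptr; }       // drop the AEH (e.g. on ESC)
private:
    Element3D* active_ = nullptr;
};
```

The default branch corresponds to the world-level actions in Fig. 1 (moving the OpenGL camera, switching fullscreen, recording a movie).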
[Table: comparison of Human Robot Interface software (columns include Avenue UI) on the following criteria: simulator compatibility (MobileSim/Stage; Gazebo or Stage), support for real robots, support for multiple robots, underlying robot framework (e.g. RWII Mobility), computer vision, mobile robotics, non-robotic applications, configuration syntax autocompletion, dynamic loading of components, 3D model loading, fast prototyping, input modalities apart from keyboard and mouse (e.g. speech), supported operating systems (e.g. all that support Java), and software license (not available; GPL, open source).]
Visualization or HRI information components:
- counts and plots the timings of other components.
- measures distances in the world.
- displays LaTeX formulas on the screen.
- displays information, such as position and tags, about the object under the mouse and about other ERF entities and managers.
- provides a “ground” plane that can be used for ray intersection in order to move robots to the clicked position.
- sets up the OpenGL lighting: placement of lights, their type and material.
- a widget to control the timeshift logger of Player.
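The “ground” plane component relies on a standard ray/plane intersection to turn a mouse pick ray into a world position for the robot goal. A minimal sketch of that computation, assuming a ground plane at z = 0 (the function name is illustrative, not ERF's actual code):

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<double, 3>;

// Intersect a pick ray (origin o, direction d) with the ground plane z = 0.
// Returns the hit point, or nothing when the ray is parallel to the plane
// or points away from it. This is what a "ground" component needs in order
// to convert a mouse click into a goal position for a robot.
std::optional<Vec3> intersectGround(const Vec3& o, const Vec3& d) {
    if (std::fabs(d[2]) < 1e-12) return std::nullopt;  // parallel to plane
    double t = -o[2] / d[2];                           // ray parameter at z = 0
    if (t < 0.0) return std::nullopt;                  // plane is behind viewer
    return Vec3{o[0] + t * d[0], o[1] + t * d[1], 0.0};
}
```

In practice the ray itself would come from unprojecting the mouse coordinates through the current OpenGL view, e.g. with `gluUnProject`.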
Sensor display components:
- displays video from a web camera; can also be used as input for computer vision components.
- Laser Feature 2D: displays identified laser features such as arcs, lines, and people's legs.
- Laser People Tracker: displays information about moving persons.
- Laser Scene Interpretation: displays found landmarks such as walls, building columns, benches, and moving legs.
- 3D reconstruction: 3D reconstruction from a sequence of 2D features.
- displays the ranges of a laser sensor.
- displays the ranges of an infrared sensor.
- displays the ranges of a sonar sensor.
- displays a global map of the world.
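The range displays above all reduce to the same operation: converting polar scan readings into Cartesian points in the sensor frame before drawing them. A minimal sketch, assuming evenly spaced bearings across the scanner's field of view (the function is illustrative, not the actual component code):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Convert a range scan (readings at evenly spaced bearings spanning
// [minAngle, maxAngle], in radians, sensor frame) into 2D points that a
// display component can feed directly to the renderer.
std::vector<std::pair<double, double>>
scanToPoints(const std::vector<double>& ranges, double minAngle, double maxAngle) {
    std::vector<std::pair<double, double>> pts;
    if (ranges.size() < 2) return pts;                       // need a spread
    double step = (maxAngle - minAngle) / (ranges.size() - 1);
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double a = minAngle + i * step;                      // bearing of ray i
        pts.emplace_back(ranges[i] * std::cos(a), ranges[i] * std::sin(a));
    }
    return pts;
}
```

The same conversion serves laser, infrared, and sonar displays; only the angular span and number of readings differ per device.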
Interactive components:
- controls the speed, goal, and display of a Player position device, i.e. a robot.
- Voice Contexts: forwards voice recognition messages to other components.
- Polygonal zone tagging: delimits zones of interest with tags (e.g. forbidden, hall, kitchen, lounge); also permits knowing whether a point lies inside the polygon, the polygon centre, etc.
- Markov localization: displays the result of the Adaptive Markov localization algorithm, i.e. the ellipses that represent the highest likelihood of a robot pose.
- Trajectory planner: allows the user to plan and view the trajectory of a robot.
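The point-inside-polygon query offered by the zone tagging component can be realized with the classic even-odd (ray casting) test; the sketch below is one such implementation, shown for illustration rather than taken from the component itself:

```cpp
#include <utility>
#include <vector>

using Point = std::pair<double, double>;

// Even-odd (ray casting) point-in-polygon test: cast a horizontal ray from
// p and count how many polygon edges it crosses; an odd count means the
// point is inside. Works for arbitrary simple polygons, convex or not.
bool insidePolygon(const std::vector<Point>& poly, const Point& p) {
    bool inside = false;
    std::size_t n = poly.size();
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        const Point& a = poly[i];
        const Point& b = poly[j];
        // Does edge (a, b) straddle the horizontal line through p?
        if ((a.second > p.second) != (b.second > p.second)) {
            // x coordinate where the edge crosses that line
            double x = (b.first - a.first) * (p.second - a.second) /
                       (b.second - a.second) + a.first;
            if (p.first < x) inside = !inside;  // crossing to the right of p
        }
    }
    return inside;
}
```

With this primitive, a tagged zone such as “kitchen” can report whether a robot's current pose lies within it.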