1. Principal Investigator(s): Vijay Kumar, University of Pennsylvania

AI and Robotics

Nov 14, 2013



2. Budget: $65,000 per year for five years

3. Objectives

Our goal is to create the framework, architecture, algorithms, and augmentative hardware and software to facilitate (a) interaction; (b) control and tasking; and (c) programming of computers and computer-controlled devices. We are interested in assistive technology both for mobility and for manipulation. In this project, however, our focus will be on smart wheelchairs that can be easily controlled and programmed by a human user with minimal effort.

4. Background and Significance

While there are over 20 million wheelchair users in the U.S. alone, and the demographics suggest that this number will grow dramatically in the coming years [ref], there are relatively few successful efforts to develop computer-controlled, robotic wheelchairs that represent a significant improvement over conventional motorized chairs [ref]. Our goal is to develop a smart wheelchair equipped with a variety of sensors, controllers, and computing power, including an omnidirectional camera system, range sensors, GPS, a wireless communication system, and embedded computers for interacting with the user, for integration, representation, and display of sensory information, and for augmented motion control.

Imagine a fourteen-year-old girl with cerebral palsy driving a wheelchair equipped with (a) omnidirectional camera systems and range sensors; (b) a GPS sensor; (c) embedded computers for interacting with the driver, for integration, representation, and display of sensory information, and for augmented motion control; and (d) a wireless communication system. See the figure (left) for a preliminary prototype system.

She is able to interact with and control the system by touching one of many virtual displays that are projected over her lap tray, armrest, and other surfaces. She can access information about the environment, including features and landmarks behind her, at several different levels (raw imagery, panoramic images, a two-dimensional overhead map, or a three-dimensional reconstruction of the wheelchair in the current environment). At the simplest level of control, she is able to specify direction and heading velocity while the computer-mediated motion control algorithms drive the vehicle on a smooth, safe path. At a higher level, she is able to navigate by pointing to features, objects, and targets in the environment. At any point, she is able to abort the system by reaching for one of many large, conspicuous panic e-stop buttons that are virtually painted on her wheelchair. In our scenario, she approaches the parking lot of a museum. She points to the handicap ramp and the door on her display, both features that happen to be behind her. The wheelchair is able to drive at a moderate speed to the museum door. When she enters the building, she tasks the chair to function in an indoor environment, allowing her to navigate a two-dimensional environment, with possibly crowded hallways, without continuous control. Features such as vertical lines, the walls, and the ceiling are used to automatically build a three-dimensional model of the environment. She can navigate the hallway by pointing to objects or exhibits in the model or the raw imagery.

5. Methodology and Research Plan

In order to realize the vision above, we propose a software architecture with three levels. As shown in the figure, the user can control the chair or the robot arm at three distinct levels. At the lowermost level, a haptic interface or the vision-based interface discussed earlier can allow direct control. At an intermediate level, the user can control the device using reactive controllers that are automated but can adapt to the environment. In navigation, one can imagine modes such as go to target destination, go to door, go down the hallway, etc., while composing them with other modes such as obstacle avoidance. At a higher level, we can build more controllers as necessary. We propose the use of reinforcement learning to tune individual controllers as well as the transitions between these controllers, for example, the transition between hallway navigation and obstacle avoidance.
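As an illustration of how such reactive modes might be composed, the sketch below sums the velocity commands of a hypothetical "go to target" mode and an obstacle-avoidance mode, a potential-field-style composition. All function names, gains, and the influence radius here are illustrative assumptions, not design decisions of this proposal:

```python
import numpy as np

def goto_mode(pos, goal, k_att=1.0):
    # "Go to target destination" mode: velocity command toward the goal.
    return k_att * (goal - pos)

def avoid_mode(pos, obstacles, k_rep=0.5, influence=1.0):
    # Obstacle-avoidance mode: repulsive velocity away from obstacles
    # that lie within the influence radius.
    v = np.zeros(2)
    for obs in obstacles:
        d = pos - obs
        dist = np.linalg.norm(d)
        if 1e-9 < dist < influence:
            v += k_rep * (1.0 / dist - 1.0 / influence) * d / dist
    return v

def composed_command(pos, goal, obstacles):
    # Compose the two modes by summing their velocity commands.
    return goto_mode(pos, goal) + avoid_mode(pos, obstacles)
```

Integrating `composed_command` steers the chair around nearby obstacles while still converging on the goal, provided the gains are well chosen; tuning such gains and the transitions between modes is precisely where the reinforcement learning proposed above would apply.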

The human-in-the-loop motion planning and control is distinct from traditional motion planning and control [ref], where there have generally been two extremes in approach. Either the robot is fully autonomous, and it is desired to plan a path from point A to point B through a known environment, or the robot is teleoperated by a remote user who executes most or all of the low-level commands. In addition, the motion planner must work robustly at several distinctly different levels of planning, which we refer to as a multi-level motion planning problem. In this problem, the motion control algorithms must be capable of taking inputs, working with geometries, and performing tasks that are described at various levels of detail and abstraction.
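A minimal sketch of how commands at the three levels of abstraction might be routed to the matching planner layer; the command types and handler labels are hypothetical names invented here for illustration:

```python
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    # Lowest level: direct control of heading and speed.
    heading: float
    speed: float

@dataclass
class ModeCommand:
    # Intermediate level: select a reactive controller mode.
    mode: str  # e.g. "go_to_door", "follow_hallway"

@dataclass
class GoalCommand:
    # Highest level: point to a feature or target in the environment.
    target: tuple

def dispatch(cmd):
    # Route each command to the planning layer that matches its level
    # of detail and abstraction.
    if isinstance(cmd, VelocityCommand):
        return ("servo", cmd.heading, cmd.speed)
    if isinstance(cmd, ModeCommand):
        return ("reactive", cmd.mode)
    if isinstance(cmd, GoalCommand):
        return ("plan_path", cmd.target)
    raise TypeError("unknown command level")
```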

Figure: The main components of the proposed wheelchair (left) and the architecture for the proposed interfaces (right): human user, input device and interaction, augmentative hardware (wheelchair), and output device. Shaded boxes show the software and hardware components of interest; solid arrows show information flow pathways of interest. The user can control the chair or the robot at three distinct levels (left). The computer adapts the characteristics of the embedded software to the user at all three levels. The sensory information available to the wheelchair via its sensors, the state of the machine, and the commands available to the user are projected on the lap tray using the projector.


6. Timeline

Tasks over the five-year project (Yr 1 through Yr 5):

- Choice of sensors, controllers, related hardware
- Software architecture
- Representation of sensory information
- Algorithms: control modes
- Development of human-in-the-loop planner
- Intelligent, adaptive control (reinforcement learning)
- Usability studies with non-disabled human users
- Evaluation and assessment: users with disabilities
- Development for technology transfer

7. Outcome

This project will result in the next generation of wheelchairs with sensors and software designed for human users in the loop, allowing them to interact with computers, sensors, computer-controlled devices, and, more generally, mobility systems with embedded software.

1. Principal Investigator(s):

Vijay Kumar, University of Pennsylvania, and Venkat Krovi, SUNY

2. Budget: $65,000 (University of Pennsylvania) and ??? (SUNY) per year for five years

3. Objectives

Our goal is to create the design and manufacturing support tools and their integration for the design and manufacture of assistive devices for people with motoric disabilities. We are interested in assistive technology both for mobility and for manipulation. In this project, however, our focus will be on the design of customized interfaces to manipulatory aids and other assistive devices.

4. Background and Significance

Consider the head-controlled feeding aid for an individual with quadriplegia shown in the figure [KF+97, Kr98]. It is currently difficult to design robotic aids for tasks such as feeding for two reasons. First, robotic aids are complex, expensive, and difficult to use. Second, it is difficult to couple head movements to the output of electromechanical manipulators. We advocate an approach that involves the design of customized head-mounted interfaces that couple physically to the electromechanical arm. In the example shown in the figure, a lightweight, passive manipulator built from composite tubing was designed by customizing off-the-shelf parts to the specific user. The customization involved obtaining geometric, kinematic, and dynamic measurements of the user.

Figure: Prototype of a head-controlled feeding mechanism for an individual with quadriplegia. A head-mounted telethesis (left) allows users with limited use of their extremities to use the system. A preliminary prototype (right) of a simple head-worn input device for an artist with cerebral palsy is illustrative of our general approach.

Another example of a head-mounted telethesis, for painting, is shown in the figure. Once again, the customization of the telethesis to the specific individual and for the task is critical to the performance of the device. Such devices can be used to interface with computers or with machines. They are easier to use than complex speech or multimodal computer interfaces. Even when such input modalities are available, teletheses allow a physical, touch-based modality of interaction that is a natural extension of the currently available icon- and mouse-based interfaces.

Our goal is to develop customized head-controlled interfaces for manipulation and for mobility that will allow users to control a variety of devices, ranging from passive assistive devices like feeders to complex electromechanical systems like robot arms. Because such devices provide extended physiological proprioception, they allow for efficiency and superior performance in discrete interactions such as reaching, selecting, and clicking on icons, as well as in the continuous interactions that are required for continuous control of wheelchair-like devices.

5. Methodology and Research Plan

Our research plan can be divided into three areas: automated measurement of the geometry and kinematics of a human user; synthetic models for human users and virtual prototyping; and rapid prototyping of assistive devices for people with disabilities.

Our first task is to develop a suite of algorithms that allows the imaging of human body parts with a laser range scanner from different angles, the automatic registration of these images, and the fusion of the information and its translation into a geometric model. See the preliminary work by [Pi 96a]. These algorithms will be benchmarked in terms of robustness with respect to positioning and orienting of the body part and in terms of accuracy, and will be superior to other tools found in the CAD community [PB 95, PB 96, Pi 96a, Pi 96b].
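One plausible form for the registration step is iterative closest point (ICP) with a least-squares rigid transform between matched points. The sketch below is a minimal illustration of that idea under simplifying assumptions (full overlap, brute-force nearest neighbors), not the benchmarked algorithms themselves:

```python
import numpy as np

def best_rigid_transform(A, B):
    # Least-squares rigid transform (R, t) mapping point set A onto B,
    # via the SVD of the cross-covariance (Kabsch/Procrustes method).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=30):
    # Iterative closest point: alternate nearest-neighbor matching and
    # rigid-transform estimation to align scan `src` to scan `dst`.
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbors (adequate for small scans)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```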


A system of calibrated cameras will be used to capture image sequences from different viewpoints, and these two-dimensional images will be fused to track the three-dimensional motion of body parts (see the work by [Ka 96, KM 96, KM+ 97, KM 98] as an example).
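A standard building block for this fusion is linear (DLT) triangulation of a 3-D point from two calibrated views. The sketch below is illustrative only; the toy projection matrices in the usage example stand in for a calibrated rig:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: recover a 3-D point from its image
    # coordinates x1, x2 in two views with 3x4 projection matrices P1, P2.
    # Each image measurement contributes two rows of a homogeneous system.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector = homogeneous 3-D point
    return X[:3] / X[3]           # dehomogenize

def project(P, X):
    # Pinhole projection of 3-D point X by a 3x4 camera matrix P.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```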
Finally, we will develop support tools for rapidly designing and prototyping a class of one-of-a-kind assistive devices customized to the human user [KB+ 95, KB+ 96, KB+ 97]. This class consists of passive, articulated mechanical manipulation aids that are physically coupled to the user [Kr 98].

The design tools will include: (a) the synthesis, design, and optimization of such mechanisms and their analysis in a virtual environment containing both a model of the product and a customized model of the human user [Kr 98, VK+96, VK+97]; (b) the simulation of mechanical systems undergoing frictional contacts with the environment [KK 97, KF+97, KK+98]; and (c) the computer-aided geometric design of motions and trajectories in three-dimensional space [ZK95, ZK96, ZK98, ZK+98]. This includes the theory and methodology for designing three-degree-of-freedom mechanisms consisting of revolute joints and cable-pulley transmissions [KKA+97, KA99].
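As a concrete fragment of such design tools, the sketch below computes the forward kinematics of a planar three-revolute (3R) arm and a reachability check that a synthesis tool could run against targets measured on the user. The planar simplification and both function names are assumptions made here for illustration:

```python
import numpy as np

def fk_planar_3r(lengths, angles):
    # Forward kinematics of a planar 3R arm: accumulate joint angles
    # and sum the link vectors to get the end-effector position (x, y).
    x = y = phi = 0.0
    for L, q in zip(lengths, angles):
        phi += q
        x += L * np.cos(phi)
        y += L * np.sin(phi)
    return np.array([x, y])

def reaches(lengths, target, tol=1e-9):
    # Reachability test: the target distance must lie between the
    # minimum and maximum reach of the three-link chain.
    r = np.linalg.norm(target)
    longest = max(lengths)
    rest = sum(lengths) - longest
    return max(0.0, longest - rest) - tol <= r <= sum(lengths) + tol
```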

Another important aspect of such human-worn devices is the need to match the device to the individual's physical capabilities. We propose the use of small motors at each joint that will be controlled by software to augment the user's capabilities. See the figure.

Figure: An actively controlled head-worn telethesis in which the user's input forces are filtered and amplified (by a factor of 10 on the left and 2 on the right) in tasks with friction (left) and without friction (right) [Du 98].
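A minimal sketch of the filter-and-amplify idea: the measured user force is low-pass filtered (to reject tremor and sensor noise) and scaled by a gain before being sent to the joint motor. The first-order filter and the gain values are illustrative assumptions; the actual control law is a design question for the project:

```python
def power_assist(force_samples, gain=2.0, alpha=0.2):
    # First-order low-pass filter (exponential smoothing) on the user's
    # measured force, followed by a fixed amplification gain.
    commands, y = [], 0.0
    for f in force_samples:
        y = (1.0 - alpha) * y + alpha * f
        commands.append(gain * y)
    return commands
```

With a steady user input, the assist command ramps up smoothly to the amplified level instead of jumping, which is the point of filtering before amplifying.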

6. Timeline

Tasks over the five-year project (Yr 1 through Yr 5):

- Choice of design, fabrication processes, input from users
- Automated system for data acquisition from remote users
- Synthetic models for virtual prototyping and evaluation
- Software for rapid design and prototyping
- Hardware and software for a smart, power-assist system
- Case study with a test product
- Usability studies with non-disabled human users
- Evaluation and assessment: users with disabilities
- Development for technology transfer

7. Outcome

This project will result in the next generation of assistive devices that allow users to employ head and neck movements to interact with computers, machines, manipulation aids, and mobility systems like wheelchairs with embedded software.