High-Speed Autonomous Vehicle with Adaptive Motion Control

High-Speed Autonomous Vehicle with Adaptive Motion Control (HAVAC)
Matthew Lanahan .................. Dr. Robert Meyer

William Chiaravalle .................. Dr. Edward Sazonov

Zachary Schilling .................. Dr. Jack Koplowitz





1 Problem Delimitation

1.1 General Problem

Autonomous navigation, the ability of a robot to travel from one place to another without human assistance, is a multifaceted problem that has intrigued scientists and engineers for decades. Perhaps one of the most interesting aspects of autonomy is that there is no single problem definition, and all logical assertions about one variation are changed entirely by a mere shift of context. That is to say, the design assumptions and parameters for an airborne robot are vastly different from those for a land-based robot, and even within one of these categories, different scales of vehicles dictate different design approaches and levels of precision.


For this project, we will design a ground-based, wheel-driven robot of approximately 1/10 life-scale size that is capable of autonomously navigating between buildings on the Clarkson University campus via existing pathways. In order to accomplish this task, we will use the pre-built chassis of a Traxxas E-Maxx radio-controlled truck, upon which all necessary electronic equipment will be mounted. The vehicle will wirelessly receive commands from the end user, and make use of sensor processing, image processing, path planning, and motion control algorithms in order to fulfill those commands.

This project differs from previous research projects in three main ways.




The successful fusion of an Inertial Navigation System (INS) with data from the Global Positioning System will produce a relatively low-cost, repeatable, and accurate platform from which future research can be conducted [2]. Providing the academic field as well as the public with a functioning INS paves the way for more sophisticated research into the role of INS systems in autonomous navigation. William Chiaravalle is responsible for this portion of the research.




By using live images and visual processing to improve navigation within the margin of error of position data, HAVAC will be able to move at higher speeds with cheaper hardware, greater accuracy, and less map data than an autonomous vehicle using either visual processing or position data alone. Zachary Schilling is responsible for this portion of the research.




Motion control algorithms have not, to date, taken advantage of four-wheel steering (4WS) designs. Our research in this area will produce a simple and efficient method of improving a vehicle's turning ability by using all four wheels. Improvements in this regard will be demonstrated in comparison with the same vehicle steering with only the front wheels. Matthew Lanahan is responsible for this portion of the research.


In the following sections of this paper, we will discuss in detail the division of research for our project into sensor processing, image processing and path planning, and motion control. We will also describe the general operation of the vehicle and propose a system architecture diagram. Lastly, we will outline the low-level requirements for the project. Upon fulfillment of the aforementioned details, our variation on the popular problem of robotic autonomy and our approach to solving that variation will be clearly defined.

1.2 Sensor Processing

An inertial navigation system (INS) provides the position, velocity, angular velocity, and orientation of a vehicle by measuring the linear and angular accelerations of a system. It is widely used because it does not depend on any external references. This independence comes at the price of not knowing absolutely where the system is, or the error accumulated in the inertial frame of reference over time. All inertial navigation systems suffer from a phenomenon called integration drift [7]. Drift is the result of small errors in the measurement of acceleration and angular velocity that are twice integrated into increasing errors in position. One solution is to introduce a source of absolute data, most frequently a Global Positioning System (GPS). By combining the information obtained from the INS and the GPS, the actual error can be stabilized and compensated for [9].


The problem yet to be adequately solved is how to implement a low-cost, efficient, and accurate solution to the integration of the two sets of data. The GPS has a 1 Hz refresh rate and will be the source of absolute data, functioning as an error correction/control input. The INS will be the source of position, velocity, and heading at all other times. Integration of the two into continuous, accurate, and usable data to be sent to a motion control system or path planning algorithm will be the focus of our research [1].


Currently there are still vehicles with forms of autopilot (specifically the Boeing 747) that are guided by complicated mechanical systems such as the gimbaled spinning-wheel gyros of the 1970s. Although these systems have been replaced by expensive digital ones in military aircraft in the United States, much of Europe continues to rely heavily on the old mechanical systems for both civilian and military aircraft [3]. Our research will focus on the simpler control of a vehicle on two axes rather than three, the application of which is more immediate and relevant to the general public.


The market for GPS navigation systems in cars has recently exploded. The accuracy and availability of GPS receivers have increased, and people are aware of their power to conduct navigation for them. These car units, however, are notoriously ineffective in locations where GPS signals can be blocked, causing refresh times from the GPS system to increase. The integration of the current GPS system with a reliable INS would greatly benefit those trying to navigate in a city, for example, where skyscrapers can block the GPS satellites. Another application would be increasing the effectiveness of military robots in theatres of war.


With an accurate, low-cost INS, scientists and engineers can focus more on furthering the field of autonomous navigation with the aid of more accurate, continuous streams of information. It also opens this field of research to the hobbyist who previously could not afford the money to purchase expensive systems or the time to research data integration for motion control.

1.3 Image Processing and Path Planning

When navigating in a large, outdoor environment with variable terrain and limited map data, autonomous robotic vehicles can have a great deal of trouble when operating using position data alone. Unless the vehicles are using a custom (and very expensive) radio-triangulation positioning system (such as the MDS/VeeLo systems by Sensis [10]), they must rely on on-vehicle sensors for speed and position (which become inaccurate quickly due to sensor error) and GPS positioning data, which can only refresh at most once every second and, with the best error correction, is still only accurate to ±1 meter. Compounding these problems, the areas covered by the robot outdoors are generally larger and cannot be prepared with markers to lead the robot around.


HAVAC will overcome these problems through the use of a path network map of its operational area and visual processing. A path network map is simply a collection of positions, generally path junctions, which are tagged to reflect the compass directions a vehicle must face in order to start on the path towards another junction. This minimal data should be easy to collect for a subset of the walking paths at Clarkson.
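One possible in-memory form for such a map, sketched in C; the field layout and the four-link limit are our own assumptions, not a finalized format.

```c
#define MAX_LINKS 4

/* A junction in the path network map: a tagged position plus, for each
 * reachable neighbour, the compass heading the vehicle must face to
 * start down that path. */
typedef struct {
    double lat, lon;              /* junction position from GPS      */
    int    neighbor[MAX_LINKS];   /* indices of connected junctions  */
    double heading[MAX_LINKS];    /* compass degrees to face first   */
    int    n_links;
} Junction;

/* Heading that starts the vehicle from `from` toward junction `to`,
 * or -1.0 if the two are not directly connected. */
double depart_heading(const Junction *from, int to) {
    for (int i = 0; i < from->n_links; i++)
        if (from->neighbor[i] == to)
            return from->heading[i];
    return -1.0;
}
```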


Visual processing is the act of extracting useful data from an image (one taken from a camera on a moving robot, for example) by performing a number of computational filters and transforms such that a computer can meaningfully interpret the contents of the image.


In the case of HAVAC, a camera mounted on the vehicle will take images of the ground in front of it and send them to an onboard computer. Because the vehicle will start on a path and the goal is to remain on a path for the entire duration of transit, this image should always contain, at some angle, a strip of asphalt or concrete. By extracting the shape of this strip using visual processing, the onboard computer will be able to determine the vehicle's alignment with the path and correct any deviation that could cause it to veer off course. In addition to simply staying on a path, when the vehicle determines by its position data that it should be approaching a turn soon, it slows down and uses visual processing to look for a turn off of the path. This could not be done using position data alone because, even if the exact location of the turn were known beforehand, the margin of error on positioning data is too great to accurately turn onto a new path. This operation differs greatly from past attempts to fuse visual processing with inertial navigation, which would use inertial data to predict and check changes in image frames [8].


Inexpensively solving some of the problems with autonomous vehicle navigation would be a huge benefit to both military and civilian applications. For the military, supply trucks could travel from one place to another autonomously, without drivers that could be killed or the radio chatter of remote-control vehicles that could be picked up by the enemy and used to target the transport. In civilian life, transportation that can safely drive itself has long been a goal of car manufacturers, but positioning data is too inaccurate and visual processing alone tends to get lost very easily. Creating a system which can accurately blend the two different types of navigation data would be a large step forward in vehicular autonomy.

1.4 Motion Control

One of the biggest challenges associated with robotics and autonomous navigation is motion control. What may seem like a simple or even trivial matter of executing movement instructions from the robot's path-planning software is a decently complicated task and has been the centerpiece of many research projects. This is because motion control is a generic categorization of a broad variety of dynamic problems, each of which has a very specific area of application. Controlling a robotic welding arm on an assembly line, for example, is vastly different than controlling the translation of a four-wheeled vehicle, most notably due to the differences in precision, motion redundancy, and kinematic assumptions between the two systems. With this said, it is necessary to define the types of vehicular motion control, with regard to both steering classification and algorithm architecture, that will be researched and implemented in this project.


There are two main categories of steering systems: differential and actuated. Differential steering, also known as skid-steering or tank-steering, is only possible in vehicles that drive the left and right wheels separately. For such a vehicle, turning is accomplished by powering the left and right sides at different rates. The advantage of differential steering is the ability to rotate in place without the need for translational motion. Actuated steering is used for vehicles in which the powered wheels rotate in synchrony. This type of turning works by changing the angle between the wheels and their axles so that the wheels point tangent to the arc of the desired turn. The main disadvantage of actuated steering is that, unlike differential steering, it only works when the vehicle is moving. For robotic autonomy applications, it is often desirable to turn the vehicle in place, since this simplifies the navigation processes. As a direct result, the vast majority of motion control algorithms for robotic vehicles use differential steering.


While it was unfortunate that we could not find a pre-built vehicle with differential steering and of the proper scale for our project, this setback was not entirely negative. Electronic speed controllers are expensive, and differential steering would have required two such devices. Furthermore, using a vehicle with actuated steering greatly expanded the possibilities for motion control research. Traditional actuated steering deals solely with the front wheels of the vehicle, but it is possible to steer with the rear wheels as well. By using a four-wheel actuated design in which the front and rear assemblies operate independently, a nearly infinite range of turn radii can be achieved. It is our goal to break new ground by developing a motion control algorithm that makes effective use of four-wheel steering (4WS) so as to minimize the disadvantages of such a system with respect to a differential steering design. In doing so, we will enable our robot to precisely and efficiently execute navigational commands from our path planning software.


When it comes to algorithm development, there are three major classifications of control architecture: model-based, sensor-based, and hybrid [12]. A model-based approach would create a complex but static ‘map’ of the universe of discourse, whereas a sensor-based approach would very generally define system responses to environmental data that will be collected during runtime. For this project, we have opted to develop a hybrid control algorithm. Our robot will have a predefined understanding of its surroundings in the form of a path network map and will use this map to plan out navigation routes (as described above). At the same time, the robot will be collecting data about its own movement and will dynamically compensate for movement errors in order to stay on course. This hybrid approach will be accomplished through a fuzzy-logic implementation and will enable our robot to both plan at a high level and react at a low level. This fuzzy-logic design will be expandable to include obstacle avoidance if permissible by our allotted time and funding.
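A minimal sketch of what the reactive half of such a fuzzy-logic controller could look like; the membership boundaries (±30°) and rule outputs (±25° steering) are placeholder values, not tuned parameters.

```c
/* Triangular membership function over [a, c] peaking at b. */
static double tri(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
}

/* Map heading error (desired - actual, in degrees) to a steering
 * command via three fuzzy sets and weighted-average defuzzification. */
double fuzzy_steer(double err_deg) {
    if (err_deg < -59.0) err_deg = -59.0;   /* stay inside the sets */
    if (err_deg >  59.0) err_deg =  59.0;
    double m_left  = tri(err_deg, -60.0, -30.0,  0.0);
    double m_zero  = tri(err_deg, -30.0,   0.0, 30.0);
    double m_right = tri(err_deg,   0.0,  30.0, 60.0);
    double num = m_left * -25.0 + m_zero * 0.0 + m_right * 25.0;
    double den = m_left + m_zero + m_right;
    return num / den;    /* steering angle, degrees */
}
```

The planning half supplies the desired heading from the path network map; this rule base only handles the low-level reaction, and further rules (speed, obstacle avoidance) would be added in the same pattern.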


Beyond the limited scope of this project, our motion control research will have future applications in the automotive industry. All vehicles in the consumer market use actuated steering, and some vehicles, such as the GMC Sierra Denali, have already employed 4WS [22]. As the world population continues to grow, the corresponding increases in road congestion and social concern over transportation safety will generate a need for autonomous cars. Motion control algorithms such as ours will be needed in order to reliably navigate cars along roadways. Although low-speed maneuvers such as U-turns and parallel parking are where the benefits of a 4WS design will prove invaluable over commonplace 2WS technology, our control research will undoubtedly serve as a starting point for revolutionary changes to road-based transportation as a whole.


2 Literature Background

2.1 Our Project vs. DARPA

The Defense Advanced Research Projects Agency (DARPA) has sponsored two distinct challenges over the last few years. The first was dubbed the DARPA Grand Challenge and was a race from Los Angeles, CA to Las Vegas, NV by completely autonomous full-size cars and trucks. The U.S. Congress authorized DARPA to offer prize money ($1 million) to the first Grand Challenge's winner in order to facilitate robotic development, with the ultimate goal of making one-third of America's ground military forces autonomous by 2015. In 2004 no vehicles finished the race, and in 2005 the race was conducted a second time, with four vehicles out of about 43 successfully completing the course. The general operating rules were that the vehicle must navigate only by GPS and the information received from its sensors. In addition, it was allowed to stop and "stare" for at most 10 seconds at a time. Many teams used sonar, radar, and odometer information to construct "tactical grade" Inertial Measurement Units (IMUs). These were extremely expensive to create, and funding for teams often exceeded a million dollars. Recently a 2007 race was announced and dubbed the Urban Challenge. Where the first race's obstacles were located in the desert and primarily included drop-offs, switchbacks, and rocks, this race will tackle autonomous navigation in an urban environment, which requires far better autonomous AI than was shown in the desert. Several teams have already begun investigating advanced stereo vision as a possible avenue of immediate navigation. The GPS this time will need to be a secondary or perhaps even a tertiary system, as the urban sprawl often blocks out the signals from GPS satellites.


The second project sponsored by DARPA is the Learning Applied to Ground Robots (LAGR) program. It aims to develop algorithms for autonomous vehicle navigation that learn how to operate in complex terrain. This vision statement is from DARPA's website.


“Current systems for autonomous ground robot navigation typically rely on hand-crafted, hand-tuned algorithms for the tasks of obstacle detection and avoidance. While current systems may work well in open terrain or on roads with no traffic, performance falls short in obstacle-rich environments. In LAGR, algorithms will be created that learn how to navigate based on their own experience and by mimicking human teleoperation. It is expected that systems developed in LAGR will provide a performance breakthrough in navigation through complex terrain.”


This project focuses on increasing the handling of obstacles by smaller, more maneuverable robots than the Challenge programs. Whereas the larger vehicles from the Challenge programs are more reactive in their autonomy and can slow to a crawl as more objects are placed in their way, this program attempts to address the problem of AI learning how to avoid obstacles in the first place [5].


While the roots of our project stem from the ideas put forth by DARPA programs such as these, we combine these two programs' approaches with a significantly smaller budget. We take the concept of autonomous navigation and scale it down to a more affordable problem. We then combine this with the work being done on obstacle avoidance and high maneuverability to create our final tier of the project. Proving that a 1/10th-scale autonomous vehicle can operate at very high speeds and successfully, safely navigate paths with moving and stationary objects opens the door to future research on high-speed autonomy. Current autonomous vehicles operate at an average speed of 10 mph and rarely go over 30 mph. If the military intends to make 1/3 of its ground force autonomous, more research into high-speed operation is required. This is what distances our project from those currently sponsored by DARPA.


2.2 Sensor Processing

In the last few years the area of INS/GPS fusion has exploded. The applications of cheap yet accurate systems have ballooned to encompass consumer cars, remote-controlled vehicles, military munitions, autonomous vehicles, and many other previously unreasonably costly markets. Not surprisingly, there are a large number of academic papers and scholarly writings pertaining to decreasing the cost of INSs (cheaper and therefore less accurate sensors), increasing the accuracy of INSs (via internal algorithms and sensors as well as combinations with GPS information), and discussing INS limitations. For the purpose of comparing the work currently being done on INSs, I will define a low-cost system as one totaling less than $4,000 and a high-price system as one costing more than $10,000 (a midrange system falls in between the two). Military and aerospace systems will largely be ignored, as they cost in excess of $100,000. Accuracy of an INS is generally defined in terms of drift error, which Shaikh categorizes as Consumer: >200 deg/hr; Automotive: 10-200 deg/hr; Tactical: 0.1-10 deg/hr; and Navigation: <0.01 deg/hr. Our goal is to obtain at the very least automotive (hopefully tactical) accuracy at the cost of a consumer system [1].


In 2002 Walchko [2] developed a midrange-priced INS in which he strived to correct the problem of error drift by applying various filters to the noisy INS data and combining it with an expensive Garmin GPS solution. The two systems were connected serially to a laptop and driven around in a car. The results showed that a successful integration of the two systems would be capable of accurately showing changes in motion on the scale of changing lanes on a highway.

Also in 2002, Boeing revealed development of GPS/INS changeover kits for outdated gravity munitions. The kit converted the bombs into smart munitions capable of striking within 13 meters of their intended targets. These systems are extremely expensive.

In 2004 Shaikh [1] extended Walchko's research with his team from the University of Putra, which applied a filtering solution to an INS/GPS system in an attempt to address the problem of better correlating position from the navigation system to that of a known map. This system had a major vulnerability: it was so low-cost that the accuracy penalty caused the system to only support navigation for a few seconds if the GPS signal went out.


Our work differs from the above in two ways. The most significant is the scaled speed at which the system will travel. The end goal is between 5 and 10 mph, which scales to 50 to 100 mph if the car were full-size. This makes the problem of accurate data and refresh rates rise immediately in both complexity and importance. The second is that our system will be relatively cheap to duplicate and will contain parts obtainable not only by people in academia but by the general public as well.

Another difference, less related to the INS/GPS system and more to the system as a whole, is that our system will be capable of continuing for up to seven or ten seconds without a GPS signal. Depending on the distance to the next juncture that the system is looking for, it will continue on its own without GPS information until it determines that absolute data is needed to continue further [].

2.3 Image Processing and Path Planning

In order for HAVAC to navigate Clarkson's campus using only imprecise position information, a list of path junctions, and a video camera, the vehicle must be able to plan its routes dynamically from any position on campus and use its camera to stay on paths at all times. To do this, we must programmatically acquire the image off the camera, process it to exaggerate its useful data and eliminate its noise, then run an algorithm that can extract some meaningful values from the picture that we then use to control the vehicle. To accomplish these tasks we will be using a serial camera, ImageMagick, and a custom-made Hough-like transform.


Because static path planning quickly becomes unmanageable in cases where certain path junctions become inaccessible or the location suddenly changes, we will need to use a dynamic, real-time path-planning algorithm that will be able to cope with these regular occurrences that will face our vehicle [24].
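To illustrate the kind of dynamic re-planning meant here, a breadth-first search over the junction graph can simply be rerun whenever a junction is reported inaccessible; the six-node graph size and the fewest-junctions metric are illustrative simplifications.

```c
#define N 6   /* illustrative number of junctions */

/* Fewest-junction route length from `start` to `goal` over adjacency
 * matrix `adj`, skipping junctions flagged in `blocked`.  Rerunning
 * this search after marking a junction blocked is the dynamic
 * re-planning step.  Returns -1 when no route exists. */
int plan_hops(int adj[N][N], int blocked[N], int start, int goal) {
    int dist[N], queue[N], head = 0, tail = 0;
    for (int i = 0; i < N; i++) dist[i] = -1;
    if (blocked[start] || blocked[goal]) return -1;
    dist[start] = 0;
    queue[tail++] = start;
    while (head < tail) {
        int u = queue[head++];
        if (u == goal) return dist[u];
        for (int v = 0; v < N; v++)
            if (adj[u][v] && !blocked[v] && dist[v] < 0) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
    }
    return -1;
}
```

A production version would weight edges by path length rather than hop count, but the re-plan-on-change structure is the same.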


ImageMagick is a set of free and open-source libraries for Linux that facilitate the reading in and manipulation of images programmatically in C code (among other programming languages). ImageMagick will read in the image frames captured by our onboard camera's driver, adjust brightness and contrast, run an edge detection algorithm, and downsample the image to black and white. It can even export the image in a programming-friendly format so that the algorithms do not need to deal with any image file formatting issues [25].


The Hough transform is an image transformation technique aimed at mathematically defining lines or shapes contained in an edge-detected image. The Hough transform accepts a bitmap as input and outputs data containing all possible lines in the image, weighted by how well defined they were in the actual bitmap. By selectively searching only part of the image and taking only two lines, we could determine the angle of the road upon which the robot is traveling. Using that angle and the current compass direction, the robot's path planning algorithms can send commands to the motion control systems to stay on the path [11].

2.4 Motion Control

The main concept related to motion control that will be thoroughly investigated in our literature search is the fuzzy-logic-based approach to developing vehicle control algorithms. This concept is one that has been explored in considerable detail due to the usefulness of fuzzy algorithms in scenarios where the robot's environment changes rapidly or unpredictably [14]. Due to the preexisting wealth of information relating to fuzzy-logic control systems (see [12] through [18] for brief exemplification), it is not our goal to produce anything new in this field. Rather, it is our goal to develop a simple but effective algorithm that implements existing fuzzy methods in order to control the movement of the vehicle. The background needed to accomplish this task will be gained primarily through examination of the aforementioned research papers.


Our research in this regard will deal with the implementation of a 4WS design. Like other details of the algorithm, such as interpretation of path-planning movement commands and compensation for wheel slip, 4WS is an implicit condition of the fuzzy approach and will be worked out through our engineering creativity. Our research will be novel in that it will be the first implementation of 4WS in a 1/10-scale autonomous vehicle. The specifics of this research will be outlined in Section 3.4.


Besides fuzzy logic, there are several less significant skills that we will need to learn in order to complete this project. First, we will need to become comfortable with serial communication, since we will need to create data busses between our different microcontrollers. This matter is of little concern, since the RS232 (DB9) serial specification is widely available online and is practically common knowledge among electrical and computer engineers. We also will need to learn how to properly send PWM signals from the microcontroller to the servos and electronic speed controller. This will be easily accomplished by simply testing components in lab with an oscilloscope. Lastly, we will need to find a way to implement an emergency failsafe device that can be used to override the control algorithm and shut the vehicle down.
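For the PWM portion, the usual hobby-servo convention (a 1.0-2.0 ms pulse repeated every 20 ms, with 1.5 ms as neutral) suggests a helper like the following; these endpoint constants are an assumption to be confirmed on the oscilloscope as described above.

```c
/* Map a normalized command in [-1, +1] to a servo/ESC pulse width in
 * microseconds, using the common 1000-2000 us hobby convention with
 * 1500 us as neutral.  Out-of-range commands are clamped. */
int pwm_pulse_us(double command) {
    if (command < -1.0) command = -1.0;
    if (command >  1.0) command =  1.0;
    return 1500 + (int)(command * 500.0);
}
```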


3 Proposed Research and Timeline

3.1 Sensor Processing

This section addresses the approach and methodologies we shall use in the Sensor Processing portion of the HAVAC system, as well as the specific components to be used. (As currently written, general descriptions of parts and ideas for algorithms are given. As research is conducted and parts are finalized, these generalities will be replaced with specific documentation, examples, and detailed discussion of how integration was accomplished.)


INS on the HAVAC will be primarily a two-dimensional problem, as there will be no drastic changes in the Z axis during anticipated operation. Although we shall be approaching this problem in the XY plane, we will still need a 3-axis accelerometer. The third axis (Z) will be used as a reference for gravity so that, while on inclines, we can remove the effect of gravity on the accelerometers. This will result in more accurate readings of the linear motion of the vehicle. The integration of the linear accelerations will yield velocities used to approximate X and Y positional data. In addition, we will be using a digital gyroscope as well as a digital compass. The combination of these two will yield an accurate heading in the absence of GPS information. Several algorithms will have to be created to sample the sensors, weigh the importance of each signal, and combine them to form navigational data. The sampling will be done on a microcontroller directly interfaced with the sensors. Weighting will be predetermined after investigation in the lab as to the accuracy and reliability of individual sensor measurements. The weighting will then be incorporated as the raw measurements are integrated into reliable navigation information.


The interfacing of the INS with the rest of the HAVAC system is crucial. Communications need to be extremely fast as well as accurate. The interfaces and relationships of components, as they stand currently for the entire HAVAC system, are shown in Section 4.2, System Block Diagram.

3.2 Image Processing and Path Planning

Initially, once our hardware is finalized and tentatively assembled, we will need to create a series of test scenarios in which we manually drive the truck under well-known conditions while collecting data from sensors and the camera. At the same time, we will use the vehicle's on-board GPS unit and digital compass to create our path network map data. Using these sets of data, we will then be able to develop formulas and control algorithms for keeping the vehicle straight on a path or determining a turn. From a path planning perspective, the control equations should be fairly simple. An example of a simple control formula would be translating the skew of the road into a number of degrees to turn to correct the vehicle's path. All of the difficult dynamical system calculations will take place in a separate microcontroller, distinct from the visual processing system. However, before any of that can happen, the specific visual processing algorithm will need to be finalized. Based on preliminary research, we will be using a custom implementation of the Hough transform [11] for line detection in camera data. Once all of this software is functional, we will begin to test the vehicle and alter our algorithms to perform more reliably in real-world situations. This will likely be the most lengthy and time-consuming part of our research.
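Such a simple control formula could be as small as a clamped proportional rule; the gain and steering limit here are placeholders to be tuned during the test drives.

```c
/* Translate the measured skew of the path (degrees off image-vertical)
 * into a steering correction, clamped to the servo's travel.  KP and
 * LIMIT are hypothetical values pending tuning. */
double steer_from_skew(double skew_deg) {
    const double KP = 0.8, LIMIT = 30.0;
    double cmd = KP * skew_deg;
    if (cmd >  LIMIT) cmd =  LIMIT;
    if (cmd < -LIMIT) cmd = -LIMIT;
    return cmd;
}
```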

3.4 Motion Control

This portion of our research will involve the development of a fuzzy-logic algorithm to propel and steer the vehicle based upon movement commands from the path planning software. This algorithm will be designed in a modular fashion to account for different steering geometries and gear ratios, which will allow the system to be scaled for functionality on vehicles other than our own.


The algorithm will receive movement instructions through a serial connection in the form of a desired heading, left/right turn direction, and a go/stop command. Geometric calculations will be performed to find the most efficient method of satisfying the movement instructions. The vehicle's motors will be controlled by a PWM signal sent to an electronic speed controller with braking capability. The steering servos will also be controlled by PWM outputs. The motion control algorithm will continuously receive movement data over a serial connection, such as actual compass heading, speed, and acceleration. This data will create a feedback loop to maintain the planned path by compensating for wheel slip and other sources of displacement error. Lastly, a human failsafe control will be implemented through an RF receiver so that the vehicle can be shut down in the case of an emergency.
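One possible byte layout for these serial movement instructions; the frame format and checksum are our own assumptions, not a finalized protocol.

```c
#include <stdint.h>

/* Movement instruction as described above: desired heading, left/right
 * turn direction, go/stop. */
typedef struct {
    uint16_t heading_deg;   /* 0-359 desired compass heading */
    uint8_t  turn_right;    /* 1 = right, 0 = left           */
    uint8_t  go;            /* 1 = go, 0 = stop              */
} MoveCmd;

/* Pack the command into a 5-byte frame ending in an XOR checksum, so a
 * corrupted instruction can be rejected at the receiving end. */
int pack_cmd(const MoveCmd *c, uint8_t out[5]) {
    out[0] = (uint8_t)(c->heading_deg >> 8);
    out[1] = (uint8_t)(c->heading_deg & 0xFF);
    out[2] = c->turn_right;
    out[3] = c->go;
    out[4] = (uint8_t)(out[0] ^ out[1] ^ out[2] ^ out[3]);
    return 5;
}
```

The feedback data flowing the other way (heading, speed, acceleration) could use the same framing with a different payload.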


Although we plan to implement our motion control algorithm on a Microchip PIC18F4431 microcontroller and PICDEM2 Plus demo board, our algorithm will be developed in the C programming language and will not be limited to this specific hardware. In summary, the motion control algorithm will be efficient, modular, and scalable so that it can be applied to problems outside the scope of this project.

3.5 Project Timeline

Remainder of Spring Semester
- Conduct literature research
- Begin formulating algorithm ideas

Summer Research (May to July)
- Develop and test algorithms
- Establish communication between HAVAC components
- Complete initial tiers of operation
- Begin final tier of operation

Fall Semester
- Test and debug system operation
- Finish writing thesis

December Break
- Work on presentation

4 System Overview and Low-Level Requirements

4.1 General Operation

The HAVAC is designed with the goal of navigating the Clarkson University campus by making use of existing pathways. To accomplish this task effectively, the overall requirements for this project have been divided into three categories: sensor processing, image processing and path planning, and motion control. Each category has an exclusive set of requirements to be met. This section spells out the basic parameters around which the three groups of requirements have been written.


The individual sets of requirements are created in a modular fashion but without overlap. Basic details of the interface methods are provided initially; these details will be refined and updated as individual hardware components are researched and selected. Finally, each category of the project will be designed in several tiers, such that each tier successively builds toward a more sophisticated level of operation. The respective low-level tiers are loosely defined in the following sections of this specification document.


From a high-level viewpoint, we envision our end product as a device with which the user communicates wirelessly via Bluetooth. The user can connect to the robot from a computer with a Bluetooth module, open a custom application containing a map of predefined locations, and give a command of where the robot needs to go. The robot will then take appropriate actions to move to the desired location. During transit, the robot will return status messages so that the user knows if the robot becomes lost, encounters an error, or reaches its destination.


The mid-level view of navigation is more complicated than what is visible to the end user. Once the robot receives a command, it first ascertains its position using its internal path network data, GPS, and a digital compass. The robot must start at a path junction or destination, though it need not be facing the correct direction initially; it will slowly turn in place to orient itself in the proper direction. Finally, the robot will use its camera to verify that the starting location is on a path.
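The turn-in-place orientation step can be reduced to a small decision function that always rotates through the shorter arc toward the desired heading. The tolerance and sign convention below are illustrative assumptions, not project parameters:

```c
/* Decide which way to turn in place during start-up orientation.
 * Returns +1 (turn clockwise), -1 (turn counterclockwise), or 0
 * (aligned within tol_deg of the desired compass heading). */
int orient_step(double desired_deg, double actual_deg, double tol_deg)
{
    double e = desired_deg - actual_deg;
    /* Wrap the error into (-180, 180] so the robot always takes
     * the shorter rotation, e.g. 350 -> 10 degrees turns clockwise. */
    while (e > 180.0)   e -= 360.0;
    while (e <= -180.0) e += 360.0;
    if (e >  tol_deg) return  1;
    if (e < -tol_deg) return -1;
    return 0;
}
```

Calling this repeatedly against fresh compass readings until it returns 0 implements the slow turn-in-place behavior described above.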


Once all of these checks and adjustments are complete, the vehicle will move forward, turning as needed and speeding up or slowing down to improve the accuracy of the incoming sensor data. This operation will be executed as a feedback loop. Next, a toggle-polling-based driving system takes over to keep the robot on paths. Several feedback loops between sensors and algorithms will be used to achieve the desired accuracy of movement. It will be the task of the motion control hardware to execute commands from the path planning algorithm in a way that minimizes wheel slip; this will be accomplished by dynamically changing the motor speed and the front/rear turn angles. Lastly, the signal processing hardware will poll the various sensors, convert their data into meaningful units, and send the data to the motion control or path planning hardware as it is requested, as quickly as possible without sacrificing accuracy.

4.2 System Block Diagram

[Figure: HAVAC System Architecture, Rev 3.0, 3/5/2007. A sensor processor samples the accelerometer, gyro, range finder, GPS, and compass, and supplies velocity, tilt/turn, obstacle distance and range, actual heading, relative distance, and velocity/XY position over RS-232 serial links. An embedded Linux microprocessor performs image processing on sampled camera data and path planning from map and sensor data (driving straight, looking for a turn, turning), accepts user input (desired heading, turn left/right, go/stop), and issues movement commands to the motion control loop, which drives the truck's motor and servos via PWM.]

4.3 Motion Control Block Diagram

[Figure: HAVAC Motion Control System, Rev 1.1, 3/11/2007. The control algorithm interprets movement commands (desired compass heading, turn left/right, go/stop), determines an efficient method of execution, polls movement data (actual compass heading, velocity, relative distance) for continuous feedback, and compensates for wheel slip and vehicle tilt. Its outputs are PWM control signals for the motor and the front and rear steering servos, with RF failsafe shutoff circuitry providing emergency human control.]

4.4 Sensor Processing

The initial tier of operation will involve the acquisition, setup, and calibration of the sensors. The second tier will be the integration of the sensor data into usable navigation data. The third and final tier will involve integrating the microcontrollers and sensors by establishing communications, as well as refining the algorithms and sampling methods.




The HAVAC shall (1) contain a microcontroller that is dedicated to sensor data acquisition. This microcontroller will always be referred to as MC1.

The HAVAC shall (2) contain a 3-axis accelerometer. This accelerometer will always be referred to as A1.

A1 shall (3) be used for acquisition of linear acceleration data.

The HAVAC shall (4) contain a GPS. This will always be referred to as GP1.

GP1 shall (5) reinforce position data for placing the HAVAC on its virtual map.

The HAVAC shall (6) contain an ultrasonic range finder. This will always be referred to as UR1.

UR1 shall (7) be capable of measuring distances to obstacles.

The HAVAC shall (8) contain a digital compass. This will always be referred to as DC1.

DC1 shall (9) be capable of measuring the heading of the HAVAC in the form of degrees on the compass rose.

A1, UR1, DC1, and GP1 shall (10) interface directly with MC1.

MC1 shall (11) be capable of sampling data (digital as well as analog) from the above-listed sensors and forwarding the data to MP0.

MP0 shall (12) integrate the data passed from MC1 into navigational information: specifically, XY coordinates, heading, and velocity.

4.5 Image Processing and Path Planning

The HAVAC shall (13) contain a microprocessor that is dedicated to image processing and path planning. This microprocessor will always be referred to as MP0.

MP0 shall (14) have access, in memory, to a database of path lengths and path intersection coordinates measured using GP1.

MP0 shall (15), before beginning any navigation, derive its starting position from coordinates returned by GP1.

MP0 shall (16) determine the shortest system of paths to traverse in order to reach its destination.

The HAVAC shall (17) contain a 320x240 camera with a serial interface. This will always be referred to as CM1.

CM1 shall (18) capture images of the immediate navigation direction.

CM1 shall (19) interface directly to MP0 via a serial port.

MP0 shall (20) take an image of the road from CM1 while in motion and run a custom edge detection algorithm to determine its immediate placement on the road and the location of possible path intersections.

MP0 shall (21), if a path intersection is detected and the position, map, and path planning data indicate that the vehicle should turn in that direction at some point within the margin of error for positioning data, turn the vehicle onto the detected path.

MP0 shall (22) send its instructions to change the vehicle's position to MC3.

MP0 shall (23) receive IR range-finder data from MC2 and instruct the vehicle to stop if an obstacle is detected in its immediate path. (Preliminary)
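Requirement 16 amounts to a shortest-route search over the junction database of requirement 14; one standard approach is Dijkstra's algorithm over an adjacency matrix of path lengths. The following is a small self-contained sketch; the six-node graph size and function names are arbitrary illustrative choices, not the project's data layout:

```c
#define N_NODES 6
#define INF 1e18

/* Dijkstra over an adjacency matrix of path lengths in meters.
 * dist[i][j] == 0.0 means no direct path between junctions i and j.
 * Returns the total route length from src to dst and fills prev[]
 * so the route can be walked backwards from dst. */
double shortest_path(double dist[N_NODES][N_NODES],
                     int src, int dst, int prev[N_NODES])
{
    double d[N_NODES];
    int done[N_NODES];
    for (int i = 0; i < N_NODES; i++) {
        d[i] = INF;
        done[i] = 0;
        prev[i] = -1;
    }
    d[src] = 0.0;
    for (int iter = 0; iter < N_NODES; iter++) {
        /* Pick the closest unfinished junction. */
        int u = -1;
        for (int i = 0; i < N_NODES; i++)
            if (!done[i] && (u < 0 || d[i] < d[u])) u = i;
        if (u < 0 || d[u] >= INF) break;  /* rest unreachable */
        done[u] = 1;
        /* Relax all paths leaving u. */
        for (int v = 0; v < N_NODES; v++)
            if (dist[u][v] > 0.0 && d[u] + dist[u][v] < d[v]) {
                d[v] = d[u] + dist[u][v];
                prev[v] = u;
            }
    }
    return d[dst];
}
```

On a triangle of junctions where 0-1 and 1-2 are 100 m each but 0-2 directly is 250 m, the route 0-1-2 (200 m) is correctly preferred.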

4.6 Motion Control

The initial tier of this segment will consist of acquiring the vehicle and all associated electronic hardware (motors, electronic speed controller, batteries, servos, and microcontroller). The second tier will involve the development of a two-wheel steering algorithm under the assumption of obstacle-free pathways. The third tier will be the adaptation of that algorithm to steer using all four wheels. The fourth and final tier will be the modification of the algorithm to incorporate fuzzy-logic-based obstacle avoidance.
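A common fuzzy-logic pattern for this final tier is to define a membership function on obstacle range and use it to blend a swerve command with the planned steering. The breakpoints and function names below are illustrative assumptions, not tuned values from our vehicle:

```c
/* Membership of the "obstacle is near" fuzzy set on ultrasonic
 * range: 1.0 inside 0.5 m, 0.0 beyond 2.0 m, linear in between.
 * The 0.5 m and 2.0 m breakpoints are illustrative placeholders. */
double mu_near(double range_m)
{
    if (range_m <= 0.5) return 1.0;
    if (range_m >= 2.0) return 0.0;
    return (2.0 - range_m) / 1.5;  /* linear ramp between breakpoints */
}

/* Weighted (defuzzified) steering output: the planned command when
 * the way is clear, the avoidance command when an obstacle is near,
 * and a smooth blend in between. */
double fuzzy_steer(double planned_deg, double avoid_deg, double range_m)
{
    double w = mu_near(range_m);
    return w * avoid_deg + (1.0 - w) * planned_deg;
}
```

A full controller would hold several such rules (near-left, near-right, tilt, and so on) and combine them the same way, which is what makes the fuzzy approach attractive for the incremental tiers above.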




The HAVAC shall (24) be based on a four-wheel-drive, four-wheel-steering chassis design with a low center of gravity.

The HAVAC shall (25) be driven by a single brushed DC motor. This motor shall (26) be powered by an appropriate metal-oxide-semiconductor field-effect transistor (MOSFET) pulse-width modulation (PWM) device, most likely a hobby-grade electronic speed controller.

The HAVAC shall (27) be capable of four-wheel steering, actuated by front- and rear-mounted servo motors.

The HAVAC shall (28) contain a microcontroller that is dedicated to motion control. This microcontroller will always be referred to as MC3.

MC3 shall (29) receive movement instructions from MP0.

MC3 shall (30) receive processed sensor data from MC2.

MC3 shall (31) determine the most effective way to execute the instructions from MP0 and verify the completion of those instructions based upon data received from MC2.

MC3 shall (32) control the steering servos by sending PWM signals directly to the servos.

MC3 shall (33) send PWM signals to the MOSFET speed controller in order to control the movement of the vehicle.


5 References

It should be noted that many of our sources were found using a database search at citeseer.ist.psu.edu. This online library is sponsored by the National Science Foundation, Microsoft Research, and NASA, and is hosted by Pennsylvania State University.


[1] K. Shaikh, R. Shariff, F. Nagi, H. Jamaluddin, and S. Mansor. Inertial Navigation Sensors, Data Processing and Integration with GPS for Mobile Mapping. Universiti Putra Malaysia, 2005.

[2] K. J. Walchko. Low Cost Inertial Navigation: Learning to Integrate Noise and Find Your Way. M.Sc. thesis, University of Florida, 2002.

[3] D. King. Inertial Navigation - Forty Years of Evolution. GEC Review, Vol. 13, No. 3, 1998.

[4] N. M. Barbour, J. M. Elwell, and R. H. Setterlund. Inertial Instruments: Where to Now? The Charles Stark Draper Laboratory, Inc., Cambridge, Massachusetts.

[5] J. Albus, R. Bostelman, T. Chang, T. Hong, W. Shackleford, and M. Shneier. Learning in a Hierarchical Control System: 4D/RCS in the DARPA LAGR Program. National Institute of Standards and Technology, Gaithersburg, MD.

[6] A. K. Brown. Test Results of a GPS/Inertial Navigation System Using a Low Cost MEMS IMU. NAVSYS Corporation.

[7] I. Skog and P. Handel. A Low-Cost GPS Aided Inertial Navigation System for Vehicle Applications. Signals, Sensors and Systems, KTH Royal Institute of Technology.

[8] M. Veth and J. Raquet. Fusion of Low-Cost Imaging and Inertial Sensors for Navigation. Air Force Institute of Technology.

[9] E. Nebot and H. Durrant-Whyte. Initial Calibration and Alignment of Low Cost Inertial Navigation Units for Land Vehicle Applications. Journal of Robotic Systems, Vol. 16, No. 2, February 1999, pp. 81-92.

[10] Sensis Corporation. Multistatic Dependent Surveillance (MDS). http://www.sensis.com/docs/49/ (as well as a short interview with a Sensis employee on Sunday, March 11, 2007).

[11] R. Duda and P. E. Hart. Use of the Hough Transformation to Detect Lines and Curves in Pictures. Artificial Intelligence Center, January 1972.

[12] P. Garnier and T. Fraichard. A Fuzzy Motion Controller for a Car-Like Vehicle. INRIA Rhone-Alpes, 1996.

[13] O. Aycard, F. Charpillet, and J. Haton. A New Approach to Design Fuzzy Controllers for Mobile Robot Navigation. CRIN/CNRS and INRIA-Lorraine.

[14] E. Tunstel, A. Asgharzadeh, and M. Jamshidi. Towards Embedded Fuzzy Control of Mobile Robots. University of New Mexico.

[15] J. Yen and N. Pfluger. A Fuzzy Logic Based Extension to Payton and Rosenblatt's Command Fusion Method for Mobile Robot Navigation. Texas A&M University, 1996.

[16] J. Zhang, F. Wille, and A. Knoll. Modular Design of Fuzzy Controller Integrating Deliberative and Reactive Strategies. University of Bielefeld.

[17] S. Thrun. A Lifelong Learning Perspective for Mobile Robot Control. Universität Bonn, 1994.

[18] J. Zhang and A. Knoll. Hybrid Motion Control by Integrating Deliberative and Reactive Strategies. University of Bielefeld, 1995.

[19] V. Kumar, M. Zefran, and J. Ostrowski. Motion Planning and Control of Robots. University of Pennsylvania, 1997.

[20] G. Lafferriere and H. Sussmann. A Differential Geometric Approach to Motion Planning. Portland State University and Rutgers University.

[21] L. Barello. http://www.barello.net/Papers/Motion_Control/index.htm

[22] Delphi QUADRASTEER™ in GM trucks. http://www.delphi.com/news/pressReleases/pressReleases_2002/pr7189-02122002/

[23] ARC Electronics. http://www.arcelect.com/rs232.htm

[24] R. Washington. Practical Real-Time Planning. Stanford Knowledge Systems Laboratory, 1992.

[25] M. Still. The Definitive Guide to ImageMagick. Apress, December 16, 2005.