
The University of Sheffield

CITY College


FINAL YEAR PROJECT



Building a Socially Assistive Robot Prototype



This report is submitted in partial fulfillment of the requirement for the degree of
Bachelor of Science with Honours in Computer Science by


Michael Papaioannou


June, 2011




Dr. George Eleftherakis








Building a Socially Assistive Robot Prototype


By

Michael Papaioannou




Supervisor

Dr. George Eleftherakis






ABSTRACT



This project is part of a bigger goal: to build a robot that will be able to provide assistance through social interaction. The aim of this project is to develop a robot prototype, focusing on the system's software and the hardware available at the moment, with the longer-term goal of leading to a mobile robot that assists its users through social interaction. The main outcomes of the project are the system that will be used to control all the devices the robot has, and a review of the goal itself.





“All sentences or passages quoted in this dissertation from other people’s work have been specifically acknowledged by clear cross-referencing to author, work and page(s). I understand that failure to do this amounts to plagiarism and will be considered grounds for failure in this dissertation and the degree examination as a whole.

Michael Papaioannou

June 5 2011”.



Table of Contents

Table of Contents
Table of Figures
1 Introduction
1.1 Aim
1.2 Objectives
2 Understanding the Greater Goal
2.1 Introduction
2.2 Assistive Robotics
2.3 Socially Interactive Robotics
2.4 Socially Assistive Robotics
2.5 Autistic Disorder
2.6 SAR and Autism
2.7 Tools
2.8 Conclusion
3 Problem Solving Process
3.1 Understanding the hardware
3.1.1 The USB-I2C Communications Module
3.1.2 The SRF08 Ultrasonic Sensor
3.1.3 The MD23 Motor Controller
3.2 Software Development Planning
4 Developing the Software
4.1 Iteration 1 - Communications
4.1.1 Analysis
4.1.2 Design
4.1.3 Implementation
4.1.4 Testing
4.2 Iteration 2 - Movement
4.2.1 Analysis
4.2.2 Design
4.2.3 Implementation
4.2.4 Testing
4.3 Iteration 3 - Sensing
4.3.1 Analysis
4.3.2 Design
4.3.3 Implementation
4.4 Iteration 4 - Speech
4.5 Iteration 5 - Integration
4.5.1 Analysis
4.5.2 Design
4.5.3 Implementation
4.6 Summary
5 Evaluation
5.1 System Evaluation
5.2 Project Evaluation
6 Conclusion and Future Work
7 References





Table of Figures

Figure 3-1 Hardware Connections
Figure 3-2 General I2C Command
Figure 4-1 LocalSerialPort Class Diagram
Figure 4-2 Controller Class Diagram
Figure 4-3 LocalSerialPort constructor implementation
Figure 4-4 LocalSerialPort serialEvent() implementation
Figure 4-5 LocalSerialPort read() Implementation
Figure 4-6 LocalSerialPort getSoftwareRevision() Implementation
Figure 4-7 LocalSerialPort Testing
Figure 4-8 Standardized Commands Class Diagram
Figure 4-9 Movement Class Diagram
Figure 4-10 MotorController getAllInfo Implementation
Figure 4-11 MotorController move() Implementation
Figure 4-12 MotorDriver getAllInfo Implementation
Figure 4-13 New serialEvent Implementation
Figure 4-14 Movement Class Diagram
Figure 4-15 SonarController startRanging Implementation
Figure 4-16 SonarController getLightValue Implementation
Figure 4-17 SonarReadings
Figure 4-18 SonarDriver getAllSonarInfo Implementation
Figure 4-19 Speech Implementation
Figure 4-20 GUI Motor Control
Figure 4-21 GUI Sonar Control
Figure 4-22 SingleSerialPort getInstance() Implementation
Figure 4-23 SwingWorker example





1 Introduction


When hearing the term ‘robot’, the mind usually goes to science fiction stories found in movies and books about sentient and intelligent humanoid machines, often trying to replace or destroy their human creators. Although they lack the aforementioned characteristics, robots do exist nowadays, and the term can be used for a large number of different kinds of machines.

Because there is no standard way for machines to qualify as robots, there are many characteristics a robot could have. Some of the most common are:

- The ability to move, either to cover distances or through moving parts of their own; from ASIMO, a humanoid robot that is able to walk and run, to the robotic arms that are used in factories.
- Intelligence of any kind; from the clothes washers that save water to self-controlled cars that make driving decisions.
- The ability to sense their surroundings in the form of objects, images, temperature, sound and more; from Roomba, an automated vacuum cleaner that recognizes objects and changes its course, to any of the rovers that were sent to explore the surface of Mars with many different kinds of sensors.


The most common characteristic of any kind of robot, and also the purpose of their creation, is to provide services to people by handling tasks that might be tedious, dangerous or difficult for a person to do, or to support and enhance human work.


One huge field in which robots often provide support is medical science. There are many application areas in this field that a robot can, and in recent years increasingly does, affect [1]. Robotic systems are being used by surgeons due to the stability and dexterity they offer when handling sensitive and hard-to-reach areas [2]. Although still not absolutely reliable and available to all, robotic limbs can replace the functionality of a missing body part [3]. Lastly, robots can be used to provide assistance through physical or social interaction to a wide range of patients.

Assistive robotics is a term used to refer to robotic systems that assist people with problems of a physical nature through the use of physical interaction. This category includes robots that are used for rehabilitation, e.g. neurorehabilitation [4], mobility aids [5] and the aforementioned robotic limbs.

Socially assistive robotics is a relatively new category that extends assistive robotics; although they share the same goal, their focus is different. They provide assistance through social rather than physical interaction [6]. Due to this nature, members of this category have to be built by taking into consideration not only the field of health care but also HRI (human-robot interaction), which in turn, as an instance of human-machine interaction, brings together computer and engineering science, social and cognitive science, and ethics, among others [7].

User groups for socially assistive robotics can include the elderly, people with physical disabilities, people in recovery, students and individuals with cognitive disabilities [6]. Among these, the focus of this project is on children with autism.






Autism is a developmental disability that is caused by a problem in the brain. People with autism have difficulties with communication and social interaction, and they also exhibit unusual behaviors and interests [8].

There has been a lot of research concerning how a socially assistive robot can help people with autism. There are two kinds of motivation. The first is to gather adequate information that will contribute to diagnosing, treating and understanding the autistic disorder; the second is to teach and motivate people with autism to engage in social interaction, with the hope that this kind of engagement will be transferred to their interactions with humans [9]. The motivation behind this project is closer to the latter. More precisely, due to the communication, social and cognitive challenges that children with autism face, it is believed that a mobile socially assistive robot, acting as a mediator between a child and his caregiver, will generate a large degree of motivation for interaction, through which the child will acquire, apart from basic knowledge, some reasons to share these experiences with other people, thus improving these problematic areas.


1.1 Aim

To develop a robot prototype, focusing on the system's software and the hardware available at the moment, with the longer-term goal of leading to a mobile robot that assists its users through social interaction.

1.2 Objectives

Two kinds of objectives were decided for this project. The first kind includes those related to how the system should be built, meaning the things that need to be accomplished during the whole development process; these will be called system objectives. The second kind includes objectives that need to be accomplished by the project as a whole; by fulfilling them, the general aim will also be considered adequately reached. These will be called project objectives.


The system objectives are:

- It is of high importance to have an appropriate software engineering model for the project's development process. Having an inappropriate order for such an amount of work can result in time inefficiency.
- The requirements must be captured correctly. Understanding the hardware that the robot will use is mandatory to accomplish this.
- Keeping a detailed time plan will ensure that what needs to be done will be completed by the appropriate deadlines.
- The analysis for the project's development process should be complete, in order to ease the transition to the other phases and to ensure that the requirements are being followed.
- During design, the main focus should be on creating simple, robust and modular solutions that could be reused in the future.
- Extensive testing should be done to detect and correct any errors, to ensure the optimal behavior of the whole system.




- The programming language used in the implementation phase should be one that the developer is comfortable with. If this cannot be the case, the extra time needed to become familiar with it should be taken into consideration.
- There should be a complete evaluation that will be used to check whether the end product is appropriate for setting the foundations for any future work towards the bigger goal.
- Due to the nature of the project and the difficulties it implies, the risk management should identify and be able to mitigate all the possible risks that may occur.


The project objectives are:

- To provide the theoretical background that will justify the robot's existence. Defining the purpose of building something from the beginning is as important as the process of building it. Therefore, extensive research should be made on how a robot can provide assistance through social interaction.
- To have control over the several hardware parts of the robot and to develop a prototype version of the software needed, focusing on establishing an overall architecture that should be modular.
- To transfer this control to the user through a friendly interfacing environment. Although a greater goal is for the robot to be autonomous, monitoring and ensuring that it operates as it should is equally important.
- To prototype a number of pieces of foreseen functionality, to enable at some point the longer-term goal of the bigger project this project is part of.




2 Understanding the Greater Goal


2.1 Introduction

Before starting to build the prototype of the robot and developing the software that will lead it, appropriate knowledge of its purpose should be given. This will be the focus of this chapter. Initially, information about assistive and socially interactive robotics will be given. From this, the term socially assistive robotics will be explained. Afterwards, basic knowledge of the autistic disorder will be given, while the first target group of people in need of assistance will be defined. Lastly, past work on socially assistive robots with autistic children will be provided.


2.2 Assistive Robotics

The term assistive robotics (AR) can be very general. It can be used to cover many categories of robots that are used to provide aid and support in the fields of health care and education. Due to this, they have many application areas:



- Recovery and rehabilitation. The most common group of patients that need help in this category is people with specific physical disabilities after a stroke. A robot can physically assist with the exercises needed for the patient to regain control of his body movement, and it proves to be very appropriate for this task because it does not get tired, it is able to efficiently obtain data on the patient's progress and it is able to aid with exercises not possible for a human trainer [1], [4], [10].
- Mobility problems. This area includes robots that can be used as mobility aids. These can be robotic wheelchairs or walkers [5], [11], [12].
- External limb prostheses. Although this field still needs a lot of research to make its robots more available to the public, robotic arms can be used to replace the missing part by getting signals from muscles and nerves [1], [3], [12].
- Elder care. Assistive robots in this field can be used as minders or guides [12].


2.3 Socially Interactive Robotics

This term (SIR) was first used by Fong, Nourbakhsh and Dautenhahn [13] to distinguish robots that interact with humans socially, as opposed to all other methods of human-robot interaction. Some application fields were also identified, namely research, education and therapy.

Some important findings of their survey [13] come from the definition of a list of characteristics that a socially interactive robot can have. These are:




- Embodiment. This term can be defined as a measurable characteristic of how much a socially interactive robot can interact with its environment and vice versa. This can include, for example, counting its sensors and actuators.
- Emotion. In the context of SIR, a robot could simulate anger or sadness to provide a hint of disagreement with the user.
- Dialog. Verbal or non-verbal communication between a human and a robot.
- Personality. If used in a socially interactive robot, it can set a specific way for how the user perceives the robot.
- Human-oriented perception. A SIR can have this characteristic if the way it understands its surroundings is human-like. This can be further analyzed into people tracking, speech and gesture recognition and facial perception [13].
- User modeling. The ability to understand and react to human social behavior [13].
- Socially situated learning. Acquiring knowledge through social interactions that can lead to the improvement of communication.
- Intentionality.


2.4 Socially Assistive Robotics

According to Feil-Seifer and Mataric [6], socially assistive robotics (SAR) is defined at the intersection of assistive and socially interactive robotics.

Indeed, SAR has the same goal as AR, that is, to provide assistance to users, but their ways of achieving it are different. SAR relies on social rather than physical interaction. At the same time, this focus is very similar to that of socially interactive robotics. Thus, it is a term that was needed to describe robots that belong to both of these categories.

Socially assistive robots can be applied to many user groups in the field of health care. Among them, the most common ones are the elderly, people in recovery and rehabilitation, and individuals with cognitive disabilities [9].

For the elderly, most needs for assistance with everyday tasks, such as moving independently, can be covered by the broad category of assistive robotics. SAR in this case can be used to provide companions that improve the mental health of this group by reducing stress, loneliness and depression [14].

Assistance for people in rehabilitation and recovery can already be provided, as was mentioned, by assistive robots in the form of physical interaction. Some people, though, argue [6] that there exist less physically active ways that robots can use for treatment. In this context, socially assistive robots can be used to constantly motivate and support the person in recovery in doing certain physical exercises by himself.

Another domain to which SAR can contribute is people with cognitive, developmental and social disorders. In this domain, most research concerning socially assistive robotics has been conducted on the autistic disorder [9].


2.5 Autistic Disorder

Autistic Disorder, along with Asperger Syndrome and Pervasive Developmental Disorder - Not Otherwise Specified (PDD-NOS), belongs to a group of developmental disabilities known as Autism Spectrum Disorders (ASDs) [8]. People are affected differently by ASDs.

The common symptom in all three types of ASDs is problems with social interaction. Autistic disorder, which is also called classic autism, is usually the most severe among the three. Apart from social disabilities, there exist problems of communication and unusual behaviors and interests. Many autistic people also have intellectual challenges. Asperger syndrome and PDD-NOS are usually milder than autism but also share the problems with social interaction [8].

Currently, there is no cure for ASDs, but early diagnosis and intervention can greatly improve the future of an autistic child. Still, there is no assurance of this, and autistic people usually need great support and care throughout their lives [8].


2.6 SAR and Autism

For the user group of people with the autistic disorder, there are two common types of research concerning the effect that socially assistive robotics can have on people with autism.

The first examines how SAR can help to diagnose, treat and understand the nature of the autistic disorder. Scassellati [15] argues that, using a socially assistive robot that is able to detect and measure social behavior, a lot of information about the social responses of people with autism can be gathered. This is motivated by the fact that there is no diagnostic tool for the autistic disorder other than an expert examining a child's behavior [8].

The second and larger part of research on SAR and autism focuses on how children with autism respond to robots, with the aim that at some point a socially assistive robot could be used as a "social crutch" [9] to assist these children by improving their social interaction and transferring this to the children's other interactions with people.

There are many socially assistive robots that were used for the aforementioned type of research:



- The AuRoRA research project used a mobile robotic platform as a remedial device for children with autism. The robot was a four-wheeled buggy with the ability to avoid obstacles and sense heat. Through four behavior programs, the user could choose a demonstration, forwards and backwards movement, heat following, and movement triggered by heat. The interaction with the children came from applying the last three, with the most popular being when the robot followed them. First trials with children showed that all of them enjoyed interacting with the robot [16].
- Roball is a robotic ball-toy that was able to move and steer on its own and change its course after bumping into an obstacle. In addition, it had prerecorded speech that it used in specific situations. The interaction with the children came from playing with it, and the short sounds helped in drawing their attention. In the trials that were made with children with autism, the conclusion was that, although varying in magnitude, the robot succeeded in catching their attention [17].



- Keepon is a small yellow snowman-like robot that is placed on a static base and capable of upper-body movement. It has two cameras for eyes and one microphone for a nose to interact with the environment. By being able to turn its upper body part at different angles, it can simulate attention (by turning the body such that the eyes seem to focus on the target) and emotions (by moving up and down or left and right), such as pleasure and excitement. It came in contact with many children with developmental disorders, including autistic spectrum disorders, for over a year and a half. They interacted with the robot by touching it or just watching it move. There were also many cases where the robot acted as the mediator for social interaction among the children and between them and other people, the reason being their wish to share their enjoyment of Keepon with others. The results were really positive, both in the success of assisting these children through social interaction and in providing information about them through its cameras [18].
- Robota is a humanoid robotic doll. It is able to move its hands, legs and head but otherwise has no other mobility. It interacted with children with autism through play and imitation (either the robot imitating the child or vice versa), through several games and even dancing. After a lot of trials with several children, an improvement of their social interaction with other people (i.e. their caregiver) was observed, with the robot as the mediator [19].


2.7 Tools

Over the whole duration of this project, several tools were found that helped with the work that needed to be done.

- The first tool is the programming language that will be used to develop the robot's controlling software. Java was chosen for that role, due to its availability and familiarity.
- The next tool is the speech synthesizer. FreeTTS is a speech synthesis system, and it will be used by the robot to communicate with the user using speech (output). It was chosen due to the fact that it is written in the Java language and thus will present no compatibility problems.
- Moreover, a piece of software helped with controlling the robot remotely: TeamViewer. It was used to remotely access the desktop of the main computer that was controlling the robot.
- The Java Communications API is the tool that is the basis for the successful connection between the robot and the system.



2.8 Conclusion

Socially assistive robotics shares with assistive robotics the aim of providing assistance to many categories of people in need in the field of health care, and at the same time it shares with socially interactive robotics its focus, which is social interaction. Apart from a variety of characteristics that they borrow from SIR, socially assistive robots must have a clear target group of users to provide their assistance to, due to the different needs that each group has. The target group of this project's bigger goal, i.e. children with the autistic disorder, has already had much contact with SAR. From all this previous research in the field, two things can be derived: socially assistive robots can succeed in improving the social interactions children with autism have with other people by acting as the mediators, but at the same time there is not enough proof that those children would keep and practice these social and communication skills with other people throughout their lifetime, in the absence of the robot. More research over larger periods of time is needed to ensure that this is not the case.




3 Problem Solving Process


From the beginning of this project, it became apparent that it is not such a simple task to build a socially assistive robot from zero. In fact, it is not so easy to build a machine that can be considered a robot even under the broader definition of the term. In order to tackle these difficulties, the general problem-solving process was divided into three phases. During phase one, as was mentioned already, the main focus was on understanding the bigger goal that defines the purpose for building the robot.

In phase two, the goal was to understand the hardware that the robot is using, before embarking on phase three, where the actual software development process will be planned.


3.1 Understanding the hardware

The three main hardware components of the robot are the USB-I2C Communications Module, the MD23 motor controller and the SRF08 Ultrasonic Sensor. The basic architecture of the connections among all the hardware that will be used by the system is displayed in the figure below.

Figure 3-1 Hardware Connections




3.1.1 The USB-I2C Communications Module



An I2C bus is a way for several components to communicate with each other, either on the same circuit board or linked via a cable [21]. It was designed by Philips Semiconductors (now NXP Semiconductors), and the name refers to Inter-Integrated Circuit. Some of its main features are [22]:

- It requires only two bus lines: a serial data line and a serial clock line.
- Any device connected to the bus can be accessed by a unique address.
- There exist master-slave relationships among all devices connected on the bus. (Masters are the devices that are able to initiate a bus transaction, while slaves only respond, either receiving or transmitting according to the master's commands.)

All devices on the robot use this standard to communicate.

The USB-I2C Communications Module is a device responsible for providing an interface between the computer and the I2C bus on the robot. It is an I2C master only, and it simplifies any access that the computer would require on the bus by handling all the I2C bus internal requirements (i.e. start, restart, stop and acknowledges) [23]. All that is needed for this to be accomplished is to provide the appropriate byte sequence.

The general structure of the byte sequence that is needed to access a specific device on the I2C bus through the USB-I2C module can be seen in the figure below.



Figure 3-2 General I2C Command

The sequence has to begin with the command byte. This can be one of five standard command types, and it must be chosen according to the type of device that will be accessed. The module supports reading and writing of one or multiple bytes on several types of I2C devices [23], which may have no, one-byte, or two-byte internal address registers. An example of a command byte is the value 0x55, which is used for reading or writing multiple bytes on devices with one-byte addressed registers.

The second byte is the address that the specific device has on the I2C bus. As mentioned above, every device on the I2C bus can be accessed through its specific address. Moreover, this byte also indicates whether the byte sequence is for writing or for reading. For a write sequence, the normal device address, which is always an even number, is used. For a read sequence, this byte is increased by one. The range of values for this byte depends on the device; an example would be the value 0xE0 for write and 0xE1 for read on the SRF08 sonar.

The next bytes in the sequence depend on the type of the device which, as mentioned above, can have no, one-byte, or two-byte registers. In the second and third cases, this is where the address of the register (within the device now) that the sequence will act upon is passed. Again, the address of the register (if any) is specific to each device.

The next byte holds the total number of data bytes that the sequence intends to access.






When writing to the device, the last bytes in the sequence are the values to be written. In that case there are as many of these bytes as the preceding data count byte specifies.

A specific example, for better understanding of the byte sequence that the USB-I2C module requires in order to access a specific device on the I2C bus correctly, is the following: 0x55, 0xE0, 0x00, 0x02, 0x55, 0x44. This translates to: access is required for a device with one-byte address registers, the device address is 0xE0 and this is a write command; begin writing from the register at location zero, for two registers in total (locations 0 and 1), the values 0x55 and 0x44 (hexadecimal). It should be mentioned that with this specific format, accessing registers at locations far apart with a single byte sequence is impossible without also accessing all the intermediate registers.
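In Java, the language used for this project, assembling such a write sequence can be sketched as follows. This is an illustrative helper rather than the project's actual code; the class and method names are ours, and only the layout (command byte, device address, start register, data count, data) comes from the description above.

```java
// Sketch: building a USB-I2C write sequence for a one-byte-addressed device.
public class UsbI2cCommandExample {
    // USB-I2C command byte for devices with one-byte address registers.
    static final byte CMD_1BYTE_ADDR = 0x55;

    /** command byte, device address, start register, count, then the data. */
    public static byte[] writeCommand(byte deviceAddr, byte startReg, byte[] data) {
        byte[] cmd = new byte[4 + data.length];
        cmd[0] = CMD_1BYTE_ADDR;
        cmd[1] = deviceAddr;          // even address = write
        cmd[2] = startReg;            // register to start writing at
        cmd[3] = (byte) data.length;  // data count byte
        System.arraycopy(data, 0, cmd, 4, data.length);
        return cmd;
    }

    public static void main(String[] args) {
        byte[] cmd = writeCommand((byte) 0xE0, (byte) 0x00,
                                  new byte[] { 0x55, 0x44 });
        for (byte b : cmd) System.out.printf("%02X ", b);
        // 55 E0 00 02 55 44
    }
}
```

Reproducing the example sequence from the text this way makes it clear which byte plays which role.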

Lastly, there is the possibility to access the USB-I2C module itself with an appropriate byte sequence. Such a sequence begins with the command byte 0x5A and is always four bytes in total. Using this command, several results can be achieved, such as reading the USB-I2C firmware revision, quickly and easily changing the address of SRF08 sensors, setting up the module as a general purpose I/O controller if it is not used for I2C, or converting analogue input if it is connected to a device that transmits analogue signals.


3.1.2 The SRF08 Ultrasonic Sensor


Ultrasonic sensors are a common type of device used in robotics whenever there is a need to measure the distance to objects. An ultrasonic sensor works similarly to radar, by transmitting ultrasonic sound waves and measuring the time until they are received back. Knowing the time from transmission until the return of the echoes, the distance can be calculated using the speed of sound. When they are able to both send and receive sound waves, ultrasonic sensors are also known as transceivers. Although they can be affected by conditions like wind and temperature, and are slower at calculating distances than photoelectric sensors, they have the advantage of not being affected by the color or reflectivity of the target objects, as sensors using light are.


The SRF08 is a transceiver ultrasonic sensor that can be connected to the I2C bus. Moreover, it has an onboard light sensor whose value is updated with every new ranging and can be accessed from the appropriate internal register [25]. The SRF08 has a total of thirty-six one-byte internal registers, so it can be accessed with the appropriate USB-I2C command. Its default address is 0xE0, but that can be changed to anything up to 0xFE, so a total of sixteen SRF08s can be connected and uniquely addressed on the I2C bus.

Of the thirty-six registers, only the first three, at locations zero, one and two, can be written to. The first is used to initiate a ranging. The value written to this register determines the type of the range result, which can be in inches, centimeters or microseconds. This register is also used to change the sensor's address on the I2C bus. This happens by sending four different byte sequences that write the appropriate values to the register at location zero. The first three values are specific to changing the address, while the fourth contains the new address. All four must be sent in order, and there must be no other access to the SRF08 while this happens.

The second register (location one) is used for setting the maximum analogue gain of the sensor, and the third for changing the range over which the sensor listens. The range that the SRF08 can cover is determined by a time in milliseconds. By default this is 65 milliseconds (about 11 meters), although the actual capability of the sensor is six meters. By setting a value in the range register, this time (and effectively the range) can be changed. Two reasons for doing so would be to get distance information faster, or to be able to initiate rangings at a faster rate [25]. Lastly, by setting a maximum analogue gain for the SRF08, a limit on the sensor's sensitivity to weaker distant echoes can be imposed. One reason for doing this is that by making the SRF08 less sensitive to weak echoes while initiating rangings at a very fast rate, the last echoes from one ranging are not received by the next; if they were, false results could occur.

Data can be read from all thirty-six SRF08 registers. Reading the register at location zero returns the software revision of the sensor. The register at location one holds the value of the onboard light sensor, which is updated along with the rest of the registers on every new ranging. The remaining thirty-four registers hold the ranging results in register pairs. This means that one actual result is a two-byte unsigned numerical value taken from two consecutive registers (starting with the pair at locations two and three), high byte first [25]. The SRF08 can therefore store up to seventeen returned echo results. Depending on the type specified when the ranging was initiated, these values are centimeters, inches or microseconds.
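Decoding the register pairs described above can be sketched in Java. The names are ours; only the high-byte-first rule and the pair layout come from the text.

```java
// Sketch: turning the SRF08 echo register pairs into range values.
public class Srf08Ranges {
    /** regs holds registers 2..35 (17 pairs); returns the 17 range values. */
    public static int[] decodeRanges(byte[] regs) {
        int[] ranges = new int[regs.length / 2];
        for (int i = 0; i < ranges.length; i++) {
            // Two consecutive registers form one unsigned value, high byte first.
            ranges[i] = ((regs[2 * i] & 0xFF) << 8) | (regs[2 * i + 1] & 0xFF);
        }
        return ranges;
    }

    public static void main(String[] args) {
        byte[] regs = new byte[34];
        regs[0] = 0x01; regs[1] = 0x2C;  // first echo: 0x012C = 300 (e.g. cm)
        System.out.println(decodeRanges(regs)[0]); // 300
    }
}
```

The `& 0xFF` masks matter because Java bytes are signed; without them, high register values would produce negative ranges.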


3.1.3 The MD23 Motor Controller


The devices responsible for moving the robot are two electric motors (EMG30), one for each wheel. The main difference between these conventional motors and the servo motors that are more popular in robotics is their inability to set the position of the wheel accurately. On the other hand, setting the speed of this kind of ordinary DC motor is in general a lot easier.

Having just the motors, though, is not enough, because their only capability is to rotate the wheels. Many things need to be controlled for the motors to function correctly and produce the required results. The device responsible for controlling the motors on the robot is the MD23 I2C motor controller. It provides both manual and automatic ways of controlling the motors and ensuring their proper function.


The two automatic functionalities that the MD23 provides are automatic speed regulation and an automatic motor timeout [26]. The first works by automatically increasing the power supplied to the motors if the requested speed cannot be achieved. One benefit of this is that the user of the controller need not worry about adjusting the speed setting while the motors drive the robot from lower to higher ground. The automatic motor timeout, on the other hand, ensures that the motors do not get out of control due to some failure in communication with the controller. It works by stopping the motors if there has been no communication with the MD23 on the I2C bus for two seconds.


Moreover, the MD23 gives its user a wide range of manual control over the motors through writes to the appropriate registers. The controller itself is by default assigned the address 0xB0 on the I2C bus, but this can be changed to anything up to 0xBE (even numbers only). Lastly, the device has a total of seventeen one-byte internal registers that can be accessed using the appropriate command on the USB-I2C module.

The registers at locations zero and one are the speed registers. The MD23 has four modes of operation, and the meaning of writing to or reading from these first two registers depends on the mode chosen. To set or get the mode, the appropriate value must be written to, or read from, the register at location 15. By default the mode is set to zero.

In modes zero and one, writing to register 0 sets the speed of motor 1 and writing to register 1 sets the speed of motor 2. Similarly, reading these registers in the same modes returns the speed of the corresponding motor. Some experimentation is needed to establish which motor number is the left or the right one, because this can differ according to the cable connections. The only difference between modes zero and one is the meaning of the values written to or read from these registers. In mode 0 the values are literal speeds in the range of 0 for maximum reverse, 128 for stop and 255 for maximum forward. In mode one the values are treated as signed, and the speed range becomes -128 for maximum reverse, 0 for stop and 127 for maximum forward.
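Since both scales describe the same physical speeds (stop at 128 versus stop at 0), the mode-0 byte corresponding to a signed mode-1 value is simply that value plus 128. A small illustrative sketch (class and method names are ours, not the project's):

```java
// Sketch: converting an MD23 mode-1 (signed) speed to its mode-0 equivalent.
public class Md23SpeedModes {
    /** Maps a mode-1 signed speed (-128..127) to its mode-0 byte (0..255). */
    public static int toMode0(int signedSpeed) {
        return signedSpeed + 128;
    }

    public static void main(String[] args) {
        System.out.println(toMode0(-128)); // 0   (full reverse)
        System.out.println(toMode0(0));    // 128 (stop)
        System.out.println(toMode0(127));  // 255 (full forward)
    }
}
```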


The values of registers 0 and 1 in modes two and three are similar to the aforementioned ones, unsigned in mode two and signed in mode three. The difference is that in both modes two and three, register 0 is used to set or get the speed of both motors, while register 1 is used for writing or reading the turn speed. Setting a value in this register changes the speed of both motors so that the appropriate turn speed is achieved, taking into account the value held in register zero [26]:

If according to that value the direction is forward:
- Motor 1 speed = register 0 value - register 1 value
- Motor 2 speed = register 0 value + register 1 value

If according to that value the direction is reverse:
- Motor 1 speed = register 0 value + register 1 value
- Motor 2 speed = register 0 value - register 1 value
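The turn formulas above can be expressed directly in code. This sketch assumes signed (mode 3 style) values and treats a non-negative speed as "forward"; the names are ours, not the project's.

```java
// Sketch of the MD23 turn-speed calculation described above.
public class Md23Turn {
    /** Returns {motor1Speed, motor2Speed} for signed speed and turn values. */
    public static int[] turnSpeeds(int speed, int turn) {
        boolean forward = speed >= 0;
        int m1 = forward ? speed - turn : speed + turn;
        int m2 = forward ? speed + turn : speed - turn;
        return new int[] { m1, m2 };
    }

    public static void main(String[] args) {
        int[] s = turnSpeeds(100, 20);  // driving forward while turning
        System.out.println(s[0] + " " + s[1]); // 80 120
    }
}
```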


Apart from the MD23 registers discussed so far, the last two registers that can be written to are registers 14 and 16. The first is used to set the acceleration rate. By default its value is five, and it can take values from 1 to 10 (0x0A in hexadecimal). The value in this register is taken into account on any change in motor speed, and it controls how fast the change between the two speeds, the old and the new one, occurs. As specific examples, at acceleration rate 1 it takes 255 steps (18 milliseconds each) to change from maximum reverse to maximum forward, whereas at acceleration rate 10 the same change takes only 26 steps (again 18 milliseconds each).

Register 16 is used to issue specific commands to the MD23. There are specific values that make the MD23 reset the encoder registers to zero, disable or enable the automatic speed regulation, disable or enable the automatic timeout, and change its address on the I2C bus.

Overall, registers 0 to 15 can be read to obtain information from the MD23. As mentioned, registers 0 and 1 hold speed information (depending on the mode). Register 10 returns the battery voltage of the controller, registers 11 and 12 hold the current drawn by motors 1 and 2 respectively, register 13 returns the software revision number of the specific MD23 controller, register 14 returns the current acceleration rate, and register 15 holds the mode in which the controller operates. Reading register 16 always returns zero.


Lastly, registers 2 to 9 hold information about the two motors' encoders. From the first four of these registers a four-byte number can be derived, which is the encoder count for motor 1; the high byte is kept at location 2. The other four registers keep the value of the motor 2 encoder, with the high byte at location 6 [26].
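Combining the four encoder bytes into a single count can be sketched as follows; the class and method names are ours, and only the high-byte-first layout comes from the text.

```java
// Sketch: reconstructing an MD23 encoder count from its four register bytes.
public class EncoderBytes {
    /** b is the 4-byte register block for one motor, high byte first. */
    public static int encoderCount(byte[] b) {
        return ((b[0] & 0xFF) << 24)
             | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8)
             |  (b[3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] regs = { 0x00, 0x00, 0x01, 0x2C };  // example read-back
        System.out.println(encoderCount(regs));    // 300
    }
}
```

Because the result is a plain 32-bit int, a wheel running in reverse naturally shows up as a negative count once the bytes are combined.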


3.2 Software Development Planning

At this phase all that is left is to envisage the software that will not only control all the hardware devices separately, but also make them work in relation to each other, so that more complex results can be achieved, leading closer to the bigger goal that this project is part of.


The process model chosen for this project was iterative development. The reasons for choosing this specific model come from the nature of the problem itself. There are specific pieces of hardware that clearly determine the initial requirements. Moreover, it is easier to plan, design, implement and test smaller increments of work over specific iterations. Lastly, since the whole project is based on the available hardware, and considering the risk that some hardware failure may occur, or even the possibility that new devices may be added in the future, the development model must be able to cope with changes or additions to its requirements.


The iterations needed for the completion of this project were identified as the following:




- Iteration 1 - Communication. From 1/12/2010 to 30/12/2010
The first and most important functionality that the system needs to achieve is the ability to control the USB-I2C module. Achieving this opens the door to accessing every other device connected to the I2C bus. This iteration will be considered successfully finished even if the system accomplishes something as trivial as reading the module's software revision number.



- Iteration 2 - Movement. From 1/1/2011 to 30/1/2011
The purpose of this iteration is to be able to fully control the MD23 motor controller on the robot. Every possible command for, and every piece of information needed from, the controller should be covered. Moreover, a less hardware-driven approach for controlling the robot should be provided, since it would be inefficient to use direct and complex access to the device when making everything work together.



- Iteration 3 - Sensing. From 1/2/2011 to 28/2/2011
This iteration will focus on the last device on the robot, the SRF08 ultrasonic sensor. The product at the end of this stage should be able to take range data from the sensor. Moreover, following the same concept of control abstraction as for the motor controller, a similar result should be achieved for controlling the SRF08 without relying directly on its hardware specifications.








- Iteration 4 - Robot Speech. From 1/3/2011 to 30/3/2011
The last functionality that the system should provide before everything starts to be combined is the ability to speak.



- Iteration 5 - General Integration. From 1/4/2011 to 15/5/2011
Although integration of each increment with the whole system will occur in its own iteration, at this stage the system should provide a way of demonstrating its ability to combine the products of all previous iterations. Additionally, there should be work towards accomplishing specific tasks that a robot should be able to do, e.g. obstacle avoidance, wall following etc.







4 Developing the Software


4.1 Iteration 1 - Communication


4.1.1 Analysis


There were certain difficulties in analyzing the problem to be solved in this iteration. According to the planning, the result should be a system able to control the USB-I2C module. This requirement is quite general. Thus, one entity was identified as required in order to describe what the system should do to accomplish the task, and this was the USB-I2C module itself. A need was also identified for a 'connection' entity that would allow the system to actually access the module in order to control it, but no proper way was found to include it in the use cases.

Since all the things that need to be achieved while controlling the module have many parts in common, a general use case demonstrating this was created.



Use case: Command the USB-I2C module
Actors: USB-I2C module
Main success scenario
1. System initiates a connection with the module.
2. System issues a command to the module.
3. The module responds according to the command.
4. System uses the response to do something.
5. Connection is terminated. End of use case: "Success"
Extensions
1a. Connection cannot be achieved.
1a1: System notifies user.
1a2: System terminates. End of use case: "cannot connect"
3a. There is no response from the module.
3a1: System notifies user.
3a2: System terminates. End of use case: "no response"


In every case, a connection must somehow be achieved with the module before any data exchange can happen between the system and the module. If something goes wrong at this point there is not much the system can do about it, so it terminates. The notification merely informs any user monitoring the system at that time that the connection could not be achieved. If all goes well, however, a command is passed to the module (depending on what the module is needed to do). Normally the module sends back a response according to the command it was given. According to the technical specification of the USB-I2C module, a response is always returned, not only when reading data from the module but also when writing data to it. If nothing is returned, the system can assume there is a problem (perhaps a hardware failure) and terminate after informing the observing user. Next, the system should do something with the response (again according to what the command was about), and finally terminate the connection.

To make the above more specific, i.e. to establish what the command could actually be and what the system should do with the response, the technical specification of the USB-I2C was searched for concrete requirements. The only commands of interest at this point are accessing the revision number of the module and changing the SRF08 I2C address. The second will be needed in the future, and is simply a convenience that the module provides thanks to its compatibility with this specific sensor. For both use cases, only a few lines change from the general example above. These are:


Use case: Access the USB-I2C module's revision number
Actors: USB-I2C module
Main success scenario
1. System initiates a connection with the module.
2. System issues the command to receive the revision number.
3. The module returns its revision number information.
4. System shows the revision number.
5. Connection is terminated. End of use case: "Success"
Extensions
1a. Connection cannot be achieved.
1a1: System notifies user.
1a2: System terminates. End of use case: "cannot connect"
3a. There is no response from the module.
3a1: System notifies user.
3a2: System terminates. End of use case: "no response"


Use case: Change SRF08 I2C address
Actors: USB-I2C module
Main success scenario
1. System initiates a connection with the module.
2. System issues the command to change the SRF08 I2C address.
3. The module returns the new address.
4. System notifies user of success.
5. Connection is terminated. End of use case: "Success"
Extensions
1a. Connection cannot be achieved.
1a1: System notifies user.
1a2: System terminates. End of use case: "cannot connect"
3a. There is no response from the module.
3a1: System notifies user.
3a2: System terminates. End of use case: "no response"







4.1.2 Design


The first and most important problem to be solved during the design of the first iteration was the way a connection would be achieved with the USB-I2C module using the Java language. At this point the drivers for the module had already been installed, and the computer could recognize that the module was connected to one of its USB ports. The only thing left was to handle this connection from within the software system.

After the appropriate research, two findings helped in handling this problem. The first was the Java Communications API, and the second a book by Scott Preston [26] that provided important knowledge about how to begin solving the communication problem. The Java Communications API provides basic functionality for handling serial (or parallel) communication in Java. Although there are classes in the API that provide an appropriate interface to physical communication ports (SerialPort and ParallelPort), it was decided that a specific class should be created in the system to wrap the existing functionality in a way that is simpler and more appropriate for the specific problem. Moreover, an interface will be created to define the behavior expected of any serial port in the system, in case a different implementation is needed in the future. The result of this design is the class diagram below.



Figure 4-1 LocalSerialPort Class Diagram


As mentioned, SerialPort is a class in the Java Communications API that defines the minimum required functionality for serial communication ports. Apart from setting specific parameters of the physical serial port, this class provides the input and output streams that handle all data transfers between the computer and the USB-I2C module.

The outputStream and inputStream fields will keep the aforementioned streams. The readBuffer will hold any bytes read from the input stream, and the writeBuffer will do the same when writing to the output stream. Lastly, dataIn will be used to keep track of reading from the input stream.






The constructor of the class accepts one integer, which is the number of the port that will be used. When the drivers for the USB-I2C module are installed, the operating system assigns a port number (COM on Windows) according to the COM ports already installed. Thus, because the number of the port physically connected to the module might differ when the system is installed on different computers, being able to set this number in the system is essential. The close method will be used to terminate the connection when needed, by calling the identically named method of the SerialPort class. The read method will return the readBuffer to its caller, i.e. the array of bytes that was sent by the USB-I2C module, and it will be used by the readString method to get this data in string format. The write method will send an array of bytes to the module over the output stream of the serial port, and the getName method will return the name of the serial port that this object is using.

Lastly, serialEvent is the method that will handle any event occurring on the serial port. This is supported by the Communications API itself (the class needs to implement the SerialPortEventListener interface), and the two events of interest for this case are DATA_AVAILABLE, fired to signal that the input stream of the serial port has new data, and OUTPUT_BUFFER_EMPTY, fired when the output stream is empty. Listening for and handling these events will help with the synchronization needed for writing data to, and reading data from, the connected module through the serial port.

With the connection 'entity' properly designed in the form of the LocalSerialPort class, the next requirement to be fulfilled was the representation of the actual USB-I2C module. The class should use the system's serial port to exchange data with the module.

At this point a general class named Controller was also designed, in order to avoid repeating the same functionality in future classes. The system will use a LocalSerialPort object to exchange any data (concerning any device on the I2C bus) over the port by writing or reading. Thus, this functionality can live in one general class that any future class with the same requirements (as is the case for the class representing the module in the system) will extend. The constructor of the class requires a serial port object, so that it has access to its read and write methods. The first execute method first calls the write method of the serial port, waits for a set duration in milliseconds, and then reads the result and returns it. The second works similarly, but in the end calls readString and returns the result as a formatted string.

The constructor of the USB-I2C class will require a serial port. The method to change the address of the SRF08 will work by executing a specific command byte array containing the new address and any other bytes needed, whose values can be found in the module's technical specification. The byte returned by the execute method will be the new address if the change was successful. In a similar way, the appropriate command to receive the revision number will be executed, and the returned byte will be that number. The class diagram now becomes as follows:







Figure 4-2 Controller Class Diagram


4.1.3 Implementation


The LocalSerialPort class was implemented after considerable research into the Java Communications API, owing to the developer's limited prior experience with it.

The constructor is implemented by initially gathering all the ports in the system into an Enumeration, using a method of the CommPortIdentifier class of the API. This enumeration is then searched for a serial port whose name ends with the provided number (e.g. COM8). If it is found, an instance of SerialPort is created, which is used to set the parameters of the physical port it represents according to the specifications of the USB-I2C module. Afterwards, the stream fields of the class are assigned the input and output streams of the serial port instance itself, and finally the event listener for the two aforementioned events is added. Every exception that might be thrown during all the above is handled by printing an appropriate message to the System.err print stream.








Figure 4-3 LocalSerialPort constructor implementation


Another method with an interesting implementation is serialEvent(). If the event fired is SerialPortEvent.DATA_AVAILABLE, the readBuffer is initialized to a set size (this was chosen to be 100, for no specific reason; the only requirement is for it to be bigger than the number of data bytes that can possibly be retrieved from a device on the I2C bus). Afterwards, using the available() method of the InputStream class as a check, the data read from the inputStream object is stored in the readBuffer array. Lastly, after a check that the bytes now in the readBuffer are not the same as those kept in the writeBuffer since the last write (meaning that the system did not receive back from the serial port the same thing it sent to it), the dataIn flag is set to true.



Figure 4-4 LocalSerialPort serialEvent() implementation


The read method basically waits for the dataIn flag to turn true, at which point it sets it to false again and returns the readBuffer. This way it is ensured that when the read method is called, the returned byte array contains exactly the results needed, because the input stream is read only after it is known that new data has arrived (the reading of the stream happens inside the code that handles the DATA_AVAILABLE event).
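The hand-off between serialEvent() and read() can be sketched as follows. This is an illustrative reconstruction rather than the project's code: it uses wait/notify instead of the polling loop the text implies, and the class and method names are ours, but the observable behavior (read blocks until DATA_AVAILABLE has delivered fresh bytes, then clears the flag) is the same.

```java
// Sketch: flag-based hand-off between the serial event handler and read().
public class DataHandoff {
    private byte[] readBuffer;
    private boolean dataIn = false;

    /** Called from the DATA_AVAILABLE handler once new bytes are buffered. */
    public synchronized void onDataAvailable(byte[] bytes) {
        readBuffer = bytes;
        dataIn = true;
        notifyAll();
    }

    /** Blocks until fresh data arrives, then clears the flag and returns it. */
    public synchronized byte[] read() throws InterruptedException {
        while (!dataIn) wait();
        dataIn = false;
        return readBuffer;
    }

    public static void main(String[] args) throws InterruptedException {
        DataHandoff port = new DataHandoff();
        new Thread(() -> port.onDataAvailable(new byte[] { 42 })).start();
        System.out.println(port.read()[0]); // 42
    }
}
```

Blocking on wait() instead of spinning on the flag keeps the CPU free while the module prepares its response.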



Figure 4-5 LocalSerialPort read() implementation



Lastly, the write method simply sets the writeBuffer to the given byte array and then calls the write method of the outputStream, passing the array as a parameter. Again, all possible exceptions are handled by printing appropriate messages to System.err.

For the Controller class, the execute method is implemented by calling the write method of the serialPort object and then calling Thread.sleep(), passing the provided number as the duration of the sleep. When the thread awakes, it calls the read method and returns the results as a byte array.
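That write-pause-read flow can be sketched as below. SerialLink is a hypothetical stand-in for the project's LocalSerialPort, and the fake port in main exists only to exercise the flow; this is not the project's actual code.

```java
// Sketch of the Controller.execute() logic described above.
public class Controller {
    /** Minimal stand-in for the project's serial port class. */
    public interface SerialLink {
        void write(byte[] data);
        byte[] read();
    }

    private final SerialLink port;

    public Controller(SerialLink port) { this.port = port; }

    /** Writes the command, waits pauseMs milliseconds, then reads the reply. */
    public byte[] execute(byte[] command, long pauseMs) throws InterruptedException {
        port.write(command);
        Thread.sleep(pauseMs);  // give the module time to respond
        return port.read();
    }

    public static void main(String[] args) throws InterruptedException {
        // Fake port that echoes back the first command byte.
        SerialLink fake = new SerialLink() {
            private byte[] last;
            public void write(byte[] d) { last = d; }
            public byte[] read() { return new byte[] { last[0] }; }
        };
        byte[] reply = new Controller(fake).execute(new byte[] { 0x07 }, 10);
        System.out.println(reply[0]); // 7
    }
}
```

Putting the pause inside execute keeps every subclass command free of timing concerns.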


For the Usb_I2cModule class, getSoftwareRevision works by first building the byte array with the appropriate values, in the order given by the specification. This array is then passed to the execute method, and the byte at index zero of the returned array is returned.



Figure 4-6 Usb_I2cModule getSoftwareRevision() implementation


4.1.4 Testing


In this iteration the main concern during testing was whether the Java Communications API was used appropriately. The following code prints to the screen all the ports that the API can find in the system; it was used to test whether (and how) the API could be used to access the physical serial ports correctly.








Figure 4-7 LocalSerialPort Testing


4.2 Iteration 2 - Movement


4.2.1 Analysis


With the knowledge gained from the first iteration, this stage was initially quite a bit easier than the previous one. To gather the requirements for this iteration, the technical specification of the motor controller was used. More specifically, the requirements (expanding the more general one identified during planning) for controlling the MD23 were:

- Move a single wheel
- Move both wheels
- Make the robot able to turn
- Change the controller's mode of operation
- Set the timeout on or off
- Change the acceleration rate
- Reset the encoders
- Change the address on the I2C bus
- Get all information from the controller


4.2.2 Design

4.2.2.1 Standardized commands







During the initial design for the motor controller, the basic idea was to use a similar
approach for issuing commands (byte arrays) to the device as was done with the USB
module. This was considered quite ineffective, though, because the technical
specification was needed every time a command was issued, in order to determine the
appropriate values for the byte array. Thus, the decision was taken to provide a more
specific and understandable format for the commands. Initially only the
commands for the MD23 were covered, but in the end a total of five classes covering all
the current devices on the I2C bus were designed. Although this was not in the scope of
the current iteration, the action was considered justified because it simplifies
future iterations. In addition, it was expected that the implementation code
would be cleaner and more understandable.

The basic idea was to include all possible values found in every
device’s technical specification as final static fields in the appropriate classes. It
was considered easier to recognize MD23_ADDRESS than 0xb0, for
example.













Figure 4-8 Standardized Commands Class Diagram







The first class was I2cCommand. This is a general class that contains the
basic logic. Its fields are: a string that can optionally hold a description of the
command, a byte that holds the type of the command, and an array of bytes for the
rest of the command’s body. The final fields in this class are the five possible values
that a byte sequence for the USB module can start with. The most important method of
the class is getWholeCommand, which returns the type plus the
commandArray of the class as a byte array. This method is used by all classes that extend
this one to return their whole command as a byte array to be used by the rest of the
system.

The next class is again a general one that extends I2cCommand. It is used
to represent all devices that have 1-byte internal address registers. At this point, the first
part of the sequence has become specific, since the value that the USB-I2C module uses
to access these devices is 0x55. Both the MD23 and the SRF08 are this kind of device, so their
command classes extend this one. The final fields of this class are the addresses of
each device on the I2C bus, along with their read counterparts (normal address plus one).

MD23Command and SRF08Command are the final classes in the hierarchy and
contain, as final fields, all the specific values taken from their technical specifications.
These classes should be used to create commands for the specific devices in
order for this whole design to be effective.
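The hierarchy described above might be sketched as follows. The class names, the 0x55 prefix and the 0xb0 MD23 address come from the text; the fields, constructors and the framing of the command body are illustrative assumptions rather than the project's actual code.

```java
// Minimal sketch of the standardized-command hierarchy described above.
class I2cCommand {
    // One of the five possible first bytes of a USB-I2C byte sequence
    // (0x55 is the value for 1-byte-register devices, per the text).
    public static final byte I2C_AD1 = 0x55;

    protected String description;  // optional description of the command
    protected byte type;           // first byte of the sequence
    protected byte[] commandArray; // rest of the command body

    I2cCommand(byte type, byte[] commandArray) {
        this.type = type;
        this.commandArray = commandArray;
    }

    // Returns the type plus the commandArray as one byte array.
    byte[] getWholeCommand() {
        byte[] whole = new byte[commandArray.length + 1];
        whole[0] = type;
        System.arraycopy(commandArray, 0, whole, 1, commandArray.length);
        return whole;
    }
}

class MD23Command extends I2cCommand {
    public static final byte MD23_ADDRESS = (byte) 0xb0;      // from the text
    public static final byte MD23_READ_ADDRESS = (byte) 0xb1; // address + 1

    // Assumed framing: prefix, device address, register, byte count, data.
    MD23Command(byte register, byte value) {
        super(I2C_AD1, new byte[] { MD23_ADDRESS, register, 1, value });
    }
}
```

With something like this in place, a caller can write `new MD23Command((byte) 0, (byte) 0x90).getWholeCommand()` instead of assembling raw 0x55/0xb0 sequences by hand, which is exactly the readability gain the text argues for.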



4.2.2.2 Movement Design


While designing the MotorController class, the idea was to provide the simple
functionality of sending commands to and retrieving data from the device it
represents in the system, keeping any complex logic to a minimum. This provides
complete and direct control over the device. For example, in order to fulfill the
requirement to move, the simplest way is to issue the move command with a value for the
speed of the motors. Keeping this value within the range specified in the technical
documentation, or considering for how much time the move should occur, should be out of
the scope of this class, because it would result in unneeded complexity. Instead, this
logic should live in a class built above the controller, providing more
complex functionality on top of the control this class offers. This second class is the
MotorDriver. The class diagram for these classes is as follows:








Figure 4-9 Movement Class Diagram



Similar to the class representing the m
odule in the system, MotorController
extends the general Controller class because it will require exchanging data with the
device by writing and reading on a serial port. Additionally, it will use the
MD23Command class for all of its methods. Objects of th
is class can be constructed by
providing the serial port that will be used for communication. Every method of this
class will construct an MD23Command object depending on what it is needed to be
achieved and then execute the byte array that can be extracte
d from the command.

Even though it was mentioned that there should be no complex logic in this class, this
could not be achieved for methods that issue a command on registers 0 and 1 (the speed
1 and speed 2 registers), because the values in these registers have different meanings
according to the mode of operation the controller is in. This is the reason for
keeping a single byte field called currentMode, so that the class is aware of the mode of
operation at all times.


The MotorDriver, on the other hand, needs to keep track of any change that happens
to the controller it uses, hence the fields that hold that information. To
completely erase any sign of byte values at the higher level of functionality that the
MotorDriver is intended to provide, normal integers are used as data types throughout
the class. The proper transformation from and to the byte values needed by the
MotorController occurs within each method. When using any method that changes
the speed or turnSpeed of the object, proper checking that the values are within a
specific range must be ensured.
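The range checks and byte/int transformations just mentioned might look like the following. The 0..255 range and the helper names are assumptions, based on one register holding a single unsigned byte; they are not taken from the project's code.

```java
// Hedged sketch of MotorDriver-style value handling: clamp an integer
// into a register's range, then convert between Java's signed bytes and
// the unsigned 0..255 values a register holds.
class DriverMathDemo {
    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    static byte toRegisterByte(int value) {
        return (byte) clamp(value, 0, 255); // int -> unsigned register byte
    }

    static int fromRegisterByte(byte value) {
        return value & 0xFF;                // unsigned register byte -> int
    }
}
```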


4.2.3 Implementation


As mentioned, the implementation of the MotorController class focused on
control of the MD23 device. The same general idea can be seen in every method
of the class. A little more special are the move and turn methods, because they





include checks for the mode that the module is currently in. To demonstrate this
concept, the code of the method that returns the value of every register
on the MD23 device is shown below, together with the current implementation of the move method.



Figure 4-10 MotorController getAllInfo Implementation



Figure 4-11 MotorController move() Implementation


The move method has changed since the design, as a result of testing. The method
issues a different command according to the mode of operation. The Boolean is used in
order to choose between two different functionalities during mode 2 (where turning is
possible by writing to the register that controls speed 2). If it is true, then the turnSpeed
value is not taken into consideration, which means that if this was different from 0x80
(the neutral speed value - stop) the robot will continue turning although a move
command was issued. If it is false, then the neutral speed is written to the speed 2
register, with the result that the robot moves straight forward or backwards
depending on the given speed. Lastly, since the method does not handle modes 1 and 3,
both motors are ordered to stop in those modes. Before testing, this method would only check for the
mode, and thus the situation occurred where the robot kept turning after an order to move.
In the end, the first behaviour was not removed, as it might be needed in the future.
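The mode handling just described can be sketched roughly as below. The register numbers (0 and 1 for the two speed registers) and the neutral value 0x80 come from the text; the method shape is an assumption, and the serial write is modeled as recording (register, value) pairs instead of real I/O.

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the mode-dependent move() logic described above. Only
// mode 2 is handled, as in the text; any other mode stops both motors.
class MoveDemo {
    static final int NEUTRAL = 0x80; // the neutral (stop) speed value
    int currentMode = 2;
    List<int[]> writes = new ArrayList<>(); // recorded (register, value) pairs

    void writeRegister(int register, int value) {
        writes.add(new int[] { register, value }); // stand-in for serial I/O
    }

    void move(int speed, boolean keepTurning) {
        if (currentMode == 2) {
            writeRegister(0, speed);       // speed 1 drives the robot
            if (!keepTurning) {
                writeRegister(1, NEUTRAL); // cancel any ongoing turn
            }
        } else {
            writeRegister(0, NEUTRAL);     // unhandled modes: stop both
            writeRegister(1, NEUTRAL);
        }
    }
}
```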


Implementation of the MotorDriver mainly focused on checks that values are
within certain ranges (like speed and turnSpeed) and on the transformation from bytes to
integers and the other way around. Inside almost every method of this class there is a
call to a method of the MotorController object.


An interesting way of getting the values of every register on the MD23 device
is shown below:








Figure 4-12 MotorDriver getAllInfo Implementation


Initially there is a call to the method of the motorController that returns the values of
every register on the MD23 device in the form of a byte array. Afterwards, the integer
array that will be filled with the appropriate values and then returned is initialized. At
this point the byte array is split and copied into three different byte arrays at specific
positions. The first holds the four bytes that together form the 32-bit value of
encoder 1, the second is similar but for encoder 2, and the third array contains the 1-byte
values of the rest of the registers. After the appropriate transformations, everything is
arranged appropriately in the integer array, which is then returned.
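The encoder part of that splitting, turning four register bytes into one 32-bit value, can be sketched like this. The high-byte-first order is an assumption about the register layout; the masking with 0xFF is needed in any case because Java bytes are signed.

```java
// Sketch of combining four 1-byte register values into the 32-bit
// encoder count described above (high byte first, assumed).
class EncoderDemo {
    static int toInt32(byte[] registers, int offset) {
        return ((registers[offset]     & 0xFF) << 24)
             | ((registers[offset + 1] & 0xFF) << 16)
             | ((registers[offset + 2] & 0xFF) << 8)
             |  (registers[offset + 3] & 0xFF);
    }
}
```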


Although this method may seem to stray somewhat from proper
object-oriented technique, it is in fact the most efficient way (concerning the time that
the I2C bus is occupied) to achieve this functionality. The reason is that, no
matter the complexity of the underlying calculations, the result is achieved with only
one call on the I2C bus (the first line, where the byte array is retrieved). Past
implementations of this method used four lines of code to call four methods of
the motorController, and each method was a command passing through the
serial port. Considering the fact that at least 50 microseconds have to pass between a
write (issuing the command) and a read (retrieving the result), the minimum time
required was at least 200 microseconds. With this implementation it is achieved in a
minimum of 50. Again, unit testing of each method was how this was discovered
and fixed.


4.2.4 Testing








Lastly, a bug should be mentioned that was found during unit testing at this
point. It concerned a product of the previous iteration, specifically the
implementation of the serialEvent method of the LocalSerialPort class, which can be
seen at page 9.

The result of the bug was that the byte array (the readBuffer) returned
from the read method of this class did not contain the expected number of values. This
was discovered when trying to print the results of the getAllInfo method of the
MotorController class. The only place that the readBuffer is assigned values is the
serialEvent method (since the read method waits in a loop for dataIn to turn true and
then returns the buffer as modified by the event). The reason for the bug
was found in the documentation of the inputStream.available() method. This method
returns an estimate of the number of bytes that can be read. Apparently, the estimate
was wrong in this case, and only a part of the input stream was read into the readBuffer.
Execution would then continue and exit the method normally. Although dataIn was
true, the read method could not return, because immediately after exiting the event
handling a new event was fired, signalling that there were still data available in
the inputStream. Since the readBuffer is initialized and assigned values during the
handling of the event, the only data in the readBuffer at the end of this
situation were the last part of the input stream. To handle this, the code was modified as
follows:



Figure 4-13 New serialEvent Implementation


The solution was to use another array to hold each part read from the input
stream, and then copy this array to the proper position of the readBuffer with the help of
a pointer. The pointer and the readBuffer are initialized outside the data-available event
handling code, when the output stream is empty (after a write has finished). This
works without any problems because a read is normally always preceded by a write. Of
course, although the results are now as expected (the resulting readBuffer has the
expected number of values), this solution cannot be considered proper, and a more robust
solution is required.
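A minimal sketch of the pointer-based fix follows, with the serial event simulated by handing byte chunks to a method. All names here are assumptions for illustration, not the project's actual code.

```java
// Sketch of the pointer-based fix described above. The serial event
// delivering partial chunks is simulated by calls to onDataAvailable().
class ReadBufferDemo {
    byte[] readBuffer;
    int pointer;     // next free position in readBuffer
    boolean dataIn;  // flag the read() loop waits on

    // Called after a write completes, before the reply arrives.
    void prepareRead(int expectedLength) {
        readBuffer = new byte[expectedLength];
        pointer = 0;
        dataIn = false;
    }

    // Stands in for the DATA_AVAILABLE branch of serialEvent():
    // each chunk is appended at the pointer instead of overwriting.
    void onDataAvailable(byte[] chunk) {
        System.arraycopy(chunk, 0, readBuffer, pointer, chunk.length);
        pointer += chunk.length;
        if (pointer == readBuffer.length) {
            dataIn = true; // the read() loop may now return the buffer
        }
    }
}
```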







4.3 Iteration 3 - Sensing


4.3.1 Analysis


At this point, enough experience had been gathered during the previous development
cycles to provide important help at every stage of this iteration. Having already
specified the common general requirement of controlling an I2C device for two devices,
the same procedure was followed to further extend what the system should do
concerning the SRF08. By reading its technical specification, the need to include the
following functionalities in the system was identified:




• Initiate a ranging in any of the three distance measurement types that the
device provides.

• Get information on the result of every one of the 17 echoes that the SRF08
can provide.

• Get the values of all the registers of the device that can provide any useful
information about the device itself. Apart from the 34 registers that provide the
data of the 17 echoes, this information can be:

  o The software revision number of the device.

  o The value of the device’s onboard light sensor.

• Provide the ability to change the maximum range from which the device can take
measurements.
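For the first of these functionalities, a "start ranging" command might be built as below. The unit-selection bytes (0x50 = inches, 0x51 = cm, 0x52 = microseconds, written to command register 0) follow the SRF08 datasheet, and 0xE0 is its default bus address; the surrounding framing mirrors the USB-I2C sequence format discussed earlier, but the exact layout here is an assumption for illustration.

```java
// Hedged sketch of building a "start ranging" command for the SRF08.
class SonarCommandDemo {
    static final byte RANGE_INCHES = 0x50; // datasheet ranging commands
    static final byte RANGE_CM = 0x51;
    static final byte RANGE_MICROSECONDS = 0x52;

    static byte[] startRanging(byte unitCommand) {
        // Assumed framing: 0x55 prefix, SRF08 address 0xE0,
        // command register 0, one data byte, then the unit command.
        return new byte[] { 0x55, (byte) 0xE0, 0x00, 0x01, unitCommand };
    }
}
```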


4.3.2 Design


The first class to be designed in this stage was the SonarController. This is the place
where the SRF08 is accessed directly in the system. The design of the class was not so
difficult, due to the knowledge gained while designing, implementing and
testing the motor controller cla