Artificial Intelligence and Robotics, 20 Oct 2013


“NEUROCOPTER”

LINEAR QUADRATIC GAUSSIAN (LQG) DESIGN AND IMPLEMENTATION OF THE CONTROL SYSTEM OF AN UNMANNED AERIAL VEHICLE (UAV) WITH STATE-SPACE MODELLING AND KALMAN FILTERING, AND AUTONOMOUS GUIDANCE WITH COMPUTER VISION AND DATA LINK COMMUNICATIONS

MECHATRONIC TEAM: José Merino, Mauricio Varhen. Advisor: Ing. José Oliden

CONTROL TEAM: Ronald Alvarado, Lourdes Escobal. Advisor: Ph.D.(c) EE Fernando Jimenez

DATALINK TEAM: Uriel Chavesta, Ingrid Rojas. Advisor: Ing. Juan Huapaya

COMPUTER VISION TEAM: Alejandro Aranguren, Tiffany Vela. Advisor: Ing. Germaín Cardenas


Universidad Peruana de Ciencias Aplicadas (UPC), Facultad de Ingeniería, Escuela de Ingeniería Electrónica, Lima, Perú


Abstract

Controlling scale helicopters in their various flight modes is a complex task due to the nonlinearity of their structure and their strongly coupled motion dynamics. In this paper, an adaptive neural controller is developed for a scale helicopter using data from an artificial vision stage and from sensors installed in the air vehicle. Data is transmitted from the helicopter through a data link stage and received on a computer where the neural controller is implemented. The outputs of the control stage return to the helicopter through the same data link in order to close the control loop. It is shown that proper communication between the various stages that constitute the project allows accurate control of a scale helicopter from a computer.

I. INTRODUCTION


Nowadays, helicopters are essential air vehicles for the search and rescue of stranded people and for transporting accident victims. Police departments use them to find and pursue criminals. Firefighters use helicopters for the precise distribution of chemical fire suppressants in forest fires. Electric companies increasingly use these vehicles to inspect towers and transmission lines for corrosion and other defects so that the respective repairs can be made. All these applications involve approaching danger, risking the safety of the pilot and the rest of the crew. An autonomous helicopter eliminates these risks and increases the effectiveness of the flight.


The stabilization and control of a scale helicopter has been approached with different techniques. In the early nineties, classic control systems prevailed, based on a proportional derivative (PD) controller and a single-input, single-output (SISO) helicopter model. More advanced approaches therefore propose a precise helicopter model that represents its complex dynamics. Since 2000, several universities, such as MIT and UC Berkeley, have developed optimal controllers using the Linear Quadratic Regulator and Linear Quadratic Estimator (LQR and LQE). However, the global trend in the last four years is the development of intelligent controllers that combine the power of neural networks, fuzzy logic and genetic algorithms. These control techniques have been implemented by institutions such as Tennessee State University and the School of Science and Engineering at the Oregon Graduate Institute, which prefer this type of intelligent control due to its real-time response and easy implementation on microcomputers.


To give a helicopter greater autonomy, the control engineering may be accompanied by a vision stage from which the controller receives references for monitoring and cruise. One of the major challenges in digital image processing is working across different scenarios, for example image sequences with changing backgrounds or with variable lighting levels. These variables lead many projects to fail or to incur large expenses compensating for irregularities in the images. Another great challenge is estimating the trajectory of an object. We initially considered motion-estimation algorithms, which calculate the motion vectors of one image over another, but abandoned this method: by its nature, calculating the motion vectors of an entire image requires a prior segmentation, whereas with a previous segmentation other methods can be used at lower computational cost with equally efficient results.


To carry out these tasks, unmanned autonomous vehicle control techniques must be developed alongside modern and robust communications systems, so that communication between the vehicle and the base station is carried out efficiently and effectively (optimally exploiting the available resources). In addition, supervised control of the vehicle is monitored by an operator, providing a support and monitoring system for better handling and more stable control of it.


II. OBJECTIVES

- Stabilize the helicopter with control algorithms using System Identification for control over the height and the yaw, pitch and roll movements, thereby placing the vehicle in hover.
- Model mathematically the dynamics of helicopter flight to know the variables that influence the stability of the vehicle.
- Implement and verify the operation of the electronic circuits: power circuits (actuators: motors and drives), sensor circuits, and the Freescale driver module.
- Implement a vision system on an autonomous robot or a UAV.
- Follow an object on the ground using a vision system.
- Control the movement of the Neurocopter to follow an object, obtaining a smooth movement at the ground station.
- Design a friendly user interface in which the trajectory of the object is shown.
- Achieve image segmentation with a variable background and illumination.
- Develop a low-cost system of the highest quality that can contribute to public safety.
- Develop efficient software using parallel processing technology (MATLAB-CUDA).
- Develop software allowing real-time supervision of the data emitted by the sensors, such as height, orientation and speed.
- Allow visualization of the absolute position of the vehicle using a GPS.
- Transmit the information received from the Neurocopter to the Ground Station.
- Allow Waypoint Navigation: through the selection of points in the navigation panel a route is created, so that the Neurocopter can follow it autonomously.
- Control the Neurocopter in both Stationary and Navigation Mode through a Neural Network and a Kalman Estimator.
- In Navigation Mode, be able to dodge obstacles using the information from the sensors.
- Connect the chopper circuits to perform the final tests involved.
- Develop new and innovative technologies using Neural Networks that can be applied in the control and telecommunications industries.

III. JUSTIFICATION

Currently there is little experience in Peru in the design and use of UAV prototypes; the UAV prototypes that have been developed do not achieve adequate stabilization and navigation, and their costs are very high. This is why we propose to implement a low-cost UAV prototype that can navigate and achieve an optimal flight for the different kinds of applications in which it is needed.


Math Level

We propose the design and construction of an Unmanned Aerial Vehicle (UAV) able to maintain a steady state in the air by means of computer algorithms. To make this possible, a mathematical analysis and model are necessary to describe the helicopter flight dynamics and identify the parameters involved in the stability of the vehicle. To control these parameters, we need signals that describe the flight and the changes of position of the helicopter, and then model them with transfer functions that allow us to integrate the dynamic system into a control system with state equations, fuzzy logic and neural networks, using Kalman filters for optimization. As a result, the Output Data can be obtained and processed with Digital Signal Processing in order to control the flight and stability of the helicopter.



Technical Level

For the technical specifications we want to achieve, we must consider all the characteristics of the prototype components, such as chassis materials, batteries, engines, weight of the devices, security systems, etc. The helicopter chassis materials must be lightweight and of high strength in order to achieve the required lift and avoid damage in a possible fall. The batteries must be lightweight and highly durable, LiPo being the best suited type. The engines must reach high revolutions so that, together with the blades, they can overcome the weight of the helicopter; the motor of choice is a brushless motor, because its structure achieves high revolutions, and it needs a drive to operate properly since it is a three-phase motor. All circuits will be embedded inside the helicopter and must be properly protected against possible circuit damage.


Economic Level

The UAV prototype to be developed is intended to be a low-cost product compared with market prices. Thus, the prototype will be available to enterprises and people interested in using the product in the multiple applications presented. Finally, this project is an invitation for companies, and especially Peruvian universities, to invest in research.


Social Level

This project could encourage the development of similar projects and interest in this area among students at school and university level. It would also be an incentive for radio control fans to manufacture their own autonomous vehicles, assigning special features and functions depending on their application or necessity.


IV. APPLICATIONS

The potential uses of Unmanned Aerial Vehicles in the military and civil industry are:

Car chase: The implemented system can be used in police pursuits to search for criminals in high-speed vehicles.

Surveillance and Rescue: For the constant monitoring of beaches, ensuring the welfare and safety of bathers.

Calibration of ships: It is proposed that an intelligent system be responsible for monitoring ships for the calibration of gun sights.

Agro-industrial applications: In the fields, it can be used to spray disinfectants and fertilizers, helping an efficient and effective distribution of the inputs required for agriculture.

Inspection: Inspection of high voltage power lines in remote locations for corrosion and other defects requiring repair.

Border Surveillance: Surveillance maneuvers over the perimeter of a secure facility (such as a prison) or the borders between countries, reporting suspicious and unusual activities.

Film: Execution of flight maneuvers that allow more accurate aerial photographs in environments inaccessible to the camera.

Natural disasters: Monitoring and precise distribution of chemicals to extinguish forest fires; search for survivors in buildings or houses affected by an earthquake, tsunami, etc.

Mining & Archaeology: Mapping of closed areas inaccessible to humans.

Road Traffic Control: Can be used as an air vehicle for road traffic control that determines the flow of traffic, so that drivers would be aware of the best routes to follow.


V. DEVELOPMENT PROJECT

MECHATRONIC TEAM:

A. Technical Features T-REX 500

- Length: 850 mm.
- Height: 310 mm.
- Main rotor blade length: 425 mm.
- Main rotor diameter: 970 mm.
- Tail rotor diameter: 200 mm.
- Weight (w/o power system): 935 g.
- Flying weight: 1500 g.
- Electrical power system.
- Carbon fiber blades.

Fig. 1 Main view of the T-REX 500





Fig. 2 View of the main rotor, carbon fiber blades and 120-degree plate of the T-REX 500

Fig. 3 View of the tail rotor of the T-REX 500

Fig. 4 Chassis view of the T-REX 500


B. Freescale QE128 Programming Module

1. ADC Interface

The ADC converter is based on a successive approximation register (SAR) that allows speeds up to 470,000 conversions per second and accuracy up to 12 bits.

ADC operation starts with the selection of a channel in ADCSC1:ADCH. Once the channel is selected, it is sampled by a sample-and-hold circuit, which charges a capacitor to the same voltage level as the input. The sampling time is selected by the register bit ADCFG:ADLSMP. When ADLSMP = 0, the sampling capacitor is connected to the input channel for 3.5 ADC clocks; when ADLSMP = 1, the sampling period extends to 23.5 ADC clocks. Once sampling is completed, the sampling switch is closed, connecting the SAR converter, which then starts the conversion of the acquired value.
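As an illustration of the quantization step, the ideal transfer function of a 12-bit converter can be sketched as follows. This is a generic model with an assumed 3.3 V full-scale reference, not the QE128 register interface:

```python
def adc_code(vin, vref=3.3, bits=12):
    """Ideal n-bit ADC transfer: map an input voltage onto one of
    2^bits - 1 steps and clamp to the valid code range.  vref = 3.3 V
    is an assumption matching the supply range mentioned later; the
    SAR timing details are ignored."""
    levels = (1 << bits) - 1               # 4095 for a 12-bit result
    code = round(vin / vref * levels)
    return max(0, min(levels, code))       # clamp below 0 V / above vref
```

One LSB then corresponds to roughly vref / 4095, about 0.8 mV at a 3.3 V full scale.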



Fig. 5 Uniform Quantization


2. Serial Interface

The serial communication interface (SCI) is an asynchronous serial interface based on baud-rate transmission. Communication is initiated when the start bit drives the line low, followed by eight data bits and an optional bit called the parity bit.

Fig. 6 Serial Frame Structure

The figure above shows the structure of a serial frame: its content and the number of bits.

The falling edge of the start bit synchronizes the transmission; a receiver configured with the same baud rate will be able to read the information. The operating principle is simple. There are two shift registers: one for transmission, which converts parallel information to serial, and another that converts serial data into parallel data. Both shift registers are synchronized by a clock signal derived from the bus clock.
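The frame described above can be sketched as a bit sequence. This is an illustrative helper of ours, assuming the usual LSB-first data order of an SCI/UART character:

```python
def serial_frame(data, parity=None):
    """Bit sequence of one SCI character: a start bit (low), eight
    data bits sent LSB-first, the optional parity bit, and a stop bit
    (high).  parity may be None, 'even' or 'odd'."""
    data_bits = [(data >> i) & 1 for i in range(8)]   # LSB first
    bits = [0] + data_bits                            # start bit pulls the line low
    if parity is not None:
        ones = sum(data_bits) % 2
        bits.append(ones if parity == 'even' else 1 - ones)
    bits.append(1)                                    # stop bit returns the line high
    return bits
```

For example, `serial_frame(0x55)` yields ten bits, and adding a parity bit makes the frame eleven bits long, matching the optional ninth bit in the text.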


3. PWM Modules

Pulse width modulation is based on the idea that the maximum voltage is applied to the load, but only for a fraction of the time (proportional to the duty cycle). The average voltage on the load is directly proportional to the duty cycle of the signal:

Vout = (D / 100) · Vmax        (1)

Where:
Vout: Average output voltage.
D: Duty cycle percentage (0-100 %).
Vmax: Maximum voltage applied to the load.

With the QE128, each PWM signal is generated using one TPM channel.

3.1 PWM Signal Generation

The PWM signal starts to be generated when the edge-aligned mode is selected (TPMSC:CPWMS = 0); the TPM counter counts up until it reaches the value of TPMCxV, at which point the flip-flop (FFout) is activated (Q = 1).

The frequency of the PWM signal can be calculated using the following formula:

F_PWM = F_TPMCK / (TPMMOD + 1)        (2)

Where:
F_PWM: PWM signal frequency in Hz.
F_TPMCK: Frequency of the TPM counter clock (TPMCK) in Hz.
TPMMOD: The value recorded in the TPMMOD register.

The pulse width (in seconds) can be calculated using the following formula:

T_Pulse_Width = TPMCxV / F_TPMCK        (3)

Where:
T_Pulse_Width: pulse width in seconds.
F_TPMCK: Frequency of the TPM counter clock in Hz.
TPMCxV: cycle value stored in the TPMCxV register.
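Equations (1)-(3) can be collected into small helpers. This is a sketch; the TPMMOD + 1 period follows the usual edge-aligned TPM convention and should be checked against the device manual:

```python
def pwm_avg_voltage(duty_pct, vmax):
    """Eq. (1): the average load voltage is the duty-cycle fraction
    of the maximum applied voltage."""
    return duty_pct / 100.0 * vmax

def pwm_frequency(f_tpmck, tpmmod):
    """Eq. (2): the counter runs from 0 to TPMMOD, so one PWM period
    lasts TPMMOD + 1 ticks of the TPM clock."""
    return f_tpmck / (tpmmod + 1)

def pwm_pulse_width(tpmcxv, f_tpmck):
    """Eq. (3): the output stays high for TPMCxV ticks of the TPM clock."""
    return tpmcxv / f_tpmck
```

For example, a 1 MHz TPM clock with TPMMOD = 19999 gives a 50 Hz servo signal, and TPMCxV = 1500 then gives a 1.5 ms pulse.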


3.2 Capture of PWM Signals

In this operation mode, the TPM module measures the timing of external events, in this case the signals obtained from the RC receiver of the helicopter. These events can be the measurement of the pulse width or of the frequency of the signal [1].

The next figure shows the capture of a square signal applied to the capture pin:

Fig. 7 Capture of a PWM Signal

First, the input capture is activated. Then the system waits for the external event, and the counter module starts counting on a falling or rising edge. The moment when the first edge of the external signal appears is defined as the start point (counter = 0) in order to measure the period of the external signal. At the second edge event, the frequency is the inverse of the counted value multiplied by the time base of the system.
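The frequency measurement described above reduces to one line. A sketch, where `count` is the counter value captured between two consecutive edges of the same polarity:

```python
def capture_frequency(count, f_bus):
    """Input-capture frequency measurement: `count` timer ticks elapse
    between two consecutive edges of the same polarity, each tick
    lasting 1 / f_bus seconds, so the signal period is count / f_bus
    and the frequency is its inverse, f_bus / count."""
    return f_bus / count
```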


4. SPI Protocol (Master-Slave)

The serial peripheral interface is a synchronous master-slave protocol structured into four basic lines: clock line, serial output, serial input and chip selection. The master device, in this case the microcontroller, always initiates communication. After establishing communication with the slave, the master begins clocking the output data. On each clock edge, one bit goes out and another comes in. The transfer is complete after eight clock edges, so the master sends a byte and receives a byte in return.

The SPI interface on the HCS08 microcontroller has a simple and efficient design, consisting of an 8-bit shift register, a clock generator and additional control interfaces. Each module can be enabled and disabled using the SPIC:SPE bit [2].


SPI master-slave communication follows these steps:

1. The master sends a byte of data, typically a command that the slave should interpret.
2. If the command has additional bytes, they are transmitted in sequence.
3. If the command involves a response from the slave, the master sends a dummy byte and simultaneously receives a byte corresponding to the desired data from the slave.
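The byte exchange over the two shift registers can be simulated directly. This is an illustrative model of the protocol, not driver code for the HCS08:

```python
def spi_transfer(master_byte, slave_byte):
    """Simulate one full-duplex SPI transfer: on every clock edge both
    8-bit shift registers shift out their MSB and shift in the bit
    arriving on the opposite line, so after eight edges the master and
    slave registers have exchanged contents."""
    m, s = master_byte, slave_byte
    for _ in range(8):
        mosi = (m >> 7) & 1            # bit leaving the master (MOSI line)
        miso = (s >> 7) & 1            # bit leaving the slave (MISO line)
        m = ((m << 1) & 0xFF) | miso   # master shifts in the slave's bit
        s = ((s << 1) & 0xFF) | mosi   # slave shifts in the master's bit
    return m, s
```

After the eight edges the master holds the slave's byte and vice versa, which is why step 3 above sends a dummy byte: it clocks the response out of the slave.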



The user can configure the data width as 8 or 16 bits and set up a buffer of up to 64 bits for the transmission and reception of information.

- MOSI/MOMI (Master Output Slave Input / Master Output Master Input): In full duplex mode, the transmission pin from the point of view of the master and the receiving pin from the point of view of the slave; it becomes a bidirectional pin in Single Wire Mode when the device is a master.

- MISO/SISO (Master Input Slave Output / Slave Input Slave Output): In full duplex mode, the receiving pin from the point of view of the master and the transmission pin from the point of view of the slave; it becomes a bidirectional pin in Single Wire Mode when the device is a slave.

- SPSCK (SPI Serial Clock): Synchronous clock signal pin. This clock is supplied by the master and is configured as an input on the slave.

- SS (Slave Select): Selector pin; a low level selects the slave via hardware. [2]


The following figure shows the diagram of the transmission formats of the SPI protocol:

Fig. 8 Transmission format diagram of the SPI protocol

SD memory supports a basic SPI interface for connection to embedded systems; the SD memory also has a maximum clock frequency of 25 MHz for communication. The following figure shows the wiring diagram for the SD card in SPI mode.

Fig. 9 SD Memory Pinout in SPI mode


C. Design of the IC: Power Control for Servo Drives, Acquisition of PWM signals, and the QE128 microcontroller

We implemented this card in order to adapt the PWM signals from the receiver during the flight tests, with the helicopter controlled by a professional flyer, for two reasons. The first is that the QE128 microcontroller only tolerates the 0-3.3 V range, while the receiver outputs PWM signals in the 0-5 V range. The second is that it is recommended to isolate the power and control modules, in order to prevent the microcontroller from receiving an unwanted current peak.


The board is composed of three TLP521-4 integrated circuits, which together make up ten optocouplers: five of them are used to capture the PWM signals and the rest to couple the output PWM signals from the microcontroller to the servos. An LM7805 integrated circuit, a 5 V voltage regulator, is also used to supply energy to the servomotors.


Furthermore, the general purpose of implementing this board is weight reduction in the prototype, since weight is an important factor in the helicopter flight: as the weight increases, the height reached will be lower, so the main rotor engine needs to work harder, demanding more current. For this reason we reduced the card for the PEMICRO QE128 module to half its size (approx. 48 g).



The card features the respective sockets for each of the pins of the microcontroller, as well as output pins for the use of the different ports and their connections to the PWM capture circuit. It also has a MAX232 integrated circuit, which is used to convert UART levels to serial (RS-232) levels for communication with the radio modem.

The following figure shows the schematic block diagram of the card:

Fig. 10 Schematic block diagram of the acquisition card to control PWM and servo motors

The following image shows the final PCB layout of the designed card:

Fig. 11 Final PCB circuit board

D. Data sensors and microcontroller

1. LPR530AL Gyroscope

The main characteristics of this sensor are the following:

- Supply voltage between 2.7 and 3.6 VDC.
- Operating current: 6.8 mA.
- The circuit includes the required filters.
- Two types of outputs: not amplified (x1) and amplified (x4).
- Measurement range: ±1200 °/s (unamplified outputs), ±300 °/s (amplified outputs).
- Sensitivity: 3.33 mV/(°/s) (unamplified outputs), 0.33 mV/(°/s) (amplified outputs).
- Output voltage (no movement): 1.23 V.
- Bandwidth: 140 Hz.

Fig. 12 LPR530AL Gyroscope



The measured signal from the gyro is proportional to the angular velocity, expressed in voltage levels, and must be converted to units of degrees per second. For this, the following relationship is used:

ω = (Vout − VzeroRate) / Sensitivity        (4)

Where:
ω: Angular velocity in degrees per second.
VzeroRate: Zero-rate voltage level. For this sensor the value is 1.23 V @ 3 V.
Sensitivity: Gyro sensitivity. For this sensor the sensitivity is 3.33 mV/(°/s) @ 3 V.
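Relationship (4) is straightforward to code; a minimal sketch using the zero-rate level and unamplified sensitivity quoted above:

```python
def gyro_rate(v_out, v_zero_rate=1.23, sensitivity=3.33e-3):
    """Eq. (4): angular velocity in degrees per second from the
    LPR530AL output voltage, using the zero-rate level (1.23 V) and
    the unamplified sensitivity (3.33 mV per deg/s) listed above."""
    return (v_out - v_zero_rate) / sensitivity
```

An output of exactly 1.23 V maps to 0 °/s; each 3.33 mV above or below that level corresponds to ±1 °/s.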



2. MMA7260Q Accelerometer

This sensor measures acceleration; the MMA7260Q accelerometer has the following features:

- Selectable sensitivity: 1.5g/2g/4g/6g.
- Low current consumption: 500 µA.
- Sleep mode: 3 µA.
- Low voltage operation: 2.2 V - 3.6 V.
- 6 mm × 6 mm × 1.45 mm QFN package.
- High sensitivity (800 mV/g @ 1.5g).
- Bandwidth: 350 Hz (X, Y), 150 Hz (Z).
- Quick rise time.
- Low-pass filter for signal conditioning.
- Robust design, high shock survivability.
- Pb-free terminations.
- Environmentally preferred package.
- Low cost.

Fig. 13 MMA7260Q Accelerometer

The sensor provides the measured acceleration as voltage levels, so it is required to convert the voltage level to units of meters per second squared (m/s²). The following relationship applies for the transformation of units:

a = 9.81 · (Vout − VzeroG) / Sensitivity        (5)

Where:
a: acceleration in meters per second squared.
VzeroG: Zero-g voltage level. For this sensor the value is 1.65 V @ 3.3 V.
Sensitivity: Accelerometer sensitivity. For this sensor the value is 800 mV/g @ 1.5g and 3.3 V.
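The conversion can be sketched as follows. The factor g = 9.81 m/s², used to scale the result from g units, is our assumption, consistent with the m/s² units the text asks for:

```python
G = 9.81  # m/s^2 per g; assumed conversion factor, not stated in the text

def accel_ms2(v_out, v_zero_g=1.65, sensitivity=0.800):
    """Eq. (5): the offset-corrected voltage divided by the sensitivity
    (800 mV/g in the 1.5 g range) gives the acceleration in g, which is
    then scaled to meters per second squared."""
    return (v_out - v_zero_g) / sensitivity * G
```

At rest with the axis vertical, the output sits 0.8 V away from the 1.65 V zero-g level, which the helper maps to ±9.81 m/s².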



3. Ultrasonic LV-MAXSONAR-EZ

The ultrasonic sensor is used to measure the flying height. Its technical specifications are the following:

- Resolution: 1 inch (2.54 cm).
- Current consumption: 2 mA @ 3.3 V.
- Bandwidth: 20 Hz.
- Small size: less than one cubic inch.
- Low cost.
- Power-up calibration.

Fig. 14 Ultrasonic LV-MAXSONAR-EZ sensor

As with the other sensors, the measured signal is in voltage levels, and the units must be transformed to centimeters or meters. As there is no predefined relationship from the manufacturer, a conversion rule must be obtained via experimental measurements, as detailed in subsequent sections (section F). As a result we have the following relationship:

h = (x − 3.2522) / 3.8555        (6)

Where:
h: Height in cm.
x: Height measured in millivolts.
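Inverting the experimental fit of section F gives a direct conversion; the coefficients below are the ones from the linear regression reported with the distance-voltage plot:

```python
def height_cm(v_mv):
    """Eq. (6): invert the experimental fit
    Voltage(mV) = 3.8555 * Distance(cm) + 3.2522 from section F
    to recover the height in centimeters from a reading in mV."""
    return (v_mv - 3.2522) / 3.8555
```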



4. QE128 Freescale Microcontroller

To control the different interfaces of the helicopter, we use the 32-bit MCF51QE128 microcontroller. The technical specifications of the controller can be seen in the following figure:

Fig. 15 Specifications of the MCF51QE128

Seven ADC inputs are required for the helicopter flight: three for processing the acceleration signals in X, Y and Z (ports PTA1, PTA7 and PTA6 are assigned, respectively), three for receiving the gyroscope signals for the Pitch, Yaw and Roll angles (ports PTF7, PTF6 and PTF5 are assigned, respectively), and finally port PTF3, assigned as the input for the height sensor.


Furthermore, we need two SCI serial communication interfaces: SCI2 (ports PTC6 and PTC7) receives the GPS information, while SCI1 is used for communication with the radio modem, transmitting the information collected from the sensors and receiving information from the base station controller.

To control the actuators we need four PWM outputs: the Pitch tilt servo (port PTC0), the Roll servo (port PTC1), the main rotor (port PTA0) and the tail rotor (port PTC0).

The diagram below shows the inputs and outputs of the microcontroller connected to the different peripherals (sensors and actuators):

Fig. 16 System inputs and outputs diagram



E. System Identification by Neural Control

System identification by neural networks is based on modeling linear and nonlinear systems with an artificial neural structure, which calculates an output from the multiplication of an input vector by a weight vector passed through an activation function. We implement a neural controller by modeling the Yaw, Pitch and Roll movements of the helicopter with neural networks, in order to objectively understand and imitate the real properties and characteristics observed in a real flight system.

To identify the flight control, all signals involved in the flight were recorded with the helicopter in the air: the sensor data and the PWM signals for the main rotor, the right and left Roll servos, the Pitch servo and the Yaw servo. All signals were measured and saved at the same time (in frames), because the movement of the helicopter plate does not depend on only one signal.


1. Neural Network Development

After the capture, filtering and adaptation of the data, the neural network that will act as the controller of the systems on the helicopter is designed. The following figure shows the general outline of the neural controller:

Fig. 17 Neuro-controller diagram


To obtain the parameters of the neuro-controller, we need to define the training patterns, which are the inputs of the neuro-controller:

acelX: Linear acceleration, X axis (m/s²).
acelY: Linear acceleration, Y axis (m/s²).
acelZ: Linear acceleration, Z axis (m/s²).
velocAngX: Angular velocity, X axis (°/sec).
velocAngY: Angular velocity, Y axis (°/sec).
velocAngZ: Angular velocity, Z axis (°/sec).
AngX: Angle X in degrees.
AngY: Angle Y in degrees.
Height: Height of the helicopter above the ground in meters.

The output of the neuro-controller is also defined:

u: PWM duty cycle in %.



Fig.
18

Neuro Controller Inputs and Outpust diagram

The design of the neural network model is the
MLP (Multi Layer Perceptron) whose main
characteristics are its good performance for

modeling nonlinear systems, process

a lot of
information
in parallel. The following figure

shows the design of MLP Network.




Fig. 1
9
1

Design of the Multi Layer Perceptron Neuronal
Network

Following figure details the variables involved in
the network:



Fig.
20

Variables in MLP Network

The MLP follows these equations:

n1 = W1 · p + b1        (7)

a1 = f1(n1)        (8)

n2 = W2 · a1 + b2        (9)

a2 = f2(n2)        (10)

where p is the input vector, W1 and W2 are the weight matrices, b1 and b2 the bias vectors, and f1 (sigmoid) and f2 (linear) the activation functions of the hidden and output layers.
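Equations (7)-(10) can be sketched as a plain forward pass. The random weights below are placeholders with the controller's shape, standing in for the trained W1, W2, b1, b2:

```python
import math
import random

def sigmoid(n):
    """Logistic activation used by the hidden neurons."""
    return 1.0 / (1.0 + math.exp(-n))

def mlp_forward(p, W1, b1, W2, b2):
    """Eqs. (7)-(10): n1 = W1*p + b1, a1 = f1(n1) with f1 sigmoid,
    n2 = W2*a1 + b2, a2 = f2(n2) with f2 linear (a2 = n2)."""
    n1 = [sum(w * x for w, x in zip(row, p)) + b for row, b in zip(W1, b1)]
    a1 = [sigmoid(n) for n in n1]
    n2 = [sum(w * a for w, a in zip(row, a1)) + b for row, b in zip(W2, b2)]
    return n2                                  # linear output layer

# Placeholder weights: 6 inputs -> 5 sigmoid hidden neurons -> 1 linear output.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(5)]
b1 = [0.0] * 5
W2 = [[random.uniform(-1, 1) for _ in range(5)]]
b2 = [0.0]
u = mlp_forward([0.0] * 6, W1, b1, W2, b2)     # one PWM duty-cycle output
```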



Network training is done using the Backpropagation algorithm. It consists of a number of iterations intended to drive to zero the error between the real controller and the neuro-controller. In this process the values of the weights and biases are updated to describe the behavior of the controller.

Backpropagation is an optimization technique that attempts to minimize a cost function. The mean square error is the most commonly used cost function. It is defined as follows:



J = (1/N) · Σ e(k)²        (11)

e(k) = t(k) − a(k)        (12)

where t(k) is the target output (the real controller signal) and a(k) is the network output.


The Backpropagation algorithm is defined by the following update equation:

w(n+1) = w(n) − η · ∂J/∂w        (13)

Where:
w: Parameter to be updated (weights and biases).
J: Cost function.
n: Iteration number.
η: Learning factor.
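A minimal numerical sketch of equations (11)-(13), fitting a single linear weight by gradient descent on the mean square error. The toy data and learning rate are illustrative, not from the flight tests:

```python
def mse(targets, outputs):
    """Eqs. (11)-(12): mean square of the errors e(k) = t(k) - a(k)."""
    return sum((t - a) ** 2 for t, a in zip(targets, outputs)) / len(targets)

def backprop_step(w, grad, eta):
    """Eq. (13): w(n+1) = w(n) - eta * dJ/dw."""
    return w - eta * grad

# Toy problem: fit a single linear weight so that a = w * x matches t = 2x.
xs = [0.5, 1.0, 1.5, 2.0]
ts = [2.0 * x for x in xs]
w = 0.0
for _ in range(200):
    outs = [w * x for x in xs]
    # dJ/dw for J = (1/N) * sum (t - w*x)^2
    grad = sum(-2.0 * (t - a) * x for t, a, x in zip(ts, outs, xs)) / len(xs)
    w = backprop_step(w, grad, eta=0.1)
# w converges toward 2.0 and the cost J approaches zero.
```

The full network update works the same way, except that the gradient is propagated backwards through equations (7)-(10) layer by layer.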


The NNtool toolbox is used for the modeling of the neural network with the Backpropagation algorithm, obtaining a model with 5 sigmoid neurons in the hidden layer and one linear neuron in the output layer. Once the training patterns are defined and the nntool parameters are configured, we proceed to train the neural network.


F. Experimental Tests: Sensor measurements

1. Ultrasonic Sensor

As mentioned in previous sections, the height control is implemented with the ultrasonic sensor, so it is necessary to know the data that the sensor gives at certain distances (heights). The measurements allowed us to obtain the relationship between distance and voltage. The measurements were taken at the analog output of the ultrasonic sensor, which provides values in volts, and were repeated for different distances with an incremental step of 5 cm per measurement. As a result, we know that the range of the sensor extends to distances of up to 4.35 meters. Voltage vs. Distance is then plotted with the experimental data. Finally, the graph is approximated by a linear equation because of the trend of the data.




Fig. 21 Ultrasonic measurement: Distance vs. Voltage



2. Transfer function: Ultrasonic sensor

The experimentally measured distance (cm) and voltage (mV) allow us to propose an input-output system. With both parameters known, it is possible to determine the transfer function; to find it, we use the system identification method.

Input data: Distance (cm)
Output data: Voltage (mV)
Sampling interval: 1 ms

Fig. 22 Transfer Function Diagram for the ultrasonic sensor

To estimate the transfer function, linear parametric models were preferred. Among this group of models, the most manageable one was selected: the autoregressive model with exogenous input (ARX). First order was enough to estimate our system.


The transfer function in discrete time (Z domain) has the first-order ARX form:

G(z) = b1·z⁻¹ / (1 + a1·z⁻¹)        (14)



The linear fit of the experimental data in Fig. 21 is Voltage (mV) = 3.8555 · Distance (cm) + 3.2522, with R² = 0.9988.

Converting to continuous time, the system has the corresponding first-order transfer function in the Laplace domain:

G(s) = K / (τ·s + 1)        (15)


G. Data Capture of sensors and actuators

To simulate both the system and the controller, we must have a considerable amount of information about the variables involved in the dynamics of flying. Therefore, it was necessary to capture all the signals from the sensors and actuators in a real flight. The frame structure is composed of the seven sensor signals: accelerometers in x, y, z; gyroscopes in x, y, z; and the ultrasonic sensor. The second part of the frame is composed of the five control signals of the helicopter flight actuators: Pitch servo, Yaw servo, two Roll servos, and the main rotor engine.

We performed the test flight of the helicopter in hover. In these tests the control loop is closed by the flight engineer; the objective is then to imitate the control carried out manually by the expert pilot through identification and neural control algorithms.


Figures 23, 24 and 25 show the signals of the accelerometers in meters per second squared, Kalman-filtered.

Fig. 23 Linear acceleration in the x-axis

Fig. 24 Linear acceleration in the y-axis

Fig. 25 Linear acceleration in the z-axis


From the accelerometer signals, the Pitch and Roll angles can be calculated. Figures 26 and 27 show the x and y angles, respectively:
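A common way to obtain these tilt angles from static accelerometer readings is the gravity-vector estimate below. This standard formula is our assumption (valid when gravity dominates the measured acceleration, as near hover), since the text does not give the exact computation:

```python
import math

def pitch_roll(ax, ay, az):
    """Tilt angles in degrees from one accelerometer sample (m/s^2),
    assuming gravity is the only acceleration acting on the sensor."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll
```

With the sensor level (only the z-axis seeing gravity) both angles are zero; tilting fully onto the x-axis gives a 90° pitch.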



Fig. 26 Pitch Angle

Fig. 27 Roll Angle


Figures 28, 29 and 30 show the signals of the gyroscopes, measured in degrees per second and Kalman-filtered.

Fig. 28 Angular velocity in the x-axis

Fig. 29 Angular velocity in the y-axis

Fig. 30 Angular velocity in the z-axis


Figure 31 shows the height of the helicopter with respect to the floor, measured by the ultrasonic sensor in centimeters and Kalman-filtered.

Fig. 31 Height of the helicopter




14

Figures 32, 33, 34, 35 and 36 show the control signals of the servo motors and the main rotor. To capture them, we used the Freescale PWM capture mode to detect the duty cycle.




Fig. 32 Duty cycle of the PWM, left-side roll servo of the plate

Fig. 33 Duty cycle of the PWM, right-side roll servo of the plate

Fig. 34 Duty cycle of the PWM, pitch servo of the plate

Fig. 35 Duty cycle of the PWM, main rotor

Fig. 36 Duty cycle of the PWM, yaw servo





H. Neural Network Training for Control and System Identification


1. Roll Movement

The network designed and trained for the roll movement:




Fig. 37 Neural network design, Roll movement



The six inputs at the input layer of the network are the following signals from the sensors:

- Linear acceleration, X axis, in m/s².
- Linear acceleration, Y axis, in m/s².
- Linear acceleration, Z axis, in m/s².
- Angle, Y axis, in degrees (°).
- Angular velocity, Y axis, in °/s.
- Height of the helicopter with respect to the floor, in cm.


It was decided to use five sigmoid activation neurons at the hidden layer, to limit the computational load on the microcontroller in the physical implementation of the controller.


Finally, the variable of the linear output layer:

- PWM duty cycle, roll servo.


The training was done using the Matlab NNTool with the backpropagation algorithm. The training process is shown in the figure below:




Fig. 38 NN training process, Roll movement


After the training reaches its goal, which is to reduce the error to a given value near zero, the weights W1 and W2 and the bias values b1 and b2 are saved.

Finally, with the saved values of the weights, it is necessary to verify the result of training the network. The following figure shows the result of training by comparing the neural control and the real control (the PWM signal measured in flight tests).




Fig. 39 Result of the neuronal controller for the Roll movement
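The verification step can be sketched as a plain forward pass through the saved weights; the weight values below are hypothetical placeholders for the saved W1, b1, W2, b2 (in the project they come from the NNTool training):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def nn_forward(x, W1, b1, W2, b2):
    """Two-layer net as described: sigmoid hidden layer, linear output."""
    a1 = sigmoid(W1 @ x + b1)   # 5 hidden neurons
    return W2 @ a1 + b2         # PWM duty cycle for the roll servo

# Hypothetical saved weights: 6 sensor inputs -> 5 hidden -> 1 output.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 6)), rng.normal(size=5)
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)

x = np.array([0.1, -0.2, 9.8, 1.5, 0.3, 80.0])  # ax, ay, az, angle, rate, height
y = nn_forward(x, W1, b1, W2, b2)
```

Running the logged sensor frames through this forward pass and overlaying the output on the recorded PWM signal reproduces the comparison of Fig. 39.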

2. Pitch Movement

The network designed and trained for the pitch movement:




Fig. 40 Neural network design, Pitch movement

The six inputs at the input layer of the network are the following signals from the sensors:

- Linear acceleration, X axis, in m/s².
- Linear acceleration, Y axis, in m/s².
- Linear acceleration, Z axis, in m/s².
- Angle, X axis, in degrees (°).
- Angular velocity, X axis, in °/s.
- Height of the helicopter with respect to the floor, in cm.


It was decided to use five sigmoid activation neurons at the hidden layer.





Finally, the variable of the linear output layer:

- PWM duty cycle, pitch servo.


The training was done using the Matlab NNTool with the backpropagation algorithm. The training process is shown in the figure below:



Fig. 41 Training process, Pitch movement


Finally, with the saved values of the weights, it is necessary to verify the result of training the network.



Fig. 42 Result of the neuronal controller for the Pitch movement


3. Yaw Movement

The network designed and trained for the yaw movement:



Fig. 43 Neural network design, Yaw movement

The five inputs at the input layer of the network are the following signals from the sensors:

- Linear acceleration, X axis, in m/s².
- Linear acceleration, Y axis, in m/s².
- Linear acceleration, Z axis, in m/s².
- Angular velocity, Z axis, in °/s.
- Height of the helicopter with respect to the floor, in cm.


Finally, the variable of the linear output layer:

- PWM duty cycle, yaw servo.


The training process is shown in the figure below:




Fig. 44 Training process, Yaw movement

Finally, with the saved values of the weights, it is necessary to verify the result of training the network.







Fig. 45 Result of the neuronal controller for the Yaw movement


4. Height Movement

The network designed and trained for the height movement:




Fig. 46 Neural network design, Height movement

The two inputs at the input layer of the network are the following signals from the sensors:

- Angular velocity, Z axis, in °/s.
- Height of the helicopter with respect to the floor, in cm.


Finally, the variable of the linear output layer:

- PWM duty cycle, main rotor.



The training process is shown in the figure below:




Fig. 47 Training process, Height movement

Finally, with the saved values of the weights, it is necessary to verify the result of training the network.


Fig. 48 Result of the neuronal controller for the Height movement


CONTROL TEAM


The development of autonomous scale helicopters responds to the need for greater flexibility, agility and simplicity of operation, which are not characteristic of ordinary helicopters. However, they present highly nonlinear flight dynamics and high sensitivity to control inputs and disturbances. Adding the fact that helicopters present different characteristics for each flight mode (hover and cruise) [3], the development and implementation of an intelligent control system becomes a critical factor for the deployment of this kind of air vehicle. The control stage consists of designing a neural controller that learns from a Linear Quadratic Gaussian (LQG) controller with Kalman filtering and estimation, to ensure that the helicopter remains in hover and can move around its yaw axis according to reference inputs.


The stabilization and control of a scale helicopter has been designed using different techniques. The first design method consisted of tuning the controller parameters empirically. This trial-and-error technique for designing an acceptable control system did not match the dynamic behavior of the helicopter, which is a complex system with multiple inputs and multiple outputs (MIMO). Therefore, more advanced approaches to the development of a control system require a precise helicopter model that represents its complex dynamics [3].


The first step in the controller development is to obtain a mathematical model that represents the flying dynamics of the aircraft. However, due to the complex characteristics of the helicopter, such models are far from reality, and therefore a controller designed from such a model does not work in real life. For this reason we appeal to the System Identification method, in which, given input and output vectors, we can obtain an approximate model of the real system without knowing its physics.


To identify the approximate model of the scale helicopter, the state variables that govern the behavior of the vehicle in hover are defined, in order to distinguish those associated with a sensor from those that need to be estimated. The state variables are shown in Table 1.


Symbol | Meaning
ax | Longitudinal acceleration (m/s²)
ay | Lateral acceleration (m/s²)
az | Vertical acceleration (m/s²)
p | Roll rate (rad/s)
q | Pitch rate (rad/s)
r | Yaw rate (rad/s)
u | Longitudinal velocity (m/s)
v | Lateral velocity (m/s)
w | Vertical velocity (m/s)
ϕ | Roll angle (rad)
θ | Pitch angle (rad)
ψ | Yaw angle (rad)
h | Height (m)

Table 1. State variables


The five PWM signals driving the actuators, which allow the movement of the helicopter, are considered the input signals of the system.


The outputs are defined by the state variables measured by sensors. The IMU provides the longitudinal acceleration (ax), lateral acceleration (ay), vertical acceleration (az), and the angular velocities of roll (p), pitch (q) and yaw (r); the ultrasonic sensor provides the height (h). The state variables that are not measured by these sensors (u, v, w, ϕ, θ, ψ) are estimated using a Kalman filter.


As the helicopter presents nonlinear dynamics, linear identification methods fail to accurately approximate the real helicopter model. However, if the data acquisition is performed on a closed-loop system, the nonlinear dynamics are removed. In this case, the data is obtained from the helicopter controlled by an expert who maintains the aircraft at hover in a trim condition [4].

To perform the system identification process, the first step is the collection of input and output data of the system, with which, through a particular identification method, it is possible to obtain transfer functions that represent the system behavior.


In this project, the method of 'Linear Regression with Least Squares using QR decomposition' [5] is used to approximate the real system with a second-order transfer function, since any system can be expressed by that kind of function. The linear regression model can be obtained from a second-order transfer function as follows:

$y(n) = -a_1 y(n-1) - a_2 y(n-2) + b_1 u(n-1) + b_2 u(n-2)$

Rewriting the equation above and adding an error term representing the difference between the real system and the approximate system, the linear regression model is obtained:

$y(n) = \phi(n)^T \theta + e(n)$    (20)

Where:

y(n): available output data
ɸ(n): regression vector
θ: unknown parameters
e(n): error term


And:

$\phi(n) = [-y(n-1),\ -y(n-2),\ u(n-1),\ u(n-2)]^T, \qquad \theta = [a_1,\ a_2,\ b_1,\ b_2]^T$

Considering that there have been 'N' data samples, the expression above can be rewritten as follows:

$Y = \Phi \theta + E$

According to the least squares sense, it is necessary to minimize:

$J(\theta) = \sum_{n=1}^{N} e(n)^2 = (Y - \Phi\theta)^T (Y - \Phi\theta)$    (25)



A method of estimating the unknown parameters so as to reduce the square error is Linear Regression with Least Mean Square Estimation using the pseudo-inverse:

$\hat{\theta} = (\Phi^T \Phi)^{-1} \Phi^T Y$    (26)


However, the inverse of $\Phi^T\Phi$ can represent a problem, because this matrix can be singular or very close to singular. Therefore, in this paper the QR decomposition is applied, and it is proved that this method is computationally more efficient. This method states that the $\Phi$ matrix can be separated into an orthogonal matrix Q and an upper triangular matrix R:

$\Phi = QR$    (27)

The QR factorization of the matrix $\Phi$ is computed in Matlab by the command:

[Q, R] = qr(Phi)    (28)


The solution of the estimated parameters would be given by:

$\hat{\theta} = R_0^{-1} z_1$    (29)


Knowing that:

$R = \begin{bmatrix} R_0 \\ 0 \end{bmatrix}, \qquad Q^T Y = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}$    (30)

Where:

R₀: matrix of 'p' rows and 'p' columns.
z₁: matrix of 'p' rows and one column.
p: number of unknown parameters.
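The identification procedure above can be sketched as follows; the second-order system and its coefficients are hypothetical stand-ins for a real flight-log channel:

```python
import numpy as np

# Hypothetical input/output records; in the project these come from
# the flight logs (one PWM input u and one sensor output y).
rng = np.random.default_rng(2)
u = rng.normal(size=200)
y = np.zeros(200)
for n in range(2, 200):                     # "true" 2nd-order system
    y[n] = 1.2*y[n-1] - 0.5*y[n-2] + 0.3*u[n-1] + 0.1*u[n-2]

# Regression matrix Phi: rows [-y(n-1), -y(n-2), u(n-1), u(n-2)].
N = len(y)
Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1], u[0:N-2]])
Y = y[2:N]

# Least squares via QR: solve R0 theta = Q^T Y instead of forming
# the possibly ill-conditioned product Phi^T Phi of the pseudo-inverse.
Q, R = np.linalg.qr(Phi)                    # reduced QR factorization
theta = np.linalg.solve(R, Q.T @ Y)         # estimates [a1, a2, b1, b2]
```

On this noise-free example the estimate recovers a1 = -1.2, a2 = 0.5, b1 = 0.3, b2 = 0.1 exactly, illustrating why the QR route is preferred over inverting $\Phi^T\Phi$.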



After the system identification method is applied,
a vector of unknown parameters is obtained.
These parameters are the coefficients of the
transfer function that relates the input and output
data used in the identification process.


After identifying a transfer function for each input/output combination, a Transfer Function Matrix is obtained. This matrix consists of 35 transfer functions, because 5 inputs and 7 outputs are part of the helicopter system. The transfer matrix can be converted into a state-space model by the Matlab function 'ss'.



Fig 49. 7x5 Matrix Transfer Function

Fig 50. Converting the Matrix Transfer Function to State-Space Matrices
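A per-channel conversion can be sketched with `scipy.signal.tf2ss`, the SISO counterpart of the Matlab 'ss' conversion; the transfer-function coefficients are hypothetical:

```python
import numpy as np
from scipy.signal import tf2ss

# Hypothetical identified channel H(z) = (0.3 z + 0.1) / (z^2 - 1.2 z + 0.5).
# Each SISO entry of the 7x5 transfer matrix can be converted separately
# like this and the results assembled into one large state-space model.
A, B, C, D = tf2ss([0.3, 0.1], [1.0, -1.2, 0.5])
poles = np.linalg.eigvals(A)   # poles of the identified channel
```

For a second-order channel this yields 2x2 A matrices; stability of each channel can be checked from the pole magnitudes before assembling the full model.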


This state-space model does not consider the state variables that are not associated with a sensor (but are necessary to control the helicopter in hover), so the state-space matrices are resized according to basic physical knowledge of the system.






Fig 51. Resized State-Space Matrices


The control design stage is done once the system identification stage is completed. The controller takes the height as a reference in hover and, in order to rotate about the yaw axis, the yaw reference input is also needed. In this paper, a supervised Multilayer Perceptron (MLP) neural network controller with backpropagation is designed to control the helicopter.


Since the learning of the neural network is supervised [4], a controller that provides the targets for the neural network is needed:



Fig 52. Supervised Neural Network


The chosen controller is the Linear Quadratic Gaussian (LQG), which is the optimal controller obtained as the combination of an optimal LQR state-feedback gain with feedback of estimates from an optimal LQE state estimator, which is, in practice, the Kalman filter [7]. The use of LQG makes possible the tracking of any variable even in the presence of Additive White Gaussian Noise (AWGN).



Fig 53. Linear Quadratic Gaussian Controller Diagram

Fig 54. Linear Quadratic Gaussian Controller Model


After designing the LQG controller, the state-space matrices can be resized, and the stability of the new system is given by the poles of the new 'A' matrix.
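The LQR gain that supplies the targets can be sketched by iterating the discrete Riccati equation; the two-state model below is a hypothetical example, not the identified helicopter model:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain via iteration of the Riccati difference
    equation until the solution settles."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical 2-state discrete model (a sampled double integrator).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))

# Closed-loop poles of A - B K should lie inside the unit circle.
poles = np.linalg.eigvals(A - B @ K)
```

The closed-loop stability check here is the same pole test described above for the resized 'A' matrix.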


The neural network controller tries to identify the controller K obtained by the Linear Quadratic Regulator (LQR) method. It is important to notice that the inputs to the neural network are the outputs of the Kalman filter. The chosen configuration is composed of two layers: a sigmoid hidden layer and a linear output layer [6]:




Fig 55. Neural Network Configuration





Given this configuration, the equations that govern the neural network are established:

$a_1 = \mathrm{sigmoid}(W_1 x + b_1), \qquad y = W_2 a_1 + b_2$

It is necessary to establish a quadratic cost function in order to penalize the error and reduce the energy that is coupled to the system:

$J = \tfrac{1}{2}(t - y)^2$
One of the suitable algorithms for updating the weights (the learning of the neural network) is the backpropagation algorithm, which involves the development of partial derivatives. The learning algorithm of the neural network is presented below.

Updating the weights of the hidden layer:

$W_1 \leftarrow W_1 - \eta \dfrac{\partial J}{\partial W_1}$    (38)

Updating the weights of the output layer:

$W_2 \leftarrow W_2 - \eta \dfrac{\partial J}{\partial W_2}$    (41)
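The update rules can be sketched as one gradient-descent step for the sigmoid-hidden, linear-output network; the learning rate, weight values and target below are hypothetical:

```python
import numpy as np

def backprop_step(x, t, W1, b1, W2, b2, eta=0.05):
    """One backpropagation update for a sigmoid-hidden, linear-output net.

    Minimizes the quadratic cost J = 0.5 * (t - y)^2.
    """
    a1 = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # hidden activations
    y = W2 @ a1 + b2                            # linear output
    e = y - t                                   # output error
    # Output layer gradients (linear activation).
    dW2 = np.outer(e, a1)
    db2 = e
    # Hidden layer gradients through the sigmoid derivative a1*(1-a1).
    delta1 = (W2.T @ e) * a1 * (1.0 - a1)
    dW1 = np.outer(delta1, x)
    db1 = delta1
    return W1 - eta*dW1, b1 - eta*db1, W2 - eta*dW2, b2 - eta*db2

# Repeated updates on one pattern drive the cost toward zero.
rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(5, 6)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)) * 0.1, np.zeros(1)
x, t = rng.normal(size=6), np.array([0.7])
for _ in range(200):
    W1, b1, W2, b2 = backprop_step(x, t, W1, b1, W2, b2)
```

In the project the inputs x are the Kalman filter outputs and the targets t come from the LQR control signal.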








COMPUTER VISION


The system has a computer vision sub-system with the ability to track a specific mark placed on an object in real time; the user also has the capacity to improve the image according to the brightness level of the scene.


The sub-system has a second operation mode, capable of detecting birds in the sky, addressing the problem of birds at airports, which prevent planes from taking off.

This sub-system receives the video stream at the Ground Station from a wireless camera installed on the Neurocopter; the images are processed and then references are sent to the control sub-system.

For the processing to be in real time, the system uses an NVIDIA GTX 465 video card with CUDA technology, because studies certify that its use can accelerate the operations by approximately four times [8].


A. Color Spaces

Color models allow a better understanding of color and make it possible to identify and manipulate colors. We decided to use the YCbCr color space, which belongs to the Luminance plus Chrominance color model.


1. YCbCr Color Space

This color space is usually used in digital video formats and photographs. Y represents the luminance and its range is from 0 to 255. Cb and Cr represent the chrominance of the blue and red components.

This space separates the luminance from the chrominance in order to store the luminance with a better resolution or transmit it with a larger bandwidth, while the chrominance components can be subsampled, compressed or treated separately.

The YCbCr color space derives from the RGB space corrected by the gamma factor, so the algorithm [9] used to transform one color space into the other is:

$\begin{bmatrix} Y' \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.0 \\ 112.0 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix}$    (44)-(45)

Where R', G' and B' are the gamma-corrected components. Usually, for NTSC video, the gamma factor is 2.2, and to generate the 24-bit RGB space the inverse transformation is used:

$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = M^{-1}\left( \begin{bmatrix} Y' \\ Cb \\ Cr \end{bmatrix} - \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \right)$, with M the matrix above    (46)



B. Image Acquisition

The images are captured by a wireless camera and sent in the 2.4 GHz band. The images are sent in the UYVY format, which is YCbCr 4:2:2 based on NTSC; that means a sample of Y is taken at every pixel, while samples of Cb and Cr are taken every two pixels. In this way, the bandwidth needed for the transmission is reduced. However, in order to display the image, it must be transformed from the YCbCr format to RGB.



C. Image Processing

Due to the noise present in the image, it is necessary to use a median filter. This filter removes salt-and-pepper noise and preserves the image borders. Tests demonstrate that this filter works better than the mean filter, because the mean filter does not remove impulsive noise, due to the averaging of the window values.

The median filter consists of applying a transformation to an image through an operator. It is a spatial filter that replaces the central value of the window with the median of the values of the window [10].


Fig. 56 Sample matrix

In order: 0, 2, 3, 3, 4, 6, 10, 19, 97

Fig. 57 Median value
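A minimal sketch of the 3x3 median filter, reproducing the window of Figs. 56-57 (the impulse 97 is replaced by the median 4):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter; replaces each interior pixel with the median
    of its window (borders left unchanged for brevity)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# The sample window from Fig. 56: sorted values 0, 2, 3, 3, 4, 6, 10, 19, 97.
img = np.array([[0,  2,  3],
                [3, 97,  4],
                [6, 10, 19]])
print(median_filter3(img)[1, 1])  # -> 4
```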


Once the image is filtered, we evaluate the pixels to get a similarity rate between chrominances, using the Euclidean distance to find those pixels related to the tracked color. The nonlinear formula between $(Cb, Cr)$ and the tracked color $(Cb_0, Cr_0)$ is presented below:

$d = \sqrt{(Cb - Cb_0)^2 + (Cr - Cr_0)^2}$    (47)


In this way we obtain a color thresholding. Nevertheless, using this nonlinear formula implies a large computational load, which is why this algorithm was implemented in CUDA, achieving a decrease of 55.7% in the computational load.
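The chrominance distance test can be sketched as a vectorized mask; the tracked color and threshold values are hypothetical:

```python
import numpy as np

def chroma_mask(cb, cr, cb0, cr0, thresh):
    """Mark pixels whose chrominance lies within a Euclidean distance
    'thresh' of the tracked color (Cb0, Cr0)."""
    d = np.sqrt((cb - cb0)**2 + (cr - cr0)**2)
    return d <= thresh

# Hypothetical 2x2 chrominance planes; tracked color (100, 150).
cb = np.array([[100.0, 30.0], [101.0, 200.0]])
cr = np.array([[150.0, 40.0], [148.0,  10.0]])
mask = chroma_mask(cb, cr, 100.0, 150.0, 10.0)
```

The CUDA version computes the same distance per pixel in parallel threads.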


After that, we use morphological filters to improve the segmentation, such as Opening, Closing, Filling Holes and Border Elimination.


Opening: It generally smoothes the edges of the image, breaks isthmuses and eliminates small isolated areas. The opening of a set A by a structuring element B is represented by $A \circ B$ and is defined as follows:

$A \circ B = (A \ominus B) \oplus B$    (48)

As we see, the opening of A by B is the erosion of A by B followed by the dilation of the result by B [11].





Closing: It tends to smooth edges but, in contrast to the opening algorithm, it generally joins narrow separations, eliminates little holes and fills gaps in the contour. The closing of a set A by a structuring element B is represented by $A \bullet B$ and is defined as follows:

$A \bullet B = (A \oplus B) \ominus B$    (49)

As we see, the closing of A by B is the dilation of A by B followed by the erosion of the result by B. Nevertheless, this algorithm is not useful if we need the segmentation of many objects, because these could be very close to each other and could be merged, losing one of them [12].


Filling Holes: It is a morphological algorithm based on dilations, complementation and intersections. Given A, a set containing a subset whose elements are 8-connected points of the border of a region, the following iterative process fills the subset:

$X_k = (X_{k-1} \oplus B) \cap A^c, \qquad k = 1, 2, 3, \ldots$    (50)

Where B is a structuring element; finally, the union of $X_k$ and A contains the filled set and its contour.

The objective of this algorithm, as its name says, is to fill all the internal spaces inside the objects; this compensates for the empty spaces created by the opening function on the images [13].




Fig. 58 Filling Holes


Edge Elimination: Using 4-connectivity around the border of the image, objects touching the border are detected and then eliminated. This avoids taking as objects things that are partially out of the camera's angle of vision. In other words, it eliminates the objects on the edge of the image [14].


Once we have the image segmented and improved, we proceed to calculate the center of mass of the object involved using:

$C_x = \dfrac{1}{N}\sum_{i=1}^{N} x_i$    (51)

$C_y = \dfrac{1}{N}\sum_{i=1}^{N} y_i$    (52)

Where $C_x$ is the center of mass in x and $C_y$ is the center of mass in y, and the center of mass of the object is the point with coordinates $(C_x, C_y)$ [15].
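The centroid computation of (51)-(52) can be sketched directly from a binary mask:

```python
import numpy as np

def center_of_mass(mask):
    """Centroid (Cx, Cy) of a binary mask: mean of the x (column) and
    y (row) coordinates of the object pixels."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# A 3x3 object whose pixels span columns 2-4 and rows 1-3.
mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 2:5] = True
cx, cy = center_of_mass(mask)
print(cx, cy)  # -> 3.0 2.0
```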


In order to improve the image depending on weather conditions (All-Weather Vision), the gamma correction technique was used, later implemented with CUDA technology.


The gamma correction operation consists of a nonlinear adjustment of the brightness or luminance of an image. For the darkest pixels the brightness is highly increased, while for the brightest pixels it is increased by a smaller amount. As a result, more details are visible in the image.

$s = c\, r^{\gamma}$    (53)

Where 'c' and 'γ' are constants, 'r' is the input image and 's' is the final image.




Fig. 59 Gamma values


The formula for achieving the gamma correction on an 8-bit image, normalizing the pixel values before applying the power law, is:

$s = 255\left(\dfrac{r}{255}\right)^{\gamma}$    (54)
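A minimal sketch of the gamma correction with the two described modes (the normalization choice is an assumption; the project's CUDA kernel is not shown):

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Apply s = c * r^gamma on a [0, 255] image via a normalized
    power law, then rescale back to [0, 255]."""
    r = img.astype(np.float64) / 255.0
    s = np.clip(c * np.power(r, gamma), 0.0, 1.0)
    return (s * 255.0).astype(np.uint8)

img = np.array([[10, 128, 245]], dtype=np.uint8)
night = gamma_correct(img, 0.9)    # Night Vision: brightens dark pixels
sun = gamma_correct(img, 1.07)     # Sun Block: damps intense brightness
```

On the GPU, the same per-pixel power law is evaluated by one CUDA thread per pixel.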


We present two modes: Night Vision and Sun Block.

For the Night Vision mode we chose a gamma of 0.9. This value was selected through a qualitative evaluation of values from 0.5 to 1, because in this range the image was visible and its brightness was increased.

On the other hand, for the Sun Block mode we found it convenient to use a value of 1.07, because we just wanted to eliminate the intense brightness. A comparison between the execution times of the functions running on the CPU and on the GPU is shown.



Fig. 60 CPU vs. GPU execution time


For all the tests made, the execution time of this operation on the GPU was lower than on the CPU. We achieved a reduction of 82.4% in execution time.

Finally, to achieve the bird detection mode, we used the thresholding technique.


Thresholding is a segmentation technique that defines a threshold in order to separate the objects from the background. It is useful only if there is a clear difference between the objects and the background of the scene.


Through many tests, we found it convenient to use the value of 0.3 for the threshold, obtaining the following.


Fig. 61 Bird Detection Results

Fig. 62 Bird Detection Results

Fig. 63 Bird Detection Results



As we can see, the software puts a red mark on the birds in order to identify them better. Observing the test results, we can conclude that the objective was achieved regardless of the presence of one or many birds.
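The bird-detection thresholding can be sketched as follows, assuming birds appear as dark pixels against a bright, normalized sky (the frame is a hypothetical toy example):

```python
import numpy as np

def detect_birds(gray, thresh=0.3):
    """Threshold a normalized [0, 1] grayscale sky image: pixels darker
    than 'thresh' are taken as bird candidates against the bright sky."""
    return gray < thresh

# Hypothetical normalized frame: bright sky with two dark spots.
gray = np.full((4, 4), 0.9)
gray[1, 1] = 0.1
gray[2, 3] = 0.2
mask = detect_birds(gray)
print(int(mask.sum()))  # -> 2
```

The red marks of Figs. 61-63 are then drawn on the connected components of this mask.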


D. Representation

In the ground station we show the images obtained from the camera, previously improved and with a footprint on the tracked object. This screen also shows the trajectory followed by the object. For that, we need the current height of the Neurocopter to estimate the real movement of the object.


Through experimental tests, we determined the following relationship:

Axis | Height | Sample | Pixels
X | 1 m | 32 cm | 190
Y | 1 m | 32 cm | 166

Table 2. Pixel/cm relationship



In the X axis, at a height of 1 m, 32 cm are equivalent to 190 pixels. In the Y axis, at a height of 1 m, 32 cm are equivalent to 166 pixels.

To determine the movement of the object in centimeters, we use the following formulas:

$\Delta x_{cm} = \Delta X \cdot \dfrac{32\,L}{190}, \qquad \Delta y_{cm} = \Delta Y \cdot \dfrac{32\,L}{166}$    (55)

Where ΔX and ΔY are the differences between the X and Y components of the center of mass at times 't' and 't-1', and L is the current height of the Neurocopter in meters.
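The conversion can be sketched as a small helper using the Table 2 calibration, assuming the pixel scale varies linearly with height:

```python
def pixels_to_cm(dx_px, dy_px, height_m):
    """Convert a centroid displacement (in pixels) to centimeters using
    the Table 2 calibration: at 1 m of height, 32 cm span 190 pixels
    in X and 166 pixels in Y. Linear scaling with height is assumed."""
    x_cm = dx_px * (32.0 / 190.0) * height_m
    y_cm = dy_px * (32.0 / 166.0) * height_m
    return x_cm, y_cm

# At 1 m, a 190-pixel move in X and a 166-pixel move in Y are both 32 cm.
x_cm, y_cm = pixels_to_cm(190, 166, 1.0)
print(round(x_cm), round(y_cm))  # -> 32 32
```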


The graphical interface allows the user to follow the object while using, at the same time, the gamma correction technique to improve the brightness of the images.

Finally, the other operation mode (Bird Detection) is presented in a different window, which can be accessed from the main menu of the program.


E. Movement Control

Since we have the movement of the object, we can approximate the movement angle (yaw) and the linear movement of the Neurocopter, in order to follow the desired object.

The references are sent to the control sub-system using the UDP protocol.
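The reference transmission can be sketched with a plain UDP datagram; the address and message format are illustrative, not the project's actual protocol:

```python
import socket

# Hypothetical reference packet: yaw angle and linear displacement,
# sent to the control sub-system.
CONTROL_ADDR = ("127.0.0.1", 9000)

def send_reference(yaw_deg, dx_cm, dy_cm):
    """Fire-and-forget UDP datagram with the vision references."""
    msg = f"{yaw_deg:.2f},{dx_cm:.2f},{dy_cm:.2f}".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(msg, CONTROL_ADDR)
    finally:
        sock.close()

send_reference(12.5, 3.0, -1.5)
```

UDP fits here because a lost reference is simply superseded by the next frame's reference; no retransmission is wanted.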



F. Parallel Processing Technology

The main problems in digital image processing applications are the processing speed and the response time of the systems. This is why using a video card with parallel processing technology, CUDA, is proposed as an alternative solution. The right use of this technology not only makes digital image processing faster, but also makes such projects feasible without adding external hardware to the PC.



GPU: A GPU is a graphics processing unit used as a coprocessor to the host CPU. It has its own memory unit and executes many threads in parallel. This computational device has many cores, but a simpler architecture than a standard CPU. In our case, the NVIDIA GTX 465 has 352 cores.

CUDA is NVIDIA's parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU [16].


CUDA Architecture

The CUDA architecture is based on threads. These threads are grouped into blocks, which in turn are grouped into grids, as shown.



Fig. 64 CUDA Architecture


The differences between CUDA threads and CPU threads are:

- CUDA threads are extremely lightweight, because they have very little creation overhead.
- CUDA uses thousands of threads for greater efficiency, while multi-core CPUs can only use a few.


A kernel is a function called from the host (CPU) that is processed in the GPU; it is launched once, and many threads execute each kernel. All the threads execute the same code, which makes the processing faster.

A kernel is executed in a grid of blocks of threads.




Fig. 65 Understanding Kernels



The graphics card has a Global Memory in which the kernel inputs and outputs reside. This is a high-capacity DRAM; meanwhile, the Shared Memory, which is shared by the threads inside a block, is smaller and as fast as registers.

A memory access scheme is shown.




Fig. 66 Memory Access


The GPU can't directly access the computer's principal memory, and the CPU can't directly access the GPU memory [17].



DATALINK AND GROUND STATION TEAM
TEAM


It was determined that the transmission of the autonomous vehicle data to the base station would use a specialized device, choosing the best technology. According to our research, we would use a radio modem that meets the requirements raised in the technical specifications of the product.


For the choice of a suitable radio modem that meets the project goals, a factor ranking was made. Three brands of radio modems recommended for such applications were preliminarily chosen: FreeWave, Digi and Microhard. The factor ranking considers four factors; each was assigned a weight, and a scoring function of each factor was defined to determine the appropriate score.


The factors, weights and functions are as follows:

Weight (grams), weighting 40%

Fig. 67 Weight vs. Grams

Price (USD), weighting 35%

Fig. 68 Weight vs. Price

Range (meters), weighting 20%

Fig. 69 Weight vs. Meters

Spread Spectrum, weighting 5%

It can't be expressed as a function.


With the factors and their weights, a score was derived for each modem model of the three brands mentioned above. The selected modem and its characteristics are shown in the next table.







XStream® RF, 9XStream PKG R

Characteristic | Weight | Score | Specs
Price | 35% | 16.5 | 265 USD
Performance | | |
Indoor Range | | | 450 m
Outdoor Range | 20% | 16.3 | 11 km - 32 km
Transmit Power Output | | | 100 mW
Interface Data Rate | | | 125 bps - 65000 bps
Throughput Data Rate | | | 9600 bps - 19200 bps
RF Data Rate | | | 10000 bps - 20000 bps
Receiver Sensitivity | | | -110 dBm / -107 dBm
Dimensions | | |
Height | | | 6.99 cm
Width | | | 13.97 cm
Depth | | | 2.86 cm
Weight | 40% | 13.3 | 200 g
Power Requirements | | |
Power Supply Voltage | | | 7 - 18 VDC
Receive Current | | | 70 mA
Transmit Current | | | 170 mA
Security | | |
Frequency | | | 902 - 928 MHz
Modulation | | | FSK
Spread Spectrum | 5% | x | Frequency hopping, wide-band FM modulator
Channel Capacity | | | 7 hop sequences share 25 frequencies
Antenna | | |
Connector | | | RPSMA
Impedance | | | 50 ohms, unbalanced
Total Score | | 14.4 |

Table 3. Modem characteristics



The XStream PKG RF modem can be configured in minutes as a low-cost replacement for the serial cable between electronic devices. Available in 900 MHz and 2.4 GHz versions, the modems are configured via DIP switches for RS-232/422/485 signals. The extraordinary sensitivity of the XStream receiver allows from two to eight times the range of common modems, which allows customers to cover more ground with fewer devices. It is available with a wide variety of data interfaces, including serial RS-232/422/485, USB and telephone.


This includes the design of the communication protocol; the transmitted data is generated from the following sources:

a. IMU data
b. Captured GPS data
c. Ultrasonic sensor data
d. Control data


To ensure that the data is transmitted from source to destination, the network layers provide sending, routing and congestion control services for the data packets from one node to another in the network, from the bottom layer up to the top layer, which is part of the user interface. The network layers are as follows:

Fig. 70 Network Layers


The use of the modem for data transmission covers the physical, data link and network levels. Therefore, the system was designed for error detection in the data, prioritizing the data that needs more protection, such as the control data of the unmanned autonomous vehicle.



Currently we are using a Garmin GPS LV325 with a 1 Hz update frequency. The GPS was able to send the frames necessary to extract the latitude, longitude and altitude information. The frames the GPS should produce are:

Figure 71. Frame sent by the GPS
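Extracting latitude, longitude and altitude from a standard NMEA $GPGGA frame can be sketched as follows (the example sentence is hypothetical):

```python
def parse_gga(sentence):
    """Extract latitude, longitude and altitude from a $GPGGA frame."""
    f = sentence.split(",")
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0      # ddmm.mmmm
    if f[3] == "S":
        lat = -lat
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0      # dddmm.mmmm
    if f[5] == "W":
        lon = -lon
    alt = float(f[9])                                   # meters above MSL
    return lat, lon, alt

# Hypothetical frame near Lima, Peru.
frame = "$GPGGA,123519,1204.0000,S,07702.0000,W,1,08,0.9,154.0,M,,M,,*47"
lat, lon, alt = parse_gga(frame)
```

A production parser should also verify the trailing NMEA checksum before trusting the fields.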


For the development of the central station, we used Visual Studio 2008 (C#) and some libraries such as Google Earth, Socket and Web Camera. This interface allows viewing the sensor data, the vehicle position, the trajectory and the embedded camera video. The steps were as follows:


- Reading the GPS frame
- Calculation of latitude, longitude and altitude
- Location of the points in Google Earth
- Viewing the images captured by the camera

Due to the complexity of the image processing, we chose to reduce the computational load by using two computers, so that one fulfills the function of server (receiving data from the vehicle) and the other acts as a client (receiving the information for the control calculation). Therefore, TCP/IP is required for data transmission, and the operation of the computers as client and server can be represented as follows.




Figure 72. Connection diagram (client and server)
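The client-server split can be sketched with a minimal TCP exchange; the position string and the use of an ephemeral port are illustrative:

```python
import socket
import threading

def serve_once(sock, payload):
    """Server side: accept one client and forward the vehicle position."""
    conn, _ = sock.accept()
    conn.sendall(payload)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port for the sketch
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once,
                     args=(server, b"-12.0667,-77.0333,154.0"))
t.start()

# Client side: receive the position for the control calculation.
client = socket.create_connection(("127.0.0.1", port))
position = client.recv(1024).decode()
client.close()
t.join()
server.close()
print(position)  # -> -12.0667,-77.0333,154.0
```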


For this purpose, we created an AD-HOC network that shared files and internet access, with the server doing the work of the wireless modem.



Figure 73. Creating an AD-HOC network


The process of the serial communication is explained in the next figures:

Figure 74. Flow diagram of the serial port communication

Figure 75. Flow diagram of the serial port communication


Once the computers are connected (client and server), the server sends the information containing the vehicle position (latitude, longitude and altitude). The flow chart summarizing the process of the central station is shown in the figure below:





Figure 76. Flow diagram of the implemented Ground Station


Finally, the base station design will be refined according to the progress made with the other sub-teams, so that an overview of the status of the sensors and of the scale helicopter can be obtained. This means that the battery status, speed, position and status of each control variable used can be displayed.






Figure 77. Final design of the Ground Base Station



VI. CONCLUSIONS




The mathematical analysis allows a
comprehensive understanding of the system
to be developed: Place a scale helicopter
called “
Neuro Copter” prototype in a state of
"Hover."



Given the cost numbers and physical
dimensions of the prototype to implement,
was determined to take safety measures for


30

the first flight test, using a security system
comprised of harness, and a safe landing.

In
addition, convenient saw the acquisition of a
prototype low
-
cost training in comparison to
the final model for the respective flight test
and flight training.



As a way of alleviating the computational
burden of the Control Team and the Vision
Team was
chosen to perform each of the two
processes on different computers. We created
a client
-
server application TCP / IP to
communicate between computers.



The GCS is an important tool that will serve
as interface man
-

machine for controlling the
UAV. This soft
ware will change in real time
between each of the navigation modes are
available: Collision Avoidance, Navigation
and Waypoint Day / Night Vision.



The GCS will allow real time viewing of both
the position of the UAV and its speed,
angular acceleration, hei
ght and other states.
In addition, you can display the battery status
is making contingency alert to be issued.



The image processing is strongly improved
by the use of parallel processing tecnology
provided by the graffic card.



Linear regression with least squares using QR decomposition is more efficient than using the pseudo-inverse; moreover, the pseudo-inverse method does not always provide a consistent solution for estimating the unknown parameters.
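A small NumPy sketch of the comparison (the regressor matrix, true parameters, and noise level are illustrative): for a full-rank, well-conditioned problem both routes agree, but the QR route only needs a triangular solve, while `np.linalg.pinv` computes a full SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))                      # regressor matrix
theta_true = np.array([1.5, -0.7, 2.0])            # illustrative parameters
y = A @ theta_true + 0.01 * rng.normal(size=100)   # noisy measurements

# QR route: A = Q R, then solve the 3x3 triangular system R theta = Q^T y
Q, R = np.linalg.qr(A)                             # reduced QR (Q: 100x3, R: 3x3)
theta_qr = np.linalg.solve(R, Q.T @ y)

# Pseudo-inverse route: SVD-based, more expensive
theta_pinv = np.linalg.pinv(A) @ y

print(theta_qr, theta_pinv)  # both close to theta_true for this full-rank A
```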



The Linear Quadratic Gaussian controller allows tracking of all the states even if they are contaminated by additive white Gaussian noise (AWGN).
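The estimation half of an LQG design, a Kalman filter recovering the states from AWGN-corrupted measurements, can be sketched as follows. The toy position/velocity model and the noise covariances are assumptions for illustration, not the helicopter's identified model; in the full LQG loop the LQR gain would act on the estimate `xh`:

```python
import numpy as np

# Toy discrete-time 2-state model (position, velocity); not the helicopter model
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])             # only position is measured
Q = 1e-4 * np.eye(2)                   # assumed process-noise covariance
R = np.array([[0.04]])                 # assumed measurement (AWGN) covariance

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])               # true state
xh = np.zeros(2)                       # state estimate
P = np.eye(2)                          # estimate covariance

for _ in range(200):
    # Simulate: propagate the true state and take a noisy measurement
    x = A @ x
    y = C @ x + rng.normal(0.0, 0.2, size=1)
    # Kalman predict
    xh = A @ xh
    P = A @ P @ A.T + Q
    # Kalman update
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    xh = xh + K @ (y - C @ xh)
    P = (np.eye(2) - K @ C) @ P

err = np.abs(x - xh)
print(err)  # small estimation error despite the noisy measurements
```

Note that the velocity is reconstructed even though only the noisy position is measured, which is what lets the LQR state feedback use all the states.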



The quality of system identification obtained with neural networks using the MLP model is vastly superior to the estimation obtained with the autoregressive ARX model.



Neural networks are highly recommended for system identification and for the development of controllers for nonlinear systems.



For the training it is recommended to set aside 20% of the data, so that the neural network's ability to generalize can be verified.



Once the neuro-controller is trained, it is necessary to validate the neural network: for example, feed the system with values different from the training patterns and verify that the output is consistent.
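These two recommendations, the 80/20 data split and validation on unseen inputs, can be illustrated with a small NumPy MLP. The toy target `sin(x)` stands in for the identified dynamics; the network size, learning rate, and epoch count are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear mapping y = sin(x) as a stand-in for the identified dynamics
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

# Set aside 20% of the data for validation, as recommended
n_train = int(0.8 * len(X))
Xtr, Ytr = X[:n_train], Y[:n_train]
Xva, Yva = X[n_train:], Y[n_train:]

# 1-16-1 MLP with a tanh hidden layer, trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(8000):
    H = np.tanh(Xtr @ W1 + b1)            # forward pass
    E = (H @ W2 + b2) - Ytr               # output error
    # Backpropagation of the (halved) mean squared error
    gW2 = H.T @ E / n_train; gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H**2)
    gW1 = Xtr.T @ dH / n_train; gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def mlp(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Validate on the held-out 20%: inputs the network never saw during training
val_mse = float(np.mean((mlp(Xva) - Yva) ** 2))
print(val_mse)
```

A low error on the held-out set, rather than on the training patterns, is what indicates that the network has generalized instead of memorized.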



Leave a small margin of error in the training goal (a non-zero error) in order to allow the network to generalize.



For the modeling of the system, take advantage of neural networks and decompose the system into simple elements.



The torque of the helicopter is 1.0115 kg·m.

VII. REFERENCES


[1] PEREIRA, Fabio. HCS08 Unleashed: Designer's Guide to the HCS08 Microcontrollers. 2008.

[2] MUNERA HOYOS, Diego Alejandro. Microcontroladores de 32 Bits. 2010.

[3] METTLER, B. and others. System Identification of Small-Size Unmanned Helicopter Dynamics. Carnegie Mellon University, Pittsburgh, Pennsylvania, and US Army Aviation and Missile Command, Ames Research Center.

[4] CALLINAN, T. Artificial Neural Network Identification and Control of the Inverted Pendulum. 2003.

[5] EDLUND, O. Notes on Least Squares, QR-factorization, SVD and Fitting. Department of Mathematics.

[6] VAN GORP, J. Nonlinear Identification With Neural Networks And Fuzzy Logic. Universidad de Bruselas, Brussels, Belgium.

[7] BRASLAVSKY, J. Lecture 23: Optimal LQG Control. School of Electrical Engineering and Computer Science, The University of Newcastle.

[8] NVIDIA. Aceleración de MATLAB [online] (http://www.nvidia.es/object/matlab_acceleration_es.html). 2010.

[9] INTERSIL. YCbCr to RGB Considerations. (http://www.intersil.com/data/an/an9717.pdf) 2010.

[10] JAIN, Anil. Fundamentals of Digital Image Processing. Upper Saddle River: Prentice Hall. 1989.

[11] [12] [13] [14] GONZÁLES, Rafael and WOODS, Richard. Tratamiento digital de imágenes. 1st ed. Addison-Wesley Iberoamericana, S.A.: Delaware. 1996.

[15] PERTUSA, Jose. Técnicas de Análisis de imágenes: Aplicaciones en Biología. Valencia: Universidad de Valencia. 2003.

[16] NVIDIA Corporation. What is CUDA? Web portal describing the features of the CUDA technology. (http://www.nvidia.com/object/what_is_cuda_new.html) 2010.

[17] North Carolina State University (NCSU). CUDA Programming Model Overview. (http://moss.csc.ncsu.edu/~mueller/cluster/nvidia/GPU+CUDA.pdf) 2010.