
Research assignment


Bui Quang Tu

11 Physics

Ha Noi


Amsterdam High School for the Gifted


Neural Network Simulation

I did some research about neural network simulation and I am writing about it in my own words. All the derivations can be described in a way that is easier to understand, I think, so I don't want to rewrite what other professors have written in their lectures. I am writing what I can tell people about neural networks. My English is not very good, so I will write as simply as I can. Thank you all!




Biological neural networks

A neuron is an electrically excitable cell that processes and transmits information by electrical and chemical signaling in the human body. Each neuron receives electrical and chemical signals from other neurons, and then sends signals to other neurons too. Neurons connect to each other to form networks.





Artificial neural networks (we will just say "neural networks")

An artificial neural network is a mathematical model or computational model that is inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, called "perceptrons". A perceptron receives data from other perceptrons or from the input, analyses it, and then transfers it to other perceptrons or to the output. Like this:


X1, X2, ..., Xn are the values the perceptron receives (from other perceptrons or from the input).

w1, w2, ..., wn are the weights of the values X1, X2, ..., Xn respectively.

F is the transfer function of the perceptron.

The output value y is

y = F(X1·w1 + X2·w2 + ... + Xn·wn − Φ)

Φ is the threshold of the perceptron.

Some common transfer functions are:


The logistic function: f(x) = 1 / (1 + e^(−x))

The hyperbolic tangent function: f(x) = tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))
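
To make this concrete, here is a small Python sketch of a single perceptron with the logistic transfer function (the function names and the example numbers are my own illustration, not from the text):

import math

def logistic(x):
    # The logistic transfer function: squashes any number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def perceptron_output(xs, ws, threshold):
    # y = F(x1*w1 + x2*w2 + ... + xn*wn - threshold)
    s = sum(x * w for x, w in zip(xs, ws)) - threshold
    return logistic(s)

# A made-up example call: two inputs, two weights, threshold 0.
print(perceptron_output([4.7, 3.2], [0.1, -0.1], 0.0))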



Many problems can be described like this: there are n inputs, what is (are) the output(s)? For example: we know the weather today (many inputs, of course), what is the weather tomorrow? The problem in the neural network will be: what are the values wi and what is the transfer function F so that we get the right output y from the n values Xi? The process in which we determine the values wi is called the learning process. How does the perceptron learn? We give it a set of inputs and outputs (known beforehand) and an algorithm to determine the values wi.


We will work through an example to make this easy to understand:

Suppose flowers come in 2 kinds: tau and mer. And from experience, we have a table like this:


Name   Petal's length X1   Petal's width X2   Body's height X3   Body's width X4   Kind
A      4.7                 3.2                1.3                0.2               tau
B      6.1                 2.8                4.7                1.2               mer
C      5.6                 3.0                4.1                1.3               mer
D      5.8                 2.7                5.1                1.9               tau
E      6.5                 3.2                5.1                2.0               tau


And we let the threshold Φ be −w0·X0, with X0 equal to 1, so the output becomes y = F(w0·X0 + w1·X1 + ... + wn·Xn). (This makes the formulas shorter.)


We work with numbers, so we will let tau be −1 and mer be 1. The transfer function we will use is f(y) = 1 if y ≥ 0, and f(y) = −1 if y < 0.



The algorithm is described here.


- First, we make a random set of wi (like everyone must make mistakes when learning something). For simplicity let w0 = w1 = w2 = w3 = w4 = 0.

- Then we test the function with those weights. We have f(A) = f(0) = 1. But we expect the value f(A) to be −1 (because A is tau), so we must do something to decrease the value of the sum ΣwiXi. Clearly, if we change the values of wi like this:

wi := wi − r·Xi

with r > 0, the sum ΣwiXi will decrease (by r·ΣXi²). We will choose r = 0.1.

- The new set of wi is: w0 = −0.1, w1 = −0.47, w2 = −0.32, w3 = −0.13, w4 = −0.02.

- We continue testing the function on flower B. If it's right we continue to flower C. If it is wrong we correct the weights like we did with flower A. We do this process over A, B, C, D, E, then get back to A, then B... until the function is right for all the flowers. Then we have the "right" function with the right weights.

(Mathematics has proved that sooner or later the process will end with the proper weights, as long as the two kinds can be separated by such a function.)
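
Here is a small Python sketch of this learning procedure on the flower table above (the code and its names are my own illustration; the update rule and r = 0.1 come from the text):

data = [
    # [X1, X2, X3, X4], kind (tau = -1, mer = +1)
    ([4.7, 3.2, 1.3, 0.2], -1),  # A: tau
    ([6.1, 2.8, 4.7, 1.2], +1),  # B: mer
    ([5.6, 3.0, 4.1, 1.3], +1),  # C: mer
    ([5.8, 2.7, 5.1, 1.9], -1),  # D: tau
    ([6.5, 3.2, 5.1, 2.0], -1),  # E: tau
]

r = 0.1                  # learning rate
w = [0.0] * 5            # w0, w1, ..., w4, all zero at the start

def f(y):
    # Transfer function: 1 if y >= 0, else -1.
    return 1 if y >= 0 else -1

def output(x, w):
    # y = f(w0*X0 + w1*X1 + ... + w4*X4), with X0 = 1.
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return f(s)

# Go over the flowers again and again, correcting the weights whenever
# the output is wrong, until every flower is classified correctly.
# (The loop ends only if the two kinds can be separated by the function.)
done = False
while not done:
    done = True
    for x, kind in data:
        if output(x, w) != kind:
            done = False
            # Move each weight toward the right answer: this is
            # "wi := wi - r*Xi" when the output must decrease,
            # and "wi := wi + r*Xi" when it must increase.
            w[0] += r * kind
            for i, xi in enumerate(x):
                w[i + 1] += r * kind * xi

print(w)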


That is just a simple example to help us understand how a neural network learns. Actually the algorithms are more complicated and the functions are not as simple as that. Now we will see an algorithm called backpropagation.




Backpropagation algorithm

Now we will see a more realistic algorithm. It applies to a neural network with more layers than just the input and the output, as in the picture:

The number of input perceptrons depends on the problem. The number of hidden perceptrons depends on the problem, too. Each perceptron in the hidden layer gives an output value that becomes an input value of the output perceptron. A network may have more layers, but we consider a simple case to keep it easy to understand.

We're back with the problem of the tau and mer flowers. We make a simple network with the structure as follows:

- 4 input perceptrons with the four values Xi, called 1, 2, 3, 4.
- 2 hidden perceptrons (as few as we can), called a, b.
- 1 output perceptron, called O.
- Use the transfer function f(x) = 1 / (1 + e^(−x)) (a continuous function). The flower is tau if the output < 0.5 and mer if the output ≥ 0.5.


- There are 10 weights, which we will call: w1a is the weight of the output value from perceptron 1 to perceptron a, and similarly w2a, w3a, w4a, w1b, w2b, w3b, w4b, wbO, waO. Let w0a, w0b and w0O be −Φ (like we have done before).

We have the output values of perceptrons a, b and O:

Oa = f(w0a·x0 + w1a·x1 + w2a·x2 + w3a·x3 + w4a·x4)      (x0 = 1)

Ob = f(w0b·x0 + w1b·x1 + w2b·x2 + w3b·x3 + w4b·x4)

OO = f(w0O·x0 + waO·Oa + wbO·Ob)

We will make a random set of weights, too. We will take (just at random):

w0a = 0.0    w1a = 0.1     w2a = −0.1    w3a = −0.1    w4a = 0.0
w0b = 0.0    w1b = −0.1    w2b = −0.2    w3b = 0.1     w4b = −0.1
w0O = 0.1    waO = 0.2     wbO = −0.1

We will call EO the expected value of the output O (which we have from the table of experience). For tau we let EO = 0, for mer we let EO = 1.

We will have the values of the error in perceptrons O, a, b by the formulas:

δO = OO·(1 − OO)·(EO − OO)

δa = Oa·(1 − Oa)·δO·waO

δb = Ob·(1 − Ob)·δO·wbO

(In general, if there are many output perceptrons, the formula is

δa = Oa·(1 − Oa)·Σi(δOi·waOi)

and the "O(1 − O)" is replaced by the derivative of the transfer function at the point f = O. This result comes from mathematical analysis.)

And the new value of each weight is:

w := w + r·δ·x

(δ is the error of the perceptron the weight goes into, x is the value input to that perceptron with weight w, and r is a constant small number which we now call the learning rate.)

Let r = 0.1.

Let's calculate the outputs with the beginning weights on flower A. We have

Oa = 0.5050
Ob = 0.2690
OO = 0.5434

Then we have

δO = 0.5434·(1 − 0.5434)·(0 − 0.5434) = −0.1348
δb = 0.2690·(1 − 0.2690)·(−0.1348)·(−0.1) = 0.0027
δa = 0.5050·(1 − 0.5050)·(−0.1348)·0.2 = −0.0067

Then we update the values of the weights:

w0a = 0.0 + 0.1·(−0.0067)·1.0 = −0.0007
w1a = 0.1 + 0.1·(−0.0067)·4.7 = 0.0969
w2a = −0.1 + 0.1·(−0.0067)·3.2 = −0.1021
w3a = −0.1 + 0.1·(−0.0067)·1.3 = −0.1009
w4a = 0.0 + 0.1·(−0.0067)·0.2 = −0.0001

w0b = 0.0 + 0.1·(0.0027)·1.0 = 0.0003
w1b = −0.1 + 0.1·(0.0027)·4.7 = −0.0987
w2b = −0.2 + 0.1·(0.0027)·3.2 = −0.1991
w3b = 0.1 + 0.1·(0.0027)·1.3 = 0.1004
w4b = −0.1 + 0.1·(0.0027)·0.2 = −0.0999

w0O = 0.1 + 0.1·(−0.1348)·1.0 = 0.0865
waO = 0.2 + 0.1·(−0.1348)·0.5050 = 0.1932
wbO = −0.1 + 0.1·(−0.1348)·0.2690 = −0.1036


Then the new values of the outputs are Oa = 0.4992, Ob = 0.2709, OO = 0.5386. We see that the output is closer to the expected value than the first value. Continue this process with flowers B, C, D, then get back to A, then B again... and it does not stop here like in the first example: the weights can be updated even when the output is right for all the flowers. We will have the best weights after many passes.
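
To check the arithmetic, here is a Python sketch of this single backpropagation step for flower A (the variable names are mine; the data, starting weights and formulas come from the text above):

import math

def f(x):
    # The logistic transfer function.
    return 1.0 / (1.0 + math.exp(-x))

x = [1.0, 4.7, 3.2, 1.3, 0.2]   # flower A, with x0 = 1 first
E_O = 0.0                       # expected output: A is tau

w_a = [0.0, 0.1, -0.1, -0.1, 0.0]    # w0a .. w4a
w_b = [0.0, -0.1, -0.2, 0.1, -0.1]   # w0b .. w4b
w_O = [0.1, 0.2, -0.1]               # w0O, waO, wbO
r = 0.1                              # learning rate

# Forward pass: outputs of a, b and O.
O_a = f(sum(wi * xi for wi, xi in zip(w_a, x)))
O_b = f(sum(wi * xi for wi, xi in zip(w_b, x)))
O_O = f(w_O[0] + w_O[1] * O_a + w_O[2] * O_b)

# Errors (the deltas from the formulas above).
d_O = O_O * (1 - O_O) * (E_O - O_O)
d_a = O_a * (1 - O_a) * d_O * w_O[1]
d_b = O_b * (1 - O_b) * d_O * w_O[2]

# Weight updates: w := w + r * delta * input.
w_a = [wi + r * d_a * xi for wi, xi in zip(w_a, x)]
w_b = [wi + r * d_b * xi for wi, xi in zip(w_b, x)]
w_O = [w_O[0] + r * d_O,
       w_O[1] + r * d_O * O_a,
       w_O[2] + r * d_O * O_b]

print(O_a, O_b, O_O)   # about 0.5050, 0.2690, 0.5434
print(d_O, d_a, d_b)   # about -0.1348, -0.0067, 0.0027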




The problem of overfitting and regularization

When we train a network with a data set, we will use the network on many other data sets that are different from the training set. So we want to see not just how good the network is on our set, but also how good it would be on another data set. So we do the following.

No matter what algorithm we use to train the network, we must first split the data set into 2 parts, set 1 and set 2. The number of patterns in each part is usually the same. We will use set 1 for training and set 2 for testing. We train the network with the patterns of set 1, and we call one run of the algorithm over all the patterns of set 1 one epoch. After that we calculate the error of the network on set 1; call it E1. Then we apply the network to set 2, which the network has never seen before, and we calculate the error E2 of the network on set 2.

And then we draw a chart that shows E1 and E2 as functions of the number of epochs (the number of passes the computer has finished). If both E1 and E2 decrease, it means that the network is learning the structure of the data set. But if after a certain number of epochs, while E1 continues to decrease, E2 suddenly increases, it means that the network starts to overfit the data. The network is not good anymore.

We then think that the set of weights from just before E2 increases may be the good weights. So we keep it in a separate memory, and after finishing the learning process we can use these best weights. This is called regularization by early stopping. Note that E2 can increase several times during the process, so we must use a long learning time to find the minimum value of the error.
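
A sketch of early stopping in Python (the helper functions train_one_epoch and error are hypothetical placeholders standing in for whatever training algorithm is used, e.g. the backpropagation step above):

def train_with_early_stopping(weights, set1, set2,
                              train_one_epoch, error, max_epochs=1000):
    # train_one_epoch(weights, data): one pass of the algorithm over set 1.
    # error(weights, data): the error of the network on a data set.
    # Both are placeholders, not real library functions.
    best_weights = list(weights)
    best_E2 = error(weights, set2)
    for _ in range(max_epochs):
        weights = train_one_epoch(weights, set1)
        E2 = error(weights, set2)
        # E2 can rise and fall several times, so we do not stop at the
        # first increase; we just remember the best weights seen so far.
        if E2 < best_E2:
            best_E2 = E2
            best_weights = list(weights)
    return best_weights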

Another way to regularize the network is by using a penalty term. The idea is to replace the simple error we have used before by another error, which is more complicated. If you want to know about it completely, you can read http://lcn.epfl.ch/tutorial/docs/supervised.pdf, section 1.7. I think that I cannot write it more beautifully than Mr. Gerstner, but it needs a little knowledge of mathematics. The idea is that we change the structure of the error with the penalty term. Then we can get some better consequences.
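
For example, a common penalty term (one choice among several) is a quadratic one: we minimize E_new = E + λ·(w1² + w2² + ...), where λ is a small positive number. The extra term pushes the weights to stay small, which often makes the network behave better on new data.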




The role of neural networks

Neural networks are one of the most important inventions of modern science. They are now applied in many different areas: technology, physics, biology, the military, the economy, medicine... They are one of the best things in computer science. Neural network simulation can now even be used to do research on real biological neural networks.




So I end my assignment here. Thank you for everything you have helped me with, Professor Chao Ming Fu. I have learned many things in the past 10 weeks. See you later in Indonesia, if I go :)

One more question, professor: where is the part "cognitive application"? I just don't know how to learn about it. Because I must focus on the national competition now, I cannot continue until January 11. If after that I can finish the cognitive application part, I will upload my assignment 2.