SoftCompMM3

Artificial Intelligence and Robotics

16 Oct 2013

Neural Networks

Introduction


Artificial Neural Networks (ANN)


Connectionist computation


Parallel distributed processing


Biologically Inspired computational models


Machine Learning


Artificial intelligence

"the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

History


McCulloch and Pitts introduced a simplified
mathematical model of the biological neuron in 1943;
Rosenblatt's Perceptron followed in 1958.


The drawback in the late 1960s


(Minsky and Papert)


Perceptron limitations


The solution in the mid-1980s


Multi-layer perceptron


Back-propagation training


Summary of Applications


Function approximation


Pattern recognition / Classification


Signal processing


Modeling


Control


Machine learning

Biologically Inspired.


Electro-chemical signals


Threshold output firing


Human brain: about 100 billion (10^11) neurons and 100 trillion (10^14) synapses

The Perceptron


Sum of weighted inputs.


Threshold activation function
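A perceptron of this kind — a weighted sum of the inputs passed through a hard threshold — can be sketched in a few lines (the weights and bias below are illustrative, not from the slides):

```python
# Minimal perceptron: weighted sum of inputs, then a threshold.

def perceptron(x, w, b):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b  # sum of weighted inputs
    return 1 if s >= 0 else 0                     # threshold activation

# Example: with hand-picked weights, a perceptron computes logical AND.
w, b = [1.0, 1.0], -1.5
print(perceptron([1, 1], w, b))  # fires only when both inputs are 1
```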


Activation Function


The sigmoid function: logsig (Matlab)

Activation Function


The tanh function: tansig (Matlab)
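Both activation functions are one-liners; the names follow the Matlab functions mentioned above (Matlab's tansig is mathematically equivalent to tanh):

```python
import math

def logsig(x):
    """Logistic sigmoid, Matlab's logsig: maps R onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Hyperbolic tangent, Matlab's tansig: maps R onto (-1, 1)."""
    return math.tanh(x)
```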

The multi-layer perceptron (MLP)

[Figure: a three-layer MLP. The input z_in = (X1, X2, X3) passes through weight matrices W1, W2, W3 and activation layers F1, F2, F3, each layer also receiving a constant bias input 1, producing the output z_out = (Y1, Y2, Y3); Y0 denotes the bias unit.]
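The layered MLP structure — alternating weight matrices and activation layers, with a constant bias input of 1 at each layer — amounts to the following forward pass (the shapes and the tanh activations are illustrative assumptions):

```python
import numpy as np

def mlp_forward(z_in, weights, biases):
    """Forward pass through an MLP: each layer computes
    tanh(W @ z + b); the constant-1 inputs act as the biases b."""
    z = z_in
    for W, b in zip(weights, biases):
        z = np.tanh(W @ z + b)
    return z

# Illustrative shapes: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
z_out = mlp_forward(np.array([0.1, -0.2, 0.3]), weights, biases)
```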

Supervised Learning


Learning a function from supervised training data: a set of input vectors Z_in and corresponding desired output vectors Z_out.


The performance function, e.g. the mean squared error between the network outputs and the desired outputs.
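Assuming the usual mean-squared-error choice, the performance function compares the network outputs with the desired outputs:

```python
import numpy as np

def mse(z_out, z_target):
    """Mean squared error between network outputs and desired outputs."""
    return np.mean((np.asarray(z_out) - np.asarray(z_target)) ** 2)
```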

Supervised
Learning

Gradient descent backpropagation


The Back-Propagation Error (BPE) algorithm

BPE learning.

[Figure: BPE learning on a network with one hidden layer: input X1 and a bias input 1 pass through weights W1, summation units S and activations f (layer F), then weights W2, to output Y1; z_in and z_out mark the network input and output.]
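A minimal sketch of gradient-descent BPE training for a single-hidden-layer network, assuming tanh hidden units, a linear output, and a mean-squared-error performance function (the toy task of fitting y = x^2 is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 1))   # training inputs
Y = X ** 2                             # desired outputs

W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(200):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)           # hidden layer
    P = H @ W2 + b2                    # linear output
    E = P - Y
    losses.append(np.mean(E ** 2))     # MSE performance function
    # Backward pass: propagate the error with the chain rule.
    dP = 2 * E / len(X)
    dW2 = H.T @ dP;  db2 = dP.sum(0)
    dH = (dP @ W2.T) * (1 - H ** 2)    # derivative of tanh
    dW1 = X.T @ dH;  db1 = dH.sum(0)
    # Gradient descent step.
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1
```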

Neural Networks

0. Collect data.

1. Create the network.

2. Configure the network.

3. Initialize the weights.

4. Train the network.

5. Validate the network.

6. Use the network.

Collect data.

Lack of information in the training data is the main problem!

Use as few neurons in the hidden layer as possible.

Only use the network at working points represented in the training data.

Use validation and test data.

Normalize inputs/targets to fall in the range [-1, 1], or to have zero mean and unit variance.
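The two normalization options above can be sketched as follows (the function names are ours; Matlab provides mapminmax and mapstd for the same purpose):

```python
import numpy as np

def minmax_scale(x):
    """Map each feature column linearly into [-1, 1] (cf. mapminmax)."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return 2 * (x - lo) / (hi - lo) - 1

def standardize(x):
    """Give each feature column zero mean and unit variance (cf. mapstd)."""
    return (x - x.mean(axis=0)) / x.std(axis=0)
```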

Create the network.

Configure the network.

Initialize the weights.

Number of neurons in the hidden layer

[Figure: a network with a single hidden layer: summation units S followed by activation units f, with a constant bias input 1.]

Only one hidden layer.

Train the network.

Validate the network.

Dividing the data into three subsets:

1. Training set (e.g. 70%)

2. Validation set (e.g. 15%)

3. Test set (e.g. 15%)

The validation set is used to choose the number of training iterations (early stopping).
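A sketch of the 70/15/15 split, assuming a random shuffle of the sample indices:

```python
import numpy as np

def split_data(n, seed=0):
    """Shuffle indices 0..n-1 and split them 70/15/15 into
    training, validation, and test sets."""
    idx = np.random.default_rng(seed).permutation(n)
    n_tr = round(0.70 * n)
    n_val = round(0.15 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_val], idx[n_tr + n_val:]
```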

trainlm: Levenberg-Marquardt

trainbr: Bayesian Regularization

trainbfg: BFGS Quasi-Newton

trainrp: Resilient Backpropagation

trainscg: Scaled Conjugate Gradient

traincgb: Conjugate Gradient with Powell/Beale Restarts

traincgf: Fletcher-Powell Conjugate Gradient

traincgp: Polak-Ribière Conjugate Gradient

trainoss: One Step Secant

traingdx: Variable Learning Rate Gradient Descent

traingdm: Gradient Descent with Momentum

traingd: Gradient Descent

Other types of Neural networks

The RCE net: Only for classification.

[Figure: training samples from two classes, x and o, scattered in the (X1, X2) plane.]

Other types of Neural networks

The RCE net: Only for classification.

[Figure: the same two-class scatter; the RCE net covers each class region with locally tuned hidden units feeding a summation unit S.]

Parzen Estimator

[Figure: a Parzen-estimator network: Gaussian units G centred on the training samples x feed two summation units S; dividing one sum by the other (S / S) gives the output Y_out for the input X_in.]
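The S / S structure — a ratio of two sums over Gaussian units centred on the training points — is the classic Parzen-window (kernel) regression estimate. A sketch for one-dimensional inputs, with an illustrative kernel width:

```python
import numpy as np

def parzen_regression(x_query, X_train, Y_train, sigma=0.3):
    """Gaussian-kernel (Parzen) regression: the output is the ratio of
    two sums over Gaussian units centred on the training points."""
    w = np.exp(-((x_query - X_train) ** 2) / (2 * sigma ** 2))
    return np.sum(w * Y_train) / np.sum(w)
```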