Artificial Neural Networks - Introduction -

AI and Robotics

Oct 19, 2013

Biological inspiration

Animals are able to react adaptively to changes in their
external and internal environment, and they use their nervous
system to perform these behaviours.


An appropriate model/simulation of the nervous system
should be able to produce similar responses and behaviours in
artificial systems.


The nervous system is built from relatively simple units, the
neurons, so copying their behaviour and functionality should be
the solution.

Biological inspiration

[Figure: a biological neuron, showing the dendrites, soma (cell body), and axon]

Biological inspiration

[Figure: two connected neurons, showing the synapses, axon, and dendrites]

Information transmission happens at the synapses.

Artificial neurons

Neurons work by processing information. They receive and
provide information in the form of spikes.

The McCulloch-Pitts model

[Figure: a single McCulloch-Pitts neuron; inputs x1, x2, x3, …, xn-1, xn are weighted by w1, w2, w3, …, wn-1, wn and combined into a single output y]
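In the McCulloch-Pitts model, the unit fires when the weighted sum of its inputs reaches a threshold. A minimal sketch (the threshold value and the AND-gate weights are illustrative assumptions, not from the slides):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) if the weighted input sum reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# With weights (1, 1) and threshold 2 the unit behaves like an AND gate:
# only when both inputs are active does the sum reach the threshold.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # -> 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # -> 0
```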

Artificial neural networks

[Figure: a network of interconnected neurons mapping inputs to outputs]

An artificial neural network is composed of many artificial
neurons that are linked together according to a specific
network architecture. The objective of the neural network
is to transform the inputs into meaningful outputs.

Learning in biological systems

Learning = learning by adaptation


The young animal learns that the green fruits are sour,
while the yellowish/reddish ones are sweet. The learning
happens by adapting the fruit picking behavior.


At the neural level, learning happens by changing
synaptic strengths, eliminating some synapses, and
building new ones.

Neural network mathematics

[Figure: a layered neural network transforming inputs into an output]

Neural network approximation

Task specification:


Data: a set of value pairs (x_t, y_t), where y_t = g(x_t) + z_t and z_t is random
measurement noise.


Objective: find a neural network that represents the input /
output transformation (a function) F(x, W) such that

F(x, W) approximates g(x) for every x
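The task can be made concrete with a small sketch: generate noisy pairs (x_t, y_t) from a target function g and score how well a candidate F fits them. The choice g = sin and the noise level are illustrative assumptions:

```python
import math
import random

random.seed(0)

# Target function g (an illustrative choice) plus measurement noise z_t.
def g(x):
    return math.sin(x)

data = [(x, g(x) + random.gauss(0, 0.1)) for x in (i * 0.5 for i in range(10))]

# Quality of a candidate transformation F: sum of squared errors on the data.
def sse(F, data):
    return sum((y - F(x)) ** 2 for x, y in data)

print(sse(g, data))              # small: only the noise remains
print(sse(lambda x: 0.0, data))  # larger: the zero function fits poorly
```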

Learning with MLP neural
networks

MLP neural network with p layers:

[Figure: input x flows through layers 1, 2, …, p-1, p to the output y_out]

Data: the value pairs (x_t, y_t) from the approximation task.

Error: the sum of squared differences between the targets and the network
outputs, E(W) = sum_t (y_t - F(x_t, W))^2.

It is very complicated to calculate the weight changes.

Learning with backpropagation

Solution of the complicated learning problem:



calculate first the changes for the synaptic weights
of the output neuron;



calculate the changes backward, starting from layer
p-1, and propagate the local error terms backward.

The method is still relatively complicated, but it
is much simpler than the original optimisation
problem.
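The two steps above can be sketched for a network with one hidden layer of sigmoid units. The architecture, learning rate, and OR-like training data are illustrative assumptions, not from the slides:

```python
import math
import random

random.seed(1)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Assumed architecture: 2 inputs, one hidden layer of 2 sigmoid units,
# and one sigmoid output neuron, all with bias terms.
n_in, n_hid = 2, 2
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [random.uniform(-1, 1) for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
         for j in range(n_hid)]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return h, y

def train_step(x, target, lr=0.5):
    global b2
    h, y = forward(x)
    # Step 1: local error term for the output neuron.
    delta_out = (y - target) * y * (1 - y)
    # Step 2: propagate the error term backward to the hidden layer.
    delta_hid = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
    # Gradient-descent updates: output weights first, then hidden weights.
    for j in range(n_hid):
        W2[j] -= lr * delta_out * h[j]
        b1[j] -= lr * delta_hid[j]
        for i in range(n_in):
            W1[j][i] -= lr * delta_hid[j] * x[i]
    b2 -= lr * delta_out

# OR-like training data (an illustrative task).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

before = total_error()
for _ in range(2000):
    for x, t in data:
        train_step(x, t)
after = total_error()
print(before, after)  # the summed squared error shrinks during training
```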

Worked example: inputs (1, 0), training target 1.

hidden unit 1: .2*1 + .8*0 = .2
hidden unit 2: .3*1 + .4*0 = .3
output: .1*.3 + .6*.2 = .15

Error = 1 - .15 = .85

Second example: inputs (0, 1), training target 1.

hidden unit 1: .6*0 + .8*1 = .8
hidden unit 2: .4*0 + .4*1 = .4
output: .2*.4 + .8*.8 = .72

Error = 1 - .72 = .28

Third example: inputs (1, 1), training target 0.

hidden unit 1: .6*1 + .9*1 = 1.5
hidden unit 2: .4*1 + .45*1 = .85
output: .25*.85 + .9*1.5 = 1.56

Error = 0 - 1.56 = -1.56
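The three forward passes above can be checked with a few lines of code; the toy networks use purely linear units (no activation function), which the sketch mirrors:

```python
def forward(x, w_hidden, w_out):
    """Linear forward pass matching the worked examples (no activation)."""
    h = [sum(w * xi for w, xi in zip(row, x)) for row in w_hidden]
    return sum(w * hi for w, hi in zip(w_out, h))

# First example: inputs (1, 0), hidden weights .2/.8 and .3/.4, output weights .6/.1.
y1 = forward([1, 0], [[0.2, 0.8], [0.3, 0.4]], [0.6, 0.1])
# Second example: inputs (0, 1).
y2 = forward([0, 1], [[0.6, 0.8], [0.4, 0.4]], [0.8, 0.2])
# Third example: inputs (1, 1); the slide rounds 1.5625 to 1.56.
y3 = forward([1, 1], [[0.6, 0.9], [0.4, 0.45]], [0.9, 0.25])
print(round(y1, 4), round(y2, 4), round(y3, 4))  # -> 0.15 0.72 1.5625
```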

Artificial Neural Network

[Figure: the network predicts the structure at this point in the sequence]

Danger

You may train the network on your training
set, but it may not generalize to other data

Perhaps we should train several ANNs and
then let them vote on the structure
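The voting idea can be sketched as a simple majority vote over the predictions of several independently trained networks (the class labels here are hypothetical):

```python
from collections import Counter

def jury_vote(predictions):
    """Return the prediction made by the majority of the trained networks."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical networks vote on the structure at one position.
print(jury_vote(["helix", "helix", "sheet"]))  # -> helix
```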

Profile network from HeiDelberg

Uses the protein family (the alignment is used as input) instead of just the new
sequence

On the first level, a window of length 13 around the residue is
used

The window slides down the sequence, making a prediction for
each residue

The input includes the frequency of amino acids occurring in
each position in the multiple alignment (In the example, there
are 5 sequences in the multiple alignment)

The second level takes these predictions from neural networks
that are centered on neighboring residues

The third level does a jury selection
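The first-level sliding window can be sketched as follows; a shorter window than 13 and the padding character '-' are illustrative choices, and the sequence is made up:

```python
def windows(sequence, size=13):
    """One fixed-length window per residue, centered on that residue.
    The ends are padded with '-' so the window never runs off the sequence."""
    half = size // 2
    padded = "-" * half + sequence + "-" * half
    return [padded[i:i + size] for i in range(len(sequence))]

# Slide a length-5 window along a made-up sequence, one window per residue.
for w in windows("MKTAYIAKQR", size=5)[:3]:
    print(w)  # -> --MKT, -MKTA, MKTAY
```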

PHD

[Figure: three networks predict 4, 6, and 5; the jury combines their predictions]