Neural Networks
Lecture 3: Models of Neurons and Neural Networks
September 14, 2010

Visual Illusions

Visual illusions demonstrate how we perceive an "interpreted version" of
the incoming light pattern rather than the exact pattern itself.


Visual Illusions

Here we see that the squares A and B from the previous image actually have
the same luminance (but in their visual context are interpreted differently).


How do NNs and ANNs work?

NNs are able to learn by adapting their connectivity patterns so that
the organism improves its behavior in terms of reaching certain
(evolutionary) goals.

The strength of a connection, or whether it is excitatory or inhibitory,
depends on the state of a receiving neuron's synapses.

The NN achieves learning by appropriately adapting the states of its
synapses.


An Artificial Neuron

[Figure: neuron i with inputs x1, x2, ..., xn arriving through synapses
with weights Wi,1, Wi,2, ..., Wi,n. The weighted inputs are combined
into the net input signal, and the neuron produces the output xi.]


The Activation Function

One possible choice is a threshold function:

    f_i(net_i(t)) = 1  if net_i(t) ≥ θ
    f_i(net_i(t)) = 0  otherwise

The graph of this function looks like this:

[Graph: f_i(net_i(t)) plotted against net_i(t): a step function that
jumps from 0 to 1 at the threshold θ.]


Binary Analogy: Threshold Logic Units (TLUs)

TLUs in technical systems are similar to the threshold neuron model,
except that TLUs only accept binary inputs (0 or 1).

Example: the function x1 ∧ x2 ∧ ¬x3, realized by a TLU with inputs
x1, x2, x3, weights w1 = 1, w2 = 1, w3 = -1, and threshold θ = 1.5.
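To check the example (a sketch assuming the binary TLU model above; the
name tlu is illustrative):

    def tlu(x, w, theta):
        """Threshold logic unit: binary inputs, weighted sum, threshold."""
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

    # Weights (1, 1, -1) with threshold 1.5 realize x1 AND x2 AND (NOT x3):
    for x1 in (0, 1):
        for x2 in (0, 1):
            for x3 in (0, 1):
                assert tlu((x1, x2, x3), (1, 1, -1), 1.5) == int(x1 and x2 and not x3)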


Binary Analogy: Threshold Logic Units (TLUs)

Yet another example: a TLU with inputs x1 and x2, weights w1 and w2,
and threshold θ that computes x1 XOR x2.

Impossible! TLUs can only realize linearly separable functions.


Linear Separability

A function f: {0, 1}^n → {0, 1} is linearly separable if the space of
input vectors yielding 1 can be separated from those yielding 0 by a
linear surface (hyperplane) in n dimensions.

Examples (two dimensions):

[Figure: two 2×2 output grids over the inputs x1, x2 ∈ {0, 1}. Left:
output 0 only at (0, 0), i.e. OR — linearly separable. Right: output 1
exactly at (0, 1) and (1, 0), i.e. XOR — linearly inseparable.]
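A small brute-force illustration (not from the slides; the names and the
weight/threshold grid are illustrative choices): for two binary inputs,
searching a few candidate weights and thresholds finds a separating TLU
for every Boolean function except XOR and its complement.

    from itertools import product

    def tlu(x, w, theta):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

    def separable(truth_table):
        """True if some TLU with weights in {-1, 0, 1} and a small set of
        thresholds realizes the given 2-input truth table."""
        inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
        for w1, w2, theta in product((-1, 0, 1), (-1, 0, 1),
                                     (-1.5, -0.5, 0.5, 1.5)):
            if all(tlu(x, (w1, w2), theta) == y
                   for x, y in zip(inputs, truth_table)):
                return True
        return False

    print(separable((0, 1, 1, 1)))  # OR  -> True
    print(separable((0, 1, 1, 0)))  # XOR -> False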


Linear Separability

To explain linear separability, let us consider the function
f: R^n → {0, 1} with

    f(x1, x2, ..., xn) = 1  if w1·x1 + w2·x2 + ... + wn·xn ≥ θ
    f(x1, x2, ..., xn) = 0  otherwise,

where x1, x2, ..., xn represent real numbers.

This is exactly the function that our threshold neurons use to compute
their output from their inputs.


Linear Separability

Input space in the two-dimensional case (n = 2):

[Figure: three plots of the (x1, x2) input space, each axis ranging from
-3 to 3, showing the regions of output 0 and output 1 separated by the
line w1·x1 + w2·x2 = θ for three parameter settings:
w1 = 1, w2 = 2, θ = 2;  w1 = -2, w2 = 1, θ = 2;  w1 = -2, w2 = 1, θ = 1.]
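One way to reproduce such plots (a sketch, not from the slides; it
assumes NumPy and matplotlib are available):

    import numpy as np
    import matplotlib.pyplot as plt

    # The three (w1, w2, theta) settings from the figure above.
    settings = [(1, 2, 2), (-2, 1, 2), (-2, 1, 1)]

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    grid = np.linspace(-3, 3, 200)
    x1, x2 = np.meshgrid(grid, grid)
    for ax, (w1, w2, theta) in zip(axes, settings):
        out = (w1 * x1 + w2 * x2 >= theta).astype(int)  # neuron output 0 or 1
        ax.contourf(x1, x2, out, levels=[-0.5, 0.5, 1.5])
        ax.set_title(f"w1={w1}, w2={w2}, θ={theta}")
        ax.set_xlabel("x1")
        ax.set_ylabel("x2")
    plt.show()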


Linear Separability

So by varying the weights and the threshold, we can realize any linear
separation of the input space into a region that yields output 1, and
another region that yields output 0.

As we have seen, a two-dimensional input space can be divided by any
straight line.

A three-dimensional input space can be divided by any two-dimensional
plane.

In general, an n-dimensional input space can be divided by an
(n - 1)-dimensional plane or hyperplane.

Of course, for n > 3 this is hard to visualize.


Linear Separability

Of course, the same applies to our original function f of the TLU using
binary input values.

The only difference is the restriction in the input values.

Obviously, we cannot find a straight line to realize the XOR function:

[Figure: 2×2 output grid over x1, x2 ∈ {0, 1} with output 1 at (0, 1)
and (1, 0) and output 0 at (0, 0) and (1, 1); no straight line can
separate the 1s from the 0s.]

In order to realize XOR with TLUs, we need to combine multiple TLUs
into a network.



Multi-Layered XOR Network

[Figure: a two-layer network of TLUs computing x1 XOR x2. One hidden
TLU receives x1 and x2 with weights 1 and -1 and threshold 0.5; the
other receives them with weights -1 and 1 and threshold 0.5. An output
TLU combines the two hidden outputs with weights 1 and 1 and
threshold 0.5.]
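A sketch of this network in Python (the names are illustrative; the
weights and thresholds are the ones in the figure):

    def tlu(x, w, theta):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

    def xor_network(x1, x2):
        """Two-layer TLU network from the figure above."""
        h1 = tlu((x1, x2), (1, -1), 0.5)   # fires iff x1 AND (NOT x2)
        h2 = tlu((x1, x2), (-1, 1), 0.5)   # fires iff (NOT x1) AND x2
        return tlu((h1, h2), (1, 1), 0.5)  # OR of the two hidden units

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", xor_network(x1, x2))  # XOR: 0, 1, 1, 0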


Capabilities of Threshold Neurons

What can threshold neurons do for us?

To keep things simple, let us consider such a neuron with two inputs:

[Figure: neuron i with inputs x1 and x2, weights Wi,1 and Wi,2, and
output xi.]

The computation of this neuron can be described as the inner product of
the two-dimensional vectors x and w_i, followed by a threshold
operation.


The Net Input Signal

The net input signal is the sum of all inputs after passing the
synapses:

    net_i(t) = Σ_j w_i,j(t)·x_j(t)

This can be viewed as computing the inner product of the vectors w_i
and x:

    net_i(t) = w_i · x = |w_i|·|x|·cos α,

where α is the angle between the two vectors.
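A quick numerical check of this identity (a sketch, not from the
slides; the vectors are arbitrary examples):

    import numpy as np

    w = np.array([1.0, 2.0])  # weight vector w_i
    x = np.array([2.0, 1.0])  # input vector x

    net = np.dot(w, x)  # net input as an inner product: 4.0

    # The same value via magnitudes and the angle between the vectors:
    alpha = np.arccos(np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x)))
    net_from_angle = np.linalg.norm(w) * np.linalg.norm(x) * np.cos(alpha)

    print(net, net_from_angle)  # both 4.0 (up to floating-point error)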


Capabilities of Threshold Neurons

Let us assume that the threshold θ = 0 and illustrate the function
computed by the neuron for sample vectors w_i and x:

[Figure: the vector w_i drawn in the plane of the first and second
vector components, together with a sample input vector x; a dotted line
through the origin runs perpendicular to w_i.]

Since the inner product is positive for -90° < α < 90°, in this example
the neuron's output is 1 for any input vector x to the right of or on
the dotted line, and 0 for any other input vector.
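A numerical illustration of this (a sketch; the weight vector and
angles are arbitrary choices):

    import numpy as np

    w = np.array([1.0, 0.5])  # weight vector w_i

    def output(x, w):
        """Threshold neuron with theta = 0."""
        return 1 if np.dot(w, x) >= 0 else 0

    for angle_deg in (0, 45, 89, 91, 180):
        a = np.radians(angle_deg)
        # Rotate w by the given angle to obtain an input vector x.
        rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        x = rot @ w
        print(angle_deg, "->", output(x, w))  # 1 for angles below 90°, else 0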


Capabilities of Threshold Neurons

By choosing appropriate weights w_i and threshold θ, we can place the
line dividing the input space into regions of output 0 and output 1 in
any position and orientation.

Therefore, our threshold neuron can realize any linearly separable
function f: R^n → {0, 1}.

Although we only looked at two-dimensional input, our findings apply to
any dimensionality n.

For example, for n = 3, our neuron can realize any function that
divides the three-dimensional input space along a two-dimensional
plane.