# Neural Midterm Electronics 2013

Artificial Intelligence and Robotics

20 Oct 2013


PORT SAID UNIVERSITY

FACULTY OF ENGINEERING

DEPARTMENT OF ELECTRICAL ENGINEERING

PROGRAM/YEAR: Electronics and Communications Division, 2012-13

SEMESTER: Second

COURSE TITLE: Neural Networks (الشبكات العصبية)

COURSE CODE: ECE354

DATE: 13-5-2013

TOTAL ASSESSMENT MARKS: 20

TIME ALLOWED: 1 hour

FRESH

Question (1) (10 marks)

(1) (4 marks) For each question, select the correct answer.

(a) (2 marks) The following network is a multilayer perceptron (figure not reproduced in this text). Which function does it compute?

A. AND

B. OR

C. XOR

D. NOR

E. None of the above answers

(b) (2 marks) A perceptron with a unipolar step function has two inputs with weights w1 = 0.5 and w2 = −0.2, and a threshold θ = 0.3 (θ can be considered as a weight for an extra input which is always set to −1). The network is trained using the learning rule Δw = η(d − y)x, where x is the input vector, η is the learning rate, w is the weight vector, d is the desired output, and y is the actual output. What are the new values of the weights and threshold after one step of training with the input vector x = [0, 1]^T and desired output 1, using a learning rate η = 0.5?

A. w1 = 1.0, w2 = −0.2, θ = −0.2.

B. w1 = 0.5, w2 = 0.3, θ = 0.7.

C. w1 = 0.5, w2 = 0.3, θ = −0.2.

D. w1 = 0.5, w2 = 0.3, θ = 0.3.

E. w1 = 0.5, w2 = −0.2, θ = 0.3.
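One step of this delta rule can be checked numerically. The sketch below assumes the unipolar step outputs 1 when the net input w·x − θ is at least 0, and folds θ in as a weight on a constant extra input of −1, as the question suggests:

```python
import numpy as np

# One step of perceptron training with a unipolar step activation.
# Initial values are taken from the question; the threshold theta acts
# as a weight on an extra input that is always -1.
w = np.array([0.5, -0.2])        # w1, w2
theta = 0.3
eta = 0.5                        # learning rate
x = np.array([0.0, 1.0])         # input vector
d = 1                            # desired output

# Actual output: unipolar step of the net input (assumed 1 when >= 0)
y = 1 if w @ x - theta >= 0 else 0

# Delta rule: dw = eta * (d - y) * x; theta's "input" is -1
w = w + eta * (d - y) * x
theta = theta + eta * (d - y) * (-1.0)

print(w, theta)                  # new w1, w2 and theta
```

With these numbers the net input is −0.5, so y = 0 before the update; the weights then move to w1 = 0.5, w2 = 0.3 and the threshold to θ = −0.2.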

(2) (4 marks) State whether each of the following statements is true or false by checking the appropriate box.

| Statement | True | False |
| --- | --- | --- |
| A three-layer back-propagation neural network with 5 neurons in each layer has a total of 50 connections and 50 weights. | | |
| A multi-layer network should have the same number of units in the input layer and the output layer. | | |
| For effective training of a neural network, the network should have at least 5-10 times as many weights as there are training samples. | | |
| A single perceptron can compute the XOR function. | | |

(3) (2 marks) Why is it not a good idea to have step activation functions in the hidden units of a multi-layer feedforward network? What are the preferred activation functions, and why?

Question (2) (10 marks)

(a) (5 marks) For the neural network shown, the network has the weight vectors given in the figure, and each unit has a bias of 0. The network is tested with the input vector x = [2, 3, 1]^T.

(i) (2 marks) Calculate the output of the first hidden layer, y1.

(ii) (3 marks) Calculate the output of the third output neuron, z3.
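A forward pass of this kind can be sketched as follows. The weight matrices here are placeholders, since the figure with the actual weight vectors is not reproduced in this text, and a sigmoid activation is assumed:

```python
import numpy as np

# Forward pass through a two-layer feedforward net with all biases 0.
# V and W are HYPOTHETICAL weights standing in for the figure's values.
V = np.array([[1.0, 0.5, -1.0],   # input -> hidden weights (2 hidden units)
              [0.0, 2.0,  1.0]])
W = np.array([[ 0.5, -1.0],       # hidden -> output weights (3 output units)
              [ 1.0,  0.5],
              [-0.5,  1.0]])
x = np.array([2.0, 3.0, 1.0])     # input vector from the question

def f(v):
    return 1.0 / (1.0 + np.exp(-v))   # assumed sigmoid activation

y = f(V @ x)   # hidden-layer outputs; y[0] corresponds to y1
z = f(W @ y)   # output-layer outputs; z[2] corresponds to z3
print(y[0], z[2])
```

The same two lines, `y = f(V @ x)` and `z = f(W @ y)`, apply with the exam's actual weight matrices substituted for V and W.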

(b) (5 marks) Consider a neural net for binary classification which has one hidden layer, as shown in the figure. We use a linear activation function at the hidden layer and a sigmoid activation function at the output layer, where x = (x1, x2) and w = (w1, w2, . . . , w9).

(i) (3 marks) What is the output y from the above neural net? Express it in terms of the inputs xi and the weights wi.

(ii) (2 marks) Draw a neural net with no hidden layer which is equivalent to the given neural net, and write the weights w of this new neural net in terms of c and wi.
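The key to part (ii) is that a linear hidden layer composes with the output weights into a single weight vector, so the whole net collapses to one sigmoid unit. A minimal numerical check, assuming a 2-3-1 layout (2 inputs, 3 linear hidden units, 1 sigmoid output, which matches the 9 weights) with arbitrary weight values, since the figure is not reproduced here:

```python
import numpy as np

# A linear hidden layer followed by a sigmoid output unit is equivalent
# to a single sigmoid unit whose weights fold the two layers together.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 2))   # input -> hidden weights (linear units)
w_out = rng.normal(size=3)           # hidden -> output weights (sigmoid unit)
x = np.array([0.7, -1.2])

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Original net: y = sigmoid(w_out . (W_hidden x))
y_two_layer = sigmoid(w_out @ (W_hidden @ x))

# Equivalent no-hidden-layer net: fold the linear layer into one vector
w_equiv = w_out @ W_hidden           # shape (2,)
y_one_layer = sigmoid(w_equiv @ x)

print(np.isclose(y_two_layer, y_one_layer))   # the two outputs agree
```

This is why a net with only linear hidden units gains no expressive power over a single-layer net.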