Noorul Islam College of Engineering


Department of Computer Science and Engineering


Subject: Neural Networks Architecture and Applications

Code : CP040

Class : ME



1. Give the problems encountered while training a neural network using the BPN algorithm.

The BPN does not extrapolate well. If a BPN is inadequately or insufficiently trained on a particular class of input vectors, subsequent identification of members of that class may be unreliable. Make sure that the training data cover the entire expected input space. During the training process, select training-vector pairs randomly from the set, if the problem lends itself to this strategy. In any event, do not train the network completely with input vectors of one class and then switch to another class: the network will forget the earlier training. These are the problems encountered while training a neural network using the BPN algorithm.
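
As an illustration of the random-selection strategy described above (not part of the original answer), here is a minimal Python sketch; training_pairs and train_step are hypothetical placeholders for the BPN's training set and its single-pattern weight update:

    import random

    def shuffled_training(training_pairs, n_epochs, train_step):
        # Present training-vector pairs in random order every epoch, so the
        # network never sees one class exclusively and then another.
        pairs = list(training_pairs)
        for _ in range(n_epochs):
            random.shuffle(pairs)      # interleave the classes
            for x, t in pairs:
                train_step(x, t)       # one BPN weight update (user-supplied)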


3. Define Lyapunov function or Energy Function?


If a bounded function of the state variables of a dynamic system can be found, such that all state changes result in a decrease in the value of the function, then the system has a stable solution. This function is called a Lyapunov function or energy function. It is of the form


    E = - Σ (i=1 to M) Σ (j=1 to n) y_i w_ij x_j
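
For illustration (not part of the original answer), a minimal Python sketch that evaluates this energy for given activation vectors and weight matrix; the numbers are made-up examples:

    import numpy as np

    def energy(x, w, y):
        # E = - sum_i sum_j y_i * w_ij * x_j
        return -float(y @ w @ x)

    w = np.array([[1.0, -0.5, 0.2],
                  [0.3,  0.8, -0.1]])   # M = 2 output units, n = 3 inputs
    x = np.array([1.0, -1.0, 1.0])
    y = np.array([1.0, 1.0])
    print(energy(x, w, y))              # approximately -1.1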



4. What is unsupervised learning?






Fig: Unsupervised learning (adaptive network)


In learning without supervision, the desired response is not known; thus explicit error information cannot be used to improve network behavior. Since no information is available as to the correctness or incorrectness of responses, learning must somehow be accomplished based on observations of responses to inputs about which we have marginal or no knowledge.



5. What is associative memory? Can BAM be made auto associative?



An associative memory stores associations between training input and target output vectors; the process of training is also called storing the vectors. If the input and target output vectors are identical, the memory is auto-associative.

Yes, BAM can be made auto-associative. The BAM stores pairs of patterns A_i, B_i and is auto-associative if B_i = A_i.


6. What is network capacity?



The number of patterns that can be stored and recalled in a net is called the network capacity.

For a Hopfield net, p = 0.15n, where n is the number of neurons in the net; for example, a net of n = 100 neurons can store roughly p ≈ 15 patterns.


9. What is a competitive network? Is a BPN an example of a competitive network?



A competitive network can be viewed as one that groups the input patterns into clusters in a way inherent to the input data.

A competitive network is closely related to clustering. The heart of a competitive network lies in the winner-take-all strategy: the winner of the competition is the unit with the largest net input.

The node with the largest activation level is declared the winner of the competition. This node is the only node that will generate an output signal; all other nodes are suppressed to the zero activation level.


No, a BPN is not a competitive network: it is a supervised multilayer feed-forward network in which all output units remain active, with no winner-take-all competition among them.
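
As an illustration of the winner-take-all strategy (not part of the original answer), a minimal Python sketch; the weight matrix and input vector are made-up examples:

    import numpy as np

    def winner_take_all(x, weights):
        # The unit with the largest net input wins; all other units are
        # suppressed to the zero activation level.
        net = weights @ x                  # net input of every competing unit
        out = np.zeros_like(net)
        out[np.argmax(net)] = 1.0          # only the winner emits an output
        return out

    weights = np.array([[0.2, 0.8],
                        [0.9, 0.1],
                        [0.5, 0.5]])
    print(winner_take_all(np.array([1.0, 0.0]), weights))   # unit 1 wins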



10. What is the principle behind Cauchy’s training?



The Cauchy distribution has the same general shape as the Boltzmann distribution, but does not fall off as sharply at large energies.

The implication is that the Cauchy machine may occasionally take a rather large jump out of a local minimum.

The advantage of this approach is that the global minimum can be reached with a much shorter annealing schedule.

For the Cauchy machine, the annealing temperature should follow the inverse of time:




    T(t_n) = T_0 / (1 + t_n)
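
For illustration (not part of the original answer), a small Python sketch of this schedule; the starting temperature T0 is an arbitrary assumed value:

    def cauchy_temperature(t, t0=10.0):
        # Cauchy annealing schedule: T(t) = T0 / (1 + t), i.e. the temperature
        # falls off as the inverse of time.
        return t0 / (1.0 + t)

    for t in range(5):
        print(t, cauchy_temperature(t))    # 10.0, 5.0, 3.33..., 2.5, 2.0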



11. What do you mean by a local minimum?

A local minimum is the smallest value of a set, function, etc. within some neighborhood. A point x* is a local minimum of y if y(x*) ≤ y(x) for all x such that ||x* - x|| ≤ ε, for some ε > 0.






12. What is the Learning Vector Quantization (LVQ) method?

Kohonen designed a supervised version of vector quantization, called LVQ, for adaptive pattern classification. Here class information is used to fine-tune the reconstruction vectors of a Voronoi quantization so as to improve the quality of the classifier's decision regions. In pattern classification problems, it is the decision borders between classes, rather than the inside of the class distributions, that should be described most accurately.
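
As an illustration (not part of the original answer), a minimal Python sketch of one LVQ1 fine-tuning step; the prototype vectors, class labels and learning rate are made-up examples:

    import numpy as np

    def lvq1_step(prototypes, labels, x, x_label, lr=0.05):
        # Find the nearest reconstruction (prototype) vector, then pull it
        # toward x if the class matches, or push it away otherwise.
        j = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        if labels[j] == x_label:
            prototypes[j] += lr * (x - prototypes[j])   # fine-tune toward x
        else:
            prototypes[j] -= lr * (x - prototypes[j])   # push away from x
        return prototypes

    protos = np.array([[0.0, 0.0], [1.0, 1.0]])          # reconstruction vectors
    labels = [0, 1]                                      # their class labels
    lvq1_step(protos, labels, np.array([0.9, 0.8]), 1)   # pulls prototype 1 closer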


13. What are self-organizing networks?

Self-organizing networks are systems that are given no external indication of what the correct responses should be, nor of whether the generated responses are right or wrong. Statistical clustering methods that operate without knowledge of the number of clusters are examples of unsupervised learning.

Unsupervised learning aims at finding a certain kind of regularity in the data represented by the exemplars.




14. What is momentum?


Momentum is a technique used to increase the speed of convergence. When calculating the weight-change value, we add a fraction of the previous change. This added term tends to keep the weight change going in the same direction; hence it is called momentum.
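
For illustration (not part of the original answer), a minimal Python sketch of a weight update with momentum; the learning rate lr and momentum coefficient alpha are assumed example values:

    import numpy as np

    def momentum_update(w, grad, prev_delta, lr=0.1, alpha=0.9):
        # Weight change = -lr * gradient + alpha * (previous weight change);
        # the added fraction of the previous change is the momentum term.
        delta = -lr * grad + alpha * prev_delta
        return w + delta, delta            # return new weights and the change

    w = np.array([0.5, -0.3])
    delta = np.zeros(2)                    # no previous change yet
    w, delta = momentum_update(w, np.array([0.2, -0.1]), delta)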



17. What is the need for biasing?


The threshold adds an offset to the weighted sum. An alternative way of achieving the same effect is to take the threshold out of the body of the model neuron and connect it to an extra input value that is fixed to be 'on' all the time.

In this case, rather than subtracting the threshold value from the weighted sum, the extra input of +1 is multiplied by a weight equal to minus the threshold value, -θ, and added in along with the other inputs. This is known as biasing the neuron.
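
A minimal Python sketch (not part of the original answer) showing that the threshold form and the bias-input form give the same output; the weights and threshold are made-up examples:

    import numpy as np

    def output_with_threshold(x, w, theta):
        # Original form: fire if the weighted sum exceeds the threshold.
        return 1.0 if w @ x - theta > 0 else 0.0

    def output_with_bias(x, w, theta):
        # Bias form: append an extra input fixed to +1 whose weight is -theta.
        x_b = np.append(x, 1.0)
        w_b = np.append(w, -theta)
        return 1.0 if w_b @ x_b > 0 else 0.0

    x, w, theta = np.array([0.7, 0.2]), np.array([1.0, -0.5]), 0.4
    print(output_with_threshold(x, w, theta), output_with_bias(x, w, theta))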



18. What are the applications of Hopfield net?


The application of the continuous Hopfield net is a class of problems known as optimization problems; one such problem is the traveling salesperson problem.



19. How do you encode the association in BAM?


In a BAM, first present the input pattern x to the X layer and the input pattern y to the Y layer. Then update the activations of the units in the Y layer: compute the net input

    y_inj = Σ_i x_i w_ij

apply the activation function, y_j = f(y_inj), and send the signals to the X layer.

Next update the activations of the units in the X layer: compute the net input

    x_ini = Σ_j w_ij y_j

apply the activation function, x_i = f(x_ini), and send the signals to the Y layer.

Finally, test for convergence: if the activation vectors x and y have reached equilibrium, stop training.
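
As an illustration (not part of the original answer), a minimal Python sketch of this bidirectional recall loop; it assumes bipolar pattern vectors and uses the sign function as a stand-in for the activation f (question 20 specifies a logistic sigmoid for binary patterns):

    import numpy as np

    def bam_recall(x, w, f=np.sign, max_iter=20):
        # Signals pass back and forth through w until the activation
        # vectors x and y stop changing (equilibrium).
        y = f(x @ w)                  # y_inj = sum_i x_i w_ij, y_j = f(y_inj)
        for _ in range(max_iter):
            x_new = f(w @ y)          # x_ini = sum_j w_ij y_j, x_i = f(x_ini)
            y_new = f(x_new @ w)
            if np.array_equal(x_new, x) and np.array_equal(y_new, y):
                break                 # equilibrium reached: stop
            x, y = x_new, y_new
        return x, y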


Fig: BAM architecture, with X-layer units X1, ..., Xi, ..., Xn fully connected to Y-layer units Y1, ..., Yj, ..., Ym.


20. Write the expression for BAM structure.


For a BAM with binary input pattern pairs [s(p) : t(p)], p = 1 … P, the weights are determined by

    w_ij = Σ_p (2 s_i(p) - 1)(2 t_j(p) - 1)

The activation function is the logistic sigmoid, i.e.

    f(y_inj) = 1 / (1 + exp(-y_inj))
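
For illustration (not part of the original answer), a minimal Python sketch of this weight rule using the outer product of the bipolar forms of each stored pair; the example patterns s and t are made up:

    import numpy as np

    def bam_weights(s_patterns, t_patterns):
        # w_ij = sum_p (2*s_i(p) - 1) * (2*t_j(p) - 1) for binary pattern pairs
        w = np.zeros((s_patterns.shape[1], t_patterns.shape[1]))
        for s, t in zip(s_patterns, t_patterns):
            w += np.outer(2 * s - 1, 2 * t - 1)   # bipolar outer product
        return w

    s = np.array([[1, 0, 1], [0, 1, 1]])   # stored input patterns s(p)
    t = np.array([[1, 0], [0, 1]])         # associated target patterns t(p)
    print(bam_weights(s, t))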



21. What is the stability-plasticity dilemma in artificial neural networks?

Clusters (categories) formed by a competitive learning network are not guaranteed to be stable. One way to prevent this is to gradually reduce the learning rate to zero, thus freezing the learned categories. However, when this is carried out, stability is gained at the expense of losing plasticity, or the ability of the network to react to new data. This problem is commonly referred to as Grossberg’s stability-plasticity dilemma.



22. Differentiate between the production and training phases in neural networks?

Training mode and production mode are two different modes of network operation. The process of training is simply a means of encoding information about the problem to be solved into the connection weights. After training has been completed, the network spends most of its time in production mode, i.e. processing new inputs with the connection weights produced by training held fixed.



23. What is the purpose of the convex combination method?

A convex region can be formed by the separating surfaces; it is one in which a line joining any two points is entirely confined to the region itself.



24. What is a second-order BPN?

Better learning can be achieved if the supervised learning is posed as an unconstrained optimization problem, where the cost function is the error function E(w). Here the optimal value of the increment in the weights is obtained by considering only up to second-order derivatives of the error function. This is called a second-order BPN.
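
For illustration (not part of the original answer), a minimal Python sketch of one second-order (Newton-style) weight increment on a made-up quadratic cost; a real second-order BPN would use the gradient and Hessian of the network error E(w):

    import numpy as np

    def newton_step(w, grad, hessian):
        # Second-order update: the increment dw solves H * dw = -grad,
        # i.e. it uses derivatives of the cost up to second order.
        dw = np.linalg.solve(hessian, -grad)
        return w + dw

    # Illustrative quadratic cost E(w) = 0.5 * w^T A w - b^T w
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    w = np.zeros(2)
    w = newton_step(w, A @ w - b, A)       # one step reaches the minimum
    print(w)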



25. What are called non-recurrent networks?

Associative memory networks may be static or dynamic memories. A static network implements a feed-forward mapping operation without a feedback, or recursive update, operation. As such, static networks are sometimes called non-recurrent networks.


26. Define resonance in BAM?


A bidirectional associative memory resonates two patterns via a matrix and its transpose. The BAM achieves accuracy by passing the output B back through the system to produce a new value of A, which may be closer to the stored pattern. The new value is passed forward again, producing a better estimate of B, and the process repeats until it settles down to a steady resonance between the stored patterns A_i and B_i.



27. Optimization:



Optimization is the process of finding the extreme point of a real-valued scalar function of several variables, of the form y : ε -> R, where the search space ε is a compact subset of R^n. An extreme point is a point x* in ε such that y(x*) takes on its minimum value.



28. N-tupling:

N-tupling is used to provide systems that are cost-effective and manage resources efficiently.



30. Discriminant Function:

Discriminant functions are lines or curves in the input space which separate regions of the data according to their classification.

In the simplest case the decision is based on only two inputs. The output values in learning are functions of the input values and the network weights, called discriminant functions. The output of the subnet Y is a function of the input x and the weights w, given by

    Y = Φ(x, w)

This is the discriminant function of the subnet Y.
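
As an illustration (not part of the original answer), a minimal Python sketch of a linear discriminant of the form Y = Φ(x, w); the weight vector is a made-up example whose decision line separates the two printed points:

    import numpy as np

    def discriminant(x, w):
        # Y = phi(x, w): here simply the sign of the weighted sum w . x.
        return 1 if w @ x > 0 else 0

    w = np.array([1.0, -1.0])                      # decision line x1 = x2
    print(discriminant(np.array([2.0, 1.0]), w))   # class 1
    print(discriminant(np.array([1.0, 2.0]), w))   # class 0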



31. Feature Vector:



Classification is rarely performed using a single measurement, or feature, from the input pattern. Usually, several measurements are required to be able to adequately distinguish inputs that belong to different categories. If we make ‘n’ measurements on our input pattern, each of which is a unique feature, we can use algebraic notation to create a set of these features and call it a feature vector.



32. Define Neuro software?

Neuro software is a powerful and easy-to-use neural network tool designed to aid experts in real-world data mining and pattern classification tasks. It hides the underlying complexity of the neural network process while providing graphs and statistics for users to easily understand the results.



33. Distributed Memory



In a neural network, ‘memory’ corresponds to an activation map of the neurons.

Memory is thus distributed over many units, giving resistance to noise.

In distributed memories, such as those in a neural network, it is possible to start with noisy data and to recall the correct data.

Distributed memory is also responsible for fault tolerance.