
1.1 INTRODUCTION:


Since time immemorial, the one thing that has set human beings apart from the rest of the animal kingdom is the brain. The most intelligent device on earth, the "human brain" is the driving force behind an ever-progressive species that pushes further into technology and development with each passing day.

Owing to his inquisitive nature, man tried to build machines that could do intelligent job processing and take decisions according to the instructions fed to them. What resulted was the machine that revolutionized the whole world: the "Computer" (more technically, the Von Neumann computer). Even though it could perform millions of calculations every second, display incredible graphics and 3-dimensional animations, and play audio and video, it made the same mistake every time.

Practice could not make it perfect, so the quest for a more intelligent device continued. This research led to the birth of more powerful processors with high-tech equipment attached to them, supercomputers capable of handling more than one task at a time, and finally networks with resource-sharing facilities. Still, the problem of designing machines capable of intelligent self-learning loomed large in front of mankind. Then the idea of imitating the human brain struck the designers, who began their research into one of the technologies that will change the way computers work, i.e. "Artificial Neural Networks".


1.1.1 WHAT IS A NEURAL NETWORK?


An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working together to solve specific problems. Typically, a neural network is trained, or fed, a large amount of data together with rules about data relationships, e.g. "a grandfather is older than a person's father". A program can then tell the network how to behave. Just as people learn from experience, the network is trained by learning. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.


1.1.2 WHY WE USE NEURAL NETWORKS?

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and to answer "what if" questions.

Other advantages include:

1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.

2. Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.

3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.


1.1.3 NEURAL NETWORKS VERSUS CONVENTIONAL COMPUTERS:

Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.

Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. They cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.

On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.

Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, such as arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.

Neural networks do not perform miracles. But if used sensibly, they can produce some amazing results.

1.2. HUMAN AND ARTIFICIAL NEURONS - INVESTIGATING THE SIMILARITIES:

1.2.1 HOW DOES THE HUMAN BRAIN LEARN?


Much is still unknown about how the brain trains itself to process information, so theories abound. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.





Fig-1.1 Components of a neuron

Fig-1.2 The synapse


1.2.2 FROM HUMAN NEURONS TO ARTIFICIAL NEURONS:

We construct these neural networks by first trying to deduce the essential features of neurons and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurons is incomplete and our computing power is limited, our models are necessarily gross idealizations of real networks of neurons.


Fig-1.3 The neuron model



1.3 BASIC STRUCTURE OF NEURAL NETWORK

Input layer: The bottom layer is known as the input layer; in this case x1 to x5 are the input-layer neurons.

Hidden layer: The layers between the input and output layers are known as hidden layers, where the knowledge gained from past experience / training is the input to the next hidden layer or to the output layer.

Output layer: The topmost layer, which gives the final output. In this case z1 and z2 are the output neurons.



Fig-1.4 Basic Structure Of Artificial Neural Network
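To make the layered structure concrete, the following is a minimal sketch of a forward pass through such a network in Python with NumPy. The layer sizes (five inputs x1 to x5, one hidden layer of three neurons, two outputs z1 and z2), the random weight values, and the sigmoid scaling function are illustrative assumptions, not taken from the report.

```python
import numpy as np

def sigmoid(a):
    # Smooth scaling function that squashes values into (0, 1)
    return 1.0 / (1.0 + np.exp(-a))

# Illustrative sizes: 5 input neurons (x1..x5), 3 hidden neurons, 2 output neurons (z1, z2)
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 5))   # weights from input layer to hidden layer
W_output = rng.normal(size=(2, 3))   # weights from hidden layer to output layer

x = np.array([0.2, 0.5, 0.1, 0.9, 0.3])   # example input vector (x1..x5)

h = sigmoid(W_hidden @ x)   # hidden-layer activations
z = sigmoid(W_output @ h)   # output-layer activations (z1, z2)

print("hidden layer:", h)
print("output layer:", z)
```

Each layer simply takes the previous layer's outputs as its inputs, which is exactly the input -> hidden -> output flow described above.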

1.3.1 NETWORK ARCHITECTURES:


1) Single-layer feed-forward networks:

In this layered neural network the neurons are organized in the form of layers. In this simplest form of a layered network, we have an input layer of source nodes that projects onto an output layer of neurons, but not vice versa. In other words, this network is strictly of a feed-forward or acyclic type. It is as shown in the figure:








Fig-1.5

Such a network is called a single-layer network, with the designation "single layer" referring to the output layer of neurons.

2) Multilayer feed-forward networks:

The second class of feed-forward neural network distinguishes itself by one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units. The function of the hidden neurons is to intervene between the external input and the network output in some useful manner. The ability of hidden neurons to extract higher-order statistics is particularly valuable when the size of the input layer is large.

The input vectors are fed forward to the first hidden layer, passed on to the second hidden layer, and so on until the last layer, i.e. the output layer, which gives the actual network response.






Fig 1.6




3) Recurrent networks:

A recurrent network distinguishes itself from a feed-forward neural network in that it has at least one feedback loop. As shown in the figures, when the output of a neuron is fed back into its own inputs, this is referred to as self-feedback.

A recurrent network may consist of a single layer of neurons, with each neuron feeding its output signal back to the inputs of all the other neurons. The network may or may not have hidden layers.



Fig 1.7




Fig 1.8
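As an illustration only (not from the report), here is a minimal NumPy sketch of a single-layer recurrent update, where each neuron's output at one time step is fed back as an additional input at the next step. The weight matrices, input values, and sigmoid scaling function are all assumed for the example.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
n_inputs, n_neurons = 3, 4
W_in = rng.normal(size=(n_neurons, n_inputs))    # input-to-neuron weights
W_fb = rng.normal(size=(n_neurons, n_neurons))   # feedback weights (outputs fed back as inputs)

y = np.zeros(n_neurons)                          # previous outputs, initially zero
for t in range(5):                               # a few time steps with a constant input
    x = np.array([1.0, 0.5, -0.2])
    y = sigmoid(W_in @ x + W_fb @ y)             # new output depends on current input and fed-back output
    print(f"step {t}: {y}")
```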

1.4 LEARNING OF ANNs

The property that is of primary significance for a neural network is the ability of the network to learn from its environment and to improve its performance through learning.

A neural network learns about its environment through an interactive process of adjustments applied to its synaptic weights and bias levels. The network becomes more knowledgeable about its environment after each iteration of the learning process.



Learning with a teacher:

1) Supervised learning:

The learning process in which a teacher teaches the network by giving it knowledge of the environment in the form of sets of pre-calculated input-output examples, as shown in the figure.



Fig-1.9

The neural network's response to the inputs is observed and compared with the predefined outputs. The difference, referred to as the "error signal", is calculated and fed back to the input-layer neurons along with the inputs, in order to reduce the error and obtain the desired response of the network as per the predefined outputs.
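A minimal sketch of this idea for a single neuron, assuming a simple delta-rule style weight update. The learning rate, the synthetic data, and the target mapping are illustrative assumptions, not the report's own algorithm.

```python
import numpy as np

# Tiny supervised example: teach one linear neuron the mapping x -> 2*x1 - x2
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 2))          # training inputs
targets = 2.0 * X[:, 0] - 1.0 * X[:, 1]        # pre-calculated "teacher" outputs

w = np.zeros(2)                                 # synaptic weights, start untrained
lr = 0.1                                        # learning rate (assumed)

for epoch in range(50):
    for x, t in zip(X, targets):
        y = w @ x                               # network response
        error = t - y                           # error signal = target - output
        w += lr * error * x                     # adjust weights to reduce the error

print("learned weights:", w)                    # should approach [2.0, -1.0]
```

The loop mirrors the description above: compare the network's response with the predefined output, form an error signal, and feed it back to adjust the weights.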



Learning without a teacher:

Unlike supervised learning, in unsupervised learning the learning process takes place without a teacher; that is, there are no examples of the function to be learned by the network.

1) Reinforcement learning / neurodynamic programming:

In reinforcement learning, the learning of an input-output mapping is performed through continued interaction with the environment in order to minimize a scalar index of performance, as shown in the figure.




Fig-1.10

In reinforcement learning, because no information on what the right output should be is provided, the system must employ some random search strategy so that the space of plausible and rational choices is searched until a correct answer is found. Reinforcement learning is usually involved in exploring a new environment when some knowledge (or subjective feeling) about the right response to environmental inputs is available. The system receives an input from the environment and produces an output as a response. Subsequently, it receives a reward or a penalty from the environment. The system learns from a sequence of such interactions.
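As a rough illustration only (assumed, not taken from the report), here is a sketch of reward-driven learning for a two-action choice: the system tries actions partly at random, receives a reward or penalty from the environment, and strengthens whichever choice is rewarded.

```python
import random

# Two possible responses to the environment; the "right" one is unknown to the learner.
values = {"action_a": 0.0, "action_b": 0.0}     # learned estimate of each action's worth
lr = 0.2                                        # learning rate (assumed)

def environment(action):
    # Hidden rule: action_b is correct and earns a reward, action_a earns a penalty.
    return 1.0 if action == "action_b" else -1.0

random.seed(0)
for step in range(200):
    # Random search over the space of plausible choices, mixed with exploiting what was learned.
    if random.random() < 0.3:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = environment(action)                 # reward or penalty from the environment
    values[action] += lr * (reward - values[action])

print(values)   # the rewarded action ends up with the higher value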

2) Unsupervised learning:

In unsupervised or self-organized learning there is no external teacher or critic to oversee the learning process, as indicated in the figure.



Fig-1.11

Rather, provision is made for a task-independent measure of the quality of the representation that the network is required to learn, and the free parameters of the network are optimized with respect to that measure. Once the network has become tuned to the statistical regularities of the input data, it develops the ability to form internal representations for encoding features of the input and thereby to create new classes automatically.
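One simple way to picture this, offered here only as an assumed illustration, is competitive (winner-take-all) learning: each unit's weight vector drifts toward the cluster of inputs it wins, so the network tunes itself to the statistical regularities of the data without any teacher.

```python
import numpy as np

rng = np.random.default_rng(3)
# Unlabeled data drawn from two clusters; the network is never told which is which.
data = np.vstack([rng.normal([0, 0], 0.1, size=(50, 2)),
                  rng.normal([1, 1], 0.1, size=(50, 2))])
rng.shuffle(data)

weights = rng.uniform(0, 1, size=(2, 2))        # two competing units, 2-D weight vectors
lr = 0.1

for x in data:
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # unit closest to the input wins
    weights[winner] += lr * (x - weights[winner])             # winner moves toward the input

print(weights)   # each row settles near one cluster centre, forming an internal representation
```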




1.5 AN ENGINEERING APPROACH:

1.5.1 A SIMPLE NEURON

An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not.



Fig-1.4 A simple neuron

1.5.2 FIRING RULES:

The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.

A simple firing rule can be implemented by using the Hamming distance technique. The rule goes as follows:


Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set).

Then the patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, then the pattern remains in the undefined state.


For example, a 3-input neuron is taught to output 1 when the input (X1, X2 and X3) is 111 or 101, and to output 0 when the input is 000 or 001. Then, before applying the firing rule, the truth table is:




X1:   0   0   0   0   1   1   1   1
X2:   0   0   1   1   0   0   1   1
X3:   0   1   0   1   0   1   0   1
OUT:  0   0   0/1 0/1 0/1 1   0/1 1




As an example of the way the firing rule is applied, take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the 'nearest' pattern is 000, which belongs to the 0-taught set.

Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus its output stays undefined (0/1).


By applying the firing rule to every column, the following truth table is obtained:

X1:   0   0   0   0   1   1   1   1
X2:   0   0   1   1   0   0   1   1
X3:   0   1   0   1   0   1   0   1
OUT:  0   0   0   0/1 0/1 1   1   1


The difference between the two truth tables is called the generalization of the neuron. Therefore the firing rule gives the neuron a sense of similarity and enables it to respond 'sensibly' to patterns not seen during training.
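A small Python sketch of this Hamming-distance firing rule, reproducing the worked 3-input example above (the helper names are our own and offered only as an illustration):

```python
def hamming(a, b):
    # Number of positions in which two equal-length bit patterns differ
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_1, taught_0):
    """Fire (1), don't fire (0), or stay undefined ('0/1') for an input pattern."""
    d1 = min(hamming(pattern, p) for p in taught_1)   # distance to nearest 1-taught pattern
    d0 = min(hamming(pattern, p) for p in taught_0)   # distance to nearest 0-taught pattern
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return "0/1"                                       # tie: output remains undefined

taught_1 = ["111", "101"]     # patterns taught to fire
taught_0 = ["000", "001"]     # patterns taught not to fire

for x in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(x, "->", firing_rule(x, taught_1, taught_0))
# Matches the generalized truth table: 0, 0, 0, 0/1, 0/1, 1, 1, 1
```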



1.5.3 PATTERN RECOGNITION - AN EXAMPLE

An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward (figure 1) neural network that has been trained accordingly. During training, the network is trained to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern.


Fig-1.5 Feed-forward Neural Network.


The network of figure 1 is trained to recognize the patterns T and H. The associated patterns are all black and all white respectively, as shown below.

If we represent black squares with 0 and white squares with 1, then the truth tables for the 3 neurons after generalization are:

X11:  0   0   0   0   1   1   1   1
X12:  0   0   1   1   0   0   1   1
X13:  0   1   0   1   0   1   0   1
OUT:  0   0   1   1   0   0   1   1

Top neuron

X21:  0   0   0   0   1   1   1   1
X22:  0   0   1   1   0   0   1   1
X23:  0   1   0   1   0   1   0   1
OUT:  1   0/1 1   0/1 0/1 0   0/1 0

Middle neuron


X31:  0   0   0   0   1   1   1   1
X32:  0   0   1   1   0   0   1   1
X33:  0   1   0   1   0   1   0   1
OUT:  1   0   1   1   0   0   1   0

Bottom neuron


From the tables it can be seen that the following associations can be extracted:


In this case, it is obvious that the output should be all black, since the input pattern is almost the same as the 'T' pattern.

Here also, it is obvious that the output should be all white, since the input pattern is almost the same as the 'H' pattern.

Here, the top row is 2 errors away from a T and 3 from an H, so the top output is black. The middle row is 1 error away from both T and H, so the output is random. The bottom row is 1 error away from T and 2 away from H; therefore the output is black. The total output of the network is still in favour of the T shape.



1.5.4 A MORE COMPLICATED NEURON

The previous neuron doesn't do anything that conventional computers don't do already. A more sophisticated neuron (figure 2) is the McCulloch and Pitts model (MCP). The difference from the previous model is that the inputs are 'weighted': the effect that each input has on decision making depends on the weight of that particular input. The weight of an input is a number which, when multiplied with the input, gives the weighted input. These weighted inputs are then added together and, if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire.


Fig-1.6 An MCP neuron

In mathematical terms, the neuron fires if and only if:

X1W1 + X2W2 + X3W3 + ... > T

The addition of input weights and of the threshold makes this neuron a very flexible and powerful one. The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to 'adapt'; the most used ones are the Delta rule and back error propagation. The former is used in feed-forward networks and the latter in feedback networks.
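A minimal sketch of the MCP firing decision described above; the particular weights and threshold value are arbitrary choices for illustration.

```python
def mcp_fires(inputs, weights, threshold):
    # The MCP neuron fires only if the weighted sum of its inputs exceeds the threshold:
    # X1*W1 + X2*W2 + X3*W3 + ... > T
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# Example: three inputs with assumed weights and threshold
weights = [0.5, -0.3, 0.8]
threshold = 0.6

print(mcp_fires([1, 0, 1], weights, threshold))   # 1.3 > 0.6 -> True (fires)
print(mcp_fires([1, 1, 0], weights, threshold))   # 0.2 > 0.6 -> False (does not fire)
```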

1.5.5 PERCEPTRON

At the heart of every neural network is what is referred to as the perceptron (sometimes called a processing element or neural node), which is analogous to the neuron nucleus in the brain. The second layer, that is, the very first hidden layer, is known as the perceptron layer. As was the case in the brain, the operation of the perceptron is very simple; however, also as in the brain, when all connected neurons operate as a collective they can provide some very powerful learning capacity.


Input signals are applied to the node via input connections (dendrites in the case of the brain). The connections have a "strength" which changes as the system learns. In neural networks the strengths of the connections are referred to as weights. Weights can either excite or inhibit the transmission of the incoming signal. Mathematically, incoming signal values are multiplied by the values of those particular weights.


At the perceptron, all weighted inputs are summed. This sum value is then passed to a scaling function. The selection of the scaling function is part of the neural network design.

The structure of the perceptron (neuron node) is as follows.


Perceptron
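A brief sketch of this summation-plus-scaling structure, assuming a sigmoid as the scaling function; the choice of function, the weights, and the input values are illustrative assumptions, not prescribed by the report.

```python
import math

def sigmoid(s):
    # One possible scaling function; others (step, tanh, ...) could be chosen at design time.
    return 1.0 / (1.0 + math.exp(-s))

def perceptron(inputs, weights, scaling=sigmoid):
    # All weighted inputs are summed, then the sum is passed through the scaling function.
    s = sum(x * w for x, w in zip(inputs, weights))
    return scaling(s)

print(perceptron([0.5, 1.0, 0.25], [0.4, -0.6, 1.2]))   # a single scaled output value
```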




1.6 ADVANTAGES OF NEURAL NETWORKS

1) Networks start processing the data without any preconceived hypothesis. They start with random weight assignments to the various input variables. Adjustments are made based on the difference between predicted and actual output. This allows for an unbiased and better understanding of the data.

2) Neural networks can be retrained using additional input variables and numbers of individuals. Once trained, they can be called on to predict for a new patient.

3) There are several neural network models available to choose from for a particular problem.

4) Once trained, they are very fast.

5) Due to increased accuracy, they result in cost savings.

6) Neural networks are able to represent any function. Therefore they are called 'Universal Approximators'.

7) Neural networks are able to learn from representative examples by back-propagating errors.