
Chapter Five: Artificial Neural Networks (ANNs)


5.1. Introduction:

Artificial neural networks (ANNs) have emerged as a powerful technique for modeling general input/output relationships. In the past, ANNs have been used for many complex tasks; applications have been reported in areas such as control, telecommunications, biomedicine, remote sensing, pattern recognition, and manufacturing, to name a few. In recent years, however, ANNs are being used more and more in the area of RF/microwave design.

Artificial neural network models can be more accurate than polynomial regression models, allow more dimensions than look-up table models, and allow multiple outputs for a single model. Models using ANNs are developed by providing sufficient training data (simulated or measured data) from which the network learns the underlying input/output mapping. Several valuable characteristics are offered by ANNs. First, no prior knowledge about the input/output mapping is required for model development: unknown relationships are inferred from the data provided for training. Therefore, with an ANN, the fitted function is represented by the network and does not have to be explicitly defined. Second, ANNs can generalize, meaning they can respond correctly to new data that has not been used for model development.







Third, ANNs have the ability to model highly nonlinear as well as linear input/output mappings. In fact, it has been shown that ANNs are capable of forming an arbitrarily close approximation to any continuous nonlinear mapping.

This chapter describes a type of neural network structure that is useful for biomedical applications.


The most commonly used neural network configurations, known as multilayer perceptrons (MLP), are described first, together with the concept of basic back propagation training.

A neural network has at least two physical components, namely, the processing elements and the connections between them. The processing elements are called neurons, and the connections between the neurons are known as links. Every link has a weight parameter associated with it. Each neuron receives stimulus from the neighboring neurons connected to it, processes the information, and produces an output. Neurons that receive stimuli from outside the network (i.e., not from neurons of the network) are called input neurons. Neurons whose outputs are used externally are called output neurons. Neurons that receive stimuli from other neurons and whose output is a stimulus for other neurons in the neural network are known as hidden neurons. There are different ways in which information can be processed by a neuron, and different ways of connecting the neurons to one another. Different neural network structures can be constructed by using different processing elements and by the specific manner in which they are connected.


5.2. The mathematical model of neural biology has been developed based on the following assumptions:

5.2.1. Information processing occurs at many simple elements called neurons, which represent the basic processing elements (PEs) in the neural network.


5.2.2. Signals are passed between neurons over connection links.


5.2.3. Each connection link has an associated weight, which, in a typical neural network, multiplies the signal transmitted. The weights on the connections encode the knowledge of a network.

5.2.4. Each neuron applies an activation function to its net input (the sum of weighted input signals) to determine its output.

5.3. A neural network is characterized by the following:


5.3.1. Its pattern of connections between the neurons, which is called its architecture.

5.3.2. Its method of determining the weights on the connections, which is called its training, or learning, algorithm.

5.3.3. Its activation function.


5.4. Neural Network Concepts


Concepts related to neural networks are described here in enough detail to provide some understanding of what can be accomplished with neural network models and how these models are developed. The basic concepts of a neural network are defined in the following.









5.4.1. Cells

A cell (or unit) is an autonomous processing element that models a neuron. The cell can be thought of as a very simple computer. The purpose of each cell is to receive information from other cells, perform relatively simple processing on the combined information, and send the results to one or more other cells.

5.4.2. Layers

A layer is a collection of cells that can be thought of as performing some type of common function. It is generally assumed that no cell is connected to another cell in the same layer. All neural networks have an input layer and an output layer to interface with the external environment. Each input and output layer has at least one cell. Any cell that is in-between the input layer and the output layer is said to be in a hidden layer. Neural networks are often classified as single-layer or multi-layer; the difference between the two types of neural networks is described in Section 5.8.

5.4.3. Arcs

An arc (or connection) is a one-way communication link between two cells. A feed-forward network is one in which the information flows from the input layer through some hidden layers to the output layer. A feedback network, by contrast, also permits "backward" communication.









5.4.4. Weights

A weight Wij is a real number that indicates the influence that cell uj has on cell ui. The weights are often combined into a weight matrix W. These weights may be initialized as zeros, or initialized as random numbers, but they can be altered during the training phase.
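For example, a minimal MATLAB sketch of the two initializations mentioned above (the network size nCells is a placeholder, and the [-0.1, 0.1] range is only an illustrative choice):

% Weight matrix W for a network of nCells processing elements (sketch)
nCells = 6;                     % placeholder network size
W = zeros(nCells);              % initialize all weights as zeros
% ...or as small random numbers, to be altered later during training:
W = 0.2*rand(nCells) - 0.1;     % uniform random weights in [-0.1, 0.1]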


5.4.5. Propagation rules

A propagation rule is a network rule that applies to all of the cells and specifies how the outputs from cells are combined into an overall network input to a cell uj. The term netj indicates this combination. The most common rule is the weighted-sum rule, wherein a sum is formed by adding the products of the inputs and their corresponding weights.

5.5. The human neural system consists basically of the brain and nerve cells. These nerve cells are connected as shown in Fig (5.1):
















Fig (5.1): The natural human nerve cell








Where:
1- The AXON: the connection between two cells.
2- The SOMA: the nerve cell body.
3- Dendrites: the ends of the cell, which connect to the different parts of the body.

Saving any data in this cell depends on its ionic composition; gradients of different elements (Ca, K, Na, Cl) control this operation.

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

5.6. Modeling of Neural Network







Fig (5.2): Schematic model of a McCulloch-Pitts neuron and its activity.


The mathematical model of neurons by McCulloch and Pitts is described as

netj = Σi Wij Oi − θj,     Oj = f(netj)

where netj is the input to a neuron j, θj is a threshold value, Wij is the strength and sense of the synaptic connection from a neuron i into a neuron j, Oi is the output signal of a neuron i, and f(net) is the output function, or activation function, of a neuron j.
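As a concrete illustration, a minimal MATLAB sketch of a single neuron of this kind (the numbers are placeholders, and the log-sigmoid activation is an assumption made only for the example):

% One McCulloch-Pitts style neuron (illustrative sketch)
O_in  = [0.2; 0.7; 1.0];        % outputs Oi of the neurons feeding neuron j
W     = [0.5, -0.3, 0.8];       % synaptic weights Wij (row vector)
theta = 0.1;                    % threshold of neuron j
net_j = W*O_in - theta;         % netj: weighted sum of inputs minus threshold
O_j   = 1/(1 + exp(-net_j));    % Oj = f(netj), log-sigmoid activation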

5.7. Activation rules

This network rule is often given by an activation function F(x) that produces the neuron's output signal. Most frequently, the same function is used for all of the cells. Several different functions have been used in neural network simulations.


5.7.1. Identity function

F(x) = x for all x     (5.3)

This activation is just the value of the combined input, as shown in Figure (5.3a).



5.7.2. Threshold function (step function):










The output is zero until the activation reaches a threshold θ; then it jumps up by the amount shown in figure (5.3b).
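Written as a formula (a sketch assuming the usual unit step, i.e. a jump of height 1; the text only states that the output jumps at θ):

F(x) = 0  for x < θ
F(x) = 1  for x ≥ θ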


5.7.3. Sigmoid function:

The sigmoid (meaning S-shaped) function is bounded within a specific range (0, 1). It is often used as the activation function for neural networks in which the desired output values are either binary or in the interval between 0 and 1. This is shown in figure (5.3c).
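Under their usual definitions, the two sigmoid forms plotted in Fig (5.3c) are:

Log-sigmoid:  F(x) = 1 / (1 + e^(−x)),  bounded in (0, 1)
Tan-sigmoid:  F(x) = tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)),  bounded in (−1, 1)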




Fig. (5.3a): Identity function.
Fig. (5.3b): Threshold function.
Fig. (5.3c): Sigmoid function (tan-sigmoid and log-sigmoid, each with and without bias).





5.8. Network architectures

The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network. There are three fundamentally different classes of network architectures: single-layer feed forward networks, multilayer feed forward networks, and recurrent networks.

5.8.1. Single-Layer Feed forward Networks

In a layered neural network (NN), the neurons are organized in the form of layers. In the simplest form of a layered network, there is an input layer of source nodes that projects onto an output layer of neurons (computation nodes), but not vice versa. In other words, this network is strictly feed forward. Fig (5.4) illustrates the case of four nodes in both the input and output layers. Such a network is called a single-layer network, with the designation "single-layer" referring to the output layer of computation nodes (neurons).

5.8.2. Multilayer Feed forward Networks

The second class of feed forward NN distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units. The function of hidden neurons is to intervene between the external input and the network output in some useful manner. By adding one or more hidden layers, the network is enabled to extract higher-order statistics. In a rather loose sense, the network acquires a global perspective despite its local connectivity, due to the extra set of synaptic connections and the extra dimension of neural interaction. The ability of hidden neurons to extract higher-order statistics is particularly valuable when the size of the input layer is large.

The source nodes in the input layer of the network supply
respective elements of the activation pattern (input vector), which
constitute the input signals applied to the neurons (computation
nodes) in the second layer (i.e., the first hidden
layer). The output
signals of the second layer are used as input to the third layer, and
so on for the rest of the network. Typically, the neurons in each
layer of the network have as their inputs the output signals of the
preceding layer only.

The set of output signals of the neurons in the output (final) layer of the network constitutes the overall response of the network to the activation pattern supplied by the source nodes in the input (first) layer. Fig (5.5) illustrates the layout of a multilayer feed forward NN for the case of a single hidden layer. For brevity, the network in Fig (5.5) is referred to as a 10-4-2 network because it has 10 source nodes, 4 hidden neurons, and 2 output neurons.

Fig (5.4): Feed forward network with a single layer of neurons (input layer of source nodes; output layer of neurons).


























Fig (5.5): Fully connected feed forward network with one hidden layer and one output layer (input layer of source nodes; layer of hidden neurons; layer of output neurons).


The neural network in Fig (5.5) is said to be fully connected in the sense that every node in each layer of the network is connected to every other node in the adjacent forward layer. If some of the communication links (synaptic connections) are missing from the network, the network is partially connected.
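As an illustration, the following is a minimal MATLAB sketch of one forward pass through such a fully connected 10-4-2 network (the random weights, log-sigmoid hidden activation, and linear output units are assumptions made only for the example):

% Forward pass through a 10-4-2 feed forward network (illustrative sketch)
x  = rand(10,1);                     % activation pattern from the 10 source nodes
V  = randn(4,10); bv = randn(4,1);   % input-to-hidden weights and biases
W  = randn(2,4);  bw = randn(2,1);   % hidden-to-output weights and biases
z  = 1 ./ (1 + exp(-(V*x + bv)));    % outputs of the 4 hidden neurons (log-sigmoid)
y  = W*z + bw;                       % outputs of the 2 output neurons (linear)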

5.8.3. Recurrent Networks:

A recurrent NN distinguishes itself from a feed forward neural network in that it has at least one feedback loop. For example, a recurrent network may consist of a single layer of neurons with each neuron feeding its output signal back to the inputs of all the other neurons, as illustrated in Fig (5.6). In this structure, there are no self-feedback loops in the network; i.e., the output of a neuron is not fed back into its own input. The recurrent network illustrated in Fig (5.6) also has no hidden neurons, but it has unit-delay operators.

Fig (5.7) illustrates another class of recurrent networks with hidden neurons. The feedback connections originate from the hidden neurons as well as from the output neurons, passing through unit-delay operators.

The presence of feedback loops, whether in the recurrent structure of Fig (5.6) or that of Fig (5.7), has a profound impact on the learning capability of the network and on its performance. Moreover, the feedback loops involve the use of particular branches composed of unit-delay elements (denoted by z⁻¹), which result in a nonlinear dynamical behavior, assuming that the NN contains nonlinear units.
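A minimal MATLAB sketch of the kind of recurrence shown in Fig (5.6), where the delayed outputs of a single layer of neurons are fed back to all the other neurons (the network size, random weights, tanh nonlinearity, and small external input are assumptions for illustration only):

% Single-layer recurrent network with unit delays and no self-feedback (sketch)
N = 4;                              % number of neurons
W = randn(N);  W(1:N+1:end) = 0;    % feedback weights; zero diagonal = no self-feedback
o = zeros(N,1);                     % outputs at time step n-1 (unit-delay memory)
for n = 1:10                        % iterate the feedback loop for a few time steps
    o = tanh(W*o + 0.1*randn(N,1)); % new outputs from delayed outputs plus small input
end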























Fig (5.6): Recurrent network with no self-feedback loops and no hidden layers



















Fig (5.7): Recurrent network with hidden neurons







5.9. Number of Hidden Neurons

Using too few hidden neurons, the network will not be able to solve the problem. Using too many hidden neurons will increase the training time, perhaps so much that it becomes impossible to train in a reasonable period of time. In addition, an excessive number of hidden neurons may cause a problem called over fitting: the network will have so much information processing capability that it will learn insignificant aspects of the training set that are irrelevant to the general population.

If the performance of the network is evaluated with the training set, it will be excellent. However, when the network is called upon to work with the general population, it will do poorly. That is because it will consider trivial features unique to the training set members, as well as important general features, and become confused. Therefore, choosing an appropriate number of hidden neurons is extremely important.

One rough guideline for choosing the number of hidden neurons in many problems is the geometric pyramid rule. It states that, for many practical networks, the number of neurons follows a pyramid shape, with the number decreasing from the input toward the output. Of course, this is not true for auto-associative networks, which have the same number of inputs as outputs, but many other networks follow this pattern. Fig (5.8) illustrates a typical three-layer network using the geometric pyramid rule.
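For a three-layer network with n input neurons and m output neurons, a common way of stating this rule (an assumption here, since the text gives only its qualitative form) is to take the number of hidden neurons near the geometric mean of the two:

Nhidden ≈ √(n · m)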































Fig (5.8): Typical three-layer network.


5.10. Learning-rate Parameter

The least mean square (LMS) algorithm learning-rate parameter is defined by Darken and Moody (1992) as

η(n) = η₀ / (1 + n/τ)     (5.8)


where n is the iteration number, τ is a selected search time constant, and η₀ is a user-selected constant. In the early stage of adaptation, involving a number of iterations n small compared to the search time constant τ, the learning-rate parameter η(n) is approximately equal to η₀, and the algorithm operates essentially as the "standard" LMS algorithm: a high value of η₀ is chosen within the permissible range, hoping that the adjustable weights will find and hover about a good set of values. Then, for a number of iterations n large compared to the search time constant τ, the learning-rate parameter η(n) approximates c/n, where c = η₀τ, and the weights converge to their optimum values.
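A minimal MATLAB sketch of this search-then-converge schedule (the numerical values of η₀ and τ are placeholders, not values taken from the text):

% Search-then-converge learning-rate schedule of Darken and Moody (sketch)
eta0 = 0.1;                 % user-selected initial learning rate (placeholder value)
tau  = 500;                 % search time constant (placeholder value)
n    = 0:2000;              % iteration numbers
eta  = eta0 ./ (1 + n/tau); % eta(n) ~ eta0 for n << tau, ~ eta0*tau/n for n >> tau
plot(n, eta), xlabel('iteration n'), ylabel('\eta(n)')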


5.11. Error Back Propagation Algorithm

The Error Back Propagation (EBP) algorithm is one of the most commonly used training algorithms for neural networks. EBP networks are widely used because of their robustness, which allows them to be applied in a wide range of tasks. EBP is a way of using known input-output pairs of a target function to find the coefficients that make a certain mapping function approximate the target function as closely as possible.

Error back propagation is a systematic method for training multilayer artificial neural networks, the computations of which have been cleverly organized to reduce their complexity in time and memory space.


The algorithm is as follows:

Given: P training pairs {x1,d1, x2,d2, ..., xP,dP}, where xp is an (I x 1) input vector, dp is an (I x 1) desired output vector, and p = 1, 2, ..., P. The hidden layer has outputs z of size (J x 1), with j = 1, 2, ..., J, and the output layer has outputs o of size (I x 1).

Note that:
1. The 0th component of each xp is of value 1, since the input vectors have been augmented.
2. The 0th component of z is of value 1, since the hidden layer outputs have also been augmented.

Compute: a set of weights for a two-layer network that maps inputs onto corresponding outputs. The back propagation algorithm is organized as follows:

1. Let I be the number of units in the input layer, as determined by the length of the training input vectors; here I is also the number of units in the output layer. Now choose J, the number of units in the hidden layer. As shown in Figure (5.5), the input and hidden layers have an extra unit used for thresholding.


2. Initialize the weights in the network. Each weight should be set randomly to a number between -0.1 and 0.1. Take a learning rate η > 0 and choose Emax.

3. Initialize the activations of the thresholding units. The values of these thresholding units should never change, i.e. x0 = 1, z0 = 1.

4. Set q = 1, p = 1, and E = 0, where q is an integer that denotes the number of training steps, p is an integer that denotes the counter within the training cycle, and E is the squared error.

5. Propagate the activations from the units in the input layer to the units in the hidden layer using the activation function:

zj = f( Σi Vji xi ),  for j = 1, 2, ..., J

6. Propagate the activations from the units in the hidden layer to the units in the output layer using the activation function:

oi = f( Σj Wij zj ),  for i = 1, 2, ..., I

7. Compute the error value:

E = E + 0.5 (di − oi)²,  for i = 1, 2, ..., I


8. The errors of the units in the output layer are denoted by δoi. These errors are based on the network's actual output (oi) and the target output (di):

δoi = (di − oi)(1 − oi) oi,  for i = 1, 2, ..., I

9. The errors of the units in the hidden layer, denoted by δzj, are calculated as follows:

δzj = zj (1 − zj) Σi δoi Wij,  for j = 1, 2, ..., J

10. Adjust the weights between the hidden and output layers as follows:

Wij = Wij + η δoi zj,  for i = 1, 2, ..., I and j = 1, 2, ..., J

11. Adjust the weights between the input and hidden layers as follows:

Vji = Vji + η δzj xi,  for j = 1, 2, ..., J and i = 1, 2, ..., I

12. If p < P, then set p = p + 1 and q = q + 1, and go to step 5. Otherwise, go to step 13.







13. The training cycle is completed. If E < Emax, terminate the training session and output the weights W, V, q, and E. Otherwise, set E = 0, p = 1, and initiate a new training cycle by going to step 5.
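A compact MATLAB sketch of steps 5 to 11 for a single training pair, under the assumptions above (sigmoid units in both layers; the sizes and data are placeholders, and the augmented 0th components are represented here by explicit bias vectors for brevity):

% One backpropagation update for a single training pair (illustrative sketch)
I = 3; J = 5;                      % input/output size I, hidden size J (placeholders)
x = rand(I,1); d = rand(I,1);      % one training pair (placeholder data)
V = 0.2*rand(J,I) - 0.1;  bV = zeros(J,1);   % input-to-hidden weights and biases
W = 0.2*rand(I,J) - 0.1;  bW = zeros(I,1);   % hidden-to-output weights and biases
eta = 0.5;                         % learning rate
f  = @(net) 1 ./ (1 + exp(-net));  % sigmoid activation used in both layers

z  = f(V*x + bV);                  % step 5: hidden layer outputs
o  = f(W*z + bW);                  % step 6: output layer outputs
E  = 0.5 * sum((d - o).^2);        % step 7: squared error for this pair
do_ = (d - o) .* (1 - o) .* o;     % step 8: output layer error terms (delta_o)
dz  = z .* (1 - z) .* (W' * do_);  % step 9: hidden layer error terms (delta_z)
W  = W + eta * do_ * z';  bW = bW + eta * do_;   % step 10: hidden-to-output update
V  = V + eta * dz  * x';  bV = bV + eta * dz;    % step 11: input-to-hidden update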

The flow chart of the error back propagation algorithm is shown below.

Flowchart of the error back propagation algorithm: start → initialize weights W, V → begin a new training step → submit pattern X and compute the layers' response → compute cycle error E → calculate errors δo, δy → adjust output-layer weights W → adjust hidden-layer weights V → if more patterns remain in the training set, begin the next training step; otherwise, if E < Emax, stop, else set E = 0 and begin a new training cycle.









5.12. Neural based diagnosis

5.12.1. Introduction

Artificial neural networks represent a good means for the diagnosis of heart diseases, especially if an advanced algorithm is applied. Therefore, a supervised classifier algorithm (an "under teacher" classifier), to classify and identify the cases of heart diseases, is to be designed.


5.12.2. Sequence of operations

The ANN works under two modes of operation:

Construction

First: Feature extraction is done as pre-processing of the image, to make the neural network easier to train and operate. These features include:
1- Histogram.
2- Mean-variance of matrix.
3- Edge.

Second: The main process is to train the network to identify the case (training and processing the information taken from images of heart diseases); this is achieved in the following steps:

1- Initiate and construct the artificial neural network, which has many parameters:
- Number of hidden layers.
- Number of neurons.
- Acceptable sum squared error (error goal).
- Activation functions.
- Maximum number of epochs.
- Weight.
- Bias.
- Output layer.

2- Start training the ANN with the specified parameters to get the optimum values of the weights and biases of the network.


Recalling

By entering the number of the simulated case, we get the image of the heart disease and the name of the disease; hence, we get the required objective of the software.

5.12.3. Illustration of some points in the program

- The operation of the neural networks is non-linear; that is due to the activation functions used.
- The histogram is a vector of gray-scale levels of length 2^n, where n is the number of intensity bits (a sketch of the feature extraction follows this list).

Fig 6.8: Histogram example

- Mean-variance is a vector of the mean-variance of each column of the image matrix; the length of the vector is the number of columns.
- Edge is the abrupt change of intensity.
- The hidden layer, the output layer, the activation functions, and the number of neurons are all constant.







- Only the input is variable.
- When the sum squared error decreases, the learning rate increases; in this way, the neural network knows whether the direction of training is the right direction or not.
- At an extremely low error goal, the network will not recognize the image if there is any small difference in it; that is not desirable, because the image taken may have a slight variation from the ideal shape. This case is called "over fitting"; in this case, the training time will be too long and the number of epochs is increased.
- At a high error goal, the network will recognize different images as the same image (it will produce two images in the same recalling step) although there are great differences between them; this is considered a disadvantage from the accurate-diagnosis point of view. This case is called "under fitting"; in this case, the training time will be small and so the number of epochs is low.
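A minimal MATLAB sketch of the three features listed above for one gray-scale ECG image (the file name is the one used in the training code of Section 5.13; imhist and edge require the Image Processing Toolbox, and reading "mean-variance" as the per-column mean and variance is an assumption):

% Feature extraction for one ECG image (illustrative sketch)
I = imread('c:\ecg\a1.jpg');     % read the ECG graph as an image
G = I(:,:,1);                    % use one channel as the gray-scale image
h = imhist(G);                   % histogram: 2^8 = 256 gray-scale levels
m = mean(double(G), 1);          % mean of each column of the image matrix
v = var(double(G), 0, 1);        % variance of each column (one value per column)
E = edge(G, 'sobel');            % edges: abrupt changes of intensity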



5.12.4. Algorithm of training program

- Clear & close all previous variables.
- Read the inputs (ECG graphs).
- Feature extraction (pre-process).
- Construct the neural network using the desired parameters.
- Initiate the feed forward neural network.
- Determine the training parameters.
- Train the back propagation neural network.
- Plot the sum squared error and learning rate versus epochs.
- Save the optimum values of weight and bias.






5.12.5. Algorithm of recalling program:

- Clear & close all previous variables.
- Load the optimum values of weight and bias.
- Supply the tolerance range.
- Enter the number of the simulated case.
- Simulate the feed forward neural network using the weights and biases.
- Test and classify the ECG graph.
- Display the ECG graph and title.


































































Fig 5.9: Flowchart of the training program









































Fig 5.10: Flowchart of the recalling program










5.12.6. Training

(Training results for the 1st network, 2nd network, 3rd network, and 4th network.)





5.13. MATLAB code for diagnosing neural network

1. Training code using the image histogram method

The following code was designed to read each electrocardiogram as an image, calculate its histogram, and then train the network on the histogram of each image.


%==========================================================
% Program to train a neural network to use it as a classifier
%==========================================================
close all,clear      % clear & close all previous variables
NNTWARN OFF          % shutdown warnings
%==========================================================
% read the inputs (ECG graphs) &
% obtain histogram vector (feature extraction) to simplify network processing
I1=imread('c:\ecg\a1.jpg');   p(:,1)=imhist(I1(:,:,1));
I2=imread('c:\ecg\a2.jpg');   p(:,2)=imhist(I2(:,:,1));
I3=imread('c:\ecg\a3.jpg');   p(:,3)=imhist(I3(:,:,1));
I4=imread('c:\ecg\a4.jpg');   p(:,4)=imhist(I4(:,:,1));
I5=imread('c:\ecg\a5.jpg');   p(:,5)=imhist(I5(:,:,1));
I6=imread('c:\ecg\a6.jpg');   p(:,6)=imhist(I6(:,:,1));
I7=imread('c:\ecg\a7.jpg');   p(:,7)=imhist(I7(:,:,1));
I8=imread('c:\ecg\a8.jpg');   p(:,8)=imhist(I8(:,:,1));
I9=imread('c:\ecg\a9.jpg');   p(:,9)=imhist(I9(:,:,1));
I10=imread('c:\ecg\a10.jpg'); p(:,10)=imhist(I10(:,:,1));
I11=imread('c:\ecg\a11.jpg'); p(:,11)=imhist(I11(:,:,1));
I12=imread('c:\ecg\a12.jpg'); p(:,12)=imhist(I12(:,:,1));
I13=imread('c:\ecg\a13.jpg'); p(:,13)=imhist(I13(:,:,1));
I14=imread('c:\ecg\a14.jpg'); p(:,14)=imhist(I14(:,:,1));
I15=imread('c:\ecg\a15.jpg'); p(:,15)=imhist(I15(:,:,1));
I16=imread('c:\ecg\a16.jpg'); p(:,16)=imhist(I16(:,:,1));
I17=imread('c:\ecg\a17.jpg'); p(:,17)=imhist(I17(:,:,1));
I18=imread('c:\ecg\a18.jpg'); p(:,18)=imhist(I18(:,:,1));
I19=imread('c:\ecg\a19.jpg'); p(:,19)=imhist(I19(:,:,1));
I20=imread('c:\ecg\a20.jpg'); p(:,20)=imhist(I20(:,:,1));
%==========================================================
% preparing inputs & network construction variables
% Pi - matrix of input vectors (256 x 5)
P1=[p(:,1),p(:,2),p(:,3),p(:,4),p(:,5)];
P2=[p(:,6),p(:,7),p(:,8),p(:,9),p(:,10)];
P3=[p(:,11),p(:,12),p(:,13),p(:,14),p(:,15)];
P4=[p(:,16),p(:,17),p(:,18),p(:,19),p(:,20)];
% Ti = target o/p matrix
T1=eye(5); T2=eye(5);
T3=eye(5); T4=eye(5);
% Si - size of the ith hidden layer
S1=600; S2=600;
S3=600; S4=600;
%==========================================================
% construct the networks (initiate feed forward networks)
[W1,B1,W2,B2]=initff(P1,S1,'logsig',T1,'purelin');
[W3,B3,W4,B4]=initff(P2,S2,'logsig',T2,'purelin');
[W5,B5,W6,B6]=initff(P3,S3,'logsig',T3,'purelin');
[W7,B7,W8,B8]=initff(P4,S4,'logsig',T4,'purelin');
%==========================================================
% setting network parameters
df1=10; df2=10;           % dfi = epochs between updating display, default = 25
df3=10; df4=10;
me=2000;                  % me = maximum number of epochs to train
eg1=0.0001; eg2=0.0001;   % egi = sum-squared error goal, default = 0.02
eg3=0.0001; eg4=0.0001;
Lr1=0.01; Lr2=0.01;       % Lri = learning rate, 0.01
Lr3=0.01; Lr4=0.01;
% tpi - training parameters (optional)
tp1=[df1 me eg1 Lr1]; tp2=[df2 me eg2 Lr2];
tp3=[df3 me eg3 Lr3]; tp4=[df4 me eg4 Lr4];
whos
%==========================================================
% training the networks and plotting the training curves
figure
[W1,B1,W2,B2,ep,tr1]=trainbpx(W1,B1,'logsig',W2,B2,'purelin',P1,T1,tp1);
plottr(tr1,eg1)
figure
[W3,B3,W4,B4,ep,tr2]=trainbpx(W3,B3,'logsig',W4,B4,'purelin',P2,T2,tp2);
plottr(tr2,eg2)
figure
[W5,B5,W6,B6,ep,tr3]=trainbpx(W5,B5,'logsig',W6,B6,'purelin',P3,T3,tp3);
plottr(tr3,eg3)
figure
[W7,B7,W8,B8,ep,tr4]=trainbpx(W7,B7,'logsig',W8,B8,'purelin',P4,T4,tp4);
plottr(tr4,eg4)
% ep  - the actual number of epochs trained
% tri - training record: [row of errors]
%==========================================================
save learn_hist
% save all program generated variables, so the optimum values of W & B are saved



2. Recalling code


close all,clear
NNTWARN OFF
load learn_hist   % load optimum values of W & B from training
index = 0;
Tb=0.9960;        % lower limit of tolerance
Tf=1.0048;        % upper limit of tolerance
while index >= 0
    index=input('Enter the number of simulated case ');
    if index > 20
        input('the number of simulated case out of index')
        index=input('number of simulated case');
    end
    %======================================================
    % simulate feed forward network
    a1= simuff(p(:,index),W1,B1,'logsig',W2,B2,'purelin');a1t=transpose(a1)
    a2= simuff(p(:,index),W3,B3,'logsig',W4,B4,'purelin');a2t=transpose(a2);
    a3= simuff(p(:,index),W5,B5,'logsig',W6,B6,'purelin');a3t=transpose(a3);
    a4= simuff(p(:,index),W7,B7,'logsig',W8,B8,'purelin');a4t=transpose(a4);
    a=[a1t a2t a3t a4t];
    %======================================================
    % Test & classify disease
    if a(1)>=Tb & a(1)<=Tf
        figure
        imshow('c:\ecg\a1.jpg');
        title('Accelerated Junctional Rhythm - Marquette copy');
    end
    if a(2)>=Tb & a(2)<=Tf
        figure
        imshow('c:\ecg\a2.jpg')
        title('Atrial Fibrillation With Moderate Ventricular Response - Marquette copy');
    end
    if a(3)>=Tb & a(3)<=Tf
        figure
        imshow('c:\ecg\a3.jpg')
        title('--Atrial Flutter With Variable AV Block - Marquette copy ');
    end
    if a(4)>=Tb & a(4)<=Tf
        figure
        imshow('c:\ecg\a4.jpg')
        title('--Atrial Flutter With Variable AV Block - Marquette copy');
    end
    if a(5)>=Tb & a(5)<=Tf
        figure
        imshow('c:\ecg\a5.jpg')
        title('Electronic Atrial Pacing - Marquette copy');
    end
    if a(6)>=Tb & a(6)<=Tf
        figure
        imshow('c:\ecg\a6.jpg')
        title('Sinus Bradycardia with 2:1 AV Block (note P waves in V2) ,Borderline 1st Degree AV Block (for conducted beats) ,Right Bundle Branch Block ');
    end
    if a(7)>=Tb & a(7)<=Tf
        figure
        imshow('c:\ecg\a7.jpg')
        title('Electronic Ventricular Pacemaker Rhythm - Marquette copy ');
    end
    if a(8)>=Tb & a(8)<=Tf
        figure
        imshow('c:\ecg\a8.jpg')
        title('First Degree block copy');
    end
    if a(9)>=Tb & a(9)<=Tf
        figure
        imshow('c:\ecg\a9.jpg')
        title('First Degree block copy');
    end
    if a(10)>=Tb & a(10)<=Tf
        figure
        imshow('c:\ecg\a10.jpg')
        title('Normal Sinus Rhythm - Marquette copy')
    end
    if a(11)>=Tb & a(11)<=Tf
        figure
        imshow('c:\ecg\a11.jpg')
        title('Pacemaker Failure to Pace - Marquette copy')
    end
    if a(12)>=Tb & a(12)<=Tf
        figure
        imshow('c:\ecg\a12.jpg')
        title('Pacemaker Failure To Sense - Marquette copy')
    end
    if a(13)>=Tb & a(13)<=Tf
        figure
        imshow('c:\ecg\a13.jpg')
        title('--Pacemaker Fusion Beat - Marquette copy')
    end
    if a(14)>=Tb & a(14)<=Tf
        figure
        imshow('c:\ecg\a14.jpg')
        title('Rate-Dependent LBBB ')
    end
    if a(15)>=Tb & a(15)<=Tf
        figure
        imshow('c:\ecg\a15.jpg')
        title('right bundle branch block copy')
    end
    if a(16)>=Tb & a(16)<=Tf
        figure
        imshow('c:\ecg\a16.jpg')
        title('second degree block copy')
    end
    if a(17)>=Tb & a(17)<=Tf
        figure
        imshow('c:\ecg\a17.jpg')
        title('third degree block copy')
    end
    if a(18)>=Tb & a(18)<=Tf
        figure
        imshow('c:\ecg\a18.jpg')
        title('Ventricular Pacing in Atrial Fibrillation - Marquette copy')
    end
    if a(19)>=Tb & a(19)<=Tf
        figure
        imshow('c:\ecg\a19.jpg')
        title('WPW and Pseudo-inferior MI copy')
    end
    if a(20)>=Tb & a(20)<=Tf
        figure
        imshow('c:\ecg\a20.jpg')
        title('WPW Type Preexcitation - Marquette copy')
    end
end