Using Artificial Neural Networks to identify π0

Marta Artamendi Tavera

University of Manchester

Neural Networks

ANNs aim to simulate biological neural networks

Good at pattern recognition

Desirable NN features: non-linear response, learning

ANN computing unit: the node (thresholding neuron)



$a_i = g\left(\sum_j w_{ij}\, a_j\right)$

Node outputs take values within [-1, 1]

Each node receives input from all neurons connecting to it; the connections carry weights $w_{ij}$

Non-linear transfer function $g$ (sigmoid function)
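
As a minimal illustration of such a node (my own sketch, not part of the original slides and not Jetnet code), the following Python snippet evaluates $a_i = g(\sum_j w_{ij} a_j)$; a tanh-shaped sigmoid is assumed for the transfer function, since the slides only say "sigmoid function".

```python
import numpy as np

def sigmoid(x):
    # Non-linear transfer function with output in (-1, 1);
    # the tanh form is an assumption, the slides only say "sigmoid".
    return np.tanh(x)

def node_output(weights, inputs):
    """One thresholding neuron: a_i = g(sum_j w_ij * a_j)."""
    return sigmoid(np.dot(weights, inputs))

# Example: a node fed by three other neurons
rng = np.random.default_rng(0)
w = rng.normal(size=3)            # weights of the connections
a = np.array([0.2, -0.5, 0.9])    # incoming activations, each in [-1, 1]
print(node_output(w, a))
```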

Artificial Neural Networks

Feed-forward (FF) architecture

FF networks use supervised training: learning changes the weights of the connections

Learning: back-propagation algorithm

Minimise the summed square error function, $E = \tfrac{1}{2}\sum \left(o - t\right)^2$, with respect to the weights ($t$ is the target output, $o$ the network output)


Gradient descent method:

Initially random weights

Weights updated in proportion to $-\partial E/\partial w_{ij}$ and the learning rate $\eta$: $\Delta w_{ij} = -\eta\,\partial E/\partial w_{ij}$
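
To make the learning step concrete, here is a small NumPy sketch (an illustration, not the Jetnet implementation) of one back-propagation update for a feed-forward network with a single hidden layer: the summed square error is differentiated with respect to the weights, and each weight moves in proportion to $-\partial E/\partial w$ times an assumed learning rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # Sigmoid-type transfer function with output in (-1, 1)
    return np.tanh(x)

def g_prime(x):
    # Derivative of g, needed by back-propagation
    return 1.0 - np.tanh(x) ** 2

def backprop_step(W1, W2, x, t, eta=0.05):
    """One gradient-descent update minimising E = 1/2 (o - t)^2 w.r.t. the weights."""
    # Forward pass through one hidden layer
    h_in = W1 @ x
    h = g(h_in)
    o_in = W2 @ h
    o = g(o_in)
    # Backward pass: the deltas are dE/d(net input) of each layer
    delta_out = (o - t) * g_prime(o_in)
    delta_hid = (W2.T @ delta_out) * g_prime(h_in)
    # Weights change in proportion to -dE/dw and the learning rate eta
    W2 = W2 - eta * np.outer(delta_out, h)
    W1 = W1 - eta * np.outer(delta_hid, x)
    return W1, W2, float(0.5 * np.sum((o - t) ** 2))

# Initially random weights for a toy 4 -> 6 -> 1 network (sizes are illustrative)
W1 = rng.normal(scale=0.5, size=(6, 4))
W2 = rng.normal(scale=0.5, size=(1, 6))
x = rng.normal(size=4)     # one training pattern
t = np.array([1.0])        # its target output t
for cycle in range(1000):
    W1, W2, err = backprop_step(W1, W2, x, t)
print(err)
```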

NN Program

Target output: a function of the mass

Jetnet 3.4: L. Lönnblad, C. Peterson, H. Pi, T. Rögnvaldsson

Training: aim to separate signal (π0) and background

1 output node: target output 1 (signal), 0 (background)

No simulated training sample: train with data

m_π0: 0.135 GeV

σ: EM resolution, 0.007 GeV

m: reconstructed invariant mass
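
The slide only states that the target output is a function of the mass, 1 for signal and 0 for background; the sketch below assumes a Gaussian in the reconstructed invariant mass, centred on m_π0 = 0.135 GeV with the EM resolution σ = 0.007 GeV as its width. The actual functional form used in the analysis may differ.

```python
import numpy as np

M_PI0 = 0.135   # GeV, pi0 mass quoted on the slide
SIGMA = 0.007   # GeV, EM resolution quoted on the slide

def target_output(m):
    """Mass-dependent target for the single output node.

    A Gaussian centred on the pi0 mass with the EM resolution as width is
    an assumed illustration, not necessarily the form used in the analysis.
    """
    return np.exp(-0.5 * ((m - M_PI0) / SIGMA) ** 2)

print(target_output(0.135))   # ~1 at the pi0 peak (signal-like)
print(target_output(0.100))   # ~0 far from the peak (background-like)
```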

NN Performance (FoM)

Test the NN's ability to separate signal and background

Fit the output distribution to a combination of the two

Signal: mass interval 0.12-0.15 GeV

Define a Figure of Merit (FoM):

$b_i$: fraction of background events in bin i

$s_i$: fraction of signal events in bin i

plus the fractions of signal and of background events in the whole distribution
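
The slide's own FoM formula does not survive in this text. As an illustration built from the same ingredients, the sketch below uses a commonly used binned separation measure, FoM = ½ Σ_i (s_i − b_i)² / (s_i + b_i); this particular combination is an assumption, not the talk's definition.

```python
import numpy as np

def separation_fom(signal_counts, background_counts):
    """Binned separation of the normalised signal and background NN-output
    distributions: 0 for identical shapes, 1 for fully disjoint ones.

    Uses FoM = 1/2 * sum_i (s_i - b_i)^2 / (s_i + b_i), a common separation
    measure that is only assumed here; the slide's own formula is not legible.
    """
    s = np.asarray(signal_counts, dtype=float)
    b = np.asarray(background_counts, dtype=float)
    s_i = s / s.sum()    # fraction of signal events in bin i
    b_i = b / b.sum()    # fraction of background events in bin i
    mask = (s_i + b_i) > 0
    return 0.5 * np.sum((s_i[mask] - b_i[mask]) ** 2 / (s_i[mask] + b_i[mask]))

# Toy histograms of the NN output for signal-like and background-like events
sig_hist = [5, 10, 20, 40, 80, 160]
bkg_hist = [160, 80, 40, 20, 10, 5]
print(separation_fom(sig_hist, bkg_hist))
```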

Figure of Merit

The FoM depends upon:

NN configuration

Number of times the NN is exposed to the training sample (number of cycles)

Size of the training and testing samples

NN Configuration

Input: events with two clusters.

E: energy

Lat: lateral shower shape

Z42: absolute value of the complex Zernike moment (4,2)

Z20: Zernike moment (2,0)

S1s9: ratio of the energy of the 9 closest crystals to that of the central one

S9s25: corresponding ratio for the 25 surrounding crystals

s2TP: second moment in theta-phi

OK: good/bad crystal flag

NX: number of crystals

NB: number of bumps

ET: energy of the nearest charged track


4-layered NN:

11 input nodes

18 nodes in the first hidden layer

11 nodes in the second hidden layer

1 output node

Training sample: 10000 events

Testing sample: 10000 events

Number of cycles: 5000
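
For illustration only, a NumPy sketch of the quoted 11-18-11-1 topology. Input scaling, bias terms, and the exact transfer function are not given on the slide, so the choices below (tanh transfer function, no biases, random weights) are assumptions and this is not the Jetnet configuration itself.

```python
import numpy as np

def g(x):
    return np.tanh(x)   # assumed sigmoid-type transfer function

rng = np.random.default_rng(2)

# 4-layered architecture from the slide: 11 -> 18 -> 11 -> 1, initially random weights
layer_sizes = [11, 18, 11, 1]
weights = [rng.normal(scale=0.5, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate the 11 input variables (E, Lat, Z42, ..., ET) to the single output node."""
    for W in weights:
        x = g(W @ x)
    return x

x = rng.uniform(-1, 1, size=11)   # one event's pre-scaled inputs (illustrative values)
print(forward(x))                 # output in (-1, 1) for this untrained toy network
```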

Preliminary results

Best FoM: 0.22

Purity and efficiency of the NN cut (see the sketch below)

Compare results
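
The slide does not spell out how purity and efficiency are defined, so the following sketch uses the standard definitions for a cut on the NN output (selected signal over all signal for efficiency, selected signal over all selected events for purity); it is an illustration, not the analysis code.

```python
import numpy as np

def purity_and_efficiency(nn_output, is_signal, cut):
    """Purity and efficiency of the selection nn_output > cut.

    Standard definitions (assumed; the slide does not state them):
      efficiency = selected signal / all signal
      purity     = selected signal / all selected events
    """
    out = np.asarray(nn_output)
    sig = np.asarray(is_signal, dtype=bool)
    selected = out > cut
    n_sel_sig = np.sum(selected & sig)
    return n_sel_sig / np.sum(selected), n_sel_sig / np.sum(sig)

# Toy usage with random numbers standing in for real NN outputs
rng = np.random.default_rng(3)
outputs = rng.uniform(0, 1, size=1000)
labels = rng.uniform(0, 1, size=1000) < outputs    # crude correlation with the output
print(purity_and_efficiency(outputs, labels, cut=0.7))
```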

Summary and Conclusions


The neural network can be very sensitive to several parameters

It is possible to successfully train a NN with data

The neural network gives better results than simple cuts

Use a neural network to select π0