
AN INTERACTIVE TOOL FOR THE
STOCK MARKET RESEARCH
USING RECURSIVE NEURAL
NETWORKS

Master Thesis

Michal Trna

michal.trna@gmail.com

= Overview =


Introduction to RNN


Demo of the tool


Application to the chosen domain


= Introduction to NN & RNN =


Motivation for neural networks


The brain contains 50-100 billion neurons

1,000 trillion synaptic connections

Solves complex problems

Recognition of complex forms

Forms well-founded predictions

↑ Contours of the human brain

Drawing of neurons from the cerebellum of a pigeon by Ramón y Cajal (1911) →

= Introduction to NN & RNN =


Non-local connections


Plasticity, synaptic learning


Creation and atrophy of the connections

[Diagram of a biological neuron: dendrites, nucleus, axon, axon terminal; the action potential travels at 1-100 m/s]

= Introduction to NN & RNN =

Hebb’s law:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

i.e.:

Cells that fire together, wire together.

Donald O. Hebb, 1949


Hebbian learning / synaptic learning

Anti-Hebbian learning

= Introduction to NN & RNN =


Mathematical model of a neuron

[Diagram: model of neuron j: inputs x_i, synaptic weights, summing junction Σ with bias, activation function f, output sent to the recipients]

$$y_j = f\Big(\sum_i w_{ji}\, x_i + b_j\Big)$$
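A minimal sketch of this neuron model in Python (the tanh activation, the function name, and the example values are assumptions for illustration):

```python
import numpy as np

def neuron_output(x, w, b, f=np.tanh):
    """Output of neuron j: the summing junction combines the weighted
    inputs and the bias, then the activation function f is applied."""
    v = np.dot(w, x) + b   # summing junction plus bias
    return f(v)            # activation function

# Example: a neuron with three inputs and tanh activation (made-up values)
x = np.array([0.5, -1.0, 2.0])   # inputs x_i
w = np.array([0.1, 0.4, -0.2])   # synaptic weights
print(neuron_output(x, w, b=0.1))
```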

= Introduction to NN & RNN =


Artificial neural networks

A neural network is a massively parallel distributed
processor that has a natural propensity for storing
experiential knowledge and making it available for use.
It resembles the brain in two respects:

1. Knowledge is acquired by the network through a learning process.

2. Interneuron connection strengths known as synaptic weights are used to store the knowledge.

= Introduction to NN & RNN =


Artificial neural network


Properties


Adaptability


Fault tolerance


Knowledge representation, context


Non-linearity


I/O mapping


= Introduction to NN & RNN =


Hebbian theory

For p patterns ξ^μ of length n, the standard Hebbian prescription:

$$w_{ij} = \frac{1}{n}\sum_{\mu=1}^{p}\xi_i^{\mu}\,\xi_j^{\mu}$$
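A short sketch of this prescription in Python, assuming bipolar (±1) patterns and the usual zero-diagonal convention (names and example patterns are illustrative):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule for p bipolar patterns of length n:
    w_ij = (1/n) * sum over mu of xi_i^mu * xi_j^mu."""
    p, n = patterns.shape
    W = patterns.T @ patterns / n   # sum of outer products xi^mu (xi^mu)^T
    np.fill_diagonal(W, 0.0)        # no self-connections (usual convention)
    return W

# Two made-up bipolar patterns of length 4
xi = np.array([[1, -1, 1, -1],
               [1,  1, -1, -1]], dtype=float)
W = hebbian_weights(xi)
print(W)
```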


= Introduction to NN & RNN =


Feed-forward neural networks





Recursive neural networks

= Introduction to NN & RNN =


Perceptron

[Diagram: the perceptron, the same neuron model as above: inputs, synaptic weights, summing junction Σ with bias, activation function f, output]

= Introduction to NN & RNN =


Perceptron


Separability, linear classifier


XOR problem (see the sketch below)

↑ Linear separation of logical AND, logical OR and logical XOR
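A minimal sketch of the perceptron learning rule in Python, illustrating that the linearly separable AND is learned while XOR is not (function names and parameters are assumptions):

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Rosenblatt perceptron rule for targets y in {0, 1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            w += lr * (yi - pred) * xi   # update only on mistakes
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
for name, y in [("AND", [0, 0, 0, 1]), ("XOR", [0, 1, 1, 0])]:
    w, b = train_perceptron(X, np.array(y))
    preds = [1 if np.dot(w, xi) + b > 0 else 0 for xi in X]
    print(name, preds)   # AND is learned exactly; XOR is not (not linearly separable)
```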


= Introduction to NN & RNN =


Multilayer perceptron

= Introduction to NN & RNN =


Multi-layer perceptron

Learning algorithm = back-propagation (see the sketch below):

1. forward pass: generate the output

2. propagate back to produce the deltas of all output and hidden layers

3. compute the gradient of the weights

4. modify the weights in the opposite direction of the gradient
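A minimal sketch of one back-propagation step in Python, assuming a single hidden layer, sigmoid units, squared error, and omitting biases for brevity (all names are illustrative):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(x, t, W1, W2, lr=0.5):
    """One back-propagation step for a one-hidden-layer MLP."""
    # 1. forward pass: generate the output
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    # 2. propagate back: deltas of the output and hidden layers
    d_out = (y - t) * y * (1.0 - y)
    d_hid = (W2.T @ d_out) * h * (1.0 - h)
    # 3. gradients of the weights
    g_W2 = np.outer(d_out, h)
    g_W1 = np.outer(d_hid, x)
    # 4. move the weights in the opposite direction of the gradient
    return W1 - lr * g_W1, W2 - lr * g_W2

# One update on a made-up sample
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
W1, W2 = backprop_step(np.array([0.0, 1.0]), np.array([1.0]), W1, W2)
```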

= Introduction to NN & RNN =


Single-layer and multi-layer perceptron

[Figure: decision regions of a single-layer, two-layer and three-layer perceptron on an arbitrary set and an XOR-like set]

= Introduction to NN & RNN =


Recurrent networks (RNN)


Simple RNN: Elman/Jordan network


Fully connected: Hopfield network

= Introduction to NN & RNN =


Elman network

[Diagram: Elman network with a context layer]

= Introduction to NN & RNN =


Jordan network

[Diagram: Jordan network with a context layer]
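A minimal sketch of one Elman time step in Python: the context layer holds a copy of the previous hidden state, whereas a Jordan network would feed back the previous output instead (all names and sizes here are illustrative):

```python
import numpy as np

def elman_step(x, context, W_in, W_ctx, W_out):
    """One time step of an Elman network: the hidden layer sees the
    input plus the context layer; the new hidden state becomes the
    context for the next step."""
    h = np.tanh(W_in @ x + W_ctx @ context)   # hidden state
    y = W_out @ h                             # output
    return y, h                               # h is the next context

# Running a made-up sequence through the network
rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 2))
W_ctx = rng.normal(size=(4, 4))
W_out = rng.normal(size=(1, 4))
context = np.zeros(4)
for x in np.array([[0.1, 0.2], [0.3, -0.1], [0.0, 0.5]]):
    y, context = elman_step(x, context, W_in, W_ctx, W_out)
```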

= Introduction to NN & RNN =


Hopfield networks

Dynamic equation (the standard update for bipolar units):

$$s_i(t+1) = \operatorname{sgn}\Big(\sum_j w_{ij}\, s_j(t) - \theta_i\Big)$$

= Introduction to NN & RNN =


Synaptic potential, threshold


Mode of operation


Synchronous


Asynchronous


Deterministic


Non-deterministic


Energy


Autoassociative memory


Capacity: 0.15 N
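A minimal sketch of asynchronous deterministic recall and the energy function in Python, assuming bipolar units, a uniform threshold, and a Hebbian weight matrix as above (names and the example pattern are illustrative):

```python
import numpy as np

def hopfield_recall(W, s, theta=0.0, steps=200, seed=0):
    """Asynchronous deterministic recall in a bipolar Hopfield net:
    repeatedly pick one neuron and set s_i = sgn(sum_j w_ij s_j - theta)."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))                  # one neuron at a time
        s[i] = 1 if W[i] @ s - theta > 0 else -1  # threshold update
    return s

def energy(W, s, theta=0.0):
    """E = -1/2 s^T W s + theta * sum(s); non-increasing under
    asynchronous updates, so the network settles in a minimum."""
    return -0.5 * s @ W @ s + theta * np.sum(s)

# Store one made-up pattern and recall it from a corrupted probe
pattern = np.array([1, -1, 1, -1, 1, -1], dtype=float)
W = np.outer(pattern, pattern) / len(pattern)
np.fill_diagonal(W, 0.0)
probe = pattern.copy()
probe[0] = -probe[0]              # flip one bit
print(hopfield_recall(W, probe))  # converges back to the stored pattern
```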

= Graph Approach =


Graph approach


Acquiring pattern ξ:





Hopfield network:

= Graph Approach =


Coloring

[Figure: graph colored into a red component and a blue component]

= Graph Approach =


Tetrahedral property

= Graph Approach =


Tetrahedral property


Four possible configurations



= Graph Approach =


Parameters

= Graph Approach =


Energy point, projection to 2D

Energy lines → classes

Scalar energy

Control of the convergence


= Graph Approach =


Relative weight of a neuron

the contribution of this neuron to the component I or O





Deviation





“a hash function”

= Graph Approach =


Thresholds

= Tool =


Time for a demo


http://msc.michaltrna.info/markers/index.html

↑ Typical convergence path



Outlooks, future lines of work

Use the deviation to discriminate parasitic states

Quantify the results

Application to automatic trading








Thank you for your attention!





Time for your questions