Tenge exchange rate model using new neural network training method

Author1
Author affiliation
Author Email

Author2
Author affiliation
Author Email
Abstract:
The paper reports results from a new robust neural network (RNN) based KZT/USD exchange rate model, compared against an AR(2) model. The characteristics of the RNN method are explained, followed by a comparative study of quarterly KZT/USD models. Diagnostic measures such as the mean absolute deviation (MAD), R² and the tracking signal (TS) suggest the superiority of the RNN KZT/USD model. The RNN training method computes adaptive directional search vectors such that the error function is gradually optimized. The method automatically computes adjustable dynamic learning rates to direct the search toward the minimum valley of the error function. The algorithm identifies dynamic adjustable learning rates that generate a convergent sequence of error function values. As a result, the training is robust for modelling the exchange rate. The RNN model performs better in MAD, TS and R² value.

Key-Words: KZT/USD exchange rate, robust neural networks, adjustable training, forecast.
1. Introduction
Following the exchange rate crises in the CIS region after the collapse of the USSR, many small open economies adopted flexible exchange rates in combination with some kind of monetary or interest rate mechanism. Exchange rate volatility in Kazakhstan is a significant issue, given the possibility of large transaction losses induced by a volatile exchange rate. Since the Kazakhstan economy depends heavily on international trade and FDI (Foreign Direct Investment), exchange rate volatility in Kazakhstan influences both international trade and FDI. Kazakhstan has consistently achieved moderate growth driven by oil exports, but international oil prices are declining, and under such market conditions the exchange rate could be influenced by several macroeconomic factors. The USD exchange rate trended upward until 2003, but the dollar's rate against the Tenge has fallen since 2003 (see Table 1).

There have been a number of attempts to apply neural networks (NN) to the task of financial modelling (Cao et al. 2005; Jasic and Wood 2004; Kaastra and Boyd 1995; Lam 2004; Nygren 2004). When it comes to performing a predictive analysis, it is very difficult to build one general model that will fit every market. Such models tend to be specific to markets and asset classes, and a general model may not be applicable across markets. Similarly, there may be temporal changes as well, which means that the models may need to be modified over time in order to preserve their effectiveness (Zhang et al. 1998). We develop a KZT/USD exchange rate forecast model using the new RNN training method, and brief results are reported to show the superior performance of the model against an AR(2) model.
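For concreteness, the sketch below shows one simple way an AR(2) benchmark of this kind can be fitted by ordinary least squares. The paper does not state its estimation procedure, so this construction is an assumption used only for illustration.

```python
import numpy as np

def fit_ar2(y):
    """Ordinary least-squares fit of the AR(2) benchmark
    y_t = c + a1*y_{t-1} + a2*y_{t-2} + e_t (illustrative only)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 2), y[1:-1], y[:-2]])  # [1, y_{t-1}, y_{t-2}]
    coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)             # coef = [c, a1, a2]
    fitted = X @ coef                                            # in-sample fitted values
    return coef, fitted
```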
2. Rationales for Self-Adaptive Training Method
Consider a NN error function with m training weights, where the learning rates are identified automatically. For computational convenience, the higher-dimensional error function in E^m is decomposed into several error functions of lower dimension. The transformed error functions retain the true convex characteristics of the original error function. The gradient information is evaluated, and the training method updates all the network weight parameters w_j, j = 1, 2, ..., m, by a factor such that an improvement in training is noticed. Each epoch identifies m different learning rates along the training directions. Once the NN training weights are updated, the error function is evaluated to observe the improvement in the error function and the rate of convergence.
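One simple reading of this decomposition is to restrict the m-dimensional error function to one weight coordinate at a time; each restriction is a one-dimensional error function that inherits the convexity of the original. The sketch below illustrates this reading only; the paper does not give its exact construction, and the helper `coordinate_slice` is a hypothetical name used for illustration.

```python
import numpy as np

def coordinate_slice(f, w, j):
    """Restrict an m-dimensional error function f to weight coordinate j,
    holding every other weight fixed. Each slice is a 1-D error function;
    for a convex f the slice is convex as well."""
    def f_j(t):
        w_trial = w.copy()
        w_trial[j] = t
        return f(w_trial)
    return f_j

# Example: a convex quadratic error and its slice along the first weight.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda w: 0.5 * w @ A @ w
f_0 = coordinate_slice(f, np.array([1.0, -1.0]), 0)
print(f_0(0.0), f_0(0.5))
```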
Various forms of the standard back-propagation training method and its variants are not self-adaptive; they are heuristic training methods (Ahmed et al., 2000a; Haykin, 1999; Weir, 1991; Kamarthi et al., 1999). The training direction d_k of the error function f(w) is computed using the gradient information ∇f(w_k) from a single training pattern in standard on-line back-propagation (Bishop, 1995; Ahmed et al., 2000b). A fixed value of the learning rate does not always lead to the maximum local decrease in function value. The learning rate depends on the shape of the error function (Jacobs, 1988) as training proceeds from the current epoch k to the subsequent epochs k+1, k+2, and so on. A constant learning rate η_k can direct the search away from the local minimum during epoch k, and the directions d_k generated from the gradient ∇f(w_k) differ for a single weight component in standard back-propagation training.
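As a point of contrast, a standard on-line back-propagation step with a fixed learning rate looks like the sketch below. It is deliberately reduced to a single linear output unit with squared error, which is an assumption for illustration rather than the paper's network; the point is only that η is constant regardless of the shape of the error function.

```python
import numpy as np

def online_backprop_step(w, x, y, eta=0.01):
    """One on-line back-propagation step for a single linear unit with
    squared error. The gradient comes from one training pattern (x, y)
    and the learning rate eta is fixed for every epoch and every weight."""
    y_hat = w @ x              # output of the unit for this pattern
    grad = (y_hat - y) * x     # gradient of 0.5 * (y_hat - y)**2 w.r.t. w
    return w - eta * grad      # fixed eta, independent of the error surface
```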
Consider a training method, which we call RNN, in which an iterative algorithm is applied to the error function starting from an arbitrary initial weight vector w_k. Beginning at iteration k, the algorithm generates a sequence of vectors w_{k+1}, w_{k+2}, ... during epochs k+1, k+2, ... and henceforth. The iterative algorithm is globally convergent if the sequence of vectors converges to a solution set Ω.
Consider the problem: minimize f(w), subject to w ∈ E^m. Let Ω ⊂ E^m be the solution set, and let the application of an algorithmic map ℬ, starting with the weight vector w_k, generate the sequence w_{k+1}, w_{k+2}, .... If (w_k, w_{k+1}, w_{k+2}, ...) ∈ Ω, then the algorithm converges globally and the algorithmic map is closed over Ω.
The following properties are utilized to train the new RNN.
Property 1: Suppose that f: E^m → E^1 and the gradient of the error function, ∇f(w), is defined. If there is a directional vector d such that ∇f(w)^T d < 0 and f(w + ηd) < f(w) for all η ∈ (0, δ), δ > 0, then the vector d is a descent direction of f(w), where δ is an assumed arbitrary positive scalar.
Property 2: Let f: E^m → E^1 be a descent function. Consider any training weight w ∈ E^m and any d ∈ E^m with d ≠ 0. Then the directional derivative ∇f(w; d) of the error function f(w) in the direction d always exists.
The expression to update the RNN network weight w_{k+1} is given by

w_{k+1} = w_k + η_k d_k.

Here, η_k is defined as the solution of a minimization problem of the type

η_k = arg min_η { f(w_k + η d_k) }, subject to η ∈ L, where L = {η : η ∈ E^1}.
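Putting the pieces together, a hedged sketch of a training epoch in this spirit is given below: the descent condition of Property 1 is checked, and then a separate learning rate is picked for each weight component by a crude one-dimensional minimisation of the error along that component. The candidate grid of learning rates and the coordinate-wise search are assumptions made for illustration; this is not the authors' exact RNN algorithm.

```python
import numpy as np

def rnn_style_epoch(f, grad_f, w, etas=np.logspace(-4, 0, 25)):
    """One epoch of the update w_{k+1} = w_k + eta_k * d_k, with one
    learning rate per weight chosen by a grid search over `etas`.
    Illustrative sketch only; not the paper's exact procedure."""
    g = grad_f(w)
    d = -g                                  # steepest-descent direction
    if g @ d >= 0:                          # Property 1: require a descent direction
        return w, f(w)                      # g = 0: already at a stationary point
    w_new = w.copy()
    for j in range(len(w)):                 # m learning rates per epoch, one per weight
        if d[j] == 0.0:
            continue
        best_eta, best_val = 0.0, f(w_new)
        for eta in etas:                    # eta_j ~ argmin_eta f(w + eta * d_j along weight j)
            trial = w_new.copy()
            trial[j] += eta * d[j]
            val = f(trial)
            if val < best_val:
                best_eta, best_val = eta, val
        w_new[j] += best_eta * d[j]         # update this weight with its own learning rate
    return w_new, f(w_new)

# Example on a convex quadratic error: f(w_k) never increases from one
# epoch to the next, so the error sequence is convergent.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda w: 0.5 * w @ A @ w
grad_f = lambda w: A @ w
w = np.array([2.0, -1.5])
for _ in range(20):
    w, err = rnn_style_epoch(f, grad_f, w)
```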
3. Results

Table 1 shows that both the R² and the MAD value for the RNN model are better than those of the AR(2) model. This confirms the superiority of the KZT/USD exchange rate model built with the new RNN method. The TS value in both cases is within acceptable limits; however, the plot shows that the TS values for the RNN model are spread asymmetrically about the zero level, and the overall predictability with the RNN model is good.
Model    R²      MAD     TS
RNN      0.92    3.11    0.051
AR(2)    0.87    5.60    0.002

Table 1: Performance of the RNN and AR(2) models (A = Actual, F = Fitted quarterly KZT/USD exchange rate, 1999 to 2007). The TS plots and the fitted-versus-actual plots are not reproduced here.
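For reference, the diagnostic measures reported in Table 1 can be computed as in the sketch below. Standard textbook definitions are used; the paper does not spell out its exact formulas, so this is an illustrative reading of MAD, R² and the tracking signal.

```python
import numpy as np

def diagnostics(actual, fitted):
    """Mean absolute deviation, R-squared and tracking signal for a
    series of fitted values against the actual series."""
    actual = np.asarray(actual, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    err = actual - fitted
    mad = np.mean(np.abs(err))                               # mean absolute deviation
    r2 = 1.0 - np.sum(err**2) / np.sum((actual - actual.mean())**2)
    ts = np.sum(err) / mad                                   # tracking signal = cumulative error / MAD
    return mad, r2, ts
```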
4. Conclusions

This research examined and analyzed the use of a newly developed neural network model (RNN) in foreign exchange forecasting for KZT against USD. The RNN model gives evidence that it is possible to extract information to forecast the exchange rate reliably. The evaluation of the model is based on estimates of the mean absolute deviation, R² and the tracking signal.
5. References

[1] Xxxx, uyyyy (year). "Neural Network without Learning Parameter", xyz 2nd International Conference, 2000, xyz Place.
[2] vvvvv, cccccc (Year). "Multi-directional Training Algorithm for Feed-Forward Neural Network", Journal xyz, 2000.
[3] Bishop, Christopher M. (1995). Neural Networks for Pattern Recognition. Oxford, UK: Clarendon.
[4] Cao, Q., K.B. Leggio and M.J. Schniederjans (2005). 'A Comparison Between Fama and French's Model and Artificial Neural Networks in Predicting the Chinese Stock Market', Computers and Operations Research, 32: 2499–2512.
[5] Haykin, Simon (1999). Neural Networks: A Comprehensive Foundation. Prentice Hall, NJ.
[6] Jacobs, R.A. (1988). 'Increased Rates of Convergence Through Learning Rate Adaptation', Neural Networks, 1: 295–307.
[7] Jasic, T. and D. Wood (2004). 'The Profitability of Daily Stock Market Indices Trades Based on Neural Network Predictions: Case Study for the S&P 500, the DAX, the TOPIX and the FTSE in the Period 1965–1999', Applied Financial Economics, 14(4): 285–297.
[8] Kaastra, L. and M. Boyd (1995). 'Designing a Neural Network for Forecasting Financial and Economic Time Series', Neurocomputing, 10(3): 215–236.
[9] Kamarthi, S.V. and Pittner, S. (1999). 'Accelerating Neural Network Training Using Weight Extrapolations', Neural Networks, 12: 1285–1299.
[10] Lam, M. (2004). 'Neural Network Techniques for Financial Performance Prediction: Integrating Fundamental and Technical Analysis', Decision Support Systems, 37(4): 567–581.
[11] Nygren, K. (2004). Stock Prediction: A Neural Network Approach. Master's Thesis, Royal Institute of Technology, KTH, Sweden.
[12] Weir, M.K. (1991). 'A Method for Self-Determination of Adaptive Learning Rates in Back Propagation', Neural Networks, 4: 371–379.
[13] Zhang, G. and M.Y. Hu (1998). 'Neural Network Forecasting of the British Pound/US Dollar Exchange Rate', International Journal of Management Science, 26(4): 495–506.