
Theme/Session: Prediction and Predictability of Monsoon/I
Poster
Registration ID: OC-000227
Extended Abstract, International conference on "Opportunities and Challenges in Monsoon Prediction in a Changing Climate" (OCHAMP-2012), Pune, India, 21-25 February 2012




Prediction of All India Summer Monsoon Rainfall Using Artificial Neural Network

Pritpal Singh and Bhogeswar Borah


Department of Computer Science and Engineering, Tezpur University, Tezpur, Napaam-784028, India, pritpal@tezu.ernet.in


Abstract: Many researchers have paid attention to applying artificial neural networks (ANN) to Indian summer monsoon rainfall (ISMR) forecasting, because an ANN has the capacity to extract the relationship between input and output data. In this paper, we propose ten neural network architectures, designated BP1, BP2, ..., BP10, each built from three layers of neurons (one input layer, one hidden layer and one output layer). The ISMR data set is taken from Pathasarathy for the period 1871-1994 for the months of June, July, August and September, and for the whole season (mean of June, July, August and September). The data sets are trained and tested separately for each of the neural network architectures, viz., BP1-BP10. Experimental results indicate that the neural networks BP1-BP10 achieve a higher accuracy rate than the existing model on both the training and test data sets. The experimental results also show that a relatively small number of input and hidden neurons can achieve a reasonably high accuracy rate.


Keywords: Indian Summer Monsoon Rainfall, Artificial Neural Network, Feed-Forward Neural Network, Back-Propagation Neural Network.


1 Introduction

The ensemble of statistics and mathematics has increased the accuracy of ISMR forecasting to some extent. However, because of the non-linear nature of ISMR, its forecasting accuracy is still below the satisfactory level. In 2002, the Indian Meteorological Department (IMD) failed to predict the deficit of rainfall during the ISMR, which led to considerable concern in the meteorological community [1]. In 2004, a drought was again observed in the country, with a rainfall deficit of more than 13% [2]. This shortfall of rainfall could not be predicted by any statistical or dynamical model.

The major objective of the present study is to develop an ANN-based model for the prediction of ISMR on monthly and seasonal time scales. The prediction of ISMR is done based on the observed time series of the monthly rainfall data set. This experiment also focuses on using a limited number of input data.

2 Description of Data Sets

The ISMR data set is taken from Pathasarathy et al. [3] for the period 1871-1994 for the months of June, July, August and September, and for the whole season (mean of June, July, August and September). Sahai et al. [4] divided the data set into a training data set covering the period 1876-1960 and a test data set covering the period 1961-1994. We have also carried out our experiment to predict ISMR using these periods for the training and test data sets.
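As a minimal illustration of this split (not the authors' actual preprocessing), a year-indexed rainfall series can be divided as follows; rainfall_by_year is a hypothetical mapping from year to a monthly or seasonal rainfall value.

# Sketch: split a year-indexed rainfall series into the training period
# (1876-1960) and the test period (1961-1994) used by Sahai et al. [4].
# `rainfall_by_year` is a hypothetical dict {year: rainfall value}.
def split_by_period(rainfall_by_year):
    train = {y: v for y, v in rainfall_by_year.items() if 1876 <= y <= 1960}
    test = {y: v for y, v in rainfall_by_year.items() if 1961 <= y <= 1994}
    return train, test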


3 Description of the Method

An ANN is an interconnection of information-processing systems, units or nodes whose construction and implementation are based on the human brain. An ANN can process a large number of units, called neurons, simultaneously in parallel mode. There is an interconnection link from one neuron to another.

Neural networks are classified into either single-layer or multi-layer neural networks. A single-layer feed-forward (SLFF) neural network is formed when the nodes of the input layer are connected to processing nodes with various weights, resulting in a series of output nodes. A multi-layer feed-forward (MLFF) neural network architecture can be developed by increasing the number of layers in the SLFF neural network; the additional (hidden) layers lie between the input layer and the output layer.

The back-propagation neural network (BPNN) architecture is one of the significant developments in the area of ANN. A BPNN can consist of an MLFF neural network with one input layer, a limited number of hidden layers and one output layer.
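As an illustration of the method described above, a minimal three-layer MLFF network trained with error back-propagation can be sketched as follows. This is not the authors' implementation; the class name, the use of NumPy and the vectorised update rules are our own assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerBPNN:
    """Minimal three-layer (input-hidden-output) feed-forward network
    trained with back-propagation and a momentum term."""

    def __init__(self, n_in, n_hidden, n_out=1, init_weight=0.3, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights, scaled by the initial-weight setting.
        self.W1 = rng.uniform(-init_weight, init_weight, (n_in, n_hidden))
        self.W2 = rng.uniform(-init_weight, init_weight, (n_hidden, n_out))
        self.dW1 = np.zeros_like(self.W1)   # previous updates, for momentum
        self.dW2 = np.zeros_like(self.W2)

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden-layer activations
        self.y = sigmoid(self.h @ self.W2)  # output-layer activation
        return self.y

    def backward(self, x, target, lr=0.3, momentum=0.6):
        # Output-layer error term (sigmoid derivative = y * (1 - y)).
        delta_out = (target - self.y) * self.y * (1.0 - self.y)
        # Error propagated back to the hidden layer.
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        # Weight updates with momentum.
        self.dW2 = lr * np.outer(self.h, delta_out) + momentum * self.dW2
        self.dW1 = lr * np.outer(x, delta_hid) + momentum * self.dW1
        self.W2 += self.dW2
        self.W1 += self.dW1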

4 Architecture of the Proposed Model

In our neural network based architecture, some additional complexities are removed by considering the following design paradigms:

[A] An MLFF neural network with a nonlinear activation function can classify the data very efficiently. So, this type of neural network is considered in the development of our architecture.

[B] An MLFF neural network with no more than three layers can generate arbitrarily complex decision regions. So, a single hidden layer (i.e., no more than three layers) is considered in designing the architecture.

[C] A large number of neurons in the hidden layer can make the training process of the MLFF neural network more complex, because the weight of each interconnection link needs to be adjusted at each iteration of the training process. While adjusting the weights, an error is calculated each time. To minimize this error, a BPNN is used, where the error is propagated back to the hidden layers. So, the BPNN is ensembled with the MLFF neural network in our architecture.



Based on the above design paradigms, we have proposed ten neural network architectures, designated BP1, BP2, ..., BP10, with the help of a three-layer neural network (one input layer, one hidden layer and one output layer).

The minimum number of neurons in the input layer and the hidden layer is determined by the following equation:

HL_nodes = IL_nodes + 1    (1)


where HL_nodes is the number of nodes in the hidden layer and IL_nodes is the number of nodes in the input layer.
The description of the neural network architectures with different HL_nodes and IL_nodes is listed in Table 1. The neural network architecture of BP1 is shown in Figure 1. By subsequently increasing the values of IL_nodes and HL_nodes, we can obtain the remaining architectures of the neural networks, as listed in Table 1.
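For illustration only, the ten architectures of Table 1 follow directly from Eq. (1); the short sketch below simply enumerates them (variable names are hypothetical).

# Enumerate the BP1-BP10 architectures of Table 1 using Eq. (1):
# HL_nodes = IL_nodes + 1, with IL_nodes running from 11 to 20.
architectures = [
    {"name": f"BP{i}", "IL_nodes": il, "HL_nodes": il + 1, "output_nodes": 1}
    for i, il in enumerate(range(11, 21), start=1)
]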



To control the training process of the BPNN and to provide the best architecture, we need to set some additional parameters, such as the initial weight (IW), learning rate (LR), momentum, epoch, minimum weight delta (MWD) and activation function (AF).
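A minimal sketch of how such settings could drive the training process is shown below, using the values of Table 4 and the hypothetical ThreeLayerBPNN class sketched in Section 3; the early-stopping test on the minimum weight delta (MWD) is our own interpretation of that parameter.

import numpy as np

# Illustrative training loop with the additional parameters of Table 4
# (LR = 0.3, momentum = 0.6, epoch = 10000, MWD = 0.0001, sigmoid AF).
def train(net, X_train, y_train, lr=0.3, momentum=0.6,
          epochs=10000, min_weight_delta=1e-4):
    for _ in range(epochs):
        max_delta = 0.0
        for x, t in zip(X_train, y_train):
            net.forward(x)
            net.backward(x, t, lr=lr, momentum=momentum)
            max_delta = max(max_delta, np.abs(net.dW1).max(), np.abs(net.dW2).max())
        # Stop early when no weight changes by more than the minimum weight delta.
        if max_delta < min_weight_delta:
            break
    return net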



6 Experimental Results and Conclusions


The data sets mentioned in Section 2 are trained and tested separately for each of the neural network architectures, viz., BP1-BP10. All the corresponding results after the training and testing processes are listed in Tables 2 and 3 in terms of the root mean square error (RS). The last columns of Tables 2 and 3 show the results reported by the Sahai et al. [4] model.
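For reference, the root mean square error (RS) reported in Tables 2 and 3 can be computed with a short sketch such as the following (function and variable names are ours):

import numpy as np

def root_mean_square_error(predicted, observed):
    """RS between predicted and observed rainfall values."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))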
Experimental results indicate that the neural networks BP1-BP10 exhibit a higher accuracy rate than the Sahai et al. [4] model on both the training and test data sets. During the training and testing processes of the neural networks, different experiments were carried out to set the additional parameters so as to obtain the optimal results, and we have chosen the settings that exhibit the best behavior in terms of RS. The settings of all these additional parameters are listed in Table 4.


In this study, the validity of neural network simulation for monthly and seasonal rainfall over the whole of India is evaluated, and the corresponding neural network architectures are presented. The main advantage of the neural network architectures is that they can create their own representation of the information they receive during the learning time. The presented neural networks are computationally robust and have the ability to learn very fast. The architectures of our neural networks are comparatively simpler and are able to predict the non-linear behavior of ISMR better than the Sahai et al. [4] model.


Figure 1. BP1 neural network architecture.


Table 1. Description of the ten neural network architectures.

Designation   IL_nodes   HL_nodes   Output layer node
BP1           11         12         1
BP2           12         13         1
BP3           13         14         1
BP4           14         15         1
BP5           15         16         1
BP6           16         17         1
BP7           17         18         1
BP8           18         19         1
BP9           19         20         1
BP10          20         21         1


Table 2. Experimental results of the training data set and their comparison with the existing model [4].

S. No.     NN                                                                       Model
           BP1    BP2    BP3    BP4    BP5    BP6    BP7     BP8    BP9     BP10    [4]
RS(Jun)    8.67   6.10   8.21   9.72   9.22   8.59   8.30    8.99   10.36   10.84   22.34
RS(Jul)    8.19   7.35   8.21   8.26   9.65   8.63   10.35   10.0   9.87    8.15    21.94
RS(Aug)    7.01   3.14   3.34   7.08   6.7    6.81   5.65    7.95   5.27    7.83    17.84
RS(Sep)    3.78   3.12   4.41   4.68   4.86   5.12   5.1     4.65   5.93    5.11    23.95
RS(Seas)   2.69   2.02   3.25   3.03   2.24   2.74   2.8     2.54   2.46    2.76    34.94


Table 3. Experimental results of the test data set and their comparison with the existing model [4].

S. No.     NN                                                                     Model
           BP1    BP2    BP3    BP4    BP5    BP6    BP7    BP8    BP9    BP10    [4]
RS(Jun)    3.56   2.69   3.63   4.18   3.33   5.39   6.05   5.40   3.74   5.41    22.34
RS(Jul)    5.58   3.38   4.60   3.65   5.84   5.87   5.0    4.42   3.73   5.69    21.94
RS(Aug)    1.09   1.01   4.39   2.69   3.16   3.28   5.86   5.71   6.27   6.83    17.84
RS(Sep)    4.28   3.00   3.79   4.79   3.05   6.12   3.52   7.63   4.42   6.51    23.95
RS(Seas)   1.14   1.02   4.03   3.25   3.78   2.37   4.31   3.76   2.99   4.54    34.94


Table 4. Additional parameters and their values during the training and testing processes of neural networks BP1-BP10.

Serial No.   Additional parameter   Input value
1            IW                     0.3
2            LR                     0.3
3            Momentum               0.6
4            Epoch                  10000
5            MWD                    0.0001
6            AF                     Sigmoid


References

[1] Gadgil et al., Current Science, 84, 394 (2002).
[2] S. Gadgil, M. Rajeevan and R. Nanjundiah, Current Science, 88, 1389 (2005).
[3] B. Pathasarathy, A.A. Munot, D.R. Kothawale, Theoretical and Applied Climatology, 49, 217 (1994).
[4] A.K. Sahai, M.K. Soman and V. Satyan, Climate Dynamics, 16, 291 (2000).