Research on the Effectiveness of Neural Networks for Telecom Planning Prediction



Abstract—Today, telecom planning prediction based on quantitative analysis is becoming increasingly necessary. It must consider many influential factors, such as the consumption of products, the quality of services, the quantity of customers, the performance of equipment, income, investment, and so on. To verify and analyze the effectiveness of neural networks for telecom planning prediction, this paper constructs a planning-prediction neural network model and presents a case study in which a specific city and its practical data were selected as samples.

Keywords: BP Neural Network, prediction, eTOM, telecom planning

1. INTRODUCTION

Competition among telecom companies is becoming much stronger. To fulfill the requirements of network services and to make better investment decisions, it is very important for a telecom company to predict the demand on its network as a precondition. Telecom planning should consider many influential factors, such as the consumption of products, the quality of services, the quantity of customers, the performance of equipment, income, investment cost, and so on. In this paper, a neural network is employed and tested, and it works well in certain cases. The main reason is that the quantity of customers and the consumption of products are related not only to historical data but also to call prices, billing methods, and local economic conditions.
The prediction model we need should have the ability of non-linear mapping, require little prior knowledge, have a simple structure, and be able to simulate the input-output behavior of a complex system. A neural network (NN) is an abstract computational model of the human brain. Similar to the brain, an NN is composed of artificial neurons (or processing units) and interconnections. Neural networks have been studied for more than five decades, since Rosenblatt first applied single-layer perceptrons to pattern-classification learning in the late 1950s.
This paper presents the results of a study on the application of feed-forward neural networks trained with the backpropagation algorithm to learn the dynamics of structural models. A brief description of the internal operation of the neural network and its training algorithm is given. The network is trained using practical data, and the trained network is used to predict the telecom planning of a specific city.

2. NEURAL NETWORK

It is apparent that an NN derives its computing power, first, from its massively parallel distributed structure and, second, from its ability to learn and therefore to generalize. The remarkable work on the study of neural networks can be attributed to several researchers. Rumelhart proposed the error back-propagation neural network (BPNN, 1986), which is effective for many application problems. Hopfield proposed the Hopfield neural network, based on research into the Ising model, in 1984. Hardy, Harder, and Desmarais proposed the radial basis function neural network (RBF NN). Kosko studied the fuzzy neural network in 1987. Many researchers have also proposed related algorithms, such as the K-Means algorithm, the EM algorithm, gradient algorithms, the genetic algorithm, and the simulated annealing algorithm.
The network used in this paper is a multilayer BP neural network, which is widely used in different applications. It is a massively parallel, interconnected network of neurons. The neurons are organized into layers with no feedback or lateral connections. Each layer consists of neurons that receive their inputs from neurons in the previous layer; if the weighted sum of a neuron's inputs exceeds a threshold level, the neuron produces an output and distributes it to the neurons in the next layer.
Input of neuron $j$:
$$ net_j = \sum_i w_{ji} x_{ji} $$

Output of neuron $j$:
$$ o_j = f(net_j) $$

Error function:
$$ E = \frac{1}{2} \sum_k (t_k - o_k)^2 $$

Definition of the delta rule:
$$ \Delta w_{ji} = \eta\, \delta_j\, x_{ji}, \qquad \delta_j = -\frac{\partial E}{\partial net_j} $$
where $\eta$ is the learning rate.

When $N_j$ is an output node:
$$ \Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}} = \eta\,(t_j - o_j)\, o_j (1 - o_j)\, x_{ji} $$

When $N_j$ is a hidden-layer node:
$$ \Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}} = \eta\, o_j (1 - o_j) \Big( \sum_k \delta_k w_{kj} \Big)\, x_{ji} $$
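As a concrete illustration of these update rules (the authors' own code is not given), a minimal NumPy sketch of one backpropagation step for a single-hidden-layer network with sigmoid activations might look as follows; the layer sizes, learning rate, and random initialization are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, t, W1, W2, eta=0.1):
    """One backpropagation step following the delta rule above.

    x  : input vector, shape (n_in,)
    t  : target vector, shape (n_out,)
    W1 : hidden-layer weights, shape (n_hidden, n_in)
    W2 : output-layer weights, shape (n_out, n_hidden)
    """
    # Forward pass: net_j = sum_i w_ji * x_ji, o_j = f(net_j)
    h = sigmoid(W1 @ x)                    # hidden-layer outputs
    o = sigmoid(W2 @ h)                    # network outputs

    # Delta for output nodes: (t_j - o_j) * o_j * (1 - o_j)
    delta_out = (t - o) * o * (1.0 - o)
    # Delta for hidden nodes: o_j * (1 - o_j) * sum_k delta_k * w_kj
    delta_hid = h * (1.0 - h) * (W2.T @ delta_out)

    # Delta rule: w_ji <- w_ji + eta * delta_j * x_ji (updates in place)
    W2 += eta * np.outer(delta_out, h)
    W1 += eta * np.outer(delta_hid, x)

    # E = 1/2 * sum_k (t_k - o_k)^2
    return 0.5 * np.sum((t - o) ** 2)

# Illustrative usage with an assumed 12-input, 16-hidden, 1-output shape.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(16, 12))
W2 = rng.normal(scale=0.5, size=(1, 16))
x, t = rng.random(12), np.array([0.7])
for _ in range(200):
    err = bp_step(x, t, W1, W2)
```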
The training algorithm used in this study is the Levenberg-Marquardt (LM) algorithm. The LM algorithm is a variation of Newton's method designed for minimizing functions that are sums of squares of nonlinear functions. It is therefore well suited to neural network training, where the performance index is the mean squared error.
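The core of an LM iteration can be sketched as below. This is a generic illustration of the update $\Delta w = -(J^T J + \mu I)^{-1} J^T e$, not the exact implementation used in the study; the damping value $\mu$ is an assumed default.

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, mu=1e-3):
    """One Levenberg-Marquardt step for a sum-of-squares objective.

    w           : current weight vector, shape (p,)
    residual_fn : w -> residuals e(w) = t - o(w), shape (m,)
    jacobian_fn : w -> J with J[k, i] = d e_k / d w_i, shape (m, p)
    mu          : damping factor; a large mu behaves like gradient descent,
                  a small mu approaches a Gauss-Newton step.
    """
    e = residual_fn(w)
    J = jacobian_fn(w)
    # Solve (J^T J + mu I) dw = J^T e, then step against the gradient of
    # E = 1/2 * sum(e^2), which is J^T e under this residual convention.
    dw = np.linalg.solve(J.T @ J + mu * np.eye(w.size), J.T @ e)
    return w - dw
```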
3. ANALYSIS OF THE PREDICTION PROBLEM
Much equipment-resource information cannot be provided by specialized network management systems such as the transmission network management system. The resource operations of eTOM focus on the information of daily operation, while its strategy, infrastructure, and product area focuses on infrastructure lifecycle management and product lifecycle management. Unlike daily operational information, this data has a monthly or quarterly unit. So after collecting the data, we preprocess the information and generate monthly data for further treatment. Because the quantity of the data is small, we use monthly data to ensure sufficient samples for training and to lower the prediction error.
In the case of switch planning prediction, the relevant quantities are traffic, call attempts, call completion ratio, switch capacity, CPU load of the switch, VLR capacity, billing time, and billing income. Because the peak load of the resources must be taken into account, we need to process the maximum load of each month. For example, for the number of call attempts, we take the value in the busiest time of each month; for the load of the switch's CPU, we take the maximum load in each month; and for traffic, we take the maximum daily traffic in each month. Figure 1 uses UML to analyze the prediction information.

$Sc_i$ : maximum number of call attempts in the i-th month
$Ss_i$ : maximum number of call completions in the i-th month
$Se_i$ : maximum traffic in the i-th month
$Bu_i$ : number of online users in the i-th month
$Bi_i$ : billing income of the i-th month
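As an example of this preprocessing step, the monthly quantities above could be derived from hypothetical daily records with a short pandas sketch; the file name, column names, and aggregation choices (maximum versus sum) are assumptions for illustration.

```python
import pandas as pd

# Hypothetical daily records; file name and column names are assumptions.
daily = pd.read_csv("switch_daily.csv", parse_dates=["date"]).set_index("date")

# Monthly preprocessing as described in the text: take the busiest-period
# maximum for call attempts (Sc), call completions (Ss) and traffic (Se),
# and aggregate online users (Bu) and billing income (Bi) per month.
monthly = pd.DataFrame({
    "Sc": daily["call_attempts"].resample("M").max(),
    "Ss": daily["call_completions"].resample("M").max(),
    "Se": daily["traffic"].resample("M").max(),
    "Bu": daily["online_users"].resample("M").max(),
    "Bi": daily["billing_income"].resample("M").sum(),  # income summed per month
})
```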
After the model is set up, we need to construct the input data space $P$ from the data. The dimensionality of $P$ is 12, and it can be described as:
$$ P = (P_1, P_2, \ldots, P_i, \ldots, P_n) $$
where $n$ is the number of samples and
$$ P_i = \begin{pmatrix} U_1 \\ U_2 \\ \vdots \\ U_q \end{pmatrix} $$
where $U_i$ is an input vector and $q$ is the dimensionality of sample $i$. In this way we obtain the matrix $P$.
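A minimal sketch of assembling the sample vectors into the matrix $P$, together with one common min-max normalization scheme (the paper states that the data are normalized before prediction but does not give the exact formula), is shown below; the function name is an illustrative assumption.

```python
import numpy as np

def build_input_matrix(samples):
    """Stack the per-sample feature vectors U_1..U_q into the matrix P.

    samples : list of n one-dimensional arrays, each of length q
              (q = 12 here, matching the 12 input nodes).
    """
    P = np.column_stack(samples)     # shape (q, n), one column per sample P_i
    # Min-max normalization of each feature row into [0, 1]; this particular
    # scheme is an assumption, since the paper does not specify one.
    lo = P.min(axis=1, keepdims=True)
    hi = P.max(axis=1, keepdims=True)
    return (P - lo) / np.where(hi > lo, hi - lo, 1.0)
```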
After training the network with different numbers of hidden layers and different numbers of neurons, we obtained different prediction MSEs (Table 1, Table 2).

Input nodes   Hidden-layer nodes   Output nodes   MSE
12            24                   1              0.002931
12            25                   1              0.003709
12            27                   1              0.005963
12            28                   1              0.000936
12            31                   1              0.002619
12            33                   1              0.005213
12            35                   1              0.001215
12            36                   1              0.004929

Table 1. MSE of selected 3-layer NNs
Input nodes   1st hidden-layer nodes   2nd hidden-layer nodes   Output nodes   MSE
12            17                       8                        1              0.001768
12            18                       11                       1              0.002958
12            18                       13                       1              0.005212
12            19                       13                       1              0.001287
12            20                       13                       1              0.001363
12            25                       13                       1              0.044976
12            25                       14                       1              0.14277
12            27                       12                       1              0.053739
12            27                       14                       1              0.003439
12            27                       15                       1              0.007099
12            28                       12                       1              0.032284
12            28                       13                       1              0.08523
12            30                       14                       1              0.001566
12            30                       15                       1              0.004607
12            30                       16                       1              0.001881
12            30                       17                       1              0.00239
12            30                       18                       1              0.004943
12            30                       19                       1              0.00186
12            32                       8                        1              0.002554
12            32                       11                       1              0.003748
12            32                       13                       1              0.002005
12            32                       15                       1              0.002687
12            32                       17                       1              0.00301
12            33                       17                       1              0.001844
12            33                       19                       1              0.002729
12            34                       17                       1              0.002458
12            34                       18                       1              0.001745
12            35                       17                       1              0.002227
12            35                       18                       1              0.001175
12            36                       17                       1              0.003184
12            36                       19                       1              0.002128
12            37                       19                       1              0.002478
12            16                       6                        1              0.000933
12            16                       10                       1              0.000968

Table 2. MSE of selected 4-layer NNs
After comparing the MSE of the different NNs, a four-layer BPNN was employed. The four layers contain 12, 16, 6, and 1 nodes respectively; the middle two layers are hidden.
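Tables 1 and 2 amount to a grid search over hidden-layer sizes. A rough sketch of such a search is given below using scikit-learn's MLPRegressor as a stand-in; scikit-learn does not provide Levenberg-Marquardt training, so its default solver replaces the LM algorithm used in the paper, and the data splits and hyperparameters are assumptions.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def grid_search_mse(X_train, y_train, X_val, y_val, layouts):
    """Train one network per hidden-layer layout and report its validation MSE.

    layouts : iterable of tuples of hidden-layer sizes, e.g.
              [(24,), (28,), (19, 13), (16, 6)] for candidates from
              Tables 1 and 2 (input and output sizes come from the data).
    """
    results = []
    for layout in layouts:
        net = MLPRegressor(hidden_layer_sizes=layout,
                           activation="logistic",  # sigmoid units as in Section 2
                           max_iter=5000,
                           random_state=0)
        net.fit(X_train, y_train)
        mse = mean_squared_error(y_val, net.predict(X_val))
        results.append((layout, mse))
    return sorted(results, key=lambda r: r[1])      # lowest MSE first
```

The layout with the lowest validation MSE (12-16-6-1 in the paper) can then be retrained on the full data set before prediction.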

5. PREDICTION RESULT

Figure 2 shows the NN training performance. The NN has four layers containing 12, 16, 6, and 1 nodes: the input layer has 12 nodes, the output layer has 1 node, and the two middle layers have 16 and 6 nodes respectively.

Figure 2 Training Performance
Figure 3 shows the best linear fit of the training results. It is clear that the network training is very effective.

Figure 3 The best linear fit of training results
Then, we use the data from September 2002 to November 2004 for a prediction. After normalization, we feed the data into the network and obtain the predicted best VLR planning value of the switch from October 2002 to February 2005 (Figure 4).

Figure 4 Prediction of the best planning capacity

Comparing the results to the best planning, we can see that the performance is good (Figure 5).

Figure 5 Comparison of the results and the best planning
We use error analysis to show the effectiveness of the proposed method. As Figure 6 shows, the maximum error is a little higher than 1.5% and the minimum error reaches 0.0625%, which demonstrates the reliability of the network prediction.

Figure 6 The error of the results
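The error figures quoted above correspond to the usual relative (percentage) error; a small sketch with placeholder arrays is shown below.

```python
import numpy as np

def relative_error_pct(predicted, actual):
    """Percentage error |predicted - actual| / |actual| * 100 for each month."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.abs(predicted - actual) / np.abs(actual) * 100.0

# errors = relative_error_pct(vlr_predicted, vlr_actual)
# errors.max() and errors.min() correspond to the roughly 1.5% and 0.0625%
# figures reported for Figure 6.
```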
Figure 7 shows that the prediction is effective. It also shows that the growth of the best plan is nonlinear: it is not necessary to increase the capacity every quarter, so averaging one year's plan over four quarters is unreasonable. The plan for 2004 shows that no capacity increase is needed in the 1st and 2nd quarters, an increase of 5,000 is needed in the 3rd quarter, and an increase of 15,000 is needed in the 4th quarter.


Figure 7 Comparison of the best planning and the old planning
Comparing the old plan with the best plan prediction, we can see that the best plan prediction is much better. Under the best plan prediction the average utilization rate is 87.315%, while under the old plan it is 79.778%. The old plan requires a capacity of 160,000 in the 3rd quarter of 2002, whereas the best plan prediction requires only 155,000 by the 2nd quarter of 2003. The time value of the investment can thus be evaluated.
6. CONCLUSION
Based on the BPNN, we can obtain the best plan prediction. A specific city and its practical data were selected as samples for the case study. The results suggest that the proposed method is an effective approach to the input-output mapping problem, especially for the telecom planning prediction problem.

7. REFERENCES
[1] Chunhao Li, "Method and Application of Data Processing on Artificial Neural Network," Systems Engineering - Theory & Practice, July 1997.
[2] Li Hua, "The Application of Matlab in Data Analysis of Engine Feature Test of Armored Vehicles," Journal of Armored Force Engineering Institute, vol. 17, no. 3, Sep. 2003.
[3] Telecom Operations Map, TMF, GB910, Evaluation Version 2.1.
[4] NGOSS: Development and Integration Methodology, TMF, TR127.
[5] System Integration Map, TMF, GB914, Member Evaluation Version 2.0.
[6] Martin Creaner, "The Progress of the NGOSS Initiative towards Simpler Integrated Management," TMF, 2002.
[7] Jenny Huang, "NGOSS and MDA," TMF White Paper.
[8] Wakuya H., Zurada J. M., "Bi-directional computing architecture for time series prediction," Neural Networks, vol. 14, no. 9, 2001, pp. 1307-1321.
[9] Narazaki H., Ralescu A. L., "An improved synthesis method for multilayered neural networks using qualitative knowledge," IEEE Transactions on Fuzzy Systems, vol. 1, no. 2, 1993, pp. 125-137.
[10] Yu Ting-chao, Zhang Tu-qiao, "Study of artificial neural network model for forecasting urban water demand," Journal of Zhejiang University, vol. 38, no. 9, Sep. 2004.
[11] Niu Dongxiao, Xing Mian, Meng Ming, "Research on ANN Power Load Forecasting Based on United Data Mining Technology," Transactions of China Electrotechnical Society, vol. 19, no. 9, Sep. 2004.
[12] Li Niuren, Jiao Licheng, "Prediction of the Oilfield Output Under the Effects of Nonlinear Factors by Artificial Neural Network," Journal of Xi'an Petroleum Institute, vol. 17, no. 4, Jul. 2002.
[13] Simon Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, 1999.
[14] Chin-Hsiung Loh, Shy-Ching Yeh, "Application of Neural Networks to Health Monitoring of Bridge Structures," Proc. SPIE, vol. 3995, 2000.
[15] Philip D. Wasserman, Neural Computing: Theory and Practice, Van Nostrand Reinhold, 1989.
[16] Rumelhart D. E., McClelland J. L. (eds.), Parallel Distributed Processing, vols. 1 and 2, MIT Press, Cambridge, 1986.
[17] Tom M. Mitchell, Machine Learning, McGraw-Hill, 1997.