The Interdisciplinary Center, Herzliya
Efi Arazi School of Computer Science

Final project submitted as part of the studies toward the M.Sc. degree, on the subject:

Performance Evaluation of
Base Station Application Optimizer

Nana Ginzburg
312015761

Supervised by Dr. Ronit Nossenson
Internal advisor: Dr. Tami Tamir

November 2011




Contents

1. Introduction
2. NS 2 Simulator
3. Base Station Application Optimizer
   3.1. Architecture for the Base Station Application Optimizer
   3.2. Benefits of Base Station Application Optimizer
4. Base Station Optimizer Simulation Implementation
   4.1. Network Topology Generation
   4.2. Traffic Generation
   4.3. Cache Implementation
   4.4. Configuration
   4.5. Trace Analysis
   4.6. Simulation Installation and Usage
5. Performance Evaluation and Results
   5.1. Simulation Description and Essential Results
   5.2. The Impact of the Cache Hit Rate
   5.3. The Impact of the Buffer Size at the Base Station
6. Conclusions and Future Work
7. References
8. Appendix A - NS-2 Installation on Ubuntu




1. Introduction

Cellular operators are competing with traditional broadband operators by offering mobile broadband access and IP services such as rich multimedia (e.g., video-on-demand, music download, video sharing) to laptops, PDAs, smart-phones and other advanced handsets. They offer these services through access networks such as High-Speed Packet Access (HSPA) and Long-Term Evolution (LTE) [7.1.3]. The new technologies offer mobile operators significantly improved data speeds, short latency and increased capacity.

Traditionally, backhaul lines, connecting the cell sites with the core network, use TDM (E1, T1) lines, each providing up to 2 Mbps capacity. Though acceptable for voice and low data rate applications, E1 capacity is inadequate for higher data rates. Obviously, the direct result of the backhaul bottleneck is low utilization of the radio channels and an unsatisfying user experience. An enormous backhaul upgrade to new technologies such as PON or microwave is required to satisfy the high bandwidth demand. This upgrade is expected to be extremely expensive, and its cost casts real doubt on the profitability of enhanced network deployment. The biggest cost challenge facing wireless service providers today is the backhaul network [7.1.1]. As a result, operators are seeking data reduction solutions integrated with their network upgrades.

The basic idea behind the novel Base Station Application Optimizer solution is to replace the traditional Base Station entity with a fast and smart entity, capable of analyzing and optimizing the user data at the application level. In particular, such a unit can use its location in the operator network to prevent unnecessary data from travelling through the backhaul and core networks. Note that the suggested optimization is discussed here in the context of LTE networks ("eNode-B"), but it can be performed on any base station or access point (e.g., IEEE 802.16, IEEE 802.11).

The purpose of this project is to implement the Base Station Optimizer solution on an LTE network in the NS-2 simulator and analyze its performance.

Performance evaluation results indicate that the base station application optimizer has a large potential to improve the network performance and to achieve large data reduction on the backhaul and core network links. For a realistic mixture of applications we show that, under modest assumptions on cache hit rates, the total transferred-bytes reduction factor is 1.2 to 1.8, the average packet delay is significantly reduced (by up to 75%) and the packet loss percentage is reduced from 2% to less than 0.05%. In additional simulations we study the impact of the application hit rates on the performance parameters, and the impact of the size of the buffers (queues) at the base station on the performance parameters.

2. NS 2 Simulator

NS 2 is an open-source event-driven simulator designed specifically for research in computer communication networks. An event-driven simulation is initiated and run by a set of events; a list of all scheduled events is maintained and updated throughout the simulation process. NS 2 provides substantial support for simulating various protocols (e.g., TCP, UDP and routing protocols) over wired and wireless networks.

NS 2 provides the ability to fully analyze network performance by collecting statistics on parameters like average delay and jitter.

Due to its flexibility and modular nature, NS2 has gained constant popularity in the networking research community since its birth in 1989. Ever since, several revolutions and revisions have marked the growing maturity of the tool, thanks to substantial contributions from the players in the field. Among these are the University of California and Cornell University, who developed the REAL network simulator, the foundation on which NS is based. Since 1995 the Defense Advanced Research Projects Agency (DARPA) has supported the development of NS through the Virtual InterNetwork Testbed (VINT) project. Currently the National Science Foundation (NSF) has joined the ride in development. Last but not least, the group of researchers and developers in the community is constantly working to keep NS2 strong and versatile.

NS 2 is written in C++ and Object-oriented Tool Command Language (OTcl). The core of the simulator, the backend modules, is written in C++. The frontend, the configuration and usage of these modules, is written in OTcl. Therefore simulations, which consist of network topology and traffic generation, should be written in OTcl. The OTcl script sets up the simulation by assembling and configuring the objects as well as scheduling discrete events.

The C++ and the OTcl are linked together using TclCL. Mapped to a C++ object, variables in the OTcl domain are sometimes referred to as handles. Conceptually, a handle (e.g., n as a Node handle) is just a string in the OTcl domain, and does not contain any functionality. Instead, the functionality (e.g., receiving a packet) is defined in the mapped C++ object. In the OTcl domain, a handle acts as a frontend which interacts with users and other OTcl objects.

The simulator supports a class hierarchy in C++, the compiled hierarchy, and a similar class hierarchy within the OTcl interpreter, the interpreted hierarchy. The two hierarchies are closely related to each other; from the user's perspective, there is a one-to-one correspondence between a class in the interpreted hierarchy and one in the compiled hierarchy.

The root of this hierarchy is the class TclObject. Users create new simulator objects through the interpreter; these objects are instantiated within the interpreter, and are closely mirrored by a corresponding object in the compiled hierarchy.


The interpreted class hierarchy is automatically established through methods defined in the class
TclClass.
U
ser instantiated

objects are mirrored through methods defined in the class

TclObject.


Figure 1: Basic architecture of NS-2

Network topology and traffic generation are written in a .tcl simulation script. The NS-2 shell executable command, ./ns, receives the .tcl simulation script as input and initiates the simulation. Once run, the simulation leaves a text trace file which can be replayed via the Network AniMator (NAM) or used to display graphs via XGraph. An additional trace file contains all statistics at the different layers with 1 millisecond resolution. This trace file should be analyzed in order to retrieve performance parameters like delay, drop statistics and transferred data.
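For illustration, the following is a minimal sketch of such an OTcl simulation script. It is not taken from this project; node, file and procedure names are illustrative.

    # minimal.tcl -- a minimal NS-2 simulation script (illustrative sketch)
    set ns [new Simulator]

    # trace file for later analysis and a NAM file for graphical replay
    set tracefd [open out.tr w]
    $ns trace-all $tracefd
    set namfd [open out.nam w]
    $ns namtrace-all $namfd

    # two wired nodes connected by a duplex link
    set n0 [$ns node]
    set n1 [$ns node]
    $ns duplex-link $n0 $n1 10Mb 10ms DropTail

    # a CBR source over UDP from n0 to n1
    set udp  [new Agent/UDP]
    set null [new Agent/Null]
    $ns attach-agent $n0 $udp
    $ns attach-agent $n1 $null
    $ns connect $udp $null
    set cbr [new Application/Traffic/CBR]
    $cbr attach-agent $udp

    # schedule discrete events and run the simulation
    proc finish {} {
        global ns tracefd namfd
        $ns flush-trace
        close $tracefd
        close $namfd
        exit 0
    }
    $ns at 0.5 "$cbr start"
    $ns at 4.5 "$cbr stop"
    $ns at 5.0 "finish"
    $ns run

Running "ns minimal.tcl" produces out.tr for trace analysis and out.nam for replay in NAM.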

The NS 2 simulator runs best on Linux and requires installation and configuration. We installed the NS-2 simulator on Ubuntu 11.04, a free Linux distribution. Full installation instructions for Windows with an Ubuntu Linux client are part of the contribution of this project and are included in Appendix A.

The Base Station Optimizer simulation is described in detail in Chapter 4.



3. Base Station Application Optimizer

3.1. Architecture for the Base Station Application Optimizer

We suggest a simple solution to the backhaul bottleneck problem. The traffic load on the backhaul links can be reduced by replacing the traditional Base Station entity with the Base Station Application Optimizer (BS-OPT), a smart entity capable of analyzing and optimizing the user data at the application level. This section describes the suggested solution architecture.

Figure 2 presents the architecture of an integrated unit inside the base station.

Figure 2: LTE network with Base Station Application Optimizer

The solution of an integrated architecture is a software package installed at the base station. Since the optimizer is integrated with the eNode-B, it allows an elegant and simple application solution.

The eNode-B already supports the following capabilities, which a stand-alone optimizer would otherwise have to provide itself:

1) Security tunnel encryption and decryption: According to 3GPP standards, the links between the eNode-B and the other network elements should be protected using a security tunnel such as IPSec. A stand-alone optimizer must be configured as a trusted entity and must have the capability to encrypt and decrypt the traffic.

2) GTP tunnel manipulation: A stand-alone optimizer must access the information in the application layer headers. Thus, it must have the capability of stripping the GTP headers. Furthermore, transparent operation requires GTP termination capabilities.

3) X2-AP and S1-AP signaling interpretation: To support Quality of Service requirements and to trace user mobility events, a stand-alone optimizer must be able to understand a wide set of relevant signaling messages.

Therefore, the integrated optimizer software can be located at the proper data flows and save processing time and delays.

The disadvantage of the integration with the eNode-B is the collaboration required with the base station evolution.

The implementation of the application optimizer in a hierarchical structure is required to support user mobility when the optimizer is partially deployed and some eNode-Bs do not support the optimization. If the application optimizer is fully deployed in all eNode-Bs in the radio access network, then it is possible to use a flat architecture as well.

3.2. Benefits of Base Station Application Optimizer

The essential benefit of application layer optimization in the base station is data reduction at the backhaul bottleneck. This can help in reducing the total backhaul upgrade costs and improving the user experience by providing higher actual data rates and shorter delays.

A major latency and data reduction can be achieved by implementing an application cache at the base station application optimizer. A cache, such as a web cache, P2P cache or streaming cache, can reduce the traffic significantly as a function of the cache size and the user behavior. Furthermore, studies have shown that users do not tend to move a lot while consuming data applications. Obviously, such a pattern of user behavior increases the cache hit rate dramatically.

Additional data reduction can be achieved by replacing the many live video streams between each user and an Internet live video streaming server with a single live video stream between the base station application optimizer and the streaming server, and only then delivering uni-streams to the specific users between the eNode-B and the users in the cell. The base station application optimizer should take over the Internet Group Management Protocol (IGMP) role and establish the users' multicast group memberships. The optimizer can control the transformation of the single video stream into multiple uni-streams according to the multicast group members list. That is, it can hold a group table with all multicast group members' details, similar to IGMP.

Using location information, the base station application optimizer can help in optimizing P2P traffic. The location information can help optimize the traffic path between peers and reduce the file download latency while reducing network resource consumption. However, implementing this feature requires IP routing capabilities for mobile users within the eNode-B.


Additional data reduction and improvement in the user experience can be achieved by fitting the proper picture resolution/format, video format and transfer rate to the handset capabilities and to the available resources in the cell. Another example of simple possible data reduction at the base station application optimizer is file compression. Many handsets do not support compression formats. Once the User Agent (that is, the browser application) of the handset reports that it does not support compression formats, the web servers avoid file compression and respond with an acceptable file format. The optimizer can overwrite the relevant HTTP header to reflect compression format support, resulting in a compressed file transmission by the web server. The optimizer can then decompress the files and transmit them to the handset according to the original file format. A supporting Central Optimization Entity at the operator gateway can provide additional data reduction, such as compression of files that were not compressed by the web servers or delta compression of any data traffic between the gateway and the eNode-B.

However, implementing Deep Packet Inspection (DPI) technology in the base station is simple in concept but complex in practice. Conceptually, inspecting a packet to determine the subscriber and application type and then acting on that information looks easy. However, traffic rates and rapidly evolving applications add complexity. Based on present data rates, packet rates are already staggering. Each LTE user UL/DL channel can carry millions of packets per second. At that speed, there is only ~100 nsec to receive and inspect each packet, determine its application, perform the optimization algorithms, modify it if necessary, and forward it to the proper destination according to the optimization plan. As a result, the base station must include strong multi-core, multi-threaded processors for packet inspection.


4. Base Station Optimizer Simulation Implementation

The base station application optimizer is integrated with the eNode-B in an LTE network; therefore the simulation is based on the LTE/SAE model for the NS2 network simulator [7.1.4].

The major modifications of the network model include extending the eNode-B to support different applications such as web, video, file sharing and VoIP; distributing users and traffic according to application; integrating an optimizer into the eNode-B; and configuring the end users' traffic according to various hit rates.

The simulation consists of four stages: network topology definition, traffic generation, simulations over different parameters, and trace analysis. All the parameters are configurable and each piece of functionality is separated into its own procedure; therefore this implementation of an LTE network allows performing various simulations in a convenient and flexible way.

4.1. Network Topology Generation

Network topology generation consists of defining the nodes and the links between them.

The eNode-B (eNB), Serving Gateway (S-GW), GGW and the server nodes are defined as wired nodes using the DropTail queuing algorithm. The link between the eNB and the S-GW is the S1 interface. The nodes use DropTail, but the queue size is in fact defined on the links themselves. The links in the S1 interface have 10 Mb bandwidth and 150 ms delay.

User equipments (UEs) are defined as wireless nodes connected to the eNB using the DropTail queuing algorithm. The links between the UEs and the eNB have 10 Mb bandwidth and 10 ms delay. The queue size between the base station and each UE is 10 packets; the oversubscription factor at the S1 interface is 1/3.

The number of end users is set dynamically based on the simulation arguments. The users are divided into 4 groups according to their used application, based on the application usage distribution from the mobile trend report by Allot Communications [7.1.8]: 26% web usage, 37% video, 30% file sharing and 7% VoIP. Since the number of end users is configurable, the user distribution is calculated in every simulation (using the calculateUserDistribution{} procedure in utilities.tcl).

The network topology is defined in createTopology.tcl.
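The following sketch illustrates how such a wired topology can be expressed in OTcl using the link parameters quoted above. It is not the actual createTopology.tcl: variable names are illustrative and, for brevity, the UEs are shown as plain wired nodes rather than the wireless nodes used in the project.

    # Illustrative topology sketch; assumes "set ns [new Simulator]" and that
    # $numUsers comes from the configuration
    set server [$ns node]    ;# application server
    set ggw    [$ns node]    ;# gateway (GGW)
    set sgw    [$ns node]    ;# Serving Gateway (S-GW)
    set enb    [$ns node]    ;# eNode-B (eNB)

    # S1 interface between the eNB and the S-GW: 10Mb bandwidth, 150ms delay
    $ns duplex-link $enb $sgw 10Mb 150ms DropTail
    $ns duplex-link $sgw $ggw 10Mb 150ms DropTail
    $ns duplex-link $ggw $server 10Mb 150ms DropTail

    # UE links: 10Mb bandwidth, 10ms delay, 10-packet queue toward each UE
    # (queue sizes are set on the links, as noted above)
    for {set i 0} {$i < $numUsers} {incr i} {
        set ue($i) [$ns node]
        $ns duplex-link $ue($i) $enb 10Mb 10ms DropTail
        $ns queue-limit $enb $ue($i) 10
    }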

4.2. Traffic Generation

The traffic generator enables the creation of a realistic mixture of different traffic types. It creates VoIP traffic using a Constant Bit Rate source with small packets transmitted over UDP. Web-browsing traffic is created using an ON/OFF process with Pareto distribution, with large packets transmitted over TCP. In addition, it creates video and file sharing traffic using smoother Pareto ON/OFF processes with large packets transmitted over TCP.

The simulator models four different traffic generator functions:

1) The VoIP traffic generator is a Constant Bit Rate application with a 64 kbps connection based on a UDP connection with a 200 byte packet size.

2) The Web browsing traffic generator is a Pareto application over TCP with a packet size of 1040 bytes, average burst time (ON period) of 200 msec, average idle time (OFF period) of 2000 msec, rate of 300 kbps and shape parameter (alpha) of 1.3.

3) The video traffic generator is also a Pareto application with a shape parameter of 1.5, ON period average of 2000 msec, packet size of 1300 bytes, average idle time (OFF period) of 2000 msec and rate of 600 kbps.

4) The file sharing traffic generator is also a Pareto application with a shape parameter of 1.7, ON period average of 2000 msec, packet size of 1500 bytes, average idle time (OFF period) of 200 msec and rate of 100 kbps.

Procedures defining Pareto or CBR traffic are defined in createTraffic.tcl and reused in main.tcl with different parameters. Once the network topology is defined, the end users are allocated generated traffic according to the application group they belong to. Each application has its own flow id for tracing and performance analysis.
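As an illustration, a web-browsing flow and a VoIP flow could be attached roughly as follows. The procedure and variable names are hypothetical; the project's own procedures live in createTraffic.tcl.

    # Web browsing: Pareto ON/OFF source over TCP, parameters as in item 2 above
    proc attachWebTraffic {ns src dst fid} {
        set tcp  [new Agent/TCP]
        set sink [new Agent/TCPSink]
        $tcp set fid_ $fid                ;# flow id used by the trace analyzers
        $ns attach-agent $src $tcp
        $ns attach-agent $dst $sink
        $ns connect $tcp $sink

        set web [new Application/Traffic/Pareto]
        $web attach-agent $tcp
        $web set packetSize_ 1040         ;# bytes
        $web set burst_time_ 200ms        ;# average ON period
        $web set idle_time_ 2000ms        ;# average OFF period
        $web set rate_ 300kb              ;# rate during the ON period
        $web set shape_ 1.3               ;# Pareto shape parameter (alpha)
        return $web
    }

    # VoIP: 64 kbps CBR over UDP with 200-byte packets, as in item 1 above
    proc attachVoipTraffic {ns src dst fid} {
        set udp  [new Agent/UDP]
        set null [new Agent/Null]
        $udp set fid_ $fid
        $ns attach-agent $src $udp
        $ns attach-agent $dst $null
        $ns connect $udp $null

        set voip [new Application/Traffic/CBR]
        $voip attach-agent $udp
        $voip set packetSize_ 200         ;# bytes
        $voip set rate_ 64kb
        return $voip
    }

The video and file sharing generators would follow the same pattern with the parameters listed in items 3 and 4.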

4.3. Cache Implementation

In the optimized scenario, the end users in each application group are divided according to a hit rate distribution. If the user belongs to the hit rate group, the traffic is generated between the end user and the eNB; otherwise, the traffic is generated between the end user and the server. The different flows are indicated by a flow id per application for tracing and performance analysis.

The distribution of the users according to the hit rate is performed by the calculateHitRateUsersDistribution{} procedure in utilities.tcl.
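A minimal sketch of this decision is shown below. The procedure name and arguments are hypothetical; the actual split of users is the one produced by calculateHitRateUsersDistribution{}.

    # Pick the traffic endpoint for a user in the optimized scenario:
    # users in the hit-rate group are served from the eNB (cache hit),
    # all others exchange traffic with the origin server across S1.
    proc trafficDestination {userId hitUsers enb server} {
        if {[lsearch $hitUsers $userId] >= 0} {
            return $enb
        } else {
            return $server
        }
    }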




4.4. Configuration

All simulation parameters are defined in configuration.tcl with default values.

The following parameters can be overridden by command line arguments passed to main.tcl, the main simulation activation script (a sketch follows the list):

- number of end users
- link bandwidth, DropTail queue delay and queue size
- a flag indicating whether the eNodeB should be optimized or not
- application usage distribution and hit rate per application
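A minimal sketch of how such defaults and overrides can be expressed is given below; the parameter names and argument order are illustrative assumptions, not necessarily those used in configuration.tcl.

    # configuration.tcl -- default values (illustrative names)
    set opt(numUsers)  20
    set opt(bandwidth) 10Mb
    set opt(delay)     150ms
    set opt(queueSize) 10
    set opt(optimized) 1     ;# 1 = eNode-B with application optimizer, 0 = traditional

    # main.tcl -- override the defaults with command line arguments, e.g.
    #   ns main.tcl <numUsers> <optimized>
    if {$argc >= 1} { set opt(numUsers)  [lindex $argv 0] }
    if {$argc >= 2} { set opt(optimized) [lindex $argv 1] }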

4.5. Trace Analysis

NS-2 records and outputs all traffic event statistics in the following format:

Figure 3: NS-2 trace file format

NS-2 does not provide trace analyzers. In order to evaluate the Base Station Application Optimizer performance we collect the following statistics: total data transferred on the S1 interface; data transferred on the S1 interface per second; end-to-end delay; and number of dropped packets. In addition, we analyze the performance per application type.

Since the simulation time was relatively long (30 minutes), the trace files contained a lot of data (up to 8 GB in the case of 100 end users). Therefore we had to deal with performance problems and optimize the trace analyzers as much as possible. For that reason we used GNU Awk.

Awk automatically applies a block of commands to all lines in the input file and allows selecting specific records in each line. This allowed us to define logically complicated analysis based on flow id, packet sequence id, and source and destination nodes.

4.5.1. Total Bandwidth Analyzer

The total bandwidth analyzer is implemented in the totalBandwith.sh shell script, which combines Awk. For each line we sum up the packet sizes on the S1 interface (downlink and uplink) by using the source and destination node ids. Using the flow id indicator we separate the results according to the different applications and hit rate scenarios. The output of this script is the total number of bytes per application passed on the S1 interface.

Usage:

./totalBandwith.sh <trace_fileName>

4.5.2. Bandwidth per Second Analyzer

The bandwidth per second analyzer is implemented in Awk. An array of the simulation length in seconds is created and initialized with 0. Each trace line is analyzed by source node, destination node, flow id, packet size and time. All the packet sizes transferred on the S1 interface are summed up similarly to the total bandwidth analyzer, but per second: the time is used as the index into the array.

Usage:

awk -f bandwithPerSecond.awk <trace_fileName> <endUsers_number>

Please note that endUsers_number is used for the results filename, so it is possible to indicate the simple vs. optimized scenario using this argument as well.
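For readability, the accumulation logic is sketched below in Tcl rather than Awk; the project's analyzer itself is the GNU Awk script bandwithPerSecond.awk. The field positions assume the wired NS-2 trace format, and $traceFile, $enbId and $sgwId are assumed variables holding the trace file name and the node ids of the eNB and the S-GW.

    # bytesPerSec(i) accumulates the bytes seen on the S1 link during second i
    set fh [open $traceFile r]
    while {[gets $fh line] >= 0} {
        set f     [regexp -all -inline {\S+} $line]
        set event [lindex $f 0]             ;# +, -, r or d
        set time  [lindex $f 1]
        set from  [lindex $f 2]
        set to    [lindex $f 3]
        set size  [lindex $f 5]
        # count only packets received over the S1 link between eNB and S-GW
        if {$event eq "r" && (($from == $enbId && $to == $sgwId) ||
                              ($from == $sgwId && $to == $enbId))} {
            set sec [expr {int($time)}]
            if {![info exists bytesPerSec($sec)]} { set bytesPerSec($sec) 0 }
            incr bytesPerSec($sec) $size
        }
    }
    close $fh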



4.5.3. End-to-End Delay and Dropped Statistics

This analysis is more complicated since we need to follow specific packets through the network by using the packet id. An array is allocated for each application type; the packet id indicates an index into the array. The packet id, flow id, source node, destination node, time and event type are used.

In the case of a sent packet, an enqueue event appears. If, according to the source node, the packet was sent from an end user or from the server, the time of the event is recorded in the corresponding cell of the relevant application array as the start time.

In the case of a receive event, if, according to the source and destination nodes, the packet was received at the server or by the end user, the time of the event is recorded in the corresponding cell of the relevant application array as the end time.

In the case of a dropped packet event, we check where in the network the packet was dropped. The number of dropped packets is updated and, most importantly, the corresponding cell in the relevant application array is set to -1 so that dropped packets are not counted in the end-to-end delay.

At the end of the run over all lines in the trace file, we have 4 application arrays with the start and end times of all packets (or -1 in case the packet was dropped). We calculate the average delay time, each packet's start time and delay, and the dropped packet statistics.

Usage: please note that there is a difference between the simple and optimized scenario trace analysis.

In the case of the simple scenario:

awk -f endToEndDelay.awk <trace_fileName> <endUsers_number>

In the case of the optimized scenario:

awk -f endToEndDelayCache.awk <trace_fileName> <endUsers_number>
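The per-packet bookkeeping described above can be sketched as follows, written here in Tcl for readability rather than in Awk. Field positions assume the wired NS-2 trace format, $traceFile is an assumed variable, and the real analyzers additionally filter by source and destination nodes and keep a separate array per application.

    # start/end times indexed by (flow id, packet id); -1 marks a dropped packet
    set fh [open $traceFile r]
    while {[gets $fh line] >= 0} {
        set f     [regexp -all -inline {\S+} $line]
        set event [lindex $f 0]
        set time  [lindex $f 1]
        set fid   [lindex $f 7]             ;# flow id -> application type
        set pktId [lindex $f 11]            ;# unique packet id
        switch -- $event {
            "+" { if {![info exists start($fid,$pktId)]} { set start($fid,$pktId) $time } }
            "r" { set end($fid,$pktId) $time }
            "d" { set start($fid,$pktId) -1 }
        }
    }
    close $fh
    # average end-to-end delay per flow = mean of (end - start) over packets
    # whose start time is not -1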



4.6. Simulation Installation and Usage

The Base Station Application Optimizer is hosted at Google Projects [7.1.7]. An SVN client is required for downloading the source code.

The simulation does not require any installation; the only prerequisite is a validated installation of NS-2.

The root folder, Base-station-application-optimizer, contains the following important folders:

- simulation: all the .tcl scripts that define the topology, traffic and configuration, together with the main activation script
- traceAnalyzers: all the trace analysis scripts together with their activation shell scripts

In order to run a simulation:

- view simulation/configuration.tcl and check the simulation's configurable parameters
- run: ns main.tcl

If no arguments were passed, all the parameters will be taken from configuration.tcl.

out.nam and <trace_fileName>.out will be created. <trace_fileName>.out should be used with the trace analyzers. out.nam can be used in the following way: nam out.nam. This allows replaying the simulation graphically.




5. Performance Evaluation and Results

We performed intensive simulations to evaluate the potential improvement of the performance parameters when using the Base Station Application Optimizer. First, a description of the simulation and the essential results is presented in Section 5.1. Then, we study the impact of the cache hit rate on the performance parameters in Section 5.2. Finally, the impact of the size of the buffers at the base station on the performance parameters is elaborated in Section 5.3.


5.1. Simulation Description and Essential Results

Simulations are created as described in Chapter 4.

The data reduction on the backhaul link is measured by comparing the sum of the bytes (uplink and downlink) transferred between the gateway and the base station that includes the application optimizer (the S1 interface) with the sum of the bytes transferred between the gateway and a traditional eNodeB, under the same traffic generation. Other performance parameters, such as average delay and packet loss, are measured using the proper statistics as described in Chapter 4.

Traffic scenarios include 20, 50 and 100 UEs with different traffic QoS classes according to their used application. The assumed hit rate for the web cache is 20%, and for video and file sharing 40%. Note that we used a constant hit rate regardless of the number of users in the cell. Usually, the hit rate increases as the number of users increases, so our performance estimation is conservative. The queue size between the base station and each UE is 10 packets; the oversubscription factor at the S1 interface is 1/3.

The results regarding the total data reduction are plotted in Table I and Figure 9. When 20 concurrent users are connected to the base station, the application optimizer achieves a data reduction of 45%; with 50 users it achieves 37% and with 100 users a 19% reduction. Figure 9(a) plots the total data reduction per application type for the 20 concurrent users' scenario. The total KB transferred over the S1 interface is presented in Figure 9(b) for 20, 50 and 100 concurrent users respectively.

TABLE I: DATA REDUCTION FACTOR

Users  Application   Optimized KB   Traditional KB   Reduction Factor
20     Web                152487           181265               1.19
       Video              402263           803161               2.00
       File Share         249302           499097               2.00
       VoIP                43190            43190               1.00
       Total              847242          1526713               1.80
50     Web                414002           519829               1.26
       Video             1100899          1896456               1.72
       File Share         742733          1241815               1.67
       VoIP               129570           129566               1.00
       Total             2387204          3787667               1.59
100    Web                727680           964349               1.33
       Video             2176196          2505661               1.15
       File Share        1490109          2029426               1.36
       VoIP               302259           294793               0.98
       Total             4696243          5794228               1.23




Figure 9(a): Data reduction per application type (KB transferred at the S1 interface per application, optimized vs. traditional, 20 concurrent users).

Figure 9(b): Total KB transferred at the S1 interface with 20, 50 and 100 concurrent users (optimized vs. traditional).


Table I presents the data reduction factor per application in the 20, 50 and 100 concurrent users scenarios respectively. An interesting observation is that the VoIP traffic, which obviously cannot be optimized, actually presents negative data reduction percentages (data reduction factor < 1) in the concurrent 100 users scenario. Since the queues are emptier and fewer packets are dropped, the CBR application could actually send more packets successfully. As can be seen from this table, the data reduction factor is high, meaning the potential cost reduction is high.

Further study of the data reduction over time is presented in Figure 10. As can be seen from this figure, the data reduction factor is consistent over time.
Figure 10: Data reduction over time (KBps transferred at the S1 interface over the simulation time, traditional (TRD) vs. optimized (OPT) scenarios with 20, 50 and 100 UEs).

The results regarding packet drops are presented in Table II. It can be seen that when the network is loaded, in the 100 concurrent users' scenario, the number of packet drops is significantly reduced. As mentioned before, this allows the CBR (that is, VoIP) application to send more packets successfully. The results regarding the packet average delay are presented in Table III. As expected, applications with a high hit rate gain lower delays.
TABLE II: PACKET DROP STATISTICS

                          Traditional            Optimized
Users  Application      #Drop     %Drop       #Drop     %Drop
20     Web                  0     0.00%           0     0.00%
       Video                0     0.00%           0     0.00%
       File Share           0     0.00%           0     0.00%
       VoIP                19     0.05%           0     0.00%
       Total               19     0.00%           0     0.00%
50     Web                114     0.07%         109     0.07%
       Video              291     0.05%         188     0.02%
       File Share         249     0.06%         105     0.03%
       VoIP                65     0.06%          62     0.06%
       Total              719     0.06%         464     0.03%
100    Web               5954     1.96%         165     0.05%
       Video            12333     1.57%         603     0.03%
       File Share        9528     1.50%         374     0.05%
       VoIP             12503     4.96%         263     0.10%
       Total            40318     2.04%        1405     0.04%

TABLE III: PACKET AVERAGE DELAY (SECONDS)

Users  Application   Traditional   Optimized   Reduction %
20     Web                0.4623      0.3904        15.56%
       Video              0.4621      0.1139        75.35%
       File Share         0.4622      0.2360        48.93%
       VoIP               0.4612      0.4609         0.06%
50     Web                0.4636      0.3688        20.45%
       Video              0.4633      0.1440        68.92%
       File Share         0.4634      0.2809        39.39%
       VoIP               0.4628      0.4617         0.23%
100    Web                0.4713      0.3480        26.16%
       Video              0.4710      0.1448        69.26%
       File Share         0.4710      0.2832        39.87%
       VoIP               0.4691      0.4651         0.86%

5.2. The Impact of the Cache Hit Rate

To understand the impact of the cache hit rate on the performance parameters we performed an additional set of simulations with application hit rates higher by 10% and lower by 10%. Table IV presents the impact of the application hit rate on the data reduction factor. As expected, a higher hit rate value usually results in a higher data reduction factor. However, the increase in the data reduction factor is more moderate than the increase in the hit rate value.

The impact of the application hit rate on the packet drop statistics when the network is loaded in the 100 concurrent users' scenario is presented in Table V. The results are proportional to the increase/decrease in the hit rate value, but it is interesting to see that even a very low hit rate significantly reduces (by a factor of 12) the packet drop events.

Finally, the impact of the application hit rate on the packet average delay in the 100 concurrent users' scenario is demonstrated in Table VI. As expected, applications with a high hit rate gain lower delays.
TABLE IV: THE HIT RATE IMPACT ON THE DATA REDUCTION FACTOR (HR = hit rate)

Users  Application   Reduction Factor   Reduction Factor   Reduction Factor
                               Low HR             Med HR            High HR
20     Web                       1.21               1.19               1.53
       Video                     1.60               2.00               1.99
       File Share                2.01               2.00               2.01
       VoIP                      1.00               1.00               1.00
       Total                     1.61               1.80               1.87
50     Web                       1.27               1.26               1.27
       Video                     1.58               1.72               1.89
       File Share                1.68               1.67               1.89
       VoIP                      1.00               1.00               1.00
       Total                     1.53               1.59               1.72
100    Web                       1.19               1.33               1.27
       Video                     1.15               1.15               1.27
       File Share                1.28               1.36               1.52
       VoIP                      0.98               0.98               0.98
       Total                     1.19               1.23               1.33

TABLE V: THE HIT RATE IMPACT ON PACKET DROP PERCENTAGES

Application   %Drop Traditional   %Drop Low HR   %Drop Med HR   %Drop High HR
Web                       1.96%          0.26%          0.05%           0.15%
Video                     1.57%          0.12%          0.03%           0.05%
File Share                1.50%          0.20%          0.05%           0.12%
VoIP                      4.96%          0.46%          0.10%           0.23%
Total                     2.04%          0.17%          0.04%           0.09%

TABLE VI: THE HIT RATE IMPACT ON PACKET AVERAGE DELAY

Application   %Reduction Low HR   %Reduction Med HR   %Reduction High HR
Web                      19.89%              26.16%               25.24%
Video                    67.37%              69.26%               73.32%
File Share               36.55%              39.87%               46.57%
VoIP                      0.40%               0.86%                0.98%

5.3. The Impact of the Buffer Size at the Base Station
To understand the impact of the buffer size at the base station on the performance parameters we performed an additional set of simulations with buffers larger by 25% and smaller by 25% (with the medium hit rate described in Section 5.1 above). We found that the size of the buffer at the base station has no impact on the data reduction factor (see Table VII). The impact of the buffer size on the packet drop statistics when the network is moderately loaded in the 50 concurrent users' scenario is presented in Table VIII. The results are proportional to the increase/decrease in the buffer size value.

Finally, the impact of the buffer size on the packet average delay in the 100 concurrent users' scenario is demonstrated in Table IX. The results show that keeping a medium hit rate and increasing/decreasing the buffer sizes has a similar impact on the delay as increasing/decreasing the application hit rate (Tables VI and IX).
TABLE VIII: THE BUFFER SIZE IMPACT ON PACKET DROP PERCENTAGES
(SB = small buffer, LB = large buffer, TRD = traditional, OPT = optimized)

Application   %Drop SB-TRD   %Drop SB-OPT   %Drop LB-TRD   %Drop LB-OPT
Web                  2.17%          0.09%          0.11%          0.05%
Video                1.70%          0.05%          0.16%          0.05%
File Share           1.75%          0.12%          0.24%          0.09%
VoIP                 4.09%          0.21%          0.17%          0.16%
Total                2.10%          0.08%          0.18%          0.07%

6. Conclusions and Future Work

In this work we presented a novel solution to the backhaul bottleneck problem of wireless broadband networks: the Base Station Application Optimizer.

The benefits of the base station application optimizer are reduced backhaul upgrade costs and an improved user experience, achieved by providing higher actual data rates and shorter delays.

In this project, a simulation of the Base Station Application Optimizer over an LTE network was implemented and tested.

The simulation can be further extended to support user mobility. It can also be extended to support multicast groups, replacing the many live video streams between each user and an Internet live video streaming server with a single live video stream between the base station application optimizer and the streaming server.


7. References

1. Patrick Donegan, "Backhaul Strategies for Mobile Carriers", Heavy Reading, Vol. 4, No. 4, 2006.
2. Ronit Nossenson, "Base Station Application Optimizer", The International Conference on Data Communication Networking (DCNET) 2010, Athens, Greece.
3. Ronit Nossenson, "Long-Term Evolution Network Architecture".
4. Qin-long Qiu, Jian Chen, Ling-di Ping, Qi-fei Zhang, Xue-zeng Pan, "LTE/SAE Model and its Implementation in NS 2", Fifth International Conference on Mobile Ad-hoc and Sensor Networks, Fujian, China, 2009, pp. 299-303. LTE model implementation: http://code.google.com/p/lte-model/
5. The Network Simulator (NS-2), http://isi.edu/nsnam/ns/
6. Teerawat Issariyakul and Ekram Hossain, "Introduction to Network Simulator NS 2", Springer Science+Business Media, 2009.
7. Project source code and results: http://code.google.com/p/base-station-application-optimizer/
8. Allot Mobile Trends, Global Mobile Broadband Traffic Report, H1/2011.
9. GNU Awk, http://www.gnu.org/s/gawk/manual/gawk.html
8. Appendix A - NS-2 Installation on Ubuntu
1. Install Ubuntu 11.04. If you are interested in running Windows and Ubuntu in a dual boot, you can use the following installer: http://www.ubuntu.com/download/ubuntu/windows-installer

2. Download and extract ns-2.34 to a folder in your home directory. The NS-2 version suitable for Ubuntu 11.04 and these instructions can be downloaded from: http://sourceforge.net/projects/nsnam/files/ns-2/2.34/

3. Install the development files for X Windows plus the g++ compiler:

sudo apt-get install xorg-dev g++ xgraph

4. In a similar way install the following packages: autoconf, automake, gcc-c++, gcc-4.4, g++-4.4, libX11-devel, xorg-x11-proto-devel, libXt-devel, libXmu-devel.

Please note that step 4 is recommended but not a must. The NS-2 installation will succeed without these packages, but with warnings. I have not tried running verification of the installation without installing these packages.

5. Fix the error in the linking of otcl by editing line 6304 of otcl-1.13/configure so that it reads

SHLIB_LD="gcc -shared"

instead of

SHLIB_LD="ld -shared"

6. Edit the file ns-2.34/tools/ranvar.cc and change line 219 from

return GammaRandomVariable::GammaRandomVariable(1.0 + alpha_, beta_).value() * pow (u, 1.0 / alpha_);

to

return GammaRandomVariable(1.0 + alpha_, beta_).value() * pow (u, 1.0 / alpha_);

7. Change lines 183 and 185 in the file ns-2.34/mobile/nakagami.cc to read

resultPower = ErlangRandomVariable(Pr/m, int_m).value();

and

resultPower = GammaRandomVariable(m, Pr/m).value();

8. Change line 270 in tcl8.4.18/unix/Makefile.in that reads

CC = @CC@

so that it appends the version parameter for version 4.4:

CC = @CC@ -V 4.4

Make sure it is a capital V.

9. Run ./install from the ns-allinone-2.34 top folder.

10. Run ./validate from the ns-allinone-2.34 top folder.