Transparent network-performance verification for LTE rollouts

Ericsson White Paper
284 23-3179 Uen | September 2012
How to verify an LTE deployment
Mobile-network operators face the significant challenge of maintaining a profitable business
while meeting market demands for faster data speeds and increased data volumes, and addressing
the needs of millions of smartphone subscribers.
To overcome these challenges and add capacity, many operators have chosen to deploy 4G LTE
networks – a strategy that supports the rollout of new services and enhances the user experience.
Capacity and competition challenges are compressing the deadlines for bringing LTE networks
into service. Add to this the fact that networks are becoming increasingly complex – not just in
terms of design and optimization, but also in relation to testing and verification. The result is a
shift in focus toward a structured approach to verification that supports simplicity and transparency.
TODAY'S VERIFICATION CHALLENGES
Rapid changes in the mobile industry are creating difficulties for network operators. Ensuring
optimal network performance during rollout and understanding how performance ultimately
affects user experience are just two examples of the type of issues operators have to face.
Industry complexity
The number of 4G networks being deployed is rising constantly. These new networks usually
require integration with 2G and 3G technologies and can include greenfield rollouts, overlays on
existing networks, transformation and multi-vendor solutions.
At the same time, rapid consumer adoption of smartphones has significantly increased the
amount of signaling in networks, created massive volumes of data and added to the number of
network events. Network-performance measurements – and their interpretation – need to be
harmonized and adapted to the complex nature of today’s networks to ensure that network-
performance targets set by the operator are met.
More services
In addition to these trends, new functionality and services are constantly being introduced, implying
that test methods and metrics also need to be kept up to date to ensure that user experience
continues to be measured accurately. Additional subscriber services, such as voice over LTE
(VoLTE) and high-definition video on demand (VoD), will become commonplace services offered
through high-bandwidth LTE networks and many operators will use QoS and QoE indicators to
ensure that these services are delivered with the high level of quality expected by users.
Automation
The increasing reliance on self-organizing networks (SON) to perform optimization tasks introduces
a different set of challenges. To realize the full operational cost savings that can be achieved
when SON functions are implemented, performance metrics and their capacity to reflect user
experience accurately must be completely reliable.
KPI complexity
Traditionally, a range of key performance indicators (KPIs) has been used to ensure that quality
and performance targets are met. Usually, such KPIs are chosen from the bottom up, tend to be
uncoordinated, and do not always accurately reflect user experience. Furthermore, significant
effort can be invested in trying to improve one set of KPIs at the expense of others. Poorly
selected KPIs can be complicated to measure and difficult to work with, and reaching their set
targets may have little impact on overall system efficiency, performance and user experience.
Operators need to be assured that a given KPI will reflect the actual user experience accurately
after rollout. For example, an LTE radio bearer can be dropped when an always-on LTE device
is not transmitting data. Measuring such a KPI does not offer any real understanding of user
experience, as sessions are quickly reestablished when needed and any delay goes unnoticed
by the user.
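To make this concrete, here is a minimal sketch, assuming hypothetical event records with a
"cause" field, of how a release-rate KPI can exclude inactivity-triggered bearer releases so that
only drops a user would actually notice are counted:

```python
# Sketch: counting only user-perceivable drops in an abnormal-release KPI.
# Event records and cause names are hypothetical, not from any vendor system.

def abnormal_release_rate(release_events):
    """Fraction of bearer releases a user would actually perceive as a drop."""
    # Releases caused by inactivity on an always-on device are benign:
    # the session is silently re-established when data flows again.
    benign_causes = {"user_inactivity", "normal_release"}
    total = drops = 0
    for event in release_events:
        total += 1
        if event["cause"] not in benign_causes:
            drops += 1
    return drops / total if total else 0.0

events = [{"cause": "user_inactivity"},
          {"cause": "radio_link_failure"},
          {"cause": "user_inactivity"},
          {"cause": "normal_release"}]
print(abnormal_release_rate(events))  # 0.25 - only the radio-link failure counts
```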
The way smartphones behave adds to the complexity of measuring performance – traffic
generated by these devices tends to be chatty and differs significantly from the traffic generated
by, for example, an LTE USB dongle. Device behavior in a network can change when a new
model is introduced, software upgrades can alter the way an application functions, and the
constant flow of new apps onto the market affects how KPI measurements are interpreted.
Managing KPI complexity is difficult enough in a single-technology environment. In an ecosystem
that encompasses all technologies (2G, 3G and 4G) the level of complexity increases dramatically
and confidence in KPIs as a true reflection of actual user experience needs to be assured.
TODAY’S VERIFICATION
CHALLENGES
A NEW FRAMEWORK FOR VERIFICATION
There are as many ways of measuring network performance as there are equipment vendors.
Just as the metric system was introduced so that weights and distances could be compared on
similar scales and across international boundaries, the mobile industry today needs a shared
framework for network-performance verification.
This paper proposes a framework that simplifies the process of network-performance
verification and provides a concise view of performance that is aligned with user experience.
The key characteristics of this framework are that it:
> Complies with ITU (International Telecommunication Union) and ETSI (European
Telecommunications Standards Institute) recommendations for performance reporting.
> Uses commercially available terminals with standard network-parameter settings.
> Involves the selection of a manageable number of top-level KPIs, built from many lower-level
performance indicators (PIs).
> Includes only KPIs that are relevant to user experience.
> Specifies how and where each KPI should be measured.
> Focuses on verification that is in line with operator priorities and is applicable in strategically
selected areas.
Compliance with recommendations
For mobile networks, the ITU Telecommunication Standardization Sector (ITU-T) has worked
with ETSI to describe a general model for QoS from the user's perspective [1][2]. The
3GPP-defined QoS categories are service accessibility, retainability, integrity and mobility [3][4][5].
In accordance with these recommendations, Figure 1 illustrates a cost-efficient KPI methodology
that includes just a few QoS-aligned KPIs based on measurements of subscriber experience
and network quality.
Figure 1: The QoS component of the ITU-T KPI hierarchy. (The hierarchy divides QoS into contracted KPIs and network monitoring, spanning service accessibility, service retainability and service integrity; supporting branches cover availability performance, resources and facilities, reliability performance, transmission performance, maintainability performance and maintenance-support performance.)
Commercial devices and standard settings
The capabilities of LTE are often demonstrated in trial scenarios with ideal and highly optimized
network conditions. Trials can be carried out using high-performance user equipment (UE)
simulators, well-tuned laptop settings and colocated core and radio-access network (RAN)
equipment with parameter settings optimized for performance rather than capacity. Such trial
environments do not reflect real network environments accurately. In operational networks, there
are tradeoffs between performance and capacity that cannot always be taken into account in
trials. The architecture of the core and transport networks may have an impact on performance,
and some KPIs may need to be adjusted accordingly.
To create reliable KPI measurements that reflect user experience accurately, testing should be
conducted using:
> Commercial terminals.
> Standard live network parameter settings.
> Transparent and fully documented KPI formulas, including the network events that trigger
counter stepping.
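As an illustration of the last point, the sketch below pairs a counter-based KPI formula with
documentation of the events that step each counter. The counter names are invented for this
example; real networks use vendor-specific counters:

```python
# Sketch: a transparently documented counter-based KPI. Counter names are
# invented; the point is that the formula and the events that step each
# counter are written down side by side.

COUNTER_EVENTS = {
    "rrc_conn_att":  "stepped on each RRC Connection Request received",
    "rrc_conn_succ": "stepped on each RRC Connection Setup Complete",
}

def rrc_establishment_success_rate(counters):
    """100 * rrc_conn_succ / rrc_conn_att, as percent; None when no attempts."""
    attempts = counters["rrc_conn_att"]
    if attempts == 0:
        return None
    return 100.0 * counters["rrc_conn_succ"] / attempts

print(rrc_establishment_success_rate(
    {"rrc_conn_att": 2000, "rrc_conn_succ": 1984}))  # 99.2
```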
Minimizing KPIs
Figure 2 illustrates a structured approach where many PI metrics are consolidated into just a
few KPIs that reflect user experience accurately.
By adopting this approach, the KPIs that best reflect user experience are assigned the greatest
importance. In this way, priorities and performance targets can be determined and the right
level of effort can be allocated to solving the issues that are relevant – thereby fast-tracking
network-performance improvements.
Relevant KPIs
The relevance of a given metric to a KPI is directly related to its impact on user experience or
overall system performance. A metric that affects neither should be classified as a PI. For
example, most users are completely unaware of the Time to Attach metric, which measures the
time it takes for a device to connect to a network. This is a background operation that occurs
infrequently, and as such has little impact on user experience and overall system performance.
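The classification rule reduces to a simple test; a trivial sketch, with illustrative metric names
and impact flags:

```python
# Sketch: triaging a metric into KPI candidate vs monitored PI using the
# relevance rule above. Metric names and impact flags are illustrative.

def classify_metric(name, affects_user_experience, affects_system_performance):
    """A metric earns KPI status only if it moves user experience or
    overall system performance; otherwise it stays a monitored PI."""
    if affects_user_experience or affects_system_performance:
        return (name, "KPI candidate")
    return (name, "PI - monitor only")

print(classify_metric("Session Setup Success Rate", True, True))
print(classify_metric("Time to Attach", False, False))  # background, infrequent
```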
How and where?
Complex and time-consuming verification procedures should be limited to laboratory or field
environments. Such tests can be advantageous when it comes to validating the final set of KPIs
and suitable target-value ranges.
For cluster tuning and acceptance, a small number of KPIs should be selected to allow for
efficient drive testing and to ensure a sufficient degree of accuracy when measuring user experience.
Target values for KPIs should be determined to reflect different test environments. Once a network
is in operation, counter-based KPIs can be utilized to monitor performance as traffic increases.
These KPIs, based on commercial traffic and network counters, are likely to differ from those
selected for cluster tuning, which are based on drive-test measurements. For example, measuring
user-perceived latency is straightforward in a drive test but impractical with network counters.
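As a simple illustration of drive-test latency sampling, the sketch below times short
request/response exchanges from a terminal; the target host, port and sample count are
placeholders, and TCP connect time stands in for an ICMP ping:

```python
# Sketch: sampling user-perceived RTT during a drive test by timing small
# request/response exchanges. Host and port are placeholders.
import socket
import time

def sample_rtt(host="example.com", port=80, samples=10):
    """Return measured round-trip times in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=2):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            pass  # lost sample; a real tool would log it as a failure event
        time.sleep(0.5)
    return rtts

rtts = sample_rtt()
if rtts:
    median = sorted(rtts)[len(rtts) // 2]
    print(f"median RTT: {median:.1f} ms over {len(rtts)} samples")
```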
Focused verification
Tuning and verification activities are designed to offer maximum benefit to the operator. Focused
verification can be based on factors such as the location of key users, areas of dense traffic,
difficult environments and known trouble spots.
Figure 2: A structured approach to KPIs. (System events feed primary-level PIs, which are consolidated into a single top-level KPI.)
TransparenT neTwork-performance verificaTion for LTe roLLouTs • a new framework for verificaTion 5
Cluster and network performance
KPIs of this type, and the way they are defined, tend to be agreed between the operator and
the vendor responsible for design, build and integration before testing starts. Ideally, agreement
should be reached before contract finalization so that both parties have a mutual understanding
of the requirements and scope of verification testing. Table 1 summarizes some KPIs commonly
used for cluster verification.
Table 1: Recommended first-order KPI categories for cluster verification

Accessibility – How easy it is for the user to obtain a service within specified tolerances and
other given conditions. Session Setup Success Rate is a common KPI in this category.
Retainability – The capability of a service, once obtained, to continue to be provided under
given conditions for a requested period. Examples of KPIs in this category include Session
Abnormal Release Rate (dropped calls) and Minutes Per Abnormal Session Release.
Integrity – The degree to which a service, once obtained, is provided without excessive
impairments. Examples include downlink (DL) and uplink (UL) throughput, latency and packet loss.
Mobility – Performance of all handover types. Examples include LTE Handover Success Rate
and Inter-Radio Access Technology (IRAT) Handover Success Rate.

The KPIs recommended here are built from several lower-order or network-monitoring PIs.
Figure 3 illustrates how a single top-level KPI – Session Setup Success Rate – is created from
several lower-order PIs as the best way of reflecting user experience when establishing a
session. Users are unaware of the network events required to establish a session, and failures
can occur at any point, such as Radio Resource Control (RRC) establishment, S1 link
establishment or E-UTRAN radio access bearer (E-RAB) establishment. Reporting only the
top-level KPI therefore gives a better estimate of user experience, while performance statistics
for all PIs are recorded for investigative and troubleshooting activities.
Figure 3: KPI simplification for Session Setup Success Rate. (The top-level KPI is built from three PIs – RRC Establishment Success Rate, S1 Link Establishment Success Rate and E-RAB Establishment Success Rate – which in turn derive from system events and counters.)
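If the three establishment stages are treated as sequential and statistically independent – an
assumption made here for illustration – the top-level KPI can be computed as the product of the
PI success rates:

```python
# Sketch: consolidating the Figure 3 PIs into the single top-level KPI,
# treating the establishment stages as sequential and independent.

def session_setup_success_rate(rrc_sr, s1_sr, erab_sr):
    """All inputs and the result are success fractions in [0, 1]."""
    return rrc_sr * s1_sr * erab_sr

kpi = session_setup_success_rate(rrc_sr=0.995, s1_sr=0.999, erab_sr=0.997)
print(f"Session Setup Success Rate: {kpi:.1%}")  # ~99.1%
```

With the example rates, roughly one session attempt in a hundred fails somewhere in the chain
even though each individual PI looks close to perfect – which is why the consolidated figure
tracks user experience better than any single PI.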
KPI framework – an example
Table 2 shows a typical KPI framework for an LTE network, highlighting where each KPI should
be tested. In this example, VoLTE is tested in a controlled environment and verified in the field
using a reference site or golden cluster. Having successfully completed VoLTE tests in this way,
the default-bearer KPIs can be used with confidence for general cluster-acceptance drive tests.
This table is operator-specific, developed to suit individual operator targets and requirements,
and supports the objective of reducing the total number of KPIs.
As throughput performance indicators vary from cluster to cluster and are influenced by site
location and user behavior, Uplink and Downlink User Throughput are marked as monitored
PIs in the example. These indicators are used to set tuning objectives in cluster verification and
can be utilized to monitor load and capacity in operational verification – and are consequently
not KPIs. The sketch after Table 2 shows how such a framework can be queried per stage.
Table 2: KPI framework for LTE network verification
Stages: Controlled environment / Site acceptance / Golden cluster / Cluster / In-service operation
(• = measured as a KPI at a stage; PI = monitored as a performance indicator only; different target values and test methods apply per stage)

Availability | Cell Availability | •
Accessibility | Session Setup Success Rate | • • • •
Retainability | Session Abnormal Release Rate | • • • •
Integrity | RTT Latency | • • • •
Integrity | RTT Packet Loss (ping) | • • • •
Integrity | UL Packet Loss (PDCP) | • • •
Integrity | DL Packet Loss (PDCP) | • • •
Integrity | Downlink Peak User Throughput | • •
Integrity | Uplink Peak User Throughput | • •
Integrity | Downlink User Throughput | • PI PI PI
Integrity | Uplink User Throughput | • PI PI PI
Mobility | Handover Success Rate | • • • •
Mobility | Handover Interruption Time – Control Plane | • • •
Mobility | Handover – Session Continuity Interruption Time | • •
VoLTE | Voice – Session Setup Success Rate | • • •
VoLTE | Voice – Session Abnormal Release Rate | • • •
VoLTE | Voice – Session Setup Time | • • •
VoLTE | Voice – Speech Delay | • •
VoLTE | Voice – Packet Loss Rate | • •
VoLTE | Voice – Jitter | • •
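A framework table like Table 2 can also serve as machine-readable test configuration. The
sketch below, with our own illustrative structure and stage markings rather than a faithful
transcription of the table, selects the metrics to collect at a given verification stage:

```python
# Sketch: a framework table as data, so each verification stage can query
# the metrics it should collect. Stage markings here are illustrative
# examples, not a faithful transcription of Table 2.

FRAMEWORK = [
    # (KPI area, metric, stages where it is measured as a KPI)
    ("Accessibility", "Session Setup Success Rate",
     {"controlled", "site_acceptance", "golden_cluster", "in_service"}),
    ("Integrity", "Downlink User Throughput", {"controlled"}),  # monitored PI elsewhere
    ("VoLTE", "Voice - Session Setup Time",
     {"controlled", "golden_cluster", "cluster"}),
]

def kpis_for_stage(stage):
    """Metrics to measure as KPIs at the given verification stage."""
    return [metric for _, metric, stages in FRAMEWORK if stage in stages]

print(kpis_for_stage("golden_cluster"))
# ['Session Setup Success Rate', 'Voice - Session Setup Time']
```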
TYPICAL NETWORK VERIFICATION
A typical network-deployment project includes several stages, and different verification processes
are applicable at each of these. Figure 4 illustrates the traditional stages of a network rollout and
the applicable verification activities.
Product verification, certification and conformity
In the first stage of a network rollout, checks need to be carried out on the network equipment
supplied to determine whether it is fit for purpose, and consequently whether the LTE solution
can be integrated into the existing network with the given interfaces. This verification can also
be used to validate KPIs that are strongly linked to product performance and are independent
of radio tuning. Examples include the ability of the system to achieve DL and UL peak data-
throughput rates close to the theoretical maximum.
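As a back-of-envelope illustration of what the theoretical maximum means, the raw LTE
downlink rate can be derived from the resource grid; the sketch below assumes a 20 MHz
carrier, 2x2 MIMO and 64QAM, with a rough 25 percent overhead allowance for reference
signals, control channels and coding:

```python
# Sketch: back-of-envelope LTE DL peak rate for 20 MHz, 2x2 MIMO, 64QAM.
# The 25 percent overhead allowance is a rough assumption for illustration.

resource_blocks = 100     # a 20 MHz LTE carrier
subcarriers_per_rb = 12
symbols_per_ms = 14       # normal cyclic prefix: 2 slots of 7 OFDM symbols
bits_per_symbol = 6       # 64QAM
mimo_layers = 2

bits_per_ms = (resource_blocks * subcarriers_per_rb * symbols_per_ms
               * bits_per_symbol * mimo_layers)
raw_mbps = bits_per_ms / 1000  # bits per 1 ms subframe -> Mbit/s
print(f"raw grid capacity: {raw_mbps:.1f} Mbit/s")           # 201.6
print(f"after ~25% overhead: {raw_mbps * 0.75:.1f} Mbit/s")  # ~151, close to
# the 150 Mbit/s downlink peak of a Category 4 UE
```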
Product-conformance verification is typically carried out on a golden site or golden cluster,
or in the operator’s lab environment. The outcome of the verification procedure is specified in
an agreed set of configurations and parameters for the overall solution that have a certain level
of confidence associated with them. This outcome can be used to benchmark performance once
the network is deployed.
Some examples of product-conformance verification are:
Certificate of conformance – Typically, this verification method is applied when product
functionalities or attributes are complex or require specialized equipment to take accurate
measurements. In such cases, product vendors supply a certificate to verify product
conformance to the relevant industry standard.
Supply of First Office Application (FOA) reports – All new system features and software
releases undergo an extensive FOA period, supporting the first operation and demonstration
of new products in the operational system prior to commercialization. FOA reports can be
used to verify system performance.
Demonstration of features – To check interfaces to legacy equipment or to confirm
configurations, operators may require certain new system features to be demonstrated directly
in the operational network. This type of verification testing is typically performed on a golden
site or in a golden cluster.
Performance measurement – Appropriately selected performance aspects (KPIs and PIs) are
validated to ensure that the contractually agreed requirements are met; optimization may lead
to an adjustment of parameters.
Figure 4: Verification stages of a typical network rollout. (Build: product verification, certification and conformity – laboratory or controlled field tests, in-depth testing, verifying performance and features – followed by site-build verification – site health checks and site-build integrity. Tuning: cluster and network verification – drive-test KPIs covering accessibility, retainability, integrity and mobility. Operate: in service – network-counter KPIs covering the same four categories.)
TransparenT neTwork-performance verificaTion for LTe roLLouTs • TYpicaL neTwork verificaTion 8
Build
The purpose of site-build verification is to detect problems that arise during the installation and
integration processes, which may affect user performance.
In this phase, typical tests – sometimes referred to as shake-down tests – include basic
functionality checks of accessibility, retainability and integrity. Tests performed on an LTE site
build may include access attempts on network cells, DL and UL throughput tests, and round-trip
time tests for measuring end-to-end latency.
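A shake-down run of this kind can be scripted as a pass/fail checklist; a minimal sketch, with
placeholder check names and thresholds that would in practice come from the agreed KPI
framework:

```python
# Sketch: a site shake-down as a pass/fail checklist. Thresholds are
# placeholders; real values come from the agreed KPI framework.

SHAKEDOWN_CHECKS = [
    # (name, measured-value getter, pass condition)
    ("cell access attempt", lambda m: m["attach_ok"], lambda v: v is True),
    ("DL throughput (Mbit/s)", lambda m: m["dl_mbps"], lambda v: v >= 20.0),
    ("UL throughput (Mbit/s)", lambda m: m["ul_mbps"], lambda v: v >= 5.0),
    ("RTT latency (ms)", lambda m: m["rtt_ms"], lambda v: v <= 60.0),
]

def run_shakedown(measurements):
    results = {}
    for name, get, ok in SHAKEDOWN_CHECKS:
        value = get(measurements)
        results[name] = (value, "PASS" if ok(value) else "FAIL")
    return results

site = {"attach_ok": True, "dl_mbps": 38.2, "ul_mbps": 11.4, "rtt_ms": 41.0}
for name, (value, verdict) in run_shakedown(site).items():
    print(f"{name}: {value} -> {verdict}")
```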
Tune
The processes of cluster and network verification are usually performed following network tuning.
Tuning – or optimization – of the radio network ensures that network performance is in line with
the targets set prior to commercial launch. Tuning helps to ensure good quality for users, which
in turn facilitates successful network introduction for the operator.
The tuning phase is performed on base-station clusters and uses data from a number of
sources, including drive tests and network measurements. Improvements can be identified and
implemented to increase network performance. The final part of the tuning process typically
involves cluster verification, in which measured network performance is compared with preset
performance goals – typically the KPIs shown in Tables 1 and 2.
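That comparison step can be expressed directly; a minimal sketch of cluster acceptance
against preset goals, with invented targets and measurements:

```python
# Sketch: cluster acceptance by comparing drive-test KPI results with
# preset goals. Targets and measurements are invented examples.

TARGETS = {  # KPI -> (target, higher_is_better)
    "Session Setup Success Rate (%)": (98.0, True),
    "Session Abnormal Release Rate (%)": (2.0, False),
    "RTT Latency (ms)": (50.0, False),
}

def cluster_accepted(measured):
    failures = []
    for kpi, (target, higher_better) in TARGETS.items():
        value = measured[kpi]
        ok = value >= target if higher_better else value <= target
        if not ok:
            failures.append((kpi, value, target))
    return not failures, failures

accepted, failures = cluster_accepted({
    "Session Setup Success Rate (%)": 98.7,
    "Session Abnormal Release Rate (%)": 1.1,
    "RTT Latency (ms)": 44.0,
})
print("cluster accepted" if accepted else f"retune needed: {failures}")
```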
Cluster verification traditionally utilizes performance-evaluation data from network drive tests.
The load levels of pre-commercial and recently launched networks tend to be low, and user traffic
tends to be network-friendly. When possible, statistical counters and event logs are also used
to analyze user behavior, but this is only possible to a certain degree owing to the small number
of active terminals in the network.
LTE brings a number of SON features that support the performance-tuning process. The
availability, scope and effectiveness of SON features such as Automatic Neighbor Relations
(ANR) are expected to improve over time. By applying SON techniques, performance tuning
should become faster and significantly simpler, eventually reducing or even eliminating the
need for network drive testing. 3GPP has started a feasibility study [6] to investigate the
minimization of drive tests in next-generation networks, with a view to reducing network
operational costs.
If the introduction of SON functionality succeeds in reducing or eliminating network drive
testing, it will become increasingly important for network-performance reports and measures to
reflect user performance accurately and transparently.
Operate
Once the network is in service, network performance can be assured through monitoring of
counters. Key aspects of operational performance monitoring are described in another Ericsson
white paper on service assurance [7].
CASE STUDIES
Case 1 – KPIs with low relevance
This case relates to an LTE rollout project targeting more than 10 million subscribers for LTE
mobile broadband data services. The performance criteria were largely based on a number of
traditional 3G (WCDMA) voice KPIs, including Voice Call Setup Time, Voice Call Drop Rate, RRC
Failure Rate, Block Error Rate and IRAT handover performance. These criteria and their target
values were inherited from the original Request for Proposal (RFP), which was written prior to
the full standardization of the LTE technology.
The relevance level of many of the chosen KPIs to user experience was low, and as a result
considerable time and effort was required to establish test methods and tools to measure and try
to improve them. Consequently, key technical resources were unavailable for other network-
improvement and optimization activities that would have had a much larger impact on user experience.
After several months of testing and discussion, it was agreed that some of the original KPIs
did not provide an accurate measure of user experience; these were removed, or reclassified
as PIs and used for monitoring.
Case 2 – Targeting selected KPIs
In this case, an LTE operator defined 15 performance KPIs for the rollout project. This list was
reduced to eight KPIs for validation of cluster tuning and acceptance, which directly reduced
the amount of time and effort required for data collection and analysis, freeing up resources for
tuning and resolution of performance issues.
The operator carried out a number of special tests and measurements in one golden cluster.
The tests were linked to product functions and parameter settings, had low correlation with
network tuning, and included the seven KPIs that were excluded from the cluster tuning.
Performing these tests early in the process and on a limited area allowed the project engineers
to define and implement performance improvements for the whole network as it was rolled out
and tuned.
The operator’s network was tuned and put into service quickly with more than 90 percent of
clusters being accepted after just one round of tuning. The rollout engineers were able to shift
their focus completely to capacity management, operational processes and troubleshooting.
Today, this operator is a leading provider of LTE mobile broadband, and has a stable, high-
performing network.
Case 3 – A minimalist approach
An operator with a low-cost model wanted to enter the LTE market quickly but without compromising
service quality. The network rollout project was a continuous build, with low traffic from the start.
Costs were a concern, so the deployment strategy focused on fewer KPIs and limited drive testing.
Those drive tests that were carried out were targeted to prioritized areas and performance
verification. Many parts of the network underwent only a single site test before launch.
The network was successfully launched, and performance was monitored with network counter
statistics. SON features were found to have worked effectively. As post-launch user traffic was
low, the few performance issues that arose were quickly identified and rectified before a wider
group of users was affected. Deployment, verification and a smooth launch were all achieved
within very tight deadlines.
MEASURE THE BENEFITS
The Next Generation Mobile Networks (NGMN) Alliance has recognized the problems associated
with previous approaches to network verification and has recommended a structured top-down
approach using a set of standard service-level KPIs [8]. These KPIs are grouped into five main
categories – accessibility, retainability, integrity, availability and mobility – where measurements
use common formulas and methods.
Network-level testing and acceptance should focus on these KPIs, which are based on
numerous resource-level PIs that provide system behavior and performance information at a
more granular level.
The telecom industry is pushing for the increased use of SON functions as a way to change
how vendors and operators tune and manage networks. For example, the ANR feature influences
how networks are tuned because it reduces the amount of effort required to plan and manage
neighbor relationships. To achieve the full capex and opex benefits with the delivery of SON,
reported KPIs need to be trustworthy.
Benefits for operators
> Faster time to market for new networks, services and coverage.
> Confidence that network-centric measurements accurately reflect the user experience.
> Transparent and easy benchmarking in multiple-vendor environments.
> Capacity to focus resources and efforts on areas that will result in the greatest benefit
for users.
> Ability to achieve targeted network efficiency.
Benefits for vendors
> Possibility to focus resources and efforts on tuning networks, maximizing network efficiency
and improving user experience.
> Capacity to meet aggressive timelines.
> Faster network deployment.
> Creation of strong operator partnerships.
> Opportunity to deliver high-quality networks that can be adapted to changing technology and
market conditions.
CONCLUSION
LTE networks provide the higher capacity and data speeds required to support ever-increasing
numbers of mobile-broadband subscriptions, smartphones and other connected devices. The
timely deployment of LTE networks is crucial for operators to launch new services, support faster
data speeds, provide greater capacity, and reduce investment costs.
The network complexity created by new services, new types of devices, multiple technologies,
and multiple-vendor deployments requires an efficient approach to network-performance
verification from initial testing through to in-service operation. Such an approach allows operators
to focus on bringing LTE networks into service quickly with the required level of network
performance for successful commercial launch and continued operation.
REFERENCES
1. ITU-T, Recommendation E.800, 2008-09, Series E: Overall Network Operation, Telephone
Service, Service Operation and Human Factors, available at:
http://www.itu.int/rec/T-REC-E.800-200809-I/en
2. ETSI, Technical Specification 132 450, 2009-04, Key Performance Indicators (KPIs) for
Evolved Universal Terrestrial Radio Access Network (E-UTRAN): Definitions (3GPP TS
32.450 Version 8.0.0 Release 8), available at:
http://www.etsi.org/deliver/etsi_ts/132400_132499/132450/08.00.00_60/ts_132450v080000p.pdf
3. 3GPP, Technical Specification 32.450, 2009, Telecommunication Management; Key
Performance Indicators (KPIs) for Evolved Universal Terrestrial Radio Access Network
(E-UTRAN): Definitions, available at: http://www.3gpp.org/ftp/Specs/html-info/32450.htm
4. 3GPP, Technical Specification 32.451, 2009, Telecommunication Management; Key
Performance Indicators (KPIs) for Evolved Universal Terrestrial Radio Access Network
(E-UTRAN); Requirements, available at: http://www.3gpp.org/ftp/Specs/html-info/32451.htm
5. 3GPP, Technical Specification 32.425, 2009, Telecommunication Management; Performance
Management (PM); Performance Measurements Evolved Universal Terrestrial Radio Access
Network (E-UTRAN), available at: http://www.3gpp.org/ftp/Specs/html-info/32425.htm
6. 3GPP, Technical Report 36.805, 2009, Study on minimization of drive-tests in next generation
networks, available at: http://www.3gpp.org/ftp/Specs/html-info/36805.htm
7. Ericsson, White Paper, 2011, Keeping the Customer Service Experience Promise – How to
Meet the Service Assurance Challenge, available at:
http://www.ericsson.com/news/110121 wp service assurance 244188811 c
8. NGMN Alliance, White Paper, 2006, Next-Generation Mobile Networks Beyond HSPA & EVDO,
available at: http://www.ngmn.org/uploads/media/Next Generation Mobile Networks Beyond HSPA EVDO Web.pdf
GLOSSARY
2G – 2nd-generation wireless telephone technology
3G – 3rd-generation wireless telephone technology
3GPP – 3rd Generation Partnership Project
4G – 4th-generation mobile wireless standards
ANR – Automatic Neighbor Relations
capex – capital expenditure
DL – downlink
E-RAB – E-UTRAN Radio Access Bearer
ETSI – European Telecommunications Standards Institute
E-UTRAN – Evolved Universal Terrestrial Radio Access Network
EVDO – Evolution-Data Optimized (Evolution, Data Only)
FOA – First Office Application
HSPA – High-Speed Packet Access
IRAT – Inter-Radio Access Technology (Inter-RAT)
ITU – International Telecommunication Union
ITU-T – ITU Telecommunication Standardization Sector
KPI – key performance indicator
LTE – Long Term Evolution
NGMN – Next Generation Mobile Networks
opex – operational expenditure
PDCP – Packet Data Convergence Protocol
PI – performance indicator
QoE – quality of experience
QoS – quality of service
RAB – radio-access bearer
RAN – radio-access network
RFP – Request for Proposal
RRC – Radio Resource Control
RTT – round-trip time
SON – self-organizing networks
UE – user equipment
UL – uplink
USB – Universal Serial Bus
VoD – video on demand
VoLTE – voice over LTE
WCDMA – Wideband Code Division Multiple Access