Simulation-Based Analysis of MPLS Traffic Engineering




Model Research & Development

OPNET Technologies, Inc.

Bethesda, MD, 20814



It is a well-known behavior [FF99] that when congestion-sensitive (TCP) and congestion-insensitive (UDP) traffic share a common path, an increase in congestion-insensitive traffic adversely affects the performance of congestion-sensitive traffic (e.g., due to TCP's congestion control mechanism). This paper uses an example network scenario to illustrate the benefits of using MPLS traffic engineering [AMAOM99] and QoS in eliminating the undesirable effects of mixing congestion-insensitive flows with congestion-sensitive flows [BSJ00]. We use a UDP data flow to represent a congestion-insensitive traffic flow and TCP data flows to represent congestion-sensitive traffic flows. A comparative simulation analysis is provided for 1) a non-MPLS-enabled network, 2) a network with two LSPs, one for a pure congestion-sensitive flow and another combined one for a congestion-insensitive and a congestion-sensitive flow, and 3) a network with three LSPs, one for each traffic flow, with traffic differentiation using WFQ on the link carrying both congestion-insensitive and congestion-sensitive flows. The comparison is based on the throughput achieved by the congestion-insensitive and congestion-sensitive flows when they share the network resources.

MPLS Overview

Multiprotocol Label Switching (MPLS) [RVC01] is the latest technology in the evolution of routing and forwarding mechanisms for the core of the Internet. The "label" in MPLS is a short, fixed-length value carried in the packet's header to identify a Forwarding Equivalence Class (FEC). A FEC is a set of packets that are forwarded over the same path through a network. FECs can be created from any combination of source and destination IP addresses, transport protocol, port numbers, etc. Labels are assigned to incoming packets using a FEC-to-label mapping procedure at the edge routers. From that point on, it is only the labels that dictate how the network will treat these packets, i.e., what route to use, what priority to assign, and so on. MPLS defines label-switched paths (LSPs), which are pre-configured to carry packets with specific labels. These LSPs can be used to forward specific packets through specific routes, thus facilitating traffic engineering.

Figure 1 is a simple illustration of how MPLS performs label switching. The ingress label edge router (LER) receives an IP datagram (an unlabeled packet) with a given destination address. The LER maps the packet to a FEC, assigns a label (with a value of 8) to the packet, and forwards it to the next hop in the LSP. In the core of the network, the LSRs ignore the packet's network layer header and simply forward the packet using a label-swapping algorithm. In this example, the label changes from 8 to 4 to 7.

Figure 1: Life cycle of a packet through an MPLS network
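The label life cycle in Figure 1 can be sketched as a pair of lookup tables. This is a minimal illustration, not OPNET's model: the router names, the destination prefix, and the FEC rule are hypothetical; only the label sequence 8 → 4 → 7 comes from the figure.

```python
# Per-router label forwarding tables: in_label -> (out_label, next_hop).
# Router names and entries are hypothetical; labels follow Figure 1.
LFIB = {
    "LSR_A": {8: (4, "LSR_B")},
    "LSR_B": {4: (7, "Egress_LER")},
}

def ingress_classify(dst_ip):
    """FEC-to-label mapping at the ingress LER (hypothetical rule)."""
    # Every packet toward this prefix belongs to one FEC and gets label 8.
    if dst_ip.startswith("10.1."):
        return 8, "LSR_A"
    raise ValueError("no FEC matches this destination")

def forward(dst_ip):
    """Trace a packet's labels from ingress toward egress."""
    label, hop = ingress_classify(dst_ip)
    trace = [label]
    while hop in LFIB:                  # core LSRs only swap labels
        label, hop = LFIB[hop][label]
        trace.append(label)
    return trace                        # the egress LER pops the label

print(forward("10.1.2.3"))  # label sequence along the LSP: [8, 4, 7]
```

Note that the core LSRs never consult the IP header; the classification cost is paid once, at the ingress LER.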


Network Topology

We used OPNET Modeler 8.0 for our simulations, using OPNET's MPLS models available in the OPNET Model Library. This section explains the network model used in this study. The main goal is to generate a mix of one-way (only source to destination) congestion-insensitive and congestion-sensitive traffic and study the network performance with and without MPLS traffic engineering (TE).

In order to achieve this goal, we have prototyped a network with the following main components:

Traffic sources/destinations: One congestion-insensitive traffic source (named "UDP Source") generating a varying level of traffic (1.5 Mbps through 4.0 Mbps) and two congestion-sensitive traffic sources (named "TCP Source 1" and "TCP Source 2"), each generating 1.5 Mbps of traffic to be sent from the source nodes to the respective destination nodes. All end stations are fully enabled nodes; e.g., in the event of detecting congestion, the TCP sources will reduce their traffic flow input as dictated by TCP's congestion control mechanism.

Network topology: The edge network on the source and destination sides consists of an LER connected to the core. The core network consists of four LSRs connected using two parallel paths of 4.5 Mbps and 1.5 Mbps. The edge network is configured to operate at the OC-3 (155 Mbps) data rate so that it does not introduce any congestion. Remember that our main goal is to study the impact of overload in the core network.

All routers (LERs and LSRs) in the given baseline topology are MPLS enabled. They have been configured such that their label mapping and switching algorithms are enabled only when LSPs are defined in the network. With no LSPs defined, these routers will use the routes advertised by the dynamic routing protocol running on their interfaces (OSPF in our example). Due to the higher bandwidth of the links between LSR1 and LSR3, and LSR3 and LSR4, the default route taken to get from the "Ingress LER" to the "Egress LER" will be from LSR1 to LSR3 to LSR4.
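The default-route choice can be sketched with OSPF's common bandwidth-based cost metric (cost = reference bandwidth / link bandwidth). This is only an illustration of the metric: the name LSR2 for the slow path and the 100 Mbps reference bandwidth are assumptions, not details from the OPNET configuration.

```python
# A sketch of why OSPF prefers the 4.5 Mbps path: with the common
# cost metric (reference bandwidth / link bandwidth), higher-bandwidth
# links yield a lower total path cost.
REF_BW = 100e6  # 100 Mbps reference bandwidth (a common default; assumed)

links = {  # (node, node): bandwidth in bps; LSR2 on the slow path is assumed
    ("LSR1", "LSR3"): 4.5e6, ("LSR3", "LSR4"): 4.5e6,  # fast path
    ("LSR1", "LSR2"): 1.5e6, ("LSR2", "LSR4"): 1.5e6,  # slow path
}

def path_cost(path):
    """Sum of per-link OSPF costs along a hop-by-hop path."""
    return sum(REF_BW / links[(a, b)] for a, b in zip(path, path[1:]))

fast = path_cost(["LSR1", "LSR3", "LSR4"])
slow = path_cost(["LSR1", "LSR2", "LSR4"])
print(fast, slow)  # the fast path has the lower cost, so all flows share it
```

Because the slow path's cost is three times higher, plain OSPF sends every flow over the single fast path, leaving the 1.5 Mbps path idle. That is precisely the inefficiency the later MPLS-TE scenarios correct.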

In terms of results analysis, we will focus on the throughput (in bits/sec) collected at the outgoing interfaces to the various destinations from the egress LER. Various other metrics that can also be analyzed for the above network include application response time, number of TCP retransmissions, traffic dropped in the core of the network due to congestion, etc.

Figure 2: Baseline network topology


Figure 3: (Scenario 1) Baseline network with no MPLS

Simulation Results/Analysis

This section explains the results and the corresponding analysis performed as part of this study. We modeled several different cases:

Scenario 1: The network model does not have any LSPs (i.e., there is no MPLS-TE). In this case, our goal is to obtain baseline results to study the effect of increasing congestion-insensitive traffic on congestion-sensitive traffic. We ran multiple simulations in which we increased the amount of traffic generated by the "UDP Source" (1.5 Mbps, 2.0 Mbps, 2.5 Mbps, 3.0 Mbps, 3.5 Mbps, 4.0 Mbps). Both TCP (congestion-sensitive) and UDP (congestion-insensitive) traffic flow from the "Ingress LER" to the "Egress LER" via LSR1, LSR3, and LSR4.
The amount of traffic being successfully received by the destination nodes for various loads is shown in Figure 4.1 through Figure 4.4. Notice that when the amount of UDP traffic sent by the "UDP Source" is increased beyond a value such that the combined TCP and UDP load is greater than the capacity of a core link (e.g., LSR1 to LSR3), the amount of traffic received by the TCP destinations starts decreasing. This is because the congestion-sensitive TCP flows decrease their traffic input when TCP at the source detects congestion. This allows UDP to send more traffic, thereby further reducing TCP throughput.

This case demonstrates that the congestion-insensitive traffic is not at all penalized for the congestion in the network. The congestion-sensitive traffic suffers to the extent that its throughput is almost reduced to zero when the UDP traffic input is equal to the capacity of the core link.
Figure 4.1: All Flows generate 1.5 Mbps

Figure 4.2: UDP Flow increased to 2.5 Mbps


Figure 4.3: UDP Flow increased to 3.5 Mbps

Figure 4.4: UDP Flow increased to 4.0 Mbps

Scenario 2: It is clear from the results of the previous scenario that dynamic routing protocols such as OSPF do not attempt to utilize all the available network resources efficiently. MPLS remedies this deficiency by facilitating traffic engineering. In this scenario, we enhanced the baseline network to contain two LSPs, as shown in Figure 5:

One LSP is configured to carry traffic from "TCP Source 1" and "UDP Source", and is pinned to route through the high-bandwidth links in the core network. The second LSP is configured to solely carry the TCP flow from "TCP Source 2" through a different section of the core network. This traffic engineering design will allow the traffic from "TCP Source 2" to flow through the core network without experiencing any loss of throughput due to an increase in UDP traffic. However, the traffic from "TCP Source 1" will still be subject to the behavior we observed in Scenario 1. The results for the different UDP traffic loads for this case are shown in Figure 6.1 through Figure 6.4.
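Extending the same fluid-model intuition to the two-LSP layout: only "TCP Source 1" now shares the fast link with UDP, while "TCP Source 2" rides its own LSP. Again, this is a hypothetical simplification for intuition, not the OPNET results.

```python
SHARED_LINK = 4.5   # Mbps: LSP carrying "TCP Source 1" + "UDP Source"
TCP_RATE = 1.5      # Mbps offered by each TCP source

def scenario2(udp_offered):
    """Return (tcp1, tcp2) throughput in Mbps under the two-LSP layout."""
    udp = min(udp_offered, SHARED_LINK)
    tcp1 = max(0.0, min(TCP_RATE, SHARED_LINK - udp))  # shares link with UDP
    tcp2 = TCP_RATE                                    # isolated LSP: unaffected
    return tcp1, tcp2

for udp in [1.5, 2.5, 3.5, 4.0]:
    t1, t2 = scenario2(udp)
    print(f"UDP {udp} Mbps -> TCP1 {t1:.1f} Mbps, TCP2 {t2:.1f} Mbps")
```

The model reproduces the qualitative result of Figures 6.1 through 6.4: TCP2 stays at 1.5 Mbps regardless of UDP load, while TCP1 degrades only once UDP plus TCP1 exceeds the shared link's capacity.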

This case reveals that MPLS-TE can be instrumental in steering traffic from congested resources of a network to uncongested areas. Apart from almost totally eliminating SLA violations for the newly traffic-engineered flows, MPLS-TE also helps in improving the performance of the remaining traffic in congested areas of the network.

Figure 5: (Scenario 2) Baseline network with two LSPs


Figure 6.1: All Flows generate 1.5 Mbps

Figure 6.2: UDP Flow increased to 2.5 Mbps

Figure 6.3: UDP Flow increased to 3.5 Mbps

Figure 6.4: UDP Flow increased to 4.0 Mbps

Notice that the throughput for traffic generated by "TCP Source 2" remains unaffected by the increase in UDP traffic flow, whereas the traffic generated by "TCP Source 1" reduces as UDP traffic increases. An additional advantage of traffic engineering the "TCP Source 2" traffic flow is that the traffic from "TCP Source 1" is now able to access more network resources, and suffers only at very high UDP traffic loads.

Figure 7: (Scenario 3) Baseline network with three LSPs

Scenario 3: The network from Scenario 2 is enhanced to contain a separate LSP for each traffic flow. Individual LSPs are configured to carry the three different traffic flows. There are two separate LSPs for the "TCP Source 1" and "UDP Source" traffic flows. The paths that these LSPs take are pinned through the high-bandwidth links in the core network; therefore, they still share common network resources.

In order to treat the TCP and UDP flows in an isolated fashion (even though they share common resources), Weighted Fair Queuing (WFQ) has been enabled on the output interface of "LSR 1" (i.e., the interface forwarding packets on the link connecting "LSR 1" and "LSR 3"). The WFQ configuration is such that higher priority is given to TCP traffic over UDP traffic in a 10:1 weighted ratio.
This traffic engineering design, integrated with QoS support, allows traffic from "TCP Source 1" to flow through the core network without experiencing loss of throughput due to an increase in UDP traffic. The traffic from "TCP Source 2" flows through the core network unaffected by any other traffic because it is configured to go over uncongested portions of the network.

Notice that the traffic from "TCP Source 1" is not subject to any degradation in performance because we have prioritization for this flow in our example scenario. The UDP traffic starts exhibiting performance degradation as soon as the combined TCP and UDP load is greater than the available link bandwidth. The results for the different UDP traffic loads for this scenario are shown in Figure 8.1 through Figure 8.4.

This case demonstrates that by incorporating QoS into our test network, we are able to significantly improve overall network performance. We were able to maintain the throughput for congestion-sensitive traffic.

Notice that throughput for the traffic generated by "TCP Source 1" remains unaffected by an increase in UDP traffic flow, due to the isolation of congestion-insensitive traffic from other traffic using MPLS LSPs.


Figure 8.1: All Flows generate 1.5 Mbps

Figure 8.2: UDP Flow increased to 2.5 Mbps

Figure 8.3: UDP Flow increased to 3.5 Mbps

Figure 8.4: UDP Flow increased to 4.0 Mbps


As a summary of the experiments conducted as part of this paper, we present the following graphs. These graphs show a brief comparison of all the cases highlighted in this paper. The basic approach for each of the three scenarios in the following graphs is to increase the flow of UDP traffic every 5 minutes and observe its effect on TCP traffic:


Scenario 1: No MPLS

UDP traffic flow is increased by 1.0 Mbps every 5.0 minutes. This causes a reduction in throughput for all TCP traffic flows.

Scenario 2: Partial MPLS

UDP traffic flow is increased by 1.0 Mbps every 5.0 minutes (same as Scenario 1). This causes a reduction in throughput for the TCP traffic flow sharing common resources with UDP traffic.

The isolated TCP flow remains unaffected by the increase in network congestion.

Scenario 3: Full MPLS Implementation with QoS Support

UDP traffic flow is increased by 1.0 Mbps every 5.0 minutes (same as the previous scenarios).

Separate LSPs for TCP, combined with QoS prioritization over common routes, prevent any degradation of TCP throughput.



Conclusion

MPLS provides significant advantages with regard to traffic engineering. Using simulation analysis, our scenarios demonstrate that network resource utilization can be optimized using MPLS-TE and QoS. It is important to incorporate traffic engineering aspects into a network to optimally utilize resources across the network, and to maintain the service level agreements for congestion-sensitive traffic flows while reducing the impact of congestion-insensitive flows.


References

[FF99] S. Floyd and K. Fall, "Promoting the Use of End-to-End Congestion Control in the Internet", IEEE/ACM Transactions on Networking, August 1999.

[RVC01] E. Rosen, A. Viswanathan, R. Callon, "Multiprotocol Label Switching Architecture", RFC 3031, January 2001.

[AMAOM99] D. Awduche, J. Malcolm, J. Agogbua, M. O'Dell, J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.

[BSJ00] P. Bhaniramka, W. Sun, R. Jain, "Quality of Service using Traffic Engineering over MPLS: An Analysis", September 2000.