Simulation-Based Analysis of MPLS Traffic Engineering





Model Research & Development

OPNET Technologies, Inc.

Bethesda, MD, 20814

E-mail: info@opnet.com



Abstract

It is a well-known behavior [FF99] that when congestion-sensitive (TCP) and congestion-insensitive (UDP) traffic share a common path, an increase in congestion-insensitive traffic adversely affects the performance of congestion-sensitive traffic (e.g., due to TCP's congestion control mechanism). This paper uses an example network scenario to illustrate the benefits of using MPLS traffic engineering [AMAOM99] and QoS in eliminating the undesirable effects of mixing congestion-insensitive flows with congestion-sensitive flows [BSJ00]. We use a UDP data flow to represent a congestion-insensitive traffic flow and TCP data flows to represent congestion-sensitive traffic flows. Based on the throughput achieved by the congestion-insensitive and congestion-sensitive flows when they share network resources, a comparative simulation analysis is provided for 1) a non-MPLS-enabled network, 2) a network with two LSPs, one for a pure congestion-sensitive flow and another combined one for a congestion-insensitive and a congestion-sensitive flow, and 3) a network with three LSPs, one for each traffic flow, with traffic differentiation using WFQ on the link carrying both congestion-insensitive and congestion-sensitive flows.


MPLS Overview

Multi-Protocol Label Switching (MPLS) [RVC01] is the latest technology in the evolution of routing and forwarding mechanisms for the core of the Internet. The "label" in MPLS is a short, fixed-length value carried in the packet's header to identify a Forwarding Equivalence Class (FEC). A FEC is a set of packets that are forwarded over the same path through a network. FECs can be created from any combination of source and destination IP addresses, transport protocol, port numbers, etc. Labels are assigned to incoming packets using a FEC-to-label mapping procedure at the edge routers. From that point on, only the labels dictate how the network will treat these packets, i.e., what route to use, what priority to assign, and so on. MPLS defines label-switched paths (LSPs), which are pre-configured to carry packets with specific labels. These LSPs can be used to forward specific packets through specific routes, thus facilitating traffic engineering.

Figure 1 is a simple illustration of how MPLS performs label switching. The ingress label edge router (LER) receives an IP datagram (an unlabeled packet) with a destination address of 10.1.9.7. The LER then maps the packet to a FEC, assigns a label (with a value of 8) to the packet, and forwards it to the next hop in the LSP. In the core of the network, the LSRs ignore the packet's network-layer header and simply forward the packet using a label-swapping algorithm. In this example, the label changes from 8 to 4 to 7.





Figure 1: Life cycle of a packet through an MPLS network
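The ingress classification and per-hop label swapping just described can be sketched as follows. This is our own minimal illustration, not OPNET's model code: the table contents and names are hypothetical, with the labels 8, 4, and 7 taken from the Figure 1 example.

```python
# Ingress LER: FEC-to-label mapping (here the FEC is simply a destination
# /24 prefix; real FECs may also use protocol, ports, etc.).
FEC_TABLE = {"10.1.9": 8}          # prefix -> label pushed at ingress

# Core LSRs: per-hop label-swap tables (incoming label -> outgoing label).
SWAP_TABLES = [{8: 4}, {4: 7}]     # two LSRs along the LSP

def forward(dst_ip):
    """Return the sequence of labels a packet carries along the LSP."""
    prefix = ".".join(dst_ip.split(".")[:3])
    label = FEC_TABLE[prefix]      # ingress LER classifies and pushes
    labels = [label]
    for table in SWAP_TABLES:      # each LSR swaps, ignoring the IP header
        label = table[label]
        labels.append(label)
    return labels                  # egress LER would pop the final label

print(forward("10.1.9.7"))         # -> [8, 4, 7]
```

The key point the sketch captures is that after ingress classification, no core node ever consults the IP header again; forwarding is a single exact-match table lookup on the label.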







Network Topology

We used OPNET Modeler 8.0 for our simulation analysis, using OPNET's MPLS models available in the OPNET Model Library. This section explains the network model used in this study. The main goal is to generate a mix of one-way (source to destination only) congestion-insensitive and congestion-sensitive traffic and study the network performance with and without MPLS traffic engineering (TE). In order to achieve this goal, we have prototyped a network with the following main components:



Traffic sources/destinations: One congestion-insensitive traffic source (named "UDP Source") generating a varying level of traffic (1.5 Mbps through 4.0 Mbps) and two congestion-sensitive traffic sources (named "TCP Source 1" and "TCP Source 2"), each generating 1.5 Mbps of traffic to be sent from the source nodes to the respective destination nodes. All end-stations are fully TCP/IP-enabled nodes; e.g., upon detecting congestion, the TCP sources will reduce their traffic input as dictated by TCP's congestion control mechanism.


Network topology: The edge network on the source and destination sides consists of an LER connected to the core. The core network consists of four LSRs connected using two parallel paths of 4.5 Mbps and 1.5 Mbps. The edge network is configured to operate at the OC-3 (155 Mbps) data rate so that it does not introduce any congestion. Remember that our main goal is to study the impact of overload in the core network.


All routers (LERs and LSRs) in the given baseline topology are MPLS-enabled. They have been configured such that their label mapping and switching algorithms are enabled only when LSPs are defined in the network. With no LSPs defined, these routers will use the routes advertised by the dynamic routing protocol running on their interfaces (OSPF in our example). Due to the higher bandwidth of the links between LSR1 and LSR3, and LSR3 and LSR4, the default route taken to get from the "Ingress LER" to the "Egress LER" will be from LSR1 to LSR3 to LSR4.
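Why OSPF concentrates all traffic on the 4.5 Mbps path can be seen from the common cost metric, cost = reference bandwidth / link bandwidth. The sketch below is our own illustration under stated assumptions: a 100 Mbps reference bandwidth and two links per core path, neither of which the paper specifies.

```python
REF_BW = 100e6                      # assumed 100 Mbps reference bandwidth

def ospf_cost(link_bw_bps):
    """Conventional OSPF interface cost: higher bandwidth, lower cost."""
    return REF_BW / link_bw_bps

# Candidate core paths (link counts assumed from the topology description).
fast_path = [4.5e6, 4.5e6]          # LSR1 -> LSR3 -> LSR4
slow_path = [1.5e6, 1.5e6]          # the parallel 1.5 Mbps path

fast_cost = sum(ospf_cost(bw) for bw in fast_path)
slow_cost = sum(ospf_cost(bw) for bw in slow_path)
print(fast_cost < slow_cost)        # -> True
```

Because shortest-path routing picks a single winner, the lower-cost path carries every flow while the 1.5 Mbps path sits idle; this is the inefficiency MPLS-TE addresses in the later scenarios.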


In terms of results analysis, we will focus on the throughput (in bits/sec) collected at the outgoing interfaces to the various destinations from the egress LER. Various other metrics that could also be analyzed for this network include application response time, number of TCP retransmissions, traffic dropped in the core of the network due to congestion, etc.






Figure 2: Baseline network topology





Figure 3: (Scenario 1) Baseline network with no MPLS-TE


Simulation Results/Analysis

This section explains the results and the corresponding analysis performed as part of this study. We modeled several different cases:




Scenario 1: The network model does not have any LSPs (i.e., there is no MPLS-TE). In this case, our goal is to obtain baseline results to study the effect of increasing congestion-insensitive traffic on congestion-sensitive traffic. We ran multiple simulations in which we increased the amount of traffic generated by the "UDP Source" (1.5 Mbps, 2.0 Mbps, 2.5 Mbps, 3.0 Mbps, 3.5 Mbps, 4.0 Mbps). Both TCP (congestion-sensitive) and UDP (congestion-insensitive) traffic flow from the "Ingress LER" to the "Egress LER" via LSR1, LSR3, and LSR4.


The amount of traffic successfully received by the destination nodes for various loads is shown in Figure 4.1 through Figure 4.4. Notice that when the amount of UDP traffic sent by the "UDP Source" is increased beyond the point where the combined TCP and UDP load exceeds the capacity of a core link (e.g., LSR1 to LSR3), the amount of traffic received by the TCP destinations starts decreasing. This is because the connection-oriented TCP flows decrease their traffic input when TCP at the source detects congestion. This allows UDP to send more traffic, thereby further reducing TCP throughput.


This case demonstrates that the congestion-insensitive traffic is not at all penalized for the congestion in the network. The congestion-sensitive traffic suffers to the extent that its throughput is almost reduced to zero when the UDP traffic input is equal to the capacity of the core network.
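The steady state this converges toward can be captured with back-of-the-envelope arithmetic. The sketch below is our own idealization, not the simulation model: it assumes UDP keeps its full offered load while the two TCP flows back off and split whatever capacity remains at the 4.5 Mbps bottleneck.

```python
LINK_CAPACITY = 4.5e6               # LSR1 -> LSR3 bottleneck link, bps
TCP_OFFERED = 1.5e6                 # offered load of each TCP source, bps

def approx_throughputs(udp_load_bps):
    """Idealized steady state: UDP is never throttled; the two TCP
    flows share whatever the bottleneck has left."""
    udp_out = min(udp_load_bps, LINK_CAPACITY)
    leftover = max(LINK_CAPACITY - udp_out, 0.0)
    tcp_each = min(TCP_OFFERED, leftover / 2)
    return udp_out, tcp_each

for udp in (1.5e6, 2.5e6, 3.5e6, 4.0e6):    # the loads used in the paper
    udp_out, tcp_each = approx_throughputs(udp)
    print(f"UDP {udp/1e6:.1f} Mbps -> each TCP flow ~{tcp_each/1e6:.2f} Mbps")
```

This first-order model reproduces the qualitative trend of Figures 4.1 through 4.4: TCP throughput is intact at 1.5 Mbps of UDP, shrinks as UDP grows, and approaches zero as UDP approaches the link capacity.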







Figure 4.1: All Flows generate 1.5 Mbps


Figure 4.2: UDP Flow increased to 2.5 Mbps









Figure 4.3: UDP Flow increased to 3.5 Mbps


Figure 4.4: UDP Flow increased to 4.0 Mbps





Scenario 2: It is clear from the results of the previous scenario that dynamic routing protocols such as OSPF do not attempt to utilize all the available network resources efficiently. MPLS remedies this deficiency by facilitating traffic engineering. In this scenario, we enhanced the baseline network to contain two LSPs, as shown in Figure 5:


One LSP is configured to carry traffic from "TCP Source 1" and "UDP Source", and is pinned to route through the high-bandwidth links in the core network. The second LSP is configured to solely carry the TCP flow from "TCP Source 2" through a different section of the core network. This traffic engineering design allows the traffic from "TCP Source 2" to flow through the core network without experiencing any loss of throughput due to the increase in UDP traffic. However, the traffic from "TCP Source 1" will still be subject to the behavior we observed in Scenario 1. The results for the different UDP traffic loads for this case are shown in Figure 6.1 through Figure 6.4.


This case reveals that MPLS-TE can be instrumental in steering traffic from congested resources of a network to uncongested areas. Apart from almost totally eliminating SLA violations for the newly traffic-engineered flows, MPLS-TE also helps improve the performance of the remaining traffic in congested areas of the network.
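The flow-to-LSP binding in this scenario amounts to a small lookup at the ingress LER. The sketch below is purely illustrative: the LSP names, node names, and the exact hop sequence of the alternate path are our assumptions, since the paper only states that the second LSP uses a different section of the core.

```python
# Explicit routes pin each LSP to a path, overriding OSPF's single best path.
LSPS = {
    "LSP_A": ["Ingress LER", "LSR1", "LSR3", "LSR4", "Egress LER"],  # 4.5 Mbps
    "LSP_B": ["Ingress LER", "LSR1", "LSR2", "LSR4", "Egress LER"],  # alternate
}

# Per-flow classification at the ingress LER (the FEC-to-LSP binding).
FLOW_TO_LSP = {
    "TCP Source 1": "LSP_A",
    "UDP Source":   "LSP_A",   # still shares the high-bandwidth path
    "TCP Source 2": "LSP_B",   # fully isolated from the UDP flow
}

def route(flow):
    """Hop sequence a flow's packets take through the core."""
    return LSPS[FLOW_TO_LSP[flow]]

print(route("TCP Source 2"))
```

The table makes the Scenario 2 outcome easy to read off: "TCP Source 2" never shares a link with "UDP Source", so its throughput is immune to UDP load, while "TCP Source 1" still contends on LSP_A.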





Figure 5: (Scenario 2) Baseline network with two LSPs
















Figure 6.1: All Flows generate 1.5 Mbps


Figure 6.2: UDP Flow increased to 2.5 Mbps










Figure 6.3: UDP Flow increased to 3.5 Mbps


Figure 6.4: UDP Flow increased to 4.0 Mbps


Notice that the throughput for traffic generated by "TCP Source 2" remains unaffected by the increase in UDP traffic flow, whereas the traffic generated by "TCP Source 1" reduces as UDP traffic increases. An additional advantage of traffic engineering the "TCP Source 2" traffic flow is that the traffic from "TCP Source 1" is now able to access more network resources, and suffers only at very high UDP traffic flows.
















Figure 7: (Scenario 3) Baseline network with three LSPs






Scenario 3: The network from Scenario 2 is enhanced to contain a separate LSP for each traffic flow. Individual LSPs are configured to carry the three different traffic flows. There are two separate LSPs for the "TCP Source 1" and "UDP Source" traffic flows. The paths that these LSPs take are pinned through the high-bandwidth links in the core network; therefore, they still share common network resources.


In order to treat the TCP and UDP flows in an isolated fashion (even though they share common resources), Weighted Fair Queuing (WFQ) has been enabled on the output interface of "LSR 1" (i.e., the interface forwarding packets on the link connecting "LSR 1" and "LSR 3"). The WFQ configuration is such that higher priority is given to TCP traffic over UDP traffic in a 10:1 weighted ratio.
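What the 10:1 weighting buys can be sketched as a two-class bandwidth split. This is our simplification of WFQ's steady-state behavior, not OPNET's per-packet scheduler: each class is guaranteed bandwidth in proportion to its weight, and unused TCP bandwidth is handed to UDP (work conservation; the symmetric hand-back to TCP is omitted for brevity).

```python
LINK_BW = 4.5e6                     # bps, the LSR1 -> LSR3 link, which in
WEIGHTS = {"tcp": 10, "udp": 1}     # Scenario 3 carries only TCP 1 and UDP

def wfq_alloc(tcp_demand_bps, udp_demand_bps):
    """Steady-state split: TCP is served up to its 10/11 guarantee,
    then UDP takes whatever the link has left."""
    total_w = WEIGHTS["tcp"] + WEIGHTS["udp"]
    tcp = min(tcp_demand_bps, LINK_BW * WEIGHTS["tcp"] / total_w)
    udp = min(udp_demand_bps, LINK_BW - tcp)
    return tcp, udp

# TCP's 1.5 Mbps demand sits far below its ~4.09 Mbps guarantee, so it is
# fully served; UDP is clamped once the combined load exceeds the link.
print(wfq_alloc(1.5e6, 4.0e6))      # -> (1500000.0, 3000000.0)
```

This matches the result reported below: "TCP Source 1" keeps its full 1.5 Mbps at every UDP load tested, and it is the UDP flow that absorbs the shortfall once 1.5 Mbps + UDP exceeds 4.5 Mbps.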


This traffic engineering design, integrated with QoS support, allows traffic from "TCP Source 1" to flow through the core network without experiencing loss of throughput due to the increase in UDP traffic. The traffic from "TCP Source 2" flows through the core network unaffected by any other traffic because it is configured to go over uncongested portions of the network.


Notice that the traffic from "TCP Source 1" is not subject to any degradation in performance because we have prioritized this flow in our example scenario. The UDP traffic starts exhibiting performance degradation as soon as the combined TCP and UDP load exceeds the available link bandwidth. The results for the different UDP traffic loads for this scenario are shown in Figure 8.1 through Figure 8.4.


This case demonstrates that by incorporating QoS into our test network, we are able to significantly improve overall network performance. We were able to maintain the throughput for congestion-sensitive traffic.


Notice that throughput for the traffic generated by "TCP Source 1" remains unaffected by an increase in UDP traffic flow, due to the isolation of congestion-sensitive traffic from other traffic using MPLS LSPs.
















Figure 8.1: All Flows generate 1.5 Mbps


Figure 8.2: UDP Flow increased to 2.5 Mbps











Figure 8.3: UDP Flow increased to 3.5 Mbps

Figure 8.4: UDP Flow increased to 4.0 Mbps





Summary

As a summary of the experiments conducted as part of this paper, we present the following graphs. These graphs show a brief comparison of all the cases highlighted in this paper. The basic approach for each of the three scenarios in the following graphs is to increase the flow of UDP traffic every 5 minutes and observe its effect on TCP traffic:



















Scenario 1: No MPLS-TE

UDP traffic flow is increased by 1.0 Mbps every 5.0 minutes. This causes a reduction in throughput for all TCP traffic flows.


Scenario 2: Partial MPLS-TE Implementation

UDP traffic flow is increased by 1.0 Mbps every 5.0 minutes (same as Scenario 1). This causes a reduction in throughput for the TCP traffic flow sharing common resources with UDP traffic. The isolated TCP flow remains unaffected by the increase in network congestion.


Scenario 3: Full MPLS-TE Implementation with QoS Support

UDP traffic flow is increased by 1.0 Mbps every 5.0 minutes (same as Scenario 1). Separate LSPs for TCP, combined with QoS prioritization over common routes, result in no degradation of TCP throughput.




Conclusion

MPLS provides significant advantages with regard to traffic engineering. Using simulation analysis, our scenarios demonstrate that network resource utilization can be optimized using MPLS-TE and QoS. It is important to incorporate "traffic engineering" aspects into a network to optimally utilize resources across the network, and to respect the service level agreements for congestion-sensitive traffic flows while reducing the impact of congestion-insensitive traffic.



References

[FF99] S. Floyd and K. Fall, "Promoting the Use of End-to-End Congestion Control in the Internet", IEEE/ACM Transactions on Networking, August 1999.

[RVC01] E. Rosen, A. Viswanathan, R. Callon, "Multiprotocol Label Switching Architecture", RFC 3031, January 2001.

[AMAOM99] D. Awduche, J. Malcolm, J. Agogbua, M. O'Dell, J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.

[BSJ00] P. Bhaniramka, W. Sun, R. Jain, "Quality of Service using Traffic Engineering over MPLS: An Analysis", September 2000.