

University of the Free State

Redesigning the campus network for
innovation, VoIP, multicast and IPv6

Presented by Louis Marais (Deputy Director ICTS, UFS)
and Andrew Alston (Owner, Alston Networks)



November 2012

Remote Presentation

The Original Network

What we had:

Approximately 19000 Wired Ports

450+ CCTV Cameras

500+ Switches

400+ Wireless Access Points

/16 Worth of Legacy IP Space

The Original Network

What we didn’t have:

No Routing Protocols

No VLAN segmentation

No working multicast

No IPv6

No true IP Plan (or a way to implement it!)

The Original Network

What we REALLY had:

ONE Broadcast Domain

Overloaded ARP Tables

Switch CPU utilizations in excess of 90%

Regular broadcast storms

Spanning Tree / Topology Loop issues

A disaster waiting to happen….

Making a plan….

Step 1

Figure out what the objectives are

Step 2

Figure out how to get there

Step 3

Obtain the required prerequisites

Step 4

Plan the deployment

Step 5

Implement (carefully)

Step 6

Resolve any unforeseen issues

Step 7

Skills Transfer

The Objectives….

Create a design that is scalable

Cater for the deployment of IPv6

Cater for the deployment of Multicast Technologies

Cater for IP Telephony (VoIP)

Allow for sufficient flexibility to support innovation

Get rid of the single broadcast domain

Don’t over-complicate: it has to be manageable

Avoid a design that will require huge amounts of downtime during implementation

Ensure that NAT doesn’t exist on campus: it breaks too many things

The design we chose…

Build on service provider principles

Segment the network

Core / Distribution / Edge

OSPF for loopback distribution purposes

We wanted IS-IS but it wasn’t supported

BGP for the majority of the table (IPv4)

OSPFv3 for all IPv6

The vendor didn’t support v6 under BGP

No VLAN spanning between distributions

PIM for multicast routing

The vendor didn’t support MSDP

The problems and prerequisites

Rollout of routing protocols, multicast and IPv6 could not begin before the
network was properly segmented

Segmenting the network would normally result in large amounts of downtime due
to subnet mask issues.

The switches required additional licenses that had to be purchased to support
routing protocols

Why the segmentation problem?

Devices in the various domains would have the same subnet mask

Everything in each broadcast domain would still believe it was “directly connected” to everything in other broadcast domains

Either insert more specific routes on every device (impractical), or change the subnet masks on each device as we went (impractical)
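The on-link trap can be illustrated with a short sketch using Python’s ipaddress module (all addresses here are invented for illustration): a host still carrying the old /16 mask considers every address inside that /16 directly connected, so it ARPs locally instead of forwarding to its gateway.

```python
import ipaddress

# A host still configured with the old flat /16 mask (addresses are made up).
host = ipaddress.ip_interface("10.1.3.25/16")

# A printer renumbered behind a router, but still inside the legacy /16.
printer = ipaddress.ip_address("10.1.200.9")

# The host's on-link test: anything inside its own network is ARPed for
# directly rather than sent to the default gateway -- so the printer becomes
# unreachable the moment it moves behind a router.
print(printer in host.network)   # True

# An address from genuinely new space is correctly seen as off-link.
print(ipaddress.ip_address("41.0.0.1") in host.network)   # False
```

This is exactly why renumbering into fresh address space sidesteps the mask problem: new-space destinations are off-link for legacy hosts, so their traffic goes via the gateway as normal.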

The Solution?

Get more IP space: we needed it anyway for expansion

Renumber the newly structured segments into the new IP space as we went

Leave the original broadcast domain intact, simply reducing its size as we went

The planning exercise

Performed a comprehensive IP planning exercise and figured out just how much
IP space we needed, both today and going forward

Applied for and were granted an additional /15 worth of IP space from AfriNIC

Put the new IP space through our IP plan and created a segmentation plan
for the network

Network divided into Core, 13 distributions, and the edge

Work out the size of the aggregate required per distribution

Devise a VLAN plan that could be duplicated across the distributions in
a uniform fashion

Further divide the aggregate routed to each distribution into VLAN-based
“connected” subnets.
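The carve-up described above can be sketched with Python’s standard ipaddress module. The block and prefix lengths below are hypothetical stand-ins (the real allocation was a /15, but the deck does not give the per-distribution sizes):

```python
import ipaddress

# A private-space stand-in for the new allocation (the real one was a /15).
allocation = ipaddress.ip_network("10.0.0.0/15")

# One aggregate per distribution: a /19 per distribution yields 16 aggregates,
# enough for 13 distributions with spares (actual sizes may have differed).
aggregates = list(allocation.subnets(new_prefix=19))
print(len(aggregates))       # 16

# Each aggregate is then divided into per-VLAN "connected" subnets.
vlan_subnets = list(aggregates[0].subnets(new_prefix=24))
print(len(vlan_subnets))     # 32 x /24 available in this distribution
print(vlan_subnets[0])       # 10.0.0.0/24
```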

Further planning details

Each building on campus (100+ of them) received 6 VLANs:

Wired, Wireless, CCTV, Access Control, Building Management, VoIP

The distributions carried all VLANs for a number of buildings (in some
cases up to 200 VLANs)

A decision was taken NOT to route the subnets, ONLY the aggregate to each
distribution; the distribution’s connected routes take care of the subnets.

Only place the core-to-distribution point-to-points and the core/distribution
loopbacks into OSPF; let BGP carry the rest.

Connected route redistribution into the dynamic routing protocols was
avoided; the aggregates made subnet-level routes unnecessary.
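A uniform per-building VLAN plan like the one above lends itself to being generated rather than hand-allocated. This sketch assumes a hypothetical numbering scheme (the base ID and stride are invented; the deck does not give the real VLAN IDs):

```python
# The six per-building roles from the plan; IDs below are illustrative only.
ROLES = ["Wired", "Wireless", "CCTV", "AccessControl", "BuildingMgmt", "VoIP"]

def building_vlans(building_index, base=100, stride=10):
    """Map each role to a VLAN ID for one building in a distribution."""
    start = base + building_index * stride
    return {role: start + i for i, role in enumerate(ROLES)}

# The same function, applied per distribution, reproduces an identical,
# predictable scheme everywhere -- which is what keeps 200 VLANs manageable.
print(building_vlans(0))           # {'Wired': 100, 'Wireless': 101, ...}
print(building_vlans(3)["VoIP"])   # 135
```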

IP Planning

Management Tools

The Implementation…

Since VLAN 1 was already going to each distribution, leave it alone at the
start… we didn’t want downtime

Create a second VLAN for each distribution, and trunk it between the core
and the distribution.

Apply a /30 to each of these new point to point VLANs.

Create the required building VLANs on the distribution and assign the new
gateway IPs to each VLAN. All of this in the newly allocated AfriNIC space.

Assign the IPv6 addressing to the new VLANs at the same time.

Set up the OSPF / OSPFv3 / BGP sessions between the core and the distribution
and ensure routing is working back to the core.

Trunk the building VLANs down to the edge switches from the distributions,
again leaving VLAN 1 still trunked to the edge

NOTE: At this point there was still ZERO effect on production traffic; we
had NO downtime doing this.
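On the wire, the overlay steps above translate into configuration along these lines. The syntax is generic vendor CLI and every number (VLAN ID, addresses, OSPF area, AS number) is invented for illustration; the deck does not give the actual values:

```
! Core side of one core<->distribution link: new point-to-point VLAN, /30
vlan 901
interface vlan 901
 ip address 192.0.2.1 255.255.255.252
 ipv6 address 2001:db8:0:901::1/64
!
! Only loopbacks and point-to-points go into OSPF; BGP carries the rest
router ospf 1
 network 192.0.2.0 0.0.0.3 area 0
!
router bgp 64512
 neighbor 192.0.2.2 remote-as 64512
```

Because all of this runs alongside VLAN 1, it can be verified end to end before any production port is moved.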

Implementation Part 2…

Enable PIM Sparse Mode between the core and distributions on the
point-to-point VLANs.

Ensure the DHCP relays were set up correctly on the distribution switches

Create the DHCP Scopes for the new IPv4 space.

We decided to use RA (EUI-64) for IPv6, since DHCPv6 is not as well
supported on all edge devices.

NOTE: We still had not touched production traffic, all of this was done as an
“overlay” network.
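The interface identifier an RA/EUI-64 host derives is mechanical: flip the universal/local bit of the MAC address and splice ff:fe into the middle. A small sketch (the MAC below is made up):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier a SLAAC host builds from its MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                 # flip the universal/local bit
    octets[3:3] = [0xFF, 0xFE]        # splice ff:fe between OUI and NIC halves
    return ":".join(f"{octets[i] << 8 | octets[i + 1]:x}" for i in range(0, 8, 2))

# A made-up MAC; combined with the VLAN's /64 prefix this gives the address.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))   # 21a:2bff:fe3c:4d5e
```

One operational side effect worth knowing when tracing users: the identifier embeds the MAC, so an EUI-64 address maps straight back to a device.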

The moment of truth…

Come back in the dead of night… consume lots of red bull… take a deep
breath… and then….

Tag the edge ports into the correct new VLANs for their respective
buildings / uses


Force a switch port flap (shut down the port, wait 30 seconds, bring the
port back up)

The edge device sees a disconnect and reconnect, requests a new DHCP lease
on link-up, and lands in the correct subnet. It also gets an IPv6 address
through RA.

Total downtime on any network segment: 30 seconds.
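Per port, the cut-over amounts to something like the following (generic vendor CLI; the interface name and VLAN ID are invented for illustration):

```
interface GigabitEthernet1/0/7
 switchport access vlan 112   ! retag into the building's new VLAN
 shutdown                     ! force the attached device to see link-down
 ! ...wait roughly 30 seconds...
 no shutdown                  ! on link-up the device re-DHCPs into the new subnet
```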

The secondary moment of truth…

Once we knew that edge devices were getting IPv4 and IPv6 addresses in the
correct subnets:

Send some techs around to renumber statically assigned devices (printers
and such) into the new space, moving them into the correct VLANs as we went

Enable IPv6 on the proxy servers…

Enable IPv6 on the DNS servers…

Watch the traffic swing instantly to approx. 60% IPv6

The end result….

Smaller Broadcast domains

Less broadcast traffic

an increase in network performance of *70%*

Switch CPU loads down from 90% to 3% as a result of smaller ARP tables

A more manageable network: we know where IP addresses are in use, so
tracing things is easy

A more flexible network: work on one segment doesn’t affect the whole network

Far less spanning tree / looping / L2 topology issues

IPv6 to every edge device: yes, it can be done

The end result (IPv6 Stats)

[Traffic graph, Mon 22 Oct 2012 to Fri 26 Oct 2012: IPv6 share of campus traffic after enablement]
The costs / time involved

Planning took 3 weeks: the large majority of the time spent

Implementation was done largely at night over a 2 week period

No one on the network saw more than 30 seconds of downtime

The total cost to achieve a 70% performance increase and ensure the
network was future proof was less than $50,000.

The money spent included:

Routing licenses for the core and distribution switches (the
majority of the cost)

Consultancy fees

Vast quantities of Red Bull and pizza over the 2-week implementation period.

Next Steps… Where to from here

We want a more robust multicast network: this means changing the way we do
multicast.

In order to cater for multi-homing, better traffic analysis, and more
flexibility, we want a full BGP table in the core

We want to reduce the oversubscription between edge and distribution, and
between distribution and core

MPLS implementation across the border/core/distribution to allow for
cross-connects between network segments and external entities.

To this end… a core and distribution upgrade is planned for early 2013.

The new core / distribution design

Replace the core switches with proper routers

Replace the distribution layer with 10G capable L3 switches that support
the required protocols

Implement a proper border router that is full table BGP capable

Implement MSDP from border to core to distribution.

Uplink each distribution with redundant links to separate core routers in a
virtual chassis configuration

Ensure that each distribution has at least 20G back to the core (and
potentially more)

Link the cores with redundant 100G technology.

Rollout MPLS across the border/core/distribution.

QoS where appropriate for the VoIP implementation.


Lessons Learnt

Structuring a network for IPv6 / Multicast / The Future does not:

Require loads of downtime

Require vast amounts of money

Structuring a network for IPv6 / Multicast / The Future does require:

Some vision as to where you want to go

VERY careful planning

Careful documentation


Any questions?

Our Contact Details:

Louis Marais:

Andrew Alston: