Meso-Scale Deployment of OpenFlow:


1. Clemson University

This project will deploy an OpenFlow testbed on the Clemson University campus and connect it with wireless mesh access points and mobile terminals. This trial will conduct OpenFlow experimentation focused on OpenFlow-enabled network operation solutions as a precursor to deployment into Clemson research and production networks.



Toroki LS4810 switch

2. Georgia Tech

This project will deploy an OpenFlow testbed in the CS building on the Georgia Tech campus. This project will conduct OpenFlow experimentation focused on an OpenFlow-enabled next-generation network admission control system as a precursor to deployment into the campus residential network. The testbed will replicate a campus deployment required to handle thousands of end hosts.


Previously, the Georgia Tech network admission control system was a VLAN-based approach, but VLAN-based access control has several disadvantages. For example, the number of VLANs is limited (the 12-bit 802.1Q VLAN ID allows at most 4,094 usable VLANs), and the control overhead of assigning private and public IPs for authorization is large.

The Georgia Tech OpenFlow deployment uses OpenFlow switches for campus access control. All hosts' traffic goes through OpenFlow switches to a centralized controller for authorization. If a host is authorized, it can access the network; otherwise, it is quarantined.
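To make the pattern concrete, here is a minimal controller sketch of this authorize-or-quarantine logic, written with the Ryu framework and OpenFlow 1.3. It is an illustration, not the Georgia Tech code: the AUTHORIZED_MACS whitelist stands in for whatever backend (e.g., a web portal or RADIUS server) performs the real authorization check.

    # Minimal sketch of centralized admission control over OpenFlow
    # (Ryu, OpenFlow 1.3). Illustrative only: AUTHORIZED_MACS stands in
    # for a real authorization backend.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3
    from ryu.lib.packet import packet, ethernet

    AUTHORIZED_MACS = {'00:11:22:33:44:55'}   # hypothetical whitelist

    class AdmissionControl(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def _packet_in(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            eth = packet.Packet(msg.data).get_protocols(ethernet.ethernet)[0]

            if eth.src in AUTHORIZED_MACS:
                # Authorized host: forward its traffic (flooding for brevity).
                actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            else:
                # Unauthorized host: empty action list = drop, i.e.
                # quarantine the host at its edge switch.
                actions = []

            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, priority=10, idle_timeout=300,
                match=parser.OFPMatch(eth_src=eth.src), instructions=inst))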

3. Indiana University

This project will deploy an OpenFlow testbed between two campuses. This trial will conduct OpenFlow experimentation focused on OpenFlow-enabled dynamic provisioning and distributed monitoring as a precursor to deployment into the Indiana University campus production network. It will also examine and propose ways for network operators to manage and support their networks using OpenFlow-enabled devices.


HP ProCurve 5406/6600


Distributed monitoring: a cSamp (Coordinated Sampling) implementation, a measurement framework for network-wide flow monitoring. The idea comes from CMU; IU wants to implement it on OpenFlow.

Current flow monitoring uses uniform packet sampling based on a sampling probability, and current routers record flow measurements completely independently of each other, leading to redundant flow measurements and inefficient use of router resources. cSamp therefore argues that a centralized system that coordinates monitoring across different routers can significantly increase the flow-monitoring capabilities of a network.

Three improvements: use flow sampling instead of uniform packet sampling; use hash-based packet selection to eliminate duplicate measurements in the network; and provide a framework for distributing responsibilities across routers to achieve network-wide monitoring goals while respecting router resource constraints.
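As a toy illustration of the hash-based selection (my sketch, not the IU or CMU code): each router hashes a flow's 5-tuple to a point in [0, 1) and records the flow only if that point falls inside the hash range the central coordinator assigned to it, so disjoint ranges guarantee no duplicate measurements.

    # Toy sketch of cSamp-style hash-based flow sampling (illustrative only).
    # A central coordinator assigns each router a disjoint slice [lo, hi) of
    # the hash space; a router records a flow iff the flow's 5-tuple hashes
    # into its slice, so no two routers duplicate a measurement.
    import hashlib

    def flow_hash(five_tuple):
        """Map a 5-tuple deterministically to a value in [0, 1)."""
        key = '|'.join(map(str, five_tuple)).encode()
        return int.from_bytes(hashlib.sha1(key).digest()[:8], 'big') / 2**64

    def should_sample(five_tuple, lo, hi):
        """Sample iff the flow lands in this router's assigned hash range."""
        return lo <= flow_hash(five_tuple) < hi

    # Coordinator splits one path's flows across two routers:
    flow = ('10.0.0.1', '10.0.1.2', 6, 43210, 80)  # src, dst, proto, sport, dport
    r1 = should_sample(flow, 0.0, 0.4)   # router 1 owns [0.0, 0.4)
    r2 = should_sample(flow, 0.4, 1.0)   # router 2 owns [0.4, 1.0)
    assert r1 != r2                      # exactly one router records the flow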


Dynamic provisioning: deploy inter-campus networking between IU and IUPUI, investigating how OpenFlow will interoperate with existing inter- and intra-campus provisioning tools such as Sherpa (used on NLR), GMPLS, and tools such as the Internet2 ION service.


IPTV: worked on a design with the IU Campus Network Architecture group for using OpenFlow as a mechanism for fine-grained control of IPTV-related traffic flows. OpenFlow would be used for Layer 2 switching decisions to avoid issues of multicast flooding and non-optimal topologies.
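As an illustration of the idea (my sketch, not IU's design), the rule below forwards one IPTV multicast group only to the ports with known subscribers instead of flooding it, using the same Ryu/OpenFlow 1.3 conventions as the earlier sketch; the group address and port numbers are invented.

    # Sketch: constrain one IPTV multicast group to its subscriber ports
    # instead of flooding (Ryu, OpenFlow 1.3). Addresses and port numbers
    # are invented for illustration.
    def install_iptv_rule(dp, group_ip, subscriber_ports):
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800,    # IPv4
                                ipv4_dst=group_ip)  # e.g. '239.1.1.1'
        actions = [parser.OFPActionOutput(p) for p in subscriber_ports]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))

    # e.g. install_iptv_rule(datapath, '239.1.1.1', [3, 7, 12])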


Data capture: collaboration with Minaxi Gupta (IU) on designs for using OpenFlow as a mechanism to capture cybersecurity data sets on campus networks.

4. Rutgers

This project will deploy OpenFlow switches between wireless research networks and Internet2 to enable researcher programmability and virtualization in the wired network connecting the Rutgers wireless assets to the GENI backbone. OpenFlow switches in the ORBIT indoor mesh testbed will also enhance performance and flexibility and provide support for grid virtualization.


NEC IP 8800

5. UW-Madison

This project will deploy an OpenFlow testbed throughout the University of Wisconsin campus. This project will develop OpenFlow extensions focused on the creation of flexible, heterogeneous slices, the development of a generic load balancer, and a captive portal for authentication. This work is a precursor to the deployment of OpenFlow into campus production networks.

We have designed the OpenSAFE system and an accompanying language, ALARMS, for scalable and flexible network monitoring. OpenSAFE and ALARMS are based on OpenFlow. The system can perform a variety of network-monitoring functions at line rate. It also allows flexible interposition of middleboxes as well as fine-grained load balancing across a diverse collection of hardware. This system was demoed at GEC7, where it garnered a lot of interest from other GENI participants.


HP


OpenSAFE:

Problem: how to route the traffic for network analysis in a robust, high-performance manner that does not impact normal network traffic?


They focus on network monitoring via inspection of copies of traffic at interesting points in the network.


Current limitations of “copies of traffic”: 1. span ports are typically directed into a single computer running some sort of IDS (Intrusion Detection System); 2. the number of span ports on network equipment is often extremely limited.




OpenSAFE can direct spanned network traffic in arbitrary ways. It can handle any number of network inputs and manage the traffic in such a way that it can be used by many services while filtering packets at line rate.


Design: the input is a connection from the span path at the chosen network aggregation point to an OpenFlow switch port. Some number of filters are in use, attached to various OpenFlow switch ports. Finally, output is directed into some number of sinks.
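To make the input -> filter -> sink pipeline concrete, here is a sketch (mine, not the OpenSAFE/ALARMS implementation) of roughly what such a path could compile down to as flow rules on a single switch, in the same Ryu/OpenFlow 1.3 style as the earlier sketches; all port numbers are hypothetical.

    # Sketch of an OpenSAFE-style span -> filter -> sink path on one switch
    # (Ryu, OpenFlow 1.3). Port numbers are hypothetical.
    SPAN_IN    = 1   # port receiving the spanned (copied) traffic
    FILTER_OUT = 2   # port leading into a filter middlebox
    FILTER_IN  = 3   # port where the filter re-injects accepted traffic
    SINK       = 4   # port leading to an IDS or capture host

    def install_path(dp):
        ofp, parser = dp.ofproto, dp.ofproto_parser

        def rule(in_port, out_port):
            inst = [parser.OFPInstructionActions(
                ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionOutput(out_port)])]
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, priority=50,
                match=parser.OFPMatch(in_port=in_port), instructions=inst))

        rule(SPAN_IN, FILTER_OUT)   # span input   -> filter
        rule(FILTER_IN, SINK)       # filter output -> sink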

6. University of Washington

This project will deploy a hybrid OpenFlow and RouteBricks testbed within the computer science and engineering department. This project will develop building blocks allowing researchers to investigate the placement of middlebox functionality in enterprise networks. This work is a precursor to the deployment of OpenFlow and RouteBricks into campus production networks.

Our goal is to build a testbed and develop building blocks that would allow researchers to investigate the optimal placement of middlebox functionality in today's enterprise networks. We have begun the deployment of OpenFlow-enabled HP ProCurve switches in the computer science building here at the University of Washington. The deployment is initially being used by a small number of graduate students in systems and networking for both research and day-to-day use. Eventually it will be phased in as the primary network for a larger set of users. After phase-in, we will continue to operate a research network so that researchers can test their ideas before validating them on the production system.


HP ProCurve 6600

7. Princeton University

This project will deploy an OpenFlow testbed within the CS self-operated IT department. This trial will conduct OpenFlow experimentation focused on OpenFlow-enabled management of experimental and production services as a precursor to deployment into their department's production network.


HP ProCurve switches



No documents are available for this deployment.

8. Stanford University

This project will deploy an OpenFlow testbed throughout the Stanford University campus. This project will develop OpenFlow extensions focused on the creation of flexible, heterogeneous slices, the development of a generic load balancer, and a captive portal for authentication. This work is a precursor to the deployment of OpenFlow into campus production networks.

Eight OpenFlow switches (HP, NEC, Quanta, and Toroki)

Stanford is the developer of OpenFlow itself! This includes a) the OpenFlow reference implementation, b) FlowVisor, c) the SNAC controller, and d) debugging/monitoring tools.

9. Internet2 OpenFlow

Internet2 will implement Stage 0 of the Internet2 OpenFlow test network by creating a five-node OpenFlow test network using small enterprise switches (HP ProCurve 5406 switches proposed, pending evaluation). The initial design has 10GE WAN interfaces, but that is subject to negotiation of the MOU Internet2 has with GENI for wide-area connectivity. Stage 0 comprises the appropriate OpenFlow controller(s), interfaces to connect several enterprise OpenFlow campuses, and interfaces to connect other projects such as ProtoGENI and SPP. The Stage 0 network will allow members and non-members of Internet2 to connect to GENI OpenFlow services.

Internet2 will create OpenFlow nodes in five cities: Atlanta, Chicago, Houston, New York, and Salt Lake City. Internet2 will also explore integrating the OpenFlow nodes with OpenFlow deployments on ProtoGENI nodes. In this second, complementary-to-ProtoGENI scenario, Internet2 will create OpenFlow nodes in five cities: Atlanta, Chicago, Los Angeles, New York, and Seattle.



10. National LambdaRail OpenFlow

Deploy and operate OpenFlow-enabled HP ProCurve 6600 switches at 5 NLR POPs and interconnect NLR's FrameNet network to the GENI OpenFlow-enabled backbone, permitting members and non-members of NLR to connect to GENI OpenFlow services.

OpenFlow at LSU

This project will deploy an OpenFlow testbed in the Frey Building on the LSU campus.

The purpose of the OpenFlow deployment at LSU is to help both experiment traffic and production traffic make better use of our existing high-speed network research testbed.



We have already established an up-to-10Gbps high-speed network research testbed, CRON (Cyberinfrastructure of Reconfigurable Optical Network), at LSU.


For the experiment traffic, CRON has already established a 10Gbps connection to other ProtoGENI sites on the GENI backbone through the Internet2 ION service. According to the final plan of GENI, the national backbones will be built out with OpenFlow. As a ProtoGENI site, LSU is responsible for providing OpenFlow capabilities on its ProtoGENI testbed, so that in the future all the ProtoGENI sites will have significant commonality in the technologies employed and significant overlap in the sets of campuses.



CRON is also connected to the Louisiana Optical Network Initiative (LONI) over 10Gbps links. Inside the LONI network there are several Louisiana colleges and universities, such as Southern University, the University of New Orleans, and the University of Louisiana at Lafayette, which will use the high-speed computing resources of CRON at LSU. These universities will be allowed to use the CRON testbed for their high-speed network research experiments, the same as other ProtoGENI sites.
Moreover, there will be some high-speed production traffic requirements between these universities, for instance, remote visualization applications. This high-speed production traffic may want to use the high-speed hardware and software in the CRON testbed. In that case, OpenFlow will be used to dynamically map the production traffic to the specific high-speed hardware or software it requires. For example, if one university wants to use the hardware emulator to create a specific delay or rate limit, the OpenFlow controller can simply forward the traffic to the hardware emulator, which they have already configured via software inside CRON. The controller can distinguish different requirements through the TCP ports or the VLAN ID in the flow table.
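A minimal sketch of that mapping (my illustration, not CRON's actual controller), again in the Ryu/OpenFlow 1.3 style of the earlier sketches: match production traffic by TCP destination port or by VLAN ID and forward it to the switch port where the pre-configured hardware emulator sits. Every number below is hypothetical.

    # Sketch: map production traffic to a CRON hardware emulator by TCP
    # port or VLAN ID (Ryu, OpenFlow 1.3). All numbers are hypothetical.
    EMULATOR_PORT = 5        # switch port facing the hardware emulator

    def steer_by_tcp_port(dp, tcp_dst):
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,  # IPv4 + TCP
                                tcp_dst=tcp_dst)
        _install(dp, match)

    def steer_by_vlan(dp, vlan_id):
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # OFPVID_PRESENT marks the VLAN tag as present in OF1.3 matches.
        match = parser.OFPMatch(vlan_vid=(ofp.OFPVID_PRESENT | vlan_id))
        _install(dp, match)

    def _install(dp, match):
        ofp, parser = dp.ofproto, dp.ofproto_parser
        inst = [parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionOutput(EMULATOR_PORT)])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                      match=match, instructions=inst))

    # e.g. steer_by_tcp_port(datapath, 5001)  or  steer_by_vlan(datapath, 300)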


Besides, instead of just using VLANs to distinguish all the different experiment and production traffic, we will choose to use OpenFlow. One reason is that the number of VLANs is limited. Another reason is that OpenFlow is more flexible than fixed VLANs at choosing among all the infrastructure inside CRON, and OpenFlow can choose infrastructure easily since all the infrastructure has uniform private IPs.