DATA CENTER STANDARDS

University Information Technology Services
System Assurance Group

This document outlines Indiana University Data Center guidelines and standards, including
equipment installations, data center access, and
operational procedures.

Table of Contents

1. Requesting installation
   1.1 Space request
2. Acquisition guidelines
   2.1 Rack mounted devices
   2.2 Equipment specifications
   2.3 Power
   2.4 Receiving and placement
   2.5 Equipment arrival notification
3. Equipment installation
   3.1 Data Center staging area
   3.2 Cabinet design
   3.3 Cabinet/Rack
   3.4 UPS
   3.5 KVM solutions
   3.6 Hardware identification
   3.7 Disposal of refuse
   3.8 Combustible and flammable material
   3.9 Installation review
   3.10 Negotiations
   3.11 Replacement parts
4. Equipment removal
5. Operations procedures
   5.1 Data Center access
   5.2 Equipment registration
   5.3 Essential information
   5.4 Change Management
   5.5 Monitoring tools
   5.6 Security
   5.7 System backups
   5.8 Data Center tours
6. Data Center Networking
   6.1 Campus Network Availability
   6.2 Projects / RFPs
   6.3 Requesting Network Connectivity
   6.4 Firewall Security
   6.5 Rack Ethernet Switches
   6.6 Internal Rack Wiring
   6.7 Sub-Floor Copper/Fiber Wiring Requests
   6.8 Server Network Adapter Bridging
   6.9 Server Network Adapter Teaming/Trunk/Aggregate Links
   6.10 SAN Fiber Channel (FC) Switches
   6.11 Multicast
   6.12 Jumbo Frames
   6.13 Tagged VLANs
   6.14 IPv6
7. Data Center Network Firewalls
   7.1 Exception requests
   7.2 A general guideline
   7.3 Host based firewall
   7.4 Outbound traffic
   7.5 Service port 'Any (1-65535)'
   7.6 Source address 'Any (0.0.0.0)'
   7.7 Destination address 'Any (0.0.0.0)'
   7.8 Entire Subnets as destinations
   7.9 Individual DHCP addresses
   7.10 DNS entries must exist
   7.11 Source & Destination Host Groups
   7.12 Firewall security zones
   7.13 Rules are specific to a single member firewall zone
   7.14 Global address groups











Last updated
February 6, 2014

All UITS staff and external departmental staff who have equipment responsibilities in the Data Centers should accept the terms and responsibilities outlined in this document.


1. Requesting installation

1.1 Space request: Before submitting a proposal for new equipment, you'll need to begin initial discussions regarding machine room space. Once you submit a machine room space request form, the System Assurance Group (SAG) will schedule a meeting.
The SAG includes the following participants: the UITS Operations Manager, a
representative from UITS Facilities, a representative from Networks, and the system
administrator and/or system owner. The purpose of the SAG meeting is to address
environmental
issues (i.e., equipment BTU and power specifications), space, and floor
location. The floor location may be determined based on environmental data.


2. Acquisition Guidelines

2.1 Rack mounted devices: Ensure that you're purchasing rack-mounted equipment; everything needs to be either mounted in the rack or in its own proprietary cabinet (as opposed to free-standing). The Operations Manager must approve any exceptions.

2.2 Equipment specifications: Upon making your equipment selections, send the vendor specification sheet to sag-l@indiana.edu.

2.3 Power: Request power for each device from the electrical engineer. Power requests will be handled as follows:

IUPUI: Two rack-mounted cabinet distribution units (CDUs) will be installed in each standard rack. These CDUs utilize 208V IEC C13 and C19 outlets. Your hardware will need to operate at 208V and have the proper power cords. Installation of CDUs can take up to a month, so please request power as early as possible. For non-standard or proprietary racks, twist-lock receptacles shall be provided under the floor for connection of user-supplied CDUs.

IUB Data Center: In order to maintain Uptime Institute Tier III requirements, two rack-mounted CDUs fed from different power sources will be supplied in each rack. In order to assist with load balancing, you should purchase hardware with 208V IEC C13 or C19 plugs whenever possible. Please do not plug hardware into any receptacle without authorization of the electrical engineer.
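For planning purposes, it can help to translate vendor nameplate wattage into amps at 208V before submitting a power request. The sketch below is illustrative only: the device wattages and the 30A circuit with an 80% continuous-load allowance are assumptions, not figures from this standard, so confirm actual capacities with the electrical engineer.

    # Illustrative 208V power-planning arithmetic. The device wattages and the
    # assumed 30A circuit (derated to 80% for continuous load) are hypothetical;
    # confirm real limits with the electrical engineer before requesting power.
    DEVICES_W = {"server-1": 450, "server-2": 450, "storage-shelf": 600}  # nameplate watts
    VOLTS = 208
    USABLE_AMPS = 30 * 0.8  # assumed circuit size with a continuous-load allowance

    total_amps = sum(DEVICES_W.values()) / VOLTS
    print(f"Estimated draw: {total_amps:.1f} A at {VOLTS} V")

    # With dual-fed racks, plan for either CDU to carry the full load if one
    # power source is lost.
    if total_amps <= USABLE_AMPS:
        print("Load fits on a single assumed CDU circuit")
    else:
        print("Load exceeds the assumed circuit; review with the electrical engineer")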

2.4 Receiving and placement: When large, heavy equipment is involved, you'll need to make arrangements for receiving the order. It is also your responsibility to arrange for the equipment to be moved to the proper place in the machine room. In emergency situations, the Operations staff will act as the contact and receive equipment/parts after normal dock hours.

2.5 Equipment arrival notification: Receiving dock personnel will notify you of equipment arrival (unless otherwise arranged).


3. Equipment installation

3.1 Data Center staging area:

IUPUI: There is very little to no staging area outside of the machine room, so you'll need to use the machine room space to uncrate/unpack and prepare equipment for installation, and you'll need to move/install all items to their intended location within two weeks. If for some reason you cannot do so within this time period, please make other arrangements through UITS Facilities or Operations. (This will not guarantee other storage locations.)

IUB Data Center: A staging area for equipment is located just off the dock. This space is for uncrating/unpacking and preparing equipment for installation. This area is only for temporary storage; you have two weeks to move/install all items to their intended location. If for some reason you cannot do so within this time period, please make other arrangements through UITS Facilities or Operations. (This will not guarantee other storage locations.)

3.2 Cabinet design: Unless the server manufacturer specifically dictates the equipment must be housed in their proprietary cabinet, all servers will be installed in the standard cabinets provided by Operations. You'll need to submit proof of vendor proprietary cabinet requirements to SAG. Such cabinets should have front, back, and side panels. Unless someone is working in the unit, cabinet doors must remain closed.

3.3 Cabinet/Rack: Operations will label the cabinet (both front and rear) with the unique floor grid location and with the power circuit serving that unit. Equipment spacing within the cabinet/rack should allow appropriate airflow for proper cooling. Blanking panels will need to be installed to fill in all vacant rack space.

3.4 UPS: NO rack-mounted uninterruptible power supplies (UPSs) will be allowed. The Enterprise Data Centers will provide backup power.


3.5 KVM solutions: Rack-mounted monitors and keyboard trays are required only if you need KVM. Cabling between racks is not allowed.


3.6 Hardware identification: Supply Operations with the appropriate fully qualified server names, and they will label all equipment within the cabinets so that hardware is easily identifiable. You will need prior approval from the Operations Manager and Communications Office to display any signage.

3.7 Disposal of refuse: The person/team installing the device is responsible for the disposal of all refuse (cardboard, styrofoam, plastic, pallets, etc.). Please see that you remove any refuse, and if possible recycle any cardboard, from the IUPUI machine room and IUB Data Center staging area on a daily basis.

3.8 Combustible and flammable material: Do not leave combustible materials in the machine rooms; such materials include cardboard, wood, and plastic, as well as manuals and books. This also prohibits the use of wooden tables/shelves.

3.9 Review installation: The person requesting installation should arrange with the Operations Manager for a final review of equipment installation, to ensure that appropriate policies and procedures are implemented before the equipment becomes production ready.

3.10 Negotiations: Any negotiations and exceptions must be arranged between the system owners and the Operations Manager, and approved by the Director of Enterprise Infrastructure and the relevant director or officer of the area involved.

3.11 Replacement parts: All onsite replacement parts should be stored in a storage cabinet or on storage shelves in the storeroom (e.g., for use by service vendors such as IBM or Service Express). Make any necessary storage arrangements with Facilities or the Operations Manager.


4. Equipment removal

4.1 When a new system is replacing a pre-existing machine, the old system must be properly decommissioned via the Change Management process. Submit a request to CNI for the removal of firewall rules for machines that are decommissioned.

4.2 Removal of old hardware must be coordinated with the UITS Facilities Manager and follow all appropriate policy, standards, and guidelines relating to data destruction, wiring removal, and component disposition.

4.3 Please be sure to include all of the appropriate capital asset transfers.

4.4 The cost of removal is borne by the owner, and all equipment must be removed no later than 30 days after it has been decommissioned. Exceptions to the 30-day removal period require approval by Facilities or the Operations Managers.



5. Operations procedures

5.1 Data Center access: Due to the sensitive nature of the data and computing systems maintained within its facilities, security and access are important aspects of the OVPIT/UITS environment. In most cases, the university is contractually and legally obligated to limit access to only those who have IT responsibilities requiring frequent access.

Security cameras are located throughout OVPIT/UITS buildings. These cameras record footage for follow-up in the case of a security incident. They also serve as an effective deterrent and support the safe operation of the building.

UITS staff with responsibilities in the data center may gain access through an arrangement between the department manager and Operations. Requests should be made via the Special Access Request Form.

Persons other than full-time UITS staff are permitted in the data center only under one of the following conditions:

A. They are full-time staff of vendors providing services to UITS: contract consultants or service representatives may be authorized by prior arrangement with Operations.

B. They are full-time staff of Indiana University working on a system owned by an IU department and housed in the data center, under terms specified in a Co-location Agreement. Access will be granted in situations requiring hands-on system administration, not simply because a system is present on a machine in the data center.

C. They are full-time or contracted staff of a non-IU entity that owns a system housed in the data center, under terms specified in a co-location agreement. Again, access will be granted when hands-on system administration is necessary, not simply because a system is present on a machine in the data center.

D. They are escorted by a full-time UITS staff member as part of a tour of the facilities.

ID badges and access cards will be provided for those individuals who meet criterion A, B, or C. The ID badges must be worn and visible during visits to the data center. All staff who meet criteria A, B, or C are expected to sign in to the data center through Operations prior to entering the room, and to sign out upon exiting.

Biometric hand geometry scanners are installed at both Data Centers. A registration process will be scheduled and performed by the UITS Facilities or Operations staff. For additional information about biometric hand geometry scanners, review the internal KB document at https://kb.iu.edu/data/azzk.html (note: the internal document requires authentication).

The Vice President for Information Technology has developed a policy related to the handling, management, and disposition of "biometric" data used in the hand geometry scanner. It is stored in an internal KB document at https://kb.iu.edu/data/bapr.html (note: the internal document requires authentication).


5.2 Equipment registration: Equipment must be registered in the machine room inventory. Send an email to dcops@indiana.edu if you experience problems accessing the Data Center Inventory System.


5.3 Essential information: The system owner will enter the essential information into the Machine Room Inventory System and update that information if it changes. Essential information includes the following (a rough sketch of such a record appears after this list):

- System hardware: A complete description of the machine's hardware configuration, including vendor, model, on-board memory, secondary storage media vendor/type, etc.
- System software: A complete description of the machine's software configuration, including operating system vendor, version, patch level, and other major software components on the system.
- System function: A complete description of the machine's function (the service that it provides).
- System recovery: Accurate startup and shutdown procedures and special information relating to crash or other emergency recovery situations.
- On-call notification: Primary and secondary system contacts and schedules, plus contact information for the manager supporting the system. (Please provide prior to production date.)
- Vendor and maintenance contract: Vendor contact information, including information related to the maintenance/service contract and warranty. (The Operations Manager will assist in negotiating maintenance contracts on behalf of UITS, but budgets for ongoing maintenance should be managed in the individual units.)
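As a rough illustration only, the essential information above maps onto a simple record like the one sketched below; the field names and values are hypothetical and are not the actual Data Center Inventory System schema.

    # Sketch of an inventory record covering the essential information listed
    # above. Field names and values are illustrative, not the Inventory
    # System's real schema.
    inventory_record = {
        "hardware": {"vendor": "ExampleCo", "model": "X100", "memory_gb": 64,
                     "secondary_storage": "vendor SAN, Fibre Channel"},
        "software": {"os_vendor": "Example Linux", "version": "7.4",
                     "patch_level": "2014-01", "components": ["httpd", "PostgreSQL"]},
        "function": "Departmental web application server",
        "recovery": {"startup": "power on; services start automatically",
                     "shutdown": "stop services, then shut down the OS",
                     "emergency": "see departmental runbook"},
        "on_call": {"primary": "admin1@example.edu", "secondary": "admin2@example.edu",
                    "manager": "manager@example.edu"},
        "maintenance": {"vendor_contact": "support@example.com",
                        "contract": "24x7, next business day parts",
                        "warranty_expires": "2016-06-30"},
    }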

5.4 Change Management: The system manager or system administrator needs to participate in the Change Management process by representing the deployment of a new production system before implementation. At the start of fall and spring semesters, a change freeze of approximately two weeks takes place; dates are posted on the Change Management web site. Only emergency changes with the appropriate approvals should be implemented during change freezes.

5.5 Monitoring tools: Network monitoring tools will scan incoming machines as appropriate. Please supply the network address and any special considerations for the monitoring mode.

5.6 Security: All servers are expected to be administered in a secure manner using industry best practices and IT Policy 12, including employment of host-based firewalls and operation behind the machine room firewall. You are expected to properly review and formally acknowledge all relevant security policies and standards. For more information, please see the University Information Security Office Resources page.

5.7 System backups: Best practices should include the implementation of a proper system backup schedule. This includes the deployment of incremental, full, and archive program processes as needed. You must use proven, supported backup software and apply appropriate standards for off-site backup data for production systems.

Enterprise Infrastructure offers a data backup service as part of the Intelligent Infrastructure suite of services.


5.8 Data Center tours: All tours of the machine room must be scheduled with the Operations Manager.


6. Data Center Networking

System Classification:

- IU Enterprise Systems: Indiana University systems using IU IP addressing that reside in the enterprise environments at the IUPUI and IUB data centers. These systems use the standard top-of-rack (TOR) switching configuration.
- Non-IU Colocation Systems: Systems that are using only Indiana University data center space, power, and cooling. These external customers are not using IU IP address space and networking.
- IU Research Systems: Indiana University systems that are located primarily on the IU research networks. Physical placement lies within the areas designated as research environments at the IUPUI and IUB data centers.

6.1 Campus Network Availability: All enterprise racks come standard with one switch installed at the top of the cabinet to provide 48 ports of 10/100/1000 Mbps Ethernet connections into the data center switching infrastructure. 10G or additional 1G switches are available by request at an additional cost. All public and private Ethernet connections are to be provided by UITS unless special circumstances are reviewed and approved by Campus Network Engineering (CNE). This policy applies to Enterprise and any Research System using IU campus networks.

6.2 Projects / RFPs: If you are embarking on a new project, include Campus Network Engineering (CNE) in these discussions. They can help you with the networking requirements and ensure they are compatible with our existing network design and will achieve the performance you require. Contact noc@indiana.edu to schedule a meeting with these individuals.

6.3 Requesting Network Connectivity: Entire Data Center VLANs/subnets are allocated specifically to departments/teams. Data Center VLANs/subnets are designed not to be shared across multiple departments/teams. If your department does not have a Data Center VLAN/subnet yet, contact noc@indiana.edu to request one. Once you have a VLAN/subnet assigned, you must request network switch ports by emailing the request to netdata@indiana.edu.

Static IP addresses must be assigned to you by DNS Administration. They can be requested at the following email addresses:

IUPUI Data Center: dns-admin@iupui.edu
IU Bloomington Data Center: dns-admin@indiana.edu

This policy applies to Enterprise and any research system located in the enterprise environment using IU campus networks.

6.4 Firewall Security: This policy applies to Enterprise and any Research Systems using IU campus networks. All servers must be protected by the Data Center firewalls.

Request firewall rules via firewall.uits.iu.edu. The firewall request page includes additional information on firewall policies and standards. A full explanation of firewall policies and best practices can be found in Section 7 of this document.

Exceptions to this policy require your director's approval. This exception request must be submitted to Campus Network Engineering (CNE). Approvals will be stored and audited by CNI annually. Rogue non-firewalled hosts are subject to being shut down if they do not have a documented exception. Submit your request to noc@indiana.edu and an engineer will assist you.

6.5 Rack Ethernet Switches: All enterprise environment switches in the Data Center will be managed by CNI. System administrators shall not manage their own switches. This applies to any Ethernet switch located in a rack in the enterprise environment. This policy also includes private switches that are not designed to connect to the campus network switches.

Blade chassis switches are allowed in the enterprise environment in certain cases. If you intend to install a chassis server environment, please contact noc@indiana.edu to schedule a meeting with a campus networks Data Center engineer to discuss the chassis networking.

6.6 Internal Rack Wiring: Internal rack wiring should follow rack cabinet management standards. Cables should be neatly dressed. All cables should be properly labeled so they can be easily identified. Refer to TIA/EIA-942 Infrastructure Standard for Data Centers, section 5.11; a copy is available in the Operations Center. This applies to all users. Users in the Enterprise environment are not allowed to run cables outside of the racks.

6.7 Sub-Floor Copper/Fiber Wiring Requests: All data cabling under the floor, including SAN and Ethernet, must be installed by CNI in a cable tray. Any requests for sub-floor copper or fiber can be made via the telecom.iu.edu site. CNI can supply copper, single-mode fiber, and multi-mode fiber connectivity between locations. This applies to anyone with systems in the data center. The requestor is responsible for paying for these special requests. CNI can provide an estimate for any work requested.

6.8 Server Network Adapter Bridging: Server administrators are not permitted to use any form of software-based network adapter bridging. Attempts to bridge traffic between two server interfaces are subject to automated detection and shutdown. This policy applies to Enterprise and any Research System using IU campus networks.

6.9 Server Network Adapter Teaming/Trunk/Aggregate Links: Server administrators may use the teaming of network adapters to increase bandwidth and redundancy. CNI can also set up static LACP trunks on the switch when the aggregation of links is required.
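When coordinating a teamed or aggregated configuration with CNI, it can help to confirm what the host itself reports. The sketch below assumes a Linux host using the kernel bonding driver and a hypothetically named bond0 interface; other operating systems and teaming drivers expose this information differently.

    # Print the bonding mode and link status reported for a hypothetical bond0
    # interface (Linux kernel bonding driver). This is only a host-side sanity
    # check; switch-side LACP trunks are configured by CNI.
    from pathlib import Path

    bond_info = Path("/proc/net/bonding/bond0")
    if bond_info.exists():
        for line in bond_info.read_text().splitlines():
            if line.startswith(("Bonding Mode", "MII Status", "Slave Interface")):
                print(line)
    else:
        print("No bond0 found; the bonding driver is not configured on this host.")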

6.10 SAN Fiber Channel (FC) Switches: CNI does not provide the management or hardware for SAN switches. Administrators are allowed to install and manage their own SAN switches. CNI can provide fiber trunks outside of the racks as needed (policy 6.7).

6.11 Multicast:

Multicast is available by request only. Multicast functionality should
not be assumed to work until requested and tested with a network engineer.

6.12 Jumbo Frames: Jumbo frames are available by request. Jumbo frame functionality should not be assumed to work until requested and tested with a network engineer.

6.13 Tagged VLANs: CNI can provide VLAN tagging when needed. VLAN tagging (trunking) can be used to aggregate VLANs over a common physical link to environments such as VMware, Hyper-V, etc.

6.14 IPv6:

IPv6 is available in the data centers by request. IPv6 functionality should not
be assumed to work until requested and tested with a network engineer. Note: IPv6 is
already enabled in
most IU Bloomington and IUPUI academic buildings.


7. Data Center Network Firewalls



This section applies to:

o Enterprise Systems: Indiana University enterprise systems using IU IP addressing that reside in the enterprise environments at the IUPUI and IUB data centers. These systems utilize the standard top-of-rack (TOR) switching configuration.

o Firewall Security: All servers must be behind the Data Center firewalls. Provide firewall configuration requests via firewall.uits.iu.edu. Exceptions to this policy require your director's approval. This exception request must be submitted to Campus Network Engineering. Approvals will be stored and audited by CNI annually. Rogue non-firewalled hosts are subject to being shut down if they do not have a documented approval. Submit your request to noc@indiana.edu and an engineer will assist you.

7.1 Exception requests to any standards described within this section can be submitted via noc@indiana.edu. Include the nature of the exception request and any related policies defined in this document.

7.2 A general guideline is that all firewall policies should be as restrictive as possible. Recommended guidelines:

- Use the group "All_IU_Networks" as a source instead of 'ANY 0.0.0.0' when services do not need to extend outside of the IU network. This reduces the source scope from roughly 4,300,000,000 addresses down to just the 688,000 IU owned IP addresses, and it will also block a lot of external scanning attempts from outside the United States. CNI will maintain global address groups for common groups of addresses across the campuses. (The arithmetic is sketched just after this list.)
- Avoid using 'Any (1-65535)' as a destination port when possible.
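A minimal sketch of that scope reduction, using the figures quoted above (the full IPv4 space versus the 688,000 IU owned addresses):

    # Rough scope reduction from a source of 'Any (0.0.0.0)' to the
    # All_IU_Networks group, using the figures quoted in section 7.2.
    TOTAL_IPV4 = 2 ** 32      # about 4,300,000,000 possible source addresses
    IU_OWNED = 688_000        # IU owned addresses, per this standard

    print(f"'Any (0.0.0.0)' source scope: {TOTAL_IPV4:,} addresses")
    print(f"All_IU_Networks source scope: {IU_OWNED:,} addresses")
    print(f"Reduction factor: roughly {TOTAL_IPV4 / IU_OWNED:,.0f}x smaller")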

7.3 Host based firewall rules should be used on every host where applicable. Host based rules should be more restrictive than the data center firewall rules when possible.

7.4 Outbound traffic (traffic leaving a firewall zone) is allowed by default. You only need to create exceptions for traffic entering the data center from outside of a host's security zone.

7.5 Service port 'Any (1-65535)' as a destination port in a firewall policy is allowed when meeting all of the following criteria:

- Source addresses must all be IU owned IP space and must be as specific as possible. For example, global groups such as "All IU Statewide" cannot be used.
- The destination addresses must be defined as specific /32 IP addresses. This cannot be used when the destination is an entire subnet.

7.6 Source address 'Any (0.0.0.0)' in a firewall policy is allowed when meeting the following criteria:

- The destination addresses must be defined as specific /32 IP addresses. This cannot be used when the destination is an entire subnet.
- Destination ports must be specific. 'Any (1-65535)' cannot be used.

7.7 Destination address 'Any (0.0.0.0)' in a firewall policy is not allowed. The destination should be specific to a subnet or set of IP addresses which have been assigned to your department.

7.8 Entire Subnets as destinations are allowed when meeting all of the following criteria (accepted starting 1/13/14):

- Source addresses must all be IU owned IP space. This can be a source subnet as long as it is from IU-owned IP space, and it must be as specific as possible. For example, global groups such as "All IU Statewide" should not be used.
- Destination port cannot be 'Any (1-65535)' when attempting to use a subnet as a destination.
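The restrictions in sections 7.5 through 7.8 can be summarized in a short validation sketch. The function and field names below are hypothetical, and the IU-ownership requirement on sources is not checked; the authoritative path remains a request through firewall.uits.iu.edu.

    # Illustrative check of a proposed firewall rule against the criteria in
    # sections 7.5-7.8. Names are hypothetical; this is not the request tool.
    import ipaddress

    ANY_PORT = "Any (1-65535)"
    ANY_ADDR = "Any (0.0.0.0)"

    def review_rule(source, destination, port):
        """Return a list of problems; an empty list means the rule follows
        the guidelines restated in sections 7.5-7.8."""
        # 7.7: a destination of 'Any (0.0.0.0)' is never allowed.
        if destination in (ANY_ADDR, "0.0.0.0/0"):
            return ["7.7: destination 'Any (0.0.0.0)' is not allowed"]

        problems = []
        dest_is_host = ipaddress.ip_network(destination, strict=False).prefixlen == 32

        # 7.5 / 7.8: an 'Any' service port may only target specific /32 hosts,
        # never an entire subnet (IU-owned source requirement not checked here).
        if port == ANY_PORT and not dest_is_host:
            problems.append("7.5/7.8: port 'Any' cannot be combined with a subnet destination")

        # 7.6: an 'Any' source requires /32 destinations and a specific port.
        if source == ANY_ADDR and (not dest_is_host or port == ANY_PORT):
            problems.append("7.6: source 'Any' needs /32 destinations and a specific port")

        return problems

    # A permitted rule (cf. the HTTPS example in 7.13) and a rule with problems:
    print(review_rule("All_IU_Networks", "129.79.1.1/32", "HTTPS"))  # -> []
    print(review_rule(ANY_ADDR, "129.79.0.0/16", ANY_PORT))          # -> two problems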

7.9 Individual DHCP addresses cannot be used as sources. The dynamic nature of DHCP allows for the potential of unexpected access during future IP allocations. VPN space, global groups, entire subnets, or static addressing can be used instead.

7.10 DNS entries must exist when using specific IPs as destination addresses. DNS entries can be set up by contacting IU DNS Administration.

- IUPUI Data Center: dns-admin@iupui.edu
- IUB Data Center: dns-admin@indiana.edu
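One quick way to confirm that a destination address already has a DNS entry before submitting a rule request is a reverse lookup. The sketch below uses only the Python standard library; the address shown is the example host from section 7.13.

    # Check whether an IP address already resolves in DNS (reverse/PTR lookup)
    # before using it as a firewall destination. Standard library only.
    import socket

    def has_dns_entry(ip):
        """Return True if a reverse DNS lookup for the address succeeds."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            print(f"{ip} -> {hostname}")
            return True
        except (socket.herror, socket.gaierror):
            print(f"{ip} has no DNS entry; contact IU DNS Administration first")
            return False

    has_dns_entry("129.79.1.1")  # example destination from section 7.13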


7.11 Source & Destination Host Groups can be used to group together source and destination objects when meeting the following criteria:

- Destination host groups must be comprised of members with specific /32 IP addresses that your group owns inside of the data center. Destination host groups cannot be selected as a source in a policy. Destination host groups must be specific to a security zone.
- Source host groups can be both specific /32 IP addresses and entire subnets outside of the data center. Source host groups cannot be selected as a destination in a policy. The same source host group may be used within multiple security zones.

7.12 Firewall security zones are used to split up networks within the data center. The four zones and their characteristics are:

UITS Core Services
- Servers and systems managed by: UITS Staff Only
- Operating system root, administrator, or equivalent access: UITS Staff Only
- Operating system level interactive logins or virtual desktop sessions [1]: UITS Staff Only
- User-provided code [2]: No
- Examples: DNS, DHCP, NTP, ADS, CAS, Exchange, Lync, Oracle Databases, HRMS, FMS, OneStart, etc.

UITS Hosted Services
- Servers and systems managed by: UITS Staff Only
- Operating system root, administrator, or equivalent access: UITS Staff Only
- Operating system level interactive logins or virtual desktop sessions [1]: Any IU Staff
- User-provided code [2]: Yes
- Examples: WebServe, CHE, CVE

IU Community
- Servers and systems managed by: Any IU Staff
- Operating system root, administrator, or equivalent access: Any IU Staff
- Operating system level interactive logins or virtual desktop sessions [1]: Any IU Staff
- User-provided code [2]: Yes
- Examples: Non-UITS physical servers, Intelligent Infrastructure virtual servers provisioned for departments

Health Sciences
- Servers and systems managed by: Any IU Staff
- Operating system root, administrator, or equivalent access: Any IU Staff
- Operating system level interactive logins or virtual desktop sessions [1]: Any IU Staff
- User-provided code [2]: Yes
- Examples: Regenstrief, School of Medicine, Speech & Hearing, Optometry, Dentistry, IUB Health Center

[1] Interactive logins include technologies such as SSH, RDP, Citrix, VNC, Remote PowerShell, etc.
[2] User-provided code includes all executable code installed on the system with user-level access instead of by a designated system administrator, system developer, or enterprise deployment process. This may include binaries, shell scripts, interpreted languages such as Perl or Python, as well as web-based code such as PHP/ASP.

Each Firewall Security Zone exists at both the IUPUI and IUB Data Centers. The following list describes which zones are related, as well as what the relationship means:

- UITS Core Security Zone
  o Campus Members: IN-30-CORE, BL-30-CORE
- UITS Hosted Security Zone
  o Campus Members: IN-32-UITS, BL-32-UITS
- IU Community Security Zone
  o Everything else, including physical servers, VM servers provisioned by SAV that are not UITS managed/hosted services, and academic departments located within the data centers.
  o Campus Members: IN-33-COLO, BL-33-COLO
- Campus Members of the same security zone do not need firewall rules to communicate with each other. For example, a host in IN-30-CORE does not need any firewall rules to communicate with a host in BL-30-CORE, and vice versa. Rules are required to communicate with different zones.
- Training on these zones can be provided by Campus Network Engineering (CNE). Contact noc@indiana.edu to request a meeting with CNE regarding firewall security zone training.

7.13 Rules are specific to a single member firewall zone. One rule cannot cover inbound traffic for hosts in both BL-30-CORE and IN-30-CORE. If you had a server in IN-30-CORE and another server in BL-30-CORE that you wanted to allow HTTPS traffic to from the world, it would require two rules:

- Rule #1 within BL-30-CORE
  o Source: Any
  o Destination: 129.79.1.1/32
  o Port: HTTPS
- Rule #2 within IN-30-CORE
  o Source: Any
  o Destination: 134.68.1.1/32
  o Port: HTTPS

7.14 Global address groups are built-in groups that any department can use. These are commonly used source groups that are maintained by CNI. "All_IU_Networks" is an example of a global address group. This group contains every IU owned IP address.