Drivers for New Data Center
• Out of power and cooling in Komas data center
• Increased demand for computing and data storage capacity
• Inadequate existing facilities on campus to accommodate expansion
• Buying co-location space too expensive
• Diverse data center needs (enterprise and high performance computing)
MCI Building Purchase
• Purchased MCI building (875 S. West Temple)
• Prime facility for a data center build-out
• 74,000 square feet to allow campus data center consolidation and future phased build-out
• Seismic retrofit
• Space to support paper medical record storage
HP/EYP Analysis
Scope Change
• 7 main data centers all have capacity issues
• Needed strategy for consolidation and growth capacity
• Needed strategy to reduce data center operating costs
• Cloud computing and cloud storage strategy
• Colocation
– UHEA
– Weber State
– Others
Budget
• Design and construction – $21M
• Network and racks – $2M
• 1/3 the cost of similar data centers with less capability
Southwest Corner View
Aerial View
Master Plan
EQUIPMENT YARD
Project Team
• Earl Lewis – Project Manager
• Jim Livingston – Sponsor
• Brent Elieson – UIT
• Steve Corbato – UIT/Cyberinfrastructure
• Joe Breen – CHPC
• Bryan Petersen – UEN
• Glen Cameron – UIT
• Mike Ekstrom – UIT
• Tim Richardson – UIT/ACS
• Caprice Post – UIT/Architecture
• Dave Huth – UIT/Architecture
• Andrew Reich – UIT/Architecture
• Lisa Kuhn – UIT/Finance
• Steve Scott – UIT/Security
• William Holt – Hospital Financial Management Analysis
• Bill Billingsly – CD&C PM
• Jim Russell – State DFCM
• Department of Sustainability
• VCBO
• SmithGroup
• AlphaTech
• Oakland Construction
• Heery
Schedule
• Design complete
• Construction documents complete
• Bids due first week in March
• Construction done by mid-January
• Commissioning complete in March
• Move-in to start in April, to be completed by end of August
West Pavilion
• DR and hot site for Tier 1 applications
• Part of the move strategy to the new data center
• Slowly populate with network gear and other infrastructure over the coming year
• Populate with Tier 1 applications as we move to the new DC
Move Planning
• RFP for project management firm – completed
• Planning to start the first of April ’11
• Configuration Management Database as the foundation for move planning
West Temple Data Center
You’re Cooler Than You Knew Tour
Table of Contents
• Reason for project / Facility
• Project scope
• Building Structure
• Administration Workflow
• Fiber Entrances
• Metro Optical Ring
• MPOE/Meet Me rooms
• White Space Cable Plant, MDA & Disk
• Mechanical Design
• Electrical Design
• PUE & Savings Implications
• Tiers
• POD/Rack Design, ISO Base
• Procedures
• Physical Security
• Power Densities
• Resiliency Strategy: Tier 3, WPDC & Richfield
• Co-location Capabilities
Reason for the Data Center…
Supply & Demand: no power on campus
[Chart: campus power demand (kW) vs. time]
Project Scope
• 10,000 kW facility in 3 phases
• $4.5M bought 74,000 square feet
• $23M will build out 4,000 kW (2,400 kW pluggable)
• High efficiency (low PUE)
• Innovation
• Low operating cost
• Better workflows and a more usable center
• LEED Gold
875 S. West Temple
• 74,000 square feet, 1 city block!
• Hospital seismic importance factor 1.5 x 2/3 maximum credible earthquake force = “Essential Facility”
– Compared to typical retrofitted buildings, this structure will perform better
– Lateral design criteria meet the IBC definition of “Essential Facility”
– 18” walls: 8” steel-reinforced shotcrete and 10” masonry
– New roof, foundations and steel frames
Henry, J.R. (2006). “Section 1604.5: Importance Factors.” Structural Engineering and Design, Stagnito Media, Deerfield, IL, http://www.gostructural.com/magazine-article-gostructural_com-september-2006-section_1604_5__importance_factors-4714.html, last accessed 5-14-10. 2006 IBC, FEMA 427.
Workflow
Building Entrances
Metro Optical Ring Project
• Connection sites: Park, EBC, WTDC MPOE1 & MPOE2, Level3 POP, BYU, USU, NOAA, Salt Palace, Kearns Bldg and others
• Funding sources: BTOP/NTIA, trade agreements, UU
• Optronics: DWDM vendor TBD
• Dark fiber IRUs, conduit IRUs & UU assets
• Speed (80 x 10 Gig)
• Redundancy: circuits, pathways, chassis, CPU, provider
• Self-provisioning of circuits (Ethernet & Fibre Channel, or even PTP, FR, etc.)
• Cost savings: 1/20th x (# of circuits ~20) = 1/400th
DWDM: Dense Wavelength Division Multiplexing works by combining and transmitting multiple signals simultaneously at different wavelengths on the same fiber.
IRU: Indefeasible Right of Use is the effective long-term lease (temporary ownership) of a portion of the capacity of an international cable.
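As a simple illustration of what the “80 x 10 Gig” bullet implies for aggregate ring capacity, a short sketch follows; the channel count and per-channel rate come from the bullet above, and the calculation (and names) are mine.

```python
# Simple arithmetic on the "Speed (80 x 10 Gig)" bullet above: the aggregate
# capacity of the DWDM ring if all channels are lit. The calculation is mine.
channels = 80                 # DWDM wavelengths
gbps_per_channel = 10

print(f"Aggregate ring capacity: {channels * gbps_per_channel} Gb/s")   # 800 Gb/s
```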
Metro Optical Ring – Data Center Path
MPOE / Meet Me Rooms
Purpose:
• Building entrance aggregation
• Carrier rooms
• Inter-carrier IP peering
• Metro Optical Ring (self-provisioned circuits)
• Data center colocation inter-occupant connectivity
• MDA (MDF) distribution
Features:
• >66 feet apart (Tier 3 compliance)
• Fire zone separation
• 2nd story suspended mezzanine
• Sized for all 3 data center phases
• Loading bay for forklift deliveries
White Space Cable Plant
• 7 server cabinets cabled back to 1 network rack
• 17 network racks trunked back to 2 MDAs (see the sketch below)
• SAN and eNet core located in MDA
• 32 disk cabinets located next to MDA
• Ladder rack and fiber guide
• 100G-capable fiber plant
• Category 6A copper plant
• Equipment-specific fan-out fiber arrays & MTPs
• Fiber patch cord slack accommodated!
• All cabinets are 32” wide!
• 60A PDUs will fit inside the cabinet, no air clogging
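A back-of-the-envelope sketch of the fan-out described above follows. The full-subscription assumption (every network rack serving 7 server cabinets) is mine, not stated in the deck, and the deck does not give a total cabinet count for this phase.

```python
# Back-of-the-envelope sketch of the white-space cable-plant fan-out above.
# Full subscription of every network rack is assumed for illustration only.
cabinets_per_network_rack = 7
network_racks = 17
mdas = 2

server_cabinets = cabinets_per_network_rack * network_racks
print(f"Server cabinets at full subscription: {server_cabinets}")                  # 119
print(f"Network racks trunked to each MDA (even split): {network_racks / mdas}")   # 8.5
```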
Mechanicals
• 2008 ASHRAE TC 9.9 recommendation: 64.4–80.6°F, 40–60% RH @ Class 1 & 2
• Ambient air-side economizer 85% of the year, including hot air remixing
• Indirect evaporative cooling towers 14% of the year (water-side economizer)
• DX chiller cooling ~1% of the year and backup for water source
• Atomizer humidification provides direct evaporative cooling benefit
• Chilled water loop/heat exchangers for water-cooled racks & equipment
• Fan wall, filters, coils, service corridor, security corridor, 14’ plenum, equipment-driven air flows, hot/cold pressure differential
• VFDs at near-unity PF, following the affinity laws: flow is proportional to speed, pressure to the square of speed, and horsepower to the cube of speed. If an application only needs 80 percent flow, the fan or pump runs at 80 percent of rated speed and draws only about 50 percent of rated power; in other words, reducing speed by 20 percent cuts power roughly in half (see the sketch below).
• Racks on concrete
• Hot aisle containment
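The 80-percent-flow example in the VFD bullet follows directly from the affinity laws; a minimal Python sketch of that arithmetic (function and variable names are mine):

```python
# Minimal sketch of the fan/pump affinity laws cited in the VFD bullet above.
# Relative to rated values: flow ~ speed, pressure ~ speed**2, power ~ speed**3.
def affinity(speed_fraction: float) -> dict:
    """Return flow, pressure and power as fractions of rated, for a given speed fraction."""
    return {
        "flow": speed_fraction,
        "pressure": speed_fraction ** 2,
        "power": speed_fraction ** 3,
    }

print(affinity(0.8))
# {'flow': 0.8, 'pressure': 0.64..., 'power': 0.512...}
# 0.8**3 = 0.512, i.e. ~51% of rated power at 80% flow, matching the
# "80 percent flow needs about 50 percent power" rule of thumb in the slide.
```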
Electrical Design
• Meter power 4,000 kW
• Available white space power 2,400 kW
• Common data center power topologies:
1. Block Redundant: Catcher System
2. Distributed Redundant
3. 2N Redundant
4. Parallel Redundant
5. Isolated Redundant
• N+1 at the block level for:
– Transformer
– Generator
– ATS
– UPS
– STS
– PDU
• 2N power distribution system (OHPB, CPDU)
• All CPDUs will fit in cabinets properly!
• 220V+
• All devices and 1-lines PQM’d
PUE and Savings
• PUE = power usage effectiveness (meter load / equipment load), Category 3 metering
• PUE benchmarks
– Industry average: 2
– Komas DC: 1.65
– NREL Golden, Colorado: 1.06
– Yahoo Lake Michigan: 1.08
– Google DCs: 1.2 **
– UU West Temple Data Center: ~1.2
• T3 electrical bill $ savings anticipated
– $250K to $700K annual RMP savings depending on load, anticipated rate hikes & realized PUE
– RMP efficiency incentive of several hundred thousand dollars, a one-time opportunity
• Owner maintenance and change savings
– CRACs (dozens of maintenance contracts eliminated)
– Branch circuit builds (>400 avoided)
• Recent data center benchmarks
– Codename Sequoia: Tier 2 @ 3x our cost (power normalized), 4/1/2011 substantial completion ***
– Yahoo Lake Michigan: Tier 2 @ 1.5x our costs (power normalized) *
PUE Savings Model
Load × waste rate = wasted kW    Hours/yr    FY10 rate     FY10 inefficiency cost    FY16 rate     FY16 inefficiency cost
1,000 × 0.2 = 200                8,760       $0.06/kWh     $105,120                  $0.12/kWh     $210,240
1,000 × 0.7 = 700                8,760       $0.06/kWh     $367,920                  $0.12/kWh     $735,840
Annual difference (PUE 1.2 vs. 1.7)                        $262,800                                $525,600
1,400 × 0.2 = 280                8,760       $0.06/kWh     $147,168                  $0.12/kWh     $294,336
1,400 × 0.7 = 980                8,760       $0.06/kWh     $515,088                  $0.12/kWh     $1,030,176
Annual difference (PUE 1.2 vs. 1.7)                        $367,920                                $735,840
Parameters: PUE 1.2 vs. 1.7; IT load 1,000 kW and 1,400 kW; rate $0.06/kWh (FY10) vs. $0.12/kWh (FY16)
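The dollar figures in the model follow directly from wasted power × hours × rate, with the waste rate taken as PUE minus 1. A minimal Python sketch reproducing the table (function and variable names are mine, not the deck's):

```python
# Minimal sketch reproducing the PUE savings model above. "load_kw" is the IT
# equipment load and the waste rate is (PUE - 1), so wasted facility power is
# load_kw * (pue - 1).
HOURS_PER_YEAR = 8760

def annual_waste_cost(load_kw: float, pue: float, rate_per_kwh: float) -> float:
    wasted_kw = load_kw * (pue - 1)          # overhead power beyond the IT load
    return wasted_kw * HOURS_PER_YEAR * rate_per_kwh

for load_kw in (1000, 1400):
    for rate in (0.06, 0.12):                # FY10 and FY16 rates from the table
        at_1_2 = annual_waste_cost(load_kw, 1.2, rate)
        at_1_7 = annual_waste_cost(load_kw, 1.7, rate)
        print(f"{load_kw} kW @ ${rate}/kWh: "
              f"${at_1_2:,.0f} vs ${at_1_7:,.0f}, difference ${at_1_7 - at_1_2:,.0f}")
# e.g. 1,000 kW @ $0.06/kWh -> $105,120 vs $367,920, difference $262,800
#      1,400 kW @ $0.12/kWh -> $294,336 vs $1,030,176, difference $735,840
```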
Tier Level Definitions
Tier 1
• Single non-redundant distribution path serving the IT equipment
• Non-redundant capacity components
• Basic site infrastructure guaranteeing 99.671% availability
Tier 2
• Fulfills all Tier 1 requirements
• Redundant site infrastructure capacity components guaranteeing 99.741% availability
Tier 3
• Fulfills all Tier 1 & Tier 2 requirements
• Multiple independent distribution paths serving the IT equipment
• All IT equipment must be dual-powered and fully compatible with the topology of a site's architecture
• Concurrently maintainable site infrastructure guaranteeing 99.982% availability
Tier 4
• Fulfills all Tier 1, Tier 2 and Tier 3 requirements
• All cooling equipment is independently dual-powered, including chillers and Heating, Ventilating and Air Conditioning (HVAC) systems
• Fault-tolerant site infrastructure with electrical power storage and distribution facilities guaranteeing 99.995% availability
*Uptime Institute
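To make the availability guarantees concrete, the sketch below converts each percentage into allowed downtime per year; this is simple arithmetic on the figures above, not part of the Uptime Institute definitions.

```python
# Rough arithmetic only: the yearly downtime implied by each tier's availability
# guarantee above (8,760 hours in a non-leap year).
HOURS_PER_YEAR = 8760
availability = {1: 99.671, 2: 99.741, 3: 99.982, 4: 99.995}   # percent

for tier, pct in availability.items():
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"Tier {tier}: {pct}% availability -> ~{downtime_hours:.1f} hours of downtime/year")
# Tier 1 ~28.8 h, Tier 2 ~22.7 h, Tier 3 ~1.6 h, Tier 4 ~0.4 h per year
```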
UIT POD/Rack Design
• Disk on seismic ISO Base platform
• Subzero hot aisle containment
• Uniform cabinets: ~48U, 32”x42”
• 7-to-1 center-of-row network racks
• FTUs in the middle of the network rack
• Console server & KVM whole-DC solution
• 24-port 6A patch panels
• 0U CPDUs on just one side
• Cabling on the other side
• Cable fingers on the rear
• Cabinet-top cable seals
• Mount on concrete floor
• Unistrut bracing
Physical Security
• Barricades, cameras, card readers, mantraps
• DHS standards and consulting
• 24 x 7 presence
• 72 hours of generator fuel
• Water independence
• Physical access standards
Data Center Procedures
• Effective data centers have ~200 documented procedures
– Maintenance
– Repair
– 3rd party access
– Change control
– Testing
– Equipment installations & standards
– Receiving
– Build room use
– System documentation
– Training
– Standards
– DCIM/BMS
– …
Consistent outcomes require consistent behaviors.
“Infrastructure and operations (I&O) leaders must now go beyond performance management of IT equipment and begin to manage the entire data center infrastructure. Tools for data center infrastructure management (DCIM) will provide detailed monitoring and measurement of data center performance, utilization and energy consumption, supporting more-efficient, cost-effective and greener environments.” – David J. Cappuccio, Gartner
Power Densities
~240 rack locations in the 1,400 kW T3 portion of the DC
• 1,400 kW / 6 kW per rack = 233 racks (DL380 x 20)
• 1,400 kW / 8 kW per rack = 176 racks
• 1,400 kW / 15 kW per rack = 95 racks
• 1,400 kW / 20 kW per rack = 70 racks (HP C-class x 4)
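The rack counts above are straightforward divisions of the 1,400 kW budget by per-rack density; a minimal sketch follows. Exact floor division gives 175 and 93 for the 8 kW and 15 kW cases, so the slide's 176 and 95 presumably reflect rounding or planning assumptions not stated here.

```python
# Minimal sketch of the power-density math above: how many racks a 1,400 kW
# white-space budget supports at several per-rack densities.
BUDGET_KW = 1400

for kw_per_rack in (6, 8, 15, 20):
    racks = BUDGET_KW // kw_per_rack
    print(f"{kw_per_rack:>2} kW/rack -> {racks} racks")
# 6 -> 233, 8 -> 175, 15 -> 93, 20 -> 70
# (the slide lists 233, 176, 95 and 70 for the same densities)
```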
UIT Data Center Resiliency Strategy
1. Tier 3 & essential facility data center
2. West Pavilion essential facility & survivable data center
3. Richfield remote DR data center
4. PSI nuclear-resilient data storage facility
Potential Co-Location Tenants
• UIT
• UEN
• CHPC
• LAW?
• Development?
• Other Depts
• Weber State
• CENIC.org
• UHEA
DC Products
• Cloud File Storage
• Cloud Network Attached Storage
• Cloud Server Hosting
• Co-location
• DC Centralization
• Inter-Carrier IP Peering
Contact Anita Sjoblom for more info or visit orderit.utah.edu
Questions??
ASHRAE: http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf