Energy Saving in Cloud Data Centers
Presentation of the results of EU FP7 FIT4Green
Corentin Dupont
cdupont@create-net.org
Presentation
• FIT4Green seeks energy-saving policies for data centres, enhancing their effect inside a federation through an aggressive strategy for reducing the energy consumption of ICT.
• We aim at reducing costs for companies and strengthening their competitive position.
• FIT4Green needs to be DC-framework agnostic: demonstrated in Cloud computing, Traditional computing, Supercomputing and Networking.
Table of Contents
Introduction & overview
Requirements
Optimizer design
SLA Constraints
Power Objective Model
Heuristics
Experiments on Cloud Test-bed
Scalability Evaluation
Conclusion & Future work
Common sense: saving energy at home
• By shifting activities that do not interfere with each other to the same room and switching off unused electrical consumers in the other rooms.
• By moving activities to rooms with more energy-efficient light bulbs or other more efficient electrical devices that meet the needs.
Energy Saving Strategies
• VMs are consolidated and unused servers are turned off.
• VMs are allocated to “more efficient” servers/data centres: the incremental “cost” is considered.
Energy Savings in a Federation
[Figure: a federation of data centres with different efficiencies, with PUE values ranging from 1.2 to 2.4.]
FIT4Green takes the differences in PUE into account and, where applicable, reallocates VMs to the Cloud DC with the better efficiency. In addition, a larger resource pool provides larger optimization opportunities (smaller local resource buffers). Moreover, CUE differences drive emission optimizations.
Federation-Enabled Optimizations
“Federation” policies seek to:
• Relocate VMs to capitalize on geographical characteristics like:
  - Season & temperature differences
  - Time-zone differences – ‘Follow the Sun’
  - Energy source differences
• Relocate VMs to capitalize on data centre characteristics like:
  - Equipment & infrastructure differences
  - PUE & CUE differences
  - Cogeneration options
All strategies are ranked through their Energy KPIs.
Cloud Load
[Figure: cloud load over time (Load vs. Time).]
Tetris vs.
• In High Performance Computing DCs, the order of job execution is optimized.
• In Cloud Computing DCs, the allocation of new workload is considered.
• In Traditional Computing DCs, workload is reallocated based on energy/CO2 efficiency.
High Level View
[Architecture diagram: the DC Federation Model connects the Optimization, Reconfiguration and Monitoring components with the Power Calculator and the Data Centre Monitoring and Automation Framework, exchanging power consumption predictions, updated information and a list of suggested actions.]
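A minimal sketch of this loop in Python is shown below; the component interfaces and method names (monitoring, power_calculator, optimizer, reconfiguration) are hypothetical placeholders for the blocks in the diagram, not the actual FIT4Green API.

```python
# Minimal sketch of one optimization round through the components above.
# All interfaces and method names are hypothetical placeholders.

def optimization_round(monitoring, power_calculator, optimizer, reconfiguration):
    # Monitoring refreshes the DC federation model with updated information.
    federation_model = monitoring.updated_information()
    # The optimizer uses power consumption predictions to rank candidate
    # states and returns a list of suggested actions.
    actions = optimizer.suggest_actions(
        federation_model, predict=power_calculator.predict)
    # The reconfiguration component applies the actions through the
    # data centre monitoring and automation framework.
    reconfiguration.apply(actions)
    return actions
```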
Energy Model
[Energy model diagram: ICT resources comprising Server, Storage, Network Topology, Framework Capabilities, Software Applications and a Queue.]
E_{CPU}(t) = E_{CPU\_idle}(t) + \sum_{i=1}^{n_{core}} P_i(t), \qquad P_i(t) \propto V_i^{2}\, f_i\, CL_i(t)
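As an illustration only, a term of this shape can be evaluated with a few lines of Python; the proportionality constant, units and variable names below are assumptions, not the calibrated FIT4Green model.

```python
# Illustrative evaluation of the CPU energy term above:
# an idle part plus a dynamic part proportional to V_i^2 * f_i * CL_i(t) per core.
# k is an assumed proportionality constant; units are arbitrary here.

def cpu_power(cores, p_idle, k=1.0):
    """cores: iterable of (voltage, frequency, load) tuples, one per core."""
    return p_idle + k * sum(v ** 2 * f * load for v, f, load in cores)

def cpu_energy(samples, p_idle, dt=1.0, k=1.0):
    """Sum power over time steps; samples is a list of per-step core states."""
    return sum(cpu_power(cores, p_idle, k) * dt for cores in samples)
```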
Optimizer
All strategies are ranked through their Energy KPIs
[Optimizer diagram: the metamodel (topology, SLAs, policies) is translated into automatic rules and constraints that feed a Constraint Programming optimization engine with an energy/emission goal, which outputs actions.]
Requirements
• Abstracting out the constraints
• Flexibility, extensibility
• Deep exploration of the search space
Framework Design
SLA CONSTRAINTS
SLA constraints flow
SLA constraints examples
| Category | Constraint | Approach | LoC |
| --- | --- | --- | --- |
| Hardware | HDD | Choco + ext. Entropy | 121 + (25) |
| Hardware | CPUCores | Entropy (‘fence’) | 0 + (25) |
| Hardware | CPUFreq | Entropy (‘fence’) | 0 + (25) |
| Hardware | RAM | Choco + ext. Entropy | 123 + (25) |
| Hardware | GPUCores | Entropy (‘fence’) | 0 + (25) |
| Hardware | GPUFreq | Entropy (‘fence’) | 0 + (47) |
| Hardware | RAIDLevel | Entropy (‘fence’) | 0 + (47) |
| QoS | MaxCPULoad | Choco + ext. Entropy | 90 + (25) |
| QoS | MaxVLoadPerCore | Choco + ext. Entropy | 109 + (25) |
| QoS | MaxVCPUPerCore | Choco + ext. Entropy | 124 + (25) |
| QoS | Bandwidth | Entropy (‘fence’) | 0 + (49) |
| QoS | MaxVMperServer | Entropy (‘capacity’) | 0 + (25) |
| Availability | PlannedOutages | Choco + ext. Entropy | Future Work |
| Availability | Availability | Choco + ext. Entropy | Future Work |
| Additional Metrics | DedicatedServer | Entropy (‘capacity’) | 0 + (25) |
| Additional Metrics | Access | Entropy (‘fence’) | 0 + (25) |
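The table maps each SLA clause to either an existing Entropy constraint or a Choco extension. As a library-free illustration of what such a clause checks, the sketch below verifies the “MaxVCPUPerCore” condition on a candidate placement; it is not the Choco/Entropy encoding used in the project, and the data structures are assumptions.

```python
# Plain-Python sketch of the "MaxVCPUPerCore" QoS check: on every server, the
# vCPUs of the hosted VMs must not exceed max_vcpu_per_core * physical cores.
# Data structures are illustrative, not those of Entropy/Choco.

def max_vcpu_per_core_ok(placement, server_cores, vm_vcpus, max_vcpu_per_core=2):
    """placement: {vm: server}; server_cores: {server: cores}; vm_vcpus: {vm: vcpus}."""
    used = {server: 0 for server in server_cores}
    for vm, server in placement.items():
        used[server] += vm_vcpus[vm]
    return all(used[s] <= max_vcpu_per_core * server_cores[s] for s in server_cores)
```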
POWER OBJECTIVE MODEL
Total reconfiguration energy = (total instantaneous power × reconfiguration time) + migration energy + on/off energy
Total instantaneous power = idle server power + VM power + network power
The per-component power values are provided by the Power Calculator.
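A minimal sketch of this objective in Python, assuming the Power Calculator already provides the individual terms; names and units are illustrative.

```python
# Sketch of the power objective above:
#   total reconf. energy = total instantaneous power * reconf. time
#                          + migration energy + on/off energy
# with total instantaneous power = idle server power + VM power + network power.

def total_instant_power(idle_servers_w, vms_w, network_w):
    return idle_servers_w + vms_w + network_w

def total_reconf_energy(idle_servers_w, vms_w, network_w,
                        reconf_time_s, migration_energy_j, on_off_energy_j):
    return (total_instant_power(idle_servers_w, vms_w, network_w) * reconf_time_s
            + migration_energy_j + on_off_energy_j)
```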
HEURISTICS
[Search tree: at the root node no VM is allocated; each node at level x allocates VMx on some server Sy; at the leaf nodes all VMs are allocated.]
At each level, the F4G branching heuristic is called; if a constraint is broken, the search backtracks.
At leaf level, the solution and the energy saved are recorded, then the search backtracks to find a better solution.
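A compact way to picture this search is the backtracking sketch below; the constraint check, energy function and server ordering are placeholders for the F4G heuristics and constraints, not the actual implementation.

```python
# Minimal branch-and-bound sketch of the search above: at each level one VM is
# allocated to a server, a broken constraint triggers backtracking, and every
# leaf (all VMs placed) is scored so the best (lowest-energy) placement is kept.
# `feasible` and `energy` are placeholder callables.

def search(vms, servers, feasible, energy):
    best = {"placement": None, "energy": float("inf")}

    def branch(level, placement):
        if level == len(vms):                       # leaf node: all VMs allocated
            e = energy(placement)
            if e < best["energy"]:
                best["placement"], best["energy"] = dict(placement), e
            return                                  # backtrack to look for better
        vm = vms[level]
        for server in servers:                      # a branching heuristic would order these
            placement[vm] = server
            if feasible(placement):                 # broken constraint -> backtrack
                branch(level + 1, placement)
            del placement[vm]

    branch(0, {})
    return best
```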
Heuristics
Composable heuristics:
• Candidate VM for migration: the F4G VM selector picks a VM on the least energy-efficient and least loaded server.
• Target server for migration: the F4G Server selector picks the most energy-efficient and most loaded server.
• Candidate server for extinction: the F4G Server selector picks an empty server that is the least energy efficient.
Heuristics: to sum up…
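The three selectors compose naturally as small functions; the sketch below assumes hypothetical server objects with efficiency, load and vms attributes, and only illustrates the composition, not the F4G selectors themselves.

```python
# Illustrative composable selectors; server objects with .efficiency, .load
# and .vms attributes are assumptions for this sketch.

def vm_to_migrate(servers):
    """Pick a VM from the least energy-efficient, least loaded server."""
    worst = min(servers, key=lambda s: (s.efficiency, s.load))
    return worst.vms[0] if worst.vms else None

def migration_target(servers):
    """Pick the most energy-efficient, most loaded server as the target."""
    return max(servers, key=lambda s: (s.efficiency, s.load))

def server_to_switch_off(servers):
    """Pick an empty server with the worst energy efficiency."""
    empty = [s for s in servers if not s.vms]
    return min(empty, key=lambda s: s.efficiency) if empty else None
```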
Experiments on Cloud Testbed
[Testbed diagram: two blade enclosures hosting node controllers, cluster controllers, a cloud controller with task scheduler, the FIT4Green VMs, and a power and monitoring collector.]
Lab trial resources
| | Enclosure 1 | Enclosure 2 |
| --- | --- | --- |
| Processor model | Intel Xeon E5520 | Intel Xeon E5540 |
| CPU frequency | 2.27 GHz | 2.53 GHz |
| CPUs & cores | Dual CPU – quad core | Dual CPU – quad core |
| RAM | 24 GB | 24 GB |

• DC1: 4 BL460c blades using the VMware ESX v4.0 native hypervisor, 3 blades for Cluster and Cloud Control.
• DC2: 3 BL460c blades using the VMware ESX v4.0 native hypervisor, 2 blades for Cluster Control and the Power and Monitoring System.
Experiments on Cloud Testbed
Lab trial workload
[Figure: total number of active virtual machines (Number of active VMs vs. Time) during a full week of work.]
Active SLA constraints:
• Max vCPU per core = 2
• Min VM slot = 3
• Max VM slot = 6
Experiments on Cloud Testbed
Final test results for the various configurations
| Configuration | Data Centre 1 | Data Centre 2 | Energy for Federation | Saving |
| --- | --- | --- | --- | --- |
| Without FIT4Green | 6350 Wh | 4701 Wh | 11051 Wh | – |
| With FIT4Green, Static Allocation | 5190 Wh | 4009 Wh | 9199 Wh | 16.7% |
| With FIT4Green, Dynamic Allocation | 5068 Wh | 3933 Wh | 9001 Wh | 18.5% |
| With FIT4Green, Optimized Policies | 4860 Wh | 3785 Wh | 8645 Wh | 21.7% |
CONCLUSION & FUTURE WORK
• Energy-aware resource allocation in data centres
• Flexibility & extensibility
• Saves up to 18% in the HP experiment
• Scalability with parallel processing
• Future work: SLA re-negotiation, Green SLAs
2nd phase tests: numeric results

| Testbed | Single site tests | Federated site tests |
| --- | --- | --- |
| Traditional DC testbed | Around 30% | From 28% to 48% |
| Supercomputing DC testbed | From 4% to 28% | From 30% to 42% |
| Cloud computing DC testbed | From 10% to 24% | From 17% to 21% |
Scalability Evaluation
| # | Configuration | Placement constraints activated |
| --- | --- | --- |
| 1 | 1 datacenter | none |
| 2 | 1 datacenter with overbooking factor = 2 | “MaxVCPUPerCore” constraint set on each server |
| 3 | 2 federated datacenters | “Fence” constraint set on each VM |
FIT4Green Plug-in
Optimizer entry points
• Single allocation: find the most energy-efficient and suitable resource for a new workload.
• Global optimization: rearrange the resources in a way that saves the maximum amount of energy or carbon emissions.
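The two entry points can be pictured as two functions over a hypothetical federation model; fits, rank_by_energy and global_search are assumed placeholders standing in for the constraint-programming machinery described earlier.

```python
# Sketch of the two optimizer entry points; all names are illustrative.

def single_allocation(resources, workload, rank_by_energy):
    """Return the most energy-efficient suitable resource for a new workload."""
    candidates = [r for r in resources if r.fits(workload)]
    return min(candidates, key=rank_by_energy) if candidates else None

def global_optimization(federation_model, global_search, goal="energy"):
    """Return the actions that rearrange resources to minimise energy or CO2."""
    return global_search(federation_model, objective=goal)
```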