(DPM) Dynamic Power Management.

Nov 5, 2013

I wanted to blog about my experience with DPM, or Dynamic Power Management, in VMware. From what I hear this feature is not used by many. As a system admin for years, I can understand not wanting to power servers off. But I recently found myself looking at this feature from a home lab perspective. My home lab has a couple of rack servers, in particular Dell PowerEdge. One of the problems with home labs is the heat and the noise these servers generate. Note to those thinking about home labs: rack servers are not made to sit in your home office. They were designed to sit in racks in climate-controlled data center rooms, where you wouldn't be trying to do things like hold a conference call with servers screaming in the background. But for those of us who seem to need this kind of hardware to teach ourselves all the latest software, I proceed.

So, my thought was that if I could automate the ESXi host startups and shutdowns, I could help the noise, the heat situation, and the stress levels.

I run vSphere and a mix of ESXi from 4.0, 4.1, 5.0 and 5.1. If you have to ask why, it is because I support a number of different environments, from standalone ESXi to redundant Tier 4 data centers with stretched clusters between them. Enough of the rambling.

To run DPM you need vCenter and you need a DRS cluster (it doesn't have to be HA enabled) of at least two ESXi hosts. If you don't understand how to set up DRS, stop here and Google what you need on that; I will only go over the high points in this document. One other thing you need is shared storage. If you don't have a storage server, I suggest looking at a virtual storage appliance (a VM that acts as your shared storage), or build/buy a NAS that is VMware compatible (most are).


The short of it is: you create a cluster by right-clicking on your Data Center in vSphere and selecting New Cluster. Give it a name like Cluster1. Once the cluster is added, you need to add at least two ESXi hosts to it. Right click on Cluster1 and select Add Host, or if you already have a host in vSphere you can drag and drop it onto the Cluster1 icon. Now to turn on DRS, right click on the cluster, select Edit Settings, and check off DRS and Fully Automated.



I recommend Full Automation in DRS. Basically, you wind up with a cluster that manages CPU and memory loads for you automatically, giving you the best performance and utilization. Just remember your VMs will be moving from host to host within the cluster as needed to balance loads, which is really no big deal.
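If you'd rather script the setup than click through it, the same settings are reachable through the vSphere API. Here is a minimal sketch using pyVmomi (VMware's Python SDK); the vCenter address, credentials, and cluster name are placeholders for your own environment, and a real script would add error handling and proper certificate validation.

    # Minimal pyVmomi sketch: enable DRS in Fully Automated mode on an existing cluster.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()        # home lab only: skip cert validation
    si = SmartConnect(host='vcenter.lab.local',   # placeholder vCenter address
                      user='administrator', pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    # Find the cluster by name (Cluster1 from the steps above).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == 'Cluster1')

    # Reconfigure the cluster: DRS on, Fully Automated.
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior='fullyAutomated')        # DrsBehavior enum value
    cluster.ReconfigureComputeResource_Task(spec, True)  # True = merge with existing config

    Disconnect(si)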

Part of vSphere DRS is Power Management. I set mine to Automatic and Aggressive.
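This, too, can be scripted. Reusing the connection and cluster lookup from the sketch above, something like the following should turn DPM on in automated mode; hostPowerActionRate is the conservative/aggressive slider (1 to 5), though I'd verify against the API docs which end of the scale is which.

    # pyVmomi sketch: turn on DPM in automated mode on the same cluster.
    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(
        enabled=True,
        defaultDpmBehavior='automated',   # DpmBehavior enum: 'manual' or 'automated'
        hostPowerActionRate=1)            # 1-5 slider; check docs for which end is aggressive
    cluster.ReconfigureComputeResource_Task(spec, True)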




Next is Host Options. You should see all of your Cluster1 hosts and their status. Note that a host that has never been through a standby cycle will show a status of Never for Last Time Exited Standby.


Before proceeding, you should test each host to see that it can be placed in Standby and wake up OK. To test this, you can put each of your ESXi hosts in Standby one at a time and wake them up. From vSphere, right click on a host and select Enter Standby Mode.





This will cause VMs to move to other hosts, and this ESXi host will shut down. In vSphere the host will not show as powered off, but as in Standby. You should then be able to right click the host and select Exit Standby Mode (startup takes some time, be patient). The wake-up is done, in the case of my Dell PowerEdge servers, by a hardware feature of the network adapters called WOL, or Wake on LAN. You can check whether your adapters support it in the host's Network Adapters view.
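If you want to script this test instead, the standby cycle maps to two methods on the host object in the API. A sketch, assuming the pyVmomi connection and cluster lookup from earlier (the host name is a placeholder):

    # pyVmomi sketch: cycle one host through Standby and back.
    host = next(h for h in cluster.host if h.name == 'esxi01.lab.local')  # placeholder name

    # Enter Standby: DRS evacuates the running VMs, then the host powers down.
    host.PowerDownHostToStandBy_Task(timeoutSec=300, evacuatePoweredOffVms=True)

    # ...wait for the task to finish; host.runtime.powerState should read 'standBy'.

    # Exit Standby: vCenter wakes the host (via Wake on LAN on my PowerEdge boxes).
    host.PowerUpHostFromStandBy_Task(timeoutSec=600)   # startup is slow, be patient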


Once you have verified manually that you can enter and exit standby, you're ready for automation. I did a lot of trials trying to get my host to automatically go into standby, and that is what motivated me to write this blog the most. You can read up on how this really works, but my simple understanding is that when utilization levels for both CPU and memory get below (or above) a threshold, action is taken (or recommended, if not fully automated). So in my case, what I did was use another feature of vSphere to manipulate load and the DPM state to help things along.
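As a toy illustration of that threshold behavior (this is not VMware's actual algorithm, and the numbers are made-up placeholders), the shape of the decision is roughly:

    # Toy model of the DPM decision -- illustrative only, not VMware's real logic.
    def dpm_recommendation(cpu_util, mem_util, low=0.45, high=0.81):
        if cpu_util < low and mem_util < low:     # both resources underused
            return 'consolidate VMs and place a host in Standby'
        if cpu_util > high or mem_util > high:    # either resource overcommitted
            return 'power a standby host back on'
        return 'no action'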


Scheduled Tasks

Select Home in vSphere and then Scheduled Tasks from the Management section. From here you have a list (or maybe an empty list) of scheduled tasks. You just right click on the list to add a New Scheduled Task.
What I did was set a task for VMs that I don't need running at night (or whenever) to shut those VMs down; I set one for each VM I wanted to shut down. The next thing I did was create tasks to migrate the remaining VMs to a single host that would remain powered on. And I put in a task to "Change cluster power settings" to turn on DPM. All of these tasks were timed for approximately the same time. So what I have done here is create a situation where a host has no load and DPM is turned on (so DPM can put the host in Standby).
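I drove all of this with the scheduler, but for illustration, here is roughly what that night-time sequence amounts to if you script it with pyVmomi instead (VM and host names are placeholders, and a real script would wait on each task before moving on):

    # pyVmomi sketch: the night-time sequence -- drain the load, then let DPM act.
    NIGHT_OFF = ('testvm1', 'testvm2')   # placeholder: VMs I don't need overnight
    stay_up = next(h for h in cluster.host if h.name == 'esxi01.lab.local')

    for host in cluster.host:
        for vm in host.vm:
            if vm.name in NIGHT_OFF:
                vm.ShutdownGuest()       # clean guest shutdown (requires VMware Tools)
            elif host != stay_up:
                # vMotion everything else onto the one host that stays powered on.
                vm.MigrateVM_Task(pool=vm.resourcePool, host=stay_up,
                                  priority=vim.VirtualMachine.MovePriority.defaultPriority)

    # With the load drained, turn DPM on so it can push the idle host into Standby.
    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=True)
    cluster.ReconfigureComputeResource_Task(spec, True)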

If this works for you, you should see your host go to Standby in less than 20 minutes; I believe the algorithms look at the last 20 minutes of activity to determine what to do.

If a task is placed at, say, 6 AM to turn DPM off, you will see any host in Standby mode powered on at that time. You can also set tasks to power on VMs, which will cause loads to increase over the threshold and cause a host in the cluster to be powered on.
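Scripted, the morning reversal is just the mirror image; per the behavior above, disabling DPM brings standby hosts back, after which the overnight VMs can be restarted (continuing from the night-time sketch, with the same placeholder names):

    # pyVmomi sketch: the morning sequence -- DPM off, standby hosts power back on.
    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=False)
    cluster.ReconfigureComputeResource_Task(spec, True)

    # Once the hosts are back, restart the VMs that were shut down overnight.
    for host in cluster.host:
        for vm in host.vm:
            if vm.name in NIGHT_OFF and vm.runtime.powerState == 'poweredOff':
                vm.PowerOnVM_Task()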

As a side note, if you have trouble getting a host to go into Standby automatically, put the host in Standby manually and see if you can increase the load on the other host enough to cause it to power back on (come out of Standby).

I have been able to automate shutting down all but one host at night (it runs all my core services), and in the AM my VMs are restarted by the scheduler and the hosts are powered up. Good Luck…