>> Jie Liu: Hi, everyone. It's a great pleasure to welcome Jeff Rauenhorst, who is a VP of business development in a start-up called Federspiel Controls. And they've been doing some interesting work in controlling datacenter environmental conditions using sensor networks, database sensor networks. And today he's here to talk about the projects they're doing and some of the results that they got.

Nov 21, 2013




>> Jeff Rauenhorst: Thank you. I want to try to make this as interactive as possible today. So hopefully you guys can get something out of this and learn something a little bit about what we're doing. So do you want to quickly go around and introduce yourself, maybe what you want to get out of today, just so I have a little idea of what you guys are interested in.

>>: [indiscernible].

>> Jeff Rauenhorst: I think there might be some people online. Welcome.

So, yeah, again my name is Jeff Rauenhorst with Federspiel Controls. Just for a little background on myself: I have a chemical engineering degree. I worked for Honeywell in their process control group for several years implementing IT systems to run large pharma and biotech companies. So I have a very IT-centric background. I have an MBA from the UC Berkeley School of Business. I joined Federspiel Controls last summer.

Federspiel Controls, just a little background, is a company that's focused on energy management and saving our customers energy. We were started in 2004, really focused on the building market, specifically older buildings. Our founder, Cliff Federspiel, came out of UC Berkeley, where they had been doing a lot of research into wireless sensor networks, where a lot of the core wireless sensor technology came out of. He really adapted their technology out of the university and brought it to the commercial building market.

Our founder also has a Ph.D. from MIT in the intersection between artificial intelligence and
HVAC controls. So he has a very unique background, which is appropriate for what we're doing.

We've been in both the building and datacenter market for a while.

Hopefully today I can kind of tell you a little bit about what we're doing around using wireless sensor network data to drive environmental controls specifically within datacenters. So feel free to stop me if you have any questions. Hopefully we can make this pretty interactive.

All right. Some of these slides -- I mean, we kind of all know some of the problems in datacenters. First, you know, server manufacturers and ASHRAE specify we should measure the inlet air going to servers, but very few people do it. Microsoft is clearly in the lead in its deployment of actually reading inlet air temperatures.

The controls piece: even if you're measuring the inlet air, typically your control system isn't controlled with inlet air. It's usually controlled by return air coming back to the CRAC units in the datacenter or by supply air. But both ways of controlling really don't have any idea of what's going on with your actual servers. So, you know, there's clearly a lot of inefficiency and hotspots and other thermal issues that crop up.

So it takes, you know, a large deployment really to understand your thermal environment, but then when you actually begin to approach the control problem of what's happening in datacenters, it's a very difficult problem. First of all, as you know, datacenters tend to be open planned. So they're quite large open areas. The areas of influence are quite wide, so one cooling unit can affect a lot of different units. It's very hard to divide up or, as they call it, 'zone' a datacenter to control temperature. There's a big challenge with this open plan control.

When you build your HVAC systems in datacenters, you want them redundant. If one system fails, you don't want your whole datacenter to go down. With that redundancy comes a lot of complexity. If you have redundant systems, how do you use them appropriately? When do you use them?

There are some big challenges too with fighting. Hopefully you don't have it in your datacenter, but I've walked into some where we see two units literally four feet away from each other and one is humidifying, the other is dehumidifying. We've seen some heating, which they should never do in any datacenters.

So the
coordination of cooling in datacenters is a huge problem and it's not very well done from
what we've seen in a lot of datacenters. Hopefully Microsoft is ahead of the curve.

You guys are fortunate enough to have a lot of Ph.D.s on staff and they probably are helping you with your datacenter cooling control. Not everybody is that lucky. It's very hard to have one full time. So, you know, as we say, we like to -- we package a Ph.D. and put it in a box, around some intelligent cooling controls.

So there's big problems on both the measuring side and also the control side in datacenters. And what we've seen is, surprisingly, datacenter cooling control typically is worse than, say, a building like this. They monitor the rooms, have a pretty good understanding of how the rooms work, but in datacenters, the control paradigm is quite weak.

So this is what we do. So we have -- I forgot -- I left it out in the car, but we have our own wireless sensor modules. We'll talk more in detail exactly what they are. This is what we look like. Our model is that we actually run two thermistors straight off them. They put them on top of a rack and run it down to measure the top and bottom of a rack. That really provides us the insight into the thermal environment in the datacenter.

What we do with that then is a bunch of analysis to really understand the simple things, like where are your hotspots, you know, where are some of the issues. But really that data allows us to, as you've probably seen CFD modeling, computational fluid dynamics modeling, kind of have a picture of airflow in a datacenter. We're able to take all our data from our network and build a statistical CFD model basically in our software that is dynamic and changes depending on your load. You take out a couple racks, you put in a couple racks, or even during the day as, I don't know, somebody, a server serving out videos, you know, gets really active when all the high school kids come home, we can dynamically model that. And then what that allows us to do is actually begin to control your cooling units and coordinate them to match your cooling capacity with your IT load. And, yeah, the sales thing is we typically can show an [indiscernible] of less than two years.

So how do we do this? Well, you can see, a very ugly rack of one of our customers. Hopefully yours all look much better. But we typically put our module at the top, running thermistors down. You guys have a little different way of doing it, which is fine. And, you know, this is deployed -- we're just measuring inlet air going into servers, and usually deployed every kind of four to five racks, but depending on the particular customer site. So we deploy this and this collects our data.

You know, this technology is a wireless mesh networking technology, so the simplest way is to say it's ZigBee-like. Again, I think the next slide will go into a little bit more of the details. So there's a mesh network that is formed -- pretty pictures. But the key thing is all this data then we use to feed into basically this intelligent algorithm that dynamically controls cooling.

>>: What is your data U on that wireless?

>> Jeff Rauenhorst: Data U?

>>: Do you have all the [indiscernible] you need?

>> Jeff Rauenhorst: That's a good question.

The technology is based on -- are you familiar with Dust Networks? We use Dust Networks radios. So, you know, they make their, you know, mesh networking radio. It's aligned with the WirelessHART standard, so it's not ZigBee. They are incredibly robust. Their main customers use them in places like oil refineries and heavy industry. So we get close, like 99.9 or something of that level, of reliability of packets. It's a time-synchronized protocol, so it's not collision based. So we get very good reliability of our data.
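The time-synchronized, non-collision-based scheme Jeff describes can be sketched in miniature: every node shares a slot counter, and the channel for each slot is derived deterministically from that counter, so sender and receiver always agree without negotiating, and interference on one channel only costs isolated slots. This is a toy illustration of the idea, not Dust Networks' actual algorithm (the channel count, key, and hop formula here are invented):

```python
# Toy time-synchronized channel hopping: every node derives the channel
# for a given timeslot from the shared slot counter plus a network-wide
# constant, so all nodes agree on the channel without negotiation.
CHANNELS = list(range(15))      # e.g. 15 usable radio channels
NETWORK_KEY = 0x5EED            # hypothetical shared constant seeding the pattern

def channel_for_slot(slot):
    """Pseudo-random but deterministic channel for a timeslot.
    7 is coprime to 15, so the sequence visits every channel."""
    return CHANNELS[(slot * 7 + NETWORK_KEY) % len(CHANNELS)]

# A channel that turns out to be jammed can simply be blacklisted;
# transmissions scheduled on it wait for the next slot instead.
blacklist = {3}

def usable_channel_for_slot(slot):
    ch = channel_for_slot(slot)
    return None if ch in blacklist else ch

hops = [channel_for_slot(s) for s in range(15)]
print(hops)  # visits each of the 15 channels exactly once per cycle
```

Because the hop pattern spreads traffic across the whole band, losing one channel to a cordless phone or other interferer degrades only a fraction of slots rather than the whole link.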

Does that...

>>: What is your sampling frequency?

>> Jeff Rauenhorst: It's typically a minute or two minutes. It's configurable. But, yeah, usually
our customers are doing a minute to
two minutes.

>>: When you interface, do you interface with an Ethernet back end or another wireless network?

>> Jeff Rauenhorst: I didn't put that in there. So our wireless sensors talk to a gateway, which is then plugged into Ethernet, yeah. So a typical gateway can support about 256 or 250 sensors.

>>: And what frequency are you using?

>> Jeff Rauenhorst: The 900 megahertz unlicensed spectrum.

>>: The algorithm controls temperature. How far back does it take of samples? Does it take the most recent readings from sensors to determine what the next temperature should be?

>> Jeff Rauenhorst: Right, right. So I think the simple answer is: kind of both. It uses historical data to build up a model, and does things like perturb a system, right? Turn down a CRAC unit just a little bit and see how the whole system responds to develop its internal model. So it's using a lot of the history data, you know, to determine basically, you know, to be able to answer questions like, if this rack gets hot, which cooling unit should I turn up and down? But then it's also using real-time data to see how that model adjusts over time.
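The perturb-and-observe idea can be sketched as estimating a sensitivity matrix: nudge one cooling unit at a time and record how each inlet sensor responds. The toy example below illustrates that structure only; the plant matrix, noise-free physics, and all names are invented, and Federspiel's actual (patent-pending) algorithm is not public:

```python
import numpy as np

# Hypothetical plant: 3 CRAC units, 5 inlet sensors. In reality the
# influence matrix is unknown; here we invent one so we can simulate.
TRUE_INFLUENCE = np.array([
    [-0.8, -0.1, -0.0],
    [-0.5, -0.4, -0.1],
    [-0.1, -0.6, -0.2],
    [-0.0, -0.3, -0.7],
    [-0.1, -0.1, -0.9],
])  # degrees C per unit of fan-speed increase (more cooling lowers temp)

def read_sensors(fan_speeds, base_temps):
    """Simulated steady-state inlet temperatures for given fan speeds."""
    return base_temps + TRUE_INFLUENCE @ fan_speeds

def identify_influence(base_temps, delta=0.1):
    """Perturb one CRAC at a time and difference the sensor readings."""
    n_cracs = TRUE_INFLUENCE.shape[1]
    baseline = read_sensors(np.zeros(n_cracs), base_temps)
    cols = []
    for j in range(n_cracs):
        speeds = np.zeros(n_cracs)
        speeds[j] = delta                      # small nudge to CRAC j only
        response = read_sensors(speeds, base_temps)
        cols.append((response - baseline) / delta)
    return np.column_stack(cols)

base = np.array([24.0, 25.0, 26.0, 25.5, 24.5])
estimated = identify_influence(base)
# In this noise-free toy, the estimate recovers the true matrix exactly.
print(np.allclose(estimated, TRUE_INFLUENCE))
```

Once such a matrix is in hand, "if this rack gets hot, which unit should I turn up?" becomes a lookup of which column most affects that sensor; a real controller would additionally handle noise, drift, and continuous re-estimation from live data.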

This is supposed to be a picture of a wireless mesh network. But you guys I think probably know how that works. So, yes, a little bit more on the technology piece.

It is all battery powered. The Dust radios are incredibly power efficient, so we get three to 10 years of battery life. So we can monitor that battery life and show you how many months you have left on the batteries. So time-synchronized protocol. There's about three layers of security. So it's kind of a pseudo-random frequency hopping, which is kind of a security measure in itself. Then there's encryption, and keys, and a couple other security features built into it.

We found it's incredibly robust and reliable both in datacenters as well as -- we deploy these in a lot of buildings with concrete rebar, which tend to be very tough environments. So it works very well.

>>: Cordless phones?

>> Jeff Rauenhorst: It's smart enough if it does hit a channel of interference, it will just stop using that. So, again, very robust.

>>: How much time does it take for you to transfer the data back to [indiscernible] and based on the data make a decision? How does it affect the controls?

>> Jeff Rauenhorst: Yeah, so, that's a good question. I don't know the exact answer. But I think it's less than a minute or two. So if we're getting a sample rate of, say, a minute, to get it back to the controls, say it might be a couple seconds, and then usually within a minute it should make, you know, a control decision based on that.

>>: What is the largest network you put in on a single base station?

>> Jeff Rauenhorst: So, it can support 255. I think we typically are a little conservative. So we typically put in maybe like 170 at a time. It actually supports longer battery life because you have fewer nodes that are both routing and sending signals. But, yeah. So, I mean, we've done 170 pretty easily.

Let's see. As far as -- yeah, so, I mean, Dust Networks is our supplier. We're not a wireless sensor company. We don't make firmware or program. If you really want to learn more about the technology, you know, Dust Networks -- they've been a great partner to work with and pretty robust.

So our solution is entirely web based. So from an architecture perspective, you deploy wireless temperature sensors. It talks to a base station, which then via Ethernet is connected to a server that runs our control algorithms. And that server then can control CRAC units directly.

We have a wireless control module as well that has -- actually, I have a picture, yeah. It actually has outputs, so we can directly plug into variable frequency drives on the control units -- on the cooling units or chilled water valves, or both, to directly control the units.

A lot of our customers have pretty basic datacenters, and the cooling units are standalone, so we can directly control them. So we do both the sensing and the control piece wirelessly.

>>: The control model is also wireless?

>> Jeff Rauenhorst: Yes, it's wireless. It has a zero-to-10-volt output so it can talk to anything.

>>: And the battery also lasts?

>> Jeff Rauenhorst: Yeah. If -- we have a zero-to-10-volt output, but we have double As, which is only three volts. So we have to power that. But typically if you're plugging into a variable frequency drive, it has power right there. So most devices you talk zero to 10 volts to have power, so we just pull power through there. The batteries act as a backup.

The other piece that we do is there's building management systems, BMSs, or building automation systems in your datacenter. We can speak and communicate with those systems. So BACnet is an open protocol that's used to communicate to building management systems.

We can communicate with BMSs and then control set points and cooling units through that. Typically more sophisticated customers already have their cooling units networked together; then we just pass through their system so they can use the same interface, do manual overrides of our control if they so desire.

Then we also do analog integration. We can plug directly into a control panel and send an analog signal. We can also do web services, XML interfaces, [indiscernible] interfaces. Our system really can bridge the gap between IT and facilities, which is kind of a challenge in this space.

Yeah, so, if you're from a facilities side, our software is kind of like a supervisory control. So we sit on top of your existing control and do energy efficiency.

Have you guys looked at demand response at all in your datacenters? Demand response -- are you familiar with that? During the hottest days of the summer you turn down your energy usage and get paid by the local utilities.

>>: We have one of our power companies do that. Not all of them [indiscernible].

>> Jeff Rauenhorst: So our software enables some of the demand response capabilities, too, for
both building and datacenters.

There we go. You asked a question about the control technology. I mean, here's a little bit -- well, it's high level on a slide; I can talk to it a little bit more. You know, the system is really designed to solve a very difficult problem. When you deploy sensor networks, you get hundreds and thousands or tens of thousands, or probably even more in your case, data points within a datacenter, and you get lots of data.

But, you know, say, I don't know, a 50,000 square foot datacenter, you'll probably have, what, about 40 cooling units roughly? So you have this problem where you have thousands of points controlling a few points, you know, 30, 40, 50 points. So that's a very difficult control problem, and that's basically what we have solved.

And we can handle, you know, lots of points. We're waiting for the day when servers, with their internal temperature probes, actually finally get to communicate out to the rest of the world. I'm not sure if you guys have looked at that. But, you know, most server manufacturers have a temperature probe, usually somewhere between four to six inches inside the intake into a server. And so there are some interfaces, like a company called Richards-Zeta has an interface. Intel has their, what, IPMI interface. Once we can pull out that data, we'll have even more temperature points.

Our algorithm kind of does some advanced control to really understand, you know, how all these points really can drive, you know, 30 or 40 points. And that's what we've solved. And it's intelligent in the sense that it actually learns. It has, you know, a lot of artificial intelligence built into it. And it can adapt.

I think a big challenge in, you know, a lot of colo facilities we've talked to, is customers take racks in and out, and the load is actually quite dynamic, both from the number of servers and also the output of the servers, right, because, you know, a datacenter server typically doesn't always run, you know, at 80% utilization all the time. We would love it if that were the case, but load really shifts over the course of the day.

We've seen with our system, you know, for one particular customer, at 7:30 in the morning the whole datacenter basically wakes up with the rest of the company. So it can dynamically respond to that.

Then it really has the ability to determine which cooling unit, which CRAH or CRAC, influences which server and how it changes over time. If you've been in datacenters, you know, with under-[indiscernible] floors, there's cabling, all sorts of stuff underneath that, you know, can change over time. You run a bunch of network cables, so the environment changes. And it's, you know, very tough to model, you know, from a point in time. So our system kind of dynamically corrects for those types of things.

>>: Could you talk a little more, I don't want to get into things you don't have to, but what models
do you use?

>> Jeff Rauenhorst: Yeah, so that's probably about all I can say right now. All of this technology is patent pending that we're developing, yeah. That's kind of our secret sauce.

>>: Do you guys look just at proactive or just reactively controlling the temperature, so if you see a temperature increasing you increase the cooling, or do you actually try to, like, proactively predict what will happen?

>> Jeff Rauenhorst: Right. That's a great question. Right now it is reactive. You know, we've developed this core control model that works. But clearly in the future, we are gonna develop a more proactive and predictive capability. Especially as we begin to have more customers interested in demand response, you know, if we can tell it's going to be an incredibly hot day today, you know, we can do some precooling, depending on your HVAC system.

But, yeah, and make sure the datacenter is cool, you know, a couple minutes before all the servers typically go up at 7:30, or some backup algorithm runs at 3:00 in the morning on a regular basis, yeah. So we're not there yet, but it's definitely a direction we're heading.

I don't want to talk about this much because I'm sure you've had the same experience. Deploying these networks adds a lot of value to begin to see where there are issues in your datacenter. And, you know, gives you an insight for your facilities people that they've never had before. There's a huge value in that. You know, we talk to our customers and say we recommend that.

With a couple of our systems, you can get energy savings right away. But then you can really look at some of the other best practices out there, especially around cooling, whether it's hot aisle or cold aisle containment, you know, making sure all your holes are plugged in your under-floor air distribution, you know, blanking panels, all this stuff. Having a system that gives you a lot of data is a good first step. And it sounds like you guys are doing this.

So then really we like -- why we like the wireless sensor module approach is that, you know, we kind of call our service continuous commissioning, but it allows customers to really make sure your datacenter's always running optimally, you know, over time.

You know, we know that -- I mean, I've seen datacenters that are put up and working perfectly on day one, and by day a hundred, they're already kind of out of whack. So you really get kind of real-time feedback on what's happening.

This is kind of an interesting insight. I'm not sure if you guys have run into this. This is always kind of fascinating. A lot of racks have doors on them, right? And, you know, they're 95% perforated. You know, most people say, Oh, that's fine. And there's lots of studies saying it doesn't affect airflow. But we've seen -- this is one instance where there was over a 10-degree drop by just opening the front and back doors.

We're actually working in a customer's cloud cluster. They had, what, four rows of 15 racks of, you know, brand-new servers running a cloud instance. You know, all new gear. I think it was IBM, both equipment and racks. And their doors, you know, were brand new.

There was a 19-degree temperature difference inside, you know, where the inlet where the air is going into the server, when the door was open and closed. And this was, you know, fully loaded, you know, blanking panels -- it's kind of everything that you would expect.


But I think what we think is that the door provides just enough back pressure that any of these little holes can get the air, the hot air, flowing backwards, you know, coming from the outlet, just because a little bit of back pressure makes it easier for the air to come to the front instead of going out the back.

So you've probably seen results like this deployed in your systems, but we see that's a big
benefit. So our little side campaign is, Take off your doors.

And, of course, you know, more important than me talking, here's a couple case studies for customers of what we've been able to achieve with our system. So this is the State of California Franchise Tax Board, whose datacenter is quite busy right now, because all their tax returns go through there. So this was a pilot project for us that we did with Lawrence Berkeley National Lab. So they're right now verifying all their numbers, making sure they're accurate. This was a small datacenter, 10,000 square feet, with 10 Liebert chilled water units.

They had built it out about 60% and then virtualized it. So they brought it down to about 40% capacity. But what was interesting is the cooling units were all on all the time. Their operators had tried to turn off a couple, but kept on getting hotspots, so they were never comfortable trying to turn it down.

You know, we put in -- yeah, so it was 25 wireless sensor modules measuring 50 points. So, you know, not a very data-dense deployment. But, you know, it was sufficient.

We typically implemented on the outside of the rows, which is just where the hottest temperatures typically are.

Out of the -- we installed variable frequency drives on four of the 12 cooling units. Typically they're the ones closest to the racks. And we did some rearrangement of tiles, you know, and reset some of the set points of the cooling units. Our implementation was controlling these four VFDs, and the remaining eight we would turn on and off as necessary.

>>: So you control the fan speed, then you manually set the set point for the air return?

>> Jeff Rauenhorst: Right. So each implementation is slightly different. In this particular one, they were Liebert VFDs, so they have this retrofit kit. So we were sending commands to the Liebert units through their gateway protocol, which were resetting the set points. So basically for return air, because they were controlling off of return air.

>>: So you didn't touch that loop? There's still a return air?

>> Jeff Rauenhorst: Right.

>>: And chill water [indiscernible] opening?

>> Jeff Rauenhorst: Right. So the local Liebert unit in this case actually controlled the VFD speed and the chilled water valve according to its own controls. But we were changing their set points. So, you know, that was the feedback loop we used for that system. Other instances, we have basically made the Liebert units kind of dumb cooling coils where we actually control the VFD and the chilled water valve independently. Again, it's just depending on the customer.

And then for the other eight units, we turn them on and off because they only had one speed, as
well as adjusting their set points to adjust their chilled water valve position.

So what was interesting is at any given time, typically six to eight of the units were turned off by our system. And which ones were turned off kind of depended on the conditions of the servers, while, of course, maintaining all the temperatures within ASHRAE limits.

I think what was interesting, when you get some kind of counterintuitive behavior that we didn't even expect, is that most of the servers were clustered on kind of one side of the datacenter. And what happened is the VFDs were next to those servers, so they were typically on but at a lower speed. The cooling units in the middle of the datacenter were off. But there was typically one that was furthest away from the servers that was on, which seemed very counterintuitive at first.

Kind of our analysis seems to imply that it was basically providing back pressure. So all the cool air coming from the cooling units next to the servers stayed there, and this just provided some back pressure.

Which CRAC unit was on or off varied over time based on load. That's kind of the behavior you get with some of this dynamic cooling control.

Now, you know, with a 10,000 square foot datacenter, we were still able to get some pretty significant energy savings, you know, primarily from fan savings, but also from some chilled water savings.

>>: So intuitively, if you have some workload in the datacenter, do you want to put a concentrated load into a small number of servers, or do you want to spread it across many servers, to make the cooling most efficient?

>> Jeff Rauenhorst: That creates an interesting question; one that I'm not sure I'm qualified to fully answer.

>>: In your experience.

>> Jeff Rauenhorst: I think one of the things that we've seen, the next case study will kind of show it. If you have a more heavily loaded datacenter, it's actually better to keep all your units on, but turn them down to, like, say, 80%, than, you know, have, you know, 80% of them on at full blast and, you know, another 20% off.

>>: [Indiscernible] but we were at 80, 85% capacity, so they were running all the time anyway, so it didn't make any sense to try and have that variable speed. We wouldn't get any cost savings because we had to run them all the time anyway.

>> Jeff Rauenhorst: At full, right.

>>: I mean, it might have been at 80%. I'm saying we weren't able to take advantage of turning some of them off at the current time.

>> Jeff Rauenhorst: What's interesting is if you turn a VFD down to 80%, or turn it down 20%, you get about 50% energy savings, because there's a cube-law relationship between fan speed and power. So by turning all your CRAC units down to 80%, you'll get more energy savings than by turning off 20% of them.
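The cube-law arithmetic is easy to check. Fan power scales roughly with the cube of shaft speed (the fan affinity laws), so slowing every fan beats switching some off. A quick sketch, assuming idealized fans with no fixed losses:

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity law: power scales with the cube of shaft speed."""
    return speed_fraction ** 3

# Option A: all fans slowed to 80% speed.
all_at_80 = fan_power_fraction(0.8)            # 0.8^3 = 0.512 of full power

# Option B: 80% of fans at full speed, 20% switched off entirely.
eighty_pct_on = 0.8 * fan_power_fraction(1.0)  # 0.8 of full power

print(f"all fans at 80%:  {1 - all_at_80:.0%} savings")
print(f"20% of fans off:  {1 - eighty_pct_on:.0%} savings")
```

Running all fans at 80% yields roughly 49% fan-energy savings versus only 20% for shutting a fifth of them off, which matches the "about 50%" figure above. Real fans deviate from the pure cube (motor and drive losses, minimum speeds), so treat this as the idealized case.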

>>: I don't have the VFD installed.

>> Jeff Rauenhorst: Yeah, in heavily loaded datacenters, there are -- there's a little less bang for your buck. But I think where the opportunity is, is using --

>>: I thought you said the datacenters are dynamic, so things change. I'm seeing that now in our datacenter; we're decommissioning [indiscernible] old equipment and bringing [indiscernible].

>> Jeff Rauenhorst: Yeah, in heavily loaded datacenters, there's still an opportunity for VFDs. Because if you're able to use your wireless sensor network to control, hopefully you'll slightly lower the cooling required, and then you can load up your datacenter more and you get higher densities, you know, safely.

So there's still an opportunity to get some excess capacity out of your existing facilities with this
type of infrastructure.

Yeah, you can see this is when we turned it on. They had a bunch of alarms based on relative humidity. And if you raise your temperature, your relative humidity goes down. So they had to turn on their humidifiers for a little while to get the relative humidity back up, which is kind of an interesting story of how much energy humidification actually uses.
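The humidity side effect follows from basic psychrometrics: warming air at constant moisture content raises its saturation vapor pressure, so relative humidity drops even though no moisture left the air. A rough illustration using the Magnus approximation (the coefficients below are one common published set; exact values vary slightly by source):

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus approximation for saturation vapor pressure in hPa,
    reasonable for ordinary room temperatures."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Air at 20 C and 50% RH has a fixed partial vapor pressure...
vapor = 0.50 * saturation_vapor_pressure(20.0)

# ...so warming that same air to 25 C lowers its relative humidity.
rh_at_25 = vapor / saturation_vapor_pressure(25.0)
print(f"RH after warming to 25 C: {rh_at_25:.0%}")  # roughly 37%
```

A five-degree setpoint increase knocks roughly a dozen points off the relative humidity, which is exactly the kind of change that trips RH alarms and kicks energy-hungry humidifiers on.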

>>: So in the case where you have a datacenter, and let's say, you know, half of your machines are heavily loaded, and half of them are running pretty much at idle, is it best to have the heavily loaded computers very condensed and try to cool just those, or spread the heavy load across a wide number -- spread the heavily loaded machines across your physical datacenter so you have more space in between?

>> Jeff Rauenhorst: Again, I'm not the expert. But if you had variable frequency drives installed, it would probably make sense to spread that load across your datacenter. Then your risk of hotspots is also decreased. If you don't, it might make sense to put your load where there is excess cooling capacity, and kind of do a little bit more matching of IT load physically with your cooling load.

Yeah, so 58% fan reduction, you know -- this is -- or 13% of your total IT -- total datacenter energy usage, you start getting some big numbers.

>>: [indiscernible] do you have any data on how it affects the temperature distribution? Because --

>> Jeff Rauenhorst: So that's a good question. I don't have -- I don't exactly know. I know that we've been able to, in all these instances, get these energy savings while making sure the temperature readings we're getting were within ASHRAE limits. So I should have that information. That would be good to know. I'm sure someone from our team can run that.

Here's another datacenter. Again, it's quite small. This was for a large software company in the Bay Area. They wanted kind of a pilot to see how our system works. So, again, you know, kind of small. I think what's interesting is -- so it's 5,000 square feet, six CRAHs, so chilled water Liebert units, but our implementation was slightly different. We installed six VFDs and we directly controlled the VFDs. They didn't have a building management system or anything, so we were actually controlling them directly.

And, you know, again, pretty small. But we deployed, measuring 48 temperature points. They had gone through an audit with a local engineering firm of all the different best practices they could roll out. Their datacenter, it was about half hot aisle, cold aisle. It was kind of a mess. It was kind of a lab facility. They had a bunch of holes in their floor.

You know, they could have done containment, done a hot air return plenum in the ceiling. But they chose to do none of that and implemented our system first just to kind of see. Yeah, so all this stuff goes in quickly.

So what was interesting is the energy savings they were able to achieve were equal to what they predicted with all of the various energy efficiency measures if they were implemented. So, you know, basically their datacenter had a lot more cooling than was needed, so we were able to turn down the fans to about 50% most of the time. So we get about 80% fan energy savings. You know, pretty large energy savings.

It was interesting. I was in there when they turned it on, and it's amazing how quiet the datacenter got, because, you know, the fans, the units, even the UPS units, you know, the buzz of them went down a little bit as the IT load went down. So, you know, this is again where we were directly controlling chilled water valve position and fan speed.

So, I mean, that's kind of an overview of what we do. I wasn't sure exactly how this would work. But I would be curious to hear a little bit of feedback, now that you've heard a little bit of what we're doing more on the control side: how would this fit into your deployment of wireless sensors?

Have you started -- I think last time we talked in the fall, you were mainly doing data collection and analysis. If you begin to look at the control piece, where do you stand with your wireless sensor network project?

>>: Is that a question?

>> Jeff Rauenhorst: Yeah, that's a question for anyone out here. Anyone.

>>: Sort of looking at the control piece [indiscernible]. We were a little bit sort of constrained on what we were doing on that [indiscernible]. This is definitely interesting [indiscernible]. As I said, we want to take a joint computing and physical control approach. That sort of leads to a question [indiscernible], how can you dynamically control the cooling cycle [indiscernible]. So we're investigating things like [indiscernible], load balancing, shut down machines, putting the cooling factor into that [indiscernible].

>> Jeff Rauenhorst: Yeah, we're talking with a client, too, about putting together a lot more data, IT data as well as, you know, as I mentioned, the server temperature data, and how do you actually do a control based on utilization of your CPU and the server, and some of those features.

I think we'd all want to be there. The challenge is, for a lot of customers, IT and facilities are still kind of two different worlds.

Yeah, it sounds like the data, the amount of data you're collecting, has been kind of an unexpected gotcha that you've kind of figured out. Any other challenges or any kind of big insights that you guys have seen from your experience? Has it gone as expected?

>>: Can't tell you.

>> Jeff Rauenhorst: Fair enough. Fair enough. Okay. Okay.

Oh, yeah, this is... the sales guy threw this in. But, yeah, so hopefully that gave you a little bit of an idea. Wireless mesh networking technology, it's kind of off-the-shelf. I think the real step forward we've taken is with the control piece and having the dynamic control.

I think the interesting part is this really works well with other best practices out there. You know, for example, hot aisle, cold aisle containment, we've done that with some clients and they're able to get some great results. Because, as you contain your environments, you know, the responses and the changes in those environments become a little bit more critical. So having the good sensing data in there adds a lot of value to your control strategy.

As I'm sure you've seen, the installation of these wireless sensors is pretty non-obtrusive to the operations of the facility. So that's about all I have. If you guys have any more questions or...

>>: So as you work in this industry, where do you see the next big challenge? Do you just have a wonderful solution and everybody uses it, or are there some challenges, technical challenges in particular?

>> Jeff Rauenhorst: Yeah, so, of course we'd love to have everybody out there with datacenters use our solution, you know, because we're all in business to make money. But I think we've touched on two really large challenges here. I mean, the predictive control that you mentioned, you know, you can begin to do that, especially in owner-occupied datacenters, like the Microsofts, Yahoos, IBMs, some of those types of people, where they have kind of a little bit more predictability in their IT load, you know, being able to do some predictive control. It's a little harder in co-lo facilities, but still applicable.

I think probably the more pressing challenge is merging this world of IT and facilities: getting CPU utilization data, getting hard drive utilization data, and putting that all in front of the facilities people, getting them to talk so you can provide... you know, I know Microsoft has done a good job with some of the dashboards around datacenter utilization. Most customers aren't there. You guys are definitely at the forefront of that, getting the IT and facilities people seeing the same information, making decisions from the same information, as well as then driving the control piece of it, too.

We would love to get to the point where, you know, the facilities control can talk to your VM installations and coordinate, you know, where VM instances run, both physically within a datacenter or between datacenters, you know, based on the going power rate in an area or capacity. I think there's some big opportunities there.
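The coordination described here can be sketched as a simple placement decision: run a VM instance at whichever site currently has the lowest going power rate and enough spare capacity. This is a hedged illustration of the idea only; the site names, rates, and API are all hypothetical, not an actual Federspiel or Microsoft system.

```python
# Hypothetical sketch of power-aware VM placement across datacenters:
# pick the cheapest site (by $/kWh) that still has spare power capacity.

def choose_site(sites, vm_kw):
    """Return the cheapest site with enough spare capacity, or None."""
    candidates = [s for s in sites if s["spare_kw"] >= vm_kw]
    return min(candidates, key=lambda s: s["power_rate"], default=None)

# Illustrative site list: (going power rate in $/kWh, spare capacity in kW).
sites = [
    {"name": "site_a", "power_rate": 0.04, "spare_kw": 120},
    {"name": "site_b", "power_rate": 0.11, "spare_kw": 300},
    {"name": "site_c", "power_rate": 0.07, "spare_kw": 0},
]

print(choose_site(sites, vm_kw=50)["name"])  # site_a: cheapest with capacity
```

A real system would fold in cooling headroom and migration cost, but the decision structure is the same greedy comparison.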

But all those challenges really go down to how facilities and IT people work together. It becomes a little bit of a cultural thing within companies, how you align these two organizations so that they work together, yeah.

The other big piece is that a lot of datacenters out there are very old. I think over half the datacenters in the U.S. are over five years old. You know, so how do you retrofit your existing datacenters, because building a new datacenter is easily $1,000 a square foot. So we see the financial challenges, at least in the short term, are quite large for a lot of our customers, so...

>>: With the data that you collect, do you store most of it for long-term use? I know you said, at least right now, the two examples you gave are kind of smaller. Is it just right now you're running on smaller datacenters, or do you do something with that data, either get rid of it or...

>> Jeff Rauenhorst: Right, so just... and this is my assumption, is that we don't have as rich of an environment as your deployments here. My guess is you put in a lot more sensors. You use a sensor rack. I'm not sure what level of data you're collecting. But we don't collect quite as much data to do the control in the datacenter.

So given hard drive capacities, we don't see a big problem in the foreseeable future, you know, with installations even up to, you know, 100,000 or 200,000 square feet.

So I think the simple answer is we haven't hit that issue at this point. But what we typically see long term is that a lot more of the data around fan energy is a little bit more pertinent to keep over the long term, especially as customers trend their energy usage. Some of the temperature data, you know, probably can be compressed into, you know, 15- or 20-minute intervals over time, you know, for anything that's over a couple years old.
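The compression he describes, collapsing old high-frequency samples into 15- or 20-minute averages, can be sketched in a few lines. The (timestamp, temperature) layout is an assumption for illustration, not Federspiel's actual storage format.

```python
# Minimal sketch of compressing high-frequency temperature samples into
# fixed intervals (e.g. 15 minutes) for long-term storage. Timestamps are
# seconds since the epoch; the data layout is hypothetical.
from collections import defaultdict

def downsample(samples, interval_s=15 * 60):
    """Average (timestamp, temperature) pairs into fixed-width buckets.

    Returns a sorted list of (bucket_start_timestamp, mean_temperature).
    """
    buckets = defaultdict(list)
    for ts, temp in samples:
        buckets[(ts // interval_s) * interval_s].append(temp)
    return sorted((start, sum(v) / len(v)) for start, v in buckets.items())

# Example: three 1-minute samples collapse into one 15-minute point.
raw = [(0, 70.0), (60, 71.0), (120, 72.0), (900, 75.0)]
print(downsample(raw))  # [(0, 71.0), (900, 75.0)]
```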

>>: So is it up to the customer...?

>> Jeff Rauenhorst: At this point, yeah. At this point we save all the data and don't run into
capacity issues at this point.

>>: [indiscernible]?

>> Jeff Rauenhorst: So I think one of the interesting pieces is that where you actually position the sensors is pretty important. I'm not sure if you've run into this.

>>: Much denser server.

>> Jeff Rauenhorst: Even across one rack, a typical volume server, between the sides and the middle and the inside, there can be a [indiscernible]- or five-degree temperature difference. So it's interesting, as we begin to work with more and more customers, that the positioning of sensors is important. As I talked about with the doors, you know, by putting the sensor on the outside of the door versus inside the door, we've seen up to 19-degree temperature differences.

So that's kind of a gotcha: you need to be a little bit careful of exactly where you place the sensor on a rack.

One other piece. Oh, we've had some datacenters where, you know, we have these variable frequency drives. People would go around and turn them up as they're walking through the datacenter, like an R&D facility for a customer we have. There's just a guy who likes to turn them up because he wants to make sure the servers are cold.

You know, there are some cultural things, too. When you walk into a datacenter, you know, most people, you've seen the, you know, heavy coats sitting right outside, and they want their datacenter cold. You know, I know, like, Christian Bellotti is a big proponent of running your datacenters as hot as possible.

So I think some of the gotchas are around understanding the culture of the datacenter and the universe of your customer.

>>: So when you do controls, turn the fan up and down, the chilled water, what kind of trends do you see? Do you see a big trend when you turn something off, or do you worry about the short-term big changes in temperatures?

>> Jeff Rauenhorst: No, that's a good question. I know the system can adapt. Also, just like any control mechanism, there's some tuning parameters of how quickly the system responds. It has some PID loops built in, if you guys are control engineers. So, you know, a little bit of it kind of depends on the particular customer.
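The PID loops he mentions have the usual structure: a fan-speed command computed from the error between measured and target temperature, with gains as the tuning parameters that set how fast the system responds. A generic discrete sketch follows; the gains, setpoint, and 50%-minimum fan speed are illustrative assumptions, not Federspiel's actual controller.

```python
# Generic discrete PID controller driving fan speed from cold-aisle
# temperature -- an illustrative sketch, not a vendor implementation.

class PID:
    def __init__(self, kp, ki, kd, setpoint, out_min=0.5, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd   # tuning gains
        self.setpoint = setpoint                 # target temperature
        self.out_min, self.out_max = out_min, out_max  # fan-speed limits
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt=1.0):
        """Return a fan-speed command as a fraction of full speed."""
        error = measured - self.setpoint   # running hot => more cooling
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Clamp to the allowed range (e.g. never below the 50% minimum).
        return max(self.out_min, min(self.out_max, self.out_min + out))

pid = PID(kp=0.05, ki=0.01, kd=0.0, setpoint=75.0)
print(pid.update(75.0))  # at setpoint -> holds the 50% minimum: 0.5
print(pid.update(80.0))  # running hot -> fan speed ramps up
```

Larger gains make the loop respond faster but risk oscillation, which is exactly the per-customer tuning trade-off described above.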



>> Jeff Rauenhorst: We have a pretty good set of kind of base parameters. But one of the things the company offers is a service to, you know, do a bunch of analysis on their data on, say, a monthly basis, so that we can recommend, you know, some fine-tuning and make sure your system runs as well as possible.

So, yeah, we have done, you know, some minor adjustments with the control piece. Specifically, when the system turns on, yeah, we typically see a drop in fan speed. For some customers it's pretty constant. Typically we'll see daily fluctuations. Like I said, for one customer, at 7:30 it will be running at its minimum at 50%, then it will start fluctuating between 50 and 60% during the day, then go down at the end of the day, just based on usage.

Just to jump back to one of the challenges that I just remembered: so there's fans in all these computers, right? Most of them now are variable speed. And I think a challenge all of us have is we can control the cooling units, but if we ramp up the temperatures too high, then the fans on these servers will kick on, and they're incredibly inefficient because they're really small, so they can actually negate a lot of the energy usage you might be saving in your larger units.

Again, in this example of the 19-degree difference at the door, you'd open up the door and you'd hear the fans in the servers ramp down. So I think a challenge, again, as we pull more information out of the IT systems, is making sure we look at a holistic perspective of cooling, both from your CRACs as well as your servers.

>>: In the examples that you gave, it seemed like the sensors weren't very dense, that you didn't put them in there very densely. You talked about getting temperature sensors from the motherboards and whatnot, which would be an order of magnitude [indiscernible]. Do you think you'll be able to use that data efficiently and actually get further improvements from additional data, or is there some kind of point where getting more data won't actually help you? It's more data, but it's not going to help you?

>> Jeff Rauenhorst: That's a great question. What's interesting is the way we've built our software. We can support, you know, as much data as you can get, and we will try to extract the most information.

I think implementations will really tell whether there's that much more information to learn from the data coming out of servers.

>>: So was there a reason why you picked the particular density you did in those examples? I guess, do you have some intuition where that data doesn't help you enough to justify additional...

>> Jeff Rauenhorst: To be frank, the decision around the number of sensors is as much a control as an economic decision. The sensors cost money. And, you know, there's a certain point... each customer has a certain budget. So, you know, we have some intuition about placing the sensors with an appropriate density to really understand the thermodynamic environment of your datacenter, as well as, you know, the densities to make sure your costs stay low and you get an appropriate return.

With taking temperatures from servers, your marginal cost of an additional temperature point is incredibly small. So we think we can get a lot more value out of that. You know, the number of points I think will actually be necessary, too, because what we've seen is the temperature probes in the server are pretty cheap and relatively inaccurate.

From a statistical perspective, if a rack is, what, 42U, if you have 42 servers in there, you can do some statistical analysis around the law of large numbers. If you have over 30 points, roughly, you'll be able to get a better estimate of what the temperatures are.

So when you go to measuring server temperatures, having that data will be important, because a lot of it will be filtered out because, you know, of bad sensors or, you know, rogue points.
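The idea in the last two paragraphs, averaging many cheap, noisy on-board sensors while throwing out the rogue points, can be sketched as a robust mean. The readings and the two-standard-deviation cutoff below are illustrative assumptions, not the actual filtering Federspiel uses.

```python
# Sketch: estimate a rack's temperature from ~40 cheap, noisy server
# sensors by dropping rogue points more than k standard deviations from
# the mean, then averaging the rest. Data and threshold are illustrative.
from statistics import mean, stdev

def robust_rack_temp(readings, k=2.0):
    """Mean of readings after discarding outliers beyond k std devs."""
    if len(readings) < 3:
        return mean(readings)
    m, s = mean(readings), stdev(readings)
    kept = [r for r in readings if s == 0 or abs(r - m) <= k * s]
    return mean(kept)

# 41 plausible readings around 72F plus one failed sensor stuck at 120F.
readings = [72.0 + 0.1 * (i % 5) for i in range(41)] + [120.0]
print(f"naive mean:  {mean(readings):.1f}")   # pulled up by the bad sensor
print(f"robust mean: {robust_rack_temp(readings):.1f}")
```

With 30-plus points per rack, one stuck or miscalibrated probe barely moves the filtered estimate, which is the law-of-large-numbers argument made above.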

>>: Do you think you'll move away from wireless sensors or will you use them in conjunction?

>> Jeff Rauenhorst: I think... the simple answer is: time will tell. We'll always have a system that will be able to do it both ways. We've had a couple clients, particularly telcos, that won't do wireless in their datacenters, especially anything with a 911 switch. They won't even allow you in there with cell phones. So there are some environments that tend to... well, that won't do wireless. And there are also some points that are hard to monitor.

So we actually typically put a sensor on the CRAC unit to measure return and supply air, because the sensors in these Liebert units are notoriously poor. So a wireless sensor there is the only way to get better, accurate data. So I'm sure it will probably be a heterogeneous environment, at least in the short term, yeah.

>>: Thank you.