Free Cooling: the Server Side of the Story
by Johan De Gelas on February 11, 2014 7:00 AM EST
Posted in: Cloud Computing, IT Computing, Ivy Bridge EP
Data centers are the massive engines under the hood of the mobile internet economy. And it is no secret that they demand a lot of energy: with power capacities ranging from 10MW to 100MW, a single facility can draw up to 80,000 times more power than a typical US home.
And yet, you do not have to be a genius to figure out how the enormous energy bills could be reduced. The main energy gobblers are the CRACs (Computer Room Air Conditioners) or their alternative, the CRAHs (Computer Room Air Handlers). Most data centers still rely on some form of mechanical cooling. To the outsider, it looks pretty wasteful, even stupid, that a data center consumes energy to cool servers while the outside air in a mild climate is more than cold enough (below 20°C/68°F) most of the time.
There are quite a few data centers that have fully embraced "free cooling", i.e. using the cold air outside. Microsoft's data center in Dublin, for example, uses large air-side economizers and makes good use of the lower temperature of the outside air.
Microsoft's data center in Dublin: free cooling with air economizers (source: Microsoft)
The air-side economizers bring outside air into the building and distribute it via a series of dampers and fans. Hot air is simply flushed outside. As mechanical cooling typically accounts for 40-50% of a traditional data center's energy consumption, it is clear that enormous energy savings are possible with "free cooling".
Air economizers in the data center
This is easy to illustrate with the most important - although far from perfect - benchmark for data centers, PUE or Power Usage Effectiveness. PUE is simply the ratio of the total amount of energy consumed by the data center as a whole to the energy consumed by the IT equipment. Ideally it is 1, which means that all energy goes to the IT equipment. Most data centers that host third-party IT equipment are in the range of 1.4 to 2. In other words, for each watt consumed by the servers/storage/network equipment, 0.4 to 1 watt is necessary for cooling, ventilation, UPS, power conversion and so on.
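The arithmetic behind PUE is simple enough to sketch in a few lines. The loads and function names below are illustrative, not measurements from any real facility:

```python
# Minimal sketch of the PUE calculation described above.
# All numbers are made up for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal: 1.0)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

def overhead_per_it_watt(pue_value: float) -> float:
    """Watts of cooling/UPS/power-conversion overhead per watt of IT load."""
    return pue_value - 1.0

# A hypothetical multi-tenant facility: 1800 kW total draw, 1000 kW IT load.
print(pue(1800, 1000))            # -> 1.8
print(overhead_per_it_watt(1.8))  # ~0.8 W of overhead per IT watt
```

At a PUE of 1.8, nearly half as much energy again is spent on overhead as on computing, which is exactly where free cooling attacks the bill.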
The "single-tenant" data centers of Facebook, Google, Microsoft and Yahoo that use "free cooling" to its full potential are able to achieve an astonishing PUE of 1.15-1.2. You can imagine that the internet giants save massive amounts of energy this way. But as you have guessed, most enterprises and "multi-tenant" data centers cannot simply copy the data center technologies of the internet giants. According to a survey of more than 500 data centers conducted by The Uptime Institute, the average PUE rating for data centers is 1.8. There is still a lot of room for improvement.
Let's see what the hurdles are and how buying the right servers could lead to much more efficient data centers and ultimately an Internet that requires much less energy.
extide - Tuesday, February 11, 2014
Yeah, there is a lot of movement in this area these days, but the hard part is that at the low voltages used in servers (<=24V) you need a massive amount of current to feed several racks of servers, so you need massive power bars, and of course you can lose a lot of efficiency on that side as well.
drexnx - Tuesday, February 11, 2014
AFAIK, the Delta DC stuff is all 48V, so a lot of the old telecom CO equipment is already tailor-made for use there.
But yes, you get to see some pretty amazing buswork as a result!
Ikefu - Tuesday, February 11, 2014
Microsoft is building a massive data center in my home state just outside Cheyenne, WY. I wonder why more companies haven't done this yet? It's very dry, and days above 90°F are few and far between in the summer. Seems like an easy cooling solution versus all the data centers in places like Dallas.
rrinker - Tuesday, February 11, 2014
Building in the cooler climes is great - but you also need the networking infrastructure to support said big data center. Heck, for free cooling, build the data centers in the far frozen reaches of Northern Canada, or in Antarctica. Only, how will you get the data to the data center?
Ikefu - Tuesday, February 11, 2014
It's actually right along the I-80 corridor that connects Chicago and San Francisco. Several major backbones run along that route, and it's why many mega data centers in Iowa are also built along I-80. Microsoft and the NCAR Yellowstone supercomputer are there, so the large pipe is definitely accessible.
darking - Tuesday, February 11, 2014
We've used free cooling in our small datacenter since 2007. It's very effective from September to April here in Denmark.
beginner99 - Tuesday, February 11, 2014
That map of Europe is certainly plain wrong. Spain especially, but also Greece and Italy, easily have some days above 35°C. It also happens a couple of days per year where I live, a lot further north than any of those.
ShieTar - Thursday, February 13, 2014
Do you really get 35°C, in the shade, outside, for more than 260 hours a year? I'm sure it happens for a few hours a day in the two hottest months, but the map does cap out at 8500 out of 8760 hours.
juhatus - Tuesday, February 11, 2014
What about wear & tear from running the equipment at hotter temperatures? I remember seeing a chart where higher temperature = shorter lifespan. I would imagine the OEMs have engineered a bit of margin for this, and warranties aside, it should be basic physics?
zodiacfml - Wednesday, February 12, 2014
You just need a constant temperature and equipment that works at that temperature. Wear and tear happens significantly at temperature changes.