Free Cooling: the Server Side of the Story
by Johan De Gelas on February 11, 2014 7:00 AM EST
Posted in: Cloud Computing, IT Computing, Intel, Xeon, Ivy Bridge EP, Supermicro
Data centers are the massive engines under the hood of the mobile internet economy. And it is no secret that they demand a lot of energy: with power capacities ranging from 10 MW to 100 MW, a large data center can draw up to 80,000 times more power than a typical US home (which averages on the order of 1.25 kW).
And yet, you do not have to be a genius to figure out how these enormous energy bills could be reduced. The main energy gobblers are the CRACs (Computer Room Air Conditioners) or their alternative, the CRAHs (Computer Room Air Handlers). Most data centers still rely on some form of mechanical cooling, and to an outsider it looks pretty wasteful, even stupid, that a data center consumes energy to cool servers down while the outside air in a mild climate is more than cold enough (below 20°C/68°F) most of the time.
Free cooling
There are quite a few data centers that have embraced "free cooling" completely, i.e. using the cold air outside. Microsoft's data center in Dublin uses large air-side economizers and makes good use of the lower temperature of the outside air.
Microsoft's data center in Dublin: free cooling with air economizers (source: Microsoft)
The air-side economizers bring outside air into the building and distribute it via a series of dampers and fans. Hot air is simply flushed outside. As mechanical cooling typically accounts for 40-50% of a traditional data center's energy consumption, it is clear that enormous energy savings are possible with "free cooling".
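To make that decision logic concrete, here is a minimal sketch (in Python, with made-up temperature thresholds; real economizer controllers also track humidity and particulates) of how a facility might choose between outside air and mechanical cooling:

```python
# Minimal sketch of an air-side economizer control decision.
# Thresholds are illustrative assumptions, not values from any real facility.

SUPPLY_SETPOINT_C = 24.0   # target cold-aisle supply temperature
ECONOMIZER_MAX_C = 20.0    # outside air below this is "free cooling" territory

def cooling_mode(outside_temp_c: float) -> str:
    """Pick a cooling mode based on the outside air temperature."""
    if outside_temp_c <= ECONOMIZER_MAX_C:
        # Open the dampers, push filtered outside air through the cold aisle,
        # and flush the hot-aisle exhaust back outside.
        return "economizer: 100% outside air"
    elif outside_temp_c <= SUPPLY_SETPOINT_C:
        # Outside air still helps, but needs a mechanical assist.
        return "partial economizer + mechanical trim"
    else:
        # Too warm outside: fall back to CRAC/CRAH mechanical cooling.
        return "mechanical cooling only"

if __name__ == "__main__":
    for t in (5.0, 18.0, 22.0, 30.0):
        print(f"{t:5.1f} °C outside -> {cooling_mode(t)}")
```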
Air economizers in the data center
This is easy to illustrate with the most important - although far from perfect - benchmark for data centers: PUE, or Power Usage Effectiveness. PUE is simply the ratio of the total amount of energy consumed by the data center as a whole to the energy consumed by the IT equipment alone. Ideally it is 1, which means that all energy goes to the IT equipment. Most data centers that host third-party IT equipment are in the range of 1.4 to 2. In other words, for each watt consumed by the servers/storage/network equipment, another 0.4 to 1 watt is necessary for cooling, ventilation, UPS losses, power conversion and so on.
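Because PUE is nothing more than a ratio, the calculation is trivial; the short sketch below uses illustrative numbers (a facility drawing 1.8 MW to feed 1 MW of IT gear) purely to show how a given PUE translates into overhead watts per IT watt:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: a facility drawing 1.8 MW to run 1 MW of IT gear.
it_load_kw = 1000.0
facility_kw = 1800.0

ratio = pue(facility_kw, it_load_kw)
overhead_per_it_watt = ratio - 1.0   # watts of cooling/UPS/conversion per IT watt

print(f"PUE = {ratio:.2f}")
print(f"Overhead = {overhead_per_it_watt:.2f} W per W of IT load")
```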
The "single-tenant" data centers of Facebook, Google, Microsoft and Yahoo that use "free cooling" to its full potential are able to achieve an astonishing PUE of 1.15-1.2. You can imagine that the internet giants save massive amounts of energy this way. But as you have guessed, most enterprises and "multi-tenant" data centers cannot simply copy the data center technologies of the internet giants. According to a survey of more than 500 data centers conducted by The Uptime Institute, the average Power Usage Effectiveness (PUE) rating for data centers is 1.8. There is still a lot of room for improvement.
Let's see what the hurdles are and how buying the right servers could lead to much more efficient data centers and ultimately an Internet that requires much less energy.
48 Comments
ShieTar - Tuesday, February 11, 2014
I think you oversimplify if you just judge the efficiency of the cooling method by the heat capacity of the medium. The medium is not a heat battery that only absorbs the heat; it is also moved in order to transport energy. And moving air is much easier and much more efficient than moving water. So I think in the case of Finland the driving fact is that they will get air temperatures of up to 30°C in some summers, but the water temperature at the bottom regions of the Gulf of Finland stays below 4°C throughout the year. If you were to consider a data center near the river Nile, which is usually just 5°C below air temperature and frequently warmer than the air at night, then your efficiency equation would look entirely different.
Naturally, building the center in Finland instead of Egypt in the first place is a pretty good decision considering cooling efficiency.
icrf - Tuesday, February 11, 2014
Isn't moving water significantly more efficient than moving air, because a significant amount of energy when trying to move air goes into compressing it rather than moving it, whereas water is largely incompressible?
ShieTar - Thursday, February 13, 2014
For the initial acceleration this might be an effect, though energy used for compression isn't necessarily lost, as the pressure difference will decay via motion of the air again (though maybe not in the preferred direction). But if you look at the entire equation for a cooling system, the hard part is not getting the medium accelerated, but keeping it moving against the resistance of the coolers, tubes and radiators. And water has much stronger interactions with any reasonably used material (mostly metal) than air does. You also usually run water through smaller and longer tubes than air, which can quickly be moved from the electronics case to a large air vent. Also, the viscosity of water itself is significantly higher than that of air, specifically if we are talking about cool water not too far above the freezing point, i.e. 5°C to 10°C.
easp - Saturday, February 15, 2014
Below Mach 0.3, air flows can be treated as incompressible. I doubt bulk movement of air in data centers hits 200+ mph.
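To put the air-versus-water debate above in rough numbers: the back-of-the-envelope sketch below, using textbook densities and specific heats, estimates how much of each medium has to flow to carry the same heat load at the same temperature rise. It deliberately ignores fan and pump work against pressure drop, which is exactly the part the commenters are arguing about.

```python
# Back-of-the-envelope: volumetric flow needed to carry the same heat load
# with air vs. water at the same temperature rise. Textbook properties;
# fan/pump work and pressure drop are deliberately ignored.

HEAT_LOAD_W = 10_000.0    # e.g. a 10 kW rack (illustrative)
DELTA_T_K = 10.0          # allowed coolant temperature rise

MEDIA = {
    # name: (density kg/m^3, specific heat J/(kg*K))
    "air":   (1.2, 1005.0),
    "water": (998.0, 4186.0),
}

for name, (rho, cp) in MEDIA.items():
    flow_m3_s = HEAT_LOAD_W / (rho * cp * DELTA_T_K)   # Q = rho * V_dot * cp * dT
    print(f"{name:>5}: {flow_m3_s:.4f} m^3/s  ({flow_m3_s * 1000:.2f} L/s)")
```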
juhatus - Tuesday, February 11, 2014
Sir, I can assure you the Nordic Sea hits ~20°C in the summers. But still, that temperature is good enough for cooling. In Helsinki they are now collecting the excess heat from a data center to warm up houses in the city area, so that should be considered too. I think many countries could use some "free" heating.
Penti - Tuesday, February 11, 2014
Surface temp does, but below the surface it's cooler, even in small lakes and rivers; otherwise our drinking water would be unusable and come out of the tap at 25°C, and you would get legionella and the like. In Sweden the water is not allowed to be (or is not considered usable) over 20 degrees at the inlet, or out of the tap for that matter. Lakes, rivers and oceans can keep 2-15°C at the inlet year-round here in Scandinavia if the inlet is appropriately placed. Certainly good enough if you allow temps over the old 20-22°C.
Guspaz - Tuesday, February 11, 2014
OVH's datacentre here in Montreal cools using a centralized watercooling system and relies on convection to remove the heat from the server stacks, IIRC. They claim a PUE of 1.09.
iwod - Tuesday, February 11, 2014
Exactly what I was about to post. Why haven't Facebook, Microsoft and even Google managed to outpace them? A PUE of 1.09 is still, as far as I know, an industry record. Correct me if I am wrong. I wonder if they could get it down to 1.05.
Flunk - Tuesday, February 11, 2014
This entire idea seems so obvious that it's surprising they haven't been doing it the whole time. Oh well, it's hard to beat an idea that cheap and efficient.
drexnx - Tuesday, February 11, 2014
There's a lot of work being done on the UPS side of the power consumption coin too. FB uses both Delta DC UPSes, which power their equipment directly with DC from the batteries instead of wastefully inverting to 480 VAC three-phase and then rectifying again at the server PSU level, and Eaton equipment with ESS, which bypasses the UPS until there's an actual power loss (for about a 10% efficiency pickup when running on mains power).
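As a rough illustration of that 10% figure, the sketch below compares the power drawn for the same IT load under a few UPS topologies; the efficiency numbers are illustrative assumptions, not vendor specifications.

```python
# Rough comparison of the UPS topologies mentioned above. Efficiency figures
# are illustrative assumptions, not vendor specs.

IT_LOAD_KW = 500.0

topologies = {
    # double conversion: AC -> DC (rectify) -> battery bus -> AC (invert) -> server PSU
    "double-conversion UPS": 0.90,
    # eco/ESS mode: utility power bypasses the conversion stages until an outage
    "ESS bypass (on mains)":  0.99,
    # DC UPS: batteries float on a DC bus feeding the servers directly
    "direct DC distribution": 0.97,
}

for name, efficiency in topologies.items():
    input_kw = IT_LOAD_KW / efficiency
    loss_kw = input_kw - IT_LOAD_KW
    print(f"{name:>24}: {input_kw:6.1f} kW drawn, {loss_kw:5.1f} kW lost")
```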