
04 Mar 14

Is It Really That Bad?

http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?pagewanted=all&_r=0

Some time has passed since this article was published, so it should be less controversial now than it was when it first came out.  Needless to say, many data center designers, operators and owners had their feathers ruffled a little bit.  I’d like to add my own take on the content of this article, now that I’ve had time to count to 10 and take deep breaths (jk).

I’ll preface this by saying that almost everything in the article is true.  Data centers consume a LOT of energy, they often run on power generated from fossil fuels, and their diesel backup generators even have tailpipes!  The notion that this power consumption is inordinate or inefficient is really a value judgment on the part of Mr. James Glanz, and is the kind of editorializing that sadly passes for news in this day and age.  But, to some extent, he is right.  Unfortunately, the article is myopic in its understanding of data centers and their role in our modern economy.

When I think about how much energy some of these data centers consume, I liken them to diesel buses in a public transportation system.  They are big, clunky, and by golly, they consume a lot of fuel.  For the sake of discussion, let’s assume that a bus gets a meager 10 miles to the gallon.  That gas mileage seems abhorrent in the age of hybrids and high-efficiency diesel engines.  But is it terrible for the environment?  It depends.  A hybrid is a very efficient way for a single person to get around; let’s say it gets 50 miles to the gallon.  With a single passenger, the hybrid also gets 50 passenger-miles per gallon.  But if a dirty old city bus that gets 10 miles to the gallon carries five passengers, then it too gets 50 passenger-miles per gallon.  At peak traffic, a bus may hold 30 or more people, meaning that it actually gets 300 passenger-miles to the gallon.  That is six times as efficient as the hybrid.
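The bus arithmetic above can be sketched in a few lines of Python (the mileage and occupancy figures are the illustrative numbers from this post, not real-world data):

```python
def passenger_mpg(vehicle_mpg: float, passengers: int) -> float:
    """Miles per gallon, counted per passenger carried."""
    return vehicle_mpg * passengers

hybrid = passenger_mpg(50, 1)        # 50 passenger-miles per gallon
bus_off_peak = passenger_mpg(10, 5)  # 50 -- matches the hybrid
bus_peak = passenger_mpg(10, 30)     # 300 -- six times the hybrid
```

The point of the metric is that efficiency per vehicle and efficiency per passenger are different questions, and the second is the one that matters.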

Something similar happens in Data Centers as well.  When a facility draws as much power and cooling as one of these does, it actually achieves an economy of scale.  To provide the equivalent processing power dispersed among end users, there would be considerably more transmission and transformation loss of power, it would be less reliable, and the cooling would be considerably less efficient.  So it’s important to consider power cost per process, which is far and away lower in a Data Center than it would be on a desktop or laptop (if those machines even possessed enough processing power to compare).  This is why Cloud Computing is starting to catch on.  You don’t need a powerful machine sitting on your desk if you only use it for process-intensive purposes every once in a while.  Turning on a top-of-the-line Alienware to check email is terribly inefficient.  Data Centers allow users to outsource these processes and use them only when they need them.  With the increasing use of mobile devices as thin clients for cloud computing software, we are actually making the world far more efficient.  Imagine having the power of a supercomputer in a data center at your disposal when you need it, controlled from your iPad, which sips electricity compared to a traditional PC.

But to further address issues of efficiency, Data Center designers and operators are focused on a set of metrics that describe how efficient an installation is.  The reason?  Electricity is expensive, and at the quantities we are talking about, the incentives are quite real; on the order of seven and eight figures real.  One metric (that may be falling out of favor) is the PUE, or Power Usage Effectiveness.  This is simply the ratio of the total power delivered to a facility to the power actually dedicated to computing rather than to supporting functions.  The closer a facility is to 1.0, the closer it is to having no loads on the building other than the IT load.  Google, a couple of years back, set the bar at an impressive 1.21, which means that only a small fraction of its power goes to support loads (even better numbers have since been reported).  When I was starting out with Data Centers, we planned around a PUE of 2.0, meaning that the total load of the facility, including the IT load, was twice the IT load alone.  Now we plan for an average of 1.6, and that drop has been relatively recent (though certainly prior to Mr. Glanz’s article).
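The PUE figures above are easy to check; here is a minimal sketch, where the 1,000 kW IT load is an assumed round number for illustration:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_load_kw

# For an assumed 1,000 kW IT load:
legacy = pue(2000, 1000)   # 2.0  -- the old planning assumption
current = pue(1600, 1000)  # 1.6  -- today's planning average
google = pue(1210, 1000)   # 1.21 -- Google's reported figure

# Fraction of total power spent on support loads: (PUE - 1) / PUE
support_fraction = (google - 1) / google  # roughly 0.17
```

At a PUE of 2.0, half of every kilowatt is overhead; at 1.21, overhead is down to about a sixth of the total, which is why the metric moves seven- and eight-figure budgets.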

The larger issue that concerns Mr. Glanz is the blistering pace of growth in demand for data and processing.  He’s right to suggest that newfound gains in efficiency or processing power are quickly met with increased demand for processes and data.  But this is an economic issue that escapes cursory concerns about the use of limited resources.  When a process is made more efficient, it is done not to permanently wall off the use of that resource for the future, but rather to make the newly freed resource available for other productive uses.  For example, let’s say that a company earns $1.00 per process, and that each process currently consumes 1 watt of power.  That means a 1 MW facility will return $1,000,000 for every megawatt consumed.  Now, let’s say you roll out a new, faster and more efficient processor that reduces the power consumption to 0.5 watts per process.  Your facility still has a maximum power capacity of 1 MW.  Do you honestly think that the company is going to cap its usage at 0.5 MW and be satisfied with $1,000,000 of processes?  No, it is going to use the full megawatt and earn $2,000,000 by serving more customers, or by offering better service to existing customers.  Put another way, does your laptop or desktop really need a shiny new look with fancy graphics and faster processing?  We could probably write this article in DOS, or in a stripped-down text editor on Linux without an X server, and consume WAY less electricity in the process.  Those bouncy icons in the Dock of your new Apple laptop?  Wasted resources, in this view.  Who needs Angry Birds on an iPhone when we could revive Legend of the Red Dragon (those under 30 should probably use Google to learn what this is, or not, and save resources)?  I jest, of course, but these things make the experience better.
If you ever had to stay up late creating a text document in WordPerfect before it was native to Windows, staring at a blue screen with grey text, or building a spreadsheet in Lotus 1-2-3 running directly out of DOS, then you don’t need me to explain why things are better now, despite the fact that they require more resources.
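The dollars-and-watts example above works out like this (all numbers are the hypothetical ones from this post, not real facility data):

```python
def revenue_at_capacity(facility_watts: float, watts_per_process: float,
                        revenue_per_process: float = 1.00) -> float:
    """Revenue when the facility runs at its full power budget."""
    processes = facility_watts / watts_per_process
    return processes * revenue_per_process

before = revenue_at_capacity(1_000_000, 1.0)  # $1,000,000 at 1 W per process
after = revenue_at_capacity(1_000_000, 0.5)   # $2,000,000 from the same megawatt
```

Halving the watts per process doubles the output of the same 1 MW budget, which is exactly why nobody leaves the freed-up half megawatt idle.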

This brings up the other economic consideration: our view of resource consumption is immediate, not temporal.  What I mean is that if we were to gather computers from 1989 and run the kinds of processes we see today, the cost would be absolutely immense (assuming, of course, that it would have been possible in the first place).  Today, we are capable of producing far more at a fraction of what the previous cost would have been, because the cost of a given process, measured in resources consumed, falls over time.  So, as is often the case, a business function may have been developed some years in the past for systems that were far less efficient per process.  Technology has advanced, but this legacy software, which still meets the needs of a certain business function, remains in use, now running on a virtual server that is more efficient per process.  This is the inverse of the previous problem: rather than grabbing more energy to accomplish the same task, these functions actually consume increasingly less over time, freeing up resources for new business functions.

All of this is to say that this industry, its experts and visionaries, possess a sum total of knowledge and understanding greater than my own, or Mr. Glanz’s.  At the end of the day, I think the big question should be: if this stuff is so inefficient, then why is everybody doing it?  I suppose it could be that we are all just getting it wrong, including the NYT, which hosts the article in question in a Data Center.  But I think that the landscape of Data Centers today is the sum of innumerable decisions by companies and people making the choices that make the most sense, and if those choices were losing propositions with respect to resources, they would almost certainly be losing propositions with money as well.  It’s well and good to extol the virtues of efficiency and of not being wasteful.  But I think these are lessons that businesses already know all too well.
