24 Apr 14

Current Trends in Power

‘Current,’ Get it?!

http://dailycaller.com/2014/04/23/americas-power-grid-at-the-limit-the-road-to-electrical-blackouts/

This is a lightweight article on electricity with a political agenda, but it does raise some interesting questions.  Data center clients are often drawn to northern climates for moderate temperatures and free cooling.  One consideration I've personally never weighed is the level of competition for power during extreme cold, and how this might subject a data center to volatile power costs and the specter of lost power.

The author is correct that older plants that are not in compliance with environmental regulations are being shut down.  The question becomes how willing or able companies are to provide new capacity to meet demand.  Could this be a boon for ERCOT in spite of Texas' climate?

04 Mar 14

Is It Really That Bad?

http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?pagewanted=all&_r=0

Some time has passed since this article was published, so it should be less controversial now than it was when it first came out.  Needless to say, many data center designers, operators and owners had their feathers ruffled a little bit.  I’d like to add my own take on the content of this article, now that I’ve had time to count to 10 and take deep breaths (jk).

I'll preface this by saying that most everything in the article is true.  Data centers consume a LOT of energy, they often generate power from fossil fuels, and they even have tailpipes!  The notion that the power consumption is inordinate or inefficient is really a value judgment on the part of Mr. James Glanz, and is the kind of editorializing that sadly passes for news in this day and age.  But, to some extent, he is right.  Unfortunately, the article is myopic in its understanding of data centers and their function and role in our modern economy.

When I think about how much energy some of these data centers consume, I like to compare them to diesel buses in a public transportation system.  They are big, clunky, and by golly, they consume a lot of fuel.  For the sake of discussion, let's assume they get a meager 10 miles to the gallon.  That gas mileage seems abysmal in the age of hybrids and high-efficiency diesel engines.  But is it terrible for the environment?  It depends.  A hybrid is a very efficient way for a single person to get around; let's say for this discussion that it gets 50 miles to the gallon.  With a single passenger, the hybrid also gets 50 passenger-miles per gallon.  If a dirty old city bus that gets 10 miles to the gallon has five passengers, then it too gets 50 passenger-miles per gallon.  At peak traffic, a bus may hold 30 or more people, meaning that it actually gets 300 passenger-miles to the gallon.  That would be six times as efficient as the hybrid.
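To make the comparison concrete, here is a quick back-of-the-envelope sketch of the passenger-miles-per-gallon math above, using the same illustrative numbers (not real fleet data):

```python
# Passenger-miles per gallon: vehicle mpg scaled by occupancy.
# The figures below are the illustrative ones from the paragraph above.

def passenger_mpg(vehicle_mpg: float, passengers: int) -> float:
    return vehicle_mpg * passengers

hybrid       = passenger_mpg(50, 1)   # 50 passenger-miles per gallon
bus_off_peak = passenger_mpg(10, 5)   # 50 passenger-miles per gallon
bus_peak     = passenger_mpg(10, 30)  # 300 passenger-miles per gallon

print(hybrid, bus_off_peak, bus_peak)  # 50 50 300
print(bus_peak / hybrid)               # 6.0 -- six times the hybrid
```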

Something similar happens in data centers as well.  When something draws as much power and cooling as one of these facilities, it actually achieves an economy of scale.  To achieve the equivalent processing power dispersed among end users, there would be considerably more transmission and transformation loss of power, it would be less reliable, and the cooling would be considerably less efficient.  So it's important to consider power cost per process, which is far and away less in a data center than it would be on a desktop or laptop (if those things even possessed enough processing power to compare).  This is why cloud computing is starting to catch on.  You don't need a powerful machine sitting on your desk if you only use it for process-intensive purposes every once in a while.  Turning on a top-of-the-line Alienware to check email is terribly inefficient.  Data centers can allow users to outsource these processes and only use them when they need them.  With the increasing use of mobile devices as thin clients for cloud computing software, we are actually making the world far more efficient.  Imagine having the power of a supercomputer in a data center at your disposal when you need it, controlled from your iPad, which sips electricity compared to a traditional PC.

But to further address issues of efficiency, data center designers and operators are focused on a set of metrics that outline how efficient their installation is.  The reason?  Electricity is expensive, and at the quantities we are talking about, the incentives are quite real; on the order of 7 and 8 figures real.  One metric (that may be falling out of favor) is PUE, or Power Usage Effectiveness.  This is simply the ratio of the total power drawn by the facility to the power dedicated to computing purposes, as opposed to supporting functions.  The closer a facility is to 1, the closer it is to having no other loads on the building but the IT load.  Google, a couple of years back, set the bar high at 1.21, which means that only a small fraction of the total power goes to support loads (even better numbers have since been reported).  When I was starting out with data centers, we planned around a PUE of 2.0, meaning that the total load of the facility, including the IT load, was twice that of the IT load alone.  Now, we plan for an average of 1.6, and that drop has been relatively recent (though certainly prior to Mr. Glanz' article).
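As a minimal sketch of how the ratio works (with made-up round numbers, not measurements from any real facility):

```python
# PUE = total facility power / IT equipment power. Lower is better;
# a PUE of 1.0 would mean every watt goes to the IT load.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# Hypothetical 1,000 kW IT load under the planning figures mentioned above.
print(pue(2000, 1000))   # 2.0  -- the old planning baseline
print(pue(1600, 1000))   # 1.6  -- the current planning average
print(pue(1210, 1000))   # 1.21 -- the Google figure cited
```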

The larger issue that concerns Mr. Glanz is the blistering pace of growth in demand for data and processing.  He's right to suggest that newfound gains in efficiency or processing power are quickly met with increased demand for processes and data.  But this is an economic issue that escapes cursory concerns about the use of limited resources.  When a process is made more efficient, it is not done to permanently wall off the use of that resource for the future, but rather to make the newly freed resource available for other productive uses.  For example, let's say a company does something earning $1.00 per process, but that process currently consumes 1 watt of power.  That means a facility running at 1 MW will return $1,000,000.  Now, let's say you roll out a new, faster and more efficient processor that reduces the power consumption to 0.5 watts per process.  Your facility still has a maximum power capacity of 1 MW.  Do you honestly think the company is going to cap its usage at 0.5 MW and be satisfied with $1,000,000 of processes?  No, they are going to use the full megawatt and earn $2,000,000 by serving more customers, or by offering better service to existing customers.  Put another way, does your laptop or desktop really need a shiny new look with fancy graphics and faster processing?  We could probably write this article in DOS or a stripped-down text editor in Linux without an X server, and consume WAY less electricity in the process.  Those bouncy icons annoying you on the dock of your new Apple laptop?  Wasted resources in this view.  Who needs Angry Birds on an iPhone when we could revive Legend of the Red Dragon?  (Those under 30 should probably use Google to learn what this is.  Or not, and save resources.)  I jest, of course, but these things make the experience better.  If you ever had to stay up late creating a text document in WordPerfect before it was native to Windows, staring at a blue screen with grey text, or creating a spreadsheet in Lotus 1-2-3 running directly out of DOS, then you don't need me to explain why things are better now, despite the fact that they require more resources.
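The arithmetic in that example, spelled out with the same hypothetical numbers ($1.00 per process, a 1 MW facility cap):

```python
# Efficiency-rebound arithmetic from the paragraph above.
# All numbers are the post's hypothetical ones, not real figures.

FACILITY_CAPACITY_W = 1_000_000   # 1 MW power cap
REVENUE_PER_PROCESS = 1.00        # dollars earned per process

def revenue_at_capacity(watts_per_process: float) -> float:
    processes = FACILITY_CAPACITY_W / watts_per_process
    return processes * REVENUE_PER_PROCESS

print(revenue_at_capacity(1.0))   # 1000000.0 -- $1M at 1 W per process
print(revenue_at_capacity(0.5))   # 2000000.0 -- $2M once each process needs only 0.5 W
```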

This brings up the other economic consideration: our view of resource consumption is immediate rather than temporal.  What I mean is that if we were to gather computers from 1989 and run the kinds of processes we see today, the cost would be absolutely immense (assuming, of course, that it would have been possible in the first place).  Today, we are capable of producing far more at a fraction of what the previous cost would have been.  This is the result of the cost of a given process, in terms of resources consumed, being reduced over time.  So, as is often the case, a business function may have been developed some years in the past for systems that were far less efficient per process.  Technology has advanced, but this legacy software, which still meets the needs of a certain business function, remains in use, now running on a virtual server that is more efficient per process.  This is the inverse of the last problem: rather than grabbing more energy to accomplish the same task, these functions actually grab increasingly less over time, freeing up resources for new business functions.

All of this is to say that this industry, its experts and visionaries, possess a sum total of knowledge and understanding greater than mine, or Mr. Glanz's.  At the end of the day, I think the big question should be: if this stuff is so inefficient, then why is everybody doing it?  I suppose it could be that we are all just getting it wrong, including the NYT, which hosts the article in question in a data center.  But I think that the landscape of data centers today is the sum of innumerable decisions, by companies and people, that each made the most sense at the time, and in all of these cases, if they were losing propositions with respect to resources, they would almost certainly be losing propositions with money as well.  It's well and good to extol the virtues of efficiency and not being wasteful.  But I think these are lessons that businesses already know all too well.

12 Aug 13

Understanding ‘Flight Paths’

Usually, before we draw the first wall or column for a new data center, or start demolition in an existing facility, we are asked to aid in site due diligence, which includes evaluating threat assessments for a potential site. These may include nearby hazardous cargo routes, railways, nuclear power plants, and so on. Among the most common concerns during this phase is air travel, which is often framed as determining the 'flight paths' or 'glide slopes' of airplanes at a nearby airport.

As you can imagine, this is easier asked than answered, and the answers can be quite complicated. I'll start with the obvious, which I think everyone understands: airplanes navigate in three dimensions, not two. They are physically capable of moving just about anywhere air is and solids or liquids are not. That being said, there are some areas where traffic is more concentrated.

Before I get into where or what those areas are, let's discuss airspace. In the United States, all air above the ground is divided into categories, and in general these categories divide air traffic into different navigation and flight styles (any pilot reading this is probably rolling his eyes right now, but please bear with me, I'm working from scratch here). These categories are assigned a letter, A through G, with some more exotic stuff sprinkled throughout. The easiest to understand is Class A, a blanket of airspace covering the entire country that starts at 18,000 feet above mean sea level and extends up to 60,000 feet. It is reserved for aircraft flying on instruments with transponders, and it is intended to be the highway for air travel.

I should note here that aircraft may travel pretty much anywhere in this space with the guidance of air traffic control, but there are 'lanes,' known as airways (Victor airways below 18,000 feet and jet routes above), which tend to garner more traffic. These are lines that cross the country and are roughly 8 nautical miles wide, so in this sense it is possible that your site could be under a 'flight path,' and this is probably the closest thing to a defined flight path. It is also worth pointing out that every site in the country is under some kind of navigable airspace.

So, if Class A is the 'highway' of air travel, where or what are the off-ramps? Those would be Class B, C and D airspaces, which are ranked in descending order by the volume of air traffic at a given airport. For reference, DFW International Airport has Class B airspace around it, Austin has Class C, and Waco has Class D. It's hard to visualize the airspace around an airport, but the best description would be a multi-tiered wedding cake turned upside down, with the smallest tier extending from the ground at the airport to some altitude above it. The farther away you get, the higher the bottom of each tier will be, but the top usually stays the same. (For example, around DFW the bottom is at ground level and the top is at 11,000 feet. Go out a few miles and the Class B airspace starts at 2,000 feet and goes up to 11,000, and so forth.)
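As a toy illustration of that upside-down wedding cake (the numbers below are invented for the sketch, not taken from the actual DFW terminal chart), each tier can be thought of as a ring with a floor and a ceiling:

```python
# Illustrative Class B 'wedding cake' tiers. Values are made up for
# the example -- consult a sectional chart for the real ones.
class_b_tiers = [
    {"radius_nm": 10, "floor_ft": 0,     "ceiling_ft": 11_000},  # surface area over the airport
    {"radius_nm": 20, "floor_ft": 2_000, "ceiling_ft": 11_000},  # first shelf
    {"radius_nm": 30, "floor_ft": 4_000, "ceiling_ft": 11_000},  # outer shelf
]

def in_class_b(distance_nm: float, altitude_ft: float) -> bool:
    """Is a point inside any tier of this illustrative Class B airspace?"""
    return any(
        distance_nm <= tier["radius_nm"]
        and tier["floor_ft"] <= altitude_ft <= tier["ceiling_ft"]
        for tier in class_b_tiers
    )

print(in_class_b(5, 1_000))    # True  -- low and close to the airport
print(in_class_b(25, 1_000))   # False -- underneath the outer shelf
```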

Class C works much the same way, but usually with only two tiers, and Class D tends to be a cylinder. These air spaces are used to free up the area for traffic that is landing or taking off. It gives planes room to enter a landing pattern around the airport, and keeps the air relatively clear for takeoff. If you are interested in seeing how these spaces are marked off, I recommend checking out some sectional charts that identify these airspaces.

We can start to see that airspace tends to represent a concentration of air traffic at airports. But how do we know how the planes will approach the airport or enter a traffic pattern? The short answer is that we don’t. Where and how planes will fly around airports has a lot to do with conditions at that airport, which can change from day to day. One day, the traffic may land and take off out of the South, the next day, the North. If the airport is exceptionally busy, the pattern may be larger and more expansive.

I should note here that TIA-942 recommends that data centers be located near, but not in, major metropolitan areas ("not greater than 16 km/10 miles" for a Tier 4 facility) and also provides guidance on proximity to airports ("not less than 0.8 km/0.5 mile or greater than 30 miles" for a Tier 4 facility). TIA-942 also says that the facility "should not be in the flight path of any nearby airports." As we've noted, if you are under controlled airspace, or any airspace for that matter, you might be in the 'flight path' of a nearby airport, especially if you take the TIA's recommendation to locate no farther than 30 miles from an airport in a big city.
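A trivial check of the siting distances quoted above (the Tier 4 numbers as cited in this post; verify them against the current edition of TIA-942 before relying on them):

```python
# Distance screen using the TIA-942 Tier 4 figures quoted above.
# This is a sketch of the quoted guidance, not the full standard.

def meets_quoted_tier4_distances(miles_to_metro: float, miles_to_airport: float) -> bool:
    metro_ok = miles_to_metro <= 10              # "not greater than 16 km/10 miles"
    airport_ok = 0.5 <= miles_to_airport <= 30   # "not less than 0.8 km/0.5 mile or greater than 30 miles"
    return metro_ok and airport_ok

print(meets_quoted_tier4_distances(miles_to_metro=8, miles_to_airport=12))    # True
print(meets_quoted_tier4_distances(miles_to_metro=8, miles_to_airport=0.25))  # False -- too close to the airport
```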

For an area like DFW, the Class B airspace is quite expansive and would certainly cover data centers located per TIA. From a due diligence perspective, I think it is fair to say that you would not want to locate in areas of greater air traffic density, but there is no public site in America that will be free of concerns over air traffic. Furthermore, it would probably be good for organizations like the TIA, and for anyone seeking out a site, to develop a more robust understanding of air traffic when trying to assess risk.

26 Jan 12

Distributed Cooling Through Distributed Process?

A coworker of mine forwarded an article to me about Google’s data center in Finland:

http://www.wired.com/wiredenterprise/2012/01/google-finland/

It's a great read about a new facility that uses sea water which, from the pictures, appears to exchange heat directly with their chilled water system rather than relying on chillers.  What is more fascinating to me, though, is the use of a software platform called 'Spanner.'  The article discusses a data center that Google operates in Belgium, where the temperatures are mild and they seldom need expensive amounts of cooling.

In previous posts, I've mentioned how having multiple sites and multi-facility redundancy can eliminate the need for massive CapEx on expensive backup power systems at any one facility.  This scheme by Google shifts processes to another facility when the cooling bill gets too high, effectively using the planet as a kind of massive economizer: there is always cooling available somewhere around the world.  Of course, I have no doubt that this scheme carries costs.  There is a reason (probably multiple reasons) why Google chose to locate a data center in Belgium, and shifting a process temporarily might result in slightly less efficient or expedient processing due to transfer time and latency across high-speed networks.
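To make the idea concrete, here is a toy illustration of the economic intuition: shift work to whichever site currently has the cheapest cooling.  This is only a sketch of the concept described in the article, not how Google's 'Spanner' platform actually works, and the site names and costs are invented for illustration.

```python
# Toy 'follow the cheap cooling' scheduler. Real schemes weigh latency,
# capacity, data locality, and much more.

def cheapest_cooling_site(cooling_cost_by_site: dict) -> str:
    """Return the site with the lowest current cooling cost."""
    return min(cooling_cost_by_site, key=cooling_cost_by_site.get)

current_costs = {"Belgium": 0.04, "Finland": 0.02, "Dallas": 0.09}  # made-up $/kWh-equivalent
print(cheapest_cooling_site(current_costs))  # Finland
```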

But still, it showcases the classic economic principle of achieving an economy of scale.  They grow more efficient with each facility they build.  Simply amazing.

 

03 Jan 12

They Don't Build 'Em Like They Used To

2011 was a good year for me and my wife.  We bought an old home that was likely built in 1915.  We don't know that for certain, because the records for the original deed and plans were lost in a Dallas fire in 1930, so all homes in our neighborhood are listed as having been built in 1930, the date of the earliest remaining records.  We bought it because the home is made of old-growth pine, old and strong, and as they say it has 'good bones' and charm to boot.

I happen to come from a family of builders in the city of McComb, Mississippi.  They built homes and commercial buildings that were contemporaries of my house.  My father still marvels at their perfection, noting that not a crack exists in the foundations they poured and that the brickwork is instructive for any mason living today.  These things are true because the builders didn't have the benefits of construction science or building codes as we know them today.  Everything was tremendously over-built to compensate for this lack of knowledge.  To build a building like that today would cost a fortune.

Nowadays, things are built to suit, with as much economy as possible.  This is not to say that buildings are ‘cheap’, but rather that they more closely reflect a near perfect pairing of need and use with the built form.  For example, there is no need to provide a structure that would bear three times the load proposed in the occupancy of the building. The majority of the data center work I have done in the last few years has been in existing facilities, ranging in age from 60+ years to 20+.  Many people see a building that has space and proximity to critical infrastructure and the appropriate setbacks and security measures, and think that it might be suited to a data center occupancy.

I am happy to say that there isn't a (non-condemnable) structure that can't accommodate this use.  But this gets into an architectural topic that was in vogue in the late 80's and early 90's: conversion of program.  The program has quietly become more crucial as buildings have become more efficient for a given purpose.  The idea passed around back in the day was that program and occupancy were largely just social assignments to buildings that could be swapped out in compelling and interesting ways (like how a McDonald's could be converted into a house; all the essential elements are there, right?).  What this project type has taught me is that this is often easier said than done.  To me, this is the axiomatic moment: it is an exchange of economy for flexibility.  Because the building is no longer over-built, it may not immediately be suitable for a given occupancy without some work first.  If we spent our time designing and building for a multitude of programs, then a building's cost would rapidly rise.

So what does all this mean for data centers in existing buildings?  Nothing more than that there are challenges with such a demanding project type.  Most facilities don’t have built-in infrastructure, or structural capacity to fully accommodate the program.  These things can be added.  And then there is an ‘X’ factor that is unique to each building.  I always tell my superiors that something ‘funky’ will come up during the process, and without fail it happens; I have yet to pick up a pattern to the chaos.  These facilities just require a kind of patience and flexibility, because they will challenge even the most meticulous plans.  But I can’t really put into words the satisfaction I feel after a successful data center is inserted into an existing building.

08 Nov 11

Green Power’s Positives

http://www.datacenterdynamics.com/focus/archive/2011/11/winddata-ready-to-break-ground-on-wind-powered-data-center

When I was in architecture school, we had a semester exploring ideas that revolved around solar power.  The timing could not have been better, as the assignment came right on the heels of Hurricane Katrina.  At that time, solar power was not as cost competitive as it is today, and it still has a ways to go.  As part of our due diligence when considering this system, we brought in a vendor, and he said something that I will always remember: "It's nice to get a smaller electricity bill than all of my neighbors, but I know these reductions won't pay for the system, even after tax rebates.  The real value is when I have power and my neighbors don't because they rely on the power grid."

It's truly anyone's guess whether a solar power plant or wind farm will pay off in the long run.  For all anyone knows, a massive oil reserve could be discovered next year, driving down the cost of energy.  In the short term, it appears that green power simply costs more than traditional forms like coal and oil.  For data center users this matters, because a small shift in the cost per kilowatt-hour can translate into large swings in expenditure for operations that consume quite a lot of electricity.  However, in spite of the fact that green power is generally more costly at present, more and more data centers are looking hard at this technology.

If it costs more money, why the interest?  It might be for several reasons.  First, it is a power supply that can be entirely under the control of a private facility owner, and this power supply might be more reliable or add to the reliability of power feeding a facility.  This also removes uncertainties generated from competing interests for electricity.  If capacity were ever reached on a power grid, tough choices would have to be made on how power was distributed to customers; California can attest to that.

Another reason might be the cost predictability that comes along with these systems.  The cost of oil or energy can swing wildly in the face of civil unrest, military action or energy cartels.  Renewable energy generation carries a known cost upfront, with a predictable useful life and maintenance costs that are far easier to forecast than the whims of a commodities market.

We might also be looking at renewable energy sources that are for the first time approaching cost competitiveness with traditional power.  If the costs of power continue to rise, then these systems will start to yield cost savings.

In short, the power supplied by these systems might cost more, but cost isn’t really the only factor.  Having the power that you want, that you control, when and where you need it, can be a compelling factor for considering these.  It can take a lot of the volatility and unpredictability out of the cost and supply of energy.

06 Nov 11

Datacenter Dynamics 2011 in Dallas

Datacenter Dynamics held their annual Dallas conference on November 1, 2011 in Richardson.  I was very fortunate to be a guest of a product vendor, PDI.  As an aside, I was very impressed with their PowerWave Bus System; it is a very slick system, but that will have to be a post for another time.  Several talks were given, and all of them were educational, if also something of an advertising opportunity.  My personal favorite was a presentation by Cisco on their new data centers in North Texas, given by Mr. Tony Fazackarley.  What I found most enjoyable about this talk was the holistic description of the new facility, and why Cisco made some of the choices that they did with respect to cooling, backup power and disaster recovery.

The Cisco facility that was the topic of the presentation made use of a direct cooling scheme rather than a traditional raised access floor layout with remote CRAC/CRAH units cooling under the floor.  If I understood the diagrams correctly, the cooling is supplied from above and allowed to 'fall' into the cold aisles to supply the air required.  The cabinets deployed have chimneys that send hot air directly into the upper space of the data hall; there is no ceiling, just a support grid structure.  This hot air is allowed to return to an AHU or vent directly outside during economizing.  This lines up with the current industry trend of supplying cold air as close to the IT load as possible, reducing the power consumed by fans to pressurize an underfloor plenum.

The next area that I found intriguing in this facility was the choice of backup power for the data center.  This particular facility opted for rotary UPS systems paired with diesel generators in lieu of a traditional static UPS.  One of the advantages mentioned for this system is that frequent short-duration power outages can drain battery strings, cutting their life short of design expectations, whereas the rotary system continues to function properly without a reduction in useful life.  In my experience with data center design, I have not had the chance to see these systems deployed by a client; most opt for a static UPS paired with batteries.  When I asked a colleague about the mechanical complexity of these rotary systems, and the increased downtime risk compared to a battery plant replacement, he was very confident that these systems are very robust, and that while parts will go bad or break down, maintenance is a simple procedure.  From the presentation, it sounds like the major drawback is the noise generated by the engines that constantly run to turn the rotary system (I believe he mentioned a constant noise level of 110 dB).

Another interesting area of discussion was around Cisco’s disaster recovery, where many of their data centers are paired for redundancy, and smaller existing sites were converted into disaster recovery sites for critical processes.  Care is taken in site selection to ensure that a singular event will not likely take out both facilities.  All told it was a very informative presentation offering a lot of insight into how Cisco is handling its facility site selection, tier ratings and best practices.  I hope to have more posts from this year’s conference after I have a chance to review my notes (There was quite a lot!)

 

02 Nov 11

EMP Attacks

It’s fair to say that after the attacks on September 11th, 2001, our discussions on security changed forever.  I personally recall never having conceived of attacks of that nature prior to that day.  Since then, security has received a new and enthusiastic level of scrutiny.  Many people make a living thinking of scenarios that might seem unimaginable to the rest of us.  They look around and ask, ‘where are our soft spots as a country?’ and critical infrastructure always seems to fit the bill.

The concerns are straight out of a Tom Clancy novel: we are a technologically advanced nation that relies to a high degree on electronics and integrated circuitry, and some rogue force acquires an EMP device to decimate our technology and thrust us back into the stone age.  The topic of electromagnetic pulse (EMP) attacks has come up in data center design more than once, and it is often a topic of discussion at forums and consortiums on data centers.

First, some history.  The first noted EMP disturbance was actually a by-product of high-altitude nuclear detonation tests over Johnston Atoll in the Pacific.  A detonation named 'Starfish Prime' caused electrical disturbances in Hawaii, roughly 900 miles away.  The physics are complicated, but as a nuclear detonation occurs, the Compton effect produces a massive surge in electrical equipment, usually well beyond what its conductors can handle.  The result is fried, non-functional circuitry.  Naturally, this got the attention of the Department of Defense, which saw several potential applications.  Several tests were conducted until 1963, when the above-ground nuclear testing treaty was signed over concerns about radiation pollution in the Earth's atmosphere.  No EMP from nuclear ordnance has been created since.

In spite of the ban, the effects of high-altitude detonations were well understood by that time, so DoD standards and specifications were developed to protect sensitive electronics in critical buildings and war machines.  The DoD attempted to build gigantic testing facilities that would simulate the effect, the first being the Trestle at Kirtland Air Force Base, another being the EMPRESS system developed by the Navy.  From what I have read, these did simulate the effect, but could not create a power spike of the magnitude of a nuclear weapon.  They were better than nothing, but less than the real thing.

Fast forward to today, and the concern is fresh on the mind of anyone building a critical facility.  If the more robust electronics of the post-war era could not stand up to EMP, how could the delicate integrated circuitry of modern electronics ever stand a chance?  How can we protect our sensitive equipment from this kind of attack?  The general consensus today is that a Faraday cage is the best way to protect systems from this effect.  This has manifested itself in everything from very sensible sheet metal rooms and computer cabinets to the questionable installation of chicken wire in the building envelope.  It's here that I would like to make two arguments: 1) you can't really guarantee that you can protect your equipment, for several reasons, and 2) with cloud computing taking off, this will probably matter less and less for end users.

Here are the problems with trying to harden a facility against EMP.  First, there really isn't much information available to the public about this kind of weapon.  Remember, there has not been a documented nuclear EMP event since 1963, nearly 50 years ago.  Second, there is no viable way to test or commission an installation of chicken wire (or any other protection scheme).  This is especially problematic because every penetration into a chicken wire cage is a potential conductor of electricity and could compromise the integrity of the cage; that means every wire, pipe, duct or structural member.  DoD specs call for special line arresters and filters on all incoming power lines.  Finally, consider what would be required to generate this EMP.  A well-placed high-altitude nuclear detonation over Kansas City would affect most of the lower 48 states and substantial portions of Canada and Mexico.  The list of candidates able to accomplish this task is short, and it flies in the face of current theories of nuclear deterrence, namely that a nation keeps these weapons in the hopes of not using them.  None of this addresses the much larger concerns of a society thrust into darkness, with power and infrastructure in ruin.

And here's why it won't really matter for end users in the years to come.  The best shield against EMP is actually the Earth itself: the extent of the EMP is limited by the sight lines to the horizon from the point of detonation, and everything beyond is unaffected.  As companies migrate to the cloud, their information and processes will live redundantly across a wide physical geography.  If Google's American data centers went down, its European, Asian and Scandinavian centers would still run, and processes would be backed up.  This kind of thinking is not new; companies already place redundant data centers a minimum distance apart so that a single event is unlikely to take out both.  Yes, physical infrastructure would be lost, and the costs would be devastating to a facility owner, but the real value of a data center is the business processes that run inside it, and those would surely survive such an attack.
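For a sense of scale, the 'sight lines to the horizon' point is just geometry.  Here is a rough line-of-sight radius calculation (ignoring weapon physics and atmospheric effects entirely), using roughly the Starfish Prime burst altitude of about 400 km:

```python
# Rough line-of-sight ("horizon") radius for a burst at altitude h.
# Pure spherical-Earth geometry -- not a model of EMP field strength.
import math

EARTH_RADIUS_KM = 6371

def horizon_radius_km(burst_altitude_km: float) -> float:
    return math.sqrt(2 * EARTH_RADIUS_KM * burst_altitude_km)

print(round(horizon_radius_km(400)))  # ~2258 km of ground within line of sight
```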

01 Nov 11

Moving Beyond the Tier Rating

http://www.datacenterknowledge.com/archives/2011/10/31/facebook-cuts-back-on-generators-in-sweden/

This is an interesting article about an emerging resiliency strategy for large-scale IT operations.  If you read through the Tier guidelines from the Uptime Institute, you'll note that for the two upper tiers (the most resilient with respect to downtime), generator plants are considered the primary power source for the building, and all other utility feeds are just lagniappe.  Well, what happens when those utility feeds are more reliable than a generator plant?

There is a whole series of events that must occur in the proper order to ensure that IT processes are preserved from the moment a utility feed drops until the generators are brought online.  This is a very complex process, and it is why we commission data centers: we want to be sure that these backup systems come online without a hitch.  However, with so many parts that must work properly, there is a real possibility of failure.  To give you an idea in basic terms, the sequence might go something like this (a simplified code sketch follows the list):

1. The utility feed goes down

2. A static switch at a UPS throws over to battery or flywheel power temporarily

3. Generators are brought online

4. Some kind of switch gear switches the power over to generator from failed utility

5. Static switch at UPS switches back over to primary feed
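Here is that sequence as a simplified, hypothetical sketch.  The ups, switchgear, and generators objects and their methods are invented for illustration; real transfer schemes are vendor-specific and far more involved:

```python
# Hypothetical sketch of the utility-to-generator transfer sequence
# described in the list above. Object names and methods are invented
# for illustration only.
import time

def on_utility_failure(ups, switchgear, generators, ride_through_s=30):
    # 1. The utility feed goes down (this routine is the response to it).
    # 2. The UPS static switch rides through on stored energy.
    ups.transfer_to_stored_energy()            # battery or flywheel

    # 3. Generators are started and brought up to speed and voltage.
    generators.start()
    deadline = time.time() + ride_through_s
    while not generators.ready():
        if time.time() > deadline:
            raise RuntimeError("generators failed to come online before stored energy ran out")
        time.sleep(0.5)

    # 4. Switchgear transfers the building load from the failed utility
    #    feed to the generator plant.
    switchgear.transfer_to("generator")

    # 5. The UPS static switch returns to its (now generator-backed)
    #    primary feed, and stored energy begins recharging.
    ups.transfer_to_primary_feed()
```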

The equipment installed to make this happen is very, very expensive.  The generators alone can easily run into six figures for each set, and the required switchgear and UPS modules constitute a substantial part of the cost of the project.  They can also carry substantial maintenance costs.  The other factor here is that a company with redundant processes across the globe can afford to allow downtime at any given facility.  In this way, it's a bit like a car rental business having no need for insurance, because owning a whole fleet of cars IS the insurance.  The most telling part of the article is the last section, which rightly points out that this would be courting disaster for a smaller operation that is more critical to a company's function.

In the case of the power grid across the pond, to not have an outage in nearly 30 years is nothing short of amazing!  The Facebooks and Googles of the world appear to have transcended the world of tier ratings in a big way, and now they enjoy a competitive advantage with their lower cost facilities.

25 Oct 11

Life is Imperfection

Fly in the Ointment? Meet Cricket in the Epoxy.



