Archive for October, 2011

25 Oct 11

Life is Imperfection

Fly in the Ointment? Meet Cricket in the Epoxy.

18 Oct 11

Tornadoes and Other High Winds

It goes without saying that data center clients are always looking to locate in an area that fits certain favorable criteria, not the least of which is a lack of major weather events that might contribute to downtime. If you look at the SuperNAP website, they are very proud of the fact that they are located in a part of the country (Las Vegas) with virtually no disruptive weather events: little rain, and essentially no hurricanes, earthquakes, volcanic activity, or tornadoes. Of course, weather can’t be the only deciding factor; others might include tax incentives, the cost of utilities, or proximity and connectivity to regional or national markets. For all the clients that cannot locate their entire IT operation in Las Vegas, weather events will eventually come up as part of the risk assessment.

I am by no means an expert on probability, nor am I an insurance adjuster, but it is still an interesting exercise to think through. If you have ever played roulette, you have probably seen the sign above the wheel that shows the results of the last few spins. It should be obvious to all of us that it is purely a game of chance, and that the numbers on the sign are not an indicator that a certain number or color is ‘due’ because it keeps showing up or hasn’t shown up at all. Even if the last three spins all came up 18, the odds that the next spin will be 18 are still one in 38 (on an American wheel, for you gambling enthusiasts). So it is with tornadoes. We look at historical data, and to some degree we can say with confidence that particular areas of the country will receive more tornadoes than others. Beyond that, it gets fuzzier, and the fuzziness stems from an evolving process for studying and understanding tornadoes.
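For the curious, here is a small Python sketch of that independence argument. The choice of 18 and the number of simulated spins are arbitrary, and the wheel is idealized; the point is only that the frequency of a number immediately after that same number is no different from its overall frequency.

```python
import random

POCKETS = 38  # American wheel: numbers 1-36 plus 0 and 00 (37 stands in for '00')

def spin() -> int:
    """One spin of an idealized American roulette wheel."""
    return random.randrange(POCKETS)

N = 1_000_000
spins = [spin() for _ in range(N)]

# Overall frequency of 18, and frequency of 18 immediately after a previous 18.
overall = sum(s == 18 for s in spins) / N
after_18 = [b for a, b in zip(spins, spins[1:]) if a == 18]
conditional = sum(s == 18 for s in after_18) / len(after_18)

print(f"P(18) overall:        {overall:.5f}")
print(f"P(18 | last was 18):  {conditional:.5f}")
print(f"Theoretical 1/38:     {1/38:.5f}")
```

Both simulated frequencies land right around 1/38, history or no history.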

The currently accepted measure of tornado severity is the Enhanced Fujita scale, based on research from the National Weather Service, the American Meteorological Society, and the Wind Science and Engineering Research Center at Texas Tech. With time and more exhaustive research this scale will become more accurate and useful; however, I see two major issues with the scale as it exists today. First, the scale is based on observed damage, not on direct measurement of the tornado itself: investigators look at the damage and then work backward to the kind of wind that would have been required to cause it. Second, our records of tornado events are incomplete and inconsistent before 1950, when these events began to be tracked in a national registry.

I’ll address wind speed first. The wind speed factor is crucial because it is the end result we are trying to understand. When we design a data center, the wind speed it should resist is among the first decisions made. We look at the frequency of tornadoes of all ratings and pick a design wind speed in excess of what the local building code requires. The danger here is that, since the Enhanced Fujita scale is based on observed damage, it is conceivable that an owner either overbuilds, wasting resources that could have gone into critical infrastructure, or underestimates the risk, exposing the facility to more risk than was assumed based on historical data.
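To show the shape of that decision, here is a sketch that maps the published Enhanced Fujita 3-second gust estimates to a hypothetical design margin. The margin, the target rating, and the cap used for the open-ended EF5 range are placeholders for illustration only; an actual design wind speed comes from the governing building code and the owner’s risk criteria, not from a dictionary lookup.

```python
# Approximate 3-second gust upper bounds (mph) for the Enhanced Fujita scale.
# EF5 is open-ended (>200 mph); 230 below is only a placeholder cap.
EF_GUST_UPPER_MPH = {
    "EF0": 85,
    "EF1": 110,
    "EF2": 135,
    "EF3": 165,
    "EF4": 200,
    "EF5": 230,
}

def illustrative_design_speed(target_rating: str, margin: float = 1.10) -> float:
    """Pick a design gust speed some margin above the top of a target EF rating.

    Purely illustrative: a real project starts from the code wind map and the
    owner's risk tolerance, then adds whatever margin the owner will pay for.
    """
    return EF_GUST_UPPER_MPH[target_rating] * margin

print(f"Design for EF3 survivability: ~{illustrative_design_speed('EF3'):.0f} mph gust")
```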

And this brings up the second issue: historical data. Consider this: America is a large country that is not densely populated from coast to coast; there are gaps between population centers. If a tornado were to strike one of these areas without causing any noted damage, would there be anything from which to assess a wind speed, or would the tornado even be recorded at all? I suspect that the recorded data probably under-counts the number of tornadoes in a given state or county. The other question is how accurately past accounts of damage were assessed on the original Fujita scale.

Ultimately, the decision rests with the owner. Just as with investments, past performance is no guarantee of future returns, and the decision will come down to the owner’s tolerance for risk at that facility. I have heard of some owners locating in bunkers, and of others who accept more risk at each individual facility by distributing their processes across several of them.

13 Oct 11

Fascinating Discussion

This week, I was discussing cloud computing with an equipment vendor and an electrical engineer. One of the product reps who had been in the room just minutes before had casually stated that ‘virtualization would eliminate or reduce the role of tier ratings in data center design moving forward.’ That was a very bold statement, but there is some merit to it. Cloud computing means externalizing or outsourcing processing to a server or facility that is not local to the user, so the fact that a process could be sent to several locations simultaneously might suggest an increased level of redundancy for that process.

However, the designer in me still thinks that the physical connection to the cloud might require more redundancy. Think about it for a moment. If you run an office with only a single telecom line entering the building, and that line, or any part of the network responsible for delivering your data to the remote cloud location, is severed, then there is a failure regardless of how redundant the cloud may be. It might mean that we have externalized the risk to areas that are no longer under our direct control. A data center can bring multiple fiber providers into a facility, along with multiple utility feeds; these things help ensure there is no disruption to the critical IT processes. The other issue at work is who is going to provide the capital for multiple cloud sites, and at what cost? Does the cost of redundancy for systems decrease with the prospect of spreading the processing around to multiple sites?
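To put rough numbers on that single-line worry, here is a small sketch of how a second, truly independent telecom entry changes availability. The 0.999 figure is purely hypothetical, and the independence assumption is exactly what a shared conduit or a common last-mile trench breaks.

```python
def parallel_availability(path_availability: float, paths: int) -> float:
    """Availability of a service reachable over any one of several independent paths.

    Assumes the paths fail independently, which real fiber routes often do not.
    """
    return 1.0 - (1.0 - path_availability) ** paths

# Hypothetical numbers, only to show how quickly a second diverse entry helps.
single = parallel_availability(0.999, 1)
dual = parallel_availability(0.999, 2)
print(f"One telecom entry:   {single:.6f}")
print(f"Two diverse entries: {dual:.6f}")
```

The cloud can be as redundant as it likes; if every path out of the building shares a fate, the math above collapses back to the single-entry case.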

If we rely on the idea of ‘2N’ as provided by multiple data center sites, does that mean we currently accept the possibility of a single point of failure in data delivery? It’s a high-level discussion that probably depends on the particulars of the data processing involved. But I just can’t see a future where tier ratings aren’t a factor in design anymore.

13 Oct 11

You Learn Something New Every Day

In data centers, it is common to deploy power and cooling systems in a redundant configuration. That is to say, more equipment is installed than is actually required, so that if one system fails there is at least one other there to pick up the load. This is usually expressed as ‘N’ for the number of units required, ‘N+1’ for the needed number of units plus one additional, ‘2N’ for twice the number required, and so on. Well, I learned recently on a data center project that I am trying to wrap up that running all of the cooling systems in tandem can actually produce energy savings, partly because of the energy required to start fans and other equipment. So even though the facility has N plus a number of redundant chillers or CRACs, all units run at the same time and at a lower capacity, sharing the load.
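Here is a quick sketch of the unit counts and per-unit loading under each scheme; the four-chiller figure is a made-up example, just to make the arithmetic concrete.

```python
def installed_units(required: int, scheme: str) -> int:
    """Units installed for a given redundancy scheme ('N', 'N+1', '2N')."""
    if scheme == "N":
        return required
    if scheme == "N+1":
        return required + 1
    if scheme == "2N":
        return 2 * required
    raise ValueError(f"unknown scheme: {scheme}")

# If every installed unit runs and shares the load equally, each one runs at
# a fraction of its capacity instead of sitting idle as a dedicated standby.
required = 4  # hypothetical: four chillers needed to carry the design load
for scheme in ("N", "N+1", "2N"):
    units = installed_units(required, scheme)
    print(f"{scheme:>3}: {units} units installed, each at ~{required / units:.0%} load")
```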

I should caution that this is specific to a particular project, and that the input of a mechanical engineer (which I am not) is required to make the ultimate determination. I am told this is a common energy-saving strategy for larger data centers with redundant chiller plants or large CRAH units. It came as a surprise to me after meeting clients who advocated alternating equipment on a schedule to balance run time, which must seem preferable to designating a primary unit, wearing it out, and then having to rely on the backup while the primary is maintained or replaced. I’ll be interested to learn more.

13 Oct 11

Smaller Scales Welcome the Proprietary

It seems that the smaller the overall design and deployment of a data center, the more focused and specialized the equipment and best practices can become.  I am currently working on a small data room (calling it a data center might not accurately convey the size of the project) that will utilize a proprietary cooling strategy, as well as a proprietary power delivery strategy for the racks that are being deployed.

The project was planned at 5 kW per rack, which to me seems like a lot of power and density, but in a couple of years, if current industry trends in data processing continue, it might just be a middle-of-the-pack deployment. In a larger enterprise facility, this kind of power consumption would require a rather large chiller plant and some serious power delivery and backup systems. But the solution that has been proposed is a packaged UPS/battery unit that deploys nicely within the data hall space and is modular for future growth if needed. The interface and manual switching are delightfully simple, so much so that anyone with enough patience to study the simplified one-line diagram could safely manage this hardware.
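Just to make the density figure concrete, a quick back-of-the-envelope calculation; the rack count below is a made-up number for illustration only, not the size of this room.

```python
# Rough IT load from per-rack density; UPS and cooling capacity are then sized
# above this figure, with whatever growth margin the modular packages allow.
racks = 20          # assumed for illustration
kw_per_rack = 5.0   # the planned density from the project
it_load_kw = racks * kw_per_rack
print(f"IT load: {it_load_kw:.0f} kW for {racks} racks at {kw_per_rack} kW each")
```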

The cooling proposed is equally clever. Small CRAC units that fit into standard cabinets provide cool air right where it is needed most, eliminating the need for a more traditional underfloor plenum pressurized by CRAC/CRAH units located remotely from the servers. The space below the raised floor can now be reserved for refrigerant lines and whips, leaving it clean and free of obstructions. This is very efficient at a small scale, with a small volume to cool. To ‘top’ it all off, they have decided to deploy a hot-air containment system to keep the data room at a pleasant, workable temperature. It may be small, but it is a microcosm that perfectly reflects the current trends touted by the larger data centers splashed across the headlines of technology news outlets.