Smart Grid Issues for Electric Vehicles Look Like Data Center Power Monitoring Complexity

Like fractals, complex systems can be self-similar.

Fractal

From Wikipedia, the free encyclopedia

[Image: The Mandelbrot set is a famous example of a fractal]

A fractal is generally "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole,"[1] a property called self-similarity. Roots of mathematical interest in fractals can be traced back to the late 19th century; however, the term "fractal" was coined by Benoît Mandelbrot in 1975 and was derived from the Latin fractus, meaning "broken" or "fractured." A mathematical fractal is based on an equation that undergoes iteration, a form of feedback based on recursion.[2]

A fractal often has the following features:[3]

• It has a fine structure at arbitrarily small scales.
• It is too irregular to be easily described in traditional Euclidean geometric language.
• It is self-similar (at least approximately or stochastically).
• It has a Hausdorff dimension which is greater than its topological dimension.
• It has a simple and recursive definition.

Because they appear similar at all levels of magnification, fractals are often considered to be infinitely complex (in informal terms). Natural objects that approximate fractals to a degree include clouds, mountain ranges, lightning bolts, coastlines, snowflakes, various vegetables (cauliflower and broccoli), and animal coloration patterns. However, not all self-similar objects are fractals—for example, the real line (a straight Euclidean line) is formally self-similar but fails to have other fractal characteristics; for instance, it is regular enough to be described in Euclidean terms.
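The iteration and recursion the Wikipedia entry mentions are easy to see in code. Here is a minimal sketch (my own illustration, not from the article) of the feedback loop z → z² + c that defines the Mandelbrot set pictured above:

```python
# Minimal sketch of the Mandelbrot iteration z -> z*z + c.
# A point c belongs to the set if the iteration stays bounded.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is outside the set
            return False
    return True              # stayed bounded for max_iter steps

print(in_mandelbrot(-1.0))   # True: -1 cycles between -1 and 0
print(in_mandelbrot(1.0))    # False: 1 escapes after a few steps
```

Zoom in anywhere on the set's boundary and the same test, applied to ever-smaller neighborhoods, keeps producing similar shapes; that is the self-similarity that makes the analogy to complex systems tempting.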

GigaOM has an article about the complexity of a smart grid electric vehicle system.

Report: IT and Networking Issues for the Electric Vehicle Market

Summary:

This Pike Research report focuses on the IT and networking requirements associated with technology support systems for the emerging Electric Vehicle (EV) market. Key areas covered include vehicle connection and identification, energy transfer and vehicle-to-grid systems, communications platforms, pricing and billing systems, and implementation issues.

The new generation of mass-produced EVs (including both plug-in and all-electrics) that will start arriving in 2010 will be able to charge at the owner’s residence, place of business, or any number of public and private charging stations. Keeping track of the ability of these vehicles and the grid to transfer energy will require transmitting data over old and new communications pathways using a series of developing and yet-to-be-written standards.

Industries that previously had little to no interaction with each other are now collaborating, determining new technologies and standard protocols and formats for sharing data. Formerly isolated networks must be able to handshake and seamlessly share volumes of financial and performance data. EV charging transactions will, for the first time, bring together platforms including vehicle operating systems and power management systems, utility billing systems, grid performance data, charging equipment applications, fixed and wireless communications networks, and web services.
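To make that integration problem concrete, here is a hedged sketch of the kind of record a single charging transaction would need to carry across those formerly isolated systems. Every field name is my own assumption for illustration, not taken from the Pike Research report or from any published standard.

```python
# Hypothetical record for one EV charging transaction; all field
# names are illustrative assumptions, not from any standard.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ChargingTransaction:
    vehicle_id: str            # from the vehicle operating system
    station_id: str            # from the charging equipment application
    meter_id: str              # from the utility's smart meter / billing system
    start: datetime
    end: datetime
    energy_kwh: float          # metered energy transferred
    price_per_kwh: float       # from real-time pricing, may vary by hour
    grid_event: Optional[str]  # e.g. a demand-response signal that paused charging

    def billed_amount(self) -> float:
        """What the utility billing system would charge for this session."""
        return round(self.energy_kwh * self.price_per_kwh, 2)
```

Even this toy record touches four of the platforms the report names: the vehicle, the charging station, the utility meter, and the pricing system.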

When you look at the complexity of the system, it looks amazingly like the issues involved in deploying a real-time energy monitoring system in the data center (see the sketch after the report outline below).

  1. Executive Summary
  2. Vehicle Connection and Identification
    1. Building Codes
    2. Battery Status
    3. Managing Vehicle-Grid Interaction
    4. Power Transfer
      1. Timed Power Transfer
  3. Communications Between Charging Locations and the Grid
    1. Home Area Networks
      1. Smart Meters
    2. Communications Channels
      1. Broadband
      2. ZigBee
      3. Powerline Networking
      4. Cellular Networks
  4. Utility Interaction with Customers
    1. Real-time Energy Pricing
    2. Enabling Vehicles to Respond to Grid Conditions
    3. Renewable Energy
    4. Future Vehicle to Grid (V2G) Applications
  5. Implementation Issues
    1. Cost
    2. Standards in Flux
    3. Clash of Multiple Industries
      1. Control
    4. Privacy
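Strip away the domain labels and the outline above reduces to the same core problem as data center energy monitoring: timestamped power readings tied to an asset, rolled up for pricing, billing, and capacity decisions. A minimal sketch, with purely illustrative names of my own:

```python
# Illustrative sketch: the same telemetry record can serve an EV
# charging point or a data center PDU branch circuit; only the
# asset identifier differs. Names are mine, not from the report.
from dataclasses import dataclass

@dataclass
class PowerReading:
    asset_id: str     # "ev-charger-42" or "pdu-3-branch-7"
    timestamp: float  # epoch seconds
    watts: float      # instantaneous draw

def kwh(readings, interval_s):
    """Integrate evenly spaced readings (watts) into kWh."""
    return sum(r.watts for r in readings) * interval_s / 3_600_000

# One hour of minute-by-minute readings at a steady 7.2 kW draw:
hour = [PowerReading("ev-charger-42", t, 7200.0) for t in range(0, 3600, 60)]
print(kwh(hour, 60))   # 7.2 kWh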

Bridging the Gap between Facilities and IT, with a Carbon Footprint Discussion

I was listening to Mike Manos's Chiller Side Chat discussion hosted by DataCenterKnowledge's Rich Miller. One point Mike made was the challenge of facilities and IT having different views of what the data center needs to be. Many of the questions were about power and cooling systems, so I suspect most of the listeners didn't know how to respond to Mike's challenge of getting facilities and IT to work together.

Later in Mike’s conversation he discussed the issue of the carbon footprint from power production as an issue for data centers and the need to think about this as part of a green data center strategy.  In fact, Mike mentioned green IT often.

I was going to ask Mike “What is an example of what you have found works to bridge the gap between facilities and IT?” Unfortunately, there were technical problems, so I couldn’t ask the question.

So I came up with my own answer: “Discuss the carbon footprint of site selection as facilities and IT personnel evaluate locations.” Getting to the lowest carbon footprint requires the teams to work together to evaluate the alternatives and the trade-offs.

I know Mike uses this strategy as he selects sites for Digital Realty Trust and its customers.

Do you?


KC Mares Asks The Tough Questions, Rewarded by PUE of 1.04

DataCenterKnowledge has a post on Ultra-Low PUE.

Designing for ‘Ultra-Low’ Efficiency and PUE

September 10th, 2009 : Rich Miller

The ongoing industry debate about energy efficiency reporting based on the Power Usage Effectiveness (PUE) metric is about to get another jolt. Veteran data center specialist KC Mares reports that he has worked on three projects this year that used unconventional design decisions to achieve “ultra-low PUEs” of between 1.046 and 1.08. Those PUE numbers are even lower than those publicly reported by Google, which has announced an average PUE of 1.20 across its facilities, with one facility performing at a 1.11 PUE in the first quarter of 2009.

KC’s post has more details.

Is it possible, a data center PUE of 1.04, today?

I’ve been involved in the design and development of over $6 billion of data centers (maybe about $10 billion now; I lost count after $5 billion a few years ago), so I’ve seen a few things. One thing I do see in the data center industry is, more or less, the same design over and over again. Yes, we push the envelope as an industry; yes, we do design some pretty cool stuff; but rarely do we sit down with our client, the end-user, and ask them what they really need. They often tell us a certain Tier level, or the availability they want, and the MWs of IT load to support, but what do they really need? Often everyone in the design charrette assumes what a data center should look like without really diving deep into what is important.

And KC asks the tough questions.

Rarely did I get the answers from the end-users I wanted to hear, where they really questioned the traditional thinking and what a data center should be and why, but we did get to some unconventional conclusions about what they needed instead of automatically assuming what they needed or wanted.

We questioned what they thought a data center should be: how much redundancy did they really need? Could we exceed ASHRAE TC9.9 recommended or even allowable ranges? Did all the IT load really NEED to be on UPS? Was N+1 really needed during the few peak hours a year or could we get by with just N during those few peak hours each year and N+1 the rest of the year?

KC provides background we wish others would share.

Now, you ask, how did we get to a PUE of 1.05? Let me hopefully answer a few of your questions: 1) yes, based on annual hourly site weather data; 2) all three have densities of 400-500 watts/sf; 3) all three are roughly Tier III to Tier III+, so all have roughly N+1 (I explain a little more below); 4) all three are in climates that exceed 90F in summer; 5) none use a body of water to transfer heat (i.e. lake, river, etc); 6) all are roughly 10 MWs of IT load, so pretty normal size; 7) all operate within TC9.9 recommended ranges except for a few hours a year within the allowable range; and most importantly, 8) all have construction budgets equal to or LESS than standard data center construction. Oh, and one more thing: even though each of these sites has some renewable energy generation, this is not counted in the PUE to reduce it; I don’t believe that is in the spirit of the metric.
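For readers newer to the metric, the arithmetic behind those numbers is simple: PUE is total facility energy divided by IT equipment energy, so a 10 MW IT load at a PUE of 1.05 spends only about 0.5 MW on everything else. A quick sketch using the 10 MW figure KC cites (the comparison points are his sites, Google's reported fleet average, and a round 2.0 often cited for legacy facilities):

```python
# PUE = total facility energy / IT equipment energy, on an annual basis.
# The 10 MW IT load is from KC's post; 1.20 is Google's reported fleet
# average, and 2.0 is a round figure often cited for legacy facilities.

IT_LOAD_MW = 10.0
HOURS_PER_YEAR = 8760
it_kwh = IT_LOAD_MW * 1000 * HOURS_PER_YEAR      # 87.6 million kWh per year

for pue in (1.05, 1.20, 2.0):
    total_kwh = it_kwh * pue                     # facility energy implied by the PUE
    overhead_mw = IT_LOAD_MW * (pue - 1)         # average non-IT power draw
    print(f"PUE {pue:.2f}: {total_kwh / 1e6:.1f} GWh/yr total, "
          f"{overhead_mw:.1f} MW average overhead")
```

Run that and the gap is stark: at 2.0 the overhead is a full 10 MW, at 1.05 it is 0.5 MW, a twenty-fold difference in non-IT power for the same IT load.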

If you want higher efficiencies and lower costs, you need to be ready to ask the tough questions.

The easy thing to do is collect the requirements of the various stakeholders and declare that this is what needs to be built, without ever asking what each requirement costs.

I know KC’s blog entry has others curious, and he has lots more appointments.

Hopefully this will wake up many others to ask the tough question: “how much does that data center requirement cost?”
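One way to operationalize that question is to attach an explicit cost to every stated requirement before it goes into the program. The line items and dollar figures below are purely illustrative placeholders of my own, not data from KC's projects:

```python
# Purely illustrative placeholders: attach a capex delta to each stated
# requirement so "what does it cost?" gets asked by construction. None
# of these figures come from KC's projects.
requirement_capex = {
    "Tier III+ instead of Tier II":                12_000_000,
    "100% of IT load on UPS":                       6_500_000,
    "N+1 cooling year-round instead of N at peak":  4_000_000,
}

for requirement, capex in sorted(requirement_capex.items(),
                                 key=lambda item: -item[1]):
    print(f"${capex:>12,}  {requirement}")
```

Once every line has a price on it, stakeholders tend to discover which requirements they merely wanted rather than needed.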


Green Data Center Degree Launched by IBM at Community College

Greener Computing has an article on IBM’s efforts to educate future data center staff.

ARMONK, NY — A new, two-year associate's degree from the Metropolitan Community College in Omaha, Neb., is being touted as the first of its kind to give students an intensive focus on designing and managing green data centers.
The program was launched today in cooperation with IBM, and will offer students coursework on virtualization and server consolidation, energy efficiency, security and compliance skills. The training center is built on IBM hardware, software and online training resources.

If you can’t make it to Omaha, Neb., the degree is available to remote students.

The online component was developed between the MCC and IBM's Academic Initiative, a project that provides online training to more than 3,000 schools worldwide. As a result, the courses in MCC's green data center program will be offered online to remote students.

"We're seeing a dramatic increase in demand here in Nebraska for specialists who understand how to help companies reduce the costs associated with running an energy-intensive data center," said Tom Pensabene, Dean of Information Technology of Metropolitan Community College. "Now, our students are getting exposure to leading edge IBM technologies, increasing their chances of being hired for jobs in this growing area."
Among the courses on offer in the program are:

• Hardware, Disaster Recovery, & Troubleshooting;
• Introduction to Data Center Management;
• Virtualization, Remote Access, & Monitoring;
• Data Center Racks & Cabling;
• Building a Secure Environment;
• Applied Data Center Management;
• Networking Security; and
• Data Center Internship.

The college’s web site for the degree is here.


Data Center Job Insecurity: Risk to Health Bigger than Losing the Job

Data center uptime is an obsession for any enterprise that has measured the revenue lost when its data center goes down. Unfortunately, that obsession can translate into mounting pressure and stress on data center staff, who know they could lose their jobs over a single mistake.

MSNBC and LiveScience cover an interesting study which finds that job stress is worse for your health than having no job at all.

Worry over job is worse for health than no job

Uncertainty can lead to health woes, depression, study finds

By Robert Roy Britt

updated 1:53 p.m. PT, Fri., Aug 28, 2009

Simply worrying about losing your job can cost you your health, a new investigation of data from two long-term studies finds.

Surprisingly, the effect is worse than actually losing your job, the research suggests.

"Based on how participants rated their own physical and mental health, we found that people who were persistently concerned about losing their jobs reported significantly worse overall health in both studies and were more depressed in one of the studies than those who had actually lost and regained their jobs recently," said Sarah Burgard, a sociologist at the University of Michigan.

What I challenge any data center operator to do is measure the mental health of its staff as an indicator of the potential risk to the site. Human error is still the largest cause of data center outages, and job stress is a leading contributor to those errors.

The article continues with another point, about the toll of a tough job.

If you're feeling good about your job's prospects, here's one more thing to stress about: Other research has shown that the stress of a tough job — long hours and high pressure to perform — can also ruin your health.

This reminds me that the good data center managers I’ve met have a genuine concern for their employees’ well-being.
