of data centers - availability, reliability and sheer horsepower (performance) - are at odds with the conservation-based assumptions of sustainability. Through experimentation with available tools, and the promise of new functionality in subsequent "greener" releases, data center operators and their providers (as well as analysts and other observers) are working on a number of emerging practices that may result in an honorable compromise between performance and sustainability. In broad strokes, the emerging set of greener practices suggests organizations should build on long-established data center disciplines. Here are nine steps to start:

1. Take a broad, holistic view of the organization and its operations in assessing energy use, and factor energy and cooling cost reduction into life cycle management.

2. Consider power efficiency as a key placement attribute in scheduling server workloads.

3. Balance energy consumption and utilization when picking platforms. CPU utilization averages 90 percent on a mainframe but only 5 percent to 15 percent on servers. At the processor level, activate "throttle down" features to reduce energy consumption, and consider migration to multicore processors, which provide better performance at lower clock speeds.

4. Compare blade servers and rack servers on the basis of computing capacity and power and cooling requirements - not on space. The calculations, not to mention operational considerations, are complex and deserve disciplined analyses. For example, blade and virtualization technologies result in denser data centers that require more power and more cooling, but server consolidation through virtualization can result in significant energy savings.

5. Measure and monitor the energy consumption of servers at least annually. Choose more efficient power supplies for servers and recognize that redundancy and load sharing strategies raise both uptime rates and energy use. Many rack servers ship with supplies that are 60 percent to 70 percent efficient - but the Energy Star 80 Plus requirement, which requires power supplies in computers and servers to be at least 80 percent energy efficient, can save an estimated 301 kilowatt-hours per server annually.

6. Use the operating system to ration the voltage going to the processor, particularly as new power management features in new operating system releases provide granular controls.

7. Take advantage of metrics and models developed by industry initiatives such as The Green Grid to improve the energy efficiency of existing data centers and plan more effectively for new facilities.

8. Adapt performance dashboards to reflect sustainability measures, including metrics such as energy efficiency, emission and waste reduction, and supply chain and staff management.

9. Remember the classically simple (but often overlooked) answer: When not in use, turn it off.
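The consolidation logic behind steps 3 and 4 can be sketched with invented figures - ten physical servers averaging 10 percent CPU utilization, replaced by virtualization hosts sized for a 60 percent target. Every number here (server count, utilization rates, wattages) is an illustrative assumption, not a figure from this article:

```python
import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """How many virtualization hosts absorb the aggregate load, assuming
    equal per-server capacity and a conservative target utilization."""
    aggregate_load = n_servers * avg_util  # measured in 'server capacities'
    return math.ceil(aggregate_load / target_util)

def annual_kwh(n_machines: int, watts_each: float) -> float:
    """Energy drawn per year by n machines at a steady wattage, in kWh."""
    return n_machines * watts_each * 24 * 365 / 1000.0

before = annual_kwh(10, 400.0)        # ten lightly loaded 400 W servers
hosts = hosts_needed(10, 0.10, 0.60)  # aggregate load of 1.0 server -> 2 hosts
after = annual_kwh(hosts, 500.0)      # busier hosts assumed to draw more each

print(f"{hosts} hosts replace 10 servers; {before - after:,.0f} kWh/yr saved")
```

Real sizing exercises must also account for memory, I/O and failover headroom - which is why the article calls the calculations "complex" - but the direction of the savings holds.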
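The power-supply arithmetic in step 5 can also be sketched. The 300 W load and 24x7 duty cycle below are assumptions for illustration; the article's 301 kWh-per-server estimate corresponds to a lighter average load, so treat the output as directional rather than exact:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def annual_input_kwh(load_watts: float, efficiency: float) -> float:
    """AC energy drawn from the wall per year for a given DC load.
    Efficiency = DC output power / AC input power."""
    input_watts = load_watts / efficiency
    return input_watts * HOURS_PER_YEAR / 1000.0  # watt-hours -> kWh

load_w = 300.0                               # assumed steady DC load per server
baseline = annual_input_kwh(load_w, 0.65)    # 65%: midpoint of the 60-70% range
efficient = annual_input_kwh(load_w, 0.80)   # 80%: the 80 Plus minimum
savings = baseline - efficient

print(f"Baseline (65%): {baseline:,.0f} kWh/yr")
print(f"80 Plus (80%):  {efficient:,.0f} kWh/yr")
print(f"Savings:        {savings:,.0f} kWh/yr per server")
```

Multiplied across hundreds or thousands of servers, even a few hundred kWh per machine per year becomes a meaningful line item in both the energy bill and the cooling load.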

Clearly data center optimization is a much larger undertaking than even the most refined list can capture.

Enter The Green Grid, a not-for-profit industry consortium focused on "advancing energy efficiency in data centers and computing ecosystems." It has completed the key elements of its technology road map, and its first priority is developing metrics for benchmarking, measuring and optimizing data center power consumption.
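The Green Grid's best-known metrics are Power Usage Effectiveness (PUE) and its reciprocal, Data Center infrastructure Efficiency (DCiE). A minimal sketch of the calculation, with invented facility readings:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; older facilities often run near 2.0 or above."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE,
    expressed as the fraction of facility power that reaches IT gear."""
    return it_equipment_kw / total_facility_kw

# Illustrative (invented) readings: 1,000 kW at the utility meter,
# 500 kW delivered to servers, storage and network equipment.
total_kw, it_kw = 1000.0, 500.0
print(f"PUE  = {pue(total_kw, it_kw):.2f}")   # half the power does IT work
print(f"DCiE = {dcie(total_kw, it_kw):.0%}")  # the rest goes to cooling, power
                                              # distribution losses and lighting
```

Tracking these ratios over time - rather than a single snapshot - is what turns them into the benchmarking tool the consortium intends.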

The consortium was welcomed as an aggregation point as the industry and data center operators struggled to come to terms with green IT. Even at that, some analysts worried that the consortium's ties to industry might hold it back from the kind of innovation needed to re-imagine the data center as part of a sustainable ecosystem.

It is worth noting that The Green Grid aspires not only to help tune up existing data centers, but also to help planners make smarter decisions when and if they are able to rethink data centers and build them from scratch.

Lean and Green

Chopra looks to the private sector for the needed innovation to fundamentally reform data centers. "We need to encourage our vendor community to build green-friendly data centers and

Paul W. Taylor  |  Chief Content Officer, e.Republic Inc.

Paul W. Taylor, Ph.D., is the editor-at-large of Governing magazine. He also serves as the chief content officer of e.Republic, Governing's parent organization, as well as senior advisor to the Governing Institute. Prior to joining e.Republic, Taylor served as deputy Washington state CIO and chief of staff of the state Information Services Board (ISB). Dr. Taylor came to public service following decades of work in media, Internet start-ups and academia. He is also an affiliated expert with the non-profit, non-partisan Information Technology and Innovation Foundation (ITIF) in Washington, D.C.