Caron Carlson from FierceCIO flagged the great article below from Jim Ditmore in Information Week.
Frankly, I was shocked to see how far power consumption has dropped — driven mainly by the need for consumer electronics to have longer battery life.
Net net, IT shops are able to have their cake and eat it too: processing power rides the upward curve of Moore’s Law while electrical/cooling power rides a downward curve of ever-greater efficiency.
Emphasis in red added by me.
Brian Wood, VP Marketing
———————
Server efficiencies help keep data center costs in check
Keeping data centers powered and cooled has remained a challenge in recent years as IT departments have hustled to keep up with computing demands. Over the next few years, many IT shops are likely to have to upgrade their data centers or build new ones, but that doesn’t mean their data center costs have to rise, writes Jim Ditmore, vice president of IT infrastructure and operations at Allstate.
Over the past six or seven years, server power efficiency has improved at a greater rate than processor performance has, Ditmore writes in a post at InformationWeek. Prior to 2006, upgrading to the latest server model, even if you retained the same number of servers, would require greater power and cooling resources. This recent trend of power efficiency improving faster than processing performance is likely to continue.
For IT departments, it will likely become less necessary to add new data centers to the mix. By using private cloud computing and best practices with regard to capacity and performance management, you can likely use your current number of servers to provide greater computing power, Ditmore suggests.
—————
Why Your Data Center Costs Will Drop
A recent Intel study shows that the compute load that required 184 single-core processors in 2005 can now be handled with just 21 processors, a consolidation of roughly nine to one.
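The arithmetic behind that nine-to-one figure is easy to check (a quick sketch; the 184 and 21 processor counts come from the Intel study cited above):

```python
# Intel study figures: 184 single-core processors in 2005
# vs. 21 processors handling the same compute load today.
old_processors = 184
new_processors = 21

ratio = old_processors / new_processors
print(f"Consolidation ratio: {ratio:.1f} to 1")  # ~8.8, i.e. roughly 9:1
```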
For 40 years, technology rode Moore’s Law to yield ever-more-powerful processors at lower cost. Its compounding effect was astounding: one of the best analogies is that we now have more processing power in a smartphone than the Apollo astronauts had when they landed on the moon. At the same time, though, the electrical power requirements for those processors continued to increase at a rate similar to the increase in transistor count. While new technologies (CMOS, for example) provided a one-time step-down in power requirements, each turn-up in processor frequency and density resulted in similar power increases.
In the meantime, most IT shops have experienced compute and storage growth rates of 20% to 50% a year, requiring either additional data centers or major increases in power and cooling capacity at existing centers. Since 2008, there has been some alleviation due to both slower business growth and the benefits of virtualization, which has let companies reduce their number of servers by as much as 10 to 1 for 30% to 70% of their footprint. But IT shops can deploy virtualization only once, suggesting that they’ll be staring at a data center build or major upgrade in the next few years.
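To see how consolidation on only part of the footprint plays out, here is a minimal sketch using illustrative numbers drawn from the ranges above (a 10:1 ratio applied to 50% of a hypothetical 1,000-server fleet; the helper function name is my own):

```python
def servers_after_virtualization(total, virtualized_fraction, consolidation_ratio):
    """Estimate the server count left after virtualizing part of the footprint."""
    virtualized = total * virtualized_fraction
    untouched = total - virtualized
    return untouched + virtualized / consolidation_ratio

# Illustrative: 1,000 servers, 50% of the footprint virtualized at 10:1.
remaining = servers_after_virtualization(1000, 0.50, 10)
print(remaining)  # 550.0 -- a 45% reduction in total servers
```

Even at the low end of the article's 30%-to-70% range, the one-time savings are substantial; the point is that this lever can only be pulled once.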
But an interesting thing has happened to server power efficiency. Before 2006, such efficiency improvements were nominal, represented by the solid blue line below. Even if your data center kept the number of servers steady but just migrated to the latest model, it would need significant increases in power and cooling. You’d experience greater compute performance, of course, but your power and cooling would increase in a corresponding fashion. Since 2006, however, compute efficiency (green line) has improved dramatically, even outpacing the improvement in processor performance (red lines).
This stunning shift is likely to continue for several reasons. Power and cooling costs continue to be a significant proportion of overall server operating costs. Most companies now assess power efficiency when evaluating which server to buy. Server manufacturers can differentiate themselves by improving power efficiency. Furthermore, there’s a proliferation of appliances or “engineering stacks” that pull significantly better performance from conventional technology within a given power footprint.
A key underlying reason for increases in compute efficiency is the fact that chipset technologies are increasingly driven by the requirements of consumer mobile devices. One of the most important requirements of the consumer market is improved battery life, which places a premium on energy-efficient processors. Chip and power-efficiency advances and designs in the consumer market will flow back into the corporate (server) market. An excellent example is HP’s Project Moonshot, which leverages ARM chips, previously used almost exclusively in consumer devices. Expect this power efficiency trend to continue for the next five and possibly the next 10 years.
This analysis assumes 5% to 10% business growth (translating to a need for a 15% to 20% increase in server performance/capacity). You’ll have to employ best practices in capacity and performance management to get the most from your server and storage pools, but the long-term payoff is big. If you don’t leverage these technologies and approaches, your future is the red and purple lines on the chart: ever-rising compute and data center costs over the coming years.
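The underlying dynamic can be sketched numerically: if demand for capacity grows at the top of the article's range while per-server efficiency grows a bit faster, the fleet can shrink even as delivered capacity rises. Both rates below are illustrative assumptions, not figures from the article:

```python
# Sketch of the trade-off: capacity demand growth vs. per-server
# efficiency gains. Rates are illustrative assumptions.
demand_growth = 0.20      # 20%/yr capacity need (upper end of article's range)
efficiency_growth = 0.25  # assumed annual gain in compute per server

servers = 100.0           # hypothetical starting fleet size
for year in range(5):
    servers *= (1 + demand_growth) / (1 + efficiency_growth)

print(round(servers, 1))  # 81.5 -- fewer servers despite growing demand
```

If efficiency gains instead lag demand growth, the same loop shows the fleet (and with it, data center cost) climbing year over year — the red-and-purple-line scenario.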
By applying these approaches, you can do more than stem the compute cost tide; you can turn it. Have you started this journey? Have you been able to reduce the total number of servers in your environment? Are you able to meet your current and future business needs and growth within your current data center footprint?
What changes or additions to this approach would you make?
http://www.informationweek.com/global-cio/interviews/why-your-data-center-costs-will-drop/240146724