In our last article, I showed that the current power management does not seem to work well with the Windows scheduler. We got tons of interesting suggestions and superb feedback, as well as several excellent academic papers from two universities in Germany which confirm our findings and offer a lot of new insights. More about that later. The thing that is really haunting me is that our follow-up article is long overdue. And it is urgent, because some people feel that the benchmark we used undermines all our findings. We disagree: we chose the Fritz benchmark not because it was real-world, but because it let us control the amount of CPU load and the number of threads so easily. But the fact remains, of course, that the benchmark is hardly relevant for any server. Pleading guilty as charged.
So how about SQL Server 2008 Enterprise x64 on Windows 2008 x64? That should interest a lot more IT professionals. We used our SQL Server test; you can read about our testing methods here. That is the great thing about the blog: you do not have to spend pages on benchmark configuration details :-). Hardware configuration details: a single Opteron 2435 2.6 GHz running in the server we described here. This test is as real-life as it gets: we test with 25, 50, 100, and more users who fire off queries at an average rate of one per second. Our vApus stress test makes sure that all those queries are not sent at the same time but within a certain time delta, just like real users. So this is much better than putting the CPU under 100% load and measuring maximum throughput. Remember, in our first article we stated that the real challenge for a server is to offer a certain number of users a decent response time, and preferably at the lowest cost. And the lowest cost includes the lowest power consumption, of course.
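As an aside, this kind of open-model load generation is easy to sketch. The Python below is purely illustrative (vApus itself is a far more elaborate tool; the function and parameter names here are made up): each simulated user fires roughly one query per second, jittered within its one-second slot so the queries never all land on the server at exactly the same instant.

```python
import random

def arrival_times(num_users, duration_s, rate_per_user=1.0, seed=42):
    """Generate (user, timestamp) query arrivals: each simulated user
    fires roughly one query per slot, jittered inside the slot so the
    queries are spread out over a time delta instead of arriving at once."""
    rng = random.Random(seed)
    slot = 1.0 / rate_per_user
    times = []
    for user in range(num_users):
        t = 0.0
        while t < duration_s:
            # place this user's query somewhere inside the current slot
            times.append((user, t + rng.uniform(0.0, slot)))
            t += slot
    times.sort(key=lambda ut: ut[1])  # global arrival order
    return times

queries = arrival_times(num_users=25, duration_s=10)
print(len(queries))  # 25 users x 10 one-second slots = 250 queries
```

A benchmark driver would then replay these timestamps against the server and record the response time of each query, rather than hammering the server as fast as it can.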
While I am keeping some of the data for the full article, I would like to draw your attention to a few very particular findings when comparing the "balanced" and "performance" power plans of Windows 2008. Remember, the balanced power plan is the one that should be the best: in theory it adapts the frequency and voltage of your CPU to the demanded performance with only a small performance hit. And when we looked at the throughput (queries per second) figures, this was absolutely accurate. But throughput is just throughput. Response time is what we really care about.
Let us take a look at the graph below. The response time and power usage of the server when set to "performance" (maximum clock all the time) are equal to one. The "balanced" power and response time numbers are thus relative to what we saw in "performance" mode. Response time is represented by the columns and the first Y-axis (on the left); power consumption is represented by the line and the second Y-axis (on the right).
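To be explicit about the convention: each "balanced" measurement is divided by the corresponding "performance" measurement, so "performance" is 1.0 by definition. A quick sketch with hypothetical numbers (chosen only to echo the shape of our results, not actual measurements):

```python
def relative_to_baseline(balanced, performance):
    """Express 'balanced' measurements relative to the 'performance'
    plan, which is 1.0 by definition (the convention the graph uses)."""
    return {metric: balanced[metric] / performance[metric]
            for metric in performance}

# hypothetical numbers, just to show the convention
perf = {"response_ms": 120.0, "power_w": 300.0}
bal  = {"response_ms": 240.0, "power_w": 276.0}
print(relative_to_baseline(bal, perf))
# {'response_ms': 2.0, 'power_w': 0.92}  -> 2x slower for ~8% less power
```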
The interesting thing is that reducing the frequency and voltage never delivers more than 10% power savings. One reason is that we are testing with only one six-core CPU; the power savings would obviously be better on a dual or even quad CPU system. Still, as the number of cores per CPU increases, systems with fewer CPUs become more popular. If you have been paying attention to what AMD and Intel are planning in the next month(s), you'll notice that they are adapting to that trend. You'll see even more evidence next month.
What is really remarkable is that our SQL Server 2008 server took twice as much time to respond when the CPU was using DVFS (Dynamic Voltage and Frequency Scaling) as when it was not. It clearly shows that in many cases, heavy queries were scheduled on cores which were running at a low frequency (0.8 - 1.4 GHz).
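A back-of-the-envelope model shows why this hurts so much: a query needs a fixed number of CPU cycles, so its service time is inversely proportional to the clock of whichever core the scheduler happens to put it on. The query size below is a made-up number, purely for illustration:

```python
def service_time_ms(work_mcycles, freq_ghz):
    """Time to execute a fixed amount of work at a given clock.
    Megacycles divided by GHz conveniently comes out in milliseconds."""
    return work_mcycles / freq_ghz

heavy_query = 2600.0  # hypothetical query: ~1 second of work at full clock
print(service_time_ms(heavy_query, 2.6))  # ~1000 ms at the Opteron's 2.6 GHz
print(service_time_ms(heavy_query, 0.8))  # ~3250 ms on a core idling at 0.8 GHz
```

And because DVFS reacts to load after the fact, a heavy query that lands on a downclocked core pays this penalty for at least the time it takes the power manager to notice and ramp the core back up.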
I am not completely sure whether CPU load measurements are accurate when you use DVFS (PowerNow!), but the CPU load numbers tell the same story.
The CPU load on the "balanced" server is clearly much higher. Only when the CPU load approached 90% was the "balanced" server capable of delivering the same kind of performance as when running in "performance" mode. But then, of course, the power savings are insignificant. So while power management makes no difference for the number of users you can serve, the response time those users experience might be quite different. Considering that most servers run at CPU loads far below 90%, that is an interesting thing to note.
Comments
  • Robear - Thursday, February 18, 2010 - link

    This is why I love Anandtech.

    This article seems both scientific and useful. It's not just gamers and people who like to tinker that are interested in hardware. A lot of us in corporate environments live and breathe this stuff; largely because our job performance depends on our level of knowledge. Most of the benchmarks we get are right from De11, I3M, etc. It's damned near impossible to find legitimate, independent reviews on different hardware platforms, or on stuff like power management.

    I wish the IT products and offerings had as much performance data as the mainstream. I can't tell you how many under-performing I3M servers we've had to eat.
  • mino - Monday, February 22, 2010 - link

    Yeah, great one.

    The funny thing is, for the last 5 years, I have been telling my bosses, colleagues and partners to forget about CPU power management and just run low-voltage CPUs with "Full Power" schemes.

    I was consistently put into "weird guy" position by some ass "experts" from a HW vendor with their "GREEN" powerpointery crap.

    Hopefully, Anandtech's research-backed results will relieve me of those pointless arguments.
  • v12v12 - Monday, February 22, 2010 - link

    Again... MARKETING sells and dominates budgets... Common-sense and against-the-grain LOGIC do not. That guy didn't spend hrs rehearsing his PPT presentation, just to be shown-up by the dreaded "logic guy" head-raiser in the back of the room, lol!

    (Off-topic, but related)
    Sorta like "green drives..." what a complete CON! WTF would you come out with a complete line of under-performing drives, when you COULD invest that money, time and R&D into developing 1-2 drive lines with smarts built in that would automatically (or as a settable option) adjust their performance: spindle speed, seek aggressiveness/timing, etc.? Nope! Throw logic out the door for a whole complete line of useless, slow drives... Call them "green" and watch the knaves flock!

    How about a drive that can vary itself from say 10Krpm to 5400? Naaagh, that would make too much sense. MARKETING 101—fool the masses.
  • jamesadames12 - Thursday, February 18, 2010 - link
  • ScavengerLX - Thursday, February 18, 2010 - link

    You have a somewhat comical error in the second sentence :)

  • JohanAnandtech - Thursday, February 18, 2010 - link

    Watched "little britain" (British comical show) before I wrote this blog. Looks like the subliminal messages got through. :-) (Fixed now)
  • Ratman6161 - Thursday, February 18, 2010 - link

    In a relatively small organization (35 employees serving a customer base of just over 100,000) the reason we like multi-core processors so much is for running VMWare-ESX server. We currently use dual socket machines with Intel quad cores. This gives us 8 cores for the price of a two socket system. These systems would be huge overkill for any one of our servers but we can run a lot of virtual machines on them.

    In this environment the power management capabilities of the VM's OS don't really seem relevant. Or am I missing something?
  • JohanAnandtech - Thursday, February 18, 2010 - link

    Yes, I believe you are. First of all, Hyper-V bases its power management on the same polling power manager as Windows 2008 R2, AFAIK.

    If I am not mistaken - I still have to check this - even ESX's power management is influenced by what the guest OS does in this area. Finally, there is no guarantee that ESX's power manager will be much better at this. We'll follow up on this.
  • Vigilant007 - Friday, February 19, 2010 - link

    VMware ESX and vSphere have numerous ways of doing power management that depend on the environment they are configured in, the workload, etc., and when properly configured they could arguably provide better power management than Hyper-V. VMware doesn't rely on the guest OS, to the best of my knowledge. It has a lot of things in place, in both the VMkernel and its guest additions, to try to keep everything working as optimally as possible, but a lot of it relies on the hardware and configuration. If you have 4 VMware servers set up, it would be trivial for VMware to keep 3 of them in an ultra low power state at the start of the day, when your employees are slowly getting their work day started, and then, as work progresses, use DRS to slowly power on more servers and distribute the VMs that the first one had been running across the rest as demand rises, without any downtime. I'd say you would see a substantial drop in power usage doing this. As CPUs become more elaborate - I know there are features in the most modern Intel processors that will actually power down entire cores if the workload is light enough - it all comes back to how you have it all configured. Virtualization may not give us the ability to walk on water, but with the right configuration, it's amazing how much you can get done.

    If anything, because of Hyper-V's model of using a parent OS, I would almost think that you wouldn't see as much of a benefit in power, because the parent partition has to do a lot of timing etc. to make sure that the child OSes are getting the resources they need, though that is purely speculation. I am personally not a fan of Hyper-V's model of a very, very bare hypervisor with no device support, relying on a full-blown operating system to provide the driver model, and I fully expect that by 2012 they'll have driven the driver model down into the hypervisor.
  • ncage - Thursday, February 18, 2010 - link

    You should get in contact with Intel and/or someone on the Windows Server team. I'm sure you have better connections than I do :). It'd be interesting to hear what they would have to say.
