Original Link: https://www.anandtech.com/show/3522



In our last article about server CPUs, I wrote: 
 
"the challenge for AMD and Intel is to convince the rest of the market - that is 95% or so - that the new platforms provide a compelling ROI (Return On Investment). The most productive or intensively used servers in general get replaced every 3 to 5 years. Based on Intel's own inquiries, Intel estimates that the current installed base consists of 40% dual-core CPU servers and 40% servers with single-core CPUs."
 
At the end of his presentation, Pat Gelsinger (Intel) makes the point that replacing nine servers based on the old single-core Xeons with one Xeon X5570-based server will result in a quick payback: according to Intel, the lower energy bill will pay back your investment in 8 months.
 
Why these calculations are quite optimistic is beyond the scope of this blog post, but suffice it to say that SPECjbb is a pretty bad benchmark to base ROI calculations on (it can be "inflated" too easily) and that Intel did not consider the amount of work it takes to install and configure those servers. However, Intel does have a point that replacing the old power-hungry Xeons (irony...) will deliver a good return on investment.
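
To make the arithmetic concrete, here is a back-of-the-envelope payback sketch in Python. Every figure in it (power draw, PUE, energy price, server cost) is my own illustrative assumption, not an Intel number:

```python
# Back-of-the-envelope server-consolidation payback estimate.
# All figures below are illustrative assumptions, not Intel's numbers.

OLD_SERVERS = 9          # single-core Xeon boxes being retired
OLD_POWER_W = 350        # assumed average draw per old server (watts)
NEW_POWER_W = 300        # assumed average draw of one Xeon X5570 server
ENERGY_PRICE = 0.10      # assumed electricity price (USD per kWh)
PUE = 2.0                # assumed datacenter overhead (cooling, UPS, ...)
NEW_SERVER_COST = 4000.0 # assumed street price of the new server (USD)

HOURS_PER_MONTH = 24 * 365 / 12

saved_watts = OLD_SERVERS * OLD_POWER_W - NEW_POWER_W
saved_kwh_per_month = saved_watts / 1000 * HOURS_PER_MONTH * PUE
saved_usd_per_month = saved_kwh_per_month * ENERGY_PRICE

payback_months = NEW_SERVER_COST / saved_usd_per_month
print(f"Energy savings: ~{saved_usd_per_month:.0f} USD/month")
print(f"Payback time:   ~{payback_months:.1f} months")
```

With these assumptions the payback lands in the same ballpark as Intel's eight-month claim, but notice how sensitive the result is to the power figures and the 9:1 consolidation ratio: that is exactly why SPECjbb-derived ROI numbers are so easy to inflate.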
 
In contrast, John Fruehe (AMD) points out that you could upgrade dual-core Opteron-based servers (the ones with four digits in their model numbers and DDR2 memory) with hex-core AMD "Istanbul" CPUs. I must say I have encountered few companies that would actually bother upgrading CPUs, but his argument makes some sense, as the new CPU still uses the same kind of memory: DDR2. As long as your motherboard supports it, you might just as well upgrade the BIOS, pull out your server, replace the 1 GB DIMMs with 4 GB DIMMs and swap the dual-cores for hex-cores instead of replacing everything. That seems more cost effective than redoing the cabling, reconfiguring a new server and so on...
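
To see why, here is a rough upgrade-versus-replace cost sketch in Python. All prices and the labor estimate are my own hypothetical assumptions for a 4-socket DDR2 Opteron box, not actual quotes:

```python
# Upgrade-versus-replace cost comparison for a 4-socket DDR2 Opteron
# server. All prices and hours below are hypothetical assumptions.

SOCKETS = 4
DIMM_SLOTS = 32

ISTANBUL_CPU_USD = 1000.0  # assumed price per hex-core Opteron
DDR2_4GB_DIMM_USD = 100.0  # assumed price per 4 GB DDR2 DIMM
upgrade_cost = SOCKETS * ISTANBUL_CPU_USD + DIMM_SLOTS * DDR2_4GB_DIMM_USD

NEW_SERVER_USD = 12000.0   # assumed price of a comparable new 4-socket box
MIGRATION_HOURS = 16       # assumed racking, cabling and reinstall effort
HOURLY_RATE_USD = 75.0
replace_cost = NEW_SERVER_USD + MIGRATION_HOURS * HOURLY_RATE_USD

print(f"In-place upgrade: ~{upgrade_cost:,.0f} USD")
print(f"Full replacement: ~{replace_cost:,.0f} USD")
```

The exact numbers will differ from shop to shop, but as long as the old chassis, power supplies and motherboard can stay in the rack, the upgrade path avoids both the capital cost of a new box and most of the migration labor.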
 
There were two reasons why few professional IT people bothered with CPU upgrades:
  1. You could only upgrade to a slightly faster CPU. Swapping a CPU for a higher-clocked but otherwise similar one rarely gave a performance increase worth the time. For example, the Opteron launched at 1.8 GHz, and most servers you could buy at the end of 2003 could not be upgraded beyond 2.4 GHz.
  2. You could not make use of more CPU performance. With the exception of the HPC crowd, a faster CPU rarely delivered anything more than an even lower CPU utilization percentage. So why bother?
AMD also has a point that both things have changed. The first reason may no longer be valid if the hex-cores do indeed work in motherboards designed for dual-cores. The second reason is no longer valid because virtualization lets you use the extra CPU horsepower to consolidate more virtual servers on one physical machine, provided of course that the older server allows you to replace those old 1 GB DIMMs with a lot of 4 GB ones. I checked the HP DL585 G2, for example, and it does allow up to 128 GB of DDR2.
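
To illustrate how that changes the math, here is a rough consolidation sketch in Python. The VM sizing and oversubscription ratio are my own assumptions; the 32 DIMM slots are consistent with the 128 GB DDR2 maximum mentioned above using 4 GB DIMMs:

```python
# Rough VM-consolidation capacity for a 4-socket DDR2 Opteron server
# before and after an in-place CPU/memory upgrade. VM sizing figures
# are illustrative assumptions.

SOCKETS = 4
DIMM_SLOTS = 32        # consistent with a 128 GB max using 4 GB DIMMs
VM_RAM_GB = 2.0        # assumed memory per virtual machine
VM_VCPUS = 1           # assumed vCPUs per virtual machine
VCPUS_PER_CORE = 2.0   # assumed vCPU oversubscription ratio

def max_vms(cores_per_socket: int, dimm_gb: int) -> int:
    """VMs supported by the scarcer resource: memory or CPU."""
    ram_limit = DIMM_SLOTS * dimm_gb / VM_RAM_GB
    cpu_limit = SOCKETS * cores_per_socket * VCPUS_PER_CORE / VM_VCPUS
    return int(min(ram_limit, cpu_limit))

before = max_vms(cores_per_socket=2, dimm_gb=1)  # dual-cores, 1 GB DIMMs
after = max_vms(cores_per_socket=6, dimm_gb=4)   # hex-cores, 4 GB DIMMs

print(f"Before the upgrade: ~{before} VMs per host")
print(f"After the upgrade:  ~{after} VMs per host")
```

Under these assumptions the same chassis goes from roughly 16 to roughly 48 virtual machines, which is the kind of headroom that makes an in-place upgrade worth the trouble.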
 
So what is your opinion? Will replacing CPUs and adding memory to extend the lifetime of servers become more common? Or should we stick to replacing servers anyway?
 