An Overview of Server DIMM types

Typically, desktop and mobile systems use Unbuffered DIMMs (UDIMMs). The memory controller inside your CPU addresses each memory chip of your UDIMM individually and in parallel. However, each memory chip places a certain amount of capacitance on the memory channel and thus weakens the high frequency signals through that channel. As a result, the channel can only take a limited number of memory chips.

This is hardly an issue in the desktop world. Most people will be perfectly happy with 16GB (4x4GB) and run them at 1600 to 2133MHz while overvolting the DDR3 to 1.65V. It's only if you want to use 8GB DIMMs (at 1.5V) that you start to see the limitations: most boards will only allow you to install two of them, one per channel. Install four of them in a dual channel board and you will probably be limited to 1333MHz. But currently very few people will see any benefit from using 32GB of slower DDR3 instead of 16GB of fast DDR3 (and you'd need Windows 7 Professional or Ultimate to use more than 16GB).

In the server world, vendors tend to be a lot more conservative. Running DIMMs at an out-of-spec 1.65V will shorten their life and drive the energy consumption a lot higher. Higher power consumption for 2-3% more performance is simply insane in a rack full of power hogging servers.

Memory validation is a very costly process, another good reason why server vendors like to play it safe. You can use UDIMMs (with ECC most of the time, unlike desktop DIMMs) in servers, but they are limited to lower capacities and clockspeeds. For example, Dell's best UDIMM is a 1333MHz 4GB DIMM, and you can only place two of them per channel (2 DPC = 2 DIMMs Per Channel). That means that a single Xeon E5 cannot address more than 32GB of RAM when using UDIMMs. In the current HP servers (Generation 8), you can get 8GB UDIMMs, which doubles the UDIMM capacity to 64GB per CPU.

In short, UDIMMs are the cheapest server DIMMs, but you sacrifice a lot of memory capacity and a bit of performance.

RDIMMs (Registered DIMMs) are a much better option for your server in most cases. The best RDIMMs today are 16GB running at 1600MHz (800MHz clock DDR). With RDIMMs, you can get up to three times more capacity: 4 channels x 3 DPC x 16GB = 192GB per CPU. The disadvantage is that the clockspeed throttles back to 1066MHz.

If you want top speed, you have to limit yourself to 2 DPC (and 4 ranks). With 2 DPC, the RDIMMs will run at 1600MHz, and each CPU can then address up to 128GB (4 channels x 2 DPC x 16GB). That is still twice as much as with UDIMMs, while running at a 20% higher speed.

RDIMMs add a register, which buffers the address and command signals. The integrated memory controller in the CPU sees the register instead of addressing the memory chips directly. As a result, the number of ranks per channel is typically higher: the current Xeon E5 systems support up to eight ranks of RDIMMs per channel. That is four dual ranked DIMMs per channel (but you only have three DIMM slots per channel) or two quad ranked DIMMs per channel. If you combine quad ranks with the largest memory chips, you get the largest DIMM capacities. For example, a quad rank DIMM built with 4Gbit chips is a 32GB DIMM (4 ranks x 16 chips x 4Gbit = 256Gbit = 32GB). So in that case we can get up to 256GB: 4 channels x 2 DPC x 32GB. Not all servers support quad ranks, though.
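The rank arithmetic above can be sketched in a few lines of Python. This is a hypothetical helper, not vendor tooling; it assumes a 64-bit data bus populated with x4 chips (16 chips per rank) and ignores the extra ECC chips:

```python
def dimm_capacity_gb(chip_gbit, chips_per_rank, ranks):
    """Capacity of one DIMM in GB: chip density x chips per rank x ranks."""
    return chip_gbit * chips_per_rank * ranks / 8  # 8 bits per byte

def socket_capacity_gb(channels, dpc, dimm_gb):
    """Maximum memory a single CPU socket can address."""
    return channels * dpc * dimm_gb

# Quad rank DIMM with 4Gbit x4 chips: 4Gbit x 16 chips x 4 ranks = 32GB
quad_rank_dimm = dimm_capacity_gb(chip_gbit=4, chips_per_rank=16, ranks=4)
print(quad_rank_dimm)  # 32.0

# A Xeon E5 socket: 4 channels, 2 quad rank DIMMs per channel
print(socket_capacity_gb(channels=4, dpc=2, dimm_gb=quad_rank_dimm))  # 256.0
```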

 

LRDIMMs can do even better. Load Reduced DIMMs replace the register with an Isolation Memory Buffer (iMB™ by Inphi) component. The iMB buffers the command, address, and data signals, and isolates all electrical loading (including the data signals) of the memory chips on the (LR)DIMM from the host memory controller. Again, the host controller sees only the iMB and not the individual memory chips. As a result, you can fill all DIMM slots with quad ranked DIMMs. In practice this means you get 50% to 100% more memory capacity.
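Putting the per-socket maxima from the paragraphs above side by side (all figures are taken from the text; the loop is just a quick sketch):

```python
# channels, DIMMs per channel, largest DIMM size (GB) for each option
configs = {
    "UDIMM, 4GB @ 2 DPC": (4, 2, 4),
    "UDIMM, 8GB @ 2 DPC": (4, 2, 8),
    "RDIMM, 16GB @ 3 DPC (1066MHz)": (4, 3, 16),
    "RDIMM, 16GB @ 2 DPC (1600MHz)": (4, 2, 16),
    "Quad rank RDIMM, 32GB @ 2 DPC": (4, 2, 32),
    "LRDIMM, 32GB @ 3 DPC": (4, 3, 32),
}

for name, (channels, dpc, dimm_gb) in configs.items():
    print(f"{name}: {channels * dpc * dimm_gb}GB per CPU")
```

The last line shows where the "50% to 100% more" claim comes from: 384GB of LRDIMMs versus 256GB of quad rank RDIMMs (at 2 DPC) or 192GB of RDIMMs at 3 DPC.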

Supermicro's 2U Twin HyperCloud DIMMs
26 Comments

  • koinkoin - Friday, August 3, 2012 - link

    For HPC solutions I like the Dell C6220: dense, and with 2 or 4GB of memory per CPU core you get a good configuration in a 2U chassis for 4 servers.

    But for VMware, servers like the R720 give you more room to play with memory and IO slots.

    Not counting that those dense servers don't offer the same level of management and user friendliness.
  • JohanAnandtech - Friday, August 3, 2012 - link

    A few thoughts:

    1. Do you still need lots of I/O slots now that we can consolidate a lot of gigabit Ethernet links into two 10GbE ports?

    2. Management: ok, a typical blade server can offer a bit more, but the typical remote management solutions that Supermicro now offers are not bad at all. We have been using them for several years now.

    Can you elaborate on what you expect from a management solution that you don't expect to see in a dense server?
  • alpha754293 - Friday, August 3, 2012 - link

    re: network consolidation
    Network consolidation comes at a cost premium. You can still argue that IB QDR will give you better performance/bandwidth, but a switch is $6k, and for systems that don't have IB QDR built in, it's about $1k per NIC. Cables are at least $100 a piece.

    If you can use it and justify the cost, sure. But GbE is cheap. REALLY REALLY cheap now that it's been in the consumer space for quite some time.

    And there aren't too many cases where you might exceed GbE (even the Ansys guys suggest investing in better hardware rather than expensive interconnects). And that says a LOT.

    re: management
    I've never tried Supermicro's IPMI, but it looks to be pretty decent. Even if that doesn't work, you can also use a 3rd party tool like LogMeIn, and that works quite well too! (Although that's not available for Linux, there are Linux/UNIX options out there as well.)

    Supermicro also has an even higher density version of this server (4x half-width, 1U DP blade nodes).
  • JonBendtsen - Monday, August 6, 2012 - link

    I have tried Supermicro IPMI, works nicely. I can power on/off the machine and let it boot from a .iso image I have on my laptop. This means that in case I have to boot from a rescue CD, then I do not even have to plug a CD drive into the machine. Everything can be done from my laptop, even when I am not in the office, or even the country.
  • bobbozzo - Tuesday, August 7, 2012 - link

    Can you access boot screens and the BIOS from the IPMI?

    For Linux, I use SSH (or VNC server), but when you've got memory or disk errors, etc., it's nice to see the BIOS screens.

    Bob
  • phoenix_rizzen - Thursday, August 9, 2012 - link

    Using either the web interface on the IPMI chip itself, or the IPMIView software from SuperMicro, you get full keyboard, mouse, console redirection. Meaning, you can view the POST, BIOS, pre-boot, boot, and console of the system.

    You can also configure the system to use a serial console, and configure the installed OS to use a serial console, and then connect to the serial console remotely using the ipmitool program.

    The IPMI implementation in SuperMicro motherboards (at least the H8DG6/H8DGi series, which we use) is very nice. And stable. And useful. :)
  • ForeverAlone - Friday, August 3, 2012 - link

    Only 128GB RAM? Unacceptable!
  • Guspaz - Monday, August 20, 2012 - link

    It starts to matter more when you're pouring on the VMs. With two sockets there, you're talking 16 cores, or 32 threads. That's the kind of machine that can handle a rather large number of VMs, and with only 128GB of RAM, that would be the limitation regarding how many VMs you could stick on there. For example, if you wanted to have a dedicated thread per VM, you're down to only 4GB per VM, which is kind of low for a server.
  • darking - Friday, August 3, 2012 - link

    I think the price on the webpage is wrong, or at least it differs by market.

    I just checked the Danish and the British webstores, and the 32GB LRDIMMs are priced at around $2200, not the $3800 that the US webpage has.
  • JohanAnandtech - Friday, August 3, 2012 - link

    They probably changed it in the last few days, as HP had lowered their price to $2000 a while ago. But when I checked, it was $3800.
