Intel to Launch Next-Gen Sapphire Rapids Xeon with High Bandwidth Memory
by Dr. Ian Cutress on June 28, 2021 12:00 PM EST
Posted in: Xeon Scalable, Sapphire Rapids
As part of today’s International Supercomputing 2021 (ISC) announcements, Intel is showcasing that it will be launching a version of its upcoming Sapphire Rapids (SPR) Xeon Scalable processor with high-bandwidth memory (HBM). This version of SPR-HBM will come later in 2022, after the main launch of Sapphire Rapids, and Intel has stated that it will be part of its general availability offering to all, rather than a vendor-specific implementation.
Hitting a Memory Bandwidth Limit
As core counts have increased in the server processor space, the designers of these processors have to ensure there is enough data flowing to the cores to enable peak performance. That means large, fast caches per core so enough data is close by at high speed, high-bandwidth interconnects inside the processor to shuttle data around, and enough main memory bandwidth from data stores located off the processor.
Our Ice Lake Xeon Review system with 32 DDR4-3200 Slots
Here at AnandTech, we have been asking processor vendors about this last point, main memory, for a while. There is only so much bandwidth that can be achieved by continually adding DDR4 (and soon DDR5) memory channels. Current eight-channel DDR4-3200 designs, for example, have a theoretical maximum of 204.8 gigabytes per second, which pales in comparison to GPUs that quote 1000 gigabytes per second or more. GPUs are able to achieve higher bandwidths because they use GDDR soldered onto the board, which allows for tighter tolerances at the expense of a modular design. Very few main processors for servers have ever had main memory integrated at such a level.
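For reference, that 204.8 GB/s figure is straightforward arithmetic from the channel count, transfer rate, and 64-bit channel width; a minimal sketch of the calculation (the inputs are the standard DDR4-3200 numbers, nothing Intel-specific):

```c
#include <stdio.h>

int main(void) {
    /* Theoretical peak = channels x transfers/second x bytes per transfer */
    const double channels       = 8;        /* eight-channel server platform  */
    const double transfers      = 3200e6;   /* DDR4-3200: 3.2e9 transfers/sec */
    const double bytes_per_xfer = 64 / 8.0; /* 64-bit channel = 8 bytes       */

    double peak = channels * transfers * bytes_per_xfer / 1e9;
    printf("Theoretical peak: %.1f GB/s\n", peak); /* prints 204.8 GB/s */
    return 0;
}
```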
Intel Xeon Phi 'KNL' with 8 MCDRAM Pads in 2015
One of the processors that used to be built with integrated memory was Intel’s Xeon Phi, a product discontinued a couple of years ago. The basis of the Xeon Phi design was lots of vector compute, controlled by up to 72 basic cores, paired with 8-16 GB of on-package ‘MCDRAM’ connected via 4-8 chiplets inside the package. This allowed for 400 gigabytes per second of cache or addressable memory, paired with 384 GB of main memory at 102 gigabytes per second. Since Xeon Phi was discontinued, however, no main server processor (at least for x86) announced to the public has had this sort of configuration.
New Sapphire Rapids with High-Bandwidth Memory
Until next year, that is. Intel’s new Sapphire Rapids Xeon Scalable with High-Bandwidth Memory (SPR-HBM) will be coming to market. Rather than hide it away for use with one particular hyperscaler, Intel has stated to AnandTech that it is committed to making HBM-enabled Sapphire Rapids available to all enterprise customers and server vendors. These versions will come out after the main Sapphire Rapids launch, and enable some interesting configurations. We understand that this means SPR-HBM will be available in a socketed configuration.
Intel states that SPR-HBM can be used with standard DDR5, offering an additional tier in memory caching. We understand the HBM can be addressed directly or left as an automatic cache, which would be very similar to how Intel's Xeon Phi processors could access their high-bandwidth memory.
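For a sense of what 'addressed directly' can look like from software, the sketch below shows how flat-mode high-bandwidth memory was typically allocated on Xeon Phi, through the memkind library's hbwmalloc interface; whether SPR-HBM will expose its HBM through the same interface is our assumption, not something Intel has confirmed.

```c
/* Sketch: allocating from a high-bandwidth memory pool via memkind's
 * hbwmalloc interface, as was done for Xeon Phi's flat-mode MCDRAM.
 * Whether SPR-HBM is exposed this way is an assumption on our part.
 * Link with -lmemkind. */
#include <stdio.h>
#include <hbwmalloc.h>

int main(void) {
    if (hbw_check_available() != 0) {
        fprintf(stderr, "No high-bandwidth memory exposed to the OS\n");
        return 1;
    }

    size_t bytes = (size_t)1 << 30;   /* 1 GB working set            */
    double *buf = hbw_malloc(bytes);  /* comes from HBM, not DDR     */
    if (buf == NULL) {
        fprintf(stderr, "hbw_malloc failed\n");
        return 1;
    }

    /* ... bandwidth-critical data lives here ... */

    hbw_free(buf);
    return 0;
}
```

In the alternative cache arrangement on Xeon Phi, the same memory was configured at boot as a transparent cache and no code changes were needed; Intel's wording suggests SPR-HBM will offer a similar choice.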
Alternatively, SPR-HBM can work without any DDR5 at all. This reduces the physical footprint of the processor, allowing for a denser design in compute-dense servers that do not rely much on memory capacity (these customers were already asking for quad-channel design optimizations anyway).
Intel did not disclose the amount of memory, nor the bandwidth or the underlying technology. At the very least, we expect the equivalent of up to 8-Hi stacks of HBM2e at up to 16 GB each, with 1-4 stacks onboard leading to 64 GB of HBM. At a theoretical top speed of 460 GB/s per stack, this would mean 1840 GB/s of bandwidth, although we can imagine something more akin to 1 TB/s for yield and power, which would still give a sizeable uplift. Depending on demand, Intel may fit different amounts of memory to different processor options.
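To make that speculation concrete, here is the same back-of-the-envelope arithmetic per stack count; the 16 GB and 460 GB/s per-stack figures are our assumptions based on shipping HBM2e, not disclosed specifications:

```c
#include <stdio.h>

int main(void) {
    /* Assumed per-stack HBM2e figures - not confirmed by Intel */
    const double gb_per_stack  = 16.0;   /* 8-Hi HBM2e stack capacity     */
    const double gbs_per_stack = 460.0;  /* top-end HBM2e bandwidth/stack */

    for (int stacks = 1; stacks <= 4; stacks++) {
        printf("%d stack(s): %3.0f GB, %4.0f GB/s theoretical peak\n",
               stacks, stacks * gb_per_stack, stacks * gbs_per_stack);
    }
    return 0;  /* four stacks gives the 64 GB / 1840 GB/s quoted above */
}
```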
One of the key elements to consider here is that on-package memory will have an associated power cost within the package. So for every watt that the HBM requires inside the package, that is one less watt for computational performance on the CPU cores. That being said, server processors often do not push the boundaries on peak frequencies, instead opting for a more efficient power/frequency point and scaling the cores. HBM in this regard is still a tradeoff: if HBM were to take 10-20 W per stack, four stacks would easily eat into the power budget for the processor (and that power budget has to be managed with additional controllers and power delivery, adding complexity and cost).
One thing that was confusing about Intel’s presentation, and I asked about this but my question was ignored during the virtual briefing, is that Intel keeps putting out different package images of Sapphire Rapids. In the briefing deck for this announcement, there were already two variants: the one above (which actually looks like an elongated Xe-HP package that someone put a logo on) and this one (which is more square and has different notches):
There have also been unconfirmed leaks online showcasing SPR in a third, different package, making it all the more confusing.
Sapphire Rapids: What We Know
Intel has been teasing Sapphire Rapids for almost two years as the successor to its Ice Lake Xeon Scalable family of processors. Built on 10nm Enhanced SuperFin, SPR will be Intel’s first Xeon processors to use DDR5 memory, offer PCIe 5.0 connectivity, and support CXL 1.1 for next-generation connections. Also on memory, Intel has stated that Sapphire Rapids will support Crow Pass, the next generation of Intel Optane memory.
For core technology, Intel (re)confirmed that Sapphire Rapids will be using Golden Cove cores as part of its design. Golden Cove will also be central to Intel's Alder Lake consumer processor later this year; however, Intel was quick to point out that Sapphire Rapids will offer a ‘server-optimized’ configuration of the core. Intel has done this in the past with both its Skylake Xeon and Ice Lake Xeon processors, wherein the server variant often has a different L2/L3 cache structure than the consumer processors, as well as a different interconnect (ring vs mesh, with mesh on servers).
Sapphire Rapids will be the processor at the heart of the Aurora supercomputer at Argonne National Laboratory, where two SPR processors will be paired with six Intel Ponte Vecchio accelerators, which will also be new to the market. Today's announcement confirms that Aurora will be using the SPR-HBM version of Sapphire Rapids.
As part of this announcement today, Intel also stated that Ponte Vecchio will be widely available, in OAM and 4x dense form factors:
Sapphire Rapids will also be the first Intel processor to support Advanced Matrix Extensions (AMX), which we understand will help accelerate matrix-heavy workflows such as machine learning, alongside BFloat16 support. This will be paired with updates to Intel’s DL Boost software and OneAPI support. As Intel processors are still very popular for machine learning, especially training, Intel wants to capitalize on any future growth in this market with Sapphire Rapids. SPR will also be updated with Intel’s latest hardware-based security.
It is highly anticipated that Sapphire Rapids will also be Intel’s first multi-compute-die Xeon in which the silicon is designed to be integrated (we’re not counting Cascade Lake-AP hybrids). There are unconfirmed leaks to suggest this is the case, though nothing Intel has yet verified.
The Aurora supercomputer is expected to be delivered by the end of 2021, and is anticipated to be not only the first official deployment of Sapphire Rapids, but also of SPR-HBM. We expect a full launch of the platform sometime in the first half of 2022, with general availability soon after. The exact launch of SPR-HBM beyond HPC workloads is unknown; however, given those time frames, Q4 2022 seems fairly reasonable depending on how aggressively Intel wants to attack the launch in light of any competition from other x86 or Arm vendors. Even with SPR-HBM being offered to everyone, Intel may decide to prioritize key HPC customers over general availability.
- SuperComputing 15: Intel’s Knights Landing / Xeon Phi Silicon on Display
- A Few Notes on Intel’s Knights Landing and MCDRAM Modes from SC15
- Intel Announces Knights Mill: A Xeon Phi For Deep Learning
- Intel Begins EOL Plan for Xeon Phi 7200-Series ‘Knights Landing’ Host Processors
- Knights Mill Spotted at Supercomputing
- The Larrabee Chapter Closes: Intel's Final Xeon Phi Processors Now in EOL
- Intel’s 2021 Exascale Vision in Aurora: Two Sapphire Rapids CPUs with Six Ponte Vecchio GPUs
- Intel’s Xeon & Xe Compute Accelerators to Power Aurora Exascale Supercomputer
- Hot Chips 33 (2021) Schedule Announced: Alder Lake, IBM Z, Sapphire Rapids, Ponte Vecchio
- Intel’s Full Enterprise Portfolio: An Interview with VP of Xeon, Lisa Spelman
- What Products Use Intel 10nm? SuperFin and 10++ Demystified
- Intel 3rd Gen Xeon Scalable (Ice Lake SP) Review: Generationally Big, Competitively Small
Comments
kpb321 - Monday, June 28, 2021
I wonder if the HBM models will still support Optane. That could lead to a really interesting and complicated memory hierarchy: the HBM as probably the smallest and fastest pool of memory, which can either be addressable memory or act as a cache; DDR5 as a middle tier; and Optane as a final persistent tier. I'm sure making good use of all that will take some custom work, but I wouldn't be too surprised if someone decides it is worthwhile.
brucethemoose - Monday, June 28, 2021
HBM + Optane, with no DDR5 in between, would be an interesting configuration if possible.
schujj07 - Monday, June 28, 2021
Possible yes, practical no. In that scenario you would most likely be using Optane in memory mode. According to the VMware documentation, you want a 1:4 ratio for Optane in memory mode, as the RAM acts as a cache for the Optane. Following best practice would mean that your host would only have 256GB of Optane with 64GB of HBM. The problem there is that the smallest Optane DIMMs are 128GB, so you would be at a 1:8 ratio with 512GB, which is against best practices. On top of that, 512GB of RAM isn't that much in a server host nowadays. The more RAM you can have in a virtual environment, the more VMs you can easily run.
kpb321 - Monday, June 28, 2021
If someone did run only HBM and Optane memory, I'm sure it would be for some custom server software tuned specifically for that, and not for something as standard as a VMware server. Assuming it's supported, HBM + maxed out Optane memory would theoretically be the highest possible memory config. Previous systems required at least one rank of DDR4 memory so the system had some normal memory to work with. Being able to eliminate that by using the HBM would allow more total Optane memory. That could be handy for some extremely large memory set situations.
brucethemoose - Monday, June 28, 2021
Yeah, just what I was thinking. There are surely some too-big-for-DDR5 datasets/workloads that would benefit from running on Optane instead of an NVMe drive, and that could still utilize 64GB of cache/scratch space.
schujj07 - Monday, June 28, 2021
SAP HANA does benefit from Optane in App Direct mode. SAP HANA is an in-RAM DB, so every GB of storage needed for the DB requires a GB of RAM. I've seen some HANA DBs that are 1.7TB in size, and they can be bigger than that. App Direct mode can make the startup & shutdown process much faster. That said, App Direct mode is usually done at a 1:1 ratio according to best practices. However, it can be done at up to 1:4 ratios. Again you run into a max amount of Optane of 512GB if you have 64GB of HBM.
JayNor - Monday, June 28, 2021
An Intel CXL presentation indicated they will move Optane to a CXL memory pool, which should spur adoption.
brucethemoose - Monday, June 28, 2021
Surely there would be a latency and power hit vs. hanging it off the IMC, right?
mode_13h - Monday, June 28, 2021
Optane is already slower than DDR4, so you might as well move it out to CXL. There, it can at least scale up more and be symmetrically shared by multiple CPUs or accelerators.
mode_13h - Monday, June 28, 2021
> HBM + Optane, with no DDR5 in between
Yes, I thought that as well.
It would make more sense for a laptop to use HBM as main memory and then swap to Optane. You wouldn't need very much HBM to make that workable. It could give you instant S5 sleep/wake.
As for servers, I think a large, in-memory DB is probably the use case that makes sense to me.
However, if you add DDR5 to make a 3-tier memory hierarchy, I was thinking along the same lines as JayNor about putting the Optane in a CXL module.