It’s no secret that Intel’s enterprise processor platform has been stretched in recent generations. Compared to the competition, Intel is still chasing its multi-die strategy while relying on a manufacturing process that hasn’t been the best in the market. That being said, Intel claims to have shipped more of its latest Xeon products in December alone than AMD shipped in all of 2021, and the company is launching its next-generation Sapphire Rapids Xeon Scalable platform later in 2022. What comes beyond Sapphire Rapids has so far been kept under wraps, with only minor leaks here and there, but today Intel is lifting the lid on that roadmap.

State of Play Today

Currently in the market is Intel’s Ice Lake 3rd Generation Xeon Scalable platform, built on Intel’s 10nm process node with up to 40 Sunny Cove cores. The die is large at around 660 mm², and in our benchmarks we saw a sizeable generational uplift in performance compared to the 2nd Generation Xeon offering. The response to Ice Lake Xeon has been mixed given the competition in the market, but Intel has forged ahead by leveraging a more complete platform, coupling its CPUs with FPGAs, memory, storage, networking, and its unique accelerator offerings. Datacenter revenue, depending on the quarter you look at, is either up or down based on how customers are digesting their current processor inventories (as stated by CEO Pat Gelsinger).

That being said, Intel has put a large amount of effort into discussing its 4th Generation Xeon Scalable platform, Sapphire Rapids. For example, we already know that it will use over 1600 mm² of silicon for the highest core-count solutions, with four tiles connected by Intel’s embedded bridge (EMIB) technology. The chip will have eight 64-bit memory channels of DDR5, support for PCIe 5.0, and most of the CXL 1.1 specification. New Advanced Matrix Extensions (AMX) also come into play, along with the Data Streaming Accelerator (DSA) and QuickAssist Technology (QAT), all built on the latest P-core design currently present in the Alder Lake desktop platform, albeit optimized for datacenter use (which typically means AVX-512 support and bigger caches). We already know that versions of Sapphire Rapids will be available with HBM, and the first customer for those chips will be the Aurora supercomputer at Argonne National Laboratory, coupled with the new Ponte Vecchio high-performance compute accelerator.
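
For readers wondering how software will pick up on these new instructions, here is a minimal sketch (our illustration, not Intel’s code) of runtime feature detection via CPUID on a GCC/Clang toolchain. The bit positions follow Intel’s published enumeration for CPUID leaf 7; note that actually using AMX also requires OS enablement of the extended tile state, which this sketch does not cover.

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* Structured extended feature flags: CPUID leaf 7, sub-leaf 0 */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 7 not supported\n");
            return 1;
        }

        printf("AVX-512F: %s\n", (ebx & (1u << 16)) ? "yes" : "no");
        printf("AMX-BF16: %s\n", (edx & (1u << 22)) ? "yes" : "no");
        printf("AMX-TILE: %s\n", (edx & (1u << 24)) ? "yes" : "no");
        printf("AMX-INT8: %s\n", (edx & (1u << 25)) ? "yes" : "no");
        return 0;
    }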

The launch of Sapphire Rapids is significantly later than originally envisioned several years ago, but we expect to see the hardware widely available during 2022, built on Intel 7 process node technology.

Next Generation Xeon Scalable

Looking beyond Sapphire Rapids, Intel is finally putting materials into the public domain to showcase what is coming up on the roadmap. After Sapphire Rapids we will have a platform-compatible Emerald Rapids Xeon Scalable product, also built on Intel 7, in 2023. Given the naming conventions, Emerald Rapids is likely to be the 5th Generation.

Emerald Rapids (EMR), as with some other platform updates, is expected to capture the low-hanging fruit from the Sapphire Rapids design to improve performance, as well as benefit from updates to the manufacturing process. Platform compatibility means Emerald Rapids will have the same support when it comes to PCIe lanes, CPU-to-CPU connectivity, DRAM, CXL, and other I/O features. We’re likely to see updated accelerators too. Exactly what the silicon will look like, however, is still unknown. As we’re still early in Intel’s tiled product portfolio, there’s a good chance it will be similar to Sapphire Rapids, but it could equally be something new, such as what Intel has planned for the generation after.

After Emerald Rapids is where Intel’s roadmap takes on a new highway. We’re going to see a diversification in Intel’s strategy on a number of levels.

Starting at the top is Granite Rapids (GNR), built entirely of Intel’s performance cores on an Intel 3 process node, for launch in 2024. Previously, Granite Rapids had appeared on roadmaps as an Intel 4 product; however, Intel has stated to us that the progression of the technology, as well as the timeline of when it will come into play, makes it better to put Granite Rapids on the Intel 3 node. Intel 3 is meant to be Intel’s second-generation EUV node after Intel 4, and we expect the design rules to be very similar between the two, so we suspect it’s not that much of a jump from one to the other.

Granite Rapids will be a tiled architecture, just as before, but it will also feature a bifurcated strategy in its tiles: it will have separate IO tiles and separate core tiles, rather than a unified design like Sapphire Rapids. Intel hasn’t disclosed how they will be connected, but the idea here is that the IO tile(s) can contain all the memory channels, PCIe lanes, and other functionality while the core tiles can be focused purely on performance. Yes, it sounds like what Intel’s competition is doing today, but ultimately it’s the right thing to do.
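
Intel hasn’t said how, or whether, the separate IO and core tiles will be visible to software. On today’s multi-die and sub-NUMA-clustered servers, though, the practical effect usually shows up as additional NUMA domains, so as a purely illustrative sketch (assuming a Linux system with libnuma installed and the program linked with -lnuma), this is roughly how an application could enumerate those domains to place memory and threads sensibly:

    #include <stdio.h>
    #include <numa.h>    /* libnuma; link with -lnuma */

    int main(void) {
        if (numa_available() < 0) {
            printf("NUMA not available on this system\n");
            return 1;
        }
        int nodes = numa_num_configured_nodes();
        printf("Configured NUMA nodes: %d\n", nodes);
        for (int n = 0; n < nodes; n++) {
            long long free_bytes = 0;
            long long total = numa_node_size64(n, &free_bytes);
            printf("  node %d: %lld MiB total, %lld MiB free\n",
                   n, total >> 20, free_bytes >> 20);
        }
        return 0;
    }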

Granite Rapids will share a platform with Intel’s new product line, which starts with Sierra Forest (SRF), also built on Intel 3. This new product line will be built from datacenter-optimized E-cores, which we’re familiar with from Intel’s current Alder Lake consumer portfolio. The E-cores in Sierra Forest will be a future generation beyond the Gracemont E-cores we have today, but the idea here is to provide a product that focuses more on core density than on outright per-core performance. This allows the cores to run at lower voltages and scale out in parallel, assuming the memory bandwidth and interconnect can keep up.

Sierra Forest will be using the same IO die as Granite Rapids. The two will share a platform – we assume in this instance this means they will be socket compatible – so we expect to see the same DDR and PCIe configurations for both. If Intel’s numbering scheme continues, GNR and SRF will be Xeon Scalable 6th Generation products. Intel stated to us in our briefing that the product portfolio currently offered by Ice Lake Xeon products will be covered and extended by a mix of GNR and SRF Xeons based on customer requirements. Both GNR and SRF are expected to have full global availability when launched.

The density-focused, E-core-based Sierra Forest will end up being compared to AMD’s equivalent, which for Zen 4c will be called Bergamo – though AMD might have a Zen 5 equivalent by the time SRF comes to market.

I asked Intel whether the move to GNR+SRF on one unified platform means the generation after will be a unique platform, or whether it will retain the two-generation platform support that customers like. I was told that it would be ideal to maintain platform compatibility across the generations, although as these are planned out, it depends on timing and where new technologies need to be integrated. The earliest industry estimates (beyond CPU) for PCIe 6.0 are in the 2026 timeframe, and DDR6 is more like 2029, so unless there are more memory channels to add, it’s likely we’re going to see platform parity between 6th and 7th Gen Xeon.

My other question to Intel was about hybrid CPU designs – if Intel is now going to make P-core tiles and E-core tiles, what’s stopping a combined product with both? Intel stated that its customers prefer single-core-type designs in this market, as the needs from customer to customer differ. If one customer prefers an 80/20 split of P-cores to E-cores, another customer prefers a 20/80 split. Having a wide array of products for each different ratio doesn’t make sense, and customers already investigating this are finding that the software works better with a homogeneous arrangement, with the split made at the system level rather than the socket level. So we’re not likely to see hybrid Xeons any time soon. (Ian: Which is a good thing.)
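
To make the software point concrete: on a hybrid part, a thread has to discover what kind of core it is currently running on before it can make placement decisions, which is exactly the complexity a homogeneous Xeon avoids. Below is a minimal sketch of that per-core query, using the architecturally documented hybrid flag in CPUID leaf 7 and the core-type field in leaf 0x1A; it is illustrative only, not production scheduler code.

    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 7, EDX bit 15: set when P-cores and E-cores share a package */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 1;
        int hybrid = (edx >> 15) & 1;
        printf("Hybrid CPU: %s\n", hybrid ? "yes" : "no");

        /* Leaf 0x1A reports the type of the core this thread happens to be on,
           so hybrid-aware software has to query per-thread and pin accordingly. */
        if (hybrid && __get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx)) {
            unsigned int core_type = eax >> 24;
            printf("This core is %s\n",
                   core_type == 0x40 ? "a P-core (Core)"  :
                   core_type == 0x20 ? "an E-core (Atom)" : "of unknown type");
        }
        return 0;
    }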

I did ask about the unified IO die – giving the P-core-only and E-core-only Xeons the same number of memory channels and I/O lanes might not be optimal for either scenario. Intel didn’t really have a good answer here, aside from the fact that building them both on the same platform helps customers amortize non-recurring development costs across both CPUs, regardless of which one they use. I didn’t ask at the time, but we could see the door open to more Xeon-D-like scenarios with different IO configurations for smaller deployments, but we’re talking about products that are 2-3+ years away at this point.

Xeon Scalable Generations
Date     Generation  Codename         Abbr.  Max Cores  Node     Socket
Q3 2017  1st         Skylake          SKL    28         14nm     LGA 3647
Q2 2019  2nd         Cascade Lake     CLX    28         14nm     LGA 3647
Q2 2020  3rd         Cooper Lake      CPL    28         14nm     LGA 4189
Q2 2021  3rd         Ice Lake         ICL    40         10nm     LGA 4189
2022     4th         Sapphire Rapids  SPR    *          Intel 7  LGA 4677
2023     5th         Emerald Rapids   EMR    ?          Intel 7  **
2024     6th         Granite Rapids   GNR    ?          Intel 3  ?
2024     6th         Sierra Forest    SRF    ?          Intel 3  ?
>2024    7th         Next-Gen P       ?      ?          ?        ?
>2024    7th         Next-Gen E       ?      ?          ?        ?
* Estimated at 56 cores
** Estimated to be LGA 4677

For both Granite Rapids and Sierra Forest, Intel is already working with key ‘definition customers’ for microarchitecture and platform development, testing, and deployment. More details to come, especially as we move through Sapphire and Emerald Rapids during this year and next.

Comments

  • nandnandnand - Friday, February 18, 2022 - link

    It's here to stay, whether you like it or not, and it will be better, at least after it has been around for a few years.
  • ksec - Thursday, February 17, 2022 - link

    I was expecting PCI-E 6.0 to be on a 2024/2025 timeline. The earliest estimate being 2026 seems longer than usual.

    It is unfortunate AMD doesn't have an answer to Sierra Forest. At least not as far as I am aware.
  • ksec - Thursday, February 17, 2022 - link

    (AMD Bergamo to me is a cache-size variant of Zen 4, which is different from what Intel is doing here)
  • sgeocla - Friday, February 18, 2022 - link

    AMD has the 128-core Bergamo coming next year and the 256-core Turin in 2024, at the same time as Sierra Forest – presuming Intel can execute and not delay this like all the other launches.
    TSMC's execution has been best in class in recent years, and so has AMD's in the datacenter.
    A Zen 3 core is double the size of Intel's E-cores, but Zen 4c is going to cut down on cache size and probably AVX-512 and other things cloud hyperscalers don't need, so Zen 4c is going to be AMD's E-core, only with much higher IPC and better energy efficiency.
    Sierra Forest is a response to what AMD is doing, not the other way around, and it will be released after AMD's chip as well.
    And if Intel has further delays, can't make enough volume, or their process has a hiccup, then it will be a complete disaster.
    They are betting the farm, ramping capex so much as to have negative cash flow on top of large amounts of debt.
  • schujj07 - Friday, February 18, 2022 - link

    From what I have read, Bergamo is expected to have per-core performance about that of Zen 3. That would put the IPC a good 30% higher than the current Gracemont E-cores.

    "AMD Bergamo is going to be the cloud variant. On a call before the event, Mark Papermaster and Forrest Norrod said that these are different chips that leverage the same ISA, but that there are cache tweaks here to get to 128 cores. The idea behind Bergamo is that cloud computing workloads are different than the traditional workloads and so AMD can optimize for more cores per socket instead of optimizing for HPC performance. AMD also is looking at the Zen 4c to provide better power efficiency per core. If we look to Arm vendors, the Zen 4c is seemingly aligning AMD’s offerings more towards a customized cloud CPU product like the Ampere Altra (Max) instead of a traditional large core CPU." https://www.servethehome.com/amd-bergamo-to-hit-12...

    While that isn't exactly an Intel E-core, Zen was already vastly more power efficient than Core. Therefore it isn't out of the question that they could have E-core power levels but Zen 3 performance. That said, we won't really know until it is released later this year.
  • Mike Bruzzone - Sunday, February 20, 2022 - link

    Impact of Intel plant/equipment/construction investment here; look in the comment string for the detailed financial production assessment posted on January 30:

    https://seekingalpha.com/article/4481960-intel-q4-...

    Best outcome: Intel succeeds in gaining process leadership and the IDM + foundry reconfiguration.
    Worst outcome: Intel reconfigures under Chapter 11 bankruptcy in the middle of construction.

    mb
  • lemurbutton - Thursday, February 17, 2022 - link

    Basically, Intel won't be competitive in the server market until 2024 when it will have node parity with AMD and probably core-count parity as well with its E-core product.
  • schujj07 - Friday, February 18, 2022 - link

    Before 12th Gen was released, people were expecting it would take Intel until 2025 to reach parity on desktop and server. While 12th Gen is impressive, AMD has been dragging its feet on Zen 3D, probably because it has no reason to release the CPU. They are selling every CPU right now and Zen 3 is still competitive with 12th Gen, albeit a bit slower, so there's no rush.
  • Mike Bruzzone - Friday, February 18, 2022 - link

    schujj07: under-the-covers report here;

    Yes, AMD sold all it could from 676,000 wafers in 2021; 119,108,089 components is a company record, averaging roughly 30 M per quarter, up 55.8% from 2020.

    Zen 3D runs hot for the package area available to pull the heat off, potentially down to the material composite versus the composite relied on for Threadripper / Epyc heat dissipation (licensed from Fujitsu). 3D definitively requires a cooling solution. A 32 MiB SRAM slice adds roughly +12 W (you can work it out from Epyc cache TDP variance), and 5800X overclocks hit 147 W and slightly higher, so 3D at 105 W is a phantom. 3D is also expensive: +$45 from TSMC, which puts it around $210 to AMD before OEM mark-up; x1.55 is fairly traditional for AMD to the OEM, but it can go up to x2. I estimated AMD made 15 M 5800X 3D (no 5900 variant, primarily on heat), and I am starting to wonder where all the AMD 3D hype has gone, because there hasn't been a word in weeks. mb
  • _abit - Saturday, February 19, 2022 - link

    LOL, the fud is strong with this one:

    The 3D cache is laid over the region of the existing die's cache, which doesn't get as hot as the compute logic does. So no, it will not result in hotspots, and silicon is a fairly good conductor of heat. The previous hotspots retain roughly the same thermal contact and conductive capacity. There is probably no more than a 10% overall increase in heat, which may well be offset, and then some, by AMD's new power-efficiency features in Zen 3+. Who knows, it may even run cooler. I assume it will be 6nm indeed, but even at 7nm it is doable.

    If you want Zen 3D, you probably want games. And if you want games, there's no point overclocking the CPU manually; you may even lose boost clocks. So no, it doesn't matter what a 5800X OC hits, even more so if the 3D version is Zen 3+.

    $45 to bond chips at the millions scale sounds like way too much. Where are you getting those numbers?

    What OEM markups? And if your vendors have them, why are you bringing that up against AMD? Shill much?
