Though the formal specification has yet to be ratified by JEDEC, the memory industry as a whole is already gearing up for the launch of the next generation of High Bandwidth Memory, HBM3. Following announcements earlier this summer from controller IP vendors like Synopsys and Rambus, this morning SK Hynix is announcing that it has finished development of its HBM3 memory technology – and, according to the company, it is the first memory vendor to do so. With controller IP and now the memory itself nearing or at completion, the stage is being set for formal ratification of the standard, and eventually for HBM3-equipped devices to start rolling out later in 2022.

Overall, the relatively lightweight press release from SK Hynix is roughly equal parts technical details and boasting. While there are only 3 memory vendors producing HBM – Samsung, SK Hynix, and Micron – it’s still a technically competitive field due to the challenges involved in making deep-stacked and TSV-connected high-speed memory work, and thus there’s a fair bit of pride in being first. At the same time, HBM commands significant price premiums even with its high production costs, so memory vendors are also eager to be first to market to cash in on their technologies.

In any case, both IP and memory vendors have taken to announcing some of their HBM wares even before the relevant specifications have been finalized. We saw both parties get an early start with HBM2E, and now once again with HBM3. This leaves some of the details of HBM3 shrouded in a bit of mystery – mainly that we don’t know what the final, official bandwidth rates are going to be – but announcements like SK Hynix’s help narrow things down. Still, these sorts of early announcements should be taken with a small grain of salt, as memory vendors are fond of quoting in-lab data rates that may be faster than what the spec itself defines (e.g. SK Hynix’s 3.6Gbps HBM2E).

Getting into the technical details, according to SK Hynix their HBM3 memory will be able to run as fast as 6.4Gbps/pin. This would be double the data rate of today’s HBM2E, which formally tops out at 3.2Gbps/pin, or 78% faster than the company's off-spec 3.6Gbps/pin HBM2E SKUs. SK Hynix’s announcement also indirectly confirms that the basic bus widths for HBM3 remain unchanged, meaning that a single stack of memory is 1024-bits wide. At Hynix’s claimed data rates, this means a single stack of HBM3 will be able to deliver 819GB/second worth of memory bandwidth.
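For reference, the 819GB/second figure follows directly from multiplying the per-pin data rate by the 1024-bit bus width; the same math reproduces the HBM2E and HBM2 figures in the table below. A minimal sketch of the calculation (the helper function is ours, purely for illustration):

```python
# Peak per-stack bandwidth = per-pin data rate (Gb/s) x bus width (bits) / 8 bits-per-byte.
# The pin rates and the 1024-bit bus width come from the article; the function
# name and structure are illustrative, not part of any official tool.

def stack_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Return the peak bandwidth of a single HBM stack in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

print(stack_bandwidth_gb_s(6.4))  # HBM3:  819.2 GB/s
print(stack_bandwidth_gb_s(3.6))  # HBM2E: 460.8 GB/s
print(stack_bandwidth_gb_s(2.0))  # HBM2:  256.0 GB/s
```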

SK Hynix HBM Memory Comparison
                               HBM3          HBM2E         HBM2
Max Capacity                   24 GB         16 GB         8 GB
Max Bandwidth Per Pin          6.4 Gb/s      3.6 Gb/s      2.0 Gb/s
Number of DRAM ICs per Stack   12            8             8
Effective Bus Width            1024-bit      1024-bit      1024-bit
Voltage                        ?             1.2 V         1.2 V
Bandwidth per Stack            819.2 GB/s    460.8 GB/s    256 GB/s

SK Hynix will be offering their memory in two capacities: 16GB and 24GB. This aligns with 8-Hi and 12-Hi stacks respectively, and means that, at least for SK Hynix, their first generation of HBM3 memory uses the same 16Gbit (2GB) die density as their latest-generation HBM2E memory. This means that device vendors looking to increase their total memory capacities for their next-generation parts (e.g. AMD and NVIDIA) will need to use memory with 12 dies/layers, up from the 8-layer stacks they typically use today.
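Running the numbers, stack capacity is simply the number of stacked DRAM dies multiplied by the per-die density. A quick sketch, with the 2GB (16Gbit) die density inferred from the announced 16GB and 24GB capacities (the helper name is ours, for illustration only):

```python
# Stack capacity = number of stacked DRAM dies x density per die.
# The 2 GB (16 Gbit) die density is inferred from the announced 16 GB / 24 GB
# capacities; the function name is illustrative only.

def stack_capacity_gb(dies_per_stack: int, die_density_gb: float = 2.0) -> float:
    """Return the capacity of a single HBM stack in GB."""
    return dies_per_stack * die_density_gb

print(stack_capacity_gb(8))   # 8-Hi:  16 GB
print(stack_capacity_gb(12))  # 12-Hi: 24 GB
```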

What will be interesting to see in the final version of the HBM3 specification is whether JEDEC sets any height limits for 12-Hi stacks of HBM3. The group punted on the matter with HBM2E, where 8-Hi stacks had a maximum height but 12-Hi stacks did not. That in turn impeded the adoption of 12-Hi stacked HBM2E, since it wasn’t guaranteed to fit in the same space as 8-Hi stacks – or indeed any common size at all.

On that matter, the SK Hynix press release notably calls out the efforts the company put into minimizing the size of their 12-Hi (24GB) HBM3 stacks. According to the company, the dies used in a 12-Hi stack – and apparently just the 12-Hi stack – have been ground down to a thickness of just 30 micrometers, keeping the overall height of the taller stack in check. Minimizing stack height is beneficial regardless of standards, but if this means that HBM3 will require 12-Hi stacks to be shorter – and ideally, the same height as 8-Hi stacks for physical compatibility purposes – then all the better for customers, who would be able to more easily offer products with multiple memory capacities.

Past that, the press release also confirms that one of HBM’s core features, integrated ECC support, will be returning. The standard has offered ECC since the very beginning, allowing device manufacturers to get ECC memory “for free”, as opposed to having to lay down extra chips with (G)DDR or using soft-ECC methods.

Finally, it looks like SK Hynix will be going after the same general customer base for HBM3 as they already are for HBM2E. That is to say high-end server products, where the additional bandwidth of HBM3 is essential, as is the density. HBM has of course made a name for itself in server GPUs such as NVIDIA’s A100 and AMD’s MI100, but it’s also frequently tapped for high-end machine learning accelerators, and even networking gear.

We’ll have more on this story in the near future once JEDEC formally approves the HBM3 standard. In the meantime, it’s sounding like the first HBM3 products should begin landing in customers’ hands in the later part of next year.

Source: SK Hynix

Comments

  • Shmee - Wednesday, October 20, 2021 - link

    Cool news, I hope they put this in video cards. Fury and Vega were pretty great with it.
  • blanarahul - Thursday, October 21, 2021 - link

    384 bit GDDR6X @ 21 GT/s = 1008 GB/sec @ 24 GB capacity
    2048 bit HBM3 @ 6.4 GT/s = 1638 GB/sec @ 32/48 GB capacity

    They'll have to put it in RTX 4080/4090. It would be stupid not to considering the absurd power requirements of 80/90 series cards. GDDR6X is here to stay for mainstream products though.
  • TheinsanegamerN - Thursday, October 21, 2021 - link

    Both got beaten easily by GDDR5 and GDDR5X equipped cards for lower prices and lower power consumption. What made them so great?
  • Oxford Guy - Sunday, October 24, 2021 - link

    The small size of Fiji made the Nano possible. It was pretty neat, I suppose.
  • Oxford Guy - Sunday, October 24, 2021 - link

    'Minimizing stack height is beneficial regardless of standards'

    Unless it reduces yield too much and/or worsens the cost/benefit (competitiveness) ratio too much in some other respect.
  • FLORIDAMAN85 - Monday, October 25, 2021 - link

    Me: Oh, new HBM3, we'll be getting that in our next graphics cards, right?

    Tech Industry:.......

    Me:... We'll be getting that in our next graphics cards, right?
  • Oxford Guy - Monday, October 25, 2021 - link

    AMD: 'How do you like our latest PolarisForever™ cards?'

    Nvidia: 'Hey hey... How about that swell new OneKidney™ plan?'

    Intel: 'Well... uh... We're supply constrained. Cool, right?'
  • FLORIDAMAN85 - Monday, October 25, 2021 - link

    What can't we do with a terabyte of bandwidth?
    Can't wait to make a RAM disk with this.
  • Vitor - Thursday, October 28, 2021 - link

    Imagine a Mac workstation with 256GB of this memory and an SoC built on TSMC 2nm. Technically feasible in 3 years.
