Thermal Management on Stacked Silicon

With a standard processor design, there is a single piece of silicon doing all the work and generating the heat. It is bonded to the package (which does no work itself), and then, depending on the implementation, there is some thermal interface material between it and either a cooler directly or a heatspreader and then a cooler. When moving to a stacked chiplet design, it gets a bit more complicated.

Having two bits of silicon that ‘do work’, even if one is the heavy compute die and the other is an active interposer taking care of USB and audio and such, means that there is a thermal gradient between the two dies, and depending on the bonding, potential for thermal hotspots and heat build-up. Lakefield makes it even more complex by having an additional DRAM package placed on top, but not directly bonded.

We can take each of these issues independently. For the case of die-on-die interaction, there is a lot of research going into this area. Discussions and development about fluidic channels between two hot silicon dies have been going on for a decade or longer in academia, and Intel has mentioned the idea a number of times, especially as a potential solution for its new die-to-die stacking technology.

The key here is hot dies, with thermal hotspots. As with a standard silicon design, it is best to keep two high-powered areas separate, as this gives a number of benefits for power delivery, cooling, and signal integrity. With a stacked die, it is best not to have hotspots directly on top of each other, for similar reasons. While Intel uses its leading-edge 10+ process node for the compute die, the base die uses 22FFL, Intel’s low-power implementation of its 14nm process. Not only that, but the base die is only dealing with IO, such as USB and PCIe 3.0, which has essentially fixed bandwidth and energy costs. What we have here is a high-powered die on top of a low-powered die, and as such thermal issues between the two silicon dies, especially in a low-TDP device like Lakefield (7 W TDP), are not an issue.

What is an issue is how the compute die gets rid of its heat. On the bottom it can conduct heat into the silicon it is bonded to, but the top is ultimately blocked by that DRAM die. As you can see in the image above, there’s a big air gap between the two.
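To see why the bottleneck is the interfaces and the air gap rather than the silicon itself, a rough Fourier's-law estimate of conduction through a thinned die is useful. This is a sketch only: the conductivity value is bulk silicon, and the die area and thickness are illustrative assumptions, not Lakefield's actual geometry.

```python
# Rough Fourier's-law estimate of conductive heat flow through a die stack.
# Die area and thickness here are illustrative assumptions.

def conductive_power(k_w_per_mk, area_m2, thickness_m, delta_t_k):
    """Steady-state conduction: Q = k * A * dT / d (watts)."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

K_SILICON = 150.0      # W/(m*K), bulk silicon near room temperature
DIE_AREA = 82e-6       # assumed ~82 mm^2 compute die, in m^2
DIE_THICKNESS = 50e-6  # assumed 50 um thinned die

# Even a 1 K gradient across a thinned die can move hundreds of watts,
# far more than Lakefield's 7 W, so the silicon itself is not the limit.
q = conductive_power(K_SILICON, DIE_AREA, DIE_THICKNESS, 1.0)
print(f"Conduction through 50 um of silicon at a 1 K gradient: {q:.0f} W")
```

The takeaway is that vertical conduction through thinned silicon is cheap; it is the bonding interfaces, the air gap under the DRAM, and the limited surface area for dissipation that constrain the design.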

As part of the Lakefield design, Intel had to make a number of design changes in order to make the thermals work. A lot can be done in the silicon design itself, such as lining up hotspots in the right areas, using suitable metal thicknesses in various layers, and rearranging the floorplan to reduce localized power density. Ultimately, increasing both the thermal mass and the potential for dissipation become high priorities.

Lakefield CPUs have a sustained power limit of 7 watts, as defined in the specifications. Intel also has another limit, known as the turbo power limit. At Intel’s Architecture Day, the company stated that the turbo power limit was 27 watts; however, in the recent product briefing, we were told it is set at 9.5 W. Historically Intel lets its OEM partners (Samsung, Lenovo, Microsoft) choose their own values for these based on how well a design implements its cooling, such as passive vs. active cooling and heatsink mass. Intel also has another factor, the turbo time: essentially a measure of how long the turbo power can be sustained for.

When we initially asked Intel for this value, the company refused, stating that it is proprietary information. When I asked again after a group call on the product, I got the same answer, despite informing the Lakefield team that Intel has historically given this information out. I later found out through my European peers that in a separate briefing Intel had given a value of 28 seconds, and Intel emailed me that same figure several hours afterwards. This value can also be set by OEMs.
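The interaction of the three numbers above can be sketched with a simple model. Intel's power limiting tracks an exponentially weighted moving average of power draw; the chip may run at the turbo limit (PL2) until that average reaches the sustained limit (PL1), with the time constant tau governing how quickly the budget is exhausted. The 7 W and 9.5 W figures are from the article and tau = 28 s is the value Intel supplied; the EWMA formulation, starting average, and step size are simplifying assumptions, so the resulting duration is indicative rather than exact.

```python
# Sketch of PL1/PL2/tau turbo budgeting using an exponentially weighted
# moving average of power, in the spirit of Intel's power limiting.
# The EWMA model, idle starting point, and step size are assumptions.

PL1, PL2, TAU = 7.0, 9.5, 28.0   # sustained watts, turbo watts, seconds
DT = 0.1                          # simulation step, seconds

def turbo_duration(pl1, pl2, tau, dt=DT):
    """Run at PL2 until the weighted average power reaches PL1."""
    avg = 0.0   # assume the chip starts from idle
    t = 0.0
    while avg < pl1:
        avg += (pl2 - avg) * (dt / tau)  # EWMA update toward the PL2 draw
        t += dt
    return t

print(f"Turbo sustainable for roughly {turbo_duration(PL1, PL2, TAU):.0f} s")
```

With these numbers the model allows a turbo window in the tens of seconds, the same order as the 28 seconds Intel quoted; the exact value depends on the starting average and on details of the real algorithm that are not public.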

Then I subsequently found one of Intel’s ISSCC slides.

This slide shows that a basic implementation would only allow sustained power for 2.5 seconds. Adding an adhesive between the top die and the DRAM moves that up to 12.4 seconds, and improving the system cooling takes it to 20 seconds. The rest of the improvements happen below the compute die: a sizeable gain comes from increasing the die-to-die metal density, and then an optimized power floorplan, which in total gives sustained power support for 150+ seconds.
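The progression on the slide is what a lumped RC thermal model predicts: more thermal mass (capacitance) stretches the time to reach the temperature limit, and better cooling (lower resistance) lowers the steady-state temperature so the limit is reached later or never. The C and R values and the allowed temperature rise below are illustrative assumptions chosen to land in the same ballpark as the slide, not Intel's measured figures.

```python
# Lumped RC thermal model of time-to-throttle under a fixed power load.
# T(t) = P*R * (1 - exp(-t / (R*C))); C, R, and the 30 K limit are assumptions.
import math

def time_to_limit(power_w, c_j_per_k, r_k_per_w, t_limit_k=30.0):
    """Seconds for the junction to rise t_limit_k above ambient at power_w.

    Returns infinity when the steady-state rise P*R never reaches the
    limit, i.e. the power level is indefinitely sustainable.
    """
    steady_rise = power_w * r_k_per_w
    if steady_rise <= t_limit_k:
        return math.inf
    return -r_k_per_w * c_j_per_k * math.log(1 - t_limit_k / steady_rise)

P = 9.5  # W, the turbo power from the article

# Bare stack, then added adhesive/DRAM thermal mass, then improved cooling.
for label, c, r in [("bare stack",     0.8, 5.0),
                    ("extra mass",     3.0, 5.0),
                    ("better cooling", 3.0, 4.0)]:
    print(f"{label:>14}: sustains {P} W for ~{time_to_limit(P, c, r):.1f} s")
```

Each change extends the window the same way the slide's steps do: mass buys time, cooling buys headroom, and enough of both makes the power level sustainable indefinitely.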

Lakefield: Top Die to Bottom Die Hybrid CPUs: Sunny Cove and Tremont

  • Tomatotech - Friday, July 3, 2020 - link

    Personal use computers (laptops / phones / etc) have a hell of a lot going on behind the scenes with constant and multiple cloud interactions and updates. Apple manages with tiny amounts of RAM on their phones because they have an obsessive focus on cutting out cruft and streamlining iOS. MacOS and Windows? Not so much. 8GB is needed for futureproofing.

    If Microsoft is able to revamp Windows along iOS lines then maybe 6GB or 4GB, but Apple themselves have set 8GB as a minimum for all new MacOS computers, and they are a company that has a history of not putting enough RAM in their cheaper offerings.

    As for the new ARM Mac computers, we will see, but it's unlikely they will have less than 8GB. 16GB minimum is a strong possibility for various reasons when they come out in 2021-ish. IF, and it's a BIG IF, Apple releases a new ultra-long-life device running a mix of MacOS and iOS, then the battery difference of running less RAM might swing it, but we're firmly in making-stuff-up land now.
  • nonoverclock - Friday, July 3, 2020 - link

    Corporate laptops are sort of a different story with all of their agents running. Definitely need more resources to handle all that. On a basic home laptop with Office and a few other apps, most people could probably do ok with 8 GB.
  • Lucky Stripes 99 - Saturday, July 4, 2020 - link

    Agreed. My personal laptop has less than a third of the processes and half of the services at bootup of what my work laptop has. The encrypted filesystem is an especially nasty resource hog. I never seem to have a fast enough work laptop with all of that stuff.
  • ProDigit - Friday, July 3, 2020 - link

    Then run Linux, like Ubuntu.
  • dotjaz - Saturday, July 4, 2020 - link

    So what? Ryzen 4300U can be configured to 10W or lower if they want to compete with it. Besides, Van Gogh is around the corner; it's almost certainly a native quad-core part with RDNA2 for 15W and below. 7W is certainly within reach, especially considering 3200U could do that.
  • Skydanthology - Thursday, July 2, 2020 - link

    Yeah, this is for ultra-mobile laptops or dual-screen tablets that require the lowest standby power. Besides, AMD also uses an older process node when making I/O or memory controllers.
  • eek2121 - Thursday, July 2, 2020 - link

    Not for mobile.
  • ikjadoon - Thursday, July 2, 2020 - link

    They said "ultra-mobile laptops"...which is exactly Lakefield's target.

    "Lakefield processors are the smallest to deliver Intel Core performance and full Windows compatibility across productivity and content creation experiences for ultra-light and innovative form factors."

    Literally from Intel: https://newsroom.intel.com/news/intel-hybrid-proce...
  • jeremyshaw - Thursday, July 2, 2020 - link

    It's already somewhat covered in the article, however 22FFL =/= Intel's old "22FinFET". Intel's 22nm FinFET is closer to other foundries' 16/14/12nm FinFET anyways*, so it's strange you aren't bashing AMD for being behind the times on their 14/12nm IOD.

    *This is roughly when the "Intel N-1 node is equal to TSMC/Samsung/GF's N node" started, FWIW. Some say it was 32nm when it really started, but we can all agree that by 22nm, Intel really pulled ahead for a bit. Well, part of that was due to TSMC and GF fumbling badly at 32/28nm, but that somewhat dilutes the metric, anyways.
  • Jorgp2 - Thursday, July 2, 2020 - link

    >Surely they are not going to go back to 22nm? The packaging engineering is interesting enough, but there seems to be no fundamental architecture improvements. I see Zen-xx crushing this straight out the gate. Definitely not adaptable for desktop use.

    And?

    AMD was using a similar node until last year.
