Using Power More Efficiently: Dynamic Tuning 2.0

A common thread in modern microprocessor design is making the best use of the available power budget. There have been many articles devoted to how power budgets and thermal budgets are defined, and what the mysterious ‘TDP’ (thermal design power) actually means in relation to power consumption. Intel broadly uses TDP and power consumption interchangeably, along with a few other values, such as power limits 1 and 2 (PL1 and PL2), which govern sustained power draw and peak power draw respectively. Most Intel processors up to this point will turbo up to the PL2 peak power draw for a fixed time, before dropping back to the PL1 sustained power limit. Exactly how this is configured is also very OEM dependent. For Ice Lake, however, this changes a bit.
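To make the interaction between the two limits concrete, below is a minimal sketch in Python. The wattages and turbo window are hypothetical, and the fixed energy-bucket model is a simplification of the moving-average accounting real firmware applies:

```python
# Toy model of classic PL1/PL2 turbo behavior (illustrative only;
# all values are hypothetical and OEMs tune them per design).

PL1 = 15.0   # sustained power limit, watts (e.g. a 15 W U-series part)
PL2 = 44.0   # short-term turbo power limit, watts (hypothetical)
TAU = 28.0   # turbo window, seconds (hypothetical)

def allowed_power(budget_used: float) -> float:
    """Power the chip may draw, given how many watt-seconds of
    above-PL1 energy have already been spent."""
    budget = (PL2 - PL1) * TAU  # total energy allowed above PL1
    return PL2 if budget_used < budget else PL1

# Simulate a sustained heavy load in one-second steps.
used = 0.0
for second in range(60):
    p = allowed_power(used)
    used += max(p - PL1, 0.0)  # only draw above PL1 depletes the budget
    if second in (0, 27, 28, 59):
        print(f"t={second:2d}s: drawing {p:.0f} W")
```

Run against a sustained load, this model holds PL2 until the above-PL1 energy bucket empties (28 seconds here), then drops to PL1 and stays there.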

For Ice Lake, Intel has a new feature called Dynamic Tuning 2.0, which implements a layer of machine learning on top of the standard turbo mode. The idea behind DT2.0 is that the processor can predict the type of workload that is incoming, say a video transcode, and adjust the power budget intelligently to deliver a longer turbo experience.

Technically the concepts of PL1 and PL2 don’t magically disappear under this new regime – instead, the processor runs below maximum turbo when the algorithm predicts that the user won’t need it, saving up ‘power budget’ that can then be spent to keep turbo running for longer.
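Intel has not published how the algorithm works, but conceptually the control loop might look something like the sketch below. To be clear, the workload classes, the stand-in classifier, and every number here are invented for illustration; none of this is Intel’s actual implementation:

```python
# Speculative sketch of the Dynamic Tuning 2.0 idea: classify the incoming
# workload, then spend or bank turbo budget accordingly. All names and
# numbers are invented; Intel has not disclosed the real model.

PL1, PL2 = 15.0, 44.0  # hypothetical sustained/peak limits, watts

def predict_workload(recent_activity: list[str]) -> str:
    """Stand-in for the trained model: guess what the user is about to do."""
    if "encoder_api_call" in recent_activity:
        return "transcode"   # long, sustained, throughput-bound
    if "keypress" in recent_activity:
        return "bursty_ui"   # short spikes, latency-bound
    return "idle"

def power_target(workload: str, banked: float) -> tuple[float, float]:
    """Return (power target in watts, updated banked budget in watt-seconds)."""
    if workload == "bursty_ui":
        return PL2, banked              # full turbo: bursts are short and cheap
    if workload == "transcode" and banked > 0:
        # Spend banked budget to hold a higher sustained clock for longer.
        return min(PL2, PL1 + banked / 28.0), banked * 0.9
    return PL1, banked + 5.0            # predicted light load: bank the savings

target, banked = power_target(predict_workload(["encoder_api_call"]), banked=200.0)
print(f"transcode predicted: target {target:.1f} W, {banked:.0f} W-s still banked")
```

The key point is the last branch: when the model predicts the user won’t notice peak clocks, the processor deliberately runs below maximum turbo and carries that saving forward.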

This is a topic that Intel will hopefully go into in more detail. We do know that it requires collaboration at the OS level, but how these algorithms are trained would be a useful trove of information. It is also unclear whether Intel will allow this feature to be enabled or disabled at the user level for testing purposes; unless it is ‘on’ by default for OEM systems, we might end up with some systems enabling it while others do not.

Comments

  • vFunct - Tuesday, July 30, 2019 - link

    Why did they not go with HDMI 2.1 and PCIe 4.0?
  • bug77 - Tuesday, July 30, 2019 - link

    AMD's newly released 5700 (XT) doesn't support HDMI 2.1, so it's not surprising that Intel doesn't support it either.
    And PCIe 4.0 would be a power hog.
  • ToTTenTranz - Wednesday, July 31, 2019 - link

    The 5700 cards don't support VirtualLink either, despite AMD having been part of the consortium since the beginning, like nvidia, whose RTX cards have had it for about a year.

    First generation Navi cards are just very, very late.
  • tipoo - Tuesday, July 30, 2019 - link

    PCI-E 4 currently needs chipset fans on desktop parts; the power it needs isn't suitable for 15-28W mobile yet.
  • DanNeely - Tuesday, July 30, 2019 - link

    Because Intel product releases have been a mess since the 10nm trainwreck began. Icelake was originally supposed to be out a few years ago. I suspect PCIe4 is stuck on whatever upcoming design was supposed to be the 7nm launch part.

    HDMI 2.1 is probably even farther down the pipeline; NVidia and AMD don't have 2.1 support on their discrete GPUs yet. Intel has historically been a lagging supporter of new standards on their IGPs, so that's probably a few years out.
  • nathanddrews - Tuesday, July 30, 2019 - link

    This whole argument that "real world" benchmarks equate to "most used" is rather dumb anyway. We don't need benchmarks to tell us how much faster Chrome opens Reddit, because the answer is always the same: fast enough to not matter. We need benchmarks at the fringes for those reasons brought up in the room: measuring extremes in single/multi threaded scenarios, power usage, memory speeds; finding weaknesses in hardware and finding flaws in software; and taking a large enough sample to be meaningful across the board.

    Intel wants to eat its cake and still have it - to be fair, who doesn't? But let's get real: AMD is kicking some major butt right now, and Intel has to spin it any way they can. What's funny is that the BEST arguments that I've heard from reviewers to go AMD actually have nothing to do with performance, but rather with the Zen platform as a whole in terms of features, upgradeability, and cost.

    I say this as a total Intel shill, too. The only AMD systems running in my house right now are game consoles. All my PCs/laptops are Intel.
  • twotwotwo - Tuesday, July 30, 2019 - link

    Interesting to read what Intel suggested some of their arguments in the server space would be: lower TCO, like the old Microsoft argument against Linux, and having to revalidate all your stuff to move to an AMD platform. Some quotes (from a story in their internal newsletter; the full thing is floating around out there, but I couldn't immediately find it):

    https://www.techspot.com/news/80683-intel-internal...

    I mean, they'll be fine long term, but trying to change the topic from straightforward bang-for-buck, benchmark results, etc. is an approach you only take in a...certain sort of situation.
  • eek2121 - Wednesday, July 31, 2019 - link

    Unfortunately, your average IT infrastructure guy no longer knows how fast a Xeon Platinum 8168 is vs an AMD EPYC 7601. They just ask OEMs like Dell or HP to sell them a solution. I've even seen cases where faster solutions were replaced with slower solutions because they were more expensive and the numbers looked bigger. It turns out that the numbers that looked bigger were not the numbers that they should have been paying attention to.

    One company I worked at almost bought a $100,000 (yeah I know, small change, but it was a small company) pre-built system. We, as software developers, talked them into letting us handle it instead. We knew a lot about hardware, and as a result we spent around $15,000 in hardware costs. Yes, there were labor costs involved in setting everything up, but it only took about 2 weeks for 4 guys, 2 of whom were juniors. Had we gone with the blade system, there would have been extensive training needed, which would have cost about the same in labor. Our solution was fully redundant and a hell of a lot faster (the blade system used hardware that was slower than ours, and it was also a proprietary system that we would be locked into, with an additional service contract that cost $$$ and would have to be signed).

    During my entire time there, we had very few issues with the solution we built, outside the occasional hard drive dying (2 drives in 4 years IIRC) and having to pop it out, pop in a new one, and let the RAID rebuild. Zero downtime. In addition, our wifi solution allowed roaming all over a giant building without dropping the signal. Speeds were lightning fast, and QoS allowed us to keep someone from taking up too much bandwidth on the guest network. The entire setup worked like a dream.

    We also wanted to use a different setup for the phone system, but they opted to work with a vendor instead. They paid a lot of money for that, and constantly had issues. The administration software was buggy, sometimes the entire system would go down, even adding a user would take down the entire system until things were updated. IIRC after I left they finally switched to the system we wanted to use and had no issues after that.
  • wrkingclass_hero - Tuesday, July 30, 2019 - link

    Uh, I would not be putting cobalt anywhere near my mouth
  • PeachNCream - Tuesday, July 30, 2019 - link

    Real men aren't scared of a few toxic chemicals entering their digestive systems! Clearly you and I are not real men, but we now have a role model to emulate over the course of our soon-to-be-shortened-by-cancer lives.
