For those who keep a close eye on consumer hardware, AMD has recently been involved in a minor uproar with some of its most vocal advocates about the newest Ryzen 3000 processors. Some users are reporting turbo frequencies much lower than advertised, and a number of conflicting AMD partner posts have generated a good deal of confusion. AMD has since posted an update identifying an issue and offering a fix, but part of all of this comes down to what turbo means and how AMD processors differ from Intel’s. We’ve been living on Intel’s definitions of perceived standards for over a decade, so it’s a hard nut to crack if everyone assumes there can be no deviation from what we’re used to. In this article, we’re diving into those perceived norms to shed some light on how these processors work.

A Bit of Context

Since the launch of Zen 2 and the Ryzen 3000 series, depending on which media outlet you talk to, there has been a peak turbo issue with the new hardware. This turbo frequency issue has been permeating the ecosystem since Zen 2 launched, with popular outlets like Gamers Nexus noting that on certain chips, the advertised turbo frequency was only achieved under extreme cooling conditions. For other outlets, being within 50 MHz of the peak turbo frequency has been considered chip-to-chip variation, or a function of early beta firmware. A wide array of people have put varying amounts of weight behind this, from suspecting a conspiracy to not being bothered by it at all.

However, given recent articles by some press, as well as some excellent write-ups by Paul Alcorn over at Tom’s Hardware*, we saw that the assumed public definitions of processor performance actually differ between Intel and AMD. The definitions we have treated as the default standard, which are based on Intel’s, do not apply in the same way to AMD, and this is confusing everyone. No one likes a change to the status quo, and even with articles out there offering a great breakdown of what's going on, a lot of the general enthusiast base is still trying to catch up to all of the changes.

This confusion – and the turbo frequency discussion in general – was then brought to the fore at the beginning of September 2019. In a two-week span, several things happened to AMD essentially all at once:

  1. Popular YouTuber der8auer ran a public poll of frequency reporting that painted AMD in a very bad light, with some users over 200 MHz down on turbo frequency,
  2. The company settled for $12.1m in a lawsuit about marketing Bulldozer CPUs,
  3. Intel made some seriously scathing remarks about AMD performance at a trade show,
  4. AMD’s Enterprise marketing was comically unaware of how its materials would be interpreted.

Combined with all of the drama that the computing industry can be known for – and the desire for an immediate explanation, even before the full facts were in – this made for a historically bad week for AMD. Of course, we’ve reported on some of these issues, such as the lawsuit, because they are interesting tidbits to share. Others we ignored: item (4) because, knowing the individuals behind it, we saw nothing more than an honest mistake, and item (3) because it simply wasn’t worth drawing attention to.

The discussion about peak turbo came to a head because of (1). Der8auer’s public poll, taken from a variety of users with different chips, different motherboards, different cooling solutions, and different BIOS versions, showed that fewer than 6% of 3900X users in real-world conditions were able to achieve AMD’s advertised turbo frequency. Any way you slice it, without context, that number sounds bad.
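It’s worth pausing on how numbers like these get measured at all. Poll respondents were using software monitoring tools, which in essence just sample each core’s reported frequency at a fixed interval and record the highest value seen. Below is a minimal sketch of that approach, purely for illustration (this is not the specific method der8auer’s respondents used), written for a Linux system that exposes cpufreq through sysfs; Windows tools such as HWiNFO or Ryzen Master work on a conceptually similar sample-and-record basis:

    import glob
    import time

    def sample_peak_freq_ghz(duration_s=5.0, interval_s=0.05):
        """Poll every core's reported frequency and keep the highest value seen."""
        paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")
        peak_khz = 0
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            for path in paths:
                with open(path) as f:
                    peak_khz = max(peak_khz, int(f.read()))
            time.sleep(interval_s)  # too coarse an interval can miss short boost spikes
        return peak_khz / 1e6  # sysfs reports kHz

    if __name__ == "__main__":
        print(f"Peak observed frequency: {sample_peak_freq_ghz():.3f} GHz")

The sampling interval is the catch: as we get into later in the article, a boost excursion can be brief enough that a tool polling every 50 ms, or every second, may simply never observe the true peak, so how you measure partly determines what you report.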

Meanwhile, in between this data coming out and AMD’s eventual response, a couple of contradictory explanations surfaced in forum posts from AMD partner employees and experts in the field. This greatly exacerbated the issue, particularly among the vocal members of the community. We’ll go into detail on those later.

AMD’s response, on September 10th, was a new version of its firmware, called AGESA 1003-ABBA. This was released along with a blog post detailing that a minor firmware issue, responsible for a 25-50 MHz drop in turbo frequency, had now been fixed.

Naturally, that doesn’t help users who are down 300 MHz, but some of the gap comes down to how well the user understands AMD’s hardware. This article is designed to shed some light on the timeline here, as well as on a few nuances of AMD's turbo tech, which differ from what the public has come to understand from Intel’s use of specific terms over the last decade.

*Paul’s articles on this topic are well worth a read:
Ryzen 3000, Not All Cores Are Created Equal
Investigating Intel’s Claims About Ryzen Reliability
Testing the Ryzen 3000 Boost BIOS Fix

This Article

In this article we will cover:

  • Intel’s Definition of Turbo
  • AMD’s Definition of Turbo
  • Why AMD is Binning Differently to Intel, relating to Turbo and OC
  • A Timeline of AMD’s Ryzen 3000 Turbo Reporting
  • How to Even Detect Turbo Frequencies
  • AMD's Fix
Comments

  • Smell This - Wednesday, September 18, 2019 - link

    { s-n-i-c-k-e-r }
  • BurntMyBacon - Wednesday, September 18, 2019 - link

    Electron migration is generally considered to be the result of momentum transfer from the electrons, which move in the applied electric field, to the ions which make up the lattice of the interconnect material.

    Intuitively speaking, raising the frequency would proportionally increase the number of pulses over a given time, but the momentum (number of electrons) transferred per pulse would remain the same. Conversely, raising the voltage would proportionally increase the momentum (number of electrons) per pulse, but not the number of pulses over a given time. To make an analogy, raising the frequency is like moving your sandpaper faster while raising your voltage is like using coarser grit sandpaper at the same speed.

    You might assume that if the total number of electrons is the same, then the wear will be the same. However, there is a certain amount of force required to dislodge an atom (or multiple atoms) from the interconnect material lattice. Though the concept is different, you can simplistically think of it like static friction. Increasing the voltage increases the force (momentum) from each pulse, which could overcome this resistance where nominal voltages may not be enough. Also, increasing voltage has a larger effect on heat produced than increasing frequency. Adding heat energy into the system may lower the force required to dislodge the atom(s). If the nominal voltage is unable or only intermittently able to exceed the required force, then raising the frequency will have little effect compared to raising the voltage. That said, continuous strain will probably weaken the resistance over time, but this is likely still less significant than increasing voltage. Based on this, I would expect (read: my opinion) four things:
    1) Electron migration becomes exponentially worse the farther you exceed specifications (though depending on your initial durability headroom, it may not be problematic)
    2) The rate of electron migration is not constant. Holding all variables constant, it likely increases over time. That said, there are likely a lot of process-specific variables that determine how quickly the rate increases.
    3) Increasing voltage has a greater effect on electron migration than frequency. Increasing frequency alone may be far more affordable from a durability standpoint than increases that require significantly more voltage.
    4) Up to a point, better cooling will likely reduce electron migration. We are already aware that increased heat physically expands the different materials in the semiconductor at different rates. It is likely that increased heat energy in the system also makes it easier to dislodge atoms from their lattice. Reducing this heat build-up should lessen the effect here.

    Some or all of these may be partially or fully incorrect, but this is where my out of date intuition from limited experience in silicon fabrication takes me.
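    For reference, the classical empirical model behind these intuitions is Black's equation for electromigration lifetime, a general textbook relation (not anything published for this specific process), in which mean time to failure falls with current density and exponentially with temperature:

        \mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)

    Here A is a process-dependent constant, J is the current density (which rises with voltage and load), n is an empirical exponent typically between 1 and 2, E_a is the activation energy of the interconnect metal, k is Boltzmann's constant, and T is the absolute temperature. The power-law term in J lines up with point 3 above (voltage hurts more than frequency alone), and the exponential in T lines up with point 4 on cooling.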
  • eastcoast_pete - Wednesday, September 18, 2019 - link

    Thanks Ian! And, as mentioned, I would also like to hear from you or Ryan on the same topic for GPUs. With lots of former cryptomining cards still in the (used) market, I often wonder just how badly those GPUs were abused in their former lives.
  • nathanddrews - Tuesday, September 17, 2019 - link

    My hypothesis is that CPUs are more likely to outlive their usefulness long before a hardware failure. CPUs failing due to overclocking is not something we hear much about - I'm thinking it's effectively a non-issue. My i5-3570K has been overclocked to 4.2 GHz on air for 7 years without fault. I don't think it has seen any time over 60°C. That said, as a CPU, it has nearly exhausted its usefulness in gaming scenarios due to lack of both speed and cores.

    What would cause a CPU to "burn out" that hasn't already been accounted for via throttling, auto-shutdown procedures, etc.?
  • dullard - Tuesday, September 17, 2019 - link

    Thermal cycling causes CPU damage. Different materials expand at different rates when they heat up; eventually this fatigue builds up and parts begin to crack. The estimated failure rate for a CPU that never reaches above 60°C is 0.1% ( https://www.dfrsolutions.com/hubfs/Resources/servi... ). So, in that case, you are correct that your CPU will be just fine.

    But now CPUs are reaching 100°C, not 60°C. That roughly doubles the temperature range the CPUs are cycling through. Also, with turbo kicking on/off quickly, the CPUs are cycling more often than before. https://encrypted-tbn0.gstatic.com/images?q=tbn:AN...
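    The usual rule of thumb for this fatigue mechanism is the Coffin-Manson relation, in which the number of thermal cycles a part survives falls off as a power of the temperature swing per cycle; the constant C and exponent q are material- and package-dependent, so treat this as an illustration rather than a datasheet value:

        N_f = C \, (\Delta T)^{-q}

    With a typical q in the neighbourhood of 2, doubling the per-cycle swing cuts the expected cycle count to roughly a quarter, which is why higher peak temperatures and more frequent turbo on/off cycling compound each other.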
  • GreenReaper - Wednesday, September 18, 2019 - link

    Simple solution: run BOINC 24/7, keeps it at 100°C all the time!
    I'm sure this isn't why my Surface Pro is bulging out of its case on three sides...
  • Death666Angel - Thursday, September 19, 2019 - link

    Next up: The RGB enabled hair dryer upgrade to stop your precious silicon from thermal cycling when you shut down your PC!
  • mikato - Monday, September 23, 2019 - link

    Now I wonder how computer parts had an RGB craze before hair dryers did. Have there been any RGB hair dryers already?
  • tygrus - Saturday, September 28, 2019 - link

    The CPU temperature sensors have changed in type and location. Old sensors were closer to the surface temperature just under the heatsink (more of an average, or a single spot assumed to be the hottest). Now it's the highest of multiple sensors built into the silicon, which indicates higher temperatures for the same power and area than before. There is always a temperature gradient from the hot spots to where heat is radiated.
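    On Linux this "hottest of multiple sensors" behaviour is easy to observe directly, since drivers such as k10temp for Ryzen expose their inputs through hwmon sysfs. A minimal sketch for illustration (note that it scans every hwmon device in the system, not just the CPU, so a real tool would filter by driver name):

        import glob

        def hottest_sensor_c():
            """Return the highest reading (deg C) across all hwmon temperature inputs."""
            readings = []
            for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
                try:
                    with open(path) as f:
                        readings.append(int(f.read()) / 1000.0)  # values are in millidegrees C
                except OSError:
                    pass  # some nodes cannot be read while the device is idle
            return max(readings) if readings else None

        if __name__ == "__main__":
            temp = hottest_sensor_c()
            print(f"Hottest sensor: {temp:.1f} deg C" if temp is not None else "No sensors found")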
  • eastcoast_pete - Wednesday, September 18, 2019 - link

    For me, the key statement in your comment is that your Ivy Bridge i5 rarely if ever went above 60°C. That is a perfectly reasonable upper temperature for a CPU. Many current CPUs easily get 50% hotter, and that's before any overclocking and overvolting. For GPUs, it's even worse; 100-110°C is often considered "normal" for "factory overclocked" cards.
