AMD is known to heavily use Cinebench to market their CPUs but deliberately took it down the moment they saw that faster 28 cores > slower 32 cores. It's not rocket science.
Actually I think some of the cooling technology required for that Intel demo of vaporware was originally pioneered by NASA, so you could consider it rocket science.
Honestly though, what do you expect? Look at 5GHz quad cores and the heat they generate, then extrapolate that to 28 cores.
Intel is getting rather desperate to stay on top, they're overclocking their own CPUs beyond safe limits to beat AMD. Sounds rather like the Coppermine @ 1.13GHz fiasco. They released the CPU then had to recall it, replace a huge number of them and release it again.
Intel is so desperate to stay on top they'll crash and burn to do it. They're not used to ever coming in second place, even when it's on a product that realistically very, very few will buy.
Luckily for them, there are countless "fanboys" and people with "brand loyalty" who only care about the highest clocks and the highest shiny numbers put out by Intel. Doesn't matter if it's falsified (in some cases) or if it requires liquid nitrogen to cool. Smooth performance doesn't matter. Optimisation doesn't matter. Stability doesn't matter. 1% doesn't matter. Unless it's about AMD or any other company. Then that hardware is a "bulldozer".
Objectively, even as badass as the Zen is, I'm still considering Intel for my next build for a few specific reasons.
I gotta admit though, a 32 core beast would be a nice replacement for my current TVBox, however, the "reboot to game" thing will have to be corrected first.
You don't need to "reboot to game". Most games work fine on the current 1950X and the ones that don't can either be started via a command line option or via a program such as process lasso. Probably about 1% of the 800 games I own have issues with 16 cores/32 threads. The rest work fine without needing to do anything.
That was a concern if I went with a Threadripper. I try to make this computer just work as it's connected to the TV in the living room.
It's mostly mine but everyone watches movies, plays multiplayer games with me (Frets on Fire for one) and depends on the homemade DVR.
It would suck if I actually had to reboot. The kids just wouldn't play if that was the case, they don't follow instructions if it doesn't work perfectly the first time. I figured they'd have a software patch or something eventually that got around the problem.
I'm no stranger to the command line options. The only way I can play Gran Turismo 4 on my FX 6300 with PCSX2 is to use cores 1, 3 and 5 and set it at high priority. If the FX chooses where the threads go on its own they bounce and the clock speed stays pretty close to 4GHz. Fixed to certain cores it will turbo to 4.3 and runs pretty damn well. Also have to lock out a core or two in order to get Crysis running on some systems.
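For anyone curious, here's a rough sketch of what that kind of launch setup does, in Python with psutil (the PCSX2 path and the core numbers are just examples for an FX-style layout, not a recommendation):

```python
# Sketch only: launch a game/emulator, pin it to specific cores and raise its
# priority -- the same effect as a launch option or a Process Lasso rule.
# Requires psutil; HIGH_PRIORITY_CLASS is a Windows-only psutil constant.
import subprocess
import psutil

EXE = r"C:\Games\PCSX2\pcsx2.exe"  # hypothetical install path

proc = subprocess.Popen([EXE])
p = psutil.Process(proc.pid)

# One core per FX module so the threads stop bouncing and turbo can kick in.
p.cpu_affinity([1, 3, 5])

# Bump the priority class, roughly "set at high priority" from Task Manager.
p.nice(psutil.HIGH_PRIORITY_CLASS)
```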
It's the game... not the CPU being the problem. Like they always say, the problem is between keyboard and chair. I know FC4 had issues with CPUs with more than 16 threads. I know, because I had a 16 core Xeon that failed to run it.
AMD got plenty of crap for the heatsink needed for the 5GHz Bulldozer, and that fit in the case (it was more or less a standard AIO watercooler). Granted, even at 5GHz a Bulldozer wasn't going to win any actual benchmark or other real speed competition (yes, benchmarks might only be a step up from blindly looking at frequency, but frequency was all the 5GHz Bulldozer had).
This is even more silly. I can't believe AMD pulled Cinebench for this (assuming they plan on shipping Threadripper, at least put the numbers back up once you ship and only compare to shipping Intel chips after that).
It wasn't 5 GHz in more than marketing. It was a 4.7 GHz part that could rarely get to 5 GHz on a thread, maybe two. It could not run all of its threads at 5 GHz. The overclocker "The Stilt" said that the 9000 series dies were substandard (excessive leakage) and AMD created the 220 watt spec simply so they wouldn't have to be sent to the crusher. Also, the 220 watt spec was "conservative" in that it understated the power actually required by a 9590.
For the 9590 they pulled the same thing Intel did with the PIII Coppermine 1.13GHz, factory overclocked, no real headroom to speak of.
I don't think the 9000s were substandard, the regular cores have a hard time hitting that speed. I think even cherry-picked, the architecture and the process just cannot support 5GHz safely.
A lot of the FX line will hit 4.8GHz safely, with no voltage bump or just barely bumping it up. Mine will do it in turbo mode with 2 cores loaded all day long without touching the voltage. Even locking the voltage in, pushing it, and turning off 4 cores, the remaining two cores will not hit 5GHz 100% stable.
Kinda wish I had a better board though. If my VRM could handle it I'm pretty sure this CPU could do 4.8 on all six cores. The VRMs overheat and the motherboard throttles it under load, so I'm just running default voltage with Cool'n'Quiet and turbo enabled and pushing it for all it's worth.
Look at the photos, they didn't just have a 1000W cooling system on the CPU, they had a MASSIVE heatsink on the VRMs. If you need 2000W and a $5000 chiller to run this it's not a viable part, period.
They possibly did not use Cinebench because of low-information users posting threads like this, misinterpreting Intel's 5 GHz demo of a non-stock, overclocked processor. Any well-informed user knows neither Intel nor any CPU manufacturer will be selling a 28-core processor at 5 GHz any time soon, but the laymen misread the PR stunt and fell for it, hook, line and sinker:
Given AMD’s consistent advantage in multithreaded IPC with SMT compared to Intel’s HyperThreading, I would not be surprised in the least if their 32-core ThreadRipper 2 bests Intel’s 28-core Core i9.
Here is just another example of a low-information post, this one from the tech news world. Clearly, Intel never stated this was the stock form of what is to come, nor would it ever be, yet easily manipulated onlookers ate it up like manna from heaven:
Who announces a brand new CPU with a completely unrealistic overclock and cooling setup? No one, because it's misleading and that's exactly what Intel did here.
I'm not sure, as everything depends on the clock speed that they release. Let's assume that Intel has the 28c part similar to the Xeon server part. That is clocked for 24/7 usage, so let's assume they can run the part about 15-20% faster for the 'i9' version. The Xeon is clocked for a 3.8 'all core' turbo, so maybe 4.2-4.3 boost for bursty workloads.
Add in the 15% extra cores that TR2 has, subtract the 20% advantage in frequency that Intel has, and it'll be close. Pricing will matter.
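A back-of-envelope version of that estimate, with every number an assumption from the comment rather than a measurement:

```python
# Toy comparison: cores x clock, assuming perfect multithreaded scaling and
# identical IPC. The clocks are guesses, not announced specs.
intel_cores, intel_clock = 28, 4.2   # GHz, hypothetical all-core boost
amd_cores, amd_clock = 32, 3.4       # GHz, rumoured TR2 base clock

print(intel_cores * intel_clock)     # 117.6 "core-GHz"
print(amd_cores * amd_clock)         # 108.8 "core-GHz"
# Nudge either clock or per-core IPC by a few percent and the ranking flips,
# which is why pricing will end up mattering more than the raw numbers.
```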
Christopher, I'd think the TR2 with 32 cores will sell for $1500 minimum. It would probably be better for them to sell it in the $1750 to $2000 range depending on how much more it can number crunch past the 1950X. AMD needs to sell at a price with a reasonable margin to get them financially stable. Not an Intel margin, but higher than they're currently at.
Intel's stunt was comparable to LN2 stunts, and equally useless (but often entertaining). Neither can lead to a commercial product, and both are done for PR purposes.
Here is a comment from an Intel engineer after Intel hastily folded up their demo & headed to the door when AMD announced their 32-core TR2 (some NSFW language):
This. Anyone claiming this is the stock form of the upcoming 28-core HEDT flagship is, frankly, easily manipulated by Intel's sly PR stunt to suggest otherwise. To the contrary, they are not releasing a 5 GHz 28-core processor; this was no more than an overclocking magic show that no regular end user could ever achieve without $10,000+ on hand for the extreme cooling hardware and associated supporting equipment.
My bet is Intel will have two modes with the 28 core chip:
1. 5GHz water cooled version
2. 4GHz conventionally cooled system - still faster than the 32 core TR since Intel cores are faster than AMD's even though it has 4 fewer cores.
Has anybody compared the sizes of TR vs the Intel 28 core? TR looks bigger.
Only in single-thread per core. Where all those multi-core CPUs are used, SMT and HT will be used, and SMT gives +50%, while HT barely +20% (and sometimes -10% or worse, depending on the app).
I believe the differences in the SMT implementations comes down to the differences in the integer cores.
If I read the data sheets correctly, AMD's integer pipelines are all symmetrical, Intel has a long pipeline (or two) and a couple of short pipelines. Which architecture really does better depends entirely on the code.
AMD should have better total number crunching on paper; Intel can finish certain lines of code faster on a 4-stage pipeline vs the 13-14 stages of the AMD design, and then Intel has the 13-17 stage pipeline for more complicated problems.
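To put toy numbers on why pipeline depth matters for branchy code (a misprediction roughly costs a pipeline refill), here's a purely illustrative calculation; none of these figures come from AMD or Intel datasheets:

```python
# Illustrative only: effective cycles-per-instruction with a misprediction
# penalty proportional to pipeline depth.
def effective_cpi(base_cpi, branch_fraction, mispredict_rate, pipeline_depth):
    return base_cpi + branch_fraction * mispredict_rate * pipeline_depth

short_pipe = effective_cpi(1.0, branch_fraction=0.2,
                           mispredict_rate=0.05, pipeline_depth=4)
long_pipe = effective_cpi(1.0, branch_fraction=0.2,
                          mispredict_rate=0.05, pipeline_depth=14)

print(short_pipe, long_pipe)  # 1.04 vs 1.14 CPI -- the shorter pipe wins
```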
Water cooling isn't enough. Intel required an insulated custom loop, a high end water chiller, a 29 phase motherboard, and a copper triple fan VRM heatsink.
You are looking at spending at least $3,000 just on those parts. And that's not considering if Intel delidded or direct-die mounted, which would void your $10,000 CPU warranty.
You are simply gullible if you think Intel will ever release a consumer facing CPU that consumes 933 watts.
Tom's Hardware's comment about a $10,000 CPU was in relation to the Xeon Platinum CPU, not Intel's 28 core one. I don't expect Intel's 28 core CPU to go beyond $4,000. Not if Intel still wants to be competitive with AMD (I expect to see AMD's hit around $2000-$2200).
I wouldn't bet on that 5Ghz value. It is a publicity stunt. The only way Intel could ever hit 5Ghz was to use a sub-zero (-10C) phase change-based chiller. You might be lucky to hit 4.5-4.7Ghz with it though.
This user seems to be a wccftech troll that has found its way to AnandTech. A heavily OCed CPU (with unknown details about socket, etc.) with some exotic cooling system vs a stock CPU with an aftermarket cooler; that's not a fair comparison in even the remotest sense of testing.
Considering Intel water-chilled theirs in order to reach that score, no surprise. AMD probably didn't want to cheat by water chilling their CPU to its maximum potential, which no one would use constantly.
People were upset that AMD kept using Cinebench, thinking that it was the only benchmark they were good at, and now you're upset that they're *not* using Cinebench in the demo.
Yes because we all know heavily overclocked scores using a phase change cooler are so representative of IRL situations.
By that logic, AMD should have just demonstrated Bulldozer using an extreme OC and phase change cooling, it would have likely gotten better single thread scores than any Intel CPU at the time.
Considering the cooling method and 29-phase requirement of Intel's extreme overclocking, including the nearly 933 watts of power the CPU alone would consume, it's safe to assume that's not something everyday consumers should expect.
"Breaks"....LOL! Do you honestly not know what 'breaks cover' means? It means 'is revealed'....; it does not mean, 'malfunctions' (assuming you are are a non-native English speaker?)
There is no idea what the price will be - I am hoping they removed AVX-512 from lower the price and have 2 versions - all depends on how much performance difference is with TR 32
Also, AMD releases 32 core Threadripper CPU working on AIR cooling along with announcement of X399 boards. Intel uses liquid cooling, cobbled together motherboard that looks like a college project and no known motherboards waiting to be released and you prefer Intel? ROFL
Eh I dont belive that for a second. Intel's current gen 18 core has to be overclocked to pulling nearly 400 watts to compete with the old threadripper with LESS CORES, now that amd has 12nm plus double the cores, I find it impossible to believe intel's old xeon re-branded as a consumer product is going to do better than their existing 18 core. As everyone else has pointed out, that score is because of the cooling they used, not the old stinky xeon.
After Intel spent all day hyping up their 28-core desktop processor that seems to require exotic VRMs, cooling, and everything else... AMD introduces a 32-core desktop processor that can drop into existing ThreadRipper 1 motherboards. I find that funny.
As I remember it, AMD knew the date Intel was going to release its 1GHz chips and did a quick release a few days before so they could say they were first to 1GHz. Intel's 1GHz chip worked fine, it was their 1.13GHz part that Intel had to recall.
AMD was able to get its 180nm Athlons up to 1.4GHz while Intel's P3 was stuck at 1.0GHz until they got to 130nm.
Intel released the Coppermine 180nm (.18 micron for those of us around at the time) @ 1.13GHz. At 1GHz they bumped the voltage from the stock 1.65v to 1.75v to get it stable, not a big deal. At 1.13GHz I believe before it was over the chips were run at 1.85v (I think the initial release was 1.75v) and there were a few refunds and replacements.
The .13 micron Tualatin came out shortly thereafter, but Intel had already burned a lot of the customers that were excited about the new CPUs and AMD gobbled them up.
As good as the Tualatin was (it was an excellent replacement for the Coppermine) the sales numbers were terribly small. They were pushing the P4 by the time they released the Tualatin and, regardless of the T being faster than the P4, the OEMs were pushing the new architecture hard.
It was pretty much a small subset of nerds that knew about the Tualatin and hadn't already jumped to AMD that bought them.
I did get my hands on a few 1GHz Tualatin and I *think* I even saw a 933 or 966 Tualatin in the wild. I have a 1.4GHz Tualatin in storage somewhere. One day I'll have to dig out my old hardware and run benchmarks that will run on old to modern just for the sheer hell of it.
It was rather heartbreaking looking at the 1.7GHz Willamette Celeron that someone had bought over the 1.4GHz Tualatin because it had a faster clock speed.
128MB of RDRAM for $600 vs 256MB of PC133 for $240...
Couple of months back I had this impulse to build a kind of retro gaming system with only "underdog" components. Tualatin, maybe a Kyro 2 ... something like that.
Got 3 (!) PIII-S 1.13GHz for $3 on eBay. But then I saw the prices for decent i815 mainboards and gave away the CPUs for free :D
Shoot, I remember shelling out over a grand to buy a 200MHz Pentium Pro + motherboard back in the day. That hurt, since Intel immediately moved on to the silly Slot 1 Pentium II platform just after that.
Athlon got there first. It was a MAJOR feather in AMD's cap. The 1 GHz P3s were a paper launch. You couldn't actually buy one once Intel said they were available, because there was no supply. Intel only declared them launched so that AMD didn't have "the only" 1 GHz processor. It was almost a year before you could get a GHz P3.
The 1.1 GHz P3s were the ones that had to be recalled. Intel was trying to reclaim the speed crown, and stumbled. Fortunately for Intel, they ALSO weren't available in any meaningful amount, and it was a rather inexpensive recall.
I believe it was the 1.13GHz. The 1.1 was on the 100MHz bus and that seemed to be the top for the Coppermine core without exotic cooling. Pretty sure I've got one of those around here as well. I was actually hoping to get a 1.1 PIII running on a socket-to-slot adapter on a 440BX board, but my AMD Athlon system started out as a Duron 733, got bumped to an Athlon 1GHz, then an Athlon 1.4GHz, then an Athlon XP 2400+. Didn't have much need for the PIII in a 440BX aside from geeking out.
So, since Cutress can hardly keep it in his pants when talking about how much we need to have fewer and fewer cores on a single piece of silicon, how excited are we to have a "Threadripper" where each die gets a whole one channel of RAM and 75% of system memory requires an off-die hop?
I seem to remember when the hot thing was an "on die memory controller". At this point, we are down to a "25% on-die memory controller if you are lucky" in Threadripper 2.
Actually, two of the dies have two channels of direct memory access each, the other two do not. And it's not that different from Intel's approach on those bigger chips with ring bus, remember the CPUs with two rings? The second ring did *not* have direct memory access either. This is similar. And considering the price point I really don't think we can complain; if we *need* the extra memory bandwidth or access we can go to Epyc instead.
And yes, this approach *does* introduce more latency, but for a lot of applications that's not a concern at all or the impact is only minor. This isn't a gaming chip after all.
Threadripper is advertised as a HEDT chip where workstation meets gaming and overclocking. Doing poorly at any of those and it is entirely something else.
There are zero games which would benefit from more than just 8 cores. If the OS is aware of some cores being worse due to not having memory attached to them directly, it could simply never use them unless the load really requires more than 16 cores / 32 threads. So for gaming this should easily be a non-issue. (But I'm not saying there won't be other workloads where this indeed might be quite suboptimal.)
This. People fail to see this for what it is. This is a 16 core CPU, with "overcoring" when necessary. This probably will work like a Threadripper 1, using the cores with lesser memory latency, and just kick in the other pair of 16 cores when necessary.
I'm sure the next version will be more ambitious with their memory access, but I personally like this proposition. Not everyone needs the best from 32 cores all the time, and those who need it won't buy this particular CPU.
"This probably will work like a Threadripper 1, using the cores with lesser memory latency, and just kick in the other pair of 16 cores when necessary."
No OS will know this. So no. Your threads will be assigned randomly, unless you set the maximum number of threads and an affinity mask for the process manually.
Windows has been updated to use cores first rather than threads on every CPU with SMT, and it assigns based on modules on the AMD Bulldozer and its children.
Skylake assigns threads based on a dozen different metrics, one of them being individual core temperature. I think they can handle it.
The benefit stated for HEDT is content creation, which also targets gamers WHO STREAM or work while they play. If you can keep one game using 1 die, you won't have latency problems. The other dies could be working slower on background tasks like 3D rendering or streaming video with real-time processing.
The Xeons E5/E7 did split the memory controller between different rings internally. This permitted the previous 18 core and 24 core chips to act as two NUMA nodes per socket.
However the number of QPI links varied per ring and there was only a single PCIe root complex per die.
Great, so now that AMD has abandoned on-die memory controllers in high-end parts, when can we expect your article claiming that Intel must make sure to go back to 1990's era off-chip memory controllers since they are clearly idiots who "can't keep up" with AMD?
No, the guy above is an idiot who has never heard of a NUMA node in his life, and neither have you.
Claiming that an on-die ring bus (which isn't even accurate for Skylake-X) is exactly the same thing as having to dump every memory request for half your CPU through an over-glorified PCIe connector is flat out wrong.
Look up what the term "NUMA node" means some time. Then realize that having four of them in a single socket (and massively unbalanced to boot) is not something to be bragging about.
Then we'll see if AMD has polished their interconnect enough that two dies not having direct memory access isn't a major problem for the intended workloads on this platform.
Their simulations and tests must have been good enough to greenlight this. If pricing is right, it'll be good for some people. I guess there will be an option in the BIOS to just disable those two extra dies and run it like a 1st gen TR while enjoying the rest of the 2nd gen improvements which are significant.
AMD improves memory latency and cache misses by assigning the closest cores to the closest resources.
If this was a performance issue it would have shown up in the first threadripper. Clearly it's working well as Threadripper performs just as well as Intel counterparts at half the price.
Well, most people don't. Even those who program at a low enough level don't either, or at least they understand it but aren't too sure how best to handle it. Go ask the FreeBSD devs.
On my Desktop, I am glad I have the choice of 32 Core. Now Bring me Zen 2.
Oh yeah, it's such a "performance benefit" to have to rewrite your entire software stack to keep track of the physical location of each byte to prevent performance from going off a cliff.
That's like saying that having a cast on your leg is a "performance benefit" because having a broken leg is a great thing.
The better solution is to not have to worry about it by having a chip that doesn't need to have massively unbalanced memory allocation in the first place. In other words, having a cast on your broken leg sure isn't a "performance benefit" compared to not having a broken leg.
Did you even bother to read that Wikipedia article?
Are you writing your own OS? If so, then yes, absolutely, make sure you rewrite your scheduler to deal with this. Commercially available OSes already are there. Windows has had NUMA support for what, a decade? More?
NUMA support in the OS is not enough to get to normal scaling. Apps have to be written in a certain way, and for many many platforms/languages (except for C and C++) it is not even realistically possible. Ask me how I know.
This is UNBALANCED NUMA though. Very different from the normal one. If they assigned one channel each, at least NUMA-aware memory-latency-dependent applications would have a chance to work OK on 32-64 threads.
Although most NUMA-aware apps would be designed to depend on throughput and not latency, and in the latter case 2x2 or 4x1 is the same at 32+ threads, while 16 or fewer threads will have double the low-latency bandwidth if happily assigned by the OS (or AMD-aware code - unlikely except in AMD-influenced tests) to the better cores.
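For what "NUMA-aware" looks like at the most basic level, here's a small sketch that just asks Windows which node each logical processor belongs to (documented kernel32 calls via ctypes; error handling omitted, single processor group only):

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# How many NUMA nodes does the OS see?
highest_node = wintypes.ULONG(0)
kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest_node))
print("NUMA nodes:", highest_node.value + 1)

# Map the first 64 logical processors to their node. A NUMA-aware app would
# use this to keep a thread's work on memory attached to the same node.
for cpu in range(64):
    node = ctypes.c_ubyte(0)
    if kernel32.GetNumaProcessorNode(ctypes.c_ubyte(cpu), ctypes.byref(node)):
        print(f"CPU {cpu} -> node {node.value}")
```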
Who cares how AMD decide to arrange their memory controllers? What matters is the performance. Given that Ian already raised concerns about memory latency in this launch article, we can be pretty confident he'll be exploring it in detail once he gets his hands on one of these CPUs. If Threadripper 2 suffers from high memory latency and that significantly impacts performance in real-world workloads then I'm trusting Ian to find out about it and help us all make an informed purchasing decision.
Here's the thing though: 12 months ago today, purchasing a premium HEDT machine meant spending $1700 on a 10 core Broadwell-E 6950X with an all-core turbo of around 3.5GHz. Surely it's reasonable for people to get a little excited that come September (~15 months later) you will be able to get a 32 core CPU that may even come in at a similar price point (maybe $2K, we'll have to wait and see). Sure, AMD's multi-die approach has some drawbacks. Reviews will ultimately tell us how significant those drawbacks are. But in terms of benefits, that multi-die approach gives us access to a 32 core CPU. That's a hell of a tradeoff and one that many enthusiasts would happily take, I suspect.
*facepalm* well thankfully the chip wasn't designed like that! (Both the actual IMCs themselves & all DMA support have been disabled on the 2 newly active dies, meaning that max memory distance is no different than on Threadripper v1. Only potential cache hops have had their max possible distance pushed up to EPYC levels, but that's still a worst-case scenario as Windows & increasing numbers of games & programs have been updated for Ryzen to minimize cross-CCX & cross-die cache & memory operations as much as possible).
Maybe try reading the article first before you post? Or at the very least do the most basic level of critical thinking (X399 = only 4x memory channels = only 2x enabled IMCs)....
In your article it is mentioned "usually it is suggested to just go buy an EPYC for those workloads". Correct me if I'm wrong, but is there any commercially available motherboard that has more than 56 lanes?
Your post got me curious so I went looking. This Gigabyte board on Newegg seems to offer 88 PCIe lanes to PCIe slots, plus a bunch of extras you'd expect on a high end board (including dual 10Gb): https://www.newegg.com/Product/Product.aspx?Item=N...
This is a very nice board to work with. We're using it to build an iSCSI storage server at work. There's lots of RAM slots, lots of PCIe slots and lanes to work with, and the integrated 10 Gbps Ethernet was the icing on the cake (compared to the SuperMicro boards available for EPYC). Price was surprisingly low for all the features it includes.
Only downside was that it took about 6 months for all the parts to arrive. And the onboard SATA controller is a weird one that uses slimSATA connectors with breakout cables to connect 4 SATA devices each (doesn't work with multi-lane backplanes we discovered). Our next build will use a direct-connect backplane to allow the use of the onboard SATA controller; for this build we had to add an LSI/Avago/Broadcom HBA.
Intel's 5ghz 28 core was used to steal AMDs thunder and I doubt they'll sell more than a handful if any. You can't really compare an AMD 32 core that uses 300 watts for chip and cooling to Intel's chip that probably uses 1 to 2kw.
Forget about the 5GHz. The real question is how high can it go on air?
If it's above 4GHz, and AMD only gains a couple hundred MHz on their WIP 3.4GHz, 28 cores over 4GHz will still beat 32 at 3.6GHz in both single and multithreaded workloads.
28 cores, 2.5 base, up to 3.8 max turbo, at 205W. And please look at the AnandTech server article where it clearly explains how many cores can boost to which frequency when being used.
Intel's advertised clocks are already without AVX512 running and they throttle heavily when it is. Example: Xeon Silver 4116 is 2.1Ghz base clock but only 1.4Ghz when AVX512 is active.
Yes, I believe so - AVX-512 is known to draw processor power even if you don't use it. Better yet, instead of just disabling it, drop it so it takes less die space, which also reduces the cost of the CPU. If I were Intel I would come out with two versions of this chip:
1. one that uses extensive water cooling and goes up to 5GHz
2. one that uses conventional cooling and goes up to 4GHz or so.
Intel is also smart to pre-release some information; this means they can find out what AMD is doing and make changes to improve the product.
The CPU allegedly used 1310W by itself and the phase change cooler used 1100W. Other system components not counted for. If you take the 670W max power consumption figure from Tom's Hardware test of the Platinum 8176 (which this essentially is just an unlocked version of) with 2.8GHz all core and just multiply that up you get 1196W so the alleged 1310W power consumption seems fairly credible.
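That scaling, spelled out (naive linear-with-frequency scaling with voltage held constant, which is generous since real power rises faster once voltage goes up):

```python
measured_power_w = 670.0    # Tom's Hardware figure for the Platinum 8176
measured_clock_ghz = 2.8    # all-core clock in that test
target_clock_ghz = 5.0      # the demo clock

estimate_w = measured_power_w * (target_clock_ghz / measured_clock_ghz)
print(round(estimate_w))    # ~1196 W, roughly in line with the alleged 1310 W
```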
My biggest question is whether it has that same NUMA node mode nonsense for gaming where you had to reboot the PC to change profiles? I would be running a Threadripper if it weren't for that.
You'd only want to reboot to change that if you have specific software that particularly needs it or particularly benefits from it. I doubt most 1950X users a) reboot to change to NUMA mode, or b) reboot to disable half their chip. Besides that, software like Process Lasso or even good old Windows Task Manager is very capable of limiting software to specific CPU cores (and thus avoiding the dreaded latency).
Unless you want to play at or above 144Hz, you simply won't notice.
You can also automate fixes for incompatible games using Process Lasso, when needed, but again, this is uncommon. Another poster said they had problems with maybe 8 games out of hundreds.
"The codename for the processor family is listed as "Colfax".
Are you sure this was the codename for the processor and not the system assembler? There is a workstation and server vendor called Colfax that sells AMD-based systems, eg:
I think it's a good move on AMD's part, and it would be interesting to see how these parts are priced.
I hope that AMD plans to make some 180W SKUs. While IIRC AMD hadn't promised TR socket compatibility the way it did for AM4, I think it would be nice if AMD offered CPUs which are guaranteed to be compatible with older motherboards.
There might be existing X399 boards that do not have the VRM capability to run the new 28/32C CPUs, but they definitely said that these CPUs use the existing X399 platform.
...and that's exactly like a B350 might not have a lot of fun with an 8-core CPU, or a B360 board might not get far with an 8700K. There are cheap, barebones Z370 boards too, while we're at it.
They haven't redesigned the cores to take any advantage of 12nm though. Just lots of dead space between modules, so the same latencies and power waste in intra-chip communications.
32 cores, 4 memory channels, that's gonna be problematic and likely better off with Epyc. On the upside, the memory channel limitation minimizes cannibalization so they might be able to price it well, maybe $2-3k.
I'm definitely getting a 24 or 32 core Threadripper 2, been waiting for this for a year and was hoping they'd increase the core count this time. Finally a decent core count that's worthy of a 2018 CPU.
Me too. I can finally retire my ageing dual socket Xeon. Probably going for the 24C as I don't expect to be using more for several years ahead and it'll probably be priced better per core than the 32C one. Not to mention that clocks might be pushed a bit higher on the 24C with a good air cooler.
My thoughts exactly. Six cores per CCX should be a little easier to power, cool and overclock, same as the FX series.
Not saying I wouldn't switch for an eight core if I tripped over one cheap, but my six core is clocked high enough that I really wouldn't see much benefit going to an eight core. Likely my VRM would limit the 8 core faster and at best I'd have the same performance in most situations.
AMD is competing to give consumer more for their money. Intel doesn't want to give consumer a bang for their buck and tries to bring back the gigahertz race of 2000s. Intel fanboyz are happy.
Now I'm impressed with the scene Intel and AMD are making. This is a brawl. I knew that Intel wasn't putting out their best on their CPUs, considering their poor thermal paste and the good performance once the CPUs are delidded and it's replaced. But I hadn't expected them to bring out the best of their CPUs this soon. I thought we would not see that at all until their 10nm products arrive.
I have to wonder, if AMD can do this at 14nm in just a short time since the first Ryzen, then Intel has been sitting on this capability for years!
Anyway, AMD is doing fine with their CPU business now. Nvidia is now showing an Intel-like move, delaying their next gen products with no definite date. Does it mean AMD isn't releasing anything new?
I catch hell getting Crysis to work on a modern CPU and OS these days. Crysis Warhead, based on the *exact same executable*, works; Crysis freaks out.
I run the 64-bit executable to get it working on an FX in Windows 7 64-bit, but I still haven't been able to get the original working on my Skylake laptop. I've gotten as far as the cutscenes finally, then it crashes. EA will never release a patch for it, but they'll probably keep selling it on Origin.
Crytek was actually pretty good about patching games before the acquisition.
It is cool and all to see a 32 core / 64 thread part in the high end, but AMD clearly was not ready to do this if they had to disable the extra dies memory channels and basically not have them getting direct access to the system memory. I guess we will have to wait and see how this all pans out and how much it hurts performance, if at all.
Now onto Intel and their bogus 28 core CPU demo, using phase change cooling to make it run at 5GHz but not disclosing this up front, just to one-up AMD's new Threadripper release. Come on Intel, you are better than this, or at least I would have hoped so. Now that you had your fun and got PR from it, let's see the real speeds and the real performance of that CPU without the phase change cooling; with good air cooling I am going to bet it won't get anywhere close to 7.3k in Cinebench. Pretty sad Intel, pretty sad indeed. Then again, they got the effect they were looking for, because all those who know less about computers have been duped.
"AMD clearly was not ready to do this if they had to disable the extra dies memory channels and basically not have them getting direct access to the system memory. I guess we will have to wait and see how this all pans out and how much it hurts performance "
That is the mechanism, but your conclusions are wrong imo.
Those limitations are an architectural necessity - they can't have more lanes than X399 - but there are CPU intensive workloads that suit this configuration, or I doubt they would bother making it.
It's certainly an interesting new task to lob onto the fabric: act like a PLX for orphan CCXes.
AMD announcing their 32 core desktop Threadripper without any 8-channel motherboard is the issue I'm most concerned about. If you run dual channel RAM under one die and another die has no RAM attached to it, you could see a great performance loss. X399 has been set up so that if the 32 core part uses X399 it loses 4 memory channels and 64 lanes of PCIe 3.0 relative to EPYC. Infinity fabric cannot solve this problem. AMD REALLY needs to give us a new motherboard with 8 channels, like an X499 or X399-8ch.
All cores in all current Ryzen and TR CPUs access memory through the infinity fabric. Remember, the infinity fabric is the internal bus. Just like all cores on Intel CPUs access memory through their ring or mesh bus. I really don't see the huge issue. Granted the off-package infinity fabric has a slightly higher latency but decent thread management in the OS should eliminate almost all negative consequences.
Threadripper 1 has 64 PCIe lanes, 4 memory controllers, up to 16 cores per CPU, and 1 CPU per motherboard, using Zen cores.
EPYC 1 has 128 PCIe lanes, 8 memory controllers, up to 32 cores per CPU, and up to 2 CPU sockets per motherboard, using Zen cores.
Threadripper 2 has 64 PCIe lanes, 4 memory controllers, up to 32 cores per CPU, and 1 CPU per motherboard, using Zen+ cores.
EPYC 2 should have 128 PCIe lanes, 8 memory controllers, up to 32 cores per CPU (although there are rumours this may double), and up to 2 CPUs per motherboard, using Zen2 cores.
BTW, 32 cores with SMT bumps right up against the Windows 64 "logical CPU per group" limitation (I wonder which m0r0n @MS invented this sht to begin with), and taking advantage of all cores in multi-group configurations requires a rewrite of multi-threaded code in existing apps.
If your multithreaded app can take advantage of 64 threads that's it for Windows apparently. Anything over 64 cores would be handled by a second scheduler (if I read this all correctly) and you'd gain load capacity, not multithreaded performance.
I haven't seen many apps that take full advantage of my six core, so it may not be an issue for a while yet.
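For reference, a quick sketch of how to see the processor-group split Windows applies past 64 logical processors (standard kernel32 calls via ctypes):

```python
import ctypes

kernel32 = ctypes.WinDLL("kernel32")
ALL_PROCESSOR_GROUPS = 0xFFFF

groups = kernel32.GetActiveProcessorGroupCount()
total = kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)
print(f"{groups} group(s), {total} logical processors total")

for g in range(groups):
    print(f"group {g}: {kernel32.GetActiveProcessorCount(g)} logical processors")

# A thread only runs inside one group unless the app explicitly sets a group
# affinity, which is the "rewrite" of existing multithreaded code in question.
```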
Disappointing that the two extra cores don't enable any more PCIe lanes, but understandable... and still more lanes than Intel can offer at a much higher price.
"Nonetheless, it was stated by several motherboard vendors that some of the current X399 motherboards on the market might struggle with power delivery to the new parts, and so we are likely to see a motherboard refresh."
I'm no fan of Intel, but just look at their current CPUs to guess at an efficient clock for the 5GHz one. That would be 4.6GHz for Coffee Lake. So I expect the 28 core to be a 4.6GHz max part on air, as that is the efficient clock for those CPUs.
Even 4.2GHz is extremely generous. The Platinum 8176 uses 670W with 28 cores at 2.8GHz all cores. Just multiplying that up to 4.2GHz assuming you can overclock it without increasing voltages would be 1005W power consumption for the CPU alone. Good luck getting that stable with an air cooler.
If they went that route, they'd have had to create a whole new design, which would have pushed the release of Zen+ back so far, they'd have been better off just concentrating on Zen 2 on 7 nm.
The problem with more cores per CCX is that in the early development phase, you would end up with more failed dies that would be garbage. 14nm was still new to Global Foundries when Ryzen first launched, so 4 cores was good for yields. Second generation could have gone to six, but due to this being a refresh cycle, why risk problems with production?
7nm and Zen 2 cores MAY be the right time to go to 6 cores per CCX, but a lot depends on things like clock speed. We may see a six core CCX with Zen 2, but whether that will be compatible with first generation motherboards, or whether there will be limitations on performance due to changes, is another story. A quad core CCX, on the other hand, wouldn't be a change for the first generation motherboards, and only clock speed plus other things such as power would come into play.
Windows and how Windows talks to the chipset/processor would also be another potential issue with the change to 6 core per CCX.
I honestly wish they would throw TDP in the garbage where it belongs. It is far too obscure a number, because everyone who uses it seems to "calibrate" it the way THEY see fit, and it is a very flat number for an anything-but-flat comparison between products because of the numerous ways it can be tested for this number to be "hit".
That is: is the CPU at 100% load on all cores, 1 core at 100% for 85% of the time, 80% load on all cores 60% of the time, etc.? Or in the case of coolers, are they testing via a very specific "hot plate" so they supply X amount of "heat" and it keeps the cooler at Y temperature for Z amount of time, etc.?
They probably (IMO) should be really fine tuning the testing methods so the consumer as well as the AIB/OEM know for sure that a given cooler will work, or that you have enough power supply to power it and so forth.
Nothing like a cooler made by CM (H**er 2*2) being rated for 180W, and when you try to strap it to a CPU that is also rated at this level, the cooler CANNOT actually deal with the heat load.
TDP is a "place holder" number as far as I can tell. AMD is generally the most conservative IMO with their TDP rating; however, all it takes is a different driver etc. and this TDP number can be blown way past, so to use TDP as a stand-in for power consumption (as so many review sites do to congratulate one part while demonizing another) is crazy.
There are VERY rare exceptions where some makers put fancy circuits into the product to make 10000000% sure it CANNOT pass this number, but these really are a rare occurrence, by and large, in my personal experience as well as many many years of following CPU-GPU-motherboard information.
Either way, 250W TDP may seem like A LOT, but that is only 7.1825W per core of heat load, or 0.256 per thread, which is peanuts given the work it is likely capable of.
Intel and NV chase the crazy high clock speeds (and also make sure reviewers use VERY specific test software so that their actual absolute and continual power consumption numbers always look AMAZING). AMD on the other hand seems to be more about "yep, we chew power, but we have the performance, durability and forward-looking designs they do not seem to care about".
In other words (IMHO) AMD seems to be looking at the "what will be", whereas Intel and especially Ngreedia are absolutely content with building for the moment, and should it die a horrible death (poor component/thermal interface choices) that is OK, it means more $$$ and they will fix the problems with the next revision, "we promise" ^.^
I think your math is off a bit, but your point is valid.
That is, more or less, the basis for the turbo modes. 65w / 4 cores = 16.25w, but a single core can pull upwards of 30w under full load.
Given full capacity under full load with no throttling the typical 65w quad core should be 120w, which is probably what most overclockers actually see.
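The per-core budget math spelled out, using the same even-split assumption (real chips move that budget around via turbo, as noted above):

```python
# 250 W TDP spread evenly over 32 cores
print(250 / 32)                  # 7.8125 W per core, ~3.9 W per thread with SMT

# 65 W quad core: even split vs. what a single core can pull in turbo
sustained_per_core = 65 / 4      # 16.25 W
single_core_turbo = 30           # W, rough figure from the comment above
print(sustained_per_core, 4 * single_core_turbo)
# Four cores held at the turbo figure lands near the ~120 W an unthrottled
# 65 W quad can actually draw.
```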
Because AMD is a company that has always been forced to be innovative, rather than using a brute force approach to getting higher performance with each new product cycle. Most people still have not heard about Gen-Z, which AMD has been a part of from the beginning for example. Looking forward, AMD is looking to improve the overall system design, not just the CPU, GPU, or chipset.
For any serious work you won't have these chips OCed, so it should be safe for your current X399 mobos. TR3 at 7nm would be way more efficient and would probably aim for a 3.4-3.6GHz base at the same TDP.
Can anyone point me to an off-the-shelf application that can utilize 16 cores, much less 32? I don't mean some in-house custom research project where you spin off 32 threads just to stress the system. I mean Adobe or 3D rendering engines.
On the more mainstream side of things, believe it or not, an 8k/60p video on YouTube can hit 50% CPU usage on my 1950X. Of course, if you have an Nvidia 10 series or higher GPU, you can run 8k/60p on a quad core (maybe even less!) with no problems. No AMD GPU yet fully accelerates video above 4k.
"Despite the mainstream Ryzen processors already taking a devastating stab into the high-end desktop market, AMD’s Threadripper offered more cores at a workstation-friendly price."
So AMD is now making a few billion a year or more NET INCOME? No? Ok, then surely Intel must be making FAR LESS now with this "DEVASTATING STAB" AT Intel's market right? Nope. Oh right, they set a record for income...Well HRMPF....I'm confused. How is it that AMD made a "devastating stab" without either making tons of money or even making a small dent in the ENEMY commonly known as Intel?
Again, for you to have success either you have to be doing MUCH better now than "back then" (at some point in history), or the enemy has to be doing much worse (or in the best case, you get both of these). I see neither happening here, so I think it's a blip on the radar until you MAKE MASSIVE PROFIT. Worse, they now seem to be giving Intel GPU tech, which will lead to death in a few years anyway as Intel can make a better AMD APU than AMD themselves, at least until Intel is totally behind in fabs (getting there now it seems; they should've bought NV for $10-15 when they had a chance).
Don't get me wrong, it's great we have competition now, but until AMD figures out how to PRICE appropriately, they'll continue to just be some guy in the market that doesn't make a dime while selling a great product... LOL. Until you realize you are NOT in business to be OUR FRIEND, you'll keep making NOTHING, while being the #2 CPU and GPU maker in the world. How is that sentence even possible? Wake me when you have a P/E ratio for a few years (can't happen without PROFITS). I really wish AMD would concentrate on MAXIMIZING PROFITS, instead of maximizing how much we like them today... ROFL. I'm not saying your job is to be your customer's mortal enemy, but you don't have to be our best friend either. R&D costs money and if you're not making any, you can't afford more R&D :) Simple, right?
While Intel market share remains VERY high, the sales numbers are starting to reflect the growing popularity of Ryzen in the overall market. A big part of the delay has been OEMs not offering nearly as many AMD based systems, and I have yet to see one of the big name OEMs put one of the new Ryzen based APUs into one of their systems in the retail sector. Yes, you can find a few first generation Ryzen based machines with a discrete video card, but I haven't seen Raven Ridge based desktops and almost no laptops.
I'm looking forward to seeing the clocks. If the 24 core part retains ALL THE CACHE like the smaller parts in the Ryzen line up have done, then it will be a monster. 64MB of L3 cache! On a consumer CPU!
I'm sure it will like the TR1s did. Having the full 64MB of cache will also pretty much eliminate the memory channel concerns, at least for the 24C one. At 32C it might start to become issue.
I can't help but feel AMD should have skipped 12nm for TR and gone straight to 7nm as early as they could. If the 3GHz clock is true, then for all but massively threaded applications this CPU will be slower than the first generation. Although I see no reason why it wouldn't be able to clock as high as the 2700X if it's only using a few cores.
2018 for AMD is still 12nm, with a few 7nm Vega cards for the AI market being sold at the end of 2018. 2019 will be 7nm. Would you rather AMD not release ANY updated Threadripper in 2018 just because 7nm comes out next year?
2018 is 12nm, with 2019 chips being 7nm. 2017 was Zen cores, 2018 with 12nm is Zen+ cores(just some minor improvements to the design, not big changes). 2019 will bring Zen 2 cores with more significant improvements combined with 7nm. I will note that 7nm will start to ramp up production, the limited volume 7nm Vega AI cards with up to 32GB of video memory will come out at the end of 2018, but obviously, those will not be high volume parts.
realistz - Tuesday, June 5, 2018 - link
AMD avoiding Cinebench like the plague after Intel showed its score ROFL.knns - Tuesday, June 5, 2018 - link
whats the point of the cinebench score when it can only be achieved using an unrealistic phase change cooling system that no consumer will ever use?realistz - Tuesday, June 5, 2018 - link
AMD is known to heavily use Cinebench to market their CPUs but deliberately took it down the moment they saw how faster 28 cores > Slower 32 cores. It's not rocket science.Alexvrb - Tuesday, June 5, 2018 - link
Actually I think some of the cooling technology required for that Intel demo of vaporware was originally pioneering by NASA, so you could consider it rocket science.sgeocla - Tuesday, June 5, 2018 - link
Even TomsHardware noticed the huge cooling system needed by the Intel CPU:https://www.tomshardware.com/news/intel-cpu-28-cor...
nagi603 - Wednesday, June 6, 2018 - link
Lol, that's just insane.ianken - Wednesday, June 6, 2018 - link
The fan boys won't care. It'll all be "ehmagherd 28 core 5Ghz!!!!!!!"0ldman79 - Wednesday, June 6, 2018 - link
Honestly though, what do you expect? Look at 5GHz quad cores and the heat they generate, then extrapolate that to 28 cores.Intel is getting rather desperate to stay on top, they're overclocking their own CPUs beyond safe limits to beat AMD. Sounds rather like the Coppermine @ 1.13GHz fiasco. They released the CPU then had to recall it, replace a huge number of them and release it again.
Intel is so desperate to stay on top they'll crash and burn to do it. They're not used to ever coming in second place, even when it's on a product that realistically very, very few will buy.
JoeWright - Wednesday, June 6, 2018 - link
Luckily for them, there are countless "fanboys" and people with "brand loyalty", who only care about the highest clocks and the highest shiny numbers put out by intel. Doesn't matter if it's falsified(in some cases) or if it requires liquid nitrogen to cool. Smooth performance doesn't matter. Optimisation doesn't matter. Stability doesn't matter. 1% doesn't matter. Unless it's about amd or any other company. Then those hardware are "bulldozers".0ldman79 - Wednesday, June 6, 2018 - link
I've never understood it myself.Subjectively I'm a bit of an AMD fan.
Objectively, even as badass as the Zen is, I'm still considering Intel for my next build for a few specific reasons.
I gotta admit though, a 32 core beast would be a nice replacement for my current TVBox, however, the "reboot to game" thing will have to be corrected first.
eek2121 - Wednesday, June 6, 2018 - link
You don't need to "reboot to game". Most games work fine on the current 1950X and the ones that don't can either be started via a command line option or via a program such as process lasso. Probably about 1% of the 800 games I own have issues with 16 cores/32 threads. The rest work fine without needing to do anything.0ldman79 - Wednesday, June 6, 2018 - link
eek2121, that's good to hear.That was a concern if I went with a Threadripper. I try to make this computer just work as it's connected to the TV in the living room.
It's mostly mine but everyone watches movies, plays multiplayer games with me (Fretz on Fire for one) and depends on the homemade DVR.
It would suck if I actually had to reboot. The kids just wouldn't play if that was the case, they don't follow instructions if it doesn't work perfectly the first time. I figured they'd have a software patch or something eventually that got around the problem.
I'm no stranger to the command line options. The only way I can play Gran Turismo 4 on my FX 6300 with PCSX2 is to use cores 1, 3 and 5 and set at high priority. If the FX chooses where the threads go on it's own they bounce and the clock speed stays pretty close to 4GHz. Fixed to certain cores it will turbo to 4.3 and runs pretty damn well. Also have to lock out a core or two in order to get Crysis running on some systems.
atragorn - Thursday, June 7, 2018 - link
its not your cpu doing the bouncing its windowsDaveLT - Thursday, June 7, 2018 - link
It's the game... not the CPU being the problem.Like they always say, the problem is between keyboard and chair. I know FC4 had issues with CPUs with more than 16 threads. I know, because I had a 16 core Xeon that failed to run it.
ChristopherFortineux - Friday, June 8, 2018 - link
You do not have to enable Game Mode at all with TR. I do not use this setting to game.wumpus - Wednesday, June 6, 2018 - link
AMD got plenty of crap for the heatsink needed for the 5GHz bulldozer, and that fit in the case (it was more or less a standard AIO watercooler). Granted, even at 5GHz a bulldozer wasn't going to win any actual benchmark or other real speed competitions (yes, benchmarks might only be a step up blindly looking at frequency, but frequency was all the 5GHz bulldozer had).This is even more silly. I can't believe AMD pulled Cinebench for this (assuming they plan on shipping Threadripper, at least put the numbers back up once you ship and only compare to shipping Intel chips after that).
Oxford Guy - Wednesday, June 6, 2018 - link
It wasn't 5 GHz in more than marketing. It was a 4.7 GHz part that could rarely get to 5 GHz on a thread, maybe two. It could not run all of its threads at 5 GHz. The overclocked "The Stilt" said that the 9000 series dies were substandard (excessive leakage) and AMD created the 220 watt spec simply so they wouldn't have had to be sent to the crusher. Also, the 220 watt spec was "conservative" in that it understated the power actually required by a 9590.0ldman79 - Wednesday, June 6, 2018 - link
For the 9590 they pulled the same thing Intel did with the PIII Coppermine 1.13GHz, factory overclocked, no real headroom to speak of.I don't think the 9000s were substandard, the regular cores have a hard time hitting that speed. I think even cherry picked the architecture and the process just cannot support 5GHz safely.
A lot of the FX line will hit 4.8GHz safely, with no voltage bump or just barely bumping it up. Mine will do it in turbo mode with 2 cores loaded all day long without touching the voltage. Even locking the voltage in, pushing it, and turning off 4 cores, the remaining two cores will not hit 5GHz 100% stable.
Kinda wish I had a better board though. If my VRM could handle it I'm pretty sure this CPU could do 4.8 on all six cores. VRM overheat and the motherboard throttles it under load, so I'm just running default voltage, c-n-q and turbo enabled and pushed it for all it's worth.
rahvin - Thursday, June 7, 2018 - link
Look a the photo's they didn't just have a 1000W cooling system on the cpu, They had a MASSIVE heatsink on the VRM's. If you need 2000W and a $5000 chiller to run this it's not a viable part, period.https://www.tomshardware.com/news/intel-28-core-pr...
atragorn - Thursday, June 7, 2018 - link
Thats what I said here http://www.tomshardware.com/forum/id-3718662/intel...sonichedgehog360@yahoo.com - Wednesday, June 6, 2018 - link
They possibly did not use Cinebench because of low-information users posting threads like this after Intel’s 5 GHz demo of a non-stock, overclocked processor misinterpreting it. Any well-informed user knows neither Intel nor any CPU manufacturer will be selling a 28-core processor at 5 GHz any time soon but the laymen misread the PR stunt and fell for it, hook, line and sinker:https://www.reddit.com/r/pcmasterrace/comments/8oo...
Given AMD’s consistent advantage in multithreaded IPC with SMT compared to Intel’s HyperThreading, I would not be surprised in the least if their 32-core ThreadRipper 2 bests Intel’s 28-core Core i9.
sonichedgehog360@yahoo.com - Wednesday, June 6, 2018 - link
Here is just another example of a low-information post, this one from the tech news world. Clearly, Intel never stated this was stock form of what is to come nor would it ever be yet easily manipulated onlookers ate it up like manna from heaven:https://www.tweaktown.com/news/62123/rip-threadrip...
evernessince - Wednesday, June 6, 2018 - link
Who announces a brand new CPU with a completely unrealistic overclock and cooling setup? No one, because it's misleading and that's exactly what Intel did here.B3an - Wednesday, June 6, 2018 - link
Not surprised they fell for it on PCMR. That sub is a disgrace to the PC Master Race. Absolute retarded plebs.iwod - Wednesday, June 6, 2018 - link
Low Information User - I like this phase.bill.rookard - Wednesday, June 6, 2018 - link
I'm not sure, as everything depends on the clock speed that they release. Let's assume that Intel has the 28c part similar to the Xeon server part. That is clocked for 24/7 usage, so lets assume they can run the part about 15-20% faster for the 'i9' version. The Xeon is clocked for a 3.8 'all core' turbo, so maybe 4.2-4.3 boost for bursty workloads.Add in the 15% extra cores that TR2 has, subtract the 20% advantage in frequency that Intel has, and it'll be close. Pricing will matter.
The_Assimilator - Wednesday, June 6, 2018 - link
Considering that Xeon goes for $10k, AMD's already won.ChristopherFortineux - Friday, June 8, 2018 - link
If the top end TR2 drops at the same price as the 1950x intel will be in severe trouble.Fujikoma - Sunday, June 10, 2018 - link
Christopher, I'd think the TR2 with 32 cores will sell for $1500 minimum. It would probably be better for them to sell it in the $1750 to $2000 range depending on how much more it can number crunch past the 1950X. AMD needs to sell at a price with a reasonable margin to get them financially stable. Not an Intel margin, but higher than they're currently at.0ldman79 - Wednesday, June 6, 2018 - link
Intel did that knowing exactly what would happen. They're back to playing the dirty PR game.The only thing AMD can really do is be quiet and let the industry throw Intel under the bus for it.
Santoval - Wednesday, June 6, 2018 - link
Intel's stunt was comparable and equally useless (but often entertaining) to LN2 stunts. Neither can lead to a commercial product, and both are done for PR purposes.SirPerro - Wednesday, June 6, 2018 - link
Probably. Let's see how rocket science goes when we compare performance per dollar.12345 - Wednesday, June 6, 2018 - link
They don't want people using it to figure out the clock speeds based on 2nd gen Ryzen scores.cyberguyz - Thursday, June 7, 2018 - link
I suspect if you used the same cooler on the 32-core threadripper, you would be hitting similar clocks and 'ripping' that 28-core intel a new one.ChristopherFortineux - Friday, June 8, 2018 - link
The 32 Core TR will rip it a new one at 4GHz.cyberguyz - Thursday, June 7, 2018 - link
Here is a comment from an Intel engineer after Itntel hastily folded ujp their drmo & headed to the door when AMD announced their 32-core TR2 (some NSFW language):https://www.youtube.com/watch?time_continue=40&...
.Ne0 - Tuesday, June 26, 2018 - link
Rocket Science it is !Intel 28 core CPU apparently took 1 Kilowatt/hour, and Liquid Nitrogen grade coolant.
Not everyone can handle that, buy Rocket Scientists can
sonichedgehog360@yahoo.com - Wednesday, June 6, 2018 - link
This. Anyone claiming this is the stock form of the upcoming 28-Core HEDT flagship is, frankly, easily manipulated by Intel’s sly PR stunt to suggest otherwise. To the contrary, they are not releasing a 5 GHz 28-core processor when this was no more than overclocking magic show that no regular end user could ever achieve without $10,000+ on hand for the extreme cooling hardware and associated supporting equipment.HStewart - Wednesday, June 6, 2018 - link
My bet is Intel will have two models of the 28 core chip:
1. a 5Ghz water cooled version
2. a 4Ghz conventionally cooled version - still faster than the 32 core TR, since Intel cores are faster than AMD's, even though it has four fewer cores.
Has anybody compared the sizes of TR vs the Intel 28 core? TR looks bigger.
peevee - Wednesday, June 6, 2018 - link
"since intel cores are faster then AMD"Only in single-thread per core. Where all those multi-core CPUs are used, SMT and HT will be used, and SMT gives +50%, while HT barely +20% (and sometimes -10% or worse, depending on the app).
HStewart - Wednesday, June 6, 2018 - link
But that was before they were comparing 16 cores on AMD to 8 cores on Intel
peevee - Wednesday, June 6, 2018 - link
It does not matter. HT is not as beneficial as SMT for exactly the same reasons Intel has better IPC in single thread.
0ldman79 - Wednesday, June 6, 2018 - link
I believe the differences in the SMT implementations come down to the differences in the integer cores. If I read the data sheets correctly, AMD's integer pipelines are all symmetrical, while Intel has a long pipeline (or two) and a couple of short pipelines. Which architecture really does better depends entirely on the code.
AMD should have better total number crunching on paper, Intel can finish certain lines of code faster on a 4 stage pipeline vs the 13-14 stages of the AMD, then Intel has the 13-17 stage pipeline for more complicated problems.
babadivad - Wednesday, June 6, 2018 - link
Even if they do a 5Ghz water cooled version, it won't be 5Ghz on all cores. Because it's impossible, even with water cooling.
evernessince - Wednesday, June 6, 2018 - link
Water cooling isn't enough. Intel required an insulated custom loop, a high end water chiller, a 29 phase motherboard, and a copper triple fan VRM heatsink. You are looking at spending at least $3,000 just on those parts. And that's not considering whether Intel delidded or direct die mounted, which would void your $10,000 CPU warranty.
You are simply gullible if you think Intel will ever release a consumer facing CPU that consumes 933 watts.
cyberguyz - Thursday, June 7, 2018 - link
Tom's Hardware's comment about a $10,000 CPU was in relation to the Xeon Platinum CPU, not Intel's 28 core one. I don't expect Intel's 28 core CPU to go beyond $4,000. Not if Intel still wants to be competitive with AMD (I expect to see AMD's hit around $2000-$2200).
cyberguyz - Thursday, June 7, 2018 - link
I wouldn't bet on that 5Ghz value. It is a publicity stunt. The only way Intel could ever hit 5Ghz was to use a sub-zero (-10C) phase-change-based chiller. You might be lucky to hit 4.5-4.7Ghz with it though.
Chaitanya - Wednesday, June 6, 2018 - link
This user seems to be a wccftech troll that has found its way to AnandTech. A heavily OCed CPU (with unknown details about socket, etc.) with some exotic cooling system vs a stock CPU with an aftermarket cooler; now that's not a fair comparison in even the remotest sense of testing.
Hiorian - Tuesday, June 5, 2018 - link
Considering Intel water-chilled theirs in order to reach that score, no surprise. AMD probably didn't want to cheat by water cooling their CPU to a maximum potential that no one would use constantly.
coder543 - Tuesday, June 5, 2018 - link
People were upset that AMD kept using Cinebench, thinking that it was the only benchmark they were good at, and now you're upset that they're *not* using Cinebench in the demo.
CajunArson - Tuesday, June 5, 2018 - link
You're right. It used to be a staple of Lisa Su standing around talking about it and suddenly it disappeared.
evernessince - Wednesday, June 6, 2018 - link
Yes, because we all know heavily overclocked scores using a phase change cooler are so representative of IRL situations. By that logic, AMD should have just demonstrated Bulldozer using an extreme OC and phase change cooling; it would have likely gotten better scores in single thread than any Intel CPU at the time.
Considering the cooling method and 29-phase requirement of Intel's extreme overclocking, including the nearly 933 watts of power the CPU alone would consume, it's safe to assume that's not something everyday consumers should expect.
svan1971 - Wednesday, June 6, 2018 - link
Well at least their shit didn't break.
https://www.tomshardware.com/news/intel-28-core-pr...
MDD1963 - Thursday, June 7, 2018 - link
"Breaks"....LOL! Do you honestly not know what 'breaks cover' means? It means 'is revealed'....; it does not mean, 'malfunctions' (assuming you are are a non-native English speaker?)svan1971 - Wednesday, June 6, 2018 - link
https://www.tomshardware.com/news/intel-28-core-pr...
BaldFat - Wednesday, June 6, 2018 - link
Yes, an $8,700+ CPU from Intel will beat the $1,200???? AMD Threadripper.
HStewart - Wednesday, June 6, 2018 - link
There is no idea what the price will be - I am hoping they remove AVX-512 to lower the price and have 2 versions - it all depends on how much the performance difference is with the TR 32
Tewt - Wednesday, June 6, 2018 - link
Interesting, first comment is the same as over on techpowerup. Either a rabid fan or Intel doing PR control.
Tewt - Wednesday, June 6, 2018 - link
Also, AMD releases a 32 core Threadripper CPU working on AIR cooling along with the announcement of X399 boards. Intel uses liquid cooling, a cobbled-together motherboard that looks like a college project, and no known motherboards waiting to be released, and you prefer Intel? ROFL
SIDESIDE - Wednesday, June 6, 2018 - link
Eh, I don't believe that for a second. Intel's current gen 18 core has to be overclocked to the point of pulling nearly 400 watts to compete with the old Threadripper with FEWER CORES. Now that AMD has 12nm plus double the cores, I find it impossible to believe Intel's old Xeon re-branded as a consumer product is going to do better than their existing 18 core. As everyone else has pointed out, that score is because of the cooling they used, not the old stinky Xeon.
ChristopherFortineux - Friday, June 8, 2018 - link
If this CPU scores even double a stock 1950X in CB, it will get over 6k.
Flunk - Tuesday, June 5, 2018 - link
32 cores? That's the sort of thing we never would have got from Intel if AMD wasn't pushing them.
rhysiam - Wednesday, June 6, 2018 - link
Ah, this is an AMD CPU. Intel's current offerings top out at 28 cores.
HStewart - Wednesday, June 6, 2018 - link
It is not the number of cores that makes the difference - one must also take into account the core speed and the technology in the core.
coder543 - Tuesday, June 5, 2018 - link
After Intel spent all day hyping up their 28-core desktop processor that seems to require exotic VRMs, cooling, and everything else... AMD introduces a 32-core desktop processor that can drop into existing ThreadRipper 1 motherboards. I find that funny.
The Benjamins - Tuesday, June 5, 2018 - link
They beat the 7980XE on a CLC with the 24c TR 2000 on an air cooler.
shabby - Wednesday, June 6, 2018 - link
It's kind of like that race to 1ghz back in the day... Intel got there first but then came the recall.
0ldman79 - Wednesday, June 6, 2018 - link
Someone else is old enough to remember!!
ilt24 - Wednesday, June 6, 2018 - link
As I remember it, AMD knew the date Intel was going to release its 1Ghz chips and did a quick release a few days before so they could say first to 1Ghz. Intel's 1Ghz chip worked fine; it was the 1.13Ghz chip Intel had to recall. AMD was able to get its 180nm Athlons up to 1.4Ghz while Intel's P3 was stuck at 1.0Ghz until they got to 130nm.
drexnx - Wednesday, June 6, 2018 - link
the palomino Athlon XPs were still .18u (remember when it was point (number) micron?) and they got up to what, 1733mhz?
0ldman79 - Wednesday, June 6, 2018 - link
Technically they only hit 1400MHz, they were "PR Rated 1733". I had a 2400+ that was overclocked from 2GHz to 2.2.
dr.denton - Sunday, June 10, 2018 - link
0.18µ Palomino went up to 1.733Ghz, which made it a "2100+". There never was a "1733+", only a "1700+", running at 1.46Ghz.
Confusing, I know ^^
0ldman79 - Wednesday, June 6, 2018 - link
Intel released the Coppermine 180nm (.18 micron for those of us around at the time) @ 1.13GHz. At 1GHz they bumped the voltage from the stock 1.65v to 1.75v to get it stable, not a big deal. At 1.13GHz, before it was over, the chips were run at 1.85v I believe (I think the initial release was 1.75v) and there were a few refunds and replacements. The .13 micron Tualatin came out shortly thereafter, but Intel had already burned a lot of the customers that were excited about the new CPUs and AMD gobbled them up.
As good as the Tualatin was (it was an excellent replacement for the Coppermine), the sales numbers were terribly small. They were pushing the P4 by the time they released the Tualatin, and despite the Tualatin being faster than the P4, the OEMs were pushing the new architecture hard.
It was pretty much a small subset of nerds who knew about the Tualatin and hadn't already jumped to AMD that bought them.
I did get my hands on a few 1GHz Tualatin and I *think* I even saw a 933 or 966 Tualatin in the wild. I have a 1.4GHz Tualatin in storage somewhere. One day I'll have to dig out my old hardware and run benchmarks that will run on old to modern just for the sheer hell of it.
0ldman79 - Wednesday, June 6, 2018 - link
It was rather heartbreaking looking at the 1.7GHz Willamette Celeron that someone had bought over the 1.4GHz Tualatin because it had a faster clock speed. 128MB of RDRAM for $600 vs 256MB of PC133 for $240...
dr.denton - Sunday, June 10, 2018 - link
Couple of months back I had this impulse to build a kind of retro gaming system with only "underdog" components. Tualatin, maybe a Kyro 2... something like that. Got 3 (!) PIII-S 1.13Ghz for $3 on eBay. But then I saw the prices for decent i815 mainboards and gave away the CPUs for free :D
cyberguyz - Thursday, June 7, 2018 - link
Shoot, I remember shelling out over a grand to buy a 200MHz Pentium Pro + mboard back in the day. That hurt, since Intel immediately moved on to the silly Slot-1 Pentium 2 platform just after that.
Lord of the Bored - Wednesday, June 6, 2018 - link
Athlon got there first. It was a MAJOR feather in AMD's cap. The 1 GHz P3s were a paper launch. You couldn't actually buy one once Intel said they were available, because there was no supply. Intel only declared them launched so that AMD didn't have "the only" 1 GHz processor. It was almost a year before you could get a GHz P3.
The 1.1 GHz P3s were the ones that had to be recalled. Intel was trying to reclaim the speed crown, and stumbled. Fortunately for Intel, they ALSO weren't available in any meaningful amount, and it was a rather inexpensive recall.
0ldman79 - Wednesday, June 6, 2018 - link
I believe it was the 1.13GHz. The 1.1 was on the 100MHz bus and that seemed to be the top for the Coppermine core without exotic cooling. Pretty sure I've got one of those around here as well. I was actually hoping to get a 1.1 PIII running on a socket-to-slot adapter on a 440BX board, but my AMD Athlon system that started out as a Duron 733 got bumped to an Athlon 1GHz, then an Athlon 1.4GHz, then an Athlon XP 2400+. Didn't have much need for the PIII in a 440BX aside from geeking out.
MDD1963 - Thursday, June 7, 2018 - link
Intel did NOT get to 1 GHz first, AMD's Slot A Athlon 1000 was first... limited distribution, granted, but it was for sale.
Notmyusualid - Sunday, June 10, 2018 - link
@shabby
Indeed, but I think they are calling today the 'Core Wars', if I am not mistaken.
cyberguyz - Thursday, June 7, 2018 - link
That is typical Intel marketing and why I walked away from them. Now on TR 1950X and not regretting a moment of it.
CajunArson - Tuesday, June 5, 2018 - link
So, since Cutress can hardly keep it in his pants when talking about how much we need to have fewer and fewer cores on a single piece of silicon, how excited are we to have a "threadripper" where each die gets a whole one channel of RAM and 75% of system memory requires an off-die hop? I seem to remember when the hot thing was an "on die memory controller". At this point, we are down to a "25% on-die memory controller if you are lucky" in Threadripper 2.
Domaldel - Tuesday, June 5, 2018 - link
Actually, two of the dies have two channels of direct memory access each; the other two do not. And it's not that different from Intel's approach on those bigger chips with a ringbus - remember the CPUs with two rings of ringbus?
The second ring did *not* have direct memory access either.
This is similar.
And considering the price point I really don't think we can complain, if we *need* the extra memory bandwidth or access we can go to Epyc instead.
And yes, this approach *does* introduce more latency, but for a lot of applications that's not a concern at all or the impact is only minor.
This isn't a gaming chip after all.
realistz - Tuesday, June 5, 2018 - link
Threadripper is advertised as a HEDT chip where workstation meets gaming and overclocking. If it does poorly at any of those, it is entirely something else.
mczak - Wednesday, June 6, 2018 - link
There are zero games which would benefit from even more than just 8 cores. If the OS is aware of some cores being worse due to not having memory attached to them directly, it could simply never use them unless the load really requires more than 16 cores / 32 threads. So for gaming this should easily be a non-issue. (But I'm not saying there won't be other workloads where this indeed might be quite suboptimal.)
SirPerro - Wednesday, June 6, 2018 - link
This. People fail to see this for what it is. This is a 16 core CPU, with "overcoring" when necessary. This will probably work like a Threadripper 1, using the cores with lower memory latency, and just kick in the other 16 cores when necessary. I'm sure the next version will be more ambitious with its memory access, but I personally like this proposition. Not everyone needs the best from 32 cores every time, and those who need it won't buy this particular CPU.
peevee - Wednesday, June 6, 2018 - link
"This probably will work like a Threadripper 1, using the cores with lesser memory latency, and just kick in the other pair of 16 cores when necessary."No OS will know this. So no. Your threads will be assigned randomly, unless you will set maximum number of threads and affinity mask for the process manually.
0ldman79 - Wednesday, June 6, 2018 - link
Says who?
Windows has been updated to use cores first rather than threads on every CPU with SMT, and it assigns based on modules on the AMD Bulldozer and its children.
The Skylake assigns the thread based on a dozen different metrics, one of them being the individual core temperature. I think they can handle it.
tamalero - Wednesday, June 6, 2018 - link
The benefit stated for HEDT is content creation, which also targets gamers WHO STREAM or work while they play. If you can keep one game using 1 die, you won't have latency problems. The other dies could be working slower on background tasks like 3D rendering or streaming videos with real time processing.
GreenReaper - Sunday, June 10, 2018 - link
True in most cases, but bear in mind that there are potential contention issues with, say, Level 3 cache, memory bandwidth or access to the GPU.
Kevin G - Wednesday, June 6, 2018 - link
The Xeons E5/E7 did split the memory controller between different rings internally. This permitted the previous 18 core and 24 core chips to act as two NUMA nodes per socket. However, the number of QPI links varied per ring and there was only a single PCIe root complex per die.
Ian Cutress - Tuesday, June 5, 2018 - link
It's that two dies get two channels each and the other two dies get zero.
I love you too, btw
CajunArson - Wednesday, June 6, 2018 - link
Great, so now that AMD has abandoned on-die memory controllers in high-end parts, when can we expect your article claiming that Intel must make sure to go back to 1990's era off-chip memory controllers since they are clearly idiots who "can't keep up" with AMD?
sor - Wednesday, June 6, 2018 - link
Intel already does this. Someone above pointed this out to you. You seem to be looking for a fight.
CajunArson - Wednesday, June 6, 2018 - link
No, the guy above is an idiot who has never heard of a NUMA node in his life, and neither have you. Claiming that an on-die ring bus (which isn't even accurate for Skylake X) is exactly the same thing as having to dump every memory request for half your CPU through an over-glorified PCIe connector is flat out wrong.
Look up what the term "NUMA node" means some time. Then realize that having four of them in a single socket (and massively unbalanced to boot) is not something to be bragging about.
.vodka - Wednesday, June 6, 2018 - link
Then we'll see if AMD has polished their interconnect enough that two dies not having direct memory access isn't a major problem for the intended workloads on this platform. Their simulations and tests must have been good enough to greenlight this. If pricing is right, it'll be good for some people. I guess there will be an option in the BIOS to just disable those two extra dies and run it like a 1st gen TR while enjoying the rest of the 2nd gen improvements, which are significant.
Time will tell.
Gothmoth - Wednesday, June 6, 2018 - link
Is this retard above allowed to insult people... 😂 He googled NUMA and now he thinks he is an expert.... Lol
evernessince - Wednesday, June 6, 2018 - link
You clearly have no idea what NUMA is. It's a performance benefit to have certain cores have direct access to memory and others not.
https://en.wikipedia.org/wiki/Non-uniform_memory_a...
AMD improves memory latency and cache misses by assigning the closest cores to the closest resources.
If this was a performance issue it would have shown up in the first threadripper. Clearly it's working well as Threadripper performs just as well as Intel counterparts at half the price.
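As a rough illustration of what the OS actually sees here, this minimal Win32 C++ sketch lists the NUMA nodes Windows exposes and the logical processors attached to each; on a Threadripper running in local (NUMA) memory mode you would expect more than one node to show up:

```cpp
// Minimal sketch (Win32, C++): enumerate NUMA nodes and their processor masks,
// i.e. which cores are "near" which memory.
#include <windows.h>
#include <cstdio>

int main() {
    ULONG highestNode = 0;
    if (!GetNumaHighestNodeNumber(&highestNode)) {
        std::printf("GetNumaHighestNodeNumber failed: %lu\n", GetLastError());
        return 1;
    }
    for (ULONG node = 0; node <= highestNode; ++node) {
        GROUP_AFFINITY affinity = {};
        if (GetNumaNodeProcessorMaskEx((USHORT)node, &affinity)) {
            std::printf("node %lu: group %u, processor mask 0x%llx\n",
                        node, (unsigned)affinity.Group,
                        (unsigned long long)affinity.Mask);
        }
    }
    return 0;
}
```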
iwod - Wednesday, June 6, 2018 - link
Well, most people don't. Even those who program low-level enough don't either, or at least they understand it but aren't too sure how best to handle it. Go and ask the FreeBSD devs. On my desktop, I am glad I have the choice of 32 cores. Now bring me Zen 2.
CajunArson - Wednesday, June 6, 2018 - link
Oh yeah, it's such a "performance benefit" to have to rewrite your entire software stack to keep track of the physical location of each byte to prevent performance from going off a cliff. That's like saying that having a cast on your leg is a "performance benefit" because having a broken leg is a great thing.
The better solution is to not have to worry about it by having a chip that doesn't need to have massively unbalanced memory allocation in the first place. In other words, having a cast on your broken leg sure isn't a "performance benefit" compared to not having a broken leg.
Did you even bother to read that Wikipedia article?
Colin1497 - Wednesday, June 6, 2018 - link
Are you writing your own OS? If so, then yes, absolutely, make sure you rewrite your scheduler to deal with this. Commercially available OSes are already there. Windows has had NUMA support for what, a decade? More?
peevee - Wednesday, June 6, 2018 - link
NUMA support in the OS is not enough to get to normal scaling. Apps have to be written in a certain way, and for many many platforms/languages (except for C and C++) it is not even realistically possible. Ask me how I know.
peevee - Wednesday, June 6, 2018 - link
This is UNBALANCED NUMA though. Very different from the normal kind. If they assigned one channel to each die, at least NUMA-aware, memory-latency-dependent applications would have a chance to work OK on 32-64 threads. Although most NUMA-aware apps would be designed to depend on throughput and not latency, and in the latter case 2x2 or 4x1 is the same on 32+ threads, while 16 threads or fewer will have double the low-latency bandwidth if happily assigned by the OS (or by AMD-aware code - unlikely except in AMD-influenced tests) to the better cores.
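For reference, this is roughly the kind of NUMA-aware C++ being described above: a hedged Win32 sketch that pins the current thread to one node and backs its working buffer with memory preferred on that same node, so accesses stay local. Node 0 and the 64 MB size are arbitrary example values, not anything AMD or Microsoft prescribes:

```cpp
// Minimal sketch (Win32, C++): keep a thread and its buffer on the same NUMA node.
#include <windows.h>
#include <cstdio>

int main() {
    const USHORT node = 0;  // example node; a real app would pick per worker

    GROUP_AFFINITY nodeAffinity = {};
    if (!GetNumaNodeProcessorMaskEx(node, &nodeAffinity)) {
        std::printf("GetNumaNodeProcessorMaskEx failed: %lu\n", GetLastError());
        return 1;
    }
    // Run this thread only on processors belonging to that node.
    if (!SetThreadGroupAffinity(GetCurrentThread(), &nodeAffinity, nullptr)) {
        std::printf("SetThreadGroupAffinity failed: %lu\n", GetLastError());
        return 1;
    }
    // Ask for memory whose physical pages should come from the same node.
    SIZE_T bytes = 64ull * 1024 * 1024;
    void* buffer = VirtualAllocExNuma(GetCurrentProcess(), nullptr, bytes,
                                      MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                                      node);
    if (!buffer) {
        std::printf("VirtualAllocExNuma failed: %lu\n", GetLastError());
        return 1;
    }
    // ... do the latency-sensitive work here ...
    VirtualFree(buffer, 0, MEM_RELEASE);
    return 0;
}
```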
rhysiam - Wednesday, June 6, 2018 - link
Who cares how AMD decide to arrange their memory controllers? What matters is the performance. Given that Ian already raised concerns about memory latency in this launch article, we can be pretty confident he'll be exploring it in detail once he gets his hands on one of these CPUs. If Threadripper 2 suffers from high memory latency and that significantly impacts performance in real-world workloads, then I'm trusting Ian to find out about it and help us all make an informed purchasing decision.
Here's the thing though: 12 months ago today, purchasing a premium HEDT machine meant spending $1700 on a 10 core Broadwell-E 6950X with an all-core turbo of around 3.5Ghz. Surely it's reasonable for people to get a little excited that come September (~15 months later) you'll be able to get a 32 core CPU that may even come in at a similar price-point (maybe $2K, we'll have to wait and see). Sure, AMD's multi-die approach has some drawbacks. Reviews will ultimately tell us how significant those drawbacks are. But in terms of benefits, that multi-die approach gives us access to a 32 core CPU. That's a hell of a tradeoff and one that many enthusiasts would happily take, I suspect.
ET - Wednesday, June 6, 2018 - link
If two dies get two channels and a channel dies, would other dies channel channels to the dies who channel died?(Not a serious question; just punning.)
mkaibear - Wednesday, June 6, 2018 - link
^whose
(but respect for the pun ;)
Cooe - Wednesday, June 6, 2018 - link
*facepalm* well thankfully the chip wasn't designed like that!
(Both the actual IMCs themselves & all DMA support have been disabled on the 2 newly active dies, meaning that max memory distance is no different than on Threadripper v1. Only potential cache hops have had their max possible distance pushed up to EPYC levels, but that's still a worst case scenario, as Windows & increasing numbers of games & programs have been updated for Ryzen to minimize cross-CCX & cross-die cache & memory operations as much as possible.)
Maybe try reading the article first before you post?
Or at the very least do the most basic level of critical thinking (X399 = only 4x memory channels = only 2x enabled IMCs)....
drajitshnew - Tuesday, June 5, 2018 - link
In your article it is mentioned "usually it is suggested to just go buy an EPYC for those workloads)". Correct me if I'm wrong, but is there any commercially available motherboard that has more than 56 lanes?
rhysiam - Wednesday, June 6, 2018 - link
Your post got me curious so I went looking. This Gigabyte Newegg board seems to offer 88 PCIe lanes to PCIe slots, plus a bunch of extras you'd expect on a high end board (including dual 10Gb): https://www.newegg.com/Product/Product.aspx?Item=N...
drajitshnew - Wednesday, June 6, 2018 - link
Thanks
phoenix_rizzen - Wednesday, June 6, 2018 - link
This is a very nice board to work with. We're using it to build an iSCSI storage server at work. There's lots of RAM slots, lots of PCIe slots and lanes to work with, and the integrated 10 Gbps Ethernet was the icing on the cake (compared to the SuperMicro boards available for EPYC). Price was surprisingly low for all the features it includes. Only downside was that it took about 6 months for all the parts to arrive. And the onboard SATA controller is a weird one that uses slimSATA connectors with breakout cables to connect 4 SATA devices each (doesn't work with multi-lane backplanes we discovered). Our next build will use a direct-connect backplane to allow the use of the onboard SATA controller; for this build we had to add an LSI/Avago/Broadcom HBA.
Pork@III - Tuesday, June 5, 2018 - link
At last a teraflop processor from AMD for the amateur market
ZippZ - Wednesday, June 6, 2018 - link
Intel's 5Ghz 28 core was used to steal AMD's thunder and I doubt they'll sell more than a handful, if any. You can't really compare an AMD 32 core that uses 300 watts for chip and cooling to Intel's chip that probably uses 1 to 2kW.
PixyMisa - Wednesday, June 6, 2018 - link
This is very likely true.
remosito - Wednesday, June 6, 2018 - link
Forget about the 5GHz. The real question is how high can it go on air?
If it's above 4GHz, and AMD only gains a couple hundred MHz on their WIP 3.4GHz, then 28 cores over 4GHz will still beat 32 at 3.6GHz in both single- and multithreaded workloads.
duploxxx - Wednesday, June 6, 2018 - link
one might already start by looking at the current stats of the cpu....
https://ark.intel.com/products/120496/Intel-Xeon-P...
28 cores
2.5 base, up to 3.8 max turbo - and please look at the AnandTech server article where it clearly explains how many cores can boost to which frequency when being used
205W
feel free to continue dreaming
tamalero - Wednesday, June 6, 2018 - link
Could they disable AVX-512 or other parts to reduce the heat produced, at the cost of some specific performance points?
Right now it seems AMD will smoke Intel's offer with more cores, less heat, less power, and no exotic cooling required, to make things worse for Intel.
smilingcrow - Wednesday, June 6, 2018 - link
AVX offset is used for that.
notashill - Wednesday, June 6, 2018 - link
Intel's advertised clocks are already measured without AVX-512 running, and they throttle heavily when it is. Example: the Xeon Silver 4116 is 2.1Ghz base clock but only 1.4Ghz when AVX-512 is active.
Luckz - Wednesday, June 6, 2018 - link
AVX doesn't exactly generate heat while not being used.
HStewart - Wednesday, June 6, 2018 - link
Yes, I believe so - AVX-512 is a known drain on processor power even if you don't use it. Better yet, instead of just disabling it, remove it so it takes less die space. That also reduces the cost of the CPU. If I was Intel I would come out with two versions of this chip:
1. one that uses extensive water cooling and goes up to 5Ghz
2. one that uses conventional cooling and up to 4Ghz or so.
Intel is also smart to pre-release some information; this means they can find out what AMD is doing and make changes to improve the product.
SaturnusDK - Wednesday, June 6, 2018 - link
The CPU allegedly used 1310W by itself and the phase change cooler used 1100W. Other system components not accounted for.
If you take the 670W max power consumption figure from Tom's Hardware's test of the Platinum 8176 (which this is essentially just an unlocked version of) at 2.8GHz all-core and just multiply that up, you get 1196W, so the alleged 1310W power consumption seems fairly credible.
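As a quick back-of-the-envelope check on that scale-up (assuming, as above, that voltage stays put):

$$P_{5.0\,\mathrm{GHz}} \approx 670\,\mathrm{W} \times \frac{5.0\,\mathrm{GHz}}{2.8\,\mathrm{GHz}} \approx 1196\,\mathrm{W}$$

Dynamic power actually scales roughly as $P \propto f \cdot V^2$, so if the voltage has to rise at all to hold 5 GHz, the real figure lands above that linear estimate, which is consistent with the alleged 1310 W.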
jospoortvliet - Wednesday, June 6, 2018 - link
But consumption goes up much faster than linearly with clock speed (roughly with frequency times voltage squared), so they must have to bin like crazy to get chips which do this at under 1.1kW...
oRAirwolf - Wednesday, June 6, 2018 - link
My biggest question is whether it has that same NUMA node mode nonsense for gaming where you had to reboot the PC to change profiles. I would be running a Threadripper if it weren't for that.
Tamz_msc - Wednesday, June 6, 2018 - link
You shouldn't be buying Threadripper for gaming in the first place.
Luckz - Wednesday, June 6, 2018 - link
You'd only want to reboot to change that if you have specific software that particularly needs it or particularly benefits from it. I doubt most 1950X users a) reboot to change to NUMA mode, or b) reboot to disable half their chip.
Besides that, antique software like Process Lasso or even good old Windows Task Manager is very capable of limiting software to specific CPU cores (and thus avoiding the dreaded latency).
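Below is a minimal Win32 C++ sketch of what tools like Process Lasso or Task Manager's "Set affinity" effectively do: start a program and confine it to a subset of cores before it runs. The "game.exe" name and the 0xFF mask (first eight logical processors) are placeholders, not recommendations:

```cpp
// Minimal sketch (Win32, C++): launch a program suspended, restrict its
// affinity, then let it run.
#include <windows.h>
#include <cstdio>

int main() {
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmdLine[] = "game.exe";  // placeholder executable name

    if (!CreateProcessA(nullptr, cmdLine, nullptr, nullptr, FALSE,
                        CREATE_SUSPENDED, nullptr, nullptr, &si, &pi)) {
        std::printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    // Confine the new process to the first eight logical processors.
    if (!SetProcessAffinityMask(pi.hProcess, 0xFF)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    }
    ResumeThread(pi.hThread);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```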
peevee - Wednesday, June 6, 2018 - link
start /NODE:
piroroadkill - Monday, June 11, 2018 - link
Antique? Process Lasso is a nice piece of software, and it still receives updates :)
_zenith - Saturday, June 9, 2018 - link
You don't need to.
Unless you want to play at or above 144Hz, you simply won't notice.
You can also automate fixes for incompatible games using Process Lasso, when needed, but again, this is uncommon. Another poster said they had problems with maybe 8 games out of hundreds.
martinw - Wednesday, June 6, 2018 - link
"The codename for the processor family is listed as "Colfax".Are you sure this was the codename for the processor and not the system assembler? There is a workstation and server vendor called Colfax that sells AMD-based systems, eg:
http://www.colfax-intl.com/nd/Servers/CX1250a-E7.a...
It would seem unlikely that AMD would take a customer company name as a codename.
Ryan Smith - Wednesday, June 6, 2018 - link
"Are you sure this was the codename for the processor and not the system assembler?"Yes. I had the same thought. But that's what it literally says in AMD's slides.
https://images.anandtech.com/galleries/6398/AMD%20...
ET - Wednesday, June 6, 2018 - link
I think it's a good move on AMD's part, and it would be interesting to see how these parts are priced. I hope that AMD plans to make some 180W SKUs. While IIRC AMD hadn't promised TR socket compatibility the way it did for AM4, I think it would be nice if AMD offered CPUs which are guaranteed to be compatible with older motherboards.
SaturnusDK - Wednesday, June 6, 2018 - link
There might be existing X399 boards that do not have the VRM requirements to run the new 28/32C CPUs, but they definitely said that these CPUs use the existing X399 platform.
Luckz - Wednesday, June 6, 2018 - link
...and that's exactly like how a B350 might not have a lot of fun with an 8-core CPU, or a B360 board might not get far with an 8700K. There's cheap, barebones Z370 too, while we're at it.
Wolfclaw - Wednesday, June 6, 2018 - link
Zen+ now on 12nm while the GPU team are pushing ahead with 7nm; all looking good for AMD. Let's just hope they don't do a Phenom down the line!
peevee - Wednesday, June 6, 2018 - link
They haven't redesigned the cores to take any advantage of 12nm though. Just lots of dead space between modules, so the same latencies and power waste in intra-chip communications.
jjj - Wednesday, June 6, 2018 - link
32 cores, 4 memory channels - that's gonna be problematic and likely better off with Epyc. On the upside, the memory channel limitation minimizes cannibalization, so they might be able to price it well, maybe $2-3k.
SaturnusDK - Wednesday, June 6, 2018 - link
Under $2K for the top part actually.
0ldman79 - Wednesday, June 6, 2018 - link
If you need the memory bandwidth, you buy an Epyc.
If you can get by with just the extra CPU power, Threadripper.
They've got staggered product lines. Not a problem.
B3an - Wednesday, June 6, 2018 - link
I'm definitely getting a 24 or 32 core Threadripper 2, been waiting for this for a year and was hoping they'd increase the core count this time. Finally a decent core count that's worthy of a 2018 CPU.
SaturnusDK - Wednesday, June 6, 2018 - link
Me too. I can finally retire my ageing dual socket Xeon. Probably going for the 24C as I don't expect to be using more for several years ahead and it'll probably be priced better per core than the 32C one. Not to mention that clocks might be pushed a bit higher on the 24C with a good air cooler.
0ldman79 - Wednesday, June 6, 2018 - link
My thoughts exactly. Six cores per die should be a little easier to power, cool and overclock, same as the FX series. Not saying I wouldn't switch for an eight core if I tripped over one cheap, but my six core is clocked high enough that I really wouldn't see much benefit going to an eight core. Likely my VRM would limit the 8 core sooner, and at best I'd have the same performance in most situations.
Da W - Wednesday, June 6, 2018 - link
AMD is competing to give consumers more for their money.
Intel doesn't want to give consumers bang for their buck and tries to bring back the gigahertz race of the 2000s.
Intel fanboyz are happy.
zodiacfml - Wednesday, June 6, 2018 - link
Now I'm impressed with the scene Intel and AMD are making. This is a brawl. I knew that Intel wasn't putting out their best on their CPUs, considering their poor thermal paste and the good performance once it's replaced or the CPU is delidded. But I hadn't expected them to bring out the best of their CPUs this soon. I thought we would not see that at all before their 10nm products arrive.
I have to wonder: if AMD can do this at 14nm in just a short time since the first Ryzen, then Intel has been sitting on this capability for years!
Anyway, AMD is doing fine with their CPU business now. Nvidia is now showing an Intel-like move, delaying their next gen products with no definite date. Does it mean AMD isn't releasing anything new?
Pork@III - Wednesday, June 6, 2018 - link
Can Crysis Play With This Processor? :D
0ldman79 - Wednesday, June 6, 2018 - link
lol
Honestly, probably not.
I catch hell getting Crysis to work on a modern CPU and OS these days. Crysis Warhead, based on the *exact same executable*, works; Crysis freaks out.
Running the 64-bit executable got it working on an FX under Windows 7 64-bit, but I still haven't been able to get the original working on my Skylake laptop. I've gotten as far as the cutscenes finally, then it crashes. EA will never release a patch for it, but they'll probably keep selling it on Origin.
Crytek was actually pretty good about patching games before the acquisition.
oleyska - Thursday, June 7, 2018 - link
My ryzen 1700 runs crysis perfectly fine.
msroadkill612 - Thursday, June 21, 2018 - link
It will play havoc.
Chickthief - Wednesday, June 6, 2018 - link
Does this mean 32 cores and 64 threads with 80 MEGABYTES OF CACHE AND 128 FREAKING PCI-E LANES?
phoenix_rizzen - Wednesday, June 6, 2018 - link
64 PCIe lanes in Threadripper.
If you want the full 128 PCIe lanes, you need to move to EPYC.
rocky12345 - Wednesday, June 6, 2018 - link
It is all cool to see a 32/64 core in the high end, but AMD clearly was not ready to do this if they had to disable the extra dies' memory channels and basically not have them getting direct access to the system memory. I guess we will have to wait and see how this all pans out and how much it hurts performance, if at all.
Now onto Intel and their bogus 28 core CPU demo, using phase change cooling to make it run at 5GHz but not disclosing this up front, just to one-up AMD's new Threadripper release. Come on Intel, you are better than this, or at least I would have hoped so. Now that you had your fun and got PR from it, let's see the real speeds and the real performance of that CPU without the phase change cooling and using good air cooling. I am going to bet it won't get anywhere close to 7.3k in Cinebench. Pretty sad Intel, pretty sad indeed. Then again, they got the effect they were looking for, because all those that are less knowledgeable about all things computers have been duped.
0ldman79 - Wednesday, June 6, 2018 - link
It's a good way to stagger their product lines.
When you look at the PCIe lane madness (and other things) that Intel does, just losing 4 memory channels on the TR isn't that bad.
Intel Kaby Lake X. Nuff said.
msroadkill612 - Thursday, June 21, 2018 - link
"AMD clearly was not ready to do this if they had to disable the extra dies memory channels and basically not have them getting direct access to the system memory. I guess we will have to wait and see how this all pans out and how much it hurts performance "That is the mechanism but you conclusions are wrong imo.
Those limitations are an architectural necessity - they cant have more lanes than x399, but there are cpu intensive workloads that suit this configuration, or I doubt they would bother making it.
Its certainly an interesting new task to lob onto Fabric - act like a plx for orphan ccxS.
TassadarL - Wednesday, June 6, 2018 - link
That AMD published their 32 core desktop Threadripper without any 8-channel motherboard is the issue that concerns me most. If you run dual channel RAM under one die and another die has no RAM attached, you could get a big performance loss. X399 has been set up so that if the 32 core part uses X399, it loses 4 memory channels and 64 lanes of PCIe 3.0. Infinity Fabric cannot solve this problem. AMD REALLY needs to give us a new 8-channel motherboard platform, like an X499 or X399-8ch.
SaturnusDK - Thursday, June 7, 2018 - link
All cores in all current Ryzen and TR CPUs access memory through the infinity fabric. Remember, the infinity fabric is the internal bus. Just like all cores on Intel CPUs access memory through their ring or mesh bus. I really don't see the huge issue. Granted, the die-to-die infinity fabric has a slightly higher latency, but decent thread management in the OS should eliminate almost all negative consequences.
peevee - Wednesday, June 6, 2018 - link
What's the difference with EPYC now? Threadripper 2 = EPYC 2 (non-existent yet?)
silverblue - Wednesday, June 6, 2018 - link
Apparently, the next generation EPYC will use Zen 2 and not Zen+. As such, EPYC will not be refreshed until 2019.
phoenix_rizzen - Wednesday, June 6, 2018 - link
Threadripper 1 has 64 PCIe lanes, 4 memory controllers, up to 16 cores per CPU, and 1 CPU per motherboard, using Zen cores.
EPYC 1 has 128 PCIe lanes, 8 memory controllers, up to 32 cores per CPU, and up to 2 CPU sockets per motherboard, using Zen cores.
Threadripper 2 has 64 PCIe lanes, 4 memory controllers, up to 32 cores per CPU, and 1 CPU per motherboard, using Zen+ cores.
EPYC 2 should have 128 PCIe lanes, 8 memory controllers, up to 32 cores per CPU (although there are rumours this may double), and up to 2 CPUs per motherboard, using Zen2 cores.
Pretty decent evolution of features there.
Pork@III - Thursday, June 7, 2018 - link
EPYC 2 will go up to 48 (or maybe 64?) cores per CPU - don't forget the Zen 2 architecture on 7nm.
peevee - Wednesday, June 6, 2018 - link
BTW, 32 cores with SMT bumps right against the Windows 64 "logical CPUs per group" limitation (I wonder which m0r0n @MS invented this sht to begin with), and taking advantage of all cores in multi-group configurations requires a rewrite of the multi-threaded code in existing apps.
Filiprino - Wednesday, June 6, 2018 - link
Are you joking? :|
0ldman79 - Wednesday, June 6, 2018 - link
He is not.
If your multithreaded app can take advantage of 64 threads, that's it for Windows apparently. Anything over 64 cores would be handled by a second scheduler (if I read this all correctly) and you'd gain load capacity, not multithreaded performance.
I haven't seen many apps that take full advantage of my six core, so it may not be an issue for a while yet.
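For the curious, this is roughly what that 64-logical-processor limit looks like from code: Windows splits anything beyond 64 logical processors into processor groups, and a thread only runs inside one group unless the application explicitly asks otherwise. A minimal Win32 C++ sketch (assumes Windows 7 or later; the "last group" choice is just an example):

```cpp
// Minimal sketch (Win32, C++): enumerate processor groups and move the current
// thread into a chosen group. A 32C/64T chip still fits in one group; beyond
// 64 logical processors, code has to opt in to the other groups.
#include <windows.h>
#include <cstdio>

int main() {
    WORD groupCount = GetActiveProcessorGroupCount();
    std::printf("%u processor group(s)\n", (unsigned)groupCount);
    for (WORD g = 0; g < groupCount; ++g) {
        std::printf("  group %u: %lu logical processors\n",
                    (unsigned)g, GetActiveProcessorCount(g));
    }

    // Example: run this thread in the last group, on any of its processors.
    GROUP_AFFINITY affinity = {};
    affinity.Group = (WORD)(groupCount - 1);
    DWORD procsInGroup = GetActiveProcessorCount(affinity.Group);
    affinity.Mask = (procsInGroup >= 64) ? ~0ull : ((1ull << procsInGroup) - 1);
    if (!SetThreadGroupAffinity(GetCurrentThread(), &affinity, nullptr)) {
        std::printf("SetThreadGroupAffinity failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}
```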
The_Assimilator - Wednesday, June 6, 2018 - link
Disappointing that the two extra dies don't enable any more PCIe lanes, but understandable... and still more lanes than Intel can offer at a much higher price.
Oxford Guy - Wednesday, June 6, 2018 - link
"Nonetheless, it was stated by several motherboard vendors that some of the current X399 motherboards on the market might struggle with power delivery to the new parts, and so we are likely to see a motherboard refresh."Oh, I'm absolutely shocked.
Alistair - Wednesday, June 6, 2018 - link
I'm no fan of Intel, but just look at their current CPUs to guess at an efficient clock for the 5Ghz one. That would be 4.6Ghz for Coffee Lake. So I expect the 28 core to be a 4.6Ghz max part on air, as that is the efficient clock for those CPUs.
Alistair - Wednesday, June 6, 2018 - link
If it is based on Skylake-X, then just 4.2Ghz max on air probably.
SaturnusDK - Thursday, June 7, 2018 - link
Even 4.2GHz is extremely generous. The Platinum 8176 uses 670W with 28 cores at 2.8GHz all cores. Just multiplying that up to 4.2GHz assuming you can overclock it without increasing voltages would be 1005W power consumption for the CPU alone. Good luck getting that stable with an air cooler.
poohbear - Wednesday, June 6, 2018 - link
anyone else notice the 5% spike in AMD stock today? Their stock has jumped 40% in the past month!!!!
SaturnusDK - Thursday, June 7, 2018 - link
Yup. Bought mine when they were below $2 a share in early 2016. Now at $14.4. Good investment.
SaturnusDK - Thursday, June 7, 2018 - link
Oh, climbed to over $15 now.
peevee - Wednesday, June 6, 2018 - link
If only they would take full advantage of 12nm in Zen+, they could have had 6 cores per CCX, 12 per die. Then Ryzen would be plenty.
peevee - Wednesday, June 6, 2018 - link
I mean, without all that slow and power-hungry die-hopping.
Hul8 - Wednesday, June 6, 2018 - link
If they went that route, they'd have had to create a whole new design, which would have pushed the release of Zen+ back so far, they'd have been better off just concentrating on Zen 2 on 7 nm.
Targon - Thursday, June 7, 2018 - link
The problem with more cores per CCX is that in the early development phase, you would end up with more failed dies that would be garbage. 14nm was still new to GlobalFoundries when Ryzen first launched, so 4 cores was good for yields. The second generation could have gone to six, but due to this being a refresh cycle, why risk problems with production?
7nm and Zen 2 cores MAY be the right time to go to 6 cores per CCX, but a lot depends on things like clock speed. We may see a six core CCX with Zen 2, but whether that will be compatible with first generation motherboards, or whether there will be limitations on performance due to changes, is another story. A quad core CCX, on the other hand, wouldn't be a change for the first generation motherboards, and only clock speed plus other things such as power would come into play.
Windows and how Windows talks to the chipset/processor would also be another potential issue with the change to 6 core per CCX.
Dragonstongue - Wednesday, June 6, 2018 - link
I honestly wish they would throw TDP in the garbage where it belongs. It is far too obscure a number, because everyone who uses it seems to "calibrate" it the way THEY see fit; it is a very flat number for an anything-but-flat comparison between products, because of the numerous ways it can be tested for this number to be "hit" - that is, the CPU at 100% load on all cores, 1 core at 100% 85% of the time, 80% load on all cores 60% of the time, etc.
or in the case of coolers, are they testing it via a very specific "hot plate" so they supply X amount of "heat" and it keeps this cooler at Y temperature for Z amount of time etc.
They probably (IMO) should be really fine tuning the testing methods so the consumer as well as the AIB/OEM know for sure that a given cooler will work, or that you have enough power supply to power it and so forth.
nothing like a cooler made by CM H**er 2*2 being rated for 180w, and when you try to strap it to a cpu that is also rated for this level, the cooler CANNOT actually deal with the heatload.
TDP is a "place holder" number as far as I can tell, AMD is generally the most conservative IMO with their TDP rating, however, all it takes is a different driver etc and this TDP number can be blown way past, so to use TDP in relation to power consumption (as so many review sites do to congratulate one part while demonizing another) is crazy.
There are VERY rare exceptions where some makers put fancy circuits into the product to make 10000000% sure it CANNOT pass this number, but these really are a rare occurrence by and large, in my personal experience as well as many many years of following cpu-gpu-motherboard information.
either way, 250w TDP may seem like A LOT, but that is only 7.1825w per core given heat load, or 0.256 per thread, which is peanuts given the work it is likely capable of.
Intel and Nv chase the crazy high clock speeds (and also make sure reviewers use VERY specific test software so that their actual absolute and continual power consumption numbers always look AMAZING). AMD on the other hand seem to be more about "yep we chew power, but, we have the performance, durability and forward looking designs they do not seem to care about"
in other words (IMHO) AMD seems to be looking at the "what will be" whereas Intel and especially Ngreedia are absolutely content with building for the moment, and should it die a horrible death (poor component/thermal interface choices) that is ok, means more $$$ and they will fix the problems with the next revision "we promise" ^.^
0ldman79 - Wednesday, June 6, 2018 - link
I think your math is off a bit, but your point is valid.
That is, more or less, the basis for the turbo modes. 65w / 4 cores = 16.25w, but a single core can pull upwards of 30w under full load.
Given full capacity under full load with no throttling the typical 65w quad core should be 120w, which is probably what most overclockers actually see.
Da W - Thursday, June 7, 2018 - link
WHY DID I JUST BOUGHT 1000 AMD SHARES??????????
FreckledTrout - Thursday, June 7, 2018 - link
Why you are screaming in broken english about the stock market on a tech site? Are you ok, you seem lost? Can we point you in the right direction?
Targon - Thursday, June 7, 2018 - link
Because AMD is a company that has always been forced to be innovative, rather than using a brute force approach to getting higher performance with each new product cycle. Most people still have not heard about Gen-Z, which AMD has been a part of from the beginning, for example. Looking forward, AMD is looking to improve the overall system design, not just the CPU, GPU, or chipset.
Lolimaster - Thursday, June 7, 2018 - link
For any serious work you won't have these chips OCed, so it should be safe for your current X399 mobos. TR3 at 7nm would be way more efficient and probably aim for 3.4-3.6Ghz base at the same TDP.
MutualCore - Thursday, June 7, 2018 - link
Can anyone point me to an off the shelf application that can utilize 16 cores, much less 32? I don't mean some in-house custom research project where you spin off 32 threads just to stress the system. I mean Adobe or 3D rendering engines.
Goty - Thursday, June 7, 2018 - link
Handbrake.
piroroadkill - Friday, June 8, 2018 - link
Well, it's x264 or x265 that's actually doing the work, not Handbrake.
Ippokratis - Thursday, June 7, 2018 - link
VRay (renderer), Keyshot (renderer), Arnold (renderer), Houdini (3D-VFX-rendering), photogrammetry software, others. Adobe's bad core scaling is Adobe's problem.
blppt - Thursday, June 7, 2018 - link
On the more mainstream side of things, believe it or not, an 8K/60p video on YouTube can hit 50% CPU usage on my 1950X. Of course, if you have an Nvidia 10 series or higher GPU, you can run 8K/60p on a quad core (maybe even less!) with no problems. No AMD GPU yet fully accelerates video above 4K.
designerfx - Monday, July 2, 2018 - link
how about "people run a lot of different applications that would easily use 4-8 cores a piece"?TheJian - Thursday, June 7, 2018 - link
"Despite the mainstream Ryzen processors already taking a devastating stab into the high-end desktop market, AMD’s Threadripper offered more cores at a workstation-friendly price."So AMD is now making a few billion a year or more NET INCOME? No? Ok, then surely Intel must be making FAR LESS now with this "DEVASTATING STAB" AT Intel's market right? Nope. Oh right, they set a record for income...Well HRMPF....I'm confused. How is it that AMD made a "devastating stab" without either making tons of money or even making a small dent in the ENEMY commonly known as Intel?
Again, for you to have success either you have to be doing MUCH better now than "back then" (at some point in history), or the enemy has to be doing much worse (or in the best case, you get both of these). I see neither happening here so I think it's a blip on the radar until you MAKE MASSIVE PROFIT. Worse they now seem to be giving Intel gpu tech which will lead to death in a few years anyway as Intel can make a better AMD APU than AMD themselves at least until Intel is totally behind in fabs (getting there now it seems, should've bought NV for $10-15 when they had a chance).
Don't get me wrong, great we have competition now, but until AMD figures out how to PRICE appropriately, they'll continue to just be some guy in the market that doesn't make a dime while selling a great product...LOL. Until you realize you are NOT in business to be OUR FRIEND, you'll keep making NOTHING, while being the #2 cpu and gpu maker in the world. How is that sentence even possible? Wake me when you have a PE ratio for a few years (can't happen without PROFITS). I really wish AMD would concentrate on MAXIMIZING PROFITS, instead of maximizing how much we like them today...ROFL. I'm not saying your job is to be your customer's mortal enemy, but you don't have to be our best friend either. R&D costs money and if you're not making any, you can't afford more R&D :) Simple right?
Targon - Monday, June 11, 2018 - link
While Intel's market share remains VERY high, the sales numbers are starting to reflect the growing popularity of Ryzen in the overall market. A big part of the delay has been OEMs not offering nearly as many AMD based systems, and I have yet to see one of the big name OEMs put one of the new Ryzen based APUs into one of their systems in the retail sector. Yes, you can find a few first generation Ryzen based machines with a discrete video card, but I haven't seen Raven Ridge based desktops, and almost no laptops.
piroroadkill - Friday, June 8, 2018 - link
I'm looking forward to seeing the clocks. If the 24 core part retains ALL THE CACHE like the smaller parts in the Ryzen line up have done, then it will be a monster. 64MB of L3 cache! On a consumer CPU!
SaturnusDK - Friday, June 8, 2018 - link
I'm sure it will, like the TR1s did. Having the full 64MB of cache will also pretty much eliminate the memory channel concerns, at least for the 24C one. At 32C it might start to become an issue.
TennesseeTony - Saturday, June 9, 2018 - link
"Pricing on the processors is set to be revealed either today or closer to the launch time. "Utterly nailed the date! Excellent prediction!
Oberoth - Sunday, June 10, 2018 - link
I can't help but feel AMD should have skipped 12nm for TR and gone straight to 7nm as early as they could, because if the 3GHz base clock is true, then for all but the massively threaded applications this CPU will be slower than the first generation. Although I see no reason why it wouldn't be able to clock as high as a 2700X if it's only using a few cores.
Targon - Monday, June 11, 2018 - link
2018 for AMD is still 12nm, with a few 7nm Vega cards for the AI market being sold at the end of 2018. 2019 will be 7nm. Would you rather AMD not release ANY updated Threadripper in 2018 just because 7nm comes out next year?
AutomaticTaco - Monday, June 11, 2018 - link
12nm? What happened to the next Threadripper or Ryzen going 7nm or 10nm? Just curious.
Targon - Monday, June 11, 2018 - link
2018 is 12nm, with 2019 chips being 7nm. 2017 was Zen cores, 2018 with 12nm is Zen+ cores(just some minor improvements to the design, not big changes). 2019 will bring Zen 2 cores with more significant improvements combined with 7nm. I will note that 7nm will start to ramp up production, the limited volume 7nm Vega AI cards with up to 32GB of video memory will come out at the end of 2018, but obviously, those will not be high volume parts.