How to Save $6000 on a 28-core Flagship Intel Xeon: Platinum 8280 vs Gold 6258R
by Dr. Ian Cutress on August 7, 2020 8:00 AM EST
Test Bed and Benchmarks
For this test, we've run through our updated suite of benchmarks as part of our #CPUOverload project. As this isn't a strict review of the processors, but more of a comparison to see if they perform the same, each benchmark is effectively binary: yes, they perform the same, or no, they don't (and which one is better). For these tests, we fired up our single-socket LGA3647 testbed.
| AnandTech LGA3647 Test Bed | |
| --- | --- |
| CPU | Intel Xeon Platinum 8280 / Intel Xeon Gold 6258R |
| Cooling | Asetek 690LX-PN (500W) |
| Motherboard | ASUS ROG Dominus Extreme (0601) |
| DRAM | SK Hynix 6 x 32 GB DDR4-2933 |
| SSD | Crucial MX500 1TB |
| Chassis | Anidees Crystal XL |
Both processors were tested with 192 GB of SK Hynix DDR4-2933 RDIMMs and an ample 500 W liquid cooling setup.
For the non-performance data, both CPUs scored the same average core-to-core latency (45.8 ns for the 8280, 45.6 ns for the 6258R), both ramped from idle to maximum turbo in 35-38 milliseconds, and power consumption was almost identical.
There is a slight variation here, though this could just be down to the specific voltage characteristics of the chips I have: the 6258R runs nearer to the 205 W TDP that both chips share.
For the performance benchmarks, don't get too excited all at once. We'll mark a performance difference as significant only where a >4% change is observed.
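As a rough sketch of that significance rule, the check below flags any pair of scores whose relative difference exceeds 4%. The threshold comes from this article; the helper function and variable names are just illustrative, and the sample scores are "higher is better" results pulled from the table:

```python
# Flag benchmark deltas that exceed the article's 4% significance threshold.
THRESHOLD = 0.04  # a >4% relative change counts as significant

def is_significant(score_a: float, score_b: float, threshold: float = THRESHOLD) -> bool:
    """Return True if the relative difference between two scores exceeds the threshold."""
    return abs(score_a - score_b) / min(score_a, score_b) > threshold

# Example results (points, higher is better): (Xeon 8280, Xeon 6258R)
results = {
    "3DPMavx":   (54280, 56177),   # 3.5% apart
    "CB R20 MT": (11539, 11851),   # 2.7% apart
}
for name, (a, b) in results.items():
    print(name, "significant" if is_significant(a, b) else "within noise")
```

Run against the full table, nothing crosses the line, which is the point of the article.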
| Intel Xeon Scalable 2nd Gen Shootout | Xeon Platinum 8280 | Xeon Gold 6258R | 6258R vs 8280 |
| --- | --- | --- | --- |
| Agisoft 1.3 | 1867 sec | 1797 sec | 103.8% |
| AppTimer GIMP | 54.1 sec | 55.0 sec | 98.4% |
| 3DPMavx | 54280 pts | 56177 pts | 103.5% |
| yCruncher 2.5b | 47.00 sec | 46.20 sec | 101.7% |
| NAMD ApoA1 | 4.42 ns/day | 4.56 ns/day | 103.2% |
| AIBench 0.1.2 | 523 pts | 521 pts | 99.6% |
| DwarfFortress S | 124 sec | 124 sec | = |
| Dolphin 5.0 | 329 sec | 329 sec | = |
| Blender 2.83 | 224 sec | 224 sec | = |
| Corona 1.3 | 13.30 Mray/sec | 13.64 Mray/sec | 102.6% |
| POV-Ray 3.7.1 | 10370 pts | 10461 pts | 100.8% |
| V-Ray | 36899 Kray/sec | 38366 Kray/sec | 103.98% |
| CB R20 ST | 391 pts | 393 pts | 100.5% |
| CB R20 MT | 11539 pts | 11851 pts | 102.7% |
| Handbrake 1.3.2 4K | 74 fps | 74 fps | = |
| 7zip Combined | 183k MIPS | 189k MIPS | 103.2% |
| AES Encode | 15.9 GB/s | 16.4 GB/s | 103.1% |
| WinRAR 5.90 | 30.52 sec | 30.17 sec | 101.2% |
| **Legacy / Web** | | | |
| CB10 ST | 8183 pts | 8185 pts | 100.02% |
| CB10 MT | 66851 pts | 66198 pts | 99.0% |
| Kraken | 929 ms | 929 ms | = |
| Speedometer | 90 rpm | 90 rpm | = |
| GB4 ST Overall | 4739 pts | 4737 pts | 99.95% |
| GB4 MT Overall | 65039 pts | 66274 pts | 101.9% |
| DRAM Read | 124 GB/s | 126 GB/s | 101.6% |
| DRAM Write | 102 GB/s | 102 GB/s | = |
| DRAM Copy | 115 GB/s | 116 GB/s | 100.9% |
| sha256 8k ST | 486 MB/s | 487 MB/s | 100.2% |
| sha256 8k MT | 12452 MB/s | 12833 MB/s | 103.1% |
| LinX 0.9.5 | 1484 GFLOPs | 1528 GFLOPs | 103.0% |
| **SPEC (Geomean of tests, Estimated)\*** | | | |

*SPEC results not submitted to SPEC.org have to be labelled as 'Estimated' as per SPEC press licensing rules.
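For reference, the "Geomean of tests" line is a geometric mean, which is the standard way to average performance ratios (an arithmetic mean would overweight the tests with big swings). A minimal sketch, where the ratio values are illustrative placeholders rather than the article's SPEC data:

```python
import math

def geomean(values):
    """Geometric mean: exp of the mean of the logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test ratios (6258R score / 8280 score):
ratios = [1.038, 0.984, 1.035, 1.017]
print(f"{geomean(ratios):.3f}")  # ≈ 1.018
```
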
Well, that was a whole lotta nothing.
If we hold that a 4% difference might be more than just statistical noise, then none of these benchmarks come close. Squint a little at these results and you might concede that the 6258R actually has the upper hand, which would fit the slight variation we saw in the power test. But by and large, these chips are essentially identical in performance.
Breakdowns of most of the benchmarks and sub-tests can be found in our benchmark comparison database, Bench. To get the best experience when comparing products on Bench, I find it best to increase the browser zoom and reduce the browser window width, so it looks like this:
Click on the image to go to the section in Bench that compares these two CPUs.
Comments
benedict - Friday, August 7, 2020
The question is not whether Intel is shooting itself in the foot.
It is whether anyone buying Intel is shooting himself in the foot.
In the past no one got fired for buying Intel. Maybe it's time to change that.
ZoZo - Friday, August 7, 2020
It's not just about performance or performance/watt, it's also about platform features and robustness. If you buy Intel you're a bit more certain that things will "just work".
For example, I bought a 3rd gen Threadripper and have encountered incompatibilities with Linux KVM-based virtual machines, whether it's the FLR reset bug on the USB controllers when passing them through (fixed late June in Linux kernel, 7 months after TR was released), the fact that a Windows guest doesn't yet support nested AMD virtualization (coming in 2H20), or strange performance behaviors that don't happen on Intel (no resolution in sight). These quirks are bound to exist on Epyc too, but I'll admit that virtualization is probably the most tricky thing you can put the platform through. If you're building a server that just takes a bunch of containers without any virtualization gimmicks, it should be fine.
For workstations, the Intel platform supports RDIMMs and therefore much more RAM, unless you buy from the very few OEMs that sell the Threadripper Pro.
Revv233 - Friday, August 7, 2020
You put that very well about the expectations of not having weird issues.
I always kind of figured it was due to the chipset more than the CPU, but I've experienced that from my old Barton & A64s in spades...
To be fair, I once had a P4 Northwood on a VIA chipset and I felt that same pain.
When you are talking mission-critical stuff, one bad taste from 20 years ago is going to make you hesitate before you give something another try.
eek2121 - Sunday, August 9, 2020
I agree with him. I run AMD all day long, but at work we tried that with Rome, apparently. Something about our application stack didn't work well with EPYC. I can only assume it was memory latency; however, I am not in that department, so I don't know. What I DO know is that AMD has an amazing product, but the platform isn't there yet. They either need to clamp down on OEMs or release a first-party platform to push the OEMs to release better quality stuff.
yeeeeman - Friday, August 7, 2020
I can't say this enough times....
I see a lot of comments about AMD being now all of a sudden THE CHOICE.
Hold on for a bit, since there is much more to it than performance or efficiency.
Performance-wise, sure, AMD has an advantage, but it is not out of the ordinary. Efficiency-wise, the same.
Quality-wise, there is no comparison between Intel and AMD platforms. AMD has a lot of work to do on the software and even the hardware/firmware front. Many people may not be aware of this, but over all these years Intel has invested a lot of effort and money in creating streamlined platforms, with quality software and quality firmware, so that it is almost plug and play.
AMD, on the other hand, being the underdog and so far behind Intel for so long, had nobody focusing on their products, and they lag badly in optimization and compatibility. Sure, things will change with time, but it will take at least 5 more years of actual work and $$$ from AMD to make it happen.
The same is valid for the notebook space. Everyone is crying about how AMD laptops are crippled and how OEMs love Intel. Well, Intel has invested a lot of money and effort in creating design templates and making it as easy as possible for OEMs to create a new laptop.
AMD... well, they just have a good CPU and that is it. There is a lack of field engineers, a lack of a streamlined process, a lack of a clear BOM, and especially a perception from the market that still sees AMD as the cheaper option. Non-hardware guys, which is basically 90% of the market, cannot deduce from a sticker that 4th gen Ryzen is much better than Bulldozer or whatever. AMD needs to invest $$$ into publicity, into OEM partnerships, into creating something similar to Project Athena, something to give them a premium feel so that the market perception will change. Otherwise, it will take years before they actually reach the majority of market share, and by that time Intel could come back.
Anyway, back to the topic of this review: $10k is in any case a ridiculous amount of money for an old CPU. $3k is even stretching it, so you say it right. If someone wants a top Intel CPU, they should buy this Gold version.
duploxxx - Saturday, August 8, 2020
I can't say this enough. You talk bias. You don't need field engineers. BOMs are delivered by OEMs like Dell, HPE, and Cisco, which increase their AMD portfolio on every release. Intel is not investing a lot of money in business stability; all they do is make sure business does not get fed up with all the CVE bugs they need to patch and deliver. In fact, the long-lasting issues with supply had them against the wall big time. All Intel does is pay money to OEMs in R&D to keep designing baselines so that they can keep selling to the masses. This is done in both the WS and server areas. But let's be honest, AMD would never be capable of delivering far more. But it's thanks to AMD that things like Skylake R exist, don't forget that.
Smell This - Saturday, August 8, 2020
Now with new glue and Omni-Path v220.127.116.11.000
(Two Dies Are Twice As Nice As One!)
The Cascade Lake Xeon Scalable platform failed last year, and the refresh will fail, again, today.
ProDigit - Saturday, August 8, 2020
Who cares about optimization, when their processors are 25% more efficient and host 25-150% more cores?
DominionSeraph - Monday, August 10, 2020
This.
When the Ryzen 2700 dropped to $150 I jumped on it, figuring I could retire my i7 4790. I put it together and its encoding speed was a very impressive improvement. I couldn't swap out my main machine immediately, as the mobo didn't have enough SATA ports for all my drives, so I was using them side by side for a while, and I noticed the AMD just lagged and had weird quirks where the 4790 was perfection 24/7/365. I couldn't live with it, gave up on the idea of using it as a replacement, and eventually sold it off.
I'm still on the i7 4790 as my main even though I now have a 3950X too. AMD is ok for a secondary crunching machine but it's just not suitable for human use.
WizardMerlin - Wednesday, August 26, 2020
Having used my 3900X for many months now, 16 hours a day, for a varied workload, I'd disagree.