How to Save $6000 on a 28-core Flagship Intel Xeon: Platinum 8280 vs Gold 6258R
by Dr. Ian Cutress on August 7, 2020 8:00 AM EST

Test Bed and Benchmarks
For this test, we’ve run through our updated suite of benchmarks as part of our #CPUOverload project. As this isn’t a strict review of the processors but more of a comparison article to see if they perform the same, each benchmark is effectively binary: yes, they perform the same, or no, they don’t (and which one is better). For these tests, we fired up our single-socket LGA3647 testbed.
AnandTech LGA3647 Test Bed

| | Intel Xeon Platinum 8280 | Intel Xeon Gold 6258R |
|---|---|---|
| Cores / Threads | 28C / 56T | 28C / 56T |
| Frequency (Base-Turbo) | 2.7-4.0 GHz | 2.7-4.0 GHz |
| TDP | 205 W | 205 W |
| Price | $10009 | $3950 |
| Cooling | Asetek 690LX-PN (500W) | |
| Motherboard | ASUS ROG Dominus Extreme (0601) | |
| DRAM | SK Hynix 6 x 32 GB DDR4-2933 | |
| SSD | Crucial MX500 1TB | |
| PSU | EVGA 1600 T2 | |
| GPU | Sapphire RX460 | |
| Chassis | Anidees Crystal XL | |
Both processors were tested with 192 GB of SK Hynix DDR4-2933 RDIMMs and a 500 W liquid cooling configuration with ample headroom.
On the non-performance data, both CPUs scored essentially the same average core-to-core latency (45.8 ns for the 8280, 45.6 ns for the 6258R), both ramped from idle to maximum turbo in 35-38 milliseconds, and power consumption was almost identical.
There is a slight variation here, though this could just be down to the specific voltage characteristics of the chips I have. The 6258R hits nearer the 205 W TDP that both chips have.
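The idle-to-turbo ramp figure comes from our own tooling, but the underlying idea is simple enough to sketch in a few lines of hypothetical Python (not the tool we actually use): time a fixed busy-loop repeatedly from a cold start, and watch the per-iteration time fall and then flatten as the core ramps from base frequency to turbo.

```python
import time

def ramp_profile(iters=200, work=200_000):
    """Time a fixed busy-loop repeatedly, starting from idle.

    As the core ramps from base to turbo frequency, each iteration
    completes faster, so the per-iteration times fall and then
    flatten out once the maximum turbo frequency is reached.
    """
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter_ns()
        x = 0
        for i in range(work):
            x += i  # fixed amount of integer work per sample
        samples.append(time.perf_counter_ns() - t0)
    return samples

profile = ramp_profile()
```

In practice a high-resolution timer and pinned affinity matter; Python-level timing is far too noisy for millisecond-accurate ramp numbers, which is why the real measurement leans on hardware counters.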
For the performance benchmarks, don’t get too excited all at once. We’ll mark any performance difference as significant where a >4% change is observed.
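As an illustration of how that cutoff is applied (the helper names here are my own, not from our test harness), the comparison column is the 6258R's result as a percentage of the 8280's, with time-based results inverted so that over 100% always means the 6258R is faster:

```python
def relative_perf(score_6258r, score_8280, higher_is_better=True):
    """Return 6258R performance as a percentage of the 8280's.

    For time-based results (lower is better), invert the ratio so
    that >100% consistently means the 6258R is the faster chip.
    """
    if higher_is_better:
        ratio = score_6258r / score_8280
    else:
        ratio = score_8280 / score_6258r
    return 100.0 * ratio

def is_significant(pct, threshold=4.0):
    # Only differences beyond +/-4% are treated as more than noise
    return abs(pct - 100.0) > threshold

# Agisoft (seconds, lower is better): 1867 s vs 1797 s
pct = relative_perf(1797, 1867, higher_is_better=False)
```

For that Agisoft example the ratio works out to roughly 103.9%, which is inside the noise window, so it doesn't count as a meaningful difference.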
Intel Xeon Scalable 2nd Gen Shootout

| AnandTech | Platinum 8280 | Gold 6258R | 6258R vs 8280 |
|---|---|---|---|
| Office | | | |
| Agisoft 1.3 | 1867 sec | 1797 sec | 103.8% |
| AppTimer GIMP | 54.1 sec | 55.0 sec | 98.4% |
| Science | | | |
| 3DPMavx | 54280 pts | 56177 pts | 103.5% |
| yCruncher 2.5b | 47.00 sec | 46.20 sec | 101.7% |
| NAMD ApoA1 | 4.42 ns/day | 4.56 ns/day | 103.2% |
| AIBench 0.1.2 | 523 pts | 521 pts | 99.6% |
| Simulation | | | |
| DigiCortex 1.35 | 2.47x | 2.48x | 100.4% |
| DwarfFortress S | 124 sec | 124 sec | = |
| Dolphin 5.0 | 329 sec | 329 sec | = |
| Rendering | | | |
| Blender 2.83 | 224 sec | 224 sec | = |
| Corona 1.3 | 13.30 Mray/sec | 13.64 Mray/sec | 102.6% |
| POV-Ray 3.7.1 | 10370 pts | 10461 pts | 100.8% |
| V-Ray | 36899 Kray/sec | 38366 Kray/sec | 103.98% |
| CB R20 ST | 391 pts | 393 pts | 100.5% |
| CB R20 MT | 11539 pts | 11851 pts | 102.7% |
| Encoding | | | |
| Handbrake 1.3.2 4K | 74 fps | 74 fps | = |
| 7zip Combined | 183k MIPS | 189k MIPS | 103.2% |
| AES Encode | 15.9 GB/s | 16.4 GB/s | 103.1% |
| WinRAR 5.90 | 30.52 sec | 30.17 sec | 101.2% |
| Legacy / Web | | | |
| CB10 ST | 8183 pts | 8185 pts | 100.02% |
| CB10 MT | 66851 pts | 66198 pts | 99.0% |
| Kraken | 929 ms | 929 ms | = |
| Speedometer | 90 rpm | 90 rpm | = |
| Synthetic | | | |
| GB4 ST Overall | 4739 pts | 4737 pts | 99.95% |
| GB4 MT Overall | 65039 pts | 66274 pts | 101.9% |
| DRAM Read | 124 GB/s | 126 GB/s | 101.6% |
| DRAM Write | 102 GB/s | 102 GB/s | = |
| DRAM Copy | 115 GB/s | 116 GB/s | 100.9% |
| sha256 8k ST | 486 MB/s | 487 MB/s | 100.2% |
| sha256 8k MT | 12452 MB/s | 12833 MB/s | 103.1% |
| LinX 0.9.5 | 1484 GFLOPs | 1528 GFLOPs | 103.0% |
| SPEC (Geomean of tests, Estimated)* | | | |
| SPEC2006 ST | 45.8 | 45.8 | = |
| SPEC2017 ST | 6.0 | 6.0 | = |
| SPEC2017 MT | 109.4 | 111.1 | 101.6% |

*SPEC results not submitted to SPEC.org have to be labelled as 'Estimated' as per SPEC press licensing rules.
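The SPEC numbers above are geometric means over the individual sub-tests. For illustration, computing a geomean takes only a few lines; the example scores below are made up, not real SPEC sub-scores:

```python
import math

def geomean(scores):
    """Geometric mean: the n-th root of the product of n scores.

    SPEC aggregates with a geomean because it weights a 10% gain
    on any sub-test equally, regardless of that test's absolute
    score, unlike an arithmetic mean.
    """
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# hypothetical sub-test scores, purely for illustration
example = geomean([40.0, 50.0, 52.5])
```

The log-sum form avoids the overflow you would get by multiplying dozens of scores together before taking the root.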
Well, that was a whole lotta nothing.
If we hold to the idea that only a 4% difference might be more than statistical noise, then none of these benchmarks comes close. Squinting at the results, one might concede that the 6258R actually has the upper hand, which would be in line with the slight variation we saw in the power test. But by and large, these chips are essentially identical in performance.
Breakdowns of most of the benchmarks and sub-tests can be found in our benchmark comparison database, Bench. To get the best experience when comparing products on Bench, I find it best to increase the browser zoom and reduce the browser window width, so it looks like this:
Click on the image to go to the section in Bench that compares these two CPUs.
81 Comments
koguma - Friday, August 7, 2020 - link
You're kind of comparing apples and oranges here. The Threadripper is an enthusiast/workstation product. AWS isn't having any issues with virtualization using Epyc.

duploxxx - Friday, August 7, 2020 - link
Neither do we with our 1000+ AMD servers since we switched from Intel to the Epyc2 series. We have never seen so much price/perf/power ratio in favor of AMD. Gone are all the security bugs and security measures we had to take in our DC to avoid being confronted with the CVEs. Our total server farm was halved and the total hw price was 2/3. With the hypervisor it was also an easy switch: just select the maintenance window and switch.

Don't compare some workstation Threadripper virtualization woes with Epyc2. There are a lot of DCs moving to AMD these days. Sure, Intel will still sell the bulk, but the move by many was made on Epyc2 and will increase with the Epyc3 launch. 2-3 years ago, when you asked an OEM consultant what hw to offer, it was only Intel. These days they ask what the server is used for and they offer depending on requirements... Those who run Intel-only are dumb old school that don't look at their budgets and performance and have no clue. Critical environment? Run on Z and you will find out what critical really means.
ZoZo - Friday, August 7, 2020 - link
That's great, but they've most certainly got the resources to tweak their software, with AMD engineers ready to assist. And they're probably not doing any PCI-E passthrough of USB controllers. Nor do the customers try to run virtualization software (Hyper-V, Windows Sandbox) within a Windows guest there.

Maybe it's just those "edge cases", but when you encounter problems with those, you start to wonder what else could go wrong in other cases.
duploxxx - Saturday, August 8, 2020 - link
You are just working with a consumer playground. In the server world we have dedicated clusters that run PCI-E passthrough for NICs and GPUs. No issues on vSphere at all. And Windows? What about it? Would 4000+ VMs on Windows be enough? With VMs up to 60 vCPUs and 150 TB HD space.

ZoZo - Saturday, August 8, 2020 - link
I specifically mentioned USB controllers because that FLR bug is on AMD's Starship and Matisse controllers regardless of platform. If those are the USB controllers on EPYC, then it will have the same problem (it locks up the PC, hard reboot required). Other things seem fine (GPU, NIC, NVMe SSD). The only "consumer playground" thing there is that perhaps you're more likely to be doing that on a consumer platform than on an enterprise server.

I didn't say Windows didn't run, I said nested virtualization didn't work. Please try to make an effort in reading comprehension.
duploxxx - Saturday, August 8, 2020 - link
This is a thread about server CPUs, not workstations. So why did you bring up this info in the first place... USB devices on a server? You are joking, right.

ZoZo - Sunday, August 9, 2020 - link
Fine, focus on the USB thing and ignore the Windows nested virtualization. I guess some things don't (or didn't) just work on AMD workstation platforms, but everything has been absolutely peachy on server platforms.Say company X needs new servers to run their complex IT infrastructure. But here's the thing: if something doesn't work, if there's an incompatibility, people you care about get executed. Oh, and you're going up against someone else doing the same thing, and if he gets it done faster, same punishment for you.
What would you go for? Intel or AMD?
Targon - Monday, August 10, 2020 - link
That is why it takes several years for the server market to even consider making a shift or to start implementing a different design. Two years later, corporations have started to make the decision to test the waters with some Epyc servers and see how it goes. Considering the number of MAJOR security issues with Intel, the "tested" platform, any minor issues with Epyc won't be something to be concerned with.

TheJian - Saturday, August 8, 2020 - link
THIS...The largest companies in the world are using "FAILED" AMD chips (sounds like you're saying they're failures yeeeeman!)...LOL. I haven't heard of amazon, facebook, google, microsoft etc, firing admins for buying EPYC. Intel pissed away 4.1B+ per year for 4.5yrs and now has fab issues because it should have been spent on 10/7nm (it was freaking 20B! That is a state of the art fab+ some). We are here now, because INTEL is exactly what you are claiming AMD is...trash right now. :) It happens to everyone at some point. Look at windows 10...ROFL. 10+ versions, and ALL SH!TE. They release a patch and it WTF's 4 versions of the OS...LOL.How many bugs does an Intel chip have? SECURITY I mean?...Nuff said? We are talking their ENTIRE CHIP LIST for DECADES. Nah, Intel is 100% bug free right?...ROFLMAO. Caminogate. Timna, larrabee, TITANIC (LOL), etc etc. I could go on with the Intel failures but...whatever. They're the only other perfect thing every created on earth besides JESUS. I swear ;) They are the bees knees pal...LOL. That said, if AMD loses this time, it's because they refuse to CHARGE accordingly for their WINNING chips. If 4950x is really 500 (not $750 like the chip it replaces, rather pricing a 16c at 8c old pricing), again they will waste a top money maker. That is $250 off each chip sold. That is HUGE BANK to the NET INCOME line. AMD appears to be wasting a 4th gen on stupidity and pricing like a 2nd rate LOSER. Do you think Intel would have priced this at $4000 if you hadn't STUPIDLY priced your 64c at $7K? They're only 28c and they still are smart enough to charge MORE than 1/2 your 64c. Even in defense they make a right move if possible (get every dollar you can!). AMD would have priced a 28c vs. an intel 64c at $2000. That 28c was supposed to be around 17-25K and the 56c was supposed to be 50K. Then 64c AMD happened and NO NET INCOME for AMD, and still a RECORD for INTEL. You are NOT DOING IT RIGHT. 
Your stock after this Q report should have crashed $40 (and I mean $40 OFF the price, and even that is ridiculous, stock should be $20 with no improvement in INCOME). Quit giving away your silicon while Intel maximizes INCOME. The only point I really agree with the OP is you can't win like this. MAKE INCOME or SELL your company to someone who GETS HOW TO GROW. Lisa Su is likely a great engineer, but management, well they need to show us the MONEY. Su should be demoted to running divisions or something, not managing money for the company (or at least not firing whoever IS pricing products - surely she has some say here).
You are 20% of the market of x86 and make 600mil/yr. Intel owns the other 80% of x86 and still pulls down 23B NET INCOME (NOT REVENUE! that is like 70B+). Do the math, you should be 4B-5B NET INCOME at these levels or YOU ARE DOING IT WRONG (and that will still be a bit of a discount to win some share, but INCOME is more important). Your stock is ~1/2 of Intel at 100B (Intel is 208B Mkcap). YOU AMD, make 600mil/yr on that, they make 23B/YR TTM. Intel PE of 9, yours...165? LOL. You can live with a small share if you make NET INCOME. You can't live with a bunch of share but NO INCOME on it. Which, at 100-160mil/Q, you are making NOTHING NET. Sorry, 10% of what you should be making is NOTHING. Again, you idiots (you haven't learned in 4 gens if rumored prices are true for 4th ryzen), YOU ARE PRICING WRONG.
Your problem now? Intel will be buying YOUR WAFER STARTS, for MORE MONEY, so you can't now. Make hay while the sun shines...OOPS, someone forgot to learn this at AMD? Party is almost over, INtel contracts done or getting done for BILLIONS of TSMC wafers (180K, AMD only making 200K wafers from TSMC), and they still have fabs at 100% of their own. You are stupid. PERIOD. Four years WASTED on trying to get a few points of share rather than RAKING everyone you could for every dollar possible WHILE YOU CAN. Instead of the smart slower way, you took the quick price cut way and well, don't start wars you can't win yet. If you had priced everything within say 10% of Intel chips, they wouldn't even have cut ANY chip prices, as they have shareholders that require seeking NET INCOME and they keep doing it. AMD seems to refuse for some reason to attack correctly. Haven't read ART of WAR or Art of the Deal? It shows.
4am, no time to check if this is attack proof...LOL. To lazy to paste into word and this small box sucks. The data (math) is simple here. STock prices are not rocket science. SELL AMD ASAP, buy Intel, next xmas (2021) you will laugh at the ~26-27B NET INCOME they are making by then after adding 180K wafers to sell from TSMC. That is going to add to the bottom line. Intel is setting records and took a $27 hit to shares from $69. It won't take long for people to wake up and get that it is still making the same money as when it was $69 and heading higher NET INCOME wise. You'll be too late then ;) AMD is massively overpriced, so says NET, revenue, margin, mkap etc etc etc. When everyone loves your stock be afraid (yes I sold early :)), when everyone hates it, if metrics, fundamentals etc are all good (especially after FAKE NEWS crash over fabs that easily fixed by USING TSMC etc), BUY. Free money in 18mo. My family owns NO AMD (for now). I thought they'd be making BILLIONS at 20% share, but they forgot to price correctly for 4yrs. Oh well, dead money, or at least DANGEROUS for stockholders.
WraithR32 - Monday, August 10, 2020 - link
TL;DR