The Intel Core i9-7980XE and Core i9-7960X CPU Review Part 1: Workstation
by Ian Cutress on September 25, 2017 3:01 AM EST
The buzz since Intel announced it was bringing an 18-core CPU to the consumer market has been palpable: users are anticipating this to be Intel’s best-performing processor, and want to see it up against the 16-core AMD Threadripper (even at twice the cost). Intel is the incumbent: it has the legacy, the deep claws in software optimization, and the R&D clout to crush the red rival. However, a jump as large as this, moving from 10 cores to 18 in the consumer space, is a step Intel has been reluctant to make in the past. In this first analysis, we’ve run a few tests on the new 18-core (and 16-core) parts from Intel to find out the lie of the land.
Dissecting the new Core i9-7980XE and Core i9-7960X
Intel’s high-end desktop (HEDT) platform is designed to be the hard-hitting prosumer (professional consumer) platform, providing all the cores without the extras required by the enterprise community. Up until this new generation of 2017 parts, we were treated to three or four CPUs each cycle, carved from Intel’s smallest enterprise silicon, slowly moving from 6 cores in 2009 to 10 cores in 2015, with the top CPU typically set at the $999 price point. With the 2017 HEDT platform, called Basin Falls, that changed.
The first launch of Basin Falls earlier this year had three parts. The new socket and chipset were expected, as Intel updates every other generation, and this upgrade provided substantially more connectivity than before. The second part was the first three Skylake-X processors, built from Intel’s smallest enterprise silicon (like before), ranging from 6 cores at $389 to 10 cores at $999. Again, this was par for the course, albeit with a few microarchitecture changes in the design worth discussing (later). The third part of the initial launch was a bit of a curveball: Intel configured two processors using its latest consumer microarchitecture, Kaby Lake-X, whereas normally the prosumer platform is a microarchitecture generation behind, due to development cycles. These two parts are also only quad-core, using repurposed ‘mainstream enthusiast’ silicon but set at higher frequencies and higher power budgets, aiming to be the fastest single-threaded processors on the market.
The second launch of Basin Falls is basically what is happening today, and this is the new step from Intel. To add to the three Skylake-X processors already in the stack, using the smallest enterprise silicon, Intel is adding four more Skylake-X processors, this time using the middle-sized enterprise silicon. These new processors build on the others by significantly increasing core count, which comes at the cost of extra power requirements.
| | i7-7800X | i7-7820X | i9-7900X | i9-7920X | i9-7940X | i9-7960X | i9-7980XE |
|---|---|---|---|---|---|---|---|
| Cores / Threads | 6/12 | 8/16 | 10/20 | 12/24 | 14/28 | 16/32 | 18/36 |
| Base Clock / GHz | 3.5 | 3.6 | 3.3 | 2.9 | 3.1 | 2.8 | 2.6 |
| Turbo Clock / GHz | 4.0 | 4.3 | 4.3 | 4.3 | 4.3 | 4.2 | 4.2 |
| L3 | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core |
| Memory Freq DDR4 | 2400 | 2666 | 2666 | 2666 | 2666 | 2666 | 2666 |
All seven processors are listed in the table above. The four new parts are on the right, under the ‘HCC’ (high core count) silicon:
- The Core i9-7980XE, with 18 cores at $1999
- The Core i9-7960X, with 16 cores at $1699
- The Core i9-7940X, with 14 cores at $1399
- The Core i9-7920X, with 12 cores at $1199 (technically launched August 28th)
As with other product stacks, each step up costs more than the step before. Intel (and others) are taking advantage of the fact that some consumers (and especially prosumers) will buy the best part because the cost can be offset against workflow, or just because it exists.
These four processors are almost identical, aside from core count: all four use the same base design, all four support DDR4-2666 memory out of the box, and all four will support 44 PCIe 3.0 lanes (plus 24 from the chipset). The top three are rated at 165W TDP (thermal design power), while the 12-core part is 140W. There is some variation in the frequencies: while all four parts will support 4.4 GHz as their top TurboMax clock (also known as ‘favored core’, more on this later), the Turbo 2.0 frequencies are all 4.3 GHz except the top two processors, and the base clock frequencies in general decrease the higher up the stack you go. This makes sense, physically: to keep the same TDP as cores are added, the processor will reduce in base clock frequency to meet that same target.
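The inverse relationship between core count and base clock can be sketched with a first-order power model. This is purely illustrative: the cubic scaling and the per-core coefficient below are assumptions for the sake of the sketch, not Intel’s actual binning data.

```python
# Illustrative first-order model (not Intel's actual binning): at a fixed
# power budget, adding cores pushes the sustainable base clock down.
# Assume dynamic power scales as P ~ n_cores * f * V^2 with V roughly
# proportional to f, so P ~ n_cores * f^3 (a common back-of-envelope rule).

def base_clock_at_tdp(n_cores, tdp_w, k=0.4):
    """Estimate the sustainable all-core clock (GHz) for a given TDP.

    k is a hypothetical per-core power coefficient (W per GHz^3),
    chosen only so the numbers land in a plausible range.
    """
    return (tdp_w / (k * n_cores)) ** (1.0 / 3.0)

if __name__ == "__main__":
    for cores in (12, 14, 16, 18):
        f = base_clock_at_tdp(cores, tdp_w=165)
        print(f"{cores} cores @ 165 W -> ~{f:.2f} GHz")
```

Under these assumed constants, the model reproduces the trend in the table above: the clock falls monotonically as cores are added at a fixed 165 W budget.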
In our initial review of the Skylake-X processors, we were able to obtain the per-core turbo frequencies for each processor.
Despite the low base frequencies, each processor (when all cores are loaded) will still run above 3.4 GHz. The ‘base’ frequency number is essentially Intel’s guarantee: under normal conditions, this is the frequency Intel will guarantee across a sustained all-core load. When AVX, AVX2, or AVX-512 instructions are being used, frequencies will be lower than those listed (due to the power density of these wide vector instructions) but still above the base frequency, while offering higher overall performance than running the same math in non-AVX form.
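As a back-of-envelope illustration of why a lower AVX clock can still win, compare peak double-precision throughput per core. The clock and FLOPs-per-cycle figures below are assumed round numbers for the sake of the arithmetic, not measured values for these processors.

```python
# Sketch: a lower clock on wide vector units still yields far higher
# peak throughput than a higher scalar clock. All figures are assumptions.

def gflops(clock_ghz, flops_per_cycle):
    # Peak throughput = clock * floating-point operations retired per cycle
    return clock_ghz * flops_per_cycle

scalar = gflops(4.2, 2)    # scalar FMA: 2 FLOPs/cycle (assumed)
avx512 = gflops(3.4, 32)   # AVX-512, two FMA ports: 32 FLOPs/cycle (assumed)

print(f"scalar ~{scalar:.1f} GFLOPS vs AVX-512 ~{avx512:.1f} GFLOPS")
```

Even with an ~0.8 GHz clock penalty in this sketch, the vector path is roughly an order of magnitude ahead on peak throughput, which is why AVX code runs below the listed turbos yet finishes the same math faster.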
Shown in the table are the turbo frequencies without TurboMax. TurboMax is a feature first implemented with Broadwell-E, whereby the most efficient cores (as determined during manufacturing and embedded in the processor) can reach a higher frequency. For Skylake-X, this feature was upgraded from covering one loaded core to up to two loaded cores. This means that the first two columns, labeled 1 and 2, will move up to 4.4 GHz for the top four processors. TurboMax also requires BIOS support, although we had some issues with this, mentioned later in this review.
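The way these turbo levels resolve can be sketched as a lookup keyed on the number of active cores. Only the 4.4 GHz TurboMax, 4.2 GHz Turbo 2.0, and ~3.4 GHz all-core endpoints below come from this article; the intermediate bins are hypothetical placeholders, since Intel’s full per-SKU tables are not reproduced here.

```python
# Hypothetical turbo table for an 18-core part (i9-7980XE-like):
# active core count -> GHz. Endpoints match the article; the bins
# in between are assumed placeholders, not Intel's actual values.
TURBO_BINS = {
    2: 4.4,    # up to 2 favored cores: TurboMax (Turbo Boost Max 3.0)
    4: 4.2,    # Turbo Boost 2.0 ceiling for this SKU
    8: 4.0,    # assumed
    12: 3.8,   # assumed
    18: 3.4,   # all-core: "still above 3.4 GHz" per the article
}

def turbo_freq(active_cores, bins=TURBO_BINS):
    """Return the turbo frequency for a given load: the bin with the
    smallest core count that still covers the active cores."""
    for n in sorted(bins):
        if active_cores <= n:
            return bins[n]
    raise ValueError("more active cores than the processor has")
```

For example, `turbo_freq(2)` resolves to the 4.4 GHz favored-core bin, while `turbo_freq(18)` falls through to the all-core bin.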
Intel’s Competition: AMD
Before dissecting the processor, it is important to know what Intel is up against with the new processors. Arguably this is new territory for the consumer space: before this year, if a user wanted more than 10 cores, they had to invest in expensive Xeon processors (or even two of them), and a platform to support it.
Speaking directly for the consumer lines, the obvious competition here is AMD’s Threadripper processors. These are derived from the new Zen microarchitecture and offer 16 cores at $999 or 12 cores at $799.
| AMD vs Intel | TR 1900X | TR 1920X | TR 1950X | i9-7920X | i9-7940X | i9-7960X | i9-7980XE |
|---|---|---|---|---|---|---|---|
| Silicon | 2 × Zeppelin | 2 × Zeppelin | 2 × Zeppelin | HCC | HCC | HCC | HCC |
| Cores / Threads | 8/16 | 12/24 | 16/32 | 12/24 | 14/28 | 16/32 | 18/36 |
| Base Clock / GHz | 3.8 | 3.5 | 3.4 | 2.9 | 3.1 | 2.8 | 2.6 |
| Turbo Clock / GHz | 4.0 | 4.0 | 4.0 | 4.3 | 4.3 | 4.2 | 4.2 |
| XFR / TBM3 | 4.2 | 4.2 | 4.2 | 4.4 | 4.4 | 4.4 | 4.4 |
| L2 | 512 KB/core | 512 KB/core | 512 KB/core | 1 MB/core | 1 MB/core | 1 MB/core | 1 MB/core |
| L3 | 32 MB | 64 MB | 64 MB | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core |
| Memory Freq DDR4 | 2666 | 2666 | 2666 | 2666 | 2666 | 2666 | 2666 |
From a performance perspective, Intel is expected to win outright: AMD’s 16-core processor was pitched against the previous generation’s 10-core processor and usually won, especially in multithreaded benchmarks. The single-core performance of the AMD parts was a little behind Intel, but the core count made up the difference. With Skylake-X adding both single-thread performance and eight more cores, Intel should have an easy lead in raw performance.
However, AMD has positioned the 1950X at $999, half the price of the i9-7980XE. AMD also cites more PCIe lanes from the CPU (60 vs 44), and no confusion over chipset functionality support. Intel’s rebuttal is that the performance is worth the cost, and that it has more chipset PCIe lanes for functionality beyond PCIe co-processors such as GPUs.
Intel’s Competition: Intel
Intel’s enterprise Xeon platform is still a direct competitor here, in two different ways.
The ‘traditional’ multi-socket enterprise parts will cost substantially more than these new consumer parts, in exchange for some extra features, although even a dual-socket system with two $999 processors will not be much of a comparison: a Core i9-7980XE against a 2P Xeon Silver system will have advantages in core frequency and a unified memory interface, giving up maximum memory support and options such as 10 gigabit Ethernet or Intel’s QuickAssist Technology.
Ten+ Core Intel Xeon-W Processors (LGA2066)

| Model | Cores / Threads | Base Clock | Turbo Clock | L3 (MB) | L3 / Core (MB) | TDP | Price |
|---|---|---|---|---|---|---|---|
| Xeon W-2195 | 18/36 | 2.3 GHz | 4.3 GHz | 24.75 | 1.375 | 140 W | TBD |
| Xeon W-2175 | 14/28 | TBD | TBD | 19.25 | 1.375 | 140 W | TBD |
| Xeon W-2155 | 10/20 | 3.3 GHz | 4.5 GHz | 13.75 | 1.375 | 140 W | $1440 |
| Core i9-7980XE | 18/36 | 2.6 GHz | 4.2 GHz | 24.75 | 1.375 | 165 W | $1999 |
| Core i9-7960X | 16/32 | 2.8 GHz | 4.2 GHz | 22.00 | 1.375 | 165 W | $1699 |
Intel also launched Xeon-W processors in the last couple of weeks. These occupy the middle ground between Skylake-X and the enterprise Xeon-SP parts. Xeon-W uses the same socket as Skylake-X, but requires a completely new chipset, so the motherboards are not interchangeable. These Xeon-W parts still go up to 18 cores, almost mirroring the Skylake-X processors, and support quad-channel memory, but up to 512 GB of ECC memory, compared to 128 GB of non-ECC. The Xeon-W processors cost an extra 10-20% over the Skylake-X parts (add some more for the motherboard too), but for any prosumer that absolutely needs ECC memory and does not want a dual-processor system or does not have double the budget, Xeon-W is going to be the best bet.
This Review: The Core i9-7980XE and Core i9-7960X
This review is titled ‘Part 1: Workstation’ because, for the most part, it tackles the new processors in workstation-type workloads, including some initial data using SPECwpc, an industry-standard workstation benchmark suite, as well as our more intense workloads. The review will also take a nod towards usability with single-threaded workloads and responsiveness (because it really does matter how fast a PDF opens if this CPU is the main processor in a work system).
The main comparison points for this review will be AMD’s Ryzen Threadripper processors, the 16-core 1950X and the 12-core 1920X. On the Intel side, we retested the 10-core i7-6950X, as well as using our 10-core Core i9-7900X numbers from the initial Skylake-X review. Unfortunately we do not yet have the Core i9-7940X or Core i9-7920X to test, but we are working with Intel to get these parts in.
[Speaking directly from Ian]: I know a lot of our readers are gamers, and are interested in seeing how well (or poorly) these massive multi-core chips perform in the latest titles at the highest resolutions. Apologies if this disappoints, but I am going to tackle the more traditional consumer tasks in a second review, which means that gaming will be left for that piece. For the users that have followed my reviews (and Twitter) of late, I am still having substantial issues with my X299 test beds on the gaming results, with Skylake-X massively underperforming where I would expect a much higher result. After having to dedicate recent time to business trips (Hot Chips, IFA) as well as other releases (Threadripper), I managed to sit down in the two weeks between trips to figure out exactly what was going on. I ended up throwing out the two X299 pre-launch engineering samples I was using for the Skylake-X testing, and I received a new retail motherboard only a few days before this review. This still has some issues that I spent time trying to debug, which I think are related to how the turbo is implemented, and which could either be Intel related or BIOS specific. To add insult to injury for everyone who wants to see this data, I have jumped on a plane to travel halfway around the world for a business trip during the week of this launch, which leaves the current results inconclusive. I have reached out to the two other motherboard vendors that I haven’t received boards from, just in case the issue I seem to be having is vendor specific. If I ever find out what the issue is, then I will write it up, along with a full Skylake-X gaming suite. It will have to wait until mid-to-late October, due to other content (and more pre-booked event travel).
I also wanted to benchmark the EPYC CPUs that landed in my office a few days ago, but it was not immediately playing ball. I will have to try and get some Xeon-W / Xeon Gold for comparison with those.
Pages In This Review
- 1: Dissecting the Intel Core i9-7980XE and Core i9-7960X
- 2: New Features in Skylake-X: Cache, Mesh, and AVX-512
- 3: Explaining the Jump to HCC Silicon
- 4: Opinion: Why Counting ‘Platform’ PCIe Lanes (and using it in Marketing) Is Absurd
- 5: Test Bed and Setup
- 6: Benchmark Overview
- 7: Workstation Performance: SpecWPC v2.1
- 8: Benchmarking Performance: PCMark 10
- 9: Benchmarking Performance: Office Tests
- 10: Benchmarking Performance: Rendering Tests
- 11: Benchmarking Performance: Encoding Tests
- 12: Benchmarking Performance: System Tests
- 13: Benchmarking Performance: Legacy Tests
- 14: A Few Words on Power Consumption
- 15: Conclusions and Final Words
- The Intel Skylake-X Review: Core i9-7900X, i7-7820X and i7-7800X Tested
- The Intel Kaby Lake-X Review: Core i7-7740X and i5-7640X Tested
- Intel Announces Basin Falls: The New High-End Desktop Platform and X299 Chipset
Comments
ddriver - Monday, September 25, 2017
You are living in a world of mainstream TV functional BS.
Quantum computing will never replace computers as we know and use them. QC is very good at a very few tasks which classical computers are notoriously bad at. The same goes vice versa - QC sucks at regular computing tasks.
Which is OK, because we already have enough single-thread performance. And all the truly demanding tasks that require more performance due to their time-consuming nature scale very well, often perfectly, with the addition of cores, or even nodes in a cluster.
There might be some wiggle room in terms of process and material, but I am not overly optimistic seeing how we are already hitting the limits on silicon and there is no actual progress made on superior alternatives. Are they like gonna wait until they hit the wall to make something happen?
At any rate, in 30 years, we'd be far more concerned with surviving war, drought and starvation than with computing. A problem that "solves itself" ;)
SharpEars - Monday, September 25, 2017
You are absolutely correct regarding quantum computing, and it is photonic computing that we should be looking towards.
Notmyusualid - Monday, September 25, 2017
@ SharpEars
Yes, as alluded to by IEEE. But I've not looked at it in a couple of years or so, and I think they were still struggling with an optical DRAM of sorts.
Gothmoth - Monday, September 25, 2017
And what have they done for the past 6 years?
I am glad that I get more cores instead of 5-10% performance per generation.
Krysto - Monday, September 25, 2017
They would if they could. Improvements in IPC have been negligible since Ivy Bridge.
kuruk - Monday, September 25, 2017
Can you add Monero (Cryptonight) performance? Since Cryptonight requires at least 2 MB of L3 cache per core for best performance, it would be nice to see how these compare to Threadripper.
evilpaul666 - Monday, September 25, 2017
I'd really like it if enthusiast ECC RAM was a thing.
I used to always run ECC on Athlons back in the Pentium III/4 days. Now with 32-128x more memory that's running 30x faster, it doesn't seem like it would be a bad thing to have...
someonesomewherelse - Saturday, October 14, 2017
It is. Buy AMD.
IGTrading - Monday, September 25, 2017
I think we're being too kind on Intel.
Despite the article clearly mentioning it in a proper and professional way, the calm tone of the conclusion seems to legitimize and make it acceptable that Intel basically deceives its customers and ships a CPU that consumes almost 16% more power than its stated TDP.
THIS IS UNACCEPTABLE and UNPROFESSIONAL from Intel.
I'm not "shouting" this :) , but I'm trying to underline this fact by putting it in caps.
People could burn their systems if they design workstations and use cooling solutions for 165W TDP.
If AMD had done anything remotely similar, we would have seen titles like "AMD's CPU can fry eggs / system killer / motherboard breaker" and so on ...
On the other hand, when Intel does this, it is silently, calmly and professionally deemed acceptable.
It is my view that such a thing is not acceptable and these products should be banned from the market UNTIL Intel corrects its documentation or the power consumption.
The i9-7960X fits perfectly in its TDP of 165 W, so how come the i9-7980XE is allowed to run wild and consume 16% more?!
This is similar to the way people accepted every crappy design and driver fail from nVIDIA, even DEAD GPUs, while complaining about AMD's "bad drivers" that never destroyed a video card like nVIDIA did. See link: https://www.youtube.com/watch?v=dE-YM_3YBm0
This is not cutting Intel "some slack"; this is accepting shit, lies and mockery and paying 2000 USD for it.
For $2000 I expect the CPU to run like a Bentley for life, not like a modded Mustang which will blow up if you expect it to work as reliably as a stock model.
whatevs - Monday, September 25, 2017
What a load of ignorance. Intel TDP is *average* power at *base* clocks; the chip uses more power at all-core turbo clocks here. Disable turbo if that's too much power for you.