The AnandTech Coffee Lake Review: Initial Numbers on the Core i7-8700K and Core i5-8400
by Ian Cutress on October 5, 2017 9:00 AM EST
Silicon and Process Nodes: 14++
Despite Intel being somewhat reserved in our pre-briefing, and initially blanket-labeling the process node for these chips as ‘14nm’, we can confirm that Intel’s newest ‘14++’ manufacturing process is being used for these 8th Generation processors. This is Intel’s third crack at a 14nm process, following Broadwell and Skylake (14), then Kaby Lake (14+), and now Coffee Lake (14++).
With the 8th Generation of processors, Intel is moving away from having the generation correlate to both the process node and microarchitecture. As Intel’s plans to shrink its process nodes have become elongated, Intel has decided that it will use multiple process nodes and microarchitectures across a single generation of products to ensure that every update cycle has a process node and microarchitecture that Intel feels best suits that market. A lot of this is down to product maturity, yields, and progress on the manufacturing side.
Intel's Core Architecture Cadence (8/20)

| Core Generation | Microarchitecture | Process Node | Release Year |
|-----------------|-------------------|--------------|--------------|
| 2nd | Sandy Bridge | 32nm | 2011 |
| 3rd | Ivy Bridge | 22nm | 2012 |
| 4th | Haswell | 22nm | 2013 |
| 5th | Broadwell | 14nm | 2014 |
| 6th | Skylake | 14nm | 2015 |
| 7th | Kaby Lake | 14nm+ | 2016 |
| 8th | Kaby Lake Refresh | 14nm+ | 2017 |
| 8th | Coffee Lake | 14nm++ | 2017 |
| 8th | Cannon Lake | 10nm | 2018? |
| 9th | Ice Lake? ... | 10nm+ | 2018? |
| Unknown | Cascade Lake (Server) | ? | ? |
Kaby Lake was advertised as using a 14+ node with slightly relaxed manufacturing parameters and a new FinFET profile. This was to allow for higher frequencies and better overclocking, although nothing was fundamentally changed in the core manufacturing parameters. With Coffee Lake at least, the minimum gate pitch has increased from 70nm to 84nm, with all other features being equal.
Increased gate pitch moves transistors further apart, forcing a lower current density. This allows for higher leakage transistors, meaning higher peak power and higher frequency at the expense of die area and idle power.
Normally Intel aims to improve its process every generation; however, this seems like a step ‘back’ in some of the metrics in order to gain performance. The truth of the matter is that back in 2015, we were expecting Intel to be selling 10nm processors en masse by now. As delays have crept into that timeline, the 14++ node is holding over until 10nm is on track. Intel has already stated that 10+ is likely to be the first node on the desktop, which given the track record on 14+ and 14++ might be a relaxed version of 10 in order to hit performance/power/yield targets, with some minor updates. Conceptually, Intel seems to be drifting towards separate low-power and high-performance process nodes, with the former coming first.
Of course, relaxing the gate pitch is expected to increase the die area. With thanks to HKEPC (via Videocardz), we can already see a six-core i7-8700K silicon die compared to a quad-core i7-7700K.
The die area of the Coffee Lake 6+2 design (six cores and GT2 graphics) sits at ~151 mm2, compared to ~125 mm2 for the Kaby Lake 4+2 processor: a 26 mm2 increase. This increase is mainly due to the two extra cores, however there is a minor adjustment in the integrated graphics as well to support HDCP 2.2, not to mention any unpublished changes Intel has made to its designs between Kaby Lake and Coffee Lake.
The following calculations are built on assumptions and contain a margin of error
From the silicon floor plan, we can calculate that the CPU cores (plus cache) account for 47.3% of the die, or 71.35 mm2. Divided by six, that gives 11.9 mm2 per core, which means two cores take 23.8 mm2 of die area. Of the 26 mm2 increase, then, 91.5% is CPU area, with the rest likely accounting for the change in the gate pitch across the remainder of the processor.
The Coffee Lake 4+2 die would then be expected to be around ~127 mm2, making a 2mm2 increase over the equivalent Kaby Lake 4+2, although this is well within the margin of error for measuring these processors. We are expecting to see some overclockers delid the quad-core processors soon after launch.
In previous Intel silicon designs, when Intel was ramping up its integrated graphics, the GPU surpassed 50% of the die area. In this 6+2 design, the GPU accounts for only 30.2% of the floor plan as provided, or 45.6 mm2 of the full die.
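The die-area arithmetic above can be sketched in a few lines of Python. The die sizes and area fractions are our own estimates from the die shot, so treat the outputs as approximate:

```python
# Rough die-area arithmetic for Coffee Lake 6+2 vs Kaby Lake 4+2.
# All inputs are our own estimates from die-shot measurements, not official figures.
CFL_6P2_DIE = 151.0   # mm^2, measured Coffee Lake 6+2 die
KBL_4P2_DIE = 125.0   # mm^2, measured Kaby Lake 4+2 die

cpu_fraction = 0.473                        # estimated share of die for cores + cache
cpu_area = CFL_6P2_DIE * cpu_fraction       # ~71.4 mm^2 for six cores plus cache
per_core = cpu_area / 6                     # ~11.9 mm^2 per core
two_cores = per_core * 2                    # ~23.8 mm^2 for the two extra cores

delta = CFL_6P2_DIE - KBL_4P2_DIE           # 26 mm^2 total die growth
core_share = two_cores / delta              # ~91.5% of the growth is CPU area

gpu_fraction = 0.302                        # estimated GT2 graphics share
gpu_area = CFL_6P2_DIE * gpu_fraction       # ~45.6 mm^2 of graphics

# Hypothetical Coffee Lake 4+2: remove two cores from the 6+2 die
cfl_4p2_estimate = CFL_6P2_DIE - two_cores  # ~127 mm^2

print(f"CPU area: {cpu_area:.1f} mm^2 ({per_core:.1f} mm^2 per core)")
print(f"Share of 26 mm^2 growth from the two cores: {core_share:.1%}")
print(f"GPU area: {gpu_area:.1f} mm^2; CFL 4+2 estimate: {cfl_4p2_estimate:.1f} mm^2")
```

The numbers carry a margin of error from measuring a die shot, but the conclusion holds: nearly all of the die growth is the two extra cores.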
Memory Support on Coffee Lake
With a new processor generation comes an update to memory support. There is always a small amount of confusion here about what Intel calls ‘official memory support’ and what the processors can actually run. Intel’s official memory support is typically a guarantee, saying that in all circumstances, with all processors, this memory speed should work. However motherboard manufacturers might offer speeds over 50% higher in their specification sheets, which Intel technically counts as an overclock.
This is usually seen as Intel leaving its processors a lot of headroom in order to be conservative, avoid RMAs, and maintain stability. In most cases this is a good thing: there are only a few niche scenarios where super high-speed memory can equate to tangible performance gains*, but they do exist.
*Based on previous experience, but pending a memory scaling review
For our testing at least, our philosophy is that we test at the CPU manufacturers’ recommended setting. If there is a performance gain to be had from slightly faster memory, then it pays dividends to set that as the limit for official memory support. This way, there is no argument on what the rated performance of the processor is.
For the new generation, Intel is supporting DDR4-2666 for the six-core parts and DDR4-2400 for the quad-core parts, in both 1DPC (one DIMM per channel) and 2DPC modes. This should make it relatively simple, compared to AMD’s memory support differing on DPC and type of memory.
It stays simple until we talk about AIO (all-in-one) designs using these processors, which typically require SO-DIMM memory. For those parts, both quad-core and hex-core, Intel supports DDR4-2400 at 1DPC and DDR4-2133 at 2DPC, and LPDDR3 support is dropped entirely. The reason for the reduced memory frequency in an AIO environment with SO-DIMMs is that these motherboards typically run their traces daisy-chained between the memory slots, rather than in a T-topology, which helps with timing synchronization. Intel has made the T-topology part of the specification for desktop motherboards, but not for AIO or integrated ones, which explains the difference in DRAM speed support.
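The official support matrix described above can be summarized as a simple lookup. This is a hypothetical sketch (the table and function names are our own) of the speeds stated in this section:

```python
# Intel's official DDR4 support for 8th Gen Coffee Lake, per the text above.
# Key: (form_factor, core_count, DIMMs per channel) -> supported speed in MT/s.
OFFICIAL_DDR4_SUPPORT = {
    ("desktop", 6, 1): 2666,
    ("desktop", 6, 2): 2666,
    ("desktop", 4, 1): 2400,
    ("desktop", 4, 2): 2400,
    ("aio",     6, 1): 2400,  # SO-DIMM, daisy-chained traces
    ("aio",     6, 2): 2133,
    ("aio",     4, 1): 2400,
    ("aio",     4, 2): 2133,
}

def supported_speed(form_factor: str, cores: int, dpc: int) -> int:
    """Return Intel's officially supported DDR4 speed in MT/s."""
    return OFFICIAL_DDR4_SUPPORT[(form_factor, cores, dpc)]

print(supported_speed("desktop", 6, 2))  # 2666: hex-core desktop, any DPC
print(supported_speed("aio", 6, 2))      # 2133: SO-DIMM at two DIMMs per channel
```

Note how the desktop parts keep one speed regardless of DPC, while the AIO parts drop a speed grade at 2DPC.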
These supported frequencies follow JEDEC official sub-timings. Familiar system builders will be used to DDR4-2133 at a CAS Latency of 15, but as we increase the speed of the modules, the latency increases to compensate:
Intel’s official sub-timing support at DDR4-2666 is 19-19-19. Outside of enterprise modules, that memory does not really exist at retail, because memory manufacturers can seemingly mint DDR4-2666 16-17-17 modules fairly easily, and these processors are typically fine with those sub-timings. CPU manufacturers typically only state ‘supported frequency at JEDEC sub-timings’ and do not go into sub-timing discussions, because most users care more about the memory frequency. If time permits, it would be interesting to see just how much of a performance deficit the official JEDEC sub-timings impose compared to the memory actually on sale.
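The trade-off between frequency and CAS latency is easy to quantify: first-word latency in nanoseconds is the CAS cycle count divided by the memory clock, and DDR4 transfers twice per clock. A quick sketch shows why JEDEC timings keep absolute latency roughly flat as speeds rise:

```python
# First-word latency in nanoseconds: CAS cycles divided by the memory clock.
# DDR4 is double data rate, so clock (MHz) = transfer rate (MT/s) / 2.
def cas_latency_ns(transfer_rate_mts: int, cas_cycles: int) -> float:
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000  # convert microseconds -> nanoseconds

# JEDEC DDR4-2133 CL15 vs official DDR4-2666 CL19 vs a typical retail CL16 kit
for rate, cl in [(2133, 15), (2666, 19), (2666, 16)]:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
```

At JEDEC timings, DDR4-2666 CL19 (~14.3 ns) is effectively no quicker to first word than DDR4-2133 CL15 (~14.1 ns); the retail-style CL16 kit (~12.0 ns) is where the real latency gain appears.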
222 Comments
mapesdhs - Friday, October 6, 2017 - link
Any idea what that optimisation is? Seems odd that adding extra pure cores would harm performance, as opposed to adding HT which some games don't play nice with. Otherwise, are you saying that for this test, if it was present, the i3 8100 would come out on top? Blimey.

Ian Cutress - Saturday, October 7, 2017 - link
They're either doing something to align certain CPU tasks for AVX, or it's bypassing code. You'd have to ask the developers on that.

mapesdhs - Monday, October 9, 2017 - link
I doubt they'd explain what's happening, might be proprietary code or something.

WickedMONK3Y - Thursday, October 5, 2017 - link
You have the spec of the i7 8700K slightly wrong. It has a base frequency of 3.7GHz not 3.8GHz.
https://ark.intel.com/products/126684/Intel-Core-i...
Ian Cutress - Thursday, October 5, 2017 - link
Mistake on our part. I was using our previous news post as my source and that had a typo. This review (and that news post) should be updated now.

Slomo4shO - Thursday, October 5, 2017 - link
Ian, this is probably your worst review to date. Lackluster choice of CPUs, mid-grade GPU, and lack of direct competition in the product stack... Why would you not use a GTX 1080 Ti or Titan XP?

Ian Cutress - Thursday, October 5, 2017 - link
All the CPUs we've ever tested are in Bench. Plenty of other data in there: the goal was to not put 30+ CPUs into every graph.

Our benchmark database includes over 40 CPUs tested on the GTX 1080, which is the most powerful GPU I could get a set of so I can do parallel testing across several systems. If that wasn't enough (a full test per CPU takes 5 hours per GPU), the minute I get better GPUs I would have to start retesting every CPU, to the exclusion of other content. Our benchmark suite was updated in early Q2, and we're sticking with that set of GPUs (GTX 1080/1060/R9 Fury/RX 480) for a good while for that reason.
Note I had three days to do this review.
crimson117 - Thursday, October 5, 2017 - link
Good job! More people need to know about the bench...

Slomo4shO - Thursday, October 5, 2017 - link
To be fair the R5 1600 was added to the benches after the fact. In addition, your other reviews tend to be much more detailed and data driven, with relevant products and multiple GPUs.

Why would I read your review if you expect me to dig through your benchmark to obtain relevant data?
I can understand and appreciate the time crunch but it is a poor excuse for some of the decisions made in this review.
Take it with a grain of salt, this was not your best work.
mapesdhs - Friday, October 6, 2017 - link
Ooohhh the effort of examining the data in Bench! :D First world problems. Sheesh...

Run your own tests then, see how you get on with having a life. It's insanely time consuming.