It has been ten years since Intel introduced quad-core processors into its mainstream product range. Six-core parts were expected to hit the segment a few years later; however, due to process improvements, microarchitecture gains, cost, and a lack of competition, the top-end mainstream processor is still a quad-core a decade later. That changes today.

Launching today are Intel's new 8th Generation Coffee Lake CPUs, with the Core i5 and Core i7 parts having six distinct physical cores. In this review we're covering the top SKU, the Core i7-8700K, along with looking at numbers from the Core i5-8400.

Coffee Lake Hits Primetime

There are a number of interesting elements to this launch to be excited about, and a number of factors that raise further questions, which we will go into.

To start, the processor stack that Intel is making available today consists of six desktop processors that all fall under the ‘8th Generation’ nomenclature, and are built under the codename ‘Coffee Lake’ to designate the microarchitecture and manufacturing process combination.

  

All these new processors are desktop parts, meaning they are socketed processors for use in appropriate motherboards featuring the Z370 chipset. Technically these processors use the LGA1151 socket, which is also used by the 6th and 7th Generation processors with the Z170 and Z270 chipsets. However, due to differences in pin layout between the two sets of processors, 8th Gen parts will only work in Z370 boards; there is no cross-compatibility in either direction. We will discuss this later.

Intel 8th Generation 'Coffee Lake' Desktop Processors

                       i7-8700K     i7-8700      i5-8600K     i5-8400      i3-8350K     i3-8100
Cores / Threads        6C / 12T     6C / 12T     6C / 6T      6C / 6T      4C / 4T      4C / 4T
Base Frequency         3.7 GHz      3.2 GHz      3.6 GHz      2.8 GHz      4.0 GHz      3.6 GHz
Turbo Boost 2.0        4.7 GHz      4.6 GHz      4.3 GHz      4.0 GHz      -            -
L3 Cache               12 MB        12 MB        9 MB         9 MB         8 MB         6 MB
DRAM Support           DDR4-2666    DDR4-2666    DDR4-2666    DDR4-2666    DDR4-2400    DDR4-2400
Integrated Graphics    GT2: 24 EUs  GT2: 24 EUs  GT2: 23 EUs  GT2: 23 EUs  GT2: 23 EUs  GT2: 23 EUs
IGP Base Frequency     350 MHz      350 MHz      350 MHz      350 MHz      350 MHz      350 MHz
IGP Turbo              1.20 GHz     1.20 GHz     1.15 GHz     1.05 GHz     1.15 GHz     1.10 GHz
PCIe Lanes (CPU)       16           16           16           16           16           16
PCIe Lanes (Z370)      up to 24     up to 24     up to 24     up to 24     up to 24     up to 24
TDP                    95 W         65 W         95 W         65 W         91 W         65 W
Price (tray)           $359         $303         $257         $182         $168         $117
Price (Newegg)*        $380         $315         $260        $190          $180         $120
Price (Amazon)         N/A          N/A          N/A          N/A          N/A          N/A

* Newegg prices reflect a launch sale running until 10/12; Amazon listings were not yet live at the time of writing.

At the top of the stack are two Core i7 Coffee Lake processors. In previous generations ‘Core i7’ meant that we were discussing quad-core parts with hyperthreading, but for this generation it moves up to a six-core part with hyperthreading. The Core i7-8700K starts at a 3.7 GHz base frequency and is designed to turbo to 4.7 GHz in single threaded workloads, with a thermal design power (TDP) of 95W. The K designation means this processor is unlocked and can be overclocked by adjusting the frequency multiplier, subject to appropriate cooling, applied voltage, and the quality of the chip (Intel only guarantees 4.7 GHz).  The Core i7-8700 is the non-K variant, with lower clocks (3.2 GHz base, 4.6 GHz turbo) and a lower TDP (65W).  Both of these processors use 256 KB of L2 cache per core and 2 MB of L3 cache per core.

Kaby Lake i7-K vs Coffee Lake i7-K

i7-7700K                             i7-8700K
4C / 8T        Cores                 6C / 12T
4.2 GHz        Base Frequency        3.7 GHz
4.5 GHz        Turbo Boost 2.0       4.7 GHz
8 MB           L3 Cache              12 MB
DDR4-2400      DRAM Support          DDR4-2666
GT2: 24 EUs    Integrated Graphics   GT2: 24 EUs
350 MHz        IGP Base Freq         350 MHz
1.15 GHz       IGP Turbo             1.20 GHz
16             PCIe Lanes (CPU)      16
up to 24       PCIe Lanes (Chipset)  up to 24
95 W           TDP                   95 W
$339           Price (tray)          $359
$340           Price (Newegg)        $380
$351           Price (Amazon)        N/A

When compared to the previous generation, the Core i7-8700K starts at a higher price, but for that price comes more cores and a higher turbo frequency. The Core i7-8700K is a good example of how adding cores works: in order to stay within the same power envelope, the base frequency has to come down to accommodate the extra cores. However, in order to keep responsiveness ahead of the previous generation, the single-threaded turbo is often pushed to a higher multiplier. In almost all situations this is a win-win, and makes pushing for the 6-core part, on paper at least, a no-brainer.
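The trade-off described above can be sketched with a toy model (our own back-of-the-envelope assumption, not Intel's power methodology): dynamic power scales roughly with cores × frequency × voltage², so dropping the base frequency, and with it the voltage, is what lets two extra cores fit in a similar power envelope.

```python
# Toy dynamic-power model (an assumption for illustration, not Intel's
# methodology): P is proportional to cores * frequency * voltage^2.
def relative_power(cores, freq_ghz, voltage=1.0):
    """Relative dynamic power in arbitrary units."""
    return cores * freq_ghz * voltage ** 2

kaby_4c   = relative_power(4, 4.2)   # i7-7700K at its 4.2 GHz base
coffee_6c = relative_power(6, 3.7)   # i7-8700K at its 3.7 GHz base

# At the same voltage, six cores at 3.7 GHz still draw ~32% more
# than four cores at 4.2 GHz...
print(coffee_6c / kaby_4c)           # ~1.32

# ...but a lower frequency also permits a lower voltage; a ~13% drop
# (illustrative) brings the six-core part back into the same envelope.
coffee_6c_lv = relative_power(6, 3.7, voltage=0.87)
print(coffee_6c_lv / kaby_4c)        # ~1.00
```

The voltage figure is purely illustrative, but it shows why the 8700K can add 50% more cores for only a 4 W bump in rated TDP over the 91 W i7-7700K.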

Kaby Lake i5-7400 vs Coffee Lake i5-8400

i5-7400                              i5-8400
4C / 4T        Cores                 6C / 6T
3.0 GHz        Base Frequency        2.8 GHz
3.5 GHz        Turbo Boost 2.0       4.0 GHz
6 MB           L3 Cache              9 MB
DDR4-2400      DRAM Support          DDR4-2666
GT2            Integrated Graphics   GT2: 23 EUs
350 MHz        IGP Base Freq         350 MHz
1.00 GHz       IGP Turbo             1.05 GHz
16             PCIe Lanes (CPU)      16
up to 24       PCIe Lanes (Chipset)  up to 24
65 W           TDP                   65 W
$182           Price (tray)          $182
$190           Price (Newegg)        $190
$185           Price (Amazon)        N/A

In the middle of the stack are the Core i5 processors, with the new generation following the established philosophy of the previous one: the same core configuration as the Core i7, but without hyperthreading. The two Core i5 parts operate at lower clock speeds than the Core i7, and by a wider margin than we are used to, especially with the Core i5-8400 having a base frequency of 2.8 GHz. Intel sampled us the Core i5-8400 for this review because it hits an important metric: six cores for under $200. Comparing cache sizes to the Core i7, the new parts have the same L2 configuration at 256 KB per core, but a reduced L3 at 1.5 MB per core as part of the product segmentation.

Kaby Lake i5-7400 vs Coffee Lake i3-8100

i5-7400                              i3-8100
4C / 4T        Cores                 4C / 4T
3.0 GHz        Base Frequency        3.6 GHz
3.5 GHz        Turbo Boost 2.0       -
6 MB           L3 Cache              6 MB
DDR4-2400      DRAM Support          DDR4-2400
GT2            Integrated Graphics   GT2: 23 EUs
350 MHz        IGP Base Freq         350 MHz
1.00 GHz       IGP Turbo             1.10 GHz
16             PCIe Lanes (CPU)      16
up to 24       PCIe Lanes (Chipset)  up to 24
65 W           TDP                   65 W
$182           Price (tray)          $117
$190           Price (Newegg)        $120
$185           Price (Amazon)        N/A

It is interesting to note that in the last generation, Intel had processors with two cores and two threads (2C/2T), two cores with hyperthreading (2C/4T), quad cores with four threads (4C/4T) and quad cores with hyperthreading (4C/8T). This layout had staggered, regular steps. With the move to 6C/12T on the high-end Core i7, and 6C/6T on the mid-range Core i5, Intel completely skips the 4C/8T parts and moves straight to 4C/4T on the Core i3. This is likely because a 4C/8T processor might overtake a 6C/6T part in some multi-threaded tests (it would also explain why moving from a previous 4C/8T Core i7 processor to a 6C/6T Core i5 8th generation is not always an increase in performance).
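The overlap described above is easy to quantify with a crude throughput model (our own sketch; the ~30% hyperthreading uplift is an assumed, heavily workload-dependent figure, not an Intel number):

```python
# Crude throughput model: SMT (hyperthreading) is assumed to add ~30%
# per physical core on throughput-bound code. The 30% figure is an
# assumption and varies heavily by workload.
def effective_throughput(cores, threads, freq_ghz, smt_uplift=0.30):
    smt = 1 + smt_uplift if threads > cores else 1.0
    return cores * smt * freq_ghz

# Previous-gen 4C/8T Core i7-7700K at its ~4.4 GHz all-core turbo vs
# the new 6C/6T Core i5-8400 at its 3.8 GHz all-core turbo:
i7_7700k = effective_throughput(4, 8, 4.4)   # ~22.9 "units"
i5_8400  = effective_throughput(6, 6, 3.8)   # ~22.8 "units"
```

Under these assumptions the two parts land within a couple of percent of each other, which is consistent with a 6C/6T Core i5 not always beating a previous-generation 4C/8T Core i7 in multi-threaded tests.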

However, at the bottom of the stack are the 4C/4T Core i3 processors, where Intel is again pushing out an overclockable Core i3. This is a bit of a surprise: in our testing of the previous generation's overclockable Core i3, its dual-core design was a setback in a lot of tests. With the Core i3-K now quad-core, overclocking it to try to beat a six-core chip for less money may show less of a difference between the two in certain workloads, such as gaming. Nonetheless, the Core i3s retain the policy of no Turbo modes. Another interesting point is the cache: the i3-8350K has 2 MB of L3 cache per core, whereas the i3-8100 only has 1.5 MB of L3 cache per core.

One of our key items to watch from the initial announcement is that i3-8100: a quad-core processor for only $117. I suspect it will hit most of the mainstream computing requirements that the previous generation Core i5 (at $182) used to cater for. On paper at least, Intel might have an interesting task explaining why so many users are opting for a Core i3 this time around.

Turbo Modes

One of the interesting things to come out of our briefings with Intel was the fact that Intel made a very clear change in policy when it comes to press disclosure. When the question was asked about per-core turbo values for each of the CPUs, Intel made a clear statement first, then a secondary one when quizzed further:

“Intel will no longer provide this information”

"We are only including processor frequencies for base and single-core Turbo in our materials going forward - the reasoning is that turbo frequencies are opportunistic given their dependency on system configuration and workloads"

This change in policy is somewhat concerning and completely unnecessary. The information itself could be easily obtained by actually having the processors and probing the required P-states (assuming the motherboard manufacturer does not play silly tricks), so this comes across as Intel withholding information for arbitrary reasons.

Nonetheless, we were able to obtain the per-core turbo ratios for each of the new processors on our motherboard. Given Intel's statement above, each motherboard might conceivably apply different values here, with no Intel guidelines given.
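For reference, on Linux these ratios can be probed by reading MSR_TURBO_RATIO_LIMIT (0x1AD, e.g. via msr-tools' rdmsr): on Skylake-derived cores, each successive byte holds the maximum turbo ratio for 1, 2, 3... active cores, in multiples of the 100 MHz base clock. A minimal decoder sketch follows; the raw value is constructed to match the 4.7 GHz single-core and 4.3 GHz all-core turbos quoted in this article, with the intermediate steps illustrative rather than confirmed.

```python
def decode_turbo_ratios(msr_value, num_cores, bclk_mhz=100):
    """Decode MSR_TURBO_RATIO_LIMIT (0x1AD): byte i is the maximum
    turbo ratio with i+1 cores active, in multiples of the base clock."""
    return [((msr_value >> (8 * i)) & 0xFF) * bclk_mhz / 1000
            for i in range(num_cores)]  # GHz per active-core count

# Constructed example for a 6-core part: ratios 47,46,45,44,43,43
# packed little-endian into successive bytes. Only the 4.7 GHz
# single-core and 4.3 GHz all-core values are confirmed for the
# i7-8700K; the steps in between are illustrative.
raw = 0x00002B2B2C2D2E2F
print(decode_turbo_ratios(raw, 6))
# -> [4.7, 4.6, 4.5, 4.4, 4.3, 4.3]
```

Reading the MSR itself requires root (e.g. `sudo rdmsr 0x1ad`) and the msr kernel module, and a motherboard playing the aforementioned silly tricks can of course program values above these.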

For the most part, there is nothing out of the ordinary here. Intel uses the base frequency as the guaranteed minimum under adverse environmental conditions and heavy code (such as AVX2), although in most circumstances even the all-core turbo ratio will be higher than the base frequency.

The odd-one-out is actually the Core i5-8400. It is being shipped with a low base frequency, at 2.8 GHz, but the all-core turbo ratio is 3.8 GHz. Shipping with such a low base frequency is perhaps masking the performance of this part: it should be, on paper at least, only a whisker or two behind the Core i5-8600K.

It is noticeable that the two Core i7 parts both have an all-core turbo of 4.3 GHz, which is only ever matched by the single-threaded turbo of the Core i5-8600K. Moving up from the Core i5 to the Core i7 not only doubles the threads, but also adds a frequency gain on top. The Core i5-8600K has a tray price of $257, while the Core i7-8700 is at $303: overclocking is lost, but the threads are doubled, the available turbo frequencies are improved, the cache goes up, and the TDP goes down.

I’ve been running a little Twitter poll on this. It looks like the Core i7-8700 gets the nod almost every time.

This Review: Initial Impressions

For this review today, we are focusing on our preliminary testing of the Core i7-8700K. Intel sampled us both the Core i7-8700K and the Core i5-8400.

These chips only arrived three days before launch. They would have arrived sooner, but I was out of the country on a pre-booked business trip, and the courier redelivered as late as possible after I returned. Despite some initial motherboard teething issues (again!), we were able to run our CPU suites and GTX 1080 testing on both chips. We will follow up with data on the other GPUs, likely in dedicated CPU reviews, where we'll also include overclocking performance and workstation analysis.

So my apologies go out to our regular readers, especially those that have been expecting the usual gargantuan AnandTech reviews. Time and travel are cruel mistresses, and regular scheduled programming should recommence shortly. 2017 has been the most exciting year in a long while for these quick-fire CPU launches, but also the toughest: whereas previously we would be able to line up a couple of rounds of extra testing, this year has been one launch after another.

Die Sizes and DRAM Compatibility
222 Comments

  • mkaibear - Saturday, October 7, 2017 - link

    Well, I'd broadly agree with that!

    There are latency issues with that kind of approach but I'm sure they'd be solvable. It'll be interesting to see what happens with Intel's Mesh when it inevitably trickles down to the lower end / AMD's Infinity Fabric when they launch their APUs.
  • mapesdhs - Tuesday, October 10, 2017 - link

    Such an idea is kinda similar to SGI's shared memory designs. Problem is, scalable systems are expensive, and these days the issue of compatibility is so strong that making anything new and unique is very difficult; companies just don't want to try out anything different. SGI got burned with this re their VW line of PCs.
  • boeush - Saturday, October 7, 2017 - link

    I think it's a **VERY** safe bet that most systems selling with an i7 8700/k will also include some sort of a discrete GPU. It's almost unimaginable that anyone would buy/build a system with such a CPU but no better GPU than integrated graphics

    Which makes the iGPU a total waste of space and a piece of useless silicon that consumers are needlessly paying for (because every extra square inch of die area costs $$$).

    For high-end CPUs like the i7s, it would make much more sense to ditch the iGPU and instead spend that extra silicon to add an extra couple of cores, and a ton more cache. Then it would be a far better CPU for the same price.

    So I'm totally with the OP on this one.
  • mkaibear - Sunday, October 8, 2017 - link

    You need a better imagination!

    Of the many hundreds of computers I've bought or been responsible for speccing for corporate and educational entities, about half have been "performance" oriented (I'd always spec a decent i5 or i7 if there's a chance that someone might be doing something CPU limited - hardware is cheap but people are expensive...) Of those maybe 10% had a discrete GPU (the ones for games developers and the occasional higher-up's PC). All the rest didn't.

    From chatting to my fellow managers at other institutions this is basically true across the board. They're avidly waiting for the Ryzen APUs to be announced because it will allow them to actually have competition in the areas they need it!
  • boeush - Sunday, October 8, 2017 - link

    It's not surprising to see business customers largely not caring about graphics performance - or about the hit to CPU performance that results from splitting the TDP budget with the iGPU...

    In my experience, business IT people tend to be either penny-wise and pound-foolish, or obsessed with minimizing their departmental TCO while utterly ignoring company performance as a whole. If you could get a much better-performing CPU for the same money, and spend an extra $40 for a discrete GPU that matches or exceeds the iGPU's capabilities - would you care? Probably not. Then again, that's why you'd stick with an i5 - or a lower-grade i7. Save a hundred bucks on hardware per person per year; lose a few thousand over the same period in wasted time and decreased productivity... I've seen this sort of penny-pinching miscalculation too many times to count. (But yeah, it's much easier to quantify the tangible costs of hardware, than to assess/project the intangibles of sub-par performance...)

    But when it comes specifically to the high-end i7 range - these are CPUs targeted specifically at consumers, not businesses. Penny-pinching IT will go for i5s or lower-grade i7s; large-company IT will go for Xeons and skip the Core line altogether.

    Consumer builds with high-end i7s will always go with a discrete GPU (and often more than one at a time.)
  • mkaibear - Monday, October 9, 2017 - link

    That's just not true dude. There are a bunch of use cases which spec high end CPUs but don't need anything more than integrated graphics. In my last but-one place, for example, they were using a ridiculous Excel spreadsheet to handle the manufacturing and shipping orders which would bring anything less than an i7 with 16Gb of RAM to its knees. Didn't need anything better than integrated graphics but the CPU requirements were ridiculous.

    Similarly in a previous job the developers had ludicrous i7 machines with chunks of RAM but only using integrated graphics.

    Yes, some it managers are penny wise and pound foolish, but the decent ones who know what they're doing they spend the money on the right CPU for the job - and as I say a serious number of use cases don't need a discrete GPU.

    ...besides, it's irrelevant, because the integrated GPU has zero impact on performance for modern Intel chips; as I said, the limit is thermal, not package size.

    If Intel whack an extra 2 cores on and clock them at the same rate their power budget is going up by 33% minimum - so in exchange for dropping the integrated GPU you get a chip which can no longer be cooled by a standard air cooler and has to have something special on there, adding cost and complexity.

    Sticking with integrated GPUs is a no-brainer for Intel. It preserves their market share in that environment and has zero impact for the consumer, even gaming consumers.
  • boeush - Monday, October 9, 2017 - link

    Adding 2 cores to a 6-core CPU drives the power budget up by 33% if and **ONLY IF** all cores are actually getting fully utilized. If that is the case, then the extra performance from those extra 2 cores would be indeed actually needed! (at least on those occasions, and would be, therefore, sorely missed in a 6-core chip.). Otherwise, any extra cores would be mostly idle, not significantly impacting power utilization, cooling requirements, or maximum single-thread performance.

    Equally important to the number of cores is the amount of cache. Cache takes up a lot of space, doesn't generate all that much heat (compared to the actual CPU pipeline components), but can boost performance hugely, especially on some tasks that are memory-constrained. Having more L1/L2/L3 cache would provide a much better bang for the buck when you need the CPU grunt (and therefore a high-end i7), than the waste of an iGPU (eating up ~50% of die area) ever could.

    Again, when you're already spending top dollar on an i7 8700/k (presumable because you actually need high CPU performance), it makes little sense that you go, "well, I'd rather have **LOWER** CPU performance, than be forced to spend an extra $40 on a discrete GPU (that I could then reuse on subsequent system builds/upgrades for many years to come)"...
  • mkaibear - Tuesday, October 10, 2017 - link

    Again, that's not true. Adding 2 cores to a 6 core CPU means that unless you find some way to prevent your OS from scheduling threads on it then all those cores are going to end up used somewhat - which means that you have to plan for your worst case TDP not your best case TDP - which means you have to engineer a cooling solution which will work for the full 8 core CPU, increasing costs to the integrator and the end user. Why do you think Intel's worked so hard to keep the 6-core CPU within a few watts of the old 4-core CPU?

    In contrast an iGPU can be switched on or off and remain that way, the OS isn't going to assign cores to it and result in it suddenly dissipating more power.

    And again you're focussing on the extremely limited gamer side of things - in the real world you don't "reuse the graphics card for many years to come", you buy a machine which does what you need it to and what you project you'll need it to, then replace it at the end of whatever period you're amortising the purchase over. Adding a $40 GPU and paying the additional electricity costs to run that GPU over time means your TCO is significantly increased for zero benefits, except in a very small number of edge cases in which case you're probably better off just getting a HEDT system anyway.

    The argument about cache might be a better one to go down, but the amount of cache in desktop systems doesn't have as big an impact on normal workflow tasks as you might expect - otherwise we'd see greater segmentation in the marketplace anyway.

    In short, Intel introducing desktop processors without iGPUs makes no sense for them at all. It would benefit a small number of enthusiasts at a cost of winding up a large number of system integrators and OEMs, to say nothing of a huge stack of IT Managers across the industry who would suddenly have to start fitting and supporting discrete GPUs across their normal desktop systems. Just not a good idea, economically, statistically or in terms of customer service.
  • boeush - Tuesday, October 10, 2017 - link

    The TDP argument as you are trying to formulate it is just silly. Either the iGPU is going to be in fact used on a particular build, or it's going to be disabled in favor of headless operation or a discrete GPU. If the iGPU is disabled, then it is the very definition of all-around WASTE - a waste of performance potential for the money, conversely/accordingly a waste of money, and a waste in terms of manufacturing/materials efficiency. On the other hand, if the iGPU is enabled, it is actually more power-dense that the CPU cores - meaning you'll have to budget even more heavily for its heat and power dissipation, than you'd have for any extra CPU cores. So in either case, your argument makes no sense.

    Remember, we are talking about the high end of the Core line. If your build is power-constrained, then it is not high-performance and you have no business using a high-end i7 in it. Stick to i5/i3, or the mobile variants, in that case. Otherwise, all these CPUs come with a TDP. Whether the TDP is shared with an iGPU or wholly allocated to CPU is irrelevant: you still have to budget/design for the respective stated TDP.

    As far as "real-world", I've seen everything from companies throwing away perfectly good hardware after a year of use, to people scavenging parts from old boxes to jury-rig a new one in a pinch.

    And again, large companies with big IT organizations will tend to forego the Core line altogether, since the Xeons provide better TCO economy due to their exclusive RAS features. The top-end i7 really is not a standard 'business' CPU, and Intel really is making a mistake pushing it with the iGPU in tow. That's where they've left themselves wide-open to attack from AMD, and AMD has attacked them precisely along those lines (among others.)

    Lastly, don't confuse Intel's near-monopolistic market segmentation engineering with actual consumer demand distribution. Just because Intel has chosen to push an all-iGPU lineup at any price bracket short of exorbitant (i.e. barring the so-called "enthusiast" SKUs), doesn't mean the market isn't clamoring for a more rational and effective alternative.
  • mkaibear - Wednesday, October 11, 2017 - link

    Sheesh. Where to start?

    1) Yes, you're right, if the iGPU isn't being used then it will be disabled, and therefore you don't need to cool it. Conversely, if you have additional cores then your OS *will* use them, and therefore you *do* need to cool them.

    iGPU doesn't draw very much power at all. HD2000 drew 3W. The iGPU in the 7700K apparently draws 6W so I assume the 8700K with a virtually identical iGPU draws just as much (figures available via your friendly neighbourhood google). Claiming the iGPU has a higher power budget than the CPU cores is frankly ridiculous. (in fact it also draws less than .2W when it's shut down which means that having it in there is far outweighed by the additional thermal sink available, but anyway)

    2) Large companies with big IT organisations don't actually forego the Core line altogether and go with Xeons. They could if they wanted to, but in general they still use off-the shelf Dells and HPs for everything except extremely bespoke setups - because, as I previously mentioned, "hardware is cheap, people are expensive" - getting an IT department to build and maintain bespoke computers is hilariously expensive. No-one is arguing that for an enthusiast building their own computer that the option of the extra cores would be nice, but my point all along has been that Intel isn't going to risk sacrificing their huge market share in the biggest market to gain a slice of a much smaller market. That would be extremely bad business.

    3) The market isn't "clamoring for a more rational and effective alternative" because if it was then Ryzen would have flown off the shelves much faster than it did.

    Bottom line: business IT wants simple solutions, the fewer parts the better. iGPUs on everything fulfil far more needs than dGPUs for some and iGPUs for others. iGPUs make designing systems easier, they make swapouts easier, they make maintenance easier, they reduce TCO, they reduce RMAs and they just make IT staff's lives easier. I've run IT for a university, a school and a manufacturing company, and for each of them the number of computers which needed a fast CPU outweighed the number of computers which needed a dGPU by a factor of at least 10:1 - and the university I worked for had a world-leading art/media/design dept and a computer game design course which all had dGPUs. The average big business has even less use for dGPUs than the places I've worked.

    If you want to keep trying to argue this then can you please answer one simple question: why do you think it makes sense for Intel to prioritise a very small area in which they don't have much market share over a very large area in which they do? That seems the opposite of what a successful business should do.
