Another snippet of information from Intel today relates to the company’s future mobile platform CPU. We know it’s called Ice Lake-U, that it is built on Intel’s 10nm process, that it has Sunny Cove cores, and has beefy Gen11 integrated graphics. We’re still waiting on finer details about where it’s going to be headed, but today Intel is unloading some of its integrated graphics performance data for Ice Lake-U.

It should be noted that these tests were performed by Intel, and we have had no ability to verify them in any way. Intel shared this information with a number of press in order to set a level of expectations. We’ve been told that this is Intel’s first 1 TeraFLOP graphics implementation, and it performs as such. The presentation was given by Ryan Shrout, ex-owner and editor-in-chief of PC Perspective, and the data was gathered by his team inside Intel.
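
Intel’s “first 1 TeraFLOP” claim lines up with the theoretical peak FP32 throughput of a fully-enabled 64 EU Gen11 part, given the usual Gen EU layout of 8 FP32 lanes per EU with an FMA counting as two operations. A quick back-of-the-envelope check (the 64 EU count and the ~1.0 GHz clock here are our assumptions, not figures Intel confirmed for this demo system):

```python
def peak_gflops(eus, fp32_lanes_per_eu=8, clock_ghz=1.0, flops_per_fma=2):
    """Theoretical peak single-precision throughput in GFLOPS:
    EUs x FP32 lanes per EU x FLOPs per FMA x clock (GHz)."""
    return eus * fp32_lanes_per_eu * flops_per_fma * clock_ghz

# Gen11 GT2 at an assumed ~1.0 GHz: 64 EUs x 8 lanes x 2 FLOPs x 1.0 GHz
print(peak_gflops(64))  # → 1024.0 GFLOPS, i.e. just over 1 TFLOP
# For contrast, the Gen9 UHD Graphics 620's 24 EUs at the same clock:
print(peak_gflops(24))  # → 384.0 GFLOPS
```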

Ryan first showed us a direct comparison between the Gen9 graphics found in Intel’s latest and best Whiskey Lake platform at 15W and a 15W Ice Lake-U product. The results make for pleasant reading. In the game demo scenes that Intel showed us, we saw upwards of a 40% gain in average frame rates. Percentile numbers were not shown.

When comparing to an equivalent AMD product, Intel stated that it was almost impossible to find one of AMD’s latest 15W APUs actually running at 15W in a device – they stated that every device they could find was actually running one of AMD’s higher performance modes. To make the test fair, Intel pushed one of its Ice Lake-U processors to the equivalent of a 25W TDP and did a direct comparison. This is essentially AMD’s Vega 10 vs Intel’s Gen 11.

For all the games in Intel’s test methodology, they scored anywhere from a 6% loss to a 16% gain, with the average somewhere around a 4-5% gain. The goal here is to show that Intel can focus on graphics and gaming performance in ultra-light designs, with the aim to provide a smooth 1080p experience with popular eSports titles.

Update: As our readers were quick to pick up on from Intel's full press release, Intel is using faster LPDDR4X on their Ice Lake-U system. This is something that was not disclosed directly by Intel during their pre-Computex presentation.

Intel Test Systems Spec Comparison

            Ice Lake-U            Core i7-8565U (WHL-U)      Ryzen 7 3700U (Zen+)
CPU Cores   4                     4                          4
GPU         Gen 11 (<=64 EUs?)    UHD Graphics 620 (24 EUs)  Vega 10 (10 CUs)
Memory      8GB LPDDR4X-3733      16GB DDR4-2400             8GB DDR4-2400
Storage     Intel SSD 760P 256GB  Intel SSD 760P 512GB       SK Hynix BC501 256GB

For some background context, LPDDR4X support is new to Ice Lake-U, and long overdue from Intel as a consequence of Intel's 10nm & Cannon Lake woes. It offers significant density and even greater bandwidth improvements over LPDDR3. Most 7/8/9th Gen Core U systems implemented LPDDR3 for power reasons, and OEMs have been chomping at the bit for LPDDR4(X) so that they don't have to trade off between capacity and power consumption.

That Intel used LPDDR4X in Ice Lake-U versus DDR4 in the AMD system means that Intel had a significant memory bandwidth advantage – around 56%, on paper at least. This sort of differential matters most in integrated graphics performance, suggesting that this is one angle that Intel will readily leverage when it comes to comparisons between the two products.
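
The ~56% figure falls straight out of the memory clocks: both systems run a 128-bit-wide memory subsystem in total (two 64-bit DDR4 channels on the Ryzen side, four 32-bit LPDDR4X channels on Ice Lake-U), so peak bandwidth scales directly with the transfer rate. A quick sketch:

```python
def peak_bandwidth_gbps(transfers_mts, bus_width_bits=128):
    """Peak memory bandwidth in GB/s: transfer rate (MT/s) x bus width (bytes)."""
    return transfers_mts * (bus_width_bits // 8) / 1000

icl = peak_bandwidth_gbps(3733)  # LPDDR4X-3733 on the Ice Lake-U system
ryz = peak_bandwidth_gbps(2400)  # DDR4-2400 on the Ryzen 7 3700U system
print(f"{icl:.1f} GB/s vs {ryz:.1f} GB/s -> {icl / ryz - 1:.0%} on paper")
```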

Moving on, the last set of data comes from Intel’s implementation of Variable Rate Shading (VRS), which was recently introduced in DirectX 12. VRS is a technique that allows a game developer to change the shading resolution of an area of the screen on the fly, reducing the amount of pixel shading work in order to boost performance, ideally with little-to-no impact on image quality. It is newly supported on Gen11, but it does require the game to support the feature as well. The feature is game specific, and the settings are tuned by the game, not the driver or GPU.
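
To illustrate where the savings come from, here is a minimal sketch (ours, not Intel’s implementation) of the invocation-count arithmetic behind coarse pixel shading: at a 2x2 shading rate the GPU runs one pixel-shader invocation per 2x2 block and broadcasts the result across the block, quartering the shading work for that region. The screen split below is a made-up example:

```python
def shaded_invocations(width, height, rate_x=1, rate_y=1):
    """Pixel-shader invocations for a screen region at a given shading rate.

    rate_x/rate_y of 1 = full rate (one invocation per pixel);
    2 = one invocation per 2x2 coarse pixel block, and so on.
    """
    blocks_x = -(-width // rate_x)   # ceiling division for partial blocks
    blocks_y = -(-height // rate_y)
    return blocks_x * blocks_y

full = shaded_invocations(1920, 1080)  # 1:1 shading everywhere
# Suppose the game tags the outer two-thirds of the screen (e.g. fast-moving
# periphery) for 2x2 coarse shading and keeps the centre band at full rate:
coarse = shaded_invocations(1920, 720, 2, 2) + shaded_invocations(1920, 360)
print(f"Shader invocation reduction: {1 - coarse / full:.0%}")  # → 50%
```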

Intel showed that in an ideal synthetic test, it scored a 40% uplift with VRS enabled – enough extra performance, in that same on/off comparison, to put it above an equivalent AMD Ryzen system. AMD’s GPU does not support this feature at this time.

Intel is also keen to promote Ice Lake as an AI CPU due to its AVX-512 implementation: any software that can take advantage of AI can be equipped with accelerated algorithms to speed it up.
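
Concretely, the AI angle rests largely on Ice Lake’s DL Boost (AVX-512 VNNI) instructions, which fuse the int8 multiply-accumulate at the heart of quantized neural-network inference into a single instruction. As a scalar sketch of what one 32-bit lane of the VPDPBUSD instruction computes (our illustration, not code from Intel’s presentation):

```python
def vpdpbusd_lane(acc, a_u8, b_s8):
    """One dword lane of AVX-512 VNNI's VPDPBUSD: multiply four unsigned
    bytes by four signed bytes, sum the products, and add the result to a
    32-bit accumulator - the inner step of a quantized dot product."""
    assert len(a_u8) == len(b_s8) == 4
    return acc + sum(u * s for u, s in zip(a_u8, b_s8))

print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -10, 10, -10]))  # → -20
```

Without VNNI this pattern takes several vector instructions per step; with it, one instruction per 64 byte-pairs per 512-bit register, which is where the claimed AI speed-ups come from.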

We expect to hear more about Ice Lake this week at Computex, given Intel’s keynote on Tuesday, and we also expect to see some vendors showing off their Ice Lake-U designs.

72 Comments

  • johannesburgel - Sunday, May 26, 2019 - link

    There were several Broadwell and Skylake parts which already could do 1 TeraFLOP/s at Single Precision, even on OpenCL. The Iris Pro 580 in the Core i7 6770HQ for example benchmarks at > 1 TeraFLOP/s in clpeak.

    So I wonder what they mean by "first TeraFLOP GPU".
  • IntelUser2000 - Sunday, May 26, 2019 - link

    Actually they claimed 1TFlop with Haswell, when you combine the CPU and GPU together. The GPU was quite close.

    In this case they mean 1TFLOP for mainstream systems.
  • johannesburgel - Sunday, May 26, 2019 - link

    I just benchmarked the HD9 graphics in my Core i7-7500U ultrabook at 430 GFLOP/s in clpeak. It has 24 execution units, so I would expect all more recent parts with 48 execution units or more to get that TeraFLOP/s, especially when they don't have to run inside the power envelope of an ultrabook. Namely that would be Iris Plus Graphics 640/650/655. Those three are labelled "Consumer".
  • IntelUser2000 - Sunday, May 26, 2019 - link

    The Iris Plus 640/650 systems use the expensive eDRAM packaging while the one in Icelake-U does away with it while delivering better performance. This then becomes comparable to the GT2 part with 24EU and without eDRAM.
  • isthisavailable - Sunday, May 26, 2019 - link

    Nice improvement, but the gap between current Ryzen chips and this is so small that I expect AMD to pull ahead again with next gen Ryzen mobile chips.
  • maroon1 - Sunday, May 26, 2019 - link

    Ryzen 7 3700U came out recently. You won't see any new APU until 2020 (probably Q2 2020)
  • RedGreenBlue - Sunday, May 26, 2019 - link

    Seems like a decently fair comparison. However, I wonder how much and in which direction those benchmarks would shift if they’d been run at higher resolutions. I would expect an Intel core to beat the Ryzen core in gaming at low resolution even if the graphics were evenly matched. I’d like to have seen a more pure graphics test, but I guess if you’re gaming on a 25 watt or lower machine you won’t be pushing resolution very much anyway.
  • RedGreenBlue - Sunday, May 26, 2019 - link

    Looking forward to seeing if this is a totally redesigned architecture Raja was involved in.
  • IntelUser2000 - Sunday, May 26, 2019 - link

    There's no such thing as a truly redesigned architecture. That would be a waste of time anyway.

    Gen 11 is a significant improvement over Gen 9, but the fundamentals are still Intel GPU architecture.

    Raja won't be able to have much effect on this considering the timeline. We can expect more input on the next gen, now called by Xe name. But it'll still be Intel GPU architecture. If Raja had any part in the direction of the design, it'll be low level that most of us won't get to know.
  • RedGreenBlue - Sunday, May 26, 2019 - link

    Low level wouldn’t be the best way to describe it, low level details we’ll never be told, probably yes, but he’s in charge of that division with 4,500 people under him. And I definitely think his input would have greatly impacted performance, because Intel likely would not have been that close to finishing the design when they hired him. Die shot still looks like the 8 cluster EU groups, though.
    Obviously I didn’t mean totally redesigned in a literal sense talking about chip architecture, but rather, just not a tweak to a few aspects. His start at Intel seemed to coincide with the AMD cross-license agreement. And yeah, for GPU’s Intel’s mainly just had to do that because of patent infringement reasons, but I think it would be stupid to get access to some of AMD’s GPU patent portfolio and not implement parts of it that weren’t available with the previous Nvidia portfolio. I expect they would also HAVE to get rid of some things that were in the Nvidia license but not in the AMD license. Also the way they’re touting this for AI suggests Raja’s experience came into play, or it could just be from the cross-license and Nvidia wouldn’t give some of that in the previous deal or a new deal, or they could be exaggerating abilities a bit.
