On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.

Starting from the top and following yesterday’s formal resignation from AMD, Raja Koduri has jumped ship to Intel, where he will be serving as a Senior VP for the company, overseeing the newly created Core and Visual Computing group. As that group’s chief architect and general manager, Raja is being tasked with significantly expanding Intel’s GPU business, particularly as the company re-enters the discrete GPU field. Raja of course has a long history in the GPU space as a leader in GPU architecture, having run AMD’s graphics business twice, and in between his AMD stints serving as the director of graphics architecture on Apple’s GPU team.

Meanwhile, perhaps the only news that can outshine the fact that Raja Koduri is joining Intel is what he will be doing for the company. As part of today’s revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU technology into new classes of edge devices, and while Intel doesn’t go into much more detail than this, the fact that they use the term “edge” strongly implies that we’re talking about IoT-class devices, where edge computing goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the GPU side, and their Movidius neural compute engines on the dedicated silicon side.

However, in what’s likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole, aiming at the top of the market means that Intel will once again be developing discrete GPUs. The company has tried this route twice before; once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. However, even though these efforts never panned out quite like Intel had hoped, the company has continued to develop their GPU architecture and GPU-like devices, the latter embodied by the massively parallel, compute-focused Xeon Phi family.

Yet while Intel has GPU-like products for certain markets, the company doesn’t have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with $150 or so discrete GPUs. Which is to say that Intel doesn’t have access to the midrange market or above with their iGPUs. With the hiring of Raja and Intel’s new direction, the company is going to be expanding into full discrete GPUs for what the company calls “a broad range of computing segments.”

Reading between the lines, it’s clear that Intel will be going after both the compute and graphics sub-markets for GPUs. The former of course is an area where Intel has been fighting NVIDIA for several years now with less success than they’d like to see, while the latter would be new territory for Intel. However it’s very notable that Intel is calling these “graphics solutions”, so it’s clear that this isn’t just another move by Intel to develop a compute-only processor à la the Xeon Phi.

Intel and NVIDIA are at best frenemies; the companies’ technologies complement each other well, but at the same time NVIDIA wants Intel’s high-margin server compute business, and Intel wants a piece of the rapid boom in business that NVIDIA is seeing in the high performance computing and deep learning markets. NVIDIA has already begun weaning themselves off of Intel with technologies such as the NVLink interconnect, which allows faster, cache-coherent memory transfers between NVIDIA GPUs and the forthcoming IBM POWER9 CPU. Meanwhile, developing their own high-end GPU would allow Intel to further chase developers currently in NVIDIA’s stable, while in the long run also potentially poaching customers from NVIDIA’s lucrative consumer and professional graphics businesses.

To that end, I’m going to be surprised if Intel doesn’t develop a true top-to-bottom product stack that contains midrange GPUs as well – something in the vein of Polaris 10 and GP106 – but for the moment the discrete GPU aspect of Intel’s announcement is focused on high-end GPUs. And, given what we typically see in PC GPU release cycles, even if Intel does develop a complete product stack, I wouldn’t be too surprised if Intel’s first released GPU was a high-end GPU, as it’s clear this is where Intel needs to start first to best combat NVIDIA.

More broadly speaking, this is an interesting shift in direction for Intel, and one that arguably indicates that Intel’s iGPU-exclusive efforts in the GPU space were not the right move. For the longest time, Intel played very conservatively with its iGPUs, maxing out with the decidedly low-end GT2 configuration. More recently, starting with the Haswell generation in 2013, Intel introduced more powerful GT3 and GT4 configurations. However this was done primarily at the behest of a single customer – Apple – and even to this day, we see very little adoption of Intel’s higher-performance graphics options by other PC OEMs. The end result has been that Intel has spent the last decade making the kinds of CPUs that their cost-conscious customers want, with just a handful of higher-performance graphics variants.

I would happily argue that outside of Apple, most other PC OEMs don’t “get it” with respect to graphics, but at this juncture that’s beside the point. Between Monday’s strongly Apple-flavored Kaby Lake-G SoC announcement and now Intel’s vastly expanded GPU efforts, the company is, at long last, becoming a major player in the high-performance GPU space.

Besides taking on NVIDIA though, this is going to put perpetual underdog AMD into a tough spot. AMD’s edge over Intel for the longest time has been their GPU technology. The Zen CPU core has thankfully reworked that balance in the last year, though AMD still hasn’t quite caught up to Intel on peak CPU performance. The concern here is that the mature PC market has strongly favored duopolies – AMD and Intel for CPUs, AMD and NVIDIA for GPUs – so Intel’s entrance into the discrete GPU space upsets the balance on the latter. And while AMD is without a doubt more experienced than Intel in graphics, Intel has the financial and fabrication resources to fight NVIDIA, something AMD has always lacked. Which isn’t to say that AMD is by any means doomed, but Intel’s growing GPU efforts and Raja’s move to Intel have definitely made AMD’s job harder.

Meanwhile, on the technical side of matters, the big question going forward with Intel’s efforts is which GPU architecture Intel will use to build their discrete GPUs. Despite its low performance targets, Intel’s Gen9.5 graphics architecture is very capable in terms of features. In fact, prior to the launch of AMD’s Vega architecture a couple of months back, it was arguably the most advanced PC GPU architecture, supporting higher-tier graphics features than even NVIDIA’s Pascal architecture. So in terms of features alone, Gen9.5 is already a very decent base to start from.

The catch is whether Gen9.5 and its successors can efficiently scale up to the levels needed for a high-performance GPU. Architectural scalability is in some respects the unsung hero of GPU design: while it’s relatively easy to design a small GPU, it’s a lot harder to design an architecture that can scale up to fill a 400mm²+ die. Which isn’t to say that Gen9.5 can’t, only that we as the public have never seen anything bigger than the GT4 configuration, which is still a relatively small design by GPU standards.

Though perhaps the biggest wildcard here is Intel’s timetable. Nothing about Intel’s announcement says when the company wants to launch these high-end GPUs. If, for example, Intel wants to design a GPU from scratch under Raja, then this would be a 4+ year effort and we’d easily be talking about the first such GPU in 2022. On the other hand, if this has been an ongoing internal project that started well before Raja came on board, then Intel could be a lot closer. Given what kind of progress NVIDIA has made in just the last couple of years, I can only imagine that Intel wants to move quickly, and what this may boil down to is a tiered strategy where Intel takes both routes, if only to release a big Gen9.5(ish) GPU soon to buy time for a new architecture later.

In directing these efforts, Raja Koduri has in turn taken on a very big role at Intel. Until recently, Intel’s graphics lead was Tom Piazza, a Senior Fellow and capable architect, but also an individual who was never all that public outside of Intel. By contrast, Raja will be a much more public figure, thanks to the combination of Intel’s expanded GPU efforts, Raja’s SVP role, and the new Core and Visual Computing group that has been created just for him.

For what Intel is seeking to do, it’s clear why they picked Raja, given his experience inside and outside of AMD, and more specifically, with integrated graphics at both AMD and Apple. The flip side to that however is that while Apple’s graphics portfolio boomed under Raja during his time at the company, his most recent AMD stint didn’t go quite as well. AMD’s Vega GPU architecture has yet to live up to all of its promises, and while success and failure at this level is never the responsibility of a single individual, Intel will certainly be looking to have a better launch than Vega. Which, given the company’s immense resources, is definitely something they can do.

But at the end of the day, this is just the first step for Intel and for Raja. By hiring an experienced hand like Raja Koduri and by announcing that they are getting into high-end discrete GPUs, Intel is very clearly telegraphing their intent to become a major player in the GPU space. Given Intel’s position as a market leader it’s a logical move, and given their lack of recent discrete GPU experience it’s also an ambitious move. So while this move stands to turn the PC GPU market as we know it on its head, I’m looking forward to seeing just what a GPU-focused Intel can do over the coming years.

Source: Intel

Comments

  • HStewart - Thursday, November 9, 2017 - link

    Basically I think Intel and AMD made a deal and the chief engineer was part of the deal. This happened to me in a different way.

    This story sounds more real now than the original story about Intel using an AMD GPU - why would they do so when they have the ability to do it on their own - only needing help from a good engineer.

    Now one thing that is still interesting is the possible Apple connection. Possibly Apple has found it better to work with Intel than AMD and is part of the push for this to happen - but who knows
  • euskalzabe - Wednesday, November 8, 2017 - link

    WHOAAA... THIS. IS. INSANE. I didn't see this coming.
  • Pinn - Wednesday, November 8, 2017 - link

    I may have bought the Andy Grove Port intel card and returned it. Only thing good about current intel graphics is decent Linux support. Intel is desperate. I may still get the XPoint card as my intel 750 1.2TB card is being a champ and doesn’t throttle for thermals.
  • Frenetic Pony - Wednesday, November 8, 2017 - link

    Hires the guy that just failed to do exactly that at AMD to do it for them. Now there's 2 possibilities:

    A. He was drastically underfunded as he implied and AMD's financial situation is at fault.
    B. The failure of Vega to upend, or even fully compete in, the market is his fault and Intel just made a bad hiring decision.

    It'll be interesting to see which, but a win for us consumers either way. 3 way GPU fight, 3 way GPU fight! Go go competition
  • Yojimbo - Wednesday, November 8, 2017 - link

    The answer is pretty obviously A.

    Koduri was AMD's CTO of graphics before 2009, though, so he may be responsible for some of the problems that beset AMD.
  • Friendly0Fire - Thursday, November 9, 2017 - link

    AMD made some good moves in the graphics space during that era though. GCN was a good arch early on, sometimes more power efficient than Nvidia's, and they managed to get their proprietary API basically enshrined into the standards as D3D12/Vulkan (which share a lot in common with Mantle).

    It's only the last few years that have been quite rough, and there's no telling if that's just underfunding. GCN's long in the tooth and should probably have been replaced by now, so that's my guess.
  • Yojimbo - Thursday, November 9, 2017 - link

    When GCN first came out, it was more power efficient than the Fermi architecture because it was newer than the Fermi architecture and because NVIDIA was still getting the ball rolling with the building of their architecture. Fermi was very inefficient, but Kepler showed that that inefficiency wasn't something that was inherent in NVIDIA's overarching design. GCN GPUs, however, have shown over the years that they consistently lag behind NVIDIA's GPUs in throughput efficiency, i.e. the percentage of execution units they can keep busy, as well as showing that they can't maintain as high of clock rates under load as NVIDIA's GPUs. I think the consistency of these situations and the fact that AMD benefits from DX12 as much as it does perhaps shows some issue related to the basic architecture of GCN. They fell behind pretty early, when Kepler was released, before the effects of AMD's reduced R&D spending should have caught up with them. I don't know if the memory bandwidth issues GCN GPUs have had are a result of separate failures or if they are also related to basic GCN architectural decisions.

    GCN definitely should have been replaced by now. Some of the issues have been slowly improved over the years, perhaps evidenced by the smaller performance advantage DX12 implementations have over DX11 implementations with Polaris and Vega compared to AMD's earlier architectures. But I think that instead of allocating resources towards doing the work necessary to resolve the issues, AMD have put their resources towards their CPUs and have let their GPUs rot a bit.
  • BOBOSTRUMF - Wednesday, November 8, 2017 - link

    Timeline, my interpretation:

    Apple to Intel : " We want better Graphics in smaller package or we cancel all our deals and use ARM Cpu's in our MacBooks"
    Intel to Apple: "We can't do it, sorry, not smart enough in Graphics "
    Apple to Intel: "We have someone inside AMD, worked for us, we can trust him, he will deliver you the graphics, but you lower the prices !"
    Intel to Apple "Interesting, let us think a strategy"
    Intel to AMD "We want to licence your Graphics technology, we pay good money !"
    AMD's Lisa Su "No !"
    Intel executives "Only for Apple products, already our market, we will not compete on PC market, we PROMISE "
    AMD's Raja Koduri to Lisa Su " We must accept, Intel will give up making graphics, and we win money from licensing. WIN-WIN for us Lisa "
    Lisa Su "Alright Raja, I trust your judgement "
    Intel's Shareholders to Intel's executives "How much we pay for licence!"
    Intel executives "80-100$, a good deal"
    Intel's Shareholders "Nooo! Apple already gives us too little for our CPU's, greedy bastards. And we want graphic cards of our own, with Intel's logo not AMD's. Can you hire the mole?"
    Intel's executives "We have 10 lawyers to look over his contract, yes he can join another tech company if he resigns"
    Intel's Shareholders "Good, licence the technology but don't mention anything about any Apple's exclusivity. Use 50 lawyers if needed. Then hire the mole!"
  • webdoctors - Wednesday, November 8, 2017 - link

    For some reason, every time you mentioned the mole I kept thinking of that scene from Austin Powers: https://www.youtube.com/watch?v=QEExYuRelbg
  • euskalzabe - Wednesday, November 8, 2017 - link

    LOL that was great :D
