Comments Locked

22 Comments


  • blanarahul - Monday, January 6, 2014 - link

    How does it compare with Tegra K1's GPU?
  • Ryan Smith - Monday, January 6, 2014 - link

    Without clockspeeds it's extremely hard to say. From a high level a 6 cluster Series6XT configuration offers the same number of FLOPS per clock, but this doesn't take into account real world efficiency either.

    We're going to have to wait and see what the individual implementations are like.
  • Jon Tseng - Monday, January 6, 2014 - link

    Hmmm but if IPC is comparable and K1 is available 2014 vs. 6XT in 2015 does that mean NVIDIA has an advantage? Or is this too simplistic?
  • Mondozai - Monday, January 6, 2014 - link

    Well, the K1 has support for OpenGL 4.4, while this one's stuck at 3.0. And that's keeping in mind that it will not be out until 2015 at the earliest.
  • alexvoica - Friday, January 10, 2014 - link

    While peak FP32 performance is similar for Series6 and Series6XT, we've made a number of performance optimizations and added some clever architectural improvements (turning parts of the design - including clusters - on/off, better resource utilization, an updated rasterizer, improved GPU compute paths, etc.) which offer better low-power performance. This is essential for mobile; matching user experience to theoretical peak performance works reasonably well for desktop PCs and consoles, but mobile designs are a whole different affair, and any mobile design should be judged on performance efficiency (getting the best performance possible in a limited power envelope).

    Regards,
    Alex.
  • iwod - Monday, January 6, 2014 - link

    I am going to say K1. Very impressive indeed to have that fitted into mobile. And "if" Series6XT is only coming out in 2015, then it will be going against Maxwell; judging from Nvidia's past record, I would bet Maxwell will be super impressive.

    And since Nvidia is now licensing their Kepler design as well, I wonder if anyone will be willing to take it. It would be a very powerful thing to have the same GPU in PC and mobile.

    But performance is only one thing; power efficiency at low usage levels is also important, and we have yet to see how Kepler does in that regard.

    Although according to rumors Apple is doing their own GPU, and since GPU and CPU functions are merging, it is likely Series6XT may never see its day in an Apple product.
  • djgandy - Wednesday, January 8, 2014 - link

    Do you people never learn that Nvidia just talks crap? Funny, 6 months ago Nvidia was saying "There is no point in OpenGL ES 3.0 on mobile". Now you are claiming they have GL 4.4, which also implies they think FP64 is needed for mobile?
  • juvhelp - Monday, January 6, 2014 - link

    ASTC is also available in (already shipping) ARM Mali devices (i.e. the Exynos-powered version of the Note 3).
  • bengildenstein - Monday, January 6, 2014 - link

    ASTC is one of the most exciting things that has happened to Android, and it's great that seemingly all major players (except Nvidia) will be supporting the standard in their upcoming GPUs. This, along with a more feature-rich and more strictly defined OpenGL ES 3.0, should make cross-platform development MUCH easier when targeting the Android platform (no need to fork art assets).

    Hopefully Android Runtime (i.e. ART) integration also improves the performance of Android's native Java code (it compiles the bytecode at install time to a native executable), which would further ensure write-once run-anywhere across the platform. Together with Renderscript, there is a really comprehensive architecture-agnostic framework for development. This is great news for Android game developers.

    And this is good news. We should be seeing more console/PC ports to the platform as time goes forward.
  • dragonsqrrl - Monday, January 6, 2014 - link

    Nvidia announced support for ASTC with Tegra K1.
  • Krysto - Monday, January 6, 2014 - link

    So the next iPad will need GX6650 to be competitive with Tegra K1, otherwise fail. It's nice to see both them and Nvidia are adopting ASTC.
  • name99 - Monday, January 6, 2014 - link

    It is highly likely that the next Apple GPU will be Apple custom, following the pattern they have established with CPUs.
    They appear to have had the necessary design skills internally for a while, and going custom allows them to optimize for what they care about without being slowed by someone else's schedule. It also allows them to start tying together their various bits and pieces (cores, L3, GPU, memory controller) using a fast ring like Intel's, rather than being limited to whatever standard ARM bus a 3rd party GPU supports.

    What's interesting (for Apple) here is the various "eco-system standard" pieces, like ASTC and geometry compression, which Apple will presumably include in their device. But not the precise device itself.
  • stingerman - Monday, January 6, 2014 - link

    A strong argument can be made that Tegra K1 will need iOS, otherwise it will fail. Unfortunately for Nvidia, Apple is a large shareholder in Imagination, which is repeatedly ahead of the mobile GPU curve.
  • Bawl - Monday, January 6, 2014 - link

    I don't know if Apple would be able to do that, but GX6650 seems to be a much better fit than G6630 for their next A8. Apple certainly loves all those new battery optimisations. And without them, G6630 seems to be quite battery hungry.

    Once again, thanks for this analysis. I love all your GPU articles so much, especially when it's about Imagination GPUs (big fan since the Dreamcast).
  • alexvoica - Friday, January 10, 2014 - link

    G6630 is still a pretty efficient design. If you read the original press release/blog article we published at launch, it mentions a number of key features: PowerGearing, PVRIC (1st gen image compression), PVRGC and PVRTC1/2.

    GX6650 upgrades that feature set to PowerGearing GXT, PVRIC2 (2nd gen image compression), PVRGC and ASTC+PVRTC1/2, together with the overall performance boost.

    Best regards,
    Alex.
  • da_asmodai - Monday, January 6, 2014 - link

    Does Anandtech just not review Snapdragon all of a sudden? The already-announced Snapdragon 805's Adreno 420 GPU supports ASTC, so how is "Series6XT is the first design we’ve seen with support for ASTC"? Both Imagination Technologies and Nvidia are playing catch-up on spec sheets here with Qualcomm, and who knows who will actually have products on store shelves for consumers first.
  • stingerman - Monday, January 6, 2014 - link

    Qualcomm makes GPUs? ;)
  • ssiu - Monday, January 6, 2014 - link

    It takes 2 years for a "refresh"? Is there any performance claim from the company compared to the similar Series6 parts? (x % faster at same power / y % lower power at same performance etc.) It seems underwhelming for something 2 years newer ...
  • alexvoica - Friday, January 10, 2014 - link

    I've mentioned in the corresponding blog article that:
    a) Series6XT GPUs are up to 50% faster compared to their Series6 counterparts, clock for clock, cluster for cluster
    b) these performance claims are based on well known graphics benchmarks

    Regards,
    Alex.
  • name99 - Monday, January 6, 2014 - link

    "Imagination’s press release doesn’t make it clear where PVRGC in particular is implemented, but from the description it sounds like this is an in-flight geometry compression technology, intended to reduce the amount of bandwidth needed to shuffle geometry within the GPU and between the GPU and its external RAM."

    I have to wonder if this will be externally usable.
    While Apple Maps 3D mode is pretty awesome (in cities where it is supported --- yes yes, we all know Apple Maps sux in your particular city, blah blah), its most obvious visual flaw is the coarse-grained geometry. It's especially obvious in the rendition of trees, which look, let's say, *OK*, but not much better than that.
    It's not clear to me if the bottleneck here is the amount of geometry Apple wants to send over the network (especially since at least some people are going to be using this with metered cellphone connections) or if the problem is the amount of geometry a low-end iOS device can support.

    Either way, I think geometry compression (especially if Apple can use it to pump out better geometry over the cell network) is actually a nice step forward, of interest to more than just the usual gaming crowd.
  • MrPoletski - Tuesday, January 7, 2014 - link

    The geometry compression is blatantly there to compress parameter information in the display list while the scene is being rendered.

    It's a tile-based renderer, remember, and captures the whole scene into a geometry parameter buffer before starting to draw it.
  • iwod - Tuesday, January 7, 2014 - link

    Given how no one expected 64-bit ARM to be in a shipping product so fast, it could be another case where Apple has an A8 with a PowerVR Series6XT GX shipping this summer.
