As always, our good friends over at Kishonti managed to get the first GPU performance results for the new 4th generation iPad. Although the new iPad retains its 2048 x 1536 "retina" display, Apple claims a 2x improvement in GPU performance through the A6X SoC. The previous generation chip, the A5X, paired two ARM Cortex A9 cores running at 1GHz with four PowerVR SGX 543 cores running at 250MHz. The SoC also integrated 4 x 32-bit LPDDR2 memory controllers, giving the A5X the widest memory interface of any shipping mobile SoC at the time of launch.

The A6X retains the 128-bit wide memory interface of the A5X (and it keeps the memory controller interface adjacent to the GPU cores and not the CPU cores as is the case in the A5/A6). It also integrates two of Apple's new Swift cores running at up to 1.4GHz (a slight increase from the 1.3GHz cores in the iPhone 5's A6). The big news today is what happens on the GPU side. A quick look at the GLBenchmark results for the new iPad 4 tells us all we need to know. The A6X moves to a newer GPU core: the PowerVR SGX 554.

Mobile SoC GPU Comparison

                       Used In    # of SIMDs   MADs per SIMD   Total MADs
PowerVR SGX 543        -          4            4               16
PowerVR SGX 543MP2     iPad 2     8            4               32
PowerVR SGX 543MP3     iPhone 5   12           4               48
PowerVR SGX 543MP4     iPad 3     16           4               64
PowerVR SGX 554        -          8            4               32
PowerVR SGX 554MP2     -          16           4               64
PowerVR SGX 554MP4     iPad 4     32           4               128
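
The table's MAD math can be reproduced in a couple of lines. The MADs-per-SIMD figure is constant across these designs, so total MAD throughput scales directly with SIMD count (a rough sketch, using only the numbers from the table above):

```python
# Reproduce the "Total MADs" column above: SGX 543/554 SIMDs each issue
# 4 MADs per clock, so total MADs per clock = SIMD count x 4.
simds = {
    "PowerVR SGX 543": 4,
    "PowerVR SGX 543MP2": 8,
    "PowerVR SGX 543MP3": 12,
    "PowerVR SGX 543MP4": 16,
    "PowerVR SGX 554": 8,
    "PowerVR SGX 554MP2": 16,
    "PowerVR SGX 554MP4": 32,
}
MADS_PER_SIMD = 4

total_mads = {gpu: n * MADS_PER_SIMD for gpu, n in simds.items()}
for gpu, mads in total_mads.items():
    print(f"{gpu}: {mads} MADs/clock")

# The iPad 4's 554MP4 has 2x the MADs of the iPad 3's 543MP4 (128 vs. 64),
# which lines up neatly with Apple's 2x GPU performance claim.
```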

As always, Imagination doesn't provide a ton of public information about the 554, but based on what I've seen internally it looks like the main difference between it and the 543 is a doubling of the ALU count per core (8 Vec4 ALUs per core vs. 4). Chipworks' analysis of the GPU cores helps support this: "Each GPU core is sub-divided into 9 sub-cores (2 sets of 4 identical sub-cores plus a central core)."

I believe what we're looking at is the 8 Vec4 SIMDs (each one capable of executing 8+1 FLOPS). The 9th "core" is simply the rest of the GPU, including the tiler front end and render backends. Based on the die shot and Apple's performance claims, it looks like there are four PowerVR SGX 554 cores on-die, putting peak theoretical performance at around 77 GFLOPS.
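
As a sanity check on that figure: 32 SIMDs at 4 MADs each is 128 MADs per clock, and a MAD counts as two floating point operations. The GPU clock below is an assumption (Apple doesn't publish it), chosen to show how the quoted number falls out:

```python
# Back-of-the-envelope peak throughput for the A6X's SGX 554MP4.
total_mads = 4 * 8 * 4   # 4 cores x 8 Vec4 SIMDs/core x 4 MADs/SIMD = 128 MADs/clock
flops_per_mad = 2        # a multiply-add counts as 2 floating point ops
clock_ghz = 0.3          # assumed GPU clock; Apple does not publish this

peak_gflops = total_mads * flops_per_mad * clock_ghz
print(f"~{peak_gflops:.1f} GFLOPS peak")  # ~76.8 GFLOPS, i.e. roughly 77
```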

There's no increase in TMU or ROP count per core; the main change between the 554 and the 543 is the addition of more ALUs. There are some lower level tweaks that help explain the different core layout from previous designs, but nothing major.

With that out of the way, let's get to the early performance results. We'll start with low level fill rate and triangle throughput numbers:

GLBenchmark 2.5 - Fill Test

Fill rate goes up by around 15% compared to the 3rd generation iPad, which isn't enough to indicate a big increase in the number of texture units on the 554MP4 vs. the 543MP4. What we may be seeing here instead are the benefits of higher clocked GPU cores rather than more texture units. If that's the case, it would mean the 554MP4 changes the texture-to-ALU ratio from what it was in the PowerVR SGX 543 (Update: this is confirmed). The data here points to a GPU clock at least 15% higher than the ~250MHz in the 3rd generation iPad.
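
Under the assumption that the per-core TMU count really is unchanged, the implied clock falls straight out of the fill rate delta. Both inputs come from the text above; the result is an estimate, not a confirmed spec:

```python
# Estimate the A6X GPU clock from the ~15% fill rate gain, assuming the
# texture unit count is the same as the 543MP4's.
a5x_clock_mhz = 250      # 3rd gen iPad (A5X) GPU clock
fill_rate_gain = 0.15    # ~15% higher measured fill rate on the iPad 4

implied_clock_mhz = a5x_clock_mhz * (1 + fill_rate_gain)
print(f"implied A6X GPU clock: ~{implied_clock_mhz:.1f} MHz")  # ~287.5 MHz
```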

GLBenchmark 2.5 - Fill Test (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test

Triangle throughput goes up by a hefty 65%, a huge gain over the previous generation iPad.

GLBenchmark 2.5 - Triangle Texture Test (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test - Fragment Lit

The fragment lit triangle test starts showing us close to a doubling of performance at the iPad's native resolution.

GLBenchmark 2.5 - Triangle Texture Test - Fragment Lit (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test - Vertex Lit

GLBenchmark 2.5 - Triangle Texture Test - Vertex Lit (Offscreen 1080p)

GLBenchmark 2.5 - Egypt HD

Throw in a more ALU heavy workload and we really start to see the advantage of the new GPU: almost double the performance in Egypt HD at 2048 x 1536. We also get performance that's well above 30 fps here on the iPad at native resolution for the first time.

GLBenchmark 2.5 - Egypt HD (Offscreen 1080p)

Normalize to the same resolution and we see that the new PowerVR graphics setup is 57% faster than even ARM's Mali-T604 in the Nexus 10. Once again we're seeing just about 2x the performance of the previous generation iPad.

GLBenchmark 2.5 - Egypt Classic

Vsync-bound gaming performance obviously won't improve, but the offscreen classic test gives us an idea of how well the new SoC can handle lighter workloads:

GLBenchmark 2.5 - Egypt Classic (Offscreen 1080p)

For less compute-bound workloads, the new iPad still boasts a 53% performance advantage over the previous generation.

Ultimately it looks like the A6X is the SoC that the iPad needed to really deliver good gaming performance at its native resolution. I would not be surprised to see more game developers default to 2048 x 1536 on the new iPad rather than picking a lower resolution and enabling anti-aliasing. The bar has been set for this generation and we've seen what ARM's latest GPU can do, now the question is whether or not NVIDIA will finally be able to challenge Imagination Technologies when it releases Wayne/Tegra 4 next year.

Comments

  • B3an - Friday, November 2, 2012 - link

    I don't understand why Android and Win RT devices don't use these PowerVR GPUs. They're nothing to do with Apple in any way and are clearly much better than the competition.
  • augiem - Friday, November 2, 2012 - link

    I'm incredulous. Why is it so hard for them to understand this? Sony did it with the PS Vita. Intel and Apple do own small stakes in the company, but the chips are not exclusive. Ugh.
  • andsoitgoes - Saturday, November 10, 2012 - link

    I'm impressed with Sony, they know how to make stuff look DAMN good.

    I'm very intrigued to see what the power this means for games on the iPad going forward. I've seen the vita, and if the iPad can power even a fraction of that, what could that mean for the visuals?

    The biggest, most unbelievably frustrating issue is pricing and controls. The iPad sucks with regards to the controls. Sucks. And the fact that game manufacturers can't put out higher than normal priced games without flopping financially. It's beyond frustrating to think what this device CAN do, but how in turn it has in essence limited itself.

    I'm no game elitist, but a serious game does Infinity Blade not make.
  • Peanutsrevenge - Friday, November 2, 2012 - link

    It's had me wondering as well.
    Reasons I can think of:

    PowerVR refuse to open source their drivers.

    The performance isn't as good on Android due to the Dalvik handicap.

    PowerVR charge a high price.

    I'm not overly clued up on such things, so purely speculating, but the iEtc devices have consistently had a massive GPU advantage, though TBH, I don't think it really matters at the moment as I've not found GPU performance to be a problem on Android, but I don't game on mine.

    Perhaps a topic for this weekend's podcast (which I'm sure (and hope) will be at least 6 hours long) ;) ;) ;)
  • mavere - Friday, November 2, 2012 - link

    I assume that Apple began to focus on GPUs simply as a way to maintain GUI smoothness. Along the way, gaming capability became such a strong marketing tool that the company emphasized graphics performance even more.
  • augiem - Friday, November 2, 2012 - link

    The thing is everyone assumes GPUs are only used for gaming, but they're not. As displays go higher and higher res, a powerful GPU is important for keeping the experience smooth. That's why I find it really silly that Google went with a higher-res display than the iPad and a weaker GPU.
  • Krysto - Friday, November 2, 2012 - link

    But Mali T604 also seems to be a lot more efficient than the A5X and probably the A6X GPUs too.

    Nexus 10 uses a 22% smaller battery, and powers 33% more pixels, and yet it only has 6% lower battery life than the iPad. If you normalize those numbers, then the iPad's GPU is 50% more inefficient (or Mali T604 is 30% more efficient than iPad's GPU).
  • Krysto - Friday, November 2, 2012 - link

    My point is that Google (and ARM) optimized more for performance per watt in GPUs, the same way Apple did with its Swift CPU core (but not in GPUs).
  • mavere - Friday, November 2, 2012 - link

    As our discussion and your conclusion revolve around the GPUs, you cannot use those battery-life figures as evidence because the WiFi tests do not assess GPU efficiency.
  • ltcommanderdata - Friday, November 2, 2012 - link

    Is that based on the web browsing result from Anand's Nexus 10 Preview? Because that tells you very little about GPU efficiency, since the GPU is hardly used. You'd need to run GLBenchmark loops to actually test GPU battery life under load.
