Machine Learning Inference Performance

AIMark 3

AIMark makes use of various vendor SDKs to implement its benchmarks. This means the end results aren't a proper apples-to-apples comparison; however, it represents an approach that will actually be used by some vendors in their in-house applications, or even the occasional third-party app.

[Charts: 鲁大师 / Master Lu - AIMark 3 - InceptionV3, ResNet34, MobileNet-SSD, DeepLabV3]

In AIMark 3, the benchmark uses each vendor's proprietary SDK in order to accelerate the NN workloads most optimally. For Qualcomm's devices, this means the benchmark is seemingly also able to take advantage of the new Tensor cores. Here, the performance improvements of the new Snapdragon 865 are outstanding, posting 2-3x the performance of its predecessor.

AIBenchmark 3

AIBenchmark takes a different approach to benchmarking. Here the test uses the hardware-agnostic NNAPI to accelerate inferencing, meaning it doesn't use any proprietary aspects of a given piece of hardware beyond the drivers that enable the abstraction between software and hardware. This approach is more apples-to-apples across Android devices, but it also means we can't do cross-platform comparisons, such as testing iPhones.

We’re publishing one-shot inference times. The difference versus sustained inference times is that these figures include more timing overhead on the part of the software stack, from initialising the test to actually executing the computation.
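The gap between one-shot and sustained figures comes down to that software-stack overhead. A minimal pure-Python sketch of the distinction (the `init_fn` and `run_fn` callables are toy stand-ins for a real model load and inference, not any benchmark's actual API):

```python
import time

def profile_inference(init_fn, run_fn, warm_runs=50):
    # One-shot: initialisation (model load, delegate compilation) plus
    # the first inference, which carries the software-stack overhead.
    t0 = time.perf_counter()
    model = init_fn()
    run_fn(model)
    one_shot = time.perf_counter() - t0

    # Sustained: steady-state average after discarding warm-up runs.
    for _ in range(5):
        run_fn(model)
    t1 = time.perf_counter()
    for _ in range(warm_runs):
        run_fn(model)
    sustained = (time.perf_counter() - t1) / warm_runs
    return one_shot, sustained

# Toy stand-ins for a real NN stack (hypothetical workload):
one_shot, sustained = profile_inference(
    init_fn=lambda: [[i * 0.5 for i in range(64)] for _ in range(64)],
    run_fn=lambda m: sum(sum(row) for row in m),
)
print(f"one-shot {one_shot:.6f}s vs sustained {sustained:.6f}s")
```

The one-shot figure will always read higher than the sustained average, since it folds the initialisation cost into a single measurement.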

AIBenchmark 3 - NNAPI CPU

We’re segregating the AIBenchmark scores by execution block, starting off with the regular CPU workloads that simply use TensorFlow libraries and do not attempt to run on specialized hardware blocks.

[Charts: AIBenchmark 3 - 1 - The Life - CPU/FP; 2 - Zoo - CPU/FP; 3 - Pioneers - CPU/INT; 4 - Let's Play - CPU/FP; 7 - Ms. Universe - CPU/FP; 7 - Ms. Universe - CPU/INT; 8 - Blur iT! - CPU/FP]

Starting off with the CPU-accelerated benchmarks, we’re seeing some large improvements from the Snapdragon 865. It’s particularly the FP workloads that see big performance increases, and these improvements are likely linked to the microarchitectural gains of the Cortex-A77.

AIBenchmark 3 - NNAPI INT8

[Charts: AIBenchmark 3 - 1 - The Life - INT8; 2 - Zoo - INT8; 3 - Pioneers - INT8; 5 - Masterpiece - INT8; 6 - Cartoons - INT8]

INT8 workload acceleration in AIBenchmark happens on the HVX cores of the DSP rather than on the Tensor cores, which the benchmark currently doesn’t support. The performance increases here are roughly in line with what we’d expect from the iterative clock frequency increases of the IP block.
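For context on what the INT8 runs are exercising: quantized networks store tensors as 8-bit integers plus a scale and zero-point, trading precision for throughput on fixed-point blocks like the HVX vector cores. A minimal pure-Python sketch of that affine quantization scheme (an illustration, not any vendor's actual implementation):

```python
def quantize_int8(values):
    """Affine (asymmetric) 8-bit quantization: real = scale * (q - zero_point)."""
    lo = min(min(values), 0.0)            # range must include zero so that
    hi = max(max(values), 0.0)            # real 0.0 maps to an exact integer
    scale = (hi - lo) / 255.0 or 1.0      # guard against an all-zero tensor
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (v - zero_point) for v in q]

weights = [-1.2, 0.0, 0.7, 2.5]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# Each restored value lands within half a quantization step of the original.
print(q, [round(v, 4) for v in restored])
```

The precision loss is bounded by half of `scale` per element, which is why well-calibrated INT8 models typically lose only a fraction of a percent of accuracy.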

AIBenchmark 3 - NNAPI FP16

[Charts: AIBenchmark 3 - 1 - The Life - FP16; 2 - Zoo - FP16; 3 - Pioneers - FP16; 5 - Masterpiece - FP16; 6 - Cartoons - FP16; 9 - Berlin Driving - FP16; 10 - WESPE-dn - FP16]

FP16 acceleration on the Snapdragon 865 through NNAPI is likely facilitated by the GPU, and we’re seeing iterative improvements in the scores. Huawei’s Mate 30 Pro leads in the vast majority of the tests, as it’s able to make use of its NPU, which supports FP16 acceleration; its performance here is quite significantly ahead of the Qualcomm chipsets.
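As a sense of what FP16 execution gives up relative to FP32: half precision keeps an 11-bit significand (roughly three decimal digits) against FP32's 24 bits, which is usually ample for inference. A small illustration using Python's `struct` support for the IEEE 754 half-precision format:

```python
import struct

def to_fp16(x):
    """Round a float to IEEE 754 half precision (struct format 'e') and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

x16 = to_fp16(0.1)
print(x16)                  # 0.0999755859375: ~2.4e-05 of rounding error
print(to_fp16(1.0) == 1.0)  # True: powers of two and small integers are exact
```

Activations and weights in trained networks tolerate this rounding well, which is why GPU FP16 paths deliver near-FP32 accuracy at roughly double the throughput.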

AIBenchmark 3 - NNAPI FP32

[Chart: AIBenchmark 3 - 10 - WESPE-dn - FP32]

Finally, the FP32 test should be accelerated by the GPU. Oddly enough, the QRD865 doesn’t fare as well here as some of the best S855 devices. It should be noted that today’s results were based on an early software stack for the S865 – it’s possible, and even very likely, that things will improve over the coming months and that results will differ on commercial devices.

Overall, AI benchmarks today again present us with a conundrum: the tests need to be continuously developed in order to properly support the hardware. AIBenchmark currently doesn’t make use of the Snapdragon 865’s Tensor cores, so it’s not able to showcase one of the chipset’s biggest areas of improvement. In that sense, the benchmarks don’t really mean very much, and the true power of the chipset will only be exhibited by first-party applications, such as the camera apps of upcoming Snapdragon 865 devices.

Comments

  • s.yu - Tuesday, December 17, 2019 - link

    There are countless shallow and useless arguments to be made from your standpoint, for example you could argue that turning system animations off "slows down" "real world experience", because without the animations filling in for the latency, "the average joe and jane" perceive "real world" lags/stutters which in reality take less time than playing the animation takes, i.e. is faster, not to mention a decrease to the load on the GPU.
  • Sam6536 - Monday, December 16, 2019 - link

    Where are rog phone 2 benchmarks?
    Not taking the most powerful android phone into consideration in this test isn't fair
  • joms_us - Tuesday, December 17, 2019 - link

    How the hell is the Apple A9 faster than Ryzen or Skylake if the A13 is pathetically slower in this comparison, and not even close to double the performance as shown in SPEC?

    Makes me wonder if somebody is drinking Kool-Aid here.
  • diehardmacfan - Tuesday, December 17, 2019 - link

    ahhh yes, poo-poo an industry standard benchmark like SPEC for SoC benchmarking in an article about an SoC, then link to a device performance test developed by AndroidAuthority.

    Andrei your patience with idiots is astounding.
  • Nicon0s - Tuesday, December 17, 2019 - link

    @diehardmacfan What exactly is wrong with Speed Test GX 2.0? And it wasn't developed by Android Authority.
    The SD 865 completed a bunch of real-world CPU-related tasks faster than the A13. That makes this "industry standard benchmark like SPEC" quite irrelevant for somebody interested in buying a smartphone, because in actual usage the A13 doesn't present any real performance advantage.
    Also, in the GPU test the SD 865 was only slightly behind, even though it pushed more pixels.

    If I were only interested in buying a smartphone in order to run SPEC and the GFXBench Aztec Ruins off-screen benchmark all day long, then the iPhone 11 would be my number one pick.

    For anything other than that I don't see any real and tangible performance advantage.
    This AnandTech performance analysis seems disconnected from the real-world experience of using such high-end devices. Android sites do a better job analyzing the experience and significance of these mobile SoCs' performance and what it actually means for smartphone users. For example, XDA has a really nice benchmark that tests the overall fluidity of certain smartphones, exercising both OS optimizations and SoC performance.
  • joms_us - Tuesday, December 17, 2019 - link

    Excellent point, I am sick and tired of this propaganda to uplift an Apple product just because it shines in one or two primitive and biased benchmarking tools when thousands of other apps say otherwise.
  • s.yu - Tuesday, December 17, 2019 - link

    May I interest you in some rhino horn powder claimed by thousands of traditional Chinese witch...I mean doctors to enlarge your penis?
  • s.yu - Tuesday, December 17, 2019 - link

    In short: poor validity and poor reliability. There's nothing particularly useful about that test.
    It generates mixed, or rather obfuscated, scores correlating to an unknown extent with UI design choices, certain drivers, and hardware performance.
    This is borderline metaphysics, and has no place in science.
  • cha0z_ - Friday, December 27, 2019 - link

    That test is fun and great, but totally not representative of anything. Taking it seriously is not serious. :)
  • MetaCube - Tuesday, December 17, 2019 - link

    How are you still not banned ?
