Google I/O: Qualcomm Celebrates Launch of Adreno 420 GPU for Android Gaming
by Ryan Smith on June 23, 2014 1:00 PM EST
With Google’s annual I/O developers’ conference taking place this week, this should be a busy week for Android news. The conference itself doesn’t officially start until Wednesday the 25th this year, but several partners are already champing at the bit to get going. First among these will be Qualcomm, who will be using the backdrop of the I/O conference to celebrate the launch of their latest high-performance SoC-class GPU, the Adreno 420.
Adreno 420 is the first member of Qualcomm’s next-generation SoC GPU family to make it out the door and into a finished product, bringing with it OpenGL ES 3.1 and Direct3D 11 functionality. First announced last year as part of the Snapdragon 805 SoC, Snapdragon 805 and by extension Adreno 420 is now shipping in Samsung’s just-announced Galaxy S5 Broadband LTE-A. The S5B marks the first Adreno 420 product to reach consumer hands, and insofar as an SoC can have a formal launch, this would be it, with Qualcomm timing its launch celebration to coincide with this year’s Google I/O conference.
For Qualcomm, the Adreno 420 is an especially big deal since it is the first GPU to ship based on the Adreno 400 architecture. The Adreno 400 architecture marks a significant advancement in the feature set of Qualcomm’s GPUs, bringing Qualcomm’s latest architecture generally up to par with existing desktop GPUs by integrating full Direct3D feature level 11_2 functionality alongside the more mobile-focused OpenGL ES 3.1 feature set. By doing so Qualcomm has reached feature parity with desktop GPUs (for the time being), even slightly exceeding NVIDIA’s GPUs by Direct3D standards due to being an FL 11_2 architecture versus NVIDIA’s FL 11_0 architecture.
As we already covered Adreno 420 in some depth last month, we won’t spend too much time going over it now, though admittedly this is also partially because Qualcomm is remaining tight-lipped about the Adreno 400 architecture beyond a high-level feature standpoint. In brief, the Adreno 400 architecture (and Adreno 420) is a full Direct3D FL 11_2 implementation, utilizing a unified shader architecture along with the appropriate feature additions. New to the Adreno 400 architecture is support for tessellation, including the necessary hull and domain shader stages, and as a Direct3D 11 product it also includes other Direct3D 11 features such as compute shaders and draw indirect support.
Meanwhile on the OpenGL ES side of matters, while ES 3.1 is not as expansive as Direct3D 11, this nonetheless means that the Adreno 400 architecture brings with it ES 3.1 functionality along with a number of its Direct3D-derived features as extensions. Mobile developers will also be happy to hear that this is the first Qualcomm product to support Adaptive Scalable Texture Compression (ASTC), the next-generation OpenGL texture compression technology that should further improve, unify, and simplify the use of compressed textures on mobile platforms.
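To give a concrete sense of what ASTC’s “scalable” part means: every ASTC block occupies 128 bits (16 bytes) regardless of its footprint, so choosing a larger block footprint (from 4×4 up to 12×12 texels) directly lowers the bit rate at the cost of quality. The short sketch below is purely illustrative arithmetic (not Qualcomm code or any GPU API), computing compressed sizes for a 1024×1024 texture at a few standard footprints:

```python
import math

ASTC_BLOCK_BYTES = 16  # every ASTC block is 128 bits, regardless of footprint


def astc_size(width, height, block_w, block_h):
    """Compressed size in bytes of a width x height texture
    using an ASTC block footprint of block_w x block_h texels."""
    blocks_x = math.ceil(width / block_w)
    blocks_y = math.ceil(height / block_h)
    return blocks_x * blocks_y * ASTC_BLOCK_BYTES


# A 1024x1024 RGBA8 texture is 4 MiB uncompressed (32 bits/texel).
for bw, bh in [(4, 4), (6, 6), (8, 8), (12, 12)]:
    size = astc_size(1024, 1024, bw, bh)
    bpp = size * 8 / (1024 * 1024)
    print(f"{bw}x{bh}: {size} bytes ({bpp:.2f} bits/texel)")
```

At 4×4 blocks this works out to 8 bits/texel (comparable to ETC2 RGBA), while 12×12 blocks drop below 1 bit/texel, which is the range of quality/size trade-offs a single format now covers.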
For Google I/O we are expecting Qualcomm to heavily promote the Android gaming possibilities of the Adreno 420 and the Snapdragon 805. The low-power nature of mobile devices and the SoCs that power them means that while Qualcomm can’t match the performance of the larger desktop GPUs, Adreno 420 will be a big step up from the performance offered by the older Adreno 330 GPU. But more importantly for Qualcomm, they can do something that hasn’t been done before by bringing desktop-level features to Android devices.
From our May 2014 Snapdragon 805 Preview
In a sense this will be a repeat of the launch of Direct3D 11 on the desktop, except now with Qualcomm (and eventually other vendors) promoting the advanced features offered by these devices, putting examples and tools in front of developers to entice them to write games for this latest generation of hardware and otherwise put it to good use. Even without being able to match desktop GPUs, there are a number of effects that are made available (or at least more practical) by these new features and can be used effectively on mobile hardware. From a practical perspective, Qualcomm should be able to offer Android developers the base graphics functionality of the current-generation consoles at performance levels similar to the previous-generation consoles.
Comments
ArthurG - Monday, June 23, 2014
Well, TK1 is already in the Xiaomi MiPad (big big design win), is supposed to be in the Nexus 9 with the 64-bit Denver version, and will soon come in the Acer CB5 Chromebook, just to name a few.
p3ngwin1 - Monday, June 23, 2014
so no PHONES like he said then.
Laststop311 - Tuesday, June 24, 2014
Fewer, better-performing cores is better. Bump that single-threaded performance up.
AnandTechUser99 - Monday, June 23, 2014
According to Xiaomi, the MiPad will offer 11 hours of video playback on a 6700mAh battery. However, I would wait for an actual test to see the real-world power consumption.
Like Tegra K1, Snapdragon 805 does not have an integrated baseband.
name99 - Monday, June 23, 2014
Video playback does not engage the GPU. All that proves is that the SoC is capable of doing the most basic "power down unused pieces when it makes sense".
The more relevant sort of benchmark would be something like "how long does the system last playing game X as compared to another device playing game X".
A different (but relevant) sort of benchmark would test battery life under "normal" usage conditions, the idea being to test how efficient the GPU is at handling not demanding tasks but the sort of basic compositing and animation that makes up the UI.
fivefeet8 - Tuesday, June 24, 2014
Video playback does use some of the GPU for rendering and decoding of HD streams. BSPlayer on Android also has a hardware render backend for supported GPUs, which surprisingly increases battery life.
gonchuki - Tuesday, June 24, 2014
Accelerated video decoding is done by fixed-function hardware. It's embedded on the GPU but it's totally independent of the programmable parts of the GPU, which is what actually matters when you really want to compare power and performance.
haardrr - Tuesday, June 24, 2014
A 'mute' point would be to not mention it, as it would be unusable... (I am not sure you can use it in this way, but I think you mean a 'moot' point: a good idea even if the disadvantages outweigh the advantages of using it...)
haardrr - Tuesday, June 24, 2014
a 'moot' point is a... (bad proof-reading)
tuxRoller - Monday, June 23, 2014
At what power usage?