2012: Meet Our New Mobile Benchmark Suite

Testing computer hardware can be a difficult process. On the one hand there’s a desire for more information and benchmarks, and on the other there’s a desire for timely reviews. Our goal at AnandTech has always been to deliver the most comprehensive reviews possible, and while we strive for timeliness, additional testing or unanswered questions occasionally delay a review. Ultimately, there’s a balancing act to maintain, and over time we periodically refresh our review suite and testing methodologies.

With 2012 now here, we’re launching a new suite of benchmarks for our laptop reviews. We'll also have the results from our first laptop run through the new tests, courtesy of ASUS' G74SX. Some of the tests have been in use for a while and others are brand new. To provide a single location listing our benchmarks and testing procedures, we have put together this short overview. We plan to use the following test suite throughout 2012, and while we may add some benchmarks along the way, we don’t plan to drop any of the tests below for at least the next year.

General Performance Tests

Starting with our general tests, all of these have been in use for several months at least, with many tests dating back to 2010 and earlier. We’ll continue to use the full PCMark 7 suite, PCMark Vantage (x64), Graysky’s x264 HD encoding test, Cinebench 11.5, 3DMark06, 3DMark Vantage (Entry-Level and Performance defaults), and 3DMark 11. We’ll also continue with our battery life tests (now with Internet Explorer 9 in place of IE8) and LCD tests. So for most areas, our test suite remains largely unchanged—we’re finally dropping Cinebench 10, but that’s about it.
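For anyone curious what a test like Graysky’s x264 HD benchmark actually involves: it times x264 encoding passes on a short video clip. Below is a minimal sketch of that idea in Python; the source clip, bitrate, and output name are illustrative placeholders rather than the benchmark’s actual parameters.

```python
# Minimal sketch: time a two-pass x264 encode, in the spirit of
# Graysky's x264 HD test. The source clip, bitrate, and output name
# are illustrative placeholders, not the benchmark's real parameters.
import subprocess
import time

SOURCE = "test_720p.y4m"  # hypothetical 720p source clip
BITRATE = "4000"          # target bitrate in kbps, illustrative

for pass_num in (1, 2):
    start = time.perf_counter()
    # Pass 1 writes the stats file (x264_2pass.log by default);
    # pass 2 reads it back to allocate bits.
    subprocess.run(
        ["x264", "--pass", str(pass_num), "--bitrate", BITRATE,
         "--output", "out.264", SOURCE],
        check=True,
    )
    print(f"Pass {pass_num}: {time.perf_counter() - start:.1f}s")
```

The real benchmark reports frames per second for each pass rather than raw wall-clock time, but the structure is the same: run each pass, time it, and average multiple runs.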

As we’ll mention in the conclusion, we’re willing to add more general performance benchmarks if there are specific requests. One of the difficult things to quantify with modern PCs is how fast they are at the things most people do on a regular basis. Part of the problem is that most PCs from the past three or four years are all “fast enough” for generic tasks like surfing the web; if you’re actually reading the content of web pages rather than just repeatedly loading a complex page, I’m not sure most users would notice the difference between a 2GHz Core 2 Duo or Athlon X2 laptop and a quad-core i7-2760QM. This is why battery life is such an important element: many wouldn’t notice the difference between a web page loading in two seconds and one loading in one second, but they’re far more likely to notice two hours of battery life versus four or eight. Anyway, let us know if you have other mobile benchmarks you’d like us to consider.
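On the subject of battery life testing: conceptually, a web-browsing rundown amounts to cycling through pages at a fixed brightness until the battery gives out, then recording the elapsed time. The sketch below shows that loop shape in Python, assuming the third-party psutil package; the page list, load interval, and cutoff are stand-ins, not our actual harness.

```python
# Rough sketch of a web-browsing battery rundown loop. The page list,
# interval, and cutoff are stand-ins; our actual harness differs.
import time
import webbrowser

import psutil  # third-party: pip install psutil

PAGES = [
    "https://www.anandtech.com",  # illustrative page rotation
    "https://www.example.com",
]
INTERVAL = 60  # seconds between page loads
CUTOFF = 5     # stop near-empty instead of risking a hard shutdown

start = time.time()
loads = 0
while True:
    batt = psutil.sensors_battery()
    if batt is None or batt.power_plugged:
        raise RuntimeError("Run this unplugged, on battery power")
    if batt.percent <= CUTOFF:
        break
    webbrowser.open(PAGES[loads % len(PAGES)])
    loads += 1
    time.sleep(INTERVAL)

hours = (time.time() - start) / 3600
print(f"{loads} page loads; {hours:.2f} hours to reach {CUTOFF}%")
```

Stopping at a low cutoff rather than letting the OS hibernate mid-test makes runs easier to log consistently.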

With that out of the way, we’ll save the next page for the major changes: our updated gaming suite.

Comments

  • JarredWalton - Saturday, January 7, 2012 - link

    I don't know anyone who uses PowerDirector 10, so I'd be curious how it's viewed (note: I'm not a video editor by any stretch). WinZip on the other hand is a far more interesting option; I'll keep an eye out for that one. :-)
  • Ryan Smith - Saturday, January 7, 2012 - link

    I'd note that we dropped our video encoding benchmark from GPU Bench midway through last year, because GPU-accelerated video encoding was actually CPU limited. Performance quickly plateaued at GTX 460/Radeon 5750 levels, as beyond that point the GPUs outran the CPU.
  • QChronoD - Friday, January 6, 2012 - link

    Would it be possible to add the screen size to the specs listed for each system in Bench? It's kinda silly for that to be missing, since it's one of the primary criteria people use to narrow down models.
  • ArKritz - Friday, January 6, 2012 - link

    Wouldn't it make more sense to just use medium presets for the medium benchmark, high for high, ultra (or very high) for ultra and just drop the "low" benchmarks altogether?
  • JarredWalton - Saturday, January 7, 2012 - link

    Basically, we've done what you suggested, only we call the settings Low, Medium, and High rather than Medium, High, and Ultra. It just seems weird to call test settings Medium/High/Ultra--or take it to another level and test at High/Very High/Extreme--when we can call the settings Low/Med/High. It's just semantics. Anyway, the settings were selected with two goals:

    1) Get reasonable quality for the target res/setting (Min/Low is often insufficient)
    2) Make sure there's a difference between the detail settings (this is why we don't test DiRT 3 at Ultra and Ultra + 4xAA for instance).

    In several games, the difference between many of the settings is negligible, both in terms of quality and in terms of performance. We don't feel there's much point in testing at settings where games run at 100+ FPS unless there's no other option (e.g. Portal 2), and likewise we didn't want results that were basically the same quality at a different resolution (unless we couldn't find a better option). Batman is another example of this, as 1366x768 at Very High settings is only slightly slower than 1366x768 at Low settings. Anyway, the main thing was to document exactly how we plan to test in one location, so that I can just link it in future reviews.
  • Gast - Saturday, January 7, 2012 - link

    Have you looked at changing the names to provide some sort of meaning other than just the level of the test? Something along the lines of "Playable, High Quality, Max Quality".

    Changing the names from Medium, High, and Ultra will be jarring for me. When skimming I will see "Low" and think of the minimum settings needed to run the game, which is different from the "playable" or "medium" settings you are presenting.

    While I can learn to adjust to this change, irregular AT readers might not, and they could walk away with the wrong impression of what the test represents.
  • PolarisOrbit - Saturday, January 7, 2012 - link

    I agree that when you use the terms "low / medium / high" there is an implication that you may be referring to the in-game settings rather than your interpretation of the different settings that are worth benchmarking. A careless reader may not notice the difference.

    To me, it makes sense to compare a cheap laptop to the cheap level and an expensive laptop to the expensive level (obviously I mean an expensive gaming laptop, since this is a gaming benchmark). So I would suggest dividing it by market segment like so:
    low -> value settings
    medium -> mainstream settings
    high -> performance settings
  • JarredWalton - Saturday, January 7, 2012 - link

    Our charts will continue to list the game settings we use for testing, plus I intend to link back to this article on the gaming section so that new readers can understand exactly what we're testing. We could also call the settings "Value/Mainstream/Performance" or something, but all that really says to me is "they are using custom settings so make sure you know what is being tested". Which isn't necessarily a bad thing.

    I think at some point I need to go through the games and capture screenshots at our Low/Med/High settings to give a better indication of what the various settings mean -- and maybe add a "minimum" screenshot as well to show why we're skipping that in most titles. That probably won't happen until post-CES though.
  • Gast - Saturday, January 7, 2012 - link

    "they are using custom settings so make sure you know what is being tested"

    That's basically what I'm pushing for. It would be ok if your medium test was very similar to the medium setting, but since almost all of your tests have a naming conflict with in-game settings (low test = medium settings), I would find it helpful to call them something different.
  • JarredWalton - Saturday, January 7, 2012 - link

    Okay, I've gone ahead and renamed the benchmarks to Value/Mainstream/Enthusiast, as I can see where confusion might otherwise result. Hopefully I caught all the references, but if not I'm sure someone will set me straight. :-)
