Test Procedures

Our usual SSD test procedure was not designed to handle multi-device tiered storage, so some changes had to be made for this review; as a result, much of the data presented here is not directly comparable to that in our previous reviews. The major changes are:

  • All test configurations were running the latest OS patches and CPU microcode updates for the Spectre and Meltdown vulnerabilities. Regular SSD reviews with post-patch test results will begin later this month.
  • Our synthetic benchmarks are usually run under Linux, but Intel's caching software is Windows-only so the usual fio scripts were adapted to run on Windows. The settings for data transfer sizes and test duration are unchanged, but the difference in storage APIs between operating systems means that the results shown here are lower across the board, especially for the low queue depth random I/O that is the greatest strength of Optane SSDs.
  • We only have equipment to measure the power consumption of one drive at a time. Rather than move that equipment out of the primary SSD testbed and use it to measure either the cache drive or the hard drive, we kept it busy testing drives for future reviews. The SYSmark 2014 SE test results include the usual whole-system energy usage measurements.
  • Optane SSDs and hard drives are not any slower when full than when empty, because they do not have the complicated wear leveling and block erase mechanisms that flash-based SSDs require, nor any equivalent to SLC write caches. The AnandTech Storage Bench (ATSB) trace-based tests in this review omit the usual full-drive test runs. Instead, caching configurations were tested by running each test three times in a row to check for effects of warming up the cache.
  • Our AnandTech Storage Bench "The Destroyer" test takes about 12 hours to run on a good SATA SSD and about 7 hours on the best PCIe SSDs. On a mechanical hard drive, it takes more like 24 hours. Results for The Destroyer will probably not be ready this week. In the meantime, the ATSB Heavy test is sufficiently large to illustrate how SSD caching performs for workloads that do not fit into the cache.
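
The cache-warming effect that the three-run methodology is meant to expose can be illustrated with a toy simulation. Everything here is hypothetical: a simple LRU policy, made-up block counts, and a working set that happens to fit in the cache; Intel's actual caching heuristics are more sophisticated.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Toy LRU block cache, standing in for the cache drive."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()

    def access(self, lba):
        hit = lba in self.blocks
        if hit:
            self.blocks.move_to_end(lba)         # mark most recently used
        else:
            self.blocks[lba] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
        return hit

random.seed(0)
working_set = range(600)                  # hot blocks, fewer than the cache holds
trace = random.choices(working_set, k=2000)

cache = LRUCache(capacity_blocks=800)
rates = []
for run in (1, 2, 3):
    hits = sum(cache.access(lba) for lba in trace)
    rates.append(hits / len(trace))
    print(f"run {run}: {rates[-1]:.0%} hit rate")
```

The first pass misses on every block it has not yet seen; once the working set is resident, later passes run entirely out of the cache, which is exactly the pattern the repeated test runs look for.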

Benchmark Summary

This review analyzes the performance of Optane Memory caching both for boot drives and secondary drives. The Optane Memory modules are also tested as standalone SSDs. The benchmarks in this review fall into three categories:

Application benchmarks: SYSmark 2014 SE

SYSmark directly measures how long applications take to respond to simulated user input. The scores are normalized against a reference system, but otherwise are inversely proportional to the accumulated time between user input and the result showing up on screen. SYSmark measures whole-system performance and energy usage with a broad variety of non-gaming applications. The tests are not particularly storage-intensive, and differences in CPU and RAM can have a much greater impact on scores than storage upgrades.
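
A score of this style can be sketched as the reference system's accumulated response time divided by the measured time, scaled to a calibration baseline. The 1000-point baseline and the function name here are assumptions for illustration, not BAPCo's published formula:

```python
def responsiveness_score(measured_seconds, reference_seconds, baseline=1000):
    """Higher is better: the reference system's accumulated response
    time divided by the measured system's, scaled so the reference
    system itself scores `baseline` (illustrative calibration)."""
    return baseline * reference_seconds / measured_seconds

# A system that accumulates half the reference's response time
# scores twice the baseline.
print(responsiveness_score(measured_seconds=300, reference_seconds=600))  # 2000.0
```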

AnandTech Storage Bench: The Destroyer, Heavy, Light

These three tests are recorded traces of real-world I/O that are replayed onto the storage device under test. This allows the same storage workload to be reproduced consistently and almost completely independently of changes in CPU, RAM or GPU, because none of the computational workload of the original applications is reproduced. The ATSB Light test is similar in scope to SYSmark, while the ATSB Heavy and The Destroyer tests represent much heavier computer usage with a broader range of applications. As a concession to practicality, these traces are replayed with long disk idle times cut short, so that The Destroyer doesn't take a full week to run.
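
The idle-time truncation can be sketched as follows; the trace records, the cap value, and the `replay` helper are all hypothetical stand-ins for the actual ATSB replay tooling:

```python
import time

# Hypothetical trace records: (idle seconds before the I/O, description).
trace = [
    (0.0002, "read 4 KiB @ LBA 1024"),
    (45.0,   "write 8 KiB @ LBA 2048"),  # long user think-time in the recording
    (0.0001, "read 128 KiB @ LBA 4096"),
]

IDLE_CAP = 0.025  # truncate long gaps so replay takes hours, not a week

def replay(trace, idle_cap=IDLE_CAP):
    issued = []
    for idle, op in trace:
        time.sleep(min(idle, idle_cap))  # preserve short gaps, cap long ones
        issued.append(op)                # placeholder for submitting the I/O
    return issued

ops = replay(trace)
```

Short gaps are kept intact so queue-depth behavior is realistic, while multi-second think times collapse to the cap.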

Synthetic Benchmarks: Flexible IO Tester (FIO)

FIO is used to produce and measure artificial storage workloads according to our custom scripts. Poor choice of data sizes, access patterns and test duration can produce results that are either unrealistically flattering to SSDs or are unfairly difficult. Our FIO-based tests are designed specifically for modern consumer SSDs, with an emphasis on queue depths and transfer sizes that are most relevant to client computing workloads. Test durations and preconditioning workloads have been chosen to avoid unrealistically triggering thermal throttling on M.2 SSDs or overflowing SLC write caches.
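
For reference, a minimal fio job file in this spirit might look like the following. The parameter values are illustrative guesses, not our actual scripts, and under Windows the `ioengine` line would name `windowsaio` instead of Linux's `libaio`:

```ini
[global]
# windowsaio when run under Windows
ioengine=libaio
# bypass the OS page cache
direct=1
runtime=60
time_based

[qd1-randread]
rw=randread
bs=4k
# low queue depth: the greatest strength of Optane SSDs
iodepth=1
filename=/dev/nvme0n1
```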

Comments

  • FunBunny2 - Wednesday, May 16, 2018 - link

    One of the distinguishing points, so to speak, of XPoint is its byte-addressable protocol, but I've found nothing about the advantages, or whether (it seems so) the OS has to be (heavily?) modified to support such files. Anyone know?
  • Billy Tallis - Wednesday, May 16, 2018 - link

    The byte-addressability doesn't provide any direct advantages when the memory is put behind a block-oriented storage protocol like NVMe. But it does simplify the internal management the SSD needs to do, because modifying a chunk of data doesn't require re-writing other stuff that isn't changing. NVDIMMs will provide a more direct interface to 3D XPoint, and that's where the OS and applications need to be heavily modified.
  • zodiacfml - Friday, May 18, 2018 - link

    Quite impressive, but for the price of a 32 GB Optane drive I can have a 250 GB SSD.

    Optane might improve performance by fractions of a second over SSDs for applications, but it won't help during program/driver installations or Windows updates, which need more speed.

    I'd reconsider a 64 GB Optane as a boot drive at the current price of the 32 GB.
  • RagnarAntonisen - Sunday, May 20, 2018 - link

    You've got to feel for Intel. They spend a tonne of cash on projects like Larrabee, Itanium and Optane, and the market and tech reviewers mostly respond with a shrug.

    And then everyone complains they're being complacent when it comes to CPU design. Mind you, they clearly were: CPU performance increased at a glacial rate until AMD released a competitive product, and then there was a big jump from 4 cores to 6 in mainstream CPUs with Coffee Lake. Still, if the competition is so far behind, you can afford to direct R&D dollars to other areas.

    Still, it all seems a bit unfair: Intel get criticised when they try something new and when they don't.

    And Itanium, Larrabee and Optane all looked like good ideas on paper. It was only when they had a product that it became clear they weren't competitive.
  • Adramtech - Sunday, May 20, 2018 - link

    Since when is a 1st or 2nd gen product competitive? I'm sure that if they don't have a path to reach competitiveness, the project will be scrapped.
  • Keljian - Tuesday, May 29, 2018 - link

    While I don't doubt the tests are valid, I would really like to see a test with, say, PrimoCache with the block size set to 4k. I have found in my own testing that Optane (used via PrimoCache as an L2 cache @ 4k) is very worthwhile even for my Samsung 950 Pro.
  • Keljian - Tuesday, May 29, 2018 - link

    https://hardforum.com/threads/intel-900p-optane-wo... - Here are my benchmark findings for the 850 EVO and 950 Pro using the 32 GB Optane as L2 cache. You'll notice the 4k speeds stand out.
  • denywinarto - Tuesday, May 29, 2018 - link

    Thinking of using this with a 12 TB HGST as a game-disk drive for an iSCSI-based server. The data read is usually the same since they're only game files, but occasionally a new game gets added. Would it be a better option compared to RAID? SSDs are too expensive.
  • Lolimaster - Monday, October 1, 2018 - link

    Nice to use the 16 GB as a pagefile or Chrome/Firefox profile/cache.
  • Lolimaster - Tuesday, October 2, 2018 - link

    It's better to use them as extra RAM/pagefile or a scratch disk.
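
On the byte-addressability point discussed in the comments above: the read-modify-write overhead of a block interface can be sketched with a toy cost model. The helper names and the 4 KiB sector size are assumptions for illustration, not a description of any real controller:

```python
SECTOR = 4096  # block protocols like NVMe transfer whole sectors

def block_update_cost(offset, length, sector=SECTOR):
    """Bytes transferred to update `length` bytes behind a block
    interface: every sector containing the update is read,
    modified, and written back."""
    first = offset // sector
    last = (offset + length - 1) // sector
    sectors = last - first + 1
    return 2 * sectors * sector  # read + write of each touched sector

def byte_addressable_cost(offset, length):
    """With a byte-addressable medium, the update itself is the cost."""
    return length

print(block_update_cost(offset=100, length=1))      # 8192 bytes moved
print(byte_addressable_cost(offset=100, length=1))  # 1 byte
```

This is the asymmetry behind the comment: behind NVMe the advantage shows up only as simpler internal management, while an NVDIMM-style interface would expose the one-byte cost directly to software.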
