Test Procedures

Our usual SSD test procedure was not designed to handle multi-device tiered storage, so we made several changes for this review; as a result, much of the data presented here is not directly comparable to that in our previous reviews. The major changes are:

  • All test configurations were running the latest OS patches and CPU microcode updates for the Spectre and Meltdown vulnerabilities. Regular SSD reviews with post-patch test results will begin later this month.
  • Our synthetic benchmarks are usually run under Linux, but Intel's caching software is Windows-only, so the usual fio scripts were adapted to run on Windows (a sketch follows this list). The settings for data transfer sizes and test duration are unchanged, but the difference in storage APIs between the operating systems means that the results shown here are lower across the board, especially for the low-queue-depth random I/O that is the greatest strength of Optane SSDs.
  • We only have equipment to measure the power consumption of one drive at a time. Rather than move that equipment out of the primary SSD testbed and use it to measure either the cache drive or the hard drive, we kept it busy testing drives for future reviews. The SYSmark 2014 SE test results include the usual whole-system energy usage measurements.
  • Optane SSDs and hard drives are no slower when full than when empty, because they lack the complicated wear-leveling and block-erase mechanisms that flash-based SSDs require, and have no equivalent to SLC write caches. The AnandTech Storage Bench (ATSB) trace-based tests in this review therefore omit the usual full-drive test runs. Instead, caching configurations were tested by running each test three times in a row to check for the effects of cache warm-up.
  • Our AnandTech Storage Bench "The Destroyer" test takes about 12 hours to run on a good SATA SSD and about 7 hours on the best PCIe SSDs. On a mechanical hard drive, it takes more like 24 hours. Results for The Destroyer will probably not be ready this week. In the meantime, the ATSB Heavy test is sufficiently large to illustrate how SSD caching performs for workloads that do not fit into the cache.
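
To make the Windows adaptation mentioned above concrete, here is a minimal sketch of the kind of fio job involved; the job name, device path, and parameter values are illustrative, not our actual scripts. Under Linux these jobs use fio's libaio engine; windowsaio is fio's native asynchronous engine on Windows, and the workload definition itself is unchanged.

    ; Minimal sketch; on the Linux testbed this would be ioengine=libaio
    ; and a /dev/... path. All names and values here are illustrative.
    [global]
    ioengine=windowsaio
    direct=1
    filename=\\.\PhysicalDrive1

    ; Queue depth 1 random reads: the pattern where Optane is strongest
    ; and where the Windows storage stack's extra overhead shows most.
    [randread-qd1]
    rw=randread
    bs=4k
    iodepth=1
    runtime=60
    time_based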

Benchmark Summary

This review analyzes the performance of Optane Memory caching both for boot drives and secondary drives. The Optane Memory modules are also tested as standalone SSDs. The benchmarks in this review fall into three categories:

Application benchmarks: SYSmark 2014 SE

SYSmark directly measures how long applications take to respond to simulated user input. The scores are normalized against a reference system, but otherwise are inversely proportional to the accumulated time between user input and the result showing up on screen: less waiting means a higher score. SYSmark measures whole-system performance and energy usage with a broad variety of non-gaming applications. The tests are not particularly storage-intensive, and differences in CPU and RAM can have a much greater impact on scores than storage upgrades.
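
As a rough illustration of that normalization, with the reference system calibrated to a score of 1000, a system's score scales roughly as below; treat this as a sketch of the relationship, not BAPCo's published formula.

    \[ \text{score} \approx 1000 \times \frac{t_{\text{reference}}}{t_{\text{measured}}} \]

Here \(t_{\text{measured}}\) is the accumulated response time on the system under test, so halving total response time roughly doubles the score.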

AnandTech Storage Bench: The Destroyer, Heavy, Light

These three tests are recorded traces of real-world I/O that are replayed onto the storage device under test. This allows the same storage workload to be reproduced consistently, almost completely independent of changes in CPU, RAM, or GPU, because none of the computational work of the original applications is reproduced. The ATSB Light test is similar in scope to SYSmark, while the ATSB Heavy and The Destroyer tests represent much heavier usage across a broader range of applications. As a concession to practicality, these traces are replayed with long disk idle times cut short, so that The Destroyer doesn't take a full week to run.
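
The ATSB tests use our own replay tooling, but fio's I/O log replay illustrates the same concept; the trace file name below is hypothetical.

    ; Conceptual sketch of trace replay. By default fio honors the
    ; timing recorded in the log; replay_no_stall=1 would discard it
    ; entirely. The ATSB replay tool sits in between: recorded timing
    ; is preserved, but long idle gaps are capped.
    [replay]
    ioengine=libaio
    direct=1
    filename=/dev/sdb
    read_iolog=atsb-heavy.log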

Synthetic Benchmarks: Flexible IO Tester (FIO)

FIO is used to generate and measure artificial storage workloads according to our custom scripts. A poor choice of data sizes, access patterns, or test duration can produce results that are either unrealistically flattering to SSDs or unfairly difficult. Our FIO-based tests are designed specifically for modern consumer SSDs, with an emphasis on the queue depths and transfer sizes most relevant to client computing workloads. Test durations and preconditioning workloads have been chosen to avoid unrealistically triggering thermal throttling on M.2 SSDs or overflowing SLC write caches.
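
A minimal sketch of the kind of job file this implies is below; the device path and exact parameter values are illustrative, not our production settings.

    [global]
    ioengine=libaio
    direct=1
    filename=/dev/nvme0n1
    randrepeat=0
    ; Short, time-limited bursts help avoid thermal throttling and,
    ; for write tests, keep bursts within a drive's SLC cache regime.
    runtime=30
    time_based

    ; Low queue depth random I/O: the most client-relevant pattern.
    [randread-qd1]
    rw=randread
    bs=4k
    iodepth=1

    ; High queue depth sequential read; stonewall makes this job wait
    ; until the previous one has finished.
    [seqread-qd32]
    stonewall
    rw=read
    bs=128k
    iodepth=32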

Comments

  • TrackSmart - Tuesday, May 15, 2018

    People seem to be talking around each other in these threads, without actually reading the substance of each person's reply.

    Dr. Swag didn't mention ONLY using a 500GB SSD. Just the opposite. He/she was suggesting that you could use a 500GB SSD for both a boot drive AND a 64GB cache drive. So you end up with ~440GB of normal SSD space (enough for most programs) AND a ~60GB cache drive to speed up your HDD accesses. All for the same price as adding a 64GB Optane drive.

    Addressing Dr. Swag's actual comment: I partially agree. One downside to the arrangement you suggested is that most affordable SSDs have lower write endurance than Optane cache drives. They are also likely to be slower than an Optane drive (but still fast compared to HDDs). And if your SSD boot/cache all-in-one drive dies, you might lose data on both the SSD and the HDD.

    Regarding WithoutWeakness: Your comment makes sense if you are accessing the same subset of data over and over again. But if you are accessing a block of data ONCE to run an analysis and then moving on to a new block of data, then you will experience HDD speeds. The same goes for the first access to data you will be using multiple times: slow the first time, faster on subsequent accesses. So the downsides of a small cache will remain in a number of scenarios.

    I personally think that Intel missed the boat with Optane. These solutions would have been a lot more convincing when SSD storage was a lot more expensive (i.e. 5+ years ago) and before other caching options existed for making use of 'normal' SSDs.
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 15, 2018

    Power loss protection?

    From what I know so far, the MX500 (500GB) cache contains unique data that has not yet been written to normal NAND, and Crucial does not recommend using an SLC cache unless you have battery backup protection

    An Optane cache drive is a "copy" of data already on the hard drive (or SSD), so I don't see power loss resulting in data loss once you clear the cache
  • SkipPpe - Friday, May 18, 2018

    Something like an Intel 3510 would be a better drive to use for this.
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 15, 2018

    DOH......
    Nevermind!
  • sharath.naik - Tuesday, May 15, 2018

    I was wondering if the lifespan of these is any better than an SSD's. Won't this burn out much faster than the drive's lifespan if used as a cache for it?
  • MajGenRelativity - Tuesday, May 15, 2018

    Optane drives are more durable than the average SSD
  • CheapSushi - Wednesday, May 16, 2018

    Even more so than MLC NAND, which seems to be getting harder and harder to find (aside from Samsung's PRO line).
  • Drumsticks - Tuesday, May 15, 2018

    Is anybody else interested in the performance of the 800p as a cache drive? The price difference between an Optane SSD 800p plus a 1TB HDD and a 1TB SATA drive is less than $15 nowadays, so it's pretty comparable for effectively the same capacity of storage. And in the 25 or so graphs presented in this review, the 118GB caching solution outperforms the SATA drive, sometimes handily, in 24 of them. The 25th is power consumption, and one of the 24 includes a single loss in run 1 of the latency measurement for the Heavy test.

    Hell, sometimes that solution outperforms the 900p. Why would you pick a comparably priced 1TB SATA SSD over something like that? If you need less storage, a 500GB SATA drive will perform even worse than a 1TB, and a 250GB worse still. Going down in capacity on the Optane drive would still probably keep you in the price range of the SATA drive, while leaving you with double or quadruple the capacity.
  • Giroro - Tuesday, May 15, 2018

    "58GB 800P is functionally identical to the 64GB M10 and both have the exact same usable capacity of 58,977,157,120 bytes."

    Hold on, either something is wrong or that is straight-up false advertising, a new low far beyond how storage manufacturers usually inflate their capacity specs. Don't just breeze past the part where Intel may be illegally marketing this thing. As far as I know Optane doesn't use over-provisioning, and it definitely isn't the normal GiB/GB conversion issue or the typical "formatting" excuse that doesn't actually apply to solid-state media, so what gives?

    It has to be a mistake, right?
  • The_Assimilator - Wednesday, May 16, 2018

    > it definitely isn't the normal GiB/GB conversion issue

    Actually, it is.
