Test System

Along with the Optane SSD sample, Intel provided a new server based on their latest Xeon Scalable platform for our use as a testbed. The 2U server is equipped with two 18-core Xeon Gold 6154 processors, 192GB of DRAM, and an abundance of PCIe lanes.

Because this is a dual-socket NUMA system, care needs to be taken to avoid unnecessary data transfers between sockets. For this review, the tests were set up to largely emulate a single-socket configuration: all SSDs tested for this review were connected to PCIe ports provided by CPU #2, and all of the benchmarks using the FIO tool were configured to use only cores on CPU #2 and to allocate only memory connected to CPU #2, so inter-socket latency is not a factor. This setup would be quite limiting for full enterprise application testing, but for synthetic storage benchmarks one CPU is far more than necessary to stress any single SSD.
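As a rough sketch of that pinning (the device path and job parameters here are placeholders, and the numa_* options require an fio build with libnuma support), a job file along these lines keeps worker threads and buffer allocations on a single node, assuming CPU #2's cores are NUMA node 1:

    # Pin fio's worker threads and memory allocations to NUMA node 1
    # (assumed here to be CPU #2) so no I/O crosses the socket interconnect.
    [global]
    numa_cpu_nodes=1
    numa_mem_policy=bind:1
    direct=1
    ioengine=libaio

    [randread]
    filename=/dev/nvme1n1   # placeholder for the drive under test
    rw=randread
    bs=4k
    iodepth=32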

For this review, the test system was mostly configured to offer the highest and most predictable storage performance: Hyper-Threading was disabled, SpeedStep and processor C-states were turned off, and other motherboard settings were tuned for maximum performance. The one notable exception is fan speed: since this test server was installed in a home office environment instead of a datacenter, the "acoustic" fan profile had to be used instead of "performance".
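The review applied these settings in firmware, but for reference, a roughly equivalent software-side setup on Linux (assuming the cpupower utility is installed) looks like this:

    # Force the performance governor and disable deep C-states at runtime;
    # a sketch of the effect of the firmware settings used for this review.
    cpupower frequency-set -g performance
    cpupower idle-set -D 0   # disable idle states with >0us exit latency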

Enterprise SSD Test System

System Model   Intel Server R2208WFTZS
CPU            2x Intel Xeon Gold 6154 (18C, 3.0GHz)
Motherboard    Intel S2600WFT
Chipset        Intel C624
Memory         192GB total, Micron DDR4-2666 16GB modules
Software       Linux kernel 4.13.11, FIO 3.1

The Linux kernel's NVMe driver is constantly evolving, and several new features have been added this year that are relevant to a drive like the Optane SSD. Rather than use an enterprise-oriented Linux distribution with a long-term support cycle for an older kernel version, this review was conducted using the very fresh 4.13 kernel series.

Earlier this year, the FIO storage benchmarking tool hit a major milestone with version 3.0, which switched its timing measurements from microsecond to nanosecond precision. This makes it much easier to analyze the performance of Optane devices, where latency can be down in single-digit microsecond territory.
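As an illustration of where that resolution matters (the device path and job parameters are hypothetical), a queue-depth-1 random read job like the one below is dominated by per-I/O latency, and fio 3.x reports completion latency statistics in nanoseconds when the values are small enough:

    # QD1 4kB random reads: the canonical latency test for an Optane SSD.
    # /dev/nvme1n1 is a placeholder for the drive under test.
    fio --name=qd1-randread --filename=/dev/nvme1n1 \
        --direct=1 --rw=randread --bs=4k \
        --ioengine=pvsync --iodepth=1 \
        --runtime=60 --time_based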

The Competition

Intel SSD DC P3700 1.6TB

The Intel SSD DC P3700 was the flagship of Intel's first generation of NVMe SSDs, and one of the first widely available NVMe drives. It was a great showcase for the advantages of NVMe, but it is now outdated, with 20nm planar MLC NAND and capacities that top out at 2TB.

Intel launched a new P4x00 generation of flash-based enterprise NVMe SSDs this year, using a new controller and 3D TLC NAND. However, with the Optane SSD DC P4800X taking over as the flagship drive, the P3700 didn't get a direct successor: the P4600 is currently Intel's top flash-based enterprise SSD, and while it offers higher performance and capacities than the P3700, it does not match the P3700's rated write endurance.

Intel SSD DC P3608 1.6TB

Intel's first-generation NVMe controller only supports drive capacities up to 2TB. To get around that limitation and to offer higher performance in some respects, Intel created the SSD DC P3608. Roughly equivalent to two P3600s on one card behind a PLX PCIe switch, the P3608 appears to the system as two SSDs, but software RAID can combine them into a single large, high-performance volume. Our P3608 sample has a total of 1.6TB of accessible storage (800GB per controller), the highest built-in overprovisioning ratio in the P3608 family. That extra overprovisioning gives it the best aggregate random write performance of the family, rated to match a single P3700, while its random read and sequential transfer ratings exceed the P3700's.

For this review, one controller on the P3608 was tested as a stand-in for the P3600, and a software RAID 0 array spanning both controllers was also tested.
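A minimal sketch of building such an array with Linux md (the two device names are placeholders for the namespaces the card's two controllers expose):

    # Stripe the P3608's two controllers into one RAID 0 volume.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 \
        /dev/nvme0n1 /dev/nvme1n1
    # The md device can then be benchmarked raw, or formatted:
    # mkfs.xfs /dev/md0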

Intel has not officially announced the successor P4608, but they have submitted it for testing and inclusion on the NVMe Integrators List, so it will probably launch within the next few months.

Micron 9100 MAX 2.4TB

Launched in mid 2016, the Micron 9100 series was part of their first generation of NVMe SSDs. Micron wasn't new to the PCIe SSD market, but their early products predated the NVMe standard and instead used a proprietary protocol. The 9100 series uses Micron 16nm MLC NAND flash and a Microsemi Flashtec NVMe1032 controller.

Our Micron 9100 MAX 2.4TB sample is the fastest model in the 9100 series, and the second-largest. Micron recently announced the 9200 series, which switches to 3D TLC NAND for a huge capacity boost and adopts the latest generation of Microsemi controllers to allow a PCIe x8 connection, but we don't have a sample to test.

Intel Optane SSD 900p 280GB

Last month, Intel launched their consumer Optane SSD based on the same controller platform as the P4800X. The Optane SSD 900p has some enterprise-oriented features disabled and is rated for only a third the write endurance, but offers essentially the same performance as the P4800X for about a third the price.

This makes the 900p a very attractive option for users who need Optane-level performance but don't want to pay for the absolute highest write endurance. It isn't intended as an enterprise SSD, but the 900p can still compete in this space.

Comments

  • Lord of the Bored - Thursday, November 9, 2017

    Me too. ddriver is most of why I read the comments.
  • mkaibear - Friday, November 10, 2017

    He is always good for a giggle. I suppose he's busy directing hard drive manufacturers to make special hard drive platters for him solely out of hand-gathered sand from the Sahara. Or something.

    Still it's a shame to miss the laughs. It's always the second thing I do on SSD articles - first read the conclusion, then go and see what deedee has said. Ah well.
  • extide - Friday, November 10, 2017

    Please.. don't jinx us!
  • rocky12345 - Thursday, November 9, 2017

    Interesting drive to say the least. Also a well written review thanks.
  • PeachNCream - Thursday, November 9, 2017

    30 DWPD over the course of 5 years turns into a really large amount of data when you're talking about 750GB of capacity. Isn't the typical endurance rating more like 0.3 DWPD for enterprise solid state?

    So this thing about Optane on DIMMs is really interesting to me. Is the plan for it to replace RAM and storage all at once or to act as a cache of some sort between faster DRAM and conventional solid state? Even with the endurance it's offering right now, it seems like it would need to be more durable still for it to replace RAM.

    Oh (sorry case of shinies) could this be like a DIMM behind HBM on the CPU package where HBM does more of the write heavy stuff and then Optane lives between HBM and SSD or HDD storage? Has Intel let much out of the bag about this sorta thing?
  • Billy Tallis - Thursday, November 9, 2017

    Enterprise SSDs are usually sorted into two or three endurance tiers. Drives meant for mostly-read workloads typically have endurance ratings of 0.3 DWPD. High-endurance drives for write-intensive uses are usually 10, 25 or 30 DWPD, but the ratings of high-endurance drives have decayed somewhat in recent years as the market realized few applications really need that much endurance.
  • lazarpandar - Thursday, November 9, 2017

    Can this be used to supplement addressable system memory? I remember Intel talking about that during the product launch.
  • Billy Tallis - Thursday, November 9, 2017

    Yes. It makes for a great swap device, especially with a recent Linux kernel. Alternatively, Intel will sell it bundled with a hypervisor that presents the guest OS with a pool of memory equal in size to the system's DRAM plus about 85% of the Optane drive's capacity. The hypervisor manages memory placement, so from the guest OS's perspective the memory is a homogeneous pool, not x GB of DRAM and y GB of Optane.
  • tuxRoller - Friday, November 10, 2017

    It's a bit odd Intel would go for the hypervisor solution since the kernel can handle tiered pmem and it's in a better position to know where to place data.
    I suppose it's useful because it's cross-platform?
  • xype - Friday, November 10, 2017

    I’d guess a hypervisor solution would also allow any critical fixes to be propagated faster/easier than having to go through a 3rd party (kernel) provider?
