Peak Random Read Performance

For client/consumer SSDs we primarily focus on low queue depth performance for its relevance to interactive workloads. Server workloads are often intense enough to keep a pile of drives busy, so the maximum attainable throughput of enterprise SSDs is actually important. But it usually isn't a good idea to focus solely on throughput while ignoring latency, because somewhere down the line there's always an end user waiting for the server to respond.

In order to characterize the maximum throughput an SSD can reach, we need to test at a range of queue depths. Different drives will reach their full speed at different queue depths, and increasing the queue depth beyond that saturation point may be slightly detrimental to throughput, and will drastically and unnecessarily increase latency. Because of that, we are not going to compare drives at a single fixed queue depth. Instead, each drive was tested at a range of queue depths up to the excessively high QD 512. For each drive, the queue depth with the highest performance was identified. Rather than report that value, we're reporting the throughput, latency, and power efficiency for the lowest queue depth that provides at least 95% of the highest obtainable performance. This often yields much more reasonable latency numbers, and is representative of how a reasonable operating system's IO scheduler should behave. (Our tests have to be run with any such scheduler disabled, or we would not get the queue depths we ask for.)
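
As a rough illustration of that selection rule (hypothetical sweep numbers, not measurements from any of these drives, and not our actual test harness), a minimal Python sketch of picking the lowest queue depth that delivers at least 95% of the best observed throughput might look like this:

```python
# Hypothetical (queue depth -> IOPS) results from a sweep up to QD512;
# the numbers are made up purely for illustration.
sweep = {1: 11_000, 2: 21_000, 4: 40_000, 8: 68_000, 16: 75_000,
         32: 79_000, 64: 80_000, 128: 80_500, 256: 80_200, 512: 79_800}

def reported_point(sweep, threshold=0.95):
    """Return the lowest queue depth whose throughput is within
    `threshold` of the best throughput seen anywhere in the sweep."""
    best = max(sweep.values())
    for qd in sorted(sweep):
        if sweep[qd] >= threshold * best:
            return qd, sweep[qd]

qd, iops = reported_point(sweep)
print(f"report QD{qd}: {iops} IOPS ({iops / max(sweep.values()):.1%} of peak)")
```

With these made-up numbers the peak is at QD128, but QD32 already delivers 98% of it, so QD32 is what would get reported.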

One extra complication is the choice of how to generate a specified queue depth in software. A single thread can issue multiple I/O requests using asynchronous APIs, but this runs into several problems: if each system call issues one read or write command, then context switch overhead becomes the bottleneck long before a high-end NVMe SSD's abilities are fully taxed. Alternatively, if many operations are batched together for each system call, then the real queue depth will vary significantly and it is harder to get an accurate picture of drive latency. Finally, the current Linux asynchronous IO APIs only work in a narrow range of scenarios. There is a new general-purpose async IO interface (io_uring) that will enable drastically lower overhead, but until it is adopted by applications other than our benchmarking tools, we're sticking with testing through the synchronous IO system calls that almost all Linux software uses. This means that we test at higher queue depths by using multiple threads, each issuing one read or write request at a time.

Using multiple threads to perform IO gets around the limits of single-core software overhead, and brings an extra advantage for NVMe SSDs: the use of multiple queues per drive. Enterprise NVMe drives typically support at least 32 separate IO queues, so we can have 32 threads on separate cores independently issuing IO without any need for synchronization or locking between threads.
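
As a bare-bones sketch of that thread-per-request structure (hypothetical device path and parameters; a real benchmark would additionally use O_DIRECT with aligned buffers to bypass the page cache, run for a fixed duration, and record per-IO latencies), the approach boils down to:

```python
import os
import random
import threading

DEVICE = "/dev/nvme0n1"   # hypothetical drive under test
BLOCK = 4096              # 4kB random reads
QUEUE_DEPTH = 32          # one synchronous reader thread per outstanding command
IOS_PER_THREAD = 10_000

def reader(thread_id, dev_size):
    # Pin each thread to its own CPU so the kernel can submit its IOs on a
    # separate NVMe queue (Linux-only call).
    os.sched_setaffinity(0, {thread_id % os.cpu_count()})
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        blocks = dev_size // BLOCK
        for _ in range(IOS_PER_THREAD):
            offset = random.randrange(blocks) * BLOCK
            # CPython releases the GIL around the read syscall, so one blocking
            # pread per thread really does keep QUEUE_DEPTH commands in flight.
            os.pread(fd, BLOCK, offset)
    finally:
        os.close(fd)

fd = os.open(DEVICE, os.O_RDONLY)
dev_size = os.lseek(fd, 0, os.SEEK_END)
os.close(fd)

threads = [threading.Thread(target=reader, args=(i, dev_size))
           for i in range(QUEUE_DEPTH)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"completed {QUEUE_DEPTH * IOS_PER_THREAD} reads at nominal QD{QUEUE_DEPTH}")
```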

4kB Random Read

4kB Random Read (Power Efficiency): power efficiency in kIOPS/W, average power in W

Now that we're looking at high queue depths, the SATA link becomes the bottleneck and performance equalizer. The Kingston DC500s and the Samsung SATA drives differ primarily in power efficiency, where Samsung again has a big advantage.

4kB Random Read QoS

The Kingston DC500s have slightly worse QoS for random reads compared to the Samsung SATA drives. The Samsung entry-level NVMe drive has even higher tail latencies, but that's because it needs a queue depth four times higher than the SATA drives in order to reach its full speed, and that's getting close to hitting bottlenecks on the host CPU.

Peak Sequential Read Performance

Since this test consists of many threads each performing IO sequentially but without coordination between threads, there's more work for the SSD controller and less opportunity for pre-fetching than there would be with a single thread reading sequentially across the whole drive. The workload as tested bears a closer resemblance to a file server streaming to several simultaneous users than to the creation of a full-disk backup image.
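
In the same hedged Python style as the sketch above (hypothetical device and stream count), each thread streams through its own disjoint slice of the drive, so every stream is sequential on its own but they interleave at the drive:

```python
import os
import threading

DEVICE = "/dev/sdb"    # hypothetical SATA drive under test
BLOCK = 128 * 1024     # 128kB sequential reads
STREAMS = 4            # total queue depth = number of concurrent streams

def stream(thread_id, dev_size):
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        # Give each thread its own contiguous slice of the LBA space.
        slice_size = (dev_size // STREAMS) // BLOCK * BLOCK
        start = thread_id * slice_size
        for offset in range(start, start + slice_size, BLOCK):
            os.pread(fd, BLOCK, offset)   # sequential within this slice only
    finally:
        os.close(fd)

fd = os.open(DEVICE, os.O_RDONLY)
dev_size = os.lseek(fd, 0, os.SEEK_END)
os.close(fd)

threads = [threading.Thread(target=stream, args=(i, dev_size)) for i in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The contrast with a single thread walking the drive from start to finish is the point: the controller sees several interleaved sequential streams rather than one.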

128kB Sequential Read

128kB Sequential Read (Power Efficiency): power efficiency in MB/s/W, average power in W

For sequential reads, the story at high queue depths is the same as for random reads. The SATA link is the bottleneck, so the difference comes down to power efficiency. The Kingston drives both blow past their official rating of 1.8W for reads, and have substantially lower efficiency than the Samsung SATA drives. The SATA drives are all at or near full throughput with a queue depth of four, while the NVMe drive is shown at QD8.

Steady-State Random Write Performance

The hardest task for most enterprise SSDs is to cope with an unending stream of writes. Once all the spare area granted by the high overprovisioning ratios has been used up, the drive has to perform garbage collection while simultaneously continuing to service new write requests, and all while maintaining consistent performance. The next two tests show how the drives hold up after hours of non-stop writes to an already full drive.
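
The shape of that write pressure is simple to sketch (hypothetical device and duration; a real steady-state test would precondition the drive, use O_DIRECT, and log throughput and latency throughout the run):

```python
import os
import random
import time

DEVICE = "/dev/sdb"     # hypothetical drive under test: all data on it is destroyed
BLOCK = 4096            # 4kB random writes across the whole LBA range
HOURS = 4               # keep writing long after the spare area runs out

fd = os.open(DEVICE, os.O_WRONLY)
try:
    dev_size = os.lseek(fd, 0, os.SEEK_END)
    blocks = dev_size // BLOCK
    buf = os.urandom(BLOCK)              # incompressible payload
    deadline = time.monotonic() + HOURS * 3600
    writes = 0
    while time.monotonic() < deadline:
        offset = random.randrange(blocks) * BLOCK
        os.pwrite(fd, buf, offset)       # overwrite a random 4kB block
        writes += 1
finally:
    os.close(fd)
print(f"issued {writes} random writes over {HOURS} hours")
```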

4kB Random Write

The Kingston DC500s looked pretty good at random writes when we were only considering QD1 performance, and now that we're looking at higher queue depths they still exceed expectations and beat the Samsung drives. The DC500M's 81.2k IOPS is above its rated 75k IOPS, but not by as much as the DC500R's 58.8k IOPS beats the specification of 28k IOPS. When testing across a wide range of queue depths, the DC500R didn't always maintain this throughput, but it was always above spec.

4kB Random Write (Power Efficiency): power efficiency in kIOPS/W, average power in W

The Kingston DC500s are pretty power-hungry during the random write test, but they stay just under spec. The Samsung SATA SSDs draw much less power and match or exceed the efficiency of the Kingston drives even when performance is lower.

4kB Random Write QoS

The DC500R's best performance across the various random write queue depths happened when the queue depth was high enough to incur significant software overhead from juggling so many threads, so it has pretty poor latency scores. At a mere QD4 it managed about 17% lower throughput but much better QoS; however, this test is set up to report how the drive behaved at or near the highest throughput observed. It's a bit concerning that the DC500R's throughput seems to be so variable, but since it's always faster than advertised, it's not a huge problem. The DC500M's great throughput was achieved even at pretty low queue depths, so the poor 99.99th percentile latency score is entirely the drive's fault rather than an artifact of the host system configuration. The Samsung 860 DCT has 99.99th percentile tail latency almost as bad as the DC500R, but the 860 was only running at QD4 at the time, so that's another case where the drive is having trouble, not the host system.

Steady-State Sequential Write Performance

128kB Sequential Write

Testing at higher queue depths didn't help the DC500R do any better on our sequential write test, but the other SATA drives do get a bit closer to the SATA limit. Since this test uses multiple threads each performing sequential writes at QD1, pushing the total queue depth too high hurts performance because the SSD has to juggle multiple write streams. As a result, these SATA drives peaked at just QD2 and weren't quite as close to the SATA limit as they could have been with a single stream running at a moderate queue depth.

128kB Sequential Write (Power Efficiency): power efficiency in MB/s/W, average power in W

We commented on the Kingston DC500R's excessive power draw when this result turned up on the previous page for the QD1 test, and it's still the most power-hungry and least efficient drive here. The DC500M draws a bit more power than at QD1 but stays within spec and more or less matches the efficiency of the NVMe drive; Samsung's SATA drives again turn in much better efficiency scores.

Comments

  • Umer - Tuesday, June 25, 2019

    I know it may not be a huge deal to many, but Kingston, as a brand, left a really sour taste in my mouth after the V300 fiasco, since I bought those SSDs in bulk for a new build back then.
  • Death666Angel - Tuesday, June 25, 2019

    Let's put it this way: they have to be quite a bit cheaper than the nearest, known competitor (Crucial, Corsair, Adata, Samsung, Intel...) to be considered as a purchase by me.
  • mharris127 - Thursday, July 25, 2019

    I don't expect Kingston to be any less expensive than ADATA, as Kingston serves the mid-price market and ADATA the low-priced one. Samsung is a supposedly premium product with a premium price tag to match. I haven't used a Kingston SSD, but I have some of their other products and haven't had a problem with any of them that wasn't caused by me. As for picking my next SSD: I have had one ADATA SSD fail; they replaced it once I filled out some paperwork, they sent an RMA, and I sent the defective drive back to them. The second one, and a third one I bought a couple of months ago, are working fine so far. My Crucial SSDs work fine. I have a Team Group SSD that works wonderfully after a year of service. I think my money is on either Crucial or Team Group the next time I buy that product.
  • Notmyusualid - Tuesday, June 25, 2019

    ...wasn't it OCZ that released the 'worst known' SSD?

    I had almost forgotten about those days.

    I believe the only customers that got any value out of them were those on PERC and other known RAID controllers, which were not writing < 128kB blocks - and I wasn't one of them. I RMA'd it, insta-sold the return, and bought an X25-M.

    What a 'mare that was.
  • Dragonstongue - Tuesday, June 25, 2019

    The SandForce controller was ahead of its time, and not in the most positive ways all the time either...

    I had an Agility 3 60GB that I used for just over 2 years in my system; my mom has now used it for over 2.5 more years. However, it was either starting to have issues, or the way she was using it caused it to "forget" things now and then.

    I fixed it with a Crucial MX100 or MX200 (I forget which, LOL) that still has over 90% life left either way. The Agility 3 was showing a "warning" even though it still reported over 75% life left (Christmas '18-'19). Definitely a massive speed-up from swapping to something more modern, as well as doing some cleaning.

    SSDs have come A LONG way in a short amount of time. Sadly, the producers of the memory, controllers, and flash are often the problem: bad drivers, poor performance where there shouldn't be, drives not working in every system when they should, etc.
  • thomasg - Tuesday, June 25, 2019

    Interestingly, I still have one of the OCZs with the first pre-production SandForce controller, the Vertex Limited 100 GB, which has been running for many years at high throughput and through many, many terabytes of writes.
    Still works perfectly.
    I'm not sure I remember correctly, but I think the major issues started showing up for the production SandForce model that was used later on.
  • Chloiber - Tuesday, June 25, 2019

    I still have a Super Talent UltraDrive MX 32GB - not working properly anymore, but I don't even remember how many firmware updates I put that one through :)
    They just had really bad, buggy firmwares throughout.
  • leexgx - Wednesday, June 26, 2019

    The main problem with SandForce was that the compression layer and the NAND layer were never managed correctly, which made TRIM ineffective, so garbage collection had to run on new writes, resulting in high access times and half-speed writes after one full drive of writes. Note that the drive did not have to be filled: if it was a 240GB SSD, all you had to do to slow it down permanently was write 240-300GB of data over time (more than the raw capacity because of compression, to get an actual full drive write). The only way to reset it was a secure erase. (I'm unsure if that was ever fixed on the SandForce SF3000 used in the Seagate enterprise SSDs and the IronWolf NAS SSD.)

    The other issue, which was more or less fixed, was the rare BSOD (more or less two systems I managed did not like them), or the drive eating itself and becoming a 0MB drive (extremely rare, but it did happen). The 0MB bug was fixed, I think, if you owned an Intel drive, but the BSOD fix had limited success.
  • Gunbuster - Tuesday, June 25, 2019

    Indeed. Not going to support a company with a track record of shady practices.
  • kpxgq - Tuesday, June 25, 2019

    The V300 fiasco is nothing compared to the Crucial V4 fiasco... quite possibly the worst SSD drives ever made, right along with the early OCZ Vertex drives. Over half the ones I bought for a project just completely stopped working a month in. I bought them trusting the Crucial brand name alone.
