In the process of assimilating SanDisk, Western Digital has been re-using its hard drive branding on consumer SSDs: WD Green, Blue and Black can refer to either mechanical hard drives or SSDs. The WD Blue brand covers the most mainstream products, which for SSDs has meant SATA drives. The first WD Blue SSD, introduced in 2016, used planar TLC NAND and a Marvell controller with the usual amount of DRAM for a mainstream SSD. The next year, the WD Blue was updated with 3D TLC NAND that kept it competitive with the Crucial MX series and the Samsung 850 EVO. 2018 passed with no changes to the WD Blue hardware, but prices were slashed to keep up with the rest of the industry: the 1TB drive that debuted with an MSRP of $310 now sells for $120.

SanDisk's 64-layer 3D TLC NAND is nearing the end of its product cycle, but they and other NAND flash manufacturers aren't in a hurry to switch over to 96L NAND, so it's not quite time for another straightforward refresh of the WD Blue. Instead, Western Digital has chosen to migrate the WD Blue brand over to a different market segment. Now that the WD Black is well-established as a high-end NVMe product, there's room below it for an entry-level NVMe SSD: the new WD Blue SN500. This is little more than a re-branding of an existing OEM product (the WD SN520), in the same way that the current WD Black SN750 is based on the WD SN720. The SN520 was announced more than a year ago, but as an OEM product we were unable to obtain a review sample. Like the high-end SN720 and SN750, the SN520 and WD Blue SN500 use Western Digital's in-house NVMe SSD controller architecture, albeit in a cut-down implementation with just two PCIe lanes and no DRAM interface. The high-end version of this controller architecture has proven to be very competitive (especially for a first-generation product), but so far we have only the SN500's spec sheet by which to judge the low-end controller.

WD Blue SN500 Specifications
Capacity              250 GB        500 GB
Form Factor           M.2 2280, single-sided
Interface             NVMe, PCIe 3.0 x2
Controller            Western Digital in-house
NAND                  SanDisk 64-layer 3D TLC
DRAM                  None (Host Memory Buffer not supported)
Sequential Read       1700 MB/s     1700 MB/s
Sequential Write      1300 MB/s     1450 MB/s
4KB Random Read       210k IOPS     275k IOPS
4KB Random Write      170k IOPS     300k IOPS
Power (Peak)          5.94 W        5.94 W
Power (PS3 Idle)      25 mW         25 mW
Power (PS4 Idle)      2.5 mW        2.5 mW
Write Endurance       150 TB        300 TB
Warranty              5 years
MSRP                  $54.99

High-end client/consumer NVMe SSDs all use PCIe 3.0 x4 interfaces, but the entry-level NVMe market is split between four-lane and two-lane controllers. Two-lane controllers are generally cheaper and their smaller size makes them attractive for small form factor devices that can't fit a full 22x80mm M.2 card. The WD SN520 is a 22x30mm design that is also available in 42mm and 80mm card lengths, but the retail WD Blue SN500 will only be sold in the 80mm length that is most common for consumer M.2 drives.
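A quick back-of-the-envelope calculation shows how close the SN500's rated 1700 MB/s sequential read sits to the ceiling of a two-lane link. The figures below are standard PCIe 3.0 parameters (8 GT/s per lane, 128b/130b encoding), not anything from WD's spec sheet:

```python
# Raw PCIe link bandwidth, before protocol overhead (TLP headers, flow
# control, etc.), which typically shaves off another several percent.

def pcie_bandwidth_mbps(rate_gts, lanes, encoding=128 / 130):
    """Raw bandwidth in MB/s: transfer rate * encoding efficiency / 8 bits."""
    return rate_gts * 1e9 * encoding / 8 * lanes / 1e6

x2 = pcie_bandwidth_mbps(8.0, 2)  # PCIe 3.0 x2, as on the SN500
x4 = pcie_bandwidth_mbps(8.0, 4)  # PCIe 3.0 x4, as on high-end drives

print(f"PCIe 3.0 x2: {x2:.0f} MB/s raw")  # ~1969 MB/s
print(f"PCIe 3.0 x4: {x4:.0f} MB/s raw")  # ~3938 MB/s
```

So the two-lane link leaves only modest headroom above the SN500's rated sequential read, while a four-lane drive has roughly twice the raw bandwidth to work with.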

The switch from SATA to NVMe means the new WD Blue SN500 will offer much higher peak performance, but the use of a DRAMless controller means there may be some corner cases where heavy workloads show little improvement or even regress in performance. The SN500's controller does not use the NVMe Host Memory Buffer, but does include an undisclosed amount of memory on-board that serves a similar purpose. This means that omitting the external DRAM from the drive should not have as severe a performance impact as it does for DRAMless SATA drives like the WD Green SSD.
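To see why the size of that on-board memory matters, consider a rough sizing of a flash translation layer (FTL) mapping table. The sketch below assumes the common scheme of one 4-byte entry per 4 KiB logical page; WD has not disclosed the SN500's actual FTL design, so the numbers are purely illustrative:

```python
# Rough FTL mapping-table sizing: one 4-byte physical address per
# 4 KiB logical page works out to about 1 GB of map per 1 TB of drive.

def ftl_table_bytes(capacity_bytes, page=4096, entry=4):
    return capacity_bytes // page * entry

full_map = ftl_table_bytes(500 * 10**9)  # 500 GB drive
print(f"Full map for 500 GB: {full_map / 2**20:.0f} MiB")  # 466 MiB

# If the controller's SRAM caches, say, 32 MiB of that map:
sram = 32 * 2**20
print(f"Fraction resident in 32 MiB SRAM: {sram / full_map:.1%}")  # 6.9%
```

With only a small fraction of the map resident at once, workloads with scattered access patterns force extra NAND reads just to look up mappings, which is exactly the corner case where DRAMless drives fall behind.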

Even if the new WD Blue SN500 succeeds at offering far better performance than the current WD Blue SATA SSD, it will still be a big step backward in terms of capacity: the SATA product line ranges from 250GB to 2TB, but the SN500 will only be offered in 250GB and 500GB capacities. We hope that Western Digital has an upgraded WD Green in the works to keep affordable 1TB+ drives in their portfolio.

The MSRPs for the WD Blue SN500 are a few dollars higher than current retail pricing for the mainstream SATA SSDs they are intended to succeed. Western Digital has not mentioned when the SN500 will hit the shelves, but there will probably not be much delay after today's announcement, since this hardware has been shipping to OEMs for a year already.

Comments

  • deil - Thursday, March 14, 2019 - link

    Well (16¢/GB) for NVME speeds, that's wow at least today.
  • MDD1963 - Monday, March 18, 2019 - link

    well, for 'half-spec' NVME speeds and still 3x SATA spec, it's pretty darn inexpensive... Now we need mainboards to have 6x NVME slots instead of 6x SATA ports..
  • WiredTexan - Friday, March 29, 2019 - link

    "well, for 'half-spec' NVME speeds and still 3x SATA spec, it's pretty darn inexpensive... Now we need mainboards to have 6x NVME slots instead of 6x SATA ports."

    And there's the problem. Literally no room on an ATX board for more than 3. Each one also consumes PCIe lanes, etc. Not really knowledgeable about this field, but is there a group working on a new spec to replace the current standards of iTX, mATX, ATX and EATX for consumers? Seems we're moving into new territory that can't be accommodated by the current standards. Or is PCIe4 enough, along with increased capacity of NVMe?
  • amnesia0287 - Tuesday, August 27, 2019 - link

    PCIe 4.0 and the abundance of lanes on the newest AMD chips partially solves this. Once we move to PCIe 5.0, current PCIe 3.1 speeds could be achieved using a single lane. Then adding lots of U.2-style SSDs will be easy (you can only fit so many M.2 slots on a PC, though I could totally see someone figuring out a PCIe x16 card with 16 M.2 SSDs, which is just silly to think about).

    We are almost there tho. I’ve moved almost ALL my storage to ssd, only 2 spinners left. Next round of upgrades should fix that.

    What I’d love to see is some massive 3.5” ssd, I don’t even care which interface. I’ve never understood why they gravitated to 2.5”. I guess it doesn’t really matter too much. They can already do 8 and I think even 16tb in the 2.5 form factor. But 3.5 would add so much room for like capacitors or batteries or more chips (and channels).
  • BPB - Thursday, March 14, 2019 - link

    This seems like a nice choice for my older PC that needs a PCIe card to use an NVMe drive. The system would never take full advantage of more expensive drives. I may finally upgrade to an NVMe drive on that system now that I can get a reasonable size for cheap.
  • TelstarTOS - Thursday, March 14, 2019 - link

    Where are 2TB drives, WD?
  • jabber - Thursday, March 14, 2019 - link

    So would leaving, say, 10GB+ free for over-provisioning help with these less performance-oriented drives?
  • Cellar Door - Thursday, March 14, 2019 - link

    You don't need to do that - you won't see any real world difference unless you are running a professional workload. In which case you should be starting out with a different drive.
  • abufrejoval - Thursday, March 14, 2019 - link

    I am a bit confused when you assert a DRAMless design yet speak of an "undisclosed amount of memory on-board" and categorically exclude a host memory buffer...

    I guess the controller would include a certain amount of RAM, more likely static because it takes an IBM p-Series CPU to mix DRAM and logic on a single die.

    I guess there could in fact be a PoP RAM chip and we couldn't tell from looking at the plastic housing, but could they afford that?

    That leaves embedded MRAM or ReRAM which I believe WD is working on, but would it already be included on this chip?

    And I wonder if a HMB-less design can actually be verified or where and how you can see what amount of host memory is actually being requested by an NVMe drive.

    BTW: How do they actually use that memory? The optimal performance would actually be achieved by having the firmware execute on the host CPU on its own DRAM, but for that the drive would have to upload the firmware, which is a huge security risk unless it were to be eBPF type code (hey, perhaps I should patent that!)

    What remains is PCI bus master access, which would explain how these drives may not be speed demons.
  • Billy Tallis - Thursday, March 14, 2019 - link

    When WD introduced the second generation WD Black SSD, they briefed the media on their controller architecture in general and answered some questions about the SN520. The controller ASIC includes SRAM buffers, but they don't disclose the exact capacity. It's probably tens of MB, comparable to the amount of memory used by HMB drives and far too small to be worth using a separate DRAM device. WD specifically stated that HMB was not used, and that they had sufficient memory on the controller itself to make using HMB unnecessary. (And even without such a statement, it's trivial to inspect HMB configuration from software, since the drive has to ask the OS to give it a buffer to use, and the OS gets to choose how much memory to give the SSD access to.)

    None of the above buffers have anything to do with executing SSD controller firmware; that's always 100% on-chip even for drives that have multi-GB DRAM on board. SSDs use discrete DRAM or HMB or (in this case) on-controller buffers to cache the flash translation layer's mappings between logical block addresses and physical NAND locations.
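    To make the "trivial to inspect" part concrete, here is a hypothetical check on a Linux box with nvme-cli installed. The Host Memory Buffer is NVMe Get Features feature ID 0x0d, so a drive that negotiated an HMB reports a nonzero enable bit and buffer size; the device path /dev/nvme0 is an assumption for whatever drive you're probing:

```shell
# Query the Host Memory Buffer feature (0x0d) in human-readable form.
sudo nvme get-feature /dev/nvme0 -f 0x0d -H

# Identify Controller also reports the preferred/minimum HMB sizes the
# drive requests (the HMPRE / HMMIN fields); zero means no HMB request.
sudo nvme id-ctrl /dev/nvme0 | grep -i hm
```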
