Micron Announces PCIe 4.0 Client SSDs
by Billy Tallis on June 1, 2021 9:10 PM EST
In Micron's keynote today at (virtual) Computex, the memory manufacturer announced that it has started shipping the company's first PCIe 4.0 SSDs, using their latest 176-layer 3D TLC NAND flash memory. The two new product families are the Micron 3400 and 2450 series client SSDs.
The 3400 series is their high-end client SSD, with double the read throughput of its predecessor, the Micron 2300, and 85% higher write throughput. The 3400 uses Micron's latest in-house SSD controller design, and Micron is touting performance and power efficiency that make the drive suitable for applications ranging from notebooks to workstations. As is typical for high-end client PCIe 4.0 SSDs, the capacity options start at 512GB and go up to 2TB.
The Micron 2450 series is a more entry-level design, but still features PCIe 4.0 support. This one uses a third-party DRAMless controller, likely the Phison E19T (also believed to be used in the recently-announced WD Black SN750 SE). The 2450 is available in three different M.2 card lengths, from the usual 80mm down to the 30mm card size suitable for extremely compact systems. The Micron 2450 series covers the more mainstream capacity range of 256GB through 1TB.
The most highly-awaited products with Micron's 176L 3D TLC might be the upcoming refreshed Phison E18 drives that threaten to dominate the high-end market segment, but Micron's own 176L SSDs will help bring this latest generation of NAND to a wider range of products, including pre-built systems where OEMs seldom offer options quite as high-end as a Phison E18 drive. Micron's new client SSDs are already in volume production and shipping to customers.
Comments
mode_13h - Tuesday, June 1, 2021
That's how I'd usually "overprovision" them.
Billy Tallis - Wednesday, June 2, 2021
Crucial products started to diverge from Micron client drives around the MX500, when Crucial switched to using Silicon Motion controllers while Micron client drives kept using Marvell. On the NVMe side, Micron rolled out their in-house controllers to the client OEM products well before Crucial started using them, but now they're both using a mix of in-house and third-party NVMe controllers.
mode_13h - Friday, June 4, 2021
Thanks!
DigitalFreak - Wednesday, June 2, 2021
IIRC, Crucial is for end users and Micron is for OEMs.
bug77 - Wednesday, June 2, 2021
Crucial is just the retail division of Micron.
As for the components traditionally used, that doesn't mean much these days, when availability usually trumps everything else.
mode_13h - Tuesday, June 1, 2021
Let's see some SLC or MLC drives, using that same NAND. I'd give up the capacity for better endurance, sustained writes, and longer data retention.
Wereweeb - Wednesday, June 2, 2021
Just buy the Chia mining drives. The "premium" ones are all essentially QLC SSDs in pSLC mode.
antonkochubey - Wednesday, June 2, 2021
What are you doing that necessitates more endurance than those drives offer?
Moreover, if your workload is really that endurance-heavy, why are you complaining about consumer-grade SSDs instead of looking at industrial ones (all of which, by the way, are also TLC nowadays)?
mode_13h - Wednesday, June 2, 2021
I just had to recover data off the PC of a former coworker whose machine had been unplugged for the past 2 years. If he'd had a QLC or maybe even TLC drive, that data would be gone!
As for endurance, I've not burned out a drive yet, but virtually all of mine are SLC or MLC. Workloads include lots of software builds and database I/O.
I don't like how we have to pay an extra premium for write-oriented datacenter SSDs, just to get SLC or MLC. It could even be a configurable option in the firmware, to put an SSD into SLC or MLC mode.
mode_13h - Friday, June 4, 2021
BTW, it was a Micron 1100 SATA drive, which apparently uses TLC. The drive had been powered off for about 3 years!
I know it wasn't too far from the cliff, however, because some of the blocks that hadn't been written by the filesystem did indeed have read errors when I ran badblocks.