Apex Storage X21 Carries 21 M.2 SSDs: 168 TB of NAND at up to 31 GB/sec
by Anton Shilov on March 15, 2023 4:00 PM EST
Riser cards carrying multiple M.2-2280 SSDs are nothing new, but Apex Storage went above and beyond with its new X21 board, which can carry up to 21 M.2 drives. The add-in card is aimed at applications that benefit from both high storage capacity and high performance, including databases as well as machine learning workloads. Fully populated and fully utilized, the X21 can hold 168 TB of M.2 storage, running at a peak sequential read/write speed of up to 31 GB/s.
The Apex Storage X21 is a dual-PCB add-in card with a PCIe x16 interface that is based around an unspecified PCIe Gen4 switch system (ed: and what looks like 2 switches) covered with a large heatsink. The AIC has 21 M.2-2280 slots for SSDs, and using 8 TB M.2 drives it can offer up to 168 TB of storage capacity when fully populated – and twice that once 16 TB M.2 SSDs become available.
Apex Storage says that when equipped with fast SSDs, an X21 will be able to offer sequential read speeds of up to 30.5 GB/s as well as sequential write speeds of up to 28.5 GB/s, fully saturating the PCIe 4.0 x16 interface and outpacing any single SSD available today. The X21 riser card also promises over 10 million combined random read/write IOPS (7.5M read IOPS, 6.2M write IOPS) as well as average latencies of 79µs and 52µs for reads and writes, respectively.
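To put the quoted figures in context, here is a minimal back-of-the-envelope sketch of the theoretical PCIe 4.0 bandwidth involved. The 16 GT/s per-lane rate and 128b/130b encoding are standard PCIe 4.0 parameters; the drive count and lane widths are taken from the article.

```python
# PCIe 4.0 signals at 16 GT/s per lane with 128b/130b line encoding.
GT_PER_LANE = 16e9          # transfers per second per lane
ENCODING = 128 / 130        # 128b/130b line-code efficiency
UPLINK_LANES = 16           # the card's host-facing x16 link

uplink_gbps = GT_PER_LANE * ENCODING * UPLINK_LANES / 8 / 1e9
print(f"PCIe 4.0 x16 theoretical ceiling: {uplink_gbps:.1f} GB/s")  # ~31.5 GB/s

# Aggregate drive-side bandwidth of 21 Gen4 x4 SSDs behind the switches:
drives, lanes_per_drive = 21, 4
drive_side_gbps = GT_PER_LANE * ENCODING * lanes_per_drive * drives / 8 / 1e9
print(f"Aggregate drive-side bandwidth: {drive_side_gbps:.0f} GB/s")  # ~165 GB/s
```

The drive-side aggregate is roughly five times the uplink, which is why the host-facing x16 link, not the SSDs, is the bottleneck in this design.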
In fact, given the total number of drives used, even previous-generation SSDs with a PCIe 3.0 x4 interface should be able to deliver very high performance in this setup. Still, to get the maximum performance and take advantage of the NVMe 2.0 protocol, Apex Storage recommends using drives with a PCIe 4.0 x4 interface and an up-to-date feature set.
Of course, 21 M.2 SSDs consume a lot of power, so the card comes with two six-pin auxiliary PCIe power connectors that can deliver up to 150W to the card. It is unclear how much power the X21 AIC can draw from the PCIe slot as well, but in theory the combination of the two connectors and the slot can provide up to 225W to the drives, or a bit over 10 Watts per drive. The cramped card doesn't offer much in the way of cooling, however, so Apex Storage recommends using external fans that provide airflow of at least 400 LFM – essentially requiring a server-style blower configuration.
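The power budget above follows from the standard PCIe limits, sketched here for clarity (75W per six-pin auxiliary connector and up to 75W from an x16 slot are the usual PCIe CEM figures, assumed rather than confirmed by Apex Storage):

```python
# Standard PCIe power limits (assumed, not vendor-confirmed for this card):
AUX_CONNECTOR_WATTS = 75    # per six-pin auxiliary connector
SLOT_WATTS = 75             # typical maximum for an x16 slot
aux_connectors = 2
drives = 21

total_watts = aux_connectors * AUX_CONNECTOR_WATTS + SLOT_WATTS  # 225 W
per_drive = total_watts / drives                                 # ~10.7 W per drive
print(f"Total budget: {total_watts} W, ~{per_drive:.1f} W per drive")
```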
Apex Storage's X21 adapters can work in multi-card configurations provided that their host has enough PCIe Gen4 lanes. With a multi-card storage subsystem we are looking at up to 107 GB/s sequential read performance as well as 70 GB/s sequential write performance, which is comparable to the throughput offered by all-flash arrays that are typically considerably bigger.
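As a rough illustration of what "enough PCIe Gen4 lanes" means, here is a hedged lane-budget sketch for a hypothetical host with 128 Gen4 lanes (a typical figure for a modern single-socket server CPU; Apex Storage does not state how many cards produced the quoted multi-card numbers):

```python
# Hypothetical host: 128 PCIe Gen4 lanes (assumption for illustration).
host_lanes = 128
lanes_per_card = 16
per_card_read_gbps = 30.5   # quoted per-card sequential read speed

max_cards = host_lanes // lanes_per_card
linear_ceiling = max_cards * per_card_read_gbps
print(f"Up to {max_cards} cards fit the lane budget")           # 8
print(f"Linear-scaling read ceiling: {linear_ceiling:.0f} GB/s")
```

The quoted 107 GB/s multi-card figure sits well below a perfectly linear ceiling, which is expected once host memory bandwidth and software overhead enter the picture.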
Apex Storage did not disclose an availability timeframe or pricing for its X21 card.
jdq - Wednesday, March 15, 2023 - The Apex Storage website states that the card is bootable, and that "Pricing is $2,800.00 USD with volume discounts available."
ballsystemlord - Wednesday, March 15, 2023 - What size of SSDs does that $2,800 card come with?
rpg1966 - Thursday, March 16, 2023 - lol
Athlex - Wednesday, March 15, 2023 - This is very cool. I have their original 16-SSD SATA design but an NVMe version is the dream...
The older one isn't listed, but it was sold here:
The Von Matrices - Wednesday, March 15, 2023 - 21 is a strange number of drives to support. I know there are Microchip PCIe switches that support 100 lanes, which would be a perfect number for the 84 disk + 16 interface lanes, but this card has two switch chips. Maybe 2 x 68-lane switches with a 16-lane connection between them and 4 lanes going unused?
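The lane arithmetic in this comment can be checked quickly; the 68-lane switch count and 16-lane inter-switch link are the commenter's guesses, not confirmed specifications:

```python
# Commenter's two-switch hypothesis (all switch figures are speculation):
drive_lanes = 21 * 4          # 84 lanes down to the M.2 slots
uplink_lanes = 16             # host-facing x16 link
interswitch = 16              # hypothesised link, consumes 16 lanes on each switch

total_switch_lanes = 2 * 68   # two 68-lane switches
used = drive_lanes + uplink_lanes + 2 * interswitch
print(total_switch_lanes - used)  # 4 lanes left over, matching the comment
```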
deil - Thursday, March 16, 2023 - Sounds reasonable. I assume that extra 4 lanes are dedicated to firmware updates.
abufrejoval - Saturday, March 18, 2023 - I hope to see a lot more variants of this concept, because lower-capacity and lower-PCIe-revision NVMe sticks keep piling up, which are simply too sad to waste for lack of lanes: recycling older SATA SSDs was a lot easier.
And to be honest, on AM4/5 platforms especially, I'd like to see an approach that takes advantage of the modularity of the base architecture, whereby the SoC simply offers bundles of lanes, which can then be turned into USB, SATA, and another set of lanes by what is essentially a multi-protocol ASMedia switch... which can even be cascaded.
Now whether that should be a mainboard with x4 PCIe slots instead of all these M.2 connectors, or whether you should use M.2-to-PCIe connectors for these break-out switched expansion boards, is a matter of taste, form factors, and volumes.
Well, at least at PCIe 5.0 speeds trace length could be another factor, but from what I see on high-end server boards, cables make trace lengths more manageable than PCBs with dozens of layers – though neither is likely a budget item.
Anyway, a quad-NVMe to single-M.2 (or even PCIe x4) board that could ideally even deliver PCIe 5.0 on the upstream port from four PCIe 3.0 1-2TB M.2 drives, for less than €100, should sell in significant volumes.
phoenix_rizzen - Friday, April 14, 2023 - With the right PLX switches, you could use a PCIe 5.0 x16 connection to the motherboard to provide 64 PCIe 3.0 lanes to drives. That's enough for 16 PCIe 3.0 x4 NVMe drives to run at full bandwidth.
In theory, anyway. :)
mariushm - Tuesday, March 21, 2023 - I'm surprised by the design choices. They could have aligned the SSDs on the daughter board to mirror the ones on the main board, which would perhaps allow a custom heatsink between the two boards, with holes through its center for airflow – use adhesive thermally conductive tape to make sure those SSDs are coupled to the heatsink, and let the airflow cool it (maybe optionally make the card longer and put a blower fan at the end). They could also have used the slightly more expensive polymer tantalum capacitors with a much lower profile, so that airflow isn't blocked by tall surface-mount capacitors.