Just a couple of months after launching its first SATA SSD controller developed in-house, OCZ is announcing a PCIe-based SSD controller co-developed with Marvell. The controller is based on Marvell's 88NV9145 silicon, codenamed Kilimanjaro, and is an OCZ exclusive, as the two companies apparently worked together in creating it. I'll see the chip in person next week at Storage Visions (just before CES); it should carry both OCZ and Marvell logos. It looks like there will be an OCZ-derived version of this chip as well as a Marvell-branded part that will be available for others to use.

The controller itself features a native PCIe 2.0 x1 interface rather than SATA. That alone isn't very impressive, but the first platform to use it will put an array of these controllers behind a PCIe switch. That platform is the OCZ Z-Drive R5, which will be available in MLC, eMLC and SLC NAND configurations of up to 12TB.

OCZ is claiming compatibility with VMware ESX/ESXi, Linux, Windows Server 2008 and OS X. Both full- and half-height configurations will be available, similar to the Z-Drive R4.

I'm curious as to why OCZ and Marvell decided to design a native PCIe-to-NAND Flash controller but limited it to an x1 interface. Ideally we'd see something like a native x4, x8 or x16 controller, especially given how much bandwidth you can push through these large NAND arrays. I'll find out more next week for sure, but I wonder if the target market for this controller might be something beyond a multi-controller PCIe card.
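
For a sense of scale, here's a rough back-of-the-envelope comparison of raw PCIe 2.0 link bandwidth at various lane counts against SATA 6Gbps. These figures ignore packet and protocol overhead, so real-world throughput is lower; it's just a sketch of the theoretical ceilings.

```python
# Rough, overhead-free bandwidth comparison: PCIe 2.0 links vs. SATA 6Gbps.
# Both interfaces use 8b/10b line coding, so usable bits are 80% of raw bits.

PCIE2_GT_PER_S = 5.0          # PCIe 2.0 signaling rate per lane (GT/s)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding overhead

def pcie2_mb_per_s(lanes: int) -> float:
    """Raw per-direction bandwidth of a PCIe 2.0 link, in MB/s."""
    # 5 GT/s * 0.8 = 4 Gb/s = 500 MB/s per lane, per direction
    return lanes * PCIE2_GT_PER_S * ENCODING_EFFICIENCY * 1000 / 8

SATA3_MB_PER_S = 6.0 * ENCODING_EFFICIENCY * 1000 / 8  # ~600 MB/s

for lanes in (1, 4, 8, 16):
    print(f"PCIe 2.0 x{lanes:<2}: ~{pcie2_mb_per_s(lanes):5.0f} MB/s per direction")
print(f"SATA 6Gbps  : ~{SATA3_MB_PER_S:.0f} MB/s")
```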

Update: Marvell released more details about the 88NV9145 silicon. Each PCIe 2.0 x1 controller (pictured above) supports four NAND channels and up to four NAND die per channel. Using 8GB NAND die, that works out to a maximum of 128GB of NAND per controller (it would take roughly 96 of these controllers to hit 12TB!). Marvell claims the controller is good for up to 93,000 4K random read IOPS or 70,000 4K random write IOPS.
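
To put those numbers in perspective, here's a quick sketch of the capacity and throughput math. It assumes binary units (1TB = 1024GB); whether OCZ's 12TB figure is decimal or binary isn't specified.

```python
# Capacity and throughput math for the 88NV9145 figures quoted above.
# Assumes binary units (1TB = 1024GB); OCZ's 12TB spec may well be decimal.

CHANNELS_PER_CONTROLLER = 4
DIE_PER_CHANNEL = 4
GB_PER_DIE = 8

gb_per_controller = CHANNELS_PER_CONTROLLER * DIE_PER_CHANNEL * GB_PER_DIE
controllers_for_12tb = (12 * 1024) // gb_per_controller

print(f"NAND per controller : {gb_per_controller} GB")    # 128 GB
print(f"Controllers for 12TB: {controllers_for_12tb}")    # 96

# Translate the claimed 4KB random IOPS into equivalent bandwidth.
READ_IOPS, WRITE_IOPS = 93_000, 70_000
KB_PER_IO = 4

print(f"4K random read : ~{READ_IOPS * KB_PER_IO / 1024:.0f} MB/s")   # ~363 MB/s
print(f"4K random write: ~{WRITE_IOPS * KB_PER_IO / 1024:.0f} MB/s")  # ~273 MB/s
```

Both throughput figures fit comfortably within the roughly 500MB/s that a single PCIe 2.0 lane offers in each direction, which may go some way toward explaining the x1 choice.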

Comments

  • NickB. - Friday, January 6, 2012

    Doesn't a single PCIE 2.0 channel max at 500MB (Bytes, not bits) per second, in each direction... and doesn't SATA 3 max at about 600MB total? Seems *a little* apple/orange-ish to me, but anyway...

    The thing I'm curious about is latency. For a normal PCIE SSD, data going between the processor and the flash has to go through the onboard chipset, then the PCIE bridge chip on the SSD, which then passes it to the SATA-based controller. Sure, savings for the manufacturers in component counts will be a bonus, and this setup should use less power, but there should be some improvement in latency as well.

    Also, isn't there a PCIE interface directly on the Sandy and Ivy Bridge chips? In most of the architecture diagrams I've seen it shows it as an x16 interface for graphics cards but is there anything keeping someone from using that for an SSD - like, say, in an Ultrabook that uses the integrated graphics anyway?
  • xdrol - Friday, January 6, 2012

    OCZ's previous PCIe SSD cards had a PCIe x4 interface - they could build such a card with four of these modules. That's a total of 2 GB/s, versus SATA3's 600 MB/s.

    And nothing stops them from making an x16 card (8 GB/s).

    See the second figure at the bottom of the article - 8 modules on a PCIe x8 card.
  • NickB. - Friday, January 6, 2012

    Yup, I think that might have been added after I posted... or I just missed it :)

    Either way, this is interesting. Could be an alternative to mSATA... maybe?

    Maybe it's wrong of me, but for some reason I have it in my head that this could allow for the disk equivalent of moving the memory controller onto the CPU from the motherboard chipset. CPU->SSD controller directly via PCIE for up to 128GB... or CPU->PCIE switch->SSD controller for over 128GB. I keep thinking this could give you a screaming fast machine because of the latency improvements... but if not, at the very least it could allow for cheaper SSD-based machines with the storage built onto the motherboard.
  • ericloewe - Friday, January 6, 2012

    Apple uses those PCI-E lanes for graphics (8x) and Thunderbolt controllers (4x per controller, allowing up to two, as in the big iMac).

    So, I'd say they can be used for everything, which makes sense intuitively. Graphics benefits a lot because it's high-bandwidth and very latency-sensitive, but storage should benefit just as much.
  • MGSsancho - Saturday, January 7, 2012

    Plus it allows for a lot of flexibility in configurations for many companies. Not every product shipped will saturate all lanes 100% of the time. A server might need those PCIe lanes for storage and networking while forgoing display adapters.
  • Matysek - Friday, January 6, 2012

    we're doomed.
  • FunBunny2 - Friday, January 6, 2012

    What do you mean "we", Kemosabe?
  • FunBunny2 - Friday, January 6, 2012

    My recollection of PCIe SSDs (at least the Fusion-io variety) was that the card was (to some delta) just raw NAND, with a software "controller" loaded into main memory and executed by the CPU. That offloading was (and still is?) a source of controversy.

    Anand: how about a refresher on how PCIe SSDs are implemented? Frankly, this doesn't make sense.
  • nubie - Friday, January 6, 2012

    At a consumer level this makes perfect sense.

    Extra lanes on an entry-level motherboard are going to be x1 only, and the same goes for laptops with Mini-PCIe.

    I am hoping there will be x1 and Mini-PCIe versions; you could make some very svelte ITX or wearable PCs if it came in that form factor.

    Don't forget that small and quick is sometimes a goal, not just blazing speed above all else. This is a good baby step - you have to start somewhere, and KISS is a good principle.
  • MGSsancho - Saturday, January 7, 2012

    Exactly. Or mobos with x4 lanes.
