
  • lilmoe - Thursday, August 11, 2016 - link

    It's all about price, folks. On the consumer end, that is.
  • ImSpartacus - Thursday, August 11, 2016 - link

    Yeah, I don't see much changing there except a steady march to the bottom. All these are pro/high end products.
  • name99 - Friday, August 12, 2016 - link

    Consumer is a big space. Are Apple products consumer? Because they're a possible target for this sort of thing. (Depending on how Intel prices XPoint, however, THAT may be enterprise only for two or three years.)
  • plopke - Thursday, August 11, 2016 - link

    The more I see from Optane, QuantX, Z-SSD, ... the more I wonder what the benefit is for mainstream consumer products: too expensive, and coming as a caching tier for enthusiast consumers who can buy 16/32GB of DDR4 quite cheaply plus very quick SSDs instead of spending money on this.

    But at the enterprise level it will be fun to watch how this all evolves.
  • lefty2 - Thursday, August 11, 2016 - link

    I don't think anyone ever said that they have any benefit for mainstream consumer products. It's mainly for the enterprise and enthusiast markets.
    Also, Samsung only talks about performance. 3D XPoint has 1000 times more longativity than normal NAND, and that factor is just as important as the performance. So, what's the longativity of Z-NAND?
  • plopke - Thursday, August 11, 2016 - link

    You are right, I guess Intel is the only one who might pitch this to consumers atm.
  • ddriver - Thursday, August 11, 2016 - link

    Well, if it is indeed SLC, then it could handle 100k+ p/e cycles. That's a LOT more than MLC flash.

    SLC's problem was low capacity, which samsung can solve by stacking dies into a single package. I am surprised they took so long, but then again, it is not as cost effective as MLC and TLC stacking, so it makes sense they didn't do it for consumer drives. But now that the enterprise market is gearing up for a transition to flash, and with xpoint on the horizon, the fat profit margins of the enterprise market have made stacking SLC very viable.
  • bcronce - Thursday, August 11, 2016 - link

    3D V-NAND TLC has between 10k and 70k p/e cycles, depending on whether they optimize the firmware for speed or durability. I haven't heard how many cycles SLC 3D V-NAND has; I assume more. The main point is modern MLC can have about the same number of cycles as last-gen SLC.
  • ats - Thursday, August 11, 2016 - link

    This is completely false on so many levels. Modern NAND and 3D NAND have significantly LESS real endurance than NAND from 5-10 years ago. What you are actually seeing are advances in redundancy and error correction codes. Those old numbers for, say, 50/40nm flash were RAW P/E cycles. You could hook the flash up to any interface and it would give at least those numbers of P/E cycles. What is being quoted for modern flash is not RAW P/E values but calculated P/E values assuming a certain redundancy and a certain level of error correction (generally quite complex, taking microseconds to compute).
  • frenchy_2001 - Thursday, August 11, 2016 - link

    Nope.
    What has happened is that 3D NAND took multiple steps back in XY lithography and then went 3D (up) and stacked layers.
    So, recent 3D NAND is back on a 40nm process (instead of the 16nm process for planar NAND). This gives them *MUCH* bigger cells, thicker walls and, as such, better endurance. Process improvements helped too.
    Basically, 3D NAND is back to RAW p/e cycles superior to what we had in the past, just due to cell size. On top of that, they now have better error correction and better wear leveling and controllers. This is why even TLC in 3D NAND has decent endurance.
  • ats - Thursday, August 11, 2016 - link

    3D NAND isn't that much of a panacea. Raw cell endurance is still less than historical 40nm cell endurance.
  • extide - Thursday, August 11, 2016 - link

    3D NAND did reset that clock a bit though because they went UP in feature size.
  • ddriver - Thursday, August 11, 2016 - link

    Complete BS.

    SLC - 50k to 100k+
    MLC - 3k to 10k
    TLC - 0.5k to 1k

    Process also matters: as the node size shrinks, the wear increases. Stacking nand dies vertically doesn't do any magic in terms of endurance; it might even decrease it a little.
  • revanchrist - Thursday, August 11, 2016 - link

    1000X my ass, those numbers are total marketing bshit to create hype. Read this article: http://www.tomshardware.com/news/intel-micron-3d-x... and you can see that "Micron also revealed that its first QuantX SSDs would feature 25 DWPD (Drive Writes Per Day) of endurance over a five year period with the first generation of 3D XPoint. In comparison, some enterprise SSDs based on MLC NAND provide up to 10 DWPD, whereas TLC NAND SSDs provide between <1 to 5 DWPD of endurance."
  • ddriver - Thursday, August 11, 2016 - link

    Yeah, it seems optane is mostly hype, density is cr@p, performance and endurance are like 2-3x better than current MLC PCIe SSDs, nowhere near the 1000x intel claims. By the time it becomes widely available, samsung will have an easy time competing with stacked SLC.
  • fanofanand - Thursday, August 11, 2016 - link

    There is a certain level of "honesty" that must be utilized in these presentations, and I'm sure Intel toed that line carefully. Typically they will say "up to" and then quote some insane number. Well maybe they are comparing it to the first OCZ SSD ever made? So in that sense it isn't BS, but it also isn't what most people would consider "honest". It's all about staying within the legal boundaries.
  • ddriver - Thursday, August 11, 2016 - link

    Even if we assume intel based their claims on some highly isolated, practically irrelevant in real world applications test, it is still highly suspicious that such tests would come with EXACTLY 1000x faster and more durable than nand, not 905, not 982, exactly 1000 for both metrics. Highly unrealistic. Also, they clearly don't have an "up to":

    http://www.intel.com/content/www/us/en/architectur...

    It says "1000x faster than nand" and "1000x endurance of nand", there are no "up to"s. And I in turn say "total BS".
  • ddriver - Thursday, August 11, 2016 - link

    I'd honestly be surprised and impressed if it is actually twice as good as what nand SSDs will be able to offer by that time and at that price point in real world scenarios. That would be more in line with the lazy mediocre intel we've been seeing the last several years.
  • smilingcrow - Thursday, August 11, 2016 - link

    I think the latency is going to be significantly better. There was a very recent quote, supposedly from Facebook, who have been testing the technology, claiming very useful real-world gains. Not even 10x, but in the real world you rarely get even 10x gains except in benchmarking tools.
  • smilingcrow - Thursday, August 11, 2016 - link

    Facebook revealed its performance results with an Intel Optane SSD prototype which tripled the number of transactions that the company achieved with a normal P3600 NAND SSD during a RocksDB throughput test, but more importantly, Optane reduced the 99.99th percentile results (worst-case latency) by more than 10X.

    http://www.tomshardware.com/news/intel-micron-3d-x...
  • fanofanand - Thursday, August 11, 2016 - link

    You should definitely contact your state attorney general and show them the "facts" you have put together, go after them for false advertising! I'm sure you have FAR more industry insight than little ol' Intel.
  • shabby - Thursday, August 11, 2016 - link

    "Micron representatives indicated that the company based these statements on the speed of the actual storage medium (at the cell level) before it is placed behind the SSD controller, firmware, software stack and drivers. These factors conspire to increase latency and decrease the performance of the final solution (i.e., the SSD)."

    http://www.tomshardware.com/news/intel-micron-3d-x...
  • ddriver - Thursday, August 11, 2016 - link

    Yeah, that's like claiming you have an engine that can do a 1000 miles on a gallon of fuel. But hey, that's the engine driving itself only, put it in a vehicle and suddenly it is no better than your average engine...

    BTW nowadays almost everyone false-advertises, and nobody seems to care. It is practically trivial to bypass the laws allegedly aiming to stop false advertising.
  • close - Thursday, August 11, 2016 - link

    The controller and firmware optimizations can make or break any storage. That's why Samsung's 840 EVO can be either a great or a shitty product depending on which stage of its life you happened to use it in.

    Don't judge a preview sample before the product (and technology) has time to properly mature. Or at least until the 5.25" version comes out ;).
  • ddriver - Thursday, August 11, 2016 - link

    I hope you do realize that the moment I have money to throw away, I will R&D a 5.25" HDD, if only to break it over your head ;) I mean come on, don't you really have anything better than your belief that a 5.25" HDD doesn't make sense, based only on the fact that the lazy and greedy industry can't be bothered to make it?

    There is nothing preventing nand from benefiting massively from improved controllers as well, caching and increasing parallelism. It can work miracles in terms of bandwidth and iops, and can hide a lot of the latency too. By the time xpoint "matures" it will be on par with contemporary nand SSDs. But hey, when intel said "1000x better" they probably meant **any** nand storage, say like a good old SD card LOL.
  • name99 - Friday, August 12, 2016 - link

    No it's not. I hate Intel as much as the next person, but you are being deliberately obtuse here.
    Optane in SSDs is a tech demo, it's not the end game. The whole point is that it CAN run "as a pure engine" without SSD controller, software stack, etc. Flash CANNOT do that, and can't be improved to do that.
  • Xanavi - Thursday, August 11, 2016 - link

    You don't seem to understand resistive technology at all then. It is a material that changes its physical state; it will literally hold its state forever, and will not wear out anywhere near as fast as NAND does from holding electrons. Their concern is the parts that change the state via hotspots; that part may wear out closer to 1000x better, but the element itself? More like one MILLION times better.
  • Xanavi - Thursday, August 11, 2016 - link

    As for the speed, the element can change state in nanoseconds; it is likely the controller and interface that are limiting its potential at this time. You have no idea how different this technology is and what it can do in the future. Read up brah.
  • JoeyJoJo123 - Thursday, August 11, 2016 - link

    Yeah, they should take a cue from nVidia about honesty. You know, how completely "honest" they were about a GTX 970 having "4GB" of memory.
  • name99 - Friday, August 12, 2016 - link

    Missing the point. The value of optane/XPoint is in the direct memory bus attachment. Using it to create SSDs is dumb. Flash is harder to attach to the memory bus directly because of the page size issue --- you'd need to cut the page size down to something like 128 bytes and that is surely not practical.

    Putting persistent storage directly on the memory bus allows for essentially in-memory database speeds, without the cost of hitting the OS and IO systems to persist data. THAT is why optane is interesting.
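    A rough sketch of that difference in Python: persisting a record through the OS I/O stack versus storing into a memory-mapped region. The path and sizes are made up, and flush() here merely stands in for the user-space cache-line flushes a real persistent-memory library would issue on a DAX mapping.

        # Sketch only: contrasts the syscall-per-persist path with the
        # memory-mapped path that memory-bus-attached storage enables.
        import mmap, os

        fd = os.open("/mnt/pmem/journal.bin", os.O_CREAT | os.O_RDWR, 0o644)
        os.ftruncate(fd, 4096)
        record = b"txn=42 state=committed"

        # Traditional path: every persist is a trip through the OS and I/O stack.
        os.pwrite(fd, record, 0)
        os.fsync(fd)

        # Memory-mapped path: ordinary CPU stores into the mapping, then a flush.
        buf = mmap.mmap(fd, 4096)
        buf[0:len(record)] = record   # plain store into (notionally persistent) memory
        buf.flush()                   # stand-in for CLWB/SFENCE on real pmem hardware
        buf.close()
        os.close(fd)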
  • ats - Thursday, August 11, 2016 - link

    DWPD tells you mostly jack about the underlying storage's endurance. DWPD is heavily influenced by over provisioning and write amplification. Without knowing the OP and WA of the drives in question, using DWPD is purely meaningless. Hell, I can take just about any consumer drive and get it to 10k DWPD pretty easily; send me the worst TLC drive out there and I can make it into a 100k DWPD drive using simple command lines.
  • Kristian Vättö - Thursday, August 11, 2016 - link

    100,000 DWPD? That's not even possible. With a 128GB drive, you would have to write 148GB/s to fill the drive 100,000 times in one day.

    DWPD is what counts in the real world. Sure, it's influenced by OP and WA, but you can't get rid of OP and WA when dealing with an actual drive. The user, be that a consumer or an IT architect, only cares about how much they can write to the drive, not how many P/E cycles the underlying memory technology has.
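    As a back-of-the-envelope sketch of that arithmetic (Python; the 25 DWPD comparison point is the Micron figure quoted earlier in the thread):

        # Sustained write speed needed to reach a given DWPD rating,
        # assuming continuous writing for 24 hours and write amplification of 1.
        def required_write_speed_gbps(capacity_gb, dwpd):
            seconds_per_day = 24 * 60 * 60   # 86,400 s
            return capacity_gb * dwpd / seconds_per_day

        print(required_write_speed_gbps(128, 100_000))  # ~148 GB/s, beyond any drive interface
        print(required_write_speed_gbps(128, 25))       # ~0.04 GB/s, easily sustainable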
  • fanofanand - Thursday, August 11, 2016 - link

    Stop with the facts, his emotional rant was far more interesting.
  • MrSpadge - Thursday, August 11, 2016 - link

    He was surely referring to increasing OP manually, which makes the amount of writes needed for a "full drive write" progressively smaller and helps with WA. Pushing this to the extreme doesn't make sense - but if a manufacturer chose to do it, they could claim a huge DWPD number.
  • fanofanand - Thursday, August 11, 2016 - link

    Which would still be physically impossible to do 100,000 times per day.
  • thetuna - Thursday, August 11, 2016 - link

    Imagine a 1B drive with 1TB of over provisioning.
    Obviously ridiculous, but not physically impossible.
  • ats - Thursday, August 11, 2016 - link

    Take a 128GB drive. Provision it to 1.28GB. 100k DWPD done. (This is in reality how a lot of actual enterprise drives are made: massive over-provisioning.)

    The point being that DWPD isn't an endurance specification but a warranty specification and there are numerous ways to shift the number.
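    A minimal sketch of that effect (Python; the 3,000 P/E cycles, five-year window and write amplification of 1 are assumed purely for illustration): the raw endurance budget stays fixed while the "drive write" that DWPD counts shrinks with the exposed capacity.

        # DWPD as a function of how much of the raw NAND is exposed to the user.
        def dwpd(raw_gb, exposed_gb, pe_cycles=3000, warranty_years=5):
            endurance_gb = raw_gb * pe_cycles              # total writes the flash can absorb
            per_day_gb = endurance_gb / (warranty_years * 365)
            return per_day_gb / exposed_gb                 # smaller exposed capacity => bigger DWPD

        print(dwpd(128, 128))    # ~1.6 DWPD with the full 128GB exposed
        print(dwpd(128, 1.28))   # ~164 DWPD with the same flash, 99% over-provisioned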
  • MrSpadge - Thursday, August 11, 2016 - link

    Please read the article you just linked. Micron provides fine answers to the performance and endurance question. In short: approximately 1000x is what the cell can do, whereas 25 DWPD and the teased performance is what the current product is engineered for.
  • JoeyJoJo123 - Thursday, August 11, 2016 - link

    WTF is longavity?
  • ddriver - Thursday, August 11, 2016 - link

    No need to wonder, the benefit will be zero. Mainstream applications break free from the storage bottleneck by means of a single SATA SSD, adding faster storage does next to nothing, since the bottleneck is now the CPU. It is actually a good thing those will be pointless for consumers and prosumers, since they will be too expensive to afford anyway, so people won't be that bummed about not being able to afford them.
  • Xanavi - Friday, August 12, 2016 - link

    Driver you obviously have a problem with Intel and it's quite sickening. Buy AMD, take a breather. Intel is going to sell so many of these that the cash will stack to heaven and make you so envious you'll puke your guts all over the sidewalk. LOL
  • edzieba - Thursday, August 11, 2016 - link

    "a new non-standard oversized M.2 form factor 32mm wide and 114mm long, compared to the typical enterprise M.2 size of 22mm by 110mm."

    It is rather odd: 30mm widths are part of the m.2 standard, so a 30110 drive would be only barely smaller than their nonstandard '32114' drive, and completely compatible with many current m.2 slots. Even weirder, the pictured mockup shows the m.2 connector stretched to 32mm wide, breaking compatibility with m.2 (which maintains connector width regardless of PCB width). Why they would even call such a completely incompatible drive 'm.2' rather than just calling it a PCIe x4 card doesn't make much sense.
  • johnp_ - Thursday, August 11, 2016 - link

    My understanding is that it's some kind of "Enterprise M.2" that supports hot-plug/-swap and front panel access. I assume that the current M.2 connectors physically can't handle that and therefore need a slight redesign.

    The form factor is called M.2 32114 and allows 1U servers with 30+ front panel slots (too lazy to count)

    Relevant slide (missing here): https://www.computerbase.de/bildstrecke/73916/9/
  • johnp_ - Thursday, August 11, 2016 - link

    Well I'm an idiot. Overlooked the big "x 32ea" m(
  • johnp_ - Thursday, August 11, 2016 - link

    This presentation states on page 10:

        Connector contacts have extremely small pitch, making hot-plug “impossible”. Ground pins are not extended, as on larger form factors.

    http://www.snia.org/sites/default/files/12May%20M....
  • SunLord - Thursday, August 11, 2016 - link

    It's a non-standard size type, but it conforms to the M.2 standard for the connector, and really all it would need is for other manufacturers to add support for the mount screw at 114mm and leave more spacing on the sides to support it. Really, they should have gone longer than 114mm, something like a 30122, given that length is less likely to be an issue compared to width.
  • edzieba - Thursday, August 11, 2016 - link

    "It's a non-standard size type but it conforms to the M.2 standard for the connector and"
    Not according to the posted illustration in the presentation.
  • extide - Friday, August 12, 2016 - link

    Normal M.2 connectors are 22mm wide, there is no way a 30mm wide M.2 stick would fit in those connectors. Sure, 30mm may be part of the existing standard, but would still need a different connector.
  • extide - Thursday, August 11, 2016 - link

    Wow, Samsung is SO DAMN NERVOUS about 3D Xpoint! That's crazy they whipped up some competing solution from stuff laying around, I wonder if it is just some spiffed up NAND or some truly new technology like 3D XPoint. Hrmmm
  • haukionkannel - Friday, August 12, 2016 - link

    Current estimates are that they have increased the number of lines (normally 4) to increase parallelism and use MLC memory in SLC mode. So they practically use only half of the capacity of the NAND to increase the endurance and speed. (Same method that is normally used on a small capacity as a cache.)
  • Xanavi - Friday, August 12, 2016 - link

    Totally right on the sweating bullets; let's pull out some SLC and steal some thunder. Samsung will try to steal or license this shit as soon as possible before NAND is dead.
