Report: First Intel 3D XPoint products coming to the desktop

Rumor mill Benchlife.info claims to have new information about the first round of 3D XPoint "Optane" storage products for the consumer and enthusiast space. According to Benchlife's source, Intel will roll out its Optane Memory 8000p series devices in 16GB and 32GB capacities. Optane is not intended to be primary storage—Intel presents its NAND flash alternative as a sort of intermediary between system memory and storage.

As is the case with flash memory, the higher-capacity model offers additional performance, as shown in the table below. Read performance is 5% to 10% faster on the larger model, and write performance of the 32GB model is about 70% faster than its little brother's. Benchlife says that these initial consumer Optane products will communicate over two lanes of PCIe 3.0 and will come in M.2 2242 and 2280 physical packages.

Specification (unit)                Intel Optane 8000p 16GB   Intel Optane 8000p 32GB
Random 4KB read, up to (IOPS)       285,000                   300,000
Random 4KB write, up to (IOPS)      70,000                    120,000
Sequential 128KB read (MB/s)        1,400                     1,600
Sequential 128KB write (MB/s)       300                       500
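
For a rough sense of scale, the rumored figures convert to throughput like this (a back-of-the-envelope sketch; the ~985 MB/s-per-lane PCIe 3.0 number is the usual approximation, not something from the report):

```python
# Convert the rumored Optane 8000p figures above into rough throughput numbers
# and compare the sequential rates against a two-lane PCIe 3.0 link.
PCIE3_LANE_MBPS = 985            # approximate usable bandwidth per PCIe 3.0 lane
LINK_MBPS = 2 * PCIE3_LANE_MBPS  # the rumored x2 interface

def iops_to_mbps(iops, io_kib=4):
    """Throughput implied by an IOPS figure at a given transfer size (KiB)."""
    return iops * io_kib / 1024

for model, rd_iops, wr_iops, seq_rd in [
    ("8000p 16GB", 285_000, 70_000, 1_400),
    ("8000p 32GB", 300_000, 120_000, 1_600),
]:
    print(f"{model}: 4KB random read ~{iops_to_mbps(rd_iops):.0f} MB/s, "
          f"write ~{iops_to_mbps(wr_iops):.0f} MB/s; "
          f"sequential read is {seq_rd / LINK_MBPS:.0%} of the x2 link")
```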

Those who've been looking for a reason to drop their crusty old Sandy Bridge chips and upgrade to something new might finally have a reason. Benchlife reports that Optane will require Kaby Lake or newer Intel CPUs. Intel will reportedly be more open-armed with respect to operating systems—64-bit versions of Windows 7, 8.1, and 10 will all be supported.

Optane is Intel's brand name for products built on 3D XPoint, the non-volatile memory technology developed in its joint venture with Micron. The companies say 3D XPoint offers a generational leap in non-volatile memory, with density greater than DRAM's. Compared to NAND flash, 3D XPoint is said to offer substantially higher write endurance and performance potential. TechPowerUp expects Intel 8000p devices to ship before the end of the year.

Comments closed
    • evilpaul
    • 3 years ago

    HUGE NECRO, Batman!

    Do we have any more information on these things yet?

    One of the few benefits to having a huge amount of RAM (32GB counts as “huge” in this example) in a desktop system like mine that never uses nearly that much is that large files often end up cached (I’ve got ~20GB of stuff cached, 9.3GB in use right now). A 2GB 7zip file sitting on my WD Green storage drive can decompress in a few seconds if it’s sitting in otherwise idle RAM, compared to 30 seconds if it has to be read from the drive. Same goes with copying files. There’ll be a massive, physically impossible burst of speed that covers a decent chunk of the file’s transfer before slowing down to what the drive can actually handle. My system drive is an Intel 400GB 750 series, so it’s plenty speedy compared to the rest of my storage.

    Having 32GB of nonvolatile 1.6GB/s read cache or being able to write 500MB/s to the Optane cache as a powerloss safe intermediary before copying it to a slow mechanical disk would effectively hide most of the slowdown associated with the current SSD+HDD setup that most gamers and power users who can’t go to pure NVMe solid state storage currently deal with. I’d kind of like a second gen Optane 64GB with PCIe 3.0 x4 connectivity and 3.2GB/s read speeds, but baby steps, right?
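
    In rough code terms, the write-side idea described above is a persistent staging area in front of the slow disk; a minimal sketch, with made-up mount points, might look like this:

    ```python
    import os
    import shutil

    # Hypothetical mount points, purely to illustrate the staging idea above.
    FAST_CACHE = "/mnt/optane"     # small, fast, persistent (power-loss safe)
    SLOW_DISK = "/mnt/wd_green"    # big, slow mechanical drive

    def staged_write(name, data: bytes):
        """Land the write on the fast persistent device first and make it durable."""
        path = os.path.join(FAST_CACHE, name)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # durable on the cache: safe even if power drops now
        return path

    def flush_to_disk(name):
        """Later, e.g. from a background task, migrate the file to the slow disk."""
        shutil.move(os.path.join(FAST_CACHE, name), os.path.join(SLOW_DISK, name))
    ```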

    • willmore
    • 3 years ago

    Like many others here, I don’t see the point of offering a 16GiB or 32GiB standalone drive at this point. Motherboards set up to use something like that for caching aren’t very common. It’s possible that this is just meant to be a niche product, which speaks poorly to its cost effectiveness.

    What something like this makes sense for at this time is as a cache *on* an SSD. Intel makes flash as well as XPoint. Why not make a drive that has 16 or 32 GiB of this on it, with slow TLC backing it? They could use the XPoint as a cache/write aggregator to reduce the write amplification effect on the flash and *never have to worry about losing power* like a capacitor- or battery-backed device has to (or worse, devices with *no* backup of any kind).

    This could easily replace the SLC caching tricks that have been employed to mitigate the slowness of TLC.

    Then again, maybe Intel did the calculation and decided the cost savings of going from MLC to TLC would be more than eaten up by the Xpoint chips it would take to make the TLC perform as well or better than the MLC.

    • Kougar
    • 3 years ago

    A little disappointing if so. A PCIe x2 link is only 2GB/s, but those performance figures look like AHCI M.2 drives from three years ago. Throw in the expected price premium and meager 32GB capacity and it’s going to be no contest in the consumer market unless Samsung’s 960s start exploding.

      • Klimax
      • 3 years ago

      Reminder: Those numbers are at most with 2 chips.

        • Kougar
        • 3 years ago

        Look at the read performance scaling between 1 vs 2 chips though. That’s not encouraging.

        Also, it’s probably limited to two chips due to the price they are going to ask. I can’t imagine why they would even launch a 32GB model, let alone a 16GB one, unless the cost prohibits releasing larger-capacity consumer models.

      • TheJack
      • 3 years ago

      Yeah. Intel has a history of DOAing its own products by overpricing them.

    • UberGerbil
    • 3 years ago

    [quote<]Those who've been looking for a reason to drop their crusty old Sandy Bridge chips and upgrade to something new might finally have a reason. Benchlife reports that Optane will require Kaby Lake or newer Intel CPUs. [/quote<] But... the Sandy Bridge platform was where Intel debuted [url=http://www.intel.com/content/www/us/en/architecture-and-technology/smart-response-technology.html<]Smart Response Technology[/url<], which they could now repurpose: instead of caching slow HDs with fast SSDs, they could cache slow SSDs with fast Optane! 😉

      • TwistedKestrel
      • 3 years ago

      I would accept this if it meant they actually updated SRT for older chipsets to work properly with Win 10

    • djayjp
    • 3 years ago

    I’m feeling a big “meh” coming on

    • alex.narayan
    • 3 years ago

    So if my math is correct, the random read and random write (both 4KB) speeds for the Intel Optane 8000p 32GB stick/drive would be 468.75 MB/s writes and 1171.875 MB/s reads.

    writes = 120000 iops * 4KB * (1MB/1024KB) = 468.75 MB/s
    reads = 300000 iops * 4KB * (1MB/1024KB) = 1171.875 MB/s

    That doesn’t seem very fast to replace RAM and NVMe. What am I missing?

      • TheJack
      • 3 years ago

      You are missing nothing. These things were hyped up as being a thousand times faster than the fastest SSD; now we know better. I do think these are meant for testing purposes by guinea pigs, and eventually later iterations will indeed be faster.

      • Theolendras
      • 3 years ago

      Impressive by itself, but not close to the claimed 100x faster than NAND, indeed.

      Updated: Ah! Just saw the Samsung 960 Pro claiming 380K read IOPS and 360K write IOPS on standard NAND, completely destroying the write IOPS here. Bummer.

      • Kougar
      • 3 years ago

      It was never meant to replace system RAM, just create a third storage tier between NAND and RAM. Though at those speeds it’s going to cost more than NAND, offer similar performance, and offer 10x or better endurance from the sound of it.

        • the
        • 3 years ago

        It is my understanding the 3D Xpoint can be addressed directly like memory. The reason you wouldn’t want it to replace DRAM is due to latency (DRAM is far, far lower) and bandwidth (roughly 1/5th that of DDR4). Then again, this is superior to NVMe in terms of latency and bandwidth.

        Still, the ability to have vast amounts of memory addressable is desirable for the growing niche of big data applications. Intel has indicated that it will offer a 1 TB 3D XPoint DIMM at some point. So for SkyLake-EP and its 12 memory slots, that is 12 TB of memory per socket. An eight-socket SkyLake-EX system with twice as much capacity per socket would support 192 TB* in a single system. Raw processing would take a performance hit due to the higher latency/lower bandwidth of 3D XPoint, but the entire concept of storage with regard to the data set can be removed. Waiting on IO would be replaced by waiting on memory. This change could still produce a significant performance gain depending on the big data workload.

        *This would also require Intel to increase their physical address space which tops out at 64 TB.
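
        Spelling out that arithmetic (the doubling per socket is the comment's own assumption about the EX platform, and the 46-bit physical address width is my assumption behind the 64 TB figure in the footnote):

        ```python
        # Capacity math from the comment above, vs. the 64 TB physical-address ceiling.
        TB = 1000**4                  # decimal terabytes, as capacities are usually quoted
        dimm = 1 * TB                 # rumored 1 TB 3D XPoint DIMM
        ep_socket = 12 * dimm         # SkyLake-EP: 12 slots -> 12 TB per socket
        ex_socket = 2 * ep_socket     # assumed EX socket with twice the capacity: 24 TB
        system = 8 * ex_socket        # eight sockets: 192 TB

        addr_limit = 2**46            # assumption: 46 physical address bits (~64 TiB)
        print(system // TB, "TB of XPoint vs", addr_limit // 2**40, "TiB addressable")
        ```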

          • Theolendras
          • 3 years ago

          Well, the latency was supposed to be better, but this first generation does not seem to provide any gain on that front. Even though latency figures aren’t provided, the lack of an IOPS gain pretty much tells the story that latency can’t be that great. It could still be due to a badly designed controller, but either way, this first showing does not provide any tangible gain over NAND, except maybe durability.

            • Kougar
            • 3 years ago

            Exactly. Better durability at the expense of price. For consumers I’m getting the impression that the price premium will probably be large enough that they’re better off buying a 960 Pro or Evo and just replacing it if/when they wear it out.

            Write performance seems to scale decently but read performance barely budges. While my first guess is a controller bottleneck, Intel already has better-performing controllers on its 750 drive, so after mulling it over I think it’s something else. Combine that with how tight Intel has been on details since announcing 3D XPoint early last year, and the announced delay on 3D XPoint products hints they aren’t achieving what they hoped to get out of the tech.

            • the
            • 3 years ago

            Yeah, it is telling that the first 3D XPoint product is now going to be consumer-targeted instead of on the server side. Granted, their original plan has always been to launch an NVMe storage product first; this makes me wonder when we’ll be seeing the 3D XPoint DIMMs meant to go alongside the launch of SkyLake-EP/EX. It’s always a good idea to launch tech like this side by side, as it is a good motivator to get customers to upgrade.

            • the
            • 3 years ago

            I’ve been under the impression that 3D Xpoint was only to have superior latency with respect to NAND. It was never to compete with DRAM on this front from what I can tell.

            • Theolendras
            • 3 years ago

            My understanding was that it was supposed to be somewhere in the middle in terms of latency, and that they would offer both the DIMM format for some specific applications and NVMe devices. These first devices seem to show it is a bit better than current NAND drives, but by a small margin, be it sequential or random, so latency (which generally influences random performance quite a bit) can’t be that far from current NAND.

      • Beahmont
      • 3 years ago

      You are missing the part where those numbers come without massive parallelization. The 16 GB drive is one chip, and the 32 GB drive is 2 chips. Compare one or two NAND chips to this stuff and then see how your numbers look.

        • Klimax
        • 3 years ago

        Correct. Most tech sites missed that, and readers too.

          • Theolendras
          • 3 years ago

          Yes, but it is not hard to see why: this does not provide a pragmatic gain over a 256GB NAND drive configuration. Geeks over here are divided; no wonder a general consumer would be. I mean, don’t bother assembling a configuration that can’t exceed the technology it’s meant to take over. Go for the halo effect: release a product that benefits the most IO-bottlenecked situations from the get-go, price it through the roof as an enterprise configuration if need be, but showcase an indisputable benefit.

          Even durability doesn’t quite hold up in a 16GB configuration: take a NAND drive with 20 times the storage, and while individual cells will die faster, a comparable amount of total writes could be endured…

          Make no mistake, it’s a promising tech, but it’s like a cute lady showing up to a first date in garbage bags; you have the right to feel deceived.

        • Kougar
        • 3 years ago

        Only the write performance seems to be scaling well; read performance barely changes between one and two chips. Assuming this performance info is accurate, anyhow.

        If the chips are really capable of great scaling then it would indicate a controller bottleneck. Either way with that x2 interface it doesn’t really matter if they add more chips or not.

      • Pwnstar
      • 3 years ago

      It’s not meant to replace RAM. Where did you get such inaccurate info?

      • psuedonymous
      • 3 years ago

      [quote<]What am I missing?[/quote<] Latency figures. To retrieve from NAND you need to read whole pages and write entire blocks, and these have overheads in the tens to hundreds of microseconds range. 3D XPoint can be read and written at much finer granularity (like DRAM and SRAM) and with latencies in the single-digit microsecond range (expected to drop further). If all you're doing is storage-like handling of 4KB blocks then you'll see a bit of improvement in write delays, but that would probably be eaten up by other overheads. But if you're grabbing a few bytes, modifying them with a very basic operation, then dumping them back, having those extra-low latencies is going to be a big boon. You've got a storage-like device that performs more like a RAM-like device, so taking advantage of it means coding to talk to it in a more RAM-like manner.

        • Theolendras
        • 3 years ago

        Write IOPS would be much better if the cells delivered that kind of breakthrough, unless the controller wasn’t ready for prime time… A company the size of Intel should have marketing that knows better. Show something great that people dream about, and then trickle down from there as production and availability improve…

    • Wirko
    • 3 years ago

    What combination of CPU and chipset will MS and Intel support that can run Windows 7/8.1 and use Optane?

    • Chrispy_
    • 3 years ago

    How many dollars per GB is the real question.

    I get the feeling that it’s going to be in the “if you have to ask….” range for a couple of generations, much like early SSDs.

    • JosiahBradley
    • 3 years ago

    Another Intel lock-in tech. Let’s hope Micron doesn’t pull this **** with us too.

    • short_fuze
    • 3 years ago

    I’ll remind everyone again of the articles by (pen name) “Stephen Breezy” over at Seeking Alpha:

    [url<]http://seekingalpha.com/author/stephen-breezy/articles[/url<] His contention is that Intel is spreading a lot of FUD right now, for reasons unknown. The true potential of the phase-change technologies Intel is dangling out there is to radically alter the PC as we know it. The full power of this stuff doesn't come into play until there's no bottleneck at all between it and the CPU. [Edit: in fact, if his research is correct, Optane is just a hint that this stuff WILL become the CPU, if I understand him correctly...]

      • Andrew Lauritzen
      • 3 years ago

      That’s a little bit of an overstatement. But it is true that Optane as a *storage* device is fundamentally limited by the OS storage stack. One of the main benefits of this stuff is byte addressing + persistence. That’s very cool and powerful once the OS gets out of the way (and indeed treats it more like persistent RAM than storage), but that’s going to take some time for software to catch up and explore the possibilities.

      As a pure storage device, I agree it’s not that interesting as most of the performance benefits are dwarfed by the inefficiencies in the storage stack itself. Even NVMe can only help a little bit here.

      [Disclosure: I work for Intel but have nothing to do with the memory/storage side nor any special info about it.]

        • short_fuze
        • 3 years ago

        Thanks for the clarification, Andrew. Like I implied, it’s a bit murky to me. I made a career out of LAN tech and such (Novell CNE, MCSE, QA, database programming, etc. – more middle-tech stuff) but never had to deal with that kind of fundamental research.

        You mentioned byte addressing + persistence. Are these semi-CPU-ish, just close to the idea of what silicon currently does? Or (are you in a position to confirm/deny) can the material being used in Optane be more of the paradigm change I see Breezy hinting at?

          • Andrew Lauritzen
          • 3 years ago

          I’m not in a position to officially confirm/deny anything as I don’t have any special info or insight into the storage side as per my disclaimer 🙂

          However my understanding is that the underlying technology is certainly capable of efficient RAM-like accesses, unlike NAND which has to load and deal with large blocks. If you load a single DWORD from a NAND device there is a staggering amount of overhead 🙂 That’s true of most storage mediums to date and thus the storage stack (hardware and software) has a lot of these assumptions baked in.

          The easiest way to get around a lot of these software issues is to effectively expose it through the “RAM” concept instead, i.e. the OS is not directly involved with individual accesses and thus there is significantly lower overhead. Of course with that notion we’re still missing one neat piece of the technology – that it is persistent and non-volatile 🙂 But I think ultimately it’s going to be more powerful to add the concept of persistence to the notion of RAM than to try and lean out the entire storage stack for this one application. See, for instance, the direction of this library/work: [url<]http://pmem.io/[/url<]

          There are a few presentations from conferences floating around that go into some of this in more detail and show the relatively high overheads of the storage stack relative to the speed of 3D XPoint, but unfortunately I don’t have links. Hopefully some googling will turn up a few.
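
          As a very rough illustration of the "persistent RAM" idea (this is not the pmem.io API, just plain Python mmap against a hypothetical persistent-memory-backed file, with flush() standing in for a real persist barrier):

          ```python
          import mmap
          import os

          PMEM_FILE = "/mnt/pmem0/counter.bin"   # hypothetical persistent-memory-backed file

          # Create and size the backing file once; 64 bytes is plenty for this toy example.
          fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
          os.ftruncate(fd, 64)

          # Map it and treat it like ordinary memory: byte-granular loads and stores,
          # with no read()/write() block I/O in the hot path.
          buf = mmap.mmap(fd, 64)
          counter = int.from_bytes(buf[0:8], "little")       # load a few bytes
          buf[0:8] = (counter + 1).to_bytes(8, "little")     # modify and store them back

          buf.flush()   # stand-in for a real persist barrier (PMDK would use pmem_persist)
          buf.close()
          os.close(fd)
          ```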

            • Klimax
            • 3 years ago

            Just throw a RAM drive at it. Windows has one included (it’s hidden, as it is mainly used by the kernel, but it can be used by users too). Also, its source code is available as example code for the WDK.

            IIRC it is very simple code and closest to what you want. The only (significant) overhead at that point comes from the filesystem. (And a filesystem is required for permanent storage.)

            • Andrew Lauritzen
            • 3 years ago

            I don’t believe that actually helps for the reasons you state – the file system and storage subsystem in Windows *is* the overhead I’m talking about. When talking about XPoint, that’s actually the vast majority of the overhead depending on access patterns.

        • the
        • 3 years ago

        I thought the first wave of OS support would essentially be using Optane as a glorified RAM disk. Simply put, the host OS/hypervisor would mark the Optane memory segment for this and move on from there. Persistence for surviving a reboot/shutdown would require some firmware changes, but nothing too challenging. At this point, there would still be a software stack, but that’s mainly for filesystem support so that applications actually know how to handle the data stored there.

        Long term, changing how the OS and programs work with persistent memory is indeed going to be a game changer. However, that would require changing decades of legacy code developed around the concept of volatile memory and slow long term storage.

          • bhtooefr
          • 3 years ago

          I’d say IBM is probably the one company most able to take advantage of this – IBM i was actually designed, in the 1970s (with System/38’s CPF), for single-level store. Everything’s in one address space, including everything on disk.

            • the
            • 3 years ago

            Never thought of that, but that would be one that wouldn’t need as much adaptation as other systems.

            However, while IBM i (a.k.a. OS/400, a.k.a. IBM rebranding 2016) doesn’t have a traditional file system, I was still under the impression that it knew what was on disk vs. in memory. My memory of this is fuzzy as I barely interacted with an ancient AS/400 at a previous job.

            Everything could be in a flat address space though. IBM i internally uses 128 bit addressing though no hardware natively implements its memory model. Very Java-like in this regard.

            • bhtooefr
            • 3 years ago

            Software sees a single-level store (with a flat address space), although some things might know whether a page is on disk versus memory. Been a while since I’ve fired up my own i box, though, it’s too damn loud.

    • Theolendras
    • 3 years ago

    Great cache drive; hope the price will be somewhat reasonable for those low-capacity packages.

    • TheJack
    • 3 years ago

    [quote<]intermediary between system memory and storage.[/quote<]
    No idea what that means. I’d be interested if it was bootable.

    I guess they mean some kind of ReadyBoost.

      • Lord.Blue
      • 3 years ago

      Considering the OS support list, I’m inclined to agree. Seems like ReadyBoost.

      • stdRaichu
      • 3 years ago

      Assuming it gets off the ground, the potential for this is pretty awesome.

      > Imagine not needing to have battery backup for your HDD, SSD or RAID cards
      > Imagine not needing to suspend to RAM or hibernate to disc any more – it’ll just be dumped out of DRAM into this stuff two orders of magnitude faster than current hibernate is capable of
      > As far as readyboost-on-steroids goes, super-heavy sections of the filesystem could get transparently copied from HDD to SSD to XPoint (would require OS and/or driver support of course); there’s a dozen technologies in storage-land doing similar stuff already

      Bootable support will come pretty soon I imagine – the next “revolution” in operating systems (more an evolution or return to computing’s roots though really) IMHO is likely to be one where memory and storage overlap transparently (and then your more conventional tiered storage after that). Much of the current design of OS’s is based on the assumption that RAM and storage are completely different beasts, that RAM is several orders of magnitude faster than persistent storage and that RAM is always lost when power is removed – technologies like NAND, Xpoint and (my personal favourite) MRAM are causing these assumptions to be re-evaluated.

        • evilpaul
        • 3 years ago

        MRAM has been coming for what twenty years now?

          • UberGerbil
          • 3 years ago

          More like 35. I saw working demos of bubble memory “hard drives” in the early 1980s. In fact I think it was even sold in some Grid Systems laptops back in the day.

          Edit: and that is of course ignoring the very early core memories; arguably we’re just going Back (70 years!) to the Future, since some of the [url=https://en.wikipedia.org/wiki/Whirlwind_I<]very first computing machines[/url<] used magnets for memory.

      • chuckula
      • 3 years ago

      It looks like the ideal place to put a page file.

        • Antimatter
        • 3 years ago

        Wouldn’t more RAM be faster and cheaper?

          • Pwnstar
          • 3 years ago

          Yup.

          • chuckula
          • 3 years ago

          Would more RAM be faster: Sure, but what do you do on a board that’s already maxed out?
          Would more RAM be cheaper: Evidence needed.

    • whm1974
    • 3 years ago

    OK I can see needing Kaby Lake for Optane as DIMM sticks, but why would special support be needed for M.2, U.2, and PCIe form factors?

      • Theolendras
      • 3 years ago

      I’m puzzled by the exact same question…

      • DeadOfKnight
      • 3 years ago

      I thought it needed the new chipset coming with Kaby Lake, not Kaby Lake itself.

      • Beahmont
      • 3 years ago

      My guess would be the memory controller and the native PCI-E lanes to handle data speeds and memory addressing if it truly is the CPU and not the motherboard being referred to here.

    • chuckula
    • 3 years ago

    It’s definitely a niche product at first, but before anybody jumps in to attack those performance numbers compared to a high-end drive like the 960 Pro, remember that a [i<]single chip[/i<] of Optane memory at 128Gbits gives you 16 GB of storage. The high-end NVMe drives are getting their performance from parallelism across a whole lot more than just one or two NAND chips.

      • K-L-Waster
      • 3 years ago

      Sounds like an interesting technology, but probably best to wait for 2nd or 3rd generation products before plunking down $$$ for one. (It’s still bleeding edge, and I don’t like bleeding…)

      • xeridea
      • 3 years ago

      If you could get same or a lot better performance from an NVMe SSD, with 30x the storage, why fool with this? Sure it has a lot higher endurance, but with 30x the capacity you have a lot more endurance to burn. It will all come down to price, but unless these are super cheap, I don’t see it making sense.

        • smilingcrow
        • 3 years ago

        This should have much better latency if Intel weren’t exaggerating but how relevant that is depends on your workload.

          • Andrew Lauritzen
          • 3 years ago

          As I noted in a reply above, the fundamental technology does have much broader applicability than just storage (byte addressing + endurance + latency), but if you throw it behind a storage stack/controller then I agree it’s of questionable utility for client workloads. Not sure what the plan is here for these products if the rumor/sizing is true.

          [Disclosure: I work for Intel but have nothing to do with the memory/storage side nor any special info about it.]

        • brucethemoose
        • 3 years ago

        For sequential and high queue depths? Sure.

        But no flash SSD is going to come close to XPoint’s QD1 speeds and access latencies.

      • CuttinHobo
      • 3 years ago

      Makes me wonder just how big of an Optane device they used for their Computex demo in order to shave a render time down from 25 hours to 9.

      Also, how in the world would a render running on a single computer be so disk-bound that it would create the circumstances required for this to utterly embarrass an Intel 750 NVME drive?

      I guess its real benefits at these capacities just don’t show up in this spec sheet?

      [url<]https://techreport.com/news/30216/intel-computex-keynote-confirms-kaby-lake-and-optane-for-2016[/url<]

    • derFunkenstein
    • 3 years ago

    [quote<] Optane is not intended to be primary storage—Intel presents its NAND flash alternative as a sort of intermediary between system memory and storage.[/quote<] And I suddenly don't get 3D XPoint on the desktop. It's persistent, so why couldn't it be a "primary" storage location? Not that it needs to be a boot drive (where I'm sure it's wasted), but wouldn't it be a great database solution (at larger sizes, anyway)? I don't know what you'd do with something relatively small like this that isn't as fast as regular RAM. I'm happy to be schooled, though.

      • chuckula
      • 3 years ago

      [quote<]It's persistent, so why couldn't it be a "primary" storage location? [/quote<] It can be, but at those sizes it's not practical right now. This isn't a question of the underlying technology, it's a question of what a particular product is suited to do. [quote<]Not that it needs to be a boot drive (where I'm sure it's wasted), but wouldn't it be a great database solution (at larger sizes, anyway)?[/quote<] It will be great for databases. However, these products aren't targeting the enterprise market, where customers are more than happy to shell out really big bucks for a large-scale Optane drive.

        • derFunkenstein
        • 3 years ago

        I guess what I don’t get is, what does Intel want consumers to do with these? Photoshop scratch drive? I’m all kinds of “out of ideas” here. They’re too small for uncompressed 4K video (and probably overkill at the size where they’d be useful). Once you get a 128GB drive, you’re probably pushing 1,500MB/sec, and I’m sure whatever controller Intel is using will get more and more parallel (like you said in your other comment) from there. By the time they’re large enough to consider for a boot drive, they’ll be capped by even PCIe 3.0 x4 M.2 slots.

        Which is really frickin sweet, btw. Better write endurance + better performance = goodbye NAND in all but the cheapest and most disposable devices (tablets + phones)

          • cegras
          • 3 years ago

          With the correct algorithms, it might be just about keeping the lower levels continuously fed. Maybe it’ll allow larger and larger contiguous data sets to be held in RAM.

          • Andrew Lauritzen
          • 3 years ago

          Yeah I’m a bit confused myself… if the rumored sizes are true then they seem fairly pointless for consumers. It would make more sense to put it on an SSD directly as a cache (similar to the current SLC caches that are incidentally similar sizes) rather than expose it in a separate device and try and trust software to do something useful with it.

          It makes more sense on servers where the IOPS and capacity benefits can be directly utilized. But I mean… it’s already not that expensive to put 32GB of RAM in a modern machine, so I don’t really understand the point here until they jump in capacity.

          [Disclosure: I work for Intel but have nothing to do with the memory/storage side nor any special info about it.]

            • Ninjitsu
            • 3 years ago

            Perhaps something Win 10 specific to come later?

          • Kougar
          • 3 years ago

          3DXPoint tech costs more than NAND, Intel flat out said as much. So it isn’t going to be a NAND replacement, it’s going to be something in-between.

          If performance continues to scale that well that would be awesome. But it’s a safe bet PCIe 4.0 will be out before we see large affordable capacities of 3DXpoint. The first couple generations of the tech will probably be mostly pointless for consumer use when faster, much cheaper alternatives like the Samsung 960’s and others exist.

          That the initial capacities are so small tells me the price premium is going to be very high; it’ll probably be a repeat of the launch of SSDs all over again. But the impression I had is that 3D XPoint is never going to reach cost parity with NAND, which is why most talks have Intel/Micron comparing it to the cost of DRAM.

        • Duct Tape Dude
        • 3 years ago

        [quote<]It can be, but at those sizes it's not practical right now.[/quote<]Why? The first SSDs were tiny but could still be booted from. 32GB is enough to run an OS, and Optane is made for M.2--not a physical PCIe connector, not a DIMM slot, but M.2, which is intended for mobile storage like a boot drive. I get you, derFunkenstein. Seems odd to wage war on SSDs only to go after something different at launch.

          • chuckula
          • 3 years ago

          I just checked the size of the Windows folder [not the whole drive] on my vanilla Windows 10 VM that never had any bloatware installed on it and where the only non-trivial software installed is a stripped-down version of Office 2010 and the free Acrobat reader 11.

          Current folder size: 20.5 GB. So the 32 GB drive [i<]might[/i<] do the trick assuming you won't run into the temporary bloat that occurs whenever Windows does a major update. I don't think Intel wants to get those types of customer complaints when the drive runs out of space though.
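
          (For reference, a quick way to repeat that folder-size check programmatically; the path is just an example:)

          ```python
          import os

          def folder_size_gb(path):
              """Walk a directory tree and total its file sizes, in GB."""
              total = 0
              for root, _dirs, files in os.walk(path):
                  for name in files:
                      try:
                          total += os.path.getsize(os.path.join(root, name))
                      except OSError:
                          pass  # skip files that can't be stat'ed (in use, permissions, etc.)
              return total / 1024**3

          size = folder_size_gb(r"C:\Windows")  # example path
          print(f"{size:.1f} GB")
          ```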

            • Duct Tape Dude
            • 3 years ago

            I remember lots of netbooks had 16GB drives around the time of Vista. They were incredibly crappy computers, but 32GB is sufficient space to run Windows for a bit.

            However, I seriously doubt that Windows complaints are the #1 reason why Intel wouldn’t make the drive bootable. It’s a first-gen product. Only enthusiasts will understand why it’s worth the money, and some of those will be Linux users who could run an OS on <8GB drives anyway.

            Just let us boot from the darn thing, Intel!

            • Ninjitsu
            • 3 years ago

            On my laptop, with Win 10, C:\ is ~31GB at the moment.

        • Firestarter
        • 3 years ago

        not practical? I bought a Chromebox with a 16GB SSD not long ago, it was apparently deemed plenty practical enough for running ChromeOS

          • chuckula
          • 3 years ago

          Yeah, and the primary partition on my Arch linux system uses about 6 GB of capacity for the system software (that includes the OS, the heavyweight KDE plasma desktop, web browser, libreoffice, Office 2010 installed under Wine, and even Steam [but not the game library]).

          However, that’s not the point of this product or its intended market. Nobody ever said that it’s theoretically impossible to put some type of OS on a 16GB drive. What Intel is correctly saying is that for mass-market uses in full-sized PCs that take these products, the commercially intended use ain’t going to be to replace larger SSDs. And they’re right.

      • Wirko
      • 3 years ago

      At this point, I see Xpoint as a solution in search of a problem. It won’t stay that way for long, but until we start seeing Xpoint-aware applications, what’s the point of having an Xpoint, even if the chipset and OS support it?
