Intel gives hard drives a boost with Optane Memory

Intel’s Optane SSD DC P4800X and its underlying 3D Xpoint memory technology seem poised to bridge the gap between NAND’s density and DRAM’s performance in certain types of datacenter workloads, but that $1520 device isn’t meant for the average PC builder in the least. Intel still has ambitions for Optane in client PCs, but competing with NAND SSDs on both speed and capacity would prove prohibitively expensive given Optane’s roughly $4-per-gigabyte cost right now. Instead, Intel is taking a different tack for the moment: improving the performance of hard-drive-only systems for a reasonable cost.

The company says that people considering desktop PCs want three things above all: performance, security, and storage capacity. Right now, hard drives tick the capacity box, but anybody who’s used an SSD knows that spinning rust doesn’t perform anything like flash. Hard drives still make up the vast majority of storage devices shipping in today’s PCs, however, and Intel anticipates that’ll continue to be the case for the foreseeable future.

Enter Optane Memory. Intel’s first client Optane product uses a hardware-and-software stack that purports to cache a user’s most commonly accessed files on a fast slice of PCIe 3.0 x4-connected storage. Its reliance on newer tech aside, Optane Memory as a whole sounds similar in principle to what Apple does with its Fusion Drive-equipped iMacs. It also sounds similar in principle to Intel’s own Turbo Memory technology, a small slice of NAND on a riser card that did basically nothing for system performance when it debuted in some notebooks back in 2007. Given Intel’s history with elaborate solid-state caching layers, I was skeptical about the prospects of another take on the idea. Optane has some fundamental differences that promise a better showing this time around, however.

For one, Optane doesn’t need a large number of dies to achieve maximum performance like most NAND devices do, so one Optane die (as one will find on Intel’s 16GB Optane Memory device) or two dies (as found on the 32GB gumstick) should offer enough parallelism to deliver a performance increase over devices with small amounts of NAND on board, like SSHDs. Even more importantly, Optane Memory devices enjoy the same class of high QD1 performance that the SSD DC P4800X does, so they can offer maximum performance at the low queue depths typical of desktop workloads.

For its part, Intel thinks Optane Memory will boost system responsiveness above all—the major beef with hard-drive-only PCs. Using the Sysmark benchmarking suite, the company claims only minor increases in application performance, but the suite’s responsiveness test comes away with twice the performance of a system with a hard drive alone.

To bolster the point that QD1 performance matters most for desktop users and that Optane is ideally suited to boosting system responsiveness, Intel shared some internal data about the read and write queue depths for the synthetic PCMark Vantage benchmark and some demo workloads from its labs.

Intel also performed trace analysis for the launch performance demands of several commonly-used applications one might find in the typical workplace.

Finally, Intel collected application traces from a number of its employees’ PCs to show the queue depth demands of more productivity-focused workloads.

The point of these graphs is clear: most random desktop workloads level off at about QD4, and the majority of accesses happen at QD1 or QD2. Given those characteristics, Optane Memory seems ideally suited to speed up application launches and perhaps to lessen the wait for commonly-accessed files, at least so long as those accesses are primarily random. Presuming that’s the case, Optane Memory seems much better poised to offer some kind of speedup to a user’s commonly-used applications than a small NAND cache like Turbo Memory might have.
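
To get a feel for what those figures mean, you can approximate the same queue-depth distribution yourself by replaying an I/O trace and counting how many requests are in flight at the moment each new one is issued. The Python sketch below runs against a tiny, made-up trace of (submit, complete) timestamps; it illustrates the metric, not Intel's methodology.

    from collections import Counter

    # Hypothetical I/O trace: (submit, complete) timestamps in milliseconds.
    # In practice these would come from a capture tool such as blktrace or ETW.
    trace = [(0.0, 0.4), (0.1, 0.5), (0.3, 1.2), (0.9, 1.0), (1.1, 1.6)]

    depth_hist = Counter()
    for submit, _ in trace:
        # Queue depth seen by this request: everything submitted but not yet
        # completed at its submit time (including the request itself).
        in_flight = sum(1 for s, c in trace if s <= submit < c)
        depth_hist[in_flight] += 1

    for qd in sorted(depth_hist):
        print(f"QD{qd}: {depth_hist[qd] / len(trace):.0%} of requests")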

Optane Memory isn’t much good without its companion software, an unobtrusive utility that sits in the system tray. Intel says this application will render the Optane cache transparent to the user experience. The only storage device that should show up with an Optane Memory system is the user’s primary system drive (along with any other non-Optane Memory storage devices that may be installed). There’s a lot of black-boxiness to how the app’s caching algorithm works, but Intel suggests Optane Memory won’t be caching entire applications like games on the Optane device itself, simply because they’d often be too large to fit in 16GB or 32GB of space. Instead, I got the impression that the Optane Memory app will look for common files (like shared libraries and operating system files) that are often accessed by many programs and cache those instead.
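
Intel isn't spelling out the algorithm, but the behavior described above is roughly what a frequency-based, capacity-limited cache looks like at the file level. Here is a deliberately simplified sketch of that idea; the 16GB capacity is the only figure taken from the product, and everything else (paths, sizes, the greedy policy) is invented for illustration.

    from collections import Counter

    CACHE_CAPACITY = 16 * 2**30   # modeling the 16GB Optane Memory module

    access_counts = Counter()     # how often each file has been read
    file_sizes = {}               # last-observed size of each file
    cached = set()                # what currently "lives" on the fast device

    def record_access(path, size):
        """Note a read of path, then recompute what deserves to stay cached."""
        access_counts[path] += 1
        file_sizes[path] = size

        # Greedily keep the most frequently accessed files that fit the budget.
        cached.clear()
        budget = CACHE_CAPACITY
        for candidate, _ in access_counts.most_common():
            if file_sizes[candidate] <= budget:
                cached.add(candidate)
                budget -= file_sizes[candidate]

    # Shared libraries touched by many programs win out over one huge game install.
    record_access("C:/Windows/System32/ntdll.dll", 2 * 2**20)
    record_access("C:/Windows/System32/ntdll.dll", 2 * 2**20)
    record_access("D:/Games/BigGame/assets.pak", 40 * 2**30)
    print(sorted(cached))         # only the small, frequently hit file makes the cut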


Some early thoughts about performance and pricing

While we were at Intel’s Folsom, CA facilities recently, I got some hands-on time for comparison testing with an Intel-provided Optane Memory module and a compatible PC equipped with a hard drive and Windows 10. Intel allowed us to run some canned tests on these Optane Memory-equipped systems, and after comparing first-launch performance with second-launch performance, the Optane Memory cache did shave off a few seconds of launch time from each application available to us (the GIMP, Blender, and FL Studio). We’ll need to do more comparison testing to get a full idea of what Optane Memory offers, but users should at least enjoy some net benefit from this technology if it comes installed in a given PC.

Given how aggressively Optane began caching programs after even one program load, the tech should have near-immediate benefits for PCs with the Optane Memory software installed. It remains to be seen how extended use will affect what can remain in the Optane cache, and we also need to test a broader range of applications than the almost certainly cherry-picked choices that Intel made on the testing PCs available to us. Still, there is a definite benefit to Optane Memory when it works. We’ll just need more time with Optane Memory—and time with a broader range of comparison systems—to see whether the tech is actually worth it.

The most pressing question about any whiz-bang new storage technology is whether it’s worth the cost to add it to a system to begin with. Thankfully, Optane Memory is relatively affordable. Intel told me that the 16GB version of the device will retail for $44, while the 32GB device will go for $77.

The cost of entry for the 16GB Optane Memory cache seems reasonable enough that budget builders trying to get both capacity and responsiveness might not have to break the bank. Adding the 16GB Optane Memory device to our B250-chipset-powered budget box would take its price from $500-ish to $550-ish. The same $100 or so (between a WD Blue 1TB and the 16GB Optane Memory cache) gets a nice 250GB-class SSD, but such a drive will fill up much faster than the WD Blue.

The problem for Intel in the DIY PC world is that builders will be weighing that extra $45 and thinking about putting it toward a more powerful graphics card like a Radeon RX 480 4GB, which becomes attainable over our GeForce GTX 1050 Ti budget pick with that extra dough. We guess many will be willing to tolerate slower boot-ups and game loading times in exchange for higher performance once an application is loaded (at least assuming a game doesn’t become CPU-bound, as it might with a Pentium G4620 and an RX 480 4GB). Still, Optane Memory might prove to be an affordable way to spiff up a hard-drive-equipped PC on a budget without sacrificing capacity.

I’m less enthused about the prospect of pairing Optane Memory with a NAND SSD in a more expensive system like our Sweet Spot build. Intel still claims that users will experience benefits from pairing Optane’s unique QD1 performance characteristics with a less-responsive NAND device, but the challenge I see in that market is whether the performance difference (if there is one) is noticeable enough to be worth the price over simply buying a larger SSD. Intel might still find some takers in this market, however, since the impact of $45 to $75 extra on a high-end system isn’t that much in the grand scheme of things. Given Optane’s order-of-magnitude latency reduction versus NAND for responsiveness, I have a hunch this technology might have interesting impacts on 99th-percentile frame times in games. Just a hunch, though.

As we’ve known for some time, Optane Memory won’t work with just any system. OEMs and system builders who want to take advantage of the technology will need a seventh-generation Core (or Kaby Lake) processor and a motherboard with any 200-series chipset (B250, Q250, Q270, H270, and Z270) that has an M.2 slot. Intel explains this limited hardware support by noting it’s only performed the necessary qualification work for Optane Memory on systems comprising those two key components.  Fair enough, I suppose.

Even with those limitations, an Optane Memory stick will certainly fit into any PCIe 3.0 x4-powered M.2 slot, and it’ll likely appear as an NVMe storage device to the operating system, but the tiny sizes of the initial Optane Memory sticks make this use case little more than a curiosity.

If nothing else, Intel has seized on a real pain point for system builders on a budget. The ever-increasing sizes of photos, videos, and game installs are putting more and more pressure on storage space for affordable PCs these days, and they’re certainly outstripping the price decreases we’ve seen for a given amount of NAND flash storage. While a 128GB SSD is affordable these days, it’ll be quickly overrun by the needs of anybody with a few advanced programs and a modest Steam library. Even a 240GB-class SSD is getting harder and harder to live with these days, as I’m reminded fairly often when I need to fire up WinDirStat to figure out what’s filling up my daily-driver PC’s 840 Pro.

As I’ve noted repeatedly in the preceding paragraphs, the 1TB hard drive we recommend to budget builders lets those folks rest easy with plenty of room to store today’s increasingly belt-busting titles. The downside is that a hard-drive-only system just can’t be as snappy or responsive as a box with an SSD for its system drive. Once you’ve seen the SSD light, switching back to a box with nothing but a hard drive for storage is a grating experience. We’ll have to see whether Optane Memory eases that pain when we get an opportunity to run it through a broader range of tests soon.

Comments closed
    • Pettytheft
    • 3 years ago

    I’m a bit interested in grabbing a 2TB drive for all my Steamness and one of these to see how it does. I still run all my games off of an SSD but I have to uninstall things when a new game comes out.

      • derFunkenstein
      • 3 years ago

      Steam now has an option to move games. Right-click an entry in your library and choose Properties, then go to Local Files. The Move Install Folder button lets you move the game between drives. What I’ve taken to doing is moving a game to my SSD if I’m playing it and don’t like spinning HDD performance.

    • llisandro
    • 3 years ago

    Isn’t the market for cheap machines with spinning rust mostly enterprise? And aren’t most of their files on a network drive anyway?

    (and isn’t that why SSHDs didn’t catch on when they could at least give a bit of a bump over a Blue for minimal cost?)

    Having used both SRT and SSHDs, I’m wondering what speed difference this really makes over just having a single SSHD for the target market (who want to save pennies over just going full-SSD).

    • maritrema
    • 3 years ago

    Does Intel Optane memory supplement or replace the DDR3/4-blocks needed when building a new PC?

      • RAGEPRO
      • 3 years ago

      Did you read the article? Heh.

      In this article Jeff talks about using Optane as a cache of sorts between the disk and the host system. Basically, you have the CPU with very fast cache, slower-but-still-pretty-fast RAM, and then an incredibly slow hard disk. You stick the Optane device in between the RAM and the disk, and frequently-accessed data can live there and be fetched much more quickly than if the system had to go out to the hard drive.

      This sort of thing isn’t really new; people (including Intel) used to do this with regular old flash-memory-based SSDs. Optane is sort of "between" flash memory and DRAM in terms of performance. It could make for [url=https://techreport.com/review/31608/intel-optane-ssd-dc-p4800x-opens-new-frontiers-in-datacenter-storage]a very fast storage device[/url], and Intel intends to use it in DIMM slots in servers to provide slower, but extremely high capacity memory. Storage and memory are accessed in fundamentally-different ways in current computers, so it's not quite as wacky as it seems. There are advantages to addressing fast storage like Optane using memory semantics instead of as a storage device.

      The thing is, flash memory has come down in price so much and so fast since those days, most system builders just access an SSD directly for their system files. Even OEM PCs are going the way of the SSD for system disks, by and large. There are a lot of benefits besides performance; SSDs are silent, use much less power, and aren't sensitive to physical shock like a hard drive (which makes them much more reliable in mobile systems like laptops.) Even a mediocre SSD is fast enough that other system components (like the CPU and RAM) can rapidly become your performance bottleneck.

      [b]TL;DR[/b] supplement. But really neither, at this time. If you're building a new PC, just get a decent SATA SSD and be done with it. Optane (or more accurately, the 3D Xpoint tech it's based on) is really cool but there's not a huge amount of utility for it right now. It's possible that I'm wrong and that tiered storage with a hard drive behind an Optane cache is amazing. I've done caching before though, and I wasn't impressed when comparing the performance versus a standard SSD. I find it hard to believe that the speed of Optane will make the difference.
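
      To make the "memory semantics versus storage semantics" point above concrete: with storage semantics you issue explicit seek/read/write calls at file offsets, while a memory mapping lets you treat the bytes like an ordinary array. A toy Python illustration, using a plain file as a stand-in for a persistent-memory region (no Optane required):

          import mmap

          # Storage semantics: explicit seek()/read() calls through the storage stack.
          with open("demo.bin", "wb") as f:
              f.write(b"\x00" * 4096)
          with open("demo.bin", "rb") as f:
              f.seek(128)
              chunk = f.read(8)            # a system call per access

          # Memory semantics: map the file once, then address bytes like an array.
          with open("demo.bin", "r+b") as f:
              view = mmap.mmap(f.fileno(), 0)
              view[128:136] = b"optane!!"  # plain loads and stores once mapped
              print(view[128:136])
              view.close()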

        • maritrema
        • 3 years ago

        Thanks a lot for spelling it out for me! Techreport may not be aware that they actually have readers, that do not have English as their first language, and some of us (just me?) needs a little extra help sometimes. Hi from Denmark 🙂

          • RAGEPRO
          • 3 years ago

          I’m one of the writers for the site, haha. We do cover things with a US-centric bias (as we are based out of the US), but we’re always very happy to have international readers. Glad I could help you out.

    • rutra80
    • 3 years ago

    [url=http://www.romexsoftware.com]PrimoCache[/url] does this with any flash storage on any hardware.

    • djayjp
    • 3 years ago

    Excellent article. Thanks for the analysis and first impressions on the performance. Looking forward to the review piece!

    • vikas.sm
    • 3 years ago

    Use this for: PortableApps, Scratch Drive, Temp files, Page file.
    Not to speed up boot/loading times.

    With an SSD main drive of course, not with an HDD.

      • freebird
      • 3 years ago

      NO WAY for a PAGE file unless it is hardly used…same for TEMP especially if you do video edits or transcoding that leverages the TEMP.

      Ok, losers that want to down mark this go check out the write endurance on this thing… crappy:
      100GB per day. Go ahead and leave hibernation turned on with 32GB+ of memory while you are at it… and see how long it lasts… the Samsung 960 EVOs have the same issues.
      Very low WRITE endurance compared to older NAND drives and/or mechanical HDs.

      Also, Intel was trying to sell this as great for laptops for fast hibernate/wakeups.

    • Amiga500+
    • 3 years ago

    So why not just buy 16GB of RAM, set it up as a virtual hard drive and allow it to act as cache?

    [url]http://www.superspeed.com/desktop/supercache.php[/url] [url]http://www.romexsoftware.com/en-us/fancy-cache/[/url]

      • tipoo
      • 3 years ago

      It’s supposedly half the price of DRAM. We’ll see about that.

        • RAGEPRO
        • 3 years ago

        Even if that’s the case, I think I’d rather spend the money on half as much RAM. Heh.

      • freebird
      • 3 years ago

      Optane may be 1/2 the price of DDR4; but DDR4 is probably 10x faster as a RAM disk, probably less as a disk cache, but longevity is NO MATCH. DDR3/4 memory will last nearly infinitely longer than this “product”, especially since most memory manufacturers also offer a “life-time” guarantee.

      Also, the daily WRITE limit is listed at ~100GB per day (according to a Semi-Accurate article), so it had better not be caching large video streams, edits of large photo images, or video transcoding. I wouldn't be surprised if the "magic" software that comes with it steals a couple of GB of memory to buffer some of that, if it doesn't route that type of work around the Optane "cache" entirely.

      And if it is used as a cache for a fast hibernation solution, that will also burn it out quicker…

      That’s why I personally bought 64 GB of memory; gonna run a RAM disk (probably the free 4GB Radeon RAMDisk) and PrimoCache with 30GB+ (better than FancyCache because it can leverage 2nd-level caches: SSDs or USB memory sticks).

      And definitely turn off hibernation in this setup… since I have a 960 EVO 500GB as my boot drive and they have absolutely AWFUL WRITE endurance as well: ~200TB of lifetime writes for the 500GB drive.
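
      Quick back-of-the-envelope on those figures, taking the ~100GB/day rating and the ~200TB TBW number at face value (the 32GB hibernation image is a worst case):

          rated_per_day = 100e9      # ~100 GB/day write rating cited above
          hibernate_dump = 32e9      # worst case: dumping 32GB of RAM each hibernate
          print(rated_per_day / hibernate_dump)    # ~3 hibernates/day already hits the rating

          evo_tbw = 200e12           # ~200TB lifetime writes quoted for the 500GB drive
          print(evo_tbw / rated_per_day / 365)     # ~5.5 years if you truly wrote 100GB every day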

      May look into using a UPS and setting up PrimoCache to do deferred writes on the 960 EVO, but then it is the OS drive also; so if I do, I’ll definitely have to perform weekly OS drive backups.

    • raddude9
    • 3 years ago

    Am I on my own in thinking that this would be great in a USB 3 memory stick?

      • tipoo
      • 3 years ago

      Pretty sure going over the USB 3 bus would kill a lot of its benefits, i.e. the 1000x faster-than-NAND access time. I think even Thunderbolt had longer latencies than integrated PCI-E.

        • NoOne ButMe
        • 3 years ago

        Yup. And it’s already being choked out of a lot of its advantage by not being in DIMM format.

    • DevilsCanyonSoul
    • 3 years ago

    Intel throws a bone(r) at the failing mechanical HDD sector.

    Sorry, it’s a bit too late.

    • Ultracer
    • 3 years ago

    Anyone notice that the 2280 drive is using a SATA M.2 interface with 2 notches? NVMe M.2 drives only come with 1 notch on the connector.
    Strange.

      • derFunkenstein
      • 3 years ago

      I didn’t notice it until you pointed it out. Guessing that absolute bandwidth isn’t the key to this, though. It’s about reducing latency and keeping costs down. NVMe controller is probably a waste here.

    • blahsaysblah
    • 3 years ago

    Operating systems have had file caching since the beginning of time. 16 or 32GB? You hands down should buy more RAM instead.
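
    That cache is easy to watch in action: time two back-to-back reads of the same large file, and the second pass usually comes straight from RAM (assuming the file fits and nothing evicts it first). The filename below is just a placeholder.

        import time

        def timed_read(path):
            start = time.perf_counter()
            with open(path, "rb") as f:
                while f.read(1 << 20):    # stream through in 1MB chunks
                    pass
            return time.perf_counter() - start

        # Point this at any multi-gigabyte file you have lying around.
        print("first read: ", timed_read("bigfile.bin"))
        print("second read:", timed_read("bigfile.bin"))   # typically far faster, served by the OS page cache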

      • chuckula
      • 3 years ago

      Since being price conscious is now a big thing, please show me where I can get that 32GB of RAM for only $77.

      [url]https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007611%20600083963%20600213074%20600327642&IsNodeId=1&bop=And&Order=PRICE&PageSize=36[/url]

        • the
        • 3 years ago

        I found a [url=https://www.walmart.com/ip/32GB-Kit-4x-8GB-Modules-PC3-10600-1333MHz-ECC-Registered-DDR3-DIMM-Server-240-pin-Memory-Ram/180183620?wmlspartner=wlpa&selectedSellerId=1010&adid=22222222227062059940&wmlspartner=wmtlabs&wl0=&wl1=g&wl2=c&wl3=165989042114&wl4=pla-327388518687&wl5=9023228&wl6=&wl7=&wl8=&wl9=pla&wl10=114220662&wl11=online&wl12=180183620&wl13=&veh=sem]32 GB kit for $80.[/url] Not the fastest speed nor is it DDR4, but hey, it is a cheap upgrade for older systems.

          • chuckula
          • 3 years ago

          Oh yeah.

          I can’t wait to put registered DIMMs into my desktop machine.

          That’ll work great.

        • blahsaysblah
        • 3 years ago

        Damn, it’s a conspiracy I say. To prop up Micron profits. 2x16GB DDR 2133/2400 were around $120 for the longeeest time. Not special sales. Regular price. What happened? I know there was a slight increase.

          • chuckula
          • 3 years ago

          It’s just Intel being evil, what can I say.

          They bought up all the RAM to drive up the price of RAM to make their evil Optane products look cheaper than they really are!

          It’s kind of like how they bought up all the AM4 motherboards and RyZen chips.

          • the
          • 3 years ago

          I’ve also noticed that DDR4 prices have increased lately. DDR3 still remains cheaper to my surprise since most fabs have switched over to DDR4 for volume production.

          • freebird
          • 3 years ago

          Actually, I’ve been reading for some time that DDR4 was heading for a price move up in 2017 along with NAND (supposedly SSD sales are increasing and also sizes). The memory prices spiked with the release of Ryzen (probably to take advantage of short-term inventory outages in some of the sweet spots for higher-end memory). I bought my 2 sets of 32GB (2×16) G.Skill Trident Z 3000 CL14 for $219.99 each the week before the Ryzen release and now Newegg is charging $319.99, and I KNOW the memory chips on the DIMMs didn’t jump in COST to MANUFACTURE in a week or two!!!

        • synthtel2
        • 3 years ago

        I’d much rather have 16GB of spare RAM for this sort of thing than 32GB of optane. That brings prices a lot closer to parity, and the versatility of the extra RAM could justify the remaining price difference for a lot of people.

        • NoOne ButMe
        • 3 years ago

        Or we could have consumers who bought a machine with, say, 2×8 GB of RAM and 4 RAM slots.

        In which case an extra 16GB of RAM costs more, but also is superior in almost every way if one already has an SSD.

        Yeah, sure, Optane is twice the size for slightly lower price. But I would much rather more RAM given the other usages it could have.

    • Waco
    • 3 years ago

    So it costs $4 per GB…but the street price of the 16/32 GB modules is closer to $2 per GB?

    That part made my brain hurt.

      • chuckula
      • 3 years ago

      It’s true that they lose money on every GB.

      But they make up for it in volume.

        • Mr Bill
        • 3 years ago

        Oh, that was perfect…

    • Welch
    • 3 years ago

    No Intel… Just no.

    If this came out when SSDs were insanely priced then it would have had some room to grow. In a day and age where you can pick up a 250GB SSD for well under $100, and then a very large HDD for sub-$60… Why buy into Optane?

    Most of my customers don’t remotely use 250GB, so it’s useless for them. The handful of people who need speed and storage for real workloads or games will likely not be so broke that saving 40 bucks is going to save their bank account. You can often get 500GB SSDs for sub-$150 on sale too. That, and there are drives like the $250 Samsung 500GB 960 EVO if you really need to push performance….

    Optane looks like it missed its opportunity as a niche component 3-4 years ago. Nice try to make a large profit off of DREAM per GB though… Woosh.

      • DavidC1
      • 3 years ago

      No, you don’t get the point.

      80% of systems sold use HDDs, because the price advantages are still significant. Look at flyers for computers. Most are sold with HDDs. The few laptops and desktops sold with SSDs are for "gamers" and "enthusiasts" and are quite a bit more expensive.

      In fact, *this* is the relevant Optane device, because unlike the NVMe SSD version (the one announced for servers earlier, with consumer versions to come later) it's not expensive in the absolute sense, and it's more affordable than going for a pure SSD.

      And their point makes sense too. Perhaps performing that much better relative to an HDD in low QD will make it a substantially better caching device than using an SSD to do so, because SSDs are least advantageous in low queue depths.

        • Welch
        • 3 years ago

        I think you’re missing the point with all due respect.

        80% of systems sold use HDDs… My point is that in 2017 they shouldn’t be. SSDs need to be pushed to be a standard in any of your low-to-mid-range systems. Not in your Walmart sub-$300 systems obviously, as those are tight margins. I work on people’s OEM systems for a living, and I cringe to see the crappy HDDs that they come with these days.

        You can get an Asus Zenbook or other series laptops for $600 and under. These are not expensive laptops and they come with a 250GB SSD. Looking into the $549 laptop I recommended to a friend going to school, it had a 256GB Micron (Crucial) drive, and it performed near par with a Samsung 850 EVO. We aren’t talking bottom of the barrel here.
        [url]https://www.newegg.com/Product/Product.aspx?Item=1TS-001A-004C8&cm_re=asus_i5_laptop-_-1TS-001A-004C8-_-Product[/url]

        The main issue here is that you are taking an Intel-specific device that only works with Intel's 7-series chips (will it work with the i3, or perhaps the Pentium series?). If this really is for the low end, then you are talking about a user base who doesn't understand technology and has to rely on software to run. If the software suddenly stops running, gets corrupt, is attacked by a virus, gets shut off by some wannabe IT tech to "speed the system up" or something similar… they are going to notice a large performance drop. It's not a reliable way to ensure system performance.

        This isn't even mentioning the fact that you don't get the RELIABILITY aspect of an SSD. So it's a bit faster, but it's still tied to the recently even more terrible hard drive. Failure rates of hard drives went up sharply after the release of SSDs because they are fighting a price war.

        What about OEMs? They have to make sure their low-end boards have an M.2 slot, which has to add a little bit of cost and design/development, and it takes away some PCI-E lanes from other devices on these really low-end chipsets. They are making over $2 per GB for their NAND vs SSDs that can be under $0.50 at times.

        In the end, it is a clunky "solution" to a problem that is easily solved by one of three different methods:
        1. All-out SSD (costs more for bigger drives)
        2. Hybrid SSD/HDD (they have their drawbacks but some seem to like them)
        3. Smaller SSD boot drive with a large HDD for capacity

        The best thing for the market is to keep pushing for SSD prices to come tumbling down and for 500GB SSDs to become mainstream and nearer the $100 mark. At that point the smaller 120/250GB drives have no excuse to not be mainstream. If Intel were to sell these little deals for even closer to $1 per GB, maybe… just maybe… they would have some relevancy for people who demand a bit more performance whilst keeping their cornucopia of spinning GB goodness. Still wouldn't make it reliable like an SSD though 😉

          • DavidC1
          • 3 years ago

          #3 is not viable for those people either. That’s still too much complexity. Well, I guess for some people it can be done. Knowledge and aptitude vary as much as there are people in the world.*

          “80% of systems sold use HDDs… My point is that in 2017 they shouldn’t be”

          I am sorry. This is enthusiast thinking.

          Most people are quite realistic. There are much more important things to worry about than getting the best computer. Not spending a dollar above what you care about is being pragmatic.
          The number illustrates that SSDs are not price/perf efficient enough for those people. The vendors selling those computers don’t see the point either.

          Here in Canada that $600 laptop goes for $800. Other countries have a mark up on top of that. US guys have it the best.

          *Key point. If it does well as they say it’ll find a decent market for it. End of story.

            • Welch
            • 3 years ago

            Yes, it is my enthusiasm for this industry that has me saying SSDs need to be pushed to be more mainstream. Asus and other brands are obviously making money on cheaper systems with SSDs, and I don’t mean 120GB models.

            If it were a small SSD and a large drive for storage, the consumers wouldn’t have to know how to do it. Vendors used to send out systems with multiple drives all of the time. Or better yet, the standard was to partition a single large drive. OEMs can easily change the default save locations of files for pictures, desktop, music, video, documents, etc.

            So they don’t even have to have the knowledge on how to do that.

        • NoOne ButMe
        • 3 years ago

        It only supports Kaby Lake and later. So there goes most of the market today which could benefit.

        Better for an OEM to go with a 64GB boot SSD. Cheaper also.

        • blastdoor
        • 3 years ago

        Apple’s Fusion works very well when implemented well (as Apple does with the 3TB drive).

        Now, it’s needlessly too expensive and it’s Apple-only, but as a proof of concept, it shows that HDD+SSD can work very well.

          • POLAR
          • 3 years ago

          That is why we use plain SSD storage in all of our desktops, workstations, and servers. No drivers, no caches, no Intel RST whatever, and you don’t have to implement anything. Bonus: if one needs 1-2TB of SSD, they can probably afford it. Everybody is fine.

        • freebird
        • 3 years ago

        I think YOU don’t get the point… if they ARE NOT using SSDs then they are pinching PENNIES and WILL NOT spend the extra money on an OPTANE cache that only works with HIGH END PCs and LAPTOPS (which have the NVMe slot for it.) If so, then they could just replace the HDD with an SSD and forgo the Optane and probably still save money…

    • JosiahBradley
    • 3 years ago

    Vendor lock in needs to die, the first SSD caches were not locked in. I know I don’t have a popular opinion here, but it needs to be said. I vote with my wallet.

      • chuckula
      • 3 years ago

      I agree.

      That’s why I stopped buying all AMD GPUs since they have a vendor-lock on asynchronous compute features.

    • chuckula
    • 3 years ago

    This is still addressing the storage market (and not the memory market) but Optane gets truly exciting when you are buying it by the DIMM.

    For a little foreshadowing, Intel just submitted patches to the Linux kernel to extend the maximum physical address space out to 4 [b]Peta[/b]bytes: [url]http://phoronix.com/scan.php?page=news_item&px=Intel-5-LVL-Paging-Linux-4.12[/url]
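
    That 4PB figure falls out of the wider addresses that come along with 5-level paging: 52-bit physical and 57-bit virtual.

        print(2**52 // 2**50, "PiB of physical address space")   # 4
        print(2**57 // 2**50, "PiB of virtual address space")    # 128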

      • cegras
      • 3 years ago

      Do Optane DIMMs masquerade as RAM? How would the trace layout work for a motherboard with so many DIMM slots?

        • the
        • 3 years ago

        They [i]can[/i] masquerade as main memory. Their raw performance isn't there to replace DRAM but it does have two other advantages: non-volatility and capacity. The non-volatility can work as an effective RAID1 copy of DRAM for increased RAS. Memory writes would suffer since operations would be dependent on the Optane speed but reads could also come from DRAM. So even with this performance hit, there would be sudden power fault protection by being able to resume exactly when power was cut. This feature is worthwhile to a specific niche. This level of support can come at the platform level and wouldn't require changes to a hypervisor or OS. So if you work with big data, using Optane to replace DRAM would make sense so you can remove the entire storage stack from the performance equation and simply utilize in-memory operations. That may result in a performance win as the bottleneck is disk rather than memory in that niche.

        While Optane [i]can[/i] replace DRAM it can also work alongside of it. Basically this would create a virtual NUMA domain inside a single memory channel. Memory access would not be uniform as DRAM would remain faster but with specific software, less frequently used data or data that requires to be non-volatile can be read/write specifically to the Optane region while legacy applications would continue to utilize DRAM. This requires OS and application support to work well but it is seen as a good middle ground for performance.

        Lastly, Optane DIMMs can work as regular storage. This is effectively a large RAM disk due to the necessary software shim to mount as traditional storage. Unlike a RAM disk though this will be slower. Just like a RAM disk, this may require changes to the hypervisor and OS but not end user applications.

      • the
      • 3 years ago

      That is in preparation for Sky Lake-EP which extends the physical address space.

      Some SGI (now HPE) systems were able to reach the previous 64 TB limit with Haswell-EP/Broadwell-EP. There is demand to have even more memory online in a system but x86 systems literally couldn’t address any more in a single system.

    • Growler
    • 3 years ago

    INTEL! INTEL! INTEL!

    Our High Optane Memory Systems will take your PC to a whole new level of performance!

    MARVEL at the wicked fast data transfers!

    WONDER at the incredible read speeds!

    INTEL! We’ll sell you the whole drive, but you’ll only need…THE EDGE!

      • Generic
      • 3 years ago

      The excitement over Mad Scientist’s front flip seeps into TR…

      • BIF
      • 3 years ago

      Okay, that was funny, and I don’t know why.

    • tipoo
    • 3 years ago

    Only being on PCI-e x2 is interesting, I guess it’s not topping the throughput of the highest end SSDs? Some of those are already reaching PCI-e x4 limits on 3.0. I guess it’s more about the latency splitting the difference between SRAM and an SSD rather than raw throughput. Latency is 1000 times lower than an SSD!

    [url]https://cdn.arstechnica.net/wp-content/uploads/2017/03/IntelR-OptaneTM-Technology-Workshop-Analyst-and-Press-Slides-3-15...-4-1440x810.jpeg[/url]

    • blastdoor
    • 3 years ago

    I keep thinking that 4 or 8GB of 3D Xpoint in an iPhone, used to augment 2GB of DRAM, could make sense. It would give iPhones many of the advantages of much more RAM, but at lower cost and lower power usage. Same for iPads.

      • blahsaysblah
      • 3 years ago

      The articles state very high idle/standard power usage, compared to current SSDs. Peak is about same. No go.

        • NoOne ButMe
        • 3 years ago

        Spec’d power is 3.5W load, 0.9-1.2W idle.

        Given many SSDs will average <0.5W idle and 4-5W load, unless you are actively loading the drive it might be MORE power efficient not to use this.

        If Intel can cut idle and load power in half, this probably could be a very good product for notebooks!
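
        Rough math on that idle gap, using the figures above and a hypothetical mostly-idle day of laptop use:

            optane_idle_w = 1.0    # midpoint of the 0.9-1.2W idle figure
            ssd_idle_w = 0.5       # "many SSDs will average <.5W idle"
            idle_hours = 8         # hypothetical light-use day

            extra = (optane_idle_w - ssd_idle_w) * idle_hours
            print(extra, "Wh extra per day just sitting idle")   # ~4 Wh, a real bite out of a small battery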

          • synthtel2
          • 3 years ago

          I’m used to SSD idle power being more in the 100mW ballpark, and a half watt of idle power is a big deal in a laptop.

            • NoOne ButMe
            • 3 years ago

            Agreed. I think most are in the 50-100mW range. But I felt like giving Intel a fighting chance!!

        • blastdoor
        • 3 years ago

        SSD isn’t the counterfactual — RAM is the counterfactual.

    • derFunkenstein
    • 3 years ago

    The more Intel pimps these tiny caches the dumber they seem to me. Especially once you look at a 32GB Optane module vs a similarly priced SSD. Now it’s $77 for 32GB Optane vs $82 for [url=https://www.amazon.com/SanDisk-240GB-2-5-Inch-SDSSDA-240G-G25-Version/dp/B00S9Q9VS4]240GB of Sandisk SSD Plus[/url]. The 16GB module might have VERY LIMITED appeal, but there's just no way the larger one makes any sense for the use case Intel is selling.
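
    The per-gigabyte math makes the gap plain:

        optane = 77 / 32     # ~$2.41 per GB for the 32GB Optane Memory module
        sandisk = 82 / 240   # ~$0.34 per GB for the 240GB SanDisk SSD Plus
        print(round(optane / sandisk, 1), "x the price per gigabyte")   # ~7x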

      • Convert
      • 3 years ago

      Totally agreed. I’d much rather just split my data across mechanical and SSD drives. That way my day-to-day stuff is always fast and I don’t have to play the caching game. For my larger data I don’t need the speed of the SSD anyway, so it’s no big deal and the caching capability wouldn’t really help.

      I feel like this hybrid concept only made sense when SSDs first hit the market.

        • blahsaysblah
        • 3 years ago

        If there was a GUI that let you choose from list like: OS files, this game folder, that game folder, this work folder,…. It might have a niche.

          • derFunkenstein
          • 3 years ago

          Yeah, that might help. Part of how I decide which games in my Steam folder go on the SSD is load time. Older games typically load pretty quickly from mechanical storage so I don’t mind if, for example, GTA4 is on a hard drive. But GTA5 is so much slower so it goes on an SSD.

        • travbrad
        • 3 years ago

        [quote]I feel like this hybrid concept only made sense when SSDs first hit the market.[/quote]

        Yeah. The main reason to go with caching instead of just buying a bigger SSD was the prohibitive cost of SSDs in the early days. With how cheap they are now, though, it becomes harder and harder to find a reason to use caching, and SSDs/NAND only continue to get cheaper. An Optane caching setup just seems like it is aimed at a vanishing niche. Power users will still mostly prefer to have an SSD + HDD(s), and average users don't need that much space anyway, so an SSD alone would suffice in most cases.

      • Ninjitsu
      • 3 years ago

      Yeah, the question is less “do i put $45 more into a GPU” but more “do I just get a bigger SSD”.

      • Andrew Lauritzen
      • 3 years ago

      Yeah, it’s pretty ridiculous given the sizing. We’re much closer to that $77 upgrading you to a 1TB SSD than this being interesting. And if it only really helps back-to-back launches, get more RAM instead.

      Optane is awesome for data center but the economics still don’t make sense for client IMO.

        • derFunkenstein
        • 3 years ago

        Yeah, that big drive covered last week makes a ton of sense. Glad I’m not the only one missing this one. Lol

      • Mr Bill
      • 3 years ago

      Maybe they should suggest it would speed up SSD response. Is that possible, third graph down on the first page?

      • LostCat
      • 3 years ago

      I had a ReadyCache drive and I loved it (it’s in my laptop as a standalone drive atm though.) Bloody amazing except for situations like updating the machine where there’re extended writes.

      If I could pair this with my external non OS HD on any system I might consider it, but I doubt it’s that open.

      • LostCat
      • 3 years ago

      I’m still torn between a Firecuda and an M.2 drive for my next drive heh.

        • freebird
        • 3 years ago

        They need to INCREASE the size of the NAND cache on the FireCuda to make it "real-world" useful, unless you are just web browsing and emailing, and even then you’d be better off using a free RAM disk like the 4GB Radeon version and pointing your TEMP variables, web browser caches, etc. there.

    • nico1982
    • 3 years ago

    Yup. It looks like a perfect fit to replace the basically unused Intel SRT or, more likely, enhance Apple’s ubiquitous Fusion Drive.

      • blastdoor
      • 3 years ago

      I feel like perhaps you’re being sarcastic, but I like Apple’s fusion drive quite a bit. Now… it’s overpriced, and they haven’t updated it much, which is par for the course with the Mac these days. But it’s a good product/technology. If they actually gave a crap about the Mac, it could be really great.

        • tipoo
        • 3 years ago

        I liked the initial idea, I didn’t like how they quietly reduced the 1TB Fusion Drive to 24 (yeah, no typo) gigs of flash, which often sees you hitting HDD speeds.

        Man, 80% of their entire line needs a refresh stat.

        • derFunkenstein
        • 3 years ago

        I don’t like that new models with 1TB drives are [url=https://9to5mac.com/2015/10/13/retina-imac-fusion-drive-flash-lol-are-you-serious/]only getting 24GB of flash[/url]. When it was 128GB of storage, that was enough to store the apps most Mac users use and have plenty of spinning rust for media. And if you get a 3TB Fusion Drive, you still get that. But don’t get suckered by a 1TB Fusion Drive today.

          • blastdoor
          • 3 years ago

          Yup — I totally agree.

          The 3TB Fusion Drive is the “good one”. It’s definitely overpriced. But it’s still good technology.

        • nico1982
        • 3 years ago

        I was serious 🙂

      • tipoo
      • 3 years ago

      I feel like Fusion Drive at the moment would be better served with a larger flash cache than a smaller higher performance Optane cache. The difference between the feel of the 24GB and 128GB fusion drives is large, and could stand to benefit from even larger than the latter.

        • blastdoor
        • 3 years ago

        Could be, but hard to know without actually seeing the alternatives.

        But maybe 24GB of Optane + 256GB of NAND + 4 TB of rust is the sweet spot?

          • tipoo
          • 3 years ago

          We definitely don’t know yet, but I’m intuiting that most iMac workloads aren’t really going to see as much benefit from 1000x lower access latency than NAND, rather than splitting that cost the old fashioned way between NAND for the fusion drive, memory, and rust. At least for this year and the next, but Optane will certainly become more affordable fast like SSDs.

          It’s nice to imagine layers of cache upon layers of cache but I doubt the cost is there yet for a consumer/prosumer device.

      • cygnus1
      • 3 years ago

      Fusion Drive isn’t really the same as this though. Fusion drive is actual storage tiering, not caching. Data is either on the SSD or on the HDD, not both. You actually have the combined space of both disks available for use. So a 1TB HDD and 128GB SSD gives you a 1.125 TB Fusion drive.

      This is exactly the same as the old Intel SRT where it’s just trying to cache busy LBA’s from a bigger disk.

      Now, not to say you couldn’t replace the SSD in a Fusion with an Optane device, but your total overall capacity would be lower since the Optane devices are smaller.
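
      In rough code terms, the difference comes down to where data lives and what counts toward capacity (illustrative only, not either vendor's implementation):

          ssd_gb, hdd_gb = 128, 1000

          # Tiering (Fusion Drive): every block lives on exactly one device,
          # so usable capacity is the sum of both tiers.
          print("fusion capacity:", ssd_gb + hdd_gb, "GB")   # ~1.1TB, as noted above

          # Caching (SRT / Optane Memory): the fast device only holds copies of
          # hot blocks, so usable capacity is just the backing disk.
          print("cached capacity:", hdd_gb, "GB")            # the cache never shows up as extra space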

    • Bauxite
    • 3 years ago

    Isn’t this like the fourth attempt at caching to somehow sell extremely small devices at high margins?

      • TwistedKestrel
      • 3 years ago

        SRT wasn’t a bad idea. I was very fond of it, actually… while it worked. Intel has since shown that they drop support for a given chipset verrrrry quickly, and now it basically doesn’t work at all anymore for older chipsets under Windows 10. Doesn’t really motivate me to buy the newest version of it…

        This is still really weak on its own, though. The SLC Intel 331 from six years ago was 20GB, and SSDs are a hell of a lot cheaper now. I have a really hard time imagining anyone buying a brand new system but cheaping out on the storage… but yet willing to float $80 to shore up performance only while booting.

        • Den2
        • 3 years ago

        $80 is easily a 240GB drive, very close to a 500GB one during the lowest prices. That’s quite a bit more than just installing the OS… The computer I built in December 2014 started with only 120GB total for the first ~2 months, and even now it only has ~360GB total. Not planning on upgrading that any time soon. We’re past the point where $80 only gets you a boot drive. Unless 3D Xpoint provides a large benefit over SSDs in general, caching on it while using an SSD makes little sense, and just getting an SSD instead of 3D Xpoint makes more sense than Xpoint+HDD at 3D Xpoint’s current prices.
