Memristor-based flash replacement due in 2013

Memristors are pretty neat. By adjusting the direction of current flow through them, these electrical components can be made to change their resistance. More importantly, memristors are capable of retaining their resistance after the current flow is cut off. Combine enough of ’em together, and you’ve got an alternative solid-state storage technology.
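That behavior can be captured with the linear ion-drift model from HP Labs' 2008 Nature paper, in which a dopant boundary inside the device drifts in proportion to the current passed through it. Here's a minimal Python sketch of the idea; the parameter values are illustrative placeholders, not figures from any real device:

```python
# Minimal sketch of the linear ion-drift memristor model (Strukov et al., 2008).
# All parameter values below are illustrative, not from any datasheet.

R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / undoped resistance
D = 10e-9                       # m: device thickness
MU_V = 1e-14                    # m^2/(V*s): dopant mobility
DT = 1e-6                       # s: simulation time step

def simulate(currents, w=0.1 * D):
    """Euler-integrate the dopant boundary w; return (resistances, final w)."""
    history = []
    for i in currents:
        x = w / D                                # normalized state, 0..1
        r = R_ON * x + R_OFF * (1.0 - x)         # instantaneous resistance
        history.append(r)
        w += MU_V * (R_ON / D) * i * DT          # state drifts with the charge passed
        w = min(max(w, 0.0), D)                  # clamp to physical bounds
    return history, w

# Forward current lowers the resistance...
rs, w = simulate([1e-3] * 5000)
# ...cutting the current off leaves the state (and resistance) untouched...
rs2, w2 = simulate([0.0] * 5000, w)
# ...and reverse current raises it again.
rs3, w3 = simulate([-1e-3] * 5000, w2)
```

Run forward current and the resistance falls; cut the current and the state simply stays put, which is the non-volatility that makes these things a storage candidate.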

A little more than a year ago, HP teamed up with Hynix to produce memory based on memristor technology. Dubbed ReRAM, this non-volatile memory promised lower power consumption than flash and the potential to be even faster. Now, according to HP Senior Fellow Stan Williams, we could see the first chips as early as 2013. Williams says "hundreds of wafers" have already been produced, and it looks like two versions will be available initially: a slower speed grade designed to supplant the flash memory in smartphones, and faster stuff primed for SSDs.

HP plans to license its ReRAM technology, and Williams points out that Samsung has an even bigger team working on memristors. Replacing flash memory is only the beginning, though. Williams expects non-volatile memristor chips to challenge volatile DRAM by as early as 2014.

Comments closed
    • PeterD
    • 8 years ago

    How much would it cost?
    (sorry for this down-to-earth comment)

    • MadManOriginal
    • 8 years ago

    News from the near future:

    HP selling and/or spinning off its Memristor division because it’s ‘too low margin’ even though it would make them a ton of money in absolute terms.

      • CuttinHobo
      • 8 years ago

News that involves HP… but doesn’t force the reader to do a [url]http://static.divbyzero.nl/facepalm/doublefacepalm.jpg[/url] ? Am I in the twilight zone? The world no longer makes sense! Edit: My first attempt at creating a link had failed. Boooo.

    • UberGerbil
    • 8 years ago

If you start from HP’s proof-of-concept paper in [url=http://www.eetimes.com/electronics-news/4076910/-Missing-link-memristor-created-Rewrite-the-textbooks-]2008[/url] we're at 3 years, and 2013 is 5 years, and there's your "3-to-5 year" thing panning out for once. On the other hand, I remember reading an article on HP's crossbar work in Scientific American in 2000 or 2001, work that was waiting for (or perhaps inspired the search for) memristors to show up. So if you go by that date, we'll be getting close to the 10-15 year timeframe that far-out tech is generally lumped into. Either way, if they can actually get working (and performant) circuits in commercial quantities, it'll be a big deal. What will be really interesting is watching it evolve as other folks get their hands on it, because this doesn't have the decades of refinement and building-on-the-shoulders-of-others improvements that NAND and our familiar transistors have had.

    • fredsnotdead
    • 8 years ago

    These “exciting new technologies” usually don’t materialize. By the time they’ve developed something useful, more standard devices have surpassed them due to ongoing development.

      • StuffMaster
      • 8 years ago

      I usually agree, but look at how far flash memory has come. Memristors and Magnetic RAM (MRAM) are two things I think might pan out – eventually.

      • Game_boy
      • 8 years ago

      That applies when they say “five years away” or “ten years away”. But 2013 means they have a commercially viable design now.

    Intel demoed Haswell running Windows a month ago, and it’s coming out in 2013 too.

      • The Dark One
      • 8 years ago

      Shut up, I’m expecting those field emission displays to pop up any day now. 🙁

        • willmore
        • 8 years ago

        SEDs are way better!

    • Chrispy_
    • 8 years ago

    As long as they share/license it properly and don’t pull another RAMBUS on the market, this sounds good.

      • BiffStroganoffsky
      • 8 years ago

      Actually, I was thinking they might be the next target for the RAMBUS lawyers. I might be jumping the gun though, as they usually wait until the technology is adopted as a standard and permeates through the market before they unleash the hounds.

    • OneArmedScissor
    • 8 years ago

    [quote]HP’s technology allows the memory layers to be put directly on top of the processor layer making for very fast systems on chip.[/quote]

    Mercy! I'm very curious how dense it is. Imagine not even having cache in your CPU anymore because the cache is the hard drive itself.

      • Goty
      • 8 years ago

      Are you talking about having enough memory on-die to use it as storage or using external storage as cache? I don’t think either would work too well.

        • OneArmedScissor
        • 8 years ago

        Integrated storage. For normal people, smart phones manage with just a few GB of storage, and that’s going to be minuscule below 20nm. Everything that used to eat space is shifting towards streamed delivery.

        They already said they’re going to use it instead of SRAM.

          • Goty
          • 8 years ago

          That could work.

      • willyolio
      • 8 years ago

      i think you’re mixing up the term “system on a chip” with “chip spread out over a system”

      • GTVic
      • 8 years ago

      What I would rather see is OS on one chip, installed apps on another and user data elsewhere. Plus high speed chip-to-chip optical connections.

      If you can physically separate programs and data and possibly execute apps directly from their permanent storage location, rather than loading them into RAM and then cache, then you can really start to crack down on some of the security exploits. Either that or physically separate the data RAM and the program RAM then you can start to eliminate exploits like buffer over-run.

      People shouldn’t be opening Explorer and seeing C:\Windows and all that other stuff at the root of C:. Just an Application store where you can drag in applications (no install) and a documents Explorer, not even a “Documents” folder, just open Explorer and there are your sub-folders for various application data files in whatever way you wish to organize them. Have one hidden folder where apps store data that users don’t need to access, and you have a much simpler/better system.

        • Game_boy
        • 8 years ago

        This has been done on phones, and almost immediately they are jailbroken.

      • Geistbar
      • 8 years ago

      Cache will always be needed on the CPU, no matter how fast external storage is. With modern clock speeds, the physical distance that the signal needs to travel would add sufficient delay to cause a drop in performance. It might alleviate the need for random access memory, possibly, but some cache will always be needed.

        • NeelyCam
        • 8 years ago

        …even if the ‘external storage’ is sitting right on top of the CPU core, 1mm away..?

          • Geistbar
          • 8 years ago

          Well that’s a little different :P. But yes, if it was sufficiently close, it wouldn’t matter. But even going from L2 to L3 cache causes a latency increase. Not sure off the top of my head what the (typical) increased distance is from moving from L2 to L3.

          I was thinking more of something where it’s located similarly to a hard drive, i.e. at the end of a fairly long cable, somewhere in chassis.

            • Zoomer
            • 8 years ago

            Yes, cache is still needed, unless ReRAM can fulfill requests in a few cycles. That means about 1 nanosecond. Given that current NAND is atrociously slow, in the microsecond range, plus the issues associated with addressing a large amount of memory, it seems unlikely, bordering on impossible.
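            The arithmetic behind that comparison pencils out quickly; the figures below are assumed, order-of-magnitude values rather than measurements:

            ```python
            # Back-of-envelope check of the cache-vs-NAND latency gap described above.
            # All figures are assumed, order-of-magnitude values.
            clock_hz = 3e9                         # ~3 GHz CPU
            cycle_ns = 1e9 / clock_hz              # ~0.33 ns per cycle
            cache_budget_ns = 3 * cycle_ns         # "a few cycles" ~= 1 ns
            nand_read_ns = 25_000                  # ~25 us: typical NAND page-read latency
            gap = nand_read_ns / cache_budget_ns   # how far NAND is from cache territory
            print(f"NAND is roughly {gap:,.0f}x too slow to serve as cache")
            ```

            ReRAM would need to close a gap of about four orders of magnitude, not just edge ahead of NAND.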

            • NeelyCam
            • 8 years ago

            Speed improvement was one of the key benefits of this memristor… you can’t lump it together with run-of-the-mill flash just because it’s non-volatile.

          • BobbinThreadbare
          • 8 years ago

          Well at that point, you could consider it a cache, couldn’t you?

            • NeelyCam
            • 8 years ago

            Yep – memory is memory… levels and words like ‘cache’ are just degrees of separation.

    • NeelyCam
    • 8 years ago

    [quote]Williams expects non-volatile memristor chips to challenge volatile DRAM by as early as 2014.[/quote]

    This is the fun part.

      • Game_boy
      • 8 years ago

      I dismissed it as ‘well in 15 years…’ initially. But this is exciting.

      • willyolio
      • 8 years ago

      this can’t come soon enough.

        • NeelyCam
        • 8 years ago

        Slap these into a Hybrid Memory Cube (to improve the CPU/RAM signaling power efficiency) and we have perfect memory.

      • Geistbar
      • 8 years ago

      Yes, this could be a truly huge shift for computers. I expect we might need to add a few years to his prediction though. I am excited for when it finally does happen, regardless of when.

        • NeelyCam
        • 8 years ago

        Memristor/phase-change based memory enables some serious idle power savings, but active power will be determined by the CPU/RAM signaling approach.

        I honestly think Hybrid Memory Cube (kind of a dumb name…) is going to be huge. The near-threshold-voltage solar-powered CPU demoed in the IDF sounds cool and all, but I think that the “HMC” has more promise to make a big difference. They even have a standards organization for that now:

        [url]http://hybridmemorycube.org/about.html[/url]

        With PCIeG3, USB3.0, SATA6 (and even Thunderbolt) becoming widespread, the last things that require space on a motherboard are the DIMM slots (and the hugely parallel mobo traces associated with them). HMC enables much reduced mobo sizes because of hugely higher data rate signaling (=> fewer mobo traces).

          • Geistbar
          • 8 years ago

          I was initially thinking more than power savings with the new technologies. One of the big talking points with memristors was density (though I haven’t actually seen anything that talks specifically about bits / cell), and I know that PRAM can do at least 2 bits / cell. Power efficiency is huge, but the ability to dramatically increase memory storage could also have a large effect. I imagine that they could do something impressive with say, 64 GB of RAM in the equivalent of an otherwise modern personal computer. Of course that is something that would take longer for programmers to take advantage of, whereas power savings can be taken advantage of immediately.

          Assuming the technologies could be adapted to cache as well, they could mean a huge change for processors, too. A single SRAM cell is 6 transistors; if they could move to something with 1 transistor / cell and 2 bits / cell, it could allow for either dramatically more cache or significantly reduced die sizes (I haven’t seen specific numbers, but going by eye, it looks like about 20-30% of the Sandy Bridge die is L3 cache; Penryn looks to be about 1/2 to 2/3 cache). Probably some combination of increased cache and decreased die size, I would think. I know SRAM is much faster than the alternatives (I’ve never seen something claimed to be faster than it), so I assume that and the difficulty of implementing the other technologies so close to the core are the main stumbling blocks; but I would think the potential is there.
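          A quick back-of-envelope version of that estimate; the 6T SRAM cell is standard, while the 1-transistor / 2-bit cell and the die fractions are the assumptions from the paragraph above:

          ```python
          # Rough cache-density arithmetic for the scenario above; numbers illustrative.
          sram_transistors_per_bit = 6.0        # classic 6T SRAM cell
          reram_transistors_per_bit = 1.0 / 2   # assumed: one transistor storing 2 bits
          shrink = sram_transistors_per_bit / reram_transistors_per_bit  # 12x fewer per bit

          l3_fraction = 0.25                    # eyeballed ~20-30% of a Sandy Bridge die
          l3_after = l3_fraction / shrink       # same capacity in ~1/12 the area
          die_freed = l3_fraction - l3_after    # die area freed up, ~23% of the die
          ```

          At the same capacity, the L3 would shrink from a quarter of the die to roughly 2%, which is the space-versus-capacity trade-off being described.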

          Looked around the HMC website, and found and read an entry on Intel’s Research blog. Looks very interesting, I had been wondering how long it would be until transistors were stacked three dimensionally. I haven’t been able to find anything on how specifically it stores bits though. There are various references to the stacked DRAM layers; since you seem to know something about it, is there a specific technology for those layers, or is that meant to be more customizable? I feel like I might be missing the forest for the trees with that question though, as the technology being touted seems to be more the new I/O management and the stacked layers. So I assume it’s less specific, but, well, never good to rely on assumptions when you don’t have to.

          Also, yeah, the name could use some work.

          Edit: Clarification.

            • NeelyCam
            • 8 years ago

            [quote]One of the big talking points with memristors was density (though I haven't actually seen anything that talks specifically about bits / cell)[/quote]

            I think it'll be hard to break into an established market without a compelling alternative... both flash and DRAM are on pretty solid scaling trends (driven by hugely massive volumes).. I have serious doubts about how well memristors can scale to practical applications economically... I'd certainly be happy with positive surprises but...

            [quote]is there a specific technology for those layers, or is that meant to be more customizable?[/quote]

            My understanding is that they'll use WideIO or something similar through TSVs to connect from the buffer chip to the memory stack; that way the link can be parallelized extensively - i.e., the buffer chip uses some sort of SerDes topology to take in high-speed serial streams and translate them into a highly parallel, low-speed WideIO-like link to the (D)RAM.

    • flip-mode
    • 8 years ago

    This could be big.

      • DancinJack
      • 8 years ago

      lol i’ve never seen someone get a downvote for saying something that has nearly no consequence.

        • flip-mode
        • 8 years ago

        I’m tempted to downvote you for saying that!

        • NeelyCam
        • 8 years ago

        You’re kidding, right?

      • quasi_accurate
      • 8 years ago

      That’s what she said.

      • ew
      • 8 years ago

      Not only for storage but from what I understand memristors can be used like transistors and could have a big impact on CPU design as well.
