Intel SSD 660p brings QLC and NVMe to the masses

Samsung announced yesterday that it's mass-producing QLC SATA SSDs for consumers, but Intel has beaten the NAND giant to the punch with a QLC drive of its own. The SSD 660p, launching today, offers SATA-like prices for NVMe gumsticks in 512-GB, 1-TB, and 2-TB capacities.

Intel SSD 660p
Capacity   Price   Max sequential read/write   Max random read/write
512 GB     $99     up to 1800 MB/s             up to 220K IOPS
1 TB       $199    up to 1800 MB/s             up to 220K IOPS
2 TB       TBD     up to 1800 MB/s             up to 220K IOPS

According to AnandTech's report on the drive, the SSD 660p uses Intel's 64-layer, 1-Tb 3D QLC NAND to achieve its high densities at relatively low prices per gigabyte for NVMe storage. The uniform performance specifications in the table above come from the fact that the SSD 660p uses a massive pseudo-SLC caching scheme in tandem with 256 MB of DRAM to hide the native performance of the underlying NAND.

The 512-GB drive can use anywhere from 6 GB to 76 GB of its capacity for caching; the 1-TB drive, 12 GB to 140 GB; and the 2-TB drive, 24 GB to 280 GB. The effectiveness of the SLC caching mechanism will apparently decrease as the drive fills.
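
Intel hasn't published the exact allocation curve, but a minimal sketch in Python, assuming a simple linear ramp between the empty-drive maximum and the full-drive minimum, illustrates how the cache could shrink as the drive fills:

    # Rough model of the 660p's variable pSLC cache. The linear ramp between
    # the empty-drive maximum and full-drive minimum is an assumption for
    # illustration; Intel hasn't published the actual allocation curve.
    CACHE_RANGES_GB = {512: (6, 76), 1024: (12, 140), 2048: (24, 280)}

    def pslc_cache_gb(capacity_gb, fill_fraction):
        """Estimate available pSLC cache (GB) at a given fill level (0.0-1.0)."""
        min_gb, max_gb = CACHE_RANGES_GB[capacity_gb]
        return max_gb - (max_gb - min_gb) * fill_fraction

    for fill in (0.0, 0.5, 1.0):
        print(f"1-TB drive at {fill:.0%} full: ~{pslc_cache_gb(1024, fill):.0f} GB of pSLC cache")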

Intel warrants the SSD 660p for five years. The 512-GB drive is specced to endure 0.1 DWPD, or 100 TB of writes. The 1-TB drive can stand up to 200 TB of writes over its lifetime. Finally, Intel rates the 2-TB drive for 400 TB of writes.
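
As a sanity check, converting the 0.1-DWPD rating to total writes over the five-year warranty roughly reproduces the quoted figures; Intel's round TBW numbers come out a touch more generous than the raw math:

    # Convert a drive-writes-per-day rating into total terabytes written
    # over the five-year warranty and compare against Intel's quoted figures.
    WARRANTY_YEARS = 5

    def tbw_from_dwpd(capacity_gb, dwpd, years=WARRANTY_YEARS):
        return capacity_gb * dwpd * 365 * years / 1000

    for capacity_gb, rated_tbw in ((512, 100), (1024, 200), (2048, 400)):
        implied = tbw_from_dwpd(capacity_gb, 0.1)
        print(f"{capacity_gb} GB at 0.1 DWPD: ~{implied:.0f} TB implied vs. {rated_tbw} TB rated")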

At about 20¢ per gigabyte, these drives are priced competitively with comparable SATA products and will likely get cheaper as the discount winds blow. That's good news for consumers who want a little PCIe in their SSDs. Newegg already has the 512-GB 660p in stock, so expect the rest of these drives to hit retail and e-tail shelves soon.

Comments closed
    • tanker27
    • 1 year ago

    I’m ignorant as to what the actual uses for these are. Do you see performance increases with them alongside SSDs? How about without? Are there gaming benefits to having one?

    What do you actually use it for?

      • Krogoth
      • 1 year ago

      NVMe solid-state media only makes sense if you are moving GB/s worth of data on a daily basis or you need to handle thousands to tens of thousands of I/O requests.

    • rudimentary_lathe
    • 1 year ago

    I hope wide adoption of QLC doesn’t mean prices of TLC and MLC will stagnate.

    • ronch
    • 1 year ago

    So what you’re saying is… it’ll take QLC to bring NVMe to ‘masses’ people like me?

    • watzupken
    • 1 year ago

    Despite the poor endurance relative to MLC and TLC, this could be a good storage solution at the right price. Currently, though, it is still pricey.

      • BurntMyBacon
      • 1 year ago

      Adding bits per cell will not be as good a driver here as people seem to expect. Going from SLC to MLC provided a doubling of (100% more) capacity within the same area. Going from MLC to TLC provided 50% more capacity in the same area. Going from TLC to QLC only provides 33% more capacity in the same area. This means that, all else being equal, QLC can provide a theoretical maximum of 33% more GB/$ to the end user. In the real world, however, there are other costs associated with building a QLC SSD (controller, PCB, RAM, etc.) that are the same as or possibly even higher than its equivalent TLC counterpart. Moving from QLC to 5LC (PLC?) would only net a theoretical max of 25% more GB/$.

      Most of the cost improvement for this drive seems to come from the use of a 64-layer stack vs. the 32-layer stack in the 600p. The capacity improvement of going from TLC to QLC is certainly meaningful, but less of a contributor.
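
      That diminishing-returns math is easy to verify with a quick sketch; each added bit per cell multiplies capacity by (n+1)/n:

          # Each step from n to n+1 bits per cell adds (n+1)/n - 1 capacity
          # in the same die area: 100%, then 50%, then 33%, then only 25%.
          levels = {1: "SLC", 2: "MLC", 3: "TLC", 4: "QLC", 5: "PLC"}

          for bits in range(1, 5):
              gain = (bits + 1) / bits - 1
              print(f"{levels[bits]} -> {levels[bits + 1]}: +{gain:.0%} capacity in the same area")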

        • Waco
        • 1 year ago

        25% is pretty huge IMO.

    • HERETIC
    • 1 year ago

    There’s one constant in this universe: “change.”

    Having read a couple of reviews, these are not “bad.”
    Fine for OEMs to throw in general-purpose lappies, and
    so far they will probably make an OK game drive.
    Price will be a deciding factor; expect every man and his dog
    to have QLC drives out by year’s end, and cheaper than Intel.
    So these will be relegated to OEMs that buy at bulk prices.

    As for us enthusiasts at TR:
    “THESE ARE NOT THE DRIVES YOU’RE LOOKING FOR”

    • ronch
    • 1 year ago

    Pretty soon most SSDs will be QLC. I wonder who gets to Bikini Bottom first

      • NovusBogus
      • 1 year ago

      I’m feeling the urge to pick up a couple of 1TB MLC drives while they’re still available.

    • DavidC1
    • 1 year ago

    The endurance isn’t anywhere near as bad as people are saying.

    Forget the DWPD. The absolute numbers are pretty good. Good SSDs like the X25-M had only a 20-GB/day rating. The 660p’s far greater capacity counteracts the reduced endurance.

    I have 5 TB written or something on the nine-year-old 80-GB X25-M, out of its warrantied 36.5-TB lifespan. Granted, it hasn’t been in my main system for a few years now. But still. The Intel 760p is rated at a little less than 3x the 660p, and the same goes for the Samsung 960/970.

    So what? You’ll go from using the drive 33 years down to 11. The SSD will have electrical issues before it has endurance issues. Boohoo.
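
    Rough numbers behind the “33 years down to 11” claim; the steady 25-GB/day workload here is a hypothetical figure chosen for illustration:

        # Endurance expressed as years of service at an assumed steady workload.
        DAILY_WRITES_GB = 25  # hypothetical workload, chosen for illustration

        def years_of_life(endurance_tb, daily_gb=DAILY_WRITES_GB):
            return endurance_tb * 1000 / daily_gb / 365

        print(f"660p 512 GB (100 TB rating): ~{years_of_life(100):.0f} years")
        print(f"~3x rating, 760p-class (300 TB): ~{years_of_life(300):.0f} years")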

      • ronch
      • 1 year ago

      Probably still better than mechanical drives.

      • limitedaccess
      • 1 year ago

      Write endurance figures are misleading. That isn’t the real problem users might face with regard to NAND durability.

      What needs to be looked into is independent testing of how SSD products actually handle NAND deterioration over the course of their usage.

      How is data retention? How is performance for “stale” data? How does drive performance hold up after heavy usage for months or a year?

      We’ve already seen real world cases where such problems cropped up, most notably with the 840/840 EVO which were early drives in the shift to TLC.

        • liquidsquid
        • 1 year ago

        The hope would be that the underlying controller refreshes memory during idle times to prevent deterioration and stale-data issues. Those storage gates are so damned tiny that I worry natural radiation will speed the loss of data without refreshing.

    • Chrispy_
    • 1 year ago

    Once the cache is used up, is the underlying QLC NAND performance any better than mechanical?

      • tay
      • 1 year ago

      It has variable amounts of SLC cache. It simply repurposes the same cells as SLC, where reads and writes are faster because the voltage levels don’t have to be quite as exact.

      • DavidC1
      • 1 year ago

      Worst case NAND is probably still faster than a mechanical drive.

      However, in actual use, especially with older and cheaper drives, you notice the worst-case scenario because you’d be used to a much faster state.

      I’m not sure how big of a problem that is, since controllers have improved a lot since the first generation of good SSDs (X25-M). Older SSDs also didn’t have TRIM. I have a Silicon Power Slim S60, and running TRIM every week makes a noticeable difference.

      • watzupken
      • 1 year ago

      Yes. You can’t just look at the sequential read and write speeds, since the key benefit of an SSD is the low seek/response time compared to mechanical drives. An SSD’s transfer rate is usually sustained, while a mechanical drive’s declines as the data gets stored on the outer platter.

        • BurntMyBacon
        • 1 year ago

        To be fair, most writes large enough that it would deplete the SLC cache and continuous enough that the cache doesn’t have time to flush are likely fairly sequential in nature. While this isn’t always the case, it could be argued that sequential performance of an HDD is still the most appropriate comparison here.

        On a different (and inconsequential) note, I thought that HDDs were faster on the outer edge where the linear velocity of the platter is greater given the same angular velocity.

          • Usacomp2k3
          • 1 year ago

          I believe you are correct. Same thing with CDs.

      • jihadjoe
      • 1 year ago

      On AT’s sustained sequential read/write tests it was doing 1.4 to 1.6 GB/s.

        • Waco
        • 1 year ago

        Those tests are not long enough to determine steady-state bandwidth IIRC.

    • DPete27
    • 1 year ago

    0.1 DWPD? That doesn’t sound like much. Right?

      • cygnus1
      • 1 year ago

      It’s not a lot. But the question should be: is it enough? For the average user/usage, it’s plenty.

      • smilingcrow
      • 1 year ago

      If you look at these as cheaper secondary data SSDs, where reads typically far outnumber writes, then that seems fine.
      The smallest one is 512GB, so that equates to over 50GB of writes per day.
      That’s almost 2.5 hours of 50-Mb/s video capture per day.
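
      That arithmetic checks out; a quick sketch (ignoring filesystem overhead):

          # 0.1 DWPD on a 512-GB drive vs. hours of 50-Mb/s video capture per day.
          capacity_gb, dwpd, bitrate_mbps = 512, 0.1, 50

          daily_budget_gb = capacity_gb * dwpd          # ~51 GB/day write budget
          gb_per_hour = bitrate_mbps / 8 * 3600 / 1000  # ~22.5 GB/hour of capture
          print(f"{daily_budget_gb:.1f} GB/day budget = "
                f"{daily_budget_gb / gb_per_hour:.1f} hours of capture per day")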

    • DragonDaddyBear
    • 1 year ago

    I dream of a day when I have a handful of these 2-TB drives in a NAS with 10GbE. I just need my current stuff to last long enough to get there.

      • Zan Lynx
      • 1 year ago

      Wait until PCIe 4.0 is everywhere; you should be able to buy up some out-of-date 2-TB Samsung 960 Pro drives and put four of them on an x16 adapter card. More performance, probably the same price, in four years or so.

    • Delphis
    • 1 year ago

    I notice that Samsung’s 970 EVO NVMe drive is only rated for 150TB written for its 256GB model. So this doesn’t seem too bad. Mostly you’re going to be reading from it, I’d think.

      • Waco
      • 1 year ago

      It means they’re rated at 1/3 the durability of the 970 EVO. Seems potentially bad, depending on workload.

        • jihadjoe
        • 1 year ago

        Well MLC was 1/10th the reliability of SLC, so dropping to 1/3 when going from MLC to TLC, and TLC to QLC seems pretty good in comparison.

          • Waco
          • 1 year ago

          SLC and MLC were well beyond anything most of us have to worry about. TLC and QLC are not.

      • DavidC1
      • 1 year ago

      (Dammit. I accidentally clicked downvote when I meant an upvote. So think of your votes as two higher than they actually are. Not that you should really live for that, but just saying.)

      100TB is pretty decent. Enthusiasts are like super paranoid or something. You can use this on a gaming system and it’ll be fine.

        • Delphis
        • 1 year ago

        Oh, it’s fine but thank you for mentioning it. It amused me, the downvotes.

        It does seem as though there’s an immediate panic about overwriting SSDs numerous times, when no one ever really does that in real-world usage.

        Nothing lasts forever and I highly doubt anyone is using the same data drives they were 10 years ago anyway.

        If you’re sitting there with everything on one SSD and can’t survive if it goes away either by controller or flash failure, you have bigger issues.

        Worrying about malware too. If it can write to your drives you’ve already lost. The data can’t be trusted.

          • Waco
          • 1 year ago

          It’s not panic, it’s that a finite resource that continues to shrink in durability is eventually going to be a very real issue. It already is for many workloads.

            • strangerguy
            • 1 year ago

            You mean the extreme SSD professional workloads where the employer pays for the gear?

            If you aren’t, then I’m so sorry you are the 0.01% special snowflake who has to pay more for endurance nobody else needs.

            • Waco
            • 1 year ago

            Nobody else don’t need. Intellectual giant here.

            Anyway – if you think a drive that dies after writing to it a few hundred times isn’t a concern, why are you on TR? Most of us here are *slightly* power users, which means we generally put a little more load on our systems than the general public. If you are okay with drives that wear out within a few years, so be it. I’m not.

            • strangerguy
            • 1 year ago

            “If you are okay with drives that wear out within a few years, so be it. I’m not.”

            Yeah, because I’m sure there won’t be any big GB/$ increases to justify a new drive in said few years. Help, I’m being oppressed by QLC!

            • Waco
            • 1 year ago

            Thanks for making my point even more clearly than I did. 🙂

            • Srsly_Bro
            • 1 year ago

            I like the reference you made but some ppl tend to keep their drives longer than a few years.

    • just brew it!
    • 1 year ago

    Unless you’ve got a healthy amount of RAM headroom, I’d be really leery of letting the OS use one of these for swap. Sustained thrashing of the pagefile could chew through those writes pretty quickly.

    Frequent use of hibernation with one of these as your system drive is gonna be a no-no too, since it dumps the entire contents of RAM to the drive each time you hibernate.

      • smilingcrow
      • 1 year ago

      If you haven’t got a healthy amount of RAM then buy more RAM before splashing out on an SSD.

      • cygnus1
      • 1 year ago

      Throwing any SSD in a system with only 2GB of ram and using it for swap isn’t a great plan for longevity, ever.

      That being said, you’re kind of blowing the hibernation situation out of proportion. At least on Windows, you don’t normally get a bit-for-bit copy of RAM in the hiberfil.sys file. By default, the size of the file is only a percentage of RAM (40% in Win10, 75% in Win7, I believe) and the contents are compressed as they’re written. So on a 16GB system, you’d generally get about 6.5 GB written to disk at most for a hibernation event. At that rate, it would take over 1500 hibernations to even hit 10% of the write endurance of the smallest, 512-GB drive. I think hibernating isn’t that big a deal…

      edit: and that 1500+ hibernations assumes that RAM was 100% in use for every single one.
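
      The math behind that estimate, using the Win10 default hiberfil.sys cap (compression would only shrink the per-event figure further):

          # Worst-case hibernation writes vs. 10% of the 512-GB drive's rating.
          ram_gb, hiberfile_fraction, endurance_tb = 16, 0.40, 100

          gb_per_hibernate = ram_gb * hiberfile_fraction   # ~6.4 GB per event
          budget_gb = endurance_tb * 1000 * 0.10           # 10% of the 100-TB rating
          print(f"~{gb_per_hibernate:.1f} GB per hibernate; "
                f"{budget_gb / gb_per_hibernate:.0f} events to burn 10% of the rating")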

        • DavidC1
        • 1 year ago

        I have an older system for a family member that has only 2GB of RAM but an Intel SSD 520, running Windows Vista 32-bit.

        Even browsing and watching videos, you can tell it’s paging out. When it does, it turns *noticeably* sluggish. Yea, SSDs can’t make up for a lack of RAM. TRIM makes a big difference too.

        I do hear the Optane drives are good enough for swap on 2GB systems. Since the partly-swapping 520 was sluggish but still usable, I imagine Optane being several times faster makes it completely acceptable for average-joe consumers.

          • cygnus1
          • 1 year ago

          I can’t imagine it has an amazing CPU either. That system sounds like punishment to use…

    • Waco
    • 1 year ago

    200 overwrites. Ouch.

    • brucethemoose
    • 1 year ago

    Huh. At 1500MB/s, you could hit the 200TB limit with a mere 37 hours of continuous writing.

    If these QLC drives become more popular, I can see some… interesting attack vectors for future malware/ransomware. Especially since you don’t need read access to critical memory/files. Get write access to any part of the drive, and you can brick it.

    I want to see how these newer drives behave at the end of their expected life, too. IIRC Intel drives just go read-only, but that TR test was a long time ago.
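
    The 37-hour figure is simple to reproduce, and it scales linearly with whatever write rate the drive can actually sustain:

        # Hours of continuous writing needed to reach an endurance rating.
        def hours_to_limit(endurance_tb, mb_per_s):
            return endurance_tb * 1e6 / mb_per_s / 3600

        print(f"200 TB at 1500 MB/s: ~{hours_to_limit(200, 1500):.0f} hours")
        print(f"200 TB at 100 MB/s: ~{hours_to_limit(200, 100) / 24:.0f} days")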

      • chuckula
      • 1 year ago

      “Huh. At 1500MB/s, you could hit the 200TB limit with a mere 37 hours of continuous writing.”

      I wouldn’t worry about this drive sustaining 1500 MB/s for prolonged periods of time.

        • just brew it!
        • 1 year ago

        A malware attack with the ability to create temporary files on the drive could do sustained writes indefinitely.

          • Waco
          • 1 year ago

          I think he was poking fun at the performance once you’ve exhausted the SLC caching mechanism. If it’s anything like TLC drives, the performance drop could be into the triple digits or even double digits of MB/s under a sustained write workload.

            • cygnus1
            • 1 year ago

            I don’t even think it would have to go full tilt to brick it in short order though.

            Brainstorming a bit: the hypothetical malware could send a ton of small synchronous writes a few times per second. Write amplification on the small writes would most likely burn through the endurance of the drive pretty quickly. It could also send them as low-priority I/O and throttle itself to, say, no more than 100MB/s. Assuming a fresh disk and no write amplification, the drive would hit 100TB in less than two weeks of powered-on time, and write amplification could cut that significantly. It could also easily be written to query the SSD model and then, knowing the endurance rating and typical write amplification, write 95% of the disk’s endurance and hold for a signal to go full tilt and brick the drive.
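
            Putting numbers to that throttled scenario (the write-amplification factors are illustrative guesses):

                # 100 TB at a quiet 100 MB/s; write amplification shortens it further.
                endurance_tb, mb_per_s = 100, 100
                days = endurance_tb * 1e6 / mb_per_s / 86400
                print(f"No amplification: ~{days:.1f} days")
                for wa in (2, 4):
                    print(f"Write amplification {wa}x: ~{days / wa:.1f} days")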

            I guess the moral is: always back up your data. You never really know when or why a storage device is going to fail.

            • brucethemoose
            • 1 year ago

            Yeah, that’s more of what I meant.

            Even if you only average 100MB/s, malware could brick a QLC SSD like this in a pretty short amount of time. Days or weeks, not months or years like MLC or even TLC.

            • cygnus1
            • 1 year ago

            Yeah, exactly. It’d be under 2 weeks assuming zero write amplification, and that’s not realistic. So it could probably be done in a few days at most, and have a low enough impact on performance while it’s underway to go unnoticed by Joe User.

            • BurntMyBacon
            • 1 year ago

            Anyone know the effect the SLC cache has on write amplification?

            • cygnus1
            • 1 year ago

            That’s a good point I didn’t think of. I could see it dampening write amplification for the QLC cells, maybe. But you’ll still have write amplification in the SLC area at the very least. If you assume the drive can’t reassign individual sectors to new flash pages so as to coalesce the writes to the QLC area (I really don’t know if the flash translation table can move stuff around at that fine a grain), I think malware could still be engineered to get write amplification in the QLC area too.

          • chuckula
          • 1 year ago

          I never said you couldn’t do sustained writes on this drive indefinitely. You could do the same thing on a floppy.

          I said that I wouldn’t worry about this drive sustaining *1500 MB/s* write speeds for a prolonged period of time.

      • DavidC1
      • 1 year ago

      The endurance numbers are different for sequential writes versus random writes. Based on their datacenter SSD released just a few days ago, sequential-write DWPD is 4.5x random-write DWPD.

        • BurntMyBacon
        • 1 year ago

        Continuous does not imply sequential. Though, if you were the malware writer, that is something to keep in mind.

          • DavidC1
          • 1 year ago

          If you worry about malware exhausting this SSD, you’ll worry about the Samsung 970 and Intel 760p too; they are only 3x better per capacity. 100TB is more than enough, and the 1TB version has 200TB of endurance. If you get the 256GB 970, it’s only 150TB, or 50% better than the 512GB 660p.

          Worst-case scenario assumes:
          -Random writes, which are much more punishing than sequential
          -The drive being full, so there’s little room for moving data around, which worsens write amplification
          -Random writes spanning the full extent of the drive

          Much ado about nothing.

      • BurntMyBacon
      • 1 year ago

      I thought the Intel drive in the TR test went read-only until the next reboot. At that point it was a brick.

    • meerkt
    • 1 year ago

    Maybe it’s about time they started offering user-configurable bits-per-cell?

      • just brew it!
      • 1 year ago

      That’s an interesting idea, but it would just confuse most people. I’m not sure the demand for a niche product like that from people who would actually care would be enough to justify it.

        • meerkt
        • 1 year ago

        They already sort of do it with pseudo-SLC caching. A bit more refinement and it could be exposed directly to the user.

        Even better, how about defining different xLCness per sector range? Boot and important stuff partition as SLC, general use partition as MLC or TLC, temp files or big files you have backed up as QLC.

          • cygnus1
          • 1 year ago

          It could be simplified to a slider in the OEM’s SSD tool/utility that reduces the capacity of the SSD. As you slide it toward smaller sizes, it increases reserve and/or decreases bits per cell, and it could give you a rough estimate of how much endurance/life you’re adding to the SSD in the process.

          • Amgal
          • 1 year ago

          So we’re back to de-fragging… in a way?

            • willmore
            • 1 year ago

            The wear leveling in the drive’s firmware sort of does that already anyway, so yeah?

    • chuckula
    • 1 year ago

    Well, it is a rather weird way to respond in panic to the imminent launch of Ripper^2.

    But I’ll take it.

      • uni-mitation
      • 1 year ago

      Son, chin up! We are on our indestructible tread-ripping journey to world domination!

      [superb TR scoop] Jaws: The Threadripper edition will hit store shelves pretty soon! Be prepared for blood!

      Rick MoarCoars
      AMD PR Head Chief

    • JosiahBradley
    • 1 year ago

    Is it my birthday already? Thanks, Intel.
