
Samsung’s 860 QVO 1-TB SSD reviewed

When we reviewed Samsung’s 860 EVO SSD a few months ago, we spent some time musing about the coming quad-level-cell flood. Well, folks, the fourth bit of the levelocalypse is finally upon us. Samsung’s QLC-powered 860 QVO SSD has made its way into the TR labs.

The 860 QVO is the third QLC SSD to hit the consumer market. The first was Intel’s 660P, followed by its Micron-flavored cousin, the Crucial P1. (Both of those drives were preceded by Micron’s 5210 Ion, but that drive was targeted towards enterprise customers and made available only to select partners.) The 860 QVO’s mission is to bring a terabyte-class SSD to the unwashed masses. QLC is the fruit of NAND manufacturers’ never-ceasing quest for increased density, and Samsung thinks it can leverage those gains to deliver enormous drive capacities at comparatively low costs. 860 QVOs start at 1 TB in capacity and only go up from there.

Samsung 860 QVO
Capacity  Price    Max sequential (MB/s)  Max random (IOps)
                   Read     Write         Read    Write
1 TB      $149.99  550      520           96K     89K
2 TB      $299.99  550      520           97K     89K
4 TB      $599.99  550      520           97K     89K

The 1 TB QVO we have on the bench today appears pretty similar to the 860 EVO if you’re only looking skin-deep. Samsung has inverted the usual color scheme of its SATA drives, but otherwise the 860 QVO looks to be just another 2.5″ SATA drive. But don’t be fooled: its pedestrian presentation belies the new technology lurking below the surface.

But let’s leave aside the hyperbole for a moment. What does quad-level-cell even mean? To put it succinctly, in QLC NAND, each individual flash cell is tasked with storing four bits of data. We won’t rehash the entire history of solid-state storage, but here’s a whirlwind tour. NAND flash started out as single-level cells (SLC), meaning each cell stored only a single bit. Eventually, manufacturers figured out how to store two bits per cell (known as multi-level cell, or MLC), getting more storage space out of the same amount of silicon.

These savings come at a cost: each bit of storage added to a cell doubles the number of voltage states in play. In practice, this increased number of states translates to lower performance and shorter lifetimes at the individual flash cell level. For the gritty details, refer back to our review of the Samsung 840 series, in which we gave a thorough breakdown of the MLC (two-bit) to TLC (three-bit) transition. It’s every bit (har har) as relevant today, despite its age.
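The math behind that claim is simple exponential growth. Here’s a quick illustrative sketch of how voltage states scale with bits per cell:

```python
# Each additional bit stored per flash cell doubles the number of
# voltage states the cell must reliably distinguish: states = 2^bits.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def voltage_states(bits_per_cell: int) -> int:
    """Distinct charge levels a cell must hold for a given bit count."""
    return 2 ** bits_per_cell

for name, bits in CELL_TYPES.items():
    print(f"{name}: {bits} bit(s)/cell -> {voltage_states(bits)} voltage states")
```

Note the diminishing returns: going from TLC’s 8 states to QLC’s 16 buys only a 33% density gain, which is why each successive transition gets harder to justify.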

The transition from three bits per cell to four brings all the same concerns and considerations back to the foreground. The advent of TLC brought a lot of hand-wringing into enthusiast circles. The increased density brought some hope that the reduced costs would potentially be passed down to the consumer. But a more pessimistic view was that manufacturers would phase out SLC and MLC products and sell us objectively inferior products for the same price as the outgoing stuff. Arguments can be made for both viewpoints. Client SLC drives are completely extinct, and MLC drives are a dying breed. The few left on the market are positioned only for the deepest pockets. On the other hand, prices for TLC drives have come way down since the technology’s introduction. Here’s the cost-per-gigabyte graph from the 840 Series review as a reminder.

That’s a far cry from the 20-ish cents per gigabyte—or even less—we’re paying for TLC NAND today. But it’s hard to credit the difference entirely to TLC flash. After all, solid-state storage is a different beast today. In 2012, SSDs still had a whiff of the exotic about them, but moving-part-free storage devices are accessible to just about everyone now. Tons of laptops and preassembled systems come standard with them. On top of that, PCIe drives and the NVMe protocol have provided manufacturers with new segmentation niches to carve out. Gone are the days when your only choice was between cheap mechanical storage and halo solid-state products. There’s a whole lot of space in between now.

As for the worry that TLC speeds weren’t good enough, well, adding additional bits certainly does slow down I/O at the cell level, but manufacturers are adept at masking that performance hit. Just about every modern SSD has the ability to set aside a portion of its NAND capacity to be treated as if it’s composed of single-level cells. This reduces the number of possible voltage states back down to two instead of eight, reducing the complexity of accesses and speeding them up. SanDisk calls this technique “nCache,” Micron calls it “Dynamic Write Acceleration,” and Samsung refers to it as “TurboWrite,” but we lump them all together under the umbrella term “pseudo-SLC caching.” These schemes are essential to maintaining an acceptable user experience on denser NAND implementations.
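To make the benefit concrete, here’s a toy model of pseudo-SLC caching. The speeds are illustrative figures in the neighborhood of the 860 QVO’s quoted numbers, and the fill-then-spill behavior is a simplification, not any vendor’s actual algorithm:

```python
# Toy model of pseudo-SLC caching: incoming writes land in a fast
# SLC-mode region until it fills, then drop to direct-to-QLC speed.
# Figures are illustrative, not measured.
CACHE_GB = 42        # pseudo-SLC cache available on an empty drive
CACHED_MBPS = 520    # sequential write speed while the cache has room
DIRECT_MBPS = 80     # sequential write speed straight to QLC

def sustained_write_seconds(total_gb: float) -> float:
    """Seconds to absorb a single sustained write of total_gb."""
    cached_gb = min(total_gb, CACHE_GB)
    direct_gb = total_gb - cached_gb
    return cached_gb * 1000 / CACHED_MBPS + direct_gb * 1000 / DIRECT_MBPS

print(f"20 GB:  {sustained_write_seconds(20):.0f} s")   # fits in cache
print(f"100 GB: {sustained_write_seconds(100):.0f} s")  # spills to QLC
```

The model shows why the cache matters so much: once a transfer overruns it, every additional gigabyte costs more than six times as long to land.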

The long and short of it is that TLC, though inherently slower and less durable than the NAND that preceded it, is still good enough for most end users. And Geoff’s infamous endurance experiment provided evidence that a drive’s flash cell density isn’t the limiting factor for its longevity. Here we are again, though, six years later, with all the same questions about quad-level-cell NAND’s performance and endurance. It’s just that now there are a whopping sixteen voltage states for each cell to worry about! The 860 QVO is our first guinea pig, so let’s dissect it to see what makes it tick.

It’s immediately obvious that Samsung got the bill-of-materials savings it was looking for. The tiny PCB inside the case looks just like that of the 860 EVO 1 TB, except there’s a single V-NAND package instead of two. Aside from that difference, not much else distinguishes the guts of the drives. The brains behind the operation is still Samsung’s MJX controller, which has been retrofitted to play nice with the company’s 4-bit V-NAND. The same 1 GB of LPDDR4 is right there alongside the controller and NAND. (The 2-TB QVO gets 2 GB of DRAM, and the 4-TB drive gets 4 GB.)

The MJX’s ace-in-the-hole is Intelligent TurboWrite, which we’ve seen in action before in Samsung’s other 860 and 900-series drives. This allows the drive’s pseudo-SLC caching capabilities to reach beyond their small, dedicated slice of space and seize additional capacity to act as a high-speed buffer if the user has left some unfilled. In the 1-TB QVO, TurboWrite has a fixed 6-GB region to work with at all times, and it can temporarily enlist 36 more gigabytes if that space is available and called for.
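Samsung’s description boils down to a simple sizing rule for the 1-TB model. The sketch below encodes it; the exact mapping from free user space to dynamic cache is our simplification (QLC cells run in SLC mode hold only a quarter of their normal capacity, and Samsung doesn’t detail its accounting):

```python
# Sketch of Intelligent TurboWrite sizing on the 1-TB 860 QVO:
# a fixed 6-GB pseudo-SLC region plus up to 36 GB more borrowed
# from free space. The free-space-to-cache mapping here is a
# simplification, not Samsung's actual policy.
FIXED_CACHE_GB = 6
DYNAMIC_MAX_GB = 36

def turbowrite_cache_gb(free_space_gb: int) -> int:
    """Total pseudo-SLC cache available given free user capacity."""
    return FIXED_CACHE_GB + min(DYNAMIC_MAX_GB, max(0, free_space_gb))

print(turbowrite_cache_gb(500))  # mostly empty drive: the full 42 GB
print(turbowrite_cache_gb(10))   # nearly full drive: 16 GB
print(turbowrite_cache_gb(0))    # brim-full drive: just the fixed 6 GB
```

The takeaway is that a nearly full QVO retreats to its modest fixed cache, which is when the drive’s QLC write floor becomes hardest to hide.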

What’s new in Samsung’s TurboWrite-related press materials is a shot of depressing realism. The company acknowledges that performance will fall off a cliff when TurboWrite isn’t in action. Be careful with the “up to 78 GB” figure in the chart below; it applies only to the larger capacities. The 1-TB QVO can’t enlist more than 42 GB of pseudo-SLC cache.

Drives with pseudo-SLC caching have always been subject to this kind of degradation, but companies weren’t always so forthright about it. This time around, Samsung is careful to specify that the 500+ MB/s speeds quoted in the table only apply to TurboWrite-accelerated operations. The company only claims a modest 80 MB/s when sequential writes are hitting the QLC NAND directly.

Should users be worried about the limits of Intelligent TurboWrite and QLC media? We doubt it, presuming the read characteristics of these drives are satisfactory. Enthusiasts buying a lot of recent AAA games could flirt with the limits of a 42-GB cache, especially as high-quality textures push some games’ installed sizes past 100 GB. Thing is, unless you’re restoring a game backup from local media, 80 MB/s is still going to be way faster than most folks’ internet connections. Folks with gigabit fiber might want to consider a TLC SSD for their Steam drives, but in its role as an accessible SSD, the QVO could serve just fine as a place to keep large files on the cheap.
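Whether 80 MB/s matters depends on what’s feeding the drive. A quick unit conversion shows where the crossover sits (the link speeds are just examples):

```python
# Compare direct-to-QLC sequential writes (Samsung's quoted 80 MB/s)
# against common download link speeds, which are quoted in megabits/s.
QLC_DIRECT_MBPS = 80  # megabytes per second

def link_to_mbytes(megabits_per_s: float) -> float:
    """Convert a link speed in Mbit/s to MB/s (8 bits per byte)."""
    return megabits_per_s / 8

for link in (100, 300, 1000):
    mbs = link_to_mbytes(link)
    verdict = "drive keeps up" if QLC_DIRECT_MBPS >= mbs else "drive is the bottleneck"
    print(f"{link:>4} Mbit/s = {mbs:6.1f} MB/s -> {verdict}")
```

Only around the 640-Mbit/s mark does the QLC write floor become the limiting factor, which squares with the gigabit-fiber caveat above.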

QLC’s shortcomings also manifest in Samsung’s reduced confidence in the longevity of the drive. It backs the drive with a three-year warranty instead of the five years that the EVO got. And the endurance rating for the 1 TB drive is 360 terabytes written, down a good bit from the 660 TBW that the 860 EVO 1 TB was good for. That’s still far more than most consumers need, and it behooves us to remember that the venerable 850 EVO only got a 150 TBW rating back in its heyday.
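For perspective, here’s what the 360-TBW rating works out to in daily terms across the three-year warranty:

```python
# Translate the 1-TB QVO's endurance rating into daily terms.
TBW = 360             # rated terabytes written
CAPACITY_TB = 1.0     # drive capacity in TB
WARRANTY_YEARS = 3

days = WARRANTY_YEARS * 365
gb_per_day = TBW * 1000 / days
dwpd = TBW / (CAPACITY_TB * days)  # drive writes per day

print(f"{gb_per_day:.0f} GB written per day, every day, for {days} days")
print(f"{dwpd:.2f} drive writes per day")
```

Roughly 330 GB per day, sustained for three years, is far beyond what typical desktop use writes, so the rating alone shouldn’t scare off ordinary users.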

And now for perhaps the most important detail of all: the price. The 860 QVO won’t hit the streets for another couple of weeks, but Samsung is setting the suggested price for the 1-TB version at $150. That’s pretty cheap for a terabyte, but tried-and-true 3D TLC drives are currently in the same neighborhood. Either QLC is better than it has any right to be, or it’s too expensive and needs some help from the discount gods. Let’s find out.


IOMeter — Sequential and random performance

IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths. Our sequential tests use a relatively large 128-KB block size.

The 860 QVO’s sequential reads can’t keep up with those of the TLC-equipped 860 EVO, but the race is fairly close. On the other hand, writes aren’t at all in the same ballpark. It’s worth noting that our sequential IOMeter test setup defeats schemes like Intelligent TurboWrite by design. The test file IOMeter runs spans the entire capacity of the drive, so there’s never any idle space to take advantage of. But that was true for the 860 EVO as well, and it chugs along happily near 500 MB/s on static TurboWrite alone. We know from Samsung’s own materials that direct-to-QLC write performance is relatively poor, so we’re most likely seeing that characteristic at work here.

It’s worth noting that IOMeter is a stress test, and this is the sort of issue that real-world workloads don’t typically expose. We’ll reserve final judgement on the drive’s write capabilities for now, but this is a black mark that will hurt the 860 QVO in our aggregated rankings.

Life doesn’t get better for the QVO in our random tests. Random read response times are slower than even those of our ancient X25-M. Maybe don’t throw out your SATA 3 Gbps drives just yet, especially if they’re packing MLC inside. The 860 QVO surges to life when it comes to random write response times. Clearly its TurboWrite tricks yield good results when they kick in properly.

Our first set of IOMeter tests leave us with doubts about the 860 QVO and QLC’s place in the SSD landscape. Let’s see what our sustained and scaling tests can uncover.


Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4-KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.

We’re reporting IOPS rather than response times for these tests. Click the buttons below the graph to switch between SSDs.

The QVO’s peak is much lower than the EVO’s, but it holds on to it for more than 10 minutes. Let’s look at the drives’ performance side-by-side.

The peak graph echoes the question raised by our IOMeter sequential write results: what makes TurboWrite so much less capable on QLC than on TLC? Again, the answer is likely a matter of the performance of the underlying media. With many more states per cell to juggle than its TLC siblings, it’s perhaps not that surprising that the QVO suffers next to its TLC brethren.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.

For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drive in a simulated used state. Click the buttons below the graph to switch between the different drives. Note that each drive uses a different scale for IOPS to allow us to view the shape of its curves.

The 860 QVO exhibits a general upward trend, but the absolute differences from queue depth to queue depth are rather small. The next graphs will make that much clearer. 

All our 3D TLC drives make meaningful gains as queue depth increases, but our one 3D QLC drive looks pretty much flat in comparison.

IOMeter has not been kind to the 860 QVO. But our IOMeter tests are in no way representative of typical access patterns for a client SSD. After all, we write the entire span of the drive and then begin the enhanced interrogation almost immediately. This setup completely neutralizes any caching strategies that rely on unused drive capacity. The data we’ve gathered is still valid and useful, but before we write off QLC, let’s see how it behaves in real-world tests.


TR RoboBench — Real-world transfers

RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

        Number of files  Average file size  Total size  Compressibility
Media   459              21.4 MB            9.58 GB     0.8%
Work    84,652           48.0 KB            3.87 GB     59%

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.

The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
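For the curious, the multithreaded copy at the heart of RoboBench can be approximated in a few lines of Python. This is a rough analogue of robocopy’s default eight-thread mode, not the script we actually use:

```python
# Rough analogue of a robocopy-style multithreaded tree copy.
# Several worker threads issue overlapping file copies, which is what
# lets small-file workloads like our work set keep an SSD busy.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_threaded(src: Path, dst: Path, threads: int = 8) -> int:
    """Copy every file under src into dst, returning the file count."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(path: Path) -> None:
        target = dst / path.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, target)  # copies data and timestamps

    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(copy_one, files))  # list() surfaces any exceptions
    return len(files)
```

Overlapping requests like these are exactly what lets the work set’s 84,652 tiny files generate queue depths deep enough to stress a drive.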

Let’s take a look at the media set first. The buttons switch between read, write, and copy results.

Drat. RoboBench does not provide the vindication we were hoping for. In fact, these results are incredibly similar to our IOMeter sequential output, with reasonably fast reads but painfully slow writes. Our RoboBench tests run with the drive all but empty, so there should be nothing stopping Intelligent TurboWrite from running wild here. Let’s see what the QVO does with the work set.

The work set doesn’t quite let the 860 QVO catch its EVO stablemate, either. Read and copy performance trails the TLC drive. At least writes end up fairly close.

We weren’t expecting the QVO to bowl us over, but we thought real-life file transfers would better showcase its strengths than ended up being the case. The last ray of hope for the drive lies in giving it boot duties.


Boot times

Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.

We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio—automatically from the startup folder. These tests cover the entire boot process, including drive initialization.

Finally, some tests that don’t leave the 860 QVO in the dust. If nothing else, QLC seems well suited to booting up Windows.

Load times

Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790-MB 4K video in Avidemux, a 30-MB spreadsheet in LibreOffice, and a 523-MB image file in the GIMP. In the Visual Studio Express test, we open a 159-MB project containing source code for Microsoft’s PowerShell.

Load times for the first three programs are recorded using PassMark AppTimer. AppTimer’s load completion detection doesn’t play nice with Visual Studio, so we’re still using a stopwatch for that one.

The QVO seems to lag a couple of tenths of a second behind the rest of the pack, but it’s a pretty small difference compared to the performance gulfs we saw in RoboBench and IOMeter.

In games the 860 QVO manages to land close to the middle of the road. The perfect use case for QLC may just be to store our massive game libraries.

This was really the only page of tests that cast the 860 QVO in a positive light. We’ve tapped out our supply of benchmarks, so let’s talk test methods. Or skip a page if you just can’t wait for our conclusions.


Test notes and methods

Here are the essential details for all the drives we tested:

                        Interface     Flash controller        NAND
Adata XPG SX8200 480GB  PCIe Gen3 x4  Silicon Motion SM2262   64-layer Micron 3D TLC
Crucial MX500 500GB     SATA 6Gbps    Silicon Motion SM2258   64-layer Micron 3D TLC
Intel X25-M G2 160GB    SATA 3Gbps    Intel PC29AS21BA0       34-nm Intel MLC
Samsung 850 EVO 1TB     SATA 6Gbps    Samsung MEX             32-layer Samsung TLC
Samsung 860 EVO 1TB     SATA 6Gbps    Samsung MJX             64-layer Samsung TLC
Samsung 860 QVO 1TB     SATA 6Gbps    Samsung MJX             64-layer Samsung QLC
Samsung 970 EVO 1TB     PCIe Gen3 x4  Samsung Phoenix         64-layer Samsung TLC
Toshiba RC100           PCIe Gen3 x2  Toshiba                 64-layer Toshiba BiCS TLC

The SATA SSDs were connected to the motherboard’s Z270 chipset. The PCIe drives were connected via the motherboard’s M.2 slots, which also draw their lanes from the Z270 chipset.

We used the following system for testing:

Processor Intel Core i7-6700K
Motherboard Gigabyte Aorus Z270X-Gaming 5
Firmware F10B
Memory size 16 GB (2 DIMMs)
Memory type Corsair Vengeance LPX DDR4 at 2133 MT/s
Memory timings 15-17-17-35
System drive Corsair Force LS 240GB with S8FM07.9 firmware
Power supply Rosewill Fortress 550 W
Operating system Windows 10 x64 1803

Thanks to Gigabyte for providing the system’s motherboard, to Intel for the CPU, to Corsair for the memory and system drive, and to Rosewill for the PSU. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 4.0 GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test system’s Windows desktop was set at 1920×1200 at 60 Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.



Conclusions
The bards will sing no lionizing songs of the 860 QVO’s struggle through our test suite. We entertain no hopes that it will land anywhere near the 860 EVO in our final reckoning. To compare the drives, we distill an overall performance rating by taking the geometric mean of a basket of results from our test suite, normalized against an older SATA SSD as a baseline.
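For those curious about the mechanics, a rating built this way might look like the sketch below; the benchmark names and figures are invented for illustration, not our actual basket:

```python
# Sketch of an overall rating: express each result as a ratio to a
# baseline drive, then take the geometric mean so the combined score
# reflects proportional gains. Benchmarks and figures are hypothetical.
from math import prod

def overall_rating(results: dict, baseline: dict) -> float:
    """Geometric mean of per-test scores relative to the baseline."""
    ratios = [results[test] / baseline[test] for test in baseline]
    return prod(ratios) ** (1 / len(ratios))

baseline = {"seq_read": 280, "seq_write": 270, "rand_read_iops": 60_000}
candidate = {"seq_read": 540, "seq_write": 160, "rand_read_iops": 75_000}

print(f"{overall_rating(candidate, baseline):.2f}x the baseline")
```

A geometric mean drags the composite down harder for one terrible result than an arithmetic mean would, which is exactly the situation the QVO’s write scores put it in.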

The 860 QVO lands in the company of Toshiba’s RC100, a tiny sliver of NVMe storage that attempted to scrape by without a dedicated DRAM cache. Even with all its advantages, the 860 QVO couldn’t turn in a better showing. We can make excuses for the QLC drive’s performance in IOMeter, since those tests are designed to expose drives’ weaknesses by filling them to capacity and subjecting them to extreme conditions. Workloads with similar access patterns have no business running on budget consumer drives like the 860 QVO. But there are no excuses for the drive’s showing in our real-world RoboBench tests. Despite leaving the 860 QVO plenty of room to let its Intelligent TurboWrite caching work its magic, the drive still just couldn’t deliver.

The drive did do well in our boot and load tests, so maybe it could find its niche there—but only if the price is right. In the graph below, the most compelling position is toward the upper left corner, where the price per gigabyte is low and performance is high.

Prices for SSDs are still falling across the board, and the story is the same no matter where you look in PC components today. Value-packed CPU options abound, DDR4 memory is dipping into sane territory again, and TLC NAND SSDs are just begging to be bought for well under 20 cents a gig. This, unfortunately, is rough news for the 860 QVO 1 TB. Its suggested price has been set at $150, but why buy a QLC drive at 15 cents per gigabyte when you can get excellent TLC drives like the MX500 500 GB or Samsung’s own 860 EVO 1 TB for 13 cents per gigabyte or less?

The disconnect might be explained by the usual discrepancy between manufacturers’ suggested launch prices and real-life retail prices, but even Samsung itself seems to want to sell the 860 EVO for less than it expects to get for the 860 QVO. It’s still Cyber Monday as I’m writing this, so perhaps the magic will have faded from the 860 EVO’s current prices by the time this review goes live.

On top of price concerns, the real-world reliability of QLC NAND is still an open question. Samsung’s three-year warranty and 360-terabytes-written endurance rating are encouraging, but it might nonetheless be hard to find peace of mind this early in QLC’s lifetime. Unless you’re an intrepid early adopter or follow robust backup practices, there’s a lot to lose if you entrust precious data to a fledgling storage technology.

This isn’t a knock against the QVO specifically, of course. More data points will instill more confidence, and more data should be forthcoming over the next months. Intel’s 660P and Crucial’s P1 SSDs are already on the market. Adata just announced its own SSD that’s likely based on Micron’s 3D QLC, too: the Ultimate SU630. As for the other NAND manufacturers, both Western Digital and SK Hynix have already announced or sampled QLC dies. With the entire breadth of the solid-state storage industry indicating a QLC-laden future, we’ll soon have plenty of products to compare and reliability reports to comb through.

In summary, the 860 QVO can boot Windows and load applications with SSD swiftness, but it seems unable to marshal its caching tricks well enough to yield consistently snappy file-transfer speeds. At the price Samsung wants for it, the QVO is a tough pill to swallow. Instead, we’d suggest buying cheap TLC drives while the holiday sales bonanzas are still running. Give QLC some more time to bake before you open up that particular oven.

Responses to “Samsung’s 860 QVO 1-TB SSD reviewed”

  1. Wow, I missed this article when it came out. I get the performance issues for the QVO, but I am enthusiastic about this drive, because for the 4TB drive, the price (maybe) will be about $300 or 30% less than the EVO with a similar capacity.

    For me, this would be good for my older gaming and music/art production laptop, which has a lot of plugin and content files. That machine is before the time of M.2 slots and has only the two 3.5″ drive bays. To get higher capacity of internal drive space, I had to remove the DVD reader/writer and add a 3rd drive into that space.

    This works, but it comes with some risk. I have an external USB DVD drive for that laptop, but I hardly need to use it for anything. The problem is that the most current BIOS for this particular Asus can’t boot from USB, so I can’t use the external DVD drive or a USB stick to get myself out of trouble; I’d have to put the original DVD back or swap out one of the hard drives with another SATA device that would allow me to boot from a disk or SSD bootable partition.

    The older BIOS could boot from USB, but not this latest one, and Asus hasn’t updated it since 2012. It *can* boot from DVD, so if I could go back to 2 internal drives and keep a few bootable emergency utility DVDs handy, that would be the best choice.

    With 4TB drives more and more widely available, I could consider putting a 4TB and a 2TB drive in there and put the DVD drive back in place.

    But I guess what I probably should REALLY do is cut my losses and shop for a new laptop. :O

  2. I spoke to a Micron guy about QLC a few months ago. He said that QLC is meant to displace 10k spinners in datacenters, not TLC and MLC “performance” drives.

    So I think a comparison with TLC vs QLC vs 10k vs 7.2k would be appropriate.

  3. Thanks for the suggestion! I think I’m going to get a 240 or 256 GB model at a minimum. I don’t foresee her needs growing, but I don’t want her to get stuck, and if we’re already at ~70% full, that’s probably too much.

    I’d look at the Adata SU800 256 GB though. If the 128 performs well and is sturdy in your use, a 256 can’t be much different.

  4. I suggest the 128 GB Adata SU___ models. I used the 128 GB SU800 to breathe new life into two old laptops. Today, they’re $28.99 apiece.

    Note that 128 GB can’t fully populate the channels, so they’re not as fast. But for a casual user, that shouldn’t be a problem.

    Also, note that the SU600 lacks DRAM, so I recommend the SU800.

  5. And it’s a one-foot-to-the-left, one-foot-to-the-right thing. haha.
    We haven’t gone back that far, probably on par with planar TLC.
    And now there’s only one way to go. (Unless they have RAM-less versions.)

    High RRP have a purpose-
    1. Let’s see how much suckers will pay. (think appl.)
    2. Retailers can use high No’s like “40% OFF.”

    Would love to know how hot that controller gets doing its error correction on QLC………

  6. The vast majority of flash use cases don’t require huge endurance. You get speed through quantity (for write) and most workloads in the weird AI / ML world we live in are read-heavy on existing datasets.

  7. Not necessarily. It’s too early to say. Just because many outfits went with SPARC or Itanium didn’t mean they became successful. Yes they stayed but only in limited quantities. The mass market voted with their wallets.

  8. I’m not sure the first SATA drive out to market is enough to designate QLC as a ‘resounding failure’ by any stretch, as the first iteration of a new tech tends to be less price competitive as the highly mature products it’s intended to replace. Plus we’re going off of Samsung’s launch list price, which even they tend to discount heavily against.

    QVO aside, I recently built a CFL NUC and paid $74.99 for a 512GB Intel 660p as a boot drive. The 660p was priced identically to the m.2 Crucial MX500 drive, which it crushes in every performance metric.

    People made the same arguments for both MLC and TLC (buh… buh… BUH MY WRITEZZZ) and by the second generation of the product most of us were running them without issue, enjoying the leap in $/GB. Why would QLC be any different?

  9. I think I’d go with something in the page-level encoding. If I read you right, you’re thinking like ECC RAM, where 72 bits encodes 64 bits of data with the ability to correct one-bit errors and detect two-bit errors. But if you went with something like Reed-Solomon coding, you can pretty arbitrarily tweak how much overhead you want to spend protecting the data. Given the direction they’re going with endurance ratings, it seems obvious that at some point someone will introduce something like this to squeeze a bit more endurance out of drives.

    I also find myself wondering if there is any leverage to be had around what is stored in adjacent cells. Apparently there are ways to adapt Rowhammer to flash, which implies that there is some interaction between cells – in which case, I wonder if you can choose an intercell coding which maximizes ability to read back what you wrote.

  10. I think QLC won’t stay very long in the market, to be honest. Even as data drives the cost savings probably won’t save it unless they can really drop prices.

  11. I agree that this round is not worth buying, however this tech is how I get to build an All Flash based 10TB+ Plex Server w/ RAID 5 redundancy in 2020. The prices will go down.

  12. I have been reading hardware magazines from years before internet appears and visiting Usenet and hardware sites since the internet creation… still not capable of imagine some of the features of any hardware just reading its name. Always need to go to the reviews to understand the differences between a EVO and a QVO…

    Imagine a person that do not know anything about hardware trying to buy a new hard disk for a computer.

  13. Great write-up Tony. You made boring old TLC more exciting than it’s ever been before! This drive makes me wish I had a couple more 1 TB VelociRaptors, more 1TB 850 EVOs, more 1TB Mushkin Reactors, and more 2TB Micron 1100s. Also makes me wish I had grabbed a couple of those $200 2TB MX500s.

    Out of curiosity, why does your M.2 slot go through Z270 instead of CPU? Mobo layout not particularly flexible?

  14. QLC may well be the bottom of this barrel at least. I’m 100% sure that DRAM-less QLC drives are going to be so awful that people just won’t be interested, and moving to five-bits per cell adds so little storage that I doubt it’ll be worth any of the major controller manufacturers from investing effort in Q(uintuple)LC NAND.

    NAND FABs: “Hey, we’ve got a proposition for you Marvel/Phison/J-Micron/SM; design us a new controller for FIVE levels per cell. We’ll be able to sell 25% more capacity per die than universally-panned QLC and all you have to do is make something that’s four times more difficult than your budget TLC controllers. It’ll be valid in the bottom-of-the-barrel, zero-margins ultra-budget sector where last year’s products are already undercutting all the competition, but those will probably clear out inventory Real Soon* Now™”

    Honestly though, the floor for SSD performance is eMMC, which has already replaced TLC and QLC NAND in truly awful budget builds where neither performance nor capacity are important.

  15. I’m glad TR chose not to recommend this drive. Many other outlets don’t have the scruples for that.

  16. Welcome to “The Race To The Bottom” in SSD technology and pricing.

    Expect this nightmare to be complete when the major names exit the SSD business because they can’t make billions of dollars at it.

    What will be left are those manufacturers and sellers on Newegg with:

    – quirky names (Shenyung Fun Industry?)

    – unbelievably low-ball prices (anyone remember “Crazy Eddie” TV commercials?)

    – “First From China” images (your indication to avoid at all costs) on their pages

    QLC == Solid State Nope (thanks to ronch for the saying)

  17. My idea once was 3 TLC cells for 8 bits + byte-level ECC (in addition to all the other ECC in SSDs that we know and don’t know about). Of course it’s now obsolete, since humanity has learned to stuff more into less.
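
    That scheme can be sketched in a few lines (a purely illustrative toy using even parity; no real SSD maps bytes to cells this way): three 3-bit TLC cells hold 9 bits, enough for one data byte plus one parity bit.

```python
# Toy byte-level-parity packing: 3 TLC cells x 3 bits = 9 bits,
# enough for 8 data bits plus 1 even-parity bit (hypothetical scheme).
def pack_byte(byte):
    parity = bin(byte).count("1") % 2           # even-parity bit
    word = (byte << 1) | parity                 # 9-bit word
    # split the 9-bit word across three 3-bit TLC cells
    return [(word >> shift) & 0b111 for shift in (6, 3, 0)]

def unpack_byte(cells):
    word = (cells[0] << 6) | (cells[1] << 3) | cells[2]
    byte, parity = word >> 1, word & 1
    if parity != bin(byte).count("1") % 2:
        raise ValueError("parity error")        # detects any single bit flip
    return byte
```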

  18. TLC drives will remain available, just like MLC drives are still available. You just have to accept paying extra. Given that my storage needs have stagnated, a 1TB Pro MLC drive is likely to last me 3-4 years.

  19. Yep. I’ve been eyeing a 2TB MX500 in the sales.

    I don’t need anywhere near that much storage just yet, but I’ve not seen a single QLC drive on the market I like the performance of and Samsung are probably the best when it comes to controllers. If they can’t make QLC appealing, it’s because QLC simply isn’t appealing, end of discussion.

  20. I kind of have started doing that, but not because of QLC. I already had a Samsung 840 EVO 250GB, and a Pioneer 120GB. About 6 months ago I bought a 2TB Micron 1100 when it was on sale for $250. This was before the price of NAND started dropping and so it was an incredible deal at the time. Then about two months ago Newegg had a 15% off sale, which made the 2TB Micron 1100 $212.50. Even though NAND had dropped a bit, this was still by FAR the best deal around, so I bought another one. The Micron 1100 is a discontinued drive, so once stocks are gone, that’s it.
    I did see that on Black Friday or Cyber Monday Newegg was selling a Crucial MX500 (the Micron 1100 is actually a Crucial MX300) for $208, so finally there was a deal out there better than the Micron 1100. And NAND prices are forecast to drop up to 20% next year. So if that does happen, it would be a good idea to pick up any good deals you see on MLC/TLC drives next year.

  21. And the other one is a simple “no”, unless money is no object in which case “caveat emptor”.

  22. Agreed. Furthermore I don’t trust Samsung to be up-front about it. They “fixed” the 840 EVO but conveniently forgot about its immediate predecessor, the 840, which had the exact same problems due to using the same planar TLC NAND but without the benefit of a cache.

  23. I’d pronounce it like Kee-Voh, but I’d rather it didn’t exist in the first place.

    Other than an enterprise with an extremely large and read-heavy workload, I don’t see who would want this. And the guys with those kind of loads already have solutions.

  24. Well, with only one die, this drive is not the best foot forward for QLC NAND. Something that can fill all the channels of a modern drive controller (which I believe requires a minimum of 2 dies) will help write performance greatly.

  25. You may also want to consider the data retention of drives that aren’t accessed frequently enough. They already ran into an issue with TLC (the 840 EVO, IIRC) where data that was not accessed frequently enough got corrupted due to voltage drift. The solution was to rewrite the data periodically. I would want some assurance that a technology with half the tolerance for voltage drift can maintain data integrity over time without sacrificing an unacceptable portion of its write cycles to do it.

  26. [quote="NovusBogus"]I'd rather see that effort and expense go into more impactful issues like whether RTX is worth it or how long an Optane/XPoint drive lasts[/quote]

    Sounds like an endurance test including Optane/XPoint representatives would satisfy one of your criteria.

  27. Y U be droppin perfectly good SSDs in an article like this. How are they supposed to sell you on the “latest and greatest” tech if you keep bringing up the past.

  28. [hangs head]… me too.
    [realizes it still performs quite well]Wait, what was the problem again?

  29. ^This.
    [quote="Chrispy_"]- You save a teensy bit on NAND (33% more data per die but subtract from that the extra manufacturing complexity).
    - You have to spend extra on a more fancy controller and its fancy cache shenanigans.[/quote]

    Nice and succinct way of putting it. I’m not really sure what people expected, but this came as no surprise to me.

  30. In order for a single cell to hold, say, 3 bits, there will have to be 8 distinct voltage levels because you need to account for every possible combination of those 3 bits, of which there are 8 possible combos. For QLC, 4 bits are read straight out of the cell and there are 16 possible combos. So, 16 distinct voltage levels.

    Er, does this answer your question?
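
    The arithmetic in the comment above can be checked directly: a cell storing n bits must distinguish 2**n voltage levels (a quick sketch, nothing drive-specific assumed).

```python
# Distinct voltage levels needed per cell: every combination of the
# n stored bits needs its own level, so levels = 2 ** n.
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
    print(f"{name}: {bits} bits -> {2 ** bits} voltage levels")
```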

  31. Keeping things simple is one way to reduce the cost.

    Needing to make an entirely new flash tech that brings a theoretical 10% benefit in capacity might not end up offering higher capacities per $ in the end.

    With QLC we seem to be getting close to the point where you can’t go higher without substantial sacrifices, for lower and lower gains (a 5-bit cell would offer a maximum 25% gain over QLC, and a 6-bit cell an even smaller 20%).
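
    Those shrinking gains are plain arithmetic; a quick sketch (no SSD specifics assumed):

```python
# Relative capacity gain from storing one more bit per cell:
# going from n to n + 1 bits is a ((n + 1) / n - 1) increase.
for n in range(1, 6):
    gain = (n + 1) / n - 1
    print(f"{n} -> {n + 1} bits per cell: +{gain:.0%} capacity")
```

    The last two lines of output reproduce the 25% and 20% figures mentioned above.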

  32. Uh. I’ll just leave this here.


  33. Am I the only one who thinks the naming is “funny”?

    PRO: easy to pronounce, easy to understand.
    EVO: easy to pronounce, easy to understand.
    QVO: Kill VO? Queue VO? Kill Vee Old?

    I know it’s QLC, but why can’t we call it: QUO xD

  34. [quote]whether RTX is worth it or how long an Optane/XPoint drive lasts.[/quote]

    I'm going to assume the former is a joke, but agree including Optane in the test would be worthwhile.

  35. I’m not against another endurance test, but I don’t see where it would really bring a lot of benefits. The SSD endurance test was a landmark in tech journalism because it answered the question of whether a then-exotic new technology came with major long-term risks that outweigh the immense performance gains over spinning rust. QLC is, at its very best, a marginally cheaper and almost as good alternative to commodity TLC drives that are already flooding the market. I’d rather see that effort and expense go into more impactful issues like whether RTX is worth it or how long an Optane/XPoint drive lasts.

  36. If you’ve been planning to get an SSD, do it now while there are still respectable TLC drives available. Buy the best and biggest one you can get and pray that it works ok for the next 5 years while they sort out QLC or the market decides whether QLC stays or not.

  37. QLC appears to be a resounding failure.

    – You save a teensy bit on NAND (33% more data per die but subtract from that the extra manufacturing complexity).
    – You have to spend extra on a more fancy controller and its fancy cache shenanigans.

    For entry-level/budget drives, the more complex controller costs likely equal or outweigh the QLC NAND savings, and yet performance and endurance both suffer; you’re getting an inferior drive for no cost savings whatsoever. And in a drive with enough NAND that the extra controller cost is mitigated by the capacity savings, this becomes an expensive drive, and nobody with money for expensive drives wants a low-endurance, underperforming dog.

    It’s a pretty poor solution to a problem that doesn’t exist, IMO.

  38. My guess would be it’s just not worth the trouble of dealing with non-power-of-two values. That 20-value (“4.25-bit”) cell only gives you a tiny 6.25% increase over the QLC cell. Even the 32-value cell (“penta-level cell,” maybe?) would of course be just 25% more, which isn’t all that much in the first place either.
    So intermediate steps might just not be worth it.
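
    The figures in that comment check out under one hypothetical packing: log2(20) ≈ 4.32 bits is the theoretical ceiling for a 20-level cell, while grouping four such cells (20^4 ≥ 2^17 combinations) stores 17 bits, i.e. the 4.25 bits per cell and 6.25% gain over QLC mentioned above.

```python
import math

# Theoretical information content of a 20-level cell.
theoretical = math.log2(20)        # ~4.32 bits per cell

# Hypothetical practical packing: four 20-level cells give
# 20**4 = 160,000 combinations, enough to encode 17 bits (2**17 = 131,072).
assert 20 ** 4 >= 2 ** 17
practical = 17 / 4                 # 4.25 bits per cell
gain_over_qlc = practical / 4 - 1  # 0.0625, i.e. 6.25% over 4-bit QLC

print(theoretical, practical, gain_over_qlc)
```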

  39. Well, I’m looking at a Christmas gift idea, so I want to stick to new. I asked her what she wanted, and she asked if there was a way to make her laptop not feel so slow. It’s got 4 GB of memory with one slot still open, and a 500 GB hard drive. I’d like to keep it under $100 to add another 4 or 8 GB and an SSD, so this thing is out entirely. The 250 GB 860 EVO is like $55, which is in my range, or $75 for a 500 GB that she’ll never use.

  40. Same cost as TLC.
    Half the speed of TLC.
    Half the endurance of TLC.
    1 NAND die = 1TB, so that’s likely the lowest capacity they’ll sell = $150 entry fee.

    Solid nope.

  41. If she’s only using 80GB, wouldn’t an older “better” drive like an 840 Pro or something from Crucial fit the bill? Got a 256GB SSD that needs upgrading? Your mom could subsidize the cost of your upgrade while getting a great performance increase, at less than the cost of a new drive.

  42. I can see why it went from 1-bit to 2-bit, but is there a particular reason why these levels are 3-bit and 4-bit? I mean, there’s no real reason you couldn’t encode into cells holding a non-power-of-two number of discrete values, like 5 or 7. I guess it would change the number of pages that fit in an erase block, but that doesn’t seem like an insurmountable problem.

    This isn’t such an issue right now, because it would only open up things like 3/4-size TLC, but I would imagine they’re planning to keep moving forward, and it seems plausible that a 5-bit cell might be more reliable when treated as a 20-value cell rather than a 32-value cell.

  43. I think the price comparison to spinning drives is a bit misleading.

    If your goal is cheap storage, you can get spinning drives at about $0.025 per gigabyte. A TB here and a TB there, and suddenly you’re talking real money. Instead of spending $600 on a 4TB SSD, you could pay just $100 for a 4TB HDD.

  44. That is some dreadful performance. Prices will have to drop significantly to justify this drive.

    Plus, given the slowdowns over time experienced by Samsung’s first TLC drives and their inability to provide a real fix, I would be rather hesitant to pick this up regardless of price until QLC has proven itself over time.

  45. [quote]On top of price concerns, the real-world reliability of QLC NAND is still an open question.[/quote]

    The availability of additional QLC drives sounds like a good excuse to repeat the endurance test.

  46. If a drive like this is in a low-write situation, like a NAS, would the reliability be higher than that of a mechanical drive?

    EDIT: My thought is that the life of an SSD seems to have more to do with the writes where mechanical drive life tends to be time and use.

  47. [quote="Vigil80"]Thank goodness. I only just got a new EVO this weekend. I don't need to hear about any new hotness yet.[/quote]

    QLC will never be the new hotness. Its only benefit is lower cost.

  48. So, we’re approaching the prices of hard drives, with the sustained transfer rates of hard drives? At least access times remain good.

    The great unknown is the resilience/endurance of QLC. Please run a new version of the SSD endurance test with a new crop of drives.

  49. Shit, I’m stupid.

    So I think this thing could find a home as an inexpensive Steam library drive or similar. It doesn’t load any slower than any of the other modern drives and it should be tangibly faster than any spinning drive as a result. Definitely not something you want to put through the paces. But even if I dropped one of these in my mom’s aging Broadwell-based laptop, it’d give new life to an old computer for not much money. I’d probably drop in a smaller, different model, since she only uses around 80GB with Windows, Office, and her docs.

    Then again, that all assumes the price will drop below $150. The 860 EVO is still $128 on Amazon.

  50. [quote]Give QLC some more time to bake[/quote]

    Thank goodness. I only just got a new EVO this weekend. I don't need to hear about any new hotness yet.