When we reviewed Samsung’s 860 EVO SSD a few months ago, we spent some time musing about the coming quad-level-cell flood. Well, folks, the fourth bit of the levelocalypse is finally upon us. Samsung’s QLC-powered 860 QVO SSD has made its way into the TR labs.
The 860 QVO is the third QLC SSD to hit the consumer market. The first was Intel’s 660P, followed by its Micron-flavored cousin, the Crucial P1. (Both of those drives were preceded by Micron’s 5210 Ion, but that drive was targeted at enterprise customers and made available only to select partners.) The 860 QVO’s mission is to bring a terabyte-class SSD to the unwashed masses. QLC is the fruit of NAND manufacturers’ never-ceasing quest for increased density, and Samsung thinks it can leverage those gains to deliver enormous drive capacities at comparatively low costs. 860 QVOs start at 1 TB in capacity and only go up from there.
|Samsung 860 QVO|
|Capacity||Price||Max sequential (MB/s)||Max random (IOps)|
The 1 TB QVO we have on the bench today appears pretty similar to the 860 EVO if you’re only looking skin-deep. Samsung has inverted the usual color scheme of its SATA drives, but otherwise the 860 QVO looks to be just another 2.5″ SATA drive. But don’t be fooled: its pedestrian presentation belies the new technology lurking below the surface.
But let’s leave aside the hyperbole for a moment. What does quad-level-cell even mean? To put it succinctly, in QLC NAND, each individual flash cell is tasked with storing four bits of data. We won’t rehash the entire history of solid-state storage, but here’s a whirlwind tour. NAND flash started out as single-level cells (SLC), meaning each cell stored only a single bit. Eventually, manufacturers figured out how to store two bits per cell (known as multi-level cell, or MLC), getting more storage space out of the same amount of silicon.
These savings come at a cost: each bit of storage added to a cell doubles the number of voltage states in play. In practice, this increased number of states translates to lower performance and shorter lifetimes at the individual flash cell level. For the gritty details, refer back to our review of the Samsung 840 series, in which we gave a thorough breakdown of the MLC (two-bit) to TLC (three-bit) transition. It’s every bit (har har) as relevant today, despite its age.
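The exponential growth in voltage states is easy to see in a quick back-of-the-envelope calculation:

```python
# Illustrative only: the number of voltage states a flash cell must
# reliably distinguish doubles with every bit stored per cell.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell -> {states} voltage states")
```

Squeezing sixteen distinguishable states into the same cell that once held two is why QLC pays a penalty in both speed and endurance.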
The transition from three bits per cell to four brings all the same concerns and considerations back to the foreground. The advent of TLC brought a lot of hand-wringing into enthusiast circles. The increased density brought hope that the reduced costs would be passed down to the consumer. But a more pessimistic view was that manufacturers would phase out SLC and MLC products and sell us objectively inferior products for the same price as the outgoing stuff. Arguments can be made for both viewpoints. Client SLC drives are completely extinct, and MLC drives are a dying breed. The few left on the market are positioned only for the deepest pockets. On the other hand, prices for TLC drives have come way down since the technology’s introduction. Here’s the cost-per-gigabyte graph from the 840 Series review as a reminder.
That’s a far cry from the 20-ish cents per gigabyte—or even less—we’re paying for TLC NAND today. But it’s hard to credit the difference entirely to TLC flash. After all, solid-state storage is a different beast today. In 2012, SSDs still had a whiff of the exotic about them, but moving-part-free storage devices are accessible to just about everyone now. Tons of laptops and preassembled systems come standard with them. On top of that, PCIe drives and the NVMe protocol have provided manufacturers with new segmentation niches to carve out. Gone are the days when your only choice was between cheap mechanical storage and halo solid-state products. There’s a whole lot of space in between now.
As for the worry that TLC speeds weren’t good enough, well, adding additional bits certainly does slow down I/O at the cell level, but manufacturers are adept at masking that performance hit. Just about every modern SSD has the ability to set aside a portion of its NAND capacity to be treated as if it’s composed of single-level cells. This reduces the number of possible voltage states back down to two instead of eight, simplifying accesses and speeding them up. SanDisk calls this technique “nCache,” Micron calls it “Dynamic Write Acceleration,” and Samsung refers to it as “TurboWrite,” but we lump them all together under the umbrella term “pseudo-SLC caching.” These schemes are essential to maintaining an acceptable user experience on denser NAND implementations.
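As a rough illustration, a pseudo-SLC cache can be modeled as a fast region that absorbs writes until it fills, after which writes fall through to the slower native NAND. This toy model (the function name is ours, and it ignores background cache flushing, making it a worst-case sketch) captures the basic shape:

```python
def write_time_seconds(total_gb, cache_gb, cache_mbps, native_mbps):
    """Toy model of a pseudo-SLC cache: writes land in the fast cache
    until it fills, then fall through to the slower native NAND.
    Uses decimal units (1 GB = 1000 MB) throughout."""
    cached = min(total_gb, cache_gb)       # portion absorbed by the cache
    direct = total_gb - cached             # portion hitting native NAND
    return cached * 1000 / cache_mbps + direct * 1000 / native_mbps
```

The takeaway is that a transfer smaller than the cache runs at full speed, while anything larger blends the two rates, weighted heavily toward the slow one.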
The long and short of it is that TLC, though inherently slower and less durable than the NAND that preceded it, is still good enough for most end users. And Geoff’s infamous endurance experiment provided evidence that a drive’s flash cell density isn’t the limiting factor for its longevity. Here we are again, though, six years later, with all the same questions about quad-level-cell NAND’s performance and endurance. It’s just that now there are a whopping sixteen voltage states for each cell to worry about! The 860 QVO is our first guinea pig, so let’s dissect it to see what makes it tick.
It’s immediately obvious that Samsung got the bill-of-materials savings it was looking for. The tiny PCB inside the case looks just like that of the 860 EVO 1 TB, except there’s a single V-NAND package instead of two. Aside from that difference, not much else distinguishes the guts of the drives. The brains behind the operation is still Samsung’s MJX controller, which has been retrofitted to play nice with the company’s 4-bit V-NAND. The same 1 GB of LPDDR4 is right there alongside the controller and NAND. (The 2-TB QVO gets 2 GB of DRAM, and the 4-TB drive gets 4 GB.)
The MJX’s ace in the hole is Intelligent TurboWrite, which we’ve seen in action before in Samsung’s other 860 and 900-series drives. This allows the drive’s pseudo-SLC caching capabilities to reach beyond their small, dedicated slice of space and seize additional capacity to act as a high-speed buffer if the user has left some unfilled. In the 1-TB QVO, TurboWrite has a fixed 6-GB region to work with at all times, and it can temporarily enlist 36 more gigabytes if that space is available and called for.
What’s new in Samsung’s TurboWrite-related press materials is a shot of depressing realism. The company acknowledges that performance will fall off a cliff when TurboWrite isn’t in action. Be careful of that “up to 78 GB” figure in the chart below—the 1-TB QVO can’t enlist more than 42 GB of pseudo-SLC cache.
Drives with pseudo-SLC caching have always been subject to this kind of degradation, but companies weren’t always so forthright about it. This time around, Samsung is careful to specify that the 500+ MB/s speeds quoted in the table only apply to TurboWrite-accelerated operations. The company only claims a modest 80 MB/s when sequential writes are hitting the QLC NAND directly.
Should users be worried about the limits of Intelligent TurboWrite and QLC media? We doubt it, presuming the read characteristics of these drives are satisfactory. Enthusiasts buying a lot of recent AAA games could flirt with the limits of a 42-GB cache, especially as high-quality textures push some games’ installed sizes past 100 GB. Thing is, unless you’re restoring a game backup from local media, 80 MB/s is still going to be way faster than most folks’ internet connections. Folks with gigabit fiber might want to consider a TLC SSD for their Steam drives, but in its role as an accessible SSD, the QVO could serve just fine as a place to keep large files on the cheap.
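To put some numbers on that scenario, here’s a rough worst-case timing for a 100-GB game copy on the 1-TB QVO. The 42-GB cache size and the 80-MB/s floor come from Samsung’s figures; the ~520 MB/s cached rate is our assumption based on the drive’s quoted SATA-class speeds:

```python
# Back-of-the-envelope timing for a 100-GB copy on the 1-TB 860 QVO.
# Assumes ~520 MB/s while TurboWrite's 42-GB cache holds out and
# Samsung's quoted 80 MB/s once writes hit the QLC NAND directly.
# Decimal units: 1 GB = 1000 MB.
cache_gb, cached_mbps, native_mbps = 42, 520, 80
total_gb = 100

cached_time = cache_gb * 1000 / cached_mbps               # ~81 s
native_time = (total_gb - cache_gb) * 1000 / native_mbps  # ~725 s
print(f"copy time: {(cached_time + native_time) / 60:.1f} minutes")
# -> copy time: 13.4 minutes
```

Even in this worst case, the copy finishes in under 15 minutes, and a gigabit connection topping out near 125 MB/s would itself need over 13 minutes to pull down the same 100 GB.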
QLC’s shortcomings also manifest in Samsung’s reduced confidence in the longevity of the drive. It backs the drive with a three-year warranty instead of the five years that the EVO got. And the endurance rating for the 1 TB drive is 360 terabytes written, down a good bit from the 600 TBW that the 860 EVO 1 TB was good for. That’s still far more than most consumers need, and it behooves us to remember that the venerable 850 EVO only got a 150 TBW rating back in its heyday.
And now for perhaps the most important detail of all: the price. The 860 QVO won’t hit the streets for another couple of weeks, but Samsung is setting the suggested price for the 1 TB version at $150. That’s pretty cheap for a terabyte, but tried-and-true 3D TLC drives are currently in the same neighborhood. Either QLC is better than it has any right to be, or it’s too expensive and needs some help from the discount gods. Let’s find out.
IOMeter — Sequential and random performance
IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths. Our sequential tests use a relatively large 128-KB block size.
The 860 QVO’s sequential reads can’t keep up with those of the TLC-equipped 860 EVO, but the race is fairly close. On the other hand, writes aren’t at all in the same ballpark. It’s worth noting that our sequential IOMeter test setup defeats schemes like Intelligent TurboWrite by design. The test file IOMeter runs spans the entire capacity of the drive, so there’s never any idle space to take advantage of. But that was true for the 860 EVO as well, and it chugs along happily near 500 MB/s on static TurboWrite alone. We know that the performance of direct-to-QLC writes is relatively poor from Samsung’s materials, so we’re most likely seeing the effects of that characteristic here.
It’s worth noting that IOMeter is a stress test, and this is the sort of issue that real-world workloads don’t typically expose. We’ll reserve final judgement on the drive’s write capabilities for now, but this is a black mark that will hurt the 860 QVO in our aggregated rankings.
Life doesn’t get better for the QVO in our random tests. Random read response times are slower than even those of our ancient X25-M. Maybe don’t throw out your SATA 3 Gbps drives just yet, especially if they’re packing MLC inside. The 860 QVO surges to life when it comes to random write response times. Clearly its TurboWrite tricks yield good results when they kick in properly.
Our first set of IOMeter tests leave us with doubts about the 860 QVO and QLC’s place in the SSD landscape. Let’s see what our sustained and scaling tests can uncover.
Sustained and scaling I/O rates
Our sustained IOMeter test hammers drives with 4-KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.
We’re reporting IOPS rather than response times for these tests. Click the buttons below the graph to switch between SSDs.
The QVO’s peak is much lower than the EVO’s, but it holds on to it for more than 10 minutes. Let’s look at the drives’ performance side-by-side.
The peak graph echoes the question raised by our IOMeter sequential write results: what makes TurboWrite so much less capable on QLC than on TLC? Again, the answer is likely a matter of the performance of the underlying media. With twice as many states per cell to juggle, it’s perhaps not that surprising that the QVO suffers next to its TLC brethren.
Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.
For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drive in a simulated used state. Click the buttons below the graph to switch between the different drives. Note that each drive uses a different scale for IOPS to allow us to view the shape of its curves.
The 860 QVO exhibits a general upward trend, but the absolute differences from queue depth to queue depth are rather small. The next graphs will make that much clearer.
All our 3D TLC drives make meaningful gains as queue depth increases, but our one 3D QLC drive looks pretty much flat in comparison.
IOMeter has not been kind to the 860 QVO. But our IOMeter tests are in no way representative of typical access patterns for a client SSD. After all, we write the entire span of the drive and then begin the enhanced interrogation almost immediately. This setup completely neutralizes any caching strategies that rely on unused drive capacity. The data we’ve gathered is still valid and useful, but before we write off QLC, let’s see how it behaves in real-world tests.
TR RoboBench — Real-world transfers
RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.
Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.
|Number of files||Average file size||Total size||Compressibility|
RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4-KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
Let’s take a look at the media set first. The buttons switch between read, write, and copy results.
Drat. RoboBench does not provide the vindication we were hoping for. In fact, these results are incredibly similar to our IOMeter sequential output, with reasonably fast reads but painfully slow writes. Our RoboBench tests run with the drive all but empty, so there should be nothing stopping Intelligent TurboWrite from running wild here. Let’s see what the QVO does with the work set.
The work set doesn’t quite let the 860 QVO catch its EVO stablemate, either. Read and copy performance trails the TLC drive. At least writes end up fairly close.
We weren’t expecting the QVO to bowl us over, but we thought real-life file transfers would better showcase its strengths than ended up being the case. The last ray of hope for the drive lies in giving it boot duties.
Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.
We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio—automatically from the startup folder. These tests cover the entire boot process, including drive initialization.
Finally, some tests that don’t leave the 860 QVO in the dust. If nothing else, QLC seems well suited to booting up Windows.
Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790-MB 4K video in Avidemux, a 30-MB spreadsheet in LibreOffice, and a 523-MB image file in the GIMP. In the Visual Studio Express test, we open a 159-MB project containing source code for Microsoft’s PowerShell.
Load times for the first three programs are recorded using PassMark AppTimer. AppTimer’s load completion detection doesn’t play nice with Visual Studio, so we’re still using a stopwatch for that one.
The QVO seems to lag a couple of tenths of a second behind the rest of the pack, but it’s a pretty small difference compared to the performance gulfs we saw in RoboBench and IOMeter.
In games the 860 QVO manages to land close to the middle of the road. The perfect use case for QLC may just be to store our massive game libraries.
This was really the only page of tests that cast the 860 QVO in a positive light. We’ve tapped out our supply of benchmarks, so let’s talk test methods. Or skip a page if you just can’t wait for our conclusions.
Test notes and methods
Here are the essential details for all the drives we tested:
|Drive||Interface||Controller||NAND|
|Adata XPG SX8200 480GB||PCIe Gen3 x4||Silicon Motion SM2262||64-layer Micron 3D TLC|
|Crucial MX500 500GB||SATA 6Gbps||Silicon Motion SM2258||64-layer Micron 3D TLC|
|Intel X25-M G2 160GB||SATA 3Gbps||Intel PC29AS21BA0||34-nm Intel MLC|
|Samsung 850 EVO 1TB||SATA 6Gbps||Samsung MEX||32-layer Samsung TLC|
|Samsung 860 EVO 1TB||SATA 6Gbps||Samsung MJX||64-layer Samsung TLC|
|Samsung 860 QVO 1TB||SATA 6Gbps||Samsung MJX||64-layer Samsung QLC|
|Samsung 970 EVO 1TB||PCIe Gen3 x4||Samsung Phoenix||64-layer Samsung TLC|
|Toshiba RC100||PCIe Gen3 x2||Toshiba||64-layer Toshiba BiCS TLC|
The SATA SSDs were connected to the motherboard’s Z270 chipset. The PCIe drives were connected via one of the motherboard’s M.2 slots, which also draw their lanes from the Z270 chipset.
We used the following system for testing:
|Processor||Intel Core i7-6700K|
|Motherboard||Gigabyte Aorus Z270X-Gaming 5|
|Memory size||16 GB (2 DIMMs)|
|Memory type||Corsair Vengeance LPX DDR4 at 2133 MT/s|
|System drive||Corsair Force LS 240GB with S8FM07.9 firmware|
|Power supply||Rosewill Fortress 550 W|
|Operating system||Windows 10 x64 1803|
Thanks to Gigabyte for providing the system’s motherboard, to Intel for the CPU, to Corsair for the memory and system drive, and to Rosewill for the PSU. And thanks to the drive makers for supplying the rest of the SSDs.
We used the following versions of our test applications:
- IOMeter 1.1.0 x64
- TR RoboBench 0.2a
- Passmark AppTimer 1.0
- Avidemux 2.7.1 x64
- GIMP 2.10.0
- LibreOffice 18.104.22.168
- Visual Studio Community 2017 15.7.4
- Batman: Arkham Origins
- Tomb Raider
- Middle Earth: Shadow of Mordor
Some further notes on our test methods:
- To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.
- We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.
- Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 4.0 GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
The test systems’ Windows desktop was set at 1920×1200 at 60 Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
The bards will sing no lionizing songs of the 860 QVO’s struggle through our test suite. We entertain no hopes that it will land anywhere near the 860 EVO in our final reckoning. To compare the drives, we distill an overall performance rating by taking the geometric mean of a basket of results from our test suite, normalized against an older SATA SSD as a baseline.
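That distillation is just a normalized geometric mean. Here’s a minimal sketch of the idea, with hypothetical scores; we assume every metric has been oriented so that higher is better (response times inverted before they go in):

```python
from math import prod

def overall_rating(results, baseline):
    """Geometric mean of per-test scores normalized against a baseline
    drive, expressed as a percentage of that baseline. Assumes higher
    is better for every metric."""
    ratios = [r / b for r, b in zip(results, baseline)]
    return prod(ratios) ** (1 / len(ratios)) * 100

# Hypothetical drive twice as fast as the baseline in both tests:
print(overall_rating([500, 90_000], [250, 45_000]))  # -> 200.0
```

The geometric mean keeps one outlandishly good (or bad) result from dominating the composite the way an arithmetic mean would.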
The 860 QVO lands in the company of Toshiba’s RC100, a tiny sliver of NVMe storage that attempted to scrape by without a dedicated DRAM cache. Even with all its advantages, the 860 QVO couldn’t turn in a better showing. We can make excuses for the QLC drive’s performance in IOMeter, since those tests are designed to expose drives’ weaknesses by filling them to capacity and subjecting them to extreme conditions. Workloads with access patterns like those have no business running on budget consumer drives like the 860 QVO anyway. But there are no excuses for the drive’s showing in our real-world RoboBench tests. Despite leaving the 860 QVO plenty of room to let its Intelligent TurboWrite caching work its magic, the drive still just couldn’t deliver.
The drive did do well in our boot and load tests, so maybe it could find its niche there—but only if the price is right. In the graph below, the most compelling position is toward the upper left corner, where the price per gigabyte is low and performance is high.
Prices for SSDs are still falling across the board, and the story is the same no matter where you look in PC components today. Value-packed CPU options abound, DDR4 memory is dipping into sane territory again, and TLC NAND SSDs are just begging to be bought for well under 20 cents a gig. This, unfortunately, is rough news for the 860 QVO 1 TB. Its suggested price has been set at $150, but why buy a QLC drive at 15 cents per gigabyte when you can get excellent TLC drives like the MX500 500 GB or Samsung’s own 860 EVO 1 TB for 13 cents per gigabyte or less?
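Those cents-per-gigabyte figures boil down to simple division; a quick sketch of the arithmetic behind the comparison:

```python
def cents_per_gb(price_usd, capacity_gb):
    """Price-per-gigabyte in US cents, using decimal gigabytes."""
    return price_usd * 100 / capacity_gb

# 860 QVO 1 TB at its $150 suggested price:
print(f"{cents_per_gb(150, 1000):.0f} cents/GB")  # -> 15 cents/GB
```

At 13 cents per gigabyte, a competing 1-TB TLC drive would ring up at $130, so the QVO needs roughly a $20 haircut before the math starts favoring it.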
The disconnect might be explained by the usual discrepancy between manufacturers’ suggested launch prices and real-life retail prices, but even Samsung itself seems to want to sell the 860 EVO for less than it expects to get for the 860 QVO. It’s still Cyber Monday as I’m writing this, so perhaps the magic will have faded from the 860 EVO’s current prices by the time this review goes live.
On top of price concerns, the real-world reliability of QLC NAND is still an open question. Samsung’s three-year warranty and 360-terabytes-written endurance rating are encouraging, but it might nonetheless be hard to find peace of mind this early in QLC’s lifetime. Unless you’re an intrepid early adopter or follow robust backup practices, there’s a lot to lose if you entrust precious data to a fledgling storage technology.
This isn’t a knock against the QVO specifically, of course. More data points will instill more confidence, and more data should be forthcoming over the next months. Intel’s 660P and Crucial’s P1 SSDs are already on the market. Adata just announced its own SSD that’s likely based on Micron’s 3D QLC, too: the Ultimate SU630. As for the other NAND manufacturers, both Western Digital and SK Hynix have already announced or sampled QLC dies. With the entire breadth of the solid-state storage industry indicating a QLC-laden future, we’ll soon have plenty of products to compare and reliability reports to comb through.
In summary, the 860 QVO can boot Windows and load applications with SSD swiftness, but it seems unable to marshal its caching tricks well enough to yield consistently snappy file transfer speeds. At the price Samsung wants for it, the QVO is a tough pill to swallow. Instead, we’d suggest buying cheap TLC drives while the holiday sales bonanzas are yet running. Give QLC some more time to bake before you open up that particular oven.