It’s been more than a year since Samsung’s 960 EVO wowed us with its combination of ludicrous speeds and palatable price points. At that time, the NVMe SSD market was just beginning to expand, and manufacturers were starting to find room in their lineups for attainable drives in addition to their halo products.
Today’s solid-state landscape is totally different. Every company and their grandmothers have brought multiple NVMe drives to bear for performance freaks and mainstream builders alike. Some NVMe drives’ prices dip so low they even encroach on SATA drives’ traditional downmarket dominance. It would take a truly special drive to make as big a splash today as the 960 EVO did back in 2016.
Samsung reckons the time is right for a new drive in this evergreen series. Today, the firm is unveiling the 960 EVO’s successor, the 970 EVO.
Armed with the latest TLC V-NAND and a new controller, the 970 EVO is ready to take the world by storm. It comes in four capacities, the essential details of which are outlined in the table below. Samsung sent us the 1-TB drive, which makes it easy to compare it to the 960 EVO 1 TB we reviewed in days past.
Samsung 970 EVO
Capacity | Max seq. read (MB/s) | Max seq. write (MB/s) | Max random read (IOps) | Max random write (IOps) | Price
250 GB | 3400 | 1500 | 200K | 350K | $120
500 GB | 3400 | 2300 | 370K | 450K | $230
1 TB | 3400 | 2500 | 500K | 450K | $450
2 TB | 3500 | 2500 | 500K | 480K | $850
Among the drives launching today, the 2-TB version is new to the family. While the lowly SATA 850 EVO series eventually got a top-end 2-TB variant, the 960 EVOs only ever went up to 1 TB. To double the fun with Samsung NVMe drives, one used to have to spend truly eye-watering amounts of money on a 960 Pro 2 TB. Before we peel back the label to have a look at the good stuff, it’s worth calling out that the sticker still uses the same heat-dissipating integrated copper film we first saw in the 960 series.
Samsung calls its new NVMe controller “Phoenix.” As the company describes this chip, it sounds a whole lot like the prior Polaris design outside of its higher clock speeds. The controller includes five cores, one of which is dedicated to host system communication. The most apparent difference is visible to the naked eye. Phoenix shines bright with a nickel coating. According to Samsung, that coating could help stave off thermal throttling longer. Moving down the length of the PCB, we pass 1 GB of LPDDR4 on the way to the drive’s two flash packages. The 970 EVO has been upgraded to the latest 64-layer V-NAND in TLC configuration, which we reviewed previously in Samsung’s potent Portable SSD T5.
As is usual for the EVO line, TurboWrite (Samsung’s pseudo-SLC implementation) is along for the ride. Like the 960 EVO before it, the 970 EVO gets the upgraded Intelligent TurboWrite, which allows it to commandeer unused space to act as a fast-writing cache in addition to the dedicated, pre-allocated space that all TurboWrite versions use regardless of their IQ. The details haven’t changed, so flip back to our 960 EVO review for more.
The 970 EVO comes equipped with the standard bevy of encryption features, too. Its AES 256-bit hardware encryption engine will keep your secrets safe with support for the TCG Opal and IEEE 1667 standards as desired.
Samsung is so confident in the 970 EVO’s durability that it guarantees the drive for five years, a welcome jump from the three-year warranty of previous EVO drives. The endurance ratings for each capacity get a nice boost, as well, rising 50% for all capacities. That makes for a whopping 600 terabytes written for the 1 TB unit. It would take unfathomable abuse to hit that limit within the drive’s warranty period.
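To put that 600-TB figure in perspective, here's the back-of-the-envelope arithmetic (a quick sketch of our own, not anything from Samsung's spec sheet):

```python
# Back-of-the-envelope check on the 1-TB drive's 600-TBW endurance rating:
# how much would you have to write per day, every day, to exhaust it
# within the five-year warranty period?
WARRANTY_YEARS = 5
ENDURANCE_TB = 600  # terabytes written, per Samsung's rating

days = WARRANTY_YEARS * 365
gb_per_day = ENDURANCE_TB / days * 1000
print(f"{gb_per_day:.0f} GB of writes per day")  # ~329 GB/day
```

A third of a terabyte of writes daily for five years straight is server-farm abuse, not desktop use.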
Despite the assorted improvements, Samsung isn’t moving the needle on pricing. The 970 EVO 1 TB’s suggested sticker is the same $450 that the 960 EVO 1 TB launched at. Of course, we won’t know if that suggestion will be honored by retailers until the drive hits general availability on May 7. But we don’t have to wait on anything to start testing, so let’s see what the latest EVO can do.
IOMeter — Sequential and random performance
IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.
Our sequential tests use a relatively large 128-KB block size.
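For the curious, the gist of a QD1 sequential-read measurement can be sketched in a few lines. This is emphatically not IOMeter — it reads a scratch file rather than the raw drive, and OS caching will flatter the numbers — but it shows the shape of the test:

```python
# Sketch of a QD1 sequential read: march through a file front to back in
# 128-KB blocks (matching our IOMeter setting) and report MB/s. A queue
# depth of one means each request completes before the next is issued,
# which is exactly what a plain synchronous read loop does.
import os
import time

BLOCK = 128 * 1024  # 128-KB blocks, as in our sequential tests

def seq_read_mbps(path):
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    elapsed = time.perf_counter() - start
    return size / elapsed / 1e6

# Usage: build a scratch file, then measure it. Cached reads will report
# optimistic figures; IOMeter avoids that by targeting the raw device.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16-MB test file
print(f"{seq_read_mbps('scratch.bin'):.0f} MB/s")
os.remove("scratch.bin")
```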
These first results are a bit unexpected. The 970 EVO’s sequential speeds are about 20% slower than its predecessor’s in IOMeter. It’s still a screaming fast drive, make no mistake, but we wouldn’t have expected a regression this large. We’re checking in with Samsung about these results, and we’ll update this review if we learn more.
Random response times are an even split. The 970 EVO’s random reads are just a smidgen faster than its predecessor, but the 960 EVO boasts better random write throughput.
The 970 EVO has given us bona fide NVMe speeds so far, but we can’t call it an unqualified success. Where we would have expected parity or better in IOMeter, the 970 EVO fell a bit behind its forebear. We’re looking into these results to make sure they’re not anomalous and will revisit them if we learn more about what might be going on. Let’s see if our sustained and scaling tests can cast the 970 EVO in more favorable light.
Sustained and scaling I/O rates
Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.
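The gist of the sustained test can be sketched like so. The real run hammers the raw device at QD32 for 30 minutes; this toy version is single-threaded, uses a small scratch file, and runs for a couple of seconds, purely to illustrate how the IOps figure is gathered:

```python
# Miniature sketch of the sustained test: issue 4-KB random writes against
# a fixed span for a set duration and count completed operations. The real
# test uses the whole drive, QD32, and a 30-minute window.
import os
import random
import time

BLOCK = 4 * 1024
SPAN = 64 * 1024 * 1024   # 64-MB scratch file instead of the full drive
DURATION = 2.0            # seconds instead of 30 minutes

with open("span.bin", "wb") as f:
    f.truncate(SPAN)      # sparse file sized to the test span

buf = os.urandom(BLOCK)
ops = 0
deadline = time.perf_counter() + DURATION
with open("span.bin", "r+b", buffering=0) as f:
    while time.perf_counter() < deadline:
        # Pick a random 4-KB-aligned offset within the span and write.
        f.seek(random.randrange(SPAN // BLOCK) * BLOCK)
        f.write(buf)
        ops += 1
print(f"{ops / DURATION:.0f} IOps")
os.remove("span.bin")
```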
We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.
The 970 EVO appears to peak a bit shy of the 960 EVO’s max speed, and it has an intermediate drop-off on its way to sustained performance that the earlier drive doesn’t. Even so, both gumsticks experience a precipitous collapse at almost exactly the same time. This makes perfect sense, since the two drives have identical DRAM and TurboWrite cache allocations.
The 970 EVO indeed lags behind the record-holding peak random write rate of the 960 EVO. Its steady state speed also trails the older drive’s, albeit not by much.
Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.
For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.
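Generating that database access pattern amounts to flipping a weighted coin per request. A minimal sketch (the function name and structure are ours, not IOMeter's):

```python
# Build a stream of requests that is ~66% random reads and ~33% random
# writes, mirroring the database access pattern used in our scaling test.
import random

def database_requests(n, read_fraction=0.66, seed=0):
    rng = random.Random(seed)  # fixed seed for repeatable runs
    return ["read" if rng.random() < read_fraction else "write"
            for _ in range(n)]

reqs = database_requests(100_000)
print(f"read fraction: {reqs.count('read') / len(reqs):.2f}")  # ~0.66
```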
The 970 EVO 1 TB scales readily until QD16, where it levels off. Let’s look at the Samsung stable together for context.
The 970 EVO slightly outscales the 960 EVO until the two meet at QD16. The 960 EVO, however, manages to eke out a few more IOps as it continues through the final queue depths, in stark opposition to the 970 EVO’s flat plateau. Recall that client workloads often don’t scale much past QD4 or QD8, so that kind of scaling is primarily of academic interest.
Again, the 970 EVO 1 TB delivers great NVMe-worthy speeds in our sustained test, but it just can’t seem to match the 960 EVO’s prior accomplishments in IOMeter. Maybe real-world workloads will be kinder to it than our synthetic tests have been.
TR RoboBench — Real-world transfers
RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.
Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.
File set | Number of files | Average file size | Total size | Compressibility
Media | 459 | 21.4MB | 9.58GB | 0.8% |
Work | 84,652 | 48.0KB | 3.87GB | 59% |
The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
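Concretely, the compressibility figure is just the size reduction after compression. We use 7-Zip on the real file sets; zlib stands in for it in this illustrative sketch:

```python
# Compressibility as we report it: the fractional size reduction after
# compression. Already-compressed media barely shrinks; documents and
# source code shrink a lot.
import os
import zlib

def compressibility(data: bytes) -> float:
    packed = zlib.compress(data, level=9)
    return 1 - len(packed) / len(data)

# Repetitive text compresses extremely well...
text = b"the quick brown fox jumps over the lazy dog " * 1000
print(f"{compressibility(text):.1%}")

# ...while random bytes (a stand-in for media files) barely budge.
print(f"{compressibility(os.urandom(100_000)):.1%}")
```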
RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
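For readers unfamiliar with robocopy's /MT switch, the multi-threaded copy idea looks roughly like this (a sketch in Python, not RoboBench itself — the function name and structure are ours):

```python
# Farm per-file copies out to a pool of worker threads, in the spirit of
# robocopy's /MT:8 mode. Many small files benefit because several I/O
# requests stay in flight at once.
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def threaded_copy(src_dir, dst_dir, threads=8):
    os.makedirs(dst_dir, exist_ok=True)
    files = [f for f in os.listdir(src_dir)
             if os.path.isfile(os.path.join(src_dir, f))]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for name in files:
            pool.submit(shutil.copy2,
                        os.path.join(src_dir, name),
                        os.path.join(dst_dir, name))
```

With threads=1 this degenerates into an ordinary serial copy, which is essentially what our single-threaded RoboBench results measure.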
Let’s take a look at the media set first. The buttons switch between read, write, and copy results.
Now we’re cooking with gas. The 960 EVO bests the newer drive in the single-threaded read test, but the 970 EVO comes out on top in all the others. In fact, the eight-threaded copy test is an unprecedented slaughter. The 970 EVO sets a new record with a copy speed 70% faster than the prior drive’s.
Now for the work set.
The 960 EVO still reads faster with a single thread, but again the 970 EVO emerges victorious everywhere else. The biggest win this time is in the eight-threaded write test, where the 970 EVO’s performance is a good 30% better than the 960 EVO’s.
These are the results we are looking for. The 970 EVO’s file-pushing prowess outshines that of its older brother and most of the rest of the field as well. Lastly, we’ll see how the drive handles primary storage responsibilities.
Boot times
Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.
We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused on the time required to load the OS, but these new ones cover the entire process, including drive initialization.
It’s pretty much a wash. Even-steven. The 970 EVO boots up just as swiftly as the 960 EVO did.
Load times
Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790-MB 4K video in Avidemux, a 30-MB spreadsheet in LibreOffice, and a 523-MB image file in the GIMP. In the Visual Studio Express test, we open a 159-MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.
The 970 EVO is a tad sluggish to load LibreOffice, but otherwise looks just fine. Fun and games next.
Games load in at the appropriate speeds. Probably not the most cost-effective use of an NVMe drive, but don’t let us stop you.
The 970 EVO serves well as primary storage, though it didn’t come away with any big wins over the 960 EVO. We’re all out of tests, so skip ahead to the conclusion unless you want to read about our test methods on the next page.
Test notes and methods
Here are the essential details for all the drives we tested:
Drive | Interface | Flash controller | NAND
Adata Premier SP550 480GB | SATA 6Gbps | Silicon Motion SM2256 | 16-nm SK Hynix TLC |
Adata Ultimate SU800 512GB | SATA 6Gbps | Silicon Motion SM2258 | 32-layer Micron 3D TLC |
Adata Ultimate SU900 256GB | SATA 6Gbps | Silicon Motion SM2258 | Micron 3D MLC |
Adata XPG SX930 240GB | SATA 6Gbps | JMicron JMF670H | 16-nm Micron MLC |
Corsair MP500 240GB | PCIe Gen3 x4 | Phison 5007-E7 | 15-nm Toshiba MLC |
Crucial BX100 500GB | SATA 6Gbps | Silicon Motion SM2246EN | 16-nm Micron MLC |
Crucial BX200 480GB | SATA 6Gbps | Silicon Motion SM2256 | 16-nm Micron TLC |
Crucial MX200 500GB | SATA 6Gbps | Marvell 88SS9189 | 16-nm Micron MLC |
Crucial MX300 750GB | SATA 6Gbps | Marvell 88SS1074 | 32-layer Micron 3D TLC |
Crucial MX500 500GB | SATA 6Gbps | Silicon Motion SM2258 | 64-layer Micron 3D TLC |
Crucial MX500 1TB | SATA 6Gbps | Silicon Motion SM2258 | 64-layer Micron 3D TLC |
Intel X25-M G2 160GB | SATA 3Gbps | Intel PC29AS21BA0 | 34-nm Intel MLC |
Intel 335 Series 240GB | SATA 6Gbps | SandForce SF-2281 | 20-nm Intel MLC |
Intel 730 Series 480GB | SATA 6Gbps | Intel PC29AS21CA0 | 20-nm Intel MLC |
Intel 750 Series 1.2TB | PCIe Gen3 x4 | Intel CH29AE41AB0 | 20-nm Intel MLC |
Intel DC P3700 800GB | PCIe Gen3 x4 | Intel CH29AE41AB0 | 20-nm Intel MLC |
Mushkin Reactor 1TB | SATA 6Gbps | Silicon Motion SM2246EN | 16-nm Micron MLC |
OCZ Arc 100 240GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC |
OCZ Trion 100 480GB | SATA 6Gbps | Toshiba TC58 | A19-nm Toshiba TLC |
OCZ Trion 150 480GB | SATA 6Gbps | Toshiba TC58 | 15-nm Toshiba TLC |
OCZ Vector 180 240GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC |
OCZ Vector 180 960GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC |
Patriot Hellfire 480GB | PCIe Gen3 x4 | Phison 5007-E7 | 15-nm Toshiba MLC |
Plextor M6e 256GB | PCIe Gen2 x2 | Marvell 88SS9183 | 19-nm Toshiba MLC |
Samsung 850 EVO 250GB | SATA 6Gbps | Samsung MGX | 32-layer Samsung TLC |
Samsung 850 EVO 1TB | SATA 6Gbps | Samsung MEX | 32-layer Samsung TLC |
Samsung 850 Pro 512GB | SATA 6Gbps | Samsung MEX | 32-layer Samsung MLC |
Samsung 860 Pro 1TB | SATA 6Gbps | Samsung MJX | 64-layer Samsung MLC |
Samsung 950 Pro 512GB | PCIe Gen3 x4 | Samsung UBX | 32-layer Samsung MLC |
Samsung 960 EVO 250GB | PCIe Gen3 x4 | Samsung Polaris | 32-layer Samsung TLC |
Samsung 960 EVO 1TB | PCIe Gen3 x4 | Samsung Polaris | 48-layer Samsung TLC |
Samsung 960 Pro 2TB | PCIe Gen3 x4 | Samsung Polaris | 48-layer Samsung MLC |
Samsung 970 EVO 1TB | PCIe Gen3 x4 | Samsung Phoenix | 64-layer Samsung TLC |
Samsung SM951 512GB | PCIe Gen3 x4 | Samsung S4LN058A01X01 | 16-nm Samsung MLC |
Samsung XP941 256GB | PCIe Gen2 x4 | Samsung S4LN053X01 | 19-nm Samsung MLC |
Toshiba OCZ RD400 512GB | PCIe Gen3 x4 | Toshiba TC58 | 15-nm Toshiba MLC |
Toshiba OCZ VX500 512GB | SATA 6Gbps | Toshiba TC358790XBG | 15-nm Toshiba MLC |
Toshiba TR200 480GB | SATA 6Gbps | Toshiba TC58 | 64-layer Toshiba BiCS TLC |
Toshiba XG5 1TB | PCIe Gen3 x4 | Toshiba TC58 | 64-layer Toshiba BiCS TLC |
Transcend SSD370 256GB | SATA 6Gbps | Transcend TS6500 | Micron or SanDisk MLC |
Transcend SSD370 1TB | SATA 6Gbps | Transcend TS6500 | Micron or SanDisk MLC |
All the SATA SSDs were connected to the motherboard’s Z97 chipset. The M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941, 950 Pro, RD400, and 960 Pro require more lanes, they were connected to the CPU via a PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.
We used the following system for testing:
Processor | Intel Core i5-4690K 3.5GHz |
Motherboard | Asus Z97-Pro |
Firmware | 2601 |
Platform hub | Intel Z97 |
Platform drivers | Chipset: 10.0.0.13 RST: 13.2.4.1000 |
Memory size | 16GB (2 DIMMs) |
Memory type | Adata XPG V3 DDR3 at 1600 MT/s |
Memory timings | 11-11-11-28-1T |
Audio | Realtek ALC1150 with 6.0.1.7344 drivers |
System drive | Corsair Force LS 240GB with S8FM07.9 firmware |
Storage | Crucial BX100 500GB with MU01 firmware Crucial BX200 480GB with MU01.4 firmware Crucial MX200 500GB with MU01 firmware Intel 335 Series 240GB with 335u firmware Intel 730 Series 480GB with L2010400 firmware Intel 750 Series 1.2TB with 8EV10171 firmware Intel DC P3700 800GB with 8DV10043 firmware Intel X25-M G2 160GB with 8820 firmware Plextor M6e 256GB with 1.04 firmware OCZ Trion 100 480GB with 11.2 firmware OCZ Trion 150 480GB with 12.2 firmware OCZ Vector 180 240GB with 1.0 firmware OCZ Vector 180 960GB with 1.0 firmware Samsung 850 EVO 250GB with EMT01B6Q firmware Samsung 850 EVO 1TB with EMT01B6Q firmware Samsung 850 Pro 500GB with EMXM01B6Q firmware Samsung 950 Pro 512GB with 1B0QBXX7 firmware Samsung XP941 256GB with UXM6501Q firmware Transcend SSD370 256GB with O0918B firmware Transcend SSD370 1TB with O0919A firmware |
Power supply | Corsair AX650 650W |
Case | Fractal Design Define R5 |
Operating system | Windows 8.1 Pro x64 |
Thanks to Asus for providing the systems’ motherboards, to Intel for the CPUs, to Adata for the memory, to Fractal Design for the cases, and to Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.
We used the following versions of our test applications:
- IOMeter 1.1.0 x64
- TR RoboBench 0.2a
- Avidemux 2.6.8 x64
- LibreOffice 4.3.2
- GIMP 2.8.14
- Visual Studio Express 2013
- Batman: Arkham Origins
- Tomb Raider
- Middle Earth: Shadow of Mordor
Some further notes on our test methods:
- To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.
- We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.
- Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
- The test systems’ Windows desktop was set at 1920×1080 at 60Hz.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Conclusions
The 970 EVO 1 TB delivered a more-than-competent performance throughout our test suite. If not for the 960 EVO’s significantly better showing in our IOMeter synthetics, we’d likely be gushing over just how great a generation-to-generation improvement the 970 EVO is. The 970 EVO regained some ground on its predecessor during our RoboBench file-transfer tests, but probably not enough to come out ahead in our overall rankings (for the moment, at least—we’re working with Samsung to see if there might be an explanation for our IOMeter results).
We distill the overall performance rating using an older SATA SSD as a baseline. To compare each drive, we then take the geometric mean of a basket of results from our test suite. Only drives which have been through the entire current test suite on our current rig are represented.
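The math behind the rating boils down to a geometric mean of scores normalized against the baseline drive, so no single test can dominate the average. A minimal sketch — the numbers below are invented for illustration, not our actual basket of results:

```python
# Overall-rating sketch: express each of a drive's results as a ratio to the
# baseline drive's result on the same test, then take the geometric mean of
# those ratios. The baseline drive scores 100 by construction.
import math

def overall_rating(drive, baseline):
    ratios = [drive[test] / baseline[test] for test in baseline]
    return math.prod(ratios) ** (1 / len(ratios)) * 100

# Hypothetical results (MB/s and such), purely for illustration.
baseline = {"seq_read": 500, "seq_write": 480, "rand_read_iops": 90_000}
drive    = {"seq_read": 3400, "seq_write": 2500, "rand_read_iops": 460_000}
print(f"{overall_rating(drive, baseline):.0f}")
```

The geometric mean is the right tool here because the individual results live on wildly different scales; an arithmetic mean would let the biggest raw numbers swamp everything else.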
For now, the 970 EVO lands a ways behind its older sibling but fortunately still distances itself from older, less-expensive drives like the RD400 and Hellfire. Make no mistake: the 970 EVO belongs in the upper echelon of NVMe drives. If IOMeter synthetics are meaningless to you, the 970 EVO 1 TB is almost certainly a better drive than the 960 EVO 1 TB. But if you already own a 960-series gumstick, there’s little sense in shelling out again for the 970 series.
Regardless, let’s take a look at the 970 EVO 1 TB’s place in the broader SSD landscape, assuming retailers will sell it close to its official asking price. In the plots below, the most compelling position is toward the upper left corner, where the price per gigabyte is low and performance is high. Use the buttons to switch between views of all drives, only SATA drives, or only PCIe drives.
Even with our mixed results, the 970 EVO 1 TB comes out ahead of every NVMe SSD on the market save Samsung’s own. As we’ve been saying, the only competition that might give a prospective buyer pause is Samsung’s own 960 EVO 1 TB. And even though the 970 EVO performed worse in some of our tests, its formidable file-transfer speeds in our real-world benchmarking, its longer warranty, and its absurd endurance rating may just make up the difference. We don’t have any shopping links for you yet, but we can tell you that the 970 EVO 1 TB’s suggested price of $450 works out to 45 cents per gigabyte.
April 2018
The 970 EVO didn’t make our hearts throb in the same way that the 960 EVO did in 2016—at least, not yet. Its improvements in our RoboBench tests were overshadowed by some regressions in IOMeter that we’re still investigating. Nonetheless, it’s a darn fine NVMe drive with a long warranty and juicy endurance rating. The 970 EVO 1 TB is easy to recommend to any builder looking for a large chunk of some of the best-performing NVMe goodness around, especially as supplies of the 960 EVO inevitably dwindle, and if we can sort out why this drive isn’t playing nice with our IOMeter tools, it has the potential to put up an even better showing yet.