Just a few years ago, a hundred bucks bought you a choice between a decent-sized mechanical hard drive or a pitifully small amount of solid-state storage. Thankfully, the budget solid-state storage space has ballooned over time. Smaller processes, multi-level cell NAND, and die stacking have all driven down costs. Now, even the thriftiest of builders can enjoy the benefits of flash storage. The advent of breakneck-speed PCIe drives that cost hundreds rather than thousands of dollars is also keeping SATA drive prices from getting too out of hand.
But even among mainstream SATA SSDs, there’s a lot of stratification. Most manufacturers offer at least a couple of distinct product lines, segmenting their offerings by target audience. We’re turning our attention to the low end today.
Here’s Kingston’s HyperX Fury 240GB, first announced in the summer of 2014. Kingston reserves the HyperX branding for its gaming-oriented products, and the Fury is no exception. Targeted at “entry-level gamers,” the MLC-based Fury takes its place just above the cheaper but controversial V300 series. Presumably the V300 is for mere gaming interns.
The Fury comes in 120GB and 240GB configurations, each powered by SandForce’s SF-2281 controller. The SF-2281 has been the brains of many an SSD over the years, including a few we’ve covered ourselves. The standout feature of this venerable controller is DuraWrite, SandForce’s proprietary on-the-fly compression scheme which purports to improve endurance and write speed by shrinking compressible data before committing it to the NAND.
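DuraWrite's internals are proprietary, but the general idea is easy to sketch. Here's a toy illustration using zlib as a stand-in for SandForce's compressor; the function name and the level-1 setting are our own inventions, not anything from the actual firmware:

```python
import os
import zlib

def bytes_committed_to_nand(data: bytes) -> int:
    """Illustrative only: how much data would actually hit the NAND
    if the controller compressed each payload before writing it."""
    compressed = zlib.compress(data, level=1)  # fast, hardware-friendly setting
    # Incompressible payloads gain nothing, so store them as-is.
    return min(len(compressed), len(data))

document = b"quarterly sales figures\n" * 1024   # compressible, like office files
media = os.urandom(24576)                        # incompressible, like video or JPEGs

print(bytes_committed_to_nand(document), "of", len(document))  # a tiny fraction
print(bytes_committed_to_nand(media), "of", len(media))        # no savings at all
```

Fewer bytes physically written means fewer program/erase cycles consumed, which is where the endurance claim comes from. The flip side, of course, is that already-compressed data sees no benefit.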
Inside the Fury, you’ll find 16 NAND packages, each contributing 16GB of storage. With a little arithmetic, we see that the 16GB excess in conjunction with the usual GB vs GiB terminology shenanigans makes for a good 30-ish gigabytes of overprovisioning. Each package contains a single 128Gb Kingston-branded MLC NAND die. With only 16 total NAND dies, the Fury’s performance will likely suffer, as most controllers need at least 32 dies hooked up in order to reach their peak speeds. The Fury 240GB comes with a three-year warranty and is rated to withstand a comfortable 641 TB of writes.
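Worked out explicitly, the spare-area arithmetic looks like this (NAND dies come in power-of-two sizes, while the capacity on the box is decimal):

```python
# 16 packages x one 128Gbit (16GiB) die apiece = 256 GiB of raw NAND
raw_gib = 16 * 16

# "240GB" on the box is decimal gigabytes; convert to binary gibibytes
advertised_gib = 240 * 10**9 / 2**30      # ≈ 223.5 GiB exposed to the user

spare_gib = raw_gib - advertised_gib      # held back for overprovisioning
print(round(spare_gib, 1))                # → 32.5
```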
Next, we have the SanDisk Ultra II 960GB. The Ultra II is also available in 120GB, 240GB, and 480GB variants. It fits roughly in the middle of SanDisk’s consumer SSD lineup, and it’s the company’s only consumer product built with TLC NAND.
Within the Ultra II lie eight NAND packages, each with a 128GB density. The packages are loaded with 128 Gbit SanDisk TLC dies, so the 8-channel Marvell 88SS9189 controller inside the Ultra II should be able to leverage high interleaving over each channel to improve speeds. This controller has also been around the block, most notably inside both iterations of Crucial’s MX-series SSDs.
The Ultra II’s TLC NAND puts it at a theoretical disadvantage when compared to the MLC-based Fury, but there’s more to speed than bits per cell. The additional I/O parallelism afforded by the 960GB Ultra II’s NAND configuration should even things out. Additionally, SanDisk employs a caching system called nCache to boost the drive’s write performance. Simply put, nCache dedicates a portion of the NAND to running in SLC mode. Writes hit this SLC cache first, then are transferred to TLC during idle time by way of an efficient on-chip copy mechanism. The Ultra II uses the second revision of nCache, but the big idea is the same as the original, which we’ve talked about in some depth before.
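SanDisk hasn't published nCache 2.0's internals, but the write-to-SLC-first, fold-to-TLC-at-idle behavior can be captured in a toy model. The class name, cache size, and return strings here are all made up for illustration:

```python
class SlcCachedDrive:
    """Toy model of an SLC write cache sitting in front of TLC storage."""

    def __init__(self, slc_capacity_mb: int):
        self.slc_capacity = slc_capacity_mb
        self.slc_used = 0
        self.tlc_used = 0

    def write(self, size_mb: int) -> str:
        # Incoming writes land in the fast SLC region while it has room...
        if self.slc_used + size_mb <= self.slc_capacity:
            self.slc_used += size_mb
            return "fast SLC write"
        # ...and fall through to slower direct-to-TLC writes once it fills.
        self.tlc_used += size_mb
        return "slow TLC write"

    def idle_flush(self):
        # During idle time, cached data is folded from SLC into TLC,
        # freeing the cache for the next burst of writes.
        self.tlc_used += self.slc_used
        self.slc_used = 0

drive = SlcCachedDrive(slc_capacity_mb=100)
print(drive.write(80))   # → fast SLC write
print(drive.write(80))   # → slow TLC write (cache is full)
drive.idle_flush()
print(drive.write(80))   # → fast SLC write (cache drained during idle)
```

The practical upshot is that bursty desktop workloads mostly see SLC-class write speeds, while only sustained writes big enough to blow through the cache expose the slower TLC underneath.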
The Ultra II 960GB also comes with a three-year warranty. SanDisk doesn’t provide an endurance rating for it in terms of bytes written, instead claiming a mean time between failure of 1.75 million hours. Given that we tortured the TLC-based Samsung 840 EVO beyond 300 TB, we’re not too worried about TLC endurance.
Finally, we have the OCZ Arc 100 240GB. We’ve already covered this drive in detail, so to summarize briefly, the Arc 100 is an entry-level MLC SSD positioned just above the new TLC-based Trion series in OCZ’s lineup. It surprised us by punching above its weight at a budget price point, earning it a TR Recommended award. This time around, it’ll make a good reference point and provide some context as we examine the two other drives. On to the benchmarks!
IOMeter — Sequential and random performance
IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. (87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less.) Clicking the buttons below the graphs switches between the different queue depths.
Our sequential tests use a relatively large 128KB block size.
The Arc 100 is still looking pretty good here. The other two drives provide sequential speeds more in line with what we’d expect from low-end SSDs.
Next, we’ll turn our attention to performance with 4KB random I/O. We’ve reported average response times rather than raw throughput, which we think makes sense in the context of system responsiveness.
The HyperX Fury fares poorly here, reporting a random write response time of over six milliseconds during QD4 testing, compared to the sub-millisecond times of the others. Most likely, it’s that pesky NAND configuration bottlenecking the controller.
The preceding tests are based on the median of three consecutive three-minute runs. SSDs typically deliver consistent sequential and random read performance over that period, but random write speeds worsen as the drive’s overprovisioned area is consumed by incoming writes. We explore that decline on the next page.
IOMeter — Sustained and scaling I/O rates
Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, which should saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.
We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.
Once again, the Arc 100 outperforms its price bracket. The Kingston lags far behind after a very short-lived initial burst.
To show the data in a slightly different light, we’ve graphed the peak random write rate and the average, steady-state speed over the last minute of the test.
The Fury’s peak IOps figure looks impressive, but it lasted only a second or so, which makes it of questionable value. The drive then dropped to the 20K-30K IOps range for about a hundred seconds before settling into the steady state we see in the graph. As we’ve noted, the Fury’s 16-die configuration handicaps it across most of our testing, and it’s especially noticeable during writes.
Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.
We use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. Note that the Arc 100 uses a significantly larger scale.
The Arc 100 again asserts its budget dominance. The Fury and Ultra II are more or less neck and neck. The graph below illustrates the difference side-by-side. The buttons toggle between total, read, and write IOps.
TR RoboBench — Real-world transfers
RoboBench trades synthetic tests for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.
Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.
| Number of files | Average file size | Total size | Compressibility |
The “media” set is made up of movie files, MP3s, and high-resolution images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The “work” set comprises loads of productivity-type files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files, including the files for the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
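As a rough illustration of what that compressibility figure captures, here's the same kind of measurement done with zlib instead of 7-Zip. The exact percentages won't match the table, but the contrast between work-like and media-like data will:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Fraction of the input eliminated by compression (0 = incompressible)."""
    return 1 - len(zlib.compress(data, 9)) / len(data)

# Repetitive text stands in for the "work" set; random bytes for the
# already-compressed movies, MP3s, and images in the "media" set.
work_like = b"quarterly report Q3 revenue figures \n" * 1000
media_like = os.urandom(37000)

print(f"work-like:  {compressibility(work_like):.0%}")   # squeezed way down
print(f"media-like: {compressibility(media_like):.0%}")  # essentially nothing
```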
RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
Read speeds are up first. Click the buttons below the graphs to switch between one and eight threads.
The results here are much closer together than in the synthetics. The differences are especially small in the eight-thread test. Nonetheless, the Arc 100 manages to come out on top yet again.
Write speeds are a similarly close call. The Fury is a little slower in the eight-thread media test, but it stays right on track with the others in the work tests.
The copy results resemble the write results. The Fury lags behind in the media test but barely trails otherwise.
Thus far, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.
We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused just on the time required to load the OS, but these new ones cover the entire process, including drive initialization.
Despite being the clear winner of many of our prior tests, the Arc 100 doesn’t separate itself from the others in terms of boot times. Across both bare and loaded boots, the three drives are all within two seconds of each other, and the Ultra II comes out on top.
Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.
Nothing extraordinary to note here, as there are no clear winners or losers. Next, we see whether any of the drives distinguish themselves in loading up games.
Indeed not. These drives will get you adventuring in roughly the same amount of time.
Now let’s look briefly at power consumption. For idle power, we take the lowest value we get over a five-minute period, one minute after Windows has processed its idle tasks. For load power, we take the highest value over a five-minute period while hitting the drive with a write-heavy IOMeter workload.
No big differences here. Power consumption is pretty similar with most SATA drives we come across these days. However, this is the only test we ran where the OCZ consistently came in last.
Test notes and methods
Here are the essential details for the drives we tested:
| Drive | Interface | Controller | NAND |
| --- | --- | --- | --- |
| Kingston HyperX Fury 240GB | SATA 6Gbps | SandForce SF-2281 | Kingston MLC |
| OCZ Arc 100 240GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC |
| SanDisk Ultra II 960GB | SATA 6Gbps | Marvell 88SS9189 | 19-nm SanDisk TLC |
All the SSDs were connected to the motherboard’s Z77 chipset.
We used the following system for testing:
| Component | Details |
| --- | --- |
| Processor | Intel Core i3-2100 3.1GHz |
| Platform hub | Intel H77 |
| Memory size | 8GB (2 DIMMs) |
| Memory type | Corsair Dominator Platinum DDR3 1866MHz |
| System drive | Intel 510 120GB |
| Power supply | Antec Edge 650W |
| Operating system | Windows 8.1 Pro x64 |
Thanks to Gigabyte for providing the system’s motherboard, Intel for the CPU and system drive, Corsair for the memory, and Antec for the PSU. And thanks to the drive makers for supplying the rest of the SSDs.
We used the following versions of our test applications:
- IOMeter 1.1.0 x64
- TR RoboBench 0.2a
- Avidemux 2.6.8 x64
- LibreOffice
- GIMP 2.8.14
- Visual Studio Community 2013
- The Elder Scrolls V: Skyrim
- Tomb Raider
- Sid Meier’s Civilization V
Some further notes on our test methods:
- To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.
- We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.
- Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.1GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
The test system’s Windows desktop was set to 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
What have we learned from pitting these drives against each other? For starters, the Arc 100 continues to be a diamond in the rough. Its performance in synthetics left the Fury and Ultra II in the dust. It’s little wonder that we gave it a TR Recommended Award last year. On the other hand, that synthetic performance seems to make little difference in the real world. Neither the Fury’s sub-optimal die configuration nor the Ultra II’s TLC was a noticeable handicap in our booting and loading tests.
The question becomes, then: which one to buy? At the time of this writing, the Arc 100 goes for $92, the Ultra II for $350, and the Fury for $93. At those prices, the cost-per-gigabyte works out to be $0.38, $0.36, and $0.39, respectively. For those looking for a terabyte-class SSD, the Ultra II is a solid performer. TLC flash keeps costs down, while nCache keeps endurance and speed tolerable.
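Those per-gigabyte figures fall straight out of the prices:

```python
# Street prices at the time of writing: (dollars, advertised gigabytes)
prices = {
    "OCZ Arc 100 240GB":          (92, 240),
    "SanDisk Ultra II 960GB":     (350, 960),
    "Kingston HyperX Fury 240GB": (93, 240),
}

for drive, (dollars, gigabytes) in prices.items():
    print(f"{drive}: ${dollars / gigabytes:.2f} per gigabyte")
# → $0.38, $0.36, and $0.39 per gigabyte, respectively
```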
If that kind of capacity doesn’t interest you, the Arc 100 is still one heck of a drive. For a fairly low price, you get performance comparable to the ever-popular Samsung 850 EVO 250GB drive. The Arc 100 straight out beats the also-popular Crucial offerings (the BX100 250GB and MX200 250GB), according to this performance chart that Geoff put together earlier this year. The Arc 100 also has the added value of 256-bit AES hardware encryption, unlike either the Fury or the Ultra II. We can’t really recommend the HyperX Fury, however. With better-performing drives selling for similar money, the Fury is simply out of contention at its current price point.
Ultimately, our advice remains the same as it’s been for quite a while—if you’re upgrading from mechanical storage, snag just about any SSD you can get at a good price. The deciding factor will likely be one that’s not exposed by our testing. Reliability, customer service, and secondary features like hardware-level encryption are important considerations that we can’t just quantify and graph, much as we’d like to.
For those of you already on solid-state storage and looking for an upgrade, our advice is to wait if you can. The NVMe/PCIe revolution is upon us. Intel’s Z170 platform provides more PCIe lanes for fast next-generation storage than you can shake a stick at. With some help from SSD makers, we’ll hopefully have a full field of PCIe SSDs to throw at it soon.