Kingston’s HyperX Fury and SanDisk’s Ultra II SSDs reviewed

Just a few years ago, a hundred bucks bought you a choice between a decent-sized mechanical hard drive and a pitifully small amount of solid-state storage. Thankfully, the budget solid-state storage space has ballooned over time. Smaller processes, multi-level cell NAND, and die stacking have all driven down costs. Now, even the thriftiest of builders can enjoy the benefits of flash storage. The advent of breakneck-speed PCIe drives that cost hundreds rather than thousands of dollars is also keeping SATA drive prices from getting too out of hand.

But even among mainstream SATA SSDs, there’s a lot of stratification. Most manufacturers offer at least a couple of distinct product lines, segmenting their offerings by target audience. We’re turning our attention to the low end today.

Here’s Kingston’s HyperX Fury 240GB, first announced in the summer of 2014. Kingston reserves the HyperX branding for its gaming-oriented products, and the Fury is no exception. Targeted at “entry-level gamers,” the MLC-based Fury takes its place just above the cheaper but controversial V300 series. Presumably the V300 is for mere gaming interns.

The Fury comes in 120GB and 240GB configurations, each powered by SandForce’s SF-2281 controller. The SF-2281 has been the brains of many an SSD over the years, including a few we’ve covered ourselves. The standout feature of this venerable controller is DuraWrite, SandForce’s proprietary on-the-fly compression scheme, which purports to improve endurance and write speed by shrinking compressible data before committing it to the NAND.
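SandForce has never published DuraWrite’s internals, but the basic principle is easy to illustrate. Here’s a minimal Python sketch, using zlib as a stand-in for the controller’s proprietary compressor, showing why compressible data results in fewer bytes actually reaching the flash:

```python
import os
import zlib

def bytes_committed_to_nand(data: bytes) -> int:
    """Toy model of a DuraWrite-style write path: store the compressed form
    if it's smaller than the original, otherwise store the data as-is."""
    compressed = zlib.compress(data)   # stand-in for SandForce's real compressor
    return min(len(compressed), len(data))

text_like = b"The quick brown fox jumps over the lazy dog. " * 1000  # highly compressible
random_like = os.urandom(len(text_like))                             # effectively incompressible

for label, payload in (("compressible", text_like), ("incompressible", random_like)):
    stored = bytes_committed_to_nand(payload)
    print(f"{label}: host wrote {len(payload)} bytes -> roughly {stored} bytes hit the NAND")
```

The first payload shrinks to a tiny fraction of its original size, while the random data gains nothing, which mirrors the gap between SandForce drives’ compressible and incompressible write speeds.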

Inside the Fury, you’ll find 16 NAND packages, each contributing 16GB of storage. A little arithmetic shows that the 16GB of excess raw capacity, combined with the usual GB-versus-GiB shenanigans, adds up to roughly 35GB of overprovisioning. Each package contains a single 128Gb Kingston-branded MLC NAND die. With only 16 NAND dies in total, the Fury’s performance will likely suffer, since most controllers need at least 32 dies attached in order to reach their peak speeds. The Fury 240GB comes with a three-year warranty and is rated to withstand a comfortable 641 TB of writes.
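For the curious, the back-of-the-envelope overprovisioning math looks like this (our own quick calculation, not an official Kingston figure):

```python
# 16 packages x one 128Gb die each = 256GiB of raw NAND
raw_bytes  = 16 * 128 * 2**30 // 8   # a 128Gb die holds 16GiB
user_bytes = 240 * 10**9             # 240GB as marketed, in decimal gigabytes

spare_bytes = raw_bytes - user_bytes
print(f"raw NAND:        {raw_bytes / 1e9:.1f} GB")
print(f"user capacity:   {user_bytes / 1e9:.1f} GB")
print(f"overprovisioned: {spare_bytes / 1e9:.1f} GB ({spare_bytes / raw_bytes:.1%} of raw)")
```

That works out to about 35GB of spare area, or just under 13% of the raw flash.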

Next, we have the SanDisk Ultra II 960GB. The Ultra II is also available in 120GB, 240GB, and 480GB variants. It fits roughly in the middle of SanDisk’s consumer SSD lineup, and it’s the company’s only consumer product built with TLC NAND.

Within the Ultra II lie eight NAND packages, each with a density of 128GB. Each package holds eight 128Gb SanDisk TLC dies, so the 8-channel Marvell 88SS9189 controller inside the Ultra II should be able to leverage a high degree of interleaving on each channel to improve speeds. This controller has also been around the block, most notably inside both iterations of Crucial’s MX-series SSDs.
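To put the interleaving argument in concrete terms, here’s the die math for both drives, assuming eight controller channels apiece (die counts inferred from the package densities above):

```python
def dies_per_channel(total_dies: int, channels: int = 8) -> float:
    """How many NAND dies each controller channel can interleave requests across."""
    return total_dies / channels

# HyperX Fury 240GB: 16 packages x one 128Gb die each
# Ultra II 960GB:    8 packages x eight 128Gb dies each (128GB per package)
print("HyperX Fury 240GB:", dies_per_channel(16 * 1), "dies per channel")
print("Ultra II 960GB:   ", dies_per_channel(8 * 8), "dies per channel")
```

Two dies per channel versus eight goes a long way toward explaining the gap we’ll see in the write-heavy tests.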

The Ultra II’s TLC NAND puts it at a theoretical disadvantage compared to the MLC-based Fury, but there’s more to speed than bits per cell. The additional I/O parallelism afforded by the 960GB Ultra II’s NAND configuration should even things out. Additionally, SanDisk employs a caching system called nCache to boost the drive’s write performance. Simply put, nCache dedicates a portion of the NAND to running in SLC mode. Writes hit this SLC cache first and are then transferred to TLC during idle time by way of an efficient on-chip copy mechanism. The Ultra II uses the second revision of nCache, but the big idea is the same as the original, which we’ve talked about in some depth before.
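SanDisk doesn’t document nCache 2.0’s exact policies, but the general shape of an SLC write cache is simple enough to sketch. The capacities and flush policy below are ours, purely for illustration:

```python
from collections import deque

class SlcCachedDrive:
    """Toy model of an nCache-style write path (illustrative only)."""

    def __init__(self, slc_capacity: int = 4):
        self.slc = deque()           # small, fast SLC-mode region
        self.slc_capacity = slc_capacity
        self.tlc = []                # large, slower TLC main store

    def write(self, block):
        if len(self.slc) >= self.slc_capacity:
            self._fold_to_tlc()      # cache full: migrate before accepting more
        self.slc.append(block)       # the host sees a fast SLC-speed write

    def idle(self):
        self._fold_to_tlc()          # idle-time migration, akin to nCache's on-chip copy

    def _fold_to_tlc(self):
        while self.slc:
            self.tlc.append(self.slc.popleft())   # slower TLC programming happens here

drive = SlcCachedDrive()
for i in range(6):
    drive.write(f"block{i}")
drive.idle()
print(len(drive.tlc), "blocks folded to TLC,", len(drive.slc), "still in SLC")
```

As long as bursts fit within the SLC region and the drive gets idle time to fold data into TLC, the host never sees TLC programming speeds; sustained writes that overflow the cache are another story.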

The Ultra II 960GB also comes with a three-year warranty. SanDisk doesn’t provide an endurance rating for it in terms of bytes written, instead claiming a mean time between failure of 1.75 million hours. Given that we tortured the TLC-based Samsung 840 EVO beyond 300 TB, we’re not too worried about TLC endurance.

Finally, we have the OCZ Arc 100 240GB. We’ve already covered this drive in detail, so to summarize briefly, the Arc 100 is an entry-level MLC SSD positioned just above the new TLC-based Trion series in OCZ’s lineup. It surprised us by punching above its weight at a budget price point, earning it a TR Recommended award. This time around, it’ll make a good reference point and provide some context as we examine the two other drives. On to the benchmarks!

 

IOMeter — Sequential and random performance

IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. (87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less.) Clicking the buttons below the graphs switches between the different queue depths.

Our sequential tests use a relatively large 128KB block size.



 

The Arc 100 is still looking pretty good here. The other two drives provide sequential speeds more in line with what we’d expect from low-end SSDs.

Next, we’ll turn our attention to performance with 4KB random I/O. We’ve reported average response times rather than raw throughput, which we think makes sense in the context of system responsiveness.
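For reference, a QD1 response-time measurement boils down to timing one small read at a time at random offsets and averaging. The Python sketch below captures the shape of the measurement; it isn’t our IOMeter configuration, and a real benchmark would also bypass the OS page cache:

```python
import random
import time

def avg_random_read_latency_ms(path: str, block_size: int = 4096, samples: int = 1000) -> float:
    """Average response time of QD1 4KB reads at random aligned offsets in a file."""
    with open(path, "rb", buffering=0) as f:
        size = f.seek(0, 2)                    # seek to the end to find the file size
        total = 0.0
        for _ in range(samples):
            offset = random.randrange(0, size - block_size) & ~(block_size - 1)
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            total += time.perf_counter() - start
    return total / samples * 1000              # milliseconds

# print(f"{avg_random_read_latency_ms('testfile.bin'):.3f} ms average response time")
```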



 

The HyperX Fury fares poorly here, reporting a random write response time of over six milliseconds during QD4 testing, compared to the sub-millisecond times of the others. Most likely, it’s that pesky NAND configuration bottlenecking the controller.

The preceding tests are based on the median of three consecutive three-minute runs. SSDs typically deliver consistent sequential and random read performance over that period, but random write speeds worsen as the drive’s overprovisioned area is consumed by incoming writes. We explore that decline on the next page.

 

IOMeter — Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, which should saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.

We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.


Once again, the Arc 100 outperforms its price bracket. The Kingston lags far behind after a very short-lived initial burst.

To show the data in a slightly different light, we’ve graphed the peak random write rate and the average, steady-state speed over the last minute of the test.
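Assuming one IOps sample per second, those two figures fall straight out of the trace:

```python
def peak_and_steady_state(iops_samples):
    """iops_samples: one IOps reading per second over the 30-minute run."""
    peak = max(iops_samples)
    steady_state = sum(iops_samples[-60:]) / 60   # average over the final minute
    return peak, steady_state
```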

The Fury’s peak IOps figure looks impressive, but it only lasted a second or so, which makes its usefulness questionable. The drive then dropped to the 20K-30K IOps range for about a hundred seconds before arriving at the steady state shown in the graph. As we’ve noted, the Fury’s 16-die configuration handicaps it across most of our testing, and it’s especially noticeable during writes.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.

We use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. Note that the Arc 100 uses a significantly larger scale.
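The access mix itself is easy to picture: each request is a random-offset read with 66% probability or a random-offset write otherwise. A rough sketch of the pattern (not IOMeter’s actual generator) looks like this:

```python
import random

def next_request(drive_bytes: int, block_size: int = 4096):
    """One request from a 66% read / 33% write, fully random access pattern."""
    op = "read" if random.random() < 0.66 else "write"
    offset = random.randrange(0, drive_bytes // block_size) * block_size
    return op, offset
```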


The Arc 100 again asserts its budget dominance. The Fury and Ultra II are more or less neck and neck. The graph below illustrates the difference side-by-side. The buttons toggle between total, read, and write IOps.


 

TR RoboBench — Real-world transfers

RoboBench trades synthetic tests for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the same SSD.

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

        Number of files   Average file size   Total size   Compressibility
Media   618               6.4MB               3.94GB       1.35%
Work    35,184            33.0KB              1.16GB       76.24%

The “media” set is made up of movie files, MP3s, and high-resolution images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The “work” set comprises loads of productivity-type files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files, among them the source for the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
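Our RoboBench script isn’t published, but the heart of the measurement is just a timed robocopy run. Here’s a rough Python sketch; the paths and the exact flag set are illustrative rather than copied from our script:

```python
import subprocess
import time

def timed_copy_mb_per_s(src: str, dst: str, total_bytes: int, threads: int = 8) -> float:
    """Time a multithreaded robocopy run and report throughput in MB/s."""
    start = time.perf_counter()
    subprocess.run(
        ["robocopy", src, dst, "/E", f"/MT:{threads}", "/NJH", "/NJS"],
        check=False,   # robocopy uses nonzero exit codes even for successful copies
    )
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6

# e.g. reading the 3.94GB media set from the SSD under test to a RAM disk:
# print(timed_copy_mb_per_s(r"D:\media", r"R:\media", int(3.94e9)), "MB/s")
```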

Read speeds are up first. Click the buttons below the graphs to switch between one and eight threads.



The results here are much closer together than in the synthetics. The differences are especially small in the eight-thread test. Nonetheless, the Arc 100 manages to come out on top yet again.



Write speeds are a similarly close call. The Fury is a little slower in the eight-thread media test, but it stays right on track with the others in the work tests.



The copy results resemble the write results. The Fury lags behind in the media test but barely trails otherwise.

 

Boot times

Thus far, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.

We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused just on the time required to load the OS, but these new ones cover the entire process, including drive initialization.

Despite being the clear winner of many of our prior tests, the Arc 100 doesn’t separate itself from the others in terms of boot times. Across both bare and loaded boots, the three drives are all within two seconds of each other, and the Ultra II comes out on top.

Load times

Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.

Nothing extraordinary to note here, as there are no clear winners or losers. Next, we see whether any of the drives distinguish themselves in loading up games.

Indeed not. These drives will get you adventuring in roughly the same amount of time.

Power consumption

Now let’s look briefly at power consumption. For idle power, we take the lowest value observed over a five-minute period, beginning one minute after Windows has processed its idle tasks. For load power, we take the highest value observed over a five-minute period while hitting the drive with a write-heavy IOMeter workload.

No big differences here. Power consumption is pretty similar with most SATA drives we come across these days. However, this is the only test we ran where the OCZ consistently came in last.

 

Test notes and methods

Here are the essential details for the drives we tested:

                            Interface    Flash controller         NAND
Kingston HyperX Fury 240GB  SATA 6Gbps   SandForce SF-2281        Kingston MLC
OCZ Arc 100 240GB           SATA 6Gbps   Indilinx Barefoot 3 M10  A19-nm Toshiba MLC
SanDisk Ultra II 960GB      SATA 6Gbps   Marvell 88SS9189         19-nm SanDisk TLC

All the SSDs were connected to the motherboard’s H77 chipset.

We used the following system for testing:

Processor Intel Core i3-2100 3.1GHz
Motherboard Gigabyte H77N-WiFi
Platform hub Intel H77
Memory size 8GB (2 DIMMs)
Memory type Corsair Dominator Platinum DDR3 1866MHz
Memory timings 9-10-9-27
System drive Intel 510 120GB
Power supply Antec Edge 650W
Operating system Windows 8.1 Pro x64

Thanks to Gigabyte for providing the system’s motherboard, Intel for the CPU and system drive, Corsair for the memory, and Antec for the PSU. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

  • IOMeter 1.1.0 x64
  • TR RoboBench 0.2a
  • Avidemux 2.6.8 x64
  • LibreOffice 4.1.1.2
  • GIMP 2.8.14
  • Visual Studio Community 2013
  • The Elder Scrolls V: Skyrim
  • Tomb Raider
  • Sid Meier’s Civilization V

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.1GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test system’s Windows desktop was set to 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Conclusions

What have we learned from pitting these drives against each other? For starters, the Arc 100 continues to be a diamond in the rough. Its performance in synthetics left the Fury and Ultra II in the dust. It’s little wonder that we gave it a TR Recommended Award last year. On the other hand, that synthetic performance seems to make little difference in the real world. Neither the Fury’s sub-optimal die configuration nor the Ultra II’s TLC was a noticeable handicap in our booting and loading tests.

The question becomes, then: which one to buy? At the time of this writing, the Arc 100 goes for $92, the Ultra II for $350, and the Fury for $93. At those prices, the cost-per-gigabyte works out to be $0.38, $0.36, and $0.39, respectively. For those looking for a terabyte-class SSD, the Ultra II is a solid performer. TLC flash keeps costs down, while nCache keeps endurance and speed tolerable.
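The per-gigabyte math is simple enough to redo with whatever prices are current when you read this:

```python
for name, price, gigabytes in (("Arc 100", 92, 240), ("Ultra II", 350, 960), ("HyperX Fury", 93, 240)):
    print(f"{name}: ${price / gigabytes:.2f} per GB")
```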

If that kind of capacity doesn’t interest you, the Arc 100 is still one heck of a drive. For a fairly low price, you get performance comparable to the ever-popular Samsung 850 EVO 250GB. The Arc 100 outright beats the also-popular Crucial offerings (the BX100 250GB and MX200 250GB), according to the performance chart Geoff put together earlier this year. The Arc 100 also adds 256-bit AES hardware encryption, which neither the Fury nor the Ultra II offers. We can’t really recommend the HyperX Fury, however. With better-performing drives available at similar prices, it’s simply out of contention at its current price point.

Ultimately, our advice remains the same as it’s been for quite a while—if you’re upgrading from mechanical storage, snag just about any SSD you can get at a good price. The deciding factor will likely be one that’s not exposed by our testing. Reliability, customer service, and secondary features like hardware-level encryption are important considerations that we can’t just quantify and graph, much as we’d like to.

For those of you already on solid-state storage and looking for an upgrade, our advice is to wait if you can. The NVMe/PCIe revolution is upon us. Intel’s Z170 platform provides more PCIe lanes for fast next-generation storage than you can shake a stick at. With some help from SSD makers, we’ll hopefully have a full field of PCIe SSDs to throw at it soon.

Comments closed
    • HERETIC
    • 4 years ago

    What a mix - the OCZ probably has the best flash, the SanDisk the best controller.
    And Kingston throwing in what scraps they can find.
    Just personal preference - I only buy drives where the NAND is cut and binned by
    the manufacturer, so that leaves out the likes of Kingston and A-Data, as it always
    reminds me of why OCZ were getting those 60% failure rates a few years ago.

      • brucethemoose
      • 4 years ago

      I thought OCZ’s Barefoot 3 was a really fast controller.

      …What makes the Vector/Vertex so much faster than the ARC in TR’s tests anyway? As far as I can tell, they all use the same flash, and the Vertex/ARC use the same version of the controller.

        • HERETIC
        • 4 years ago

        “I thought OCZ’s Barefoot 3 was a really fast controller.”

        Even when thinking SSD-everything isn’t about “fast”
        There’s only so much pie to go around.
        Data in-Data out-Error correction-When to do garbage cleanup.
        Marvell is one of the most used/reliable controllers out there……….

    • DPete27
    • 4 years ago

    I’ve bought a half dozen 240GB Arc 100’s for builds. They’re my go-to right now. Lowest price I’ve paid = $63 after MIR but they’re pretty regularly $80 after MIR (FYI)

    • anotherengineer
    • 4 years ago

    Nice little review.

    Still waiting for some plextor drives to be through the TR wringer.

    Edit – Also is this still an issue with SSDs?? (performance degradation with time and capacity)
    [url<]http://www.xbitlabs.com/articles/storage/display/marvell-ssd_7.html#sect0[/url<]

      • weaktoss
      • 4 years ago

      Yep, still an issue! If you take a look at the sustained IOMeter graphs on page 3 of this review, you’ll see that the drives perform at a significantly higher rate for a short time before hitting their steady-state IOps.

      • HERETIC
      • 4 years ago

      “Also is this still an issue with SSDs?? (performance degradation with time and capacity)”

      Please correct me if I’ve got this wrong-

      YES-if you thrash the crap out of your SSD and don’t give time for recovery.
      NO-if you use your SSD in a normal manner-as garbage cleanup and trim help to
      restore full performance-on all but Sandforce drives-that was one of the major
      problems of the Sandforce controller.
      The drops on page 3-In my opinion-are more the SLC cache becoming full……..

    • Chrispy_
    • 4 years ago

    Sandforce drives are ‘fast enough’ if you find them on heavy discount at a much better cost/GB than everything else, but OMG, the controller is so old now.

    Surely there’s something new, cheap and better Kingston could use from Phison or Marvell?

    I’m still running a SF-2281 in my HTPC and it’s fine. It was a decent controller for its day (once the firmware kinks responsible for bluescreening were ironed-out) but that day was 2011 people.

    With far superior modern options available to compete on price, why have Kingston stuck with such an old controller?

      • Milo Burke
      • 4 years ago

      My Sandforce controller melted into a Glassforce controller. It’s faster now.

      • just brew it!
      • 4 years ago

      Tried and true?

        • Chrispy_
        • 4 years ago

        Tired and cheap is probably closer.

    • chuckula
    • 4 years ago

    Good review Tony.

    Just one thing (not related to your writeup): Kingston’s HyperX Fury

    With a name like “HyperX Fury” I smell a lawsuit with AMD. I’m just not sure who stole from whom.

      • Chrispy_
      • 4 years ago

      Pretty sure I’ve been putting HyperX and Fury branded parts in builds for years before AMD used those words.

        • JustAnEngineer
        • 4 years ago

        Marvel’s been doing it since [url=https://en.wikipedia.org/wiki/Nick_Fury<]1963[/url<].

        • HisDivineOrder
        • 4 years ago

        ATI used the word “Fury” long before Hyper X was attached to it.

        [quote<]Rage Fury - 32 MB SDRAM memory and same performance as the Magnum, this add-in card was targeted at PC gamers.[/quote<] [url<]https://en.wikipedia.org/wiki/ATI_Rage#RAGE_128[/url<]

    • weaktoss
    • 4 years ago

    Hey guys, Tony here! Figured it was time to stop lurking in the shadows and expose my forum handle. Whoever was tabulating the staff’s names and handles can add me to the list.

      • DrDominodog51
      • 4 years ago

      I’m on it now.

      Edit: Done. It is in the comments [url=https://techreport.com/news/28702/in-the-lab-budget-ssds-from-sandisk-ocz-and-kingston<]here.[/url<] It looks like I have almost all front-facing staffs handles down.

        • w76
        • 4 years ago

        … and for all the years I’ve been lurking intermittently, I’ve thought ronch was Ronald Hanaki. No idea why I even thought that.

          • DrDominodog51
          • 4 years ago

          No evidence denies your theory or confirms it. The reason I knew loopless and weaktoss were Nelson and Thomas respectively was their writing styles in the comments and the content of the comment of course.

    • wierdo
    • 4 years ago

    The Sandisk drive looks like a decent “budget” drive relative to the 850 EVO series. But imho at $350 I’d take the older M/MX/BX series from Crucial, they use MLC and are easy to find cheaper – for example under $300 for that size these days, sometimes almost as low as $250.

    I’m not sure about SSDs from Kingston (and PNY) though, don’t like their bait-and-switch practices:

    [url<]https://techreport.com/review/26664/alleged-bait-and-switch-tactics-spur-kingston-pny-ssd-boycott[/url<]

      • Freon
      • 4 years ago

      Not to mention OCZ’s sterling reputation of reliability. Never again.

        • Sabresiberian
        • 4 years ago

        I expect that Toshiba’s purchase and oversight of OCZ has led to a better validation process. I certainly understand waiting to see if the OCZ brand will live up to the reliability it must have to be effective and competitive (or writing them off entirely), but personally haven’t dropped them from my list. And since Toshiba has every step of the SSD process in-house (NAND, controller) it could be one of the manufacturers that make it through the crowded and highly competitive market.

        But – OCZ is going to have to shine brightly against tough competitors like Samsung and Intel for me to pay attention to what they offer (and that will have to be in a PCIe + NVMe form). That is a tall order indeed, and the only place I can see them being competitive in the next few years is price. That simply won’t be enough for me to buy OCZ hardware.

          • Chrispy_
          • 4 years ago

          Huh, you’d trust Samsung over OCZ?

          You must have been living under a rock since the 840 was released. Samsung’s track record is absolutely awful at the moment and OCZ hasn’t put a foot wrong since Toshiba took control.

          If anything the Arc100 is still perhaps the budget drive to buy, it focuses on consistently good performance rather than overly inflated burst-mode statistics that then flatline under heavy use. 12 months on the market and no major complaints which is more than can be said of either Crucial’s or Samsung’s products.

            • Freon
            • 4 years ago

            I’m not sure Samsung is that bad. 840 EVO issues are not the end of the world. I think the whole Algolia TRIM bug thing turned out to be a bug in Linux, not their drives.

            [url<]https://techreport.com/news/28674/samsung-says-data-eating-trim-bug-is-a-linux-kernel-problem[/url<] I've decided to go with the BX100 over the 850 EVO to err on the safe side, but I don't feel immensely strong about that. I usually give a soft recommendation to others as well. 850 EVO has now been out close to a year and no issues have cropped up so far. It is a new NAND...

          • Freon
          • 4 years ago

          I just wonder why they use the OCZ name at all. Why did they purchase them?

          I tend to think a lot of the issues weren’t necessarily manufacturing related, but more with poorly tested controller firmwares which was about all OCZ was adding to the picture. They only ever offered stitching together NANDs and controllers from third parties such as Jmicron or Sandforce.

          Whatever is left of OCZ seems toxic to me. Maybe whatever used to be OCZ was completely junked, but I’d rather buy something with Toshiba’s name and reputation on it. It makes me wonder if Toshiba really wants to own up to it.

            • HERETIC
            • 4 years ago

            Manufacturing was definitely an issue. When the supply of NAND was tight, OCZ started buying
            wafers and cutting and binning them themselves.
            With the price war that was happening at the same time, this pushed OCZ to put low-binned NAND into drives that other manufacturers would use for cheap pen drives.
            Combine this with OCZ experimenting with controllers, trying to write firmware to compensate
            for low-quality NAND, and the absolute mess that was Sandforce at the time, and you ended up with some
            drives having a 60% failure rate.

            • Chrispy_
            • 4 years ago

            Well, OCZ of old was toxic, but before the Toshiba acquisition, [b<]they bought out Indilinx[/b<], who I've always considered to be a strong and fault-free controller company. The original Barefoot was good. The Barefoot 3 has proven itself very competitive. There are no significant past issues with Indilinx and the stellar performance of the original Barefoot and Barefoot 3 is probably what kept OCZ selling products *despite* their reputation for unreliable products. In short, SSDs are a combination of commodity, off-the-shelf NAND and some rather special-sauce controller magic. Indilinx is the important part of the equation and I think most people associate Indilinx with OCZ.
