Patriot’s Hellfire 480GB NVMe SSD reviewed

Just a few weeks ago, we were bemoaning the dearth of affordable NVMe drives. For a long time, NVMe products have commanded around double the per-gigabyte cost of more pedestrian AHCI drives. That premium had a lot to do with how few players there were in the market.

Since then, the NVMe ecosystem has matured substantially, and far more vendors are offering SSDs that utilize the next-gen protocol. With new drives often targeting less exorbitant price points and old drives feeling pressure from the added competition, our wallets can breathe a collective sigh of relief.

One of the new wave of manufacturers hawking NVMe goodness is Patriot Memory with its Hellfire M.2 SSD. The drive’s been around for some months now, but it’s only recently made its way over to our storage labs. Patriot sent over a 480GB sample for our testing pleasure. Take a look at the drive and its basic specs.

Patriot Hellfire
Capacity | Max sequential read | Max sequential write | Max random read | Max random write
240GB | 3000 MB/s | 2300 MB/s | 370k IOps | 185k IOps
480GB | 3000 MB/s | 2400 MB/s | 185k IOps | 210k IOps

Not only is the Hellfire Patriot’s first NVMe drive, it’s actually the first Patriot SSD to hit the TR storage labs. Patriot has a long history of slinging SSDs, though, dating back to the days when SATA 3Gbps connections ruled the roost. Back then, the Patriot SSD product lines had names like “Zephyr” and “Inferno,” and the company’s florid naming scheme hasn’t changed much since. The current Patriot lineup includes the “Spark,” “Blast,” “Pyro,” “Blaze,” “Flare,” and “Ignite” drives. Admirable thematic consistency, but figuring out what each name implies about the relative strength of the respective drive is left as an exercise for the reader.

What’s obvious is that the Hellfire’s got ’em all beat. Even if the name didn’t tip us off, the fact is that the Hellfire is the only PCIe drive in Patriot’s stable. Good thing it packs the right combination of technologies to go with that zesty name: a PCIe x4 interface, the NVMe protocol, and an M.2 2280 form factor.

As is the case with most M.2 drives, the Hellfire’s chassis is a modest affair: just a sliver of PCB with a basic sticker covering up the operative bits. No frills, but that’s how we like it.

With the sticker peeled back, the components fueling the Hellfire are laid bare. Toshiba’s pervasive 15-nm MLC is back in the TR labs yet again. This same flash was the backbone of Toshiba’s own OCZ RD400 NVMe gumstick, so the results from our RD400 512GB review will be a pertinent comparison point when we get into our performance testing. The NAND on this drive is distributed among four packages: two on each side of the PCB. The two on the top side share space with the drive’s DRAM cache and the controller, Phison’s PS5007-E7.

Phison, like Patriot, is a name that’s been in the game for ages, but the company’s controllers haven’t made an appearance in the TR labs in quite some time. In fact, none of the drives benched on the current storage rig are powered by the company’s silicon, so this is a good opportunity to see whether Phison is still keeping up with its contemporaries. The E7, at least, checks all the right boxes. It’s an eight-channel controller designed for PCIe Gen3 x4 NVMe applications, and it supports 256-bit AES encryption (with TCG Opal 2.0 compliance to boot).

Alas, the Hellfire doesn’t take advantage of the E7’s encryption capabilities. It does, however, offer a three-year warranty and an endurance rating of 230 terabytes written. At the moment, the drive is out of stock at Newegg, but Amazon’s letting it go for only $230. That’s a pretty palatable price for a speedy NVMe drive.

Well, only if it’s actually speedy. So let’s find out how fast the Hellfire is.

 

IOMeter — Sequential and random performance

IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.

Our sequential tests use a relatively large 128KB block size.
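For a rough feel for what the QD1 sequential pass amounts to, here’s a minimal Python sketch. The file path is hypothetical, and unlike IOMeter, this goes through the OS page cache, so treat it as an illustration of the access pattern rather than a calibrated benchmark:

```python
import os
import time

TEST_FILE = "D:/testfile.bin"    # hypothetical large file on the drive under test
BLOCK_SIZE = 128 * 1024          # 128KB blocks, matching our sequential tests

def sequential_read_mbps(path, block_size=BLOCK_SIZE):
    """Issue back-to-back 128KB reads, i.e. an effective queue depth of one."""
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    total = 0
    start = time.perf_counter()
    try:
        while chunk := os.read(fd, block_size):
            total += len(chunk)
    finally:
        os.close(fd)
    return total / (time.perf_counter() - start) / 1e6   # MB/s

print(f"{sequential_read_mbps(TEST_FILE):.0f} MB/s")
```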



The Hellfire’s sequential speeds are good enough to keep it ahead of the SATA herd, but they aren’t particularly fast for a PCIe drive. The RD400 looks about 50% faster in most of these tests.



The drive’s random response times are a mixed bag. Read times are definitely on the slow side, beaten by various older, slower drives. But the Hellfire’s write times are blazingly quick, landing close to the top of the charts.

So far, the Hellfire seems all right. Not mindblowing, but all right. Perhaps more complex testing will expose more noteworthy strengths or weaknesses. Let’s move on.

 

Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.

We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.
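To make the shape of these plots concrete, here’s a hedged sketch of the access pattern: 4KB writes to random offsets within a preallocated file, with IOps logged once per second. It’s a single-threaded, buffered stand-in (IOMeter keeps 32 unbuffered requests in flight), so only the peak-then-steady-state shape is comparable, not the absolute numbers:

```python
import os
import random
import time

TEST_FILE = "D:/scratch.bin"   # hypothetical preallocated file on the drive under test
BLOCK = 4096

def sample_random_write_iops(path, minutes=30):
    """4KB random writes, logging IOps once per second so the
    peak-then-steady-state shape of the run becomes visible."""
    blocks = os.path.getsize(path) // BLOCK
    payload = os.urandom(BLOCK)
    fd = os.open(path, os.O_WRONLY | getattr(os, "O_BINARY", 0))
    samples, ops, tick = [], 0, time.monotonic()
    try:
        end = tick + minutes * 60
        while (now := time.monotonic()) < end:
            os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
            os.write(fd, payload)
            ops += 1
            if now - tick >= 1.0:
                samples.append(ops / (now - tick))
                tick, ops = now, 0
    finally:
        os.close(fd)
    return samples   # one IOps sample per second of the run
```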


The Hellfire’s performance appears to peak quite high, albeit for only a brief time. Both its peak and its steady state seem to have the RD400 beat by a good amount.

The Hellfire’s peak rate is indeed twice the RD400’s. Its steady-state performance is about 50% faster than the OCZ drive’s, to boot. Phison must be doing something right here. Toshiba may want to take notes.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.

For this test, we use a database access pattern comprising two parts reads to one part writes, all of them random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.
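Queue depth is simply the number of requests kept in flight at once. A thread-per-request sketch like the one below approximates the idea (IOMeter uses asynchronous I/O rather than threads, and the path here is hypothetical):

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "D:/scratch.bin"   # hypothetical preconditioned file on the drive under test
BLOCK = 4096

def mixed_worker(path, blocks, seconds):
    """One worker ~= one outstanding request: two-thirds reads, one-third writes."""
    payload = os.urandom(BLOCK)
    fd = os.open(path, os.O_RDWR | getattr(os, "O_BINARY", 0))
    ops, end = 0, time.monotonic() + seconds
    try:
        while time.monotonic() < end:
            os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
            if random.random() < 2 / 3:
                os.read(fd, BLOCK)
            else:
                os.write(fd, payload)
            ops += 1
    finally:
        os.close(fd)
    return ops

def iops_at_depth(depth, seconds=60):
    """Total IOps with `depth` workers hammering the drive at once."""
    blocks = os.path.getsize(TEST_FILE) // BLOCK
    with ThreadPoolExecutor(max_workers=depth) as pool:
        totals = pool.map(mixed_worker, [TEST_FILE] * depth,
                          [blocks] * depth, [seconds] * depth)
    return sum(totals) / seconds

for qd in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"QD{qd}: {iops_at_depth(qd):,.0f} IOps")
```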


It’s been some time since we’ve seen such straightforward curves. The Hellfire scales smoothly from QD1 all the way to QD128. The rate of increase certainly slows, but at no point does it flatline or regress. Let’s look at some other NVMe drives for context.


The Hellfire looks way better here than either the RD400 or Samsung’s 950 Pro, both of which tapered off around QD8. Phison’s E7 continues to impress us.

Now it’s time to set IOMeter aside and see how the Hellfire fares with real-world workloads.

 

TR RoboBench — Real-world transfers

RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.
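Under the hood, each RoboBench pass is little more than a timed robocopy run. Here’s a minimal sketch, assuming a hypothetical R: RAM disk and D: drive under test; robocopy’s /MT switch, covered below, sets the thread count:

```python
import subprocess
import time

SRC, DST = r"R:\media", r"D:\media"   # hypothetical RAM disk and drive under test

def robocopy_mbps(src, dst, threads=8, total_bytes=9.58e9):
    """Time a multi-threaded robocopy of a directory tree and convert to MB/s."""
    start = time.perf_counter()
    # Note: robocopy returns nonzero exit codes even on success, so no check=True.
    subprocess.run(["robocopy", src, dst, "/E", f"/MT:{threads}"],
                   stdout=subprocess.DEVNULL)
    return total_bytes / (time.perf_counter() - start) / 1e6

print(f"{robocopy_mbps(SRC, DST):.0f} MB/s")   # default size matches our media set
```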

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

Set | Number of files | Average file size | Total size | Compressibility
Media | 459 | 21.4MB | 9.58GB | 0.8%
Work | 84,652 | 48.0KB | 3.87GB | 59%

The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
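If you want to reproduce that compressibility figure on your own file sets, a rough equivalent is easy to script. Our numbers come from 7-Zip; the zlib-based sketch below will land in the same ballpark but won’t match exactly, and the path is hypothetical:

```python
import pathlib
import zlib

def compressed_size(path, chunk=1 << 20):
    """Stream a file through zlib and return its compressed size."""
    co = zlib.compressobj(6)
    total = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            total += len(co.compress(block))
    return total + len(co.flush())

def compressibility(root):
    """Share of bytes a compressor can squeeze out of a file set."""
    raw = packed = 0
    for f in pathlib.Path(root).rglob("*"):
        if f.is_file():
            raw += f.stat().st_size
            packed += compressed_size(f)
    return 1 - packed / raw   # near 0 for media files, much higher for documents

print(f"{compressibility('R:/work') * 100:.0f}% compressible")
```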

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.

Let’s take a look at the media set first. The buttons switch between read, write, and copy results.



The Hellfire gratifies us with the greater-than-1000MB/s read speeds that we’re accustomed to seeing from PCIe drives, but it falls shy of that mark in the write tests. In fact, in the media set, the Hellfire seems to be the slowest writer and copier of any NVMe drive we’ve tested.

Let’s see whether it puts on a better show in the work set.



Read speeds are again as good as we could hope for. This time around, writes look solid, too—the Hellfire has the RD400 beat by a little in the single-threaded test and by a huge margin in the eight-threaded test.

The Hellfire proved to be a bit slower in RoboBench write tests than some of its rivals, but it still put in an NVMe-worthy performance. Our last set of tests will see how fit the drive is to be primary storage.

 

Boot times

Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.

We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused on the time required to load the OS, but these new ones cover the entire process, including drive initialization.

SSDs always fall within a narrow band of results in our boot tests. The Hellfire, though, manages to land near the very top of that band. It’s no slow booter.

Load times

Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in the GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.

Everything is in order. Perfectly fine load times, nothing to remark on. Lastly, let’s see how snappy the drive is when loading games.

The Hellfire’s loading speeds land in line with expectations, and it makes a competent game library drive.

We’re all outta tests, so read on for a summary of our methods. Or skip ahead to the conclusion.

 

Test notes and methods

Here are the essential details for all the drives we tested:

Drive | Interface | Flash controller | NAND
Adata Premier SP550 480GB | SATA 6Gbps | Silicon Motion SM2256 | 16-nm SK Hynix TLC
Adata Ultimate SU800 512GB | SATA 6Gbps | Silicon Motion SM2258 | 32-layer Micron 3D TLC
Adata XPG SX930 240GB | SATA 6Gbps | JMicron JMF670H | 16-nm Micron MLC
Crucial BX100 500GB | SATA 6Gbps | Silicon Motion SM2246EN | 16-nm Micron MLC
Crucial BX200 480GB | SATA 6Gbps | Silicon Motion SM2256 | 16-nm Micron TLC
Crucial MX200 500GB | SATA 6Gbps | Marvell 88SS9189 | 16-nm Micron MLC
Crucial MX300 750GB | SATA 6Gbps | Marvell 88SS1074 | 32-layer Micron 3D TLC
Intel X25-M G2 160GB | SATA 3Gbps | Intel PC29AS21BA0 | 34-nm Intel MLC
Intel 335 Series 240GB | SATA 6Gbps | SandForce SF-2281 | 20-nm Intel MLC
Intel 730 Series 480GB | SATA 6Gbps | Intel PC29AS21CA0 | 20-nm Intel MLC
Intel 750 Series 1.2TB | PCIe Gen3 x4 | Intel CH29AE41AB0 | 20-nm Intel MLC
Intel DC P3700 800GB | PCIe Gen3 x4 | Intel CH29AE41AB0 | 20-nm Intel MLC
Mushkin Reactor 1TB | SATA 6Gbps | Silicon Motion SM2246EN | 16-nm Micron MLC
OCZ Arc 100 240GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC
OCZ Trion 100 480GB | SATA 6Gbps | Toshiba TC58 | A19-nm Toshiba TLC
OCZ Trion 150 480GB | SATA 6Gbps | Toshiba TC58 | 15-nm Toshiba TLC
OCZ Vector 180 240GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC
OCZ Vector 180 960GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC
Patriot Hellfire 480GB | PCIe Gen3 x4 | Phison PS5007-E7 | 15-nm Toshiba MLC
Plextor M6e 256GB | PCIe Gen2 x2 | Marvell 88SS9183 | 19-nm Toshiba MLC
Samsung 850 EVO 250GB | SATA 6Gbps | Samsung MGX | 32-layer Samsung TLC
Samsung 850 EVO 1TB | SATA 6Gbps | Samsung MEX | 32-layer Samsung TLC
Samsung 850 Pro 500GB | SATA 6Gbps | Samsung MEX | 32-layer Samsung MLC
Samsung 950 Pro 512GB | PCIe Gen3 x4 | Samsung UBX | 32-layer Samsung MLC
Samsung 960 EVO 250GB | PCIe Gen3 x4 | Samsung Polaris | 32-layer Samsung TLC
Samsung 960 EVO 1TB | PCIe Gen3 x4 | Samsung Polaris | 48-layer Samsung TLC
Samsung 960 Pro 2TB | PCIe Gen3 x4 | Samsung Polaris | 48-layer Samsung MLC
Samsung SM951 512GB | PCIe Gen3 x4 | Samsung S4LN058A01X01 | 16-nm Samsung MLC
Samsung XP941 256GB | PCIe Gen2 x4 | Samsung S4LN053X01 | 19-nm Samsung MLC
Toshiba OCZ RD400 512GB | PCIe Gen3 x4 | Toshiba TC58 | 15-nm Toshiba MLC
Toshiba OCZ VX500 512GB | SATA 6Gbps | Toshiba TC358790XBG | 15-nm Toshiba MLC
Transcend SSD370 256GB | SATA 6Gbps | Transcend TS6500 | Micron or SanDisk MLC
Transcend SSD370 1TB | SATA 6Gbps | Transcend TS6500 | Micron or SanDisk MLC

All the SATA SSDs were connected to the motherboard’s Z97 chipset. The M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941, 950 Pro, RD400, and 960 Pro require more lanes, they were connected to the CPU via a PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.

We used the following system for testing:

Processor | Intel Core i5-4690K 3.5GHz
Motherboard | Asus Z97-Pro
Firmware | 2601
Platform hub | Intel Z97
Platform drivers | Chipset: 10.0.0.13, RST: 13.2.4.1000
Memory size | 16GB (2 DIMMs)
Memory type | Adata XPG V3 DDR3 at 1600 MT/s
Memory timings | 11-11-11-28-1T
Audio | Realtek ALC1150 with 6.0.1.7344 drivers
System drive | Corsair Force LS 240GB with S8FM07.9 firmware
Storage | Crucial BX100 500GB with MU01 firmware
| Crucial BX200 480GB with MU01.4 firmware
| Crucial MX200 500GB with MU01 firmware
| Intel 335 Series 240GB with 335u firmware
| Intel 730 Series 480GB with L2010400 firmware
| Intel 750 Series 1.2TB with 8EV10171 firmware
| Intel DC P3700 800GB with 8DV10043 firmware
| Intel X25-M G2 160GB with 8820 firmware
| Plextor M6e 256GB with 1.04 firmware
| OCZ Trion 100 480GB with 11.2 firmware
| OCZ Trion 150 480GB with 12.2 firmware
| OCZ Vector 180 240GB with 1.0 firmware
| OCZ Vector 180 960GB with 1.0 firmware
| Samsung 850 EVO 250GB with EMT01B6Q firmware
| Samsung 850 EVO 1TB with EMT01B6Q firmware
| Samsung 850 Pro 500GB with EMXM01B6Q firmware
| Samsung 950 Pro 512GB with 1B0QBXX7 firmware
| Samsung XP941 256GB with UXM6501Q firmware
| Transcend SSD370 256GB with O0918B firmware
| Transcend SSD370 1TB with O0919A firmware
Power supply | Corsair AX650 650W
Case | Fractal Design Define R5
Operating system | Windows 8.1 Pro x64

Thanks to Asus for providing the systems’ motherboards, to Intel for the CPUs, to Adata for the memory, to Fractal Design for the cases, and to Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test systems’ Windows desktop was set at 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Conclusions

The Patriot Hellfire seems to be a steady, solid performer. While it lagged a bit behind its NVMe competition in some of our benchmarks, it surprised us with a few standout performances, especially in our sustained and scaling tests. Let’s see where it falls in our overall performance rankings. To distill an overall rating, we take the geometric mean of a basket of results from our test suite, with each drive’s results normalized against those of an older SATA SSD that serves as a baseline. Only drives that have been through the entire current test suite on our current rig are represented.
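For the curious, the math behind that rating boils down to something like the sketch below. The scores here are invented purely for illustration; the actual basket is the set of tests described throughout this review:

```python
from math import prod

# Invented-for-illustration scores: each entry is one test result,
# expressed relative to the baseline SATA SSD (1.0 = baseline speed).
hellfire_vs_baseline = [2.9, 1.7, 2.4, 1.3, 2.1]

def overall_rating(relative_results):
    """Geometric mean, so no single lopsided test can dominate the rating."""
    return prod(relative_results) ** (1 / len(relative_results))

print(f"{overall_rating(hellfire_vs_baseline):.2f}x the baseline drive")
```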

The Hellfire stakes its claim alongside Samsung’s 950 Pro. Not bad company, to say the least. It’s a few percentage points from catching up to its Toshiba 15-nm MLC cousin, the OCZ RD400, but all told, the Hellfire has proven that it belongs among the ranks of high-end NVMe storage.

Whether the drive is worth buying, however, is a completely different story. To make that determination, we turn to our scatter plots. In the plots below, the most compelling position is toward the upper-left corner, where the cost per gigabyte is low and performance is high. Use the buttons to switch between views of all drives, only SATA drives, or only PCIe drives.


The Hellfire lands in an excellent position in our value charts. You can snag it for as little as $230 on Amazon—a mere $0.48 per gigabyte—whereas the 950 Pro and RD400 demand much more money for similar performance. The 960 EVO 1TB might give you some pause, since it’s substantially faster for the same cost-per-gig, but you’ll have to cough up $480 for one of those drives. In absolute terms, that might tip the decision in the Hellfire’s favor.
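Those value positions fall straight out of the arithmetic. A quick sanity check on the street prices quoted above:

```python
# Street prices quoted in this review.
drives = {
    "Patriot Hellfire 480GB": (230, 480),    # ($, GB)
    "Samsung 960 EVO 1TB": (480, 1000),
}

for name, (price, gigs) in drives.items():
    print(f"{name}: ${price / gigs:.2f}/GB")
# Both work out to roughly $0.48/GB; the difference is the total outlay.
```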

In the end, the Hellfire might not be as fast as the breakneck Samsung 960s, but it’s close to the cheapest NVMe drive around, and it’s more than quick enough to live up to that standard’s next-gen reputation. Builders looking for a cost-effective ticket into the world of PCIe and NVMe speeds will be well served by Patriot’s Hellfire, and we’re happy to call it TR Recommended.

Comments closed
    • brucek2
    • 3 years ago

    Looking at that price/performance chart it would seem difficult to justify any drive other than the 960 EVO, unless you just couldn’t allocate that much $ to storage for your system. You can pay less but you’d be getting a lot less for your money, which is funny because it usually works the other way around.

    I suppose somewhere there’s a tiny niche of users doing something that could justify the marginally faster Pro at its much higher cost.

    • DarkUltra
    • 3 years ago

    Does this drive have capacitors to help write out anything in the buffer if the power goes out?
    [url<]https://youtu.be/nwCzcFvmbX0[/url<] (skip to 2:00)

    23 power-loss capacitors used to keep the SSD's controller running just long enough, in the event of an outage, to flush all pending writes:
    [url<]http://www.tomshardware.com/reviews/samsung-845dc-evo-ssd-performance,3838-2.html[/url<]

    Would this prevent something like this?
    [url<]https://youtu.be/-Qddrz1o9AQ[/url<]

    Only some Intel SSDs passed this test:
    [url<]http://www.extremetech.com/computing/173887-ssd-stress-testing-finds-intel-might-be-the-only-reliable-drive-manufacturer[/url<]

      • BoilerGamer
      • 3 years ago

      If you have enough money for an NVMe SSD, you have enough for a UPS, so don’t be cheap.

        • UberGerbil
        • 3 years ago

        UPS doesn’t help in the event of a PSU failure.

    • AnotherReader
    • 3 years ago

    When will you review the most affordable of the PCIe SSDs: Intel’s 600p?

      • Jeff Kampman
      • 3 years ago

      When they send us one, which so far has not happened.

        • AnotherReader
        • 3 years ago

        Thanks for the quick reply. I wonder why they haven’t sent you one yet; AnandTech reviewed one nearly seven weeks ago.

      • Bauxite
      • 3 years ago

      You ain’t kidding, I cleaned out the 3 microcenters near me when the 1TB drives were $260.

      It basically runs like a fast SATA drive instead of a screaming Samsung, but you get some of NVMe/PCIe's latency reductions, and it's Intel, so the warranty/drivers/tools/etc. are great. That and M.2 is awesome; every board I get now has at minimum a small baked-in boot drive that can stay with it.

      It's nice that Samsung et al. are pushing the limits, but there's zero current benefit for gaming/prosumer use; everything loads the same. Only databases, high-end AV work, or similar will ever notice those nosebleed IOps and 1GB/s+ speeds.

    • Dysthymia
    • 3 years ago

    Is this the sort of thing that’s going to become High Bandwidth Cache on AMD Vega cards in the next couple of years?

      • juzz86
      • 3 years ago

      I’m interested in this, too. From reading it looks like the Vega HBCC will take care of what the Phison controller does here, but playing host to the NAND is a distinct possibility?

      I haven’t seen it cleared up whether the HBCC will hand-off directly to NAND, or to an SSD controller. More reading required. It’d be a cheaper, elegant solution to just whack a pair of M.2 slots on a PCI-E GPU card, but then I’d worry about the latency to memory with the additional controller, despite them getting better?

    • JustAnEngineer
    • 3 years ago

    Thanks for comparing this new drive to the old SM951 in my mini-ITX gaming PC.

    • Pzenarch
    • 3 years ago

    What’s going on with the ‘960 Pro 2TB’ in the ‘IOMeter Random 4KB Write QD32’ graph? It looks like it’s not doing anything for six minutes before it surges into action. Issue with graph generation, or something seriously slow going on?

    [url<]https://techreport.com/r.x/2016_10_17_960Pro2TB/sustained.png[/url<]

      • weaktoss
      • 3 years ago

      Not a graph issue, that’s actually what the data shows. We talked to Samsung about it, but they have yet to follow up with an explanation.

    • Takeshi7
    • 3 years ago

    Seems pretty mediocre. Not too much to differentiate it other than above average sustained performance.

    Next M.2 Drive you should test is the Seagate XM1440

      • juzz86
      • 3 years ago

      It’d also be nice to see ‘baseline’ NVMe – Intel’s 600p.

      Having run a few of them now (and read all the reviews trouncing them), in real-life they’re the equivalent of top-end SATA drives at a very similar price point and I’m starting to use them more for M.2 when the MX300 isn’t around.

      • Takeshi7
      • 3 years ago

      Also can you review the OCZ TL100 SSD? I see ads for it on your site all the time.
