Sustained and scaling I/O rates
Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that pushes transfer rates high enough to exhaust each drive's overprovisioned area more quickly. This lengthy—and heavy—workload isn't indicative of typical PC use, but it provides a sense of how the drives react when they're pushed to the brink.
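Readers who want to approximate this workload on their own hardware could reach for fio, a common cross-platform I/O tester. The job file below is a rough sketch of the parameters described above; fio itself, the target path, and the libaio engine are our assumptions, not part of the review's IOMeter setup.

```ini
; Hypothetical fio job approximating the sustained 4KB random-write test.
; WARNING: writing to a raw device destroys its data; the path is an example.
[sustained-randwrite]
filename=/dev/nvme0n1
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32
time_based=1
runtime=1800
```

Running `fio sustained.fio` reports IOps directly, which lines up with how we present these results.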
We're reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.
The Hellfire's performance appears to peak quite high, albeit for only a brief time. Both its peak and its steady state seem to have the RD400 beat by a good amount.
The Hellfire's peak rate is indeed twice the RD400's. Its steady-state performance is about 50% faster than the OCZ drive's, to boot. Phison must be doing something right here. Toshiba may want to take notes.
Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don't expect AHCI-based drives to scale past 32, though—that's the maximum depth of their native command queues.
For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.
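As a point of reference, the mixed database pattern can also be sketched as a fio job. Again, fio and its settings here are our assumptions standing in for the review's IOMeter configuration; fio's `rwmixread` option expresses the 66/33 read-write split, and the queue-depth sweep would be driven externally by rerunning the job at each depth.

```ini
; Hypothetical fio job approximating the database access pattern.
; Run once per queue depth (iodepth=1, 2, 4, ... 128) to build the scaling curve.
[database-mix]
filename=/dev/nvme0n1
ioengine=libaio
direct=1
rw=randrw
rwmixread=66
bs=4k
iodepth=1
time_based=1
runtime=60
```

The review also preconditions each drive with 30 minutes of random writes before this test, which a script would need to reproduce for comparable numbers.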
It's been some time since we've seen such straightforward curves. The Hellfire scales smoothly from QD1 all the way to QD128. The rate of increase certainly slows down, but at no point does it flatline or regress. Let's look at some other NVMe drives for context.
The Hellfire looks way better here than either the RD400 or Samsung's 950 Pro, both of which tapered off around QD8. Phison's E7 continues to impress us.
Now it's time to set IOMeter aside and see how the Hellfire fares with real-world workloads.