
IOMeter — Sequential and random performance
IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. (87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less.) Clicking the buttons below the graphs switches between the different queue depths.
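Queue depth is simply the number of requests kept in flight at once. The toy Python model below (our own sketch, not IOMeter: thread workers stand in for a drive's ability to overlap requests, and the 1 ms service time is made up) shows why a QD4 workload can finish the same amount of work faster than QD1:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def serve_request(latency_s):
    """Simulate one I/O request with a fixed service time."""
    time.sleep(latency_s)
    return latency_s

def run_at_queue_depth(qd, n_requests=64, latency_s=0.001):
    """Keep `qd` requests outstanding at once; return elapsed wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        list(pool.map(serve_request, [latency_s] * n_requests))
    return time.perf_counter() - start

qd1 = run_at_queue_depth(1)   # one request outstanding at a time
qd4 = run_at_queue_depth(4)   # four requests overlapped
# If the "drive" can service requests in parallel, QD4 completes
# the batch in roughly a quarter of the time QD1 needs.
```

Real SSDs don't scale quite this cleanly, but the mechanism is the same: deeper queues let the controller keep more flash channels busy.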

Our sequential tests use a relatively large 128KB block size.
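For a rough feel for what a sequential test does, here's a minimal Python sketch that reads a file front to back in 128KB chunks and reports throughput. This is not IOMeter: it goes through the filesystem and page cache (so cached data will read far faster than the drive itself), and the file path is invented for the demo:

```python
import os
import time

BLOCK_SIZE = 128 * 1024  # 128KB blocks, matching the sequential test

def sequential_read_mbps(path):
    """Read a file front to back in 128KB chunks; return MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Demo against a throwaway 8MB scratch file (hypothetical path):
path = "seq_scratch.bin"
with open(path, "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))
print(f"{sequential_read_mbps(path):.1f} MB/s")
os.remove(path)
```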


The Arc 100 is still looking pretty good here. The other two drives provide sequential speeds more in line with what we'd expect from low-end SSDs.

Next, we'll turn our attention to performance with 4KB random I/O. We've reported average response times rather than raw throughput, which we think makes sense in the context of system responsiveness.
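The response-time framing can be sketched in a few lines: instead of totaling bytes per second, time each individual 4KB request and average the results. Again, this is an illustrative stand-in for IOMeter, run through the filesystem with a made-up scratch file:

```python
import os
import random
import statistics
import time

BLOCK = 4096  # 4KB requests, matching the random I/O test

def mean_random_read_ms(path, n=256):
    """Issue n random aligned 4KB reads; return mean response time in ms."""
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb") as f:
        for _ in range(n):
            # Pick a random block-aligned offset within the file.
            offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
            start = time.perf_counter()
            f.seek(offset)
            f.read(BLOCK)
            latencies.append((time.perf_counter() - start) * 1000)
    return statistics.mean(latencies)

# Demo against a 4MB scratch file (hypothetical path):
path = "rand_scratch.bin"
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))
print(f"mean response: {mean_random_read_ms(path):.3f} ms")
os.remove(path)
```

A per-request average like this maps more directly onto perceived snappiness than a raw IOPS figure does, which is why we report it that way.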


The HyperX Fury fares poorly here, reporting a random write response time of over six milliseconds during QD4 testing, compared to the sub-millisecond times of the others. Most likely, it's that pesky NAND configuration bottlenecking the controller.

The preceding tests are based on the median of three consecutive three-minute runs. SSDs typically deliver consistent sequential and random read performance over that period, but random write speeds worsen as the drive's overprovisioned area is consumed by incoming writes. We explore that decline on the next page.
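Taking the median rather than the mean matters when one run misbehaves. With hypothetical throughput figures for three consecutive runs, a single degraded result is discarded instead of dragging down an average:

```python
import statistics

# Three consecutive run results (hypothetical MB/s figures);
# the third run hit a write-speed drop.
runs = [480.2, 478.9, 310.4]

print(statistics.median(runs))  # → 478.9 (outlier run ignored)
print(round(statistics.mean(runs), 1))  # → 423.2 (outlier drags the mean down)
```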