IOMeter — Sequential and random performance
IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.

Our sequential tests use a relatively large 128KB block size.
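
IOMeter talks to the raw device, but for a back-of-the-envelope feel for what a QD1 sequential pass does, here's a minimal Python sketch that runs the same pattern against an ordinary file. The path and file size are placeholders invented for the example, and because the read pass is served from the OS page cache, its figure will be wildly optimistic next to IOMeter's:

```python
import os
import time

BLOCK = 128 * 1024           # 128KB transfers, matching our sequential tests
SIZE = 256 * 1024 * 1024     # 256MB test file (placeholder size for the sketch)
PATH = "seqtest.bin"         # placeholder path; IOMeter targets the drive itself

# Sequential writes at queue depth 1: one request in flight at a time,
# each issued as soon as the previous one completes.
buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # force the data to the drive before stopping the clock
write_secs = time.perf_counter() - start

# Read it back sequentially with the same block size. The page cache still
# holds the file, so this figure measures RAM more than flash.
start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
read_secs = time.perf_counter() - start

os.remove(PATH)
print(f"sequential write: {SIZE / write_secs / 1e6:.0f} MB/s, "
      f"sequential read: {SIZE / read_secs / 1e6:.0f} MB/s (QD1)")
```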



The EVO 2TB posts the fastest sequential write numbers we've seen yet in our (admittedly nascent) data set. Reads are peppy too, keeping pace with the Ultra II 960GB, which used to pass as a large SSD.

Next, we'll turn our attention to performance with 4KB random I/O. The tests below are based on the median of three consecutive three-minute runs. SSDs typically deliver consistent sequential and random read performance over that period, but random write speeds worsen as the drive's overprovisioned area is consumed by incoming writes. We've reported average response times rather than raw throughput, which we think makes sense in the context of system responsiveness.
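
If you'd like to poke at this yourself, the Python sketch below approximates the random-read half of the test by timing each 4KB request individually and averaging the response times. The target file and run length are placeholders (the real runs last three minutes each), os.pread restricts it to POSIX systems, and a single in-flight request corresponds to QD1:

```python
import os
import random
import statistics
import time

BLOCK = 4096                  # 4KB transfers, matching our random I/O tests
SIZE = 128 * 1024 * 1024      # placeholder file size for the sketch
PATH = "randtest.bin"         # placeholder path; IOMeter targets the drive itself
RUN_SECS = 10                 # shortened stand-in for a three-minute run

with open(PATH, "wb") as f:   # lay down a file to read back randomly
    f.write(os.urandom(SIZE))

fd = os.open(PATH, os.O_RDONLY)
latencies = []
deadline = time.perf_counter() + RUN_SECS
while time.perf_counter() < deadline:
    offset = random.randrange(SIZE // BLOCK) * BLOCK   # 4KB-aligned random offset
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)                        # one request at a time: QD1
    latencies.append(time.perf_counter() - t0)
os.close(fd)
os.remove(PATH)

print(f"{len(latencies)} reads, average response time: "
      f"{statistics.mean(latencies) * 1000:.3f} ms")
```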



Samsung's drive delivers impressive performance in random workloads, too. The EVO's response times are usually on par with (and sometimes even a tiny bit ahead of) the Arc 100, which we've rated highly in our previous testing.

IOMeter — Sustained and scaling I/O rates
Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, which should saturate each drive's overprovisioned area fairly quickly. This lengthy—and heavy—workload isn't indicative of typical PC use, but it provides a sense of how the drives react when they're pushed to the brink.

We're reporting input and output operations per second (IOps) rather than response times for these tests. Click the buttons below the graph to switch between the results from the different SSDs.
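
Here's a rough Python analogue of that setup, with 32 writer threads standing in for a queue depth of 32 and a much shorter run. The target file, sizes, and durations are invented for the example, and buffered writes through the page cache mean the absolute IOps won't resemble a raw-device test, but the peak-versus-steady-state reduction follows the same recipe described below:

```python
import os
import random
import threading
import time

BLOCK = 4096                  # 4KB random writes, as in the sustained test
QD = 32                       # 32 worker threads stand in for a queue depth of 32
RUN_SECS = 60                 # shortened stand-in for the full 30-minute hammering
SIZE = 256 * 1024 * 1024      # placeholder file size for the sketch
PATH = "sustained.bin"        # placeholder path; IOMeter targets the drive itself

with open(PATH, "wb") as f:
    f.truncate(SIZE)

fd = os.open(PATH, os.O_WRONLY)
done = threading.Event()
lock = threading.Lock()
count = [0]

def writer():
    buf = os.urandom(BLOCK)
    while not done.is_set():
        offset = random.randrange(SIZE // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)        # one 4KB random write
        with lock:
            count[0] += 1

threads = [threading.Thread(target=writer) for _ in range(QD)]
for t in threads:
    t.start()

# Sample IOps once per second so we can pick out the peak and the tail.
per_second = []
for _ in range(RUN_SECS):
    before = count[0]
    time.sleep(1)
    per_second.append(count[0] - before)

done.set()
for t in threads:
    t.join()
os.close(fd)
os.remove(PATH)

tail = per_second[-10:]       # the last stretch stands in for the test's final minute
print(f"peak: {max(per_second)} IOps, steady state: {sum(tail) / len(tail):.0f} IOps")
```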


The 850 EVO reaches a lofty peak, but the whole point of this particular test is to expose performance degradation as IOMeter exhausts each drive's overprovisioned area. The steady-state results are what we're really after. The next graphs highlight the peak random write rate and the average, steady-state speed over the last minute of the test.

It takes some time for the 850 EVO 2TB to fully consume its overprovisioned area, but once it does, its steady-state write speed is much slower than our budget darling's. The Arc 100 won't have to abdicate its throne just yet.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don't expect AHCI-based drives to scale past 32, though—that's the maximum depth of their native command queues.

We use a database access pattern comprising 67% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. Note that the graphs for the 850 EVO and Arc 100 use a significantly larger scale than the other two.
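
As a last sketch, here's how one might approximate that queue-depth sweep in Python, again with worker threads standing in for outstanding requests and a 67/33 random read/write mix. The file path and per-step duration are invented for the example, and it skips the half-hour of preconditioning, so treat it as an illustration of the test's shape rather than a substitute for it:

```python
import os
import random
import threading
import time

BLOCK = 4096                  # 4KB transfers
SIZE = 128 * 1024 * 1024      # placeholder file size for the sketch
PATH = "scaling.bin"          # placeholder path; IOMeter targets the drive itself
STEP_SECS = 5                 # each depth runs briefly; the real test runs far longer

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

fd = os.open(PATH, os.O_RDWR)

def run_at_depth(depth):
    """Random I/O with a 67% read / 33% write mix at the given thread count."""
    done = threading.Event()
    lock = threading.Lock()
    count = [0]

    def worker():
        buf = os.urandom(BLOCK)
        while not done.is_set():
            offset = random.randrange(SIZE // BLOCK) * BLOCK
            if random.random() < 0.67:
                os.pread(fd, BLOCK, offset)    # read op
            else:
                os.pwrite(fd, buf, offset)     # write op
            with lock:
                count[0] += 1

    threads = [threading.Thread(target=worker) for _ in range(depth)]
    for t in threads:
        t.start()
    time.sleep(STEP_SECS)
    done.set()
    for t in threads:
        t.join()
    return count[0] / STEP_SECS

for qd in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"QD{qd:>3}: {run_at_depth(qd):>9.0f} IOps")

os.close(fd)
os.remove(PATH)
```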


The 850 EVO meets expectations by smoking the Fury and Ultra II, but it's still a ways off from the remarkable speed of the Arc 100. The graph below illustrates the difference side-by-side. The buttons toggle between total, read, and write IOps.