Crucial’s m4 solid-state drive

Back in the early days of SSDs, the flash controller was king. Marvell’s first effort on that front was a bit of a disappointment. That controller, code-named Da Vinci, crucially lacked TRIM support at a time when it was widely offered by the competition. Forced to contend with flash-based storage’s pesky block-rewrite penalty, the one Da Vinci-based drive we reviewed was much slower than its rivals and inexplicably expensive given its sluggish performance.

Our faith in Marvell’s controller chops was restored last summer when Crucial’s RealSSD C300 burst onto the scene sporting shiny new silicon. With a little help from some blazing-fast flash memory and a 6Gbps Serial ATA interface, the controller otherwise known as Van Gogh set a new standard for all-around SSD performance. Marvell’s second-generation design was further validated when Intel drafted the chip to anchor its latest high-end SSD, the 510 Series.

The Intel 510 Series is a perfect example of how other factors play a role in defining SSD performance today. Despite pairing the same controller with flash chips similar to the C300’s, the Intel 510 Series boasts much higher sequential throughput at the expense of performance with highly random I/O loads. The two drives present a very different set of trade-offs, and Crucial is about to add one more twist with a successor to the C300 based on the very same flash controller, this time paired with next-generation NAND built on a 25-nm fabrication process.

If you’re a big system builder like Dell or HP, this C300 sequel will be known as the Micron RealSSD C400. For regular folks, Micron will sell the drive through its consumer brand as the Crucial m4. The lower-case “m” is presumably there to avoid confusion with the M4 carbine—the last thing Crucial wants is you picturing the drive mowing down its competition with a rapid-fire hail of IOps. Or something.

The RealSSD C400 and m4 are the same hardware with different names. Crucial tells us both will receive identical firmware updates down the line, so they’re pretty much interchangeable. Amusingly, our review sample came in a box labeled Crucial m4, but the drive itself bears the RealSSD C400 name.

As we’ve noted, the m4 uses Marvell’s second-generation SSD controller. The chip you see above is a slightly newer variant than the one found inside the C300. However, Crucial says this new revision doesn’t bring any major changes. That mirrors what Marvell told us about the BKK2 version of the chip in Intel’s 510 Series, which it said was “comparable” to the original Van Gogh silicon used in the C300.

Since it’s been around for more than nine months now, the 88SS9174’s 6Gbps SATA support is old hat. The m4 looks set to make better use of the additional bandwidth than its predecessor, though. While the RealSSD C300 carries sequential read and write ratings of 355 and 215MB/s, respectively, the m4’s spec sheet advertises 415MB/s reads and write speeds as high as 260MB/s. Those sequential numbers don’t quite match the Intel 510 Series’ 500MB/s reads and 315MB/s writes, but the Intel drive has been optimized specifically with sequential throughput in mind.

Crucial has gone with a more balanced approach for the m4, and it shows when we look at the drive’s IOps ratings. With random 4KB reads, the m4 is capable of crunching 40,000 IOps. Switch to writes, and that number increases to 50,000 IOps. The Intel 510 Series is limited to just 20,000 random-read IOps and only 8,000 random writes.

Flash controller Marvell 88SS9174-BLD2
Interface 6Gbps
Flash type Micron 25-nm MLC NAND
Available capacities 64, 128, 256, 512GB
Cache size 256MB
Sequential reads 415MB/s
Sequential writes 95MB/s (64GB), 175MB/s (128GB), 260MB/s (256 and 512GB)
Random 4KB reads 40,000 IOps
Random 4KB writes 20,000 IOps (64GB), 35,000 IOps (128GB), 50,000 IOps (256 and 512GB)
Warranty length Three years

Like just about every other SSD, the m4’s performance ratings fall with the drive’s total capacity. Reads aren’t affected, but writes take a hit when you drop down to the 128 and 64GB models. This isn’t some insidious attempt at artificial segmentation—because lower-capacity drives employ fewer memory chips, they can’t exploit all of the parallelism built into the controller.

Speaking of parallelism, the Marvell chip has eight memory channels compatible with second-generation ONFI NAND. The 256GB m4 we’ll be looking at today spreads its flash over 16 chips split evenly between both sides of the drive’s circuit board.

Micron builds these NAND chips using a cutting-edge 25-nm fabrication process, creating a clear distinction between the m4 and both the RealSSD C300 and the Intel 510 Series, which use 34-nano flash. This distinction is important because the move to finer fabrication techniques is a key step in reducing SSD prices—the more dies you can cram onto a wafer, the cheaper your per-gigabyte cost, at least in theory.
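The economics of the shrink are easy to sketch. Here’s a minimal back-of-the-envelope calculation showing why smaller dies mean cheaper gigabytes; the wafer and die sizes below are assumptions for illustration only (real figures, yields, and edge losses aren’t public), but the scaling is the point:

```python
# Hypothetical illustration of why a die shrink cuts per-gigabyte cost.
# The wafer and die areas below are assumed numbers, not Micron's real figures.

def dies_per_wafer(wafer_area_mm2, die_area_mm2):
    """Ignore edge losses and defects: die count scales inversely with die area."""
    return wafer_area_mm2 // die_area_mm2

WAFER_AREA = 70_000              # ~300-mm wafer, in mm^2 (rough)
DIE_34NM = 172                   # assumed die area at 34 nm, in mm^2
# Linear dimensions shrink by 25/34, so area shrinks by (25/34)^2.
DIE_25NM = DIE_34NM * (25 / 34) ** 2

old = dies_per_wafer(WAFER_AREA, DIE_34NM)
new = dies_per_wafer(WAFER_AREA, DIE_25NM)
print(f"34-nm dies per wafer: {old:.0f}")
print(f"25-nm dies per wafer: {new:.0f} (~{new / old:.1f}x more)")
```

A perfect shrink yields roughly (34/25)² ≈ 1.85 times as many dies per wafer, which is where the “at least in theory” cost advantage comes from.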

25-nm NAND brings its own challenges, specifically in the realm of endurance. Flash chips can only withstand a limited number of write-erase cycles, and everything we’ve seen points to that figure being lower with 2x-nm NAND than it is with the 3x-nm stuff. The Micron 29F128G08CFAAB flash chips found on the m4 appear to be identical to the ones OCZ is using in its Vertex 3 SSD. According to OCZ, the chips can endure 3,000 write-erase cycles, which is less than the 5,000 cycles typical of 34-nano flash.

Crucial wouldn’t confirm the write-erase limit of the m4’s flash chips, but it does publish endurance specifications for the drive as a whole. According to the company, the m4 can write 72 terabytes of data over its lifetime. Amortize that over a five-year span, and you’re looking at 40GB per day. Which is a lot. 72TB is also the same Total Bytes Written (TBW) rating that Crucial slaps on the C300. All flavors of the old RealSSD share this rating, but 64GB variants of the m4 do not. The new drive’s smallest capacity point is limited to 36TB of writes, which still works out to 20GB a day for five years.
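Those endurance figures are simple to verify. This quick sketch amortizes the drives’ Total Bytes Written ratings over the five-year span quoted above:

```python
# Amortize Crucial's TBW endurance ratings over a five-year lifespan.
DAYS = 5 * 365  # 1,825 days

def gb_per_day(tbw_terabytes):
    """Convert a Total Bytes Written rating into a daily write budget."""
    return tbw_terabytes * 1000 / DAYS  # decimal terabytes, as drive makers use

print(f"72TB TBW: {gb_per_day(72):.1f} GB/day")  # ~39.5 GB/day
print(f"36TB TBW: {gb_per_day(36):.1f} GB/day")  # ~19.7 GB/day
```

That works out to just under 40 and 20GB a day, respectively, matching the rounded figures above.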

The little chip you see over to the right in the picture above is the m4’s 256MB DRAM cache. Intel’s 510 Series gets by with only 128MB, but Crucial used 256MB on the C300, so the larger cache isn’t unexpected. Neither is the drive’s 7% overprovisioning, which is common for consumer-grade SSDs. Of the drive’s 256GB of flash capacity, 238GB is available to end users. The rest is dedicated to “spare area” used as temporary storage by the controller.
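The overprovisioning math checks out, by the way. A one-liner using the capacities quoted above confirms the roughly 7% figure:

```python
# Spare area on the 256GB m4: raw flash capacity minus user-visible capacity.
raw_gb, usable_gb = 256, 238
spare_fraction = (raw_gb - usable_gb) / raw_gb
print(f"Overprovisioning: {spare_fraction:.1%}")  # 7.0%
```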

Our testing methods

Before dipping into our benchmark results, let’s take a quick look at the mix of rivals we’ve put together to face the m4, and the methods we use to test storage devices here at TR. We include these details to help you better understand and replicate our results, but if you’re already familiar with our approach to storage testing, feel free to skip ahead to the benchmarks. We won’t be offended.

Today, the m4 will face off against a collection of solid-state drives based on a handful of different controllers. Note that only half of the SSDs have 6Gbps SATA interfaces. We’re using a Sandy Bridge motherboard with 6Gbps SATA connectivity, so those drives have a distinct advantage over the others. The Agility 2 also enjoys something of an edge thanks to its 28% overprovisioning, four times what’s typical for consumer-grade SSDs. We’ve found SandForce-based SSDs tend to run slower when they set aside a more traditional 7-8% of their flash capacity as spare area.

  Flash controller Interface speed Cache size Total capacity
Corsair Nova V128 Indilinx Barefoot ECO 3Gbps 64MB 128GB
Crucial RealSSD C300 Marvell 88SS9174-BJP2 6Gbps 256MB 256GB
Crucial m4 Marvell 88SS9174-BLD2 6Gbps 256MB 256GB
Intel X25-M G2 Intel PC29AS21BA0 3Gbps 32MB 160GB
Intel 510 Series Marvell 88SS9174-BKK2 6Gbps 128MB 250GB
OCZ Agility 2 SandForce SF-1200 3Gbps NA 100GB
OCZ Vertex 3 SandForce SF-2281 6Gbps NA 240GB
Samsung Spinpoint F3 NA 3Gbps 32MB 1TB

We’ve updated all the drives to their latest and greatest firmware revisions with the exception of the Nova. This Indilinx-based drive debuted well into the controller’s life, so the initial release should have all of the kinks ironed out. Corsair tells us there are no firmware updates for the Nova.

You’ll notice that we’ve also included a traditional hard drive this time around. The Spinpoint F3 1TB is our favorite 7,200-RPM desktop drive at the moment, and it’ll give us a sense of how the m4 and other SSDs compare to the performance of contemporary mechanical storage.

We’re in the midst of overhauling our storage test systems here at TR, a plan that was stalled briefly by Intel’s Sandy Bridge chipset bug. The new suite of tests is coming soon, and it should be worth the wait. In the interim, we’ve whipped up an abbreviated version with a handful of new and old tests that cover the basics.

The block-rewrite penalty inherent to flash memory, the TRIM command designed to offset it, and the last workload an SSD tackled can all impact drive performance, so we’ll provide a little more detail on exactly how we test SSDs. Before testing, each drive is returned to a factory-fresh state with a secure erase. Next, we fire up HD Tune and run a series of read and write tests covering transfer rates and random access times. HD Tune is designed to run on unpartitioned drives, so TRIM won’t be a factor. The command requires a file system to be in place.

After HD Tune, we partition the drives and fire up a series of IOMeter workloads using the latest version of that app. When running on a partitioned drive, IOMeter first fills it with a single file, firmly putting SSDs into a used state in which all of their flash pages have been occupied. We delete that file before moving on to our used-state file copy tests, after which we tackle disk-intensive multitasking. Our multitasking benchmark requires an unpartitioned drive; like HD Tune, it shouldn’t be affected by TRIM.

With our multitasking tests completed, we secure-erase the drives once more and launch a final instance of our scripted file copy test. This procedure should ensure that each SSD is tested on an even playing field—and in best- and worst-case performance scenarios.

We run all our tests at least three times and report the median of the results. We’ve found that IOMeter performance can fall off after the first couple of runs, so we use five in total and throw out the first two. Each drive’s performance over the last three runs has been pretty consistent thus far. We’ve also seen remarkable consistency with our new FileBench copy test, which we’re currently running five times while we tune the scripting. We used the following system configuration for testing:

Processor Intel Core i5-2500K 3.3GHz
Motherboard Asus P8P67 PRO
BIOS revision 1305
Platform hub Intel P67 Express
Platform drivers INF update 9.2.0.1025, RST 10.1.0.1008
Memory size 8GB (2 DIMMs)
Memory type Corsair Vengeance DDR3 SDRAM at 1333MHz
Memory timings 9-9-9-24-1T
Audio Realtek ALC892 with 2.58 drivers
Graphics Gigabyte Radeon HD 4850 1GB with Catalyst 11.2 drivers
Hard drives Corsair Nova V128 128GB with 1.0 firmware
Intel X25-M G2 160GB with 02M3 firmware
Intel 510 Series 250GB with PWG2 firmware
OCZ Agility 2 100GB with 1.29 firmware
Crucial RealSSD C300 256GB with 0006 firmware
OCZ Vertex 3 240GB with 1.11 firmware
Samsung Spinpoint F3 1TB
Crucial m4 256GB with 0001 firmware
Power supply OCZ Z-Series 550W
OS Windows 7 Ultimate x64

Thanks to Asus for providing the system’s motherboard, Gigabyte for the graphics card, Intel for the CPU, Corsair for the memory, OCZ for the PSU, and Western Digital for the Caviar Black 1TB system drive.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune — Transfer rates

HD Tune lets us look at transfer rates in a couple of different ways. We use the benchmark’s “full test” setting, which tracks performance across the entire drive and lets us create the fancy line graphs you see below. This test was run with its default 64KB block size.

As you can see, we’ve painted our results in a rainbow of colors to make the graphs easier to interpret. In the bar graphs, drives are colored by manufacturer, with the m4 highlighted in bright red. The line graphs follow a similar color scheme with some additional shades to cover the multiple Intel and OCZ drives.

The m4 gets off to a good start by besting the 510 Series and nearly catching the Vertex 3 in HD Tune’s read speed test. Sequential read performance is much improved over the old C300, which is nearly 70MB/s slower. Even the C300 is several times faster than our lone mechanical hard drive. Modern SSDs are in a whole other ballpark when it comes to performance.

Things get a little messy with writes, which cause transfer rates to oscillate wildly with some of the drives. The m4 is one of the erratic examples, although it’s arguably not the worst offender. Overall, the drive’s sequential write average is only 24MB/s faster than the C300’s. Intel and OCZ occupy the top three spots, leaving Crucial off the podium.

As you can see, our mechanical hard drive is much more competitive with sequential writes than it is with reads. The m4 still offers twice the throughput of the Spinpoint, though.

HD Tune’s burst speed tests are meant to isolate a drive’s cache memory.

As long as you’re reading from it, the m4’s cache is very fast. Only the Vertex 3 scores higher in this test, although that particular drive is a cacheless design that’s probably bursting from buffers built into the controller.

Switch to writes, and the m4 tumbles down the standings. Forget competing with the Intel 510 Series and Vertex 3, the m4 isn’t even close to keeping up with the C300. Or the Spinpoint.

HD Tune — Random access times

In addition to letting us test transfer rates, HD Tune can measure random access times. We’ve tested with four transfer sizes and presented all the results in a couple of line graphs. We’ve also busted out the 4KB and 1MB transfer sizes into bar graphs that should be easier to read.

The line graph nicely illustrates why folks tend to describe SSDs as more responsive than mechanical hard drives. The Spinpoint’s random access times are a couple of orders of magnitude higher than those of the SSDs, at least until we get up to the 1MB transfer size. Even then, the solid-state drives are still way faster.

Compared to the other SSDs, the m4 posts the fastest access times with the 4KB and 1MB transfer sizes, or at least ties for them. The drive’s strong performance extends to the 512-byte and 64KB transfer sizes, too.

Moving to random writes doesn’t change the line graph much. However, the m4 again shows signs of weakness with writes. With the 4KB transfer size, it’s several times slower than the 510 Series and Vertex 3. Those drives also lead the m4 with the 1MB transfer size.

To be fair, the m4 is faster than its C300 predecessor across all transfer sizes. And the Spinpoint? Fuggetaboutit.

TR FileBench — Real-world copy speeds

Our resident developer, Bruno “morphine” Ferreira, has been hard at work on a new file copy benchmark for our storage reviews. FileBench is the result of his efforts. This shining example of scripting awesomeness runs through a series of file copy operations using Windows 7’s xcopy command. Using xcopy produces nearly identical copy speeds to dragging and dropping files using the Windows GUI, so our results should be representative of real-world performance.

To reduce the number of external variables, FileBench runs entirely on the drive that’s being tested. Files are copied from source folders to temporary targets that aren’t deleted until all testing is complete. Copy speeds were tested with the SSDs fresh from a secure erase and in a tortured used state after more than half a day’s worth of IOMeter thrashing.

To gauge performance with different kinds of files, we tested with four sets. The movie set includes six video files of the sort one might download off BitTorrent. Total payload: 4.1GB. Our MP3 file set uses a chunk of my music archive, which is made up of high-bitrate MP3s and associated album art. This one has 549 files that add up to 3.47GB. The Mozilla file set includes the huge selection of files necessary to compile Firefox. All told, there are 22,696 files spread across only 923MB. Finally, we have the TR file set, which contains several years’ worth of the images, HTML files, and spreadsheets behind my reviews. This set has the largest number of files at 26,767, and it’s heftier than the Mozilla set with 1.7GB worth of data.

The good news is the m4 seems to reclaim trimmed flash pages as aggressively as the C300—that’s why there isn’t much of a drop in copy speed between the drive’s fresh and used states. Also, Crucial’s latest drive copies the large files that make up our movie and MP3 file sets more quickly than its RealSSD forebear. The speed boost isn’t enough to catch the Intel 510 Series, but it puts the m4 solidly in second place if you’re looking at used-state performance, which we think is more important than how a drive behaves fresh from a secure erase.

Now, the bad news. With the Mozilla and TR file sets, which have a huge number of small files, the m4 slows way down. The OCZ drives and Intel 510 Series are all faster with those file sets, as is the C300. At least the m4’s gains over the C300 with large files are bigger than its losses to the RealSSD with smaller files.

As usual, the Spinpoint provides some helpful perspective. With larger files, it’s way behind most of the SSDs. However, the additional overhead associated with transferring a large number of small files allows our mechanical representative to narrow the gap with the Mozilla and TR file sets.

TR DriveBench — Disk-intensive multitasking

TR DriveBench allows us to record the individual IO requests associated with a Windows session and then play those results back on different drives. We’ve used this app to create a set of multitasking workloads that combine common desktop tasks with disk-intensive background operations like compiling code, copying files, downloading via BitTorrent, transcoding video, and scanning for viruses. You can read more about these workloads and desktop tasks on this page of our SSD value round-up.

A new version of DriveBench complete with updated traces is in the works. This old suite of workloads still has some life left in it, though.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score with each multitasking workload.

Fourth place is the best the m4 can muster in DriveBench overall. The Intel 510 Series and the old C300 both score higher, while the Vertex 3 beats ’em all.

Let’s break down the overall average into individual test results to see if the m4 stands out anywhere in particular.

Given our FileBench results from the previous page, we shouldn’t be surprised to see the m4 nudging ahead of the C300 in the file copy workload. Third is as good as it gets for the m4 in our individual DriveBench tests, which I suppose is acceptable given the performance of the X25-M and Nova V128. Those are last-generation SSDs, though. The m4 should be setting its sights higher.

Once again, I should point out the substantially slower performance of our lone mechanical hard drive. Disk-intensive multitasking rewards quick access times, and we’ve already seen the huge advantage SSDs enjoy in that regard.

As a control, we also recorded a trace of our foreground tasks while nothing was going on in the background.

The multitasking built into our other workloads isn’t responsible for the m4’s average showing in DriveBench; otherwise, its score in this control test would be higher.

DriveBench lets us start recording Windows sessions from the moment the storage driver loads during the boot process. We can use this capability to gauge boot performance, this time with TweetDeck, Pidgin, AVG, Word, Excel, Acrobat, and Photoshop loading from the Windows startup folder.

Don’t get too excited about the m4 jumping up into third place here. The startup trace takes about 10 seconds to run on the SSDs, and the gaps between the top four only amount to one second each.

IOMeter

Our IOMeter workloads are made up of randomized access patterns, making them perfect candidates to exploit the wicked-fast access times of solid-state storage. The app bombards drives with an escalating number of concurrent IO requests and should do a good job of simulating the demanding environments common in enterprise applications. We tested using the “pseudo random” data pattern, which is IOMeter’s old default and somewhat amenable to the compression mojo built into SandForce controllers. Additional testing with the “full random” data pattern revealed only a minor drop in the Agility 2’s performance, so we’re sticking with pseudo random for now.

Over the last few years, we’ve watched new storage controller drivers (including the Intel RST drivers used in this review) effectively cap IOMeter performance scaling beyond 32 outstanding I/O requests. The Serial ATA spec’s Native Command Queue is 32 slots deep, and more than one drive maker has told us that this queue is rarely full. As a result, we’re only testing up to 32 concurrent I/O requests.

With the exception of a handful of data points, the m4 offers lower IOMeter transaction rates than its predecessor, the RealSSD C300. The difference between the two drives is most apparent in the web server access pattern, which is made up entirely of read requests. More troubling is the fact that the m4 tends to fall farther behind the C300 as the load increases. This trend persists across all four access patterns.

If you just compare it to the Corsair and Intel SSDs, the m4 doesn’t look too bad overall. All of those drives positively cream the Spinpoint, which is multiple orders of magnitude out of contention.

Conclusions

This conclusion would be a lot easier to write if I knew what the m4 is going to cost when it becomes available to the public on April 26. Crucial is holding the pricing close to its chest, and that may ultimately be what makes or breaks the drive’s appeal.

If you were hoping for a big improvement over the C300, you’re out of luck. The m4 offers higher sustained throughput in targeted tests and when copying large files, but it’s a little less responsive under multitasking and multi-user loads. There’s only so much performance headroom in the Marvell controller shared by the two drives, I suppose. With the m4, Crucial has shifted the bias slightly in favor of sequential workloads.

The end result is a drive that can’t lay claim to the overall performance crown. Heck, the m4 can’t even claim to be faster than its predecessor. The C300 is pretty quick, though, and so is the m4 if you compare its performance to older SSDs and mechanical hard drives. If the m4’s price reflects its middle-of-the-pack performance, I’d have no qualms about recommending it.

Crucial’s use of 25-nano flash could help make that a reality, but we won’t know until the price becomes public in April. OCZ tells us the 120 and 240GB flavors of the Vertex 3 could hit the market in a couple of weeks at $250 and $500, respectively. Those prices are in line with the cost of the RealSSD C300 and a lot cheaper than the going rate for the Intel 510 Series. The m4 will have to come in cheaper than all of them to be a truly compelling alternative.

Comments closed
    • hapyman
    • 8 years ago

    Any chance we can get at least one set of mechanical HD in RAID 0 for comparison sake? Perhaps a set of 640GB WDs or the Samsung 1TB F3. Hopefully I am not alone on wanting to see this.

      • Pettytheft
      • 8 years ago

      Problem is reliability of Raid 0. Most people use these as system drives and Raid 0 is gambling.

    • nsx241
    • 8 years ago

    OEM versions are already available for purchase at Superbiiz.

    [url<]http://www.superbiiz.com/query.php?categry=0&s=micron+c400&x=0&y=0[/url<]

    • codedivine
    • 8 years ago

    Any word on power consumption of these drives compared to other SSDs and 2.5” HDDs?

      • Da_Boss
      • 8 years ago

      I second this. I’d love to see some power consumption results, as I’m sure other notebook users would as well. Maybe even throw a notebook drive in there as a reference?

    • Thresher
    • 8 years ago

    These things are still irrelevant to the largest number of consumers and only slightly less so to “enthusiasts”. The prices are still much too high for widespread acceptance.

    I suspect this will happen eventually, but we haven’t seen prices coming down in the past year as fast as I would have thought they would.

    • Vasilyfav
    • 8 years ago

    At 25nm flash, I damn well better be getting a better price than $2+/GB to undercut OCZ, otherwise, I really don’t see much of a point.

    • jpostel
    • 8 years ago

    Great review again Geoff. Please consider adding “dirty drive” numbers to the TR tests. I would love to see comparison of the drives after simulating several months of use.

    Thanks.

    • OneArmedScissor
    • 8 years ago

    Where’d the power measurements go? 🙁

      • Firestarter
      • 8 years ago

      I guess they’ll have to wait for the epic SSD roundup that TR is working hard on, right? RIGHT?

      😛

    • Da_Boss
    • 8 years ago

    Kudos to TR for really getting into these SSD reviews lately. It’s good to finally see the landscape for the latest generation of SSDs panning out.

    All things considered, it looks like SSD makers really are calling it a day on random 4k performance in favor of raw sequential throughput. My guess is that, after SSDs got way more than good enough in the random performance arena for desktop users, SSD makers all saw that sequential performance is the last place where they can make improvements that are actually tangible to the end user.

    If you took a C300 and C400, and tested them to see who boots and loads apps faster, my guess is that, despite the C300’s better random performance, there would be very little variance. But get them copying large amounts of data, and their differences start to show themselves.

    Only time will tell if that design decision was worth it.

    • odizzido
    • 8 years ago

    It’s an SSD review frenzy. I was surprised when I saw the second SSD article up but a third is…..more….surprising.

    • Stargazer
    • 8 years ago

    Am I missing something, or are there no Random Read benchmarks?
    Isn’t there a law or something that says that all SSD reviews need to have one of those?

    As with the Intel 510 review, I’d much prefer seeing the 128GB version. If you have any influence over what kind of drives you are sent, could you please try to get lower capacity ones in the future? I’m pretty sure that most of your readers would be more likely to get those.

      • Damage
      • 8 years ago

      You are missing something!

        • Stargazer
        • 8 years ago

        Ok, I did notice that the review has benchmarks for Random Read *Access Times*, but what I meant was actually Random Read transfer rates (I noticed that I wasn’t 100% clear about this, and considered going back and editing my post with a clarification, but figured I’d see if anyone pointed that out first. 🙂 ).

        Now, with appropriate definitions you can get the transfer rates by dividing the transfer size with the access time, but if you do that with your results, they don’t seem to track with what is generally expected. For example, in your 4k Random Read Access Time test, you show the X25-M as having an access time of 0.04ms, and the C300 0.11ms. That would mean that the X25-M would have a RRTR that is almost 3 times faster than that of the C300. This is not I’ve been seeing in reviews on other sites, where the C300 is shown as having *faster* 4k RRTR than the X25-M. It most definitely hasn’t been 3 times slower.

        So, what’s up with that? Different queue depths could have some impact I suppose, but it’d seem surprising if it made that much of a difference.

        Or am I missing something else? 🙂

          • Firestarter
          • 8 years ago

          At QD=1 and with a access time of 0.11ms, the drive would only receive a request every 0.11ms. At higher queue depths, the average access time of the drive that achieves higher transfer rates would be lower than the drive with lower transfer rates, but AFAIK it can never be lower than the access time at QD=1.

          Consider for example that you would need 2 random 4K blocks, where the 2nd block is not dependant on the first block. If you fire these requests at the same time (so that QD=2) then the C300 might be able to deliver both blocks after about 0.12ms or so (just some random number > 0.11ms). But it would never be able to deliver either of these blocks before at least 0.11ms has passed.

          Now if you have a drive that has a minimum access time of 0.04ms and do the same, it could be that the first arrives after 0.04ms and the second after 0.08ms. However, with that drive (the X25-M for example), you could request the first block, receive it, process it somehow (~0.02ms) and then request the second block based on that information, and the second block might arrive less than 0.11ms after the first request was made.

          So, higher transfer rates at QD > 1 might look cool and be pretty important when multitasking or processing a lot of user requests (databases for example), but the access time for a single request at QD=1 is also very important as it may determine the maximum speed at which a single task is completed. As for testing at QD=32, well I think that’s nifty but completely useless for anyone that isn’t running a datacenter/supercomputer.

          edit: just theorizing here, feel free to make swiss cheese out of my post if you can 🙂

    • Bauxite
    • 8 years ago

    I love the latency graphs.

    Too many people obsessed with “bandwidth, bandwidth, bandwidth, must have more bandwidth” and hard drives are not the only things where people are blinded by bandwidth. (hello, internet connections, I’m talkin’ to you)

    As many gamers can tell you, latency is king. Hail to the SSD kings.

      • Firestarter
      • 8 years ago

      the factor 10 difference in latency for a random 1MB read between a recent SSD and that harddrive is a real eye-opener

    • jwilliams
    • 8 years ago

    It is odd how the HD Tune burst transfer rate for writes is only 174 MB/s for the m4, while the minimum recorded HD Tune write speed is 202 MB/s. That is bizarre, how can burst be lower than minimum normal write speed?

    Is that result repeatable?

    Does anyone know EXACTLY how HD Tune does the burst measurement? How does it bypass any computer RAM cache, how much it writes, in what cluster size, and how it times it?

      • mboza
      • 8 years ago

      I thought it was odd too. If it is true, then it suggests the small writes have some sort of latency effect that is dragging down the average.

      I wonder how the sustained rate test results change with block size, and if the burst test is using a different block size?

      • OneArmedScissor
      • 8 years ago

      Those test programs were made before SSDs came about and have never been realistic representations of them. I have no idea why everyone continues to use them.

      Don’t read into it. It’s just yet another example of why synthetic benchmarks are pretty much stupid and misleading.

    • Corrado
    • 8 years ago

    Makes me feel better about the C300 drive I bought 2 weeks ago, thats for sure.

      • potatochobit
      • 8 years ago

      if the C300 is as good as geoff says, makes me wonder why I never bought one
      must be price per GB

    • potatochobit
    • 8 years ago

    doesnt using the 256 drives kind of pad the stats a little?
    random access times need to be given more credit

    • Firestarter
    • 8 years ago

    If these drives end up $20 cheaper than the Intel equivalent, I guess they’ll sell pretty well. Given that the controller is a known entity and the flash should be cheaper to produce, it could very well be a bit cheaper.

      • Chrispy_
      • 8 years ago

      These drives need to be 20% cheaper than the *old* sandforce drives.

      What we are seeing here is an old controller that predates the SF1500 series, tweaked to run with cheaper flash that has a known shorter lifespan.

      Put a pretty ribbon and a shiny new sticker on it all you want, this is old tech designed to save manufacturing costs first and foremost.

        • Firestarter
        • 8 years ago

        Still I have more confidence in Microns and Intels products than I have in products from OCZ/Sandforce. Color me conservative, but that trust is worth more than a few dollars here and there.

        • green
        • 8 years ago

        [quote<]... this is old tech designed to save manufacturing costs first and foremost.[/quote<] *tock*

    • Meadows
    • 8 years ago

    This is bad.

      • 5150
      • 8 years ago

      Quit bitching, Yawn has been hard at work trying to make these cheaper for you.

        • Yawn!
        • 8 years ago

        We’re glad someone appreciates Us.

        • Meadows
        • 8 years ago

        No, I mean, this is bad. We don’t even know the price, but it’s going to have to be very, very good to justify a performance that relatively sucks. Most of the time the drive does worse than its predecessor.
