
A look at four X25-E Extreme SSDs in RAID

Intel’s X25-E Extreme SSD is far and away the fastest flash drive we’ve ever tested. Sure, it only packs 32GB of storage, and yes, you’ll pay a princely sum for the privilege. But with a smart storage controller, near-instantaneous seek times, and the ability to sustain reads at 250MB/s and writes at 170MB/s, the X25-E actually represents good value if you quantify its performance per dollar. That might not be how most folks look at value in the storage world, but for the demanding enterprise environments at which the Extreme is targeted, it’s often the most important metric.
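To make that performance-per-dollar argument concrete, here's a back-of-the-envelope sketch. Both the random-I/O figures and the street prices below are our own illustrative assumptions, not measurements or quotes from this review:

```python
# Performance per dollar in the sense that matters to enterprise buyers:
# random I/O operations per second (IOps) per dollar spent. Both the IOps
# figures and the prices here are illustrative assumptions.
def iops_per_dollar(iops, price_usd):
    return iops / price_usd

x25e = iops_per_dollar(35_000, 415)  # assumed 4KB random-read IOps / street price
raptor = iops_per_dollar(130, 300)   # assumed figures for a fast mechanical drive

print(f"X25-E:        {x25e:.1f} IOps/$")  # roughly 84
print(f"VelociRaptor: {raptor:.1f} IOps/$")  # well under 1
```

By this metric, the SSD's price premium evaporates, which is exactly the point for transaction-heavy enterprise workloads.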

While the X25-E’s dominating single-drive performance would surely satiate most folks, its target market is likely to seek out even greater throughput and higher transaction rates by combining multiple drives in RAID. The performance potential of a RAID 0 array made up of multiple Extremes is bountiful to say the least, and with the drive’s frugal power consumption and subsequently low heat output, such a configuration should cope well in densely populated rack-mount enclosures. Naturally, we had to test this potential ourselves.

Armed with a high-end RAID card and four X25-Es, we’ve set out to see just how fast a RAID 0 array can be. This is easily the most exotic storage configuration we’ve ever tested, but can it live up to our unavoidably lofty expectations? Let’s find out.

Ramping up the RAID
The software RAID solutions built into modern south bridge chips are more than adequate for most applications—my personal desktop and closet file server included—but they’re probably not the best foundations for a four-way X25-E array. Such an impressive stack of drives calls for a RAID controller with a little more swagger, so we put in a call to Adaptec, which hooked us up with one of its RAID 5405 cards.

The 5405 features a dual-core hardware RAID chip running at 1.2GHz with 256MB of DDR2 cache memory. We’ll be focusing our attention on RAID 0 today, but the card supports a whole host of other array configurations, including RAID 1, 1E, 5, 5EE, 6, 10, 50, 60, and 36DD. Ok, so maybe not the last one.

Dubbed a “Unified Serial RAID controller,” the 5405 works not only with Serial ATA drives, but also with Serial Attached SCSI hardware. The card itself doesn’t have any, er, Serial ports onboard. Instead, it has a single x4 mini-SAS connector (at the top in the picture above) and comes with a fan-out cable that splits into four standard Serial ATA data cables. If you want to use the 5405 with Serial Attached SCSI drives, you’ll need to add a SAS fan-out cable or have a compatible backplane or direct-connect SAS storage.

To ensure compatibility with cramped rack-mount enclosures, the 5405 is a low-profile card with standard and short mounting brackets included in the box. It also has a PCI Express x8 interface, making it compatible with a wide range of workstation and server motherboards, in addition to standard desktop fare. PCIe x8 slots tend to be rare on desktop boards, but fear not. We were able to get the 5405 running in our test system’s primary PCIe x16 graphics card slot without a fuss. Since it only has eight lanes of electrical connectivity, the 5405 can’t make the most of an x16 slot’s available bandwidth. However, for four ports, an aggregate 2GB/s of bi-directional bandwidth should be more than adequate—even for X25-Es.
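For reference, the arithmetic behind that bandwidth figure: first-generation PCI Express signals at 2.5Gb/s per lane with 8b/10b encoding, so eight lanes leave plenty of headroom even for four X25-Es reading flat out. A quick sketch, using the drives' rated sequential figures:

```python
# First-generation PCI Express: 2.5Gb/s per lane, 8b/10b encoded, so only
# 8 of every 10 bits on the wire carry data.
LANE_RATE_GBPS = 2.5
ENCODING_EFFICIENCY = 8 / 10
LANES = 8

per_lane_mbs = LANE_RATE_GBPS * ENCODING_EFFICIENCY * 1000 / 8  # 250 MB/s
slot_mbs = per_lane_mbs * LANES                                 # 2000 MB/s per direction

array_read_mbs = 4 * 250  # four X25-Es at their rated 250MB/s sustained reads
print(f"Slot: {slot_mbs:.0f} MB/s, array reads: {array_read_mbs} MB/s")
```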

As one might expect, the 5405 isn’t cheap; it costs $335 and up online. Adaptec does provide three years of warranty coverage, though. Drivers are also available not only for Windows, but also for OpenServer, UnixWare, Solaris, FreeBSD, VMware, and both Red Hat and SUSE Linux.

Our testing methods
In truth, we don’t have anything even remotely comparable to line up against four X25-Es strapped to a fancy hardware RAID card. So we’ve thrown a little of everything at this beastly storage configuration instead, including hardware RAM disks from Gigabyte and ACard, a collection of SSDs including the X25-E Extreme on its own, and a handful of the fastest 3.5″ desktop drives on the market.

To keep the graphs on the following pages easier to read, we’ve color-coded the bars by manufacturer. Our X25-E RAID 0 array appears in bright blue, with Intel’s X25-series SSDs appearing in a lighter hue. Note that we also have a set of RAID 0 results for the ANS-9010 RAM disk. Those results were from a virtual two-drive config running off our test system’s ICH7R south bridge RAID controller.

All tests were run three times, and their results were averaged, using the following test system.

Processor Pentium 4 Extreme Edition 3.4GHz
System bus 800MHz (200MHz quad-pumped)
Motherboard Asus P5WD2 Premium
BIOS revision 0422
North bridge Intel 955X MCH
South bridge Intel ICH7R
Chipset drivers Chipset
Memory size 1GB (2 DIMMs)
Memory type Micron DDR2 SDRAM at 533MHz
CAS latency (CL) 3
RAS to CAS delay (tRCD) 3
RAS precharge (tRP) 3
Cycle time (tRAS) 8
Audio codec ALC882D
Graphics Radeon X700 Pro 256MB with CATALYST 5.7 drivers
Hard drives Seagate Barracuda 7200.11 1TB
Seagate Barracuda ES.2 1TB
Samsung SpinPoint F1 1TB
Hitachi Deskstar E7K1000 1TB
Western Digital VelociRaptor 300GB
Western Digital Raptor WD1500ADFD 150GB
Western Digital Caviar Black 1TB
Western Digital RE3 1TB
Western Digital Caviar SE16 640GB
Seagate Barracuda 7200.11 1.5TB
Samsung FlashSSD 64GB
Intel X25-M 80GB
Intel X25-E Extreme 32GB
Gigabyte i-RAM with 4GB DDR400 SDRAM
ACard ANS-9010 with 16GB DDR2-800 SDRAM
OS Windows XP Professional
OS updates Service Pack 2

Thanks to NCIX for getting us the SpinPoint F1.

Our test system was powered by an OCZ PowerStream power supply unit.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results. You won’t find Gigabyte’s i-RAM in the graphs below because its 4GB maximum storage capacity is too limited for WorldBench to run.

WorldBench is made up of common desktop applications that aren’t typically bound by storage subsystem performance. However, it’s still a little disheartening to see our X25-E RAID config fail to make the podium. Even a single X25-E is faster than our stack of four here.

Multimedia editing and encoding

MusicMatch Jukebox

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Our X25-E RAID 0 array does reasonably well in WorldBench’s Premiere test, but scores are close through the rest of WorldBench’s multimedia editing and encoding tests. Note that the RAID setup is 13 seconds slower than a single X25-E in the Media Encoder test, though.

Image processing

Adobe Photoshop

ACDSee PowerPack

The four-drive X25-E setup takes top honors in WorldBench’s ACDSee test, but it’s only 11 seconds quicker than one of the Extremes on its own.

Multitasking and office applications

Microsoft Office


Mozilla and Windows Media Encoder

WorldBench’s office and multitasking tests appear unable to exploit faster storage configurations.

Other applications



The WinZip and Nero tests are more storage-bound than any others in the WorldBench suite, and again, there’s little difference in performance between a single X25-E Extreme and four of them in a RAID 0 array.

Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch.

Ignore this one, folks. Our RAID setup may take more than a minute longer to boot than the rest, but it’s also the only configuration that has to initialize the Adaptec RAID card, which takes its sweet time booting up.

Of course, we can’t blame the Adaptec card’s initialization time for the X25-E RAID config’s uninspired level load times. The RAID 0 array is at least within striking distance of a single X25-E in Doom 3, but it’s a few seconds back in Far Cry.

File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition. Scores are presented in MB/s.

To make things easier to read, we’ve separated our FC-Test results into individual graphs for each test pattern. We’ll tackle file creation performance first.

Now that’s more like it. Our X25-E RAID array roars to victory in all five file creation test patterns. The striped array’s performance is most dominant with the Install, ISO, and MP3 test patterns, which have smaller numbers of larger files than the Programs and Windows test patterns. We see the most impressive performance scaling with the MP3 test pattern, which runs more than 3.5 times faster with four X25-Es than it does with a single drive.
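The scaling figures quoted in this section are simply the array's throughput divided by a single drive's throughput on the same test pattern. A trivial sketch, with placeholder MB/s values rather than our measured results:

```python
# Performance scaling: array throughput relative to a single drive on the
# same test pattern. The MB/s values are placeholders, not measured data.
def scaling(array_mbs, single_mbs):
    return array_mbs / single_mbs

print(f"{scaling(350.0, 100.0):.1f}x")  # 3.5x with these placeholder numbers
```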

Although it continues to lead the field by a wide margin with most test patterns, our X25-E RAID 0 array’s read performance isn’t nearly as impressive as its write speeds. In fact, with the Windows test pattern, the X25-E array is actually slower than a single X25-E. Even when it’s out ahead of the rest of the pack, the Extreme SSD array is never more than 1.6 times faster than a single-drive config.

FC-Test – continued

Next, File Copy Test combines read and write tasks in some, er, copy tests.

The Windows test pattern again proves challenging for our X25-E array, which would have otherwise swept FC-Test’s copy tests. Still, four X25-Es are consistently faster than just one, and occasionally by significant margins. We find the best performance scaling with the ISO test pattern, which is made up of only a few very large files and runs a little better than 2.7 times faster on our RAID config.

The results of FC-Test’s partition copy tests mirror those of the straight copy tests. Our X25-E RAID config is certainly dominant, but it can’t shut out the ANS-9010 RAM disk.

iPEAK multitasking
We’ve developed a series of disk-intensive multitasking tests to highlight the impact of seek times and command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.

Our iPEAK workloads were recorded using a 40GB partition, so they’re a little big for the 4GB i-RAM, 16GB ANS-9010, and even the 32GB X25-E. The app had no problems running, but it warned us that I/O requests that referenced areas beyond the drives’ respective capacities would be wrapped around to the beginning of each drive. Since there should be no performance difference between the beginning and end of an SSD, the results should be valid.
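The wrap-around behavior iPEAK warned us about amounts to a simple modulo on the request offset; a sketch, with capacities in decimal gigabytes for illustration:

```python
# iPEAK's wrap-around: a request recorded beyond a smaller drive's capacity
# is redirected to the equivalent offset from the start of the drive.
GB = 1000 ** 3

def wrap_offset(offset_bytes, capacity_bytes):
    return offset_bytes % capacity_bytes

# A request at the 38GB mark of the 40GB trace lands 6GB into a 32GB X25-E.
print(wrap_offset(38 * GB, 32 * GB) // GB)  # 6
```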

With just one exception, our four-drive X25-E Extreme array is the class of our iPEAK multitasking tests. It’s not miles ahead of the competition, though. If you average the mean service time across all nine test patterns, the X25-E RAID config works out to 0.14 milliseconds. The ANS-9010 RAID setup averages out to 0.18 ms, while a single X25-E sits at 0.35 milliseconds.

IOMeter
IOMeter presents a good test case for both seek times and command queuing.

The results of our IOMeter tests are as interesting as they are varied. Let’s start with the obvious, which is the fact that with the exception of the web server test pattern, the X25-E array isn’t the fastest config on the block. That said, our RAID 0 array does offer a significant performance boost over a single X25-E, particularly as the load ramps up. Under the heaviest loads, the RAID config offers transaction rates close to four times higher than a single X25-E with the file server, workstation, and database test patterns. Our striped array only offers about double the performance of a single drive with the web server test pattern, which is made up exclusively of read requests.

Our IOMeter CPU utilization results suggest that the X25-E RAID array’s processor utilization is lower than one might expect in light of its transaction rates. Given the huge gaps in transaction rates, these results are a little difficult to interpret on their own, so we’ve whipped up another set of graphs that illustrates the transaction rate per CPU utilization percentage. Since our mechanical hard drives don’t deliver anywhere near SSD levels of performance here, we’ve left them out of the equation, with the exception of the VelociRaptor.
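Those normalized graphs simply divide IOMeter's transaction rate by the CPU utilization it measured. A sketch of the calculation, with made-up numbers rather than our results:

```python
# Normalized efficiency: IOMeter transactions per second divided by the
# CPU utilization measured during the run. The figures are illustrative.
def iops_per_cpu_point(iops, cpu_util_pct):
    return iops / cpu_util_pct

hw_raid = iops_per_cpu_point(20_000, 2.0)   # hardware RAID: high rate, little CPU
sw_raid = iops_per_cpu_point(15_000, 10.0)  # software RAID: the CPU does the work

print(f"hardware: {hw_raid:.0f} IOps per CPU%, software: {sw_raid:.0f}")
```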

No doubt thanks to its use of a hardware RAID controller, our X25-E Extreme array offers much better performance per CPU cycle than the competition. Note that the ANS-9010 RAM disk RAID array, which uses the ICH7R south bridge chip’s software RAID solution, offers the lowest transaction rate per CPU cycle.

HD Tach
We tested HD Tach with the benchmark’s full variable zone size setting.

Four X25-E Extreme SSDs in RAID 0 deliver by far the highest sustained throughput we’ve ever measured in HD Tach, and it doesn’t matter whether you’re reading or writing. We don’t see anything close to a 4X increase in performance over a single-drive config, though.

The Adaptec 5405’s PCI Express interface has plenty of bandwidth at its disposal, as evidenced by the X25-E array’s monstrous 755MB/s burst speed. That’s all coming from the RAID controller’s onboard 256MB cache, so we’re really not hitting the SSDs here.

While still in the realm of near-instantaneous, the X25-E array’s random access time is just a sliver higher than that of a single-drive config. This result was consistent across all three of our test runs, as well.

HD Tach’s margin for error in its CPU utilization test is +/- 2%, so the X25-E array is effectively tied with the rest of the field here.

Although the recent wave of solid-state drives that’s flooded the market primarily targets mobile applications, SSDs aren’t quite ready to replace their mechanical counterparts for most users. The price is simply too high at the moment, not only in terms of the total cost of a drive, but the cost per gigabyte, as well. However, we don’t have to wait for prices to fall for SSDs to make sense in the enterprise world. For those less interested in storage capacity and more concerned with throughput and the ability to handle a barrage of concurrent I/O requests comfortably, solid-state drives like Intel’s X25-E Extreme offer a compelling performance-per-dollar proposition.

Interestingly, the perks that make SSDs attractive for notebooks also pay dividends for enterprise RAID configurations. The X25-E’s 2.5″ form factor is easy to pack into low-profile rack-mount enclosures, and thanks to the drive’s very low power consumption, there’s little need to worry about excessive heat. Because solid-state drives lack moving parts, the environmental vibration that can become problematic in a tightly-packed array isn’t an issue, either.

As we’ve seen today, a collection of X25-Es in RAID 0 can be very fast indeed—under the right circumstances. You need the right sort of workload to exploit the enormous performance potential of four of the fastest flash drives on the market. With our Adaptec 5405, our array offered the best performance scaling with sustained transfers, in particular with real-world writes. As one might expect from solid-state storage, the array also made short work of our multitasking and multi-user loads, delivering the best performance under our most demanding loads.

Naturally, a four-drive X25-E Extreme array is going to be overkill for most—it is a $2000 storage solution, after all. But if you have the right sort of workload, there’s staggering performance to be had.

Responses to “A look at four X25-E Extreme SSDs in RAID”

  1. I think the comments about the RAID card being the bottleneck may be right. The spec page of the 5405 quotes a data transfer rate of 3Gb/s per port. Note the small ‘b’. 3Gb/s = 384MB/s, and that ties in pretty closely with the highest write rate seen in the test of 366.1MB/s.

    From the figures, it seems pretty likely that the controller is the bottleneck here. It can’t actually cope with the performance of these drives.

    A 5805 card with two internal ports may have been a better bet.

  2. Page 8/bottom:

    “With just one exception, our four-drive X25-E Extreme array is the class of our iPEAK multitasking tests.”


  3. Perhaps the problem is what actually happens in a scripted scenario. Does the script execute items one at a time, or does it pile them all on and let multitasking sort it all out? If it’s one at a time, then perhaps SSD won’t pull away from the pack.

  4. Processor usage? Most people can’t figure out what to do with all the cores anyway. 🙂

  5. You could get very nice results without the card too, with less driver overhead to boot. Probably nothing over 300 megs a second, but that’s still monstrous.

  6. I hardly think that a pci-e expansion card can be classified as “no additional hardware.” Also the driver requires 2GB of system RAM for every 80GB of flash on the iodrive, making it very impractical for a lot of applications.

  7. you hit a bad raid card max transfer limit of 560.. my 4x vr300 do this (tweaked).. get a better raid card that has been shown to do 800MB/s

  8. Sorry for TR bashing. Been left feeling like I got an incomplete picture in a lot of reviews lately — especially when it comes to SSD’s. Not as much from TR as from other sites though; so be assured — I’ve let loose on a few others far worse than this 🙂

    While I definitely agree with your point on comparing a $2k RAID setup to one worth $0, given that folks who are more likely implementers will choose the former, I think there are still people — myself included — who would indeed like to see how well at least 2x SDD’s would compare on an ICH10R or more modern integrated controller.

    There are a lot of different potential bottlenecks in this picture, which is one reason I would stress on eliminating as many as possible.

    * RAID controller speed
    * PCI express bus
    * chipset-northbridge bus (PCI express or LDT)
    * RAM latency, size and speed
    * CPU speed
    * CPU cache size and speed

    Damage: I would view this as an opportunity to actually show your readers what the truth is. More reviews means more article views means more advertising dollars 🙂

    That said, I definitely should have posed my original post as a question: Would the picture change if you changed the CPU platform? The RAID Controller?

    Thanks for your great concern and quality reviews overall.

  9. That is my understanding (although I could be wrong). It’s just that the numbers don’t line up with what I’ve experienced personally.

  10. Sure, give me some funnays and I will. Robert “Apache” Howarth style witticisms I can do without.

  11. There are lots of these VoodooExtreme/HardOCP immature thingies these days in TR’s articles.

    I find them lame and I’m no prude.

  12. The freezes of some SSD’s on the market are related to a well known problem with the embedded JMicron controller inside them. The controller has issues with writes and pauses when the write queue is full (or something like that). Intel uses their own controller and design so no issue there.

    If you Google “JMicron SSD problem”, you’ll get a tonne of hits.

  13. I just wonder what the WorldBench (esp. Nero and WinZip, as the CPU tests show big differences) and game level load graphs would look like. I doubt the rest of the graphs would change. Admittedly, a newer game to match newer hardware might take just as long to load, but if you could cut any time the HD is not the bottleneck, it would emphasize the differences in the HDs.

    The other thing that bugs me is when the array gets beat significantly by the single one.

    But lets face it, anyone seriously considering a raid of these things can probably afford just to commission you to do any tests they want, while all us nitpickers can only dream of such an extravagant build.

  14. the problem with this guy is he doesn’t think outside the box

    how are he and we to know how the other raid types will perform if he doesn’t run and publish the benchmarks

    believe it or not raid 5 works well with ssds, but you wouldn’t know it from this piece of work

  15. Personally I wouldn’t call this laziness; he might simply have concentrated too much on the SSDs and didn’t even consider this.

    Another option is to get two of the ACard units and raid them (without using the dual SATA ports); it could be that the control chip is simply inefficient when dividing the memory internally between the two SATA ports…

    Many ideas, but it depends on how many units they received.

  16. No, you still need a buffer to handle the stripes and parity calcs when doing writes on RAID 5, etc. (aka the “small writes” problem).

  17. Um, no one in their right mind would buy $2000 worth of hard drives to run them off onboard intel software RAID.

    On the other hand, I would have liked to have seen the ACard 9010 tested on the same Adaptec RAID controller, but I imagine TR probably had to return the test unit to the distributor/manufacturer after the review, and don’t have it on hand anymore.

  18. “Four X25-E Extreme SSDs in RAID 0 deliver by far the highest sustained throughput we’ve ever measured in HD Tach, and it doesn’t matter whether you’re reading or writing. We don’t see anything close to a 4X increase in performance over a single-drive config, though.”

    As several others have mentioned, this may well be because of the I/O controller setup you chose for this test. Our own fairly extensive benchmarks on sustained read speed show that most modern motherboards equipped with 8-10 SATA ports can continuously and simultaneously run all ports flat out running simple Windows software RAID 0.

    In other words, connect six drives each individually capable of 100MB/S transfer speed to six motherboard SATA ports (and/or SATA ports on a PCI-E controller card), stripe them with Windows Disk Manager, and you get almost exactly 600MB/S transfer speed (as measured by Microsoft Research Labs’ DiskSpd, a benchmark utility tuned for massively parallel I/O arrays).

    No need at all for 3rd party RAID controllers with RAID-0; the OS is about as good as you can get…

  19. The reason that big iron doesn’t use raid0 is more the problem with data integrity and the fact that any single failure will take out the whole array. The 15-drive EMC SAN that I’ve used in the past was configured in raid5 with a hot spare (whatever that’s called).

  20. To be honest, it really doesn’t make any sense to me why these drives (and the ram drive) should be so close to the hard drives in the “real world” tests.

    From personal experience, fast SSDs (when not randomly freezing) are mind blowingly fast compared to a velociraptor in general computer usage. Those benchmarks really don’t reflect my experience at all.

  21. RAID cards that use these pigtails are NOT limited to a single sata port when it comes to bandwidth.

  22. Nice Work. It is a shame you could not get the chance to bench RAID levels that would be intensive on the CPU (RAID 5 or greater).

    The review just shows how hardware RAID is slowly becoming more pointless as high-RPM HDDs get replaced by SSDs. The memory buffer on true hardware controllers is meant to help reduce the latency associated with accessing data on different HDDs across the array. SSDs are so darn fast at random access that they practically eliminate the need for it. No CPU overhead is the last remaining benefit of hardware RAID. However, multi-core and super-fast CPUs do minimize its impact.

  23. did you guys test this setup yourselves in real-world usage?

    do those intel drives exhibit the random freezes that have plagued my attempts to use SSDs?

  24. Very interesting stuff. Real change is coming to the computer world. I can put all of my crap on a X25-M, if I wanted to replace my huge WD 640 SE16.

    Of course, the only way to squelch the critics of the P4 is to take the exact same array and pop it into a Phenom II or Core i7 or something. Maybe you could do that as part of a review of the RAID controller instead of the drives, and then link the two stories. If you’re going to review the controller, please test it with more than Windows.

    I don’t care enough to complain, but when I see Windows reporting 50% load, I think about one core running full bore and the other totally idle in a dual core system. Not a complaint… just saying.

  25. Can the wear leveling algorithms also add to the overhead of the raided SSDs? I would think it would only be microseconds, but unlike platter storage, couldn’t the data be written to different parts of the SSD every time?

  26. The random access time measurement with only one decimal place to the right of the point is kind of pointless for flash and RAM based devices. Why not use a program that can measure it more precisely?

    Also, since you get 560.5 average read for the RAID, which is not close to the 4x average read speed (236.5 x 4 = 946) you might expect, but the RAID is able to deliver 755.3 in burst mode (i.e., the bus is not the limitation for the aggregate read), it seems to me that the RAID card is meant to deliver aggregate reads up to 4x the average read of a normal hard disk (around 100). I.e., you picked the wrong RAID card to properly max out the performance of the flash disks. You probably need one that supports aggregation of 10 rotating disks.

  27. The drawback to this test is that it doesn’t give a good picture of the capabilities of 4 X25-E drives in raid. My reasoning is that the drives experience a bandwidth bottleneck due to the limitation of one branched SATA port, which has a max bandwidth of about 300MB/s. If these drives were tested with a raid card which had multiple SATA ports, you would be able to take advantage of greater bandwidth. For example, a raid card with 4 SATA ports (4 x 300MB/s) would offer a combined bandwidth which can more adequately take advantage of the PCI Express x8 slot, which has a claimed bandwidth of 2GB/s.

  28. All these requests for raid 1, 5, 10, reduced amount of drives, different test systems and a different controller but you are all overlooking the major issue here.

    They weren’t tested in my system.. in an undisclosed location in Mexico… with no expectation to ever get them back.. so screw you guys.

    Thanks Geoff for the additional tests you ran from the original article, much appreciated.

  29. Thanks a lot for testing these out with a decent raid controller, the peanut gallery (myself included) has been asking for this!

    It would have been great, though, if you could have retested the ANS-9010 using the Adaptec raid card, as it would have been a much fairer comparison. OK, it might not make any difference, but I’d like to know.

    Understand about the old P4 testbed; not ideal, but yes, it is preferable to have a standard platform to do comparisons. Could you please consider acquiring a (currently) high-end raid controller as part of your replacement test bed to standardise any raid0 tests you do in the future?

    The article really does demonstrate that RAID0 has quite a limited value for desktop uses, and the returns are diminishing more and more as each device gets faster.

    Overall: very interesting read over morning coffee, thanks!! 🙂

  30. But if you actually care about processor usage you wouldn’t use the motherboard RAID either, so that comparison is kind of pointless either way.

    I do like the fact that we’re finally seeing CPU numbers normalized against the throughput, however.

  31. I disagree. I think it’s valid the way it is, even though it makes things like the boot speed test invalid.

    Damage has a point: you don’t put $2400 worth of hard drives on an onboard raid setup. You also wouldn’t spend a few hundred bucks on a controller for a $75 hard drive.

  32. would have been nice to see those mechanical drives in RAID 0…I am sure that long blue line would somehow be matched.

  33. I completely agree with you, that stripe size and controller make a big difference. On big enterprise frames, the discussion between RAID 5 and 10 or the other highly available disk configurations comes down to need for I/O, data integrity and cost. Performance wise, it should be moot, especially on systems with big caches (32-256GB in size) where the working set fits in the frames cache.

    In looking at the table from Storage Review, I can see how they have weighted each of the various RAID levels. I think they score RAID 0 higher for writes over RAID 1 and 01/10 since there is less overhead. In terms of I/O’s, RAID 10 should be much faster under heavy load, since you have minimum 2 disks servicing the requests, versus one (or possibly more depending on stripes) in the RAID 0 set up.

    There is a difference between RAID 10 and 0+1. Choosing between 0+1 and 10 is a reliability and cost question. Most people confuse these.

  34. Hello, nice article! Interesting how so many tests didn’t go in favor of the X25-E/RAID config, given that it displayed the best read and write throughput, with excellent access latencies. The latter were a tiny bit higher than some of the other drives (not sure if that was within the std. error though), so maybe that explains it.

    Anyhow, I’ll chime in with my own suggestions of what else to test! 🙂

    It would have been great to have been able to test the ANS-9010 with a decent RAID controller, but given that you guys probably sent it back already, how about testing the X25-E’s using the ICH7R south bridge RAID controller? It would be interesting to compare the performance of the controllers, and beyond that it would certainly enable a more even comparison of the X25-E against the ANS-9010.

  35. Edit: meant this as a reply to #47…

    I’m not so sure about the performance advantages of RAID 0/1 or 5 vs. 0. Check out this piece from Storage Review (which, strangely enough, is now peppered with annoying ads).

    The reason that no one in the enterprise segment (especially banks...) will consider RAID 0 is that it isn’t fault-tolerant. A non-fault-tolerant RAID setup is simply not an option for most enterprise scenarios.

    From what I’ve seen and read, and as confirmed by that Storage Review article, you’ll get the best overall performance with RAID 0, so long as you pick the right stripe size and you have a good controller card. (Excluding exotic RAID variants like the RAID 7 described by Storage Review.)

  36. That was 12 drives on a LSI 8708, using a poorly designed Supermicro SAS expander that only had one functional mini SAS port. (it has 2 expanders on the backplane but they are in a redundant configuration)

    The next build will use something that either has no SAS expander or multiple SAS expanders that are independent, so everything isn’t sent through 4 channels.

  37. Do you know the controller used?

    550MB/s sounds low even for that number of disks in play. It works out to roughly 45MB/s per disk. I usually plan on an average of 72MB/s per disk when using 15K SAS/SCSI disks. Last time I ran the tests on an HP DL380 G4/G5, I was getting ~480MB/s on a RAID 10 set of 6 disks. I’d expect you to get much more than that with 12 of them.

  38. >> you’ll find many IT Storage Architects using RAID 10

    My most recent build was a 12 drive SAS array, and it was quite a bit faster with RAID5/6 than RAID10 (550 MB/sec versus 380 or so for reads). I can’t speak for the Adaptec card used in the review but other recent controllers have solved the performance hit RAID 5/6 used to take.

  39. I would like to see a comparison of 15K SAS drives to this array, based on cost not number of units. It should be possible to make an 8 drive 15K SAS array for the cost of these 4 SSDs, is it faster?

    Do SSDs still make sense from a cost / performance point of view?

    Also, would something like a LSI MegaRAID 8708 produce better results? I have found that card to be particularly fast, and quite affordable.

    Good to see some server hardware being tested.

  40. Geoff,

    Can you re-test this with RAID 1 or RAID 10 instead?

    In general, RAID 0 stinks for performance-intensive situations, because of how it stripes data. You’ll find the big disk vendors, like HP, EMC, etc., do not offer RAID 0 on their enterprise storage systems (except on their low-end SMB products, which are NOT enterprise). They offer modified versions of RAID 5 (RAID S in EMC parlance), RAID 1, RAID 10, and a few other more exotic modes.

    I suspect the controller really isn’t to blame here. For SSDs in RAID 0, you need to tune the stripe size to that of the SSD. In RAID 0, if the stripe size is 128KB (the default on most RAID cards), then unless the data size is > 128KB, the data sits on one disk or the other. You lose the benefit of read-ahead, since one disk gets pounded with I/O while the other sits idle. Same problem for writes: if the data size is < 128KB, one disk gets pounded. The controllers do try to spread it out, but…

    If you re-test with RAID 1, you’ll notice a big improvement in read performance. This is because the controller can handle two simultaneous requests at a time (one per disk/SSD). With a four-SSD RAID 1, you would be able to handle four reads at once. Writes, of course, happen in parallel and should be equal to single-disk performance for two-disk/SSD sets.

    And if you go RAID 10, you’ll discover the performance should be eye-popping, especially if you match the stripe size to the SSD block size. A stripe of mirrors, i.e. RAID 10, requires at least four disks, and gives the ultimate in data protection and performance, since literally all disks are used in reads and writes.

    As a BTW, you’ll find many IT storage architects using RAID 10 (or their storage vendor’s equivalent) for their applications (think banking), given the need for a huge number of transactions and sustained performance. That’s why they’d consider SSDs in the first place. IT storage architects will tend to use RAID 5 for bulk protection of storage at a reasonable cost. Before people go off on “RAID 5 sucks,” blah blah, remember true enterprise RAID arrays have 32-256GB of cache on them, so RAID 5 performance isn’t an issue. And typically, in enterprise frames (HP, EMC), the LUNs are mirrored internally and then a RAID 5 set is made out of those LUNs. The RAID 5 set is then presented to the outside world. RAID 5 over mirrored sets gives better performance and uses fewer disks than a RAID 10 set would, lowering the cost of the storage in use.

    For a small home server or low-end box, RAID 10 versus 1 doesn’t matter, but as you have correctly noted, when SSDs are used in the enterprise, their performance, especially in that mode, is king. Either RAID 1 or RAID 10 with those sweet SSDs you have will have them singing.

    Bottom line, RAID 0 sucks.
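
    The stripe-size argument above is easy to check with a little arithmetic. Here's a minimal Python sketch (the 128KB stripe and two-disk array are just the numbers from the comment, not anything measured in the review) showing which RAID 0 members a request actually touches:

```python
STRIPE = 128 * 1024  # hypothetical default stripe size from the comment (128KB)

def disks_touched(offset, length, stripe=STRIPE, ndisks=2):
    """Return the set of RAID 0 member disks that a request of
    `length` bytes starting at byte `offset` spans."""
    first_chunk = offset // stripe
    last_chunk = (offset + length - 1) // stripe
    # RAID 0 lays chunks out round-robin across the members.
    return {chunk % ndisks for chunk in range(first_chunk, last_chunk + 1)}

# A 64KB request inside one stripe lands on a single disk...
print(disks_touched(0, 64 * 1024))    # -> {0}
# ...while a 256KB request is spread across both members.
print(disks_touched(0, 256 * 1024))   # -> {0, 1}
```

    As the comment argues, any transfer smaller than the stripe size is served by a single member, so small-block workloads see no striping benefit at all.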

  41. SSDs in a RAID are not going to scale perfectly, because there is a complex interaction between the filesystem block or extent size, the RAID stripe width, the number of flash channels per SSD, the size of the flash erase block (typically 128KiB), and finally the 512B sector size of a SATA I/O.
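
    One of those interactions, alignment with the flash erase block, can be sketched with toy numbers. A hedged Python example (the 128KiB erase block and 512B sector come from the comment; the one-sector misalignment is a hypothetical worst case):

```python
ERASE_BLOCK = 128 * 1024  # typical flash erase block size, per the comment
SECTOR = 512              # SATA transfer granularity

def erase_blocks_touched(offset, length, erase=ERASE_BLOCK):
    """Count the flash erase blocks a write spans; each partially
    covered block can cost the SSD a read-modify-write cycle."""
    first = offset // erase
    last = (offset + length - 1) // erase
    return last - first + 1

# A 128KiB write aligned to an erase-block boundary touches one block...
print(erase_blocks_touched(0, 128 * 1024))       # -> 1
# ...but shift it by a single 512B sector and it straddles two.
print(erase_blocks_touched(SECTOR, 128 * 1024))  # -> 2
```

    A misaligned stripe or filesystem layout can thus double the flash work per write, which is one plausible reason the array doesn't scale linearly.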

  42. Interesting article, thanks for posting this. I was very curious to see how four Intel SSDs would perform on Adaptec’s best controller. Unfortunately that is just a huge disappointment!

    On the bright side, Intel cut their SSD prices again today.

    X25-E ~$425
    X25-M $399 shipped @ Newegg

  43. The amazing thing is that people decided to start complaining so vocally about our testbed, since:

    1) Our stable-testbed approach has been a facet of our storage coverage for, heh, quite a while, as the vintage of hardware involved testifies. It allows us to do these broad comparisons in quick succession, like the A-Card vs. X25-E RAID 0… and still have the i-RAM and a range of hard drives included.

    2) Even with the highest transaction rates in the most grueling multi-user test using a DDR2-800 RAM disk, CPU utilization rarely rises over 50%. See the data for yourself here:

    §[<<]§

    Our testbed is a SATA 3Gbps storage controller, which, for all of the change in the industry recently, is still state of the art. (In fact, I'd bet good money this Intel ICH performs better than any AMD south bridge with AHCI--and thus NCQ. Even AMD's newest chipsets are still broken there.) We do recognize the need to change testbeds from time to time, and we intend to do so with the release of SATA 6Gbps-capable controllers. We might move up that schedule if we find that hard drives are regularly pushing our testbed toward 100% CPU utilization, but frankly, that seems unlikely in the near future.

    As for your other comment, with respect, I fail to see how testing a $2400+ RAID setup with a ~$0 onboard RAID controller is more appropriate and relevant to likely implementations than using a dedicated controller.

  44. I’d like to see four 2.5″ 15K-RPM SAS drives in the mix so you can get a comparison with the current high end in platter-based storage.

  45. Edit: Um, wow. Holy cow. Intel ICH7R and a P4 Extreme Edition as the testbed?

    Can anyone say bottleneck?

    This isn’t really a fair review like we’re used to seeing from TR. I’d like to see a RAID configuration using on-board RAID controllers as are typical with consumer boards.

  46. Would it be possible to get a review of different RAID cards using this setup? (*Hint*Hint*) 😉

  47. Also interesting would have been the i-RAM on that controller card. I have seen 4x and 8x RAID setups demoed on the tube – §[<<]§ – and they look fast. You also should have had a simpler 2x RAID setup for the Intel drives. That could be the price/performance sweet spot, and more drives could always be added when the price drops.

  48. The thing with RAID that I found when messing with servers I was using for builds/compiles is that stripe size made a good amount of difference, because of the general file sizes accessed, how many files were accessed at once, and things of that nature. (For example, I was running 5 or 6 builds at once.) Also, for mostly random access, Dell recommended on the PERC 6/i turning off things like read-ahead and adaptive read-ahead. Depending on the main usage, there were also instructions to turn off the card’s cache. In general, the drives’ own caches are disabled because the card’s cache is used instead, and because, if there were a system failure, items in a drive’s cache that hadn’t yet been written to the disk would be lost. Enabling the drive cache didn’t help, or slightly hurt performance, in the setup I was using.

    Basically, I found there was a long, drawn-out RAID optimization process that needs to be done depending on what the main usage of the server is. Some of the benchmarks could be impacted by that kind of optimization, though I understand why it wouldn’t be done in this kind of review. It would be interesting to know whether those kinds of changes have as much of an impact on SSDs as they do on hard drives.

  49. Same here. As much as I hate to dis you Geoff, not re-testing the ACard’s RAID-0 implementation on the 5405 screams laziness.

    Overall, some really interesting results with the SSDs, though. It looks to me like the RAID setup has some problems with random reads. After seeing it stutter through the FC Windows-read test, you have to wonder how the random reads are spread out among all the other tests.

  50. The SSD array is very fast in throughput and access time. I wonder about having such an array as part of a hybrid drive+SSD controller, where the SSD array is the cache for the drive array. Then, if the system goes down, you’d still have transaction data in the SSDs, which could be written back to the drives.

  51. I personally was hoping to see a retest of the ACard with this controller. I am wondering if the on-board controller was holding it back, or not.

  52. Or maybe do an article testing the current setup against a completely new one and see if they are comparable; that would find out if the board is bottlenecking anything.

  53. And is XP SP2 the optimal OS for SSDs? There are so many factors hindering these drives.

  54. For consistency with the previous ones, sure, but it is looking quite old. I wonder how the poorest-scaling tests from one drive to four would improve if they retested on an i7 rig? (Hint, hint, and apologies for asking for 7x as many tests in just two posts.)

  55. Interesting, a little disappointing, but so many more questions.
    How do four Raptors or decent 7,200-RPM drives compare? Seek times should go up as you wait for all four drives, but throughput should scale well?

    How does the onboard RAID compare? Is the card adding some additional latency because it is RAID 0, or just because it is a card? Cf. how does a single drive attached to the card do? And how does the DDR2-based disk do with the Adaptec card?

    How does the write caching on the card help the infamous JMicron-controller-based drives cope with small random writes?

    SSDs in a RAID should scale near-perfectly; are we just seeing the bottleneck move elsewhere in the system?

    And is SATA 3Gbps going to be enough? I thought SSDs used a RAID 0-style implementation internally anyway, so throughput would scale with density (or the number of memory chips).

  56. I think this review makes a compelling case for upgrading the storage test platform. I can understand the need to keep some consistency, to make it easier to test newer drives against old, but I feel that with this (what should be) killer storage array, the rest of the platform should be reasonably up to date also. The C2D’s been out for what, 2.5 years now?

    Please take this as constructive criticism for what was otherwise a decent read.

  57. I would guess CPU, assuming that the levels are all compressed in some way. Would always be nice to see a test done though.

  58. What exactly is holding back load times? Even with really fast SSDs the load time is not decreasing very much at all.

    Is this a bottleneck from memory, cpu, OS, the game or what?

  59. agreed

    and i’ve been going on about raid0’ing ssd’s for quite a while now…
    results were a little disappointing

  60. I would hope to see a test with the same RAID card and four normal drives, like 7,200-RPM 3.5″ drives, and even more fun, 4x VelociRaptors.

    I can easily see the need for an SSD in a laptop, but with the high cost, I’d rather have a VelociRaptor in my workstation. Although that doesn’t get you the near-instantaneous access an SSD gives.

    What I would love to see is something of a hybrid: a VelociRaptor mated with an SSD of moderate size, preferably with some intelligence that would place small, often-opened files on the SSD part, and larger, more seldom-accessed files on the drive part. The VelociRaptor by itself is really nice and gives a good boost in perceived performance, but getting rid of the read/write latency on all those pesky tiny files would be much better. I can’t really see the need for an incredibly high transfer rate outside specialized applications. The VelociRaptor is plenty fast for most users.

  61. I’d also be curious to see if the performance was similar (or even improved) when run from an integrated ICH9R or ICH10R controller. I’ve had numerous occasions over the years where an expansion card controller introduced new limitations that left me scratching my head at performance results.

  62. Not really any point to RAID 0’ing SSDs. Their main advantage is quick access time. Compare four of these SSDs to almost any six hard drives in RAID 0, and the bandwidth is basically the same. Better off just buying one of these SSDs if you need I/O (like databases or whatever) or want to shave a little off load times.

  63. I was really expecting more.

    I have to wonder what performance would have looked like with 4 drives running RAID 0 from the motherboard, rather than through a stand-alone controller.

  64. I really want to read this right now, but I have to get up for work tomorrow… where I’ll read the article instead. I’m pretty sure I’ll be dreaming about this tonight, though. *Best Homer voice* SSDs in RAID *Homer drool*

  65. Hardware RAID is an interesting beast. What you managed to show in these benchmarks is that Adaptec’s RAID controller is nothing special, and all it really manages to do is add a layer of access latency with its cache. These controllers are also meant for mechanical disks, to hide their random reads and writes. I’m willing to bet that with SATA rev 3.0, SSDs will come out to take advantage of the larger bandwidth and more efficient SATA protocol and truly set themselves apart from mechanical drives.

  66. I really don’t see any use for it in consumers’ homes, even for hardware enthusiasts. Between the price, the boot time, and the increased CPU utilization, there just isn’t much real-world benefit in many of the benchmarks presented (or at least not enough to justify a dedicated RAID card and four SSDs; the scaling was quite poor, except in an artificial HD Tach benchmark).

  67. Not exactly going the budget route here are we? 🙂
    Sweet performance though, even better with SATA 6.0 I guess … waiting…

  68. Wow, if those read/write speeds taken from HD Tach are any indication of what to expect from OCZ’s built-in RAID 0 SSDs, SATA 3Gbps is going to be a massive hamstring for them.

  69. Will you have a chance to look at the G.Skill Titan SSDs? Although it uses the JMicron controller, it has a special configuration that apparently solves the stuttering issue.

    Write speeds get pretty tremendous too.