A look at four X25-E Extreme SSDs in RAID

Intel’s X25-E Extreme SSD is far and away the fastest flash drive we’ve ever tested. Sure, it only packs 32GB of storage, and yes, you’ll pay a princely sum for the privilege. But with a smart storage controller, near-instantaneous seek times, and the ability to sustain reads at 250MB/s and writes at 170MB/s, the X25-E actually represents good value if you quantify its performance per dollar. That might not be how most folks look at value in the storage world, but for the demanding enterprise environments at which the Extreme is targeted, it’s often the most important metric.
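
If you want to run that math yourself, the calculation is nothing exotic. Here's a quick sketch in Python with purely hypothetical prices and transaction rates plugged in (they're placeholders for illustration, not our measured results):

```python
# Performance per dollar, the metric enterprise buyers tend to care about.
# The figures below are hypothetical placeholders, not measured results.
drives = {
    # name: (street price in USD, random I/O rate in IOps)
    "flash SSD":   (700.0, 35_000),
    "15K RPM HDD": (300.0, 400),
}

for name, (price, iops) in drives.items():
    print(f"{name}: {iops / price:,.1f} IOps per dollar")
```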

While the X25-E’s dominating single-drive performance would surely satiate most folks, its target market is likely to seek out even greater throughput and higher transaction rates by combining multiple drives in RAID. The performance potential of a RAID 0 array made up of multiple Extremes is bountiful to say the least, and with the drive’s frugal power consumption and subsequently low heat output, such a configuration should cope well in densely populated rack-mount enclosures. Naturally, we had to test this potential ourselves.

Armed with a high-end RAID card and four X25-Es, we’ve set out to see just how fast a RAID 0 array can be. This is easily the most exotic storage configuration we’ve ever tested, but can it live up to our unavoidably lofty expectations? Let’s find out.

Ramping up the RAID

The software RAID solutions built into modern south bridge chips are more than adequate for most applications—my personal desktop and closet file server included—but they’re probably not the best foundations for a four-way X25-E array. Such an impressive stack of drives calls for a RAID controller with a little more swagger, so we put in a call to Adaptec, which hooked us up with one of its RAID 5405 cards.

The 5405 features a dual-core hardware RAID chip running at 1.2GHz with 256MB of DDR2 cache memory. We’ll be focusing our attention on RAID 0 today, but the card supports a whole host of other array configurations, including RAID 1, 1E, 5, 5EE, 6, 10, 50, 60, and 36DD. Ok, so maybe not the last one.
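
As a refresher, RAID 0 simply interleaves fixed-size chunks of data across the member drives, which is where its throughput scaling comes from. Here's a minimal sketch of the address mapping; the 128KB stripe size is just an illustrative assumption, not necessarily the 5405's default:

```python
STRIPE_SIZE = 128 * 1024   # bytes per stripe; 128KB is only an example value
NUM_DRIVES = 4             # four X25-Es in our array

def raid0_locate(offset: int) -> tuple[int, int]:
    """Map a byte offset on the array to (drive index, byte offset on that drive)."""
    stripe = offset // STRIPE_SIZE
    drive = stripe % NUM_DRIVES                  # stripes rotate across the drives
    drive_offset = (stripe // NUM_DRIVES) * STRIPE_SIZE + offset % STRIPE_SIZE
    return drive, drive_offset

# A long sequential transfer walks across all four drives in turn, which is
# why sequential throughput can approach four times that of a single drive.
for off in (0, 128 * 1024, 256 * 1024, 384 * 1024, 512 * 1024):
    print(off, "->", raid0_locate(off))
```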

Dubbed a “Unified Serial RAID controller,” the 5405 works not only with Serial ATA drives, but also with Serial Attached SCSI hardware. The card itself doesn’t have any, er, serial ports onboard. Instead, it has a single x4 mini-SAS connector (at the top in the picture above) and comes with a fan-out cable that splits into four standard Serial ATA data cables. If you want to use the 5405 with Serial Attached SCSI drives, you’ll need a SAS fan-out cable, a compatible backplane, or direct-connect SAS storage.

To ensure compatibility with cramped rack-mount enclosures, the 5405 is a low-profile card with standard and short mounting brackets included in the box. It also has a PCI Express x8 interface, making it compatible with a wide range of workstation and server motherboards, in addition to standard desktop fare. PCIe x8 slots tend to be rare on desktop boards, but fear not. We were able to get the 5405 running in our test system’s primary PCIe x16 graphics card slot without a fuss. Since it only has eight lanes of electrical connectivity, the 5405 can’t make the most of an x16 slot’s available bandwidth. However, for four ports, an aggregate 2GB/s of bi-directional bandwidth should be more than adequate—even for X25-Es.
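
The back-of-the-envelope math works out something like this, assuming first-generation PCIe signaling at roughly 250MB/s per lane per direction and the X25-E's rated sequential speeds:

```python
# Rough bandwidth budget for the 5405 in an x8 slot. Figures are approximate:
# PCIe 1.x moves roughly 250MB/s per lane per direction, and Intel rates the
# X25-E at 250MB/s sequential reads and 170MB/s sequential writes.
lanes = 8
lane_bw = 250                 # MB/s per lane, per direction
slot_bw = lanes * lane_bw     # ~2000MB/s each way

drives = 4
peak_read = drives * 250      # MB/s if all four drives stream reads at once
peak_write = drives * 170     # MB/s if all four drives stream writes at once

print(f"Slot bandwidth:        {slot_bw} MB/s per direction")
print(f"Four-drive peak read:  {peak_read} MB/s")
print(f"Four-drive peak write: {peak_write} MB/s")
```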

As one might expect, the 5405 isn’t cheap; it costs $335 and up online. Adaptec does provide three years of warranty coverage, though. Drivers are also available not only for Windows, but also for OpenServer, UnixWare, Solaris, FreeBSD, VMware, and both Red Hat and SUSE Linux.

Our testing methods

In truth, we don’t have anything even remotely comparable to line up against four X25-Es strapped to a fancy hardware RAID card. So we’ve thrown a little of everything at this beastly storage configuration instead, including hardware RAM disks from Gigabyte and ACard, a collection of SSDs including the X25-E Extreme on its own, and a handful of the fastest 3.5″ desktop drives on the market.

To keep the graphs on the following pages easier to read, we’ve color-coded the bars by manufacturer. Our X25-E RAID 0 array appears in bright blue, with Intel’s X25-series SSDs appearing in a lighter hue. Note that we also have a set of RAID 0 results for the ANS-9010 RAM disk. Those results were from a virtual two-drive config running off our test system’s ICH7R south bridge RAID controller.

All tests were run three times, and their results were averaged, using the following test system.

Processor Pentium 4 Extreme Edition 3.4GHz
System bus 800MHz (200MHz quad-pumped)
Motherboard Asus P5WD2 Premium
Bios revision 0422
North bridge Intel 955X MCH
South bridge Intel ICH7R
Chipset drivers Chipset 7.2.1.1003
AHCI/RAID 5.1.0.1022
Memory size 1GB (2 DIMMs)
Memory type Micron DDR2 SDRAM at 533MHz
CAS latency (CL) 3
RAS to CAS delay (tRCD) 3
RAS precharge (tRP) 3
Cycle time (tRAS) 8
Audio codec ALC882D
Graphics Radeon X700 Pro 256MB with CATALYST 5.7 drivers
Hard drives Seagate Barracuda 7200.11 1TB
Seagate Barracuda ES.2 1TB
Samsung SpinPoint F1 1TB

Hitachi Deskstar E7K1000 1TB

Western Digital VelociRaptor 300GB

Western Digital Raptor WD1500ADFD 150GB


Western Digital Caviar Black 1TB


Western Digital RE3 1TB


Western Digital Caviar SE16 640GB



Seagate Barracuda 7200.11 1.5TB


Samsung FlashSSD 64GB


Intel X25-M 80GB


Intel X25-E Extreme 32GB


Gigabyte i-RAM
with 4GB DDR400 SDRAM

ACard ANS-9010
with 16GB DDR2-800 SDRAM

OS Windows XP Professional
OS updates Service Pack 2

Thanks to NCIX for getting us the SpinPoint F1.

Our test system was powered by an OCZ PowerStream power supply unit.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

WorldBench
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results. You won’t find Gigabyte’s i-RAM in the graphs below because its 4GB maximum storage capacity is too limited for WorldBench to run.

WorldBench is made up of common desktop applications that aren’t typically bound by storage subsystem performance. However, it’s still a little disheartening to see our X25-E RAID config fail to make the podium. Even a single X25-E is faster than our stack of four here.

Multimedia editing and encoding

MusicMatch Jukebox

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Our X25-E RAID 0 array does reasonably well in WorldBench’s Premiere test, but scores are close through the rest of WorldBench’s multimedia editing and encoding tests. Note that the RAID setup is 13 seconds slower than a single X25-E in the Media Encoder test, though.

Image processing

Adobe Photoshop

ACDSee PowerPack

The four-drive X25-E setup takes top honors in WorldBench’s ACDSee test, but it’s only 11 seconds quicker than one of the Extremes on its own.

Multitasking and office applications

Microsoft Office

Mozilla

Mozilla and Windows Media Encoder

WorldBench’s office and multitasking tests appear unable to exploit faster storage configurations.

Other applications

WinZip

Nero

The WinZip and Nero tests are more storage-bound than any others in the WorldBench suite, and again, there’s little difference in performance between a single X25-E Extreme and four of them in a RAID 0 array.

Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch.

Ignore this one, folks. Our RAID setup may take more than a minute longer to boot than the rest, but it’s also the only configuration that has to initialize the Adaptec RAID card, which takes its sweet time booting up.

Of course, we can’t blame the Adaptec card’s initialization time for the X25-E RAID config’s uninspired level load times. The RAID 0 array is at least within striking distance of a single X25-E in Doom 3, but it’s a few seconds back in Far Cry.

File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition. Scores are presented in MB/s.
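
If you're curious how that sort of measurement works, a crude approximation is easy to whip up. Here's a quick sketch in Python (not FC-Test itself, just the general idea) that copies a file and reports MB/s; the paths are placeholders:

```python
import shutil
import time
from pathlib import Path

def copy_throughput(src: Path, dst: Path) -> float:
    """Copy src to dst and return the throughput in MB/s. FC-Test does the
    same sort of timing, but against fixed file-set patterns (Install, ISO,
    MP3, Programs, Windows) rather than a single file."""
    size_mb = src.stat().st_size / 1e6
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Example usage with placeholder paths:
# print(f"{copy_throughput(Path('image.iso'), Path('image_copy.iso')):.1f} MB/s")
```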

To make things easier to read, we’ve separated our FC-Test results into individual graphs for each test pattern. We’ll tackle file creation performance first.

Now that’s more like it. Our X25-E RAID array roars to victory in all five file creation test patterns. The striped array’s performance is most dominant with the Install, ISO, and MP3 test patterns, which have smaller numbers of larger files than the Programs and Windows test patterns. We see the most impressive performance scaling with the MP3 test pattern, which runs more than 3.5 times faster with four X25-Es than it does with a single drive.

Although it continues to lead the field by a wide margin with most test patterns, our X25-E RAID 0 array’s read performance isn’t nearly as impressive as its write speeds. In fact, with the Windows test pattern, the X25-E array is actually slower than a single X25-E. Even when it’s out ahead of the rest of the pack, the Extreme SSD array is never more than 1.6 times faster than a single-drive config.

FC-Test – continued

Next, File Copy Test combines read and write tasks in some, er, copy tests.

The Windows test pattern again proves challenging for our X25-E array, which would have otherwise swept FC-Test’s copy tests. Still, four X25-Es are consistently faster than just one, and occasionally by significant margins. We find the best performance scaling with the ISO test pattern, which is made up of only a few very large files and runs a little better than 2.7 times faster on our RAID config.

The results of FC-Test’s partition copy tests mirror those of the straight copy tests. Our X25-E RAID config is certainly dominant, but it can’t shut out the ANS-9010 RAM disk.

iPEAK multitasking
We’ve developed a series of disk-intensive multitasking tests to highlight the impact of seek times and command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.
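
Mean service time is simply the average time taken to complete each I/O request in the replayed trace. A minimal sketch of the calculation, with made-up per-request times rather than iPEAK's actual trace format:

```python
def mean_service_time_ms(service_times_ms: list[float]) -> float:
    """Average per-request service time, the figure our iPEAK graphs report."""
    return sum(service_times_ms) / len(service_times_ms)

# Three hypothetical requests that took 0.10ms, 0.20ms, and 0.15ms to complete:
print(f"{mean_service_time_ms([0.10, 0.20, 0.15]):.2f} ms")   # -> 0.15 ms
```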

Our iPEAK workloads were recorded using a 40GB partition, so they’re a little big for the 4GB i-RAM, 16GB ANS-9010, and even the 32GB X25-E. The app had no problems running, but it warned us that I/O requests that referenced areas beyond the drives’ respective capacities would be wrapped around to the beginning of each drive. Since there should be no performance difference between the beginning and end of an SSD, the results should be valid.
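
That wrap-around boils down to a simple modulo on the requested address, something like this sketch (our illustration of the idea, not iPEAK's actual code):

```python
def wrap_request(lba: int, drive_capacity_lbas: int) -> int:
    """Wrap a logical block address that falls past the end of a smaller
    drive back to its beginning, so a 40GB trace can replay on a 32GB SSD."""
    return lba % drive_capacity_lbas

# A request aimed at the 40GB mark lands near the 8GB mark on a 32GB drive
# (capacities expressed in 512-byte sectors here):
print(wrap_request(40 * 10**9 // 512, 32 * 10**9 // 512))
```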

With just one exception, our four-drive X25-E Extreme array is the class of our iPEAK multitasking tests. It’s not miles ahead of the competition, though. If you average the mean service time across all nine test patterns, the X25-E RAID config works out to 0.14 milliseconds. The ANS-9010 RAID setup averages out to 0.18 ms, while a single X25-E sits at 0.35 milliseconds.

IOMeter
IOMeter presents a good test case for both seek times and command queuing.

The results of our IOMeter tests are as interesting as they are varied. Let’s start with the obvious, which is the fact that with the exception of the web server test pattern, the X25-E array isn’t the fastest config on the block. That said, our RAID 0 array does offer a significant performance boost over a single X25-E, particularly as the load ramps up. Under the heaviest loads, the RAID config offers transaction rates close to four times higher than a single X25-E with the file server, workstation, and database test patterns. Our striped array only offers about double the performance of a single drive with the web server test pattern, which is made up exclusively of read requests.

Our IOMeter CPU utilization results suggest that the X25-E RAID array’s processor utilization is lower than one might expect in light of its transaction rates. Given the huge gaps in transaction rates, these results are a little difficult to interpret on their own, so we’ve whipped up another set of graphs that illustrates the transaction rate per CPU utilization percentage. Since our mechanical hard drives don’t deliver anywhere near SSD levels of performance here, we’ve left them out of the equation, with the exception of the VelociRaptor.
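
The derived metric in those graphs is nothing fancier than dividing each config's transaction rate by its measured CPU utilization, as in this sketch (the numbers are hypothetical):

```python
def iops_per_cpu_point(transactions_per_sec: float, cpu_util_percent: float) -> float:
    """Transactions delivered per percentage point of host CPU utilization.
    Higher is better: more I/O for less CPU time spent servicing it."""
    return transactions_per_sec / cpu_util_percent

# Hypothetical example, not our measured results:
print(iops_per_cpu_point(12_000, 4.0))   # -> 3000.0 transactions per CPU%
```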

No doubt thanks to its use of a hardware RAID controller, our X25-E Extreme array offers much better performance per CPU cycle than the competition. Note that the ANS-9010 RAM disk RAID array, which uses the ICH7R south bridge chip’s software RAID solution, offers the lowest transaction rate per CPU cycle.

HD Tach
We tested HD Tach with the benchmark’s full variable zone size setting.

Four X25-E Extreme SSDs in RAID 0 deliver by far the highest sustained throughput we’ve ever measured in HD Tach, and it doesn’t matter whether you’re reading or writing. We don’t see anything close to a 4X increase in performance over a single-drive config, though.

The Adaptec 5405’s PCI Express interface has plenty of bandwidth at its disposal, as evidenced by the X25-E array’s monstrous 755MB/s burst speed. That’s all coming from the RAID controller’s onboard 256MB cache, so we’re really not hitting the SSDs here.
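
A toy model shows why a caching controller inflates burst numbers: a re-read of a region small enough to fit in the card's DRAM never touches the flash at all. The throughput figures below are illustrative ballpark values, not precise measurements:

```python
# Toy model of burst versus sustained reads through a caching RAID controller.
CACHE_MB = 256          # the 5405's onboard DDR2 cache
CACHE_SPEED = 755       # MB/s, roughly the burst rate HD Tach reports
ARRAY_SPEED = 550       # MB/s, a ballpark sustained read figure for the SSDs

def effective_read_speed(region_mb: int, already_cached: bool) -> int:
    """Throughput for reading a region once, depending on whether it is
    resident in the controller's cache."""
    if already_cached and region_mb <= CACHE_MB:
        return CACHE_SPEED      # served straight from the card's DDR2
    return ARRAY_SPEED          # has to come off the SSDs themselves

print(effective_read_speed(64, already_cached=True))     # burst-style re-read
print(effective_read_speed(2048, already_cached=False))  # sustained transfer
```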

While still in the realm of near-instantaneous, the X25-E array’s random access time is just a sliver higher than that of a single-drive config. This result was consistent across all three of our test runs, as well.

HD Tach’s margin for error in its CPU utilization test is +/- 2%, which leaves the X25-E array essentially even with the rest of the field despite its much higher sustained throughput.

Conclusions

Although the recent wave of solid-state hard drives that’s flooded the market primarily targets mobile applications, SSDs aren’t quite ready to replace their mechanical counterparts for most users. The price is simply too high at the moment, not only in terms of the total cost of a drive, but the cost per gigabyte, as well. However, we don’t have to wait for prices to fall for SSDs to make sense in the enterprise world. For those less interested in storage capacity and more concerned with throughput and the ability to handle a barrage of concurrent I/O requests comfortably, solid-state drives like Intel’s X25-E Extreme offer a compelling performance-per-dollar proposition.

Interestingly, the perks that make SSDs attractive for notebooks also pay dividends for enterprise RAID configurations. The X25-E’s 2.5″ form factor is easy to pack into low-profile rack-mount enclosures, and thanks to the drive’s very low power consumption, there’s little need to worry about excessive heat. Because solid-state drives lack moving parts, the environmental vibration that can become problematic in a tightly-packed array isn’t an issue, either.

As we’ve seen today, a collection of X25-Es in RAID 0 can be very fast indeed—under the right circumstances. You need the right sort of workload to exploit the enormous performance potential of four of the fastest flash drives on the market. With the Adaptec 5405, our array showed its best performance scaling in sustained transfers, particularly real-world writes. As one might expect from solid-state storage, the array also made short work of our multitasking and multi-user tests, delivering the best results under our most demanding loads.

Naturally, a four-drive X25-E Extreme array is going to be overkill for most—it is a $2000 storage solution, after all. But if you have the right sort of workload, there’s staggering performance to be had.

Comments closed
    • myxiplx
    • 10 years ago

    I think the comments suggesting the RAID card is the bottleneck may be right. The spec page of the 5405 quotes a data transfer rate of 3Gb/s per port. Note the small ‘b’. 3Gb/s = 384MB/s, and that ties in pretty closely with the highest write rate seen in the test, 366.1MB/s.

    From the figures, it seems pretty likely that the controller is the bottleneck here. It can’t actually cope with the performance of these drives.

    A 5805 card with two internal ports may have been a better bet.

    • indeego
    • 10 years ago

    Page 8/bottom:

    “With just one exception, our four-drive X25-E Extreme array is the class of our iPEAK multitasking tests.”

    typo <.<

    • issa2000
    • 11 years ago

    you hit a bad raid card max transfer limit of 560.. my 4x VR300 do this (tweaked).. get a better raid card that has been shown to do 800MB/sec

    • Luminair
    • 11 years ago

    the problem with this guy is he doesn’t think outside the box

    how do he and we know how the other raid types will perform if he doesn’t do and publish the benchmarks

    believe it or not raid 5 works well with ssds, but you wouldn’t know it from this piece of work

    • Alatar
    • 11 years ago

    “Four X25-E Extreme SSDs in RAID 0 deliver by far the highest sustained throughput we’ve ever measured in HD Tach, and it doesn’t matter whether you’re reading or writing. We don’t see anything close to a 4X increase in performance over a single-drive config, though.”

    As several others have mentioned, this may well be because of the I/O controller setup you chose for this test. Our own fairly extensive benchmarks on sustained read speed show that most modern motherboards equipped with 8-10 SATA ports can continuously and simultaneously run all ports flat out running simple Windows software RAID 0.

    In other words, connect six drives each individually capable of 100MB/S transfer speed to six motherboard SATA ports (and/or SATA ports on a PCI-E controller card), stripe them with Windows Disk Manager, and you get almost exactly 600MB/S transfer speed (as measured by Microsoft Research Labs’ DiskSpd, a benchmark utility tuned for massively parallel I/O arrays).

    No need at all for 3rd party RAID controllers with RAID-0; the OS is about as good as you can get…

    • sativa
    • 11 years ago

    To be honest, it really doesn’t make any sense to me why these drives (and the ram drive) should be so close to the hard drives in the “real world” tests.

    From personal experience, fast SSDs (when not randomly freezing) are mind blowingly fast compared to a velociraptor in general computer usage. Those benchmarks really don’t reflect my experience at all.

    • Krogoth
    • 11 years ago

    Nice Work. It is a shame you could not get the chance to bench RAID levels that would be intensive on the CPU (RAID 5 or greater).

    The review just shows that how hardware RAID is slowly becoming more pointless to get when high RPM HDDs are getting replaced by SSDs. The memory buffer on true hardware controllers is meant to help reduce the latency associated with accessing data on different HDDs across the array. SSDs are so darn fast at random access speed that it practically eliminates the need for it. No CPU overhead is the last remaining benefit of hardware RAID. However, multi-core and super-fast CPUs do minimize its impact.

      • UberGerbil
      • 11 years ago

      No, you still need a buffer to handle the stripes and parity calcs when doing writes on RAID 5, etc. (aka the “small writes” problem).

    • sativa
    • 11 years ago

    did you guys test this setup yourselves in real-world usage?

    do those intel drives exhibit the random freezes that have plagued my attempts to use SSDs?

      • TechNut
      • 11 years ago

      The freezes of some SSD’s on the market are related to a well known problem with the embedded JMicron controller inside them. The controller has issues with writes and pauses when the write queue is full (or something like that). Intel uses their own controller and design so no issue there.

      If you Google “JMicron SSD problem”, you’ll get a tonne of hits.

    • fantastic
    • 11 years ago

    Very interesting stuff. Real change is coming to the computer world. I could put all of my crap on an X25-M if I wanted to replace my huge WD 640 SE16.

    Of course, the only way to squelch the critics of the P4 is to take the exact same array and pop it into a Phenom II or Core i7 or something. Maybe you could do that as part of a review of the RAID controller instead of the drives, and then link the two stories. If you’re going to review the controller, please test it with more than Windows.

    I don’t care enough to complain, but when I see Windows reporting 50% load, I think about one core running full bore and the other totally idle in a dual core system. Not a complaint… just saying.

    • Trymor
    • 11 years ago

    Can the wear leveling algorithms also add to the overhead of the RAIDed SSDs? I would think it would only be microseconds, but unlike platter storage, couldn’t the data be written to different parts of the SSD every time?

    • laserbrain
    • 11 years ago

    The random access time measurement with only one decimal place to the right of the point is kind of pointless for flash and RAM based devices. Why not use a program that can measure it more precisely?

    Also, since you get 560.5 average read for the RAID, which is not close to the 4x average read speed (236.5 x 4 = 946) you might expect, but the RAID is able to deliver 755.3 in burst mode (i.e., the bus is not the limitation for the aggregate read), it seems to me that the RAID card is meant to deliver aggregate reads up to 4x the average read of a normal hard disk (around 100). I.e., you picked the wrong RAID card to properly max out the performance of the flash disks. You probably need one that supports aggregation of 10 rotating disks.

    • opht
    • 11 years ago

    The drawback to this test is that it doesn’t give a good picture of the capabilities of 4 X25-E drives in raid. My reasoning is that the drives experience a bandwidth bottleneck due to the limitation of one branched sata port, which has a max bandwidth of about 300MB/s. If these drives were tested with a raid card that had multiple sata ports, you would be able to take advantage of greater bandwidth. For example, a raid card with 4 sata ports (4 x 300MB/s) would offer combined bandwidth that can more adequately take advantage of the pci express x8 slot, which has a claimed bandwidth of 2GB/s.

    • Convert
    • 11 years ago

    All these requests for raid 1, 5, 10, reduced amount of drives, different test systems and a different controller but you are all overlooking the major issue here.

    They weren’t tested in my system.. in an undisclosed location in Mexico… with no expectation to ever get them back.. so screw you guys.

    Thanks Geoff for the additional tests you ran from the original article, much appreciated.

    • Bion1c
    • 11 years ago

    Thanks a lot for testing these out with a decent raid controller, the peanut gallery (myself included) has been asking for this!

    it would have been great, though, if you could have retested the ANS-9010 using the adaptec raid card, as it would have been a much fairer comparison. OK, it might not make any difference, but I’d like to know.

    Understand about the old P4 testbed- not ideal, but yes, it is preferable to have a standard platform to do comparisons. Could you please consider acquiring a (currently) high-end raid controller as part of your replacement test bed to standardise any raid0 tests you do in future?

    The article really does demonstrate that RAID0 has quite a limited value for desktop uses, and the returns are diminishing more and more as each device gets faster.

    Overall: very interesting read over morning coffee, thanks!! 🙂

    • indeego
    • 11 years ago

    /. ‘ed <:)<

    • albundy
    • 11 years ago

    would have been nice to see those mechanical drives in RAID 0…I am sure that long blue line would somehow be matched.

    • alphaGulp
    • 11 years ago

    Hello, nice article! Interesting how so many tests didn’t go in favor of the X25-E/RAID config, given that it displayed the best read and write throughput, with excellent access latencies. The latter were a tiny bit higher than some of the other cards (not sure if that was within the std. error though), so maybe that explains it.

    Anyhow, I’ll chime in with my own suggestions of what else to test! 🙂

    It would have been great to have been able to test the ANS-9010 with a decent RAID controller, but given that you guys probably sent it back already, how about testing the X25-E’s using the ICH7R south bridge RAID controller? It would be interesting to compare the performance of the controllers, and beyond that it would certainly enable a more even comparison of the X25-E against the ANS-9010.

    • SomeOtherGeek
    • 11 years ago

    I LOL’ed on several places – especially at l[

      • GreatGooglyMoogly
      • 11 years ago

      There are lots of these VoodooExtreme/HardOCP immature thingies these days in TR’s articles.

      I find them lame and I’m no prude.

        • SomeOtherGeek
        • 11 years ago

        Maybe you need to laugh a little more and not take things so seriously?

          • GreatGooglyMoogly
          • 11 years ago

          Sure, give me some funnays and I will. Robert “Apache” Howarth style witticisms I can do without.

    • gwalker
    • 11 years ago

    I would like to see a comparison of 15K SAS drives to this array, based on cost not number of units. It should be possible to make an 8 drive 15K SAS array for the cost of these 4 SSDs, is it faster?

    Do SSDs still make sense from a cost / performance point of view?

    Also, would something like a LSI MegaRAID 8708 produce better results? I have found that card to be particularly fast, and quite affordable.

    Good to see some server hardware being tested.

    • TechNut
    • 11 years ago

    Geoff,

    Can you re-test this with RAID 1 or RAID 10 instead?

    In general, RAID 0 stinks for performance-intensive situations. RAID 0 stinks because it uses striping. You’ll find the big disk vendors, like HP, EMC, etc., do not offer RAID 0 on their enterprise storage systems (except on their low-end SMB products, which are NOT enterprise). They offer modified versions of RAID 5 (RAID S in EMC parlance), RAID 1, RAID 10, and a few other more exotic modes.

    I suspect the controller really isn’t to blame here. For SSDs and RAID 0 you need to tune the stripe size to that of the SSD. If the stripe size is 128KB (which is the default on many RAID cards), it means that unless the data size is > 128KB, the data sits on one disk or the other. You lose the benefit of read-ahead since one disk gets pounded with I/O and the other sits idle. Same problem for writes: if the data size is < 128KB, one disk gets pounded. The controllers do try to spread it out, but…

    If you re-test with RAID 1, you’ll notice a big improvement in the read performance. This is because the controller can handle two simultaneous requests at a time (one per disk/SSD). If you have a 4 SSD RAID 1, you would be able to handle 4 reads at once. Writes of course are in parallel and should be equal to single disk performance for 2 disk/SSD sets.

    And.. if you go RAID 10, you’ll discover the performance should be eye popping, especially if you match the stripe size to the SSD block size. A “mirror of stripes” i.e. RAID 10 requires at least 4 disks, and gives ultimate data protection and performance since literally all disks are used in reads and writes.

    As a BTW, you’ll find many IT Storage Architects using RAID 10 (or their storage vendor’s equivalent) for their applications (think banking), given the need for a huge number of transactions and sustained performance. That’s why they’d consider SSD’s in the first place. IT Storage Architects will tend to use RAID 5 for bulk protection of storage at a reasonable cost. Before people go off on RAID 5 sucks, blah blah, remember true enterprise RAID arrays have 32-256GB of cache on them, so RAID 5 performance isn’t an issue. And… typically in enterprise frames (HP, EMC) the LUNs are mirrored internally and then a RAID 5 set is made out of those LUNs. The RAID 5 set is then presented to the outside world. RAID 5 over mirrored sets gives better performance and uses fewer disks than a RAID 10 set would, lowering the cost of the storage in use.

    For the small home server or low-end setup, RAID 10 versus RAID 1 does not matter, but as you have correctly noted, when SSD’s are used in the enterprise, their performance, especially in those modes, is king. Either RAID 1 or RAID 10 with those sweet SSD’s you have will have them singing.

    Bottom line, RAID 0 sucks.

    • Kougar
    • 11 years ago

    Interesting article, thanks for posting this. I was very curious to see how four Intel SSDs would perform on Adaptec’s best controller. Unfortunately that is just a huge disappointment!

    On the bright side, Intel cut their SSD prices again today.

    X25-E ~$425
    X25-M $399 shipped @ Newegg

      • indeego
      • 11 years ago

      wow. $200 less than October 08 <.<

    • ssidbroadcast
    • 11 years ago

    Dang my Macbook boots faster.

    /bait.

    • JdL
    • 11 years ago

    Edit: Um, wow. Holy cow. Intel ICH7R and a P4 Extreme Edition as the testbed?

    Can anyone say bottleneck?

    This isn’t really a fair review like we’re used to seeing from TR. I’d like to see a RAID configuration using on-board RAID controllers as are typical with consumer boards.

    • Flatland_Spider
    • 11 years ago

    Would it be possible to get a review of different RAID cards using this setup? (*Hint*Hint*) 😉

    • Freon
    • 11 years ago

    Has there been any attempt to test different RAID block sizes on these SSDs?

    • tfp
    • 11 years ago

    The thing with raid that I found when messing with servers I was using for builds/compiles is that stripe size made a good amount of difference, because of the general file sizes accessed, how many files were accessed at once, and things of that nature. (For example, I was running 5 or 6 builds at once.) Also, for mostly random access it was recommended by Dell on the PERC 6/i to turn off things like read-ahead and adaptive read-ahead. I also think that, depending on the main usage, there were instructions to turn off the card’s cache in general. Drive caches are disabled because of the use of the cache on the card, and because if there was a system failure, items in the drives’ cache that weren’t yet written to the HD would be lost. Enabling drive cache didn’t help, or slightly hurt performance, in the setup I was using.

    Basically I found there was a long, drawn-out raid optimization process that needs to be done depending on what the main usage of the server is. Some of the benchmarks could be impacted by those kinds of optimizations, though I understand why it wouldn’t be done in this kind of review. It would be interesting to know if those kinds of changes have as much of an impact on SSDs as they do on HDs.

    • Mr Bill
    • 11 years ago

    The SSD array is very fast in the throughput/access time. I wonder about having such an array as part of a hybrid drives+SSD controller where the SSD array is the cache for the drive array. Then if the system goes down you still have transaction data in the SSD which can be updated to the drives.

    • Prodeous
    • 11 years ago

    I personally was hoping to see a retest of the ACard with this controller. I am wondering if the on-board controller was holding it back, or not.

    • mboza
    • 11 years ago

    Interesting, a little disappointing, but so many more questions.
    How do 4 raptors or decent 7200 rpm drives compare? – Seek times should go up as you wait for all four drives, but throughput should scale well?

    How does the onboard raid compare? Is the card adding some additional latency because it is raid-0, or just because it is a card? cf, how does a single drive attached to the card do? And how does the DDR2-based disk do with the Adaptec card?

    How does the write caching on the card affect how the infamous JMicron-controller-based drives cope with small random writes?

    SSDs in a raid should scale near perfectly, are we just seeing the bottleneck move elsewhere in the system?

    And is SATA 3 going to be enough? I thought SSDs used a RAID-0 style implementation internally anyway, so throughput would scale with density (or number of memory chips)

      • da sponge
      • 11 years ago

      I’d like to see 4 2.5″ 15k rpm sas drives in the mix so you can get a comparison with the current high end in platter based storage.

        • jwb
        • 11 years ago

        SSDs in a raid are not going to scale perfectly because there is a complex interaction between the filesystem block or extent size, the RAID stripe width, the number of flash channels per SSD, the size of the flash erase block (typically 128KiB), and finally the 512B size of a SATA i/o.

    • S_D
    • 11 years ago

    I think this review makes a compelling case for upgrading the storage test platform. I can understand the need to keep some consistency, to make it easier to test newer drives against old, but I feel that on this (what should be) killer storage array, the rest of the platform should be reasonably up to date also. C2D’s been out for what, 2.5 years now?

    Please take this as constructive criticism for what was otherwise a decent read.

      • Farting Bob
      • 11 years ago

      Or maybe do an article testing the current setup against a completely new one and see if they are comparable; that would find out if the board is bottlenecking anything.

    • Chillectric
    • 11 years ago

    What exactly is holding back load times? Even with really fast SSDs the load time is not decreasing very much at all.

    Is this a bottleneck from memory, cpu, OS, the game or what?

      • mboza
      • 11 years ago

      I would guess CPU, assuming that the levels are all compressed in some way. Would always be nice to see a test done though.

        • Chillectric
        • 11 years ago

        Makes me wonder why TR used a Pentium 4 EE for this series of benchmarks…

          • mboza
          • 11 years ago

          For consistency with the previous ones, but is looking quite old. I wonder how the poorest scaling tests from one drive to 4 would improve if they retested on an i7 rig? (hint, hint, and apologies for asking for 7x as many tests in just 2 posts)

            • Chillectric
            • 11 years ago

            And is XP SP2 the optimal OS for SSDs? There are so many factors hindering these drives.

    • Aphasia
    • 11 years ago

    I would hope to see a test with the same raid card and 4 normal drives, like 7200rpm 3.5″ drives, and even more fun, 4x Velociraptors.

    I can easily see the need for an SSD device in a laptop, but with the high cost, I’d rather have a velociraptor in my workstation. Although that doesn’t get you away from the near-instantaneous access an SSD gives.

    What I would love to see is something of a hybrid. A velociraptor mated with an ssd device of moderate size, preferably with some intelligence that would place small and often-opened files on the ssd part, and larger and more seldom-accessed files on the drive part. The velociraptor by itself is really nice and gives a good boost in perceived performance, but getting rid of all those pesky tiny files and their read/write latency would be much better. I can’t really see an incredibly high transfer rate being needed all that much outside specialized applications. The Velociraptor is plenty fast for most users.

    • slash3
    • 11 years ago

    I’d also be curious to see if the performance was similar (or even improved) when run from an integrated ICH9R or ICH10R controller. I’ve had numerous occasions over the years where an expansion card controller introduced new limitations that left me scratching my head at performance results.

      • indeego
      • 11 years ago

      Adaptec’s are pretty much the standard for the enterprise <.<

    • 0g1
    • 11 years ago

    Not really any point to RAID 0’ing SSD’s. Their main advantage is quick access time. Compare 4 of these SSD’s to almost any 6 HDD’s in RAID 0 and the bandwidth is basically the same. Better off just buying one of these SSD’s if you need IO (like databases or whatever) or want to shave a little off load times.

    • eitje
    • 11 years ago

    I was really expecting more.

    I have to wonder what performance would have looked like with 4 drives running RAID 0 from the motherboard, rather than through a stand-alone controller.

    • Sargent Duck
    • 11 years ago

    I really want to read this right now, but I have to get up for work tomorrow…where I’ll read this article then. However, I’m pretty sure I’ll be dreaming about this tonight. *Best Homer voice* SSD’s in raid *Homer drool*

    • shank15217
    • 11 years ago

    Hardware raid is an interesting beast. What you managed to show in these benchmarks is that Adaptec’s raid controller is nothing special, and all it really manages to do is add a layer of access latency with its cache. Also, these controllers are meant for mechanical disks, to hide their slow random reads and writes. I am willing to bet that with SATA rev 3.0, SSDs will come out to take advantage of the larger bandwidth and more efficient SATA protocol and truly set themselves apart from mechanical drives.

    • Vasilyfav
    • 11 years ago

    I really don’t see any use for it in consumer homes, even if they are hardware enthusiasts. Between the price, the boot time and the increased cpu utilization, there also isn’t much real world benefit (or at least not enough for a dedicated RAID card and 4 SSDs – the scaling was quite poor, except for an artificial HDtach benchmark) in many benchmarks presented.

    • Imperor
    • 11 years ago

    Not exactly going the budget route here are we? 🙂
    Sweet performance though, even better with SATA 6.0 I guess … waiting…

    • Kurotetsu
    • 11 years ago

    Wow, if those read/write speeds taken from HD Tach are any indication of what to expect from OCZ’s built-in Raid-0 SSD drives, SATA 3.0 is going to be a massive hamstring for them.

    • ChronoReverse
    • 11 years ago

    Will you have a chance to look at the G.Skill Titan SSDs? Although it uses the JMicron controller, it has a special configuration that apparently solves the stuttering issue.

    Write speeds get pretty tremendous too.
