OK, I have no idea who uses Areca RAID controllers in business. Maybe for custom-build stuff, along with the LSI MegaRAID controllers? Most of the big vendors I've dealt with, Dell and HP, have their own solutions, the PERC and SmartArray lines. That being said, here's what I have found. I'll post the results/raw numbers/graphs once I get a chance to send the spreadsheet I've been working on to my home address, so I can upload it to my geoblocked-from-work web space for y'all to download.
SAS and SSD
HP DL380 G5 458565-001, P56 5/2/2011 firmware
2 x Xeon E5430 2.66 GHz
32 GB RAM
P400i 512 MB BBWC, 7.24 firmware
146 GB 15K 6G DP SAS 512547-S21
256 GB Samsung 840 Pro SATA
Set as RAID 1+0 with 2 drives, 256K stripe, for RAID test.
Set as 2 RAID 0 arrays with 1 drive each, 256K stripe, Windows Dynamic Disks, mirrored, for WIN test.
Set as 1 RAID 0 array with 1 drive, 256K stripe, for single drive test.
SATA
Dell PE2900 II, 2.7.0 firmware
2 x Xeon E5345 2.33 GHz
12 GB RAM
PERC 5/i 256 MB BBWC, 5.2.2-0072 firmware
500 GB 7.2K Western Digital
Set as RAID 1 array with 2 drives, 128K stripe, for RAID test.
Set as 2 drives on Intel 5000X chipset's SATA controller, Windows Dynamic Disks, mirrored, for WIN test.
Set as 1 drive on the Intel 5000X chipset's SATA controller for single drive test.
Set as RAID 0 array with 1 drive, 128K stripe, for single drive test.
For the 15K SAS drives, the same settings as this system's RAID tests were used.
Testing Procedures:
SAS and SSD followed TR Iometer testing procedures, outlined here:
viewtopic.php?f=5&t=86757
SATA followed TR testing procedures, with the exception that a 100 GB test file was used instead of the whole disk.
OS was Win2K8 R2 Enterprise, fully patched at the time of testing.
Observations:
The RAID mirror using SATA drives plateaued when the queue depth hit 4, in one instance when it hit 8. The software mirror continued to scale as the queue depth rose. The SATA drives closely tracked each other in performance up until the RAID mirrors plateaued. In all cases with the SATA drives, a mirrored configuration provided a significant performance advantage over a single drive. The single-drive RAID configuration did not scale at all, while the single drive on the SATA controller did. This may be a quirk of the PERC 5/i controller.
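To make the plateau idea concrete, here's a small sketch that flags the queue depth where an IOps curve stops scaling. The numbers are made up to mimic the shapes described above, not the actual results from the spreadsheet.

```python
# Hedged sketch: find the queue depth where an IOps curve stops scaling.
# The figures below are illustrative, not the actual test results.

def plateau_depth(iops_by_qd, threshold=0.05):
    """Return the first queue depth after which IOps stops improving
    by more than `threshold` (fractional gain), or None if it keeps scaling."""
    depths = sorted(iops_by_qd)
    for prev, cur in zip(depths, depths[1:]):
        gain = (iops_by_qd[cur] - iops_by_qd[prev]) / iops_by_qd[prev]
        if gain < threshold:
            return prev
    return None

# Shapes mimicking the observed behavior: the hardware mirror flattens at a
# queue depth of 4, while the software mirror keeps scaling through 32.
raid_mirror = {1: 150, 2: 210, 4: 240, 8: 242, 16: 243, 32: 243}
win_mirror  = {1: 150, 2: 215, 4: 280, 8: 350, 16: 430, 32: 510}

print(plateau_depth(raid_mirror))  # queue depth where the RAID mirror flattens: 4
print(plateau_depth(win_mirror))   # None: still scaling at the depths tested
```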
The RAID mirror using SAS drives closely tracked the software mirror configuration. In 3 out of 4 cases, the RAID mirror appeared to be pulling away in performance as the queue depth hit 32. Higher queue depths were not tested, so it is unknown whether the trend would have continued to a queue depth of 256. In general, the software mirror performed slightly better in most of the tests--though when we're talking a max of 50 IOps difference when the scores are going up to 800, it's not a huge difference. Mirror configurations produced significant performance advantages over a single drive, the largest being a score twice that of the single drive in the Web Server test pattern at a queue depth of 32.
The RAID mirror using SSD drives tracked similarly to the software mirror in 3 out of 4 tests. The RAID typically performed better than the software mirror in these tests, sometimes by up to 800 IOps with the scores reaching a high of 7000 to 15000. Of note, at a queue depth of 32, the software and RAID mirror scores reached parity. The File Server test pattern was odd, with the RAID mirror delivering almost twice the score of the software mirror at a queue depth of 4 (8000 versus 4500 IOps). RAID and software then reached parity at a queue depth of 8 and closely tracked up to a queue depth of 32. Single-drive testing was not done on SSD configurations.
Further testing using the same HP 15K DP 6G SAS drives in the PERC 5/i system showed how much of a performance difference proper enterprise drives make compared to regular desktop drives. The PERC 5/i was competitive with the SmartArray P400i and performed far better than when it used the 7.2K SATA desktop drives. A SAS/SATA interposer may help increase the SATA drive performance, but that testing is beyond the scope of this set of tests. Plus, I don't think many businesses use interposers; they just buy SAS instead of SATA.
At the highest queue depth tested, the P400i begins to pull away in performance versus the PERC 5/i. This may be a consequence of the P400i's 512 MB of cache versus the PERC 5/i's 256 MB of cache.
Conclusions:
The PERC 5/i seems to suck at higher queue depths with a mirror configuration. The drives attached to it were no screamers, but the software mirroring continued to scale at higher queue depths whereas the RAID did not. The P400i continued to scale at higher queue depths. Further testing with the same 15K SAS drives used in the P400i showed the PERC 5/i was largely being held back by slow drives.
Hardware-based RAID can either be better or worse than software-based RAID, depending on the controller and drives. Often, the differences are not significant.
Mirrored drive configurations provided significant performance gains over single drive configurations.
Enterprise-grade drives, especially 15K dual-ported 6G SAS drives, easily outperform desktop-grade SATA drives. Though this should be obvious, it's interesting to see the large performance gap in the real world.
Raw Data:
http://screenshots.rq3.com/monk/mirror.xlsx
Made in Excel 2010, but it works in LibreOffice Calc too, though the graph labels didn't quite make it. I didn't split off the read versus write performance, but the data's all there for anyone who wants to graph that stuff.
For the graphs, red and green are single drives: green is a single drive on a RAID controller, while red is a single drive on that system's SATA controller. The SAS system didn't offer a raw SATA link to test a non-RAID single drive. Orange is the RAID mirror; blue is the Windows-based mirror.
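If anyone wants a quick-and-dirty read/write split without reworking the spreadsheet, you can approximate it from each pattern's read percentage. The fractions below are the commonly cited mixes for the TR/Iometer access specs; treat them as assumptions and verify against the actual access specification files before trusting the split.

```python
# Hedged sketch: estimate the read/write split of a total IOps score from a
# test pattern's read percentage. These read fractions are assumptions based
# on commonly cited Iometer access-spec mixes, not taken from the spreadsheet.

READ_FRACTION = {
    "File Server": 0.80,
    "Web Server": 1.00,
    "Database": 0.67,
    "Workstation": 0.80,
}

def split_iops(pattern, total_iops):
    """Split a total IOps figure into (read_iops, write_iops)."""
    r = READ_FRACTION[pattern]
    return total_iops * r, total_iops * (1 - r)

# Example with a made-up score of 800 IOps on the Database pattern:
read_io, write_io = split_iops("Database", 800)
print(f"{read_io:.0f} read / {write_io:.0f} write")  # 536 read / 264 write
```

This only estimates the split; the per-pattern read and write columns already in the spreadsheet are the authoritative numbers.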