http://techreport.com/news/25223/the-fi ... ost=752072

You never went back and replied to either me or Spunjji, so I'm curious as to what your responses would be.
Moving on, though: just because you have a low-MHz ASIC doesn't mean it performs its task poorly compared to a current CPU. The first iPod had some dumpy little DSP in there for MP3 decoding while, at the time, people were struggling to get their 200 MHz Pentium MMXs to play back the same files smoothly. Or look at DVD playback: cheap embedded silicon in a player versus a "powerful" desktop computer that couldn't always play a disc without dropped frames. It's been many years since this was a consumer-facing issue, so it's kind of out of mind; I guess HD video decoding is the latest round of low-power DSP versus powerful general-purpose CPU.
Why do I bring this up? The RAID ASIC is closer to the action, so to speak, so it loses less performance potential because the stripe data doesn't have to make a round trip through the I/O system to the host CPU just to be crunched. For parity operations like RAID 5 and RAID 6, hardware RAID is going to outperform software RAID because of this. It's very case-dependent, though; in some applications it might not be an issue at all, or the performance delta might be small enough that you won't care.
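To put the parity point in concrete terms, here's a minimal Python sketch of the XOR math a RAID 5 implementation has to do for every stripe it writes (RAID 6 adds a second, Galois-field parity on top of this). With software RAID this work runs on the host CPU, so the stripe data has to travel over the I/O path to get there; a hardware controller does it on the card, right next to the drives. The block size and helper name here are just for illustration, not anyone's actual implementation.

```python
# Rough sketch of RAID 5 parity: the parity block is the XOR of all data
# blocks in a stripe. Block size and data are arbitrary; real implementations
# work on controller- or md-defined chunk sizes with vectorized XOR.

def raid5_parity(data_blocks):
    """XOR all blocks in a stripe together to produce the parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Example: a 4-drive array (3 data blocks + 1 parity block per stripe).
blocks = [bytes([d]) * 64 * 1024 for d in (0x11, 0x22, 0x33)]  # three 64 KiB chunks
parity = raid5_parity(blocks)

# Losing any one data block is recoverable: XOR the parity with the survivors.
recovered = raid5_parity([parity, blocks[1], blocks[2]])
assert recovered == blocks[0]
```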
One of the recent rants about RAID "facts" had this to say:
Software RAID has advanced significantly in the last few years (as of 2012). Hardware RAID still has the three key vulnerabilities it has always had: First, it is expensive. Second, if your RAID card fails, your RAID volume fails; it is a single point of failure. Third, if your RAID card fails, you must find an exact replacement for that card to recover your data.
If you care about data, the RAID card is the least of your expenses. The server itself is probably as expensive if not more, and the storage disks in an enterprise setting will QUICKLY outpace the cost of a RAID card. I'm putting together a new storage system for my workplace, and the P812 RAID card cost about $400 with 1 GB of flash-backed write cache. The 12 x 3 TB SAS drives will run me about $5,000, maybe closer to $6,000.
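Quick back-of-the-envelope math on those numbers (my rough quotes, taking the midpoint of the drive estimate):

```python
# Share of the RAID card in the storage build described above
# (ballpark figures from my quotes, not list prices).
card = 400      # HP P812 with 1 GB flash-backed write cache
drives = 5500   # 12 x 3 TB SAS drives, midpoint of the $5k-$6k estimate
print(f"Card is {card / (card + drives):.1%} of the storage spend")  # ~6.8%
```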
You yourself said you were looking at SANs in the $120k range so in this instance, who gives a crap about an "expensive" RAID controller? It's not a concern.
I'm in the HP ecosystem. For quite some time now, any SmartArray controller can read arrays created by any other SmartArray controller. If my controller dies, I can either put in a different controller and plug it into the SAS backplane in the server, or migrate the drives to another server entirely, and the array will still work. And to the last point: no, you don't need the exact same card in the HP ecosystem.
These are the main red flags people seem to throw up when talking about hardware versus software RAID. The latest one is that software RAID lets you use SSDs for caching. Well, current-generation HP controllers support SmartCache, which does the same thing. Oh, and I guess software RAID is supposed to be easy to "grow" if need be. I know the HP RAID controllers, at least, can expand arrays in the same way; I think Adaptec's can as well.
You are using $120k SANs with SAS backplanes and SATA drives? Really? I mean, it's not a huge issue, but why didn't you use SAS drives instead (I'm going by your news-item comment for that info)? They'd give you dual-path request handling and a more sophisticated way of dealing with error conditions. I'd be interested in the four products quoted, to see what you actually bought and what you were looking into buying.
Relevant links:
HP SmartCache:
http://h18004.www1.hp.com/products/serv ... index.html

RAID "facts":
http://augmentedtrader.wordpress.com/20 ... ings-raid/

Adaptec whitepaper on hardware/software RAID, from 2006:
http://www.adaptec.com/nr/rdonlyres/14b ... aid_10.pdf

Linux hardware/software RAID benchmarking, from 2008:
http://www.linux.com/news/hardware/serv ... tware-raid

I didn't really look much further for recent benchmarking. My own very limited performance testing showed that hardware RAID and Windows software RAID (mirroring only) performed pretty close to each other. However, that was just a 2-drive mirror, and spindle speed had a far bigger impact on performance than hardware versus software in that limited test. For an enterprise NAS that's a glorified Synology (commodity drives, custom OS/disk management, custom hardware enclosure), I can see why you'd use a software solution.
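For context, the kind of quick check I mean is nothing fancier than timing large sequential writes against the volume; something along these lines (a minimal sketch with a placeholder path and sizes, not a rigorous benchmark):

```python
# Minimal sequential-write throughput check against a mounted volume.
# TEST_PATH and the sizes are placeholders; point it at the array under test.
import os
import time

TEST_PATH = r"E:\throughput_test.bin"   # hypothetical mount point of the array
BLOCK = 1024 * 1024                     # 1 MiB writes
TOTAL = 1024 * BLOCK                    # write 1 GiB in total

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TEST_PATH, "wb", buffering=0) as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())                # make sure the data actually hits the disks
elapsed = time.perf_counter() - start
print(f"{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TEST_PATH)
```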
However, in your particular case, I don't know that you'd get many of the portability benefits if your SAN failed. Are you gonna move the drives into a different server and have it just work? If the storage pool is spread over a few different NAS boxes in the SAN and one of them fails, what's the recovery like? I don't have any experience with a SAN, just DAS and NAS, so I actually don't know the answers to these questions; they aren't rhetorical.