LukeCWM wrote:I understand that SSDs are shipped with extra storage to transition into use as older sections slow down. I hear enterprise SSDs typically have 20% of spare storage for this function, while consumer drives typically have 7%.
Yep, this is called overprovisioning. The NAND chips in SSDs are good for roughly 3000-5000 write/erase cycles per page (a page being roughly analogous to an old HDD block) before they wear out. To minimise the number of cycles used up, SSDs use the spare area on the drive for garbage collection, replacing retired pages, and generally being smart about only erasing and rewriting a page when necessary. More spare space typically means more endurance.
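As a rough back-of-envelope: total endurance scales with capacity times P/E cycles, divided by write amplification, and overprovisioning mainly helps by keeping write amplification down. A minimal sketch with illustrative numbers of my own choosing:

```python
# Rough SSD endurance estimate. All figures are illustrative assumptions,
# not the specs of any particular drive.

def endurance_tb(capacity_gb, pe_cycles, write_amplification):
    """Approximate total host writes (TB) before the NAND wears out."""
    # Each cell survives ~pe_cycles rewrites, but the controller writes
    # more to NAND than the host asks for (write amplification > 1).
    return capacity_gb * pe_cycles / write_amplification / 1000

# Same NAND, but more spare area generally lets the controller keep
# write amplification lower under sustained load.
print(endurance_tb(180, 3000, 3.0))   # little spare area, heavy load -> 180.0
print(endurance_tb(180, 3000, 1.5))   # generous spare area           -> 360.0
```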
LukeCWM wrote:Two different people speculated that you could just format a consumer SSD at only 80% capacity to achieve the same thing, but I can't find any evidence.
I could try and explain it, but I won't. Read this instead to get some idea of what overprovisioning does.
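On the 'format at 80%' question: as I understand it, the trick works because LBAs that have never been written (on a freshly secure-erased drive) are free for the controller to treat as spare area, so you just partition less than the full capacity. A hypothetical sizing helper:

```python
# How big to make the partition if you want to leave part of a drive
# untouched as extra spare area. Hypothetical figures; secure-erase the
# drive first so the unpartitioned LBAs really are blank.

def partition_gb(advertised_gb, spare_fraction):
    """Capacity to actually partition, leaving the rest unwritten."""
    return advertised_gb * (1 - spare_fraction)

# Give a consumer drive enterprise-style ~20% spare instead of ~7%:
print(partition_gb(180, 0.20))  # -> 144.0GB to partition, 36GB left spare
```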
LukeCWM wrote:I can't get much info on whether TRIM will work in a server environment, or how bad it gets if I need to rely on garbage collection only.
I can't say for definite, but I ran drives in RAID before TRIM was supported and it was fine. Given that TRIM in RAID is still a relatively recent thing, and applies to Intel controllers only, assume that it won't work in a proper server using a PERC or PCI-E SATA controller. As for how bad it will get between garbage collections, it depends largely on the SSD controller, how hard you stress it, and how much overprovisioning you've done. More spare area means the drive is less likely to get itself into a tortured state; the same article as above covers this, I think. From memory, the SandForce SSDs are the most aggressive with garbage collection, so you'd probably want something like the Intel 330 series. Maybe try a trio of 180GB models and format them as 100GB each in RAID5 to give you the 200GB you were originally looking at (sizing worked through below). If it doesn't work out for you, it will still cost much, much less than any Dell/HP/IBM.
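To spell out the sizing maths on that suggestion, here's a quick sketch (the figures are just the ones from the example above):

```python
# RAID5 usable space is (n - 1) * per-disk size, since one disk's worth
# of capacity goes to parity. Figures are from the example above.

def raid5_usable_gb(n_disks, formatted_gb):
    """Usable array capacity in GB."""
    return (n_disks - 1) * formatted_gb

def spare_fraction(advertised_gb, formatted_gb):
    """Fraction of each drive left unformatted as extra spare area."""
    return 1 - formatted_gb / advertised_gb

print(raid5_usable_gb(3, 100))    # -> 200GB usable, as above
print(spare_fraction(180, 100))   # -> ~0.44, i.e. ~44% of each drive spare
```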
LukeCWM wrote:I gather there are broad concerns about running SSDs in RAID, especially consumer SSDs, and including RAID 1. I can't find much hard data.
Neither can I. I've run SSDs in RAID0, RAID1, and RAID5 and I can't say I've had any problems. I've never really hammered the SSDs in a database server though, and only one of those RAID arrays was even in an actual server. In a RAID1, the mirror disk gets the same data written as the master, so it shouldn't suffer any more than a single disk would. RAID5 parity updates could affect the write amplification, but I've not seen or heard about any major problems with SSDs in RAID5.
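For a feel of why parity updates matter: a small random write on RAID5 typically becomes a read-modify-write of the data block plus the parity block, so the array writes roughly twice what the host sent. A sketch of the textbook arithmetic (disk counts are just examples):

```python
# Per-disk write load for small random writes, using the textbook model:
# RAID1 duplicates every write onto the mirror; RAID5 turns each host
# write into one data write plus one parity write, spread over the array.

def per_disk_writes(host_writes, level, n_disks):
    """Average number of small-write blocks landing on each disk."""
    if level == "raid1":
        return host_writes               # every write hits both mirrors
    if level == "raid5":
        return 2 * host_writes / n_disks
    raise ValueError(f"unknown level: {level}")

print(per_disk_writes(1_000_000, "raid1", 2))  # 1,000,000 per disk
print(per_disk_writes(1_000_000, "raid5", 3))  # ~666,667 per disk
```

So per disk, the RAID5 members can actually see fewer NAND writes than a mirror pair, even though the array as a whole writes twice the host data.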
The only thing to warn against with SSDs is that in a RAID1 or RAID5, all disks get written to evenly, which means that if they use up all their write/erase cycles, they will probably all fail at a similar time. If you are slack about replacing hot-spares, the increased chance that two could fail before integrity is restored is more worrying. Both RAID levels only survive one disk failure before the array is lost (nothing new there), but mechanical drives tend to fail more gradually and are much less likely to fail on the same day. One idea to avoid this 'similar lifespan' failure mode is to partition the drives to slightly different sizes. You'll lose even more total array space, but each drive will have a different spare area to use for replacing worn-out NAND pages. Theoretically that staggers drive lifespans a bit more, but I've not seen hard evidence of this in practice.
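If you wanted to try that staggering idea, it's just a matter of picking the sizes; here's a hypothetical helper (the 5GB step is an arbitrary number I've picked, and the effect is theoretical, as I said):

```python
# Generate slightly different partition sizes for each array member, so
# the drives don't exhaust their write/erase cycles in lockstep. The
# 5GB step is an arbitrary illustrative choice, not a recommendation.

def staggered_sizes(base_gb, n_disks, step_gb=5):
    """Each disk partitioned a bit smaller than the previous one."""
    return [base_gb - i * step_gb for i in range(n_disks)]

print(staggered_sizes(100, 3))  # -> [100, 95, 90]
# Note the array will size itself to the smallest member (90GB here),
# so you trade away some usable space in the process.
```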
LukeCWM wrote:Perhaps an option is to spend a bit more for enterprise SSD
Yes you could, but given that consumer SSDs are so much cheaper, you might as well try it. Even if you get multiple disk failures after 12 months, it'll still probably save you money because of the rate at which enterprise SSD prices are falling. What costs $2000 today might cost only $800 next year, so you could (in this example) splurge $1200 on this consumer experiment and still break even if you end up buying enterprise drives anyway.
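Spelling out that break-even arithmetic (the prices are the made-up figures above, not quotes):

```python
# Toy break-even comparison: buy enterprise SSDs now, or trial consumer
# drives and buy enterprise later if they die. All prices are the
# made-up example figures from above.

enterprise_now = 2000        # enterprise array bought today
enterprise_next_year = 800   # same class of hardware a year from now
consumer_experiment = 1200   # consumer drives to trial in the meantime

# Worst case: the consumer drives fail inside a year, buy enterprise anyway.
worst_case = consumer_experiment + enterprise_next_year
print(worst_case)                      # -> 2000
print(worst_case <= enterprise_now)    # -> True: you break even even then
```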
I think you're overthinking this, though. You have 14 users, right? I really can't imagine 14 users destroying an SSD array that quickly. The people burning through X-25M drives on a weekly basis are running multiple virtual database hosts and writing petabytes to the array daily. Even if you do burn through disks, the performance gain may well be worth the risk and the drive costs, but you'll have to evaluate the price/performance/risk trade-off against your own criteria.