I wouldn't be comfortable knowing that access to all my data is 24+ hours away. It's much more attractive if I can cut that in half.
While it would certainly be cool (from a geek cred standpoint) to be able to move that much data that fast, I can't think of a consumer use case where I'd actually need
to do that. Given that most other people almost certainly feel the same, what you're talking about here is a tiny niche market.
I gave up on RAID for this very reason. I built a 3TB RAID-5 array years ago with Intel Matrix RAID. It took 22 hours(!) to initialize. I promptly broke it up, and the disks are still lying around unused.
I've never personally used Intel Matrix RAID... but with Linux MD RAID you can start using the array immediately while it finishes initializing in the background. If Intel doesn't allow that, then that's a problem.
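To illustrate: with mdadm you can create the array, put a filesystem on it, and mount it right away; the initial resync just runs in the background. (The device names below are placeholders for whatever disks you're actually using.)

```shell
# Create a 3-disk RAID-5 array; it is usable immediately while the
# initial resync proceeds in the background.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# No need to wait for the resync to finish:
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array

# Watch the background initialization progress:
cat /proc/mdstat
```

The array is slower than normal until the resync completes, but it's fully usable the whole time.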
We (crazy hoarders) desperately need a disruptive company in the HDD space. One that delivers speed along with capacity, with technology so revolutionary that Seagate and WD have no choice but to merge and play catch-up for a few years just to survive.
See my previous post. Very few consumers need/want that kind of speed or capacity. All of the smaller, potentially disruptive HDD companies have been assimilated into WD, Seagate, and Toshiba at this point, and nobody else is going to want to sink that sort of R&D money into what is effectively a commodity industry. Future advances are going to continue to be evolutionary/incremental, and the drives incorporating any new tech will be costly enough that they will be marketed to enterprise and Cloud users.
Please God, make Elon Musk dream of a world of limitless capacity. Make him see that people everywhere are running out of space and they don't want to store their precious family photos/videos in the cloud.
Current drives are already more than large enough to store family photos/videos.
That's what bothers me. 3 days is too long. Ample time for another drive to fail and take the whole array with it.
That's why there's RAID-6 and ZFS. More than 1 level of redundancy makes the chance of data loss due to multiple failures much less likely, since it would take 3 failures within the rebuild interval to take out the entire array.
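A rough back-of-the-envelope way to see why the extra parity helps: model each surviving drive as failing independently during the rebuild window with some small probability, and compare "one more failure kills the array" (RAID-5) against "two more failures kill it" (RAID-6). The numbers below (8 surviving drives, 2% per-drive failure chance over a multi-day rebuild) are made up for illustration, not real failure statistics.

```python
from math import comb

def prob_at_least(k, n, p):
    # P(at least k of n independent drives fail), simple binomial model
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 8 surviving drives, 2% chance each fails
# during a 3-day rebuild.
raid5_loss = prob_at_least(1, 8, 0.02)  # one more failure kills RAID-5
raid6_loss = prob_at_least(2, 8, 0.02)  # RAID-6 survives one more failure

print(f"RAID-5 loss risk during rebuild: {raid5_loss:.3f}")
print(f"RAID-6 loss risk during rebuild: {raid6_loss:.4f}")
```

Even with these toy inputs the second parity drops the risk by more than an order of magnitude, which is the whole argument for RAID-6 on large, slow-rebuilding arrays.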
Many large-scale enterprise/cloud storage systems use arbitrary "k-of-n" erasure coding, where data is written in n stripes but any k stripes are sufficient to recover the original data. So up to n - k failures can be tolerated before the array loses data. The values of n and k are chosen when the array is set up, based on capacity, cost, performance, and reliability constraints.
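The "any k of n" property can be demonstrated with a toy polynomial code: define the unique degree-(k-1) polynomial passing through the k data symbols, evaluate it at n points, and any k of those evaluations pin the polynomial down again. This is just a sketch of the idea; real systems use Reed-Solomon codes over small Galois fields with heavily optimized arithmetic, not big-integer math over a prime like this.

```python
P = 2**31 - 1  # a prime; all arithmetic is done mod P

def _lagrange_eval(points, x):
    # Evaluate the unique degree < len(points) polynomial through
    # `points` at x, via Lagrange interpolation mod P.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(data, n):
    # data: k field elements. Returns n (x, y) shares; any k recover data.
    # Systematic layout: shares 0..k-1 are the data symbols themselves.
    k = len(data)
    pts = list(enumerate(data))
    return [(x, data[x] if x < k else _lagrange_eval(pts, x))
            for x in range(n)]

def decode(shares, k):
    # Any k distinct shares reconstruct the original k data symbols.
    return [_lagrange_eval(shares[:k], x) for x in range(k)]
```

With k=3 and n=5, any two of the five stripes can be lost and `decode` still returns the original three data symbols from whichever three survive.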
This is also why all RAID (and RAID-like) solutions should be configured for automatic scrubbing - the system periodically scans the array, looking for latent bad sectors which were silently mis-written, and media defects which have developed since the last scrub. This greatly reduces the odds of getting a nasty surprise down the road. (My home-brewed NAS scrubs its arrays monthly.)
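For Linux MD arrays, a scrub can be triggered through sysfs (distributions typically ship a cron job that does this on a schedule); ZFS has the equivalent built in. The `md0` and `tank` names below are placeholders.

```shell
# Trigger a manual scrub ("check") of an MD array:
echo check > /sys/block/md0/md/sync_action

# Progress shows up in /proc/mdstat; detected inconsistencies are counted in:
cat /sys/block/md0/md/mismatch_cnt

# ZFS equivalent:
zpool scrub tank
zpool status tank
```

Running this monthly, as described above, keeps latent bad sectors from accumulating unnoticed until a rebuild forces you to read every sector at the worst possible time.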
Not everyone is good at keeping current backups.
And that's a separate issue; RAID is not (and never was) intended to be a complete replacement for backups. RAID only protects you from individual drive failures. It does NOT protect you from user error, malicious tampering, catastrophic PSU failures that fry drives, fire, flood, theft, nearby lightning strikes, earthquake, tornado, etc.