synthtel2 wrote:Thoughts?
JBI wrote:My next home server build will be using btrfs or ZFS...
Glorious wrote:Ultimately, the question is what are you trying to protect? Because if it is just that 7.5 GB of data, and that data doesn't change frequently, you might want to just tar that up, md5sum it, and then I don't know, put it on a bunch of flash drives/ old harddrives, and leave them in different places: work, family, friends. I'd cloud storage it somehow too, 7.5GB will easily fit into a free google drive and plenty of other places too I'd imagine.
Glorious wrote:I use it for RAID1, and have already run into a serious bug: one of the drives started failing (ancient 7200.10s), and then it died completely. As in, it was no longer detected by anything; it was just gone.
This is not handled well by btrfs (or the marvell controller, but I only have suspicions and it's irrelevant to this discussion). For one thing, it kept logging errors like crazy instead of recognizing that the device was kaput. For another, the actual bug, you can get into a catch-22 where you can only mount the remaining disk as degraded,ro but then cannot add a new replacement disk because the array is read-only.
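The catch-22 Glorious describes looks roughly like this on the command line. This is a sketch of the failure mode, not a transcript; the device names (/dev/sdb1 as the survivor, /dev/sdc1 as the replacement) are hypothetical:

```shell
# A normal mount fails because a device in the RAID1 pair is missing:
mount /dev/sdb1 /mnt                # fails: filesystem has missing devices

# Mounting degraded works, but on affected kernels you may only get it
# read-only (or only one degraded,rw mount is ever permitted):
mount -o degraded,ro /dev/sdb1 /mnt

# ...and with a read-only filesystem, the replacement disk can't be added:
btrfs device add /dev/sdc1 /mnt     # fails: read-only file system
```

At that point the array is intact but stuck: readable, yet unable to accept the new device that would let it heal.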
synthtel2 wrote:I had a Crucial MX100 bitrot at me a while back, and I have no desire to repeat the experience. Btrfs-raid1 sounds like just the ticket to complement my backup scheme (what with checksumming allowing it to correct errors, not merely detect), but it also sounds kinda overkill for my needs, so I thought I'd ask the gerbils what you all would do.
Thoughts?
synthtel2 wrote:The btrfs replace instructions I see here look like they avoid that problem
synthtel2 wrote:and even if they don't, having to juggle some data in the event of a proper drive failure is something I'm okay with.
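For reference, the replace path (rather than device add followed by device delete) goes roughly like this; device names are hypothetical, and this assumes the kernel permits a read-write degraded mount:

```shell
# Mount the surviving device degraded, read-write:
mount -o degraded /dev/sdb1 /mnt

# Replace the missing device with the new disk. The source can be given
# as a device ID from 'btrfs filesystem show'; -B keeps it in the foreground:
btrfs replace start -B 1 /dev/sdc1 /mnt

# Verify the repaired array afterwards:
btrfs scrub start /mnt
```

Because replace rebuilds onto the new disk directly, it sidesteps the add-to-a-read-only-array problem described above.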
boing wrote:Here's one heck of a big reason to stay off btrfs when it comes to raid5 or 6:
Bauxite wrote:Can't you compile in whatever you want pretty easily with Arch? If so, no reason to pick btrfs over zfs, the latter is far far more mature.
Though even the more prebuilt-oriented distros are making it less and less painful.
Bauxite wrote:If you're really worried about the data, having it on multiple computers in different locations is the only real backup though.
CScottG wrote:RAID isn't backup - it's real-time redundancy, and for consumers it's most useful for your OS drive to improve your odds at "up-time". Just get a good (good rep. MLC or SLC) couple of small SSDs for your OS and put them in RAID (..and I'm assuming Linux here so probably ZFS). [...]
synthtel2 wrote:Aha, copies=2 looks like just the ticket! That looked like it would be more complex on btrfs.
My big concern with ZFS is that it looks like it involves either running a downgraded kernel or doing a lot more poking around with the kernel than I otherwise have reason to. I don't especially want to have extra kernel maintenance to do, and I don't like the idea of a downgraded kernel because Arch's rolling-release deal expects all packages to be at latest as of the most recent system update. I don't know exactly what kind of breakage can happen if that isn't followed, and I don't really want to find out.
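The copies= property synthtel2 is referring to is set per dataset. A minimal sketch, assuming a pool named tank and a dataset name chosen for illustration:

```shell
# Store two copies of every block in this dataset. This protects against
# bitrot and bad sectors on a single disk, but not whole-drive failure --
# it is not a substitute for mirroring:
zfs create -o copies=2 tank/important

# Or enable it on an existing dataset (only affects data written afterwards):
zfs set copies=2 tank/important

# Scrub periodically so checksum errors are actually found and repaired:
zpool scrub tank
```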
Waco wrote:I have petabytes of it deployed and I've learned to very much trust it (both the true ZFS tree and ZFS on Linux).
synthtel2 wrote:FUSE ZFS would bypass all that. I like that idea a lot, thanks!
Glorious wrote:ZFS hasn't given me issues even when I initially did things I knew were stupid, like building a pool by device (/dev/sd*) instead of disk (/dev/disk/by-id).
Pro-tip, don't do that, go with by-id.
just brew it! wrote:How hard is it to grow existing ZFS pools by adding disks, and what are the downsides to building up an array gradually? I'd prefer not to have to buy a bunch of new disk drives. The existing server has multiple MD RAID-1 arrays in it. Could I create a ZFS pool just large enough to contain one of the existing arrays (e.g. using the existing storage drives from the retired FX-8320 box...), move one array's worth of content, then take the drives for that RAID array out of the existing server and add them to the ZFS server prior to migrating the next array? That way I could migrate everything one RAID array at a time, and re-use all of the existing drives.
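The array-at-a-time migration JBI describes maps onto zpool commands roughly as follows; pool and device names are hypothetical. One caveat worth knowing: vdevs can be added to a pool at any time, but (as of this thread) an individual vdev could not be removed again, so each mirror added is permanent:

```shell
# Create the pool from the first pair of drives as a mirror:
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# Copy one MD RAID-1 array's worth of content into the pool, then pull
# that array's drives from the old server and grow the pool by adding
# them as a second mirror vdev:
zpool add tank mirror \
    /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D

# Repeat per array. Note that ZFS stripes new writes across vdevs but
# does not rebalance previously written data automatically.
```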
Waco wrote:EDIT: By the way, the cheap LSI controllers on eBay are wonderful for around $60. I have three of them, and aside from the initial pain of putting them in IT mode and updating firmware, they're rock solid. Ensure you have at least some airflow over them though; they run hot. I cheated and replaced the heatsinks with nice tall ones from Digikey for a few dollars each.
Waco wrote:It's very easy to fix that - just import with "-d /dev/disk/by-id". I tend to use the short names to build pools, then export / import with the by-id labels. It'll forever use the by-id labels until you tell it not to.
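Concretely, the fix Waco describes is an export/import cycle (pool name hypothetical):

```shell
# Pool was built with short names like /dev/sdb; switch it to stable labels:
zpool export tank
zpool import -d /dev/disk/by-id tank

# Confirm the vdevs now show by-id paths instead of sdX names:
zpool status tank
```

After this, the pool records the by-id labels and keeps using them across reboots and device reordering.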