Personal computing discussed
Moderators: renee, Flying Fox, Ryu Connor
Deanjo wrote:Are you married to having a Windows server? It seems like everything that you want to do (and more) could be done with FreeNAS.
http://www.freenas.org/
HorseIicious wrote:Well, I checked out FreeNAS - it has definitely come a long way since I last considered it. However, I have to say, reading the manuals, as well as a few forums over at their site, has me pretty reluctant to give it a real try with my data. My main holdup is that they seem to strongly warn against using the ZFS-based file system without ECC RAM (which I don't have, and don't plan on buying new hardware for). I've also found a lot of reports of ZFS file system corruption when not paired with ECC, and it's virtually impossible to mount a drive and recover data from ZFS, so that's pretty scary to me.
HorseIicious wrote:However, I think that may have led me to Openfiler. It seems like Openfiler would easily meet my needs, and I like the greater reliability of its options for journaled EXT3 or XFS file systems.
anotherengineer wrote:HorseIicious wrote:Well, I checked out FreeNAS - it has definitely come a long way since I last considered it. However, I have to say, reading the manuals, as well as a few forums over at their site, has me pretty reluctant to give it a real try with my data. My main holdup is that they seem to strongly warn against using the ZFS-based file system without ECC RAM (which I don't have, and don't plan on buying new hardware for). I've also found a lot of reports of ZFS file system corruption when not paired with ECC, and it's virtually impossible to mount a drive and recover data from ZFS, so that's pretty scary to me.
Don't scare me now. My Synology DS712+ NAS uses ZFS in raid1
Igor_Kavinski wrote:
Waco wrote:In general you shouldn't use ZFS without ECC. It heavily relies on one thing to be error-free: memory (the CPU, too, but that's beside the point). Because so many structures, indexes, etc. are kept in memory, it's definitely best practice to run ECC.
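To make Waco's point concrete, here's a toy Python sketch (not real ZFS code - the block layout and function names are invented for illustration) of why end-to-end checksums can't protect against a bit that flips in non-ECC RAM before the checksum is computed:

```python
import hashlib

def write_block(data: bytes) -> dict:
    # Simplified ZFS-style write: the checksum is computed over whatever
    # bytes happen to be sitting in RAM at write time.
    return {"data": data, "checksum": hashlib.sha256(data).hexdigest()}

def read_block(block: dict) -> bytes:
    # Simplified ZFS-style read: verify the stored checksum before
    # handing data back to the caller.
    if hashlib.sha256(block["data"]).hexdigest() != block["checksum"]:
        raise IOError("checksum mismatch: corruption detected")
    return block["data"]

# Case 1: a bit flips on disk AFTER the write -> caught on the next read.
blk = write_block(b"important data")
blk["data"] = b"importanu data"  # simulate media bitrot
try:
    read_block(blk)
except IOError as e:
    print("on-disk flip:", e)

# Case 2: a bit flips in non-ECC RAM BEFORE the checksum is computed ->
# the bad bytes get a perfectly valid checksum, so the filesystem stores
# and faithfully returns silent corruption.
flipped = b"importanu data"  # RAM flipped a bit pre-write
blk2 = write_block(flipped)
assert read_block(blk2) == flipped  # verifies as "clean"
```

Case 2 is exactly why the FreeNAS docs push ECC: the checksum machinery trusts whatever it is handed.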
anotherengineer wrote:Don't scare me now. My Synology DS712+ NAS uses ZFS in raid1
bitcat70 wrote:While FreeNAS with ZFS may not be the best option for your current configuration, you may still want to read the article “Bitrot and atomic COWs: Inside ‘next-gen’ filesystems”. It gives one a different perspective on backups.
Igor_Kavinski wrote:
Evaders99 wrote:
the fact that the total space is limited by the size of the smallest drive (times multiplier)
I don't think this is the case, any links?
Evaders99 wrote:
you have to destroy the space in order to remove a disk.
In my current tests, only simple spaces have that issue. For parity and mirror spaces, I can easily retire a disk and repair kicks in to rebuild the array. Sort of makes sense, as simple is really RAID0 and you can't just yank a drive out.
Evaders99 wrote:
Plus your data is no longer accessible directly on disk as NTFS partitions.
WHS 2011 already does that AFAIK, so? Most other solutions, like software/hardware RAID, are also block-based and have this same issue. If accessing disks directly as regular file system partitions is your thing, you may as well stick with WHSv1 with DE (which is what I am running, but I do want the ability to use 4TB disks, so I am looking too).
Deanjo wrote:In all fairness, that concern lies with every filesystem out there. Bad RAM is going to be a concern no matter what the filesystem. ZFS, however, offers its best protection when utilized with ECC.

Deanjo wrote:In all fairness, that concern lies with every filesystem out there. Bad RAM is going to be a concern no matter what the filesystem. ZFS, however, offers its best protection when utilized with ECC.
This. If you do want better reliability, you should build a system with ECC, as the chance of hitting bit errors is higher now that we are dealing with much more data than before.
Waco wrote:Deanjo wrote:In all fairness, that concern lies with every filesystem out there. Bad ram is going to be a concern no matter what the filesystem. ZFS however offers its best protection when utilized with ECC.
Not so much with "traditional" filesystems though. Even if the filesystem gets destroyed it's reasonably easy to recover files from busted NTFS, FAT, EXT[2/3/4] file systems. ZFS? Not so much due to how it allocates blocks and whatnot.
Unless I'm totally mistaken or something - in which case please set me straight!
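Waco's recovery point comes down to on-disk layout: NTFS MFT file records begin with fixed magic bytes (b"FILE"), so signature-scanning carvers can locate them even after the filesystem metadata is toast, whereas ZFS stores its block pointers in a checksummed tree with no comparable fixed per-file record to scan for. A toy Python carver illustrates the signature-scan idea (the "disk image" here is obviously fabricated, and real tools like TestDisk do far more validation):

```python
def carve(image: bytes, magic: bytes = b"FILE") -> list:
    # Scan a raw image for candidate record signatures, the way simple
    # file carvers locate NTFS MFT entries after metadata loss.
    hits, pos = [], image.find(magic)
    while pos != -1:
        hits.append(pos)
        pos = image.find(magic, pos + 1)
    return hits

# Toy "disk image": two MFT-like records buried in garbage bytes.
image = b"\x00" * 100 + b"FILErecord1" + b"\xff" * 50 + b"FILErecord2"
print(carve(image))  # offsets of the two recoverable records
```

With ZFS, once the pointer tree is damaged there is no equivalent fixed signature marking "here is a file record", which is why carving-style recovery largely doesn't apply.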
HorseIicious wrote:As it stands now, I think I'm between just reinstalling WHS2011 and going with Openfiler (which I just set up in a VBox, and was impressed with how easy and simple it was to get going). Unfortunately, it's looking like one of these two options is probably going to be my best bet (unfortunate because they're both dated).
Flatland_Spider wrote:OpenMediaVault (http://www.openmediavault.org/)
It's based on Debian, and it's actively developed. Plus, it has more features than Openfiler ever did.
Evaders99 wrote:Have you tried DrivePool or DriveBender? They are essentially what DE would have been - and they have versions designed for Win 8 as well.
HorseIicious wrote:
Flying Fox wrote:Evaders99 wrote:
the fact that the total space is limited by the size of the smallest drive (times multiplier)
I don't think this is the case, any links?
Most complaints were during beta time, before the 8/2012 RTM. For 8.1/2012 R2: I've read what Evaders99 is talking about in a few places too - regarding total space being hindered (not necessarily limited) by the smallest drive size - and also big issues with adding a drive to a pool (and being able to use all of the new space) after the others are mostly full.
HorseIicious wrote:No, WHS2011 drives (at least how I have mine set up) are each simple NTFS logical/physical drives. I can pull any one of the 10+ drives in my server, throw it in my main workstation, read all of the data that's on it, and put it back in the server, no problems.
Right, I forgot the original story. They tried doing the block-based "new DE" but could not get it to work properly, so they dropped "DE" completely. So yes, you are still on a file-based approach. I have to admit I don't run into the flaws of DEv1 myself, so I am happy to remain on WHSv1 if not for the capacity problem. It saved my behind once when a drive failed: other than the non-duplicated files, rebuilding the array resulted in only 1 small corrupted file. I just needed to merge the folders and my array was more or less back to normal.
Flying Fox wrote:I may have to look at FlexRAID myself, though I'm not sure if I want to pay extra on top of the OS. I am already thinking Journal+WBC on an SSD, plus a single parity pool. The rule seems to be "only replace/upgrade with larger disks," and life should be good. Looking at C2xx motherboards now, paired with a cheap-ass i3/Pentium and ECC.
Deanjo wrote:Yes, with bad bits you could potentially lose a whole pool of data with ZFS. Then again, ZFS is far more likely to pick up on bitrot-like failures, while a filesystem such as NTFS is not, and will merrily continue writing away corrupted data. (By the way, NTFS since Server 2008/Win 7 has had "self-healing" features that are just as susceptible to data corruption from memory errors, and it gets worse if you're using something like a RAID configuration instead of a spanned volume.) Also keep in mind that no matter what storage solution you choose, SISO kicks in: if the non-ECC workstation you are copying data from has bad data to begin with, no filesystem is going to correct that data on the server - you will just have a perfect copy of the corrupted data.
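As a rough illustration of the bitrot detection and self-healing Deanjo describes, here's a hedged Python sketch (invented names, nothing like the real ZFS code paths) of how a checksummed mirror can spot a rotted copy and repair it from the good one - and why SISO still wins: data that arrives already corrupted checksums as "clean" and is preserved perfectly:

```python
import hashlib

def cksum(data: bytes) -> str:
    # Stand-in for ZFS's per-block checksum.
    return hashlib.sha256(data).hexdigest()

def mirrored_read(copies: list, expected: str) -> bytes:
    # Mirror-style self-heal: return the first copy whose checksum
    # matches, then overwrite any bad copies with the good data.
    good = next((c for c in copies if cksum(c) == expected), None)
    if good is None:
        raise IOError("all copies corrupt - unrecoverable")
    for i, c in enumerate(copies):
        if cksum(c) != expected:
            copies[i] = good  # repair the damaged side of the mirror
    return good

data = b"payload"
expected = cksum(data)

# One side of the mirror bit-rotted on disk: detected and healed.
copies = [b"payl0ad", data]
assert mirrored_read(copies, expected) == data
assert copies[0] == data  # bad copy rewritten from the good one

# SISO: if the source machine sent garbage, it checksums as "clean"
# and the mirror faithfully preserves the garbage on both sides.
garbage = b"payl0ad"
assert mirrored_read([garbage, garbage], cksum(garbage)) == garbage
```

No amount of server-side checksumming fixes data that was wrong before it arrived - which is Deanjo's SISO point.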