DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Storage architecture help

Tue Aug 06, 2019 1:11 pm

A computer I bought doesn't have a legal copy of Windows. The plan is for a Plex server. Ubuntu will work fine for me.

What I want is an SSD RAID1 cache in front of a RAID5 (or equivalent) across the 4 HDDs. What's the best way to do this? Before you ask, no, it doesn't have a 10 Gbps NIC yet. I'm going to put one in eventually, once switches are reasonably priced. I want this to be the last box I build for a while.
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Tue Aug 06, 2019 1:16 pm

1. You don't need a cache, save the money and spend it elsewhere.
2. Why Ubuntu versus something like FreeNAS that is much more "appliancy"?
3. You can do better than RAID5 (see the sketch below), and it behooves you to think about future storage needs now if you can. Decisions you make now will be hard to undo moving forward (more disks, different protection layout, etc.).
4. Do you have a need for 10 Gbps networking? Plex certainly doesn't need it unless your local streaming includes multiple 4K streams at native bitrates simultaneously (and I doubt this is a use-case given the storage requirements). You'll have to spend more time choosing and tuning your storage hardware to get any advantage out of a 1 GB/s network link as well.
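To put point 3 in concrete terms, here's a rough sketch of the ZFS route on Ubuntu, with a made-up pool name ("tank") and four data drives assumed to show up as sda-sdd (in practice you'd want /dev/disk/by-id paths so the pool survives device renumbering):

Code:
# OpenZFS on Ubuntu
sudo apt install zfsutils-linux
# RAIDZ2 = double parity, so any two of the four drives can fail
sudo zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd
# sanity-check the layout
zpool status tank

With four drives, RAIDZ2 and two mirrored pairs both give 50% usable space; RAIDZ2 survives any two drive failures, while mirrors resilver faster.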

// storage architect is my day job :)
Victory requires no explanation. Defeat allows none.
 
Usacomp2k3
Gerbil God
Posts: 23043
Joined: Thu Apr 01, 2004 4:53 pm
Location: Orlando, FL
Contact:

Re: Storage architecture help

Tue Aug 06, 2019 1:45 pm

Even UHD BD is around 70 Mbps. You can run 10 of those and still be under the gigabit spec. HEVC is remarkably efficient.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 1:58 pm

I want the speed of running old games off it. Also, there will eventually be 4 users of this. I do have a 3750 with a 10 Gbps fiber SFP and a NIC to go with it.

I suppose I should say this will be my file server, too.
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Storage architecture help

Tue Aug 06, 2019 2:01 pm

When you say "cache" do you mean automatic caching, or just a faster volume to store stuff you know will be accessed a lot?
Nostalgia isn't what it used to be.
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Tue Aug 06, 2019 2:22 pm

DragonDaddyBear wrote:
I want the speed of running old games off it.

No matter what you do, you won't get the same speeds as local storage using Windows clients over CIFS or NFS. You can get pretty close if you're just comparing a local HDD to a remote array of HDDs, though. You're still more latency-limited than bandwidth-limited, but if you have 10 Gb gear already, it can't hurt!
Victory requires no explanation. Defeat allows none.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 2:53 pm

Yeah, I know latency is an issue. But for test VMs or old games and pictures it's fine.

When I say cache I mean an automatic tier, where recent stuff is on the SSD and the older stuff is on a disk. Writes go to the SSD and age out to the HDDs. I think bcache sounds familiar, but I don't know if that's what it does.
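From what I've read (I haven't tried it myself), a bcache tier like that gets wired up roughly like this, with bcache-tools installed and purely made-up device names (an md0 array as the slow backing device, a spare SSD partition as the cache):

Code:
# format and attach the backing device (HDD array) and cache device (SSD) together
sudo make-bcache -B /dev/md0 -C /dev/nvme0n1p3
# writeback is the "writes land on SSD, age out to HDD" mode; the default
# writethrough is safer but doesn't accelerate writes
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
# the tiered volume shows up as /dev/bcache0 and is formatted/mounted like any disk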
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Tue Aug 06, 2019 3:22 pm

IMO, that's just another thing to break and leave you with corrupted data.
Victory requires no explanation. Defeat allows none.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 3:39 pm

So, as cool as it is, I should roll ZFS? Isn't there some drama around the kernel and ZFS right now?

One other goal I'd like to have is for the file system to be expandable in the future, so if I need another drive I can add one. With ZFS I would need to replace all the drives. As drives get bigger, rebuilds take longer. That's longer in a vulnerable state.
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Tue Aug 06, 2019 4:03 pm

I believe the ZFS drama is effectively done (from what I've seen on the master branch on GitHub). If the version you're running is affected by the drama, you can always change the license string when building it yourself.

You can expand ZFS filesystems pretty trivially; you just need to be careful how you go about it and how you plan for potential expansion in the future. With ZFS you can add VDEVs to an existing pool to expand space, i.e. you can have a 4+2 now and add a second 4+2 (or really, any protection type, though keeping them the same is suggested) later down the road. Rebuilds are based on device size, but the newer sequential rebuild code is dramatically better than ZFS ever was in the past (3-5x improvement if your pool has any real type of data written to it).

ZFS would also let you run an L2ARC if you really wanted hot data to be kept on something more agile than HDDs. It's not a write cache (but due to how ZFS writes that's generally not a problem) but it will demote things out of the RAM ARC into the L2ARC to expand the overall capacity. It does eat up some RAM keeping track of those records but it can be wildly better than hitting HDDs if your dataset that's commonly accessed is larger than DRAM by a significant amount.
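To make the VDEV-add part concrete, a rough sketch continuing a hypothetical four-drive RAIDZ2 pool named "tank" (placeholder device names; a 4+2 would simply be six drives per VDEV):

Code:
# add a second RAIDZ2 VDEV alongside the first; this grows the pool but
# never reshapes the existing VDEV
sudo zpool add tank raidz2 sde sdf sdg sdh
# capacity is the sum of the VDEVs; new writes stripe across both
zpool list -v tank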
Victory requires no explanation. Defeat allows none.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 4:20 pm

Waco, I don't suppose you'll be at the BBQ. I'd love to pick your brain.

So, with L2ARC, will I need another VDEV? If so, will it need to be redundant? I have 16 GB on a single stick at the moment, so I will likely be adding another for dual-channel 32 GB.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 4:25 pm

One other thought about FreeNAS: I use it and love it. The issue is it doesn't pass through hardware transcoding.
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Tue Aug 06, 2019 4:33 pm

DragonDaddyBear wrote:
Waco, I don't suppose you'll be at the BBQ. I'd love to pick your brain.

So, with L2ARC, will I need another VDEV? If so, will it need to be redundant? I have 16 GB on a single stick at the moment, so I will likely be adding another for dual-channel 32 GB.

I wish I could be there, but unfortunately not going to happen this year (again).

The L2ARC does not require redundancy. It is technically another VDEV, but it's a special type. If it fails, it just faults the caching VDEV and you return to the standard behavior without it (along with some nasty emails if you have zed set up). The RAM usage is dependent upon how much data ends up in the L2ARC (number of records, size of records, etc.). There's a formula to figure it out that I don't have off the top of my head, but if you're using 1 MB records (likely on a data pool) the limit is pretty darn high before it matters. A couple hundred GB of L2ARC will happily fit in a few GB of RAM.

In general, though, L2ARC isn't super useful for most. More RAM is almost always a better purchase unless you have very specific workloads that blow out RAM consistently. If you're just loading occasional (older) games as your performance sensitive workload, I bet you'd see little benefit. I used to run an L2ARC and even running VMs directly from my NAS I rarely saw hits in the L2ARC.

EDIT: If you really want to ensure a minimum level of performance, though, you could always set up your L2ARC to only cache your dataset that's sensitive for performance and disable it for all media. With enough data accessed consistently (larger than RAM) it could help significantly. Testing is the only way to find out, though.
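If you'd rather measure than guess, the ARC counters are easy to watch. Assuming the standard OpenZFS tooling on Linux:

Code:
# ARC (and L2ARC, if present) sizes and hit rates; arc_summary.py on older releases
arc_summary
# raw counters; the l2_* fields only move once a cache device is attached
grep ^l2 /proc/spl/kstat/zfs/arcstats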
Victory requires no explanation. Defeat allows none.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 5:37 pm

Can you L2ARC a dataset or a VDEV, or both? I like that suggestion. That way I can use just one small-ish drive for the games, tools, and VMs. I suppose this would be best as a dedicated drive and not a chunk of my OS NVMe drive?

I'm liking this idea. If I ever needed to I could just buy a card and make another set of drives for a VDEV and expand the pool.

Sorry we'll miss you at the BBQ.
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Tue Aug 06, 2019 8:34 pm

You can configure an L2ARC on a pool level, then restrict what can get demoted into it on a dataset-by-dataset level. You can share an L2ARC device between pools with a little partition trickery as well by splitting it up prior to assigning it to the various pools. You can't use an L2ARC with a single VDEV; I'm not sure you'd ever really want to do that anyway.

If you don't need all the space on your OS drive, chopping it up with partitions is a great way to use the additional space - just be mindful that if you burn out the OS drive through lots of L2ARC updates, you'll take down the OS too. In practice this is unlikely to be a problem, but if it were a QLC drive and a very active pool I could see potential trouble arising.
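A rough sketch with made-up names (a spare partition nvme0n1p3 on the OS SSD, the pool "tank", and datasets tank/games and tank/media):

Code:
# attach the partition to the pool as a cache (L2ARC) device
sudo zpool add tank cache nvme0n1p3
# let only the performance-sensitive dataset use it; keep bulk media out of it
sudo zfs set secondarycache=all tank/games
sudo zfs set secondarycache=none tank/media
# primarycache (the RAM ARC) can be restricted per dataset the same way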
Victory requires no explanation. Defeat allows none.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Tue Aug 06, 2019 10:09 pm

OK, let me make sure I understand the high points so I know what to start searching the "series of tubes" for.
- I can assign an L2ARC to a POOL (which, if memory serves correctly, is a grouping of VDEVs).
- I then would configure the L2ARC to be restricted to the dataset for games and VMs
- I can partition my main NVMe drive and use a partition as a "special VDEV" that is used for L2ARC. I think this will be fine as the OS is easy to replace; so long as I have the data I'm fine. I'm thinking a 512 GB drive, so half of it for the L2ARC should be sufficient.

So, what happens if the L2ARC is the same size as the data set? I'm seriously not planning on a lot being on there for now. And can't you easily rejigger space of data sets in the pool?

Also, any bang-for-the-buck drives you'd suggest? I've only got 6 SATA ports to work with. I was originally thinking 4 x 4 TB drives and 2 SSDs, but I guess I don't need the SSDs, huh.
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Storage architecture help

Wed Aug 07, 2019 6:49 am

I'd avoid drive-managed SMR HDDs like the plague. Supposedly they can work OK with ZFS, but they're an absolute disaster with ext4.
Nostalgia isn't what it used to be.
 
The Egg
Minister of Gerbil Affairs
Posts: 2938
Joined: Sun Apr 06, 2008 4:46 pm

Re: Storage architecture help

Wed Aug 07, 2019 8:24 am

Since we're on the subject (and while I'm waiting to see if I can inherit a full-on 6/12 Sandy-Xeon server when my workplace does an upgrade in the next couple months) --- any expected issues with using drives of measurably different speeds within a ZFS pool? I have 16TB worth of WD Red drives, and then an oddball 5TB 7200rpm WD Black laying around that I was hoping to throw in the mix.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Wed Aug 07, 2019 8:45 am

I'll have a dual socket 6c/12t 48 GB server I'll let go for cheap once I do this upgrade, if you're interested
 
The Egg
Minister of Gerbil Affairs
Posts: 2938
Joined: Sun Apr 06, 2008 4:46 pm

Re: Storage architecture help

Wed Aug 07, 2019 9:09 am

DragonDaddyBear wrote:
I'll have a dual socket 6c/12t 48 GB server I'll let go for cheap once I do this upgrade, if you're interested

Sounds like a solid setup, but the work rig would likely be free. I've actually got a regular Sandy 4/4 setup right now (which I haven't put into use), but it'd be nice to have ECC.
 
DragonDaddyBear
Gerbil Elite
Topic Author
Posts: 985
Joined: Fri Jan 30, 2009 8:01 am

Re: Storage architecture help

Wed Aug 07, 2019 9:25 am

Yeah, this is just too much power. And to get what I want, I'd need to drop in a Quadro GPU and do a VM for Plex. Right now it's all software decode and transcode, which is kinda harsh. I have only 8 cores assigned (VMware Free limitation) to a FreeNAS VM that has a Plex plugin. It's an older FreeNAS, too, so I can't even upgrade Plex. Rolling a straight Ubuntu desktop will make my life so much easier in so many ways. I may even get rid of my Ubiquiti AP and pfSense and just do an out-of-the-box setup once I can get a router with 10 Gbps NICs.

The reason I never plugged in the Cisco 3750 and actually turned on the 10 Gbps in my server was power: I'm looking at 300 W doing nothing the vast majority of the time, just so I can have 10 Gbps for a server that can't even put out 200 MB/s of hard drive activity. So the rebuild will simplify things.

I just don't have the time. I have the passion and the curiosity but, as the name implies, I have kids, and one has a "provisional" diagnosis of autism. So while I love geeking out, the money and time just aren't there for me. That's why I want the server to have all the power I need now, and maybe for the next 10 years, with an SSD and the option of 10 Gbps (when/if it comes down to reasonable prices).
 
The Egg
Minister of Gerbil Affairs
Posts: 2938
Joined: Sun Apr 06, 2008 4:46 pm

Re: Storage architecture help

Wed Aug 07, 2019 10:21 am

^^ Some of that's over my head, but yeah, power usage is a concern. I plan to run FreeNAS and a Plex server, but it'll be my first crack at it, and there will be very light usage for such a hefty system and power draw. Does Plex specifically require a Quadro for on-the-fly hardware transcoding?
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Storage architecture help

Wed Aug 07, 2019 10:32 am

DragonDaddyBear wrote:
OK, let me make sure I understand the high points so I know what to start searching the "series of tubes" for.
- I can assign an L2ARC to a POOL (which, if memory serves correctly, is a grouping of VDEVs).
- I then would configure the L2ARC to be restricted to the dataset for games and VMs
- I can partition my main NVMe drive and use a partition as a "special VDEV" that is used for L2ARC. I think this will be fine as the OS is easy to replace; so long as I have the data I'm fine. I'm thinking a 512 GB drive, so half of it for the L2ARC should be sufficient.

So, what happens if the L2ARC is the same size as the data set? I'm seriously not planning on a lot being on there for now. And can't you easily rejigger space of data sets in the pool?

Also, any bang-for-the-buck drives you'd suggest? I've only got 6 SATA ports to work with. I was originally thinking 4 x 4 TB drives and 2 SSDs, but I guess I don't need the SSDs, huh.

You've got it right! If the L2ARC is "too big" in terms of the data you're demoting into it, it just won't fill. No real issues there. Space between datasets can be limited if you choose to do so, or you can simply let them share the aggregate of available space in the pool.
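For the space-limiting piece, with the same hypothetical dataset names as before:

Code:
# datasets share the pool's free space by default; quota/reservation are optional
sudo zfs set quota=500G tank/games        # hard cap on the fast dataset, adjustable later
sudo zfs set reservation=100G tank/games  # optional guaranteed minimum
sudo zfs set quota=none tank/media        # the default: just share what the pool has free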

For drives, it doesn't matter a huge amount IMO. Decent drives from a reputable manufacturer, avoid Seagate Barracuda Compute drives (they are shingled, not worth the hassles unless you really really want to save a few bucks), and try to keep them all at the same RPM to avoid performance limitations (a RAIDZ/RAIDZ2/RAIDZ3 will perform as fast as the slowest drive in the VDEV). Mirrors can balance performance if you go that route, so no real harm mixing different speeds there (and for reads, you can actually have SSDs and HDDs in the same mirror and ZFS will preferentially hit the SSD for reading).


EDIT: I hear you on the FreeNAS upgrade frustrations. I've been hanging onto my warden Plex jail because iocage and the new FreeNAS UI are such a mess. Eventually I'll migrate my stuff to a new install of FreeNAS, but I've been avoiding the pain of figuring out the new iocage system as well as replicating all the metadata from the existing warden jails into the new instance.
Victory requires no explanation. Defeat allows none.
