I’m just getting started on my first setup. I’ve got radarr, sonarr, prowlarr, jellyfin, etc. running in docker, reading/writing their configs to a 4TB external drive.
I followed a guide to ensure that hardlinks would be used to save disk space.
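For reference, the hardlink-friendly layout the guide had me build looks roughly like this (my own paths and the linuxserver images; the important bit is that downloads and the library sit under one mount, so the *arrs can hardlink instead of copy):

```bash
# Everything lives on the one external drive, mounted at /mnt/disk1 (my path):
#
#   /mnt/disk1/data/torrents/...   <- download client writes here
#   /mnt/disk1/data/media/movies   <- radarr hardlinks into here
#   /mnt/disk1/data/media/tv       <- sonarr hardlinks into here
#
# Each container gets the whole data folder as a single bind mount, e.g.:
docker run -d --name sonarr \
  -v /mnt/disk1/data:/data \
  -v /mnt/disk1/config/sonarr:/config \
  lscr.io/linuxserver/sonarr:latest
```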
But what happens when the current drive fills up? What is the process to scale and add more storage?
My current thought process is (see the sketch after this list):
- Mount a new drive
- Recreate the data folder structure on the new drive
- Add the path to the new drive to the jellyfin container
- Update existing collections to look at the new location too
- Switch (not add) the volume for the *arrs data folder to the new drive
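Concretely, the change I’m imagining looks something like this. Mount points are just examples (new drive at /mnt/disk2):

```bash
# New drive mounted alongside the old one (made-up mount points):
#   /mnt/disk1/data   <- existing, nearly full
#   /mnt/disk2/data   <- new drive, same folder structure recreated

# Jellyfin gets BOTH drives so old and new media stay visible:
docker run -d --name jellyfin \
  -v /mnt/disk1/data/media:/media/old \
  -v /mnt/disk2/data/media:/media/new \
  lscr.io/linuxserver/jellyfin:latest

# The *arrs get switched (not added) to the new drive only:
docker run -d --name radarr \
  -v /mnt/disk2/data:/data \
  -v /mnt/disk1/config/radarr:/config \
  lscr.io/linuxserver/radarr:latest
```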
Would that work? It would mean the *arrs no longer have access to the actual downloaded files. But does that matter?
Is there an easier, better way? Some way to abstract away the fact that there will eventually be multiple drives? So I could just add on a new drive and have the setup recognize there is more space for storage without messing with volumes or app configs?
I’m using MergerFS, which makes this really easy. I set up a temp mergerfs array with all my disks except the one I want to replace, add the new drive to my first array, then run a command to move all data from the replaced drive to the temp array. The original array mount point doesn’t notice the difference. Once it’s done, I remove the old disk from my main mergerfs array, add the new one, and delete the “temp” array. Then I can remove the old disk from my Snapraid config and also physically remove it from the server.
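Roughly, the commands look like this. Disk names, mount points, and pool options are placeholders for my own layout, so adjust as needed:

```bash
# Say /mnt/disk2 is the disk being replaced. Build a temp pool from every
# other branch (including the new disk, which is already in the main pool),
# so the moved data has somewhere to land:
mkdir -p /mnt/temp-pool
mergerfs -o allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk3:/mnt/disk-new /mnt/temp-pool

# Move everything off the old disk into the temp pool. mergerfs spreads it
# across the temp pool's branches, which are all still branches of the main
# pool, so the main mount point never notices the files moving:
rsync -avP --remove-source-files /mnt/disk2/ /mnt/temp-pool/

# Then: drop /mnt/disk2 from the main pool's branch list (fstab), tear down
# the temp pool, update the snapraid config, and pull the disk.
umount /mnt/temp-pool
```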
If you’ve got an old PC lying around, you should look into setting up Open Media Vault on it.
I’m going to be adding more drives to my current basic setup soon, and I think LVM is the way I’m going to go; then I can just extend the filesystem across multiple drives in the future as I need to.
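Something like this, assuming an existing volume group called vg_media, a logical volume lv_media, and ext4 on top (all example names):

```bash
# New disk shows up as /dev/sdX (example). Add it to the volume group,
# grow the logical volume into the new free space, then grow the
# filesystem online:
pvcreate /dev/sdX
vgextend vg_media /dev/sdX
lvextend -l +100%FREE /dev/vg_media/lv_media
resize2fs /dev/vg_media/lv_media   # ext4; use xfs_growfs for XFS
# (lvextend -r does the filesystem resize in the same step, if you prefer.)
```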