I have a 56 TB local Unraid NAS that is parity protected against single drive failure, and while I think a single drive failing and being parity recovered covers data loss 95% of the time, I’m always concerned about two drives failing or a site-/system-wide disaster that takes out the whole NAS.
For other larger local hosters who are smarter and more prepared, what do you do? Do you sync it off site? How do you deal with cost and bandwidth needs if so? What other backup strategies do you use?
(Sorry if this standard scenario has been discussed - searching didn’t turn up anything.)
I’ve been following this post since the first comment.
And I have just put together my own 1 TB RAID1 NAS. I never thought 1 TB would serve me forever; it's more like “a good start”.
But the numbers I’ve been seeing in here… you guys are nuts 😆
For me, I only back up data I can’t replace, which is a small subset of the capacity of my NAS. Personal data like photos, password manager databases, personal documents, etc. get locally encrypted, then synced to a cloud storage provider. I have my encryption keys stored in a location that’s automatically synced to various personal devices and one off-site location maintained by a trusted party. I have the backups and encryption key sync configured to keep n old versions of the files (where the value of n depends on how critical the file is).
Incremental synchronization really keeps the bandwidth and storage costs down, and the small amount of data I’m backing up makes file-level backup a very reasonable option.
If I wanted to back up everything, I would set up a second system off-site and run backups over a secure tunnel.
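The “keep n old versions” idea mentioned above can be sketched as a small rotation helper. This is a toy illustration, not any real tool’s behavior: the function name and the numbered-suffix scheme are made up for the example, and sync tools like Kopia or rclone implement retention for you.

```python
import shutil
from pathlib import Path

def backup_with_versions(src: Path, backup_dir: Path, keep: int = 3) -> Path:
    """Copy src into backup_dir, rotating numbered old versions.

    file.txt is the current backup, file.txt.1 the newest old version,
    file.txt.2 the next, and so on. Only `keep` old versions survive.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    # Shift existing versions up: .2 -> .3, .1 -> .2, current -> .1
    for n in range(keep, 0, -1):
        older = backup_dir / f"{src.name}.{n}"
        if older.exists():
            if n == keep:
                older.unlink()  # the oldest version falls off the end
            else:
                older.rename(backup_dir / f"{src.name}.{n + 1}")
    if dest.exists():
        dest.rename(backup_dir / f"{src.name}.1")
    shutil.copy2(src, dest)  # preserves timestamps along with contents
    return dest
```

Per-file `keep` values map directly to the “depends on how critical the file is” rule: a password database might get `keep=10` while a shopping list gets `keep=1`.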
Not all data is equal. I back up the things I absolutely cannot lose and YOLO everything else. My love for this hobby does not extend to buying racks of hard drives.
I don’t. Of my 120 TB, I only care about the 4 TB of personal data, and I push that to a cloud backup. The rest can just be downloaded again.
Do you keep logs or software that tracks what you’d need to redownload? A big stress for me with that method is remembering what was lost once neither I nor any software can see the filesystem anymore.
If you can’t remember what you lost, did you really need it to begin with?
Unless it’s personal memories of course.
I can’t remember the name of an Excel spreadsheet I created years ago, which has continually matured through lots of changes. I often have to search for it among the many spreadsheets I keep for different purposes.
Trusting your memory is a naive, amateur approach.
The key here being that you actually remember the file exists, because it’s important. Some other random spreadsheet you don’t even remember exists because you haven’t needed it since forever is probably not all that important to backup.
If you lose something without ever realizing you lost it, it wasn’t important, so there would be no reason to make a backup.
If the spreadsheet is important, it sounds like it would be part of the 4 TB that was backed up.
I have a 120 TB Unraid server at home and a 40 TB Unraid server at work. Both use two parity disks.
The critical work stuff backs up to home, and the critical home stuff backs up to work.
The media is disposable.
Both servers then back up to Crashplan on separate accounts - work uses the Australian server on a business account, home uses the US server on a personal account.
I figure I should be safe unless Australia and the US are nuked simultaneously… At which point my data integrity is probably not the most pressing issue.
why is your work stuff at home and why is your personal stuff at work ಠ_ಠ
Yeah I guess it probably makes more sense when it’s my business… Maybe not if you’re an employee at some corporate randomly hosting backups of your dog photos.
I dunno. At a big company they probably won’t notice an extra TB of storage cost… so long as you’re discreet with the transfers.
A second offsite NAS (my old one) with the same capacity for the larger files
Backblaze B2 and a Hetzner storage box for the Really Important stuff.
Okay Mr. Money Bags
It’s literally a Raspberry pi 3B+ and a USB hard drive in a plastic storage box at my parents house 😅
Personally I deal with it by prioritizing the data.
I have about the same total size Unraid NAS as you, but the vast majority is downloaded or ripped media that would be annoying to replace, but not disastrous.
My personal photos, videos, and other documents which are irreplaceable only make up a few TB, which is pretty manageable to maintain true local and cloud backups of.
Not sure if that helps at all in your situation.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- Git: popular version control system, primarily for code
- NAS: Network-Attached Storage
- RAID: Redundant Array of Independent Disks, for mass storage
- SSD: Solid State Drive, mass storage
- VNC: Virtual Network Computing, for remote desktop access
- VPN: Virtual Private Network
- ZFS: Solaris/Linux filesystem focusing on data integrity
7 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.
[Thread #119 for this comm, first seen 26th Feb 2026, 15:51]
What are your recovery needs?
It’s OK if a backup to a cloud provider takes 6 months, but do you need all your data recovered in a short period of time? If so, cloud isn’t the solution; you’d need a duplicate set of drives nearby (but not close enough to be taken out by the same flood, fire, etc.).
But if you’re OK waiting for the data to download again (check the storage provider’s costs for that specific scenario), then your main factor is how much data changes after that initial upload.
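How long that initial upload takes is simple arithmetic. Here’s a back-of-envelope sketch; the 40 Mbps uplink and the 80% efficiency factor (protocol overhead, other traffic on the line) are assumptions for illustration:

```python
def upload_days(data_tb: float, uplink_mbps: float, efficiency: float = 0.8) -> float:
    """Rough number of days needed to push data_tb terabytes
    over an uplink_mbps link, discounted by an efficiency factor."""
    bits = data_tb * 1e12 * 8                         # TB -> bits
    seconds = bits / (uplink_mbps * 1e6 * efficiency)  # effective bits/sec
    return seconds / 86400                             # seconds -> days

# OP's 56 TB over an assumed 40 Mbps uplink:
print(round(upload_days(56, 40)))  # about 162 days
```

At those assumed numbers the first upload lands around five and a half months, which is the same ballpark as the six-month figure above; a faster uplink or a smaller “irreplaceable only” subset shrinks it dramatically.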
Sorry. Shortly after posting this and the initial Q&A, I left for a trip.
I could definitely wait those time periods for a first backup and a restore, since I assume it’ll be a once-in-ten-years situation at worst. The rate of data change after the first upload should be slow enough to keep up with.
Recently helped someone get set up with Backblaze B2 using Kopia, which turned out fairly affordable. Kopia compresses and deduplicates, leading to very little storage use, and it encrypts client-side so that Backblaze can’t read the data.
Kopia connects to it directly. To restore, you just install Kopia again and enter the same connection credentials to access the backup repository.
My personal solution is a second NAS off-site, which periodically wakes up and connects to mine via VPN; during that window, Kopia is set to update my backups.
Kopia figures out which parts of the filesystem have changed very quickly, and only those changes are transferred during each update.
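The core idea behind transferring only changes can be sketched with content hashes. This is a toy example, not Kopia’s actual implementation (Kopia splits files into content-addressed chunks and caches metadata rather than rehashing everything):

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map each file's relative path to a SHA-256 digest of its contents."""
    index = {}
    for p in sorted(root.rglob("*")):
        if p.is_file():
            index[str(p.relative_to(root))] = hashlib.sha256(p.read_bytes()).hexdigest()
    return index

def changed_files(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Paths that are new, or whose contents differ, since the last snapshot."""
    return [path for path, digest in new.items() if old.get(path) != digest]
```

Between two runs, only the paths returned by `changed_files` would need to go over the VPN link, which is why the nightly window can stay short even on a large array.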
The Backblaze option is something I’ve seriously considered.
Any reason this person didn’t go with the $99/year personal backup plan? It says “unlimited”, and my use is for my household only, but maybe I’m missing something about how difficult it is to set up on Unraid or other NAS software. B2’s $6/TB/mo rate would put me at $150/mo, which is not great.
They only needed about 500GB.
And personal is for desktop systems. You have to use Backblaze’s macOS/Windows desktop application, and the setup is not zero-knowledge on Backblaze’s part. They literally advertise being able to ship your files to you on a physical device if need be.
Which some people are ok with, but not what most of us would want.
You can ship encrypted files you know……?
Yes. That’s not mutually exclusive with Backblaze having access to your backups.
Them having access to them is irrelevant if they’re encrypted. What’s the issue?
You can do that with B2. Just use an application to upload that encrypts as it uploads.
The only way to achieve the same on the backup plan (because you have to use their desktop app) is to keep your entire system encrypted and never decrypt anything while the desktop app is performing a backup.
Did you not read what I said? You use their app, which copies files from your system as-is. Ensuring it never grabs a cleartext file is not practical.
That doesn’t mean it’s not encrypted on their servers……