Backup Strategy

The 3-2-1 Rule at Scale: Why Cloud Backup Fails for 100TB+ Hoarders

You own 100TB. Cloud backup wants $500/month. Download speeds mean restores take weeks. It's time to architect like someone with real stakes.

The Cloud Backup Math Breaks Down at 100TB

Cloud backup services are engineered for consumers with 1–4 TB. They throw unlimited-looking plans at you, and you believe the marketing. Then you hit 100 TB.

Let's be honest about the math. A typical cloud backup service charges $100–300/month for "unlimited" plans, and they don't cap your data. They cap your bandwidth. Upload 100 TB over their network at a realistic 20 Mbps sustained and you're looking at roughly 460 days of non-stop uploading. Every interruption the client fails to resume cleanly adds more.

Now factor in recovery. You need to restore 10 TB because a corrupted drive took out a storage node. Download speed from their data center is throttled to 30 Mbps, so you're staring at roughly 740 hours of restore time. That's a full month of sitting around waiting for your own data.
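These durations are plain bandwidth arithmetic. A small helper (the function name is ours; it assumes decimal terabytes and sustained megabits per second) makes them easy to recompute for your own link:

```python
def transfer_days(terabytes: float, mbps: float) -> float:
    """Days to move `terabytes` TB (decimal) over a link sustaining `mbps` megabits/s."""
    bits = terabytes * 1e12 * 8           # TB -> bits
    return bits / (mbps * 1e6) / 86400    # bits / (bits per second) -> days

print(round(transfer_days(100, 20)))      # ~463 days: the full 100 TB upload at 20 Mbps
print(round(transfer_days(10, 30) * 24))  # ~741 hours: the 10 TB restore at 30 Mbps
```

Plug in your own measured sustained rate, not the rate on your ISP's invoice; the two rarely match over weeks of continuous transfer.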

Meanwhile, your monthly bill is compounding. At current rates: 100 TB × $3–5/TB/month = $300–500/month. Over three years, that's $10,800–18,000 just to keep data "backed up" in someone else's data center.
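The subscription total is a straight multiplication; a one-liner (illustrative helper, using the article's $/TB/month rates) makes the three-year bill explicit:

```python
def cloud_cost(tb: float, rate_per_tb_month: float, months: int = 36) -> float:
    """Total subscription cost: capacity x per-TB monthly rate x months."""
    return tb * rate_per_tb_month * months

print(cloud_cost(100, 3))  # low end:  $10,800 over three years
print(cloud_cost(100, 5))  # high end: $18,000 over three years
```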

The 3-2-1 Backup Rule Actually Works

Three copies of your data. Two different storage media types. One copy off-site. The rule scales: it works just as well for a 100 TB archive on local hardware as it does for a laptop's documents folder.

For data hoarders, here's what sane looks like: Your primary pool is local on-site RAID. Your second copy lives on a different RAID array, also on-site but isolated. Your third copy is off-site, sitting in a friend's spare room or a remote location. All three need to be independent—if one dies, the other two survive untouched.

The key insight: your on-site copies are fast for recovery, and your off-site copy is fire-safe, theft-safe, and physically separate from everything that can fail at home. Cloud providers want you to believe only they can do off-site. They can't. Your friend's NAS across town is off-site too, and it costs almost nothing to maintain.

When Local RAID Fails at Scale

RAID is not backup. Everyone knows this, but data hoarders often think "RAID-6 with two parity drives means I'm fine." You're not. RAID protects you from drive failure. It does nothing against:

  • Controller firmware bugs that corrupt the array
  • Power events that corrupt the filesystem
  • Ransomware that encrypts every drive in the RAID
  • Fire, water, or theft
  • Bit rot on aged drives in a cold array

A 12-bay RAID-6 enclosure with 18 TB drives gives ~180 TB of usable capacity: 12 bays minus 2 parity drives, times 18 TB. That's a single point of failure in terms of backup. If that cabinet fails catastrophically, all your primary data is gone. RAID is uptime protection, not backup. Backup is a copy that lives elsewhere.
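Usable capacity of a parity array is bays minus parity drives, times drive size; a quick sketch (helper name is ours, and this ignores filesystem overhead and hot spares):

```python
def usable_tb(bays: int, drive_tb: float, parity_drives: int) -> float:
    """Usable capacity of a single parity group (RAID-6: 2 parity, RAID-Z3: 3)."""
    return (bays - parity_drives) * drive_tb

print(usable_tb(12, 18, 2))  # 12-bay RAID-6 of 18 TB drives -> 180.0 TB
print(usable_tb(12, 18, 3))  # same chassis as RAID-Z3      -> 162.0 TB
```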

The Cost Comparison: Cloud vs. Local vs. Hybrid

Let's model three backup strategies over three years, storing 100 TB with proper redundancy.

| Strategy | Hardware | Year 1 | 3-Year Total |
|---|---|---|---|
| Cloud Backup Only | None (lease) | $4,200 | $14,400 |
| Local RAID Only | $5,000 (2 × 100 TB RAID) | $5,000 | $5,000 |
| Local + Off-site P2P | $5,500 (3 × ~100 TB total) | $5,500 | $5,500 |
| Hybrid (Local + Cloud) | $5,000 (local RAID) | $9,200 | $19,400 |
| 3-2-1 (Optimal) | $6,000 (on-site + off-site) | $6,000 | $6,500 |

The optimal strategy buys hardware once, refreshes a drive or two over three years, and stays independent. Cloud-only requires year-after-year subscription bleeding. Hybrid is the worst of both: you're paying for local infrastructure and cloud fees.

Off-Site Backup Is Not Your Friend's NAS (Yet)

The mental barrier data hoarders hit: if off-site backup requires asking your friend to host a 100 TB drive, that's... a lot to ask. And coordinating the initial transfer takes time and willpower. This is why cloud backup feels free—it's passive. You set it up and forget about it (until restore time, when you learn how slow it actually is).

Off-site backup via P2P transfer inverts the dynamic. You move data once over your own network, at full speed, to wherever it needs to go. Your friend's spare NAS, a remote location, even a secondary residence—the hardware is cheap now. A 6-bay NAS holding 100 TB costs $1,500–2,500 in drives. In three years, that's $500–800 per year for off-site resilience. Compare that to cloud.

The P2P transfer piece is crucial. You don't want to babysit rsync over SSH for months on end. You need a tool that handles the full 100 TB move directly, with resume capability, deduplication awareness, and speed. That's what changes the equation.
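Resume capability is the non-negotiable feature at this scale. As a rough illustration of the idea (not any particular tool's implementation), a copy that appends from wherever the destination left off survives interruptions without restarting the whole file:

```python
import os

CHUNK = 64 * 1024 * 1024  # 64 MiB per read

def resumable_copy(src: str, dst: str) -> None:
    """Copy src to dst, resuming from however many bytes a previous run completed."""
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)                    # skip bytes already transferred
        while chunk := fin.read(CHUNK):   # copy the remainder chunk by chunk
            fout.write(chunk)
```

Real tools add integrity checks on the already-transferred prefix before appending; this sketch trusts the partial destination as-is.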

The Real Problem with Restore Time

Data hoarders accumulate files over years. Restoring after a failure isn't just about RPO (recovery point objective). It's about RTO (recovery time objective)—how long you can tolerate downtime.

With cloud backup: a 50 TB restore at a throttled 30 Mbps takes over 150 days. With local backup on a NAS over 1GbE? The same 50 TB moves in about five days. With 10GbE or a direct drive transfer? Under 14 hours. The difference between hours and months of downtime is meaningful when you live and work with your data.
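Restore time is the same bandwidth arithmetic applied in the recovery direction; a quick sketch (the link speeds are illustrative sustained rates, not guarantees):

```python
def restore_hours(tb: float, mbps: float) -> float:
    """Hours to pull `tb` terabytes over a link sustaining `mbps` megabits/s."""
    return tb * 1e12 * 8 / (mbps * 1e6) / 3600

for label, mbps in [("cloud, throttled", 30), ("1GbE LAN", 940), ("10GbE LAN", 8000)]:
    print(f"{label}: {restore_hours(50, mbps):,.0f} h")
```

Run it with your own numbers before a failure, not after; RTO planning is exactly this calculation.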

Off-site P2P backup means your third copy is a drive you can physically retrieve (from your friend's place) or access directly if it's in another part of your own infrastructure. The RTO for that copy is measured in hours of transfer time, not days of cloud throttling.

Implementing 3-2-1 for 100TB

Here's the practical setup: Primary storage is a 12-bay RAID-6 or RAID-Z3 enclosure with 18 TB drives (~160–180 TB usable, depending on parity level). Your second copy lives on a second 12-bay array, different room, powered separately. Your third copy goes off-site via P2P transfer to a NAS at a remote location.

Transfer the initial 100 TB off-site once, directly over P2P. Subsequent incremental syncs use change detection (modification time, block hashing) to only move changed data. For most hoarders, that's 5–10 TB per month of net changes.
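Change detection by size and modification time is cheap because it never reads file contents. A minimal sketch of the idea, assuming a `manifest` dict recorded at the last sync (all names here are hypothetical):

```python
import os

def signature(path: str) -> tuple[int, int]:
    """Cheap change check: size + mtime. Only hash contents when this differs."""
    st = os.stat(path)
    return (st.st_size, st.st_mtime_ns)

def changed(paths: list[str], manifest: dict[str, tuple[int, int]]) -> list[str]:
    """Paths whose signature differs from what the last sync recorded."""
    return [p for p in paths if manifest.get(p) != signature(p)]
```

Only the files this returns need block hashing and transfer, which is why monthly incrementals stay in the single-digit-TB range even on a 100 TB pool.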

Automate the syncs. Once a week, your primary storage pushes changes to off-site via a scheduled P2P job. If the connection drops, it resumes. If a file changes, only the delta moves. You don't micro-manage it—it just happens in the background.

The Fire-Flooded Scenario

Disaster planning is why data hoarders go full 3-2-1. Imagine a fire takes out your primary RAID and your second copy (same building). Your third copy is 50 miles away in a friend's NAS. You lose hardware, and at most the changes since your last sync, but not your archive. You recover from the off-site copy in days, not weeks.

Cloud backup wouldn't get you back any faster in this scenario. The download speed is the bottleneck either way. But cloud costs 3–4× more. And if the cloud provider has an outage or data corruption on their end, you find out during restore. Local copies are under your control.

Ransomware and the Immutable Backup

If your primary array gets hit by ransomware, cloud backup can propagate the encryption if your sync is automatic. Off-site P2P backup, if it's unidirectional (only push, never pull), stays clean. Set it to one-way sync: primary to off-site, never reverse. If primary gets encrypted, off-site is untouched.

Even better: off-site backups from the previous week or month can stay unmounted and isolated. You only bring them online to restore. This adds one more layer: ransomware can only encrypt what's mounted and accessible.

The Bottom Line for Data Hoarders

Cloud backup is a crutch for small data. For 100+ TB, it's economically insane and operationally weak. The 3-2-1 rule with local on-site storage, a second independent local copy, and off-site backup via P2P transfer is the only architecture that scales.

Your capital cost is front-loaded (~$6,000 for good hardware), but your maintenance cost is near-zero over three years. You control every copy. Restore times are measured in hours. No monthly subscription bleeds your bank account. And when disaster strikes, you recover faster than any cloud service can guarantee.

The data is yours. The backup should be too.

Move Your Off-Site Backup Now

Stop waiting for cloud backup to finish. P2P transfer moves your 100TB off-site in hours, not months. Direct, fast, and under your control.

Download Free