Do You Actually Need a Central File Server? The Case for Decentralized File Sharing
Everyone defaults to buying a NAS. But for many workflows, a central server is a bottleneck, not a solution.
The Traditional Model and Its Problems
The self-hosted playbook goes like this: buy a NAS, set it up as central storage, have all devices access it. It's an obvious architecture. Centralization makes sense for shared data—everyone works from a single source of truth. But this model has real downsides that nobody discusses until you live with them.
The Bottleneck Problem
A central server becomes a throughput bottleneck. If you're transferring a 500 GB backup, the entire bandwidth of your network funnels through one NAS. You're moving data from disk A to the NAS, then from the NAS to disk B. That's two network hops with NAS disk I/O in the middle.
Direct device-to-device transfer is one hop—straight from source to destination at full bandwidth. A gigabit link gives you 125 MB/s maximum. If you route through a NAS doing RAID calculations, metadata updates, and cache invalidation, you might get 60–80 MB/s. That's a 30–50% penalty just for having a middleman.
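To make the arithmetic concrete, here's a rough sketch of what that penalty means for the 500 GB backup. The throughput figures are the article's illustrative estimates, not benchmarks:

```python
# Rough transfer-time comparison: direct device-to-device vs. routed
# through a NAS. Throughput numbers are illustrative estimates from
# the text above, not measurements.

file_size_mb = 500 * 1000   # a 500 GB backup, in MB
direct_rate = 125           # gigabit line rate, ~125 MB/s
via_nas_rate = 70           # middle of the 60-80 MB/s range

def transfer_minutes(size_mb, rate_mb_per_s):
    """Time to move size_mb megabytes at rate_mb_per_s MB/s, in minutes."""
    return size_mb / rate_mb_per_s / 60

print(f"direct:  {transfer_minutes(file_size_mb, direct_rate):.0f} min")   # ~67 min
print(f"via NAS: {transfer_minutes(file_size_mb, via_nas_rate):.0f} min")  # ~119 min
print(f"penalty: {1 - via_nas_rate / direct_rate:.0%}")                    # 44%
```

Nearly an extra hour for one backup, purely from routing through a middlebox.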
Scale this to a team environment. Five people all working from a central NAS means five clients competing for disk I/O. If one person is uploading a massive file, the others experience lag. The NAS CPU spikes. Network becomes congested. Performance degrades for everyone.
Single Point of Failure
A central server is also a single point of failure. If the NAS dies, your shared files are unavailable. Yes, you should have backups, but the service is down while you repair it. That downtime cascades: collaborators can't access files, backup jobs queue up, projects stall.
RAID helps, but it's not foolproof. RAID-1 tolerates one drive failure and RAID-6 two, but a firmware bug, power surge, or manufacturing defect can take down the entire array at once. You still need a separate backup system, which adds complexity.
Decentralized systems don't have this problem. If your laptop dies, your files still exist on your desktop and your NAS. If the NAS is offline, you and a colleague can still sync files directly. There's no single point of failure.
The Maintenance Burden
Central servers need tending. Firmware updates. Disk health monitoring. RAID integrity checks. Storage expansion planning. Security patches. Database maintenance. Someone is responsible for keeping it running.
If you skip maintenance, you're at risk. An unpatched NAS with a known vulnerability is a breach waiting to happen. A RAID array that hasn't been scrubbed in months might have silent data corruption. A full disk causes cascading failures.
Decentralized approaches distribute this burden. Each device manages itself. A laptop doesn't need firmware updates for shared storage—it already updates as part of OS maintenance. No central monitoring system to babysit.
Cost and Overhead
A quality 4-bay NAS costs $500–1,500. Add drives at $15–30 per terabyte. Factor in the cost of a backup drive or cloud backup subscription. Add electricity and cooling. Over five years, a NAS setup costs $2,000–4,000 in hardware and operational costs.
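As a back-of-the-envelope check on those numbers, here's the five-year total using mid-range figures from the ranges above. Every value is an assumption; plug in your own hardware prices and electricity rate:

```python
# Five-year NAS cost sketch. All figures are assumptions drawn from
# the article's price ranges, not quotes for any specific product.

nas_unit = 900               # mid-range 4-bay NAS ($500-1,500)
drives = 4 * 8 * 22          # four 8 TB drives at ~$22/TB
backup = 400                 # external backup drive or cloud tier, 5 yr
power_watts = 30             # typical small-NAS average draw (assumed)
kwh_price = 0.15             # $/kWh (assumed)
years = 5

electricity = power_watts / 1000 * 24 * 365 * years * kwh_price
total = nas_unit + drives + backup + electricity
print(f"electricity over {years} years: ${electricity:.0f}")
print(f"total 5-year cost: ${total:.0f}")
```

Even with conservative inputs, the total lands at the low end of the $2,000–4,000 range; larger arrays or pricier power push it toward the top.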
If you already have multiple devices—laptop, desktop, NAS for something else—decentralized file sharing uses hardware you already own. The marginal cost is software, which for open source is zero.
When Centralization Still Makes Sense
This isn't an argument against NAS entirely. Centralization is still the right choice for specific scenarios.
Long-term shared storage with many collaborators: If ten people need permanent access to shared project files, a central NAS with proper permissions is simpler than managing peer-to-peer sync across everyone's devices. You configure access once and it's managed in one place.
Archive and backup: Files you might need in five years but aren't actively working on. A central archive NAS with redundant drives is more reliable than hoping you keep a backup drive long-term.
Always-on availability: Some workflows demand that files be available 24/7 without waking any specific device. A low-power NAS that stays on around the clock solves this.
Remote access from outside the LAN: If you need access from office, coffee shop, or travel, a central NAS with VPN or reverse proxy is simpler than configuring direct P2P over internet.
The Decentralized Approach: Mesh-Style Sharing
Instead of a hub, imagine a mesh: every device can talk directly to every other device. Your laptop syncs with your desktop. Your phone syncs with both. Your backup drive is just another node that receives copies.
This is how Syncthing works. Every device holds copies of files it's synced. No central hub needed. If one device is offline, the others continue working and sync when it returns. Resilience through redundancy instead of centralization.
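One way to see the redundancy argument is to count paths. In a hub model, every transfer runs through one node, so one failure severs them all; in a mesh, each pair of devices has its own direct link. A small sketch (the device names are made up for illustration):

```python
# Hub vs. mesh connectivity for a small set of devices.
# Device names are hypothetical examples.
from itertools import combinations

devices = ["laptop", "desktop", "phone", "backup-drive"]

# Hub model: every path goes through the central server, so a single
# hub failure severs all of them at once.
hub_paths = [("hub", d) for d in devices]
print(len(hub_paths))   # 4 paths, all through one node

# Mesh model: every pair of devices can talk directly.
mesh_paths = list(combinations(devices, 2))
print(len(mesh_paths))  # 6 direct links for 4 devices

# Take one device offline; the remaining pairs are unaffected.
offline = "laptop"
surviving = [p for p in mesh_paths if offline not in p]
print(len(surviving))   # 3 links still work, no shared chokepoint
```

Losing the hub kills all four paths; losing one mesh node leaves every other pair connected.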
For most personal workflows—keeping your own files consistent across devices—mesh-style syncing offers strong benefits. You get redundancy without a dedicated server.
The P2P Transfer Model: No Sync, Just Movement
P2P direct transfer is different again. Not sync, which requires continuous connection and state. Just moving files from A to B one time.
Send a project folder to a colleague? A direct P2P transfer at full link speed, done in minutes. Back up to an external drive? P2P transfer. No server involved, no ongoing overhead. The transfer completes and you disconnect.
For movement workflows—which are more common than people admit—P2P is the most efficient model.
A Practical Hybrid
Most self-hosted users should do both, not either-or.
Keep personal files synced across your devices via mesh-style sync (Syncthing or similar). This gives you redundancy without a server. Use P2P transfer for moving large files to collaborators or backup. For truly shared team storage—files that multiple people edit continuously—a central NAS makes sense.
The key is being intentional. Don't default to a central server for everything. Use the right tool for each workflow. Personal files go in mesh sync. Team files go in central storage. Large transfers go via P2P.
Clustering bridges these approaches. Install Handrive on your laptop, desktop, and NAS, all logged in with the same email. Shares and access controls sync automatically across every device. Run multiple instances on your NAS if you need more throughput. You get the resilience of distributed devices with the coordination of centralized storage—without managing a traditional server.
The Real Question
Before you buy a NAS, ask: who needs to access this data, how often, and for how long? If it's you keeping multiple devices synced, mesh is better. If it's shared team storage, a central server makes sense. If it's moving files from A to B, P2P is faster.
The allure of a NAS is that it seems like one answer to everything. But one answer to everything is usually not the best answer to anything. The self-hosted ecosystem is mature enough now that you can mix and match. Use what actually solves your problem instead of what everyone assumes you need.
Direct File Transfers Without a Server
Move files between devices at full bandwidth. No central hub. No bottleneck. Simple, fast, and privacy-first.
Download Free