Server-to-Server Transfer Toolkit: SCP vs Rsync vs FTP vs BBCP
Every sysadmin knows the basics: SCP, rsync, SFTP. But which one actually gets the job done fast, securely, and reliably? A practical breakdown for 2026.
The Toolkit Evolution
Server-to-server file transfer tools have remained largely unchanged for 20 years. SSH-based tooling dominates (SCP, SFTP, rsync over SSH). But the problems they solve—slow networks, unreliable links, multi-hop routing—haven't gone away. Meanwhile, the constraints have changed: gigabit and multi-gigabit links are standard, but files are bigger, datasets are larger, and uptime demands are higher.
Here's how to pick the right tool for your migration, knowing what each one is actually optimized for.
The Tools Compared
| Tool | Speed | Security | Resume | Best For |
|---|---|---|---|---|
| SCP | Slow (single-threaded) | Strong | No | Quick ad-hoc files |
| Rsync | Medium (single-stream) | Strong | Yes (partial) | Incremental, repeating syncs |
| SFTP | Medium (protocol overhead) | Strong | Yes | Interactive, multi-file |
| FTP | Fast (raw speed) | Weak (unencrypted) | Yes | LAN-only bulk transfer |
| BBCP | Very Fast (parallel) | Medium (tunnel option) | Yes | High-speed bulk migration |
| P2P (Handrive, etc.) | Very Fast (parallel native) | Strong | Yes (native) | Modern data center migration |
SCP: Simple, Slow, Reliable
SCP (Secure Copy Protocol) does one thing: copy a file from A to B over SSH. Single-threaded. No checksums. No delta logic. No resume.
scp -r /source user@remote:/dest
Use SCP for:
- One-off file transfers under 100 GB
- Configuration files, scripts, archives
- Situations where you need a transfer to work and don't care about speed
Don't use SCP for:
- Large files or large batches (anything beyond a few hundred GB)
- Situations where network drops are expected
- Incremental syncs (SCP retransfers everything)
Why SCP persists: it's the simplest secure transfer. It's installed everywhere. For a 5 GB config archive, "scp and walk away" beats setting up rsync or BBCP. Use your best judgment.
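For reference, a couple of typical invocations. The host and paths are placeholders, so the commands are printed here rather than executed; the flags themselves are standard OpenSSH scp.

```shell
# Placeholder host/paths -- adapt before running. Standard OpenSSH scp flags:
# -p preserves times and modes, -r recurses, -C compresses in flight,
# -P selects a nonstandard SSH port.
quick_push='scp -p backup.tar.gz user@remote:/var/backups/'
tree_push='scp -rpC -P 2222 /etc/nginx user@remote:/tmp/nginx-copy'
printf '%s\n%s\n' "$quick_push" "$tree_push"
```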
Rsync: The Incremental Workhorse
Rsync has been the default for nearly three decades because it solves a real problem: only transfer what changed.
rsync -avz --partial /source user@remote:/dest
Rsync's killer feature: run it once, then run it again next week. The second run only transmits changes. This is invaluable for:
- Repeating syncs (nightly backups, continuous deployment)
- Large datasets where 80% is static
- Recovery: if a sync fails, rerun it and only retry the missing files
But rsync is single-threaded: one CPU core, one TCP stream. In practice that tops out around 1 GB/s even on a fast link, so a 100 TB migration is a day-plus job before you count scanning and checksumming. If you need it done in a short fixed window, rsync won't get you there.
Use rsync for:
- Any repeating sync scenario
- Datasets up to a few tens of TB where an overnight window is acceptable
- Incremental backups (most common use case)
Don't use rsync for:
- One-time bulk transfers over 100 TB
- Situations where you need speed and have parallel links available
SFTP: The Protocol Overhead
SFTP (SSH File Transfer Protocol) is rsync's slower cousin. It's a proper file protocol with directory enumeration, permission handling, and interactive operations.
Tools like FileZilla, WinSCP, and lftp use SFTP as their backend. It's reliable, standardized, and works anywhere SSH does.
SFTP's overhead: every file operation (stat, open, read, write, close) is a separate request-response over SSH. Transferring a million small files over SFTP means millions of request-response cycles; rsync batches these into a streaming file list. The result: rsync is typically 10-30% faster than SFTP on large datasets, and the gap widens further on trees of many small files.
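A quick back-of-envelope shows why this bites. The figures below are illustrative assumptions, not measurements: roughly three protocol round trips per file and a 0.5 ms LAN round-trip time.

```shell
# Assumed: 1,000,000 files, ~3 round trips each (open/write/close), 0.5 ms LAN RTT.
latency_cost=$(awk 'BEGIN {
  files = 1000000; rtts_per_file = 3; rtt_ms = 0.5
  printf "%.0f", files * rtts_per_file * rtt_ms / 1000   # seconds spent on latency alone
}')
echo "pure request-response latency: ${latency_cost}s (~$((latency_cost / 60)) minutes)"
```

Twenty-five minutes before a single data byte of throughput is counted — that is the cost rsync's batching avoids.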
Use SFTP for:
- Interactive file browsing and selection
- GUI-based transfers
- Multi-file transfers where you want confirmation before each operation
Don't use SFTP for:
- Automated bulk transfers (use rsync instead)
- Situations requiring high throughput
FTP: Fast, Insecure, Underrated
FTP (File Transfer Protocol) is the old standard. Unencrypted, username-password auth, but extremely fast over LAN because there's no SSH encryption overhead.
On a 10 Gbps LAN with two servers you control, FTP can shift data at near-wire speed. A parallel FTP client (multiple simultaneous connections) can usually saturate the link, provided the disks on both ends keep up.
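One common way to get those parallel connections is lftp's mirror command. The host and paths below are placeholders, so the command is printed rather than run; `--parallel` and `--use-pget-n` are real lftp mirror options (concurrent files and segmented single-file downloads, respectively).

```shell
# Placeholder host/paths -- printed, not executed.
# --parallel=4 transfers 4 files at once; --use-pget-n=4 splits each file into 4 segments.
ftp_job='open ftp://user@backup.lan; mirror --parallel=4 --use-pget-n=4 /pub/dataset /srv/dataset; quit'
echo "lftp -e '$ftp_job'"
```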
Use FTP for:
- LAN-only transfers between servers you own
- Situations where speed matters more than encryption
- Legacy systems or devices that don't support SSH
Don't use FTP for:
- Any WAN or untrusted network
- Sensitive data (FTP sends passwords in clear text)
- Compliance-heavy environments
BBCP: Parallel, Complex, Fast
BBCP, developed at SLAC for scientific computing, was designed to move petabytes between data centers. It opens multiple TCP streams in parallel and distributes file chunks across them.
BBCP can achieve 90-95% of available bandwidth on long-distance links where single-stream TCP wouldn't. It also has native resume: interrupt a transfer and pick up where it left off.
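The arithmetic behind that claim: a single TCP stream's throughput is bounded by window size divided by round-trip time. The numbers below (4 MB window, 80 ms coast-to-coast RTT, 16 streams — roughly what `bbcp -s 16 -w 4M src user@remote:/dst` would request, where -s sets stream count and -w the window) are illustrative assumptions, not measurements.

```shell
# Per-stream TCP ceiling = window / RTT; parallel streams multiply it.
estimate=$(awk 'BEGIN {
  window_mb = 4; rtt_s = 0.08; streams = 16
  one = window_mb / rtt_s                      # MB/s ceiling for a single stream
  printf "1 stream: %.0f MB/s (~%.1f Gbps); ", one, one * 8 / 1000
  printf "%d streams: %.0f MB/s (~%.1f Gbps)", streams, one * streams, one * streams * 8 / 1000
}')
echo "$estimate"
```

On that long link a single stream is stuck near 0.4 Gbps no matter how fast the pipe is; sixteen streams get within sight of a 10 Gbps link.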
The catch: BBCP is uncommon. It requires installation on both ends. Configuration is complex. Debugging network issues is harder.
Use BBCP for:
- One-time bulk migrations (100+ TB)
- Situations where you control both endpoints
- High-speed links (10+ Gbps) where single-stream protocols bottleneck
Don't use BBCP for:
- Ad-hoc, one-file transfers
- Situations requiring incremental sync logic
- Public/untrusted networks (BBCP has minimal auth)
Modern P2P Approaches
Newer tools (Handrive and similar) combine the best properties: parallel streams like BBCP, resumability like rsync, security like SFTP, and ease-of-use approaching SCP.
These tools are designed for the 2026 environment: modern networks are fast and reliable, but datasets are massive, and downtime is expensive. A tool that handles parallel transfers natively, resumes from network interruptions, and requires minimal configuration is increasingly valuable.
Use P2P tools for:
- Large data center migrations
- Situations where network interruptions are expected
- Transfer jobs that need to complete in a specific window
Decision Tree: Which Tool to Use
Here's the pragmatic flowchart:
- Is the transfer under 10 GB and a one-off? Use SCP.
- Do you need to repeat this transfer regularly? Use rsync.
- Is this a one-time bulk transfer of 100+ TB on a fast link? Use BBCP or a P2P tool.
- Do you need interactive file browsing? Use SFTP.
- Is this LAN-only and speed is critical? Use FTP.
- Are you migrating infrastructure or data center data? Use a P2P tool designed for this scale.
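The flowchart above, condensed into a small shell helper. The thresholds are this article's rules of thumb, not hard limits — adjust to taste.

```shell
# usage: pick_tool <size_gb> <repeating yes/no> <lan_only yes/no>
pick_tool() {
  size_gb=$1; repeating=$2; lan_only=$3
  if [ "$repeating" = yes ]; then echo rsync          # any repeating sync
  elif [ "$size_gb" -ge 100000 ]; then echo 'bbcp or p2p'  # 100+ TB bulk move
  elif [ "$lan_only" = yes ]; then echo ftp           # trusted LAN, speed first
  elif [ "$size_gb" -le 10 ]; then echo scp           # small one-off
  else echo rsync                                     # sane default
  fi
}
pick_tool 5 no no         # one-off 5 GB       -> scp
pick_tool 500 yes no      # nightly 500 GB     -> rsync
pick_tool 200000 no no    # 200 TB migration   -> bbcp or p2p
```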
Practical Sysadmin Wisdom
In practice, you'll likely use a mix:
- SCP for quick one-offs
- Rsync for repeating syncs and incremental backups
- BBCP for documented bulk migrations where you can spend time tuning
- P2P tools for large, complex data center operations
The biggest mistake sysadmins make: choosing a tool based on what they know rather than what the workload requires. Rsync is familiar. But if you're moving 500 TB and a single rsync stream will take the better part of a week, using it anyway just because you know rsync is a false economy.
Test. Measure. Choose based on time, not inertia.
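A minimal way to "choose based on time": compute the wire-speed floor first. This uses decimal units (1 TB = 8000 gigabits) and ignores protocol overhead, disk speed, and checksum passes, so real transfers always land above it.

```shell
# hours = terabytes * 8000 (gigabits per TB, decimal) / link_Gbps / 3600
floor_hours() { awk -v tb="$1" -v gbps="$2" 'BEGIN { printf "%.1f", tb * 8000 / gbps / 3600 }'; }
echo "100 TB over 10 Gbps: $(floor_hours 100 10) hours minimum"
echo "500 TB over 10 Gbps: $(floor_hours 500 10) hours minimum"
```

If the floor already blows your maintenance window, no amount of tool tuning will save you — you need a faster link or parallel paths.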
Transfer at Modern Scale
Handrive brings P2P-native transfers to modern sysadmins: parallel by default, resumable by design, secure by requirement. Built for the data center migrations that rsync can't handle.
Download Free