
Differences between common transfer methods?

MannDude

Just a dude
vpsBoard Founder
Moderator
The other day I was talking to someone who uses FUSE, and although I had heard of it, I've never actually utilized it. It got me thinking about what the major differences are between common transfer methods, and when one would be better than another in certain scenarios.

Anyone care to give me a basic description highlighting the major differences between transfer methods such as SCP, FTP, FUSE, rsync, etc., and when one would be better than another in certain situations?

Myself, I utilize rsync to transfer data between servers as it's what I am familiar with, and FTP/SFTP if I'm moving data from my desktop to a remote server. But then again, I'm always open to learning something new, and perhaps there is a better way to do things.
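For reference, the sort of rsync-over-SSH invocation I mean is roughly the following; the host and paths are just placeholders:

# mirror a directory to a remote box over SSH: -a preserves permissions,
# timestamps and symlinks, -z compresses in transit, -P shows progress,
# and --delete keeps the destination an exact mirror of the source
rsync -azP --delete /var/www/ user@backup.example.com:/var/www/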
 
Always use SFTP/SCP. If you can, use a CPU-optimized cipher like arcfour or blowfish for speed, e.g.:

scp -o Ciphers=arcfour ...
 

drmike

100% Tier-1 Gogent
The same crypto optimization exists in my preferred method, SSHFS; that being arcfour.

Considering most things here are tunneled out via SSH or VPN, the overhead and crypto already exist in said tunneling.

If you run bare, absent said tunneling, turning the crypto down from something like AES likely makes your data much more vulnerable to decryption by someone else.

I use SSHFS everywhere, including between local servers. It's simple and secure. Obviously it won't win any mass-throughput races...
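In practice it's a one-liner; the host, remote path, and mountpoint below are placeholders, and -o Ciphers= just passes the cipher choice through to the underlying ssh:

# mount a remote directory locally over SSH, asking ssh for arcfour;
# unmount later with: fusermount -u /mnt/remote
sshfs -o Ciphers=arcfour user@server.example.com:/data /mnt/remote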
 

texteditor

Premium Buffalo-based Hosting
FTP - the key word here is "simplicity"

FTP isn't special; it's just simple: a small set of commands and return codes, generally clear-text operation, and either in-band or out-of-band control. This lends it to three things:

1. High speeds

2. Easy usage (by which I mean rudimentary software can use it well, such as PXE or Kickstart script sources)

3. Very easy scriptability and extensibility

The advantages of 1 are marginal over most alternatives, and 2 is generally an edge case for special setups, but number 3 (scriptability and extensibility) is probably where FTP really shines.
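As a quick illustration of point 3: the protocol is simple enough that a transfer can be scripted with nothing but a here-doc and the stock ftp client (the host and filename here are made up):

# non-interactive upload; -n disables auto-login so the scripted
# 'user' command supplies the credentials instead
ftp -n ftp.example.com <<'EOF'
user anonymous guest@example.com
binary
put backup.tar.gz
bye
EOF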

FTP, being so simple, has allowed for some pretty cool things to be built into ftpds over the years, like:

A virtual user system with very granular access control (per-file and per-directory permissions, write-only access, read-only access, and more, all of which can be set on a per-user basis or even for anonymous users; see the pure-ftpd sketch after this list)

Traffic accounting inside the daemon (for example, upload/download 'credit' systems in pure-ftpd, glftpd, or drftpd)

Tight integration with custom services (IRC nuke-bots, pre-bots)

Add-on or even in-line integrity checking (for example, some ftpds have extensions to preload .sfv files and then check the CRC32 of incoming data, thanks to the extremely low time/space complexity of cyclic redundancy checks)

Distribution of transfer data sources due to separate out-of-band control (extensions to the FTP spec like drftpd's PRET command take advantage of the fact that FTP's control and transfer connections can be separated to allow data to be mirrored or striped across multiple slave servers, allowing single clients to be served data from multiple sources to increase speed or to load-balance client requests)

Easy server-to-server transfer or syncing via FXP (while FXP is technically a feature built on a bug in the FTP spec, many ftpds have embraced it as a way for a client to initiate transfers from server to server instead of server to client)
Granular client bandwidth controls

tl;dr: If you need to serve up a lot of data at relatively high speeds to clients you give varied levels of trust, FTP can probably be beaten and molded into a shape that perfectly matches your use case.
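As a rough sketch of the virtual-user item above, using pure-ftpd's pure-pw tool (the account names and database path here are assumptions and vary by distribution):

# create a virtual user mapped onto one system account, chrooted to
# their own home directory (prompts for a password), then rebuild
# pure-ftpd's user database
pure-pw useradd alice -u ftpuser -d /srv/ftp/alice
pure-pw mkdb

# point pure-ftpd at the resulting PureDB back end
pure-ftpd -l puredb:/etc/pure-ftpd/pureftpd.pdb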

edit for readability
 