# File transfers over ssh limited by CPU speed



## mix_room (Aug 27, 2010)

I am trying to transfer some large files over the internet. Generally SSH (rsync -e ssh, scp) works wonderfully for this, but it turns out that the CPU is limiting the speed at which I can transfer the files: I cannot get more than 1.6 MB/s (~13 Mbit/s) out of a single transfer. 
Since this does not saturate my transfer capacity, I would like to improve the transfer rate. 

1) What I do: rsync from Cygwin (Windows Server 2008) -> FreeBSD server. Fastest rate: 13 Mbit/s. 
2) What I would like to do: the same rsync transfer at 100 Mbit/s. 

Is there any way in which this can be achieved? 
A) multi-threaded rsync? 
B) another cipher for ssh (tried `ssh -c blowfish` to no avail) 
C) something else, such as an alternative transfer protocol (preferably not). 

The processors are fairly new Intel Xeons, so they shouldn't be slow out of the box. 
I realise that the FreeBSD forum may not be the most appropriate place to ask this question, as it involves running the Cygwin port of ssh on Windows, but I was hoping that someone might have a good hint or idea.
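One way to narrow down where the bottleneck is (a sketch; `REMOTE` and the host names are placeholders for your own machines) is to time the same byte stream with and without ssh in the path:

```shell
# Isolate the bottleneck before tuning.  dd prints the achieved rate
# on stderr for each leg, so the three legs can be compared directly.

# 1) Pure pipe, no network, no crypto: an upper bound for the host.
dd if=/dev/zero bs=1M count=100 | cat > /dev/null

# 2) Same stream through ssh.  If this leg is stuck near 1.6 MB/s
#    while (1) is fast, the cost is in ssh itself, not rsync or disk.
#    (Uncomment and set REMOTE to run.)
#REMOTE=user@freebsd-host
#dd if=/dev/zero bs=1M count=100 | ssh "$REMOTE" 'cat > /dev/null'

# 3) Raw TCP via nc(1) takes the crypto out entirely: fast here but
#    slow in (2) points at the cipher/MAC; slow in both points at the
#    network path.
#    remote:  nc -l 5001 > /dev/null
#    local:   dd if=/dev/zero bs=1M count=100 | nc freebsd-host 5001
```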


----------



## graudeejs (Aug 27, 2010)

It might as well be the router throttling everything (I had this problem until I was able to connect both PCs directly).


----------



## mix_room (Aug 27, 2010)

I updated my installation of Cygwin; it had an old version of OpenSSH, now at 5.6.
While this did not improve the speed, it did significantly reduce CPU load: from 100% on a CPU core to zero. 

Now it seems that something else is causing the problem. I will continue to look for improved speeds, but it no longer seems to be the CPU that is the limit.


----------



## graudeejs (Aug 27, 2010)

Also check your firewall rules (if you're running a firewall); maybe you have bandwidth limiting set up.
If possible, try disabling the firewall and see if that helps.


----------



## mix_room (Aug 27, 2010)

killasmurf86 said:

> Also check your firewall rules (if you're running a firewall); maybe you have bandwidth limiting set up.


Will take a look. 



> If possible, try disabling the firewall and see if that helps


Unfortunately not possible. 


This is the only type of transfer that is limited. I suspect limited peering capacity between the two providers.


----------



## phoenix (Aug 27, 2010)

Install security/openssh-portable and enable the HPN patches.  These patch OpenSSH in two ways: larger internal transmit/receive buffers, and support for the NONE cipher (i.e. no encryption).

With that, you can eliminate the encryption for the bulk data transfer and achieve as close to the full line rate (100 Mbps) as you can get without switching to bare UDP packets.

You'll need to add the following to /etc/rc.conf:

```
sshd_enable="NO"
openssh_enable="YES"
```

Then run the following:
`# service sshd onestop`
`# service openssh start`

The extra options to add to /usr/local/etc/ssh/sshd_config are:

```
# The following are HPN-related configuration options
# Whether to disable hpn performance boosts. 
HPNDisabled no

# TCP receive buffer polling.
# Disable in non-autotuning kernels
TcpRcvBufPoll yes

# Buffer size for hpn to non-hpn connections
HPNBufferSize 8192

# Whether to allow the use of the "none" cipher
NoneEnabled yes
```
You'll need to play with the HPNBufferSize to match it up to your system.  8192 is a good value for gigabit links.

Connecting from a non-HPN ssh client to an HPN-enabled server will give you about 10-20% speed increase.

Connecting from an HPN-enabled client to an HPN-enabled server will give you about a 15-30% increase.

Connecting from an HPN-enabled client using NONE cipher to HPN-enabled server will give about a 50% increase.

To use the NONE cipher, you have to manually specify it on the SSH commandline:
`$ /usr/local/bin/ssh -oNoneEnabled=yes -oNoneSwitch=yes -oHPNBufferSize=8192 username@host`
`$ /usr/local/bin/scp -oNoneEnabled=yes -oNoneSwitch=yes -oHPNBufferSize=8192 somefile username@host:`

To use it with rsync (note that --rsh takes the ssh binary, not scp):
`$ rsync --rsh="/usr/local/bin/ssh -oNoneEnabled=yes -oNoneSwitch=yes -oHPNBufferSize=8192" --other-rsync-options ...`

We use this on our rsync servers.  We can get almost 100 MB/s out of SATA disks across a gigabit fibre link.


----------



## graudeejs (Aug 27, 2010)

Ah, and you can also encrypt the data before sending it, with openssl or gnupg:

http://www.madboa.com/geek/openssl/#encrypt-simple
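A minimal sketch of the openssl route (filenames and the passphrase are placeholders; `-pbkdf2` requires a reasonably recent OpenSSL): encrypt locally, move the ciphertext with any fast unencrypted transport, decrypt on the far side.

```shell
# Placeholder payload; in practice this is the large file to transfer.
echo 'some payload' > plain.txt

# Encrypt with symmetric AES-256-CBC (passphrase is a placeholder;
# prefer -pass file:... or an env var over a literal on the command line).
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:secret \
    -in plain.txt -out plain.enc

# Ship plain.enc over nc/ftp/http/etc., then decrypt on the receiver:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret \
    -in plain.enc -out roundtrip.txt

cmp plain.txt roundtrip.txt && echo OK   # prints "OK"
```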


----------



## gessel (Aug 13, 2011)

Has the procedure for HPN enablement changed in the last few years?  I have ticked

```
Enable HPN-SSH patch
```

in the options for openssh-portable-overwrite-base 5.2.p1_4,1 (along with `OVERWRITE_BASE`).

I then modified `sshd_config` as described above.  However, the options are flagged as bad.  For example, enabling

```
# The following are HPN-related configuration options
# Whether to disable hpn performance boosts.
HPNDisabled no
```

yields

```
Bad configuration option: HPNDisabled
```

on `service openssh restart`, and OpenSSH does not start.  Commenting the HPN-specific options out of `sshd_config` and executing `service openssh restart` yields the expected results.

Are there no HPN-specific options configurable in 5.2.p1_4,1?


----------



## gessel (Aug 13, 2011)

Absent further configuration of HPN, simply enabling or disabling the HPN-SSH patch and copying files using WinSCP 4.3.4 across a 100 Mbit LAN, I got the following results:


```
Protocol  HPN  Cipher    Data Rate (MB/s)
SCP       Yes  AES       7.8
SCP       No   AES       7.9
SCP       No   BlowFish  8.2
SCP       Yes  BlowFish  8.1
SFTP      No   BlowFish  6.9
SFTP      No   AES       5.7
SFTP      Yes  AES       5.8
SFTP      Yes  BlowFish  7.0
```
These suggest that the HPN option, enabled but without the tuning settings described above, slightly degrades SCP performance and slightly improves SFTP performance.  The other results are about as expected.


----------

