# FreeBSD speed terrible compared to Ubuntu



## absduser (Oct 7, 2015)

I'm trying to figure out why FreeBSD cannot manage to achieve decent transfer rates in both directions as compared to Ubuntu. I'm performing my tests as follows: 

Source: FreeBSD 9.2 amd64
Target: Ubuntu 14.04 x64

When testing speed with iperf from Source -> Target, I'm getting about 20 Mbps:


```
iperf -t10 -P1 -i1 -c xxxxxxxx

------------------------------------------------------------
Client connecting to xxxxxxxxx, TCP port 5001
TCP window size: 2.16 MByte (default)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  3.50 MBytes  29.4 Mbits/sec
[  3]  1.0- 2.0 sec  3.75 MBytes  31.5 Mbits/sec
[  3]  2.0- 3.0 sec  6.38 MBytes  53.5 Mbits/sec
[  3]  3.0- 4.0 sec  2.12 MBytes  17.8 Mbits/sec
[  3]  4.0- 5.0 sec  3.25 MBytes  27.3 Mbits/sec
[  3]  5.0- 6.0 sec  4.25 MBytes  35.7 Mbits/sec
[  3]  6.0- 7.0 sec  1.88 MBytes  15.7 Mbits/sec
[  3]  7.0- 8.0 sec  4.12 MBytes  34.6 Mbits/sec
[  3]  8.0- 9.0 sec  1.25 MBytes  10.5 Mbits/sec
[  3]  9.0-10.0 sec  1.00 MBytes  8.39 Mbits/sec
[  3]  0.0-10.1 sec  31.8 MBytes  26.4 Mbits/sec
```

The same test run in reverse shows fantastic speed:

Target -> Source:


```
------------------------------------------------------------
Client connecting to xxxxxxxx, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  90.8 MBytes   761 Mbits/sec
[  3]  1.0- 2.0 sec   104 MBytes   871 Mbits/sec
[  3]  2.0- 3.0 sec   107 MBytes   900 Mbits/sec
[  3]  3.0- 4.0 sec  96.0 MBytes   805 Mbits/sec
[  3]  4.0- 5.0 sec  97.8 MBytes   820 Mbits/sec
[  3]  5.0- 6.0 sec   102 MBytes   857 Mbits/sec
[  3]  6.0- 7.0 sec   104 MBytes   873 Mbits/sec
[  3]  7.0- 8.0 sec   104 MBytes   868 Mbits/sec
[  3]  8.0- 9.0 sec   104 MBytes   873 Mbits/sec
[  3]  9.0-10.0 sec   104 MBytes   871 Mbits/sec
[  3]  0.0-10.0 sec  1014 MBytes   850 Mbits/sec
```

The Target is capable of gigabit speed downloads from other hosts.

Traceroutes:

Source -> Target:


```
traceroute to xxxxxxx, 64 hops max, 52 byte packets
 1    6.978 ms  1.989 ms  2.002 ms
 2    10.954 ms  0.983 ms  1.957 ms
 3    52.189 ms  5.998 ms  22.044 ms
 4    26.091 ms  22.056 ms  24.017 ms
 5    21.492 ms  21.029 ms
 6    21.047 ms  21.093 ms  21.998 ms
 7    20.897 ms  20.744 ms  23.042 ms
 8    20.699 ms  20.655 ms  20.526 ms
```


Target -> Source:

```
traceroute to xxxxxx, 30 hops max, 60 byte packets
 1    0.782 ms  0.761 ms  0.784 ms
 2    1.072 ms  1.028 ms  1.002 ms
 3    0.689 ms  0.665 ms  0.796 ms
 4    42.513 ms  42.596 ms  42.568 ms
 5    0.917 ms  0.895 ms  0.866 ms
 6    20.209 ms  28.508 ms  28.507 ms
 7    20.346 ms  20.352 ms  20.392 ms
 8    30.392 ms  30.404 ms  30.387 ms
 9    20.542 ms  20.720 ms  20.899 ms
```


We thought there could just be a bad hop, or perhaps the hardware was at fault, but when we run the same iperf test from the SAME hardware booted into a stock Ubuntu 12.04 install, we get much better results:


(Ubuntu 12.04) Source -> Target:


```
$ iperf -t10 -P1 -i1 -w1M -c xxxxxxx
------------------------------------------------------------
Client connecting to xxxxxxxx, TCP port 5001
TCP window size:  256 KByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  9.25 MBytes  77.6 Mbits/sec
[  3]  1.0- 2.0 sec  10.2 MBytes  86.0 Mbits/sec
[  3]  2.0- 3.0 sec  10.6 MBytes  89.1 Mbits/sec
[  3]  3.0- 4.0 sec  11.1 MBytes  93.3 Mbits/sec
[  3]  4.0- 5.0 sec  11.2 MBytes  94.4 Mbits/sec
[  3]  5.0- 6.0 sec  10.8 MBytes  90.2 Mbits/sec
[  3]  6.0- 7.0 sec  11.5 MBytes  96.5 Mbits/sec
[  3]  7.0- 8.0 sec  11.2 MBytes  94.4 Mbits/sec
[  3]  8.0- 9.0 sec  11.0 MBytes  92.3 Mbits/sec
[  3]  9.0-10.0 sec  11.1 MBytes  93.3 Mbits/sec
[  3]  0.0-10.0 sec   108 MBytes  90.7 Mbits/sec
```


So, it would seem that FreeBSD is very well tuned for Target -> Source traffic (nearly 1 Gbps), but the Source -> Target direction is terrible (20 Mbps). Since the RTT is the same and the number of hops is virtually unchanged, we see no reason the speed should differ outbound. Further, when we run Ubuntu -> Target on the same hardware, we get decent speed (roughly 100 Mbps). We therefore conclude that neither the hardware nor the hops are the issue. What are we missing in our FreeBSD tuning that makes the outbound transfers so slow, while the inbound transfers are great? And why does Ubuntu handle this so much better on identical hardware?

Thanks 
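As a sanity check on whether window size alone could be the bottleneck, the bandwidth-delay product of this path is worth estimating (an editorial back-of-the-envelope sketch; the ~20 ms RTT is taken from the traceroutes above):

```python
def bdp_bytes(rate_bits_per_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return rate_bits_per_s * rtt_s / 8.0

# ~20 ms RTT at 1 Gbit/s:
print(f"{bdp_bytes(1e9, 0.020) / 1e6:.2f} MB")  # 2.50 MB
```

Roughly 2.5 MB must be in flight to sustain 1 Gbit/s over 20 ms, which the 2.16 MByte window reported above nearly covers, so buffer sizing alone does not obviously explain a drop to 20 Mbps.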



Source settings (FreeBSD):


```
bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=c019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO,LINKSTATE>
        ether f0:1f:af:11:e9:de
        inet6 fe80::f21f:afff:fe11:e9de%bge0 prefixlen 64 scopeid 0x3
        inet xxxxxxxx netmask 0xff000000 broadcast 82.255.255.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active

kern.ipc.maxsockbuf: 16777216
kern.ipc.sockbuf_waste_factor: 8
kern.ipc.somaxconn: 128
kern.ipc.max_linkhdr: 16
kern.ipc.max_protohdr: 60
kern.ipc.max_hdr: 76
kern.ipc.max_datalen: 92
kern.ipc.nmbjumbo16: 3200
kern.ipc.nmbjumbo9: 6400
kern.ipc.nmbjumbop: 12800
kern.ipc.nmbclusters: 25600
kern.ipc.piperesizeallowed: 1
kern.ipc.piperesizefail: 0
kern.ipc.pipeallocfail: 0
kern.ipc.pipefragretry: 0
kern.ipc.pipekva: 21315584
kern.ipc.maxpipekva: 4080218112
kern.ipc.msgseg: 2048
kern.ipc.msgssz: 8
kern.ipc.msgtql: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgmni: 40
kern.ipc.msgmax: 16384
kern.ipc.semaem: 16384
kern.ipc.semvmx: 32767
kern.ipc.semusz: 152
kern.ipc.semume: 10
kern.ipc.semopm: 100
kern.ipc.semmsl: 60
kern.ipc.semmnu: 30
kern.ipc.semmns: 60
kern.ipc.semmni: 10
kern.ipc.semmap: 30
kern.ipc.shm_allow_removed: 0
kern.ipc.shm_use_phys: 0
kern.ipc.shmall: 8192
kern.ipc.shmseg: 128
kern.ipc.shmmni: 192
kern.ipc.shmmin: 1
kern.ipc.shmmax: 33554432
kern.ipc.maxsockets: 25600
kern.ipc.numopensockets: 1281
kern.ipc.nsfbufsused: 0
kern.ipc.nsfbufspeak: 0
kern.ipc.nsfbufs: 0
net.inet.tcp.rfc1323: 1
net.inet.tcp.mssdflt: 1460
net.inet.tcp.keepidle: 7200000
net.inet.tcp.keepintvl: 75000
net.inet.tcp.sendspace: 2263000
net.inet.tcp.recvspace: 2263000
net.inet.tcp.keepinit: 75000
net.inet.tcp.delacktime: 100
net.inet.tcp.v6mssdflt: 1024
net.inet.tcp.cc.available: newreno
net.inet.tcp.cc.algorithm: newreno
net.inet.tcp.hostcache.purge: 0
net.inet.tcp.hostcache.prune: 300
net.inet.tcp.hostcache.expire: 3600
net.inet.tcp.hostcache.count: 193
net.inet.tcp.hostcache.bucketlimit: 30
net.inet.tcp.hostcache.hashsize: 512
net.inet.tcp.hostcache.cachelimit: 15360
net.inet.tcp.read_locking: 1
net.inet.tcp.recvbuf_max: 16777216
net.inet.tcp.recvbuf_inc: 524288
net.inet.tcp.recvbuf_auto: 1
net.inet.tcp.insecure_rst: 0
net.inet.tcp.ecn.maxretries: 1
net.inet.tcp.ecn.enable: 0
net.inet.tcp.abc_l_var: 2
net.inet.tcp.rfc3465: 1
net.inet.tcp.rfc3390: 1
net.inet.tcp.rfc3042: 1
net.inet.tcp.drop_synfin: 1
net.inet.tcp.delayed_ack: 1
net.inet.tcp.blackhole: 0
net.inet.tcp.log_in_vain: 0
net.inet.tcp.sendbuf_max: 16777216
net.inet.tcp.sendbuf_inc: 16384
net.inet.tcp.sendbuf_auto: 1
net.inet.tcp.tso: 1
net.inet.tcp.local_slowstart_flightsize: 4
net.inet.tcp.slowstart_flightsize: 1550
net.inet.tcp.path_mtu_discovery: 1
net.inet.tcp.reass.overflows: 481332
net.inet.tcp.reass.cursegments: 4
net.inet.tcp.reass.maxsegments: 1680
net.inet.tcp.sack.globalholes: 0
net.inet.tcp.sack.globalmaxholes: 65536
net.inet.tcp.sack.maxholes: 128
net.inet.tcp.sack.enable: 1
net.inet.tcp.inflight.stab: 20
net.inet.tcp.inflight.max: 1073725440
net.inet.tcp.inflight.min: 6144
net.inet.tcp.inflight.rttthresh: 10
net.inet.tcp.inflight.debug: 0
net.inet.tcp.inflight.enable: 0
net.inet.tcp.isn_reseed_interval: 0
net.inet.tcp.icmp_may_rst: 1
net.inet.tcp.pcbcount: 461
net.inet.tcp.do_tcpdrain: 1
net.inet.tcp.tcbhashsize: 512
net.inet.tcp.log_debug: 0
net.inet.tcp.minmss: 216
net.inet.tcp.syncache.rst_on_sock_fail: 1
net.inet.tcp.syncache.rexmtlimit: 3
net.inet.tcp.syncache.hashsize: 512
net.inet.tcp.syncache.count: 4294967277
net.inet.tcp.syncache.cachelimit: 15360
net.inet.tcp.syncache.bucketlimit: 30
net.inet.tcp.syncookies_only: 0
net.inet.tcp.syncookies: 1
net.inet.tcp.timer_race: 1
net.inet.tcp.rexmit_drop_options: 1
net.inet.tcp.finwait2_timeout: 60000
net.inet.tcp.fast_finwait2_recycle: 0
net.inet.tcp.always_keepalive: 1
net.inet.tcp.rexmit_slop: 200
net.inet.tcp.rexmit_min: 30
net.inet.tcp.msl: 30000
net.inet.tcp.nolocaltimewait: 0
net.inet.tcp.maxtcptw: 5120
```


Target settings (Ubuntu):

```
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:840363516 errors:0 dropped:0 overruns:0 frame:0
          TX packets:152325250 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2086148084848 (2.0 TB)  TX bytes:64233189137 (64.2 GB)

net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = cubic reno
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = cubic reno
net.ipv4.tcp_base_mss = 512
net.ipv4.tcp_challenge_ack_limit = 100
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_fack = 1
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_key = 00000000-00000000-00000000-00000000
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_limit_output_bytes = 131072
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 32768
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 256
net.ipv4.tcp_max_tw_buckets = 32768
net.ipv4.tcp_mem = 524288       524288  524288
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_notsent_lowat = -1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096        87380   16777216
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_thin_dupack = 0
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096        65536   16777216
net.ipv4.tcp_workaround_signed_windows = 0
net.core.busy_poll = 0
net.core.busy_read = 0
net.core.default_qdisc = pfifo_fast
net.core.dev_weight = 64
net.core.flow_limit_cpu_bitmap = 00
net.core.flow_limit_table_len = 4096
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_budget = 300
net.core.netdev_max_backlog = 1000
net.core.netdev_tstamp_prequeue = 1
net.core.optmem_max = 20480
net.core.rmem_default = 524288
net.core.rmem_max = 33554432
net.core.rps_sock_flow_entries = 0
net.core.somaxconn = 128
net.core.tstamp_allow_data = 1
net.core.warnings = 0
net.core.wmem_default = 524288
net.core.wmem_max = 33554432
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_etime = 10
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_larval_drop = 1
```


----------



## drhowarddrfine (Oct 7, 2015)

I've always pointed out that Netflix serves its video using FreeBSD, which accounts for about 40% of all internet traffic. Coincidentally, I was just having an online conversation with one of the Netflix engineers about something entirely unrelated. I'm going to see if I can reach him and whether he'd be willing to come here and take a look at this.


----------



## Oko (Oct 7, 2015)

I stopped reading your thread when I saw that you are comparing Ubuntu 14.04 and FreeBSD 9.2. IIRC, GCC was still the default compiler in FreeBSD 9.2, and Ubuntu 14.04 uses a much newer version of GCC. Of course Ubuntu is going to be faster. You are also not disclosing the details of your hardware setup: which file systems you are using, details of the SATA controllers, RAID, and any ZFS pools involved. If you want a comparison, we need to know all the details of the entire setup first. Besides, I have no problem believing that Ubuntu 14.04 is faster for certain tasks. However, your post doesn't look very serious.


----------



## diizzy (Oct 7, 2015)

Upgrade to 10.2 first
//Danne


----------



## obsigna (Oct 7, 2015)

absduser said:


> Source settings (FreeBSD):
> `net.inet.tcp.delayed_ack: 1`
> Target settings (Ubuntu):
> `net.ipv4.tcp_autocorking = 1`



TCP Delayed Acknowledgement and TCP_CORK (a Linuxism for TCP_NOPUSH on *BSD, see tcp(4)) might not play well together. Try again after setting either or both of them, namely net.inet.tcp.delayed_ack and net.ipv4.tcp_autocorking, to 0.

TCP Delayed Acknowledgement is mostly justified using the example of telnetting single characters over a congested, high-latency line. That is of course most useful for transferring data between the New Horizons spacecraft on its Pluto mission and its ground station on Earth; however, IMHO, it does more harm than good on high-speed, low-latency Mbit/s to Gbit/s links. The first thing I do when installing a new system is disable net.inet.tcp.delayed_ack. I even disabled it on my iPhone and saw faster web and mail access.
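A minimal sketch of that experiment (runtime sysctls, one on each host; persist them in /etc/sysctl.conf only if they turn out to help):

```
# On the FreeBSD source: disable TCP delayed ACK
sysctl net.inet.tcp.delayed_ack=0

# On the Ubuntu target: disable TCP autocorking
sysctl -w net.ipv4.tcp_autocorking=0
```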


----------



## lme@ (Oct 11, 2015)

Oko said:


> I stopped reading your thread when I saw that you are comparing Ubuntu 14.04 and FreeBSD 9.2. IIRC, GCC was still the default compiler in FreeBSD 9.2, and Ubuntu 14.04 uses a much newer version of GCC. Of course Ubuntu is going to be faster. You are also not disclosing the details of your hardware setup: which file systems you are using, details of the SATA controllers, RAID, and any ZFS pools involved. If you want a comparison, we need to know all the details of the entire setup first. Besides, I have no problem believing that Ubuntu 14.04 is faster for certain tasks. However, your post doesn't look very serious.



Sorry, but that's just bullshit. FreeBSD 9.2 was released in November 2013. Do you think FreeBSD could not do more than 30 Mbit/s at that time? Furthermore, the OP also compares 9.2 with Ubuntu 12.04, which outperforms FreeBSD in his setup.
My guess is that some sysctl needs to be changed to get decent speed.


----------



## diizzy (Oct 11, 2015)

lme@
Please mind your language.

Oko does have a point (the GCC version does make quite a difference, since 9.X and older use an ancient version), but 20-30 Mbit/s is extremely slow and shouldn't happen even without tuning. This is most likely a driver issue, which is why I suggested upgrading to 10.2.
//Danne


----------



## absduser (Oct 15, 2015)

Per the suggestion, I upgraded my test system (formerly 9.2) to 10.2, and with no modifications I immediately got this (FreeBSD 10.2 -> Target):


```
# iperf -t30 -P1 -i1 -c x.x.x.x
------------------------------------------------------------
Client connecting to x.x.x.x, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local x.x.x.x port 30459 connected with x.x.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  63.2 MBytes   531 Mbits/sec
[  3]  1.0- 2.0 sec  70.8 MBytes   593 Mbits/sec
[  3]  2.0- 3.0 sec  81.4 MBytes   683 Mbits/sec
[  3]  3.0- 4.0 sec  81.6 MBytes   685 Mbits/sec
[  3]  4.0- 5.0 sec  66.8 MBytes   560 Mbits/sec
[  3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  6.0- 7.0 sec  1.88 MBytes  15.7 Mbits/sec
[  3]  7.0- 8.0 sec  1.00 MBytes  8.39 Mbits/sec
[  3]  8.0- 9.0 sec  1.88 MBytes  15.7 Mbits/sec
[  3]  9.0-10.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3] 10.0-11.0 sec  1.62 MBytes  13.6 Mbits/sec
[  3] 11.0-12.0 sec  1.88 MBytes  15.7 Mbits/sec
[  3] 12.0-13.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3] 13.0-14.0 sec  2.00 MBytes  16.8 Mbits/sec
[  3] 14.0-15.0 sec  1.75 MBytes  14.7 Mbits/sec
[  3] 15.0-16.0 sec   768 KBytes  6.29 Mbits/sec
[  3] 16.0-17.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3] 17.0-18.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3] 18.0-19.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3] 19.0-20.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3] 20.0-21.0 sec  1.00 MBytes  8.39 Mbits/sec
[  3] 21.0-22.0 sec  1.62 MBytes  13.6 Mbits/sec
[  3] 22.0-23.0 sec  2.12 MBytes  17.8 Mbits/sec
[  3] 23.0-24.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3] 24.0-25.0 sec   896 KBytes  7.34 Mbits/sec
[  3] 25.0-26.0 sec  1.25 MBytes  10.5 Mbits/sec
[  3] 26.0-27.0 sec  1.88 MBytes  15.7 Mbits/sec
[  3] 27.0-28.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3] 28.0-29.0 sec  2.00 MBytes  16.8 Mbits/sec
[  3] 29.0-30.0 sec  1.88 MBytes  15.7 Mbits/sec
[  3]  0.0-30.0 sec   402 MBytes   112 Mbits/sec
```

So that's an improvement, but still not good.

With the following tweaks:


```
net.inet.tcp.hostcache.cachelimit="0"
net.link.ifqmaxlen=2048
kern.ipc.maxsockbuf=4194304
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.mssdflt=1460
net.inet.tcp.minmss=1300
net.inet.tcp.syncache.rexmtlimit=0
net.inet.tcp.tso=0
net.inet.tcp.cc.algorithm=htcp
```
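A note on applying these (a sketch, assuming FreeBSD 10.x defaults): net.inet.tcp.hostcache.cachelimit and net.link.ifqmaxlen are boot-time tunables that belong in /boot/loader.conf, the H-TCP congestion control module must be loaded before it can be selected, and the rest are runtime sysctls suitable for /etc/sysctl.conf.

```
# /boot/loader.conf -- boot-time tunables plus the H-TCP module
net.inet.tcp.hostcache.cachelimit="0"
net.link.ifqmaxlen="2048"
cc_htcp_load="YES"

# At runtime: htcp only appears in net.inet.tcp.cc.available
# once the module is loaded
kldload cc_htcp
sysctl net.inet.tcp.cc.algorithm=htcp
```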


I get better speed:


```
# iperf -t60 -P1 -i1 -c x.x.x.x
------------------------------------------------------------
Client connecting to x.x.x.x, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local x.x.x.x port 34697 connected with x.x.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  44.2 MBytes   371 Mbits/sec
[  3]  1.0- 2.0 sec  58.1 MBytes   488 Mbits/sec
[  3]  2.0- 3.0 sec  57.4 MBytes   481 Mbits/sec
[  3]  3.0- 4.0 sec  36.8 MBytes   308 Mbits/sec
[  3]  4.0- 5.0 sec  24.5 MBytes   206 Mbits/sec
[  3]  5.0- 6.0 sec  12.5 MBytes   105 Mbits/sec
[  3]  6.0- 7.0 sec  17.5 MBytes   147 Mbits/sec
[  3]  7.0- 8.0 sec  30.0 MBytes   252 Mbits/sec
[  3]  8.0- 9.0 sec  14.6 MBytes   123 Mbits/sec
[  3]  9.0-10.0 sec  2.25 MBytes  18.9 Mbits/sec
[  3] 10.0-11.0 sec  11.4 MBytes  95.4 Mbits/sec
[  3] 11.0-12.0 sec  28.4 MBytes   238 Mbits/sec
[  3] 12.0-13.0 sec  53.0 MBytes   445 Mbits/sec
[  3] 13.0-14.0 sec  26.5 MBytes   222 Mbits/sec
[  3] 14.0-15.0 sec  36.9 MBytes   309 Mbits/sec
[  3] 15.0-16.0 sec  45.9 MBytes   385 Mbits/sec
[  3] 16.0-17.0 sec  62.0 MBytes   520 Mbits/sec
[  3] 17.0-18.0 sec  81.4 MBytes   683 Mbits/sec
[  3] 18.0-19.0 sec  33.8 MBytes   283 Mbits/sec
[  3] 19.0-20.0 sec  1.00 MBytes  8.39 Mbits/sec
[  3] 20.0-21.0 sec  4.25 MBytes  35.7 Mbits/sec
[  3] 21.0-22.0 sec  16.8 MBytes   141 Mbits/sec
[  3] 22.0-23.0 sec  38.1 MBytes   320 Mbits/sec
[  3] 23.0-24.0 sec  64.6 MBytes   542 Mbits/sec
[  3] 24.0-25.0 sec  84.8 MBytes   711 Mbits/sec
[  3] 25.0-26.0 sec  48.6 MBytes   408 Mbits/sec
[  3] 26.0-27.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3] 27.0-28.0 sec  7.25 MBytes  60.8 Mbits/sec
[  3] 28.0-29.0 sec  22.5 MBytes   189 Mbits/sec
[  3] 29.0-30.0 sec  44.6 MBytes   374 Mbits/sec
[  3] 30.0-31.0 sec  69.8 MBytes   585 Mbits/sec
[  3] 31.0-32.0 sec  85.6 MBytes   718 Mbits/sec
[  3] 32.0-33.0 sec  56.6 MBytes   475 Mbits/sec
[  3] 33.0-34.0 sec  75.4 MBytes   632 Mbits/sec
[  3] 34.0-35.0 sec  66.6 MBytes   559 Mbits/sec
[  3] 35.0-36.0 sec  65.1 MBytes   546 Mbits/sec
[  3] 36.0-37.0 sec  70.4 MBytes   590 Mbits/sec
[  3] 37.0-38.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3] 38.0-39.0 sec  3.38 MBytes  28.3 Mbits/sec
[  3] 39.0-40.0 sec  13.4 MBytes   112 Mbits/sec
[  3] 40.0-41.0 sec  32.9 MBytes   276 Mbits/sec
[  3] 41.0-42.0 sec  57.4 MBytes   481 Mbits/sec
[  3] 42.0-43.0 sec  81.8 MBytes   686 Mbits/sec
[  3] 43.0-44.0 sec  71.9 MBytes   603 Mbits/sec
[  3] 44.0-45.0 sec  27.1 MBytes   228 Mbits/sec
[  3] 45.0-46.0 sec  27.9 MBytes   234 Mbits/sec
[  3] 46.0-47.0 sec  33.8 MBytes   283 Mbits/sec
[  3] 47.0-48.0 sec  45.5 MBytes   382 Mbits/sec
[  3] 48.0-49.0 sec  52.5 MBytes   440 Mbits/sec
[  3] 49.0-50.0 sec  13.9 MBytes   116 Mbits/sec
[  3] 50.0-51.0 sec  19.6 MBytes   165 Mbits/sec
[  3] 51.0-52.0 sec  26.0 MBytes   218 Mbits/sec
[  3] 52.0-53.0 sec  28.9 MBytes   242 Mbits/sec
[  3] 53.0-54.0 sec  21.6 MBytes   181 Mbits/sec
[  3] 54.0-55.0 sec  26.0 MBytes   218 Mbits/sec
[  3] 55.0-56.0 sec  36.8 MBytes   308 Mbits/sec
[  3] 56.0-57.0 sec  55.9 MBytes   469 Mbits/sec
[  3] 57.0-58.0 sec  78.5 MBytes   659 Mbits/sec
[  3] 58.0-59.0 sec  68.1 MBytes   571 Mbits/sec
[  3] 59.0-60.0 sec  55.5 MBytes   466 Mbits/sec
[  3]  0.0-60.0 sec  2.32 GBytes   333 Mbits/sec
```

But as you see, it's bursty: it hits some kind of limit, then cuts speed in half and works its way back up.

I'm not sure whether that's acceptable, normal, or something I'm missing, but I would appreciate any input on what should be tuned to make the speed steadier.

Other observations:


```
net.inet.tcp.tso=0
net.inet.tcp.cc.algorithm=htcp
```

These two made the biggest difference in speed, by far.

In fact, just by changing `sysctl net.inet.tcp.tso=0` on the original 9.2 system, I was able to improve speed to:


```
[  3]  0.0- 1.0 sec  7.12 MBytes  59.8 Mbits/sec
[  3]  1.0- 2.0 sec  8.38 MBytes  70.3 Mbits/sec
[  3]  2.0- 3.0 sec  9.25 MBytes  77.6 Mbits/sec
[  3]  3.0- 4.0 sec  10.6 MBytes  89.1 Mbits/sec
[  3]  4.0- 5.0 sec  11.8 MBytes  98.6 Mbits/sec
[  3]  5.0- 6.0 sec  10.1 MBytes  84.9 Mbits/sec
[  3]  6.0- 7.0 sec  7.25 MBytes  60.8 Mbits/sec
[  3]  7.0- 8.0 sec  8.25 MBytes  69.2 Mbits/sec
[  3]  8.0- 9.0 sec  9.38 MBytes  78.6 Mbits/sec
[  3]  9.0-10.0 sec  8.75 MBytes  73.4 Mbits/sec
[  3] 10.0-11.0 sec  5.75 MBytes  48.2 Mbits/sec
[  3] 11.0-12.0 sec  6.75 MBytes  56.6 Mbits/sec
[  3] 12.0-13.0 sec  7.88 MBytes  66.1 Mbits/sec
[  3] 13.0-14.0 sec  8.62 MBytes  72.4 Mbits/sec
[  3] 14.0-15.0 sec  9.50 MBytes  79.7 Mbits/sec
[  3] 15.0-16.0 sec  5.12 MBytes  43.0 Mbits/sec
[  3] 16.0-17.0 sec  3.38 MBytes  28.3 Mbits/sec
[  3] 17.0-18.0 sec  4.38 MBytes  36.7 Mbits/sec
[  3] 18.0-19.0 sec  5.38 MBytes  45.1 Mbits/sec
[  3] 19.0-20.0 sec  6.38 MBytes  53.5 Mbits/sec
[  3] 20.0-21.0 sec  7.50 MBytes  62.9 Mbits/sec
[  3] 21.0-22.0 sec  8.62 MBytes  72.4 Mbits/sec
[  3] 22.0-23.0 sec  9.75 MBytes  81.8 Mbits/sec
[  3] 23.0-24.0 sec  11.0 MBytes  92.3 Mbits/sec
[  3] 24.0-25.0 sec  12.2 MBytes   103 Mbits/sec
[  3] 25.0-26.0 sec  13.2 MBytes   111 Mbits/sec
[  3] 26.0-27.0 sec  14.4 MBytes   121 Mbits/sec
[  3] 27.0-28.0 sec  15.5 MBytes   130 Mbits/sec
[  3] 28.0-29.0 sec  16.5 MBytes   138 Mbits/sec
[  3] 29.0-30.0 sec  17.5 MBytes   147 Mbits/sec
[  3] 30.0-31.0 sec  18.5 MBytes   155 Mbits/sec
[  3] 31.0-32.0 sec  19.5 MBytes   164 Mbits/sec
[  3] 32.0-33.0 sec  20.5 MBytes   172 Mbits/sec
[  3] 33.0-34.0 sec  21.5 MBytes   180 Mbits/sec
[  3] 34.0-35.0 sec  18.6 MBytes   156 Mbits/sec
[  3] 35.0-36.0 sec  11.6 MBytes  97.5 Mbits/sec
[  3] 36.0-37.0 sec  13.0 MBytes   109 Mbits/sec
[  3] 37.0-38.0 sec  14.2 MBytes   120 Mbits/sec
[  3] 38.0-39.0 sec  15.4 MBytes   129 Mbits/sec
[  3] 39.0-40.0 sec  11.9 MBytes  99.6 Mbits/sec
[  3] 40.0-41.0 sec  6.88 MBytes  57.7 Mbits/sec
[  3] 41.0-42.0 sec  5.00 MBytes  41.9 Mbits/sec
[  3] 42.0-43.0 sec  6.38 MBytes  53.5 Mbits/sec
[  3] 43.0-44.0 sec  4.88 MBytes  40.9 Mbits/sec
[  3] 44.0-45.0 sec  4.88 MBytes  40.9 Mbits/sec
[  3] 45.0-46.0 sec  6.12 MBytes  51.4 Mbits/sec
[  3] 46.0-47.0 sec  7.00 MBytes  58.7 Mbits/sec
[  3] 47.0-48.0 sec  8.00 MBytes  67.1 Mbits/sec
[  3] 48.0-49.0 sec  9.00 MBytes  75.5 Mbits/sec
[  3] 49.0-50.0 sec  10.2 MBytes  86.0 Mbits/sec
[  3] 50.0-51.0 sec  11.1 MBytes  93.3 Mbits/sec
[  3] 51.0-52.0 sec  7.12 MBytes  59.8 Mbits/sec
[  3] 52.0-53.0 sec  7.12 MBytes  59.8 Mbits/sec
[  3] 53.0-54.0 sec  5.38 MBytes  45.1 Mbits/sec
[  3] 54.0-55.0 sec  4.75 MBytes  39.8 Mbits/sec
[  3] 55.0-56.0 sec  2.75 MBytes  23.1 Mbits/sec
[  3] 56.0-57.0 sec  3.75 MBytes  31.5 Mbits/sec
[  3] 57.0-58.0 sec  4.88 MBytes  40.9 Mbits/sec
[  3] 58.0-59.0 sec  6.00 MBytes  50.3 Mbits/sec
[  3] 59.0-60.0 sec  7.12 MBytes  59.8 Mbits/sec
[  3]  0.0-60.2 sec   575 MBytes  80.1 Mbits/sec
```

This is a 4x increase.

But as you see, the throughput bursts and cuts in a similar way.
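For what it's worth, TSO can also be disabled per interface rather than globally, which narrows the blame to the NIC driver's offload path (a sketch; the interface name and the masked address placeholders follow the config earlier in the thread):

```
# Disable TSO on just this NIC (runtime)
ifconfig bge0 -tso

# Persist across reboots in /etc/rc.conf
ifconfig_bge0="inet xxxxxxxx netmask 0xff000000 -tso"
```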

Since I was asked (though I don't know what it has to do with this), the 10.2 test system is not running ZFS. It's simply:


```
CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz (2591.64-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x306a9  Family=0x6  Model=0x3a  Stepping=9
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
Features2=0x7fbae3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>

real memory  = 8589934592 (8192 MB)

bge0: <Broadcom BCM5761 Gigabit Ethernet, ASIC rev. 0x5761100> mem 0xf7c10000-0xf7c1ffff,0xf7c00000-0xf7c0ffff irq 18 at device 0.0 on pci12
bge0: CHIP ID 0x05761100; ASIC REV 0x5761; CHIP REV 0x57611; PCI-E
miibus0: <MII bus> on bge0
brgphy0: <BCM5761 10/100/1000baseT PHY> PHY 1 on miibus0
brgphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow
bge0: Using defaults for TSO: 65518/35/2048
bge0: Ethernet address: d4:be:d9:7a:4d:8d

ahci0: <Intel Panther Point AHCI SATA controller> port 0xf0b0-0xf0b7,0xf0a0-0xf0a3,0xf090-0xf097,0xf080-0xf083,0xf060-0xf07f mem 0xf7f16000-0xf7f167ff irq 19 at device 31.2 on pci0
ahci0: AHCI v1.30 with 6 6Gbps ports, Port Multiplier not supported

ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <SAMSUNG SSD PM830 2.5" 7mm 128GB CXM03D1Q> ATA8-ACS SATA 3.x device
ada0: Serial Number S0TYNEACC25064
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 122104MB (250069680 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
```


----------



## diizzy (Oct 15, 2015)

...and if you disable your "tweaks"?
//Danne


----------



## absduser (Oct 16, 2015)

Without the tweaks, it's the first iperf output: it starts off at ~500 Mbps and drops abruptly to ~15 Mbps. That first iperf run is 10.2 with no adjustments whatsoever, fresh out of the box.


----------



## drhowarddrfine (Oct 16, 2015)

absduser, I would suggest you ask these questions on the mailing lists, where there are far more experts in this area than you'll find here.


----------



## lme@ (Oct 16, 2015)

Yup, please ask on freebsd-net@freebsd.org. But please report back once you've fixed the problem.


----------

