# Networking - FreeBSD 12.1 vs Ubuntu 20.04



## irukandji (Oct 7, 2020)

_(I am not saying that FreeBSD is slower, as my method of testing may be severely flawed.)_

I was benchmarking my LAN with iperf3, both systems at "factory" settings. In the test where FreeBSD is the server, I always get a few MBytes/sec less than in the opposite configuration (FreeBSD client, Ubuntu server). Both systems are vanilla regarding networking; I have tried the calomel.org network optimization settings, but the result with FreeBSD as the server is always under 100 MBytes/sec, ranging from 92 to 96. I have tried switching router ports, but there is no difference.

Zero background traffic.

In both cases an integrated Intel(R) PRO/1000 is used. The Ubuntu laptop (ThinkPad P51) has an i7-7820HQ, while the FreeBSD machine has an i5-9600K. It might be a hardware issue =/

It somehow pisses me off and I would like to understand the reason / have a fix.


```
FreeBSD 12.1-RELEASE-p9
Linux 5.4.0-47-generic
iperf3 -s -f M
iperf3 -c a.a.a.a -f M -t 60

FreeBSD as server, Ubuntu client:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  5.66 GBytes  96.5 MBytes/sec    0             sender
[  5]   0.00-60.00  sec  5.66 GBytes  96.5 MBytes/sec                  receiver

Ubuntu as server, FreeBSD client:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.37 GBytes   109 MBytes/sec    0             sender
[  5]   0.00-60.26  sec  6.37 GBytes   108 MBytes/sec                  receiver

---

FreeBSD as server, Ubuntu client 1:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  2.88 GBytes  49.2 MBytes/sec    0             sender
[  5]   0.00-60.00  sec  2.88 GBytes  49.2 MBytes/sec                  receiver

FreeBSD as server, Ubuntu client 2:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  2.90 GBytes  49.4 MBytes/sec    0             sender
[  5]   0.00-60.00  sec  2.90 GBytes  49.4 MBytes/sec                  receiver

Ubuntu as server, FreeBSD client 1:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  3.22 GBytes  55.0 MBytes/sec    0             sender
[  5]   0.00-60.30  sec  3.22 GBytes  54.7 MBytes/sec                  receiver

Ubuntu as server, FreeBSD client 2:
[  5]   0.00-60.00  sec  3.37 GBytes  57.5 MBytes/sec    0             sender
[  5]   0.00-60.10  sec  3.37 GBytes  57.4 MBytes/sec                  receiver
```
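For reference, the buffer-related knobs that tuning guides like calomel.org usually touch look like this in /etc/sysctl.conf. The sysctl names are standard on FreeBSD 12, but the values below are purely illustrative, not a recommendation:

```shell
# /etc/sysctl.conf -- illustrative socket/TCP buffer tuning for GbE
kern.ipc.maxsockbuf=4194304          # upper bound for any single socket buffer
net.inet.tcp.sendbuf_max=4194304     # auto-tuning ceiling, send side
net.inet.tcp.recvbuf_max=4194304     # auto-tuning ceiling, receive side
net.inet.tcp.sendbuf_auto=1          # leave buffer auto-tuning on (the default)
net.inet.tcp.recvbuf_auto=1
```

On a low-latency Gbit LAN the defaults are already close to adequate, which may be why these settings did not move the numbers for me.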


----------



## Mjölnir (Oct 7, 2020)

In addition to the optimizations described in em(4) (especially jumbo frames: MTU), the first thing I would try is to enable polling(4) on the interface.
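Concretely, that would be something like the following (a sketch: jumbo frames only help if every device on the path, including the switch, accepts the larger MTU, and polling(4) requires a kernel built with `options DEVICE_POLLING`):

```shell
# Raise the MTU on the em(4) interface; every host and switch on the
# segment must agree on the jumbo frame size (9000 is a common choice):
ifconfig em0 mtu 9000

# Switch the interface from interrupts to polling; this has no effect
# unless the running kernel was built with "options DEVICE_POLLING":
ifconfig em0 polling
```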


----------



## irukandji (Oct 7, 2020)

I don't have jumbo frames enabled on the network, and the MTU is 1500 on both Ubuntu and FreeBSD.

After enabling polling (`ifconfig em0 polling`), the only thing that changed was a small drop in speed (FreeBSD as client).


```
FreeBSD as client
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.42 GBytes   109 MBytes/sec    0             sender
[  5]   0.00-60.35  sec  6.42 GBytes   109 MBytes/sec                  receiver

Ubuntu as client
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  5.61 GBytes  95.8 MBytes/sec    0             sender
[  5]   0.00-60.00  sec  5.61 GBytes  95.8 MBytes/sec                  receiver
```


----------



## drhowarddrfine (Oct 7, 2020)

This sounds familiar on this forum. IIRC, it has to do with Linux's or Ubuntu's "speed at all costs, damn reliability" settings (or something like that). Netflix uses the standard, out-of-the-box settings for video delivery on their FreeBSD boxes, except that they run custom CURRENT builds. I'll try to search for that forum post.


----------



## rootbert (Oct 7, 2020)

FreeBSD is slower than Ubuntu/Linux in most of my workloads (server, network); however, not by much. The difference is really not relevant. What problem would be solved by being 10% faster?


----------



## irukandji (Oct 7, 2020)

Well, 10% when copying a large amount of data is surely relevant. Anyway, I want to understand the reason.


----------



## ekvz (Oct 7, 2020)

irukandji said:


> Anyway, I want to understand the reason.



That would likely come down to looking into the parameters of the TCP/IP stack, or even the way it's coded. There is a ton of stuff that might have an influence there. You could even try compiling the kernel with different optimization settings. I've never really benchmarked network performance, but given that it's rather performance-critical, you might even see a difference just by throwing `-march=native -mtune=native` into the mix. You could also try using GCC to check whether it somehow manages to better optimize the relevant parts. The list of possible causes is pretty long.
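On FreeBSD, the usual place to set that is /etc/make.conf, which buildworld/buildkernel pick up. A sketch (`CPUTYPE?=native` is the supported way to get `-march=native`-style code generation):

```shell
# /etc/make.conf -- compiler tuning for world and kernel builds
CPUTYPE?=native      # emit code for the local CPU (like -march=native)
# CFLAGS already defaults to -O2 -pipe; add extra flags only deliberately

# Afterwards, rebuild and install the kernel from /usr/src, e.g.:
# make -C /usr/src buildkernel installkernel KERNCONF=GENERIC
```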


----------



## olli@ (Oct 7, 2020)

irukandji said:


> Well, 10% when copying a large amount of data is surely relevant. Anyway, I want to understand the reason.


It should be noted that you are already (almost) saturating the Gbit ethernet link. In that situation, small details can make a difference, like subtle timing differences in the kernel. It might also be worth trying to find out where the bottleneck actually is. For example, what is the system load on the machines during the tests, i.e. how much CPU is spent in kernel and userland?
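A quick way to watch that while iperf3 is running (these are stock tools; `mpstat` on Ubuntu comes from the sysstat package):

```shell
# FreeBSD side: per-CPU view including kernel and interrupt threads
top -SHP

# System-wide counters once per second ("sy" = syscalls, "in" = interrupts)
vmstat 1

# Packet rates in and out, once per second
netstat -w 1

# Ubuntu side: %sys vs. %soft (softirq time) per CPU
mpstat -P ALL 1
```

If one side shows a core pegged in interrupt or softirq handling, that is a strong hint where the bottleneck sits.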

By the way, FreeBSD has quite a lot of knobs to optimize it for various kinds of workloads. The tuning(7) manual page is a good start. It all depends on what you actually want to do with that machine. iperf is a synthetic benchmark that does not really represent a real-life workload. For example, you would probably use different optimizations for a web server than for an NFS server.

On a related note: when you're approaching the bandwidth limit of a Gbit link, and you say that a 10 % difference really matters to you, then you should try adding 2.5G or 10G ethernet NICs to the machines in question. I'm pretty sure that will gain you more than just 10 %.

PS: Just in case someone is interested in some price points: a 10G ethernet card with an Intel chip costs about 100 €, SFP+ fiber modules are 20–30 €, and a few meters of fiber cable cost next to nothing – so connecting two machines back-to-back with 10G via optical fiber would be around 250 €. Even less if you use copper instead, via a DAC cable, but my recommendation would be to go with fiber. If you need a switch, it gets more expensive, though. The cheapest one known to me is the Zyxel XGS1010-12 at 150 €, which has 8× Gbit, 2× 2.5G (copper) and 2× 10G (SFP+).

Note that there are USB3 adapters for 2.5G ethernet – these do _not_ work well with FreeBSD. Avoid them.


----------

