# Slow 10Gbit network throughput on CURRENT



## blastwave (Oct 24, 2020)

I know that FreeBSD CURRENT built from sources has debug options enabled, among other things, to be sure.
Regardless, this is quite slow for 10Gbit over a 9000-byte MTU link.  


```
amalthea# 
amalthea# uname -apKU
FreeBSD amalthea 13.0-CURRENT FreeBSD 13.0-CURRENT #1 r366186: Sat Sep 26 23:13:58 GMT 2020     root@amalthea:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64 amd64 1300117 1300117
amalthea# 
```

So that box has a 10Gbit network link on an Intel dual-port card, and at the other end of the tiny subnet we have a machine with the exact same Intel 10Gbit network card:


```
# uname -apKU
FreeBSD rhea 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r366295: Thu Oct  1 12:36:53 UTC 2020     root@rhea:/usr/obj/usr/src/head/amd64.amd64/sys/GENERIC  amd64 amd64 1300117 1300117
```

The network config at the NFS server side of life is trivial:

```
amalthea# 
amalthea# cat /etc/rc.conf 
clear_tmp_enable="YES"
hostname="amalthea"
ifconfig_re0="inet 172.16.35.4 netmask 0xffffffc0"
defaultrouter="172.16.35.1"
ifconfig_ix0="inet 10.0.0.2 netmask 255.255.255.248 mtu 9000"
sshd_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
smartd_enable="YES"
amalthea#
```
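Nothing in either rc.conf touches the TCP buffers, and on FreeBSD a common first step for 10GbE is raising the socket buffer limits so TCP can keep a large window in flight. These are stock sysctl knobs, but the values below are only a first-pass guess for this setup, not settings anyone in this thread has verified:

```
# /etc/sysctl.conf -- first-pass 10GbE TCP buffer tuning (values are a guess)
kern.ipc.maxsockbuf=16777216          # allow socket buffers up to 16MB
net.inet.tcp.sendbuf_max=16777216     # cap for the auto-tuned TCP send buffer
net.inet.tcp.recvbuf_max=16777216     # cap for the auto-tuned TCP receive buffer
```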

The NFS client side of life is equally trivial:

```
root@rhea:/opt/nfs_test # cat /etc/rc.conf
clear_tmp_enable="YES"
hostname="rhea"
ifconfig_rl0="inet 172.16.35.42 netmask 255.255.255.192"
defaultrouter="172.16.35.1"
ifconfig_rl0_ipv6="inet6 accept_rtadv"
ifconfig_ix0="inet 10.0.0.3 netmask 255.255.255.248 mtu 9000"
sshd_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
nfs_client_enable="YES"
smartd_enable="YES"
root@rhea:/opt/nfs_test #
```

Both machines run with ZFS, both have plenty of memory and AMD64 processors, and neither has anything to do at all other than this single test. In fact, they don't exist for any other reason than this test.

My hope is to sort out the throughput issues, then use LACP to bond the two ports on those network cards together, fill one of the boxen with 8TB Seagate disks, and create an LACP 10Gbit based NFS server for home. Seemed like a good idea to me at the time. However, a trivial throughput test shows poor performance. At least it seems that way.

I created some non-compressible files using dd from /dev/urandom: eight files of 64MB each. I said megabytes here and not gigabytes. I tried a gigabyte-sized file initially and thought the machines had crashed when nothing happened for thirty secs or so. So in any case, on the NFS server side of life we have a single ZFS filesystem with sharenfs thus:


```
amalthea# 
amalthea# zfs get sharenfs amalthea/opt/share  
NAME                PROPERTY  VALUE             SOURCE
amalthea/opt/share  sharenfs  nosuid,anon=root  local
amalthea#
```

Over on the client side of life I have an entry in the /etc/fstab like so:

```
root@rhea:/opt/nfs_test # cat /etc/fstab 
# Device                Mountpoint      FStype      Options     Dump    Pass_number
/dev/ada0p2             none            swap        sw          0       0
10.0.0.2:/opt/share     /opt/nfs_test   nfs         rw          0       0
root@rhea:/opt/nfs_test #
```
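That fstab line mounts with all defaults. If the defaults turn out to be the limiter, mount_nfs(8) accepts larger transfer sizes and more read-ahead; a sketch of a tuned line follows, where the option values are assumptions to experiment with, not tested settings from this thread:

```
# hypothetical tuned mount -- rsize/wsize/readahead values are guesses
10.0.0.2:/opt/share  /opt/nfs_test  nfs  rw,nfsv3,rsize=65536,wsize=65536,readahead=8  0  0
```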

Reboot both machines. Just for good luck. Why not? 

Then on the server side I have these files sitting in that filesystem:

```
amalthea# 
amalthea# pwd
/opt/share/dclarke
amalthea# ls -lapb
total 524685
drwxr-xr-x  2 dclarke  devl         10 Oct 23 07:28 ./
drwxr-xr-x  3 root     wheel         3 Oct 24 07:34 ../
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:33 64m_random_000.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:35 64m_random_001.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:36 64m_random_002.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:36 64m_random_003.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:37 64m_random_004.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:37 64m_random_005.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:37 64m_random_006.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:38 64m_random_007.dat
amalthea# 
amalthea# openssl dgst -sha256 -r *dat
b5a76a4cd29f88f41c7a557d77d0345056ba011212eb933e3779e0d1ab23cd75 *64m_random_000.dat
c9ea0abe71e07e101bece133cf263ab05d6c6169f19cfa815999f6b743b7e79e *64m_random_001.dat
cda05b4d9cc9fbf1081457d83f2bc805cabd4dcd38424563abf45a61c5643704 *64m_random_002.dat
839ee1d184888cb6ff147fe855d078b2dfd0871863da3e0394e4e4752775a6a2 *64m_random_003.dat
a1593f092b915d19b810ca84cf00d577e3ef4038856f3dea5e5310ed8e66bc5d *64m_random_004.dat
5bead3389583ed7857c8a37b585f15af9e435b14ababa6342eba3dc3cdb996a3 *64m_random_005.dat
e18389da43eecc514ab69e83273acbcaa5354500bdc2544500f54c5763543596 *64m_random_006.dat
8692e39608d0d5a78b1d5c41127e9026a9dc0b6682306fa2660de9ef19bacf31 *64m_random_007.dat
amalthea#
```

Then, as an ordinary user (me) on the client side of life, I try this:

```
dclarke@rhea:~ $ cd
dclarke@rhea:~ $ id
uid=16411(dclarke) gid=16411(dclarke) groups=16411(dclarke),20002(devl)
dclarke@rhea:~ $ ls -lap /opt/nfs_test/dclarke/
total 524685
drwxr-xr-x  2 dclarke  devl         10 Oct 23 07:28 ./
drwxr-xr-x  3 root     wheel         3 Oct 24 07:34 ../
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:33 64m_random_000.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:35 64m_random_001.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:36 64m_random_002.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:36 64m_random_003.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:37 64m_random_004.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:37 64m_random_005.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:37 64m_random_006.dat
-rw-r--r--  1 dclarke  devl   67108864 Oct 22 02:38 64m_random_007.dat
dclarke@rhea:~ $ 
dclarke@rhea:~ $ pwd
/home/dclarke
dclarke@rhea:~ $ mkdir land_here
dclarke@rhea:~ $ /usr/bin/time -p cp -p /opt/nfs_test/dclarke/*.dat land_here
real 17.37
user 0.00
sys 10.28
dclarke@rhea:~ $ 
dclarke@rhea:~ $ echo '8k 64 1048576* 8* 17.37/ pq' | dc 
30907939.66609096
dclarke@rhea:~ $
```
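The dc expression above is total bytes over elapsed seconds: 8 files of 64 MiB copied in 17.37 s. The same arithmetic in awk, with the bit rate alongside, purely as a sanity check of the numbers in this post:

```
# 8 files x 64 MiB copied in 17.37 s -- what rate is that?
bytes=$((8 * 64 * 1024 * 1024))    # 536870912 bytes total
awk -v b="$bytes" -v t=17.37 'BEGIN {
    printf "%.1f MB/s  (%.0f Mbit/s)\n", b / t / 1e6, b * 8 / t / 1e6
}'
```

That is roughly 247 Mbit/s on a link rated for 10,000: about 2.5% of line rate.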

I was expecting better than that. Much, much better. I did check with openssl and the data lands fine on the client side of life. Oh, also, the NFS server is using a single Samsung SSD for this test and the client end is using a single Seagate ST2000DM001 disk. I am thinking that the "WITNESS option enabled, expect reduced performance" is a factor, but this is a rocket sled hitting a wall of jello. 
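One way to separate the network from NFS and the disks, not something tried in this thread, would be a raw TCP test with benchmarks/iperf3, assuming the port is installed on both machines:

```
amalthea# iperf3 -s                        # server side: listen for test traffic
root@rhea:~ # iperf3 -c 10.0.0.2 -t 10     # client side: 10-second TCP throughput test
```

If iperf3 gets close to line rate, the bottleneck is in NFS or storage rather than the link; if it also lands around 250 Mbit/s, the driver and TCP tuning are the place to look first.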

Any helpful thoughts would be showered with praise and great coffee. 

Dennis Clarke


----------



## Alexander88207 (Oct 24, 2020)

Hello, *blastwave*

Topics about unsupported FreeBSD versions are off-topic here.


----------



## blastwave (Oct 24, 2020)

No, this is networking. I am in the correct place.


----------



## T-Daemon (Oct 24, 2020)

blastwave said:


> So that box has a 10Gbit network link on an intel dual port card and on another end of the tiny subnet we have a machine which also has the exact same Intel 10Gbit network card :


Try the port ix(4) driver: net/intel-ix-kmod


----------



## blastwave (Oct 24, 2020)

T-Daemon said:


> Try the port ix(4) driver: net/intel-ix-kmod



Excellent, thank you. I will give that a try. The question will then be how to bind the interfaces to that driver and not the regular off-the-shelf ix driver.


----------



## Crivens (Oct 24, 2020)

blastwave said:


> No, this is networking. I am in the correct place.


You want support for -current. You are on your own (mostly).


----------



## T-Daemon (Oct 24, 2020)

blastwave said:


> Excellent, thank you. I will give that a try. The question will then be how to isolate the interfaces to this driver and not the regular off the shelf ix driver.


A post-install message will advise setting `if_ix_updated_load="YES"` in /boot/loader.conf.


----------



## blastwave (Oct 27, 2020)

T-Daemon said:


> Try the port ix(4) driver: net/intel-ix-kmod


Turns out that driver is way, way behind the driver in normal base src. Thanks anyway.


----------

