# WireGuard VPN slow speed



## Vovas (Dec 26, 2019)

Hi experts!
I have a problem with slow speeds over a WireGuard VPN. FreeBSD 12.0 is installed on a VPS.
Server information:

```
# cat /var/run/dmesg.boot | grep CPU
CPU: QEMU Virtual CPU version 1.5.3 (2400.20-MHz K8-class CPU)
cpu0: <ACPI CPU> on acpi0
CPU: QEMU Virtual CPU version 1.5.3 (2400.22-MHz K8-class CPU)
cpu0: <ACPI CPU> on acpi0
```


```
# cat /var/run/dmesg.boot | grep memory
real memory  = 2147483648 (2048 MB)
avail memory = 2043375616 (1948 MB)
```
Network:

```
# ifconfig
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:c9:7e:b4
        inet 212.0.0.2 netmask 0xffffff00 broadcast 212.0.0.255
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
wg0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1500
        options=80000<LINKSTATE>
        inet 10.0.1.1 --> 10.0.1.1 netmask 0xff000000
        groups: tun
        nd6 options=101<PERFORMNUD,NO_DAD>
        Opened by PID 575
```
My `pf.conf`

```
ext_if="vtnet0"
int_if="wg0"
set skip on lo0
scrub in all
nat on $ext_if from $int_if:network to any -> ($ext_if)
pass all
```



Spoiler: Speedtest results

Without VPN connection: [speedtest screenshot]

With VPN connection: [speedtest screenshot]


Any suggestions?


----------



## SirDice (Dec 27, 2019)

Vovas said:


> ```
> wg0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1500
> options=80000<LINKSTATE> inet 10.0.1.1 --> 10.0.1.1
> netmask 0xff000000 groups: tun nd6
> ...
> ```


This is a tunnel to itself?


----------



## Vovas (Dec 27, 2019)

SirDice said:


> This is a tunnel to itself?


Yes. This IP is set up during system boot.
`/etc/rc.conf`

```
wireguard_enable="YES"
wireguard_interfaces="wg0"
ifconfig_wg0="inet 10.0.1.1 netmask 255.255.255.0"
```


----------



## SirDice (Dec 27, 2019)

Doesn't WireGuard work similarly to OpenVPN? With OpenVPN you don't configure the interface in rc.conf; it's created dynamically when the OpenVPN service is started. Can you post your WireGuard config (make sure to obfuscate your passwords/public IP addresses)?


----------



## Vovas (Dec 27, 2019)

SirDice said:


> Doesn't wireguard work similar to OpenVPN? For OpenVPN you don't configure the interface in rc.conf. It's dynamically created when the OpenVPN service is started.


I don't know. Maybe!
`wg0.conf`

```
# cat /usr/local/etc/wireguard/wg0.conf
[Interface]
PrivateKey = <...>
ListenPort = 51820

[Peer]
PublicKey = <...>
AllowedIPs = 10.0.1.2/32
Endpoint = 212.0.0.2:51820

[Peer]
PublicKey = <...>
AllowedIPs = 10.0.1.3/32
Endpoint = 212.0.0.2:51820
```
And config for my phone:

```
# cat /usr/local/etc/wireguard/ios.conf
[Interface]
Address = 10.0.1.3/32
PrivateKey = <...>
DNS = 9.28.15.8, 212.4.1.11

[Peer]
PublicKey = <...>
AllowedIPs = 0.0.0.0/0
Endpoint = 212.0.0.2:51820
```


----------



## Vovas (Dec 30, 2019)

So, I've removed `ifconfig_wg0` from /etc/rc.conf and added the IP address to
/usr/local/etc/wireguard/wg0.conf

```
[Interface]
Address = 10.0.1.1/24
PrivateKey = <...>
ListenPort = 51820
```
I restarted the daemon and still get the same slow incoming speed. Outgoing speed is around 10~20 Mbps. I've changed the MTU of the `wg0` interface to 1500, matching `vtnet0`, because every time the daemon restarts the system sets a default MTU of 16304.


Spoiler: # service wireguard restart





```
[#] rm -f /var/run/wireguard/wg0.sock
[#] wireguard-go wg0
INFO: (wg0) 2019/12/30 12:54:59 Starting wireguard-go version 0.0.20191012
[#] wg setconf wg0 /tmp/tmp.fhpTOKz2/sh-np.2zOzEv
[#] ifconfig wg0 inet 10.0.1.1/24 10.0.1.1 alias
[#] ifconfig wg0 mtu 16304
[#] ifconfig wg0 up
[#] route -q -n add -inet 10.0.1.3/32 -interface wg0
[#] route -q -n add -inet 10.0.1.2/32 -interface wg0
[+] Backgrounding route monitor
```
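If the oversized default MTU turns out to matter, it can be pinned in the wg-quick config instead of being patched with ifconfig after every restart. A sketch, assuming the FreeBSD wireguard rc script wraps wg-quick (which the `[#]` command log above suggests); 1420 is wg-quick's usual default, leaving room for the 80-byte IPv6/UDP/WireGuard encapsulation overhead:

```
[Interface]
Address = 10.0.1.1/24
PrivateKey = <...>
ListenPort = 51820
# applied by wg-quick at interface creation, replacing the 16304 default
MTU = 1420
```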



`netstat -r`

```
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            212.0.0.1      UGS      vtnet0
10.0.1.1           link#3             UH          wg0
10.0.1.2/32        wg0                US          wg0
10.0.1.3/32        wg0                US          wg0
localhost          link#2             UH          lo0
212.0.0.0/24       link#1             U        vtnet0
```


----------



## zer69 (Jan 1, 2020)

Dear all,

I have a very similar issue too, running 11.3-REL.

Tried things which didn't help:
- MTU change
- PF NAT vs IPFW NAT
- tuning of OS network stack

My VPS uplink is 10 Gbps, and I am able to achieve 300 Mbps from my Windows 10 machine when running over SSH tunnels. When using the WireGuard VPN it's only ~10 Mbps.

Any ideas would be very welcome!

Best wishes,

-Robert


----------



## acheron (Jan 1, 2020)

The WireGuard implementation on FreeBSD is userspace-only; what kind of performance do you expect?


----------



## zer69 (Jan 2, 2020)

I have found out that my ISP is somehow throttling UDP connections, and WG is UDP-only... Going to try OpenVPN now. Thanks for the support.


----------



## rf10 (Jan 30, 2020)

I am actually surprised WireGuard works on FreeBSD. I tried it a few months ago, and it was a no-go (aside from its userspace implementation on FreeBSD and the associated performance). I may dust it off again to run some perf tests on my LAN.


----------



## ctaranotte (Jan 30, 2020)

rf10 said:


> I am actually surprised wireguard works on FreeBSD. I tried it a few months ago, and it was a no go (aside from it userspace implementation on FreeBSD and the associated performance).  I may dust it off again even to run some perf tests on my lan.



I am using Wireguard on FreeBSD and Debian peers. Speed seems to be as good as with OpenVPN.


----------



## Alexander Huemeyer (Jan 30, 2020)

I just tried it in my home network: Linux to FreeBSD at 55 MByte/s with and without WireGuard, and no noticeable CPU utilization on the FreeBSD server. I use the latest wireguard package from the latest repository.
I don't think it's a WireGuard problem.


----------



## rf10 (Jan 30, 2020)

ctaranotte said:


> I am using Wireguard on FreeBSD and Debian peers. Speed seems to be as good as with OpenVPN.


I did some performance testing on OpenVPN, and its speed was heavily affected by the encryption algorithm used. The default Blowfish was faster than AES, but I suppose it depends on whether AES hardware acceleration is present in the CPU. Wireguard is using ChaCha20, which is supposed to be fast, especially on older CPUs, but I couldn't do direct performance measurements at the time because I couldn't get Wireguard to work.


----------



## Vovas (Feb 1, 2020)

ctaranotte said:


> I am using Wireguard on FreeBSD and Debian peers. Speed seems to be as good as with OpenVPN.


Could you post your PC's specifications? I use WireGuard on a VPS with 1 GB RAM and a single-core processor. Maybe my VPS is too slow.


----------



## mwest (Feb 2, 2020)

Have you tried using iperf or a similar tool to take WireGuard out of the equation while testing?

On the server: `iperf --server --port 9898 --udp`
On the client: `iperf --port 9898 --udp --client <your.server.IP>`

This should reveal whether the slowness is due to WireGuard or to something else affecting UDP traffic.


----------



## ctaranotte (Feb 2, 2020)

Vovas said:


> Could you post your pc's specifications? I use wireguard on VPS with 1gb RAM and one core processor. May be my VPS too slow



My VPS: 4 cores and 8 GB.


----------



## ctaranotte (Feb 4, 2020)

mwest said:


> Have you tried using iperf or similar tool to remove Wireguard from the equation while testing?
> 
> On the server: iperf --server --port 9898 --udp
> On the client: iperf --port 9898 --udp --client <your.server.IP>
> ...



I have run iperf as per your suggestion, with iperf on my VPS bound to the server's public IP and wg off.


```
# iperf --port 9898 --udp --client "server public IP"
------------------------------------------------------------
Client connecting to server public IP, UDP port 9898
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] client local IP port 30420 connected with server public IP port 9898
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 892 datagrams
```

I have run iperf as per your suggestion with wg on, and iperf on my VPS bound to the server's wg0 IP.


```
# iperf --port 9898 --udp --client "server wg0 IP"
------------------------------------------------------------
Client connecting to server wg0 IP, UDP port 9898
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] client wg0 IP port 63883 connected with server wg0 IP port 9898
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 892 datagrams
```

The same with TCP packets.


```
# iperf --port 9898 --client "server public IP"
------------------------------------------------------------
Client connecting to server public IP, TCP port 9898
TCP window size: 64.8 KByte (default)
------------------------------------------------------------
[  3] client local IP port 58401 connected with server public IP port 9898
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  15.2 MBytes  12.7 Mbits/sec
```


```
# iperf --port 9898 --client "server wg0 IP"
------------------------------------------------------------
Client connecting to server wg0 IP, TCP port 9898
TCP window size: 64.3 KByte (default)
------------------------------------------------------------
[  3] client wg0 IP port 32814 connected with server wg0 IP port 9898
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  13.5 MBytes  11.3 Mbits/sec
```

I have not installed OpenVPN on my VPS, but tell me if you need me to run the same test with OpenVPN.
I hope it helps.

EDIT:
Server: 4 cores, 8 GB, Debian 10 (amd64), wireguard-dkms (>= 0.0.20200121-2), wireguard-modules (>= 0.0.20191219), wireguard-tools (>= 1.0.20200121-2)
Client: Dell XPS L322x i5, 4 GB, FreeBSD 12.1 (amd64), wireguard 1.0.20200121


----------



## Vovas (Feb 5, 2020)

So, could somebody explain to me why the incoming speed is so slow (around 1 Mbit/s)? The outgoing speed (10 Mbit/s) is normal for my slow VPS.


----------



## Void (Feb 7, 2020)

> I have a problem with slow speed with wireguard vpn. FreeBSD 12.0 installed on VPS.

Same here.

wireguard-1.0.20200206
Upload speed is good, but download is at 1.5 Mbit/s.
wireguard-go CPU usage is only 5-7% while downloading.


----------



## Alexander Huemeyer (Feb 7, 2020)

Both of you are using FreeBSD 12.0. Perhaps try upgrading to 12.1? 12.0 will be EoL soon anyway.

Perhaps it's also a problem with your VPS provider, some sort of throttling of outgoing UDP packets? To test this, you can try the speed with netcat over TCP and UDP.
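A rough sketch of such a test with nc(1); the address `203.0.113.10` is a placeholder for the server's public IP, and the port and byte count are arbitrary:

```
# TCP: the server listens and discards, the client pushes 100 MB through
nc -l 9898 > /dev/null                                       # on the server
dd if=/dev/zero bs=1m count=100 | nc 203.0.113.10 9898       # on the client

# UDP: same idea, with -u on both ends
nc -u -l 9898 > /dev/null                                    # on the server
dd if=/dev/zero bs=1m count=100 | nc -u 203.0.113.10 9898    # on the client
```

Comparing the transfer times over TCP and UDP should show whether the provider treats UDP traffic differently.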


----------



## Void (Feb 8, 2020)

I use 12.1. Tested with 3 different ISPs: same slow ingress speed. 2 different VPSes: one hosted by DO and one at a hosting provider in Germany; clients: Android and Windows. Will try to test with Linux soon.

Update:
Tested with Linux on the server side. Ingress speed is about 70 Mbit/s on the same VPS. I think the WG port on FreeBSD is bugged.


----------



## Futura (Feb 12, 2020)

Hi Vova,

I also faced this issue, not only with WireGuard but also with OpenVPN. Everything worked fine using various Linux distributions, so in my case I could nail it down to the virtio network driver under KVM-based virtualization: there are some weird UDP packet drops with it. The provider I use is Netcup in Germany. Switching to the e1000 driver solved the performance issue.

Maybe this information will help you.


----------



## SirDice (Feb 13, 2020)

I suspect the original issue may be due to MTU settings. From a quick glance through the WireGuard documentation, it seems to depend heavily on working Path MTU Discovery. Since a lot of people just blindly block everything, including all ICMP, PMTUD doesn't work.


----------



## Void (Feb 20, 2020)

SirDice said:


> I suspect the original issue may be due to MTU settings. With a quick glance through wireguard documentation I noticed it seems to heavily depend on a working Path MTU Discovery. As a lot of people just blindly block everything, including all ICMP, PMTUD doesn't work.


Confirmed. Adding this to pf.conf fixes the problem:
`pass in quick on $ext_if inet proto icmp from any to ($ext_if) icmp-type unreach`

Edit: but with Android clients the problem is not solved; only FreeBSD<->Linux is OK.
Full list:

FreeBSD<->Linux   = OK
Linux<->Android   = OK
FreeBSD<->Android != OK

This is strange and needs deeper investigating.
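If ICMP can't be passed, clamping the TCP MSS in pf is a commonly suggested fallback when PMTUD is broken. A sketch, not taken from this thread; 1380 is an assumed conservative value (roughly the tunnel MTU minus 40 bytes of TCP/IP headers), which would replace the plain `scrub in all` line in the pf.conf posted earlier:

```
scrub in all max-mss 1380
```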

-----
Update:
Tested with OpenBSD 6.7-current (wg in kernel) and Android. Speed is very good. Waiting for the kernel implementation in FreeBSD...
(This test was a false positive; the problem was on the Android side.)


----------



## Void (Nov 16, 2020)

New update:
Tested on Android 10 with the WireGuard kernel module backported.
Download speeds are now very fast; it seems the problem was in the userspace implementation on Android.


----------



## Void (Oct 13, 2022)

Finally fixed all speed issues by adding this to /boot/loader.conf:

`hw.vtnet.tso_disable="1"`
`hw.vtnet.lro_disable="1"`
`hw.vtnet.csum_disable="1"`


----------



## BobbyDropTables (Nov 8, 2022)

Void said:


> Finally fix all speed issues by adding this to /boot/loader.conf
> 
> hw.vtnet.tso_disable="1"
> hw.vtnet.lro_disable="1"
> hw.vtnet.csum_disable="1"


Hi Void,

I used the same trick on my VPS300 instance @ Contabo in Germany and went from 1 Mbit/s download to somewhere between 50 and 70 Mbit/s, with noticeable CPU usage according to htop.
It would be interesting to know the reason behind this; my FreeBSD instance is also a KVM instance, as described in this thread.
Can anybody confirm whether this only affects FreeBSD KVM guests, or whether it happens to Linux guests as well, depending on whether they set these options for their kernel?
The missing hardware acceleration for the network certainly hurts, but at least I can use my server as a proper VPN now.

Additional note:
Can someone tell me if 50-70 Mbit/s is a reasonable speed (considering ping + general overhead) using WireGuard? I currently reside in the USA and the VPN server is located in Germany, as said.
The server has 100 Mbit/s down and up; the client has 100 Mbit/s down and ~10 up.
Bandwidth tests using iperf3 show that I can reach 100 Mbit/s download from the server to my client using UDP.
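For reference, such a UDP test can be reproduced with iperf3 roughly like this (the port and rate here are arbitrary; `<server-ip>` is a placeholder, and `-R` reverses the direction so the server sends, measuring download):

```
iperf3 -s -p 9898                              # on the server
iperf3 -c <server-ip> -p 9898 -u -b 100M -R    # on the client
```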

Kind regards, and thanks again for this life-saving hint.


----------

