# IPSec Performance



## sydney6 (Jan 26, 2015)

Hello Everybody,

I have set up two VMs under Xen, each running an IPSec endpoint. Everything seems to work fine, but (measured with benchmarks/iperf) throughput drops from ~10 Gb/s on a non-IPSec kernel to ~200 Mb/s with IPSec compiled in, regardless of whether IPSec is actually in use or not.
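For reference, a comparison like this can be reproduced roughly as follows (the peer address 10.0.0.2 is a placeholder for the remote endpoint):

```shell
# On the receiving VM: start an iperf server (from the benchmarks/iperf port)
iperf -s

# On the sending VM: run a 30-second TCP throughput test against the peer,
# reporting every 5 seconds (10.0.0.2 is an assumed address; substitute
# the other endpoint's IP)
iperf -c 10.0.0.2 -t 30 -i 5
```

Running the same test against kernels with and without IPSec compiled in isolates the cost of the IPSec code path.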

I have read about the reasoning why IPSec isn't enabled in GENERIC, but wanted to ask whether this is the kind of performance hit one has to expect.

I have observed this with FreeBSD 10.1 and 10-STABLE, both amd64. The hypervisor is Xen 4.4 with a Linux 3.16 Dom0.


----------



## junovitch@ (Jan 27, 2015)

That is a pretty big difference.  What NIC type does the VM use?  Does trying different NIC types help?  Some folks here have reported performance issues using virtio(4) that were resolved by selecting a NIC type on the hypervisor that presents itself to the guest as an em(4)-driven Intel NIC.
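To see which driver the guest actually attached, something like this works on FreeBSD (a sketch; the device names shown are the usual ones but depend on the setup):

```shell
# List the interfaces: xn0 indicates the paravirtualized netfront driver,
# em0 an (emulated) Intel NIC handled by em(4)
ifconfig -a

# Show attached PCI devices with their driver names and descriptions
pciconf -lv | grep -B 3 network
```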


----------



## sydney6 (Jan 27, 2015)

At the moment I am using the paravirtualized netfront driver, which was added with XENHVM in GENERIC for 10.0-RELEASE. But this could indeed be the problem (and its solution). I'm wondering how I could prevent FreeBSD from picking up that driver, or how I could boot the VMs with qemu-emulated e1000 NICs, to rule this out..
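One way to try the emulated NIC, assuming an xl-managed domU (the config path and domain name below are hypothetical):

```shell
# In the guest's xl config (e.g. /etc/xen/fbsd1.cfg), declare an emulated
# e1000 NIC instead of relying on the default PV device:
#
#   vif = [ 'bridge=xenbr0, type=ioemu, model=e1000' ]
#
# Then recreate the guest so the new device model takes effect:
xl shutdown fbsd1
xl create /etc/xen/fbsd1.cfg
```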


----------



## ssoorruu (Oct 10, 2017)

Hi sydney6,

Have you found a solution for this?
(I'm trying something similar: achieving high-throughput IPSec, above 2 Gbit/s.)


----------



## sydney6 (Oct 10, 2017)

Actually, I do not remember, as many factors play into this and there has been a _lot_ of improvement lately, e.g. the reworked xen-netfront driver, (some) solved NIC-offloading issues, AES-NI support for AES-GCM, the reworked IPSec module, GNN's tryforward optimizations, ...

There were/are forwarding issues with TCP offloading enabled on the Xen host, which can become a performance bottleneck; I, for example, have TCP offloading disabled on the Xen host. Alternatively you can use an emulated userspace NIC like the e1000, which obviously also becomes a bottleneck..
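Disabling the offloads can be sketched like this (the interface names are placeholders for the actual Dom0 backend interface and the guest's netfront interface):

```shell
# Linux Dom0: turn off segmentation and transmit-checksum offload on the
# backend/bridge interface carrying the guest's traffic
ethtool -K eth0 tso off tx off

# FreeBSD guest: disable the corresponding offloads on the netfront interface
ifconfig xn0 -txcsum -rxcsum -tso -lro
```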

Furthermore, on systems without AES-GCM support the hashing algorithm in use, e.g. SHA, can become a bottleneck. Also, ESP processing over a single IPSec connection topped out at around 1 Gb/s on a 3.6 GHz Intel CPU, as it is single-threaded in the kernel..
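On FreeBSD, whether hardware AES is available and usable can be checked roughly like this:

```shell
# Check whether the CPU advertises AES-NI in its boot-time feature flags
grep AESNI /var/run/dmesg.boot

# Load the aesni(4) driver so the crypto framework (and thus IPSec)
# can use hardware-accelerated AES
kldload aesni
```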

Have you tried it already, and what are your concrete issues? Also, I don't quite understand what you are trying to achieve.. Transport or tunnel setup? How/what are you benchmarking? A single flow or multiple flows? Which algorithms are you using? IPv4 or IPv6? Questions over questions..
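For the single- vs. multiple-flows question, iperf can generate both; comparing the two shows whether a single flow (and hence the serialized ESP processing of one connection) is the limit (the peer address is a placeholder):

```shell
# One TCP stream vs. four parallel streams to the same peer
iperf -c 10.0.0.2 -t 30 -P 1
iperf -c 10.0.0.2 -t 30 -P 4
```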


----------



## debguy (Oct 17, 2017)

I haven't performance-tested IPSec before, but I'm sure "large firewall tables" (anything that filters IP packets) can have an enormous impact. Complex filtering, routing, and encryption combined could plausibly account for an impact of that size.

Perhaps your hardware has accelerated encryption that you could take advantage of with the right kernel options and encryption settings.  Another good bet is using hardware that has IPSec built in (i.e., most ISP modems have lightweight encryption built in).

To really pin down the cause, you should try the same setup with encryption disabled (but through the same tunnel; maybe tell IPSec to use the weakest encryption).  You could also have a routing or firewall issue going on, such as a delay due to blocked ICMP.
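With the KAME tools, the IPSec state can be flushed on both endpoints to benchmark the same path without ESP processing (a sketch; run as root on each endpoint):

```shell
# Flush all security associations (SAD) and security policies (SPD),
# then rerun the benchmark over the identical route without IPSec
setkey -F
setkey -FP
```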


----------

