# vmx showing less performance in FreeBSD13



## kavitakr (Feb 23, 2022)

We have changed the NIC from E1000V to vmx, and we see a drop in performance when we run netperf.
Also I see:

```
vmx0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4800038<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,NOMAP>
        ether 00:0c:29:0c:5a:f3
        inet 10.10.0.28 netmask 0xffffffe0 broadcast 10.10.0.31
        media: Ethernet autoselect
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
```
The media for vmx0 shows only autoselect, but for em0 we get full-duplex. Can we say vmx is in full-duplex mode? Is this a known issue?

```
em0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=481249b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER,NOMAP>
        ether 00:50:56:a7:93:8f
        inet 10.10.4.18 netmask 0xffffffe0 broadcast 10.10.4.31
        media: Ethernet autoselect (1000baseT <full-duplex>)
```


----------



## monwarez (Feb 23, 2022)

I would simply guess that the driver for real hardware is more optimized than the one for a virtual interface. Also, looking at other projects that use FreeBSD, they recommend the emulated driver over vmx for traffic shaping (since shaping does not work on VMware, according to them).





						Virtual & Cloud based Installation — OPNsense  documentation
					






					docs.opnsense.org


----------



## sko (Feb 23, 2022)

IIRC the single queue that is available by default severely limits vmx performance.
If you are using ESXi and have MSI-X available, you can set hw.pci.honor_msi_blacklist to 0 to use all available queues (see vmx(4)).

I haven't used VMware for quite a while (and never in production...), but if they offer virtio devices you should always go with them, as they are far more optimized than drivers for the (proprietary and closed-source) VMXNET virtual interfaces...
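The knob mentioned above is a loader tunable, so it has to be set before boot rather than at runtime. A minimal sketch of what this looks like, assuming an ESXi guest where MSI-X is actually offered:

```
# /boot/loader.conf
# Stop honoring the MSI/MSI-X blacklist so vmx(4) can use MSI-X
# and attach with multiple queue pairs instead of a single queue.
hw.pci.honor_msi_blacklist="0"
```

After a reboot, `sysctl hw.pci.honor_msi_blacklist` should report 0, and the vmx attach messages in dmesg should show more than one queue.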


----------



## kavitakr (Feb 23, 2022)

sko said:


> IIRC the single queue that is available by default severely limits vmx performance.
> If you are using ESXi and have MSI-X available, you can set hw.pci.honor_msi_blacklist to 0 to use all available queues (see vmx(4)).
> 
> I haven't used VMware for quite a while (and never in production...), but if they offer virtio devices you should always go with them, as they are far more optimized than drivers for the (proprietary and closed-source) VMXNET virtual interfaces...


We are using ESXi 6.7, and hw.pci.honor_msi_blacklist is 1.
We see a small improvement with -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso.

But we at least expected it to be better than E1000V.
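For anyone wanting to reproduce this, the offload flags above can be applied in one ifconfig invocation and persisted across reboots; a sketch, assuming the interface is vmx0 with the addressing from the first post (0xffffffe0 is a /27):

```
# Disable checksum offload, LRO and TSO on vmx0 for the running system.
ifconfig vmx0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso

# /etc/rc.conf -- same flags applied at boot:
# ifconfig_vmx0="inet 10.10.0.28/27 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso"
```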


----------



## kavitakr (Feb 23, 2022)

I see the bug exists: PR 258755.


----------



## Antonical (Oct 18, 2022)

Just waking this up. I can see there are a number of bugs related to the vmx driver for FreeBSD; I have added to the referenced bug, for what good that will do.
It seems the bugs are not being picked up by anyone on the dev team. Status is showing as new, with no validation from the developers.

The VMXNET3 VMware ESXi adaptor presents as vmx in FreeBSD 13 and is problematic. Performance is degraded substantially compared to Linux VMs, which all run on the same physical hardware (Broadcom NetXtreme II quad port) at wire speed, 1G.

VMware is one of the largest virtualisation platforms, and VMXNET3 is its default adaptor. It would be good to see the BSD OSes perform well on ESXi.

Cheers
Tony


----------



## SirDice (Oct 18, 2022)

Antonical said:


> It seems the bugs are not being picked up by anyone in the dev team.


Ping the developers on the mailing lists, freebsd-virtualization or freebsd-net seem like good starting points for this.


----------

