# Multiple network queues on vmx interface
## dvish (Nov 19, 2014)

I'm trying to get multiple transmit and receive queues on vmx. FreeBSD 10.1-RELEASE amd64 hosted on ESXi 5.5 Update 2a, 6 vCPUs; the kernel is compiled with the vmx driver.

The vmx(4) man page says:

> The number of queues allocated depends on the presence of MSI-X, the number of configured CPUs, and the tunables listed below. FreeBSD does not enable MSI-X support on VMware by default. The hw.pci.honor_msi_blacklist tunable must be disabled to enable MSI-X support.



OK, but when I set

```
hw.pci.honor_msi_blacklist="0"
```

in loader.conf, the vmx interface won't come up. The message

```
vmx0: device enable command failed!
```

appears during boot.

With

```
hw.pci.honor_msi_blacklist="1"
```

the vmx interfaces come up, but each interface uses only a single queue (one CPU).

What am I doing wrong? Are any additional settings needed on ESXi?


----------



## raVen (Oct 3, 2015)

The FreeBSD vmx(4) driver has no mechanism for detecting the number of cores, so you have to set the queue counts by hand. If the number of cores in your VM is less than hw.vmx.[tr]xnqueue, you'll get this at boot:

```
vmx0: device enable command failed!
```
This configuration works for me now:

```
[~@kbsd10/07:08:23]
raven$ cat /boot/loader.conf | egrep msi\|queue
hw.pci.honor_msi_blacklist="0"
hw.vmx.txnqueue="4"
hw.vmx.rxnqueue="4"
[~@kbsd10/07:09:05]
raven$ cat /var/run/dmesg.boot | fgrep Multip
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
[~@kbsd10/07:09:07]
raven$ vmstat -i | fgrep vmx
irq256: vmx0:tq0                      10          0
irq257: vmx0:tq1                       4          0
irq258: vmx0:tq2                      11          0
irq259: vmx0:tq3                     420          2
irq260: vmx0:rq0                     134          0
irq261: vmx0:rq1                      30          0
irq262: vmx0:rq2                       5          0
irq263: vmx0:rq3                     455          2
```
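For a 6-vCPU guest like the original poster's, a minimal loader.conf sketch along the same lines might look as follows. The queue values here are illustrative assumptions, not tested settings; the only hard rule, per the explanation above, is that they must not exceed the VM's vCPU count:

```
# /boot/loader.conf -- sketch for a 6-vCPU VM; queue counts are illustrative
# Allow MSI-X on VMware so multiple queues can be allocated
hw.pci.honor_msi_blacklist="0"
# Queue counts must not exceed the number of vCPUs, or
# "device enable command failed!" appears at boot
hw.vmx.txnqueue="6"
hw.vmx.rxnqueue="6"
```

After rebooting, `vmstat -i | fgrep vmx` (as in the listing above) should show one tq/rq interrupt pair per configured queue.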


----------

