# SR-IOV passthru network card driver



## Ofloo (Feb 3, 2019)

I've installed both FreeBSD 11.2 and 12; neither seems to support it out of the box.

On the host system, the configuration:

```
PF {
        device:         ix3;
        num_vfs:        20;
}
DEFAULT {
        passthrough:    true;
        allow-set-mac:  true;
        allow-promisc:  true;
}
```
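For reference, a configuration like the one above is applied with iovctl(8); a sketch, assuming the file lives at /etc/iov/ix3.conf (the path is an assumption, matching the PF device name):

```shell
# Create the VFs described in the config file
iovctl -C -f /etc/iov/ix3.conf

# With passthrough: true, the VFs should show up as ppt devices
pciconf -l | grep ^ppt
```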


```
loader="bhyveload"
cpu=1
memory=256M
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
uuid="118457fb-2788-11e9-b711-001b21a2777c"

passthru0="8/0/131"
```
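As a side note, the `passthru0` selector is the bus/slot/function triple from the host's pciconf listing; a quick way to cross-check it (a sketch):

```shell
# List passthrough (ppt) devices on the host; a line like
# "ppt0@pci0:8:0:131" corresponds to passthru0="8/0/131"
pciconf -l | grep ^ppt
```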

The passthrough worked; however, the card doesn't get attached to a driver on the guest system:

```
none0@pci0:0:5:0:       class=0x020000 card=0x00008086 chip=0x15c58086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'X553 Virtual Function'
    class      = network
    subclass   = ethernet
```


----------



## Phishfry (Feb 3, 2019)

I am passing through onboard Intel controllers and two network cards with 4 ports each, both igb and em.
I'm not sure how vm-bhyve works, but I use this:


bhyve/pci_passthru - FreeBSD Wiki

/boot/loader.conf

```
pptdevs="6/0/0 6/0/1 7/0/0 7/0/1 132/0/0 132/0/1 134/0/0 134/0/1"
vmm_load="YES"
nmdm_load="YES"
```
`bhyveload -m 8G -S -d /vm/freebsd/freebsd1.img freebsd1;`
`bhyve -S -m 8G -c 16 -AHP -s 0,hostbridge -s 1,lpc -s 2:0,ahci-hd,/vm/freebsd/freebsd1.img -s 5:0,passthru,6/0/0 -s30,xhci,tablet -l com1,/dev/nmdm1A freebsd1;`


----------



## Ofloo (Feb 3, 2019)

It's not a matter of being able to pass them through; they are passed through. It's only that they're not getting any drivers.

Host system:

```
ppt0@pci0:8:0:129:      class=0x020000 card=0x00008086 chip=0x15c58086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'X553 Virtual Function'
    class      = network
    subclass   = ethernet
```

Guest system:


```
none0@pci0:0:5:0:       class=0x020000 card=0x00008086 chip=0x15c58086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'X553 Virtual Function'
    class      = network
    subclass   = ethernet
```


----------



## Phishfry (Feb 3, 2019)

I have only seen SR-IOV on my Chelsio cards. But in my case I could not get them to pass through right either.
Have you tried passing through the whole device, 8/0/0, just to see what it does?
I got my Chelsio working by using a different PCI address than the SR-IOV ones.


----------



## Ofloo (Feb 3, 2019)

I expect that to work, but I'll get back to you in a sec.


----------



## Phishfry (Feb 3, 2019)

What is the hardware? I see the ix driver in use. Is this a single physical port or a dual/quad device?


----------



## Ofloo (Feb 3, 2019)

Intel X553 Gigabit Ethernet Controller, quad device.

FreeBSD test 11.2-RELEASE-p4:

```
ix0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether ac:1f:6b:45:bb:3e
        hwaddr ac:1f:6b:45:bb:3e
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: no carrier
```

FreeBSD test3 12.0-RELEASE:

```
ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e53fbb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether ac:1f:6b:45:bb:3f
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
```


----------



## Phishfry (Feb 3, 2019)

I have a feeling that the SR-IOV interfaces are not meant to be assigned, and maybe the parent device handles them.


----------



## Ofloo (Feb 3, 2019)

Of course they are; it's an alternative to vtnet.

https://people.freebsd.org/~rstone/BSDCan_SRIOV.pdf


----------



## Phishfry (Feb 3, 2019)

I would start by paring back the number of VF interfaces to 4 for now, like this shows:

Testing VF/PF code


----------



## Ofloo (Feb 3, 2019)

20 or 4 doesn't really matter, though:

> uint16_t
> Accepts any integer in the range 0 to 65535, inclusive.
>
> num_vfs (uint16_t)

iovctl.conf(5)


----------



## Ofloo (Feb 3, 2019)

I think it's a problem with the ix driver:


```
ixv0: <Intel(R) PRO/10GbE Virtual Function Network Driver> at device 0.128 on pci6
ixv0: ...reset_hw() failure: Reset Failed!
ixv0: IFDI_ATTACH_PRE failed 5
device_attach: ixv0 attach returned 5
ppt2 at device 0.130 on pci6
ppt3 at device 0.132 on pci6
ppt4 at device 0.134 on pci6
```

Going to reboot and see what happens.


----------



## Phishfry (Feb 3, 2019)

No doubt. You're trying to run 20 VFs...
Start low and work up. Do you really need 20 VFs?


----------



## Ofloo (Feb 3, 2019)

And it seems it has been around for a while now:

211062 – ix(4): SR-IOV virtual function driver fails to attach Intel 10-Gigabit X540-AT2 (0x1528): Failed to attach pci0:129:0:129: Input/output error (bugs.freebsd.org)



Took your advice:


```
# cat /etc/iov/ix0.conf
PF {
  device:        ix0;
  num_vfs:        4;
}

DEFAULT {
  passthrough:        true;
}

VF-0 {
  passthrough:          false;
}
```

No difference.
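For anyone following along, one way to confirm on the host that the VFs from the config above were actually created, and which driver claimed them (a sketch; with `passthrough: true` in DEFAULT the VFs should attach to ppt, while VF-0, with passthrough disabled, would be expected to attach to the host's ixv driver):

```shell
# Apply the config, then list the VFs and their drivers
iovctl -C -f /etc/iov/ix0.conf
pciconf -lv | grep -B1 'Virtual Function'
```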


----------



## Phishfry (Feb 3, 2019)

I can't imagine how they can get 64 virtual interfaces from 4 physical ones. There certainly is not enough bandwidth to provide 10G to 64 VMs. So I don't know what this magic does. I assume it is like VALE and has a virtual switch onboard.

I will retry tomorrow with my Chelsios now that I see what I need to do. It had the VFs created, but only 4 per interface.


----------



## Ofloo (Feb 3, 2019)

It all depends on how bandwidth-intensive your VMs are, and how continuously they use it. You can split a gigabit NIC over a lot of interfaces; think of it as a time share: not everyone needs to be using it all the time, or at the same time.

What about 24-port switches with a 1-gigabit uplink? I mean, my internet is 500 Mbit/s; that doesn't mean I max it out all the time, and I certainly have more than 64 devices!?

I usually use VMs to separate things from each other, and to make them more portable: suppose my hardware fails, I just set up a bhyve host system, import the ZFS pool, done.

And if you back up your configuration (like rc.conf) with tarsnap, and set up a script that installs the basic packages, then it should be almost painless to reinstall.


----------



## Ofloo (Feb 3, 2019)

Downloaded the Intel driver, had to edit the Makefile to enable SR-IOV, copied it to /boot/modules, named it if_ix_updated.ko, and added if_ix_updated_load="YES" to /boot/loader.conf.

So locally it works already.


```
ixv0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=e507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
    ether 1e:56:13:ac:40:b2
    media: Ethernet autoselect
    status: no carrier
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
```

FreeBSD 11.2 guest: no support, not even with the updated driver.

FreeBSD 12 guest, with the updated driver only on the host, not on the guest:


```
ixv0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 02:17:08:89:d4:59
        media: Ethernet autoselect
        status: no carrier
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
```

So what did I do to make it work:

1. Downloaded the latest Intel driver: https://downloadmirror.intel.com/22283/eng/23_5_1.zip
2. Compiled the Intel driver with *SRIOV_ENABLE = 1*
3. Copied the driver if_ix.ko to /boot/modules
4. Renamed it to if_ix_updated.ko
5. `sysrc -f /boot/loader.conf if_ix_updated_load=yes`
6. Rebooted, and done

It only works for a FreeBSD 12 guest though; I need to find a way to make it work for FreeBSD 11.2.
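The steps above might look roughly like this as commands — a sketch; the download URL and the SRIOV_ENABLE make knob come from the post itself, while the unpacked directory layout is an assumption:

```shell
# Fetch and unpack the Intel driver bundle (URL from the post above)
fetch https://downloadmirror.intel.com/22283/eng/23_5_1.zip
unzip 23_5_1.zip

# Build with SR-IOV enabled (Makefile knob per the post; source path is
# an assumption and depends on the bundle's layout)
cd ixgbe-*/src && make SRIOV_ENABLE=1

# Install under a distinct name so it doesn't clash with the in-tree ix
cp if_ix.ko /boot/modules/if_ix_updated.ko
sysrc -f /boot/loader.conf if_ix_updated_load=YES
reboot
```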

At least the driver works; network connectivity is untested.

edit: haven't been able to route traffic over them yet.


----------



## Ofloo (Feb 4, 2019)

Now the driver works, but the network isn't passed through.


----------



## pos (May 5, 2019)

Ofloo, did you solve it? I have also seen the errors you posted in post no. 12 above.


----------



## Ofloo (May 5, 2019)

I was able to solve the errors and make it attach, but I haven't made it functional yet. Got distracted, but I'll look into it again soon. From what I can tell it's a driver issue. It still might be a configuration issue, since the interface now works without errors, but it's not really passing traffic through.


----------



## pos (May 5, 2019)

I can see the code in Intel's ix-3.3.6 package seems to be broken. I have both the problem you have, plus I want transparent VLAN support. The transparent VLAN feature seems to be only partly added to the code, i.e. not finished...


----------



## pos (May 6, 2019)

Ofloo 

I have talked to the maintainer of the intel-ix-kmod package, which carries this ix 3.3.6 driver. There will now be a package update that has SRIOV_ENABLE as a configurable option: https://svnweb.freebsd.org/ports?view=revision&revision=500919

Transparent VLAN is another thing though...


----------



## Ofloo (May 6, 2019)

That's great, ..


----------



## Phishfry (May 8, 2019)

It was definitely broken, just like you guys said. Everything was OK except inside the VM. Trying the newest driver now.


> ixv0: <Intel(R) PRO/10GbE Virtual Function Network Driver> mem 0xc0004000-0xc0007fff,0xc0008000-0xc000bfff at device 3.0 on pci0
> ixv0: ...reset_hw() failure: Reset Failed!
> ixv0: IFDI_ATTACH_PRE failed 5
> device_attach: ixv0 attach returned 5


----------



## enbucm (May 22, 2019)

Hi, I was running into this problem for a long time, but the solution is very simple. Just add

hw.pci.honor_msi_blacklist=0

to /boot/loader.conf, then reboot, and the issue is fixed! A new ixv0 interface is up and running from then on...

Be aware, some other issues will arise after SR-IOV is running on ESXi with FreeBSD 12:

#1 > Make sure the management network is running on an independent network port without SR-IOV enabled!
#2 > Sometimes, when the ixv0 interface starts at boot, the entire SR-IOV network stack hangs, and the ESXi host needs a reboot to fix it! I am trying to find a solution for this at the moment, and it looks like LRO and TSO need to be disabled at the earliest stage. If anyone finds the best solution for a stable environment, just let the community know. I will do the same...

Just to be clear: this solution works with a standard installation of FreeBSD 12.0 without any other 3rd-party driver.
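To make that persistent, the tunable can be appended like this (a sketch; sysrc is just one way to edit loader.conf, per the post this goes in the guest):

```shell
# Persist the MSI blacklist override in the guest's loader.conf, then reboot
sysrc -f /boot/loader.conf hw.pci.honor_msi_blacklist=0
shutdown -r now
```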


----------



## enbucm (May 23, 2019)

Disable TSO & LRO in /etc/rc.conf:

```
# IPv4 Virtual Function
ifconfig_ixv0="inet your_ipv4/24 -tso4 -tso6 -lro -vlanhwtso"
defaultrouter="ipaddress_of_your_ipv4_router"

# IPv6 Virtual Function
ifconfig_ixv0_ipv6="inet6 your_ipv6/64 -tso4 -tso6 -lro -vlanhwtso"
ipv6_defaultrouter="ipaddress_of_your_ipv6_router%ixv0"
```

As far as I understand, disabling TSO / LRO also improves network performance on vSphere 6.7.

Now the environment runs smoothly and stably.


----------



## Phishfry (May 23, 2019)

enbucm said:


> Just add
> 
> hw.pci.honor_msi_blacklist=0
> 
> to /boot/loader.conf -> reboot-> issue fixed !


So just to be clear this is in the guest VM right?


----------



## enbucm (May 25, 2019)

Phishfry said:


> So just to be clear this is in the guest VM right?


It's a VMware vSphere ESXi 6.7 virtual machine using an Intel Server Adapter X520-2 (Intel 82599 chip) with 10 GBit SR-IOV.


----------

