# Bad performance using bridges



## pos (Mar 10, 2019)

Hi

Asking this as I am thinking of using bridges (with bhyve).

The question has its roots in this paper: https://people.freebsd.org/~olivier...FreeBSD_for_routing_and_firewalling-Paper.pdf

And I refer to the performance comparison between bridges vs no bridges:

> The massive performance degradation (-63%) is a big surprise: if_bridge code is using lot's on non-optimised locking mechanism. Its usage needs to be avoided.

Question:
I wonder if anything has been done to optimize the if_bridge codebase between 11.1 and 12.x. (No, I have not gone through the source code and checked myself.)


Thanks in advance
/Peo


----------



## Phishfry (Mar 10, 2019)

Not that I know of. I am using routed networking with bhyve. I added two 4-port Intel LAN cards, hooked each one up to a switch, and pass each NIC through to bhyve.

The way packets co-mingle on bridges never thrilled me. Routed networks are superior.

Most supported 10G cards provide VFs for bhyve as well; see iovctl(8).
I am trying it with an Intel X540 and Chelsio T420s.
They both provide VF interfaces when activated.

That has been my approach: keep the onboard NICs for hypervisor management and add NICs for the clients.
To me a bridge is nothing more than a party line. Add a tap and I am sure you are losing speed.
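For reference, a minimal sketch of what this PCI passthrough setup looks like on the host. The bus/slot/function numbers below are hypothetical; the real ones come from `pciconf -lv`:

```shell
# /boot/loader.conf -- reserve the NIC functions for bhyve passthrough
# (the 2/0/0 and 2/0/1 selectors are example values; find yours with pciconf -lv)
pptdevs="2/0/0 2/0/1"
vmm_load="YES"
```

The guest is then started with the device attached, e.g. `bhyve ... -s 5:0,passthru,2/0/0 ... vmname`, so the VM drives the NIC directly with no bridge or tap in the path.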


----------



## pos (Mar 10, 2019)

I have to read about iovctl(8)... I don't know anything about SR-IOV or what it offers. Thanks for the hint.

So you use VT-d and have one physical NIC for each VM? That approach will generate a lot of heat if you use many 10GBase-T ports in the server (SFP+ generates less heat). Also, you could not run that many VMs this way, and it is much more expensive. But on the other hand, you will not have any performance degradation due to bridge usage.

I may go with your approach as a start, but would ideally prefer one trunked NIC with VLANs on it and tie the VMs to individual VLANs. That requires bridges, though. So I would very much appreciate a comment from anyone who has information about this bridge performance issue.
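For comparison, the VLAN-trunk-plus-bridge layout described above would look roughly like this in `/etc/rc.conf` (interface names, VLAN tags, and the one-bridge-per-VLAN layout are all example assumptions):

```shell
# /etc/rc.conf -- one trunk NIC (ix0), VLANs 100 and 200,
# one bridge + tap pair per VLAN for the bhyve guests
vlans_ix0="100 200"
cloned_interfaces="bridge100 tap100 bridge200 tap200"
ifconfig_bridge100="addm ix0.100 addm tap100 up"
ifconfig_bridge200="addm ix0.200 addm tap200 up"
```

Each guest then gets the tap member of its VLAN's bridge; this is exactly the if_bridge data path whose overhead the paper measured.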


----------



## Phishfry (Mar 10, 2019)

Here is some scant information on VFs:

Testing VF/PF code

The NIC acts as a virtual switch, and you can have a great many VFs per PF.
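As a sketch, creating VFs with iovctl(8) is driven by a per-PF config file; something like the following, where the device name and VF count are example values and the available parameters depend on the driver:

```shell
# /etc/iov/ix0.conf -- see iovctl(8); "ix0" and num_vfs are example values
PF {
        device : "ix0";
        num_vfs : 4;
}

DEFAULT {
        passthrough : true;
}
```

Running `iovctl -C -f /etc/iov/ix0.conf` then creates the VFs, and with `passthrough : true` each VF can be handed to a bhyve guest like any other ppt device.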


----------



## Phishfry (Mar 10, 2019)

pos said:


> So you use VT-d and have one physical NIC for each vm?


Yes, right now that is my setup: two 4-port i350 cards provide gigabit Ethernet to my VMs using ppt passthrough.
I am just experimenting with VF/IOV for now. An IOV-capable NIC provides lots of interfaces:
128 on the Chelsio, if I remember correctly.


----------



## pos (Mar 25, 2019)

Phishfry 

I have tested SR-IOV now. You can either set the VLAN on the VF in the host or configure the VLAN in the guest; both work. So in my test I have removed the bridges in favour of SR-IOV VFs and use VLAN-tagged VFs only.
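For the in-guest variant, assuming the Intel VF shows up as `ixv0` inside the guest (the interface name, tag, and address below are example values), the guest's `/etc/rc.conf` carries the tag:

```shell
# guest /etc/rc.conf -- tag VLAN 100 on the passed-through VF
# (ixv0, tag 100 and the address are example values)
vlans_ixv0="100"
ifconfig_ixv0_100="inet 192.168.100.10/24"
```

The host-side variant instead assigns the VLAN to the VF before passing it through, so the guest sees untagged traffic.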

Really good solution. Thanks for the idea!


----------

