# Configuring bhyve-vm networking. How?



## bogong (Apr 24, 2019)

Hello all!
I've got an issue configuring bhyve VM networking: the installed guest (Debian 9) does not see its neighbours (other IPs in the same network) but sees the internet perfectly. How do I make the guest see the neighbours?

What I've done:

```
$ mkdir /[custom_dir]/vms
$ pkg install vm-bhyve grub2-bhyve
$ nano /boot/loader.conf

# --------------------
# Bhyve virtual machine settings

if_bridge_load="YES"
if_tap_load="YES"
nmdm_load="YES"
vmm_load="YES"

$ nano /etc/rc.conf

# --------------------
# Bhyve settings

vm_enable="YES"
vm_dir="/custom/vms"
vm_list=""
vm_delay="5"
ifconfig_igb0_aliases="inet 172.16.20.151-200/24"

$ vm switch create public
$ vm switch add public igb1
$ cp /usr/local/share/examples/vm-bhyve/* /custom/vms/.templates/
$ vm iso http://ftp.uni-kl.de/pub/linux/ubuntu.iso/bionic/ubuntu-18.04.1.0-live-server-amd64.iso
$ vm create -t ubuntu -s 100G myubuntu
$ vm configure myubuntu
$ vm install myubuntu ubuntu-18.04.1.0-live-server-amd64.iso
$ vm console myubuntu
$ sysrc vm_list="myubuntu"
$ vm start myubuntu
```


----------



## SirDice (Apr 24, 2019)

1) Your igb1 interface isn't "up". 
2) Double check if the VM is actually tied to the "public" switch.
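
Something along these lines should confirm both points (a sketch; `myubuntu` is the VM name from the first post, and the exact `vm info` output layout may differ by vm-bhyve version):

```
$ ifconfig igb1          # look for UP in the flags line
$ ifconfig igb1 up       # bring it up if it isn't
$ vm switch list         # the physical interface should appear under PORTS
$ vm info myubuntu       # the network-interface section shows the attached switch
```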


----------



## bogong (Apr 24, 2019)

1) igb1 is working perfectly. It's the main interface for everything I've virtualised (jails and VMs). When I ping from the jails to the VM all is OK; when I try to ping from the VM to the jails I get the problem.
2) It's tied to public and that's working perfectly, because I can reach the internet from inside the VM; the problem is only in reaching the neighbours.

I've published only the settings related to bhyve; the full rc.conf is much bigger.


----------



## SirDice (Apr 24, 2019)

bogong said:


> 1) igb1 is working perfectly. It's the main interface for everything I've virtualised (jails and VMs). When I ping from the jails to the VM all is OK; when I try to ping from the VM to the jails I get the problem.
> 2) It's tied to public and that's working perfectly, because I can reach the internet from inside the VM; the problem is only in reaching the neighbours.


This sounds like a subnet mask issue. You could get into a situation like this if the Ubuntu VM has a /25 or /26 subnet mask instead of a /24.
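
To illustrate (a Python sketch using the 172.16.20.x range from the first post; the neighbour address is made up):

```python
import ipaddress

guest_ok  = ipaddress.ip_interface("172.16.20.151/24")
guest_bad = ipaddress.ip_interface("172.16.20.151/26")
neighbour = ipaddress.ip_address("172.16.20.20")

# With a /24 the neighbour is on-link, so the guest ARPs for it directly.
print(neighbour in guest_ok.network)   # True

# With a /26 the guest's subnet is 172.16.20.128/26; the neighbour looks
# remote, and traffic for it is sent to the default gateway instead.
print(guest_bad.network)               # 172.16.20.128/26
print(neighbour in guest_bad.network)  # False
```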


----------



## bogong (Apr 24, 2019)

I found this article -  https://github.com/churchers/vm-bhyve/wiki/Virtual-Switches
I understand that all is around creating switch, but couldn't find any examples or references.


----------



## bogong (Apr 24, 2019)

SirDice said:


> This sounds like a subnet mask issue. You could get into a situation like this if the Ubuntu VM has a /25 or /26 subnet mask instead of a /24.



For now I've installed plain Debian 9, and the mask was the first thing I checked: I wrote netmask 255.255.255.0 manually in the config everywhere.


----------



## bogong (Apr 24, 2019)

This is what I got from the switch list command:

```
$ vm switch list
NAME    TYPE      IFACE      ADDRESS  PRIVATE  MTU  VLAN  PORTS
public  standard  vm-public  -        no       -    -     igb0
```
I think the problem is the address: it's not defined, whereas in the manuals on GitHub it is present.

```
# vm switch list
NAME    TYPE      IFACE      ADDRESS         PRIVATE  MTU  VLAN  PORTS
public  standard  vm-public  192.168.8.1/24  no       -    -     -
```
I am trying to figure it out.


----------



## SirDice (Apr 24, 2019)

bogong said:


> I found this article - https://github.com/churchers/vm-bhyve/wiki/Virtual-Switches
> I understand that all is around creating switch, but couldn't find any examples or references.


Not much to talk about really. All it does is create a bridge(4) interface. 

```
root@hosaka:~ # vm list
NAME            DATASTORE  LOADER     CPU  MEMORY  VNC           AUTOSTART  STATE
case            default    bhyveload  4    4096M   -             Yes [3]    Running (2502)
jenkins         default    bhyveload  4    4096M   -             Yes [5]    Running (3198)
kdc             default    none       2    2048M   0.0.0.0:5901  Yes [2]    Running (60128)
gitlab          stor10k    bhyveload  4    6144M   -             Yes [9]    Running (4838)
gitlab-runner   stor10k    bhyveload  4    4096M   -             Yes [10]   Running (4858)
kibana          stor10k    bhyveload  4    6144M   -             Yes [1]    Running (1952)
lady3jane       stor10k    bhyveload  4    4096M   -             No         Stopped
plex            stor10k    bhyveload  4    4096M   -             Yes [6]    Running (4130)
riviera         stor10k    bhyveload  2    4096M   -             No         Running (31342)
sdgame01        stor10k    grub       2    4096M   -             No         Stopped
tessierashpool  stor10k    bhyveload  4    32768M  -             Yes [4]    Running (3178)
wintermute      stor10k    bhyveload  4    4096M   -             Yes [8]    Running (4425)
root@hosaka:~ # vm switch list
NAME     TYPE      IFACE       ADDRESS  PRIVATE  MTU   VLAN  PORTS
servers  standard  vm-servers  -        no       9000  11    lagg0
public   standard  vm-public   -        no       9000  10    lagg0
```
I have everything tied to a lagg(4) interface, which consists of two physical interfaces, igb1 and igb2. I also have VLANs configured now, but I started with a single interface and network.



bogong said:


> I think the problem is in address - it's not defined, but in manuals on github it's presented.


Don't assign an address to the bridge. 


```
root@hosaka:~ # ifconfig vm-public
vm-public: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        ether 1e:78:8a:3f:d7:70
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 21 priority 128 path cost 2000000
        member: tap6 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 17 priority 128 path cost 2000000
        member: tap9 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 20 priority 128 path cost 2000000
        member: tap8 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 19 priority 128 path cost 2000000
        member: tap5 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 16 priority 128 path cost 2000000
        member: tap4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 15 priority 128 path cost 2000000
        member: tap3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 14 priority 128 path cost 2000000
        member: lagg0.10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000000
        groups: bridge vm-switch viid-4c918@
        nd6 options=1<PERFORMNUD>
```

I think your problem is partly caused by the way you've set things up. Are your jails tied directly to igb1? In that case traffic from the VM is captured by the bridge(4) and passed out on igb1 instead of being passed to your jails due to the way a bridge(4) "hooks" into the interface stack. If you search these forums you'll find similar issues with a combination of VLAN tagged and untagged interfaces being bridged while also having services tied directly to the interfaces.

As a work-around you could try switching to VNET jails and adding the other ends of the epair(4) interfaces to your vm-public bridge interface.
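
A minimal jail.conf sketch of that workaround (all names here are hypothetical examples; check jail(8) and the vm-bhyve wiki against your setup):

```
# Hypothetical VNET jail; jail and interface names are examples
myjail {
    vnet;
    vnet.interface = "epair0b";                         # jail side of the epair(4)
    exec.prestart += "ifconfig epair0 create up";       # creates epair0a + epair0b
    exec.prestart += "ifconfig vm-public addm epair0a"; # host side onto the vm-bhyve bridge
    exec.poststop += "ifconfig epair0a destroy";
    # ... path, host.hostname, exec.start, etc.
}
```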


----------



## SirDice (Apr 24, 2019)

You may also try to bind your jails to the vm-public bridge instead of the igb1 interface if you don't want to use VNET jails. But this might result in an ordering issue during boot if the jails are started before vm-bhyve. To solve that you would need to create the bridge interface yourself (so it already exists when the jails are started) and import that bridge into vm-bhyve as a custom switch.
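
A sketch of that approach (the bridge name and rc.conf lines are assumptions for illustration; vm-bhyve's "manual" switch type attaches VMs to an existing bridge):

```
# /etc/rc.conf -- create the bridge at boot so it already exists
# when the jails start
cloned_interfaces="bridge0"
ifconfig_bridge0="addm igb1 up"
```

Then import it into vm-bhyve once:

```
$ vm switch create -t manual -b bridge0 public
```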


----------



## D-FENS (Apr 24, 2019)

What exactly is the error when you ping the host? Is it "permission denied", or is it a timeout?

If I understand correctly, you have two ethernet cards - one for your host and one for sharing among the VMs. With that in mind, your VMs should see the host just like any other host in your LAN.

To rule out DNS problems, always use netstat, ping, route, etc. with the *-n* switch and plain IP addresses instead of hostnames.

1. Check if igb0 and igb1 (inside the VM) have the same IP network mask. They should be in the same subnet.

2. Ensure that no IP addresses are colliding. The host and the VMs should have distinct IP addresses.

3. Make sure that the MAC addresses are different too! Use ifconfig.

4. Your VM and the host should both have a direct routing entry for the LAN (they don't need to go through the router for direct communication).
See the routing entries of one of my VMs:

```
# netstat -rn
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            192.168.2.1        UGS         em0
127.0.0.1          link#2             UH          lo0
192.168.2.0/24     link#1             U           em0
192.168.2.2        link#1             UHS         lo0
```
Here 192.168.2.1 is the Internet gateway and all connections to 192.168.2.0/24 (the LAN) are direct via em0.
Use this command to validate that the route to your host's IP is working:

```
# route show 192.168.2.5
   route to: 192.168.2.5
destination: 192.168.2.0
       mask: 255.255.255.0
        fib: 0
  interface: em0
      flags: <UP,DONE,PINNED>
recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0
```

IMPORTANT: Your host should also have a proper routing table. I often forget to check the way back and scratch my head for a while before realizing the return route is missing.

5. Once the routes work, if you still have no connection, check your firewalls (host and jails). Make all deny rules log: enable *firewall_logging* in rc.conf and put *log* in every deny rule. Restart the firewall and try again; any denied packets will be logged in /var/log/security. If the packets are not denied, you should be able to ping by IP address (use ping -n to exclude DNS problems).

6. If still no luck, use programs like tcpdump on the VM's interface, on the bridge, and on igb0 to see whether the packets are coming through. If the IP/MAC addresses are correct, routing is configured properly, and the firewall is not stopping the packets, they MUST come through.

7. Once the IP connection works fine, debug the DNS settings if necessary. Make sure you have a nameserver in /etc/resolv.conf on both host and VM, and make sure the domain names you are using are known to the DNS server, or add them to /etc/hosts on both host and VM.
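
For step 6, a sketch of what that could look like (interface names from this thread; the neighbour address is made up):

```
# On the bhyve host, in separate terminals:
$ tcpdump -n -i vm-public icmp    # traffic entering/leaving the VM's bridge
$ tcpdump -n -i igb1 icmp         # traffic on the physical interface

# Inside the Debian guest, ping a neighbour by address (no DNS involved):
$ ping -n -c 3 172.16.20.20
```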


----------



## D-FENS (Apr 24, 2019)

SirDice said:


> I think your problem is partly caused by the way you've set things up. Are your jails tied directly to igb1? In that case traffic from the VM is captured by the bridge(4) and passed out on igb1 instead of being passed to your jails due to the way a bridge(4) "hooks" into the interface stack. If you search these forums you'll find similar issues with a combination of VLAN tagged and untagged interfaces being bridged while also having services tied directly to the interfaces.


Good point. I normally use tun or tap interfaces in the VM and bridge them together with the physical network interface or use the host as a gateway.


----------



## SirDice (Apr 24, 2019)

Yeah, there's something very counter-intuitive going on in the way a bridge works. I found some details on the mailing lists that gave me a slightly clearer picture why things don't always work as you'd expect. But I still get bitten by the way a bridge(4) interacts with the (physical) interface it is attached to. And judging by the number of people getting bitten by more or less the same issue I'm not the only one. I don't necessarily think it's a bug, just a misconception that should probably be documented a lot better.


----------



## bogong (Apr 24, 2019)

For now I am trying to split things onto different interfaces and create a custom bridge.


----------



## bogong (Apr 24, 2019)

SirDice said:


> just a misconception that should probably be documented a lot better


I would say MUCH better ... The documentation is very poor ... The only option is to get an old server and a huge amount of time, and experiment until everything works.


----------



## bogong (Apr 24, 2019)

It failed ... I've decided to use the Linux compatibility layer to run what I need.


----------

