# FreeBSD 12-RELEASE guest in QEMU/KVM



## tommyhp2 (Mar 30, 2019)

Hi everyone,

I tried installing FreeBSD 12-RELEASE as a guest in QEMU/KVM (Ubuntu 18.04.2 host) onto a Virtio HDD, but the kernel's built-in virtio drivers are not detecting either the hard drive or the NIC.  According to dmesg:

```
# dmesg | egrep -i 'scsi|virt|vt|ata'
VT(vga): text 80x25
vtvga0: <VT VGA driver> on motherboard
ahci0: <Intel ICH9 AHCI SATA controller> port 0xd0c0-0xd0df mem 0xfd019000-0xfd019ff irq 16 at device 31.2 on pci0
cd0: <QEMU QEMU DVD-ROM 2.5+> Removable CD-ROM SCSI device
cd0: 150.000MB/s transfers (SATA 1.x, UDMA5, ATAPI 12bytes, PIO 8192bytes)
```

The chipset used for the VM is Q35.  If I change the hard drive to SATA and the NIC to e1000, the installer proceeds OK.  Also, if I use the i440FX chipset with the virtio HDD and NIC instead of Q35, it also works OK.  Would someone please advise on how I can install a FreeBSD guest using the Q35 chipset?
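For anyone who wants to reproduce this from the command line, a minimal sketch of the failing configuration might look like this (an untested sketch; the ISO and disk image file names are placeholders):

```shell
# Untested sketch: boot the FreeBSD 12 installer on a Q35 machine with a
# virtio disk and virtio NIC. File names are placeholders.
qemu-img create -f qcow2 disk.qcow2 20G
qemu-system-x86_64 -machine q35,accel=kvm -m 2048 -smp 2 \
    -cdrom FreeBSD-12.0-RELEASE-amd64-disc1.iso \
    -drive file=disk.qcow2,format=qcow2,if=none,id=hd0 \
    -device virtio-blk-pci,drive=hd0 \
    -nic user,model=virtio-net-pci
```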

Thank you in advance,
Tommy


----------



## tommyhp2 (Mar 31, 2019)

Upon further investigation of another VM, where I used a SATA HDD and an e1000 NIC while still keeping the Virtio controllers, running

```
pciconf -lv
```

yields this relevant snippet:


```
none0@pci0:0:31:3:      class=0x0c0500 card=0x11001af4 chip=0x29308086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82801I (ICH9 Family) SMBus Controller'
    class      = serial bus
    subclass   = SMBus
none1@pci0:1:0:0:       class=0x010000 card=0x11001af4 chip=0x10481af4 rev=0x01 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio SCSI'
    class      = mass storage
    subclass   = SCSI
none2@pci0:2:0:0:       class=0x078000 card=0x11001af4 chip=0x10431af4 rev=0x01 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio console'
    class      = simple comms
none3@pci0:3:0:0:       class=0x00ff00 card=0x11001af4 chip=0x10451af4 rev=0x01 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio memory balloon'
    class      = old
pcib8@pci0:7:0:0:       class=0x060400 card=0x00000000 chip=0x00011b36 rev=0x00 hdr=0x01
    vendor     = 'Red Hat, Inc.'
    device     = 'QEMU PCI-PCI bridge'
    class      = bridge
    subclass   = PCI-PCI
```

This VM has a custom kernel (with a lot of unneeded drivers removed) which somehow detects the virtio devices, while the install ISO with the generic default kernel doesn't detect the virtio PCI controllers.  Does that mean the virtio driver is broken for this specific configuration?  Should I file a bug report?


----------



## D-FENS (Mar 31, 2019)

Why don't you try using devel/libvirt and deskutils/virt-manager? They offer a GUI that's quite handy when configuring VMs, and both programs are also available for GNU/Linux + KVM.

Here are my ways of starting a VM with FreeBSD 12.0-RELEASE on a GNU/Linux host:

```
# Feel free to hard-code the values in the scripts below if you want.
export MYUSERNAME=...
export VMNAME=...
```

This is how I create an image file for the VM disk:

```
qemu-img create -f qcow2 $VMNAME.img 32G
```

This is my script for starting directly via qemu:

```
#!/bin/bash

export LANG=en_US.UTF-8
export LOCALE=$LANG

PROCESSORS=4
MEMORY=6144
CD_IMG=~/vm/bsd.iso
HD_IMG=~/vm/$VMNAME/$VMNAME.img

# Share a directory between host and VM
PASSTHROUGH_DISK_OPTS=""
#"-virtfs local,id=passthr1,path=/home/$MYUSERNAME/vm/vm-shared,security_model=passthrough,writeout=immediate,mount_tag=passthr1"

# Directly open a port to the VM
REDIR=""
#"-redir tcp:3000::22"

# The MAC address is freely choosable
NETWORK_OPTS="-net tap,ifname=tap0,script=no,downscript=no -net nic,macaddr=BA:CE:BA:CE:00:02"

OPTIONS=
#-nographic
#-full-screen

qemu-system-x86_64 -m $MEMORY -smp $PROCESSORS -boot c -enable-kvm -vga vmware -cdrom $CD_IMG -hda $HD_IMG $PASSTHROUGH_DISK_OPTS $REDIR $NETWORK_OPTS $OPTIONS
```

And this is how a VM configured via libvirt and virt-manager ends up being started:

```
/usr/bin/qemu-system-x86_64 \
    -name guest=$VMNAME,debug-threads=on \
    -S \
    -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-$VMNAME/master-key.aes \
    -machine pc-i440fx-3.1,accel=kvm,usb=off,vmport=off,dump-guest-core=off \
    -cpu SandyBridge-IBRS,vme=on,ss=on,vmx=on,pcid=on,hypervisor=on,arat=on,tsc_adjust=on,umip=on,ssbd=on,xsaveopt=on \
    -m 6144 \
    -realtime mlock=off \
    -smp 4,sockets=4,cores=1,threads=1 \
    -uuid cd5613b1-9272-4804-84c6-2468173ea923 \
    -no-user-config \
    -nodefaults \
    -chardev socket,id=charmonitor,fd=24,server,nowait \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=utc,driftfix=slew \
    -global kvm-pit.lost_tick_policy=delay \
    -no-hpet \
    -no-shutdown \
    -global PIIX4_PM.disable_s3=1 \
    -global PIIX4_PM.disable_s4=1 \
    -boot strict=on \
    -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 \
    -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 \
    -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 \
    -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 \
    -drive file=/home/$MYUSERNAME/vm/$VMNAME/$VMNAME.img,format=qcow2,if=none,id=drive-virtio-disk0 \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
    -netdev tap,fd=27,id=hostnet0 \
    -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:74:c2:0a,bus=pci.0,addr=0x3 \
    -chardev pty,id=charserial0 \
    -device isa-serial,chardev=charserial0,id=serial0 \
    -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on \
    -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
    -device intel-hda,id=sound0,bus=pci.0,addr=0x4 \
    -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
    -chardev spicevmc,id=charredir0,name=usbredir \
    -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=1 \
    -chardev spicevmc,id=charredir1,name=usbredir \
    -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=2 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
```


----------



## D-FENS (Mar 31, 2019)

tommyhp2 said:


> This VM has a custom kernel (with a lot of unneeded drivers removed) which somehow detects the virtio devices, while the install ISO with the generic default kernel doesn't detect the virtio PCI controllers.  Does that mean the virtio driver is broken for this specific configuration?  Should I file a bug report?


If the kernel has been heavily customized for specific virtual hardware, then your best shot is to get the kernel configuration, figure out which drivers are built in, and work out which QEMU devices correspond to them. This is probably the next best thing to having the original QEMU configuration.

You could do the following:
- Use a FreeBSD installation medium (a CD, for example) and connect your VM image file too.
- Boot the VM into the FreeBSD installer, but choose "Shell".
- Then mount the partition with the kernel on it, go to /usr/src, and extract the configuration the kernel was built with. My kernel config is located here: ./sys/amd64/conf/
- Go to the FreeBSD kernel source documentation and identify the device drivers that are active.
- Go to the QEMU documentation and figure out which options you need.
- Run and enjoy.

In case you only need to run the VM and don't care about keeping the original kernel, you could (I think) replace it like this:
- Make a backup copy of the VM image (in any case).
- Start from the installation CD, open a shell, and chroot into the VM's installation.
- Then you can build and install a generic kernel, or, even simpler, run `freebsd-update fetch install` on top of it.
- Replacing the bootcode might or might not be necessary; I don't know about that.
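A hedged sketch of those recovery steps, run from the installer's "Shell" option; the partition name (ada0p2) is an assumption for a default UFS layout, so check with `gpart show` first:

```shell
# Run from the FreeBSD installer's shell. ada0p2 is an assumed root partition.
mount /dev/ada0p2 /mnt            # mount the VM's root filesystem
mount -t devfs devfs /mnt/dev     # the chroot needs a working /dev
chroot /mnt /bin/sh
freebsd-update fetch install      # put a stock kernel/userland back on top
```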


----------



## tommyhp2 (Mar 31, 2019)

@*roccobaroccoSC,*

Thank you for the feedback.  I used virt-manager on Ubuntu to configure and manage the VMs, to save a lot of typing and prevent possible typos between VM configurations.  As for the FreeBSD kernel configuration, the default kernel on the install ISO did not detect any Virtio PCI controllers; the only QEMU PCI device it detected was the QEMU PCI bridge.  My custom kernel (revision 345757) simply removes all the HBA, SCSI, NIC, BT, etc. drivers that are unnecessary, in addition to the older-version compatibility options (COMPAT_FREEBSD*).  I've compared revision 345757 against the src that came with the install ISO, and there are no changes to the virtio devices.

Looking at your VM startup command, I see that you're using the legacy i440FX:


```
-machine pc-i440fx-3.1,accel=kvm,usb=off,vmport=off,dump-guest-core=off
```

As I mentioned originally, everything works as expected if I use the i440FX chipset.  If I use the Q35,


```
<os>
    <type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
    <bootmenu enable='yes'/>
  </os>
```

the virtio breaks. 

Here's a screen shot of the VMs' configurations:

(The inline image attachment doesn't seem to show correctly for me; I've attached the screenshot separately.)

The left VM (d-build-fbsd) is my FreeBSD build server.  If I change both the HDDs and NICs to virtio types, FreeBSD won't boot properly:


```
Solaris: NOTICE: Cannot find the pool label for 'd_build_fbsd'
Solaris: NOTICE: Cannot find the pool label for 'd_build_fbsd'
Solaris: NOTICE: Cannot find the pool label for 'd_build_fbsd'
Solaris: NOTICE: Cannot find the pool label for 'd_build_fbsd'
```

and drops into the recovery shell, nor will it detect the virtio NIC.   Ironically, when booted with a SATA HDD and an e1000 NIC, the same kernel detects the virtio PCI controllers, but the virtio driver doesn't seem to attach properly.

The right VM is what I'm using to troubleshoot the FreeBSD 12-RELEASE virtio drivers.  I've attached the pciconf output of both VMs.


----------



## tommyhp2 (Apr 1, 2019)

*roccobaroccoSC* ,

Would you be willing to do a quick test in your environment to see if your VM fails to boot using the Q35 chipset instead of the i440FX?  It would help a great deal in submitting a bug report.  Thank you for your time.


----------



## D-FENS (Apr 2, 2019)

tommyhp2 said:


> *roccobaroccoSC* ,
> 
> Would you be willing to do a quick test in your environment to see if your VM fails to boot using the Q35 chipset instead of the i440FX?  It would help a great deal in submitting a bug report.  Thank you for your time.


Sure. I just used the defaults as long as they worked. The big command is generated by libvirt, as I wrote.

I used my manual script above and added `-machine pc-q35-2.11,accel=kvm,usb=off,vmport=off,dump-guest-core=off`
The VM started and was able to detect the PCI bridge, VGA, Network card, AHCI, ISA bus. It uses a generic kernel. Could it be an issue with your real hardware and virtualization? There may be a difference if the VM is running on one or another CPU.


----------



## tommyhp2 (Apr 2, 2019)

roccobaroccoSC said:


> Sure. I just used the defaults as long as they worked. The big command is generated by libvirt, as I wrote.
> 
> I used my manual script above and added `-machine pc-q35-2.11,accel=kvm,usb=off,vmport=off,dump-guest-core=off`
> The VM started and was able to detect the PCI bridge, VGA, Network card, AHCI, ISA bus. It uses a generic kernel. Could it be an issue with your real hardware and virtualization? There may be a difference if the VM is running on one or another CPU.


Thank you for testing.  Hmm... I wonder if it has to do with the CPU, since Q35 is an Intel chipset (so is the i440FX) and I have an AMD 6300 series CPU.  But further troubleshooting in my environment shows that FreeBSD is not detecting the PCI controllers properly, since the file src/sys/dev/virtio/pci/virtio_pci.h has this:


```
/* VirtIO PCI vendor/device ID. */
#define VIRTIO_PCI_VENDORID     0x1AF4
#define VIRTIO_PCI_DEVICEID_MIN 0x1000
#define VIRTIO_PCI_DEVICEID_MAX 0x103F
```

while the pciconf output for the SCSI controller when booted with the i440FX chipset is:


```
virtio_pci2@pci0:0:7:0: class=0x010000 card=0x00081af4 chip=0x10041af4 rev=0x00 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio SCSI'
    class      = mass storage
    subclass   = SCSI
    cap 11[98] = MSI-X supports 4 messages, enabled
                 Table in map 0x14[0x0], PBA in map 0x14[0x800]
    cap 09[84] = vendor (length 20)
    cap 09[70] = vendor (length 20)
    cap 09[60] = vendor (length 16)
    cap 09[50] = vendor (length 16)
    cap 09[40] = vendor (length 16)
```

and when booted with the Q35 chipset:


```
none1@pci0:1:0:0:       class=0x010000 card=0x11001af4 chip=0x10481af4 rev=0x01 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio SCSI'
    class      = mass storage
    subclass   = SCSI
    cap 11[dc] = MSI-X supports 4 messages
                 Table in map 0x14[0x0], PBA in map 0x14[0x800]
    cap 09[c8] = vendor (length 20)
    cap 09[b4] = vendor (length 20)
    cap 09[a4] = vendor (length 16)
    cap 09[94] = vendor (length 16)
    cap 09[84] = vendor (length 16)
    cap 01[7c] = powerspec 3  supports D0 D3  current D0
    cap 10[40] = PCI-Express 2 endpoint max data 128(128)
                 link x1(x1) speed 2.5(2.5) ASPM disabled(L0s)
```
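As a quick shell sketch (illustrative only), the `chip` value from that output can be split into its device and vendor IDs and compared against the bounds from virtio_pci.h:

```shell
# Split pciconf's "chip" value: high 16 bits = device ID, low 16 bits = vendor ID.
chip=0x10481af4                    # Virtio SCSI under Q35
dev=$(( chip >> 16 ))
ven=$(( chip & 0xFFFF ))
printf 'device=0x%04X vendor=0x%04X\n' "$dev" "$ven"   # prints device=0x1048 vendor=0x1AF4

# Compare against VIRTIO_PCI_DEVICEID_MIN/MAX (0x1000-0x103F).
if [ "$dev" -ge "$(( 0x1000 ))" ] && [ "$dev" -le "$(( 0x103F ))" ]; then
    echo "inside the driver's probe range"
else
    echo "outside the driver's probe range"   # taken for 0x1048
fi
```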

I deduced that 'chip' is the device ID (high 16 bits) followed by the vendor ID (low 16 bits).  The Windows SCSI INF driver definition has:


```
%RHELScsi.DeviceDesc% = rhelscsi_inst, PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
%RHELScsi.DeviceDesc% = rhelscsi_inst, PCI\VEN_1AF4&DEV_1048&SUBSYS_11001AF4&REV_01
```

from virtio driver 0.1.126 (dated Aug 2016).  The Windows driver correctly identifies the PCI controllers and attached devices for my Windows guests.  I've also looked at the Linux driver code:


```
if (pci_dev->device < 0x1040) {
    /* Transitional devices: use the PCI subsystem device id as
     * virtio device id, same as legacy driver always did.
     */
    vp_dev->vdev.id.device = pci_dev->subsystem_device;
} else {
    /* Modern devices: simply use PCI device id, but start from 0x1040. */
    vp_dev->vdev.id.device = pci_dev->device - 0x1040;
}
```

from https://github.com/torvalds/linux/blob/v5.0/drivers/virtio/virtio_pci_modern.c, which also identifies the PCI controllers correctly for my Linux guests.
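That matching rule can be mirrored in a quick shell sketch (illustrative only, not the actual driver) to confirm that both devices resolve to the same virtio device type, 8 (SCSI): the transitional i440FX device via its subsystem ID (0x0008), and the modern Q35 device via its offset from 0x1040.

```shell
# Mirror the Linux rule above: transitional devices (< 0x1040) use the PCI
# subsystem device ID as the virtio type; modern devices use the PCI device
# ID minus 0x1040.
virtio_type() {
    pci_dev=$(( $1 ))
    subsys=$(( $2 ))
    if [ "$pci_dev" -lt "$(( 0x1040 ))" ]; then
        echo "$subsys"
    else
        echo "$(( pci_dev - 0x1040 ))"
    fi
}

virtio_type 0x1004 0x0008   # i440FX Virtio SCSI (transitional): prints 8
virtio_type 0x1048 0x0000   # Q35 Virtio SCSI (modern): prints 8
```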

In summary, if you compare the pciconf outputs: under i440FX the Virtio controllers sit in PCI 'slots', because PCIe didn't yet exist when that chipset was designed in the 1990s, while under Q35 they sit in PCIe 'slots', since PCIe is the standard now.  The PCIe-attached devices get the modern device IDs (0x1048 for SCSI), which are above VIRTIO_PCI_DEVICEID_MAX (0x103F), so the FreeBSD driver's probe never matches them.
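If that diagnosis is right, one possible workaround (untested here; `disable-legacy` and `disable-modern` are standard QEMU virtio-pci device properties) is to force the device into transitional mode even on Q35, so it advertises a device ID inside the 0x1000-0x103F range:

```shell
# Untested sketch: force a transitional (legacy-capable) virtio disk on Q35
# so a legacy-only virtio_pci driver can match it. File name is a placeholder.
qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
    -drive file=disk.qcow2,format=qcow2,if=none,id=hd0 \
    -device virtio-blk-pci,drive=hd0,disable-legacy=off,disable-modern=on
```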


----------



## tommyhp2 (Apr 3, 2019)

roccobaroccoSC said:


> Sure. I just used the defaults as long as they worked. The big command is generated by libvirt, as I wrote.
> 
> I used my manual script above and added `-machine pc-q35-2.11,accel=kvm,usb=off,vmport=off,dump-guest-core=off`
> The VM started and was able to detect the PCI bridge, VGA, Network card, AHCI, ISA bus. It uses a generic kernel. Could it be an issue with your real hardware and virtualization? There may be a difference if the VM is running on one or another CPU.


After further review of your startup script, I think that just changing the machine to pc-q35-2.11 while leaving the rest as is would still let Virtio work, since the devices are connected to the PCI bus and not the PCIe bus, as seen in your startup script:

```
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 \
```

This is my XML config for the Q35 chipset:


```
<controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
...
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
```
All of my controllers (USB, SATA, SCSI, VirtIO) are connected to the PCIe bus and should offer higher performance (if the FreeBSD driver worked correctly).  Since PCIe is faster than PCI on bare metal, I'd expect the same to apply in a virtualized environment.


----------



## D-FENS (Apr 4, 2019)

This exceeds the boundaries of my knowledge about the chipsets. I try to stick to the defaults; regarding hardware, I'm a kind of script kiddie.
My assumption is that the problem is caused by you having AMD hardware and trying to virtualize an Intel CPU. I think KVM stays very close to the bare metal when virtualizing (there is a flag in both AMD and Intel BIOSes to enable this).
On the hardware where I tested it, the CPU is Intel.
You could test your VM on an Intel machine and see if it works there. I don't know how different an AMD processor is from an Intel processor in terms of the programming interface, but they probably differ in some extended instructions and tweaks. However, with the FreeBSD generic kernel you should be able to handle both. But if KVM tries to virtualize an Intel chip on AMD hardware, that could be a problem.
Just speculating here; I don't have enough experience to tell for sure.


----------

