# bhyve Windows Server slow IO



## stratacast1 (Jun 22, 2019)

I have Windows Server 2016 installed in a bhyve VM on FreeBSD 12.0 and overall it runs well, except I/O operations are slow. For example, simply extracting a 12KB zip archive can take close to 10 seconds. I did an iostat on its device (/dev/vmm/winserver2016) and here's the output I got when doing the extract:


```
tty            cpu
 tin  tout us ni sy in id
   0    27  0  0 31  0 69
   0    27  0  0 24  0 76
   0    27  0  0 33  0 67
   0    27  0  0 33  0 67
   0    27  0  0 44  0 56
   0    27  0  0 51  0 49
   0    27  0  0 50  0 50
```

The VM has 2 cores and 3GB of RAM on an i5 4570 (the host has 8GB total), everything runs on an SSD, and the VM was installed in a ZFS dataset with vm-bhyve. Any insights would be great. It's not used for production, at least.
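For what it's worth, the iostat output above only shows the tty and cpu columns; to watch per-device throughput on the host while reproducing the slowdown, something like this should work (the device name `ada0` is an assumption, substitute whatever backs the VM's dataset):

```shell
# Extended per-device statistics, refreshed every second.
# Replace ada0 with the SSD backing the VM's zpool.
iostat -x -w 1 ada0

# Alternatively, gstat shows per-GEOM-provider load including %busy:
gstat -f '^ada0'
```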


----------



## zirias@ (Jun 22, 2019)

For me, using virtio-blk instead of ahci-hd for the virtual disk made a huge difference. But there's a gotcha: if you don't run -CURRENT, you have to apply a patch; otherwise bhyve crashes quickly when a Windows guest uses a virtio-blk disk with the Red Hat virtio Windows drivers:


```
--- head/usr.sbin/bhyve/virtio.c    2019/05/18 17:30:03    347959
+++ head/usr.sbin/bhyve/virtio.c    2019/05/18 19:32:38    347960
@@ -3,6 +3,7 @@
  *
  * Copyright (c) 2013  Chris Torek <torek @ torek net>
  * All rights reserved.
+ * Copyright (c) 2019 Joyent, Inc.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -32,6 +33,8 @@
 #include <sys/param.h>
 #include <sys/uio.h>
 
+#include <machine/atomic.h>
+
 #include <stdio.h>
 #include <stdint.h>
 #include <pthread.h>
@@ -422,6 +425,12 @@
     vue = &vuh->vu_ring[uidx++ & mask];
     vue->vu_idx = idx;
     vue->vu_tlen = iolen;
+
+    /*
+     * Ensure the used descriptor is visible before updating the index.
+     * This is necessary on ISAs with memory ordering less strict than x86.
+     */
+    atomic_thread_fence_rel();
     vuh->vu_idx = uidx;
 }
 
@@ -459,6 +468,13 @@
     vs = vq->vq_vs;
     old_idx = vq->vq_save_used;
     vq->vq_save_used = new_idx = vq->vq_used->vu_idx;
+
+    /*
+     * Use full memory barrier between vu_idx store from preceding
+     * vq_relchain() call and the loads from VQ_USED_EVENT_IDX() or
+     * va_flags below.
+     */
+    atomic_thread_fence_seq_cst();
     if (used_all_avail &&
         (vs->vs_negotiated_caps & VIRTIO_F_NOTIFY_ON_EMPTY))
         intr = 1;
--- head/usr.sbin/bhyve/block_if.c    2019/05/02 19:59:37    347032
+++ head/usr.sbin/bhyve/block_if.c    2019/05/02 22:46:37    347033
@@ -65,7 +65,7 @@
 #define BLOCKIF_SIG    0xb109b109
 
 #define BLOCKIF_NUMTHR    8
-#define BLOCKIF_MAXREQ    (64 + BLOCKIF_NUMTHR)
+#define BLOCKIF_MAXREQ    (BLOCKIF_RING_MAX + BLOCKIF_NUMTHR)
 
 enum blockop {
     BOP_READ,
--- head/usr.sbin/bhyve/block_if.h    2019/05/02 19:59:37    347032
+++ head/usr.sbin/bhyve/block_if.h    2019/05/02 22:46:37    347033
@@ -41,7 +41,13 @@
 #include <sys/uio.h>
 #include <sys/unistd.h>
 
-#define BLOCKIF_IOV_MAX        33    /* not practical to be IOV_MAX */
+/*
+ * BLOCKIF_IOV_MAX is the maximum number of scatter/gather entries in
+ * a single request.  BLOCKIF_RING_MAX is the maxmimum number of
+ * pending requests that can be queued.
+ */
+#define    BLOCKIF_IOV_MAX        128    /* not practical to be IOV_MAX */
+#define    BLOCKIF_RING_MAX    128
 
 struct blockif_req {
     int        br_iovcnt;
--- head/usr.sbin/bhyve/pci_virtio_block.c    2019/05/02 19:59:37    347032
+++ head/usr.sbin/bhyve/pci_virtio_block.c    2019/05/02 22:46:37    347033
@@ -3,6 +3,7 @@
  *
  * Copyright (c) 2011 NetApp, Inc.
  * All rights reserved.
+ * Copyright (c) 2019 Joyent, Inc.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -55,7 +56,9 @@
 #include "virtio.h"
 #include "block_if.h"
 
-#define VTBLK_RINGSZ    64
+#define VTBLK_RINGSZ    128
+
+_Static_assert(VTBLK_RINGSZ <= BLOCKIF_RING_MAX, "Each ring entry must be able to queue a request");
 
 #define VTBLK_S_OK    0
 #define VTBLK_S_IOERR    1
@@ -351,7 +354,15 @@
     /* setup virtio block config space */
     sc->vbsc_cfg.vbc_capacity = size / DEV_BSIZE; /* 512-byte units */
     sc->vbsc_cfg.vbc_size_max = 0;    /* not negotiated */
-    sc->vbsc_cfg.vbc_seg_max = BLOCKIF_IOV_MAX;
+
+    /*
+     * If Linux is presented with a seg_max greater than the virtio queue
+     * size, it can stumble into situations where it violates its own
+     * invariants and panics.  For safety, we keep seg_max clamped, paying
+     * heed to the two extra descriptors needed for the header and status
+     * of a request.
+     */
+    sc->vbsc_cfg.vbc_seg_max = MIN(VTBLK_RINGSZ - 2, BLOCKIF_IOV_MAX);
     sc->vbsc_cfg.vbc_geometry.cylinders = 0;    /* no geometry */
     sc->vbsc_cfg.vbc_geometry.heads = 0;
     sc->vbsc_cfg.vbc_geometry.sectors = 0;
```

I'm using this patch with 12.0-RELEASE and it's working fine.
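For anyone wanting to try this after patching: with vm-bhyve the switch is a one-line change in the guest configuration (raw bhyve users would swap `ahci-hd` for `virtio-blk` in the `-s` slot spec); note the guest needs the Red Hat virtio storage driver installed first, or it won't see the disk:

```shell
# vm-bhyve guest config (edit with: vm configure winserver2016)
# change the disk emulation from ahci-hd to virtio-blk:
disk0_type="virtio-blk"
disk0_name="disk0.img"
```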


----------



## stratacast1 (Jun 22, 2019)

Darn, compiling base with a patch, hehe. Not a big deal for me to do (probably worth it for myself), but I was hoping I could tell my friends bhyve was ready for virtualizing Windows

I'll have to give this patch a spin soon, hopefully it'll get ported back to 12.0


----------



## aragats (Jun 22, 2019)

stratacast1 said:


> I was hoping I could tell my friends bhyve was ready for virtualizing Windows


It is. It has worked perfectly here for 3 years: Windows 7 / 10 / 2019 on ZFS. I never noticed any I/O issue.


----------



## stratacast1 (Jun 23, 2019)

aragats said:


> It is. It has worked perfectly here for 3 years: Windows 7 / 10 / 2019 on ZFS. I never noticed any I/O issue.



Wish I could say that. Networking is great, install was great, but a 1MB zip extraction takes a minute. That's not working perfectly


----------



## zirias@ (Jun 23, 2019)

stratacast1 said:


> I'll have to give this patch a spin soon, hopefully it'll get ported back to 12.0


That's unlikely, as it neither fixes a security hole nor a "bug" (bhyve is documented to be incompatible with windows guests using virtio-blk).
But, as the patch is pretty small and doesn't change any other behavior (as far as I can tell), I could imagine it might be included in an upcoming 12.1.


stratacast1 said:


> Wish I could say that. Networking is great, install was great, but a 1MB zip extraction takes a minute. That's not working perfectly


I didn't do any exact measurements, but for me a Windows guest using ahci-hd was usable, but "felt" a bit slow. Switching to virtio-blk did speed things up.


----------



## stratacast1 (Jul 3, 2019)

Zirias said:


> That's unlikely, as it neither fixes a security hole nor a "bug" (bhyve is documented to be incompatible with windows guests using virtio-blk).
> But, as the patch is pretty small and doesn't change any other behavior (as far as I can tell), I could imagine it might be included in an upcoming 12.1.
> 
> I didn't do any exact measurement, but for me, a windows guest using ahci-hd was usable, but "felt" a bit slow. Switching to virtio-blk did speed up things.



Even if it only showed up in 12.1, that would be neat. I can't say my Windows VM "feels" slow, it IS slow: a 1MB .zip extract takes a minute. That isn't "feeling" slow. I wish it did better at this, because quite a few people have turned down the idea of testing FreeBSD simply because bhyve can't virtualize Windows Server well compared to KVM. bhyve is pretty young compared to other hypervisors though, so here's to it catching up for Windows guests!


----------



## free-and-bsd (Nov 14, 2019)

stratacast1 said:


> That isn't "feeling" slow. I wished it did better at this because I have quite a few people who turned down the idea of testing FreeBSD simply because bhyve can't virtualize Windows Server well compared to KVM. bhyve is pretty infant compared to other hypervisors though, so here's to it catching up for Windows guests!


Alas, it is so. I have to use VMware Player: though it's not a hypervisor, it gives me a faster Windows nevertheless. Or maybe it's the virtual networking driver that slows things down, I don't know. When I connect to my Windows machine on my real office network using xfreerdp, as aragats suggests here, the connected Windows is lightning fast in the RDP window. Connected to bhyve, it reminds me of Win95 times, when computers used to hang every now and then. I just can't believe that a virtual NATed network can be that much slower than the real one.


----------



## `Orum (Nov 14, 2019)

free-and-bsd said:


> When I connect to my Windows machine on my real office network using xfreerdp...Windows is lightning fast... Connected to bhyve it reminds me of Win95 times when computers used to hang every now and then.


Are you connecting to the Windows servers using VNC?  Yes, that's pretty slow and aside from the installation or recovery, I'd advise against it.  Connecting to Windows bhyve VMs via RDP works fine for me.


----------



## free-and-bsd (Nov 15, 2019)

`Orum said:


> Are you connecting to the Windows servers using VNC?  Yes, that's pretty slow and aside from the installation or recovery, I'd advise against it.  Connecting to Windows bhyve VMs via RDP works fine for me.


No, man. I'm using RDP. I'm comparing an RDP connection to bhyve vs. RDP to a Win 10 machine on the office network. A fair comparison.


----------



## free-and-bsd (Mar 3, 2020)

OK, my problem was partly solved by adjusting the `bhyve -c` value. It turns out that when you pass `-c` with a value greater than 1, Win 10 understands it as the number of CPUs (sockets) being more than 1... which it doesn't support, does it. So the right thing to use is `bhyve -c sockets=1,cores=2,threads=2`. That gives Windows a pretty standard single CPU with 2 cores and 4 threads, HT enabled. Just for experiment's sake, I made a separate Win 10 Pro installation using virtio-blk, but I can't say whether it outperforms the older one with ahci-hd. Now that I'm using this optimized CPU setting (plus the 6G of RAM I gave it), together with NIC passthrough, it really is quick enough.
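As a sketch, the difference between the two invocations looks like this (slot layout, memory size and VM name are placeholders, not taken from the post above):

```shell
# Plain count: the guest may interpret this as 4 separate sockets,
# which desktop Windows editions refuse to use beyond their limit:
# bhyve -c 4 -m 6G ... win10

# Explicit topology: 1 socket, 2 cores, 2 threads per core
# (4 vCPUs total), which desktop Windows accepts:
bhyve -c sockets=1,cores=2,threads=2 -m 6G ... win10
```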


----------



## `Orum (Mar 3, 2020)

If you do find big gains with virtio-blk over ahci, let us know.  Although none of our Windows servers are very heavy on disk IO, I'm thinking of converting them if it's significantly faster.

Does anyone know if the patch Zirias mentioned made it into 12.1, or does it still need to be patched in manually?  _*Edit:* Looks like it __was added in May__._


----------



## free-and-bsd (Mar 3, 2020)

It's there already. I just compared my sources and it IS there.


----------



## bendany (Mar 16, 2020)

For Windows Server, my recommended config is:

1. apply this commit: https://reviews.freebsd.org/rS358848
2. use nvme as the disk backend, or pass through an NVMe disk
3. pass through a NIC instead of virtio-net
4. apply this commit: https://reviews.freebsd.org/rS349184
5. apply this commit: https://reviews.freebsd.org/rS348779

Windows will be as fast as it can be.


----------



## `Orum (Mar 18, 2020)

bendany said:


> 1. apply this commit https://reviews.freebsd.org/rS358848
> 2. using nvme as disk backend or passthough a nvme disk.
> 3. passthrough a nic instead of virtio-net
> 4. apply this commit https://reviews.freebsd.org/rS349184
> 5. apply this commit https://reviews.freebsd.org/rS348779


Thanks for this.  Unfortunately #2 isn't possible for me right now, and I suspect #3 won't be practical for most people unless they're running very few VMs or have a ton of physical NICs to pass through.

The patches are interesting.  #4 seems to have been fixed quite some time ago and should have been in 12.1, but it slipped through the cracks and never got MFC'd, so unless it's manually patched in we won't see it until 12.2.  I'll have to look at the source to tell whether #5 made it into 12.1, but I'm guessing not?  #1 is very fresh and looks quite promising.

Seeing as work is a ghost town right now, and as a result the guests (and the bhyve server itself) aren't being used, I've got some flexibility to play around with experimental patches.  So I think I might just take these for a bit of a test drive while I still can.


----------



## bendany (Mar 19, 2020)

`Orum said:


> Thanks for this.  Unfortunately #2 isn't possible for me right now, and I suspect #3 won't be practial for most people unless they're running very few VMs or have a ton of physical NICs to passthrough.
> 
> The patches are interesting.  #4 seems to have been fixed quite some time ago and should have been in 12.1, but it slipped through the cracks and never got MFC'd, so unless manually patched in we won't see it until 12.2.  I'll have to look at the source to tell if #5 made it in to 12.1, but I'm guessing not?  #1 is very fresh and looks quite promising.
> 
> Seeing as work is a ghost town right now, and as a result the guests (and the bhyve server itself) aren't being used, I've got some flexibility to play around with experimental patches.  So I think I might just take these for a bit of a test drive while I still can.



For #2: just enable nvme as the virtual disk controller instead of virtio-scsi/ahci. Since the nvme controller has less overhead, it's really faster than the ahci backend, no matter what real disks you have.

#4 is a bug fix; if you use passthrough, you can ignore it.
#5 is good for Intel Core i5/i7, since Xeon CPUs already have good support there.

You can just fetch the commit diff and patch only bhyve, then update only bhyve. There's no need to upgrade the whole OS.

And for FreeBSD/Linux guests, SR-IOV is good to go; the Chelsio T520-BT/CR is pretty stable for creating multiple VFs and passing them through to guests.
No such luck for Windows guests, though. I am writing to Chelsio support and hope they can fix the VF driver for bhyve/Windows guests in the future.
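A sketch of that fetch-and-patch workflow (the svn URL and `-p` level here are assumptions; at the time FreeBSD src was still in Subversion, and `svnlite` ships in the base system):

```shell
# Fetch a single commit as a unified diff from the FreeBSD src repo:
svnlite diff -c 358848 https://svn.freebsd.org/base/head > r358848.diff

# Apply it to the installed source tree (may need adjusting on 12.x):
cd /usr/src && patch -p0 < r358848.diff

# Rebuild only the affected piece, e.g. the vmm kernel module...
cd /usr/src/sys/modules/vmm && make && make install
# ...or the bhyve userland binary:
cd /usr/src/usr.sbin/bhyve && make && make install
```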


----------



## free-and-bsd (Mar 19, 2020)

bendany said:


> and, for FreeBSD/Linux guests, SR-IOV is good to go. chelsio T520-BT/CR is pretty stable to create multiple VFs and passthrough to guests.


I read about this on the bhyve dev mailing list. It also mentions that the Intel I350-T4 is no good.
Anyway, since you're mentioning this and I couldn't find anything clear enough on the web, let me ask you this: is SR-IOV support also required from the CPU and motherboard, and not only from the NIC?
The Intel site implied so by saying of certain CPUs that they have the feature _disabled_. Then again, the next question is whether the motherboard chipset (and BIOS, of course) supports all the CPU's features or not. But most other sites only mention whether or not this or that NIC supports the feature.
So how does one find out whether his hardware supports SR-IOV? For example, I have an Intel Xeon E5-2690 and a motherboard supporting all its features, but there's no mention of SR-IOV there, though I also have an Intel I350-T4, which kind of supports it. At the bhyve-dev list they mentioned it wasn't worth the time to try to implement it in the igb driver responsible for that NIC.
Then again, it has 4 ports, each of which can be passed through as a separate PCIe device... still, I'm interested.


----------



## Phishfry (Mar 19, 2020)

free-and-bsd said:


> So how does one find out that his hardware supports SR-IOV? For example, I have Intel Xeon e5-2690 and motherboard


I can't find an Intel page to quote, but I know all LGA2011 systems support SR-IOV on C6xx chipsets.
When it comes to NICs, it is found in Intel, Mellanox and Chelsio 10G cards.
There was a mailing list discussion ("Testing VF/PF code") about 1G Intel NICs, but I don't know if that was ever implemented.


----------



## free-and-bsd (Mar 20, 2020)

bendany said:


> For windows server, my recommendation config is
> 
> 1. apply this commit https://reviews.freebsd.org/rS358848
> 2. using nvme as disk backend or passthough a nvme disk.
> ...


Do I need to use -CURRENT for these patches to apply? They failed with 12.1.


----------



## Phishfry (Mar 20, 2020)

You don't need the patch from #4. It is only needed for E3 Xeons, not E5 Xeons like you have.
As for the #1 and #5 patches, I don't think they are absolutely needed either.
I wholeheartedly agree with #2 and #3 and use both. I don't pass through NVMe, but I host my VMs on them.
I pass through all NICs and let my upstream OPNsense box hand out DHCP IPs to each NIC interface.
Bridges and all that jazz are not ideal in my opinion.


----------



## zirias@ (Mar 20, 2020)

Phishfry said:


> As for #1 and #5 patches I dont think they are absolutely needed either.


I applied patch #1 and my Windows vm gained a lot of speed. Some things (like login with a samba AD account) take a fraction of the time they took without this patch. Highly recommended!


Phishfry said:


> I wholeheartedly agree with # 2 and #3 and use both.


Sure, if you want to dedicate some hardware to the VM. I have good results with virtio-blk (backed by a ZFS zvol) and virtio-net though.


----------



## zirias@ (Mar 20, 2020)

free-and-bsd said:


> Do I need to use -CURRENT for these patches to apply? They failed with 12.1.


That's because you get them in a braindead format from the linked site, with only one "hunk" consisting of the whole file. They work on 12.1, but I had to apply them manually, using vim ...

Find attached a patch combining #1 and #4 for 12.1 for your convenience


----------



## free-and-bsd (Mar 21, 2020)

Oh, thank you kindly!!!
EDIT: looks like it does work faster with these patches (the others I had already added)! Thank you again; I only had to point it at /usr/src.

The only thing I didn't understand is bendany's point #2, using nvme instead of ahci/virtio-blk, because bhyve refuses to accept that as an argument. For me it's fast enough even without it, but it would be interesting to try.


----------



## `Orum (Mar 22, 2020)

If you're using sysutils/vm-bhyve (which I highly recommend), it's rather straightforward.  Details on how to configure it are at the bottom of the page here.


----------



## free-and-bsd (Mar 22, 2020)

`Orum said:


> If you're using sysutils/vm-bhyve (which I highly recommend), it's rather straightforward.  Details on how to configure it are on the bottom of the page here.


No, I'm not. At this stage of testing it suits my needs better to use a startup shell script to manually start/stop/destroy a given VM.
Now, bhyve says this about the nvme type of emulated device:

```
NVMe devices:

    devpath    Accepted device paths are:
               /dev/blockdev or /path/to/image or
               ram=size_in_MiB.

    maxq       Max number of queues.

    qsz        Max elements in each queue.

    ioslots    Max number of concurrent I/O requests.

    sectsz     Sector size (defaults to blockif sector size).

    ser        Serial number with maximum 20 characters.
```
But I simply used nvme,/path/to/image, which, obviously, wasn't enough or something. So I wonder if sysutils/vm-bhyve uses some defaults there for maxq, qsz, ioslots, sectsz, and ser.


----------



## Phishfry (Mar 22, 2020)

I don't see the advantage of passing through an NVMe. See the thread "NVMe passthru on Bhyve findings" on forums.freebsd.org. From my findings you're better off hosting VMs on them and using ahci-hd for the images, versus `-s 7:0,nvme,/dev/nda2`.


----------



## Lamia (Mar 23, 2020)

vm-bhyve is very simplistic. CBSD is robust, with support for multiple OS platforms, and continues to require more testing from its developers to ensure wide use. chyves is only for FreeBSD. This background is necessary because I spent the last five days trying to get bhyve to run with no issues.

CBSD won't run ISOs from its own repo despite using the 'cbsd bconstruct-tui' wizard. vm-bhyve works with ease but later fails. I got the latest FreePBX running on it, and "vm console ..." showed the expected information, while "cbsd blogin ..." hangs just as it reads its CD, peripherals and boot entries.

The problem with vm-bhyve is that web access is lost after one or two clicks at the initial setup, after a fully automatic installation over the console.

The vm-public bridge, like tap0, keeps changing its ether address at every reboot. What this means is that the [new] MAC address will not be the same as the MAC address earlier entered into VM-BHYVE_DIR/.config/system.conf on the host machine AND etc/sysconfig/network-script/ifcfg-eth0 in the VM itself. When the Ethernet address is the same and the FreePBX web UI is accessible, it is just for a few seconds and clicks; the connection via browser is lost in no time. Usually at this point the VM can still be pinged from outside, e.g. from other jails or the host.
Even after creating a bridge connected to tap0 & vm-public and using its static ether address in the above files, the gateway and other IP addresses cannot be pinged. A passthru of ALL in system.conf did not work either.

Any suggestions would be appreciated.

It is not intended to hijack this thread; it is only coincidental, with almost the same problem.
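One possible avenue for the bridge MAC changing on every reboot (an untested assumption on my part, not something confirmed in this thread): FreeBSD's if_bridge can either inherit the MAC of its first member or be pinned to a fixed address. The bridge name and MAC below are placeholders:

```shell
# Make newly created bridges inherit the MAC address of their first
# member interface instead of generating a random one; persist it by
# adding the line to /etc/sysctl.conf:
sysctl net.link.bridge.inherit_mac=1

# Or pin a fixed MAC on the vm-bhyve switch bridge explicitly
# (58:9c:fc is the FreeBSD Foundation OUI):
ifconfig vm-public ether 58:9c:fc:10:00:01
```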


----------



## Lamia (Mar 24, 2020)

Lamia said:


> vm-bhyve is very simplistic. CBSD is robust, with support for multiple OS platforms, and continues to require more testing from its developers to ensure wide use. chyves is only for FreeBSD. This background is necessary because I spent the last five days trying to get bhyve to run with no issues.
> 
> CBSD won't run ISOs from its own repo despite using the 'cbsd bconstruct-tui' wizard. vm-bhyve works with ease but later fails. I got the latest FreePBX running on it, and "vm console ..." showed the expected information, while "cbsd blogin ..." hangs just as it reads its CD, peripherals and boot entries.
> 
> ...



It's now fixed. Passthru finally worked. The existing documentation is not very clear on where changes are required: I needed to make changes to both loader.conf and system.conf. Then it goes beyond that, to bridging two interfaces in the VM.


In fact, this is just another form of hack that makes me think FreeBSD is not for the faint-hearted.


----------



## abishai (Mar 24, 2020)

bendany said:


> 1. apply this commit https://reviews.freebsd.org/rS358848


Does Xeon X56xx support this?


----------



## Phishfry (Mar 24, 2020)

abishai said:


> Does Xeon X56xx support this?


From what I read, it seems that APIC virtualization (APICv) was first a software feature that was later moved onto the CPU (see the Data Center section on software.intel.com).

So maybe the E5 Xeon v2 family was the first with APICv on the CPU.


----------



## abishai (Mar 27, 2020)

*sigh* 
E5 v2 is still too expensive.


----------



## Phishfry (Mar 27, 2020)

I may be wrong though. Looking at the comments in the review (⚙ D22942 "Untangle TPR shadowing and APIC virtualization" on reviews.freebsd.org), it seems people see a speed-up with an i7-4771 desktop CPU.

So you may need to try it and see if it helps you any.
I don't see any merge to stable, so you will need to run head.


----------



## zirias@ (Mar 27, 2020)

Phishfry said:


> I don't see any merge to stable so you will need to run head.


Or 12.1-RELEASE, see the patch I posted earlier in this thread. It's really a huge difference


----------



## stratacast1 (Apr 1, 2020)

I'm not sure if this helps anyone, but I had a need to run a Windows Server VM again, and my host hardware has changed. In the past, I was testing this on an i5 4670 with 8GB of RAM and on an SSD. Most everything worked great, but it choked on IO. Unzip a 100KB file? It took 30 seconds. Nasty. I did it again today with the same files and it was instantaneous. I tried again with a 700MB 7z archive and it extracted at the speed I expected. Very nice actually; I feel like I can actually recommend bhyve to people now for virtualizing Windows.

New hardware: Ryzen 3600, 16GB RAM, running my VMs off a zpool that is on a Samsung 840 Pro SSD
OS: 12.1-RELEASE


----------



## zader (Apr 1, 2020)

> I feel like I can actually recommend bhyve to people now for virtualizing Windows



Sure wish PCI passthrough worked better for graphics cards... I'd never look back if my 2080 Ti worked better on a Windows VM... perhaps I missed something.

Did you end up passing through the NVMe? Or did the new hardware fix the IO? Or did you just create a pool/dataset for VMs on a new drive?


----------



## stratacast1 (Apr 1, 2020)

zader said:


> sure wish pci passthrough worked better for graphics cards .. Id  never look back if my 2080ti worked better  on a windows vm .. perhaps I missed something..
> 
> did you end up passing the nvme? or did the new hardware fix the io? or you just created a pool/dataset for vm's on a new drive?



The OS isn't running on an NVMe drive; I have it on a standard SATA SSD. Maybe the new hardware fixed IO? I guess since this is AMD, it could behave differently than an Intel box. I'm using vm-bhyve, so I just used what they have:


```
loader="uefi"
graphics="yes"
xhci_mouse="yes"
cpu="3"
memory="6G"

# put up to 8 disks on a single ahci controller.
# without this, adding a disk pushes the following network devices onto higher slot numbers,
# which causes windows to see them as a new interface
ahci_device_limit="8"

# ideally this should be changed to virtio-net and drivers installed in the guest
# e1000 works out-of-the-box
network0_type="e1000"
network0_switch="public"

disk0_type="ahci-hd"
disk0_name="disk0.img"

# windows expects the host to expose localtime by default, not UTC
utctime="no"
uuid=""
network0_mac=""
```


----------



## m1001101 (Apr 4, 2020)

bendany said:


> For windows server, my recommendation config is
> 
> 1. apply this commit https://reviews.freebsd.org/rS358848
> 2. using nvme as disk backend or passthough a nvme disk.
> ...



I have a Xeon E3 1220 and bhyve works very slowly on 12.1.
To test the patches I upgraded to CURRENT, and now bhyve is working great!
In order to stay on 12.1, is it possible to apply these patches and build only bhyve and vmm?
I tested copying the new files into /usr/src/usr.sbin/bhyve and running make, and now I have the executable in /usr/obj/. How can I install it? Manually move the executable and libraries? And what about vmm?

Thanks


----------



## zirias@ (Apr 4, 2020)

m1001101 said:


> In order to maintain 12.1 is possible to apply these patches and build only bhyve and vmm?


Please see here: https://forums.freebsd.org/threads/bhyve-windows-server-slow-io.71199/post-456220 -- it only contains two of those patches, but I found the first one is the one making the huge difference for windows guests.

About only building parts of the system: it definitely works for bhyve (just use make in the appropriate source directory). I didn't try to build just the vmm module but built a complete kernel, so _maybe_ that works as well.
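For reference, the bhyve-only rebuild amounts to something like this (standard source-tree workflow; as said, building the vmm module standalone is untested here):

```shell
# Userland bhyve only:
cd /usr/src/usr.sbin/bhyve
make && make install

# The vmm kernel module can usually be rebuilt the same way:
cd /usr/src/sys/modules/vmm
make && make install
# Reload it afterwards (all guests must be stopped first):
kldunload vmm && kldload vmm
```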


----------



## m1001101 (Apr 5, 2020)

Thanks, will try...


----------



## m1001101 (Apr 20, 2020)

Zirias said:


> Please see here: https://forums.freebsd.org/threads/bhyve-windows-server-slow-io.71199/post-456220 -- it only contains two of those patches, but I found the first one is the one making the huge difference for windows guests.
> 
> About only building parts of the system, it definitely works for bhyve (just use make in the appropriate source directory), I didn't try to just build the vmm module but built a complete kernel, so _maybe_ this works as well.



Work great!

Thanks!


----------



## jardows (Jun 4, 2020)

Zirias said:


> That's because you get them in a braindead format from the linked site, with only one "hunk" consisting of the whole file. They work on 12.1, but I had to apply them manually, using vim ...
> 
> Find attached a patch combining #1 and #4 for 12.1 for your convenience


Pardon my ignorance, but how do I properly apply this patch to the file (I assume from the info that it is /usr/src/sys/amd64/vmm/intel/vmx.c that I am patching), and properly rebuild and install only the bhyve components?  I see many references to applying patches or diff files, but have come up empty on any instructions for how to apply them.


----------



## zirias@ (Jun 4, 2020)

jardows `cd /usr/src; patch -p0 <patchfile`


----------



## bendany (Jun 30, 2020)

free-and-bsd said:


> No, I'm not. At this stage of testing it suits my needs better to use a startup shell script to manually start/stop/destroy a given VM.
> Now bhyve says this about nvme type of emulated device:
> 
> ```
> ...



Yes, the bhyve manpage doesn't describe very well how to use the nvme backend.
Maybe I'll make a patch for the manpage later.
Usually, I use these parameters:

```
-s 4:0,nvme,/dev/zvol/zones/vm/nvme1/disk0,maxq=16,qsz=8,ioslots=1,sectsz=512,ser=ABCDEFG
```

Hope this helps.


----------



## bendany (Jun 30, 2020)

jardows said:


> Pardon my ignorance, but how do I properly apply this patch to the file (I assume from the info it is /usr/src/sys/amd64/vmm/intel/vmx.c that I am patching), and properly rebuild and install only the bhyve components?  I see many references to applying patches or diff files, but have come up empty with any instructions on how to apply them.



If the patch relates to vmm, you can rebuild only vmm.ko; if the patch relates to bhyve, you can rebuild only bhyve. That makes it fast.
Usually I clone the bhyve code from 13-CURRENT and build it on 12.1-STABLE (CURRENT's bhyve has more features and more bugfixes).


----------



## joeafterdinner (Jul 9, 2020)

I hate to be a Debbie Downer, but I too am having issues with bhyve on 12.1-RELEASE-p7. It only stays usable for 10 minutes or so, then freezes. I am going to keep my eyes peeled for a patch or fix. I'd like to know if anyone else is having this sort of issue. I have tried installing the virtio drivers and the nvme disk type... no luck.

I am willing to help out as my skill set allows, just message me!


----------



## free-and-bsd (Jul 27, 2020)

joeafterdinner said:


> I hate to be a debbie downer, but I am too having issues with bhyve on 12.1-RELEASE-p7. It only stays usuable for 10 minutes or so then freezes. I am going to keep my eyes  peeled for a patch or fix. Would like to know if anyone else is having this sort of issue? I have tried installing virtio drivers, nvme disk type....no luck.
> 
> I am willing to help out as my skill set allows, just message me!


Impossible to tell without seeing the command line you use to start bhyve. The hardware you're using also matters; any of these things may be the problem.
Other than that, bhyve is used by many in production environments, so it's certainly not prone to freezing like that.


----------



## joeafterdinner (Jul 27, 2020)

I should have updated sooner. I reinstalled Windows (2004) and started using FreeRDP and TigerVNC... problem solved.

Thanks for the reply.


----------



## j77h (Oct 10, 2020)

free-and-bsd said:


> I have to use VMware Player:



Are you saying you run VMware Workstation Player with FreeBSD as the host OS?

Via Google, the only info I can find is that VMware runs only on Windows, Linux and Mac.

If it's faster than VirtualBox with a Windows-10 guest, and has USB-2 passthru, it could be very useful...


----------



## free-and-bsd (Oct 19, 2020)

j77h said:


> Are you saying you run VMware Workstation Player with FreeBSD as the host OS?
> Via google, the only info I can find is that VMware runs on only Windows Linux and Mac.
> If it's faster than VirtualBox with a Windows-10 guest, and has USB-2 passthru, it could be very useful...


Linux host, not FreeBSD; that's the point. There are no VMware tools for a FreeBSD host, and none are expected in the future, which makes it hardly any better than VirtualBox or others.


----------



## timmus (Oct 23, 2021)

Are the patches described in this post included in 13.0-RELEASE?
I'm in the process of switching our VMs from XCP-ng to FreeBSD bhyve.
In case I need to apply the patches, where can I find documentation on how to apply them?

Also, what is the current best practice: ahci, virtio-blk or nvme disk type?
The host storage is a 4-disk mirrored vdev.

This is my current config, running on an HP ProLiant DL360e Gen8 with an E5-2450L CPU.
I tried several virtio driver versions; versions newer than virtio-win-0.1.187 caused the Windows 10 install to fail:

```
loader="uefi"
graphics="yes"
xhci_mouse="yes"
cpu=8
cpu_sockets=1
cpu_cores=8
cpu_threads=1
memory=8G

ahci_device_limit="8"
network0_type="virtio-net"
network0_switch="public"

disk0_type="virtio-blk"
disk0_opts="sectorsize=512"
disk0_name="root"
disk0_dev="sparse-zvol"

disk1_type="ahci-cd"
disk1_dev="custom"
disk1_name="/zroot/vm/.iso/sage.iso"
utctime="no"

uuid="800d78eb-"
network0_mac="58:9c"
```


----------



## zirias@ (Oct 23, 2021)

timmus said:


> Are the patches described in this post included in 13.0-RELEASE?


Yes.


----------

