# FreeBSD Virtual server on ZFS



## mipam007 (Dec 15, 2010)

Hi, I have a question regarding the disk and memory configuration of a server for virtualization on ZFS. There will be at most 5 virtual machines, for development and testing purposes.

CPU: Intel Xeon W3530
RAM: ECC 3x4G
DISK: 2x WDC WD3000HLFS-75G6U1/04.04V03 (300GB/10000RPM)

1st QUESTION: Do you think there is a better way to divide the disks for virtualization?
Two ZFS pools: FreeBSD, virtual
- FreeBSD: for / and /usr (ad4s1d disk)
- virtual: for virtual machines (ad4s1e + ad6s1d disk)

2nd QUESTION: Do you have a better suggestion for loader.conf given my hardware configuration?
My loader.conf:

```
vm.kmem_size="1024"
vfs.zfs.arc_max="100M"
```

3rd QUESTION: Should I use the amd64 or the ia64 ISO of the FreeBSD installation CD for my Xeon processor?

Thanks a lot for any advice!


----------



## vermaden (Dec 15, 2010)

mipam007 said:

> 1st QUESTION: Do you think there is a better way to divide the disks for virtualization?
> Two ZFS pools: freebsd, virtual
> - freebsd: for / and /usr (ad4s1d disk)
> - virtual: for virtual machines (ad4s1e + ad6s1d disk)



Add two 4-8 GB USB pendrives to that machine, install the _base system_ on a gmirror(8) RAID1 across those two pendrives, then create a ZFS pool using all the remaining raw disks (a mirror in this case).
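A rough sketch of that layout, assuming the pendrives show up as da0/da1 and the SATA disks as ad4/ad6 (the device names and pool name are just examples, they will differ on your box):

```
# Mirror the two pendrives for the base system (gmirror RAID1)
gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1
# ... then install the base system onto /dev/mirror/gm0 ...

# Pool the two raw SATA disks as a ZFS mirror for the VMs
zpool create virtual mirror /dev/ad4 /dev/ad6
zfs create virtual/vm1
```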



> 2nd QUESTION: Do you have a better suggestion for loader.conf given my hardware configuration?
> My loader.conf:
> vm.kmem_size="1024"
> vfs.zfs.arc_max="100M"



I use these settings for a ZFS mirror / ZFS RAIDZ on 2/3 disks respectively:

```
# ZFS tuning
vfs.zfs.prefetch_disable=1
vfs.zfs.arc_max=2048M
vfs.zfs.arc_min=1024M
vfs.zfs.vdev.min_pending=4
vfs.zfs.vdev.max_pending=12
vfs.zfs.cache_flush_disable=1
vfs.zfs.txg.timeout=5
```

... but these are for a 4 GB system; you may increase arc_min and arc_max for better performance, or add an SSD as an L2ARC cache, which greatly improves I/O. Check these for details:
http://www.zfsbuild.com/2010/07/30/testing-the-l2arc/
http://www.zfsbuild.com/2010/06/03/howto-our-zpool-configuration/
http://www.zfsbuild.com/2010/05/19/disk-drive-selection/
*(also check the other articles on that site)*



> 3rd QUESTION: Should I use the amd64 or the ia64 ISO of the FreeBSD installation CD for my Xeon processor?


Use amd64, mate; ia64 is for Itanium.


----------



## phoenix (Dec 15, 2010)

What kind of VMs will you be using?  VirtualBox?  Jails?  Something else?

With only two drives, I'd use the PC-BSD installer to install FreeBSD onto a ZFS pool, using a mirror vdev.

Then create separate ZFS filesystems for each VM.

Vermaden's suggestion (/ and /usr on gmirror(8), the rest on ZFS) works well.  We use that at work, and I use a similar setup at home.  This setup is needed if you have multiple raidz vdevs in the pool, since I don't think FreeBSD can boot from that yet (I'm pretty sure boot support only covers a single vdev in the pool.  Please correct me if that is wrong.)

If you are using the 64-bit version of FreeBSD 8.1, try it without any ZFS settings in loader.conf.  With more than 4 GB of RAM, the system should be able to tune itself automatically.  Only if you run into issues with the auto-tuning should you try to tune it manually.  Unfortunately, tuning that works for SystemA won't necessarily work on SystemB.
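One way to see what the kernel's auto-tuning settled on before reaching for loader.conf (sysctl names as used on FreeBSD 8.x):

```
# ARC limits chosen by the kernel at boot
sysctl vfs.zfs.arc_max vfs.zfs.arc_min
# Current live ARC usage
sysctl kstat.zfs.misc.arcstats.size
```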


----------



## danbi (Dec 17, 2010)

FreeBSD can boot fine from multiple vdevs; I recently played with a few new (and upgraded) servers. There is still a remnant from the Solaris code that imposes this restriction: you cannot add more vdevs to a zpool that has the bootfs property set. The workaround is either to create the entire pool before setting that property, or to set it to '' before adding new vdevs and restore it afterwards. That worked fine for me.
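The workaround described above, sketched with a hypothetical pool name and partition names:

```
# Clear bootfs, grow the pool, then restore bootfs
zpool set bootfs='' tank
zpool add tank mirror /dev/ada2p3 /dev/ada3p3
zpool set bootfs=tank/root tank
```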

I advocated the separate USB flash (or other flash) approach for quite a while. It has a few drawbacks and one great benefit: no part of the OS resides on the ZFS pool, so it is a better solution for data-only pools.

However, having only two disks makes this somewhat impractical. With two disks you can make a ZFS-only system that uses GPT, with partitions for the boot code, swap and ZFS. Beyond that, it depends: I recently built a two-mirror-vdev ZFS pool where all four disks had an identical layout (boot code, swap, ZFS), and ended up with a 4-way gmirror. In the BIOS I would list all four disks as boot disks, so the system stays bootable whichever disk fails.
On another system I used this scheme for the first pair of drives, then added mirror vdevs of the entire remaining drives to the zpool (because that system has too many drives anyway). If both my boot drives fail I'd be screwed, but the same applies if both USB sticks fail. You really need to decide on the layout with possible zpool growth in mind.
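A per-disk GPT layout along those lines might look like this (partition sizes are only examples, and the device name is assumed):

```
# Partition one disk: boot code, swap, ZFS
gpart create -s gpt ad4
gpart add -t freebsd-boot -s 128k ad4
gpart add -t freebsd-swap -s 4G ad4
gpart add -t freebsd-zfs ad4
# Install the ZFS-aware boot code
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
```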

Then, everything FreeBSD can live on the zpool. Create a separate filesystem for the root; do not use the zpool's root dataset itself!
Creating jails then becomes a breeze: you can just snapshot and clone the 'root' (that is, everything FreeBSD) and have the new jail's filesystem set up in an instant, without wasting disk space. Then create new filesystems for the jail's /usr/local, /var, etc.; this lets you replace the 'base' FreeBSD from another snapshot, again without wasting any disk space.
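The snapshot/clone flow for a new jail, assuming a hypothetical tank/root filesystem holding the base system:

```
# Snapshot the base system and clone it for a new jail (no space copied)
zfs snapshot tank/root@base
zfs clone tank/root@base tank/jails/www
# Separate writable filesystems for the jail's own data
zfs create tank/jails/www-local   # to mount at the jail's /usr/local
zfs create tank/jails/www-var     # to mount at the jail's /var
```

Because the clone shares blocks with the snapshot, a new jail consumes almost no space until it diverges from the base.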

FreeBSD jails together with ZFS are an extremely efficient virtualization solution.


----------



## chrcol (Dec 17, 2010)

vermaden said:

> Add two 4-8 GB USB pendrives to that machine, install the _base system_ on a gmirror(8) RAID1 across those two pendrives, then create a ZFS pool using all the remaining raw disks (a mirror in this case).
> 
> 
> 
> ...



Hi,

Do you know the exact effect of

```
vfs.zfs.cache_flush_disable
```

Thanks.

I assume it stops the regular interval cache flushes? But there would still be flushes of some sort.


----------



## vermaden (Dec 18, 2010)

chrcol said:

> Hi,
> 
> Do you know the exact effect of
> 
> ...



All I know is that:

```
% sysctl -d vfs.zfs.cache_flush_disable 
vfs.zfs.cache_flush_disable: Disable cache flush
```


----------



## Savagedlight (Dec 27, 2010)

IIRC, it tells ZFS not to make the disks flush their caches to solid media at preset intervals.
It should improve performance, but a power failure may have severe consequences, since data that ZFS thinks is written to disk may in reality not be.


----------

