# One ZIL for multiple pools?



## AlexSanchezSTHLM (Oct 22, 2013)

This post is about sharing a ZIL between multiple pools. I have two pools that will be exported through NFS.


The first pool:
A small-capacity but fast SSD-based mirror used for storing VM disks (ESXi).

The second pool:
A large-capacity but slow HDD-based RAIDZ array used for all kinds of data and archiving.

To increase write performance for my ESXi VMs (through NFS) I'm going to add a ZIL to the first pool, using an Intel S3700 SSD.

My question is: can I add the same ZIL drive to the second pool as well? Or do I have to partition the drive into two and add each partition as ZIL to both pools? What's the disadvantage of doing this?

Thanks.


----------



## SirDice (Oct 22, 2013)

AlexSanchezSTHLM said:

> To increase write performance for my ESXi VMs (through NFS) I'm going to add a ZIL to the first pool, using an Intel S3700 SSD.


I seriously doubt this is going to increase write performance, as the pool already consists of fast SSD disks. In your case I don't think a ZIL is recommended in combination with NFS.

https://wiki.freebsd.org/ZFSTuningGuide#NFS_tuning

Keep in mind that you need to mirror your ZIL too. If the ZIL disk breaks, you will lose data.
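As a sketch of what a mirrored log vdev looks like (the pool name `tank` and the `da*` device names are made-up examples, not from this thread):

```shell
# Add two SSDs as a mirrored separate log (ZIL) device.
# "tank" and the da* device names are hypothetical.
zpool add tank log mirror /dev/da1 /dev/da2

# Verify the log vdev now appears in the pool layout.
zpool status tank
```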


----------



## usdmatt (Oct 22, 2013)

You would need to partition the disk and use a separate partition for each ZIL. You can't use the same device as ZIL in two places. As mentioned, this may not make as big an improvement as hoped. NFS on FreeBSD ZFS really does suck at the moment, especially with ESXi.
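A sketch of that partitioning approach, assuming GPT partitioning via gpart(8); the pool names, device name, and partition sizes are all hypothetical:

```shell
# Split the SSD into two partitions (ada2 and the 8G sizes are
# arbitrary examples for illustration).
gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 8G -l zil0 ada2
gpart add -t freebsd-zfs -s 8G -l zil1 ada2

# Attach one labeled partition as a log device to each pool.
zpool add fastpool log gpt/zil0
zpool add slowpool log gpt/zil1
```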

The loader.conf variable is a bit out of date, and pools can survive ZIL loss these days. If your system is running and the ZIL breaks, you don't actually lose anything at all (the ZIL device is just a backup of what's waiting in RAM to be written). If your ZIL breaks and the machine crashes at the same time, you'll lose any pending writes but *should* be able to roll back.
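If a pool with a dead log device refuses to import after a crash, recent ZFS versions can import it anyway with the `-m` flag (pool name is a hypothetical example):

```shell
# Import a pool whose separate log device is missing or broken,
# discarding transactions that existed only on the lost log.
zpool import -m tank
```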

Your best bets to increase performance for ESXi are as follows:

1. Use iSCSI. If you're like me, this is a last resort, as it's hugely beneficial to me to be able to copy around, back up, restore and duplicate vmdk files directly on storage. It does perform better though, and there's a proper native iSCSI target coming in FreeBSD 10.0.
2. Disable the ZIL. Not using the old loader variable, but by just setting sync=disabled on the dataset containing your VMs. This is only really recommended if your VMs aren't highly important and you have backups.
3. Really get into the code and start messing with the buffer sizes that are mentioned on the mailing list every now and then (this appears to make big improvements, but I've no idea what it may break).
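A sketch of the sync=disabled option above, with a hypothetical dataset name:

```shell
# Disable synchronous write semantics on the VM dataset only.
# WARNING: a crash can lose the last few seconds of writes.
zfs set sync=disabled tank/vms

# Check the current value of the property.
zfs get sync tank/vms
```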


----------



## AlexSanchezSTHLM (Oct 22, 2013)

Thanks guys for answering my question and giving me that other important info.



			
usdmatt said:

> NFS on FreeBSD ZFS really does suck at the moment, especially with ESXi.



What would you suggest instead? I know there are alternatives for a "Storage VM" (OmniOs/OpenIndiana+napp-it, etc) but I love FreeBSD and always try to use it whenever/wherever I can.


----------



## Sebulon (Oct 22, 2013)

@AlexSanchezSTHLM

Seeing as you mentioned the (DC) S3700, you are able to have:
/boot/loader.conf

```
vfs.zfs.cache_flush_disable="1"
```

It turns off ZFS's constant cache flushing, which usually drowns any disk in IO. With a regular drive this setting is dangerous, since writes sitting in the drive's volatile cache can be lost on power failure. But with a disk that has built-in power-loss protection ensuring every write gets through, it lets you have just as good throughput over any transport, be it NFS, iSCSI, whatever.

@usdmatt

From what I've seen, at least with istgt, you get higher throughput because it actually doesn't "listen" to sync requests. So I always create ZVOLs with "sync=always".
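A sketch of that sync=always approach on an iSCSI-backing zvol; the zvol name and size are hypothetical examples:

```shell
# Create a zvol to export over iSCSI, and force every write to be
# treated as synchronous, since the target may ignore sync requests.
zfs create -V 20G tank/luns/vm0
zfs set sync=always tank/luns/vm0
```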

/Sebulon


----------



## usdmatt (Oct 23, 2013)

> What would you suggest instead? I know there are alternatives for a "Storage VM" (OmniOs/OpenIndiana+napp-it, etc) but I love FreeBSD and always try to use it whenever/wherever I can.



I would expect the various systems based on Solaris to perform better (IllumOS, SmartOS, etc).

http://lists.freebsd.org/pipermail/freebsd-fs/2013-June/017519.html
http://lists.freebsd.org/pipermail/freebsd-fs/2013-June/017520.html

I'm with you though. I'd prefer to try my best to get as much performance as I can on FreeBSD and put up with what I get, rather than jump into an OS that I have about 1% of the knowledge about.


----------



## AlexSanchezSTHLM (Oct 24, 2013)

Since this storage VM is only used for exporting ZFS-backed NFS-shares it's ok if it's not running on FreeBSD. So I'll check out Solaris 11 together with Napp-it.


----------



## Sebulon (Oct 24, 2013)

AlexSanchezSTHLM said:

> Since this storage VM is only used for exporting ZFS-backed NFS-shares it's ok if it's not running on FreeBSD. So I'll check out Solaris 11 together with Napp-it.



Too bad. You'll be missing out on the LZ4 compression in FreeBSD 9.2, which has really turned our space savings upside down since we started using it a couple of weeks ago. I can also say that we are using the S3700 as a mirrored ZIL for VMware and oVirt virtualization over NFS and are having no performance issues whatsoever.
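For reference, enabling LZ4 is a one-liner (the dataset name here is a hypothetical example), and the achieved ratio can be checked per dataset:

```shell
# Enable LZ4 compression on a dataset (affects new writes only).
zfs set compression=lz4 tank/archive

# See how well the data compresses over time.
zfs get compressratio tank/archive
```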

/Sebulon


----------



## Sebulon (Oct 25, 2013)

@usdmatt,

How strange, I got an email from the forums about an update from you on this thread, but I can't see it here...

Well anyway, to your question of whether I've managed to get better throughput than in my strictly write-focused benchmarks: the answer is both yes and no.

I created a CentOS VM in oVirt with one vCPU and 2 GB of RAM. In it I've run bonnie++ benchmarks against an iSCSI LUN, an NFS export and a virtual VirtIO disk drive (abbreviated VDD below), and here are the results:

Full result:
http://pastebin.com/LTDppndC

Summary:

```
        WRITE  RE-WRITE  READ  IOPS
iSCSI   77     37        97    6653
NFS     74     21        97    6349
VDD     77     37        122   5165
```

So judge for yourselves: is that acceptable for you? If so, FreeBSD can be your best friend. If not, you'll have to look for alternatives.

/Sebulon


----------

