# ZFS Performance and Tuning



## Ungaro (Jun 17, 2010)

Hi,

I've been using ZFS on my home server for two days, and I've spent a lot of time on it.
Here is my config:
- AMD Sempron 140
- MB MSI K9N6PGM2-V
- 3GB DDR2 PC2-6400
- 2 x 1TB Seagate SATA2
- 1 x 80GB Maxtor IDE ATA-133

I installed FreeBSD 8.0-p3 (64-bit) on the 80GB disk, and I set up the two SATA disks 
as a ZFS mirror (RAID1) with the following command:

```
zpool create share mirror ad4 ad6
```
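The general form is `zpool create <poolname> mirror <devices...>` (pool name first, then the vdev type). The resulting pool can be checked with:

```
zpool status share    # both disks should show ONLINE under the mirror vdev
zpool list share      # a two-way mirror reports one disk's worth of capacity
```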

Then I benchmarked the server by copying some big files, and then a lot of small files. The performance was really bad in both cases!

I read a lot of threads about this and tried several configurations by modifying /boot/loader.conf. The "best" performance I could get was with these parameters:


```
vfs.zfs.prefetch_disable=1
vm.kmem_size="1536M"
vfs.zfs.arc_min="1024M"
vfs.zfs.arc_max="1536M"
vfs.zfs.vdev.min_pending=2
vfs.zfs.vdev.max_pending=8
vfs.zfs.txg.timeout=5
```
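Whether these loader.conf values actually took effect can be checked at runtime; on FreeBSD 8-era ZFS the same names (and the current ARC size) are exposed as sysctls:

```
sysctl vfs.zfs.arc_max vfs.zfs.arc_min    # effective ARC bounds, in bytes
sysctl vfs.zfs.prefetch_disable           # 1 = prefetch off
sysctl kstat.zfs.misc.arcstats.size       # current ARC size
```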

When I copy a big file, a movie for instance, I get about 27MB/s,
and 18MB/s for a lot of small files.

Maybe my parameters are not so good, but I can't find anything that would help me configure them better. Or maybe ZFS is just not for me, and I should switch back to UFS2, or to Linux with ext4, which was working well.

Does anyone have any ideas?


----------



## olav (Jun 17, 2010)

How do you benchmark?
Do you have compression enabled?


----------



## Ungaro (Jun 17, 2010)

Compression is disabled, and to benchmark I copy some files from my computer over my gigabit network, using NFS.


----------



## olav (Jun 17, 2010)

Try benchmarking with dd first. Then you know where to start looking.

It could be bad network cabling, wrong NFS settings, or bad SATA cables. And you're not using a PCI SATA controller, right?
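For example, something like this (the target path is just a placeholder; in practice it would be a file on the pool's mountpoint, e.g. under /share):

```shell
# Rough sequential-throughput check with dd.  /tmp is only a stand-in here;
# point TARGET at the filesystem under test.
# Note: GNU dd spells the block size "1M"; FreeBSD's dd uses "1m".
TARGET=/tmp/zfs_bench.dat

# Write test: 64 MiB of zeroes in 1 MiB blocks; dd reports the rate at the end.
dd if=/dev/zero of="$TARGET" bs=1M count=64

# Read test: read the file straight back.  Beware that a freshly written
# file may still be cached, so read numbers can come out optimistic.
dd if="$TARGET" of=/dev/null bs=1M
```

Comparing these local numbers against the NFS numbers should show whether the bottleneck is the filesystem or the network path.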


----------



## Ungaro (Jun 17, 2010)

Yes, I'm gonna try with dd.

It can't be a network cabling problem or NFS settings, because the same configuration was working well under Debian.
And right, I'm not using the motherboard's RAID controller.


----------



## danbi (Jun 18, 2010)

Try the most generic tuning first; for example, comment out everything else ZFS-related. Add


```
vm.kmem_size="5G"
```

to /boot/loader.conf.
You're best off using the motherboard's SATA ports, with the AHCI driver if supported. Add 


```
ahci_load="YES"
```

to /boot/loader.conf. The SATA ports on the motherboard are likely to be the fastest you will ever get (unless they are poorly supported).

You may try comparing UFS vs. ZFS on the same server by NFS-exporting a filesystem from your third disk.

It is expected that writing to ZFS over NFS will be slower. This is because of the ZIL and the synchronous writes NFS performs. You may get much better performance with a separate ZIL device (such as flash memory of some sort). To test this, you may try 

`# sysctl vfs.zfs.zil_disable=1`

Just don't forget to revert it back!
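A sketch of that experiment (FreeBSD 8-era sysctl; disabling the ZIL risks losing the most recent writes on a crash, so this is for measurement only):

```
sysctl vfs.zfs.zil_disable=1    # turn the ZIL off for the test
# ... rerun the NFS copy and note the throughput ...
sysctl vfs.zfs.zil_disable=0    # revert immediately afterwards
```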

I would not compare ZFS with ext4 on any account; better safe than sorry.
You may also try copying the same files locally on the server, to separate the influence of NFS and the remote machine.


----------



## wonslung (Jun 18, 2010)

NFS is going to perform slower... that's a given.

You should check the filesystem performance locally first; chances are you will find the problem isn't due to ZFS at all but to your network protocol.

You might find Samba performs better... it did for me when I was using FreeBSD as my home server (I've switched my ZFS servers to OpenSolaris recently).

Also, you should look into adding as much RAM as possible. For a ZFS machine, RAM is king, but I'm willing to bet the problem is just NFS and not ZFS.


----------



## Ungaro (Jun 18, 2010)

Ok, I made some changes to my /boot/loader.conf settings:


```
vfs.zfs.prefetch_disable=1
vm.kmem_size="3096M"
ahci_load="YES"
```

The performance seems to be better when writing (60MB/s), but not when reading (35MB/s), which is pretty curious!


----------



## fgordon (Jun 22, 2010)

Maybe because the system can *always* cache writes, while caching only helps reads if you've read the data at least once before...

So with very large amounts of data (many GBytes or even TBytes) that haven't been read before, writing can look faster than reading.


----------



## Ungaro (Jun 23, 2010)

Is there a way to disable caching? I don't need it, because my server is a home storage server which is used only occasionally, so caching seems useless to me.


----------



## t1066 (Jun 23, 2010)

From the man page,

`# zfs set primarycache=var <filesystem>`

where _var_ can be `none`, `metadata` or `all`.
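Applied to the pool from this thread (name `share` taken from the first post), that might look like:

```
zfs set primarycache=metadata share   # cache only metadata, not file data
zfs get primarycache share            # verify the setting
zfs inherit primarycache share        # revert to the default (all)
```

There is also a matching `secondarycache` property governing L2ARC devices.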


----------



## Matty (Jun 23, 2010)

Ungaro said:

> Is there a way to disable caching? I don't need it, because my server is a home storage server which is used only occasionally, so caching seems useless to me.



I don't think it would hurt to keep using the cache either.


----------



## phoenix (Jun 23, 2010)

Why would you *ever* want to disable caching?  Doing so will send disk performance through the floor (as in, it would be horrible).


----------



## boblog (Jun 24, 2010)

Disabling the prefetcher nukes read performance. I had the same symptom before: faster writes than reads. Enabling the prefetcher fixed that right up.


----------



## Ungaro (Jun 24, 2010)

I can't enable the prefetcher: I've only got 3GB of RAM installed, which is not enough (4GB is recommended), and my mobo is full (no free slot to add another 1GB).


----------



## Ungaro (Jun 25, 2010)

Here is my solution: I moved to Debian! ZFS is certainly a powerful filesystem, but I can't tune it, so it's not for me.
So today I moved to Debian stable, and I put my SATA disks in RAID1 (with the mobo controller). 
I'm gonna run some read/write tests to compare against ZFS with my last parameters. I'll report back later.


----------



## wonslung (Jul 3, 2010)

ZFS really shines on newer hardware... you can think of it like a sliding scale: the newer your hardware is, the better ZFS is going to look compared to other options.

I ultimately moved to Solaris for my home servers because of the newer ZFS features, but when I was using FreeBSD, it worked very well with around 8GB of RAM, a decent multi-core 64-bit CPU and several drives.

I know people using it on machines with 2GB of RAM who have it working well, but at that level of RAM I think UFS is going to perform better. They use it for the other features, not the performance.


----------

