# How to improve ZFS speed when the pool usage is 80% and above



## belon_cfy (Aug 29, 2013)

Hi,

I think the slowness is due to ZFS switching its block allocation method from "first fit" to "best fit" when the pool hits 80% usage.

Since HDD capacities have grown significantly, how can we raise that limit, say from 80% to 95%, so that the server maintains the same performance until 95%?


----------



## jem (Aug 29, 2013)

The ZFS Best Practices Guide recommends keeping pool space usage under 80% to avoid performance degradation. If there were a way around that issue, I'm thinking they wouldn't have needed to make the recommendation.


----------



## SirDice (Aug 29, 2013)

It's not limited to ZFS; UFS and a few other filesystems suffer from the same effects when reaching 80% capacity.

Although I think I get where the OP is going: 80% of 20 MB versus 80% of 3 TB. The latter disk has far more absolute capacity left over than the entire first disk. But even as disk sizes have grown over the years, the rule of thumb has stayed at 80%, regardless of the actual disk size.


----------



## bthomson (Aug 30, 2013)

I just found this discussion, which seems to indicate the auto-best-fit threshold is actually 4%, not 20%.

I looked, and on my 9.1-RELEASE system metaslab.c is at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c, and metaslab_df_free_pct is indeed 4.

So unless I'm looking at the wrong variable, it seems the change has already been made.


----------



## throAU (Aug 30, 2013)

SirDice said:
> It's not limited to ZFS; UFS and a few other filesystems suffer from the same effects when reaching 80% capacity.
> 
> Although I think I get where the OP is going: 80% of 20 MB versus 80% of 3 TB. The latter disk has far more absolute capacity left over than the entire first disk. But even as disk sizes have grown over the years, the rule of thumb has stayed at 80%, regardless of the actual disk size.



True, but if you have larger disks, the chances are you are (as time moves on) dealing with larger files and larger stripe sizes.  So the percentage rule is probably still a reasonable rule of thumb.

And yes, the general rule of thumb (keep free space) applies to NetApp's WAFL, too. I suspect any copy-on-write filesystem will have similar performance issues on a mostly full disk.


----------

