# ZFS ARC max depending on available resources



## icecoke (Nov 15, 2017)

Hi,

we are heavily using FreeBSD 10.0 - 11.0 on our Xen-based cloud infrastructure. What we almost always experience is unnecessarily high ARC RAM usage unless we manually change


```
vm.kmem_size
vm.kmem_size_max
vfs.zfs.arc_max
vfs.zfs.vdev.cache.size
```

in /boot/loader.conf (we boot from UFS2 and use ZFS for the main storage).

As our users can remotely restart their systems with different resources, there is even more need to adapt the above settings depending on the resources chosen after the restart.

Of course we can change these entries in /boot/loader.conf before the restart is done, but I wonder if there is a way I just don't know about yet. Maybe something like a way to give percentages, or even an if/then syntax, to do this automatically.

Any input here is really appreciated!


----------



## SirDice (Nov 15, 2017)

It's already dynamic if you don't set it. ARC will use everything it can but if there are more pressing memory requests ARC will reduce its memory usage in favor of the process that requires it. Unused memory is useless memory.


----------



## icecoke (Nov 15, 2017)

SirDice - yep, that is the theory. We have about 1000 FreeBSD instances proving the opposite 

The ZFS ARC is using far too much RAM and is not reducing its memory usage in favour of other processes. Especially mysqld suffers badly from this situation - so much that active RAM is moved into swap as a last resort. In the end, the kernel logs messages about forcibly killed mysqld processes. This - of course - is not nice and can destroy complete datasets, since InnoDB is in frequent use.

This behaviour can be observed on machines with 2 GB of RAM (where it is more to be expected) and on machines with up to 32 GB of (not shared, not ballooned) RAM. Of course, the effect depends on the filesystem load of the instance.

The only way to stop this is to limit the max usage of the ARC; nothing else seems to help. With the limit in place, no swap is used, no zio->i, no high sys% usage, no killed processes - just a smoothly running machine.

So, even if the theory is nice, it's not working as expected - at least not for our instances here.
Is there a way to change the amount of RAM the ARC tries to use? It seems it sets its max value to ~94% of the available RAM. I assume this is hardcoded anyway. Do you know where? If there is no way to use percentages in the configs, tweaking that value a little might be the easiest way. Doing this by script on any resource change is another option, but for now I'd like to collect all the options.
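For the scripted option: on FreeBSD releases where `vfs.zfs.arc_max` is writable at runtime, the cap can be adjusted without a reboot. A minimal sketch, assuming such a release; the 15% figure is only an example, and the fallbacks exist so the arithmetic stays safe if a sysctl is unavailable:

```shell
#!/bin/sh
# Sketch: cap the ARC at a percentage of physical RAM at runtime.
# Assumes vfs.zfs.arc_max is a writable sysctl on this FreeBSD release.
physmem=$(sysctl -n hw.physmem 2>/dev/null || echo 0)
arc_pct=15   # illustrative value, not a recommendation

# Divide first to avoid integer overflow on large RAM sizes.
arc_max=$(( physmem / 100 * arc_pct ))

# Current ARC size, for comparison before/after changing the cap:
sysctl -n kstat.zfs.misc.arcstats.size 2>/dev/null

sysctl vfs.zfs.arc_max="$arc_max" 2>/dev/null
echo "requested arc_max: ${arc_max} bytes"
```

Note that this only covers the running system; the loader.conf entries are still needed so the limit survives a reboot.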


----------



## usdmatt (Nov 15, 2017)

Not the politically correct answer, but I always limit the ARC with the sysctl variables. It may have improved in recent years, but I've had far too many crashes, across many systems, because they eventually starve themselves of RAM when there's heavy application and storage activity.


----------



## tankist02 (Nov 15, 2017)

I had a similar experience - without manually limiting max ARC usage the system will eventually almost run out of memory.


----------



## icecoke (Nov 16, 2017)

Thanks usdmatt and tankist02 for your honest answers (which are the only 'politically correct' answers  )!
In the meantime I wrote an additional script that runs on resource changes, following this scheme:


```
vfs.zfs.vdev.cache.size    # 0.5%
vfs.zfs.vdev.cache.max     # 1%
vfs.zfs.arc_min            # 5%
vfs.zfs.arc_max            # 15%
vm.kmem_size               # 50%
vm.kmem_size_max           # 50%
```

These values come from earlier evaluations; any suggestions and experiences are welcome.
In most cases the machines run typical web applications like apache, sendmail, php, dovecot, proftpd, mysqld (more seldom java) and the like.
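The scheme above can be sketched as a small generator that runs whenever an instance's resources change and whose output is merged into /boot/loader.conf before the reboot. This is only an illustration of the percentage scheme from this thread; the `PHYSMEM` override is a hypothetical knob so the arithmetic can be exercised off-box:

```shell
#!/bin/sh
# Sketch: emit loader.conf tunables as percentages of physical RAM,
# following the scheme above. Percentages are the thread's example
# values, not recommendations.

# pct BYTES TENTHS -> TENTHS/10 percent of BYTES (integer bytes),
# so fractional percentages like 0.5% can be expressed as 5.
pct() {
    echo $(( $1 / 1000 * $2 ))
}

# PHYSMEM is a hypothetical override for testing; normally hw.physmem
# (total RAM in bytes) is used.
physmem=${PHYSMEM:-$(sysctl -n hw.physmem 2>/dev/null || echo 0)}

printf 'vfs.zfs.vdev.cache.size="%s"\n' "$(pct "$physmem" 5)"    # 0.5%
printf 'vfs.zfs.vdev.cache.max="%s"\n'  "$(pct "$physmem" 10)"   # 1%
printf 'vfs.zfs.arc_min="%s"\n'         "$(pct "$physmem" 50)"   # 5%
printf 'vfs.zfs.arc_max="%s"\n'         "$(pct "$physmem" 150)"  # 15%
printf 'vm.kmem_size="%s"\n'            "$(pct "$physmem" 500)"  # 50%
printf 'vm.kmem_size_max="%s"\n'        "$(pct "$physmem" 500)"  # 50%
```

The output can be redirected into e.g. /boot/loader.conf.local (which the loader also reads) so the generated block stays separate from hand-maintained loader.conf entries.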


----------

