# ZFS not freeing RAM?



## kclark (Dec 20, 2013)

I understand the _basics_ of ZFS and have had good luck with it so far, but I've got 16GB of RAM installed on my machine and it feels like it's not freeing unused blocks.  I booted my system up this morning and after half a day of use it's using 13.6GB of the 15.5GB available and has 106MB in swap.  I do run a jail and some memory-hungry applications such as www/tomcat7 and www/chromium.  Even at idle I'm using 88% of my available RAM.  I understand that ZFS is a memory hog (prefetch?), but `top` and sysutils/conky tell another story.

Stats from conky:


```
HIGHEST CPU          CPU %   MEM %
GST-PLUGIN-SCANN      0.00    0.00
XORG                  0.00    0.81
KDEINIT4              0.00    0.68
```


```
HIGHEST MEM          CPU %   MEM %
CHROME                0.00    0.82
XORG                  0.00    0.81
KDEINIT4              0.00    0.68
```

Stats from top:


```
last pid: 95922;  load averages:  0.17,  0.21,  0.28                      up 1+18:22:46  15:14:43
182 processes: 2 running, 176 sleeping, 4 zombie
CPU: 10.4% user,  0.0% nice,  2.4% system,  0.2% interrupt, 87.1% idle
Mem: 469M Active, 1416M Inact, 13G Wired, 107M Cache, 406M Free
ARC: 12G Total, 3914M MFU, 7303M MRU, 637K Anon, 149M Header, 792M Other
Swap: 512M Total, 105M Used, 406M Free, 20% Inuse, 4K In

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  597 root          7 -21  r31   635M   146M uwait   0  52:39 17.97% Xorg
95863 kclark        2  41    0   357M 53960K select  1   0:00  1.95% kdeinit4
93464 kclark       16  20    0  1133M   109M uwait   2   9:19  0.98% kdeinit4
95889 kclark        1  20    0 16600K  2460K CPU4    4   0:00  0.98% top
95874 kclark        1  29    0 14540K  2212K wait    2   0:00  0.98% sh
24089 kclark        1  23    0   837M 50280K select  3  40:15  0.00% npviewer.bin
93586 kclark        9  20    0 65268K  4316K usem    5  20:50  0.00% conky
64065 kclark       32  20    0   558M   128M uwait   3  14:59  0.00% chrome
93459 kclark        4  20    0   592M 44088K kqread  2   8:58  0.00% kwin
24002 kclark        9  20    0   976M    99M uwait   2   4:45  0.00% chrome
24077 kclark        2  20    0   280M 42452K kqread  4   4:43  0.00% chrome
24103 kclark        1  20    0   837M 50280K futex   3   1:11  0.00% npviewer.bin
93518 kclark        6  39   19   129M 17024K uwait   1   1:04  0.00% virtuoso-t
24105 kclark        1  20    0   837M 50280K futex   4   1:04  0.00% npviewer.bin
24106 kclark        1  20    0   837M 50280K futex   0   1:04  0.00% npviewer.bin
24104 kclark        1  20    0   837M 50280K futex   5   1:03  0.00% npviewer.bin
93529 kclark        5  20    0   518M 51628K select  3   1:01  0.00% kopete
  497 root          1  20    0 14268K  1040K select  0   0:59  0.00% moused
93525 kclark        1  20    0   369M 15988K select  3   0:49  0.00% ktorrent
93436 kclark        6  29    0   590M 22976K select  5   0:45  0.00% kdeinit4
64599 kclark        9  20    0   935M 44656K uwait   0   0:38  0.00% chrome
93521 kclark        2  52    0   531M 21336K kqread  4   0:38  0.00% kdeinit4
93479 kclark       35  20    0   282M  5728K sbwait  1   0:28  0.00% mysqld
  529 haldaemon     2  39    0 57536K  2592K select  3   0:27  0.00% hald
64086 kclark        9  20    0   899M 42084K uwait   0   0:22  0.00% chrome
93462 kclark        8  20    0   518M 36364K uwait   5   0:20  0.00% knotify4
  576 root          1  20    0 20924K  1252K select  1   0:16  0.00% hald-addon-storage
  451 messagebus    1  20    0 18444K  2116K select  1   0:15  0.00% dbus-daemon
  581 root          1  20    0 20924K  1252K select  0   0:14  0.00% hald-addon-storage
 3416 root          1  35   10  9948K  1060K nanslp  5   0:14  0.00% swapexd
  584 root          1  20    0 20924K  1252K select  0   0:14  0.00% hald-addon-storage
  587 root          1  20    0 20924K  1284K select  2   0:14  0.00% hald-addon-storage
 3324 root          1  30   10 12088K   996K select  2   0:13  0.00% powerd
  590 root          1  20    0 20924K  1284K select  1   0:13  0.00% hald-addon-storage
93491 kclark        2  20    0   704M 23904K kqread  1   0:12  0.00% kdeinit4
99947 root          1  41   10 18632K   872K wait    0   0:12  0.00% sh
94954 kclark        9  20    0   905M 75428K uwait   3   0:11  0.00% chrome
95735 kclark        1  24    0   837M 50280K CPU2    2   0:10  0.00% npviewer.bin
93636 root          2  27    0   188M 10708K select  2   0:09  0.00% pc-mounttray
93375 kclark        5  52    0   206M 14492K select  3   0:08  0.00% pc-systemupdatertra
79288 kclark        9  20    0   923M 45172K uwait   3   0:07  0.00% chrome
93490 kclark       13  39   19   469M 18632K select  2   0:07  0.00% nepomukservicestub
93638 kclark        3  52   19   308M 13968K select  3   0:07  0.00% nepomukservicestub
93398 kclark        1  20    0 18444K  2576K select  5   0:07  0.00% dbus-daemon
 3321 root          1  30   10 22264K  1580K select  1   0:06  0.00% ntpd
 3436 root          1  52   20 18636K  1580K wait    2   0:06  0.00% sh
93578 kclark        3  20    0 81140K  2316K select  3   0:06  0.00% ibus-daemon
```

I'm thinking it's a memory leak?  How would I check for that?  Is there something else this could be, or is this just the nature of ZFS?  I read up on tuning ZFS but couldn't find anything that seemed relevant.


----------



## usdmatt (Dec 20, 2013)

Yes, ZFS is a memory hog. It was primarily designed for dedicated storage boxes, and you'll probably find that all your RAM, most of which shows up as 'Wired' in top, is not leaked but being aggressively used for the ARC (the ZFS in-memory cache).

If you install the sysutils/zfs-stats port, it'll give you a nice overview of how much RAM ZFS has given itself for the ARC, and how much of that is currently in use.
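You can also query the ARC counters directly. A quick check might look something like this (these are FreeBSD sysctl OIDs; exact names can vary between versions, so treat this as a sketch):

```
# Current ARC size in bytes, and the configured ceiling:
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# Or a full summary, if sysutils/zfs-stats is installed:
zfs-stats -A
```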

If you run a lot of other memory-hungry software, you may want to try limiting the ARC size by adding the relevant tunables to /boot/loader.conf. Some people will point out that it's all supposed to be automatic and that you shouldn't tune it manually, but personally I find it necessary to limit the ARC size by hand on some systems. Usually that's because other software is fighting it for RAM, as in your case, or because I'm repurposing an old machine with little RAM and an unlimited ARC starves the kernel of memory and literally crashes the machine.
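As a concrete illustration (not from the original post; the 8G figure is just an example you'd size to your own workload), the loader.conf fragment would look like:

```
# /boot/loader.conf
# Cap the ZFS ARC at 8 GB; pick a value that leaves enough RAM
# for your jails, desktop, and other services.
vfs.zfs.arc_max="8G"
```

This is a boot-time tunable, so it takes effect on the next reboot.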

Edit: In fact you're running a recent enough version that top actually lists the cache size. You can see there that the ARC is using 12 GB of RAM.


----------



## kclark (Dec 20, 2013)

Awesome!  Thanks for the info.  I'm going to be upgrading from 16GB of RAM to 32GB tonight; I was just going to add another 8GB, but I've decided to top it out.  I love ZFS, but damn, ZFS + x11/kde4 + jails + java/openjdk7 + java/eclipse-devel = 16GB all used up.  I know the minimum recommended RAM for ZFS is 4GB, but what would you say the minimum should be for a desktop environment?


----------



## usdmatt (Dec 20, 2013)

I'd still consider 4 GB a perfectly reasonable minimum for a desktop environment. For what you're doing I'd consider 16 GB OK, but there's nothing wrong with putting more in.

The problem is that, as we can see from the 12 GB ARC size, ZFS has decided to let itself use (at least) 75% of your RAM for its own cache. If the machine were only doing storage that would be fine; you'd still have a few GB left for the OS.

When you're running a bunch of other stuff, though, the few GB left over really isn't enough, and the various services are left fighting over the remaining memory. That's why I suggest limiting the ARC. Decide how much memory you think your other services may use; if you estimate that at 6 GB, subtract it from the machine's total RAM, leave a bit spare on top, and limit the ARC to what remains (say 8 GB in your case). Then ZFS will only ever use 8 GB and you'll always have at least 8 GB available for everything else.
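The sizing arithmetic above can be sketched in a couple of lines of shell (the figures are just the example numbers from this thread, not a recommendation):

```shell
#!/bin/sh
# Back-of-the-envelope ARC sizing: total RAM, minus the estimate for
# other services, minus a little headroom for the kernel and OS.
total_gb=16      # RAM in the machine
services_gb=6    # estimated use by jails, KDE, Java, etc.
spare_gb=2       # headroom
arc_gb=$((total_gb - services_gb - spare_gb))
echo "vfs.zfs.arc_max=\"${arc_gb}G\""
```

Running it prints the line you'd drop into /boot/loader.conf.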

On a desktop system I doubt you will notice any performance difference by taking a few GB away from its cache.


----------



## kclark (Dec 20, 2013)

First of all, wow, thank you.  I tuned ZFS just now.  My desktop loads faster and is smoother.  It still caches but a reasonable amount.  Second of all, and more importantly, thank you!


----------



## throAU (Jan 13, 2014)

Performance on my little N54L dedicated FreeNAS box with 2 GB of RAM and a pair of mirror vdevs is acceptable for most things (it came with only 2 GB; I do plan to add RAM to it), so you can likely get quite stingy with the ARC.

As above, if it's only for a single user or a handful of users, I doubt spending more than a couple of gigs on the ARC is really worth it.


----------



## fulltlt (Aug 18, 2014)

@kclark - Could you provide the details of what you did for the ZFS tuning?


----------



## kclark (Aug 25, 2014)

I capped the ARC size in /boot/loader.conf, e.g.:

```
vfs.zfs.arc_max="40M"
```

The FreeBSD wiki's ZFSTuningGuide (https://wiki.freebsd.org/ZFSTuningGuide) is an awesome resource.


----------



## frijsdijk (Aug 28, 2014)

I still wonder why ZFS needs tuning in this area. It's accepted that ZFS will use a lot of memory to cache (meta)data, and that's all fine, but I've seen too many FreeBSD boxes fail, crash, or slow down because the amount of wired memory grows and grows until nothing is left. Limiting the ARC size is (almost) always the solution, but it feels like babysitting.


----------



## usdmatt (Aug 28, 2014)

There appears to be a lot of discussion going on about issues with ZFS releasing memory.
See the following bug report: https://bugs.freebsd.org/bugzilla/show_ ... ?id=187594

There are patches in there that apparently fix problems for quite a few people whose systems had ZFS using all their memory and forcing everything else into swap. I don't know exactly why none of these changes have been merged; it mostly seems to come down to developers thinking they're too extreme, not the correct way of doing it, or likely to cause other problems in certain circumstances.

It does appear, though, that there's definitely an issue with ZFS not giving memory back to the system. Some responses in that bug are from people basically saying that ZFS is unusable without the patches.

I'm expecting some sort of change to be made to the ARC memory management based on that bug report, but I've no idea when they'll finally come up with something they are happy to commit.


----------

