# FreeBSD 11.1 ZFS using too much wired memory



## Gelo Riv (Feb 16, 2018)

Is it normal for ZFS to use a large amount of wired memory and take a while before it releases it?


----------



## SirDice (Feb 16, 2018)

ZFS likes memory, and lots of it. If you're struggling with memory issues, you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.
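For illustration, such an entry might look like the following (the 4 GB figure is purely an example, not a recommendation; as noted later in the thread, the ZFS tunables expect a raw byte count, and on some setups the limit only takes hold when placed in /boot/loader.conf instead):

```
# Cap the ZFS ARC at 4 GB (value in bytes; illustrative only)
vfs.zfs.arc_max=4294967296
```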


----------



## Gelo Riv (Feb 16, 2018)

SirDice said:


> ZFS likes memory, and lots of it. If you're struggling with memory issues you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.




SirDice
I see, but is it normal for ZFS to use wired memory instead of the physical memory that I put in? And to take a while before it releases it?



```
CPU:  0.1% user,  0.0% nice,  0.1% system,  0.0% interrupt, 99.7% idle
Mem: 20G Active, 83G Inact, 140M Laundry, 20G Wired, 1572M Buf, 1626M Free
ARC: 15G Total, 76K MFU, 15G MRU, 16K Anon, 27M Header, 397K Other
     15G Compressed, 15G Uncompressed, 1.00:1 Ratio
Swap: 128G Total, 128G Free
```

My wired memory is at 20GB and has been for 5 days; it didn't release it, until the system locked up when it reached 52GB.


----------



## Maxnix (Feb 16, 2018)

Gelo Riv said:


> I see but is it normal for the ZFS to used the wired memory ins


What do you mean? Wired memory is physical memory reserved by the kernel, not virtual memory. Have a look at https://wiki.freebsd.org/Memory.


----------



## pprocacci (Feb 16, 2018)

Gelo,

From an end user's perspective, there isn't any correlation between wired and physical memory.  They are both memory.
"Wired" memory is simply memory that cannot be swapped to disk,
either because a userland process called mlock(2) or because the kernel itself is preventing paging.

As SirDice has already stated, ZFS likes memory, and it is _normal_ for ZFS to allocate pages of memory that it doesn't want swapped to disk (hence "wired").
It's also _normal_, in the case of ZFS, that these pages of memory stay in use for as long as ZFS needs them.
There's no sense in "premature deallocation" of pages that ZFS might need to use in the future.

The `top` snippet you provided seems perfectly fine.


----------



## shkhln (Feb 16, 2018)

Gelo Riv said:


> Is it normal for ZFS to use a large amount of wired memory and take a while before it releases it?



No, of course it's not normal: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594.



pprocacci said:


> As SirDice has already stated, ZFS likes memory, and it is _normal_ for zfs to allocate pages of memory that it doesn't want swapped to disk.  (Hence "wired")
> It's also _normal_, in the case of ZFS that these pages of memory stay in use for as long as ZFS needs.
> There's no sense in "premature deallocation" of pages that ZFS might need to use in the future.



In my (desktop) experience, without setting _vfs.zfs.arc_max_ to some specific limit, ZFS gobbles memory until there is _nothing_ left for running actual applications. I fail to see what's "normal" about that.


----------



## Gelo Riv (Feb 16, 2018)

shkhln said:


> No, of course it's not normal: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594.
> 
> 
> 
> In my (desktop) experience, without setting _vfs.zfs.arc_max_ to some specific limit ZFS gobbles memory until there is _nothing_ left for running actual applications. I fail to see what's "normal" about that.




shkhln, so it would be best to modify vfs.zfs.arc_max for this situation, because when my wired memory reaches 52GB the system locks up or freezes and I have to restart it again.


----------



## rigoletto@ (Feb 16, 2018)

The amount of memory used for ARC by default is (IIRC) *100% minus 1GB*; however, it is also dynamic, which means if something needs memory that is being used for ARC, ZFS gives it up (but this does not work well for some).


----------



## Gelo Riv (Feb 16, 2018)

lebarondemerde said:


> The amount of memory used for ARC by default is (IIRC) *100% minus 1GB*; however it is also dynamic, which means if something needs memory that is being used for ARC, ZFS gives it (but does not work well for some).




Will adjusting vfs.zfs.arc_max help?


----------



## rigoletto@ (Feb 17, 2018)

Gelo Riv said:


> Will adjusting vfs.zfs.arc_max help?



I guess yes. Things are working well for me with the defaults.


----------



## pprocacci (Feb 17, 2018)

What's "best" is different for different people.  My suggestion is to ignore it altogether unless you have a specific problem to solve.  ZFS using as much memory as possible is not a problem unless something else is affected.

The advice provided by shkhln to set vfs.zfs.arc_max is also fine.

https://wiki.freebsd.org/ZFSTuningGuide

If you are truly concerned about ZFS's memory usage that page will give you pointers to clamp down on the ARC cache.


----------



## Gelo Riv (Feb 17, 2018)

pprocacci said:


> What's "best" is different for different people.  My suggestion is to ignore it altogether unless you have a specific problem to solve.  ZFS using as much memory as possible is not a problem unless something else is affected.
> 
> The advice provided by shkhln to set vfs.zfs.arc_max is also fine.
> 
> ...




Ok, thank you. The whole system is really affected: when the wired memory reached 52GB the system froze and I needed to reboot it.


----------



## Oko (Feb 17, 2018)

Gelo Riv said:


> Ok, thank you. The whole system is really affected: when the wired memory reached 52GB the system froze and I needed to reboot it.


You are giving us very little to work with here. Could you please give us the machine's physical specs, the purpose of the hardware, and the size of the ZFS pools? One of my main file servers with multiple ZFS pools totaling 250 TB is rock stable with 128 GB of RAM. I have no less than 50 NFS clients leeching on that thing at any given moment.


----------



## Snurg (Feb 17, 2018)

It is really stupid when the ZFS cache steals all memory, so that user programs are starved of memory and even have to swap to make room for useless caching.
There is no way around it except restricting its memory usage manually.
shkhln is completely right; this is unacceptable behavior, or at least very annoying, when using FreeBSD as a desktop machine.


----------



## Gelo Riv (Feb 17, 2018)

Oko said:


> You are giving us very little to work with here. Could you please give us the machine's physical specs, the purpose of the hardware, and the size of the ZFS pools? One of my main file servers with multiple ZFS pools totaling 250 TB is rock stable with 128 GB of RAM. I have no less than 50 NFS clients leeching on that thing at any given moment.



I have 32TB on ZFS
128GB memory
E5-2630 v4
1x256GB
I'm going to use it just as a data storage server, but I couldn't start with it since the server locks up when it reaches 52GB of wired memory.


----------



## Snurg (Feb 17, 2018)

Then set the ZFS arc max to, say, 40GB to have a safe margin. Note you have to set the byte count, like 40000000000; sysctl is too stupid to understand things like "40G".
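Since the tunable wants a raw byte count, one way to compute it is plain shell arithmetic (the 40 GiB figure here is just the example from above):

```shell
# 40 GiB expressed in bytes, suitable for vfs.zfs.arc_max
echo $((40 * 1024 * 1024 * 1024))
# prints 42949672960
```

The resulting number is what goes into sysctl.conf or loader.conf.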


----------



## SirDice (Feb 18, 2018)

Gelo Riv said:


> I'm going to use it just as a data storage server, but I couldn't start with it since the server locks up when it reaches 52GB of wired memory.


I would be suspicious of hardware issues. 

```
last pid: 96541;  load averages:  0.70,  0.58,  0.44                                                       up 13+22:55:53  14:25:39
46 processes:  1 running, 45 sleeping
CPU:  0.0% user,  0.0% nice,  2.3% system,  0.0% interrupt, 97.7% idle
Mem: 8728M Active, 24G Inact, 58G Wired, 3631M Free
ARC: 32G Total, 2632M MFU, 27G MRU, 18M Anon, 1843M Header, 432M Other
     30G Compressed, 63G Uncompressed, 2.13:1 Ratio
Swap: 8192M Total, 8192M Free
```
This is on a machine with 96GB of memory, no tweaking. Uptime is almost 14 days.

Some more specs:

```
dice@hosaka:~ % zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
stor10k  1.09T  26.0G  1.06T         -     4%     2%  1.00x  ONLINE  -
zroot     145G  48.4G  96.6G         -    70%    33%  1.00x  ONLINE  -
```


```
dice@hosaka:~ % sudo vm list
Password:
NAME            DATASTORE       LOADER      CPU    MEMORY    VNC                  AUTOSTART    STATE
case            default         bhyveload   2      2048M     -                    Yes [2]      Running (1460)
freebsd11-img   default         uefi        1      512M      -                    No           Stopped
jenkins         default         bhyveload   4      16384M    -                    Yes [5]      Running (1898)
kdc             default         uefi        2      2048M     0.0.0.0:5900         Yes [1]      Running (1271)
lady3jane       default         uefi        2      4096M     -                    No           Stopped
sdgame01        default         grub        2      4096M     -                    No           Stopped
tessierashpool  default         bhyveload   4      8192M     -                    Yes [4]      Running (69648)
build11         stor10k         bhyveload   4      8192M     -                    No           Running (46143)
plex            stor10k         bhyveload   4      8192M     -                    Yes [6]      Running (42492)
wintermute      stor10k         bhyveload   4      8192M     -                    Yes [3]      Running (24523)
```


----------



## phoenix (Feb 19, 2018)

Snurg said:


> Then set zfs arc max to, say, 40GB to have a safe margin. Note you have to set the byte count like 40000000000, sysctl is too stupid to understand things like "40G".



Not true. You can use K, M, G, T suffixes with a lot of sysctl/loader settings. Unfortunately, the ZFS-related ones don't accept them. It seems the parsing of suffixes is done on a per-tunable basis, and no one added it to the ZFS ones.


----------



## Gelo Riv (Feb 19, 2018)

SirDice, the server is set up with the OS installed on an SSD, and the 32TB is set up with ZFS. Do you think there's a connection to why the system eats too much wired memory to the point that it locks up?


----------



## bisi (Jul 28, 2018)

SirDice said:


> ... you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.



I tried this with my system, and it had no effect (`zfs-stats -a | grep -i arc`, executed before and after the change, returned approx 31GB). I put the entry

```
vfs.zfs.arc_max="4G"
```

in /boot/loader.conf instead, and then zfs-stats started showing the expected results:

```
vfs.zfs.arc_max                         4294967296
```

Note that this number is just a best-guess starting point for troubleshooting, not a recommendation of any kind.


----------



## grahamperrin@ (Mar 2, 2019)

shkhln said:


> … https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 …



_[zfs] [patch] ZFS ARC behavior problem and fix_

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594#c278 ▶ 229764 – Default settings allow system to wire all ram

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594#c279 ▶ 229670 – Too many vnodes causes ZFS ARC to exceed limit vfs.zfs.arc_max (high ARC "Other" usage)


----------



## Alain De Vos (Apr 3, 2020)

I am currently trying OpenZFS, and it consumes all my memory as wired memory, which is not freed.


----------



## PMc (Apr 3, 2020)

Wired memory isn't normally freed, and ZFS is unaware of the wired memory as such.
What ZFS is aware of is the ARC size. And it will reduce the ARC size under certain conditions, like when there is a pageout event from the pager, or when it grows beyond arc_max (unless in certain circumstances where it cannot reduce the ARC, like when the vnode cache is configured too big; see various posts here).
But wired memory is (nowadays, mostly) managed by the UMA allocator. And that one has its own scheme for deciding when reducing it would be appropriate. I haven't yet looked into how exactly that works; I can only see it happen (when memory gets scarce, there may suddenly be a drop in wired, without other apparent reason).


----------






## rihad (Aug 22, 2021)

What if I decrease vfs.zfs.arc_max (renamed vfs.zfs.arc.max in FreeBSD 13.0) on a running machine? Will the memory already wired by ZFS before that moment be wasted/leaked forever?


----------



## PMc (Aug 22, 2021)

Certainly not. arc_max and arc_min are not hard limits; they are rather recommendations to the evict process. E.g. if you have more inodes open than fit into arc_max, and they cannot be evicted (because they are open), then the ARC will not even shrink down to the new arc_max.
And anyway, the wired memory will not immediately shrink. It will do its own optimisation and occasionally shrink when there is other demand for RAM.


----------



## grahamperrin@ (Aug 23, 2021)

rihad said:


> What if I decrease vfs.zfs.arc_max (also named vfs.zfs.arc.max in FreeBSD 13.0) on a running machine, will the memory already wired by zfs before that moment be wasted/leaked forever?



What's the motivation for your thoughts of a decrease?


----------



## rihad (Aug 23, 2021)

This is why:


```
last pid: 36059;  load averages:  0.54,  0.55,  0.54                                                                                                                                                                 up 16+10:56:31  09:26:45
49 processes:  1 running, 48 sleeping
CPU:  0.0% user,  0.0% nice,  0.6% system,  0.0% interrupt, 99.4% idle
Mem: 3633M Active, 8391M Inact, 7538M Laundry, 240G Wired, 104K Buf, 14G Free
ARC: 191G Total, 144G MFU, 46G MRU, 7148K Anon, 784M Header, 390M Other
     184G Compressed, 208G Uncompressed, 1.13:1 Ratio
Swap: 16G Total, 2302M Used, 14G Free, 14% Inuse
```


It appears that arc_max mistakenly wasn't set in /boot/loader.conf and was at ~380GB (the ARC shown in top's output had reached roughly 220G).
Yesterday I set vfs.zfs.arc_max to 192GB to put an end to this. c_max and kstat.zfs.misc.arcstats.size were automatically adjusted based on that value.
But I'm afraid this extra wired memory won't be freed. On the contrary, it grew by an additional 3GB (and free mem decreased by 3GB) overnight.
Also, the swap use doesn't look all too promising, considering so much essentially free mem and such low use of Active mem. About 7GB are waiting to be laundered.


----------



## PMc (Aug 23, 2021)

rihad said:


> This is why:
> 
> 
> ```
> ...


So what? Free memory is not useful, so the kernel tries to put it to some use.
And occupied but unaccessed pages are meant to be moved to swap. Give the thing enough swap, so that swap is in a sane relationship to the installed memory.


----------



## mer (Aug 23, 2021)

Setting that is not a dynamic event.  You need to reboot for it to take effect.
I agree with PMc but sometimes, it's less stress to "just do something".
ZFS by default will try to use almost all of RAM.  It leaves a little bit for the kernel.
ZFS is also designed to release memory if there is pressure, the downside is the release is not immediate.

On a desktop, setting ZFS to not use all the RAM (say set to use half) has a slight positive impact:  you will always have RAM for Firefox to use


----------



## rihad (Aug 23, 2021)

Alas, rebooting isn't an option. What if I write a small program that calls calloc in small chunks enough times to cause some memory pressure, so that wired memory gets released down to somewhere below the new arc_size? Then the program would exit, and voila: less wired mem, more free mem.


----------



## mer (Aug 23, 2021)

rihad said:


> Alas, rebooting isn't an option. What if I write a small program that calls calloc in small chunks enough times to cause some memory pressure, so that wired memory gets released to somewhere below the new arc_size. Then the program would exit, and voila - less wired mem, more free mem.


In theory, I guess that would work.  Worst case, the system crashes, reboots, and uses the new value.  I'm assuming this is a server of some kind?  Can you do a hot failover to a backup system?  If so, you could fail over, stop the service(s) on the one you are concerned about, restart the services, then fail back.

If you have installed zfs-stats you can take a look at things related to ARC.
zfs-stats -A and zfs-stats -E are the two most useful.

MFU and MRU are the two big pieces of ARC.  MRU (Most Recently Used) is the first layer (like the traditional buffer cache); MFU (Most Frequently Used) is the next layer.  Things age out of MRU and wind up in MFU; if they get used again, they can wind up in MRU again.  If you look, MFU + MRU is pretty much your ARC size.  Look at what zfs-stats -E says.  If you have high cache hit numbers, then yes, you have a lot of memory in use by ARC, but almost all requests are being served from cache.  That means you are not going to the physical devices very much, which is pretty much always a good thing.


----------



## PMc (Aug 23, 2021)

rihad said:


> Alas, rebooting isn't an option. What if I write a small program that calls calloc in small chunks


No need for that. You can use awk to fill some array with strings. It's a one-liner.
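A sketch of such a one-liner (the loop count here is deliberately small for illustration; scale it up to actually create pressure):

```shell
# Fill an awk array with strings to consume memory, then exit to release it.
awk 'BEGIN { for (i = 0; i < 100000; i++) a[i] = sprintf("%0100d", i); print "filled" }'
```

Each element holds a 100-byte string, so the loop count controls roughly how much memory the process holds before exiting.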


rihad said:


> enough times to cause some memory pressure, so that wired memory gets released to somewhere below the new arc_size. Then the program would exit, and voila - less wired mem, more free mem.


It won't go below. The whole arc is in wired mem, plus most other things in the kernel, plus some user applications that use wired mem, plus bhyve (depending on the configuration).


----------



## rihad (Aug 23, 2021)

Bingo. This simple C program did the trick:


```
#include <unistd.h>
#include <stdlib.h>
#include <strings.h>

int main(void) {
  for (;;) {
    /* Allocate 20MB and zero it so the pages are actually backed by RAM. */
    void *p = malloc(20*1024*1024);
    if (p == NULL)
      break;  /* stop once the system refuses further allocations */
    explicit_bzero(p, 20*1024*1024);
    /* Deliberately never free; the growing footprint creates memory pressure. */
    sleep(1);
  }

  return 0;
}
```

After watching active mem + laundry queue grow by about 1.5GB in top, with free mem decreasing, I got bored and hit Ctrl+C in the program's window. And suddenly magic happened:


```
last pid: 43833;  load averages:  0.03,  0.29,  0.44                                                                                                                                                                 up 16+19:51:48  18:22:02
52 processes:  1 running, 51 sleeping
CPU:  0.1% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.7% idle
Mem: 3692M Active, 8417M Inact, 7539M Laundry, 202G Wired, 104K Buf, 52G Free
ARC: 190G Total, 143G MFU, 47G MRU, 12M Anon, 790M Header, 412M Other
     184G Compressed, 208G Uncompressed, 1.13:1 Ratio
Swap: 16G Total, 2302M Used, 14G Free, 14% Inuse
```

Much better.


----------



## mer (Aug 23, 2021)

Comparing against the stats you put up in #28, the major change I see is that "Free" went from 14G to 52G, with a corresponding "Wired" drop from 240G to 202G.
ARC is basically unchanged, and swap is unchanged. That implies to me that "it wasn't ZFS".

Hitting Ctrl+C caused everything allocated in your program to be freed. Perhaps it triggered a forced coalescing of buffers, or whatever was in Wired.

Before doing this, was Wired relatively static?


----------



## rihad (Aug 23, 2021)

Yup, Wired simply grew from 237 to 240G overnight (and correspondingly Free went down from 17 to 14GB) and remained there.

I think it was the extra ZFS wired memory, left over from when the ARC was at about 220G.
Then I set arc_max to 192G dynamically, and top (along with the relevant sysctls) reflected the change immediately. But Wired remained at 240G.


----------



## rihad (Aug 23, 2021)

This time I set arc_max to 16GB (interestingly, it can't be set to 8GB; I mean, arc_max is set to that value, but top's ARC remains at 16GB), and ran the program above allocating 500MB every second. After around 15-20GB, with the program still running, Wired started dropping rapidly in top, with free mem increasing. Laundry didn't change at all. Wired stopped dropping at the 23G mark.


```
last pid: 44926;  load averages:  0.55,  0.19,  0.06                                                                                                                                                                 up 16+21:09:04  19:39:18
52 processes:  1 running, 51 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 3710M Active, 8417M Inact, 7525M Laundry, 23G Wired, 104K Buf, 230G Free
ARC: 16G Total, 8730M MFU, 7017M MRU, 8536K Anon, 79M Header, 399M Other
     15G Compressed, 20G Uncompressed, 1.37:1 Ratio
Swap: 16G Total, 2302M Used, 14G Free, 14% Inuse
```

Nice.


----------



## mer (Aug 23, 2021)

I have nothing more to say other than "Hmm. Interesting." Without knowing exactly what the Wired memory was tied to, other than that it seems to be related to ARC, it's just interesting.
I took a quick look back and couldn't find it: what version of FreeBSD are you running?


----------



## rihad (Aug 23, 2021)

FreeBSD 13.0


----------



## PMc (Aug 23, 2021)

mer said:


> I have nothing more to say other than "Hmm.  Interesting".  Without knowing what exactly the Wired was tied to, other than it seems to be related to ARC, just it's interesting.


I would like to get a usage breakdown for wired. (I think a lot of people would.) But it appears very difficult to obtain, due to the fragmentation issue:

Operating memory is endangered by fragmentation: blocks of various sizes are allocated and released all the time, so after some uptime you may still have free memory, but it will be dispersed into little chunks which cannot be used contiguously.
This is a big problem when systems should stay up for a long time, and it is not easy to solve; you cannot just move memory around to de-fragment (pointer arithmetic!).
Therefore algorithms have been implemented which pre-allocate chunks of equal size for similar use-cases, trying to anticipate the expected duration of use of these chunks, so that when the chunks are freed they can be re-joined to form bigger free blocks. (You can read about this; it's called UMA.)

Bottom line: that wired memory may not have been in use at all; the kernel might just have anticipated that it could be demanded in the future. And so it would be unwise to free it without actual need.


----------



## rihad (Aug 24, 2021)

The amount of wired memory correlated with the amount of ARC in use (10-50GB more than that). This is expected, as ARC is always wired by definition.
I just didn't want it to grow forever (up to the ~380GB limit, which is total RAM), as this unnecessary disk cache resulted in needlessly swapping out non-wired mem.

Here's current ARC breakdown shown by `zfs-stats -A`:

```
ARC Efficiency
        Cache Access Total:                     267629336
        Cache Hit Ratio:                99.68%  266790411
        Cache Miss Ratio:               0.31%   838925
        Actual Hit Ratio:               97.51%  260973203
```

So a 16GB ARC looks like enough. The hit ratio did drop from 99.80% to 99.68%; I'll watch the dynamics more and increase the ARC if necessary, just not to 380GB )


----------



## mer (Aug 24, 2021)

PMc Fragmentation, always the "gotcha" for memory allocators.  One reason garbage collection is hard to get right.
rihad how about the zfs-stats -E output?  By default, the caching is for both data and metadata, so -E gives you the breakdown of what is in the cache.

That output is simply saying that most requests for data or metadata are being served from ARC.
If you look at the -E output you can get a feel for how the cached info moves from list to list.  Pay attention to the "Ghost" values.  If they go up or are significant, then increase the size of the ARC, because that implies data has fallen off MFU and MRU but is requested again.  Think of it as: "I'm streaming a movie of size X in a loop, so it gets read sequentially and pushed out a socket.  If my ARC is sized X/2, I completely flush the ARC and have to go back to the device.  If my ARC is sized >= X, then after the first loop the whole thing is in ARC, so subsequent loops are satisfied from memory with no need to go back to the disk."
A shared database is another example that may benefit from more ARC.

Without knowing what this machine is being used for (on a quick scan I couldn't find that info), it's hard to say what the appropriate size of the ARC is.
But in the end, it's your asset; you have to feel comfortable with the performance.


----------



## rihad (Aug 24, 2021)

```
$ zfs-stats -E

------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Aug 24 12:07:19 2021
------------------------------------------------------------------------

ARC Efficiency:                                 268.59  m
        Cache Hit Ratio:                99.68%  267.73  m
        Cache Miss Ratio:               0.32%   856.24  k
        Actual Hit Ratio:               97.52%  261.91  m

        Data Demand Efficiency:         99.82%  169.18  m
        Data Prefetch Efficiency:       93.31%  7.19    m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             1.97%   5.28    m
          Most Recently Used:           38.04%  101.84  m
          Most Frequently Used:         59.79%  160.07  m
          Most Recently Used Ghost:     0.07%   195.47  k
          Most Frequently Used Ghost:   0.13%   340.61  k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  63.07%  168.87  m
          Prefetch Data:                2.50%   6.71    m
          Demand Metadata:              34.41%  92.13   m
          Prefetch Metadata:            0.01%   24.67   k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  36.42%  311.81  k
          Prefetch Data:                56.15%  480.80  k
          Demand Metadata:              5.98%   51.19   k
          Prefetch Metadata:            1.45%   12.44   k

------------------------------------------------------------------------
```


----------



## mer (Aug 24, 2021)

Thanks.
I think that the "misses" section is implying a little more ARC would help by satisfying more requests from RAM.
But, that's just my opinion.


----------



## rihad (Aug 24, 2021)

Does newly written data go through the ARC cache and stay there (this is a PostgreSQL replica + analytics server)? If not, then it's entirely possible that all newly written data gets read back and qualifies as a MISS initially. Other than that, "cache hits" is a nice metric to eyeball to get the general idea ) And it's pretty high: 99.68%. If it decreases, then yeah, that could imply that the cache size is insufficient.


----------



## mer (Aug 24, 2021)

ARC is for the read direction, to the best of my knowledge.


----------



## grahamperrin@ (Aug 24, 2021)

Do you have level 2? 
<https://klarasystems.com/articles/openzfs-all-about-l2arc/> (undated)


----------



## rihad (Aug 25, 2021)

This is probably unrelated to ZFS, but why doesn't top's memory breakdown sum up to the amount of physical memory?


```
Mem: 8450M Active, 27G Inact, 110G Wired, 104K Buf, 129G Free

ARC: 16G Total, 5055M MFU, 11G MRU, 3568K Anon, 95M Header, 412M Other

     15G Compressed, 26G Uncompressed, 1.77:1 Ratio

Swap: 16G Total, 16G Free
```
This sums up roughly to: 8.5+27+110+129 == 275g

while:


```
$ sysctl hw.physmem
hw.physmem: 410834587648
```



```
$ grep -E '^(real|avail)' /var/run/dmesg.boot 
real memory  = 412304277504 (393204 MB)
avail memory = 400211787776 (381671 MB)
```

Spotted on 2 machines, both running FreeBSD 13.0
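For what it's worth, converting the hw.physmem value above from bytes to GiB with shell arithmetic gives roughly the dmesg figure:

```shell
# hw.physmem from the sysctl output above, converted to GiB (integer division)
echo $((410834587648 / (1024 * 1024 * 1024)))
# prints 382
```

which makes the gap to the ~275G that top accounts for all the more noticeable.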


----------



## rihad (Aug 25, 2021)

grahamperrin said:


> Do you have level 2?
> <https://klarasystems.com/articles/openzfs-all-about-l2arc/> (undated)



Nope. I believe L2ARC is for speeding up slower spinning disks, where an additional ARC tier can be kept on SSDs.
We use NVMe SSDs themselves for the data.


----------



## mer (Aug 25, 2021)

L2ARC is exactly what it sounds like, but instead of being in RAM it's on a device.  So yes, having a fast SSD/NVMe device holding L2ARC in front of spinning disks could be a performance boost.
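For reference, a cache device is attached with zpool's `cache` vdev type; a minimal sketch, where the pool name `tank` and the device `nvd0` are hypothetical:

```
# Attach a hypothetical NVMe device as L2ARC to a pool named "tank"
zpool add tank cache nvd0
```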


----------

