# Intensive swap usage



## abishai (Nov 26, 2021)

Hello, I have a problem with a server (it runs bhyve and jails). According to top, I have some free memory, but I'm constantly running out of swap space.


```
abishai@alpha:~ % grep real /var/run/dmesg.boot
real memory  = 103079215104 (98304 MB)
```


```
last pid: 64640;  load averages:  9,37,  7,94,  8,23                                                                                              up 40+07:36:33  16:02:59
363 processes: 2 running, 361 sleeping
CPU: 32,8% user,  0,0% nice, 14,4% system,  1,4% interrupt, 51,4% idle
Mem: 14G Active, 21G Inact, 29G Laundry, 14G Wired, 7835M Free
ARC: 2785M Total, 706M MFU, 1282M MRU, 5828K Anon, 126M Header, 664M Other
     1480M Compressed, 3210M Uncompressed, 2,17:1 Ratio
Swap: 32G Total, 32G Used, 28M Free, 99% Inuse
```

I receive

```
swp_pager_getswapspace(30): failed
```
from time to time.

I've tried vm.overcommit=4, but I'm not sure anything changed.
The situation is confusing: the server has free memory and inactive memory, yet the ARC is very small and the system still swaps even more.

Is it possible to decrease swap usage? Or find out why it swaps so actively?


----------



## SirDice (Nov 26, 2021)

Limit your ARC size. And check how much memory you've assigned to the VMs. ARC + VM memory + some room for your jails shouldn't exceed the amount of RAM the machine has.
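
As a rough budget for the 96 GB machine in the first post (illustrative numbers only, not a recommendation), the loader.conf side of that might look like:

```sh
# /boot/loader.conf -- example memory budget for a 96 GB host:
#   ZFS ARC cap           20G  (vfs.zfs.arc_max)
#   bhyve guests          ~18G (sum of the VMs' memory= settings)
#   jails + host + slack  the remainder
vfs.zfs.arc_max="20G"
```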


----------



## covacat (Nov 26, 2021)

You may have some large bhyve processes swapped out.
As root, `ps -f`?
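
A broader sweep than `ps -f` (my addition, not part of the suggestion above; the keywords are standard ps(1) ones) is to sort everything by resident size. A process showing a large VSZ next to a tiny RSS has mostly been paged out:

```shell
# List every process sorted by resident set size (RSS, in KB), largest first.
# A big VSZ alongside a tiny RSS hints the process is largely swapped out.
ps -axo pid,rss,vsz,state,command | sort -rn -k2 | head -n 10
```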


----------



## Alain De Vos (Nov 26, 2021)

The following output may be interesting:

```
top a -n -o res |head -n 20
```


----------



## abishai (Nov 26, 2021)

SirDice said:


> Limit your ARC size. And check how much memory you've assigned to the VMs. ARC + VM memory + some room for your jails shouldn't exceed the amount of RAM the machine has.


I have `vfs.zfs.arc_max="20G"` in /boot/loader.conf.
The running VMs are configured to use 18 GB in total.



covacat said:


> you may have some large  bhyve processes swapped out.
> as root ps -f ?




```
abishai@alpha:~ % doas ps -f
  PID TT  STAT       TIME COMMAND
 1538 v0  Is+     0:00.00 /usr/libexec/getty Pc ttyv0
 1539 v1  Is+     0:00.00 /usr/libexec/getty Pc ttyv1
 1540 v2  Is+     0:00.00 /usr/libexec/getty Pc ttyv2
 1541 v3  Is+     0:00.00 /usr/libexec/getty Pc ttyv3
 1542 v4  Is+     0:00.00 /usr/libexec/getty Pc ttyv4
 1543 v5  Is+     0:00.00 /usr/libexec/getty Pc ttyv5
 1544 v6  Is+     0:00.00 /usr/libexec/getty Pc ttyv6
 1545 v7  Is+     0:00.00 /usr/libexec/getty Pc ttyv7
 1253  2- IW      0:00.00 /bin/sh /usr/local/sbin/vm _run quik-vtb-main
 1432  2- SC   1168:21.45 bhyve: quik-vtb-main (bhyve)
 9053  4  R+      0:00.00 ps -f
53533  3- IW      0:00.00 /bin/sh /usr/local/sbin/vm _run temp
53711  3- SC    182:58.23 bhyve: temp (bhyve)
23141  5- IW      0:00.00 /bin/sh /usr/local/sbin/vm _run quik-master
23324  5- SC   3824:51.58 bhyve: quik-master (bhyve)
27479  1- IW      0:00.00 /bin/sh /usr/local/sbin/vm _run quik-open-main
27653  1- SC   4219:50.68 bhyve: quik-open-main (bhyve)
```

Not sure I understand the output. TIME is total CPU consumption, right? And what about memory?



Alain De Vos said:


> The following output can be interesting,
> 
> ```
> top a -n -o res |head -n 20
> ```




```
abishai@alpha:~ % doas top a -n -o res | head -n 30
last pid: 11018;  load averages:  4.15,  4.67,  4.86  up 40+10:41:12    19:07:38
365 processes: 2 running, 363 sleeping
CPU: 28.7% user,  0.0% nice, 15.4% system,  0.9% interrupt, 55.0% idle
Mem: 14G Active, 22G Inact, 24G Laundry, 22G Wired, 3321M Free
ARC: 6631M Total, 2109M MFU, 803M MRU, 14M Anon, 117M Header, 3589M Other
     961M Compressed, 2707M Uncompressed, 2.82:1 Ratio
Swap: 32G Total, 31G Used, 800M Free, 97% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
12750    818       75  26    0    27G  8523M uwait    3  66:23   0.00% java
69131 www           4  20    0  5535M  4476M select   3  66:44  36.13% zmc
 8144    770        1  20    0  4258M  4109M select  10  31:28   0.00% postgres
23324 root         36  20    0  4150M  3833M kqread   8  63.8H  15.82% bhyve
 8145    770        1  20    0  4250M  3686M select   3   5:04   0.00% postgres
 1432 root         36  20    0  4185M  3177M kqread   8  19.5H   2.25% bhyve
27653 root         36  20    0  4185M  3142M kqread   2  70.4H  34.47% bhyve
37682 nobody       93  52    0    27G  2500M uwait    6  16.9H   0.49% java
69115 www           4  20    0  2708M  2055M select   6  94:34  50.63% zmc
20876 nobody       93  43    0    27G  1909M uwait    8  19.8H   1.66% java
 5495    907       22  20    0    14G  1884M uwait   10  20.6H   0.00% influxd
69123 www           4  20    0  2258M  1804M select  11  94:15  53.61% zmc
59647 nobody      109  52    0    27G  1634M uwait    1   5:28   0.00% java
97417 nobody      113  52    0    27G  1534M uwait   10  82:18   0.10% java
16043 nobody       99  52    0    27G  1473M uwait    4  32:26   0.00% java
 3720    770        1  21    0  4261M  1432M select   2   0:00   0.00% postgres
69107 www           4  20    0  1987M  1423M select   6  90:26  47.46% zmc
69119 www           4  20    0  1986M  1421M select   6  85:23  46.88% zmc
 5078    965      132  52    0    40G  1376M uwait    8  53.5H   0.00% java
 5015    848      204  52    0  3131M  1340M uwait    9  24.2H   0.00% java
69111 www           4  20    0  1778M  1278M select   6  86:18  46.44% zmc
```

Hmmm, swap usage probably reflects the software's actual demands. I'll investigate the jails further. On the other hand, the system reports 21G of inactive memory. It could use that to grow the ARC or to reduce swap usage, right?


----------



## covacat (Nov 26, 2021)

`ps -f -o pid,command,rss`


----------



## Alain De Vos (Nov 26, 2021)

There are a few Java processes taking 27G and even 40G of virtual memory each.
Maybe limit the heap & stack size of those Java processes?
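
For the record, capping a HotSpot JVM looks something like this (the values and `app.jar` are placeholders):

```sh
# -Xms/-Xmx bound the heap, -Xss bounds each thread's stack.
# Without -Xmx the JVM may size its heap from total RAM rather than
# from what is reasonable on a shared host.
java -Xms64m -Xmx256m -Xss512k -jar app.jar
```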


----------



## grahamperrin@ (Nov 28, 2021)

abishai said:


> I have vfs.zfs.arc_max="20G" …



Also/alternatively <https://old.reddit.com/r/freebsd/comments/pvsu2w/what_to_choose_zfs_or_ufs/hecksww/>


----------



## abishai (Nov 28, 2021)

I've set `-Xmx256m` for all my Java daemons and rebooted the server; now I see

```
abishai@alpha:~ % top -n -o res
last pid: 32675;  load averages:  6,08,  5,21,  4,70  up 0+19:20:40    11:48:15
337 processes: 2 running, 335 sleeping
CPU: 31,0% user,  0,0% nice,  5,0% system,  0,6% interrupt, 63,4% idle
Mem: 9323M Active, 37G Inact, 28G Wired, 17G Free
ARC: 3507M Total, 1763M MFU, 103M MRU, 1433K Anon, 172M Header, 1469M Other
     375M Compressed, 1492M Uncompressed, 3,97:1 Ratio
Swap: 32G Total, 32G Free
```
They do nothing during the weekend, though, so I'd better wait till tomorrow. Maybe Java thought it shouldn't worry about memory at all, and with the new limits it will be forced to run its GC.
I've also limited the ARC size to 10GB.
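
To double-check that the new cap is in effect (these are the standard OpenZFS sysctls on FreeBSD):

```sh
# The cap, in bytes (10G = 10737418240), and the current ARC size:
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size
```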


----------



## grahamperrin@ (Nov 28, 2021)

Is there any unwanted killing?



abishai said:


> …
> I receive
> 
> ```
> ...



Typo?

Seeking `swap_pager_getswapspace` finds more things of interest.


----------



## VladiBG (Nov 28, 2021)

man top(1)


> *-w* Display approximate swap usage for each process.



(You don't know how much I hate Java developers.)


----------



## Alain De Vos (Nov 28, 2021)

VladiBG, do you think a web server using IIS & dotnet is better? I don't think so.


----------



## VladiBG (Nov 28, 2021)

It's not the programming language. It's the low quality of the code (and of its optimization) that more and more students produce for different projects without proper supervision.


----------



## eternal_noob (Nov 28, 2021)

My favourite Java developer story: https://thedailywtf.com/articles/The_Brillant_Paula_Bean


----------



## grahamperrin@ (Nov 28, 2021)

VladiBG said:


> man top(1)
> 
> …



Does what's below show that no process uses swap? (If there's a way to reverse the sort order, I can't find it, sorry.)


```
% top -n -w -o swap
last pid: 55279;  load averages:  0.53,  0.99,  1.10; battery: 98%  up 0+09:28:33    13:38:14
183 processes: 1 running, 181 sleeping, 1 zombie
CPU: 15.8% user,  0.3% nice,  5.3% system,  0.2% interrupt, 78.5% idle
Mem: 4838M Active, 3696M Inact, 1942M Laundry, 4172M Wired, 1069M Free
ARC: 1023M Total, 447M MFU, 136M MRU, 1131K Anon, 111M Header, 327M Other
     219M Compressed, 748M Uncompressed, 3.41:1 Ratio
Swap: 16G Total, 5471M Used, 11G Free, 33% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES SWAP STATE    C   TIME    WCPU COMMAND
 4806 grahamperr  163  23    0  6652M  2196M   0B select   3  70:31   4.05% firefox
 2274 root          8  21    0   286M    50M   0B select   1  16:27   1.86% Xorg
 6797 grahamperr   27  20    0    10G  1239M   0B select   3  32:41   1.37% firefox
 2328 grahamperr   16  20    0   954M   226M   0B select   3  12:30   1.17% kwin_x11
 4814 grahamperr   25  21    0  5827M  1201M   0B select   0  38:57   0.98% firefox
 3655 grahamperr  149  20    0  5931M   805M   0B zio->i   3  18:50   0.88% thunderbird
 2419 grahamperr    5  21    0    90M    22M   0B select   3   4:03   0.39% gkrellm
 1870 netdata      20  52   19   194M    90M   0B pause    1   3:32   0.39% netdata
12579 grahamperr   24  20    0  3512M   900M   0B select   1   5:09   0.29% firefox
45881 grahamperr   25  20    0  3228M   572M   0B select   1   1:52   0.10% firefox
 4811 grahamperr   24  20    0  3698M   436M   0B select   0   7:36   0.00% firefox
 2475 grahamperr   27  52    0  5774M  1117M   0B uwait    2   4:26   0.00% java
 6049 grahamperr   25  20    0  3040M   397M   0B select   3   3:41   0.00% firefox
 2380 grahamperr   30  20    0  1110M   227M   0B select   3   3:29   0.00% plasmashell
 3493 grahamperr   12  20    0   408M   102M   0B select   0   1:27   0.00% konsole
 4822 grahamperr   20  20    0  3085M   425M   0B select   3   1:00   0.00% firefox
 1952 netdata       1  40   19    20M  4344K   0B nanslp   0   0:58   0.00% apps.plugin
 2457 grahamperr   26  52    0  5805M    74M   0B uwait    3   0:47   0.00% java

%
```


----------



## Alain De Vos (Nov 28, 2021)

eternal_noob said:


> My favourite Java developer story: https://thedailywtf.com/articles/The_Brillant_Paula_Bean


Thanks eternal_noob.
I've re-written Paula_Bean in Scheme; using closures, no memory should leak.

```
#lang racket
(define paulaBean%
  (class object%
    (super-new)
    (init-field paula)
    (define/public (getpaula) paula)
    );-end-class
  );-end-definition
(define paulaBeanInstance (new paulaBean% [paula "Brilliant"]))
(display (send paulaBeanInstance getpaula))
```
As I mis-spelled Brillant, a corrected OCaml version:

```
class paula_Bean =
    object (self)
        val mutable paula = "Brillant"
        method getpaula = 
            paula
    end;;
let p = new paula_Bean in
Printf.printf "%s\n" p#getpaula;;
```
And a Crystal language version:

```
class Paula_Bean 
    def initialize() 
        @paula = "Brillant" 
    end 
    def getpaula 
        @paula 
    end 
end 
p=Paula_Bean.new 
puts p.getpaula
```


----------



## eternal_noob (Nov 28, 2021)

But Paula made a mistake and misspelled "Brilliant" as "Brillant"!
You need to fix this.


----------



## mer (Nov 28, 2021)

Java as a language is no worse than others when used correctly. For me, the problem is that you don't really run a Java program on the hardware; you run it inside an interpreter that runs on your hardware (yes, I know there are/were some CPUs that executed Java natively in hardware, but I'm not sure they survived).

I have run into problems in the past (some distant, some recent) where a bug in the JRE resulted in a memory leak. That leak caused the program's/JRE's footprint to keep growing until it hit limits and got OOMed. Now run multiple copies of that in containers/VMs, and one gets a machine into swap quickly.
Updating the JRE fixed that problem, but updating the JRE is not always a trivial thing.
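
When the OOM killer does fire on FreeBSD, the kernel logs it, so a check along these lines can confirm what happened (assuming the default syslog location):

```sh
# FreeBSD logs OOM kills as "pid N (name) ... was killed: out of swap space"
grep -i "was killed" /var/log/messages
```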


----------



## VladiBG (Nov 28, 2021)

grahamperrin said:


> If there's a way to reverse the sort order, I can't find it, sorry


For the sort order, press "o" and then type the field name. When you use top interactively it only displays the top processes that fit on screen, or use:

`top -n -o swap 183 | less`

to display all 183 processes that you have.


> 183 processes: 1 running, 181 sleeping, 1 zombie


----------



## Alain De Vos (Nov 28, 2021)

mer said:


> Java as a language, is no worse than others when used correctly.  For me, the problem is that you don't really run a Java program on the hardware, you run it inside of an interpreter that runs on your hardware (yes I know there are/where some CPUs that were Java native in the hardware but not sure if they survived).
> 
> I have run into problems in the past (some distant, some recent) where there was a bug in the "jre" that resulted in a memory leak.  That memory leak caused the program/jre to keep increasing, eventually hit limits and got OOMed.  Now take that and run multiple copies in containers/vms and one gets a machine into swap quickly.
> Updating the JRE fixed that problem, but updating JRE is not a trivial thing sometimes.


And the JVM garbage collector must receive enough CPU time to actually collect the garbage...


----------



## grahamperrin@ (Nov 28, 2021)

VladiBG said:


> …
> `top -n -o swap 183 | less`
> …



Thanks, a slight variation (with `-w`, and with the count of 179 that was reported immediately before this run):

`top -n -w -o swap 179 | less`

Then with `more`, I see 5015M of swap used, yet apparently none of it is attributed to any process:


```
% top -n -o swap
last pid: 60750;  load averages:  1.63,  1.36,  1.52; battery: 98%  up 0+10:58:01    15:07:42
179 processes: 2 running, 176 sleeping, 1 zombie
CPU: 15.4% user,  0.2% nice,  5.4% system,  0.2% interrupt, 78.8% idle
Mem: 4995M Active, 5093M Inact, 1623M Laundry, 3663M Wired, 340M Free
ARC: 880M Total, 343M MFU, 93M MRU, 1097K Anon, 108M Header, 334M Other
     87M Compressed, 350M Uncompressed, 4.02:1 Ratio
Swap: 16G Total, 5015M Used, 11G Free, 30% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
60183 grahamperr   26  22    0  2802M   414M select   1   0:33   7.08% firefox
12579 grahamperr   24  22    0  3747M   946M select   2   8:26   4.79% firefox
 4806 grahamperr  178  20    0  6965M  2202M select   0  84:35   4.54% firefox
 2274 root          8  21    0   281M    54M select   2  19:10   2.78% Xorg
 2328 grahamperr   16  20    0   953M   226M select   2  14:41   1.95% kwin_x11
 6797 grahamperr   28  20    0    10G  1216M select   2  39:24   1.86% firefox
 4814 grahamperr   27  20    0  7240M  2566M select   2  46:55   1.76% firefox
 4811 grahamperr   24  20    0  3864M   562M select   2   9:31   1.07% firefox
 3655 grahamperr  150  20    0  5930M   837M select   2  21:30   0.88% thunderbird
 2419 grahamperr    5  21    0    90M    19M CPU1     1   5:00   0.68% gkrellm
 2475 grahamperr   29  52    0  5774M   521M uwait    2   5:27   0.59% java
 1870 netdata      20  52   19   197M    91M pause    1   4:24   0.49% netdata
 3493 grahamperr   12  20    0   408M    92M select   2   1:34   0.49% konsole
45881 grahamperr   24  20    0  3622M   806M select   3   5:15   0.00% firefox
 6049 grahamperr   27  20    0  3388M   560M select   1   4:25   0.00% firefox
 2380 grahamperr   30  20    0  1113M   220M select   3   3:57   0.00% plasmashell
 1952 netdata       1  40   19    20M  4084K nanslp   2   1:07   0.00% apps.plugin
 4822 grahamperr   20  20    0  3102M   528M select   3   1:05   0.00% firefox

% top -n -w -o swap 179 | more
last pid: 60754;  load averages:  1.63,  1.36,  1.52; battery: 98%  up 0+10:58:03    15:07:44
180 processes: 1 running, 178 sleeping, 1 zombie
CPU: 15.4% user,  0.2% nice,  5.4% system,  0.2% interrupt, 78.8% idle
Mem: 4999M Active, 5094M Inact, 1623M Laundry, 3663M Wired, 340M Free
ARC: 880M Total, 343M MFU, 94M MRU, 1196K Anon, 108M Header, 334M Other
     87M Compressed, 350M Uncompressed, 4.02:1 Ratio
Swap: 16G Total, 5015M Used, 11G Free, 30% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES SWAP STATE    C   TIME    WCPU COMMAND
60183 grahamperr   26  22    0  2802M   414M   0B select   2   0:33   6.49% firefox
12579 grahamperr   24  21    0  3747M   946M   0B select   1   8:26   4.05% firefox
 4806 grahamperr  178  20    0  6965M  2202M   0B select   0  84:35   3.66% firefox
 2274 root          8  21    0   281M    54M   0B select   3  19:10   2.88% Xorg
 6797 grahamperr   28  20    0    10G  1220M   0B select   0  39:24   1.95% firefox
 2328 grahamperr   16  20    0   953M   226M   0B select   3  14:41   1.86% kwin_x11
 4814 grahamperr   27  20    0  7240M  2566M   0B select   1  46:55   1.66% firefox
 3655 grahamperr  150  20    0  5930M   837M   0B select   0  21:30   0.88% thunderbird
 4811 grahamperr   24  20    0  3864M   562M   0B select   1   9:31   0.78% firefox
 2419 grahamperr    5  21    0    90M    19M   0B select   0   5:00   0.59% gkrellm
 6049 grahamperr   27  20    0  3388M   560M   0B select   3   4:25   0.49% firefox
 1870 netdata      20  52   19   197M    91M   0B pause    1   4:24   0.39% netdata
 3493 grahamperr   12  20    0   408M    92M   0B select   1   1:34   0.39% konsole
 2475 grahamperr   29  52    0  5774M   521M   0B uwait    2   5:27   0.20% java
45881 grahamperr   24  20    0  3622M   806M   0B select   2   5:15   0.10% firefox
 2380 grahamperr   30  20    0  1113M   220M   0B select   1   3:57   0.00% plasmashell
 1952 netdata       1  40   19    20M  4084K   0B nanslp   0   1:07   0.00% apps.plugin
 4822 grahamperr   20  20    0  3102M   528M   0B select   0   1:05   0.00% firefox
 2457 grahamperr   26  52    0  5805M    59M   0B uwait    3   0:50   0.00% java
 2393 grahamperr   10  20    0   173M    44M   0B select   2   0:44   0.00% kactivitymanagerd
 4837 grahamperr   17  30   10    51G    99M   0B uwait    2   0:43   0.00% code-oss
 4209 grahamperr    1  20    0    14M  2480K   0B select   0   0:30   0.00% top
 3718 grahamperr   20  20    0  2734M   215M   0B select   0   0:30   0.00% thunderbird
 2656 grahamperr    2  20    0    89M    21M   0B select   2   0:28   0.00% kio_http_cache_clea
 2406 grahamperr    1  21    0    20M  2372K   0B nanslp   2   0:24   0.00% perl
 5896 grahamperr    5  20    0   631M    22M   0B select   0   0:22   0.00% gtk-mixer
 3688 grahamperr   14  20    0   445M    61M   0B select   2   0:22   0.00% dolphin
62048 grahamperr    1  20    0    19M  4580K   0B select   0   0:21   0.00% htop
 1175 root          1  20    0    13M  1084K   0B select   0   0:18   0.00% moused
 2448 grahamperr    8  30    0    87M    17M   0B select   3   0:12   0.00% zeitgeist-datahub
 4845 grahamperr   15  30   10    36G    66M   0B uwait    3   0:12   0.00% code-oss
 4844 grahamperr   13  30   10    36G    33M   0B kqread   2   0:10   0.00% code-oss
 2594 grahamperr    1  20    0    14M  1604K   0B nanslp   2   0:10   0.00% gstat
 2186 root          4  20    0    36M  4612K   0B select   2   0:10   0.00% upowerd
 2318 grahamperr    9  20    0   231M    48M   0B select   0   0:09   0.00% kded5
 2435 grahamperr   34  21    0   461M    17M   0B select   0   0:08   0.00% mysqld
 2296 grahamperr    1  20    0    16M  3836K   0B select   1   0:07   0.00% dbus-daemon
 2389 grahamperr    9  20    0   290M    49M   0B select   1   0:07   0.00% kdeconnectd
 4842 grahamperr    8  20    0   200M    45M   0B select   3   0:07   0.00% plasma-browser-inte
 4833 grahamperr   26  20    0    36G    40M   0B select   3   0:06   0.00% code-oss
 2402 grahamperr    1  20    0    25M  4592K   0B select   0   0:06   0.00% xterm
 2529 grahamperr   14  20    0   545M    50M   0B select   0   0:06   0.00% akonadi_followuprem
 1721 root          1  20    0    13M   988K   0B select   2   0:06   0.00% powerd
 2547 grahamperr   14  20    0   539M    50M   0B select   0   0:06   0.00% akonadi_sendlater_a
 2539 grahamperr   14  20    0   539M    50M   0B select   3   0:06   0.00% akonadi_mailmerge_a
93388 grahamperr    5  20    0   137M    49M   0B select   3   0:05   0.00% sysctlview
 2403 grahamperr   13  20    0   368M    49M   0B select   2   0:05   0.00% kgpg
 2439 grahamperr   13  20    0   475M    54M   0B select   2   0:04   0.00% spectacle
 2397 grahamperr   12  20    0   357M    64M   0B select   2   0:04   0.00% kcharselect
 2530 grahamperr   10  20    0   170M    37M   0B select   1   0:04   0.00% akonadi_ical_resour
 2374 grahamperr    7  20    0   135M    30M   0B select   2   0:04   0.00% org_kde_powerdevil
 2534 grahamperr    9  20    0   166M    37M   0B select   0   0:04   0.00% akonadi_maildispatc
 2394 grahamperr    8  20    0   174M    36M   0B select   0   0:04   0.00% DiscoverNotifier
 2525 grahamperr   14  20    0   545M    50M   0B select   2   0:04   0.00% akonadi_archivemail
 2438 grahamperr    1  20    0    25M  4508K   0B select   1   0:04   0.00% xterm
 2536 grahamperr   14  20    0   545M    50M   0B select   3   0:04   0.00% akonadi_mailfilter_
 2443 grahamperr   13  20    0   313M    44M   0B select   0   0:04   0.00% kwalletd5
 2362 grahamperr    7  20    0   170M    41M   0B select   3   0:04   0.00% ksmserver
 2554 grahamperr   13  20    0   541M    50M   0B select   1   0:04   0.00% akonadi_unifiedmail
 4835 grahamperr   11  20    0   476M    33M   0B select   3   0:04   0.00% code-oss
60220 grahamperr   21  20    0  2671M   285M   0B select   2   0:04   0.00% firefox
60186 grahamperr   21  20    0  2677M   285M   0B select   2   0:04   0.00% firefox
 4208 grahamperr    1  20    0    25M  4456K   0B select   2   0:03   0.00% xterm
 2424 grahamperr    3  20    0   106M    26M   0B select   3   0:03   0.00% akonadi_control
 2322 grahamperr    5  20    0    50M  7928K   0B select   0   0:03   0.00% gvfs-udisks2-volume
 2528 grahamperr   13  20    0   509M    49M   0B select   2   0:03   0.00% akonadi_ews_resourc
 1745 messagebus    1  20    0    14M  2296K   0B select   2   0:03   0.00% dbus-daemon
 2491 grahamperr    1  20    0    60M    10M   0B select   2   0:03   0.00% python3.8
 3489 grahamperr   12  20    0   328M    45M   0B select   2   0:03   0.00% kwalletmanager5
 2382 grahamperr    6  20    0   144M    32M   0B select   2   0:03   0.00% kaccess
 2463 grahamperr    7  20    0   212M    35M   0B select   2   0:02   0.00% python3.8
 2407 grahamperr    9  20    0   212M    42M   0B select   1   0:02   0.00% korgac
 2413 grahamperr    4  20    0    33M  5256K   0B select   2   0:02   0.00% at-spi2-registryd
 2543 grahamperr    8  20    0   188M    41M   0B select   3   0:02   0.00% akonadi_newmailnoti
 2523 grahamperr    9  20    0   166M    36M   0B select   0   0:02   0.00% akonadi_akonotes_re
 2532 grahamperr    9  20    0   165M    36M   0B select   2   0:02   0.00% akonadi_maildir_res
 1759 ntpd          1  20    0    21M  2336K   0B select   1   0:02   0.00% ntpd
 2531 grahamperr    8  40   19   170M    36M   0B select   1   0:02   0.00% akonadi_indexing_ag
 2526 grahamperr    8  20    0   162M    35M   0B select   3   0:02   0.00% akonadi_birthdays_r
 2541 grahamperr    8  20    0   158M    35M   0B select   3   0:02   0.00% akonadi_migration_a
 2527 grahamperr    8  20    0   165M    36M   0B select   2   0:02   0.00% akonadi_contacts_re
 2378 grahamperr    6  20    0   146M    33M   0B select   0   0:02   0.00% polkit-kde-authenti
 2494 grahamperr    1  20    0    47M  3764K   0B select   0   0:02   0.00% python3.8
 4847 grahamperr   12  30   10    36G    29M   0B kqread   1   0:02   0.00% code-oss
 2384 grahamperr    4  20    0   110M    26M   0B select   3   0:02   0.00% gmenudbusmenuproxy
 2333 grahamperr    3  20    0   106M    26M   0B select   1   0:02   0.00% kglobalaccel5
  781 root         14 -44   r8    21M  1476K   0B cuse-s   3   0:02   0.00% webcamd
  754 root         14 -44   r8    21M  1468K   0B cuse-s   3   0:02   0.00% webcamd
 2376 grahamperr    3  20    0   102M    24M   0B select   0   0:02   0.00% xembedsniproxy
 2352 grahamperr    3  20    0    97M    22M   0B select   2   0:02   0.00% kscreen_backend_lau
 2453 grahamperr    7  20    0   269M    27M   0B select   0   0:02   0.00% evolution-alarm-not
 2310 grahamperr    4  20    0   120M    29M   0B select   2   0:01   0.00% klauncher
 3803 grahamperr   19  20    0  2457M    55M   0B select   0   0:01   0.00% thunderbird
 3809 grahamperr   19  20    0  2457M    55M   0B select   0   0:01   0.00% thunderbird
 2484 grahamperr    5  20    0    69M    14M   0B select   0   0:01   0.00% ibus-extension-gtk3
 4508 grahamperr    4  20    0    52M    16M   0B select   2   0:01   0.00% gvfsd-metadata
 2426 grahamperr   29  20    0   178M    26M   0B select   1   0:01   0.00% akonadiserver
 2411 grahamperr    1  20    0    14M  2144K   0B select   0   0:01   0.00% dbus-daemon
 2608 grahamperr    5  20    0   165M    17M   0B select   1   0:01   0.00% goa-daemon
54866 grahamperr    1  20    0    18M  1632K   0B nanslp   2   0:00   0.00% zpool
 1872 root          1  20    0    18M  2284K   0B select   2   0:00   0.00% sendmail
 4943 grahamperr    1  20    0    87M    21M   0B select   2   0:00   0.00% kioslave5
 2483 grahamperr    4  20    0    60M    14M   0B select   1   0:00   0.00% ibus-ui-gtk3
 2477 grahamperr    4  20    0    47M  6516K   0B select   3   0:00   0.00% zeitgeist-daemon
 1775 colord        4  31    0    44M  4976K   0B select   0   0:00   0.00% colord
 2090 polkitd       9  20    0  2155M  6228K   0B select   0   0:00   0.00% polkitd
 1773 root          1  20    0    66M  2632K   0B kqread   3   0:00   0.00% cupsd
 2155 root          7  20    0    91M  9952K   0B select   2   0:00   0.00% bsdisks
 2615 grahamperr   10  20    0   124M    18M   0B select   3   0:00   0.00% evolution-calendar-
 2368 grahamperr    1  20    0    84M    20M   0B select   0   0:00   0.00% kioslave5
 2366 grahamperr    1  20    0    84M    20M   0B select   0   0:00   0.00% kioslave5
 2320 grahamperr    4  20    0    40M  5232K   0B select   2   0:00   0.00% gvfsd
 4836 grahamperr    4  20    0   296M    25M   0B uwait    3   0:00   0.00% code-oss
 2088 root         16  20    0    82M  3796K   0B select   1   0:00   0.00% console-kit-daemon
 2605 grahamperr    5  20    0   133M    18M   0B select   2   0:00   0.00% evolution-source-re
 3562 root          1  20    0    16M  3160K   0B ttyin    3   0:00   0.00% csh
 1871 netdata       2  52   19    25M  2160K   0B kqread   1   0:00   0.00% netdata
 1645 root          1  20    0    13M  1416K   0B select   2   0:00   0.00% syslogd
 1786 root          1  20    0    13M   604K   0B nanslp   0   0:00   0.00% cron
 2617 grahamperr    7  20    0   142M    19M   0B select   2   0:00   0.00% evolution-addressbo
 2326 grahamperr    5  20    0    37M  4812K   0B select   2   0:00   0.00% gvfs-gphoto2-volume
 2324 grahamperr    5  20    0    36M  4668K   0B select   1   0:00   0.00% gvfs-mtp-volume-mon
  416 root          1  20    0    21M  1724K   0B select   3   0:00   0.00% wpa_supplicant
 3529 root          1  20    0    16M  3412K   0B ttyin    0   0:00   0.00% csh
 1422 root          1  20    0    11M   776K   0B select   3   0:00   0.00% devd
 6878 grahamperr    5  20    0    42M  4584K   0B select   0   0:00   0.00% gnome-keyring-daemo
 2297 grahamperr    2  24    0    84M    17M   0B select   2   0:00   0.00% startplasma-x11
 3496 grahamperr    1  20    0    16M  2548K   0B pause    1   0:00   0.00% tcsh
 2241 root          1  52    0    16M  1040K   0B ttyin    3   0:00   0.00% csh
 2309 grahamperr    1  20    0   111M    26M   0B select   3   0:00   0.00% kdeinit5
 4542 grahamperr    1  20    0    16M    24K   0B pause    2   0:00   0.00% tcsh
 4810 grahamperr    4  20    0   233M    70M   0B select   3   0:00   0.00% firefox
 3701 grahamperr    1  20    0    16M   968K   0B ttyin    3   0:00   0.00% tcsh
 3503 grahamperr    1  20    0    16M    24K   0B pause    3   0:00   0.00% tcsh
 2271 root          2  20    0    71M    12M   0B select   3   0:00   0.00% sddm
 2410 grahamperr    5  20    0    42M  5440K   0B select   1   0:00   0.00% at-spi-bus-launcher
 3508 grahamperr    1  20    0    16M    24K   0B pause    2   0:00   0.00% tcsh
55231 grahamperr    1  30    0    16M    24K   0B pause    3   0:00   0.00% tcsh
 3514 grahamperr    1  38    0    16M    24K   0B pause    1   0:00   0.00% tcsh
 6079 grahamperr    4  20    0    46M  6588K   0B select   1   0:00   0.00% gvfsd-http
 4855 grahamperr    1  52   10    18M    24K   0B sbwait   3   0:00   0.00% aspell
 1881 root          1  20    0    13M    24K   0B wait     3   0:00   0.00% login
55237 grahamperr    1  20    0    14M  2996K   0B ttyin    0   0:00   0.00% top
 3559 grahamperr    1  20    0    13M    24K   0B wait     0   0:00   0.00% su
 2335 grahamperr    4  20    0    31M  3640K   0B select   0   0:00   0.00% dconf-service
 1875 smmsp         1  20    0    18M   972K   0B pause    3   0:00   0.00% sendmail
 2279 root          1  52    0    66M    11M   0B select   2   0:00   0.00% sddm-helper
 3525 grahamperr    1  22    0    13M    24K   0B wait     1   0:00   0.00% su
 2280 grahamperr    1  52    0    17M    24K   0B wait     1   0:00   0.00% ck-launch-session
 2595 grahamperr    1  52   19    12M    24K   0B sbwait   0   0:00   0.00% cat
 2598 grahamperr    1  52   19    12M    24K   0B sbwait   1   0:00   0.00% cat
 2596 grahamperr    1  52   19    12M    24K   0B sbwait   0   0:00   0.00% cat
 2593 grahamperr    1  52   19    12M    24K   0B sbwait   0   0:00   0.00% cat
 2592 grahamperr    1  52   19    12M    24K   0B sbwait   0   0:00   0.00% cat
 2600 grahamperr    1  52   19    12M    24K   0B sbwait   1   0:00   0.00% cat
  492 root          1  20    0    13M    24K   0B kqread   3   0:00   0.00% rtsold
60754 grahamperr    1  20    0    13M  2528K   0B piperd   0   0:00   0.00% more
 2295 grahamperr    1  52    0    13M    24K   0B wait     2   0:00   0.00% dbus-run-session
60753 grahamperr    1  20    0    14M  2904K   0B CPU2     2   0:00   0.00% top
 1880 root          1  52    0    13M   912K   0B ttyin    0   0:00   0.00% getty
 1884 root          1  52    0    13M   912K   0B ttyin    0   0:00   0.00% getty
  500 root          1  20    0    13M   896K   0B select   1   0:00   0.00% rtsold
 1885 root          1  52    0    13M   912K   0B ttyin    0   0:00   0.00% getty
 1883 root          1  52    0    13M   912K   0B ttyin    3   0:00   0.00% getty
 1882 root          1  52    0    13M   912K   0B ttyin    3   0:00   0.00% getty
 1887 root          1  52    0    13M   912K   0B ttyin    0   0:00   0.00% getty
 1886 root          1  52    0    13M   912K   0B ttyin    3   0:00   0.00% getty
 1012 root          1  24    0    13M  1004K   0B select   3   0:00   0.00% dhclient
  935 root          1  44    0    13M   996K   0B select   0   0:00   0.00% dhclient
 1842 root          1  52    0    21M  2108K   0B select   1   0:00   0.00% sshd
  938 root          1   4    0    13M   992K   0B select   0   0:00   0.00% dhclient
 1015 root          1   4    0    13M   984K   0B select   2   0:00   0.00% dhclient
 1000 _dhcp         1  22    0    13M  1092K   0B select   1   0:00   0.00% dhclient
  200 root          1  52    0    12M    24K   0B pause    2   0:00   0.00% adjkerntz
 1037 _dhcp         1  24    0    13M  1084K   0B select   2   0:00   0.00% dhclient
  496 root          1  52    0    13M   896K   0B select   1   0:00   0.00% rtsold
  498 root          1  52    0    13M   896K   0B select   1   0:00   0.00% rtsold
  499 root          1  52    0    13M   896K   0B select   1   0:00   0.00% rtsold
 1499 _sndio        1  52  -20    14M   880K   0B select   0   0:00   0.00% sndiod

%
```


----------



## VladiBG (Nov 28, 2021)

What is the output of `swapinfo -k`?


----------



## grahamperrin@ (Nov 28, 2021)

```
% swapinfo -k
Device          1K-blocks     Used    Avail Capacity
/dev/ada0p2.eli  16777216  3632220 13144996    22%
% swapinfo -hk
Device              Size     Used    Avail Capacity
/dev/ada0p2.eli      16G     3.5G      13G    22%
% top -n -w -o swap
last pid: 22479;  load averages:  1.06,  1.43,  1.46; battery: 98%  up 0+13:41:46    17:51:27
165 processes: 1 running, 163 sleeping, 1 zombie
CPU: 14.9% user,  0.2% nice,  5.3% system,  0.2% interrupt, 79.4% idle
Mem: 4730M Active, 5212M Inact, 1811M Laundry, 3672M Wired, 340M Free
ARC: 1347M Total, 475M MFU, 90M MRU, 1429K Anon, 111M Header, 670M Other
     113M Compressed, 453M Uncompressed, 3.99:1 Ratio
Swap: 16G Total, 3547M Used, 13G Free, 21% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES SWAP STATE    C   TIME    WCPU COMMAND
63920 grahamperr   24  24    0  3345M   747M   0B select   2   3:28   5.37% firefox
45881 grahamperr   25  22    0  3623M   747M   0B select   0  10:03   4.20% firefox
20102 grahamperr   24  21    0  3179M   638M   0B select   2   2:25   2.29% firefox
 6797 grahamperr   26  21    0  4192M   747M   0B select   1  56:45   2.20% firefox
 6049 grahamperr   25  21    0  3402M   661M   0B select   1   6:17   1.56% firefox
 2274 root          8  20    0   276M    66M   0B select   1  23:41   1.17% Xorg
 4814 grahamperr   25  20    0  7253M  2623M   0B select   3  59:16   0.68% firefox
 4806 grahamperr  164  20    0  7061M  2179M   0B select   2 108:40   0.49% firefox
 2419 grahamperr    5  21    0    90M    18M   0B select   0   6:41   0.49% gkrellm
 1870 netdata      20  52   19   197M    91M   0B pause    1   5:57   0.49% netdata
 2328 grahamperr   16  20    0  1017M   205M   0B select   1  18:22   0.29% kwin_x11
66461 grahamperr   24  20    0  3115M   623M   0B select   3   0:28   0.20% firefox
12579 grahamperr   24  20    0  3729M   661M   0B select   3  11:51   0.10% firefox
 2475 grahamperr   27  52    0  5784M   710M   0B uwait    2   6:58   0.00% java
 2380 grahamperr   31  20    0  1113M   208M   0B select   1   4:49   0.00% plasmashell
 4822 grahamperr   20  20    0  3309M   542M   0B select   1   1:51   0.00% firefox
 3493 grahamperr   12  20    0   407M    82M   0B select   0   1:47   0.00% konsole
 1952 netdata       1  40   19    20M  4012K   0B nanslp   2   1:22   0.00% apps.plugin

%
```

If relevant:



Spoiler: uname -KU, zpool iostat -v, zfs-stats -L





```
% uname -KU
1400041 1400041
% zpool iostat -v
                         capacity     operations     bandwidth  
pool                   alloc   free   read  write   read  write
---------------------  -----  -----  -----  -----  -----  -----
Transcend               364G  99.8G      0      0    123     99
  gpt/Transcend         364G  99.8G      0      0    123     99
cache                      -      -      -      -      -      -
  gpt/cache-transcend  14.4G  58.0M      0      0    211     45
---------------------  -----  -----  -----  -----  -----  -----
august                  246G   666G      8     26   236K   786K
  ada0p3.eli            246G   666G      8     26   236K   786K
cache                      -      -      -      -      -      -
  gpt/cache-august     1.95G  26.9G      7      0   349K  56.3K
  gpt/duracell         13.5G  1.96G     15      0   597K  63.4K
---------------------  -----  -----  -----  -----  -----  -----
% zpool status -x
all pools are healthy
% zfs-stats -L

------------------------------------------------------------------------
ZFS Subsystem Report                            Sun Nov 28 17:54:45 2021
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
        Low Memory Aborts:                      17.12   k
        Free on Write:                          14.44   k
        R/W Clashes:                            30
        Bad Checksums:                          0
        IO Errors:                              0

L2 ARC Size: (Adaptive)                         29.79   GiB
        Decompressed Data Size:                 68.16   GiB
        Compression Factor:                     2.29
        Header Size:                    0.14%   98.27   MiB

L2 ARC Evicts:
        Lock Retries:                           1
        Upon Reading:                           0

L2 ARC Breakdown:                               1.60    m
        Hit Ratio:                      74.01%  1.18    m
        Miss Ratio:                     25.99%  414.62  k
        Feeds:                                  31.80   k

L2 ARC Writes:
        Writes Sent:                    100.00% 7.71    k

------------------------------------------------------------------------

%
```


----------



## VladiBG (Nov 28, 2021)

Most likely it's used by your ARC. Check `zfs-stats -a`.
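If the ARC does turn out to be the culprit, it can be capped at boot (abishai already does this earlier in the thread). A minimal config sketch, assuming the stock FreeBSD tunable; the value is purely illustrative and should leave room for the bhyve VMs and jails:

```shell
# /boot/loader.conf -- illustrative ARC cap; takes effect after reboot.
# The current ARC size can be checked with:
#   sysctl kstat.zfs.misc.arcstats.size
vfs.zfs.arc_max="20G"
```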


----------



## grahamperrin@ (Nov 28, 2021)

```
% zfs-stats -A

------------------------------------------------------------------------
ZFS Subsystem Report                            Sun Nov 28 18:35:33 2021
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                1.04    m
        Mutex Misses:                           616.74  k
        Evict Skips:                            11.37   m

ARC Size:                               8.95%   1.33    GiB
        Target Size: (Adaptive)         3.33%   508.28  MiB
        Min Size (Hard Limit):          3.33%   508.28  MiB
        Max Size (High Water):          29:1    14.88   GiB
        Compressed Data Size:                   114.46  MiB
        Decompressed Data Size:                 463.46  MiB
        Compression Factor:                     4.05

ARC Size Breakdown:
        Recently Used Cache Size:       2.33%   31.80   MiB
        Frequently Used Cache Size:     97.67%  1.30    GiB

ARC Hash Breakdown:
        Elements Max:                           1.22    m
        Elements Current:               91.92%  1.12    m
        Collisions:                             1.63    m
        Chain Max:                              7
        Chains:                                 211.59  k

------------------------------------------------------------------------
```


----------



## mer (Nov 28, 2021)

Alain De Vos 
Garbage collection. It's not always "give it enough run time"; one often has to tune it for the usage patterns. I've run into cases where the bursty nature of the inputs caused problems: steady state was fine, but bursty? Not a chance. I needed to tune the GC to deal with it.


----------



## covacat (Nov 29, 2021)

`top -w` only shows swap for swapped-out processes (the ones that appear in angle brackets, like `<cron>`).
Try
`sysctl vm.overcommit=1`
In the default mode you can allocate whatever you want, and you only get killed when you try to use it.
The test was done on a VM with 1GB RAM + 2GB swap.
I had no problem calloc'ing 8GB (the process was only killed when it tried to access the memory),
and even the RSS in top showed 8GB.

```
vm.overcommit: 0 -> 0
[titus@luxe ~]$ ./b
Allocated 80 * 100M chunks
reached chunk 0
reached chunk 1
reached chunk 2
reached chunk 3
reached chunk 4
reached chunk 5
reached chunk 6
reached chunk 7
Killed
[titus@luxe ~]$ sudo sysctl vm.overcommit=1
vm.overcommit: 0 -> 1
[titus@luxe ~]$ ./b
Allocated 6 * 100M chunks
reached chunk 0
reached chunk 1
reached chunk 2
reached chunk 3
reached chunk 4
reached chunk 5
```
the test program (b.c)

```
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char *b[80];
    int i, j;

    /* Allocate up to 80 chunks of 100 MB each; calloc only reserves
     * address space here, no physical pages are committed yet. */
    for (i = 0; i < 80; i++) {
        b[i] = calloc(100, 1000 * 1000);
        if (!b[i])
            break;
        usleep(1000);
    }
    printf("Allocated %d * 100M chunks\n", i);

    /* Touch every page; with vm.overcommit=0 this is where the
     * process gets killed once RAM + swap are exhausted. */
    for (j = 0; j < i; j++) {
        memset(b[j], 44, 100 * 1000 * 1000);
        printf("reached chunk %d\n", j);
    }
    return 0;
}
```


----------



## abishai (Dec 2, 2021)

I've left the system running for some time and I think the Java GC was the reason for the swap usage. With `-Xmx` set, the Java daemons consume only a fraction of the memory observed before. Plus, I've upgraded Java to version 11.
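For anyone hitting the same thing: by default the JVM derives its maximum heap from physical RAM (typically a quarter of it), so on a 96GB host each uncapped daemon can balloon. A sketch of the fix as a service config fragment; the daemon name and sizes are made up for illustration, and `-Xmx` should be sized to the service's real working set:

```shell
# /etc/rc.conf.d/mydaemon (hypothetical service) -- sizes illustrative.
# -Xmx bounds the heap so GC growth cannot push the host into swap;
# -Xms presizes it to avoid repeated heap resizing.
mydaemon_flags="-Xms512m -Xmx2g"
```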


```
last pid: 89085;  load averages:  9,76,  8,18,  7,70  up 4+16:47:16    09:14:51
349 processes: 1 running, 348 sleeping
CPU: 32,6% user,  0,0% nice, 12,7% system,  1,0% interrupt, 53,7% idle
Mem: 7247M Active, 55G Inact, 13M Laundry, 22G Wired, 6140M Free
ARC: 4190M Total, 1761M MFU, 729M MRU, 17M Anon, 150M Header, 1532M Other
     951M Compressed, 2853M Uncompressed, 3,00:1 Ratio
Swap: 32G Total, 32G Free
```

The ARC doesn't fill up though.


```
ARC Misc:
    Deleted:                96386896
    Recycle Misses:                0
    Mutex Misses:                904670
    Evict Skips:                904670

ARC Size:
    Current Size (arcsize):        41,12%    4210,95M
    Target Size (Adaptive, c):    41,93%    4294,09M
    Min Size (Hard Limit, c_min):    29,98%    3070,49M
    Max Size (High Water, c_max):    ~3:1    10240,00M

ARC Size Breakdown:
    Recently Used Cache Size (p):    24,27%    1042,28M
    Freq. Used Cache Size (c-p):    75,72%    3251,81M

ARC Hash Breakdown:
    Elements Max:                5855744
    Elements Current:        20,90%    1224302
    Collisions:                25077261
    Chain Max:                0
    Chains:                    42520

ARC Eviction Statistics:
    Evicts Total:                985430173696
    Evicts Eligible for L2:        91,29%    899645513728
    Evicts Ineligible for L2:    8,70%    85784659968
    Evicts Cached to L2:            1752593637376

ARC Efficiency
    Cache Access Total:            4023014453
    Cache Hit Ratio:        97,43%    3919829152
    Cache Miss Ratio:        2,56%    103185301
    Actual Hit Ratio:        97,28%    3913976248

    Data Demand Efficiency:        92,53%
    Data Prefetch Efficiency:    9,67%

    CACHE HITS BY CACHE LIST:
      Most Recently Used (mru):    3,87%    151721138
      Most Frequently Used (mfu):    95,98%    3762255110
      MRU Ghost (mru_ghost):    0,08%    3160694
      MFU Ghost (mfu_ghost):    0,17%    6999636

    CACHE HITS BY DATA TYPE:
      Demand Data:            5,86%    229969904
      Prefetch Data:        0,06%    2648781
      Demand Metadata:        90,99%    3567022591
      Prefetch Metadata:        3,06%    120187876

    CACHE MISSES BY DATA TYPE:
      Demand Data:            17,98%    18555527
      Prefetch Data:        23,97%    24735746
      Demand Metadata:        16,31%    16833167
      Prefetch Metadata:        41,73%    43060861
------------------------------------------------------------------------
```

The L2ARC looks completely useless, though.

```
------------------------------------------------------------------------
L2 ARC Summary:
    Low Memory Aborts:            1
    R/W Clashes:                0
    Free on Write:                75439

L2 ARC Size:
    Current Size: (Adaptive)        75595,54M
    Header Size:            0,11%    90,30M

L2 ARC Evicts:
    Lock Retries:                407
    Upon Reading:                278

L2 ARC Read/Write Activity:
    Bytes Written:                376200,72M
    Bytes Read:                96269,17M

L2 ARC Breakdown:
    Access Total:                102563566
    Hit Ratio:            17,51%    17963579
    Miss Ratio:            82,48%    84599987
    Feeds:                    429657

    WRITES:
      Sent Total:            100,00%    364929
------------------------------------------------------------------------
```


----------



## mer (Dec 2, 2021)

A cache only helps reads, so a lot of its usefulness depends on the exact usage pattern.
The ZFS cache hierarchy is roughly "MRU promotes to MFU, evictions fall to L2ARC". If you look at the efficiency section, you have a hit ratio of 97%, with most of it coming from MFU. It doesn't matter whether the ARC is full or not; the usage pattern tells me most of your read data is reread, not immediately, but often enough that it moves to MFU, where it keeps getting hit.

And since your ARC is not close to full and most reads are satisfied from ARC/MFU, the L2ARC is not really going to be used. So it's not "useless", it's "not used because it's not needed".

Your findings on tweaking the GC and upgrading Java mirror what I've seen in the past.


----------

