# Best measurement/metric for process memory use



## scotia (May 25, 2020)

Hi all,

I'm trying to troubleshoot an out-of-memory/out-of-swap issue and I'm targeting a couple of processes I think are at fault.

I'm using `procstat` to periodically get data from the processes.  Specifically 'RES' and 'PRES' (I'm summing these).

Is this the best approach to identify the memory usage of processes?

Thanks.


----------



## phalange (May 27, 2020)

I use procstat too, and htop or even top.


----------



## gpw928 (May 27, 2020)

procstat(1) has a lot of finesse that you probably don't need at the outset.

Out of memory situations usually have a pretty obvious culprit, which you just need to observe.

Don't forget to look in /var/log/messages for clues.

Toggle the sort order of top(1) by typing "o" (for order) and, at the prompt, "res" (resident memory) or "size" (total memory size).  Then just watch...

Or in non-interactive mode, you can log a time series with something like:

```
n=100                               # take 100 samples
while [ $n -gt 0 ]
do
    top -b -o res 5 >>/tmp/top.log  # -b: batch output; top 5 processes by RES
    sync                            # flush the log to disk
    sleep 5
    n=$((n-1))
done
```


----------



## scotia (May 27, 2020)

gpw928 said:


> procstat(1) has a lot of finesse that you probably don't need at the outset.



I've been summing the 'RES' and 'PRES' columns, basically:

```
for p in `/usr/bin/pgrep Plex` ; do /usr/bin/procstat -v $p ; done | awk '{sum_res+=$5;sum_pres+=$6}END{print sum_res","sum_pres}'
```
It sums all of the matching Plex processes.

I then plot this in Grafana against any interesting swap events in /var/log/messages:

*(attached graph: summed RES+PRES for the Plex processes over time, with a swap event marked)*
The blue line near the middle is a message in the logs about swap.  Specifically:

```
2020-05-27T03:19:28+10:00 10.1.6.34 kernel: pid 51451 (Plex Script Host), jid 0, uid 972, was killed: out of swap space
2020-05-27T03:19:28+10:00 10.1.6.34 kernel: pid 51451 (Plex Script Host), jid 0, uid 972, was killed: out of swap space
2020-05-27T03:19:31+10:00 10.1.6.34 kernel: pid 691 (Plex Media Server), jid 0, uid 972, was killed: out of swap space
2020-05-27T03:19:31+10:00 10.1.6.34 kernel: pid 691 (Plex Media Server), jid 0, uid 972, was killed: out of swap space
```

I'm doing some maths in Grafana, multiplying the values by 4096, because I believe the page size is 4096 bytes and procstat is showing page counts:


```
RES     resident pages
PRES    private resident pages
```
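For what it's worth, rather than assuming 4096, the page size can be queried (on FreeBSD, `sysctl -n hw.pagesize`; `getconf PAGESIZE` is the portable equivalent). A quick sketch of the conversion:

```
pagesize=`getconf PAGESIZE`     # 4096 on amd64; sysctl -n hw.pagesize also works
# convert a summed RES page count (e.g. 52776 pages) to MiB
echo 52776 | awk -v ps=$pagesize '{ printf "%.1f MiB\n", $1 * ps / 1048576 }'
# -> 206.2 MiB (with 4 KiB pages)
```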

I get the feeling I shouldn't be looking at PRES, as there's no way the processes are consuming 15GB RAM.

If I only look at the RES data it makes more sense (blue dotted lines are swap failures). The graph below covers a longer time period than the one above and shows a few swap events:

*(attached graph: summed RES for the Plex processes over time, with swap events marked)*

I see now, too, that I've mislabelled PRES in the graph as "Perm." rather than "Private".


----------



## gpw928 (May 27, 2020)

You seem to have identified the culprit(s).  What are you trying to discern?


----------



## scotia (May 27, 2020)

gpw928 said:


> You seem to have identified the culprit(s). What are you trying to discern?



Well, yes, most likely, but if anyone could enlighten me on what the difference between RES and PRES is, I'd appreciate it.

Also, does


```
2020-05-27T03:19:28+10:00 10.1.6.34 kernel: pid 51451 (Plex Script Host), jid 0, uid 972, was killed: out of swap space
```

mean that that PID was the one requesting more memory, or that it was killed because another application (or the system) was requesting more memory?

Cheers


----------



## gpw928 (May 27, 2020)

scotia said:


> Well, yes, most likely, but if anyone could enlighten me on what the difference between RES and PRES is, I'd appreciate it.


procstat(1) says that PRES is "private resident pages".  Private means unshared.  There are a few ways that pages get shared (e.g. fork(2) initially shares all pages with the child, and shared memory segments exist) but I don't understand the _exact_ mechanism(s) by which a page is marked PRES.   A good reference would help.


scotia said:


> ```
> 2020-05-27T03:19:28+10:00 10.1.6.34 kernel: pid 51451 (Plex Script Host), jid 0, uid 972, was killed: out of swap space
> ```
> mean that that PID was the one requesting more memory, or that it was killed because of another application or system was requesting more memory?


It means that pid 51451 was being very naughty.  So, usually, yes, it was the culprit.  But if there was a lot of simultaneous bad behaviour, there may have been additional culprits.

I would be looking at RES, in the first instance, because any misbehaving process will very likely be growing its impure data area.  That will manifest as RESident memory growth.
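A simple way to watch that growth for a single suspect process is to log its RSS at intervals (ps(1) reports rss in kilobytes); a rough sketch:

```
pid=$1                              # PID of the suspect process
while kill -0 "$pid" 2>/dev/null    # loop while the process still exists
do
    printf '%s %s\n' "`date +%s`" "`ps -o rss= -p $pid`"
    sleep 5
done
```

Plotting the second column over time makes steady resident-memory growth obvious.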


----------



## scotia (May 28, 2020)

gpw928 said:


> procstat(1) says that PRES is "private resident pages".  Private means unshared.  There are a few ways that pages get shared (e.g. fork(2) initially shares all pages with the child, and shared memory segments exist) but I don't understand the _exact_ mechanism(s) by which a page is marked PRES.   A good reference would help.
> 
> It means that pid 51451 was being very naughty.  So, usually, yes, it was the culprit.  But if there was a lot of simultaneous bad behaviour, there may have been additional culprits.
> 
> I would be looking at RES, in the first instance, because any misbehaving process will very likely be growing its impure data area.  That will manifest as RESident memory growth.



Thanks gpw928

Just on PRES, my samples show it is always much bigger than RES.

For some processes it is often double:

```
# for p in `/usr/bin/pgrep sshd` ; do /usr/bin/procstat -v $p ; done | awk '{sum_res+=$5;sum_pres+=$6}END{print sum_res","sum_pres}'
2010,4121
# for p in `/usr/bin/pgrep syslogd` ; do /usr/bin/procstat -v $p ; done | awk '{sum_res+=$5;sum_pres+=$6}END{print sum_res","sum_pres}'
566,1099
# for p in `/usr/bin/pgrep ntpd` ; do /usr/bin/procstat -v $p ; done | awk '{sum_res+=$5;sum_pres+=$6}END{print sum_res","sum_pres}'
4127,9277
```

For `sshd` for example the bulk of all of those pages are for linked libraries (ld-elf, libutil, libc).

For Plex however the PRES/RES ratio is normally an order of magnitude:

```
# for p in `/usr/bin/pgrep Plex` ; do /usr/bin/procstat -v $p ; done | awk '{sum_res+=$5;sum_pres+=$6}END{print sum_res","sum_pres}'
52776,386853
```

Specifically for the Plex Media Server process there are many entries with no PATH but large differences between RES and PRES:


```
PID              START                END PRT  RES PRES REF SHD FLAG  TP PATH  
38606        0x80d275000        0x80d276000 r--    1    2   3   1 CN--- vn /usr/lib/i18n/libmapper_none.so.4
38606        0x80d276000        0x80d277000 r-x    1    2   3   1 CN--- vn /usr/lib/i18n/libmapper_none.so.4
38606        0x80d277000        0x80d278000 rw-    0    0   2   0 C---- vn /usr/lib/i18n/libmapper_none.so.4
38606        0x80d278000        0x80d279000 r--    0    0   2   0 C---- vn /usr/lib/i18n/libmapper_none.so.4
38606        0x80d279000        0x80d27c000 rw-    3    3   1   0 ----- df
38606        0x80d27c000        0x80d2d9000 rw-   72 2284 234   0 ----- sw
38606        0x80d2db000        0x80d2e3000 rw-    8 2284 234   0 ----- sw
38606        0x80d2e5000        0x80d2f4000 rw-   15 2284 234   0 ----- sw
38606        0x80d2f6000        0x80d30c000 rw-   18 2284 234   0 ----- sw
38606        0x80d30d000        0x80d318000 rw-   11 2284 234   0 ----- sw
38606        0x80d31e000        0x80d321000 rw-    3 2284 234   0 ----- sw
38606        0x80d323000        0x80d327000 rw-    4 2284 234   0 ----- sw
38606        0x80d333000        0x80d33f000 rw-   12 2284 234   0 ----- sw
38606        0x80d340000        0x80d34a000 rw-    4 2284 234   0 ----- sw
38606        0x80d34b000        0x80d358000 rw-   13 2284 234   0 ----- sw

and so on ...
```

Could this simply be FreeBSD preemptively swapping out the unused pages of `/usr/lib/i18n/libmapper_none.so.4`?

Currently top says:

```
63 processes:  2 running, 60 sleeping, 1 waiting
CPU: 21.9% user,  0.0% nice, 16.4% system,  0.0% interrupt, 61.7% idle
Mem: 53M Active, 629M Inact, 172K Laundry, 484M Wired, 200M Buf, 799M Free
Swap: 1024M Total, 852M Used, 172M Free, 83% Inuse
```

And swapinfo:

```
# swapinfo -h
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/swapfs   1048576     852M     172M    83%
```

There doesn't seem to be pressure on RAM pushing stuff to swap, at least not at that point in time.

Thoughts?
Thanks


----------



## gpw928 (May 28, 2020)

First observation is you need more swap...  If you don't have a disk partition available, add a swap file.
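Per the FreeBSD Handbook, a swap file can be added roughly like this (the path and size here are just examples):

```
# create and protect a 1 GB swap file
dd if=/dev/zero of=/usr/swap0 bs=1m count=1024
chmod 0600 /usr/swap0
# add an md(4)-backed swap entry to fstab, then activate late swap
echo 'md99 none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab
swapon -aL
```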

All your swapped areas are tagged "rw-", so I expect that they are some kind of data segments, allocated by the Plex Media Server (and, as you observe, swapped out).

Adding up all the rows from "procstat -v" does not necessarily make sense because you are multi-counting shared libraries.

You have to look closely at the "VM object type" shown under the TP column to see what sort of object you have.
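For example, you could sum RES only over the anonymous mappings (TP of "df" or "sw" in your output, i.e. field 10), which avoids multi-counting the shared vnode-backed ("vn") library pages. A sketch of the idea, not something I've validated:

```
# sum RES pages for anonymous (default/swap-backed) mappings only,
# skipping vnode-backed library pages that are shared between processes
for p in `pgrep Plex` ; do procstat -v $p ; done | \
    awk '$10 == "df" || $10 == "sw" { sum += $5 } END { print sum }'
```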

However, I'm out of my depth on the fine details of virtual memory management, and don't understand what all the virtual object types are, nor what the exact definitions of RES and PRES are.

You can't get PRES out of top(1), so I would think it's not critical to the issue.

I think if you watch RES from top(1), you will easily identify processes causing you to run out of memory.


----------



## crypt47 (Feb 9, 2022)

As is mentioned only in the middle of the thread, procstat shows page counts (in units of vm.stats.vm.v_page_size, i.e. 4096 bytes), so the script should look more like:


```
for p in `/usr/bin/pgrep $progname` ; do /usr/bin/procstat -v $p ; done | awk '{sum_res+=$5;}END{print sum_res*4096/1024}'
```

The resulting value (in kilobytes) roughly corresponds to top's RES column and the `ps -o rss` value.


```
for p in `pgrep $progname` ; do /usr/bin/procstat -v $p |grep sw; done | awk '{sum_res+=$6;}END{print sum_res*4096/1024}'
```
will probably show the swap usage for the program, though I'm not sure it captures everything.


----------



## grahamperrin@ (Feb 9, 2022)

scotia said:


> …/out-of-swap …



<https://github.com/freebsd/freebsd-src/commit/4a864f624a7097f1d032a0350ac70fa6c371179e?diff=split> (2022-01-14)



> *vm_pageout: Print a more accurate message to the console before an OOM kill*
> 
> Previously we'd always print "out of swap space."  This can be misleading, as there are other reasons an OOM kill can be triggered.  In particular, it's entirely possible to trigger an OOM kill on a system with plenty of free swap space.


----------

