# Why "out of swap space", when I don't have any?



## littlesandra88 (Sep 25, 2013)

Hello =)

I have 32 GB RAM on my FreeBSD 9.2-PRERELEASE storage hosts and no swap space. I don't have any memory-expensive tasks running like de-duplication, but SSH and other daemons are being killed now and then.


```
Sep  1 04:50:04 example kernel: pid 99217 (ssh), uid 0, was killed: out of swap space
```

All it does is ZFS, NFSv4, and Samba3. Does it have something to do with caches, where FreeBSD uses a percentage of my memory for them and expects me to also have swap? Is there a way to tell FreeBSD I don't have any swap? The host is running on a 14GB SATADOM, so adding swap is not really an option...

Hugs,
Sandra =)


----------



## sossego (Sep 25, 2013)

http://lists.freebsd.org/pipermail/freebsd-current/2012-August/036114.html

I was bored.


----------



## littlesandra88 (Sep 25, 2013)

So it is a bug then... Do you know if there is a fix for it (besides adding a swap file)?


----------



## cpm@ (Sep 25, 2013)

littlesandra88 said:
> So it is a bug then... Do you know if there is a fix for it (besides adding a swap file)?



Seems that FreeBSD kills processes it really shouldn't (see the link below). You got it: add a swap file. Sometimes it's preferable to use a swap file instead of a swap partition, and your setup (running ZFS) is a perfect case for that.

http://www.tolaris.com/2013/05/18/enable-swap-on-nas4free/
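For reference, a minimal sketch of setting up a swap file on a ZFS pool, along the lines of the linked article. The pool mountpoint `/tank` and the 4 GB size are assumptions; adjust both to your system:

```shell
# Create a fully-allocated (non-sparse) 4 GB swap file on the ZFS pool.
# Writing real zeros with dd ensures the blocks are actually allocated.
dd if=/dev/zero of=/tank/swapfile bs=1m count=4096
chmod 0600 /tank/swapfile

# Attach the file as a vnode-backed memory disk and enable swapping on it.
mdconfig -a -t vnode -f /tank/swapfile -u 0
swapon /dev/md0

# Verify that the new swap space is visible.
swapinfo -h
```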


----------



## Crivens (Sep 25, 2013)

I am not sure this is a bug. The kernel asks for contiguous memory, but there are pages in the way. To get at those pages, the kernel either has to write them out or kill the process that owns them. Moving the pages elsewhere would be a solution, but that kind of memory defragmentation is not implemented, if I remember correctly.


----------



## littlesandra88 (Sep 25, 2013)

Thanks a lot. I'll create a ZFS file system on the storage array and place the swap file there =)


----------



## kpa (Sep 25, 2013)

Do you have any virtual memory related tunings set in loader.conf(5)?


----------



## littlesandra88 (Sep 26, 2013)

In order to get Jumbo Frames to work I did


```
echo 'kern.ipc.nmbclusters="32768"' >> /boot/loader.conf

echo 'kern.ipc.maxsockbuf=16777216' >> /etc/sysctl.conf
echo 'net.inet.tcp.sendspace=262144' >> /etc/sysctl.conf
echo 'net.inet.tcp.recvspace=262144' >> /etc/sysctl.conf
echo 'net.inet.tcp.rfc1323=1' >> /etc/sysctl.conf
echo 'net.inet.tcp.sendbuf_max=16777216' >> /etc/sysctl.conf
echo 'net.inet.tcp.recvbuf_max=16777216' >> /etc/sysctl.conf
```

but besides that, I haven't tuned anything. Is there something you would recommend?

I have now created a 128GB swap file, as I have 64GB of RAM..
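(For anyone following along: on FreeBSD 9.x a swap file can be made persistent across reboots with an rc.conf(5) knob; the path below is an assumption:)

```shell
# rc.d/addswap picks this up at boot and runs mdconfig/swapon for you
echo 'swapfile="/tank/swapfile"' >> /etc/rc.conf
```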


----------



## wblock@ (Sep 26, 2013)

The old "double RAM" rule is outdated.  Swap file size does not have to be that big.  Estimating how much space to use is tricky.  The nice thing about a swap file is that the size is easy to change after you get an idea of how much space it really needs.

Note: do not use a sparse file!  Also, disable crash dumps!  If the system crashes, writing to a filesystem is not safe.
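A quick way to check both points (a sketch; the swap file path is an assumption):

```shell
# A sparse file reports fewer blocks allocated than its logical size;
# for a swap file, "du -h" and "ls -lh" should roughly agree.
ls -lh /tank/swapfile
du -h /tank/swapfile

# Disable kernel crash dumps so the kernel never tries to dump to
# filesystem-backed swap.
echo 'dumpdev="NO"' >> /etc/rc.conf
```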


----------



## SirDice (Sep 26, 2013)

Yeah, that double rule is a bit old. I do recommend always creating swap, even if you don't actually need it. To not let the space go to waste, I'd recommend using tmpfs(5) so you can use it as a /tmp/ filesystem. Early implementations had some issues when combined with ZFS (both would fight over the memory), but this should be solved. I've been using tmpfs(5) with ZFS for quite some time now without any issues.
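A typical fstab(5) entry for a size-capped tmpfs /tmp looks something like this (the 4g cap is an assumption; size it for your workload):

```
tmpfs  /tmp  tmpfs  rw,mode=01777,size=4g  0  0
```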


----------



## Simba7 (Sep 27, 2013)

I've actually used, at most, a 4GB swap. That's all you really need.

The "double RAM" rule is a bit old. It only applied to systems with <=2GB of RAM (so at most 4GB of swap). Any more than that and it becomes an annoyance and a waste of space.


----------



## kpa (Sep 27, 2013)

littlesandra88 said:

> ...
> but besides that, I haven't tuned anything. Are there something you would recommend?
> 
> I have now created a 128GB swap file, as I have 64GB of RAM..



Those are fine. I was interested to see whether you had set any vm.kmem_size or vm.kmem_size_max tunings, which many people mistakenly do when they see that those values default to hundreds of gigabytes and assume the numbers are absolute amounts of memory.

Changing those settings without fully understanding what they do can cause memory fragmentation when the kernel memory map is too small, and eventually a system crash when the kernel runs out of large enough contiguous areas of memory.
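On a box where nothing has been tuned, you can see those (perfectly normal) large defaults with sysctl(8):

```shell
# Virtual size of the kernel memory map -- not physical RAM consumed
sysctl vm.kmem_size vm.kmem_size_max

# How much of the map is actually in use / free
sysctl vm.kmem_map_size vm.kmem_map_free
```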


----------



## Crivens (Sep 27, 2013)

wblock@ said:

> Note: do not use a sparse file!  Also, disable crash dumps!  If the system crashes, writing to a filesystem is not safe.



In case of crashes, you can write the crash dump to another device. You could even dedicate some old 100GB ATA drive to this while your swap space lives on the ZFS pool (much faster). You might even get away with configuring crash dumps to be written to a USB stick.
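In rc.conf(5) terms that would be something like the following (the device name is an assumption; use whatever your spare drive shows up as):

```shell
# Dump kernel crashes to a dedicated raw device, not the ZFS-backed swap,
# and have savecore(8) collect the dump into /var/crash on reboot.
echo 'dumpdev="/dev/ada1"' >> /etc/rc.conf
echo 'dumpdir="/var/crash"' >> /etc/rc.conf
```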


----------

