# ZFS and tmpfs together



## ctengel (Mar 13, 2012)

I have read a few places that tmpfs and ZFS on the same box is a bad idea.  However in all that I've found, it predates 9.0-RELEASE.  Today now I see some recommending it, but I'd just like to know if this was in fact an issue, and if so, has it been fixed.

Thanks!


----------



## SirDice (Mar 13, 2012)

Works just fine. Never heard of any issues with the combination. tmpfs(5) is still considered experimental though.


```
root@williscorto:~#df -h
Filesystem                         Size    Used   Avail Capacity  Mounted on
zroot                              211G    766M    210G     0%    /
devfs                              1.0k    1.0k      0B   100%    /dev
tmpfs                              3.5G     12k    3.5G     0%    /tmp
procfs                             4.0k    4.0k      0B   100%    /proc
fdescfs                            1.0k    1.0k      0B   100%    /dev/fd
linprocfs                          4.0k    4.0k      0B   100%    /compat/linux/proc
zroot/usr                          213G    3.2G    210G     2%    /usr
zroot/usr/home                     220G     10G    210G     5%    /usr/home
zroot/var                          210G     34M    210G     0%    /var
zroot/var/log                      210G     13M    210G     0%    /var/log
```
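For reference, a tmpfs /tmp like the one above is usually enabled with a single fstab(5) line. This is a minimal sketch; the size cap is optional and the value here is only an example:

```
# /etc/fstab: mount a tmpfs on /tmp (mode 1777, optional size cap)
tmpfs   /tmp   tmpfs   rw,mode=1777,size=4g   0   0
```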


----------



## vermaden (Mar 13, 2012)

SirDice said:

> Works just fine. Never heard of any issues with the combination. tmpfs(5) is still considered experimental though.



I remember a quote from *phoenix* saying there are issues when you run out of *Inact*/*Cache*/*Free* memory with both tmpfs and ZFS; as long as you have any of those available, there are no issues.


----------



## SirDice (Mar 13, 2012)

This system only has 2GB of memory. It sometimes starts swapping. Never had any problems though.


----------



## vermaden (Mar 13, 2012)

I also haven't had any problems using tmpfs with ZFS; I was just passing along *phoenix*'s experiences.


----------



## phoenix (Mar 13, 2012)

That was with putting swap on ZFS volumes, not tmpfs.


----------



## vermaden (Mar 13, 2012)

phoenix said:

> That was with putting swap on ZFS volumes, not tmpfs.



True.

Then what was the problem with tmpfs and ZFS going together?


----------



## ondra_knezour (Mar 14, 2012)

Maybe this one?

http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html


----------



## SirDice (Mar 14, 2012)

Interesting... Never noticed this before...


```
# Just after a reboot
dice@williscorto:~>df -h /tmp
Filesystem                    Size    Used   Avail Capacity  Mounted on
tmpfs                         5.6G    8.0k    5.6G     0%    /tmp

# Generate a large random file
dice@williscorto:~>openssl rand 2000000000 > test.ran
dice@williscorto:~>ll -h test.ran
-rw-r--r--  1 dice  dice   1.9G Mar 14 11:32 test.ran

# Check again
dice@williscorto:~>df -h /tmp/
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         3.3G    8.0k    3.3G     0%    /tmp
# Notice the size difference?

dice@williscorto:~>swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/corto-swap   4194304        0  4194304     0%

dice@williscorto:~>cp test.ran /tmp/
dice@williscorto:~>df -h /tmp/
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         5.6G    1.9G    3.7G    33%    /tmp
# Size is back to 'normal'

dice@williscorto:~>swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/corto-swap   4194304   287784  3906520     7%

dice@williscorto:~>uname -a
FreeBSD williscorto.dicelan.home 9.0-STABLE FreeBSD 9.0-STABLE #0: Wed Jan 25 13:03:03 CET 2012     root@molly.dicelan.home:/usr/obj/usr/src/sys/CORTO  amd64
```

Besides the fluctuating size of /tmp, it all seemed to work. I'll see if I can do the same test with an even larger file (bigger than my internal memory).


----------



## fluca1978 (Mar 14, 2012)

Sorry, but what is the difference (if any) between tmpfs(5) and mdmfs(8)?


----------



## SirDice (Mar 14, 2012)

fluca1978 said:

> Sorry, but what is the difference (if any) between tmpfs(5) and mdmfs(8)?



tmpfs(5) uses memory dynamically, mdmfs(8) creates a static ram disk of a certain size.
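To illustrate the difference (a sketch only; the mount point and size are made up), tmpfs grows and shrinks with demand, while mdmfs(8) carves out a fixed-size memory disk up front and puts a UFS filesystem on it:

```
# tmpfs: dynamic, sized on demand (optionally capped with -o size=...)
mount -t tmpfs tmpfs /mnt/scratch

# mdmfs: a fixed 128 MB swap-backed memory disk, newfs'd and mounted
mdmfs -s 128m md /mnt/scratch
```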


----------



## ctengel (Mar 15, 2012)

@SirDice

Hmm OK.  Having messed around a lot with the ZFS ARC on Solaris, the drop in tmpfs size doesn't surprise me at all.  ZFS will generally try to cache whatever it can (up to the limit you set on the ARC).
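On FreeBSD that ARC limit is set with a loader(8) tunable; a minimal sketch (the value here is only an example, tune it to your workload):

```
# /boot/loader.conf: cap the ZFS ARC so it leaves memory for tmpfs
vfs.zfs.arc_max="1G"
```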

What surprises me a lot, though, is how it went back up after copying, not moving (if it was a move, I'd understand).

I'm thinking maybe I should just go with a tmpmfs (which is mdmfs(8)-based, IIRC) instead of tmpfs(5).  Just need to find a good size.


----------



## RobRobertson (Apr 9, 2012)

I just got a tmpfs "out of space" error after exercising ZFS.

I'm running 9.0/amd64.

So the issue is still there.


----------



## SirDice (Apr 9, 2012)

This patch may help a little. It got committed to 9-STABLE.

http://lists.freebsd.org/pipermail/svn-src-stable-9/2012-April/001440.html


----------



## Martillo1 (Dec 22, 2012)

What if I run on ZFS and want to use tmpfs(5) for ports compiling and devel/ccache temporary work directories? Feasible? Advantageous?

Temporary work directories in memory work very well on UFS, avoiding writes and speeding up compilation, but I am not so sure it will work as well on ZFS, given its memory needs.
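For what it's worth, a minimal sketch of that setup, assuming a tmpfs is already mounted on /tmp (WRKDIRPREFIX is a standard ports variable; the paths are examples):

```
# /etc/make.conf: build ports under the memory-backed /tmp
WRKDIRPREFIX=/tmp
```

The ccache cache directory itself should stay on disk (e.g. CCACHE_DIR=/var/cache/ccache) so the cache survives reboots; only the build work directories benefit from living in memory.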


----------

