# SSD + HDD question, 2 pools (Desktop)



## hedgehog (Sep 29, 2013)

Hi everyone.

I finally decided to give SSD a try and ordered Crucial M4 (128GB). My current ZFS setup:


```
$ zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zroot              321G   132G  11,2G  legacy
zroot/jails       18,9G   132G  18,9G  /jails
zroot/media        165G   132G   165G  /media
zroot/tmp         27,9M   132G  27,9M  /tmp
zroot/usr          117G   132G  21,6G  /usr
zroot/usr/home    88,3G   132G  88,3G  /usr/home
zroot/usr/local   6,95G   132G  6,95G  /usr/local
zroot/usr/src      374M   132G   374M  /usr/src
zroot/var         8,85G   132G  5,86G  /var
zroot/var/db      2,32G   132G  2,25G  /var/db
zroot/var/db/pkg  71,3M   132G  71,3M  /var/db/pkg
zroot/var/empty     21K   132G    21K  /var/empty
zroot/var/log      105M   132G   105M  /var/log
zroot/var/tmp      576M   132G   576M  /var/tmp
```

I want to move everything into a second pool on SSD, leaving the following filesystems on the old HDD:

- /var
- /media (currently my storage dataset)
- /jails
- /usr/home
- /usr/ports (actually I might drop it altogether, as I build ports in jails)

Also, I'm going to move /tmp to tmpfs(5), since I'm purchasing additional RAM as well (16 GB in total).
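For the tmpfs part, a single fstab(5) entry is enough; a minimal sketch (the 2g size cap and mode are my assumptions, tune to taste):

```shell
# /etc/fstab entry: mount a RAM-backed /tmp, capped at 2 GB, world-writable with sticky bit
tmpfs  /tmp  tmpfs  rw,mode=1777,size=2g  0  0
```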

The general idea is to have the system and third-party software on the SSD to speed things up, and maybe a few games too, while the rest of the data is kept on the HDD. I'm also thinking of keeping the SSD pool's snapshots on the HDD.

Do you think this would be a good setup? I tried it out within a virtual machine, and it seems quite easy to manipulate ZFS datasets and move them between physical drives.
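The moving itself is a recursive snapshot piped into the new pool; a rough sketch of what I did in the VM, assuming the new SSD pool is called `ssd` (pool, dataset, and snapshot names are just placeholders):

```shell
# Take a recursive snapshot of the dataset to migrate
zfs snapshot -r zroot/usr/local@migrate

# Replicate it (with properties) into the SSD pool; -u avoids mounting over the live dataset
zfs send -R zroot/usr/local@migrate | zfs receive -u ssd/usr/local

# After verifying the copy, switch the mountpoint and remove the old dataset
zfs set mountpoint=/usr/local ssd/usr/local
zfs destroy -r zroot/usr/local
```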


----------



## tyson (Sep 29, 2013)

SSD for root is a big speedup. I'm using two pools right now: one on a 120 GB SSD, the other on a geli-encrypted 2 TB drive.


```
% zfs list    
NAME                USED  AVAIL  REFER  MOUNTPOINT
pub                 477G  1.32T   470G  /home/pub           # my main data storage
pub/DISTFILES      6.55G  1.32T  6.55G  /usr/ports/distfiles
pub/PACKAGES       83.3M  1.32T  83.3M  /usr/ports/packages
pub/jails           296K  1.32T   152K  /usr/jails          # will grow when i add some jails ;]
ssd                34.4G  74.9G   144K  none
ssd/HOME           18.3G  74.9G  18.3G  /home
ssd/PORTS          2.01G  74.9G  1.62G  /usr/ports
ssd/ROOT           10.9G  74.9G  10.9G  legacy              # / + all ports for consistency
ssd/SRC            1.10G  74.9G   152K  none
ssd/SRC/current    1.10G  74.9G  1.10G  /usr/src
ssd/swap0          2.06G  75.5G  1.52G  -
```
Obvious pool names are just easier to remember. I'm testing -CURRENT right now (which is installed on the ssd/ROOT dataset with all the ports I use), but I can install 9.2 in the other dataset if I want. Keeping ports and source on the SSD makes checkouts really fast, and compression helps conserve space. I don't use tmpfs right now, but maybe I will after adding some more RAM.
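Enabling that compression is just a dataset property; a minimal sketch (lz4 is my assumption here, older pool versions may only offer lzjb or gzip):

```shell
# Enable compression on the ports and source datasets
zfs set compression=lz4 ssd/PORTS
zfs set compression=lz4 ssd/SRC

# Check how much space it actually saves
zfs get compressratio ssd/PORTS ssd/SRC
```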

P.S. I'm keeping /usr/local on the same dataset as the currently installed OS; it gives me the opportunity to run different FreeBSD versions, and keeps ports synced with the system they were compiled on.


----------



## bthomson (Sep 30, 2013)

hedgehog said:

> Also I'm thinking of keeping SSD pool's snapshot on HDD too.



Yes, that's what I did. I use `zfs send`/`zfs receive` to copy the new snapshots and data to the hard disk once per day. Then you can delete the old snapshots on the SSD side and keep them on disk.
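A rough sketch of such a daily job, assuming pools named `ssd` and `hdd` with date-stamped recursive snapshots (all names are illustrative, not a definitive script):

```shell
#!/bin/sh
# Daily incremental replication from the SSD pool to the HDD pool
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -v-1d +%Y%m%d)   # BSD date syntax

# Snapshot everything on the SSD pool
zfs snapshot -r "ssd@${TODAY}"

# Send only the changes since yesterday's snapshot to the HDD pool
zfs send -R -i "ssd@${YESTERDAY}" "ssd@${TODAY}" | zfs receive -duF hdd/backup

# Old snapshots can then be dropped on the SSD side but kept on the HDD
zfs destroy -r "ssd@${YESTERDAY}"
```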

You might experience performance degradation after a while; some SSDs don't perform as well once they fill up. TRIM with ZFS is not supported until FreeBSD 10, I think.


----------



## hedgehog (Sep 30, 2013)

tyson said:

> Obvious pools names are just easier to remember.


Right, so I'm going to rename zroot to something like _storage_ or _hdd_.
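Renaming a pool is just an export followed by an import under the new name; a rough sketch (note: if you still boot from zroot, the `bootfs` property and /boot/loader.conf would need updating too, so this is safest after the system has moved to the SSD pool):

```shell
# Export the pool, then re-import it under the new name
zpool export zroot
zpool import zroot storage
```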



			
bthomson said:

> You might experience performance degradation after a while, some SSD doesn't work as well after it gets filled up. TRIM with ZFS is not supported until FreeBSD 10, I think.


As far as I know, TRIM has been supported since FreeBSD 9.2-BETA1. I'm currently running FreeBSD 9.2-RC3, and the vfs.zfs.trim sysctls are enabled by default. I can't tell you the exact option's name as I'm not at home at the moment, but I can provide the sysctl(8) output later if you're interested.

Thank you for responses.


----------



## hedgehog (Oct 1, 2013)

As stated in the FreeBSD 9.2 release notes:



> The ZFS filesystem now supports TRIM when used on solid state drives. ZFS TRIM support is enabled by default. [r251419] The following tunables have been added:
> 
> vfs.zfs.trim.enabled: Enable ZFS TRIM
> vfs.zfs.trim.max_interval: Maximum interval in seconds between TRIM queue processing
> ...



So TRIM is not an issue, I believe.
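Checking it on a running system is straightforward; something like:

```shell
# Query the TRIM tunables named in the release notes
sysctl vfs.zfs.trim.enabled
sysctl vfs.zfs.trim.max_interval
```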


----------

