# ZFS on root newbie question



## mrmarria (Apr 5, 2014)

I installed FreeBSD 10 from a memory stick image using the ZFS-on-root option onto two SSD drives, which went smoothly. After reading more I will add a third SSD.

I'm having a little trouble comprehending what I am looking at; my last system was release 6.

If I try to see whether TRIM is active through, say, tunefs(8) or other tools, I can't seem to identify what the drives actually are.
This also means I cannot figure out how to mount these drives from a live CD boot, to perform recovery and the like, because I can't identify them properly.


```
# mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var on /var (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
```


```
# zfs mount
zroot/ROOT/default              /
zroot/tmp                       /tmp
zroot/usr/home                  /usr/home
zroot/usr/ports                 /usr/ports
zroot/usr/src                   /usr/src
zroot/var                       /var
zroot/var/crash                 /var/crash
zroot/var/log                   /var/log
zroot/var/mail                  /var/mail
zroot/var/tmp                   /var/tmp
```



```
# zpool get all
```
 gives me a lot of interesting info, as does 

```
# egrep 'ada[0-9]' /var/run/dmesg.boot
ada0 at ahcich2 bus 0 scbus0 target 0 lun 0
ada0: <Samsung SSD 840 EVO 120GB EXT0BB6Q> ATA-9 SATA 3.x device
ada0: Serial Number S1D5NSAF371827E
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 114473MB (234441648 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich4 bus 0 scbus1 target 0 lun 0
ada1: <Samsung SSD 840 EVO 120GB EXT0BB6Q> ATA-9 SATA 3.x device
ada1: Serial Number S1D5NSAF371702X
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada1: Command Queueing enabled
ada1: 114473MB (234441648 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
```

But I am at a loss as to how to use this information to either find the state of TRIM or to mount the drives when booting from an alternate (live CD) environment for future maintenance.

I thought about going back to adding ZFS to a single-drive booted system, but I like the idea of being fully mirrored, and am hoping that release 10 has gotten past many of the issues I have been reading about. A lot of the documentation does not seem to have caught up, so it's hard to tell.

Any pointers greatly appreciated!

OK! I found an answer to half my question.
Checking TRIM status for ZFS is covered in the post "FreeBSD 10 Trim for ZFS" by meteor8488 (08 Feb 2014, 18:04):

```
# sysctl vfs.zfs.trim
# sysctl -d kstat.zfs.misc.zio_trim
# sysctl -a | grep _trim
```
This answers the TRIM question.

Still stuck on how to understand the mounts on this system, or how to mount everything from a live CD boot.


----------



## asteriskRoss (Apr 15, 2014)

Booting from a live CD, you could import the ZFS pool with the command `zpool import -f zroot`.  Since, by default, your ZFS datasets are mounted automatically, you probably want to import the pool with an alternate root using the command `zpool import -R /mnt -f zroot`.  This will give you access to the ZFS pool, which in your case is mirrored across both your disks.  Your system's filesystems would then be accessible under /mnt.  Is that what you meant?  Or have I misunderstood?
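A minimal recovery sketch from the live environment might look like this (the pool name `zroot` matches your `mount` output; the `kldload` step is only needed if the ZFS module isn't already loaded):

```
# Boot the FreeBSD install media and choose the Live CD / shell option, then:

# Load the ZFS kernel module if it isn't loaded yet
kldload zfs

# Import the root pool under an alternate root so its datasets
# mount beneath /mnt instead of over the live environment
zpool import -R /mnt -f zroot

# The installed system's filesystems are now visible for repair work
ls /mnt

# When finished, export the pool cleanly before rebooting
zpool export zroot
```

The `-f` forces the import because the pool was last used by a different system (your installed one); `-R /mnt` keeps dataset mountpoints like `/tmp` and `/var` from shadowing the live CD's own directories.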


----------



## usdmatt (Apr 15, 2014)

`zpool status` will give a nice overview of the disks in the pool and the configuration (mirrors, stripes, etc).

Just to expand on the previous answer, a zpool is a group of disks (or possibly one disk) that contain one or more file systems. The entire pool can be imported or exported on a system using the `zpool import/export poolname` commands. You don't have to tell it which disks to 'mount' manually, it will search all connected disks to find the relevant pool itself.

When a pool is imported, by default all the file systems on it are mounted. As mentioned, in your case with a root pool this could cause filesystems on the pool to get mounted over the top of your live environment, so you should use the -R option during import to temporarily have all the filesystems mounted under a different path. For example, `zpool import -R /mnt -f zroot` would cause your zroot/tmp filesystem, which has a mountpoint of /tmp, to be mounted under /mnt/tmp instead.
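For a two-disk mirrored root pool like the one described above, `zpool status` output would look roughly like the following. This is illustrative only: the partition names (`ada0p3`, `ada1p3`) assume the default bsdinstall ZFS layout and may differ on your system.

```
# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0

errors: No known data errors
```

The `mirror-0` line shows that the two disks form a mirror vdev, which is how you confirm the pool really is fully mirrored across `ada0` and `ada1`.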


----------

