# Updating existing ZFS layout to beadm



## KdeBruin (Feb 9, 2014)

As I'm interested in upgrading to FreeBSD 10 I have looked at several threads with stories of upgrade successes and failures. One possible way to "easily" upgrade is to use sysutils/beadm. But my current ZFS layout is not compatible with `beadm`, so I was wondering: is it possible to convert an existing ZFS layout to a layout compatible with `beadm`, or should I cut my losses and do a re-install of the FreeBSD 9.2 system to get the proper ZFS layout?

My current ZFS layout is:


```
zroot                      29.4G  87.8G   537M  legacy
zroot/swap                 8.25G  95.6G   458M  -
zroot/tmp                   515M  87.8G   515M  /tmp
zroot/usr                  16.7G  87.8G  9.26G  /usr
zroot/usr/home             4.41G  87.8G  4.41G  /usr/home
zroot/usr/ports            2.48G  87.8G   906M  /usr/ports
zroot/usr/ports/distfiles  1.38G  87.8G  1.38G  /usr/ports/distfiles
zroot/usr/ports/packages    223M  87.8G   223M  /usr/ports/packages
zroot/usr/src               532M  87.8G   532M  /usr/src
zroot/var                  3.40G  87.8G   169M  /var
zroot/var/crash             148K  87.8G   148K  /var/crash
zroot/var/db               3.23G  87.8G  3.20G  /var/db
zroot/var/db/pkg           31.6M  87.8G  31.6M  /var/db/pkg
zroot/var/empty             144K  87.8G   144K  /var/empty
zroot/var/log              1.43M  87.8G  1.43M  /var/log
zroot/var/mail              148K  87.8G   148K  /var/mail
zroot/var/run               352K  87.8G   352K  /var/run
zroot/var/tmp               304K  87.8G   304K  /var/tmp
```


----------



## asteriskRoss (Feb 13, 2014)

I'm not sure if a transition to a sysutils/beadm layout before upgrade will be "easy" but it's certainly possible.  Creating the basis for the structure with `# zfs create -o canmount=off -o mountpoint=none zroot/ROOT` and then shuffling your existing ZFS filesystems into order with a mixture of `# zfs rename <existing-filesystem> <beadm-friendly-filesystem>` and `# zfs set mountpoint=<required mountpoint> <beadm-friendly-filesystem>` would get you most of the way there, but unfortunately not all the way.

The tricky one is the root filesystem, for which you're using zroot itself.  I chewed over something like cloning zroot to zroot/ROOT/default (or similar) and then promoting the clone.  For sysutils/beadm you will need to stop using /etc/fstab and set the bootfs property on the pool to your root filesystem (something like `# zpool set bootfs=zroot/ROOT/default zroot`).  However, I couldn't figure out a way to eliminate the unwanted dependency on the zroot filesystem itself: unlike every other ZFS filesystem, you can't destroy the filesystem that shares its name with the pool, and the sysutils/beadm layout leaves that filesystem empty and unmountable.

The best solution I can come up with is to use another ZFS pool for temporary storage of your filesystems, then destroy your existing pool and create a new one in its place.  If you had an external drive, you could boot a live CD, create a temporary pool on the external disk, snapshot and send your existing filesystems over to it, create a new pool on your hard disk, and send the individual filesystems back into the structure you want.  It's not exactly hassle (or risk) free, but it might be less onerous than doing a reinstall from scratch, depending on how many configuration files you need to edit and ports you need to install.
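As a rough, untested sketch of that round trip (device, pool and snapshot names here are illustrative, and you'd want an independent backup before trying it), booted from a live CD it might look something like:

```
# temporary pool on the external disk (adjust the device name)
zpool create tpool /dev/da0
# snapshot everything recursively and replicate it over;
# recv -u avoids mounting the copies over the live system
zfs snapshot -r zroot@evacuate
zfs send -R zroot@evacuate | zfs recv -u tpool/backup
# ...then destroy zroot, recreate it with the beadm layout,
# and send each filesystem back individually, e.g.:
zfs send tpool/backup/usr/home@evacuate | zfs recv zroot/home
```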

This may be enough to prompt someone else to suggest something more straightforward...


----------



## KdeBruin (Feb 14, 2014)

Thanks asteriskRoss for the detailed description. I'm just a hobby sysadmin tinkering with a personal server, so I will probably create a copy of the complete system and do a fresh install of FreeBSD 10 using the proper sysutils/beadm layout. The number of ports installed is minimal, and I will first play with a VM install to get more familiar with jails, so I can move some services into jails and see how that works.


----------



## _martin (Feb 14, 2014)

Indeed it's possible. Basically what @asteriskRoss said. 
The question is: what needs to be in the boot environment? You can certainly choose what you want; I'd go the /kind of/ Solaris way and not create too much tree hierarchy under the root FS. 

Using your example: 

```
zroot/usr/ports            2.48G  87.8G   906M  /usr/ports
zroot/usr/ports/distfiles  1.38G  87.8G  1.38G  /usr/ports/distfiles
zroot/usr/ports/packages    223M  87.8G   223M  /usr/ports/packages
```

Would be: 

```
zroot/ports                2.48G  87.8G   906M  /usr/ports
zroot/distfiles            1.38G  87.8G  1.38G  /usr/ports/distfiles
zroot/packages              223M  87.8G   223M  /usr/ports/packages
```
and hence not included in the boot environment. You can still back it up with a snapshot; it's just not affected by boot environments. 
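With the existing datasets, that restructuring could be done with renames plus explicit mountpoints, roughly like this (untested sketch; the children are renamed out first so they end up flat under zroot, matching the layout above):

```
zfs rename zroot/usr/ports/distfiles zroot/distfiles
zfs set mountpoint=/usr/ports/distfiles zroot/distfiles
zfs rename zroot/usr/ports/packages zroot/packages
zfs set mountpoint=/usr/ports/packages zroot/packages
zfs rename zroot/usr/ports zroot/ports
zfs set mountpoint=/usr/ports zroot/ports
```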

If I simplify your setup a bit to the following (note my root pool is named rpool): 


```
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               1.07G  18.5G   413M  legacy
rpool/tmp             35K  18.5G    35K  /tmp
rpool/usr            267M  18.5G   267M  /usr
rpool/usr/home        31K  18.5G    31K  /usr/home
rpool/var            238K  18.5G   238K  /var
```

The following can be done to migrate it to a `beadm`-compatible layout (with one reboot afterwards): 

0) create rpool/ROOT and do a snapshot of / 

```
zfs create -o mountpoint=none rpool/ROOT
zfs snapshot rpool@migration
```
1) copy / data 

```
zfs send rpool@migration | zfs recv rpool/ROOT/current
zfs set mountpoint=/a rpool/ROOT/current
zfs destroy rpool/ROOT/current@migration
```
2) copy /usr

```
zfs snapshot -r rpool/usr@migration
zfs send -R rpool/usr@migration | zfs recv rpool/ROOT/current/usr
zfs set mountpoint=/a/usr rpool/ROOT/current/usr
zfs destroy rpool/ROOT/current/usr@migration
zfs destroy rpool/ROOT/current/usr/home@migration
```
3) copy /var

```
zfs send rpool/var@migration | zfs recv rpool/ROOT/current/var
zfs destroy rpool/ROOT/current/var@migration
```
4) now we need to modify loader.conf and the bootfs property (note this is done on the new dataset mounted under /a)

```
# cat /a/boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:rpool/ROOT/current"
#
zpool set bootfs=rpool/ROOT/current rpool
```
5) tidy up

```
zfs set mountpoint=none rpool/usr
zfs set mountpoint=none rpool/var
zfs set mountpoint=legacy rpool/ROOT/current
zfs set mountpoint=none rpool
```
6) and reboot

```
reboot
```

After reboot check: 


```
# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        1.33G  18.2G   413M  none
rpool/ROOT                    680M  18.2G    31K  none
rpool/ROOT/current            680M  18.2G   413M  legacy
rpool/ROOT/current/usr        267M  18.2G   267M  /usr
rpool/ROOT/current/usr/home    31K  18.2G    31K  /usr/home
rpool/ROOT/current/var        346K  18.2G   272K  /var
rpool/tmp                      35K  18.2G    35K  /tmp
rpool/usr                     267M  18.2G   267M  none
rpool/usr/home                 31K  18.2G    31K  none
rpool/var                     296K  18.2G   240K  none
#
# zfs list -t snapshot
NAME                               USED  AVAIL  REFER  MOUNTPOINT
rpool@migration                   35.5K      -   413M  -
rpool/usr@migration                   0      -   267M  -
rpool/usr/home@migration              0      -    31K  -
rpool/var@migration                 56K      -   244K  -
#
```

Now you can remove leftovers: 


```
zfs destroy -r rpool/usr
zfs destroy -r rpool/var
```

And beadm works now too: 

```
# beadm list
BE      Active Mountpoint  Space Created
current NR     /          720.7M 2014-02-14 10:45
#
```


```
# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        1.11G  18.5G   413M  none
rpool/ROOT                    724M  18.5G    31K  none
rpool/ROOT/current            724M  18.5G   413M  legacy
rpool/ROOT/current/usr        275M  18.5G   275M  /usr
rpool/ROOT/current/usr/home    31K  18.5G    31K  /usr/home
rpool/ROOT/current/var       36.2M  18.5G  36.2M  /var
rpool/tmp                      35K  18.5G    35K  /tmp
#
```

Frankly, I'd move rpool/ROOT/current/usr/home to rpool/home. But I kept it in place as an example, to show you how to migrate datasets that have children.
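If you want to do the same, moving it out afterwards could be as simple as (sketch; assuming nothing is holding /usr/home open, since a rename has to unmount and remount the dataset):

```
zfs rename rpool/ROOT/current/usr/home rpool/home
zfs set mountpoint=/usr/home rpool/home
```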


----------



## KdeBruin (Feb 15, 2014)

Another big thanks from my side. The step-by-step instructions are easy to follow and I will try this tomorrow when I (hopefully) have some more time.


----------



## KdeBruin (Feb 16, 2014)

I've followed the steps, and with some additional commands to move the /usr/home, /usr/ports, /usr/ports/distfiles and /usr/ports/packages file systems out of the managed boot environment, I mostly succeeded. The new set of file systems:


```
# zfs list
zroot                   25.7G  91.5G   539M  none
zroot/ROOT              10.0G  91.5G   144K  none
zroot/ROOT/current      10.0G  91.5G   538M  /a
zroot/ROOT/current/usr  9.32G  91.5G  9.32G  /usr
zroot/ROOT/current/var   174M  91.5G   174M  /var
zroot/distfiles         1.29G  91.5G  1.29G  /usr/ports/distfiles
zroot/home              4.45G  91.5G  4.45G  /usr/home
zroot/packages           220M  91.5G   220M  /usr/ports/packages
zroot/ports              908M  91.5G   908M  /usr/ports
zroot/swap              8.25G  99.3G   458M  -
```

Also `beadm` now lists the proper boot environment:


```
# beadm list
BE      Active Mountpoint  Space Created
current NR     /           10.0G 2014-02-16 10:38
```

But I cannot change the mount point for zroot/ROOT/current to legacy. I get the following error:


```
# zfs set mountpoint=legacy zroot/ROOT/current
cannot unmount '/': Invalid argument
```

I got this error both before and after a reboot. I will try booting from a live CD and see if I can change it. Everything is working as it should but it is rather annoying that it doesn't show the proper mount point.


----------



## KdeBruin (Feb 16, 2014)

Using the FreeBSD 10 live CD I was able to change the mount point to legacy. But now I get the following error when I try to do completion in a non-home directory, e.g.


```
# cd /var
# ls <TAB>
-bash: cannot create temp file for here-document: Permission denied
```

So, when I press the tab key at the location of <TAB> I get the message directly after it. I guess there are some permission problems (duh..) but I cannot figure out what they might be.


----------



## _martin (Feb 16, 2014)

It's interesting that you hit that problem with the legacy mountpoint on zroot/ROOT/current. Hard to say why; I would have to see it myself. 
The second issue is probably permissions on /var/tmp. You had it on a separate dataset before; that's why you lost the permissions. 

Do: 


```
chmod 1777 /var/tmp
chown root:wheel /var/tmp
```
You can also check whether /tmp is OK too.
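A quick way to check both at once: the mode should be 1777 (shown as drwxrwxrwt) with owner root:wheel:

```
ls -ld /tmp /var/tmp
# if /tmp is wrong as well:
chmod 1777 /tmp
chown root:wheel /tmp
```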


----------



## KdeBruin (Feb 16, 2014)

OK, some more searching led to the following command sequence to restore the default file permissions:


```
# mtree -U -p / -f /etc/mtree/BSD.root.dist
# mtree -U -p /usr -f /etc/mtree/BSD.usr.dist
# mtree -U -p /usr/include -f /etc/mtree/BSD.include.dist
# mtree -U -p /var -f /etc/mtree/BSD.var.dist
```

Several missing directories were created and also some permissions fixed. The above problems are now fixed and I'm a happy man.

Now off to write a script to compare the permissions of the running system against those of my backup...


----------

