# HOWTO: FreeBSD ZFS Madness



## vermaden (Apr 26, 2012)

*0. This is SPARTA!*

Some time ago I found a good, reliable way of using and installing FreeBSD and described it in my _Modern FreeBSD Install_ *[1] [2]* HOWTO. Now, more than a year later, I come back with my experiences about that setup and a proposal of a newer and probably better way of doing it.

*1. Introduction*

Same as a year ago, I assume that You would want to create a fresh installation of FreeBSD using one or more hard disks, both with (laptops) and without GELI based full disk encryption.

This guide was written when FreeBSD 9.0 and 8.3 were available and definitely works for 9.0, but I did not try all this on the older 8.3. If You find some issues on 8.3, let me know and I will try to address them in this guide.

Earlier, I was not that confident about booting from the ZFS pool, but there is one very neat feature that made me think a ZFS boot is now mandatory. If You just smiled, You know that I am thinking about the _Boot Environments_ feature from Illumos/Solaris systems.

In case You are not familiar with the _Boot Environments_ feature, check the _Managing Boot Environments with Solaris 11 Express_ PDF white paper *[3]*. Illumos/Solaris has the `beadm(1M)` *[4]* utility, and while Philipp Wuensche wrote the manageBE script as a replacement *[5]*, it uses an older style from the times when OpenSolaris (and SUN) were still having a great time.
I spent the last couple of days writing an up-to-date, FreeBSD compatible replacement, the beadm utility, and with some tweaks from today I just made it available at _SourceForge_ *[6]* if you wish to test it. Currently it's about 200 lines long, so it should be pretty simple to take a look at it. I tried to make it as compatible as possible with the 'upstream' version, along with some small improvements; it currently supports basic functions like list, create, destroy and activate.


```
# beadm
usage:
  beadm activate <beName>
  beadm create [-e nonActiveBe | -e beName@snapshot] <beName>
  beadm create <beName@snapshot>
  beadm destroy [-F] <beName | beName@snapshot>
  beadm list [-a] [-s] [-D] [-H]
  beadm rename <origBeName> <newBeName>
  beadm mount <beName> [mountpoint]
  beadm { umount | unmount } [-f] <beName>
```

There are several subtle differences between my implementation and Philipp's. He defines and then relies upon a ZFS property called freebsd:boot-environment=1 for each boot environment; I do not set any additional ZFS properties. There is already an org.freebsd:swap property used for SWAP on FreeBSD, so we may use org.freebsd:be in the future, but that is just a thought, right now it's not used. My version also supports activating boot environments received with the zfs recv command from other systems (it just updates the appropriate /boot/zfs/zpool.cache file).
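If you want to check which properties (like the org.freebsd:swap one mentioned above) are set locally on a dataset rather than inherited, ZFS can filter by source; the dataset name below just follows the examples from this HOWTO:

```
# zfs get -s local all sys/ROOT/default
```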

My implementation is also style compatible with the current Illumos/Solaris beadm(1M), as in the example below.

```
# beadm create -e default upgrade-test
Created successfully

# beadm list
BE           Active Mountpoint Space Policy Created
default      N      /          1.06M static 2012-02-03 15:08
upgrade-test R      -           560M static 2012-04-24 22:22
new          -      -             8K static 2012-04-24 23:40

# zfs list -r sys/ROOT
NAME                    USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT                562M  8.15G   144K  none
sys/ROOT/default       1.48M  8.15G   558M  legacy
sys/ROOT/new              8K  8.15G   558M  none
sys/ROOT/upgrade-test   560M  8.15G   558M  none

# beadm activate default
Activated successfully

# beadm list
BE           Active Mountpoint Space Policy Created
default      NR     /          1.06M static 2012-02-03 15:08
upgrade-test -      -           560M static 2012-04-24 22:22
new          -      -             8K static 2012-04-24 23:40
```
The boot environments are located in the same place as in Illumos/Solaris, under pool/ROOT/environment.

*2. Now You're Thinking with Portals*

The main purpose of the _Boot Environments_ concept is to make all risky tasks harmless, to provide an easy way back from possible troubles. Think about upgrading the system to a newer version, an update of 30+ installed packages to the latest versions, testing software or various solutions before taking the final decision, and much more. All these tasks are now harmless thanks to the _Boot Environments_, but this is just the tip of the iceberg.

You can now move a desired boot environment to another machine, physical or virtual, and check how it will behave there, check support on different hardware for example, or make a painless hardware upgrade. You may also clone Your desired boot environment and ... start it as a Jail for some more experiments, or move Your old physical server install into a FreeBSD Jail because it's not that heavily used anymore but it still has to be available.

Another good example may be a freshly created server on Your laptop inside a VirtualBox virtual machine. After you finish the creation process and tests, You may move this boot environment to the real server and put it into production. Or even move it into a VMware ESX/vSphere virtual machine and use it there.

As You see, the possibilities with _Boot Environments_ are unlimited.

*3. The Install Process*

I created 3 possible schemes which should cover most demands; choose one and continue to the next step.

*3.1. Server with Two Disks*

I assume that this server has 2 disks and we will create a ZFS mirror across them, so if one of them fails the system will still work as usual. I also assume that these disks are ada0 and ada1. If You have SCSI/SAS drives there, they may be named da0 and da1 respectively. The procedures below will wipe all data on these disks, You have been warned.


```
1. Boot from the FreeBSD USB/DVD.
 2. Select the 'Live CD' option.
 3. login: root
 4. # sh
 5. # DISKS="ada0 ada1"
 6. # for I in ${DISKS}; do
    > NUMBER=$( echo ${I} | tr -c -d '0-9' )
    > gpart destroy -F ${I}
    > gpart create -s GPT ${I}
    > gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
    > gpart add -t freebsd-zfs -l sys${NUMBER} ${I}
    > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
    > done
 7. # zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/sys*
 8. # zfs set mountpoint=none sys
 9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # cd /usr/freebsd-dist/
15. # for I in base.txz kernel.txz; do
    > tar --unlink -xvpJf ${I} -C /mnt
    > done
16. # cp /tmp/zpool.cache /mnt/boot/zfs/
17. # cat << EOF >> /mnt/boot/loader.conf
    > zfs_load=YES
    > vfs.root.mountfrom="zfs:sys/ROOT/default"
    > EOF
18. # cat << EOF >> /mnt/etc/rc.conf
    > zfs_enable=YES
    > EOF
19. # :> /mnt/etc/fstab
20. # zfs umount -a
21. # zfs set mountpoint=legacy sys/ROOT/default
22. # reboot
```

After these instructions and a reboot we have these GPT partitions available; this example is on a 512MB disk.


```
# gpart show
=>     34  1048509  ada0  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

=>     34  1048509  ada1  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: bootcode1
   label: sys1

# zpool status
  pool: sys
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        sys           ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys0  ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0

errors: No known data errors
```

*3.2. Server with One Disk*

If Your server configuration has only one disk, let's assume it's ada0, then You need to change steps *5.* and *7.*; use these instead of the ones above.


```
5. # DISKS="ada0"
7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys*
```

All other steps are the same.


----------



## vermaden (Apr 26, 2012)

*3.3. Road Warrior Laptop*

The procedure is quite different for a laptop because we will use the full disk encryption mechanism provided by GELI and then set up the ZFS pool. It's not currently possible to boot from a ZFS pool on top of an encrypted GELI provider, so we will use a setup similar to the _Server with ..._ one, but with an additional local pool for the /home and /root partitions. It will be password based and you will be asked to type in that password at every boot. The install process is generally the same, with new instructions added for the GELI encrypted local pool.


```
1. Boot from the FreeBSD USB/DVD.
 2. Select the 'Live CD' option.
 3. login: root
 4. # sh
 5. # DISKS="ada0"
 6. # for I in ${DISKS}; do
    > NUMBER=$( echo ${I} | tr -c -d '0-9' )
    > gpart destroy -F ${I}
    > gpart create -s GPT ${I}
    > gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
    > gpart add -t freebsd-zfs -l sys${NUMBER} -s 10G ${I}
    > gpart add -t freebsd-zfs -l local${NUMBER} ${I}
    > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
    > done
 7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys0
 8. # zfs set mountpoint=none sys
 9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # geli init -b -s 4096 -e AES-CBC -l 128 /dev/gpt/local0
15. # geli attach /dev/gpt/local0
16. # zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli
17. # zfs set mountpoint=none local
18. # zfs set checksum=fletcher4 local
19. # zfs set atime=off local
20. # zfs create local/home
21. # zfs create -o mountpoint=/mnt/root local/root
22. # cd /usr/freebsd-dist/
23. # for I in base.txz kernel.txz; do
    > tar --unlink -xvpJf ${I} -C /mnt
    > done
24. # cp /tmp/zpool.cache /mnt/boot/zfs/
25. # cat << EOF >> /mnt/boot/loader.conf
    > zfs_load=YES
    > geom_eli_load=YES
    > vfs.root.mountfrom="zfs:sys/ROOT/default"
    > EOF
26. # cat << EOF >> /mnt/etc/rc.conf
    > zfs_enable=YES
    > EOF
27. # :> /mnt/etc/fstab
28. # zfs umount -a
29. # zfs set mountpoint=legacy sys/ROOT/default
30. # zfs set mountpoint=/home local/home
31. # zfs set mountpoint=/root local/root
32. # reboot
```
After these instructions and a reboot we have these GPT partitions available; this example is on a 4GB disk.


```
# gpart show
=>     34  8388541  ada0  GPT  (4.0G)
       34      256     1  freebsd-boot  (128k)
      290  2097152     2  freebsd-zfs  (1.0G)
  2097442  6291133     3  freebsd-zfs  (3G)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: local0

# zpool status
  pool: local
 state: ONLINE
 scan: none requested
config:

        NAME              STATE    READ WRITE CKSUM
        local             ONLINE      0     0     0
          gpt/local0.eli  ONLINE      0     0     0

errors: No known data errors

  pool: sys
 state: ONLINE
 scan: none requested
config:

        NAME        STATE    READ WRITE CKSUM
        sys         ONLINE      0     0     0
          gpt/sys0  ONLINE      0     0     0

errors: No known data errors
```

*4. Basic Setup after Install*

1. Login as *root* with an empty password.

```
login: root
password: [ENTER]
```

2. Create initial *snapshot* after install.
`# zfs snapshot -r sys/ROOT/default@install`

3. Set a new *root* password.
`# passwd`

4. Set machine's *hostname*.
`# echo hostname=hostname.domain.com >> /etc/rc.conf`

5. Set proper *timezone*.
`# tzsetup`

6. Add some *swap* space.
If you used the _Server with ..._ type, then use this to add swap.


```
# zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none sys/swap
# swapon /dev/zvol/sys/swap
```

If you used the _Road Warrior Laptop_ one, then use the one below; this way the swap space will also be encrypted.


```
# zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none local/swap
# swapon /dev/zvol/local/swap
```
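Whichever variant you used, swapinfo(8) should now list the ZVOL backed device; the output below is just an illustration and the exact numbers will differ:

```
# swapinfo
Device             1K-blocks     Used    Avail Capacity
/dev/zvol/sys/swap   1048576        0  1048576     0%
```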

7. Create a *snapshot* called configured or production.
After you have configured your fresh FreeBSD system and added the needed packages and services, create a snapshot called configured or production, so if you mess something up, you can always go back in time to bring the working configuration back.

`# zfs snapshot -r sys/ROOT/default@configured`
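If you later need to go back to that state, a minimal sketch looks like the one below; be aware that zfs rollback -r destroys all snapshots newer than the given one, and rolling back the currently running root is safest done from the Live CD:

```
# zfs rollback -r sys/ROOT/default@configured
# reboot
```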

*5. Enable Boot Environments*

Here are some simple instructions on how to download and enable the beadm command line utility for easy _Boot Environments_ administration.


```
# fetch -o /usr/sbin/beadm https://downloads.sourceforge.net/project/beadm/beadm
# chmod +x /usr/sbin/beadm
# rehash
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           592M static 2012-04-25 02:03
```

*6. WYSIWTF*

Now that we have a working ZFS only FreeBSD system, I will put some examples here of what you can now do with this type of installation and, of course, the _Boot Environments_ feature.

*6.1. Create New Boot Environment Before Upgrade*

1. Create new environment from the current one.

```
# beadm create upgrade
Created successfully
```

2. Activate it.

```
# beadm activate upgrade
Activated successfully
```

3. Reboot into it.

```
# shutdown -r now
```

4. Mess with it.

You are now free to do anything you like for the upgrade process; even if you break everything, you still have the working default environment.

*6.2. Perform Upgrade within a Jail*

This concept is about creating a new boot environment from the desired one, let's call it jailed, then starting that new environment inside a FreeBSD jail and performing the upgrade there. After you have finished all tasks related to this upgrade and you are satisfied with the achieved results, shut down that Jail, activate the just upgraded boot environment called jailed, and reboot into the upgraded system without any risks.

1. Create new boot environment called jailed.

```
# beadm create -e default jailed
Created successfully
```

2. Create /usr/jails directory.

```
# mkdir /usr/jails
```

3. Set mount point of new boot environment to /usr/jails/jailed dir.

```
# zfs set mountpoint=/usr/jails/jailed sys/ROOT/jailed
```

3.1. Prevent the new jail dataset from being mounted automatically (it will be mounted manually in the next step).

```
# zfs set canmount=noauto sys/ROOT/jailed
```

3.2. Mount new Jail dataset.

```
# zfs mount sys/ROOT/jailed
```

4. Enable FreeBSD Jails mechanism and the jailed jail in /etc/rc.conf file.

```
# cat << EOF >> /etc/rc.conf
> jail_enable=YES
> jail_list="jailed"
> jail_jailed_rootdir="/usr/jails/jailed"
> jail_jailed_hostname="jailed"
> jail_jailed_ip="10.20.30.40"
> jail_jailed_devfs_enable="YES"
> EOF
```

5. Start the jails mechanism.

```
# /etc/rc.d/jail start
Configuring jails:.
Starting jails: jailed.
```

6. Check if the jailed jail started.

```
# jls
   JID  IP Address      Hostname                      Path
     1  10.20.30.40     jailed                        /usr/jails/jailed
```

7. Log in to the jailed jail.

```
# jexec 1 tcsh
```

8. *PERFORM ACTUAL UPGRADE.*

9. Stop the jailed jail.

```
# /etc/rc.d/jail stop
Stopping jails: jailed.
```

10. Disable Jails mechanism in /etc/rc.conf file.

```
# sed -i '' -E s/"^jail_enable.*$"/"jail_enable=NO"/g /etc/rc.conf
```

11. Activate just upgraded jailed boot environment.

```
# beadm activate jailed
Activated successfully
```

12. Reboot into upgraded system.


----------



## vermaden (Apr 26, 2012)

*6.3. Import Boot Environment from Other Machine*

Let's assume that You need to upgrade or do some major modification to one of Your servers. You will then create a new boot environment from the default one, move it to another 'free' machine, perform these tasks there, and after everything is done, move the modified boot environment to production without any risks. You may as well transport that environment to Your laptop/workstation and upgrade it in a Jail as in step *6.2* of this guide.

1. Create new environment on the _production _server.

```
# beadm create upgrade
Created successfully.
```

2. Send the upgrade environment to _test _server.

```
# zfs send sys/ROOT/upgrade | ssh TEST zfs recv -u sys/ROOT/upgrade
```
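If you need to re-send the environment after further local changes, an incremental zfs send transfers only the differences; the @base and @delta snapshot names below are just examples:

```
# zfs snapshot sys/ROOT/upgrade@base
# zfs send sys/ROOT/upgrade@base | ssh TEST zfs recv -u sys/ROOT/upgrade
  (... make local changes ...)
# zfs snapshot sys/ROOT/upgrade@delta
# zfs send -i @base sys/ROOT/upgrade@delta | ssh TEST zfs recv sys/ROOT/upgrade
```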

3. Activate the _upgrade_ environment on the _test _server.

```
# beadm activate upgrade
Activated successfully.
```

4. Reboot into the _upgrade_ environment on the _test _server.

```
# shutdown -r now
```

5. *PERFORM ACTUAL UPGRADE AFTER REBOOT.*

6. Send the upgraded _upgrade_ environment to the _production_ server.

```
# zfs send sys/ROOT/upgrade | ssh PRODUCTION zfs recv -u sys/ROOT/upgrade
```

7. Activate upgraded _upgrade_ environment on the _production_ server.

```
# beadm activate upgrade
Activated successfully.
```

8. Reboot into the _upgrade_ environment on the _production_ server.

```
# shutdown -r now
```


*7. References*

*[1]* http://forums.freebsd.org/showthread.php?t=10334
*[2]* http://forums.freebsd.org/showthread.php?t=12082
*[3]* http://docs.oracle.com/cd/E19963-01/pdf/820-6565.pdf
*[4]* http://docs.oracle.com/cd/E19963-01/html/821-1462/beadm-1m.html
*[5]* http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE
*[6]* https://sourceforge.net/projects/beadm/

The last part of the HOWTO remains the same as year ago...

You can now add your users, services and packages as usual on any FreeBSD system, have fun 

Added GIT repository: https://github.com/vermaden/beadm


----------



## vermaden (Apr 26, 2012)

As FreeBSD progresses I thought I would post an updated procedure, for FreeBSD 10 / 9.2, that I currently use.

The only 'problem' with ZFS now is its fragmentation, which was supposed to be fixed by _'Block Pointer Rewrite'_, but as we know that did not happen. One of the sources of this fragmentation is that before the data gets written to the pool, ZFS first writes metadata there, then copies the data, and then finally removes the metadata. That removal of metadata is the main cause of ZFS fragmentation. To reduce this problem I suggest using a separate ZIL device for each pool. In the perfect case the ZIL should be mirrored, but if You do a setup for a single disk, then creating a redundant ZIL for a non redundant pool is useless ...

The ZIL can grow up to half of RAM; while my current box has 16 GB of RAM, I do not think that I will ever see the ZIL filled up to 8 GB, so I have chosen to create 4 GB of ZIL for the 'data' pool and 1 GB for the rather small 16 GB 'root' pool.
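If you already have a pool created without a separate log device, you do not have to recreate it; a log vdev can be attached later, assuming a spare partition labeled like the ones below:

```
# zpool add sys log /dev/gpt/sys.zil
# zpool status sys
```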

As GRUB2 becomes more popular in the BSD world (thanks to PC-BSD), You may want to consider using it in the future; that is why I suggest leaving 1 MB of space at the beginning for GRUB2 if necessary, in other words the root pool starts after 1 MB.


```
       ada0p1  512k  bootcode
       -free-  512k  -free- (total 1 MB in case of GRUB2)
(boot) ada0p2   16g  sys.LZ4
       ada0p3    1g  sys.ZIL
       ada0p4    4g  local.ZIL
       ada0p5     *  local.GELI.LZ4
```

Here are the commands that I used.


```
gpart destroy -F ada0
gpart create -s gpt ada0
gpart add -t freebsd-boot -s   1m -l boot      ada0
gpart add -t freebsd-zfs  -s  16g -l sys       ada0
gpart add -t freebsd-zfs  -s   1g -l sys.zil   ada0
gpart add -t freebsd-zfs  -s   4g -l local.zil ada0
gpart add -t freebsd-zfs          -l local     ada0
gpart delete -i 1 ada0
gpart add -t freebsd-boot -s 128k -l boot      ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
geli init -b -s 4096 /dev/gpt/local
geli attach          /dev/gpt/local
zpool create -f local /dev/gpt/local.eli log /dev/gpt/local.zil
zpool create -f sys /dev/gpt/sys log /dev/gpt/sys.zil
zfs set compression=lz4 sys
zfs set compression=lz4 local
zfs set atime=off sys
zfs set atime=off local
zfs set mountpoint=none sys
zfs set mountpoint=none local
zfs create sys/ROOT
zfs create sys/ROOT/default
zpool set bootfs=sys/ROOT/default sys
zfs create local/home
zfs set mountpoint=/mnt sys/ROOT/default
zfs mount sys/ROOT/default
zfs set mountpoint=/mnt/home local/home
zfs mount local/home
cd /usr/freebsd-dist/
tar --unlink -xvpJf base.txz   -C /mnt
tar --unlink -xvpJf src.txz    -C /mnt
tar --unlink -xvpJf lib32.txz  -C /mnt
tar --unlink -xvpJf kernel.txz -C /mnt --exclude '*.symbols'
echo zfs_enable=YES > /mnt/etc/rc.conf
:> /mnt/etc/fstab
cat > /mnt/boot/loader.conf << EOF
zfs_load=YES
aio_load=YES
geom_eli_load=YES
EOF
zfs umount -a
zfs set mountpoint=legacy sys/ROOT/default
zfs set mountpoint=/home local/home
reboot
```


```
pkg (answer 'y' to bootstrap)
pkg add beadm
chmod 1777 /tmp /var/tmp
cp /usr/share/zoneinfo/Europe/Warsaw /etc/localtime
newaliases
passwd
(...)
```
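Once the system is running you can check how well LZ4 performs on your data; the compressratio property will obviously vary with what is stored:

```
# zfs get compression,compressratio sys local
```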


----------



## bdrewery@ (Apr 27, 2012)

This is an awesome script. I was just considering doing this myself.

I've written up a starter man page for it, and a port. https://github.com/bdrewery/beadm-port

Please give it a version so we can get it into ports.

It would be nice if you put this out on github for contributions as well.


----------



## Crivens (Apr 27, 2012)

Great work.
Now, where can I dig up some 4G disks...


----------



## bdrewery@ (Apr 28, 2012)

This script is now available in sysutils/beadm.


----------



## bdrewery@ (Apr 28, 2012)

Do you also need to update vfs.root.mountfrom in the new /boot/loader.conf? May want to add a comment about that.


----------



## vermaden (Apr 28, 2012)

bdrewery said:

> Do you also need to update vfs.root.mountfrom in the new /boot/loader.conf? May want to add a comment about that.



beadm takes care of that.


----------



## bdrewery@ (Apr 28, 2012)

Ah yes, I see that now. I'm using /boot/loader.conf.local so I will just do it manually for now.


----------



## vermaden (May 4, 2012)

The beadm utility version in Ports is currently *0.1*; the latest, *0.4*, is available at _SourceForge_ [1] and _GitHub_ [2].

[1] https://sourceforge.net/projects/beadm/
[2] https://github.com/vermaden/beadm


----------



## jef (May 5, 2012)

Very nice work!

I was dreading scripting the "roll-back" functionality so that I wouldn't make mistakes, and it looks like you've got a huge amount of it done already.

One thing that might be nice in the future would be to provide an option to store the snapshots as read-only and then clone them to activate them. I don't know if the "upstream" version does that. It would allow incremental updating of the snapshots. (I'm planning on keeping the backups on a different machine.)

I may just end up "shipping" a remotely stored read-only snapshot to the target pool for mounting; the incremental approach should work just fine if I do that.


----------



## vermaden (May 7, 2012)

jef said:

> Very nice work!
> 
> I was dreading scripting the "roll-back" functionality so that I wouldn't make mistakes, and it looks like you've got a huge amount of it done already.



Thanks.



			
jef said:

> One thing that might be nice in the future would be to provide an option to store the snapshots as read-only and then clone them to activate them. I don't know if the "upstream" version does that. It would allow incremental updating of the snapshots. (I'm planning on keeping the backups on a different machine.)



It's possible that you can already do that with the beadm utility.

You can create as many snapshots as you like with the *beadm create beName@snapshot* command, which is generally the same as the *zfs snapshot -r pool/ROOT/beName@snapshot* command.

You can then create boot environments from these snapshots with the *beadm create -e beName@snapshot beName* command (or use *zfs clone ...*).

You can then activate them and reboot into them with *beadm activate beName*.

I don't know if you wanted that functionality or something different.
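Put together, the snapshot based workflow described above might look like this (the BE and snapshot names are just examples):

```
# beadm create default@before-tests
# beadm create -e default@before-tests testing
# beadm activate testing
# shutdown -r now
```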


----------



## rawthey (May 7, 2012)

I'm about to upgrade to 9.0-RELEASE and intend to take the opportunity to start using ZFS for the first time, so I've found beadm to be very useful while trying different ideas, but there's one aspect of my setup that it doesn't manage to deal with.

I initially configured the system with a number of child filesystems as described in the FreeBSD Wiki, so I have things like sys/ROOT/usr, sys/ROOT/var etcetera, and have the mountpoints defined in /etc/fstab. To get things to work correctly I had to modify the script to update /etc/fstab in each new BE as it is created.

I've managed to produce a script that does what I need with the following changes:

```
*** beadm       2012-05-07 17:54:27.000000000 +0100
--- beadm-patched       2012-05-07 21:05:23.000000000 +0100
***************
*** 59,65 ****
--- 59,105 ----
    echo "${1}" | grep -q "@"
  }

+ __be_fstab () {
+ # edit fstab to use the mounts of the BE children
+ MNT="/tmp/BE-$(date +%Y%m%d%H%M%S)"
+ if mkdir ${MNT}
+ then
+    if mount -t zfs ${TARGET_SYSTEM} ${MNT}
+    then
+       if [ $(grep -c ^${SOURCE_SYSTEM} ${MNT}/etc/fstab) != 0 ]
+       then
+          sed -I "" s+^${SOURCE_SYSTEM}+${TARGET_SYSTEM}+ ${MNT}/etc/fstab
+          FSTAB_STATUS=$?
+          if [ ${FSTAB_STATUS} != 0 ]
+          then
+             echo Failed to update ${MNT}/etc/fstab
+          fi
+       else
+          FSTAB_STATUS=0
+       fi
+       umount ${MNT}
+       rmdir ${MNT}
+    else
+       FSTAB_STATUS=1
+       echo "ERROR: Cannot mount ${TARGET_SYSTEM}"
+       rmdir ${MNT}
+    fi
+ else
+    echo "ERROR: Cannot create '${MNT}' directory"
+    FSTAB_STATUS=1
+ fi
+ if [ ${FSTAB_STATUS} != 0 ]
+ then
+    zfs destroy -r ${TARGET_SYSTEM}
+    zfs destroy -r ${SOURCE_SNAPSHOT}
+ fi
+ return ${FSTAB_STATUS}
+ }
+
  __be_new() { # 1=SOURCE 2=TARGET
+   SOURCE_SYSTEM=$(echo ${1} | sed s+@.*++)
+   SOURCE_SNAPSHOT=${1}@${2##*/}
+   TARGET_SYSTEM=${2}
    if __be_snapshot ${1}
    then
      zfs clone ${1} ${2}
***************
*** 94,100 ****
          fi
          zfs clone -o canmount=off ${OPTS} ${FS}@${2##*/} ${DATASET}
        done
!   echo "Created successfully"
  }

  ROOTFS=$( mount | awk '/ \/ / {print $1}' )
--- 134,143 ----
          fi
          zfs clone -o canmount=off ${OPTS} ${FS}@${2##*/} ${DATASET}
        done
!   if __be_fstab
!   then
!      echo "Created successfully"
!   fi
  }

  ROOTFS=$( mount | awk '/ \/ / {print $1}' )
***************
*** 269,290 ****
        (Y|y|[Yy][Ee][Ss])
          if __be_snapshot ${POOL}/ROOT/${2}
          then
!           if ! zfs destroy ${POOL}/ROOT/${2} 1> /dev/null 2> /dev/null
            then
              echo "ERROR: Snapshot '${2}' is origin for other boot environment(s)"
              exit 1
            fi
          else
            ORIGINS=$( zfs list -r -H -o origin ${POOL}/ROOT/${2} )
!           if zfs destroy ${POOL}/ROOT/${2} 1> /dev/null 2> /dev/null
            then
              zfs destroy -r ${POOL}/ROOT/${2} 2>&1 \
!               | grep "${POOL}/ROOT/" \
                | grep -v "@" \
                | while read I
                  do
!                   zfs promote ${I} 2> /dev/null
                  done
            fi
            echo "${ORIGINS}" \
              | while read I
--- 312,334 ----
        (Y|y|[Yy][Ee][Ss])
          if __be_snapshot ${POOL}/ROOT/${2}
          then
!           if ! zfs destroy -r ${POOL}/ROOT/${2} 1> /dev/null 2> /dev/null
            then
              echo "ERROR: Snapshot '${2}' is origin for other boot environment(s)"
              exit 1
            fi
          else
            ORIGINS=$( zfs list -r -H -o origin ${POOL}/ROOT/${2} )
!           if ! zfs destroy -r ${POOL}/ROOT/${2} 1> /dev/null 2> /dev/null
            then
              zfs destroy -r ${POOL}/ROOT/${2} 2>&1 \
!               | grep "^${POOL}/ROOT/" \
                | grep -v "@" \
                | while read I
                  do
!                   zfs promote ${I}
                  done
+             zfs destroy -r ${POOL}/ROOT/${2}
            fi
            echo "${ORIGINS}" \
              | while read I
```
The main change is the introduction of the __be_fstab function. This works OK if you create a BE from an existing one, but there is a problem creating BEs from snapshots; I need to do a bit more work to sort this out.

The other changes further down the script were to fix problems I came across when deleting BEs. I don't think they are directly related to my fix for fstab; could they be bugs which crept in when the script was recently changed from using && {} || {} syntax to if/then/else syntax?


----------



## vermaden (May 8, 2012)

rawthey said:

> I had to modify the script to update /etc/fstab in each new BE as it is created.
> 
> I've managed to produce a script that does what I need with the following changes (...)



The /etc/fstab workaround was already used by manageBE; I wanted to avoid that and I succeeded. Now while cloning the boot environment (along with its child datasets) I clone their properties (also mountpoints) and manipulate the canmount property to avoid double/unwanted mounts.

Currently I am working on beadm with _Bryan Drewery_, the latest efforts are available here: https://github.com/vermaden/beadm

If we do not find other issues we will 'brand' that as 0.5 and update the port as well.



			
rawthey said:

> The other changes further down the script were to fix problems I came across when deleting BE's, I don't think they are directly related to my fix for fstab, could they be bugs which crept in when the script was recently changed from using && {} || {} syntax to if/then/else syntax?



Yes, there was a BUG introduced in the process of 'porting' beadm from &&-|| to if-then-fi; precisely, if ... was used instead of if ! ... It's fixed now.



			
rawthey said:

> (...) sys/ROOT/usr, sys/ROOT/var (...)


These should be sys/ROOT/*beName*/usr, sys/ROOT/*beName*/var to properly use boot environments; You can of course 'migrate' your sys/ROOT to sys/ROOT/beName with ZFS.
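Such a migration can be sketched with zfs rename; the dataset names below are just an example, and renaming datasets that belong to the currently running system is safest done from the Live CD:

```
# zfs rename sys/ROOT/usr sys/ROOT/default/usr
# zfs rename sys/ROOT/var sys/ROOT/default/var
```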


----------



## rawthey (May 8, 2012)

vermaden said:

> Using /etc/fstab workaround was already used at manageBE, I wanted to avoid that and I succeeded. Now while cloning the boot environment (along with its child datasets) I clone their properties (also mountpoints) and manipulate the canmount property to avoid double/unwanted mounts.



Being very new to ZFS, I think I must have messed up the mountpoints when converting to the sys/ROOT/beName structure and ended up using legacy mounts. That was before beadm had been changed to copy properties while cloning. Although I noticed the change, I failed to realise its significance and continued working on my fixes. I've now reset my ZFS mountpoints correctly, abandoned my fixes and downloaded the latest version.

Everything is working fine with the new version except for an issue with creating BEs from snapshots. Using *beadm create be6@snaptest* only produced the top level snapshot without any descendants. I managed to create all the descendant snapshots and get rid of a spurious "ERROR: Cannot create 'be6@snaptest' snapshot" message by changing line 173 to

`if ! zfs snapshot -r ${POOL}/ROOT/${2} 2> /dev/null`

When I tried to create a new BE from an existing snapshot with *beadm create -e be6@snaptest fromsnap* it failed at line 78 with "cannot open 'sys/ROOT/be6@snaptest': operation not applicable to datasets of this type".



			
vermaden said:

> These should be sys/ROOT/*beName*/usr, sys/ROOT/*beName*/var to properly use boot environments, You can of course 'migrate' your sys/ROOT to sys/ROOT/beName with ZFS.


Yes, that was my typo in the post; I did use sys/ROOT/*beName*/usr on the system.


----------



## serverhamster (May 8, 2012)

So, there is a difference in file system layout? At first, I used the layout from the wiki, resulting in something like this:

```
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
rpool                                       50.2G   241G    22K  none
rpool/HOME                                   235K   241G    33K  /home
rpool/HOME/alvin                             170K   241G   170K  /home/alvin
rpool/ROOT                                  3.01G   241G    22K  none
rpool/ROOT/9.0-RELEASE                      3.01G   241G   349M  legacy
rpool/ROOT/9.0-RELEASE/tmp                   720K   241G   720K  /tmp
rpool/ROOT/9.0-RELEASE/usr                  1.31G   241G   309M  /usr
rpool/ROOT/9.0-RELEASE/usr/local             459M   241G   459M  /usr/local
rpool/ROOT/9.0-RELEASE/usr/ports             573M   241G   269M  /usr/ports
rpool/ROOT/9.0-RELEASE/usr/ports/distfiles   300M   241G   300M  /usr/ports/distfiles
rpool/ROOT/9.0-RELEASE/usr/ports/packages   3.19M   241G  3.19M  /usr/ports/packages
rpool/ROOT/9.0-RELEASE/usr/src                23K   241G    23K  /usr/src
rpool/ROOT/9.0-RELEASE/var                   832M   241G  1.17M  /var
rpool/ROOT/9.0-RELEASE/var/crash            23.5K   241G  23.5K  /var/crash
rpool/ROOT/9.0-RELEASE/var/db                829M   241G   827M  /var/db
rpool/ROOT/9.0-RELEASE/var/db/pkg           1.46M   241G  1.46M  /var/db/pkg
rpool/ROOT/9.0-RELEASE/var/empty              22K   241G    22K  /var/empty
rpool/ROOT/9.0-RELEASE/var/log              1.86M   241G  1.86M  /var/log
rpool/ROOT/9.0-RELEASE/var/mail               86K   241G    86K  /var/mail
rpool/ROOT/9.0-RELEASE/var/run              63.5K   241G  63.5K  /var/run
rpool/ROOT/9.0-RELEASE/var/tmp                36K   241G    36K  /var/tmp
```

Then I discovered manageBE, and installed like this:

```
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      2.79G   222G   144K  none
rpool/ROOT                  687M   222G   144K  none
rpool/ROOT/9.0-RELEASE      687M   222G   687M  legacy
rpool/home                  352K   222G   152K  /home
rpool/home/alvin            200K   222G   200K  /home/alvin
rpool/tmp                   176K   222G   176K  /tmp
rpool/usr                  1.98G   222G   144K  /usr
rpool/usr/local             351M   222G   351M  /usr/local
rpool/usr/ports             849M   222G   848M  /usr/ports
rpool/usr/ports/distfiles   144K   222G   144K  /usr/ports/distfiles
rpool/usr/ports/packages    144K   222G   144K  /usr/ports/packages
rpool/usr/src               826M   222G   826M  /usr/src
rpool/var                   145M   222G   724K  /var
rpool/var/crash             148K   222G   148K  /var/crash
rpool/var/db                143M   222G   143M  /var/db
rpool/var/db/pkg            292K   222G   292K  /var/db/pkg
rpool/var/empty             144K   222G   144K  /var/empty
rpool/var/log               472K   222G   472K  /var/log
rpool/var/mail              156K   222G   156K  /var/mail
rpool/var/run               224K   222G   224K  /var/run
rpool/var/tmp               152K   222G   152K  /var/tmp
```

If I understand correctly, beadm needs the first method, and all filesystems below rpool/ROOT/9.0-RELEASE will also be cloned. Is that right?


----------



## vermaden (May 9, 2012)

rawthey said:
			
		

> Everything is working fine with the new version except for an issue with creating BEs from snapshots. Using *beadm create be6@snaptest* only produced the top-level snapshot without any descendants. I managed to create all the descendant snapshots and get rid of a spurious "ERROR: Cannot create 'be6@snaptest' snapshot" message by changing line 173 to
> 
> `if ! zfs snapshot -r ${POOL}/ROOT/${2} 2> /dev/null`
> 
> When I tried to create a new BE from an existing snapshot with *beadm create -e be6@snaptest fromsnap* it failed at line 78 with "cannot open 'sys/ROOT/be6@snaptest': operation not applicable to datasets of this type".



Thanks for finding these bugs. I fixed them, fixed several others, and even added a new *rename* feature; the latest work is available at *github/sourceforge*.



			
				rawthey said:
			
		

> Yes, that was my typo in the post, I did use sys/ROOT/*beName*/usr in the system.


Ok.




			
				serverhamster said:
			
		

> So, there is a difference in file system layout?
> 
> (...)
> 
> If I understand correctly, for beadm, the first method is needed and all filesystems below rpool/9.0-RELEASE will also be cloned. Is that right?



That depends on how You want to use boot environments. If You want to clone EVERYTHING, then put all the other mountpoints under $pool/ROOT/$beName/*. But You may want to use Boot Environments on a more basic level, only for the base system, keeping /usr or /var aside; it's up to you.
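As a rough illustration of the two approaches (pool name sys and BE name default are assumptions here):

```
# Approach 1: everything lives under the BE and is cloned by beadm:
zfs create -o mountpoint=/usr sys/ROOT/default/usr
zfs create -o mountpoint=/var sys/ROOT/default/var

# Approach 2: keep data aside, shared between all boot environments:
zfs create -o mountpoint=/usr sys/usr
zfs create -o mountpoint=/var sys/var
```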

I personally experimented with many ZFS concepts, for example I tried a layout that I call _'Cloneable ZFS Namespaces'_, which looks like this:


```
% zfs list -o name
NAME
sys
sys/PORTS
sys/PORTS/current
sys/PORTS/current/compat
sys/PORTS/current/usr
sys/PORTS/current/usr/local
sys/PORTS/current/usr/ports
sys/PORTS/current/var
sys/PORTS/current/var/db
sys/PORTS/current/var/db/pkg
sys/PORTS/current/var/db/ports
sys/PORTS/current/var/db/portsnap
sys/PORTS/release90
sys/PORTS/release90/compat
sys/PORTS/release90/usr
sys/PORTS/release90/usr/local
sys/PORTS/release90/usr/ports
sys/PORTS/release90/var
sys/PORTS/release90/var/db
sys/PORTS/release90/var/db/pkg
sys/PORTS/release90/var/db/ports
sys/PORTS/release90/var/db/portsnap
sys/PORTS/usr/ports/obj
sys/ROOT
sys/ROOT/default
sys/ROOT/default-upgrade
sys/ROOT/jailed
sys/ROOT/upgrade-jailed
sys/SRC
sys/SRC/release90
sys/SRC/release90/usr
sys/SRC/release90/usr/src
sys/SRC/stable90
sys/SRC/stable90/usr
sys/SRC/stable90/usr/src
sys/SRC/current10
sys/SRC/current10/usr
sys/SRC/current10/usr/src
sys/SRC/usr/obj
sys/HOME
sys/HOME/vermaden
sys/SWAP
```

With these _'Cloneable ZFS Namespaces'_ You can mix and change various parts of the FreeBSD system on the fly, for example switch the source tree You are using without needing to redownload everything, or simply keep several source trees around.

You can use Ports/packages from the RELEASE, but at the same time You can have a full set of up-to-date packages that You can switch to and go back again to the RELEASE ones.

It's possible, of course, to implement these _'Cloneable ZFS Namespaces'_ in the beadm utility.

So You would run *beadm list -t SRC*, for example, to list the source trees on the system, or *beadm list -t PORTS* to list the available package sets.
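Switching such a namespace could then boil down to flipping the canmount property on the two trees. This is an untested sketch using the dataset names from the listing above:

```
# Deactivate the 'current' ports namespace and enable 'release90':
zfs list -H -o name -r sys/PORTS/current | while read FS
do
  zfs set canmount=noauto ${FS}
done
zfs list -H -o name -r sys/PORTS/release90 | while read FS
do
  zfs set canmount=on ${FS}
done
zfs mount -a
```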


----------



## rawthey (May 10, 2012)

There still seems to be an issue with *beadm create -e beName@snapshot beName* which fails if the new BE name doesn't match the name of the source snapshot.

This works

```
fbsd9:/root# beadm list
BE    Active Mountpoint Space Policy Created
oldbe -      -          49.5K static 2012-05-06 21:00
be3   -      -          49.5K static 2012-05-06 21:32
be4   -      -           264K static 2012-05-06 21:41
be5   -      -          1.07M static 2012-05-06 21:42
be6   NR     /          7.56G static 2012-05-08 09:34
fbsd9:/root# beadm create be4@snaptest
Created successfully
fbsd9:/root# beadm create -e be4@snaptest snaptest
Created successfully
fbsd9:/root# beadm list
BE       Active Mountpoint Space Policy Created
oldbe    -      -          49.5K static 2012-05-06 21:00
be3      -      -          49.5K static 2012-05-06 21:32
be4      -      -           264K static 2012-05-06 21:41
be5      -      -          1.07M static 2012-05-06 21:42
be6      NR     /          7.56G static 2012-05-08 09:34
snaptest -      -            15K static 2012-05-10 09:53
```

but this doesn't


```
bsd9:/root# beadm create -e be4@snaptest fromsnap
cannot open 'sys/ROOT/be4@fromsnap': dataset does not exist
fbsd9:/root# exit
```

I was able to get it to handle all cases with this patch


```
*** /sbin/beadm 2012-05-10 09:52:32.199568612 +0100
--- /tmp/beadm  2012-05-10 10:16:06.190956035 +0100
***************
*** 101,107 ****
          then
            local OPTS=""
          fi
!         zfs clone -o canmount=off ${OPTS} ${FS}@${2##*/} ${DATASET}
        done
    echo "Created successfully"
  }
--- 101,112 ----
          then
            local OPTS=""
          fi
!       if  __be_snapshot ${1}
!       then
!           zfs clone -o canmount=off ${OPTS} ${FS}@${1##*@} ${DATASET}
!       else
!           zfs clone -o canmount=off ${OPTS} ${FS}@${2##*/} ${DATASET}
!       fi
        done
    echo "Created successfully"
  }
```


----------



## vermaden (May 11, 2012)

@rawthey

Thanks, merged


----------



## rawthey (May 13, 2012)

I had a bit of a problem when I came to copy my test system from VirtualBox onto a real disk. First I created a minimal system as sys/ROOT/default as described above, then I copied my BE from VirtualBox with *zfs receive -u* and ended up with this...

```
# beadm list

BE      Active Mountpoint Space Policy Created
default NR     /           592M static 2012-05-11 13:15
be6     -      -          7.56G static 2012-05-11 22:05

# zfs list -o name,canmount,mountpoint

NAME                              CANMOUNT  MOUNTPOINT
sys                                     on  none
sys/ROOT                                on  legacy
sys/ROOT/be6                        noauto  legacy
sys/ROOT/be6/tmp                    noauto  /tmp
sys/ROOT/be6/usr                    noauto  /usr
sys/ROOT/be6/usr/ports              noauto  /usr/ports
sys/ROOT/be6/usr/ports/distfiles    noauto  /usr/ports/distfiles
sys/ROOT/be6/usr/ports/packages     noauto  /usr/ports/packages
sys/ROOT/be6/usr/src                noauto  /usr/src
sys/ROOT/be6/var                    noauto  /var
sys/ROOT/be6/var/db                 noauto  /var/db
sys/ROOT/be6/var/db/pkg             noauto  /var/db/pkg
sys/ROOT/be6/var/empty              noauto  /var/empty
sys/ROOT/be6/var/log                noauto  /var/log
sys/ROOT/be6/var/mail               noauto  /var/mail
sys/ROOT/be6/var/run                noauto  /var/run
sys/ROOT/be6/var/tmp                noauto  /var/tmp
sys/ROOT/default                        on  legacy
sys/swap
```

Then I used *beadm* to activate the new BE. This completed without any error messages but without the expected "Activated successfully" message. The output from *beadm list* gave the impression that the BE had been activated OK but further investigation showed that all of the descendent filesystems still had canmount set to noauto.


```
# beadm activate be6
# beadm list

BE      Active Mountpoint Space Policy Created
default N      /           592M static 2012-05-11 13:15
be6     R      -          7.56G static 2012-05-11 22:05

# zfs list -o name,canmount,mountpoint

NAME                              CANMOUNT  MOUNTPOINT
sys                                     on  none
sys/ROOT                            noauto  legacy
sys/ROOT/be6                            on  legacy
sys/ROOT/be6/tmp                    noauto  /tmp
sys/ROOT/be6/usr                    noauto  /usr
sys/ROOT/be6/usr/ports              noauto  /usr/ports
sys/ROOT/be6/usr/ports/distfiles    noauto  /usr/ports/distfiles
sys/ROOT/be6/usr/ports/packages     noauto  /usr/ports/packages
sys/ROOT/be6/usr/src                noauto  /usr/src
sys/ROOT/be6/var                    noauto  /var
sys/ROOT/be6/var/db                 noauto  /var/db
sys/ROOT/be6/var/db/pkg             noauto  /var/db/pkg
sys/ROOT/be6/var/empty              noauto  /var/empty
sys/ROOT/be6/var/log                noauto  /var/log
sys/ROOT/be6/var/mail               noauto  /var/mail
sys/ROOT/be6/var/run                noauto  /var/run
sys/ROOT/be6/var/tmp                noauto  /var/tmp
sys/ROOT/default                    noauto  legacy
sys/swap
```
The problem arises in the loop at the end of the activation section, where *zfs promote ${I} 2> /dev/null* fails because sys/ROOT/be6 is not a cloned filesystem. Since errexit was set at the start of the script, it silently bails out without processing the remaining filesystems, because the command is not explicitly tested.

This patch seems to fix things:

```
*** beadm	2012-05-11 22:36:44.000000000 +0100
--- /tmp/beadm	2012-05-11 22:35:46.000000000 +0100
***************
*** 260,270 ****
            zfs set canmount=noauto ${I}
          done
      # Enable mounting for the active BE and promote it
!     zfs list -H -o name -t filesystem -r ${POOL}/ROOT/${2} \
!       | while read I
          do
            zfs set canmount=on ${I} 2> /dev/null
!           zfs promote ${I} 2> /dev/null
          done
      echo "Activated successfully"
      ;;
--- 260,273 ----
            zfs set canmount=noauto ${I}
          done
      # Enable mounting for the active BE and promote it
!     zfs list -H -o name,origin -t filesystem -r ${POOL}/ROOT/${2} \
!       | while read I ORIGIN
          do
            zfs set canmount=on ${I} 2> /dev/null
!           if [ ${ORIGIN} != "-" ]
!           then
!             zfs promote ${I}
!           fi
          done
      echo "Activated successfully"
      ;;
```
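The errexit pitfall described above is easy to reproduce in plain sh(1), outside beadm: with set -e, one untested failing command aborts the whole script and the rest of the loop is silently skipped. A standalone demonstration (not part of beadm):

```
#!/bin/sh
# Write a small script that mimics the activation loop under errexit.
cat > /tmp/errexit-demo.sh << 'EOF'
set -e
for I in a b c
do
  false                  # fails, like 'zfs promote' on a non-clone
  echo "processed ${I}"  # never reached, the script exits immediately
done
echo "Activated successfully"
EOF
sh /tmp/errexit-demo.sh || echo "script aborted early, nothing printed"
```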


----------



## vermaden (May 14, 2012)

Merged, thanks again for in-depth testing


----------



## rawthey (May 14, 2012)

vermaden said:
			
		

> Merged, thanks again for in-depth testing



It's been a really interesting exercise. As a newcomer to ZFS it's been a good way to learn about it, and discover some more Bourne shell scripting tricks.

I've just discovered that "interesting" things happen if you try to activate a BE while one of its filesystems is already mounted. This might happen if you mount a filesystem to confirm that you've chosen the right BE to activate and then forget to unmount it. ZFS remounts the filesystem on its defined mountpoint when canmount is set to on:

```
fbsd9:/root# beadm list
BE  Active Mountpoint Space Policy Created
be6 NR     /          7.56G static 2012-05-08 09:34
be7 -      -           445K static 2012-05-14 14:54

fbsd9:/root# mount | grep be7
sys/ROOT/be7/tmp on /mnt (zfs, local, noatime, nosuid, nfsv4acls)

fbsd9:/root# beadm activate be7
Activated successfully
fbsd9:/root# beadm list
BE  Active Mountpoint Space Policy Created
be6 N      /           537K static 2012-05-08 09:34
be7 R      -          7.56G static 2012-05-14 14:54

fbsd9:/root# mount | grep be7
sys/ROOT/be7/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
```

Would it be worth including something along the lines of this patch?

```
*** beadm       2012-05-14 17:28:57.967886169 +0100
--- /tmp/beadm  2012-05-14 22:16:41.864615636 +0100
***************
*** 56,61 ****
--- 56,80 ----
    fi
  }

+ __be_is_unmounted() { # 1=BE name
+   local MOUNTED=0
+   mount | awk "/^${POOL}\/ROOT\/${1}/ {print \$1,\$3}" \
+     |{ while read FILESYSTEM MOUNTPOINT
+       do
+         if [ ${MOUNTED} == 0 ]
+         then
+           echo "ERROR: The following filesystem(s) must be unmounted before ${1} can be activated"
+           MOUNTED=1
+         fi
+         echo "     ${FILESYSTEM} on ${MOUNTPOINT}"
+       done
+   if [ ${MOUNTED} != 0 ]
+     then
+     exit 1
+   fi
+   }
+ }
+
  __be_snapshot() { # 1=DATASET/SNAPSHOT
    echo "${1}" | grep -q "@"
  }
***************
*** 207,212 ****
--- 226,235 ----
        echo "Already activated"
        exit 0
      else
+       if [ $2 != ${ROOTFS##*/} ]
+       then
+         __be_is_unmounted ${2}
+       fi
        if [ "${ROOTFS}" != "${POOL}/ROOT/${2}" ]
        then
          TMPMNT="/tmp/BE"
```

In the process of testing this out I came across another side effect, probably a result of the same ZFS bug/feature with setting canmount. Starting with be6 as the active BE, if I activated be7 and then reactivated be6 without rebooting, I ended up with canmount set to noauto for all the filesystems in both BEs. Deleting the redirection to /dev/null from the line *zfs set canmount=on ${I} 2> /dev/null* produced the error message "cannot unmount '/': Invalid argument". I suspect the only way round this is to change the *$2 != ${ROOTFS##*/}* test to fail with an error if they are equal, in which case I think the subsequent *if [ "${ROOTFS}" != "${POOL}/ROOT/${2}" ]* test might become redundant.


----------



## vermaden (May 15, 2012)

Thanks for another add-on. I have sent a PR for that issue: http://freebsd.org/cgi/query-pr.cgi?pr=167905


----------



## bdrewery@ (May 15, 2012)

0.6 is now available in sysutils/beadm


----------



## kangaroo (May 22, 2012)

*beadm question*

I haven't used beadm before. I think the concept is really cool, and it's great that you've ported it to FreeBSD! That being said, I gave it a try and ran into a few questions. If anyone might be able to clarify a few things, that would be great!

If I start with no snapshots:


```
# zfs list -t snapshot
no datasets available

# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           597M static 2012-05-18 07:36
```

Only the default... let's say I want to install updates on my system, so I ...


```
# beadm create test
Created successfully

# zfs list -t snapshot
NAME                    USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT/default@test      0      -   597M  -
```

So far, I think I understand. I've created a snapshot of "default" called "test". It isn't active yet, but in theory, if I activate it, and reboot, I should be using "test". If I want to go back to default, I should be able to re-activate default.


```
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           597M static 2012-05-18 07:36
test    -      -             1K static 2012-05-22 09:49
```
Just for (useless) fun, I create another snapshot called "test1": 


```
# beadm create test1
Created successfully
```

In theory, test and test1 should be the same, since they are both snapshots of default?


```
# zfs list -t snapshot
NAME                     USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT/default@test   62.5K      -   597M  -
sys/ROOT/default@test1      0      -   597M  -
```

I'm fine here, until I decide to activate test1. 


```
# beadm activate test1
Activated successfully

# zfs list -t snapshot
NAME                   USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT/test1@test   62.5K      -   597M  -
sys/ROOT/test1@test1    19K      -   597M  -
```
This output, which may very well be correct, confuses me as a first-time beadm user.
It's not quite clear what happens on activate. I know snapshots are read-only, but I wouldn't have expected the default@test or default@test1 snapshots to change -- is this correct?


```
# beadm list
BE      Active Mountpoint Space Policy Created
default N      /              0 static 2012-05-18 07:36
test    -      -             1K static 2012-05-22 09:49
test1   R      -           597M static 2012-05-22 09:50
```

Now let's say I want to go back to default -- I would imagine everything would go back to the way it was before:


```
# beadm activate default
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /              0 static 2012-05-18 07:36
test    -      -             1K static 2012-05-22 09:49
test1   -      -           597M static 2012-05-22 09:50
```

That seemed okay, but ... 


```
# zfs list -t snapshot
NAME                   USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT/test1@test   62.5K      -   597M  -
sys/ROOT/test1@test1    19K      -   597M  -
```

Hmmm  still not sure I understand that output. 

So let's say I want to go back to default and start again.


```
# beadm destroy test
Are you sure you want to destroy 'test'?
This action cannot be undone (y/[n]): y
Destroyed successfully

# beadm destroy test1
Are you sure you want to destroy 'test1'?
This action cannot be undone (y/[n]): y
Note: No error message, but no destroyed successfully message either.. hmm.

# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /          97.5K static 2012-05-18 07:36
test1   -      -           597M static 2012-05-22 09:50
```

So I'll try running destroy again. 


```
# beadm destroy test1
Are you sure you want to destroy 'test1'?
This action cannot be undone (y/[n]): y
```

In fact, I can run it over and over again with the same result, but if I do:

`# sh -xv beadm destroy test1`

I can see in the output:


```
zfs promote cannot destroy \''sys/ROOT/test1'\'':' filesystem has dependent clones
```

and then after a "+ zfs promote sys/ROOT/default", I see:

```
+ echo 'Destroyed successfully'
Destroyed successfully
```
and now:


```
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           597M static 2012-05-18 07:36
test1   -      -          71.5K static 2012-05-22 10:25

# zfs list -t snapshot
NAME                     USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT/default@test1  63.5K      -   597M  -
```
The odd thing is that after, I was able to do:


```
# beadm destroy test1
Are you sure you want to destroy 'test1'?
This action cannot be undone (y/[n]): y
Destroyed successfully
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           597M static 2012-05-18 07:36
```
I guarantee that some of this is my lack of understanding of beadm, but I suspect there's a bug hiding in there as well.

A few other minor questions:
1) How does one determine which version of beadm that they have? I downloaded mine today from SF, so I imagine it's the latest version as of today, but how do I keep track? (May I recommend a version number at the top of the script?)

2) In your instructions above, after installing FreeBSD with ZFS root, you recommend taking a snapshot. I think that's a great idea, but do I need to use zfs snapshot for that, or if I have beadm, would I just use beadm?  That is, would there ever be a circumstance where I would need to take a snapshot of ROOT where beadm wouldn't be the tool to use?

Thanks for any assistance.

Jason.


----------



## donduq (May 24, 2012)

I am having the strangest problem, it keeps coming back!


```
Mounting from zfs:sys/ROOT/default failed with error 2
```

Could anyone help me, please?


----------



## rawthey (May 24, 2012)

kangaroo said:
			
		

> # beadm list
> BE Active Mountpoint Space Policy Created
> default N / 0 static 2012-05-18 07:36
> test - - 1K static 2012-05-22 09:49
> ...



The significant thing to note here is the absence of an *Activated successfully* message after *beadm activate default*. This looks like it's caused by the bug (or feature?) in ZFS that makes the line *zfs set canmount=on ${I} 2> /dev/null* fail because it's trying to unmount /. There's more background on this in my earlier post in this thread.

Until the canmount issue is resolved you need to reboot into the freshly activated BE before attempting to go back.


----------



## vermaden (May 25, 2012)

kangaroo said:
			
		

> I haven't used beadm before.  I think the concept is really cool, and it's great that you've ported it to FreeBSD! That being said, I gave it a try and ran into a few questions..  If anyone might be able to clarify a few things, that would be great!


Thanks 



			
				kangaroo said:
			
		

> So far, I think I understand. I've created a snapshot of "default" called "test". It isn't active yet, but in theory, if I activate it, and reboot, I should be using "test". If I want to go back to default, I should be able to re-activate default.



Creating a new BE takes a *snapshot* of another BE and then creates a *clone* from that *snapshot*, so sys/ROOT/test is a *clone* of the sys/ROOT/default@test *snapshot*.
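Under the hood that is roughly equivalent to the following simplified sketch (the real script also copies dataset properties and handles child datasets):

```
# 'beadm create test' is essentially a recursive snapshot plus a clone:
zfs snapshot -r sys/ROOT/default@test
zfs clone -o canmount=off sys/ROOT/default@test sys/ROOT/test
```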



			
				kangaroo said:
			
		

> I'm fine here, until I decide to activate test1.


Activation means a zfs promote, which inverts the parent/child relationship between sys/ROOT/default and sys/ROOT/test, so after activation sys/ROOT/test will be the 'most important' dataset and sys/ROOT/default will be treated as a clone of a snapshot of sys/ROOT/test (a simplification).
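That inversion can be observed via the origin property before and after the promote (a sketch with the dataset names from this example):

```
zfs get -H -o value origin sys/ROOT/test      # sys/ROOT/default@test
zfs promote sys/ROOT/test                     # snapshots move to the clone
zfs get -H -o value origin sys/ROOT/default   # now sys/ROOT/test@test
```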



			
				kangaroo said:
			
		

> Now let's say I want to go back to default -- I would imagine everything would go back to the way it was before:


There is a BUG in the ZFS code which tries to remount a mounted filesystem when changing the ZFS canmount property from noauto to on; there is a PR for that and _Bryan Drewery_ is working on getting it solved. Currently, activating the currently booted BE is broken because of that - You will activate only the 'main' dataset like sys/ROOT/default, but not sys/ROOT/default/tmp.



			
				kangaroo said:
			
		

> I guarantee that some of this is my lack of understanding of beadm, but I suspect there's a bug hiding in there as well.


I will look into these issues and try to resolve them, thanks for testing 



			
				kangaroo said:
			
		

> 1) How does one determine which version of beadm that they have? I downloaded mine today from SF, so I imagine it's the latest version as of today, but how do I keep track? (May I recommend a version number at the top of the script?)



Just use the GITHUB tag: https://github.com/vermaden/beadm/tree/0.6/



			
				kangaroo said:
			
		

> 2) In your instructions above, after installing FreeBSD with ZFS root, you recommend taking a snapshot. I think that's a great idea, but do I need to use zfs snapshot for that, or if I have beadm, would I just use beadm?  That is, would there ever be a circumstance where I would need to take a snapshot of ROOT where beadm wouldn't be the tool to use?


The snapshot is the most important thing; when You have a snapshot, You can create a BE from it with a ZFS clone, so creating a 'whole' BE is not needed.
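For example (hypothetical snapshot name clean; the two snapshot commands are alternatives that produce the same recursive snapshot):

```
# Take a restore-point snapshot of the 'default' BE, either way:
beadm create default@clean
# ...which is equivalent to:
zfs snapshot -r sys/ROOT/default@clean

# Later, build a full BE from that snapshot only when it is needed:
beadm create -e default@clean restored
```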



			
				kangaroo said:
			
		

> Thanks for any assistance.


You're welcome. Sorry for the late response.






			
				donduq said:
			
		

> I am having the strangest problem, it keeps coming back!
> 
> 
> ```
> ...



This error on boot happens when You have a BE imported from another machine and zpool.cache is not updated on the BE that You want to boot. The solution is to boot some FreeBSD live CD (FreeBSD ISO or mfsbsd ISO or Frenzy ...) and import the pool with -o cachefile=/tmp/zpool.cache, then set mountpoint=/mnt for sys/ROOT/default, then copy that /tmp/zpool.cache file to /mnt/boot/zfs/zpool.cache, then zfs umount -a, then reboot (if I recall correctly).


Unfortunately, true
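The recovery steps described above, written out as a rough live CD session (pool name sys assumed; the BE mountpoint is set back to legacy afterwards, matching the layouts shown earlier):

```
# From a FreeBSD/mfsBSD live CD, regenerate zpool.cache for the pool:
zpool import -o cachefile=/tmp/zpool.cache sys
zfs set mountpoint=/mnt sys/ROOT/default
zfs mount sys/ROOT/default
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
zfs umount -a
zfs set mountpoint=legacy sys/ROOT/default
reboot
```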


----------



## papelboyl1 (May 28, 2012)

I'm using only one disk, so I'm using the revised step 7 from section 3.1. I'm getting this error:
```
cannot mount 'sys' : No such file or directory
```

Can someone please help? Thanks.


----------



## rawthey (May 28, 2012)

The "cannot mount..." message is normal when booting from the DVD and can be ignored. So, provided the partitions were created OK in step 6, everything else should continue without any problem.


----------



## vermaden (May 28, 2012)

It's a harmless warning: while using the live CD you are not able to create /sys or any other root directory, and that is where the warning comes from. You can combine steps 7 and 8 into one to omit that warning: `# zpool create -f -o cachefile=/tmp/zpool.cache -o mountpoint=none sys mirror /dev/gpt/sys*`


----------



## overmind (Jun 1, 2012)

Can I use beadm to also snapshot an encrypted partition (laptop example)? Or, if I do not have an encrypted partition but a second pool, can I manage that too from beadm?


----------



## vermaden (Jun 1, 2012)

overmind said:
			
		

> Can I use beadm to also snapshot encrypted partition (laptop example).



Yes, that would be *beadm create beName@snapshotName* of that _Boot Environment_.

It's the same as the *zfs snapshot -r sys/ROOT/beName@snapshotName* command.



			
				overmind said:
			
		

> Or if I do not have an encrypted partition but a second pool, can I manage that too from beadm?



I used beadm only to manage the 'root' ZFS pool; I haven't tried it with two ZFS pools.


----------



## overmind (Jun 1, 2012)

But the encrypted partition is on the second pool (called local), and *zfs snapshot -r sys/ROOT/beName@snapshotName* is for the sys pool. So I get:
	
	



```
Error, cannot create snapshot
```

I could use another approach and just use a single pool (sys), create a vdev on that pool and encrypt it. Could that be OK, or is it too much overhead (being ZFS on GELI on top of ZFS)?

Do you use the laptop example to snapshot the encrypted (home) partition too when doing a *beadm create ...*? The idea is to save states for both partitions.

And thank you, great tutorial and great idea with beadm. This is a dream come true for many sysadmins, easing deployments of servers and updates, and for developers too.


----------



## jrm@ (Jun 4, 2012)

Thank you for the nice guide vermaden.



			
				vermaden said:
			
		

> ```
> gpart add -t freebsd-zfs -l sys${NUMBER} ${I}
> ```



Is it helpful to specify the -a or -b switches here for sector alignment?

I will be getting a new laptop soon with an i5-2520M processor that includes "AES New Instructions". With such crypto hardware I'm expecting something around 75% performance with geli versus without. Is this a reasonable expectation?


----------



## vermaden (Jun 4, 2012)

overmind said:
			
		

> But the encrypted partition is on second pool (called local).



Yes, but beadm does not 'touch' it 



			
				overmind said:
			
		

> And *zfs snapshot -r sys/ROOT/beName@snapshotName* is for sys pool. So I get:
> 
> 
> 
> ...



I just created such a snapshot; I don't know why you can't. Try a different/nonexistent snapshot name.



			
				overmind said:
			
		

> I could use another approach to just use a single pool (sys and create a vdev on that pool and encrypt it. Could that be ok or is too much overhead (being zfs on geli on top of zfs)



It should work OK, but it will be slower because of the 'doubled' ZFS.



			
				overmind said:
			
		

> Do you use laptop example to snapshot encrypted (home) partition too when doing a *beadm create ...*? The idea is to save states for both partitions.



No, I use beadm only for the sys pool.

If you want, you can create a snapshot of the *local* pool yourself with the *zfs snapshot* command, but it's not needed; we want an _installed system state_ snapshot, not a snapshot of empty directories from the *local* pool 



			
				overmind said:
			
		

> And thank you, great tutorial and great idea with beadm. This is a dream come true for many sysadmins to ease deployments of servers/updates or for developers .



The kudos go to the OpenSolaris/Solaris developers who created the beadm idea; I just implemented that idea in the FreeBSD world. There is still a PR (PR 167905) that 'blocks' full beadm functionality, so we will have to wait for it to be fixed to have a fully working beadm on FreeBSD.






			
				jrm said:
			
		

> Thank you for the nice guide vermaden.


Welcome.



			
				jrm said:
			
		

> Is it helpful to specify the -a or -b switches here for sector alignment?


I do not have any 4k drives yet (on purpose), so I cannot verify that.



			
				jrm said:
			
		

> I will be getting a new laptop soon with an i5-2520M processor that includes "AES New Instructions".  With such cryto hardware I'm expecting something around 75% performance with geli versus without.  Is this a reasonable expectation?


AES-NI will definitely be faster than without, but I don't know the exact numbers.


----------



## pgrunwald (Jun 5, 2012)

Hi, thanks for the great guide. I'm very rusty on FreeBSD; the last version I ran was 4.1. I'm missing the step between 3.1 and 4.0. I have completed 3.1 successfully, but I'm missing the steps for the install to use the pool.

Any pointers would be greatly appreciated.

I am using FreeBSD-9.0-RELEASE-i386-memstick.img as my install medium. 

Thanks,
Paul


----------



## vermaden (Jun 5, 2012)

pgrunwald said:
			
		

> Hi, thanks for the great guide.


Welcome.



			
				pgrunwald said:
			
		

> I'm missing the step between 3.1 and 4.0. I have completed 3.1 successfully but I'm missing the steps for the install to use the pool.
> 
> Any pointers would be greatly appreciated.


Can you explain in more detail what problem you faced?


----------



## pgrunwald (Jun 5, 2012)

Hi,

Do I just start the install as normal on the reboot, still operating from my USB drive? Going through the install, I was not sure where to install to. I still see ada0 and ada1 with the boot and zfs partitions. I'm just not sure how to proceed with the installation to the pool drive. I have two 1TB drives and this box will be operating as a NAS via Samba, FTP, and possibly Tahoe-LAFS.

TIA,
Paul


----------



## vermaden (Jun 6, 2012)

pgrunwald said:

> Do I just start the install as normal on the reboot still operating from my USB drive?


After booting into the live CD/USB and entering the instructions, you reboot into the freshly installed FreeBSD, not into the live CD/USB.



			
pgrunwald said:

> Going through the install, I was not sure where to install to.


It's up to you where you want to install it.



			
pgrunwald said:

> I still see ada0 and ada1 with the boot and zfs partitions.


Post the command outputs here; I do not know exactly what you mean.


----------



## pgrunwald (Jun 6, 2012)

Thanks, I was confused.  I did boot with the memstick out, and the computer hangs at the spinning propeller right after the boot prompt.  Just for grins, I tried this recipe: http://forums.freebsd.org/showthread.php?t=23544 and I get the same result: a hang right after the boot prompt.

Motherboard is  Atom based Gigabyte GA-D525TUD with 4GB of RAM. I have two Samsung 1TB hard drives on the Intel SATA interface.  

Any other suggestions?


----------



## vermaden (Jun 6, 2012)

pgrunwald said:

> Any other suggestions?



I would upgrade to the latest BIOS version.


----------



## HarryE (Jun 6, 2012)

I have the same board, with 8GB RAM. It seems it can't boot from GPT formatted USB sticks. I have the latest BIOS (F05). I managed to ZFS boot from an MBR formatted USB stick, with non-RAID ZFS on the same stick. It also boots from a mirrored ZFS on two GPT formatted SATA disks.

HTH


----------



## pgrunwald (Jun 7, 2012)

Ok,  I have updated to BIOS version F05, the latest available at the manufacturer.  It still hangs at the propeller. 

HarryE - thanks for the note! I am able to boot from the FreeBSD-9.0-RELEASE-i386-memstick.img as written by win32diskimager and I'm able to run through this procedure and this one: http://forums.freebsd.org/showthread.php?t=23544 without issue. It just won't come up on the reboot without the memstick. 

My drives are set on AHCI. 

What next please folks?


----------



## vermaden (Jun 7, 2012)

The beadm utility and a beadm compatible ZFS layout are now included in the latest PC-BSD snapshot towards the 9.1 release:
http://blog.pcbsd.org/2012/06/20120605-snapshot-now-available/


----------



## vermaden (Jun 18, 2012)

The problematic PR about ZFS canmount property has been fixed (thanks to _Bryan Drewery_) and merged to HEAD (with MFC: 1 week): http://freebsd.org/cgi/query-pr.cgi?pr=167905

So now beadm is fully functional on FreeBSD HEAD and will be in 9-STABLE in less than a week, or You may apply the patch Yourself from here: http://freshbsd.org/commit/freebsd/r237119

With these instructions:
```
# cd /usr/src/cddl
# patch -p1 < patch-zfs-canmount
# make obj depend all install
# reboot
```


----------



## rawthey (Jun 19, 2012)

vermaden said:

> The problematic PR about ZFS canmount property has been fixed (thanks to _Bryan Drewery_) and merged to HEAD (with MFC: 1 week): http://freebsd.org/cgi/query-pr.cgi?pr=167905



Is it safe to apply the patch to 9.0-RELEASE, or do we need to wait for the MFC and then upgrade to STABLE?


----------



## vermaden (Jun 19, 2012)

It's safe to apply the patch on 9.0-RELEASE.


----------



## srivo (Jun 20, 2012)

First, thanks for that how-to! It's really helpful for managing servers.

I followed 6.2 to do an upgrade within a jail and it looks like something is missing. I got the following error.

```
Configuring jails:.
Starting jails:df: /usr/jails/jailed/dev: No such file or directory
mount: /usr/jails/jailed: Not a directory
/etc/rc.d/jail: WARNING: devfs_domount(): Unable to mount devfs on /usr/jails/jailed/dev
/etc/rc.d/jail: WARNING: devfs_mount_jail: devfs was not mounted on /usr/jails/jailed/dev
ln: /usr/jails/jailed/dev/log: No such file or directory
 cannot start jail "jailed":
```

It is as if the ZFS jailed dataset created by beadm is not mounted.

Also, there is a little typo in the man page: in the example section it is written beadmn instead of beadm.

srivo


----------



## vermaden (Jun 20, 2012)

srivo said:

> First, thanks for that how-to! It's really helpful for managing servers.


Welcome 



			
srivo said:

> It is as if the ZFS jailed dataset created by beadm is not mounted.


Indeed, I added these to make sure that the newly created Jail dataset is mounted:



> 3.1. Make the new Jail dataset mountable.
> `# zfs set canmount=noauto sys/ROOT/jailed`
> 
> 3.2. Mount the new Jail dataset.
> `# zfs mount sys/ROOT/jailed`


----------



## vermaden (Jun 24, 2012)

Updates to the beadm utility:

 - minor fixes and cleanups
 - added *-F* switch for the *destroy* option - does not ask for confirmation upon destroy
 - implemented *umount* option with *-f* switch for *umount -f* (force)
 - implemented *mount* option with several variants of usage, examples:


```
# beadm
usage:
  beadm subcommand cmd_options

  subcommands:

  beadm activate beName
  beadm create [-e nonActiveBe | -e beName@snapshot] beName
  beadm create beName@snapshot
  beadm destroy [-F] beName | beName@snapshot
  beadm list
  beadm mount
  beadm mount beName [mountpoint]
  beadm umount [-f] beName
  beadm rename origBeName newBeName

# beadm mount
update
  sys/ROOT/update  /

# beadm mount test /test
Mounted successfully on '/test'

# beadm mount default
Mounted successfully on '/tmp/tmp.KhAtHe'

# beadm mount
default
  sys/ROOT/default  /tmp/tmp.KhAtHe

test
  sys/ROOT/test            /test
  sys/ROOT/test/SOMETHING  /test/test

update
  sys/ROOT/update  /

# beadm umount test
Unmounted successfully

# beadm umount -f default
Unmounted successfully
```

Please report all problems and bugs.


----------



## vermaden (Jul 8, 2012)

beadm 0.7 already in the Ports tree: http://freshports.org/sysutils/beadm


----------



## lisiren (Jul 12, 2012)

> > cd /usr/ports/sysutils/beadm; make install clean
> ===>  License BSD accepted by the user
> ===>  Extracting for beadm-0.7
> => SHA256 Checksum OK for beadm-0.7.tar.bz2.
> ...





> > beadm list
> ERROR: This system is not configured for boot environments



I have FreeBSD 9.0-STABLE and an encrypted ZFS root. Is it possible to use beadm with this configuration?


----------



## vermaden (Jul 12, 2012)

Post the outputs of the gpart show and mount commands here.


----------



## lisiren (Jul 12, 2012)

vermaden said:

> Post the outputs of the gpart show and mount commands here.




```
> gpart show 
=>       34  976773101  ada0  GPT  (465G)
         34        128     1  freebsd-boot  (64k)
        162       1854        - free -  (927k)
       2016    2097152     2  freebsd-ufs  (1.0G)
    2099168   20447232     3  freebsd-swap  (9.8G)
   22546400  954226735     4  freebsd-zfs  (455G)
```


```
> mount
tank0 on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
/dev/label/boot0 on /boot-mount (ufs, local, noatime)
procfs on /proc (procfs, local)
fdescfs on /dev/fd (fdescfs)
linprocfs on /compat/linux/proc (linprocfs, local)
tank0/home on /home (zfs, local, noatime, nfsv4acls)
tank0/caesar on /home/caesar (zfs, local, noatime, nfsv4acls)
tank0/torrents on /home/caesar/Torrents (zfs, local, noatime, nfsv4acls)
tank0/VBox on /home/caesar/VirtualBox (zfs, local, noatime, nfsv4acls)
tank0/usr on /usr (zfs, local, noatime, nfsv4acls)
tank0/usr/ports on /usr/ports (zfs, local, noatime, nfsv4acls)
tank0/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noatime, nfsv4acls)
tank0/var on /var (zfs, local, noatime, nfsv4acls)
```


----------



## vermaden (Jul 12, 2012)

@lisiren

I see that You use a UFS /boot for booting and then ZFS for the rest of the system; this is not supported by beadm. You must use a ZFS-only setup like in this HOWTO (without UFS), with the specified schema, something like this:


```
> mount
tank0/ROOT/default on / (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/usr on /usr (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/usr/ports on /usr/ports (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/var on /var (zfs, local, noatime, nfsv4acls)
tank0/home on /home (zfs, local, noatime, nfsv4acls)
tank0/home/caesar on /home/caesar (zfs, local, noatime, nfsv4acls)
tank0/home/caesar/Torrents on /home/caesar/Torrents (zfs, local, noatime, nfsv4acls)
tank0/home/caesar/VirtualBox on /home/caesar/VirtualBox (zfs, local, noatime, nfsv4acls)
```


----------



## lisiren (Jul 12, 2012)

Ok, thank you for answer.


----------



## vermaden (Jul 27, 2012)

Updates to the beadm utility:

 - minor fixes and cleanups
 - fixed incorrect MOUNTPOINT gathering in beadm mount
 - added an additional check to beadm activate if a BE was not mounted by the beadm mount command


----------



## xeube (Jul 27, 2012)

@vermaden 

Thank you very much for this how-to. With the implementation of beadm, I'm thinking of switching back to ZFS. However, I have one question/concern.

Is it possible to use beadm in conjunction with GRUB2? I used OpenSolaris for a short while back in the day and, if I remember correctly, their version of beadm would create a new entry in GRUB2 that would allow you to select your BE upon boot. This allowed you to select and test your new BE and, in the event of a kernel panic or an /etc/fstab issue for example, you could just revert to your previous BE. This would be easier than recovering your previous BE using an installation CD/DVD as you explained above (reply #29).


----------



## vermaden (Jul 27, 2012)

xeube said:

> Thank you very much for this how-to. With the implementation of beadm, I'm thinking of switching back to ZFS. However, I have one question/concern.


Welcome. If You face any issues with it, please report them.



			
xeube said:

> Is it possible to use beadm in conjunction with GRUB2?


GRUB2 supports booting from ZFS from version 1.99:
http://ashish.is.lostca.se/2011/12/28/booting-into-zfs-only-freebsd-from-grub2/

Recently GRUB2 version 2.0 has been released.

I think the answer is yes, but it's difficult since the version of GRUB2 in Ports is 1.98 (which does not support ZFS).

You will have to use some Linux to install GRUB 2.0 that supports ZFS.



			
xeube said:

> I used OpenSolaris for a short while back in the day and, if I remember correctly, their version of beadm would create a new entry in GRUB2 that would allow you to select your BE upon boot. This allowed you to select and test your new BE and, in the event of a kernel panic or an /etc/fstab issue for example, you could just revert to your previous BE. This would be easier than recovering your previous BE using an installation CD/DVD as you explained above (reply #29).


That is the long-time plan for FreeBSD, but with the ZFS loader: to have such a menu in 'our' bootcode and also to fail back from a non-working BE to the last working one, something like nextboot -k test to try a new kernel /boot/test/kernel, but if it fails, the loader falls back to the default /boot/kernel/kernel.

Some work has been done on the ZFS loader so that it will allow selecting which ZFS dataset to boot from; I haven't followed that development through, but it should not be that hard to add a failback/BE layer once it's done.
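Until such a failback layer exists, nextboot(8) already gives a one-shot version of this idea; a small sketch, where the kernel directory name test is just an example:

```
# try /boot/test/kernel on the next boot only; the setting is
# recorded in /boot/nextboot.conf and cleared after one boot,
# so a panic plus power-cycle lands you back on /boot/kernel
nextboot -k test
reboot

# if the test kernel works, make it the default in loader.conf:
# echo 'kernel="test"' >> /boot/loader.conf
```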


----------



## vermaden (Sep 6, 2012)

The *beadm 0.8* has just been committed to the Ports tree:

http://freshports.org/sysutils/beadm

Changelog:


```
-- Introduce proper space calculation by each boot environment in *beadm list*
-- Rework the *beadm destroy* command so no orphans are left after destroying boot environment.
-- Fix the *beadm mount* and *beadm umount* commands error handling.
-- Rework consistency of all error and informational messages.
-- Simplify and cleanup code where possible.
-- Fix *beadm destroy* for 'static' (not cloned) boot environments received by *zfs receive* command.
-- Use mktemp(1) where possible.
-- Implement *beadm list -a* option to list all datasets and snapshots of boot environments.
-- Add proper mountpoint listing to the *beadm list* command.
   % beadm list
   BE      Active Mountpoint       Space Created
   default NR     /                11.0G 2012-07-28 00:01
   test1   -      /tmp/tmp.IUQuFO  41.2M 2012-08-27 21:20
   test2   -      -                56.6M 2012-08-27 21:20

-- Change snapshot format to the one used by original *beadm* command
(%Y-%m-%d-%H:%M:%S).
   % zfs list -t snapshot -o name -r sys/ROOT/default
   NAME
   sys/ROOT/default@2012-08-27-21:20:00
   sys/ROOT/default@2012-08-27-21:20:18

-- Implement *beadm list -D* option to display space that would be consumed by single boot environment if all other boot environments will be destroyed.
   % beadm list -D
   BE      Active Mountpoint       Space Created
   default NR     /                 9.4G 2012-07-28 00:01
   test1   -      /tmp/tmp.IUQuFO   8.7G 2012-08-27 21:20
   test2   -                        8.7G 2012-08-27 21:20

-- Add an option to BEADM DESTROY command to not destroy manually created snapshots used for boot environment.

   # beadm destroy test1
   Are you sure you want to destroy 'test1'?
   This action cannot be undone (y/[n]): y
   Boot environment 'test1' was created from existing snapshot
   Destroy 'default@test1' snapshot? (y/[n]): y
   Destroyed successfully

   # beadm destroy test1
   Are you sure you want to destroy 'test1'?
   This action cannot be undone (y/[n]): y
   Boot environment 'test1' was created from existing snapshot
   Destroy 'default@test1' snapshot? (y/[n]): n
   Origin snapshot 'default@test1' will be preserved
   Destroyed successfully
```


----------



## urosgruber (Oct 8, 2012)

I think I'll go mad. I'm trying to get this work but I can't find where I'm doing it wrong.

I've created test1 and test2 be and then try to activate with beadm activate test2.

Then my beadm list looks like


```
BE      Active Mountpoint   Space Created
default N      /           959.0K 2012-10-01 13:47
test1   -      -              1.0M 2012-10-08 21:17
test2   R      -              3.4G 2012-10-08 21:17
```

When I reboot output is the same and default is still mounted on /


```
tank/ROOT/default on / (zfs, local, nfsv4acls)
tank/ROOT/test2/usr on /usr (zfs, local, nfsv4acls)
tank/ROOT/test2/usr/ports on /usr/ports (zfs, local, nosuid, nfsv4acls)
tank/ROOT/test2/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/usr/src on /usr/src (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var on /var (zfs, local, nfsv4acls)
tank/ROOT/test2/var/crash on /var/crash (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/db on /var/db (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/db/pkg on /var/db/pkg (zfs, local, nosuid, nfsv4acls)
tank/ROOT/test2/var/empty on /var/empty (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/log on /var/log (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/mail on /var/mail (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/run on /var/run (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/tmp on /var/tmp (zfs, local, nosuid, nfsv4acls)
```

I also notice that the vfs.root.mountfrom entry inside /boot/loader.conf stays the same. But the zpool property bootfs is correct, in this case *tank/ROOT/test2*.

Is there something I missed or misread?

For more detailed info, here is the output of zfs list -o name,canmount,mountpoint


```
tank/ROOT                                noauto  none
tank/ROOT/default                        noauto  legacy
tank/ROOT/default/usr                    noauto  /usr
tank/ROOT/default/usr/ports              noauto  /usr/ports
tank/ROOT/default/usr/ports/distfiles    noauto  /usr/ports/distfiles
tank/ROOT/default/usr/ports/packages     noauto  /usr/ports/packages
tank/ROOT/default/usr/src                noauto  /usr/src
tank/ROOT/default/var                    noauto  /var
tank/ROOT/default/var/crash              noauto  /var/crash
tank/ROOT/default/var/db                 noauto  /var/db
tank/ROOT/default/var/db/pkg             noauto  /var/db/pkg
tank/ROOT/default/var/empty              noauto  /var/empty
tank/ROOT/default/var/log                noauto  /var/log
tank/ROOT/default/var/mail               noauto  /var/mail
tank/ROOT/default/var/run                noauto  /var/run
tank/ROOT/default/var/tmp                noauto  /var/tmp
tank/ROOT/test1                          noauto  legacy
tank/ROOT/test1/usr                      noauto  /usr
tank/ROOT/test1/usr/ports                noauto  /usr/ports
tank/ROOT/test1/usr/ports/distfiles      noauto  /usr/ports/distfiles
tank/ROOT/test1/usr/ports/packages       noauto  /usr/ports/packages
tank/ROOT/test1/usr/src                  noauto  /usr/src
tank/ROOT/test1/var                      noauto  /var
tank/ROOT/test1/var/crash                noauto  /var/crash
tank/ROOT/test1/var/db                   noauto  /var/db
tank/ROOT/test1/var/db/pkg               noauto  /var/db/pkg
tank/ROOT/test1/var/empty                noauto  /var/empty
tank/ROOT/test1/var/log                  noauto  /var/log
tank/ROOT/test1/var/mail                 noauto  /var/mail
tank/ROOT/test1/var/run                  noauto  /var/run
tank/ROOT/test1/var/tmp                  noauto  /var/tmp
tank/ROOT/test2                              on  legacy
tank/ROOT/test2/usr                          on  /usr
tank/ROOT/test2/usr/ports                    on  /usr/ports
tank/ROOT/test2/usr/ports/distfiles          on  /usr/ports/distfiles
tank/ROOT/test2/usr/ports/packages           on  /usr/ports/packages
tank/ROOT/test2/usr/src                      on  /usr/src
tank/ROOT/test2/var                          on  /var
tank/ROOT/test2/var/crash                    on  /var/crash
tank/ROOT/test2/var/db                       on  /var/db
tank/ROOT/test2/var/db/pkg                   on  /var/db/pkg
tank/ROOT/test2/var/empty                    on  /var/empty
tank/ROOT/test2/var/log                      on  /var/log
tank/ROOT/test2/var/mail                     on  /var/mail
tank/ROOT/test2/var/run                      on  /var/run
tank/ROOT/test2/var/tmp                      on  /var/tmp
```


----------



## rawthey (Oct 8, 2012)

> I also notice that vfs.root.mountfrom entry inside /boot/loader.conf stays the same. But zpool property bootfs is correct. In this case tank/ROOT/test2
> 
> Is there something I missed or missread?

Could it be that /boot/loader.conf is read-only by any chance?


----------



## urosgruber (Oct 9, 2012)

rawthey said:

> ```
> I also notice that vfs.root.mountfrom entry inside /boot/loader.conf stays the same. But zpool property bootfs is correct. In this case tank/ROOT/test2
> 
> Is there something I missed or missread?
> ...



No, I've checked that already.


----------



## vermaden (Oct 9, 2012)

@urosgruber
You can run the *beadm* command in debug mode like this: sh -x $( which beadm ) list instead of a plain beadm list without debug.

Post the results of the 'non-working' *beadm activate* command here: sh -x $( which beadm ) activate test2


----------



## urosgruber (Oct 9, 2012)

I finally managed to resolve my issue. It looks like something was left behind when moving data from the older ZFS pool to the new ZFS pool. The boot process was actually started from that older pool, so bootfs on the new pool was never actually used. I removed the old pool, corrected some settings, and now it looks like it works OK. I guess I needed some sleep


----------



## vermaden (Oct 9, 2012)

@urosgruber

Do You think that *beadm* can be improved to cope with such things? What can we implement in *beadm* so that this will not happen again?


----------



## urosgruber (Oct 9, 2012)

@vermaden

I really don't know a good answer to that. I can give a list of what was wrong in my case.

 - the disks of the new pool didn't have any boot partition
 - because of that, there was no bootcode on them
 - the disks of the old pool had correct partitioning and also bootcode, but because the content was almost the same as on the new pool, I didn't notice that everything was booting from the old pool, or that the bootfs setting was read from the old pool while pointing to the new pool

If you install the whole system by the book there is no problem, but in my case I was doing a conversion from a plain ZFS structure to the beadm structure and also moving data from one ZFS pool to another with totally different storage devices. It's really hard to spot this kind of problem.
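A few read-only checks can help spot such a setup where the old pool, not the new one, is actually driving the boot; the pool name tank here is illustrative, matching this thread:

```
# every disk you expect to boot from should show a freebsd-boot
# partition; a disk without one carries no gptzfsboot bootcode
gpart show -p

# each importable pool has its own bootfs property; if two pools
# disagree, the one whose bootcode runs first wins
zpool get bootfs tank

# what the kernel actually mounted as root on this boot
kenv vfs.root.mountfrom
```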


----------



## vermaden (Oct 18, 2012)

@urosgruber

Ok, thanks for the suggestions; maybe I will at least be able to add some more or less useful warning.


----------



## Trois-Six (Oct 28, 2012)

Hi,

@Vermaden: is there a way to have a "more encrypted" setup than the one you described in the "Road Warrior Laptop" chapter?

My problem in particular is that with your setup /etc is not encrypted.

I wanted to do something like this:


```
kldload zfs aesni geom_eli
gpart destroy -F $PRIMARY_DISK
gpart create -s gpt $PRIMARY_DISK
gpart add -b 40 -s 256 -t freebsd-boot $PRIMARY_DISK
gpart add -b 2048 -s $SWAP_SIZE -t freebsd-swap -l swap0 $PRIMARY_DISK
gpart add -s $BOOT_SIZE -t freebsd-zfs -l boot0 $PRIMARY_DISK
gpart add -t freebsd-zfs -l root0 $PRIMARY_DISK
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $PRIMARY_DISK

# /
echo $PASSPHRASE | geli init -b -a HMAC/SHA256 -e AES-XTS -l 256 -s 4096 -B none -J - /dev/gpt/root0
echo $PASSPHRASE | geli attach -j - /dev/gpt/root0
dd if=/dev/zero of=/dev/gpt/root0.eli bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on zroot /dev/gpt/root0.eli
zfs set mountpoint=none zroot
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default

# /boot
gnop create -S 4096 /dev/gpt/boot0
dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on zboot /dev/gpt/boot0.nop
cp /tmp/zpool.cache /tmp/zpool.cache.bak
zpool export zboot
gnop destroy /dev/gpt/boot0.nop
mv /tmp/zpool.cache.bak /tmp/zpool.cache
zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
zfs set mountpoint=none zboot
zfs set checksum=fletcher4 zboot
zfs set atime=off zboot
zfs create -o mountpoint=/bootfs zboot/default
zfs set freebsd:boot-environment=1 zboot/default
zfs set bootfs=zboot/default zboot

# /usr/local
zfs create -o mountpoint=/usr/local zroot/local

# /var
zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run

# /var/tmp, /tmp
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 $DESTDIR/var/tmp
zfs create -o mountpoint=/tmp -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 $DESTDIR/tmp

# /home
zfs create -o mountpoint=/home zroot/home

# Install OS
foreach file (/usr/freebsd-dist/*.txz)
 tar --unlink -xpJf $file -C $DESTDIR
end
zfs set readonly=on zroot/var/empty

# /boot on zboot
mv $DESTDIR/boot $DESTDIR/bootfs/boot
ln -shf bootfs/boot $DESTDIR/boot
chflags -h schg $DESTDIR/boot
cp /tmp/zpool.cache $DESTDIR/boot/zfs/zpool.cache

# FreeBSD Loader
cat >> $DESTDIR/boot/loader.conf <<EOF
ahci_load="YES"
aesni_load="YES"
geom_eli_load="YES"
kern.geom.eli.visible_passphrase="2"
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
linux_load="YES"
linprocfs_load="YES"
atapicam_load="YES"
snd_hda_load="YES"
kern.maxfiles="25000"
sem_load="YES"
autoboot_delay="2"
vesa_load="YES"
splash_bmp_load="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.bmp"
if_iwn_load="YES"
mmc_load="YES"
mmcsd_load="YES"
sdhci_load="YES"
EOF

# Settings
echo "hostname=\"$HOSTNAME\"" >> $DESTDIR/etc/rc.conf
echo "ifconfig_$NETIF=\"DHCP\"" >> $DESTDIR/etc/rc.conf
cat >> $DESTDIR/etc/rc.conf <<EOF
zfs_enable="YES"
geli_swap_flags="-e AES-XTS -l 256 -s 4096 -d"
#wlans_iwn0="wlan0"
#ifconfig_wlan0="country FR WPA DHCP"
background_dhclient="YES"
background_fsck="YES"
fsck_y_enable="YES"
keymap="fr.iso.acc"
font8x8="iso15-8x8"
font8x14="iso15-8x14"
font8x16="iso15-8x16"
scrnmap="NO"
moused_enable="YES"
sshd_enable="YES"
postfix_enable="YES"
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
ntpdate_enable="YES"
#hald_enable="YES"
#dbus_enable="YES"
#gdm_enable="YES"
#gdm_lang="fr_FR.UTF-8"
#gnome_enable="YES"
#linux_enable="YES"
clear_tmp_enable="YES"
EOF
echo -e "network={\n   ssid=\"MYSSID\"\n   pskw=\"MYKEY\"\n}" >> $DESTDIR/etc/wpa_supplicant.conf
echo '/dev/gpt/swap0.eli none swap sw 0 0' >> $DESTDIR/etc/fstab
echo 'WRKDIRPREFIX=/usr/obj' >> $DESTDIR/etc/make.conf
cp $DESTDIR/usr/share/zoneinfo/Europe/Paris $DESTDIR/etc/localtime
cd $DESTDIR/etc/mail
make aliases
freebsd-update -b $DESTDIR fetch
freebsd-update -b $DESTDIR install

zfs umount -a
zfs set mountpoint=/zroot zroot
zfs set mountpoint=/zboot zboot
zfs set mountpoint=/zroot/ROOT zroot/ROOT
zfs set mountpoint=legacy zroot/ROOT/default
```

beadm would have to be able to snapshot two pools instead of one and manage these pools.

Another question: I don't know which is better, swap inside ZFS or outside?

Thank you!


----------



## vermaden (Oct 29, 2012)

Trois-Six said:

> Hi,
> 
> @Vermaden: is there a way to have a "more encrypted" setup than the one you described in the "Road Warrior Laptop" chapter?
> 
> My problem in particular is that with your setup /etc is not encrypted.



Well, my 'way' for the Road Warrior is a 'hack' already (not having the WHOLE system encrypted, as You specified).

IMHO the FreeBSD developers should implement/allow booting from ZFS on GELI, which would solve the problem instead of dirty hacks like mine or Yours.






> I wanted to do something like :
> 
> (...)
> 
> beadm would have to be able to snapshot two pools instead of one and manage these pools.



I also experimented with that setup, and I even had a *beadm* version to cope with it; here it is, maybe You will find it helpful: http://paste2.org/p/2396219 (but it's very old - from the beginning when I started to work on beadm)

The *beadm* is BSD licensed and open source; You can create Your branch from http://github.com/vermaden/beadm and add several quirks to make this possible, in the end it's just a shell script.





> Another question: I don't know which is better, swap inside ZFS or outside?


I use SWAP on ZFS because of flexibility: I can add/remove/increase/decrease the SWAP size as needed. I do not have that flexibility with GPT partitions, so I use ZFS here.
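A minimal sketch of that flexibility with swap on a ZFS volume; the pool name sys and the 4G size are only examples, not a recommendation from this HOWTO:

```
# create a 4 GB zvol with properties commonly suggested for swap
zfs create -V 4G -o checksum=off -o compression=off \
    -o primarycache=none sys/swap
zfs set org.freebsd:swap=on sys/swap   # the zvol rc.d script swapons it at boot
swapon /dev/zvol/sys/swap

# resizing later means changing volsize, no repartitioning:
# swapoff /dev/zvol/sys/swap
# zfs set volsize=8G sys/swap
# swapon /dev/zvol/sys/swap
```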


----------



## vermaden (Oct 29, 2012)

@Trois-Six

Here is the latest version of *beadm* (0.8.4) with the option to use a separate /boot from a separate pool. I haven't tested this, as I no longer have that setup and haven't checked it in VirtualBox, so beware 

http://paste2.org/p/2396248


----------



## Trois-Six (Oct 29, 2012)

Thanks!

I am going to try it when I have some time...


----------



## avilla@ (Oct 29, 2012)

I was already using some custom-ish boot environment-like (actually, zfs namespace-like) system, but beadm really makes things a lot easier. Thanks!

Now, one bug and one suggestion:

 - intermediate datasets with mountpoint=none (like usr/) are mounted by beadm mount;
 - why not use mktemp -dt be (${2} - the name of the boot environment - would be more meaningful, but could lead to overly long names) instead of the meaningless /tmp/tmp.XXXXXX?


----------



## vermaden (Oct 29, 2012)

@avilla@

Thanks 

Consider the suggestion as approved, good idea BTW.

I will look into that beadm mount problem and let You know in this thread.


----------



## avilla@ (Oct 29, 2012)

avilla@ said:

> intermediate datasets with mountpoint=none (like usr/) are mounted by beadm mount;



Here's a patch against 0.8.3:
http://people.FreeBSD.org/~avilla/files/beadm.diff


----------



## avilla@ (Oct 29, 2012)

I don't understand why you're doing this, by the way:

```
MOUNTPOINT="/$( echo "${FS}" | sed s/"${PREFIX}"//g )"
```
Shouldn't it use the MOUNTPOINT of the dataset instead of its name?


----------



## vermaden (Oct 29, 2012)

avilla@ said:

> Here's a patch against 0.8.3:
> http://people.FreeBSD.org/~avilla/files/beadm.diff


Thanks, I will review it.



			
avilla@ said:

> I don't understand why you're doing this, by the way:
> 
> ```
> MOUNTPOINT="/$( echo "${FS}" | sed s/"${PREFIX}"//g )"
> ...



Under a new prefix/root? Why not, I will look into that.


----------



## vermaden (Oct 30, 2012)

avilla@ said:

> Here's a patch against 0.8.3:
> http://people.FreeBSD.org/~avilla/files/beadm.diff



Merged to HEAD, about temporary mount points names:

```
- mktemp -d /tmp/tmp.XXXXXX
+ mktemp -d /tmp/beadm.${BE}.XXXXXX
```


----------



## avilla@ (Oct 30, 2012)

vermaden said:

> Merged to HEAD, about temporary mount points names:
> 
> ```
> - mktemp -d /tmp/tmp.XXXXXX
> ...



Thanks! I think, though, that this will result in overly long directory names, which will spoil the beadm list output; beadm alone is probably a better template. Also, you shouldn't hardcode /tmp, but let the user set TMPDIR, so consider using the -t option for mktemp.


----------



## vermaden (Oct 30, 2012)

avilla@ said:

> Thanks! I think, though, that this will result in overly long directory names, which will spoil the beadm list output; beadm alone is probably a better template.


Maybe I will think of something shorter.



			
avilla@ said:

> Also, you shouldn't hardcode /tmp, but let the user set his TMPDIR, so consider using the -t option for mktemp.


If the user wants to mount it somewhere else, then the syntax is beadm mount <beName> [mountpoint]


----------



## vermaden (Oct 30, 2012)

@avilla@

Introduced /tmp/BE-${BE}.XXXXXX for mountpoint names and automatic deletion of the generated mountpoints:
https://github.com/vermaden/beadm/commit/a476ca72a069d06e5c5a9dc173df56323855d7a1
https://github.com/vermaden/beadm/commit/4025f07c3f43b1fb55173006fb41ea48dabdb07f

Regards,
vermaden


----------



## Trois-Six (Oct 31, 2012)

I can't find a way to boot a dataset from my zboot pool.

Let me explain: this works:


```
#!/usr/bin/env tcsh

set PRIMARY_DISK=`/sbin/sysctl -n kern.disks`
set NETIF=`/sbin/ifconfig -l -u | /usr/bin/sed -e 's/lo0//' -e 's/ //g'`
set SWAP_SIZE=1G
set BOOT_SIZE=1G
set HOSTNAME=freebsd.localdomain
set DESTDIR=/mnt
set PASSPHRASE=mypassword
kldload zfs aesni geom_eli
gpart destroy -F $PRIMARY_DISK
gpart create -s gpt $PRIMARY_DISK
gpart add -b 40 -s 256 -t freebsd-boot $PRIMARY_DISK
gpart add -b 2048 -s $BOOT_SIZE -t freebsd-zfs -l boot0 $PRIMARY_DISK
gpart add -t freebsd-zfs -l root0 $PRIMARY_DISK
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $PRIMARY_DISK
echo $PASSPHRASE | geli init -b -e AES-XTS -l 256 -s 4096 -B none -J - /dev/gpt/root0
echo $PASSPHRASE | geli attach -j - /dev/gpt/root0
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zroot /dev/gpt/root0.eli
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs set mountpoint=none zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
gnop create -S 4096 /dev/gpt/boot0
dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m /bootfs zboot /dev/gpt/boot0.nop
zpool export zboot
gnop destroy /dev/gpt/boot0.nop
zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
zfs set checksum=fletcher4 zboot
zfs set atime=off zboot
zfs set bootfs=zboot zboot
zfs create -o mountpoint=/usr/local zroot/local
zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 $DESTDIR/var/tmp
zfs create -o mountpoint=/tmp -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 $DESTDIR/tmp
zfs create -o mountpoint=/home zroot/home
foreach file (/usr/freebsd-dist/*.txz)
 tar --unlink -xpJf $file -C $DESTDIR
end
zfs set readonly=on zroot/var/empty
mv $DESTDIR/boot $DESTDIR/bootfs/boot
ln -shf bootfs/boot $DESTDIR/boot
chflags -h schg $DESTDIR/boot
cp /tmp/zpool.cache $DESTDIR/boot/zfs/zpool.cache
cat >> $DESTDIR/boot/loader.conf <<__EOF__
ahci_load="YES"
aesni_load="YES"
geom_eli_load="YES"
kern.geom.eli.visible_passphrase="2"
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
__EOF__
echo hostname=\"$HOSTNAME\" >> $DESTDIR/etc/rc.conf
echo ifconfig_$NETIF=\"DHCP\" >> $DESTDIR/etc/rc.conf
cat >> $DESTDIR/etc/rc.conf <<__EOF__
zfs_enable="YES"
__EOF__
cp $DESTDIR/usr/share/zoneinfo/Europe/Paris $DESTDIR/etc/localtime
cd $DESTDIR/etc/mail
setenv SENDMAIL_ALIASES $DESTDIR/etc/mail/aliases
make aliases
cd /
zfs umount -a
zfs set mountpoint=legacy zroot/ROOT/default
zfs set mountpoint=/zroot zroot
zfs set mountpoint=/zroot/ROOT zroot/ROOT
zfs set mountpoint=/home zroot/home
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr/local zroot/local
zfs set mountpoint=/var zroot/var
```

But this doesn't:


```
#!/usr/bin/env tcsh

set PRIMARY_DISK=`/sbin/sysctl -n kern.disks`
set NETIF=`/sbin/ifconfig -l -u | /usr/bin/sed -e 's/lo0//' -e 's/ //g'`
set SWAP_SIZE=1G
set BOOT_SIZE=1G
set HOSTNAME=freebsd.localdomain
set DESTDIR=/mnt
set PASSPHRASE=mypassword
kldload zfs aesni geom_eli
gpart destroy -F $PRIMARY_DISK
gpart create -s gpt $PRIMARY_DISK
gpart add -b 40 -s 256 -t freebsd-boot $PRIMARY_DISK
gpart add -b 2048 -s $BOOT_SIZE -t freebsd-zfs -l boot0 $PRIMARY_DISK
gpart add -t freebsd-zfs -l root0 $PRIMARY_DISK
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $PRIMARY_DISK
echo $PASSPHRASE | geli init -b -e AES-XTS -l 256 -s 4096 -B none -J - /dev/gpt/root0
echo $PASSPHRASE | geli attach -j - /dev/gpt/root0
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zroot /dev/gpt/root0.eli
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs set mountpoint=none zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
gnop create -S 4096 /dev/gpt/boot0
dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zboot /dev/gpt/boot0.nop
zpool export zboot
gnop destroy /dev/gpt/boot0.nop
zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
zfs set checksum=fletcher4 zboot
zfs set atime=off zboot
zfs set mountpoint=none zboot
zfs set bootfs=zboot/default zboot
zfs create -o mountpoint=/bootfs zboot/default
zfs create -o mountpoint=/usr/local zroot/local
zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 $DESTDIR/var/tmp
zfs create -o mountpoint=/tmp -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 $DESTDIR/tmp
zfs create -o mountpoint=/home zroot/home
foreach file (/usr/freebsd-dist/*.txz)
 tar --unlink -xpJf $file -C $DESTDIR
end
zfs set readonly=on zroot/var/empty
mv $DESTDIR/boot $DESTDIR/bootfs/boot
ln -shf bootfs/boot $DESTDIR/boot
chflags -h schg $DESTDIR/boot
cp /tmp/zpool.cache $DESTDIR/boot/zfs/zpool.cache
cat >> $DESTDIR/boot/loader.conf <<__EOF__
ahci_load="YES"
aesni_load="YES"
geom_eli_load="YES"
kern.geom.eli.visible_passphrase="2"
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
__EOF__
echo hostname=\"$HOSTNAME\" >> $DESTDIR/etc/rc.conf
echo ifconfig_$NETIF=\"DHCP\" >> $DESTDIR/etc/rc.conf
cat >> $DESTDIR/etc/rc.conf <<__EOF__
zfs_enable="YES"
__EOF__
cp $DESTDIR/usr/share/zoneinfo/Europe/Paris $DESTDIR/etc/localtime
cd $DESTDIR/etc/mail
setenv SENDMAIL_ALIASES $DESTDIR/etc/mail/aliases
make aliases
cd /
zfs umount -a
zfs set mountpoint=legacy zroot/ROOT/default
zfs set mountpoint=/zfspools/zroot zroot
zfs set mountpoint=/zfspools/zroot/ROOT zroot/ROOT
zfs set mountpoint=/home zroot/home
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr/local zroot/local
zfs set mountpoint=/var zroot/var
zfs set mountpoint=/zfspools/zboot zboot
zfs set mountpoint=/bootfs zboot/default
```

The diff:


```
@@ -68,13 +68,15 @@
 # /boot
 gnop create -S 4096 /dev/gpt/boot0
 dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
-zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m /bootfs zboot /dev/gpt/boot0.nop
+zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zboot /dev/gpt/boot0.nop
 zpool export zboot
 gnop destroy /dev/gpt/boot0.nop
 zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
 zfs set checksum=fletcher4 zboot
 zfs set atime=off zboot
-zfs set bootfs=zboot zboot
+zfs set mountpoint=none zboot
+zfs set bootfs=zboot/default zboot
+zfs create -o mountpoint=/bootfs zboot/default
 
 # /usr/local
 zfs create -o mountpoint=/usr/local zroot/local
@@ -186,10 +188,12 @@
 
 zfs umount -a
 zfs set mountpoint=legacy zroot/ROOT/default
-zfs set mountpoint=/zroot zroot
-zfs set mountpoint=/zroot/ROOT zroot/ROOT
+zfs set mountpoint=/zfspools/zroot zroot
+zfs set mountpoint=/zfspools/zroot/ROOT zroot/ROOT
 zfs set mountpoint=/home zroot/home
 zfs set mountpoint=/tmp zroot/tmp
 zfs set mountpoint=/usr/local zroot/local
 zfs set mountpoint=/var zroot/var
+zfs set mountpoint=/zfspools/zboot zboot
+zfs set mountpoint=/bootfs zboot/default
```

The FreeBSD loader does not find the zfsloader. Is there a way to do it?

Help?


----------



## vermaden (Oct 31, 2012)

I do not see any single point about which I can say 'this one is the problem'.

IMHO start with the setup that works and change one thing at a time. That will take some time (it can be scripted though) but will show You where the problem is.


----------



## Trois-Six (Nov 2, 2012)

Hi,

I finally found the configuration that lets me boot with two pools having datasets.

Howto:

First, boot with the FreeBSD LiveCD, then start SSH:


```
mkdir /tmp/etc
mdmfs -s32m -S md /tmp/etc
mount -t unionfs /tmp/etc /etc
echo password | pw usermod root -h 0
rm /etc/resolv.conf
dhclient em0
cat /var/run/resolvconf/interfaces/* > /etc/resolv.conf
echo PermitRootLogin=yes >> /etc/ssh/sshd_config
service sshd onestart
```

Then you only have to copy the attached script via scp.

chmod +x it, and execute it.

Ten minutes later, you have a working FreeBSD system.

Config after reboot:


```
root@beastie:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zboot                387M   597M   144K  /zfspools/zboot
zboot/default        386M   597M   386M  /bootfs
zroot               2.30G  16.3G   152K  /zfspools/zroot
zroot/ROOT          1.26G  16.3G   152K  /zfspools/zroot/ROOT
zroot/ROOT/default  1.26G  16.3G  1.26G  legacy
zroot/home           144K  16.3G   144K  /home
zroot/local          144K  16.3G   144K  /usr/local
zroot/swap          1.03G  17.3G    72K  -
zroot/tmp            184K  16.3G   184K  /tmp
zroot/var           1.93M  16.3G   568K  /var
zroot/var/crash      148K  16.3G   148K  /var/crash
zroot/var/db         388K  16.3G   244K  /var/db
zroot/var/db/pkg     144K  16.3G   144K  /var/db/pkg
zroot/var/empty      144K  16.3G   144K  /var/empty
zroot/var/log        192K  16.3G   192K  /var/log
zroot/var/mail       144K  16.3G   144K  /var/mail
zroot/var/run        240K  16.3G   240K  /var/run
zroot/var/tmp        152K  16.3G   152K  /var/tmp

root@beastie:/root # zpool get bootfs
NAME   PROPERTY  VALUE          SOURCE
zboot  bootfs    zboot/default  local
zroot  bootfs    -              default
```

I used your modified beadm script:


```
root@beastie:/root # beadm create upgrade
Created successfully
root@beastie:/root # beadm list
BE      Active Mountpoint Space Policy Created
default N      /          1.26G static 2012-11-02 21:52
upgrade -      -             8K static 2012-11-02 22:06
root@beastie:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zboot                387M   597M   144K  /zfspools/zboot
zboot/default        386M   597M   386M  /bootfs
zroot               2.30G  16.3G   152K  /zfspools/zroot
zroot/ROOT          1.26G  16.3G   152K  /zfspools/zroot/ROOT
zroot/ROOT/default  1.26G  16.3G  1.26G  legacy
zroot/ROOT/upgrade     8K  16.3G  1.26G  /zfspools/zroot/ROOT/upgrade
zroot/home           144K  16.3G   144K  /home
zroot/local          144K  16.3G   144K  /usr/local
zroot/swap          1.03G  17.3G    72K  -
zroot/tmp            184K  16.3G   184K  /tmp
zroot/var           1.93M  16.3G   568K  /var
zroot/var/crash      148K  16.3G   148K  /var/crash
zroot/var/db         388K  16.3G   244K  /var/db
zroot/var/db/pkg     144K  16.3G   144K  /var/db/pkg
zroot/var/empty      144K  16.3G   144K  /var/empty
zroot/var/log        192K  16.3G   192K  /var/log
zroot/var/mail       144K  16.3G   144K  /var/mail
zroot/var/run        240K  16.3G   240K  /var/run
zroot/var/tmp        152K  16.3G   152K  /var/tmp
```

It didn't create a snapshot of zboot/default as zboot/upgrade?
Is this the configuration you used in the past?

Thanks,

Trois Six


----------



## vermaden (Nov 5, 2012)

Trois-Six said:
			
		

> First, boot with the FreeBSD liveCD, then start SSH :
> 
> 
> ```
> ...



You can shorten that procedure to:
`# dhclient em0
# nc -l 2222 > /root/install.sh`
... and on the client ...
`% nc -w3 ${SERVER_IP} 2222 < ../path/to/install.sh`






> Config after reboot :
> 
> 
> ```
> ...



Using that schema makes beadm less useful than it could be. By adding /usr/local, /var/db/pkg and /var/db to the boot environment, beadm will also 'snapshot' the installed software, which is great for upgrades/updates. If something fails during an upgrade/update, You can go back to a clean and working system. Besides that, it's OK.



> I used your modified beadm script:


So it seems to work correctly with these two pools?



> It didn't create a snapshot of zboot/default to zboot/upgrade ?


beadm creates snapshots of everything under zroot/ROOT/BENAME and nowhere else. If You want something to be 'supported' by beadm, put it under the zroot/ROOT/BENAME path (for example zroot/ROOT/BENAME/usr/local).

Of course beadm can be modified to also support another pool for boot.
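As a hedged sketch of that suggestion (the dataset names below are illustrative, not from the guide), putting such data under the BE path could look like:

```shell
# Illustrative only: datasets created below the boot environment path
# are picked up by beadm snapshots/clones together with the root dataset.
zfs create -o mountpoint=/usr/local  zroot/ROOT/default/usr.local
zfs create -o mountpoint=/var/db/pkg zroot/ROOT/default/var.db.pkg
```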



> Is it the configuration you did in the past ?


Yes, something like that.


----------



## Trois-Six (Nov 5, 2012)

Hi,

Following my layout, I did a quick and dirty hack of your script.
create, rename, mount, umount and list work;
activate... not really.

Another problem is that the snapshotted /bootfs is mounted over the currently mounted /bootfs.

/usr/local, /var/db/pkg and /var/db do not always depend on the running system; but yes, I agree that I can snapshot them too.

Regards,

Trois Six


----------



## vermaden (Nov 14, 2012)

Thanks for the extensive patch (I did not try it, just reviewed it).

I am afraid these changes are quite big and I would prefer NOT to incorporate them into beadm (because of possible bugs, future code maintenance, and easier implementation of new features), but if You need my assistance with that 'fork' for Your setup, let me know


----------



## Trois-Six (Nov 15, 2012)

Hi,

I fully agree you should not merge that patch, it's too invasive, and I didn't spend the time to make activate work for the moment.

Maybe only these changes:


```
if __be_clone ${POOL}/ROOT/${DESTROY}
           then
             # promote clones dependent on snapshots used by destroyed boot environment
-            zfs list -H -t all -o name,origin \
+            zfs list -H -t all -o name,origin -r ${POOL} \
               | while read NAME ORIGIN
                 do
                   if echo "${ORIGIN}" | grep -q -E "${POOL}/ROOT/${DESTROY}(/.*@|@)" 2> /dev/null
@@ -582,7 +750,7 @@
           if __be_clone ${POOL}/ROOT/${DESTROY}
           then
             # promote datasets dependent on origins used by destroyed boot environment
-            ALL_ORIGINS=$( zfs list -H -t all -o name,origin )
+            ALL_ORIGINS=$( zfs list -H -t all -o name,origin -r ${POOL} )
             echo "${ORIGIN_SNAPSHOTS}" \
               | while read S
                 do
@@ -596,7 +764,83 @@
                 done
           fi
           # destroy origins used by destroyed boot environment
-          SNAPSHOTS=$( zfs list -H -t snapshot -o name )
+          SNAPSHOTS=$( zfs list -H -t snapshot -o name -r ${POOL} )
```

Because your code does not specify the pool, and if you have more than one pool it may not do what is expected.


----------



## vermaden (Nov 15, 2012)

True, applied and committed to HEAD.


----------



## sadsfae (Nov 27, 2012)

Thank you for the guide, it's very comprehensive and serves both use cases that I had.
Unfortunately, I'm unable to boot afterwards - it's like the drive is not being flagged as bootable.

Should I be wiping the disk/MBR prior to setup (using parted/livecd) or should gpart destroy be taking care of this for me? (machine previously had grub/Linux).

I've also tried changing AHCI to Legacy for the SATA disk and flagging the /boot partition bootable in parted via a liveCD.  Any suggestions?


----------



## vermaden (Nov 27, 2012)

@sadsfae

First, leave the disk/chipset/controller in AHCI, it's not that.

Second, as You had Linux there before, I would suggest wiping the beginning of the disk with this command:
`# dd < /dev/zero > /dev/ada0 bs=8m count=16`

Next, do the instructions as in the guide, it should work as expected.

You can also first check that You do these instructions properly under virtual machine within VirtualBox or other virtualization platform.

Try these and let me know what You get.



> and flagging the /boot partition bootable in parted via a liveCD.  Any suggestions?


Unlike Linux, FreeBSD does not use separate partition for /boot.


----------



## sadsfae (Nov 28, 2012)

vermaden said:
			
		

> @sadsfae
> 
> First, leave the disk/chipset/controller in AHCI, it's not that.
> 
> ...



Thanks for the quick response, I tried it again after the dd and still got the same results.
I did notice an error around this part, but maybe it's because of the LiveCD:

(after geli attach)
# zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli

cannot mount '/local': failed to create mountpoint

I'll try later today or tomorrow in a KVM VM or switch out the media, perhaps it's not extracting all the files correctly during the install portions.


----------



## vermaden (Nov 28, 2012)

sadsfae said:
			
		

> (after geli attach)
> # zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli
> 
> cannot mount '/local': failed to create mountpoint



It's a harmless error; it's because / is mounted read-only on the LiveCD, so the /local mountpoint cannot be created.


----------



## sadsfae (Nov 29, 2012)

vermaden said:
			
		

> It's a harmless error; it's because / is mounted read-only on the LiveCD, so the /local mountpoint cannot be created.



Still no luck, but I think it's on my side - tried with another disk as well.  I think the hardware I'm using needs a firmware update (Thinkpad T420S with SATA/AHCI).

I'll work it out in a VM and also try some different bare-metal hardware.  Thank you for the assistance thus far.


----------



## sadsfae (Dec 4, 2012)

sadsfae said:
			
		

> Still no luck, but I think it's on my side - tried with another disk as well.  I think the hardware I'm using needs a firmware update (Thinkpad T420S with SATA/AHCI).
> 
> I'll work it out in a VM and also try some different bare-metal hardware.  Thank you for the assistance thus far.



@Vermaden - this is working beautifully now.  I think there were issues with the Lenovo Thinkpad T420s and an older version of the BIOS.  I'm up and running now on a Thinkpad T510.

Thank you for the wonderful guide and help here.

Just two questions

1) I'm using ZFS snapshots of / and eventually home when my userland/apps are perfected... can I simply restore it online in a recovery scenario or do I need to boot to single-user or recovery and promote/change?  (I'm still reading through ZFS documentation)

2) Beadm - do folks use this to provision a new machine with similar hardware and save the effort of the setup/ports compilation, etc?  Looks like a very powerful tool.


----------



## vermaden (Dec 4, 2012)

sadsfae said:
			
		

> @Vermaden - this is working beautifully now.  I think there were issues with the Lenovo Thinkpad T420s and an older version of the BIOS.  I'm up and running now on a Thinkpad T510.
> 
> Thank you for the wonderful guide and help here.



Welcome 



			
sadsfae said:

> 1) I'm using ZFS snapshots of / and eventually home when my userland/apps are perfected... can I simply restore it online in a recovery scenario or do I need to boot to single-user or recovery and promote/change?  (I'm still reading through ZFS documentation)



You can set the ZFS property snapdir=visible, so You would have a .zfs directory with snapshots. You can also mount these snapshots somewhere and then do something with the files stored there.
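A minimal sketch of that (the zroot/home dataset and the backup snapshot name are assumptions, not from the thread):

```shell
# Illustrative: expose snapshots via the hidden .zfs directory and
# restore a single file from one of them, all while the system is online.
zfs set snapdir=visible zroot/home        # .zfs now shows up in ls(1) output
ls /home/.zfs/snapshot/backup             # browse the snapshot read-only
cp /home/.zfs/snapshot/backup/.profile /home/.profile
```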



			
sadsfae said:

> 2) Beadm - do folks use this to provision a new machine with similar hardware and save the effort of the setup/ports compilation, etc?  Looks like a very powerful tool.


I have done that in the past: do the zfs send sys/ROOT/name | ... | zfs recv ... and then just beadm activate name + reboot 

I also did a beadm 'backup' before upgrading packages, before upgrading to a newer system snapshot (STABLE), before moving to PKGng and so on.
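A hedged sketch of that provisioning flow (the sys pool, the BE name and the newhost hostname are placeholders):

```shell
# Illustrative: replicate an existing boot environment to a new machine.
zfs snapshot -r sys/ROOT/name@provision
zfs send -R sys/ROOT/name@provision | ssh newhost zfs recv -u sys/ROOT/name
# ...then on the new machine:
beadm activate name
shutdown -r now
```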


----------



## randomcop (Dec 21, 2012)

Thanks for your great Howto and the beadm script!

I have already done some test installs, both under VirtualBox and on some real server hardware, and the ZFS-on-root + beadm setup really works OK.

What really makes me nervous are conceivable situations where I have an active boot environment that locks up right after the kernel is loaded, or some other form of broken FreeBSD. How can I go back to a previous, stable boot environment without being able to use the beadm script?

As an experiment I created a new boot environment from default, named "be1". I activated "be1" with beadm and on startup escaped to loader prompt. There I did:

- unload kernel
- set vfs.root.mountfrom=zfs:sys/ROOT/default (from vfs.root.mountfrom=zfs:sys/ROOT/be1)
- set currdev=zfs:sys/ROOT/default: (from currdev=zfs:sys/ROOT/be1
- load kernel
- load zfs
- boot

This leads to the result that - after the kernel loaded OK - sys/ROOT/default cannot be mounted and the following text is displayed:

mounting from zfs:sys/ROOT/default: failed with error 2

Is this because the property "canmount" on sys/ROOT/default still has "noauto" set and should be set to "on"?

What can I do if I have an active, but broken, boot environment and want to revert to a previous, stable boot environment?

FYI: I am testing on a FreeBSD 9-STABLE from December 4th, 2012.

Regards.


----------



## kpa (Dec 25, 2012)

There's a recent change on 9-STABLE that makes the zpool.cache optional for bootable ZFS pools. I think that beadm(1) should not consider a missing /boot/zfs/zpool.cache an error but print a warning that the pool may not be bootable unless the OS is recent enough.

http://forums.freebsd.org/showthread.php?t=36513


----------



## vermaden (Dec 26, 2012)

randomcop said:
			
		

> Thanks for your great Howto and the beadm script!
> 
> I have already done some test installs, both under VirtualBox and on some real server hardware, and the ZFS-on-root + beadm setup really works OK.
> 
> What really makes me nervous are conceivable situations where I have an active boot environment that locks up right after the kernel is loaded, or some other form of broken FreeBSD. How can I go back to a previous, stable boot environment without being able to use the beadm script?




You will have to use a FreeBSD LiveCD or LiveUSB and do it 'by hand'.



			
randomcop said:

> As an experiment I created a new boot environment from default, named "be1". I activated "be1" with beadm and on startup escaped to loader prompt. There I did:
> 
> - unload kernel
> - set vfs.root.mountfrom=zfs:sys/ROOT/default (from vfs.root.mountfrom=zfs:sys/ROOT/be1)
> ...



This error comes up mostly when zpool.cache is not up to date; to make it up to date You need to export and import the pool.





			
randomcop said:

> What can I do if I have an active, but broken, boot environment and want to revert to a previous, stable boot environment?




You will have to use a FreeBSD LiveCD or LiveUSB and do it 'by hand', including the zpool.cache 'regeneration' step.
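An outline of that 'by hand' recovery, assuming a sys pool and a known-good default BE (names are examples, paths follow the guide's conventions):

```shell
# Illustrative sketch, run from a LiveCD/LiveUSB shell: point boot back at
# a known-good BE and regenerate zpool.cache by a fresh import/export cycle.
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache sys
zpool set bootfs=sys/ROOT/default sys        # boot the known-good BE again
mount -t zfs sys/ROOT/default /mnt           # BE datasets use canmount=noauto
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
umount /mnt
zpool export sys                             # export so the reboot imports cleanly
```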


----------



## vermaden (Dec 26, 2012)

kpa said:
			
		

> There's a recent change on 9-STABLE that makes the zpool.cache optional for bootable ZFS pools. I think that beadm(1) should not consider a missing /boot/zfs/zpool.cache an error but print a warning that the pool may not be bootable unless the OS is recent enough.
> 
> http://forums.freebsd.org/showthread.php?t=36513



I have read about it, but not all parts are in the 9.1-STABLE yet:


> The change was introduced via multiple commits, the latest relevant revision in
> head is r243502.  The changes are *partially* MFC-ed, the remaining parts are
> scheduled to be MFC-ed soon.



Also, zpool.cache is still used in 9.1-RELEASE, so beadm(1) will keep supporting it. I could check the FreeBSD version (r243107 for example) in beadm(1) and act accordingly with no zpool.cache for newer versions, but as I said, the changes are not even in STABLE yet


----------



## kpa (Dec 26, 2012)

Sorry, I should have linked the relevant post in freebsd-stable by Andriy Gapon. The missing pieces were MFC'ed a few days ago:

http://lists.freebsd.org/pipermail/freebsd-stable/2012-December/071345.html


I'm booting from a ZFS pool without a zpool.cache; it works just fine.

I'm not asking for the removal of the zpool.cache copying, just to make it optional in case there isn't one


----------



## vermaden (Dec 26, 2012)

@*kpa*

Nice to see these changes already in STABLE. I will have to test that out in some VM to check how the install procedure differs, what is no longer needed, and what can be 'ifed' in beadm(1) to not touch zpool.cache again. Thanks for the clarification mate


----------



## Sebulon (Jan 15, 2013)

Hi,

I have read through this thread a couple of times, since I've wanted to implement BE on a new server I'm installing, but I think there will be something preventing it. The servers I'm administering are storage units that have two USB sticks configured with ZFS as a mirrored / that is used to boot the machine. Then other filesystems, like /usr, /usr/local, /usr/src, /usr/obj, /usr/ports etc., are configured as part of the bigger pool, made up of the hard drives in the system. The layout I was going for would then look something like:

```
FS                                   MOUNTPOINT
pool0                                none
pool0/ROOT/9.1-RELEASE               legacy (bootfs="pool0/ROOT/9.1-RELEASE", vfs.root.mountfrom="zfs:pool0/ROOT/9.1-RELEASE")
pool0/ROOT/9.X-RELEASE               legacy
pool0/ROOT/X.Y-RELEASE               legacy
pool1                                none
pool1/ROOT/9.1-RELEASE               none
pool1/ROOT/9.1-RELEASE/usr           /usr
pool1/ROOT/9.1-RELEASE/usr/home      /usr/home
pool1/ROOT/9.1-RELEASE/usr/local     /usr/local
pool1/ROOT/9.1-RELEASE/usr/obj       /usr/obj
pool1/ROOT/9.1-RELEASE/usr/ports     /usr/ports
pool1/ROOT/9.1-RELEASE/var           /var
pool1/ROOT/9.X-RELEASE               none
pool1/ROOT/9.X-RELEASE/usr           /usr
pool1/ROOT/9.X-RELEASE/usr/home      /usr/home
pool1/ROOT/9.X-RELEASE/usr/local     /usr/local
pool1/ROOT/9.X-RELEASE/usr/obj       /usr/obj
pool1/ROOT/9.X-RELEASE/usr/ports     /usr/ports
pool1/ROOT/9.X-RELEASE/var           /var
pool1/ROOT/X.Y-RELEASE               none
pool1/ROOT/X.Y-RELEASE/usr           /usr
pool1/ROOT/X.Y-RELEASE/usr/home      /usr/home
pool1/ROOT/X.Y-RELEASE/usr/local     /usr/local
pool1/ROOT/X.Y-RELEASE/usr/obj       /usr/obj
pool1/ROOT/X.Y-RELEASE/usr/ports     /usr/ports
pool1/ROOT/X.Y-RELEASE/var           /var
tmpfs                                /tmp
pool1/SWAP                           -
pool1/EXPORT                         none
pool1/EXPORT/datastore               /export/datastore
```
Would beadm be able to cope with this type of layout?

Also, since these systems are booting from USB, I've configured the systems to mount / as read-only to prolong their lives by quite a bit. What would be the best-practice way to configure that for use with beadm? Up until now, I've only used plain ol' fstab with all filesystems set to legacy to handle things, so I'm quite new to ZFS's automagic

/Sebulon


----------



## vermaden (Jan 15, 2013)

Sebulon said:
			
		

> Would beadm be able to cope with this type of layout?



beadm operates on ${BOOTPOOL}/ROOT/${BENAME}, and anything below that path is taken into the Boot Environment. Anything else, even with the same 'schema', is omitted: ${OTHERPOOL}/ROOT/${BENAME} will not be snapshotted or used by beadm.



			
Sebulon said:

> Also, since these systems are booting from USB, I've configured the systems to mount / as read-only to prolong their lives by quite a bit. What would be the best-practice way to configure that for use with beadm? Up until now, I've only used plain ol' fstab with all filesystems set to legacy to handle things, so I'm quite new to ZFS's automagic



First, do not use /etc/fstab for ZFS mounts - it will not work with beadm; use zfs_enable=YES in the /etc/rc.conf file.

Second, using plain config files for ZFS becomes less relevant with every release; for example, in 9.1-STABLE You no longer need the /boot/zfs/zpool.cache file and You no longer need to set vfs.root.mountfrom in the /boot/loader.conf file. Only the bootfs property is used.
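For reference, on such a recent system selecting the root filesystem reduces to the bootfs pool property alone (the zroot pool and default BE names follow the guide's examples):

```shell
# Illustrative: no zpool.cache copy and no vfs.root.mountfrom needed anymore.
zpool set bootfs=zroot/ROOT/default zroot
zpool get bootfs zroot                 # verify which dataset will be booted
```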

As for beadm working with a Boot Environment spread across several pools with the same prefix... well, everything is possible, it's about how much time You put into it. It is definitely technically possible, but in its current implementation beadm does not support such a schema and I am afraid I will not be adding these changes.

I will see what can be added to beadm to support such schemas and will tell You if I modify it to do that.


----------



## Sebulon (Jan 16, 2013)

vermaden said:
			
		

> beadm operates on ${BOOTPOOL}/ROOT/${BENAME}, and anything below that path is taken into the Boot Environment. Anything else, even with the same 'schema', is omitted: ${OTHERPOOL}/ROOT/${BENAME} will not be snapshotted or used by beadm.


Yes, that's what I was afraid of. Good to have that clarified.



			
vermaden said:

> First, do not use /etc/fstab for ZFS mounts - it will not work with beadm; use zfs_enable=YES in the /etc/rc.conf file.


I had gathered as much.



			
vermaden said:

> Second, using plain config files for ZFS becomes less relevant with every release; for example, in 9.1-STABLE You no longer need the /boot/zfs/zpool.cache file and You no longer need to set vfs.root.mountfrom in the /boot/loader.conf file. Only the bootfs property is used.


Ah, sweet. Good to know, thanks!



			
vermaden said:

> As for beadm working with a Boot Environment spread across several pools with the same prefix... well, everything is possible, it's about how much time You put into it. It is definitely technically possible, but in its current implementation beadm does not support such a schema and I am afraid I will not be adding these changes.
> 
> I will see what can be added to beadm to support such schemas and will tell You if I modify it to do that.


I totally understand not wanting to redo and possibly break what's working. If you are ever bored enough and decide to go for it anyway, let me know. Thanks again!

/Sebulon


----------



## _martin (Jan 25, 2013)

This is one awesome thread. I said it before, I'll say it again: thanks 

Recently I got a notebook where I needed to use encryption, and I want full disk encryption. Basically it's what you've done, but with a small modification.
I'm attaching it here, maybe it can be helpful.

The idea is to encrypt the whole disk, leaving only the /boot partition unencrypted. Booting is done from the bootz ZFS pool; rpool is the encrypted root pool.

My final disk layout is as follows: 

`# gpart show ada0`

```
=>       34  976773101  ada0  GPT  (465G)
         34          6        - free -  (3.0k) 
         40        256     1  freebsd-boot  (128k)         # ; bootcode
        296    4194304     2  freebsd-zfs  (2.0G)          # ; boot ZFS
    4194600  950009856     3  freebsd-zfs  (453G)          # ; encrypted ZFS
  954204456   22568672     4  freebsd-swap  (10G)          # ; dump 
  976773128          7        - free -  (3.5k)
```

`# zpool status`

```
pool: bootz
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bootz       ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          ada0p3.eli  ONLINE       0     0     0

errors: No known data errors
```

`# zfs list`

```
NAME                     USED  AVAIL  REFER  MOUNTPOINT
bootz                    350M  1.61G    31K  none
bootz/default            350M  1.61G   350M  /bootdir
rpool                    307M   445G   144K  none
rpool/ROOT               306M   445G   144K  none
rpool/ROOT/default       305M   445G  24.1M  legacy
rpool/ROOT/default/usr   281M   445G   281M  /usr
rpool/ROOT/default/var   868K   445G   868K  /var
rpool/ROOT/empty         144K   445G   144K  /var/empty
rpool/ROOT/home          184K   445G   184K  /home
rpool/ROOT/tmp           176K   445G   176K  /tmp
```

The bootfs property is set to bootz/default. One has to make sure /boot is set up properly:

`# cd /; ll boot`

```
lrwxr-xr-x  1 root  wheel  12 Jan 26 00:20 boot@ -> bootdir/boot
```

This does screw with beadm, I guess. It requires manual creation of the bootz/* datasets and managing what is mounted on /bootdir.


----------



## vermaden (Jan 26, 2013)

matoatlantis said:
			
		

> This is one awesome thread. I said it before, I'll say it again: thanks



Welcome.



			
matoatlantis said:

> This does screw with beadm I guess. It does require manual creation of bootz/* dataset and managing what is mounted to /bootdir.



I have created a modified version of beadm for such setup, here is latest modified 0.8.4 version (0.8.5 unmodified in Ports):
http://forums.freebsd.org/showpost.php?p=195040

... at least it works with that setup:
http://forums.freebsd.org/showpost.php?p=195631


----------



## _martin (Jan 27, 2013)

vermaden said:

> http://forums.freebsd.org/showpost.php?p=195040
> 
> ... at least it works with that setup:
> http://forums.freebsd.org/showpost.php?p=195631



I didn't notice somebody else had the same setup as I have; I had to go with the reinventing-the-wheel procedure... my bad.

Thanks.


----------



## nORKy (Mar 25, 2013)

Hi, I tried to build the Trois-Six configuration, but I have a problem I don't understand:


```
mounting from zfs:zroot/ROOT/default failed with error 2
```

Why?


----------



## vermaden (Mar 25, 2013)

@nORKy

You messed something up with zpool.cache (with which it is very easy to break something, BTW).

Fortunately 9.1-STABLE does not need the zpool.cache 'hack' anymore.

To fix that You need to boot from a live CD/USB and import the pool with the -o cachefile option to 'regenerate' it.
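For example, booted from a live CD/USB, something along these lines should do it (the pool name zroot and the /mnt altroot are assumptions; adjust to your layout):

```shell
# import the pool under an alternate root, writing a fresh cache file
zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot
# put the regenerated cache back where the loader expects it
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
```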


----------



## nORKy (Mar 26, 2013)

vermaden said:

> @nORKy
> 
> You messed something up with zpool.cache (with which it is very easy to break something, BTW).
> 
> ...



It works. Thank you.


----------



## moesasji (Apr 1, 2013)

If I read the detailed release notes for FreeBSD 9.1 correctly, it appears possible to install FreeBSD on ZFS without creating a separate root partition, see: http://www.freebsd.org/releases/9.1R/relnotes-detailed.html#boot. Doing this makes sense to me for using beadm, as everything then gets captured by beadm.

Unfortunately I can't get my head around how this change (as well as the zpool.cache one) affects the install guide in the first post. So could someone please point me in the right direction?

It seems to me that I should just follow the guide, while simply leaving out the creation of the boot-fs partition? If that is indeed the case, how do I deal with the zpool.cache change when installing FreeBSD 9.1?


----------



## vermaden (Apr 1, 2013)

@moesasji

This guide *is* about creating ZFS on root without a separate UFS boot partition. The GPT boot partition is mandatory to boot; there is no filesystem on it, and without it You will not boot. It's 128 kilobytes in size, by the way.
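For reference, creating and populating that boot partition looks roughly like this (the disk name ada0 and partition index 1 are assumptions):

```shell
# 128k partition for the boot code; it holds no filesystem
gpart add -t freebsd-boot -s 128k ada0
# write the protective MBR and the ZFS-aware gptzfsboot loader
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```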

This guide was created when 9.1-RELEASE was available. There is a small problem with 9.1-RELEASE that has already been fixed in STABLE: http://www.freebsd.org/cgi/query-pr.cgi?pr=167905 (You can of course add that patch and still use 9.1-RELEASE). This is the needed change: http://freshbsd.org/commit/freebsd/r237119

About zpool.cache: it's no longer needed (in STABLE/HEAD/CURRENT), and the vfs.root.mountfrom in /boot/loader.conf is also not needed (in STABLE/HEAD/CURRENT), but that is not the case in 9.1-RELEASE.


----------



## moesasji (Apr 1, 2013)

vermaden said:

> @moesasji
> This guide *is* about creating ZFS on root without separate UFS boot partition. The GPT boot partition is mandatory to boot, there is no filesystem there, without it You will not boot, its 128 kilobytes in size by the way.



It is probably me misunderstanding the release notes for 9.1. Your guide (and other guides) for installing FreeBSD on ZFS creates a small freebsd-boot partition with

```
gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
```

which I assume is the GPT boot partition you mention, and is also the way I installed it in the past. However, if I look at the example given in the 9.1 changelog, the line creating a freebsd-boot partition is no longer there. See the last bit of the section: http://www.freebsd.org/releases/9.1R/relnotes-detailed.html#boot. I guess that is sparc64-specific, yet I am not sure why it would be different.

ps) thanks for the warning on the needed patch. Would have loved to be able to stick with a binary upgrade path. :-(


----------



## vermaden (Apr 1, 2013)

@moesasji



The release notes example said:

> # gpart create -s vtoc8 da0


The VTOC8 partition scheme is not the GPT partition scheme, but maybe ZFS-only booting without a freebsd-boot partition is possible; I haven't tried.


----------



## moesasji (Apr 1, 2013)

vermaden said:

> @moesasji
> The VTOC8 partition scheme is not the GPT partition scheme, but maybe ZFS-only booting without a freebsd-boot partition is possible; I haven't tried.



After some digging it turns out I just have to learn to read the man-pages better. :r 

The man page of gpart states under bootstrapping that the GPT scheme needs the freebsd-boot partition to hold /boot/gptzfsboot. The case for VTOC8 is indeed different, as the man page states:



> The VTOC8 scheme does not support embedding bootstrap code.  Instead, the 8 KBytes bootstrap code image /boot/boot1 should be written with gpart bootcode command with -p bootcode option to all sufficiently large VTOC8  partitions.



That explains a lot and hence no need to try. Sorry for the noise.


----------



## dkeav (Apr 5, 2013)

Vermaden, you may be interested in this.

*IMGUR* link: grub2 testing screenshots

I'm currently booting active and non-active BEs from the menu, like it is handled in OpenSolaris/Illumos.

This bypasses the loader, so all settings or modules loaded in your loader.conf would need to be set in grub.cfg. This will solve the issue of having to use rescue media in the event you can't get a live system to roll back.

However, we will still have to wait for an updated GRUB2 in ports; the package I built is a test port, thanks to nox-.


----------



## vermaden (Apr 5, 2013)

@dkeav

I have talked with FreeBSD developers about that as a possible solution, but we all agreed that using GNU code for that is questionable and a shortcoming. We agreed that they would start to update/modify the boot code to create that menu before the ZFS loader, to have a native solution. There were several commits, but it's nowhere near complete.

You can use GRUB2 of course, but You will need some modern Linux install for that, because as You mentioned, GRUB2 in Ports is at version 1.98, while booting from ZFS was added in 1.99, and 1.99 or later is not in Ports because of older binutils, from what I recall.

Making beadm put valid entries into grub.cfg or menu.lst is very easy, but as we agreed that there would be a native solution and that GRUB2 would not make it into Ports quickly, I did not add it.

BTW, about Your screenshots: on 9.1-STABLE there is no more need for zpool.cache, and You do not need to set vfs.root.mountfrom, as the zpool property bootfs is used now. It works beautifully with/without beadm, but I haven't tried it with GRUB2.


----------



## dkeav (Apr 5, 2013)

I knew about zpool.cache going away, but testing is being done on 9.1-RELEASE.  I'll switch to bootfs after I get things working with UEFI.


----------



## Trois-Six (Apr 7, 2013)

FYI on UEFI: https://wiki.freebsd.org/UEFI. All the remaining bits seem to be in place to get a working UEFI FreeBSD system (either by using GRUB or the FreeBSD bootloader?).


----------



## xy16644 (Apr 25, 2013)

I installed beadm on my FreeBSD 9.1 i386 system that has an encrypted ZFS root but I get an error when I run the `beadm list` command:

```
ERROR: This system is not configured for boot environments
```

Can this utility be used on an existing system? Or does a new system install need to be configured in such a way that beadm will work?


----------



## kpa (Apr 25, 2013)

I was able to use it on my NAS system that wasn't prepared with beadm. All I had to do was to rename the datasets to conform to the poolname/ROOT/default naming scheme.
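The rename kpa describes might look like this (the original dataset name mypool/root is a made-up example):

```shell
# beadm expects datasets named <pool>/ROOT/<beName>
zfs create -o mountpoint=none mypool/ROOT
zfs rename mypool/root mypool/ROOT/default
zpool set bootfs=mypool/ROOT/default mypool
```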


----------



## vermaden (Apr 26, 2013)

@xy16644,

Show me the output of the `zfs list` and `mount` commands.


----------



## Savagedlight (Apr 26, 2013)

Is it OK if I reference this HOWTO in a guide I'm currently writing? I'm not going to follow it exactly, but the ZFS setup is heavily inspired by this guide.


----------



## vermaden (Apr 26, 2013)

@Savagedlight
Sure, go ahead.


----------



## xy16644 (Apr 26, 2013)

vermaden said:

> @xy16644
> 
> Show me output of zfs list and mount commands.



Hi @vermaden, here's the requested output:

`zfs list`:

```
NAME         USED  AVAIL  REFER  MOUNTPOINT
zroot       5.26G  14.3G  4.23G  /
zroot/swap  1.03G  15.3G   108K  -
```

`mount`:

```
zroot on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
```

This output is from a test FreeBSD 9.1 amd64 VM. I'm happy to experiment and make any changes in this VM!

Thanks!


----------



## vermaden (Apr 27, 2013)

@xy16644

Then you need to do these changes to use beadm:


```
# zfs snapshot zroot@beadm
# zfs create zroot/ROOT
# zfs send zroot@beadm | zfs receive zroot/ROOT/default
```

Now edit /zroot/ROOT/default/boot/loader.conf and put a vfs.root.mountfrom="zfs:zroot/ROOT/default" line there instead of the existing vfs.root.mountfrom one.


```
# zpool set bootfs=zroot/ROOT/default zroot
# reboot
```
If it works, then the beadm command should work properly now, and you can delete everything in the /zroot directory except the ROOT directory; the zroot@beadm snapshot will also not be needed anymore.


----------



## xy16644 (Apr 28, 2013)

vermaden said:

> @xy16644
> 
> Then you need to do these changes to use beadm:
> 
> ...



Thanks very much for the reply! I seem to be getting stuck though...

I ran the following just fine:
`zfs snapshot zroot@beadm`
`zfs create zroot/ROOT`
`zfs send zroot@beadm | zfs receive zroot/ROOT/default`

My first problem is that the following path does not exist on my test system: /zroot/ROOT/default/boot/loader.conf. But I was able to edit the following path instead: /ROOT/default/boot/loader.conf. And I added the following to it:

```
vfs.root.mountfrom="zfs:zroot/ROOT/default"
```
I then ran the following fine:
`zpool set bootfs=zroot/ROOT/default zroot`

Funny thing is running `beadm list` gives me the following (BEFORE rebooting):

```
BE      Active Mountpoint     Space Created
default R      /ROOT/default   4.2G 2013-04-28 20:54
```

But after I reboot I get many errors about paths being incorrect (see attached error). It looks like it has only half booted, as I never get a login prompt.

Any ideas? :stud

PS: I should mention that I don't have a /zroot folder.


----------



## Savagedlight (May 3, 2013)

Is it possible to have child datasets of the boot environment dataset, and have them mounted only when their parent is used?

One of many use cases for this would be to store /usr/src with compression enabled, while not sharing it between boot environments, as that would make little sense.


----------



## rawthey (May 3, 2013)

Savagedlight said:

> Is it possible to have child data sets of the BEADM data set, and have it mount only when its parent is used?
> 
> One of many use cases for this would be to store /usr/src with compression enabled, while not sharing it between BEADM data sets, as that would make little sense.



Yes, this is what I have:

```
curlew:/home/mike% zfs list -o name,compression,mountpoint -r sys/ROOT/kde4.10a sys/DATA sys/NOBACKUP
NAME                              COMPRESS  MOUNTPOINT
sys/DATA                               off  none
sys/DATA/home                         gzip  /home
sys/DATA/home/camera                   off  /home/camera
sys/DATA/home/db                       off  /home/db
sys/DATA/home/photos                   off  /home/photos
sys/DATA/root                          off  /root
sys/DATA/var                           off  none
sys/DATA/var/log                        on  /var/log
sys/NOBACKUP                           off  none
sys/NOBACKUP/nobackup                  off  /nobackup
sys/NOBACKUP/usr                       off  none
sys/NOBACKUP/usr/ports                 off  none
sys/NOBACKUP/usr/ports/distfiles       off  /usr/ports/distfiles
sys/NOBACKUP/usr/ports/packages        off  /usr/ports/packages
sys/ROOT/kde4.10a                      off  legacy
sys/ROOT/kde4.10a/tmp                   on  /tmp
sys/ROOT/kde4.10a/usr                   on  /usr
sys/ROOT/kde4.10a/usr/ports         gzip-9  /usr/ports
sys/ROOT/kde4.10a/usr/src           gzip-9  /usr/src
sys/ROOT/kde4.10a/var                  off  /var
sys/ROOT/kde4.10a/var/db               off  /var/db
sys/ROOT/kde4.10a/var/db/pkg            on  /var/db/pkg
sys/ROOT/kde4.10a/var/empty            off  /var/empty
sys/ROOT/kde4.10a/var/mail              on  /var/mail
sys/ROOT/kde4.10a/var/run              off  /var/run
sys/ROOT/kde4.10a/var/tmp               on  /var/tmp
```
Each BE has its own version of /usr/src and /usr/ports. When I create a new BE I want to continue to use my existing distfiles for ports, so /usr/ports/distfiles is mounted from sys/NOBACKUP, which is outside the BE filesystems, as are various other directories like /home. Note that for this to work some filesystems, like sys/NOBACKUP/usr and sys/NOBACKUP/usr/ports, need to have their mountpoint set to none.


----------



## Savagedlight (May 3, 2013)

rawthey said:

> Yes, this is what I have:
> Each BE has it's own version of /usr/src and  /usr/ports. When I create a new BE I want to continue to use my existing distfiles for ports so /usr/ports/distfiles are mounted from sys/NOBACKUP which is outside the BE filesystems, as are various other directories like /home. Note that for this to work some filesystems like sys/NOBACKUP/usr and sys/NOBACKUP/usr/ports need to have their mountpoint set to none


Interesting. I would have thought that sys/ROOT/kde4.10a/var and sys/ROOT/somethingelse/var would clash when both have a mountpoint of /var. Does beadm do some "magic" to the child datasets to decide which ones are automounted?


----------



## vermaden (May 3, 2013)

Savagedlight said:

> Does beadm() do some "magic" to the child data sets to decide which ones are automounted?



It uses canmount=on|off, depending on whether the BE is active or not.
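In other words, on an inactive BE's datasets beadm turns automounting off so they do not shadow the live filesystems; roughly like this (the dataset names are assumptions):

```shell
# inactive BE: keep its /var from mounting over the running system's /var
zfs set canmount=off sys/ROOT/otherBE/var
# active BE: let its datasets mount normally
zfs set canmount=on sys/ROOT/default/var
```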


----------



## xy16644 (May 23, 2013)

@vermaden,

I'm still having the problem mentioned in post #130. Any ideas?


----------



## vermaden (May 24, 2013)

xy16644 said:

> @vermaden,
> 
> I'm still having the problem mentioned in post #130. Any ideas?


Show me the output of the `zfs get -r mountpoint zroot` command.


----------



## xy16644 (May 25, 2013)

As requested 

`zfs get -r mountpoint zroot`


```
NAME                      PROPERTY    VALUE          SOURCE
zroot                     mountpoint  /              local
zroot@beadm               mountpoint  -              -
zroot/ROOT                mountpoint  /ROOT          inherited from zroot
zroot/ROOT/default        mountpoint  /ROOT/default  inherited from zroot
zroot/ROOT/default@beadm  mountpoint  -              -
zroot/swap                mountpoint  -              -
```


----------



## vermaden (May 26, 2013)

Try that and reboot.
`# zfs set mountpoint=none zroot`
`# zfs set mountpoint=none zroot/ROOT`
`# zfs set mountpoint=legacy zroot/ROOT/default`


----------



## xy16644 (May 26, 2013)

vermaden said:

> Try that and reboot.
> `# zfs set mountpoint=none zroot`
> `# zfs set mountpoint=none zroot/ROOT`
> `# zfs set mountpoint=legacy zroot/ROOT/default`



Running `zfs set mountpoint=none zroot` gave me:
```
cannot unmount '/': Invalid argument
```

The second and third commands ran fine. When I rebooted after running the commands I got the same error as described in my previous post. Any other ideas?

Thanks for your help!


----------



## vermaden (May 26, 2013)

Do that using live USB or live CD.


----------



## xy16644 (May 27, 2013)

vermaden said:

> Do that using live USB or live CD.



I'm not. FreeBSD is installed in a VM, but it's not a live CD/USB.


----------



## vermaden (May 27, 2013)

xy16644 said:

> I'm not. FreeBSD is installed in a VM but its not a live CD/USB.



If it's a VM, then it's even easier. Just download the DVD ISO image, attach it to the virtual machine, and boot from it, just like on 'real' hardware.


----------



## Dies_Irae (Jun 3, 2013)

vermaden said:

> Try that and reboot.
> `# zfs set mountpoint=none zroot`
> `# zfs set mountpoint=none zroot/ROOT`
> `# zfs set mountpoint=legacy zroot/ROOT/default`



@vermaden, thank you very much for this great guide!

On my home PC I finally decided to switch from using UFS to ZFS, so I used your guide as a base (with some modifications), but there is something still unclear to me.

In section "3.1. Server with Two Disks" you created an empty fstab and set the mountpoint of the root filesystem to legacy, but from what is written in zfs(8):



> If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system.



I expected to have to insert a line in fstab for the root filesystem, but apparently this is not needed.

It seems that `# zpool set bootfs=sys/ROOT/default sys` causes the mountpoint option to be ignored.

So how does this work?


----------



## vermaden (Jun 4, 2013)

@Dies_Irae,

Welcome.



Dies_Irae said:

> In section "3.1. Server with Two Disks" you created an empty fstab and set the mountpoint of the root filesystem to legacy



The / has to be mounted anyway, so that property is not important here.



Dies_Irae said:

> I expect to have to insert a line in fstab for the root filesystem, but apparently this is not needed.


The only things needed are
```
zfs_enable=YES
```
 in /etc/rc.conf and 
```
zfs_load=YES
```
 in the /boot/loader.conf file.



Dies_Irae said:

> It seems that `# zpool set bootfs=sys/ROOT/default sys` implies to ignore the mountpoint option.


Yep.



Dies_Irae said:

> So how does this work?


Generally the 'legacy' option is just the option that I used for the first time for Boot Environments, to distinguish the Boot Environment from other ZFS datasets; it has no other function.


----------



## Dies_Irae (Jun 4, 2013)

vermaden said:

> The / has to be mounted anyway, so that property is not important here.
> 
> (...)
> 
> Generally the option 'legacy' is just the option that I used for the first time for Boot Environments to distinguish the Boot Environment from other ZFS datasets, it has no other function.



Crystal clear as always. Again, thank you very much! I owe you a beer :beergrin


----------



## vermaden (Jun 4, 2013)

:beergrin Na zdrowie!*

*Works better for Vodka than Beer


----------



## overmind (Jun 5, 2013)

xy16644 said:

> Running `zfs set mountpoint=none zroot` gave me
> 
> 
> 
> ...



I've used @vermaden's tutorial to install FreeBSD on my work machine. Then I wanted to move my work machine to a VirtualBox FreeBSD guest (on a Mac). I used the same installation tutorial in a VirtualBox machine to get a very basic installation of FreeBSD (where I wanted to put my work machine).

But when I tried to move the data from my work machine to that minimal installation using `zfs send/receive` over ssh, I got the same error:
```
cannot unmount '/': Invalid argument
```

So to fix the problem on my VirtualBox minimal install I renamed tank/ROOT to tank/ROOT2. After that I was able to send tank/ROOT and all volumes under tank/ROOT via ssh to the VirtualBox machine.

When sending ZFS data over ssh I first got the following error:


```
cannot umount "operation not permitted"
```

I had to install sudo on the destination machine and add the following lines to /etc/sudoers:


```
Cmnd_Alias ZFS = /sbin/zfs

john ALL = NOPASSWD: ZFS
```

The complete command for sending the volume with `zfs send/receive` was:


```
zfs send -R tank/ROOT@2013-06-01 | ssh john@10.0.0.127 sudo zfs recv -Fduv tank
```


----------



## overmind (Jun 5, 2013)

@vermaden,

A quick question: in your tutorial, in the Road Warrior Laptop section, you run the following commands:

`zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys0`
`zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli`
`cp /tmp/zpool.cache /mnt/boot/zfs/`

At the end, the cache file created for local is copied to /boot, and not the file for the sys pool. My question is: in order to boot, I should not need to create the zpool.cache for the local pool, right? Just the one for the sys pool?

Since the zpool.cache is only for booting, right?


----------



## vermaden (Jun 5, 2013)

@overmind,

You were probably hit by that BUG in 9.1-RELEASE:
http://lists.freebsd.org/pipermail/freebsd-bugs/2012-May/048757.html

It's fixed in STABLE, but not in 9.1-RELEASE; well, 8.4-RELEASE also has it fixed.


----------



## overmind (Jun 5, 2013)

vermaden said:

> @overmind,
> 
> You were probably hit by that BUG in 9.1-RELEASE:
> http://lists.freebsd.org/pipermail/freebsd-bugs/2012-May/048757.html
> ...



Thank you! I think that's my case, and yes, I have 9.1-RELEASE. I thought not being able to overwrite a pool/volume that is mounted as root was a feature.


----------



## vermaden (Jun 5, 2013)

@overmind,

I described it in detail earlier, here (same thread):
https://forums.freebsd.org/showpost.php?p=215300&postcount=115


----------



## AASoft (Jul 17, 2013)

trois-six said:

> ... redacted content. see link for the original patch ...





trois-six said:

> ... redacted content. see link for the original auto install script ...



I've taken the installation script and the patch above and made them work better together. I've also considerably simplified the above patch, making the changes much more clear. In the process, "activate" was fixed (it was a typo in the original patch), along with a few other issues.

The result is a more fully fleshed-out idea started by @Trois-Six. In particular, the boot pool is now also administered by beadm. It is still mounted to /bootfs (via fstab, since it's a legacy mountpoint in zfs), and /bootfs/boot is then symlinked to /boot.

@vermaden, could you please review my changes to beadm? I think the ability to support a separate boot pool would be a very useful feature, and the patch is now cleaner and feels less intrusive.

Finally, here's a link to my clone of @vermaden's beadm repository with @Trois-Six's changes and my modifications applied, in case anyone else is interested: https://bitbucket.org/aasoft/beadm/


----------



## vermaden (Jul 17, 2013)

@AASoft,

Hi, I am not sure if all these patches are needed for that, as I currently use two ZFS pools with 'stock' beadm (_3.3. Road Warrior Laptop_ from the HOWTO) and it works flawlessly:


```
% zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
local   133G   126G  7.04G    94%  1.00x  ONLINE  -
sys    15.9G  9.03G  6.84G    56%  1.00x  ONLINE  -

% zfs list -r sys
NAME            USED  AVAIL  REFER  MOUNTPOINT
sys            9.03G  6.60G    32K  none
sys/ROOT       9.02G  6.60G    31K  none
sys/ROOT/safe  9.02G  6.60G  9.02G  legacy

% beadm list
BE   Active Mountpoint  Space Created
safe NR     /            9.0G 2013-03-05 13:29
```


----------



## AASoft (Jul 17, 2013)

Right, I was more thinking of the following layout:


```
% beadm list
BE             Active Mountpoint             Space Created
default        N      /                       1.6G 2013-07-16 20:11

% zfs list -r zboot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
zboot                       356M  1.61G   144K  none
zboot/ROOT                  354M  1.61G   144K  none
zboot/ROOT/default          354M  1.61G   354M  legacy

% zfs list -r zroot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot                                          5.69G   109G   144K  none
zroot/ROOT                                     1.56G   109G   144K  none
zroot/ROOT/default                             1.55G   109G  24.2M  legacy
zroot/ROOT/default/usr                         1.38G   109G   341M  /usr
zroot/ROOT/default/var                          153M   109G   568K  /var
(/usr and /var structures redacted)
zroot/home                                      144K   109G   144K  /home
zroot/swap                                     4.13G   114G    72K  -
zroot/tmp                                       192K   109G   192K  /tmp
zroot/usr                                       296K   109G   144K  none
zroot/usr/jails                                 152K   109G   152K  /usr/jails
```

with default being the only existing BE at this point. zboot has its own 2 GB partition, and zroot is on a GELI-encrypted partition that takes up the rest of the disk. Executing `beadm create testBE` at this point will create zboot/ROOT/testBE and zroot/ROOT/testBE the way beadm currently does for the root pool.

I could easily be missing something, but I don't believe stock beadm supports such a configuration.


----------



## Trois-Six (Jul 18, 2013)

@AASoft: thanks for your work on this, glad to see that someone could finally finish the modifications.


----------



## Sebulon (Jul 20, 2013)

I think these modifications are terrific, and just what I need to be able to use beadm in production. I really hope this gets reviewed and committed!

/Sebulon


----------



## ejr2122 (Jul 25, 2013)

I need some help fixing a broken FreeBSD install that utilizes beadm.

I followed the two-disk server guide featured in the very first post. Since then, I've been making snapshots of the default BE periodically. After installing some drivers and modifying my install's configuration, it seems that a driver I installed (unrelated to beadm) is causing a kernel panic at boot.

Given a system using stock beadm (as of July 23, 2013) that is unbootable, what can a newbie do to revert the BE to an older snapshot which was bootable? I imagine that firing up a live CD of FreeBSD and running some beadm commands would be part of the solution.

Please, if you care to respond, speak in newbie/layman's terms.


----------



## doc1623 (Aug 1, 2013)

*Help with errors*

First, thank you, @vermaden. This all looks really cool and I think it will save me many headaches.

I'm new to both ZFS and beadm. I followed your instructions, mainly. I have one SSD, so I made some adjustments from another script.

I believe I messed something up.

Create

```
# beadm create -e default jailed
cannot open 'sys/ROOT/default@install@2013-07-31-18:43:25': invalid dataset name
```

Start jail

```
# jls
   JID  IP Address      Hostname                      Path
# beadm create -e default jailed
ERROR: Boot environment 'jailed' already exists
```

Activate

```
# beadm create upgrade
cannot open 'sys/ROOT/default@install@2013-07-31-18:50:22': invalid dataset name
# beadm activate upgrade
cannot set property for 'sys/ROOT/default@install': this property can not be modified for snapshots
```

Setup

```
root@freebsd:/root # zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
sys   28.8G  1.53G  27.2G     5%  1.00x  ONLINE  -

root@freebsd:/root # zfs mount
sys/ROOT/default                /
sys/ROOT/default/usr            /usr
sys/ROOT/default/usr/home       /usr/home
sys/ROOT/default/usr/ports      /usr/ports
sys/ROOT/default/usr/src        /usr/src
sys/ROOT/default/var            /var
sys/ROOT/default/var/log        /var/log

root@freebsd:/root # beadm list
BE      Active Mountpoint  Space Created
default N      /            1.5G 2013-07-31 16:49
jailed  -      -           79.5K 2013-07-31 18:43
upgrade R      -           69.0K 2013-07-31 23:58

root@freebsd:/root # zfs list -r
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
sys                                               1.52G  26.8G    31K  /sys
sys/ROOT                                          1.52G  26.8G  46.5K  /sys/ROOT
sys/ROOT/default                                  1.52G  26.8G   349M  /
sys/ROOT/default@install                           109K      -   349M  -
sys/ROOT/default@configured                           0      -   349M  -
sys/ROOT/default@configured_with_beadm                0      -   349M  -
sys/ROOT/default@2013-07-31-18:43:25                  0      -   349M  -
sys/ROOT/default@2013-07-31-23:58:12                  0      -   349M  -
sys/ROOT/default@2013-07-31-23:58:32                  0      -   349M  -
sys/ROOT/default/usr                              1.02G  26.8G   579M  /usr
sys/ROOT/default/usr@install                      52.5K      -   290M  -
sys/ROOT/default/usr@configured                     72K      -   579M  -
sys/ROOT/default/usr@configured_with_beadm            0      -   579M  -
sys/ROOT/default/usr@2013-07-31-18:43:25              0      -   579M  -
sys/ROOT/default/usr@2013-07-31-23:58:12              0      -   579M  -
sys/ROOT/default/usr@2013-07-31-23:58:32              0      -   579M  -
sys/ROOT/default/usr/home                         90.5K  26.8G    62K  /usr/home
sys/ROOT/default/usr/home@install                 28.5K      -  46.5K  -
sys/ROOT/default/usr/home@configured                  0      -    62K  -
sys/ROOT/default/usr/home@configured_with_beadm       0      -    62K  -
sys/ROOT/default/usr/home@2013-07-31-18:43:25         0      -    62K  -
sys/ROOT/default/usr/home@2013-07-31-23:58:12         0      -    62K  -
sys/ROOT/default/usr/home@2013-07-31-23:58:32         0      -    62K  -
sys/ROOT/default/usr/ports                         462M  26.8G   462M  /usr/ports
sys/ROOT/default/usr/ports@install                40.5K      -  46.5K  -
sys/ROOT/default/usr/ports@configured                 0      -   462M  -
sys/ROOT/default/usr/ports@configured_with_beadm      0      -   462M  -
sys/ROOT/default/usr/ports@2013-07-31-18:43:25        0      -   462M  -
sys/ROOT/default/usr/ports@2013-07-31-23:58:12        0      -   462M  -
sys/ROOT/default/usr/ports@2013-07-31-23:58:32        0      -   462M  -
sys/ROOT/default/usr/src                          37.5K  26.8G  37.5K  /usr/src
sys/ROOT/default/usr/src@install                      0      -  37.5K  -
sys/ROOT/default/usr/src@configured                   0      -  37.5K  -
sys/ROOT/default/usr/src@configured_with_beadm        0      -  37.5K  -
sys/ROOT/default/usr/src@2013-07-31-18:43:25          0      -  37.5K  -
sys/ROOT/default/usr/src@2013-07-31-23:58:12          0      -  37.5K  -
sys/ROOT/default/usr/src@2013-07-31-23:58:32          0      -  37.5K  -
sys/ROOT/default/var                               169M  26.8G   168M  /var
sys/ROOT/default/var@install                        67K      -   252K  -
sys/ROOT/default/var@configured                       0      -   168M  -
sys/ROOT/default/var@configured_with_beadm            0      -   168M  -
sys/ROOT/default/var@2013-07-31-18:43:25              0      -   168M  -
sys/ROOT/default/var@2013-07-31-23:58:12              0      -   168M  -
sys/ROOT/default/var@2013-07-31-23:58:32              0      -   168M  -
sys/ROOT/default/var/log                           230K  26.8G    94K  /var/log
sys/ROOT/default/var/log@install                  46.5K      -  73.5K  -
sys/ROOT/default/var/log@configured                   0      -  75.5K  -
sys/ROOT/default/var/log@configured_with_beadm        0      -  75.5K  -
sys/ROOT/default/var/log@2013-07-31-18:43:25          0      -  75.5K  -
sys/ROOT/default/var/log@2013-07-31-23:58:12          0      -    93K  -
sys/ROOT/default/var/log@2013-07-31-23:58:32          0      -    93K  -
sys/ROOT/jailed                                   79.5K  26.8G   349M  /usr/jails/jailed
sys/ROOT/upgrade                                    69K  26.8G   349M  legacy
```

Any help given will be greatly appreciated. Thank you.


----------



## vermaden (Aug 2, 2013)

ejr2122 said:

> Given that a system using the stock beadm (as of July 23, 2013) and is unbootable, what can a newbie do to revert the BE back to an older snapshot which was bootable? I imagine that firing up a Live CD of FreeBSD and running some beadm commands would be part of the solution.



Hi, sorry for the late response. You can use the live CD from here: http://mfsbsd.vx.sk/. Then you will have to do something like this: `# zpool set bootfs=sys/ROOT/safe sys` and set

```
vfs.root.mountfrom="zfs:sys/ROOT/safe"
```

in the /boot/loader.conf file.
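For completeness, one possible shape of such a recovery session from the mfsBSD live CD (the pool name `sys` and the BE `sys/ROOT/safe` are just the names from the example above; the manual mount step assumes the BE uses `mountpoint=legacy`, as in this HOWTO):

```
# boot mfsBSD, then import the pool without mounting anything
zpool import -f -N sys

# point the pool at the known-good boot environment
zpool set bootfs=sys/ROOT/safe sys

# mount the BE by hand and fix loader.conf
mount -t zfs sys/ROOT/safe /mnt
echo 'vfs.root.mountfrom="zfs:sys/ROOT/safe"' >> /mnt/boot/loader.conf

# clean up and reboot into the repaired BE
umount /mnt
zpool export sys
reboot
```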

Let me know how that works.


----------



## vermaden (Aug 2, 2013)

@doc1623,

Which version of beadm are you using?


----------



## doc1623 (Aug 4, 2013)

I'm not sure of the version. I installed using your tutorial (thank you) on the 29th or so, just a few days ago, using fetch (as your tutorial instructs). The file is dated Nov 18 2012. Is there a way to check the version?


----------



## storvi_net (Aug 4, 2013)

@doc1623:

Try to use the ports version.

Regards
Markus


----------



## storvi_net (Aug 27, 2013)

I got a short question:

Is there anything against the following?

Create a BE.
Mess everything up.
Return to the BE and rename it to "default", for example.

This would save one reboot per change.

Thanks and regards.
Markus


----------



## storvi_net (Sep 6, 2013)

After discussing the topic with @vermaden, he confirmed the workflow I asked about:

Create a new BE.
Mess everything up.
Reboot into the new BE.
(Optional) You can rename the BE to "default" again.
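Sketched with beadm commands, it could look like the following. The BE names `safe` and `broken` are made up for illustration, and note that renaming the *active* BE only became possible in later beadm versions:

```
# clone the current state into a new BE before making risky changes
beadm create safe

# ... make the risky changes in the running BE ...

# if things break, boot into the pre-change clone
beadm activate safe
reboot

# (optional) swap names so the running BE is called "default" again
beadm rename default broken
beadm rename safe default
```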
Regards,

Markus


----------



## _martin (Oct 4, 2013)

I encountered the same error as @doc1623 (_invalid dataset name_). I'm using beadm v0.8.5.

First scenario:


```
root@testbsd:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool                631M  18.9G    31K  none
rpool/ROOT           631M  18.9G    31K  none
rpool/ROOT/default   631M  18.9G   631M  legacy
rpool/home          38.5K  18.9G  38.5K  /home
rpool/tmp             31K  18.9G    31K  /tmp
root@testbsd:/root #

root@testbsd:/root # beadm create 9.2
Created successfully
root@testbsd:/root #

root@testbsd:/root # beadm list
BE      Active Mountpoint  Space Created
default NR     /          631.1M 2013-10-04 22:17
9.2     -      -            1.0K 2013-10-04 23:12
root@testbsd:/root #

root@testbsd:/root # beadm destroy 9.2
Are you sure you want to destroy '9.2'?
This action cannot be undone (y/[n]): y
Destroyed successfully
root@testbsd:/root #
```

Now I like to see the snapshots when I list ZFS datasets, so I did the following: 


```
root@testbsd:/root # zpool set listsnapshots=on rpool
root@testbsd:/root #

root@testbsd:/root # zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                             631M  18.9G    31K  none
rpool/ROOT                        631M  18.9G    31K  none
rpool/ROOT/default                631M  18.9G   631M  legacy
rpool/ROOT/default@freshinstall  63.5K      -   631M  -
rpool/home                       38.5K  18.9G  38.5K  /home
rpool/tmp                          31K  18.9G    31K  /tmp
root@testbsd:/root #

root@testbsd:/root # beadm create 9.2
cannot open 'rpool/ROOT/default@freshinstall@2013-10-04-23:15:59': invalid dataset name
root@testbsd:/root #

root@testbsd:/root # zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
rpool                                    631M  18.9G    31K  none
rpool/ROOT                               631M  18.9G    31K  none
rpool/ROOT/9.2                             1K  18.9G   631M  legacy
rpool/ROOT/default                       631M  18.9G   631M  legacy
rpool/ROOT/default@freshinstall         63.5K      -   631M  -
rpool/ROOT/default@2013-10-04-23:15:59  61.5K      -   631M  -
rpool/home                              38.5K  18.9G  38.5K  /home
rpool/tmp                                 31K  18.9G    31K  /tmp
root@testbsd:/root #
root@testbsd:/root # beadm list
BE      Active Mountpoint  Space Created
default NR     /          631.1M 2013-10-04 22:17
9.2     -      -           62.5K 2013-10-04 23:15
root@testbsd:/root #
```

I think the problem is here:


```
119    # clone properties of source boot environment
   120    zfs list -H -o name -r ${SOURCE} \
   121      | while read FS
   122        do
```
Line 120 expects not to find any snapshots in the listing. But since snapshots are now listed, the following line produces the error:


```
139            zfs clone -o canmount=off ${OPTS} ${FS}@${FMT} ${DATASET}
```
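One way to sketch the fix (a guess at the shape of the change, not the actual commit): filter snapshot names out of the recursive listing before the clone loop, either with `zfs list -t filesystem` or by dropping every name containing `@`:

```shell
# simulate 'zfs list -H -o name -r rpool/ROOT/default' output
# when the pool has listsnapshots=on, then drop the snapshot entries
printf '%s\n' \
  'rpool/ROOT/default' \
  'rpool/ROOT/default@freshinstall' \
| grep -v '@'
```

Only `rpool/ROOT/default` survives the filter, so `zfs clone` never sees a malformed `dataset@snap@date` name.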


----------



## vermaden (Oct 5, 2013)

@matoatlantis,

Thank you for finding that out. I did not know that ZFS allows one to always display snapshots; I will fix that and commit ASAP.


----------



## tj-w (Oct 10, 2013)

I am having an issue using beadm on my FreeBSD 9.2 full ZFS machine. Here is my partition layout:


```
# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168    16777216     2  freebsd-swap  (8.0G)
    16777384  3890251744     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  3907029101  ada1  GPT  (1.8T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168    16777216     2  freebsd-swap  (8.0G)
    16777384  3890251744     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  3907029101  ada2  GPT  (1.8T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168    16777216     2  freebsd-swap  (8.0G)
    16777384  3890251744     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)
```


```
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
boss-zfs           16.1G  3.53T  40.0K  none
boss-zfs/root      10.9G  3.53T  10.6G  /
boss-zfs/tmp       16.5M  3.53T  16.5M  /tmp
boss-zfs/usr       3.55G  3.53T  2.36G  /usr
boss-zfs/usr/home   151K  3.53T   115K  /usr/home
boss-zfs/var       1.63G  3.53T  1.53G  /var
```

I noticed that this same error occurs when trying to install to a machine that is not full ZFS.

Any help will be appreciated.


----------



## xy16644 (Nov 25, 2013)

Can beadm be used with a system that is set up with encrypted ZFS root and a separate /boot? My server is set up as follows:

https://www.dan.me.uk/blog/2012/05/06/full-disk-encryption-with-zfs-root-for-freebsd-9-x/

But to date I have not had any luck using beadm. I tried again today but when I tried to `beadm activate upgrade` it said that it couldn't find my zpool.cache file in the /tmp directory?

Can beadm work with encrypted ZFS root and having /boot on a separate USB key?


----------



## vermaden (Nov 25, 2013)

xy16644 said:

> Can beadm work with encrypted ZFS root and having /boot on a separate USB key?


Nope.

beadm works if you boot from the root ZFS pool.

The UFS filesystem does not have 'bootable snapshots'.


----------



## xy16644 (Nov 25, 2013)

Thanks @vermaden!

I'm not using UFS. Only ZFS is used, even on the USB stick. The USB stick has a ZFS pool with /boot on it.

I assume I still can't use beadm?


----------



## vermaden (Nov 25, 2013)

xy16644 said:

> I assume I still can't use beadm?



From what I know, this version of beadm allows using a ZFS boot pool: https://bitbucket.org/aasoft/beadm/src/ ... m?at=mydev


----------



## vermaden (Feb 24, 2014)

I have added update for FreeBSD 10.0-RELEASE:
viewtopic.php?f=39&t=31662&p=175331#p175331


----------



## xy16644 (Feb 25, 2014)

I look forward to when beadm supports multiple ZFS pools for root!


----------



## volatilevoid (Mar 11, 2014)

Do I need to have a separate SLOG device for my ZIL when using a pure SSD pool? I've read contradictory information on that. Some say yes, some say no.  :OOO


----------



## vermaden (Mar 12, 2014)

volatilevoid said:

> Do I need to have a separate SLOG device for my ZIL when using a pure SSD pool? I've read contradictory information on that. Some say yes, some say no.  :OOO



You may and You may not. Earlier I did not use a separate ZIL with a single SSD; now I am using one, and the stats seem to be similar to the stats I had when NOT using separate ZIL partitions:


```
There are 909324 files.
There are 1003593 blocks and 153621 fragment blocks.
There are 110297 fragmented blocks (71.80%).
There are 43324 contiguous blocks (28.20%).
```


----------



## kpa (Mar 12, 2014)

volatilevoid said:

> Do I need to have a separate SLOG device for my ZIL when using a pure SSD pool? I've read contradictory information on that. Some say yes, some say no.  :OOO



Do you expect that the required write bandwidth will exceed what the SSD can do? What is your use case like, lots of NFS writes?


----------



## volatilevoid (Mar 12, 2014)

kpa said:

> Do you expect that the required write bandwidth will exceed what the SSD can do?


Not really. 



			
kpa said:

> What is your use case like, lots of NFS writes?


I might use NFS occasionally but it's not going to be a file server if that's what you want to know.


----------



## kpa (Mar 12, 2014)

Well, the ZIL won't be used unless there are synchronous writes that need to be journaled for proper disaster recovery. You'll have to figure out if you have any applications that will use synchronous writes. NFS is one of them, and it's usually the main reason to use a dedicated device for the ZIL. If you know that the ZIL isn't going to get many writes, there's no point in using a dedicated device for it.
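Two hedged ways to check this on a running system (the pool name `tank` is just a placeholder): the `sync` property shows what semantics each dataset requests, and `zpool iostat -v` shows whether a dedicated log vdev actually receives writes:

```
# which datasets use standard/always/disabled sync semantics
zfs get -r sync tank

# per-vdev I/O every 5 seconds; a 'log' vdev with near-zero writes
# suggests a dedicated SLOG would not earn its keep
zpool iostat -v tank 5
```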


----------



## volatilevoid (Mar 12, 2014)

kpa said:

> Well, the ZIL won't be used unless there are synchronous writes that need to be journaled for proper disaster recovery. You'll have to figure out if you have any applications that will use synchronous writes. NFS is one of them, and it's usually the main reason to use a dedicated device for the ZIL. If you know that the ZIL isn't going to get many writes, there's no point in using a dedicated device for it.


Thank you for your explanation. I also read an article on the technical background which was very interesting. I don't think that I'll use many applications which depend on synchronous writes so I'll try my luck without a dedicated device for the ZIL.


----------



## markbsd2 (Jul 3, 2014)

Hi, 

I'm trying to boot using a BE with no success. I created a BE using beadm as described here, but I can't boot into the BE with the FreeBSD boot loader. Let's suppose I had a problem after my reboot; can't I boot into my last backup?

Thanks


----------



## vermaden (Jul 3, 2014)

markbsd2 said:

> I'm trying to boot using a BE with no success. I created a BE using beadm as described here, but I can't boot into the BE with the FreeBSD boot loader. Let's suppose I had a problem after my reboot; can't I boot into my last backup?



The FreeBSD Boot Loader does not support a MENU for choosing a BE at boot, but You can have such a menu by installing PC-BSD or TrueOS from iXsystems (both are based on FreeBSD), because they use a modified GRUB2 as the Boot Loader.


----------



## markbsd2 (Jul 3, 2014)

Hi @vermaden,

That's the point. I don't want to use PC-BSD or TrueOS. I'd like to use FreeBSD. I tried to install GRUB on FreeBSD 10 + ZFS, and it doesn't work. Here are my steps:


```
root@x:~ # sysctl kern.geom.debugflags=16
kern.geom.debugflags: 0 -> 16

root@x:~ # gpart show
=>      34  19844973  ada0  GPT  (9.5G)
        34      1024     1  freebsd-boot  (512K)
      1058   4194304     2  freebsd-swap  (2.0G)
   4195362  15649645     3  freebsd-zfs  (7.5G)

root@x:~ # gpart delete -i 1 ada0
ada0p1 deleted

root@x:~ # gpart add -t bios-boot ada0
ada0p1 added

root@x:~ # grub-install --force /dev/ada0
Installing for i386-pc platform.
Installation finished. No error reported.

root@x:~ # grub-mkconfig -o /boot/grub/grub.cfg /dev/ada0
Generating grub configuration file ...
done
Installing GRUB to gptid/de790ea3-02d8-11e4-b0db-080027b9b042
Installing for i386-pc platform.
grub-install: error: cannot open `/dev/gptid/de790ea3-02d8-11e4-b0db-080027b9b042': Operation not permitted.
```

Thanks.


----------



## vermaden (Jul 4, 2014)

markbsd2 said:

> That's the point. I don't want to use PC-BSD or TrueOS. I'd like to use FreeBSD. I tried to install GRUB on FreeBSD 10 + ZFS, and it doesn't work. Here are my steps:
> 
> 
> ```
> ...



From what I recall, GRUB2 needs the first partition to be at least 1M. Also, PC-BSD/TrueOS uses a modified version of GRUB2 which can be found in ports as sysutils/grub2-pcbsd. You may also install TrueOS to check how and what configuration for GRUB2 it generates. You may also check the PC-BSD installer source for the needed commands.


----------



## markbsd2 (Jul 4, 2014)

Hi verdeman,

I was using the sysutils/grub2-pcbsd port mentioned above, and I've changed bios-boot to 1M with the same results.


----------



## vermaden (Jul 4, 2014)

markbsd2 said:

> Hi verdeman,


Quite close 



			
markbsd2 said:

> I was using the grub2-pcbsd port mentioned above, and I've changed bios-boot to 1M with the same results.


I do not have experience with GRUB2 and FreeBSD; I use the FreeBSD Boot Loader.


----------



## free-and-bsd (Jul 9, 2014)

markbsd2 said:

> Hi @vermaden,
> 
> That's the point. I don't want to use PC-BSD or TrueOS. I'd like to use FreeBSD. I tried to install GRUB on FreeBSD 10+ZFZ, and it doesn't work. Here are my steps:
> 
> ...


Hey, @markbsd2, DON'T rely on the grub-mkconfig command. It screws things up and so far doesn't work, for some reason!
And basically: what message do you get on reboot, after your grub2 installation "finished with no errors"?
Do you get a GRUB prompt at least (not grub-rescue!)?


----------



## free-and-bsd (Jul 9, 2014)

Anyway, you can read my experience with GRUB2 and some tips on it.
Interesting, you were able to create the bios-boot partition using `-t bios-boot`. I had to use the command `#gpart add -t \!21686148-6449-6E6F-744E-656564454649 -s <size> -i <index> <geom>`.

Another thing I had to do was to create a "custom" GRUB image including both zfs and part_gpt modules (not there by default): `grub-install --modules="part_gpt zfs" /dev/ada0`, without which all I could get was the grub rescue prompt, which is almost useless.

...Whether or not you need the above, you will definitely have to create your own /boot/grub/grub.cfg file by hand, as grub-mkconfig can't create a working one. The truth is, this UUID stuff grub-mkconfig is trying to figure out is not going to be of any use, because grub2 just can't figure out ZFS pool using these UUIDs correctly. But if you point it to the right partition, you can see your ZFS pool file structure... So you will generally need something like this:

```
menuentry 'FreeBSD-10.0 on ZFS' {
insmod zfs
insmod part_gpt
set root=(hd0,gpt2)
	kfreebsd /ROOT/working/@/boot/kernel/kernel
	kfreebsd_loadenv /ROOT/working/@/boot/device.hints
	kfreebsd_module /ROOT/working/@/boot/zfs/zpool.cache -type /boot/zfs/zpool.cache
	kfreebsd_module_elf /ROOT/working/@/boot/kernel/opensolaris.ko
	kfreebsd_module_elf /ROOT/working/@/boot/kernel/zfs.ko
	kfreebsd_module_elf /ROOT/working/@/boot/kernel/linux.ko
	kfreebsd_module_elf /ROOT/working/@/boot/modules/nvidia.ko
	set kFreeBSD.vfs.root.mountfrom=zfs:mypool/ROOT/working
	set kFreeBSD.vfs.root.mountfrom.options=rw
}
```
Make sure to use the right GRUB device (mine is (hd0,gpt2)) and the right PATH in your grub.cfg.

And yes, GRUB2 IS great and much better than the FreeBSD BTX loader. With GRUB2 you can do a lot more and tell it to do what you can't tell BTX to do. Not to mention that it boots quicker... This way you can even have several OSes (NOT Oracle Solaris, though!) in your zpool/ROOT and boot each one separately using GRUB, because you won't even need a bootfs property to be set for that to work... but since FreeBSD is the best one among them, I only do this for testing purposes.


----------



## vermaden (Jul 10, 2014)

free-and-bsd said:

> Interesting, you were able to create the bios-boot partition using `-t bios-boot`. I had to use the command
> 
> ```
> #gpart add -t \!21686148-6449-6E6F-744E-656564454649 -s <size> -i <index> <geom>
> ```



It's one of the options now, from *man gpart*:

```
bios-boot        The system partition dedicated to second stage of the
                      boot loader program.  Usually it is used by the GRUB 2
                      loader for GPT partitioning schemes.  The scheme-specific
                      type is "!21686148-6449-6E6F-744E-656564454649".
```


----------



## free-and-bsd (Jul 10, 2014)

That's a nice job by the gpart developers!
How quickly they respond to current needs, that's great. I must have missed this because of using the installation media version of gpart... or maybe because I didn't look for it?
...Given this small improvement, FreeBSD is keeping its status as the most user-friendly among the OSs.


----------



## vermaden (Jul 10, 2014)

free-and-bsd said:

> That's a nice job by the gpart developers!
> How quickly they respond to current needs, that's great. I must have missed this because of using the installation media version of gpart... or maybe because I didn't look for it?
> ...Given this small improvement, FreeBSD is keeping its status as the most user-friendly among the OSs.



They have added some of the more popular types; a lot more are defined in FreeBSD's fdisk source [1].

It's also a PITA to convert *0x0C* - _DOS or Windows 95 with 32 bit FAT (LBA)_ - to *!12* to use it as an argument for the gpart command.
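For what it's worth, the conversion itself is a printf one-liner, since printf accepts the 0x-prefixed value as a %d argument:

```shell
# convert an MBR partition type from fdisk-style hex to the
# decimal '!NN' alias that gpart(8) expects
printf '!%d\n' 0x0C
```

This prints `!12`.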

[1] The complete partition type information in FreeBSD's fdisk source is the following:

```
% grep -m 1 -A 90 part_types /usr/src/sbin/fdisk/fdisk.c
static const char *const part_types[256] = {
        [0x00] = "unused",
        [0x01] = "Primary DOS with 12 bit FAT",
        [0x02] = "XENIX / file system",
        [0x03] = "XENIX /usr file system",
        [0x04] = "Primary DOS with 16 bit FAT (< 32MB)",
        [0x05] = "Extended DOS",
        [0x06] = "Primary DOS, 16 bit FAT (>= 32MB)",
        [0x07] = "NTFS, OS/2 HPFS, QNX-2 (16 bit) or Advanced UNIX",
        [0x08] = "AIX file system or SplitDrive",
        [0x09] = "AIX boot partition or Coherent",
        [0x0A] = "OS/2 Boot Manager, OPUS or Coherent swap",
        [0x0B] = "DOS or Windows 95 with 32 bit FAT",
        [0x0C] = "DOS or Windows 95 with 32 bit FAT (LBA)",
        [0x0E] = "Primary 'big' DOS (>= 32MB, LBA)",
        [0x0F] = "Extended DOS (LBA)",
        [0x10] = "OPUS",
        [0x11] = "OS/2 BM: hidden DOS with 12-bit FAT",
        [0x12] = "Compaq diagnostics",
        [0x14] = "OS/2 BM: hidden DOS with 16-bit FAT (< 32MB)",
        [0x16] = "OS/2 BM: hidden DOS with 16-bit FAT (>= 32MB)",
        [0x17] = "OS/2 BM: hidden IFS (e.g. HPFS)",
        [0x18] = "AST Windows swapfile",
        [0x1b] = "ASUS Recovery partition (NTFS)",
        [0x24] = "NEC DOS",
        [0x3C] = "PartitionMagic recovery",
        [0x39] = "plan9",
        [0x40] = "VENIX 286",
        [0x41] = "Linux/MINIX (sharing disk with DRDOS)",
        [0x42] = "SFS or Linux swap (sharing disk with DRDOS)",
        [0x43] = "Linux native (sharing disk with DRDOS)",
        [0x4D] = "QNX 4.2 Primary",
        [0x4E] = "QNX 4.2 Secondary",
        [0x4F] = "QNX 4.2 Tertiary",
        [0x50] = "DM (disk manager)",
        [0x51] = "DM6 Aux1 (or Novell)",
        [0x52] = "CP/M or Microport SysV/AT",
        [0x53] = "DM6 Aux3",
        [0x54] = "DM6",
        [0x55] = "EZ-Drive (disk manager)",
        [0x56] = "Golden Bow (disk manager)",
        [0x5c] = "Priam Edisk (disk manager)", /* according to S. Widlake */
        [0x61] = "SpeedStor",
        [0x63] = "System V/386 (such as ISC UNIX), GNU HURD or Mach",
        [0x64] = "Novell Netware/286 2.xx",
        [0x65] = "Novell Netware/386 3.xx",
        [0x70] = "DiskSecure Multi-Boot",
        [0x75] = "PCIX",
        [0x77] = "QNX4.x",
        [0x78] = "QNX4.x 2nd part",
        [0x79] = "QNX4.x 3rd part",
        [0x80] = "Minix until 1.4a",
        [0x81] = "Minix since 1.4b, early Linux partition or Mitac disk manager",
        [0x82] = "Linux swap or Solaris x86",
        [0x83] = "Linux native",
        [0x84] = "OS/2 hidden C: drive",
        [0x85] = "Linux extended",
        [0x86] = "NTFS volume set??",
        [0x87] = "NTFS volume set??",
        [0x93] = "Amoeba file system",
        [0x94] = "Amoeba bad block table",
        [0x9F] = "BSD/OS",
        [0xA0] = "Suspend to Disk",
        [0xA5] = "FreeBSD/NetBSD/386BSD",
        [0xA6] = "OpenBSD",
        [0xA7] = "NeXTSTEP",
        [0xA9] = "NetBSD",
        [0xAC] = "IBM JFS",
        [0xAF] = "HFS+",
        [0xB7] = "BSDI BSD/386 file system",
        [0xB8] = "BSDI BSD/386 swap",
        [0xBE] = "Solaris x86 boot",
        [0xBF] = "Solaris x86 (new)",
        [0xC1] = "DRDOS/sec with 12-bit FAT",
        [0xC4] = "DRDOS/sec with 16-bit FAT (< 32MB)",
        [0xC6] = "DRDOS/sec with 16-bit FAT (>= 32MB)",
        [0xC7] = "Syrinx",
        [0xDB] = "CP/M, Concurrent CP/M, Concurrent DOS or CTOS",
        [0xDE] = "DELL Utilities - FAT filesystem",
        [0xE1] = "DOS access or SpeedStor with 12-bit FAT extended partition",
        [0xE3] = "DOS R/O or SpeedStor",
        [0xE4] = "SpeedStor with 16-bit FAT extended partition < 1024 cyl.",
        [0xEB] = "BeOS file system",
        [0xEE] = "EFI GPT",
        [0xEF] = "EFI System Partition",
        [0xF1] = "SpeedStor",
        [0xF2] = "DOS 3.3+ Secondary",
        [0xF4] = "SpeedStor large partition",
        [0xFE] = "SpeedStor >1024 cyl. or LANstep",
        [0xFF] = "Xenix bad blocks table",
};
```


----------



## free-and-bsd (Jul 10, 2014)

Impressive!


----------



## markbsd2 (Jul 11, 2014)

Hi @free-and-bsd,

Thank you very much for your help. After I've installed sysutils/grub2, when I reboot my machine I'm getting the grub-rescue prompt. Any idea?


----------



## markbsd2 (Jul 11, 2014)

Here is my screen error.


----------



## markbsd2 (Jul 11, 2014)

Hi,

Just to update this topic. It just worked 

@free-and-bsd, I did what you suggested and ran `grub-install --modules="part_gpt zfs" /dev/ada0`.

After that I used your sample config with just one change: I replaced `set root=(hd0,gpt2)` with `search --no-floppy -s -l zroot`.

I tried this on FreeBSD 10.0-RELEASE. Now I'll update my server to STABLE and see if it keeps working.

Thanks for your help!


----------



## free-and-bsd (Jul 13, 2014)

As I understand it, grub-install doesn't include the zfs module by default. And without it GRUB2 fails to figure out the partition on which all its files and modules (including zfs!) are situated, so it escapes to the rescue prompt with no way out of it.

...And so, `search --no-floppy -s -l zroot` works?
Great!! For a device-name-independent config this is preferable. The point there, I understand, is to avoid using UUIDs with grub2 for a ZFS pool. Don't know if this is related or not, but in Linux, if you want to import a FreeBSD zfs pool, you need

```
zpool import -d /dev/disk/by-partuuid ...
```
without which it will report a FAULTED zpool instead of a working one. Perhaps grub2 is more Linux-minded in this regard, too. So the grub-mkconfig that comes with sysutils/grub2 is still rather Linux-minded. To finish, I must add that for Linux on ZFS the script works as poorly as it does for FreeBSD on ZFS... But since these things are nearly "experimental", one doesn't have to complain )


----------



## free-and-bsd (Jul 13, 2014)

markbsd2 said:

> Now I'll update my server to STABLE and see if it keeps working.


I hope you DON'T forget that STABLE is a development release? I've learned that the hard way LOL.


----------



## wblock@ (Jul 13, 2014)

-STABLE is to FreeBSD what Service Pack 3 is to others.


----------



## free-and-bsd (Jul 13, 2014)

wblock@ said:

> -STABLE is to FreeBSD what Service Pack 3 is to others.


Really? Then I must have done something wrong. When the new release 10 was out, something changed there with what then became the STABLE release. I updated to STABLE and ended up with the kernel flooding the screen with some error message from one of the HDD controllers. It was even impossible to log in. Or may it have been 10-STABLE?


----------



## wblock@ (Jul 13, 2014)

Maybe a problem with a particular driver.  -STABLE is not the shaky alpha people associate with early releases of commercial operating systems.  Most of the time, it's just a bug-fixed version of -RELEASE.


----------



## free-and-bsd (Jul 13, 2014)

wblock@ said:

> Maybe a problem with a particular driver.  -STABLE is not the shaky alpha people associate with early releases of commercial operating systems.  Most of the time, it's just a bug-fixed version of -RELEASE.


And if I have updated the RELEASE by running `freebsd-update` (last time today) -- have I thus updated to STABLE?


----------



## wblock@ (Jul 13, 2014)

No, freebsd-update(8) only deals with releases.


----------



## markbsd2 (Jul 17, 2014)

Guys,

Just an important update about ZFS+GRUB. After I updated my FreeBSD 10-RELEASE to STABLE, my `grub-install` doesn't work anymore; I always get the message _unknown filesystem_ when I run it. I realized that this happened after I updated my zpool with some feature flags, like hole_birth. They are incompatible with GRUB and will break your boot loader. So do not update your zpool with feature flags until someone fixes it. It's important to say that on FreeBSD 10-STABLE feature flags are enabled by default, so if you want to create a new pool without feature flags, use the argument version=28, like this: `zpool create -f -o version=28 -o altroot=/mnt -m none zroot /dev/ada0p3`
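Before pointing GRUB at an existing pool, it may be worth checking what is already active (the pool name `zroot` is from the example above):

```
# list feature flags and their state (disabled/enabled/active);
# anything active beyond what GRUB's zfs module knows will break it
zpool get all zroot | grep feature@

# a pool created for GRUB compatibility reports version 28
zpool get version zroot
```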

Hope it helps someone else.


----------



## markbsd2 (Jul 17, 2014)

I've been testing `beadm` for some days. It's really a cool tool, but I'm worried about a real situation I could face. Let's suppose that for some reason, after I updated my system, it doesn't boot anymore and I didn't activate my BE. How can I deal with this situation?

Is there a way I can set up GRUB to boot it without `beadm activate`? I configured my GRUB boot menu to boot my BE, but it didn't work because it didn't find the kernel to load.

To sum up, I'd just like to choose on my GRUB boot menu which ZFS BE I'll boot, without running `beadm activate` each time.


----------



## unrealx0 (Aug 3, 2014)

Laptop, FreeBSD 10:

```
zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys0
```

Doesn't work.


```
ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present; to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
cannot mount 'sys': No such file or directory
```


```
ls /dev/gpt
boot    local    sys0
```
<- ok

Please help.


----------



## vermaden (Aug 3, 2014)

unrealx0 said:

> laptop freebsd 10
> 
> ```
> zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys0
> ...


The command DOES work.
Type *zpool list* after executing it.



> ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present; to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf


This is a warning/notice about your system having less than the recommended 4 GB of RAM for ZFS with prefetch enabled.



> ZFS filesystem version: 5
> ZFS storage pool version: features support (5000)


This is information about the ZFS versions used at creation.



> cannot mount 'sys': No such file or directory


You are doing the install from a Live CD, so */* is _read only_ and */sys* cannot be created, thus this warning.


----------



## unrealx0 (Aug 3, 2014)

Thanks, now I understand.

next problem


```
zfs set mountpoint=/home local/home
```


```
cannot mount '/home': failed to create mount point property may be set but unable to remount filesystem
```


----------



## vermaden (Aug 3, 2014)

unrealx0 said:

> thanks,  now understands
> 
> next problem
> 
> ...


You are not doing the steps in the right order. You should execute that command after the *zfs umount -a* command.
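In other words, during the install the sequence would roughly be:

```
zfs umount -a                        # unmount all pool datasets first
zfs set mountpoint=/home local/home  # now the property can be set cleanly
```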


----------



## yggdrasil (Sep 12, 2014)

Hi,

I just searched the forums up and down for information regarding using beadm with an encrypted root ZFS pool. I find it mentioned that it's doable, although the standard procedure of FreeBSD's installer of creating an additional bootpool seems not to work. What I can't find after skimming this thread and the search function is a way to actually accomplish it. How do I set up a FreeBSD install with an encrypted root ZFS pool that is still manageable via beadm?

Thanks for any help.


----------



## vermaden (Sep 12, 2014)

@yggdrasil,

Here is a 'fork/version' that supports boot pool: https://bitbucket.org/aasoft/beadm/src/ ... m?at=mydev


----------



## yggdrasil (Sep 12, 2014)

Thank you for this fast response. But I already saw that link, and it didn't work, so I assumed it was superseded by some other method.
When I download that script and use it instead of the version from pkg, I get the same "zpool.cache no such file" error as with the pkg version, only this time it doesn't exit and echoes "Activated successfully". But after a reboot it still boots the default BE instead of the activated one, which is still set to "R".


----------



## vermaden (Sep 12, 2014)

@yggdrasil

The boot pool idea is a dirty hack/workaround anyway. The FreeBSD Devs should at last add support to the loader to boot from a GELI encrypted ZFS pool ... and implement Boot Environment support in the loader. Till that time the only possible solution is GNU GRUB2.


----------



## kpa (Sep 12, 2014)

vermaden said:

> @yggdrasil
> 
> The boot pool idea is a dirty hack/workaround anyway. The FreeBSD Devs should at last add support to the loader to boot from a GELI encrypted ZFS pool ... and implement Boot Environment support in the loader. Till that time the only possible solution is GNU GRUB2.



Direct boot from a GELI encrypted ZFS pool will never be possible. There's just no way you can tell the boot loader how to decrypt the pool contents unless you have some additional filesystem (or other external method to obtain the keys) with the  kernel and the decryption keys.


----------



## yggdrasil (Sep 13, 2014)

I, too, don't see direct boot from GELI encrypted pool coming anytime soon.
The best solution of course would be to fix beadm to work correctly with a bootpool, and make the freebsd loader work with BEs.

I very much dislike GRUB, and just having tried to install it in a VM without getting it to work doesn't make that any better.

Is anyone here actually using GELI encrypted root on ZFS with boot environments?


----------



## usdmatt (Sep 13, 2014)

Booting directly from an encrypted ZFS pool is never going to happen. It would be incredibly messy trying to build GELI code into the boot loader and quite likely impossible. I also see no reason to create all this mess just to allow a few generic system files to be encrypted. Why do people actually want this? Is it just for simplicity, so you only need one pool?

What makes a lot more sense, and will probably happen at some point (although it appears no time soon), is to add Oracle ZFS style encryption to OpenZFS so that we can install FreeBSD to a pool, then create additional encrypted datasets on the same pool at will, to store whatever sensitive data we want encrypted.


----------



## rawthey (Oct 24, 2014)

free-and-bsd said:


> Anyway, you can read my experience with GRUB2 and some tips on it.
> Interesting, you were able to create the bios-boot partition using `-t bios-boot`. I had to use the command `#gpart add -t \!21686148-6449-6E6F-744E-656564454649 -s <size> -i <index> <geom>`.
> 
> Another thing I had to do was to create a "custom" GRUB image including both zfs and part_gpt modules (not there by default): `grub-install --modules="part_gpt zfs" /dev/ada0`, without which all I could get was the grub rescue prompt, which is almost useless.
> ...



After following this advice I've managed to get sysutils/grub2-pcbsd to work, insofar as I have a boot menu, but I can only boot into the already activated BE. Attempting to select any other BE causes the system to hang partway through the boot process.

I think the cause of the problem is that beadm sets `canmount=on` for the children of the active BE and `canmount=noauto` for the others. If I attempt to boot into a non-activated BE then I end up mounting the root of the selected BE and the children of the previously activated BE.

The ability to select boot environments with GRUB would be ideal but it looks like further work is needed to fully integrate GRUB and beadm.

I think full integration might be possible if the following things are done:

- Modify beadm to set `canmount=noauto` for *all* child datasets if ${GRUB} is set.
- Provide an rc script to mount all child datasets for the selected boot environment.
- Fix grub-mkconfig to work properly with ZFS. When beadm creates or deletes a BE it needs to run `grub-mkconfig` to keep the boot menus up to date, so relying on a hand-crafted /boot/grub/grub.cfg will not be a viable option.

I'm tempted to start experimenting with a few scripts to try this out but I'd be interested in hearing any opinions before I start.
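The first of those items could be sketched roughly as follows. This is a minimal sketch only, not tested against beadm itself; the `zroot/ROOT` container and the `set_children_noauto` helper are placeholder names of mine, assuming the usual `pool/ROOT/<BE>/<child>` layout:

```shell
#!/bin/sh
# Sketch: force canmount=noauto on every child dataset of every BE, so
# that booting a non-active BE from GRUB does not auto-mount children
# of the previously active BE. Dataset names here are placeholders.

BEDS="zroot/ROOT"   # assumed container dataset holding the BEs

set_children_noauto() {
  # List everything below ${BEDS}; a BE root such as zroot/ROOT/default
  # has three /-separated components, so anything with more than three
  # components is a child dataset of some BE.
  zfs list -H -r -o name "${BEDS}" \
    | awk -F/ 'NF > 3' \
    | while read -r DATASET
      do
        zfs set canmount=noauto "${DATASET}"
      done
}

# Run as root on the live system:
# set_children_noauto
```

The rc script from the second item would then do the inverse at boot: walk the same list, keep only the children of the BE that was actually mounted as /, and `zfs mount` each of them.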


----------



## vermaden (Oct 27, 2014)

rawthey said:


> After following this advice I've managed to get sysutils/grub2-pcbsd to work insofar as I have a boot menu, but I can only boot into the already activated BE. Attempting to select any other BE causes the system to hang partway through the boot process.
> 
> I think the cause of the problem is that beadm sets `canmount=on` for the children of the active BE and `canmount=noauto` for the others. If I attempt to boot into a non-activated BE then I end up mounting the root of the selected BE and the children of the previously activated BE.
> 
> ...


My opinion hasn't changed. Investing time in a third-party port (GRUB) under the GPL2/GPL3 license instead of creating a native FreeBSD solution in the FreeBSD Loader is a waste of time.


----------



## vermaden (Jun 13, 2018)

I just updated *beadm* to version 1.2.8 - https://github.com/vermaden/beadm/tree/1.2.8

Summary of changes:
- Active BE can now be renamed.
- Quote all properties, not only sharenfs.
- Bring back entropy handling.
- Add version argument.
- Fix canmount property.


----------



## uisge (Jul 8, 2018)

vermaden said:


> I just updated *beadm* to version 1.2.8 - https://github.com/vermaden/beadm/tree/1.2.8



I happened to have ZFS filesystems without any ZFS properties in my BE. Your new version wraps every property value it finds in double quotes, like:

```
-o snapdir="hidden"
```

Now, when no ZFS property is available, `beadm` will add the following to the `zfs clone` command ...

```
zfs clone -o =""
```

... which will fail to create a new BE, e.g.:

```
cannot create 'zp0/ROOT/B1/usr/local': invalid property ''
```

The previous version just ran ...

```
zfs clone -o =
```

... which did not fail.

I did post a dirty patch in the FreeBSD mailing list:

```
--- beadm-1.2.8    2018-07-07 16:17:19.231902000 +0200
+++ beadm-1.2.8-patched    2018-07-07 22:00:19.740611000 +0200
@@ -213,7 +213,7 @@
        local OPTS=""
        while read NAME PROPERTY VALUE
        do
-          local OPTS="-o ${PROPERTY}=\"${VALUE}\" ${OPTS}"
+          local OPTS="-o ${PROPERTY}=${VALUE} ${OPTS}"
        done << EOF
$( zfs get -o name,property,value -s local,received -H all ${FS} | awk '!/[\t ]canmount[\t ]/' )
EOF
```
Well, that is not elegant at all, but it did work for me before I added ZFS properties to the filesystems in question.
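For what it's worth, the failure mode can be reproduced without ZFS at all: a here-document built from an empty command substitution still contains one empty line, so `read` succeeds once with all variables empty and the loop emits the bogus `-o =""`. A minimal sketch (the `build_opts` helper is my own stand-in for the loop in beadm):

```shell
#!/bin/sh
# Reproduces the shape of the 1.2.8 bug without needing zfs(8): an
# empty command substitution inside a here-document still leaves one
# empty line, so `read` succeeds once with NAME/PROPERTY/VALUE empty.

build_opts() {
  # $1 stands in for the output of `zfs get -s local,received ... -H all`
  local OPTS=""
  while read -r NAME PROPERTY VALUE
  do
    OPTS="-o ${PROPERTY}=\"${VALUE}\" ${OPTS}"
  done << EOF
${1}
EOF
  echo ${OPTS}
}

build_opts ""                            # no properties: prints -o =""
build_opts "tank/ROOT/x snapdir hidden"  # prints -o snapdir="hidden"
```

A guard such as `[ -z "${PROPERTY}" ] && continue` inside the loop would keep the quoting while skipping the empty line.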

I just wanted to let you know about that issue, though.


----------



## vermaden (Jul 8, 2018)

@uisge

Try that one and let me know how it behaves:
https://github.com/vermaden/beadm/blob/master/beadm


----------



## uisge (Jul 8, 2018)

vermaden said:


> Try that one and let me know how it behaves:
> https://github.com/vermaden/beadm/blob/master/beadm



Good news, it works for zfs filesystems without properties:

```
root> beadm list -a
BE/Dataset/Snapshot                                    Active Mountpoint       Space Created

11r336083
  zp0/ROOT/11r336083                                   NR     /                 1.1G 2018-07-08 12:29
  zp0/ROOT/11r336083/_jails                            -      /usr/home/jails  72.5M 2018-07-08 12:29
  zp0/ROOT/11r336083/usr                               -      /usr             15.5G 2018-07-08 12:29
  zp0/ROOT/11r336083/usr/local                         -      /usr/local      840.0M 2018-07-08 12:29
  zp0/ROOT/11r336083/usr/src                           -      /usr/src          1.7G 2018-07-08 12:29
  zp0/ROOT/11r336083/var                               -      /var            634.1M 2018-07-08 12:29

root> zfs create zp0/ROOT/11r336083/zzz

root> beadm list -a
BE/Dataset/Snapshot                                    Active Mountpoint       Space Created

11r336083
  [...]
  zp0/ROOT/11r336083/zzz                               -      /zzz             88.0K 2018-07-08 16:11

root> zfs get -o name,property,value -s local,received -H all | grep zp0/ROOT/11r336083/zzz

root> beadm create B1
cannot create 'zp0/ROOT/B1/zzz': invalid property ''

root> ./beadm-1.2.8-fixed create B2
Created successfully
```
Thank you very much for that quick fix!


----------



## vermaden (Jul 8, 2018)

OK, I will make a new 1.2.9 version then.

Thanks for pointing this out.

Please send an update to the FreeBSD Mailing Lists that it will be fixed in 1.2.9.


----------



## uisge (Jul 8, 2018)

vermaden said:


> Please send an update to the FreeBSD Mailing Lists that it will be fixed in 1.2.9.



Done and thanks again for your quick fix!


----------



## uisge (Jul 8, 2018)

FYI:

- a new port has arrived
- you forgot to bump the version to 1.2.9 (no big deal though)


----------



## unitrunker (Jul 8, 2018)

vermaden said:


> I just updated the *beadm* - https://github.com/vermaden/beadm/tree/1.2.8 - to 1.2.8 version.
> 
> Summary of changes:
> - Active BE can now be renamed.



Nice! Was trying to do that the other day.


----------



## vermaden (Jul 8, 2018)

uisge said:


> FYI:
> 
> - a new port has arrived
> - you forgot to bump the version to 1.2.9 (no big deal though)


Thanks, I always forget to do this ... fixed on GitHub.


----------



## getopt (Jul 8, 2018)

vermaden said:


> Thanks, I always forget to do this ... fixed on GitHub.


You need to fix this one too:



> =======================<phase: checksum       >============================
> ===>  License BSD2CLAUSE accepted by the user
> ===> Fetching all distfiles required by beadm-1.2.9 for building
> => SHA256 Checksum mismatch for vermaden-beadm-1.2.9_GH0.tar.gz.
> ...


----------



## uisge (Jul 8, 2018)

getopt said:


> You need to fix this one too:



He is not the maintainer of the port. I have already informed the maintainer about the modified SHA256 entry in distinfo.


----------



## vermaden (Jul 8, 2018)

uisge said:


> He is not the maintainer of the port. I have already informed the maintainer about the modified SHA256 entry in distinfo.


Sorry for the inconvenience, and thanks for notifying the maintainer.


----------



## vermaden (Jul 31, 2018)

Yesterday I was honored to give a *ZFS Boot Environments* talk at the third (#3) *Polish BSD User Group* meeting.

You are encouraged to download the PDF slides, available here: https://is.gd/BEADM

*ZFS Boot Environments*
https://vermaden.wordpress.com/2018/07/30/zfs-boot-environments-at-pbug/


----------

