# Going all ZFS - the first start



## dvl@ (Apr 30, 2013)

Last night I started work on an all-ZFS system. I made progress, but couldn't get the system to boot. I've not had time to document anything. Short story: Mounting from zfs:root failed with error 2.

I was combining two approaches: booting from a thumb drive, then following https://www.dan.me.uk/blog/2012/01/22/booting-from-zfs-raid0156-in-freebsd-9-0-release/ with the dataset layout described in http://blogs.freebsdish.org/pjd/2010/08/06/from-sysinstall-to-zfs-only-configuration/.

See the screenshots on Google+.

I hope to get this going tonight.


----------



## kpa (Apr 30, 2013)

I didn't spot the ZFS pool in the list of GEOM-managed devices. Did you copy the zpool.cache file to /poolmountpoint/boot/zfs at the end of the installation?

Just for your information: if you were using a recent 9-STABLE, you wouldn't have to muck with zpool.cache at all.


----------



## dvl@ (Apr 30, 2013)

A good point.  No, I did not.  There was no such file.  The original export failed.  I went back and tried it again; it failed again.  I'm going to start smaller soon...


----------



## jrm@ (Apr 30, 2013)

This script worked well for me prior to the changes in 9-STABLE.  Maybe something in there is helpful.  Of course you would have to customize DISKS, vdevs et cetera.


```
# Based on http://www.aisecure.net/2012/01/16/rootzfs/ and 
# @vermaden's guide on the forums

DISKS="ada0 ada1"

for I in ${DISKS}; do
	NUM=$( echo ${I} | tr -c -d '0-9' )
	gpart destroy -F ${I}
	gpart create -s gpt ${I}
	gpart add -b 34 -s 94 -t freebsd-boot -l bootcode${NUM} ${I}
	gpart add -t freebsd-zfs -l disk${NUM} ${I}
	gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
	gnop create -S 4096 /dev/gpt/disk${NUM}
done

zpool create -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot mirror /dev/gpt/disk*.nop
zpool export zroot

for I in ${DISKS}; do
	NUM=$( echo ${I} | tr -c -d '0-9' )
	gnop destroy /dev/gpt/disk${NUM}.nop
done

zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot

zpool set bootfs=zroot zroot
zfs set atime=off sys
zfs set checksum=fletcher4 zroot

zfs create zroot/usr
zfs create zroot/usr/home
zfs create zroot/var
zfs create zroot/tmp

chmod 1777 /mnt/tmp
cd /mnt ; ln -s usr/home home
chmod 1777 /mnt/var/tmp

cd /usr/freebsd-dist
export DESTDIR=/mnt
for file in base.txz kernel.txz doc.txz;
do (cat $file | tar --unlink -xpJf - -C ${DESTDIR:-/}); done

cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache

cat << EOF >> /mnt/boot/loader.conf
zfs_load=YES
vfs.root.mountfrom="zfs:zroot"
EOF

cat << EOF >> /mnt/etc/rc.conf
defaultrouter="192.168.0.200"
hostname="storage2"
ifconfig_em0="inet 192.168.0.101  netmask 255.255.255.0"
keymap="us.iso"
mountd_flags="-r" # for nfsd
nfs_client_enable="YES"
nfs_server_enable="YES"
rpcbind_enable="YES"
sendmail_enable="NO"
sendmail_msp_queue_enable="NO"
sendmail_outbound_enable="NO"
sendmail_submit_enable="NO"
sshd_enable="YES"
zfs_enable=YES
EOF
```


----------



## dvl@ (Apr 30, 2013)

Thank you.  Success.  I will continue amending that script.



----------



## srobert (Apr 30, 2013)

Consider that the PC-BSD installer can be used to install FreeBSD, which may simplify the ZFS configuration.
http://wiki.pcbsd.org/index.php/Install_a_Server


----------



## zspider (Apr 30, 2013)

I've also started playing with ZFS on my Sturgis-850C system. The ability to spot data corruption is of definite interest to me. I had no problem with my zroot install from this guide, http://forums.freebsd.org/showthread.php?p=217704


----------



## dvl@ (May 1, 2013)

srobert said:

> Consider that the PC-BSD installer can be used to install FreeBSD, which may simplify the ZFS configuration.
> http://wiki.pcbsd.org/index.php/Install_a_Server



I tried the PC-BSD installer.  It's nice. I didn't have a mouse handy, but I did manage to get the graphical installer to run for me.  I missed the bit about installing only FreeBSD.  The results are at https://plus.google.com/106386350930626759085/posts/eF6SptKB5u8.


----------



## vermaden (May 2, 2013)

@dvl,

Try this (you will also get the beadm feature from Solaris):
http://forums.freebsd.org/showthread.php?t=31662


----------



## dvl@ (May 3, 2013)

And here she is.

I'll post more, perhaps later tonight.


----------



## dvl@ (May 3, 2013)

She's fast.

`# time /etc/periodic/weekly/310.locate`

```
Rebuilding locate database:

real	0m2.181s
user	0m0.355s
sys	0m1.926s
```

And, for what it's worth, `portsnap extract` took 27 seconds.


----------



## dvl@ (May 3, 2013)

Here is what I used:


```
# Based on http://www.aisecure.net/2012/01/16/rootzfs/ and
# @vermaden's guide on the forums

DISKS="ada0 ada1 ada2 ada3 ada4 ada5"

gmirror load
gmirror stop swap

for I in ${DISKS}; do
        NUM=$( echo ${I} | tr -c -d '0-9' )
        gpart destroy -F ${I}
        gpart create -s gpt ${I}
        gpart add -b 34 -s 94 -t freebsd-boot -l bootcode${NUM} ${I}

        gpart add -s 2g -t freebsd-swap -l swap${I} ${I}

        #
        # note: not using all the disk, on purpose, adjust this size for your HDD
        #
        gpart add -t freebsd-zfs -s 2790G -l disk${NUM} ${I}
        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
        gnop create -S 4096 /dev/gpt/disk${NUM}
done

gmirror label -F -h -b round-robin swap /dev/gpt/swap*

#zpool create -f -o altroot=/mnt    -o cachefile=/tmp/zpool.cache -O atime=off -O setuid=off -O canmount=off system raidz2 /dev/gpt/disk*.nop
zpool create -f -O mountpoint=/mnt -o cachefile=/tmp/zpool.cache -O atime=off -O setuid=off -O canmount=off system raidz2 /dev/gpt/disk*.nop
zpool export system

for I in ${DISKS}; do
        NUM=$( echo ${I} | tr -c -d '0-9' )
        gnop destroy /dev/gpt/disk${NUM}.nop
done

zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache system

zfs create -o mountpoint=legacy -o setuid=on system/rootfs

zpool set bootfs=system/rootfs system

# there is no sys

#zfs set atime=off sys
zfs set checksum=fletcher4 system

mount -t zfs system/rootfs /mnt

zfs create system/root
zfs create -o canmount=off  system/usr
zfs create -o canmount=off  system/usr/home
zfs create -o setuid=on     system/usr/local
zfs create -o compress=gzip system/usr/src
zfs create -o compress=lzjb system/usr/obj
zfs create -o compress=gzip system/usr/ports
zfs create -o compress=off  system/usr/ports/distfiles
zfs create -o canmount=off  system/var
zfs create -o compress=gzip system/var/log
zfs create -o compress=lzjb system/var/audit
zfs create -o compress=lzjb system/var/tmp
#
# I was getting failure on these chmod so I did them after the system booted
#
#chmod 1777 /mnt/var/tmp
zfs create -o compress=lzjb system/tmp
#chmod 1777 /mnt/tmp
#chmod 1777 /mnt/var/tmp
zfs create system/usr/home/dan

cd /mnt ; ln -s usr/home home

cd /usr/freebsd-dist
export DESTDIR=/mnt
for file in base.txz kernel.txz doc.txz;
do (cat $file | tar --unlink -xpJf - -C ${DESTDIR:-/}); done

cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache

cat << EOF >> /mnt/etc/fstab
system/rootfs        /    zfs  rw,noatime 0 0
/dev/mirror/swap.eli none swap sw         0 0
EOF

cat << EOF >> /mnt/boot/loader.conf

geom_eli_load="YES"
geom_label_load="YES"
geom_mirror_load="YES"
geom_part_gpt_load="YES"

zfs_load=YES
vfs.root.mountfrom="zfs:system/rootfs"
EOF

cat << EOF >> /mnt/etc/rc.conf
defaultrouter="10.5.0.1"
hostname="slocum.unixathome.org"
ifconfig_em0="inet 10.5.0.207  netmask 255.255.255.0"
keymap="us.iso"
sendmail_enable="NO"
sendmail_msp_queue_enable="NO"
sendmail_outbound_enable="NO"
sendmail_submit_enable="NO"
sshd_enable="YES"
zfs_enable=YES
EOF

cat << EOF >> /mnt/etc/resolv.conf
search unixathome.org
nameserver 10.5.0.1
nameserver 10.5.0.2
EOF

echo WRKDIRPREFIX=/usr/obj >> /mnt/etc/make.conf

zfs umount -a
umount /mnt
zfs set mountpoint=/ system
```


----------



## dvl@ (May 3, 2013)

For background:

I booted using a USB stick, then dropped to a shell.  Started dhclient, then used scp to get the above file from another system.

I ran the script, then rebooted.  Done.
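For anyone following along, those steps might look roughly like this (the interface name, remote host, and script path are placeholders, not from the original post):

```
# from the live shell on the USB stick; em0 and the scp source are assumptions
dhclient em0
scp user@another-host:zfs-install.sh /tmp/zfs-install.sh
sh /tmp/zfs-install.sh
reboot
```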


----------



## Sebulon (May 15, 2013)

Hi @dvl@!

After looking a little closer at those Seagates, I've confirmed that at least they are 4K disks. Might be the case for the other big drives as well, but I haven't checked.
Seagate HDD datasheet

And correct me if I'm wrong, but I can't see you specifying correct partition alignment for them. Repartitioning can be done online; just offline, repartition, and resilver one disk at a time. It might not be a big deal for one standalone drive, but it might give better performance out of your zpool as a whole.

/Sebulon


----------



## dvl@ (May 31, 2013)

Sebulon said:

> Hi @dvl@!
> 
> And correct me if I'm wrong, but I can't see you specifying correct partition alignment for them.




I believe the purpose of the gnop commands is to achieve 4K alignment.


----------



## wblock@ (May 31, 2013)

gnop(8) is used to force the use of 4K blocks.  In ZFS, this shows as ashift=12.  That differs from alignment.  Just because the filesystem is using 4K blocks does not mean they are evenly aligned with the 4K blocks native to the drive.

So the two different things that should be done for performance:

1. Start the partition on a block that is an even 4K multiple.  If the whole drive contains only the filesystem, this will be zero.  Otherwise, I recommend 2048 (1M) or a multiple of 1M or 1G.

2. Set the filesystem to use blocks that are 4K in size.
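As a sketch, both points can be handled during partitioning (device and label names here are placeholders; `-a 1m` needs a gpart recent enough to honor it):

```
# 1. align the partition start to 1 MB, an even multiple of 4K
gpart add -a 1m -t freebsd-zfs -l disk0 ada0
# 2. force 4K blocks so the pool is created with ashift=12
gnop create -S 4096 /dev/gpt/disk0
zpool create tank /dev/gpt/disk0.nop
```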


----------



## phoenix (May 31, 2013)

Just change your "gpart add" line to the following:

```
gpart add -t freebsd-zfs -s 2790G -b 2048 -l disk${NUM} ${I}
```

That will start the partition at the 1 MB boundary, aligning it to 4K blocks and giving the best performance.

That also leaves you 1 MB of free space at the beginning of the drive, in case you ever need to make it bootable. Very handy! I just converted my home ZFS server from using a separate USB boot stick plus 2x mirror vdevs for storage to an all-ZFS (root-on-ZFS) setup, without losing data or using extra disks, because I had that extra 1 MB of free space at the beginning of the disks.


----------



## dvl@ (Jun 12, 2013)

wblock@ said:

> gnop(8) is used to force the use of 4K blocks.  In ZFS, this shows as ashift=12.  That differs from alignment.  Just because the filesystem is using 4K blocks does not mean they are evenly aligned with the 4K blocks native to the drive.
> 
> So the two different things that should be done for performance:
> 
> ...



We are in partial luck:


```
[dan@slocum:~] $ zdb | grep ashift
            ashift: 12
```


----------



## dvl@ (Jun 12, 2013)

phoenix said:
			
		

> Just change your "gpart add" line to the following:
> 
> ```
> gpart add -t freebsd-zfs -s 2790G -b 2048 -l disk${NUM} ${I}
> ...



Here is what I have now:


```
$ gpart show ada0
=>        34  5860533101  ada0  GPT  (2.7T)
          34          94     1  freebsd-boot  (47k)
         128     4194304     2  freebsd-swap  (2.0G)
     4194432  5851054080     3  freebsd-zfs  (2.7T)
  5855248512     5284623        - free -  (2.5G)
```

Let's see if I can do this math.

Partition 3 (the one used for zfs) starts at block 4194432.  If you divide by 4 (the number of 512 byte blocks in 4KB), you'll get the number of 4K blocks: 1048608

Am I misunderstanding this?


----------



## wblock@ (Jun 12, 2013)

dvl@ said:

> Here is what I have now:
> 
> 
> ```
> ...



(If the boot partition starts at an aligned value, normally block 40, the rest of the partitions will line up also, as long as each is an even multiple of 1 MB or 1 GB in size. gpart(8)'s -a option works as expected after 9.1-RELEASE, too.)



> Let's see if I can do this math.
> 
> Partition 3 (the one used for zfs) starts at block 4194432.  If you divide by 4 (the number of 512 byte blocks in 4KB), you'll get the number of 4K blocks: 1048608
> 
> Am I misunderstanding this?



Well, one part: 4096/512 = 8, not 4.  But it is aligned: 4194432/8 = 524304, an integer.
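The arithmetic can be checked from any shell; a start sector is 4K-aligned exactly when it divides evenly by 8:

```shell
# 4096-byte blocks hold 8 x 512-byte sectors; a zero remainder means aligned
echo $(( 4194432 % 8 ))   # prints 0, so partition 3 is 4K-aligned
echo $(( 4194432 / 8 ))   # prints 524304, the 4K block number
```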


----------



## dvl@ (Jul 24, 2013)

I recently used mixed device types (ada and da). In that situation the script fails: extracting the digits from the device name gives ada0 and da0 the same label number.

My solution: label the disks with a zero-based loop counter, 0 through N-1, where N is the number of disks.

Full details at http://dan.langille.org/2013/07/19/problem-with-disk-numbering-in-my-zfs-creation-script/
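A minimal sketch of that numbering change (device names here are examples; the script's real partitioning commands would go where the echo is):

```shell
# Label disks by a zero-based loop counter instead of digits from the
# device name, so ada0 and da0 no longer collide on the same label.
DISKS="ada0 ada1 da0 da1"
NUM=0
for I in ${DISKS}; do
	echo "label disk${NUM} -> ${I}"
	NUM=$(( NUM + 1 ))
done
```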


----------

