# HOWTO: Modern FreeBSD Install RELOADED (vermaden way)



## vermaden (Mar 8, 2010)

All these years *sysinstall(8)* has helped us install FreeBSD with most of the options we needed. Today, with new filesystems/features like GJournal/ZFS/Geli/GMirror/GStripe, it is no longer up to the task, because it only supports installing onto a UFS filesystem with SoftUpdates turned ON or OFF.

In this guide you will learn how to set up a FreeBSD installation in a simple yet flexible layout: read-only UFS (without SoftUpdates) for the _'base system' [1]_, some SWAP space, /tmp mounted on SWAP, and all the other filesystems (/var /usr ...) on ZFS. It will not require rebuilding anything, just a simple setup on plain MBR partitions. I should also mention that we will be using AHCI mode for the disks. I also provide two versions: for a system with one hard disk, and for a redundant setup with three of them.

Here is the layout of the system with 1 harddisk:

```
MBR SLICE 1 |    / | 512 MB | UFS/read-only 
            | SWAP |   2 GB |
            | /tmp | 512 MB | mounted on SWAP with mdmfs(8)
------------+------+---------------------------------------
MBR SLICE 2 | /usr |   REST | ZFS dataset
            | /var |   REST | ZFS dataset
```

... and here is the per-disk layout for the system with 3 disks:

```
MBR SLICE 1 |    / | 512 MB | UFS/read-only 
------------+------+--------+------------------------------
MBR SLICE 2 | SWAP |   1 GB |
            | /tmp | 512 MB | mounted on SWAP with mdmfs(8)
------------+------+--------+------------------------------
MBR SLICE 3 | /usr |   REST | ZFS dataset
            | /var |   REST | ZFS dataset
```

Redundancy planning for system with 3 disks:

```
[ DISK0 ]           [ DISK1 ]           [ DISK2 ]
 [   /   ] < RAID1 > [   /   ] < RAID1 > [   /   ]
 [ SWAP0 ]           [ SWAP1 ]           [ SWAP2 ]
 [   Z   ] < RAID5 > [   F   ] < RAID5 > [   S   ]
```

The FreeBSD core, the _'base system' [1]_, should remain almost unchanged/untouched on a daily basis, while you can mess with all the other filesystems; this ensures that when things go wrong, you will still be able to fix anything with a working _'base system' [1]_.

You will need the *-dvd-* disc or the *-memstick-* image for this installation; *-disk1-* will not do since it does not contain the *livefs* system.

Here is the procedure, described as simply as possible.

*1.0. I assume that our disk for the installation would be /dev/ad0* (/dev/ad0 /dev/ad1 /dev/ad2 for the system with 3 disks)

*1.1. Boot the -dvd- image from a DVD disc or the -memstick- image from a pendrive*

```
Country Selection --> United States
Fixit --> CDROM/DVD (*-dvd-*) or USB (*-memstick-*)
```

*1.2. Create your temporary working environment*

```
fixit# /mnt2/bin/csh
# setenv PATH /mnt2/rescue:/mnt2/usr/bin:/mnt2/sbin
# set filec
# set autolist
# set nobeep
```

*1.3. Load needed modules*

```
fixit# kldload /mnt2/boot/kernel/geom_mbr.ko
fixit# kldload /mnt2/boot/kernel/opensolaris.ko
fixit# kldload /mnt2/boot/kernel/zfs.ko
```

*1.4. Create/mount needed filesystems*

```
DISKS: 3                                      | DISKS: 1
# cat > part << __EOF__                       | # cat > part << __EOF__
p 1 165 63  512M                              | p 1 165 63  2560M
p 2 165  * 1024M                              | p 2 159  *     *
p 3 159  *     *                              | p 3   0  0     0
p 4   0  0     0                              | p 4   0  0     0
a 1                                           | a 1
__EOF__                                       | __EOF__
                                              |
# fdisk -f part ad0                           | # fdisk -f part ad0
# fdisk -f part ad1                           |
# fdisk -f part ad2                           |
                                              |
# kldload /mnt2/boot/kernel/geom_mirror.ko    |
# gmirror label  rootfs ad0s1                 |
# gmirror insert rootfs ad1s1                 |
# gmirror insert rootfs ad2s1                 |
                                              |
# bsdlabel -B -w /dev/mirror/rootfs           | # cat > label << __EOF__
                                              | # /dev/ad0s1:
                                              | 8 partitions:
                                              |   a: 512m  0 4.2BSD
                                              |   b: *     * swap
                                              | __EOF__
                                              |
                                              | # bsdlabel -B -w ad0s1
                                              | # bsdlabel       ad0s1 | tail -1 >> label
                                              | # bsdlabel -R    ad0s1 label
                                              |
# glabel label swap0 ad0s2                    | # glabel label rootfs ad0s1a
# glabel label swap1 ad1s2                    | # glabel label swap   ad0s1b
# glabel label swap2 ad2s2                    |
                                              |
# newfs /dev/mirror/rootfsa                   | # newfs /dev/label/rootfs
# zpool create basefs raidz ad0s3 ad1s3 ad2s3 | # zpool create basefs ad0s2
# zfs create basefs/usr                       | # zfs create basefs/usr
# zfs create basefs/var                       | # zfs create basefs/var
# mkdir /NEWROOT                              | # mkdir /NEWROOT
# mount /dev/mirror/rootfsa /NEWROOT          | # mount /dev/label/rootfs /NEWROOT
# zfs set mountpoint=/NEWROOT/usr basefs/usr  | # zfs set mountpoint=/NEWROOT/usr basefs/usr
# zfs set mountpoint=/NEWROOT/var basefs/var  | # zfs set mountpoint=/NEWROOT/var basefs/var
```

*1.5. Actually install the needed FreeBSD sets*

```
# setenv DESTDIR /NEWROOT
# cd /dist/8.0-RELEASE

# cd base
# ./install.sh (answer 'y' here)
# cd ..

# cd manpages
# ./install.sh
# cd ..

# cd kernels
# ./install.sh generic
# cd ..

# cd /NEWROOT/boot
# rm -r kernel
# mv GENERIC kernel
```


----------



## vermaden (Mar 8, 2010)

*1.6. Provide the basic configuration needed to boot the new system*
*1.6.1.*

```
DISKS: 3                                          | DISKS: 1
# cat > /NEWROOT/etc/fstab << __EOF__             | # cat > /NEWROOT/etc/fstab << __EOF__
#dev                #mount #fs  #opts #dump #pass | #dev              #mount #fs  #opts #dump #pass
/dev/mirror/rootfsa /      ufs  rw    1     1     | /dev/label/rootfs /      ufs  rw    1     1
/dev/label/swap0    none   swap sw    0     0     | /dev/label/swap   none   swap sw    0     0
/dev/label/swap1    none   swap sw    0     0     | __EOF__
/dev/label/swap2    none   swap sw    0     0     |
__EOF__                                           |
                                                  |
# cat > /NEWROOT/boot/loader.conf << __EOF__      | # cat > /NEWROOT/boot/loader.conf << __EOF__
zfs_load="YES"                                    | zfs_load="YES"
ahci_load="YES"                                   | ahci_load="YES"
geom_mirror_load="YES"                            | __EOF__
__EOF__                                           |
```

*1.6.2.*

```
# cat > /NEWROOT/etc/rc.conf << __EOF__
zfs_enable="YES"
__EOF__
```

*1.7. Unmount filesystems and reboot*

```
# cd /
# zfs umount -a
# umount /NEWROOT
# zfs set mountpoint=/usr basefs/usr
# zfs set mountpoint=/var basefs/var
# zpool export basefs
# reboot
```

*Now let's talk about the things you will need to do after the reboot.*

*2.0. At the boot loader, select booting into single user mode*

```
4. Boot FreeBSD in single user mode
```


```
Enter full pathname of shell or RETURN for /bin/sh: /bin/csh
% /rescue/mount -w /
% /rescue/zpool import -D || /rescue/zpool import -f basefs
% exit
```

*2.1. Login as root without password*

```
login: root
password: (just hit ENTER)
```

*2.2. Set root password*

```
# passwd
```

*2.3. Set hostname*

```
# echo hostname=\"HOSTNAME\" >> /etc/rc.conf
```
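
The backslashes matter here: they keep the double quotes in the line that lands in rc.conf. You can verify the quoting against a scratch file (`rc.conf.test` is just an example name):

```shell
# Same echo as above, pointed at a scratch file instead of /etc/rc.conf;
# the escaped quotes survive into the file as literal double quotes.
echo hostname=\"HOSTNAME\" >> rc.conf.test
```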

*2.4. Set timezone and date/time*

```
# tzsetup
# date 201001142240
```

*2.5. Tune the ZFS filesystem (only for i386)*

```
# cat >> /boot/loader.conf << __EOF__
vfs.zfs.prefetch_disable=0      # enable prefetch
vfs.zfs.arc_max=134217728       # 128 MB
vm.kmem_size=536870912          # 512 MB
vm.kmem_size_max=536870912      # 512 MB
vfs.zfs.vdev.cache.size=8388608 #   8 MB
__EOF__
```
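
The byte values above are plain MiB arithmetic; you can sanity-check them in the shell before committing them to loader.conf:

```shell
# Convert MiB to the byte values used in loader.conf above.
mb() { echo $(( $1 * 1024 * 1024 )); }
mb 128   # 134217728 (vfs.zfs.arc_max)
mb 512   # 536870912 (vm.kmem_size)
mb 8     # 8388608   (vfs.zfs.vdev.cache.size)
```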

*2.6. Mount /tmp on SWAP*

```
# cat >> /etc/rc.conf << __EOF__
tmpmfs="YES"
tmpsize="512m"
tmpmfs_flags="-m 0 -o async,noatime -S -p 1777"
__EOF__
```

*2.7. Move termcap into /etc (instead of a link that is useless on crash)*

```
# rm /etc/termcap
# mv /usr/share/misc/termcap /etc
# ln -s /etc/termcap /usr/share/misc/termcap
```

*2.8. Add latest security patches*

```
# freebsd-update fetch
# freebsd-update install
```

*2.9. Make all changes to configuration in /etc, then set / to be mounted read-only in /etc/fstab*

```
DISKS: 3                                           | DISKS: 1
 #dev                #mount #fs  #opts #dump #pass |  #dev              #mount #fs  #opts #dump #pass
+/dev/mirror/rootfsa /      ufs  ro    1     1     | +/dev/label/rootfs /      ufs  ro    1     1
-/dev/mirror/rootfsa /      ufs  rw    1     1     | -/dev/label/rootfs /      ufs  rw    1     1
 /dev/label/swap0    none   swap sw    0     0     |  /dev/label/swap   none   swap sw    0     0
 /dev/label/swap1    none   swap sw    0     0     |
 /dev/label/swap2    none   swap sw    0     0     |
```
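
If you prefer not to edit the file by hand, the rw -> ro flip can be done with sed(1). A sketch against a sample single-disk fstab (the sed pattern assumes the spacing shown in this guide; adjust it to match your own file):

```shell
# Build a sample single-disk fstab, then flip the root entry to
# read-only with sed(1), leaving the swap entry untouched.
cat > fstab.sample << __EOF__
#dev              #mount #fs  #opts #dump #pass
/dev/label/rootfs /      ufs  rw    1     1
/dev/label/swap   none   swap sw    0     0
__EOF__
sed 's|^\(/dev/label/rootfs .* ufs  *\)rw|\1ro|' fstab.sample > fstab.ro
```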

*2.10. Reboot and enjoy modern install of FreeBSD system*

```
# shutdown -r now
```


----------



## vermaden (Mar 8, 2010)

*To summarise, this setup provides us with these things:*
-- bulletproof _'base system' [1]_ on UFS (w/o SU) mounted read-only
-- /tmp filesystem mounted on SWAP
-- usage of the new AHCI mode in FreeBSD
-- flexibility for all other filesystems on ZFS
-- fully working environment on crash (/etc/termcap)
-- disks/filesystems mounted by label, so possible device name changes are harmless
-- RAID1 for / and RAID5 for all other filesystems on the setup with 3 disks

*[1]* normally the base system is / and /usr; in the context of this setup I call _'base system'_
the most important core of FreeBSD, the / filesystem and its binaries/libraries/configuration
(thanks to *phoenix* for reminding me what the REAL base system is/means)
*CHANGELOG*

*1.0* / 2010-01-14 / initial version

*1.1* / 2010-01-15 / simplified PATH
+fixit# setenv PATH /mnt2/rescue:/mnt2/usr/bin
-fixit# setenv PATH /mnt2/bin:/mnt2/sbin:/mnt2/usr/bin:/mnt2/usr/sbin

*1.2* / 2010-01-15 / added link for termcap (instead of duplicate in /etc and /usr) [2.6.]
 # rm /etc/termcap
+# mv /usr/share/misc/termcap /etc
+# ln -s /etc/termcap /usr/share/misc/termcap
-# cp /usr/share/misc/termcap /etc

*1.3* / 2010-01-21 / removed unneeded mount commands [2.0.]
-# zfs mount basefs/var
-# zfs mount basefs/usr

*1.4* / 2010-03-08 / added setup for 3 disks + cleanup
too much to fit here, we can as well call this new version RELOADED
MIRROR THREAD: http://daemonforums.org/showthread.php?t=4200
POLISH VERSION: http://bsdguru.org/dyskusja/viewtopic.php?t=19392

*ADDED: 2010/10/21*

After rethinking the setup from my HOWTO, and after *phoenix*'s thoughts, I currently use the setup below for most FreeBSD installations that include ZFS.

*LOGICAL SETUP*


```
UFS 512m /           ro
ZFS *    /home       rw | atime=off
RAM 128m /tmp        rw | async
UFS *    /usr        ro | softupdates (mounted r/w only for packages updates)
ZFS *    /usr/obj    rw | atime=off | checksum=off
ZFS *    /usr/ports  rw | atime=off
ZFS *    /usr/src    rw | atime=off
ZFS *    /var        rw
UFS 128m /var/db/pkg ro | softupdates (mounted r/w only for packages updates)
```

*PHYSICAL SETUP (LAPTOP w/ 1 DISK)*


```
p1 8g disk0s1a 512m UFS /           newfs -m 1    /dev/label/root
      disk0s1e 128m UFS /var/db/pkg newfs -m 1 -U /dev/label/pkg
      disk0s1f    * UFS /usr        newfs -m 1 -U /dev/label/usr

p2 *g disk0s2  ZFS/home             zfs create -o mountpoint=/home      pool/home
               ZFS/var              zfs create -o mountpoint=/var       pool/var
               ZFS/usr              zfs create -o mountpoint=none       pool/usr
               ZFS/usr/src          zfs create -o mountpoint=/usr/src   pool/usr/src
               ZFS/usr/obj          zfs create -o mountpoint=/usr/obj   pool/usr/obj
               ZFS/usr/ports        zfs create -o mountpoint=/usr/ports pool/usr/ports

               (if You need SWAP, omit on CF/Pendrive/SSD disks)
               ZFS/SWAP             zfs create -V 2g                    pool/swap

RAM/SWAP 128m  /tmp                 tmpmfs=YES --> /etc/rc.conf
```

*PHYSICAL SETUP (CF + DISKS)*


```
8g CF    disk0s1a 512m UFS /           newfs -m 1    /dev/label/root
         disk0s1e 128m UFS /var/db/pkg newfs -m 1 -U /dev/label/pkg
         disk0s1f    * UFS /usr        newfs -m 1 -U /dev/label/usr

*g ZFS   ZFS/home                      zfs create -o mountpoint=/home      pool/home
         ZFS/var                       zfs create -o mountpoint=/var       pool/var
         ZFS/usr                       zfs create -o mountpoint=none       pool/usr
         ZFS/usr/src                   zfs create -o mountpoint=/usr/src   pool/usr/src
         ZFS/usr/obj                   zfs create -o mountpoint=/usr/obj   pool/usr/obj
         ZFS/usr/ports                 zfs create -o mountpoint=/usr/ports pool/usr/ports

         (if You need SWAP)
         ZFS/SWAP                      zfs create -V 2g                    pool/swap

128M RAM /tmp                          tmpmfs=YES --> /etc/rc.conf
```

Of course for serious storage/backup servers it would be 'nice' to have that CF (or pendrive) mirrored via GEOM/mirror.


----------



## graudeejs (Mar 8, 2010)

Hmm, are you able to boot into single user mode with ZFS?
I can't for some reason, maybe because my HDDs are encrypted. But the eli (decrypted) devices are there, weird


----------



## vermaden (Mar 8, 2010)

@killasmurf86

/ is on UFS (with bsdlabel) so there is no problem booting into _single user mode_. I haven't played with an encrypted / to check here, maybe I will in some free time @ *virtualbox*.


----------



## graudeejs (Mar 8, 2010)

Encrypted root with UFS will work, been there done that


----------



## dewarrn1 (Apr 4, 2010)

*Couple of questions about implementation and migration*

First off, thanks to Vermaden for posting this!  It's a very slick way to take advantage of the best filesystems that FreeBSD offers.  I'm considering migrating a home server to 8.0, and this seems like a great setup for me.  I've got a couple of questions before I give it a try, though.

First, would including a fourth disk be as simple as it looks?  I've got 7.2's ZFS spanning 3 disks at the moment, but I've got a fourth sitting around and figure that it might as well be in the server.

Second, do you have any recommendations for maintaining data integrity during the move?  I've got external HDD's that can hold all of my stuff, but they're just FAT32 and lack the kind of checksum protection that ZFS is giving me on the current system.  My plan would be to use all three of the current disks and a fourth in the new system, but that will require wiping out the current filesystem.

Thanks in advance!


----------



## vermaden (Apr 4, 2010)

@dewarrn1



> First, would including a fourth disk be as simple as it looks?  I've got 7.2's ZFS spanning 3 disks at the moment, but I've got a fourth sitting around and figure that it might as well be in the server.


It will fit well on 4 disks, but it will require recreating the ZFS pool; you will just create a raidz over 4 disks.



> Second, do you have any recommendations for maintaining data integrity during the move?  I've got external HDD's that can hold all of my stuff, but they're just FAT32 and lack the kind of checksum protection that ZFS is giving me on the current system.  My plan would be to use all three of the current disks and a fourth in the new system, but that will require wiping out the current filesystem.


You can tar(1) and split(1) all your data onto that FAT32 filesystem (parts need to be smaller than 4 GB). You may as well create TWO copies of your data there, in two folders, or just make a UFS filesystem there.
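
A sketch of that tar(1) + split(1) approach (paths and the tiny chunk size are just for illustration; use something like `-b 3800m` to stay under the FAT32 4 GB limit):

```shell
# Pack a directory into fixed-size pieces that fit on a FAT32 volume,
# then reassemble and extract them; split names the pieces
# backup.tar.aa, backup.tar.ab, ... so a shell glob restores them in order.
mkdir -p data restore
echo "important stuff" > data/file.txt
tar -cf - data | split -b 512k - backup.tar.
# ...copy backup.tar.* to the FAT32 disk; later, restore with:
cat backup.tar.* | tar -xf - -C restore
```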


----------



## dewarrn1 (Apr 4, 2010)

Very cool, I'll get that process underway.  I'll probably generate some par2 data for those split tar files as well, just in case.  Thanks!


----------



## dewarrn1 (Apr 9, 2010)

This worked almost exactly as advertised!  I ended up with 4x500GB HDDs with a 4-way mirrored base system, 4 GB swap spread across the disks, and ~1.3TB ZFS.  My only hiccup was the "zpool import -D" bit, which for some reason didn't want to play nice.  However, "zpool import basefs" did the trick, and now I'm getting things back onto ZFS.  Nice work, V!


----------



## vermaden (Apr 9, 2010)

dewarrn1 said:

> This worked almost exactly as advertised!  I ended up with 4x500GB HDDs with a 4-way mirrored base system, 4 GB swap spread across the disks, and ~1.3TB ZFS.  My only hiccup was the "zpool import -D" bit, which for some reason didn't want to play nice.  However, "zpool import basefs" did the trick, and now I'm getting things back onto ZFS.  Nice work, V!



Good to know that it also works for others 

The first version included `zpool import basefs`, but after messing with 3 disks it imported with `zpool import -D`, so I changed the formula. I think I will include both just in case, thanks.


----------



## Kami (Apr 17, 2010)

I've used your guide and had the same problem as dewarrn1 when trying to import the pool (`zpool import -D`), but `zpool import basefs` did the trick here as well 

Oh, I installed the system on 2 drives and used raid1 for all the slices (/, /usr and /var).

I plan to add another disk in the future and I will keep you updated how the process of adding another disk to the zfs pool goes.

Anyway, great guide.


----------



## vermaden (Apr 18, 2010)

Kami said:

> I plan to add another disk in the future and I will keep you updated how the process of adding another disk to the zfs pool goes.


But remember that You would have to destroy the current mirror and then create a RAIDZ, for example.



			
Kami said:

> Anyway, great guide.


Thanks mate.


----------



## mefizto (May 3, 2010)

Dear vermaden or anybody,

would it be possible/prudent to modify the installation to have the entire / (UFS) and /usr (ZFS) on one (mirrored) disk, in particular flash, and have the rest of the file system, i.e. SWAP, /tmp, /var, /home, etc., on RAIDed hard drives using ZFS?

The motivation would be to further separate the OS/Application (fairly static on my system) from the data.

Thank you,

M


----------



## graudeejs (May 3, 2010)

mefizto said:

> Dear vermaden or anybody,
> 
> would it be possible/prudent to modify the installation to have the entire / (USB) and /usr (ZFS) on one (mirrored) disk, in particular flash and having the rest of the file system, i.e., SWAP, /tmp, /var, /home, etc., on RAIDed hard drives using ZFS?
> 
> ...



I don't see any reason why this couldn't be done.

P.S.
Are you from Latvian Linux center?


----------



## vermaden (May 3, 2010)

@mefizto

If You want both / and /usr on separate disks/USB, then it would be better to create a RAID 1 with gmirror on those USB drives, use UFS for /usr on that USB disk, and then put all the other filesystems with swap on the remaining hard disks.


----------



## mefizto (May 3, 2010)

Dear killasmurf86,

thank you for the reply.  And, no, I am not from Latvian Linux center.

Dear vermaden,


```
. . .it would be better, to create RAID 1 with gmirror on that USB drives, . . .
```

That was what I meant by "to have the entire / (USB) and /usr (ZFS) on _one (mirrored) disk_".  Sorry for my imprecise English.


```
. . .use UFS for /usr on that USB disk. . .
```

What would be the advantage of using UFS instead of ZFS for /usr?

If it is not much bother, could you indicate, at least in general terms, which parts of your procedure have to be changed?

Thank you,

M


----------



## vermaden (May 3, 2010)

> What would be the advantage of using UFS instead of ZFS for /usr?
> 
> If it is not much bother, could you indicate, at least in general terms, which parts of your procedure have to be changed?


Keep a small / and /usr (the base system) on UFS (in case of any problems with ZFS) to have a fully working 'repair' environment; put all the rest on the ZFS tank.

About the changes: You would have to create an additional UFS partition (e) for /usr, of course, in section 1.4:


```
# cat > label << __EOF__
# /dev/ad0s1:
8 partitions:
  a: 512m  0 4.2BSD
  e: *     * 4.2BSD
__EOF__
```


----------



## mefizto (May 4, 2010)

Dear vermaden,

thank you for your help.  

Kindest regards,

M


----------



## vermaden (May 4, 2010)

@mefizto

You are welcome mate.


----------



## Daisuke_Aramaki (May 11, 2010)

Great guide vermaden. Worked out perfectly setting up the system from the bottom up on a new laptop.


----------



## vermaden (May 11, 2010)

@Daisuke_Aramaki

Thanks 'n' Welcome


----------



## mystique (Jun 1, 2010)

```
DISKS: 3                                            
# cat > part << __EOF__                               
p 1 165 63  512M                                     
p 2 165  * 1024M                                      
p 3 159  *     *                                      
p 4   0  0     0                                    
a 1                                                
__EOF__                                               
                                                      
# fdisk -f part ad0                               
# fdisk -f part ad1                               
# fdisk -f part ad2
```

My disks are ad4, ad6, ad8 and ad10..

What would I change here? Because this is not working for me.

I understand fdisk -f part ad4 would be the right command, but when I do that I get this error:


```
******* Working on device /dev/ad4 *******
fdisk: invalid fdisk partition table found
fdisk: geom not found: "ad4"
```

thanks in advance..


----------



## mystique (Jun 1, 2010)

So it looks like I found that I was not using the full path to add the kernel modules and was getting an error that I overlooked.. hence why it could not find ad4; *blush* my bad..

So after that I was able to finish the install but when I get to the 'reboot in single user mode' I have some issues..

I have a usb keyboard and can not use the keyboard at boot time; odd.. 
when it boots I get the mountroot> prompt; also odd.. 

but when I look back through dmesg I do not see my (ad4, ad6, ad8, ad10) but rather I see ada0, ada1, ada2, ada3.. 

I am assuming that has *something* to do with ahci (something I've not used yet)

so what do I do with that?


```
# fdisk -f part ada0                                    
# fdisk -f part ada1                               
# fdisk -f part ada2
# fdisk -f part ada3
```

but do I need to load the ahci module beforehand?

thanks in advance.


----------



## zeroseven (Jun 8, 2010)

I may be partially in the wrong place to ask this question.  I have a sparc64 system and before I set myself up for a lot of desk to face action, I was wondering if I could accomplish this sort of install using the livefs in combination with disk1?


----------



## vermaden (Jun 8, 2010)

@zeroseven

I have one sparc64 machine [*] available, but it's in production at least till 2010/09/1, so I may try the same setup under sparc64, but not until that date (maybe even later; projects, as they often do, slip a little or more in time).


[*] *Sun Ultra 45* if it comes to details.


----------



## zeroseven (Jun 8, 2010)

I think I will give it a shot today then; mine is only in hobby status, so I can't really do any damage. I have a question about the AHCI mode though: because it's SPARC and an older architecture, do I assume that part isn't applicable? Also, the ZFS tuning in your guide is i386 specific; I'm wondering if others do not need to do this, or if I should dig around for SPARC-specific tuning to accomplish the same thing.

I appreciate the quick response, I'll post what I encounter if it's relevant, definitely if I get it to work.  Thanks.


----------



## vermaden (Jun 8, 2010)

@zeroseven

I haven't seen any SPARC64/SPARC related info about AHCI; it probably depends on what controller You have there and whether it supports AHCI mode. If You kldload the AHCI module and You do not have AHCI hardware, nothing bad will happen, You will just still have /dev/ad* disks instead of /dev/ada* disks; but my guide operates on labels and ZFS, which are not based on device names, so You can't shoot yourself in the foot here.

This Sun Ultra 45 box has some SATA drives, so if SATA is not i386/amd64 specific, then neither is AHCI, but I dunno if any developers put any love into sparc64 to add code for AHCI on it.

Anyway, please report how this guide 'works' on SPARC, probably no one has tried it on SPARC


----------



## zeroseven (Jun 10, 2010)

@vermaden

I'll start with a little background here: I'm trying to accomplish this install on a Netra T1 105 with two SCSI HDDs. So, naturally, because it's headless I'm having to work over a serial line, no problem. I am fortunate enough to have a cdrom drive included stock with the system, however it obviously doesn't read a DVD. That's the first snag. I thought maybe there would be a method to umount the livefs disc in order to mount disk1 when I need to install the base system and boot. Unfortunately I don't know how to do this, or a safe way to do it.

So, I decided I would settle for a modified version of the "vermaden way" for now. Installing a base system with disk1 on a 512M root, swap and a third partition as a place holder for zfs.

I know this defeats some of the purpose of disk labels instead of disk references, i.e. rootfs/root, basefs/usr, etc...  However, the only affected mount point is the rootfs/root.  Once in single user mode I:

```
zpool create basefs da0e da1
zfs create basefs/usr
zfs create basefs/var
zfs create basefs/home
```
Then set the appropriate mount points to /usr, /var, /home and export the pool. Then: 

```
zpool import -f basefs
```
Everything mounted correctly.  However, if you are quick to analyze, I made a fundamental mistake when I imported.  I overlapped /usr with basefs/usr, so when I left single user mode, whatever was needed in /usr was nowhere to be found.  I'm pretty sure that's easily fixed by just setting basefs/usr to a temporary mount point other than /usr, doing a mv /usr /tempbasefs/usr, then resetting the basefs/usr mount point back to /usr.

I didn't have enough time to try this, I'll give it a shot tomorrow.


----------



## vermaden (Jun 10, 2010)

zeroseven said:

> I am fortunate enough to have a cdrom drive included stock with the system, however it obviously doesn't read a dvd.  That's the first snag.



I assume that Your machine does not allow You to boot from USB/pendrive?



> I know this defeats some of the purpose of disk labels instead of disk references, i.e. rootfs/root, basefs/usr, etc...  However, the only affected mount point is the rootfs/root.


That is not a problem, You may LATER add those labels and modify /etc/fstab to boot from them, so You do not lose anything here.

As for the 'problematic' /usr stuff, You may as well delete the whole /usr partition (along with its data) and extract the base set again, so it will (again) extract everything to / and to the needed /usr partition.


----------



## vermaden (Jun 10, 2010)

zeroseven said:

> I didn't have enough time to try this, I'll give it a shot tomorrow.


Good luck, I see that all 'major' problems are solved here.

You may also want to go the *phoenix* (longtime FreeBSD community member) way, installing both / and /usr on UFS2 and putting all the rest on ZFS; IMHO the *phoenix* way is better for a server, while mine is better for a laptop/workstation/desktop.

*TO ADMINS:* I needed to spread my reply into 2 posts, because I got 503 error from *varnish*.


----------



## zeroseven (Jun 10, 2010)

@vermaden

Thanks for all the tips and the good write up.  Eventually I just settled for a UFS2 / and /usr like you suggested.  It makes more sense I suppose.  I'm working with two very small 4 gig scsi drives and it would have been nice to have /usr on the 6.8 gig zfs tank so it could expand as needed.

Really this was all just an exercise to see if I could actually do this.  If I had a dvd drive I would have been able to get it done with no problems.  No worries.  The whole process is very similar on sparc from what I could tell.  You don't have to worry about setting up the master boot record and you can use sunlabel instead of bsdlabel to setup the partitions and for the most part I'm certain the rest of your write up is the same.


----------



## vermaden (Jun 19, 2010)

@zeroseven

OK, thanks for testing and reviewing on SPARC.

Glad that You got it working, regards.


----------



## loop (Jul 1, 2010)

Am I right in thinking that, starting from a single-disk setup, you can simply add vdevs to the tank to increase the space available?

Specifically, can I add another drive to the system and have zfs expand /usr and use the space on the new drive?


----------



## vermaden (Jul 1, 2010)

loop said:

> Am I right in thinking that, starting from a single-disk setup, you can simply add vdevs to the tank to increase the space available?
> 
> Specifically, can I add another drive to the system and have zfs expand /usr and use the space on the new drive?


Yes, it is 100% expandable: with ZFS by adding another mirror/raidz/raidz2 vdev (which will make a RAID0 of them), and with GEOM Mirror by adding another partition to the current mirror setup.
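
For the ZFS side, a sketch of what such an expansion could look like (the pool and device names below are only illustrative):

```shell
# add a second mirror vdev to the existing pool; ZFS will then
# stripe writes across both mirrors (a RAID10-like layout)
zpool add basefs mirror da3s3 da4s3

# verify that the new vdev is part of the pool
zpool status basefs
```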


----------



## vermaden (Jul 25, 2010)

Now that FreeBSD 8.1-RELEASE has been released, we can upgrade our ZPOOL(s) to the newer version (v13 --> v14); below You will find a simple way to achieve that.

First check what is currently on Your system:

```
# uname -m -r
8.1-RELEASE amd64

# zfs get version basefs
NAME    PROPERTY  VALUE    SOURCE
basefs  version   3        -

# zpool list -o version
VERSION
     13
```

Now let's proceed with the zpool/zfs upgrade procedure:

```
# zpool upgrade
This system is currently running ZFS pool version 14.

The following pools are out of date, and can be upgraded.  After being
upgraded, these pools will no longer be accessible by older software versions.

VER  POOL
---  ------------
13   basefs

Use 'zpool upgrade -v' for a list of available versions and their associated
features.

# zpool upgrade basefs
This system is currently running ZFS pool version 14.

Successfully upgraded 'basefs' from version 13 to version 14

# zfs upgrade
This system is currently running ZFS filesystem version 3.

All filesystems are formatted with the current version.
```

After the upgrade we can check our ZPOOL(s) version again:

```
# zpool list -o version
VERSION
     14
```

Your pool is now upgraded to the newest 'release' zpool/zfs version.


----------



## jj (Jul 26, 2010)

Nice guide.

but is setting / to be mounted read-only in the last step necessary? I get a bunch of errors when booting, like "not able to write to /etc/hosts.conf, file system is read only" etc.

And it doesn't seem practical having to remount the file system as rw every time I want to edit something like /etc/rc.conf. Or am I just doing it wrong?


----------



## vermaden (Jul 26, 2010)

jj said:

> And it doesn't seem practical having to remount the file system as rw
> everytime I wanna edit something like /etc/rc.conf. Or am I just doing it wrong?



How often do You edit /etc/rc.conf (or anything else in /etc)? 

I do it that way:

```
# mount -w /
# vi /etc/rc.conf
# mount -r /
```

... You can even create an alias/function for that in Your shell.
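
A sketch of such a function (the name `rcedit` is made up for illustration; it assumes / may be remounted this way on Your setup):

```shell
# remount / read-write, edit the given file (default: /etc/rc.conf),
# then put / back to read-only
rcedit() {
    mount -w / || return 1
    ${EDITOR:-vi} "${1:-/etc/rc.conf}"
    mount -r /
}
```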


----------



## olav (Oct 5, 2010)

I just created a mirrored USB pen setup by following this guide. It works great, except that when I pull out one of the USB pens and reconnect it, the system doesn't notice that I've reconnected the USB pen. With another setup I have, a regular FreeBSD install (with only UFS partitions), this works fine. What could be wrong?


----------



## vermaden (Oct 6, 2010)

olav said:

> I just created a mirrored USB pen setup by following this guide. It works great, except that when I pull out one of USB pens and reconnect, it doesn't notice that I've reconnected the USB pen. With another setup I have with a regular FreeBSD install (with only UFS partitions) this works fine. What could be wrong?



Provide outputs of these commands BEFORE and AFTER unplugging/attaching the USB pendrives:

```
# gmirror status
# zpool status
```


----------



## olav (Oct 6, 2010)

Before unplugging:

```
zbtankX# gmirror status
         Name    Status  Components
mirror/rootfs  COMPLETE  da0s1
                         da1s1
zbtankX# zpool status
  pool: basefs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da1s3   ONLINE       0     0     0
            da0s3   ONLINE       0     0     0

errors: No known data errors
```

When I unplug I get this message on screen:

```
ugen4.2: <Corsair> at usb4 (disconnected)
umass0: at uhub4, port 2, addr 2 (disconnected)
(da0:umass-sim0:0:0:0): lost device
GEOM_MIRROR: Device rootfs: provider da0s1 disconnected.
```

After unplug

```
zbtankX# gmirror status
         Name    Status  Components
mirror/rootfs  DEGRADED  da1s1
zbtankX# zpool status
  pool: basefs
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            da1s3   ONLINE       0     0     0
            da0s3   REMOVED      0   335     0

errors: No known data errors
```

Reattach:

Same as after unplug. No message on screen (with the other setup I get a message telling me that the USB pen has been reattached).


----------



## vermaden (Oct 6, 2010)

olav said:

> ```
> zbtankX# gmirror status
> Name    Status  Components
> mirror/rootfs  DEGRADED  da1s1
> ```



For gmirror, first try *activate*; if that does not help, then *forget/insert* should do it. Check gmirror(8) for more details.


```
# gmirror activate rootfs da0s1
# gmirror rebuild rootfs da0s1
```


```
# gmirror forget rootfs da0s1
# gmirror insert rootfs da0s1
# gmirror rebuild rootfs da0s1
```



			
olav said:

> ```
> zbtankX# zpool status
> pool: basefs
> state: DEGRADED
> ...



I would try this (note that zpool attach takes the pool, an existing mirror member and the new/returning device):

```
# zpool attach basefs da1s3 da0s3
```






> Reattach:
> 
> Same as after unplug. No message on screen(with the other setup I get a message telling me that the usb pen has been reattached)


What about dmesg messages after plugging it back in?


----------



## olav (Oct 6, 2010)

There is no /dev/da0 when I replug, and dmesg says nothing. I really don't think this has anything to do with gmirror or ZFS.

When I use camcontrol I get this:

```
zbtankX# camcontrol rescan all
Re-scan of bus 0 was successful
Re-scan of bus 1 returned error 0xa
Re-scan of bus 2 was successful
```


```
zbtankX# camcontrol devlist
<TOSHIBA MK1234GSX AH002E>         at scbus0 target 0 lun 0 (pass0,ada0)
<Corsair Flash Voyager 1100>       at scbus2 target 0 lun 0 (pass2,da1)
```

Any other ideas about what I can do? Is it possible to restart the usb service/module?


----------



## vermaden (Oct 6, 2010)

@olav

Do these disks appear after a reboot? (I mean, if they are attached before You start the system.)


----------



## olav (Oct 6, 2010)

Yes. Before I start the system they're both attached and appear with dmesg. I find this really weird, I've tried several other usb pens(different brands too), but none of them show up in dmesg if I attach them after the boot process.


----------



## vermaden (Oct 21, 2010)

FYI: added a new/alternative way of setup at the end of post #2: http://forums.freebsd.org/showpost.php?p=71762&postcount=2


----------



## mystique (Oct 24, 2010)

Question..

I originally set up the machine to use two drives in a mirror.

The machine has been so successful; we wanted to add a third drive and change from mirror to raidz.

How would I do that?

Thanks in advance.

I found this link:
http://www.fscker.ca/rc/2010/05/20/migrate-zfs-mirror-to-raidz-on-freenas/

but it doesn't look like they have /usr and /var and /home running on their pool..


----------



## vermaden (Oct 24, 2010)

mystique said:

> I found this link:
> http://www.fscker.ca/rc/2010/05/20/migrate-zfs-mirror-to-raidz-on-freenas/
> 
> but it doesn't look like they have /usr and /var and /home running on their pool..


The guide is OK; following it would allow You to 'migrate' from RAID1 to RAID5. As for /usr, /var and so on, it is just a matter of datasets: add the needed datasets with appropriate -o mountpoint=/where arguments and put whatever You like on the pool.
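
A sketch of what adding such datasets could look like (the pool name `tank` and the mountpoints are only examples):

```shell
# each dataset is created with its own mountpoint and mounts itself there
zfs create -o mountpoint=/usr  tank/usr
zfs create -o mountpoint=/var  tank/var
zfs create -o mountpoint=/home tank/home
```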


----------



## zeroseven (Nov 4, 2010)

@vermaden

Hello again, it has been a while.  I recently acquired an Intel machine, a compact flash drive and a SATA HDD.  I was following your recent post pertaining to that sort of setup, however I am having difficulty with the ZFS pool being persistent through a reboot.  After a reboot only the root label is mounted and it kicks me to single user mode. From single user mode, the ZFS pool is no longer importable or even recognized.

If I jump back into the fixit mode on the install cd, I can import and re-export the zfs pool, reboot and then re-import through single user mode, exit and the system runs.. until another reboot.  I have the appropriate /boot/loader.conf and /etc/rc.conf with zfs enabled and loaded.

I will continue to scour over your guide to see if I missed something and look around, but I was wondering if you could think of anything off the top of your head I might be missing.


----------



## zeroseven (Nov 4, 2010)

I figured it out: conflicting mounts. The UFS /var/db/pkg in /etc/fstab creates a problem because it tries to mount before the ZFS pool; /var doesn't exist yet, which results in an error and a boot to single user mode. I removed the entry from fstab and everything booted and mounted normally. Now my question is: other than manually mounting /var/db/pkg every time I reboot, is there a way to make it the last partition mounted?


----------



## vermaden (Nov 5, 2010)

I have it like this in my /etc/fstab file:

```
#BASE
#DEV            #MOUNT      #FS    #OPTS      #PASS/DUMP
/dev/label/root /           ufs    rw,noatime 1 1
/dev/label/usr  /usr        ufs    rw         2 2
storage/var     /var        zfs    rw         0 0
/dev/label/pkg  /var/db/pkg ufs    rw         2 2
/dev/cd0        /mnt/cdrom  cd9660 ro,noauto  0 0

#ADDITIONAL
#DEV                        #MOUNT               #FS #OPTS #PASS/DUMP
storage                     /storage             zfs rw    0 0
storage/home                /home                zfs rw    0 0
storage/usr/obj             /usr/obj             zfs rw    0 0
storage/usr/ports           /usr/ports           zfs rw    0 0
storage/usr/ports/distfiles /usr/ports/distfiles zfs rw    0 0
storage/usr/ports/obj       /usr/ports/obj       zfs rw    0 0
storage/usr/ports/packages  /usr/ports/packages  zfs rw    0 0
storage/usr/src             /usr/src             zfs rw    0 0
```


----------



## zeroseven (Nov 5, 2010)

Wow.. I didn't even have any zfs declarations in my fstab.. thanks a million, makes a lot more sense now.


----------



## vermaden (Nov 5, 2010)

@zeroseven

Welcome.

There are two ways to have ZFS mounted after reboot.

1. put zfs_enable=YES in /etc/rc.conf, so all defined datasets will be mounted
2. put each needed dataset into /etc/fstab where You want
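
Sketches of both variants (the dataset name pool/var is only an example):

```shell
# 1. /etc/rc.conf -- mount every defined dataset at its configured mountpoint
zfs_enable="YES"

# 2. /etc/fstab -- mount a specific dataset exactly where You want it
#    #DEV      #MOUNT  #FS  #OPTS  #PASS/DUMP
#    pool/var  /var    zfs  rw     0 0
```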


----------



## chainsaw (Jan 25, 2011)

Hello, I'm getting error:


```
fdisk: Geom not found: "ad4"
```
when doing:
`# fdisk -f part ad4`

I have 3 HDDs, ad4, ad6 and ad8, on SATA (the Intel S5000VSA motherboard doesn't have an AHCI mode for SATA). I just tried running the same code before step 1.4 and it didn't give any errors, or did it just not do anything?

I have tried several ways of getting software RAID5 working and failed so far. FreeBSD hates me.


----------



## vermaden (Jan 26, 2011)

> fdisk: Geom not found: "ad4"


From what I remember it always complains about that unless You kldload the geom_mbr module, but even without the module it just works; check with fdisk ad4 again whether Your changes have been made.


----------



## dewarrn1 (Jan 26, 2011)

*Updates for 8.2?*

Hi Vermaden,

I've got a system running 8.1 now, but I'm thinking of installing 8.2 from scratch using your method.  Will there be any changes to the installation procedure for the new STABLE release?  Thanks!


----------



## vermaden (Jan 26, 2011)

@dewarrn1

I do not think there will be any changes, it should work the same with 8.2.


----------



## technoUrbanNomad (Jan 29, 2011)

Excellent how-to! Two quick questions concerning the 3 disk configuration:

1) If one of the disks fails, would there be corruption of the swap space which might interfere with a running system?
2) If one disk fails, what is the recovery process?


----------



## caesius (Jan 29, 2011)

How do I go about adapting this guide if I want to have /home on a separate HDD?


----------



## copypaiste (Feb 6, 2011)

Cool guide, vermaden! One question here - why did you choose 159 (9f BSD/OS) type for the 2nd partition?


----------



## vermaden (Feb 6, 2011)

copypaiste said:

> Cool guide, vermaden! One question here - why did you choose 159 (9f BSD/OS) type for the 2nd partition?


I used 165 (the FreeBSD type) for the UFS partitions and I wanted the ZFS partitions to look different from the UFS ones; as BSD/OS is already dead, I used the BSD/OS (159) type to make them stand out in the fdisk output.


----------



## overmind (Feb 16, 2011)

Nice tutorial. I have few questions:
- What is the advantage of having /tmp mounted in swap?
- Will mounting a 2G /tmp in 4 GB of swap be OK on a 2G RAM machine? (I think yes, but I want to be sure.) In other words, will mounting in swap use any memory, given that mdmfs according to its man page mounts an in-memory file system, and how much memory will it use?
- If I use your setup with two drives, (with same setup as in your example with 3 drives) but using ZFS as RAID1, it will be possible later to add two new drives, and have /var and /usr on 4 drives like that: in a raidz tank to have: slice3 from drive1, slice 3 from drive2, drive 3 and drive4? (of course the pool must be recreated)
- On your setup is there any advantage of using GPT instead of MBR?
- If I want to encrypt with geli the whole pool, it will work ok? Should I put zfs on top of geli or the other way around?

Thank you


----------



## vermaden (Feb 16, 2011)

overmind said:

> - What is the advantage of having /tmp mounted in swap?


The usage pattern of /tmp is almost the same as SWAP: many random, very temporary files/data needed for a short period of time. It can also be on ZFS; it is mostly a matter of preference.



> - Mounting a 2G /tmp in 4 GB swap will be ok on a 2G RAM machine?


SWAP is just disk space, so yes, it will be OK.
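
The swap-backed /tmp from the guide can be set up by hand with mdmfs(8) or persistently via rc.conf(5); a sketch (the 512 MB size is just the value from the guide):

```shell
# one-shot: 512 MB swap-backed memory disk mounted on /tmp
mdmfs -s 512m md /tmp

# persistent alternative in /etc/rc.conf:
# tmpmfs="YES"
# tmpsize="512m"
```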



> - If I use your setup with two drives, (with same setup as in your example with 3 drives) but using ZFS as RAID1, it will be possible later to add two new drives, and have /var and /usr on 4 drives like that: in a raidz tank to have: slice3 from drive1, slice 3 from drive2, drive 3 and drive4? (of course the pool must be recreated)


Yes, You will 'advance' from RAID1 to RAID10 (a stripe of two RAID1s).



> - On your setup is there any advantage of using GPT instead of MBR?


I haven't found any; it should work the same with GPT partitions. I just use MBR partitions because I got used to them, I do not need more than 2-3 primary partitions, I mostly use bsdlabel partitions inside one primary partition on FreeBSD, and sometimes I need to put Windows XP on the same disk, which will not work with GPT partitions.



> - If I want to encrypt with geli the whole pool, it will work ok? Should I put zfs on top of geli or the other way around?


Yes, ZFS goes on top of geli: just encrypt the drives before creating the pool with zpool. I have seen several guides on the net for that; just search for zfs geli freebsd.
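
A minimal sketch of that ordering (device names, the geli sector size and the pool name are only illustrative):

```shell
# initialize and attach geli on the partition first
geli init -s 4096 /dev/ada0s3
geli attach /dev/ada0s3

# then create the pool on the .eli provider, so ZFS sits on top of geli
zpool create tank /dev/ada0s3.eli
```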


----------



## bsd64 (Mar 15, 2011)

*No basefs to rescue?*

Thank you vermaden for this detailed step by step.

I appear to have no problems until: 
[cmd=]/rescue/zpool import -D || /rescue/zpool import -f basefs[/cmd]

My machine tells me there is no basefs to rescue. I proceeded with the other suggestions, but then on reboot the root file system cannot be found. 

I am trying this with 8.2 release.


----------



## vermaden (Mar 15, 2011)

Try only that instead: [cmd=]/rescue/zpool import -f basefs[/cmd]


----------



## bsd64 (Mar 15, 2011)

Yes, that works. Thank you! 
I have already managed to damage and repair the rootfs mirror. The machine mysteriously shut off when I tried to press ctrl-alt-F9 and my finger grazed a 4th button?!

cheers


----------



## bbzz (May 1, 2011)

@vermaden
Most awesome tutorial. I never felt the need to try ZFS on my desktop, but I'll give it a try.
One question: would it be possible to modify this tutorial to include that change @20.10.2010? How do you go about doing / and /usr on UFS and the rest on ZFS with 3 disks in RAID (but with vermaden's flavor, with some /usr on ZFS)? Also, is that /tmp @ 128RAM now only mounted in memory or is it still swap based?
It looks a little daunting...


----------



## vermaden (May 2, 2011)

@bbzz

Thanks mate.



> Would it be possible to modify this tutorial to include that change @20.10.2010.


What change exactly?



> How you go about doing / and /usr on UFS and the rest on zfs with 3 disks in raid (but with vermaden's flavor with some /usr on zfs).


I have had a similar setup: / and /usr on an 8GB CF card on UFS, and /usr/local, /var, /tmp and the rest on a ZFS pool.


----------



## bbzz (May 2, 2011)

I was a bit confused by your new setup, but I get it now. I wanted to set up a small home server with 3x500GB drives, and I wasn't sure if I should just go all ZFS. So I did; I ended up with everything on the ZFS pool, including swap, /tmp, etc. Not sure if that's a good idea.


----------



## vermaden (May 2, 2011)

bbzz said:

> I was a bit confused by your new setup, but I get it now. I wanted to setup a small home server with 3x500gb, and I wasn't sure if I should just go all zfs. So I did; ended up with everything on zfs pool including swap, /tmp, etc. Not sure if that's a good idea.


The only risk I see in that setup is upgrades, for example changes in the boot code that have to be 'set up' again after an upgrade; that would make upgrading a little PITA.

I have a NAS with FreeBSD and 2 * 2TB drives; the base system (/) is on an 8GB CF card and all the rest is on a zpool mirror (RAID1 equivalent) on those 2 drives, all the 'classic' MBR way.

... but for a laptop that I use every day I would go for a GPT/ZFS-only setup, one zpool 'to rule them all'.
If that laptop fails I do not care; I have a backup (on the NAS ...) and I can do a reinstall if an upgrade fails.


----------



## bbzz (May 2, 2011)

What do you mean by changes in boot code? Wouldn't that just be a matter of copying it to the freebsd-zfs slice? Or am I missing something? (probably)

Just a couple of quick questions about your setup if you don't mind please 
1) Why use UFS and not another zfs pool for /, /var/db/pkg, and  /usr  on another disk? Any special reason why you stick with UFS?
2) Are you keeping your backup/data files all under /home? Since I would be using this as my desktop regularly as well, wouldn't it be better to create another mountpoint (/data), which you can then tune (parts with lots of docs and text with compression, etc) rather than keep all under home directory?
3) Are there any limitations or zfs performance issues with having lots of mountpoints under mountpoints (say, /usr/home/bbzz/data/docs under /usr/home/bbzz/data under /usr/home/ under /usr, etc).
4) Is that /tmp mounted only in memory? I used that before, with only 16-32MB, and found out that sometimes there are issues when building large ports, so I ended up just mounting it under swap.


----------



## vermaden (May 2, 2011)

bbzz said:

> What do you mean changes in boot code? Wouldn't that be just a matter of copying it to freebsd-zfs slice? Or I'm missing something (probably)



I remember some thread about problems after zpool/zfs upgrade.



> 1) Why use UFS and not another zfs pool for /, /var/db/pkg, and  /usr  on another disk? Any special reason why you stick with UFS?


I was not able to boot a ZFS pool from an MBR partition; that is why I use MBR+UFS+ZFS most of the time. I have tried several howtos on MBR/ZFS root boot but none of them worked, at least on 8.2-RELEASE. So it is either MBR/UFS+ZFS or GPT/ZFS. Using several ZFS pools is totally pointless.



> 2) Are you keeping your backup/data files all under /home? Since I would be using this as my desktop regularly as well, wouldn't it be better to create another mountpoint (/data), which you can then tune (parts with lots of docs and text with compression, etc) rather than keep all under home directory?


I use it most of the time as /storage or /data or something like that; /home is a separate dataset.



> 3) Are there any limitations or zfs performance issues with having lots of mountpoints under mountpoints (say, /usr/home/bbzz/data/docs under /usr/home/bbzz/data under /usr/home/ under /usr, etc).


I haven't heard of any performance issues related to the number of mount points.

As You are already asking about performance, here are the differences between 8.1 and 8.2 ZFS in BLOGBENCH:


```
8.1 ZFS
Final score for writes:           375
Final score for reads :         42163

8.2 ZFS
Final score for writes:          1273
Final score for reads :        120520

8.2 UFS
Final score for writes:            77
Final score for reads :        119512
```



> 4) Is that /tmp mounted only under memory? I used that before, with only 16-32mb, and found out that sometimes there are issues when building large ports so ended up just mounting under swap.


I also use /tmp as a ZFS dataset now.


----------



## bbzz (May 3, 2011)

vermaden said:

> @zeroseven
> 
> Welcome.
> 
> ...



I reinstalled using your setup, vermaden. Seems most convenient since I can just blow up my usb stick.

When I only put

```
zfs_enable=YES
```
in /etc/rc.conf, what seems to happen is that the stuff in /etc/fstab is mounted first, and only then ZFS. The implication is that /var/db/pkg cannot be mounted under /var (which is ZFS), so all files get installed on the ZFS pool.

On the other hand, if I just use /etc/fstab and don't enable

```
zfs_enable=YES
```
I can't access anything under /usr; that is, only root has permission to access it.

Conclusion. I had to enable both 1. and 2. to make it work, making sure that pool/var is mounted in /etc/fstab before /var/db/pkg.

Just wanted to comment on this in case someone reading this uses this setup and has this issue.

BTW if you have any other suggestions, please do tell before I fill up my drives too much.


----------



## vermaden (May 4, 2011)

bbzz said:

> Conclusion. I had to enable both 1. and 2. to make it work, making sure that pool/var is mounted in /etc/fstab before /var/db/pkg.


Strange, I did not have such problems; I only use /etc/fstab for ZFS/UFS mounts.



			
bbzz said:

> BTW if you have any other suggestions, please do tell before I fill up my drives too much.


Currently, after my NAS 'update', I again use / on the CF card and /usr, /tmp and /var on the ZPOOL; an alternative can be / and /usr on CF and /usr/local, /tmp and /var on the ZPOOL.

This is my current /etc/fstab  file:

```
#DEV            #MOUNT          #FS     #OPTS           #PASS/DUMP
/dev/label/root /               ufs     rw,noatime      1 1
storage/usr     /usr            zfs     rw,noatime      0 0
storage/var     /var            zfs     rw,noatime      0 0
storage/tmp     /tmp            zfs     rw,noatime      0 0
/dev/cd0        /mnt/cdrom      cd9660  ro,noauto       0 0
```


----------



## bbzz (May 4, 2011)

Isn't the 'base system' / plus the stuff under /usr as well (libraries, binaries, etc.), things that won't work one without the other? Also /var/db/pkg, which would make updating tedious/impossible if corrupted. So yeah, I really liked your previous setup where the zpool only has stuff that can live on its own after a reinstall.


```
#def_fs
/dev/label/root         /               ufs     rw,noatime      1 1
/dev/label/usr          /usr            ufs     rw              2 2
pool/var                /var            zfs     rw              0 0
/dev/label/pkg          /var/db/pkg     ufs     rw              2 2
pool                    /pool           zfs     rw              0 0
pool/home               /home           zfs     rw              0 0
pool/usr/obj            /usr/obj        zfs     rw              0 0
pool/usr/ports          /usr/ports      zfs     rw              0 0
pool/usr/ports/distfiles /usr/ports/distfiles zfs rw            0 0
pool/usr/ports/packages /usr/ports/packages zfs rw              0 0
pool/usr/src            /usr/src        zfs     rw              0 0
pool/data               /data           zfs     rw              0 0
pool/var/tmp            /var/tmp        zfs     rw              0 0
proc                    /proc           procfs  rw              0 0
```


```
mount
/dev/label/root on / (ufs, local, noatime, read-only)
devfs on /dev (devfs, local, multilabel)
/dev/label/usr on /usr (ufs, local, soft-updates)
pool/var on /var (zfs, local, noexec, nosuid)
/dev/label/pkg on /var/db/pkg (ufs, local, soft-updates)
pool on /pool (zfs, local)
pool/home on /home (zfs, local)
pool/usr/obj on /usr/obj (zfs, local)
pool/usr/ports on /usr/ports (zfs, local, nosuid)
pool/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid)
pool/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid)
pool/usr/src on /usr/src (zfs, local, noexec, nosuid)
pool/data on /data (zfs, local, noexec, nosuid)
pool/var/tmp on /var/tmp (zfs, local, nosuid)
procfs on /proc (procfs, local)
pool/tmp on /tmp (zfs, local, nosuid)
```

edit: Is there a need for the pool to have its own mountpoint, i.e. /pool?


----------



## vermaden (May 4, 2011)

bbzz said:

> edit: Is there a need for pool to have its own mountpoint i.e /pool?


No, I set it to none most of the time:

```
# zfs set mountpoint=none pool
```
... or You can get rid of the pool/data dataset and use /pool instead.


----------



## vand777 (Oct 27, 2011)

Thank you very much! It is a very good tutorial. 

I faced a small problem: after a reboot my ZFS pool is "not available" (ZFS filesystems are not mounted).

I have

```
zfs_enable="YES"
```
in my /etc/rc.conf (no ZFS filesystem entries in /etc/fstab). When I run *zpool status* after reboot, it says

```
no pools available
```
When I manually import the pool, everything appears to be OK, but it is not mounted automatically after the next reboot.

Any ideas?

P.S. 8.2-RELEASE-p0.


----------



## rabfulton (Oct 27, 2011)

Do you have:

```
zfs_load="YES"
```

in /boot/loader.conf ?


----------



## vand777 (Oct 27, 2011)

Yes, I have.


----------



## zeroseven (Oct 28, 2011)

@vand777

I had a similar problem when I first tried this method.  How are your disks set up?  I'm going to assume that you have something like a CF for / and /usr and everything else is on the ZFS pool.  If this is the case, then you need to mount any pools specifically in the path of the UFS devices you set up.  For example, in my setup I have /var/db/pkg as UFS, however my /var directory is on a zpool.  So, in my /etc/fstab I have to specifically mount pool/var to /var before the /var/db/pkg mount can be made.


```
#dev		  #mount	#fs	#opts	#dump	#pass
/dev/ufs/root	  /		ufs	rw	1	1
/dev/ufs/usr	  /usr  	ufs	rw	2	2
pool/var	  /var		zfs	rw	0       0
/dev/ufs/pkg	  /var/db/pkg	ufs	rw	2	2
```

When I hadn't declared the mount, the system would boot, but hang when trying to mount the filesystems and kick me to the rescue shell, at which point I could import the pool and boot normally.  After this tip from Vermaden, all has worked flawlessly since.


----------



## vand777 (Oct 28, 2011)

Thank you for your advice! I'll check it in the evening as I was playing with one of my home laptops. 

I saw your posts about it in this topic and I've tried it. Currently, the line with /var/db/pkg is commented out in  /etc/fstab. So in theory it should not conflict with zfs file systems. 

I tried to specifically define mount points for the ZFS file systems in /etc/fstab, but I faced some kind of error. At that moment I didn't have much time to solve it, but I'll try your advice tonight once I am at home.


----------



## vand777 (Oct 28, 2011)

As I expected, it didn't work. It didn't boot. The error was as follows:

```
Mounting local file systems:mount:zdata/home : No such file or directory
```

It failed on the first zfs file system line in /etc/fstab:

```
...
zdata/home   /home   zfs   rw    0 0
...
```

If I comment out all ZFS file systems in /etc/fstab, then it boots.

```
#zpool status
no pools available
```

I can then import zdata pool:


```
#zpool import zdata
#zpool status
pool: zdata
state: ONLINE
scrub: none requested
config:

    Name             State            Read   Write    Cksum
    zdata            ONLINE           0      0        0
       ada0s2        ONLINE           0      0        0

errors: No known data errors
```

Looking at the above report I just realized that the disk device name is completely wrong. On my laptop I have a hardware RAID controller on which I have set up RAID1, and I would expect the device to be named ar0 (so it should display ar0s2). It looks like it displays one of the disks and is conflicting with the hardware RAID configuration... Will look into it closely...


----------



## vand777 (Oct 28, 2011)

I tried to add the ada1s2 device to the pool but it said that the device was already there. If I look at /dev, there is no such device as ar0 anymore; only ada0* and ada1* are left. The obvious solution here is to disable hardware RAID and set it up via FreeBSD (software RAID), but this would mean reinstalling everything from scratch.

Is there any other solution?


----------



## vand777 (Oct 28, 2011)

BTW, I found a few worrying messages in /var/log/messages:


```
...
Oct 28 20:56:42 test kernel: GEOM ada0: the secondary GPT header is not in the last LBA.
Oct 28 20:56:42 test kernel: GEOM ada0s1: geometry does not match label (255h,63s != 16h,63s) 
...
```

I have the same messages for ada1 and ada1s1. I have UFS file systems on this partition and no problems mounting them.


----------



## vand777 (Oct 28, 2011)

Decided to rebuild the system without using hardware RAID.


----------



## vermaden (Oct 29, 2011)

vand777 said:

> Decided to rebuild the system without using hardware RAID.



ZFS and hardware RAID do not play well together; plain ZFS is just a lot better. Configure the RAID controller to 'expose disks' to ZFS and ZFS will take care of the rest.


----------



## vand777 (Oct 29, 2011)

vermaden said:

> ZFS and hardware RAID are not playing well together, plain ZFS is just a lot better, configure RAID to 'expose disks' to ZFS and ZFS will take care of the rest



Thank you for the advice!


----------



## vand777 (Oct 31, 2011)

Without hardware RAID everything is working as expected. Unfortunately, FreeBSD 8.2 does not support the WiFi device on my laptop (Lenovo Thinkpad W700). I'm planning to see whether FreeBSD 9.0 supports it or not. If not, I'll have to try Ubuntu on it.

But for servers, FreeBSD forever :beer


----------



## curtisk (Dec 12, 2011)

This should be made as a PDF so it could be saved in the cloud. Thanks, nice post!


----------



## vermaden (Dec 13, 2011)

curtisk said:

> This should be made as a PDF so it could be saved in the cloud. Thanks, nice post!



Thanks, it actually is available as a PDF, in the _BSD MAGAZINE_ 2010/04 issue  

Check here: http://bsdmag.org/magazine/1049-hosting-bsd


----------



## dewarrn1 (Jan 14, 2012)

Hi Vermaden, great guide!  Any chance of an update for 9.0?  Much appreciated!


----------



## vand777 (Jan 15, 2012)

vand777 said:
			
		

> Unfortunately, FreeBSD 8.2 does not support the WiFi device on my laptop (Lenovo ThinkPad W700)  I'm planning to see whether FreeBSD 9.0 supports it or not.



I might become the happiest man in the world today. It does look like 9.0 supports the WiFi device on my laptop  Will try it properly in a few hours. Fingers crossed!


----------



## RusDyr (Jan 16, 2012)

> ```
> fixit# kldload /mnt2/boot/kernel/geom_mbr.ko
> fixit# kldload /mnt2/boot/kernel/opensolaris.ko
> fixit# kldload /mnt2/boot/kernel/zfs.ko
> ```



It can be shortened to:
[cmd="fixit#"]kldload geom_mbr[/cmd]
[cmd="fixit#"]kldload geom_mirror[/cmd]
[cmd="fixit#"]kldload opensolaris[/cmd]
[cmd="fixit#"]kldload zfs[/cmd]

Next, why not use gpart and GPT partitions instead of the (obsolete) MBR, bsdlabel and so on? What could be easier than this:

Partitioning and making the disk bootable:
[cmd="fixit#"]gpart create -s GPT ad0[/cmd]
[cmd="fixit#"]gpart add -b 40 -s 128 -t freebsd-boot ad0[/cmd]
[cmd="fixit#"]gpart add -t freebsd-swap ad0[/cmd]
[cmd="fixit#"]gpart add -t freebsd-zfs -s 512M ad0[/cmd]
[cmd="fixit#"]gpart add -t freebsd-zfs ad0[/cmd]
[cmd="fixit#"]gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ad0[/cmd]

For the other disks these steps can be eliminated with one simple command:
[cmd="fixit#"]gpart backup ad0 | gpart restore -F ad1 ad2[/cmd]

And the next steps are up to you: gmirror creation and so on...


----------



## vermaden (Jan 16, 2012)

dewarrn1 said:
			
		

> Hi Vermaden, great guide!  Any chance of an update for 9.0?  Much appreciated!



Thanks, I will look into the differences when I have some free time 



			
RusDyr said:
			
		

> It can be shortened to:
> [cmd="fixit#"]kldload geom_mbr[/cmd]
> [cmd="fixit#"]kldload geom_mirror[/cmd]
> [cmd="fixit#"]kldload opensolaris[/cmd]
> [cmd="fixit#"]kldload zfs[/cmd]


It's probably the difference between the 8.x FIXIT and the 9.x FIXIT; on 8.x it worked only that way.



> Next, why not use gpart and GPT partitions instead of the (obsolete) MBR, bsdlabel and so on? What could be easier than this:


I used MBR partitions because I also needed to boot Windows on that box, XP to be precise, which does not support booting from GPT partitions.

Besides that, I do not have any objections to GPT partitions.


----------



## RusDyr (Jan 17, 2012)

gpart can also do the MBR scheme, can't it?
[CMD="fixit#"]gpart create -s MBR ad0[/CMD]


----------



## vermaden (Jan 17, 2012)

RusDyr said:
			
		

> gpart can also do the MBR scheme, can't it?
> [CMD="fixit#"]gpart create -s MBR ad0[/CMD]



At the time of writing the HOWTO I used the method that worked well for me; I didn't bother to check all the other available alternatives.


----------



## RusDyr (Jan 18, 2012)

I see. So wouldn't you want to update the manual?


----------



## vermaden (Jan 18, 2012)

@RusDyr

I want to, dunno if I will use GPT partitions, but I currently do not have time to update the HOWTO.


----------



## vand777 (Jan 21, 2012)

dewarrn1 said:
			
		

> Any chance of an update for 9.0?  Much appreciated!



HOWTO: Install FreeBSD 9.0 RELEASE (Root on UFS + ZFS, RAID1)


----------

