# Install system to its own zpool or into the same as storage for NAS?



## cbunn (Dec 13, 2021)

I'm planning an upgrade of my NAS with some new hard drives. Way back when I initially created this NAS, root-on-ZFS wasn't stable, so I created a mirrored pair of UFS-formatted USB flash drives to hold the boot and root filesystems. After setting up the zpool, which is a single raidz1 vdev, I also put /usr and /var onto the zpool.

I'm upgrading to new hard drives and changing the topology to a stripe of three mirrored vdevs. I was initially planning to simply swap the hard drives out and keep the system installation unchanged. But now I'm thinking it might be better to do a clean install. Through the FreeBSD installer, I know that I can install the boot and root (and swap) filesystems onto the same pool that will be used for storage. But is that a good idea? For one thing, in testing with a VM, it looks like the drives are all partitioned first, and then one partition from each drive is added to the vdevs. I've previously read that it's best practice to give ZFS entire hard drives rather than partitions. I'm not sure whether this contradicts that.

I'm also wondering if it's just generally good practice. Perhaps it might be better to add on a small SSD (or pair of SSDs in a mirror) for the system. I would still use ZFS in this case, to take advantage of boot environments, among other features.

EDIT: I meant that the current system uses USB flash drives for boot and root.


----------



## Alain De Vos (Dec 13, 2021)

I think most people have a separate zpool on an SSD or NVMe drive for the OS, and a separate, larger zpool for their data.
As for partitioning, GPT is flexible. In theory you don't need a partition table, but having one carries no performance penalty.
The only bad idea, I think, is mixing different disk sizes and hardware in the same zpool.


----------



## covacat (Dec 13, 2021)

Using NVMe or even SSD drives only to boot from seems like a waste.


----------



## Alain De Vos (Dec 13, 2021)

Of course; put your home directory on the SSD as well.


----------



## cbunn (Dec 13, 2021)

Alain De Vos said:


> I think most people have a separate zpool on an SSD or NVMe drive for the OS, and a separate, larger zpool for their data.


That's what I'm thinking.


Alain De Vos said:


> As for partitioning, GPT is flexible. In theory you don't need a partition table, but having one carries no performance penalty.


I suppose as long as everything is kept aligned to 4K sector boundaries, there shouldn't be any performance loss.
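For instance, alignment can be enforced explicitly with gpart's `-a` flag; a sketch for a hypothetical blank disk (the device name da0 and the sizes are illustrative assumptions, not from this thread):

```sh
# Create a GPT table and aligned partitions on a blank disk.
# da0 and the sizes are placeholders; adjust for your hardware.
gpart create -s gpt da0
gpart add -t freebsd-boot -a 4k -s 512k da0        # boot code
gpart add -t freebsd-swap -a 1m -s 4g -l swap0 da0
gpart add -t freebsd-zfs  -a 1m -l zfs0 da0        # rest of disk

# Make sure new pools use 4K sectors (ashift=12) even on drives
# that report 512-byte sectors.
sysctl vfs.zfs.min_auto_ashift=12
```

A 1M alignment is a multiple of 4K, so it satisfies 4K-sector drives while keeping the arithmetic simple.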


Alain De Vos said:


> The only bad idea, I think, is mixing different disk sizes and hardware in the same zpool.


True. All the drives in the data zpool would be identical. If I were to use SSDs, they would be in their own pool.



covacat said:


> Using NVMe or even SSD drives only to boot from seems like a waste.


Perhaps. And at the time when I originally built this, that's a big reason I used a couple of USB flash drives. But nowadays, 120 GB SATA SSDs can be had for about US$20. So it's not much of a waste anymore.



Alain De Vos said:


> Of course; put your home directory on the SSD as well.


Why do you say that? Generally, on a NAS, my home directory wouldn't contain much.


----------



## Alain De Vos (Dec 13, 2021)

Since your home directory is small, you can put it on the fastest media available.
PS: I take an incremental ZFS snapshot of it every 15 minutes.
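A 15-minute schedule like Alain's could be a simple crontab entry; a sketch assuming a dataset named zroot/usr/home (the dataset name is a placeholder):

```sh
# /etc/crontab entry: timestamped snapshot of the home dataset
# every 15 minutes. zroot/usr/home is a placeholder dataset name.
# Note that % must be escaped as \% inside a crontab.
*/15 * * * * root /sbin/zfs snapshot zroot/usr/home@auto-$(date +\%Y\%m\%d-\%H\%M)
```

Ports such as sysutils/zfstools can automate creating and pruning snapshots like these.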


----------



## cbunn (Dec 13, 2021)

Alain De Vos said:


> Since your home directory is small, you can put it on the fastest media available.
> PS: I take an incremental ZFS snapshot of it every 15 minutes.


True, though in the case of this NAS, my home directory won't contain anything critical. Any scripts or dotfiles will be in a git repo. The valuable home directories are those on my desktop and laptop, which are replicated with Syncthing to each other and the data pool of the NAS.


----------



## mer (Dec 13, 2021)

Keep in mind that ZFS boot environments expect a specific layout for datasets, so be very careful about modifying it.  Things like /usr, /usr/local, and /var need to be in the right place.
Not on a NAS, but I've got a system where I separated user data (home directory and general data storage) from the system-level stuff.  It's let me easily migrate to new versions of the OS and upgrade hardware, so here's what I would do:

1. Export the NAS pool, shut down, power off.
2. Plug in one of the new OS devices; unplug the NAS devices, just for safety.
3. Install the system, get it configured, and reboot a couple of times to make sure it works.
4. Plug in the second new OS device and gpart it the same way as the first.
5. Attach the second device's ZFS partition to the first one to mirror it.
6. Make sure to update the boot partitions/boot blocks on the new second device.

At this point you have a mirrored boot pair with ZFS on root.

Now plug in the NAS devices and import the pool, and you should be good.  If the NAS pool has datasets with mountpoints that overlap the system's (/usr, /var, etc.), import the pool with an alternate root.
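Steps 4 through 6 above might look roughly like this on a BIOS/GPT system (ada0/ada1 and the partition index are assumptions; a UEFI system would update the EFI system partition instead):

```sh
# Clone the first OS disk's partition table onto the second
# (ada0 = existing OS disk, ada1 = new second disk).
gpart backup ada0 | gpart restore -F ada1

# Attach the second disk's ZFS partition to the pool, turning
# the single-disk zroot into a two-way mirror (it will resilver).
zpool attach zroot ada0p3 ada1p3

# Write boot blocks to the second disk so either disk can boot
# (BIOS boot shown; partition index 1 assumes a freebsd-boot
# partition in that slot).
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
```

For the alternate-root import mentioned above, something like `zpool import -R /mnt tank` (tank being a placeholder pool name) keeps the NAS pool's mountpoints from shadowing the live system's.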


----------



## cbunn (Dec 13, 2021)

mer said:


> Keep in mind that ZFS boot environments expect a specific layout for datasets, so be very careful about modifying it.  Things like /usr, /usr/local, and /var need to be in the right place.


I'm not sure what you mean. Could you explain? I have another server with only a single drive and the guided ZFS installation gave me a layout of zroot/ROOT/default for the boot environment along with zroot/usr, zroot/var etc. Is that what you mean?

It sounds like you are advocating that I use a separate pool with a mirror vdev for the system. I'm not sure why you set up the two OS devices separately, though. Why not let the installer handle it by specifying a mirror vdev during installation?


----------



## Alain De Vos (Dec 13, 2021)

The FreeBSD installer configures the /usr and /var datasets with canmount=off.
This is so that their contents live on the root dataset and remain part of the boot environment, while subdirectories of them (a database under /var/db, for example) can be split into separate child datasets that are excluded from it.
Tools like beadm rely on this specific layout.
[Note: / is usually a dataset like myzpool/ROOT/default, with canmount=noauto and mountpoint=/.]

- beadm (www.freebsd.org)
- BootEnvironments - FreeBSD Wiki: high-level overview of ZFS Boot Environment setup (wiki.freebsd.org)


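The layout described above can be inspected on any root-on-ZFS install; assuming the default pool name zroot, the relevant properties are visible with:

```sh
# Show the boot-environment-related properties of the default
# bsdinstall layout (zroot is the default pool name).
zfs get -o name,property,value canmount,mountpoint \
    zroot/ROOT/default zroot/usr zroot/var
```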
----------



## mer (Dec 13, 2021)

cbunn said:


> I'm not sure what you mean. Could you explain? I have another server with only a single drive and the guided ZFS installation gave me a layout of zroot/ROOT/default for the boot environment along with zroot/usr, zroot/var etc. Is that what you mean?


Using the guided ZFS installation does the correct thing.  I was cautioning against trying to roll your own because it is possible to break boot environments.  Your earlier message talked about you setting up a gmirror for the boot and then moving /usr and /var onto the NAS pool (back in post #1).  Doing something similar for a ZFS install would break boot environments.  If you use the guided installer, you are fine.


cbunn said:


> It sounds like you are advocating that I use a separate pool with a mirror vdev for the system. I'm not sure why you setup the two OS devices separately, though. Why not let the installer handle it by specifying a mirror vdev during installation?


Simple personal preference/opinion.  I take things slow and careful during system upgrades, and I'd rather get a new install onto a single OS device, make sure it's configured correctly and rebooting correctly, before doing anything "fancy" like mirroring the boot and OS devices.

Sometimes doing things manually gives you a better feel for the effort put into a tool that simplifies them; I also tend to do things manually to assure myself that I understand what is being done.

But if the installer lets you create a mirrored boot device, creates all the partitions, and installs all the related boot bits correctly, then sure, feel free to use it.  Just double-check the boot bits; there may be cases where it does not install them on the second device.  That last part is going by memory; I recall a thread about just that situation, which may have involved the UEFI boot config.

Alain De Vos points out the beadm tool, which is in a port/package.  The base system includes a tool called bectl that provides the same functionality.  There are a couple of minor differences (mostly around destroying boot environments, plus minor differences in visual output), but its syntax is compatible with beadm, and you have it available right from install.
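A quick sketch of the usual bectl workflow around an upgrade (the boot environment name is just an illustrative label):

```sh
# List existing boot environments
bectl list

# Save the current environment before upgrading
bectl create pre-upgrade

# ...upgrade the running system as usual...

# If the upgrade misbehaves, point the loader back at the
# saved environment and reboot into it.
bectl activate pre-upgrade
```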

As per my usual disclaimer, anything I've written is my opinion based on what I've done, how I prefer to do things, feel free to disregard.


----------



## cbunn (Dec 13, 2021)

mer said:


> I was cautioning against trying to roll your own because it is possible to break boot environments.  Your earlier message talked about you setting up a gmirror for the boot and then moving /usr and /var onto the NAS pool (back in post #1).  Doing something similar for a ZFS install would break boot environments.  If you use the guided installer, you are fine.


Ah, fair enough. I hadn't intended to redo that setup in the new install, but I also didn't say that. The choice is either everything on one big pool of mirrored vdevs on hard drives or putting the system on a pair of small SSDs and then separately creating the pool of mirrored vdevs for data storage. I am strongly leaning toward the latter.


mer said:


> But if the installer lets you create a mirrored boot device, creates all the partitions, and installs all the related boot bits correctly, then sure, feel free to use it.  Just double-check the boot bits; there may be cases where it does not install them on the second device.  That last part is going by memory; I recall a thread about just that situation, which may have involved the UEFI boot config.


I think I'll stick with the installer for now, as I've had good experience with it, including a recent test in a virtual machine. I'll keep your advice about the boot bits in mind. Thanks!


----------



## mer (Dec 13, 2021)

cbunn said:


> The choice is either everything on one big pool of mirrored vdevs on hard drives or putting the system on a pair of small SSDs and then separately creating the pool of mirrored vdevs for data storage. I am strongly leaning toward the latter.


In my opinion, the latter is the correct solution, especially with the future in mind.


----------

