# Separating boot from OS



## mefizto (Mar 20, 2021)

Greetings all,

I have a server running on a Supermicro X9 series board.  All the SATA ports are used by a (data) pool, and the OS is installed on an (internal) USB flash drive.  I wanted to install some additional applications, but the flash drive was getting rather full.  Since there are unoccupied PCI slots on the board, my initial thought was to boot from an SSD _via_ a PCI/M.2 adapter.  However, a Supermicro engineer advised me that this is not supported by the X9 series.

I was thus wondering whether I could boot from the internal USB flash drive and then somehow hand over to the SSD on which I would install the OS.  I have found a Wiki page, https://wiki.freebsd.org/UEFI, so I think that the structure on the USB flash drive should look like the following:

```
# Set boot disk:
DISK="da0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t efi -l efiboot -a 4k -s 100M $DISK

# Format the efi partition with the small MS-DOS (FAT) filesystem for the UEFI bootcode.
# Copy the FreeBSD /boot/boot1.efi bootcode file into the efi filesystem.
echo "Preparing the efi partition"
newfs_msdos /dev/${DISK}p1
mount -t msdosfs /dev/${DISK}p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/boot1.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
```

Then I would install the remaining portion of the OS on the SSD:

```
# Set installation disk:
DISK="ada0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t freebsd-swap -l swap -a 1m -s 6G $DISK
gpart add -t freebsd-zfs -l zfspool $DISK

echo "Creating pool system"
# Create new ZFS root pool (/mnt, /tmp and /var are writeable)
zpool create -m none -R /mnt -f system /dev/${DISK}p2
zfs set atime=off system
zfs set checksum=fletcher4 system
zfs set compression=lz4 system

echo "Configuring zfs filesystem"
# The parent filesystem for the boot environment.  All filesystems underneath will be tied to a particular boot environment.
zfs create -o mountpoint=none system/BE
zfs create -o mountpoint=/ -o refreservation=2G system/BE/default

# Set bootfs
zpool set bootfs=system/BE/default system

# Datasets excluded from the bootenvironment:
. . .

# Temporary directory on a disk
zfs create -o mountpoint=/tmp -o exec=off -o setuid=off -o quota=6G system/tmp

# Set sticky bit to and make /tmp and /var/tmp accessible
chmod 1777 /mnt/tmp
chmod 1777 /mnt/var/tmp

# Set ftp for fetching the installation files
FTPURL="ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.1-RELEASE"

# Install the files
echo starting the fetch and install
cd /mnt/tmp
export DESTDIR=/mnt
for file in base.txz kernel.txz
do
  echo fetching ${file}
  fetch ${FTPURL}/${file}
  echo extracting ${file}
  cat ${file} | tar --unlink -xpJf - -C ${DESTDIR:-/}
  rm ${file}
done
echo "finished with fetch and install"

# Create /etc/fstab file with encrypted swap
cat << EOF > /mnt/etc/fstab
# Device            Mountpoint    FSType    Options    Dump    Pass#
/dev/${DISK}p1.eli    none         swap        sw         0    0
EOF
```

I do not know how to continue from here.  As best I understand from reading loader(8), I need to install /boot/loader and configure /boot/loader.conf.  Judging by the leading /, /boot/loader is installed under the zpool, thus:

```
# Create /boot/loader.conf
cat << EOF >> /mnt/boot/loader.conf
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
zfs_load="YES"
vfs.zfs.min_auto_ashift=12
EOF
```

However, I am unsure how to let the EFI bootcode know where to find /boot/loader.  loader(8) states in its ZFS FEATURES section:



> If  _/etc/fstab_ does  not have an entry for the root filesystem and
> _vfs.root.mountfrom_  is not set, but _currdev_  refers to a ZFS  filesystem,
> then *loader* will instruct kernel to use that filesystem as the root
> filesystem.



but I do not remember ever setting either.  The loader.conf(8) contains the following:



> _vfs.root.mountfrom_
> Specify the root partition to mount.     For example:
> 
> vfs.root.mountfrom="ufs:/dev/da0s1a"
> ...


Regretfully, I cannot understand how this helps.  I can, presumably, set the variable in /boot/loader.conf:

```
vfs.root.mountfrom="zfs:/dev/ada0p1"
```

but I still do not understand how the EFI bootcode finds it; furthermore, as noted above, there is no /etc/fstab.  Therefore, any help would be appreciated.

Kindest regards,

M


----------



## zirias@ (Mar 20, 2021)

mefizto said:


> However, I am unsure how to let the efi bootcode know where to find the /boot/loader.


No idea about your other questions, but nowadays boot1.efi is deprecated.  You just place the loader (loader.efi) directly on the EFI partition instead, using the same name, so for amd64: EFI/BOOT/BOOTX64.EFI.

I just assume this is all you need if the loader then finds a pool with a `bootfs`.
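Something like this should be enough (a sketch only; da0p1 as the ESP is an assumption, adjust it to your USB stick):

```shell
# Format the ESP and install loader.efi directly at the default UEFI path.
newfs_msdos /dev/da0p1
mount -t msdosfs /dev/da0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
```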


----------



## SirDice (Mar 20, 2021)

mefizto said:


> but I still do not understand how does the efi bootcode find it


The answer is in the uefi(8) man page. 

```
The UEFI boot process proceeds as follows:
           1.   UEFI firmware runs at power up and searches for an OS loader
                in the EFI system partition.  The path to the loader may be
                set by an EFI environment variable.  If not set, an
                architecture-specific default is used.

                      Architecture    Default Path
                      amd64           /EFI/BOOT/BOOTX64.EFI
                      arm             /EFI/BOOT/BOOTARM.EFI
                      arm64           /EFI/BOOT/BOOTAA64.EFI

                The default UEFI boot configuration for FreeBSD installs
                loader.efi in the default path.
           2.   loader.efi reads boot configuration from /boot.config or
                /boot/config.
           3.   loader.efi searches partitions of type freebsd-ufs and
                freebsd-zfs for loader.efi.  The search begins with partitions
                on the device from which loader.efi was loaded, and continues
                with other available partitions.  If both freebsd-ufs and
                freebsd-zfs partitions exist on the same device the
                freebsd-zfs partition is preferred.
           4.   loader.efi loads and boots the kernel, as described in
                loader(8).
```

Don't set `vfs.root.mountfrom`, it interferes with bectl(8) and beadm(1).  Unless you intend to boot from UFS and don't need a BE.


----------



## mefizto (Mar 20, 2021)

Hi Zirias,

thank you, I will correct my script.

Kindest regards,

M


----------



## mefizto (Mar 20, 2021)

Hi SirDice,

thank you for the reply.  I have read the portion of uefi(8) that you posted, and even after re-reading it based on your post, I still do not (fully) understand it.

The link to uefi(8) does recite how boot1.efi finds loader.efi.  But my scripts nowhere mention loader.efi, so how is loader.efi installed?

Furthermore, if I use loader.efi as Zirias proposed, the man page does not make sense.

Kindest regards,

M


----------



## zirias@ (Mar 21, 2021)

mefizto, I found the wording in this manpage confusing as well (especially given that boot1.efi was used in earlier versions).  I also doubt that step 3 in the manpage is actually correct: why should loader.efi search *for itself* instead of for a kernel it can boot?  I guess this might be a leftover from describing boot1.efi.

At least, FreeBSD 13 added manpages boot1.efi(8) and loader.efi(8) (you have to select FreeBSD 13 to find them online until official release) that *really* clarify things. A rework of uefi(8) would still be nice as well.


----------



## Phishfry (Mar 21, 2021)

mefizto said:


> Since there are unoccupied PCI slots on the board, my initial thought was to boot from SSD _via_ PCI/M.2 adapter. However, a Supermicro engineer advised me that such is not supported by the X9 series.


The M.2 to PCIe adapter cards are meant for NVMe or Wifi cards with native PCIe interfaces.
M.2-SATA modules are not PCIe interfaces and will not work on any computer with a PCIe to M.2 adapter.

I suppose a manufacturer could make such a card with an integrated SATA controller chip for M.2 SATA but I have not seen those.
All I have seen are just dummy cards with straight PCIe to M.2 lane passthru meant for NVMe.

Why not just pick up a cheap SATA controller for booting? I don't have long term faith in USB sticks.

EDIT:
I found an adapter that has a controller and supports dual M.2 SATA plus two more SATA channels:

> 2 M.2 B Key and 2 Port SATA III PCIe 3.0 x4 RAID Expansion Card (SY-PEX50123), www.sybausa.com

So they do exist but are uncommon.


----------



## mefizto (Mar 21, 2021)

Hi Zirias,

exactly.  I am not sure where SirDice found the reference, because when I followed the link he posted, it does refer to the files you mentioned.  I am still a little confused, though: since I am running 12.2-RELEASE, and do not plan to update until at least six months after 13.0 is released, is your recommendation re loader.efi valid, or should I still use boot1.efi?

Hi Phishfry,

I am not sure that I follow.  As I understand it, there are drives with an NVMe interface that will plug into a PCIe-to-NVMe adapter.  I have already ordered such a drive (Samsung 980 PRO PCIe 4.0 NVMe SSD, 250 GB) and now I am researching adapters.

Can you please clarify?  Also, since you appear to be knowledgeable, can you recommend an adapter?  The X9 board BIOS does not enable bifurcation, thus I think that a single-card adapter would be sufficient.

Kindest regards,

M


----------



## zirias@ (Mar 21, 2021)

mefizto said:


> I am still little confused, though, since I am running 12.2-Rlease, and do not plan to update till at least six months after 13.0 is released, is your recommendation re loader.efi valid or should I still use boot1.efi?


Putting loader.efi in the ESP was the correct thing to do on 12 as well; it just wasn't clearly documented (and I'm unsure whether the installer might still have used boot1.efifat, containing boot1, but I think that was only the case on 11).


----------



## mefizto (Mar 21, 2021)

Hi Zirias,

thank you for letting me know.  I have 12.2-RELEASE, updated from 12.1-RELEASE, installed _via_ the above-reproduced script using boot1.efi.  Looking at /boot, there exist boot1, boot1.efi, boot1.efifat, as well as loader.efi, all having the same date.

Kindest regards,

M


----------



## zirias@ (Mar 21, 2021)

Yes, they're all still built, but boot1.efi is deprecated and on 13, boot1.efifat is gone. I guess boot1.efi will be gone on 14.


----------



## mefizto (Mar 21, 2021)

Hi Zirias,

it is rather confusing, is it not?  It will be "interesting" to sort through that when my hardware comes.
Kindest regards,
M


----------



## zirias@ (Mar 21, 2021)

Actually, UEFI boot got extremely simple.  You *just* need the loader (EFI version), nothing else; no multiple stages any more.  I'd just say the uefi(8) manpage could be improved, because people will know the older approach with boot1.efi and the wording is IMHO somewhat misleading.


----------



## mefizto (Mar 21, 2021)

Hi Zirias,

thank you again.  Well, the hardware should be here next week, so we'll see.

Kindest regards,

M


----------



## Phishfry (Mar 22, 2021)

mefizto said:


> I am not sure that I follow. As I understand it, there are HDs, with NVMe interface, that will plug into a PCIe to NVMe adapter. I have already ordered such a hard drive (Samsung 980 PRO PCIe 4.0 NVMe SSD 250GB) and now I am researching adapters.
> 
> Can you please clarify? Also, since you appear to be knowledgeable, can you recommend an adapter? The X9 board BIOS does not enable bifurcation, thus i think that a single card adapter would be sufficient.


Exactly correct.  A single M.2 socket adapter card only.  It can be x4 PCIe.
The Supermicro dual M.2 adapter for NVMe does require bifurcation, and some X9 boards have support and some don't.
Even on the LGA2011 boards: some BIOSes were updated for the bifurcation feature and some were not.

If you are going NVMe to a PCIe slot, then any adapter is fine.  They are transparent to the hardware, passed straight through.
I thought you were talking SATA M.2, not NVMe.

I will really be curious to see how a PCIe 4.0 NVMe works in a PCIe 3.0 slot.  Where is the bottleneck, PCIe or CPU?  They are 2x as fast.
Are you using an X9 board, LGA2011 I assume?  With a V2 Xeon CPU for PCIe 3.0?


----------



## mefizto (Mar 22, 2021)

Hi Phishfry,

thank you for the reply.

According to the manual, my motherboard has PCIe 2.0 x4, which I understand limits the throughput to 2000 MB/s.  Still, it beats the USB flash drive I used before.  The socket is LGA 1155.
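That 2000 MB/s figure checks out from the PCIe 2.0 line rate, if I did the arithmetic right (5 GT/s per lane with 8b/10b encoding):

```shell
#!/bin/sh
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (8 data bits per 10 bits on the wire)
lanes=4
line_rate_mbits=5000                         # 5 GT/s, roughly 5000 Mbit/s raw per lane
data_mbits=$(( line_rate_mbits * 8 / 10 ))   # 4000 Mbit/s of payload per lane
per_lane_mbytes=$(( data_mbits / 8 ))        # 500 MB/s per lane
echo $(( per_lane_mbytes * lanes ))          # prints 2000
```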

Kindest regards,

M


----------



## 6502 (Mar 22, 2021)

Network boot?


----------



## mefizto (Mar 22, 2021)

Hi 6502,

interesting idea.  The question is, what would be the motivation?  In the solution I am considering, both the USB drive with the boot partition and the SSD running the OS are on the same machine.

Kindest regards,

M


----------



## blanchet (Mar 22, 2021)

Another solution would be a USB-to-SATA adapter plugged into a SATA SSD that you attach wherever you can in the enclosure.  It is easy and cheap.


----------



## mefizto (Mar 22, 2021)

Hi blanchet,

I had considered that.  It is not much cheaper, as I have to buy an SSD anyway, and the difference in price between a USB-to-SATA adapter and an NVMe-to-PCIe adapter is not significant.  On the other hand, the setup would, indeed, be easier.

But the speed difference swayed me to the latter solution.
Kindest regards,
M


----------



## Phishfry (Mar 22, 2021)

The PCIe 4.0 drive gives you future potential.
Are you sure you can't eke PCIe 3.0 out of her?  Give me the board model number.
You see, many of the X9 boards shipped with Sandy Bridge CPU support.
But with a firmware upgrade you could run an Ivy Bridge CPU on many X9 boards.
Ivy Bridge brings PCIe 3.0 with it, so it is worth the boost.
(Especially for NVMe: we are talking the difference between 700 MB/sec on PCIe 2.0 and 2000+ MB/sec on PCIe 3.0.)
What CPU are you running now?


----------



## mefizto (Mar 22, 2021)

Hi Phishfry,

yes, I understand that there are differences among the X9 boards.  Apparently some of them can even boot from an NVMe drive in UEFI mode.  But, according to the Supermicro engineer, not mine.

Nevertheless, thank you for the generous offer.  The board is X9SCM-F.  The processor is a Xeon E3-1230, 3.2 GHz, 8 MB.

The PCIe slots are described in the manual as "PCI-E 2.0 x4 on x8 slot".  I take it that a potential change of the processor cannot change this, so I cannot run two SSDs per adapter, correct?

Regarding the adapter, can you recommend a specific model?

Kindest regards,

M


----------



## Phishfry (Mar 22, 2021)

Check out the last line of the product page:
*** BIOS rev. 2.0 or above is needed to support new E3-1200 v2 CPUs, which support PCI-E 3.0 & DDR3 1600.

So with an Ivy Bridge CPU you could have PCIe 3.0.
Like, for instance, an E3-1230 V2.
Any E3-12xx V2 CPU is an 1155 Ivy Bridge Xeon.

It could give you a performance boost.  Not necessary, but it is a good upgrade path.  Ivy Bridge Xeons are cheap.


----------



## Phishfry (Mar 22, 2021)

mefizto said:


> so I cannot run two SSDs per adapter, correct?


Correct (no bifurcation on X9 1155 boards), and if you look at the product page, notice only 2 of the 4 slots are PCIe 3.0.
You could put a single NVMe in each PCIe 3.0 slot.

I don't have any recommendations on slot adapters.  I have like 4 different styles.
Some were x16 adapters and I milled off the extra fingers to fit an x4 slot.
None cost more than 10 bucks from China.


----------



## mefizto (Mar 22, 2021)

Hi Phishfry,

thank you for the news.  There is absolutely nothing about it in the printed manual I was referring to, either in the BIOS section or the PCIe setup section.
I looked at the product page, and it also states:


> 2 (x8) PCI-E 3.0 in x8 slots***


Again, nothing of that sort in the manual.  Does it mean that I could buy an adapter holding two NVMe SSDs, each using x4 of the x8 lanes, and they would work?  Or does the BIOS need to support bifurcation?

Kindest regards,

M


----------



## Phishfry (Mar 22, 2021)

mefizto said:


> holding two NVMe


No, that won't work at all.  The slots are only x4 electrical in an x8 physical slot.

The 2 slots with PCIe 3.0 will only work at PCIe 3.0 with an Ivy Bridge 1155 CPU.
These CPUs have a suffix of V2 to denote version 2 of the 1155 model.


----------



## mefizto (Mar 22, 2021)

Hi Phishfry,

thank you.  I will go hunt for an adapter and V2 Xeon.

Kindest regards,

M


----------



## Phishfry (Mar 22, 2021)

You might need to upgrade the BIOS _before_ it will take Ivy Bridge.


----------



## mefizto (Mar 22, 2021)

Hi Phishfry,

yes, I know, I just downloaded the latest BIOS from Supermicro together with the release notes and upgrade instructions.

Kindest regards,

M


----------



## Phishfry (Mar 22, 2021)

SuperMicro is really good about numbering the PCIe expansion slots.
The problem is how do you know which slots are the PCIe 3.0 slots.

Worst comes to worst, you might need to look at `pciconf` to see what mode the PCI device is running at.  It shows modes.
Then shuffle cards around to suit the slots.  For example, a video card that only does PCIe 2.0 natively would be a waste in a PCIe 3.0 slot.
But that topic is probably better for another thread.


----------



## Phishfry (Mar 22, 2021)

If you check out the diagram in the PDF manual, page 13 has the slots labeled.  The two slots closest to the CPU are PCIe 3.0:
slots 6 and 7.


----------



## mefizto (Mar 23, 2021)

Hi Phishfry,

that is actually an excellent point.  Currently, I am using the machine as a headless server due to the limitations of the USB.  But, if I can make it work, I might consider buying a video card and promoting it to a workstation.

And yes, in my hardcopy of the manual, the slots are clearly marked on p. 1-5.

Kindest regards,

M


----------



## mefizto (Apr 1, 2021)

Greeting all,

thanks to several people in the other thread: https://forums.freebsd.org/threads/cannot-install-bios-and-or-efi-bootcode.79592/, I successfully installed both the UEFI and the legacy BIOS bootloader on a USB drive /dev/da0.

The other script, installing the OS on an NVMe drive /dev/nvd0:

```
#!/bin/sh
# FreeBSD installation script 03/30/2021, no encryption, Beadm compatible
set -Cefu

# Set installation disk:
DISK="/dev/nvd0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t freebsd-swap -l swap -a4k -s 4G $DISK
gpart add -t freebsd-zfs -l zfspool -a4k $DISK

# Create new ZFS root pool, mount it, and set properties
#(/mnt, /tmp and /var are writeable)
echo "Creating pool system"
zpool create -f -o altroot=/mnt -m none system "/dev/gpt/zfspool"
zfs set atime=off system
zfs set checksum=fletcher4 system
zfs set compression=lz4 system

echo "Configuring zfs filesystem"
# The parent filesystem for the boot environment.
# All filesystems underneath will be tied to a particular boot environment.
zfs create -o mountpoint=none system/BE
zfs create -o mountpoint=/ -o refreservation=2G system/BE/default

# Datasets excluded from the bootenvironment:
.
.
.

# Set sticky bit to and make /var/tmp accessible
chmod 1777 /mnt/var/tmp

# Configure boot environment bootfs
zpool set bootfs=system/BE/default system

# Configure NIC
.
.
.
# Set ftp for fetching the installation files
FTPURL="ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE"

# Install the files
echo "Starting the fetch and install"
cd /mnt
export DESTDIR=/mnt
for file in base.txz kernel.txz
do
  echo "Fetching ${file}"
  /usr/bin/fetch ${FTPURL}/${file}
  echo "Extracting ${file}"
  cat ${file} | tar --unlink -xpJf - -C ${DESTDIR:-/}
  rm ${file}
done
echo "finished with fetch and install"

# Create /etc/fstab file with encrypted swap
echo "Creating /etc/fstab"
cat << EOF > /mnt/etc/fstab
# Device            Mountpoint    FSType    Options    Dump    Pass#
/dev/gpt/swap.eli    none         swap        sw         0    0
EOF

# Create /boot/loader.conf
echo "Creating /boot/loader.conf"
cat << EOF >> /mnt/boot/loader.conf
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
zfs_load="YES"
vfs.zfs.min_auto_ashift=12
EOF

# Define variables
# Hostname:
HOSTNAME=". . ."

# Primary IP address:
IP=". . ."

# the netmask for this server
NETMASK=". . ."

# the default gateway for this server i.e. defaultrouter
GATEWAY=". . ."

# Create basic /etc/rc.conf
cat << EOF >> /mnt/etc/rc.conf
hostname="${HOSTNAME}"
ifconfig_em0="inet ${IP} netmask ${NETMASK}"
defaultrouter="${GATEWAY}"
sshd_enable="YES"
dumpdev="AUTO"
zfs_enable="YES"
EOF

cd
umount -f /mnt
zfs set mountpoint=/system system

echo "Rebooting system"
reboot
```
executes until the `umount -f /mnt`, reporting: 

```
cannot mount '/mnt/system': failed to create mountpoint
property may be set but unable to remount system
```

The pool system is created, but it is mounted on the altroot, _i.e.,_ /mnt.  First, I tried `zpool export system`, which works; `zpool import` shows the pool system, but `zpool import system` returns no response, and the subsequent `zpool list` yields:

```
internal error: failed to initialize ZFS library
```
Second, I tried to just reboot.  The machine clearly boots from the USB drive /dev/da0, which reports:

```
gptzfsboot: No ZFS pool located, can't boot
```

There are two issues here.  (1) Despite my motherboard having a plurality of boot options (UEFI, legacy BIOS, and combinations prioritizing one over the other: UEFI: Built-in EFI Shell, UEFI: 1100, 1100, CD/DVD, HD, USB), regardless of what I select, _i.e.,_ even the option UEFI first, the machine stubbornly tries to boot from legacy BIOS.  (2) The reboot somehow loses the pool system.

The primary issue is to resolve (2); I can revisit (1) later, or just live with booting from legacy BIOS.
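For completeness, the ending I intend to try in place of the forced unmount (only a guess; I have not verified it avoids the mountpoint error):

```shell
# Unmount the datasets through ZFS and export the pool cleanly instead of
# forcing umount; the loader reads the pool labels directly from disk at boot.
cd /
zfs umount -a
zpool export system
```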

Again, any help would be appreciated.

Kindest regards,

M


----------



## Phishfry (Apr 1, 2021)

mefizto said:


> or just live with booting from legacy BIOS.


You should be able to boot via EFI.
Check in the BIOS screen and look for "Network Stack". All the options around there must be set to UEFI.


----------



## Phishfry (Apr 1, 2021)

If you look at your manual, it is on page 4-10:
PCI ROM Priority.
You want EFI Compatibility ROM on all the choices in that section.  Jam all the controls to EFI.
You also need to set the boot option filter to UEFI on the Boot tab in the BIOS (page 4-18 in the manual).


----------



## Snurg (Apr 1, 2021)

Phishfry said:


> Some were x16 adapters and I milled off the extra fingers to fit x4 slot.


Do you know any good instructions for how to do this with cheap John Doe equipment?
For adapting a video card to x4 I once tried sawing off part of the fingers.
But there was an SMD thingy near the slot fingers, which popped off and flew into nowhere.
The card didn't work anymore.


----------



## mefizto (Apr 1, 2021)

Greetings all,

I lied: the pool system is not lost upon reboot.  `zpool import` shows it, but an attempt to import it with `zpool import system` just returns a prompt #, and again any ZFS-based command returns the error:

```
internal error: failed to initialize ZFS library
```

The zfs.ko is already loaded into the kernel.  I was wondering whether I am not trying to mount over an existing mountpoint.  `zpool import -N system` seems to import the pool, because `zpool list` now shows the pool system not mounted on the altroot.  However, despite zpool(8) reciting:


> *-N* Import the pool without mounting any file systems.


`zfs list` shows all the proper mountpoints.  Upon reboot, the machine attempts to boot from the pool storage, which (1) is not mounted and (2) is data storage only.  This indicates that the pool system still cannot be found, so I tried to force it: `zpool set bootfs=system/BE/default system`, but to no avail.
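For reference, the sequence I plan to try next from the rescue shell (dataset names per my script above; I have not verified that this avoids the library error):

```shell
# Import without mounting, inspect, then mount explicitly under an altroot.
zpool import -o altroot=/mnt -N system
zpool get bootfs system       # should report system/BE/default
zfs mount system/BE/default   # mount the root dataset first
zfs mount -a                  # then the remaining datasets
```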

Time to call it a night.

Kindest regards,

M


----------



## mefizto (Apr 1, 2021)

Hi Phishfry,

sorry, I forgot to answer in my frustrating attempt to make the boot work.

Part of the problem is that the new BIOS has new options that are not in the manual for BIOS 1.0; additionally, some described options are not there.  Furthermore, they are not very well explained.  For example, what the heck is the (PCI) OptionROM, and since it affects the boot process, why is it in a section different from boot?  Regardless, I fumbled through it and was able to make UEFI work; nevertheless, the boot still fails, since the bootloader cannot find the pool system, which does not appear to be mounted on startup.

I do not know what else to try, so perhaps it is time to throw in the towel and abandon the idea.

Kindest regards,

M


----------



## Snurg (Apr 1, 2021)

You have to mount that manually, or script it, because your system pool is probably `canmount=off`.
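Something like this would show it (just a sketch; the pool name `system` is taken from your script):

```shell
# Check whether canmount/mountpoint settings explain why nothing mounts
zfs get -r canmount,mountpoint system
```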


----------



## mefizto (Apr 1, 2021)

Hi snurg,

thank you for the reply.  However, the problem is that I cannot import it, because of the noted error:

```
internal error: failed to initialize ZFS library
```

I can import it with the -N option, which, of course, does not mount any of the datasets.

I also tried

```
# zpool import -N pool
# zfs mount -va
```

However, instead of the expected response

```
service mountd reload
```

I get back a prompt # and the ZFS library error.

I think the basic problem is that the script fails after `umount -f /mnt` with:

```
Cannot unmount /mnt: Device busy
```

Kindest regards,

M


----------



## mefizto (Apr 2, 2021)

Greetings all.

after several more attempts, I decided to use bsdinstall.  I tried the following:

- boot the installer, set up keyboard etc.
- configure the network
- invoke a shell for partitioning
- run the script, just partitioning the USB and the PCI drive
- exit the shell and continue with the installer

The result is exactly the same as with my script; the boot fails with no system zpool being found; `zfs list` reports no system zpool found, although `zpool import` clearly shows the system zpool.

So if even bsdinstall fails, there is no hope for me to figure it out.

Kindest regards,

M


----------



## Phishfry (Apr 2, 2021)

M.
Your thread title says separating boot from OS. That seems to be appropriate.
I can't help you with ZFS.  I use UFS and gmirror on SATA DOM to boot my two 24-bay ZFS fileservers.
I can rebuild my fileservers' UFS disk in minutes.  I seriously doubt I could lose both drives in a gmirror.
I also have a rescue USB stick for my fileservers, just as an option.  I got burned on a ZFS version update on FreeNAS many years ago, and it made me approach ZFS on FreeBSD much more cautiously.
The problem is that ZFS is very complex, and you can really get yourself into trouble if you don't understand what you are doing.




Snurg said:


> Do you know any good instruction how to do this with cheap John Doe equipment?


I have bought x1 and x8 video cards for my servers.  I have never butchered a video card by cutting.
I have cut the back out of some PCIe slots with a Dremel, though.
The M.2-to-PCIe adapters I cut were ridiculous: x16 lanes for a device that can only use x4 lanes.
I should have noticed when I bought them.
I bought like 4 batches from China via ebay when I first started messing with NVMe.
I was looking for low profile adapters for 1U chassis.


Not real proud of cutting any PCB, but sometimes I get in project mode and just do it.


----------



## Snurg (Apr 2, 2021)

mefizto 
I'd then check the USB installer image and how it is configured.
There seems to be something missing, maybe things like 'zfs_enable="YES"' or the like.
That would be completely reasonable, as a USB installer image does not really need a full config in the first place.

Phishfry 
So you used the Dremel to carefully mill off the unwanted part of the slot fingers?
Maybe I should have done it that way.  I used a hacksaw: high vibration, flexing... not good.


----------



## Phishfry (Apr 2, 2021)

Snurg
A hacksaw would work.  Go long with the cut and trim with a file or nail file.
Use a piece of duct tape as a cutting guide.
You really need a good work holder and perhaps a magnifying glass (and definitely a steady hand).
Maybe you could use the side of the jaw of a pair of vice grips as a cutting guide for the hacksaw blade.

I used a milling machine and some rubber in the vise to keep the PCB and SMDs from getting damaged.
It did raise hell being so thin, and the Bridgeport only goes to 1500 rpm.  You need higher rpms for PCB work.


----------



## mefizto (Apr 2, 2021)

Hi Phishfry,

thank you again for the reply.



> Your thread title says separating boot from OS. That seems to be appropriate.


Yes, I corrected the title to more clearly describe the goal.  I did a lot of searching, and it is surprising how often the title does not reflect the subject matter discussed, just like mine before.



> I can't help you with ZFS. I use UFS and gmirror on SATA-DOM to boot my two 24 bay ZFS fileservers.


Do I understand it correctly that you have the OS on a UFS file system and the data on ZFS file system?

I cannot run a SATA DOM since, as noted, all my SATA ports are taken.  Hence my idea with the NVMe drive.  My concerns with running the OS from the USB drive are that (1) the frequent writing will wear it out and (2) it is rather slow, since I wanted to move additional processing onto the machine.

Since I, or even the installer, cannot make it work, I had another idea: installing only a minimal set of filesystems physically on the USB drive, wherein the remaining filesystems will be on the NVMe drive and mounted at the appropriate mount-points on the USB drive.

The problem is, I do not know what the minimal filesystem is that must physically reside on the USB drive so that I can still attempt to repair the system in case the NVMe drive fails.  So I opened another thread, and hopefully someone will help.

Hi Snurg,

thank you for the suggestion, but the USB does not appear to be the problem, because, as noted, the messages indicate that the loader is looking for a pool that is not mounted.

Kindest regards,

M


----------



## Phishfry (Apr 2, 2021)

mefizto said:


> Do I understand it correctly that you have the OS on a UFS file system and the data on ZFS file system?


Exactly.  I have two Chenbro 24-bay chassis.  I use SATA DOM for the OS with UFS, and the 24 bays for the zpool.
I use NVMe for L2ARC and SLOG.
gmirror across the two SATA DOMs.




mefizto said:


> running the OS from the USB drive


These are all valid concerns.  What about a USB DOM in gmirror?  Innodisk makes real-deal DOMs.
USB 3 DOMs are also available, if you have USB 3 onboard.

One thing I have found is that many motherboards stuff the connectors very close together, so multiple USB DOMs may not be feasible.  DOMs have some bulk, and you must plan accordingly.
What is nice are short extension cables to break the connector out.  I did that with the SATA DOM.

I do think booting from NVMe is a good idea. Maybe put UFS there and the pool on your SATA.

The problem with DOMs is the power connectors.  They are not standard.  SuperMicro has a SATA DOM power socket on many motherboards, but the power connector is different from the Innodisk one.  So it helps to be handy with a soldering iron.

Some motherboards have a 'power over SATA' connector meant for SATA DOM (not supported by all SATA-DOM).


----------



## mefizto (Apr 2, 2021)

Hi Phishfry,

yes, on my board the USB type A connector is very close to the SATA connectors.  The manual is silent on whether it is USB 2 or 3.  However, I found that there are USB DOMs that plug into the 9-pin header.

I would still prefer the USB-to-NVMe solution, especially if I could find an SLC USB drive, and I already bought the NVMe hardware.  However, the USB DOM is a potential solution.  Where do you buy yours?

Why do you run a mirror?  I remember I asked about it a while ago, and the people that I consider knowledgeable argued against it.  I do not remember why; I will try to see if I still have the notes.

Kindest regards,

M


----------



## Phishfry (Apr 2, 2021)

mefizto said:


> Why do you run a mirror


For redundancy.  You can lose a whole drive and you have a hot replacement.

In one backup box I use 2 drives in a gmirror.  I have three drives that I use, and I keep one on a shelf.
I rotate a drive into the gmirror maybe weekly and shelve the backup.
gmirror handles it all seamlessly.  It notes old data and refreshes the disk.
My off-line backup.  Maybe a week-old backup, but better than nothing.


----------



## Phishfry (Apr 3, 2021)

mefizto said:


> if I could find an SLC USB


I have some of these and they were not very quick (25 MB/sec):

> USB EDC 2SE SLC, www.memorydepot.com
> innodisk, www.innodisk.com

USB 3.0 DOMs have superior throughput:

> USB Disk on Module EDC 3SE SLC, www.memorydepot.com


----------



## Phishfry (May 23, 2021)

So, did you upgrade to Ivy Bridge successfully?


----------



## mefizto (Jun 12, 2021)

Hi Phishfry,

sorry for the belated response; I overlooked your inquiry.  No, I have not upgraded.  I am too stupid to make the hand-over between the USB and the NVMe work, so I gave up on the upgrade, since I cannot use the NVMe as an OS drive.

Kindest regards,

M


----------



## Phishfry (Jun 12, 2021)

I have run across many unruly EFI BIOSes.  Don't beat yourself up.
Support for NVMe on the earliest platforms can be spotty.


----------

