# 13.1 RC-3 Problem



## rrsum (Apr 22, 2022)

I posted this earlier on the freebsd-stable list, but got no interest, so I'm posting here:  

I'm running a 13.1-RC3 server that has a ZFS problem that didn't exist under 13.0-RELEASE.

First, here is the configuration of the server.  It has the operating system on an NVD drive with all the partitions UFS.  It has 8 UFS-formatted drives in a SAS configuration.  All of these show up when rebooting.  I also have 2 drives in a ZFS mirror where the home directories are located and where the data in a MySQL database is located.  None of the ZFS datasets mount when rebooting.  After rebooting, if I do a `zpool import` all of the ZFS datasets mount.

Looking at dmesg after rebooting, it shows the following lines after the nvd0 drive shows up:

```
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
pid 48 (zpool), jid 0, uid 0: exited on signal 6
pid 49 (zpool), jid 0, uid 0: exited on signal 6
```

Further on in dmesg, the other drives show up: the 8 SAS drives and the 2 ZFS drives.  It appears ZFS is trying to configure itself before it can know about its drives?

Do I have something misconfigured in 13.1?  It has worked flawlessly in 13.0 for almost a year.

Rick


----------



## richardtoohey2 (Apr 22, 2022)

Probably not related, but this is from Twitter (Colin Percival):

_Filed under "weird bugs which only seem to show up when we're about to do a release": FreeBSD's encrypted disk support was broken in 13.1 RCs because a kernel module was 128k+8 bytes long and ended with 8 bytes of zeroes. Many thanks to Kyle Evans for debugging and fixing this!_


----------



## grahamperrin@ (Apr 22, 2022)

Reproducible with 13.1-RC4? If so, please make a bug report.


----------



## rrsum (Apr 22, 2022)

> Reproducible with 13.1-RC4? If so, please make a bug report.


Yes, it is the same in 13.1-RC4.  I should also include the fact that each of the SAS drives and each of the SATA drives for ZFS has a gpart label, and fstab uses those labels.  However, since nvd0 is the only such "drive" in the box, it does not have a label.
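For anyone following along, labeled fstab(5) entries of that kind typically look something like this (a sketch only; the device, label, and mountpoint names are hypothetical, not taken from the poster's system):

```shell
# /etc/fstab — hypothetical example using gpart GPT labels
# Device              Mountpoint  FStype  Options  Dump  Pass
/dev/nvd0p2           /           ufs     rw       1     1
/dev/gpt/sasdisk0     /data0      ufs     rw       2     2
/dev/gpt/sasdisk1     /data1      ufs     rw       2     2
```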


----------



## cracauer@ (Apr 22, 2022)

rrsum said:


> ZFS filesystem version: 5
> ZFS storage pool version: features support (5000)
> pid 48 (zpool), jid 0, uid 0: exited on signal 6
> pid 49 (zpool), jid 0, uid 0: exited on signal 6



Do you get the same SIGABRT when running the zpool command after startup?

If so, you can run it in gdb to get a backtrace.
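For reference, one way to do that (a sketch; assumes the devel/gdb package is installed, and uses "tank" as a placeholder pool name):

```shell
# Install gdb if needed:
#   pkg install gdb
# Run the failing command under the debugger to catch the SIGABRT:
gdb --args zpool import tank
# At the (gdb) prompt:
#   run     # reproduce the crash
#   bt      # print the backtrace after the abort
```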


----------



## rrsum (Apr 22, 2022)

cracauer@ said:


> Do you get the same SIGABRT when running the zpool command after startup?


No, imports cleanly.


----------



## cracauer@ (Apr 22, 2022)

rrsum said:


> No, imports cleanly.



Strange. Do you know how to boot into single user mode, mount your /etc/fstab filesystems read-write and then do the `zpool import` in gdb?
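A rough sketch of that sequence (the pool name "tank" is a placeholder):

```shell
# Boot into single-user mode from the loader menu, then:
mount -u /                      # remount the root filesystem read-write
mount -a -t ufs                 # mount the UFS filesystems listed in /etc/fstab
gdb --args zpool import tank    # reproduce the import under the debugger
```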


----------



## grahamperrin@ (Apr 23, 2022)

rrsum said:


> … 2 drives in a ZFS mirror …





rrsum said:


> … each of the sata drives for zfs has a gpart label and fstab uses those labels. …



Try *not* using fstab(5) for mounts of ZFS file systems.

<https://serverfault.com/a/943079/91969> (Allan Jude), and so on. As far as I know, it's proper (or commonly preferred) to allow rc(8) to perform the mounts. In particular:

*freebsd-src/zfs at main · freebsd/freebsd-src* — github.com

(Recent <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262468#c3> helped me to think about order.)
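For what it's worth, the conventional setup (a sketch; "tank" and the dataset names are placeholders) keeps ZFS mounts out of fstab(5) entirely and lets the rc(8) scripts handle them:

```shell
# /etc/rc.conf — let the zfs rc.d script mount datasets at boot
sysrc zfs_enable="YES"

# Datasets should carry native ZFS mountpoints instead of fstab entries
zfs set mountpoint=/home tank/home
zfs get -r mountpoint tank    # verify what will be mounted at boot
```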


----------



## grahamperrin@ (Apr 23, 2022)

Cross-reference: FreeBSD bug 263473 – ZFS drives fail to mount datasets when rebooting - 13.1-RC4


----------



## CyberCr33p (May 7, 2022)

After upgrading to 13.1-RC6, I noticed the "pid 12218 (zpool), jid 0, uid 0: exited on signal 6" message in my logs too, but the ZFS datasets mount correctly.


----------



## grahamperrin@ (Jun 5, 2022)

From <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263473#c5>:



> … Also (no response yet): <https://lists.freebsd.org/archives/freebsd-stable/2022-April/000719.html>



Sorry, that seems to be the wrong URL for whatever was intended. Comments in Bugzilla can't be corrected after the fact.


----------



## getopt (Jun 5, 2022)

Our hyperactive "cross-reference" fetishist may want to notice that he missed referencing the fact that 13.1-RELEASE has been out since May 16, 2022.


----------



## Alain De Vos (Jun 5, 2022)

I experienced RC problems. But there is no reason to use a Release Candidate anymore, is there?


----------



## getopt (Jun 5, 2022)

Alain De Vos said:


> I experienced RC problems


... then it makes just a little more sense to get the final release now that it is out, especially if you still want to report PRs.

Release candidates are obsolete by definition when the final release is available.

It surprises me that this even needs saying.



*Release Information* — www.freebsd.org


----------



## SirDice (Jun 8, 2022)

Alain De Vos said:


> But there is no reason to use a Release Candidate anymore, is there?


All release candidate versions (and all beta versions too) expired the very second the -RELEASE was made.



getopt said:


> Release candidates are obsolete by definition when the final release is available.


Exactly.


----------



## grahamperrin@ (Jun 9, 2022)

Alain De Vos said:


> … no reason to use a Release Candidate anymore, is there?



Please note the 22nd May comment from Rick Summerhill.

rrsum, would you like to edit the title here? Thanks.


----------

