# Downgrading 13.1 to 12.4 for fun?



## decuser (Dec 6, 2022)

I'm not sure it's fun, but I didn't think it'd be painful. However, when I went to import my zpool, I got:


```
zpool import
   pool: zfs
     id: 1387220501496143749
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
    cannot be accessed in read-write mode because it uses the following
    feature(s) not supported on this system:
    org.zfsonlinux:userobj_accounting (User/Group object accounting.)
    org.zfsonlinux:project_quota (space/object accounting based on project ID.)
    com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
action: The pool cannot be imported in read-write mode. Import the pool with
    "-o readonly=on", access the pool on a system that supports the
    required feature(s), or recreate the pool from backup.
 config:

    zfs           UNAVAIL  unsupported feature(s)
      mirror-0    ONLINE
        ada1      ONLINE
        ada2      ONLINE
      indirect-1  ONLINE
root@loki:~ # zpool import zfs
This pool uses the following feature(s) not supported by this system:
    org.zfsonlinux:userobj_accounting (User/Group object accounting.)
    org.zfsonlinux:project_quota (space/object accounting based on project ID.)
    com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
All unsupported features are only required for writing to the pool.
The pool can be imported using '-o readonly=on'.
cannot import 'zfs': unsupported version or feature
```

I'm guessing here, but it's probably a version mismatch between the ZFS in 13 and 12. I recall having this kind of issue once before when migrating between Linux and FreeBSD.

The solution, if I recall properly, was basically to mount the pool read-only and send its data to a new pool. Does this sound correct? It's not a humongous dataset.

What's the easiest way to restore it that's safe? 

Currently I have the zroot pool with 900 gigs free and the unmounted mirrored 1000G pool (with about 16 gigs of data on it).

Thanks!


----------



## SirDice (Dec 6, 2022)

Downgrades are never supported.



decuser said:


> but it is probably a version mismatch between the zfs in 13 and 12.


13.0 switched to OpenZFS. 12.x has the original imported ZFS code.


----------






## mer (Dec 6, 2022)

13.x is using OpenZFS; 12.x is using the "ZFS from FreeBSD/illumos" code. If the pool was created under 13.x, it's as if you did a `zpool upgrade` to an incompatible version.


----------



## covacat (Dec 6, 2022)

You may run the openzfs kmod port, or send the mirror's data to zroot, recreate the pool, and receive it back.
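Roughly, that second option could be sketched like this (the pool, dataset, and snapshot names here are assumed from the thread, not verified, and a suitable snapshot must already exist, since a read-only pool can't take new ones):

```shell
# Sketch only -- verify every step, and the backup file, before destroying anything.
zpool import -o readonly=on zfs             # bring the 13.1 pool in read-only

# stream an existing snapshot of each dataset to a file on zroot
# (@pre-downgrade is a hypothetical snapshot name)
zfs send zfs/fossils@pre-downgrade > /root/fossils.zsend

zpool export zfs                            # done reading from the old pool
zpool labelclear -f ada1                    # wipe the old labels...
zpool labelclear -f ada2
zpool create zfs mirror ada1 ada2           # ...and recreate the mirror under 12.4

zfs recv -F zfs/fossils < /root/fossils.zsend
```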


----------



## Alain De Vos (Dec 6, 2022)

By accident I upgraded my zpool to newer features, but I fixed it by upgrading only the /boot directory so I could mount it.
My /boot/boot* is from 14, but my kernel is 13.


----------



## decuser (Dec 6, 2022)

Sure, I know downgrades aren't 'supported'. I kind of implied that in the question, but maybe I was unintentionally misleading. Here's how I got here:

1. I had a 13.1 system with two pools - zfsroot (stripe) and zfs (mirrored pool)
2. I exported zfs pool
3. I installed a fresh 12.4 system on the ada0 device (becoming a new zfsroot)

Now I want to create a new mirrored zfs pool under 12.4 from the data on the current 13.1 pool. The pool mounts read-only with `zpool import -o readonly=on zfs`, and the data seems fine.

I don't see any real reason why I can't do it, knowing full well that 13.1 is more modern, cooler, maybe even better.


----------



## Alain De Vos (Dec 6, 2022)

You can boot with a 13-kernel, then import the 12-zpool & 13-zpool and copy files over?
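Something like this, I mean (a sketch only; the 12.4 root pool name `zroot12` and the target path are made up):

```shell
# Sketch, run from a booted 13.x kernel (pool/path names assumed):
zpool import zroot12             # the 12.4 root pool; 13 can read 12 pools fine
zpool import zfs                 # the 13.1 mirror imports read-write under 13
cp -a /zfs/. /zroot12/backup/    # then just copy the files across
```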


----------



## SirDice (Dec 6, 2022)

Alain De Vos said:


> You can boot with a 13-kernel, then import the 12-zpool & 13-zpool and copy files over?


That's probably a good idea. 



decuser said:


> if I recall properly was to basically mount the pool read only and export it to a new pool.


You will need to create a snapshot in order to zfs-send(8) it, and as far as I know you cannot create a snapshot on a read-only pool.
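Without a snapshot, the read-only import still allows a plain file-level copy, e.g. (the scratch path here is made up):

```shell
zpool import -o readonly=on zfs
rsync -a /zfs/ /root/zfs-copy/    # no snapshot needed for a file-level copy
```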


----------



## decuser (Dec 6, 2022)

Ah, I found my notes. Once I complete the restore, I'll post the solution. Straightforward, but who knows whether it'll be useful to anyone else.


----------



## decuser (Dec 6, 2022)

Easy as they come - I did a bunch of stuff to ensure that the restore was byte identical, but it boiled down to:

Mount the pool readonly:

```
zpool import -o readonly=on zfs
```

Use rsync to backup the files (just for comparison really):

```
rsync -vaz /zfs/{fossils,scm} .
diff -r fossils /zfs/fossils
diff -r scm /zfs/scm
```

Use zfs to send the data to a file (the real work):

```
zfs send zfs/fossils > t/fossils-backup
zfs send zfs/scm > t/scm-backup
```

Use dd to create some empty files to hold the temporarily restored mounts:

```
dd if=/dev/zero of=/temp-scm bs=1M count=20000
dd if=/dev/zero of=/temp-fossils bs=1M count=5000
```

Use zpool to create the temporary pools:

```
zpool create t-fossils /temp-fossils
zpool create t-scm /temp-scm
```

Use zfs to recv the backups into the temp pools and compare the files:

```
zfs recv -F t-fossils < t/fossils-backup
zfs recv -F t-scm < t/scm-backup
diff -r /t-fossils /zfs/fossils
diff -r /t-scm /zfs/scm
```

Remove the existing 13.1 pool and its cruft:

```
zpool destroy zfs
zpool labelclear -f ada1
zpool labelclear -f ada2
gpart destroy -F ada1
```

Create the new mirror and its mounts:

```
zpool create zfs mirror ada1 ada2
zfs create -o compression=off zfs/fossils
zfs create -o compression=off zfs/scm
```

Create a snapshot on the first temp pool, restore it into the new mount, and clean up after:

```
zfs snapshot t-fossils@snap1
zfs send t-fossils@snap1 | zfs recv -Fd zfs/fossils
zfs list -t snapshot
zfs destroy t-fossils@snap1
zfs destroy zfs/fossils@snap1
```

Compare it:

```
diff -r /zfs/fossils scm-bak/fossils
```

Create a snapshot on the second temp pool, restore it into the new mount, and clean up after:

```
zfs snapshot t-scm@snap1
zfs send t-scm@snap1 | zfs recv -Fd zfs/scm
zfs list -t snapshot
zfs destroy t-scm@snap1
zfs destroy zfs/scm@snap1
```

Compare it:

```
diff -r /zfs/scm scm-bak/scm
```

Clean up the remaining cruft and see what love hath wrought:

```
zpool destroy t-fossils
zpool destroy t-scm
rm -fr /root/scm-bak
zpool status
```

So, bottom line: no problems at all, just a bit of patience and a lot of free disk space.


----------



## decuser (Dec 6, 2022)

I heart FreeBSD (and ZFS) - where else can you swap out major versions (completely unsupported, of course) so easily and not screw stuff up royally?


```
uname -a
FreeBSD loki.sentech.home 12.4-RELEASE FreeBSD 12.4-RELEASE r372781 GENERIC  amd64

zpool status
  pool: zfs
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    zfs         ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        ada1    ONLINE       0     0     0
        ada2    ONLINE       0     0     0
```


----------



## Geezer (Dec 7, 2022)

decuser said:


> Downgrading 13.1 to 12.4 *for fun*?



The use of the phrase "_for fun_" here does not appear to be ANSI-standard terminology.


----------



## Vull (Dec 7, 2022)

Geezer said:


> The use of the phrase "_for fun_" here does not appear to be ANSI-standard terminology.


This might require the use of the patented FreeBSD operating system joystick.


----------



## Crivens (Dec 7, 2022)

decuser said:


> gpart destroy -F ada1


Did you forget ada2 by any chance?
Sorry, this nitpicking comes with the job :/

Otherwise, good job on this. I would not have thought of this way. And another fossil user. I like it.


----------



## decuser (Dec 8, 2022)

Crivens said:


> Did you forget ada2 by any chance?
> Sorry, this nitpicking comes with the job :/
> 
> Otherwise, good job on this. I would not have thought of this way. And another fossil user. I like it.



Well, that was interesting. I actually did do ada2 as well, but it didn't have a partition table to destroy, so I left it out of the notes... maybe something to do with the way zfs mirrors?

I heart fossil, made the switch from git to fossil for the bulk of my repos about a year ago. Other than the learning curve, it was smooth and the repos are much, much easier to manage and host... at least for my uses. Somebody hosting a repo for bunches of users might feel differently.


----------

