# FreeBSD 10.1 update on rooted ZFS fails to mount with all block copies unavailable



## Robert Candey (Mar 18, 2016)

I updated a server from RootOnZFS FreeBSD 10.1-RELEASE-p10 to 10.1-RELEASE-p31, and it failed to reboot, stopping at:


```
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool tank
gptzfsboot: failed to mount default pool

FreeBSD/x86 boot
(register dump omitted)
BTX halted
```

The server is a Dell R320 with four 4 TB SAS disks on a PERC H310 (LSI SAS 2008) controller, arranged in a RAIDZ2 array.

Commands were:

```
freebsd-update fetch
freebsd-update install
pkg update
pkg upgrade
shutdown -r now
```

I do not see anything obvious in the release notes.  This has been a problem for others, so after booting from the memstick image I tried various pieces of advice, in various combinations:


```
zpool import -R /mnt -o cachefile=/tmp/zpool.cache -f zroot
# reset boot data:
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 mfisyspd0
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 mfisyspd1
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 mfisyspd2
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 mfisyspd3
# reset cache
mv /mnt/boot/zfs/zpool.cache /mnt/boot/zfs/zpool.cache.old
zpool reguid zroot
cp -p /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
# various combinations of:
zpool set bootfs=zroot/root zroot
zpool set bootfs=zroot/ROOT/default zroot
zpool set bootfs=zroot/ zroot
```


I even tried this odd instruction to recopy the boot directory:

```
mv /mnt/boot /mnt/boot.orig
mkdir /mnt/boot
cp -Rp /mnt/boot.orig/* /mnt/boot
```

I do not understand a few references to linking the boot directory elsewhere; how would that work?
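(As far as I can piece together, the "linked boot directory" layout seems to be the one used for encrypted-root setups: a small separate pool low on the disk holds the real `/boot`, and the root filesystem carries only a symlink to it.  Roughly like the following sketch, where the pool name `bootpool` and the spare partition `ada0p2` are assumptions on my part, not from any real system:)

```shell
# sketch only -- "bootpool" and ada0p2 are assumed names
zpool create bootpool ada0p2          # small pool near the start of the disk
mkdir -p /bootpool/boot
cp -Rp /boot/* /bootpool/boot/        # copy the real boot files over
mv /boot /boot.orig
ln -s bootpool/boot /boot             # root's /boot is now just a symlink
```

The point, if I understand it, is that the boot blocks only ever need to read the small pool, which stays within whatever range the BIOS can address.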

It is possible the BIOS can no longer reach the boot code if it sits above 2 TB (for 512-byte sectors), although my disks use 4k sectors, so the limit should be 16 TB.  If so, is there an alternative, such as a way to convert a root-on-ZFS system from the BIOS loader to UEFI?  Is this wise?
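(For the record, the pieces for a BIOS-to-UEFI switch on 10.x appear to be an EFI system partition plus the `boot1.efifat` image that ships in `/boot`.  The sketch below assumes the firmware supports UEFI, that the boot order is switched afterwards, and that a free partition index and some free space exist; the index 4 is made up:)

```shell
# hypothetical sketch: add an EFI system partition and write the FreeBSD
# EFI boot image into it (partition index 4 is an assumption)
gpart add -t efi -s 800k -i 4 mfisyspd0
dd if=/boot/boot1.efifat of=/dev/mfisyspd0p4
```

Whether the PERC-attached disks are even visible to the UEFI firmware is another question I cannot answer.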

My loader.conf contains:

```
zfs_load="YES"
kern.geom.label.gptid="0"
mfip_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default" # added this after problem started
```

Any help and ideas are much appreciated.  Thanx


----------



## vejnovic (Mar 18, 2016)

I have the same trouble after security upgrade to FreeBSD 10.2-RELEASE-p14:

```
ZFS: i/o error - all block copies unavailable
ZFS: can't read object set for dataset u
ZFS: can't open root filesystem
gptzfsboot: failed to mount default pool zroot
FreeBSD/x86 boot
Default: zroot:
boot:
```

It looks like the dataset is lost or corrupt?


----------



## Robert Candey (Mar 18, 2016)

vejnovic said:


> I have the same trouble after security upgrade to FreeBSD 10.2-RELEASE-p14:
> 
> ```
> ZFS: i/o error - all block copies unavailable
> ...



I can mount the pool and file systems without error from the LiveCD part of the FreeBSD memstick image.  Did you find a fix for your trouble?

Does anyone know if there is a way to activate more debugging in the boot loader that might help find the problem?
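(One small thing I found while digging: if I read gptzfsboot(8) right, the `boot:` prompt itself has a little introspection built in.  Typing `status` there is supposed to report the disks and pools the boot code discovered during initialization, which might at least show whether it sees `zroot` at all.  Treat this as an assumption until someone confirms it:)

```
FreeBSD/x86 boot
boot: status
```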

Otherwise, I am pondering whether to reinstall the OS, or to install a separate OS on an SSD and mount the pool from there.  Any suggestions?  Thanx


----------



## vejnovic (Mar 21, 2016)

Robert Candey said:


> Did you find a fix for your trouble?


No. I have done a fresh install.


----------



## Robert Candey (Mar 21, 2016)

vejnovic said:


> No. I have done a fresh install.


Thanx.  I am worried this is a problem that will happen again, perhaps from running into some BIOS limit, such as the loader file being moved beyond the 48-bit LBA range the BIOS can address.
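(The arithmetic behind that worry, for what it's worth: with 512-byte sectors a 32-bit LBA tops out at 2 TiB, and the same 32-bit LBA with 4k sectors at 16 TiB, which matches the limits I mentioned above.  A genuinely 48-bit-capable BIOS would reach far beyond any of these disks, so the suspicious 2 TB boundary would point at the older 32-bit case.  A quick sanity check:)

```shell
# 32-bit LBAs with 512-byte sectors: 2^32 * 512 bytes = 2 TiB
echo $(( (1 << 32) * 512 / (1 << 40) ))    # TiB
# the same 32-bit LBAs with 4k sectors: 16 TiB
echo $(( (1 << 32) * 4096 / (1 << 40) ))   # TiB
```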


----------



## bvansomeren (Mar 29, 2016)

Hi all,

I ran into a similar problem this weekend upgrading two identical SuperMicro servers from FreeBSD 10.2-RELEASE-p9 to 10.2-RELEASE-p14.
Both machines were set up with fairly default settings: I used the installer from a USB stick to install a ZFS mirror pair with the bootloader on it.  After the install I added another mirror vdev and an L2ARC cache device:


```
# zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Mar 27 14:39:37 2016
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       ada0p3  ONLINE       0     0     0
       ada1p3  ONLINE       0     0     0
     mirror-1  ONLINE       0     0     0
       ada2    ONLINE       0     0     0
       ada3    ONLINE       0     0     0
    cache
     ada4      ONLINE       0     0     0

errors: No known data errors
```

This is a list of my partitions

```
# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         143        - free -  (72K)

=>        34  3907029101  ada1  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         143        - free -  (72K)

=>       63  125045361  ada5  MBR  (60G)
         63       1985        - free -  (993K)
       2048    1024000     1  linux-data  [active]  (500M)
    1026048  124018688     2  linux-lvm  (59G)
  125044736        688        - free -  (344K)

=>       63  125045361  diskid/DISK-AF340756062400148235  MBR  (60G)
         63       1985                                    - free -  (993K)
       2048    1024000                                 1  linux-data  [active]  (500M)
    1026048  124018688                                 2  linux-lvm  (59G)
  125044736        688                                    - free -  (344K)
```

NOTE: The last disk is a DOM module containing a defunct Linux install that I've replaced with FreeBSD ;-)

After the update (freebsd-update fetch and freebsd-update install) I'm greeted with the boot screen shown in the attached picture.  (The discoloration is caused by the console screen.)

Booting the old kernel works fine.
I've tried to find a solution, which led me to this post.
Please let me know what other information might be relevant to share; I didn't want to drown the post in detail.
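Since booting the old kernel still works, one thing I plan to try (suggested elsewhere for this symptom, so treat it as a guess, not a known fix): freebsd-update replaces /boot/gptzfsboot on the filesystem, but as far as I understand it never rewrites the copy embedded in the freebsd-boot partitions, so re-installing the bootcode on both mirror disks should at least bring the stages back in sync:

```shell
# re-write the stage bootcode on both halves of the boot mirror
# (partition index 1 is freebsd-boot in the gpart output above)
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
```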

Thanks for any help you might be able to give me


----------

