# ZFS problems on reboot



## bnorton916 (Nov 15, 2013)

I have a fiber channel array hooked to a SunFire (Sparc) box. FreeBSD sees the drives fine (as long as the drives come up after FreeBSD boots, otherwise FreeBSD crashes). I create a zpool.

`zpool create fcarray da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13`

This works fine. I create a couple of datasets and copy in some dummy data. Everything still seems to be fine. Reboot and

`norton-mars:zpool status`

```
pool: fcarray
 state: UNAVAIL
status: One or more devices could not be used because the label is missing 
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from
        a backup source.
   see: http://illumos.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        fcarray                   UNAVAIL      0     0     0
          raidz3-0                UNAVAIL      0     0     0
            11911658763540063539  UNAVAIL      0     0     0  was /dev/da2d
            6150438915414765690   UNAVAIL      0     0     0  was /dev/da1d
            da3                   ONLINE       0     0     0
            17964023625509063400  UNAVAIL      0     0     0  was /dev/da12d
            17246879199635567821  UNAVAIL      0     0     0  was /dev/da14d
etc
```

So, I understand that my zpool is corrupted, maybe because it can't match up the disks somehow? My ZFS knowledge is getting fuzzy, but I don't understand why the pool doesn't just come back up after a reboot. Did I miss a step? Ideas?

Bill


----------



## bnorton916 (Nov 15, 2013)

Looks like I needed to label my disks.

```
glabel label disk1 /dev/da1
glabel label disk2 /dev/da2
...
glabel label disk8 /dev/da8
```

`zpool create -f fcarray raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8`

The pool created fine, and on reboot everything was there. Well, except my data; now I have to go look for that. :\
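For what it's worth, the per-disk labeling can be scripted instead of typed out one line at a time. A minimal sketch (it assumes disks da1 through da8 as above, and only echoes each command as a dry run; drop the `echo` to actually write the labels):

```shell
# Dry-run sketch: print the glabel command for each of da1..da8.
# Remove the leading "echo" to actually label the disks.
for n in 1 2 3 4 5 6 7 8; do
  echo glabel label "disk${n}" "/dev/da${n}"
done
```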


----------



## kpa (Nov 15, 2013)

That's a bit unexpected because ZFS does a probe on each available disk for ZFS labels and it should be able to unambiguously decide which pool a disk belongs to. You should check each disk with `zdb -l <device>` for old labels that might interfere with the automatic probing of labels.
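For example, something like this would walk the disks and print the `zdb -l` invocation for each one (assuming devices da1 through da14, as in your status output; it only echoes the commands, so drop the `echo` to actually dump the labels):

```shell
# Dry-run sketch: print a zdb label check for each assumed disk da1..da14.
# Drop the "echo" to run the checks for real; stale pool names or GUIDs in
# the output would point at leftover labels confusing the automatic import.
for n in $(seq 1 14); do
  echo zdb -l "/dev/da${n}"
done
```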


----------



## bnorton916 (Nov 16, 2013)

kpa said:
> That's a bit unexpected because ZFS does a probe on each available disk for ZFS labels and it should be able to unambiguously decide which pool a disk belongs to. You should check each disk with `zdb -l <device>` for old labels that might interfere with the automatic probing of labels.



Ok, I will look into that.

Do I need to erase old labels (how?) or should ZFS automatically rewrite the labels when I destroy/create a pool?

Bill


----------



## bnorton916 (Nov 16, 2013)

Ok, when I run `zdb -l /dev/da1` I get a long list of output.

It says there are four labels. They all look quite similar.

Not sure what exactly it means, though.

Bill


----------

