# zpool / GPT problem



## dhsl (Oct 27, 2013)

Hi everybody.

Looks to me like I made a major mistake. Here's the story:

I have 5x 3TB and 5x 2TB drives. The 5x 3TB drives made up a zpool called vol0 and were in a RAIDZ1.

Then I attached the 5x 2TB disks to it as a second RAIDZ1. Since I did this from the FreeNAS GUI, it created a partition on each disk and used those partitions for the RAIDZ.

I didn't like that, so I replaced each partition one by one with the whole drive and let it resilver. No problem so far.

Now, when I reboot the machine, it gives me these errors for each of the 2TB drives:

```
GEOM: ada3: the primary GPT table is corrupt or invalid.
GEOM: ada3: using the secondary instead -- recovery strongly advised.
```

and:

```
ZFS WARNING: Unable to attach to ada3.
```

So importing doesn't work:

```
[dh@pia] /# zpool import
   pool: vol0
     id: 8459871642955047970
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:

        vol0                      UNAVAIL  insufficient replicas
          raidz1-0                ONLINE
            ada5                  ONLINE
            ada4                  ONLINE
            ada0                  ONLINE
            ada1                  ONLINE
            ada2                  ONLINE
          raidz1-1                UNAVAIL  insufficient replicas
            13270970952839860995  UNAVAIL  cannot open
            2118572373763410130   UNAVAIL  cannot open
            18185913030426875065  UNAVAIL  cannot open
            12770493309709890490  UNAVAIL  cannot open
            9355264882251945064   UNAVAIL  cannot open
```

Correct me if I'm wrong, but it looks like my machine tries to recover a GPT table that shouldn't be there, and is therefore unable to attach the whole drive (ada3, for example) to my RAIDZ1.

If this is correct, is there a way to tell it not to use the GPT table?

If there is no other way, is it at least possible to recover the data from raidz1-0? There haven't been many writes to vol0 since I added raidz1-1.

Thanks a lot in advance!


----------



## wblock@ (Oct 27, 2013)

To get rid of a GPT format, you should use gpart(8)'s destroy command.  That would remove both the primary and backup GPT tables.  Since that was not done, the backup table is still present.  Later versions of ZFS are supposed to leave some unused space at the end of the drive to allow for variations in drive size, so it did not overwrite the leftover GPT table.
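For reference, a minimal sketch of that gpart(8) command, assuming ada3 is one of the affected drives (repeat for each one). This is destructive, so back up the data first:

```shell
# Assumption: ada3 is one of the 2TB drives; repeat for each affected disk.
# Removes both the primary and the backup GPT table. Irreversible!
# -F forces destruction even if the table still lists partitions.
gpart destroy -F ada3
```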

The kern.geom.part.check_integrity sysctl(8) can be set to zero before booting, or maybe even at runtime, followed by a `true > /dev/ada3` to force a retaste. I don't know whether GEOM will then just assume the backup GPT values are good, or ignore the table entirely.
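A sketch of the runtime variant, assuming ada3 is the affected drive:

```shell
# Tell GEOM not to reject devices whose partition metadata fails
# integrity checks.
sysctl kern.geom.part.check_integrity=0
# Open the device for writing and close it again; closing a
# write-opened device forces GEOM to re-taste it.
true > /dev/ada3
```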

After backing up the drives, the backup GPT can be erased by zeroing out the last 34 blocks of each affected drive, then using that retaste trick. If ZFS recognizes the array after that, run `zpool scrub` on it to make sure nothing has been lost. Since ZFS never overwrote the area covered by the backup GPT table, the scrub should not find errors.
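A sketch of that zeroing step, assuming ada3 and 512-byte sectors (verify both with diskinfo(8) first); this writes directly to the disk, so only do it after backing up:

```shell
disk=/dev/ada3                   # assumed device name; adjust as needed
# diskinfo prints: name, sector size, media size in bytes, size in sectors.
sectors=$(diskinfo ${disk} | awk '{print $4}')
# Zero the last 34 blocks, where the backup GPT header and table live.
dd if=/dev/zero of=${disk} bs=512 seek=$((sectors - 34)) count=34
```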


----------



## dhsl (Oct 27, 2013)

Hi @wblock@,

Thanks a lot for your help! `gpart destroy` was what I needed. It turned out the GUI had also created a swap partition on each drive, which was still in use, so at first I wasn't able to destroy the GPT. After deactivating the swap and deleting all partition entries, I could destroy the GPT without problems. Zeroing the backup GPT wasn't necessary, and a scrub is currently running.

Again, thanks a lot


----------

