# Moving zpool to another computer



## rawthey (Sep 23, 2012)

After successfully upgrading my 9.0-RELEASE ZFS system from a single 250GB drive to a mirror with two 500GB disks, I now have a spare 250GB drive which I intend to put into another system. I upgraded by setting autoexpand to "on", adding each new drive with *zpool attach sys /dev/gpt/sys0 /dev/gpt/sys1* followed by *zpool attach sys /dev/gpt/sys1 /dev/gpt/sys2*, and then removing the original drive with *zpool detach sys /dev/gpt/sys0*.
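For reference, the full upgrade sequence was roughly the following (pool and device names as above; the status checks between steps are implied rather than quoted):

```shell
# Let the pool grow automatically once all mirror members are larger
zpool set autoexpand=on sys

# Attach the first 500GB disk, turning the single drive into a mirror
zpool attach sys /dev/gpt/sys0 /dev/gpt/sys1
# Wait for the resilver to finish before continuing:
zpool status sys

# Attach the second 500GB disk as a third mirror member
zpool attach sys /dev/gpt/sys1 /dev/gpt/sys2
zpool status sys

# Finally remove the original 250GB drive from the mirror
zpool detach sys /dev/gpt/sys0
```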

Although I'll be completely re-installing FreeBSD on the drive in its new location, I thought it would be interesting to experiment and see whether this would be a suitable procedure for copying a ZFS system to a different computer. However, when I installed the old drive in the other PC and tried to boot, it failed with the following:

```
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS
ZFS: unexpected object set type 0
ZFS: unexpected object set type 0

FreeBSD/x86 boot
Default: sys:/boot/kernel/kernel
boot:
ZFS: unexpected object set type 0

FreeBSD/x86 boot
Default: sys:/boot/kernel/kernel
boot:
```
Then I tried booting the 9.0 installation DVD to see what I could discover about the ZFS pool on /dev/gpt/sys0

```
# zpool import -D

  pool: zpool
    id: 5821244805494361020
 state: FAULTED (DESTROYED)
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
	The pool was destroyed, but can be imported using the '-Df' flags.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

	zpool                  FAULTED  corrupted data
	  5933748989556188019  UNAVAIL  corrupted data
     
# zpool import -Df zpool

cannot import 'zpool': one or more devices is currently unavailable

# glabel status

                                      Name  Status  Components
                             ntfs/PRESARIO     N/A  ada0s1
                       msdosfs/PRESARIO_RP     N/A  ada0s2
                             gpt/bootcode0     N/A  ada1p1
gptid/68a7d8c0-9b49-11e1-9c82-6cf0499e8897     N/A  ada1p1
                                 gpt/swap0     N/A  ada1p2
gptid/68bb1608-9b49-11e1-9c82-6cf0499e8897     N/A  ada1p2
                                  gpt/sys0     N/A  ada1p3
gptid/68cf7984-9b49-11e1-9c82-6cf0499e8897     N/A  ada1p3
                   iso9660/FREEBSD_INSTALL     N/A  cd0
# gpart show ada1

=>       34  488397101  ada1  GPT  (232G)
         34        256     1  freebsd-boot  (128k)
        290    8388608     2  freebsd-swap  (4.0G)
    8388898  480008237     3  freebsd-zfs  (228G)
```
Perhaps it's not possible to re-use a drive this way after running *zpool detach* on it, though I'd be interested to know if it can be done in case a similar need arises in the future. Perhaps I should have used *zpool offline* instead of *zpool detach*, and then run the detach command after physically removing the drive?
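The alternative sequence I have in mind would be something like this (untested, same pool and device names as above):

```shell
# Take the disk offline while it is still a mirror member,
# instead of detaching it while the system is running
zpool offline sys /dev/gpt/sys0

# ...shut down and physically remove the drive here...

# Then detach the now-absent device from the pool
zpool detach sys /dev/gpt/sys0
```

Whether the offlined disk would still contain an importable, bootable copy of the pool is exactly what I'm unsure about.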

I'm also puzzled that after installing the drive in the other PC the pool now appears to be named *zpool*, whereas it was originally named *sys*.

I'd welcome any suggestions on how the ZFS system on this drive might be recovered. Although I'll eventually be deleting the contents and re-installing, I'm inclined to make the most of the opportunity to experiment on a system where it won't matter if the contents get trashed - unless they're effectively trashed already.


----------



## usdmatt (Sep 24, 2012)

It looks like detach is actually marking the disk as DESTROYED. It does appear, however, that it hasn't actually removed the pool information, just marked it unusable, so it may be possible to manually reset this destroyed flag, but that's way beyond me. It's entirely possible, though, that it could have removed or overwritten something that can't easily be rebuilt; it's obviously not designed to be reusable.

If you want to remove a single disk from a mirror and reuse it, you want the zpool split command. When used with a root pool, I'm not sure whether you'll need to boot off a live CD and recreate the /boot/zfs/zpool.cache, or whether it'll just work.

http://docs.oracle.com/cd/E19253-01/819-5461/gjooc/index.html
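For your setup it would look roughly like this (pool and device names taken from your post; the new pool name "newsys" is just an example):

```shell
# Split the mirror: the named device is removed from "sys" and
# becomes a new, independent single-disk pool called "newsys"
zpool split sys newsys /dev/gpt/sys0

# The new pool is left exported by default; import it to check it
zpool import newsys
```

Unlike detach, split is meant to leave the removed disk with a complete, importable copy of the pool.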


----------

