# zpool replace in a bad state



## pennello (Jul 3, 2013)

Hello all,

I've been scanning through the threads here trying to find some solution for the predicament I'm currently in, but haven't had any luck. It started out as a normal `zpool replace` after a device went bad, but now it's in a terribly bad state. The initial resilver went somewhat fine; at its end, the old device stayed UNAVAIL, the new device was ONLINE, but it still said "replacing", and that wouldn't go away. I tried a number of combinations of removing the disk, zeroing the first so many megabytes and last so many megabytes of the disk to reset everything, exporting and importing the pool, etc.

So here's the current state:

```
pool: pool
 state: DEGRADED
 scrub: none requested
config:

	NAME                        STATE     READ WRITE CKSUM
	pool                        DEGRADED     0     0     0
	  raidz1                    ONLINE       0     0     0
	    da6.nop                 ONLINE       0     0     0
	    da9.nop                 ONLINE       0     0     0
	    da5.nop                 ONLINE       0     0     0
	    da1.nop                 ONLINE       0     0     0
	  raidz1                    ONLINE       0     0     0
	    da2.nop                 ONLINE       0     0     0
	    da4.nop                 ONLINE       0     0     0
	    da0.nop                 ONLINE       0     0     0
	    da7.nop                 ONLINE       0     0     0
	  raidz1                    ONLINE       0     0     0
	    da3.nop                 ONLINE       0     0     0
	    da10.nop                ONLINE       0     0     0
	    da16.nop                ONLINE       0     0     0
	    da12.nop                ONLINE       0     0     0
	  raidz1                    ONLINE       0     0     0
	    da8.nop                 ONLINE       0     0     0
	    da19.nop                ONLINE       0     0     0
	    da13.nop                ONLINE       0     0     0
	    da17.nop                ONLINE       0     0     0
	  raidz1                    DEGRADED     0     0     0
	    replacing               UNAVAIL      0    62     0  insufficient replicas
	      1898809308392836239   UNAVAIL      0    64     0  was /dev/da15.nop/old
	      10338318586415748494  FAULTED      0    64     0  was /dev/da15.nop
	    da11.nop                ONLINE       0     0     0
	    da18.nop                ONLINE       0     0     0
	    da14.nop                ONLINE       0     0     0
	cache
	  ad8                       ONLINE       0     0     0

errors: No known data errors
```

What on earth can I do to get my pool healthy again?


----------



## mav@ (Jul 3, 2013)

Looking at the non-zero write error counts and the "FAULTED" state, I would guess that your replacement disk is also bad, or that something is wrong with the controller port or cable. I would try swapping out the suspect part and restarting the replace.


----------



## pennello (Jul 6, 2013)

mav@ said:

> Looking at the non-zero write error counts and the "FAULTED" state, I would guess that your replacement disk is also bad, or that something is wrong with the controller port or cable. I would try swapping out the suspect part and restarting the replace.



I took your advice and replaced the disk with yet another new physical disk.  For some reason, my _RAID_ controller decided to renumber the drive assignments, which isn't too big a deal, except that _ZFS_ has now decided to identify da15 instead of da15.nop (the 4k-sector transparent provider) as a disk to use; I'll deal with that after this mess is cleaned up.  In any case, the new device for the disk to replace is da8.  However, I'm now running into the error "cannot replace a replacing device".  I tried detaching the stuck devices, but that didn't work either: "no valid replicas".


```
# zpool status
  pool: pool
 state: DEGRADED
 scrub: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        pool                        DEGRADED     0     0     0
          raidz1                    ONLINE       0     0     0
            da10.nop                ONLINE       0     0     0
            da9.nop                 ONLINE       0     0     0
            da5.nop                 ONLINE       0     0     0
            da1.nop                 ONLINE       0     0     0
          raidz1                    ONLINE       0     0     0
            da2.nop                 ONLINE       0     0     0
            da4.nop                 ONLINE       0     0     0
            da0.nop                 ONLINE       0     0     0
            da7.nop                 ONLINE       0     0     0
          raidz1                    ONLINE       0     0     0
            da3.nop                 ONLINE       0     0     0
            da6.nop                 ONLINE       0     0     0
            da16.nop                ONLINE       0     0     0
            da12.nop                ONLINE       0     0     0
          raidz1                    ONLINE       0     0     0
            da15                    ONLINE       0     0     0
            da19.nop                ONLINE       0     0     0
            da13.nop                ONLINE       0     0     0
            da17.nop                ONLINE       0     0     0
          raidz1                    DEGRADED     0     0     0
            replacing               UNAVAIL      0    51     0  insufficient replicas
              1898809308392836239   UNAVAIL      0    56     0  was /dev/da15.nop/old
              10338318586415748494  UNAVAIL      0    56     0  was /dev/da15.nop
            da11.nop                ONLINE       0     0     0
            da18.nop                ONLINE       0     0     0
            da14.nop                ONLINE       0     0     0
        cache
          ad8                       ONLINE       0     0     0

errors: No known data errors
# zpool replace pool 1898809308392836239 da8.nop
cannot replace 1898809308392836239 with da8.nop: cannot replace a replacing device
# zpool replace pool 10338318586415748494 da8.nop
cannot replace 10338318586415748494 with da8.nop: cannot replace a replacing device
# zpool detach pool 1898809308392836239
cannot detach 1898809308392836239: no valid replicas
# zpool detach pool 10338318586415748494
cannot detach 10338318586415748494: no valid replicas
```

What else can I try?


----------



## kpa (Jul 6, 2013)

Do you really need the gnop(8) devices to have a working pool? The standard method that I've seen is to use gnop(8) only at the creation of the pool, export the pool, destroy the gnop(8) devices, and then import the pool without the gnop(8) layer.

You shouldn't need to use gnop(8) when replacing disks either, because the ashift property is set in stone for the vdev at creation and cannot change.
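For what it's worth, the procedure described above would be along these lines (a sketch with illustrative device names; every daN.nop in the pool would need to be destroyed, and the pool name here is taken from this thread):

```shell
# Stop using the pool so the gnop providers can be torn down
zpool export pool

# Destroy the transparent 4k-sector providers (one per disk in the pool)
gnop destroy da0.nop da1.nop da2.nop

# Re-import: ZFS finds its labels on the raw daN devices, and the
# vdevs keep their ashift since that was fixed at creation time
zpool import pool
```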


----------



## pennello (Jul 6, 2013)

kpa said:

> Do you really need the gnop(8) devices to have a working pool? The standard method that I've seen is to use gnop(8) only at the creation of the pool, export the pool, destroy the gnop(8) devices, and then import the pool without the gnop(8) layer.
> 
> You shouldn't need to use gnop(8) when replacing disks either, because the ashift property is set in stone for the vdev at creation and cannot change.



I don't know if I really need them!  I'd created them way back when I first made the pool in order to take advantage of the 4k sectors on the disk (despite them being reported as 512-byte sectors), but never questioned their continued existence.

After reading more, it sounds like you're right--I can get rid of them.  I'll do that to simplify things.  Thanks!  Although I've still got this wacky replacing state to remedy...


----------



## pennello (Jul 6, 2013)

Is there some way to manually initialize a disk for use with _ZFS_?  Reading this thread, it seems that if I could only get da8 into a state where it had _ZFS_ metadata with UUID 10338318586415748494, then things could resilver into a good state, after which I could `zpool detach pool 1898809308392836239` and be good to go.  But I don't know how to manually fudge such disk metadata.


----------



## kpa (Jul 6, 2013)

`zpool labelclear` will remove all ZFS labels from a disk if that's what you're after.
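Usage would be along these lines (device name illustrative; `-f` forces the clear if the device appears to be in use):

```shell
# Remove any old ZFS labels from the disk before reusing it.
# WARNING: this destroys the ZFS label data on da8.
zpool labelclear -f /dev/da8
```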


----------



## pennello (Jul 6, 2013)

kpa said:

> `zpool labelclear` will remove all ZFS labels from a disk if that's what you're after.



What I want is sort of the opposite - a `zpool labeladd`, if you will.  The new physical disk is good, but due to all the mucking around I had done earlier, _ZFS_ is expecting a UUID that doesn't exist on the new physical disk.  So now it's stuck in this replacement where it believes the new disk is UNAVAIL, despite, logically, there being sufficient data on the other disks in the vdev to constitute a successful restoration.  If only it could freshly "re-recognize" the new disk...


----------



## kpa (Jul 6, 2013)

Well, as far as I know, the UUIDs are randomly generated to guarantee that the devices making up the vdevs never have problems with non-unique UUIDs, and you can't manually assign a UUID to a device.

What if you try to `zpool offline` one of the unavailable devices? Like: `# zpool offline pool 1898809308392836239`. You could also try the -f flag to `zpool replace`: `# zpool replace -f pool 1898809308392836239 da8`.


----------



## pennello (Jul 6, 2013)

kpa said:

> What if you try to `zpool offline` one of the unavailable devices? Like:
> 
> `# zpool offline pool  1898809308392836239`



"no valid replicas":

```
# zpool offline pool 1898809308392836239
cannot offline 1898809308392836239: no valid replicas
# zpool offline pool 10338318586415748494
cannot offline 10338318586415748494: no valid replicas
```


----------



## pennello (Jul 7, 2013)

kpa said:

> You could also try to use the -f flag to `zpool replace`: `# zpool replace -f pool 1898809308392836239 da8`.



Oops, I missed this until now.  This yields:


```
# zpool replace -f pool 1898809308392836239 da8
cannot replace 1898809308392836239 with da8: cannot replace a replacing device
```


----------



## usdmatt (Jul 7, 2013)

What version of FreeBSD is this? The vdevs being labelled 'raidz1' rather than 'raidz1-X' suggests it might be an older version.

As mentioned near the start, it looks like the replacement drive is faulty, or there's a cable problem. You should be able to detach the faulted replacement drive, but you appear to have tried this and received a 'no valid replicas' error. I remember older releases had a problem where you quite often couldn't remove/detach/offline a drive even if you had sufficient redundancy.

I would suggest trying to boot a recent live image, then try and detach the replacement drive using its GUID and restart the replacement with a new drive.
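Under a newer ZFS, that recovery would look something like this (GUIDs taken from the status output above; a sketch of the idea, not a guaranteed sequence):

```shell
# Detach the faulted replacement by its GUID, collapsing the
# stuck "replacing" vdev back to the original missing disk...
zpool detach pool 10338318586415748494

# ...then restart the replacement onto the new drive
zpool replace pool 1898809308392836239 da8
```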


----------



## pennello (Jul 8, 2013)

usdmatt said:

> What version of FreeBSD is this? The vdevs being labelled 'raidz1' rather than 'raidz1-X' suggests it might be an older version.
> 
> As mentioned near the start, it looks like the replacement drive is faulty, or there's a cable problem. You should be able to detach the faulted replacement drive, but you appear to have tried this and received a 'no valid replicas' error. I remember older releases had a problem where you quite often couldn't remove/detach/offline a drive even if you had sufficient redundancy.
> 
> I would suggest trying to boot a recent live image, then try and detach the replacement drive using its GUID and restart the replacement with a new drive.



I just upgraded the box instead. I was running 8.0; now I'm running 9.1. That indeed got me past the problem of not being able to detach the faulted replacement. Thanks! The resilver is now going--only 15 hours to go!


----------



## usdmatt (Jul 9, 2013)

I've just noticed the following from the original post:



> The initial resilver went somewhat fine; at its end, the old device stayed UNAVAIL, the new device was ONLINE, but it still said "replacing", and that wouldn't go away



This was another problem with earlier FreeBSD/ZFS releases (see https://forums.freebsd.org/showthread.php?t=37394). After a replacement, the 'replacing' vdev would get stuck and not disappear, leaving the array in a DEGRADED state. As a 'replacing' vdev is effectively a mirror, it should be possible to detach the old disk:


```
# zpool detach pool old-disk
```

Hopefully you won't have any of those problems on 9.1. There are still a few shortcomings, but in general ZFS seems pretty solid in recent releases.


----------



## pennello (Jul 9, 2013)

usdmatt said:

> Hopefully you won't have any of those problems on 9.1. There are still a few shortcomings, but in general ZFS seems pretty solid in recent releases.



Nope, no problems -- the replace finished just fine.  The pool is now online, all 40 TB saved.



			
kpa said:

> `zpool labelclear` will remove all ZFS labels from a disk if that's what you're after.



One thing that was odd, though, was that I tried to use `zpool labelclear` to ensure the new disk didn't have any _ZFS_ metadata on it before inserting it into the pool, but that yielded:


```
# zpool labelclear da8
Unable to open da8
```

Even with `-f`, that yielded the same error.  I just ended up using dd to wipe the first and last sector.  But isn't that supposed to be the idea of `labelclear`?
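For what it's worth, ZFS keeps four 256 KiB labels on each device -- two at the front and two at the end -- so a dd wipe that covers all of them would look something like this on FreeBSD (device name from this thread; a sketch, so double-check the device name before running):

```shell
# WARNING: destroys data on da8 -- verify the device name first.
disk=/dev/da8

# Media size in bytes (third field of diskinfo output)
size=$(diskinfo $disk | awk '{print $3}')

# Zero the first 1 MiB (covers the two front labels)...
dd if=/dev/zero of=$disk bs=1m count=1

# ...and from 1 MiB before the end to the end of the device
# (covers the two back labels; dd stops when it hits the end)
dd if=/dev/zero of=$disk bs=1m oseek=$(( size / 1048576 - 1 ))
```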


----------

