# Clean up ZFS raid-z pool after drive failures.



## aaronZZ (Mar 20, 2012)

ZFS worked wonders and protected my data in the face of two simultaneous drive failures. Only two files were damaged and everything is working better than ever. The last thing I want to take care of is getting the zpool back into a fully clean configuration. I tried 'zpool clear' on it, but that didn't clean up the mess.

This is the sequence of events that got me here. Two of the three drives started running really slowly after a power outage. I replaced one with a new drive and waited for it to resilver. The problem was that the other bad drive kept restarting the resilver process; it just never finished. So I added a spare drive and then replaced the second dying drive while it was resilvering. At that point there were four drives and three of them were resilvering. The bad drive kept preventing the resilver from completing, so I finally just pulled it out and let the two new drives and one old drive finish resilvering. That's when I ended up with the two bad files. A small price to pay.

But now I want to get the pool out of degraded status. I've tried to detach the faulted drive, but that only works on mirrors. How do I get these out of a raid-z pool?

Any advice on commands to clean this up would be great. 
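For reference, the detach attempt looked something like this, using the GUID of the faulted device from the status output below (exact command from memory):

```
zpool detach tank2tb 12838284120628520120
```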



```
pool: tank2tb
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scan: resilvered 1.67T in 13h50m with 6 errors on Mon Mar 19 11:35:05 2012
config:

	NAME                       STATE     READ WRITE CKSUM
	tank2tb                    DEGRADED     0     0     6
	  raidz1-0                 DEGRADED     0     0    12
	    replacing-0            DEGRADED     0     0     0
	      7290074769180900558  UNAVAIL      0     0     0  was /dev/ada5/old
	      spare-1              ONLINE       0     0     0
	        ada2               ONLINE       0     0     0
	        ada1               ONLINE       0     0     0
	    12838284120628520120   FAULTED      0     0     0  was /dev/ada1
	    ada0                   ONLINE       0     0     0
	spares
	  18143279662960688364     INUSE     was /dev/ada1

errors: 2 data errors, use '-v' for a list
```



```
History for 'tank2tb':
2010-11-10.18:27:29 zpool create tank2tb raidz ad2 ad3 ad10
2010-11-10.18:35:43 zfs set atime=off tank2tb
2012-02-04.08:33:35 zpool upgrade tank2tb
2012-02-17.02:48:42 zpool online tank2tb ada5
2012-02-17.02:54:34 zpool replace tank2tb ada5
2012-03-11.08:54:48 zpool add tank2tb spare ada4
2012-03-11.09:00:10 zpool replace tank2tb ada2 ada4
2012-03-18.07:53:05 zpool scrub tank2tb
2012-03-18.21:44:41 zpool clear tank2tb
```


----------



## soulreaver1 (Mar 21, 2012)

Raidz and top-level vdevs cannot be removed from a pool.


----------



## usdmatt (Mar 21, 2012)

Err, it's hard to know where to start with that output. There are a few things I'm not familiar with, like the spare, which seems to behave differently from my (pre-v28) tests and looks to have stuck as a mirror. It also looks like some of the device names may have changed: you added ada4 as a spare, but it's down as 'was /dev/ada1' in the output.

First of all, a 'replacing' vdev should act just like a mirror, and you have a degraded mirror vdev (replacing-0) with an UNAVAIL device and the ONLINE spare device (a sub-mirror?). So I would start by detaching the UNAVAIL device and seeing what state that gets the pool into:


```
zpool detach tank2tb 7290074769180900558
```

It may then be possible to detach either ada2 or ada1 from that spare-1 vdev, although I'm not sure about that, and I would obviously check the status/layout of the pool again after running the above command (if it succeeds).
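If the layout is what I think it is, the follow-up would be something like this (I'm guessing ada1 is the spare member to detach; check the status output before running anything):

```
# Re-check the pool layout after the first detach
zpool status -v tank2tb

# Then try detaching one member of the spare-1 'mirror'
zpool detach tank2tb ada1
```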

You then have the faulted device (12838284120628520120), which needs replacing. If ada1 and ada2 are both working drives, then I would use whichever one I detached from the spare vdev to replace it:


```
zpool offline tank2tb 12838284120628520120
zpool replace tank2tb 12838284120628520120 ada1 (or ada2)
```

I'm not sure the offline command is required, or if it will even work on a faulted disk, but **hopefully** ZFS shouldn't let you do anything that either isn't possible or will fault the pool entirely.


----------



## aaronZZ (Mar 23, 2012)

I tried detach and it wouldn't let me. I'm going to try replace now. I have three disks, but like you said, it looks like two of them may somehow be mirrored now. I'll update.


```
#zpool replace tank2tb 12838284120628520120 ada1
cannot replace 12838284120628520120 with ada1: ada1 is busy

#zpool replace tank2tb 12838284120628520120 ada2
invalid vdev specification
use '-f' to override the following errors
/dev/ada2 is part of an active pool 'tank2tb'
```

So, which one of those is actually the drive in use? I'm guessing ada2.

Nothing is allowing me to take action without at least a -f. Is there any way to tell whether I actually have three drives working properly as a raid-z array, or whether I have one drive plus a two-way mirror, so effectively just two drives in a raid-z array that started with three?

Looking at gstat during large reads, I see ada0 and ada2 providing all responses. During writes, all three drives are uniformly active, although ada1 and ada2 show almost identical I/O while ada0 is usually a bit lower.
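For anyone wanting to watch the same thing, this is roughly the gstat invocation I've been using (the disk names are from my system; adjust the filter to match yours):

```
# Show only the pool's disks, refreshing once per second
gstat -f 'ada[012]' -I 1s
```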


----------



## usdmatt (Mar 23, 2012)

I'm surprised that it won't let you remove the UNAVAIL device. What error message does it give when you try the detach command?

From what I can see (if nothing has changed since your first post) you have a degraded raidz1 made up of the following 3 devices:


```
device 1: ada0 (ONLINE)
device 2: 12838284120628520120 (FAULTED)
device 3: replacing-0 (DEGRADED 'mirror')
```

The degraded mirror will still function correctly, so with that and ada0 you have enough online devices to keep the raidz functional. The gstat output suggests to me that spare-1 is acting as a mirror, with reads coming from the first disk and writes obviously going to both. But again, I would expect you to be able to detach one of the two devices, unless ZFS thinks the resilver never completed successfully.

If the pool status hasn't changed from the first post, ada0, 1 & 2 are all in use so unless you have more disks in the system that aren't part of the pool, the replace command is currently going to be no help.


----------

