# zpool freezing



## Alt (Jan 5, 2010)

Hi. I'm experimenting with ZFS (FreeBSD 8.0-p0) in a VMware 6.5 box.
The box has 4 virtual disks of 1 GB each, gathered into a RAIDZ1 pool, and I'm testing its survivability.

- 1 disk *removed*. `zpool status` reports the array as DEGRADED and the disk as UNAVAIL. `zpool replace` (with another disk) fixes it fine, so no problem.
- 2 disks removed. `zpool status` reports FAULTED. That's expected, since it's raidz1.

- 1 disk *replaced*. `zpool replace` fixes it fine.
- 2 disks replaced. Can't be fixed, which is expected.
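For reference, the single-disk replace step above looks roughly like this (the pool is named `test`, per the log below; `da4` as the replacement device is a hypothetical example):

```
# zpool status test                 <- shows the pool DEGRADED, one disk UNAVAIL
# zpool replace test da1 da4        <- swap the failed da1 for the new da4
# zpool status test                 <- resilvering runs; pool returns to ONLINE when done
```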

Now I do this (starting from a healthy 4-disk raidz1): shut down, go to the VMware settings, remove disks 3 and 4, and create a new disk, which becomes disk 3 (da2). Now when I boot and run `zpool status`, it freezes:

```
# zpool status
Jan 5 15:36:01  root: ZFS: vdev failure, zpool=test type=vdev.open_failed
Jan 5 15:36:01  root: ZFS: vdev failure, zpool=test type=vdev.bad_label
```
I waited at least 30 minutes with no progress, and no I/O activity is going on during that time. Ctrl-C, SIGTERM, SIGKILL - nothing! It seems the zpool process can't be killed at all. It blocks da{1,2,3} so they can't be written to. The zpool subcommands import/export/status/list/destroy all freeze the same way. I'm not saying the raid must survive; I'm saying the ZFS system becomes uncontrollable.

I found a way to re-initialize this: reboot, don't run any zpool command, remove /boot/zfs/zpool.cache (this wipes all zpool configs), then `dd if=/dev/zero of=/dev/daX bs=5m count=1`. After another reboot, it releases the old disks and no longer freezes. Without the dd it says da0 is in use somewhere (by the old raid) and refuses to create a new raidz.
Has anyone seen this issue before?
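Spelled out as commands (daX stands for each affected disk; `bs=5m` is FreeBSD dd syntax for 5 MB blocks):

```
# rm /boot/zfs/zpool.cache              <- pool configs are forgotten at next boot
# dd if=/dev/zero of=/dev/daX bs=5m count=1   <- zero the front of the disk, clobbering
                                              <- the two leading ZFS vdev labels (ZFS also
                                              <- keeps two more labels at the end of the disk)
# reboot
```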


----------



## phoenix (Jan 5, 2010)

This is working as designed.

You have a 4-drive raidz vdev.  You removed 2 of the disks.  You now have a broken pool.  You then added 1 physical drive ... but it has not been added to the pool yet.  How do you expect it to work?    The pool is unusable, unrecoverable, dead, kaput, gone.

Your "fix" does basically the same thing as destroying the pool, and creating it from scratch.  Which is really all you can do, when you lose 2 drives in a 4-disk raidz1.  There's no way to recover from the loss of 2 disks at the same time.


----------



## Alt (Jan 5, 2010)

I know it's not possible to recover this pool. But it's not possible to create a new one either, because the zpool utility freezes!
I'm saying that after this manipulation you cannot manage ANY pools!


----------



## phoenix (Jan 6, 2010)

You should be able to boot to single-user mode, mount / read-write, run the hostid script, and then force-destroy the pool:

```
# mount -u /
# /etc/rc.d/hostid start
# zpool destroy -f <poolname>
```


----------



## Alt (Jan 7, 2010)

Same effect, it freezes on the zpool call... ((
*(screenshot attached)*


----------



## FBSDin20Steps (Jan 7, 2010)

Did you try it with the "fixit" method? Is it possible to do this in single user mode?


----------



## Alt (Jan 7, 2010)

This screenshot is from single-user mode... Which "fixit" method?


----------



## FBSDin20Steps (Jan 7, 2010)

The "Fixit" option on your install dvd.


----------



## phoenix (Jan 7, 2010)

Alt said:

> Same effect, it freezes on the zpool call... ((
> *(screenshot attached)*



Are you sure it's frozen, and not actually destroying the pool?  The destroy action will take a long time, depending on the size of the pool, and the amount of data in the pool.

You can check by hitting *CTRL+T*, which will output a line of info on what's happening.
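(Ctrl+T sends SIGINFO to the foreground process, and the kernel prints a one-line status report. The values below are purely illustrative; the bracketed field is the kernel wait channel the process is sleeping on, which is the useful clue for telling a hang from slow progress:)

```
load: 0.21  cmd: zpool 1304 [tx->tx_sync_done_cv] 12.45r 0.00u 0.03s 0% 3512k
```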


----------



## Alt (Jan 10, 2010)

In case anyone is interested - I filed PR kern/142563, since tracking this down with gdb led to an ioctl system call that freezes.


----------

