# ZFS Functionality Question



## Sylgeist (Jul 25, 2010)

This is my current config:


```
NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ada4    ONLINE       0     0     0
    ada5    ONLINE       0     0     0
    ada2    ONLINE       0     0     0
    ada3    ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ada1    ONLINE       0     0     0
    ada6    ONLINE       0     0     0
    ada7    ONLINE       0     0     0
cache
  da0       ONLINE       0     0     0
```

I know that you cannot remove individual vdevs at present, but is there a way to evacuate an entire raidz vdev so I can remove the physical drives? I'm assuming not, since I haven't seen any examples, but I thought I would check!


----------



## phoenix (Jul 26, 2010)

No.  You cannot remove top-level vdevs (i.e., mirror/raidz vdevs) from a pool.

You can remove *cache* vdevs from a pool.

And you can remove *log* vdevs from a pool *if it is a ZFSv19+ pool* (i.e., you cannot remove a log device from a FreeBSD ZFS pool, but you can from an OpenSolaris pool).
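For instance, dropping the cache device from a pool like the one above is a one-liner (a sketch, using the pool and device names from the original post):

```
# Remove the L2ARC cache device da0 from the pool "tank".
# Cache vdevs can be removed at any time; the data vdevs are untouched.
zpool remove tank da0
```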


----------



## Sylgeist (Jul 26, 2010)

Thanks - that's what I figured. Guess I'll have to upgrade drive sizes then!


----------



## phoenix (Jul 26, 2010)

However, you can do an in-place upgrade of the drives, replacing the current ones with larger ones.  Once all the drives in a vdev have been replaced, either reboot, or drop to single-user mode and export/import the pool.  After that, all the extra space will appear in the pool.

The general process is as follows:

1. `zpool offline poolname devicename`
2. If it's a hot-swappable device/controller, replace the drive; otherwise reboot and physically replace the drive.
3. `zpool replace poolname olddevice newdevice`

The exact commands will depend on your drive controller.  For RAID controllers, use the RAID management utility to create the new logical drive.  For ahci(4)-based controllers, you can use camcontrol(8) to stop the device and to rescan the bus once the new drive is in place.  A sketch of one full cycle is below.
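A minimal sketch of one replacement cycle, assuming an ahci(4) controller with hot-swap bays, the pool "tank" from above, and ada2 as the drive being swapped (all names are illustrative):

```
# Take the old drive out of the pool and spin it down.
zpool offline tank ada2
camcontrol stop ada2

# ...physically swap the drive in its bay...

# Rescan so the kernel picks up the new drive, then resilver onto it.
camcontrol rescan all
zpool replace tank ada2

# Watch the resilver; repeat for the next drive once it completes.
zpool status tank
```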

We've done this to replace two raidz2 vdevs in our storage servers, using 1.5 TB drives to replace 500 GB drives.  Works like a charm.


----------



## Sylgeist (Jul 26, 2010)

Thanks, sir - that's probably what I'll do. It conveniently forces me to upgrade all the drives to something larger.


----------



## danbi (Jul 30, 2010)

While you are at it, it is not a bad idea to use glabel to label your disks - for example, the same way they are marked on the enclosure. This may save you some day, when you pull the wrong cable or the OS renames the drives for whatever reason.
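A minimal sketch of labeling a replacement drive before it goes into the pool (the label name "bay04" and the device names are hypothetical):

```
# Write a glabel matching the enclosure bay marking onto the new drive.
glabel label -v bay04 /dev/ada4

# Reference the stable label/ device instead of the adaX name in the pool.
zpool replace tank ada4 label/bay04
```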


----------



## ScottJ97 (Aug 1, 2010)

Is it too late to use glabel on an existing pool?

The way I understand it, glabel uses the last sector of the drive to store its data, then presents a device that's one sector smaller than the actual drive. ZFS prefers to use whole drives, rather than slices. Will giving it a glabel have the same limitations as giving it a slice?


----------



## danbi (Aug 2, 2010)

If you replace the disks in a pool, you may use glabels on the new disks, as long as the resulting device is at least as large as what it replaces. It will not work to replace the same disk with a glabeled version of itself, as the labeled device will be one sector shorter - unless, of course, you initially used only a smaller part of the disk.
There are apparently no limitations on using glabels with ZFS.
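One way to see the size difference for yourself (device names hypothetical, continuing the "bay04" example above):

```
# Compare the raw disk with its labeled provider: the label/ device
# reports one sector (512 bytes here) less than the underlying disk.
diskinfo -v /dev/ada4
diskinfo -v /dev/label/bay04
```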


----------

