# ZFS and drive removal?



## Kayot (Jan 20, 2014)

I want to set up a RAID6-style drive arrangement. I want to be able to remove a drive without replacing it: through whatever command, I tell the array that drive /dev/sdX is being removed and not replaced, so it should move the data to the remaining drives, re-compute parity without /dev/sdX, and disengage the drive. Effectively, this would shrink the array by one disk.

So, can I shrink a ZFS pool the way described above?


----------



## phoenix (Jan 20, 2014)

Short answer: nope!

Long answer: nope, not possible. A raidz vdev is immutable, meaning you cannot change the number of drives in the vdev. You can replace each drive individually with a larger one, thus expanding the storage size of the vdev (once the last drive is replaced). But you cannot add drives to, or remove drives from, the vdev to change its total number of drives.
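A drive-by-drive replacement along those lines might look like this. This is just a sketch; the pool name `tank` and the device names are assumptions:

```shell
# Replace each disk in the raidz vdev with a larger one, one at a time,
# waiting for each resilver to finish before starting the next.
zpool set autoexpand=on tank    # let the pool grow once the last disk is swapped
zpool replace tank da0 da6      # swap old disk da0 for new, larger disk da6
zpool status tank               # watch until the resilver completes
```

With `autoexpand=on`, the extra capacity becomes available automatically after the final disk in the vdev has been replaced and resilvered.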

You can, however, add another vdev to the pool, thus increasing the storage size and IOps of the pool.
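Growing the pool by adding a second vdev might look like the following (again, pool name and device names are assumed):

```shell
# Add a second raidz2 vdev to the existing pool. ZFS stripes new writes
# across both vdevs, increasing both capacity and IOps.
zpool add tank raidz2 da6 da7 da8 da9
zpool status tank
```

Note that `zpool add` is effectively one-way as well: once a vdev is part of the pool, it cannot be removed.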


----------



## Kayot (Jan 20, 2014)

Bummer. I guess the best way to handle it is to make each disk its own ZFS pool (for deduplication) and use AUFS with SnapRAID on top. My data is organized by type, so that should maximize dedup.

Thanks, I'll look into that instead.


----------



## JanJurkus (Jan 21, 2014)

*Re: [Question] ZFS and drive removal?*



			
> Kayot said:
> My data is organized by type so that should maximize dedup.



Or not. It depends on your data: 2 TB full of movies will not have much to deduplicate. And I always get a little worried by such a 'stacked' solution, caveat emptor!


----------



## usdmatt (Jan 21, 2014)

*Re: [Question] ZFS and drive removal?*

I find it very strange to have a requirement to start with a set number of disks (i.e. 6) but want the ability to remove them. Is there any good reason for thinking that you'll want to reduce the size of the array?

Dedupe can also be a dangerous feature that many people seem to enable on a whim because it *might* save space and sounds good. To do it properly, the usual recommendation is about 5 GB of RAM per 1 TB of disk space.

Personally, I would just build a simple pool big enough for my needs as a fixed configuration, forget dedupe, and enable compression, which will probably save as much space as dedupe, if not more. Compression is generally recommended anyway because its throughput often outperforms the disks, especially with the new lz4 algorithm.
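Enabling lz4 compression is a one-liner; the dataset name here is an assumption:

```shell
# Enable lz4 compression on a dataset. Only data written afterwards
# is compressed; existing data stays as-is until rewritten.
zfs set compression=lz4 tank/data
zfs get compressratio tank/data   # check how much space is being saved
```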


----------

