# [ZFS] Combine striping and ZFS to use pools more efficiently



## bugboy (Feb 5, 2010)

I have a question about RAID-Z combined with striping (RAID-0). Suppose I have the following drives:

 A+B: 500GB (identical)
 C+D: 1TB (identical)
 E+F: 1.5TB (identical)

If I want to use a RAID-Z array across all six drives, it will only have around 2TB, because it will only use 500GB of each disk. The advantage is that two drives can fail, but it isn't very efficient. A mirror setup would be more efficient, because that would get me 3TB, but not every combination of two failing drives could be handled. Switching to a RAID-Z pool later is also impossible without backing up all the data to external storage.

Now I am considering the following scenario. I use the striping module (geom_stripe) to combine A+B into a single 1TB device (G). Then I create a ZFS pool that uses devices C, D, E, F and G (striped A+B). Now the smallest device is 1TB, so I will get around 4TB of disk space (I am aware that in this scenario only a single disk may fail, instead of two in the original pool). This would yield the following devices in the pool:

 C+D: 1TB (identical)
 E+F: 1.5TB (identical)
 G: 1TB (striped A+B)
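A sketch of that setup on FreeBSD, assuming hypothetical device names (ada0/ada1 = A/B, ada2/ada3 = C/D, ada4/ada5 = E/F) and a pool named `tank`:

```shell
kldload geom_stripe                        # load the striping module
gstripe label -v st0 /dev/ada0 /dev/ada1   # A+B -> 1TB device G at /dev/stripe/st0
zpool create tank raidz1 ada2 ada3 ada4 ada5 stripe/st0
zpool list tank                            # ~5TB raw, ~4TB usable after parity
```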

When one of the drives fails (or A and B together), the pool can still be recovered. Upgrading is also easy: I take G (striped A+B) offline, replace it with the first 1.5TB disk (H), and let the pool rebuild. Now I have the following scenario:

 C+D: 1TB (identical)
 E+F+H: 1.5TB (identical)
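The replacement step could look like this, assuming the same hypothetical device names as above and ada6 for the new disk H:

```shell
zpool replace tank stripe/st0 ada6   # resilvers H from the remaining devices
zpool status tank                    # wait for the resilver to complete
gstripe stop st0                     # tear down the stripe; A and B are free again
```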

Still a valid pool with the same size as before. Next I take drive C offline, put in the next 1.5TB disk (I), and let the pool recover again. Now we have the following scenario:

 D: 1TB
 E+F+H+I: 1.5TB (identical)

There is still no additional space after the pool has recovered. Now we take D offline and stripe C+D into a 2TB device (J). The striped device (J) is then added back to the pool. Now we have the following scenario:

 E+F+H+I: 1.5TB (identical)
 J: 2TB (striped C+D)
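Continuing the sketch: at this point C (ada2) is already free from the earlier replacement and D (ada3) has just been taken offline, so the two can be striped and swapped into D's slot. Note that the stripe overwrites D's old contents, so the resilver rebuilds J entirely from parity on the remaining devices (names hypothetical):

```shell
gstripe label -v st1 /dev/ada2 /dev/ada3   # C+D -> 2TB device J at /dev/stripe/st1
zpool replace tank ada3 stripe/st1         # resilver rebuilds J from parity
```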

Now I have effectively upgraded the pool from 5x1TB to 5x1.5TB without the need to take the filesystem offline. I tried to create a striped volume and add it to a pool, and it doesn't seem to be a problem. Even if drives C and D both fail, I still don't lose any data, since together they only count as a single failed device (J).

The advantage of this method is that my old drives can still be used effectively, and even if the two oldest drives both fail, my data is still safe in most cases. Still, I do have some questions:


1. Am I missing something? I cannot find many references on combining stripes and ZFS to use smaller drives more efficiently.
2. Is it a bad idea to combine striping and ZFS?

Hope somebody can provide me with some valuable info...


----------



## gilinko (Feb 5, 2010)

Without too much thought, I would say that it probably is a bad idea, as it will disable some of the features of both ZFS and the stripe. It's the same as using both hardware RAID and software RAID: usually not a good idea(tm). And that's probably why you will not find much information about it.

I would go with just a raidz1, and in turn replace the 500 GB and 1 TB disks with larger ones as time passes.
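That approach could look like this, with hypothetical device names; the pool is sized by its smallest members until they are all replaced:

```shell
# One raidz1 over all six disks, each contributing 500GB for now
zpool create tank raidz1 ada0 ada1 ada2 ada3 ada4 ada5
# Later, replace the small disks one at a time, letting each resilver finish:
zpool replace tank ada0 ada6
# Once every member is larger the vdev can grow (on newer ZFS versions via the
# autoexpand pool property; older versions grow after an export/import):
zpool set autoexpand=on tank
```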


----------



## wonslung (Feb 5, 2010)

bugboy said:

> I have a question about RAID-Z combined with striping (RAID-0). Suppose I have the following drives:
> 
> A+B: 500GB (identical)
> C+D: 1TB (identical)
> E+F: 1.5TB (identical)



a friend of mine was in a similar situation.  What he ended up doing was this:

He found out how many blocks the 500 GB drives had, using sade or sysinstall.

We're going to split each 1.5TB drive into two partitions: for E, call them e1 (1TB) and e2 (500GB); the same goes for F (f1 and f2).

He then did the following:
he made a raidz1 with a+b+e2+f2, and then added a second raidz1 to the pool using c+d+e1+f1.

This is probably the best setup I could suggest with your drives.
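The partitioning wonslung describes could be sketched like this, using gpart with GPT labels instead of the sade/sysinstall mentioned above (device names and exact sizes are assumptions):

```shell
# Split each 1.5TB disk into a 1TB slice and a ~500GB slice
gpart create -s gpt ada4                           # E
gpart add -t freebsd-zfs -s 1T -l e1 ada4          # e1: 1TB
gpart add -t freebsd-zfs -l e2 ada4                # e2: remaining ~500GB
# ...repeat for F with labels f1/f2, then build the pool from two raidz1 vdevs:
zpool create tank raidz1 ada0 ada1 gpt/e2 gpt/f2   # 4 x 500GB -> ~1.5TB usable
zpool add tank raidz1 ada2 ada3 gpt/e1 gpt/f1      # 4 x 1TB   -> ~3TB usable
```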


----------



## phoenix (Feb 5, 2010)

Since you have pairs of each size, just create a pool with 3  mirrored vdevs.  That way, you get 3 TB of space (0.5 TB + 1.0 TB + 1.5 TB), and you can let ZFS worry about striping the data as needed across the vdevs.  You also get the performance benefits of mirrored vdevs over raidz vdevs.
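That is a one-liner, assuming hypothetical device names for the three pairs:

```shell
# Three mirrored vdevs in one pool; ZFS stripes data across them automatically
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5
zpool list tank   # ~3TB usable (0.5 + 1.0 + 1.5)
```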


----------

