# ZFS on four drives - RAID-Z2 or two mirrors?



## noteboat (Mar 4, 2014)

I'm setting up a new FreeBSD 10 system with four identical drives to be configured in a ZFS pool primarily used for network storage with CIFS clients. I want to make sure I'm putting together the best configuration. I'd like more fault tolerance than RAID-Z1 offers me, and I think RAID-Z3 is overkill. So I'm looking at either putting them all in a single RAID-Z2 vdev, or making two mirror vdevs of two drives each.

The total capacity should be comparable, so that's not really a factor. If I understand correctly, the RAID-Z2 configuration will give me more fault tolerance, because the zpool could survive the loss of any two drives. Is that correct? Obviously with the mirrors, if two drives failed and they were from the same mirror, I'd be dead in the water.

While I'm not especially concerned about performance (most of the clients will be on WiFi, and I'd expect that to be the bottleneck), I also don't feel like I have a solid handle on the relative performance impacts of these configurations. Am I correct in my understanding that the mirrors will give better read performance? Read performance is more important than write performance for me, but fault tolerance trumps them both.

Is there anything else I need to consider?
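For reference, the two layouts I'm weighing would be created roughly like this (the pool name `tank` and the `ada0`-`ada3` device names are just placeholders for the real ones):

```shell
# Option 1: a single RAID-Z2 vdev -- survives the loss of ANY two drives
zpool create tank raidz2 ada0 ada1 ada2 ada3

# Option 2: two mirror vdevs (striped mirrors, i.e. "RAID 10") -- survives
# two failures only if they land on different mirrors
zpool create tank mirror ada0 ada1 mirror ada2 ada3
```

With four identical drives, both give roughly two drives' worth of usable capacity.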


----------



## eduardm (Mar 4, 2014)

Hi @noteboat!

From my humble experience with storage I would say:


			
noteboat said:

> The total capacity should be comparable, so that's not really a factor. If I understand correctly, the RAID-Z2 configuration will give me more fault tolerance, because the zpool could survive the loss of any two drives. Is that correct? Obviously with the mirrors, if two drives failed and they were from the same mirror, I'd be dead in the water.


Yes, that is correct. With two mirrors, only four of the six possible two-drive failure combinations are survivable, while RAID-Z2 survives all six. RAID 10's advantage is faster reads, but RAID-Z2 is safer.



			
noteboat said:

> Am I correct in my understanding that the mirrors will give better read performance? Read performance is more important than write performance for me, but fault tolerance trumps them both.


Yes, that is correct overall. But depending on how important your stored files are, I think it is better to accept slightly lower read performance (RAID-Z2 instead of RAID 10) in exchange for more safety than the other way around.



			
noteboat said:

> Is there anything else I need to consider?


Beyond that: reasonably capable hardware, a gigabit NIC (or two), and enough RAM (8-16 GB), depending, again, on what you are going to store.

Kind regards,

eduardm


----------



## pmeunier (Mar 5, 2014)

For four drives, RAID 10 is much better than RAID 6 (~ RAID-Z2) in typical scenarios. RAID 6 wouldn't give you extra capacity, but would still have longer rebuild times, bad performance during rebuilds, half the write performance all the time -- all the disadvantages but not the advantage for which people usually choose RAID 6.  But if you're willing to sacrifice everything for a marginal safety gain, then go RAID-Z2.


----------



## jalla (Mar 5, 2014)

pmeunier said:

> For four drives, RAID 10 is much better than RAID 6 (~ RAID-Z2) in typical scenarios. RAID 6 wouldn't give you extra capacity, but would still have longer rebuild times, bad performance during rebuilds, half the write performance all the time -- all the disadvantages but not the advantage for which people usually choose RAID 6.  But if you're willing to sacrifice everything for a marginal safety gain, then go RAID-Z2.


In what "typical scenarios" does RAID 10 have 2X the write performance of RAID 6?


----------



## noteboat (Mar 5, 2014)

My workload is very read-heavy, so even if it does have half the write performance, I'm not sure that's a major concern for me. However, I'd love to see some real world numbers.
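In the meantime, I suppose I could produce some crude numbers myself once a pool is built. Something like this rough sequential test (the file path is a placeholder, and the results are approximate at best -- ZFS's ARC cache inflates repeat reads, and if compression is enabled, data from /dev/zero compresses away to nothing):

```shell
# Write a test file large enough to defeat caching, then time reading it back.
dd if=/dev/zero of=/tank/testfile bs=1m count=8192   # ~8 GB sequential write
dd if=/tank/testfile of=/dev/null bs=1m              # sequential read
```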

The main thing that attracts me about RAID 10 is the ability to expand the pool by adding two more drives while keeping the configuration consistent. But realistically, I don't think I'm very likely to do that. This is a home server, replacing an existing NAS appliance, and from the outset it will be at less than 25% capacity. Based on the growth over the past several years, I'd expect this amount of storage to last me at least five years if not longer. By that point, I think I'd be more likely to replace all the drives with larger ones, or replace the system entirely.
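As I understand it, that expansion would be a single command on the mirror layout (the `ada4`/`ada5` device names are hypothetical), whereas a RAID-Z2 vdev can't be grown by adding disks -- which is exactly the flexibility I'm not sure I'd ever use:

```shell
# Grow a two-mirror pool to three mirrors. Existing data is not rebalanced,
# but new writes stripe across all three vdevs.
zpool add tank mirror ada4 ada5
```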

I'm trying hard to not optimize for benefits that I won't actually use.


----------

