# ZFS with multiple controllers - recommended setup



## boris_net (Jun 2, 2013)

Hi all,

I have 8 x 3 TB and 12 x 1 TB drives hooked up to 3 x IBM M1015 controllers (running mpslsi 15.00.00.00) on FreeBSD 9.1. Since I can get up to 8 drives on a single controller, would you recommend distributing the 3 TB drives across multiple controllers or putting them all on the same one?

I have read many threads but am still not sure what ZFS RAID level to use. I want the 8 x 3 TB to have reasonable redundancy; raidz2 looks like a good candidate to me, but I would welcome recommendations, especially on pools of mirror vdevs, since I am not entirely sure I understand their benefit compared to raidz2 and raidz3.

For the 12 x 1 TB, I would like to use them for iSCSI targets and for local backup of critical data stored on the 8 x 3 TB (that would obviously be a subset of the data on the larger pool, and yes, I know this is not an acceptable backup solution; I am building a second server for proper backup). Again, would it be better to have 2 x 6-drive pools? And would it make any difference having each 6-drive pool on a single controller versus spanning 2 or even 3 controllers?

I have not fully decided on the split of the 12-drive pool in two 6-drive pools, it was more to illustrate the options I have across multiple controllers.

Thanks!


----------



## phoenix (Jun 2, 2013)

You can do 3 x 6-disk raidz2 vdevs (2 using 1 TB drives, 1 using 3 TB drives). That would give you approximately 4 + 4 + 12 = 20 TB of usable storage, plus two spare 3 TB drives.
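As a quick sanity check on that arithmetic (a rough sketch that ignores ZFS metadata and slop-space overhead, so real usable space will be somewhat lower):

```python
# Rough usable capacity of a raidz2 vdev: two disks' worth goes to parity.
def raidz2_usable_tb(disks: int, disk_size_tb: float) -> float:
    return (disks - 2) * disk_size_tb

# Two 6 x 1 TB vdevs plus one 6 x 3 TB vdev.
vdevs = [(6, 1.0), (6, 1.0), (6, 3.0)]
total = sum(raidz2_usable_tb(n, size) for n, size in vdevs)
print(total)  # 20.0
```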

Since you have three controllers and 6 disks per vdev, you can distribute them across the controllers so that losing any one controller will not kill the pool:

1) Put 4 x 1 TB and 2 x 3 TB drives on each controller.
2) Use 2 disks from each controller to make up each vdev.

Thus, if any controller dies, you lose only 2 disks from each vdev (until a replacement controller is installed), but the pool carries on, since raidz2 can survive the loss of two drives per vdev.
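A hypothetical sketch of what that pool creation could look like (the `daN` device names are made up for illustration; your actual device names and controller-to-device mapping will differ, so check with `camcontrol devlist` first):

```shell
# Assume da0-da5 sit on controller 0, da6-da11 on controller 1,
# and da12-da17 on controller 2. Each raidz2 vdev takes exactly two
# disks from each controller, so one dead controller costs only
# 2 disks per vdev -- within raidz2's two-disk fault tolerance.
zpool create tank \
    raidz2 da0 da1 da6  da7  da12 da13 \
    raidz2 da2 da3 da8  da9  da14 da15 \
    raidz2 da4 da5 da10 da11 da16 da17
```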



If you have other spare drives on hand, then you could even use the extra slots on the controllers to create a separate pool of mirror vdevs for the OS.


----------



## boris_net (Jun 3, 2013)

So if I count correctly, I have 2 unused 3 TB HDDs with this setup, so I would need another 3 TB drive to keep a raidz2 of 3 x 3 TB per controller and make better use of all my 3 TB HDDs.

Is that correct?

Thanks.


----------



## phoenix (Jun 3, 2013)

No.

You want to keep your vdevs identical in size (as in number of drives). So every raidz2 vdev in the pool needs to have 6 drives.

If you are hell-bent on using the 2 extra 3 TB drives (I'd keep them on the shelf as spares), then create a separate pool using those 2 drives in a mirror, and use that pool for the core OS install.
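If you did go that route, the mirror pool is a one-liner (device names hypothetical, for illustration only):

```shell
# Assume da18 and da19 are the two spare 3 TB drives.
# A simple two-way mirror pool, kept separate from the data pool.
zpool create ospool mirror da18 da19
```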


----------



## boris_net (Jun 3, 2013)

Ok, thanks. Sorry to be thick, but I want to make sure I got that right. When you create 2 x (6 x 1 TB raidz2 vdev) + 1 x (6 x 3 TB raidz2 vdev), can you create a single ZFS pool with all of these vdevs for a total of 20 TB, or are they multiple pools?

To illustrate, would it look like Option A or Option B below?

Option A:


```
NAME             STATE     READ WRITE CKSUM
tank             ONLINE       0     0     0
  raidz2         ONLINE       0     0     0
    3TB-disk1    ONLINE       0     0     0
    3TB-disk2    ONLINE       0     0     0
    3TB-disk3    ONLINE       0     0     0
    3TB-disk4    ONLINE       0     0     0
    3TB-disk5    ONLINE       0     0     0
    3TB-disk6    ONLINE       0     0     0
  raidz2         ONLINE       0     0     0
    1TB-disk1    ONLINE       0     0     0
    1TB-disk2    ONLINE       0     0     0
    1TB-disk3    ONLINE       0     0     0
    1TB-disk4    ONLINE       0     0     0
    1TB-disk5    ONLINE       0     0     0
    1TB-disk6    ONLINE       0     0     0
  raidz2         ONLINE       0     0     0
    1TB-disk7    ONLINE       0     0     0
    1TB-disk8    ONLINE       0     0     0
    1TB-disk9    ONLINE       0     0     0
    1TB-disk10   ONLINE       0     0     0
    1TB-disk11   ONLINE       0     0     0
    1TB-disk12   ONLINE       0     0     0
```

Option B:


```
NAME             STATE     READ WRITE CKSUM
tank3TB          ONLINE       0     0     0
  raidz2         ONLINE       0     0     0
    3TB-disk1    ONLINE       0     0     0
    3TB-disk2    ONLINE       0     0     0
    3TB-disk3    ONLINE       0     0     0
    3TB-disk4    ONLINE       0     0     0
    3TB-disk5    ONLINE       0     0     0
    3TB-disk6    ONLINE       0     0     0

NAME             STATE     READ WRITE CKSUM
tank1TB-A        ONLINE       0     0     0
  raidz2         ONLINE       0     0     0
    1TB-disk1    ONLINE       0     0     0
    1TB-disk2    ONLINE       0     0     0
    1TB-disk3    ONLINE       0     0     0
    1TB-disk4    ONLINE       0     0     0
    1TB-disk5    ONLINE       0     0     0
    1TB-disk6    ONLINE       0     0     0

NAME             STATE     READ WRITE CKSUM
tank1TB-B        ONLINE       0     0     0
  raidz2         ONLINE       0     0     0
    1TB-disk7    ONLINE       0     0     0
    1TB-disk8    ONLINE       0     0     0
    1TB-disk9    ONLINE       0     0     0
    1TB-disk10   ONLINE       0     0     0
    1TB-disk11   ONLINE       0     0     0
    1TB-disk12   ONLINE       0     0     0
```


----------



## phoenix (Jun 3, 2013)

Exactly like option A.


----------



## boris_net (Jun 26, 2013)

Just a more elaborate thanks; it took some time because I had to move data to a backup server first, which was slow.

I have this setup in place now, and bonnie++ reported a nice 500 MB/s read and 350 MB/s write, with a high level of resilience.

Thanks again for your recommendation/help.


----------

