# ZFS disk configuration advice - 6 drives, unequal sizes



## throAU (Feb 28, 2012)

All,

I have a need for a temporary storage system (for Exchange) to take some load off my SAN whilst awaiting new SAN hardware, and have the following hardware kicking around:

Dell PowerEdge 2950 III (quad-core Xeon E5430, 16 GB RAM)
4x 250 GB 7200 RPM SATA
2x 1 TB 7200 RPM SATA

I need ~450-500 GB of storage.

Would I be better off (for IO throughput) configuring:

1. 3x 250 GB mirrored vdevs (slicing the 1 TB drives to use only 250 GB of each)
2. 2x 250 GB mirrored vdevs plus 1x 1 TB mirrored vdev (whole disks - any benefit to this over the above option?)
3. 2x RAIDZ vdevs
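
For reference, the three layouts would look something like this (device names ada0-ada5, the partition names, and the option-3 grouping are assumptions on my part, not tested commands):

```shell
# Option 1: three 250 GB mirrors; the 1 TB drives are sliced first,
# so the third mirror uses 250 GB partitions (ada4p1, ada5p1)
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4p1 ada5p1

# Option 2: three mirrors using whole disks (the third mirror is 1 TB)
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5

# Option 3: two RAIDZ vdevs - one possible 3+3 split; each vdev
# containing a 1 TB drive would only use 250 GB of it
zpool create tank raidz ada0 ada1 ada4 raidz ada2 ada3 ada5
```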

I plan to boot from USB flash.

My gut feeling tells me I'm better off with one of the mirrored options, and I'm guessing option 2 is easier to administer (no slices, write cache left on, etc.).  Is there any problem with having a mirror of a different size in the pool if I only plan to use ~450-500 GB in any case?

edit:
Has anyone played with compression for Exchange databases?  Any win there?  Also, does ZFS gain read speed from a mirror (i.e., reading different data from both sides of the mirror at once), or not?

I'm keen to get some testing of my own underway, but the hardware isn't all available for a couple of days...
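
When it arrives, I expect testing compression will just be a matter of something like this (the dataset name is hypothetical):

```shell
# Enable compression on the dataset that will hold the Exchange DBs
zfs set compression=lzjb tank/exchange

# After copying data in, see how much it actually saved
zfs get compressratio tank/exchange
```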

Cheers


----------



## phoenix (Feb 28, 2012)

Create 2x mirror vdevs using the 250 GB drives first.

Then add another mirror vdev using the 1 TB drives, giving you a total of 3 mirror vdevs, and 1.5 TB of usable storage in the pool.
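
Assuming the 250 GB drives show up as ada0-ada3 and the 1 TB drives as ada4/ada5 (adjust to your actual device names), that's something like:

```shell
# Stripe of two mirrors across the 250 GB drives
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# Grow the pool with a third mirror vdev on the 1 TB drives
zpool add tank mirror ada4 ada5

# Verify the layout
zpool status tank
```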

That will give you the best performance (a stripe of three mirrors) and the most storage space, and it will make it easier down the road to replace the 250 GB drives with larger ones.

You won't get perfect stripe performance, as ZFS will favour more-empty vdevs, so the bulk of the writes will go to the 1 TB drives.


----------



## throAU (Feb 28, 2012)

Cheers.  Turns out the server won't boot from USB under FreeNAS.  So... I've got it set up on vanilla FreeBSD 9.0 with 3 mirrors as suggested, with the OS on ZFS root.

To stop ZFS attempting to keep writing to the more-free vdev, would it be an idea to slice the 1 TB disks down to a smaller partition to help performance (i.e., full-stripe writes)?  750 GB is more than I need at the moment...
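
If I go that route, I'd presumably partition each 1 TB disk with gpart before mirroring the partitions - a rough sketch (sizes and labels are just my guess, and the pool would need rebuilding to use them):

```shell
# Slice one of the 1 TB disks down to 250 GB (repeat for the other)
gpart create -s gpt ada4
gpart add -t freebsd-zfs -s 250G -l big0 ada4

# The third mirror would then reference the labelled partitions,
# e.g. gpt/big0 and gpt/big1, instead of the whole disks
```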

This box isn't going to be in service long term - 1-2 months tops; it's just to help me juggle some space/IO while our new SAN is on order, so upgradability is not a concern at this point.  Basically, our current SAN has run out of IO, and I'd like to move an Exchange storage group to this box whilst awaiting the upgrade.


----------

