# Choose a RAID Controller



## Ben (Nov 16, 2013)

Hi,

I want to run FreeBSD 9.2-RELEASE on a ZFS RAID1 on two SSDs. For this setup I can choose the RAID controller (JBOD):

- Adaptec 6405E
- LSI 9240-4i
- onBoard SATA-Controller

Which option should I choose? Do you have any experience about compatibility/stability issues?

Thanks.


----------



## wblock@ (Nov 16, 2013)

Use the onboard SATA controller.  The hassle and complexity of a RAID controller is not needed when there are only two devices.
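A minimal sketch of that setup, assuming the two SSDs show up as `ada0` and `ada1` (device names will vary on your system):

```shell
# Create a two-disk ZFS mirror named "tank" (hypothetical device names).
zpool create tank mirror /dev/ada0 /dev/ada1

# Verify that both sides of the mirror are ONLINE.
zpool status tank
```

These commands need root and a ZFS-capable kernel; run `zpool status` afterwards to confirm both disks are healthy.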


----------



## Ben (Nov 16, 2013)

Yes, I think that's the right way to go.

Thanks.


----------



## Oko (Nov 17, 2013)

Ben said:

> Hi,
> 
> I want to run FreeBSD 9.2-RELEASE on a ZFS RAID1 on two SSDs. For this setup I can choose the RAID controller (JBOD):
> 
> ...




LSI make excellent hardware controllers! I use them (LSI Logic / Symbios Logic MegaRAID SAS 2208) on multiple machines in my lab, although with Red Hat for now, but I am in the process of switching to DragonFly BSD. MegaCli (the command-line interface for LSI controllers) is available for FreeBSD, even though FreeBSD is not supported by the new StorCli tool which is designed to replace MegaCli. I should know, as I just used it on Friday evening to replace a failed HDD.

I do not use ZFS, but I did some research before picking DragonFly for my needs. IIRC, ZFS RAID-Z3, which should be supported by FreeBSD, has fault tolerance of three failed disks, which is better than the two-disk fault tolerance of RAID 6, hardware or software (I think FreeBSD doesn't have support for software RAID 6). ZFS doesn't like hardware controllers, but you will actually need them. I think the minimum number of disks to set up ZFS RAID-Z3 (people will correct me) is something like six. Because FreeBSD doesn't have a hot-swap daemon, to get complete functionality you have to have a RAID card in place but release control to ZFS. The best place to check these facts is the FreeNAS forums. iXsystems sells pre-built systems, but at $10,000 U.S. for an entry-level file server they aren't cheap.

ZFS isn't cheap either. You need tons of RAM, as in more than 128 GB, to do anything serious (I have machines with 512 GB of RAM in the lab).

Unless you have a compelling reason to use ZFS, a dedicated storage engineer to manage it, and an employer to purchase the hardware, I would go with DragonFly any time, day or night, for a home user. By the way, if I had to use ZFS I would use NAS4Free. Do your research.


----------



## xibo (Nov 17, 2013)

I can recommend LSI controllers, too. I've got a 6x2 (+3) RAID 1 ZFS pool connected to an LSI 2008 (add-on card) and an LSI 2308 (on-board) which works amazingly well over here - but as @Oko said, at the expense of 256 GB of memory, a 1x2 ZIL, and LOG device(s).


----------



## usdmatt (Nov 17, 2013)

As already mentioned, for a few disks it's much simpler to just go with the on-board controller. Most motherboards support AHCI mode these days as well which means you get hot-swap support if your case has a hot-swap capable backplane.

For more disks, LSI are generally preferred at the moment for two main reasons:

- They are one of the few manufacturers that actually provide proper HBA firmware, rather than a full RAID controller with a JBOD/single mode.
- The current driver in FreeBSD is provided and supported by LSI.

--



> IIRC ZFS RAID-Z3 which should be supported by FreeBSD has fault tolerance to three failed disks which is better than RAID 6 (hardware or software) fault tolerance of two (I think FreeBSD doesn't have support for software RAID 6)



RAID-Z3 creates 3 pieces of parity and RAID-Z2 creates 2, making RAID-Z2 effectively the ZFS equivalent of RAID 6. It makes no real sense to say one is 'better' than the other. RAID-Z3 should be slower (as it's computing parity 3 times for each stripe), but if you're using large disks in a big pool you may want the extra redundancy. It's up to the user to decide which RAID level meets their needs.
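The capacity trade-off is simple arithmetic: each extra parity level gives up one disk of usable space per vdev. A quick sketch with hypothetical numbers (a 6-disk vdev of 4 TB drives):

```shell
# Usable capacity of a RAID-Z vdev, roughly: (disks - parity) * disk size.
disks=6
size_tb=4
echo "RAID-Z2 usable: $(( (disks - 2) * size_tb )) TB"   # 2 parity disks -> 16 TB
echo "RAID-Z3 usable: $(( (disks - 3) * size_tb )) TB"   # 3 parity disks -> 12 TB
```

So RAID-Z3's extra fault tolerance costs one disk of capacity (and extra parity computation) compared with RAID-Z2 on the same hardware.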



> ZFS doesn't like hardware controllers but you will actually need them



Don't really know what you're getting at here.



> I think that minimal number of disks (people will correct me) to set up ZFS RAID-Z3 is something like six



It's 4, but the recommended number for RAID-Z3 is probably 7 or 11. I don't know what relevance this really has here, though, especially if you're not sure of the correct figure.



> Due to the fact that FreeBSD doesn't have hot swap daemon to have complete functionality you have to have RAID card on but release control to ZFS



All the 'hot swap daemon' does is automate the process of doing a `zpool replace`. FreeBSD has supported hot-swap hardware for years, even directly on motherboards if they have AHCI, as mentioned above. It just doesn't automatically use ZFS spares if a disk fails. Having a hardware RAID controller makes no difference to the functionality of ZFS on FreeBSD so I don't see why you'd *have* to have one.
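The manual step being automated is small; a sketch assuming a pool named `tank` and hypothetical device names:

```shell
# After physically swapping the failed disk (hot-swap works fine with AHCI),
# tell ZFS to resilver onto the new device. Names here are hypothetical.
zpool replace tank ada1 ada2

# Watch resilver progress until the pool returns to ONLINE.
zpool status tank
```

That one `zpool replace` is all the "daemon" would do for you automatically.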



> You need tons of RAM as in more than 128 GB to do anything serious



The more RAM the better, and if it's a 'serious' storage system then a decent amount of RAM is probably going to be one of the cheapest components of the system. Having said that, many people run ZFS fine with 16 GB or less. You just might want to manually limit the ARC on a low-RAM system that has multiple roles, because ZFS was designed primarily for large-scale storage systems and expects that it can use all the system's RAM if it wants.
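Capping the ARC on FreeBSD is a one-line boot tunable; a sketch (the 4 GB value is an arbitrary example, tune it to your workload):

```shell
# /boot/loader.conf -- limit the ZFS ARC to 4 GB (takes effect at next boot)
vfs.zfs.arc_max="4G"
```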



> Unless you have compelling reason to use ZFS, dedicated storage engineer to manage ZFS and your employer to purchase the hardware



ZFS is easier to use than any combination of RAID system and filesystem I've ever used before (so a dedicated storage engineer would be more applicable to traditional RAID systems than ZFS), and for most systems it shouldn't really require any changes to hardware choice.



> I do not use ZFS but I did some research
> Do you research



Yes, do your research. A lot of what you've written is either misinformation or makes little sense, and is fairly irrelevant to the original poster's question about how best to attach his disks for a ZFS mirror. I wouldn't have hijacked the thread further by replying, but I would rather not have users find this post in the future and take incorrect or unclear information as gospel.


----------



## Ben (Nov 18, 2013)

Thanks for your input. Very helpful!


----------



## gkontos (Nov 18, 2013)

usdmatt said:

> A lot of what you've written is either misinformation or makes little sense, and is fairly irrelevant to the original poster's question about how best to attach his disks for a ZFS mirror. I wouldn't have hijacked the thread further by replying, but I would rather not have users find this post in the future and take incorrect or unclear information as gospel.



You did a very good job actually. @Oko was writing things that could mislead other people. It is amazing how some people write full BS without considering the impact that this might have on other users.


----------



## DutchDaemon (Nov 18, 2013)

Cool down. Closed.


----------

