# SoftRaid - Is this possible?



## MasterCATZ (May 11, 2011)

I have an old server mainboard with 16 SATA ports and 7x PCI slots, and I want to make some use of it. I am thinking of getting 7x old 8-port PCI SATA cards and throwing in all my old HDDs, in the 500 GB ~ 1 TB size range. What I am wanting is something like this.

Expandable Volume

A way to expand the array with redundancy without rebuilding the entire array (currently I have a 120 TB SAS array and it takes almost 2 months to add in disks). I.e. old data stays where it is, and when extra drives are added the redundancy lies within their group but the volume expands (like how MS has the option to expand a volume with logical drives).

That way if there is a failure, only data in that group has a possibility of being corrupted and the rest is still fine. I am thinking maybe a mixture of JBOD with RAID 5.

I.e.

Group 1 8x RAID 5 Mode HDDs
Group 2 8x RAID 5 Mode HDDs
Group 3 8x RAID 5 Mode HDDs
Group 4 8x RAID 5 Mode HDDs

Volume 1 consists of JBOD groups 1 ~ 4. As new groups are added, new data is written to the blank space. The OS would support WOL (so it drops into S3 when not in use and wakes on a LAN packet).

But here is the big one: power management that spins down the HDDs and allows folder browsing without turning on every HDD in the system. It would be great if only the HDDs containing the data power up.

To sum it up, I need to cut back on my power bill, and running 70-odd HDDs 24/7 is not going to help.

I also want to move away from hardware RAID. The cards just die and are a pain to get replacements for when you need them!


----------



## SirDice (May 11, 2011)

Have a look at ZFS.
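For a first experiment, creating a pool from one group of disks is a one-liner. A minimal sketch, assuming hypothetical FreeBSD device names `ada0` through `ada7` (adjust to whatever your controllers actually expose):

```shell
# Hypothetical device names; one 8-disk RAIDZ group (single parity, like RAID 5)
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7

# Check the layout and health of the pool
zpool status tank
```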


----------



## MasterCATZ (May 12, 2011)

Yep, I was looking into that; I'm just looking for a little help to get me started. I am still unsure whether I should have a separate volume for each type of data I want stored or not.

I am guessing a thrown-together 50 TB array would take forever to verify data on, and maybe I should break it down a little (the downside being lots of lost space, which I hate to have).

Mostly VM's / Media / ISO's. 

Is there a way to prioritize what data gets stored in a pool or not? I.e. media gets stored on a group of IDE disks (unable to expand again) but if it fills up the data can then go into another pool of faster HDDs used for other storage? 

Also, is there any way to get ESX to piggy-back off FreeBSD, or any other good virtual server systems?


----------



## MasterCATZ (May 12, 2011)

I wanted to edit my last post but could not find the option to do so. I am finding http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s04.html handy. 

Is there any place to go to find out more details about how to identify a failed drive? Currently what I use flashes the HDD's lights so you can find them; are there any commands for this?

Does FreeBSD have the ability to put an HDD in idle mode (still powered on, just at a lower RPM)?
Does FreeBSD have the ability to read an HDD's SMART data for temps etc.?
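On the spin-down and SMART questions, a sketch of what's available, assuming the disk shows up as `ada0` (the device name is a placeholder): the base system's camcontrol(8) can put a disk into standby, and smartctl from the sysutils/smartmontools port reads SMART data, temperatures included.

```shell
# Full SMART report for the disk, including the temperature attribute
# (requires the sysutils/smartmontools port)
smartctl -a /dev/ada0

# Spin the disk down into standby with the base system's camcontrol(8)
camcontrol standby ada0

# There is no generic LED-blink command for plain SATA ports; a common
# trick for locating a drive is to force activity on it so its LED lights up
dd if=/dev/ada0 of=/dev/null bs=1m count=512
```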


----------



## SirDice (May 12, 2011)

As I understood it, you shouldn't create RAIDZ volumes with more than 9 drives. You can, however, string several of those together, creating one large volume. What data goes where is up to you.
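Stringing several RAIDZ groups together is what `zpool add` does: each group becomes a vdev in the same pool, and the volume grows without rebuilding the existing data. A sketch with placeholder device names:

```shell
# Pool starts as one 8-disk RAIDZ group
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7

# Later, grow it by adding a second 8-disk RAIDZ group as a new vdev;
# existing data stays put, new writes go mostly to the new vdev's free space
zpool add tank raidz ada8 ada9 ada10 ada11 ada12 ada13 ada14 ada15

# Confirm the expanded capacity
zpool list tank
```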

Keep in mind that the power requirements you have won't work with RAID. A file will be "smeared out" over all the drives in a volume, hence it'll activate all of them when reading/writing to it.

As for virtualization, only emulators/virtualbox-ose will run properly.


----------

