# Question about ZFS/Raidz



## Blu (Mar 8, 2011)

I have a question about the ZFS/RAIDZ setup that I plan on using for my file server (making it a NAS solution). I plan on having 20 2TB drives; should I opt for raidz2 for better redundancy? Also, I hear ZFS/RAIDZ eats a lot of RAM. How much RAM would you say? I'm willing to put 16GB in if it will help the overall speed of my file server.

What are the best hard drives to use for RAIDZ? I was looking at the 2TB Greens since I've got a ton of them and they've never done me wrong, but a friend suggested Hitachi drives. Then again, I just read that WD bought Hitachi, or is going to in a few days/weeks. Will those still work? If I get a 16-port SAS/SATA card and then connect my other drives to my motherboard, will that work with software RAID? Also, does RAIDZ need a hard drive to install to? I was thinking of buying a cheap SSD for about $30-50 to be the main OS drive on it.

Another question: if my parity drive fails, will raidz tell me which drive failed, based on serial number or something, so I don't have to go through 20 drives to see what failed? Also, will it rebuild the parity drive, or any drive that fails on me, if I replace it with a new one? I'm basically looking for a massive storage system for my home network for HD movies and HD TV shows. I'm going to post my current build for my file server and you guys tell me what you think.


Antec twelve hundred case
4x http://www.newegg.com/Product/Product.aspx?Item=N82E16817994077&cm_re=5_in_3-_-17-994-077-_-Product
Mobo: 880GMA (free from an old build I was going to do on my HTPC, but I needed more PCI slots)
Ram: 16GB Corsair (like I said, I've heard a lot of people say raidz and ZFS eat a LOT of RAM)
Power supply (to be determined) $50-$100
24 port sas card - $400 with shipping
12 4 pin molex connectors for the power for the drives.
30GB SSD $?????? Might buy this new or second hand, don't know how much it will cost so far.
H50 cooler for the heck of it - $70 (will get on sale for about $50)
AMD Sempron 145 2.8GHz (Socket AM3, 1MB L2 cache, 45W, retail box) - $50
UPS in case of a power outage: CyberPower CP1500AVRLCD (1500VA/900W, 8 outlets, AVR, LCD) - $160 on sale (or I'll get a cheaper one, but this could power my file server, my HDTV and my mom's computer)
2 120mm fans.

TOTAL: $3420

Hope prices go down a bit by the time I order, though. Should I swap the Sempron out for a cheap quad-core CPU as well? Or get one of the duals that are known for being able to unlock 4 cores? (If someone can tell me the name of that CPU again, I would love that.)

So what do you guys think? I probably won't build this for a few months yet, while I save for it. But yeah, let me know what you would change and such. Thanks!


----------



## phoenix (Mar 8, 2011)

Most, if not all, of your questions have been asked many times already, and the answers can be found via the search function.  

For a home server, the WD Green drives are okay.  You really can't beat the price.  However, they are not ideal for many reasons, all of which are detailed in many, many, many threads here.  If you can, avoid them.  If you can't get past the super-low price ... don't expect great performance.  

Don't skimp on the CPU.  A single-core Sempron will not be able to keep up with multiple raidz vdevs, especially if you enable compression and/or dedupe.  Get a quad-core, or more if possible.

Don't skimp on RAM.  Get the most you can afford.  For a server with 20 drives, 8 GB should be the minimum.  If you can afford the 16 GB, then get it.  ZFS will use it all for read cache, limiting the I/O to the disks, and increasing overall system performance.

Don't skimp on the power supply.  You'll need a beefy one to power all 20 drives at once.  Our 24-drive behemoths have 1300W redundant PSUs.  Under 1000W probably won't cut it.

As to the pool setup, you'll want to go with multiple raidz2 vdevs.  2 TB drives will take several days to resilver (rebuild).  If you lose a second drive while the first is rebuilding with raidz1 ... your data is gone!  With 20 drives, you can do 3 raidz2 vdevs using 6 drives each (8 TB usable space per vdev), and have 2 ports left over for spares, for the OS drives, or whatever.

If you use glabel(8) to label the drives according to where they are (disk01, disk02; or slot01, slot02, etc), and then build the pool using the /dev/label/<name> devices, you'll be able to find failed drives right away.
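
To make that concrete, here's a minimal sketch of the labeling and pool-creation steps (the pool name `tank`, the `disk01`-style labels, and the `da*` device names are placeholders for your own layout):

```shell
# Label each disk after the physical slot it sits in (names are examples).
glabel label disk01 /dev/da0
glabel label disk02 /dev/da1
# ...repeat for the rest of the drives...

# Build the vdev from the label devices, not the raw da* devices,
# so zpool status reports slot names instead of device numbers.
zpool create tank \
    raidz2 label/disk01 label/disk02 label/disk03 \
           label/disk04 label/disk05 label/disk06
```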


----------



## Blu (Mar 8, 2011)

So you're saying for every 8 drives to make a vdev and raidz2 it? So overall, out of all the drives, I would be losing six to parity? The vdevs you suggested only add up to 18 drives when I'm looking at doing 20; would it be OK to do it this way?

10 drives per vdev, with 2 for raidz parity, times two, so overall I would have 32TB of data. Also, I'll take your word on upgrading the CPU. I'm getting so many mixed reviews here: some people say 16GB of RAM is way overkill and that 2GB would be good enough; others disagree. I was also planning on trying unRAID out, but a friend told me that ZFS/RAIDZ was so much better and safer for protecting your data.

Is this SAS/Sata card good for adding the drives?
http://www.ncix.com/products/?sku=37999&vpn=AOC-SASLP-MV8&manufacture=SuperMicro
I plan on getting two of them, since my other choice is out of stock and EOL'd, and I fear this one might be EOL'd soon as well, so I might just pick up three or four (I found a store online with only 4 left in stock) and use two as backups in case anything bad happens to the current card. As for the CPU, should I risk it and go with a dual-core that has a chance of unlocking to a quad (I currently have one waiting to be put in my HTPC, and I'm hoping it's a quad), just get the sure thing, or possibly get a 6-core CPU? You really think I'll need a 1kW PSU for 20 drives? I've heard drives only use about 10 watts at full usage, but then again I'm getting mixed reviews on this as well. Also, would you suggest Hitachi or Samsung drives for this build? And is the UPS a good idea in case the power goes out while it's writing data? I would be able to send a command to the PC to shut down after, say, 5 minutes of no power and save my data, correct?


----------



## AndyUKG (Mar 8, 2011)

Hi,

I think people are generally exaggerating about ZFS and RAM. If you have at least 4GB, that is quite sufficient for ZFS to run normally under most circumstances (i.e. for a file server it should be fine). Additional RAM can be added to give more room to the ARC; this simply provides more read cache for better performance.
I'd agree with the comments from phoenix, possibly with the exception of the CPU. I have old P4-era Xeons running ZFS fine with compression, so you don't necessarily need anything amazingly powerful, unless you plan on using gzip compression (lzjb is the default and is CPU-light) or possibly dedup (I have no experience using dedup).

thanks Andy.


----------



## disi (Mar 8, 2011)

I read a lot of reviews on drives and heard about the Hitachi "Deathstar".


----------



## phoenix (Mar 8, 2011)

Blu said:

> So you're saying for every 8 drives to make a vdev and raidz2 it? So overall, out of all the drives, I would be losing six to parity? The vdevs you suggested only add up to 18 drives when I'm looking at doing 20; would it be OK to do it this way?
> 
> 10 drives per vdev, with 2 for raidz parity, times two, so overall I would have 32TB of data. Also, I'll take your word on upgrading the CPU. I'm getting so many mixed reviews here: some people say 16GB of RAM is way overkill and that 2GB would be good enough; others disagree. I was also planning on trying unRAID out, but a friend told me that ZFS/RAIDZ was so much better and safer for protecting your data.



It all depends on which is more important to you:  speed or storage space.

If you want speed, you need multiple small vdevs.  As in, 3x 8-disk raidz2.

If you want storage space, you want fewer, large vdevs.  As in, 2x 10-disk raidz2.

The more drives you have in a raidz vdev, the longer it takes to resilver a dead drive, and the higher your chances of losing a second drive.

When I did my first ZFS box, I created a single 24-drive raidz2 vdev.  Created fine, worked fine, copied data into it without issues; everything looked good.  Until the first drive died.  I spent over two weeks trying to get that drive to resilver.  It never finished.  I ended up having to rebuild the box using 3x 8-drive raidz2 vdevs.

The newest ZFS box I'm building, I'm using 4x 6-drive raidz2 to get the better performance.  ZFS stripes writes across all the vdevs (basically RAID0), so the more vdevs you have ... the better your overall performance.
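
The space side of that trade-off is easy to sanity-check with plain sh arithmetic (raw figures, ignoring metadata overhead and base-2 vs base-10 sizes):

```shell
# Usable raw capacity: vdevs * (disks per vdev - parity disks) * TB per disk
raidz_usable() {
    # $1 = number of vdevs, $2 = disks per vdev,
    # $3 = parity disks per vdev, $4 = TB per disk
    echo $(( $1 * ($2 - $3) * $4 ))
}

raidz_usable 3 8 2 2    # 3x 8-disk raidz2, 24 drives -> 36 (TB)
raidz_usable 2 10 2 2   # 2x 10-disk raidz2, 20 drives -> 32 (TB)
```

The 24-drive layout yields more total space only because it uses more drives; per drive, the wider vdevs are more space-efficient but slower to resilver.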



> Is this SAS/Sata card good for adding the drives?
> http://www.ncix.com/products/?sku=37999&vpn=AOC-SASLP-MV8&manufacture=SuperMicro



I would avoid the Marvell chipsets; I haven't seen any positive reports from ZFS users.  Look into the AOC-USAS-L8i cards instead.  These use the LSI1068 chipset, fully supported by the mpt(4) driver in FreeBSD.  They're inexpensive, but perform well.  (Plus, it's what we're using in our newest ZFS box instead of 3Ware 9650 controllers.)



> As for the cpu, should I risk it and go with a dual core that has a chance of unlocking to a quad?



For a home server, sure, why not?  



> You really think I'll need a 1k psu for 20 drives? I've heard drives only use about 10 watts of power in full usage mode, but then again I'm getting mixed reviews on this as well like crazy,



It's not only the "full load" you need to worry about, but also the "peak power" used to spin up all the drives.  Do you really want to risk brown-outs internally?  *NEVER* skimp on power.  Dirty power, low voltage, over-voltage, spikes, etc. will do the same damage to low-end hardware as to expensive "enterprise" hardware; it doesn't discriminate.

If you are using Green drives, you'll be hitting the "peak" power load a lot, as the drives go to sleep, spin down, wake up, spin up, etc.



> also would you suggest Hitachi or samsung drives for this build?



Drive preference is a personal thing.    Everyone has their favourites.  Currently, we're using Western Digital RE3 Black drives (500 GB and 1 TB), and Seagate 7200.11 1 TB, and Seagate 7200.12 1.5 TB drives with great success.



> And is the UPS a good idea so if the power goes out when it's writing data? I would be able to send the command to the pc to shut down after say 5 minutes of no power and save my data correct?



Definitely desirable if you can afford it.  ZFS can tolerate loss of power fairly well, depending on how large the caches are on the drives, and whether or not they honour the "cache flush" command.  If you can do orderly shutdowns, all the better.  

My home ZFS server is a crappy P4 box that locks up several times a week due to heat issues.  In the almost 2 years it's been running at home, I haven't lost a single file or had a single block of permanent corruption (as reported by `zpool status`).


----------



## phoenix (Mar 8, 2011)

AndyUKG said:

> I think people are generally exaggerating about ZFS and RAM, if you have at least 4GB then this is quite sufficient that ZFS will run normally under most circumstances (ie for a file server should be fine).



While it's true you can tune a ZFS system to work with less RAM (my home NFS server only has 2 GB, for example), the more RAM you can put into a ZFS box, the better things will run.  

Especially in a system with 20 drives, and tens of TB of disk space.  The more of that you can cache in the ARC, the better.  

And, remember, an L2ARC (cache vdev) is not a replacement for RAM, as you need RAM to track the contents of the L2ARC.

And, enabling dedupe requires storing the dedupe table (DDT) in ARC/L2ARC, so even more RAM is needed.

If you tune your system, and don't use all the extra features, you can get away with < 8 GB of RAM.  But if you want to use all the extra, fancy features of ZFS, stuff the box as full of RAM as your budget can support.
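
For reference, the main knob on FreeBSD lives in /boot/loader.conf; a sketch of capping the ARC on a low-RAM box (the 4G figure is just an example value, not a recommendation for this build):

```shell
# /boot/loader.conf -- limit how much RAM the ZFS ARC may consume
vfs.zfs.arc_max="4G"
```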


----------



## Blu (Mar 8, 2011)

Can you link me to the SAS/SATA card you suggested? The best I can find is a refurb on eBay for the same price as the new ones went for, actually a little more.
http://cgi.ebay.ca/Supermicro-AOC-U..._EN_Networking_Components&hash=item1e61fe174d

I also read that since I'm using a Gigabyte board I need to disable HPA or it will just ruin my array. Is this correct?


----------



## Blu (Mar 8, 2011)

Sorry for the double post, I can't edit my previous post there.
I'm starting to think of going with two vdevs of 10 drives each and then raidz2, to have two parity drives (that's correct, right?). Also, should I get a small SSD for the OS to run ZFS/RAIDZ over the network? Thanks again guys for the help; you've done a lot to convince me what to choose. I was going to go unRAID, but it seems like it's JBOD, so I would need to, say, make a movies folder on each drive to make one massive drive, and that's just a headache.


----------



## Blu (Mar 9, 2011)

I forgot to ask as well: what's the best SATA port card for ZFS/RAIDZ, since that one I showed earlier has the Marvell chip and you guys say it's not a good idea?


----------



## phoenix (Mar 9, 2011)

Blu said:

> Can you link me to the SAS/SATA card you suggested? The best I can find is a refurb on eBay for the same price as the new ones went for, actually a little more.
> http://cgi.ebay.ca/Supermicro-AOC-U..._EN_Networking_Components&hash=item1e61fe174d



SuperMicro's AOC-USAS-L8i at cdw.ca



> I also read that since I'm using a Gigabyte board I need to disable HPA or it will just ruin my array. Is this correct?



What's HPA?


----------



## phoenix (Mar 9, 2011)

Blu said:

> I'm starting to think of going with two vdevs of 10 drives each and then raidz2, to have two parity drives (that's correct, right?)



Each raidz2 vdev would have double-parity redundancy, meaning you could lose 2 drives in each vdev (4 drives in total), before losing all the data in the pool.  The parity data is intermixed with the normal data; it's not 2 separate parity drives.



> Also, should I get a small SSD for the OS to run ZFS/RAIDZ over the network?



If you have room in the case, then I'd go for an SSD (or two) for the OS.  You only need about 10 GB for the OS and any apps you want to install.  Then you can use the rest of the SSD for swap and L2ARC space.
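
As a one-line sketch (with a hypothetical pool name `tank` and SSD partition `ada1p2`), turning the leftover SSD space into L2ARC later looks like this:

```shell
# Add a spare SSD partition as an L2ARC (cache) device for the pool.
# Losing a cache device never costs data -- only cached reads.
zpool add tank cache /dev/ada1p2
```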


----------



## Blu (Mar 9, 2011)

Say I lost three drives in one of the 10-drive arrays; then I would lose all that data, correct? What OS would I need to install for this, as I just want it to act as a media/file server on my home network? With raidz, is it easy to tell which hard drive failed, or would I have to go through every single drive to see which one failed?

HPA (Host Protected Area) is an option on Gigabyte motherboards that really seems to mess up RAID arrays, from what I've read.


----------



## Blu (Mar 9, 2011)

Also, can you guys suggest some SAS/SATA cards that work well with ZFS/RAIDZ? Hopefully from a place that accepts PayPal and ships to Canada.


----------



## Blu (Mar 9, 2011)

phoenix said:

> SuperMicro's AOC-USAS-L8i at cdw.ca



When I check the card on SuperMicro's site, it lists only a few compatible motherboards and says "UIO motherboards". Should I just disregard that? It should work fine on a normal mobo, right?


----------



## Blu (Mar 9, 2011)

And one more thing: is it possible to, say, start with 10 drives and make two vdevs with raidz2, and then in a couple of weeks add 10 more drives (5 to each vdev) without needing to rebuild the whole vdev? Also, how long do you think it would take the parity drives to rebuild a failed drive?


----------



## jalla (Mar 9, 2011)

Blu said:

> And one more thing: is it possible to, say, start with 10 drives and make two vdevs with raidz2, and then in a couple of weeks add 10 more drives (5 to each vdev) without needing to rebuild the whole vdev? Also, how long do you think it would take the parity drives to rebuild a failed drive?



You can't add disks to a vdev, but you can add new vdevs to an existing pool.

The time to rebuild depends very much on the number and size of disks, and the level of filesystem activity. Replacing a 500GB disk in a 3-disk raidz with no other read/write activity takes about 2 hours. I guess that would scale pretty much linearly with disk/vdev size, but with any amount of ongoing I/O the rebuild time might actually go through the roof.
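
In command form, a sketch of that grow-by-vdev pattern (pool and label names hypothetical):

```shell
# Start with one 10-disk raidz2 vdev (listing all ten label devices):
zpool create tank raidz2 \
    label/disk01 label/disk02 label/disk03 label/disk04 label/disk05 \
    label/disk06 label/disk07 label/disk08 label/disk09 label/disk10

# Later, stripe a second raidz2 vdev into the same pool.
# No resilver happens; the new space is usable immediately.
zpool add tank raidz2 \
    label/disk11 label/disk12 label/disk13 label/disk14 label/disk15 \
    label/disk16 label/disk17 label/disk18 label/disk19 label/disk20
```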


----------



## AndyUKG (Mar 9, 2011)

Blu said:

> And one more thing: is it possible to, say, start with 10 drives and make two vdevs with raidz2, and then in a couple of weeks add 10 more drives (5 to each vdev) without needing to rebuild the whole vdev? Also, how long do you think it would take the parity drives to rebuild a failed drive?



As Jalla mentioned, the way you need to do this with ZFS, if you want to start your config with 10 drives and then grow to 20, is to first create one RAIDZ2 vdev of 10 drives, then create another when you get the other 10. If the two RAIDZ2 vdevs are in the same zpool, the extra space is automatically available to any existing filesystems (i.e. all filesystems will grow by the space added), which I guess is your requirement.

Re which SAS/SATA card: I am using cards based on the Sil3124 (eSATA) with port-multiplier disk shelves.

thanks Andy.


----------



## danbi (Mar 9, 2011)

To repeat again, on zpool expansion... ZFS is designed for the typical "always grow" installation. You can start with a single vdev of, say, 5 drives. The vdev can be raidz1 or raidz2 (one or two drives "wasted" on redundancy). Then you could add another vdev of 5 drives, then another, etc.

In theory, you could add a different type of vdev to the pool: a single drive (no redundancy), a mirror, raidz1, or raidz2. This is rarely recommended, but possible in principle.
Again, remember that if you have a single non-redundant vdev, your entire zpool is non-redundant!

When you add new vdevs, there is no resilver, nothing is rebuilt or recalculated. It happens instantly. Any old data remains where it is (on old, existing vdevs), any new data is spread all over.

Currently, it is not possible to remove vdevs from a zpool, so plan wisely.


----------



## phoenix (Mar 9, 2011)

Blu said:

> When I check the card on SuperMicro's site, it lists only a few compatible motherboards and says "UIO motherboards". Should I just disregard that? It should work fine on a normal mobo, right?



UIO cards are normal PCIe (PCI-Express) cards.  The only difference is that SuperMicro ships them with the backplane connector on the "wrong" side of the card.  That way, they can sell "UIO" motherboards for a premium, that will work with the UIO cards.  However, if you just remove the backplane from the card, it works perfectly well in any motherboard with PCIe slots.

There are even some places online where you can order normal connectors for these cards, if you really want to screw them into the case.


----------



## phoenix (Mar 9, 2011)

Blu said:

> Say I lost three drives in one of the 10-drive arrays; then I would lose all that data, correct?



You would lose all data in the pool.



> What OS would I need to install for this, as I just want it to act as a media/file server on my home network?



  Obviously, FreeBSD.  



> With raidz, is it easy to tell which hard drive failed, or would I have to go through every single drive to see which one failed?



Are you not reading what I write?  

Use glabel(8) to label the disks with names that correspond to where they are located in the box.  ie "disk01", "disk02", etc.  Or even "slot1", "bay03", etc.

Then you create the ZFS pool using the /dev/label/<name> devices.  When a drive fails, it will be listed in the output of `# zpool status`, making it very easy to find and replace.
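
A sketch of the replacement workflow once a labeled drive dies (assuming the failed drive was labeled slot-05 and its replacement shows up as da4; all names hypothetical):

```shell
# zpool status shows the faulted member by label, e.g. label/slot-05.
zpool status tank

# Swap the physical drive in slot 5, give the new disk the same label...
glabel label slot-05 /dev/da4
# ...and tell ZFS to resilver onto it.
zpool replace tank label/slot-05
```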


----------



## phoenix (Mar 9, 2011)

Blu said:

> And one more thing: is it possible to, say, start with 10 drives and make two vdevs with raidz2, and then in a couple of weeks add 10 more drives (5 to each vdev) without needing to rebuild the whole vdev? Also, how long do you think it would take the parity drives to rebuild a failed drive?



Le sigh. 

Now would be a good time to start reading through the man pages for zpool(8) and zfs(8), along with the ZFS Best Practices Guide, and maybe some of the historical blog posts about ZFS.  This is very basic information that is covered very well there.

Searching the forums for threads on ZFS would also be helpful, as this exact question is answered at least a dozen times already.


----------



## Blu (Mar 10, 2011)

AndyUKG said:

> As Jalla mentioned, the way you need to do this with ZFS, if you want to start your config with 10 drives and then grow to 20, is to first create one RAIDZ2 vdev of 10 drives, then create another when you get the other 10. If the two RAIDZ2 vdevs are in the same zpool, the extra space is automatically available to any existing filesystems (i.e. all filesystems will grow by the space added), which I guess is your requirement.
> 
> Re which SAS/SATA card: I am using cards based on the Sil3124 (eSATA) with port-multiplier disk shelves.
> 
> thanks Andy.



OK, that sounds perfect. So I would be able to add 10 more drives as raidz2 to the zpool and have it show up as two networked drives of 16TB each (in Windows, under My Computer -> networked drives), right?


----------



## Blu (Mar 10, 2011)

Another odd question: does raidz show me the drives' serial numbers to make it easier to keep track of the drives? Or am I going to have to add the hard drives one by one to keep track of which is which, and label them in FreeBSD as I do this? I know some Windows programs will show you a drive's serial number, and I figured this would make it easy to keep track of them: write the serial on a piece of paper and tape it to the enclosure the drive is in, so it's easier to know which drive is which when I use glabel to mark them in the zpool. Also, you say a failed drive will be listed in the command "zpool status"; don't you mean it won't appear there? Or will that just show me which drives are missing from the zpool? Just want to make sure I've got all this right, because I'm hoping to start ordering some parts soon.


----------



## Blu (Mar 10, 2011)

Also, what would you guys say my data would be safer on: an unRAID system or a raidz2 system?


----------



## AndyUKG (Mar 10, 2011)

Blu said:

> OK, that sounds perfect. So I would be able to add 10 more drives as raidz2 to the zpool and have it show up as two networked drives of 16TB each (in Windows, under My Computer -> networked drives), right?



No, ZFS is more advanced than that. You will have a pool of 32TB (less, actually, as a 2TB drive really gives about 1.8TB). This 32TB can be configured as a single volume, or as many smaller volumes as you require. Check the ZFS documentation...

ta Andy.


----------



## phoenix (Mar 10, 2011)

Blu said:

> Another odd question: does raidz show me the drives' serial numbers to make it easier to keep track of the drives? Or am I going to have to add the hard drives one by one to keep track of which is which, and label them in FreeBSD as I do this?



Seriously?  Are you even reading this thread anymore?  Or just posting the same question over and over?  This is the third time you've asked the *exact* same question.  The answer has not changed.

YOU LABEL THE DRIVES *BEFORE* YOU USE THEM WITH ZFS!!



> I know some Windows programs will show you a drive's serial number, and I figured this would make it easy to keep track of them: write the serial on a piece of paper and tape it to the enclosure the drive is in, so it's easier to know which drive is which when I use glabel to mark them in the zpool.



Or, you connect the first drive, use glabel to label it using a nice name like "slot-01".  Then connect the second drive, use glabel to label it using a nice name like "slot-02".  Repeat until all the drives are labeled.  *Use whatever labeling scheme makes sense for your hardware layout.*

*THEN* you create the raidz2 vdev using the label devices (/dev/label/slot-01), not the physical devices (/dev/da0).

That way, when you run *zpool status*, it shows the labels.  Thus, if there's a drive marked as "OFFLINE" or "DEGRADED", you just look at the label shown in the zpool output and you know which drive to replace (label/slot-01? Ah, the drive in slot-01).

What is so hard to understand about that?


----------



## Blu (Mar 11, 2011)

Sorry about asking the same question over and over; I'm just trying to get everything answered, and I should've gone back in this thread and re-read some of the answers you guys had given me. So I think I've got this build set up:

1x Antec Twelve hundred case
4x 5.25" to 5 3.5" Bay converters
1x 1000watt corsair psu
1x 30GB-64GB SSD drive for the OS
1x 880GMA motherboard left over from the HTPC build (never used because I needed more PCI slots)
2x 2x4GB Mushkin DDR3 RAM
2x 8-channel storage cards
1x 3.4GHz AMD quad-core CPU
20x WD 2TB Green drives, or mix them up with other 2TB drives that are on sale; you can mix and match drives with software RAID, right?
4x Multi colored 120mm fans lol, want to make it look funky.

So overall, does this look like a solid raidz build, with two vdevs both running raidz2 in one zpool (did I say that correctly?)? I want it to show up as two massive stores under My Computer / networked drives, at 16TB each. I know it will be smaller because of actual drive sizes and so on, but you get my point, right? Did I pick the right parts? I know I'll probably need more 4-pin to 3x 4-pin connectors for the PSU and a lot of SATA cables as well. But will this run a file server well enough for storing music, movies, TV shows, applications to back up and such? Let me know if there's anything I need to or should change.


----------



## Blu (Mar 14, 2011)

phoenix said:

> What is so hard to understand about that?


Nothing really, I just wanted to see if I could have taken a shortcut, but loading a fresh copy of the OS up and down won't be bad at all. I'm just thinking about my current restart times: about 5-10 minutes of pure hang time at the load screen, when I'm used to about a 30-second restart. Darn Windows corruption somehow.  But yeah, can you take a look at my build above this post and let me know if I've gone the right way? Thanks.


----------



## Blu (Mar 17, 2011)

Could someone tell me if this build looks good? I would like to start ordering parts before they all skyrocket from the Japanese disaster.


----------



## aragon (Mar 18, 2011)

Curious to know how you like that Antec case.


----------



## phoenix (Mar 18, 2011)

Blu said:

> 1x Antec Twelve hundred case
> 4x 5.25" to 5 3.5" Bay converters
> 1x 1000watt corsair psu
> 1x 30GB-64GB SSD drive for the OS
> ...



Hardware looks fine.


----------



## AndyUKG (Mar 18, 2011)

Blu said:

> 20x WD 2TB Green drives, or mix them up with other 2TB drives that are on sale; you can mix and match drives with software RAID, right?



Hi,

Those drives will be 4K advanced-format drives. There now seems to be a good way to handle these in FreeBSD, see: http://forums.freebsd.org/showthread.php?t=21644. If you were playing it really safe, you might go for non-4K drives.

Regarding mixing: just make sure that if you buy 4K drives you mix them with other 4K drives, or 512-byte with 512-byte.
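
For the curious, the approach in that thread boils down to forcing a 4K minimum block size (ashift=12) by building the pool through a 4K gnop(8) shim; a rough sketch with hypothetical pool and label names:

```shell
# Create a transparent provider that advertises 4K sectors...
gnop create -S 4096 /dev/label/disk01
# ...create the pool through it so ZFS picks ashift=12...
zpool create tank raidz2 label/disk01.nop label/disk02 label/disk03 \
    label/disk04 label/disk05 label/disk06
# ...then remove the shim; the alignment sticks for the life of the vdev.
zpool export tank
gnop destroy /dev/label/disk01.nop
zpool import tank
```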

Thanks Andy.


----------



## Sebulon (Mar 19, 2011)

Hi,

just raising a concern about those storage controllers. They seem to be AOC-USAS-L8i cards, which are what Supermicro calls UIO. UIO cards are (as far as I understand it) PCIe cards turned backwards, and therefore only fit on Supermicro boards with the corresponding UIO slot(?)

Anyone please correct me if I'm wrong, cause this sounds almost too cheap to be true =)

/Sebulon


----------



## carlton_draught (Mar 21, 2011)

phoenix said:

> It all depends on which is more important to you:  speed or storage space.
> 
> If you want speed, you need multiple small vdevs.  As in, 3x 8-disk raidz2.


Thanks for posting all this phoenix, it is obvious that all of the experience you have is "hard won", and it would be unwise to ignore it. 

Especially interesting about the power supply issues of a large NAS. If you do the math, you are correct: 39W per HDD is a conservative figure to use, so 20*39 is 780W. Add some power for your CPU, mobo and 8GB of RAM and you are near 1000W.

Do you have any recommendations for PSU brands/models? I like the Seasonic X-650 as it is quiet, extremely efficient, and has great electrical performance. Unfortunately, the X-850 is as high as their X range goes. Of course, I am not looking to build a 20-HDD NAS; 8 is probably as many as I would go.


----------



## User23 (Mar 21, 2011)

carlton_draught said:

> 39W per HDD is a conservative figure to use,



39W???

You mean max power consumption at startup? A good storage controller usually starts the drives one by one, not all at once, and after startup the consumption is lower.

6.x W
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701229.pdf

or up to 

12.x W
http://www.hitachigst.com/tech/tech...F64DBCC8825782300026498/$file/US7K3000_ds.pdf

with a 3TB 7200rpm drive.


----------



## carlton_draught (Mar 21, 2011)

User23 said:

> 39W???
> 
> You mean max power consumption at startup? A good storage controller usually starts the drives one by one, not all at once, and after startup the consumption is lower.
> 
> ...


How about 3A on a 12V rail? In other words, 3*12 = 36W, from the page I linked earlier.

What happens when your drives have gone to sleep and someone wants to copy a file from one raidz2 pool to the other? Does your good controller stage them, or do they all start up at once?


----------



## phoenix (Mar 22, 2011)

Sebulon said:

> just raising a concern about those storage controllers. They seem to be AOC-USAS-L8i cards, which are what Supermicro calls UIO. UIO cards are (as far as I understand it) PCIe cards turned backwards, and therefore only fit on Supermicro boards with the corresponding UIO slot(?)



Incorrect.

UIO boards are standard PCIe (PCI-Express) cards, and will work in any PCIe slot.

The difference between a UIO board and a standard PCIe board?  Which side of the card the bracket attaches to.  UIO boards have reversed PCI brackets, so you can't use the included bracket in a normal case with a normal motherboard.

However, if you remove the bracket, the card works perfectly well in any PCIe slot.  And you can even buy "normal" brackets for these cards, if you really want to screw it down to the case.


----------



## phoenix (Mar 22, 2011)

carlton_draught said:

> Thanks for posting all this phoenix, it is obvious that all of the experience you have is "hard won", and it would be unwise to ignore it.
> 
> Especially interesting about the power supply issues of a large NAS. If you do the math, you are correct: 39W per HDD is a conservative figure to use, so 20*39 is 780W. Add some power for your CPU, mobo and 8GB of RAM and you are near 1000W.
> 
> Do you have any recommendations for PSU brands/models? I like the Seasonic X-650 as it is quiet, extremely efficient, and has great electrical performance. Unfortunately, the X-850 is as high as their X range goes. Of course, I am not looking to build a 20-HDD NAS; 8 is probably as many as I would go.



Our PSUs come with the rackmount chassis, and are hot-swappable with 4 "power-unit" bays.  Depending on the use, we fill either 3 or 4 of the bays.  Our largest is a 4-way setup with 1300W total power.

I have no experience with non-rackmount PSUs.  The last PSU I bought was one of the first modular PSUs, an X-Power, for my ancient desktop.  (The local computer shop laughed at me when they saw the sea of cables sticking out of the PSU, thinking it was some kind of joke; a year or two later, pretty much every PSU company had come out with a modular version.)

All I can recommend is to not skimp on the PSU.


----------



## Sebulon (Mar 22, 2011)

phoenix said:

> Incorrect.
> 
> UIO boards are standard PCIe (PCI-Express) cards, and will work in any PCIe slot.
> 
> ...



Oh, thank god! =)


----------



## User23 (Mar 24, 2011)

carlton_draught said:

> How about 3A on a 12V rail? In other words, 3*12=36W? Here. From the page I linked earlier.
> 
> What happens when your drives have gone to sleep and someone wants to copy a file from one raidz2 pool to the other? Does your good controller stage them, or do they all start up at once?



Yes, you are right. I am pretty sure everyone with disk arrays as big as this lets the disks sleep and wake on demand.


----------



## carlton_draught (Mar 25, 2011)

User23 said:

> Yes, you are right. I am pretty sure everyone with disk arrays as big as this lets the disks sleep and wake on demand.



From the TS:


> I'm basically looking for a massive storage system for my *home network* for HD movies and HD tv shows


I think there is a fair chance he will leave it on 24/7 for easy access (and maybe he has family who could be watching something served by the NAS at any time of day, so they would be ill-served by scheduled power-downs), but he won't want to pay to keep the disks spinning when they aren't being used. If that's the case, you want a PSU that can cope with high peak loads (~1kW) but is also efficient at low power draw.
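The arithmetic behind that ~1kW figure is easy to check; a back-of-envelope sketch, where the 39W per drive comes from earlier in the thread and the 200W system allowance is my own rough assumption:

```shell
# Back-of-envelope PSU budget (figures are this thread's assumptions)
drives=20
watts_per_drive=39        # conservative spin-up draw per HDD, per the thread
system=200                # rough CPU + motherboard + RAM allowance (assumed)
total=$((drives * watts_per_drive + system))
echo "peak budget: ${total}W"   # hence the ~1kW PSU recommendation
```

Spin-up is the worst case; once the drives are idle the draw falls to a fraction of this, which is why efficiency at low load matters too.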


----------



## naguz (Apr 14, 2011)

Blu said:

> Another odd question, does Raidz show me the drives serial number to make it easier to keep track of the drives? Or am I going to have to say add the hard drives one by one to keep track of which is which and label them in FreeBSD as I do this? I know some windows programs will show you the drives serial # and I figured this would be easy to keep track of them. To write down the serial on a piece of paper and tape it to the enclosure the drive is in and such to make it easier to know what drive is what when I use glabel to mark them in the zpool. Also you say it will be listed in the command "zpool status" don't you mean it won't appear there? or will that just show me what drives are missing from the zpool. Just want to make sure I got all this right cause I'm hoping to start ordering some parts soon.



Sorry for the bump, but no one answered the question, and I believe it is quite a common question to have when setting up a server with more than a few HDDs.

*No*, you don't need to plug in one hard drive, power on, label using glabel, power off, connect another drive, power on, label... etc.

If you install smartmontools, you can run *smartctl -i /dev/ad6* (or whatever the path to your hard drive device is) and get the serial number of the drive.

ex:

```
# smartctl -i /dev/ad6
smartctl 5.40 2010-10-16 r3189 [FreeBSD 8.2-RELEASE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.11 family
Device Model:     ST31500341AS
Serial Number:    9VS1CAVS
Firmware Version: CC1H
User Capacity:    1,500,301,910,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Thu Apr 14 14:51:36 2011 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
```

You can then label them with *glabel* accordingly. 

If you are going to have twenty drives, it will save you a bit of time.
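With twenty drives you could also loop over the devices and generate the glabel commands from the serials automatically. A sketch, assuming smartmontools is installed; the device names and the `serial-` label prefix are just examples, and it only *prints* the commands so you can review them before running anything:

```shell
# Print (not run) a glabel command per drive, named after its serial number.
# Device names below are examples; adjust the list for your controller.
for dev in /dev/ad6 /dev/ad8 /dev/ad10; do
  serial=$(smartctl -i "$dev" | awk -F': *' '/^Serial Number/ {print $2}')
  echo "glabel label serial-${serial} ${dev}"
done
```

Pipe the output through a careful read (or into `sh` once you're satisfied) and you've labelled the whole array in one pass.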

edit: of course, with the hardware you list, I guess you could just hot-plug the disks. Oh well, it could be useful info for someone anyway.


----------



## Nukama (Apr 14, 2011)

Another way of determining a drive's serial number, using only base-system tools:
`# [man]diskinfo[/man] -v /dev/ada0 | grep Disk\ ident`

For some redundancy in labeling you can use GPT labels.
`# [man]gpart[/man] create -s GPT ada0`
`# gpart add -b start -s size -t freebsd-zfs [b]-l label[/b] ada0`
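Putting those two commands together for a whole array, here is a dry-run sketch: it only echoes what would be run, so you can inspect it first. The `diskNN` labels, the ada device names, and the pool name `tank` are all made up for the example:

```shell
# Echo (don't run) the commands for a GPT-labelled 4-disk raidz2 pool.
# Each disk gets a stable /dev/gpt/* name that survives renumbering.
i=0
pool_devs=""
for dev in ada0 ada1 ada2 ada3; do
  label=$(printf 'disk%02d' "$i")
  echo "gpart create -s GPT ${dev}"
  echo "gpart add -t freebsd-zfs -l ${label} ${dev}"
  pool_devs="${pool_devs} gpt/${label}"
  i=$((i + 1))
done
echo "zpool create tank raidz2${pool_devs}"
```

The nice part of GPT labels is that `zpool status` then reports `gpt/disk00` etc. instead of raw device numbers, so a failed drive maps straight back to the label you taped on its bay.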


----------

