# ZFS based home storage



## _martin (Feb 24, 2012)

Hi guys, 

I'm trying to build a home NAS. I've seen some threads around the internet, but I think I'm more confused than I was before. I have a feeling this is kind of off-topic, so I'd rather put it here instead of in the HW section. If you have this kind of storage at home, please do share your experience. 

My goal: build a ZFS based storage with ~9TB usable, as quiet as possible (passive cooling + passive PSU?), keeping power consumption at a minimum. If possible, make it a home router too (1Gbit LAN to a 40MBps uplink). 

It seems it's impossible to meet all my demands. If I set the highest priority on silence - what board will suit my needs? Is it possible to power 3 x 3TB disks + aux on it (Atom based)? If not, and I need to buy something more powerful - what about power consumption? 

I want to ask somebody who has built this kind of storage to share his (her) experience - what _not_ to buy and/or what to avoid in general.

Thanks.


----------



## vermaden (Feb 24, 2012)

I have a 2 x 2TB setup with a mini-ITX motherboard here (Intel T8100 + 965 GMA), but it's not 'passive'; it consumes about 38W at the BIOS screen.

You can go for a dual-core Atom system, which should 'take' 3 x 3TB without a problem.

Passive or not, get a highly efficient power supply (80 PLUS certified); check here for details: http://80plus.org

You could also get a PICO PSU power supply; an example here: http://www.mini-box.com/picoPSU-160-XT


----------



## rajl (Feb 24, 2012)

My storage requirement isn't as large as yours, but I run a home ZFS storage server.  Cool and quiet is easy to achieve.  Get a decent quality fan for the CPU (stock fans are usually designed to be cheap, not quiet) and a high-efficiency power supply of at least 350 watts, and you'll be good to go.

Note that ZFS requires massive amounts of RAM to run properly due to the ARC.  The more RAM you have the better.

If you plan on using deduplication, you should plan on at least 2GB of free RAM per terabyte of available storage in order to have an adequate ARC.  If you are doing a mirror, you'll have 3TB of available storage, so your system will in theory need around 6GB of RAM.  If you are doing a raidz, you'll have approximately 6TB of available storage and 3TB of parity, needing 12GB of RAM.  A striped ZFS array (i.e. RAID-0) will give you 9TB of available storage and require 18GB of RAM.  Given how quickly this scales with capacity, it's usually more efficient to use compression instead of deduplication to save space.
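The rule-of-thumb arithmetic above can be sketched in a few lines of shell (the 2GB-per-TB figure and the layout capacities are taken from this thread; they're estimates, not guarantees):

```shell
# Dedup RAM rule of thumb: ~2 GB of RAM per TB of available storage.
# Capacities for 3 x 3TB disks under each layout, per the post above.
for layout in "mirror 3" "raidz 6" "stripe 9"; do
  set -- $layout   # $1 = layout name, $2 = available TB
  echo "$1: ${2}TB available -> ~$((2 * $2))GB RAM for dedup"
done
```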


----------



## _martin (Feb 25, 2012)

@vermaden Thanks for the tip on PICO PSU - it does sound interesting. 



> You can go for dual-core Atom system, which should 'take' 3 x 3TB without a problem.


That's the point - can it? I've read different things, and that's why I'm looking for somebody to verify it from his/her experience. I've heard that it's just too much for an Atom to handle. 

@rajl I will be using a simple raidz - it's basically for my movies, mostly the series (sitcoms) I have. Thanks to Apple (r) I don't need to care about music storage (nope, not a commercial break). 

I will try to get ECC RAM, but the problem might be with motherboards - the small form factor ones usually don't support it. And I don't know of any fan that is quiet - all of them are kind of annoying. That's why going passive interests me a lot.


----------



## bbzz (Feb 25, 2012)

My home server runs on an Intel i7 with 12GB RAM. It uses raidz2 with a total of 12TB of disk space. The reason I used a strong processor is that the server always runs complex networks in dynamips. I also run AES-256 geli encryption over all disks, and there is some ZFS compression as well.

Say I want to make a pure media server with all those disks and features on a new processor (and leave dynamips on the i7). Would those Atom processors suffice for ZFS and geli? 

p.s. I'm not trying to hijack thread, maybe some of this is useful to OP as well.


----------



## rajl (Feb 25, 2012)

If you are looking for a good balance of power and performance, I'd recommend a Bobcat/Llano solution.  Atom is power efficient, but it's still an in-order execution processor (although I think Intel's roadmap has out-of-order execution on the "to do" list).

ZFS really hits the RAM hard, so I would focus on that first.  You can probably get away with non-ECC RAM (and save a boatload of money), given ZFS's built-in checksumming.  After RAM, you need a decent processor.  AMD's Llano based boards are probably the sweet spot for your uses.  Atom will probably be too slow, but you don't need an i7 either (unless you're doing 256-bit disk encryption on top of a compressed file system combined with deduplication that also calculates parity bits in raidz and... you get the idea).

If you're keeping it simple, a lot of RAM and an AMD Llano should work (or maybe an Intel Atom).  An Intel Pentium, Core i3, or AMD Athlon II will definitely be more than enough if you're just running a home file server.  The Intel Pentium and AMD Athlon II are exceptionally cheap, so you might want to consider them.


----------



## _martin (Feb 25, 2012)

rajl said:

> given ZFS's builtin checksumming.



@rajl: You've got it the other way around - ZFS checks what's in RAM against what's on disk. But if the data is corrupted in RAM and then written out, you are screwed: the checksum would be OK, the data not. An AMD solution might be interesting, but from what I know they're more problematic when it comes to heat and high temperatures in general. 

I already have a small ZFS server in a datacenter - it's a 4TB raidz + 160GB ZFS mirror with 8GB RAM. It also hosts two virtual machines; about 2GB of memory is used for them. Works just fine. 

@bbzz: Well, hopefully this thread won't get hijacked... I've used dynamips for 8 years or so. Though you can compute the idlepc value, once you set up your network with, let's say, EIGRP, OSPF and BGP - things get interesting. When I had a weaker computer I increased the hello timer values to avoid flapping routes. I wouldn't bet money on an Atom being able to handle a lot of those Cisco instances.


----------



## xibo (Feb 26, 2012)

I'm using a Xeon E3-1260L and 16GiB of memory (I chose the more expensive enterprise hardware because I've been disappointed a lot by desktop hardware lately, and I was able to make a deal that rendered it only a little more expensive) for a 4 x 2TB raidz (3+1). Assuming you use fletcher4 checksums and gzip-9 compression, and that an Atom has 1/10th of the Xeon's performance, you should get somewhat acceptable read speeds, but writes will be horrible. Deduplication should be thought through well if not enough memory can be spared, unless you want to feel the floppy-speed nostalgia (it's a lot faster with a sequence of mirrors than with raidz, but it's still bad). I can't comment on GELI because I never used it.

Some time ago I was running a dual-core Atom (D510) with 4GiB of memory and two Hitachi Deskstars in Linux mdraid-1, and the sequential disk access speeds were 50-90MiB/sec (no compression, ext4, some tuning I don't remember any more). However, when accessing it via NFS, I couldn't get more than 10-15MiB/sec regardless of HDD specifics (access to tmpfs was no faster), even after a lot of NFS tuning, while I got 25-30MiB/sec via FTP. FreeBSD's NFS and IP implementations are more efficient than Linux's (or at least it feels that way to me), but if you want NFS access I would recommend staying away from Atoms nevertheless.

In fact, after reading this article, I would recommend staying away from Atom to begin with, as a personal NAS will only have burst load and be idle the remaining 99% of the time, while an enterprise NAS will be under too much load for an Atom to handle. Also, LGA1155 boards can usually take a lot more memory.


----------



## Bobbla (Feb 26, 2012)

I myself have, and would recommend, an AMD Athlon II X2, because it's cheap, quite fast and supports ECC. However, if you want to get ECC working it also requires ECC support from the motherboard. When I bought my NAS hardware I chose a motherboard with a 760G chipset; this way I also don't need to buy a graphics card. The 760G boards usually come with 6 SATA ports, which gives good opportunities for raidz or raidz2.

You can also install the OS on a memory stick if you want to save SATA ports for storage. If you want to avoid some work, or just don't know too much about how to set up a NAS, you can always use zfsguru or FreeNAS.

Myself, I have an AMD Athlon II X2, 4GB RAM (probably should get more...) and 2 raidz's: one raidz with 6 x 1TB and one with 6 x 2TB. Usually it's the receiving hard disk that is the bottleneck; however, I do get a nice 80-90MB/s when I use my laptop, which has a good hard disk. And a scrub can take all night, even if it's several TB.

I've got 5 fans, a 650W PSU and 13 hard disks. I can assure you that it is the 13 hard disks that make the most noise. And I sleep 2m away from this thing. =.=


----------



## _martin (Feb 26, 2012)

@xibo: What mobo are you using? An E3-1260L - that's going to need a bigger fan to keep it cool, isn't it? 

@Bobbla: A 650W PSU? Isn't that just too power greedy? Not to mention that a 650W PSU + 5 fans must make a lot of noise :/ I'm OK with the setup itself (still deciding whether I'll use FreeBSD or OpenIndiana); the HW is what I seek help with. 

As I said, maybe my demands are just not feasible. It simply cannot be greedy on power and it has to be as quiet as possible. What counts as good performance for home usage is questionable, though.


----------



## bbzz (Feb 26, 2012)

matoatlantis said:

> @xibo: What MOBO are you using? E3-1260L - that's gonna need a bigger FAN to keep it cool, doesn't it?
> 
> @Bobbla: 650W PSU? Isn't that just too power greedy? Not to mention that 650W PSU + 5 FANS - that must do a lot of noise :/ I'm ok with setup of it itself (still deciding whether I'll use FreeBSD or OpenIndiana), HW is something I seek help with.
> 
> As I said, maybe my demands are just not feasible. It simply cannot be greedy on power and has to be as quiet as possible. It's questionable what is a good performance for home usage though.



What do you mean by power greedy? 650W would be the maximal output. Just make sure you get a power efficient one, i.e. 80+. As for the noise - define noise? I use 5 quality fans. A relatively "loud" low frequency hum is in fact enjoyable. My laptop, on the other hand, has one clogged fan that makes a high frequency, low volume sound that would make you want to beat the crap out of it.


----------



## _martin (Feb 26, 2012)

I find any noise from a PC/notebook annoying. I plan to put it behind the couch in the living room but still, that sound bugs me. 

I asked whether it's greedy or not because I don't know. 650W is the maximum, but as they are not 100% efficient I was wondering.

It's just that when you can buy a simple NAS with 2 x SATA disks which has an active consumption of 18-20W, and here you present a 650W PSU - it confuses me (I'm not saying it cannot be true, it just confuses me).


----------



## xibo (Feb 26, 2012)

I'm using a SuperMicro X9SCM-F with a 250W power supply, and originally ran a 2U stock active heatsink that was good enough but noisy. Without the heatsink's fan plugged in, the CPU could run pretty well under 'normal' conditions but overheated after about 10 minutes of full load (cd /usr/src && make -j12 buildworld), and kept running at around 50-60 degrees Celsius if I disabled two cores. Since the board is in a tower and not a rack, I installed a larger and more expensive heatsink which is both good enough to keep things cool without a fan and silent when the fan is running, so I chose to keep the fan running for redundancy along with the tower fan.

If you want to go passive, use an i3-*-T or an E3-1220L, which are 35W and 20W respectively (and that is their max power consumption under full load, not the average).

You should keep in mind that there are 5-inch fans running at less than 20dB, while hard disks are usually noisier. And it's not only the CPU/GPU that produce heat - the HDDs do too, and they will probably keep producing more heat than your CPU to begin with: an HDD takes about 5W at idle while a CPU takes around 3W at idle, but you have 1 CPU and multiple disks...

I agree 650W is a lot. However, the inefficiency should be limited, as the 80% rating applies to the power actually drawn, not the maximum possible.


----------



## ahavatar (Feb 26, 2012)

For building a quiet system, see http://www.silentpcreview.com/. Having 5 fans does not necessarily make a system noisy - it's about what kinds of fans, what speeds, etc.


----------



## jem (Feb 27, 2012)

I run a FreeBSD + ZFS NAS at home, based on the HP ProLiant MicroServer.

It's a small form factor server that can take four SATA disk drives.  It has an Athlon II Neo N36L dual-core CPU which is a low power CPU similar to Intel's Atom.  It can also take up to 8GB of ECC RAM.

It has one large fan in the back of the chassis and runs almost silently.  The PSU is 150W.

I have four 2TB disks in mine, configured as a raidz1 pool, giving about 5.4TB of usable space.  It's handled everything I've thrown at it just fine, saturating the gigabit network connection when transferring files over SMB/CIFS.
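The ~5.4TB figure checks out: raidz1 over four disks leaves three disks' worth of data, and a marketing "2TB" drive is 2*10^12 bytes, i.e. about 1.82 TiB:

```shell
# 3 data disks x 2e12 bytes each, expressed in TiB (1024^4 bytes),
# before metadata overhead:
awk 'BEGIN { printf "%.2f TiB\n", 3 * 2e12 / 1024^4 }'
```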


----------



## _martin (Feb 27, 2012)

@jem @ahavatar Thanks, now I have indeed a lot of information to go through.


----------



## Toto (Mar 4, 2012)

I feel just a little bit puzzled by the amount of RAM and processing power suggested all over here for a standard NAS.

Beyond subjective impressions, or just recommending one's own hardware because it has satisfied its author, can't we dig deeper and try to establish the basic equations that identify the hardware requirements, assuming:

- This framework is independent of the storage required (that would depend on the number of SATA connectors on the motherboard, for instance).
- Focus is on speed only, assuming a throughput within what a standard gigabyte ethernet card could handle.

More specifically, is there anything specific to ZFS formatting leading to something more complex than: 
Maximum throughput = FSB clock x Number of transfers per cycle x Bus width?
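Plugging hypothetical numbers into that formula shows why the memory bus is rarely the bottleneck for a NAS; e.g. for DDR2-800 (400 MHz base clock, 2 transfers per cycle, 64-bit bus):

```shell
# Maximum throughput = FSB clock x transfers per cycle x bus width
clock_mhz=400; transfers=2; bus_bytes=8
echo "peak: $((clock_mhz * transfers * bus_bytes)) MB/s"
```

That's 6400 MB/s, orders of magnitude above what a gigabit link can carry.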


----------



## _martin (Mar 4, 2012)

@Toto: Well, the problem is: what does one picture under 'home storage' specification? 

I most certainly won't need deduplication - that is something very resource consuming that provides very little (if any) benefit for me. I'm confident the amount of RAM is not an issue here.

There's no problem in choosing 'good enough' HW to satisfy my performance expectations; it starts to get interesting when it also has to be _green_ and silent. I still haven't decided what to buy; I'm leaning toward waiting for Intel's Ivy Bridge.


----------



## xibo (Mar 4, 2012)

A desktop, private NAS, or even a small or medium sized enterprise backbone doesn't have gigabyte ethernet bandwidth. They have gigabit, which is 10 times slower.

For a "normal" NAS that you don't have any specific expectations of, a normal 100BaseT ethernet card will do, as will JBOD and the other default configurations you get in a consumer NAS "box". However, you won't run ZFS on one of those. In fact, they're not even configured with anything other than Windows Home Server 2003 in mind (e.g. they don't ship drivers for Windows 2008).

ZFS is being sold as a next generation file system, but in the first place it's a file system targeting dedicated systems in large-scale enterprises, so minimizing hardware requirements has been optional from the start. You don't need a current-generation workstation to run ZFS, but you shouldn't expect it to perform well on an Atom either. Also, one of the killer features of ZFS that everyone would like to use (if it weren't so expensive in terms of hardware) is deduplication, which needs more than 1GB of memory for each TB of storage. Lesser features are transparent compression (again, expensive in terms of hardware) and block-level error correction (less expensive, but still a bottleneck if you have a slow CPU).
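If you're weighing deduplication, `zdb` can simulate the dedup table (DDT) on an existing pool before you commit to it; the pool name below is a placeholder, and the ~320 bytes per in-core DDT entry is a commonly quoted rough figure, not an exact number:

```shell
# zdb -S tank   # simulate dedup on pool "tank"; reports DDT entry counts
# Estimate the table's RAM footprint from a hypothetical 20M unique blocks:
entries=20000000
echo "DDT RAM estimate: ~$((entries * 320 / 1024 / 1024)) MiB"
```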

If you don't care for any of those, you won't need expensive hardware, just as you won't need ZFS - FFS (+gvinum) can do the storage job quite well too, and in many cases it can do it faster.

The other thing I was talking about is NFS, which puts considerable load on the server (it can be tuned somewhat) and also runs in kernel mode on top of that, slowing all the userspace processes on the NAS. Again, if you decide you don't need NFS and FTP is good enough, your hardware requirements can once again be reduced.

Btw. the FSB and memory timings are mostly irrelevant for a NAS, since the caching is done on the client.


----------



## Bobbla (Mar 9, 2012)

Little late.. but meh 

When a PSU is rated 650W it means it can deliver 650W; it will draw more from the power outlet, depending on its efficiency. 
ZFS needs a lot of RAM, but no worries, as RAM is cheap at the moment.
ZFS without any fancy features does not need a lot of processing power. 
Multimedia is almost ALWAYS already compressed, so there's no need for compression.
However, if you are going to resilver or scrub, it might be a good idea to have some capability in reserve. But no worries, even the cheapest CPUs today will probably work, as long as it is NOT some cheap-ass Atom or equivalent. 
Sure, a scrub/resilver will take time depending on how much data you have and how fragmented it is, but fear not: it can run while you sleep at night, or while you're at work.
The difference between bit and byte is 1:8, not 1:10. 
I have been close to saturating a 1Gb Ethernet connection every time, but the HDDs on my other computers are always the bottleneck.
A dedicated graphics card for a server is a waste of money.
Something and stuff..
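The run-it-while-you-sleep approach to scrubs is easy to automate; a root crontab entry along these lines (pool name "tank" is a placeholder) starts one every Sunday at 03:00:

```shell
# m  h  dom mon dow  command
  0  3  *   *   0    /sbin/zpool scrub tank
# check the result next morning with: zpool status tank
```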

Eh, my 650W PSU drives 13 HDDs and everything else. I bought it at a local shop because my old PSU died when I really needed access to the server. And my fans are regulated - DOWN from super über awesome speed to low enough that they are no longer the main noise source.

There was probably more, but I have forgotten.

Also, when will the dynamic block pointer appear? WHEN? If you wonder about how much wattage you might need, this will give you a hint: http://extreme.outervision.com/psucalculatorlite.jsp


----------



## Toto (Mar 9, 2012)

I just love this topic...


----------



## _martin (Mar 9, 2012)

Bobbla said:

> Little late.. but meh



Not late at all - I didn't mark this as solved; it's far from that. A lot of information has been shared, but I still haven't decided what to buy or what might be the best setup for my needs.

Anyway, thanks for the link, it might come in handy.


----------



## kisscool-fr (Mar 9, 2012)

Hi,

I will share my experience because I've done a project like this. 

My configuration is the following: 

MB: Supermicro X7SPA-HF
CPU: Integrated Atom D510
RAM: 4GB DDR2
HDD: 4 x 2.5" WD 320GB in raidz1 + 1 x 3.5" 80GB for the system
Case: Apex MI 100
PSU: Be Quiet SFX POWER 300

A quick test gave 90 MB/s write speed and 110 MB/s read speed. I have to say I haven't done any tuning on this config. It is essentially used as a file server with some jails running.
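A crude sequential test like the one quoted above can be done with dd; the target path is a placeholder and should point at a file on the pool being measured:

```shell
# Sequential write then read of a 64 MiB test file; dd prints the rate.
target=/tmp/seq_test_file
dd if=/dev/zero of="$target" bs=1M count=64
dd if="$target" of=/dev/null bs=1M
rm -f "$target"
```

(On ZFS with compression enabled, /dev/zero compresses away almost entirely, so a write test like this will overstate real throughput.)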

I read that someone said Atom based configs are to be avoided. I think it all depends on the needs you have and the options you enable. 

Ah, and I forgot to say that the box runs 24/7, is about 2.5m from my bed and is very silent. Sometimes I wonder if it is running at all.


----------



## WiiGame (Mar 9, 2012)

@Toto, I find myself falling in love with this thread as well.  There isn't enough current talk like this out there (that I could find).



			
Bobbla said:

> I've an AMD Athlon II X2, 4GB RAM (probably should get more..) and 2 Raidz's, one raidz with 6x 1TB and one with 6x 2TB.



Wow, cool. What are you hooking all those HDDs up to? Obviously more than just the mobo, because you have only 6 SATA ports. Are you keeping any of it external? Or if it's all internal, what case are you using? Your additional hardware setup could be interesting to those considering many drives.

Also, did we come up with a verdict on the relative value of ECC RAM? There was some back and forth up there; was wondering if this jury weighs heavy on one side or is split.

And a curiosity: how low of "yesterday's" hardware do you think ZFS can run well on? (Think LGA775/DDR2 ballpark, not 486s.) Too advanced for a recycled box?


----------



## xibo (Mar 10, 2012)

> And a curiosity: how low of "yesterday's" hardware do you think ZFS can run well on? (Think LGA775/DDR2 ballpark, not 486s.) Too advanced for a recycled box?


A core2 should be fine.


----------



## Toto (Mar 10, 2012)

I feel like my gigabyte* 965P-DS3 is about to meet its destiny after years of darkness... Any objections around?


*(@xibo: now you understand where my typo came from)


----------



## phoenix (Mar 10, 2012)

WiiGame said:

> And a curiosity: how low of "yesterday's" hardware do you think ZFS can run well on? (Think LGA775/DDR2 ballpark, not 486s.) Too advanced for a recycled box?



I'm running it on a Pentium4 box with 2 GB of RAM.  Took a lot of tweaking and tuning to make it stable with my workloads.  Has 2x mirror vdevs using 4 500 GB drives.
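The kind of tweaking mentioned above for a low-RAM box usually starts with capping the ARC; these /boot/loader.conf lines are an illustrative starting point for a ~2 GB machine, not the poster's actual settings:

```shell
# Cap the ARC so ZFS leaves room for everything else
vfs.zfs.arc_max="512M"
# Prefetch is often counterproductive on small-memory systems
vfs.zfs.prefetch_disable="1"
```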


----------



## WiiGame (Mar 11, 2012)

Thanks, friends. One small addition: Does anyone think 2 versus 1 network cards matters enough to make it worth it in a ZFS setup?


----------



## rajl (Mar 12, 2012)

I will list my hardware specs in the hope of helping those here:

Athlon II X3
8 GB (2 x 4GB) of DDR3-1333
400W DiabloTek PSU
2 x 1TB Hitachi HDD's in mirrored configuration (7,200 rpm, 64 MB cache)
120 GB SSD (Sata II) as system drive
cd-rom drive
Cheap Diablotek Case
Cheap dvd-rom/burner

This was one of the TigerDirect combo specials.  The base system was $280 after rebates, which included everything except the two Hitachi HDDs, which were $99 each.  The total was (barely) less than $500 after taxes.  It's used as a home file server which serves files over the network with Samba and provides remote access with ssh/sftp.  Most of the time it sits idle and the CPU spins down to the point that none of the fans need to be running.  It is more than able to saturate my home gigabit connection between my desktop and my file server without breaking a sweat.

I had to buy a new box because I didn't have one lying around to recycle.  I went for the cheapest modern hardware possible and it is still extremely overpowered *for my use case*.  What do I think the minimum system requirements for my use case are?  Probably a single core processor circa 2005 and 2GB of DDR RAM.  I actually had an old desktop with an Athlon64 3700+ (2.2 GHz San Diego single core) and 4GB of DDR-400 (4 x 1GB) that would have been a perfect box to recycle for this purpose.  Too bad I got rid of it a year ago (11 months before I felt the need to build a ZFS fileserver).

Just my thoughts and unscientific opinions.


----------



## throAU (Mar 12, 2012)

WiiGame said:

> Thanks, friends. One small addition: Does anyone think 2 versus 1 network cards matters enough to make it worth it in a ZFS setup?



Will depend on your workload (number of clients/type of access) and type/number of VDEVs in your pool.

If you have gigabit ethernet, and a fairly random workload (i.e., multiple clients) then I doubt spinning disks will keep up unless you have a LOT of them.  If you're using SSD, I suspect you'll saturate gig-e no problem.

If you're doing large sequential reads, and you have enough disks to saturate the network, then of course multiple NICs will help.

Also, if you have 2 NICs in your server, it will obviously only be able to supply data at >1 gigabit if you have enough client machines hitting it simultaneously at 1 gigabit or faster - or a single client with multiple gig-e NICs.  In a home situation, I doubt you'll have multiple NICs in your client machines, and probably not enough machines hitting the box continually for it to matter.

Bear in mind that setting up multiple NICs for faster network throughput may require configuration of your switch ports.  A dumb switch may not help you do link aggregation...
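For reference, FreeBSD does the aggregation with lagg(4); an illustrative /etc/rc.conf fragment (the NIC names em0/em1 are placeholders, and the switch ports must be configured for LACP):

```shell
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 DHCP"
```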


----------



## LVLouisCyphre (Jan 5, 2020)

phoenix said:
> I'm running it on a Pentium4 box with 2 GB of RAM.  Took a lot of tweaking and tuning to make it stable with my workloads.  Has 2x mirror vdevs using 4 500 GB drives.


Do you have any tips on making a skeleton NAS using ZFS?  The current wisdom is that if you're going to use ZFS, the system should have 8 GB of ECC RAM.


----------



## Phishfry (Jan 5, 2020)

This post is 8 years old, so hardware choices have changed.

I use 64GB of ECC DDR4 in my rig, and I would consider 16GB to be the minimum RAM for a ZFS box.
My 24-bay chassis has 16 drives of 500GB in 2 vdevs. I also have a separate zpool with 8 x 16GB SSDs for speed.
I recently added 2 NVMe drives for SLOG and L2ARC.
The speed of ZFS isn't so great, and adding more spindles did not help much; I only get around 200MB/sec with the 16-spindle zpool.
So the speed of a single drive is approximately my speed. This is without any tuning.

To me, a skeleton NAS would entail a ZFS array with NFS set up for network shares.
So you need to figure out how you plan on sharing your files: iSCSI, Samba or NFS.
Also decide on how you want your OS mounted. There is 'ZFS on root', or you can use a small 'disk on module' with UFS for the OS.
For me a SATADOM was a good choice, so I could keep the ZFS pool separate from my OS. I do back the OS up to the zpool.

The first time around I installed Webmin for a web GUI. This time around I am using only the CLI.


----------



## ralphbsz (Jan 5, 2020)

While I don't disagree that a 64GiB node with 24 disks and a couple of SSDs is a nice setup... my home ZFS server has one 64GiB boot SSD (still booting from UFS, not for any particularly good reason - that's just the way it was installed long ago), two spinning hard drives (mirrored in ZFS, even though one is 3TB and the other 4TB), and 4GiB of memory, of which only 3GiB is actually usable, since it is still a 32-bit machine (Intel Atom). No ECC. All administration is via the command line. Works fabulously well for my meager requirements.

Would I recommend running ZFS without ECC, or with that little memory, or on 32 bits? No, I would not. But if you are a little careful and have patience, it works well.


----------



## LVLouisCyphre (Jan 5, 2020)

Phishfry said:
> This post is 8 years old so hardware choices have changed.


I'm aware of that and have made more relevant hardware choices.  

It would be beneficial to know what tuning and tweaking needs to be done for a skeleton NAS using ZFS for disaster recovery purposes.  There are times when you may have to do disaster recovery using stone knives and bearskins, as Mr. Spock did in ST:TOS to construct a mnemonic memory circuit.  I have one older system here that doesn't support more than 4 GB of memory and only supports SATA I.  I've had to use it for disaster recovery for a desktop.


----------

