# ZFS superior to UFS for desktop / workstation use?



## raid (Aug 28, 2010)

I'm currently running FreeBSD 8.1-RELEASE as a graphical desktop-type system, with X11 and GNOME.  If you want to get an idea of what I use this system for, the main programs I run are vim, Firefox and xchat.

My question is:  would ZFS be a superior filesystem for my purposes?  I get the impression that it was primarily designed for servers, and that a (l)user like myself might not notice much difference in terms of system performance.


----------



## raid (Aug 28, 2010)

By the way, it's amd64 with 2GB of memory.  I heard that these were the prerequisites for using ZFS.


----------



## graudeejs (Aug 28, 2010)

I use ZFS on my server, desktop and laptop.
I see no reason not to use it, even if you have just one HDD.
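
For what it's worth, a single-disk pool is a one-liner; `tank` and `ad4` below are placeholder names, so substitute your own pool name and device:

```
# create a pool on one whole disk, then a dataset for home directories
zpool create tank ad4
zfs create tank/home
```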


----------



## d_mon (Aug 28, 2010)

raid said:

> By the way, it's amd64 with 2GB of memory.  I heard that these were the prerequisites for using ZFS.



u need at least *4 gb* of ram...


----------



## davidgurvich (Aug 28, 2010)

I'm using ZFS on my older laptop with 1GB of RAM.  There were a couple of issues while fine-tuning the ZFS settings where the system would spontaneously reboot.  After adjusting ZFS for a system with minimal RAM there haven't been any ZFS problems.

I have had issues with the Xorg driver and suspend where the system would lock up and require a manual power off.  In all cases data was preserved and there wasn't any corruption of the filesystem.

Despite the RAM requirements, the system feels more responsive with ZFS than UFS.  There's not much to choose between them in terms of speed other than filesystem recovery: I used to watch the UFS background fsck keep the hard drive access light blinking constantly.  I don't see any of that with ZFS.


----------



## rusty (Aug 28, 2010)

d_mon said:

> u need at least *4 gb* of ram...



Not so: 4GB is the recommended amount if you want to enable prefetching; ZFS is still perfectly usable with less. It will just need some tuning, that's all.
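
As a rough sketch, the usual low-memory tuning goes in `/boot/loader.conf`; the values below are illustrative starting points for a ~2GB machine, not recommendations:

```
# /boot/loader.conf -- example ZFS tuning for ~2GB of RAM
vfs.zfs.arc_max="512M"        # cap the ARC so applications keep some room
vfs.zfs.prefetch_disable="1"  # prefetch is disabled by default below 4GB anyway
```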


----------



## gordon@ (Aug 29, 2010)

What features of ZFS are you planning to use that would make switching from UFS2 worthwhile?


----------



## phoenix (Aug 29, 2010)

d_mon said:

> u need at least *4 gb* of ram...



No you don't.  You can run ZFS with as little as 512 MB of RAM, if you spend a lot of time tuning.  2 GB is the recommended minimum.  4 GB is just the sweet spot where things get more stable without needing too much manual tuning.


----------



## tessio (Aug 29, 2010)

2GB to use a filesystem!? WTF!?


----------



## kpa (Aug 29, 2010)

It's not just a filesystem; it's a RAID/volume manager as well: http://www.sun.com/bigadmin/features/articles/zfs_overview.jsp


----------



## raid (Aug 29, 2010)

gordon@ said:

> What features of ZFS are you planning to use that would make switching from UFS2 worthwhile?



To be perfectly honest, I don't know.  I've read what the features of ZFS are, but I'm not technical enough to understand how they might benefit me.  I do know that many people seem to speak of ZFS excitedly and think of it as the "filesystem of the future," which OS X and Linux are currently attempting to integrate.


----------



## UNIXgod (Aug 29, 2010)

raid said:

> To be perfectly honest, I don't know.  I've read what the features of ZFS are, but I'm not technical enough to understand how they might benefit me.  I do know that many people seem to speak of ZFS excitedly and think of it as the "filesystem of the future," which OS X and Linux are currently attempting to integrate.



Actually, Apple dropped ZFS support in the last release. It was supposed to be a killer feature and they silently removed it:

http://apple.slashdot.org/story/09/10/23/2210246/Apple-Discontinues-ZFS-Project

Also, Linux integrates it via FUSE. Its CDDL license doesn't jibe well with the GPL, so there are some politics involved.

UFS is a tried and true filesystem.
ZFS is newer and integrates more 'features'.

I personally feel the minimum specs for ZFS are not realistic for all uses. Setting up a server (non-desktop) with jails (kernel-level virtualization) prompted me to upgrade my 4GB of RAM to 12GB and add an Intel SSD for L2ARC.

Under heavy load ZFS performance may suffer in comparison to UFS.

The word 'superior' might be inappropriate for this discussion. These are both just tools, and will be used as such. ZFS is probably best for mass storage solutions rather than a single desktop install. But then again, it's up to you to try before you buy =)


----------



## vermaden (Aug 30, 2010)

UNIXgod said:

> Also, Linux integrates it via FUSE. Its CDDL license doesn't jibe well with the GPL, so there are some politics involved.



There is a native ZFS port to Linux underway:
http://osnews.com/story/23416/Native_ZFS_Port_for_Linux


----------



## oliverh (Aug 31, 2010)

UFS is rock stable and really mature, heavily tested in countless environments for several decades. ZFS is not. Period. It's a nice filesystem, it's modern, it's sometimes faster, but it's also a resource hog and it has its known caveats. It's certainly a filesystem you should consider, but test it first on your hardware. Nobody can spare you this work ...


----------



## dennylin93 (Aug 31, 2010)

tessio said:

> 2GB to use a filesystem!? WTF!?



To clarify things a bit.

First of all, ZFS doesn't need 2 GB to run. 1 GB will do fine, although 2 GB performs better. It can also run on i386. Sometimes a bit of tuning is required.

Second of all, ZFS doesn't use all the RAM you have. Usually, a few hundred MB is used. It's also possible to limit ZFS to 100 or 200 MB.
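
If you want to see what ZFS is actually using, the ARC size is exposed as a sysctl, and the cap goes in `/boot/loader.conf` (the 200M figure below is just an example):

```
# current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size

# in /boot/loader.conf, to cap the ARC at ~200 MB:
vfs.zfs.arc_max="200M"
```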


----------



## User23 (Aug 31, 2010)

oliverh said:

> UFS is rock stable and really mature, heavily tested in countless environments for several decades.


That's true. But don't forget that UFS (or rather the tools to manage it) can't handle more than 2TB. So in some cases people may be forced to use ZFS if they want to stay with FreeBSD.



			
oliverh said:

> It's certainly a filesystem you should consider, but test it first on your hardware.


100% ack. I often run the system itself on a gmirror or HW RAID1 and the user data on a ZFS pool. That way I can't run into trouble while updating the system and trying to boot from ZFS.


----------



## vertexSymphony (Aug 31, 2010)

> UFS is rock stable and really mature, heavily tested in countless environments for several decades. ZFS is not. Period.



Are you talking about the codebase, or about the filesystem's stability per se?
I've had *tons* of problems with the latter.



> Second of all, ZFS doesn't use all the RAM you have. Usually, a few hundred MB is used. It's also possible to limit ZFS to 100 or 200 MB.



Isn't the ARC supposed to use (total RAM - 1GB) by default? (Well, that's how it behaves in Solaris: source.)
But yeah, the memory usage is perfectly tunable.


----------



## User23 (Aug 31, 2010)

NFS fileserver, amd64 with 32GB RAM and without any ZFS tuning:


```
last pid: 61792;  load averages:  1.37,  1.23,  1.05  up 7+09:34:58  16:26:31
67 processes:  4 running, 63 sleeping
CPU:  0.0% user,  0.0% nice, 11.6% system,  0.1% interrupt, 88.2% idle
Mem: 138M Active, 22M Inact, 22G Wired, 228K Cache, 1596M Buf, 9202M Free
Swap: 32G Total, 32G Free
```

Before the last reboot not more than 21G was used as Wired.


```
NAME        STATE     READ WRITE CKSUM
home        ONLINE       0     0     0
  raidz2    ONLINE       0     0     0
    da1     ONLINE       0     0     0
    da2     ONLINE       0     0     0
    da3     ONLINE       0     0     0
    da4     ONLINE       0     0     0
    da5     ONLINE       0     0     0
  raidz2    ONLINE       0     0     0
    da6     ONLINE       0     0     0
    da7     ONLINE       0     0     0
    da8     ONLINE       0     0     0
    da9     ONLINE       0     0     0
    da10    ONLINE       0     0     0
logs        ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    da12    ONLINE       0     0     0
    da13    ONLINE       0     0     0
cache
  ad4       ONLINE       0     0     0
  ad8       ONLINE       0     0     0
spares
  da11      AVAIL
```


----------



## d_mon (Aug 31, 2010)

User23 said:

> 32GB RAM



wow dude...u must be kidding! is that a eurocom panther 2.0?


----------



## Galactic_Dominator (Sep 1, 2010)

User23 said:

> That's true. But don't forget that UFS (or rather the tools to manage it) can't handle more than 2TB. So in some cases people may be forced to use ZFS if they want to stay with FreeBSD.


A lot has changed since that was true.  GPT is here and working as well as 64 bit quotas.  I haven't actually tested it so it's possible there's something I missed but I believe your information is out of date.
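
For illustration, putting a >2TB disk under GPT takes only a couple of commands (`da0` and `bigdata` are example names):

```
# create a GPT scheme, add a labeled UFS partition, and newfs it with soft updates
gpart create -s gpt da0
gpart add -t freebsd-ufs -l bigdata da0
newfs -U /dev/gpt/bigdata
```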


----------



## User23 (Sep 1, 2010)

d_mon said:

> wow dude...u must be kidding! is that a eurocom panther 2.0?



No, I built this system myself.

This is the HW configuration:
http://forums.freebsd.org/showpost.php?p=59776&postcount=6

But now with:

2x Quad-Core AMD Opteron(tm) Processor 2374 HE (2211.35-MHz K8-class CPU)
+1 extra 1TB spare disk
+2 32GB SLC SSD as ZIL log mirror
+2 80GB MLC SSD as cache raid0

Especially the cache rocks.


----------



## User23 (Sep 1, 2010)

Galactic_Dominator said:

> A lot has changed since that was true.  GPT is here and working as well as 64 bit quotas.  I haven't actually tested it so it's possible there's something I missed but I believe your information is out of date.



Yes, I was wrong. GPT solves this.


----------



## Terry_Kennedy (Sep 1, 2010)

Galactic_Dominator said:

> A lot has changed since that was true.  GPT is here and working as well as 64 bit quotas.  I haven't actually tested it so it's possible there's something I missed but I believe your information is out of date.


fsck-ing a 2TB UFS partition takes quite some time. Even when [r]dump takes a snapshot, it takes some time. That's something to consider when building large partitions.

I've tested zpools of up to 21TB usable storage (3 raidz vdevs of five 2TB drives each) with 6TB of data in about 250,000 files; a scrub runs for around 10 hours, but it happens in the background.
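
For anyone following along, a scrub is started and monitored like this (the pool name is just an example):

```
# start a scrub; it runs in the background
zpool scrub mypool
# check on its progress at any time
zpool status mypool
```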

I'm running systems with a SuperMicro X8DTH-iF w/ 2 E5520 CPUs and 48GB of RAM. The disk controller is a 3Ware/LSI 9650SE exporting 16 single units (WD RE4's). The OS is on a gmirror'd pair of WD3200BEKT's. There is also an OCZ Z-Drive R2 P84 (256GB PCI-E RAID0 SSD) in each system. I have 3 of these systems that I've been stress-testing for several months now. I've tested pulling drives, unclean shutdowns, simulating a failed SSD (it is the ZFS log device) and the systems have handled everything I've thrown at them. The only issue was a system lockup with the RELENG_8 version of the twa driver (it spews loads of error messages and eventually stops responding). The prior version in CVS works fine, as does an un-committed update I got from 3Ware.


----------



## User23 (Sep 1, 2010)

Terry_Kennedy said:

> The only issue was a system lockup with the RELENG_8 version of the twa driver (it spews loads of error messages and eventually stops responding). The prior version in CVS works fine, as does an un-committed update I got from 3Ware.



Do you remember the error messages? Was it something like "Micro Controller Error ... Unexpected status bit(s) ...."?


----------



## Terry_Kennedy (Sep 1, 2010)

User23 said:

> Do you remember the error messages? Was it something like "Micro Controller Error ... Unexpected status bit(s) ...."?


Nope. Here's a sample set (the system locked up after the last one):


```
Aug  2 19:18:02 new-rz1 kernel: twa0: ERROR: (0x05: 0x2018): Passthru request timed out!: request = 0xffffff8000bd3de0
Aug  2 19:18:02 new-rz1 kernel: twa0: INFO: (0x16: 0x1108): Resetting controller...:  
Aug  2 19:18:41 new-rz1 kernel: twa0: INFO: (0x04: 0x0063): Enclosure added: encl=0
Aug  2 19:18:41 new-rz1 kernel: twa0: INFO: (0x04: 0x0001): Controller reset occurred: resets=1
Aug  2 19:18:41 new-rz1 kernel: twa0: INFO: (0x16: 0x1107): Controller reset done!:  
Aug  2 19:18:41 new-rz1 kernel: twa0: ERROR: (0x05: 0x201A): Firmware passthru failed!: error = 60
Aug  2 21:01:24 new-rz1 kernel: twa0: ERROR: (0x05: 0x2018): Passthru request timed out!: request = 0xffffff8000bcb480
Aug  2 21:01:24 new-rz1 kernel: twa0: INFO: (0x16: 0x1108): Resetting controller...:  
Aug  2 21:02:03 new-rz1 kernel: twa0: INFO: (0x04: 0x0063): Enclosure added: encl=0
Aug  2 21:02:03 new-rz1 kernel: twa0: INFO: (0x04: 0x0001): Controller reset occurred: resets=2
Aug  2 21:02:03 new-rz1 kernel: twa0: INFO: (0x16: 0x1107): Controller reset done!:
```


----------



## User23 (Sep 1, 2010)

Strange. With the same error messages you posted (except "Enclosure added: encl=0", because the controller doesn't have that feature), a 9550SXU-4LP of mine died some days ago, crashing the whole system even though no system files were on it. It had performed well for 2 years and 11 months under FreeBSD 7.x, so it is still under warranty and LSI will replace it.


----------



## Terry_Kennedy (Sep 1, 2010)

User23 said:

> Strange. With the same error messages you posted (except "Enclosure added: encl=0", because the controller doesn't have that feature), a 9550SXU-4LP of mine died some days ago, crashing the whole system even though no system files were on it. It had performed well for 2 years and 11 months under FreeBSD 7.x, so it is still under warranty and LSI will replace it.


Did you update your kernel just before that started happening?

It looks like LSI submitted the patch I mentioned in my earlier reply: kern/149968


----------



## sub_mesa (Sep 1, 2010)

rusty said:

> Not so: 4GB is the recommended amount if you want to enable prefetching; ZFS is still perfectly usable with less. It will just need some tuning, that's all.


As I understand it, even with 4GiB of physical RAM you would still have prefetching turned off by default. If I'm not mistaken, it checks for 4GiB of *available* memory, but things like the kernel lower the amount of available memory straight away.

So in reality you might need 6GB+ for prefetching to be enabled by default. As I understand it, this was done because otherwise prefetching would use up a lot of memory and hurt performance in some cases instead of aiding it.



			
User23 said:

> Yes, I was wrong. GPT solves this.


Or just use no partitions at all. There's no real reason you need them if you're not going to boot from the disk or need cross-OS compatibility. Attaching a geom_label and using that instead, with no partition underneath, avoids any alignment issues or partition limitations.
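
A minimal sketch of that approach, with `mydata`, `tank` and `ad6` as placeholder names:

```
# label the whole disk, no partition table at all
glabel label mydata ad6
# build the pool on the label so the disk can move between controllers
zpool create tank label/mydata
```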

The new GPT support in FreeBSD seems nice though, especially as it can also label partitions (/dev/gpt/disk0, for example). But I do not like that it requires GEOM flags in some cases and reports the GPT label as corrupt whenever you enlarge the device. I hope it gets improved further in this regard.


----------



## Galactic_Dominator (Sep 1, 2010)

Terry_Kennedy said:

> fsck-ing a 2TB UFS partition takes quite some time. Even when [r]dump takes a snapshot, it takes some time. That's something to consider when building large partitions.



Not when it's gjournal'd. I don't have a UFS volume that big, but I'd estimate it would be done in under 3 minutes, provided decent disk speed.  SU+J should be here soon as well, which eliminates that problem.

FWIW, gjournal also improves IO under heavily multithreaded access, but will decrease performance for single-threaded access.
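
A sketch of the usual gjournal setup, with `ad6` as an example disk:

```
# load the journaling class, journal the disk, then newfs with the -J flag
gjournal load
gjournal label ad6
newfs -J /dev/ad6.journal
# gjournal'd filesystems are mounted async, since the journal ensures consistency
mount -o async /dev/ad6.journal /mnt
```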


----------



## User23 (Sep 2, 2010)

Terry_Kennedy said:

> Did you update your kernel just before that started happening?
> 
> It looks like LSI submitted the patch I mentioned in my earlier reply: kern/149968



No, I did not change the kernel beforehand, and a new controller fixed the problem. The array had/has UFS on it. After replacing the controller I was able to reproduce the error on another machine with FreeBSD 8.1 continuously, without heavy IO load.

But the patch is very interesting. Hopefully it will fix the problem I had with the "Unexpected status bit(s)".

thx


----------

