# zfs - to be or not to be



## graudeejs (May 13, 2009)

I'm starting to gain interest in ZFS....
I've read some articles and my gut is saying that I should try it... (I know it's not finished, I know I can't boot straight from ZFS)

Self healing and zfs snapshots are very appealing features....

Being a desktop user, I don't have too much important data on my HDD (+ I have backups).
At the moment I'm using my HDD space very inefficiently (multiple partitions, lots of free space, etc.)

I wonder how much space I will have available if I want self-healing... (I have a total of 390GB of HDD space: 152GB on an IDE disk and 238GB on a SATA disk). Would it be half of all disk space (195GB)?

I have 1.5GB RAM (R.I.P. the 512MB stick, which I removed a few days ago....)


Instead of thinking about why I should migrate to ZFS, I would like to hear why I shouldn't migrate to ZFS 


Thank you in advance

P.S.
links, info, experience, etc.... appreciated 

EDIT:
what do you think about:
all files on one ZFS pool vs. the system on one pool and data on another?


----------



## vivek (May 14, 2009)

Over the past few months or so, I've seen occasional instability and crashes.  Most of them were in an ISP-style hosting setup with high load, and almost all of the servers were running ZFS.  With UFS we have no problems at all. 

In initial testing it did provide some good results, but real-world experience has led us to conclude that ZFS in the 7.x releases is not stable or reliable.  I believe that by 8.0 it will be stable enough. 

HTH


----------



## phoenix (May 14, 2009)

I have nothing but praise for and good experiences with using ZFS on FreeBSD 7.x.  I use it at home (3x 120 GB SATA using raidz1) keeping 2 months of daily snapshots.  At work, we use it on our backups servers, where they do rsync backups of over 100 remote servers every night (see my howto thread on the setup).

Once you wrap your head around the concept of a single storage pool per server, and thinking in terms of vdevs instead of disks, then it makes so much sense that you wonder how we ever survived using slices and partitions.  

If your drives are not all the same size, raidz isn't really worthwhile (you lose any space over and above the size of the smallest disk).  But you can do mirroring.  And you can set *copies=2* or higher to keep redundant copies of data on a non-redundant set of disks.  But if you don't do at least mirroring, then losing 1 disk will corrupt the entire pool.

It's possible to create vdevs using slices; however, ZFS (at least on Solaris, I don't know for sure about FreeBSD) will disable the onboard disk cache if the vdevs don't consist of entire drives.  It's also possible to create vdevs using files.

IOW, unless you have a collection of totally random, odd-ball sized drives, I say give ZFS a spin.
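
For example, a two-disk mirror is as simple as this (device names here are just placeholders; use your own):

```
# pool named "tank", mirrored across two whole disks;
# usable space = size of the smaller disk
zpool create tank mirror ad0 ad4
```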


----------



## graudeejs (May 14, 2009)

phoenix said:

> But you can do mirroring.  And you can set *copies=2* or higher to keep redundant copies of data on a non-redundant set of disks.



As I understand, if I, for example, set up 1 HDD to use ZFS, I can make it redundant, right?

I'm considering putting ZFS on my 250GB SATA HDD for a test drive....
What I want is to be able to use ZFS after power failures.... so I need healing.... From what you write, I understand that I can enable this even with 1 disk.... (I don't consider losing the entire disk, which is very unlikely to happen in the next 2 years, and by that time I will have a new PC [most likely])


----------



## graudeejs (May 14, 2009)

OMG. I've had ZFS on my HDD for about 10 minutes and I'm so impressed.... that I can't think straight....

My brain is saying that I must use ZFS for everything (as much as possible)



EDIT:
I am shocked.......
I love ZFS, I love FreeBSD, I love Sun Microsystems
I love, I love, I love....
You can't even imagine how much time this will save me on partitioning (I'm the kind of guy who likes optimizing everything.... Sometimes I hate myself, because I can't decide if I want 6 or 8 GB for /usr, etc.)
This rocks....


----------



## graudeejs (May 15, 2009)

OK, I tried setting up ZFS on root, but failed (many times);
my PC stayed online max 15 min with ZFS on root.

I've read resources from
http://wiki.freebsd.org/ZFS

I followed the tuning guide, with little success.
Now I'm thinking of either trying FreeBSD-CURRENT or waiting for FreeBSD 8-RELEASE (I'll probably try CURRENT)


----------



## phoenix (May 15, 2009)

killasmurf86 said:

> As I understand, if I, for example, set up 1 HDD to use ZFS, I can make it redundant, right?



The *data* can be made redundant, by setting the filesystem property *copies* to something higher than 1.  Then ZFS will save multiple copies of each file in different places on the disk.  If one copy is corrupted, another copy will be loaded instead.  However, if the drive dies, everything on the drive is gone.



> I'm considering putting ZFS on my 250GB SATA HDD for a test drive....
> What I want is to be able to use ZFS after power failures.... so I need healing.... From what you write, I understand that I can enable this even with 1 disk.... (I don't consider losing the entire disk, which is very unlikely to happen in the next 2 years, and by that time I will have a new PC [most likely])



Yes, this will work, and can be good for testing.  See the man page for zfs(8) for details on how to set properties on filesystems.
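
For example (pool/filesystem names are just examples):

```
# keep two copies of every block in this filesystem
zfs set copies=2 tank/home

# check the current value
zfs get copies tank/home
```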


----------



## phoenix (May 15, 2009)

killasmurf86 said:

> OK, I tried setting up ZFS on root, but failed (many times);
> my PC stayed online max 15 min with ZFS on root.
> 
> I've read resources from
> ...



I wouldn't bother trying to get /-on-ZFS working until FreeBSD 8.x is released with proper support for it in the loader, the kernel, the init system, etc.

There are a bunch of different ways to do it with FreeBSD 7.x, but most of them are hacks that don't always work.

For a single hard drive, I'd recommend creating 2 slices on the disk: the first slice will be used for / and /usr (2 GB is plenty); the second slice will be used for everything else and dedicated to ZFS.

In the first slice, create 2 partitions:  / and swap  (you could create a third for /usr if you really want, but I'd leave it on /).

After the install and initial boot, enable ZFS support, and add the second slice to the pool.

Then create filesystems for /var, /usr/ports, /usr/src, /usr/obj, /home, and /usr/local; but don't set the mountpoint.

Finally, boot into single user mode, and:
* mount -u /
* /etc/rc.d/hostid start
* /etc/rc.d/zfs start
* cp -Rp /path/* /pool/path/ for each of the above filesystems
* rm -rf /path/* for each of the above filesystems
* zfs set mountpoint=/path pool/path for each of the filesystems
* shutdown -r now

That will copy all the data for each of the filesystems off / and onto ZFS filesystems.  Then reset the ZFS mountpoints to the correct locations.  And finally, boot into the OS using the ZFS filesystems.  After that, the only data under / will be the base FreeBSD OS.  Just enough to boot into single-user mode and fix ZFS issues if needed.  Everything else will be on ZFS.
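
Put together for a single filesystem, the single-user session would look roughly like this (using /var and a pool named "pool" as examples; adjust names to taste):

```
mount -u /                         # remount / read-write
/etc/rc.d/hostid start             # ZFS needs the hostid
/etc/rc.d/zfs start

cp -Rp /var/* /pool/var/           # copy the data onto the ZFS filesystem
rm -rf /var/*                      # clear the old copy off /
zfs set mountpoint=/var pool/var   # mount the ZFS fs at the real path

shutdown -r now
```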


----------



## graudeejs (May 15, 2009)

Heh heh, yeah, I was thinking about that.....
I just got tired today....
Now I've rested for a few hours and I'm ready to continue...
ZFS is really wonderful, and I can't wait till 8 is out. (I'm waiting for it even more than I waited for 7.0, when I became a regular FreeBSD user)



Off topic:
btw, I will do it a little different way, since my system is wiped (backups don't count, lol).
I will do everything from fixit (basically installing FreeBSD without sysinstall; I'm already so used to this method, thanks to coray_james @ daemonforums.com for http://daemonforums.org/showthread.php?t=1538)
If not for that post I wouldn't be using GPT, fully encrypted disks, and lots of other stuff that I like

P.S.
FreeBSD-8-CURRENT didn't even boot with ZFS; it panicked just before mounting the drives


----------



## vivek (May 15, 2009)

The Handbook provides very good information:
http://www.freebsd.org/doc/en/books/handbook/filesystems-zfs.html
http://flux.org.uk/howto/solaris/zfs_tutorial_01
Also don't forget the official Sun documentation and the man pages.

HTH


----------



## graudeejs (May 15, 2009)

Man, when I was looking in the Handbook, I hit Ctrl+F (in Firefox) and typed "zfs".
I was surprised that there was no info.... (I knew I had seen it once.)
I would never even think of searching for "Z file system";
this should be changed (I think), because everyone calls it ZFS.

However, I found the info elsewhere.


----------



## graudeejs (May 15, 2009)

phoenix said:

> The *data* can be made redundant, by setting the filesystem property *copies* to something higher than 1.  Then ZFS will save multiple copies of each file in different places on the disk.  If one copy is corrupted, another copy will be loaded instead.  However, if the drive dies, everything on the drive is gone.
> 
> 
> 
> Yes, this will work, and can be good for testing.  See the man page for zfs(8) for details on how to set properties on filesystems.



Already learned all that. It was fast, and much simpler than it seemed at first.


----------



## graudeejs (May 15, 2009)

OK, so far so good; the system booted...
We'll see how stable it is, but for now there is 1 problem already:
the system panics during shutdown/reboot

```
Waiting (max 60 seconds) for system process 'buffdaemon' to stop...done
All buffers synced
panic: vput: negative ref cnt
cpuid=0
Physical memory: 1523 MB
....
```

Edit:
both disks are completely encrypted; I'm booting from flash

EDIT
Other than this, everything seems to be fine.
The PC has been up and running for 50 minutes already


----------



## f-andrey (May 16, 2009)

If you have an i386 system, that's not so good; ZFS is better on amd64.
And CURRENT is newer, so it may be best to wait for it.
CURRENT can boot from ZFS


----------



## graudeejs (May 16, 2009)

I recompiled my custom kernel.
Everything works, nothing crashes.


----------



## phoenix (May 16, 2009)

f-andrey said:

> If you have an i386 system, that's not so good; ZFS is better on amd64.



ZFS works just fine on 32-bit systems.  You just need to be more aggressive in your kernel memory and ARC tuning.  And you really should have more than 2 GB of memory (people have run ZFS on 32-bit systems with as little as 512 MB, but more is always better).



> And CURRENT is newer, so it may be best to wait for it.
> CURRENT can boot from ZFS



Kip Macy has made available a test branch of 7-STABLE that includes ZFSv13.  Will be interesting to see if this makes it into 7.3.


----------



## graudeejs (May 16, 2009)

phoenix said:

> ZFS works just fine on 32-bit systems.  You just need to be more aggressive in your kernel memory and ARC tuning.  And you really should have more than 2 GB of memory (people have run ZFS on 32-bit systems with as little as 512 MB, but more is always better).



I can use ZFS (if I download torrents at less than 2MB/s).
I will try tuning more....
I had 2GB RAM, but a 512MB stick died....
I'm going to buy another 512MB or 1GB next week




			
phoenix said:

> Kip Macy has made available a test branch of 7-STABLE that includes ZFSv13.  Will be interesting to see if this makes it into 7.3.



That's wonderful....
Perhaps I should try....



Here's the disk I/O (`zpool iostat` output):

```
capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      2      4   245K   201K
sys         90.2G  50.8G     11     17   869K   585K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     80      0  10.0M      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    165     58  20.7M  4.64M
sys         90.2G  50.8G      2     38   184K   483K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      1      3   248K  15.5K
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G     11      0   859K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     35      0  4.37M      0
sys         90.2G  50.8G      0      0      0  3.89K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    105      0  13.2M      0
sys         90.2G  50.8G      1      0   198K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0    119      0  9.64M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    211      0  26.5M      0
sys         90.2G  50.8G      3     16   376K   149K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     56      0  7.06M      0
sys         90.2G  50.8G      0     41  62.4K   280K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     11      0  1.39M      0
sys         90.2G  50.8G      6      0   515K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     28     74  3.51M  9.30M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     40     47  4.93M  3.88M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    183      0  22.8M      0
sys         90.2G  50.8G      2     52   227K   516K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     46     70  5.84M   953K
sys         90.2G  50.8G      0      4      0  39.0K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      4      0   310K  11.6K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0   125K      0
sys         90.2G  50.8G     23      0  1.63M      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    145      3  18.1M   443K
sys         90.2G  50.8G      5      0   411K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    141     76  17.6M  3.76M
sys         90.2G  50.8G      0     59      0   638K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      2      0   373K      0
sys         90.2G  50.8G      0      3      0  15.5K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0   124K      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    141     29  17.6M  3.56M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    165      8  20.6M   201K
sys         90.2G  50.8G      1     51   127K   416K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    195      0  24.5M      0
sys         90.2G  50.8G      2      3   281K  14.5K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    184      2  23.1M   114K
sys         90.2G  50.8G      0     46  46.7K   445K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     52      3  6.38M  15.7K
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      2     52   251K  2.01M
sys         90.2G  50.8G      0      0  95.1K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     20      0  2.57M      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      1      0   196K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     76    144  9.52M  8.56M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    159      0  19.9M      0
sys         90.2G  50.8G      6    114   660K  1.42M
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     60      0  7.57M      0
sys         90.2G  50.8G      0      0      0  3.91K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    116      0  14.5M      0
sys         90.2G  50.8G      2      0   306K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    158     52  19.8M  6.32M
sys         90.2G  50.8G      0      0  96.7K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     36     35  4.42M   330K
sys         90.2G  50.8G     12      0  1.04M      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      1     82   124K   852K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     32      0  4.02M      0
sys         90.2G  50.8G      1      0   196K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0     11      0  1.46M
sys         90.2G  50.8G      1      0   207K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     66     38  8.35M  4.79M
sys         90.2G  50.8G      3      0   412K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    206      0  25.8M      0
sys         90.2G  50.8G      0      0   105K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     54      0  6.75M      0
sys         90.2G  50.8G      4     60   398K   568K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0  62.1K      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      2      0   299K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      4  61.9K   281K
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.2G   174G     31    158  3.75M  15.3M
sys         90.2G  50.8G      0      0  63.0K      0
----------  -----  -----  -----  -----  -----  -----
```
and then it hung

I've been monitoring my RAM, and I had plenty of free RAM (a few hundred MB to 1GB free).
It's also worth mentioning that I'm running a very lightweight desktop


----------



## phoenix (May 17, 2009)

Just curious, but why do you have two separate pools in the same system?

As for the hanging issue, have you done any VM/ARC tuning in /boot/loader.conf?


----------



## graudeejs (May 17, 2009)

phoenix said:

> Just curious, but why do you have two separate pools in the same system?


Because if I decide to move back to UFS, it'll be much easier to do. Transferring over 60GB to the laptop is a pain, because the laptop's wifi and built-in network card suck




			
phoenix said:

> As for the hanging issue, have you done any VM/ARC tuning in /boot/loader.conf?


Yup, I tried.... I will keep on experimenting.
I will try again to compile the kernel with KVA_PAGES=512; last time it failed to compile.


Do you think using a single pool would help?


Also, the ATA disk is about 4-5 years old..... It might start failing soon


----------



## phoenix (May 17, 2009)

With only 1.5 GB of RAM, you can't use KVA_PAGES=512.  That will give you 2 GB of kernel memory space ... which means there's nothing left for the userland.    You'll want to remove that setting from your kernel.

By default, 1/2 of your RAM is configured as kernel memory.  In your case, that would be 768 MB.

My rule of thumb has been:  1/2 RAM for kernel, 1/2 kernel space for ARC.  Set kmem_max to 768 MB.  Then set zfs.arc_max to 384 MB.  That should keep things stable.
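
In /boot/loader.conf, that rule of thumb works out to something like this (values for a 1.5 GB machine; adjust to your RAM):

```
# ~1/2 of RAM for kernel memory
vm.kmem_size="768M"
vm.kmem_size_max="768M"

# ~1/2 of kernel memory for the ARC
vfs.zfs.arc_max="384M"
```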


----------



## graudeejs (May 17, 2009)

phoenix said:

> With only 1.5 GB of RAM, you can't use KVA_PAGES=512.  That will give you 2 GB of kernel memory space ... which means there's nothing left for the userland.    You'll want to remove that setting from your kernel.
> 
> By default, 1/2 of your RAM is configured as kernel memory.  In your case, that would be 768 MB.
> 
> My rule of thumb has been:  1/2 RAM for kernel, 1/2 kernel space for ARC.  Set kmem_max to 768 MB.  Then set zfs.arc_max to 384 MB.  That should keep things stable.



OK, I could actually increase kmem_max even more.
I monitored my memory usage, and about 700MB is free (very stable, haven't seen less).

With UFS most of it was probably used for HDD cache.


EDIT:
currently on i386 the kmem limit is 512M (so it seems); the PC panicked




> On i386 systems you will need to recompile your kernel with increased KVA_PAGES option to increase the size of the kernel address space before vm.kmem_size can be increased beyond 512M. Add the following line to your kernel configuration file to increase available space for vm.kmem_size to at least 1 GB:
> 
> options KVA_PAGES=512


http://wiki.freebsd.org/ZFSTuningGuide


----------



## graudeejs (May 18, 2009)

Added *options KVA_PAGES=512* to the kernel config and increased kernel memory to 1G and the ARC to 512M


```
vm.kmem_size_max: 1073741824 (1G)
vm.kmem_size: 1073741824 (1G)
vfs.zfs.arc_max: 536870912 (512M)
```


Still crashing.... could it be because I use 2 pools?


----------



## graudeejs (May 18, 2009)

Heh heh,
I did an interesting test:
I downloaded a file from some (relatively) high speed FTP (100Mbps).
It was downloading at ~9MB/s (which is up to 5 times faster than when I downloaded files from torrents). I had a few small lags, but nothing crashed.
I downloaded a file to each of the pools using elinks. Everything went fine.

Conclusion: It's probably Deluge causing all my problems (I never liked Python); however, I won't tag the thread SOLVED for now... (just to make sure)


----------



## phoenix (May 19, 2009)

KVA_PAGES gets multiplied by 4 MB to give the amount of kernel address space.  Using KVA_PAGES=512 means 2 GB of kernel memory space.  If you run with this setting, with only 1.5 GB of RAM, you will run into issues, unless you have a lot of non-ZFS disk space set up for swap.

Unless you have over 2 GB of memory, don't mess with KVA_PAGES.

I'll have to dig up where I read about this; I just went over this myself last summer.  Setting this too high, and setting kmem_max too high, relative to the amount of RAM you have, will panic the kernel.
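
The arithmetic itself is simple, though; each KVA_PAGES unit maps 4 MB (a sketch for an i386 kernel config; pick a value that fits your RAM):

```
# i386 kernel config:
# options KVA_PAGES=256  -> 256 * 4 MB = 1 GB kernel address space (the default, IIRC)
# options KVA_PAGES=512  -> 512 * 4 MB = 2 GB (too much with only 1.5 GB RAM)
options KVA_PAGES=256
```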


----------



## bigboss (May 19, 2009)

*Argh... ZFS, the filesystem I most love, and hate....*

Killasmurf, I had the same feeling when I first learned of and used ZFS: I've got to use it! For everything!
And so far I am very impressed with its features, but then come the crashes... and crashes...

And tuning doesn't always solve your problems forever.
I've tried many different memory configurations; my system is also i386, a Pentium 4 2.6GHz with 2GB of DDR400 RAM.
Right now I am using

```
vfs.zfs.arc_max="512M"
#vfs.zfs.vdev.cache.size="5M"
#vfs.zfs.prefetch_disable=1
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
```
Once I decreased ZFS memory so low that it never crashed, but running portmaster -a was so slow, and every other heavy disk I/O activity would take forever, so I am seriously considering buying another 2GB of RAM so I can feed RAM-hungry ZFS.


I am very interested in this test FreeBSD 7-STABLE branch which has ZFSv13; I think it would be our best bet, mainly because I can't run FreeBSD 8 yet, since it doesn't recognize my SATA disk controller
http://forums.freebsd.org/showthread.php?t=3682
but more on that later.... 
I've been too busy lately :-(

But even with all these problems, I think ZFS is worth the work, because it is so promising, powerful, and simple.


----------



## graudeejs (May 19, 2009)

phoenix said:

> KVA_PAGES gets multiplied by 4 MB to give the amount of kernel address space.  Using KVA_PAGES=512 means 2 GB of kernel memory space.  If you run with this setting, with only 1.5 GB of RAM, you will run into issues, unless you have a lot of non-ZFS disk space set up for swap.



How can it be 2GB when I have clearly set kmem_max to 1G?
And when I booted, it showed 1G.

I have 6G of swap, just in case



			
phoenix said:

> Unless you have over 2 GB of memory, don't mess with KVA_PAGES.


Without KVA_PAGES I can't have more than 512MB of kernel memory.
I will try setting it to a lower number next time.


Anyway, I somehow messed things up.
I didn't export the data pool; I simply destroyed it and added the SATA disk to the sys pool. In the end I get panics.
I managed to avoid the panics by booting from the FreeBSD-8-CURRENT CD and importing and exporting the ZFS pool. Then I restart and I can import the sys pool.
However, sometimes I can still see the data pool (corrupted, lol), and when the PC crashes, I have to boot from the FreeBSD-8-CURRENT CD again.

For a few minutes I thought I had lost my music collection....
Now I only need to transfer it to the laptop....
After that, tomorrow, I will try to install FreeBSD-8-CURRENT.



			
bigboss said:

> But even with all these problems, I think ZFS is worth the work, because it is so promising, powerful, and simple.


Yup, it's so good I can't stop myself from going through all the mumbo jumbo to get it working for everything.
Best of all, it's always consistent, no matter how many times my PC crashes.


----------



## bigboss (May 19, 2009)

*ZFSv13 on FreeBSD 7-STABLE!!!*

Where is this Kip Macy FreeBSD 7-STABLE branch?
I tried to look for it, but I got totally lost in the tree; can someone help me out?

I really can't use CURRENT now because FreeBSD-8 doesn't recognize my hardware, and besides that I'd like to keep using stable.


----------



## graudeejs (May 19, 2009)

I don't think there is one yet....
http://people.freebsd.org/~bmah/relnotes/7-STABLE/relnotes.html

ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/200905/


you can try to get it with csup


----------



## phoenix (May 19, 2009)

Search the mailing lists for -stable and -current; the link to the repos is in his e-mail message.  It's not part of the official FreeBSD source tree.


----------



## bigboss (May 21, 2009)

*FreeBSD 8-CURRENT Rocks!*

Hi guys, I managed to install FreeBSD 8 for good. (I'm just about to figure out the fix for this PR http://www.freebsd.org/cgi/query-pr.cgi?pr=121461, but more on that later.)
Phoenix, I searched the mailing lists a little and didn't find it. I needed to recompile everything anyway, and I thought upgrading to CURRENT now would be an opportunity to get a more stable ZFS. I "snapshotted" everything first and upgraded my root partition to 8-CURRENT, and surprisingly it worked; the last time I tried an 8-CURRENT snapshot it didn't.

Anyway, I upgraded from 7.2-STABLE to 8-CURRENT using sources, and I got caught in a bad situation: when you do

```
make installkernel
```
and then reboot, you get an almost unusable ZFS, with a ZFSv13 module in the kernel and a ZFSv6 userland, which doesn't start correctly and leaves the system stuck in single user mode (especially if you have /usr on a zpool like me). You can just type

```
mount -t zfs tank/usr /usr
```
for example, and you're good to go. But to really avoid this, run

```
make installworld
```
BEFORE rebooting the machine, I repeat, BEFORE rebooting the machine, so you'll have both the ZFS kernel module and the userland up to date.
There is also another gotcha: I had to installworld with the following variable set

```
make NO_FSCHG=true installworld
```



> as the old zfs filesystem was version=1 and doesn't support flags until you have upgraded the filesystem to version 3.



Got it from the mailing list:
http://www.nabble.com/zfs-version-13-kernel-and-zfs-version-6-userland-tool--td20650216.html

There should be a note about this in /usr/src/UPDATING,
shouldn't there?

By the way, ZFSv13 is FAR more stable. I've been running some stress tests, without a reboot or freeze yet!


----------



## bigboss (May 21, 2009)

*ZFSv13 has been pushed to FreeBSD 7-STABLE*

Great news!


----------



## phoenix (May 21, 2009)

Looks like 7.3 will be an interesting release.


----------



## graudeejs (May 21, 2009)

I bought 1GB of RAM [now I have 2.5GB] 
Now customizing the kernel.

I think I will even make a ZFS-bootable flash drive with a basic FreeBSD on it 

Unfortunately I wasn't able to use compression on the ZFS boot partition, and couldn't boot off a 128M flash drive 

But with compression the GENERIC kernel fit quite well; I even had 25 (gzip) to 28 (gzip-9) MB of free disk space


----------



## graudeejs (May 21, 2009)

Look what I found:
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2003-06/1599.html

Edit:
I have raidz (if I understand correctly; I used `$ zpool create a ad4 ad0`).
I have 2 very different HDDs.
I set *vfs.zfs.cache_flush_disable=1*, and my total HDD I/O went from 36MB/s to 45MB/s (a 25% gain)


Note: I use AES-256 encryption
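
To make it survive a reboot, it goes in /boot/loader.conf (as far as I can tell it's a loader tunable):

```
# skip the cache-flush commands ZFS sends to the drives on transaction
# commit; faster on my setup, but less safe on power loss
vfs.zfs.cache_flush_disable=1
```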


----------



## graudeejs (May 22, 2009)

Guys, do you really not have lags when writing to disk at high speed?
I have small, very annoying, periodic lags....

From here, I'm thinking of trying a few things:
1) decrease vfs.zfs.arc_max to just a few megabytes. I hope this would force ZFS to write to disk instantly, unlike now, when it writes at very high speed for a few seconds and then waits for the cache to fill
2) increase vfs.zfs.arc_max even more (currently it's 512MB)
3) rebuild the pool without geli (man, I really don't want to do this)


----------



## phoenix (May 22, 2009)

killasmurf86 said:

> Guys, do you really not have lags when writing to disk at high speed?



At work, no, we don't have that, and we do heavy, sustained reading and writing for 5-hour periods twice a day.

At home, yes, I do experience this, but have never been able to track down exactly how to fix/minimise it.

I'm almost positive it has to do with the size of the ARC and how often it gets flushed, but haven't played around with the settings too much to confirm.


----------



## jef (May 22, 2009)

phoenix said:

> Kip Macy has made available a test branch of 7-STABLE that includes ZFSv13.  Will be interesting to see if this makes it into 7.3.



First off, thanks for all the practical pointers on getting ZFS to be functional. From what I read, it sounds as though, under light or moderate load, ZFS is "stable enough" for "non-life-support applications." 

I'm going to be building up some Atom 330 iTX boxes with 2GB and paired 500 GB notebook drives to replace my decade-old Intel Pentium III (733.13-MHz 686-class CPU) boxes that mainly supply small-scale external web services, mail, and (internal) file serving for a couple of Macs.

Do you have any feeling on the relative stability of the "test branch" of 7-STABLE compared to what I've been used to in tracking -STABLE since the 4.x days?


----------



## phoenix (May 22, 2009)

For boxes like that, with just two drives, I'd just use gmirror.  Less CPU/RAM required for gmirror compared to ZFS.  ZFS really only gets useful/fun when you have lots of disks.  

-STABLE is usually usable, but one should subscribe to the -stable mailing list and watch for the various HEAD'S UP messages about big changes that are going in, and the various MFC messages detailing code coming in from -CURRENT.


----------



## jef (May 22, 2009)

Thanks -- I missed the second page that indicates that ZFS is in -STABLE now. I've dealt with occasional "bad times to buildworld" in the past, so I'm OK with that.

ZFS looks like it solves a few issues for me that I don't think GEOM will, including:

* Snapshots for rollback
* Dealing with a "partition" per jail (potentially with multiple "sub-partitions")
* Resizing "partitions"

It also becomes very interesting on the boxes where 500GB isn't enough (did I really say that?), such as the media and Time Machine file servers, which will probably have four (or six) 1 TB drives in addition to the pair of notebook drives.

I'll probably build the "critical services" machines on GEOM and try -STABLE on another box or two before making the decision about when to cut over.


----------



## graudeejs (May 22, 2009)

phoenix said:

> I'm almost positive it has to do with the size of the ARC and how often it gets flushed, but haven't played around with the settings too much to confirm.



Yes, I think exactly the same.
I tried setting ARC to 50MB, but for some reason it wasn't applied.

Do you use i386 or amd64 at home?


----------



## danger@ (May 22, 2009)

8.0 in the summer will be an interesting release


----------



## graudeejs (May 22, 2009)

danger@ said:

> 8.0 in the summer will be an interesting release



Very, very interesting.... I'm running CURRENT right now.... {too bad it has lags.....}

I've been googling for two days now and still can't find anything.....
I'm already starting to think about submitting a PR


----------



## phoenix (May 22, 2009)

killasmurf86 said:

> Yes, I think exactly the same.
> I tried setting ARC to 50MB, but for some reason it wasn't applied.
> 
> Do you use i386 or amd64 at home?



32-bit FreeBSD 7.1, 3.0 GHz P4 CPU, 2 GB RAM, 3x 120 GB SATA drives in raidz1.


----------



## graudeejs (May 22, 2009)

I have
32-bit FreeBSD-8-CURRENT, a 3GHz P4 CPU with HTT enabled, 2.5GB RAM, 1x250GB SATA + 1x160GB ATA HDD in raidz.


Let's make this clear (for me)...
`$ zpool create poolname ad0 ad4`
Is that raidz (or just striping)? (I'm getting confused with all the RAID names)


----------



## phoenix (May 22, 2009)

If you don't specify raidz on the command-line, then it isn't using raidz.  Same for mirroring.

What you have is a non-redundant pool comprised of two vdevs.  The pool is striped across the two vdevs (RAID0).
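
Side by side, roughly, using the same example device names:

```
zpool create poolname ad0 ad4            # stripe (RAID0): no redundancy
zpool create poolname mirror ad0 ad4     # mirror (RAID1): survives 1 disk failure
zpool create poolname raidz ad0 ad4 ad6  # raidz: single parity, usually 3+ disks
```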


----------



## graudeejs (May 22, 2009)

OK, kinda thought so, but wasn't 100% sure. Thanks!


----------



## graudeejs (May 24, 2009)

*3x ZFS problems*

1)



I'm not sure if this is directly related to ZFS

2) after running *zfs rollback* I get a kernel panic

3) a *zfs create* problem
When I run zfs create, it creates the new fs, and it's automatically mounted.
But you can't write to it unless you run

```
$ zfs umount -a
$ zfs mount -a
```
It seems the new fs gets mounted under the other fs. For example:
if I have /home (a/home)
and I run `$ zfs create a/home/killasmurf86`,
it will be automatically mounted (if you don't change the default settings).
Then if I restore my home directory backup, everything gets written to /home:
mount will show a/home/killasmurf86 mounted, but I'm not able to restore backups until I remount it.

[uhh, explaining the 3rd one is really hard]



I use compression=gzip and copies=2 on almost all fs
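
(For reference, I set those like this; the dataset name is just an example:)

```
zfs set compression=gzip a/home
zfs set copies=2 a/home
```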

I attached my kernel config; perhaps that has something to do with the 1st problem


----------



## graudeejs (May 26, 2009)

OK, the 1st problem was because I had localization settings in my ~/.shrc.
After I removed them, it disappeared.


----------

