# ZFS vs HAMMER



## Oko (Jan 2, 2015)

I am starting this thread in the hope of documenting the feature differences between ZFS (FreeBSD) and HAMMER1 (DragonFly BSD) for people who are interested in serious storage solutions. As a disclaimer, I have been using ZFS almost exclusively for the past year and a half as a storage solution at my work, but I should still be considered a ZFS n00b. I tried to use DragonFly but bailed out about 3 months ago, mostly out of frustration with monitoring tools (I could not compile Monit on DF, and net-snmp didn't work the way I wanted).

For starters, ZFS is a file system and a volume manager in one; HAMMER is just a file system. If you want to use ZFS, you should get a decent Host Bus Adapter (HBA) like the LSI 9211-8i. If you want to use HAMMER as a serious storage solution, you should get a good hardware RAID controller; LSI and Areca come to mind as very well supported (not cheap, with prices over $600 in the U.S.).

Both systems are, in practical terms, 64-bit only, and they are resource-hungry. If you want ZFS you need lots of expensive ECC RAM. The fact that somebody is running ZFS on her/his laptop with 2GB of RAM will be of no importance to you when that 16TB storage pool fails during a ZFS scrub or de-duplication because you have insufficient RAM or the RAM is of low quality. The rule of thumb is that you typically need about 16GB of RAM to begin with ZFS, and then about 1-2 GB for each additional TB of data. You also need a really good Intel processor. FreeNAS officially doesn't support any AMD hardware, and those people know something about storage solutions. Please refer to the mailing lists for all sorts of problems with AMD processors. Long story short, you are looking at a very serious initial investment with ZFS.

HAMMER is not a cheap file system either, but it is far cheaper than ZFS. While you probably need a little more than 2 GB of RAM for 16TB of RAID 6 with HAMMER1 on top of it, you can probably get away with that. Note that HAMMER is not an enterprise file system suitable for large data farms, according to Matt himself.

When it comes to storage solutions, I like to install both FreeBSD and DF on small 32 GB SSDs (preferably two, in a ZFS mirror configuration or a HAMMER mirror-stream). Using SSDs has actual technical advantages. I hope people will seriously help me complete these lists.

*ZFS good*

Self healing.
ZFS Clones are writable (ideal for hot migration).
Full journaling via ZFS snapshots.
Uses compression to store data; I like LZMA.
Portable storage: a ZFS pool created on Linux can be imported into FreeBSD.
"Portable" in general, as it runs on Solaris, FreeBSD, and Linux. The truth of the matter is that large parts of the Solaris kernel have been reimplemented in FreeBSD to add native support for ZFS. The Linux implementation relies on FUSE.
It is possible to use a ZFS volume as an iSCSI target.
It is possible to share ZFS via NFS and SMB.
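To make the last two points concrete, here is a minimal sketch of sharing a dataset over NFS and carving out a zvol as an iSCSI backing store on FreeBSD; the pool and dataset names (`tank`, `tank/export`, `tank/lun0`) and the network are made up for illustration:

```shell
# Hypothetical pool/dataset names; run as root on FreeBSD.

# Share a dataset over NFS (ZFS hands the export options to mountd):
zfs set sharenfs="-network 192.168.1.0 -mask 255.255.255.0" tank/export

# Create a 100 GB ZFS volume (zvol); the resulting block device at
# /dev/zvol/tank/lun0 can then be used as the backing store for an
# iSCSI target daemon such as istgt or ctld.
zfs create -V 100G tank/lun0
```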

*ZFS bad*

Well-known database degradation (not suitable for keeping SQL databases).
No fine-grained journaling like HAMMER's history.
No volume growing, at least in the FreeBSD version.

*ZFS ugly*

CDDL license.
Extremely complicated piece of software. The list of files required to compile ZFS support into the FreeBSD kernel is very long.

Legally associated with Oracle.
Native encryption for ZFS has only been available in the Oracle version.
Upstream is dead. I don't know about you, but OpenZFS doesn't inspire a lot of confidence in me.

*HAMMER good*

Fine-grained journaling with HAMMER history beats the pants off everything around.
HAMMER has a high focus on data integrity.
Backup-ready, with the network-aware hammer snapshot/mirror-stream commands.
Phenomenal performance with SQL.
Much lower RAM requirements for HAMMER de-duplication than ZFS.
Pseudo-Filesystems (PFSs).
Fully open source, with a nice BSD license attached to it.
It is possible to share HAMMER via NFS.
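The history and replication points above can be illustrated with a few commands; the PFS mount points and the backup host below are hypothetical:

```shell
# Hypothetical PFS mount points and hostnames; run as root on DragonFly BSD.

# Take a snapshot of a HAMMER PFS into a snapshot directory:
hammer snapshot /home /home/snapshots/daily

# Inspect the fine-grained history of a single file:
hammer history /home/master.passwd

# Continuously stream changes from a master PFS to a read-only slave,
# here over the network to another machine:
hammer mirror-stream /home backuphost:/backup/home
```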

*HAMMER bad*

No self healing.
No compression.
No volume growing.
mirror-snapshot/mirror-stream targets are read-only and require human intervention to be deployed.
PFS can't be used as iSCSI target.

*HAMMER ugly*

HAMMER1 is a dead end. The file system has reached the limitations of its original design, and no new features are planned/possible.

Not portable: https://wiki.freebsd.org/PortingHAMMERFS

Finally, the DragonFly community is really tiny but charismatic. People are really cool, but there are too few of them to stabilize the infrastructure needed for enterprise solutions. FreeBSD also has meaningful vendor support for HBA cards, and for RAID cards for that matter, which should be used only with UFS. The DF jail infrastructure is stuck in 2005, although there is a Google Summer of Code 2015 project to sync DF jails with FreeBSD 9.x.

As a matter of subjective opinion, I will tell you that DF's new network stack, and the whole OS, feel smoking fast compared to FreeBSD. As a matter of fact, I have not seen anything faster than DF on my hardware, period (I have Red Hat computing nodes and OpenBSD too, besides FreeBSD).


----------



## vermaden (Jan 2, 2015)

Oko said:

> *ZFS bad*
> 3. No volume growing.



You can grow a ZFS pool with these:

```
# zpool set autoexpand=on zroot
# zpool online -e zroot ada3
```

I would also add these:

*ZFS good*
5. LZ4 compression, which is even better than LZMA.
6. Deduplication.
7. ZFS Send/Receive with deduplication and compression on the fly.
8. Boot Environments.
9. Can 'do' block devices so can be used as backend for iSCSI or SWAP directly.

*ZFS bad*
4. Lots of RAM needed for deduplication.
5. No 'offline' deduplication like HAMMER has.
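A quick sketch of points 7 and 9 above (send/receive and block devices); pool, dataset, and host names are made up for illustration:

```shell
# Hypothetical pool/dataset/host names; run as root.

# Replicate a snapshot to another machine over SSH:
zfs snapshot tank/data@2015-01-02
zfs send tank/data@2015-01-02 | ssh backuphost zfs receive backup/data

# Generate a deduplicated send stream (-D) into a file:
zfs send -D tank/data@2015-01-02 > /backup/data.zstream

# Use a zvol directly as swap on FreeBSD:
zfs create -V 4G -o org.freebsd:swap=on tank/swap0
swapon /dev/zvol/tank/swap0
```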


----------



## Crivens (Jan 2, 2015)

The comparison makes me want to see HAMMER2 on FreeBSD. Wasn't there some effort to have it ported to FreeBSD? That would be one point I would add to the "HAMMER ugly" list, if it is so deeply integrated into the kernel that it cannot be ported to its sibling operating systems. But that is only me.


----------



## gofer_touch (Jan 2, 2015)

One minor detail that also matters is that HAMMER is BSD-licensed, while ZFS is not. 

A battle over licensing was once fought over ZFS and the patents it represents: http://www.theregister.co.uk/2010/09/09/oracle_netapp_zfs_dismiss/

Also see NetApp's file system patents: http://en.swpat.org/wiki/NetApp's_filesystem_patents

Someone may decide someday that they want to sue again. For some, ZFS is a proprietary file system. Even the OpenBSD guys aren't touching it.


----------



## wblock@ (Jan 2, 2015)

One major difference is that ZFS is cross-platform.  Is HAMMER available on anything other than DragonFly?


----------



## Oko (Jan 2, 2015)

Crivens said:


> The comparison makes me want to see HAMMER2 on FreeBSD. Wasn't there some effort to have it ported to FreeBSD? That would be one point I would add to the "HAMMER ugly" list, if it is so deeply integrated into the kernel that it cannot be ported to its sibling operating systems. But that is only me.


Done! HAMMER2 should rectify most if not all problems with HAMMER1, including portability and the lack of compression; snapshots will be writable, and it will be a true clustering file system. Readers beware: after two years of work, HAMMER2 is not even close to being usable, let alone production-ready, in DF. It will take years, if ever, for that thing to stabilize and get ported to other OSs.



wblock@ said:


> One major difference is that ZFS is cross-platform.  Is HAMMER available on anything other than DragonFly?


Nope! See Matt's answer from the FreeBSD mailing lists, which I linked above, to see how difficult/impossible porting HAMMER to FreeBSD would be without essentially replicating parts of the DF kernel. DF is "the logical continuation of the FreeBSD 4.x series" and as such is at this point a very different OS from FreeBSD, with which it shares common ancestry. The amount of work that went into rewriting the kernel, the network stack, and for that matter all major parts of the OS is incredible, particularly in light of the fact that it has been done by such a tiny group of developers. In BSDtalk248 Matt talks about it and makes the following interesting observation: at the moment of the DF fork, FreeBSD was faster, leaner, and beating the pants off Linux in all categories. According to him, and in my experience, FreeBSD is now a bloated super-server OS requiring huge resources, while DF is just a smoking-fast research OS.


----------



## gofer_touch (Jan 2, 2015)

wblock@ said:


> One major difference is that ZFS is cross-platform.  Is HAMMER available on anything other than DragonFly?



It looks like HAMMER has been ported to Linux (read-only support, I imagine), and there also seems to be some work being done to get it working on OS X:

HAMMER for Linux - https://dlorch.github.io/hammer-linux

HAMMER for OS X - https://github.com/dlorch/hammer-fuse


----------



## NewGuy (Jan 2, 2015)

Almost all the bullet points in the original post are false. ZFS is active upstream, ZFS allows native encryption, and it is possible to grow ZFS volumes. Also, ZFS runs fine with all sorts of RAM and doesn't require much of it. I often use ZFS on machines with less than 2GB of RAM and it runs smoothly. Granted, if you want deduplication or massive RAID arrays then, yes, more RAM is good, but it's not at all required for most home and small office scenarios.


----------



## Oko (Jan 2, 2015)

NewGuy said:


> Almost all the bullet points in the original post are false. ZFS is active upstream, ZFS allows native encryption, and it is possible to grow ZFS volumes. Also, ZFS runs fine with all sorts of RAM and doesn't require much of it. I often use ZFS on machines with less than 2GB of RAM and it runs smoothly. Granted, if you want deduplication or massive RAID arrays then, yes, more RAM is good, but it's not at all required for most home and small office scenarios.


Please fix it. Samples of code (how do you do, for example, native ZFS pool encryption?) are especially appreciated. I am not an expert by any stretch of the imagination. The idea of the original post is to try to objectively compare the two most sophisticated file systems in existence, not to start any flame wars. I said I am a n00b with both file systems. Nothing would make me happier than having Matt and the FreeBSD kernel hackers edit my post. Your claim about RAM is pure BS. Please tell the people/customer service of FreeNAS/TrueNAS that you are running FreeNAS/TrueNAS with 2GB of RAM and let me know what their reaction is.


----------



## ANOKNUSA (Jan 2, 2015)

Oko said:


> Upstream is dead.


Care to elaborate? I choke on my coffee every time I see this stated. For some inexplicable reason, FOSS advocates like to declare a project "dead" if it goes more than 15 minutes without a code commit.


----------



## Oko (Jan 2, 2015)

ANOKNUSA said:


> Care to elaborate? I choke on my coffee every time I see this stated. For some inexplicable reason, FOSS advocates like to declare a project "dead" if it goes more than 15 minutes without a code commit.


This might be just my feeling. ZFS and Solaris were of course Sun's children, and I have personally associated (probably mistakenly) all development of ZFS with the employees and affiliates of that company. Oracle has closed-sourced Solaris again. They charge money even if you only think you want to use Solaris. I have not seen any talk of Solaris 12. It looks like Solaris is dead and not marketed at all, and ZFS is its native file system. Oracle closed its source development. Native encryption, for example, was available for a long time in the Oracle version. My understanding is that the Oracle implementation of ZFS is no longer compatible with the rest of the world, and its ZFS pools can't be imported into other OSs.

If OpenZFS is the future, what is their tier-one development platform? *illumos*, FreeBSD, or, God forbid, Linux, on which ZFS feels very awkward?


----------



## Carpetsmoker (Jan 2, 2015)

Since I would like to migrate my server from UFS to $something_else, I was actually looking for a ZFS vs. HAMMER comparison a while ago; I'm somewhat disappointed with FreeBSD as of late, and HAMMER seems like a very viable alternative.

However, I would love if you would add references to your points. Right now it's just a bunch of assertions, with very little extra info.



> HAMMER has high focus on data integrity.



So does ZFS.

Here are the slides of a talk on the subject I attended a while ago.


----------



## usdmatt (Jan 2, 2015)

I don't really see how the code being CDDL-licensed is an "ugly" feature of ZFS when comparing file systems for use. What problem does that actually create for people who are looking for a file system to store their data on? The same goes for the point about Sun/Oracle: what issue does that actually cause for users of OpenZFS?

Do you have any examples of problems with AMD as well? A quick search didn't really turn up anything obvious, and both Intel and AMD processors use the same AMD64 version of FreeBSD, using the same code. So unless AMD processors have major bugs, I don't see why ZFS should have any specific problems on them.

I also don't really like the "ZFS needs ECC" arguments. ZFS needs ECC no more than any other file system; it's not like the developers specifically made it rely on ECC features. With any file system, errors can happen in RAM which then get written to disk. Because one of the major ZFS "selling points" is that it has full data integrity, it's advised that users use ECC for production systems. Otherwise you run the risk of people thinking their data is 100% protected when there is still actually a small chance of corruption. With other file systems, no real guarantee of integrity is given in the first place. If HAMMER is supposed to provide great integrity, then I suspect ECC is advised there too.

Also, I believe the OpenZFS project is fairly active. They've just had their developer summit, and the original primary developer from Sun is still heavily involved. I think at the moment, though, the priority is more infrastructure work: building a solid API, making the code differences between ports smaller, etc., stuff that doesn't really make any obvious changes for users.

I would like to see built-in encryption, which does not exist at the moment. I can't see it happening any time soon, though. The common way of doing it at the moment is with GELI, which is a lot more hassle than just using a dataset property like on Solaris.
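For reference, the GELI-under-ZFS approach looks roughly like this; the device and key file paths are examples only:

```shell
# Hypothetical device and key file names; run as root on FreeBSD.

# Initialize and attach a GELI provider (AES-XTS, 256-bit key length):
geli init -e AES-XTS -l 256 -K /root/keys/da1.key /dev/da1
geli attach -k /root/keys/da1.key /dev/da1

# Build the pool on top of the encrypted provider:
zpool create tank /dev/da1.eli

# After a reboot, every provider must be re-attached (passphrase and/or
# key file) before the pool can be imported -- the per-dataset property
# approach on Solaris avoids this extra layer.
```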

ZFS is also very complicated; there's not much chance of recovery if there is a problem, other than from backup. It does also seem to get slower over time as the data gets more and more fragmented. That makes scrubs slower as well, since the scrub has to keep jumping around the disk to read the data in sequence. I'm not sure what the answer is for that, other than rebuilding the pool every couple of years or so.

HAMMER seems intriguing, but it still seems too new; it hasn't had anywhere near the use and testing of ZFS, so I'm not sure how much to trust it. It seems development has now moved on to v2, which has yet to appear and has a very long list of desired features, which worries me. Overall, DragonFly BSD just doesn't seem to have the user base or support for me to be happy using it for anything serious. And are there any sources to back up the "phenomenal SQL performance" or it "beating the pants off everything else around"?


----------



## usdmatt (Jan 2, 2015)

> Your claim about RAM is pure BS. Please tell the people/customer service of FreeNAS/TrueNAS that you are running FreeNAS/TrueNAS with 2GB of RAM and let me know what their reaction is.



Pure BS? What's BS about him saying he runs ZFS systems with 2GB of RAM? I also have a backup system with 1.5TB of data and 2GB of RAM; it runs fine. Is that BS as well? And what does it have to do with the FreeNAS/TrueNAS devs? They will obviously prefer more RAM, especially in relation to their enterprise TrueNAS kit, but that doesn't mean it doesn't work, or that anyone saying they run it with less is making it up.

You say you have little knowledge of either and want to create a fair comparison, but you seem quite biased towards HAMMER.

My main issue with HAMMER is that with file systems you can't just produce code with a huge range of features, including complex stuff like clustering, and just say "there you go, it's better than everything else". ZFS has been through 10 years of massive use, and thousands of issues have been found and fixed. I don't see that HAMMER has been through that, or that it ever will.


----------



## Oko (Jan 2, 2015)

usdmatt said:


> You say you have little knowledge of either and want to create a fair comparison, but you seem quite biased towards HAMMER.


"Quite biased" is a little too heavy a phrase for somebody who runs five large ZFS/FreeBSD file servers and has decommissioned the only DF-based production machine at work. At the moment I decommissioned our only DF production machine, even LDAP authorization was not possible on DF, so it was not realistically usable on a serious file server. In the meantime DF got the infamous PAM modules, so I was the first one to actually get LDAP working. Please check users@dragonflybsd.org.



usdmatt said:


> My main issue with HAMMER is that with file systems you can't just produce code with a huge range of features, including complex stuff like clustering, and just say "there you go, it's better than everything else". ZFS has been through 10 years of massive use, and thousands of issues have been found and fixed. I don't see that HAMMER has been through that, or that it ever will.


Please see the "ugly" list. HAMMER1 is a DEAD END. All your points are valid, no question about it. HAMMER2, for all I know, might never be completed, so I don't even want to go there. Once HAMMER2 is finished (if ever), tested, and the DF code base stabilized, we can try to compare ZFS and HAMMER2 in an enterprise environment. Until then, I think your bank and mine will continue to use ZFS.

However, a SOHO file server based on HAMMER might be a viable option, in particular if cost is a matter of great concern. I will repeat this, and you may call me biased: DF feels smoking fast. It has a brand-new, incredibly fast network stack. They have their own super-fast implementation of NFSv3 (Matt even said that they are not interested in NFSv4), and there are some other pieces which a serious hobbyist might appreciate.


----------



## usdmatt (Jan 2, 2015)

I would be interested to try NFSv3 on DragonFly. NFS really does feel a bit of a mess on FreeBSD to me at the moment. There are various versions of the daemon now, all with their own problems. You only have to look around this forum to find people who are struggling just to get the right version of NFS to run in the first place. I also think there are a few projects that outright dislike NFSv4. Didn't one of the OpenBSD devs say they don't want to touch it?

One of the big plus points for HAMMER to me was that it was designed from day one to be a lightweight contender for the OS's default file system. ZFS was designed to allow Sun to create NetApp-style storage servers with NetApp features, and while it does work on the smaller end, it does seem much more suited to larger-scale storage systems. HAMMER1 seemed to be a good default file system. Of course that's dead now, and with all the features planned for v2, I wonder if that's going to end up with similar problems running on the low end?

Depending on the functionality, and how well it works, the clustering could be a big win for HAMMER2. I'd love a BSD supported file system where I can just throw some servers together and have clustered, resilient storage. At the moment we have to make do with stuff that's been hacked over from Linux.

We'll have to see how it develops. I've done very little research on DragonFly BSD, but last time I looked (probably 3+ months ago) I couldn't find much decent information about HAMMER2 at all, other than a post by Matt about the planned features and an "I've started, but it actually working is a long way off" message.

Edit: this is what OpenBSD said about NFSv4. I don't know if they've changed their mind since then.


> Not everyone was happy with the new protocol. In 2010, OpenBSD's Theo de Raadt wrote: "NFSv4 is not on our roadmap. It is a ridiculous bloated protocol which they keep adding crap to."


----------



## wblock@ (Jan 2, 2015)

Many people report running ZFS with as little as 1G of RAM.  FreeNAS is a canned package and they recommend what they feel works best with their system.  And we're long past the days when 8G was an outlandish amount of system memory for running large RAID arrays.  Consider also that ZFS takes over RAID controller functions, so memory is not an additional cost.  Instead of expensive RAID controllers, you buy system memory and cheaper JBOD controllers.

So look at it from the other direction: what mature, modern filesystems are available that do not lock the user into a single operating system?  ZFS, and, well, that's it.


----------



## wblock@ (Jan 2, 2015)

Oh, and I thought the database issue had been addressed: https://www.kib.kiev.ua/kib/pgsql_perf_v2.0.pdf
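The usual mitigation discussed in material like those slides is matching the dataset record size to the database page size. A hedged sketch, with a hypothetical dataset name and PostgreSQL's 8K pages assumed:

```shell
# Hypothetical dataset name; run as root.
# Match the ZFS record size to the 8K PostgreSQL page size to avoid
# read-modify-write amplification on random database I/O:
zfs create -o recordsize=8K tank/pgdata
```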


----------



## jrm@ (Jan 2, 2015)

Oko said:


> Samples of code (how do you do for example native ZFS pool encryption) are especially appreciated.


For people interested in native ZFS pool encryption see Pawel Jakub Dawidek's interview on BSDNow Episode 62, specifically at 14:15.



Oko said:


> Your claim about RAM is pure BS. Please tell the people/customer service of FreeNAS/TrueNAS that you are running FreeNAS/TrueNAS with 2GB of RAM and let me know what their reaction is.


I have no idea what sort of defaults FreeNAS or TrueNAS use, but in my experience, running ZFS on somewhat older desktops and laptops with less RAM (I've never gone below 3 GB) isn't a problem. I'm not talking about several-TB drives with deduplication turned on, and I do some tuning.



usdmatt said:


> I also don't really like the "ZFS needs ECC" arguments. ZFS needs ECC no more than any other file system; it's not like the developers specifically made it rely on ECC features. With any file system, errors can happen in RAM which then get written to disk. Because one of the major ZFS "selling points" is that it has full data integrity, it's advised that users use ECC for production systems. Otherwise you run the risk of people thinking their data is 100% protected when there is still actually a small chance of corruption. With other file systems, no real guarantee of integrity is given in the first place. If HAMMER is supposed to provide great integrity, then I suspect ECC is advised there too.


This is basically what Matt Ahrens said to a group of us asking questions after his BSDCan 2014 talk.


----------



## protocelt (Jan 3, 2015)

jrm said:


> I have no idea what sort of defaults FreeNAS or TrueNAS use, but in my experience, running ZFS on somewhat older desktops and laptops with less RAM (I've never gone below 3 GB) isn't a problem. I'm not talking about several-TB drives with deduplication turned on, and I do some tuning.



Agreed. The needs of a business are not always the same as the needs of a user running ZFS on a couple of servers at home. A home user often doesn't have the expectation or option of customer support, and may not need or even desire all the features ZFS offers. As has been said already, ZFS can and does work fine with <8GB and even <4GB of RAM, with some tuning when needed, depending on the features required by the specific use case. It runs fine on my laptop with 4GB of RAM with no tuning at all.

I am eager to check out HAMMER2 when it is finished though.


----------



## Oko (Jan 3, 2015)

I missed a few questions directed at me, so I will try to answer them here.



usdmatt said:


> If HAMMER is supposed to provide great integrity, then I suspect ECC is advised there too.


Actually, ECC RAM is advisable for HAMMER as well; Matt has never been shy to say that HAMMER requires serious resources. It just requires less RAM than ZFS. On the other hand, I am surprised that people take this fact so hard. Look at it this way: with ZFS you need a $100 HBA controller, while with DF you need a $600-$700 hardware RAID card. A difference of $600 can buy more than enough RAM for ZFS.




usdmatt said:


> Overall DragonFlyBSD just doesn't seem to have the user base or support for me to be happy using it for anything serious.


Even though Matt keeps repeating that he likes to keep it small, this was the actual main reason I removed the DF server from production in my lab. I just felt overwhelmed by a slew of trivial issues which could not be addressed due to the tiny user/developer base.



usdmatt said:


> And are there any sources to back up the "phenomenal SQL performance" or it "beating the pants off everything else around"?


Probably not of the quality you would expect from a serious project. I am looking forward to new benchmarks; the last serious one came in 2012.


----------



## russoj88 (Jan 3, 2015)

I'm definitely not a filesystem expert and know next to nothing about HAMMER.

ZFS on PCBSD as of 10.1 is tuned to use very little RAM (relatively) in the install: http://blog.pcbsd.org/2014/11/pc-bsd-10-1-release-now-available/

I'm not trying to say this is applicable to large arrays, but I have a couple of laptops and desktops running PCBSD with low RAM usage. They run better now than before the tuning.

Here is another link to ZFS running on low memory with some tuning: https://wiki.freebsd.org/ZFSTuningGuide
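As an illustration of that kind of tuning, capping the ARC is the most common knob; the value below is only an example for a low-memory machine:

```shell
# /boot/loader.conf -- example only; size the cap to your workload.
# Limit the ZFS ARC so it cannot grow to consume most of RAM:
vfs.zfs.arc_max="512M"
```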


----------



## Jasse Jansson (Jan 3, 2015)

I was running OpenSXCE for a couple of years; I believe it was between 2004 and 2008. The machine had 4GB of RAM and used ZFS on a 72GB boot drive, a 120GB work drive, and two 1TB drives running as a ZFS mirror. This computer ran 24/7 and had to be rebooted every couple of weeks because ZFS never released its cache; it just filled up until everything was using swap.
If that issue has been resolved, then I will consider using it again.


----------



## gofer_touch (Jan 3, 2015)

What are you using as an alternative to ZFS?


----------



## gofer_touch (Jan 3, 2015)

Is there anyone out there using HAMMER in production then? 

The thread seems to have become more about defending ZFS's honor than providing additional comparisons and use case examples from the real world.


----------



## usdmatt (Jan 3, 2015)

Doesn't seem like it. HAMMER1 was developed for a while and then effectively dropped; I'm not sure exactly why (I don't know if it's still maintained as the current stable version?). After doing a bit more reading, HAMMER2 seems to now be part of recent DragonFly releases, but is not ready to be used.

So apart from HAMMER having some interesting features and being more lightweight, we may as well be comparing ZFS to the next big file system with a big list of features that doesn't actually exist yet.

The only thing that really goes in HAMMER's favour for me is if it's lightweight enough to truly be a drop-in replacement for UFS as a general-purpose file system. ZFS definitely still struggles with databases, and UFS is still a better choice in a lot of cases unless you really want ZFS features. Of course, if it is only ever available on DragonFly, I'll probably never get round to using it.


----------



## Crivens (Jan 3, 2015)

The point about ECC memory is, in my humble opinion, a second-order fact. When you host several terabytes of data, you will need a good deal of memory, no matter what file system or operating system you use. Since memory is likely to develop flipped bits, as was already cited, you need ECC memory when you have a lot of memory holding important data. When the data is only short-lived, as when you are mainly doing image manipulation in a batch job, you may go without it. Having cached file system metadata develop bitrot is bad, no matter what file system it is. That may be the reason why ZFS is mentioned together with ECC memory. Oh, and the possible uptimes of a machine come into it here as well.


----------



## Oko (Jan 3, 2015)

usdmatt said:


> Doesn't seem like it. HAMMER1 was developed for a while and then effectively dropped; I'm not sure exactly why (I don't know if it's still maintained as the current stable version?). After doing a bit more reading, HAMMER2 seems to now be part of recent DragonFly releases, but is not ready to be used.


That is FUD! HAMMER1 has been feature-complete and stable since 2008. It is the default file system of the DF OS and is used for the root partition. No new features are planned/possible due to the original design limitations. Now, one may argue that DF is a research OS which is pretty volatile for enterprise environments, and that very few people are using it. That is true, but there is nothing unstable about HAMMER1. HAMMER1 is as stable a product as ZFS is (most likely not as well tested, due to the smaller user base). HAMMER2 has been in the works for 2 years and is not even close to being ready for testing, let alone for anything else.


----------



## Jasse Jansson (Jan 3, 2015)

gofer_touch said:


> What are you using as an alternative to ZFS?


Don't get me wrong, I never had any problems with ZFS (running OpenSXCE), apart from it never letting go of cache RAM.

I have been relying on a cheap Buffalo NAS since the last near-disaster 3 years ago.
A recent near-disaster with the NAS has kicked me in the rear to build a better solution for long-term storage.
I'm currently searching the internet, checking what the alternatives are; I just chipped in with my experience of ZFS.


----------



## usdmatt (Jan 3, 2015)

Sorry, I took that mainly from your own words:


> HAMMER1 is a DEAD END.



Is v2 supposed to replace the original, or is v1 being kept as a complete, stable file system? How much effort is being put into ongoing support for v1 now that the devs have moved their attention to a completely new version?

I may have taken this thread slightly off topic. My original concern was just that there were a lot of unfounded plus points for HAMMER ("amazing performance"/"beats the pants off everything", etc.). Half the ZFS down points were also unfounded or irrelevant, and people attempting to correct them were either asked for solid proof* or outright called liars.

*Fair enough, but I don't see much proof for a lot of the HAMMER claims.


----------



## scottro (Jan 3, 2015)

Well, as it's already off topic, 

Every time I see this thread
Got that song stuck in my head
My my Dragonfly
Apple of Matt Dillon's Eye
As for me must confess
I use a lot more ZFS,
But I think it ain't no crime 
if you say 

STOP!
Hammer Time!

(OK, I'm done now--but I can't believe I'm the only one who keeps seeing the subject and thinking of You Can't Touch This).


----------



## Oko (Jan 3, 2015)

usdmatt said:


> Sorry, I took that mainly from your own words:


I got it. My bad. The fact that English is not my native tongue clearly shows. What I meant by "dead end" is that no new features are being planned for or added to HAMMER1, or are even possible, due to its B-tree design limitations. But this is also true, for example, of FFS on OpenBSD. HAMMER1 bugs are fixed regularly when found. Check for yourself:

http://gitweb.dragonflybsd.org/dragonfly.git



usdmatt said:


> Is v2 supposed to replace the original, or is v1 being kept as a complete, stable file system? How much effort is being put into ongoing support for v1 now that the devs have moved their attention to a completely new version?


I am not sure anybody knows what the future holds for HAMMER1 once HAMMER2 gets released. HAMMER2 is meant to be a complete, stable, separate file system. It has a well-defined list of objectives:

http://leaf.dragonflybsd.org/mailarchive/users/2012-02/msg00020.html

The project is significantly behind the original schedule, so I don't want to think about HAMMER2.
I recall vividly that Pawel Jakub Dawidek publicly expressed his doubts that Matt Dillon could pull off something like HAMMER1 and write a file system on his own, without corporate backing. He did, and I hope he will do it one more time with HAMMER2. Matt and I are about the same age, and definitely closer to the end of our lives than to the beginning. I learned C programming using his C compiler on the Amiga. He was 16 and I was 16. I really want him to pull this one off.



usdmatt said:


> I may have taken this thread off topic slightly. My original concern was just that there were a lot of unfounded plus points for HAMMER ("amazing performance"/"beats the pants of everything",etc). Half the ZFS down points were also unfounded or irrelevant and peoples attempting to right them were either asked for solid proof* or outright called liars.
> 
> *Fair enough but I don't see much proof for a lot of the HAMMER claims.



I am by no stretch of the imagination a serious expert on file systems, and I doubt you will find such people hanging around here. I started the thread out of my personal frustration with the lack of any comparison between these two file systems, and out of my hope that other users like myself who have been exposed to both would fill in the missing pieces. This forum is probably the only place where you can find people who have been exposed to both file systems.

I have no idea why people took, for example, my statement that ZFS needs lots of RAM so hard. I have a hard time seeing why people feel they have to defend ZFS, or their choice to use it, so vigorously. The coolest thing about HAMMER and DF is that it is a labor of love. Matt got rich during the dot-com boom. He and a handful of other like-minded guys hack on it in their spare time. As a curious person I was always fascinated by what they were doing, and I tried using their labor of love at work. It didn't go quite the way I wanted the first time, but I am sure I will try again. Since my kids have to eat every day regardless of whether Monit works on DF or not, I use enterprise-tested technology, ZFS+FreeBSD, at my workplace. ZFS is pretty good, you know. It is better than playing with XFS and mdadm.

DF guys are not selling anything; they are not trying to compete with Oracle, or with FreeBSD for that matter. People who think that FreeBSD and ZFS are the best thing since sliced bread should ignore this thread. People who work on large server farms probably should ignore this thread as well, and so should people who take forum posts too seriously.

That leaves me with a target audience of like-minded geeks who suffered cabin fever like myself, hopefully for different reasons than mine (I had terrible bronchitis over the past 10 days, which prevented me from skiing with my children over the holidays).


----------



## phoenix (Jan 6, 2015)

Oko said:


> You also need a really good Intel processor.



Absolutely, completely, and totally false, FUD, etc.  ZFS runs reliably on any x86 processor, whether that be Intel, AMD, Via, or anybody else.

> Linux implementation relies on FUSE.

No, it doesn't, and it hasn't in a long time.  ZFS is available as a kernel module for Linux, and runs virtually the same on Linux as any other filesystem/volume manager. No FUSE required.

> Well known database degradation (not suitable for keeping SQL databases).

Not true.  It requires some tuning, but ZFS runs SQL databases just fine.

> No volume growing, at least in the FreeBSD version.

You can't add drives to a raidz vdev.  But you *can* add storage space to a raidz vdev by replacing each drive in turn, and then running `# zpool online -e` for each drive in the vdev.  You can also add more raidz vdevs to a pool to increase the total amount of storage in the pool.
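As a sketch, the replace-and-expand procedure described above might look like this (the pool name `tank` and the device names are made up for illustration):

```shell
# Replace each drive in the raidz vdev in turn, waiting for the
# resilver to finish before touching the next one:
zpool replace tank da0 da4     # swap in the first larger drive
zpool status tank              # wait until resilvering completes
# ...repeat for the remaining drives in the vdev...

# Once every member drive is larger, expand each device to use
# its full capacity:
zpool online -e tank da4
```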

> Upstream is dead. I don't know about you but OpenZFS doesn't inspire lots of confidence in me.

Not even close.  Upstream is very much alive, and patches and features are flowing bi-directionally between FreeBSD and OpenZFS.  The mailing lists are quite active, and development is happening as we speak.


----------



## phoenix (Jan 6, 2015)

Oko said:


> If OpenZFS is a future what is their tier one development platform? *illumos *, FreeBSD, or God forbid Linux on which ZFS feels very awkward?



Illumos is the primary development platform for OpenZFS, and is the ultimate gatekeeper of "what is OpenZFS".

However, all stakeholders that support ZFS (Solaris-derivatives, FreeBSD, Linux, MacOS X, etc) are free to develop and implement features.  Then, once they are stable, to submit them upstream, which then makes it available to all downstream systems.  It's not a perfect system, but it's working quite nicely.  For example, there are a handful of features that were originally developed on Linux that are now part of OpenZFS upstream, and have been imported into FreeBSD.

Solaris ZFS and OpenZFS are not related in any way other than they both support the old-school ZFSv28 features.  Once you enable feature flags (ZFSv5000), you are running OpenZFS and can ignore anything and everything Solaris/Oracle ZFS related.


----------



## vermaden (Jan 7, 2015)

phoenix said:


> Linux implementation relies on FUSE.
> No, it doesn't, and it hasn't in a long time.  ZFS is available as a kernel module for Linux, and runs virtually the same on Linux as any other filesystem/volume manager. No FUSE required.



No FUSE required, but ZFS on Linux lags behind 'Upstream' OpenZFS in features (in ZFS *feature flags *to be precise):
http://blog.vx.sk/archives/44-OpenZFS-Feature-Flags-Compatibility-Matrix.html

Also from *https://twitter.com/ahl/status/543064559301300225*:
_"teaser: *OpenZFS device removal has just landed in our repository*; looking forward to seeing it upstreamed!"_


----------



## jrm@ (Jan 7, 2015)

Here is a short blog post about the 2014 OpenZFS Developer Summit.  More specifically, it's about one speaker's thoughts on OpenZFS on Illumos versus Linux.  It's a little off-topic for this thread, but I post it here because it shows (and links to information that shows) OpenZFS is alive and well and "growing".


----------



## gpatrick (Jan 10, 2015)

Oko said:


> It looks Solaris is dead and not marketed at all.


You post one flippant comment about OpenZFS being dead upstream, then go on and make another bizarre statement about Solaris being dead.  The most distant galaxy, z8_GND_5296, is probably closer than these two absurdly incorrect comments.


----------



## Crivens (Jan 11, 2015)

gpatrick said:


> The most distant *known* galaxy, z8_GND_5296, is probably closer than these two absurdly incorrect comments.


(Ahh, astronomy, I like it!) But nitpicking aside, I would like this thread to stay civilized. It contains valuable information and will likely continue to do so. So, in order to reduce the chance of this thread being closed for being closer to bar-bragging than forum rules allow, I would ask all participants to calm down and carry on.

Okay?


----------



## formateur_fou (Jan 11, 2015)

This thread has been discussed on the last BSD Now episode:
http://www.bsdnow.tv/episodes/2015_01_07-system_disaster
Well done Oko (and other members) for making it a good article to read.


----------



## gofer_touch (Mar 13, 2015)

After running DragonflyBSD/HAMMER1 for the past two months (and FreeBSD/ZFS for far longer) I have a couple of items to add to the list that might be beneficial to the discussion.

HAMMER1 (+)

- Native encryption using dm_target_crypt, tcplay and libdm
- No filesystem performance penalty when "pools"/file systems are >80% full
- ZFS's ZIL/L2ARC equivalent (swap-cache) does not need to be integrated into storage pools and can be removed, upgraded or resized at any time as your system needs change. Under ZFS you'll need to destroy your pool and restore from backup in order to accomplish this.
- Surprised no one mentioned this before: offline and online data block deduplication with very spartan RAM requirements


----------



## Terry_Kennedy (Mar 14, 2015)

gofer_touch said:


> ZFS's ZIL/L2ARC equivalent (swap-cache) does not need to be integrated into storage pools and can be removed, upgraded or resized at any time as your system needs change. Under ZFS you'll need to destroy your pool and restore from backup in order to accomplish this.


The ZIL has been removable since zpool v28, which has been in FreeBSD for some years now.


----------



## gofer_touch (Mar 14, 2015)

Ahh, fair enough. Thanks for the correction. I wasn't aware of this. But that brings me to another item then:

Let's say we build a NAS box consisting of 2 HDDs.

Using ZFS you can create a pool across both disks, or on either of the single disks by themselves. Let's say you create a pool on one disk and add a ZIL to that pool, and then subsequently create a second pool on the remaining disk. It seems to me that you would only get the benefit of the ZIL while writing to the first pool (unless you have some sort of mirror set up).

Under HAMMER, regardless of how you arrange your disks, you'd get the benefits of swapcache across all the disks/filesystems in the system, even if newer disks are added later on. This is because swapcache caches metadata and filesystem data for all filesystem transactions in the system, not just for selected ones it has been configured for. If I am wrong I am willing to stand corrected, of course.


----------



## rusty (Mar 14, 2015)

I don't think you are wrong but the ZFS example you gave sounds like a really bad pool layout - especially when one of the main features is self-healing.


----------



## Oko (Mar 15, 2015)

gofer_touch said:


> After running DragonflyBSD/HAMMER1 for the past two months (and FreeBSD/ZFS for far longer) I have a couple of items to add to the list that might be beneficial to the discussion.
> 
> HAMMER1 (+)
> 
> ...



That is a very informative post. Interestingly enough, I am using more and more ZFS and FreeBSD at work (right now I am running five big rigs and adding more). I am also becoming more and more familiar with ZFS gotchas. I played last week with ZFS replication to a remote server, and the thing is magical as long as you have the right hardware.

On the other hand, I am starting to realize that the whole ZFS vs HAMMER 1 thread is a little ridiculous, as it really compares two different things. HAMMER 1 is just a file system, while ZFS is much more than that (softraid + LVM + file system).

At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with enough HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER 1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.


----------



## protocelt (Mar 15, 2015)

Oko said:


> At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with enough HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER 1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.


 I have been interested in trying out DragonflyBSD and HAMMER when I have more time. It seems to be a good midpoint between UFS and ZFS from comments I've seen in this thread as well as what I've read elsewhere so far. I have to personally disagree with your assertion that ZFS has no use for a home user though. You don't have to run RAID-Zn to get a benefit from ZFS. I use it simply for snapshots and data integrity on my desktop using multiple mirror vdevs. I feel better copying my data to a backup server from a ZFS system. Backup copies are no good if the data is corrupted before transit to the backup target. ZFS can help mitigate that better than any other production quality file system. Also keep in mind users who are using FreeBSD, or any BSD for that matter as a desktop/workstation, are not your general desktop user a large part of the time.


----------



## gofer_touch (Mar 15, 2015)

Oko said:


> At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with enough HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER 1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.



Interesting point. But there are lots of home users who stand to benefit from using ZFS at home. Second-hand ProLiant MicroServers, for example, can be had for less than $200, and that is with ECC RAM. Disks are cheap, so it's easy to build a DIY NAS that takes advantage of most of ZFS's features, such as RAID-Z2 (probably not deduplication though), for less than $500. ZFS scales as well for the home or small business user as it does for big enterprise.


----------



## Oko (Mar 15, 2015)

gofer_touch said:


> Second hand Proliant micro servers for example can be had for less than $200 and this is with ECC RAM.


I only see ProLiant MicroServers at around $400, but I saw this new file server case for $80:

http://www.ebay.com/itm/1U-2x5-25-1...635?pt=LH_DefaultDomain_0&hash=item5b0e5d1aab


Definitely a good start for a DragonFly home file server. Adding 2x2TB HDDs, a CPU, and RAM should bring the total to under $300.


----------



## phoenix (Mar 16, 2015)

gofer_touch said:


> Ahh, fair enough. Thanks for the correction. I wasn't aware of this. But that brings me to another item then:
> 
> Let's say we build a NAS box consisting of 2 HDDs.
> 
> Using ZFS you can create a pool across both disks or on either of the single disks by themselves. Let's say you create a pool on one disk and add a ZIL to that pool. Then you subsequently create a second pool on the remaining disk. It seems to me that you would only get the benefit of the ZIL while writing to the first pool only (unless you have some sort of mirror set up).



Partition the SSD in two, and then assign each partition to a separate pool as a LOG device.
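A minimal sketch of that layout, assuming the SSD shows up as `ada2` and the two pools are named `pool1` and `pool2` (all names hypothetical):

```shell
# Carve the SSD into two partitions for the two intent logs:
gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 8G ada2    # becomes ada2p1
gpart add -t freebsd-zfs -s 8G ada2    # becomes ada2p2

# Attach one partition to each pool as its separate LOG device:
zpool add pool1 log ada2p1
zpool add pool2 log ada2p2
```

The same pattern works for CACHE (L2ARC) devices by substituting `cache` for `log`.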

Same for CACHE devices.

You can't share individual vdevs between pools (which makes sense).  But there's nothing stopping you from sharing physical disks between pools.


----------



## phoenix (Mar 16, 2015)

Oko said:


> That is very informative post. Interestingly enough I am using more and more ZFS and FreeBSD at work (right now I am running five big rigs and adding more). I am also becoming more and more familiar with
> ZFS gotchas. I played last week with ZFS replication to remote server and the thing is magical as long as you have the right hardware.
> 
> On another hand I am starting to realize that the whole thread ZFS vs HAMMER 1 is little ridiculous as it really compares two different things. HAMMER 1  is just a file system while ZFS is much more than that (softraid+LVM+file system).
> ...



Why would you need to spend thousands of dollars to make ZFS worthwhile?

My home server runs ZFS.  Originally with 4x 160 GB IDE drives in raidz1.  Then with 4x 250 GB SATA drives in raidz1.  Currently with 4x 500 GB SATA drives in two mirror vdevs.  Works wonderfully as a home media server running Plex, storing our photos and files, and centralising resources (disk, printers, accounts, etc).  Over the years, I may have spent over $1000 CDN on the server, but that's going through 3 different motherboards/CPUs/RAM, multiple disks, multiple controllers, etc.


----------



## gofer_touch (Mar 17, 2015)

Oko said:


> I only see Proliant micro servers of around $400 but I saw this new file server case for $80
> 
> http://www.ebay.com/itm/1U-2x5-25-1...635?pt=LH_DefaultDomain_0&hash=item5b0e5d1aab
> 
> Definitely good start for DragonFly home file server. Adding 2x2TB HDD CPU and RAM should be all together under $300.



You'd be surprised at what you can get at state auctions!

A DIY box that is supported under Dragonfly has been put together by Matt Dillon here - http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=19110647


----------



## Noel Robichaud (May 16, 2015)

This thread spiralled into crap.   I was hoping to read further about this comparison.  However I don't use storage the same way that the use cases here indicate.

I'm a firm believer in disaggregation of the SVC and storage, unless you're willing to shell out serious dollars for enterprise DS.

I guess I would prefer a comparison that would use the same hardware as say an IBM V7000 because essentially it is using FreeBSD w/UFS.


----------



## Beastie7 (May 16, 2015)

Noel Robichaud said:


> This thread spiralled into crap.   I was hoping to read further about this comparison.  However I don't use storage the same way that the use cases here indicate.
> 
> I'm a firm believer in disaggregation of the SVC and storage, unless you're willing to shell out serious dollars for enterprise DS.
> 
> I guess I would prefer a comparison that would use the same hardware as say an IBM V7000 because essentially it is using FreeBSD w/UFS.



If you're looking for an in-depth CompSci analysis of each file system, I'd read the whitepapers on HAMMER's implementation, then watch this (and parts 2 and 3), then draw your own conclusions based on use cases.

Jeff Bonwick and Bill Moore (storage gods) give a very thorough, detailed explanation of ZFS. It's a good watch.


----------



## hedwards (Sep 19, 2015)

Noel Robichaud said:


> This thread spiralled into crap.   I was hoping to read further about this comparison.  However I don't use storage the same way that the use cases here indicate.
> 
> I'm a firm believer in disaggregation of the SVC and storage, unless you're willing to shell out serious dollars for enterprise DS.
> 
> I guess I would prefer a comparison that would use the same hardware as say an IBM V7000 because essentially it is using FreeBSD w/UFS.



I agree about the thread. The main issue I'm seeing with ZFS is that grub2 doesn't play well with it, and I haven't seen anybody with a working solution, or at least not one that works under 10.2. Actually, it's one of the big reasons why I'm coming back to FreeBSD. Having all my stuff on ZFS, with self-healing, solves a lot of my personal data corruption concerns.

To the person wondering about using this at home, the big reason to use ZFS at home is self-healing. I've got two 1 TB disks, which were probably $150 between the two of them, and I can do my install to those. I'd love to have ECC, but this is for home use and the data gets backed up anyway.

Hammer is something that I'm curious about, but ZFS is quite good and is already available on FreeBSD. Even early on, ZFS was working pretty well.


----------



## gofer_touch (Sep 19, 2015)

Were you using HAMMER previously? Can you comment on whether the historical access functionality offers some protection against corruption? For example if a file gets corrupted on disk is it possible to roll back to a version before the corruption happened?


----------



## Oko (Sep 24, 2015)

I am resurrecting this infamous thread in order to preserve some of my personal findings as a consumer of the ZFS and HAMMER file systems. I will try to stick to technical details as much as possible. Just to be clear, HAMMER means HAMMER1, which exists and is fully functional. I am not going to speculate about HAMMER2, which is in the works.

The purpose of a file system is to keep your data. In this post I will try to address the following points typically encountered in production:

1. Protection against data corruption
2. Journaling
3. Backup and recovery
4. Inquiry
5. Monitoring
6. Alerting

*ZFS* is a combined file system and logical volume manager originally designed by Sun Microsystems. *HAMMER* is a file system written for DragonFly which provides instant crash recovery (no fsck needed!).

ZFS is designed for large data centers where people live by high availability and redundancy. Redundancy means that the data is typically stored on a volume consisting of multiple physical HDDs in such a fashion that the malfunction of a single drive, or even several drives, doesn't affect data consistency and availability. The classical approach to this problem is hardware or software RAID, and in that respect one can think of ZFS as a software RAID. The following RAID disciplines are available for ZFS: mirror, RAID-Z, RAID-Z2, and RAID-Z3. In layman's terms, one typically picks 6, 7 or more drives and combines them using ZFS into a single volume, which in ZFS lingo is known as a ZFS pool. Those drives are physically attached to the computer with a Host Bus Adapter and exposed to ZFS as JBOD (Just a Bunch Of Disks). In typical deployments, file servers with multiple ZFS pools as large as 40-50 TB are common. Hardware RAID cards should not be used with ZFS, even if they support JBOD mode. ZFS pools are pretty trivial to monitor, and FreeBSD has excellent integration with the S.M.A.R.T. daemon. ZFS on FreeBSD is, hands down, an enterprise-grade product. ZFS pools are portable and easy to import from computer to computer, and even across OSes, though one has to be mindful of the version of ZFS: the Linux version is older than the FreeBSD one, so a ZFS pool created on FreeBSD may not be importable into Linux. It is possible to use a ZFS volume as an iSCSI target. FreeBSD does support ZFS volume growth.

On the other hand, HAMMER is just a file system. That means that if one wants a large logical volume, one should be using HAMMER in combination with hardware RAID. Two brands of hardware RAID cards come to mind: Areca and LSI MegaRAID. Areca cards are supported on FreeBSD by the arcmsr() driver, while newer LSI MegaRAID cards are supported by the mfi() driver. I have not tested either of these two drivers on DragonFly BSD, and that is one of the things on my TODO list (I have high-end/$700 LSI MegaRAID cards in my lab). The immediate questions will be how one monitors those cards, and whether it is possible to pass the status of the HDDs to the SMART daemon. I am aware of two sets of tools to monitor LSI cards: mfiutil() and the proprietary sysutils/storcli. Areca cards should be supported even better than LSI, as they are open hardware. There is a proprietary tool, sysutils/areca-cli, for inquiry/monitoring of Areca cards; I am not sure if there is an open-source version. One would have to be very mindful of the support by the DF BSD community before using hardware RAID cards. I am not going to speculate how much testing is done with hardware RAID cards, but all DF RAID drivers come from FreeBSD. In my experience those drivers sometimes work, and sometimes not quite. DF BSD has spotty support for various monitoring tools, simply because of the size of the community. I am not aware of any special tool that can monitor the HAMMER file system itself. DF uses /dev/serno for drives, which enables volumes to be imported from one machine into another; I have not played with that feature enough. DragonFly BSD has support for the Linux Volume Manager. I am not sure if there is any integration between HAMMER and LVM. Theoretically one should be able to use LVM to grow a HAMMER file system; however, I have not seen any evidence on the DF mailing lists to support this statement. On the contrary, I have seen some of the main project contributors stating that HAMMER can't be grown.

Once you have a ZFS pool, or a HAMMER file system on top of hardware RAID, you will need to create ZFS datasets or HAMMER pseudo file systems (PFS for short). In that respect both systems are similar. A single ZFS pool might contain multiple ZFS datasets with different properties. The really cool features of ZFS include data compression; I personally like lz4. A HAMMER volume can also contain multiple PFSs with different properties (master/slave), but no nested PFSs. I think support for compression on HAMMER was in the works.
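To make the parallel concrete, here is a hedged sketch of creating both kinds of sub-filesystem (pool, dataset, and PFS paths are all made-up example names):

```shell
# ZFS: a dataset with its own properties inside an existing pool
zfs create -o compression=lz4 tank/projects
zfs get compression tank/projects

# HAMMER: a master PFS inside an existing HAMMER volume, plus a
# slave PFS tied to it by the master's shared UUID
hammer pfs-master /data/pfs/projects
hammer pfs-slave /backup/pfs/projects shared-uuid=<uuid-of-master>
```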

*Data Protection*

ZFS has copy-on-write, checksums, and consistency. Depending on the type of pool, multiple HDD failures are permitted; the RAID-Z3 discipline allows a pool to remain fully functional even when 3 HDDs are dead. Depending on the HBA, one could theoretically swap HDDs on a server which is up and running. ZFS has the ability to self-heal. In the past, IIRC, the FreeBSD version of ZFS did not support hot-spare HDDs; I am not sure if things have changed. I personally have the luxury of taking my server down to replace a failed HDD. That is also safer, because if you removed the wrong HDD you can shut the server down again and put the HDD back; nothing bad will happen to the ZFS pool (unlike Linux software RAID, which would not survive such surgery). ZFS performs continuous integrity checking and automatic repair.
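A typical failed-disk workflow, sketched with hypothetical pool and device names:

```shell
zpool status tank            # identify the FAULTED/UNAVAIL device
zpool replace tank da2 da6   # swap the failed da2 for the new da6
zpool status tank            # watch the resilver progress
zpool scrub tank             # optionally verify checksums pool-wide
```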

We should also talk about encrypting data. FreeBSD supports GELI full-disk encryption when creating ZFS volumes. Using GELI in depth is beyond the scope of this document.
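For the curious, the basic shape of GELI-under-ZFS looks like this (single-disk sketch; the device, key file path, and pool name are all invented for illustration):

```shell
# Initialize GELI on the raw disk; -s 4096 aligns to 4K sectors
geli init -s 4096 -K /root/da1.key /dev/da1
geli attach -k /root/da1.key /dev/da1

# Build the pool on the decrypted .eli device, not the raw disk
zpool create securetank /dev/da1.eli
```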

I should also write something about the log (ZIL) and L2ARC, and in particular address the use of ZFS on SSDs.

HAMMER is supposed to sit on top of hardware RAID. Theoretically, with a fully supported Areca or LSI card, DF BSD should be able to tolerate a 2-HDD failure. We should be able to have a hot-spare drive and to replace a failed HDD while the server is running; the hardware RAID is supposed to heal afterwards. I am not sure how that will work with HAMMER. I have lots of experience with Linux XFS on top of hardware RAID cards, and things work as advertised. Another interesting question is the ability of HAMMER to heal itself in the case of a damaged file system. One could think of a hardware RAID with a dead drive as a partially degraded volume. What happens once the RAID is healed? Will HAMMER self-heal and expand onto the replaced HDD? I am not aware of such a capability. On the other hand, I think continuous integrity checking in HAMMER is on par with ZFS.

DragonFly has a device mapper target called dm_target_crypt (compatible with Linux dm-crypt) that provides transparent disk encryption. It makes the best use of available cryptographic hardware, as well as multi-processor software crypto. DragonFly fully supports LUKS (cryptsetup) and TrueCrypt as disk encryption methods. tcplay is a free (BSD-licensed), 100% compatible TrueCrypt implementation built on dm_target_crypt.

DF features swapcache: managed SSD support. This DragonFly feature allows SSD-configured swap to also be used to cache clean filesystem data and metadata. The feature is carefully managed to maximize the write endurance of the SSD. Swapcache is typically used to reduce or remove seek overheads related to managing filesystems with a large number of discrete inodes. DragonFly's swap subsystem also supports much larger than normal swap partitions: 32-bit systems support 32G of swap by default, while 64-bit systems support up to 512G of swap by default.
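Enabling swapcache is roughly a matter of pointing swap at the SSD and flipping a few sysctls. The sketch below is based on my reading of swapcache(8); the device name is hypothetical and the exact sysctl names should be checked against your DragonFly release:

```shell
# /etc/fstab: dedicate an SSD partition to swap (serno path is an example)
# /dev/serno/XXXXXXXX.s1b   none   swap   sw   0 0

# Let swapcache cache clean metadata and file data on that swap SSD:
sysctl vm.swapcache.meta_enable=1
sysctl vm.swapcache.data_enable=1
```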


*Journaling *

Contrary to popular opinion in the Linux community, ZFS and HAMMER are the only existing file systems which support journaling. What is journaling? You accidentally delete a file or a whole directory; you would want to be able to pull that file/directory from a journal. Even better: let's suppose you alter the file in an undesirable fashion. It would be nice to revert the file to its original state. One can think of journaling as a version control system built into the file system.

ZFS supports journaling via periodic snapshots. Those are typically done as cron jobs. There is a multitude of tools in FreeBSD ports which can be used to take snapshots; I personally like sysutils/zfsnap, but people might prefer others. If you delete a file/dir before the snapshot is taken, too bad: you will not recover your file. In my lab we take snapshots every 3 hours during work days.
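The basic mechanics look like this (dataset name and schedule are examples, matching the every-3-hours routine described above):

```shell
# Take a dated snapshot of a dataset and list what exists:
zfs snapshot tank/home@$(date +%Y-%m-%d_%H%M)
zfs list -t snapshot tank/home

# A crontab entry for weekday snapshots every 3 hours
# (note the \% escaping cron requires):
# 0 */3 * * 1-5  root  /sbin/zfs snapshot tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)
```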

HAMMER also supports snapshots. The default installation takes snapshots via daily periodic scripts and keeps them in /var/pfs for sixty days. On top of that, HAMMER supports fine-grained journaling via history. That is absolutely the killer feature of HAMMER. HAMMER history is a fully functional version control system built into the file system. One can use the Slider port on DF as a front end to history. You have to see it to believe it: nothing is ever lost.
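Per-file history is queried directly from the command line. A rough sketch (the file path is invented, and the undo(1) flags should be verified against your release's man page):

```shell
# Show the transaction IDs at which this file changed:
hammer history /data/projects/report.txt

# undo(1) can list and retrieve those historical versions:
undo -i /data/projects/report.txt               # list available versions
undo -o report.old /data/projects/report.txt    # extract an older copy
```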

I should mention that ZFS and HAMMER journaling are both NFS- and Samba-aware, which in practical terms means that you can continue to use your Windblows or OpenBSD desktop (as in my case) and still have journaling. One should mention, though, that the DF people have given up on NFSv4, but also that their implementation of NFSv3 seems very robust and is the fastest I am aware of.


*Backup and recovery*

One typically uses ZFS replication to back up ZFS pools. Replication is of course network-aware. It is done in deltas and is extremely efficient. One can deduplicate blocks on the fly during ZFS replication, and one can also use additional file-system-level compression when sending deltas. Remote replicas of the file system are fully writable. Note, however, that snapshots are needed before you can replicate your system. Multiple targets are allowed. Remote replicas are fully functional remote clones.
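A minimal send/receive sketch, assuming hypothetical pool, dataset, and host names:

```shell
# Initial full replication of a snapshot to a remote machine:
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backuphost zfs recv backup/data

# Later, send only the delta between two snapshots:
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | \
    ssh backuphost zfs recv backup/data
```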

HAMMER uses hammer mirror-stream for backups. It is network-aware. One can have multiple targets (PFS slaves); those are not writable. Note that a slave PFS can be promoted into a master. However, one has to be aware of a problem with time: only t-time is preserved. PFS slaves are clones, but they are not fully functional until promoted into masters.
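The equivalent HAMMER workflow, sketched with made-up PFS paths and host name (the slave must share the master's UUID, as set up at pfs-slave creation time):

```shell
# One-shot incremental copy from a master PFS to a slave:
hammer mirror-copy /data/pfs/projects backuphost:/backup/pfs/projects

# Or run continuously, streaming changes as they happen:
hammer mirror-stream /data/pfs/projects backuphost:/backup/pfs/projects

# If the master is lost, promote the slave into a writable master:
hammer pfs-upgrade /backup/pfs/projects
```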

*Inquiry*

Making inquiries about a ZFS pool or a HAMMER FS is very easy, so in this respect both systems are enterprise-level. Examples:

`zpool status`

or

`hammer pfs-status /data`

*Monitoring*

FreeBSD has enterprise-level ZFS monitoring. IPMI, SNMP, and S.M.A.R.T. all work as expected. Tools like Nagios or collectd have plugins for ZFS monitoring, even for things like the L2ARC.

Monitoring in DragonFly BSD is challenging, to say the least. I was appalled that net-mgmt/collectd5 fails to compile on DF.

*Alerting*

(to be written)


*Miscellaneous remarks *

It is possible to use ZFS boot environments. One could use a ZFS mirror for the root partition. sysutils/beadm is a killer feature of FreeBSD and ZFS: it allows one to roll back to a pre-update/pre-upgrade, fully functional version of the OS in case something goes wrong.
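The beadm workflow before a risky upgrade looks roughly like this (the boot environment name is arbitrary):

```shell
beadm list                  # show existing boot environments
beadm create pre-upgrade    # clone the current system first
# ...run freebsd-update / pkg upgrade, reboot, test...
beadm activate pre-upgrade  # if it went wrong, activate the old BE
shutdown -r now             # and boot back into the known-good system
```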

DF BSD uses UFS for /boot. /boot is typically less than 1GB; the rest of the system is HAMMER. The DF installer doesn't support installation onto PFSs (master/slave), or for that matter onto a pair of disks. Personally I hold the view that a large file server running DF will have the OS installed on a small SSD and use hardware RAID or physical drives for data.

It is possible to use a ZFS volume as an iSCSI target. I have no clue what the state of iSCSI support on DF BSD is; I have not seen any evidence of iSCSI support in the HAMMER man pages.

One of my favorite FreeBSD features is jails. Tools such as sysutils/iocage enable great integration of jails and ZFS pools. Taking a hot ZFS snapshot of a jail and cloning it remotely is really cool. A similar tool is in the works for bhyve, which will also be really cool.

The DF jail infrastructure has not been touched for a long time. I am not aware of DF jails being able to take advantage of HAMMER.


*Final Remarks*

ZFS deduplication seems better and more stable than HAMMER deduplication; however, HAMMER boasts offline deduplication. ZFS deduplication requires tons of ECC RAM.

HAMMER also likes ECC, like any other OS intended to run 24/7/365. However, it is a great choice for cash-strapped people like me.

HAMMER is a 64-bit file system, while ZFS is a 128-bit file system. In practical terms, both systems should be used on 64-bit machines only.

ZFS is hampered by the CDDL license and the fact that Oracle is the ultimate gatekeeper of the technology. For example, native ZFS encryption is possible only with the Oracle versions of ZFS. HAMMER is a BSD-licensed file system.

ZFS is a no-brainer for large data centers (a couple hundred servers). Actually, everything considered, DF and HAMMER are not usable even in a small shop like mine (300-400 TB of data on a handful of file servers).

For home users, in particular those who have no more than 2-3 TB of data, keeping data on a pair of PFS mirrors is very tempting and probably much more cost effective than a similar FreeBSD setup. One should be mindful of the fact that ZFS, regardless of the number of HDDs, requires (or at least many people recommend) at least 16 GB of RAM. 16 GB is not that much, and most people will be OK with 8 GB or even less, but a similar DF rig with 2 GB of RAM will probably outperform a FreeBSD file server.

I would like to see a project like FreeNAS focusing on DF and HAMMER. I think such a project is unlikely before the HAMMER2 release and full stabilization of the DF base code. HAMMER2 looks like a radically new, advanced file system. It will be the first fully distributed file system: in practical terms, data will be spread over multiple master/slave PFSes in different physical locations connected over the network. The system will be able to self-heal even if one of those physical locations completely disappears from the face of the Earth.


----------



## usdmatt (Sep 25, 2015)

> Contrary to popular opinion in the Linux community, ZFS and HAMMER are the only existing file systems which support journaling. What is journaling? You accidentally delete a file or a whole directory. You would want to be able to pull that file/directory from a journal.


I think your description of a journal is completely different to the rest of the computer industry's. A journal is purely a log of in-flight changes being made to a file system so it can be quickly made consistent after an unclean mount. It has nothing to do with maintaining versions or history. This is more akin to the ZIL in ZFS, although ZFS should never be inconsistent on disk, even without replaying the ZIL. I'm intrigued how HAMMER handles consistency after a crash; apparently it's available immediately without fsck, but as far as I'm aware it (HAMMER1) isn't CoW?

Of course there's also Btrfs, which has ZFS-style snapshots. I'm no fan of it, but I believe it's fairly commonly used in Linux distributions now. It may even be the default in one or two?

ZFS replication actually isn't network aware (at least it doesn't seem to be, to me). Fortunately it ties in nicely with the UNIX 'many simple tools' ideology and allows you to pipe its output into nc/ssh/whatever in order to get the data stream to another system.



> I should mention that ZFS and HAMMER journaling are both NFS and Samba aware, which in practical terms means that you can continue to use your Windblows or OpenBSD desktop (as in my case) and still have journaling.


I'm not sure what you mean here by the journalling being NFS/Samba aware? I thought you were going to mention the ability to tie ZFS snapshots in with Windows/SMB "Previous Versions" (which is a bit finicky but pretty cool nonetheless).

Lastly, I'm no file system expert, especially with distributed file systems. In fact I've never used a distributed file system, so I'm not in any way trying to say you're wrong, but I'm genuinely intrigued: what will make HAMMER2 the first fully distributed file system compared to stuff that already sells itself as fully distributed, like Ceph or GlusterFS?


----------



## gofer_touch (Sep 25, 2015)

HAMMER is pretty fascinating and I wish it were available in FreeBSD as an option alongside ZFS. I also wish it were available across all the BSDs. I'm not sure I understand why NetBSD, for example, is working on a ZFS port when they are really in the embedded space... where HAMMER would excel.

The two file systems shouldn't be seen as competing against one another, but as very much complementary. They are different. HAMMER1 does employ an interesting set of features to avoid corruption altogether: it uses CRCs, REDO and UNDO. When you mount a disk and it reports a CRC failure, chances are your hardware is faulty rather than that a crash or bad shutdown borked your data.

For a home user (and even SMEs) with perhaps tens of terabytes of data, RAID isn't all that it's cracked up to be. It makes your setup more complex, introduces additional points of failure, uses more electricity and generates excess heat. Enterprise-quality large disks (6 TB+) with network mirroring and backups are plenty secure for the average shop.

HAMMER2 will completely change the way we think about data redundancy by way of a type of networked RAID. Live rebuilding of data on a failed disk using one or more networked mirrors from anywhere in the world! Or the continued functioning of a file server with a dead disk in one location because there are two more master mirrors in different locations that kick in to keep serving data... this is light years ahead of ZFS, which only really does single-system, single-machine redundancy and is not network aware.

I like both ZFS and HAMMER and think that the two should be seen as complementary tools in the tool kit.


----------



## Oko (Sep 25, 2015)

usdmatt said:


> I think your description of a journal is completely different to the rest of the computer industry. A journal is purely a log of in-flight changes being made to a file system so it can be quickly made consistent after a unclean mount. It has nothing to do with maintaining versions or history.


You are right. By that definition WAPBL is a journaling file system, even though both of us know you can't revert to an older file version on NetBSD. However, I am using the term "journal" in the context of different backup strategies. Perhaps the correct way to describe it is "built-in version control". Even that is an understatement, because HAMMER history is also used to protect you from file corruption: if a file gets corrupted, you just pull an older version from the history and that one will be OK.



usdmatt said:


> Of course there's also Btrfs that has ZFS-style snapshots. I'm no fan of it, but I believe it's fairly commonly used in Linux distributions now. May even be default in one or two now?


I am tired of hearing about Hurd and Btrfs. I work with Linux every day and RHEL has the old SGI XFS. Everything else is a crap-shoot. There are people in another building swearing by Ubuntu; I think they are stuck with Ext4. Frankly, based on my personal experience, HAMMER2 is far more complete than Btrfs. Get a snapshot of DF and try for yourself.



usdmatt said:


> ZFS replication actually isn't network aware (at least doesn't seem to be to me). Fortunately it ties in nicely with the UNIX 'many simple tools' ideology and allows you to chain its output into nc/ssh/whatever in order to get the data stream to another system.


You just send the stream through SSH and unpack it on the other side.
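A sketch of a full and then incremental replication over SSH (pool, dataset, and host names here are made up):

```shell
# Initial full replication of a snapshot to a remote pool
zfs snapshot tank/home@mon
zfs send tank/home@mon | ssh backuphost zfs receive -F backup/home

# Later: send only the changes between two snapshots
zfs snapshot tank/home@tue
zfs send -i tank/home@mon tank/home@tue | ssh backuphost zfs receive backup/home
```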



usdmatt said:


> I'm not sure what you mean here by the journalling being NFS/Samba aware? I thought you were going to mention the ability to tie ZFS snapshots in with Windows/SMB "Previous Versions" (which is a bit finicky but pretty cool none the less).


In my lab, the home directories of my users are physically located on a file server running FreeBSD with ZFS. Those home directories are mounted on the computing nodes via NFS. When one of our members accidentally deletes a file or does something stupid with it, they send me an e-mail and I pull the older version of their file from .zfs/snapshot. I am not using Samba, but I know people who do, and the same is true if you mount a home directory on a Windows machine.
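Recovering a file this way is just a copy out of the hidden snapshot directory; something like the following, where the user, snapshot, and file names are hypothetical:

```shell
# Snapshots appear as read-only directories under .zfs/snapshot
ls /home/alice/.zfs/snapshot/

# Copy the deleted file back from yesterday's snapshot
cp /home/alice/.zfs/snapshot/daily-2015-09-24/thesis.tex /home/alice/thesis.tex
```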

In the case of HAMMER you have the whole history, not just snapshots. Check out this picture

http://leaf.dragonflybsd.org/~sgeorge/PICs/Samba-hammer-snapshot-bkp-1.png

and the description of what is actually happening

https://www.dragonflybsd.org/docs/r...s__44___linux__44___bsd_and_mac_os_x_clients/




usdmatt said:


> Lastly, I'm no file system expert, especially with distributed file systems. In fact I've never used a distributed file system so I'm not in any way trying to say you're wrong, but I'm genuinely intrigued what will make HAMMER2 the first fully distributed file system compared to stuff that already sells itself as fully distributed like Ceph or GlusterFS?


I probably should have left that sentence out, as I knew it was going to play wrong with the PR offices of people who are more interested in sales than in innovation.


----------



## Oko (Sep 25, 2015)

gofer_touch said:


> HAMMER is pretty fascinating and I wish it were available in FreeBSD as an option alongside ZFS. I also wish it were available across all the BSDs. I'm not sure I understand why NetBSD for example, is working on a ZFS port when they are really in the embedded space...where HAMMER will excel.


The NetBSD guys don't work on anything. They started "porting" ZFS in 2007. They also threw around some hot air about HAMMER. Nobody in the NetBSD camp is interested in those technologies. For embedded they have WAPBL, which is really cool shit, and I wish it were available on vanilla OpenBSD (you have it on Bitrig). This is what I think about NetBSD.

http://daemonforums.org/showthread.php?t=8810





gofer_touch said:


> For a home user (and even SMEs) with perhaps tens of terabytes of data, RAID isn't all that its cracked up to be. It makes your set up more complex, introduces additional points of failure, uses more electricity and generates excess heat. Enterprise quality large disks (6 TB +) with network mirroring and backups is plenty secure for the average shop.


+1

I looked up and down at both the ZFS RAID-Z2 discipline and hardware RAID 6 for my new home file server, and I just said to myself: do you really need 6×3 TB of HDDs and all that mumbo jumbo to store 100 GB worth of kids' pictures and videos? Let's be real, everything else on my computer is replaceable.



gofer_touch said:


> I like both ZFS and HAMMER and think that the two should be seen as complementary tools in the tool kit.


+1


----------



## Beastie7 (Sep 25, 2015)

Meh, I'd rather see UFS extended with such features, or another storage protocol used for distributed computing, than fuss with another re-invention. That's just me though.

Like XFS/CXFS, for example.


----------



## vermaden (Sep 26, 2015)

Oko said:


> For home users, in particular those who have no more than 2-3 TB of data, keeping data on a pair of PFS mirrors is very tempting and probably much more cost effective than a similar FreeBSD setup. One should be mindful of the fact that ZFS, regardless of the number of HDDs, requires (or at least many people recommend) at least 16 GB of RAM. 16 GB is not that much, and most people will be OK with 8 GB or even less, but a similar DF rig with 2 GB of RAM will probably outperform a FreeBSD file server.


16 GB of RAM is not needed for ZFS; I used 512 MB on FreeBSD for a 2 TB ZFS mirror pool for about two years ... but if you want to use deduplication, then you can still have just 512 MB, but you have to add at least one L2ARC device (preferably an SSD) for keeping the hashes in RAM and/or L2ARC; of course, you may increase RAM instead.
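Adding an L2ARC device to an existing pool is a one-liner; assuming (for illustration) that the pool is called `tank` and the SSD shows up as `ada2`:

```shell
# Attach the SSD as a cache (L2ARC) device to the pool
zpool add tank cache ada2

# Verify: the device should appear under a "cache" section
zpool status tank
```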


----------



## Oko (Sep 26, 2015)

vermaden said:


> 16 GB of RAM is not needed for ZFS; I used 512 MB on FreeBSD for a 2 TB ZFS mirror pool for about two years ... but if you want to use deduplication, then you can still have just 512 MB, but you have to add at least one L2ARC device (preferably an SSD) for keeping the hashes in RAM and/or L2ARC; of course, you may increase RAM instead.


I don't know about the prices of SSDs and ECC RAM in Poland, but here in the US the difference between 16 GB of ECC RAM and a 32 or 64 GB SSD is almost negligible. But that's beside the point. I respect your knowledge and contribution to the FreeBSD community, but I hold the view that telling people that ZFS can be run on old hardware with mediocre specs is disingenuous at best and borderline a lie. Who cares, one might ask?

Recently a gentleman who is doing humanitarian work in Tanzania wrote an e-mail to the DF mailing list after being directed there by a few OpenBSD developers. He is trying to put together a network of electronic libraries in a part of Tanzania where the power grid is barely existent and not very reliable. Can you guess what OS his computers are running now and what file system they use?


----------



## protocelt (Sep 26, 2015)

When to use ZFS is a subjective matter of preference, needs, and environment. It can work just fine on lower-tier consumer hardware, to a point. I wouldn't use it on a netbook with 1 GB of RAM, but you could, and with some tuning it could work just fine, depending on what the user needs or wants.


----------



## vermaden (Sep 26, 2015)

Oko said:


> I don't know about the prices of SSDs and ECC RAM in Poland, but here in the US the difference between 16 GB of ECC RAM and a 32 or 64 GB SSD is almost negligible. But that's beside the point. I respect your knowledge and contribution to the FreeBSD community, but I hold the view that telling people that ZFS can be run on old hardware with mediocre specs is disingenuous at best and borderline a lie. Who cares, one might ask?


Saying that ZFS requires 16 GB of RAM just to work is a lie, as simple as that.



Oko said:


> Recently a gentleman who is doing humanitarian work in Tanzania wrote an e-mail to the DF mailing list after being directed there by a few OpenBSD developers. He is trying to put together a network of electronic libraries in a part of Tanzania where the power grid is barely existent and not very reliable. Can you guess what OS his computers are running now and what file system they use?


Could you elaborate more on that? It's quite interesting.


----------



## gofer_touch (Oct 8, 2015)

vermaden said:


> Could you elaborate more on that? It's quite interesting.



He's probably referring to this: https://forums.freebsd.org/threads/introducing-digital-library-initiative-in-africa.53505/


----------



## sossego (Oct 15, 2015)

Damn skippy!


----------



## grahamperrin@ (May 20, 2016)

hedwards said:


> …The main issue I'm seeing with ZFS is that grub2 doesn't play well with it. …



Amongst recent improvements: http://web.archive.org/web/20160423...tes/CURRENT/relnotes/article.html#boot-loader – the boot loader has been updated to support entering the GELI passphrase before loading the kernel.



Oko said:


> … FreeBSD supports GELI full disk encryption when creating ZFS volumes. Using GELI is beyond the scope of this document. …



Towards multiplatform  ZFS encryption: https://github.com/zfsonlinux/zfs/pull/4329


----------



## gofer_touch (Aug 19, 2017)

The next DFly release will have an initial HAMMER2 implementation.


----------



## aimeec1995 (Aug 31, 2017)

I remember choosing ZFS in the Antergos installer, and then it would crash because 2 GB of RAM wasn't enough.


----------



## forquare (Sep 1, 2017)

aimeec1995 said:


> I remember choosing ZFS in the Antergos installer, and then it would crash because 2 GB of RAM wasn't enough.



I wonder if this is an Antergos issue or something else? I've got a number of FreeBSD VMs running ZFS with no more than 2 GB of RAM for testing. One of them has a few hundred GB of storage using ZFS mirrors, which I've used for some stress testing - it's not performant, but it's stable...


----------



## usdmatt (Sep 1, 2017)

I love ZFS, but to be honest I always manually limit the ARC size to guarantee a reasonable amount of memory left over for the system. I haven't kept up with whether any work has been done to help the memory issues, but I've had far too many crashes in the past due to memory starvation. (I have been using ZFS since v15, so I was a pretty early adopter.)
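On FreeBSD, capping the ARC is a single loader tunable; for example, to limit it to 4 GB (the figure is just an illustration, and the right value depends on your workload):

```
# /boot/loader.conf
vfs.zfs.arc_max="4G"
```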


----------



## ralphbsz (Sep 1, 2017)

aimeec1995 said:


> I remember choosing ZFS in the Antergos installer, and then it would crash because 2 GB of RAM wasn't enough.


I've run ZFS on 2 GB and 4 GB machines (of which only 3 GB were used, because they are 32-bit), and it works just fine with very minor tuning (one or two variables). Right now I have 4+3+3 TB disks in my server at home, with only 3 GB of memory.

The stories of ZFS needing oodles of memory might be true for high-performance production systems (I can't verify that; the performance of my hardware doesn't allow it), but the basic functioning of ZFS doesn't need lots of memory.


----------



## Oko (Sep 1, 2017)

ralphbsz said:


> I've run ZFS on 2 GB and 4 GB machines (of which only 3 GB were used, because they are 32-bit), and it works just fine with very minor tuning (one or two variables). Right now I have 4+3+3 TB disks in my server at home, with only 3 GB of memory.
> 
> The stories of ZFS needing oodles of memory might be true for high-performance production systems (I can't verify that; the performance of my hardware doesn't allow it), but the basic functioning of ZFS doesn't need lots of memory.


ZFS is a 128-bit file system and it should not be used on a 32-bit machine/OS. Your personal experience is irrelevant. This is a public forum and we all should refrain from posting bad advice.


----------



## vermaden (Sep 1, 2017)

Oko said:


> ZFS (just like HAMMER1 and HAMMER2) is a 64-bit file system and it should not be used on a 32-bit machine/OS. Your personal experience is irrelevant. This is a public forum and we all should refrain from posting bad advice.


ZFS is 128-bit filesystem ...


----------



## ralphbsz (Sep 2, 2017)

Oko said:


> ZFS is a 128-bit file system and it should not be used on a 32-bit machine/OS. Your personal experience is irrelevant. This is a public forum and we all should refrain from posting bad advice.


ZFS seems to run excellently on my 32-bit machine. Matter of fact, I've run lots of 64-bit file systems, and a different 128-bit file system, on 32-bit machines. YMMV.


----------



## aht0 (Oct 1, 2017)

HAMMER2 is now available in the DragonFly installer. Not in the most recent release, but available in recent snapshots.


----------



## gofer_touch (Oct 1, 2017)

It should be in the 5.0-RELEASE, which I've read is supposed to come out sometime next week. I for one am very interested in trying the live dedup feature.


----------



## gpatrick (Oct 1, 2017)

Oko said:


> NetBSD guys don't work on anything.


More FUD and detachment from reality.

Three recent projects completed by NetBSD: 
1) Lua in the kernel
2) blacklistd
3) npf

Oko said:


> ZFS is 128-bit file system and it should not be used on 32-bit machine/OS.


I'm sure Sun Microsystems would have been interested in knowing this, since they shipped ZFS on Solaris for 32-bit machines. Perhaps if you'd worked for them, your expert technical advice could have saved the company.

Oko said:


> we all should refrain from posting bad advises.


Curiously, you say this, then make bizarre statements about ZFS which have no basis in truth, and you have spread FUD about NetBSD for years.


----------



## marino (Oct 3, 2017)

gpatrick said:


> Curiously you say this; then make bizarre statements about ZFS which have no basis in truth, and you have spread FUD about NetBSD for years.



For what it's worth, NetBSD greybeards constantly bash Theo and OpenBSD. They'll go out of their way to bring them up just to put them down (drawing on very weak tangents, leaving the reader to go, huh? Nobody was talking about Theo or OpenBSD).

So if a person partial to OpenBSD wants to return the favor, great! What's good for the goose is good for the gander.


----------



## wblock@ (Oct 4, 2017)

Except it's not allowed by the rules here.  Thread closed.


----------

