# UFS vs ZFS



## knightjp (Jul 25, 2022)

What would be the benefit of ZFS over UFS on a FreeBSD desktop system?
I recently chose to go with UFS over ZFS for my FreeBSD installation. I don't see a difference, other than the fact that mounting UFS drives is much easier than mounting ZFS ones. So with my limited knowledge, UFS seems the way to go.


----------



## 3301 (Jul 25, 2022)

Question already asked here: UFS vs ZFS


----------



## wolffnx (Jul 25, 2022)

There is no "versus"; choose what you need.


----------



## dbdemon (Jul 25, 2022)

3301 said:


> Question already asked here: UFS vs ZFS


Yes, although that was nearly 12 years ago. Has really nothing relevant to the question changed in the meantime?


----------



## kpedersen (Jul 25, 2022)

dbdemon said:


> Yes, although that was nearly 12 years ago. Has really nothing relevant to the question changed in the meantime?


Not really. Both filesystems are fairly stable.

Personally I use UFS on my laptops. ZFS is great but on a laptop I rarely make use of its features and they are also fairly memory constrained. ZFS does take up more memory.


----------



## hbsd (Jul 25, 2022)

kpedersen said:


> ZFS does take up more memory


I had a better experience with UFS, and maybe that's why. My system is powerful, but I felt a relatively noticeable difference between UFS and ZFS. I believe UFS was faster and more stable for me.


----------



## mer (Jul 25, 2022)

dbdemon said:


> Yes, although that was nearly 12 years ago. Has really nothing relevant to the question changed in the meantime?


I agree with kpedersen on this;  fundamentally nothing has changed.  Bugs have been fixed, features have been added.

About the biggest thing I use ZFS for is Boot Environments.  I have an Intel NUC; while it's not a laptop, it is limited in upgradability, RAM size and disk size.  Nothing fancy, but ZFS lets me upgrade with less trepidation.  Another system is a more typical desktop tower with more memory, power, and space, so I use it more for storage.  So it gets mirrors.

Under certain workloads, I can believe that UFS would be faster than ZFS.  I think the stability is a wash between the two (at least I've not noticed a difference).


----------



## gpw928 (Jul 26, 2022)

I agree with mer.

ZFS is much more complex, and has some significant downsides, e.g. it's resource hungry, and pools need to be kept less than 80% full.  However, the really important features of ZFS for me are:

* boot environments allow you to upgrade the operating system without fear of trashing your system; and
* ZFS file systems share a common pool of unused disk space, within a pool.

To address the upgrade risk with UFS, I used to keep dual root file systems, and switch from one to the other with each upgrade.  No need for that any more with ZFS.

Having said that, all my FreeBSD systems, even the virtual ones, have adequate CPU, disk, and memory resources.  If things were very tight, I would consider UFS (which has stood the test of time).
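Roughly, that boot-environment workflow looks like this with bectl(8) on a Root-on-ZFS install (the BE name below is arbitrary; this is a sketch, not a full upgrade guide):

```shell
# Snapshot the running system as a new boot environment before upgrading
bectl create 13.1-pre-upgrade

# ...run the upgrade (e.g. freebsd-update) and reboot...

# If the upgraded system misbehaves, fall back to the old environment
bectl activate 13.1-pre-upgrade
shutdown -r now

# List environments to confirm which one is active now and on next boot
bectl list
```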


----------



## pacohope (Jul 26, 2022)

I just found this thread because I've been investigating the tradeoffs. I'm trying to figure out whether I have a use case for ZFS. All the systems I use are virtual. That is, I'm running VMs inside Xen or I'm running EC2 instances on AWS. I have hardware RAID for physical storage, which Xen sees as one big disk. Xen creates virtual disks. If I want to add space, I do it at a layer below the operating system (e.g. resize the virtual disk/EBS volume). When I read about ZFS on the FreeBSD ZFS page, the early discussion talks about giving the filesystem knowledge of the underlying volumes. But in all my cases, everything is virtualised. So there are no physical hard disks or physical volumes for the OS to know about.

Today, I'm just running UFS everywhere because I've been doing BSD since 1993. If it ain't broke... But I'm trying to determine the advantages ZFS has to offer, if you take away the whole physical/logical connection. Snapshots sound pretty awesome, but I'm not fluent in how they work. I tend to do disaster-level backups by backing up the whole VM/volume (outside the OS). But ZFS sounds like it could enable some "oops" restorations really easily (a capability I don't really have right now). There's some discussion of performance (e.g., "MySQL runs faster"), so should I be putting my MariaDB on a ZFS volume, even though the underlying disk is virtual?

The 12-year-old original thread doesn't contain practical considerations that someone might use to judge their own workload and decide which suits them better. It's a lot of gut hunches, personal preferences, and educated guesses.
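From what I've read so far, the "oops" restoration would look something like this (dataset and snapshot names here are hypothetical; corrections welcome):

```shell
# Take a cheap point-in-time snapshot of the database dataset
zfs snapshot zroot/var/db/mysql@pre-migration

# Oops: a bad script mangles the data -- roll the whole dataset back
zfs rollback zroot/var/db/mysql@pre-migration

# Or recover individual files from the read-only snapshot directory
cp /var/db/mysql/.zfs/snapshot/pre-migration/ibdata1 /var/db/mysql/
```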


----------



## Profighost (Jul 26, 2022)

If you don't know whether to pick UFS or ZFS, the choice should always be freebsd-ufs.
This should not feel like a "reduction" or a "downgrade", because it's not!
*UFS* is a mature, fast, powerful, stable, reliable, and (very) easy to use filesystem.
Unsophisticated, straightforward, and fully capable of fulfilling more than enough storage needs.

Since UFS provides more usable space per partition, you'd be wasting capacity by using ZFS on a single drive/partition.

Simply summarized: *ZFS* is for assembling partitions (drives) into *storage pools*, giving you large(r) (and growable) and/or redundant storage.
There are additional benefits, but most don't make sense, or don't even work (such as RAID), on a single drive or on virtual drives within a VM.

If you only have single drives (a laptop, or a single-drive desktop machine), or you are inside a VM, where everything depends on the underlying host OS's filesystem anyway, I don't see the point in using ZFS, unless you're well versed in ZFS and know exactly what you're doing and why.

ZFS is no rocket science, though it's a bit more complicated to install and use than any other filesystem, and it uses a bit more resources (CPU, RAM).
So without more than one native drive, I don't see any advantage in using it at all.
You may only lose speed and gain extra effort.


----------



## hardworkingnewbie (Jul 26, 2022)

pacohope said:


> Today, I'm just running UFS everywhere because I've been doing BSD since 1993. If it ain't broke... But I'm trying to determine the advantages ZFS has to offer, if you take away the whole physical/logical connection. Snapshots sound pretty awesome, but I'm not fluent in how they work. I tend to do disaster-level backups by backing up the whole VM/volume (outside the OS). But ZFS sounds like it could enable some "oops" restorations really easily (a capability I don't really have right now). There's some discussion of performance (e.g., "MySQL runs faster"), so should I be putting my MariaDB on a ZFS volume, even though the underlying disk is virtual?
> 
> The 12-year-old original thread doesn't contain practical considerations that someone might use to judge their own workload and decide which suits them better. It's a lot of gut hunches, personal preferences, and educated guesses.


ZFS's main advantages are: subvolumes (datasets) with snapshots, portability of storage pools due to independence from hardware RAID controllers, zfs send for fast backups, bitrot detection and self-healing (provided each block is stored at least twice), and really high redundancy if you want to set it up that way.


----------



## T-Daemon (Jul 26, 2022)

Apropos trying out system upgrades in a ZFS boot environment first: on UFS2 a similar environment can be created with the gunion(8) control utility (new in 14.0-CURRENT).

The major difference from ZFS BEs is that, to work with gunion(8), an extra physical medium or an md(4) disk at least the size of the original disk is required.


I personally use ZFS on all my (single-disk desktop) systems because of, to list a few:

* ZFS boot environments
* ZFS snapshots, for backups and easy rollback of the file system to a certain state
* individual ZFS dataset properties
* on a laptop, easy automated geli-encrypted Root-on-ZFS installation from the official installer

Also, I don't experience any performance difference between the two file systems, running mostly desktop applications.


----------



## freezr (Jul 26, 2022)

Even though I do not use ZFS at its best (2 SSDs in striped mode), the "boot environments" have saved my life several times already...


----------



## mer (Jul 26, 2022)

hardworkingnewbie said:


> subvolumes with snapshots,


Couple this with clones and you can have some fun with "development and production" datasets for things like websites and databases.


hardworkingnewbie said:


> portability of storage pools due to independence from hardware raid controllers,


Good if you are moving devices around machines.  I would say the independence from hardware raid controllers is actually the more important part.  I've been bitten in the past where a hw raid controller has failed and the only solution is reinstall from scratch because even if you replaced with the "same" controller, it would not work.  Lots of motherboards come with RAID capability built in;  if that fails, what do you do, get a new mobo?


hardworkingnewbie said:


> zfs send for fast backups,


snapshots and zfs send/receive is a good basis for doing backups.
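As a sketch (pool, dataset, and host names are made up), a snapshot-based backup cycle looks like:

```shell
# Initial full replication of a snapshot to a backup pool
zfs snapshot zroot/home@2022-08-13
zfs send zroot/home@2022-08-13 | zfs receive backup/home

# Subsequent runs send only the blocks changed since the last snapshot
zfs snapshot zroot/home@2022-08-14
zfs send -i @2022-08-13 zroot/home@2022-08-14 | \
    ssh backuphost zfs receive backup/home
```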


----------



## cy@ (Jul 26, 2022)

Comparing the ZFS paradigm with other filesystems and volume managers, ZFS is like a filesystem and volume manager wrapped up in one. Typically in FreeBSD a person would need to use vinum or the newer gvinum (both of which are "clones" of Veritas Volume Manager -- VxVM), or a combination of gmirror and gconcat, then put UFS onto the logical volumes or logical devices.

In Linux people do this using LVM (which is a clone of HP-UX LVM) into which they put EXT, EXT2, EXT3, EXT4, or XFS filesystems into logical volumes.

Or in Solaris, before ZFS, one would build a Solstice DiskSuite volume (a rudimentary thing like our gmirror), putting UFS into the logical device.

ZFS combines the function of volume manager and filesystem into one. Instead of at least 4+ commands to set up an EXT4 volume in Linux or 2+ commands to do the same with gmirror+UFS, once you've done the zpool create, setting up new filesystems (they're actually called datasets) is one simple zfs create command. ZFS simplifies management of your storage farm.
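To make the command-count comparison concrete (device and pool names here are hypothetical):

```shell
# gmirror + UFS: build the mirror, then newfs and mount each filesystem
gmirror label -v gm0 /dev/ada1 /dev/ada2
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /data

# ZFS: one command creates the mirrored pool (and mounts it)...
zpool create tank mirror ada1 ada2
# ...and each additional dataset is a single command, no newfs or fstab entry
zfs create tank/home
zfs create tank/www
```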

On the flip side, yes, UFS is much simpler and uses much less memory. In a memory constrained environment you're better off with UFS. However I did use ZFS on a 768 MB heavily tuned i386 laptop for a long time. This is not recommended unless you're willing to fiddle around just to get it right.

You need to compare them feature by feature, like ZFS compression or UFS simplicity and small footprint to determine which is better for your application. One size does not fit all.


----------



## bsduck (Jul 26, 2022)

I'm surprised that you find UFS faster. I did tests with both a few years ago and ZFS was clearly faster for me.
That was on HDDs, basic single-disk setups. I didn't compare the speed on a SSD.

I always use ZFS. It doesn't need multiple disks and complex setups to be useful.

I especially like:

* checksumming and zpool-scrub(8)
Instead of silent data corruption, you can know if a file has been damaged and needs to be restored from backup.

* different datasets without the need to partition the disk
That's way more flexible than disk-level partitioning. You can set, or not, the minimal and maximal size of any dataset, and adjust these values anytime. You can delete a dataset and all the space it used to take is instantly available to the others again. Datasets can be mounted with different options just like regular partitions, while still residing on the same physical partition of the disk.

* copies=2 (or 3)
Not something I would enable everywhere, but really handy to automatically keep two or three copies of each file, for important data on dedicated datasets or backup disks.
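A sketch of that per-dataset flexibility (names and sizes are invented for illustration):

```shell
# Datasets draw from the pool's shared free space; no fixed partitions
zfs create zroot/photos
zfs set quota=200G zroot/photos        # upper bound on space used
zfs set reservation=20G zroot/photos   # guaranteed minimum
zfs set copies=2 zroot/photos          # store two copies of every block

# Verify all data in the pool against its checksums
zpool scrub zroot
zpool status zroot                     # shows scrub progress and any errors
```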



knightjp said:


> mounting UFS drives are much more easier than ZFS


It's not that complicated either, and in normal daily use you shouldn't be mounting your root disk manually too often anyway.


----------



## Voltaire (Aug 13, 2022)

hbsd said:


> I had a better experience with UFS, and maybe that's why. My system is powerful, but I felt a relatively noticeable difference between UFS and ZFS. I believe UFS was faster and more stable for me.





bsduck said:


> I'm surprised that you find UFS faster. I did tests with both a few years ago and ZFS was clearly faster for me.
> That was on HDDs, basic single-disk setups. I didn't compare the speed on a SSD.


It depends on the situation (specific hardware/software/task).
*Sometimes ZFS is faster, sometimes UFS is faster. For specific tasks either of these is clearly faster.*

ZFS wins:

* https://openbenchmarking.org/embed.php?i=1112101-AR-ZFSFREEBS06&sha=4e7a074&p=2
* https://www.phoronix.com/data/img/results/zfs_ext4_btrfs/7.png
* https://www.phoronix.com/data/img/results/zfs_ext4_btrfs/2.png
* https://openbenchmarking.org/embed.php?i=1904234-HV-FREEBSDZF72&sha=d7b42e1&p=2
* https://openbenchmarking.org/embed.php?i=1904234-HV-FREEBSDZF72&sha=8ad267f&p=2
* https://openbenchmarking.org/embed.php?i=1904234-HV-FREEBSDZF72&sha=4aaf381&p=2

UFS wins:

* https://openbenchmarking.org/embed.php?i=1904234-HV-FREEBSDZF72&sha=dbbea03&p=2
* https://openbenchmarking.org/embed.php?i=1904234-HV-FREEBSDZF72&sha=0c7341a&p=2
* https://www.phoronix.com/data/img/results/zfs_ext4_btrfs/1.png
* https://openbenchmarking.org/embed.php?i=1904234-HV-FREEBSDZF72&sha=36043d2&p=2
* https://2.bp.blogspot.com/_rlFXEOtfh1s/Rvd7XhD2czI/AAAAAAAAABM/KaFeHvr5tAI/s400/zfs-vs-vxfs-vs-ufs.jpg


My guess is that ZFS is currently faster than UFS on average, and this was already the case a long time ago: https://blogs.oracle.com/solaris/post/zfs-to-ufs-performance-comparison-on-day-1

> Looking ahead to our results we find that of our 12 Filesystem Unit tests that were successfully run:
>
> * ZFS outpaces UFS in 6 tests by a mean factor of 3.4
> * UFS outpaces ZFS in 4 tests by a mean factor of 3.0
> * ZFS equals UFS in 2 tests.
ZFS _on Linux_ is slow.

But _on FreeBSD_ ZFS is fast (frequently faster than EXT4/Btrfs are _on Linux_).

The claim that ZFS needs a lot of RAM is a myth, by the way. I've been using ZFS for 4 years on a system that has 4GB of RAM, and I've never had any problems. When I open 200 tabs in Chromium the browser becomes slow and some tabs crash, but if I close about thirty tabs Chromium becomes responsive again. I've never had any data loss or anything like that, and I haven't had any stability issues in games either, so I'd say ZFS works fine if you only have 4GB of system RAM available.

In terms of *reliability and stability*, I would say that ZFS is going to beat UFS.

ZFS vs UFS and power loss: https://forum.netgate.com/topic/120393/zfs-vs-ufs-and-power-loss

> Yes, it is about a *zillion times better* than UFS. Switching to ZFS should be a complete no-brainer with anything that has 4GB of RAM or better. I'd still go for it even with 2GB boxes, had nothing but pain with UFS for years. Garbage filesystem.
>
> UFS is still quite resilient to actual data corruption but it often requires a manual fsck after a power loss to fix the filesystem metadata (not the actual stored data but the filesystem bookkeeping information). It's actually better without the journaling as mentioned, keep soft-updates on though for reasonable performance. The downside is that manual fsck can take a long time but it will fix the filesystem unless something is completely corrupted or there is an actual hardware fault on the disk.
>
> *ZFS is miles ahead in this department* though, I have never experienced any power loss related problems with ZFS.

ZFS also offers more features than UFS. ZFS has important features and characteristics that even Btrfs doesn't offer.


----------



## mer (Aug 13, 2022)

Voltaire good stuff.  I haven't done any looking but have you run across any benchmarking of ZFS on FreeBSD that compare "FreeBSD Native ZFS" (the version in 12.x) vs OpenZFS on FreeBSD (version in 13.x)?  From a pure user experience I've seen no difference in my daily use which is not really disk intensive.

RAM usage:  I think there is a lot that may depend on specific workload (I know, amazing).  The Number of tabs open in Chromium, I wonder if it's more Chromium vs the filesystem.

Regardless, thanks for posting the info.


----------



## Voltaire (Aug 13, 2022)

mer said:


> Voltaire I haven't done any looking but have you run across any benchmarking of ZFS on FreeBSD that compare "FreeBSD Native ZFS" (the version in 12.x) vs OpenZFS on FreeBSD (version in 13.x)?


You can also install version 2.1 on FreeBSD 12 and do a performance comparison. There are few or no benchmarks to be found that make the comparison. All I can say is that version 2.1 appears to be faster than the version FreeBSD 12 uses by default:

TrueNAS 13.0-U1 Delivers Improved Performance, Scalability, and Reliability (www.truenas.com):

> TrueNAS 13.0 retains all the TrueNAS 12.0 services and middleware while providing significant improvements in security, availability, quality, and performance.

_OpenZFS 2.1 *performance* and reliability improvements_

Should I Upgrade to OpenZFS 2.1? (klarasystems.com):

> The release of FreeBSD 13.1 is just around the corner, and with it comes support for the most recent stable release of OpenZFS - version 2.1.2. Learn about the new features in OpenZFS 2.1 and about what to consider before upgrading your pools.

> *Improved zfs receive performance with lightweight write*: This change improves performance when receiving streams with small compressed block sizes.

OpenZFS 2.1 is out—let's talk about its brand-new dRAID vdevs (arstechnica.com):

> dRAID vdevs resilver very quickly, using spare capacity rather than spare disks.

> Distributed RAID (dRAID) is an entirely new vdev topology we first encountered in a presentation at the 2016 OpenZFS Dev Summit. In the chart at the top of this section, we can see that, in a pool of ninety 16TB disks, resilvering onto a traditional, fixed spare takes *roughly 30 hours* no matter how we've configured the dRAID vdev—but resilvering onto distributed spare capacity *can take as little as one hour*. The fast resilvering is fantastic—*but draid takes a hit in both compression levels and some performance scenarios* due to its necessarily fixed-length stripes.

OpenZFS 2.1 performance improvements (blog.vx.sk):

> Alexander Motin and other OpenZFS developers are currently working on various micro-optimizations in areas like atomics, counters, scalability and memory usage. *Release candidate testers already report improved performance* compared to OpenZFS 2.0 and previous releases.

----------



## mer (Aug 13, 2022)

That last one, on micro-optimizations, people often say "don't do that" but sometimes if they are part of the 80% of the code getting run, they add up.


----------



## hardworkingnewbie (Aug 13, 2022)

Voltaire said:


> ZFS _on Linux_ is slow.
> 
> But _on FreeBSD_ ZFS is fast (frequently faster than EXT4/Btrfs are_ on Linux)_


Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.

What are your sources to support this statement? Show me where the meat is, Voltaire.


----------



## Deleted member 67440 (Aug 13, 2022)

pacohope said:


> Today, I'm just running UFS everywhere because I've been doing BSD since 1993. If it ain't broke... But I'm trying to determine the advantages ZFS has to offer, if you take away the whole physical/logical connection. Snapshots sound pretty awesome, but I'm not fluent in how they work. I tend to do disaster-level backups by backing up the whole VM/volume (outside the OS).
> The 12-year-old original thread doesn't contain practical considerations that someone might use to judge their own workload and decide which suits them better. It's a lot of gut hunches, personal preferences, and educated guesses.


OK, here I am.
I manage about 100 FreeBSD servers, physical and virtual, around the world; a UNIX user for about 30 years (Solaris, in fact).
Data storage, MariaDB, sphinx-server, nginx, whatever.

Short version, *in the virtual world* (off topic with respect to _"What would be the benefit of ZFS over UFS on the FreeBSD *desktop* system?"_): on maybe 500 FreeBSD machines over the years, I think I have installed X maybe... twice. So I have almost *zero* experience with FreeBSD *clients*.

Snapshots: enormously *faster* than those of hypervisors. Normally less than a second, even on large and busy machines.
They allow you to make backups of the .vmdk files directly from ZFS snapshots.
Unmatched against vSphere, VBOX etc. snapshots (... on par... with... VMware Workstation!)

No chkdsk / scandisk / whatever (as long as you do NOT use deduplication).
This reason alone is enough to abandon filesystems that require it.

Scrub (data integrity check).
Especially with virtual machines it makes the difference between "maybe" the data of a broken machine was copied correctly and "it certainly works".

Resilvering. The whole system is basically a gigantic "RAID controller", with 8 or 16 CPUs and maybe 128GB or more of RAM.
Nothing like failure-prone hardware RAID systems.

Compression: very good and very fast (LZ4).

Reasonably fast (considering everything it does). Not a big deal with today's machines.

*Mirroring* of NVMe drives without the slightest problem, out of the box.
Even this *alone* is enough to abandon SATA / SAS hardware RAID controllers.

Plus a whole host of other things.

_Final judgment: it "pays" to use FreeBSD not because it is the "best" operating system, but because it is the best operating system for running ZFS (and it can become a Samba PDC master for very common Windows clients)._
Note: I am referring to FreeBSD *11/12* ZFS.
Its defects and their workarounds are known (which is essential when you cannot go on site).

13.x (with the new OpenZFS) does not convince me *at all*.
Too many teething problems, too many oddities for production use.

I expect that, of course, the situation will improve over time.
But today I refuse to use a FreeBSD 13.x in production.

----------



## Deleted member 67440 (Aug 13, 2022)

A word about benchmarks.

They say almost nothing (in production); they are almost useless.
Having a system that runs 3.6% faster, but on which you can't verify that the data is intact, is *the* difference between hobbyist and professional use.

Of course, not everyone scrubs their laptop every day, I understand that.

But if we consider the "server world" rather than the desktop world, there is not the slightest doubt (UFS vs ZFS, or better, ZFS vs everything else).

There is some doubt with Solaris/OmniOS/Nexenta (for a domain-client fileserver with Solaris' ACLs), and even with Debian+ZFS (if you want to use the OpenZFS version it doesn't change much, and you need Linux software).

These are my opinions, which however are based on facts, on evidence, and on decades of experience.


----------



## Deleted member 67440 (Aug 13, 2022)

I could also add more interesting things about verifying backups with minimal wear on removable media: choosing expendable drives and activating deduplication on them (in this case, yes, there is a sort of "fsck" with ZFS, but on a temporary drive you can pull it off and remake it).

I don't know if it matters to someone; I have to be careful, otherwise it seems that I can "suck" credit cards remotely.


----------



## Deleted member 67440 (Aug 13, 2022)

hardworkingnewbie said:


> Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.
> 
> What are your sources to support this statement? Show me where the meat is, Voltaire.


There was a grain of truth.
On Linux, historically, ZFS worked through FUSE, so there was some extra overhead.
It is actually modest.
However, as mentioned, benchmarks across filesystems with different feature sets are of little significance.
FAT32 is much, much, much faster than both ZFS and UFS, because it is much simpler; it does practically nothing.
Of course, I still prefer UFS over FAT32.


----------



## Voltaire (Aug 13, 2022)

hardworkingnewbie said:


> Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.
> 
> What are your sources to support this statement? Show me where the meat is, Voltaire.


It's these kinds of results that give me this impression: https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2

The results that I've seen on Phoronix over the years give me the impression that ZFS on Linux used to be less than half as fast as on FreeBSD. Phoronix sometimes does benchmarks that are very important, and sometimes less important ones. What I've also frequently seen over the years is that _FreeBSD with ZFS_ gets much higher IOPS than _Linux with EXT4/F2FS_ in certain situations. Sometimes more than 5x higher IOPS in fio.

You can easily test it yourself. Install FreeBSD on your hardware, run multiple tests in fio, the most reliable benchmark tool, and see what your IOPS are. Then see what Linux gets with EXT4 or F2FS. My impression is that there are important scenarios where FreeBSD gets much higher IOPS.


----------



## Voltaire (Aug 13, 2022)

hardworkingnewbie said:


> Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.
> 
> What are your sources to support this statement? Show me where the meat is, Voltaire.


In my previous post I already gave a strange performance difference, and you can directly compare with ZoL.
I also mentioned the weird differences in fio; I mean specifically these results:

* https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=5ca0c1f&p=2
* https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=12872ac&p=2
* https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=49228e7&p=2

Furthermore, you also have these relevant results where Linux *ZoL* is much slower in the best SQL database:

* https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=ebda438&p=2
* https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=3fc693b&p=2

And PostgreSQL is not slow on FreeBSD + ZFS compared to Linux: https://redbyte.eu/en/blog/postgresql-benchmark-freebsd-centos-ubuntu-debian-opensuse/


----------



## hardworkingnewbie (Aug 13, 2022)

Voltaire said:


> It's these kind of results that give me this impression: https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2
> 
> The results that I've seen on Phoronix over the years give me the impression that Linux used to be more than half slower than FreeBSD in ZFS. Phoronix sometimes does benchmarks that are very important, and sometimes less important benchmarks. What I've also seen frequently over the years is that _FreeBSD with ZFS_ gets much higher IOPS than_ Linux with EXT4/F2FS_ in certain situations. Sometimes more than 5x higher IOPS in Fio.
> 
> You can easily test it yourself. Install FreeBSD on your hardware, run multiple tests in Fio, the most reliable benchmark tool. See what your IOPS are. And then see what Linux gets with EXT4 or F2FS. My impression is that there are important scenarios where FreeBSD gets much higher IOPS.


You are comparing apples with onions here; as I stated, FreeBSD nowadays uses the same ZFS codebase as Linux does. More specifically, that change happened with the release of FreeBSD 13.

But your benchmark shows results for FreeBSD 12, which used another ZFS implementation that is no longer in use.

So this benchmark is pretty much useless for supporting your statement, because it shows the performance profile of the past, not the present.


----------



## tingo (Aug 13, 2022)

hardworkingnewbie said:


> So - this benchmark is pretty much useless to support your statement, because it just shows the performance profile of the past but not the present.


Is your argument that zfs in FreeBSD 13 is slower than in FreeBSD 12.x?


----------



## mer (Aug 13, 2022)

ZFS codebase being the same as on Linux: OK, that can eliminate differences due to ZFS itself, but it can show differences in how ZFS interacts with the kernel. So it's not invalid; one just has to be aware of what they are looking at.
In theory, comparing OpenZFS 2.x on FreeBSD vs the same on Linux, the difference is the kernel interface. Features, checksumming, block management, that is all in the OpenZFS code itself. Kernel interfaces (allocate this, write that, read this) wind up being where the differences are; the differences may result in different optimizations being done in the OpenZFS code.

FreeBSD-12, ZFS.  To the best of my knowledge FreeBSD-12 is not EOL, so it's not really "the past".  Comparisons of FreeBSD-12/NativeZFS vs Linux+ZoL indicates performance of the "system", not just ZFS.  FreeBSD-12+Native ZFS vs FreeBSD-12+UFS indicates differences between ZFS and UFS.  FreeBSD-12 also has the option of running with OpenZFS so one can do FreeBSD-12+NativeZFS vs FreeBSD-12+OpenZFS which will indicate differences between NativeZFS and OpenZFS and could provide data to prove/disprove "OpenZFS is faster/slower than NativeZFS on FreeBSD-12", but nothing more than that.  I don't know if there is a "NativeZFS on FreeBSD-13" port (like the OpenZFS for FreeBSD-12), but if so that would provide data as to "OpenZFS is faster/slower than NativeZFS on FreeBSD-13".

My opinion is that the links posted of benchmarking are not useless, but one has to understand what is being looked at, just like every single benchmark ever done.

More "my opinion":
I can understand a reluctance to OpenZFS on FreeBSD-13, simply because OpenZFS-2.0 is effectively ZFS on Linux.  That can lead to a caution in adopting FreeBSD-13.x, but that is what testing and waiting is for.  If one is dead set against OpenZFS, then by definition, they are stuck on FreeBSD-12.x until 12.x is EOL.  My systems, my rules, my choices.  Your systems, your rules, your choices.

NOTE:
All the above is my opinion, based on my experience, feel free to agree, disagree, tell me I'm off my rocker, tell me to shut up and keep my opinions to myself, it's all good.
ETA:
Sorry for writing too many words.


----------



## Alain De Vos (Aug 13, 2022)

Question regarding UFS:
Are there use cases for not doing journaling?
Are there use cases for not doing soft updates?


----------



## mer (Aug 13, 2022)

Alain De Vos said:


> Question regarding UFS.
> Are there use-cases to not do journaling.
> Are there use-cases to not do soft-updates


Journaling basically speeds up cleaning of dirty filesystems.  I think there is also something about you can't do journaling if you are doing snapshots (this is going by memory way in the attic, think of it as swap space, so may not be correct).

My understanding of softupdates is that it's similar to ZFS and the transaction groups where writes get grouped so that the data on the disk (data and metadata) is consistent.  You can still lose data if power is lost/hard pulled at the "right" time, but the disk will be consistent.

Not using softupdates?  Perhaps something like a database wants to do synch mounts so the data is actually on the disk.
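A sketch of how those knobs look on UFS (device names are hypothetical; tunefs(8) changes require the filesystem to be unmounted or mounted read-only):

```shell
# Show the current soft updates / journaling flags of a UFS filesystem
tunefs -p /dev/ada0p2

# Enable soft updates
tunefs -n enable /dev/ada0p2

# Enable soft updates journaling (SU+J) on top of soft updates
tunefs -j enable /dev/ada0p2

# A database that must have data on stable storage could mount synchronously
mount -o sync /dev/ada0p3 /var/db/postgres
```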


----------



## Deleted member 67440 (Aug 14, 2022)

Alain De Vos said:


> Question regarding UFS.
> Are there use-cases to not do journaling.
> Are there use-cases to not do soft-updates


It is the "combo".
For a use case for NOT (journaling && soft updates), see:

195485 – [ufs] mksnap_ffs(8) cannot create snapshot with journaled soft updates enabled (bugs.freebsd.org)
_As ZFS provides cheap snapshots, that is the filesystem of preference for folks that want snapshot functionality. The only remaining use for snapshots in UFS is the ability to do live dumps.  Thus I have not been motivated to go to the effort to migrate the kernel code to fsck (and nobody has offered to pay me the $25K to have me do it)._

I am not sure whether this still holds for newer versions of FreeBSD.

PS. _McKusick, for those who do not know him, is the "father" of UFS_
`#define FS_UFS1_MAGIC 0x011954   /* UFS1 fast file system magic number */`
`#define FS_UFS2_MAGIC 0x19540119 /* UFS2 fast file system magic number */`
Yes, his birth date 
So I think it's really reliable


----------



## Alain De Vos (Aug 14, 2022)

Personally, I think the snapshot functionality in UFS is not that useful.
In fact, as far as I'm concerned, they could remove the snapshot code.
It would make the filesystem a bit simpler.


----------



## facedebouc (Aug 14, 2022)

Alain De Vos said:


> Personally, I think the snapshot functionality in UFS is not that useful.
> In fact, as far as I'm concerned, they could remove the snapshot code.
> It would make the filesystem a bit simpler.


According to dump(8):


> -L      This option is to notify dump that it is dumping a live file
> system.  To obtain a consistent dump image, dump takes a snapshot
> of the file system in the .snap directory in the root of the file
> system being dumped and then does a dump of the snapshot.


----------



## Alain De Vos (Aug 14, 2022)

I wonder what Kirk McKusick's take on this subject is.


----------



## larshenrikoern (Aug 14, 2022)

What to use depends on more than just features and speed; it also depends on your disk configuration. UFS is, as I see it, for non-RAID configurations. If you need RAID or other ZFS features, use ZFS.
It also has to do with stability. UFS is getting almost no new features, bugs are getting fixed, and its limitations are quite well known. ZFS, on the other hand, is under quite active development, so the risk of some day being hit by a serious bug is much bigger, although until now that does not seem to have been the case.


----------



## hardworkingnewbie (Aug 14, 2022)

tingo said:


> Is your argument that zfs in FreeBSD 13 is slower than in FreeBSD 12.x?


What I am saying is that it no longer makes sense to benchmark ZFS on FreeBSD 12.X against some Linux distribution, because FreeBSD 12.X carries the past, and therefore obsolete, ZFS implementation. The present ZFS implementation in FreeBSD since 13.0 is OpenZFS, and it probably will be for quite a while.

When backing up bold statements like "ZFS on FreeBSD is 2x faster than on Linux", it nowadays only makes sense to compare FreeBSD >= 13.0 with the Linux distribution of choice, obviously.

My personal expectation is that there are some slight speed variances, depending on hardware and the benchmarks used. Sometimes Linux is probably quicker, sometimes FreeBSD. But FreeBSD being 2x faster would mean that something on Linux is fundamentally broken, which would be really hard to believe. It would also mean that something in OpenZFS is probably fundamentally broken, which I dismiss as a possibility, because I am pretty sure the FreeBSD developers would not otherwise have bothered switching over to OpenZFS.

There was a talk by Michael Dexter at vBSDcon back in 2019 comparing the ZFS implementations of that time against each other using benchmarks. It's also listed on papers.freebsd.org. That is much more in line with what I would expect from such a comparison. Phoronix does lots of benchmarks, but not always in a sensible way.





_View: https://www.youtube.com/watch?v=HrUvjocWopI_


----------



## Jose (Aug 14, 2022)

hardworkingnewbie said:


> ...because FreeBSD 12.X is the past and therefore obsolete ZFS implementation of FreeBSD.


Just because you say so doesn't make it so. Some version of FreeBSD 12 is going to be supported until 2024.

FreeBSD Security Information — www.freebsd.org

----------



## hardworkingnewbie (Aug 14, 2022)

Oh, so ZFS in FreeBSD 12 will not just see security and bugfixes until EoL, despite the big change between FreeBSD 12.0 and 13.0? I sincerely doubt that.


----------



## hunter0one (Sep 14, 2022)

knightjp said:


> I don't see a difference, other than the fact that mounting UFS drives are much more easier than ZFS. So with my limited knowledge, UFS seems the way to go.


I agree. I just tried to reinstall and give ZFS a try, but ran into the same issues I had over a year ago, ugh.

The only differences I notice with ZFS are that it consumes more RAM, and that I have to figure out how to properly mount my hard disk because the installer doesn't import it. To me, the effort of reading pages upon pages of documentation just to take a shot at understanding how a different filesystem works doesn't outweigh the simplicity of UFS. I tried ZFS again for Poudriere, but it's still not my cup of tea.


----------



## Alain De Vos (Sep 14, 2022)

Some of ZFS is explained in books, e.g.:

FreeBSD Mastery: ZFS — Michael W. Lucas and Allan Jude (www.amazon.com)
Once you take regular snapshots, you master it.

ZFS normally only takes memory that is otherwise free, so it's not a problem except on embedded/small devices.
You can also tune the ARC limits in sysctl.conf:

```
vfs.zfs.arc_min=1500000000   # minimum ARC size in bytes (default: 0)
vfs.zfs.arc_max=2500000000   # maximum ARC size in bytes (default: 0)
```

ZFS has a bit of a learning curve, but I did not use the installer to put my FreeBSD on ZFS; I just used command-line commands.
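To see what the ARC is actually using, one can query the sysctl counters at runtime (sysctl names as on FreeBSD 13; values are in bytes):

```shell
# Current ARC size
sysctl kstat.zfs.misc.arcstats.size

# Configured ARC ceiling
sysctl vfs.zfs.arc_max
```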


----------



## ct85711 (Sep 14, 2022)

From my experience, mounting with ZFS isn't difficult; if anything, it's easier than on Linux with its various filesystems (including btrfs).  Even on one of my systems, where I ran into a race condition with ZFS (the race condition was my own fault), it still isn't difficult.  I will admit I carried one habit over from Linux (I'm not sure whether it's needed, but it doesn't hurt): telling the kernel at startup what the root device/dataset is (the one for /, not /boot or /root).  You only need to specify one device; ZFS is smart enough to find and load the other devices in the pool.

I will say that if you use multiple pools, the order in which they are mounted may not always be what you expect, and you may be better off telling one pool not to mount automatically.


----------



## Alain De Vos (Sep 14, 2022)

ZFS on root is a problem on Linux. I use ZFS on Linux, but only for my data; most problems are related to kernel updates.
ZFS on root on FreeBSD works out of the box. It works flawlessly, and is better than any other filesystem (JFS, XFS, etc.).


----------



## Deleted member 67440 (Sep 15, 2022)

Alain De Vos said:


> Some zfs is explained in books,
> 
> 
> 
> ...


You can do it at runtime too:

`sysctl vfs.zfs.arc_meta_limit=whatever`
`sysctl vfs.zfs.arc_max=youlike`

But, in fact, ZFS has no real learning curve.
You don't need to make any particular adjustments.
It just works, quietly.

When you enter situations such as "how to optimize the use of RAM for virtualbox servers" you are much, much, much beyond the normal user



> I just tried to reinstall and give ZFS a try but ran into the same issues


??? issues ???
The "strangest" part may be making a "stripe" of a single disk for a single-disk zpool.
There is nothing to choose or change or fix for ZFS, just "next-next-next-OK".

Sure, you can, for example, put /home outside zroot, and so on.
But 99% of users don't need that, and the 1% who do know what they are doing.

You don't have to import any zpool, you don't have to edit fstab, you don't need to set mountpoints, etc.
There are no "magical configurations" that will make it run 10x faster than the default settings.

The "real curve" is in `zfs create` and in setting/getting the compression property; even atime is no longer so critical (with non-spinning drives).
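A minimal sketch of that "real curve" (the dataset names below are examples, not from this thread):

```shell
# Create a dataset with LZ4 compression enabled
zfs create -o compression=lz4 zroot/data

# Inspect the property and the compression ratio actually achieved
zfs get compression,compressratio zroot/data

# atime updates can be turned off pool-wide if nothing depends on them
zfs set atime=off zroot
```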


----------



## mer (Sep 15, 2022)

ct85711 said:


> I will say, if you use multiple pools, the mounting order of which pool is mounted first may not always be in the order you expect and you may be better off telling one pool not to mount automatically.


I find this curious/interesting: I've never cared about the order pools are mounted in, nor about the order normal UFS partitions/filesystems are mounted in.  Pools are separate from each other, and datasets are separate.  Do you have an example where pools mounting in the wrong order caused you a problem?


----------



## ct85711 (Sep 15, 2022)

To explain the setup I made on that system: it has two pools, one being zroot with the primary system installed.  On the second pool (I think I named it zdata; that system is offline so I can't check), I placed /var/lib, /var/db, /usr/home, and other larger directories (it has much more space available).  The problem came when I rebooted: zdata's datasets got mounted first, and then zroot's datasets were mounted afterwards, over them.  So when the system came up, all of zdata's directories appeared empty; the contents were still there, but not accessible.  Once I set zdata to legacy/non-automatic mounting and mounted it through fstab, that corrected the issue.

I admit that was my first time setting up ZFS, so it ended up serving as a lesson in the KISS rule.  I figure whenever I get around to replacing those drives, I'll just get rid of both pools and recreate everything as a single pool.
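For reference, the fix described above can be sketched like this (the pool and mountpoint names are examples):

```shell
# Keep the second pool's dataset from auto-mounting over zroot's datasets
zfs set mountpoint=legacy zdata

# Then mount it deterministically via /etc/fstab, after zroot is up:
# Device   Mountpoint   FStype   Options   Dump   Pass#
# zdata    /data        zfs      rw        0      0
```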


----------



## mer (Sep 15, 2022)

Ahh, OK.  Root on ZFS has a couple of specific requirements to support Boot Environments.  That means some directories under /var, /usr, and /usr/local want to be part of the zroot root dataset.  The best thing I can recommend is to do a default install (you can do it in a VM) and then run `zfs list` to see what winds up in its own dataset (`zpool history` is also good).  Moving /usr/home is usually never a problem; /var things like /var/db and /var/lib should be part of the root dataset.
This is from a basic root-on-ZFS install.  Notice which subdirectories of /var are listed: they have their own datasets.  Everything else under /var is part of the root dataset.
```
zfs list
NAME                         USED  AVAIL     REFER  MOUNTPOINT
zroot                       28.2G   195G       88K  /zroot
zroot/ROOT                  7.98G   195G       88K  none
zroot/ROOT/13.1-RELEASE-p0     8K   195G     7.45G  /
zroot/ROOT/13.1-RELEASE-p2  7.98G   195G     7.34G  /
zroot/tmp                   2.19M   195G     2.19M  /tmp
zroot/usr                   20.1G   195G       88K  /usr
zroot/usr/home              19.1G   195G     19.1G  /usr/home
zroot/usr/ports              988M   195G      988M  /usr/ports
zroot/usr/src                 88K   195G       88K  /usr/src
zroot/var                   1.93M   195G       88K  /var
zroot/var/audit               88K   195G       88K  /var/audit
zroot/var/crash               88K   195G       88K  /var/crash
zroot/var/log               1.45M   195G     1.45M  /var/log
zroot/var/mail               112K   195G      112K  /var/mail
zroot/var/tmp                120K   195G      120K  /var/tmp
```


----------



## avner (Oct 21, 2022)

On 13.1-RELEASE:
`find /usr/src -type f -iname "*ufs*"` --> 52 files
`find /usr/src -type f -name "*ufs*" -exec du -ch {} + | grep total$` --> 684 KB

`find /usr/src -type f -iname "*ffs*"` --> 86 files
`find /usr/src -type f -name "*ffs*" -exec du -ch {} + | grep total$` --> 1.5 MB

`find /usr/src -type f -iname "*zfs*"` --> 1,065 files
`find /usr/src -type f -name "*zfs*" -exec du -ch {} + | grep total$` --> 15 MB


----------

