# UFS vs ZFS



## Qaz (Dec 24, 2010)

Hi to all!

I have a FreeBSD 8.1 amd64 server with UFS on the first HDD, and I planned to add another 1 TB HDD for backups. I have never run FreeBSD on ZFS, so I took a spare computer for testing, installed FreeBSD 8.1 amd64 with UFS on the first HDD and ZFS on the second, and ran the blogbench benchmark:

```
blogbench -d /home/qaz/test/
```


```
ufs:
Final score for writes:            96
Final score for reads :         91960
```


```
zfs:
Final score for writes:             5
Final score for reads :         58466
```


```
Computer info:
Motherboard: Asus P5V-VM-ULTRA
CPU Model: Intel(R) Celeron(R) D CPU 430  @ 1.80GHz
RAM:2Gb
HDD:

ATA channel 2:
    Master:  ad4 <SAMSUNG HD200HJ/KF100-06> SATA revision 2.x
    Slave:       no device present
ATA channel 3:
    Master:  ad6 <SAMSUNG HD322GJ/1AR10001> SATA revision 2.x
    Slave:       no device present
```

ad4 - UFS, where FreeBSD is installed
ad6 - ZFS

What I did after installing FreeBSD:

```
test# cat /boot/loader.conf 
zfs_load="YES"
```


```
test# cat /etc/rc.conf 
defaultrouter="192.168.0.19"
hostname="test.zfs"
ifconfig_vr0="inet 192.168.0.10  netmask 255.255.255.0"
keymap="ua.koi8-u"
sshd_enable="YES"

zfs_enable="YES"
```


```
zpool create test /dev/ad6
zfs set compression=off test
```
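For reference, compression is a dataset property rather than a pool property, so the current setting can be checked with `zfs get` (pool name `test` as above; output along these lines):

```
test# zfs get compression test
NAME  PROPERTY     VALUE     SOURCE
test  compression  off       local
```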


```
test# zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0     0
	  ad6       ONLINE       0     0     0

errors: No known data errors
```

I know these are different HDDs, but I don't think the results should differ this much; I can also try putting UFS on ad6. So here is my question: why is ZFS performance so poor? What am I doing wrong?


----------



## Qaz (Dec 24, 2010)

Oh, I'm sorry, I forgot:

```
test# uname -a
FreeBSD test.zfs 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Jul 19 02:36:49 UTC 2010     root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
```


----------



## nekoexmachina (Dec 24, 2010)

They say UFS+ZFS together is always slow due to memory troubles; ZFS-only is better.
But as for me, I don't want to experiment with filesystems, so I use good old UFS.


----------



## Orum (Dec 25, 2010)

ZFS is best on systems with 4+ GB of RAM, as well.  I wouldn't use it on anything with less.


----------



## nekoexmachina (Dec 25, 2010)

> ZFS is best on systems with 4+ GB of RAM, as well. I wouldn't use it on anything with less.


+1 here too.
On my laptop with 1 GB of RAM and supposedly-great settings (from various sources), performance was bad as hell and the laptop had kernel panics every 4-5 hours of uptime (with some I/O from time to time).


----------



## chrcol (Dec 27, 2010)

Compression off and low RAM are the mistakes: installing ZFS while ignoring the tuning guidelines.
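The low-RAM side of this is usually addressed in /boot/loader.conf. A rough sketch for an 8.x-era box with 2 GB of RAM (the values here are illustrative guesses, not tested recommendations):

```
# /boot/loader.conf -- hypothetical ARC/kmem caps for a low-RAM machine
zfs_load="YES"
vfs.zfs.arc_max="512M"    # cap the ARC so it doesn't starve the rest of the system
vm.kmem_size="1024M"      # kmem sizing was commonly tuned on FreeBSD 8.x
```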


----------



## phoenix (Dec 29, 2010)

While it's possible to use ZFS on a single disk, performance will not be that great.  ZFS is designed for large storage setups (multiple disks, multiple controllers, lots of RAM, lots of CPU, etc), and that's where it really shines.

Configure a box with 24 hard drives, first using hardware RAID and UFS, then software RAID and UFS, then ZFS, and benchmark each setup.  Be sure to include tests for dead/dying disks, like disconnecting SATA cables in the middle of the benchmark.  And tests for corruption, like repeatedly disconnecting a SATA cable from a drive mid-benchmark, or issuing *dd* commands in the background to write garbage to the middle of the disk.  And be sure to get MD5 checksums for all data files before and after the test, to make sure everything is saved correctly throughout.  And be sure to include fsck times in the UFS benchmarks, and disk rebuild/resilver times as well.
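The before/after checksum step can be sketched like this (GNU `md5sum` shown; on FreeBSD the equivalent is `md5 -r`; the files and paths are made up for illustration):

```shell
#!/bin/sh
# Sketch: record checksums before a benchmark, verify them afterwards.
set -e
DIR=$(mktemp -d)
echo "payload one" > "$DIR/file1"
echo "payload two" > "$DIR/file2"

# Record checksums of every data file before the test.
md5sum "$DIR"/file* > "$DIR/before.md5"

# ... benchmark runs here, cables get pulled, etc. ...

# Verify afterwards; a non-zero exit status means something was corrupted.
md5sum -c "$DIR/before.md5"
```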

Yes, UFS will be faster for a single disk, maybe even for a handful of disks.  But get over 2 TB or 4 disks, and UFS-based systems get to be a pain to manage.  Especially when things die.
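For illustration, the multi-disk pool described above might look like this on the ZFS side (device names and layout purely hypothetical):

```
# Hypothetical pool: 24 drives as four 6-disk raidz2 vdevs
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23
# One command replaces the whole hardware-RAID + newfs + fstab dance.
```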

There's a lot more to storage than raw throughput.  After all, the fastest storage benchmarks in the world use /dev/zero and /dev/null; yet no one seems to store data on those devices in the real world.


----------



## Qaz (Feb 26, 2011)

Yesterday I updated one of my servers to FreeBSD 8.2 and upgraded the ZFS pool to v15. That server has one UFS HDD and one ZFS HDD for backups, and I ran some benchmarks. The difference was significant:

zfs 8.1 (before update)

```
Final score for writes:           375
Final score for reads :         42163
```

zfs 8.2 (no loader.conf tuning)

```
Final score for writes:          1273
Final score for reads :        120520
```

ufs 8.2 

```
Final score for writes:            77
Final score for reads :        119512
```
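The pool upgrade mentioned above is done with `zpool upgrade`; roughly (pool name from the earlier posts):

```
zpool upgrade -v      # list the on-disk versions this kernel supports
zpool upgrade test    # upgrade the pool to the newest supported version
```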


----------



## danbi (Feb 26, 2011)

It will get even better if you run the (still experimental) ZFS v28 code.


----------



## frank_s (Feb 26, 2011)

Qaz, how much memory does your system have out of interest?


----------



## Qaz (Feb 26, 2011)

2 GB RAM
CPU: Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz


----------



## oliverh (Feb 26, 2011)

The "versus" is an ill-fated comparison. If you're comparing just naked numbers, then ZFS loses against ext4 and btrfs. Numbers should be only one of many aspects in your consideration.


----------



## vermaden (Feb 27, 2011)

danbi said:

> It will get even better if you run the (still experimental) ZFS v28 code.



ZFS v28 has just been committed to HEAD:
http://lists.freebsd.org/pipermail/freebsd-fs/2011-February/010799.html


----------



## hedwards (Feb 28, 2011)

Data deduplication is something that I've been wanting for a while. Now, if only Linux would add support so that when I dual boot, I can use that as my common filesystem.
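Once a pool is at v21 or later, dedup is a per-dataset property; a minimal sketch, assuming a pool named `tank` (note the dedup table is kept in RAM, so it is memory-hungry):

```
zfs set dedup=on tank        # hash-based block dedup for new writes
zpool get dedupratio tank    # shows the ratio achieved so far
```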


----------



## vermaden (Feb 28, 2011)

hedwards said:

> Now, if only Linux would add support so that when I dual boot, I can use that as my common filesystem.



It already has ...
http://zfsonlinux.org/
http://zfsonlinux.org/example-zvol.html


----------



## hedwards (Feb 28, 2011)

That's what I get for not having looked recently.


----------



## phoenix (Feb 28, 2011)

There's also the FUSE implementation of ZFS.  Last time I checked, it had ZFSv22, which included dedupe.  We played around with it for a bit.  It's okay for testing and prototyping, but we found it to be too unstable for heavy use.


----------



## nekoexmachina (Mar 1, 2011)

> It already has ...
> http://zfsonlinux.org/
> http://zfsonlinux.org/example-zvol.html


There was some trouble (little or not) for me when I tried that. I was using GPT labels to create the zvols, and after using those zvols with zfs-fuse on Linux, the labels turned into device nodes every time (/dev/gpt/zfs0 -> /dev/ad8p1, etc.)


----------

