# ZFS / UFS / Soft Updates / GJournal / Bonnie Performance



## vermaden (Jan 25, 2010)

I have done some tests with UFS/ZFS using the *bonnie* benchmark.

Here are the results, if you are interested.

software:
*OS:* FreeBSD 7-CURRENT 200708 snapshot
*benchmark:* `bonnie -s 2048`
*CFLAGS:* `-O2 -fno-strict-aliasing -pipe -s`
*CPUTYPE:* `athlon-mp`
*scheduler:* ULE

hardware:
*CPU:* (single) Athlon XP 2000+ [ 12.5 x 1333MHz ]
*MEM:* 1 GB DDR 266MHz CL2
*FSB Ratio:* 1:1
*MOTHERBOARD:* AMD 760 MPX
*HDD:* (single) Maxtor 6L160P0 ATA/133

legend:

```
GJ - GJournal
    SU - Soft Updates
  lzjb - zfs set compression=lzjb ${POOL}
gzip-* - zfs set compression=gzip-* ${POOL}
```

colors:

```
[B][color="Green"]GREEN[/color][/B] - first
[B][color="Orange"]ORANGE[/color][/B] - second
   [color="Red"][B]RED[/B][/color] - third
```



```
---------Sequential Output----------   -----Sequential Input---   --Random--
                     -Per Char-   --Block---   -Rewrite--   -Per Char-    ---Block---  --Seeks---
                     K/sec %CPU   K/sec %CPU   K/sec %CPU   K/sec %CPU    K/sec %CPU   /sec  %CPU
UFS                  44735 64.9   [B][color="Red"]46970[/color][/B] 18.0   15565  7.0   41166 54.9    47447 12.9   173.9  1.1
UFS.noatime          45524 66.0   [B][color="Orange"]47032[/color][/B] 18.1   15397  7.0   40431 54.3    46874 12.8   177.8  1.1
UFS.noatime.async    [color="Red"][B]45621[/B][/color] 66.4   46510 17.8   15432  7.0   41227 55.4    47501 12.9   174.0  1.1
UFS_SU               45294 66.5   42729 17.5   15563  7.1   39849 53.4    43410 11.9   167.4  1.0
UFS_SU.noatime       [B][color="Orange"]45998[/color][/B] 67.6   42278 17.3   15378  6.9   39169 51.7    44086 12.0   166.6  1.0
UFS_SU.noatime.async [color="Green"][B]46125[/B][/color] 67.7   43361 17.7   15520  7.0   39132 52.4    43598 11.9   169.0  1.0
UFS_GJ               18357 27.5   18079  7.5   10931  4.7   40076 52.9    46950 13.3   [color="red"][B]181.1[/B][/color]  1.2
UFS_GJ.noatime       18140 27.1   16990  7.1   10973  4.7   39837 53.4    47476 13.4   169.4  1.1
UFS_GJ.noatime.async 17942 26.9   17586  7.3   11107  4.8   38021 51.1    47414 13.2   171.4  1.1
ZFS                  32858 64.1   30611 20.4   15401 10.0   39544 60.3    47483 11.0    65.5  0.8
ZFS.noatime          32463 64.5   29860 20.8   14992  9.8   40286 62.0    47717 12.9    65.3  0.7
ZFS.comp=lzjb        40061 78.8   [color="#008000"][B]86064[/B][/color] 61.5   [B][color="#008000"]55270[/color][/B] 42.2   [B][color="#008000"]51819[/color][/B] 79.8   [B][color="#008000"]132028[/color][/B] 50.1   138.0  3.2
ZFS.comp=gzip-1      25843 49.2   38214 26.8   [color="Orange"][B]25772[/B][/color] 30.7   [color="Red"][B]45479[/B][/color] 77.2   [color="#ff0000"][B]102446[/B][/color] 54.4   [color="Orange"][B]354.7[/B][/color] 21.0
ZFS.comp=gzip-9      19968 38.2   22995 16.3   [color="Red"][B]19615[/B][/color] 25.2   [color="Orange"][B]46752[/B][/color] 84.6   [color="#ffa500"][B]102759[/B][/color] 63.0   [color="Green"][B]740.6[/B][/color] 62.6
```

*ZFS_DEF:* default ZFS/FreeBSD settings for a 1 GB i386 system


```
kern.maxvnodes:           70235
vfs.zfs.prefetch_disable: 0
vfs.zfs.arc_max:          167772160
vm.kmem_size_max:         335544320
vfs.zfs.zil_disable:      0
```

*ZFS_TUNE:* tuned settings recommended at http://wiki.freebsd.org/ZFSTuningGuide


```
kern.maxvnodes:           50000
vfs.zfs.prefetch_disable: 1
vfs.zfs.arc_max:          104857600
vm.kmem_size_max:         402653184
vfs.zfs.zil_disable:      0 / 1
```
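On FreeBSD 7.x most of these are boot-time tunables, so ZFS_TUNE values like the above would typically be applied roughly like this (an illustrative sketch, not from the original post):

```
# /boot/loader.conf -- boot-time tunables (sketch of the ZFS_TUNE values)
vfs.zfs.prefetch_disable=1
vfs.zfs.arc_max=104857600       # 100 MB
vm.kmem_size_max=402653184      # 384 MB
vfs.zfs.zil_disable=1           # 1 only for the zil=disabled runs

# /etc/sysctl.conf -- runtime sysctl
kern.maxvnodes=50000
```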

*ZFS results:*



```
---------Sequential Output----------   -----Sequential Input---   --Random--
                                -Per Char-   --Block---   -Rewrite--   -Per Char-    ---Block---  --Seeks---
                                K/sec %CPU   K/sec %CPU   K/sec %CPU   K/sec %CPU    K/sec %CPU   /sec  %CPU
ZFS_DEF                         32858 64.1   30611 20.4   15401 10.0   39544 60.3    47483 11.0    65.5  0.8
ZFS_TUNE                        35637 68.1   30117 20.2   18787  9.9   35982 47.9    48953  9.3    66.3  0.7
ZFS_TUNE.zil=disabled           38353 74.9   31409 21.1   20198 10.6   35449 48.6    48207  9.6    65.6  0.7

ZFS_DEF.comp=lzjb               40061 78.8   86064 61.5   55270 42.2   51819 79.8   132028 50.1   138.0  3.2
ZFS_TUNE.comp=lzjb              40228 75.6   89397 59.1   50634 40.1   54886 91.4   156476 80.1   127.6  2.9
ZFS_TUNE.comp=lzjb.zil=disabled 40536 76.4   83370 57.4   52601 41.8   54335 92.1   151080 80.2   133.3  2.9
```
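One way to read the table: with lzjb, ZFS_DEF's sequential block output jumps from 30611 to 86064 K/sec, presumably because bonnie's test data compresses well, so fewer physical bytes hit the single disk. A quick check of the ratios (numbers taken from the table above):

```python
# Sequential block throughput (K/sec) from the ZFS_DEF rows above.
write_plain, write_lzjb = 30611, 86064
read_plain, read_lzjb = 47483, 132028

print(f"write speedup: {write_lzjb / write_plain:.1f}x")  # ~2.8x
print(f"read speedup:  {read_lzjb / read_plain:.1f}x")    # ~2.8x
```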


kernel config:


```
cpu		I686_CPU
ident		VERMADEN

options	SCHED_ULE		# ULE scheduler
options 	PREEMPTION		# Enable kernel thread preemption
options 	INET			# InterNETworking
options 	FFS			# Berkeley Fast Filesystem
options 	SOFTUPDATES		# Enable FFS soft updates support
options 	UFS_ACL		# Support for access control lists
options 	UFS_DIRHASH		# Improve performance on big directories
options 	UFS_GJOURNAL		# Enable gjournal-based UFS journaling
options 	GEOM_PART_GPT		# GUID Partition Tables.
options 	GEOM_LABEL		# Provides labelization
options 	COMPAT_43TTY		# BSD 4.3 TTY compat [KEEP THIS!]
options 	SCSI_DELAY=5000	# Delay (in ms) before probing SCSI
options 	SYSVSHM		# SYSV-style shared memory
options 	SYSVMSG		# SYSV-style message queues
options 	SYSVSEM		# SYSV-style semaphores
options 	_KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options 	KBD_INSTALL_CDEV	# install a CDEV entry in /dev
options 	ADAPTIVE_GIANT	# Giant mutex is adaptive.
options 	STOP_NMI		# Stop CPUS using NMI instead of IPI

# SMP kernel
options 	SMP			# Symmetric MultiProcessor Kernel
device		apic			# I/O APIC

# Bus support
device		eisa
device		pci

# Floppy drives
device		fdc

# ATA and ATAPI devices
device		ata
device		atadisk		# ATA disk drives
device		atapicd		# ATAPI CDROM drives
device		atapifd		# ATAPI floppy drives
options 	ATA_STATIC_ID	# Static device numbering

# SCSI peripherals
device		scbus		# SCSI bus (required for SCSI)
device		da		# Direct Access (disks)
device		cd		# CD
device		pass		# Passthrough device (direct SCSI access)

# Keyboard and the PS/2 mouse
device		atkbdc		# AT keyboard controller
device		atkbd		# AT keyboard
device		psm		# PS/2 mouse
device		kbdmux		# keyboard multiplexer

# Syscons console driver
device		sc
device		vga		# VGA video card driver
device		splash		# Splash screen and screen saver support

# Add suspend/resume support for the i8254.
device		pmtimer

# NIC
device		miibus		# MII bus support
device		fxp		# Intel EtherExpress PRO/100B (82557, 82558)

# Pseudo devices
device		loop		# Network loopback
device		random		# Entropy device
device		ether		# Ethernet support
device		pty		# Pseudo-ttys (telnet etc)
device		md		# Memory "disks"

# Berkeley Packet Filter
device		bpf		# Berkeley packet filter
```

I added this old thread here since it was present on `bsdforums.org` [RIP] and is still present on other UNIX sites/forums, but not here.


----------



## graudeejs (Jan 25, 2010)

On ZFS, lzjb is faster than gzip; gzip compresses data better than lzjb, but lzjb is much faster, AFAIK.

Who knows, maybe on a faster machine gzip would be faster than lzjb.
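The trade-off can be sketched with Python's zlib, which implements the same deflate algorithm as ZFS's gzip-N levels (lzjb itself isn't in the stdlib, so this only illustrates the low-level vs high-level gzip trade-off): higher levels compress better but take longer.

```python
# Illustrative only: zlib stands in for ZFS's gzip-N compression levels.
import time
import zlib

data = b"FreeBSD ZFS bonnie benchmark payload " * 4096  # compressible sample

for level in (1, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    ratio = len(data) / len(compressed)
    print(f"gzip-{level}: ratio {ratio:.1f}x, {elapsed_ms:.2f} ms")
```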


----------



## oliverh (Jan 25, 2010)

I see one huge advantage at home without any benchmark, ZFS literally flies while building world or updating the source/ports-tree compared to UFS+SU.


----------



## dennylin93 (Jan 25, 2010)

I've tried using gzip-1 as well. Although it's still slower than lzjb, it's faster than gzip (default is gzip-6) and has a nice compression ratio.


----------



## vermaden (Jan 25, 2010)

Here are my current results on new box:

Create:
```
# zfs create basefs/test
# zfs set mountpoint=/test basefs/test
```

Options:
```
# zfs set compression=[on|off] basefs/test
# zfs set checksum=[on|off]    basefs/test
```



```
# cd /test && bonnie -s 8192 (this machine has 3GB RAM)
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char-  --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  K/sec %CPU  /sec %CPU 
         8192 36165 36.9 46683 16.4 20419  9.7 74582 78.0  94540 13.8  75.1  1.8 ZFS checksum=on  compression=off (default)
         8192 36325 37.1 43597 15.6 19792  9.5 72155 75.4  83432 12.8  58.6  1.6 ZFS checksum=off compression=off 
         8192 36345 37.0 45016 16.5 23312 10.5 69788 72.5  84694 12.8  67.9  1.2 ZFS checksum=off compression=on
         8192 56174 57.8 94827 31.1 71615 28.9 81527 88.6 301633 59.8 113.7  1.3 ZFS checksum=on  compression=lzjb
         8192 58430 59.1 90259 29.3 79894 32.2 82658 89.6 324807 64.3 150.5  1.4 ZFS checksum=off compression=lzjb
```

/boot/loader.conf 

```
# modules
zfs_load="YES"
ahci_load="YES"

# zfs tuning
vm.kmem_size=536870912          # 512 MB
vm.kmem_size_max=536870912      # 512 MB
vfs.zfs.vdev.cache.size=8388608 #   8 MB
vfs.zfs.arc_max=67108864        #  64 MB
vfs.zfs.prefetch_disable=0      # enable prefetch

# page share factor per proc
vm.pmap.shpgperproc=512

# default 1000
kern.hz=100

# avoid additional 128 interrupts per second per core
hint.atrtc.0.clock=0

# do not power devices without driver
hw.pci.do_power_nodriver=3

# ahci power management
hint.ahcich.0.pm_level=5
hint.ahcich.1.pm_level=5
hint.ahcich.2.pm_level=5
hint.ahcich.3.pm_level=5
hint.ahcich.4.pm_level=5
hint.ahcich.5.pm_level=5
```

/etc/sysctl.conf

```
# fs
vfs.read_max=32
```


----------



## oliverh (Jan 25, 2010)

Well, without 8-stable (lots of improvements for FreeBSD-related ZFS bugs) and decent memory (or even decent hardware) you will certainly just see a glimpse of ZFS performance. It's nice to see ZFS run on such low specs, but it creates a distorted image of this great filesystem and it is barely comparable in my opinion.


----------



## Savagedlight (Jan 25, 2010)

I'm throwing in some benchmarks showing the other side of ZFS. 

`$ uname -a`

```
FreeBSD freebsd.* 8.0-RELEASE-p2 FreeBSD 8.0-RELEASE-p2 #6: Thu Jan 21 05:16:55 CET 2010     marie@freebsd.*:/usr/obj/usr/src/sys/ServeWho  amd64
```

`# zpool status`

```
pool: storage
 state: ONLINE
 scrub: scrub completed after 2h4m with 0 errors on Sun Jan 24 22:11:49 2010
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0
        spares
          ad6       AVAIL

errors: No known data errors
```

Those disks are all Western Digital Greenpower 1.5TB disks, with "idle3 timer" set to 25.5s.

*Specifications*:

```
CPU: Intel Core 2 Duo E7400 2.8GHz, Socket 775, 3MB, FSB 1066, Boxed
Motherboard: MSI P45 NEO-F, P45, Socket-775, DDR2, 1600FSB, ATX, ICH10, PCI-Ex(2.0)x16
RAM: 2x Corsair Value S. PC5300 DDR2 4GB Kit w/two matched Value Select 2048MB  (8GB total)
```

*Benchmark*:
`# zfs create storage/test`
`# cd /storage/test && bonnie -s 8192`

```
-------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char-   --Block--- -Rewrite-- -Per Char-  --Block--- --Seeks---
Machine    MB K/sec  %CPU   K/sec %CPU K/sec  %CPU  K/sec  %CPU K/sec  %CPU  /sec   %CPU
         8192 [color="DarkGreen"]142033[/color] 84.5  [color="Red"]156967[/color] 31.2  90154 21.6 154796  72.2 245225 22.3  164.2  0.8 (compression=off checksum=on)
         8192 [color="SandyBrown"]133396[/color] 76.9  [color="DarkGreen"]270523[/color] 54.9 [color="DarkGreen"]226394[/color] 50.5 [color="SandyBrown"]184542[/color]  83.0 [color="DarkGreen"]735736[/color] 63.8  168.2  0.6 (compression=lzjb checksum=on)
         8192 [color="Red"]103418[/color] 58.8  [color="SandyBrown"]178906[/color] 36.1 [color="SandyBrown"]149135[/color] 32.7 [color="DarkGreen"]202452[/color]  89.9 [color="Red"]673094[/color] 56.6  226.8  0.8 (compression=gzip-3 checksum=on)
         8192  85329 49.2  118054 23.9 107935 24.0 [color="Red"]168482[/color]  74.9 [color="SandyBrown"]689845[/color] 54.5  828.8  2.5 (compression=gzip-6 checksum=on)
         8192  82837 47.6  111416 23.2 [color="Red"]108160[/color] 23.8 155636  69.6 663867 52.9 1605.9  4.4 (compression=gzip-9 checksum=on)
```


The system had 4-6GB of free RAM at all times.


----------



## vermaden (Jan 25, 2010)

oliverh said:
			
		

> Well, without 8-stable (lots of improvements for FreeBSD-related ZFS bugs) and decent memory (or even decent hardware) you will certainly just see a glimpse of ZFS performance.



I will be changing my current storage setup, since I own one of the new "deathstar" series disks, a Western Digital Caviar Green to be precise. Maybe I will end up with RAID5 (raidz1) on 3 disks, or maybe a mirror on two bigger disks. I am currently looking for some 'not so green' drives, maybe two more Caviar Blue 640GB for example.

I would like to get RE3 drives, but they are very pricey ...



			
				oliverh said:
			
		

> It's nice to see ZFS run on such low specs, but it creates a distorted image of this great filesystem and it is barely comparable in my opinion.


You mean i386, loader.conf tuning, system RAM, or using it on just one disk?


----------



## volatilevoid (Jan 25, 2010)

Really helpful thread, vermaden. 

Do you mind if I use your values to tune my i386 box? Mine are a bit too restrictive I believe and on many concurrent file operations the system is lagging.


----------



## vermaden (Jan 25, 2010)

@volatilevoid

Thanks mate. I haven't played a lot with these values; I should probably put them in some for loop and test all night by script to find which are best, but these just seem reasonable. I am also curious what *oliverh* will say about which part of my setup limits ZFS that much.

Also, post your 'restrictive' settings, I am curious what other people use.


----------



## volatilevoid (Jan 25, 2010)

I took the values from the ZFS Tuning Guide for 768M memory. I tried the ZFS defaults with 512M kernel memory, but that gave me panics under high FS load.

I'd guess that tuning on amd64 is much easier...


----------



## vermaden (Jan 26, 2010)

volatilevoid said:
			
		

> I'd guess that tuning on amd64 is much easier...



Thanks for sharing.

The best thing about running ZFS on amd64 is that it does not need tuning at all.


----------



## Matty (Jan 26, 2010)

vermaden said:
			
		

> I will be changing current storage setup, since I own the new "deathstar" series disk, Western Diigtal Caviar Geern to be precise, maybe Iwill end up with raid5 (raidz1) with 3 disks or maybe some mirror on two bigger disks, I currently seek for some 'not so green' drives, maybe two more of Caviar Blue 640GB for example.


You could check out the new Samsung F3 1TB disks.


----------



## Matty (Jan 26, 2010)

Savagedlight said:
			
		

> I'm throwing in some benchmarks showing the other side of ZFS.


Did you run the test with AHCI on?


----------



## vermaden (Jan 26, 2010)

Matty said:
			
		

> you could check out the new samsung F3 1TB disks



Thanks Matty, but Samsung drives have a big problem with multithreaded reads: http://xbitlabs.com/articles/storage/display/1tb-14hdd-roundup_16.html

I currently have a WD6400AAKS (WD Caviar Blue), so I will probably end up with two or three more of these in RAID5.


----------



## Matty (Jan 26, 2010)

vermaden said:
			
		

> Thanks Matty, but Samsungs have big problem with multithreaded reads: http://xbitlabs.com/articles/storage/display/1tb-14hdd-roundup_16.html


I see, but these are the F1 drives, not the F3 (Samsung Spinpoint F3 HD103SJ).

If you look at the picture http://www.xbitlabs.com/images/storage/1tb-14hdd-roundup/p13.jpg of the Samsung drive, you can see its manufacture date: 2007.11.

Edit: will try at home; I've got 4x1TB of the aforementioned F3s in raidz1.


----------



## oliverh (Jan 26, 2010)

vermaden said:
			
		

> Thanks for sharing.
> 
> The best thing about running ZFS on amd64 is that on amd64 it does not need tuning at all



Well, you have to tune under certain conditions, even on OpenSolaris.

But to answer your initial question:

CPU: (single) Athlon XP 2000+ [ 12.5 x 1333MHz ]
MEM: 1 GB DDR 266MHz CL2
FSB Ratio: 1:1
MOTHERBOARD: AMD 760 MPX
HDD: (single) Maxtor 6L160P0 ATA/133

With those specs you're using ZFS in low-power mode (in terms of secure operation and performance). To unleash its power, and to understand my point, compare it to a Porsche: it isn't a car for city traffic, and so it isn't of much use in such an environment. Anything is possible, but many things are barely reasonable. ZFS is a filesystem designed for really big servers or comparable workstations.


----------



## Savagedlight (Jan 26, 2010)

Matty said:
			
		

> did you run the test with ahci on?



AHCI was on in BIOS, but the AHCI kernel module was not loaded.


----------



## vermaden (Jan 27, 2010)

oliverh said:
			
		

> Well, you have to under certain conditions even on OpenSolaris.
> 
> But to answer your initial question:
> 
> ...



Yes, that hardware is ancient; I haven't owned it for a long time. My current setup provides these results, but I must get some more disks (and get rid of the WD Green):
http://forums.freebsd.org/showpost.php?p=64121&postcount=5



			
				Matty said:
			
		

> I see but these are the F1 drives not the F3 (Samsung Spinpoint F3 HD103SJ)
> 
> If you look at the pic http://www.xbitlabs.com/images/storage/1tb-14hdd-roundup/p13.jpg of the samsung drive you see its manufacture date 2007.11.
> 
> edit: will try at home. got 4x1tb with the named F3's in raidz1.


Thanks for the info. I am only a little wary of these new F3s, since they draw surprisingly little power, and I wonder whether they incorporate some 'green' tricks like WD's Caviar Green drives do:
http://tomshardware.com/reviews/2tb-hdd-7200,2430-10.html

Also, the random access time is not as good as on the WD Caviar Blue (11.9-12.4 vs 13.5-13.9 ms, so more I/O operations per second on the WD), but the F3 has much better MB/s transfers. Hard to decide ...


----------



## volatilevoid (Jan 27, 2010)

What about the WD Caviar Black series?


----------



## vermaden (Jan 27, 2010)

I ended up ordering 3 x *Samsung Spinpoint F3 HD103SJ 1TB*. After all the reviews they seem a better choice than the others, with much lower power consumption and lower temperatures. Thanks for the suggestion, *Matty*.


----------



## Matty (Jan 27, 2010)

vermaden said:
			
		

> I ended up ordering 3 x *Samsung Spinpoint F3 HD103SJ 1TB*, after all reviews they seem better choice then others and a lot lower on power consumption and with lower temperature, thanks for suggestion *Matty*



I couldn't find any reviews covering multithreaded performance on these drives, which is too bad, because I would really like to know how they perform.

Edit: well, there is one, but it's in German: http://www.ocaholic.ch/xoops/html/modules/smartsection/item.php?itemid=369&page=6
Tested with `iozone -Rb test_xk.out -i0 -i1 -i2 -+n -r xk -s4g -t2`.

Too bad there is no comparison with any WD disks.


----------



## vermaden (Jan 28, 2010)

Matty said:
			
		

> I couldn't find any reviews about multithreading on these drives which is too bad because I would really like to know how they perform.
> 
> (...)
> 
> too bad there is no comparison with some WD disks



I have also found these:
http://bit-tech.net/hardware/storage/2009/10/06/samsung-spinpoint-f3-1tb-review/9
http://tomshardware.com/charts/2009-3.5-desktop-hard-drive-charts/IOMeter-2006.07.27,1039.html

They are really good (better than the WD Black/RE) when it comes to performance per watt; raw I/O operations are faster on the WD Black/RE, but raw interface performance is faster on the Samsung F3.

Also, only the 2TB version of the WD Black/RE uses 500GB platters (same as the Samsung F3); older WDs use 320GB platters (3 of them in the 1TB WD Black).

The only 'bad' thing is that the Samsung F3 lacks the 5-year warranty that WD Black/RE drives have ... (only 3 years for the Samsung)

*WD Caviar Black 1TB* (WD1001FALS)

*Samsung F3 1TB* (HD103SJ)


----------



## vermaden (Feb 4, 2010)

diskinfo(1) results for *Samsung F3 1TB (HD103SJ)*


```
# dmesg | grep ada0
ada0 at ahcich0 bus 0 target 0 lun 0
ada0: <SAMSUNG HD103SJ 1AJ100E4> ATA/ATAPI-8 SATA 2.x device
ada0: 300.000MB/s transfers
ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada0: Native Command Queueing enabled
```


```
# diskinfo -c -v -t ada0
ada0    
        512             # sectorsize
        1000204886016   # mediasize in bytes (932G)
        1953525168      # mediasize in sectors
        1938021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        S246J90Z131004  # Disk ident.

I/O command overhead:
        time to read 10MB block      0.077151 sec       =    0.004 msec/sector
        time to read 20480 sectors   2.211735 sec       =    0.108 msec/sector
        calculated command overhead                     =    0.104 msec/sector

Seek times:
        Full stroke:      250 iter in   5.234394 sec =   20.938 msec
        Half stroke:      250 iter in   3.918627 sec =   15.675 msec
        Quarter stroke:   500 iter in   6.541610 sec =   13.083 msec
        Short forward:    400 iter in   1.145674 sec =    2.864 msec
        Short backward:   400 iter in   2.402746 sec =    5.992 msec
        Seq outer:       2048 iter in   0.161329 sec =    0.079 msec
        Seq inner:       2048 iter in   0.216652 sec =    0.106 msec
Transfer rates:
        outside:       102400 kbytes in   0.697989 sec =   146707 kbytes/sec
        middle:        102400 kbytes in   0.832913 sec =   122942 kbytes/sec
        inside:        102400 kbytes in   1.297791 sec =    78903 kbytes/sec
```
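The "calculated command overhead" figure appears to be the per-sector time of many sector-sized reads minus the per-sector time of one large 10 MB read (both spans cover 20480 sectors at 512 B each). Re-deriving it from the numbers above:

```python
# Re-derive diskinfo's command-overhead figure from the timings above.
SECTOR_SIZE = 512
sectors = 10 * 1024 * 1024 // SECTOR_SIZE   # 10 MB block = 20480 sectors
block_ms = 0.077151 / sectors * 1000        # one large read: ~0.004 ms/sector
single_ms = 2.211735 / sectors * 1000       # sector-sized reads: ~0.108 ms/sector
overhead = single_ms - block_ms             # per-command overhead: ~0.104 ms
print(f"{overhead:.3f} msec/sector")
```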

Will post some ZFS benchmarks later ...


----------



## vermaden (Feb 5, 2010)

*RAID0 on 3 x Samsung F3 1TB (HD103SJ)*

Simple *bonnie* benchmark performance:

```
raw# gstripe status
       Name  Status  Components
stripe/raw       UP  ada0s2
                     ada1s2
                     ada2s2

bonnie# bonnie -s 2560m
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB  K/sec %CPU  K/sec %CPU  K/sec %CPU  K/sec %CPU   K/sec %CPU    /sec  %CPU
         2560 105848 77.5 255758 46.2  53981 11.5  87115 82.3  214039 26.5  8254.0  16.3 UFS
         2560 110135 78.7 255036 46.6  52714 11.3  86784 82.0  215048 27.2 10383.1  20.4 UFS (SoftUpdates)
         2560  77014 60.5 114061 24.4 110470 27.5 110183 98.0 1286423 99.7 49717.9 180.7 UFS (GJournal/async)
```

Simple *raw* device performance:

```
write# dd < /dev/zero > /dev/stripe/raw bs=4M count=256
256+0 records in
256+0 records out
1073741824 bytes transferred in 4.187841 secs (256395083 bytes/sec) [250MB/s]

read# dd > /dev/null < /dev/stripe/raw bs=4M count=256
256+0 records in
256+0 records out
1073741824 bytes transferred in 3.785693 secs (283631498 bytes/sec) [280MB/s]
```
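The bracketed MB/s figures are simply bytes divided by elapsed time, as reported by dd; a quick check of the write case (numbers from the output above):

```python
# Check dd's reported write rate: 256 records x 4 MiB in 4.187841 s.
bytes_written = 256 * 4 * 1024 * 1024   # 1073741824 bytes, as dd reports
elapsed = 4.187841                      # seconds, from the dd output above
rate = bytes_written / elapsed          # ~256395083 bytes/sec
print(f"{rate / 1_000_000:.0f} MB/s")   # matches the ~250 MB/s figure
```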


*RAID5 (zfs raidz) on 3 x Samsung F3 1TB (HD103SJ)*

Simple *bonnie* benchmark performance:

```
zfs# zpool create basefs raidz ada0s3 ada1s3 ada2s3
zfs# zpool status
  pool: basefs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

zfs# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
basefs  2.72T  3.00G  2.72T     0%  ONLINE  -

zfs# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
basefs     2.00G  1.78T  2.00G  /basefs

zfs# df -h /basefs
Filesystem    Size    Used   Avail Capacity  Mounted on
basefs        1.8T    2.0G    1.8T     0%    /basefs

zfs# cd /basefs && bonnie -s 8192m
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec  %CPU K/sec %CPU K/sec %CPU K/sec  %CPU   /sec %CPU
	 8192 46592 49.4  70095 21.9 44199 19.9 73329 77.9 151731 24.8   97.8  1.5 checksum=on  compression=off
	 8192 50684 51.1  76920 24.0 47233 19.7 85592 92.9 153819 24.3  115.0  1.2 checksum=off compression=off
	 8192 59356 59.3 103940 32.5 84086 34.0 83807 89.0 348380 55.3  157.1  1.9 checksum=on  compression=on (lzjb)
	 8192 58047 58.3 102645 32.1 83974 34.0 84356 89.6 353521 56.8  159.5  1.9 checksum=off compression=on (lzjb)
	 8192 43438 43.6  66016 20.6 49970 19.8 78088 81.5 256126 40.0  247.0  2.6 checksum=on  compression=gzip-1
	 8192 42704 43.1  65948 20.6 50832 20.1 77435 81.9 256208 40.0  255.0  2.5 checksum=off compression=gzip-1
	 8192 36383 36.7  45631 15.6 41276 17.1 76290 82.0 250496 42.0 1353.5  8.1 checksum=on  compression=gzip-9
	 8192 36896 37.1  46299 14.4 41364 17.0 77537 81.7 259652 40.6 1236.4  7.4 checksum=off compression=gzip-9
```
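The gap between `zpool list` (2.72T) and `zfs list` (1.78T available) above is the raidz1 parity, roughly one disk out of three, plus a bit of space ZFS reserves for itself. A back-of-the-envelope check, assuming 3 x 1 TB drives at ~931 GiB usable each:

```python
# raidz1 over 3 disks stores 2 data + 1 parity share per stripe.
disks = 3
per_disk_gib = 931                    # a 1 TB drive is ~931 GiB
raw = disks * per_disk_gib            # what `zpool list` counts: ~2.73 TiB
usable = (disks - 1) * per_disk_gib   # after parity: ~1.82 TiB
print(f"raw {raw / 1024:.2f} TiB, usable {usable / 1024:.2f} TiB")
```

The usable estimate lands slightly above the 1.78T that `zfs list` shows, consistent with ZFS's internal reservations.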

Simple *raw* device performance:

```
write# dd < /dev/zero > /basefs/FILE bs=4M count=512
512+0 records in
512+0 records out
2147483648 bytes transferred in 23.038062 secs (93214596 bytes/sec) [90MB/s]

read# dd > /dev/null < /basefs/FILE bs=4M count=512
512+0 records in
512+0 records out
2147483648 bytes transferred in 13.027241 secs (164845622 bytes/sec) [160MB/s]
```


----------



## chrcol (Feb 9, 2010)

Thanks to the OP; only his test seems credible, as the others do not have comparative tests alongside their ZFS results.


----------



## Matty (Feb 10, 2010)

4x 1TB samsung F3 disks onboard sata300 controller
Asus A8N with  AMD X2 4600 CPU
4GB ram ddr
8-Stable
ZFS raid10 (striped mirrors) with a 2.7GB ARC cache; the pool is 48% full and prefetch is enabled


```
Record Size 128 KB
	File size set to 4194304 KB
	No retest option selected
	Command line used: iozone -C -t1 -r128k -s4g -i0 -i1 -i2 -+n
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 1 process
	Each process writes a 4194304 Kbyte file in 128 Kbyte records

	Children see throughput for  1 initial writers 	=  222891.94 KB/sec
	Parent sees throughput for  1 initial writers 	=  174595.94 KB/sec
	Min throughput per process 			=  222891.94 KB/sec 
	Max throughput per process 			=  222891.94 KB/sec
	Avg throughput per process 			=  222891.94 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  222891.94 KB/sec

	Children see throughput for  1 readers 		=  349713.72 KB/sec
	Parent sees throughput for  1 readers 		=  349674.88 KB/sec
	Min throughput per process 			=  349713.72 KB/sec 
	Max throughput per process 			=  349713.72 KB/sec
	Avg throughput per process 			=  349713.72 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  349713.72 KB/sec

	Children see throughput for 1 random readers 	=   24939.10 KB/sec
	Parent sees throughput for 1 random readers 	=   24938.90 KB/sec
	Min throughput per process 			=   24939.10 KB/sec 
	Max throughput per process 			=   24939.10 KB/sec
	Avg throughput per process 			=   24939.10 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =   24939.10 KB/sec

	Children see throughput for 1 random writers 	=  195978.80 KB/sec
	Parent sees throughput for 1 random writers 	=  174361.77 KB/sec
	Min throughput per process 			=  195978.80 KB/sec 
	Max throughput per process 			=  195978.80 KB/sec
	Avg throughput per process 			=  195978.80 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  195978.80 KB/sec
```


----------



## vermaden (Feb 10, 2010)

sys: FreeBSD | amd64 | 8.0-RELEASE-p2
mob: Intel Q35 motherboard
cpu: Intel E6320 1.86GHz
ram: 4 GB DDR2 800MHz
hdd: Samsung F3 1TB (3x)


```
% zpool status
  pool: basefs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors
```

Each drive partitioned this way:

```
512m   ufs (root on gmirror)
  1g   swap
930g   zpool
  4g   vfat
```

/boot/loader.conf
```
vfs.zfs.arc_max=128M
```


```
Iozone: Performance Test of File I/O
	        Version $Revision: 3.326 $
		Compiled for 64 bit mode.
		Build: freebsd 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Wed Feb 10 22:03:37 2010

	Record Size 128 KB
	File size set to 4194304 KB
	No retest option selected
	Command line used: iozone -C -t1 -r128k -s4g -i0 -i1 -i2 -+n
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 1 process
	Each process writes a 4194304 Kbyte file in 128 Kbyte records

	Children see throughput for  1 initial writers 	=   99438.62 KB/sec
	Parent sees throughput for  1 initial writers 	=   98864.46 KB/sec
	Min throughput per process 			=   99438.62 KB/sec 
	Max throughput per process 			=   99438.62 KB/sec
	Avg throughput per process 			=   99438.62 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =   99438.62 KB/sec

	Children see throughput for  1 readers 		=  103475.85 KB/sec
	Parent sees throughput for  1 readers 		=  103395.10 KB/sec
	Min throughput per process 			=  103475.85 KB/sec 
	Max throughput per process 			=  103475.85 KB/sec
	Avg throughput per process 			=  103475.85 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  103475.85 KB/sec

	Children see throughput for 1 random readers 	=    8650.13 KB/sec
	Parent sees throughput for 1 random readers 	=    8649.90 KB/sec
	Min throughput per process 			=    8650.13 KB/sec 
	Max throughput per process 			=    8650.13 KB/sec
	Avg throughput per process 			=    8650.13 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =    8650.13 KB/sec

	Children see throughput for 1 random writers 	=   84414.15 KB/sec
	Parent sees throughput for 1 random writers 	=   84224.03 KB/sec
	Min throughput per process 			=   84414.15 KB/sec 
	Max throughput per process 			=   84414.15 KB/sec
	Avg throughput per process 			=   84414.15 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =   84414.15 KB/sec



iozone test complete.
```


----------



## volatilevoid (Feb 11, 2010)

Man, I'd love to try ZFS on my 3 X25-M 80G disks. If only there was TRIM support in 8.0-RELEASE...


----------



## vermaden (Feb 11, 2010)

@volatilevoid

TRIM is supported on 8-STABLE and 9-CURRENT:
http://gitorious.org/freebsd/freebsd/commit/358451f9131486d4aaadafe076fb8f70730a503a


----------



## Matty (Feb 11, 2010)

I should add that I use raid10 (mirrors) (with a separate boot disk) because random reads in raidz just suck.


----------



## Matty (Feb 11, 2010)

@vermaden

why only a 128 MB ARC cache?


----------



## vermaden (Feb 11, 2010)

@Matty

I needed to reduce it, since the unixbench benchmark caused a kernel panic with the defaults; maybe I will increase it to see how it influences performance.


----------



## vermaden (Feb 14, 2010)

Results with the same hardware but with new settings:

/boot/loader.conf

```
vfs.zfs.arc_max=1024M [color="gray"](I need to check if [FILE]unixbench[/FILE] does not panic now)[/color]
vfs.zfs.prefetch_disable=0
```
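As a side note, loader.conf(5) values are strings, so the quoted spelling below is the documented form; after a reboot the effective values can be checked with sysctl. A sketch, not taken from the thread:

```shell
# /boot/loader.conf (quoted form, per loader.conf(5)):
#   vfs.zfs.arc_max="1024M"
#   vfs.zfs.prefetch_disable="0"

# After reboot, verify what the kernel actually picked up:
sysctl vfs.zfs.arc_max
sysctl vfs.zfs.prefetch_disable
```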



```
Iozone: Performance Test of File I/O
	        Version $Revision: 3.326 $
		Compiled for 64 bit mode.
		Build: freebsd 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Sun Feb 14 17:34:18 2010

	Record Size 128 KB
	File size set to 4194304 KB
	No retest option selected
	Command line used: iozone -C -t1 -r128k -s4g -i0 -i1 -i2 -+n
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 1 process
	Each process writes a 4194304 Kbyte file in 128 Kbyte records

	Children see throughput for  1 initial writers 	=  169460.83 KB/sec
	Parent sees throughput for  1 initial writers 	=  156999.18 KB/sec
	Min throughput per process 			=  169460.83 KB/sec 
	Max throughput per process 			=  169460.83 KB/sec
	Avg throughput per process 			=  169460.83 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  169460.83 KB/sec

	Children see throughput for  1 readers 		=  157057.41 KB/sec
	Parent sees throughput for  1 readers 		=  156875.65 KB/sec
	Min throughput per process 			=  157057.41 KB/sec 
	Max throughput per process 			=  157057.41 KB/sec
	Avg throughput per process 			=  157057.41 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  157057.41 KB/sec

	Children see throughput for 1 random readers 	=    9479.04 KB/sec
	Parent sees throughput for 1 random readers 	=    9478.56 KB/sec
	Min throughput per process 			=    9479.04 KB/sec 
	Max throughput per process 			=    9479.04 KB/sec
	Avg throughput per process 			=    9479.04 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =    9479.04 KB/sec

	Children see throughput for 1 random writers 	=  158118.22 KB/sec
	Parent sees throughput for 1 random writers 	=  149010.88 KB/sec
	Min throughput per process 			=  158118.22 KB/sec 
	Max throughput per process 			=  158118.22 KB/sec
	Avg throughput per process 			=  158118.22 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  158118.22 KB/sec



iozone test complete.
```


----------



## alvaro (Jan 17, 2012)

*zfs on ssd*

I've run this benchmark on an ADATA 500 series 32 GB SSD (read up to 260 MB/s, write up to 120 MB/s); the FreeBSD version is 8.2-RELEASE-p3 with the GENERIC kernel.
GJournal did not work (kernel panic). One thing to note: with ZFS the system almost hangs (very unresponsive in other tasks). Typical RAM usage was 1.5 GB during the tests.


```
-------Sequential Output--------       ---Sequential Input--       --Random--
     -Per Char-    --Block---  -Rewrite--   -Per Char-    --Block---    --Seeks---
MB    K/sec %CPU   K/sec %CPU  K/sec %CPU   K/sec %CPU    K/sec %CPU     /sec %CPU

[I]ufs[/I]
8192  62122 68.2   58604  4.2  28637  6.7   70934 94.6   217208 16.3   8603.8 22.8
[I]ufs async[/I]
8192  62991 72.1   57936  4.2  29714  8.2   74173 96.4   222479 16.6   8560.7 22.8
[I]ufs async+noatime[/I]
8192  62586 71.4   56944  4.1  29829  8.4   74337 96.5   223479 16.4   9383.6 23.8
[I]ufs+su[/I]
8192  62140 71.1   56024  4.5  30733  8.3   72549 94.4   221915 16.1   9168.1 22.8
[I]ufs+su async[/I]
8192  63294 72.5   59221  4.8  28707  8.0   72880 94.6   223916 16.5   7993.1 20.1
[I]ufs+su noatime[/I]
8192  63345 72.5   59745  4.8  29388  8.2   67834 88.1   223271 16.7   8544.4 23.1
[I]ufs+su async+noatime[/I]
8192  62949 72.0   59529  4.8  28889  7.9   63622 82.8   222666 16.3   8523.2 22.0
[I]zfs[/I]
8192  81482 88.3   61524  9.9  58340 10.6   67106 87.3   227557 17.1   3760.3 11.3
[I]zfs comp=lzjb[/I]
8192  85707 91.8  464302 65.7 206292 29.2   66975 86.8   618647 35.4  10031.2 30.6
[I]zfs comp=gzip-1[/I]
8192  76810 82.4  131913 18.2 157721 21.2   70064 90.8   618953 33.1   9715.3 24.2
[I]zfs comp=gzip[/I]
8192  62788 67.4   63425  9.7  64563  9.8   64678 83.7   461306 24.3   8611.2 24.9
[I]zfs comp=gzip-9[/I]
8192  77806 83.3   60555 10.6  63580  9.8   72415 93.7   476410 26.2   9343.9 28.2
```


----------

