# How to check how fast ZFS is reading/writing?



## wonslung (Jun 19, 2009)

Is there any way to check how fast or slow ZFS is working?

I did a big copy and checked zpool iostat -v 10 10, and the numbers kind of threw me off...
It didn't show ANY reads at all, just writes, and even then it didn't look very fast...


----------



## kegf (Jun 19, 2009)

Maybe here - http://lists.freebsd.org/pipermail/freebsd-geom/2007-September/002590.html


----------



## wonslung (Jun 19, 2009)

Nah, I figured it out. It was actually the correct command; I just had it set to sample too often, which was giving me weird results.

Basically, if I ran it every 2 seconds like this:

```
zpool iostat -v 2
```
it would show really low numbers or zeros.

If I ran it like this:

```
zpool iostat -v 10
```
I'd see 70-150 MB/s reads and writes.

I'm not sure if there's a better way or not, but at least I know it's actually in the triple digits, which is what it SHOULD be... though I am getting panics sometimes, which is odd, because I thought ZFS didn't require tuning on 7.2.
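For what it's worth, the short-interval behavior seems to be expected: ZFS batches writes into transaction groups and commits them to disk every few seconds, so a 2-second sample can land entirely inside an idle gap between commits. A longer interval (pool name `system` here is just an example) smooths that out:

```shell
# The first line zpool iostat prints is the average since boot; the
# lines after it cover one interval each.  With a very short interval
# the samples alternate between write bursts and idle gaps, so a
# 10-second interval gives a much more representative number.
zpool iostat -v system 10
```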


----------



## phoenix (Jun 19, 2009)

Have a look at gstat(8) as well.  It will list all GEOM devices (including the physical devices, gmirror devices, labels, etc.) and stats on what they are doing.  You can filter the view with -f on the command line, and set the refresh rate with -I (the value is in microseconds, so 3 seconds is 3000000).

For example, *gstat -I 3000000 -f label* will only show devices with "label" in the name, refreshing the screen every 3 seconds.
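As a fenced example (the `ad4` filter below is just an illustration, matching the disk name used later in this thread):

```shell
# Show only GEOM providers whose names match "ad4", refreshing every
# 3 seconds (-I takes its interval in microseconds).
gstat -I 3000000 -f ad4
```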


----------



## avilla@ (Jul 4, 2009)

Hello!
I'm trying to benchmark ZFS (v13) speed on my system, and I'm seeing quite low results...
Here's my setup:


```
# cat /var/log/dmesg.today | head
FreeBSD 7.2-STABLE #1: Thu Jun 18 12:20:31 CEST 2009
    root@echo.hoth:/usr/obj/usr/src/sys/TPR60
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Core(TM) Duo CPU      T2300  @ 1.66GHz (1662.51-MHz 686-class CPU)
  Origin = "GenuineIntel"  Id = 0x6ec  Stepping = 12
  Features=0xbfe9fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0xc189<SSE3,MON,EST,TM2,xTPR,PDCM>
  AMD Features=0x100000<NX>
  Cores per package: 2
real memory  = 1600978944 (1526 MB)
avail memory = 1553580032 (1481 MB)
ACPI APIC Table: <LENOVO TP-7C   >
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1

# cat /boot/loader.conf | grep kmem
vm.kmem_size="512M"
vm.kmem_size_max="512M"

# gpart show ad4
=>       34  156301421  ad4  GPT  (75G)
         34        128    1  freebsd-boot  (64K)
        162  156301293    2  freebsd-zfs  (75G)

# zpool status
  pool: system
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        system      ONLINE       0     0     0
          ad4p2     ONLINE       0     0     0

errors: No known data errors
```

And here's what I get:


```
# iozone -R -l 5 -u 5 -r 4M -s 256M
Children see throughput for  5 initial writers  =   18501.12 KB/sec
Parent sees throughput for  5 initial writers   =   12603.72 KB/sec
Min throughput per process                      =    2778.95 KB/sec 
Max throughput per process                      =    5081.93 KB/sec 
Avg throughput per process                      =    3700.22 KB/sec 
Min xfer                                        =  143360.00 KB     

Children see throughput for  5 rewriters        =   14975.33 KB/sec
Parent sees throughput for  5 rewriters         =   14522.65 KB/sec
Min throughput per process                      =    2866.66 KB/sec 
Max throughput per process                      =    3163.15 KB/sec 
Avg throughput per process                      =    2995.07 KB/sec 
Min xfer                                        =  237568.00 KB     

Children see throughput for  5 readers          =    8310.40 KB/sec
Parent sees throughput for  5 readers           =    8076.86 KB/sec
Min throughput per process                      =    1206.17 KB/sec 
Max throughput per process                      =    2403.73 KB/sec 
Avg throughput per process                      =    1662.08 KB/sec 
Min xfer                                        =  135168.00 KB
```

I've had just one panic in the month and a half I've been using ZFS (I'm using FreeBSD as a desktop OS on my laptop), so I'm quite happy with it, but it seems a bit too slow...
What could I do? Is there anything wrong with my configuration? Should I post some more results?

Thanks for your attention!


----------



## wonslung (Jul 4, 2009)

I'm PRETTY sure I read somewhere that ZFS isn't really as fast as UFS yet for single-drive configurations.  To get most of the features you need some kind of redundant setup as well.


----------



## avilla@ (Jul 5, 2009)

wonslung said:

> I'm PRETTY sure I read somewhere that ZFS isn't really as fast as UFS *yet* for single-drive configurations.



*Yet*? Did you read it's going to be improved? I hope so, since I'm not leaving ZFS even if it's slower than UFS.
Anyway, it seems they're trying to keep it secret: I couldn't find anything about this in the ZFS best practices guide or similar sources...

By the way: am I at least taking advantage of the disk cache, even though the pool is on ad4*p2*? I think so, since *p1* is just a boot partition, but I'm not familiar with GPT; I just wanted to try the new technology.


----------



## avilla@ (Jul 5, 2009)

Mh, *this* seems to be at least a partial confirmation...


----------



## wonslung (Jul 6, 2009)

xzhayon said:

> *Yet*? Did you read it's going to be improved? I hope so, since I'm not leaving ZFS even if it's slower than UFS.
> Anyway, it seems they're trying to keep it secret: I couldn't find anything about this in the ZFS best practices guide or similar sources...
> 
> By the way: am I at least taking advantage of the disk cache, even though the pool is on ad4*p2*? I think so, since *p1* is just a boot partition, but I'm not familiar with GPT; I just wanted to try the new technology.



Dude, just get a second drive with the same geometry and add it on as a mirror.  Hard drives are DIRT cheap these days.  Newegg has some great prices.

With the mirror you'll get better performance, and you'll be able to take advantage of the data checksum features of ZFS.

Right now you're missing out on a LOT of the cooler features by not using a redundant setup.

As far as it being improved: it gets better with each version.

ZFS isn't trying to "keep the secret"; all the guides I've read say you should be using a redundant setup to take advantage of the advanced features.  ZFS gets its speed from a combination of striping and redundancy.  The more drives you use, the faster it gets.

My system with just a single mirror is much, much faster than it was with only 1 drive, and it only cost me 30 bucks for a new drive.

My other system has 12 1 TB drives in 3 raidz vdevs; it's much faster than it was with only 2 raidz vdevs.
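Turning a single-disk pool into a mirror is one command. A sketch, assuming the pool named `system` on `ad4p2` from earlier in the thread and a hypothetical second disk `ad6`:

```shell
# Attach ad6 alongside the existing ad4p2 device; ZFS resilvers the
# new disk in the background and the vdev becomes a two-way mirror.
zpool attach system ad4p2 ad6

# Watch the resilver progress:
zpool status system
```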


----------



## avilla@ (Jul 6, 2009)

wonslung said:

> Dude, just get a second drive with the same geometry and add it on as a mirror.  Hard drives are DIRT cheap these days.  Newegg has some great prices.



I know, but:
1. This is a laptop; where should I put it? 
2. I have a very small drive (76 GB). I was thinking about making some upgrades (motherboard, CPU... pretty much everything was just replaced except the disk), perhaps waiting for SSDs to get better and cheaper (I don't really know much about them, anyway).

But if there is a solution to #1 (really, I don't know), I think I could buy a bigger drive and mirror it (limiting it to 76 GB): then I'd have a decent disk to use in the future.



			
wonslung said:

> Right now you're missing out on a LOT of the cooler features by not using a redundant setup.



I know, but I don't keep important data here; I put everything on my home server (which I want to convert to FreeBSD and then to ZFS with raidz... right now it's running Fedora with a daily backup: ZFS would be perfect!)


----------



## wonslung (Jul 6, 2009)

xzhayon said:

> I know, but:
> 1. This is a laptop; where should I put it?
> 2. I have a very small drive (76 GB). I was thinking about making some upgrades (motherboard, CPU... pretty much everything was just replaced except the disk), perhaps waiting for SSDs to get better and cheaper (I don't really know much about them, anyway).
> 
> ...




Ahh, I don't know then... it depends on the laptop.  Some have 2 hard drives.  I didn't even consider that you might be using a laptop; I guess that's pretty silly of me.

Yah, I guess you don't have a ton of options with that unless your laptop has another hard drive bay.  Some do, some don't.

3 of the last 5 laptops I've owned DID, but those 3 were ALL larger laptops.

Yes, I recently converted MY home server to FreeBSD.  It was a Debian-based Linux server with mdadm raid5 and an XFS filesystem, with 6 1 TB hard drives.  I decided to do a hardware upgrade, which proved to be the PERFECT time to upgrade the software as well.

Currently it's an Intel Q9550 with 8 GB RAM and 12 1 TB drives running FreeBSD 7.2 amd64 and ZFS.  I have root and /usr on a gmirrored pair of compact flash cards; everything else is on ZFS, including /var, /usr/local, and all my jails.  I used 3 raidz vdevs of 4 drives each, and it's REALLY great.

I especially love using ZFS for my jails.  If you make jails the old way (not with ezjail) and use a ZFS file system for the original jail, you can clone it for each new jail.  It also lets you try stuff without worry... I use a jail for each thing I need: MySQL runs in its own jail, my webserver runs in a jail, Samba runs in a jail... and if I'm trying something strange or new, I just make a new jail with zfs clone pool/jails/basejail@base pool/jails/SOMEJAIL
I love ZFS
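The clone workflow above boils down to two commands. A sketch, assuming a `pool/jails/basejail` dataset as in the post (the `web01` jail name is hypothetical):

```shell
# Snapshot the base jail once; each clone is created instantly and
# shares unchanged blocks with the snapshot, so a new jail costs
# almost no disk space up front.
zfs snapshot pool/jails/basejail@base
zfs clone pool/jails/basejail@base pool/jails/web01
```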


----------



## avilla@ (Jul 6, 2009)

wonslung said:

> Yah, I guess you don't have a ton of options with that unless your laptop has another hard drive bay.  Some do, some don't.



It does, if I remove the DVD tray. 
I think I'll just keep it this way; it's not a big problem anyway, I don't do massive HD operations.

Thanks for your help!


----------

