# write performance slowdown on ZFS pool with ahci (8.1-RELEASE)



## sidh (Sep 21, 2010)

Greetings,

I recently upgraded my ZFS/NFS server to 8.1-RELEASE and tried the ahci driver, creating a 5 GB file with dd on a ZFS pool. Here are the results:

With AHCI disabled:

```
5368709120 bytes transferred in 106.757255 secs (50288939 bytes/sec)
```

With AHCI enabled:

```
5368709120 bytes transferred in 119.206514 secs (45037045 bytes/sec)
```

The command I ran was: `dd if=/dev/zero of=/myfs/file5GB bs=1k count=5M` (myfs is a ZFS mountpoint).

Any hints?

Regards,


----------



## vermaden (Sep 21, 2010)

AHCI is about a RANDOM performance increase, not SEQUENTIAL (like your dd):
http://forums.freebsd.org/showthread.php?t=7871
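As an illustration of the distinction (a sketch, not from the thread; the paths and sizes are made up), sequential access streams one large transfer after another, while random access issues scattered requests — the pattern NCQ is designed to reorder:

```shell
# Illustrative only: sequential vs. random reads over the same file.
# NCQ (enabled by the ahci driver) helps the random case by letting the
# drive reorder queued requests; it adds little to a pure stream like dd.
dd if=/dev/zero of=/tmp/t.bin bs=1048576 count=16 2>/dev/null   # 16 MB test file
dd if=/tmp/t.bin of=/dev/null bs=1048576 2>/dev/null            # sequential read
for off in 9 3 14 1 7; do                                       # scattered 1 MB reads
    dd if=/tmp/t.bin of=/dev/null bs=1048576 skip=$off count=1 2>/dev/null
done
```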


----------



## chrcol (Sep 22, 2010)

AHCI will be slightly worse in situations such as benchmarking with dd, but better under loads where many things want disk access at once, such as server environments.

Also, in my view ZFS works better with AHCI when the following two settings are in loader.conf:


```
vfs.zfs.vdev.min_pending=4
vfs.zfs.vdev.max_pending=8
```


----------



## sidh (Sep 22, 2010)

vermaden said:

> AHCI is about a RANDOM performance increase, not SEQUENTIAL (like your dd):
> http://forums.freebsd.org/showthread.php?t=7871

Thanks vermaden for the explanations and the link.


----------



## sidh (Sep 22, 2010)

chrcol said:

> AHCI will be slightly worse in situations such as benchmarking with dd, but better under loads where many things want disk access at once, such as server environments.
> 
> Also, in my view ZFS works better with AHCI when the following two settings are in loader.conf:
> 
> ...

Your settings help increase write performance a little, thank you chrcol.


----------



## vermaden (Sep 22, 2010)

sidh said:

> Thanks vermaden for the explanations and the link.



Welcome mate.


----------



## phoenix (Sep 23, 2010)

You also shouldn't use dd as a benchmarking tool, especially when using /dev/zero.  If you have compression enabled on a ZFS filesystem, you'll get SUPER high write speeds.  

Either use /dev/random (which may limit the read speed) or use a real filesystem benchmarking tool like bonnie++ or similar.
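A quick way to see why /dev/zero is misleading (a hypothetical sketch, runnable anywhere, not ZFS-specific): a file of zeros compresses to almost nothing, so a filesystem with compression enabled barely touches the disk when writing it:

```shell
# Zeros compress to nearly nothing; a ZFS dataset with compression=on
# would write almost no data for them, inflating apparent dd throughput.
dd if=/dev/zero of=/tmp/zeros.bin bs=1024 count=1024 2>/dev/null   # 1 MB of zeros
gzip -kf /tmp/zeros.bin                                            # leaves zeros.bin.gz
wc -c /tmp/zeros.bin /tmp/zeros.bin.gz
```

On ZFS you can check whether a dataset compresses with `zfs get compression <dataset>`.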


----------



## sidh (Sep 24, 2010)

phoenix said:

> You also shouldn't use dd as a benchmarking tool, especially when using /dev/zero.  If you have compression enabled on a ZFS filesystem, you'll get SUPER high write speeds.
> 
> Either use /dev/random (which may limit the read speed) or use a real filesystem benchmarking tool like bonnie++ or similar.


Today I found a useful link which mentions bonnie(++) too.

I've also been told that using bs=128k with dd doubles the write speed, so I realize dd is definitely not a benchmarking tool.

Thank you for your advice.


----------



## vermaden (Sep 24, 2010)

sidh said:

> I've also been told that using bs=128k with dd doubles the write speed, so I realize dd is definitely not a benchmarking tool.


Try using 8-16m (megabytes) for even more performance.
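A rough sketch of the effect (temporary paths are illustrative): the same amount of data written with a larger block size means far fewer write(2) calls, which is where the speedup comes from. FreeBSD's dd accepts suffixes like `bs=16m`; the byte counts below are spelled out for portability:

```shell
# 4 MB written as 4096 one-kilobyte writes vs. 4 one-megabyte writes.
dd if=/dev/zero of=/tmp/bs_small.bin bs=1024 count=4096 2>/dev/null
dd if=/dev/zero of=/tmp/bs_large.bin bs=1048576 count=4 2>/dev/null
# Same output size either way; only the number of syscalls differs.
```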


----------



## jem (Sep 24, 2010)

chrcol said:

> ```
> vfs.zfs.vdev.min_pending=4
> vfs.zfs.vdev.max_pending=8
> ```



Are these two tunables documented anywhere?  I'd like to understand what they do before I apply them to my system.


----------



## vermaden (Sep 24, 2010)

jem said:

> Are these two tunables documented anywhere?  I'd like to understand what they do before I apply them to my system.



Here mate:

```
# sysctl -d vfs.zfs.vdev.min_pending
vfs.zfs.vdev.min_pending: Initial number of I/O requests pending to each device

# sysctl -d vfs.zfs.vdev.max_pending
vfs.zfs.vdev.max_pending: Maximum I/O requests pending on each device
```


----------



## sidh (Sep 24, 2010)

Hi, 

`$ sysctl -d vfs.zfs.vdev.min_pending` 
and 
`$ sysctl -d vfs.zfs.vdev.max_pending` 
will give you a description of those settings.

Regards,


----------



## chrcol (Sep 26, 2010)

Yeah, it's internal FS queueing, but AHCI has its own queueing (NCQ), so it seems logical that reducing them would be better. I assumed a min and max of 1 would in fact be optimal with AHCI, but those values seem to work best for me.


----------

