# ZFS tuning, raidz2 with 6 SSDs + mirror 2 SSDs



## meteor8488 (Jul 1, 2014)

Hi all,

I just set up a server with eight SSDs. There are two ZFS pools:


- 6 SSDs -- RAID-Z2 -- for web content
- 2 SSDs -- mirror -- for a MySQL database

My server has 32 GB of memory. Could you please suggest how to tune the server for better performance? What about the parameters below?


```
vm.kmem_size=
vm.kmem_size_max=
vfs.zfs.arc_max=
vfs.zfs.vdev.cache.size=
kern.maxvnodes=
vfs.zfs.write_limit_override=
```
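For context, these are loader tunables set in `/boot/loader.conf`. A hedged sketch of a starting point (the values are illustrative assumptions for a 32 GB machine, not recommendations; measure before and after any change):

```shell
# /boot/loader.conf -- illustrative values only, assuming 32 GB RAM.
# On a modern FreeBSD/amd64 system the kmem defaults are usually sane,
# so a common starting point is to cap only the ARC and leave the
# other tunables at their defaults:
vfs.zfs.arc_max="16G"

# kern.maxvnodes and vfs.zfs.write_limit_override are best left alone
# until measurement points at them specifically.
```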


----------



## aupanner (Jul 1, 2014)

*Re: ZFS tuning, raidz2 with 6 SSDs + mirror 2 SSDs*

The first rule of tuning: you're already done.

Measure performance. Is performance sufficient? If so, tuning is complete. If not, use measurement to deduce the bottleneck and compute the required performance increase.

You can tune ZFS but you can't tune a fish.


----------



## meteor8488 (Jul 2, 2014)

*Re: ZFS tuning, raidz2 with 6 SSDs + mirror 2 SSDs*



aupanner said:

> The first rule of tuning: you're already done.
> 
> Measure performance.
> Is performance sufficient?
> ...



Sorry for the vague question. It's a production server, so it's hard for me to change the settings and rerun tests repeatedly.

Here are my test results:

```
# dd if=/dev/zero of=/web/ddfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 20.146371 secs (1040957705 bytes/sec)
# dd of=/dev/null if=/web/ddfile bs=2048k count=10000
20971520000 bytes transferred in 10.915077 secs (1921335054 bytes/sec)

# dd if=/dev/zero of=/data/ddfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 39.640522 secs (529042478 bytes/sec)
# dd of=/dev/null if=/data/ddfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 4.621978 secs (4537347386 bytes/sec)
```

And here are my settings:


```
vfs.zfs.prefetch_disable="1"
vfs.zfs.l2arc_write_max=1048576000
vfs.zfs.l2arc_write_boost=1048576000
```

It still seems slower than I expected.


----------



## usdmatt (Jul 2, 2014)

Maybe you should have done more testing before going into production, as tuning is all about tweaking settings, testing, and repeating. It's rare to find a one-size-fits-all strategy that improves performance for every workload.

Although testing with /dev/null and /dev/zero is pretty useless, I don't see much wrong with those results (~500 MB/s write for the mirror and ~1 GB/s for the RAID-Z2?).

Depending on exactly where your load is, you may benefit from heavily reducing the ARC size (e.g. to under 20 GB) and assigning a good chunk of RAM to the MySQL cache. As far as database performance goes, you will benefit more from caching directly in MySQL than in the file system. Of course, you won't be able to profile database performance just by doing file system read/write tests. Ideally you need to test with your actual workload to see what sort of performance your application is getting.
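As a sketch of that split, assuming roughly 12 GB for the ARC and 12 GB for InnoDB on this 32 GB box (the exact ratio is an assumption and should be validated against the real workload):

```shell
# /boot/loader.conf -- cap the ARC (a boot-time tunable):
vfs.zfs.arc_max="12G"

# /usr/local/etc/my.cnf, in the [mysqld] section -- give the freed
# memory to InnoDB's own cache (assumes the InnoDB storage engine):
#   innodb_buffer_pool_size = 12G
```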

There are a few changes that should usually be made for ZFS datasets holding MySQL databases, although this depends on the exact storage engine being used. Have a look at https://blogs.oracle.com/realneel/entry ... _practices for an overview of some of the recommended tunings.
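The usual suggestions from that article can be sketched as commands. The dataset name `ssd/mysql` is an assumption; note that `recordsize` only affects newly written files, so it should be set before the data files are created or copied in:

```shell
# Assumed dataset name -- adjust to your layout.
# InnoDB data files: match ZFS record size to InnoDB's 16 KB page size.
zfs set recordsize=16K ssd/mysql
# Let InnoDB's buffer pool do the data caching; keep only metadata in ARC.
zfs set primarycache=metadata ssd/mysql
# Favor throughput over latency for synchronous database writes.
zfs set logbias=throughput ssd/mysql
```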


----------



## meteor8488 (Jul 3, 2014)

usdmatt said:

> Maybe you should have done more testing before going into production as tuning is all about tweaking settings, testing, and repeating. It's rare to find a one-size-fits-all strategy that improves performance for every workload.
> 
> Although testing with /dev/null and /dev/zero is pretty useless, I don't see much wrong with those results. (~500 MBps for the mirror and ~1 GBps for the RAID-Z2?)
> 
> ...



Thanks for your reply.

I've got a very strange issue at the moment: a MySQL UPDATE causes very high disk I/O. For example, when I run an INSERT in MySQL, disk usage is normal:

```
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ssd         10.8G   211G     62    184   998K  1.21M
----------  -----  -----  -----  -----  -----  -----
ssd         10.8G   211G    108      0  1.69M      0
----------  -----  -----  -----  -----  -----  -----
```

But if MySQL is trying to update a record in database, the disk usage is as follows:

```
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ssd         10.8G   211G  9.64K      0   254M      0
----------  -----  -----  -----  -----  -----  -----
ssd         10.8G   211G  5.91K      0  294.5M      0
----------  -----  -----  -----  -----  -----  -----
ssd         10.8G   211G  6.82K    247   309M  1.73M
----------  -----  -----  -----  -----  -----  -----
ssd         10.8G   211G  7.43K      0   419M      0
```

Even a very simple UPDATE causes this problem.

It seems that something is wrong with ZFS. Could you please help?


----------



## usdmatt (Jul 3, 2014)

What are the current settings on the dataset storing the database? `zfs get all {dataset}`
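If posting the full `zfs get all` output is unwieldy, the properties most relevant to database I/O can be queried directly (`ssd/mysql` is an assumed dataset name):

```shell
zfs get recordsize,primarycache,logbias,compression,atime ssd/mysql
```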


----------



## Crivens (Jul 4, 2014)

Remember that the default ZFS record size is 128 KB, not 4 KB as in UFS. It would not surprise me if the UPDATE touches a few small spots in the database file, causing whole 128 KB records to be rewritten.
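A back-of-the-envelope sketch of that effect, assuming InnoDB's 16 KB pages on a dataset left at the 128 KB default record size:

```shell
#!/bin/sh
# Write amplification when a small page update forces a full-record
# rewrite: recordsize / page_size.
recordsize=131072   # 128 KB, the ZFS default recordsize
pagesize=16384      # 16 KB, the InnoDB page size
amplification=$((recordsize / pagesize))
echo "each ${pagesize}-byte page update rewrites ${recordsize} bytes: ${amplification}x"
# prints: each 16384-byte page update rewrites 131072 bytes: 8x
```

This is one reason the common advice is to set `recordsize=16K` on datasets holding InnoDB data files before loading the data.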


----------

