# ZFS high read I/O



## grzegorz-derebecki (Oct 14, 2009)

I use FreeBSD 7.2 with ZFS v13.

On one of my servers I see strangely high numbers of read operations. After a reboot it works better, but only for some time.

After reboot: 


```
root@core2:~# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         259G   385G     51      0  2.49M      0
tank         259G   385G     32      0   699K   128K
tank         259G   385G     23     18  1.29M  2.37M
tank         259G   385G     27      0   683K   128K
tank         259G   385G     31      0   936K   128K
tank         259G   385G     73      0  1.26M      0
tank         259G   385G     24      0  1.56M   128K
tank         259G   385G     49    643  1.28M  35.2M
tank         259G   385G     30      0  1.44M   128K
tank         259G   385G     30      0   988K   116K
tank         259G   385G     17      0   664K   128K
tank         259G   385G      6      0   768K      0
```


After some time (I think when free memory gets lower):


```
root@core2:~# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         259G   385G     60     20  4.53M  1.21M
tank         259G   385G    262      0  25.4M   128K
tank         259G   385G    212      9  22.8M  1.25M
tank         259G   385G    241      5  25.5M   767K
tank         259G   385G    224     13  22.7M  1.13M
tank         259G   385G    297      0  30.9M      0
tank         259G   385G    172    367  13.2M  8.66M
tank         259G   385G    174     65  13.8M   749K
tank         259G   385G    309      0  28.2M  7.99K
tank         259G   385G    395      0  43.7M      0
tank         259G   385G    300      0  31.8M      0
tank         259G   385G    256      0  25.7M  7.99K
tank         259G   385G    361      0  38.5M      0
tank         259G   385G    271      0  28.2M      0
```
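For scale, averaging the per-second read operations from the two `zpool iostat` samples above gives a rough measure of the degradation (my own back-of-the-envelope calculation, not from the original post):

```python
# Read ops/s columns copied from the two zpool iostat samples above.
after_reboot = [51, 32, 23, 27, 31, 73, 24, 49, 30, 30, 17, 6]
later = [60, 262, 212, 241, 224, 297, 172, 174, 309, 395, 300, 256, 361, 271]

avg_before = sum(after_reboot) / len(after_reboot)  # ~33 ops/s
avg_later = sum(later) / len(later)                 # ~252 ops/s
print(f"{avg_before:.0f} -> {avg_later:.0f} ops/s ({avg_later / avg_before:.1f}x)")
```

So the read load grows roughly eightfold between the two samples, while the write load stays about the same.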

Here is my ZFS sysctl output:


```
root@core2:~# sysctl -a |grep zfs
vfs.zfs.arc_meta_limit: 426417920
vfs.zfs.arc_meta_used: 787332272
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 213208960
vfs.zfs.arc_max: 1705671680
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 1
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.arcstats.hits: 81068075
kstat.zfs.misc.arcstats.misses: 2978119
kstat.zfs.misc.arcstats.demand_data_hits: 80284676
kstat.zfs.misc.arcstats.demand_data_misses: 2575410
kstat.zfs.misc.arcstats.demand_metadata_hits: 783399
kstat.zfs.misc.arcstats.demand_metadata_misses: 402709
kstat.zfs.misc.arcstats.prefetch_data_hits: 0
kstat.zfs.misc.arcstats.prefetch_data_misses: 0
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0
kstat.zfs.misc.arcstats.mru_hits: 20255118
kstat.zfs.misc.arcstats.mru_ghost_hits: 158500
kstat.zfs.misc.arcstats.mfu_hits: 60812957
kstat.zfs.misc.arcstats.mfu_ghost_hits: 302711
kstat.zfs.misc.arcstats.deleted: 2490889
kstat.zfs.misc.arcstats.recycle_miss: 1050085
kstat.zfs.misc.arcstats.mutex_miss: 24694
kstat.zfs.misc.arcstats.evict_skip: 75764298
kstat.zfs.misc.arcstats.hash_elements: 37695
kstat.zfs.misc.arcstats.hash_elements_max: 110225
kstat.zfs.misc.arcstats.hash_collisions: 739696
kstat.zfs.misc.arcstats.hash_chains: 4484
kstat.zfs.misc.arcstats.hash_chain_max: 10
kstat.zfs.misc.arcstats.p: 213208960
kstat.zfs.misc.arcstats.c: 213208960
kstat.zfs.misc.arcstats.c_min: 213208960
kstat.zfs.misc.arcstats.c_max: 1705671680
kstat.zfs.misc.arcstats.size: 797950880
kstat.zfs.misc.arcstats.hdr_size: 8462048
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 7
kstat.zfs.misc.vdev_cache_stats.delegations: 2117
kstat.zfs.misc.vdev_cache_stats.hits: 243980
kstat.zfs.misc.vdev_cache_stats.misses: 150265
```
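A few numbers in these arcstats stand out. Here is my reading of them (derived from the values quoted above, interpretation my own):

```python
# Values copied from the arcstats output above.
hits, misses = 81068075, 2978119
meta_used, meta_limit = 787332272, 426417920
c, c_min = 213208960, 213208960

hit_ratio = hits / (hits + misses)
print(f"overall ARC hit ratio: {hit_ratio:.1%}")                        # ~96.5%
print(f"arc_meta_used / arc_meta_limit: {meta_used / meta_limit:.2f}")  # ~1.85x over the limit
print(f"ARC target size c collapsed to c_min: {c == c_min}")            # True
```

Even with a ~96% hit ratio, every miss goes to disk (prefetch is disabled), and the ARC target size `c` has collapsed to its ~200 MB minimum while metadata usage is nearly twice `arc_meta_limit`. That constant eviction pressure would fit the large `recycle_miss`/`evict_skip` counts, and it fits the later observation in this thread that raising `arc_min` helped.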


----------



## graudeejs (Oct 14, 2009)

How about: what apps are you running?


----------



## grzegorz-derebecki (Oct 14, 2009)

MySQL, LiteSpeed web server (for a Rails app), and nginx for static files (I have a lot of graphics, over 1,000,000 files).


----------



## grzegorz-derebecki (Oct 14, 2009)

With vfs.zfs.debug=1 I get the following in the logs:


```
zfs_reclaim_complete:4342[1]: zp=0xffffff00a1d05938
zfs_reclaim_complete:4342[1]: zp=0xffffff009dade760
zfs_reclaim_complete:4342[1]: zp=0xffffff005557fb10
zfs_reclaim_complete:4342[1]: zp=0xffffff00ab9f9760
zfs_reclaim_complete:4342[1]: zp=0xffffff00a1adfb10
zfs_reclaim_complete:4342[1]: zp=0xffffff01f6c553b0
zfs_reclaim_complete:4342[1]: zp=0xffffff00a07a43b0
zfs_reclaim_complete:4342[1]: zp=0xffffff00a1bafb10
```


----------



## grzegorz-derebecki (Oct 14, 2009)

I changed some ZFS options:


```
vm.kmem_size="3500M"
vfs.zfs.arc_min="2000M"
vfs.zfs.arc_max="3000M"
```
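These are boot-time loader tunables (the quoted syntax is `/boot/loader.conf` style). One sanity check worth doing with values like these is that `arc_max` stays below `vm.kmem_size`, since the ARC lives inside the kernel address space and needs headroom left for other kernel allocations. A small sketch (the `parse_size` helper is my own, not part of any ZFS tool):

```python
def parse_size(s: str) -> int:
    """Convert a loader.conf-style size string like '3000M' to bytes."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    return int(s[:-1]) * units[s[-1].upper()]

kmem_size = parse_size("3500M")
arc_min = parse_size("2000M")
arc_max = parse_size("3000M")

# arc_max must fit inside kmem_size with some headroom to spare.
assert arc_min <= arc_max < kmem_size
print(f"headroom: {(kmem_size - arc_max) >> 20} MB")  # 500 MB
```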

And now it works better. But I don't know why it worked so badly with a 200 MB arc_min - maybe there is something wrong with the HDD?


----------

