# Memory exhaustion and system hangs on FreeBSD 9.1-RELEASE-p5



## dnalencastre (Aug 2, 2013)

FreeBSD 9.1-RELEASE-p5 
16 GB RAM
amd64

Hi,

One of my servers is hanging with memory exhausted.  This happens whenever the ZFS system is "on", either by starting the service (rc.local or `service zfs onestart`) or by calling a `zfs` command.

The same system has been tested with ZFS off, and it keeps running with about 15 GB of free memory for upwards of 4 hours.

The system worked correctly for over one month, with ZFS filesystem mounted and being served over NFS.

From my testing, the memory exhaustion occurs between 15 to 90 minutes after the ZFS system is started.

I've tried setting vfs.zfs.arc_max and vm.kmem_size, without success, although the longest run (90 minutes) occurred with vm.kmem_size_max=15032385536 and vfs.zfs.arc_max=10737418240 (this being a 16 GB server, amd64).
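For reference, these are loader tunables, so they go in /boot/loader.conf and take effect at the next boot; the last combination I tried looks like this (values as quoted above):

```
vm.kmem_size_max="15032385536"   # ~14 GB
vfs.zfs.arc_max="10737418240"    # 10 GB
```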

Any hints/tips/pointers in the right direction so that I can diagnose this?

Thanks in advance,
Duarte Alencastre


----------



## kpa (Aug 2, 2013)

dnalencastre said:
> I've tested with setting vfs.zfs.arc_max and vm.kmem_size, but without success,
> although the longer 90 minutes period occurred with vm.kmem_size_max=15032385536 and vfs.zfs.arc_max=10737418240 (this being a 16 GB server, amd64).



Please don't touch vm.kmem_size_max; it is not what you think it is. Most users should never change its value, but instead leave it at the default by not specifying it at all in /boot/loader.conf.

Remove the vm.kmem_size_max setting and test again to see whether the system still runs out of memory.


----------



## wblock@ (Aug 2, 2013)

There are specific rules for using NFS on ZFS, although I have not paid attention to exactly what they are.


----------



## kpa (Aug 2, 2013)

wblock@ said:

> There are specific rules for using NFS on ZFS, although I have not paid attention to exactly what they are.



For read-only shares there's not much to do. On read/write shares you should make sure your pool can withstand the high rate of writes to the ZIL, because NFS forces synchronous writes when used with the default sync mode. Other than that, I don't think there's anything special.

https://wiki.freebsd.org/ZFSTuningGuide#NFS_tuning
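As a sketch of what that tuning involves (pool, dataset, and device names here are purely illustrative): you can check how a shared dataset handles synchronous writes via the `sync` property, and give the ZIL a fast dedicated log device so those writes don't hammer the data disks.

```
# Check the current synchronous-write behaviour of the shared dataset.
zfs get sync tank/export

# A dedicated low-latency log device (SLOG) absorbs the synchronous
# NFS writes; "ada9" is only an example device name.
zpool add tank log ada9
```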


----------



## dnalencastre (Aug 2, 2013)

The results are the same with both vfs.zfs.arc_max and vm.kmem_size configured, with just vfs.zfs.arc_max configured, or with the defaults. The only thing that _might_ change is that with those values configured the time to memory starvation seems longer, but as I have only tried them twice that is statistically irrelevant.

Regarding NFS, I don't think it's an issue right now, as it is off (along with rpcbind) and the server still hangs from memory starvation.

Duarte


----------



## phoenix (Aug 3, 2013)

Dedupe enabled at any time on that pool?

Filesystems or snapshots being deleted? System crashed while doing that?

What were you doing with the pool before it started locking up?


----------



## dnalencastre (Aug 5, 2013)

The system has a single zpool with three filesystems. One of them (the largest) has dedup enabled. When operational, all three filesystems have snapshots created and deleted on a regular basis (by scripts run from cron).
Currently all of these snapshot operations are inactive (the cron entries are commented out).

The system still hangs when the ZFS subsystem is activated, even without mounting any filesystem or performing snapshot operations. As a note, the disks backing the zpool are close to their maximum TPS whenever the ZFS subsystem is active.


----------



## HarryE (Aug 6, 2013)

Where is the swap located? On a zvol or a separate disk/partition?
Try to get `zdb` and `zpool status -v` listings just before memory exhaustion.


----------



## dnalencastre (Aug 6, 2013)

Both swap and root share the same (controller-configured) RAID1, on dedicated partitions. The ZFS pool is on separate, dedicated disks.

The memory exhaustion is gone, along with the high rate of access to the ZFS disks. My colleagues and I suspect that there was some sort of background ZFS activity using all the memory, and whatever it was has now finished.

The server was upgraded from 16 GB to 24 GB of RAM, and the ARC was limited to a little less than 8 GB (vfs.zfs.arc_max=8317210624). The ARC limit had already been tried on Friday without success. Nevertheless, system hangs still occurred after these two actions, before ceasing a few hours later.

Any ideas whether scrubbing the pool weekly or fortnightly might help prevent a recurrence?
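For what it's worth, FreeBSD's periodic(8) framework can run scrubs on a schedule if your version ships the 800.scrub-zfs script; something like this in /etc/periodic.conf (the 14-day threshold is just an example):

```
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="14"   # scrub each pool roughly every 14 days
```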

I would still like to diagnose this issue, but I guess I'll have to wait for the next flare-up. Please suggest what kind of data I should be collecting, and at what intervals.
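Something like this simple logger is what I have in mind (the sysctl names are FreeBSD-specific, and the 60-second interval and log path are arbitrary choices):

```
#!/bin/sh
# Append ARC size and free-page counters to a log every 60 seconds,
# so the growth curve is visible after the next hang.
while :; do
    date >> /var/log/arcstats.log
    sysctl kstat.zfs.misc.arcstats.size vm.stats.vm.v_free_count \
        >> /var/log/arcstats.log
    sleep 60
done
```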

Thanks,
Duarte


----------



## phoenix (Aug 6, 2013)

dnalencastre said:
> The system has a single zpool, with three file systems. One of the file systems on the pool (the largest) has dedup enabled. When operational, all of the three file systems get snapshots created and deleted on a regular basis (called by scripts on cron).
> Currently all of these snapshot operations are inactive (cron entries commented out).
> 
> The system still hangs when the ZFS subsystem is activated, even without mounting any file system or doing snapshot operations. As a note, the disks where the zpool was created are close to their max tps whenever the zfs subsystems is active.



Sounds like you need more RAM, then.  The DDT (dedupe table) is updated whenever you write or delete data in the pool, regardless of whether or not dedupe is enabled on the filesystem you are writing to.  Dedupe is a pool-wide feature, meaning it tracks the checksums for every block in the pool in the DDT.  It's only when writing to a filesystem that has dedupe enabled that it updates the reference numbers in the DDT (thus saving on the writes via dedupe).

You need roughly 1 GB of ARC per 1 TB of unique data in the *pool*.  If you have dedupe turned off for most of the filesystems, those are most likely considered "unique" data, ballooning your DDT.
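As a rough back-of-the-envelope check of that rule of thumb: each in-core DDT entry is often quoted at around 320 bytes for this era of ZFS (an assumption, as is the 4 TiB pool size and the default 128 KiB recordsize used here):

```shell
pool_bytes=$((4 * 1024 * 1024 * 1024 * 1024))   # 4 TiB of unique data (example)
recordsize=$((128 * 1024))                      # default 128 KiB records
entries=$((pool_bytes / recordsize))            # number of DDT entries
ddt_bytes=$((entries * 320))                    # ~320 bytes per in-core entry
echo "$((ddt_bytes / 1024 / 1024)) MiB of ARC for the DDT"
# prints: 10240 MiB of ARC for the DDT
```

which lands right around the "1 GB of ARC per 1 TB of unique data" figure.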

The only way to "fix" the hang is to add a tonne of RAM, increase the ARC max limit (or remove it completely), and let the box boot and complete its internal housekeeping on the DDT (which is what is causing the hang).

Then, if you don't need dedupe, turn it off, zfs send the data out of the dedupe-enabled filesystem, and destroy that filesystem.  At that point, the DDT should be cleared and no longer used.  If needed you can remove the extra RAM at this point.
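The escape route described above might look something like this (pool and dataset names are hypothetical):

```
# Stop new deduped writes, then copy the data to a fresh, non-deduped dataset.
zfs set dedup=off tank/dedup_fs
zfs snapshot tank/dedup_fs@migrate
zfs send tank/dedup_fs@migrate | zfs recv tank/plain_fs

# Once the copy is verified, destroying the old dataset releases
# its DDT entries.
zfs destroy -r tank/dedup_fs
```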


----------



## dnalencastre (Aug 8, 2013)

We have already disabled dedup on the filesystem, renamed it, and copied the data out to another filesystem. The original filesystem is no longer accessible via NFS (the new filesystem took its place). We need to wait a few days before we can finally destroy it, as we may still need the snapshots.

Once that data is no longer valid, the file-system that had dedup enabled will be destroyed.


----------



## Acardenes (Aug 8, 2013)

Hi, I'm having exactly the same issue on the same hardware specs. I have dedup and compression enabled on this server, which is used as a backup machine. The problem started while removing some snapshots: the machine froze with no memory left and was restarted via IPMI.

Now, after rebooting, I see that the HDD LEDs are going crazy, wired memory won't stop growing, and at some point the system becomes unresponsive, as has happened since yesterday.

I tried changing these parameters, with no result:

```
vfs.zfs.arc_min="8000m"
vfs.zfs.arc_max="9048m"
```

Here is some of the output:


```
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
criptanatop  4.53T  3.99T   552G    88%  1.21x  ONLINE  -
```


```
root@criptana:/root # zpool status -v
  pool: criptanatop
 state: ONLINE
  scan: resilvered 716M in 0h0m with 0 errors on Wed Aug  7 10:18:41 2013
config:

	NAME         STATE     READ WRITE CKSUM
	criptanatop  ONLINE       0     0     0
	  raidz1-0   ONLINE       0     0     0
	    da0.eli  ONLINE       0     0     0
	    da1.eli  ONLINE       0     0     0
	    da2.eli  ONLINE       0     0     0
	    da3.eli  ONLINE       0     0     0
	    da4.eli  ONLINE       0     0     0

errors: No known data errors
```


```
root@criptana:/root # sysctl vfs.zfs
vfs.zfs.l2c_only_size: 0
vfs.zfs.mfu_ghost_data_lsize: 0
vfs.zfs.mfu_ghost_metadata_lsize: 743960576
vfs.zfs.mfu_ghost_size: 743960576
vfs.zfs.mfu_data_lsize: 0
vfs.zfs.mfu_metadata_lsize: 380264448
vfs.zfs.mfu_size: 391661568
vfs.zfs.mru_ghost_data_lsize: 0
vfs.zfs.mru_ghost_metadata_lsize: 504000512
vfs.zfs.mru_ghost_size: 504000512
vfs.zfs.mru_data_lsize: 131072
vfs.zfs.mru_metadata_lsize: 407994880
vfs.zfs.mru_size: 562559488
vfs.zfs.anon_data_lsize: 0
vfs.zfs.anon_metadata_lsize: 0
vfs.zfs.anon_size: 1371091456
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 1
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.arc_meta_limit: 2634022912
vfs.zfs.arc_meta_used: 2687163272
vfs.zfs.arc_min: 8388608000
vfs.zfs.arc_max: 10536091648
vfs.zfs.dedup.prefetch: 1
vfs.zfs.mdcomp_disable: 0
vfs.zfs.write_limit_override: 0
vfs.zfs.write_limit_inflated: 51376422912
vfs.zfs.write_limit_max: 2140684288
vfs.zfs.write_limit_min: 33554432
vfs.zfs.write_limit_shift: 3
vfs.zfs.no_write_throttle: 0
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 0
vfs.zfs.mg_alloc_failures: 18
vfs.zfs.check_hostid: 1
vfs.zfs.recover: 0
vfs.zfs.txg.synctime_ms: 1000
vfs.zfs.txg.timeout: 5
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 0
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.write_gap_limit: 4096
vfs.zfs.vdev.read_gap_limit: 32768
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 10
vfs.zfs.vdev.bio_flush_disable: 0
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_replay_disable: 0
vfs.zfs.zio.use_uma: 0
vfs.zfs.snapshot_list_prefetch: 0
vfs.zfs.version.zpl: 5
vfs.zfs.version.spa: 28
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
```


```
root@criptana:/root # zpool iostat -v 2
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
criptanatop  3.99T   552G    298      0  1.18M    443
  raidz1     3.99T   552G    298      0  1.18M    443
    da0.eli      -      -     59      0   244K    134
    da1.eli      -      -     59      0   244K    138
    da2.eli      -      -     59      0   243K    169
    da3.eli      -      -     59      0   244K    172
    da4.eli      -      -     59      0   243K    160
-----------  -----  -----  -----  -----  -----  -----

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
criptanatop  3.99T   552G     73      0   293K      0
  raidz1     3.99T   552G     73      0   293K      0
    da0.eli      -      -     14      0  59.8K      0
    da1.eli      -      -     12      0  51.8K      0
    da2.eli      -      -     15      0  61.8K      0
    da3.eli      -      -     15      0  63.8K      0
    da4.eli      -      -     13      0  55.8K      0
-----------  -----  -----  -----  -----  -----  -----

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
criptanatop  3.99T   552G     69      0   279K      0
  raidz1     3.99T   552G     69      0   279K      0
    da0.eli      -      -     20      0  81.6K      0
    da1.eli      -      -     10      0  43.8K      0
    da2.eli      -      -     12      0  49.8K      0
    da3.eli      -      -     10      0  43.8K      0
    da4.eli      -      -     14      0  59.7K      0
-----------  -----  -----  -----  -----  -----  -----
```


----------



## Acardenes (Aug 8, 2013)

Sorry, I just wanted to add the `top` output from the moment of the freeze:


```
last pid:  3531;  load averages:  0.07,  0.09,  0.08                                                                                                                                      up 0+02:23:02  16:22:41
112 processes: 51 running, 60 sleeping, 1 waiting
CPU:  0.0% user,  0.0% nice, 52.0% system,  0.0% interrupt, 48.0% idle
Mem: 18M Active, 19M Inact, 13G Wired, 865M Buf, 2782M Free
Swap: 4096M Total, 4096M Free

PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 2077 root        1  20    0 67884K  5584K select 10   0:00  0.00% sshd
 1069 root        1  20    0 67884K  5584K select 11   0:00  0.00% sshd
 1075 root        1  20    0 67884K  5572K select  8   0:01  0.00% sshd
 1064 root        1  20    0 67884K  5560K select  8   0:02  0.00% sshd
 1005 root        1  20    0 46744K  4716K select  6   0:00  0.00% sshd
 1011 smmsp       1  20    0 20272K  4460K pause   3   0:00  0.00% sendmail
 1008 root        1  20    0 20272K  4404K select 10   0:00  0.00% sendmail
  996 root        1  20    0 28176K  3888K nanslp  3   0:00  0.00% smartd
  755 root        1  20    0 10376K  3484K select  7   0:00  0.00% devd
 1080 root        1  20    0 16560K  3220K CPU1    0   0:22  0.00% top
  955 root        1  20    0 22196K  3204K select  7   0:01  0.00% ntpd
    0 root      199  -8    0     0K  3184K -       0   2:17 144.19% kernel
 1073 root        1  20    0 17532K  3148K pause   5   0:00  0.00% csh
 2091 root        1  20    0 17532K  3104K ttyin   5   0:00  0.00% csh
 1078 root        1  23    0 17532K  3096K pause   6   0:00  0.00% csh
 1067 root        1  20    0 17532K  3072K ttyin   7   0:00  0.00% csh
 1015 root        1  52    0 14128K  1812K nanslp  2   0:00  0.00% cron
  879 root        1  20    0 12052K  1684K select  8   0:00  0.00% syslogd
  729 root        1  52    0 14232K  1624K select  4   0:00  0.00% moused
 1051 root        1  52    0 12052K  1624K ttyin  10   0:00  0.00% getty
 1050 root        1  52    0 12052K  1624K ttyin   3   0:00  0.00% getty
 1049 root        1  20    0 12052K  1624K ttyin  11   0:00  0.00% getty
 1053 root        1  52    0 12052K  1624K ttyin   8   0:00  0.00% getty
 1055 root        1  52    0 12052K  1624K ttyin   7   0:00  0.00% getty
 1052 root        1  52    0 12052K  1624K ttyin   2   0:00  0.00% getty
 1054 root        1  52    0 12052K  1624K ttyin   4   0:00  0.00% getty
 1056 root        1  52    0 12052K  1624K ttyin   1   0:00  0.00% getty
  958 root        1  20    0 12052K  1520K RUN    11   0:48  0.49% powerd
 1738 root        1  20    0  3784K  1444K tx->tx  9   0:01  0.00% rm
  119 root        1  52    0  9920K  1420K pause   3   0:00  0.00% adjkerntz
   12 root       43 -76    -     0K   688K WAIT    0   0:39  0.00% intr
    1 root        1  24    0  6276K   568K wait    8   0:00  0.00% init
   11 root       12 155 ki31     0K   192K RUN    11  28.3H 1029.44% idle
   15 root        8 -68    -     0K   128K -       9   0:02  0.00% usb
   36 root        4  -8    -     0K    80K CPU11  11   1:49 12.35% zfskern
   13 root        3  -8    -     0K    48K RUN     3   1:17  0.10% geom
 1106 root        1  20    -     0K    16K geli:w  1   0:11  0.39% g_eli[1] da0
 1134 root        1  20    -     0K    16K geli:w  1   0:11  0.20% g_eli[1] da2
 1149 root        1  20    -     0K    16K RUN     3   0:10  0.20% g_eli[3] da3
 1162 root        1  20    -     0K    16K geli:w  3   0:10  0.10% g_eli[3] da4
 1150 root        1  20    -     0K    16K geli:w  4   0:10  0.10% g_eli[4] da3
 1126 root        1  20    -     0K    16K geli:w  6   0:10  0.10% g_eli[6] da1
 1130 root        1  20    -     0K    16K geli:w 10   0:09  0.10% g_eli[10] da1
 1113 root        1  20    -     0K    16K RUN     8   0:12  0.00% g_eli[8] da0
 1141 root        1  20    -     0K    16K RUN     8   0:12  0.00% g_eli[8] da2
 1128 root        1  20    -     0K    16K RUN     8   0:12  0.00% g_eli[8] da1
```

`vmstat`

```
procs      memory      page                    disks     faults         cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr md0 da0   in   sy   cs us sy id
 0 0 7    718M    63M    68   4   0   0   111  27   0   0  207  149 6558  0  1 99
```


----------



## Acardenes (Aug 9, 2013)

Another `top` from the moment of a crash:


```
last pid:  1796;  load averages:  0.00,  0.00,  0.04                                                                                                                                      up 0+03:34:16  13:30:09
116 processes: 2 running, 110 sleeping, 4 waiting
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.9% idle
Mem: 26M Active, 1268K Inact, 15G Wired, 19M Buf, 144K Free
Swap: 4096M Total, 32K Used, 4096M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root       12 155 ki31     0K   192K CPU11  11  42.3H 1200.00% idle
 1067 root        1  20    0 16560K  1984K CPU8    7   0:38  0.10% top
    0 root      199  -8    0     0K  3184K -       9   3:33  0.00% kernel
   36 root        4  -8    -     0K    80K vmwait  7   2:19  0.00% zfskern
   13 root        3  -8    -     0K    48K -       3   2:08  0.00% geom
  955 root        1  20    0 12052K  1248K select  5   1:07  0.00% powerd
   12 root       43 -76    -     0K   688K WAIT    0   1:02  0.00% intr
 1088 root        1  20    -     0K    16K geli:w  1   0:20  0.00% g_eli[1] da1
 1092 root        1  20    -     0K    16K vmwait  5   0:19  0.00% g_eli[5] da1
 1131 root        1  20    -     0K    16K geli:w  5   0:19  0.00% g_eli[5] da4
 1127 root        1  20    -     0K    16K geli:w  1   0:19  0.00% g_eli[1] da4
 1095 root        1  20    -     0K    16K geli:w  8   0:19  0.00% g_eli[8] da1
 1090 root        1  20    -     0K    16K vmwait  3   0:19  0.00% g_eli[3] da1
 1118 root        1  20    -     0K    16K geli:w  5   0:19  0.00% g_eli[5] da3
 1101 root        1  20    -     0K    16K geli:w  1   0:19  0.00% g_eli[1] da2
 1105 root        1  20    -     0K    16K geli:w  5   0:19  0.00% g_eli[5] da2
 1079 root        1  20    -     0K    16K geli:w  5   0:19  0.00% g_eli[5] da0
 1075 root        1  20    -     0K    16K geli:w  1   0:19  0.00% g_eli[1] da0
 1129 root        1  20    -     0K    16K geli:w  3   0:19  0.00% g_eli[3] da4
 1114 root        1  20    -     0K    16K geli:w  1   0:19  0.00% g_eli[1] da3
 1134 root        1  20    -     0K    16K geli:w  8   0:19  0.00% g_eli[8] da4
 1116 root        1  20    -     0K    16K geli:w  3   0:18  0.00% g_eli[3] da3
 1103 root        1  20    -     0K    16K geli:w  3   0:18  0.00% g_eli[3] da2
 1082 root        1  20    -     0K    16K vmwait  8   0:18  0.00% g_eli[8] da0
 1108 root        1  20    -     0K    16K vmwait  8   0:18  0.00% g_eli[8] da2
 1077 root        1  20    -     0K    16K geli:w  3   0:18  0.00% g_eli[3] da0
 1098 root        1  20    -     0K    16K geli:w 11   0:18  0.00% g_eli[11] da1
 1091 root        1  20    -     0K    16K vmwait  4   0:18  0.00% g_eli[4] da1
 1121 root        1  20    -     0K    16K vmwait  8   0:18  0.00% g_eli[8] da3
 1137 root        1  20    -     0K    16K geli:w 11   0:18  0.00% g_eli[11] da4
 1117 root        1  20    -     0K    16K vmwait  4   0:18  0.00% g_eli[4] da3
 1089 root        1  20    -     0K    16K vmwait  2   0:18  0.00% g_eli[2] da1
 1130 root        1  20    -     0K    16K vmwait  4   0:18  0.00% g_eli[4] da4
 1128 root        1  20    -     0K    16K geli:w  2   0:18  0.00% g_eli[2] da4
 1087 root        1  20    -     0K    16K vmwait  0   0:17  0.00% g_eli[0] da1
 1111 root        1  20    -     0K    16K vmwait 11   0:17  0.00% g_eli[11] da2
 1104 root        1  20    -     0K    16K geli:w  4   0:17  0.00% g_eli[4] da2
 1094 root        1  20    -     0K    16K vmwait  7   0:17  0.00% g_eli[7] da1
 1085 root        1  20    -     0K    16K geli:w 11   0:17  0.00% g_eli[11] da0
 1078 root        1  20    -     0K    16K vmwait  4   0:17  0.00% g_eli[4] da0
 1126 root        1  20    -     0K    16K vmwait  0   0:17  0.00% g_eli[0] da4
 1096 root        1  20    -     0K    16K geli:w  9   0:17  0.00% g_eli[9] da1
 1115 root        1  20    -     0K    16K vmwait  2   0:17  0.00% g_eli[2] da3
 1076 root        1  20    -     0K    16K vmwait  2   0:17  0.00% g_eli[2] da0
 1102 root        1  20    -     0K    16K vmwait  2   0:17  0.00% g_eli[2] da2
```


----------



## Acardenes (Aug 9, 2013)

I changed a couple of parameters in /boot/loader.conf and it stabilized. 


```
vfs.zfs.arc_meta_limit="3000m"
vfs.zfs.arc_min="1000m"
vfs.zfs.arc_max="2048m"
vfs.zfs.prefetch_disable="1"
```

It's resilvering now:


```
pool: criptanatop
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Aug  9 16:14:28 2013
        1 scanned out of 3.99T at 1/s, (scan is slow, no estimated time)
        0 resilvered, 0.00% done
config:

	NAME         STATE     READ WRITE CKSUM
	criptanatop  ONLINE       0     0     0
	  raidz1-0   ONLINE       0     0     0
	    da0.eli  ONLINE       0     0     0
	    da1.eli  ONLINE       0     0     0
	    da2.eli  ONLINE       0     0     0
	    da3.eli  ONLINE       0     0     0
	    da4.eli  ONLINE       0     0     0

errors: No known data errors
```


`top`

```
last pid:  1311;  load averages:  1.14,  1.41,  1.06    up 0+01:14:25  16:54:44
110 processes: 2 running, 107 sleeping, 1 waiting
CPU:  0.0% user,  0.0% nice,  1.3% system,  0.0% interrupt, 98.7% idle
Mem: 17M Active, 16M Inact, 3515M Wired, 17M Buf, 12G Free
Swap: 4096M Total, 4096M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    CPU COMMAND
   11 root       12 155 ki31     0K   192K CPU11  11 867:47 1198.97% idle
   36 root        4  -8    -     0K    80K zio->i 11   0:35  2.10% zfskern
   13 root        3  -8    -     0K    48K -       6   0:51  0.10% geom
  951 root        1  20    0 12052K  1576K select  3   0:09  0.10% powerd
    0 root      196  -8    0     0K  3136K -       1   4:33  0.00% kernel
   12 root       43 -76    -     0K   688K WAIT    0   0:23  0.00% intr
 1151 root        1  20    -     0K    16K geli:w  8   0:18  0.00% g_eli[8] da3
 1138 root        1  20    -     0K    16K geli:w  8   0:18  0.00% g_eli[8] da2
 1164 root        1  20    -     0K    16K geli:w  8   0:18  0.00% g_eli[8] da4
 1125 root        1  20    -     0K    16K geli:w  8   0:18  0.00% g_eli[8] da1
 1112 root        1  20    -     0K    16K geli:w  8   0:18  0.00% g_eli[8] da0
 1156 root        1  20    -     0K    16K geli:w  0   0:17  0.00% g_eli[0] da4
 1105 root        1  20    -     0K    16K geli:w  1   0:17  0.00% g_eli[1] da0
 1160 root        1  20    -     0K    16K geli:w  4   0:17  0.00% g_eli[4] da4
 1158 root        1  21    -     0K    16K geli:w  2   0:17  0.00% g_eli[2] da4
 1104 root        1  20    -     0K    16K geli:w  0   0:17  0.00% g_eli[0] da0
 1157 root        1  23    -     0K    16K geli:w  1   0:17  0.00% g_eli[1] da4
 1149 root        1  21    -     0K    16K geli:w  6   0:17  0.00% g_eli[6] da3
 1126 root        1  21    -     0K    16K geli:w  9   0:17  0.00% g_eli[9] da1
 1167 root        1  20    -     0K    16K geli:w 11   0:16  0.00% g_eli[11] da
 1124 root        1  21    -     0K    16K geli:w  7   0:16  0.00% g_eli[7] da1
 1128 root        1  21    -     0K    16K geli:w 11   0:16  0.00% g_eli[11] da
 1159 root        1  20    -     0K    16K geli:w  3   0:16  0.00% g_eli[3] da4
 1114 root        1  20    -     0K    16K geli:w 10   0:16  0.00% g_eli[10] da
 1113 root        1  21    -     0K    16K geli:w  9   0:16  0.00% g_eli[9] da0
 1107 root        1  20    -     0K    16K geli:w  3   0:16  0.00% g_eli[3] da0
 1165 root        1  20    -     0K    16K geli:w  9   0:16  0.00% g_eli[9] da4
 1139 root        1  20    -     0K    16K geli:w  9   0:16  0.00% g_eli[9] da2
 1137 root        1  20    -     0K    16K geli:w  7   0:16  0.00% g_eli[7] da2
 1118 root        1  21    -     0K    16K geli:w  1   0:16  0.00% g_eli[1] da1
 1166 root        1  21    -     0K    16K geli:w 10   0:16  0.00% g_eli[10] da
 1115 root        1  21    -     0K    16K geli:w 11   0:16  0.00% g_eli[11] da
 1161 root        1  21    -     0K    16K geli:w  5   0:16  0.00% g_eli[5] da4
 1106 root        1  21    -     0K    16K geli:w  2   0:16  0.00% g_eli[2] da0
 1141 root        1  21    -     0K    16K geli:w 11   0:16  0.00% g_eli[11] da
 1136 root        1  21    -     0K    16K geli:w  6   0:16  0.00% g_eli[6] da2
 1154 root        1  20    -     0K    16K geli:w 11   0:16  0.00% g_eli[11] da
 1109 root        1  21    -     0K    16K geli:w  5   0:16  0.00% g_eli[5] da0
 1152 root        1  21    -     0K    16K geli:w  9   0:16  0.00% g_eli[9] da3
 1127 root        1  20    -     0K    16K geli:w 10   0:16  0.00% g_eli[10] da
 1140 root        1  21    -     0K    16K geli:w 10   0:16  0.00% g_eli[10] da
 1110 root        1  21    -     0K    16K geli:w  6   0:16  0.00% g_eli[6] da0
 1131 root        1  20    -     0K    16K geli:w  1   0:16  0.00% g_eli[1] da2
 1143 root        1  21    -     0K    16K geli:w  0   0:16  0.00% g_eli[0] da3
```

I will play with the tunables as soon as the resilver is finished. I hope it works well and that this is useful for anyone who runs into this problem.

Thanks.


----------

