# High load (when system idle) after replacing HDDs



## ers (May 9, 2021)

I have replaced 4 HDDs (bad sectors) with 6 new ones. The replacement went without problems. The system sees the disks, and there are no unusual messages in the logs.
*No other hardware, software or config was changed.* After the replacement the system load is about 0.95+ when idle.
The mysterious load is in the kernel (~12% by top). The top 5 processes by top:
```
242.5H 730.32% [idle]
 47.6H  93.21% [kernel]
 36:56   0.00% [geom]
 34:46   0.00% [intr]
  9:00   0.00% [zfskern]
```

Before the HDD replacement the powerd daemon changed the frequency without problems. Now it is always in turbo mode...
The load calculated by powerd (at the highest frequency) is about 94-106%, so powerd cannot lower the frequency.
Why? What to look for?


----------



## SirDice (May 9, 2021)

Slower disks perhaps? Changed 7200 RPM disks for 5400 RPM ones?


----------



## ers (May 9, 2021)

No, I replaced them with 7200 RPM drives.
IMHO disk speed has no connection to high load when nothing is being read from or written to them.
I have no clue what it could be. I am seeing this behavior for the first time...


----------



## ralphbsz (May 9, 2021)

Maybe you put the new disks into existing ZFS pools, and the workload you're seeing is the resilvering of those pools? Try `zpool status` if you are using ZFS.
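A quick way to act on this suggestion (a sketch; in real use the input comes from `zpool status`, and the sample line below is fabricated for illustration):

```shell
# Sketch: scan `zpool status` output for a running resilver or scrub.
# Real usage would be:  zpool status | check_scan
check_scan() {
    grep -E 'resilver|scrub' | grep -v 'none requested' \
        || echo "no resilver/scrub running"
}

# Fabricated sample line standing in for real output:
echo '  scan: resilver in progress since Sun May  9 10:00:00 2021' | check_scan
```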


----------



## ers (May 9, 2021)

The new disks are in a new pool and there are no errors. No resilvering needed in any pool...


----------



## richardtoohey2 (May 9, 2021)

Unlikely but worth asking - definitely no hardware RAID involved?


----------



## mtu (May 10, 2021)

Without knowing your pool layouts (before and after), all we can do is guess. _"replaced 4 HDDs for 6 new ones"_ and _"New disks are in new pool"_ is not useful information.

Try and post at least the full output of `zpool status`, and describe what changed with the HDD replacement.


----------



## ers (May 11, 2021)

No hardware RAID. Plain ZFS on an HBA.
old pools: *1x hdd not used, 3x hdd in raidz1*, 6x hdd in raidz2, 6x hdd in raidz2
new pools: *6x hdd in raidz2*, 6x hdd in raidz2, 6x hdd in raidz2

The disks in bold were removed; the new ones created the new pool. The other pools are unchanged.
`zpool status` shows all pools and disks online, with read 0, write 0, cksum 0 on all disks in all pools.


----------



## ralphbsz (May 11, 2021)

Yes, there is no logical reason I can see for ZFS to be doing resilvering. It might be doing a scrub, but that would be obvious and visible in `zpool status`. And the CPU time is being used by process [kernel], not by process [zfskern], so it probably isn't ZFS in the first place.

I honestly have no idea. Debugging ideas: look with iostat; maybe whatever process is doing this is also doing IO which you cannot otherwise explain. And look at top for user processes: maybe there is a user process that uses "a little bit" of CPU time in user space all the time, which could give us a hint. Look at /var/log and find log files that are growing.
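The "growing log files" hunt can be scripted; a minimal sketch (the directory and the 5-minute window are arbitrary choices, not from the post):

```shell
# Sketch: list files modified within the last N minutes under a directory.
# Pointing it at /var/log is a cheap way to spot something that is
# quietly logging all the time.
recently_touched() {    # usage: recently_touched /var/log 5
    find "$1" -type f -mmin "-$2" -exec ls -l {} + 2>/dev/null
}

recently_touched /var/log 5
```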


----------



## ers (May 11, 2021)

Iostat shows no activity on the disks (I stopped outside communication for the tests).
CPU stats are: kernel 12, idle 88. As previously observed in top - no hint there.
There are no user processes. This "magic" is present with no user activity.
?...


----------



## richardtoohey2 (May 11, 2021)

Did you change anything else?  Or literally JUST put the new drives in?  Looks like there were some changes you made to the ZFS set-up (ZFS is not anything I know about), so it's not just a case of plugging in hot spares - there were some ZFS changes (but then the point is - why can't you see what ZFS is doing?)

Reboots?

Did you make any previous configuration changes or updates that might have kicked in after a reboot?

If you boot off a live "CD" does the kernel still show as being as busy?  Would expect it to not be, but if it still is, maybe that points to hardware?


----------



## ers (May 12, 2021)

*I know I did not change anything else. Why does nobody believe me?
No previous changes (!) in config, hardware or software.*
I boot from the same disk with the same system.
That is why this case is so strange...

I just took the old drives out and put the new ones in. Of course I had to make some small changes, but that is obviously required to use the new drives:
I destroyed the 3x hdd raidz1 and created a 6x hdd raidz2. Nothing else.

I do see what ZFS is doing. Why do you think I cannot?
Outside communication was stopped for the tests. There is no "hidden" activity as ralphbsz suggested.

Reboots? Plenty. No changes.
Later, disabling unused devices to change the IRQ mapping did not change anything either. Reverted.


----------



## richardtoohey2 (May 12, 2021)

ers said:


> I do see what zfs is doing. Why you think i cannot?


Sorry, I thought that was the problem - you couldn't see why there was activity (or what the activity is/was).  I must have got the wrong end of the stick, sorry.


----------



## mark_j (May 12, 2021)

You need to start with uname output, dmesg output, rc.conf etc. There's just too little information provided other than "I have a problem, solve it for me!"

Have you looked at `systat -vmstat`?

From the information provided, the new disks being the cause looks like a red herring; they may have nothing at all to do with the CPU usage. Maybe you've got an interrupt storm?

What are your settings for `sysctl -a | grep "kern.event"`? What cron jobs are running, both system and user?

It could also just be a bug.

Edit: I want to clarify, are we talking load averages here, or what?


----------



## ers (May 19, 2021)

I wrote about something strange I found. All my years of using BSD did not prepare me to solve this mystery.
I have a problem; have you seen something similar? I am asking for suggestions on where to look, because probably nobody has solved this...
The question is: _Why did it work flawlessly before?_

I do not know why you want `uname` output, but here it is: `FreeBSD`
Maybe you were thinking of something else? `uname -a` perhaps?  `FreeBSD 10.0-RELEASE #0 r260789` *No, there is no chance to change to 13 for at least a year.*

This is not an interrupt storm, because there are no messages about a storm in the logs.
The `top` stats (for idle) show: `CPU:  0.0% user,  0.0% nice, 13.3% system,  0.0% interrupt, 86.7% idle`. *0.0% interrupt* means no storm here. Am I wrong?
Of course I have looked at `systat -vmstat`, but no help there. The most suspicious entry is acpi0 - my only suspect. Why was it OK earlier?

```
8938 total
5427 acpi0 9
2 ehci0 16
3 ehci1 23
881 cpu0:timer
mps0 264
mps1 265
xhci0 266
::::
17 ahci0 277
264 cpu1:timer
178 cpu4:timer
1072 cpu3:timer
485 cpu6:timer
150 cpu2:timer
197 cpu7:timer
255 cpu5:timer
```
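As an aside, the "system" figure can be pulled out of that top(1) CPU line mechanically; a sketch using the exact line quoted above:

```shell
# Sketch: extract the "system" percentage from top's CPU summary line.
line='CPU:  0.0% user,  0.0% nice, 13.3% system,  0.0% interrupt, 86.7% idle'
echo "$line" | awk -F, '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /system/) { gsub(/[^0-9.]/, "", $i); print $i }
}'
# prints: 13.3
```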

rc.conf:

```
ifconfig_igb0="inet XXX.XXX.XXX.XXX netmask XXX.XXX.XXX.XXX description LAN0"
defaultrouter="XXX.XXX.XXX.XXX"
sshd_enable="YES"
powerd_enable="YES"
powerd_flag="-a adp"
dumpdev="NO"
update_motd="NO"
zfs_enable="YES"
pf_enable="YES"
openntpd_enable="YES"
openntpd_flags="-s"
samba_enable="YES"
syslogd_flags="-4 -ss"
moused_enable="NO"
moused_ums0_enable="NO"
moused_ums1_enable="NO"
performance_cx_lowest="C2"
economy_cx_lowest="C2"
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags=""
```
Setting `performance_cx_lowest="Cmax"` or `economy_cx_lowest="Cmax"` did not change anything.

`sysctl -a | grep "kern.event"`

```
kern.eventtimer.et.LAPIC.flags: 7
kern.eventtimer.et.LAPIC.frequency: 50001119
kern.eventtimer.et.LAPIC.quality: 600
kern.eventtimer.et.HPET.flags: 7
kern.eventtimer.et.HPET.frequency: 14318180
kern.eventtimer.et.HPET.quality: 550
kern.eventtimer.et.RTC.flags: 17
kern.eventtimer.et.RTC.frequency: 32768
kern.eventtimer.et.RTC.quality: 0
kern.eventtimer.et.i8254.flags: 1
kern.eventtimer.et.i8254.frequency: 1193182
kern.eventtimer.et.i8254.quality: 100
kern.eventtimer.choice: LAPIC(600) HPET(550) i8254(100) RTC(0)
kern.eventtimer.singlemul: 2
kern.eventtimer.idletick: 0
kern.eventtimer.timer: LAPIC
kern.eventtimer.periodic: 0
```

Timecounter "TSC-low" frequency 1650036800 Hz quality 1000

System crontab is the only one used.

```
*/5     *       *       *       *       root    /usr/libexec/atrun
11      11      *       *       *       operator /usr/libexec/save-entropy
0       *       *       *       *       root    newsyslog
1       1       *       *       6       root    periodic daily
15      2       *       *       6       root    periodic weekly
30      3       1       *       *       root    periodic monthly
1,31    0-5     *       *       *       root    adjkerntz -a
```

This is simple storage. No CPU-time-consuming services.

Any ideas what went wrong?
Has anyone seen a similar issue?


----------



## mark_j (May 19, 2021)

You still haven't stated what load you are talking about.
FreeBSD 10 is as old as my granny & she can't take much load either.


----------



## ers (May 19, 2021)

*You did not read the first message. Read the first message...
Everything is clearly described there.*
The fact that the system is old does not change the fact that it was working and stopped after replacing only the drives.
The calculated load is high and prevents powerd from lowering the frequency.
This worked perfectly earlier.


----------



## ipsum (May 19, 2021)

Can you export that new pool and see if the load changes?


----------



## PMc (May 19, 2021)

So, as I understand it, the system is consuming CPU, and we do not know why? We do know, however, that the load is accumulated on PID 0 (the kernel). Correct?

So the next step of in-vivo analysis is to see which piece of the kernel is consuming the load: `ps axH`.
This gives the processes separated into their individual threads, with the accumulated compute time - and some of these times increase over time. There is a bunch of threads on PID 0 (I don't remember how that looked on Rel.10, it has changed a lot over time), so just compare the figures with those a minute later.
(There is also an option to see these in `top` - but I won't search the manpage for how that worked in Rel.10.)
In any case we should get the name of a thread - and that should give a clue about which subsystem is eating the compute.
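A concrete way to run that comparison (a sketch; the 10-second interval is arbitrary, and the `-o` column selection is a convenience, not something from the original post):

```shell
# Sketch: snapshot all threads twice and keep the lines whose
# accumulated TIME changed in between.
snap() { ps axH -o time,comm | sort; }

snap > /tmp/snap1
sleep 10
snap > /tmp/snap2

# comm -13 prints lines unique to the second snapshot, i.e. threads
# whose TIME column moved during the interval.
comm -13 /tmp/snap1 /tmp/snap2
```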


----------



## SirDice (May 19, 2021)

ers said:


> `FreeBSD 10.0-RELEASE #0 r260789` *No, there is no chance to change to 13 for at least year.*


You don't need to upgrade to 13.0; 12.2 is also supported. You could update to 11.4 too, but I wouldn't recommend that as it will be EoL at the end of September. Still a viable quick upgrade though, even if you have to do another upgrade to 12.2 soon after. Although I believe by that time 12.3 might be out. The fact remains, as others have noted, that 10.0 is old (support ended in 2015).


ers said:


> The fact the system is old do not change that it was working and stopped after replacing only drives.


The act of replacing a disk may have triggered a bug or issue that has long since been fixed. There's been a lot of development in the past 6 years. You can't just dismiss that.


----------



## chungy (May 19, 2021)

Even if you want to stick to an unsupported release, 10.4 is still a better shot at getting things working properly than 10.0....

Seriously though, upgrade to 12.2 or 13.0. It's well worth the time.


----------



## covacat (May 19, 2021)

Just boot a supported release from install media and drop to the live FS.
See if the load is still present.


----------



## mark_j (May 20, 2021)

ers said:


> *You did not read the first message. Read the first message...
> There is everything clearly described.*
> The fact the system is old do not change that it was working and stopped after replacing only drives.
> The calculated load is high and is preventing powerd to lower frequency.
> This was working perfectly earlier.


And still you have not explained what the load is.
Are you referring to top's output, such as this:

```
last pid: 86750;  load averages:  0.15,  0.07,  0.04
```
If so, then you need to learn about load averages and how FreeBSD operates. I've had loads up around 300 and still had a responsive machine.


----------



## ralphbsz (May 20, 2021)

ers said:


> The `top` stats (for idle) show: `CPU:  0.0% user,  0.0% nice, 13.3% system,  0.0% interrupt, 86.7% idle`. *0.0% interrupt* means no storm here. Am I wrong?


Could be. Not clear. The problem here is that interrupt processing is so darn efficient. I'm looking at my home router, which acts as a NAT box and firewall, and even though there is heavy network traffic (probably 5 MByte/s; two house mates are watching videos over the network and saturating our DSL), the interrupt rate is at 0.0%. In spite of the fact that thousands of packets per second are being routed. I've been glancing at top for several minutes now, and I have yet to see it climb to 0.1%. At the same time, my system fraction of the CPU seems to be at 2...3% typically, with jumps to 8% a few times (I have quite a few processes that do something every few seconds or once a minute).

So interrupt time being 0.0% doesn't prove much; even significant interrupt load might not register as 0.1%.
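A rough number behind this (a sketch; both figures below are assumptions for illustration, not measurements from this machine):

```shell
# Back-of-the-envelope: interrupt CPU share = rate x per-interrupt cost.
awk 'BEGIN {
    rate = 5000          # interrupts per second (assumed)
    cost = 2e-6          # CPU seconds per interrupt (assumed)
    printf "CPU share: %.1f%%\n", rate * cost * 100
}'
# prints: CPU share: 1.0%
```

So even a few thousand interrupts per second can stay well under what top rounds up to a visible percentage.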



ers said:


> The fact the system is old do not change that it was working and stopped after replacing only drives.


But it is quite likely that this old system has bugs which have been fixed long ago. And that people may not even remember. I don't know exactly when I upgraded to 11, but it was several years ago.



PMc said:


> So the next step of in-vivo analysis is to see what piece of the kernel is consuming the load: `ps axH`.


THIS!

To debug this, you need to break it down further. Something in the kernel (process ID 0) is using CPU time, but we don't know what, or why, or what monsters are hidden in this old kernel. All we know is that it started after a disk replacement, but that might be a red herring.


----------



## mark_j (May 20, 2021)

PMc said:


> So, as I understand, the system is consuming CPU, and we do not know why? We do know however that the load is accumulated on PID 0 (the kernel). Correct?


But be careful. The load average you see includes some POTENTIAL load on the CPU, not all of it actual. In other words, the CPU has a queue of runnable processes plus those already running. Some of the runnable processes (probably a lot?) are waiting on interrupts, so they don't consume CPU resources at all; they're sleeping.

A high CONSISTENT load (and that means in the 15-minute zone) means you have a problem. A high count in the 1-minute zone just means you've got a lot of running/runnable processes (threads, actually) waiting to complete. If it's higher in the 5-minute zone than in the 1-minute zone, then you're building up to a problem.

The scheduler uses this load average to schedule processes/threads. See kern_synch.c and sched_ule.c.

The only time I can recall seeing high (over 200) in the 15-minute zone was when a SCSI disk array pack decided to die.
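The decay behaviour described above can be sketched numerically. This is a simplified model of the classic BSD exponential-moving-average scheme (sampled every 5 seconds with a fixed decay constant; the real kernel's constants and sampling may differ): a single thread that is runnable all the time pushes the 1-minute average toward 1.0 within a few minutes, which is exactly the 0.95-1.02 band reported earlier in the thread.

```shell
# Sketch: exponential moving average for the 1-minute load, updated
# every 5 seconds, with one permanently runnable thread.
awk 'BEGIN {
    decay = exp(-5 / 60)          # per-sample decay for the 1-min average
    load = 0
    nrun = 1                      # runnable threads at each sample
    for (t = 5; t <= 300; t += 5)
        load = load * decay + nrun * (1 - decay)
    printf "load after 5 minutes: %.2f\n", load
}'
# prints: load after 5 minutes: 0.99
```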


----------



## ers (May 21, 2021)

Especially for mark_j, one more time from post #1:


> After replacement the *system load* is about *0.95*+ when idle.
> The mysterious *load* is in kernel (*~12% by top*). 5 top processes by top:
> 242.5H 730.32% [idle]
> 47.6H 93.21% [kernel]
> ...


As you can see, I wrote about *load*, not pure *CPU usage*, and this has been emphasized since post #1.
Load is the computed load available, for example, in top's first line for 3 different time periods.
If you need the equivalent in CPU usage, it is ~12%, available in the 3rd line of top.
In addition you have the information that powerd calculates a load of about 94-106%, which is similar to the calculated load value in top.
Because you did not read, or read but did not understand: *the problem is that the calculated load prevents powerd from lowering the CPU frequency*.
The system is responsive (but could be better if we solve the problem of the mysterious load).
Is this enough explanation from post 1? 

PMc: Are you thinking of `top -HSaIs1`? The suspect is kernel{acpi_task_*} (see post #15 - there was the information about the most suspicious process).

```
from top (idle threads removed)
    0 root         8    0     0K 12544K -       7 102.2H  34.96% [kernel{acpi_task_0}]
    0 root         8    0     0K 12544K -       2 103.1H  32.86% [kernel{acpi_task_2}]
    0 root         8    0     0K 12544K CPU4    4 103.7H  30.08% [kernel{acpi_task_1}]

from ps
    0  -  DLs   6135:37.99 [kernel/acpi_task_0]
    0  -  DLs   6223:21.73 [kernel/acpi_task_1]
    0  -  RLs   6189:10.98 [kernel/acpi_task_2]
```

SirDice: your information is the most accurate so far, but we do not have an explanation/solution yet.

For *all*, one more time:
I stopped outside communication for the tests. There were no additional processes, users, etc. except the running services (see message #15).
There was minimal network transfer for the terminal to show the data. Please read more carefully so I do not have to explain the same things again.
Big thank you in advance.


----------



## PMc (May 21, 2021)

ers said:


> PMc: Are you thinking of `top -HSaIs1`? The suspect is kernel{acpi_task_*} (see post #15 - there was the information about the most suspicious process).
> 
> ```
> from top (idle threads removed)
> ...


Now that's decent, thank you.
I don't find any machine where these three show anything other than

```
    0  -  DLs     0:00.00 [kernel/acpi_task_0]
    0  -  DLs     0:00.00 [kernel/acpi_task_1]
    0  -  DLs     0:00.00 [kernel/acpi_task_2]
```
This thing even has some more threads, namely "thermal" and "cooling", which also do nothing; so I switched them off, which didn't make any difference to the established thermal and cooling behaviour (I suppose all of this is only used on laptops). Maybe this here can also be switched off - there is info on how to switch it off in acpi(4).
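For reference, acpi(4) documents a loader tunable for this; a sketch of the switch-off described above (which subsystems are safe to disable is machine-dependent, so treat this as an experiment, not a fix):

```
# /boot/loader.conf -- disable individual ACPI subsystems at boot
# (names come from acpi(4); "thermal" is just the example discussed here)
debug.acpi.disabled="thermal"
```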


----------



## ralphbsz (May 21, 2021)

OK, so your ACPI is broken and is soaking up CPU. For some reason that I can't fathom, it started after the disk replacement. Given that it is an ancient version, there is no point in trying to debug it.

I can only think of one way to fix it correctly: replace ACPI (and therefore all of the base system) with a newer version. You say you can't do that for another ~year, so you'll have to live with it.


----------



## mer (May 21, 2021)

Maybe something in the BIOS can be turned off?  Or perhaps disable powerd?
`man -k acpi` shows a bunch of things; acpiconf or acpidump may be interesting to poke at.

When you reboot, isn't there an option to boot with acpi disabled?  If so, see what happens with that.


----------



## ers (May 22, 2021)

PMc: RTFM is a widely used technique, but I read this manual earlier. Many of these things are more for laptops.
Reconfiguring acpi is new to me. How do I change the acpi config safely? How do I find the cause inside the acpi subsystem? Any hint?
The thermal and cooling threads are almost unused according to ps/top.
ralphbsz: Very clever suggestion, very, very... It is broken, so leave it. 
mer: Why disable powerd when it is not the cause of the problem? I want to adjust the frequency with powerd.
Disabling acpi was the first thing I tested. Kernel panic on boot and hang. Reverted.


----------



## PMc (May 22, 2021)

ers said:


> PMc: RTFM is a widely used technique, but I read this manual earlier. Many of these things are more for laptops.
> Reconfiguring acpi is new to me. How do I change the acpi config safely? How do I find the cause inside the acpi subsystem? Any hint?
> The thermal and cooling threads are almost unused according to ps/top.


That's why I switched them off - they annoyed me. I don't want employees lingering around when I don't know what they're doing (if anything at all). Now yours is a slightly more difficult case, one that reminds me of mutiny.
The answer then is something along the lines of "just try it out", "educated guess", "read the source", "trial & error" - and I know none of this is to your liking.



ers said:


> ralphbsz: Very clever suggestion, very, very... It is broken, so leave it.


Aye. It's broken, it's old, it's no longer supported, and it has undergone lots of change since then. That's not the best prospect for a deep investigation.


----------



## ralphbsz (May 22, 2021)

ers said:


> ralphbsz: Very clever suggestion, very, very... It is broken, so leave it.


If you find a better idea, please share it. Other than finding (and probably paying) a BSD ACPI expert who knows the version 10 source base, I don't see any ideas.


----------



## mer (May 22, 2021)

ers said:


> Why disable powerd when it is not the cause of the problem? I want to adjust the frequency with powerd.
> Disabling acpi was the first thing I tested. Kernel panic on boot and hang. Reverted.


Is there not a relationship between powerd and acpi?  I thought there was.  If not, then by all means leave it alone.


----------



## ers (May 29, 2021)

PMc: only the trial-and-error method is _a new hope_... and maybe _the return of the Jedi_ of BSD will give us acpi happiness...
ralphbsz: because this happened without recompiling the system, there must be a way; it just has not been found yet. Maybe a newer version will find it. Who knows. Better leave it as it is... I think looking for a solution is better than doing nothing.
mer: acpi does not need powerd to run properly. Powerd depends on acpi.


----------



## mer (May 29, 2021)

ers said:


> acpi does not need powerd to run properly. Powerd depends on acpi.


That was my point.  I've seen things go bad from ACPI tables being not quite right; your first post shows kernel acpi threads, and powerd depends on acpi.  I was suggesting that simply stopping powerd, or disabling it and rebooting, could give a data point:
Powerd running: you have the load, apparently from the acpi kernel threads.
Powerd stopped: maybe you don't have the load, and the acpi kernel threads do nothing.

If you don't have the load with powerd disabled, then one could conclude there is an interaction between powerd, acpi and the load you are seeing.

That is all, but by all means, feel free to ignore it.
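A minimal way to collect that data point (a sketch; the awk just pulls the 1-minute figure out of uptime(1) output, and the commented `service` line is the FreeBSD way to stop the daemon):

```shell
# Sketch: sample the 1-minute load average; run once with powerd on,
# stop powerd, wait a few minutes, and sample again.
load1m() {
    uptime | awk -F 'load averages*: ' '{ split($2, a, ","); print a[1] }'
}

load1m                    # with powerd running
# service powerd stop     # then wait ~5 minutes and compare:
# load1m
```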


----------



## VladiBG (May 29, 2021)

You can boot the latest FreeBSD LiveCD and see if the load is still there. Also check whether there is a newer version of your BIOS.


----------



## ers (May 30, 2021)

The load has been high whether powerd is on or off, since the beginning of this investigation.


----------



## mark_j (May 30, 2021)

As was described to you many days ago, a load of .95 is nothing. Move on.

If you still think you've got a problem, post the output of ps when it happens, rather than some obtuse snippet that apparently shows this mythical high load.

If it's not happening now, then say so. 
PS: I think this forum software needs a "closed by user" flag. (Perhaps there is one?)


----------



## ers (May 30, 2021)

mark_j: As I can see, you still do not read/understand what the problem is.
You must be kidding... Should I post ps output "when it happened"?
The problem started and *it is there continuously, all the time*.
The calculated load jumped to ~0.97 (sometimes 1.02) after replacing the HDDs and does not drop even for a moment.
I see you want me to prove to you again that it is happening... OK...
What can you advise after reading this ps output?

This will be a long post and this forum has a 25000 bytes/message limit. I had to collapse repeated lines.


```
PID TT  STAT        TIME COMMAND
    0  -  DLs      1:09.52 [kernel/swapper]
    0  -  DLs      0:00.00 [kernel/firmware tas]
    0  -  DLs      0:00.00 [kernel/kqueue taskq]
    0  -  DLs      0:00.64 [kernel/thread taskq]
    0  -  DLs  10145:25.46 [kernel/acpi_task_0]
    0  -  DLs  10280:49.58 [kernel/acpi_task_1]
    0  -  RLs  10249:25.30 [kernel/acpi_task_2]
    0  -  DLs      0:00.00 [kernel/ffs_trim tas]
    0  -  DLs      0:00.00 [kernel/mps0 taskq]
    0  -  DLs      0:00.00 [kernel/mps1 taskq]
    0  -  DLs      0:00.03 [kernel/igb0 que]
    0  -  DLs      0:00.01 [kernel/igb0 que]
    0  -  DLs      0:00.14 [kernel/igb0 que]
    0  -  DLs      0:00.02 [kernel/igb0 que]
    0  -  DLs      0:00.00 [kernel/igb1 que] // 4x
    0  -  DLs      0:00.02 [kernel/mca taskq]
    0  -  DLs      0:00.00 [kernel/system_taskq] // 8x
    0  -  DLs      0:03.12 [kernel/zio_null_iss]
    0  -  DLs      0:00.20 [kernel/zio_null_int]
    0  -  DLs      0:00.00 [kernel/zio_read_iss] // 8x
    0  -  DLs      0:00.18 [kernel/zio_read_int] // 8x
    0  -  DLs      0:13.31 [kernel/zio_write_is]
    0  -  DLs      0:13.27 [kernel/zio_write_is]
    0  -  DLs      0:13.26 [kernel/zio_write_is]
    0  -  DLs      0:13.24 [kernel/zio_write_is]
    0  -  DLs      0:13.27 [kernel/zio_write_is]
    0  -  DLs      0:13.23 [kernel/zio_write_is]
    0  -  DLs      0:13.20 [kernel/zio_write_is]
    0  -  DLs      0:13.20 [kernel/zio_write_is]
    0  -  DLs      0:00.51 [kernel/zio_write_is]
    0  -  DLs      0:00.51 [kernel/zio_write_is]
    0  -  DLs      0:00.52 [kernel/zio_write_is]
    0  -  DLs      0:00.51 [kernel/zio_write_is]
    0  -  DLs      0:00.52 [kernel/zio_write_is]
    0  -  DLs      0:02.99 [kernel/zio_write_in]
    0  -  DLs      0:03.00 [kernel/zio_write_in]
    0  -  DLs      0:02.99 [kernel/zio_write_in]
    0  -  DLs      0:02.99 [kernel/zio_write_in]
    0  -  DLs      0:02.99 [kernel/zio_write_in]
    0  -  DLs      0:02.97 [kernel/zio_write_in]
    0  -  DLs      0:03.00 [kernel/zio_write_in]
    0  -  DLs      0:02.99 [kernel/zio_write_in]
    0  -  DLs      0:00.14 [kernel/zio_write_in]
    0  -  DLs      0:00.14 [kernel/zio_write_in]
    0  -  DLs      0:00.14 [kernel/zio_write_in]
    0  -  DLs      0:00.14 [kernel/zio_write_in]
    0  -  DLs      0:00.14 [kernel/zio_write_in]
    0  -  DLs      0:00.06 [kernel/zio_free_iss] // 100x
    0  -  DLs      0:00.00 [kernel/zio_free_int]
    0  -  DLs      0:00.00 [kernel/zio_claim_is]
    0  -  DLs      0:00.00 [kernel/zio_claim_in]
    0  -  DLs      0:00.00 [kernel/zio_ioctl_is]
    0  -  DLs      0:00.53 [kernel/zio_ioctl_in]
    0  -  DLs      0:00.00 [kernel/zfs_vn_rele_]
    0  -  DLs      0:00.04 [kernel/zil_clean]
    0  -  DLs      0:00.00 [kernel/zio_null_iss]
    0  -  DLs      0:00.01 [kernel/zio_null_int]
    0  -  DLs      0:00.00 [kernel/zio_read_iss] // 8x
    0  -  DLs      0:00.00 [kernel/zio_read_int] // 8x
    0  -  DLs      0:00.00 [kernel/zio_write_is] // 13x
    0  -  DLs      0:00.00 [kernel/zio_write_in] // 12x
    0  -  DLs      0:00.00 [kernel/zio_free_iss] // 100x
    0  -  DLs      0:00.00 [kernel/zio_free_int]
    0  -  DLs      0:00.00 [kernel/zio_claim_is]
    0  -  DLs      0:00.00 [kernel/zio_claim_in]
    0  -  DLs      0:00.00 [kernel/zio_ioctl_is]
    0  -  DLs      0:00.00 [kernel/zio_ioctl_in]
    0  -  DLs      0:00.00 [kernel/zfs_vn_rele_]
    0  -  DLs      0:15.10 [kernel/zio_null_iss]
    0  -  DLs      0:01.05 [kernel/zio_null_int]
    0  -  DLs      0:00.00 [kernel/zio_read_iss] // 8x
    0  -  DLs      1:08.27 [kernel/zio_read_int]
    0  -  DLs      1:08.48 [kernel/zio_read_int]
    0  -  DLs      1:08.39 [kernel/zio_read_int]
    0  -  DLs      1:08.26 [kernel/zio_read_int]
    0  -  DLs      1:08.45 [kernel/zio_read_int]
    0  -  DLs      1:08.33 [kernel/zio_read_int]
    0  -  DLs      1:08.29 [kernel/zio_read_int]
    0  -  DLs      1:08.48 [kernel/zio_read_int]
    0  -  DLs      6:34.59 [kernel/zio_write_is]
    0  -  DLs      6:35.11 [kernel/zio_write_is]
    0  -  DLs      6:34.35 [kernel/zio_write_is]
    0  -  DLs      6:34.96 [kernel/zio_write_is]
    0  -  DLs      6:34.89 [kernel/zio_write_is]
    0  -  DLs      6:34.38 [kernel/zio_write_is]
    0  -  DLs      6:34.62 [kernel/zio_write_is]
    0  -  DLs      6:35.25 [kernel/zio_write_is]
    0  -  DLs      1:19.88 [kernel/zio_write_is]
    0  -  DLs      1:19.67 [kernel/zio_write_is]
    0  -  DLs      1:19.72 [kernel/zio_write_is]
    0  -  DLs      1:19.83 [kernel/zio_write_is]
    0  -  DLs      1:19.82 [kernel/zio_write_is]
    0  -  DLs      4:46.77 [kernel/zio_write_in]
    0  -  DLs      4:46.49 [kernel/zio_write_in]
    0  -  DLs      4:46.75 [kernel/zio_write_in]
    0  -  DLs      4:46.39 [kernel/zio_write_in]
    0  -  DLs      4:46.82 [kernel/zio_write_in]
    0  -  DLs      4:46.71 [kernel/zio_write_in]
    0  -  DLs      4:46.65 [kernel/zio_write_in]
    0  -  DLs      4:46.54 [kernel/zio_write_in]
    0  -  DLs      0:03.68 [kernel/zio_write_in]
    0  -  DLs      0:03.70 [kernel/zio_write_in]
    0  -  DLs      0:03.68 [kernel/zio_write_in]
    0  -  DLs      0:03.63 [kernel/zio_write_in]
    0  -  DLs      0:03.69 [kernel/zio_write_in]

    0  -  DLs      0:00.99 [kernel/zio_free_iss]
                          to                                             // 100x
    0  -  DLs      0:01.41 [kernel/zio_free_iss]

    0  -  DLs      0:00.00 [kernel/zio_free_int]
    0  -  DLs      0:00.00 [kernel/zio_claim_is]
    0  -  DLs      0:00.00 [kernel/zio_claim_in]
    0  -  DLs      0:00.00 [kernel/zio_ioctl_is]
    0  -  DLs      0:00.46 [kernel/zio_ioctl_in]
    0  -  DLs      0:00.17 [kernel/zfs_vn_rele_]
    0  -  DLs      0:12.34 [kernel/zio_null_iss]
    0  -  DLs      0:00.83 [kernel/zio_null_int]
    0  -  DLs      0:00.00 [kernel/zio_read_iss] // 8x
    0  -  DLs    142:07.06 [kernel/zio_read_int]
    0  -  DLs    142:05.73 [kernel/zio_read_int]
    0  -  DLs    142:05.28 [kernel/zio_read_int]
    0  -  DLs    142:01.21 [kernel/zio_read_int]
    0  -  DLs    142:06.06 [kernel/zio_read_int]
    0  -  DLs    142:06.04 [kernel/zio_read_int]
    0  -  DLs    142:01.76 [kernel/zio_read_int]
    0  -  DLs    142:03.46 [kernel/zio_read_int]
    0  -  DLs      0:04.77 [kernel/zio_write_is]
    0  -  DLs      0:04.83 [kernel/zio_write_is]
    0  -  DLs      0:04.73 [kernel/zio_write_is]
    0  -  DLs      0:04.78 [kernel/zio_write_is]
    0  -  DLs      0:04.77 [kernel/zio_write_is]
    0  -  DLs      0:04.76 [kernel/zio_write_is]
    0  -  DLs      0:04.76 [kernel/zio_write_is]
    0  -  DLs      0:04.79 [kernel/zio_write_is]
    0  -  DLs      0:00.35 [kernel/zio_write_is]
    0  -  DLs      0:00.35 [kernel/zio_write_is]
    0  -  DLs      0:00.35 [kernel/zio_write_is]
    0  -  DLs      0:00.35 [kernel/zio_write_is]
    0  -  DLs      0:00.35 [kernel/zio_write_is]
    0  -  DLs      0:05.14 [kernel/zio_write_in]
    0  -  DLs      0:05.13 [kernel/zio_write_in]
    0  -  DLs      0:05.14 [kernel/zio_write_in]
    0  -  DLs      0:05.13 [kernel/zio_write_in]
    0  -  DLs      0:05.15 [kernel/zio_write_in]
    0  -  DLs      0:05.13 [kernel/zio_write_in]
    0  -  DLs      0:05.11 [kernel/zio_write_in]
    0  -  DLs      0:05.12 [kernel/zio_write_in]
    0  -  DLs      0:00.02 [kernel/zio_write_in]
    0  -  DLs      0:00.02 [kernel/zio_write_in]
    0  -  DLs      0:00.02 [kernel/zio_write_in]
    0  -  DLs      0:00.02 [kernel/zio_write_in]
    0  -  DLs      0:00.02 [kernel/zio_write_in]

    0  -  DLs      0:00.37 [kernel/zio_free_iss]
                          to                                          // 100x
    0  -  DLs      0:00.52 [kernel/zio_free_iss]

    0  -  DLs      0:00.00 [kernel/zio_free_int]
    0  -  DLs      0:00.00 [kernel/zio_claim_is]
    0  -  DLs      0:00.00 [kernel/zio_claim_in]
    0  -  DLs      0:00.00 [kernel/zio_ioctl_is]
    0  -  DLs      0:00.29 [kernel/zio_ioctl_in]
    0  -  DLs      0:00.00 [kernel/zfs_vn_rele_]
    0  -  DLs      0:20.11 [kernel/zio_null_iss]
    0  -  DLs      0:01.23 [kernel/zio_null_int]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:00.00 [kernel/zio_read_iss]
    0  -  DLs      0:21.33 [kernel/zio_read_int]
    0  -  DLs      0:21.35 [kernel/zio_read_int]
    0  -  DLs      0:21.17 [kernel/zio_read_int]
    0  -  DLs      0:21.34 [kernel/zio_read_int]
    0  -  DLs      0:21.25 [kernel/zio_read_int]
    0  -  DLs      0:21.23 [kernel/zio_read_int]
    0  -  DLs      0:21.26 [kernel/zio_read_int]
    0  -  DLs      0:21.32 [kernel/zio_read_int]
    0  -  DLs     32:01.38 [kernel/zio_write_is]
    0  -  DLs     31:59.79 [kernel/zio_write_is]
    0  -  DLs     32:00.08 [kernel/zio_write_is]
    0  -  DLs     32:00.12 [kernel/zio_write_is]
    0  -  DLs     32:01.38 [kernel/zio_write_is]
    0  -  DLs     32:00.60 [kernel/zio_write_is]
    0  -  DLs     31:59.42 [kernel/zio_write_is]
    0  -  DLs     32:00.71 [kernel/zio_write_is]
    0  -  DLs      5:50.70 [kernel/zio_write_is]
    0  -  DLs      5:50.75 [kernel/zio_write_is]
    0  -  DLs      5:51.05 [kernel/zio_write_is]
    0  -  DLs      5:51.11 [kernel/zio_write_is]
    0  -  DLs      5:50.35 [kernel/zio_write_is]
    0  -  DLs     26:00.37 [kernel/zio_write_in]
    0  -  DLs     26:00.28 [kernel/zio_write_in]
    0  -  DLs     26:00.36 [kernel/zio_write_in]
    0  -  DLs     26:00.26 [kernel/zio_write_in]
    0  -  DLs     26:00.15 [kernel/zio_write_in]
    0  -  DLs     25:59.91 [kernel/zio_write_in]
    0  -  DLs     26:00.38 [kernel/zio_write_in]
    0  -  DLs     26:00.23 [kernel/zio_write_in]
    0  -  DLs      0:01.98 [kernel/zio_write_in]
    0  -  DLs      0:01.96 [kernel/zio_write_in]
    0  -  DLs      0:01.98 [kernel/zio_write_in]
    0  -  DLs      0:01.97 [kernel/zio_write_in]
    0  -  DLs      0:01.97 [kernel/zio_write_in]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.54 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.64 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.83 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.50 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.51 [kernel/zio_free_iss]
    0  -  DLs      0:01.64 [kernel/zio_free_iss]
    0  -  DLs      0:01.62 [kernel/zio_free_iss]
    0  -  DLs      0:01.59 [kernel/zio_free_iss]
    0  -  DLs      0:01.65 [kernel/zio_free_iss]
    0  -  DLs      0:01.67 [kernel/zio_free_iss]
    0  -  DLs      0:01.66 [kernel/zio_free_iss]
    0  -  DLs      0:01.61 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.51 [kernel/zio_free_iss]
    0  -  DLs      0:01.62 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.61 [kernel/zio_free_iss]
    0  -  DLs      0:01.51 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.62 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.58 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.59 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.59 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.54 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.50 [kernel/zio_free_iss]
    0  -  DLs      0:01.61 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.58 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.51 [kernel/zio_free_iss]
    0  -  DLs      0:01.62 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.59 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.72 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.72 [kernel/zio_free_iss]
    0  -  DLs      0:01.51 [kernel/zio_free_iss]
    0  -  DLs      0:01.68 [kernel/zio_free_iss]
    0  -  DLs      0:01.50 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.54 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.70 [kernel/zio_free_iss]
    0  -  DLs      0:01.75 [kernel/zio_free_iss]
    0  -  DLs      0:01.59 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.54 [kernel/zio_free_iss]
    0  -  DLs      0:01.65 [kernel/zio_free_iss]
    0  -  DLs      0:01.58 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.53 [kernel/zio_free_iss]
    0  -  DLs      0:01.54 [kernel/zio_free_iss]
    0  -  DLs      0:01.61 [kernel/zio_free_iss]
    0  -  DLs      0:01.57 [kernel/zio_free_iss]
    0  -  DLs      0:01.54 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.69 [kernel/zio_free_iss]
    0  -  DLs      0:01.56 [kernel/zio_free_iss]
    0  -  DLs      0:01.55 [kernel/zio_free_iss]
    0  -  DLs      0:01.66 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.59 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.51 [kernel/zio_free_iss]
    0  -  DLs      0:01.52 [kernel/zio_free_iss]
    0  -  DLs      0:01.60 [kernel/zio_free_iss]
    0  -  DLs      0:01.58 [kernel/zio_free_iss]
    0  -  DLs      0:00.00 [kernel/zio_free_int]
    0  -  DLs      0:00.00 [kernel/zio_claim_is]
    0  -  DLs      0:00.00 [kernel/zio_claim_in]
    0  -  DLs      0:00.00 [kernel/zio_ioctl_is]
    0  -  DLs      0:00.61 [kernel/zio_ioctl_in]
    0  -  DLs      0:00.09 [kernel/zfs_vn_rele_]
    0  -  DLs      0:00.14 [kernel/zil_clean]
    0  -  DLs      0:00.00 [kernel/zil_clean]
    0  -  DLs      0:00.00 [kernel/zil_clean]
    0  -  DLs      0:03.46 [kernel/zil_clean]
    0  -  DLs      0:00.06 [kernel/zil_clean]
    0  -  DLs      0:18.42 [kernel/zil_clean]
    1  -  ILs      0:00.02 /sbin/init --
    2  -  DL       0:00.00 [crypto]
    3  -  DL       0:00.00 [crypto returns]
    4  -  DL       5:41.56 [zfskern/arc_reclaim]
    4  -  DL       0:04.63 [zfskern/l2arc_feed_]
    4  -  DL       0:11.28 [zfskern/trim sys]
    4  -  DL       0:02.33 [zfskern/txg_thread_]
    4  -  DL       0:31.25 [zfskern/txg_thread_]
    4  -  DL       0:10.69 [zfskern/trim ssd]
    4  -  DL       0:02.39 [zfskern/txg_thread_]
    4  -  DL       0:21.71 [zfskern/txg_thread_]
    4  -  DL       0:13.53 [zfskern/trim z2_6x2]
    4  -  DL       0:02.38 [zfskern/txg_thread_]
    4  -  DL       1:25.24 [zfskern/txg_thread_]
    4  -  DL       0:13.61 [zfskern/trim z2_6x4]
    4  -  DL       0:02.33 [zfskern/txg_thread_]
    4  -  DL     177:37.14 [zfskern/txg_thread_]
    4  -  DL       0:13.52 [zfskern/trim z2_6x8]
    4  -  DL       0:02.39 [zfskern/txg_thread_]
    4  -  DL       4:44.13 [zfskern/txg_thread_]
    5  -  DL       0:00.00 [sctp_iterator]
    6  -  DL       0:00.03 [xpt_thrd]
    7  -  DL       0:47.61 [ipmi0: kcs]
    8  -  DL       0:01.49 [enc_daemon0]
    9  -  DL       1:20.60 [pagedaemon]
   10  -  DL       0:00.00 [audit]
   11  -  RL   26199:28.10 [idle/idle: cpu0]
   11  -  RL   28102:08.10 [idle/idle: cpu1]
   11  -  RL   30235:35.04 [idle/idle: cpu2]
   11  -  RL   29133:57.51 [idle/idle: cpu3]
   11  -  RL   27957:29.08 [idle/idle: cpu4]
   11  -  RL   28086:39.13 [idle/idle: cpu5]
   11  -  RL   27984:29.59 [idle/idle: cpu6]
   11  -  RL   28096:08.54 [idle/idle: cpu7]
   12  -  WL       9:22.56 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi4: clock]
   12  -  WL       0:00.00 [intr/swi3: vm]
   12  -  WL       0:00.37 [intr/swi1: netisr 0]
   12  -  WL      40:30.60 [intr/swi2: cambio]
   12  -  WL       0:00.00 [intr/swi6: task que]
   12  -  WL       0:00.00 [intr/swi6: Giant ta]
   12  -  WL       0:00.00 [intr/swi5: fast tas]
   12  -  WL      22:53.30 [intr/irq264: mps0]
   12  -  WL      13:12.00 [intr/irq265: mps1]
   12  -  WL       0:00.00 [intr/irq266: xhci0]
   12  -  WL       0:29.21 [intr/irq16: ehci0]
   12  -  WL       5:20.91 [intr/irq267: igb0:q]
   12  -  WL       1:26.42 [intr/irq268: igb0:q]
   12  -  WL      39:36.69 [intr/irq269: igb0:q]
   12  -  WL      14:19.43 [intr/irq270: igb0:q]
   12  -  WL       0:01.00 [intr/irq271: igb0:l]
   12  -  WL       0:00.00 [intr/irq272: igb1:q]
   12  -  WL       0:00.00 [intr/irq273: igb1:q]
   12  -  WL       0:00.00 [intr/irq274: igb1:q]
   12  -  WL       0:00.00 [intr/irq275: igb1:q]
   12  -  WL       0:00.00 [intr/irq276: igb1:l]
   12  -  WL       0:25.46 [intr/irq23: ehci1]
   12  -  WL       1:13.09 [intr/irq277: ahci0]
   12  -  WL       0:00.00 [intr/swi0: uart uar]
   12  -  WL       0:00.00 [intr/swi1: pf send]
   13  -  DL       0:00.06 [geom/g_event]
   13  -  DL      60:24.61 [geom/g_up]
   13  -  DL      75:06.86 [geom/g_down]
   14  -  DL      21:03.60 [rand_harvestq]
   15  -  DL       0:00.00 [usb/usbus0]
   15  -  DL       0:00.00 [usb/usbus0]
   15  -  DL       0:22.73 [usb/usbus0]
   15  -  DL       0:00.00 [usb/usbus0]
   15  -  DL       0:00.00 [usb/usbus1]
   15  -  DL       0:00.00 [usb/usbus1]
   15  -  DL       0:33.85 [usb/usbus1]
   15  -  DL       0:44.33 [usb/usbus1]
   15  -  DL       0:00.00 [usb/usbus2]
   15  -  DL       0:00.00 [usb/usbus2]
   15  -  DL       0:31.38 [usb/usbus2]
   15  -  DL       0:37.21 [usb/usbus2]
   16  -  DL       0:21.23 [acpi_thermal]
   17  -  DL       0:03.94 [acpi_cooling1]
   18  -  DL       0:00.00 [vmdaemon]
   19  -  DL       0:00.01 [pagezero]
   20  -  DL       0:04.44 [bufdaemon]
   21  -  DL       1:52.05 [vnlru]
   22  -  DL      36:04.88 [syncer]
   23  -  DL       0:05.23 [softdepflush]
  120  -  Is       0:00.00 adjkerntz -i
  786  -  Is       0:00.52 /sbin/devd
  794  -  DL       5:52.00 [pf purge]
  921  -  Ss       0:01.82 /usr/sbin/syslogd -4 -ss
 1090  -  Is       0:00.02 /usr/sbin/sshd
 1093  -  Ss       0:16.96 sendmail: accepting connections (sendmail)
 1096  -  Is       0:00.43 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
 1100  -  Ss       0:02.74 /usr/sbin/cron -s
47486  -  Ss       0:00.10 sshd: root@pts/0 (sshd)
 1139 v0  Is+      0:00.00 /usr/libexec/getty Pc ttyv0
 1140 v1  Is+      0:00.00 /usr/libexec/getty Pc ttyv1
 1141 v2  Is+      0:00.00 /usr/libexec/getty Pc ttyv2
 1142 v3  Is+      0:00.00 /usr/libexec/getty Pc ttyv3
 1143 v4  Is+      0:00.00 /usr/libexec/getty Pc ttyv4
 1144 v5  Is+      0:00.00 /usr/libexec/getty Pc ttyv5
 1145 v6  Is+      0:00.00 /usr/libexec/getty Pc ttyv6
 1146 v7  Is+      0:00.00 /usr/libexec/getty Pc ttyv7
47571  9  Ss       0:00.02 -bash (bash)
49741  9  R+       0:00.01 ps -axH
 1193  5  S+      62:49.85 zpool iostat -v 1
83451  8  S+     185:35.28 top -s 1 -HSaI
```
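To see where that kernel CPU time actually goes, the per-thread listing can be aggregated by thread name. A quick sketch (it assumes the `MM:SS.ss` TIME column layout shown above, with the `[kernel/...]` name in the fifth column):

```shell
# Sum cumulative CPU time per kernel thread group from `ps -axH` output.
# Field 4 is TIME as MM:SS.ss, field 5 is the [kernel/...] thread name.
ps -axH | awk '
  $5 ~ /^\[kernel\// {
    split($4, t, ":")
    sum[$5] += t[1] * 60 + t[2]
  }
  END { for (n in sum) printf "%.0f s\t%s\n", sum[n], n }
' | sort -rn
```

In the listing above, that would rank the `zio_write_is`/`zio_write_in` groups on top, pointing at the ZFS write pipeline rather than anything in userland.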


----------



## VladiBG (May 30, 2021)

Did you check whether your motherboard and HBA have newer firmware available? What is the interrupt rate of your newly attached devices?
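On FreeBSD, per-device interrupt rates can be read with `vmstat -i`. A quick filter (a sketch only; the 1000/s threshold is arbitrary, and `mps0`/`mps1` are the HBA interrupt lines visible in the listing above):

```shell
# Show the vmstat -i header plus any interrupt source firing faster than
# 1000 interrupts/s (the last column is the per-second rate since boot).
vmstat -i | awk 'NR == 1 || $NF + 0 > 1000'
```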


----------



## mtu (May 30, 2021)

This thread is being kept alive by technical intrigue in the face of an uncooperative, whiny and rude OP with an unsupported version of FreeBSD. What a show!


----------

