# 10.3->11.0 (Huge load averages after upgrade)



## mescalito (Oct 26, 2016)

The subject says it all, really. I'm seeing huge load averages for no reason.

`top -aSP` shows:

```
last pid: 52030;  load averages:  6.47,  6.99,  7.29                                                                                                                                up 0+05:44:40  19:07:35
115 processes: 5 running, 104 sleeping, 5 zombie, 1 waiting
CPU 0:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 4:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 5:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 6:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 7:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 464M Active, 2208M Inact, 3293M Wired, 41G Free
ARC: 2140M Total, 990M MFU, 1021M MRU, 553K Anon, 23M Header, 106M Other
Swap: 4096M Total, 4096M Free
```


----------



## ASX (Oct 26, 2016)

That's actually a 100% *idle* ( = doing nothing at all) system, not overloaded.


----------



## mescalito (Oct 26, 2016)

ASX said:


> That's actually a 100% *idle* ( = doing nothing at all) system, not overloaded.



Yep, exactly. But why do top, w, and uptime all show load averages like 6.47, 6.99, 7.29?


----------



## ASX (Oct 26, 2016)

Load averages are usually measured so that 1.00 = 100% load on one core. You have 8 cores, and at some point in the recent past most or all of them were being used at 80-90%.
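A quick way to sanity-check this is to normalize the load averages by the number of cores. The following is a rough cross-platform sketch, not FreeBSD-specific code; Python's `os.getloadavg()` reports the same three numbers that top, w, and uptime print:

```python
import os

def load_per_core():
    """Return the 1-, 5- and 15-minute load averages divided by core count.

    A value near 1.0 means the machine's CPUs are, on average, fully busy;
    values well below 1.0 mean there is spare CPU capacity even if the raw
    load average looks large on a many-core box.
    """
    one, five, fifteen = os.getloadavg()   # same numbers top/w/uptime show
    ncpu = os.cpu_count() or 1
    return one / ncpu, five / ncpu, fifteen / ncpu

if __name__ == "__main__":
    for label, value in zip(("1 min", "5 min", "15 min"), load_per_core()):
        print(f"{label}: {value:.2f} of total CPU capacity")
```

On the 8-core machine above, a load average of 6.47 would normalize to about 0.81, i.e. roughly 80% of total CPU capacity over the last minute.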


----------



## ekingston (Oct 26, 2016)

Load is not simply a measure of CPU usage. Load is a measure of system resource usage. A value in excess of 1.0 means that some resource is at 100% capacity. As said earlier, if one core of a CPU is fully utilized you will see a load of 1. The same applies to disks and network interfaces. But, just like a multi-core system, a load of 1.0 due to network traffic does not necessarily mean you have actually capped network utilization if you have more than one NIC installed.


----------



## SirDice (Oct 27, 2016)

The load depends on the number of processes in the run queue, not the CPU percentages. Note that the load is calculated differently on FreeBSD compared to Linux.
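As a rough illustration of what the run queue has to do with it (this is a sketch, not the actual kernel code, and the 5-second sampling interval is an assumption borrowed from the classic BSD scheme): the kernel periodically samples the number of runnable threads and folds that count into three exponentially decaying averages with 1-, 5- and 15-minute windows:

```python
import math

# Assumed constants for illustration: sample every 5 seconds, and keep
# three exponentially decaying averages over 1, 5 and 15 minutes.
INTERVAL = 5.0
WINDOWS = (60.0, 300.0, 900.0)

def update_loadavg(loads, nrun):
    """One sampling tick: decay each average toward the run-queue length."""
    out = []
    for load, window in zip(loads, WINDOWS):
        decay = math.exp(-INTERVAL / window)
        out.append(load * decay + nrun * (1.0 - decay))
    return out

# Example: 8 runnable threads queued continuously for 10 minutes,
# starting from an idle system.
loads = [0.0, 0.0, 0.0]
for _ in range(int(600 / INTERVAL)):
    loads = update_loadavg(loads, 8)
# The 1-minute average converges toward 8 much faster than the 15-minute one.
```

This also explains why a top screenshot can show 100% idle CPUs alongside a high load average: the averages lag behind by design, and (depending on what the kernel counts as "runnable") threads waiting on things other than CPU can keep the number up.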


----------



## mescalito (Nov 15, 2016)

Well, downgrading back to 10.3 returned my load averages to their usual numbers.


----------



## topcat (Nov 15, 2016)

Did this happen with just the text-based login prompt (no GUI, nothing from ports running)? Also, what are the 5 zombie processes?


----------



## ASX (Nov 16, 2016)

mescalito said:


> Well, downgrading back to 10.3 returned my load averages back to usual numbers



I'm perplexed: your initial post shows no load at all, other than some relatively high "load averages", which clearly refer to some time before you grabbed the top output.

Was there any other issue that led you to downgrade to 10.3?


----------



## krawall (Dec 8, 2016)

I've got the same problem with two Hetzner servers (from the old EX60 line, with i7-950 Nehalem processors).
Before updating they had average loads of about 0.5 and 1 (over the last year of munin records); after updating to 11.0 the loads went to averages of 12 and 8.

They're both 90%+ idle on all CPUs most of the time. Of the 48 GB of RAM each has, the first server is using about 10 GB and the other about 20 GB, with about 1 GB free and the rest used by ZFS.

For now they both seem as fast and as responsive as before, but they're flooding me with warning messages and I would really like a solution that doesn't involve downgrading them.
(The exact same update procedure went fine on another 20+ servers that are now running FreeBSD 11.0-p3 without any trouble.)


----------

