# FreeBSD randomly freezes after a few days or a month



## pader (Dec 3, 2021)

Hello everyone, I have a server running FreeBSD 13.0. The server randomly freezes after a few days, sometimes a month.

Here is what happens:
1. Unable to connect via SSH; the ssh command gets no response at all.
2. Many services cannot be reached. A simple service like a static nginx page can be opened for a short time, but after refreshing the page a few times it gets stuck with no response; other services behave the same way.
3. Another server keeps a persistent SSH session to the FreeBSD server with top running. When FreeBSD freezes, this session still works: top keeps refreshing and showing system status, and memory, CPU usage, ZFS ARC, swap, and the clock all look normal. All of top's hotkeys still work, but after pressing q to quit top, any other command, like "systat -ifstat", hangs with no output; Ctrl+Z and Ctrl+C get no response.
4. Pinging the server always works.
5. The redis-server on FreeBSD is fine; the Redis service responds very well.
6. Unable to log in from the console: after typing the username and password and pressing Enter, there is no output.

Environment:
FreeBSD 13.0, Intel Xeon 4-core + 16 GB memory, two 2 TB disks in a ZFS mirror, root on ZFS. It's a new machine, bought less than half a year ago.
The host system runs only sshguard + ipfw, mounts an NFS share, and nullfs-mounts it into a jail; the jail filesystem runs on a ZFS dataset clone, and all services run in this jail.

The server has two bge network interfaces, one for the LAN and one for the WAN, and the services are network-heavy.
The jail runs nginx, php-fpm, a PHP CLI server, mysql, and redis-server; there is a lot of NFS reading and writing by PHP.

Things I have tried:
At first I suspected a ZFS ARC problem and set arc_max to 2G, but in top the ARC looks perfectly normal.
Looking at dmesg, and at every log written by the system or the services, all of them stop recording when the system freezes, so there are no abnormal log entries at all. But services that don't need to read or write files seem to stay normal.

I have no way to probe the system, because no new login can be started and no new command can be executed. When the system freezes, I have no way to see anything.

What should I do now?


----------



## SirDice (Dec 3, 2021)

pader said:


> Another server keeps a persistent SSH session to the FreeBSD server with top running. When FreeBSD freezes, this session still works: top keeps refreshing and showing system status, and memory, CPU usage, ZFS ARC, swap, and the clock all look normal. All of top's hotkeys still work, but after pressing q to quit top, any other command, like "systat -ifstat", hangs with no output; Ctrl+Z and Ctrl+C get no response.


You might want to check the status of your disks. When you get really bad timeouts on the disk(s) you can end up with a hung up filesystem. That has the same symptoms you are describing here.


----------



## mark_j (Dec 3, 2021)

It's hard to believe there's not anything in a log file. If it is disk you would surely get CAM timeout messages or read/write failures.
Perhaps there's a rogue process in a tight loop for some reason? 
You may need to run iostat and vmstat full time piping the output to a file with time stamps to then reference when the lock-ups occur. This would at least give you a clue and a start.
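A minimal sketch of that kind of timestamped logging, assuming POSIX sh (the `stamp` helper and the log paths are my own invention, not standard tools):

```sh
#!/bin/sh
# stamp: prefix every line read on stdin with a timestamp, so iostat/vmstat
# log entries can later be matched against the moment of a lock-up.
stamp() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done
}

# Intended use on the affected machine (run in the background):
#   iostat -x 5 | stamp >> /var/log/probe/iostat.log &
#   vmstat 5    | stamp >> /var/log/probe/vmstat.log &

# Quick demonstration of stamp on a fixed input line:
printf 'demo line\n' | stamp
```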


----------



## ralphbsz (Dec 4, 2021)

Looks like process creation is not possible. As SirDice said, a totally frozen file system can cause that. Typically a single rogue process in user space should not cause that, but complete resource exhaustion by root processes can look somewhat similar (process creation might take several minutes). 

Like mer said: In addition to running top in ssh in a window, also run iostat and vmstat in another two windows. Perhaps even have a window open that is running tail -F (capital F!) on /var/log/messages, to see whether any messages show up. If you have access to a real console, you can use that for these monitoring tasks. You could also reconfigure syslog so all log messages go to the console: perhaps there are error messages, but they are not getting written to log due to complete failure of the disk IO subsystem. And having a shell already logged in as root on the console would also perhaps allow you to run some debugging commands. Although if the file system is completely frozen, no command will actually work, but that's in and of itself already interesting.
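If there are error messages that never make it to disk, sending everything syslogd receives to the console as well might catch them. A sketch of the relevant /etc/syslog.conf line (the catch-all selector is deliberately broad; tune it as needed), applied with `service syslogd restart`:

```
# /etc/syslog.conf -- duplicate every message on the console, so errors are
# still visible even if the disk I/O subsystem is wedged
*.*                                          /dev/console
```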


----------



## covacat (Dec 5, 2021)

Is the NFS-mounted volume in the path or library_path?
A dead NFS link can cause these symptoms, but anything on local storage should work OK.
If there are no disk subsystem errors on the console, you probably have a filesystem deadlock, which is harder to debug.
Also try mounting NFS with the soft and/or intr options.
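For illustration, an /etc/fstab line with those options might look like this (the server name and paths are placeholders):

```
# soft: let NFS operations fail after retries instead of blocking forever
# intr: allow signals to interrupt operations hung on an unresponsive server
nfsserver:/export/data    /mnt    nfs    rw,soft,intr    0    0
```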


----------



## pader (Dec 8, 2021)

covacat said:


> Is the NFS-mounted volume in the path or library_path?
> A dead NFS link can cause these symptoms, but anything on local storage should work OK.
> If there are no disk subsystem errors on the console, you probably have a filesystem deadlock, which is harder to debug.
> Also try mounting NFS with the soft and/or intr options.


NFS is just mounted at /mnt and is only read for data, like images and videos.



SirDice said:


> You might want to check the status of your disks. When you get really bad timeouts on the disk(s) you can end up with a hung up filesystem. That has the same symptoms you are describing here.


Yes, it does look like a filesystem problem. I'll keep top, vmstat, iostat, and tail -F /var/log/messages running in a screen session over a persistent ssh connection.

I have another question: can this happen when kern.openfiles is full?


----------



## covacat (Dec 8, 2021)

pader said:


> I have another question: can this happen when kern.openfiles is full?


It shouldn't; the program should just fail and that's it.
Does the hang coincide with some ZFS operations like clone/snapshot (especially on zvols)?


----------



## tarkhil (Dec 10, 2021)

Something has to be in /var/log/messages. Overall, it looks like the kernel gets locked on something and cannot spawn processes. Please show your top output next time; I can't say exactly what I hope to see there, but there may be something.
And yes, it feels like something disk-related.
Maybe something like node_exporter will show something on its graphs.


----------



## pader (Dec 15, 2021)

I have been running `top -wSH`, `vmstat -w 2`, `iostat -t da -I -w 2`, and `tail -F /var/log/messages` over ssh in a screen session on another server for a few days now. The freeze may take a few days or a month to happen, so all I can do is wait.


----------



## tarkhil (Dec 15, 2021)

pader said:


> I have been running top -wSH, vmstat -w 2, iostat -t da -I -w 2, and tail -F /var/log/messages over ssh in a screen session on another server for a few days now. The freeze may take a few days or a month to happen, so all I can do is wait.


Okay, waiting...


----------



## pader (Dec 16, 2021)

Some updates:

Today I found the FreeBSD server's CPU idle: all processes exist, but CPU usage is 0. The good news is that most commands can be executed and get a response, except df, and after some searching I found that the NFS mount is frozen.

I found some reports of the same problem:

- "FreeBSD jail with NFSv4 share causes system to hang" (emby.media)
- "251347 – NFS hangs on client side when mounted from outside in Jail Tree (BROKEN NFS SERVER OR MIDDLEWARE)" (bugs.freebsd.org)
- "Bug #2068: Mounting NFS share inside jail broken by VIMAGE" (redmine.ixsystems.com)
The common characteristic is mounting NFS into a jail's filesystem tree, or mounting it on the host and nullfs-mounting it into the jail.

And there is no known fix for this.
Some Linux servers mount this NFS server too, with no problems.
So it's not an NFS server problem; it's a client-side problem.

My server used to mount NFS on the host with a nullfs mount into the jail. Now I have switched to mounting it directly into the jail's filesystem tree.
I'll run it like this for a period of time and keep observing.

If it still happens, I will abandon the scenario where the jail works with NFS.
I've used jails for years, and I've found that jails have many problems (like VNET kernel crashes, and this NFS freeze), so use them carefully.


----------



## pader (Dec 17, 2021)

I got a kernel crash this morning...


```
Dec 17 00:47:02 myhostname kernel: Fatal trap 12: page fault while in kernel mode
Dec 17 00:47:02 myhostname kernel: cpuid = 2; apic id = 04
Dec 17 00:47:02 myhostname kernel: fault virtual address     = 0x28
Dec 17 00:47:02 myhostname kernel: fault code                = supervisor read data, page not present
Dec 17 00:47:02 myhostname kernel: instruction pointer       = 0x20:0xffffffff821495f8
Dec 17 00:47:02 myhostname kernel: stack pointer             = 0x0:0xfffffe010fae48d0
Dec 17 00:47:02 myhostname kernel: frame pointer             = 0x0:0xfffffe010fae48d0
Dec 17 00:47:02 myhostname kernel: code segment              = base 0x0, limit 0xfffff, type 0x1b
Dec 17 00:47:02 myhostname kernel:                   = DPL 0, pres 1, long 1, def32 0, gran 1
Dec 17 00:47:02 myhostname kernel: processor eflags  = interrupt enabled, resume, IOPL = 0
Dec 17 00:47:02 myhostname kernel: current process           = 0 (z_wr_int_3)
Dec 17 00:47:02 myhostname kernel: trap number               = 12
Dec 17 00:47:02 myhostname kernel: panic: page fault
Dec 17 00:47:02 myhostname kernel: cpuid = 2
Dec 17 00:47:02 myhostname kernel: time = 1639673010
Dec 17 00:47:02 myhostname kernel: KDB: stack backtrace:
Dec 17 00:47:02 myhostname kernel: #0 0xffffffff80c574c5 at kdb_backtrace+0x65
Dec 17 00:47:02 myhostname kernel: #1 0xffffffff80c09ea1 at vpanic+0x181
Dec 17 00:47:02 myhostname kernel: #2 0xffffffff80c09d13 at panic+0x43
Dec 17 00:47:02 myhostname kernel: #3 0xffffffff8108b1b7 at trap_fatal+0x387
Dec 17 00:47:02 myhostname kernel: #4 0xffffffff8108b20f at trap_pfault+0x4f
Dec 17 00:47:02 myhostname kernel: #5 0xffffffff8108a86d at trap+0x27d
Dec 17 00:47:02 myhostname kernel: #6 0xffffffff81061958 at calltrap+0x8
Dec 17 00:47:02 myhostname kernel: #7 0xffffffff821a4d3e at dbuf_write_done+0x9e
Dec 17 00:47:02 myhostname kernel: #8 0xffffffff82190c5c at arc_write_done+0x33c
Dec 17 00:47:02 myhostname kernel: #9 0xffffffff822f920d at zio_done+0xd9d
Dec 17 00:47:02 myhostname kernel: #10 0xffffffff822f2d5c at zio_execute+0x3c
Dec 17 00:47:02 myhostname kernel: #11 0xffffffff80c6b161 at taskqueue_run_locked+0x181
Dec 17 00:47:02 myhostname kernel: #12 0xffffffff80c6c47c at taskqueue_thread_loop+0xac
Dec 17 00:47:02 myhostname kernel: #13 0xffffffff80bc7dde at fork_exit+0x7e
Dec 17 00:47:02 myhostname kernel: #14 0xffffffff810629de at fork_trampoline+0xe
Dec 17 00:47:02 myhostname kernel: Uptime: 1h44m58s
Dec 17 00:47:02 myhostname kernel: Dumping 2511 out of 16190 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%---<<BOOT>>---
```


----------



## ralphbsz (Dec 17, 2021)

Page fault in kernel is a kernel bug. This one happens in the ZFS code, judging by the function names: Some user-space code has fork'ed, that requires ZFS IO to be performed, which writes to the ARC, the write has finished, and something in that routine uses a pointer to memory that is not mapped, causing a page fault.

The questions now are: Is this a known kernel bug? Is it already fixed, or is a fix in progress? Did you do this to yourself by using an old or misconfigured kernel? Is there a way to live with it and prevent it from occurring, for example by changing some system configuration? A classic example is some kernel or memory setting at an unreasonable value.

So here are a few potential questions. Is this a stock kernel that came from a pre-compiled FreeBSD install? If yes, is it time to upgrade that version? Does a search of the FreeBSD bug tracker (they're called PR) find something that is exactly the same, or similar enough that you suspect it is the same? If you did not use a pre-compiled kernel but made one yourself, is the source up-to-date? Did you do some modifications? I'm sorry if these all seem like difficult questions.

Little anecdote: A friend was working in kernel development for super-computers (the kind that have hundreds or thousands of CPUs) several decades ago. He did a kernel compile, and the configure script asked him a question he didn't understand. He made an educated guess: The range of the answer was 0 to 65535 (unsigned 16-bit number), so he picked 10,000, which is sort of in the middle of the range. The result was spectacular crashes. Turns out the question was something like "how many serial ports does the master console use", and reasonable answers were 1, 2 or 3, and the 10,000 he entered created a completely insane kernel that would not work. Clearly, it is careless of the configure script to not explain this better, but stuff like this happens.


----------



## _martin (Dec 17, 2021)

The page fault is on an obviously bogus address:

```
fault virtual address     = 0x28
```
From the output I can see you have a kernel crash dump. Please open a PR for this problem; that's the best way to reach the developers and the people who can properly analyze this.

Also note that a page fault is not a bug, not even in the kernel, no matter how many likes that answer gets.
An invalid address in the page fault handler is a different story.


----------



## pader (Dec 18, 2021)

Thanks for all your replies.

I don't have a custom kernel; I have only tweaked a few parameters.

/boot/loader.conf

```
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
coretemp_load="YES"
net.inet.ip.fw.default_to_accept=1
vfs.zfs.arc_max="2G"

# Increase dmesg buffer to fit longer boot output.
kern.msgbufsize="524288"
```

/etc/sysctl.conf

```
# $FreeBSD$
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0

# Should I not change ashift?
vfs.zfs.min_auto_ashift=12
kern.ipc.somaxconn=4096
```

Before configuring _kern.ipc.somaxconn_, when the system hung I couldn't log in or do anything. After changing somaxconn, when the system hangs (worker processes freeze) I can still log in and do some operations, as long as they don't touch the NFS mountpoint.

I think the freeze problem is caused by the NFS client and the jail.

The kernel crash happened after I changed "*NFS mounted on the host, nullfs-mounted into the jail tree*" to "*NFS mounted directly into the jail tree*".
After this change the problems seem to occur more frequently: twice that day.

I have now moved the NFS mount out of the jail and moved the related programs out of the jail to run on the host. I'll keep observing for a while; if the problem still happens, maybe I need to add some NFS mount options like soft or intr.


----------



## ralphbsz (Dec 18, 2021)

But you should also (a) save the dump file from the crash, and (b) open a PR (bug report). What you saw was a real bug, dereferencing a wrong pointer in the kernel is not a good thing. If you open a bug, it is more likely that a developer will look at whatever problem it is, and fix it.


----------



## pader (Dec 18, 2021)

Yes, I will check whether there is an existing PR for this; if not, I will open one.
I found a post describing the same crash: https://forums.freebsd.org/threads/...l-panic-on-freebsd-destination-machine.74445/


----------



## _martin (Dec 18, 2021)

pader You have a generic kernel (not custom built) with a crash dump available; the dump is saved under /var/crash. The post you mentioned is about FreeBSD 11 and 12, so that's actually a different ZFS implementation from FreeBSD 13's. A PR is really the best way to go.

Also beware that the crash dump will contain information you may not want to share publicly.
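For reference, a crash dump from a GENERIC kernel can usually be inspected with kgdb (from the devel/gdb package); a sketch, assuming the newest dump has index 0:

```
ls /var/crash                                  # info.0, core.txt.0, vmcore.0, ...
head -n 50 /var/crash/core.txt.0               # panic message plus backtrace; safe to quote in a PR
kgdb /boot/kernel/kernel /var/crash/vmcore.0   # interactive inspection; 'bt' prints the backtrace
```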


----------



## grahamperrin@ (Dec 18, 2021)

_martin said:


> PR is really the best way to go.



+1 

Also spin off to a separate topic (kernel panic distinct from freezing).


----------



## _martin (Dec 22, 2021)

pader Have you opened a PR for this? I did some tests in my lab but was not able to trigger the bug.


----------



## pader (Dec 23, 2021)

After I moved the program out of the jail and added intr to the NFS mount options, more system crashes happened, and every crash's stack trace is different: sometimes zfs_aio_write or zfs_execute, sometimes nfsctl. I now suspect the possibility of hardware failure is quite high.

It's really a little strange. The earliest problem was the system freezing and becoming impossible to operate; after a series of adjustments, now the system crashes and restarts.

Very confused.


----------



## grahamperrin@ (Dec 23, 2021)

_martin said:


> … this? I did some tests in my lab but was not able to trigger the bug.



The freeze, or the kernel panic?


----------



## _martin (Dec 23, 2021)

pader: There's no question that you're hitting a bug (that fault virtual address is bogus, and after all, you panicked the kernel; I was pointing out that a "page fault" by itself is normal behavior in a kernel, while faulting on a bad address is a bug).

It's good that you're hitting this with the GENERIC kernel; others may be able to reproduce it. If you open a PR for this, somebody from the devs will see it and be able to guide you further.

grahamperrin : neither.


----------



## _martin (Dec 23, 2021)

pader said:


> I now suspect that the possibility of hardware failure is quite high.


While we can't rule that out, it's not likely. Not long ago I was debugging kernel crashes for somebody here; they actually had a bit flip causing wrong jumps within the code (a HW issue related to i-caches). Your bogus address really points clearly towards a SW issue.

We don't know whether ZFS is the victim of another issue or is causing it. You didn't share the crashes, so we can't say much. Also, ZFS is a really big chunk of code, so it's better to contact the devs directly through a PR.


----------



## oodler (Dec 23, 2021)

I had a similar issue some time in the past and it usually ended up being hardware related. Back when I was throwing together my (now really OLD hw) I got a couple of those IBM M1015s from eBay. Flashed them to JBOD. I was using 2 of them; then started getting the rando freezes, panics, reboots. I ended up isolating it to one of the cards as being the culprit. I went down to just one (which my capacity allowed) and it was smooth sailing from there. Just one man's story. Hopefully it helps.


----------



## pader (Dec 24, 2021)

I tried to submit a PR yesterday, but because of my network situation (you know, the Chinese GFW) I couldn't upload the core dump (2.5 GB); the connection always got reset. I'll try again later.


----------



## grahamperrin@ (Dec 24, 2021)

We have *two* very different issues (sets of symptoms) here:

- freezes
- kernel panics



grahamperrin said:


> spin off to a separate topic (kernel panic distinct from freezing).



A new topic will help to avoid ambiguity and confusion.



_martin said:


> crash dump will have information you may not want to share to public.





pader said:


> can't upload the core dump



Don't attempt to upload that, it's premature. Please start a new topic then I'll explain what to upload. Thanks.


----------



## _martin (Dec 24, 2021)

pader: Thanks for opening the PR. You don't need to upload the core dump just yet. When you open a PR state the following:
a) your system (uname -a), generic kernel
b) you can share the contents of the /etc/sysctl.conf and /boot/loader.conf
c) describe your setup, both host and jail
d) specify how you mounted the NFS share on the host and how the jail uses it
e) specify steps to reproduce the issue
f) share the beginning of the core.txt.N (as you did when you opened this thread)

People will ask for more information once the PR is assigned to a team.
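Collecting most of that into one file might look like this sketch (paths as on a default install; the dump index 0 and the output filename are assumptions):

```
{
  uname -a
  echo '--- /boot/loader.conf ---';  cat /boot/loader.conf
  echo '--- /etc/sysctl.conf ---';   cat /etc/sysctl.conf
  echo '--- crash summary ---';      head -n 60 /var/crash/core.txt.0
} > pr-info.txt
```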


----------



## jbo (Dec 24, 2021)

_martin said:


> While we can't rule that out, it's not likely. Not long ago I was debugging kernel crashes for somebody here; they actually had a bit flip causing wrong jumps within the code (a HW issue related to i-caches). Your bogus address really points clearly towards a SW issue.


In case you're referring to this:

- "Other - Debugging crash" (forums.freebsd.org)
I'd just like to add that I have been using that particular machine every single day for 8 to 14 hours non-stop, with both Windows 10 and FreeBSD, and never encountered another crash at all. As stress tests didn't yield any results, I went the extra mile and overclocked the machine (CPU & RAM), and it's been running rock solid in that state ever since.
No hardware modifications took place (I didn't touch anything at all: didn't re-socket the CPU, re-paste it, or anything like that).


----------



## pader (Dec 24, 2021)

I have posted a PR: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260664


----------



## _martin (Dec 25, 2021)

jbodenmann Yop, that's the one. As all the discussion took place there, there's not much to say here.

pader: Many thanks for opening the PR. I'll set up a hardware FreeBSD host mounting an NFS share and try to generate some heavy writes in a jail. I did this in a VM, but I was not able to trigger anything.


----------



## pader (Dec 26, 2021)

_martin said:


> jbodenmann Yop, that's the one. As all the discussion took place there not much to say here.
> 
> pader: Many thanks for opening the PR. I'll setup HW FreeBSD host mounting an NFS share and will try to generate some heavy writes in jail. I did this in VM but I was not able to trigger anything.


Thank you for your attention. If this is related to the hardware, it may not be reproducible.
Strangely, the system hasn't crashed for four days now.


----------



## pader (Jan 24, 2022)

I think my problem is resolved after switching to the NFSv3 client (skipping NFSv4, where the problems occur).

The patch commit is: https://cgit.freebsd.org/src/commit/?id=701eb03cc0dca907de872a41407b8487c38429fc

Related: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260664#c11
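For anyone applying the same workaround: forcing NFSv3 on the client side is a mount option. An fstab sketch with placeholder names:

```
# nfsv3 pins the client to protocol version 3, bypassing the NFSv4 code path
nfsserver:/export/data    /mnt    nfs    rw,nfsv3    0    0
```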


----------

