# mbuf failures in 10.0



## adams (Jul 12, 2014)

So I've deployed my first 10.0 production server, which is really an OpenVPN and PF firewall/proxy (two of them, running CARP).  I've got lots of experience with 9.1, and we'll be moving everything to 10.1 once it's out, but the hardware on these servers demanded that we use 10.0, as one of the NICs needed the newer em(4) driver to be recognized.

Functionally, everything has been fine on these servers (though not under any real traffic so far); however, I do see failures in `vmstat -z`:


```
ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
mbuf_packet:            256, 1583025,    1279,     749,11695298,2028,   0
mbuf:                   256, 1583025,       3,     744,22038011, 618,   0
mbuf_cluster:          2048, 247346,    2028,       8,    2028, 490,   0
mbuf_jumbo_page:       4096, 123673,       0,      88,  684189,  35,   0
mbuf_jumbo_9k:         9216,  36643,       0,       0,       0,   0,   0
mbuf_jumbo_16k:       16384,  20612,       0,       0,       0,   0,   0
mbuf_ext_refcnt:          4,      0,       0,       0,       0,   0,   0
```

I've searched but didn't really find anything relevant.  Both servers show the issue, and both have identical hardware:

1x em0, 1x re0

With no other 10.0 servers to compare against, is this just normal (for example, `vmstat -z` "bucket" failures are actually normal) or indicative of a problem?  Again, the servers seem fine, but they aren't really being used yet ... I want to be sure, or resolve this, before they are.  Let me know if I can provide any additional information.
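As a quick way to watch whether those FAIL counters are still growing, here is a small shell sketch (assuming the column layout shown in the paste above; `mbuf_fails` is just a hypothetical helper name):

```
# mbuf_fails: print the zone name and FAIL count for each mbuf zone,
# reading `vmstat -z` output on stdin (column layout as in the paste above;
# FAIL is the second-to-last field once the commas are turned into spaces).
mbuf_fails() {
  tr ',' ' ' | awk '$1 ~ /^mbuf/ { print $1, "FAIL=" $(NF - 1) }'
}

# On the box itself one would run it twice, some minutes apart:
#   vmstat -z | mbuf_fails
# If the numbers are static, the failures happened in one past burst
# rather than being an ongoing problem.
```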


----------



## SirDice (Jul 14, 2014)

adams said:

> X-posting from Networking as it's not necessarily a networking question:


Please do not cross-post.


----------



## adams (Jul 14, 2014)

There's no way to delete it, and it looks like the Networking forum doesn't get much traffic, so there was no other option.


----------



## SirDice (Jul 14, 2014)

Thread has already been merged and the double post removed.


----------



## adams (Jul 14, 2014)

So you've kept it confined to the Networking forum, where apparently nobody looks.

Can you do the opposite?  As I mentioned in my x-post, it's not specifically a networking issue, and again, the traffic in this Networking forum is apparently tiny.

I.e., please remove THIS post and I'll repost in General.


----------



## SirDice (Jul 14, 2014)

It is a networking issue so I'm going to keep it here.


----------



## allanjude@ (Aug 2, 2014)

The FAIL column denotes a failure to allocate memory for an mbuf.

Is your system under high memory pressure?

What does the output of `netstat -m` show?


----------



## adams (Aug 2, 2014)

Hey nearsourceit, thanks for the reply!

So this is actually two identically configured systems (identical hardware, too) which show this, one primary and one backup via CARP.  Here is `netstat -m`:


```
1281/1509/2790 mbufs in use (current/cache/total)
1279/761/2040/247346 mbuf clusters in use (current/cache/total/max)
1279/753 mbuf+clusters out of packet secondary zone in use (current/cache)
0/88/88/123673 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/36643 9k jumbo clusters in use (current/cache/total/max)
0/0/0/20612 16k jumbo clusters in use (current/cache/total/max)
2878K/2251K/5129K bytes allocated to network (current/cache/total)
632/494/2032 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
35/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
```
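For what it's worth, those "denied" counters correspond to the FAIL column from `vmstat -z` above.  A small sketch to pull the total out (assuming the line format as pasted; `mbufs_denied` is a hypothetical helper name):

```
# mbufs_denied: sum the slash-separated "requests for mbufs denied"
# counters (mbufs/clusters/mbuf+clusters) from `netstat -m` on stdin.
mbufs_denied() {
  awk -F'[/ ]' '/requests for mbufs denied/ { print $1 + $2 + $3 }'
}

# Usage on the box:  netstat -m | mbufs_denied
```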

They're both super overbuilt for what they are (PF/OpenVPN gateways):

FreeBSD 10.0-RELEASE-p7 64-bit; Kernel GENERIC
Intel® Xeon® CPU E3-1225 v3 @ 3.20GHz; 64-bit; 4x Physical Cores  
4.0 GiB RAM: 1.3 GiB used / 409.0 MiB cache / 2.3 GiB free
8.0 GiB Swap: 0 bytes used / 8.0 GiB free 

Current top on the primary:


```
last pid: 16156;  load averages:  0.06,  0.06,  0.08                                                    up 36+06:31:29  21:00:53
46 processes:  1 running, 45 sleeping
CPU:  0.0% user,  0.0% nice,  0.3% system,  0.1% interrupt, 99.6% idle
Mem: 6228K Active, 968M Inact, 496M Wired, 408M Buf, 2393M Free
Swap: 8192M Total, 8192M Free
```

I had applied some "standard" tuning in /etc/sysctl.conf that we also use on our 9.x boxes; however, I did this *after* I started seeing these issues (thinking it might clear some of them up), so I don't think it's related, but just an FYI:


```
kern.ipc.somaxconn=10000                # -- TCP conn. queue size
kern.ipc.maxsockets=131072              # -- Maximum sockets
net.inet.tcp.maxtcptw=100000            # -- TCP TIME_WAIT limits
net.inet.tcp.sendbuf_max=8388608        # -- TCP buffer out
net.inet.tcp.recvbuf_max=8388608        # -- TCP buffer in

kern.maxprocperuid=65535                # -- Max number of processes per user
kern.maxfiles=737280                    # -- Total max open files
kern.maxfilesperproc=235926             # -- Max open files per process
```
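Note that none of those knobs govern the mbuf cluster pool itself; that limit is `kern.ipc.nmbclusters` (the 247346 "max" visible in the outputs above).  As a hedged aside: if the FAIL/denied counters did keep climbing under load, one common mitigation is raising that tunable, e.g. in /boot/loader.conf (the value below is purely hypothetical):

```
# /boot/loader.conf -- hypothetical value; only worth raising if the
# FAIL/denied counters actually keep growing under real traffic
kern.ipc.nmbclusters="524288"
```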

One last thing: the file system is UFS, on a single disk.


----------

