# Too many TCP connections in FIN_WAIT_{1,2} state



## ChuyenGiaSon (Apr 25, 2016)

Hi all.
My server is running 10.1-RELEASE-p6.
Services:
- nginx (port 80, 443)

One day I checked and saw that the number of TCP connections in the FIN_WAIT_{1,2} states was very high: about 3,000 out of 3,300 TCP connections in total.

Along with that, the number of TCP DUP ACK packets is very high: about 50% of all packets in and out.

I can see that the source IPs belong to clients who finished their requests to nginx long ago but still keep their TCP connections open and send DUP ACKs to my server like crazy. This pushed the bandwidth on my server from a few Mbps up to 100 Mbps.

I have read that DUP ACKs happen when the network between client and server is bad, or when the client's TCP stack is misconfigured. But this many is very strange, and I wonder whether I am being attacked.

So pros, could you please tell me how to limit these DUP ACKs and protect my server from clients like this?
All comments are welcome.
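For reference, the per-state breakdown can be tallied from `netstat -an -p tcp` output. This is only a sketch: the sample lines below stand in for real netstat output, and it assumes the FreeBSD column layout with the TCP state in the last column.

```python
from collections import Counter

# Sample stand-in for `netstat -an -p tcp` output (FreeBSD layout assumed,
# TCP state in the last column).
sample = """\
tcp4  0 0 10.0.0.1.443 1.2.3.4.50000 FIN_WAIT_2
tcp4  0 0 10.0.0.1.443 1.2.3.5.50001 FIN_WAIT_1
tcp4  0 0 10.0.0.1.80  1.2.3.6.50002 ESTABLISHED
tcp4  0 0 10.0.0.1.443 1.2.3.7.50003 FIN_WAIT_2
"""

# Count how many connections sit in each state, most common first.
states = Counter(line.split()[-1] for line in sample.splitlines() if line.strip())
for state, count in states.most_common():
    print(count, state)
```

On a live box you would feed the real netstat output into the same counting logic instead of the sample string.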


----------



## Juha Nurmela (Apr 25, 2016)

Some programs show up as FIN_WAIT_2 or CLOSE_WAIT, doing one-directional transfer after having closed the reverse direction. That would not be an error per se.
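As an illustration of that half-close pattern, here is a minimal Python sketch (names are illustrative, loopback TCP assumed): the client shuts down its send direction with `shutdown(SHUT_WR)` but keeps reading, which is exactly the situation where the client's socket passes through FIN_WAIT_1/FIN_WAIT_2 by design rather than by error.

```python
import socket
import threading

# Sketch of a TCP half-close over loopback: the client sends its FIN early
# via shutdown(SHUT_WR) but keeps its read side open. Until the server also
# closes, the client's socket sits in FIN_WAIT_1/FIN_WAIT_2 -- normal, not a bug.

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    assert conn.recv(1024) == b""   # empty read: the client's FIN arrived
    conn.sendall(b"one-way data")   # the server can still send after that
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.create_connection(srv.getsockname())
cli.shutdown(socket.SHUT_WR)        # send FIN, keep the read direction open

chunks = []
while (chunk := cli.recv(1024)):    # still receiving after our own FIN
    chunks.append(chunk)
data = b"".join(chunks)
print(data)

cli.close()
t.join()
srv.close()
```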

Not a pro,
Juha


----------



## Jeckt (Apr 26, 2016)

If you are using IPFW, this sounds similar to an issue I had with Apache crashing after too many connections were held open (in a zombie state) when I knew that shouldn't happen. I noticed similar behavior with nginx on another server. Eventually I tracked the problem down to dyn_keepalive and resolved it by adding

```
net.inet.ip.fw.dyn_keepalive=0
```
in /etc/sysctl.conf
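The /etc/sysctl.conf entry only takes effect at boot; assuming FreeBSD's sysctl(8), the same setting can be applied and checked immediately:

```shell
# Apply at runtime (does not persist across reboots on its own)
sysctl net.inet.ip.fw.dyn_keepalive=0

# Verify the current value
sysctl net.inet.ip.fw.dyn_keepalive
```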

It seems like some clients would induce the problem while others wouldn't, despite using the same OS and web browser. Unfortunately I couldn't nail down enough details about how or why it happened to file a bug report.


----------



## ChuyenGiaSon (Apr 27, 2016)

Thanks, I tried setting net.inet.ip.fw.dyn_keepalive=0.
I will post the result soon.


----------

