# Rock-solid HTTP server



## BachiloDmitry (May 25, 2015)

So, this is what I've been trying to achieve for years.
Over time I went from a weak machine with Apache 1.3/MySQL 4/PHP 3 to a fairly powerful server with an nginx frontend, Apache 2.4 backend, MySQL 5.5 and PHP 5.5. I should explain that this machine is not a super-duper overpowered industrial server by any means, I just think it is enough for what it does. At least I suppose so. I don't even think it's necessary to mention what machine it is right now, I just want to understand how I should measure the needs.

The problem is, I don't think my problems would go away even if I installed 10 or 100 times more RAM. When the server serves its sites and no attack is going on, this group of software hardly uses more than 4-8 gigs of RAM, but when, as I conclude, some kind of flood attack happens (maybe it is DDoS, maybe it is just search bots, I don't know), it can eat 16, 32 or more gigs of RAM, and when all the swap is full the processes begin to die. Usually the first one is the MySQL server, then Apache dies, and finally nginx only serves "bad gateway" answers (it never dies itself, though).

I don't think I understand why this happens. I mean, I don't believe that if, for example, 1000 users are loading some page simultaneously, it really has to eat 32 gigs of RAM, or, if restricted to eating only 16 gigs, must make other people wait half an hour to see the page. I think this is just me not configuring the server right.

I am not trying to achieve super-fast page serving, and I don't want to serve more than 200-300 people at once. I just want my server not to die every 20 minutes because it is out of swap space. But everything I do to prevent this from happening either changes nothing or just makes the server inaccessible.

So what I want to discuss is: what do you guys usually do to make a web server stable? I understand that I have not given any details whatsoever, so feel free to ask. I want to share my experience and learn something new that I haven't tried over the years.

I just wanna say that the server hosts about 100 sites made by different people, and moving away from Apache (at least as the backend) with PHP as an Apache module is not an option. Furthermore, I think the only reason I have these problems is that MySQL gets flooded by requests: when the database server is separated onto another machine and virtualised so that every site has its own MySQL instance, the only thing that dies is the exact virtual machine serving SQL for one site or another. It can easily be just bad programming, so that PHP scripts overload MySQL, but that is exactly what I want to withstand. I mean, if I buy hosting and put a bad index.php there, nothing dies, it just works badly, but the hosting is still okay.

Let's discuss. My first question would be: how do you guys monitor the load? I would love to see exactly which site causes the I/O, what the exact request is, and why it takes so long / so much RAM to serve it. Apache's server-status seems to be hardly helpful, as are the logs.
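For context, the kind of snapshot I can get with standard tools right now looks like this (just a rough sketch; ps flags may need adjusting for your OS, and the mysqladmin line assumes the MySQL client tools are installed):

```shell
# Top memory consumers, sorted by resident set size (RSS, in KB).
# This form of ps works with both BSD ps and Linux procps.
ps -axo rss,pid,comm | sort -rn | head -n 10

# On the MySQL side, see which queries are running at this moment:
# mysqladmin -u root -p processlist
```

It shows which daemon is ballooning, but not which site or request is responsible, which is exactly the part I'm missing.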


----------



## SirDice (May 26, 2015)

First, you probably have to tweak MySQL a bit. Most of the example my.cnf files you find will use memory, lots of it. Since memory is in short supply on your box, this would be the first culprit. Use a tool like databases/tuning-primer to get some idea of the changes that are needed. Apache, and certainly NGINX, don't usually use a lot of memory.
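To give you an idea of where the memory goes, these are the knobs that dominate MySQL's footprint (the path and numbers below are just placeholders for a shared box, not a recommendation; tuning-primer will tell you what actually fits your workload):

```ini
# /usr/local/etc/my.cnf (path varies by system)
[mysqld]
# Global buffers are allocated once:
innodb_buffer_pool_size = 1G    # usually the single biggest consumer
key_buffer_size         = 64M   # MyISAM index cache
query_cache_size        = 32M   # MySQL 5.5-era query cache

# Per-connection buffers are multiplied by the number of connections,
# so worst-case RAM ~ globals + max_connections * (the buffers below):
max_connections      = 150
sort_buffer_size     = 2M
read_buffer_size     = 1M
read_rnd_buffer_size = 1M
join_buffer_size     = 1M
tmp_table_size       = 32M
```

The per-connection multiplication is what bites during a flood: a config that looks modest at 50 connections can eat tens of gigabytes at 1000.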

To give you some comparison, my VPS has 8 GB of RAM. It's running Apache, NGINX, MySQL, mail and a game server. Oh, and it uses ZFS too. The server gets hammered by bots on a daily basis but it's been running without issues. It has never run out of memory.


----------



## BachiloDmitry (Jun 15, 2015)

OK, I'm done with my research and tuning. I've managed to raise outgoing traffic from my server at least ten times, reaching the maximum my provider gives me; CPU usage dropped from a constant 100% to 3-5%; and RAM usage never goes over 1.7 GB, while it used to be 16 GB plus the whole swap. Basically, two things did the trick: the VirtualHost that provides some media streaming now has `proxy_buffering off;` in the nginx config, which lowered RAM usage (apparently because my /tmp is tmpfs, and nginx stores its temp files there), and in Apache 2.4 I turned `EnableSendfile off` for that same VirtualHost, which lowered CPU usage (apparently because my use of XSendFile in this same VirtualHost does not automatically turn off the standard sendfile, which I was not aware of, and sendfile is known to perform badly when serving NFS-mounted media).
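In config terms, the two changes look roughly like this (the directives are the real ones; the server name, port and paths here are made up for illustration):

```nginx
# nginx frontend: the streaming vhost (hypothetical names)
server {
    server_name media.example.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_buffering off;   # stream the upstream response instead of
                               # spooling it to proxy_temp (tmpfs in my case)
    }
}
```

```apacheconf
# Apache 2.4 backend: same vhost
<VirtualHost *:8080>
    ServerName media.example.com
    EnableSendfile off    # kernel sendfile is unreliable on NFS-mounted files
</VirtualHost>
```

With buffering off, nginx passes data to the client as it arrives from Apache, so a slow client no longer forces nginx to hold (or spool) the whole response.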

No MySQL tuning was needed, but I still did some work on that front: I created virtual machines on a server powerful enough to give each VirtualHost its own MySQL server. That showed me which sites were using the database inefficiently. Corrections were made to the code, and now it's all good; the databases are back on a single physical host.
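For anyone who wants to find the inefficient sites without spinning up a VM per database, the slow query log gets most of the way there (MySQL 5.5 settings; the threshold and log path are arbitrary examples):

```ini
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql-slow.log
long_query_time               = 1    # seconds; log anything slower
log_queries_not_using_indexes = 1    # catches full table scans too
```

Since each site has its own database user, the logged entries point straight at the offending vhost and query.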


----------

