Apache Gotcha With ServerLimit
A little gotcha that happened to me a while ago: a rather sizeable customer had just launched a new campaign and had problems with HTTP content-matching alerts being thrown rather frequently.
This particular solution has five load-balanced web servers and two database back-ends, designed to cater for a reasonably high amount of steady traffic as well as shorter traffic surges.
The servers did not seem to be under any particular load, and I could connect to localhost just fine (telnet is a magnificent troubleshooting tool, is it not?). I looked at the Cacti graphs for all five servers in the cluster, and noticed in the Apache scoreboard that there were more or less exactly 256 workers running on each of them. ps ax | grep httpd | wc -l confirmed this.
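If you want to repeat the same quick checks, something along these lines does the job (a rough sketch, assuming Apache listens on port 80 and the binary is named httpd):

    # Verify Apache still answers locally
    telnet localhost 80

    # Count running httpd processes; note the grep itself may add one to the total
    ps ax | grep httpd | wc -l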
So I went and edited httpd.conf and raised ServerLimit and MaxClients from 256 to 750.
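For reference, the change itself is just two directives; a minimal sketch, assuming the prefork MPM (ServerLimit caps the scoreboard size, and MaxClients may not exceed it):

    # httpd.conf (prefork MPM)
    ServerLimit  750
    MaxClients   750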
Then here's my mistake, being the nice guy that I am: I did /usr/local/apache/bin/apachectl graceful, and error_log greeted me with a warning that the new ServerLimit was being ignored.
Hm, why would MaxClients be respected, but not ServerLimit? The short answer is that on a graceful restart the parent Apache uber-process is not killed and restarted, and ServerLimit cannot be changed without a full-blown restart of that parent. Pretty obvious once you know about it! I sifted through apachectl, and all it did was:
    case $ARGV in
    start|stop|restart|graceful|graceful-stop)
        $HTTPD -k $ARGV
        ERROR=$?
        ;;
…so obviously an apachectl restart would not be sufficient either, since that too just hands -k restart to the already-running parent. So, long story short: when changing the ServerLimit directive, a graceful-stop followed by a start is necessary! At least this is true for this custom-compiled Apache 2.2.3.
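In practice the sequence looks like this (same apachectl path as above; note there will be a short window with nothing listening between the two commands):

    # ServerLimit only takes effect when a new parent process starts
    /usr/local/apache/bin/apachectl graceful-stop   # let current requests finish, then exit
    /usr/local/apache/bin/apachectl start           # new parent picks up the new ServerLimit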