Runaway Apache process eating memory
I am stuck trying to debug an Apache process that keeps growing in memory. I am running Apache 2.4.6 with the prefork MPM on an Ubuntu virtual machine with 4GB of RAM, serving a Django application via mod_wsgi. The application is AJAX-heavy, and Apache receives 300 to 1000 requests per minute. Here is what I see:
- As soon as I restart Apache, the first child process (the one with the lowest PID) starts growing in memory, passing 1GB within 6 or 7 minutes. All other Apache processes stay between 10MB and 50MB each.
- CPU usage for the problematic process fluctuates: sometimes it drops very low, sometimes it hovers around 20%, and sometimes it spikes higher.
- The runaway process keeps running indefinitely until I restart Apache.
- In my Django logs, I can see the problematic process serving requests from multiple remote IPs (mostly it is logging exceptions for requests to URLs that I would not expect to be hit in the first place).
- The Apache error log often (but not always) shows "IOError: failed to write data" for that PID, sometimes for several different client IP addresses.
- The Apache access log shows no requests associated with this PID.
- Running strace on the PID produces nothing except restart_syscall(<... resuming interrupted call ...>), even though I can see the PID mentioned in the application logs while strace is running.
I tried setting both MaxRequestsPerChild and MaxMemFree to low values, and neither had any effect.
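For reference, those directives sit in the prefork block of my config, roughly like this (the other values shown here are illustrative, not necessarily my exact settings):

```apache
<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers         5
    MaxSpareServers        10
    MaxRequestWorkers     150
    # Recycle each child after this many requests (0 = never):
    MaxRequestsPerChild   500
    # Upper bound, in KB, on free memory a child's allocator may retain:
    MaxMemFree           2048
</IfModule>
```

My understanding is that MaxRequestsPerChild only recycles a child between requests, so it would not help if one request never finishes.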
What could this be, and how can I debug it further? The fact that strace shows no activity makes me suspect an infinite loop in my application code. If that were the case, how could I trace the PID back to the code path or request that caused the problem?
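One idea I have been sketching (assuming the app runs on Python 3, where the stdlib faulthandler module is available) is to register a signal handler in the WSGI entry point, so that I can force a stack dump from the suspect worker with `kill -USR1 <pid>`:

```python
# Hypothetical addition to the Django wsgi.py entry point: on SIGUSR1,
# faulthandler dumps every Python thread's traceback. It writes to
# stderr by default, which mod_wsgi forwards to the Apache error log,
# so the dump would show what the runaway worker is executing.
import faulthandler
import signal

faulthandler.register(signal.SIGUSR1, all_threads=True)
```

I have not confirmed this works inside a prefork/mod_wsgi child, so treat it as a sketch rather than a known-good technique.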