On 7/23/06, Fred Tyler wrote:
> I'm having a serious problem with Apache::Resource not killing
> children and ending up with all of the children in a hung state at the
> memory limit (Linux 2.6, mod_perl 1.29, Apache 1.3.33).


Wow, I finally figured out what was causing this. It turns out that
Apache::Resource -- actually BSD::Resource I suppose -- does two
different things, at least on Linux where I am testing:

1. It kills processes whose memory usage grows above the hard
limit. (This is the behavior I expected.)

2. It *prevents* processes that are below the soft limit from
allocating any memory that would push them above the soft limit. The
allocation is simply denied, but the process is *not* killed. (I did
not expect this behavior; the sketch below shows it in action.)
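
Here's that sketch, standalone (outside Apache), assuming
BSD::Resource is installed; the 48/96 values are just examples,
mirroring the 48:96 syntax in my config quoted below:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use BSD::Resource qw(setrlimit getrlimit RLIMIT_AS);

    my $mb = 1024 * 1024;

    # Soft limit 48MB, hard limit 96MB -- the same thing
    # "PerlSetEnv PERL_RLIMIT_AS 48:96" asks Apache::Resource to set up.
    setrlimit(RLIMIT_AS, 48 * $mb, 96 * $mb)
        or die "setrlimit(RLIMIT_AS) failed: $!";

    my ($soft, $hard) = getrlimit(RLIMIT_AS);
    print "soft=$soft hard=$hard\n";

    # Once the address space reaches the soft limit, allocations start
    # failing. The process is NOT killed; it simply can't get any more
    # memory, and perl bails out with "Out of memory!". (Under mod_perl
    # the child survives even that -- exit is trapped, hence the
    # "Callback called exit" lines in the log below.)
    my @hog;
    push @hog, 'x' x $mb for 1 .. 1_000;   # grow ~1MB per iteration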

In my case, I have a lot of little mod_perl scripts that typically
cache small amounts of data on each request. Individually none of them
increases memory usage by much, but the apache children do slowly grow
over time. I expected Apache::Resource to kill these children when
they hit a certain size, but instead a child would often grow to
within a few hundred bytes of the limit, at which point it could serve
no further requests, because it couldn't allocate enough memory even
to start a new request (even for a static file). And since the
children were never killed (due to the unexpected behavior mentioned
above), eventually all of the children ended up idle and useless,
effectively DoSing the webserver.
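
For illustration, here's the shape of those scripts -- the names are
made up, but this is the slow-growth pattern (mod_perl 1 style):

    package My::CachingHandler;
    use strict;
    use Apache::Constants qw(OK);

    my %cache;    # lexical, but lives for the lifetime of the child

    sub handler {
        my $r = shift;
        my $key = $r->uri;
        # Each previously-unseen key adds a small entry, so the child
        # creeps upward a little at a time instead of jumping past the
        # limit in one big allocation.
        $cache{$key} ||= load_data_for($key);
        $r->send_http_header('text/plain');
        $r->print($cache{$key});
        return OK;
    }

    sub load_data_for { my ($key) = @_; return "data for $key\n" }

    1;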

Later I added Apache::SizeLimit, which properly killed processes when
they got to a certain size, but I still wanted to keep
Apache::Resource in the loop, because Apache::SizeLimit only does its
memory check at the start of a request -- it won't kill a one-off
runaway mid-request like Apache::Resource will (e.g. a loop of
$a = $a x 1000).
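
Spelled out, that runaway looks like this -- it balloons from one byte
to gigabytes inside a single request, long before SizeLimit's next
check, but an RLIMIT_AS ceiling stops it:

    my $a = 'x';
    $a = $a x 1000 for 1 .. 4;   # ~1KB, ~1MB, ~1GB, ~1TB -- the length
                                 # multiplies by 1000 on each pass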

I eventually realized that the safe way to set this up is to give
Apache::SizeLimit a considerably lower limit than Apache::Resource.
For example, you might have SizeLimit trigger at 30MB and Resource
trigger at 40MB. You have to do it this way because SizeLimit, unlike
Resource, will let a child go above the limit you set -- sometimes by
several MB -- because (1) once a request starts, SizeLimit does no
more checking, and (2) on a busy server you are probably going to tell
SizeLimit to check only every 2-4 requests. Whatever you do, you
definitely do *not* want to let a child get within those last few
hundred bytes of the Resource limit, because after that it's just dead
to the world. You want SizeLimit to kill off children well below that
point, and keep Resource around just to kill the runaways.
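
In httpd.conf terms, the combined setup would look something like this
(mod_perl 1 syntax; the 30MB/40MB numbers are just the example values
above, and hooking SizeLimit in as a fixup handler is what gives the
start-of-request check I mentioned -- adjust to taste):

    # Routine reaping: kill children that reach ~30MB, checked between
    # requests (SizeLimit sizes are in KB).
    <Perl>
        use Apache::SizeLimit;
        $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30 * 1024;
        $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;
    </Perl>
    PerlFixupHandler Apache::SizeLimit

    # Backstop: Resource catches runaways that blow past 40MB within a
    # single request.
    PerlModule Apache::Resource
    PerlSetEnv PERL_RLIMIT_AS 40
    PerlChildInitHandler Apache::Resource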

Hopefully that'll come in handy for anyone else who gets bit by this... :-)

The rest of the original post is shown below for archival purposes:


> Here is the snippet from httpd.conf:
>
> PerlModule Apache::Resource
> # Either of the next two lines causes children to hang at 48MB
> PerlSetEnv PERL_RLIMIT_AS 48
> #PerlSetEnv PERL_RLIMIT_AS 48:96
> PerlChildInitHandler Apache::Resource
>
> What happens is that the children grow until they reach 48MB, and then
> they will not accept any more requests, nor will they die. Requests
> that come in and are sent to these children usually result in the
> following two lines being printed in the server log:
>
> Out of memory!
> Callback called exit.
>
> Occasionally other errors appear also, but they have the same root
> cause: No more memory can be allocated to the child that got the
> request.
>
> Eventually all of the children end up with their memory maxed out, and
> since Apache does not see a need to spawn more children (because all
> the children are just sitting there idle), the server becomes totally
> unresponsive with errors filling up the logs for every request.
>
> An extensive search did not reveal anyone else who has encountered
> this problem, and I have not had any luck solving it on my own. Can
> anyone help?
>