Re: Mod_Perl and MaxRequestsPerChild (modperl list)

Hi Perrin,
Thanks so much for the quick reply. I've commented your original email below:
On 10/16/07, Perrin Harkins wrote:
> On 10/16/07, Mark Maunder wrote:
> > My mod_perl app works with some fairly
> > large data structures and AFAIK perl doesn't like to free memory back
> > to the OS once it's allocated it, so the processes tend to grow for
> > the first few hours of the server being up, and then they plateau and
> > grow about 1 meg per day (a slow leak I think).
> Were you sharing these structures between threads explicitly? If not,
> they should not be any bigger with processes.
No I'm not sharing data structures between threads - they're mostly
scalars or arrays that are slurping in files and sorting the data.
> > I brought up my server with prefork and only 150 children.
> Why so many children? Most busy mod_perl servers run more like 20-50
> processes, with a separate front-end proxy server. I suspect you
> didn't have anywhere near that many active threads. If you're only
> serving 40 reqs/sec, you probably don't need more than 20 or so
Mornings are the busiest for us, so the following is not during peak.
This is my current mod_status:
39.4 requests/sec - 114.4 kB/second - 2976 B/request
80 requests currently being processed, 170 idle workers
I peak at about 75% of my threads being busy. If I run with fewer than
250 spare threads, I get a timeout alert from our monitoring every few
hours, so I suspect I'm hitting MaxClients now and then.
> > I'm back on worker and I have a full 250 threads with much lower memory usage.
> But how many active perl interpreters do you have? I'm guessing a lot
> fewer than 250. You can't run 250 perl interpreters in 2GB of memory.
> What did you set PerlInterpStart and PerlInterpMax to? By the way,
> you probably should use PerlInterpMaxRequests rather than
> MaxRequestsPerChild when running in worker mode.
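To make the directives Perrin mentions concrete, here is a sketch of what the relevant httpd.conf section might look like under the threaded (worker) MPM. The directive names are the real mod_perl 2 ones; the values are illustrative guesses only, not recommendations from this thread:

```
# Threaded (worker) MPM: the pool of Perl interpreters is sized
# separately from the Apache thread count. Values are illustrative.
PerlInterpStart        5     # interpreters cloned at startup
PerlInterpMax          20    # hard cap on interpreters
PerlInterpMaxRequests  2000  # recycle an interpreter after N requests
```

The point of PerlInterpMaxRequests over MaxRequestsPerChild is that it recycles a single interpreter rather than killing a whole child process with all its threads.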
You've just opened up a whole new world to me. I had no idea these
config params existed. I've found them in the docs at:
and I can't wait to try them out. I don't have any of them specified
in my httpd.conf, so I'm assuming I'm running with the defaults, in
which case PerlInterpStart and PerlInterpMax are 3 and 5 respectively.
Is there a way to check at runtime what these params are set to? And
what do you recommend for 250 threads?
> > When i was running with prefork, each process was 29 Megs and there
> > were 150 of them. That's 4.3 Gigs and my box only has 2 Gigs so
> > apparently copy-on-write was in effect and some of that was shared.
> The simplest thing to do when comparing free memory is usually to
> check the output of /usr/bin/free and see how much real memory is
> being used.
Sorry, I should have said that I was keeping a close eye on free and
on top, and I watched the box slowly run out of memory. In top the
child procs were listed at 29 megs and didn't grow much beyond 30
megs, so I think the copy-on-write memory was becoming unshared: the
box was rapidly running out of free RAM, and buffers and cache
disappeared too.
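Rather than inferring unsharing from top's per-process sizes, on Linux you can measure shared vs. private memory directly by summing the Shared_* and Private_* fields in /proc/&lt;pid&gt;/smaps. A minimal sketch, run against a canned sample here since the real file is per-process; in practice you would read /proc/&lt;pid&gt;/smaps for an httpd child:

```python
# Sum shared vs. private kB from Linux /proc/<pid>/smaps output.
# SAMPLE is a canned fragment for illustration; real smaps files
# repeat these fields once per memory mapping.
SAMPLE = """\
Shared_Clean:     1024 kB
Shared_Dirty:      128 kB
Private_Clean:      64 kB
Private_Dirty:    2048 kB
"""

def smaps_totals(text):
    """Return total shared and private kB from smaps-formatted text."""
    totals = {"shared": 0, "private": 0}
    for line in text.splitlines():
        field, _, rest = line.partition(":")
        if field.startswith("Shared_"):
            totals["shared"] += int(rest.split()[0])
        elif field.startswith("Private_"):
            totals["private"] += int(rest.split()[0])
    return totals

print(smaps_totals(SAMPLE))  # {'shared': 1152, 'private': 2112}
```

Watching the private total climb over a child's lifetime is a more direct signal of copy-on-write pages unsharing than falling free RAM.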
Right now I've had the machine up for about 30 minutes with 250
threads running and servicing the load I mentioned above and the
output of free is:
                   total       used       free     shared    buffers     cached
Mem:             2054504     863024    1191480          0      41380     305552
-/+ buffers/cache:           516092    1538412
Swap:            2096472      51352    2045120
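For anyone reading along, the "-/+ buffers/cache" row is the one that matters for spotting real memory pressure, and it is just arithmetic on the Mem row, since buffers and cache are reclaimable. Reproducing it from the figures above:

```python
# Reproduce free(1)'s "-/+ buffers/cache" row from the Mem row above.
total, used, free_kb, buffers, cached = 2054504, 863024, 1191480, 41380, 305552

used_real = used - buffers - cached     # memory applications actually hold
free_real = free_kb + buffers + cached  # truly free plus reclaimable

print(used_real, free_real)  # 516092 1538412 -- matches the free output
```

So only about 516 MB of the 2 GB is really committed at this load, which is consistent with the interpreter pool being much smaller than 250.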
/end my comments