Subject: Re: mod_perl reqs/s @ concurrency -- for actual db based
On Wed, 2006-09-06 at 13:46 -0400, Jonathan Vanasco wrote:
> with 2 children running, I'm handling ~70 r/s @ concurrency 10-1000
> 4-8 children seems to be my point for diminishing marginal utility-
> in that range, I'm handling ~100 r/s @ concurrency 10-1000 ; and the
> numbers don't really change no matter how many servers I toss at it.
That probably means you are limited by the database, like everyone else.
> granted, i'm also benching with ab, which is old and not very
I like httperf for benchmarks.
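A typical httperf invocation looks like the following (host, port, and path are illustrative; tune --rate and --num-conns to the load you want to simulate):

```shell
# Open 1000 connections at 100 connections/sec, one request each,
# giving up on any connection that takes longer than 5 seconds.
httperf --server localhost --port 80 --uri /index.html \
        --num-conns 1000 --num-calls 1 --rate 100 --timeout 5
```

Unlike ab, httperf reports a full reply-time breakdown (connect, response, transfer) rather than a single average, which makes it easier to see where latency comes from at high concurrency.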
> i'm fairly confident that a LARGE limiting factor is db interaction
> and (b)locking. And i know that the bulk of the request timing is
> taken up in the DB or the DB/App interaction layer. I wouldn't be
> surprised if those numbers double, triple, or more with a nicely
> clustered db structure.
You can usually increase your performance greatly just by tuning your
existing SQL and database. Run Apache::DProf or the DBI profiler, find
out where the time is being spent, and work on it. There are many
resources for database performance tuning. Work on the actual queries
and schema structure, not on the database configuration. You always get
more from the former than the latter.
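The DBI profiler mentioned above can be switched on without any code changes via the DBI_PROFILE environment variable (the script name here is hypothetical):

```shell
# Level 2 aggregates time per distinct SQL statement; DBI prints a
# summary to STDERR when the process exits.
DBI_PROFILE=2 perl your_app.pl
```

Under mod_perl, export DBI_PROFILE in the shell before starting httpd (e.g. before running apachectl), since DBI reads it when the module is first loaded. The per-statement summary usually makes the one or two queries eating most of the request time obvious.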
> i'm just wondering what a target range for DB/Templated apps with
> moderate page logic should be ( "hello world" < Moderate Logic <
> "comprehensive statistical analysis" )
There ain't no such thing. Once you fix the obvious architecture things
(by using mod_perl and running a reverse proxy), the performance of your
application depends entirely on what it's doing. You can almost always
improve it by tweaking the code and database more based on profiling.
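For reference, a minimal sketch of the reverse-proxy setup mentioned above, assuming a lightweight front-end Apache with mod_proxy and the mod_perl backend listening on port 8080 (paths and port are illustrative):

```apache
# Front-end httpd.conf: forward dynamic requests to the heavy mod_perl
# backend so slow clients don't tie up the big processes.
ProxyPass        /app http://localhost:8080/app
ProxyPassReverse /app http://localhost:8080/app

# Serve static files directly from the thin front end.
Alias /static /var/www/static
```

The point of the split is that a fat mod_perl child is freed as soon as it hands the response to the proxy, instead of spoon-feeding bytes to a slow client.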