Thanks Michael,

All my modules are pre-loaded at startup. I've moved away from MySQL because
it was too slow for my purposes, and I have the equivalent of thousands
of small tables. So I'm using my own file access methods with flock()
for read/write locking. It's very fast, but the downside is that I need
to do some record sorting in RAM.

The data all sees intensive reads and writes, so I don't have the option
of doing much caching.

I do use BDB, not for my main data storage but for some basic
key/value lookups, and it's blazingly fast.

I've benchmarked SQLite and it's a lot slower than my home-rolled
routines - mostly because of the intensive read/write/update/delete
activity.


On 10/16/07, Michael Peters wrote:
> Boysenberry Payne wrote:
> > $Apache2::SizeLimit::MAX_UNSHARED_SIZE = 50000;

> The key here is your unshared memory. On Linux COW takes care of all the stuff
> you pre-load and then don't change on prefork. But if you're constantly changing
> large data structures, then prefork won't really work for you memory-wise. Also,
> you should pre-load any Perl modules used at startup, else those become unshared
> when used.
> If you are using large data structures that change over time, you have to ask
> yourself "Can I do better?" I'd look at using something else to store the
> structures (are they just cache? Then memcached. Are they important? Then some
> sort of shared memory, BDB, SQLite or MySQL might be more appropriate).
> --
> Michael Peters
> Developer
> Plus Three, LP

Mark Maunder