On Sun, 7 Mar 2004, James MacLean wrote:

> 6MBytes. 620+ sites. Thousands of client computers .

Approx how many of those client computers are actively surfing at a given
point in time?

I think you should split this load over multiple Squids. You already
indicated you have an SMP machine; in that case running more than one
Squid on the same server is possible.
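As a rough sketch of how that looks, each instance gets its own config
file with a unique http_port, cache_dir, pid file and logs (the port
numbers and paths below are only examples, not your actual setup):

```
# squid1.conf (hypothetical example)
http_port 3128
cache_dir diskd /cache1 4096 16 256
pid_filename /var/run/squid1.pid
cache_access_log /var/log/squid1/access.log

# squid2.conf (hypothetical example)
http_port 3129
cache_dir diskd /cache2 4096 16 256
pid_filename /var/run/squid2.pid
cache_access_log /var/log/squid2/access.log
```

Then start each instance with its own config, e.g.
"squid -f /etc/squid/squid1.conf", and balance the clients between the
two ports.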

> Certainly this goes up, and it is mostly on 1 CPU as I understand would be
> expected. Load gets over 2, but was staying under 3.

One Squid process can use only 1 CPU.

System "load" is pretty much worthless as a load indicator on a server.

> > * Number of active filedescriptors

> That climbs fast but does peak.

What are the peak and normal values for your setup?
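If you are unsure, the cache manager reports current and peak
filedescriptor usage. Assuming squidclient is installed and your Squid
listens on the default port, something like:

```
# Query the running Squid's cache manager for filedescriptor statistics.
# Assumes squidclient can reach Squid on localhost:3128.
squidclient -h localhost -p 3128 mgr:info | grep -i 'file desc'
```

This shows the maximum, available and currently-in-use filedescriptor
counts, which tells you how close you are running to the limit.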

> The pipe is full regardless of having Squid up, but without squid the
> client response time is much more favorable. Maybe 10,000 requests spread
> over many clients works better over the pipe than all those requests from
> only Squid?

Depends a little on the QoS queue type. Some advanced QoS configurations
may penalize a single IP that uses a lot of bandwidth. Most don't.

> Ah. Ok, so with us we are using SCSI raided. And with that it sounds like
> one diskd line should satisfy?

You would get better disk performance if you split the RAID into
separate drives, with one cache_dir per physical disk.

But if you run with caching disabled the cache_dir is not used, so in
the no-cache configuration it does not matter what you have as
cache_dir.
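For example (the mount points and the 4096 MB size are only
illustrative), with three disks broken out of the RAID you would give
Squid one diskd cache_dir per spindle:

```
# One cache_dir per physical disk instead of one RAID volume,
# letting diskd spread I/O across independent spindles.
cache_dir diskd /cache1 4096 16 256
cache_dir diskd /cache2 4096 16 256
cache_dir diskd /cache3 4096 16 256
```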

> It appears that requests not being serviced fast enough by the uplink is
> aiding in this congestion. I wonder if running multiple Squids on the one
> PC would be more effective than one with so many open FDs. I'm guessing
> that what might be sped up in FDs would get lost using independant
> caches?

If you see that your Squid hovers above 2000-3000 active
filedescriptors then splitting the load over multiple Squids will
definitely help. Or wait for the epoll support in squid-3 to stabilise,
which should allow Squid to scale fairly independently of the number of
active filedescriptors.