> because it claimed not to have enough storage space to store and it tried
> to rebuild the store database.

> Make sure it is not running into a 2GB file size limit on any of the log
> files.

That was one of my fears at first, but it hasn't happened.

> Also on Solaris the situation can be worse if the filesystem is not tuned
> to optimize for space. The problem is that df includes block fragments in
> the free-space report, but for writes to larger files (>3KB or so) to
> succeed there need to be full blocks available, not only fragments.

*ouch* okay, that was our problem. I saw that dd was also only able to
write a file of about 300MB before it ran out of disk space.
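For anyone who wants to reproduce this check, a probe along these lines
(path and size are just examples, not the exact command used here) makes
the fragmentation problem visible: df may still report free space made up
of fragments, while dd writing in full blocks fails much earlier.

```shell
# Write a file in full 1MB blocks into the affected filesystem.
# If only fragments are free, this stops well before df's free
# space is used up. Path and count are illustrative.
TESTFILE=/tmp/dd_fullblock_test
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=16 2>/dev/null
ls -l "$TESTFILE"
rm -f "$TESTFILE"
```

Comparing the size dd actually managed to write against what df reports
shows whether fragmentation is eating the "free" space.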

> space is most likely too fragmented to be usable. It helps to reduce the
> cache size, but you may need to delete some content first to get Squid
> running again.

I simply recreated the filesystem.
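For reference, shrinking the cache as suggested above is a single
squid.conf change (the path and sizes below are only examples; the third
argument is the cache size in MB, followed by the number of first- and
second-level directories):

```
# squid.conf -- example cache_dir line, values illustrative
# Format: cache_dir ufs <path> <size MB> <L1 dirs> <L2 dirs>
cache_dir ufs /var/spool/squid 1000 16 256
```

Keeping the configured size comfortably below what the filesystem can
actually deliver in full blocks leaves headroom for fragmentation.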

Unfortunately, I cannot just switch over to Linux for the Squid box (I'd
love to, but our company only wants Solaris boxes), so there is not much I
can do to change this behaviour. Since even Solaris ufs is quite slow with
small files, my question is: does anyone have experience with the Veritas
File System used for caching purposes? Is it faster than Solaris ufs, or
does anyone on the list have better ideas (other storage types, maybe)
that have already been tested on Solaris?

Ciao, Hanno