Re: Too many open files (not only BIND 9.5.0-P1)
At Fri, 11 Jul 2008 16:05:48 -0400, Shumon Huque wrote:
> How so? On my Solaris 10 systems, it looks like <sys/select.h> is
> eventually included, which sets FD_SETSIZE to 1024 (or 65536 in LP64):
> /*
>  * Select uses bit masks of file descriptors in longs.
>  * These macros manipulate such bit fields.
>  * FD_SETSIZE may be defined by the user, but the default here
>  * should be >= NOFILE (param.h).
>  */
> #ifndef FD_SETSIZE
> #ifdef _LP64
> #define FD_SETSIZE 65536
> #else
> #define FD_SETSIZE 1024
> #endif /* _LP64 */
> #endif /* FD_SETSIZE */
> I've upgraded 2 of our campus caching resolvers to 9.4.2-P1. So far
> no problems. I've seen upwards of 500 open file descriptors but the
> number fluctuates significantly and on average is far below that.
Then FD_SETSIZE must be larger than 256 in your environment (I'd guess
it's 1024 or even more). And since you don't see any other performance
problems, as you said, you don't need to do anything.
But in any case, if the system lets the user override FD_SETSIZE at
compile time, the following should work:

% STD_CDEFINES='-DFD_SETSIZE=1024' ./configure; make

or, with a csh-style shell:

% setenv STD_CDEFINES '-DFD_SETSIZE=1024'
% ./configure; make
> I am concerned about upgrading my primary resolver though, which
> is higher traffic. If I exceed 1024, what is the recommended action?
> Redefine FD_SETSIZE upwards and recompile? Recompiling in 64-bit
> sounds scarier.
If named needs more than 1024 open sockets simultaneously, I'd really
recommend the beta versions. With that many open sockets, the API
overhead of the P1 releases will be too severe anyway, even if it
doesn't cause 'too many open files'.
Internet Systems Consortium, Inc.