Re: 9.3.5-P1 now issues "socket: too many open file descriptors" - DNS



Thread: Re: 9.3.5-P1 now issues "socket: too many open file descriptors"

  1. Re: 9.3.5-P1 now issues "socket: too many open file descriptors"

    FYI,

    I'm looking forward to the optimized releases since trying
    out 9.3.5-P1 and 9.5.1b1. For us, 9.5.1b1 seems to be the most reasonable.
    Perhaps the other releases would do better; we haven't tried those.

    BIND 9.5.1b1 performs better for us than 9.3.5-P1 on Solaris 9/10.
    BIND 9.5.1b1 seems to run a bit more than twice as hot in terms of
    CPU, while BIND 9.3.5-P1 was roughly 12x more for us when
    pre-testing over a period of hours. We also needed to raise the
    maximum file descriptor limit (ulimit -n) to 1024, since the default was 256.
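
    For reference, the same limit can be inspected and raised from inside a
    process as well as with ulimit; a minimal Python sketch (the 1024 value
    is just the figure we used, not a recommendation):

        import resource

        # Current per-process open-file limits (programmatic equivalent
        # of `ulimit -n`).
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print("open files: soft=%s hard=%s" % (soft, hard))

        # Raise the soft limit toward 1024, but never above the hard limit.
        if hard == resource.RLIM_INFINITY:
            new_soft = 1024
        else:
            new_soft = min(1024, hard)
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))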

    Robert

    >X-Original-To: bind-users@webster.isc.org
    >Date: Thu, 10 Jul 2008 08:25:52 -0700
    >From: JINMEI Tatuya / 神明達哉
    >To: Ed Ravin
    >Cc: bind-users@isc.org
    >Subject: Re: 9.3.5-P1 now issues "socket: too many open file descriptors"
    >User-Agent: Wanderlust/2.14.0 (Africa) Emacs/22.1 Mule/5.0 (SAKAKI)
    >MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka")
    >Content-Transfer-Encoding: 8bit
    >
    >At Thu, 10 Jul 2008 09:54:11 -0400,
    >Ed Ravin wrote:
    >
    >> It is currently using between 320 and 377 file descriptors, and still
    >> sometimes peaks over 512 and issues the error above.
    >>
    >> This is a big difference in resource consumption - is this related to
    >> the security fix? Is this intentional?

    >
    >Yes and yes. To (substantially) reduce the risk of accepting a forged
    >response by guessing/brute-forcing UDP source ports, the latest patch
    >versions use a different UDP socket, bound to a random port, for each
    >query.
    >
    >> What's the impact when named has too many file descriptors open? Do
    >> queries get dropped?

    >
    >Queries won't be dropped simply because named opens many UDP sockets.
    >But the overall load of the server will (possibly significantly) be
    >increased due to scalability problems of the underlying socket API.
    >If the increased load exceeds the capacity to handle your normal
    >queries, they will be dropped as a result. 9.4.3b2 and 9.5.0b3 (and
    >9.3.6b1, which will be released shortly) use a more efficient API (when
    >available - covering at least BSDs, Linux and Solaris) and should be
    >much more lightweight.
    >
    >---
    >JINMEI, Tatuya
    >Internet Systems Consortium, Inc.
    >
    >
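
    To make the port-randomization idea above concrete: each outgoing query
    gets its own UDP socket on an unpredictable source port. A rough Python
    sketch of the concept (illustrative only, not BIND's actual code; a
    real resolver would use a cryptographically strong random source):

        import random
        import socket

        def open_query_socket(low=1024, high=65535):
            """Bind a fresh UDP socket to a randomly chosen source port."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            while True:
                port = random.randint(low, high)
                try:
                    sock.bind(("0.0.0.0", port))
                    return sock, port
                except OSError:
                    continue  # port already taken; try another one

        # One socket (and so one file descriptor) per outstanding query,
        # which is why descriptor usage jumps after the patch.
        sock, port = open_query_socket()
        print("query would leave from source port", port)
        sock.close()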


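    The scalability point can also be sketched: readiness APIs such as
    epoll (Linux), kqueue (BSD) and /dev/poll (Solaris) cope with thousands
    of mostly-idle descriptors far better than a plain select() loop, and
    that is the kind of switch the newer betas make. A rough Python
    illustration (not how named itself is implemented):

        import selectors
        import socket

        # DefaultSelector picks the best readiness API the platform offers
        # (epoll, kqueue, /dev/poll, ...), falling back to poll/select.
        sel = selectors.DefaultSelector()

        socks = []
        for _ in range(500):  # stand-in for many per-query sockets (keep below ulimit -n)
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("127.0.0.1", 0))  # kernel picks a free port (demo only)
            s.setblocking(False)
            sel.register(s, selectors.EVENT_READ)
            socks.append(s)

        # With epoll/kqueue the cost of waiting does not grow with the
        # number of idle descriptors, unlike select().
        events = sel.select(timeout=0.1)
        print("readable sockets:", len(events))

        for s in socks:
            sel.unregister(s)
            s.close()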


  2. Re: 9.3.5-P1 now issues "socket: too many open file descriptors"

    On 10 Jul, 15:12, robert wrote:
    > FYI,
    >
    > I'm looking forward to the optimized releases since trying
    > out 9.3.5-P1 and 9.5.1b1. For us, 9.5.1b1 seems to be the most reasonable.
    > Perhaps the other releases would do better; we haven't tried those.
    >
    > BIND 9.5.1b1 performs better for us than 9.3.5-P1 on Solaris 9/10.
    > BIND 9.5.1b1 seems to run a bit more than twice as hot in terms of
    > CPU, while BIND 9.3.5-P1 was roughly 12x more for us when
    > pre-testing over a period of hours. We also needed to raise the
    > maximum file descriptor limit (ulimit -n) to 1024, since the default was 256.
    >
    > Robert

    After updating BIND, we are seeing that the number of queries has
    increased compared to the days before we installed the patch.
    Is this normal? Did you notice the same thing? Is it related to BIND
    using more UDP ports?


  3. Re: 9.3.5-P1 now issues "socket: too many open file descriptors"

    At Sat, 12 Jul 2008 14:16:58 -0700 (PDT),
    afbtasa@gmail.com wrote:

    > After updating BIND, we are seeing that the number of queries has
    > increased compared to the days before we installed the patch.
    > Is this normal? Did you notice the same thing? Is it related to BIND
    > using more UDP ports?


    Could you be more specific?

    - which version are you talking about (there are 3 new P1 versions and
    2 new beta versions)?
    - which queries are you talking about? Queries sent to the caching
    server (=named) from clients, or queries from the caching server to
    other authoritative servers?
    - how much did the queries increase (e.g. 100qps to 200qps)?

    ---
    JINMEI, Tatuya


  4. Re: 9.3.5-P1 now issues "socket: too many open file descriptors"

    On 12 Jul, 20:48, JINMEI Tatuya / 神明達哉 wrote:
    > At Sat, 12 Jul 2008 14:16:58 -0700 (PDT),
    >
    > afbt...@gmail.com wrote:
    > > After updating BIND, we are seeing that the number of queries has
    > > increased compared to the days before we installed the patch.
    > > Is this normal? Did you notice the same thing? Is it related to BIND
    > > using more UDP ports?

    >
    > Could you be more specific?
    >
    > - which version are you talking about (there are 3 new P1 versions and
    > 2 new beta versions)?
    > - which queries are you talking about? Queries sent to the caching
    > server (=named) from clients, or queries from the caching server to
    > other authoritative servers?
    > - how much did the queries increase (e.g. 100qps to 200qps)?
    >
    > ---
    > JINMEI, Tatuya


    Hi Tatuya,
    We installed BIND 9.5.0-P1; we are seeing an increase in queries from
    our caching servers to the authoritative servers.
    We noticed a 20-30% increase.
    We are running BIND on Debian 4.0 on HP ProLiant servers (most of
    them DL380-G4).



  5. Re: 9.3.5-P1 now issues "socket: too many open file descriptors"

    At Sat, 12 Jul 2008 17:35:18 -0700 (PDT),
    afbtasa@gmail.com wrote:

    > > Could you be more specific?
    > >
    > > - which version are you talking about (there are 3 new P1 versions and
    > > 2 new beta versions)?
    > > - which queries are you talking about? Queries sent to the caching
    > > server (=named) from clients, or queries from the caching server to
    > > other authoritative servers?
    > > - how much did the queries increase (e.g. 100qps to 200qps)?


    > We installed BIND 9.5.0-P1; we are seeing an increase in queries from
    > our caching servers to the authoritative servers.
    > We noticed a 20-30% increase.


    I don't see any reason for the query increase simply because named
    uses randomized source ports. But if you have upgraded named from an
    older major version (e.g. 9.4.x or 9.3.y) and you don't specify
    max-cache-size in named.conf, the default max-cache-size of 9.5
    (=32MB) may decrease the cache hit rate and increase the outgoing
    query rate as a result.
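
    If that is the cause, setting the cache ceiling explicitly in
    named.conf avoids relying on the version's default; a minimal sketch
    (the 256M figure below is only a placeholder, size it for your own
    traffic and memory):

        options {
            // Explicit cache ceiling instead of the 9.5 default.
            // 256M is an example value, not a recommendation.
            max-cache-size 256M;
        };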

    ---
    JINMEI, Tatuya

