Thread: Redhat Enterprise 4 and 15 second delays with NFS via TCP

  1. Re: Redhat Enterprise 4 and 15 second delays with NFS via TCP


    Rick Jones wrote:
    > Mike Eisler wrote:
    > > You mean, instead of unilaterally adding a procedure to NFS, I have
    > > two sides sleeping for different lengths of time?

    >
    > Yep
    >
    > > What else could I have done?

    >
    > Short of a rev of the protocol? Perhaps not have the NFS server code
    > ever initiate connection close on idleness and rely instead on TCP
    > level keepalives to cull connections from dead clients. Keepalive


    What if the clients aren't dead, but are leaking connections?

    > intervals would have to be set pretty low to deal with DoS I guess.


    Keepalives were never a serious consideration for the reasons
    given in:

    http://groups.google.com/group/comp....40f4d7b9487059

    You can read my negative experiences with keepalives in that thread,
    and what folks like Dave Crocker, Craig Partridge, and Vint Cerf had
    to say (nothing good).
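
    For reference, the approach Rick floats - the server never closing on
    idleness, with low-interval keepalives doing the culling - would look
    roughly like this on a Linux socket. A sketch only, with illustrative
    values; TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are the
    Linux-specific per-socket options:

        /* Sketch: enable keepalive on a connected socket and lower the
         * Linux probe timers.  Values are illustrative, not a
         * recommendation. */
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        static int enable_keepalive(int sock)
        {
            int on    = 1;
            int idle  = 360;  /* seconds idle before the first probe  */
            int intvl = 60;   /* seconds between unanswered probes    */
            int cnt   = 5;    /* unanswered probes before the drop    */

            if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
                           &on, sizeof on) < 0)
                return -1;
            if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,
                           &idle, sizeof idle) < 0)
                return -1;
            if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL,
                           &intvl, sizeof intvl) < 0)
                return -1;
            return setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,
                              &cnt, sizeof cnt);
        }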


  2. Re: Redhat Enterprise 4 and 15 second delays with NFS via TCP


    If the client is leaking connections, then the assumption that the
    server will wait longer than the client before closing the connection
    seems to be false. This closing of the connection after not seeing
    anything from the remote, without actually trying to elicit anything
    from the remote, feels an awful lot like a stateful firewall tossing
    a TCP connection because it's been idle for a while.

    I will concede that the statelessness of NFS - or at least the lack of
    anything allowing the server to actively see if the client is still
    there - was a huge constraint.
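
    Stripped to its essentials, the close-on-idle behavior in question is
    something like the sketch below (a hypothetical fragment, not the
    actual server code): wait for traffic, and if none arrives within the
    idle interval, drop the connection without ever probing the peer.

        #include <poll.h>
        #include <unistd.h>

        /* Returns 0 if the peer sent something before the deadline,
         * -1 if the connection was culled for idleness.  No probe is
         * ever sent, so a quiet-but-live peer looks the same as a
         * dead one. */
        static int close_if_idle(int sock, int idle_ms)
        {
            struct pollfd pfd = { .fd = sock, .events = POLLIN };
            int n = poll(&pfd, 1, idle_ms);

            if (n == 0) {            /* heard nothing: cull it */
                close(sock);
                return -1;
            }
            return n < 0 ? -1 : 0;
        }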

    Mike Eisler wrote:
    > Keepalives were never a serious consideration for the reasons given
    > in:


    > http://groups.google.com/group/comp....40f4d7b9487059


    > You can read my negative experiences with keepalives in that thread,
    > and what folks like Dave Crocker, Craig Partridge, and Vint Cerf had
    > to say (nothing good).


    Like this from Dave Crocker?-)

    The use of Keepalives is terrible, but sometimes necessary. The
    key word, here, is "sometimes".

    and then later:

    There is a remarkable economy that derives from putting this
    mechanism into the kernel/transport system. It may be an accident
    that TCP does not have the mechanism but can be tricked into
    creating one, but it still is remarkably simple.

    Most application protocols have very simple interaction styles and
    tend to be relatively easy to program. To force time-based
    generation of action would complexify these protocols
    significantly.

    Craig Partridge:

    Well, I'm a firm hater of keep-alives, although Mike Karels has
    persuaded me that in the current world they are a useful tool for
    catching clients that go off into hyperspace without telling you.

    Bob Braden:

    I don't believe anyone has advocated that keep-alives are a bad
    thing... indeed, they appear to be a necessity in an imperfect
    world. The controversy (for the past 10 years, at least!) is
    whether or not they belong in TCP.

    Mike Karels:

    Last time Phil and I talked about keepalives in person, I asked him
    whether he had problems with telnet/rlogin servers accumulating on
    his systems if they didn't use keepalives. We certainly accumulate
    junk, including xterm programs, waiting for input from a half-open
    connection. Phil told me that he doesn't have problems, because he
    runs a "wall" every night to force output to all users, and of
    course breaking connections that time out. In other words, Phil
    violently objects to servers requesting keepalives from TCP, but
    allows the system manager (himself) to force them above the
    application level. And before people jump up to point out the
    difference in time scales, the current BSD code sends no keepalive
    packets until a connection has been idle for 2 hr, and that
    interval is easily changeable. One proposal for the Host
    Requirements document was to wait for 12 hr. I think that's a bit
    high, but the difference is only a factor of 6. Compare the number
    of keepalive packets with the number of packets exchanged by an
    xterm and an X server over the course of a week if used 4 hours a
    day!

    etc etc etc. Going through the thread I come away with the impression
    that keepalives are where the cleanliness of theory meets the ugliness
    of the world. If applications were properly specified and written (eg
    had their own keepalive mechanisms) then keepalive in TCP would be
    superfluous. However, applications do not all have keepalive
    mechanisms (eg nothing in NFS 3), and so the cleanliness goes away -
    either in the form of keepalives, or in the form of assuming that the
    server will wait longer than the client before initiating a close.

    --
    portable adj, code that compiles under more than one compiler
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  3. Re: Redhat Enterprise 4 and 15 second delays with NFS via TCP


    Rick Jones wrote:
    > If the client is leaking connections, then the assumption that the
    > server will wait longer than the client before closing the connection
    > seems to be false. This closing of the connection after not seeing


    So what? Leaked connections are connections the client has
    forgotten about. Like leaked memory.

    > anything from the remote, without actually trying to elicit anything
    > from the remote, feels an awful lot like a stateful firewall tossing
    > tossing a TCP connection because it's been idle for a while.


    So what?

    > I will concede that the statelessness of NFS - or at least the lack of
    > anything allowing the server to actively see if the client is still
    > there - was a huge constraint.


    The statelessness of NFS, or at least the lack of any need to tie state
    to the connection, means that the session at the NFS level of the
    protocol stack is unaffected.

    > Mike Eisler wrote:
    > > Keepalives were never a serious consideration for the reasons given
    > > in:

    >
    > > http://groups.google.com/group/comp....40f4d7b9487059

    >
    > > You can read my negative experiences with keepalives in that thread,
    > > and what folks like Dave Crocker, Craig Partridge, and Vint Cerf had
    > > to say (nothing good).

    >
    > Like this from Dave Crocker?-)


    You neglected to go to the end and read about my experience where
    keepalives had exactly the opposite of the intended effect, or Vint
    Cerf's statement that the function of keepalive belongs at layers
    above the transport.

    "It is not the critic who counts, not the man who
    points out how the strong man stumbled, or
    where the doer of deeds could have done better."

    So let's talk about your NFS/TCP client and/or server. :-)
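
    An above-the-transport probe of the sort Cerf advocates does exist
    for NFS: ONC RPC procedure 0, the NULL no-op, which servers answer.
    A minimal sketch, assuming NFSv3 over TCP, with error handling
    abbreviated and a fixed xid purely for illustration:

        #include <arpa/inet.h>
        #include <poll.h>
        #include <stdint.h>
        #include <unistd.h>

        /* Send an ONC RPC NULL call (RFC 1831/1057 framing) to an
         * NFSv3 server over TCP and wait briefly for any reply. */
        static int nfs_null_ping(int sock, int timeout_ms)
        {
            uint32_t msg[11];
            struct pollfd pfd = { .fd = sock, .events = POLLIN };

            msg[0]  = htonl(0x80000000 | 40); /* record mark: last frag, 40 bytes */
            msg[1]  = htonl(0x12345678);      /* xid; would normally be unique    */
            msg[2]  = htonl(0);               /* message type: CALL               */
            msg[3]  = htonl(2);               /* RPC version 2                    */
            msg[4]  = htonl(100003);          /* program: NFS                     */
            msg[5]  = htonl(3);               /* program version: NFSv3           */
            msg[6]  = htonl(0);               /* procedure 0: NULL, the no-op     */
            msg[7]  = htonl(0);               /* cred flavor: AUTH_NONE           */
            msg[8]  = htonl(0);               /* cred length: 0                   */
            msg[9]  = htonl(0);               /* verifier flavor: AUTH_NONE       */
            msg[10] = htonl(0);               /* verifier length: 0               */

            if (write(sock, msg, sizeof msg) != (ssize_t)sizeof msg)
                return -1;
            return poll(&pfd, 1, timeout_ms) > 0 ? 0 : -1; /* reply = alive */
        }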


    > The use of Keepalives is terrible, but sometimes necessary. The
    > key word, here, is "sometimes".


    They aren't necessary in NFS.

    > and then later:
    >
    > There is a remarkable economy that derives from putting this
    > mechanism into the kernel/transport system. It may be an accident
    > that TCP does not have the mechanism but can be tricked into


    And Cerf is explicit in the thread that it is no accident.

    > creating one, but it still is remarkably simple.


    "Everything should be made as simple as possible -- but no simpler!"

    > Most application protocols have very simple interaction styles and
    > tend to be relatively easy to program. To force time-based
    > generation of action would complexify these protocols
    > significantly.


    True. And someone should have early on defined a session layer to
    do that. TCP is transport not session.

    > Craig Partridge:
    >
    > Well, I'm a firm hater of keep-alives, although Mike Karels has
    > persuaded me that in the current world they are a useful tool for
    > catching clients that go off into hyperspace without telling you.


    Except for those clients that have a TCP stack that goes off into
    hyperspace.

    > Bob Braden:
    >
    > I don't believe anyone has advocated that keep-alives are a bad
    > thing... indeed, they appear to be a necessity in an imperfect
    > world. The controversy (for the past 10 years, at least!) is
    > whether or not they belong in TCP.


    Settled by Cerf later in the thread.

    > Mike Karels:
    >
    > Last time Phil and I talked about keepalives in person, I asked him
    > whether he had problems with telnet/rlogin servers accumulating on
    > his systems if they didn't use keepalives. We certainly accumulate
    > junk, including xterm programs, waiting for input from a half-open
    > connection. Phil told me that he doesn't have problems, because he
    > runs a "wall" every night to force output to all users, and of
    > course breaking connections that time out. In other words, Phil
    > violently objects to servers requesting keepalives from TCP, but
    > allows the system manager (himself) to force them above the
    > application level. And before people jump up to point out the
    > difference in time scales, the current BSD code sends no keepalive
    > packets until a connection has been idle for 2 hr, and that


    Two hours is too long. The client might have a zillion mounts to a
    server, one connection per mount (some clients still do that,
    unfortunately). It crashes. It comes up. It can't connect because
    all the 4-tuples { src addr, src port [512-1023], dst addr, 2049 }
    are in use.

    I'd rather the client waited 6 minutes instead of 2 hours.

    > interval is easily changeable. One proposal for the Host
    > Requirements document was to wait for 12 hr. I think that's a bit
    > high, but the difference is only a factor of 6. Compare the number
    > of keepalive packets with the number of packets exchanged by an
    > xterm and an X server over the course of a week if used 4 hours a
    > day!
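
    The squeeze above is easy to make concrete: with one connection per
    mount, a fixed destination of { server, 2049 } and source ports
    confined to [512, 1023], there are only 1023 - 512 + 1 = 512 possible
    4-tuples per client address. A hypothetical illustration (real
    clients typically use bindresvport(); the loop here just makes the
    512-port ceiling explicit):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>

        /* Try each reserved port in turn; returns the port bound, or
         * -1 when all 512 reserved ports are already in use locally,
         * i.e. the 4-tuple ceiling has been hit. */
        static int bind_reserved(int sock)
        {
            struct sockaddr_in sin;

            memset(&sin, 0, sizeof sin);
            sin.sin_family = AF_INET;
            sin.sin_addr.s_addr = htonl(INADDR_ANY);

            for (int port = 1023; port >= 512; port--) {
                sin.sin_port = htons(port);
                if (bind(sock, (struct sockaddr *)&sin, sizeof sin) == 0)
                    return port;
            }
            return -1;
        }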



  4. Re: Redhat Enterprise 4 and 15 second delays with NFS via TCP

    In article <1138420340.452018.212040@f14g2000cwb.googlegroups.com>,
    Mike Eisler wrote:
    >Rick Jones wrote:
    >
    >> If the client is leaking connections, then the assumption that the
    >> server will wait longer than the client before closing the connection
    >> seems to be false. This closing of the connection after not seeing

    >
    >So what? Leaked connections are connections the client has
    >forgotten about. Like leaked memory.


    Those of us who are very ancient remember when memory leaks were
    regarded as bugs, especially in high-RAS environments. How things
    have changed!


    Regards,
    Nick Maclaren.

  5. Re: Redhat Enterprise 4 and 15 second delays with NFS via TCP


    Nick Maclaren wrote:
    > In article <1138420340.452018.212040@f14g2000cwb.googlegroups.com>,
    > Mike Eisler wrote:
    > >Rick Jones wrote:
    > >
    > >> If the client is leaking connections, then the assumption that the
    > >> server will wait longer than the client before closing the connection
    > >> seems to be false. This closing of the connection after not seeing

    > >
    > >So what? Leaked connections are connections the client has
    > >forgotten about. Like leaked memory.

    >
    > Those of us who are very ancient remember when memory leaks were
    > regarded as bugs, especially in high-RAS environments. How things
    > have changed!


    The NFS server can't do anything to solve client bugs. It can,
    however, prevent client bugs from taking out the NFS service.

    >
    >
    > Regards,
    > Nick Maclaren.



  6. Re: Redhat Enterprise 4 and 15 second delays with NFS via TCP

    Mike Eisler wrote:
    > So let's talk about your NFS/TCP client and/or server. :-)


    I'm probably lucky to even have an NFS/TCP client and/or server
    available to me in the first place. By all means, though, feel free to
    cast pebbles, stones or boulders at netperf.

    rick jones
    --
    oxymoron n, commuter in a gas-guzzling luxury SUV with an American flag
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
