
Thread: Is it good enabling TCP_NODELAY on an FTP server?

  1. Is it good enabling TCP_NODELAY on an FTP server?

    Hi,
    I'm the maintainer of an FTP server library written in Python:
    http://code.google.com/p/pyftpdlib/
    I noticed that some FTP servers like proftpd have the TCP_NODELAY
    option enabled by default.
    I was wondering whether it would be a good idea to enable
    TCP_NODELAY on my server as well, but before changing anything in
    the production code I'd like to hear the opinion of some experts
    first.
    Does it really speed up the connection?
    Could it occasionally lead to problems with some clients?
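
    For reference, I assume the change on my side would just be a
    one-line setsockopt() on each accepted socket; here is a minimal
    sketch (not the actual pyftpdlib code) of what I have in mind:

        import socket

        def on_accept(conn):
            # TCP_NODELAY=1 disables the Nagle algorithm on this
            # connection, so small writes go out immediately.
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)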


    Thanks in advance


    --- Giampaolo
    http://code.google.com/p/pyftpdlib/

  2. Re: Is it good enabling TCP_NODELAY on an FTP server?

    Giampaolo Rodola' wrote:
    > Hi,
    > I'm the maintainer of an FTP server library written in Python:
    > http://code.google.com/p/pyftpdlib/
    > I noticed that some FTP servers like proftpd have the TCP_NODELAY
    > option enabled by default.
    > I was wondering whether it would be a good idea to enable
    > TCP_NODELAY on my server as well, but before changing anything in
    > the production code I'd like to hear the opinion of some experts
    > first.
    > Does it really speed up the connection?
    > Could it occasionally lead to problems with some clients?


    Is that on the control connection or on the data connection? Modulo
    a broken TCP stack, I don't think it should make much of a
    difference on the data connection - unless, that is, the FTP server
    implementation is making sub-MSS send() calls. My dim memory
    recalls some issues involving status messages on the control
    connection when doing mget or mput, or something similar, that
    might have been addressed by setting TCP_NODELAY.

    rick jones

    Here is some boilerplate I have on the topic - setting TCP_NODELAY
    is how one disables the Nagle algorithm:


    > I'm not familiar with this issue, and I'm mostly ignorant about what
    > tcp does below the sockets interface. Can anybody briefly explain what
    > "nagle" is, and how and when to turn it off? Or point me to the
    > appropriate manual.


    In broad terms, whenever an application does a send() call, the logic
    of the Nagle algorithm is supposed to go something like this:

    1) Is the quantity of data in this send, plus any queued, unsent data,
    greater than the MSS (Maximum Segment Size) for this connection? If
    yes, send the data in the user's send now (modulo any other
    constraints such as receiver's advertised window and the TCP
    congestion window). If no, go to 2.

    2) Is the connection to the remote otherwise idle? That is, is
    there no unACKed data outstanding on the network? If yes, send the
    data in the user's send now. If no, queue the data and wait: either
    the application will continue to call send() with enough data to
    get to a full MSS-worth of data, or the remote will ACK all the
    currently sent, unACKed data, or our retransmission timer will
    expire.
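
    As a toy model (hypothetical names, and ignoring the receiver and
    congestion window checks mentioned in rule 1), those two rules
    combine roughly like this:

        def nagle_send_now(send_len, queued_unsent, mss, unacked):
            # Rule 1: more than an MSS worth of data may go out now.
            if send_len + queued_unsent > mss:
                return True
            # Rule 2: a sub-MSS send goes out only if the connection
            # is otherwise idle, i.e. nothing sent awaits an ACK.
            return unacked == 0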

    Now, where applications run into trouble is when they have what
    might be described as "write, write, read" behaviour, where they
    present logically associated data to the transport in separate
    send() calls, and those sends are typically less than the MSS for
    the connection. It isn't so much that they run afoul of Nagle as
    that they run into issues with the interaction of Nagle and the
    other heuristics operating on the remote - in particular, the
    delayed ACK heuristics.

    When a receiving TCP is deciding whether or not to send an ACK back to
    the sender, in broad handwaving terms it goes through logic similar to
    this:

    a) is there data being sent back to the sender? if yes, piggy-back the
    ACK on the data segment.

    b) is there a window update being sent back to the sender? if yes,
    piggy-back the ACK on the window update.

    c) has the standalone ACK timer expired?

    Window updates are generally triggered by the following heuristics:

    i) would the window update be for a non-trivial fraction of the
    window - typically somewhere at or above 1/4 of the window? That
    is, has the application "consumed" at least that much data? If yes,
    send a window update. If no, check ii.

    ii) would the window update cover at least 2*MSS worth of data
    "consumed" by the application? If yes, send a window update. If no,
    wait.
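
    Again as a toy model (hypothetical names), heuristics a) to c) and
    i) to ii) combine roughly like this:

        def window_update_due(consumed, window, mss):
            # i) a non-trivial fraction of the window was consumed...
            if consumed >= window // 4:
                return True
            # ii) ...or at least 2*MSS worth of data was consumed.
            return consumed >= 2 * mss

        def ack_now(data_outbound, consumed, window, mss, timer_expired):
            # a) piggy-back the ACK on outgoing data, or
            # b) piggy-back it on a window update, else
            # c) wait for the standalone ACK timer to expire.
            return (data_outbound
                    or window_update_due(consumed, window, mss)
                    or timer_expired)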

    Now, going back to that write, write, read application, on the sending
    side, the first write will be transmitted by TCP via logic rule 2 -
    the connection is otherwise idle. However, the second small send will
    be delayed as there is at that point unACKnowledged data outstanding
    on the connection.

    At the receiver, that small TCP segment will arrive and will be passed
    to the application. The application does not have the entire app-level
    message, so it will not send a reply (data to TCP) back. The typical
    TCP window is much much larger than the MSS, so no window update would
    be triggered by heuristic i. The data just arrived is < 2*MSS, so no
    window update from heuristic ii. Since there is no window update, no
    ACK is sent by heuristic b.

    So, that leaves heuristic c - the standalone ACK timer. That ranges
    anywhere between 50 and 200 milliseconds depending on the TCP stack in
    use.

    If you've read this far, we can now take a look at the effect of
    various things touted as "fixes" for applications experiencing this
    interaction. We take as our example a client-server application
    where both the client and the server are implemented with a write
    of a small application header, followed by application data. First,
    the "default" case, which is with Nagle enabled (TCP_NODELAY _NOT_
    set) and with standard ACK behaviour:

    Client                        Server
    Req Header                ->
                              <-  Standalone ACK after N ms
    Req Data                  ->
                              <-  Possible standalone ACK
                              <-  Rsp Header
    Standalone ACK            ->
                              <-  Rsp Data
    Possible standalone ACK   ->


    For two "messages" we end up with at least six segments on the
    wire. The possible standalone ACKs will depend on whether the
    server's response time or the client's think time is longer than
    the standalone ACK interval on their respective sides. Now, if
    TCP_NODELAY is set, we see:


    Client                        Server
    Req Header                ->
    Req Data                  ->
                              <-  Possible standalone ACK after N ms
                              <-  Rsp Header
                              <-  Rsp Data
    Possible standalone ACK   ->

    In theory we are down to four segments on the wire, which seems
    good, but frankly we can do better. First though, consider what
    happens when someone disables delayed ACKs:

    Client                        Server
    Req Header                ->
                              <-  Immediate standalone ACK
    Req Data                  ->
                              <-  Immediate standalone ACK
                              <-  Rsp Header
    Immediate standalone ACK  ->
                              <-  Rsp Data
    Immediate standalone ACK  ->

    Now we definitely see eight segments on the wire. It will also be
    that way if both TCP_NODELAY is set and delayed ACKs are disabled.

    How about if the application did the "right" thing in the first
    place? That is, sent the logically associated data at the same
    time:


    Client                        Server
    Request                   ->
                              <-  Possible standalone ACK
                              <-  Response
    Possible standalone ACK   ->

    We are down to two segments on the wire.

    For "small" packets, the CPU cost is about the same regardless of
    whether the packet carries data or an ACK. This means that the
    application which makes the proper gathering send call will spend
    far fewer CPU cycles in the networking stack.
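
    A minimal sketch of that gathering send in Python (illustrative
    names, not any particular server's code):

        def send_message(sock, header, payload):
            # One send per application-level message: TCP sees the
            # whole message at once, so there is no small trailing
            # segment for Nagle to hold back while the peer's delayed
            # ACK timer runs.
            sock.sendall(header + payload)
            # A copy-free alternative on POSIX is
            # sock.sendmsg([header, payload]), but note that sendmsg()
            # may send only part of the data and returns a byte count.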

    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  3. Re: Is it good enabling TCP_NODELAY on an FTP server?

    On Oct 15, 9:03 am, "Giampaolo Rodola'" wrote:
    > Hi,
    > I'm the maintainer of an FTP server library written in Python:
    > http://code.google.com/p/pyftpdlib/
    > I noticed that some FTP servers like proftpd have the TCP_NODELAY
    > option enabled by default.
    > I was wondering whether it would be a good idea to enable
    > TCP_NODELAY on my server as well, but before changing anything in
    > the production code I'd like to hear the opinion of some experts
    > first.
    > Does it really speed up the connection?
    > Could it occasionally lead to problems with some clients?


    Here are the two sides on that issue:

    1) Pro: it shouldn't make any difference, but on the off chance
    that something is wrong on one side or the other and it does make a
    difference, it will make things faster.

    2) Con: It should make no difference, and should anyone ever introduce
    any bad behavior into the server code, it will stop the stack from
    mitigating the damage.

    DS
