Detecting drops of outgoing UDP segments due to lack of local resources - TCP-IP


  1. Detecting drops of outgoing UDP segments due to lack of local resources

    Hi,

    Suppose an application is send()ing out a high rate of UDP
    segments, and most segments suffer local drops due to
    lack of system resources.

    How can the application detect that the local operating
    system is discarding outgoing segments?

    Thanks,
    Everton


  2. Re: Detecting drops of outgoing UDP segments due to lack of local resources

    In article <1155651553.796081.55060@m73g2000cwd.googlegroups.com>,
    Everton wrote:
    >Suppose an application is send()ing out a high rate of UDP
    >segments, and most segments suffer local drops due to
    >lack of system resources.


    >How can the application detect that the local operating
    >system is discarding outgoing segments?


    The application can detect it by adding in tracing information
    (e.g., "sequence numbers") to the transmitted data, and examining
    that information at the receiving end, and finding some way to
    transmit that status information back (taking into account that
    the responses are going to be UDP as well and so might not make
    it back...)
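
    A minimal sketch of that idea in C, for what it's worth; the function
    names, the four-byte header layout, and the 2048-byte buffer are
    arbitrary choices for illustration, not part of any standard interface:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Sender side: prefix each datagram with a 32-bit sequence number. */
    ssize_t send_numbered(int sockfd, uint32_t *seq,
                          const void *payload, size_t len)
    {
        char buf[2048];
        uint32_t net_seq = htonl((*seq)++);   /* network byte order */

        if (len > sizeof(buf) - sizeof(net_seq))
            return -1;
        memcpy(buf, &net_seq, sizeof(net_seq));
        memcpy(buf + sizeof(net_seq), payload, len);
        return send(sockfd, buf, sizeof(net_seq) + len, 0);
    }

    /* Receiver side: a jump in the sequence numbers means datagrams were
     * lost somewhere -- locally, in the network, or in the receiver's own
     * socket buffer; the gap alone cannot say where. */
    void note_gap(uint32_t received_seq, uint32_t *expected_seq,
                  unsigned long *lost)
    {
        if (received_seq != *expected_seq)
            *lost += (uint32_t)(received_seq - *expected_seq);
        *expected_seq = received_seq + 1;
    }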


    No, there is no standard interface by which an application could
    detect that outgoing packets are being dropped. And wouldn't it
    be more useful to know whether the dropped packets happen to belong
    to the application, rather than just vaguely knowing that
    *somewhere* on the system, there is an application whose packets
    are being dropped?

    Any particular operating system might provide more extensive
    interfaces. See in particular your local implementation of 'netstat'.

  3. Re: Detecting drops of outgoing UDP segments due to lack of local resources

    Walter Roberson wrote:
    >
    > No, there is no standard interface by which an application could
    > detect that outgoing packets are being dropped. And wouldn't it
    > be more useful to know whether the dropped packets happen to belong
    > to the application, rather than just vaguely knowing that
    > *somewhere* on the system, there is an application whose packets
    > are being dropped?


    Yes, it would. I was just hoping the system could keep per-socket
    counters for locally discarded segments, and would make those
    counters available in some API. Something like this:

    sockfd = socket(...);
    connect(sockfd, ...);
    while (pending_data) {
            sent = send(sockfd, ...);
            /* do some work, check for failures, so on */
    }
    local_output_drops = dgram_discards(sockfd); /* wanted API :-) */
    close(sockfd);

    Thanks,
    Everton


  4. Re: Detecting drops of outgoing UDP segments due to lack of local resources

    Walter Roberson wrote:

    >>Suppose an application is send()ing out a high rate of UDP
    >>segments, and most segments suffer local drops due to
    >>lack of system resources.

    >
    >>How can the application detect that the local operating
    >>system is discarding outgoing segments?

    >
    >The application can detect it by adding in tracing information
    >(e.g., "sequence numbers") to the transmitted data, and examining


    Some such mechanism is a good idea in general because it detects most
    of the many kinds of UDP datagram losses and not only losses in
    the sending host.


    >No, there is no standard interface by which an application could
    >detect that outgoing packets are being dropped.


    On the contrary, BSD-related TCP/IP implementations generally have
    send() return -1 and set errno to ENOBUFS when the application sends
    too many UDP datagrams too quickly. I don't know but wouldn't be
    surprised if Solaris does the same, only with ENOSR instead of ENOBUFS.

    For an explicit example, look at the rate limiting in ttcp. See
    http://www.google.com/search?q=ttcp+enobufs
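
    A hedged sketch of handling that condition in C; the 1 ms back-off is
    arbitrary, and as noted whether send() fails with ENOBUFS, blocks, or
    uses another errno depends on the platform:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>

    /* Send one datagram, pausing briefly whenever the local stack reports
     * it is out of buffers (send() == -1 with errno == ENOBUFS on
     * BSD-derived stacks). */
    int send_with_backoff(int sockfd, const void *buf, size_t len)
    {
        struct timespec delay = { 0, 1000000 };   /* 1 ms -- arbitrary */

        for (;;) {
            ssize_t n = send(sockfd, buf, len, 0);
            if (n >= 0)
                return 0;             /* the local stack accepted it */
            if (errno != ENOBUFS)
                return -1;            /* some other failure */
            nanosleep(&delay, NULL);  /* let the interface queue drain */
        }
    }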


    Note that UDP involves "datagrams" or "packets" because each UDP message
    is independent as far as the network protocol is concerned. TCP involves
    "segments" because each TCP/IP packet is part of the total TCP stream.


    Vernon Schryver vjs@rhyolite.com

  5. Re: Detecting drops of outgoing UDP segments due to lack of local resources


    Everton wrote:

    > Yes, it would. I was just hoping the system could keep per-socket
    > counters for locally discarded segments, and would make those
    > counters available in some API. Something like this:


    There really is nothing useful you could do with the information. Why
    would you want to treat a local drop differently from a drop a hop
    away?

    DS


  6. Re: Detecting drops of outgoing UDP segments due to lack of local resources

    Vernon Schryver wrote:
    > On the contrary, BSD related TCP/IP implementations generally have
    > send() answer -1 and set errno=ENOBUFS when the application sends
    > too many UDP datagrams too quickly. I don't know but wouldn't be
    > surprised if Solaris does the same but with ENOSR instead of
    > ENOBUFS.


    My recollection is that Solaris may have intra-stack flow control for
    a blocking UDP socket at least, if I am recalling netperf behaviour
    correctly. HP-UX 11 has no notification back. Linux has intra-stack
    flow control, at least for a blocking socket. AIX 5.3 appears to set
    ENOBUFS (again based on netperf behaviour).
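
    One way an application can surface those differences is to put the
    socket in non-blocking mode, so local back-pressure shows up as an
    error return rather than a silent block. A sketch assuming a POSIX
    fcntl() interface; the helper names are mine, and which errno appears
    is an assumption based on the behaviour described above:

    #include <errno.h>
    #include <fcntl.h>

    /* Make the socket non-blocking so a full send queue is reported
     * instead of blocking the caller. */
    int make_nonblocking(int sockfd)
    {
        int flags = fcntl(sockfd, F_GETFL, 0);
        if (flags < 0)
            return -1;
        return fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Classify a failed send() for logging: EAGAIN/EWOULDBLOCK where the
     * stack applies intra-stack flow control (e.g. Linux), ENOBUFS on
     * BSD-style stacks. */
    const char *describe_send_errno(int err)
    {
        if (err == EAGAIN || err == EWOULDBLOCK)
            return "local flow control: send queue full";
        if (err == ENOBUFS)
            return "local stack out of buffers";
        return "other send() failure";
    }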

    As for knowing at the application level whether the drops were local
    or somewhere else in the cloud, I'm not sure how much the
    _application_ could do with that information - it has to retransmit
    regardless.

    > For an explicit example, look at the rate limiting in ttcp. See
    > http://www.google.com/search?q=ttcp+enobufs


    While that is goodness, an application would still need (?) to try to
    rate limit itself in the case of drops somewhere in the cloud, right?
    The mechanism to do that would also cover the local loss case. I have
    to wonder if what ttcp does is a bit more of a special case to ease
    benchmarking?
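
    For reference, here is a minimal pacing sketch of that kind of
    self-rate-limiting; it is a generic sleep-per-datagram loop, not a
    description of what ttcp actually does, and the parameters and the
    absent error handling are simplifications:

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>

    /* Pace the offered load to roughly 'per_second' datagrams per second
     * by sleeping between sends. */
    void paced_send(int sockfd, const void *buf, size_t len,
                    long per_second, long count)
    {
        struct timespec gap = { 0, 0 };

        if (per_second <= 0)
            return;                            /* nothing sensible to do */
        if (per_second == 1)
            gap.tv_sec = 1;
        else
            gap.tv_nsec = 1000000000L / per_second;

        for (long i = 0; i < count; i++) {
            (void)send(sockfd, buf, len, 0);   /* errors ignored for brevity */
            nanosleep(&gap, NULL);
        }
    }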

    > Note that UDP involves "datagrams" or "packets" because each UDP


    My vote would be for datagram

    rick jones
    --
    a wide gulf separates "what if" from "if only"
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  7. Re: Detecting drops of outgoing UDP segments due to lack of local resources


    davids@webmaster.com wrote:
    > Everton wrote:
    >
    > > Yes, it would. I was just hoping the system could keep per-socket
    > > counters for locally discarded segments, and would make those
    > > counters available in some API. Something like this:

    >
    > There really is nothing useful you could do with the information. Why
    > would you want to treat a local drop differently from a drop a hop
    > away?


    Because it makes the difference between beating the sysadmin and
    beating the network admin?

    ;-)


  8. Re: Detecting drops of outgoing UDP segments due to lack of local resources

    Everton wrote:
    > How can the application detect that the local operating
    > system is discarding outgoing segments?


    I don't believe this actually happens at all. The implementations I am
    aware of all block until there is room in the socket send buffer, which
    can only be created by sending. Of course the datagram can be lost real
    quick thereafter ...

  9. Re: Detecting drops of outgoing UDP segments due to lack of local resources

    EJP wrote:
    > Everton wrote:
    >> How can the application detect that the local operating
    >> system is discarding outgoing segments?


    > I don't believe this actually happens at all. The implementations I am
    > aware of all block until there is room in the socket send buffer, which
    > can only be created by sending. Of course the datagram can be lost real
    > quick thereafter ...


    There are a number of platforms where things will not block for UDP and,
    in fact, nothing ever _really_ gets put into SO_SNDBUF in the first
    place. HP-UX and AIX, for instance; I'm not sure if BSD has gone to the
    Linux-like intra-stack flow control yet.

    rick jones
    --
    Process shall set you free from the need for rational thought.
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
