Slow TCP transfer from 100 megabit host to 1000 megabit host - Networking



Thread: Slow TCP transfer from 100 megabit host to 1000 megabit host

  1. Slow TCP transfer from 100 megabit host to 1000 megabit host

    I have a Tivo running at 100 megabits (its top speed) on the same
    network with 1 gigabit Linux hosts. TCP transfers from the Tivo are
    very slow because of the bandwidth differential. During the TCP
    session, the gigabit host will acknowledge the previous push packet
    from the Tivo too quickly, and the switch drops it on the floor (as
    observed with tcpdump). The Tivo will retransmit the previous packet,
    and eventually it will get acknowledged and move on to the next frame.
    Overall throughput ends up around 5 megabits/s. If I set the host
    interface to 100 megabits with ethtool, I get nearly the full 100
    megabits.

    Is there any way to throttle a TCP session to a specific host such
    that acknowledgements are queued on the Linux host to simulate a 100
    megabit session, and retain full gigabit bandwidth to every other
    host?
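    For what it's worth, Linux traffic control can approximate this on the
    receiving host. A minimal sketch with tc's HTB qdisc, where "eth0" and
    the Tivo's address 192.168.1.50 are made-up placeholders:

```shell
# Sketch only: pace egress toward one host (assumed to be the Tivo at
# 192.168.1.50) to ~95 Mbit/s, while all other traffic falls into the
# default class at the full gigabit rate. "eth0" is a placeholder.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 95mbit ceil 95mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 1gbit
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dst 192.168.1.50/32 flowid 1:10
```

    Note this only paces what the Linux host sends toward the Tivo (data
    and ACKs); it cannot make the switch buffer any better, so it is a
    workaround rather than a fix.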

    I believe there are switches out there that can deal with this
    bandwidth differential gracefully by queuing packets from a host
    running at a greater speed, but unfortunately I went the cheap Linksys
    route (SRW2048).



  2. Re: Slow TCP transfer from 100 megabit host to 1000 megabit host

    hackerbob@gmail.com writes:

    >I have a Tivo running at 100 megabits (its top speed) on the same
    >network with 1 gigabit Linux hosts. TCP transfers from the Tivo are
    >very slow because of the bandwidth differential. During the TCP
    >session, the gigabit host will acknowledge the previous push packet
    >from the Tivo too quickly, and the switch drops it on the floor (as


    Buy a better switch. Anything else you do is simply a workaround. The
    switch should NOT be throwing stuff away.

    >observed with tcpdump). The Tivo will retransmit the previous packet,
    >and eventually it will get acknowledged and move on to the next frame.
    >The full throughput at this speed is around 5 megabits/s. If I set the
    >host interface to 100 megabits with ethtool, I get nearly the full 100
    >megabits of overall throughput.


    >Is there any way to throttle a TCP session to a specific host such
    >that acknowledgements are queued on the Linux host to simulate a 100
    >megabit session, and retain full gigabit bandwidth to every other
    >host?


    >I believe there are switches out there that can deal with this
    >bandwidth differential gracefully by queuing packets from a host
    >running at a greater speed, but unfortunately I went the cheap Linksys
    >route (SRW2048).


    A switch that throws stuff away is a broken switch, cheap or not.




  3. Re: Slow TCP transfer from 100 megabit host to 1000 megabit host

    Unruh wrote:
    > hackerbob@gmail.com writes:


    > >I have a Tivo running at 100 megabits (its top speed) on the same
    > >network with 1 gigabit Linux hosts. TCP transfers from the Tivo are
    > >very slow because of the bandwidth differential. During the TCP
    > >session, the gigabit host will acknowledge the previous push packet
    > >from the Tivo too quickly, and the switch drops it on the floor (as


    > Buy a better switch. Anything else you do is simply a workaround. The
    > switch should NOT be throwing stuff away.


    Indeed, those ACKs, even at an ACK-every-other pace, should not be
    getting dropped by the switch. The bandwidth of the ACK stream would
    be well below 100 Mbit/s. This ass-u-me-s there is no data flowing
    back in the ACK segments.

    However, by any chance is the link from the TiVo to your switch
    half-duplex? Are the transfers from the TiVo "bursty?" If the link
    comes up half-duplex, it is possible you could be seeing capture
    effect. Web search for the details, but the idea is that a sender
    able to send at >> 100BT speeds can end up grabbing the link for a
    long time, causing the other end of the link to have its back-off
    timers grow large (this is at the Ethernet level) and perhaps even
    end up dropping packets for excessive transmission retries (again, at
    the Ethernet CSMA/CD level).

    If your switch has stats, check them for something like "excessive
    retries."
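    On the Linux end, the analogous counters can be pulled like so (a
    sketch; "eth0" is an assumed interface name, and which counters exist
    depends on the driver):

```shell
# Diagnostic sketch: look for error/drop/collision counters host-side.
ethtool -S eth0 | grep -iE 'err|drop|coll'   # NIC/driver counters, if exposed
ip -s link show dev eth0                     # kernel RX/TX error totals
```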

    There is also the good old fashioned issue of duplex mismatch.

    How 100Base-T Autoneg is supposed to work:

    When both sides of the link are set to autoneg, they will "negotiate"
    the duplex setting and select full-duplex if both sides can do
    full-duplex.

    If one side is hardcoded and not using autoneg, the autoneg process
    will "fail" and the side trying to autoneg is required by spec to use
    half-duplex mode.

    If one side is using half-duplex, and the other is using full-duplex,
    sorrow and woe is the usual result.

    So, the following table shows what will happen given various settings
    on each side:

                Auto        Half        Full

    Auto     Happiness     Lucky      Sorrow
    Half      Lucky      Happiness    Sorrow
    Full      Sorrow      Sorrow     Happiness

    Happiness means that there is a good shot of everything going well.
    Lucky means that things will likely go well, but not because you did
    anything correctly. Sorrow means that there _will_ be a duplex
    mismatch.

    When there is a duplex mismatch, on the side running half-duplex you
    will see various errors and probably a number of _LATE_ collisions
    ("normal" collisions don't count here). On the side running
    full-duplex you will see things like FCS errors. Note that those
    errors are not necessarily conclusive; they are simply indicators.

    Further, it is important to keep in mind that a "clean" ping (or the
    like, e.g. "linkloop" or default netperf TCP_RR) test result is
    inconclusive here - a duplex mismatch causes lost traffic _only_ when
    both sides of the link try to speak at the same time. A typical ping
    test, being a synchronous, one-at-a-time request/response, never
    tries to have both sides talking at the same time.
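    A test that does exercise both directions at once might look like the
    following sketch. It assumes netperf is installed locally and
    netserver is running on the far end; $PEER is a placeholder for that
    machine's address:

```shell
# Run a send stream and a receive stream simultaneously, so both sides
# of the link talk at once -- the case where a duplex mismatch actually
# loses traffic. $PEER is a placeholder address.
netperf -H "$PEER" -t TCP_STREAM -l 10 &   # data toward the peer
netperf -H "$PEER" -t TCP_MAERTS -l 10     # data from the peer, concurrently
wait
```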

    Finally, when/if you migrate to 1000Base-T, everything has to be set
    to auto-neg anyway.
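    Checking what actually got negotiated, and returning a hardcoded port
    to autoneg, is a one-liner each ("eth0" again being an assumed
    interface name):

```shell
# Sketch: inspect the negotiated link, then re-enable autonegotiation.
ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'
ethtool -s eth0 autoneg on
```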

    --
    Wisdom Teeth are impacted, people are affected by the effects of events.
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  4. Re: Slow TCP transfer from 100 megabit host to 1000 megabit host

    On Aug 4, 7:49 am, hacker...@gmail.com wrote:

    > I have a Tivo running at 100 megabits (its top speed) on the same
    > network with 1 gigabit Linux hosts. TCP transfers from the Tivo are
    > very slow because of the bandwidth differential. During the TCP
    > session, the gigabit host will acknowledge the previous push packet
    > from the Tivo too quickly, and the switch drops it on the floor (as
    > observed with tcpdump).
    > [...]


    This is not normal for any switch no matter how cheap. Something is
    really, really wrong. Are you sure you don't have a duplex-mismatch on
    the 100 megabit section?

    DS

  5. Re: Slow TCP transfer from 100 megabit host to 1000 megabit host

    On Aug 4, 11:48 am, Rick Jones wrote:

    > Unruh wrote:
    > > hacker...@gmail.com writes:
    > [...]
    > There is also the good old fashioned issue of duplex mismatch.
    > [...]
    > If one side is hardcoded and not using autoneg, the autoneg process
    > will "fail" and the side trying to autoneg is required by spec to use
    > half-duplex mode.
    > [...]


    So, as it turns out, you were correct.

    I had forced the switch port connected to the Tivo to 100/full. I
    switched it back to autonegotiate and it autonegotiated to 100/full
    on its own, and I'm now getting full throughput with the client at
    1000/full! I don't think the Tivo exposes a way to force speed and
    duplex, which kinda sucks, but at least it works.

    Thanks, guys! It's awesome you knew where I went wrong with what
    little information I gave.

  6. Re: Slow TCP transfer from 100 megabit host to 1000 megabit host

    On Aug 5, 7:09 am, hacker...@gmail.com wrote:

    > I had forced the switch port connected to the Tivo to 100/full. I
    > switched it back to autonegotiate and it autonegotiated to 100/full on
    > its own, and I'm now getting full throughput with the client at 1000/
    > full! I don't think Tivo exposes a way to force speed and duplex,
    > which kinda sucks, but at least it works.


    Forcing duplex is largely obsolete. Half-duplex is almost completely
    obsolete, and devices that can do full-duplex but don't negotiate are
    basically non-existent.

    The only reason to configure the speed would be a component in the
    path that can't support the higher speed the endpoints would
    negotiate (such as a marginal cable) sitting between two devices that
    can. That's not very common, but common enough that being able to
    lock the speed is handy sometimes.
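    If you do need to pin the speed on a modern link, one option is to
    restrict what gets advertised rather than switching negotiation off
    entirely. A sketch, with "eth0" assumed and the bitmask taken from
    ethtool(8):

```shell
# Advertise only 100baseT/Full (bitmask 0x008 per the ethtool man page),
# so autonegotiation still runs but can only settle on 100/full.
ethtool -s eth0 advertise 0x008
```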

    DS

  7. Re: Slow TCP transfer from 100 megabit host to 1000 megabit host

    hackerbob@gmail.com wrote:
    > Thanks, guys! It's awesome you knew where I went wrong with what
    > little information I gave.


    Chalk one up to finely aged intuition. Next time though, if you can,
    provide more information.

    rick jones
    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
