iperf performance - TCP-IP



Thread: iperf performance

  1. iperf performance

    I am using iperf to measure performance. Iperf is running on a Windows
    XP machine with a 1 Gbps NIC. This works fine when I set up iperf for
    rates up to about 35 Mbps, but after that iperf chokes. I suspect this
    has something to do with Windows reserving bandwidth and some of the
    services using bandwidth. What I need to do is run iperf at 100 Mbps,
    and I was wondering if anyone had ideas on tweaks I could make to the
    setup to achieve this.
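    For reference, a fixed-rate UDP test is typically set up along these
    lines (iperf 2 syntax; the receiver host name, test duration, and
    report interval below are placeholders, not values from this thread):

    ```shell
    # On the receiving machine: UDP server, report at 1-second intervals
    iperf -s -u -i 1

    # On the Windows XP sender: UDP client targeting 100 Mbps,
    # 1470-byte datagrams (fits in a single Ethernet frame)
    iperf -c <receiver-host> -u -b 100M -l 1470 -t 30 -i 1
    ```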

    Thank You

  2. Re: iperf performance

    swengineer001@gmail.com wrote:
    > I am using iperf to measure performance. Iperf is running on a
    > windows XP machine with 1Gbps NIC card. This works fine when I am
    > setting up iperf for rates up to about 35 Mbps but after that iperf
    > chokes. I suspect this has something to do with windows reserving
    > bandwidth and some of the services using bandwidth. What I need to
    > do is run iperf at 100 Mbps and I was wondering if anyone had some
    > ideas on tweaks I could make to the setup to achieve this.


    What exactly do you mean by "chokes?"

    rick jones
    --
    Wisdom Teeth are impacted, people are affected by the effects of events.
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  3. Re: iperf performance

    On Dec 11, 2:33 pm, Rick Jones wrote:
    > swengineer...@gmail.com wrote:
    > > I am using iperf to measure performance. Iperf is running on a
    > > windows XP machine with 1Gbps NIC card. This works fine when I am
    > > setting up iperf for rates up to about 35 Mbps but after that iperf
    > > chokes. I suspect this has something to do with windows reserving
    > > bandwidth and some of the services using bandwidth. What I need to
    > > do is run iperf at 100 Mbps and I was wondering if anyone had some
    > > ideas on tweaks I could make to the setup to achieve this.

    >
    > What exactly do you mean by "chokes?"
    >
    > rick jones
    > --
    > Wisdom Teeth are impacted, people are affected by the effects of events.
    > these opinions are mine, all mine; HP might not want them anyway...
    > feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...


    Iperf begins to report packet loss, but the packets appear to be
    getting lost due to a problem on the PC, not due to any external
    device, which is what the test is supposed to be measuring.
    Everything that makes it onto the wire makes it to the receiving side.

  4. Re: iperf performance

    swengineer001@gmail.com wrote:
    > On Dec 11, 2:33 pm, Rick Jones wrote:
    > > What exactly do you mean by "chokes?"


    > Iperf begins to report packet loss but the packets appear to be
    > getting lost due to a problem on the pc not due to any external
    > device that is supposed to be what the test is measuring. everything
    > that makes it onto the wire makes it to the receiving side.


    I take it then you are sending data via UDP? And when you say it makes
    it to the receiving side, do you mean all the way up to user space?

    When doing things with UDP and netperf at least, there are two common
    places where packets are tossed - one is the receive socket buffer,
    the other is the transmit queue on the sender. In each case there are
    stats in the stack which should be incremented. For transmit queue
    overflow on Linux I typically look at ethtool stats, lanadmin on
    HP-UX. I look at netstat to try to see the socket buffer overflows on
    the receiving side. What the link-level stats utility is for Windows
    continues to elude me, but someone else might be able to chime in with
    that.
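    On the receive side on Linux, the counter in question shows up in
    `netstat -su` output. A small sketch of reading it out of that text
    (the sample below mimics the usual Linux output shape; the exact field
    names vary between netstat versions, so treat them as an assumption):

    ```python
    import re

    def udp_receive_buffer_errors(netstat_su_text: str) -> int:
        """Pull the UDP receive-buffer-error counter out of `netstat -su` text.

        Returns 0 if the line is absent (some netstat versions omit it).
        """
        m = re.search(r"(\d+)\s+receive buffer errors", netstat_su_text)
        return int(m.group(1)) if m else 0

    # Sample text in the shape `netstat -su` prints on Linux:
    sample = """\
    Udp:
        1234567 packets received
        0 packets to unknown port received.
        8910 packet receive errors
        7654321 packets sent
        8910 receive buffer errors
        0 send buffer errors
    """

    # A count that grows during the test means the receive socket
    # buffer overflowed, i.e. drops happened on the PC, not the wire.
    print(udp_receive_buffer_errors(sample))
    ```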

    You spoke of configuring iperf to send at a given rate. I've not
    looked at the iperf sources, but in netperf the send rate is
    expressed, by default, as a burst size and an interburst interval.
    When the granularity of interval timers is poorly suited to the task,
    one can end up with a rather large burst size, which can be greater
    than the transmit queue. At that point one is in a very short race
    between the host CPU(s) and the NIC. If the host CPU(s) "win" that
    race, it can lead to transmit queue drops.
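    The arithmetic behind that race is easy to sketch. The figures below
    (1470-byte datagrams, a 10 ms minimum timer tick, a 64-descriptor
    transmit ring) are illustrative assumptions, not measurements from
    this setup:

    ```python
    TARGET_BPS = 100_000_000   # target send rate: 100 Mbps
    DATAGRAM_BYTES = 1470      # one Ethernet frame's worth of UDP payload
    TIMER_TICK_S = 0.010       # coarsest usable sleep on many systems: 10 ms

    # To average 100 Mbps with only one wakeup per 10 ms tick, each
    # wakeup must emit the whole tick's worth of datagrams back to back.
    pkts_per_sec = TARGET_BPS / (DATAGRAM_BYTES * 8)   # ~8503 datagrams/s
    burst_size = round(pkts_per_sec * TIMER_TICK_S)    # datagrams per burst

    print(f"{pkts_per_sec:.0f} pkts/s -> bursts of {burst_size} datagrams")
    # A burst of ~85 back-to-back datagrams can already overflow a small
    # NIC transmit ring (say 64 descriptors) if the NIC can't drain it
    # as fast as the CPU fills it.
    ```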

    One thing I do in netperf to work around this issue is allow it to
    "spin" rather than sleep. This allows the bursts to be much, much
    smaller, because a much shorter interburst interval can be used.
    However, it comes at the expense of significantly larger CPU
    utilization on the sending side. It also depends on decent resolution
    in the platform's "gettimeofday()-like" call.
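    The spin-versus-sleep tradeoff can be sketched in a few lines. This is
    an illustration of the technique, not netperf's actual code:

    ```python
    import time

    def paced_send(send_one, n_packets: int, interval_s: float,
                   spin: bool = True) -> float:
        """Send n_packets, one every interval_s seconds; return elapsed time.

        spin=True busy-waits on the clock (accurate pacing, burns a CPU);
        spin=False sleeps (cheap, but wakeups are limited by timer
        granularity, which forces larger bursts at high rates).
        """
        start = time.perf_counter()
        next_deadline = start
        for _ in range(n_packets):
            if spin:
                while time.perf_counter() < next_deadline:
                    pass                  # spin: poll the clock until it's time
            else:
                delay = next_deadline - time.perf_counter()
                if delay > 0:
                    time.sleep(delay)     # sleep: wakeup may be a tick late
            send_one()
            next_deadline += interval_s
        return time.perf_counter() - start

    sent = []
    elapsed = paced_send(lambda: sent.append(1), n_packets=100, interval_s=0.001)
    print(f"sent {len(sent)} packets in {elapsed * 1000:.1f} ms")
    ```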

    What are the specific options you are giving to iperf? Does it have a
    "sit and spin" option?

    rick jones
    --
    No need to believe in either side, or any side. There is no cause.
    There's only yourself. The belief is in your own precision. - Jobert
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
