
Thread: eth network seems way too slow

  1. eth network seems way too slow

    I just ran a file-transfer speed test on my newly configured LAN. I used
    ssh for a quick and dirty test before I move to stronger ammunition
    including netperf etc.

    For a 4 GB file transfer I get a time of approx. 86 sec. Using a rough
    factor of 1 GByte = 10 Gbits, this translates to an effective transfer
    rate of 0.46 Gbps. Even accounting for protocol overheads etc., this rate
    seems way too low for what I was expecting from my twin-eth-port bonded
    machines.
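
    (For concreteness, the test was something like the following; the file
    path is a placeholder:)

    $ time scp /scratch/test-4G 10.0.0.100:/scratch/
    # 4 GB x 10 Gbit/GByte (rough factor) = 40 Gbit; 40 Gbit / 86 s = 0.46 Gbps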

    Do other people have any benchmark numbers that I can compare against?

    I ought to mention that the CPUs on both machines were almost unloaded. As
    a rough indicator of disk I/O, I see 40 secs. for a file copy from disk to
    disk on any single machine. So I _think_ the network is my bottleneck. But
    correct me if I am wrong. I'd be up for trying any more "realistic" /
    "sophisticated" tests that people might suggest instead of my primitive
    approach.


    --
    Rahul

  2. Re: eth network seems way too slow

    In comp.os.linux.networking Rahul wrote:
    > I just ran a file-transfer speed test on my newly configured LAN. I
    > used ssh for a quick and dirty test before I move to stronger
    > ammunition including netperf etc.


    > For a 4 GB file transfer I have a time of approx. 86 sec. Using a
    > rough factor of 1 GByte=10 Gbits this translates to an effective
    > transmit of 0.46 Gbps. Even accounting for the protocol overheads
    > etc. this seems a rate way too low for what I was expecting for my
    > twin-eth-ports bonded machines.


    > Do other people have any benchmark numbers that I can compare against?


    What was the CPU utilization of all the CPUs on each side? Was one of
    them pegged?

    ssh/scp implies crypto. crypto implies CPU overhead.

    also, there may be a question of the TCP window size being used for
    the transfer. ssh may set an explicit SO_*BUF size which will disable
    Linux's much-vaunted socket buffer autotuning and may subject ssh/scp
    to a rather lower socket buffer size limit than can be achieved (by
    default) with the autotuning.
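
    (You can eyeball the autotuning state and its limits with something like
    the following; sysctl names from memory, so double-check them:)

    $ sysctl net.ipv4.tcp_moderate_rcvbuf
    $ sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem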

    > I ought to mention that CPUs on both machines were almost
    > unloaded. As a rough indicator of Disk I/O I see 40 secs. for a file
    > copy from disk to disk on any single machine. So I _think_ the
    > network is my bottleneck. But correct me if I am wrong. I'd be up
    > for trying any more "realistic" / "sophisticated" tests that people
    > might suggest instead of my primitive approach.


    You want to consider both getting even more primitive and running
    netperf TCP_STREAM and also taking packet traces of your scp
    transfer(s).

    rick jones
    --
    firebug n, the idiot who tosses a lit cigarette out his car window
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  3. Re: eth network seems way too slow

    Rick Jones wrote in news:ga6cam$uj8$4@usenet01.boi.hp.com:

    > What was the CPU utilization of all the CPUs on each side? Was one of
    > them pegged?


    I might have jumped the gun. I'm not so sure about my earlier statement
    that the CPU was not the limiting factor. I'm not exactly sure how to
    measure CPU utilization accurately on a multicore machine. Other than
    "top", what are the recommended options?

    > ssh/scp implies crypto. crypto implies CPU overhead.
    >


    Any way to force a disable-crypto mode on ssh? Also funny is the following
    result: I tried to benchmark how long it takes scp to do a local disk-to-
    disk copy, presuming that this would let me compare against a simple cp
    and thus figure out the encryption overhead.

    But cp and scp took the same time! Does scp figure out that it is a local
    copy and then disable encryption?
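
    (One thing I suppose I could try, if my OpenSSH build supports it, is a
    cheaper cipher, e.g.:

    $ scp -c arcfour bigfile 10.0.0.100:/scratch/

    though that would only reduce the crypto cost, not remove it.)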




    --
    Rahul

  4. Re: eth network seems way too slow

    Rahul wrote:
    > Rick Jones wrote in news:ga6cam$uj8$4@usenet01.boi.hp.com:
    >
    >> What was the CPU utilization of all the CPUs on each side? Was one of
    >> them pegged?

    >
    > I might have jumped the gun. I'm not so sure about my earlier statement
    > that the cpu was not the limiting factor. I'm not exactly sure how to
    > measure the CPU utilization accurately on a multicore machine. Other than
    > "top" what are the reccomended options?


    I do not know about "recommended" options, but one I like is xosview. It
    gives a bar chart showing CPU use (divided as to user, nice, system, idle,
    wait, hardware interrupt, software interrupt), IO usage, network usage, and
    many other things. I have mine set to show things at one second intervals,
    but I believe you can make them go up to 1/10 second intervals if you want.

    Another useful one is vmstat, and still another is iostat. But I do not know
    if these will help you or not. You can always try them.
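
    For example, to sample at one-second intervals (iostat's -x gives
    extended per-device statistics):

    $ vmstat 1
    $ iostat -x 1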

    --
    .~. Jean-David Beyer Registered Linux User 85642.
    /V\ PGP-Key: 9A2FC99A Registered Machine 241939.
    /( )\ Shrewsbury, New Jersey http://counter.li.org
    ^^-^^ 18:00:01 up 34 days, 6 min, 4 users, load average: 4.10, 4.11, 4.16

  5. Re: eth network seems way too slow

    Jean-David Beyer writes:

    > Rahul wrote:
    >> Rick Jones wrote in news:ga6cam$uj8$4@usenet01.boi.hp.com:
    >>
    >>> What was the CPU utilization of all the CPUs on each side? Was one of
    >>> them pegged?

    >>
    >> I might have jumped the gun. I'm not so sure about my earlier
    >> statement that the cpu was not the limiting factor. I'm not exactly
    >> sure how to measure the CPU utilization accurately on a multicore
    >> machine. Other than "top" what are the recommended options?


    In addition to the ones in the previous post, I'll mention gkrellm; it
    also displays a wide variety of system behavior graphs, including CPU
    utilization.

  6. Re: eth network seems way too slow

    In comp.os.linux.networking Rahul wrote:
    > Rick Jones wrote in news:ga6cam$uj8$4@usenet01.boi.hp.com:


    > > What was the CPU utilization of all the CPUs on each side? Was
    > > one of them pegged?


    > I might have jumped the gun. I'm not so sure about my earlier
    > statement that the cpu was not the limiting factor. I'm not exactly
    > sure how to measure the CPU utilization accurately on a multicore
    > machine. Other than "top" what are the recommended options?


    I would go with top and then hit "1" to get it to show per-CPU
    (core/whatever) utilization.

    I'm sure there are other tools out there with wizzy displays and
    charts and graphs and dials and whatnot, but for what you are doing
    IMO just numbers are fine.

    rick jones
    --
    Process shall set you free from the need for rational thought.
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  7. Re: eth network seems way too slow

    Rick Jones wrote in news:ga6cam$uj8$4@usenet01.boi.hp.com:

    > You want to consider both getting even more primitive and running
    > netperf TCP_STREAM and also taking packet traces of your scp
    > transfer(s).
    >
    >


    I did run netperf. Here are some output snippets. To me it seems
    performance has been insensitive to the choice of bonding mode so far. It
    does not even matter whether I have one ethernet card or two. I cannot
    rationalize this. Any leads?

    Or am I running the wrong netperf tests? IMO the only explanation is that
    netperf has not generated enough traffic to saturate both my links.

    --
    Rahul


    *********************************************************
    mode=4

    [root@node01 scratch]# /opt/netperf2/bin/netperf -t TCP_RR -H 10.0.0.100
    TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
    10.0.0.100 (10.0.0.100) port 0 AF_INET
    Local /Remote
    Socket Size   Request  Resp.   Elapsed  Trans.
    Send   Recv   Size     Size    Time     Rate
    bytes  Bytes  bytes    bytes   secs.    per sec

    16384  87380  1        1       10.00    7023.38
    16384  87380

    *********************************************************
    mode=6

    [root@node04 ~]# /opt/netperf2/bin/netperf -t TCP_RR -H 10.0.0.100
    TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
    10.0.0.100 (10.0.0.100) port 0 AF_INET
    Local /Remote
    Socket Size   Request  Resp.   Elapsed  Trans.
    Send   Recv   Size     Size    Time     Rate
    bytes  Bytes  bytes    bytes   secs.    per sec

    16384  87380  1        1       10.00    7600.67
    16384  87380

    *********************************************************
    only 1 eth up. Other disabled via switch, simulating a card failure.

    [root@node05 ~]# /opt/netperf2/bin/netperf -t TCP_RR -H 10.0.0.100
    TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
    10.0.0.100 (10.0.0.100) port 0 AF_INET
    Local /Remote
    Socket Size   Request  Resp.   Elapsed  Trans.
    Send   Recv   Size     Size    Time     Rate
    bytes  Bytes  bytes    bytes   secs.    per sec

    16384  87380  1        1       10.00    6952.47
    16384  87380



  8. Re: eth network seems way too slow

    On Sep 9, 2:42 pm, Rahul wrote:

    > Any way to force a disable-crypto mode on ssh? Also funny is the following
    > result. I tried to benchmark how long it takes scp to do a local disk-to-
    > disk copy. Presuming that this would allow me a comparison versus a simple
    > cp and thus figure out the encryption overhead.
    >
    > But cp and scp took the same time! Does scp figure out that it is a local
    > copy and then disable encryption?


    It depends on the command line you pass to scp. If you do
    '@localhost:' it will actually connect to the local machine's 'ssh'
    server.

    DS

  9. Re: eth network seems way too slow

    On 2008-09-09, Rick Jones wrote:
    > In comp.os.linux.networking Rahul wrote:
    >> I just ran a file-transfer speed test on my newly configured LAN. I
    >> used ssh for a quick and dirty test before I move to stronger
    >> ammunition including netperf etc.

    >
    >> For a 4 GB file transfer I have a time of approx. 86 sec. Using a
    >> rough factor of 1 GByte=10 Gbits this translates to an effective
    >> transmit of 0.46 Gbps. Even accounting for the protocol overheads
    >> etc. this seems a rate way too low for what I was expecting for my
    >> twin-eth-ports bonded machines.

    >
    >> Do other people have any benchmark numbers that I can compare against?

    >
    > What was the CPU utilization of all the CPUs on each side? Was one of
    > them pegged?
    >
    > ssh/scp implies crypto. crypto implies CPU overhead.
    >
    > also, there may be a question of the TCP window size being used for
    > the transfer. ssh may set an explicit SO_*BUF size which will disable
    > Linux's much-vaunted socket buffer autotuning and may subject ssh/scp
    > to a rather lower socket buffer size limit than can be achieved (by
    > default) with the autotuning.
    >
    >> I ought to mention that CPUs on both machines were almost
    >> unloaded. As a rough indicator of Disk I/O I see 40 secs. for a file
    >> copy from disk to disk on any single machine. So I _think_ the
    >> network is my bottleneck. But correct me if I am wrong. I'd be up
    >> for trying any more "realistic" / "sophisticated" tests that people
    >> might suggest instead of my primitive approach.

    >
    > You want to consider both getting even more primitive and running
    > netperf TCP_STREAM and also taking packet traces of your scp
    > transfer(s).


    How about trying ftp to see if that can achieve higher
    throughput than scp?

    Another option for testing (and improving) network transfer
    speed is to use a pair of simple TCP sockets to do the
    transfer. I got fed up with the CPU overhead slowing down
    my SCP transfers, so I wrote a quick Java program that
    essentially copies between a file and a TCP socket. On my
    100Mbit home LAN, it achieves full network potential with
    almost no CPU overhead.
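
    (A quick way to approximate that without writing any code is netcat; the
    port number here is arbitrary, and flag syntax varies a bit between
    netcat flavors:)

    receiver$ nc -l -p 5001 > /dev/null
    sender$   nc 10.0.0.100 5001 < testfile-4G

    Writing to /dev/null on the receiver also takes the disk out of the
    picture entirely.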

    HTH

    --
    Robert Riches
    spamtrap42@verizon.net
    (Yes, that is one of my email addresses.)

  10. Re: eth network seems way too slow

    Rick Jones wrote in news:ga70u6$bhg$1@usenet01.boi.hp.com:

    > I would go with top and then hit "1" to get it to show per-CPU
    > (core/whatever) utilization.


    Perfect, Rick. The simpler the better. I did just that.

    Processor load is around 65%, and only on a single core. So the processor
    is not my bottleneck now, right? It could only be the network or the disk.


    --
    Rahul

  11. Re: eth network seems way too slow

    In comp.os.linux.networking Rahul wrote:
    > Rick Jones wrote in news:ga6cam$uj8$4@usenet01.boi.hp.com:


    > > You want to consider both getting even more primitive and running
    > > netperf TCP_STREAM and also taking packet traces of your scp
    > > transfer(s).


    > I did run netperf. Here are some output snippets. To me it seems performance
    > has been insensitive to choice of mode so far. It is not even relevant if
    > I have one ethernet card or two. I cannot rationalize this. Any leads?


    The netperf TCP_RR test is a "ping-pong" test rather like the ping
    utility with no think/pause time at all. It is measuring latency - or
    rather the inverse in transactions per second.

    > Or am I running the wrong netperf tests? IMO the only explanation is that
    > netperf has not generated enough traffic to saturate both my links.


    I think you should run TCP_STREAM tests:

    netperf -t TCP_STREAM -H 10.0.0.100 -c -C -- -s 1M -S 1M -m 64K

    is my current favorite bulk throughput test. The -c/-C will include
    CPU util, the -s/-S will set large socket buffers (and will perhaps be
    clipped by the stack, which will become clear in the output) and then
    push 64KB of data into the socket at one time.
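
    (That assumes netserver is already running on 10.0.0.100; if it isn't,
    start it there first, e.g.:

    # /opt/netperf2/bin/netserver

    with the path adjusted to wherever your netperf2 is installed.)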


    > [root@node01 scratch]# /opt/netperf2/bin/netperf -t TCP_RR -H 10.0.0.100
    > TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
    > 10.0.0.100 (10.0.0.100) port 0 AF_INET
    > Local /Remote
    > Socket Size   Request  Resp.   Elapsed  Trans.
    > Send   Recv   Size     Size    Time     Rate
    > bytes  Bytes  bytes    bytes   secs.    per sec

    > *****************************************************
    > mode=4
    > 16384  87380  1        1       10.00    7023.38
    > 16384  87380

    > *****************************************************
    > mode=6
    > 16384  87380  1        1       10.00    7600.67
    > 16384  87380

    > *****************************************************
    > only 1 eth up. Other disabled via switch, simulating a card failure.
    > 16384  87380  1        1       10.00    6952.47
    > 16384  87380


    Looks like you have NICs and drivers which favor bulk throughput over
    minimizing latency:

    ftp://ftp.cup.hp.com/dist/networking...cy_vs_tput.txt

    rick jones
    --
    firebug n, the idiot who tosses a lit cigarette out his car window
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  12. Re: eth network seems way too slow

    Rick Jones wrote in news:ga97d2$raa$1@usenet01.boi.hp.com:

    > I think you should run TCP_STREAM tests:
    >
    > netperf -t TCP_STREAM -H 10.0.0.100 -c -C -- -s 1M -S 1M -m 64K
    >
    >


    Thanks Rick. I did try that. Output attached below. If I read it
    correctly (I might be totally off), the machines are talking at about 1
    Gbps, irrespective of the bonding mode chosen. Even with a link disabled
    I get the same numbers (assuming "Throughput" is the column to look at).
    What I was expecting was numbers closer to 2 Gbps.

    What's happening here? Any more comments / ideas?

    Oh, also, which is the CPU utilisation figure again? I'm not sure I see
    it.

    **********************************************
    mode=4

    [root@node02 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -c
    -C -- -s 1M -S 1M -m 64K
    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.100
    (10.0.0.100) port 0 AF_INET : demo
    Recv   Send    Send                          Utilization       Service Demand
    Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
    Size   Size    Size     Time     Throughput  local    remote   local   remote
    bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

    262142 262142  65536    10.00      940.89    0.87     1.83     0.608   1.278

    ***************************************************
    mode=6

    [root@node10 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -c
    -C -- -s 1M -S 1M -m 64K
    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.100
    (10.0.0.100) port 0 AF_INET : demo
    Recv   Send    Send                          Utilization       Service Demand
    Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
    Size   Size    Size     Time     Throughput  local    remote   local   remote
    bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

    262142 262142  65536    10.00      941.05    2.25     0.31     1.567   0.217

    ******************************************
    only 1 link up

    [root@node10 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -c
    -C -- -s 1M -S 1M -m 64K
    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.100
    (10.0.0.100) port 0 AF_INET : demo
    Recv   Send    Send                          Utilization       Service Demand
    Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
    Size   Size    Size     Time     Throughput  local    remote   local   remote
    bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

    262142 262142  65536    10.00      941.08  -24.17     2.27  -16.833    1.580
    *******************************************

    --
    Rahul

  13. Re: eth network seems way too slow

    Rick Jones wrote in news:ga97d2$raa$1@usenet01.boi.hp.com:

    > Looks like you have NICs and drivers which favor bulk throughput over
    > minimizing latency:
    >


    In which case my latency might be horrible, but I ought to still expect
    throughput close to 2 Gbps post-bonding, right? At least with mode=4
    (802.3ad), which truly aggregates the channels, even for a single
    peer-to-peer flow?

    --
    Rahul

  14. Re: eth network seems way too slow

    On Wed, 10 Sep 2008, Rahul wrote:

    > In which case my latency might be horrible but I ought to still expect
    > throughput close to 2 Gbps post bonding, right?


    No.

  15. Re: eth network seems way too slow

    In comp.os.linux.networking Rahul wrote:
    > Rick Jones wrote in news:ga97d2$raa$1@usenet01.boi.hp.com:


    > > I think you should run TCP_STREAM tests:
    > >
    > > netperf -t TCP_STREAM -H 10.0.0.100 -c -C -- -s 1M -S 1M -m 64K


    > Thanks Rick. I did try that. Output attached below. If I read them
    > correctly (I might be totally off) they are talking at about 1 Gbps
    > and this is irrespective of the mode of bonding chosen. Even a link
    > disabled gets me the same numbers (assuming "Throughput" is the
    > column to look at). What I was expecting was numbers closer to 2 Gbps.


    > What's happening here? Any more comments / ideas?


    The only mode that even begins to operate in a way that _could_ get
    you 2 Gbps for a single flow is balance-rr, and then one is still
    constrained by what the switch is going to do for inbound.

    > Oh, also which is the cpu utilisation figure again? I'm not sure I see
    > that.


    The system-wide quantity of CPU used during the test. The last four
    columns of the output below are local and remote CPU util and local
    and remote service demand, which is the quantity of CPU used per KB of
    data transferred by netperf.


    > [root@node02 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -c
    > -C -- -s 1M -S 1M -m 64K
    > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.100
    > (10.0.0.100) port 0 AF_INET : demo
    > Recv   Send    Send                          Utilization       Service Demand
    > Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
    > Size   Size    Size     Time     Throughput  local    remote   local   remote
    > bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

    > **********************************************
    > mode=4
    > 262142 262142  65536    10.00      940.89    0.87     1.83     0.608   1.278


    Notice how the actual socket buffer is only 256K - not that it matters
    for gigabit here, but it means the 1MB socket buffer size request got
    clipped by the current sysctl limits for net.core.rmem and wmem (IIRC
    those are the mnemonics).
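
    (If you want the bigger socket buffers, the clamp can be checked and
    raised via sysctl; I'm going from memory on the exact names, so
    double-check them:)

    $ sysctl net.core.rmem_max net.core.wmem_max
    # sysctl -w net.core.rmem_max=1048576   (and likewise for wmem_max)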


    > ***************************************************
    > mode=6
    > 262142 262142  65536    10.00      941.05    2.25     0.31     1.567   0.217

    > ******************************************
    > only 1 link up
    > 262142 262142  65536    10.00      941.08  -24.17     2.27  -16.833    1.580
    > *******************************************


    Clearly something went wrong with the CPU utilization calculation there.
    Does it happen consistently in the only-1-link-up case?

    rick jones
    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  16. Re: eth network seems way too slow

    Rick Jones wrote in news:ga9g7u$d71$1@usenet01.boi.hp.com:

    > Clearly something went wrong with the CPU utilization calculation there.
    > Does it happen consistently in the only-1-link-up case?


    No. That seems a one-off error, not consistent. The two new runs below do
    not have the negative number (is that what you meant?).

    [root@node10 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -c
    -C -- -s 1M -S 1M -m 64K
    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.100
    (10.0.0.100) port 0 AF_INET : demo
    Recv   Send    Send                          Utilization       Service Demand
    Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
    Size   Size    Size     Time     Throughput  local    remote   local   remote
    bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

    262142 262142  65536    10.00      941.17    0.32     0.27     0.224   0.187

    [root@node10 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -c
    -C -- -s 1M -S 1M -m 64K
    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.100
    (10.0.0.100) port 0 AF_INET : demo
    Recv   Send    Send                          Utilization       Service Demand
    Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
    Size   Size    Size     Time     Throughput  local    remote   local   remote
    bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

    262142 262142  65536    10.00      941.19    0.85     0.70     0.594   0.487

    --
    Rahul

  17. Re: eth network seems way too slow

    In comp.os.linux.networking Rahul wrote:
    > Rick Jones wrote in news:ga9g7u$d71$1@usenet01.boi.hp.com:


    > > Clearly something went wrong with the CPU utilization calculation there.
    > > Does it happen consistently in the only-1-link-up case?


    > No. That seems a one-off error, not consistent. The two new runs below
    > do not have the negative number (is that what you meant?).


    Yes.

    rick
    --
    Process shall set you free from the need for rational thought.
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  18. Re: eth network seems way too slow

    Rick Jones wrote in news:ga9g7u$d71$1@usenet01.boi.hp.com:

    > The only mode that even begins to operate in a way that _could_ get
    > you 2 Gbps for a single flow is balance-rr, and then one is still
    > constrained by what the switch is going to do for inbound.
    >

    Hmm... Can I open two sessions on a node and then invoke two netperf
    calls, one from each session? Those would be separate streams, right?
    Perhaps even to two different machines, each with its own instance of
    netserver. Conceptually, would you then, at least, expect the bonded vs.
    unbonded performance to differ? Now the machine has two different peers
    to talk to.
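
    (Something like this, I mean; the second peer's address is hypothetical,
    and -l just sets the test length in seconds:)

    [root@node01 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -l 30 &
    [root@node01 ~]# /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.101 -l 30 &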

    You are right, the switch could be the issue too. Which is why I tried
    both modes 4 and 6. Especially in mode=4 (802.3ad) the switch is "aware"
    of the bonding, and hence even inbound _ought_ to be 2 Gbps.

    --
    Rahul

  19. Re: eth network seems way too slow

    In comp.os.linux.networking Rahul wrote:

    > Hmm...Can I open two sessions on a node and then invoke two netperf
    > calls, one from each session? Now these are separate streams, right?
    > Perhaps to even two different machines each with its own instance of
    > netserver. Conceptually, would you then, at least, expect the
    > bonded vs unbonded performances to differ?


    If you have three systems involved and two netperfs I would expect to
    see the sum of the netperfs to be greater than a single link.

    rick jones
    --
    The glass is neither half-empty nor half-full. The glass has a leak.
    The real question is "Can it be patched?"
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  20. Re: eth network seems way too slow

    Steve Thompson wrote in
    news:alpine.LRH.0.9999.0809101654230.20117@honker.vgersoft.com:

    >> In which case my latency might be horrible but I ought to still expect
    >> throughput close to 2 Gbps post bonding, right?

    >
    > No.
    >


    Why not? I know that bandwidth and latency can in some cases be coupled.
    But from what I know of how technologies like ISDN work, bonding two (or
    more) channels (albeit of bad latency) does result in a bandwidth
    multiplication. Or not? I thought latency was something one is stuck
    with, but bandwidth is "more easily" multiplied? Especially with modern
    Linux bonding drivers.

    In any case, pings on my LAN reveal latencies of around 0.16 ms. I
    suppose that isn't terrible, is it?

    --
    Rahul
