10GB link vs 3Mb/s transfer - TCP-IP



Thread: 10GB link vs 3Mb/s transfer

  1. 10GB link vs 3Mb/s transfer

    Hello,

    Probably stupid, but I need a second opinion: in the
    company I work for we have a 10GB WAN link between
    2 cities - one in GB, the second in DE (~1500 km distance).
    Even though the link speed is specified by the operator
    as 10 GBit/s, the real transfer rates I am able to get
    are about 3 megabytes/s and nothing more. The operator
    asked us to change several network tunable parameters
    on the servers in both locations, but when that did not
    help, I was told that 'because of latency there is no
    possibility to have a higher transfer rate between those
    locations, because of the distance'.

    I cannot believe it - is this true? From my point of
    view, 10 GBit/s should give around 1200 megabytes/s of
    transfer, not only 3.. Could anybody confirm the
    operator's opinion (or mine)? What does it depend on?

    regards
    Tomasz


  2. Re: 10GB link vs 3Mb/s transfer

    Good evening Tomasz,

    > Probably stupid, but I need a second opinion: in the
    > company I work for we have a 10GB WAN link between
    > 2 cities - one in GB, the second in DE (~1500 km distance).


    What do you mean by a 10GB WAN?

    We can suppose it is Ethernet 10GBase-W, also referred to as 10G WAN PHY.

    Best regards,
    Michelot

  3. Re: 10GB link vs 3Mb/s transfer

    > What do you mean by a 10GB WAN?
    >
    > We can suppose it is Ethernet 10GBase-W, also referred to as 10G WAN PHY.
    >
    > Best regards,
    > Michelot

    That's right.

    regards
    Tomasz


  4. Re: 10GB link vs 3Mb/s transfer

    Hi Tomasz,


    serwan@gdziestam.il.pw.edu.pl wrote:
    > Probably stupid, but I need a second opinion: in the
    > company I work for we have a 10GB WAN link between
    > 2 cities - one in GB, the second in DE (~1500 km distance).
    > Even though the link speed is specified by the operator
    > as 10 GBit/s, the real transfer rates I am able to get
    > are about 3 megabytes/s and nothing more. The operator
    > asked us to change several network tunable parameters
    > on the servers in both locations, but when that did not
    > help, I was told that 'because of latency there is no
    > possibility to have a higher transfer rate between those
    > locations, because of the distance'.
    > I cannot believe it - is this true? From my point of
    > view, 10 GBit/s should give around 1200 megabytes/s of
    > transfer, not only 3.. Could anybody confirm the
    > operator's opinion (or mine)? What does it depend on?



    Yes, that is true, but it's not as bad as you think. :-)

    The thing is that this limit is PER TCP session, not a limit on the
    total bandwidth of the pipe.


    You should know that, considering the maximum throughput you can achieve
    over a TCP link, there are quite a number of issues that matter.
    The best known is the maximum bandwidth of the smallest pipe in the
    path, but there are others: round-trip time (which affects the
    TCP windowing), packet size, packets-per-second rates of intermediate
    switches and routers, influence of other traffic on the same pipe, CPU
    of the intermediate switches and routers and of the hosts sending/receiving
    the data, whether you use IPv4 or IPv6, ...

    In most cases, what has the biggest impact is the total bandwidth of the
    pipe, but there are two cases where other aspects cap the total speeds
    at an earlier point:
    - Where you have a very long delay (like over satellite-links, GPRS or
    UMTS-links, ...)
    - Where the bandwidth is so high that even a low (say 30 or 40 ms) delay
    becomes a limit, like over very-high speed, long-distance links.

    You are in the latter case.


    But there are a number of solutions for this:
    -> Split your bulk file transfer over multiple TCP sessions.
    -> Use media with a larger packet size (like Ethernet jumbo frames or IP
    jumbo packets), ATM, FDDI, ...
    -> Switch to a UDP-based protocol with less two-way communication.
    (There are boxes which can do this; they translate TCP into a UDP-based
    protocol. They are quite common on satellite links.)
    -> Also, switching to IPv6 should help in theory (as it requires less
    processing per packet in the intermediate routers), but practice has
    shown quite a lot of routers do not process IPv6 at the full rate of the
    line-card, while they do switch IPv4 at full speed.

    But the most important issue is the application you plan to use on the link.
    If you run (say) a file server with a lot of interaction in it, you
    cannot expect it to run at the same speed as it does locally on a LAN.

    Hope this helps.

    > regards
    > Tomasz



    Cheerio! Kr. Bonne.


  6. Re: 10GB link vs 3Mb/s transfer

    Bonsoir Tomasz,

    Certainly Kristoff gives some elements from the upper layer.

    The delay between the two UNI interfaces, for 1500 km, is around 10
    ms (1500 km x 5 µs/km + node delays); that is not very large. It can be more.
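    As a quick check on that figure (5 µs/km is the usual rule of thumb
    for propagation in fibre; queueing in the nodes comes on top):

```python
# One-way propagation delay over ~1500 km of fibre at ~5 µs per km.
distance_km = 1500
seconds_per_km = 5e-6                  # rule-of-thumb fibre propagation

one_way = distance_km * seconds_per_km # 0.0075 s
round_trip = 2 * one_way               # 0.015 s

print(f"one-way:    {one_way * 1000:.1f} ms")
print(f"round trip: {round_trip * 1000:.1f} ms")
```

    That ~15 ms round trip, plus node delays, is consistent with the
    15-25 ms latency quoted later in the thread.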

    If you have pure Ethernet 10GBase-W, that is STM-64, you can run into
    this issue: the clock accuracy of an STM-64 coming from 10GBase-W
    (±20 ppm) is not the accuracy of an STM-64 coming from SDH (±4.6 ppm).
    So you can have some slips.

    How is the rate measured, at the application level, above TCP?

    Best regards,
    Michelot

  7. Re: 10GB link vs 3Mb/s transfer

    serwan@gdziestam.il.pw.edu.pl wrote:
    > Probably stupid, but I need a second opinion: in the company I am
    > working in we have a 10GB WAN link between 2 cities - one in GB,
    > the second in DE (~1500 km distance). Even though the link speed is
    > specified by the operator as 10 GBit/s, the real transfer rates I
    > am able to get are about 3 megabytes/s and nothing more. The
    > operator asked us to change several network tunable parameters on
    > the servers in both locations, but when that did not help, I was
    > told that 'because of latency there is no possibility to have a
    > higher transfer rate between those locations, because of the distance'


    > I cannot believe it - is this true? From my point of view, 10 GBit/s
    > should give around 1200 megabytes/s of transfer, not only 3.. Could
    > anybody confirm the operator's opinion (or mine)? What does it depend on?


    For a bulk transfer over TCP - just shoving bytes from one side to the other, the simple description is:

    Throughput <= WindowSize/RoundTripTime

    TCP can send at most one window's-worth of data before it must stop
    and wait for an ACKnowledgement from the remote, and the soonest that
    can arrive is one RoundTripTime (RTT).

    So, plug in some values, keep the units straight, and you can
    figure out the minimum window size required to get link rate on your
    WAN link.

    Or you can just try larger and larger values in something like:

    netperf -H <remotehost> -l 30 -- -s <size> -S <size> -m 64K

    until you see a peak.
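    To make the "plug in some values" step concrete, here is a sketch of
    the bandwidth-delay-product calculation (the 10 Gbit/s rate and 25 ms
    RTT are the figures assumed elsewhere in this thread):

```python
# Minimum TCP window to run at link rate: window >= throughput * RTT
# (the bandwidth-delay product).

def min_window_bytes(link_bits_per_s, rtt_s):
    return link_bits_per_s / 8 * rtt_s

print(f"{min_window_bytes(10e9, 0.025) / 1e6:.2f} MB")   # prints 31.25 MB
```

    Conversely, the 16-64 KB default windows many stacks ship with can
    only sustain single-digit MB/s over a 25 ms round trip.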

    rick jones
    --
    oxymoron n, Hummer H2 with California Save Our Coasts and Oceans plates
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  8. Re: 10GB link vs 3Mb/s transfer

    On Sep 15, 7:28 am, ser...@gdziestam.il.pw.edu.pl wrote:

    > I cannot believe it - is this true? From my point of
    > view, 10 GBit/s should give around 1200 megabytes/s of
    > transfer, not only 3.. Could anybody confirm the
    > operator's opinion (or mine)? What does it depend on?


    Are you using TCP? What is the window size? What is the round-trip-
    time?

    DS

  9. Re: 10GB link vs 3Mb/s transfer

    On Mon, 15 Sep 2008 14:28:52 +0000, serwan wrote:

    > Hello,
    >
    > Probably stupid, but I need a second opinion: in the company I am
    > working in we have a 10GB WAN link between 2 cities - one in GB, the
    > second in DE (~1500 km distance). Even though the link speed is
    > specified by the operator as 10 GBit/s, the real transfer rates I am
    > able to get are about 3 megabytes/s and nothing more. The operator
    > asked us to change several network tunable parameters on the servers
    > in both locations, but when that did not help, I was told that
    > 'because of latency there is no possibility to have a higher transfer
    > rate between those locations, because of the distance'
    >
    > I cannot believe it - is this true? From my point of view, 10 GBit/s
    > should give around 1200 megabytes/s of transfer, not only 3.. Could
    > anybody confirm the operator's opinion (or mine)? What does it depend on?


    Another point no one has made, and probably irrelevant, is that the
    protocol used can have a very high impact. On one end of the spectrum,
    real streaming protocols like ftp or scp are able to use the full
    bandwidth of a TCP session[1], but, for instance, Windows SMB/CIFS is
    notorious for decreased performance once latency comes into play.
    Another notorious one is older versions of OracleForms (particularly
    version 6)[2]. And many (most?) home-grown protocols also fall into
    this category.

    Those "bad" protocols are also called ping-pong protocols, because they
    have to wait for the pong before they can send another ping.

    If you are affected by this, there may be solutions. There are a number
    of vendors that sell "accelerators" for these protocols. Not all of them
    actually work, and some have side effects, so be sure to test them
    thoroughly.

    M4

    [1] Which, as others have noted, can still be much lower than your
    available bandwidth.

    [2] I saw an implementation where there was a 30ms RTT, every request
    packet was in fact trivial ("please send next part"), and every answer
    packet was 266 bytes, containing only something like 80 bytes of real
    data. Performance was horrible to non-existent.
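    Using footnote [2]'s numbers, the ping-pong ceiling is easy to work
    out (a sketch of the arithmetic, nothing more):

```python
# A strict request/response protocol moves at most one answer packet
# per round trip, no matter how fast the link is.

def pingpong_rate(bytes_per_round, rtt_s):
    """Throughput ceiling in bytes/s with one outstanding request."""
    return bytes_per_round / rtt_s

rtt = 0.030       # 30 ms RTT, as in footnote [2]
answer = 266      # bytes per answer packet
payload = 80      # useful bytes within each answer

print(f"on the wire: {pingpong_rate(answer, rtt):.0f} bytes/s")
print(f"useful data: {pingpong_rate(payload, rtt):.0f} bytes/s")
```

    Under 9 KB/s on the wire, and under 3 KB/s of useful data, over a
    link that could in principle carry over a gigabyte per second.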

  10. Re: 10GB link vs 3Mb/s transfer

    Hello,

    Thanks for the help, all who replied. Maybe I can give
    you more details:

    - I am not a network guru; I work with AIX.

    - We have machines in DE and in GB - and a 10 gigabit/s
    link between them, doing 3 megabytes/s only (NFS - which
    uses UDP? - was slightly faster).

    - The network guy we have to work with told me that
    there is no way to get more than 3 MB/s; he explains it
    with latency, and has mentioned that the latency we have
    is 15-25 milliseconds; is this a possible reason for
    such a slow transfer?

    - From my side I have tried to change some TCP/IP
    tunable parameters according to IBM's manuals -
    unfortunately without any improvement.

    - The questions I am looking for answers to are:

    1. Is it possible to get more out of this link, or not?
    2. If it is possible, where should I look for
    improvement? Network devices? TCP parameters? If
    possible, please give me some hints; I will check them
    all.
    3. Most important - if it is possible to get more, how
    do I prove it / how do I test or check where the
    bottlenecks are? Any tools to test with? Advice please.

    Thank you for help & regards
    Tomasz


  11. Re: 10GB link vs 3Mb/s transfer

    Hello,

    Connect some MS Windows boxes to the network if you want to do a speed
    test with my tool:

    http://www.myjavaserver.com/~skybuck/

    (Scroll down on the left, then take a look at UDP Speed Test 3.)

    This tool will blast packets over the link to the other side, so you
    can test the link speed.

    You'll need somebody on the other side to help you, though.

    In the near future you might even do it yourself thanks to a new
    icmp-related feature; it's kinda crude, but it could give one an idea.
    (I could even develop a new tool which is much easier and more
    user-friendly and doesn't need any other computer to test... just a
    network, that's all.)

    Anyway, as I said, the tool will give you an idea of the raw link
    speed.

    Anything else is just a protocol issue.

    I also have a UDP file transfer tool... but it's very CPU-heavy... so
    it's bottlenecked by the CPU. (It does lots of integrity checking and
    such.)

    It's also on the website.

    I also made a light version of it... but I never released it, because
    I like integrity checks.

    Releasing file transfer tools without integrity checks seems
    irresponsible to me.

    I also have a 64-bit file transfer version of it, but it's also not
    released yet, because I want to work on it some more to improve some
    things, like a cancel option or so.

    I am also planning to do some major work on the tool in the coming
    days, weeks and months =D

    Bye,
    Skybuck.



  12. Re: 10GB link vs 3Mb/s transfer

    serwan@gdziestamna.il.pw.edu.pl wrote:
    > - I am not network guru, work with AIX


    netperf should compile and run under AIX.

    > - We have machines in DE and in GB - and a 10 gigabit/s link between
    > them, doing 3 megabytes/s only (NFS - using UDP? - was slightly faster)


    3 MB/s using _what_ *exactly* for the transfer?

    > - Network guy we need to work with told me that there is no way to
    > have more than 3mb/s; he is explaining it with latency, and has
    > mentioned that latency we have is 15-25 milisec; is this possible
    > reason for such a slow transfer?


    Unless there is some traffic shaping the network guy is failing to
    mention, or there is something about your transfer utility we don't
    know (like its name and such...), IMO he is incorrect. It is possible
    to achieve greater than a 3MB/s transfer rate over a link with 25ms
    round-trip latency.

    Again:

    Throughput <= WindowSize/RTT

    So, if you have 3MB/s and an RTT of 25ms:

    3MB/s <= WindowSize/25ms

    make the units consistent:

    3MB/s <= WindowSize/0.025s

    3MB/s * 0.025s <= WindowSize

    WindowSize ~= 77KB

    > - From my side I have tried to change some tunable parameters of
    > tcpip according to IBM's manuals - unfortunately without any
    > improvement.


    _Which_ tunable parameters? You really need to be much more specific:
    names and values.

    > - Question I am looking for answers are:


    > 1. Is it possible to have more on this link, or not?


    In theory it should be.

    > 2. If it is possible, where to look for way of improvement? network
    > devices? tcp parameters? if possible, give me please some hints,
    > will check them all


    In the formula above, that WindowSize should really be described as
    "EffectiveWindowSize" which will be the smaller of three things:

    a) the classic TCP window advertised by the receiver
    b) the sender's SO_SNDBUF size
    c) the congestion window

    We've already discussed "a." The explanation for "b" is that the
    sender must retain a reference to the data sent until it is ACKed by
    the receiver lest it need to be retransmitted. The most it can track
    is SO_SNDBUF (the send socket buffer size). For "c" that is something
    computed by the sending TCP based on data which is sent and whether or
    not there have been retransmissions. So, you should also be checking
    your TCP statistics with netstat to see if there are any
    retransmissions - take a snapshot before your transfer, and one after
    and subtract one from the other. You can use "beforeafter" from:

    ftp://ftp.cup.hp.com/dist/networking/tools/

    to do the subtraction of two snapshots of netstat output. It should
    compile and run under just about any Unix.
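    If beforeafter isn't handy, the subtraction it performs can be
    approximated in a few lines. This is only a sketch: real `netstat -s`
    output differs across OSes, and the parser below assumes simple
    "count description" lines:

```python
import re

def parse_counters(text):
    """Map each 'N description' line of netstat -s style output to N."""
    counters = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(.+)", line)
        if m:
            counters[m.group(2).strip()] = int(m.group(1))
    return counters

def delta(before, after):
    """Return only the counters that changed between two snapshots."""
    b, a = parse_counters(before), parse_counters(after)
    return {k: v - b.get(k, 0) for k, v in a.items() if v != b.get(k, 0)}

# Hypothetical snapshots taken before and after a transfer:
before = "  1000 segments sent out\n  2 segments retransmitted\n"
after  = "  5000 segments sent out\n  2 segments retransmitted\n"

print(delta(before, after))   # prints {'segments sent out': 4000}
```

    An unchanged retransmission counter over the transfer is the healthy
    case; if it climbs, the congestion window is likely what is capping
    the throughput.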

    > 3. Most important - if it is possible to have more, how to prove it
    > / how to test or check where are bottlenecks? some tools to test?
    > advice pls.


    Netperf TCP_STREAM test, altering the socket buffer sizes with the -s
    and -S options. All while checking statistics with netstat.

    http://www.netperf.org/

    http://www.netperf.org/svn/netperf2/...c/netperf.html

    is the "manual" for the latest released version, the source for which
    should be at:

    ftp://ftp.netperf.org/

    Netperf should compile and run under just about any *nix as well as
    Windows.

    If you see netperf getting more, then we can start looking into the
    specifics of your actual application(s).

    rick jones

    FWIW, here is an example of achieving > 3MB/s with latency higher than 25ms:

    raj@tardy:~/netperf2_trunk$ src/netperf -H lart.fc.hp.com -f M -- -m 64K

    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lart.fc.hp.com (15.11.146.31) port 0 AF_INET
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    MBytes/sec

    NNNNN  MMMMM   65536    10.14    8.61
    raj@tardy:~/netperf2_trunk$ ping lart.fc.hp.com
    PING lart.fc.hp.com (15.11.146.31) 56(84) bytes of data.
    64 bytes from lart.fc.hp.com (15.11.146.31): icmp_seq=1 ttl=56 time=35.7 ms
    64 bytes from lart.fc.hp.com (15.11.146.31): icmp_seq=2 ttl=56 time=35.4 ms
    64 bytes from lart.fc.hp.com (15.11.146.31): icmp_seq=3 ttl=56 time=35.6 ms

    --- lart.fc.hp.com ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1999ms
    rtt min/avg/max/mdev = 35.498/35.639/35.745/0.241 ms

    Now, you should completely ignore the socket buffer sizes reported
    above - that is why I blanked them out - there are peculiarities of
    the Linux TCP stack at play - it was "autotuning" the socket buffer
    sizes and what was reported were only the starting sizes, not the
    ending sizes. I also didn't have permissions on one of the systems to
    tweak the sysctl settings to allow an explicit socket buffer size
    setting of sufficient size.

    That should not be an issue for you with AIX.

    A later version of netperf has a set of tests to report the requested,
    initial and final socket buffer sizes.

    rick jones
    --
    a wide gulf separates "what if" from "if only"
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...




  13. Re: 10GB link vs 3Mb/s transfer

    On 2008-09-15 10:28:52 -0400, serwan@gdziestam.il.pw.edu.pl said:

    > I cannot believe it - is this true? From my point of
    > view, 10 GBit/s should give around 1200 megabytes/s of
    > transfer, not only 3.. Could anybody confirm the
    > operator's opinion (or mine)? What does it depend on?


    If you're using a Linux-based server with a 2.6-flavor kernel (check
    your "uname -a"), then look at the output of "sysctl -a | grep
    congestion"; you might see output like the following:

    net.ipv4.tcp_congestion_control = reno
    net.ipv4.tcp_available_congestion_control = reno
    net.ipv4.tcp_allowed_congestion_control = reno

    Later Linux kernel versions include a feature to change the TCP/IP
    stack "personality" - basically, changing the mathematics and
    ultimately the queueing / windowing characteristics of the TCP/IP stack
    on the host. A full discussion of this is beyond this post. You might
    want to try H-TCP or BIC-TCP (set it in /etc/sysctl.conf, and reboot
    just to be complete) as a replacement for Reno or NewReno. The two
    personalities I mentioned were really developed for high-speed LAN
    links; I can't comment on how they'd fare across 10ms-20ms RTT links,
    but it doesn't immediately strike me as a problem.

    Take a quick trace of a connection and see if TCP window scaling is on
    from the first few packets (the TCP handshake) of your data-passing
    socket connection. Under Linux, check this:

    sysctl -a |grep window
    net.ipv4.tcp_window_scaling = 1

    A value of "1" means the "feature" is on.

    If you have the flexibility to tweak this, and to test like other
    posters have suggested, you might like some of the results. For optimal
    performance, it's best if both ends are using the same TCP/IP
    "personality" - a Windows machine with a classic Reno or NewReno TCP/IP
    stack talking to a Linux host with an H-TCP or BIC-TCP implementation
    might not do so well.

    /dmfh

    --
    _ __ _
    __| |_ __ / _| |_ 01100100 01101101
    / _` | ' \| _| ' \ 01100110 01101000
    \__,_|_|_|_|_| |_||_| dmfh(-2)dmfh.cx


  14. Re: 10GB link vs 3Mb/s transfer

    >> - We have machines in DE and in GB - and a 10 gigabit/s link between
    >> them, doing 3 megabytes/s only (NFS - using UDP? - was slightly faster)


    >3 MB/s using _what_ *exactly* for the transfer?


    not sure if I understood your question - just trying to answer:
    - ftp - up to 3 MB/s
    - scp - up to 3 MB/s
    - nfs - once achieved up to 7 MB/s, usually ~4.5 MB/s

    >Unless there is some traffic shaping the network guy is failing to
    >mention, or there is something about your transfer utility we don't
    >know (like its name and such...), IMO he is incorrect. It is possible
    >to achieve greater than a 3MB/s transfer rate over a link with 25ms
    >round-trip latency.


    there is no traffic shaping, for sure; the only thing I thought
    about was that maybe the routers' configuration between the locations
    may be the reason for the slow transfer - but I guess it should not be

    >> - From my side I have tried to change some tunable parameters of
    >> tcpip according to IBM's manuals - unfortunately without any
    >> improvement.


    >_Which_ tunable parameters? You really need to be much more specific:
    >names and values.


    And this is interesting; in IBM's manuals there are several parameters
    mentioned; I have tested them, as described here:
    http://publib.boulder.ibm.com/infoce...dex.jsp?topic=
    /com.ibm.aix.prftungd/doc/prftungd/tcp_udp_perf_tuning.htm
    - tcp_sendspace - values from 16384 up to 655360 (no results)
    - tcp_recvspace - values from 16384 up to 655360 (no results)
    - sb_max - values from 1048576 up to 1310720 (no results)
    and, finally, yesterday evening I also found:
    - tcp_nodelayack - I changed it from 0 to 1, and some improvement
    was observed - the average transfer was:
    -- ftp - up to 5 MB/s
    -- scp - up to 5 MB/s
    -- nfs - up to 10 MB/s
    A little better, but still - can I get more?

    right now I will try with netperf, will let you know if any news.

    thanks for help & regards
    Tomasz


  15. Re: 10GB link vs 3Mb/s transfer

    Interesting.. netperf started on 2 machines on both
    ends of the link (gb<->de) showed:

    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec

    873800 873800  873800   10.04    240.71

    Am I right in thinking that (in theory at least) it should
    be possible to achieve a transfer rate of up to 30MB/s?..
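    As a sanity check, that netperf result is consistent with the
    window/RTT bound from earlier posts, assuming the 15-25 ms latency
    mentioned before (so these caps are approximate):

```python
# 873800-byte socket buffers, bounded by window / RTT:
window = 873800                       # bytes, from the netperf output above

for rtt_ms in (15, 25):
    cap = window / (rtt_ms / 1000)    # bytes per second
    print(f"RTT {rtt_ms} ms -> cap ~{cap / 1e6:.1f} MB/s")
```

    The measured 240.71 x 10^6 bits/s is about 30 MB/s, just under the
    25 ms cap, so the socket buffers (not the link) look like the limit.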

    regards
    Tomasz


  16. Re: 10GB link vs 3Mb/s transfer

    If it is really a 10 gigabit link, the maximum speed should be close
    to 1.2 gigabytes per second.

    So you are very far off what it's supposed to be...

    So to me it seems like one of these possibilities:

    1. You got GB and Mb mixed up?

    2. You are being screwed.

    3. Your computers are too slow???

    4. Your network card/chip is too slow???

    5. The cables are limited to 10 or 100 Mbit/sec?

    I have a 1 gigabit link at home... and even with my "slow" UDP file
    transfer tool I can achieve 5 MB/sec to an old Pentium III 450 MHz...
    the latency shouldn't play too big a role, for my tool at least (with
    encryption disabled and a Buffer Size of 65000 or so on both sides).

    Anyway, what are your ping times?

    Bye,
    Skybuck.

    wrote in message
    news:gaqmb7$kqe$1@julia.coi.pw.edu.pl...
    > Interesting.. netperf started on 2 machines on both
    > ends of the link (gb<->de) showed:
    >
    > Recv   Send    Send
    > Socket Socket  Message  Elapsed
    > Size   Size    Size     Time     Throughput
    > bytes  bytes   bytes    secs.    10^6bits/sec
    >
    > 873800 873800  873800   10.04    240.71
    >
    > Am I right in thinking that (in theory at least) it should
    > be possible to achieve a transfer rate of up to 30MB/s?..
    >
    > regards
    > Tomasz
    >




  17. Re: 10GB link vs 3Mb/s transfer

    serwan@gdziestamna.il.pw.edu.pl wrote:
    > >3 MB/s using _what_ *exactly* for the transfer?


    > not sure if understood your question - just to try to answer:
    > - ftp - up to 3MB/s
    > - scp - up to 3MB/s
    > - nfs - once achieved up to 7 MB/s, usually ~4,5MB/s


    That is what I was looking for.

    > >_Which_ tunable parameters. You really need to be much more specific.
    > >Names and values.


    > And this is interesting; in IBM's manuals there are several parameters
    > mentioned; I have tested, as described here:
    > http://publib.boulder.ibm.com/infoce...dex.jsp?topic=
    > /com.ibm.aix.prftungd/doc/prftungd/tcp_udp_perf_tuning.htm
    > - tcp_sendspace - values from 16384 up to 655360 (no results)
    > - tcp_recvspace - values from 16384 up to 655360 (no results)
    > - sb_max - values from 1048576 up to 1310720 (no results)


    For a 10Gbit link with 25ms RTT you will need to be going much larger
    than those values I suspect.

    > and, finally, yesterday evening I have found also:
    > - tcp_nodelayack - have changed it from 0 to 1, and some imporvement
    > was observer - average transfer was:
    > -- ftp - up to 5 MB/s
    > -- scp - up to 5 MB/s
    > -- nfs - up to 10 MB/s
    > little bit better, but still - can I have more?


    That tcp_nodelayack affected ftp or scp is surprising to me. A bulk
    transfer such as that performed by FTP should not need immediate ACKs.
    As for the rest, those settings you changed are likely defaults. If
    FTP or scp are making their own setsockopt() calls to set socket
    buffer sizes, it stands to reason that a change in the defaults would
    not result in a change in performance. Ostensibly, if there is a way
    to get FTP to use a different socket buffer size (on _both_ ends) it
    would appear in the manpages for ftp and ftpd. Taking a system call
    trace of the ftp client (or ftpd) would show if it is making
    setsockopt() calls.

    NFS may be using the defaults.
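    For illustration, the setsockopt() calls in question look like the
    following sketch (generic sockets code, not AIX ftp's actual
    implementation; the 1 MiB figure is an arbitrary example):

```python
import socket

# Request large socket buffers *before* connecting; TCP sizes its
# window (and chooses window scaling) from the buffers in effect
# when the connection is established.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)  # ask for 1 MiB
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)

# The OS may round, double, or clamp the request, so read back what
# was actually granted:
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```

    A system call trace of the ftp client (truss on AIX, strace on Linux)
    would show whether it issues calls like these, which would override
    the system-wide defaults.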

    rick jones
    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  18. Re: 10GB link vs 3Mb/s transfer

    serwan@gdziestamna.il.pw.edu.pl wrote:
    > Interesting.. netperf started on 2 machines on both
    > ends of the link (gb<->de) showed:


    > Recv   Send    Send
    > Socket Socket  Message  Elapsed
    > Size   Size    Size     Time     Throughput
    > bytes  bytes   bytes    secs.    10^6bits/sec
    >
    > 873800 873800  873800   10.04    240.71


    There is a -f option to change the reporting units if you like:

    netperf -f M ...

    will report in power-of-two megabytes per second.

    > Am I right in thinking that (in theory at least) it should be
    > possible to achieve a transfer rate of up to 30MB/s?..


    In theory, over a 10Gbit/s link it should be possible to achieve a
    transfer rate of up to 100MB/s. If we assume a 1514 byte frame size
    on the link, and a 1448 effective MSS (not 1460 since timestamps
    _better_ be enabled) and then further ass-u-me the link is
    full-duplex, "speed of light" for TCP would be ~10Gbit/s*(1448/1514).
    That works out to something like 9.5 Gbit/s. Given I rarely see more
    than 940Mbit/s on a 1Gbit/s link I'll call it 9.4 Gbit/s.

    With a 25ms RTT that requires a very large TCP window though:

    9.4Gbit/s <= Window/RTT
    9.4Gbit/s <= Window/0.025s
    1.12GB/s < Window/0.025s
    Window ~= 30MB (28 and change but I rounded it up)

    (b => power-of-ten bits; B => power-of-two bytes)

    Once you have that large a window (assuming it is possible) then we
    probably need to start talking about CPU consumption on either end,
    and the various offloads enabled or not on the NICs.

    Scp has its own "window" as well, I'm not sure if the high-speed
    network patches for it are in the AIX scp, and for that matter if they
    allow a window to get anywhere near 30MB.

    NFS over TCP will need such a large window. It is also a
    request/response protocol rather than FTP's "blast it all out"
    behaviour. So, that means that there would need to be enough
    read-ahead or write-behind to have 30MB of data outstanding at one
    time.

    rick jones
    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  19. Re: 10GB link vs 3Mb/s transfer

    Digital Mercenary For Honor wrote:
    > If you're using a Linux-based server with a 2.6 flavor of kernel


    IIRC, the OP stated he is using AIX.

    > (check your "uname -a") on your system, and also the output of a
    > "sysctl -a | grep personality", and you might see output like the
    > following:


    > net.ipv4.tcp_congestion_control = reno
    > net.ipv4.tcp_available_congestion_control = reno
    > net.ipv4.tcp_allowed_congestion_control = reno


    > Later Linux kernel versions include a feature to change the TCP/IP
    > stack "personality" - basically, changing the mathematics and
    > ultimately the queueing / windowing characteristics of the TCP/IP
    > stack on the host. A full discussion of this is beyond this post,
    > but this link is a good start. You might want to try HiTCP or BICTCP
    > (set it in /etc/sysctl.conf, and reboot just to be complete) as a
    > replacement for Reno or NewReno. The two personalities I mentioned
    > were really developed for high-speed LAN links, I can't comment how
    > they'd fare across 10ms-20ms RTT links, but it doesn't immediately
    > strike me as a problem.


    Ignoring the specifics of the OP running AIX vs Linux for a moment,
    would the congestion control selection matter much if there were no
    packet losses? We are still awaiting the OP's "netstat report".

    rick jones
    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  20. Re: 10GB link vs 3Mb/s transfer

    Rick Jones wrote:
    > In theory, over a 10Gbit/s link it should be possible to achieve a
    > transfer rate of up to 100MB/s. If we assume a 1514 byte frame size


    Sigh - mixing my links. 1GB/s and change...

    rick jones
    --
    web2.0 n, the dot.com reunion tour...
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
