Is "network" (big endian) byte order beneficial for wire protocols? - TCP-IP



Thread: Is "network" (big endian) byte order beneficial for wire protocols?

  1. Is "network" (big endian) byte order beneficial for wire protocols?

    I am wondering, (since most machines on the internet are little endian and I
    know that little endian byte order on a machine seems "natural" when
    programming), if there is any benefit to the big endianness of tcp/ip (?).

    The follow up question may be why IPv6 doesn't have an endianness flag in
    the IP frames (?). I think little endian is the way to go but if there has
    to be both, then accommodating both simply with the flag is a good
    compromise. I mean, why bother end nodes (more numerous than routers) with
    endianness all of the time?

    Tony



  2. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article , Tony wrote:

    >I am wondering, (since most machines on the internet are little endian


    Until fairly recently that has been true only given the controversial
    claim that a Wintel box could be "on the Internet." Before Microsoft
    systems got more or less reasonable TCP implementations, knowledgeable
    people would insist on caveats for claims such boxes could be "on
    the Internet."

    > and I
    >know that little endian byte order on a machine seems "natural" when
    >programming),


    Both that statement and the obverse about big endian being more
    natural are true only in circumstances that are now quite unusual
    or when one knows less than one realizes about programming.


    > if there is any benefit to the big endianness of tcp/ip (?).


    Google is your friend. In other words, see
    http://www.google.com/search?q=big+little+endian
    http://www.rdrop.com/~cary/html/endian_faq.html
    http://en.wikipedia.org/wiki/Endianness


    >The follow up question may be why IPv6 doesn't have an endianness flag in
    >the IP frames (?). I think little endian is the way to go but if there has
    >to be both, then accommodating both simply with the flag is a good
    >compromise. I mean, why bother end nodes (more numerous than routers) with
    >endianness all of the time?


    There have been protocols that had endian flags. Such flags are mistakes
    and always reliable signs that technical considerations have been
    abandoned in favor of politics. I'm thinking of XTP as well as the
    Apollo (later HP) receiver-makes-it-wrong style of RPC.

    Your application can use any byte order or other data encoding that
    it prefers for its data. The byte order of the TCP, IP, UDP, ICMP,
    PPP, and other IETF headers is irrelevant to your application
    except in rare cases such as when you need to manipulate addresses
    and netmasks or compare port numbers.

    The operating system and router code that cares about byte order
    is easy to write with ntohl() and similar functions. Besides, even that code
    rarely cares about byte order. For example, whether the address
    10.0.0.1 is odd or even rarely matters except when computing port
    numbers, finding longest matching prefixes, etc.



    ] Lack of agreement on endianness has hindered the industry greatly.

    It is less inaccurate to say that lack of agreement on endianness
    has discouraged people without the requisite interests for writing
    code and so helped the industry greatly.

    > Little
    >endian is the way to go and the internet should be switched over to little
    >endian before we turn the clocks away from daylight savings time.


    Oh, so it was all a joke.


    Vernon Schryver vjs@rhyolite.com

  3. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On 2008-11-01, Tony wrote:
    > I am wondering, (since most machines on the internet are little endian and I
    > know that little endian byte order on a machine seems "natural" when
    > programming), if there is any benefit to the big endianness of tcp/ip (?).


    Historically it isn't disputable that big endian machines dominated
    the internet. It made sense then and it makes sense now given the
    huge hassle of changing for a marginal benefit on _some_ machines
    on the internet. If you think that either arrangement is more
    'logical' than the other then you must be missing something. If
    that were the case, that arrangement would have been universally
    adopted - it isn't some conspiracy to deliberately obfuscate things.
    The PDP-11's 2143 arrangement didn't gain traction for precisely
    that reason.

    > The follow up question may be why IPv6 doesn't have an endianness flag in
    > the IP frames (?). I think little endian is the way to go but if there has
    > to be both, then accommodating both simply with the flag is a good
    > compromise. I mean, why bother end nodes (more numerous than routers) with
    > endianness all of the time?


    It's been tried in the past and, put simply, it isn't a good idea -
    it creates rather than removes complexity. Instead of always or
    never rearranging byte order as appropriate, you have to make a
    decision first and then, regardless of your machine's byte order,
    be prepared to do a conversion anyway. In essence you are saving
    nothing and introducing an extra step for no good reason. Devices
    working at wire speed or faster (routers etc) are generally
    implemented using network byte order as their native endianness.
    That way they can use the values directly, with no need
    for any transformation - this approach would be impossible if the
    byte order could vary for every single packet.

    --
    Andrew Smallshaw
    andrews@sdf.lonestar.org

  4. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Vernon Schryver" wrote in message
    news:gegjid$iqe$1@calcite.rhyolite.com...
    > In article , Tony
    > wrote:
    >
    >>I am wondering, (since most machines on the internet are little endian

    >
    > Until fairly recently that has been true only given the controversial
    > claim that a Wintel box could be "on the Internet." Before Microsoft
    > systems got more or less reasonable TCP implementations, knowledgeable
    > people would insist on caveats for claims such boxes could be "on
    > the Internet."
    >
    >> and
    >> I
    >>know that little endian byte order on a machine seems "natural" when
    >>programming),

    >
    > Both that statement and the obverse about big endian being more
    > natural are true only in circumstances that are now quite unusual
    > or when one knows less than one realizes about programming.


    Endianness concerns have hampered "easy programming".

    >
    >
    >> if there is any benefit to the big endianness of tcp/ip (?).

    >
    > Google is your friend. In other words, see
    > http://www.google.com/search?q=big+little+endian
    > http://www.rdrop.com/~cary/html/endian_faq.html
    > http://en.wikipedia.org/wiki/Endianness
    >


    I prefer little. Big is brain-damaged.

    >
    >>The follow up question may be why IPv6 doesn't have an endianness flag in
    >>the IP frames (?). I think little endian is the way to go but if there has
    >>to be both, then accommodating both simply with the flag is a good
    >>compromise. I mean, why bother end nodes (more numerous than routers) with
    >>endianness all of the time?

    >
    > There have been protocols that had endian flags. Such flags are mistakes
    > and always reliable signs that technical considerations have been
    > abandoned in favor of politics.


    I suggest that the opposite is true.

    > I'm thinking of XTP as well as the
    > Apollo (later HP) receiver-makes-it-wrong style of RPC.
    >
    > Your application can use any byte order or other data encoding that
    > it prefers for its data. The byte order of the TCP, IP, UDP, ICMP,
    > PPP, and other IETF headers is irrelevant to your application
    > except in rare cases such as when you need to manipulate addresses
    > and netmasks or compare port numbers.


    I know that. The whole htonl etc. stuff is totally unnecessary (read:
    would be, had the industry "gotten on the same page"). It's a political
    nightmare.

    >
    > The operating system and router code that cares about byte order
    > is easy with ntohl() and similar functions. Besides even that code
    > rarely cares about byte order. For example, whether the address
    > 10.0.0.1 is odd or even rarely matters except when computing port
    > numbers, finding longest matching prefixes, etc.
    >
    >
    >
    > ] Lack of agreement on endianness has hindered the industry greatly.
    >
    > It is less inaccurate to say that lack of agreement on endianness
    > has discouraged people without the requisite interests for writing
    > code and so helped the industry greatly.


    So you propose to make it harder than necessary to program. I know the
    industry loves complexity (the big guns thrive on that alone).

    >
    >> Little
    >>endian is the way to go and the internet should be switched over to little
    >>endian before we turn the clocks away from daylight savings time.

    >
    > Oh, so it was all a joke.


    I am totally serious, joe.

    Tony



  5. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Andrew Smallshaw" wrote in message
    news:slrnggovu4.hmm.andrews@sdf.lonestar.org...
    > On 2008-11-01, Tony wrote:
    >> I am wondering, (since most machines on the internet are little endian
    >> and I
    >> know that little endian byte order on a machine seems "natural" when
    >> programming), if there is any benefit to the big endianness of tcp/ip
    >> (?).

    >
    > Historically it isn't disputable that big endian machines dominated
    > the internet. It made sense then and it makes sense now given the
    > huge hassle of changing for a marginal benefit on _some_ machines
    > on the internet. If you think that either arrangement is more
    > 'logical' than the other then you must be missing something.


    I'm missing "clean code" because the chaff called "htonl" etc. is the
    industry's "solution" which is no solution at all. Lame. If there was one
    guy to blame for it, I'd fire him.

    > If
    > that was the case that arrangement would have been universally
    > adopted - it isn't some conspiracy to deliberately obfuscate things.


    It's failure of the industry to agree on foundational stuff which makes it
    less than elegant.

    > The PDP-11's 2143 arrangement didn't gain traction for precisely
    > that reason.
    >
    >> The follow up question may be why IPv6 doesn't have an endianness flag in
    >> the IP frames (?). I think little endian is the way to go but if there
    >> has
    >> to be both, then accommodating both simply with the flag is a good
    >> compromise. I mean, why bother end nodes (more numerous than routers)
    >> with
    >> endianness all of the time?

    >
    > It's been tried in the past and, put simply, it isn't a good idea -
    > it creates rather than removes complexity.


    Oh? do tell... I'll read on..

    > Instead of always or
    > never rearranging byte order as appropriate, you have to make a
    > decision first and then, regardless of your machine's byte order,
    > be prepared to do a conversion anyway.


    Or reject the incoming with a reply saying: "Get with the f'm program you
    big endian lamer!".

    > In essence you are saving
    > nothing and introducing an extra step for no good reason.


    I was suggesting that over time the big endian machines and routers would
    just go away (as in, good riddance).

    >Devices
    > working at wire speed or faster (routers etc) are generally
    > implemented using network byte order as their native endianness.


    So much for lame hardware without upgradeable firmware.

    > That way they can consider the values present directly with no need
    > for any transformation - this approach would be impossible if the
    > byte order could vary for every single packet.


    It's a simple test of a big/little flag, the purpose of which is to migrate
    to one agreed upon endianness (little, of course!).

    Tony



  6. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Nov 1, 4:23 pm, "Tony" wrote:

    > I prefer little. Big is brain-damaged.


    This discussion comes up from time to time. Fortunately, RFC 791
    states:

    ----------------
    APPENDIX B: Data Transmission Order

    The order of transmission of the header and data described in this
    document is resolved to the octet level. Whenever a diagram shows a
    group of octets, the order of transmission of those octets is the
    normal order in which they are read in English.

    [ ... ]

    Whenever an octet represents a numeric quantity the left most bit in
    the diagram is the high order or most significant bit. That is, the
    bit labeled 0 is the most significant bit.

    [ ... ]

    Similarly, whenever a multi-octet field represents a numeric quantity
    the left most bit of the whole field is the most significant bit.
    When a multi-octet quantity is transmitted the most significant octet
    is transmitted first.
    --------------------------

    So, that settles it for IPv4. All IP headers work this way, all the
    way up the stack from layer 3. This includes UDP, TCP, RTP, HTTP, etc.
    etc.

    Unfortunately, a similarly clear and unequivocal statement does not
    appear in RFC 2460, for IPv6, although that's not because the IETF is
    having second thoughts. That's because there is general consensus that
    this is the only way to go.

    By the way, you'll note that whether a number is sent as a binary
    multi-byte field of *any* length, or whether the number is sent as
    ASCII characters, big endian remains consistent. The most significant
    binary byte, or the most significant ASCII numeral, is sent first
    always.

    Bert

  7. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On 2008-11-01, Tony wrote:
    >
    > "Andrew Smallshaw" wrote in message
    > news:slrnggovu4.hmm.andrews@sdf.lonestar.org...
    >> On 2008-11-01, Tony wrote:
    >>> I am wondering, (since most machines on the internet are little endian
    >>> and I
    >>> know that little endian byte order on a machine seems "natural" when
    >>> programming), if there is any benefit to the big endianness of tcp/ip
    >>> (?).

    >>
    >> Historically it isn't disputable that big endian machines dominated
    >> the internet. It made sense then and it makes sense now given the
    >> huge hassle of changing for a marginal benefit on _some_ machines
    >> on the internet. If you think that either arrangement is more
    >> 'logical' than the other then you must be missing something.

    >
    > I'm missing "clean code" because the chaff called "htonl" etc. is the
    > industry's "solution" which is no solution at all. Lame. If there was one
    > guy to blame for it, I'd fire him.


    If you think the likes of htonl represent 'dirty' programming then
    you have a lot to learn about programming. In any case, going
    little endian would not eliminate these functions - at present they
    are used on big as well as little endian machines (typically
    implemented as macros that return their parameter). There is no
    reason why that would change. Finally, in any discussion about
    clean programming it is laughable to use x86 as an example of good
    practice.

    >> Instead of always or
    >> never rearranging byte order as appropriate, you have to make a
    >> decision first and then, regardless of your machine's byte order,
    >> be prepared to do a conversion anyway.

    >
    > Or reject the incoming with a reply saying: "Get with the f'm program you
    > big endian lamer!".


    So, let's get this straight... you want the world to spend what
    would probably amount to trillions of dollars to satisfy a personal
    whim of yours, despite the fact that you have provided no good
    reason for why that change should be made. Now you make compliance
    with the very standards that you proposed optional, thus breaking
    the entire Internet. I think this tells us all we need to know
    about your credibility when it comes to designing networking protocols.

    > I was suggesting that over time the big endian machines and routers would
    > just go away (as in, good riddance).


    You proceed from a false assumption, namely that little endian
    machines are somehow better to start with. You have spectacularly
    failed to make your case.

    >>Devices
    >> working at wire speed or faster (routers etc) are generally
    >> implemented using network byte order as their native endianness.

    >
    > So much for lame hardware without upgradeable firmware.


    You really have no idea what you are talking about, do you? Endianness
    is a fundamental decision made early on in the design of any
    processor. It isn't in general a case of simply loading up some
    new firmware. Even many bi-endian machines cannot have their
    endianness adjusted at runtime - it's just something that there is no
    provision for. Indeed, some CPU datasheets I've seen are at pains
    to point out that under no circumstances must you attempt a change
    while the processor is operating.

    >> That way they can consider the values present directly with no need
    >> for any transformation - this approach would be impossible if the
    >> byte order could vary for every single packet.

    >
    > It's a simple test of a big/little flag, the purpose of which is to migrate
    > to one agreed upon endianness (little, of course!).


    Which once again contradicts what you had already said. Networking
    protocols depend on universal agreement about what things mean and
    what is permissible, and yet you can't agree with yourself from one
    post to the next.

    --
    Andrew Smallshaw
    andrews@sdf.lonestar.org

  8. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Oct 31, 9:34 pm, "Tony" wrote:
    > I am wondering, (since most machines on the internet are little endian and I
    > know that little endian byte order on a machine seems "natural" when
    > programming), if there is any benefit to the big endianness of tcp/ip (?).



    I'd suggest that neither assumption is obvious. While most primary
    computing devices attached to the Internet are clearly little-endian
    x86 boxes, there are vast numbers of additional devices in the world,
    many of which are also connected to the Internet. Everything from
    routers, bridges, set-top boxes, printers, scanners, to phones, not to
    mention the odd refrigerator and washing machine. Many of those are
    big-endian devices. Who has the bigger absolute numbers is hard to
    say, although I'd guess that little endian systems have a small
    majority.

    As to big or little endian being more natural, that's a huge, and
    ultimately pointless, debate. Both clearly work well, and at best,
    the more "natural" one seems to be the one you were introduced to
    first.

    Next you’ll be claiming one side or the other in the “arrays should
    start at zero or one” debate.


    > The follow up question may be why IPv6 doesn't have an endianness flag in
    > the IP frames (?). I think little endian is the way to go but if there has
    > to be both, then accommodating both simply with the flag is a good
    > compromise. I mean, why bother end nodes (more numerous than routers)
    > with endianness all of the time?



    As has been pointed out, making it variable is a horrible idea.
    Dealing with variable byte order at run time is a large overhead,
    which you end up always incurring - rather than no overhead if your
    local byte order consistently matches the one for the network, or a
    small one if it does not.

  9. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Sat, 1 Nov 2008 14:38:57 -0700 (PDT), Albert Manfredi
    wrote:

    >By the way, you'll note that whether a number is sent as a binary
    >multi-byte field of *any* length, or whether the number is sent as
    >ASCII characters, big endian remains consistent. The most significant
    >binary byte, or the most significant ASCII numeral, is sent first
    >always.


    For someone who has been doing real hardcore debugging at bit and byte
    level for more than 25 years, it's just that much easier to work with
    big endian. With big endian a hex dump comes out in a way where you can
    read all kinds of values (16-bit, 32-bit, 64-bit, ...) directly. You can
    recognize patterns, pointers and numbers immediately. With little endian
    this is much, much harder.

    As we have a mixed world of little and big endian we have to use
    something like htonl() anyway. So going to little-endian would not save
    anything in programming effort.

  10. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    Andrew Smallshaw writes:

    >On 2008-11-01, Tony wrote:
    >> I am wondering, (since most machines on the internet are little endian and I
    >> know that little endian byte order on a machine seems "natural" when
    >> programming), if there is any benefit to the big endianness of tcp/ip (?).


    >Historically it isn't disputable that big endian machines dominated
    >the internet. It made sense then and it makes sense now given the
    >huge hassle of changing for a marginal benefit on _some_ machines
    >on the internet. If you think that either arrangement is more
    >'logical' than the other then you must be missing something. If
    >that was the case that arrangement would have been universally
    >adopted - it isn't some conspiracy to deliberately obfuscate things.
    >The PDP-11's 2143 arrangement didn't gain traction for precisely
    >that reason.


    Is that really true? I think they picked big endian BECAUSE they
    wrote TCP/IP on a VAX (little endian), so they were sure that the
    code identified the places where endianness needed to be converted.

    Casper
    --
    Expressed in this posting are my opinions. They are in no way related
    to opinions held by my employer, Sun Microsystems.
    Statements on Sun products included here are not gospel and may
    be fiction rather than truth.

  11. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    andrew@cucumber.demon.co.uk (Andrew Gabriel) writes:

    >They have more than enough CPU power nowadays not to care.


    Also, "receiver makes right" is often at least as
    expensive as converting immediately.

    Casper

  12. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article <490db728$0$198$e4fe514c@news.xs4all.nl>,
    Casper H.S. Dik wrote:
    >andrew@cucumber.demon.co.uk (Andrew Gabriel) writes:
    >
    >>They have more than enough CPU power nowadays not to care.

    >
    >Also, "receiver makes right" is often at least as
    >expensive as converting immediately.


    Real life considerations made Greg Chesson label it "receiver makes
    it wrong" after the Apollo fellow's dog and pony show in Mtn. View.
    No matter how many alternatives you try to not insult, there are
    always more that your cowardice unintentionally snubs. Apollo's
    NDR made things "right" only for a few of the available choices at
    the time, not to mention later data formats.

    As with the byte order bit squeezed into XTP as an attempt at
    defibrillation, such switches are purely political. The trolling
    behind this thread got it backwards, no doubt intentionally. You add
    such nasty, evil, buggy, insufficient knobs and switches only if as
    a designer you are so wishywashy that you can't even make easy,
    arbitrary choices (and so are a politician instead of a designer) or
    you are under overwhelming pressure from bosses, customers, etc. to
    not choose or choose again differently. I (and perhaps you) were
    treated to another classic performance of the melodrama in the attempt
    at bi-endian MIPS hardware with IRIX software. The final act then
    involved tests showing that htonl() &co were inconsequential with the
    mismatch between CPU and RAM speeds even then.


    Separately, while it was a nice bit of trolling, the VAX TCP/IP
    implementation was far from first. The Berkeley guys' access to VAX
    hardware can't be blamed for choices made in Stanford, Boston, or UCLA.
    If you're going to blame a popular DEC CPU for the big endian choice,
    wouldn't the PDP-10 be closer? See
    http://www.isoc.org/internet/history/brief.shtml
    Note the discussion of the IMP's byte order in
    http://www.ietf.org/rfc/ien/ien137.txt

    Besides, you don't find missing or extra htonl()'s by running your code
    on little endian VAXs. You need both big and little endian boxes.


    Vernon Schryver vjs@rhyolite.com

  13. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On 02 Nov 2008 12:55:50 GMT, Casper H.S Dik wrote:
    > Andrew Smallshaw writes:
    >
    >>On 2008-11-01, Tony wrote:
    >>> I am wondering, (since most machines on the internet are little endian and I
    >>> know that little endian byte order on a machine seems "natural" when
    >>> programming), if there is any benefit to the big endianness of tcp/ip (?).

    >
    >>Historically it isn't disputable that big endian machines dominated
    >>the internet. It made sense then and it makes sense now given the
    >>huge hassle of changing for a marginal benefit on _some_ machines
    >>on the internet. If you think that either arrangement is more
    >>'logical' than the other then you must be missing something. If
    >>that was the case that arrangement would have been universally
    >>adopted - it isn't some conspiracy to deliberately obfuscate things.
    >>The PDP-11's 2143 arrangement didn't gain traction for precisely
    >>that reason.

    >
    > Is that really true? I think they picked big endian BECAUSE they
    > wrote TCP/IP on a VAX (little endian) so they were sure that the
    > code identified the places where endianness needed to be converted.


    The way I heard it, it was you guys at Sun who discovered the
    endianness issue when making NFS portable, and so had to invent
    htonl() and friends and slap them on everywhere. Nobody had
    encountered little-endian machines on an IP network before.

    But this is probably urban folklore, and it contradicts other
    information (like the one about the PDP-11 above, and about the
    VAX being little-endian (was it really?)).

    /Jorgen

    --
    // Jorgen Grahn \X/ snipabacken.se> R'lyeh wgah'nagl fhtagn!

  14. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article ,
    Jorgen Grahn wrote:

    >The way I heard it, it was you guys at Sun who discovered the
    >endianness issue when making NFS portable, and so had to invent
    >htonl() and friends and slap them on everywhere. Nobody had
    >encountered little-endian machines on an IP network before.


    >But this is probably urban folklore, and it contradicts other
    >information (like the one about the PDP-11 above, and about the
    >VAX being little-endian (was it really?)).


    I hope that is just more trolling, because it contradicts well known
    facts that are also easily found with Google.

    Sun used big endian Motorola 68000's in the early days
    http://www.google.com/search?q=sun+m...ms+history+cpu
    http://en.wikipedia.org/wiki/Sun_Microsystems#Hardware
    http://en.wikipedia.org/wiki/Motorola_68000

    Bob Lyon is widely said to have based Sun's RPC protocol including XDR
    on his experience with Courier
    http://www.google.com/search?q=bob+lyon+rpc+alto

    Compare the date on Danny Cohen's classic discussion of big vs. little
    endian with the founding date of Sun Microsystems.
    http://www.ietf.org/rfc/ien/ien137.txt

    Some of hits for the obvious htonl() search
    http://www.google.com/search?q=htonl+history
    say "the byteorder functions appeared in 4.2BSD", which probably
    means htonl() was in 4.1a. See
    http://www.google.com/search?q=4.1a+bsd
    http://en.wikipedia.org/wiki/Berkele...e_Distribution

    The VAX was little endian, as mentioned in Danny Cohen's 1980 "On Holy
    Wars and a Plea for Peace", or see
    http://www.google.com/search?q=vax

    You can get most of the facts starting with the first hit for
    http://www.google.com/search?q=big+endian
    at
    http://en.wikipedia.org/wiki/Endianness

    Another obvious search,
    http://www.google.com/search?q=big+endian+sun
    turns up something that should give the original trolling pause. Consider
    the list of big and little endian file formats in
    http://www.cs.umass.edu/~verts/cs32/endian.html
    Can we expect a demand that Adobe Photoshop, IMG, and JPEG be changed
    to little endian?


    Such minor ancient history is not important except to the old farts who
    were there. However, there's a big difference between ignorance of
    minor history and flogging invented, easily refuted tales. If you can't
    be bothered to check well documented minor ancient history when talking
    to thousands of people, will your designs or code be worth using? Should
    any of those thousands of people care about anything else you write?


    Vernon Schryver vjs@rhyolite.com

  15. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    Jorgen Grahn writes:

    >The way I heard it, it was you guys at Sun who discovered the
    >endianness issue when making NFS portable, and so had to invent
    >htonl() and friends and slap them on everywhere. Nobody had
    >encountered little-endian machines on an IP network before.


    Well, I would suggest that the BSD Vax systems pre-dated
    Sun's "anything"; they were little endian.

    >But this is probably urban folklore, and it contradicts other
    >information (like the one about the PDP-11 above, and about the
    >VAX being little-endian (was it really?)).


    Yes, the VAX was little endian.

    Casper
    --
    Expressed in this posting are my opinions. They are in no way related
    to opinions held by my employer, Sun Microsystems.
    Statements on Sun products included here are not gospel and may
    be fiction rather than truth.

  16. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Albert Manfredi" wrote in message
    news:7ad24acd-f76d-4598-876e-a27023cf9186@f40g2000pri.googlegroups.com...

    >So, that settles it for IPv4. All IP headers work this way, all the
    >way up the stack from layer 3. This includes UDP, TCP, RTP, HTTP, etc.
    >etc.


    >Unfortunately, a similarly clear and unequivocal statement does not
    >appear in RFC 2460, for IPv6, although that's not because the IETF is
    >having second thoughts. That's because there is general consensus that
    >this is the only way to go.


    Proof that voting isn't all it's cracked up to be. How topical given that
    tomorrow is sheeple day.

    >By the way, you'll note that whether a number is sent as a binary
    >multi-byte field of *any* length, or whether the number is sent as
    >ASCII characters, big endian remains consistent. The most significant
    >binary byte, or the most significant ASCII numeral, is sent first
    >always.


    No doubt because TCP-IP was developed by enginerds who had UNIX big-endian
    boxes and thought that would make it convenient for them, rather than
    considering the other possibility and coming up with a better design. Error
    of omission? Intentional or a mistake? The world will never know.

    Tony



  17. Re: Is "network" (bit endian) byte order beneficial for wire protocols?


    "Emil Naepflein" wrote in message
    news:ebrqg4ppmgb0puepru9qiciv4jfe6iad1i@4ax.com...
    > On Sat, 1 Nov 2008 14:38:57 -0700 (PDT), Albert Manfredi
    > wrote:
    >
    >>By the way, you'll note that whether a number is sent as a binary
    >>multi-byte field of *any* length, or whether the number is sent as
    >>ASCII characters, big endian remains consistent. The most significant
    >>binary byte, or the most significant ASCII numeral, is sent first
    >>always.

    >
    > For someone who has been doing real hardcore debugging at bit and byte
    > level for more than 25 years, it's just that much easier to work with
    > big endian. With big endian a hex dump comes out in a way where you can
    > read all kinds of values (16-bit, 32-bit, 64-bit, ...) directly. You can
    > recognize patterns, pointers and numbers immediately. With little-endian
    > this is much, much harder.


    Does anyone under 60 years of age ever look at "a hex dump"?

    >
    > As we have a mixed world of little and big endian we have to use
    > something like htonl() anyway. So going to little-endian would not save
    > anything in programming effort.


    Most end nodes are little-endian, and those are the ones with the most
    applications, so it makes sense to tailor to those, at least _also_. (After
    that is fixed, or concurrently, the lame programming model that doesn't
    abstract other platform portability issues can be worked on.) Let's face it,
    big-endian-only is a zit on the nose of TCP-IP. (You would think "they"
    would get some acne cream on that for IPv6!)

    Tony



  18. Re: Is "network" (bit endian) byte order beneficial for wire protocols?


    "Andrew Smallshaw" wrote in message
    news:slrnggpn4m.av5.andrews@sdf.lonestar.org...
    > On 2008-11-01, Tony wrote:
    >>
    >> "Andrew Smallshaw" wrote in message
    >> news:slrnggovu4.hmm.andrews@sdf.lonestar.org...
    >>> On 2008-11-01, Tony wrote:
    >>>> I am wondering, (since most machines on the internet are little endian
    >>>> and I
    >>>> know that little endian byte order on a machine seems "natural" when
    >>>> programming), if there is any benefit to the big endianness of tcp/ip
    >>>> (?).
    >>>
    >>> Historically it isn't disputable that big endian machines dominated
    >>> the internet. It made sense then and it makes sense now given the
    >>> huge hassle of changing for a marginal benefit on _some_ machines
    >>> on the internet. If you think that either arrangement is more
    >>> 'logical' than the other then you must be missing something.

    >>
    >> I'm missing "clean code" because the chaff called "htonl" etc. is the
    >> industry's "solution" which is no solution at all. Lame. If there was one
    >> guy to blame for it, I'd fire him.

    >
    > If you think the likes of htonl represent 'dirty' programming then
    > you have a lot to learn about programming. In any case, going
    > little endian would not eliminate these functions - at present they
    > are used on big as well as little endian machines (typically
    > implemented as macros that return their parameter). There is no
    > reason why that would change.


    Sure there is. One would need only to set the flag in the packet. Routers
    should convert the few remaining big-endian polluters' pollution to little
    endian until one day, everyone is on the same good page. Presto! No more
    htonl bull****.

    > Finally, in any discussion about
    > clean programming it is laughable to use x86 as an example of good
    > practice.


    Sounds like a new thread.

    >
    >>> Instead of always or
    >>> never rearranging byte order as appropriate, you have to make a
    >>> decision first and then, regardless of your machine's byte order,
    >>> be prepared to do a conversion anyway.

    >>
    >> Or reject the incoming with a reply saying: "Get with the f'm program you
    >> big endian lamer!".

    >
    > So, let's get this straight... you want the world to spend what
    > would probably amount to trillions of dollars to satisfy a personal
    > whim of yours,


    No, it's just a good concept: migrate slowly but deliberately to little
    endian or at least to an endian indicating flag in the IP frame.

    > despite the fact that you have provided no good
    > reason for why that change should be made.


    Yes I have. To move to a simpler programming model.

    > Now you make compliance
    > with the very standards that you proposed optional thus breaking
    > the entire Internet.


    IPv6b!

    > I think this tells us all we need to know
    > about your credibility when it comes to designing networking protocols.


    Ad hominem. How "cute". :P

    >
    >> I was suggesting that over time the big endian machines and routers would
    >> just go away (as in, good riddance).

    >
    > You proceed from a false assumption, namely that little endian
    > machines are somehow better to start with. You have spectacularly
    > failed to make your case.


    You are just blinded by the trees (can't see the forest). Or have a hearing
    or comprehension problem.

    >
    >>>Devices
    >>> working at wire speed or faster (routers etc) are generally
    >>> implemented using network byte order as their native endianness.

    >>
    >> So much for lame hardware without upgradeable firmware.

    >
    > You really have no idea what you are talking about do you? Endianness
    > is a fundamental decision made early on in the design of any
    > processor.


    Note that SPARCs can be set up to be either, as I'm sure you know. But it
    is obvious you are in ad hominem attack mode, so I'll just let you crash
    and burn.

    > It isn't in general a case of simply loading up some
    > new firmware. Even many bi-endian machines cannot have their
    > endianness adjusted at runtime - it's just something that there is no
    > provision for. Indeed some CPU datasheets I've seen are at pains
    > to point out that under no circumstances must you attempt a change
    > while the processor is operating.


    You have no case, have made no credible point other than "that's just the
    way it is". Retire already and let people fix the broken stuff you no doubt
    helped put in place. :P

    >
    >>> That way they can consider the values present directly with no need
    >>> for any transformation - this approach would be impossible if the
    >>> byte order could vary for every single packet.

    >>
    >> It's a simple test of a big/little flag, the purpose of which is to
    >> migrate
    >> to one agreed upon endianness (little, of course!).

    >
    > Which once again contradicts what you had already said. Networking
    > protocols depend on universal agreement about what things mean and
    > what is permissible, and yet you can't agree with yourself from one
    > post to the next.


    OK little boy! Whatever you say must be true. (It'll be OK... wahhh wahhh
    your way home to mommy now).

    Tony



  19. Re: Is "network" (bit endian) byte order beneficial for wire protocols? (v2)

    v1 of this message was like "big endianness as the standard" while this
    version is devoid of the kind of crap you spew with your ad hominem bent,
    Andrew. Which is better?

    "Andrew Smallshaw" wrote in message
    news:slrnggpn4m.av5.andrews@sdf.lonestar.org...
    > On 2008-11-01, Tony wrote:
    >>
    >> "Andrew Smallshaw" wrote in message
    >> news:slrnggovu4.hmm.andrews@sdf.lonestar.org...
    >>> On 2008-11-01, Tony wrote:
    >>>> I am wondering, (since most machines on the internet are little endian
    >>>> and I
    >>>> know that little endian byte order on a machine seems "natural" when
    >>>> programming), if there is any benefit to the big endianness of tcp/ip
    >>>> (?).
    >>>
    >>> Historically it isn't disputable that big endian machines dominated
    >>> the internet. It made sense then and it makes sense now given the
    >>> huge hassle of changing for a marginal benefit on _some_ machines
    >>> on the internet. If you think that either arrangement is more
    >>> 'logical' than the other then you must be missing something.

    >>
    >> I'm missing "clean code" because the chaff called "htonl" etc. is the
    >> industry's "solution" which is no solution at all. Lame. If there was one
    >> guy to blame for it, I'd fire him.

    >
    > If you think the likes of htonl represent 'dirty' programming then
    > you have a lot to learn about programming. In any case, going
    > little endian would not eliminate these functions - at present they
    > are used on big as well as little endian machines (typically
    > implemented as macros that return their parameter). There is no
    > reason why that would change.


    Sure there is. One would need only to set the flag in the packet. Routers
    should convert the few remaining big-endian polluters' pollution to little
    endian until one day, everyone is on the same good page. Presto! No more
    htonl bull****.

    > Finally, in any discussion about
    > clean programming it is laughable to use x86 as an example of good
    > practice.


    Sounds like a new thread.

    >
    >>> Instead of always or
    >>> never rearranging byte order as appropriate, you have to make a
    >>> decision first and then, regardless of your machine's byte order,
    >>> be prepared to do a conversion anyway.

    >>
    >> Or reject the incoming with a reply saying: "Get with the f'm program you
    >> big endian lamer!".

    >
    > So, let's get this straight... you want the world to spend what
    > would probably amount to trillions of dollars to satisfy a personal
    > whim of yours,


    No, it's just a good concept: migrate slowly but deliberately to little
    endian or at least to an endian indicating flag in the IP frame.

    > despite the fact that you have provided no good
    > reason for why that change should be made.


    Yes I have. To move to a simpler programming model.

    > Now you make compliance
    > with the very standards that you proposed optional thus breaking
    > the entire Internet.


    IPv6b!

    > I think this tells us all we need to know
    > about your credibility when it comes to designing networking protocols.


    Ad hominem. How "cute". :P

    >
    >> I was suggesting that over time the big endian machines and routers would
    >> just go away (as in, good riddance).

    >
    > You proceed from a false assumption, namely that little endian
    > machines are somehow better to start with. You have spectacularly
    > failed to make your case.


    You are just blinded by the trees (can't see the forest). Or have a hearing
    or comprehension problem.

    >
    >>>Devices
    >>> working at wire speed or faster (routers etc) are generally
    >>> implemented using network byte order as their native endianness.

    >>
    >> So much for lame hardware without upgradeable firmware.

    >
    > You really have no idea what you are talking about do you? Endianness
    > is a fundamental decision made early on in the design of any
    > processor.


    Note that SPARCs can be set up to be either.

    > It isn't in general a case of simply loading up some
    > new firmware. Even many bi-endian machines cannot have their
    > endianness adjusted at runtime - it's just something that there is no
    > provision for. Indeed some CPU datasheets I've seen are at pains
    > to point out that under no circumstances must you attempt a change
    > while the processor is operating.


    You have no case, have made no credible point other than "that's just the
    way it is".

    >
    >>> That way they can consider the values present directly with no need
    >>> for any transformation - this approach would be impossible if the
    >>> byte order could vary for every single packet.

    >>
    >> It's a simple test of a big/little flag, the purpose of which is to
    >> migrate
    >> to one agreed upon endianness (little, of course!).

    >
    > Which once again contradicts what you had already said. Networking
    > protocols depend on universal agreement about what things mean and
    > what is permissible, and yet you can't agree with yourself from one
    > post to the next.


    I just want to fix the problem. While you want to have a pissing contest.

    Tony





  20. Re: Is "network" (bit endian) byte order beneficial for wire protocols?


    "Andrew Gabriel" wrote in message
    news:490db0d3$0$506$5a6aecb4@news.aaisp.net.uk...
    > In article ,


    > My first computing was at assembly level on 8 bit micros, and
    > little-endian seemed perfectly natural, and big-endian was
    > strange when I first met it. It didn't take long to realise
    > that big-endian is more correct and natural -- it was just
    > that I hadn't been so familiar with it initially. I use both
    > equally now, and big-endian feels correct, and little-endian
    > more of a bodge.


    If you have a bent to shoehorn computers into looking like a wire! Ugly.

    >
    >>> The follow up question may be why IPv6 doesn't have an endianness flag
    >>> in
    >>> the IP frames (?). I think little endian is the way to go but if there
    >>> has
    >>> to be both, then accommodating both simply with the flag is a good
    >>> compromise. I mean, why bother end nodes (more numerous than routers)
    >>> with
    >>> endianness all of the time?

    >
    > They have more than enough CPU power nowadays not to care.


    That's not the point at all.

    Tony


