Is "network" (big endian) byte order beneficial for wire protocols? - TCP-IP



Thread: Is "network" (big endian) byte order beneficial for wire protocols?

  1. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    wrote in message
    news:6b44671d-9bbb-4ac0-8e07-506ef184f988@w1g2000prk.googlegroups.com...
    On Oct 31, 9:34 pm, "Tony" wrote:
    > I am wondering, (since most machines on the internet are little endian and
    > I
    > know that little endian byte order on a machine seems "natural" when
    > programming), if there is any benefit to the big endianness of tcp/ip (?).


    >As to big or little endian being more natural, that's a huge, and
    >ultimately pointless, debate. Both clearly work well, and at best,
    >the more "natural" one seems to be the one you were introduced to
    >first.


    That's not it at all. There is "impedance mismatch" in the current
    brain-damaged implementation. (The mismatch may actually be in the minds of
    those keeping things ugly rather than fixing them as time goes on).

    > The follow up question may be why IPv6 doesn't have an endianness flag in
    > the IP frames (?). I think little endian is the way to go but if there has
    > to be both, then accommodating both simply with the flag is a good
    > compromise. I mean, why bother end nodes (more numerous than routers)
    > with endianness all of the time?



    As has been pointed out, making it variable is a horrible idea.

    Where?

    >Dealing with variable byte order at run time is a large overhead,


    Wrong.

    >which you end up always incurring.


    Wrong.

    > Rather than no overhead if your
    >local byte order consistently matches the one for the network, or a
    >small one if it does not.


    All you are saying is that "the wire should influence the programming
    model", which is stupid because you write software on a computer, not on a
    wire!

    Tony



  2. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article ,
    Tony wrote:

    >Does anyone under 60 years of age ever look at "a hex dump"?


    It would be more accurate to say that anyone who does not look at a hex
    dump at least once every few months is not a real computer programmer.
    Only programmers whose only languages are interpreted scripting languages
    never look at hex dumps, and even they sometimes need to use
    something like the `od` command to make hex dumps of things like Perl's
    packed data files. Anyone doing any of the following must at least
    occasionally poke through hex dumps:

    - developing or debugging code for many embedded systems
    - operating system kernel developers and debuggers
    - file system (including database) developers (I don't mean people
    who write shell scripts to poke at MySQL, but people who develop new
    file systems and databases)
    - many database application developers (when your database is not
    answering your updates and queries sensibly, you often must
    figure out from the binary or hex bits what it thinks it is doing)
    - application developers dealing with broken core dumps, stack corruption,
    and other odd cases that are otherwise impossible to diagnose.


    >> As we have a mixed world of little and big endian we have to use
    >> something like htonl() anyway. So going to little-endian would not save
    >> anything in programming effort.

    >
    >Most end nodes are little-endian. And those are the ones with the most
    >applications.


    If IPv6 makes it, then most "nodes" on the net will be embedded
    systems. There are orders of magnitude more embedded systems than
    Wintel PCs. At one time and I suspect still, most embedded systems
    were either big-endian or byte-wide and so neither big nor little
    endian (e.g. 8051).

    > So it makes sense to tailor to those, at least _also_. (After
    >that is fixed, or concurrently, the lame programming model that doesn't
    >abstract other platform portability issues can be worked on).


    Only someone who is trolling or who has no clue about "platform
    portability issues" except how to type it would say that.

    > Let's face it,
    >big-endian-only is a zit on the nose of TCP/IP. (You would think "they"
    >would get some acne cream on that for IPv6!).


    Even if IPv6 were little endian, all competent programmers would still
    use htoIPv6l() and IPv6ntoh() (or whatever) for multi-byte TCP/IPv6
    integers so that their code would work on the (supposed) minority of
    big endian systems. For at least decades there will be many big-endian
    systems on the Internet.

    For the higher layers, big vs. little endian network hassles are
    non-issues. You generally want to prevent alignment worries, and so
    you almost always treat multi-byte values arriving or going out as
    strings of bytes. You almost always pick them up or put them down
    one byte at a time because you can't know what sort of crazy
    alignment restrictions the host might have, and must not allow unexpected
    structure padding. Depending on whether your chosen wire format is
    big or little endian, you start at the most or least significant end.
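
    A minimal sketch of that byte-at-a-time approach in C; the helper
    names here are illustrative, not from any standard API:

        #include <stdint.h>

        /* Decode a 32-bit big-endian field from a byte string. Touching
           one byte at a time works regardless of the host's byte order
           or alignment rules. */
        static uint32_t get_be32(const unsigned char *p)
        {
            return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                   ((uint32_t)p[2] << 8) | (uint32_t)p[3];
        }

        /* The little-endian counterpart simply starts at the other end. */
        static uint32_t get_le32(const unsigned char *p)
        {
            return ((uint32_t)p[3] << 24) | ((uint32_t)p[2] << 16) |
                   ((uint32_t)p[1] << 8) | (uint32_t)p[0];
        }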

    Even far away from the network, it is often wise to pick a byte
    order for your application data files, and assume that any given
    platform's native order might differ. You might use something like
    XDR from Sun's RPC, htonl() and friends, or manual fetch/shift/store
    to prevent problems. XDR is nice because your XDR compiler can
    generate optimal (i.e. no) code when possible.
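
    The store side of the manual fetch/shift/store mentioned above could
    look like this sketch (again an illustrative helper, not a library
    function):

        #include <stdint.h>

        /* Write a 32-bit value into a file or wire buffer in a fixed
           big-endian order, whatever the host's native order is. */
        static void put_be32(unsigned char *p, uint32_t v)
        {
            p[0] = (unsigned char)(v >> 24);
            p[1] = (unsigned char)(v >> 16);
            p[2] = (unsigned char)(v >> 8);
            p[3] = (unsigned char)v;
        }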

    This applies to more than byte order. Competent programmers who
    need to care about accuracy and precision know there are more than
    2 floating point memory formats even on Wintel boxes alone, but often
    use only IEEE 754 in their disk files.


    Vernon Schryver vjs@rhyolite.com

  3. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article ,
    Tony wrote:

    >Proof that voting isn't all it's cracked up to be. How topical given that
    >tomorrow is sheeple day.


    Voting is supposed to be outlawed in the IETF. Instead the IETF
    is supposed to choose based on consensus and running code.


    >>By the way, you'll note that whether a number is sent as a binary
    >>multi-byte field of *any* length, or whether the number is sent as
    >>ASCII characters, big endian remains consistent. The most significant
    >>binary byte, or the most significant ASCII numeral, is sent first
    >>always.

    >
    >No doubt because TCP/IP was developed by enginerds who had UNIX big-endian
    >boxes and thought that would make it convenient for them rather than
    >considering the other possibility and coming up with a better design. Error
    >of omission? Intentional or a mistake? The world will never know.


    Trolling works better when the troller appears to read and understand
    what others write. Specifically, the world knows that UNIX "enginerds"
    had nothing to do with how the Babylonians decided to represent numbers
    five thousand years ago. http://en.wikipedia.org/wiki/Babylonian_numerals

    UNIX predates the founding of Intel and Microsoft, and so it is
    unreasonable to expect "UNIX enginerds" to anticipate the popularity
    of Wintel boxes on the Internet.


    Vernon Schryver vjs@rhyolite.com

  4. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Vernon Schryver" wrote in message
    news:geni9n$9i9$1@calcite.rhyolite.com...
    > In article ,


    > UNIX predates the founding of Intel and Microsoft, and so it is
    > unreasonable to expect "UNIX enginerds" to anticipate the popularity
    > of Wintel boxes on the Internet.


    Wintel didn't introduce the concept of endianness though.

    Tony



  5. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article <490f1812$0$184$e4fe514c@news.xs4all.nl>, Casper.Dik@Sun.COM (Casper H.S. Dik) writes:
    | Jorgen Grahn writes:
    |
    | >The way I heard it, it was you guys at Sun who discovered the
    | >endianness issue when making NFS portable, and so had to invent
    | >htonl() and friends and slap them on everywhere. Nobody had
    | >encountered little-endian machines on an IP network before.

    I wonder if this legend somehow evolved from the (in)famous history
    of the original talk protocol.

    | Well, I would suggest that the BSD Vax systems pre-dated
    | Sun's "anything"; they were little endian.
    |
    | >But this is probably urban folklore, and it contradicts other
    | >information (like the one about the PDP-11 above, and about the
    | >VAX being little-endian (was it really?)).
    |
    | Yes, the VAX was little endian.

    Mine still is.

    Dan Lanciani
    ddl@danlan.*com

  6. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article , Tony wrote:

    >> UNIX predates the founding of Intel and Microsoft, and so it is
    >> unreasonable to expect "UNIX enginerds" to anticipate the popularity
    >> of Wintel boxes on the Internet.

    >
    >Wintel didn't introduce the concept of endianness though.


    Neither did "UNIX enginerds."
    The UNIX (specifically BSD) connection to TCP/IP came long after the
    big endian nature of the Internet was fixed forever.


    Vernon Schryver vjs@rhyolite.com

  7. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article <490f65a9$0$506$5a6aecb4@news.aaisp.net.uk>,
    Andrew Gabriel wrote:

    >It's a question of do you store machine registers in memory
    >in the same order (big-endian), or do you apply some function to
    >swap bits of the registers around before storing them in memory
    >(little-endian)?
    >
    >Little-endian came about because of the very limited capabilities
    >of early processors, it being easier to swap bits of registers
    >around when writing them to narrower data-width store. That's not
    >been true for decades now, and only hangs on due to backwards
    >compatibility.


    That is nonsense from start to end. Early big endian CPUs did no more
    or less bit swapping when writing registers than early little endian
    CPUs, even among the CPUs that were not bit-serial. Bit serial CPUs
    (or any CPU with adders narrower than its word size) had to be in some
    sense little endian because they had to propagate carry bits from less
    significant toward more significant bits.

    There was no real "[swapping] of bits or registers around" on early
    CPUs of any flavor. They just wired things so that numbers worked. No
    "swapping" is needed for a little endian CPU to write registers; it
    just starts writing from the least significant bits. Only later CPUs
    had byte order switches and so could be said to swap bytes.

    Early processors were often neither big nor little endian. For example,
    when your word size is 22 bits (like the first computer I touched), it
    doesn't make sense to talk about big or little endian.


    >What goes out over the wire (on ethernet) is ordered differently
    >from any of the host endianisms mentioned so far. Octets are
    >transmitted in big-endian order.


    That is news to the people who worked on DECNET Phase IV decades ago.
    In other words, it is also nonsense. On the network, "bits is bits."
    How numbers are constructed from bits is defined by the protocol.
    IEEE 802.3 says the length and FCS fields are big endian, but those are
    merely two fields in one protocol.


    > It would make no sense to send
    >in little-endian as you don't know what the data-size of the
    >various components of a packet are, nor is there any natural data
    >width on which to base the endianism swapping.


    I can't make sense of that without the assumption that there is some
    innately big endian order to networks. DECNET shows that is nonsense.

    > However, the 8 bits
    >in each Octet are transmitted least significant bit first.


    That applies only to ancient 10 Mb/s 802.3. Newer generations of
    Ethernet have symbol sizes larger than 1 bit and so talking about
    bit order does not make much sense.


    Vernon Schryver vjs@rhyolite.com

  8. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Nov 2, 6:55 am, Casper H.S. Dik wrote:
    > Is that really true? I think they picked big endian BECAUSE they
    > wrote TCP/IP on a VAX (little endian) so they were sure that the
    > code identified the places where endian needed to be converted.



    A problem with that theory is that TCP/IP was under development in
    1973 well before the VAX shipped (1977). RFC 675, which is
    recognizable as the ancestor of the current IPv4 spec, is not very
    explicit about byte order, but it does imply it in several places.

    OTOH there were bunches of PDP-11s (ancestors to the VAX) around
    during that period...


  9. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Nov 3, 7:23 am, Jorgen Grahn wrote:
    > But this is probably urban folklore, and it contradicts other
    > information (like the one about the PDP-11 above, and about the
    > VAX being little-endian (was it really?)).



    The VAX was most certainly little-endian.

  10. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Vernon Schryver" wrote in message
    news:genomr$25mu$1@calcite.rhyolite.com...
    > In article , Tony
    > wrote:
    >
    >>> UNIX predates the founding of Intel and Microsoft, and so it is
    >>> unreasonable to expect "UNIX enginerds" to anticipate the popularity
    >>> of Wintel boxes on the Internet.

    >>
    >>Wintel didn't introduce the concept of endianness though.

    >
    > Neither did "UNIX enginerds."
    > The UNIX (specifically BSD) connection to TCP/IP came long after the
    > big endian nature of the Internet was fixed forever.


    "Broken for now" seems more appropriate verbiage.



  11. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Andrew Gabriel" wrote in message
    news:490f65a9$0$506$5a6aecb4@news.aaisp.net.uk...
    > In article ,
    > "Tony" writes:
    >>
    >> "Andrew Gabriel" wrote in message
    >> news:490db0d3$0$506$5a6aecb4@news.aaisp.net.uk...
    >>> In article ,

    >>
    >>> My first computing was at assembly level on 8 bit micros, and
    >>> little-endian seemed perfectly natural, and big-endian was
    >>> strange when I first met it. It didn't take long to realise
    >>> that big-endian is more correct and natural -- it was just
    >>> that I hadn't been so familiar with it initially. I use both
    >>> equally now, and big-endian feels correct, and little-endian
    >>> more of a bodge.

    >>
    >> If you have a bent to shoehorn computers into looking like a wire! Ugly.

    >
    > It's a question of do you store machine registers in memory
    > in the same order (big-endian), or do you apply some function to
    > swap bits of the registers around before storing them in memory
    > (little-endian)?
    >
    > Little-endian came about because of the very limited capabilities
    > of early processors, it being easier to swap bits of registers
    > around when writing them to narrower data-width store. That's not
    > been true for decades now, and only hangs on due to backwards
    > compatibility.
    >
    > What goes out over the wire (on ethernet) is ordered differently
    > from any of the host endianisms mentioned so far. Octets are
    > transmitted in big-endian order. It would make no sense to send
    > in little-endian as you don't know what the data-size of the
    > various components of a packet are, nor is there any natural data
    > width on which to base the endianism swapping. However, the 8 bits
    > in each Octet are transmitted least significant bit first.
    >
    >>> They have more than enough CPU power nowadays not to care.

    >>
    >> That's not the point at all.

    >
    > What point did you have in mind then?


    Changing the programming model to (C code as example):

    WriteIPPacket(&my_ip_packet, sizeof(IPPacket)+data_sz);

    Other platform portability issues ignored, but you get the idea of the goal.
    What could be simpler?

    Tony



  12. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article , Tony wrote:

    >Changing the programming model to (C code as example):
    >
    > WriteIPPacket(&my_ip_packet, sizeof(IPPacket)+data_sz);
    >
    >Other platform portability issues ignored, but you get the idea of the goal.
    >What could be simpler?


    That is so simple that it would not work. Programmers would still have
    to apply htonl() and htons() on the IP header fields and the data fields
    in case their code happens to be compiled for a big endian CPU.
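
    As a sketch of that point, filling two IP-header-style fields
    portably with hton*(); the struct here is illustrative, not the
    actual BSD struct ip:

        #include <arpa/inet.h>   /* htons(), htonl() */
        #include <stdint.h>

        struct ip_hdr_sketch {
            uint16_t tot_len;    /* big endian on the wire */
            uint32_t src_addr;   /* big endian on the wire */
        };

        static void fill_hdr(struct ip_hdr_sketch *h, uint16_t len,
                             uint32_t src)
        {
            /* No-ops on big endian hosts, byte swaps on little endian
               ones, so the same source is correct either way. */
            h->tot_len = htons(len);
            h->src_addr = htonl(src);
        }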

    If Tony were a joyous troll with at least minimal clues, his response
    to that fact would be to say something like "Real programmers don't
    want their code used on big endian CPUs."

    Or maybe he would point out that ip_output() in the BSD IP code already
    fits his WriteIPPacket() model. ip_output() takes a string of mbufs
    or "memory buffers" and a bunch of other parameters to sophisticated
    for this thread and uses hton*() to make the IP header correct for the
    wire.


    Vernon Schryver vjs@rhyolite.com

  13. Re: Is "network" (big endian) byte order beneficial for wire protocols?


    "Vernon Schryver" wrote in message
    news:gent8q$93s$1@calcite.rhyolite.com...
    > In article , Tony
    > wrote:
    >
    >>Changing the programming model to (C code as example):
    >>
    >> WriteIPPacket(&my_ip_packet, sizeof(IPPacket)+data_sz);
    >>
    >>Other platform portability issues ignored, but you get the idea of the
    >>goal.
    >>What could be simpler?

    >
    > That is so simple that it would not work. Programmers would still have
    > to apply htonl() and htons() on the IP header fields and the data fields
    > in case their code happens to be compiled for a big endian CPU.


    But I was suggesting deprecating big endianness everywhere over time.

    >
    > If Tony were a joyous troll with at least minimal clues, his response
    > to that fact would be to say something like "Real programmers don't
    > want their code used on big endian CPUs."


    Or at least say it practically and with conviction as I just did above?

    >
    > Or maybe he would point out that ip_output() in the BSD IP code already
    > fits his WriteIPPacket() model. ip_output() takes a string of mbufs
    > or "memory buffers" and a bunch of other parameters to sophisticated
    > for this thread and uses hton*() to make the IP header correct for the
    > wire.


    That's at the IP level, and yes I am suggesting that at that level the new
    model would be good, but not just there. Obviously I want to be able to
    simply do:

    SendMessage(&my_msg, sizeof(MyMsg));

    And indeed I can: I specify that my protocol is little endian and therefore
    only those few big endian machines being used with my internet chat program
    protocol have to do any conversion, and yes, they will have to do it every
    time. Sure, I could do the endianness flag thing, but that isn't making any
    progress: it stagnates to both types of messages going over the wire and
    both endnodes having to have the conversion code. First step of protocol
    design: specify endianness?
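
    A sketch of that claim as code, assuming a GCC/Clang-style compiler
    (the predefined macros and builtin are compiler-specific, and the
    helper name is hypothetical):

        #include <stdint.h>

        /* Hypothetical "host to declared-little-endian" conversion:
           a no-op on little endian hosts; only big endian hosts pay
           for the swap, and they pay it every time. */
        static uint32_t to_le32(uint32_t v)
        {
        #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
            return __builtin_bswap32(v);   /* GCC/Clang builtin */
        #else
            return v;                      /* little endian host: no-op */
        #endif
        }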

    Tony



  14. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Mon, 3 Nov 2008 11:55:05 -0600, "Tony" wrote:

    >Does anyone under 60 years of age ever look at "a hex dump"?


    Oh, this is a very strong argument from your side. It shows that you
    don't know much about the real world of low-level programming and
    debugging. You also don't know much about computer history and
    programming languages. And at the same time you concede that my
    argument has a valid point; otherwise you wouldn't try to answer such
    nonsense.

    Yes, a lot of high-level language programmers don't look at hex dumps.
    But the reason for this is that they don't know how to interpret them.
    Someone used only to high-level programming languages and only a
    high-level interpreted (Wireshark) view of network protocols doesn't
    really have a clue what that means, especially regarding debugging.
    Debugging at this low level often means doing pattern recognition on
    hex dumps, and big-endian makes this job much easier than
    little-endian, period. Instead of doing serious debugging by looking
    into memory when the high-level debugger doesn't give any help, they
    often debug by trial and error until the program *seems* to run stably.
    And the result is the program quality we see in the market today,
    where, as you say, most of the computers with the "right" endianness are.
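
    A tiny illustration of the hex-dump point: dump the bytes of a value
    stored natively, and a big endian host prints 12 34 56 78, matching
    the number as written, while a little endian host prints 78 56 34 12:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t v = 0x12345678;
            const unsigned char *p = (const unsigned char *)&v;
            for (size_t i = 0; i < sizeof v; i++)
                printf("%02x ", (unsigned)p[i]);
            putchar('\n');
            return 0;
        }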


    But anyway, this isn't the primary point. The primary point is that we
    have a mixed world of processor architectures and programming languages
    with different endianness, sizes, and alignments of data types. For
    this you have to convert a bit/byte stream on the network to and from
    the representation in the machine. In order to write portable code you
    have to handle the necessary conversions bit by bit and byte by byte,
    independent of whether the byte order of the network representation
    matches or not. So the discussion regarding the endianness of network
    byte order is not relevant.

    BTW, I am only in my mid-40s, but I have been doing low-level
    programming for more than 30 years. I have worked with machine language
    on all kinds of processors, from 8-bit microprocessors to 64-bit
    mainframes. I even worked on a 48-bit TR440. I started programming when
    teletypes, paper tape, punch cards, and machine code input by flipping
    switches were common. During these years I have worked on network
    protocols from the lowest bit level to high-level application
    protocols. And I am pretty sure that here in this NG there are older
    and younger people who still work with hex dumps when necessary.

  15. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Mon, 3 Nov 2008 13:20:49 -0800 (PST), "robertwessel2@yahoo.com"
    wrote:

    >OTOH there were bunches of PDP-11s (ancestors to the VAX) around
    >during that period...


    AFAIK they didn't run a UNIX with TCP/IP integrated. More of the history
    can be found at http://en.wikipedia.org/wiki/Unix . The decision
    regarding byte order in TCP/IP was made much earlier, independent of UNIX.

  16. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Mon, 3 Nov 2008 17:21:22 -0600, "Tony" wrote:

    >That's at the IP level, and yes I am suggesting that at that level the new
    >model would be good, but not just there. Obviously I want to be able to
    >simply do:
    >
    > SendMessage(&my_msg, sizeof(MyMsg));
    >


    This would only work if data type size, alignment (bit-fields, xx-bit
    data types, ...) and endianness were forced to be the same on all the
    computers on the network.
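
    A sketch of the size-and-alignment part of that objection: compilers
    may pad a struct, and different ABIs pad differently, so
    sizeof(MyMsg) is not a wire format (the struct here is illustrative):

        #include <stdint.h>
        #include <stdio.h>

        struct MyMsg {
            uint8_t type;    /* 1 byte */
            /* most ABIs insert 3 bytes of padding here */
            uint32_t value;  /* 4 bytes, usually 4-byte aligned */
        };

        int main(void)
        {
            /* commonly prints 8, not 5; the exact value is ABI-dependent */
            printf("%zu\n", sizeof(struct MyMsg));
            return 0;
        }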

  17. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article <491023fe$0$506$5a6aecb4@news.aaisp.net.uk>,
    Andrew Gabriel wrote:

    >>>Little-endian came about because of the very limited capabilities
    >>>of early processors, it being easier to swap bits of registers
    >>>around when writing them to narrower data-width store. That's not
    >>>been true for decades now, and only hangs on due to backwards
    >>>compatibility.

    >>
    >> That is nonsense from start to end. Early big endian CPUs did no more
    >> or less bit swapping when writing registers than early little endian
    >> CPUs, even among the CPUs that were not bit-serial. Bit serial CPUs
    >> (or any CPU with adders narrower than its word size) had to be in some
    >> sense little endian because they had to propagate carry bits from less
    >> significant toward more significant bits.

    >
    >I don't think I've ever seen a CPU with anything other than
    >big-endian registers.


    Then you've never seen the hardware of even one CPU.

    You evidently do not understand how binary arithmetic works. To
    add a pair of numbers, a CPU must start from the same end as you
    do when you add a pair of decimal numbers. You start with the least
    significant pair and work toward the left so that you can carry digits.
    "Carry propagation" is a fundamental constraint on CPU speed. See
    http://www.google.com/search?q=%22carry+propagation%22
    http://www.google.com/search?q=%22half+adder%22
    http://www.google.com/search?q=%22look+ahead+carry%22
    If you read only one web page, see
    http://en.wikipedia.org/wiki/Carry_look-ahead_adder
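
    A software ripple-carry adder makes the constraint concrete: the sum
    must be built from the least significant bit upward because each
    bit's carry feeds the next more significant bit (a sketch, not how
    any particular CPU is wired):

        #include <stdint.h>

        static uint32_t ripple_add(uint32_t a, uint32_t b)
        {
            uint32_t sum = 0, carry = 0;
            for (int i = 0; i < 32; i++) {
                uint32_t abit = (a >> i) & 1;
                uint32_t bbit = (b >> i) & 1;
                sum |= (abit ^ bbit ^ carry) << i;
                carry = (abit & bbit) | (abit & carry) | (bbit & carry);
            }
            return sum;
        }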

    My second favorite example of a bit serial CPU was the SDS 940 made
    famous by Berkeley's Project Genie. (I think the 940 was 3-bit serial
    while the 910 or 920 was bit-serial.) My favorite was the first system
    I touched, a PB 250. It used acoustic delay lines for memory and all
    3 registers. It put a sound pulse into one end of a coil of wire and
    "stored" the bit until the pulse came out the other end.
    If you understood how such a CPU works, you could not easily talk about
    "big endian CPU registers."

    Unless the bytes of the registers can be addressed as if they are memory,
    then they are neither really big nor little endian. How the schematics
    of the CPU label the bits of the registers is irrelevant. You've
    obviously never seen CPU schematics; I've never seen the modern
    equivalents which I suppose would be VHDL or net lists. To have big
    endian *registers*, you must have memory addresses that point to bytes
    in multi-byte registers and the smaller addresses must point to the
    more significant bytes. CPUs from the CDC 6000 through the Intel 386
    (and subsequent) that had or have instructions or other mechanisms for
    loading and storing all of the registers in a gulp or "context" do not
    count as CPUs with big or little endian *registers*. How the registers
    are fetched and stored from and to memory makes the computer big or
    little endian. It says nothing about the registers themselves.

    On only a very few CPUs were the real, live, active CPU registers
    visible in the memory address space. I've worked on one or two of
    those. They were rare and are now dead because they couldn't be
    as fast as the designs related to RISC systems.


    >> Only later CPUs
    >> had byte order switches and so could be said to swap bytes.

    >
    >I'm not referring to such later technologies, just the flipping
    >of the register contents as they're written to narrower memory
    >due to it being easier to write the least significant portion
    >to the address in question, and then increment the address and
    >shift left the register to write the next portion to following
    >address, etc.


    Where did you get the idea that it is necessarily harder or easier
    to write the most or least significant bits from a register first?
    Unlike adders, shifters and multiplexers do not have endian preferences.

    If your registers are in some sort of byte-addressable store inside the
    CPU, they will probably be little endian so that arithmetic works more easily
    (carry bits again). 30 years ago there were chips sold as "register
    files" for such uses. However, fetching registers a byte at a time is
    slower than fetching them all at once, so you'd prefer a register file
    to be as wide as your words and not be able to address bytes within
    your registers. You would probably gang several of those 1-, 2-, or
    4-bit-wide "register files" together to make a single word-wide file. In
    that case, any shifting from word-wide registers to narrower external
    memory is likely to be equally easy from most *or* least significant.


    > As processor technology improved, that could be
    >done away with, so registers and memory are both big-endian.
    >Of course, where backwards compatibility is required (e.g. x86),
    >processors now really do have to flip between big-endian
    >registers and little-endian memory when doing loads and stores,
    >but it's no big deal.


    I know very little about the internal architecture of the many flavors
    of 80*86, but I do know enough to recognize that as more nonsense.


    >Actually, it's no big deal when you do it in software either.
    >I have written an emulator for a big-endian system which I run
    >on both big-endian and little-endian hosts. Of course, on little
    >endian hosts, I have to endian-swap almost every pseudo register
    >and pseudo store access I do (the exception being any direct copy
    >which doesn't impact any of the pseudo condition registers).
    >This overhead is however negligible on modern host systems.


    That suggests the root of your confusion. How your emulators fetch,
    store, and otherwise manipulate the bytes of your emulated computers
    has nothing to do with how the real hardware did it.

    Emulating one instruction set on some other computer (which I've
    also done) gives *no* insight into how the original hardware works.
    Even writing microcode for a system with writable control store
    to implement a CISC instruction set (something I've also been paid
    to do) tells you nothing about how the underlying hardware works.


    Vernon Schryver vjs@rhyolite.com

  18. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Tue, 04 Nov 2008 07:34:06 +0100, Emil Naepflein wrote:

    > BTW, I am only in my mid-40s, but I have been doing low-level
    > programming for more than 30 years. I have worked with machine
    > language on all kinds of processors, from 8-bit microprocessors to
    > 64-bit mainframes. I even worked on a 48-bit TR440. I started
    > programming when teletypes, paper tape, punch cards, and machine
    > code input by flipping switches were common. During these years I
    > have worked on network protocols from the lowest bit level to
    > high-level application protocols. And I am pretty sure that here
    > in this NG there are older and younger people who still work with
    > hex dumps when necessary.


    FWIW, I'm mid-40s as well and have to look at hex dumps regularly. Why?
    Some reasons:
    - Debug some protocol wireshark does not (completely) decode
    - Figure out some memory corruption in C or C++ programs (usually not
    written by me).
    - Figure out magic for some binary file type.
    - Figure out wtf is really in that file instead of what they spec'd.
    - Figure out wtf is really in that file instead of what I thought my
    program wrote.
    - Debug filesystem corruption (not an everyday occurrence and not for the
    faint of heart)
    - Debug endianness issues
    - Try to figure out what some file contains

    And many more...

    As Emil said, if you never look at hex dumps, you're hardly qualified
    to speak about the merits of endianness schemes.

    M4

  19. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    On Mon, 3 Nov 2008 15:04:06 +0000 (UTC), Vernon Schryver wrote:
    > In article ,
    > Jorgen Grahn wrote:
    >
    >>The way I heard it, it was you guys at Sun who discovered the
    >>endianness issue when making NFS portable, and so had to invent
    >>htonl() and friends and slap them on everywhere. Nobody had
    >>encountered little-endian machines on an IP network before.

    >
    >>But this is probably urban folklore, and it contradicts other
    >>information (like the one about the PDP-11 above, and about the
    >>VAX being little-endian (was it really?)).

    >
    > I hope that is just more trolling, because it contradicts well known
    > facts that are also easily found with Google.


    (snip refs)

    > Such minor ancient history is not important except to the old farts who
    > were there. However, there's a big difference between ignorance of
    > minor history and flogging invented, easily refuted tales. If you can't
    > be bothered to check well documented minor ancient history when talking
    > to thousands of people, will your designs or code be worth using? Should
    > any of those thousands of people care about anything else you write?


    Now wait a minute. I wasn't trolling; I just retold a story I heard,
    and then immediately pointed out that it was probably false. Why do
    you believe that makes me (of all things) a bad programmer?

    I should have replaced the "probably" with "obviously", "contradicts"
    with "is contradicted by", and I should have mentioned that I might
    have misremembered the story (I heard it at university back in 1992 or
    so, when both students and professors in .se had less accurate
    information about recent computing history).

    Still, I think that it's hard to misread it as if I believed the
    story, and wanted others to believe it.

    /Jorgen

    --
    // Jorgen Grahn \X/ snipabacken.se> R'lyeh wgah'nagl fhtagn!

  20. Re: Is "network" (big endian) byte order beneficial for wire protocols?

    In article ,
    Jorgen Grahn wrote:

    >>>The way I heard it, it was you guys at Sun who discovered the
    >>>endianness issue when making NFS portable, and so had to invent
    >>>htonl() and friends and slap them on everywhere. Nobody had
    >>>encountered little-endian machines on an IP network before.



    >Now wait a minute. I wasn't trolling; I just retold a story I heard,
    >and then immediately pointed out that it was probably false. Why do
    >you believe that makes me (of all things) a bad programmer?


    Those who can't be bothered to make easy checks of their recollections
    of contradictory rumors are unlikely to bother to check that a
    system call or module does what their intuitions say. They're the
    programmers who assume that because write() boundaries are often
    preserved through a TCP connection, record boundaries are always
    preserved, and get belligerent upon discovering that TCP only
    provides a simple byte stream. They don't bother about big endian
    hassles when "designing" their reinventions of ancient wheels like
    `talk`, and when told, rant and rave about the conspiracy of senile
    old UNIX "enginerds" to complicate things.


    >I should have replaced the "probably" with "obviously", "contradicts"
    >with "is contradicted by", and I should have mentioned that I might
    >have misremembered the story (I heard it at university back in 1992 or
    >so, when both students and professors in .se had less accurate
    >information about recent computing history).


    I can't condemn wrong and fading recollections, because I have so many
    of them. I can condemn the spread of urban legends. Even if Sweden
    was so benighted in 1992 as to think the VAX was big endian or that 4.1a
    BSD TCP/IP somehow sent network endian bytes to other TCP/IP implementations
    before NFS was invented, why not make the easy and obvious sanity checks
    *today*? (My vague recollections of email and netnews from Sweden 20
    years ago make it impossible for me to believe that reputable Swedish
    professors thought such stuff.)


    >Still, I think that it's hard to misread it as if I believed the
    >story, and wanted others to believe it.


    If you don't believe something or want others to believe it, why say it?

    A standard propaganda technique is to make a statement and add that it
    contradicts other statements. Human nature ensures that some people
    will repeat the nonsense without the qualifications. That's why so
    many people claim that Barack Obama is a Muslim who does not meet the
    Constitutional requirements for President by virtue of having not been
    born in the U.S. (As can be easily checked, he has been going to
    Christian churches for decades and there is documentation for his birth
    in Hawaii.)


    Vernon Schryver vjs@rhyolite.com
