Hey! Keep Your Hands Out Of My Abstraction Layer! - TCP-IP


Results 21 to 40 of 108

Thread: Hey! Keep Your Hands Out Of My Abstraction Layer!

  1. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!


    Jeff Liebermann wrote:
    > What I find amusing is your interest in "simplifying" the MAC layer
    > and possibly the other layers. The cheapest component in today's chip
    > designs is memory and CPU cycles. The more you do in software, the
    > cheaper the product. Adding features and functions are only limited
    > by available horsepower, memory, and power consumption. Obviously,
    > this would tend to create complex software with the usual bugs which
    > is probably what you're really complaining about. So, a fairly simple
    > idea, like eliminating cables, turns into a complex feature infested
    > pretzel, like Bluetooth. I don't have a problem with this because to
    > implement the same thing in a highly modular fashion is both difficult
    > and expensive in chip count. Moving the applications support to some
    > off-chip device, really raises the costs. Also, it's perfectly
    > acceptable to tolerate some level of complexity, inefficiency,
    > non-elegance, and cute tricks, to obtain sufficient versatility to
    > sell the chips and the technology into a wider market area.


    Actually, if Bluetooth had every feature one could imagine and there
    were zero bugs in it, I still would not want it. What I am complaining
    about is the mess. I hesitate to use the word "complexity", because
    there are many things that are complex, but also beautiful. Bluetooth
    and many of these other protocols are downright ugly, and it is this
    ugliness, lack of coherence, malformation, whatever you want to call
    it...that makes them not useable. It is often the case that the
    "model", if there is one, is defective, and it's almost comical to see
    engineers attempting to speak intelligently about a something that is
    still in a state of a nothing.

    > Ask yourself why TCP/IP won over OSI 7 layer (as implemented by 3Com),


    OSI was more like a mood. I have never seen one line of OSI "code".
    And trust me, I searched long and hard for it. When I read about OSI,
    I get the feeling it was "designed" by people who actually knew what
    they were talking about, but had lost the ability to program computers
    (read: implement their vague vision) several decades earlier.

    > LAN Manager (Microsloth), Netware (Novell), Lantastic (Artisoft), and
    > a mess of smaller networking vendors (MosesNet, Ungermann-Bass,
    > DaVinci, etc)? If elegance of design was the chief requirement, we
    > should all be running OSI 7 layer 3com networks. If performance was a
    > major issue, we should be running Netware. If ease of integration was
    > the main requirement, we should now be running LAN Manager. If
    > simplicity were a requirement, we should be running NETBEUI. If
    > meeting a specific application requirement (i.e. CAN), we should be
    > running one of the minor network vendors products. Yet, TCP/IP has
    > successfully met all these requirements, but admittedly in a mediocre,
    > non-elegant, and compromise fashion. It's not elegant, it's not fast,
    > it doesn't configure easily, and it's not optimized for any particular
    > application. In other words, if you can do everything, then
    > inefficiency in design is more than acceptable.


    But let's face it...if you were to apply the label "universal" to all
    of these protocols, it would only stick on TCP/IP, and not because
    everyone is using TCP/IP, but because TCP/IP is inherently more
    universal than the others. I think we are saying the same thing here.

    > Also, you might consider that limiting applications vendors to
    > anything above the MAC (hardware) layer is not really going to solve
    > many problems. The big problem is applications coexistence. For
    > example, can a bluetooth headset coexist with 802.11, EV-DO, and IrDA
    > communications, in the same box or in the same chip? How do you
    > bridge between them? Will the necessary CPU cycles slow down the user
    > playing games on their cell phone? By standardizing the applications
    > interface along with the communications protocol, many of these
    > interactions are standardized. Removing the API's and interfaces
    > would simply re-create the problem they were intended to solve.


    Apart from power management, I believe there exists a universal model
    by which one can regard these types of hardware. TCP/IP came very
    close to finding it, but stopped short (with ARP, the IP address bound
    to an interface instead of something else, etc.)

    I also agree that CPU cycles and RAM are cheap. I would prefer that
    the upper layers follow the "heavy bitch" model where, whatever state
    is needed to create a solid coherent framework from layers 3 on up,
    that is what should be done. We should get away from attaching IP
    addresses to interfaces. That model alone causes a lot of problems.

    -Le Chaud Lapin-


  2. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    [POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

    On Thu, 18 May 2006 18:08:21 GMT, Robert Redelmeier wrote:

    >In comp.dcom.lans.ethernet Rich Grise wrote in part:
    >> On Thu, 18 May 2006 13:20:43 +0000, Robert Redelmeier wrote:
    >>> "Satisficing". IBM PC architecture is horrible, x86 is
    >>> bletcherous, according to "experts". Yet both persist.

    >>
    >> BTW, I agree, and I have had a modicum of experience with processors. :-)

    >
    >IMHO, a strong case can be made that IBM intended for
    >the original 5120 PC to be a flop. A sacrificial lamb.
    >They did everything against the proven IBM ways to success:
    >outsourced, open architecture, minimal testing/err chk.
    >And chose the i8088, arguably the worst CPU of the day.
    >
    >But, as time has shown, they failed at failure. The PC succeeded!


    That's a joke, right?

    --
    Best regards, SEE THE FAQ FOR ALT.INTERNET.WIRELESS AT
    John Navas

  3. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    In comp.dcom.lans.ethernet John Navas wrote in part:
    > 18:08:21 GMT, Robert Redelmeier wrote:
    >>IMHO, a strong case can be made that IBM intended for
    >>the original 5120 PC to be a flop. A sacrificial lamb.
    >>They did everything against the proven IBM ways to success:
    >>outsourced, open architecture, minimal testing/err chk.
    >>And chose the i8088, arguably the worst CPU of the day.
    >>
    >>But, as time has shown, they failed at failure. The PC succeeded!

    >
    > That's a joke, right?


    No. I'm surprised the tinfoil-hatted crowd hasn't seized on this.
    The PC was an unexpected success. Perhaps it wasn't expected
    to be a success at all! At least at senior levels. It was not
    done using IBM's proven project methods. Large bureaucratic
    organizations, as IBM was at the time, are far more likely to be
    defensive (to the point of sacrificial lambs) than to be innovative.

    -- Robert



  4. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    In article ,
    redelm@ev1.net.invalid says...
    > In comp.dcom.lans.ethernet John Navas wrote in part:
    > > 18:08:21 GMT, Robert Redelmeier wrote:
    > >>IMHO, a strong case can be made that IBM intended for
    > >>the original 5120 PC to be a flop. A sacrificial lamb.
    > >>They did everything against the proven IBM ways to success:
    > >>outsourced, open architecture, minimal testing/err chk.
    > >>And chose the i8088, arguably the worst CPU of the day.
    > >>
    > >>But, as time has shown, they failed at failure. The PC succeeded!

    > >
    > > That's a joke, right?

    >
    > No. I'm surprised the tinfoil-hatted crowd hasn't seized on this.
    > The PC was an unexpected success. Perhaps it wasn't expected
    > to be a success at all! At least at senior levels. It was not
    > done using IBM's proven project methods. Large bureaucratic
    > organizations like IBM was at the time are far more likely to be
    > >>defensive (to the point of sacrificial lambs) than to be innovative.


    This is only half correct. IBM, at the time, was experimenting
    with skunk-works types of projects to try to get around some of the
    bureaucratic stodginess they'd built up. The PC was one of these
    semi-autonomous projects. No, it wasn't expected to turn the world
    on its ear, and likely would have been killed if anyone thought it
    really would. Of course it wasn't done with IBM's "proven project
    methods". That was the point of these independent projects (every
    development lab had them).

    --
    Keith

  5. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!


    Keith writes:
    > This is only half correct. IBM, at the time, was experimenting with
    > skunk-works types of projects to try to get around some of the
    > bureaucratic stodginess they'd built up. The PC was one of these
    > semi-autonomous projects. No, it wasn't expected to turn the world
    > on its ear, and likely would have been killed if anyone thought it
    > really would. Of course it wasn't done with IBM's "proven project
    > methods". That was the point of these independent projects (every
    > development lab had them).


    they were supposed to be independent business units ... and they were
    funded to be lean and mean. however, they frequently conserved costs
    by being co-located at an existing corporate facility ... and had to
    deal with various bureaucratic issues at those locations.

    the frequent response to claiming that you weren't supposed to be
    subject to some bureaucratic process ... was that those rules only
    applied to other bureaucratic processes ... IT DIDN'T apply to THEIR
    bureaucratic processes. when nearly all of the bureaucrats made such
    assertions ... you found that you weren't funded and/or staffed to
    handle such bureaucratic processes.

    on a smaller scale in the 70s, most labs were supposed to set aside
    some portion of their budget for advanced technology projects ... and
    you found various labs sponsoring "adtech" conferences. however, going
    into the late 70s, you found some number of sites heavily using their
    "adtech" resources for fire fights in normal day-to-day product
    operation. as a result there was a dearth of internal adtech
    conferences during the late 70s and early 80s.

    i managed to run one in mar82 out of SJR ... but it had been the first
    in a number of years. minor post mentioning the event and listing the
    CFP
    http://www.garlic.com/~lynn/96.html#4a

    --
    Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

  6. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    On 18 May 2006 11:35:39 -0700, "Le Chaud Lapin"
    wrote:

    >Actually, if Bluetooth had every feature one could imagine and there
    >were zero bugs in it, I still would not want it. What I am complaining
    >about is the mess.


    The messier it is, the better it works. I think I invented that line
    after dealing with far too many complaints about my lack of
    "elegance" in radio design. If it works, ship it. If you ship it,
    it's already obsolete, because the next two generations of replacement
    products are already in the pipeline.

    If elegance were a sellable commodity demanded by consumers and
    OEMs, you would probably have your way. Since it isn't, the result is
    products that are full of compromises, band-aids, and marginal design.
    I don't buy a Bluetooth headset because it's "elegant". I buy it
    because it works adequately (and looks cool). If you don't like a
    product, just wait about 3 months for the replacement. Welcome to the
    21st century.

    Also, please realize that what you consider to be a mess is the end
    result of considerable wrangling, haggling, debates, yelling, and
    minor violence on the part of the various standards committees,
    consortiums, forums (fora?), and conspiracies in smoke-filled hotel
    rooms and overpriced restaurants. (I may have a photo somewhere of
    the restaurant tablecloth on which SNMP was first conceived.) The
    compromises involved are often not optimal for best design practices,
    such as where some member of the consortium refuses to contribute a
    patent or supply sane license terms. Of course, after the job is
    done, there's *ALWAYS* someone who thinks it could have been done
    better, but somehow didn't feel that it was important to either
    participate or offer their brilliance at the time when changes would
    have been possible. At that point, unless you found a fatal flaw or
    major issue, it's done and ossified in stone.

    >It is often the case that the
    >"model", if there is one, is defective, and it's almost comical to see
    >engineers attempting to speak intelligently about a something that is
    >still in a state of a nothing.


    Nifty. A plea for serial development, where each step of the process
    from conception to delivery is performed one step at a time.
    Conceptually, this is the most elegant way, resulting in the fewest
    bugs and complications. Unfortunately, both paying customers and
    boards of directors are rather impatient beasts. They want it *NOW*
    and are not willing to wait for serial development. So, the product
    development cycle degenerates into parallel development, where many
    engineers and programmers are working with vaporware, emulators,
    science fiction technology, real-soon-now component deliveries, and
    impossible schedules. Welcome (again) to the 21st century.

    >> Ask yourself why TCP/IP won over OSI 7 layer (as implemented by 3Com),

    >
    >OSI, was more like a mood. I have never seen one line of OSI "code".


    You haven't looked very hard. In the mid 80's, the OSI layer cake
    model was conceived to replace the lack of elegance found in Unix. A
    major problem with Unix was that networking was largely grafted onto
    the kernel and was a rather bad fit. OSI networking would solve that
    by designing the networking in. It was largely conceived and
    implemented by academics and institutions. There was minimal industry
    participation. The design was de jure (in principle) instead of de
    facto (in practice). In other words, there was little testing.

    If you read the magazines of the day, everyone agreed that TCP/IP was
    on its way out, was badly designed, was not sufficiently scalable to
    survive much longer, and was going to be replaced by OSI model
    networking. Surveys of potential large network customers revealed an
    overwhelming interest in switching to an OSI model network.

    The first product to arrive was by 3Com. 3+Share, 3+Mail,
    3+Remote, etc. was the first OSI (DOS-based) product line. Slowest
    piece of junk I've ever tried to sell and support. Here was the
    alleged answer to all of TCP/IP's lack of elegance, and it looked
    like a giant step backwards.

    The addition of X.400 email addressing and X.500 directory services
    really made life miserable for the customers. Putting the routing
    and header information in the email address was considered clever at
    one time. I guess none of the academics bothered to ask the users.
    X.500 was even worse. Nobody could understand how it worked or how to
    make it useful. It took LDAP, and possibly AD, to mostly clean up the
    applications and user interfaces. X.500 is a great example of extreme
    design elegance resulting in difficult implementations.

    I'm not going to go into what went wrong with OSI networking. Lots of
    problems and corresponding allegations. It doesn't matter. What does
    matter is that OSI didn't solve any of the real user problems and
    didn't meet the market requirements. However, it wasn't a mess and
    was rather elegant.

    >I get the feeling it was "designed" by people who actually knew what
    >they were talking about, but lost the ability to program computers
    >(read implement their vague vision) several decades earlier.


    Well, there's some truth in that. If you have a spare moment, try to
    predict the applications and technology of perhaps 5 to 10 years from
    now. Write it down on a piece of paper and seal it in an envelope to
    be opened 5 to 10 years from now. I actually did that in about 1990
    and found myself wrong on just about everything.

    >But let's face it...if you were to apply the label "universal" to all
    >of these protocols, it would only stick on TCP/IP, and not because
    >everyone is using TCP/IP, but because TCP/IP is inherently more
    >universal than the others. I think we are saying the same thing here.


    Exactly. Broad applications support, extensibility, and the ability
    to abuse the technology to make it do something it was never intended
    to do, are what make a winner. Oh yeah, and the lack of IP
    (intellectual property) and licensing constraints.

    >We should get away from attaching IP
    >addresses to interfaces. That model alone causes a lot of problems.


    Enlighten me. What problems does having individual IPs on each
    interface port cause? It allows me to route between interfaces. It
    allows me to virtualize interfaces as in VPN and devices as in iSCSI.
    Failover is a bit tricky, but has been done with proper hardware
    support to allow moving a MAC address. I kinda like that idea, which
    IPv6 extends by attaching an IP address to everything including the
    kitchen sink. If it's IP addressable, you can talk to it with TCP/IP.
    I fail to see a problem.

    --
    # Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
    # 831-336-2558 jeffl@comix.santa-cruz.ca.us
    # http://802.11junk.com jeffl@cruzio.com
    # http://www.LearnByDestroying.com AE6KS

  7. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    [POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

    In on Thu, 18 May 2006
    20:25:09 GMT, Robert Redelmeier wrote:

    >In comp.dcom.lans.ethernet John Navas wrote in part:
    >> 18:08:21 GMT, Robert Redelmeier wrote:
    >>>IMHO, a strong case can be made that IBM intended for
    >>>the original 5120 PC to be a flop. A sacrificial lamb.
    >>>They did everything against the proven IBM ways to success:
    >>>outsourced, open architecture, minimal testing/err chk.
    >>>And chose the i8088, arguably the worst CPU of the day.
    >>>
    >>>But, as time has shown, they failed at failure. The PC succeeded!

    >>
    >> That's a joke, right?

    >
    >No. I'm surprised the tinfoil-hatted crowd hasn't seized on this.
    >The PC was an unexpected success. Perhaps it wasn't expected
    >to be a success at all! At least at senior levels. It was not
    >done using IBM's proven project methods. Large bureaucratic
    >organizations like IBM was at the time are far more likely to be
    >defensive (to the point of sacrificial lambs) than to be innovative.


    You're apparently unfamiliar with the actual history. It was explicitly set
    up outside of the IBM bureaucracy as a kind of "skunk works" project in order
    to give it the best chance of success -- see:

    Rather than going through the usual IBM design process, which had
    already failed to design an affordable microcomputer (for example the
    failed IBM 5100), a special team was assembled with authorization to
    bypass normal company restrictions and get something to market
    rapidly. This project was given the code name Project Chess.

    The team consisted of just 12 people headed by William Lowe. They
    succeeded -- development of the PC took about a year. To achieve this
    they first decided to build the machine with "off-the-shelf" parts
    from a variety of different original equipment manufacturers (OEMs)
    and countries. Previously IBM had developed their own components.
    Second they decided on an open architecture so that other
    manufacturers could produce and sell compatible machines -- the IBM PC
    compatibles, so the specification of the ROM BIOS was published. IBM
    hoped to maintain their position in the market by royalties from
    licensing the BIOS, and by keeping ahead of the competition.
    --
    Best regards, SEE THE FAQ FOR ALT.INTERNET.WIRELESS AT
    John Navas

  8. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    On Thu, 18 May 2006 14:21:08 -0400, Keith wrote:

    >In article ,
    >redelm@ev1.net.invalid says...
    >> In comp.dcom.lans.ethernet Rich Grise wrote in part:
    >> > On Thu, 18 May 2006 13:20:43 +0000, Robert Redelmeier wrote:
    >> >> "Satisficing". IBM PC architecture is horrible, x86 is
    >> >> bletcherous, according to "experts". Yet both persist.
    >> >
    >> > BTW, I agree, and I have had a modicum of experience with processors. :-)

    >>
    >> IMHO, a strong case can be made that IBM intended for
    >> the original 5120 PC to be a flop. A sacrificial lamb.
    >> They did everything against the proven IBM ways to success:
    >> outsourced, open architecture, minimal testing/err chk.
    >> And chose the i8088, arguably the worst CPU of the day.

    >
    >The 5120 was pretty much a flop. The original PC was the 5150.
    >;-)


    The 5120 was not initially a flop. It was introduced in 1980 and sold
    for about $10,000 per system (including the world's biggest and
    noisiest small office printer). It came with quite a collection of
    transplanted Model 34/36/38 applications for what was at the time
    considered a reasonable price.

    However, one year later, in 1981, IBM introduced the 5150, also known
    as the IBM PC, for a system price of about $2,000. Sales of the 5120
    came to an immediate halt.

    See:
    http://www-03.ibm.com/ibm/history/exhibits/pc/pc_1.html (9 pages)
    for a terse history on the various products.

    As for the x86 architecture being "bletcherous", note that the various
    8088/8086 segmented registers and architecture were optimized by Intel
    to run Pascal, which was deemed to be the elegant language of the
    period. According to the pundits, everyone would soon be programming
    in Pascal because it is sooooooooo elegant. Didn't happen.


    --
    # Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
    # 831-336-2558 jeffl@comix.santa-cruz.ca.us
    # http://802.11junk.com jeffl@cruzio.com
    # http://www.LearnByDestroying.com AE6KS

  9. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    [POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

    On Thu, 18 May 2006 23:32:58 GMT, Jeff Liebermann wrote:

    >As for the x86 architecture being "bletcherous", note that the various
    >8088/8086 segmented registers and architecture were optimized by Intel
    >to run Pascal, which was deemed to be the elegant language of the
    >period. According to the pundits, everyone would soon be programming
    >in Pascal because it is sooooooooo elegant. Didn't happen.


    That's not entirely true. Turbo Pascal (Borland) was an early hit that
    greatly contributed to the success of the IBM PC. But for the Microsoft
    juggernaut it might well have continued to be an important factor.

    --
    Best regards, SEE THE FAQ FOR ALT.INTERNET.WIRELESS AT
    John Navas

  10. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!


    Jeff Liebermann wrote:
    > Enlighten me. What problems does having individual IPs on each
    > interface port cause? It allows me to route between interfaces. It
    > allows me to virtualize interfaces as in VPN and devices as in iSCSI.
    > Failover is a bit tricky, but has been done with proper hardware
    > support to allow moving a MAC address. I kinda like that idea which
    > includes IPv6 attaching an IP address to everything including the
    > kitchen sink. If it's IP addressable, you can talk to it with TCP/IP.
    > I fail to see a problem.


    I've been doing some thinking about how to solve the mobility problem,
    and after much musing, I arrived at the conclusion that network
    interfaces should be regarded as dumb. The protocol stack should query
    each network interface for information at certain critical instants,
    but beyond that, interfaces should be regarded as a means to get data
    from one node to another in a frame, using one of the casting methods
    (uni, multi, broad, etc.).

    Then, if this is done, no IP address would ever be directly associated
    with an interface. Instead, the routing table would maintain the
    mappings. Then if nodes move, you simply update your routing table,
    very rapidly of course, at strategic instances. This assumes an
    assertion I made back in 1990: the day will come when almost all
    computers are powerful enough to maintain routing tables, not just
    routers. I think it is safe to say that that day has come.
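    The model described above can be sketched in a few lines. This is
    purely illustrative (the class and every name in it are invented for
    this post, not any real stack's API): a node's stable identifier is
    never bound to an interface; the routing table holds the current
    (interface, next hop) mapping, and mobility is just an update of that
    mapping.

```python
# Illustrative sketch only: a stable node ID maps to its current
# (interface, next hop); moving a node rewrites the mapping, never the ID.

class RoutingTable:
    def __init__(self):
        self._routes = {}  # node id -> (interface, next hop)

    def update(self, node_id, interface, next_hop):
        # Called at "strategic instances": link up, link down, handoff.
        self._routes[node_id] = (interface, next_hop)

    def lookup(self, node_id):
        return self._routes.get(node_id)

table = RoutingTable()
table.update("node-42", "wlan0", "10.0.0.1")   # node reachable over Wi-Fi
table.update("node-42", "bt0", "192.168.7.1")  # node moved; only the mapping changes
```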

    (I also said the same thing about DNS trees, but that's a different
    topic).

    -Le Chaud Lapin-


  11. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    On Thu, 18 May 2006 23:29:53 GMT, John Navas
    wrote:

    > The team consisted of just 12 people headed by William Lowe. They
    > succeeded -- development of the PC took about a year.

    (...)

    As I recall, there were several IBM groups working on competitive
    models of the IBM home computer in 1980. One was an Apple ][ clone.
    Others were built in traditional IBM internal development models. The
    whole process took about a year to the point where management met to
    decide a single winner. When asked how long it would take to deliver
    the product to manufacturing, the 5150 team was the fastest because of
    its use of off-the-shelf parts and outsourced options (e.g. the Epson
    printer). Never mind elegance, just deliver it NOW.


    --
    # Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
    # 831-336-2558 jeffl@comix.santa-cruz.ca.us
    # http://802.11junk.com jeffl@cruzio.com
    # http://www.LearnByDestroying.com AE6KS

  12. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    On 18 May 2006 17:11:02 -0700, "Le Chaud Lapin"
    wrote:

    >I've been doing some thinking about how to solve the mobility problem,
    >and after much musing,I arrived at the conclusion that network
    >interfaces should be regarded as dumb.


    Brilliant. See:
    http://en.wikipedia.org/wiki/Dumb_network
    and David Isenberg's "Stupid Network" article:
    http://www.isen.com/stupid.html

    Incidentally, DHCP assumes that a network interface is dumb and feeds
    it the numbers it needs to tell it how to act.
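    A "dumb" interface in this sense stores no policy of its own; the
    stack injects every number. A toy illustration (the field names are
    mine, not a structure from RFC 2131):

```python
# Toy illustration of the "dumb interface" view of DHCP: the interface
# brings nothing but a MAC address, and the client hands it every number
# it needs. Field names are invented for this sketch, not from RFC 2131.
from dataclasses import dataclass

@dataclass
class Lease:
    address: str
    netmask: str
    gateway: str
    dns_servers: list
    lease_seconds: int

# What a DHCP client might feed the interface after DISCOVER/OFFER/REQUEST/ACK:
lease = Lease("192.168.1.50", "255.255.255.0", "192.168.1.1",
              ["192.168.1.1", "8.8.8.8"], 86400)
```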

    >This assumes an
    >assertion I made back in 1990: the day will come when almost all
    >computers are powerful enough to maintain routing tables, not just
    >routers. I think it is safe to say that that day has come.


    You're a bit late. Mesh networks have been around for quite a while.
    Ricochet/Metricom had routing in the pole tops in 1986 with UtiliNet.
    Rooftop Networks and many others have done the same with Wi-Fi. The
    entire basis of mesh networking is an intelligent routing mechanism,
    which is often distributed into the nodes (poletops). Nothing new
    here. Your day has arrived about 20 years late.

    --
    # Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
    # 831-336-2558 jeffl@comix.santa-cruz.ca.us
    # http://802.11junk.com jeffl@cruzio.com
    # http://www.LearnByDestroying.com AE6KS

  13. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    Why did TCP/IP win over OSI?

    1) As I understand it, RFCs needed two interoperating implementations
    before becoming standards. ISO standards were set first by well meaning
    individuals, then implementations were attempted.

    2) Many participants in OSI meetings were there to learn, not
    contribute.

    3) OSI standards were therefore pie in the sky - you only learned after
    the standards were set that performance or reliability or other killer
    problems had been baked in, or that there was an overwhelmingly simpler
    way that any successful implementation would have uncovered.

    4) The 7 layer model proved too complex, leading to many inefficiencies
    and complexities (including multiple buffer to buffer copies).

    We all thought that IP would give way to OSI. Then to IPv6 when OSI
    became a clear loser, since the mandate of IPv6 was much smaller, and
    the main difference was a longer address.

    Then DHCP, RFC 1918, NAT, and provider based address allocation solved
    the "1000 computers on the loading dock", "Dentist's office", and
    Internet address space exhaustion problems that IPv6 was intended to
    fix (at least for now).

    So the net was that OSI never offered anything worthwhile, and the
    issues that IPv6 addressed are no longer pressing.

    >it's not optimized for any particular application

    TCP is very carefully optimized for two applications, telnet and ftp,
    and more generally for interactive-style applications and bulk-transfer
    style applications. Nagle, Jacobson, etc.

    It is robust enough that HTTP/HTTPS just uses it without provoking any
    need for further optimization. As does SMB/CIFS. NFS turned out to be
    a big performance loser in WAN applications because the designers
    decided to bypass TCP and use UDP - but only tuned it for LAN
    applications, losing out on all the then current and future
    optimizations.
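    The interactive-vs-bulk split above is still visible in today's
    sockets API: Nagle's algorithm (on by default) coalesces small writes,
    which suits ftp-style bulk transfer, while telnet-style interactive
    applications disable it. A minimal sketch:

```python
# Minimal sketch: disabling Nagle's algorithm for interactive traffic so
# that keystroke-sized writes go out immediately instead of being
# coalesced; bulk-transfer sockets leave the default (Nagle on) in place.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # Nagle off

# Nonzero means Nagle is disabled for this socket.
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```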

    IP was good, but we are certainly seeing a future need for the IPv6
    agenda, plus mobility and a few other issues from
    http://www.geni.net/research.php.

    Wrolf


  14. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    Jeff Liebermann wrote:
    >
    > You're a bit late. Mesh networks have been around for quite a while.
    > Ricochet/Metricom had routeing in the pole tops in 1986 with Utilnet.
    > Rooftop Networks and many others have done the same with Wi-Fi. The
    > entire basis of mesh networking is an intelligent routeing mechanism,
    > which is often distributed into the nodes (poletops). Nothing new
    > here. Your day has arrived about 20 years late.


    I was thinking of something more holistic. A mesh network has N
    nodes, and no matter how large N is, when you have a discussion about
    it, everyone knows that N is a number significantly less than the total
    number of nodes in the Internet.

    A more comprehensive network would be one that provided mobility over
    Wi-Fi, Bluetooth, WiMAX, CDMA transceivers, satellite, infrared,
    lasers, microwave, RS-232 RF transceivers, 802.15?, etc. and integrated
    with any type of wired network, including the proverbial barbed-wire
    network.

    And it would have to work for the whole planet. And it would have to
    maintain its sanity as the mobile travels down the road at 100 km/h,
    as links are broken and re-established rapidly. This would have to
    occur over several hundred kilometers. As long as a connection of any
    kind is available, it should work. It should fail only when there is
    no possibility of a link.

    The session-layer end-to-end connections would have to be maintained
    the whole time (timeouts notwithstanding), as the networks break and
    re-establish.

    It should be possible to make new connections to the mobile node as it
    moves, no matter what mess the mobile node got itself into with its
    multiple wireless interfaces, at any instant.

    The reassociation latency should be so low that VoIP and other
    real-time applications are unaffected.

    It should work with large scale multicasting (> 1,000,000 nodes).

    The mobile node should be able to roam on a golf cart, while the golf
    cart roams on a ship, while the ship moves past a port that provides
    Wi-Fi (or other) access. The routing should remain optimal to any
    location on the planet.

    I could go on, but you get the point. Real mobility has not yet been
    done.

    Should be interesting to watch if GENI gets it right.

    -Le Chaud Lapin-


  15. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    "Wrolf" writes:
    > Why did TCP/IP win over OSI?
    >
    > 1) As I understand it, RFCs needed two interoperating implementations
    > before becoming standards. ISO standards were set first by well meaning
    > individuals, then implementations were attempted.
    >
    > 2) Many participants in OSI meetings were there to learn, not
    > contribute.
    >
    > 3) OSI standards were therefore pie in the sky - you only learned after
    > the standards were set that performance or reliability or other killer
    > problems had been baked in, or that there was an overwhelmingly simpler
    > way that any successful implementation would have uncovered.
    >
    > 4) The 7 layer model proved too complex, leading to many inefficiencies
    > and complexities (including multiple buffer to buffer copies).


    there were also organizational issues ... ISO had a "rule" that you
    couldn't do standards work on protocols that didn't conform to the
    osi model.

    we had taken a high-speed protocol proposal to x3s3.3 (the ansi/iso
    chartered body responsible for network and transport layer
    standards). it would go directly from transport to mac ... including
    support for internetworking. it was rejected because

    1) lan mac interface sits in the middle of layer 3, going directly
    from transport to mac interface violated osi model by bypassing
    layer 3/4 interface.

    2) osi has no provisions at all for supporting internetworking ... so
    anything that supported internetworking violates osi model

    3) lan mac interface in the middle of layer 3 violates osi model,
    so anything that interfaces to lan mac violates osi model.

    http://www.garlic.com/~lynn/subnetwork.html#xtphsp

    in some sense, osi was similar/analogous to the arpanet,
    pre-internetworking ... having relatively homogeneous operation w/o
    gateway and internetworking support.

    i've frequently asserted that the internal network
    http://www.garlic.com/~lynn/subnetwork.html#internalnet

    was larger than the arpanet/internet (from just about the beginning
    until possibly mid-85) because the internal network had a kind of
    gateway support in every node ... which arpanet didn't get until the
    great switch-over to internetworking on 1/1/83.

    misc. recent postings on the subject:
    http://www.garlic.com/~lynn/2006i.html#17 blast from the past on reliable communication
    http://www.garlic.com/~lynn/2006j.html#34 Arpa address
    http://www.garlic.com/~lynn/2006j.html#45 Arpa address
    http://www.garlic.com/~lynn/2006j.html#46 Arpa address
    http://www.garlic.com/~lynn/2006j.html#49 Arpa address
    http://www.garlic.com/~lynn/2006j.html#50 Arpa address
    http://www.garlic.com/~lynn/2006j.html#53 Arpa address

    --
    Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

  16. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    "Wrolf" writes:
    > It is robust enough that HTTP/HTTPS just uses it without provoking any
    > need for further optimization. As does SMB/CIFS. NFS turned out to be
    > a big performance loser in WAN applications because the designers
    > decided to bypass TCP and use UDP - but only tuned it for LAN
    > applications, losing out on all the then current and future
    > optimizations.


    tcp has a minimum 7-packet exchange, vmtp (rfc1045) a minimum
    5-packet exchange, and xtp a 3-packet exchange for reliable
    communication
    http://www.garlic.com/~lynn/subnetwork.html#xtphsp

    most tcp implementations assumed relatively few long-running
    sessions. http totally changed all that. as some of the webservers saw
    increased load, some of them started finding 95+ percent of processor
    cpu being spent scanning the FINWAIT list ... checking for dangling
    packets on termination. it was a crisis period for some webservers and
    required a bit of rework to handle the situation.
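    The FINWAIT-list crunch comes down to simple arithmetic. A rough back-of-envelope sketch (the numbers below are illustrative assumptions of mine, not figures from the post):

```python
# Why short-lived HTTP connections swamped the FINWAIT list: a closed TCP
# connection lingers for roughly 2*MSL before it can be reaped, so the list
# length at steady state is arrival rate times linger time.

MSL_SECONDS = 120          # maximum segment lifetime (a common BSD default)
LINGER = 2 * MSL_SECONDS   # time a closed connection stays on the list

def lingering_connections(connections_per_second: float) -> int:
    """Connections sitting on the FINWAIT/TIME_WAIT list at steady state."""
    return int(connections_per_second * LINGER)

# A telnet-era server with a few long sessions: a tiny list.
print(lingering_connections(0.1))   # -> 24
# An early busy webserver, one connection per HTTP request: tens of
# thousands of entries, scanned linearly on every pass.
print(lingering_connections(100))   # -> 24000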

    we were called in to consult with this small client/server startup
    that wanted to do payments on the server. we worked with them creating
    and deploying a payment gateway and webservers communicating with the
    payment gateway to do financial transactions
    http://www.garlic.com/~lynn/aadsm5.htm#asrn2
    http://www.garlic.com/~lynn/aadsm5.htm#asrn3

    this small client/server startup had this technology called https.
    the way that https was supposed to work for e-commerce was that the
    client typed in a host name url. the browser contacted the server.
    the server returned an ssl domain name digital certificate. the
    browser verified the digital certificate and then cross-checked that
    the domain name (in the url typed in by the user) was the same as the
    domain name in the returned digital certificate. if they matched, then
    there was some assurance that the webserver the person thought they
    were talking to ... was in fact the webserver they were talking
    to. this was a countermeasure to webserver impersonation/spoofing.

    so what happened? relatively quickly, webservers found that https
    reduced their processing capacity by 80-90 percent ... that webservers
    could support 5-6 times more load if they just used http instead of
    https.

    so webservers changed to use plain http for the shopping experience
    and saved https for the checkout/pay experience. the webserver
    provides a button to click for checkout/pay that invokes https.
    the issue here is that the button now provides the url for invoking
    the https process. now it means that the url domain name provided
    in the button from the webserver is matched against the domain
    name in the digital certificate provided by the webserver. misc
    past posts on ssl certificates
    http://www.garlic.com/~lynn/subpubkey.html#sslcert

    the process was supposed to check that the domain name provided by the
    user matches the domain name in the webserver's digital
    certificate. now the process checks that the domain name provided by
    the webserver matches the domain name in the digital certificate
    provided by the webserver. if it were really a fraudulent webserver
    .... only an incompetent crook would put a domain name in their
    checkout button that didn't correspond to the domain name in the
    certificate they supply.
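    The flaw above is easiest to see as code. A minimal sketch (function and variable names are mine, purely illustrative; this is not how any real browser is implemented):

```python
# The browser's check never changed: domain in the URL == domain in the cert.
def cert_check(url_domain: str, cert_domain: str) -> bool:
    """Match the domain the browser was pointed at against the certificate."""
    return url_domain == cert_domain

# Intended e-commerce flow: the *user* supplies the URL, so a match ties the
# server's certificate to what the user believes they are talking to.
user_typed = "shop.example.com"
assert cert_check(user_typed, "shop.example.com")

# Actual flow after http-shopping/https-checkout: the *server* supplies the
# URL via its own checkout button. A fraudulent server just points the button
# at a domain it holds a perfectly valid certificate for, and the check
# passes -- it now only proves the server matches itself.
button_url = "evil-pay.example.net"    # chosen by the (possibly bogus) server
servers_cert = "evil-pay.example.net"  # a valid certificate for that domain
assert cert_check(button_url, servers_cert)  # passes; user's intent is gone
```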

    misc. postings discussing the change in SSL process negating the
    original assumptions about what integrity was provided by SSL.
    http://www.garlic.com/~lynn/aadsm19.htm#26 Trojan horse attack involving many major Israeli companies, executives
    http://www.garlic.com/~lynn/aadsm20.htm#6 the limits of crypto and authentication
    http://www.garlic.com/~lynn/aadsm20.htm#9 the limits of crypto and authentication
    http://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
    http://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
    http://www.garlic.com/~lynn/aadsm21.htm#36 browser vendors and CAs agreeing on high-assurance certificates
    http://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
    http://www.garlic.com/~lynn/aadsm21.htm#40 X.509 / PKI, PGP, and IBE Secure Email Technologies
    http://www.garlic.com/~lynn/2003n.html#10 Cracking SSL
    http://www.garlic.com/~lynn/2005m.html#0 simple question about certificate chains
    http://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
    http://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
    http://www.garlic.com/~lynn/2005u.html#20 AMD to leave x86 behind?
    http://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
    http://www.garlic.com/~lynn/2006h.html#34 The Pankian Metaphor

    --
    Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

  17. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    On 18 May 2006 18:45:48 -0700, "Le Chaud Lapin"
    wrote:

    >Jeff Liebermann wrote:
    >>
    >> You're a bit late. Mesh networks have been around for quite a while.
    >> Ricochet/Metricom had routeing in the pole tops in 1986 with Utilnet.
    >> Rooftop Networks and many others have done the same with Wi-Fi. The
    >> entire basis of mesh networking is an intelligent routeing mechanism,
    >> which is often distributed into the nodes (poletops). Nothing new
    >> here. Your day has arrived about 20 years late.


    >I was thinking of something more wholistic. A mesh network has N
    >nodes, and no matter how large N is, when you have a discussion about
    >it, everyone knows that N is a number significantly less than the total
    >number of nodes in the Internet.


    Welcome to capacity planning, queuing theory, and network simulation.
    I used to do that 15 years ago (when things were simpler). The
    basics: Every system has its limits. Systems can be broken by either
    starving them or overloading them. Each component of a system has an
    individual and often independent overload point. The ideal and most
    efficient system is one where all the overload points occur
    simultaneously. Overload capacity planning should be limited by
    economics, not technology. That's all about as "wholistic" (it's
    holistic) as I could possibly imagine. Now, how is a new and improved
    protocol stack going to fit into such a system? Are you prepared to
    optimize the components of your "wholeistic" solution? All I see is
    trading one problem for another.

    >A more comprehensive network would be one that provided mobility over
    >Wi-Fi, Bluetooth, Wi-Max, CDMA transceivers, satellite, infrared,
    >lasers, microwave, RS-232 RF transceivers, 802.15?, etc. and integrated
    >with any type of wired network, including the proverbial barbed-wire
    >network.


    Is being "comprehensive" a customer-defined requirement? I don't
    think I've ever seen the term used in an RFC, proposal, bid, or
    contract. The closest approximation is SDR (software defined radio),
    where the ability to configure a radio in software allows rapid
    switching between the aforementioned modes. To the best of my limited
    knowledge, nobody is proposing that the SDR radio be optimized for any
    of the items you've listed, or that it be able to handle all of them.
    Comprehensive is beginning to sound more like "conglomerated".

    >And it would have to work for the whole planet.


    Suggestion: Instead of redefining the protocols and network model,
    how about simply defining the RF communications technology that would
    satisfy all the previously mentioned requirements. It would work at
    any frequency, over any medium, comply with any FCC/ISO regulation,
    and operate with any network protocol. It could be used for cellular,
    mobile data, HF communications, FSO (free space optics), or Infra-red.
    In other words, why stop at "fixing" the protocol stack when you can
    redefine the communications medium? Totally "wholeistic" methinks.
    All you have to do is change literally everything and convince a few
    dozen regulatory fiefdoms that everything they're doing is wrong.

    >And it would have to
    >maintain its sanity as the mobile travels down the road at 100km/hr, as
    >links are broken and reestablish rapidly.


    Well, such an error detection and correction mechanism will be very
    useful on noisy and unreliable channels as found in most RF
    environments. However, it would be massive overkill and a huge waste in
    an environment that is fairly noise free, such as telephony. You
    could use FEC (forward error correcting) which is great for one way
    satellite channels, but horribly inefficient in its use of bandwidth
    in other applications. Of course, this new protocol could be
    self-adjusting, self-configuring, self-healing, and self-correcting.
    However, this does not go well with your interest in reducing
    complexity.

    >This would have to occur
    >over several hundred kilometers.


    .... and also work over a few meters, as in current indoor Wi-Fi
    LANs. I guess self-adjusting timing should do that.

    >As long as a connection of any kind
    >is available, it should work.


    Reality check. There are many different types of noise and interference.
    Building a universal noise and interference reduction mechanism is not
    a trivial exercise. If you don't make an effort to reduce noise and
    interference, you may have a "connection" of sorts, but no intact
    packets will arrive.

    >It should only not work when there is no
    >possibility for a link.


    So, it works at any S/N (signal-to-noise ratio)? Impressive if you
    can do it. BER (bit error rate) is totally dependent on the
    communications channel S/N ratio. The faster you shovel data through
    the communications channel, the higher the S/N ratio has to be in
    order to maintain a usable BER. For 802.11, the BER standard
    is 1 glitch in 10^5 or 10^6 bits. Some vendors use a PER (packet
    error rate) of 10^4 or 10^5 packets. You can communicate in very poor
    S/N ratios, but you won't be going very fast (i.e. PSK31). In
    wireless, *EVERYTHING* is a tradeoff.
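    The BER figures above translate directly into packet loss. A quick sketch, assuming independent bit errors and no retries or FEC (real 802.11 links add both, so these numbers are a lower bound on the pain, not a measurement):

```python
# Probability an entire packet survives a channel with a given bit error rate.
def packet_success(ber: float, packet_bytes: int) -> float:
    """P(all bits arrive intact), assuming independent bit errors."""
    bits = packet_bytes * 8
    return (1.0 - ber) ** bits

# A 1500-byte frame at the 802.11-ish BER figures quoted above:
print(round(packet_success(1e-6, 1500), 3))  # -> 0.988
print(round(packet_success(1e-5, 1500), 3))  # -> 0.887
# Degrade the channel one more order of magnitude and most packets die:
print(round(packet_success(1e-4, 1500), 3))  # -> 0.301
```

    Which is the tradeoff in miniature: push the data rate up on a fixed channel and the effective BER rises, so whole-packet survival collapses exponentially with packet size.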

    >The session-layer end-to-end connections would have to be maintained
    >the whole time (timeouts not withstanding), as the networks are broken
    >and restablish.


    Sounds more like you're complaining about your cell phone service.
    Cellular can usually maintain a connection until something changes.
    Usually, it's the driver moving out of range of one cell site, and
    discovering the next cell site has no available channels. The same
    thing happens in various ways on a wireless channel. As long as
    nothing changes, you'll stay connected. Add more users, move a few cm
    in any direction, nuke your dinner in the microwave, and the comm
    channel turns useless, and you drop the connection. I don't see
    how you're going to maintain a connection in the presence of the
    multitude of things that can change the channel S/N ratio.

    >It should be possible to make new connections to the mobile node as it
    >moves, no matter what mess the mobile node got itself into with its
    >multiple wireless interfaces, at any instant.


    I just can't wait until someone tries UWB (3.1 to 10.6GHz) while
    moving and re-discovers Doppler shift. Wheeee. Of course, your
    self-adjusting protocol will instruct the MAC layer to tell the radio
    to move slightly in frequency for one user.

    >The reassociation latency should be so low that VOIP and other
    >real-time applications should be uneffected.


    That's what 802.11r (fast roaming) is trying to accomplish. It will
    require synchronization between access points. That's easy enough in
    a homogeneous network of almost identical hardware. That's impossible
    with a tangle of proprietary devices, separated by random routers and
    cross-connected only through the internet. For example, the total
    latency on my DSL connection is about 30msec. If I had to send a few
    packets to my neighbor's access point to allow you to fast switch from
    mine to theirs, it would take 60msec for the first packet to arrive.
    Assuming a mess of packets are exchanged, it could easily take perhaps
    500msec to switch. That's acceptable if you don't mind losing 2
    syllables.
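    The handoff arithmetic above can be sketched out. Only the 30 ms one-way DSL latency comes from the post; the exchange count is my guess at what lands near the quoted ~500 ms:

```python
# Handoff delay when two APs can only talk across their respective DSL links.
ONE_WAY_MS = 30                    # one DSL hop, per the post

# A packet from my AP to the neighbor's AP crosses two DSL hops:
first_packet_ms = 2 * ONE_WAY_MS
print(first_packet_ms)             # -> 60 ms before the first packet arrives

def handoff_ms(exchanges: int) -> int:
    """Total delay if the handoff needs n request/response exchanges."""
    return exchanges * 2 * first_packet_ms

print(handoff_ms(4))               # -> 480 ms, near the ~500 ms estimate
```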

    >It should work with large scale multicasting (> 1,000,000 nodes).


    Ah, video over the internet. I can't wait. Reality check. Work out
    the numbers for HDTV over DSL (at 1.5Mbits/sec).
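    Working out the invited numbers (the 19.4 Mbit/s ATSC broadcast rate is my reference figure, not from the post):

```python
# HDTV over a 1.5 Mbit/s DSL line: how short does the pipe fall?
HDTV_MBPS = 19.4   # MPEG-2 HD broadcast stream rate (ATSC)
DSL_MBPS = 1.5     # DSL rate given in the post

shortfall = HDTV_MBPS / DSL_MBPS
print(round(shortfall, 1))  # -> 12.9: over an order of magnitude short
```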

    >The mobile node should be able to roam on a golf cart, while the golf
    >cart roams on a ship, while the ship moves past a port that provides
    >Wi-Fi (or other) access. The routing should remain optimal to any
    >location on the planet.


    Ah, self-configuring again. Don't forget self authorization
    (password) and self-authenticating (RADIUS) along with the usual
    privacy mechanisms. It's been done already so there's nothing new
    here. Whether any of the vendors want to do this is another question.
    I'm sure the ISP's have some rather interesting opinions on the merits
    of having your cell phone update their routing.

    >I could go on, but you get the point. Real mobility has not yet been
    >done.


    Most of the cellular wireless data providers have done quite well with
    wireless mobility.

    >Should be interesting to watch if GENI gets it right.


    They'll succeed only if they can offer at least a 2 times feature
    benefit, 2 times cost benefit, or 2 times performance benefit. Doing
    the same old things in a new and more elegant way is not worth the
    effort. Actually, I'm not sure 2 times is sufficient. I think the old
    rule of thumb was that a competitor to IBM had to have a 4x advantage
    or nobody would buy.

    --
    # Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
    # 831-336-2558 jeffl@comix.santa-cruz.ca.us
    # http://802.11junk.com jeffl@cruzio.com
    # http://www.LearnByDestroying.com AE6KS

  18. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    Le Chaud Lapin wrote:
    > Rich Grise wrote:
    >> So, do you intend to hold everyone at gunpoint, to ensure that they follow
    >> your standards?

    >
    > On the contrary, I would let the standards fight for best of breed.
    >
    > That's essentially what happens now with computers. Many CPU's, many
    > architectures, C is doing just great against Lisp (for example).
    >
    > I would love to see a new, programmable, USB-based RF transceiver.
    > It's job would be to simply transmit and receive frames, perhaps with
    > link-layer addresses encoded, much like Ethernet is done on the wire.
    > I would keep the collision-avoidance technology, but beyond that, I
    > would do nothing else.
    >
    > I am almost certain that if someone were to do this, the network-layer
    > people would figure out how to use it.
    >

    Most of these people have jobs at companies and are expected to work on
    what they are assigned, not what they feel is "neat".


  19. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    Jeff Liebermann wrote:
    > On 18 May 2006 11:35:39 -0700, "Le Chaud Lapin"
    > wrote:
    >
    >> Actually, if Bluetooth had every feature one could imagine and there
    >> were zero bugs in it, I still would not want it. What I am complaining
    >> about is the mess.

    >
    > The messier it is, the better it works. I think I invented that line
    > after dealing with far too many complaints about my lack of
    > "elegance" in radio design. If it works, ship it. If you ship it,
    > it's already obsolete because the next two generations of replacement
    > products are already in the pipeline.
    >
    > If elegance were a sellable commodity demanded by consumers and
    > OEM's, you would probably have your way. Instead, the market yields
    > products that are full of compromises, band-aids, and marginal
    > design. I don't buy a Bluetooth headset because it's "elegant". I
    > buy it because it works adequately (and looks cool). If you don't
    > like a product, just wait about 3 months for the replacement.
    > Welcome to the 21st century.
    >
    > Also, please realize that what you consider to be a mess is the end
    > result of considerable wrangling, haggling, debates, yelling, and
    > minor violence on the part of the various standards committees,
    > consortiums, forums (fora?), and conspiracies in smoke filled hotel
    > rooms and overpriced restaurants. (I may have a photo somewhere of
    > the restaurant table cloth on which SNMP was first conceived). The
    > compromises involved are often not optimal for best design practices,
    > such as where some member of the consortium refuses to contribute a
    > patent or supply sane license terms. Of course, after the job is
    > done, there's *ALWAYS* someone who thinks it could have been done
    > better, but somehow didn't feel that it was important to either
    > participate or offer their brilliance at the time when changes would
    > have been possible. At that point, unless you found a fatal flaw or
    > major issue, it's done and ossified in stone.
    >
    >> It is often the case that the
    >> "model", if there is one, is defective, and it's almost comical to see
    >> engineers attempting to speak intelligently about a something that is
    >> still in a state of a nothing.

    >
    > Nifty. A plea for serial development, where each step of the process
    > from conception to delivery is performed one step at a time.
    > Conceptually, this is the most elegant way, resulting in the fewest
    > bugs and complications. Unfortunately, both paying customers and
    > boards of directors are rather impatient beasts. They want it *NOW*
    > and are not willing to wait for serial development. So, the product
    > development cycle degenerates into parallel development, where many
    > engineers and programmers are working with vaporware, emulators,
    > science fiction technology, real-soon-now component deliveries, and
    > impossible schedules. Welcome (again) to the 21st century.
    >
    >>> Ask yourself why TCP/IP won over OSI 7 layer (as implemented by 3Com),

    >> OSI, was more like a mood. I have never seen one line of OSI "code".

    >
    > You haven't looked very hard. In the mid 80's, the OSI layer cake
    > model was conceived to replace the lack of elegance found in Unix. A
    > major problem with Unix was that networking was largely grafted onto
    > the kernel and was a rather bad fit. OSI networking would solve that
    > by designing the networking in. It was largely conceived and
    > implemented by academics and institutions. There was minimal industry
    > participation. The design was de jure (in principle) instead of de
    > facto (in practice). In other words, there was little testing.
    >
    > If you read the magazines of the day, everyone agreed that TCP/IP was
    > on its way out, was badly designed, was not sufficiently scalable to
    > survive much longer, and was going to be replaced by OSI model
    > networking. Surveys of potential large network customers revealed an
    > overwhelming interest in switching to an OSI model network.
    >
    > The first product to arrive was from 3Com. 3Com's 3+Share, 3+Mail,
    > 3+Remote, etc. were the first OSI (DOS-based) products. Slowest
    > pieces of junk I've ever tried to sell and support. Here was the
    > alleged answer to TCP/IP's lack of elegance, and it looked like a
    > giant step backwards.
    >
    > The addition of X.400 email addressing and X.500 directory services
    > really made life miserable for the customers. Putting the routeing
    > and header information in the email address was considered clever at
    > one time. I guess none of the academics bothered to ask the users.
    > X.500 was even worse. Nobody could understand how it worked or how to
    > make it useful. It took LDAP and possibly AD to mostly clean up the
    > applications and user interfaces. X.500 is a great example of extreme
    > design elegance resulting in difficult implementations.
    >
    > I'm not going to go into what went wrong with OSI networking. Lots of
    > problems and corresponding allegations. It doesn't matter. What does
    > matter is that OSI didn't solve any of the real user problems and
    > didn't meet the market requirements. However, it wasn't a mess and
    > was rather elegant.
    >
    >> I get the feeling it was "designed" by people who actually knew what
    >> they were talking about, but lost the ability to program computers
    >> (read implement their vague vision) several decades earlier.

    >
    > Well, there's some truth in that. If you have a spare moment, try to
    > predict the applications and technology of perhaps 5 to 10 years from
    > now. Write it down on a piece of paper and seal it in an envelope to
    > be opened 5 to 10 years from now. I actually did that in about 1990
    > and found myself wrong on just about everything.
    >

    I get the distinct feeling that Le Chaud Lapin has never participated in
    a large scale "standards" effort. (Along with a lot of other folks.) I
    was involved in efforts to standardize transactions between insurance
    agencies and companies in the 80s. (Independent Property Casualty
    Agencies and Companies to be precise. And trust me, it matters.) Typical
    ground rules of these processes are:

    Everyone is dedicated to mom, apple pie, and the betterment of mankind.
    Everyone always tells the truth.
    No one is out to sabotage a competitor.
    Everyone is competent to discuss the matters at hand.

    And if you believe this, I have a bridge ....

    Thus standards look like the mess they are. When companies in your
    industries who've never implemented even a trial of what you're working
    on and have no plans to do so keep wanting to study things forever or
    include things that will never be used, well at some point you have to
    decide if you are going to fight for a pure great design or one you know
    you can implement and get working.

    It's an ugly, messy, cynical process behind the scenes with lots of
    fake camaraderie for the "public face".

  20. Re: Hey! Keep Your Hands Out Of My Abstraction Layer!

    Don Taylor wrote:
    > unoriginal_username@yahoo.com writes:
    >> I'm reading the specification for 802.11, and I cannot help but wonder
    >> why so many network standards seem to enjoy transgressing the
    >> boundaries of their abstraction layers.

    > ...
    >> Perhaps that's the problem. Perhaps we should not be putting so many
    >> "services" in the hardware.

    > ...
    >> I think if each layer were approached with this mindset, we'd actually
    >> do better than we have done so far.

    >
    > Try being a member of a standards committee sometime.
    > Or perhaps it is better to never be a member of one of those.
    > It can be a very frustrating time.


    One of the understatements of the last few 1000 years.
