Suggestions for custom application-layer protocol? - Embedded


Thread: Suggestions for custom application-layer protocol?

  1. Re: Suggestions for custom application-layer protocol?

    On Fri, 27 May 2005 14:59:49 +0000, Grant Edwards wrote:

    > On 2005-05-27, James Antill wrote:
    >
    >>> If you're using a text-based protocol, you don't need a header
    >>> with a payload count to tell you where the message ends. Use
    >>> an end-of-message character or string. The cr/lf pair is used
    >>> as the end-of-message delimiter by a lot of text-based
    >>> runs-on-top-of-TCP protocols. That approach works wonderfully
    >>> with line-oriented high-level text-handling libraries and
    >>> utilities -- you can do all sorts of useful stuff using nothing
    >>> but a few utilities like netcat, expect, and so on.

    >>
    >> It interacts well with telnet,

    >
    > If you handle IAC sequences -- at least well enough to ignore
    > them.


    I don't know of any HTTP, SMTP, NNTP, POP3, IMAP, etc. server that does
    anything with IAC sequences. Generally, servers that care initiate by
    sending IAC commands (EOR, NAWS, TTYPE, etc.) to the client.

    >> which is about the best thing that can be said for it.

    >
    > And it's a pretty big thing, in my experience.


    It's "big" because it's there already, so no one has to write a
    simple client to test a service with. If SMTP had been written
    using netstrings, then a telnetstr command would be available
    everywhere and this "big" thing would be worth nothing.
    A similar thing is happening with HTTP: it's not "good"[1], but there are
    a significant number of tools that understand an HTTP stream ... so people
    hack it into places it shouldn't be so they can leverage those tools.

    >> There are large problems with how you limit "too long" lines,

    >
    > What problems?


    Say I connect to an SMTP service and keep sending 'a'; the other end has
    to keep accepting data and parsing it up to the amount it limits itself to
    accepting for a single line. With something like a netstring, the remote
    end can decide within 10 characters whether it's going to just drop the
    connection/message or handle it.
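    The early-drop property described above is easy to see in code. Below is a
    minimal sketch of a netstring parser in C (the function name and the
    max_len policy are illustrative, and the leading-zero rule from djb's
    netstring spec is omitted for brevity):

```c
#include <stddef.h>

/* Parse one netstring ("<len>:<data>,") from buf.
 * Returns the payload length and points *payload at the data,
 * or -1 on error -- including a declared length above max_len,
 * which lets the receiver reject an oversized message after
 * reading only the length prefix. */
long netstring_parse(const char *buf, size_t buflen, size_t max_len,
                     const char **payload)
{
    size_t len = 0, i = 0;

    if (i >= buflen || buf[i] < '0' || buf[i] > '9')
        return -1;                       /* must start with a digit */
    while (i < buflen && buf[i] >= '0' && buf[i] <= '9') {
        len = len * 10 + (size_t)(buf[i] - '0');
        if (len > max_len)
            return -1;                   /* early drop: declared length too big */
        i++;
    }
    if (i >= buflen || buf[i] != ':')
        return -1;
    i++;                                 /* skip ':' */
    if (buflen - i < len + 1 || buf[i + len] != ',')
        return -1;                       /* payload incomplete or unterminated */
    *payload = buf + i;
    return (long)len;
}
```

    A receiver handed "999999:..." against a 100-byte limit can close the
    connection after a handful of bytes, instead of buffering an arbitrarily
    long line the way an SMTP-style line parser must.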

    >> and what happens when you hit a CR or LF on its own.

    >
    > Yes, that can be a problem if you have to be able to include arbitrary
    > strings in the message body. I was under the impression that this
    > wasn't the case for the OP's application. I could be wrong.


    It isn't just a problem of arbitrary data, but of different
    clients/servers parsing the same data in different ways. This is
    obviously a bigger problem the more implementations of clients and servers
    you have, and depends on how compatible you want to be ... but even with
    just one client and one server it wouldn't be unusual to have silent bugs
    where someone typed \n\r or just \n instead of \r\n at one point in the
    code.
    At that point a third application implementing the protocol, or a
    supposedly compatible change to either the server or client, can bring out
    bugs (often in the edge cases) ... something like netstrings is much
    less likely to have this kind of problem.

    [1] http://www.and.org/texts/server-http

    --
    James Antill -- james@and.org
    http://www.and.org/vstr/httpd


  2. Re: Suggestions for custom application-layer protocol?

    On Fri, 27 May 2005 15:07:15 -0000, Grant Edwards
    wrote:

    >A byte stream is a byte stream. The serial (as in RS-232) byte
    >stream isn't reliable, but I can't see any difference between a
    >serial comm link and a TCP link when it comes to message framing.


    Except for Modbus RTU style framing, in which the time gaps between
    bytes _are_ the actual frame delimiters. Maintaining these over a
    TCP/IP link would be a bit problematic :-).

    Paul


  3. Re: Suggestions for custom application-layer protocol?

    On 2005-05-27, Paul Keinanen wrote:
    > On Fri, 27 May 2005 15:07:15 -0000, Grant Edwards
    > wrote:
    >
    >>A byte stream is a byte stream. The serial (as in RS-232) byte
    >>stream isn't reliable, but I can't see any difference between a
    >>serial comm link and a TCP link when it comes to message framing.

    >
    > Except for Modbus RTU style framing, in which the time gaps between
    > bytes _are_ the actual frame delimiters. Maintaining these over a
    > TCP/IP link would be a bit problematic :-).


    True. Modbus RTU's 3.5-byte-time delimiter sucks. You're
    screwed even if all you want to do is use the RX FIFO in a
    UART.
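    For reference, the gap in question is concrete arithmetic: 3.5 character
    times at 11 bits per character (start + 8 data + parity + stop). A small
    helper, sketched under the assumption of the fixed 1.75 ms floor that
    later revisions of the Modbus serial-line spec mandate above 19200 baud:

```c
#include <stdint.h>

/* Modbus RTU inter-frame gap: 3.5 character times, with an 11-bit
 * character (start + 8 data + parity + stop). Later versions of
 * the spec fix the gap at 1.75 ms above 19200 baud. Result in
 * microseconds. */
uint32_t modbus_t35_us(uint32_t baud)
{
    if (baud > 19200)
        return 1750;
    return (uint32_t)((3.5 * 11.0 * 1000000.0) / baud);
}
```

    At 9600 baud that is roughly 4 ms of mandated silence -- exactly the kind
    of interval a UART RX FIFO hides from software.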

    --
    Grant Edwards -- grante at visi.com
    Yow! It's a lot of fun being alive... I wonder if my bed is made?!?

  4. Re: Suggestions for custom application-layer protocol?

    Grant Edwards wrote:
    > On 2005-05-27, Paul Keinanen wrote:
    >
    >>On Fri, 27 May 2005 15:07:15 -0000, Grant Edwards
    >>wrote:
    >>
    >>
    >>>A byte stream is a byte stream. The serial (as in RS-232) byte
    >>>stream isn't reliable, but I can't see any difference between a
    >>>serial comm link and a TCP link when it comes to message framing.

    >>
    >>Except for Modbus RTU style framing, in which the time gaps between
    >>bytes _are_ the actual frame delimiters. Maintaining these over a
    >>TCP/IP link would be a bit problematic :-).

    >
    >
    > True. Modbus RTU's 3.5 byte time delimiter sucks. You're
    > screwed even if all you want to do is use the RX FIFO in a
    > UART.
    >


    Hardly any of the Modbus/RTU programs on PCs handle the
    frame timing correctly.

    Modbus framing is one of the best examples of how framing
    should not be done. The only thing that competes with
    it is the idea of tunneling Modbus datagrams with TCP
    instead of UDP.

    --

    Tauno Voipio
    tauno voipio (at) iki fi


  5. Re: Suggestions for custom application-layer protocol?

    On 2005-05-28, Tauno Voipio wrote:

    >>>Except for Modbus RTU style framing, in which the time gaps
    >>>between bytes _are_ the actual frame delimiters. Maintaining
    >>>these over a TCP/IP link would be a bit problematic :-).

    >>
    >> True. Modbus RTU's 3.5 byte time delimiter sucks. You're
    >> screwed even if all you want to do is use the RX FIFO in a
    >> UART.

    >
    > Hardly any of the Modbus/RTU programs on PCs handle the
    > frame timing correctly.


    As long as they're the master, or it's a full-duplex bus, they
    can get away with it. Being a slave on a half-duplex bus
    (everybody sees both commands and responses) is where the
    problems usually happen.

    I once talked to somebody who used an interesting scheme to
    detect Modbus RTU messages. He ignored timing completely (so
    he could use HW FIFOs), and decided that he would just monitor
    the receive bytestream for any block of data that started with
    his address and had the correct CRC at the location indicated
    by the bytecount. It meant that he had to have a receive
    buffer twice as long as the max message and keep multiple
    partial CRCs running, but I guess it worked.

    > Modbus framing is one of the best examples how framing should
    > not be done.


    I'd have to agree that Modbus RTU's framing was a horrible
    mistake. ASCII mode was fine since it had unique
    start-of-message and end-of-message delimiters.

    > The only thing that competes with it is the idea of tunneling
    > Modbus datagrams with TCP instead of UDP.


    I don't even want to know how they did message delimiting in
    Modbus over TCP...


  6. Re: Suggestions for custom application-layer protocol?

    On Sat, 28 May 2005 14:08:17 -0000, Grant Edwards
    wrote:

    >On 2005-05-28, Tauno Voipio wrote:


    >> Hardly any of the Modbus/RTU programs on PCs handle the
    >> frame timing correctly.

    >
    >As long as they're the master, or it's a full-duplex bus, they
    >can get away with it. Being a slave on a half-duplex bus
    >(everybody sees both commands and responses) is where the
    >problems usually happen.


    Being a slave on a multidrop network is the problem, a half-duplex
    point to point connection is not critical.

    >I once talked to somebody who used an interesting scheme to
    >detect Modbus RTU messages. He ignored timing completely (so
    >he could use HW FIFOs), and decided that he would just monitor
    >the receive bytestream for any block of data that started with
    >his address and had the correct CRC at the location indicated
    >by the bytecount. It meant that he had to have a receive
    >buffer twice as long as the max message and keep multiple
    >partial CRCs running, but I guess it worked.


    As long as you are not using broadcast messages, it should be
    sufficient for a multidrop slave to have just one CRC running (and
    check it after each byte received). This calculation should be done on
    all frames, not just those addressed to you.

    From time to time, a message frame may be corrupted while the master is
    communicating with another slave; after that, your slave can no longer
    make any sense of the incoming bytes, so it might as well ignore
    them and set a timeout that is shorter than the master's retransmission
    timeout.

    As long as the master communicates with other slaves, your slave
    cannot make sense of what is going on. When the master addresses
    your node, the first request will be lost, and the master waits for a
    response until the retransmission timeout expires.

    The slave timeout will expire before this, synchronisation is regained,
    and your slave is now eagerly waiting for the second attempt of the
    request from the master. If there are multiple slaves that have lost
    synchronisation, they will all regain it when the addressed slave
    fails to respond to the first request.

    If the bus is so badly corrupted that all slaves get a bad frame, the
    communication will time out anyway, so all slaves will regain
    synchronisation immediately. Only when the master and the actively
    communicating slave see no CRC error but your slave detects one (e.g. on
    a badly terminated bus) will your slave stay out of sync, until the
    master addresses your slave and the timeout occurs.

    Thus, the only real harm is that broadcasts cannot be used, as all
    out-of-sync slaves would lose them.

    >> The only thing that competes with it is the idea of tunneling
    >> Modbus datagrams with TCP instead of UDP.

    >
    >I don't even want to know how they did message delimiting in
    >Modbus over TCP...


    Since Modbus over TCP is really point to point, the situation is
    similar to the serial point to point case.

    Some converter boxes also convert Modbus RTU to Modbus/TCP before
    sending it over the net. The Modbus/TCP protocol contains a fixed
    header (including a byte count) and the variable length RTU frame
    (without CRC).
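    That fixed header (known as the MBAP header: transaction id, protocol id,
    a byte count covering the unit id plus the PDU, then the CRC-less
    RTU-style PDU) can be sketched as a small encoder. The function name is
    illustrative:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Encode a Modbus/TCP frame: the fixed MBAP header followed by the
 * RTU-style PDU with no CRC -- TCP's own checksumming replaces it.
 * All header fields are big-endian on the wire. out must hold
 * 7 + pdu_len bytes. */
size_t mbap_encode(uint8_t *out, uint16_t txn, uint8_t unit,
                   const uint8_t *pdu, uint16_t pdu_len)
{
    out[0] = (uint8_t)(txn >> 8);            /* transaction id */
    out[1] = (uint8_t)(txn & 0xFF);
    out[2] = 0;                              /* protocol id = 0 (Modbus) */
    out[3] = 0;
    out[4] = (uint8_t)((pdu_len + 1) >> 8);  /* byte count = unit id + PDU */
    out[5] = (uint8_t)((pdu_len + 1) & 0xFF);
    out[6] = unit;                           /* unit id (RTU slave address) */
    memcpy(out + 7, pdu, pdu_len);
    return (size_t)7 + pdu_len;
}
```

    The explicit byte count is what makes message delimiting over the TCP
    stream trivial, unlike the timing gaps of the serial RTU framing.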

    The strange thing is why they created Modbus/TCP for running the
    Modbus protocol over Ethernet instead of Modbus/UDP.

    However, googling for "Modbus/UDP" gives quite a lot of hits, so quite
    a few vendors have implemented some kind of Modbus over UDP protocols
    in their products in addition to Modbus/TCP.

    Paul


  7. Re: Suggestions for custom application-layer protocol?

    On 2005-05-28, Paul Keinanen wrote:

    >>> Hardly any of the Modbus/RTU programs on PCs handle the
    >>> frame timing correctly.

    >>
    >>As long as they're the master, or it's a full-duplex bus, they
    >>can get away with it. Being a slave on a half-duplex bus
    >>(everybody sees both commands and responses) is where the
    >>problems usually happen.

    >
    > Being a slave on a multidrop network is the problem, a
    > half-duplex point to point connection is not critical.


    In my experience the problems usually occur only on a
    half-duplex network. If the slave doesn't see the other slaves'
    responses, it has no problem finding the start of commands from
    the host.

    >>I once talked to somebody who used an interesting scheme to
    >>detect Modbus RTU messages. He ignored timing completely (so
    >>he could use HW FIFOs), and decided that he would just monitor
    >>the receive bytestream for any block of data that started with
    >>his address and had the correct CRC at the location indicated
    >>by the bytecount. It meant that he had to have a receive
    >>buffer twice as long as the max message and keep multiple
    >>partial CRCs running, but I guess it worked.

    >
    > As long as you are not using broadcast messages, it should be
    > sufficient for a multidrop slave to have just one CRC running
    > (and check it after each byte received).


    How so? If you don't know where the frame started because you
    can't detect timing gaps, you have to have a separate CRC
    running for each possible frame starting point, where each
    "starting point" is any byte matching your address (or the
    broadcast address).

    > This calculation should be done on all frames, not just those
    > addressed to you.


    The problem is that you don't know where the frames start, so the
    phrase "on all frames" isn't useful.

    >>I don't even want to know how they did message delimiting in
    >>Modbus over TCP...

    >
    > Since Modbus over TCP is really point to point, the situation is
    > similar to the serial point to point case.


    Ah.

    > Some converter boxes also convert Modbus RTU to Modbus/TCP
    > before sending it over the net. The Modbus/TCP protocol
    > contains a fixed header (including a byte count) and the
    > variable length RTU frame (without CRC).
    >
    > The strange thing is why they created Modbus/TCP for running
    > the Modbus protocol over Ethernet instead of Modbus/UDP.


    Using UDP would seem to be a lot more analogous to a typical
    multidrop bus.

    > However, googling for "Modbus/UDP" gives quite a lot of hits, so quite
    > a few vendors have implemented some kind of Modbus over UDP protocols
    > in their products in addition to Modbus/TCP.


    Interesting.


  8. Re: Suggestions for custom application-layer protocol?

    On Sun, 29 May 2005 15:11:42 -0000, Grant Edwards
    wrote:


    >>>I once talked to somebody who used an interesting scheme to
    >>>detect Modbus RTU messages. He ignored timing completely (so
    >>>he could use HW FIFOs), and decided that he would just monitor
    >>>the receive bytestream for any block of data that started with
    >>>his address and had the correct CRC at the location indicated
    >>>by the bytecount. It meant that he had to have a receive
    >>>buffer twice as long as the max message and keep multiple
    >>>partial CRCs running, but I guess it worked.




    >> This calculation should be done on all frames, not just those
    >> addressed to you.

    >
    >The problem is that you don't know where the frames start, so the
    >phrase "on all frames" isn't useful.


    If the master starts after all slaves are ready and waiting for the first
    message, they all know where the first message starts. When the correct
    CRC is detected, the end of the first frame is known, and it can now be
    assumed that the next byte received will start the next frame :-).
    This continues from frame to frame.

    Everything works well as long as there are no transfer errors on the
    bus or any premature "CRC matches" within the data part of a message.
    If sync is lost, then unless broadcasting is used, there is no need to
    regain sync until the master addresses your slave. The master will
    not get an answer to the first request, but it will resend the request
    after the resend timeout period. The slave only needs to be able to
    detect this timeout period, which would usually be a few hundred byte
    transfer times.

    If the slave pops up on an active bus, it will regain
    synchronisation when the master addresses it for the first time: the
    master will resend the command after a timeout period, since it did not
    get a reply. The slave just needs to detect the long resend timeout.

    >> As long as you are not using broadcast messages, it should be
    >> sufficient for a multidrop slave to have just one CRC running
    >> (and check it after each byte received).

    >
    >How so? If you don't know where the frame started because you
    >can't detect timing gaps, you have to have a seperate CRC
    >running for each possible frame starting point, where each
    >"starting point" is any byte matching your address (or the
    >broadcast address).


    A shift, bit test, and xor operation is needed for each data bit
    received to calculate the CRC into the CRC "accumulator". This
    calculation can be done for all received bytes when the end-of-frame
    gap is detected, or eight bits can be calculated each time a new byte
    is received (actually calculating with the byte received two bytes
    before the current one).

    From a computational point of view, both methods are nearly equal. The
    only difference is that in the first case, the CRC accumulator is
    compared with the last two bytes in the frame (in correct byte order)
    only after the gap was detected, however in the latter case, the
    updated CRC accumulator must be compared with the two most recently
    received bytes each time a byte is received. Thus, the computational
    difference is only two byte compares for each received byte.

    Thus a single CRC accumulator is sufficient, if no broadcasts are
    expected.

    However, if it is required to receive broadcasts, then it might be
    justified to use about 260 parallel CRC accumulators (if the maximum
    size frames are expected). Each accumulator starts calculating from a
    different byte in the buffer. Each time a new character is received,
    the last two bytes (the CRC candidates) are compared with all 260 CRC
    accumulators and once a match is found, the start of that frame is
    also known and synchronisation is regained. After synchronisation is
    regained, one CRC accumulator is sufficient for keeping track of the
    frame gaps.

    Since the Modbus CRC is only 16 bits long, relying solely on the CRC
    calculation can cause premature CRC detections with a likelihood of
    1/65536, or about 16 ppm, so additional message header checks should be
    employed to detect these before declaring a true CRC end of frame.
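    The single running accumulator described above fits in a few lines of C.
    This is the standard Modbus CRC-16 (init 0xFFFF, reflected polynomial
    0xA001, no final xor), updated one byte at a time:

```c
#include <stdint.h>

/* Standard Modbus CRC-16: init 0xFFFF, reflected polynomial 0xA001,
 * no final xor. Updated one byte at a time -- the single running
 * "CRC accumulator" scheme described above. */
uint16_t crc16_update(uint16_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++)
        crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    return crc;
}
```

    A handy consequence of having no final xor: running the update over a
    frame that already carries its CRC (low byte first, as Modbus RTU
    transmits it) leaves the accumulator at zero, so the per-byte
    end-of-frame test is a single compare against zero rather than a compare
    against the last two bytes received.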

    Paul


  9. Re: Suggestions for custom application-layer protocol?

    On 2005-05-29, Paul Keinanen wrote:

    > Everything works well as long as there are no transfer errors on the
    > bus or any premature "CRC matches" within the data part of a message.
    > If the sync is lost, unless broadcasting is used, is there any need to
    > regain sync until the master addresses your slave ? The master will
    > not get an answer to the first request, but it will resend the request
    > after the resend timeout period. The slave only needs to be able to
    > detect this timeout period, which would usually be a few hundred byte
    > transfer times.


    Ah. Right. I should have thought of that. [I never used the
    scheme in question -- I always had an interrupt for each rx
    byte, and did the gap detection according to the spec.]

    > If the slave pops up into an active bus, it will regain
    > synchronisation, when the master will address it first time
    > and it will resend the command after a timeout period, since
    > it did not get a reply. The slave just needs to detect the
    > long resend timeout.


    Yup. And in most of the control systems I've run into,
    timeouts are usually on the order of a second or two, so
    detecting that isn't a problem -- even with a FIFO.

    > However, if it is required to receive broadcasts, then it
    > might be justified to use about 260 parallel CRC accumulators
    > (if the maximum size frames are expected). Each accumulator
    > starts calculating from a different byte in the buffer. Each
    > time a new character is received, the last two bytes (the CRC
    > candidates) are compared with all 260 CRC accumulators and
    > once a match is found, the start of that frame is also known
    > and synchronisation is regained. After synchronisation is
    > regained, one CRC accumulator is sufficient for keeping track
    > of the frame gaps.


    Hmm. I seem to have forgotten how broadcast messages work in
    the spec [I haven't done Modbus for a few years]. In our
    systems, we implemented a non-standard broadcast scheme by
    reserving address 0 as a broadcast address. One only
    needed to start a CRC when a byte was seen that was 0 or that
    matched the slave address.

    > Since the Modbus CRC is only 16 bits long, relying solely on the CRC
    > calculation can cause premature CRC detections with a likelihood of
    > 1/65536, or about 16 ppm, so additional message header checks should be
    > employed to detect these before declaring a true CRC end of frame.


    Good point. Thanks for the detailed explanation.


  10. Re: Suggestions for custom application-layer protocol?


    James Antill writes:

    > On Wed, 25 May 2005 19:21:02 +0000, Sean Burke wrote:
    >
    > > One option to consider is to embed a web interface into your
    > > application. This has the advantage that you can use any web
    > > browser as the client side of the interface.

    >
    > Not a terrible idea, a simple HTTP/1.0 server can be pretty small esp. if
    > you don't mind stopping as soon as something works (the very basics can be
    > done in < 20 lines of C).
    >
    > > There are a variety of very small web servers that are suitable
    > > for embedding. One such that I have used successfully is "pserv".
    > > From the FreeBSD ports description:

    >
    > Yeh, pretty good ... I stopped looking after seeing 2 major
    > vulnerabilities on the first google page.
    > The code also looked promising ... for more exploits.


    Are you commenting on anything beyond the obvious fact
    that the code uses strcpy and sprintf?

    > Writing a custom "simple protocol" is likely to be much easier, using
    > netstrings is probably more likely to make you do the right thing ... but
    > a simple "CMD arg1 arg2" type telnet/SMTP/NNTP like protocol isn't hard to
    > get right.


    None of these are inherently simpler than HTTP, and you
    don't get the advantages that a web browser's sophisticated
    support for HTML confers.

    -Sean









  11. Re: Suggestions for custom application-layer protocol?

    Sean Burke wrote:

    > None of these are inherently simpler than HTTP, and you
    > don't get the advantages that a web browser's sophisticated
    > support for HTML confers.


    Speaking as one who has done it, adapting HTTP instead of using a custom
    protocol has many advantages besides the above. Think of all the proxies
    and filters out there, the tools that snoop the wire, making a nice
    graphical display sorted into request and response sequences (e.g.
    HTTPlook). Consider the existence of client-side libraries ready to use
    in any language (libwww or libcurl for C, java.net.* or Jakarta
    HttpClient for Java, lots of Perl modules, etc). None of this is
    available to a custom protocol, however easy to implement.

    --
    Henry Townsend

  12. Re: Suggestions for custom application-layer protocol?

    Glyn Davies wrote:

    >
    > Most STX/ETX stuff I have seen was over serial comms.
    >


    STX/ETX over serial is used to ride out the line noise
    that occurs in RS-485 communication: when a slave
    switches its transmitter on, it can generate noise on the line
    as a side effect that could be misinterpreted. So it's good
    practice for serial messages (binary or text) to start with several
    STX chars and end with several ETX chars -- the message itself should
    carry some CRC check...

    best regards,
    Mario


  13. Re: Suggestions for custom application-layer protocol?

    I didn't read all the answers, just butting in, but take a look at NMEA 0183.
    That is a simple text-based protocol to send formatted data, used by marine
    equipment (GPS, for instance). It is more or less one-way, but you can simply
    add the other direction if you like. Use it over UDP and add some acknowledge
    messages, for instance.

    John



  14. Re: Suggestions for custom application-layer protocol?

    On Mon, 30 May 2005 14:15:58 +0200, Mile Blenton
    wrote:

    >Glyn Davies wrote:
    >
    >>
    >> Most STX/ETX stuff I have seen was over serial comms.
    >>

    >
    >STX/ETX over serial is used to ride out the line noise
    >that occurs in RS-485 communication: when a slave
    >switches its transmitter on, it can generate noise on the line
    >as a side effect that could be misinterpreted.


    Why should turning on the transmitter cause any noise? In any
    properly terminated RS-485 system, the line is pulled by resistors to
    the Mark (idle) state when no transceiver is actively driving the bus.
    Turning the transmitter on in the low-impedance Mark state does not
    change the voltage levels. The voltages change when the transmitter
    starts to send the start bit (Space).

    However, if the RS-485 line is in a noisy environment and is used
    infrequently, i.e. there are long idle periods between messages, the
    line is more prone to random errors in the high-impedance idle
    state than in the low-impedance active Mark or Space state. The noise
    will often cause false start-bit triggerings (often seen as 0xFF bytes
    in the UART).

    When the protocol frame always starts with a known character (such as
    STX), it is quite easy to ignore any noise received during the idle
    period.

    In fact this also applies to actively driven RS-232/422/20 mA lines in
    very noisy environments if there are long pauses between messages.

    However, the STX detection fails if there is a Space noise pulse less
    than 10 bit times ahead of the STX (with 8N1 characters). The Space is
    assumed to be a start bit and the UART starts counting bits. While the
    UART is still counting data bits, the start bit for the true STX
    character is received, but it is interpreted as a data bit by the
    UART. When the UART is finally ready to receive the stop bit, one of
    the middle bits of the STX will actually be sampled. If this bit
    happens to be in the Mark state, the UART is satisfied with the stop
    bit and waits for the next Mark-to-Space transition, which is
    interpreted as the next start bit. However, if a Space data bit is
    received when the UART expects the stop bit, a framing error occurs.

    In both cases, the STX character will not be detected and usually the
    whole frame will be lost.

    BTW, the Modbus RTU specification (which does not use STX) specifies
    that the transmitter should be turned on at least 1.5 bit times before
    the start of the transmission. This assumes that while errors may
    occur in the passively maintained Mark state, the actively driven
    low-impedance Mark state will keep the line clean for at least 15 bit
    times. Thus there should be no false start bits too close to the
    actual message, and the first byte is always decoded correctly.

    >so it's a good
    >practice for serial messages (bin/text) to start with several
    >STX chars


    Using multiple STX characters makes sense only if there are more than
    10 bit times of actively driven Mark (idle) between the STX characters.
    Even if the first STX is lost due to a false start bit, the second will
    be reliably detected. Sending multiple STX characters without a time
    gap would just cause a few framing errors, and it is unlikely to regain
    synchronisation. In fact, it would make more sense to send a few 0xFF
    bytes, so that if the first is lost due to a premature start bit, the
    UART would get a few bits in the Mark state, assume that the line is
    idle, and correctly detect the start bit of the next 0xFF byte.

    >and end with several ETX chars


    A known end character (such as ETX) is used, since a simple interrupt
    service routine can independently receive characters until the end
    character is detected and the whole message can be passed to a higher
    level routine as a single entity.

    How would multiple ETX characters help? The end of message is already
    detected.
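    The receive logic described above -- ignore idle-line noise until STX,
    collect bytes, pass the frame up on ETX -- fits in a small ISR-style
    state machine. A sketch: the buffer size, struct names, and frame_ready
    flag are illustrative, and the CRC check of the completed frame is left
    out:

```c
#include <stdint.h>
#include <stddef.h>

#define STX 0x02
#define ETX 0x03
#define MAX_FRAME 64

enum rx_state { RX_IDLE, RX_IN_FRAME };

struct rx {
    enum rx_state state;
    uint8_t buf[MAX_FRAME];
    size_t len;
    int frame_ready;        /* set when a complete STX..ETX frame arrived */
};

/* Feed one received byte (e.g. from the UART ISR) into the framer. */
void rx_byte(struct rx *r, uint8_t b)
{
    switch (r->state) {
    case RX_IDLE:
        if (b == STX) {               /* ignore idle-line noise until STX */
            r->len = 0;
            r->state = RX_IN_FRAME;
        }
        break;
    case RX_IN_FRAME:
        if (b == STX) {               /* repeated STX: restart the frame */
            r->len = 0;
        } else if (b == ETX) {
            r->frame_ready = 1;       /* hand the frame to a higher level */
            r->state = RX_IDLE;
        } else if (r->len < MAX_FRAME) {
            r->buf[r->len++] = b;
        } else {
            r->state = RX_IDLE;       /* overrun: drop the frame */
        }
        break;
    }
}
```

    Treating a repeated STX as a frame restart is what makes the several-STX
    preamble harmless to this receiver: only the last STX before the payload
    matters.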

    Paul


  15. Re: Suggestions for custom application-layer protocol?

    On Sun, 29 May 2005 20:45:42 +0000, Sean Burke wrote:

    >
    > James Antill writes:
    >
    >> On Wed, 25 May 2005 19:21:02 +0000, Sean Burke wrote:
    >>
    >> > There are a variety of very small web servers that are suitable
    >> > for embedding. One such that I have used successfully is "pserv".
    >> > From the FreeBSD ports description:

    >>
    >> Yeh, pretty good ... I stopped looking after seeing 2 major
    >> vulnerabilities on the first google page.
    >> The code also looked promising ... for more exploits.

    >
    > Are you commenting on anything beyond the obvious fact
    > that the code uses strcpy and sprintf?


    That isn't enough? See: http://www.and.org/vstr/security
    Programmers _cannot_ get this right.

    If, when I look outside, there is water falling from the sky I do not
    need to walk outside to know I'm going to get wet ... and if you are
    arguing that the drops of water are small and have large gaps between
    them, I am still not going to feel compelled to walk outside to see if
    I get wet.

    >> Writing a custom "simple protocol" is likely to be much easier, using
    >> netstrings is probably more likely to make you do the right thing ... but
    >> a simple "CMD arg1 arg2" type telnet/SMTP/NNTP like protocol isn't hard to
    >> get right.

    >
    > None of these are inherently simpler than HTTP, and you
    > don't get the advantages that a web browser's sophisticated
    > support for HTML confers.


    Which web browser? As I said, in theory you can get something "simplish"
    that looks like an HTTP/1.0 server from the right angle ... and it might
    even work with Mozilla (as that client is very forgiving), but making it a
    real HTTP/1.0 server isn't trivial, and supporting HTTP/1.1 is very hard.
    Also, if you need state to cross message boundaries, you'll have to
    implement a lot more code on the server side.



  16. Re: Suggestions for custom application-layer protocol?

    Hi. Too many answers already, it seems, but why not one more: take a
    look at RFC 3117 and the BEEP protocol (the old BXXP) at
    www.beepcore.org. It is XML-based, but I think it can help you. Good luck.

