Friday, 25 November 2016

communication - Serial protocol delimiting/synchronization techniques



As asynchronous serial communication is still widespread among electronic devices, I believe many of us have run into this question from time to time. Consider an electronic device D and a computer PC connected by a serial line (RS-232 or similar) and required to exchange information continuously, i.e. PC sends a command frame every X ms, and D replies with a status report/telemetry frame every Y ms (the report can be sent in response to a request or independently -- it doesn't really matter here). The frames can contain arbitrary binary data. Assume the frames are fixed-length packets.


The problem:


As the protocol is continuous, the receiving side might lose synchronization or simply "join" in the middle of an ongoing frame, so it won't know where the start of frame (SOF) is. As the data has a different meaning depending on its position relative to the SOF, the received data will be corrupted, potentially forever.


The required solution:


A reliable delimiting/synchronization scheme to detect the SOF with a short recovery time (i.e. it shouldn't take more than, say, one frame to resynchronize).


The existing techniques I am aware of (and am using some of):


1) Header / checksum - the SOF is a predefined byte value, with a checksum at the end of the frame (a minimal sketch follows the pros/cons below).



  • Pros: Simple.

  • Cons: Not reliable. Unknown recovery time.
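
For illustration, a minimal receive-side sketch of this scheme. The SOF value (0xAA), the fixed frame length (16) and the simple additive checksum are arbitrary choices for the example, not anything prescribed above:

    #include <stdint.h>
    #include <string.h>

    #define FRAME_LEN 16        /* illustrative fixed frame size        */
    #define SOF_BYTE  0xAA      /* illustrative SOF marker value        */

    /* Feed received bytes one at a time; returns 1 when a frame with a
     * valid checksum has been assembled into 'frame'.                  */
    int header_checksum_rx(uint8_t byte, uint8_t frame[FRAME_LEN])
    {
        static uint8_t buf[FRAME_LEN];
        static int     pos = 0;

        if (pos == 0 && byte != SOF_BYTE)
            return 0;                      /* hunting for SOF           */

        buf[pos++] = byte;
        if (pos < FRAME_LEN)
            return 0;                      /* frame not complete yet    */

        pos = 0;
        uint8_t sum = 0;                   /* verify additive checksum  */
        for (int i = 0; i < FRAME_LEN - 1; i++)
            sum += buf[i];
        if (sum != buf[FRAME_LEN - 1])
            return 0;                      /* bad checksum: hunt again  */

        memcpy(frame, buf, FRAME_LEN);
        return 1;
    }

Note that on a checksum failure this naive version simply resumes hunting from scratch, which is exactly where the unpredictable recovery time comes from; the answer below improves on this by rescanning the already-buffered bytes for the next SOF.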



2) Byte stuffing - reserve delimiter byte values and escape them whenever they occur in the payload (a sketch follows the pros/cons below):



  • Pros: Reliable, fast recovery, can be used with any hardware

  • Cons: Not that suitable for fixed-size frame-based communication
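
As an example of the technique, a transmit-side sketch that borrows SLIP's byte-stuffing conventions (0xC0 as the frame delimiter, 0xDB as the escape byte); uart_send_byte() is a hypothetical platform function:

    #include <stdint.h>
    #include <stddef.h>

    #define END     0xC0   /* frame delimiter (SLIP convention) */
    #define ESC     0xDB   /* escape byte                       */
    #define ESC_END 0xDC   /* escaped END                       */
    #define ESC_ESC 0xDD   /* escaped ESC                       */

    extern void uart_send_byte(uint8_t b);   /* hypothetical platform call */

    /* Send one frame; END never appears inside the escaped payload, so a
     * receiver can always resynchronize on the next END byte.            */
    void slip_send(const uint8_t *data, size_t len)
    {
        uart_send_byte(END);                 /* flush any line noise */
        for (size_t i = 0; i < len; i++) {
            if (data[i] == END) {
                uart_send_byte(ESC);
                uart_send_byte(ESC_END);
            } else if (data[i] == ESC) {
                uart_send_byte(ESC);
                uart_send_byte(ESC_ESC);
            } else {
                uart_send_byte(data[i]);
            }
        }
        uart_send_byte(END);
    }

The receiver simply reverses the escaping and treats every END byte as a frame boundary, so it regains synchronization at the very next END no matter where it joins the stream.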


3) 9th-bit marking - extend each data word with an additional (9th) bit: the SOF word is marked with 1 and data words are marked with 0:



  • Pros: Reliable, fast recovery

  • Cons: Requires hardware support. Not directly supported by most PC hardware and software.



4) 8th-bit marking - an emulation of the above that uses the 8th bit instead of a 9th, leaving only 7 bits of payload per data word (a packing sketch follows the pros/cons below).



  • Pros: Reliable, fast recovery, can be used with any hardware.

  • Cons: Requires an encoding/decoding scheme between the conventional 8-bit representation and a 7-bit representation. Somewhat wasteful.
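
A sketch of one possible packing step for this scheme: 8-bit payload bytes are repacked into 7-bit transmit words whose MSB is always 0, so any word with the MSB set can be reserved as the SOF. The matching unpack routine on the receive side is omitted for brevity:

    #include <stdint.h>
    #include <stddef.h>

    /* Pack 8-bit payload bytes into 7-bit transmit words (bit 7 always 0).
     * 'out' must have room for at least (len*8 + 6)/7 words.
     * Returns the number of words written.                               */
    size_t pack_7bit(const uint8_t *in, size_t len, uint8_t *out)
    {
        uint32_t acc  = 0;   /* bit accumulator                          */
        int      bits = 0;   /* number of valid low bits in acc          */
        size_t   n    = 0;

        for (size_t i = 0; i < len; i++) {
            acc = (acc << 8) | in[i];
            bits += 8;
            while (bits >= 7) {
                out[n++] = (acc >> (bits - 7)) & 0x7F;
                bits -= 7;
            }
        }
        if (bits > 0)                        /* pad the final partial word */
            out[n++] = (acc << (7 - bits)) & 0x7F;
        return n;
    }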


5) Timeout based - treat the first byte received after a defined idle period as the SOF (a sketch follows the pros/cons below).



  • Pros: No data overhead, simple.

  • Cons: Not that reliable. Won't work well on systems with poor timing guarantees (e.g. a Windows PC). Potential throughput overhead.
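
A receive-side sketch of the idle-gap approach. uart_read_byte_timeout() is a hypothetical platform helper (returns 1 and stores a byte if one arrives within the timeout, otherwise returns 0); the frame length and gap are illustrative:

    #include <stdint.h>
    #include <stddef.h>

    #define FRAME_LEN   16      /* illustrative fixed frame length       */
    #define IDLE_GAP_MS 5       /* idle time that delimits frames        */

    /* Hypothetical platform helper: returns 1 and stores a byte in *b if
     * one arrives within timeout_ms, otherwise returns 0.                */
    extern int uart_read_byte_timeout(uint8_t *b, uint32_t timeout_ms);

    /* Blocks until one idle-gap-delimited frame has been received.
     * Returns the number of bytes stored in 'frame', or 0 if the frame
     * overflowed (in which case the caller simply tries again).          */
    size_t read_frame_idle_gap(uint8_t frame[FRAME_LEN])
    {
        uint8_t b;
        size_t  n = 0;

        /* Synchronize: discard traffic until the line has been idle for
         * at least IDLE_GAP_MS, then wait for the first byte of a frame. */
        while (uart_read_byte_timeout(&b, IDLE_GAP_MS))
            ;
        while (!uart_read_byte_timeout(&b, 1000))
            ;
        frame[n++] = b;

        /* Collect bytes until the line goes idle again. */
        while (uart_read_byte_timeout(&b, IDLE_GAP_MS)) {
            if (n == FRAME_LEN)
                return 0;       /* frame too long: discard and resync */
            frame[n++] = b;
        }
        return n;
    }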



Question: What other techniques/solutions exist to address this problem? Can you point out cons in the above list that can easily be worked around, thus removing them? How do you (or would you) design your system's protocol?



Answer




How do you (or would you) design your system's protocol?



In my experience, everyone spends a lot more time debugging communication systems than they ever expected. So I strongly suggest that whenever you need to make a choice for a communication protocol, you pick whichever option makes the system easier to debug, if at all possible.


I encourage you to play with designing a few custom protocols -- it's fun and very educational. However, I also encourage you to look at the pre-existing protocols. If I needed to communicate data from one place to another, I would try very hard to use some pre-existing protocol that someone else has already spent a lot of time debugging.


Writing your own communication protocol from scratch means you are highly likely to run into many of the same problems everyone hits when writing a new protocol.


There are a dozen embedded system protocols listed at Good RS232-based Protocols for Embedded to Computer Communication -- which one is the closest to your requirements?



Even if some circumstance made it impossible to use any pre-existing protocol exactly, I am more likely to get something working more quickly by starting with some protocol that almost fits the requirements, and then tweaking it.


bad news


As I have said before:


Unfortunately, it is impossible for any communication protocol to have all these nice-to-have features:



  • transparency: data communication is transparent and "8 bit clean" -- (a) any possible data file can be transmitted, (b) byte sequences in the file always handled as data, and never mis-interpreted as something else, and (c) the destination receives the entire data file without error, without any additions or deletions.

  • simple copy: forming packets is easiest if we simply blindly copy data from the source to the data field of the packet without change.

  • unique start: the start-of-packet symbol is easy to recognize, because it is a known constant byte that never occurs anywhere else in the headers, header CRC, data payload, or data CRC.

  • 8-bit: only uses 8-bit bytes.



I would be surprised and delighted if there were any way for a communication protocol to have all of these features.


good news



What other techniques/solutions exist to address this problem?



Often it makes debugging much, much easier if a human at a text terminal can replace any of the communicating devices. This requires the protocol to be designed to be relatively time-independent (it doesn't time out during the relatively long pauses between keystrokes typed by a human). Also, such protocols are limited to the sorts of bytes that are easy for a human to type and then read on the screen.


Some protocols allow messages to be sent in either "text" or "binary" mode (and require all possible binary messages to have some "equivalent" text message that means the same thing). This can help make debugging much easier.


Some people seem to think that limiting a protocol to only use the printable characters is "wasteful", but the savings in debugging time often makes it worthwhile.


As you already mentioned, if you allow the data field to contain arbitrary bytes, including the start-of-header and end-of-header values, then when a receiver is first turned on it is likely to mis-synchronize on what looks like a start-of-header (SOH) byte in the data field in the middle of one packet. Usually the receiver will get a mismatched checksum at the end of that pseudo-packet (which typically ends halfway through a second real packet). It is very tempting to simply discard the entire pseudo-packet (including the first half of that second packet) before looking for the next SOH -- with the consequence that the receiver can stay out of sync for many messages.


As alex.forencich pointed out, a much better approach is for the receiver to discard bytes at the beginning of the buffer up to the next SOH. This allows the receiver (after possibly working through several SOH bytes in that data packet) to immediately synchronize on the second packet.
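
A sketch of that recovery strategy, assuming SOH-delimited fixed-length frames with a trailing additive checksum (the SOH value, frame length and checksum are illustrative): on a checksum failure, only the leading SOH byte is dropped and the buffered bytes are rescanned, rather than flushing the whole pseudo-packet.

    #include <stdint.h>
    #include <string.h>

    #define SOH       0x01     /* illustrative start-of-header byte     */
    #define FRAME_LEN 16       /* illustrative fixed frame length       */

    static uint8_t rxbuf[2 * FRAME_LEN];
    static size_t  rxlen = 0;

    static int checksum_ok(const uint8_t *f)
    {
        uint8_t sum = 0;
        for (int i = 0; i < FRAME_LEN - 1; i++)
            sum += f[i];
        return sum == f[FRAME_LEN - 1];
    }

    /* Call with each received byte; returns 1 when a valid frame has been
     * copied into 'frame'. On a checksum failure only the failed SOH byte
     * is discarded and the buffer is rescanned, so resynchronization
     * normally happens within one frame.                                  */
    int resync_rx(uint8_t byte, uint8_t frame[FRAME_LEN])
    {
        if (rxlen < sizeof rxbuf)
            rxbuf[rxlen++] = byte;

        for (;;) {
            /* Drop leading bytes until the buffer starts with SOH. */
            size_t i = 0;
            while (i < rxlen && rxbuf[i] != SOH)
                i++;
            memmove(rxbuf, rxbuf + i, rxlen - i);
            rxlen -= i;

            if (rxlen < FRAME_LEN)
                return 0;                    /* need more bytes */

            if (checksum_ok(rxbuf)) {
                memcpy(frame, rxbuf, FRAME_LEN);
                memmove(rxbuf, rxbuf + FRAME_LEN, rxlen - FRAME_LEN);
                rxlen -= FRAME_LEN;
                return 1;
            }

            /* Bad checksum: discard just this SOH byte and rescan. */
            memmove(rxbuf, rxbuf + 1, rxlen - 1);
            rxlen -= 1;
        }
    }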




Can you point out cons in the above list that can easily be worked around, thus removing them?



As Nicholas Clark pointed out, consistent-overhead byte stuffing (COBS) has a fixed overhead that works well with fixed-size frames.
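
For reference, a sketch of the standard COBS encoding pass: each run of up to 254 non-zero bytes is prefixed by a code byte giving the distance to the next zero, so the encoded stream contains no 0x00 bytes and a single 0x00 can be appended as an unambiguous frame delimiter.

    #include <stdint.h>
    #include <stddef.h>

    /* Encode 'len' bytes from 'in' into 'out' using COBS. 'out' must have
     * room for len + len/254 + 1 bytes. Returns the encoded length.       */
    size_t cobs_encode(const uint8_t *in, size_t len, uint8_t *out)
    {
        size_t  out_idx  = 1;   /* position of the next data byte          */
        size_t  code_idx = 0;   /* position of the current code byte       */
        uint8_t code     = 1;   /* distance to the next zero (so far)      */

        for (size_t i = 0; i < len; i++) {
            if (in[i] == 0) {
                out[code_idx] = code;       /* close the current block     */
                code_idx = out_idx++;
                code = 1;
            } else {
                out[out_idx++] = in[i];
                if (++code == 0xFF) {       /* max block length reached    */
                    out[code_idx] = code;
                    code_idx = out_idx++;
                    code = 1;
                }
            }
        }
        out[code_idx] = code;               /* close the final block       */
        return out_idx;
    }

Decoding reverses the process; because the overhead is at most one byte per 254 bytes of payload (plus the delimiter), the frame size on the wire is predictable, which is what makes it fit fixed-size frames so well.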


One technique that is often overlooked is a dedicated end-of-frame marker byte. When the receiver is turned on in the middle of a transmission, a dedicated end-of-frame marker helps it resynchronize faster.


When a receiver is turned on in the middle of a packet, and the data field of that packet happens to contain bytes that look like a start-of-packet (the beginning of a pseudo-packet), the transmitter can insert a series of end-of-frame marker bytes after the packet. That way, pseudo-start-of-packet bytes in the data field don't interfere with immediately synchronizing on and correctly decoding the next packet -- even when you are extremely unlucky and the checksum of the pseudo-packet appears correct.
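
A small sketch of how the end-of-frame marker tightens validation on the receive side: a candidate frame is accepted only if it starts with the SOF marker, ends with the EOF marker, and has a matching checksum, which makes it much less likely that a pseudo-packet starting inside a data field survives all three checks. The marker values and frame layout below are illustrative.

    #include <stdint.h>

    #define SOF       0xAA     /* illustrative start-of-frame marker      */
    #define EOF_MARK  0x55     /* illustrative end-of-frame marker        */
    #define FRAME_LEN 16       /* layout: SOF | payload | checksum | EOF  */

    /* Returns 1 if 'buf' holds a plausible frame: correct framing bytes
     * at both ends and a matching additive checksum over the payload.    */
    int frame_valid(const uint8_t buf[FRAME_LEN])
    {
        if (buf[0] != SOF || buf[FRAME_LEN - 1] != EOF_MARK)
            return 0;

        uint8_t sum = 0;
        for (int i = 1; i < FRAME_LEN - 2; i++)   /* payload bytes only */
            sum += buf[i];
        return sum == buf[FRAME_LEN - 2];
    }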


Good luck.

