Receiving Data in Real-Time

Real Time Waits for No One
Posted on Feb 7 2015 by Fernando Zamora
Bits and Bytes Streaming

Some of you may know that I’ve been working with Arduino lately, and my current project is to send data over FM. I am not there yet. I picked FM simply to see if it could be done, and I figured I could learn some things about real data transmission along the way. I chose, on purpose, not to implement the communication using any shields. Arduino has many shields capable of transmitting data, such as Wi-Fi or Bluetooth, all with easy-to-use APIs, but that would have been too easy, and it would not give me the understanding I am seeking. By handling the data at the physical layer instead of the software layer, I get to learn and understand certain things about data communication firsthand.

For instance, since I am handling the data at the physical layer, I have to program in a way that handles the challenges of real time. Real-time logic waits for no one and nothing. Think of a robot in an assembly line: what if the previous station did not properly prepare the widget? What happens when the next robot in the line tries to perform its step? These are challenges that only show up in real time. We usually take for granted all the complexity and difficulty of handling real-time data, because that handling is very well hidden in the implementation of NIC drivers and the like.

What exactly does it mean to work in real time? It means that since I am handling electronic pulses coming in, I have to read each pulse correctly at the exact moment it arrives. This is in contrast to any higher-level protocol such as sockets or even HTTP. At those layers you simply read the message as everyday (well, ASCII) characters. You read the full string; in many cases you receive a fully formatted object. By the time the programmer works with those messages, they have already bubbled up through several layers of complexity, and the programmer only deals with a very simple slice of the work. You don’t have to worry about timing each pulse exactly, converting the bits back into bytes, the bytes into characters, the characters into XML, and then possibly into a full POCO, POJO, or JSON object; you get the point. If each pulse that equates to a bit arrives every millisecond and your program executes instructions that take longer than a millisecond, your program can miss the next bit and you will end up with corrupted data.

For instance, let’s say that every millisecond you are waiting for a bit, so every 8 milliseconds you receive a whole byte of data. Say the transmitter sent you the byte 01100001 (97, the letter a) and for some reason you missed a bit, so instead you read 01100011 (99, the letter c). Not only is that byte corrupted, but so is every byte that comes after it, because the bytes are now being read off sequence. Your reading is out of sync with the order in which the bits were sent.
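
To make that concrete, here is a small stand-alone C++ example (not part of the Arduino sketches, just an illustration) that packs a bit stream into bytes, most significant bit first, and shows how losing a single bit scrambles everything that follows:

#include <cstdio>
#include <vector>

// Pack a stream of bits (most significant bit first) into bytes.
std::vector<unsigned char> packBits(const std::vector<int>& bits) {
    std::vector<unsigned char> bytes;
    for (size_t i = 0; i + 8 <= bits.size(); i += 8) {
        unsigned char b = 0;
        for (int j = 0; j < 8; j++) {
            b = (b << 1) | bits[i + j]; // shift in one bit at a time
        }
        bytes.push_back(b);
    }
    return bytes;
}

int main() {
    // "ab" sent MSB-first: 01100001 01100010
    std::vector<int> sent = {0,1,1,0,0,0,0,1, 0,1,1,0,0,0,1,0};

    // The same stream with the very first bit missed by the receiver.
    std::vector<int> missedOne(sent.begin() + 1, sent.end());

    for (unsigned char c : packBits(sent)) printf("%c", c);      // prints "ab"
    printf("\n");
    for (unsigned char c : packBits(missedOne)) printf("%c", c); // prints garbage
    printf("\n");
    return 0;
}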

I haven’t gotten to the FM portion of this project. Currently I am simply making sure that my messaging protocol works over a wire, so that I can verify that the encoding and decoding of raw data works correctly. Once I complete that phase properly, I will consider moving on to the next phase, which will include FM communications.

So… I spent the better part of yesterday trying to figure out why, instead of “hello world” (the message the transmitter sends on every cycle in an infinite loop), I would get broken messages like “hel” followed by a bunch of garbage characters. I knew that I had a sync problem, but I wasn’t sure how to fix it.

The reason I had a sync problem was that I was reading the incoming bits like this:

for(int i = 0; i < MAX_MESSAGE_BIT_SIZE; i++){
  //analog readings are between 0 and 1023;
  //the low pulses usually come in around the low tens
  //and the high pulses usually come in around the 900s,
  //so 100 is a safe threshold for now
  messageBits[i] = analogRead(sensorPin) < 100 ? 0 : 1;

  //delay is set to the same period
  //as the transmitter... in this case 10
  delay(DELAY);
}

The problem was the delay. The analogRead call and the rest of the loop take their own slice of time, so each iteration actually lasted slightly more than DELAY, and that small error accumulated until the reads drifted out of sync with the bit flow. By around the third or fourth character, the read was off by one bit, and that was enough to corrupt the entire remainder of the message. So how did I solve it?

I did it by measuring the elapsed time instead and re-syncing after reading each bit.

while( i < MAX_MESSAGE_BIT_SIZE){
  int newTime = millis();

  //only read the next bit
  //if enough time has passed
  if((newTime - currentTime) >= DELAY)
  {
    messageBits[i] = analogRead(sensorPin) < 100 ? 0 : 1;
    currentTime = newTime;
    i++;
  }

  //this delay is much less than
  //the DELAY value of 10 milliseconds
  //so it's ok...
  //just avoiding a lot of dead cycles
  delayMicroseconds(250);
}
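
For context, here is a minimal sketch of how that timed loop might sit inside a complete Arduino program. The pin number, threshold, and buffer size below are placeholder assumptions for illustration, not necessarily the values in my actual sketches; only the 10 millisecond DELAY matches what I describe above.

// Minimal receiver sketch built around the timed loop above.
// Pin number, threshold, and buffer size are assumptions.
const int sensorPin = A0;              // analog pin the receiver line is wired to
const int DELAY = 10;                  // bit period in milliseconds (matches the transmitter)
const int MAX_MESSAGE_BIT_SIZE = 256;  // size of the bit buffer

int messageBits[MAX_MESSAGE_BIT_SIZE];

void setup() {
  Serial.begin(9600);
}

void loop() {
  int i = 0;
  unsigned long currentTime = millis();

  while (i < MAX_MESSAGE_BIT_SIZE) {
    unsigned long newTime = millis();

    // Only sample once a full bit period has elapsed.
    if ((newTime - currentTime) >= DELAY) {
      messageBits[i] = analogRead(sensorPin) < 100 ? 0 : 1;
      currentTime = newTime;
      i++;
    }

    // Short pause so the loop is not spinning flat out between samples.
    delayMicroseconds(250);
  }

  // ...decode messageBits into bytes and characters here...
}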

By making that simple change, my program started reading the data correctly each and every time. Basically, I measure the time and only read a bit at the exact interval. Think of it like jumping rope: you have to jump at the exact moment. Wait too long and you are out; don’t wait long enough and you are probably out too. In the same way, each read has to happen at the exact interval. Here is a sample of what the message looked like coming in.


00100000
00000000
00000000
00000000
00000000
00000000
00000000
00000000
00001010
10101010
10101010
10100000
000
***********MESSAGE***********

hello world

***********END OF MESSAGE***********

00000011
10000110
01010110
11000110
11000110
11110010
00000111
01110110
11101110
01001101
10001100
10000000
00000000


Notice all the other bytes coming in prior to the message. I displayed every byte so that I could see each one as it arrived. Note the pattern just before the message: eight bytes of zeros (64 zero bits), then three bytes of alternating bits (01010101), then one more empty byte (00000000). I used that sequence as a preamble header to identify exactly where the message would start, so that I knew when to begin buffering the message. That concept would probably make a good blog post on its own. **The bits and bytes above may not appear exactly the way I describe them, because the output does not break at the right boundary between bytes; that is a display bug I have fixed since the original posting.
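
As a rough illustration of the idea (the pattern and lengths here are assumptions for the example, not lifted verbatim from my receiver sketch), a preamble detector can slide each incoming bit into a window and only start buffering once the window matches the header:

#include <cstdio>

// Hypothetical preamble: three bytes of alternating bits followed by one zero byte.
const unsigned long PREAMBLE = 0xAAAAAA00UL;    // 10101010 x3, then 00000000
const unsigned long WINDOW_MASK = 0xFFFFFFFFUL; // keep only the last 32 bits

// Slide the newest bit into the window and report whether the header just ended.
bool preambleSeen(unsigned long& window, int bit) {
    window = ((window << 1) | (bit & 1)) & WINDOW_MASK;
    return window == PREAMBLE;
}

int main() {
    unsigned long window = 0;
    // Simulated stream: idle zeros, the preamble, then the first message bits.
    int stream[] = {0,0,0,0,0,0,0,0,
                    1,0,1,0,1,0,1,0, 1,0,1,0,1,0,1,0, 1,0,1,0,1,0,1,0,
                    0,0,0,0,0,0,0,0,
                    0,1,1,0,1,0,0,0};
    int n = sizeof(stream) / sizeof(stream[0]);

    for (int i = 0; i < n; i++) {
        if (preambleSeen(window, stream[i])) {
            printf("Preamble ends after bit %d; start buffering here.\n", i + 1);
        }
    }
    return 0;
}

Inside the timed reading loop, the same check would run on every sampled bit, and messageBits would only start filling once it returns true.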

If you are interested in looking at the full transmitter and receiver code, you can find both sketches on GitHub: transmitter sketch and receiver sketch. I plan to do a full blog post on how to configure and wire the hardware to make these sketches work. Just be warned that although the sketches work as they are, they contain a lot of overhead test data.
