Common Sens(ors)

November 26, 2013

Reality is a pretty chaotic environment and requires a bit of thinking to get those messy real world signals into our nice neat digital world.

I’ve talked about different sensors lately, but it occurs to me that there are some tasks common to nearly all sensor types. I often think of sensors not by the physical quantity they measure, but by the way they present that measurement to my software.

By that definition, sensors usually fall into one of these categories:

  • Engineering units — Smart sensors send data in some already-processed form (a packed BCD number or an ASCII floating-point representation, for example)
  • Counts — Many sensors yield a raw count that doesn’t specifically mean anything without some sort of calibration curve (anything you read via an A/D converter probably falls into this category)
  • Time — A sensor that produces a varying pulse width or frequency is typically measured with a time count; sure, it is still a count, but it represents a time, not a typical measurement
  • Discrete — Sensors that essentially produce an on/off signal are read via a digital input
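For the counts case, the calibration curve is often just a straight line through two known points. Here’s a minimal sketch of that idea; the calibration constants (and the notion that this particular sensor reads temperature) are invented for illustration:

```c
/* Hypothetical two-point linear calibration: raw A/D counts to degrees C.
   Pretend calibration showed 100 counts = 0.0 C and 900 counts = 80.0 C. */
#define CAL_RAW_LO   100
#define CAL_RAW_HI   900
#define CAL_ENG_LO   0.0
#define CAL_ENG_HI   80.0

double counts_to_degrees(int raw)
{
    /* linear interpolation between the two calibration points */
    return CAL_ENG_LO + (raw - CAL_RAW_LO) *
           (CAL_ENG_HI - CAL_ENG_LO) / (CAL_RAW_HI - CAL_RAW_LO);
}
```

With those made-up constants, a reading of 500 counts (halfway between the calibration points) works out to 40.0 degrees. Real sensors often need more points or a polynomial fit, but the idea is the same.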

Of these, the discrete case is probably the odd one. I talked a few weeks ago about setup and hold times and bringing inputs into a synchronous clock domain. However, when looking at real-world discretes, you probably want to allow some time for the measurement to settle to reject noise. This is especially true of mechanical contacts (like switches) where you might expect “bounce”; that is, noise caused by the mechanism not fully making (or breaking) contact right away.

Of course, how you “debounce” the switch depends on what you expect to do with it. For example, if you need an emergency stop switch, you might not want to debounce it at all. When you see anything that looks like a switch closure, you act on it. If there are more spurious ones that follow, you don’t care because you stopped. The same thing might apply if the switch closure starts some long (multi-second) process where you stop reading the input.

If that isn’t the case, though, you should put some delay in the measurement. You probably also want to wait for the switch to go open again (and debounce that as well). For example, here is some “bad” C code:

while (1) if (get_switch()) do_something();

If the switch bounces you might do_something more than once per button push. If the button stays pushed, you will get multiple calls to do_something (which could be a feature if you wanted a very fast automatic repeat). Something like this is probably smarter:

while (1) {
   if (get_switch()) {
      delay(DEBOUNCE_DELAY);           // let any bounce settle
      if (get_switch()) {              // still closed? it's a real press
         do_something();
         while (1) {                   // wait for switch open
            if (!get_switch()) {
               delay(DEBOUNCE_DELAY);  // debounce the release, too
               if (!get_switch()) break;
            }
         }  // while (1)
      }  // if (get_switch())
   }  // if (get_switch())
}  // while (1)

Usually DEBOUNCE_DELAY is set for a small number of milliseconds, but this can vary depending on what you are trying to debounce.
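Burning the delay in a loop isn’t the only option, by the way. A common alternative is to sample the switch from a periodic timer interrupt and require several consecutive identical readings before believing a change. This sketch is my own illustration (not a standard routine); it assumes you call it once per 1ms timer tick with the raw switch reading:

```c
/* Require N consecutive identical raw readings before accepting a
   change of state. At a 1 ms tick, 10 counts = 10 ms of stability. */
#define DEBOUNCE_COUNT 10

static int stable_state = 0;   /* last debounced state */
static int counter = 0;        /* ticks the raw input has disagreed */

/* Call once per timer tick; returns the debounced switch state. */
int debounce_tick(int raw)
{
    if (raw == stable_state) {
        counter = 0;            /* no change pending */
    } else if (++counter >= DEBOUNCE_COUNT) {
        stable_state = raw;     /* input held long enough; accept it */
        counter = 0;
    }
    return stable_state;
}
```

The nice thing about this approach is that the main loop never blocks waiting for the switch.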

This is really a specific case of something you have to think about for all sensors: response time. If you are measuring, say, temperature, you can’t expect the reading to instantly reflect a change in the real world. Understanding the response time can be critical for making a system work.

Other things you have to understand include the measurement range of a sensor, as well as its precision, accuracy, and resolution. That sounds like three synonyms, but each one means something slightly different.

Resolution is easy to understand. This is the number of digits the device reports. A digital voltmeter that reads 1.250 volts has a resolution of 1mV (assuming it could also report 1.251 volts). However, suppose you had a lab-grade standard voltage that put out exactly 1.2V. If the meter consistently read between 1.250 and 1.255, it would not be terribly accurate (as much as 55mV off, in fact). However, it is actually fairly precise, since multiple readings only spread by 5mV. So:

  • Precision — The repeatability of measurements without regard to their absolute accuracy
  • Accuracy — The correctness of readings compared to an arbitrary standard
  • Resolution — The number of digits reported
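To put numbers on the voltmeter example, here’s a sketch that treats the spread of a batch of readings as a stand-in for precision, and the worst-case error against a reference as a stand-in for accuracy. Real instrument specifications are more formal than this, but it captures the distinction:

```c
/* Spread of a batch of readings (in mV): a rough stand-in for precision. */
double spread_mv(const double *r, int n)
{
    double lo = r[0], hi = r[0];
    for (int i = 1; i < n; i++) {
        if (r[i] < lo) lo = r[i];
        if (r[i] > hi) hi = r[i];
    }
    return (hi - lo) * 1000.0;
}

/* Worst-case error vs. a reference (in mV): a stand-in for accuracy. */
double worst_error_mv(const double *r, int n, double ref)
{
    double worst = 0.0;
    for (int i = 0; i < n; i++) {
        double e = r[i] > ref ? r[i] - ref : ref - r[i];
        if (e > worst) worst = e;
    }
    return worst * 1000.0;
}
```

Feed it the readings from the example (1.250 to 1.255 against a 1.2V standard) and you get a 5mV spread but a 55mV worst-case error: precise, not accurate.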

In the world of the analog to digital converter, this all becomes very important. Suppose you have a 1.024V reference and a 10-bit A/D converter. Assuming the converter is linear, each bit would be worth exactly 1mV. That’s the resolution. However, most converters specify at least +/- 1 LSB (least significant bit) of error so that’s a precision and accuracy problem. Other sources of error may push it even further than that. Suppose noise on the supply line is not isolated from the reference voltage and it varies by +/-100 microvolts. That counts against precision and accuracy. On the other hand, suppose the input stage and the measurement point form a 99% voltage divider. That 1% counts against accuracy, but not against precision since it is always a 1% error.
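Just to make that arithmetic concrete, here’s a sketch of the count-to-millivolt conversion for that hypothetical converter (the reason 1.024V references exist at all is to make each count a round number):

```c
/* 10-bit converter with a 1.024 V reference: full scale is 1024 counts,
   so each count is worth exactly 1 mV. */
#define VREF_MV  1024
#define AD_BITS  10

int counts_to_mv(int counts)
{
    return counts * VREF_MV / (1 << AD_BITS);  /* = counts * 1 here */
}
```

With a 5V reference instead, each count would be about 4.88mV and you’d want to watch out for integer overflow and rounding in that multiply.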

There are many common schemes for improving A/D performance in software. Most common is to use some form of oversampling and averaging. For example:

sum = 0;
for (i = 0; i < NUM_SAMPLES; i++) sum += get_ad_value();
average = sum / NUM_SAMPLES;

This can lead to other errors, though. If the input changes between samples, it will perturb the output. Typically, the total time for all sampling needs to be small compared to the period of the input. In other words, the sampling frequency needs to be very high compared to the input frequency. If you are interested in the detailed math, there are lots of good explanations on the Web, including the one in this Silicon Labs application note or this one from Atmel.

However, just intuitively, consider this: If a true DC source were connected to the A/D, ideally you would get a single count value from it. That’s ideally. In real life, you will get a spread and (presumably) the distribution would be a bell curve. The errors would be caused by different noise sources throughout the system. By taking the average of a large number of samples, you tend to get to the center of the bell curve and reduce the impact of the outliers. If you sample fast compared to the input signal, the same effect occurs for a non-DC input.
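If you want to see the effect yourself, here’s a sketch that averages a batch of counts scattered symmetrically around a true value (the sample values below are invented noise around a true reading of 512):

```c
/* Average a batch of A/D counts using integer math with rounding. */
int average_counts(const int *s, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++) sum += s[i];
    return (int)((sum + n / 2) / n);   /* round to nearest count */
}
```

Given samples like 510, 514, 511, 513, 512, 509, 515, and 512, the individual readings wander by several counts but the average lands right on 512 — the center of the bell curve.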

Note that averaging only helps if you have some noise in the system, and works best if the noise is random. If that ideal A/D converter existed, you’d add a bunch of the same number together, take the average, and wind up with the same number! If all the noise in the system pulls in one direction (that is, a 3V measurement ranges from 3V to 3.1V, but never below 3V) you get less benefit from averaging, as well. Luckily, most real systems have a pretty random distribution of measurement uncertainty.

You can also try a moving average:

// assume list[] is preloaded with the first 8 samples
head=0;   // index of the oldest entry (list is 8 items long)
// get new sample, overwriting the oldest entry
list[head]=get_ad_value();
head=(head+1)&7;  // the next item is now the oldest
// compute new average of the last 8 items
sum=0;
for (i=0;i<8;i++) sum+=list[i];
result=sum/8;   // or sizeof(list)/sizeof(list[0]) if you prefer

This has the advantage of only requiring one sample per output, but is even more likely to have the signal change between the first and last sample. There are other schemes for averaging values, and perhaps I’ll take that up at some point in the future.
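One such scheme is the exponential moving average, which keeps no history buffer at all — just the running average. Here’s a sketch using a power-of-two weight so the math reduces to a shift (the constant is my choice, roughly comparable to the 8-item buffer above):

```c
/* Exponential moving average: new = old + (sample - old)/2^k.
   Larger k means heavier smoothing and a slower response.
   Assumes non-negative samples so the shift behaves like a divide. */
#define EMA_SHIFT 3   /* weight of 1/8 */

int ema_update(int avg, int sample)
{
    return avg + ((sample - avg) >> EMA_SHIFT);
}
```

The tradeoff is that old samples never fully drop out; they just fade, and with integer math the average can stall a few counts short of a steady input.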

My real point, though, is that the real world is a messy place for programmers. We like to think of our perfect wires and ideal sensors, but reality is a pretty chaotic environment and requires a bit of thinking to get those messy real-world signals into our nice, neat digital world.