Choosing sensors: Specsmanship vs. reality

Accuracy and precision are paramount when specifying sensors. The two terms are often used interchangeably, but there are fundamental differences between them. Accuracy, a qualitative concept, indicates the proximity of measurement results to the true value; precision reflects the repeatability or reproducibility of the measurement.


ISO 3534-1:2006 defines precision as the closeness of agreement between independent test results obtained under stipulated conditions, and views the concept of precision as encompassing both repeatability and reproducibility. The standard defines repeatability as precision under repeatability conditions, and reproducibility as precision under reproducibility conditions.


Precision, accuracy, repeatability, reproducibility, variability and uncertainty represent qualitative concepts and thus should be applied with care. The precision of an instrument reflects the number of significant digits in a reading; the accuracy of an instrument reflects how close the reading is to the true value being measured. An accurate instrument is not necessarily precise, and instruments are often precise but far from accurate.
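
To make the distinction concrete, the short sketch below (plain Python, with hypothetical readings) treats accuracy as the offset of the mean from the true value and precision as the scatter of repeated readings:

```python
# Hypothetical repeated readings of a quantity whose true value is 100.0.
from statistics import mean, stdev

true_value = 100.0
instrument_a = [100.1, 99.9, 100.2, 99.8, 100.0]    # accurate and precise
instrument_b = [102.1, 102.0, 102.2, 101.9, 102.0]  # precise but inaccurate

for name, readings in [("A", instrument_a), ("B", instrument_b)]:
    bias = mean(readings) - true_value  # closeness to the true value (accuracy)
    scatter = stdev(readings)           # repeatability of readings (precision)
    print(f"Instrument {name}: bias = {bias:+.2f}, scatter = {scatter:.2f}")
```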


Figure 1 (below) illustrates the difference between accuracy and precision; note also that the precision of a measurement may vary in proportion to the signal level.


Concepts of accuracy

Sensor manufacturers and users employ one of two basic methods to specify sensor performance: parameter specification and the total error band envelope.


Parameter specification quantifies individual sensor characteristics without any attempt to combine them.


The total error band envelope yields a result much closer to that expected in practice: sensor errors are expressed as a total error band, or error envelope, into which all data points must fit regardless of their origin. As long as the sensor operates within the conditions specified on the data sheet, the data can be relied on; the user can be confident that every reading will be accurate within the stated error band, avoiding lengthy and error-prone data analysis. Figure 2 illustrates the concept.
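
As a rough illustration of the idea (not any manufacturer's actual method), the sketch below checks whether a set of calibration points lies inside a stated error envelope, here assumed to be ±0.1 percent of full scale; all the numbers are made up:

```python
# Minimal sketch: verify calibration points against a total error band.
FULL_SCALE = 10.0          # assumed sensor full-scale output, volts
ERROR_BAND_PCT_FS = 0.1    # assumed total error band, percent of full scale

def within_error_band(ideal, measured):
    """Return True if every measured point lies within the error envelope."""
    limit = FULL_SCALE * ERROR_BAND_PCT_FS / 100.0
    return all(abs(m - i) <= limit for i, m in zip(ideal, measured))

ideal = [0.0, 2.5, 5.0, 7.5, 10.0]
measured = [0.004, 2.507, 4.996, 7.492, 10.006]
print(within_error_band(ideal, measured))  # True: all points fit the band
```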


Many manufacturers, however, specify individual error parameters, unless there are legislative pressures compelling them to state the total error band of their sensors. For instance, if products or services are sold by weight, the weighing equipment is subject to legal metrology legislation and comes under the scrutiny of weights and measures authorities around the world.


The International Organization of Legal Metrology requires that load cells used in weighing equipment be accuracy-controlled by enforcing strict adherence to an error-band performance specification. Typically, such an error band will include parameters such as nonlinearity, hysteresis, nonrepeatability, creep under load, and thermal effects on both zero and sensitivity. The user of such a sensor can rest assured that its measurements will stay within the total error band specified, provided all the parameters of interest are included.


Unless there is external pressure to comply, manufacturers do not generally specify their products using the error band method, though it yields results that are more representative of how the product will respond in actual use. Instead, commercial pressures lead manufacturers to portray their sensors in the most favorable light vs. the competition.


The commonly used parameter method allows you to make a direct comparison between competing products by examining their specifications as detailed in the product data sheets. When selecting a sensor, carefully examine all performance parameters with respect to the intended application to ensure that the sensor you ultimately choose is suitable for its specific end use.


A typical sensor data sheet will list a number of individual error sources, not all of which affect the device in a given situation. Given the plethora of data provided, you may find it difficult to decide whether a given sensor is sufficiently accurate for your desired application.


Ideally, the mathematical relationship between a change in the measurand and the output of a sensor over the entire compensated temperature and operational range should include all errors due to parameters such as zero offset, span rationalization, nonlinearity, hysteresis, repeatability, thermal effects on zero and span, thermal hysteresis, and long-term stability.
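
One common engineering convention for rolling such individual parameters into a single figure (an assumption here, not something every data sheet endorses) is a root-sum-square combination of the independent error terms, shown below alongside the pessimistic simple sum; all the error values are hypothetical:

```python
from math import sqrt

# Hypothetical data-sheet error terms, each in percent of full scale.
errors_pct_fs = {
    "nonlinearity": 0.05,
    "hysteresis": 0.03,
    "nonrepeatability": 0.02,
    "thermal_zero": 0.04,
    "thermal_span": 0.04,
}

# Root-sum-square (RSS) combination, valid if the terms are independent.
rss = sqrt(sum(e ** 2 for e in errors_pct_fs.values()))
worst_case = sum(errors_pct_fs.values())  # simple sum: pessimistic bound
print(f"RSS: {rss:.3f}% FS, worst case: {worst_case:.3f}% FS")
```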


Typically, users will focus on just one or two of these parameters, using them as benchmarks with which to compare products. One of the most commonly selected parameters is nonlinearity, which describes the degree to which the sensor’s output (in response to changes in the measured parameter) departs from a straight-line correlation.
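
As a sketch of how nonlinearity might be quantified (here as the maximum deviation from a least-squares best-fit line, in percent of full scale; terminal-based and zero-based reference lines are also common), with made-up calibration data:

```python
# Sketch: nonlinearity as the max deviation from a least-squares line,
# expressed in percent of full-scale output. Calibration data is hypothetical.
def best_fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

applied = [0.0, 25.0, 50.0, 75.0, 100.0]   # applied stimulus, % of range
output  = [0.02, 1.26, 2.52, 3.76, 4.99]   # sensor output, volts

slope, intercept = best_fit_line(applied, output)
full_scale = max(output) - min(output)
deviations = [abs(y - (slope * x + intercept)) for x, y in zip(applied, output)]
print(f"Nonlinearity: {100.0 * max(deviations) / full_scale:.2f}% FS")
```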


A polynomial expression describing the true performance of the sensor (if manufacturers provided it) would yield accuracy improvements of perhaps an order of magnitude.


Many sensors do, in fact, have a quadratic relationship between sensor output and measured value, with a response that is linear only to a first-order approximation. Thus, if you substitute the quadratic equation y = ax² + bx + c for the manufacturer's advertised sensitivity data, supplied in the form y = ax + b, you can improve the accuracy. In another example, although many gravity-referenced inertial angular sensors have a sine transfer function (the output varies as the sine of the measured angle), the manufacturer's data sheet will still list a linear expression, because for small angles the sine of the angle is approximately equal to the angle itself.
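
The sketch below illustrates the gain from inverting a quadratic calibration instead of the published linear sensitivity; all coefficients are hypothetical, standing in for values a user would obtain by calibrating the actual sensor:

```python
from math import sqrt

# Hypothetical true sensor behavior: a mildly quadratic response.
def true_output(x):
    return 0.002 * x ** 2 + 1.0 * x + 0.05

a_lin, b_lin = 1.2, 0.05          # assumed published linear model y = ax + b
a_q, b_q, c_q = 0.002, 1.0, 0.05  # assumed user-calibrated quadratic fit

def measurand_linear(y):
    return (y - b_lin) / a_lin

def measurand_quadratic(y):
    # Solve a_q*x^2 + b_q*x + (c_q - y) = 0 for the physical (positive) root.
    disc = b_q ** 2 - 4 * a_q * (c_q - y)
    return (-b_q + sqrt(disc)) / (2 * a_q)

x_true = 80.0
y = true_output(x_true)
print(f"linear model:    x = {measurand_linear(y):.2f}")   # ~77.33: biased
print(f"quadratic model: x = {measurand_quadratic(y):.2f}") # recovers 80.00
```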


If the specific thermal effects contributing to both zero and sensitivity errors are stated, then measurement errors can be minimized by considering the actual errors, rather than the global errors quoted on the sensor data sheet, together with the actual temperature range encountered in the application.


Often, both errors are quoted as a percentage of full-range output (FRO). In reality, sensitivity errors are normally a function of a percentage of reading. Thermal errors may be minimized by actively compensating for temperature through the use of a reference temperature sensor installed on or near the sensor being used. Some manufacturers provide an on-board temperature sensor expressly for this purpose.
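
A minimal sketch of such active compensation, assuming the data sheet supplies thermal zero and span coefficients and that a reference temperature reading is available (all figures hypothetical):

```python
# Assumed data-sheet thermal coefficients, referenced to 25 °C calibration.
TC_ZERO_PCT_FS_PER_C = 0.01   # thermal zero shift, % FS per °C
TC_SPAN_PCT_RDG_PER_C = 0.02  # thermal span shift, % of reading per °C
FULL_SCALE = 10.0             # full-range output, volts
T_CAL = 25.0                  # calibration temperature, °C

def compensate(raw, temp_c):
    """Correct a raw reading using the reference temperature sensor."""
    dt = temp_c - T_CAL
    zero_shift = FULL_SCALE * TC_ZERO_PCT_FS_PER_C / 100.0 * dt
    corrected = raw - zero_shift               # remove absolute zero drift
    span_factor = 1.0 + TC_SPAN_PCT_RDG_PER_C / 100.0 * dt
    return corrected / span_factor             # remove proportional span drift

print(compensate(5.07, 45.0))  # raw 5.07 V at 45 °C -> ~5.03 V corrected
```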


It is important to distinguish between the contribution of zero-based and sensitivity errors. Thermal zero errors are absolute errors and are generally quoted as a percentage of full scale (FS). In most cases, sensors are not used to their full-scale capacity; therefore, when expressed as a percentage of reading, errors can become very large indeed. For example, a sensor used at 25 percent FS will have a thermal zero error of four times its data sheet value as a percentage of reading. A similar mistake occurs when users specify sensors with an operating range much higher than that which will be encountered in practice “just to be safe.”
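
The arithmetic behind that example, as a one-line sketch:

```python
# Converting a thermal zero error quoted in % FS into % of reading.
def error_pct_of_reading(error_pct_fs, utilization):
    """utilization: fraction of full scale actually used (0 < u <= 1)."""
    return error_pct_fs / utilization

# A 0.5% FS thermal zero error on a sensor used at 25% of full scale:
print(error_pct_of_reading(0.5, 0.25))  # 2.0 -> four times the data-sheet figure
```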


These examples show that you can improve both accuracy and precision by minimizing predictable errors mathematically. Stability errors, and errors that are unpredictable and nonrepeatable, present the largest obstacle to achievable accuracy.
