Summary: Micrium’s Matt Gordon provides an introduction to real-time kernels (ESC-214) with a particular focus on what embedded developers of medical devices will need to know to begin writing multitask applications.
The job of developing an embedded system for use in a medical device can be especially challenging. Developers in the medical field, like their counterparts elsewhere, must deal with tight schedules, stingy budgets, and lengthy requirement lists. However, developers of medical devices also must overcome a variety of industry-specific obstacles, many relating to the myriad rules and regulations that govern such devices.
As a result of the medical field’s regulatory environment, developers in this field tend to avoid many practices that are common amongst developers of non-medical products. One such practice is the use of a real‐time kernel. In the eyes of a large number of embedded systems developers, a kernel is a relatively low‐risk means of writing powerful, multitask applications. To a developer focused on the strictures of the medical realm, however, a kernel means little more than additional code and additional cost.
Unquestionably, medical products represent a unique case for kernel use. However, the additional risks faced by developers of medical devices who wish to use a kernel mostly involve the field’s regulatory environment; kernel behavior itself is essentially the same in devices of all sorts.
Real‐Time Kernel Basics
One way to view a real‐time kernel is as a framework for developing multitask applications. When such a framework is unavailable, developers typically write their application code around a single infinite loop. A pseudocode example of this sort of loop is provided in Figure 1.
Figure 1: A simple approach to application design is to use an infinite loop.
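In C, such a loop might look like the following minimal sketch. The handler names USB_Packet() and SPI_Read() come from the article’s examples; LCD_Update() is an illustrative stand-in for any other device handler, and each function here merely records its call order so the fixed sequence can be observed.

```c
#include <string.h>

/* Stand-in device handlers: each appends a letter to a log so the fixed
 * call order can be observed.  In a real system these would poll and
 * service their respective peripherals. */
static char call_log[64];

static void USB_Packet(void) { strcat(call_log, "U"); }
static void SPI_Read(void)   { strcat(call_log, "S"); }
static void LCD_Update(void) { strcat(call_log, "L"); }  /* illustrative */

/* The background "super-loop": every handler runs in a fixed sequence.
 * A real application would loop forever; iterations is bounded here so
 * the sketch can terminate. */
const char *super_loop(int iterations)
{
    call_log[0] = '\0';
    for (int i = 0; i < iterations; i++) {
        USB_Packet();
        SPI_Read();
        LCD_Update();
    }
    return call_log;
}
```

Because the sequence is fixed, each handler gets exactly one chance to run per pass, regardless of which peripheral actually needs attention.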
Since it consists of functions that are called in a fixed sequence, a loop, by itself, is not suitable for applications requiring quick responses to events. If, for example, USB data were received immediately after the example loop’s call to USB_Packet(), the data would not be processed until the subsequent call to that function. In other words, there would be a delay nearly as long as the execution time of the entire loop. To avoid this type of delay, developers augment their loops with interrupt service routines (ISRs).
Applications that incorporate a loop along with ISRs are typically referred to as foreground/background systems. The ISRs (or the foreground) execute whenever hardware needs urgent attention, and the loop (or the background) fills in the gaps. Foreground/background systems are used in a wide range of products, including medical devices, and there are plenty of developers who never take any other approach to writing application code.
Despite this popularity, the foreground/background model is not without its problems. Many of these problems become apparent when the need to expand an application arises. The example loop shown in Figure 1 might initially meet all of its developers’ needs; however, the expansion of this loop by just one function call, as shown in Figure 2, could be problematic. If the newly added function, Ethernet_Packet(), were lengthy, it might prevent other functions in the loop from being invoked at a suitable frequency. The SPI function, SPI_Read(), for example, might fail to read data at an appropriate rate following the expansion.
Figure 2: Expansion of a foreground/background system can be difficult.
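The delay described above can be quantified: data that arrives just after its handler returns must wait for every other call in the loop before being serviced. The sketch below assumes a loop ordered USB_Packet(), SPI_Read(), Ethernet_Packet(), with purely illustrative execution times; it is a model of the arithmetic, not of any real device.

```c
/* Illustrative execution times (milliseconds) for the expanded loop of
 * Figure 2; the values are assumptions, not measurements. */
static const int usb_ms      = 5;   /* USB_Packet()                    */
static const int ethernet_ms = 40;  /* Ethernet_Packet(), the new call */

/* Worst case for SPI data: it arrives just after SPI_Read() returns,
 * so the loop must first finish Ethernet_Packet() and USB_Packet()
 * before SPI_Read() runs again. */
int spi_worst_case_ms(void)
{
    return ethernet_ms + usb_ms;
}
```

With these numbers, adding the 40 ms Ethernet handler stretches the SPI response time to 45 ms, even though SPI_Read() itself never changed.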
Most developers’ reaction in this situation would be to move SPI_Read(), and any other functions no longer able to meet their deadlines, to the foreground. However, a cluttered foreground can present just as many problems as a bloated background. On some microcontrollers, the execution of an ISR prevents not only background code from running, but also the code of other ISRs. Even on microcontrollers that support nested interrupts, at least a portion of the system’s interrupts are typically disabled while an ISR is running. Although a prioritized interrupt controller can provide some assistance for developers who have a large number of ISRs, code written around such controllers is often not portable or easy to maintain.
A real‐time kernel offers remedies to many of the problems that can plague a growing foreground/background system. At the foundation of any kernel‐based application are tasks, two examples of which are shown in Figure 3. Unlike the different functions called by a foreground/background system’s loop, the tasks in a kernel‐based application do not execute in a fixed sequence. Each task is assigned an importance, or priority, by its developer, and the kernel uses this priority to determine when the task should run.
Figure 3: Kernel‐based applications are made of tasks.
The loop is not the only aspect of a foreground/background system for which a kernel offers improvements; kernels also provide more flexibility with respect to handling interrupts. In a kernel‐based application, interrupts are tied to scheduling. Accordingly, an application developer who uses a kernel can write ISRs that make tasks run.
An example kernel‐based ISR is shown in Figure 4. The line in the ISR reading “Signal USB task” would correspond to a kernel function call in an actual application. This call could be used to notify the task of an available USB packet and to enable the task to run. Based on the call, the kernel could conclude the ISR by switching to the signaled USB task, instead of returning to the task that was originally interrupted.
Figure 4: ISRs in a kernel‐based application are capable of signaling tasks.
The ability to signal tasks from ISRs is important, because it allows developers to keep ISRs brief without sacrificing performance. In a foreground/background system, code that has been moved from the foreground to the background must wait its turn to execute, just like everything else in the background.
In a kernel‐based application, however, signaling can be used to write tasks that promptly respond to events, such as the reception of a USB packet. There are multiple data structures through which a kernel can implement signaling, but the most common is probably the semaphore. Essentially, a semaphore is a counter that tracks the occurrence of events: a zero value corresponds to no activity, while a non‐zero value means that events have taken place. This data structure is managed not by application code but by the kernel and is tied to the kernel’s scheduling mechanisms. If a task calls a kernel function to wait on a particular semaphore and that semaphore’s counter has a zero value (meaning that the event associated with the semaphore has not yet occurred), then the kernel begins running other tasks.
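The counting rule just described can be modeled in a few lines of C. This is only a model of the semaphore’s counter semantics, with illustrative names; it omits the blocking and task switching that a real kernel performs when the count is zero.

```c
/* Minimal model of a kernel semaphore's counter (no blocking and no
 * scheduler -- just the counting rule described in the text). */
typedef struct { unsigned count; } ksem_t;

void ksem_init(ksem_t *s, unsigned initial) { s->count = initial; }

/* An ISR or task signals that an event has occurred. */
void ksem_post(ksem_t *s) { s->count++; }

/* A task checks for a pending event: returns 1 and consumes one event
 * if the count is non-zero; returns 0 where a real kernel would block
 * the task and run others instead. */
int ksem_trywait(ksem_t *s)
{
    if (s->count == 0)
        return 0;
    s->count--;
    return 1;
}
```

Because the counter accumulates posts, events signaled while the task is busy are not lost; the task simply consumes them one at a time on its next waits.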
Semaphores and other data structures used for signaling are part of a kernel’s synchronization services.
The primary job of a kernel is to manage tasks, but most kernels also offer a variety of other services. In addition to synchronization, these services typically include inter‐task communication (which often involves providing queues for message passing) and mutual exclusion. Together, the services allow developers to use a kernel to put together robust multitask applications.
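As a concrete illustration of the message-passing service mentioned above, here is a minimal fixed-capacity message queue of the kind a kernel might maintain for inter-task communication. The names and capacity are illustrative, and the blocking behavior a real kernel would add is noted only in comments.

```c
#include <stddef.h>

/* A minimal fixed-capacity message queue.  Messages are void pointers,
 * a common convention in kernel APIs. */
#define MSGQ_CAP 8

typedef struct {
    void    *slots[MSGQ_CAP];
    unsigned head, tail, count;
} msgq_t;

void msgq_init(msgq_t *q) { q->head = q->tail = q->count = 0; }

/* Returns 1 on success, 0 if the queue is full (a kernel could instead
 * block the sending task until space frees up). */
int msgq_send(msgq_t *q, void *msg)
{
    if (q->count == MSGQ_CAP) return 0;
    q->slots[q->tail] = msg;
    q->tail = (q->tail + 1) % MSGQ_CAP;
    q->count++;
    return 1;
}

/* Returns the oldest message, or NULL if the queue is empty (a kernel
 * would block the receiving task until a message arrives). */
void *msgq_receive(msgq_t *q)
{
    if (q->count == 0) return NULL;
    void *msg = q->slots[q->head];
    q->head = (q->head + 1) % MSGQ_CAP;
    q->count--;
    return msg;
}
```

A producer task (or an ISR) would call msgq_send() and a consumer task msgq_receive(); the kernel’s own queues add the scheduling glue that wakes the consumer when a message arrives.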