Managing intelligent I/O processing on DSP + GPP SoCs

This Product How-To article about TI’s OMAP-L138 C6-Integra DSP + ARM processor details the steps a developer needs to follow in building an application that must balance I/O processing tasks between a general purpose microcontroller and a digital signal processor.


DSP system developers face a difficult choice when meeting increasing system performance requirements: up the clock speed, optimize the code, add more processors, or all of the above. Any of these can work, but developers must partition code intelligently between the general purpose processor and the digital signal processor (DSP) to make the best use of each architecture.


For instance, certain input/output (I/O) tasks can be offloaded to the general purpose processor (GPP) to implement smart I/O processing with features such as predictive caching, buffer parsing, sequencing and more.


A further obstacle: the developer may need to change the functionality of the system down the road, and must retain the flexibility to change the role of each core so that hard and soft real-time deadlines can still be met.


System developers must also decide if an operating system is needed, and if so, how to make sure the system-level I/O throughput comes closer to theoretical raw throughput maximums. I will examine the options a DSP system developer faces when partitioning I/O processing tasks and how to best implement the GPP in certain cases.


Why a DSP-centric model

Since their invention, programmable DSPs have been aimed at specific signal-processing tasks, most often executing a body of inter-related DSP algorithms.


Historical examples include the V-series modem, the Global System for Mobile Communications (GSM) baseband vocoder used in cellular phones, the audio processor in stereo receivers, the video encoder in security cameras and the more recent vision processor for automotive or software defined radio (SDR) phones.


Take the V.32 modem for example. The main purpose of the product was to squeeze as much data as possible through the 3 kHz bandwidth of the Public Switched Telephone Network (PSTN).


It offered 9,600 bps, especially impressive at a time when 1,200 bps was the best data transmission rate most consumers could get over single-pair, non-leased lines. The programmable DSP could execute the main algorithm, quadrature amplitude modulation (QAM), and a host of others, including equalization, error correction and echo cancellation.


Most of these algorithms are chained together, representing phases of a signal processing pipeline so that the DSP can process a block of data at a time within required time slots.
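Such a chain can be sketched as a list of stage functions applied in order to each block within its time slot. This is a minimal illustrative sketch, not TI library code; the stage names and block length are assumptions.

```c
/* Minimal sketch of a block-based signal-processing pipeline.
 * Stage names (equalize, demodulate) and BLOCK_LEN are illustrative
 * placeholders for real DSP kernels, not TI API calls. */
#include <stddef.h>

#define BLOCK_LEN 64

typedef void (*stage_fn)(float *block, size_t len);

/* Dummy stages: each stands in for a real algorithm in the chain. */
static void equalize(float *b, size_t n)   { for (size_t i = 0; i < n; i++) b[i] *= 0.5f; }
static void demodulate(float *b, size_t n) { for (size_t i = 0; i < n; i++) b[i] += 1.0f; }

/* Run every stage over one block within the current time slot. */
static void run_pipeline(stage_fn *stages, size_t nstages,
                         float *block, size_t len)
{
    for (size_t s = 0; s < nstages; s++)
        stages[s](block, len);
}
```

The key property is that the block, not the sample, is the unit of work: each stage finishes its pass before the next begins, which is what makes the pipeline's timing predictable.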

We differentiate the signal processing tasks in a DSP-based system from the rest of the tasks, commonly referred to as “control” tasks. A PCI-based modem card, for example, may need to respond to driver commands from the host side to abort the transmission, go to lower speed or begin a “power down” sequence.


While the abort and power down control tasks can be simple requests, since they merely terminate the pipeline, dynamically switching the data rate may not be as simple: that request may involve real-time switching of algorithms or coefficients, i.e. modifying the signal processing pipeline itself.
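One common way to make such a switch safe is to stage the new coefficients in a second bank and commit the swap only at a block boundary, never mid-block. The following is a sketch under that assumption; the names (coeff_bank, request_rate_change) are illustrative, not from any TI API.

```c
/* Sketch: double-buffered coefficient banks so a rate-change request
 * takes effect only between blocks. Names are illustrative. */
#include <stdbool.h>
#include <string.h>

#define NTAPS 8

static float coeff_bank[2][NTAPS];        /* active and pending banks */
static int active = 0;                    /* bank the pipeline is using */
static volatile bool swap_pending = false;

/* Control task: stage new coefficients without touching the live set. */
void request_rate_change(const float *new_coeffs)
{
    memcpy(coeff_bank[1 - active], new_coeffs, sizeof(float) * NTAPS);
    swap_pending = true;
}

/* Called by the pipeline between blocks: commit any pending swap. */
const float *coeffs_for_next_block(void)
{
    if (swap_pending) {
        active = 1 - active;
        swap_pending = false;
    }
    return coeff_bank[active];
}
```

Because the swap is committed by the pipeline itself at a block boundary, the control request never disturbs a block that is mid-flight.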


As embedded system designers look for ways to accelerate signal processing performance by adding "DSP" to the system, the DSP system designer can do the reverse, offloading control plane tasks to a GPP. This move makes a lot of sense if the goals are to:


1) Retain the investment in existing DSP code, including familiarity of tools and instruction set architecture (ISA);
2) Incrementally add “control plane” features to accommodate expanding market requirements; and
3) Lastly but most importantly, not disturb the signal processing pipeline that is tuned to the data rates and the tested use-case, e.g. field trials or certified code.


Since GPPs are mainly designed for control plane processing, it is logical to partition event-driven tasks such as I/O control to this side. With a processor dedicated to this purpose, there are many possibilities in terms of what we can do:


– Add more complicated I/O with high-level stacks to the system;

– Create custom I/O handling module(s) to pre-process the data; or

– Enhance the intelligence of the I/O control process with strategies such as data caching schemes, adaptive rate control and buffer management.
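The buffer-management idea above can be sketched as a ring of pre-filled buffers that the GPP keeps ahead of the DSP, a simple form of predictive caching. This is an illustrative sketch only; the sizes, names and single-producer/single-consumer assumption are mine, not from the OMAP-L138 software stack.

```c
/* Sketch of GPP-side buffer management: a ring of buffers the GPP
 * pre-fills so the DSP always has the next block ready. One producer
 * (GPP) and one consumer (DSP) are assumed; names are illustrative. */
#include <stddef.h>

#define NBUF 4
#define BUF_LEN 256

typedef struct {
    float data[NBUF][BUF_LEN];
    size_t head;   /* next buffer the GPP will fill  */
    size_t tail;   /* next buffer the DSP will drain */
    size_t count;  /* filled buffers outstanding     */
} buf_ring;

/* GPP side: claim a free buffer to pre-fill, or NULL if the ring is full.
 * (For brevity, the buffer is counted as filled on claim.) */
float *ring_claim(buf_ring *r)
{
    if (r->count == NBUF) return NULL;
    float *b = r->data[r->head];
    r->head = (r->head + 1) % NBUF;
    r->count++;
    return b;
}

/* DSP side: take the oldest filled buffer, or NULL if the ring is empty. */
float *ring_take(buf_ring *r)
{
    if (r->count == 0) return NULL;
    float *b = r->data[r->tail];
    r->tail = (r->tail + 1) % NBUF;
    r->count--;
    return b;
}
```

A real implementation would add interprocessor signaling and cache maintenance between the cores, but the ring discipline itself is what decouples the GPP's I/O pacing from the DSP's block deadlines.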


The next big question is, "What are we going to do with the GPP side?" This brings the discussion to the pros and cons of using OSes for I/O management.


