Embedded computing systems must meet tight cost, power consumption, and performance constraints. If one design requirement dominated, life would be much easier for embedded system designers: they could use fairly standard architectures with simple programming models.
But because the three constraints must be met simultaneously, embedded system designers have to mold hardware and software architectures to fit the needs of applications. Specialized hardware can meet performance requirements at lower energy consumption and lower cost than a general-purpose system could.
As we have seen, embedded computing systems are often heterogeneous multiprocessors with multiple CPUs and hardwired processing elements (PEs). In co-design, the hardwired PEs are generally called accelerators. In contrast, a co-processor is controlled by the execution unit of a CPU.
Hardware/software co-design is a collection of techniques that designers use to help them create efficient application-specific systems.
If you don’t know anything about the characteristics of the application, it is difficult to know how to tune the system’s design. But if you do know the application, then as the designer you can not only add hardware and software features that make it run faster using less power, but also remove hardware and software elements that do not help with the application at hand. Removing excess components is often as important as adding new features.
As the name implies, hardware/software co-design means jointly designing hardware and software architectures to meet performance, cost, and energy goals. Co-design is a radically different methodology than the layered abstractions used in general-purpose computing.
Because co-design tries to optimize many different parts of the system at the same time, it makes extensive use of tools for both design analysis and optimization.
Increasingly, hardware/software co-design is being used to design nonembedded systems as well. For example, servers can be improved with specialized implementations of some of the functions on their software stack. Co-design can be applied to Web hosting just as easily as it can be applied to multimedia.
In this series we will first take a brief look at some hardware platforms that can serve as targets for hardware/software co-design, followed by an examination of performance analysis, hardware/software co-synthesis, and finally hardware/software co-simulation.
Hardware/software co-design can be used either to design systems from scratch or to create systems to be implemented on an existing platform. The CPU + accelerator architecture is one common co-design platform. A variety of different CPUs can be used to host the accelerator.
The accelerator can implement many different functions; furthermore, it can be implemented using any of several logic technologies. These choices influence design time, power consumption, and other important characteristics of the system.
The co-design platform could be implemented in any of several very different design technologies.
1) A PC-based system with the accelerator housed on a board plugged into the PC bus. The plug-in board can use a custom chip or a field-programmable gate array (FPGA) to implement the accelerator. This sort of system is relatively bulky and is most often used for development or very low-volume applications.
2) A custom-printed circuit board, using either an FPGA or a custom integrated circuit for the accelerator. The custom board requires more design work than a PC-based system but results in a lower-cost, lower-power system.
3) A platform FPGA that includes a CPU and an FPGA fabric on a single chip. These chips are more expensive than custom chips but provide a single-chip implementation with one or more CPUs and a great deal of custom logic.
4) A custom integrated circuit, in which the accelerator implements a function in less area and with lower power consumption. Many embedded systems-on-chips (SoCs) make use of accelerators for particular functions.