The key to realizing full multicore design functionality

In today’s increasingly complex and interconnected world, system-on-a-chip (SoC) performance requirements are influenced by existing, evolving and emerging applications.


The continued evolution of the functionality required to meet performance and cost targets makes this a good time for designers to explore in depth the architectural underpinnings of the multicore solutions they are considering. Ideally, a multicore SoC architecture includes the following characteristics:

1) Supports a mix of execution engine (core) styles, including digital signal processors (DSPs), vector signal processors (VSPs) and reduced instruction set computing (RISC) cores;

2) Provides full multicore entitlement, using all of the device's capabilities for the intended application to enable industry-leading performance;


3) Powers a family of devices to enable reuse;


4) Incorporates a software ecosystem that eases programming burdens and shortens development time.


This article reviews the architectural elements a SoC needs in order to deliver these ideal characteristics in devices targeted at advanced communications infrastructure applications such as media servers and wireless baseband infrastructure.


Multicore, multilayer SoC architecture

The basic approach behind a SoC is to integrate more and more functionality into a single device, to the point where it performs nearly all, or all, of the functions the targeted application requires. The SoC is embodied in the silicon device, and the overall solution often incorporates substantial software.


Many SoC designs pair DSP cores with RISC cores to target specific application processing needs, such as voice processing and transcoding in media gateways, or radio channel and transport network processing in wireless infrastructure.
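To make that split concrete, the sketch below models a RISC-class control core dispatching per-channel transcode jobs to a small farm of DSP-class cores. It is a minimal, hypothetical illustration in plain C: the job structure, queue layout and round-robin policy are assumptions made for the example, and a real device would use the vendor's inter-processor communication mechanisms and hardware queue managers rather than this in-memory array.

```c
/*
 * Hypothetical sketch of the control/data-plane split: a RISC-class
 * control core admits calls and hands per-channel transcode jobs to
 * DSP-class worker cores. All names and the queue itself are
 * illustrative, not any vendor's API.
 */
#include <stdio.h>

#define NUM_DSP_CORES 4
#define MAX_JOBS      8

typedef struct {
    int channel_id;   /* voice channel to transcode                */
    int src_codec;    /* e.g. 0 = G.711, 1 = G.729 (illustrative)  */
    int dst_codec;
} transcode_job_t;

/* One software queue per DSP core; hardware queue managers play this
 * role on a real device. */
static transcode_job_t queues[NUM_DSP_CORES][MAX_JOBS];
static int queue_depth[NUM_DSP_CORES];

/* Control-plane (RISC core) side: round-robin dispatch of jobs. */
static void dispatch_job(transcode_job_t job)
{
    static int next_core = 0;
    int core = next_core;
    next_core = (next_core + 1) % NUM_DSP_CORES;

    if (queue_depth[core] < MAX_JOBS)
        queues[core][queue_depth[core]++] = job;
}

/* Data-plane (DSP core) side: drain the queue and "process" each job. */
static void dsp_core_run(int core)
{
    for (int i = 0; i < queue_depth[core]; i++)
        printf("DSP core %d: transcode channel %d (%d -> %d)\n",
               core, queues[core][i].channel_id,
               queues[core][i].src_codec, queues[core][i].dst_codec);
    queue_depth[core] = 0;
}

int main(void)
{
    /* Control core admits six calls and spreads them over the DSP farm. */
    for (int ch = 0; ch < 6; ch++)
        dispatch_job((transcode_job_t){ .channel_id = ch,
                                        .src_codec = 0, .dst_codec = 1 });

    for (int core = 0; core < NUM_DSP_CORES; core++)
        dsp_core_run(core);

    return 0;
}
```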


Traditionally, performance improvements have come from process node migration and increasing clock frequencies. At today's small-geometry process nodes, however, those same gains carry an increasing cost in system power, so the trade-off analysis is more complex.


An alternate approach has emerged as the preferred choice for embedded applications: a multicore SoC in which multiple processing cores provide the desired performance lift at lower clock rates and lower power consumption, while still allowing all system parameters to be met. In addition, application-specific accelerators and coprocessors are incorporated to further increase capacity and reduce system power.
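The power argument can be seen with the standard dynamic-power relationship, P ≈ C·V²·f. The short calculation below compares one core run fast against two cores run at half the clock, assuming perfect parallel scaling; the capacitance, voltage and frequency figures are illustrative only and are not taken from any particular device.

```c
/*
 * Back-of-the-envelope illustration of the frequency-vs-power trade-off
 * using the dynamic-power model P ~= C * V^2 * f. All figures are
 * illustrative; perfect parallel scaling across two cores is assumed.
 */
#include <stdio.h>

static double dyn_power(double cap_nf, double volts, double freq_mhz)
{
    /* C in nF, f in MHz -> power in milliwatts (nF * V^2 * MHz = mW) */
    return cap_nf * volts * volts * freq_mhz;
}

int main(void)
{
    double cap = 1.0;   /* switched capacitance per core, nF (assumed) */

    /* One core pushed to 1200 MHz typically needs a higher supply voltage. */
    double p_single = dyn_power(cap, 1.1, 1200.0);

    /* Two cores at 600 MHz deliver the same aggregate cycles per second,
     * but the lower frequency permits a lower supply voltage. */
    double p_dual = 2.0 * dyn_power(cap, 0.9, 600.0);

    printf("single core @ 1200 MHz, 1.1 V: %.0f mW\n", p_single);
    printf("two cores   @  600 MHz, 0.9 V: %.0f mW\n", p_dual);
    printf("power savings: %.0f%%\n", 100.0 * (1.0 - p_dual / p_single));
    return 0;
}
```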


In this scenario, it is important to provide parallel access to processing resources so that the full entitlement of the device can be realized. The SoC architecture must therefore provide enough interconnect capacity within the chip infrastructure to deliver that full multicore entitlement.


The most straightforward approach is a large crosspoint matrix, but it carries power and cost penalties because, at any point in time, a large portion of the matrix is powered but not in use. A more sophisticated network-on-chip (NoC) approach provides local capacity for closely associated processing elements and a common backbone to interconnect these localized functions.
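A rough counting model shows why the flat matrix scales poorly. The sketch below compares the number of crosspoints in a full matrix against a clustered, NoC-style arrangement with small local switches and a shared backbone; the element counts and the counting model itself are simplifying assumptions, not figures for any real interconnect.

```c
/*
 * Rough, hypothetical comparison of interconnect cost: a flat crosspoint
 * matrix needs one crosspoint per master/slave pair, while a clustered,
 * NoC-style design pays only for small local switches plus a shared
 * backbone. The counting model is deliberately simplified.
 */
#include <stdio.h>

int main(void)
{
    int masters  = 16;   /* cores, DMA engines, accelerators (assumed)      */
    int slaves   = 16;   /* memories, peripherals, queue managers (assumed) */
    int clusters = 4;    /* groups of closely associated elements           */

    /* Flat crosspoint matrix: every master reaches every slave directly. */
    int flat = masters * slaves;

    /* Clustered: each cluster has a small local switch, plus one backbone
     * port per cluster to reach the other clusters. */
    int per_cluster_masters = masters / clusters;
    int per_cluster_slaves  = slaves / clusters;
    int local     = clusters * per_cluster_masters * (per_cluster_slaves + 1);
    int backbone  = clusters * clusters;
    int clustered = local + backbone;

    printf("flat crosspoint matrix : %d crosspoints\n", flat);
    printf("clustered NoC style    : %d crosspoints\n", clustered);
    return 0;
}
```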


