In this Product How-To article, the Cadence authors describe how to use the company’s Verification IP solutions framework to implement ARM’s AMBA 4 AXI Coherency Extensions (ACE) in embedded SoC designs.
The challenges facing designers of the next generation of devices such as multimedia smartphones, tablets, and other mobile devices are many. They have to deliver highly responsive systems yet must also consume the least power possible, and certainly no more than competing devices.
To achieve these goals, designers have been employing multi-processor architectures for many years. However, the need for even greater performance has exceeded the capability of current multi-processor/multi-cluster architectures.
Squeezing every last drop of performance and power out of these compute clusters is more important now than ever. One of the largest areas of opportunity for performance gains in multi-processor systems is in moving software-based cache-coherency management into hardware.
Why cache coherency? The primary goal of cache coherency is to minimize cache misses, as this benefits both performance and power consumption. Every off-chip main memory access consumes far more power than a cache hit. Even worse, processor cycles are wasted while waiting for the off-chip data. Because the cache management scheme must be aware of the status of all data in the system, these schemes have become quite complex. In addition, the engineering effort involved in developing and debugging a software-based cache-coherence scheme is high.
This is the driving force behind the ARM AMBA 4 AXI Coherency Extensions (ACE). By embedding the responsibility for coherency in the hardware and defining a protocol to support it, ARM has addressed a key performance-limiting and power-consuming aspect of multi-processor systems while also assuring cache coherency.
However, this changeover from software to hardware-based cache coherence creates its own set of verification challenges. Every component in an ACE-based system has become more complex, thereby significantly expanding the verification team’s scope and required knowledge.
Coherency schemes are high-risk areas
Coherency management is required because there are multiple copies of the same data in different caches throughout the system. Since data in each cache can be modified locally, the risk of using stale data is high. Therefore, it is essential to provide a mechanism that manages when and how changes can be made. Today, coherency management is most typically implemented in software, which consumes processor cycles and power that could better be applied to user applications.
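The hazard described above can be made concrete with a small model. The sketch below is illustrative only, not ACE semantics: it uses a simplified Invalid/Shared/Modified state set (ACE defines its own, richer states), and the `Cache`/`Interconnect` classes are hypothetical. The point it demonstrates is the core hardware-coherency mechanism: a write to a shared line snoops and invalidates every other cached copy, so no reader can ever see stale data.

```python
# Illustrative sketch (NOT ACE semantics): hardware coherency means a write
# to a shared line invalidates all other cached copies before proceeding.
INVALID, SHARED, MODIFIED = "Invalid", "Shared", "Modified"

class Cache:
    def __init__(self, name, interconnect):
        self.name = name
        self.lines = {}            # address -> (state, data)
        self.bus = interconnect
        interconnect.attach(self)

    def read(self, addr):
        state, data = self.lines.get(addr, (INVALID, None))
        if state == INVALID:       # miss: fetch freshest copy, mark Shared
            data = self.bus.fetch(addr)
            self.lines[addr] = (SHARED, data)
        return self.lines[addr][1]

    def write(self, addr, data):
        self.bus.snoop_invalidate(addr, requester=self)
        self.lines[addr] = (MODIFIED, data)

    def snoop(self, addr):
        if addr in self.lines:     # drop our now-stale copy
            self.lines[addr] = (INVALID, None)

class Interconnect:
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def fetch(self, addr):
        # the freshest copy wins: a Modified line beats main memory
        for c in self.caches:
            state, data = c.lines.get(addr, (INVALID, None))
            if state == MODIFIED:
                return data
        return self.memory.get(addr, 0)

    def snoop_invalidate(self, addr, requester):
        for c in self.caches:
            if c is not requester:
                c.snoop(addr)

bus = Interconnect(memory={0x40: 1})
cpu0, cpu1 = Cache("cpu0", bus), Cache("cpu1", bus)
cpu0.read(0x40)               # both caches now hold the line Shared
cpu1.read(0x40)
cpu0.write(0x40, 2)           # snoop invalidates cpu1's copy
assert cpu1.read(0x40) == 2   # cpu1 re-fetches the fresh data, not stale 1
```

Without the `snoop_invalidate` call, cpu1 would keep returning its stale value of 1; that is exactly the failure mode coherency hardware exists to prevent.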
Though the protocol mechanisms to achieve this are conceptually simple, the implementation (involving multiple finite state machines all operating concurrently) is surprisingly complex. Bottom line: cache-coherent systems are high-risk elements of the design; they are difficult to design and even more difficult to verify. And, at the end of the day, you need a way to confidently sign off that your system is cache coherent. This is a key verification challenge.
Another major challenge in verifying an ACE-based system is the tremendous size of the verification space. The cross between ACE specification elements such as Domain, Transaction Types, Response Types, and Cache States is huge, and every combination and permutation must be completely verified. Yet the logical behavior is only part of describing activity on the bus. The sequencing and timing of accesses to shared cache lines must also be accurately modeled and verified.
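A rough back-of-the-envelope sketch shows how quickly this cross grows. The lists below are abbreviated, illustrative subsets of the spec's enumerations (the full ACE transaction set is considerably larger, and many combinations are architecturally illegal and must be excluded by the verification environment), but even these small subsets yield hundreds of combinations before sequencing and timing are considered.

```python
# Sketch of the ACE verification-space cross. The lists are abbreviated,
# illustrative subsets, not the full spec enumerations.
from itertools import product

domains = ["NonShareable", "InnerShareable", "OuterShareable", "System"]
txn_types = ["ReadShared", "ReadUnique", "CleanUnique",
             "MakeUnique", "WriteBack", "WriteClean", "Evict"]
resp_types = ["OKAY", "EXOKAY", "SLVERR", "DECERR"]
cache_states = ["Invalid", "UniqueClean", "UniqueDirty",
                "SharedClean", "SharedDirty"]

cross = list(product(domains, txn_types, resp_types, cache_states))
print(len(cross))   # 4 * 7 * 4 * 5 = 560 combinations already
```

And this counts only the static, logical dimensions; layering legal orderings and timings of accesses to shared lines on top multiplies the space further.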
The keys to verifying ACE-based, cache-coherent designs
There are three major capabilities necessary to verify ACE-based designs. They are:
1. Mimicking all possible scenarios to cover the full verification space
2. Ensuring coherency and system compliance with the ACE specification
3. Measuring coverage and ensuring verification completeness
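The third capability, coverage measurement, can be sketched in miniature: record which bins of the cross have actually been exercised by the testbench and report the fraction hit. The bin names and the `sample` helper below are illustrative, not taken from any particular VIP.

```python
# Minimal coverage-tracking sketch: which (transaction, cache-state)
# cross bins has the testbench exercised? Bin names are illustrative.
from itertools import product

txns = ["ReadShared", "ReadUnique", "WriteBack"]
states = ["Invalid", "UniqueDirty", "SharedClean"]
bins = set(product(txns, states))     # 9 cross bins
hit = set()

def sample(txn, state):
    hit.add((txn, state))

# observed activity from a hypothetical simulation run
for txn, state in [("ReadShared", "Invalid"),
                   ("ReadUnique", "SharedClean"),
                   ("WriteBack", "UniqueDirty")]:
    sample(txn, state)

coverage = 100 * len(hit & bins) / len(bins)
print(f"{coverage:.1f}% of {len(bins)} cross bins hit")
```

Production VIP does this with full-scale coverage models over the complete ACE cross, but the principle is the same: completeness is measured, not assumed.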
This article describes what to look for when considering ACE verification solutions and discusses in detail the three requirements for ACE Verification IP (VIP).