Minimizing yield fallout by avoiding over- and under-testing in at-speed test

Summary: In this article the authors discuss the problems associated with SoC at-speed testing, namely over-testing and under-testing, which affect yield, and provide suggestions on how to overcome them.

In the nanometer technologies used for automotive SoCs, most defects on silicon are due to timing issues. Thus, at-speed coverage requirements in automotive designs are stringent, and engineers expend a lot of effort to achieve higher at-speed coverage. The principal challenge is to achieve silicon of the desired quality with high yield at the lowest possible cost. In this article we discuss the problems associated with over-testing and under-testing in at-speed testing, which can result in yield issues, and provide a few suggestions that can help to overcome them.

The primary objective of at-speed testing is to detect any timing failure that may occur on silicon at its operating frequency. The key enabler is logic that generates controllable clock pulses at the same frequency required for functional operation. The preferred way to supply controlled clock pulses is from the tester (ATE) through the input pads, as this reduces complexity and minimizes the additional test logic that needs to be built on top of the design.

However, this scheme will have frequency limitations because pads generally cannot support very high frequency clocks. So on-chip phase-locked loops (PLLs) and oscillators are used to provide clock pulses. Free running clocks from these sources cannot be used directly, however, because first we have to shift vectors through scan chains at slow frequency (shift frequency), capture at functional frequency, and then flush out data at shift frequency. We need controllable pulses while capturing at functional frequency, which can be achieved by using the chopper logic. A typical clock architecture with at-speed clocking is shown in Figure 1.

Figure 1: A typical clock architecture with at-speed clocking
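The shift/capture clock selection described above can be illustrated with a small behavioral sketch. This is a simplified model for intuition only, not the actual chopper RTL; the function name, edge-list representation, and two-pulse default (launch plus capture) are assumptions for illustration.

```python
# Behavioral sketch of clock "chopper" logic: during scan shift the slow
# shift clock drives the flops; during the capture window, exactly
# n_pulses edges of the free-running PLL clock are released, then the
# clock is gated off again.

def chopped_clock(pll_edges, shift_edges, capture_window, n_pulses=2):
    """Return (time, source) tuples for the clock edges seen by the flops.

    pll_edges, shift_edges: sorted lists of rising-edge times (ns).
    capture_window: (start, end) interval in which capture occurs.
    n_pulses: at-speed pulses to release (2 = launch + capture).
    """
    start, end = capture_window
    out = []
    # Shift phase: slow clock edges outside the capture window.
    out += [(t, "shift") for t in shift_edges if t < start or t > end]
    # Capture phase: release only the first n_pulses PLL edges.
    released = [t for t in pll_edges if start <= t <= end][:n_pulses]
    out += [(t, "pll") for t in released]
    return sorted(out)
```

For example, with a 100 ns shift clock and an 8 ns PLL clock, only two fast pulses appear inside the capture window; everything else stays at shift speed.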

For any SoC, STA (Static Timing Analysis) sign-off is integral to validating timing performance. Timing sign-off ensures that the silicon will operate at the desired functional frequency. The same logic applies to at-speed testing. STA sign-off must be done for the at-speed mode along with the functional modes, because the clock path might differ in at-speed mode, and the added test control logic needs to be timed as well. The chopper logic is not exercised in normal functional mode, so its timing requirements must be closed separately for the at-speed mode.

Ideally, closing timing in at-speed mode should not be a problem if the change in clocking is made in the common path, such as at the start of the clock path, so that the change is common to both launching and capturing flops and hence does not affect the setup and hold timing of the design. The test control logic generally works at a slow frequency or is static, and hence is not difficult to time.

Typical SoC clocking scheme
However, modern SoC designs are not that simple. High-performance and low-leakage requirements mean that a single SoC contains various clock sources, such as PLLs, oscillators, and clock dividers. Depending upon the architecture, there can be a number of IO interfaces, such as SPI, JTAG, and I2C, operating on an external clock running at a few MHz. As a result, different parts of the SoC can operate at different frequencies.

Here’s where things get complex. The clocking solution (chopper logic) discussed earlier is not sufficient for complex chips operating at different frequencies. In at-speed testing, these complexities raise the problems known as under-testing and over-testing, which in turn lead to the need for optimal testing.

Over-testing happens when logic is tested at a higher frequency in at-speed mode than its frequency of operation in functional mode. Referring to Figure 2, over-testing happens if the pll_clock is provided to low-frequency modules like the watchdog and RTC during at-speed mode. One key reason for such an approach is the simplicity of the test clock path, as it requires only minimal change to the functional logic. In our example, we just need to bypass all divided clocks, RC oscillator clocks, and external clocks with the scan clock, which in turn is controlled by the PLL clock.

Figure 2: Memories and flash operate on a divided PLL clock while the platform works at the real pll_clock. The internal RC oscillator feeds clocks to blocks like the RTC (Real Time Counter) and watchdog timer, which require very slow clocks. Blocks like display masters have both an IPS interface and a camera interface; the IPS interface generally works at system frequency, while the camera logic works on a slower clock provided from the outside world. IO interfaces like SPI and JTAG work at a few MHz. Thus the overall configuration of an SoC requires multiple blocks working at multiple frequencies.

Under-testing happens when logic is tested at a slower frequency in at-speed mode than its intended frequency of operation. This scenario generally arises when it is not possible to supply a test clock of exactly the functional frequency, while at the same time closing the design at the high frequency is also not possible, due to large data-path delays or technology constraints. In this case we are forced to supply a clock of lower frequency.

Thus it is necessary to test the silicon for defects at exactly the same frequency as the functional frequency. Any deviation will lead to issues of either over-testing or under-testing:

Closing the design at higher frequencies for at-speed testing, when the functional logic is intended to work at slower frequencies, affects the area and power of the overall design. In timing-critical designs, the implementation tools will use high-drive-strength cells and may even require low-Vt cells to meet these frequency targets.

Even if the timing of the design is closed at the higher frequency, at the cost of power and area, we may be unnecessarily pessimistic in our yield calculation, producing unrealistic yield fallout during at-speed testing. For example, consider a design with two clock domains, domain1 at 120 MHz and domain2 at 80 MHz, where we close timing flat at 120 MHz for the whole design to simplify the at-speed clocking architecture. ATPG patterns for both domains are then generated at 120 MHz. Due to process variability, on silicon domain1 works fine at 120 MHz but domain2 works only at 110 MHz, so the die fails the at-speed patterns. Though the chip is good enough for its functional requirements, we declare the die faulty, and this reduces our yield.

In the case of under-testing, at-speed patterns will not guarantee that the chip will actually work at the intended frequency. Since bad dies can pass the at-speed tests, the original purpose of at-speed testing, filtering out bad dies, can be defeated. In this case we are over-optimistic in our yield calculation.
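The yield pessimism in the two-domain example above can be made concrete with a small numerical sketch. The die fmax values below are hypothetical, chosen only to mirror the scenario in the text; the function and data names are not from any real flow.

```python
# Hypothetical illustration of over-testing pessimism: three dies, each
# with a measured fmax per clock domain (MHz). All three meet the real
# functional requirement (120 MHz / 80 MHz), but every one of them fails
# when all patterns are run flat at 120 MHz.

def passes(die_fmax_by_domain, test_freq_by_domain):
    """A die passes if every domain's fmax meets its test frequency."""
    return all(fmax >= f for fmax, f in
               zip(die_fmax_by_domain, test_freq_by_domain))

dies = [
    (130, 110),  # (domain1 fmax, domain2 fmax): good functionally
    (125, 90),   # good functionally
    (121, 82),   # good functionally, barely
]
required = (120, 80)    # real per-domain functional requirement
flat_120 = (120, 120)   # over-testing: everything tested at 120 MHz

good_functionally = sum(passes(d, required) for d in dies)
pass_at_speed = sum(passes(d, flat_120) for d in dies)
```

Under these assumed numbers, all three dies are functionally good, yet none passes the flat 120 MHz at-speed test: 100% artificial yield fallout.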
Having understood the drawbacks, let us look at the reasons why over-testing and under-testing exist in an SoC:

Simplicity of clock architecture
Given so many clock sources in functional mode, the easiest way is to provide a few controllable test clocks in at-speed mode.

Figure 3: Muxing the PLL clock with the external clock even for at-speed mode, a case of over-testing.

Let us take the example of a DSPI module. The IP works on two clocks: an external clock of 15 MHz and a functional PLL clock of 120 MHz for the internal logic. As shown in Figure 3, the easiest and simplest test clock solution is to mux the PLL clock with the external clock even for at-speed mode, a case of over-testing.
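The severity of this over-testing is simple period arithmetic, sketched below for the frequencies in the DSPI example (the variable names are ours):

```python
# If the 15 MHz external-clock logic of the DSPI is tested with the
# 120 MHz PLL clock, it must meet a clock period 8x shorter than any
# period it ever sees in functional operation.

functional_mhz = 15.0    # external DSPI clock in functional mode
test_mhz = 120.0         # PLL clock muxed in for at-speed mode

functional_period_ns = 1e3 / functional_mhz  # ~66.7 ns available
test_period_ns = 1e3 / test_mhz              # ~8.33 ns demanded
over_test_factor = functional_period_ns / test_period_ns
```

Paths that functionally have roughly 66.7 ns to settle are suddenly required to close in about 8.33 ns, an 8x over-constraint that drives the area, power, and yield penalties discussed earlier.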

