Deformations of IC Structure in Test and Yield Learning
W. Maly, A. Gattiker1, T. Zanon, T. Vogels, R. D. Blanton and T. Storey2
Carnegie Mellon University, Pittsburgh, PA 15213, 1 IBM, 2 PDF Solutions
[email protected]
Abstract
This paper argues that the existing approaches to modeling
and characterization of IC malfunctions are inadequate for
test and yield learning of Deep Sub-Micron (DSM) products.
Traditional notions of a spot defect and local and global process variations are analyzed and their shortcomings are
exposed. A detailed taxonomy of process-induced deformations of DSM IC structures, enabling modeling and characterization of IC malfunctions, is proposed. The blueprint of a
roadmap enabling such a characterization is suggested.
Keywords: yield learning, fault modeling, defects, diagnosis, defect characterization.
1 Introduction
The motivation, purpose and overall structure of this paper have already been explained in the abstract above. A discussion of prior and relevant publications would be the next natural component of this paper, but it, too, is skipped, even though there exists a substantial body of relevant publications in the related domain (some of them are listed as references in [1, 2]). It is skipped to avoid unnecessary discussion of the weaknesses of related results presented in the past. Simply put, the majority of published papers with an IC technology-oriented flavor (prime examples being the following papers co-written by the first author of this paper [3, 4, 5, 6, 7, 8]) do not offer sufficient insight into failure mechanisms to address the challenges posed by DSM-era products.
A substantial portion of this paper attempts to justify the above, somewhat provocative claim. The remaining portion of the paper then suggests directions of research that we should undertake to truly assist test and yield learning of modern-era ICs.
2 Test and Yield Learning
Both wasteful (low-yield) manufacturing and the need for IC test have the same root: the inability of the IC manufacturing process to deliver identical IC devices that always meet the required functionality. In other words, process-induced deformations cause some of the fabricated ICs to fail to deliver the required functionality, and therefore all of the fabricated devices must be tested. This trivial observation leads to a fundamental, but also straightforward, conclusion: test and yield learning cannot be adequately addressed without sufficient understanding of the differences between a real IC and the idealization of it that is used to guide the design. These differences, and more precisely the methodologies used to model them, are the subject of the discussion reported below.
The test and yield learning domains, despite undeniable
overlap, have been evolving as two separate disciplines,
driven by very different visions and intellectual mind-sets.
Test has been dominated by the Boolean algebra perspective
of IC functionality, while yield learning has been biased by
Paper 34.1
856
process development approaches and expertise. Such a separation has been acceptable for a long time because, on both sides of the fence that separates these two domains, simple and inexpensive models that hide the complexity of the reality "on the other side of the fence" have been developed. Test has been using the concept of the "stuck-at" fault model, and yield learning has been exploiting the notion of the "killer defect". Both of these modeling "shortcuts" have proved to be very effective in hiding the complexity associated with the true mechanism of a circuit's malfunctioning. In this way, test could concentrate on the question: how to find a malfunctioning product? Yield learning could focus on the questions: what are the root causes of killer defects and how can they be avoided?
Strongly polarized by these two questions, both the yield learning and test domains have been developing visions, methodologies, algorithms and tools. And almost always this has been done by using, directly or indirectly, all, or a certain specific subset, of the following nine simplifying assumptions:
A 1. Adequate gate-level representation of all deformations can be achieved with a single instance of netlist alteration involving no more than two nets (including VDD and GND);
A 2. Adequate gate-level representation of all IC structure deformations can be achieved while ignoring both the geometry of the IC layout and the geometry of the deformation itself;
A 3. The size of the deformed region is either comparable to the size of a single transistor (small size) or larger than the manufactured wafer (large size);
A 4. The spatial distribution of the centers of small-size deformations is uniform across a single manufacturing wafer (it may vary, however, between wafers);
A 5. The shape of the small-size deformed region is very regular (usually circular);
A 6. The spatial distributions of small-size deformations are product independent;
A 7. Large-size deformations can be well characterized by two random variables describing "global" (affecting all fabricated devices the same way) and "local" (affecting each single circuit element) variations;
A 8. A deformation of any IC structure that may cause circuit malfunction does not change its properties over time;
A 9. Visually detectable deformations of electrically vital circuit components cause tester-detectable circuit malfunction.
The above set of assumptions has proved very useful. Note that without assumptions A1 and A2, real progress in IC test, built on top of the single stuck-at and bridging fault concepts, could not have been obtained. It is also fair to note that trust in the correctness of the killer-defect concept has enabled the market success of multi-billion-dollar companies selling defect scanners, such as KLA-Tencor. Of course, none of the above could have taken place had even a single one of the assumptions A1 through A9 been substantially incorrect.
ITC INTERNATIONAL TEST CONFERENCE
0-7803-8106-8/03 $17.00 Copyright 2003 IEEE
Hence, one has to conclude that the somewhat artificial and rather simplistic separation of test and yield learning, reinforced by assumptions A1 through A9, has in reality been very instrumental in the spectacular results achieved by the IC industry. On the other hand, it is also evident that the above assessment is not impartial and arrogantly ignores many research results that clearly undermine the correctness of each of the assumptions A1 through A9.
So, it is natural to ask: how can we reconcile the questionable correctness and completeness of these assumptions with successful industrial test and yield learning practice? Can we continue to ignore such an obvious contradiction and at the same time confront the reality of DSM technologies? What should we do if the answer is "no"? An attempt to answer these questions is the subject of a substantial portion of this paper.
Figure 1. Pictures of a 200 mm silicon wafer [9].
3 IC Structure Deformations
IC manufacturing is a sequence of inherently unstable processing steps resulting in a "sandwich" of conducting, semiconducting and insulating layers. Regardless of the achieved level of perfection in the control of all IC processing steps, the physical characteristics of any point in a fabricated device differ from the characteristics of the corresponding point in any other device, and differ from the expected and desired characteristics specified for the product. In this paper we call these differences deformations of IC structures.
3.1 Examples of IC Structure Deformations
This subsection presents a collection of real-life examples of IC structure deformations that may result in tester-detectable IC malfunctions. (The presented collection may at first appear to be a somewhat scattered list of unrelated facts, focused only on the geometry of each particular deformation. This choice has been dictated, however, by a rationale that should become more obvious once the conclusions of this section are presented.)
3.1.1 Gradual Deviations of Physical Parameters
There are plenty of examples of a gradual change in the physical parameters of a layer formed on the surface of a silicon wafer. Such gradual changes are typically the result of a slowly fluctuating process condition such as temperature. As an example, Fig. 1a shows a picture of a silicon wafer after thermal oxidation. Different shades in the picture indicate differences in oxide thickness, highlighted with superimposed contour lines. Fig. 1b shows a re-created map of the variation of polysilicon critical dimension measured on a real wafer [9].
3.1.2 Regions of Missing and Extra Material
Regions of extra or missing material, added to or subtracted from the pattern defined by the IC design geometry, are the most often recognized reasons for test failure and, consequently, yield loss. They are assumed to be "killer defects", i.e., defects that are assumed to cause circuit malfunction and consequently yield loss.
There are many types of such defects. Two examples of so-called "spot defects" are shown in Figs. 2a and b. Typically it is assumed that all spot defects are small and have regular shapes. There exist, however, defects with well-defined boundaries, such as pinholes, stringers, scratches and large-area defects, whose size or shape, or both, are very different from the traditionally accepted vision of what a small rounded spot defect should look like (Figs. 2c, d, e and f).
Figure 2. SEMs of various IC structure deformations [10].
For instance, it is worth noticing that the pinhole in the gate oxide shown in Fig. 2c has some unique characteristics: it is very small and has developed over time (a "mature" pinhole is shown in the picture). Pinholes are likely to occur in so-called "weak spots" of the gate oxide, i.e., the locations of pinholes are not uniformly distributed over the entire wafer. Another example of small (i.e., very difficult to see) defects is silicon dislocations. An especially important property of dislocations is that they are generated only in specific locations of the IC structure that are strongly correlated with the locations of high thermal stress. Another category of spot defects is stringers (see Fig. 2d). They are very thin "wires" bridging nonequipotential IC regions, and they are difficult to detect by standard optical techniques. They are formed as a result of a few random events occurring in the process in specific locations of the layout. On top of that, they have a relatively large spectrum of resistances. They also typically involve many nets in the circuit. Scratches of various kinds (e.g., Fig. 2e) also have characteristics very different from the typical spot defect. This is especially true for scratches generated by Chemical-Mechanical Polishing (CMP). Scratches may be of various sizes, usually causing a wide spectrum of shorts and opens.
Finally, one should mention large-size deformations. They introduce opens and shorts to a larger number of IC nets and, therefore, may cause complex circuit behavior. They also may have very complex forms and shapes. (See the example in Fig. 2f.) Such large deformations can be seen not only via optical means but also as clusters of failing memory cells in bit maps, such as the one shown in Fig. 3.
Figure 3. Large cluster of faulty cells in an SRAM bit map.
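This kind of bit map reading can be automated. The sketch below classifies a set of failing cell coordinates into the spot-defect-like patterns versus cluster patterns discussed later in the paper; the function name and classification rules are our own illustrative assumptions, not an algorithm from the paper.

```python
# Minimal sketch: a lone failing cell or a complete row/column is
# compatible with a small spot defect, while a 2-D cluster of failing
# cells suggests a large-area deformation. Rules are illustrative.

def classify_bitmap_fails(fails, rows, cols):
    """fails: set of (row, col) failing cells in a rows x cols array."""
    if len(fails) == 1:
        return "single bit"
    used_rows = {r for r, _ in fails}
    used_cols = {c for _, c in fails}
    if len(used_rows) == 1 and len(fails) == cols:
        return "single row"
    if len(used_cols) == 1 and len(fails) == rows:
        return "single column"
    if len(used_rows) > 1 and len(used_cols) > 1:
        return "cluster (possible large-area deformation)"
    return "other"

# A full failing row in an 8x8 array -> spot-defect-like pattern.
print(classify_bitmap_fails({(3, c) for c in range(8)}, rows=8, cols=8))
```

A real classifier would of course need the full pattern catalog (double bit, double row, cross, etc.); the sketch only shows the shape of the decision.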
3.2 Deformation Statistics
To be useful, the above list of deformations should be substantiated with estimates of the likelihood of occurrence of each type of deformation. Unfortunately, as of today there is no simple way of obtaining the required statistics. In this section we attempt to assess how much we can learn about deformation statistics. Eventually we would like to assess the possible ranges of probabilities relative to the probability of the traditional, and still dominant, type of small spot defect causing shorts in a conducting layer (Sec. 7.2). We begin with a discussion of the most popular source of information about defects: Scanning Electron Microscopy (SEM) images.
3.2.1 SEM Images of Defects
SEM images such as the ones shown in Fig. 2 are very powerful for the analysis of defect mechanisms, but they are almost counterproductive for obtaining statistically valid information about defect characteristics. The problem is that the field of view of an SEM is so small that searching for small defects with an SEM is very inefficient and, therefore, very expensive. As a result of both the power of visualization and the difficulty of defect localization, failure analysis is very often likely to focus on the defects that are found first, not on the defects that occur most frequently in the process. This also leads to some misleading statements (easy to find in some test-oriented papers) claiming knowledge about failure modes that are "newly important in DSM technologies". The truth is that very rarely does there exist a statistically valid sample size allowing for such a claim.
3.2.2 Defect Scanners
Defect scanners are perhaps the most "prolific" source of information about spot defects with well-defined boundaries. Such scanners detect irregular deviations in the light scattered from the surface of the fabricated device. The problem is that, in this way, all deviations in the geometry of the surface of the scanned layer are recorded. This means that both electrically relevant defects and "cosmetic irregularities" are accounted for in the same manner [12]. And this problem seems to become more and more complicated to address with the decreasing feature size of each subsequent technology generation. The reason is that, to achieve the desired minimum feature size, modern technologies use complex image enhancement techniques. These techniques are not applicable to optical scanning of the wafer surface, with the result that the resolution of optical defect scanners has difficulty tracking the rate of decrease of the minimum feature size.
Despite the above shortcomings, it has been confirmed many times that defect density is inversely proportional to defect size [12] and that defect density is likely to be design-pattern dependent [10]. We will use this information later to formulate some of the conclusions of this paper.
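The inverse size dependence can be written as a simple power-law density model. The sketch below is illustrative only: the exponent and the reference density are our own assumptions, not values from [12].

```python
# Sketch of the empirical observation above: defect density falls off
# with defect size roughly as a power law. Parameter values are
# hypothetical, chosen only to illustrate the shape of the model.

def defect_density(size_um, d_ref=100.0, size_ref_um=0.1, p=3.0):
    """Defect density modeled as d_ref * (size_ref / size) ** p."""
    return d_ref * (size_ref_um / size_um) ** p

for s in (0.1, 0.2, 0.4, 0.8):
    print(f"{s:.1f} um -> {defect_density(s):8.3f} (relative density)")
```

Doubling the defect size under this assumed model cuts the density by a factor of eight, which is why the large-size tail, although long, contributes relatively few defects.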
3.2.3 Test Structures
Test structure measurements are the most useful source of information about deformations. Discussion of test structure data will be continued in Section 6.1. Here only one specific test structure result is mentioned. Fig. 4 shows the distribution of defect sizes in a modern 0.13 µm copper interconnect technology. The histogram reports the number of occurrences of shorted lines in a special-purpose defect monitoring structure [13]. The key conclusion, used later in this paper, is (see Fig. 5) that the number of defects that have a diameter larger than 2S+W, and thus short more than two electrical nodes in an IC, may be as high as almost 50% (in this specific case).
Figure 4. Defect size distribution for spot defect shorts for
copper metal layer obtained from [13] test structures.
Figure 5. Segment of the layout of a metal layer.
This result seems to be "normal" for many modern copper technologies, although it should not be taken as certain.
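The 2S+W threshold above can be combined with an assumed size distribution to estimate the multi-node-short fraction. The sketch below is a back-of-the-envelope model under an illustrative power-law distribution; the exponent and dimensions are our assumptions, not the measured data of [13].

```python
# Sketch: fraction of spot defects with diameter > 2S + W (i.e., large
# enough to short more than two metal tracks), assuming an illustrative
# power-law defect size distribution p(x) ~ x**-p for x >= x0.

def frac_multi_node_shorts(x0_um, spacing_um, width_um, p=3.0):
    """P(defect diameter > 2S + W) under a Pareto-like size distribution."""
    threshold = 2.0 * spacing_um + width_um
    if threshold <= x0_um:
        return 1.0
    # Survival function of the power-law distribution with minimum x0.
    return (x0_um / threshold) ** (p - 1.0)

# Hypothetical numbers: minimum defect size 0.2 um, track spacing
# S = 0.2 um, track width W = 0.2 um -> threshold 2S + W = 0.6 um.
print(frac_multi_node_shorts(0.2, 0.2, 0.2))
```

With tighter pitches the threshold 2S+W shrinks toward the minimum defect size, so the multi-node fraction grows, consistent with the near-50% figure quoted above for an aggressive technology.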
3.3 Indirect Pointers to Deformation Characteristics
Manufacturing often uses various maps on which test results are superimposed on top of an image of the wafer, die or other design segment. Examples of the most popular types of such maps are described below.
3.3.1 Wafer Maps
Values of test structure measurements, or any other test results, mapped onto a range of colors and then depicted as a wafer-shaped mosaic of appropriately colored dies are used as a very common way of depicting a gradual change in the value of a parameter of interest across the wafer. It is important to stress that, unlike the maps in Fig. 1, the values of the parameter are usually readings from a single small test structure that are "extrapolated" onto the entire die area. In other words, to produce a wafer map, spatial sampling is very often performed, in which measurements taken from a small region of the wafer are used to represent a relatively larger die area.
3.3.2 Bit Map Statistics
As indicated before, a deformation's size and nature can be deduced from the pattern of failing bits in a memory bit map. To gain some insight into the characteristics of deformations of a DSM process, a set of 1000 medium-size failing SRAM dies from a large number of manufacturing wafers fabricated in a 0.13 µm process has been analyzed.
A total of 1580 failure pattern instances have been extracted. Approximately 60% of them were of one of the following types: single bit, double bit, single row, single column, double row, double column, cross, double cross. We assume that each of these bit map failure patterns is likely to be caused by a single small spot defect located between neighboring lines (and hence does not contradict the notion of the traditional well-shaped small spot defect). The remaining 40% of the failure pattern instances had a more (and also much more) complicated geometry. This means that in the analyzed sample around 40% of deformations could not be represented by the traditional model of a simple spot defect.
In addition, it was determined that a substantial number of dies had more than one failure pattern instance. Details are shown in Table 1. As one can see, the upper limit for the number of single small spot defects is 657. The rest of the patterns indicate either large-area deformations or clusters of multiple small defects.
Table 1. Number of pattern instances per die.
  Number of pattern instances:   1    2    3    4    5    6    7, 9, 12
  Number of dies:                657  209  78   29   17   7    1
3.3.3 Results of IDDQ Test
IDDQ measurements can provide a rich source of information for gaining insight into defect characteristics. In this subsection we use them to provide quantitative assessments of what portion of fails cannot be explained by single spot defects. We use the idea of the current signature [14, 17, 18, 19, 20], which identifies the number of unique defect-related IDDQ levels, i.e., current signature "steps," existing in a chip's measured IDDQ results. As argued in [18], in most cases a single spot defect will result in only a few unique levels of IDDQ. As a result, we can take a current signature with many levels as an indication of a large-area deformation. To illustrate this statement we analyzed the voltage-test-failing chips from the Sematech Experiment [15, 16]. As an example [21], Fig. 6 shows the physical failure analysis (PFA) results for an IC with a many-level current signature. Note that the defect affects 28 transistors.
Figure 6. PFA results for current signature with many levels -- poly to substrate and poly to diffusion leakage across 28 transistors [21].
Fig. 7a shows the IDDQ measurements for the IC from Fig. 6. Notice that the measurements are spread out over a range of about 200 µA. For comparison, Fig. 7b shows the IDDQ measurements for another IC from the same Sematech Experiment that has IDDQ of similar magnitude, but only two levels of IDDQ. In Fig. 7b, the measurements are tightly clustered around just two IDDQ values. (This indicates that the large number of IDDQ levels in Fig. 7a is due to the existence of many unique ways for the IDDQ to flow, rather than to measurement noise.)
Figure 7. IDDQ for IC with many IDDQ levels (a) compared with an IC having only two IDDQ levels (b).
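The level counting behind a current signature can be sketched in a few lines: collapse the per-vector IDDQ readings into distinct current levels, treating readings within a noise tolerance as the same level. The tolerance value and the simple greedy merging rule are our own illustrative assumptions, not the exact procedure of [14, 17, 18, 19, 20].

```python
# Sketch of current-signature "step" counting: sort the per-vector IDDQ
# readings and start a new level whenever a reading is farther than a
# noise tolerance from the current level. Tolerance is hypothetical.

def count_iddq_levels(iddq_ua, tol_ua=5.0):
    """Number of distinct IDDQ levels, merging readings closer than tol."""
    levels = []
    for reading in sorted(iddq_ua):
        if not levels or reading - levels[-1] > tol_ua:
            levels.append(reading)
    return len(levels)

# Readings clustered around two currents -> two levels (cf. Fig. 7b)...
print(count_iddq_levels([451, 452, 453, 548, 551, 552]))
# ...versus readings spread over a wide range (cf. Fig. 7a).
print(count_iddq_levels([450, 470, 495, 520, 560, 610, 650]))
```

A chip whose level count is large by this measure is then flagged as a candidate large-area deformation, per the argument above.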
Table 2: Number of unique levels of IDDQ for different populations of Sematech Experiment chips.
  Population             > 10    > 20    > 30
  All voltage fails      23.6%   14.0%   10.0%
  Subtle voltage fails   7.9%    3.2%    1.5%
  IDDQ-only fails        7.5%    2.9%    1.4%
Fig. 8 presents the results of counting the number of IDDQ levels for the voltage-test-failing chips in the Sematech Experiment [15, 16]. Note that the population of "many-level" dies is substantial.
In general, the results of our analysis show a significant number of chips with large numbers of IDDQ levels. Taking ten as the maximum number of IDDQ levels that can result from a single spot defect, over 20% of the voltage-test-failing population cannot be explained by single spot defects. Contrary to what may be a natural assumption, Table 2 indicates that such defects are not necessarily easy detects at test. Specifically, significant portions of the subtle voltage fails (those that fail only speed-related tests or exhibit a test supply voltage sensitivity) and of the IDDQ-only
fails (here defined as chips that pass all voltage tests but exhibit IDDQ of at least 5 µA) also exhibit many-level current signatures.
Figure 8. Number of unique levels of IDDQ for Sematech Experiment voltage test fails (with a close-up for numbers of levels > 10).
Another source of evidence of the inadequacy of the single-spot-defect assumption comes from looking at post-burn-in behavior (Fig. 9). Specifically, Fig. 9a shows an example burn-in-fail chip that, before burn-in, showed evidence of a defect causing three levels of IDDQ. After burn-in (Fig. 9b), the IDDQ signature of the same chip shows an additional current level, which indicates that an additional defect has been activated.
Although the Sematech Experiment burn-in sample size was small, an estimate of the portion of burn-in fails for which there is evidence of a new defect in effect after burn-in is 75%. Hence, the burn-in observations provide strong evidence that more than one defect can exist and affect a die, contradicting the single-defect assumption. They also highlight that deformations and their effects on circuits are not permanent; instead, they can undergo modifications with time and use.
Figure 9. IDDQ for burn-in fail showing evidence of a current-causing defect before burn-in and an additional defect after burn-in.
3.4 Time Dependence of Deformation Statistics
For lack of space we could not include a plot of yield as a function of time. Such a plot could illustrate the extent of the random, process-induced deformations inflicted on the fabricated IC devices. In a typical yield plot one can see that there are periods of time in which yield fluctuates with relatively low variance around a stable and high mean value. These "calm" periods of time are interrupted by periods of severe yield losses. This, in turn, implies that some of the process's more severe deformations occur and disappear according to complex stochastic processes of some kind.
4 Taxonomy of Deformations in DSM Era
The examples of deformations and the observations made in Section 3 provide, in our opinion, sufficient evidence1 to claim that there is an urgent need for a new way of describing the characteristics of process-induced IC structure deformations. In this section we summarize a proposal [22, 23, 24] for a new classification of deformations, which may be general enough to cover the needed spectrum of DSM deformations and is adequate with respect to the points made in this paper.
4.1 Classification of Deformations
The proposed classification starts with the observation that there are two major categories of deformations:
• Deformations that gradually change across the entire wafer and do not have clearly defined borders, which we call "boundless", and
• Deformations that have a boundary, which we call "bounded".
The extent of a deformation in each case can be gradual and continuous as well as severe, i.e., such that the resulting properties of the affected area exceed acceptable limits. Typically we will assume that bounded deformations are severe. In other words, bounded deformations are assumed to be equivalent to "killer defects". For gradual deformations we do not assume that the range of deformation excludes values of a severe nature; we only assume that the deformation extent is a continuous function of the x and y coordinates. With these assumptions we focus on the geometry of the boundary of bounded deformations and, in the case of boundless deformations, on the nature of the deformation extent as a function of wafer coordinates.
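The gradual-versus-severe distinction can be made concrete with a toy model: represent the deformation extent as a smooth function of wafer coordinates and call a point "severe" where the extent leaves acceptable limits. Everything below (the function, its amplitudes and periods, and the limits) is a hypothetical illustration, not data from the paper.

```python
# Minimal sketch of a boundless (gradual) deformation: a continuous
# function of wafer coordinates built from slow, wafer-scale periodic
# components. Amplitudes, periods and limits are illustrative.
import math

def deformation_extent(x_mm, y_mm):
    """Continuous deformation extent at wafer position (x, y)."""
    return 2.0 * math.sin(2 * math.pi * x_mm / 200.0) \
         + 1.0 * math.cos(2 * math.pi * y_mm / 200.0)

def is_severe(extent, lo=-2.5, hi=2.5):
    """Severe = resulting property outside the acceptable limits."""
    return not (lo <= extent <= hi)

# Where the two components add up, even a gradual deformation can be
# severe, which is why severity is not assumed away for gradual ones.
print(is_severe(deformation_extent(50.0, 0.0)))
```

The same representation extends naturally to the sum-of-periodic-components view of boundless deformations developed later in the paper.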
4.2 Bounded Deformations
Fig. 10 attempts to capture the major geometrical characteristics of bounded deformations. It uses a novel way of ordering these geometries, which mimics the time readings from a clock's face. As one can see in Fig. 10, there are two 12-hour cycles, labeled AM and PM. AM is for the "extra" and PM for the "missing" material deformations. The first six hours of each cycle are for defect clusters with mostly convex elements, and the second six for those with mostly concave elements. 12 AM and 12 PM are used for un-deformed layers, and 6 AM and 6 PM for perfect spot defects. The time periods in between describe clusters of defects with increasing/decreasing sizes and decreasing numbers of defects in the cluster. In addition to the deformation size, which can be measured by the radius of the circle inscribed in the deformed area, one can describe the number and, roughly, the distribution of the components of the cluster of defects. Consequently, one can better assess the impact of the deformation on the layout pattern in the case of deformations that are not of the perfect 6 AM or 6 PM type (i.e., are not perfect spot defects). The classes that are likely to occur in DSM processes are suggested by
2 through 5. (For instance, Fig. 2 shows a very typical shape, which can be labeled as "5 PM active".)
1. For more discussion of the inadequacy of the traditional way of characterizing process-induced IC structure deformations, see [23, 24]. Such discussion has been omitted in this paper due to paper size constraints.
Figure 10. Possible classification of bounded deformations.
4.3 Boundless Deformations
Classification of boundless deformations requires a new and different approach to modeling deformation extent. Here we describe the method [22, 23, 24] which is based on the assumption that a gradual deformation can be described as a sum of 2D (x, y) periodic functions with amplitudes and spatial periods characteristic of the mechanism causing the deformation. For instance, it is assumed that thermal processes cause deformations describable by a function with a period larger than the size of a wafer (called "very slow" deformations). Single-wafer operations cause deformations with a period comparable to the wafer's diameter (called "slow"). A reticle-size period can be used, for instance, to describe deformations caused by distortion of the stepper lens, and a die-size period can be used to describe design-density-related deformations. The transistor-size deformation period, caused for instance by a specific layout pattern such as is characteristic of the regular SRAM layout, we call "fast".
Fig. 11 illustrates the essence of the classification/modeling methodology proposed in this paper. In this figure, the deformation values are depicted by different shades. Subsequent wafer maps (Fig. 11 b-h) show the accumulation of deformations caused by subsequent process steps affecting a particular physical characteristic of a layer in the IC structure. As one can see, in the specific case depicted in Fig. 11, there are seven different "frequency components" contributing to the final "model" of the deformation of interest.
Figure 11. Wafer with boundless deformation shown as the product of an accumulation process of individual deformations with increasing spatial frequency.
5 Deformations, Fault Models and Yield
Both test and yield measurements/estimations are rooted in the notion of a circuit failure, i.e., the model of "fatal circuit misbehavior". Traditionally, the yield domain has used for this purpose the concept of the "killer defect", and test the notion of a fault model. Fault models are used to assist in test pattern generation, fault coverage measurement and fault diagnosis. Yield is measured by a tester, which detects (or misses) the occurrence of a circuit's fatal misbehavior.
The true "work horse" among fault models today still is (and always has been) the stuck-at model, which forms the foundation of modern industrial IC test. The elegance and simplicity of this model are the reason for the widespread belief that it will remain useful for years to come [26]. The authors of this paper share this opinion; however, they have a vision of novel ways of using it [27, 28]. Below, the justification of this opinion is summarized.
To address the key question of this paper about the validity of traditional test and yield learning methodologies, we now need to revisit the set of simplifying assumptions discussed in Section 2. First we will confront these assumptions with the characteristics of the spectrum of DSM process deformations described in Section 3. Then we will make an attempt to foresee the prospects for the validity of these assumptions in the near future.
5.1 Failure Modeling in DSM Reality
The key messages of Section 3 are as follows. First of all, there is a full continuum of sizes of bounded deformations (evidence in Fig. 4 and Section 3.3.2). There is also a variety of these deformations' topologies (examples in Fig. 2). Bounded defects may not be uniformly distributed over the wafer and may be layout dependent [10], or in other words "systematic in nature" [6]. Some fraction of bounded deformations change shape and physical characteristics with time. Boundless deformations have a full spectrum of spatial frequencies in the range between 0.01 [1/cm] and 10^7 [1/cm]. It should be stressed that an adequate characterization of boundless deformations must be capable of capturing process-layout interference -- an important domain from the yield and performance test standpoint. With the above conclusions in mind, we can now assess the validity of assumptions A1 through A9.
It is convenient to begin this assessment with the observation that only a portion of small defects, called "bridging defects" (i.e., defects that short no more than two nonequipotential regions), may obey assumptions A1, A3, A4, A5 and A6. All other defects will invalidate one or more of them. Observe also that from the presented defect size distribution (Fig. 4), the IDDQ signature analysis, and the bit map counts, one can conclude that this fraction may be between 50% and 80%. Hence the data in Section 3 suggest that, for the investigated cases, no less than 20% of failing dies involve deformations outside of the traditional domain defined by the assumptions of Section 2.
This percentage may go as high as 50%. This observation has the following important implications (listed in a sequence corresponding to assumptions A1 to A9):
A 1. A single netlist alteration seems to be sufficient for modeling low-resistivity bridges between two signals, or between one signal and the VDD or GND nets. From the discussion above one can conclude that, in the worst case, only 50% of dies faulty due to metal shorts will obey this assumption. Some of the small poly and active-layer defects can also be modeled as two-node bridges. But some cannot; for instance, pinholes cannot. Another example of a small poly defect, one that must be modeled as a bridge and an open, is described in [28]. Stringers and scratches caused by CMP that span several elements of the interconnect must be modeled by several bridges. In all of these cases a single alteration of the netlist is not sufficient to mimic the impact of the defect on a circuit.
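The multi-bridge point can be made concrete with a toy netlist: a stringer spanning several tracks merges several nets at once, something a single two-net alteration cannot express. The netlist representation and net names below are hypothetical illustrations.

```python
# Sketch: a stringer shorting several adjacent tracks collapses all of
# the involved nets onto one merged net -- i.e., it is equivalent to
# several simultaneous bridges, not one netlist alteration.

def apply_multi_net_short(netlist, shorted_nets):
    """Collapse all shorted nets onto one merged net.

    netlist maps gate name -> list of connected nets.
    """
    merged = "+".join(sorted(shorted_nets))
    return {
        gate: [merged if n in shorted_nets else n for n in nets]
        for gate, nets in netlist.items()
    }

netlist = {"U1": ["a", "b"], "U2": ["b", "c"], "U3": ["c", "d"]}
# A stringer shorting three adjacent tracks: nets b, c and d.
print(apply_multi_net_short(netlist, {"b", "c", "d"}))
```

Modeling the same stringer with a single two-net bridge would miss the third shorted net entirely, which is exactly the gap in assumption A1.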
A 2. This assumption is acceptable if yield numbers and
fault models are needed only for estimating die yield
and test defect coverage is not important. However, when yield learning requires partial yield numbers,
e.g., computed per metal layer or per functional block, defect
size and layout geometry are indispensable [6, 8]. Similarly,
when one wants to use the bridging fault model, layout analysis is necessary to keep test generation cost and test quality
levels under control. The reason is obvious if we note that,
for instance, a bridge between two nodes may occur if and
only if there exists a metal layer in which metal tracks of
these two nodes are immediate neighbors, or all tracks separating them are shorted as well. Hence, only layout geometry decides which bridges may and which may not
occur. Finally, failure analysis cannot be performed without
layout information.
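The neighborhood argument above can be sketched directly; the left-to-right track-to-net assignment below is hypothetical:

```python
def feasible_two_node_bridges(track_nets):
    """Given the left-to-right net assignment of parallel tracks on one
    metal layer, return the two-node bridges a small spot defect can
    cause: only immediate-neighbor tracks belonging to different nets
    qualify. Non-neighbor nets can short only if every intervening
    track is shorted as well (a multi-node fault, not covered here).
    """
    bridges = set()
    for left, right in zip(track_nets, track_nets[1:]):
        if left != right:
            bridges.add(tuple(sorted((left, right))))
    return bridges

# Hypothetical layer with nets laid out as A B A C:
# bridge A-C is feasible (A and C tracks are adjacent), but B-C can
# never occur as a two-node bridge, since B and C are never neighbors.
feasible = feasible_two_node_bridges(["A", "B", "A", "C"])
```

This is exactly why test generation for bridging faults without layout extraction wastes effort on bridges that cannot physically occur.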
A 3. A “binomial” distribution of deformation sizes has never
been correct, but it was justified by the assumption that
“medium size” bounded deformations are easy to find and to
track down in the process. In DSM technologies
the situation has changed completely because of the scale
shift (deformations that used to be considered small are
becoming medium or even large). Due to this shift, the tail of the
defect size distribution (Fig. 4) is long and seems to grow
with the introduction of each new technology.
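A minimal sketch of why such a tail matters, assuming the commonly used inverse-power-law defect size density with exponent p near 3; the values of x0 and p below are illustrative:

```python
def tail_fraction(x, x0, p=3.0):
    """Fraction of defects larger than size x under the widely used
    inverse-power-law size density d(s) ~ 1/s**p for s >= x0
    (p is typically quoted near 3). For p > 1 the survival function
    is (x0/x)**(p-1), so the tail decays only polynomially: 'medium'
    and 'large' deformations remain far more likely than a peaked,
    'binomial'-like size model would predict.
    """
    if x <= x0:
        return 1.0
    return (x0 / x) ** (p - 1.0)

# Illustrative numbers: with x0 = 0.1 um and p = 3, one defect in a
# hundred is still 10x the minimum size.
frac = tail_fraction(1.0, 0.1)  # 0.01
```

As feature sizes shrink, x0 shrinks with them while the physical defect sources do not, which is one way to read the observed lengthening of the tail in Fig. 4.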
A 4. Not always true; however, this paper does not provide
evidence to support this claim.
A 5. The existence of stringers and scratches undermines this
assumption. However, one must remember that these kinds of
deformations are infrequent and should be unlikely in a
mature process.
A 6. This assumption seems to be incorrect and is often undermined by the
notion of a systematic yield loss mechanism. However, there
is very little experimental evidence in the public domain [10]
supporting this claim.
A 7. Modeling boundless and gradual deformations with
two [6, 29], sometimes three [31], random variables has been
sufficient for processes whose variability was dominated by
the instability of thermal processes. Such modeling justified the
split of yield modeling into parametric and functional components [6] and provided the rationale for gate-level delay
modeling and the concept of the “critical path.” Both of them, in
turn, have enabled the conceptual framework for circuit timing
analysis and functional/performance testing. There are, however, at least two major difficulties with such a simplistic
way of modeling boundless deformations. First, correlations
between characteristics of different layers cannot be captured
by such a simple model [32, 33, 34]. Second, the correlation
between a single physical characteristic observed at two different points of the wafer [35] cannot be modeled in a simple way either. The problem is that boundless deformations
have spatial periods spanning the entire spectrum of spatial frequencies. Consequently, the correlation between IC parameters is a
function of x and y, which can be estimated if and only if the
complete spectrum of spatial frequencies is known. And this
cannot be done by modeling only two “averages”: one for the
entire wafer and one for a very small region around a specific x
and y. Hence, this assumption seems not to be valid for DSM
technologies.
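To see why two “averages” cannot stand in for the full spectrum, consider a toy model in which a boundless deformation is a sum of equal-amplitude sinusoids with independent random phases; the frequency list is illustrative:

```python
import math

def correlation_vs_distance(d, freqs):
    """Correlation between the same physical parameter observed at two
    wafer points a distance d (um) apart, for a deformation modeled as
    a sum of equal-amplitude spatial sinusoids with independent random
    phases. For such a process the normalized autocorrelation is the
    average of cos(2*pi*f*d) over the spectrum -- it depends on the
    WHOLE frequency content, not on any two summary averages.
    """
    return sum(math.cos(2 * math.pi * f * d) for f in freqs) / len(freqs)

# Illustrative spectrum spanning several decades (cycles/um):
freqs = [0.0013, 0.017, 0.23, 1.9]
c0 = correlation_vs_distance(0.0, freqs)   # identical points: 1.0
c1 = correlation_vs_distance(3.7, freqs)   # partial decorrelation
```

Changing any single component of `freqs` changes the correlation-versus-distance curve, which is the point of the argument: a wafer-level mean plus a local mean fixes only two numbers, not this whole curve.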
A 8. The assumption about the time stability of the physical characteristics of deformations is, of course, not valid from the reliability-time perspective. In this paper, however, it was meant as
short-term stability, i.e., the absence of change occurring in a
short time interval (comparable to the time of test). From the
analysis of IDDQ test data we concluded that some
defects can change dramatically in such a short time (e.g.,
stringers can break, and pinholes can melt a permanent current path through the gate oxide).
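A crude screen for such short-term instability, assuming repeated IDDQ readings on the same vector are available; the threshold is an illustrative assumption, not a recommended value:

```python
def is_unstable(iddq_reads_uA, jump_threshold_uA=50.0):
    """Flag a die whose repeated IDDQ readings on the SAME vector jump
    by more than jump_threshold_uA between consecutive applications --
    the signature of a defect (e.g., a breaking stringer or a melting
    pinhole) whose physical state changes on the time scale of the
    test itself. A stable defect produces only small, drift-like
    differences between repeats.
    """
    return any(abs(b - a) > jump_threshold_uA
               for a, b in zip(iddq_reads_uA, iddq_reads_uA[1:]))

# Illustrative readings (uA): a stringer breaking mid-test drops the
# quiescent current from ~400 uA to ~12 uA between repeats.
unstable = is_unstable([410.0, 405.0, 12.0, 11.5])
```

Such a screen only detects instability; characterizing what changed still requires the current-signature analysis discussed in Section 3.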
A 9. There is plenty of data obtained from the application of
defect scanners that proves this assumption to be wrong.
5.2 Discussion
It is useful at this point to recall the contradiction described
in Sec. 2, which seems to exist between the sophistication of
failure mechanisms of DSM processes and the simplicity of IC
test practice discussed in Secs. 3 and 4. Simply speaking,
this paper has so far provided evidence that, for a substantial portion of DSM products, failure mechanisms in DSM technologies cannot be captured by a two-line bridging fault model and local and global parametric
variations. The direct question is then: what are the practical consequences of this discrepancy for
yield learning and test? Below we
attempt to answer this question.
5.2.1 Test
There seems to be a consensus in the test community that
stuck-at fault models, applied for production test development purposes, have at the very best been sufficient to assure an acceptable level
of defect coverage. This seems likely to remain the case
in the near future as well, but there is a growing concern
that even 100% fault coverage test sets will produce imperfect results. From the perspective of this paper, however,
production test seems to be the least critical area and is likely to
be the last to abandon stuck-at practice (though the attention paid to its shortcomings is likely to increase).
The only significant problem might be in using fault coverage as a test quality measure. In particular, resistive shorts
and resistive opens pose a serious challenge for test developers, because the behavior of a circuit with such a fault depends
on the analog value of the resistance, which cannot be covered
by the stuck-at-based concept.
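The resistance dependence can be illustrated with a crude voltage-divider sketch; all component values and the threshold fraction are hypothetical:

```python
def bridge_fault_detected(r_bridge, r_pullup, r_pulldown, vdd=1.2,
                          vth_frac=0.5):
    """Crude voltage-divider sketch of a resistive bridge between a net
    driven high (through an effective pull-up resistance r_pullup) and
    a net driven low (through r_pulldown). The victim-low node rises
    to a voltage set by the ANALOG bridge resistance; a downstream
    gate misreads it only if that voltage crosses its switching
    threshold. A stuck-at model has no place to put r_bridge at all.
    All resistances in ohms; parameter values are illustrative.
    """
    v_low_node = vdd * r_pulldown / (r_pullup + r_bridge + r_pulldown)
    return v_low_node > vth_frac * vdd

hard_short = bridge_fault_detected(100.0, 1000.0, 2000.0)      # detected
weak_short = bridge_fault_detected(50000.0, 1000.0, 2000.0)    # escapes
```

The same defect mechanism thus either is or is not observable depending on a continuous parameter that the stuck-at abstraction discards, which is exactly why fault coverage is a problematic quality measure here.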
5.2.2 Diagnosis
For product diagnosis, understood in the classical way, i.e., as
an activity that is supposed to find the location of the
defect, the situation is dramatically different. By far the most
important fact is that DSM technology may generate a
substantial number of faults that manifest themselves
as multiple stuck-at faults. (For instance, the simplest form
of a small rounded spot defect of extra conducting material,
in any layer of an ASIC product and with a size exceeding 2S+W,
where S and W stand for the design rules for metal spacing and
metal width, respectively (see again Fig. 5), must be modeled as six stuck-at faults on three nodes of the network, at locations that cannot be determined without detailed
layout analysis!)
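The 2S+W arithmetic above generalizes; a small sketch under the stated geometry assumptions (minimum-pitch parallel tracks, circular spot):

```python
def tracks_shorted(d, S, W):
    """Number of parallel minimum-pitch tracks a circular spot defect
    of extra conducting material (diameter d) can short, given metal
    spacing S and width W. Shorting k adjacent tracks requires the
    spot to span k-1 gaps and k-2 intervening track widths, i.e.
    d > (k-1)*S + (k-2)*W; e.g. d just over 2S+W shorts three tracks.
    """
    k = 1
    while k * S + (k - 1) * W < d:  # spot spans one more gap + track
        k += 1
    return k

def equivalent_stuck_at_faults(d, S, W):
    """Each shorted node may misbehave as stuck-at-0 or stuck-at-1, so
    a k-node short maps onto 2*k single stuck-at faults -- and WHICH
    nodes they sit on still requires layout analysis to determine."""
    return 2 * tracks_shorted(d, S, W)

# Illustrative design rules S = W = 0.2 um: a spot just over
# 2S + W = 0.6 um shorts three tracks -> six stuck-at faults.
n_nodes = tracks_shorted(0.61, 0.2, 0.2)
n_faults = equivalent_stuck_at_faults(0.61, 0.2, 0.2)
```

Note how quickly the multiple-fault count grows with defect size, which is the core of the argument against single-fault diagnosis.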
Problems become equally nasty for spots of missing
conducting material (“opens”). The behavior of
the circuit in such cases depends on geometry-dependent
capacitive coupling with a sometimes large number of
interfering nodes. It can be further complicated by the process-dependent charging (antenna) effect.
Resistive opens and resistive shorts pose another challenge
for test developers, failure analysts, and circuit
designers, because the behavior of a circuit with such a fault
depends on the analog value of the resistance, which cannot be
covered by the stuck-at-based concept [37].
Hence, in conclusion, one must recognize that DSM
technologies will make today's single-defect-based diagnostic testing obsolete and sometimes even counterproductive.
5.2.3 Yield Learning
The direct implications of inaccurate failure mechanism models for yield learning are of the following
nature. First, inefficient defect diagnosis will affect yield
learning rates. Second, yield decomposition into components
representing the various yield loss mechanisms will be
very difficult.
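The second difficulty can be made concrete with the classical per-layer Poisson decomposition; layer names and numbers below are illustrative only:

```python
import math

def decompose_yield(layers):
    """Classical functional-yield decomposition. layers maps a layer
    name to (defect_density_per_cm2, critical_area_cm2); the per-layer
    Poisson yield is Y_i = exp(-D_i * Ac_i) and total yield is the
    product over layers. The decomposition is meaningful only while
    each failing die can be charged to a single layer's spot defect --
    the very assumption the deformations discussed above violate.
    """
    per_layer = {name: math.exp(-d * ac)
                 for name, (d, ac) in layers.items()}
    return per_layer, math.prod(per_layer.values())

# Illustrative, not measured, values:
per_layer, total = decompose_yield({
    "poly":   (0.5, 0.20),
    "metal1": (0.8, 0.30),
    "metal2": (0.6, 0.25),
})
```

Once a single deformation spans layers or interacts with layout patterns, a failing die can no longer be attributed to one factor, and the clean product form above loses its diagnostic value.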
5.3 Partial Conclusions
We can now summarize the discussion of this section. The bottom-line conclusion of the presented argument is that the traditional approach to modeling IC structure deformations
becomes inadequate in the DSM era. The first natural step
towards an adequate way of describing DSM process-induced
deformations is to acknowledge the complexity of the geometrical
relationship between the deformed area and the geometry of the
wafer and the fabricated design. This can be accomplished by
developing a new taxonomy of deformations, such as the one proposed above.
6 Observability of Deformation
At this stage of the presentation one may conclude that we
have, on the one hand, been hinting that the statistical validity of the
models describing DSM deformations is crucial and, on the
other hand, have sometimes used only “circumstantial support” for our own claims. Hence, we indirectly admit that statistically valid conclusions are very difficult to obtain. In
addition, we have been claiming that we need a rather
rich, but also complicated, deformation taxonomy. This
raises the question: is it possible to have both a more complex
deformation model and, at the same time, more confidence in the
model parameters? The following subsections attempt
to briefly address this question. The key to the explanation of
our point is the trade-off between the cost of silicon needed to
obtain the required amount of information and the benefits provided by more accurate deformation modeling. A full answer
to this question [22] is well beyond the acceptable size of this
paper. Therefore, we will focus on the key aspect of this
trade-off: deformation observability. We distinguish two
kinds of observability channels: (a) direct observability, i.e.,
means used to directly characterize a deformation, and (b) indirect observability, i.e., via test results of a fabricated DSM
device.
6.1 Direct Deformation Observability
Direct deformation observability can be accomplished via
direct measurements (of oxide thickness, etc., using SEMs
and defect scanners) and via special-purpose test structures
designed to maximize their sensitivity to the
physical characteristic of the deformation of interest.
Direct observability is the way process development and yield learning
teams monitor process-induced deformations. The key limitation of this methodology is the “density
of sampling” -- i.e., the minimum distance between two
test sites intended to observe the same physical quantity.
For test structures this density is limited because a single
test die must contain hundreds of test devices, each focused
on a different physical phenomenon. Consequently, the sampling frequency, in the spatial frequency domain used in
this paper to describe boundless deformations, is low. This
means that test structures can be used to identify
deformations that are “spatially slow.” Defect scanners, on the other hand,
are useful for detecting small, bounded, severe
deformations that are spatially fast, random, and noisy. But
deformations that are “spatially fast” or “very fast,”
boundless, and interfering with the layout patterns are
almost impossible to characterize. (Simply speaking, variations of a deformation whose period lies
between 1 transistor size and, say, 1000 transistor sizes would
require a correspondingly dense sampling test structure.
Such a sampling frequency can be provided only by an SRAM-based test structure, which has other limitations [7].)
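The “density of sampling” limitation is, in essence, a Nyquist argument; a one-line sketch with illustrative site pitches:

```python
def max_observable_spatial_freq(test_site_pitch_um):
    """Nyquist limit for test-structure sampling: sites placed a pitch
    p apart can resolve spatial-frequency components of a deformation
    only below 1/(2*p) cycles/um; anything faster aliases or is
    simply invisible.
    """
    return 1.0 / (2.0 * test_site_pitch_um)

# Illustrative pitches: scribe-line test sites every ~20 mm see only
# very slow components, while an SRAM-based monitor sampling roughly
# every micron reaches into the 'spatially fast' range.
f_scribe = max_observable_spatial_freq(20000.0)  # cycles/um
f_sram = max_observable_spatial_freq(1.0)        # cycles/um
```

The four-orders-of-magnitude gap between these two limits is the quantitative content of the "spatially slow" versus "spatially fast" distinction used above.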
6.2 Indirect Observability
Indirect deformation observability is achieved via the DSM
product itself. It was envisioned long ago (see
e.g. [38, 39, 40]) as a high-volume source of information
about the manufacturing process. Implementing this vision
is, however, very difficult, and the only practical implementation is achieved by using embedded-memory test results to
determine basic characteristics of defects [7]. The essence of the
difficulty is the “convoluted nature” of the test results. On the
other hand, it is highly likely that the indirect observability avenue is the only way to characterize the already-mentioned “spatially fast” and “very fast” boundless deformations that interfere
with the layout patterns.
7 Conclusions
This paper began with the question: how can one reconcile, on the one hand, the successful but simplistic notions of the killer
defect and the stuck-at fault model and, on the other hand, the
requirement to model the complexity of deformations of the
DSM process? This question has been asked before, directly and indirectly, by others (see e.g. [41, 42, 43, 44, 45, 46,
47, 48, 49, 50]). Our answer is as follows.
7.1 Concluding Observations
The discussion conducted in this paper also leads to the following specific observations:
1. The notion of a spot defect (seen as a disk of extra conducting or insulating material distributed uniformly over the manufactured wafer) is no longer sufficient for modeling failures
in DSM ICs. DSM reality imposes the need to take the full
spectrum of process deformations into account. Especially
important are such features of the deformation as:
a. the properties of the involved materials,
b. the statistical characterization of the deformation geometry.
2. Test results are, for some important classes of process
deformations, practically the only source of information
about the characteristics of these deformations. Therefore, it is
imperative that test-based characterization of process-induced deformations becomes a key point on the test
domain agenda.
3. Such characterization must be obtained for yield modeling and
debugging, as well as for test quality assessment purposes.
This is because:
a. Without a clear understanding and efficient modeling of the
probability of failures in modern ICs, the extremely
important task of rapid yield learning will be impossible
for the industry to accomplish.
b. The design and test of top-performance IC devices will be
impossible unless detailed knowledge about the nature of
process-induced deformations is available.
4. The test community must accept the fact that test methodologies relying on the assumption of a single fault occurrence are becoming inadequate for DSM
devices. An especially vivid example of such a need is provided
by the case of the design-repair schemes of embedded memories.
7.2 Road Map
The set of conclusions in the above paragraphs can be used
to formulate a range of objectives guiding a large portion of
test research efforts for many years to come. The CMU test
group will use these conclusions to formulate its research
agenda. But even today one can see that the important new
objective for the test and yield learning communities should
be the reformulation of the objectives of test-based diagnosis in such
a way that it can accomplish DEFORMATION CHARACTERIZATION, which should eventually yield an on-line
statistical characterization of the physical parameters of deformations induced in the IC structure by DSM process instabilities.
These conclusions also trigger the question: how should
this non-trivial shift of focus of the test domain begin, and what can
be done to mitigate, as soon as possible, the incompatibilities
between current test capability and the needs generated by
the introduction of subsequent DSM technologies? In the
opinion of the authors, it is necessary to begin
with a large-scale, Sematech-like test experiment which can
(1) assess whether or not current test methodologies are
really incapable of characterizing a growing portion of the
DSM process-induced deformations, and (2) provide a database with a large enough sample size for the development of relevant deformation characterization methodologies. The CMU
test group has restarted research efforts in this direction (see
also [23, 24, 25, 27, 28]) and hopes to publish more
results in the near future.
Acknowledgments
The authors would like to acknowledge K. Komeyli, M.
Niewczas, W. Jonca, and Y. Fei, as well as the members of the
CMU test group, K. Dwarkanath, R. Desineni, S. Motaparti,
S. Biswas, and J. Brown, for their valuable contributions. The
authors would also like to thank IBM, Intel Corp., Philips, and
GSRC for technical and financial support.
References
[1] C. Hawkins, et al., “Defect Classes – An overdue paradigm
for testing CMOS ICs,” Proc. of ITC 1994, pp. 413-425.
[2] J. Segura, et al., “Parametric Failures in CMOS ICs – A
Defect-Based Analysis,” Proc. of ITC 2002, pp. 90-99.
[3] W. Maly and J. Deszczka, “Yield Estimation Model for VLSI
Artwork Evaluation,” Electronics Letters, Vol. 19, No. 6, pp. 226-
227, March 1983.
[4] W. Maly, F.J. Ferguson, and J.P. Shen, “Systematic
Characterization of Physical Defects for Fault Analysis of MOS IC
Cells,” Proc. of ITC 1984, pp. 390-399.
[5] J.P. Shen, W. Maly, and F.J. Ferguson, “Inductive Fault
Analysis of MOS Integrated Circuits,” Special Issue of IEEE
Design &Test of Computers, Dec. 1985, pp. 11-26.
[6] W. Maly, A.J. Strojwas, and S.W. Director, “Yield Prediction
and Estimation: A Unified Framework,” IEEE Trans. on Computer
Aided Design, Jan. 1986.
[7] J. Khare and W. Maly, “Rapid Failure Analysis Using
Contamination-Defect-Fault (CDF) Simulation,” IEEE
Transactions on Semiconductor Manufacturing, Vol. 9, No. 4, Nov.
1996, pp. 518-526.
[8] H.T. Heineken and W. Maly, “Manufacturability Analysis
Environment - MAPEX,” in Proc. of the 1994 Custom Integrated
Circuits Conference, May 1994, pp. 309-312.
[9] B. E. Stine, et al., “Simulating the Impact of Poly-CD Wafer-Level and Die-Level Variation On Circuit Performance,” Proc. of
IWSM 1997, pp. 24-27.
[10] P. Simon, “Yield Modeling for Deep Sub-Micron IC Design,”
Ph.D. Thesis, Technical University of Eindhoven, 2001.
[11] P. Simon, et al., “Design Dependency of Yield Loss Due to
Tungsten Residues in Spin-on Glass Based Planarization Process,”
Proc. of ISSM 1997, pp. 87-90.
[12] A. Elias, et al., “Accurate Prediction of ‘Kill Ratios’ Based
on KLA Defect Inspection and Critical Area Analysis,” Proc. of
SPIE 1996 Symposium on Microelectronic Manufacturing Yield,
Reliability and Failure Analysis II, pp. 75-84.
[13] D. J. Ciplickas, X. Li, and A. J. Strojwas, “Predictive Yield
Modeling of VLSICs,” Proc. of IWSM 2000, pp. 28-37.
[14] A. E. Gattiker and W. Maly, “Current signatures,” Proc. of
VTS 1996, pp. 112-117.
[15] P. Nigh, et al., “An Experimental Study Comparing the
Relative Effectiveness of Functional, Scan, IDDQ and Delay Fault
Testing,” Proc. of the VTS 1997, pp. 459-464.
[16] P. Nigh, et al., “So What is an Optimal Test Mix? A
Discussion of the Sematech Methods Experiment,” Proc. of ITC
1997, pp. 1037 - 1038
[17] A. Gattiker and W. Maly, “Current Signatures: Application,”
Proc. of ITC 1997, pp. 156-165.
[18] A. Gattiker, Current Signatures for Integrated Circuit Test
Strategy Advisor, Ph.D. Thesis, Dept. of Electrical & Computer
Engineering, Carnegie Mellon University, May 1998.
[19] A. E. Gattiker and W. Maly, “Toward understanding ‘IDDQ-only’ fails,” Proc. of ITC 1998, pp. 174-183.
[20] A. Gattiker, P. Nigh and W. Maly, “Current-Signature-Based
Analysis of Complex Test Fails,” Proc. of ISTFA 1999, pp. 377-387.
[21] P. Nigh, et al., “Failure Analysis of Timing and IDDq-Only
Failures from the Sematech Test Methods Experiment,” Proc. of
ITC 1998, pp. 43-52.
[22] W. Maly, “18-764 Lectures,” ECE Dept., CMU, Fall 2002.
[23] W. Maly, et al., “A Yield Modelling and Test Oriented
Taxonomy of Deep Submicron Technology Induced IC Structure
Deformations,” ISTFA 2003.
[24] T. Zanon, et al., “Study of Geometry of 0.13 µm Process
Defects Using SRAM Fail Bit Maps,” ISTFA 2003.
[25] R. Desineni, et al., “A Multi-Stage Approach to Fault
Identification Using Fault Tuples,” ISTFA 2003.
[26] J. H. Patel, “Stuck-at Fault: A Fault Model for the Next
Millennium,” Proc. of ITC 1998, p. 1166.
[27] K.N. Dwarakanath and R.D. Blanton, “Universal Fault
Simulation Using Fault Tuples,” in Proc. of the 37th ACM/IEEE
Conf. On Design Automation, June 2000, pp. 786-789.
[28] R. D. Blanton, et al., “Fault Tuples in Diagnosis of Deep-Submicron Circuits,” Proc. of ITC 2002, pp. 233-241.
[29] K. Bowman, et al., “Impact of extrinsic and intrinsic
parameter fluctuations on CMOS circuit performance,” IEEE J. of
Solid State Circuits, Vol. 35, No. 8, pp. 1186-1193, August 2000.
[30] D. Boning and J. Chung, “Statistical Metrology - Measurement and Modeling of Variation for Advanced Process
Development and Design Rule Generation,” 1998 Int. Conf. on
Characterization and Metrology for ULSI Technology, pp. 395-404.
[31] M. Orshansky, et al., “Impact of spatial intrachip gate length
variability on the performance of high-speed digital circuits,” IEEE
Trans. on CAD, Vol. 21, No. 5, pp. 544-553, May 2002.
[32] J. Kibarian and A. Strojwas, “Using Spatial Information to
Analyze Correlations Between Test Structure Data,” IEEE Trans. on
Semiconductor Manufacturing, Vol. 4, No. 3, pp. 219-225.
[33] J. Benkoski, A. J. Strojwas, “Computation of Delay Defect
and Delay Fault Probabilities Using a Statistical Timing Simulator,”
Proc. of ITC 1989, pp.153 -160.
[34] M. Sivaraman and A. J. Strojwas, “Towards Incorporating Device
Parameter Variations in Timing Analysis,” Proc. of EDAC-ETC-EuroASIC ’94, Paris, France, pp. 338-342.
[35] W. Maly, et al., “A Study of Intra-Chip Transistor
Correlations,” IWSM 1996.
[36] Y. Sato, et al., “A persistent diagnostic technique for unstable
defects,” Proc. of ITC 2002, pp. 242-249.
[37] H. Hao and E. McCluskey, “Very-Low-Voltage Testing for
Weak CMOS Logic ICs,” Proc. of ITC 1993, pp. 275-284.
[38] W. Maly, et al., “Yield Diagnosis Through Interpretation of
Tester Data,” Proc. of ITC 1987, pp. 10-20.
[39] W. Maly and S. Naik, “Process Monitoring Oriented IC
Testing,” Proc. of ITC 1989, pp. 527-532.
[40] W. Maly, “Computer-Aided Design for VLSI Circuit
Manufacturability,” Proc. of IEEE, Vol. 78, No. 2, pp. 356-390,
Feb. 1990.
[41] D.B. Lavo, et al., “Diagnosing realistic bridging faults with
single stuck-at information,” IEEE Trans. on CAD of Integrated Circuits
and Systems, Vol. 17, No. 3, pp. 255-268, March 1998.
[42] P. Maxwell and R. Aitken, “Biased Voting: A Method for
Simulating CMOS Bridging Faults in the Presence of Variable Gate
Logic Thresholds,” in Proc. of International Test Conference, 1993,
pp. 63-72
[43] T.M. Mak, et al., “Cache RAM inductive fault analysis with
fab defect modeling,” Proc. of ITC 1998, pp. 862-871
[44] S. Chakravarty and Y. Gong, “An Algorithm for Diagnosing
Two-Line Bridging Faults in CMOS Combinational Circuits,” in
Proc. of DAC, Dallas, TX, June 1993, pp. 520-524.
[45] S. Chakravarty, et al., “Layout analysis to extract open nets
caused by systematic failure mechanisms,” Proc. of VTS 2002, pp.
367-372.
[46] Z. Stanojevic and D.M.H. Walker, “FedEx - A Fast Bridging
Fault Extractor,” in Proc. of IEEE International Test Conference,
pp. 696-703.
[47] Y.J. Kwon and D. M. H. Walker, “Yield Learning via
Functional Test Data”, Proc. of ITC 1995, pp. 626-635.
[48] R. R. Montañés, et al., “Resistance Characterization for Weak
Open Defects”, IEEE Design and Test of Computers 19, No. 5, pp.
18-26, Sep-Oct. 2002.
[49] C. Hora, et al., “On Electrical Fault Diagnosis in Full-Scan
Circuits”, Proc. of 2001 Int. Workshop on Defect Based Testing,
pp.17-22.
[50] W. Maly, “Testing-Based Failure Analysis: A Critical
Component of the SIA Roadmap Vision,” Proc. of ISTFA 1997, pp.
3-6.