
Fault Simulation on Reconfigurable Hardware

Miron Abramovici
Bell Labs - Lucent Technologies
600 Mountain Ave.
Murray Hill, NJ 07974
miron@research.bell-labs.com

Prem Menon
Department of Electrical & Computer Engineering
University of Massachusetts
Amherst, MA 01003
menon@ecs.umass.edu

Abstract

Although many efficient algorithms have been developed [1], complex circuits with large numbers of faults and long test sequences make fault simulation a very time-consuming computational process. Many different hardware-based approaches have been tried to speed up fault simulation. Methods dividing the set of faults among parallel processors executing the same algorithm (for example, [9]) usually result in a speedup which is a sublinear function of the number of processors. Hardware accelerators specially built for fault simulation (for example, [5]) achieve higher performance but at significantly higher cost. A special-purpose processor with a hardwired algorithm is also a very inflexible solution. A microprogrammed multiprocessor architecture [3] offers more flexibility (both logic and fault simulation are implemented on the same machine), but the performance is lower. Other solutions involve unconventional architectures such as the Connection Machine [12].

In this paper we introduce a new approach to fault simulation, using reconfigurable hardware to implement a critical path tracing algorithm. Our performance estimate shows that our approach is at least one order of magnitude faster than the serial fault emulation used in prior work.

1. Introduction
Fault simulation consists of simulating a
circuit in the presence of faults. Comparing the fault simulation results with those of the fault-free simulation of the same circuit under the same applied test T, we can determine the faults detected by
T. One use of fault simulation is to evaluate (grade) a
test T. Usually the grade of T is given by its fault
coverage, which is the ratio of the number of faults
detected by T to the total number of simulated faults.
Fault simulation can be used with different fault
models, such as stuck-at faults, bridging faults, etc. In
this paper we will be concerned with single stuck-at
faults.

Logic emulation systems (for example, [7]) are increasingly being used for rapid ASIC prototyping, hardware-software co-design, and in-system verification. Recent work [6][8][14][15] has used logic emulators for serial fault simulation, where faults are inserted one at a time in the emulation model of the circuit. The advantage is that fault simulation runs at hardware speed, without any special-purpose hardware, and on computing platforms which are becoming widely available. An important requirement is to have an efficient method of fault insertion that avoids full reconfiguration of the emulator for different faults. In [8] and [15], a fault-insertion circuit is added to the emulation model, so that faults are inserted without any reconfiguration, just by changing logic values in the model. In [6] and [14], fault insertion takes advantage of the incremental reconfigurability of the target emulator, which allows only a small portion of the model to be reconfigured. Although [8] introduces a speed-up technique that allows several faults to be concurrently simulated, the performance of this process, referred to as fault emulation, is still limited by its serial nature.

In this paper, we present a new approach to implement fault simulation on reconfigurable hardware. Instead of serial fault simulation, we use a critical path tracing algorithm [2][11], which does not require explicit fault enumeration as it processes most faults implicitly. Figure 1 illustrates the data flow of the proposed approach. The main step is the mapping of the original circuit C into a fault simulation circuit FSIM(C), which implements a fault simulation algorithm for C. While the data flow of fault emulation is conceptually similar to that in Figure 1, its fault simulation circuit is just the original circuit with the addition of fault-insertion logic; in our case, FSIM(C) is much larger and more complex than C, since it implements a non-trivial algorithm in hardware. In contrast to a fault simulation accelerator, where the same special-purpose hardware processes different circuits, in our approach the fault simulation hardware is designed specifically for a single circuit. This is one of the advantages provided by the reconfigurable hardware. The fault simulation circuit is downloaded and run in an emulator or in any other reconfigurable hardware.

Figure 1. General data flow of the new approach

The remainder of this paper is organized as follows. Section 2 gives a brief overview of critical path tracing. Section 3 discusses the architecture of the fault simulation circuit. Section 4 compares our approach with prior work, and Section 5 presents conclusions and future work.

Fault simulation plays an important role in test generation, by determining the faults accidentally
detected by a test vector (or test sequence) generated
for a specific target fault. Then all the detected faults
are discarded from the set of simulated faults, and a
new target fault is selected from the remaining ones.
This avoids the much greater computational effort
involved in generating tests by explicitly targeting the
accidentally detected faults. Fault simulation is
extensively used in fault diagnosis, either by
precomputing fault dictionaries, where all possible
faulty responses are stored for comparison with the
actual response of the device under test, or as part of
post-test diagnosis techniques, which first isolate a
reduced set of plausible faults, and then simulate
only those faults to find the one(s) whose response
agrees with the actual response.


2. Critical Path Tracing



Although our eventual goal is fault simulation for sequential circuits, in this paper we analyze only combinational circuits. We consider single stuck-at-0 (s-a-0) and stuck-at-1 (s-a-1) faults. For every input vector, critical path tracing first simulates the fault-free circuit, and then determines the detected faults by ascertaining which signal values are critical. We say that a line l has a critical value v in test t if t detects the fault l s-a-v̄ (v̄ denoting the complement of v). A line with a critical value in t is said to be critical in t. Finding the lines critical in a test t, we immediately know the faults detected by t. Clearly, the primary outputs (POs) with binary values are critical in any test. The other critical lines are found by a process starting at the POs and backtracing paths composed of critical lines, called critical paths.




A gate input is sensitive in a test t if complementing its value changes the value of the gate output. In Figure 2 the sensitive gate inputs are marked by dots. If a gate output is critical, then its sensitive inputs are also critical [2]. This provides the basic rule for backtracing critical paths through gates. Figure 2 shows critical paths as heavy lines.
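To make the backtracing rule concrete, here is a minimal Python sketch (illustrative code, not from the paper; the gate types and function names are our own) that finds the sensitive inputs of a gate and marks them critical when the gate output is critical.

```python
# Minimal sketch of the backtracing rule: if a gate output is critical,
# its sensitive inputs are also critical. Gate types are illustrative.

def gate_output(gate_type, inputs):
    """Evaluate a simple gate on binary values."""
    if gate_type == "AND":
        return int(all(inputs))
    if gate_type == "OR":
        return int(any(inputs))
    raise ValueError("unsupported gate type")

def sensitive_inputs(gate_type, inputs):
    """An input is sensitive if complementing it changes the output."""
    out = gate_output(gate_type, inputs)
    sens = []
    for i, v in enumerate(inputs):
        flipped = list(inputs)
        flipped[i] = 1 - v
        if gate_output(gate_type, flipped) != out:
            sens.append(i)
    return sens

def backtrace_gate(gate_type, inputs, output_critical):
    """Return the set of input positions that become critical."""
    if not output_critical:
        return set()
    return set(sensitive_inputs(gate_type, inputs))

# AND gate with inputs (1, 1): both inputs are sensitive, hence critical.
print(backtrace_gate("AND", (1, 1), output_critical=True))   # {0, 1}
# AND gate with inputs (0, 1): only the 0 input is sensitive.
print(backtrace_gate("AND", (0, 1), output_critical=True))   # {0}
```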


Figure 2. Examples of critical paths


When a critical path being traced reaches a fanout stem, we must determine whether the stem is critical. This can always be done by injecting the appropriate fault at the stem and checking whether its fault effects propagate to any PO; this process is called stem analysis. For example, in Figure 2(a), stem B is not critical, since the fault effects of B s-a-0 propagate along two paths with different inversion parities and cancel each other at the inputs of the reconvergence gate (fault effects are denoted by good_value/faulty_value). However, in Figure 2(b), the fault effect propagating along the path (B, B1, D, F) reaches F and therefore B is critical.





Although only a small percentage of the faults are detected by any given vector, most conventional fault simulation techniques waste a lot of computation to propagate all fault effects toward POs [1]. In contrast, backtracing from POs allows critical path tracing to directly determine only the faults detected by the simulated vector. Except for a subset of the stem faults, all other faults are dealt with implicitly.


Figure 3. Examples: a) stem not requiring analysis; b) broken critical path
In many cases, stem criticality can be determined without stem analysis, based only on the criticality of the fanout branches. Such stems can be identified by analyzing the reconvergent structure of the circuit in a preprocessing phase. For example, in Figure 3(a), both paths that fan out from B and reconverge at F have the same inversion parity. Hence fault effects originating at B can never cancel each other, and, whenever B1 or B2 is critical, we can mark the stem as critical without simulating the stem fault.
It is not always necessary to propagate the fault effects of a stem all the way to POs. For example, in Figure 2, assume that F is not a PO. In this case, if a fault effect from B propagates to F, it is guaranteed to also propagate to the same POs reached by the fault effects from F, because F has already been proven critical. F is said to be a dominator of B, because all paths from B to any PO must go through F. Thus the detection of a stem fault can also be determined at a dominator of the stem. We say that O is an observation point of stem S if the detection of S can be determined by observing at O a fault effect propagated from S.

Figure 3(b) illustrates a situation when a critical path is not continuous. Although backtracing stops at gate F (which has no sensitive inputs), stem B is critical since the effect of B s-a-0 is observed at E. In this case, whenever E is critical and D=E=0, stem analysis must be performed for B. Such conditions are identified during preprocessing. A detailed discussion of the conditions for stem analysis is given in [11].
3. The Fault Simulation Circuit

3.1. The Basic Idea

The main problem in implementing critical path tracing in hardware is the backtracing process, which involves a backward traversal of the circuit, because a hardware model (of the type used in emulation) can only propagate values forward. The key idea that solves this problem is to have two distinct models of the circuit: a forward network for propagating values and a backward network for propagating criticality. As illustrated in Figure 4, every gate G in the original circuit is mapped into one element in the forward network and one element in the backward network. Here A, B, and Z represent the values of the corresponding signals, while Acrit, Bcrit, and Zcrit represent their criticalities. We use 0, 1, and x as logic values, where x stands for unknown or unspecified. Criticality indicators are binary. Note that the computation of criticality requires the knowledge of values. The circuit needed to compute the criticality of a stem will be discussed shortly.

Figure 4. Mapping of a gate G into its forward-network and backward-network elements


Figure 5 shows the block diagram of the fault simulation circuit. The forward network is divided into a fault-free (good) circuit model and a faulty circuit model. The good circuit performs 3-valued (zero-delay) simulation of the original circuit for the vector t. The faulty circuit is a copy of the fault-free circuit with additional logic that allows stem-fault insertion for every stem that needs to be analyzed. The backward network receives all the values computed in the forward network and computes the criticality of all the signals in the simulated circuit. It also generates the signals that control stem-fault insertion in the forward network. By comparing the fault-free values at POs (or dominators) with their corresponding values obtained in the presence of stem faults, the Comparators block determines whether these faults are detected; this information is sent to the backward network to determine the criticality of the analyzed stems.


Figure 5. Block diagram of the fault simulation circuit

Table 2: Mapping for forward network


3.2. The Forward Network


The forward network is used for propagating values in the fault-free circuit and also for injecting faults on stems that need to be analyzed; fault insertion is controlled by the backward network. Table 1 shows the coding used in the forward network for 3-valued logic. To represent the value of a signal A we use two bits, A0 and A1, where A0 (A1) means A is 0 (1) or x; then A0=A1=1 means A has value x. Table 2 illustrates the mapping process for the forward network for AND and OR gates. The equations for Z0 and Z1 are easy to derive. For an AND gate, for example, Z is 0 or x when A is 0 or x, or when B is 0 or x (Z0 = A0 + B0). Z has value 1 or x when both A and B are 1 or x (Z1 = A1·B1). The mapping for an inverter with input A and output Z is given by Z0 = A1 and Z1 = A0, which shows that negation is realized without logic, just by swapping A0 and A1.

Table 1: Value coding

A    A0  A1
0    1   0
1    0   1
x    1   1
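As an illustration of this coding, the following sketch (our own rendering, not the paper's implementation) encodes 0, 1, and x with the two bits described above and evaluates the forward-model equations for an AND gate and an inverter.

```python
# Two-bit coding for 3-valued logic: A0 means "A is 0 or x",
# A1 means "A is 1 or x"; A0 = A1 = 1 encodes x.
ENC = {0: (1, 0), 1: (0, 1), "x": (1, 1)}
DEC = {v: k for k, v in ENC.items()}

def and_fw(a, b):
    """Forward model of an AND gate: Z0 = A0 + B0, Z1 = A1 * B1."""
    a0, a1 = a
    b0, b1 = b
    return (a0 | b0, a1 & b1)

def not_fw(a):
    """Forward model of an inverter: negation just swaps the two bits."""
    a0, a1 = a
    return (a1, a0)

# 1 AND x = x ; 0 AND x = 0 ; NOT x = x
print(DEC[and_fw(ENC[1], ENC["x"])])   # x
print(DEC[and_fw(ENC[0], ENC["x"])])   # 0
print(DEC[not_fw(ENC["x"])])           # x
```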

Figure 6 shows the fault injection circuit for a stem S. Note that fault injection is done only in the faulty circuit. The signal Insert_S_fault, generated in the backward network, inverts the current value of S by swapping S0 and S1. In practice, the fault injection logic will be embedded within the logic generating S0 and S1, and will require one additional input.

Figure 6. Fault injection on a stem
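A behavioral sketch of this step is shown below (the function name is illustrative, not from the paper); asserting the control input swaps the two coded bits, which complements a binary value of S as seen by the faulty circuit.

```python
# Behavioral sketch of stem-fault injection in the faulty circuit:
# when insert_s_fault is asserted, the two coded bits of S are swapped,
# which complements a binary value (and leaves x unchanged).
def inject_stem_fault(s_coded, insert_s_fault):
    s0, s1 = s_coded
    return (s1, s0) if insert_s_fault else (s0, s1)

# Example: S = 1 is encoded as (0, 1); with the fault inserted it becomes
# (1, 0), i.e. the value 0 in the faulty circuit.
print(inject_stem_fault((0, 1), True))    # (1, 0)
print(inject_stem_fault((0, 1), False))   # (0, 1)
```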

3.3. The Backward Network

The backward network has one PI for every PO, and one PO for every PI in the original circuit. It also receives all the signal values computed in the forward network. For every signal A in the original circuit, the backward network computes Acrit, which is 1 when A is critical. For every PO Q, the backward network provides a NAND gate whose output is Qcrit = (Q0·Q1)', that is, Qcrit is 1 only when Q has a binary (non-x) value. Asserting POs with binary values as critical starts the backtracing process. Table 3 gives the truth table of a combinational element that computes the criticality for an AND gate with output Z and inputs A and B. Similar tables can be easily derived for other gate types. Thus backtracing through gates involves only combinational logic.
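The following sketch (a software rendering consistent with the sensitive-input rule, not a transcription of Table 3) shows how such a criticality element for a two-input AND gate could combine the coded input values with the output criticality.

```python
# Criticality element for a 2-input AND gate with output Z and inputs A, B,
# using the same two-bit value coding as the forward network.
# An input of an AND gate is sensitive when the other input is exactly 1,
# so its criticality is the gate-output criticality gated by that condition.
def and_crit(a, b, z_crit):
    a0, a1 = a
    b0, b1 = b
    a_is_1 = (a0 == 0 and a1 == 1)
    b_is_1 = (b0 == 0 and b1 == 1)
    a_crit = int(z_crit and b_is_1)
    b_crit = int(z_crit and a_is_1)
    return a_crit, b_crit

# Z = A AND B with A = 0, B = 1: only A is sensitive, so only A is critical.
print(and_crit((1, 0), (0, 1), z_crit=1))   # (1, 0)
```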





Table 3: Criticality computation for an AND gate


The most time-consuming part of the algorithm is stem analysis, as the continuation of backtracing from a stem must wait until the stem fault has been inserted in the forward network and the Comparators block reports that its effects have propagated to a PO or to a dominator. Note that not all stems require analysis, as stems whose fanouts do not reconverge and those with equal-parity reconvergent paths may be marked as critical without further analysis. To speed up serial fault emulation, [8] determines groups of independent faults which may be concurrently inserted. Two faults are said to be independent (non-interacting) if they cannot affect any common area of the circuit [10]. Grouping of independent faults in [8] is static, as it is done as a preprocessing step. Our approach relies on dynamic grouping to determine sets of stems whose faults may be concurrently simulated. The dynamic grouping is advantageous because the set of stems to be analyzed changes with the applied vector. Another advantage is that our technique is not limited to grouping independent stem faults.

Figure 7. a) Fanout structure of a circuit; b) Stem dependency graph; c) Stem incompatibility graphs
We will use the example in Figure 7 to illustrate our method for stem grouping. Figure 7(a) shows the fanout structure of a circuit, where the triangles denote fanout-free regions (FFRs). The inputs of a FFR are fanout branches and/or primary inputs without fanout (the latter are not shown). The output of a FFR is either a stem or a PO. The stems to be analyzed are first assigned levels as follows. We construct a stem dependency graph whose vertices are the stems to be analyzed and the POs. The graph has a directed edge from vertex i to vertex j if there is a direct path in the circuit from stem i to stem j that does not pass through any other stem. Treating POs as level-0 vertices, the level Lv of vertex v is computed as Lv = max{Li} + 1, where i ranges over all successors of v. In Figure 7, X and Y are at level 0, P, Q, and R are at level 1, and A, B, C, D, and E are at level 2. Stem criticality is determined in increasing level order, so that the status of all stems at level k is known when stems at level k+1 are analyzed.

Two stems at the same level are said to be compatible if their fault effects cannot interact before reaching observation points for both stems. Faults on compatible stems may be inserted and simulated concurrently. In Figure 7, for example, P and X are observation points for A, and X and Y are observation points for Q. When several observation points for a stem S are on the same path from S, we will consider only the one closest to S. Thus we will use P as the observation point for A. Stems A, C, and E are pairwise compatible because their fault effects cannot interact before reaching their respective observation points, P, Q, and R. Although A and C are compatible, they are not independent, because they feed the same PO (X). In general, there are many more stems that may be concurrently simulated based on compatibility than based on independence.
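The levelization step above can be expressed in a few lines of code; the sketch below (with a made-up graph, not the circuit of Figure 7) computes the level of every stem from the stem dependency graph.

```python
# Sketch of stem levelization on a stem dependency graph.
# Vertices are stems to be analyzed plus POs; an edge i -> j means there is
# a path from stem i to stem j (or PO j) passing through no other stem.
# The graph below is a made-up example, not the circuit of Figure 7.
def stem_levels(successors, primary_outputs):
    levels = {po: 0 for po in primary_outputs}   # POs are level-0 vertices

    def level(v):
        if v not in levels:
            # L_v = max{L_i} + 1 over all successors i of v
            levels[v] = max(level(s) for s in successors[v]) + 1
        return levels[v]

    for v in successors:
        level(v)
    return levels

succ = {"S1": ["PO1"], "S2": ["PO1", "PO2"], "S3": ["S1", "S2"]}
print(stem_levels(succ, ["PO1", "PO2"]))
# {'PO1': 0, 'PO2': 0, 'S1': 1, 'S2': 1, 'S3': 2}
```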

Instead of the compatibility relation between stems, it is more convenient to work with the opposite relation, incompatibility. Incompatible stems may not be concurrently simulated. Figure 7(c) shows the stem incompatibility graph for our example. Stem D is incompatible with all the other stems at level 2, because its fault effects may interact with those from any other stem (A, B, C, or E) before the observation points for D (X or Y) are reached. The incompatibility graph is used in building the backward network, whose logic is set up so that only compatible stems are grouped. Dynamic grouping is performed during backtracing. To see the advantage of dynamic, as compared with static, stem grouping, consider two situations when stems (A, C, E) and (B, E), respectively, have to be analyzed. As both groups include only compatible stems, all their faults may be simulated concurrently. But there is no static grouping that can achieve the same result, because B and C are incompatible.
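The sketch below illustrates dynamic grouping on the two request patterns mentioned above; the incompatibility pairs are the ones stated in the text for the level-2 stems (D incompatible with all others, A incompatible with B, B incompatible with C), and the selection order is an assumption for illustration.

```python
# Dynamic grouping sketch: from the stems requesting analysis, pick a set of
# mutually compatible stems (no edge between them in the incompatibility graph).
INCOMPATIBLE = {frozenset(p) for p in
                [("D", "A"), ("D", "B"), ("D", "C"), ("D", "E"),
                 ("A", "B"), ("B", "C")]}

def select_group(requests, priority=("D", "A", "B", "C", "E")):
    group = []
    for s in priority:                      # fixed scan order, as in a selector
        if s in requests and all(frozenset((s, g)) not in INCOMPATIBLE
                                 for g in group):
            group.append(s)
    return group

print(select_group({"A", "C", "E"}))   # ['A', 'C', 'E']  simulated together
print(select_group({"B", "E"}))        # ['B', 'E']       simulated together
```

No single static partition of the faults can serve both request patterns, because E would have to be grouped with both C and B, while B and C cannot appear in the same group.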


Figure 8 depicts the circuit for stem analysis corresponding to a stem S. As illustrated in Figure 8(a), assume that S has 4 fanout branches: B1 is non-reconvergent, B2 and B3 reconverge with the same parity, and B3 and B4 reconverge with different parities. Figure 8(b) shows the top-level view, and Figure 8(c) the detailed view, of the circuit in the backward network that computes the criticality of S. When B1 or B2 is critical, Scrit is set to 1 without any analysis. But when B3 or B4 is critical, a request to analyze S for criticality is issued by asserting S_req, provided that S has not yet been analyzed for the current input vector; this information is stored in the flip-flop S_done. The requests from all stems that may require analysis are sent to the level and stem selection circuit shown in Figure 9. When S_req is granted, the Insert_S_fault signal is asserted for all the stems at the same level that are simultaneously analyzed (the fault insertion mechanism is shown in Figure 6).


Figure 8. Circuit for stem analysis: a) fanout structure of S; b) top-level view of the backward circuit for S; c) detailed view
The signal S_fault_prop is the OR of the outputs from the Comparators block (shown in Figure 5) that correspond to the observation points of stem S, and it indicates whether a fault effect propagates to at least one of them. If these errors have been caused by inserting the stem fault of S in the forward network (indicated by Insert_S_fault), the flip-flop S_flt_det is set to record the result of the stem analysis for S. Detecting the stem fault leads to asserting Scrit. The same clock that sets S_flt_det also sets the S_done flip-flop. Both flip-flops remain locked in the 1-state until Vector_reset is activated when the next vector is simulated; this ensures that S is not analyzed more than once, and that S will remain marked as critical for the current vector.
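A behavioral sketch of this per-stem bookkeeping (the flip-flop and signal names follow the text, but the sequencing is a simplification of the actual circuit) is:

```python
# Behavioral sketch of the per-stem analysis bookkeeping described above.
# S_done records that the stem was analyzed for the current vector;
# S_flt_det records that the stem fault was detected (stem is critical).
class StemAnalysis:
    def __init__(self):
        self.s_done = False
        self.s_flt_det = False

    def vector_reset(self):
        """Called when a new vector is applied."""
        self.s_done = False
        self.s_flt_det = False

    def needs_request(self, reconv_branch_critical):
        """Issue S_req only if analysis is needed and not yet done."""
        return reconv_branch_critical and not self.s_done

    def clock(self, insert_s_fault, s_fault_prop):
        """One analysis step: latch detection and mark the stem as done."""
        if insert_s_fault:
            if s_fault_prop:          # a fault effect reached an observation point
                self.s_flt_det = True
            self.s_done = True        # locked until the next Vector_reset

stem = StemAnalysis()
if stem.needs_request(reconv_branch_critical=True):
    stem.clock(insert_s_fault=True, s_fault_prop=True)
print(stem.s_flt_det, stem.s_done)    # True True
```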


Figure 9. Level and stem selection circuit


Figure 9 shows the circuit for selecting a group of stem faults for insertion. Assume that S is a stem at level L. At least one stem request at this level sets the level request L_req. If several levels have requests, the lowest level is selected by a priority selector, whose outputs are level-enable signals. Let L_enable be the output corresponding to L_req. The logic implemented by the stem selector for level L reflects the stem incompatibility graph for that level. The outputs of the stem selector are ANDed with L_enable to produce the fault insertion signals for the selected set of compatible stems. The simulation of the current vector is complete when all level request signals are 0, which results in Done=1.

Table 4: Stem selection logic for level 2 in Figure 7

For example, Table 4 shows the truth table of the stem selection circuit for the level-2 incompatibility graph in Figure 7. For a graph with n vertices, the truth table has n+1 rows (n=5 in our example). Each one of the first n rows corresponds to a stem. The row for a stem S is constructed as follows. The first set of n columns holds a request pattern consisting of a definite request (1) for S, no request (0) for the stems whose rows are before S, don't care (-) entries for the stems incompatible with S, and potential requests for the stems compatible with S. The potential requests are denoted by Boolean variables; for example, c is the variable associated with stem C, and its value denotes the request for C. The second set of n columns shows the corresponding pattern of fault insertion signals: 1 for S, 0 for the stems incompatible with S and for stems whose rows are before S, and the request variables for the stems compatible with S. For example, the second row, corresponding to stem A, covers a request from A, any request from B, potential requests from C and E, and no request from D, which is analyzed in the first row. The corresponding fault-insertion pattern specifies insertion on A, precludes insertion on B and D (which are incompatible with A), and allows potential insertions on C and E (which are compatible with A). The last row has an all-0 pattern to cover the case of no requests. It is easy to verify that such a truth table is complete and that it correctly handles all 2^n possible input patterns.
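The row-construction rule can be turned directly into code. The sketch below (hypothetical, reusing the level-2 incompatibility relation from the example above and a scan order that is an assumption) generates the request pattern and fault-insertion pattern for each row of such a table.

```python
# Sketch generating the stem-selector truth table rows as described above:
# for row S: '1' for S, '0' for stems in earlier rows, '-' (don't care) for
# stems incompatible with S, and a request variable for compatible stems.
def selector_rows(row_order, incompatible):
    rows = []
    for i, s in enumerate(row_order):
        req, ins = {}, {}
        for j, t in enumerate(row_order):
            if t == s:
                req[t], ins[t] = "1", "1"
            elif j < i:
                req[t], ins[t] = "0", "0"
            elif frozenset((s, t)) in incompatible:
                req[t], ins[t] = "-", "0"
            else:                      # compatible: potential request variable
                req[t], ins[t] = t.lower(), t.lower()
        rows.append((req, ins))
    rows.append(({t: "0" for t in row_order}, {t: "0" for t in row_order}))
    return rows

INCOMP = {frozenset(p) for p in
          [("D", "A"), ("D", "B"), ("D", "C"), ("D", "E"),
           ("A", "B"), ("B", "C")]}
for req, ins in selector_rows(("D", "A", "B", "C", "E"), INCOMP):
    print(req, "->", ins)
```

The row printed for stem A matches the text: request 1 for A, 0 for D, don't care for B, variables c and e for C and E, with insertion allowed only on A, C, and E.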

3.4. Size and Speed


Our approach is hardware-intensive, as the size of the FSIM circuit may be 10-12 times that of the original circuit (the fault emulation model in [15] is 18 times larger than the original model). Since the FSIM circuit is implemented on reconfigurable logic, its size is not important, unless it does not fit in the emulator. The largest capacity available today for emulation is about 6 million gates [13]. Also, fault simulation is usually run only on subcircuits of the large models used for design verification.

After the fault-free values are computed, one group of stems is analyzed in every clock cycle. During each clock period, logic values must propagate from the inserted faults to the observation points, and criticality values must propagate from stems to PIs. Thus, the clock rate has to allow for the worst case, which requires full propagation through both the forward and the backward networks. Compared to the fastest clock that may be used in fault emulation, the clock for our approach should be about twice as slow.


4. Comparison with Prior Work


As in [8] and [15], we avoid the need for repeated reconfiguration by having the fault-insertion logic for all faults embedded in the emulated model. But we insert a smaller number of faults (the numerical advantage will be presented shortly). Another important difference is that we introduce the fault injection gates in the fault simulation circuit at the logic level, before mapping into FPGAs, while fault injection in [6] and [8] is done after mapping, which makes it dependent on the particular FPGA family used in the reconfigurable hardware. Another disadvantage of doing fault insertion after mapping is that portions of the original logic may be duplicated, so single faults in the duplicated logic must be modeled by multiple stuck faults in the emulated circuit; in our approach we consider only single stuck faults. Since [6] and [8] insert faults by partial reconfiguration, these techniques are also highly dependent on the target emulator and hence not generally applicable.

Since [8] is the only fault emulation method that injects more than one fault at a time, we will compare our approach only with [8]. The average time spent in fault simulating one test vector is given by the product of the clock period and the average number of fault-insertion steps per vector. Let N be the number of lines in a circuit. The total number of faults in the circuit is 2N. Since, in general, fault collapsing reduces the set of faults by about 50% [1], the number of faults to be simulated will be approximately N. In [8], every simulated fault must be injected for every vector. Grouping independent faults reduces the number of fault-insertion steps from N to N/g, where g is the average size of a group.

In our method, the maximum number of faults to be injected for simulating each vector is equal to the number of stems to be analyzed. Based on an analysis of the MCNC combinational benchmark circuits [4], we estimate the average number of stems in a circuit to be approximately 0.2N. Let d be the fraction of stems that never have to be analyzed because they do not have reconvergent paths or have reconvergent paths of equal inversion parity. Let the average size of a group of independent faults be g, and that of a group of stem faults with dynamic fault grouping be g'. Thus the maximum number of groups that may be inserted is 0.2(1-d)N/g'. However, on the average, only a fraction of these groups (say, r) will issue active group requests in every vector (see Figure 8). Thus the ratio between the average number of fault-insertion steps per vector in [8] and in our method is 5g'/((1-d)gr). Assuming d in the range 0.1-0.3, r in the range 0.1-0.4, and g'/g in the range 1.2-1.7, this ratio will be between approximately 17 and 120. Taking into account the 50% slower clock, our method will be about 8 to 60 times faster than [8].
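A quick check of the quoted range (a sketch that just reproduces the arithmetic, not code from the paper):

```python
# Reproducing the quoted speedup range: ratio = 5*g'/((1-d)*g*r), evaluated
# at the extreme points of the stated parameter ranges, then halved to
# account for the roughly 2x slower clock.
def ratio(d, r, gp_over_g):
    return 5 * gp_over_g / ((1 - d) * r)

lo = ratio(d=0.1, r=0.4, gp_over_g=1.2)   # ~16.7, i.e. about 17
hi = ratio(d=0.3, r=0.1, gp_over_g=1.7)   # ~121,  i.e. about 120
print(round(lo), round(hi))               # 17 121
print(round(lo / 2), round(hi / 2))       # ~8 and ~61, i.e. about 8 to 60
```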

5. Conclusions and Future Work


In this paper, we propose a new approach for fault simulation, which maps a given circuit C into a fault simulation circuit FSIM(C) that implements a critical path tracing algorithm for C. FSIM(C) is then implemented on reconfigurable hardware. Unlike prior work relying on serial fault emulation, our approach is independent of the technology used in the target reconfigurable hardware that emulates the fault simulation circuit. Our performance estimate shows that our method will be between one and two orders of magnitude faster than serial fault emulation.

At the time of this writing, the implementation work is still in progress. The next phase of this research will extend this approach to sequential circuits.

References
[1] M. Abramovici, M. A. Breuer, and A. D. Friedman, Digital Systems Testing and Testable Design, IEEE Press, 1994
[2] M. Abramovici, P. R. Menon, and D. T. Miller, Critical Path Tracing: An Alternative to Fault Simulation, IEEE Design & Test of Computers, February 1984
[3] P. Agrawal, V. D. Agrawal, and K.-T. Cheng, Fault Simulation on a Pipelined Multiprocessor System, Proc. Intnl. Test Conf., pp. 727-734, 1989
[4] F. Brglez and H. Fujiwara, Neutral Netlist of Ten Combinational Benchmark Circuits and a Target Translator in FORTRAN, Proc. IEEE Intnl. Symp. on Circuits and Systems, June 1985
[5] N. Van Brunt, The Zycad Logic Evaluator and its Application to Modern System Design, Proc. Intnl. Conf. on Computer Design, pp. 232-233, 1983
[6] L. Burgun, F. Reblewski, G. Fenelon, J. Barbier, and O. Lepape, Serial Fault Simulation, Proc. Design Automation Conf., pp. 801-806, 1996
[7] M. Butts, J. Bacheler, and J. Varghese, An Efficient Logic Emulation System, Proc. Intnl. Conf. on Computer Design, pp. 138-141, 1992
[8] K.-T. Cheng, S.-Y. Huang, and W.-J. Dai, Fault Emulation: A New Approach to Fault Grading, Proc. Intnl. Conf. on Computer-Aided Design, pp. 681-686, Nov. 1995


[9] P. A. Duba, R. K. Roy, J. A. Abraham, and W. A. Rogers, Fault Simulation in a Distributed Environment, Proc. Design Automation Conf., pp. 686-691, 1988
[10] V. S. Iyengar and D. T. Tang, On Simulating Faults in Parallel, Proc. Fault-Tolerant Computing Symp., pp. 110-115, 1988
[11] P. R. Menon, Y. Levendel, and M. Abramovici, SCRIPT: A Critical Path Tracing Algorithm for Synchronous Sequential Circuits, IEEE Transactions on Computer-Aided Design, June 1991
[12] V. Narayanan, A Parallel Algorithm for Fault Simulation on the Connection Machine, Proc. Intnl. Test Conf., pp. 89-93, 1988
[13] RPM Emulation System Data Sheet, Quickturn Systems Inc., 1991
[14] R. W. Wieler, Z. Zhang, and R. D. McLeod, Simulating Static and Dynamic Faults in BIST Structures with a FPGA Based Emulator, Proc. Intnl. Workshop on Field-Programmable Logic and Applications, pp. 240-250, 1994
[15] R. W. Wieler, Z. Zhang, and R. D. McLeod, Emulating Static Faults Using a Xilinx Based Emulator, Proc. IEEE Symp. on FPGAs for Custom Computing Machines, pp. 110-115, 1995

