A THESIS
SUBMITTED BY
L.SWARNA JYOTHI
FOR THE AWARD OF THE DEGREE
OF
DOCTOR OF PHILOSOPHY
CHENNAI 600095
JUNE 2009
BONAFIDE CERTIFICATE
Resources” is the bonafide work of Mrs.L.SWARNA JYOTHI, who carried out the
research under my supervision. Certified further that to the best of my knowledge the
work reported here does not form part of any other thesis or dissertation on the basis of
which a degree or award was conferred on an earlier occasion on this or any other
candidate.
DECLARATION
ACKNOWLEDGEMENTS
thank Dr. P. Nirmal Kumar for accepting the request to become an expert for the
dissertation committee.
This research has been funded by the All India Council for Technical
Education (AICTE), and I thank AICTE for the support. I thank the Management of JSS
Academy of Technical Education for providing the facilities. I also thank Dr. MGR University for
I would like to thank Mr. Harish for the help he has extended by becoming
my co-author for the key papers and for technical support. I also thank Mr. Ajit and
Mr. Janardhan for making this work happen. I would like to thank my mother,
L.SWARNA JYOTHI
TABLE OF CONTENTS
Chapters
1. Introduction………………………………………………………….……….…...….1
1.1 Functional Verification…………………………………..……….…….………..1
1.2 Verification Reuse………………………………..…………...….……….….…..4
1.3 Objectives…………………………………………….…………....……….….....6
1.4 Motivation………………………………………………………..….…..…..…...7
1.7 Overview of the thesis…………………………………...………….…..…….….9
2. Literature Reviews………………………………………………....…....…….……10
2.1 Functional Verification…………………...…………………….………..…10
2.1.1 Simulation, Emulation and FPGA Prototyping…………….….………....11
2.1.2 Formal verification……………………………………..………...………13
2.1.3 Test Generation……………………………….………………...………..14
2.1.4 Transaction level modeling………………………………...……...……..16
3. Introduction to Verification…………………………...……………………………33
3.1 Issues in Verification…………………………………………..………………33
3.2 Issues in Verification Methodologies………………………………………….34
3.3 Test Methodologies……………………………………………………………35
3.3.1 Deterministic…………………………….………………………………35
3.3.2 Pre-run Generation………………………………………………………35
3.3.3 Checking Strategies……………………………...………………………36
3.3.4 Coverage Metrics………………………………………………………...37
3.4 Types of Checking……………………………………………………………..38
3.5 Issues in Traditional Verification Methodology…………………………...…..39
3.5.1 Productivity Issues……………………………...………………………..39
3.5.2 Requirement for productivity improvement……………………….….….39
3.5.3 Quality Issues…………………………………...………………….…….39
3.5.4 Requirement for quality improvement…………………….…….……….39
3.5.5 Task-based strategy……………………………...…………….…………40
3.6 Verification methodology………………………………………………………40
3.7 A Verification Environment for Verifying Ethernet Packet in Ethernet IP core….…41
3.7.1 Management Data Input/ Output………………………………….………42
3.7.2 Verification Strategy……………………………..……………….………43
3.8 Chapter conclusion………………………………………………………………46
Appendices
1. PHY Code…………………………………………………………………….....….83
4. Bug File……………………………………………………………….…………...104
References………………………………………………………………….………...105
Publication Details………………………………..……………………….…………117
Vitae
LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
ABSTRACT
The Physical Layer is a fundamental layer upon which all higher level
functions in a network are based. However, due to the plethora of available
hardware technologies with widely varying characteristics, this is perhaps the most
complex layer in the OSI architecture. The implementation of this layer is often
termed Physical layer device (PHY). A PHY chip is commonly found on Ethernet
devices. Its purpose is to provide digital access to the modulated link and an interface
to the Ethernet Media Access Control (MAC) using the Media Independent Interface
(MII). This work demonstrates reuse of the design environment with reference to
verifying the Ethernet packet in an Ethernet Intellectual Property (IP) Core. Design
reuse is achieved through Verilog tasks used within the Specman environment.
The Ethernet PHY e Verification Component (eVC) is an in-house development.
The Ethernet eVC is built with the PHY as a separate eVC and the host as a task-driven
Verilog Bus Functional Model (BFM). This allows the creation of a virtual host
environment using a combination of the Verilog BFM and the eVC.
The wrappers can be used to integrate the IP core in another design when
the interfaces of the IP core are different from the interfaces in the current design.
A wrapper is an interfacing component that provides the necessary logic to
attach a user-specified custom IP to a bus in an architecture. Wrappers can also
be used in verification when the verification environment and the DUT present
altogether different types of interfaces. This work demonstrates the design,
development and verification of transaction-level wrappers, with reference to the
wrapper interface between the Ethernet MAC and the PCI device, and the processor
interface block between an 8-bit processor/microcontroller and the DUT core.
The present approach was able to meet all the stringent requirements related
to the verification of a complex system. The extensibility of the e language, the
macro facility and the features of Specman Elite's built-in generator were some of
the novel elements that enabled this approach to be successful in a short duration.
Also, it involves a smaller team effort compared to doing the same entirely in a
Verilog environment.
CHAPTER 1
INTRODUCTION
Stuart 2000 [57]). Up to 70% of design development time and resources are spent on
FV. Recent studies highlight the challenges of FV: one study estimates that by 2007,
a complex SoC would need 2000 engineer-years to write 25 million lines of Register
Transfer Level (RTL) code and one trillion simulation vectors for functional
verification (Spirakis 2004 [87]).
Pre-silicon logic bugs of the order of 20k-30k are designed into each next-generation
part, and 100k into the subsequent generation (Tom Schubert, Intel, DAC 2003
[96]). In the face of shrinking time-to-market, the amount of validation effort rapidly
becomes intractable and significantly impacts the product schedule, with the additional
risk of shipping products with undetected bugs.
of generating a flexible and modular test environment, reusability and readability of the
test environment, and handling of simulation time in an automated test environment are
some other issues.
Design complexity and size make version control and tracking of the design and
verification process difficult, both at the specification and functional levels, which can
often lead to architectural-level bugs that require enormous effort to debug. Debugging
is always a problem, especially when bugs occur in unpredictable places. Even the
most comprehensive functional test plan can completely miss bugs generated by
obscure functional combinations or ambiguous interpretations of the specification.
This is why so many bugs are found in emulation, or after first silicon is produced.
Without the ability to make the specification itself executable, there is really no way
to ensure comprehensive functional coverage of the entire design intent.
Perhaps the most important problem faced by design and verification engineers
is the lack of effective metrics to measure the progress of verification. Indirect metrics,
such as toggle testing or code coverage, indicate if all the flip-flops are toggled or all
lines of code were executed, but they do not give any indication of what functionality
was verified. For example, they do not indicate if a processor executed all possible
combinations of consecutive instructions. There is simply no correspondence between
any of these metrics and coverage of the functional test plan. As a result, the
The requirements for quality improvement are increased confidence about the ratio
of identified bugs and an automatic way to know what has been tested. To improve
test-writing productivity, a higher level of abstraction for specifying the vector stream
is used: higher-level tasks in an HDL or C are created. The task-based strategy has
limitations, however: the test-writing effort is still high, the high-level intent is not
readily apparent, and the designer must select many parameter values manually.
industry and facilitates delivery of open resources for further research, compatible to
industry requirement. This work demonstrates the ways of performing organized effort
in doing considerable amount of research in Design reuse, Verification reuse and
design for verification.
The Physical Layer is a fundamental layer upon which all higher level functions
in a network are based. However, due to the plethora of available hardware
technologies with widely varying characteristics, this is perhaps the most complex
layer in the OSI architecture. The implementation of this layer is often termed Physical
layer device (PHY). A PHY chip is commonly found on Ethernet devices. Its purpose
is to provide digital access of the modulated link and interface to Ethernet Media
Access Control (MAC) using media independent interface (MII). This work
demonstrates reuse of design environment with reference to verifying the Ethernet
packet in Ethernet Intellectual Property (IP) Core. Design Reuse is achieved through
verilog tasks which were used in specman environment. Ethernet Phy e Verification
component (eVC) is an in house development. Ethernet eVC is built with phy as a
separate eVC and host being a task driven verilog Bus functional model (BFM). This
allowed us the creation of virtual host environment using a combination of verilog
BFM and eVC. Verification environment reuse for different application with different
interface is done by developing a wrapper around the Design Under Test (DUT)
interface and then interfacing it to the environment. A detailed test plan is made to
perform complete and exhaustive test for Ethernet MAC Receiver. Coverage goals,
coverage analysis and coverage obtained indicate efficiency of the verification
methodology.
The wrappers can be used to integrate the IP core in another design when the
interfaces of the IP core are different from the interfaces in the current design.
Wrapper refers to an interfacing component which provides the necessary logic to
attach a user-specified custom IP to a bus in an architecture. Wrappers can also be
used in verification when the verification environment and the DUT present altogether
different types of interfaces. This work demonstrates the design, development and
verification of transaction-level wrappers, with reference to the wrapper interface
between the Ethernet MAC and the PCI device, and the processor interface block
between an 8-bit processor/microcontroller and the DUT core.
1.3 OBJECTIVES
1.4 MOTIVATION
In a series of studies Collett International Research has shown that first silicon
success rate for ASICs has fallen from 48% in 2000 to 39% in 2002 to 34% in 2003.
Forty percent of designs required more than one re-spin, as explained by (Rindert
Schutten 2003 [78], Jack Horgan 2004 [104]). Each re-spin costs hundreds of
thousands of dollars and requires months of additional development time.
Companies that are able to curb this trend have a huge advantage over their
competitors, both in terms of the ensuing reduction in engineering cost and the
business advantage of getting to market sooner with high-quality products. Chips fail
for many reasons, ranging from physical effects like IR drop, to mixed-signal issues,
power issues, and logic/functional flaws. However, logic/functional flaws are the
biggest cause of flawed silicon. Of all tapeouts that required a silicon re-spin, the Collett
International Research study shows that more than 60 percent contained logic or
functional flaws.
The data that Collett International Research collected from North American
design engineers reveals the top three categories of functional flaws:
design errors (82%), specification errors (47%), and a third category (14%).
Design reuse is a critical element in closing the SoC design gap. Designers
learned years ago that reinventing every new chip from scratch is not a scalable
approach. With the emergence of the semiconductor intellectual property (IP) market,
designers have gradually grown more comfortable with licensing virtual components
(VCs) from ASIC vendors, FPGA suppliers and dedicated IP companies. Perhaps the
largest single factor in the limited success of IP design reuse has been the lack of
attention to corresponding verification reuse.
CHAPTER 2
LITERATURE REVIEWS
verification environment is based on the simulation technique, and the design and
implementation is based on the two infrastructures, Verification Collaborative
Infrastructure (VCI) and TestWizard. The verification environment contains complete
models and tools including Bus Functional Models (BFMs), an arbiter, a protocol monitor,
a C-based test generator, and compliance test suites.
generated stimuli, usually in the form of test programs, trigger architecture and micro-
architecture events defined by a verification plan. In general, test programs must meet
the validity requirement and the quality requirement.
Although formal methods such as model checking, and theorem proving have
resulted in noticeable progress, these approaches apply only to the verification of
relatively small design blocks or narrowly focused verification goals. Earlier test
generators incorporated a biased, pseudorandom, dynamic generation scheme. A
generic solution applicable to any architecture, suggested by (Allon Adir 2004 [4]),
led to the development of a model-based test generation scheme that partitions the
test generator into two main components: a generic, architecture-independent engine
and a model that describes the targeted architecture.
One of the earliest test generation methods, suggested by (C. Bellon 1984 [13]),
starts from a behavioral sequential-machine model. A set of representative
transitions to be tested is sought. The transition set is partitioned into a finite,
limited number of equivalence classes such that verifying a representative of
each class verifies the entire class by induction. Design Verification Using
Logic Tests suggested by (Warren H Debany 1991 [100]), uses fault simulation to
grade the coverage of test cases used for hardware design verification. (Eugene Zhang
1997 [24]) suggests completely self checking tests using a transaction-based
verification method, and concurrent programming techniques. Another methodology to
synthesize a self-test program for stuck-at faults is suggested by (F.Corno 2003 [19]).
This approach generates a sequence of instructions that enumerates all the
combinations of the operations and systematically selects operands. However, users
need to determine the heuristics to assign values to instruction operands to achieve
high stuck-at fault coverage. Priority Directed Test Generation (PDG) for Functional
Verification using Neural Networks is suggested by (Hao Shen 2005 [33]). With PDG,
a test vector which hasn’t been simulated is granted a priority attribute. The priority
indicates the possibility of detecting new bugs by simulating this vector.
presents an approach to design and verify SystemC models at the transaction level.
Here the verification is integrated as part of the design flow. Both the design and the
properties are written in Property Specification Language and modeled in Unified
Modeling Language (UML). In this approach a methodology is offered to apply
assertion-based verification for reusing the already defined Property Specification
Language (PSL) properties. Here a genetic algorithm that optimizes the probability
distribution of the inputs over the space of their possible values is proposed.
ABV has been identified as a modern, powerful verification paradigm that can
assure enhanced productivity, higher design quality and, ultimately, faster time to
market and higher value to engineers and end-users of electronics products. Assertions
are orthogonal to the design, they can be thought of as a separate layer of verification
added on top of the design itself. With ABV, assertions are used to capture the
required temporal behavior of the design, in a formal and unambiguous way. The
design then can be verified against those assertions using simulation and/or static
verification (e.g. model checking) techniques to assure that it indeed conforms to the
intended design intent, as captured by the assertions.
Designers write assertions that describe the behaviors they expect the
environment to exhibit, the behaviors they want to model, and specific structural
details about the design implementation. Designers also write functional coverage
assertions to identify corner cases that the verification environment must properly
stimulate. Verification teams typically write assertions that deal more with the end-to-
end behavior of a block or system, specifying the behavior independently of the
specific implementation choices of the designer.
The generation of behavioral models from a set of assertions is proposed by (Hans
Eveking 2007 [32]), who calls these models “cando-objects” because they behave
randomly in all situations not covered by the corresponding set of assertions. The
cando-objects reflect the intended non-determinism of assertions as well as the
non-determinism caused by the incompleteness of a set of assertions.
2.1.6 Validation
high-level test sequence is derived from the abstract FSM model using FSM testing
techniques.
CDV approach makes coverage the core engine that drives the whole
verification flow. Coverage space is defined up front, and coverage is used to measure
the quality of the random testing and steer verification resources towards covering
holes until a satisfactory level of coverage is attained. This, in theory, enables reaching
high quality verification in a timely manner.
certain aspects of the coverage data to extract relevant, useful information, thereby
improving the quality of the coverage analysis.
This map is the verification plan, an executable natural language document that
defines the scope of the verification problem and its solution. The scope of the
problem is defined by implicit and explicit coverage models. The solution to the
verification problem is described by the methodology employed to achieve full
coverage: dynamic and static verification. Simulation (dynamic) contributes to
coverage closure through RTL execution. Formal (static) contributes to coverage
closure through proven properties. By annotating the verification plan with these
progress metrics as well as others, it becomes a live, executable document able to
direct the design team to their goal.
For SoCs it is observed that most of the peripherals are reused from the
previous design step with some modifications to the feature set. Verification
reuse methodology is very critical for these systems. Verification Planning for Core-
based Designs is suggested by (Anjali Vishwanath 2007 [12]). That work discusses
the importance and completeness of verification planning in meeting the verification
requirements, and the reuse techniques adopted during the planning phase to enhance
reuse between different core-based designs.
designed to dynamically predict the response of the DUT. A different reference
model must be designed to verify each different DUT.
provide both speed and ease of use for simulation. The proposed verification environment
PCI-Xactor is a complete and flexible verification environment for PCI, PCI-X, PCI
Express, and Advanced Switching.
For P1500, a language is being defined for embedded core test to enable the reuse
of intellectual property in a system-on-a-chip environment (Rohit Kapur
1999 [79]). A Core Test Description Language (CTL) is being proposed as an industry
standard method for capturing and expressing test related information for reusable
cores. CTL intends to introduce an industry standard method and associated
description language for capturing and expressing certain types of test related
information for reusable cores. CTL provides all the information about the core to
enable the testing of the user defined logic.
System design projects are extremely large and complex. An average design
today is over one million gates, with increasingly complex interfaces, and short design
schedules. This is creating a verification crisis, causing teams to run orders of
magnitude more simulation cycles. In addition, directed tests are not easily created or
maintained and the interface interactions have grown exponentially. The team must
also verify that the design adheres to the specification at each abstraction level, and
each level requires a different execution engine. Considering all this, combined with
the ever-increasing cost of implementation as geometries continue to shrink, the cost
of failure has never been higher.
example, indicates lines of verification code that were visited in a simulation, but it
does not offer any indication of which functionality was verified. As a result, the
engineer is never sure whether a sufficient amount of verification has been performed.
Verification solutions have never needed to be faster or smarter than they do
now. Where engineers previously had the luxury of
working in small domains and in hardware models with limited interactions, the
growing momentum in system-on-chip (SoC) development has meant a tremendous
growth in the amount of verification required for a given design.
CHAPTER 3
VERIFICATION ENVIRONMENT
Design complexity and size make version control and tracking of the design and
verification process difficult, both at the specification and functional levels, which can
often lead to architectural-level bugs that require enormous effort to debug. Debugging
is always a problem, especially when bugs occur in unpredictable places. Even the
most comprehensive functional test plan can completely miss bugs generated by
obscure functional combinations or ambiguous interpretations of the specification.
This is why so many bugs are found in emulation, or after first silicon is produced.
Without the ability to make the specification itself executable, there is really no way
to ensure comprehensive functional coverage of the entire design intent.
Perhaps the most important problem faced by design and verification engineers
is the lack of effective metrics to measure the progress of verification. Indirect metrics,
such as toggle testing or code coverage, indicate if all the flip-flops are toggled or all
lines of code were executed, but they do not give any indication of what functionality
was verified. For example, they do not indicate if a processor executed all possible
combinations of consecutive instructions. There is simply no correspondence between
any of these metrics and coverage of the functional test plan. As a result, the
verification engineer is never really sure whether a sufficient amount of verification
has been performed.
3.3.1 Deterministic
The oldest and most common test methodology used today is deterministic
testing. These tests are developed manually and normally correspond directly to the
functional test plan. Engineers often use deterministic tests to exercise corner cases,
specific sequences that cause the device to enter extreme operational modes. These
tests are normally checked manually. However, with some additional programming the
designer can create self-checking deterministic tests.
Although simple tests can be written in minutes, the more complex ones can
take days to write and debug. Moreover, midstream changes to the design's temporal
behavior can cause the engineer to go through this process repeatedly. When this test
is completed, the corner case is tested through only one possible path.
the test generation process. C or C++ programs (and sometimes even VHDL and
Verilog, despite the lack of good software constructs) are usually used to create the
tests prior to simulation. The programs read in a parameter/directives file that controls
the generation of the test. Often these files contain simple weighting systems to direct
the random selection of inputs.
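The weighting scheme described above can be sketched in a few lines of Python; the directive names and weights here are hypothetical, standing in for the contents of a parameter/directives file:

```python
import random

# Hypothetical directives, as would be read from a parameter/directives
# file: weights that bias the random selection of stimulus kinds.
directives = {"short_frame": 5, "max_frame": 2, "crc_error_frame": 1}

def generate_tests(directives, n, seed=0):
    """Pre-run generation: pick n stimulus kinds using the given weights."""
    rng = random.Random(seed)
    kinds = list(directives)
    weights = [directives[k] for k in kinds]
    return [rng.choices(kinds, weights=weights)[0] for _ in range(n)]

tests = generate_tests(directives, 1000)
# Heavily weighted kinds dominate the generated stimulus stream.
print(tests.count("short_frame") > tests.count("crc_error_frame"))
```

Changing only the directives file then redirects the generator without touching the test program itself, which is the point of this style of pre-run generation.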
The two most popular ways to determine test results are to compare them to a
reference model or to create rule-based checks. Both of these checking methods must
include both the temporal behavior and protocols of the device as well as the
verification of data. Reference models are most common for processor-like designs
where the correct result can be predicted with relative ease. Engineers perform these
checks either on-the-fly or post-run. Simple checks and protocol checks can be
performed on-the-fly by the stubs and monitors using an HDL. Post-run checks are
often performed using a C/C++ or PERL program.
The problem with these checking strategies stems from the way they are most
commonly implemented today. Post-run checking wastes cycles. In addition, since the
post-run checking cannot detect a problem in real-time, the designer does not have
access to the values of the registers and memories of the device at the time the problem
Toggle testing verifies that over a series of tests, all nodes toggled at least once from
1 to 0 and back;
Code coverage demonstrates that, over a series of tests, all the source lines were
exercised. In many cases there is also an indication as to whether branches in
conditional code were executed. Sometimes an indication of state-machine
transitions is also available;
Possibly the most common metric used to measure progress is to track how many
bugs are found each week. After a period of a few weeks with very low or zero bugs
found, the designer assumes that the verification process has reached a point of
diminishing returns.
Unfortunately, none of the metrics described above has any direct relation to
the functionality of the device, nor is there any correlation to common user
applications. Neither toggle testing nor code coverage can indicate whether all types
of cells in a communication chip, with and without Cyclic Redundancy Check (CRC)
errors, have entered on all ports. Nor can these metrics determine whether all possible
sequences of consecutive instructions were tested in a processor.
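The gap between indirect metrics and functional coverage can be illustrated with a small Python sketch (the instruction names are hypothetical): a functional coverage model over consecutive instruction pairs reports low coverage even when every instruction, i.e. every "line", has executed at least once.

```python
from itertools import product

# Hypothetical instruction set, purely for illustration.
INSTRUCTIONS = ["ADD", "SUB", "LOAD", "STORE"]

class PairCoverage:
    """Functional coverage model: which consecutive instruction
    pairs have actually been exercised?"""
    def __init__(self, instructions):
        self.goal = set(product(instructions, repeat=2))  # all pairs
        self.hit = set()
        self._prev = None

    def sample(self, instr):
        if self._prev is not None:
            self.hit.add((self._prev, instr))
        self._prev = instr

    def coverage(self):
        return len(self.hit) / len(self.goal)

cov = PairCoverage(INSTRUCTIONS)
for instr in ["ADD", "SUB", "ADD", "LOAD", "STORE", "ADD"]:
    cov.sample(instr)
# Only 5 of the 16 possible pairs were seen, even though every
# instruction executed at least once.
print(cov.coverage())
```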
A data check verifies data correctness, while a temporal check verifies timing
and protocol. Higher-level languages are needed because a high-level language eases
the creation of an expected-value generator. Figure 3.1 pictorially represents data
checking: it shows the sharing of objects used for the input stimulus, and supports
cycle-based behavior, events, and synchronization with the HDL simulator. Verification
test cases are based on directed, random and constrained-random test cases.
Figure 3.1 Data Checking (the output of an expected-value generator is compared
against the output of the DUT for the same input stimulus)
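The scheme of Figure 3.1 can be sketched in Python. The DUT stand-in below (a byte-wise inverter) is purely illustrative; the point is the stream-wise comparison of DUT output against an independently computed expected value:

```python
def dut(stimulus):
    """Hypothetical device under test: inverts each byte."""
    return [b ^ 0xFF for b in stimulus]

def expected_value_generator(stimulus):
    """Independent reference model predicting the DUT output."""
    return [(~b) & 0xFF for b in stimulus]

def data_check(stimulus):
    """Compare the two output streams element by element."""
    mismatches = [(i, actual, expected)
                  for i, (actual, expected)
                  in enumerate(zip(dut(stimulus),
                                   expected_value_generator(stimulus)))
                  if actual != expected]
    return mismatches  # an empty list means the data check passed

print(data_check([0b01001, 0b11100, 0b01010]))  # [] — no mismatches
```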
In addition, as with many verification projects today, our goals were to develop
a high-quality device in an extremely tight time schedule. This section describes our
approach to the verification of this complex device and how we addressed the
conflicting needs of quality versus complexity versus time.
The functional architecture (Fig 3.2) consists of a host interface and a standard
MII interface. The IP core consists of MDIO, Direct Memory Access (DMA) support,
Configuration registers, Control logic, Transmitter FSM and Receiver FSM.
The MDIO (Figure 3.2) is a simple serial interface between the host and an
external PHY device. It is used for configuration and status read of the physical
device. A host processor responsible for system configuration and monitoring typically
uses the MDIO to perform individual accesses to the various PHY devices.
Figure 3.2 Functional architecture (the host connects to the MAC and its
configuration registers; the MAC connects to the PHY through the MII/GMII and
MDIO interfaces)
A mux is used for selecting an 8-bit field of the MDIO frame at a time and loading it
to parallel in serial out register. The PHY data is shifted out at the rising edge of PHY
clock. The most significant bit of the data is shifted first. During a read cycle the data
from PHY is shifted into the serial in parallel out register and a Demux drives it to the
required data bus.
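The shift-out behaviour described above can be modelled with a short Python sketch; this is a behavioural stand-in for the mux and PISO register, not the thesis RTL:

```python
def piso_shift_out(byte):
    """Shift one 8-bit field out serially, most significant bit first,
    one bit per PHY clock edge."""
    for i in range(7, -1, -1):
        yield (byte >> i) & 1

def serialize_frame(fields):
    """The mux selects one 8-bit field at a time and loads it into the
    parallel-in serial-out register, which shifts it onto the line."""
    bits = []
    for field in fields:
        bits.extend(piso_shift_out(field))
    return bits

# 0xA5 = 1010_0101: the MSB (1) appears on the line first.
print(serialize_frame([0xA5]))  # [1, 0, 1, 0, 0, 1, 0, 1]
```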
The control block consists of counters which generate the enable signals for the
functioning of the mux, demux, SIPO and PISO. It generates the status signals to be
written into the configuration registers. Control block also generates the PHY enable
signal to drive PHY data as the data line is a shared tri-stateable bus, which is driven
by the MAC for write transactions or by the PHY devices during read.
The clock generator module generates the Management Data Clock by dividing the
host clock. The division factor is set in a configuration register field.
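A behavioural Python sketch of such a divider, with the division factor standing in for the configuration-register field, might look as follows:

```python
def mdc_from_host_clock(host_cycles, division_factor):
    """Model the clock generator: toggle the Management Data Clock
    level every division_factor host-clock cycles."""
    mdc, level, count = [], 0, 0
    for _ in range(host_cycles):
        count += 1
        if count == division_factor:
            level ^= 1   # toggle the divided clock
            count = 0
        mdc.append(level)
    return mdc

# Division factor 2: MDC toggles every 2 host cycles (period = 4 cycles).
print(mdc_from_host_clock(8, 2))  # [0, 1, 1, 0, 0, 1, 1, 0]
```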
Verification strategy for the Ethernet core is fairly sophisticated. The design is
very complex and the verification team was tasked with ensuring as high a quality
device as possible. Also, the verification team had the requirement that the
environment lend itself easily to reuse for future generations, and that engineers who
did not create the environment be able to be productive within it as quickly as possible.
The verification environment designed is shown in Figure 3.3.
The Specman Elite environment is built with the PHY eVC and the host eVC,
with a few Verilog tasks. This allowed us to create a virtual host environment using e
masters, slaves and bus arbiters.
Figure 3.3 Verification Environment for Media Access Control (MAC) DUT (a
general-purpose task library, test-case master, data generator, G/T BFM, PIB, monitor
and data checker (scoreboard) around the IP core on the PCI bus)
A Bus functional Model (BFM) is the unit instance that interacts with the DUT
by driving and/or sampling the DUT signals. A sequence driver drives a data item
generated by a data generation unit. A BFM should be self-contained, not dependent
on other drivers. All stimulus interaction with the DUT should come from common
drivers. This makes the verification IP more modular and reusable. BFMs drive and
sample only one interface. An interface is defined as a set of signals that implements a
specific protocol. It makes the design more modular and allows drivers to be reusable.
Monitors are used to check and observe all transactions on the interface. The
separation of data checking and protocol checking makes verification elements more
reusable, as well as less complicated in the implementation. Monitors should be self-
contained, with each monitor handling only one interface, and should not drive any
design signals. A monitor verifies the protocol on the interface. As the IP or block is
integrated into a multiple-unit or SoC environment, the monitors should be reusable to
check for violations on the interfaces of the IP or block.
A scoreboard is the verification element that predicts, stores and compares data.
Determining the correctness of the data received and of the transactions that occurred
on an interface is the job of the scoreboard.
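A minimal Python sketch of such a scoreboard (the names are illustrative, not taken from the thesis environment): predicted transactions are queued, then compared in order against what the monitor observes on the interface.

```python
from collections import deque

class Scoreboard:
    """Predicts, stores and compares transactions."""
    def __init__(self):
        self.expected = deque()
        self.errors = 0

    def predict(self, txn):
        """Store the transaction the reference model expects."""
        self.expected.append(txn)

    def observe(self, txn):
        """Compare an observed transaction against the oldest prediction."""
        if not self.expected or self.expected.popleft() != txn:
            self.errors += 1

sb = Scoreboard()
for pkt in ["pkt0", "pkt1", "pkt2"]:
    sb.predict(pkt)          # reference model's predictions
for pkt in ["pkt0", "pkt1", "pkt2"]:
    sb.observe(pkt)          # monitor reports what actually appeared
print(sb.errors)             # 0: every observation matched a prediction
```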
One of the major objectives for the verification environment was the ability for
the non-Specman "savvy" engineers to be able to easily use the environment to
develop tests. We achieved this through “Verilog tasks” that were built in a layered
way using Specman Elite macros.
CHAPTER 4
VERIFICATION REUSE
Figure 4.1 shows the structure of the PHY model. The PHY model provides
translation between high level frame descriptions and the low level interface at the
DUT. The Frame Decoder reconstructs high-level frames from the DUT interface
and classifies them according to the level and type of errors encountered. The
Protocol Checker monitors the data-path and station management signals to and
from the DUT at both low and high levels. The various components of the PHY
layer collect coverage and logging information and generate statistics.
The PHY element also models the station management interface and the
associated register sets. There is a Station Management Executive within the PHY
to control the DUT access to the register sets via an ‘MDIO’ interface. It is also
possible for the verification environment to directly read and write the contents of
the STA registers. All mandatory registers for each kind of PHY are implemented
to both reflect and affect the internal state of the eVC PHY model. Optional
registers can be modeled or restricted under user control.
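The dual access paths to the STA registers can be made concrete with a small Python sketch (the two paths mirror the description above; the register map size and all names are assumptions, and only the 16-bit MDIO register width is standard):

```python
class StaRegisters:
    """Sketch of the STA register set: reachable both through the MDIO
    interface (as the DUT accesses it) and directly by the verification
    environment. Names and the 32-register map are assumptions."""

    WIDTH_MASK = 0xFFFF   # MDIO registers are 16 bits wide

    def __init__(self):
        self.regs = {addr: 0 for addr in range(32)}  # per-PHY register space

    # DUT-side access via the MDIO interface
    def mdio_write(self, addr, value):
        self.regs[addr] = value & self.WIDTH_MASK

    def mdio_read(self, addr):
        return self.regs[addr]

    # Direct (backdoor) access from the verification environment
    def backdoor_poke(self, addr, value):
        self.regs[addr] = value & self.WIDTH_MASK
```

The backdoor path lets the environment set up or inspect register state without consuming MDIO bus cycles, which is what makes direct reads and writes of the STA registers possible alongside normal DUT traffic.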
The DUT Agent provides the interface between the Virtual Medium and a PHY
layer model that provides connectivity to the DUT. Normally there is a single
DUT Agent per instance of the eVC.
The Ethernet eVC (Figure 4.2) can be used to verify any IEEE802.3:2000
and IEEE Draft P802.3ae/ D4.0 compliant MAC or PHY device. The eVC can be
used for the functional verification of IP cores and SoC designs incorporating
Ethernet MAC and PHY layer functionality and can be configured to have an
unlimited number of Ethernet ports, each interfacing with one of the DUT’s
Ethernet ports.
Ethernet eVC:
There are two types of Ethernet CRC errors: (a) MAC frame CRC errors,
and (b) internal protocol CRC errors.
These CRC errors are the most common, and are what most devices and
analyzers are referring to when they claim a CRC error has occurred. Ethernet
packets are encapsulated in a MAC frame that contains a preamble, and a post-
envelope CRC check. The Ethernet adapter on the sending station is responsible
for creating the preamble, inserting the packet data, and then calculating a
CRC checksum and appending it at the end of the packet. The receiving station
uses the checksum to judge quickly whether the packet was received intact. If
the checksum is not correct, the packet is assumed to be bogus and is discarded.
If a collision happens during packet transmission, the signal for the specific
packet will be interrupted, and the resulting received packet will be damaged. If
the signal is interrupted partially during transmission, the CRC checksum that was
calculated by the network adapter will no longer be valid and the packet will be
flagged as a CRC error and discarded. CRC errors are common on a busy network,
and a small percentage does not reflect a network problem. When the percentage is
large, or when a single station shows a larger percentage of CRC errors, there is
probably a problem that needs to be addressed.
Some protocols have a second checksum for data integrity purposes. This
checksum is calculated on only a portion of the internal data of each packet, and
can give a second and independent check for the validity of the packet’s contents.
Observer calculates this checksum independent of the MAC layer CRC and
displays the results in the decode display. These CRC errors are very rare and can
be caused by malfunctioning software or protocol drivers.
A longer key is better for error checking; on the other hand, the
calculations with long keys can get fairly involved. Ethernet packets use a 32-bit
CRC, corresponding to a degree-31 remainder. Since the degree of the remainder is
always less than the degree of the divisor, the Ethernet key must be a polynomial
of degree 32. A polynomial of degree 32 has 33 coefficients, requiring a 33-bit
number to store it; there is no need to store the highest coefficient, since its value
is always 1. The key used by Ethernet is 0x04C11DB7. It corresponds to the polynomial
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1.
There is one more trick used in packaging CRCs: the CRC is calculated
after appending 32 zero bits to the message. A message with N bits corresponds
to a polynomial of degree N-1, so this operation is equivalent to multiplying the
message polynomial by x^32.
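The division and the append-32-zeros step can be checked with a short bit-level sketch (Python; this is plain polynomial division only, while the on-wire Ethernet FCS additionally bit-reverses and complements, which is omitted here):

```python
ETH_KEY = 0x04C11DB7  # low 32 coefficients of the key; the x^32 term is implicit

def poly_mod(value, nbits):
    """Remainder of value(x) divided by the Ethernet polynomial g(x),
    computed by GF(2) long division over an nbits-bit value."""
    g = (1 << 32) | ETH_KEY           # all 33 coefficients of g(x)
    for bit in range(nbits - 1, 31, -1):
        if value & (1 << bit):
            value ^= g << (bit - 32)  # align g(x) under the leading term
    return value                      # degree <= 31, fits in 32 bits

def crc_remainder(message, nbits):
    """CRC of an nbits-bit message: appending 32 zero bits is the same as
    multiplying the message polynomial by x^32 before dividing."""
    return poly_mod(message << 32, nbits + 32)
```

Appending the 32-bit remainder to the padded message then makes the whole frame divide evenly by the key, which is exactly the receiver's quick intact-or-bogus judgment.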
4.4 WRAPPERS
Designers must be able to reuse intellectual property from one design in other
projects and designs. Reuse of IP across different designs in the domain shortens
time to market and improves overall productivity and quality. Wrappers
can be used to integrate an IP core into another design when the interfaces of the IP
core differ from the interfaces in the current design. A wrapper is an
interfacing component that provides the logic needed to attach a user-specified
custom IP to a bus in an architecture. Wrappers can also be used in
verification when the verification environment and the DUT have altogether
different types of interfaces.
The wrapper (Figure 4.3) interfaces the Ethernet MAC with the PCI device.
This interface serves to access the configuration registers and the memory. Direct
Memory Access (DMA) transfers are used to transfer data to and from memory.
The wrapper can be exchanged for other types of interfaces while keeping the MAC
engine constant. The new wrapper needs to take care of all interfacing issues with
the host and device. The features supported by the wrapper are:
Error signaling.
The host interface block generates the DMA signals based on the host-generated
signals. During the configuration process the host writes configuration data
to the configuration registers block. This process does not initiate a DMA
transfer, and only one register is accessed at a time. The process is
executed by enabling the ‘configdata_ready’ signal. With this signal high, the DMA
block reads the config register address from the address bus (‘reg_addr’)
during the next clock cycle and the config data from the data bus (‘reg_data’)
during the subsequent clock cycle.
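The cycle-level behaviour just described can be mimicked with a small trace-driven sketch (Python; the sampling logic is a reading of the text above, and only the signal names come from the thesis):

```python
def config_capture(bus_trace):
    """Cycle-level sketch of the config-write handshake: bus_trace is a
    list of (configdata_ready, reg_addr, reg_data) tuples, one per clock
    cycle. When configdata_ready is high, the address bus is sampled on
    the next cycle and the data bus on the cycle after that.
    Illustrative assumption, not the RTL."""
    writes = []
    i = 0
    while i < len(bus_trace):
        ready, _, _ = bus_trace[i]
        if ready:
            addr = bus_trace[i + 1][1]   # next clock cycle: address bus
            data = bus_trace[i + 2][2]   # subsequent cycle: data bus
            writes.append((addr, data))
            i += 3
        else:
            i += 1
    return writes
```

Each captured (address, data) pair corresponds to one register write, matching the one-register-per-handshake rule above.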
During transmission of a packet the host initiates the DMA transfer. This
is indicated to the DMA block by enabling the ‘dma_transfr’ signal. While this
signal is high, the DMA block keeps reading data from the data bus
and writing it to its tx_fifo. Once the transfer is over, the ‘dma_transfr’ signal is
disabled by the host interface block. During this process the host interface block is
responsible for getting the host memory address location from the descriptor.
During reception of a packet the DMA block initiates the DMA transfer by
issuing ‘dma_req’. This request goes to the host through the host interface block;
once the grant is given, the host interface block gets the host memory address from
the descriptors and enables the ‘dma_gnt’ signal to the DMA block. The DMA block
then continuously reads data from the receive buffer and puts it on the data bus,
and this data in turn goes to host memory through the host interface block.
1. IDLE:
For the host to start TX flow
Write first TX descriptor address into TDAR.
Set Txqueued bit.
2. ARB_REQ:
TX dma would raise a request to arbiter block.
3. DESC READ:
Once TX dma receives grant it
Asserts frame_req.
Puts TDAR onto address bus.
Asserts read request.
Total no of bytes to be read = 16 (4 dwords of descriptor table).
Once TX dma receives frame_req_ack, read operation begins.
Data transfer is valid when both mac_rdy and host_rdy are high.
4. INT:
If own = 1
Set interrupt, Txqueue empty.
Itxqueued is cleared.
If TX ram full = 1
Set interrupt, transmit buffer full.
Itxqueued is cleared.
If TX frame status not read
Set interrupt, status not read.
Itxqueued is cleared.
5. BUF READ:
If tx_descriptor read = 1 and grant is received
Buffer address from descriptor table is put on addbus.
Read request is asserted.
Byte count is Frame length from descriptor table.
If frame req_ack is received
Data transfer is valid when both mac_rdy and host_rdy are high.
Data is written to TX ram.
6. BUF WRITE:
If rx_descriptor read = 1 and grant is received
Buffer address from descriptor table is put on addbus.
write request is asserted.
Byte count is Frame length from RX RAM logic.
If frame req_ack is received
Data transfer is valid when both mac_rdy and host_rdy are high.
Data is read from RX RAM.
Frame count determines how many bytes are moved to memory.
If all frames were not able to be moved, due to small buffer, one
more transaction is requested to read the next descriptor table.
Transaction is completed successfully if frame_comp is asserted and
target abort remains deasserted.
If target abort is asserted the transaction has failed.
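The TX flow above reduces, in skeleton form, to a small state machine. The sketch below compresses the conditions to single booleans and keeps only states 1 through 5 (an assumption-laden simplification; the real controller has more qualifiers, and BUF WRITE belongs to the RX side):

```python
from enum import Enum, auto

class TxDma(Enum):
    IDLE = auto()
    ARB_REQ = auto()
    DESC_READ = auto()
    INT = auto()
    BUF_READ = auto()

def next_state(state, *, tx_queued=False, grant=False, desc_done=False,
               own=False, buf_done=False):
    """One transition of the simplified TX DMA flow."""
    if state is TxDma.IDLE and tx_queued:
        return TxDma.ARB_REQ        # host wrote TDAR and set the Txqueued bit
    if state is TxDma.ARB_REQ and grant:
        return TxDma.DESC_READ      # arbiter granted the bus; read 16 bytes
    if state is TxDma.DESC_READ and desc_done:
        # own = 1 raises the 'Tx queue empty' interrupt per state 4 above.
        return TxDma.INT if own else TxDma.BUF_READ
    if state is TxDma.INT:
        return TxDma.IDLE           # interrupt raised, queue bit cleared
    if state is TxDma.BUF_READ and buf_done:
        return TxDma.IDLE           # frame copied into TX RAM
    return state
```

Walking a normal transmit through the function visits IDLE, ARB_REQ, DESC READ and BUF READ in order, while a descriptor still owned by software short-circuits to INT.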
Arbiter
The arbiter arbitrates between the transmit and receive DMA controllers
for accessing the bus. Two priority select bits, written by the host in the
Interrupt Controller.
(Block-diagram labels: PIB with RAM, Prioritizer, Interrupt Prioritiser, CPU controller, Tx and Rx paths, initialization sequence.)
When the Receiver is trying to write (Figure 4.5) and the CPU is trying to read
simultaneously, the CPU is given priority for three consecutive message-object
accesses, counted by a counter; after the counter reaches the terminal count
(Tc_Rd), Pib_Busy is enabled so that the CPU does not access the RAM, and the
same conditions then apply to the Receiver. Likewise, when the Transmitter is trying
to read and the CPU is trying to write simultaneously, access for three message
objects is granted continuously using a counter; whenever the counter reaches
terminal count 3, Pib_Busy is enabled, and the same conditions apply to the Transmitter.
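The counter-based sharing can be sketched as follows (an illustrative reconstruction of the paragraph above, showing only the CPU-read versus Receiver-write case; the burst length of 3 comes from the text, everything else is an assumption):

```python
TC = 3  # terminal count: message objects granted per burst under contention

def arbitrate(requests):
    """Per-cycle grant decision between CPU and Receiver for the PIB RAM.
    `requests` is a list of (cpu_req, rx_req) pairs, one per cycle. Under
    contention the CPU gets up to TC back-to-back accesses, then Pib_Busy
    locks it out so the Receiver gets a turn. Not the RTL - a sketch."""
    grants, count = [], 0
    for cpu, rx in requests:
        if cpu and rx:
            if count < TC:
                grants.append("cpu")   # CPU still within its burst
                count += 1
            else:
                grants.append("rx")    # Pib_Busy: CPU locked out this cycle
                count = 0
        elif cpu:
            grants.append("cpu")       # no contention, no burst accounting
        elif rx:
            grants.append("rx")
        else:
            grants.append(None)
            count = 0
    return grants
```

The burst counter only advances under contention, so an uncontested requester is never throttled.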
Writing into and reading from the host interface registers is performed here.
The following configuration reads and writes are verified by writing into a
particular register and reading back from it:
The host writes data to the configuration registers directly, using the PCI
target interface.
Data frames are written to and from memory using the PCI master
interface.
Storage of two frames at any time is supported.
CHAPTER 5
TESTING
A test plan is the document used to define each test case. It should be
written before creating the test cases, since this document is used to identify the
number of tests required to fully verify a specific design. Before creating the test
plan, the following information should be collected and listed:
Scoreboard, PHY, PHY protocols, Monitor.
Figure 5.2 displays a detailed plan for verification of MAC receiver stages.
The test cases to check the functionality of the Ethernet Mac Receiver are
planned according to specification as follows:
Check the Behavior of the receiver with the physical layer protocol
conditions like:
1. Carrier sense signal.
2. Receiver data Valid signal.
3. Receiver error.
Check for the Normal Data/Jumbo Data Reception on 8 bit Data Buses
from Phy side made for Mac station.
Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming in to the data frame under several
conditions.
Check the Receiver behavior in case of error in SFD.
Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
Reception of Control frame i.e. Pause Frame.
Reception of VLAN Tagged Frame.
Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
Check the behavior of the receiver in case of too long frame/ too short
length frame (Indication to Host side, Removal of padding bits in short
frame).
Check for Length Error by matching the number of data octets against the
Length field.
Check the Data Alignment Error in different circumstances.
In case of RXRAM Full, Receiver behavior (Discarding the previous Frame
and signals to transmitter to transmit the Pause Frame).
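Several of these checks revolve around slicing the received frame into its fields. A minimal field splitter, assuming an untagged, well-formed frame (the function and its names are hypothetical; the field layout follows IEEE 802.3):

```python
def split_frame(octets: bytes):
    """Split a raw Ethernet frame (preamble and SFD included) into the
    fields the test cases above examine. Sketch: assumes an untagged
    frame with a 7-octet preamble, and does not verify the FCS."""
    assert octets[:7] == b"\x55" * 7, "bad preamble"
    assert octets[7] == 0xD5, "bad SFD"
    return {
        "da": octets[8:14],                              # destination address
        "sa": octets[14:20],                             # source address
        "length_type": int.from_bytes(octets[20:22], "big"),
        "payload": octets[22:-4],
        "fcs": octets[-4:],                              # CRC-32 at the end
    }
```

Checks such as broadcast/multicast recognition (DA field), length-versus-octet-count comparison, and FCS verification all start from a split like this.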
Check the Behavior of the receiver with the physical layer protocol
conditions like:
Check for the Normal Data/Jumbo Data Reception made for Mac station.
Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming in to the data frame under several
conditions.
Check the behavior of the Receiver in case of Error in the SFD for burst of
data/ packet of data.
Check the Burst of Data reception (differentiating the Extension bit from
the Data bit).
Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
Reception of VLAN Tagged Frame.
Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
Check the behavior of the receiver in case of too long frame/ too short
length frame (Indication to Host side, detection of padding bits in short
frame).
Check for Length Error by matching the number of data octets against the
Length field.
Check the Data Alignment Error in different circumstances.
Check the Successful Data reception.
In case of RXRAM Full, Receiver behavior (Missing Packet signal to host
side).
Check the Behavior of the receiver with the physical layer protocol
conditions like:
1 Carrier sense signal.
2 Receiver data Valid signal.
3 Receiver error.
Check for the Normal Data/Jumbo Data Reception on 4 bit Data buses from
Phy Side made for Mac station.
Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming in to the data frame under several
conditions.
Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
Reception of Control frame i.e. Pause Frame.
Reception of VLAN Tagged Frame.
Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
Check the behavior of the receiver in case of too long frame/ too short
length frame (Length Error Indication to Host side, detection of padding
bits in short frame).
Check for Length Error by matching the number of data octets against the
Length field.
Check the Data Alignment Error in different circumstances.
Check the Successful Data reception.
Check the Behavior of the receiver with the physical layer protocol
conditions like:
1 Carrier sense signal.
2 Collision detection
(i) Late Collision Condition.
(ii) Normal Collision under different instances of data Reception.
3 Receiver data Valid signal.
4 Receiver error.
Check for the Normal Data/Jumbo Data Reception made for Mac station.
Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming in to the data frame under several
conditions.
Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
Reception of VLAN Tagged Frame.
Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
Check the behavior of the receiver in case of too long frame/ too short
length frame (Length Error indication to Host side in case of too long
frame, detection of padding bits in short frame).
Check for Length Error by matching the number of data octets against the
Length field.
Check the Data Alignment Error in different circumstances.
Check behavior of Receiver for Promiscuous Mode for the Different Frame
reception.
Check the Successful Data reception.
Check for end of Reception (Receive status).
(Figure labels: receiver mode, promiscuous vs. non-promiscuous state, selected by RCB bit 50.)
Figure 5.5 RXRAM Full condition at the time of reception of frame for
Half/Full Duplex mode
<IP_ROOT>
  rtl - *.v
  synthesis
    fpga
      scripts - *.csh *.tcl *.pl (synthesis scripts)
      run - *.csh (tool directory)
      constraints - *.ucf (synthesis constraints)
      gate - *.v (gate netlist from DC)
      reports - (synthesis reports)
    asic
Statement coverage
Branch coverage
Condition and Expression coverage
Path coverage
Toggle coverage
Triggering coverage
Signal-tracing coverage
FSM coverage
Condition coverage measures whether the test bench has exercised all combinations
of the sub-expressions used in complex branches. If one complete
branching statement follows another branching statement in a sequential block of
HDL code, a series of paths can occur between the blocks; path coverage
measures how many of these combinations were actually executed during the
simulation phase.
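The distinction can be made concrete with a toy metric calculator (a Python sketch over a Boolean predicate; the real tools of course work on HDL, and the example branch is an assumption):

```python
def branch(a, b, c):
    # Example complex branch condition with three sub-expressions.
    return bool(a and (b or c))

def coverage(vectors):
    """Return (branch_cov, condition_cov) achieved by a set of test
    vectors, each as a fraction in [0, 1]. Branch coverage needs both
    outcomes of the whole branch; condition coverage needs all 2**3
    combinations of the sub-expressions a, b, c."""
    outcomes = {branch(*v) for v in vectors}
    branch_cov = len(outcomes) / 2
    condition_cov = len({tuple(map(bool, v)) for v in vectors}) / 2 ** 3
    return branch_cov, condition_cov
```

Two vectors already give 100% branch coverage yet only 25% condition coverage, which is why separate goals are set for the two metrics below.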
Visited state coverage ensures that every state of an FSM has been visited.
Arc coverage ensures that each arc between FSM states has been traversed.
Coverage goals: line coverage 100%, condition coverage 85%, FSM coverage 100%,
toggle coverage 90%.
Coverage achieved: line coverage 100%, condition coverage 90%, FSM coverage 95%,
toggle coverage 90%.
A detailed test plan is made and executed for the different stages of the MAC
Receiver in the MAC Receiver environment. Test cases for GMII and MII, in
half-duplex and full-duplex modes, are performed as per the standards set by
IEEE 802.3. The test cases are planned according to the specification as follows:
check the reset condition of the receiver, check the receiver enable and disable
conditions, check clock reception in GMII/MII, check the transfer from MII to
GMII mode and vice versa, and check the GMII mode. The directory structure for
Ethernet IP verification is planned as per the requirements. Coverage analysis
indicates the efficiency of the methodology: there is hardly any difference
between the set goals and what was achieved. Toggle coverage is not as important
as statement, FSM and condition coverage for behavioral designs.
CHAPTER 6
The physical layer device (PHY) is perhaps the most complex layer in the OSI
architecture. The Ethernet eVC is built with the PHY as a separate eVC and the host
as a task-driven Verilog bus functional model (BFM). This allows the creation of a
virtual host environment using a combination of the Verilog BFM and the eVC.
amount of tests given the complexity of the design, and it enabled us to get
unparalleled coverage with a minimum number of tests and lines of code.
The team size and total design hours are small, and average coverage is around 90%.
The shortfall is greatest in condition coverage and toggle coverage; this is expected,
as not all combinations in a truth table will occur.
The extensibility of the e language, the macro facility and Specman Elite's
built-in generator were key elements that enabled this approach to succeed in a
short duration and with a smaller team effort, compared to doing the same in a
complete Verilog environment.
Two open resources of great help to academia and industry are listed in
Appendices 4 and 5. The free tools are thoroughly evaluated and their performance
is measured with appropriate parameters.
REFERENCES
2. Adrian Evans, Allan Silburt, Gary Vrckovnik, Thane Brown, Mario Dufresne,
Geoffrey Hall, Tung Ho, Ying Liu, (1998) “ Functional Verification of Large
ASICs” -ACM-DAC98 - 06/98 San Francisco, CA USA
3. Ali Habibi, and Sofiène Tahar, (2006) “ Design and Verification of SystemC,
Transaction-Level Models”-IEEE transactions on Very Large Scale Integration
(VLSI) Systems, vol. 14, no. 1
4. Allon Adir, Eli Almog, Laurent Fournier, Eitan Marcus, Michal Rimon,
Michael Vinov, and Avi Ziv, IBM Research Lab, Haifa, (2004) “Genesys-Pro:
Innovations in Test Program Generation for Functional Processor Verification”
- Design & Test of Computers, IEEE ,Volume 21, Issue 2, Page(s): 84 - 93
6. Amir Hekmatpour, Alley, C.; Stempel, B.; Coulter, J.; Salehi, A.; Shafie, A.;
Palenchar, C.,(2005) “A heterogeneous functional verification platform”-
Custom Integrated Circuits Conference, Proceedings of the IEEE 2005 Volume ,
Issue , 18-21 Sept. 2005 Page(s): 63 – 66
8. Ananth Dahan.; Geist, D.; Gluhovsky, L.; Pidan, D.; Shapir, G.; Wolfsthal, Y.;
Benalycherif, L.; Kamidem, R.; Lahbib, Y , (2005) “Combining system level
modeling with assertion based verification”.- Quality of Electronic Design,
ISQED 2005. Sixth International Symposium on Volume , Issue , 21-23 March
2005 Page(s): 310 - 315
11. Aniruddha Baljekar, NXP Semiconductors India Pvt. Ltd., Bangalore, INDIA,
(2006), “Re-Use of Verification Environment for Verification of Memory
Controller”, www.design-reuse.com/articles/18329/verification-memory-
controller.html
12. Anjali Vishwanath, Ranga Kadambi Infineon Technologies Asia Pacific Pte Ltd
Singapore , (2007) “Verification Planning for Core based Designs” -
www.design-reuse.com/articles/16141/verification-planning-for-core-based-
designs.html
14. Bentley, B., Intel Corp., Hillsboro, OR, USA, (2002) “High level validation of
next-generation microprocessors”- High-Level Design Validation and Test
Workshop, 2002. Seventh IEEE International, 27-29 Oct. 2002, 31- 35
15. Bob Bentley, (2001) "Validating the Intel Pentium 4 Microprocessor," dac,
pp.244-248, 38th Conference on Design Automation (DAC'01)
16. Brendan Mullane and Ciaran MacNamee, Circuits and System Research Centre
(CSRC), University of Limerick, Limerick, Ireland, (2006) “Developing a
Reusable IP Platform within a System-on-Chip Design Framework targeted
towards an Academic R&D Environment”: www.design-
reuse.com/ipbasedsocdesign/slides_2006-csrc_01.html
17. Byeong Min, Gwan Choi, (2001) "RTL Functional Verification Using
Excitation and Observation Coverage," hldvt, pp.58, Sixth IEEE International
High-Level Design Validation and Test Workshop (HLDVT'01)
21. Dinos Moundanos, Abraham, J.A. Hoskote, Y.V., Comput. Eng. Res. Center,
Texas Univ., Austin, TX , (2002) “Abstraction techniques for validation
coverage analysis and test generation” - IEEE Transactions on Computers, Jan
1998, Volume: 47, Issue: 1, PP: 2-14, Current Version Published: 2002-08-06
23. Eduard Cerny. Synopsys, Inc. Marlborough, USA , Dmitry Korchemny Intel
Corp , (2007) ”Using SystemVerilog Assertions for Creating Property-Based
Checkers”: www.eda.org/ovl/pages/pdfs/dvcon07_cerny.pdf
24. Eugene Zhang, E. Yogev, E. Cisco Syst. Inc., USA, (1997) “Functional
verification with completely self-checking tests”- Verilog HDL Conference,
IEEE International, 31 Mar-3 Apr 1997,pp: 2-9, Santa Clare, CA, USA, Current
Version Published: 2002-08-06
25. Fallah, F.; Devadas, S.; Keutzer, K., (2001) “OCCOM-efficient computation of
observability-based code coverage metrics for functional verification” -
Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions
on Volume 20, Issue 8, Aug 2001 Page(s):1003 – 1015
26. Falconeri, G.; Naifer, W.; Romdhane, N., (2005) “Common reusable
verification environment for BCA and RTL models Design”- Automation and
Test in Europe, Proceedings, Volume , Issue , 7-11 March 2005 Page(s): 272 -
277 Vol. 3
29. J.P. Grossman, John K. Salmon, C. Richard Ho, Douglas J. Gerardo, (2007) “A
special-purpose machine for molecular dynamics simulation”- International
Symposium on Computer Architecture, Proceedings of the 34th annual
international symposium on Computer architecture, San Diego, California, USA,
SESSION: Special purpose to warehouse computers, Pages: 1 – 12
30. Han Ke; Deng Zhongliang; Shu Qiong, (2007) “Verification of AMBA Bus
Model Using SystemVerilog”- Electronic Measurement and Instruments, 2007.
ICEMI '07. 8th International Conference on, Volume , Issue , Aug. 16
2007-July 18 2007 Page(s):1-776 - 1-780
33. Hao Shen Yuzhuo Fu Sch. of Microelectron., Shanghai Jiao Tong Univ.,
China, (2005) “Priority directed test generation for functional verification using
neural networks”, Design Automation Conference, Proceedings of the ASP-
DAC 2005. Asia and South Pacific, 18-21 Jan. 2005, Volume: 2, On page(s):
1052- 1055 Vol. 2
35. IEEE, (2000) “Ethernet-Mac, CSMA/CD Access method and Physical Layer
specification”: IEEE Std 802.3, 2000 Edition
38. Jae-Gon Lee; Woong Hwangbo; Seonpil Kim; Chong-Min Kyung , (2005)
“Top-down implementation of pipelined AES cipher and its verification with
FPGA based simulation accelerator”, ASIC, ASICON 2005. 6th International
Conference On, Volume 1, Issue , 24-0 Oct. 2005 Page(s):68 - 72
41. Jayanta Bhadra, Magdy S. Abadir, Li-C. Wang, Sandip Ray, (2007) "A Survey
of Hybrid Techniques for Functional Verification," IEEE Design and Test of
Computers, vol. 24, no. 2, pp. 112-122, June 2007
42. Jean Yves Brunel, J.-Y.; Di Natale, M.; Ferrari, A.; Giusto, P.; Lavagno, L.,
( 2004) “SoftContract: an assertion-based software development process that
enables design-by-contract”- Design, Automation and Test in Europe
Conference and Exhibition, Proceedings, Volume 1, Issue , 16-20 Feb. 2004
Page(s): 358 - 363 Vol.1
43. John Penix, Baraona, P.; Alexander, P.,(1995) “Classification and retrieval of
reusable components using semantic features” - Knowledge-Based Software
Engineering Conference, 1995 .Proceedings., 10th Volume , Issue , 12-15 Nov
1995 Page(s):131 – 138
44. John Penix, Alexander, P., System Sciences, (1998) “Using formal
specifications for component retrieval and reuse”- Proceedings of the Thirty-
First Hawaii International Conference on, Volume 3, Issue , 1998 Page(s):356 -
365 vol.3
46. Kai-Hui Chang, Yu-Chi Su, Wei-Ting Tu, Yi-Jong Yeh, and Sy-Yen Kuo,
Department of Electrical Engineering, National Taiwan University, Taipei,
Taiwan, (2003) “A PCI-X Verification Environment Using C and Verilog”-
VLSI Design/CAD Symposium, Taiwan,
www.eecs.umich.edu/~changkh/publication/pcisim.pdf
47. Kambiz Khalilian, Stephen Brain, Richard Tuck, Glenn Farrall, Infineon
Technologies, (2002) “Reusable Verification Infrastructure for A Processor
Platform to deliver fast SOC development “- www.design-reuse.com/
48. Kausik Datta, Das, P.P., Interra Syst. Private Ltd., xx, India, (2004), Assertion
based verification using HDVL- VLSI Design, 2004. Proceedings. 17th
International Conference , On page(s): 319- 325
49. Kenneth M. Butler, Kapur, R. Mercer, M.R. Ross, D.E., Texas Instruments,
Dallas, TX , (2006) “The roles of controllability and observability in design for
test”- VLSI Test Symposium, 1992. '10th Anniversary. Design, Test and
Application: ASICs and Systems-on-a-Chip', Digest of Papers., 1992 IEEE, 7-9
Apr 1992, On page(s): 211-216, Current Version Published: 2002-08-06
51. Laurent Fournier, Arbetman, Y. Levinger, M., Res. Lab., IBM Israel Sci. &
Technol. Center, Haifa, (1999) “ Functional verification methodology for
microprocessors using the Genesys test-program generator. Application to the
x86 microprocessors family” - Design, Automation and Test in Europe
Conference and Exhibition 1999. Proceedings, On page(s): 434-441
52. Lukai Cai, Gajski, D., Center for Embedded Comput. Syst., California Univ.,
Irvine, CA, USA, (2003) “Transaction level modeling: an overview”-
Hardware/Software Codesign and System Synthesis, 2003. First
53. Marines Puig-Medina, Ezer, G. Konas, P., Tensilica, Inc, (2000) “Verification
of configurable processor cores”- Design Automation Conference, 2000.
Proceedings 2000. 37th, 2000, On page(s): 426-431
54. Mark Litterick, Verilab, (2005) “Using SystemVerilog Assertions for Functional
Coverage”: www.verilab.com/files/dac2005_mal_sva_cov_paper_a4.pdf
56. Michael Keating, Pierre Bricaud, (1999) “Reuse Methodology Manual for
System-On-A-Chip Designs”, Kluwer Academic Publishers
57. Michael Stuart and D. Dempster. Verification Methodology Manual for Code
Coverage in HDL Designs. Teamwork International, Hampshire, UK, 2000.
62. Morel, B.; Alexander, P., (2004) “SPARTACAS: automating component reuse
and adaptation”- Software Engineering, IEEE Transactions on, Volume 30,
Issue 9, Sept. 2004 Page(s): 587 – 600
63. Narcizo Sabbatini, Jr. Brochi, A.M. Nunes, T.I., Motorola Semicond.
Products Sector, Jaguariuna, (2002) “Reuse issues on the verification of
embedded MCU cores”- Devices, Circuits and Systems, 2002. Proceedings of
the Fourth IEEE International Caracas Conference on, PP: C012-1- C012-6
65. Nihar Shah, Sasan Iman, Santa Clara, CA, (2006) “Verification Plan Reuse.
Extending Verification Reuse to Verification Plan Definition and Verification
Environment Implementation”. Session # 1.10, CDN Live! 2006,
www.cdnusers.org/Portals/0/cdnlive/na2006/1.10/1.10_paper.pdf
66. Noppanunt Utamaphethai, Blanton, R.D. Shen, J.P., Dept. of Electr. &
Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, (2002) “Relating buffer-
oriented microarchitecture validation to high-level pipeline functionality” - High-
Level Design Validation and Test Workshop, 2001. Proceedings. Sixth IEEE
International, 2001, On page(s): 3-8, Current Version Published: 2002-08-06
67. Oded Lachish, Eitan Marcus, Shmuel Ur, Avi Ziv, (2002) "Hole Analysis for
Functional Coverage Data," dac, pp.807, 39th Design Automation Conference
(DAC'02)
68. Olaf Luthje. CoWare, Inc. Aachen, Germany, (2004) “A Methodology for
Automated Test Generation for LISA. Processor Models”:
www.coware.com/PDF/SASIMI2004.PDF
71. Prabhat Mishra, Dutt, N.; Krishnamurthy, N.; Ababir, M.S., (2004) “A top-
down methodology for microprocessor validation”- Design & Test of
Computers, IEEE, Volume 21, Issue 2, Mar-Apr 2004 Page(s): 122 – 131
72. Prabhat Mishra, Dutt, N., (2002) “Automatic functional test program generation
for pipelined processors using model checking”- High-Level Design Validation
and Test Workshop. Seventh IEEE International, Volume , Issue , 27-29 Oct.
2002 Page(s): 99 – 103
73. Prabhat Mishra, Dutt, N., (2005) “Functional coverage driven test generation for
validation of pipelined processors”- Design, Automation and Test in Europe.
Proceedings, Volume , Issue , 7-11 March 2005 Page(s): 678 - 683 Vol. 2
74. Prabhat Mishra, Dutt, N., Center for Embedded Comput. Syst., California
Univ., Irvine, CA, USA, (2004) “Functional validation of programmable
architectures”-Digital System Design, 2004. DSD 2004. Euromicro Symposium
on, 31 Aug.-3 Sept., On page(s): 12- 19
75. Raghuram S.Tupuri, R.S.; Krishnamachary, A.; Abraham, J.A., (1999) “Test
generation for gigahertz processors using an automatic functional constraint
extractor”- Design Automation Conference 1999. Proceedings. 36th Volume ,
Issue , Page(s):647 – 652
77. Rebeca P. Díaz Redondo, José J. Pazos Arias, (2001) "Reuse of Verification
Efforts and Incomplete Specifications in a Formalized, Iterative and Incremental
Software Process," icse, pp.0801, 23rd International Conference on Software
Engineering (ICSE'01).
78. Rindert Schutten and Tom Fitzpatrick, Senior Technical Marketing Manager at
Synopsys, (2003) “Design for verification methodology allows silicon
success”: www.eetimes.com
79. Rohit Kapur, Keller, B.; Koenemann, B.; Lousberg, M.; Reuter, P.; Taylor, T.;
Varma, P., (1999) “P1500-CTL: Towards a Standard Core Test Language”-
VLSI Test Symposium, 1999. Proceedings. 17th IEEE, Volume , Issue , 1999
Page(s):489 – 490
80. Rui Wang; Wenfa Zhan; Guisheng Jiang; Minglun Gao; Su Zhang, Computer
Supported Cooperative Work in Design, (2004) “Reuse issues in SoC
verification platform”- 2004 Proceedings. The 8th International Conference on,
Volume 2, Issue, 26-28 May 2004 Page(s): 685 - 688 Vol.2
81. Samir Palnitkar, (2004) “Design Verification with e” , Prentice Hall PTR
82. Sang-Heon Lee Jae-Gon Lee Seonpil Kim Woong Hwangbo Chong-Min
Kyung, Dept. of Electr. Eng., KAIST, Daejeon, South Korea , (2005) “SoC
design environment with automated configurable bus generation for rapid
prototyping”- ASIC, 2005. ASICON 2005. 6th International Conference On, 24-
27 Oct. 2005, Volume: 1, On page(s): 41- 45
84. Scott Taylor, Quinn, M.; Brown, D.; Dohm, N.; Hildebrandt, S.; Huggins, J.;
Famey, C., (1998) “Functional verification of a multiple-issue, out-of-order,
superscalar Alpha processor-the DEC Alpha 21264 microprocessor”-Design
Automation Conference, 1998. Proceedings, Volume, Issue, 15-19 Jun 1998
Page(s): 638 – 643
85. Sigal Asaf, Marcus, E.; Ziv, A., (2004) “Defining coverage views to improve
functional coverage analysis”- Design Automation Conference, 2004.
Proceedings. 41st Volume , Issue , 2004 Page(s): 41 – 44
86. Shai Fine, Avi Ziv, (2003) "Coverage Directed Test Generation for Functional
Verification using Bayesian Networks," dac, pp.286, 40th Design Automation
Conference (DAC'03)
87. G. S. Spirakis, (2004) “Designing for 65nm and Beyond”, Keynote Address at
Design Automation and Test in Europe (DATE), 2004.
89. St. Pierre, M. Yang, S.-W. Cassiday, D., Thinking Machines Corp.,
Cambridge, MA, (2002) “Functional VLSI design verification methodology for
the CM-5 massively parallel supercomputer” - Computer Design: VLSI in
Computers and Processors, 1992. ICCD '92. Proceedings., IEEE 1992
International Conference on, Publication Date: 11-14 Oct 1992, On page(s):
430-435, Current Version Published: 2002-08-06
97. Tom Schubert, DPG CPU Design Validation, Intel Corp., Hillsboro, OR, USA,
(2003) “High-level formal verification of next-generation microprocessors”;
Design Automation Conference, 2003. Proceedings, 2-6 June 2003, On page(s):
1- 6
100. Warren H. Debany, Jr. Gorniak, M.J. Macera, A.R. Kwiat, K.A.
Dussault, H.B. Daskiewich, D.E., RL/ERDA, Griffiss AFB, NY, (1991).
“Shortening the Path from Specification to Prototype”, Rapid System
103. Zhang Yuhong He Lenian Xu Zhihan Yan Xiaolang Wang Leyu , Inst. of
Digital Technol. & Instrum., Zhejiang Univ., Hangzhou, China, (2003) “A
system verification environment for mixed-signal SOC design based on IP
bus”- ASIC, 2003. Proceedings, 5th International Conference on, 21-24 Oct.
2003, Volume: 1, on page(s): 278- 281 Vol.1