DESIGN AND VERIFICATION OF REUSABLE
VERIFICATION ENVIRONMENT WITH
WRAPPERS FOR ETHERNET IP CORE AND OPEN
RESOURCES

A THESIS

SUBMITTED BY
L.SWARNA JYOTHI
FOR THE AWARD OF THE DEGREE
OF
DOCTOR OF PHILOSOPHY

DEPARTMENT OF ELECTRONICS AND


COMMUNICATIONS ENGINEERING
Dr. MGR
EDUCATIONAL AND RESEARCH INSTITUTE UNIVERSITY

(Declared U/S of the UGC Act, 1956)

CHENNAI 600095

JUNE 2009

BONAFIDE CERTIFICATE

Certified that this thesis titled “Design and Verification of Reusable Verification Environment with Wrappers for Ethernet IP Core and Open Resources” is the bonafide work of Mrs. L. SWARNA JYOTHI, who carried out the research under my supervision. Certified further that, to the best of my knowledge, the work reported here does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

DECLARATION

I declare that the thesis entitled “Design and Verification of Reusable Verification Environment with Wrappers for Ethernet IP Core and Open Resources” submitted by me for the degree of Doctor of Philosophy is the record of work carried out by me during the period from February 2005 to March 2009, under the guidance of Dr. T. Jayasingh, and that, to the best of my knowledge, this work has not formed the basis for the award of any other degree from any other university or other similar institution of higher learning.

ACKNOWLEDGEMENTS

I would like to express my gratitude and sincere thanks to my advisor, Dr. T. Jayasingh, whose constant support, guidance and encouragement have enabled me to do this work. I sincerely thank Dr. S. Ravi, Head of Electronics and Communications, Dr. MGR Educational and Research Institute, for his constant encouragement, guidance and help in reviewing my work throughout. I also thank Dr. P. Nirmal Kumar for accepting the request to become an expert for the dissertation committee.

This research has been funded by the All India Council for Technical Education (AICTE), under the Research Promotion Scheme (RPS). I would like to thank AICTE for the support. I thank the Management of JSS Academy of Technical Education for providing the facilities. I would also like to thank Dr. MGR University for providing the opportunity to do my doctoral work.

I would like to thank Mr. Harish for the help he has extended by becoming my co-author for the key papers and for his technical support. I also thank Mr. Ajit and Mr. Janardhan for making this work happen. I would like to thank my mother, my daughter and my family for their emotional support.

L.SWARNA JYOTHI

TABLE OF CONTENTS

Bonafide Certificate ……………………………………………………………...……ii


Declaration…………………………………………………………………………….iii
Abstract ………………………………………………...……………………………..iv
Acknowledgements…………………………………………...………………………vii
Table of Contents…………………………………………………...………………..viii
List of Tables………………………………...……………………………………….xiii
List of Figures…………………………………………………..………………….....xiv
List of Abbreviations……………………………………………….……………...…..xv

Chapters
1. Introduction………………………………………………………….……….…...….1
1.1 Functional Verification…………………………………..……….…….………..1
1.2 Verification Reuse………………………………..…………...….……….….…..4
1.3 Objectives…………………………………………….…………....……….….....6
1.4 Motivation………………………………………………………..….…..…..…...7
1.5 Overview of the thesis…………………………………...………….…..…….….9

2. Literature Reviews………………………………………………....…....…….……10
2.1 Functional Verification…………………...…………………….………..…10
2.1.1 Simulation, Emulation and FPGA Prototyping…………….….………....11
2.1.2 Formal verification……………………………………..………...………13
2.1.3 Test Generation……………………………….………………...………..14
2.1.4 Transaction level modeling………………………………...……...……..16

2.1.5 Assertion-Based Verification……………….………………...………….17


2.1.6 Validation……………………...……………..…………………………..19
2.1.7 Bus Functional Model……………………………………..………….…21
2.1.8 Coverage-Driven Verification…………………..………...……………..22
2.2 Verification Planning, Reuse and Environment Reuse………….…………..…24
2.3 Inference from reported works…………………..…………………………….30

3. Introduction to Verification…………………………...……………………………33
3.1 Issues in Verification…………………………………………..………………33
3.2 Issues in Verification Methodologies………………………………………….34
3.3 Test Methodologies……………………………………………………………35
3.3.1 Deterministic…………………………….………………………………35
3.3.2 Pre-run Generation………………………………………………………35
3.3.3 Checking Strategies……………………………...………………………36
3.3.4 Coverage Metrics………………………………………………………...37
3.4 Types of Checking……………………………………………………………..38
3.5 Issues in Traditional Verification Methodology…………………………...…..39
3.5.1 Productivity Issues……………………………...………………………..39
3.5.2 Requirement for productivity improvement……………………….….….39
3.5.3 Quality Issues…………………………………...………………….…….39
3.5.4 Requirement for quality improvement…………………….…….……….39
3.5.5 Task-based strategy……………………………...…………….…………40
3.6 Verification methodology………………………………………………………40
3.7 A Verification Environment for Verifying Ethernet Packet in Ethernet IP Core….…41
3.7.1 Management Data Input/ Output………………………………….………42
3.7.2 Verification Strategy……………………………..……………….………43
3.8 Chapter conclusion………………………………………………………………46

4. Introduction to Verification Reuse…………………………………………………47


4.1 Verification IP prerequisites…………………………………..………………..47
4.2 Physical Layer Device………………………………………...………………..48
4.3 Cyclic Redundancy Check -Reusable Verilog task………………….…………51
4.3.1 MAC Frame CRC Errors………………………………...……………….52
4.3.2 Protocol CRC Checksums………………………………….…….……….53
4.3.3 Calculating CRC……………………………………………..…………...53
4.3.4 CRC Algorithm…………………….……………………………….…….54
4.4 Wrappers…………………………………………………………………….….55
4.4.1 Host Interface Block (HIB)…………………….…...……………………55
4.4.2 HIB Test Bench Algorithm…………………...…………………….……57
4.4.3 Processor Interface Block (PIB)..……………..…………….……………60
4.4.4 PIB Test Bench Algorithm…………………………...….……………….62
4.5 Chapter Conclusion…………………………………………….….……………63

5. Introduction to Test Plan………………………………………….………………...64


5.1 Test Plan for Ethernet MAC Receiver………………………………………….65
5.2 Test Cases………………………………………………….……………………67
5.2.1 Check for GMII Mode…………………………………………………...67
5.2.1.1 Full Duplex Mode……………………………………………….67
5.2.1.2 Half Duplex Mode………………………………………………68
5.2.2 Check for MII Mode……………………………………………………..70
5.2.2.1 Full Duplex Mode………………………………………………70
5.2.2.2 Half Duplex Mode………………………………………….…...71
5.3 Test Case Details…………………………………………………………….….72
5.4 Directory Structure for Ethernet IP Verification……………………….……… 74
5.5 Coverage Analysis……………………………………………………………...76
5.5.1 Coverage Goals………………………………...………….……….……..78

5.5.2 Coverage Tools/Methodology……………………...……….…….………78


5.5.3 Coverage Obtained……………………………………...….……………..78
5.5.4 Coverage Analysis………………………………………...………….…..79
5.6 Chapter Conclusion………………………….………………………………… 79

6. Contributions and Future work………………………………………………..……80

Appendices

1. PHY Code…………………………………………………………………….....….83

2. Wrappers Signal Details and Coverage Details………………………………...…..94

3. Cyclic Redundancy Check………………………………………………….……..100

4. Bug File……………………………………………………………….…………...104

References………………………………………………………………….………...105

Publication Details………………………………..……………………….…………117

Vitae

LIST OF FIGURES

Figures Page No.

1. Figure 3.1 Data Checking 38
2. Figure 3.2 MDIO Overview 42
3. Figure 3.3 Verification Environment for Media Access Control (MAC) DUT 44
4. Figure 4.1 PHY Structure 49
5. Figure 4.2 Overview of the Ethernet e Verification Component 50
6. Figure 4.3 Block diagram of Host interface block 56
7. Figure 4.4 Block diagram of Processor interface block 61
8. Figure 4.5 Processor interface block I/O signals 62
9. Figure 5.1 Receiver Environment 65
10. Figure 5.2 Different Stages of Receiver Verification 66
11. Figure 5.3 Modes of Receiver 72
12. Figure 5.4 MII/GMII (Full/Duplex) at different speeds 73
13. Figure 5.5 RXRAM Full condition at the time of reception of frame for Half/Full Duplex mode 73
14. Figure 5.6 Collision condition 73
15. Figure 5.7 Rx Error from Phy side 74

LIST OF TABLES

Table Page No.

1. Table A 2.1 Host Interface Block – Signal description………......….96, 97
2. Table A 2.2 Processor Interface Block – Signal description…….…..97, 98
3. Table A 2.3 Line coverage………………………………………..….……98
4. Table A 2.4 Toggle coverage…………………………………….............99
5. Table A 2.5 FSM coverage…………………………………………..…...99
6. Table A 2.6 Conditional coverage………………………………….….….99
7. Table A 3.1 CRC-32 Register (LSW) after 16 Shifts with Xi Substitution…100
8. Table A 3.2 CRC-32 Register (MSW) after 16 Shifts…………….…….101
9. Table A 3.3 CRC Window contents after link synchronization…………103

LIST OF ABBREVIATIONS

ABV- Assertion Based Verification


ADL- Architectural Description Language
AES- Advanced Encryption Standard
AHB - Advanced High Performance Bus
AMBA- Advanced Microcontroller Bus Architecture
ASIC- Application Specific Integrated Circuit
RDS - Random Dynamic Simulation
ATPG- Automatic Test Pattern Generation
BDD- Binary Decision Diagram
BFM- Bus Functional Module
CAN – Controller Area Network
CDG- Coverage Directed Test Generation
CFI – Canonical Format Identifier
CPU – Central Processing Unit
CRC – Cyclic Redundancy Check
CSMA/CD – Carrier Sense Multiple Access/ Collision Detection
CTL- Core Test Description Language
DA – Destination Address
DEMUX – Demultiplexer
DFV- Design for Verification
DMA – Direct Memory Access
DUT- Design Under Test
DV- Design and Verification
eRM - e Reusable Methodology

eVC- e Verification Component


EOF – End Of Frame
FCS – Frame Check Sequence
FPGA-Field Programmable Gate Array
FSM- Finite State Machine
FV- Formal Verification
GMII – Gigabit Media Independent Interface
HDL- Hardware Description Language
HIB – Host Interface Block
HVL- Hardware Verification Language
IBM- International Business Machines
IFG – Interframe Gap
IP- Intellectual Property
IPG – Inter Packet Gap
ISS – Instruction Set Simulator
LSB – Least Significant Bit
MAC- Media Access Control
MDIO- Management Data Input Output
MII- Media Independent Interface
MMV- Model Based test generation for Microprocessor Validation
MUX – Multiplexer
NRE – Non-Recurring Engineering
OV- Open Vera
OVM- Open Verification Methodology
PCI- Peripheral Component Interconnect
PDG- Priority Directed Test Generation
PHY- Physical Layer Device
PIB – Processor Interface Block

PISO – Parallel in Serial Out


PSL- Property Specification Language
RAM – Random Access Memory
RTL- Register Transfer Level
SA – Source Address
SFD – Start Frame Delimiter
SIPO – Serial in Parallel Out
SRTL - Structural Register Transfer Level
SOC- System On Chip
SV- System Verilog
SVA- System Verilog Assertions
SVM- System Verilog Methodology
RIF – Routing Information Field
TIP- Tricore Microprocessor Core
TLM- Transaction Level Model
VC- Virtual Component
VCI- Verification Collaborative Infrastructure
VCM – VCS Coverage Metrics
VID – Virtual LAN Identifier
VIP- Verification Intellectual Property
VLAN- Virtual Local Area Network
VPA- Verification Process Automation
UML- Unified Modeling Language

ABSTRACT

A study by Collett International Research revealed that first silicon success


rate has fallen to 39 percent from about 50 percent. With the total cost of re-spins
costing hundreds of thousands of dollars and requiring months of additional
development time, companies that are able to curb this trend have a huge
advantage over their competitors, both in terms of the ensuing reduction in
engineering cost and the business advantage of being to market sooner with high-
quality products.

Design reuse and verification reuse are important to satisfy time-to-market requirements. Designers must be able to reuse Intellectual Property in the design as a golden model. Reuse of the verification environment across different designs in a domain further reduces time to market and improves total design verification quality. This research focuses on reducing time to market in the chip design-verification industry and facilitates delivery of open resources, compatible with industry requirements, for further research. This work demonstrates an organized approach to research in design reuse, verification reuse and design for verification.

The Physical Layer is a fundamental layer upon which all higher-level functions in a network are based. However, due to the plethora of available hardware technologies with widely varying characteristics, this is perhaps the most complex layer in the OSI architecture. The implementation of this layer is often termed the Physical Layer Device (PHY). A PHY chip is commonly found on Ethernet devices. Its purpose is to provide digital access to the modulated link and an interface to the Ethernet Media Access Control (MAC) using the Media Independent Interface (MII). This work demonstrates reuse of the design environment with reference to verifying the Ethernet packet in an Ethernet Intellectual Property (IP) core. Design reuse is achieved through Verilog tasks which are used in the Specman environment. The Ethernet PHY e Verification Component (eVC) is an in-house development. The Ethernet eVC is built with the PHY as a separate eVC and the host as a task-driven Verilog Bus Functional Model (BFM). This allows the creation of a virtual host environment using a combination of the Verilog BFM and the eVC.

Verification environment reuse for different applications with different interfaces is achieved by developing a wrapper around the Design Under Test (DUT) interface and then interfacing it to the environment. A detailed test plan is made to perform a complete and exhaustive test of the Ethernet MAC Receiver. Coverage goals, coverage analysis and the coverage obtained indicate the efficiency of the verification methodology.

The wrappers can be used to integrate the IP core in another design when the interfaces of the IP core differ from the interfaces in the current design. A wrapper is an interfacing component which provides the necessary logic to attach a user-specified custom IP to a bus in an architecture. Wrappers can also be used in verification when the verification environment and the DUT have altogether different types of interfaces. This work demonstrates the design, development and verification of transaction-level wrappers, with reference to the wrapper interfaces between the Ethernet MAC and the PCI device, and the Processor Interface Block between an 8-bit processor/microcontroller and the DUT core.

The present approach was able to meet all the stringent requirements related to the verification of a complex system. The extensibility of the e language, the macro facility and the features of Specman Elite's built-in generator were some of the novel elements that enabled this approach to succeed in a short duration. It also requires a smaller team effort compared to doing the same entirely in a Verilog environment.

Key words: Ethernet MAC Receiver, Verification reuse, Wrappers, ‘e’


Verification component, Media Independent Interface, Physical layer device,
Ethernet IP Core

CHAPTER 1

INTRODUCTION

With design complexity on the rise, functional verification coverage is one of the growing challenges faced by design and verification teams. Reducing verification time without compromising the quality of verification is the greatest challenge for verification engineers. Improving the verification process is critical to improving time to market. To achieve this, reuse of the verification environment across different levels is the way forward. Reuse of the verification environment can be achieved at the following levels:

 Reuse with different IPs.


 Reuse at different levels of integration.
 Reuse at different levels of abstraction.

1.1 FUNCTIONAL VERIFICATION

Functional verification is paramount within the hardware design cycle. With so many new techniques available today to help with functional verification (FV), selecting a technique is not straightforward and is therefore often confusing. Hardware complexity growth continues to follow Moore's law (Graham Pullan 2009 [27]). Design productivity growth remains low compared to complexity growth. Verification complexity is even more challenging: it theoretically rises exponentially as hardware complexity doubles with time (Michael Stuart 2000 [57]). Up to 70% of design development time and resources are spent in FV. Recent studies highlight the challenges of FV: one study estimates that by 2007 a complex SoC would need 2000 engineer-years to write 25 million lines of Register Transfer Level (RTL) code and one trillion simulation vectors for functional verification (Spirakis 2004 [87]).

Pre-silicon logic bugs of the order of 20k-30k are designed into the next generation, and 100k into the subsequent generation (Tom Schubert, Intel, DAC 2003 [96]). In the face of shrinking time-to-market, the amount of validation effort rapidly becomes intractable and significantly impacts the product schedule, with the additional risk of shipping products with undetected bugs.

Increased design complexity brings about Random Dynamic Simulation (RDS) to maximize the functional space that can be covered. To solve the unbounded problems that can occur in RDS, the EDA industry has introduced languages such as Open Vera (OV), Specman ‘e’ and System Verilog (SV). These introduced concepts such as constrained-random stimulus, random stimulus distribution and reactive test benches. The new verification languages and tools increase productivity by decreasing design verification time. Test scenarios can be written at the highest level of abstraction and can be extended to any lower level of abstraction.
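The idea of constrained-random stimulus described above can be sketched in a few lines. This is an illustrative Python model, not the thesis's Specman/e or Vera code; the frame-length constraints and the bucket weights are hypothetical choices made for the example.

```python
import random

# Ethernet frame-length bounds used as the generation constraint.
MIN_FRAME = 64      # minimum frame size in bytes
MAX_FRAME = 1518    # maximum untagged frame size in bytes

def gen_frame_length(rng):
    """Pick a legal frame length, with corner cases weighted heavily."""
    bucket = rng.choices(["min", "max", "mid"], weights=[30, 30, 40])[0]
    if bucket == "min":
        return MIN_FRAME
    if bucket == "max":
        return MAX_FRAME
    return rng.randint(MIN_FRAME + 1, MAX_FRAME - 1)

def gen_stimulus(n, seed=0):
    """Seeded generation, so a failing regression can be reproduced."""
    rng = random.Random(seed)
    return [gen_frame_length(rng) for _ in range(n)]

lengths = gen_stimulus(1000)
```

Every generated value satisfies the constraint, yet the weighted distribution guarantees the boundary lengths appear far more often than uniform randomness would give, which is exactly the productivity gain the constrained-random languages provide.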

The issues to be addressed during any verification activity are: capturing all functional features of the design; compliance with all protocols; coverage of all possible corner cases; and checking for locking states in the Finite State Machines (FSMs). In addition, one must check whether the design works from any random state, eliminate redundant tests to save unnecessary simulation cycles and cost, perform equivalence checks against the reference design, ensure a methodology is available for verifying each design modification, and generate data patterns for directed or random tests. Ease of generating a flexible and modular test environment, reusability and readability of the test environment, and handling of simulation time in an automated test environment are some other issues.

Design complexity and size make version control and tracking of the design and verification process difficult, both at the specification and functional levels, which can often lead to architectural-level bugs that require enormous effort to debug. Debugging is always a problem, especially when bugs occur in unpredictable places. Even the most comprehensive functional test plan can completely miss bugs generated by obscure functional combinations or ambiguous interpretations of the specification. This is why so many bugs are found in emulation, or after first silicon is produced. Without the ability to make the specification itself executable, there is really no way to ensure comprehensive functional coverage of the entire design intent.

The relative inefficiency with which today's verification environments


accommodate midstream specification changes also poses a serious problem. Since
most verification environments are an ad hoc collection of HDL code, C code, a
variety of legacy software, and newly acquired point tools, a single change in the
design can force a ripple of required changes throughout the environment, eating up
time and adding substantial risk.

Perhaps the most important problem faced by design and verification engineers is the lack of effective metrics to measure the progress of verification. Indirect metrics, such as toggle testing or code coverage, indicate whether all the flip-flops toggled or all lines of code were executed, but they give no indication of what functionality was verified. For example, they do not indicate whether a processor executed all possible combinations of consecutive instructions. There is simply no correspondence between any of these metrics and coverage of the functional test plan. As a result, the verification engineer is never really sure whether a sufficient amount of verification has been performed.
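The gap between code coverage and functional coverage can be made concrete with a tiny model. The sketch below is hypothetical Python, not tool output: it crosses two operating dimensions that do appear later in the thesis's test plan (MII/GMII mode, half/full duplex) and shows that a test set can execute every line of a model while still leaving half of the functional cross unexercised.

```python
from itertools import product

modes = ["MII", "GMII"]     # interface modes
duplex = ["half", "full"]   # duplex settings

# The full functional space is the cross of every mode with every duplex.
cover_space = set(product(modes, duplex))

def functional_coverage(observed):
    """Fraction of the (mode, duplex) cross actually hit by the tests."""
    return len(set(observed) & cover_space) / len(cover_space)

# A run that exercises both modes (likely 100% line coverage of a simple
# model) but only full duplex: half the functional cross is never tested.
observed = [("MII", "full"), ("GMII", "full")]
score = functional_coverage(observed)
print(score)  # 0.5
```

This is the sense in which a functional test plan, not line or toggle counts, must define "done": only a metric built from the plan's own feature cross can report what was actually verified.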

Test methodologies such as deterministic testing, pre-run generation, checking strategies and coverage metrics have their own drawbacks. The issues in the traditional Verilog verification method are that it may require tens of thousands of lines of verification code, that design specification changes cause major verification delays, and that all identified tests in the test plan must be implemented within the project schedule. The verification environment must be created and maintained efficiently. The engineer should spend more time on higher-level details: providing simulation goals, analyzing errors reported by checkers, and providing more direction when goals are not being met. Verification complexity makes it a challenge to think of all possible failure scenarios, and the traditional method does not provide a way to try scenarios beyond the expected failure scenarios.

The requirements for quality improvement are to increase confidence in the ratio of identified bugs and to have an automatic way of knowing what has been tested. To improve test-writing productivity, a higher level of abstraction for specifying the vector stream is used, and higher-level tasks in HDL or C are created. The task-based strategy has limitations: the test-writing effort is still high, the high-level intent is not readily apparent, and the designer must select many parameter values manually.
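The "higher-level task" idea can be illustrated with a short sketch. In a real flow this would be a Verilog task or C routine; the Python below is only a model, and the vector fields and helper names (`drive_byte`, `send_frame`) are hypothetical. One high-level call expands into the many low-level vectors the test writer would otherwise hand-code.

```python
def drive_byte(vectors, byte):
    """Low-level step: append one data-bus vector for a single byte."""
    vectors.append({"data": byte, "valid": 1})

def send_frame(vectors, payload, preamble_len=7):
    """High-level task: one call emits preamble, SFD and payload vectors."""
    for _ in range(preamble_len):
        drive_byte(vectors, 0x55)   # Ethernet preamble byte
    drive_byte(vectors, 0xD5)       # start frame delimiter (SFD)
    for b in payload:
        drive_byte(vectors, b)

vectors = []
send_frame(vectors, [0xDE, 0xAD])   # one call -> 10 low-level vectors
```

The limitation the text notes is visible even here: the caller must still pick parameters such as `preamble_len` and the payload bytes manually, and nothing in the vector list records the high-level intent behind them.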

1.2 VERIFICATION REUSE

Design reuse and verification reuse are important to satisfy time-to-market requirements. Designers must be able to reuse Intellectual Property in the design as a golden model. Reuse of the verification environment across different designs in a domain further reduces time to market and improves total design verification quality. This research focuses on reducing time to market in the chip design-verification industry and facilitates delivery of open resources, compatible with industry requirements, for further research. This work demonstrates an organized approach to research in design reuse, verification reuse and design for verification.

The Physical Layer is a fundamental layer upon which all higher-level functions in a network are based. However, due to the plethora of available hardware technologies with widely varying characteristics, this is perhaps the most complex layer in the OSI architecture. The implementation of this layer is often termed the Physical Layer Device (PHY). A PHY chip is commonly found on Ethernet devices. Its purpose is to provide digital access to the modulated link and an interface to the Ethernet Media Access Control (MAC) using the Media Independent Interface (MII). This work demonstrates reuse of the design environment with reference to verifying the Ethernet packet in an Ethernet Intellectual Property (IP) core. Design reuse is achieved through Verilog tasks which are used in the Specman environment. The Ethernet PHY e Verification Component (eVC) is an in-house development. The Ethernet eVC is built with the PHY as a separate eVC and the host as a task-driven Verilog Bus Functional Model (BFM). This allowed the creation of a virtual host environment using a combination of the Verilog BFM and the eVC. Verification environment reuse for different applications with different interfaces is achieved by developing a wrapper around the Design Under Test (DUT) interface and then interfacing it to the environment. A detailed test plan is made to perform a complete and exhaustive test of the Ethernet MAC Receiver. Coverage goals, coverage analysis and the coverage obtained indicate the efficiency of the verification methodology.

The wrappers can be used to integrate the IP core in another design when the interfaces of the IP core differ from the interfaces in the current design. A wrapper is an interfacing component which provides the necessary logic to attach a user-specified custom IP to a bus in an architecture. Wrappers can also be used in verification when the verification environment and the DUT have altogether different types of interfaces. This work demonstrates the design, development and verification of transaction-level wrappers, with reference to the wrapper interfaces between the Ethernet MAC and the PCI device, and the Processor Interface Block between an 8-bit processor/microcontroller and the DUT core.

1.3 OBJECTIVES

The objective of this research is to find new methodologies to curb the increase in verification effort caused by growing design complexity and size. This work suggests improved approaches that overcome the limitations of existing methods, design languages and approaches. Ethernet-related applications are important in the telecommunication industry. Applications require better precision, cost effectiveness and lower time to market. About 70% of the design cycle time is spent on verification. Hence an efficient method for the design, development and verification of a reusable verification environment for verifying the Ethernet packet in an Ethernet IP core is presented in this thesis. For SoCs it is observed that most of the peripherals are reused from the previous design step with some modifications to the feature set. Verification reuse methodology is therefore very critical for these systems. Reuse of IP across different designs in a domain further reduces time to market and improves total productivity and quality. Design and verification of wrappers for the host interface and the processor interface are presented here. Development of open resources for research in design and verification of complex chip designs is also proposed.

1.4 MOTIVATION

In a series of studies, Collett International Research has shown that the first-silicon success rate for ASICs fell from 48% in 2000 to 39% in 2002 and to 34% in 2003. Forty percent of designs required more than one re-spin, as explained by (Rindert Schutten 2003 [78], Jack Horgan 2004 [104]). With the total cost of re-spins running to hundreds of thousands of dollars and requiring months of additional development time, companies that are able to curb this trend have a huge advantage over their competitors, both in terms of the ensuing reduction in engineering cost and the business advantage of getting to market sooner with high-quality products. Chips fail for many reasons, ranging from physical effects like IR drop, to mixed-signal issues, power issues, and logic/functional flaws. However, logic/functional flaws are the biggest cause of flawed silicon. Of all tapeouts that required a silicon re-spin, the Collett International Research study shows that more than 60 percent contained logic or functional flaws.

The data that Collett International Research collected from North American design engineers has revealed the top three categories of functional flaws: design errors (82%), specification errors (47%), and 14%.

Furthermore, assertions facilitate design reuse through self-checking code. In assertion-based verification, RTL assertions are used to capture design intent in a verifiable form as the design is created, providing portable monitors that check for correct behavior. ABV reduces the verification effort by 50%, as per IBM research.
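The "portable monitor" idea behind assertion-based verification can be modeled compactly. The thesis works with RTL assertions; the Python sketch below only mirrors the concept, and both the protocol rule and the signal names (`req`/`gnt`) are hypothetical: every request must be granted within a bounded number of cycles, or the assertion fires.

```python
def check_req_gnt(trace, max_latency=3):
    """Assertion-style checker over a cycle trace.

    trace: list of (req, gnt) booleans, one tuple per clock cycle.
    Returns the cycle numbers at which the assertion fails, i.e. a
    request went ungranted for `max_latency` cycles.
    """
    failures = []
    pending_since = None
    for cycle, (req, gnt) in enumerate(trace):
        if gnt:
            pending_since = None            # outstanding request satisfied
        elif req and pending_since is None:
            pending_since = cycle           # start tracking a new request
        if pending_since is not None and cycle - pending_since >= max_latency:
            failures.append(cycle)          # assertion fires here
            pending_since = None
    return failures

good = [(True, False), (False, False), (False, True)]   # granted in time
bad = [(True, False)] + [(False, False)] * 5            # never granted
```

Because the checker depends only on the two signals and the rule, not on any particular testbench, the same monitor can ride along in block-level simulation and again at chip level, which is the portability the text describes.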

Design reuse is a critical element in closing the SoC design gap. Designers
learned years ago that reinventing every new chip from scratch is not a scalable
approach. With the emergence of the semiconductor intellectual property (IP) market,
designers have gradually grown more comfortable with licensing virtual components
(VCs) from ASIC vendors, FPGA suppliers and dedicated IP companies. Perhaps the
largest single factor in the limited success of IP design reuse has been the lack of
attention to corresponding verification reuse.

The solution is clear. The VC provider (internal or external) must consider


verification reuse when providing the total IP package to the SoC team. VC integrators
in turn must demand reusable verification components from their providers. VC
verification is a complex problem. The typical stand-alone VC verification
environment consists of many components, including transaction and stimulus
generators, verification test suites, results checkers, protocol monitors on the ports and
internal assertions to check that the VC is always operating correctly. An interface VC
such as a PCI or UTOPIA controller typically sits on the external I/O of the SoC, so
the interface ports connect to chip pins. Thus, the bus stimulus generation and results
checking elements from the stand-alone verification environment may also be used at
the chip level.

Verification reuse involves reusing the existing verification environments or


components of verification environments developed for other designs or blocks. It
includes verification code reuse (monitor, bus-functional model [BFM], scoreboard,
and data item), test case reuse, assertion reuse, simulation script reuse and coverage
analysis reuse. As design complexity grows, the complexity of the functional
verification task rises exponentially. Considering that verification consumes 50 percent
to 80 percent of the total development effort, verification reuse brings tremendous
benefits to the verification team.
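One of the reusable verification components listed above, the scoreboard, can be sketched in a few lines. This is an illustrative Python model with hypothetical names, not the thesis's environment: a reference model pushes expected transactions, a monitor pushes observed ones, and the scoreboard compares them in order. Because it depends only on transaction contents, never on the DUT's pin-level interface, the same component can be reused across environments.

```python
from collections import deque

class Scoreboard:
    """Minimal in-order scoreboard: expected vs. observed transactions."""

    def __init__(self):
        self.expected = deque()
        self.matches = 0
        self.mismatches = 0

    def push_expected(self, txn):
        """Called by the reference model."""
        self.expected.append(txn)

    def push_observed(self, txn):
        """Called by the output monitor; compares against the oldest
        expected transaction."""
        if not self.expected:
            self.mismatches += 1            # unexpected transaction
        elif self.expected.popleft() == txn:
            self.matches += 1
        else:
            self.mismatches += 1            # data corruption or ordering bug

sb = Scoreboard()
for frame in [b"\x01\x02", b"\x03\x04"]:
    sb.push_expected(frame)
sb.push_observed(b"\x01\x02")    # matches
sb.push_observed(b"\xff\xff")    # corrupted frame -> mismatch
```

The same interface-independence argument applies to the other listed components (monitors, BFMs, data items): the narrower their coupling to one DUT's ports, the more of the environment survives a move to the next design.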

1.5 OVERVIEW OF THE THESIS

Chapter 1 discusses technology growth in functional verification and verification reuse. It also discusses the objectives and motivation of the thesis and gives an overview of the thesis. Chapter 2 presents literature reviews in functional verification and verification reuse, and draws inferences from the reviews to justify the proposed methodology. Chapter 3 discusses the Ethernet MAC protocols, the Ethernet frame and the IEEE 802.3 Ethernet IP standards; the IEEE standards are used as a reference while designing the proposed verification environment. Chapter 4 presents issues in verification and verification methodologies, test methodologies and their limitations. It also presents types of checking, issues in traditional verification methodologies, and a verification process that overcomes the drawbacks of traditional verification methodologies. Chapter 5 presents the Specman-based verification methodology and the verification environment for verifying the Ethernet packet in the Ethernet IP core. MDIO, MII and the PHY are the important components of the verification environment; the verification strategy and the verification environment for the MAC DUT are also discussed. Chapter 6 presents an introduction to verification reuse, the Verification IP prerequisites for verification, and the techniques proposed to facilitate verification environment reuse. The PHY device architecture and its usage in verification, and the Cyclic Redundancy Check (CRC), a reusable Verilog task, are discussed. Wrapper signal details and test bench algorithms for the processor and host interfaces are also presented. Chapter 7 discusses the receiver environment, a detailed test plan for the Ethernet MAC Receiver, and test cases for GMII and MII modes in half and full duplex to conform to the IEEE 802.3 standards. It also presents the directory structure planned for the Ethernet IP verification. Chapter 8 presents coverage analysis measurements for behavioral and RTL designs, the coverage goals, the coverage obtained, and the tools and methodology used. It also presents coverage analysis to report the compatibility of the goals with the coverage obtained and to justify the deviations. Chapter 9 presents details of the contributions of this thesis, the benefits of the methodology, and future directions for research.

CHAPTER 2

LITERATURE REVIEWS

Literature reviews are classified into functional verification and verification


reuse.

2.1 Functional Verification

Functional complexity has increased because of the heterogeneous nature of designs today; for example, the co-existence of hardware and software, and of analog and digital. The requirement for higher system reliability forces verification tasks to ensure that a chip-level function will perform satisfactorily in a system environment, especially since a chip-level defect has a multiplicative effect. As complexity continued to grow, new verification languages were created and introduced that could verify complex designs at various levels of abstraction. Along with the new verification languages, technologies and tools came into existence that supported them.

2.1.1 Simulation, Emulation and FPGA Prototyping

Functional verification is necessarily incomplete because it is not
computationally feasible to exhaustively simulate designs. A survey of hybrid
techniques for functional verification is presented by (Jayanta Bhadra 2007 [41]),
discussing the recent advances in hybrid techniques. The most effective way to deal
with complexity is to combine the strengths of all the techniques; a major challenge is
to ensure that the techniques complement rather than subvert each other when working
in tandem. It is therefore important to quantitatively measure the degree of
verification coverage of the design, as explained by (F. Corno 2003 [19]). This method
breaks the computation into two phases: functional simulation of a modified HDL
model followed by analysis of a flowgraph extracted from the HDL model.

Simulation-based verification at the Register Transfer Level (RTL) is
augmented by Design for Verification (DFV), presented by (Indradeep Ghosh 2002
[36]). In this technique the designer of an RTL circuit introduces some well-understood
extra behavior, through extra circuitry, into the circuit under
verification to exercise the design more thoroughly. Once the circuit is thoroughly
verified for functionality, the extra behavioral constructs can be removed to produce
the original verified design.

A design and verification (DV) methodology for a pipelined Advanced
Encryption Standard (AES) cipher is suggested by (Jae-Gon Lee 2005 [38]). The
design is first captured at the functional level in C, refined later to a behavioral-level design and
finally to an RTL design with SystemC, and an FPGA-based simulation accelerator is
adopted in the final stage. To reuse the original test vectors, a proxy module is
introduced for interconnecting the simulation environment with the accelerator.

Hierarchical simulation-based verification of Anton, a special-purpose
parallel machine, is suggested by (J.P. Grossman 2007 [29]). This verification
methodology addressed the problem of providing evidence that computations spanning
more than a quadrillion clock cycles will produce valid scientific results, by using a
hierarchy of RTL, architectural, and numerical simulations. Block- and chip-level RTL
models were verified by means of extensive co-simulation with a detailed C++
architectural simulator, ensuring that the RTL models could perform the same
molecular dynamics computations as the architectural simulator. An earlier work for
a similar purpose is proposed by (St. Pierre M. 1992 [89]). The suggested verification
methodology employs multiple layers of abstraction and concurrent development of
design and test to reduce overall development time and increase the effectiveness and
coverage of the functional tests. Simulation-based functional verification of the
logic design using implementation-directed, pseudo-random exercisers, supplemented
with implementation-specific, hand-generated tests, for a similar purpose, is presented
by (Scott Taylor 1998 [84]).

A verification methodology for configurable processor cores is proposed by
(Marines Puig Medina 2000 [53]). The simulation-based approach uses directed
diagnostics and pseudo-random program generators, both of which are tailored to
specific processor instances. A configurable and extensible test-bench serves as the
framework for the verification process and offers the components necessary for
complete SoC verification. A thesis titled "Integrating C and Verilog into a Simulation-Based
Verification Environment for PCI-X 2.0 Bus" is presented by (Yu-Chi Su 2003
[102]). In this thesis a verification environment for the PCI-X 2.0 bus specification is
proposed, with the goal of building a verification platform that can efficiently and
effectively help in the functional verification of PCI-X 2.0 devices. The proposed
verification environment is based on the simulation technique, and its design and
implementation rest on two infrastructures, the Verification Collaborative
Infrastructure (VCI) and TestWizard. The verification environment contains complete
models and tools including Bus Functional Models (BFMs), an arbiter, a protocol monitor,
a C-based test generator, and compliance test suites.

An alternative resource-aware rapid prototyping method for microcontrollers
using a low-cost FPGA-based prototyping board is suggested by (Schmitt S. 2004 [83]).
The method is based on the efficient usage of all resources of the prototyping system
to emulate specific parts of the microcontroller. The IP integration infrastructure of the FPGA
and the prototyping system are used efficiently in this method.

A functional verification methodology for large ASICs is suggested by (Adrian
Evans 1998 [2]). Here ASIC sub-system-level behavioral modeling, large multi-chip
simulations, and random pattern simulations are used. The emulation strategy is
based on a plan that consisted of running the real software, in integrated parts, on the emulated
system. The throughput afforded by emulation could expose bugs that would otherwise require
extremely long simulations.

2.1.2 Formal verification (FV)

FV proves properties on mathematical models representing the system
implementation. In particular, the identification of design errors is addressed by means
of model checking tools. In this context, temporal logic properties are defined to
formally check the correctness of a design implementation with respect to the
specification. Such properties are derived from informal design specifications; their
formalization therefore often requires direct interaction of the verification engineers with
both architects and designers, and it relies on the collective expertise of the verification
group. Some properties may be falsified, requiring an iterated refinement
phase, either of the design implementation or of the properties themselves.

The OVL checkers are encapsulated in Verilog modules or SystemVerilog


interfaces and fully parameterized. They are associated with a design the same way as
any other Verilog module, by direct instantiation or using the SystemVerilog bind
statement. (Eduard Cerny 2007 [23]) describes a method for constructing property-
based checkers using SystemVerilog assertions. The checker is defined using a
property with default parameters for clock and reset. The property is instantiated in a
verification statement using a macro. Macro definitions are also provided for message
generation in the fail action block.
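As an illustration of the kind of temporal property such checkers encode, the following minimal Python sketch monitors a request/acknowledge handshake cycle by cycle. It is not taken from the cited checker implementations; the function name, trace format and latency bound are all illustrative assumptions.

```python
def check_req_ack(trace, max_latency=3):
    """Check the temporal property: every 'req' must be followed by an
    'ack' within max_latency cycles. 'trace' is a list of (req, ack)
    boolean pairs, one entry per clock cycle."""
    failures = []
    pending = []  # cycles at which an unacknowledged req was observed
    for cycle, (req, ack) in enumerate(trace):
        if ack and pending:
            pending.pop(0)          # oldest outstanding request is satisfied
        if req:
            pending.append(cycle)
        # any request older than max_latency cycles is a violation
        failures += [c for c in pending if cycle - c >= max_latency]
        pending = [c for c in pending if cycle - c < max_latency]
    return failures

# Passing trace: req at cycle 0, ack at cycle 2 (within 3 cycles)
assert check_req_ack([(1, 0), (0, 0), (0, 1)]) == []
# Failing trace: req at cycle 0 never acknowledged within 3 cycles
assert check_req_ack([(1, 0), (0, 0), (0, 0), (0, 0)]) == [0]
```

In an SVA checker the same intent would be expressed declaratively, for example as an implication with a bounded delay range, with the simulator performing the cycle-by-cycle monitoring shown explicitly here.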

Symbolic model checking is usually expected to handle designs with up to a
few hundred storage elements; on more complex designs, the graph representation of
states can grow extremely large, resulting in space-outs or severe performance
degradation due to paging. (Jun Yuan 2002 [45]), in his thesis at the University of Texas
at Austin, addresses the major issues in simulation verification by applying formal and
symbolic methods. The author uses Binary Decision Diagrams (BDDs) to
symbolically generate simulation vectors. The constraints used in vector generation
also provide a formal description of the design interface: they can be treated as the
assumption about the environment in a formal verification of the design, or as the proof
obligation in the verification of the parent module of the design.

2.1.3 Test Generation

Current industry practice is to use separate automatic, dynamic, directed and
directed-random stimuli generators for the verification of complicated designs. The
generated stimuli, usually in the form of test programs, trigger architecture and micro-architecture
events defined by a verification plan. In general, test programs must meet
both the validity requirement and the quality requirement.

Although formal methods such as model checking and theorem proving have
resulted in noticeable progress, these approaches apply only to the verification of
relatively small design blocks or narrowly focused verification goals. Earlier test
generators incorporated a biased, pseudo-random, dynamic generation scheme. (Allon
Adir 2004 [4]) suggested a generic solution, applicable to any architecture, which led to the
development of a model-based test generation scheme that partitions the test generator
into two main components: a generic, architecture-independent engine and a model
that describes the targeted architecture.

One of the earliest test generation methods, suggested by (C. Bellon 1984 [13]),
starts from the behavioral sequential machine model. A set of representative
transitions to be tested is sought. The transition set is partitioned into a finite and
limited number of equivalence classes such that verifying a representative of
each class verifies, by induction, the entire class. Design Verification Using
Logic Tests, suggested by (Warren H. Debany 1991 [100]), uses fault simulation to
grade the coverage of test cases used for hardware design verification. (Eugene Zhang
1997 [24]) suggests completely self-checking tests using a transaction-based
verification method and concurrent programming techniques. Another methodology, to
synthesize a self-test program for stuck-at faults, is suggested by (F. Corno 2003 [19]).
This approach generates a sequence of instructions that enumerates all the
combinations of the operations and systematically selects operands; however, users
need to determine the heuristics for assigning values to instruction operands to achieve
high stuck-at fault coverage. Priority Directed Test Generation (PDG) for functional
verification using neural networks is suggested by (Hao Shen 2005 [33]). With PDG,
a test vector which has not yet been simulated is granted a priority attribute. The priority
indicates the possibility of detecting new bugs by simulating this vector.

A requirement for any functional test pattern generation approach is to
avoid exponential growth of the test generation complexity with the growth
in design size. An approach to cast Coverage Directed Test Generation (CDG) in a
statistical inference framework and apply computer learning techniques to achieve the
CDG goals is proposed by (Shai Fine 2003 [86]). This approach is based on modeling
the relationship between the coverage information and the directives to the test
generator using Bayesian networks. (Charles H 2006 [18]) proposes another
functional test generation approach where simulation results are used to guide the
generation of additional tests. The author suggests two sets of techniques to
achieve these conversions: one, called Boolean learning, is applied to random
logic, and the other, called arithmetic learning, is applied to datapath modules.
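The feedback loop at the heart of coverage-directed generation can be sketched in Python. This is a toy model only, not the cited Bayesian-network implementation: coverage results from each "simulation" step are fed back to bias the stimulus generator toward the remaining coverage holes. All names (opcodes, weights) are illustrative.

```python
import random

def biased_generator(weights):
    """Pick one stimulus item according to the current bias weights."""
    items, w = zip(*weights.items())
    return random.choices(items, weights=w)[0]

def run_cdg(coverage_goal, cycles=2000, seed=0):
    """Toy coverage-directed generation loop: after each simulated
    stimulus, raise the weight of every still-uncovered item so the
    generator is steered toward the coverage holes."""
    random.seed(seed)
    weights = {op: 1.0 for op in coverage_goal}
    covered = set()
    for _ in range(cycles):
        op = biased_generator(weights)
        covered.add(op)                      # 'simulate' the stimulus
        for o in coverage_goal:              # feedback: bias the holes
            weights[o] = 0.1 if o in covered else 10.0
        if covered == coverage_goal:
            break
    return covered

goal = {"add", "sub", "mul", "div", "load", "store"}
assert run_cdg(goal) == goal   # the loop closes all coverage holes
```

The real techniques differ in how the feedback is computed (Bayesian inference over coverage data, or learned Boolean/arithmetic relations), but the structure of the loop, generate, measure, re-bias, is the same.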

2.1.4 Transaction level modeling

An overview of transaction-level modeling is presented by (Lukai Cai 2003
[52]). In a transaction-level model (TLM), the details of communication among
computation components are separated from the details of the computation components themselves.
Communication is modeled by channels, while transaction requests take place by
calling interface functions of these channel models. Unnecessary details of
communication and computation are hidden in a TLM and may be added later. TLMs
speed up simulation and allow exploring and validating design alternatives at a
higher level of abstraction.
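The separation of computation from communication described above can be sketched as follows. This is a minimal Python model; the class and method names are illustrative, not taken from the cited work:

```python
import queue

class TlmChannel:
    """Transaction-level channel: components communicate only through
    the put/get interface functions; the pin- and cycle-level details
    of the bus are hidden inside the channel and can be refined later."""
    def __init__(self, depth=16):
        self._fifo = queue.Queue(maxsize=depth)

    def put(self, transaction):     # called by the producer component
        self._fifo.put(transaction)

    def get(self):                  # called by the consumer component
        return self._fifo.get()

# A producer writes whole transactions; a consumer reads them back.
chan = TlmChannel()
chan.put({"cmd": "write", "addr": 0x1000, "data": 0xAB})
txn = chan.get()
assert txn["data"] == 0xAB
```

Because the components see only `put` and `get`, the channel can later be replaced by a cycle-accurate bus model without touching the computation components, which is exactly what makes TLM-based design space exploration cheap.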

This method allows exploring several SoC design architectures, leading to
better performance and easier verification of the final product. (Ali Habibi 2006 [3])
presents an approach to design and verify SystemC models at the transaction level.
Here the verification is integrated as part of the design flow. Both the design and the
properties are written in the Property Specification Language (PSL) and modeled in the Unified
Modeling Language (UML). In this approach a methodology is offered to apply
assertion-based verification by reusing the already defined PSL properties. A genetic
algorithm is proposed that optimizes the probability distribution of the inputs over
the space of their possible values.

A design and verification methodology for TLM models is suggested by
(Mohammad Reza Kakoee 2007 [60]). Verification is integrated as part of the design
flow. In the proposed method, the design is modeled in UML. Then it is translated
into the Reactive Objects Language, Rebeca, which is an actor-based language with a
formal foundation. A model in Rebeca is a set of concurrently executed reactive
objects interacting by asynchronous message passing. After mapping UML to Rebeca,
the Rebeca code is translated into Promela, a language for formal
verification. Checking the correctness of the design is performed on the fly using a
model checker.

2.1.5 Assertion-Based Verification (ABV)

ABV has been identified as a modern, powerful verification paradigm that can
assure enhanced productivity, higher design quality and, ultimately, faster time to
market and higher value to engineers and end-users of electronics products. Assertions
are orthogonal to the design; they can be thought of as a separate layer of verification
added on top of the design itself. With ABV, assertions are used to capture the
required temporal behavior of the design in a formal and unambiguous way. The
design can then be verified against those assertions using simulation and/or static
verification (e.g. model checking) techniques to assure that it indeed conforms to the
design intent captured by the assertions.

Designers write assertions that describe the behaviors they expect the
environment to exhibit, the behaviors they want to model, and specific structural
details about the design implementation. Designers also write functional coverage
assertions to identify corner cases that the verification environment must properly
stimulate. Verification teams typically write assertions that deal more with the end-to-
end behavior of a block or system, specifying the behavior independently of the
specific implementation choices of the designer.

Using SystemVerilog Assertions (SVA) for functional coverage is presented
by (Mark Litterick 2005 [54]). This paper explores the issues and implementation of
such a functional coverage model, demonstrating the capabilities of SVA
coverage and illustrating coding techniques that can also be applied to the more
typical use of SVA coverage, namely specifying key corner cases for the RTL from
the designer's detailed knowledge of the structural implementation. (Kausik Datta
2004 [48]) presents an overview of the assertion-based verification methodology in
general.

A basic technique to implement ABV is to embed temporal assertions in RTL
code. (Anat Dahan 2005 [8]) describes the use of a Property Specification Language
(PSL) based ABV methodology in a C++-based system-level modeling and simulation
environment, and the considerations of porting a tool which translates PSL to
VHDL/Verilog to support C++. A platform called FoCs ("Formal Checkers") is
developed in this work. FoCs takes PSL/Sugar properties as input and translates them
into assertion-checking modules which are integrated into the simulation environment
and monitor simulation on a cycle-by-cycle basis for violations of the property.

The generation of behavioral models from a set of assertions is proposed by
(Hans Eveking 2007 [32]); these models are called "cando-objects" because they
behave randomly in all situations not covered by the corresponding set of assertions.
The cando-objects reflect the intended non-determinism of assertions as well as
the non-determinism caused by the incompleteness of a set of assertions.

An assertion-based software development process that enables design-by-contract
is proposed by (Jean-Yves 2004 [42]). This work discusses a model-based
design flow for requirements in distributed embedded software development and the
automated generation of property-checking code in multiple target languages, from
simulation to prototyping to final production.

2.1.6 Validation

Pre-silicon validation is generally performed at a chip, multi-chip or system
level. The objective of pre-silicon validation is to verify the correctness and
sufficiency of the design. This approach typically requires modeling the complete
system, where the model of the design under test may be RTL, and other components
of the system may be behavioral or bus functional models. The goal is to subject the
DUT (design under test) to real-world-like input stimuli; this uncovers unexpected
system component interactions and inadequate or missing functionality in the RTL. The major
considerations for a pre-silicon validation strategy are concurrency, automated test
generation, high-quality verification IP, ease of use, leveraging design and
application knowledge, the right level of abstraction, debugging, configurability and
reuse of the test environment.

High-level validation of next-generation microprocessors is proposed by (Bob
Bentley 2001, 2002 [14, 15]). This work outlines some of the approaches being
considered to address the challenge of validating Intel's next-generation IA-32 micro-architecture,
and explains a focused effort to identify the causes of bugs in the
Pentium 4 design and ways of preventing them in future microprocessor development
projects. The approach adds another, more abstract model above the traditional
Structural Register Transfer Level (SRTL) description to capture the design at the
micro-architectural level. The comparable development of Architecture Description
Languages (ADLs) allows architects to develop machine-readable, executable
specifications of the micro-architecture. Pre-silicon logic validation was done using
either a cluster-level or full-chip SRTL model running in the csim simulation
environment from Intel Design Technology.

Abstraction Techniques for Validation Coverage Analysis and Test Generation


is presented by (Dinos 1998 [21]). Functional coverage is defined as the amount of
control behavior covered by the test suite. The formal verification techniques are
combined using BDDs as the underlying representation, with traditional Automatic
Test Pattern Generation (ATPG) techniques to automatically generate additional
sequences which traverse uncovered parts of the control state graph. This work
demonstrates how abstraction techniques can complement ATPG techniques when
attacking hard-to-detect faults in the control part of the design.

A buffer-oriented methodology for validating micro-architectural specifications
is proposed by (Noppanunt 2001 [66]). The approach is to determine the functionality
of each buffer type and model its operation at the micro-architectural level. The method uses
abstract finite state machine (FSM) models, and rigorously generates instruction
sequences that systematically exercise the model of each instance of the buffer type. A
high-level test sequence is derived from the abstract FSM model using FSM testing
techniques.

Functional validation of programmable architectures such as processor cores,
coprocessors, and memory subsystems is presented by (Prabhat Mishra 2004 [71]).
This work presents a methodology that uses an Architecture Description Language
(ADL) based specification as a golden reference model for validation and for generation
of executable models such as simulators and hardware prototypes. This work also
presents a validation framework that uses the generated hardware as a reference model
to verify the hand-written implementation using a combination of symbolic simulation
and equivalence checking. It also presents functional-coverage-based test generation
techniques for the validation of pipelined processor architectures.

Model Based Test Generation for Microprocessor Architecture Validation
(MMV) is suggested by (Sreekumar V. 2007 [88]). The key ingredients of MMV are
its metamodeling capability and language-independent representation. The
metamodeling capability lets MMV create a customizable multi-abstraction
modeling framework using generic modeling concepts. The inter-operable
representation allows implementing back-end passes that perform various
interpretations and translations of the processor model.

2.1.7 Bus Functional Model

A bus functional model provides a task or procedural interface
to specify certain bus operations for a defined bus protocol. For microprocessors these
transactions usually take the form of read and write operations on the bus. Bus
functional models are easy to use and provide good performance. ASIC design
engineers have traditionally used bus functional models (BFMs) to verify ASIC
interaction with defined bus protocols and other off-the-shelf chips. The BFM runs the
specified bus transactions without the overhead of a full-functional hardware model.
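A BFM's task-style interface can be sketched in Python. This is an illustrative model only; real BFMs are written in an HDL or a verification language and drive the actual bus signals cycle by cycle. The class name, bus and logging scheme are assumptions:

```python
class SimpleBusBFM:
    """Toy bus functional model: exposes read/write 'tasks' for a
    defined bus protocol while hiding the cycle-level signalling.
    Here the bus target is modeled as a dictionary of addressed data."""
    def __init__(self):
        self._memory = {}
        self.log = []            # transaction log, useful for checking

    def write(self, addr, data):
        # A real BFM would drive address/data/control pins over several
        # clock cycles; here we only record the transaction's effect.
        self._memory[addr] = data
        self.log.append(("WRITE", addr, data))

    def read(self, addr):
        data = self._memory.get(addr, 0)
        self.log.append(("READ", addr, data))
        return data

bfm = SimpleBusBFM()
bfm.write(0x40, 0xDEAD)
assert bfm.read(0x40) == 0xDEAD
assert bfm.read(0x44) == 0      # unwritten location reads back as 0
```

A testbench using such a BFM issues whole read/write transactions instead of toggling pins, which is what gives BFM-based verification its ease of use and speed.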

Automatic generation of bus functional models from transaction-level
models is presented by (Dongwan Shin 2004 [22]). This work presents a methodology
and algorithms for generating bus functional models from transaction-level models in
system-level design. The communication refinement tool produces a bus functional
model that reflects the bus architecture of the system; in this model, the
top level of the design consists of the system components and the wires of the system bus.

A SoC design environment with automated configurable bus generation for
rapid prototyping is presented by (Sang-Heon 2005 [82]). This work proposes a SoC
verification environment in which the hardware parts are accelerated in an FPGA and the cores
are modeled with an instruction set simulator (ISS). To connect the ISS, at a high abstraction
level, with the emulator, at pin-level accuracy, a bus functional model (BFM) is used. A tool
developed in this work generates the bus architecture in a fast and easy way. This
work can reduce the design time considerably and makes design space exploration for
the bus architecture possible.

2.1.8 Coverage-Driven Verification (CDV)

The CDV approach makes coverage the core engine that drives the whole
verification flow. The coverage space is defined up front, and coverage is used to measure
the quality of the random testing and to steer verification resources toward covering
holes until a satisfactory level of coverage is attained. This, in theory, enables reaching
high-quality verification in a timely manner.
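The idea of defining the coverage space up front and steering stimulus toward the holes can be illustrated with a small Python sketch of a cross-product functional coverage model. The class and attribute names are hypothetical, not taken from any cited tool; the MII/GMII and duplex attributes merely echo the Ethernet context of this thesis:

```python
from itertools import product

class CoverageModel:
    """Cross-product functional coverage: the coverage space is the
    product of the attribute value sets, defined up front; simulation
    samples fill bins, and holes() reports what is still uncovered."""
    def __init__(self, **attributes):
        self.attrs = attributes
        self.space = set(product(*attributes.values()))
        self.hit = set()

    def sample(self, *point):
        if point in self.space:      # points outside the space are ignored
            self.hit.add(point)

    def holes(self):
        return self.space - self.hit

    def coverage(self):
        return len(self.hit) / len(self.space)

cov = CoverageModel(mode=("MII", "GMII"), duplex=("half", "full"))
cov.sample("MII", "half")
cov.sample("GMII", "full")
assert cov.coverage() == 0.5
assert ("MII", "full") in cov.holes()   # an uncovered corner case
```

In a CDV flow, the `holes()` report is what steers subsequent test generation: the remaining points become the targets of the next batch of directed or biased-random tests.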

Functional coverage, derived from the explicit functional specification of the
device, is considered the answer in multiple references. (Alon Gluska 2003 [5])
presents a coverage-oriented approach, wherein verification is driven first by the
detection of as many RTL bugs as possible using random and direct-random tests that
follow a detailed test plan. When this method produces a drop in bug detection,
coverage is gradually measured and the results steer the verification toward the
completion of the missing events. Coverage-oriented verification was applied in the
verification of Banias, a microprocessor designed exclusively for mobile
computers.

A functional verification methodology suggested by (Amir Hekmatpour 2003


[7]) is based on a system developed at the IBM Microelectronics Embedded PowerPC
Design Center, in order to improve the coverage and convergence of random test
generators in general and model-based random test generators in particular. This paper
describes methods for calibrating the test generation process to improve functional
coverage. In addition, it outlines a strategy for improved management and control of
the test generation for faster convergence across corner cases, complex scenarios, and
deep interdependencies.

Observability-based code coverage metrics for functional verification are
suggested by (Fallah F. 2001 [25]). In this work, the details of an efficient method to
compute an observability-based code coverage metric are provided, which can be used
while simulating complex hardware description language (HDL) designs. This method
breaks the computation into two phases: functional simulation of a modified HDL
model followed by analysis of a flowgraph extracted from the HDL model. The work
of (Sigal Asaf 2004 [85]) presents a method for defining views onto the coverage data
of cross-product functional coverage models. This method allows users to focus on
certain aspects of the coverage data to extract relevant, useful information, thereby
improving the quality of the coverage analysis.

2.2 Verification Planning, Reuse and Environment Reuse

Coverage, in the broadest sense, is responsible for measuring verification
progress across a plethora of metrics and for aiding engineers in assessing their location
relative to design completion, as explained by (Andrew Piziali 2006 [10]). The map to
be referenced must be created by the design team, so that they know not only where
they are starting (the specification) but also where they are going (fully functional
first silicon). The metrics of the map must be chosen for their utility: RTL written,
software written, features, properties, assertion count, simulation count, failure rate
and coverage closure rate.

This map is the verification plan, an executable natural language document that
defines the scope of the verification problem and its solution. The scope of the
problem is defined by implicit and explicit coverage models. The solution to the
verification problem is described by the methodology employed to achieve full
coverage: dynamic and static verification. Simulation (dynamic) contributes to
coverage closure through RTL execution. Formal (static) contributes to coverage
closure through proven properties. By annotating the verification plan with these
progress metrics as well as others, it becomes a live, executable document able to
direct the design team to their goal.

If the verification plan is obsolete as soon as it is written, the effort is not justified.
However, by transforming the verification plan into an active specification that
controls the verification process, the planning effort is more than justified. The method
suggested by (Andrew Piziali 2006 [10]) illustrates the application of an executable
verification plan to a processor-based SoC.

Re-Use of Verification Environment for Verification of Memory Controller,
proposed by (Aniruddha Baljekar 2006 [11]), presents that reuse of the verification
environment can be achieved through reuse with different IPs, reuse at different levels of
integration and reuse at different levels of abstraction (SystemC, RTL). Tools
such as Specman Elite, vManager and Scenario Builder, and eVCs as verification IPs,
are used. The verification components, such as eVCs (extended to support the
functionality at both TLM and RTL interfaces), virtual sequences, the scoreboard
implementation, the coverage implementation and the verification plan from the RTL
testbench, are re-used to verify the TLM SystemC DUT.

For SoCs it is observed that most of the peripherals are reused from the
previous design step, with some modifications to the feature set. Verification
reuse methodology is therefore very critical for these systems. Verification planning for core-based
designs is suggested by (Anjali Vishwanath 2007 [12]). This work discusses the
importance and completeness of verification planning in achieving the verification
requirements, and the reuse techniques adopted during the planning phase to enhance
reuse between different core-based designs.

Verification of an AMBA bus model using SystemVerilog is presented by (Han
Ke 2007 [30]). This paper explains the design of Advanced Microcontroller Bus
Architecture (AMBA) verification IP (Intellectual Property) and of an Advanced High-performance
Bus (AHB) master and monitor using SystemVerilog. It also discusses the
reuse of the verification IP to verify any AMBA-protocol-based SoC. A reference model is
designed to dynamically predict the response of the DUT; a different reference model
needs to be designed to verify another DUT.

Classification and Retrieval of Reusable Components Using Semantic Features,
one of the earliest works, proposed by (John Penix 1995 [43]), suggests a methodology
that shifts the overhead of formal reasoning from the retrieval to the classification
phase of reuse. Software components are classified using semantic features that are
derived from their formal specification. Retrieval of functionally similar components
can then be accomplished based on the stored feature sets, and formal verification can
be applied to precisely determine the reusability of the set of similar components. (John
Penix 1998 [44]) suggests component retrieval and reuse using formal
specifications. This work presents a pragmatic approach to the use of formal interface
specifications in the component retrieval process. Formal specifications provide
precise descriptions of problem requirements, component function and component
structure. Formal inference defines a mechanism for reliably and formally comparing
problem requirements and component specifications.

Among standard interfaces, PCI/PCI-X is probably the most popular and
widely used. (Kai-Hui Chang 2003 [46]) proposes a PCI-X verification environment
that integrates both C and Verilog in the testbench development. Two benchmarks,
a PCI-X to PCI-X bridge and a PCI-X core, are presented in this work. The
proposed PCI-X verification environment can efficiently and effectively perform the
verification of PCI-X designs. A bus functional model acts as a pseudo-device that
interacts with other devices on the bus. In the verification environment, all the BFMs
are based on the Verilog behavioral model and are controlled and driven by the C-based
diagnostic driver. These BFMs include both a PCI initiator (PCI master) and a PCI
target (PCI slave), and support the PCI standards including PCI 2.2, PCI-X 1.0, and
PCI-X 2.0.

Reusable Verification Infrastructure for A Processor Platform to deliver fast


SOC development is proposed by (Kambiz Khalilian 2002 [47]). This work describes
a platform developed at Infineon for the efficient re-use of IP for SOC design. The
platform includes the configurable TriCore microprocessor core (TIP), and required
system and general purpose peripherals. Standard bus ports are provided for the
addition of user peripherals. TIP is integrated into a complete chip reference design
and a chip level test bench, which runs “out-of-the-box”. The reference chip shortens
customers’ time-to-market and allows the rapid development of derivatives.

A white paper on Verification Reuse Methodology titled ‘Essential Elements


for Verification Productivity Gains’ is presented by (Michael Keating 2003 [55]). The
issues that must be dealt with are delineated and the requirements of a complete
verification reuse methodology are outlined. Verisity’s e Reuse Methodology (eRM™)
is used to exemplify many of the key concepts. The developer can focus on
implementing the unique, value added parts of the design and not re-inventing the
wheel for the elements common to all verification components. Reuse Methodology
Manual for System-On-A-Chip Designs authored by (Michael Keating 1999 [56]) is
published by Kluwer Academic Publishers. This book outlines a set of best practices
for creating reusable designs for use in a SoC design methodology. The fundamental
aspects of the methodology described in this book have become widely adopted. The
reuse-based design requires an explicit methodology for developing reusable macros
that are easy to integrate into SoC designs.

An architecture for verifying Advanced Switch hardware and successfully verifying the design under test's compliance with the given protocol is proposed by (Min An Song 2006 [59]). Here the Avery Design System's TestWizard
is provided as Verilog system tasks that can manipulate the data structures during
simulation. With the help of TestWizard, the verification environment is able to provide both speed and ease of use for simulation. The proposed verification environment
PCI-Xactor is a complete and flexible verification environment for PCI, PCI-X, PCI
Express, and Advanced Switching.

SPARTACAS: Automating Component Reuse and Adaptation is presented by (Morel 2004 [62]). SPARTACAS, a framework for automating specification-based
component retrieval and adaptation has been successfully applied to synthesis of
software for embedded and digital signal processing systems. Using specifications to
abstractly represent implementations allows automated theorem-provers to formally
verify logical reusability relationships between specifications. These logical
relationships are used to evaluate the feasibility of reusing the implementations of
components to implement a problem.

P1500, a language being defined for embedded core test to enable the reuse of intellectual property in a system-on-a-chip environment, is suggested by (Rohit Kapur 1999 [79]). A Core Test Description Language (CTL) is proposed as an industry-standard method and associated description language for capturing and expressing test-related information for reusable cores. CTL provides all the information about the core needed to enable the testing of the user-defined logic.

Reuse issues in the verification of embedded cores are presented by (Narcizo 2002 [63]). An analysis of the verification environment is performed from the perspective of reuse across the design cycle, focusing on core stand-alone and chip-level verification.

e Verification Environment for FlexRay Advanced Automotive Networks, a white paper, is presented by (Stefan Schmechtig 2006 [90]). The author discusses the verification challenges in a FlexRay system. This work explores how this
environment can validate modifications to a FlexRay core and confirm correct
operation of a SoC within a simulated FlexRay network. The FlexRay standard defines
more than 70 parameters, which results in a huge configuration space of global
(cluster) and local (node) parameters. This work discusses FlexRay eVC kit.

Building reusable verification environments with Open Verification Methodology (OVM), a white paper is presented by (Stephen D'Onofrio 2008 [91]).
The author reviews the reuse potential within the OVM, with special focus on four
particularly fruitful areas: testbench architecture, testbench configuration control,
sequences and class factories. A simple router verification example, pw_router,
illustrates schemes for building reusable OVM testbenches.

Best Practices for a Reusable Verification Environment, an article by (Steve Ye 2004 [93]), discusses practical, real-world techniques that are essential for the designer to learn in order to create a highly reusable verification environment using an environment such as Specman e. Verisity's 'e' language reference manual (Verisity 1998-2002 [99]) gives an insight into the features of the language that facilitate the design of reusable verification components (eVCs), constrained random test generation, etc. The
verification-specific constructs that distinguish e from other object-oriented languages
such as C++ include: Constructs to define legal values for data items (constraints),
Constructs to describe sequences over time (temporal constructs), Constructs to
support concurrency (multi-threaded execution) and Constructs to support connectivity
(bit-level access). ‘e’ Libraries reference manual discusses predefined methods,
pseudomethods, predefined routines and modeling state machines.

Extending Verification Reuse to Verification Plan Definition and Verification Environment Implementation, a white paper is presented by (Nihar Shah 2006 [65]). In
spite of ready availability of verification components (VIPs) for major interfaces,
significant effort is still duplicated in each project in building the verification plan and
creating the scenarios for verifying each interface in the design, be it a single block, a
chip, or a system. Beyond using VIPs, the next logical step in improving verification
productivity is to reuse verification plans and the verification environment for carrying
out this verification plan. The author discusses the potential and limitations of reuse.

2.3 INFERENCE FROM REPORTED WORKS

Functional Verification is widely recognized as the bottleneck of the hardware design cycle. With the ever-growing demand for greater performance and faster time
to market, coupled with the exponential growth in hardware size, verification has
become increasingly difficult. Although formal methods such as model checking and
theorem proving have resulted in noticeable progress, these approaches apply only to
the verification of relatively small design blocks or narrowly focused verification goals. In
current industrial practice, simulation based techniques play a major role in the
functional verification of microprocessors. Faster simulators will enable simulation to
track ASIC growth, but will not significantly reduce the overall effort devoted to
functional verification.

A wide variety of technology is available today to describe and model the behavior of system-level devices, from high-level and abstract architectural modeling languages, to hardware description languages running in software simulators, to hardware acceleration and beyond. The emergence of hardware verification languages and comprehensive environments designed to automate the functional verification of processors has significantly affected simulation-based technology. Engineers typically use these environments to verify ASICs, SoCs, and unit-level components in a processor. Using such environments to verify large designs still requires significant effort.

Verification is a process used to demonstrate the functional correctness of a design. The simplest way to find bugs in a Verilog or VHDL design is with a Verilog or VHDL testbench. Testbenches written in Verilog or VHDL usually run directed tests, while testbenches written using an HVL (Hardware Verification Language) such as 'e' or SystemVerilog usually run random tests.
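The contrast between directed and constrained-random stimulus can be sketched in Python (standing in here for an HVL; the byte values and corner weighting are purely illustrative, not taken from the thesis):

```python
import random

def directed_stimuli():
    # A directed test: a fixed, hand-written sequence of input words
    # aimed at specific corner values.
    return [0x00, 0xFF, 0xAA, 0x55]

def constrained_random_stimuli(n, seed=0):
    # A constrained-random test: only legal byte values are generated,
    # with the all-zeros/all-ones corners weighted more heavily.
    rng = random.Random(seed)
    pool = [0x00, 0xFF] * 32 + list(range(256))
    return [rng.choice(pool) for _ in range(n)]
```

Seeding the generator makes a failing random run reproducible, which is what makes constrained-random regression practical.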

System design projects are extremely large and complex. An average design
today is over one million gates, with increasingly complex interfaces, and short design
schedules. This is creating a verification crisis, causing teams to run orders of
magnitude more simulation cycles. In addition, directed tests are not easily created or
maintained and the interface interactions have grown exponentially. The team must
also verify that the design adheres to the specification at each abstraction level, and
each level requires a different execution engine. Considering all this, combined with
the ever-increasing cost of implementation as geometries continue to shrink, the cost
of failure has never been higher.

Many of the problems associated with today's functional verification methodologies stem from the absence of effective automation to combat the discouraging growth in the size and complexity of designs. This has forced teams to rely on manual effort in developing test environments. Examining and interpreting test results is also a manual procedure. Perhaps the most notorious problem facing design and verification engineers is the lack of an effective metric to measure verification progress. Code coverage, for example, indicates which lines of verification code were visited in a simulation, but it offers no indication of which functionality was verified. As a result, the engineer is never sure whether a sufficient amount of verification has been performed.

The greatest of all verification efforts is to determine a comprehensive verification methodology capable of verifying arbitrary designs. For now, most efforts are devoted to developing and improving specific techniques, each of which excels in some area of verification. A common theme of verification work is the search for such a comprehensive methodology.

There has never been a greater need for fast and smart verification solutions than now. Where engineers previously had the luxury of working in small domains and on hardware models with limited interactions, the growing momentum in system-on-chip (SoC) development has meant a tremendous growth in the amount of verification required for a given design.

In addition to a fundamental constrained-random, coverage-driven methodology, Verification Process Automation (VPA) comprises e Reuse
Methodology (eRM), which simplifies the creation of reusable verification IP and the
System Verification Methodology (sVM), which encapsulates sophisticated system-
level verification technology targeted at SoC verification.

CHAPTER 3

VERIFICATION ENVIRONMENT

Verification is a methodology used to demonstrate the functional correctness of a design. The main purpose of functional verification is to ensure that a design
implements intended functionality. Choosing the common origin and reconvergence
points determines what is being verified. These origin and reconvergence points are
often determined by the tool used to perform the verification. It is important to
understand where these points lie to know which transformation is being verified.

3.1 ISSUES IN VERIFICATION

Issues to be addressed during any verification activity are:
 Capturing of all features of design (functional aspects).
 Compliance to all protocols.
 Coverage for all possible corner cases.
 Checking for locking states in the Finite State Machines (FSMs).
 Working of design in any random state.
 Elimination of redundant tests to save unnecessary simulation cycles and cost.
 Equivalence check with reference design.
 Methodology availability, for verification of each design modification.
 Generation of data patterns for directed or random test.
 Ease of generating a flexible and modular test environment.
 Reusability and readability of test environment.
 Handling of simulation time in automated test environment.

3.2 ISSUES IN VERIFICATION METHODOLOGIES

Lack of effective automation during functional verification, due to the size and complexity of designs, makes developing the test environment, the test generation scheme and deterministic tests an intensive manual effort. Checking and debugging test results is also a predominantly manual process.

Design complexity and size make version control and tracking of the design and verification process difficult, at both the specification and functional levels, which can often lead to architectural-level bugs that require enormous effort to debug. Bugs are always a problem, especially when they occur in unpredictable places. Even the most comprehensive functional test plan can completely miss bugs generated by obscure functional combinations or ambiguous spec interpretations. This is why so many bugs are found in emulation, or after first silicon is produced. Without the ability to make the specification itself executable, there is really no way to ensure comprehensive functional coverage of the entire design intent.

The relative inefficiency with which today's verification environments accommodate midstream specification changes also poses a serious problem. Since
most verification environments are an ad hoc collection of HDL code, C code, a
variety of legacy software, and newly acquired point tools, a single change in the
design can force a ripple of required changes throughout the environment, eating up
time and adding substantial risk.

Perhaps the most important problem faced by design and verification engineers
is the lack of effective metrics to measure the progress of verification. Indirect metrics,
such as toggle testing or code coverage, indicate if all the flip-flops are toggled or all
lines of code were executed, but they do not give any indication of what functionality
was verified. For example, they do not indicate if a processor executed all possible
combinations of consecutive instructions. There is simply no correspondence between
any of these metrics and coverage of the functional test plan. As a result, the
verification engineer is never really sure whether a sufficient amount of verification
has been performed.

3.3 TEST METHODOLOGIES

Four prominent Test Methodologies are:

3.3.1 Deterministic

The oldest and most common test methodology used today is deterministic
testing. These tests are developed manually and normally correspond directly to the
functional test plan. Engineers often use deterministic tests to exercise corner cases,
specific sequences that cause the device to enter extreme operational modes. These
tests are normally checked manually. However, with some additional programming the
designer can create self-checking deterministic tests.

Although simple tests can be written in minutes, the more complex ones can
take days to write and debug. Moreover, midstream changes to the design's temporal
behavior can cause the engineer to go through this process repeatedly. When this test
is completed, the corner case is tested through only one possible path.

3.3.2 Pre-run generation

Pre-run generation is a newer methodology for generating tests that addresses some of the productivity problems associated with deterministic testing by automating
the test generation process. C or C++ programs (and sometimes even VHDL and
Verilog, despite the lack of good software constructs) are usually used to create the
tests prior to simulation. The programs read in a parameter/directives file that controls
the generation of the test. Often these files contain simple weighting systems to direct
the random selection of inputs.
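The weighting idea can be sketched in Python (the directive names and weights are hypothetical, standing in for the contents of a parameter/directives file):

```python
import random

# Hypothetical directives file contents: weights that direct the
# random selection of inputs.
DIRECTIVES = {"short_frame": 5, "normal_frame": 90, "long_frame": 5}

def pre_run_generate(n_tests, seed=42):
    # The whole test is produced ahead of simulation, which is why
    # pre-run generated tests tend to be very large.
    rng = random.Random(seed)
    kinds = list(DIRECTIVES)
    weights = [DIRECTIVES[k] for k in kinds]
    return [rng.choices(kinds, weights=weights)[0] for _ in range(n_tests)]
```

The weights bias which stimulus kinds dominate the generated file, but give no direct way to force a specific sequence, which is the limitation noted below.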

Reaching corner cases using pre-run generation is nearly impossible. The engineer has very little control over the sequences generated. This makes it difficult to
force the occurrence of specific combinations. As such, pre-run generation makes a
suitable complement to deterministic testing, but cannot replace it. A side-effect of
this methodology is that the full test is usually very large, since it is generated in
advance.

3.3.3 Checking Strategies

The two most popular ways to determine test results are to compare them to a
reference model or to create rule-based checks. Both of these checking methods must
include both the temporal behavior and protocols of the device as well as the
verification of data. Reference models are most common for processor-like designs
where the correct result can be predicted with relative ease. Engineers perform these
checks either on-the-fly or post-run. Simple checks and protocol checks can be
performed on-the-fly by the stubs and monitors using an HDL. Post-run checks are
often performed using a C/C++ or PERL program.
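A minimal sketch of on-the-fly reference-model checking, written in Python rather than through a PLI/FLI interface; the toy 8-bit ALU operations are invented for illustration:

```python
def reference_model(op, a, b):
    # Golden model predicting the expected result for a toy 8-bit ALU.
    return (a + b) & 0xFF if op == "add" else (a - b) & 0xFF

def check_on_the_fly(dut_step, transactions):
    # Each DUT response is compared the moment it appears, so a failure
    # is flagged while the device state is still available for debug,
    # instead of being discovered in a post-run pass over a log file.
    for op, a, b in transactions:
        expected = reference_model(op, a, b)
        actual = dut_step(op, a, b)
        assert actual == expected, f"mismatch on {(op, a, b)}: got {actual}, expected {expected}"
    return True
```

The same predict-and-compare loop is what an HDL-connected reference model performs; only the simulator interface differs.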

The problem with these checking strategies stems from the way they are most
commonly implemented today. Post-run checking wastes cycles. In addition, since the
post-run checking cannot detect a problem in real-time, the designer does not have
access to the values of the registers and memories of the device at the time the problem occurred. In addition, reference model checking is often hard to implement on-the-fly, since intermediate results are not always available. On-the-fly reference models also
require a direct interface to the simulator (through PLI or FLI) which is not easy to
write and maintain.

3.3.4 Coverage Metrics

Measuring progress is one of the most important tasks in verification, and is the critical element that enables the designer to decide when to end the verification effort. Several methods are commonly used:

 Toggle testing verifies that over a series of tests, all nodes toggled at least once from
1 to 0 and back;
 Code coverage demonstrates that, over a series of tests, all the source lines were
exercised. In many cases there is also an indication as to whether branches in
conditional code were executed. Sometimes an indication of state-machine
transitions is also available;
 Possibly the most common metric used to measure progress is to track how many
bugs are found each week. After a period of a few weeks with very low or zero bugs
found, the designer assumes that the verification process has reached a point of
diminishing returns.

Unfortunately, none of the metrics described above has any direct relation to the functionality of the device, nor is there any correlation to common user applications. Neither toggle testing nor code coverage can indicate whether all cell types in a communication chip, with and without Cyclic Redundancy Check (CRC) errors, have entered on all ports. Nor can these metrics determine whether all possible sequences of consecutive instructions were tested in a processor.
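A functional coverage metric, by contrast, is defined directly in terms of device behaviour. A minimal sketch in Python (the four-opcode instruction set is invented for illustration):

```python
from itertools import product

OPCODES = ["ADD", "SUB", "LOAD", "STORE"]  # illustrative instruction set

class PairCoverage:
    # Functional coverage over consecutive instruction pairs: unlike
    # toggle or code coverage, it reports which behaviours were exercised.
    def __init__(self):
        self.goal = set(product(OPCODES, repeat=2))  # all back-to-back pairs
        self.hit = set()

    def sample(self, trace):
        # Record every adjacent pair observed in an executed trace.
        self.hit.update(zip(trace, trace[1:]))

    def percent(self):
        return 100.0 * len(self.hit & self.goal) / len(self.goal)

    def holes(self):
        # Pairs never exercised: exactly what code coverage cannot show.
        return sorted(self.goal - self.hit)
```

The `holes()` list is what lets the verification effort be steered toward unexercised functionality rather than redundant simulation cycles.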

As a result, coverage is still measured mainly by the gut feeling of the verification manager, and eventually the decision to tape out is made by management
without the support of concrete qualitative data. Not knowing the real state of the
verification progress causes verification engineers to perform many more simulations
than necessary, trading off CPU cycles for "confidence". This usually results in
redundant tests that provide no additional coverage or assurance that verification is
complete. The real risk is that the design will be sent to production with bugs in it,
resulting in another round of silicon. The cost of re-spinning silicon includes non-
recoverable engineering (NRE) costs to do the additional production process, the cost
of extending the teams work on the project, and the major cost of reaching the market
a few weeks late.

3.4 TYPES OF CHECKING

Data checking verifies data correctness, while temporal checking verifies timing and protocol. Higher-level languages are needed because a high-level language eases the creation of an expected-value generator. Figure 3.1 pictorially represents data checking. It shows sharing of the objects used for input stimulus, and supports cycle-based behavior, events, and synchronization with the HDL simulator. Verification test cases are based on directed test cases, random test cases and constrained-random test cases.

Figure 3.1 Data Checking

3.5 ISSUES IN TRADITIONAL VERIFICATION METHODOLOGY

3.5.1 Productivity issues

 Verification is more than 50% of an overall project cycle.
 It may require tens of thousands of lines of verification code.
 Design spec changes cause major verification delays.
 Implementing all identified tests in the test plan within the project schedule is difficult.

3.5.2 Requirement for productivity improvement

 The verification environment must be created and/or maintained efficiently.
 Humans should spend more time on higher-level details:
 Providing simulation goals.
 Analyzing errors reported by checkers.
 Providing more direction when goals are not being met.

3.5.3 Quality issues

 Verification complexity makes it a challenge to think of all possible failure scenarios.
 It does not provide a way to try scenarios beyond the expected failure scenarios.

3.5.4 Requirement for quality improvement

 Confidence in the proportion of bugs identified must increase.
 An automatic way to know what has been tested must be available.

3.5.5 Task-based strategy

 To improve test-writing productivity, a higher level of abstraction for specifying the vector stream must be used.
 Higher-level tasks are created in HDL or C.
 The test writing effort is still high.
 Selection of parameter values is done manually.
 High-level intent is not readily apparent.

3.6 VERIFICATION METHODOLOGY

An effective application of Specman-based verification methodology provides four essential capabilities to help break through the verification bottleneck:

 Automates the verification process, reducing by as much as four times the amount of manual work needed to develop the verification environment and tests.
 Increases product quality by focusing the verification effort to areas of new
functional coverage and by enabling the discovery of bugs not anticipated in the
functional test plan.
 Provides functional coverage analysis capabilities to help measure the progress
and completeness of the verification effort.
 Raises the level of abstraction used to describe the environment and tests from
the RTL level to the specification level, capturing the rules defined in the specs
in a declarative form and automatically ensuring conformance to these rules.

Important considerations when designing a verification environment include:

 Reuse of existing verification components, especially such components that have already been in place in other verification projects and can therefore be seen as successfully proven.
 Providing a metric methodology that allows for statements about the quality of
the verification results. At the end of the verification project, numbers must be
presented about the quality of the tests that were executed.
 The verification components should be easy to integrate into an existing system
verification environment. Plug & Play can be achieved by using the same
standard for the system test bench and the verification components to be added.
 A modular and scalable architecture of the verification solution allows upgrades
and extensions.
 Generation and injection of dedicated erroneous behavior.
 Detailed, comprehensive documentation complements the ideal solution.

3.7 A VERIFICATION ENVIRONMENT FOR VERIFYING ETHERNET PACKET IN ETHERNET IP CORE

When verifying the Ethernet IP core it is necessary to take several critical components into consideration. The interfaces to the host and the Physical Layer Device (PHY) posed serious verification challenges. Building an automated test bench and creating a verification environment takes time, which can be reduced by reusing common elements between designs and across different application developments.

In addition, as with many verification projects today, our goals were to develop
a high-quality device in an extremely tight time schedule. This section describes our
approach to the verification of this complex device and how we addressed the
conflicting needs of quality versus complexity versus time.

The functional architecture (Fig 3.2) consists of a host interface and a standard
MII interface. The IP core consists of MDIO, Direct Memory Access (DMA) support,
Configuration registers, Control logic, Transmitter FSM and Receiver FSM.

3.7.1 Management Data Input/ Output (MDIO)

The MDIO (Figure 3.2) is a simple serial interface between the host and an
external PHY device. It is used for configuration and status read of the physical
device. A host processor responsible for system configuration and monitoring typically
uses the MDIO to perform individual accesses to the various PHY devices.

It implements the IEEE 802.3 Clause 22 standard MDIO interface used in Ethernet systems up to 1 Gbit/s. The MDIO Master Core allows access to registers within multiple connected slaves. Features such as a simple register-based user application interface for the MDIO, MDIO frame generation with serial port tristate control, and a busy indication to the user application during an ongoing transaction are provided. The PHY interrupt goes active when a status change is indicated to the application.

The host initiates an operation by writing into the configuration registers of the MAC. The MDIO block reads these registers and performs the tasks. It then reports back to the host by writing into the configuration registers, which the host polls continuously.

Figure 3.2 MDIO Overview (host, MAC configuration registers and MDIO block, MII/GMII interface, PHY)



A mux is used to select an 8-bit field of the MDIO frame at a time and load it into a parallel-in serial-out register. The PHY data is shifted out at the rising edge of the PHY clock, most significant bit first. During a read cycle the data from the PHY is shifted into the serial-in parallel-out register and a demux drives it onto the required data bus.
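The 32-bit management frame being shifted out can be sketched as follows: a Python model of the standard IEEE 802.3 Clause 22 write-frame fields (32-bit preamble omitted); the address and data values in the usage are illustrative:

```python
def mdio_write_frame(phyad, regad, data):
    # Clause 22 write frame: ST=01, OP=01 (write), 5-bit PHY address,
    # 5-bit register address, TA=10, then 16 data bits.
    assert 0 <= phyad < 32 and 0 <= regad < 32 and 0 <= data < 0x10000
    return ((0b01 << 30) | (0b01 << 28) | (phyad << 23) |
            (regad << 18) | (0b10 << 16) | data)

def shift_out(frame, width=32):
    # Serialize most significant bit first, one bit per MDC edge,
    # as the parallel-in serial-out register does.
    return [(frame >> (width - 1 - i)) & 1 for i in range(width)]
```

For a read, OP becomes 10 and the turnaround bits are not driven by the MAC, matching the tri-state behaviour described below.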

The control block consists of counters that generate the enable signals for the mux, demux, SIPO and PISO. It generates the status signals to be written into the configuration registers. The control block also generates the PHY enable signal to drive PHY data, as the data line is a shared tri-stateable bus driven by the MAC for write transactions or by the PHY devices during reads.

The clock generator module generates the Management Data Clock by dividing the host clock. The division factor is set in a configuration register field.

3.7.2 Verification Strategy

Verification strategy for the Ethernet core is fairly sophisticated. The design is
very complex and the verification team was tasked with ensuring as high a quality
device as possible. Also, the verification team had the requirement that the
environment lend itself easily to reuse for future generations, and that engineers who
did not create the environment be able to be productive within it as quickly as possible.
The verification environment designed is shown in Figure 3.3.

The same components of the existing environment can be used for a different application with a different interface by developing a wrapper around the DUT interface and then connecting it to the environment. The verification environment consists of a BFM, a test case generator, a monitor and a checker.

As with any commercial IP or SOC development with an aggressive timescale, the minimization of risk is a key to delivery. For many commercial developers the decision to adopt a new verification paradigm can seem too risky.

The Specman Elite environment is built with the PHY eVC and the host eVC with a few Verilog tasks. This allowed us to create a virtual host environment using e masters, slaves and bus arbitrators.

Figure 3.3 shows the proposed verification environment. The environment consists of the following components: input BFM driver, collector, coverage, test case generator, error injector, constraints, scoreboard and monitor.

Figure 3.3 Verification Environment for Media Access Control (MAC) DUT (task library (general purpose), transaction generator, test case master, data generator, coverage, G/T BFM, PCI bus, IP core, PIB, monitor, data checker/scoreboard)

A Bus functional Model (BFM) is the unit instance that interacts with the DUT
by driving and/or sampling the DUT signals. A sequence driver drives a data item
generated by a data generation unit. A BFM should be self-contained, not dependent
on other drivers. All stimulus interaction with the DUT should come from common
drivers. This makes the verification IP more modular and reusable. BFMs drive and
sample only one interface. An interface is defined as a set of signals that implements a
specific protocol. It makes the design more modular and allows drivers to be reusable.

Monitors are used to check and observe all transactions on the interface. The
separation of data checking and protocol checking makes verification elements more
reusable, as well as less complicated in the implementation. Monitors should be self-
contained, with each monitor handling only one interface, and should not drive any
design signals. A monitor verifies the protocol on the interface. As the IP or block is
integrated into a multiple-unit or SoC environment, the monitors should be reusable to
check for violations on the interfaces of the IP or block.

Monitors should be capable of being enabled and disabled. This is important for reusing the verification IP in a SoC environment. The pads of a SoC are often
multiplexed in order to provide multiple functions while reducing the package pin
count. The functionality will be selected by primary pins and/or internal registers.
Therefore, it should be possible to change the monitor according to the SoC setup.

A scoreboard is the verification element that predicts, stores and compares data. Determining the correctness of the data received and of the transactions that happened on an interface is the job of the scoreboard.
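A minimal in-order scoreboard sketch in Python (the transaction items are plain strings here for illustration):

```python
from collections import deque

class Scoreboard:
    # Predicts, stores and compares: an expected item is queued when
    # stimulus is driven, then matched in order against what the
    # monitor observes on the interface.
    def __init__(self):
        self.expected = deque()
        self.mismatches = 0

    def predict(self, item):
        self.expected.append(item)

    def observe(self, item):
        # An observed item with an empty expect queue also counts as an error.
        if not self.expected or self.expected.popleft() != item:
            self.mismatches += 1

    def report(self):
        # Pass only if everything matched and nothing is left pending.
        return self.mismatches == 0 and not self.expected
```

Out-of-order interfaces would replace the deque with a matching structure, but the predict/observe/report split stays the same.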

Specman Elite automated key aspects of the verification environment. It works by generating stimuli into the device, which can be fully random or fully directed. It is possible to reach specific corner cases in the design without having to force a specific state. We can generate the inputs pre-run, on the fly, or a combination of the two.

An eVC is a reusable piece of verification code written in the e verification language with the eRM (e Reuse Methodology). Specifically, the PHY eVC is a reusable verification environment developed in house. Just like an IP design core, this allowed time to be spent on verifying the unique aspects of the design rather than the standard components.

One of the major objectives for the verification environment was the ability for
the non-Specman "savvy" engineers to be able to easily use the environment to
develop tests. We achieved this through "Verilog tasks" that were built in a layered
way using Specman Elite macros.

3.8 CHAPTER CONCLUSION

This chapter discusses issues in verification and verification methodologies that need to be addressed. The proposed methodology, based on the Specman verification environment, addresses the limitations of traditional functional verification methodologies. All the components used in the proposed verification environment for verifying the Ethernet packet in an Ethernet IP core are reusable in nature. MDIO, MII, MAC and Physical Layer Device eVCs are some important components of the proposed verification environment. The Specman Elite environment is built with the PHY eVC and the host eVC with a few Verilog tasks. This allowed us to create a virtual host environment using e masters, slaves and bus arbitrators.

CHAPTER 4

VERIFICATION REUSE

Verification reuse involves reusing the existing verification environments or components of verification environments developed for other designs or blocks. It includes verification code reuse (monitor, bus-functional model [BFM], scoreboard, and data item), test case reuse, assertion reuse, simulation script reuse and coverage analysis reuse.

Verification reuse can:

 Dramatically reduce the verification environment build effort.
 Reduce verification risk, improve product quality and increase efficiency.
 Reduce the need for deep protocol expertise on the verification team.

The foundation of this technique is well-designed verification code and components that implement reusability techniques. Before developing the code, however, it is essential for the designer to learn practical, real-world techniques on how to create a highly reusable verification environment.

4.1 VERIFICATION IP PREREQUISITES

Verification IP is a verification component, and an eVC is a verification component in the e language. It is a ready-to-use, configurable verification module, typically focusing on a specific protocol or architecture.

A verification component must be:

 Self-contained, so that it can be easily instantiated, either alone or within an existing environment.
 Configurable, with the ability to specify a different configuration for each instance.
 Easily configured at both the component and the element levels.
 Reusable at different levels of DUT integration.
 Complete, implementing all protocol elements of the specific interface.

4.2 PHYSICAL LAYER DEVICE (PHY)

The Ethernet PHY e verification component (eVC) is an in-house development.
The Ethernet eVC is built with the PHY as a separate eVC and the host as a task-driven
Verilog bus functional model (BFM). This allowed us to create a virtual host
environment using a combination of the Verilog BFM and the eVC. Verification
environment reuse for a different application with a different interface is achieved by
developing a wrapper around the Design Under Test (DUT) interface and then
interfacing it to the environment.

Figure 4.1 shows the structure of the PHY model. The PHY model provides
translation between high-level frame descriptions and the low-level interface at the
DUT. The Frame Decoder reconstructs high-level frames from the DUT interface
and classifies them according to the level and type of errors encountered. The
Protocol Checker monitors the data-path and station management signals to and
from the DUT at both low and high levels. The various components of the PHY
layer collect coverage and logging information and generate statistics.

The PHY element also models the station management interface and the
associated register sets. There is a Station Management Executive within the PHY
to control the DUT access to the register sets via an ‘MDIO’ interface. It is also
possible for the verification environment to directly read and write the contents of
the STA registers. All mandatory registers for each kind of PHY are implemented
to both reflect and affect the internal state of the eVC PHY model. Optional
registers can be modeled or restricted under user control.

The DUT Agent provides the interface between the Virtual Medium and a PHY
layer model that provides connectivity to the DUT. Normally there is a single
DUT Agent per instance of the eVC.

Figure 4.1 PHY Structure



The Ethernet eVC (Figure 4.2) can be used to verify any IEEE802.3:2000
and IEEE Draft P802.3ae/ D4.0 compliant MAC or PHY device. The eVC can be
used for the functional verification of IP cores and SoC designs incorporating
Ethernet MAC and PHY layer functionality and can be configured to have an
unlimited number of Ethernet ports, each interfacing with one of the DUT’s
Ethernet ports.

Figure 4.2 Overview of the Ethernet e Verification Component

The Ethernet eVC:

 Simulates single or multiple Ethernet devices on a medium, generating and


collecting Ethernet packets
 Supports 10 Mb/s, 100 Mb/s, 1 Gb/s and 10 Gb/s bandwidths

 Supports ports capable of Full Duplex or Half Duplex mode


 Supports the management interface for all supported interfaces
 Supports configuration of different ports independently
 Monitors protocol and reports violations
 Users can control generation of transactions for each device model
 Fully supports eRM features such as message(), sequences, packaging and
naming conventions
 Extensive sequence library for developing advanced sequences
 Fully configurable error generation
 Allows testing of error detection mechanisms under realistic scenarios
 Utilizes eRM methodology to ensure compatibility and inter-operability
with other eVCs

4.3 CYCLIC REDUNDANCY CHECK -REUSABLE VERILOG TASK

A CRC is an error-detecting code. Its computation resembles a long
division operation in which the quotient is discarded and the remainder becomes
the result, with the important distinction that the arithmetic used is the carry-less
arithmetic of a finite field. The length of the remainder is always less than the
length of the divisor, which therefore determines how long the result can be.
Typically, an n-bit CRC, applied to a data block of arbitrary length, will detect any
single error burst not longer than n bits, and will detect a fraction 1 - 2^-n of all
longer error bursts.

There are two types of Ethernet CRC errors: (a) MAC frame CRC errors,
and (b) internal protocol CRC errors.

4.3.1 MAC Frame CRC Errors

These CRC errors are the most common, and are what most devices and
analyzers are referring to when they claim a CRC error has occurred. Ethernet
packets are encapsulated in a MAC frame that contains a preamble, and a post-
envelope CRC check. The Ethernet adapter on the sending station is responsible
for creation of the preamble, the insertion of the packet data and then calculating a
CRC checksum and inserting this at the end of the packet. The receiving station
uses the checksum to make a quick judgment if the packet was received intact. If
the checksum is not correct, the packet is assumed to be bogus and is discarded.

MAC frame CRC errors can be caused by a number of factors. Typically


they are caused by either faulty cabling, or as the result of a collision. If the
cabling connecting an Ethernet Adapter or hub is faulty the electric connection
may be on and off many times during a transmission. This “on and off” state can
interrupt parts of a transmission, and damage the signal.

If a collision happens during packet transmission, the signal for the specific
packet will be interrupted, and the resulting received packet will be damaged. If
the signal is partially interrupted during transmission, the CRC checksum that was
calculated by the network adapter will no longer be valid, and the packet will be
flagged as a CRC error and discarded. CRC errors are common on a busy network,
and a small percentage does not reflect a network problem. When the percentage is
large, or when a single station shows a larger percentage of CRC errors, there is
probably a problem that needs to be addressed.

4.3.2 Protocol CRC Checksums

Some protocols have a second checksum for data integrity purposes. This
checksum is calculated on only a portion of the internal data of each packet, and
can give a second and independent check for the validity of the packet’s contents.
Observer calculates this checksum independent of the MAC layer CRC and
displays the results in the decode display. These CRC errors are very rare and can
be caused by malfunctioning software or protocol drivers.

The CRC takes a binary message and converts it to a polynomial, then divides it
by another predefined polynomial called the key. The remainder from this division
is the CRC. The message and the CRC are transmitted together. The recipient of
the transmission performs the same operation and compares the reference CRC with the
received CRC. If they differ, the message must have been mangled. Most localized
corruptions will be detected by this scheme.

4.3.3 Calculating CRC

A longer key gives better error checking. On the other hand, the
calculations with long keys can get fairly involved. Ethernet packets use a 32-bit
CRC, corresponding to a remainder of degree at most 31. Since the degree of the remainder is
always less than the degree of the divisor, the Ethernet key must be a polynomial
of degree 32. A polynomial of degree 32 has 33 coefficients, requiring a 33-bit
number to store it. There is no need to store the highest coefficient, as it always has the value 1.
The key used by Ethernet is 0x04C11DB7. It corresponds to the polynomial:

x^32 + x^26 + ... + x^2 + x + 1 (4.1)

There is one more trick used in packaging CRCs. The CRC is calculated
after appending 32 zero bits to the message. A message with N bits corresponds
to a polynomial of degree N-1, so this operation is equivalent to multiplying the
message polynomial by x^32. If we denote the original message polynomial by
M(x), the key polynomial by K(x) and the CRC remainder by R(x), we have:

M(x) * x^32 = Q(x) * K(x) + R(x) (4.2)

The CRC is sent with the augmented message. At the receiver,

M(x) * x^32 + R(x) = Q(x) * K(x), which indicates no transmission error (4.3)
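The division in Eqs. (4.2) and (4.3) can be checked with a short model of the polynomial arithmetic. The following is an illustrative Python sketch of the pure GF(2) division only; the bit ordering, initial register preset and final complement used by the actual IEEE 802.3 FCS are omitted, and the function names are our own:

```python
ETHERNET_KEY = 0x04C11DB7  # K(x) with the implicit x^32 coefficient dropped

def poly_mod(bits, width=32, key=ETHERNET_KEY):
    """Remainder of the polynomial given by `bits` (MSB first) modulo K(x)."""
    reg = 0
    for b in bits:
        top = (reg >> (width - 1)) & 1            # coefficient shifted out of the window
        reg = ((reg << 1) | b) & ((1 << width) - 1)
        if top:                                   # subtract K(x); in GF(2) this is XOR
            reg ^= key
    return reg

def crc_remainder(message_bits, width=32, key=ETHERNET_KEY):
    """R(x) = M(x) * x^32 mod K(x): append `width` zero bits, then divide."""
    return poly_mod(message_bits + [0] * width, width, key)
```

Appending the computed remainder to the message makes the whole codeword divisible by the key, which is exactly the no-error condition of Eq. (4.3).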

4.3.4 CRC Algorithm

 Ri is the i-th bit of the CRC register.


 Ci is the content of the i-th bit of the initial CRC register, before
any shifts have taken place.
 R1 is the least significant bit (LSB).
 The entries under each CRC register bit indicate the values to be XORed
together to generate the content of that bit in the CRC register.
 Di is the data input, with the LSB input first.
 D8 is the MSB of the input byte, and D1 is the LSB.
 A substitution is made to reduce the table size, such that
Xi = Di XOR Ci.
 The result of the CRC is calculated one bit at a time and the resulting
equations for each bit are examined. This process continues until eight
shifts have occurred. Xi is substituted for the various Di XOR Ci
combinations.
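The eight-shift derivation above is what makes a byte-at-a-time lookup table possible. The following Python sketch is illustrative (plain MSB-first arithmetic, without the FCS bit-reversal details): each table entry pre-computes eight single-bit shifts, and the substitution Xi = Di XOR Ci appears as the XOR of the data bit with the top register bit.

```python
KEY = 0x04C11DB7
MASK = 0xFFFFFFFF

def crc_bit(reg, d):
    """One shift of the CRC register; x = d XOR c_top is the Xi substitution."""
    x = ((reg >> 31) & 1) ^ d
    reg = (reg << 1) & MASK
    return reg ^ KEY if x else reg

# Table entry i = the result of eight bit-shifts starting from register value i << 24
TABLE = []
for i in range(256):
    reg = i << 24
    for _ in range(8):
        reg = crc_bit(reg, 0)
    TABLE.append(reg)

def crc_byte(reg, byte):
    """Advance the CRC register by one whole byte using the pre-computed table."""
    return ((reg << 8) & MASK) ^ TABLE[((reg >> 24) ^ byte) & 0xFF]
```

crc_byte(reg, b) yields the same register value as eight successive crc_bit steps over the bits of b, MSB first, so a whole byte can be consumed per clock or per loop iteration.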

4.4 WRAPPERS

Designers must be able to reuse Intellectual Property (IP) from one design in other
projects and designs. Reuse of IP across different designs of the domain further reduces
time to market and improves overall productivity and quality. Wrappers
can be used to integrate an IP core into another design when the interfaces of the IP
core differ from the interfaces in the current design. A wrapper is an
interfacing component that provides the necessary logic to attach a user-specified
custom IP to a bus in an architecture. Wrappers can also be used in
verification when the verification environment and the DUT have altogether
different types of interfaces.

4.4.1 Host Interface Block (HIB)

The wrapper (Figure 4.3) interfaces the Ethernet MAC with the PCI device.
This interface serves to access the configuration registers and the memory. Direct
Memory Access (DMA) transfers are used to move data to and from memory.
The wrapper can be exchanged for other interface types while keeping the MAC
engine unchanged; the new wrapper must handle all interfacing issues between
host and device. The features supported by the wrapper are:

 Interfacing device and host.


 Use of DMA for data transfer to and from host memory.
 Host writes data to the configuration registers directly. Device target uses the
data and functions accordingly.
 Data frames are written to memory and host master interface.
 Support for storage of 2 frames at any time.

 Error signaling.

The host interface block generates the DMA signals based on the host-generated
signals. During the configuration process the host writes configuration data
to the configuration registers block. This process does not initiate a DMA
transfer, and only one register is accessed at a time. It is
executed by asserting the 'configdata_ready' signal. With this signal high, the DMA
block reads the config register address from the address bus ('reg_addr')
during the next clock cycle and the config data from the data bus ('reg_data')
during the subsequent clock cycle.
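The two-cycle capture just described can be modelled as a small clocked state machine. This is an illustrative Python model, not the thesis's Verilog; the signal names follow the text:

```python
class ConfigCapture:
    """DMA-side capture: address the cycle after configdata_ready, data the cycle after that."""
    IDLE, ADDR, DATA = range(3)

    def __init__(self):
        self.state = self.IDLE
        self.pending_addr = None
        self.regs = {}                    # configuration register file

    def clock(self, configdata_ready, reg_addr, reg_data):
        if self.state == self.IDLE and configdata_ready:
            self.state = self.ADDR        # address arrives on the next cycle
        elif self.state == self.ADDR:
            self.pending_addr = reg_addr  # latch the address from the reg_addr bus
            self.state = self.DATA
        elif self.state == self.DATA:
            self.regs[self.pending_addr] = reg_data   # latch data from the reg_data bus
            self.state = self.IDLE        # only one register per handshake
```

Driving the model for three cycles (ready, then address, then data) writes exactly one configuration register, mirroring the one-register-at-a-time behaviour described above.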

Figure 4.3 Block diagram of Host interface block

During packet transmission the host initiates the DMA transfer. This is
indicated to the DMA block by asserting the dma_transfr signal. While this
signal is high, the DMA block keeps reading data from the data bus
and writing it to its tx_fifo. Once the transfer is over, the dma_transfr signal is
deasserted by the host interface block. During this process the host interface block is
responsible for obtaining the host memory address location from the descriptor.

During packet reception the DMA block initiates the DMA transfer by
issuing dma_req. This request goes to the host through the host interface block; once
the grant is given, the host interface block obtains the host memory address from
the descriptors and asserts the dma_gnt signal to the DMA block, which then
continuously reads data from the receive buffer and puts it on the data bus. This
data in turn goes to host memory through the host interface block.

4.4.2 HIB Testbench algorithm

The DMA logic is verified as shown in the following algorithm. Data transfer


according to the request, and the amount of data that needs to be transferred, are
verified. Grant and release of the DMA are also verified.

1. IDLE:
For the host to start TX flow
 Write first TX descriptor address into TDAR.
 Set Txqueued bit.
2. ARB_REQ:
TX dma would raise a request to arbiter block.
3. DESR READ:
Once TX dma receives grant it
 Asserts frame_req.
 Puts TDAR onto address bus.
 Asserts read request.
 Total number of bytes to be read = 16 (4 dwords of the descriptor table).
 Once TX dma receives frame_req_ack, read operation begins.
 Data transfer is valid when both mac_rdy and host_rdy are high.

 Transaction is completed successfully if frame_comp is asserted.


 If target abort is asserted the transaction is terminated and registers
are cleared.
If own = 1, next state is interrupt
If own = 0, descriptor read = 1 and buffer data need to be read; next state is
ARB_REQ.

4. INT:
If own = 1
 Set interrupt, Txqueue empty.
 Itxqueued is cleared.
If TX ram full = 1
 Set interrupt, transmit buffer full.
 Itxqueued is cleared.
If TX frame status not read
 Set interrupt, status not read.
 Itxqueued is cleared.

5. BUF READ:
If tx_descriptor read = 1 and grant is received
 Buffer address from descriptor table is put on addbus.
 Read request is asserted.
 Byte count is Frame length from descriptor table.
If frame req_ack is received
 Data transfer is valid when both mac_rdy and host_rdy are high.
 Data is written to TX ram.

 If control bit [0] = 0 (indicates last descriptor) and control [1] = 0


(indicates a non-burst transfer), more bytes are to be fetched for the current
frame.
 If control bit [0] = 0 (indicates last descriptor) and control [1] = 1
(indicates a burst transfer), more frames are to be fetched for the burst
transfer.
 Transaction is completed successfully if frame_comp is asserted and
target abort remains deasserted.
 If target abort is asserted the transaction has failed.

6. BUF WRITE:
If rx_descriptor read = 1 and grant is received
 Buffer address from descriptor table is put on addbus.
 Write request is asserted.
 Byte count is Frame length from RX RAM logic.
If frame req_ack is received
 Data transfer is valid when both mac_rdy and host_rdy are high.
 Data is read from RX RAM.
 Frame count determines how many bytes are moved to memory.
 If all frames could not be moved, due to a small buffer, one
more transaction is requested to read the next descriptor table.
 Transaction is completed successfully if frame_comp is asserted and
target abort remains deasserted.
 If target abort is asserted the transaction has failed.
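The descriptor-read decisions of the algorithm above can be condensed into a short sketch. This is illustrative Python only, assuming the control-bit encodings quoted in the text; the function names are our own.

```python
def next_state_after_descr_read(own):
    """Step 3: if the host still owns the descriptor, interrupt; else re-arbitrate."""
    return "INT" if own == 1 else "ARB_REQ"   # own = 0: buffer data still to be read

def after_buffer_read(control, frame_comp, target_abort):
    """Continuation decision in BUF READ (control-bit meanings as in the text)."""
    if target_abort:
        return "FAILED"                   # target abort terminates the transaction
    if not frame_comp:
        return "IN_PROGRESS"              # transaction not yet complete
    if control & 0b01 == 0:               # control[0] = 0
        if control & 0b10 == 0:           # control[1] = 0: fetch more bytes of this frame
            return "FETCH_MORE_BYTES"
        return "FETCH_MORE_FRAMES"        # control[1] = 1: fetch the next frame of the burst
    return "DONE"
```

A testbench can drive these decisions directly to check that every branch of the algorithm is reachable.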

Arbiter
The arbiter arbitrates between the transmit and receive DMA controllers
for access to the bus. Two priority select bits, written by the host into the
configuration registers, determine the relative priority of each DMA
controller. When RXPRI is set, the receive DMA process may preempt
the transmit DMA process (when the Latency Timer expires). When
TXPRI is set, the transmit DMA process may preempt the receive DMA
process (when the Latency Timer expires). When both bits are set, a round-robin
algorithm is used for arbitration. When neither bit is set, no
preemptions occur. Preemption does not occur when either process has
only one dword left to transfer.
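The preemption rules above reduce to a small predicate. The Python below is a hedged sketch (parameter names are illustrative; in the round-robin case we simply allow either requester to take over on timer expiry):

```python
def may_preempt(requester, rxpri, txpri, latency_expired, dwords_left_running):
    """May `requester` ('rx' or 'tx') preempt the currently running DMA process?"""
    if dwords_left_running <= 1:      # never preempt with only one dword left
        return False
    if not latency_expired:           # preemption only when the Latency Timer expires
        return False
    if rxpri and txpri:               # both bits set: round-robin, either may take over
        return True
    if rxpri:
        return requester == "rx"      # only RX may preempt TX
    if txpri:
        return requester == "tx"      # only TX may preempt RX
    return False                      # neither bit set: no preemption
```

This makes the priority-bit semantics directly testable, one rule per branch.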

4.4.3 Processor Interface Block (PIB)

The processor interface block (Figure 4.4) interfaces an 8-bit processor/microcontroller


to the DUT core. The PIB takes care of writing and reading the
internal registers, provides a data buffer, and handles interrupt generation, error
signaling and other handshaking between the host controller and the transmit and
receive blocks. The features supported by the wrapper are:

 To provide access to the internal register (status/control) map for the processor.


 To provide data buffer access to the processor.
 Interrupt generation depending on certain internal conditions.
 Updating of status/control registers.
 Handling, in the wrapper, of mismatches between the data length and the
actual data intended for transmission.
 Error signaling.

Three functional blocks (Figure 4.4) of PIB are:

 Interrupt Controller.

 Prioritizer.
 Initialization.

The Interrupt Controller is responsible for generating interrupts to the processor under


various conditions and for keeping track of them. The Prioritizer is responsible for
arbitrating between CPU and CAN controller accesses; it contains read-counter and
write-counter logic.

(Block diagram: the CPU accesses the RAM and the Tx/Rx blocks through the
PIB, which contains the Interrupt Controller, the Prioritizer and the
Initialization Sequence blocks.)

Figure 4.4 Block diagram of Processor interface block

When the Receiver tries to write (Figure 4.5) while the CPU simultaneously
tries to read, the CPU is granted priority for three consecutive message-object
accesses by means of a counter; once the counter reaches its terminal count
(Tc_Rd), Pib_Busy is asserted so that the CPU no longer accesses the RAM and
the Receiver is served. The same conditions apply in the other direction: when the
Transmitter tries to read while the CPU simultaneously tries to write, three
consecutive message-object accesses are provided by a counter, and whenever the
counter reaches the terminal count of 3, Pib_Busy is asserted.
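One plausible reading of this counter scheme can be modelled compactly. The Python below is an illustration only, with the terminal count fixed at 3 and names loosely following the text; it is not the RTL:

```python
class Prioritizer:
    """During contention the CPU gets `tc` consecutive message-object accesses,
    then Pib_Busy is asserted so the MAC-side block (Rx write / Tx read) is served."""

    def __init__(self, tc=3):
        self.tc = tc
        self.count = 0
        self.pib_busy = False

    def access(self, cpu_req, mac_req):
        """Return who is granted this message-object access."""
        if not (cpu_req and mac_req):         # no contention: serve whoever asks
            self.count = 0
            self.pib_busy = False
            return "cpu" if cpu_req else ("mac" if mac_req else None)
        if self.pib_busy:                     # MAC side is served, then the window restarts
            self.pib_busy = False
            self.count = 0
            return "mac"
        self.count += 1
        if self.count == self.tc:             # terminal count (Tc) reached
            self.pib_busy = True              # next contended access goes to the MAC side
        return "cpu"
```

Under sustained contention this yields the repeating grant pattern CPU, CPU, CPU, MAC, matching the three-accesses-then-busy description above.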

The Initialization Sequence block is responsible for sending synchronization-related


parameters, such as Tseg and the baud rate prescaler, to the Receiver block. The CAN
controller validates the PHY information after completion of initialization.

Figure 4.5 Processor interface block I/O signals

4.4.4 PIB Testbench algorithm

Writing into and reading from the host interface registers is performed here.
The following configuration reads and writes are verified by writing into a
particular register and reading back from it:

 Memory write (DMA writes data to memory).


 Memory read (DMA reads data from memory).
 DMA register write.
 DMA register read.
 MAC Configuration register write.
 MAC configuration register read.
 DMA transfers used for data transfer to and from host memory.

 Host writes data to the configuration registers directly and PCI target
interface is used.
 Data frames are written to and from memory and PCI master
interface is used.
 Support for storage of 2 frames at any time.

4.5 CHAPTER CONCLUSION

The PHY eVC is an in-house development that satisfies all verification IP prerequisites.


Specman 'e' is used for the implementation of the eVCs and Verilog for the reusable tasks;
details are presented in Appendices 1 and 2. The CRC methodology and algorithm are
presented as an example of a reusable Verilog task, with details in Appendix 3.
Wrappers for the reuse of existing IPs are demonstrated through the Processor interface
block and the Host interface block. Signal details and testbench algorithms are
written to suit reusability without compromising quality and productivity.

CHAPTER 5

TESTING

A test plan is the document used to define each test case. It should be
written before creating the test cases, since this document is used to identify the
number of tests required to fully verify a specific design. Before creating the test
plan, the following information should be collected and listed:

 All configuration attributes.


 All variations of every data item.
 All the important attributes of each data item that you would like to control,
along with the range of values for each generated data item.
 All interesting sequences for every device-under-test (DUT) input port.
 All corner cases to be tested.
 All error conditions to be created and all erroneous inputs to be injected.

This information is used to identify the verification targets or goals. Based


on those goals, test cases are created and documented in the test plan. For
reusability, it is desirable to separate the verification goals from the test case
implementation. The same verification goal could be achieved at the different
levels, such as the block or chip level, and through different methods, such as
directed tests or random tests. The goals are reusable, but the implementations of
the test cases are not.

5.1 TEST PLAN FOR ETHERNET MAC RECEIVER

The environment components used in the verification of the Receiver


(Figure 5.1) are: the Receiver DUT; a BFM to drive different types of frames and
various inputs to the DUT; a Monitor to observe the various inputs and outputs of
the Receiver DUT; a Collector to collect the data from the output of the DUT; and
a Scoreboard to compare the data driven by the BFM with the output of the DUT.

(Block diagram: the PHY-side BFM drives frames into the Receiver DUT over the
PHY protocols; a Monitor observes the DUT's inputs and outputs, a Collector
gathers the DUT output, and the Scoreboard compares the BFM stimulus against
the collected output.)

Figure 5.1 Receiver Environment

Figure 5.2 displays a detailed plan for verification of MAC receiver stages.

(Flowchart: on the physical layer side, the receiver waits for
Phy_Mac_Carrier_sense; if the receiver is not enabled (RCB bit 52 = 0), no
frames are received. Otherwise the preamble and SFD are received and receiver
valid becomes '1'. The DA and SA fields are checked, classified as individual,
multicast or broadcast, and matched against the MAC address field. With RCB
bit 49 = 0 (length/type enable) the Length/Type field is checked, distinguishing
too-short frames, data frames, control frames (full duplex mode only), too-long
frames and VLAN frames; with bit 49 = 1 (length/type disable) the field passes
to the MAC client unchecked. The data octets are matched against the Length
field (length error), the octet alignment is checked (alignment error) and the
FCS is checked (FCS error); pad bits are stripped, the frame is stored in
RXRAM (signalling the transmitter if full), and DA, SA, Length, Data and a
Received-OK status are passed to the MAC client. A frame failing any check is
discarded.)

Figure 5.2 Different Stages of Receiver Verification
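The flow of Figure 5.2 amounts to a chain of address, length and FCS checks. The following Python sketch condenses that classification logic for illustration; it is not the RTL, the field handling is simplified, and the function names are our own:

```python
BROADCAST = b"\xff" * 6

def classify_da(da, station_mac, promiscuous=False):
    """Accept or discard a frame based on its destination address."""
    if promiscuous or da == BROADCAST or da == station_mac:
        return "accept"
    if da[0] & 1:                 # group bit set: multicast
        return "accept"           # subject to multicast filtering in the real MAC
    return "discard"

def check_frame(da, length_field, payload, fcs_ok, station_mac):
    """Mirror of the main receiver checks: DA, length match, FCS."""
    if classify_da(da, station_mac) == "discard":
        return "discard"
    if length_field <= 1500 and length_field != len(payload):
        return "length_error"     # data octets do not match the Length field
    if not fcs_ok:
        return "fcs_error"
    return "received_ok"
```

Each of the test cases in Section 5.2 exercises one or more of these decision points (address classification, length mismatch, FCS failure) on the actual DUT.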



5.2 TEST CASES

The test cases to check the functionality of the Ethernet Mac Receiver are
planned according to specification as follows:

1. Check the Reset condition of Receiver by making the RESET Bit 0 or 1 in


RCB Block.
2. Check the Receiver Enable and Disable condition (RECEN=1).
3. Check the clock reception in case of GMII/MII.
4. Check the transfer of MII to GMII mode and vice versa.
5. Check for the GMII Mode.

5.2.1 Check for GMII Mode

5.2.1.1 Full Duplex Mode

 Check the Behavior of the receiver with the physical layer protocol
conditions like :
1. Carrier sense signal.
2. Receiver data Valid signal.
3. Receiver error.
 Check for the Normal Data/Jumbo Data Reception on 8 bit Data Buses
from Phy side made for Mac station.
 Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming into the data frame under several
conditions.
 Check the Receiver behavior in case of error in SFD.

 Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
 Reception of Control frame i.e. Pause Frame.
 Reception of VLAN Tagged Frame.
 Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
 Check the behavior of the receiver in case of too long frame/ too short
length frame (Indication to Host side, Removal of padding bits in short
frame).
 Length Error in case of matching the Number of Data Octets with the
length field.
 Check the Data Alignment Error in different circumstances.
 In case of RXRAM Full, Receiver behavior (Discarding the previous Frame
and signals to transmitter to transmit the Pause Frame).

5.2.1.2 Half Duplex Mode

 Check the Behavior of the receiver with the physical layer protocol
conditions like :

1. Carrier sense signal.


2. Collision detection
(i) Late Collision Condition.
(ii) Normal Collision under different instance of Data Reception.
3. Receiver data Valid signal.
4. Receiver error.

 Check for the Normal Data/Jumbo Data Reception made for Mac station.
 Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming into the data frame under several
conditions.
 Check the behavior of the Receiver in case of Error in the SFD for burst of
data/ packet of data.
 Check the Burst of Data reception (differentiating the Extension bit from
the Data bit).
 Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
 Reception of VLAN Tagged Frame.
 Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
 Check the behavior of the receiver in case of too long frame/ too short
length frame (Indication to Host side, detection of padding bits in short
frame).
 Length Error in case of matching the Number of Data Octets with the
length field.
 Check the Data Alignment Error in different circumstances.
 Check the Successful Data reception.
 In case of RXRAM Full, Receiver behavior (Missing Packet signal to host
side).

5.2.2 Check for MII Mode

5.2.2.1 10/100 Mbps (Full Duplex Mode)

 Check the Behavior of the receiver with the physical layer protocol
conditions like :
1 Carrier sense signal.
2 Receiver data Valid signal.
3 Receiver error.
 Check for the Normal Data/Jumbo Data Reception on 4 bit Data buses from
Phy Side made for Mac station.
 Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming into the data frame under several
conditions.
 Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
 Reception of Control frame i.e. Pause Frame.
 Reception of VLAN Tagged Frame.
 Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
 Check the behavior of the receiver in case of too long frame/ too short
length frame (Length Error Indication to Host side, detection of padding
bits in short frame).
 Length Error in case of matching the Number of Data Octets with the
length field.
 Check the Data Alignment Error in different circumstances.
 Check the Successful Data reception.

 In case of RXRAM Full, Receiver behavior (Discarding the previous


Frame and signals to transmitter to transmit the Pause Frame).

5.2.2.2 Half Duplex Mode

 Check the Behavior of the receiver with the physical layer protocol
conditions like :
1 Carrier sense signal.
2 Collision detection
(i) Late Collision Condition.
(ii) Normal Collision under different instance of data Reception.
3 Receiver data Valid signal.
4 Receiver error.
 Check for the Normal Data/Jumbo Data Reception made for Mac station.
 Check the different fields (Preamble, SFD, DA, SA, Length/Type, Data
frame body, CRC and EOF) coming into the data frame under several
conditions.
 Check the Receiver behavior for the Broadcast, Multicast data frame (DA
and SA Field).
 Reception of VLAN Tagged Frame.
 Check the CRC coming in the different type of Frames with the CRC
calculated in the receiver portion (FCS Error).
 Check the behavior of the receiver in case of too long frame/ too short
length frame (Length Error indication to Host side in case of too long
frame, detection of padding bits in short frame).
 Length Error in case of matching the Number of Data Octets with the
length field.
 Check the Data Alignment Error in different circumstances.

 Check the Successful Data reception.


 In case of RXRAM Full, Receiver behavior (Missing Packet signal to
host side).

Other Checks include:

 Check behavior of Receiver for Promiscuous Mode for the Different Frame
reception.
 Check the Successful Data reception.
 Check for end of Reception (Receive status).

5.3 TEST CASE DETAILS

Figures 5.3 to 5.7 display the configuration details of the test plan.

(Diagram: the receiver operates in either the Promiscuous or the
Non-Promiscuous state.)

Figure 5.3 Modes of Receiver



(Diagram: Configuration Register bits 79:78 select the Ethernet MAC speed
(00 = 10 Mbps, 01 = 100 Mbps, 10 = 1000 Mbps); RCB bit 50 selects Half
Duplex when '1', with a 4-bit data width from the PHY, or Full Duplex when
'0', with an 8-bit data width from the PHY.)

Figure 5.4 MII/GMII (Full/Duplex) at different speeds

(Diagram: on RXRAM Full during reception, in Half Duplex mode the frame is
discarded and a missing-frame indication is given to the MAC client; in Full
Duplex mode the frame is discarded and the receiver passes the pause-frame
timer value, the destination address and a request to send the pause frame to
the transmitter.)

Figure 5.5 RXRAM Full condition at the time of reception of frame for
Half/Full Duplex mode

(Diagram: a collision detected from the PHY layer before the slot time causes
the frame to be discarded; a collision after the slot time is flagged as a late
collision and the frame is discarded.)

Figure 5.6 Collision condition



(Diagram: if Rx_err is asserted together with receiver data valid, the frame is
discarded and a status indication is given; if Rx_err is asserted without
receiver data valid, a false-carrier indication is given.)

Figure 5.7 Rx Error from Phy side

The Ethernet MAC supports:


1- GMII/MII Mode.
2- Half/Full Duplex Modes.
3- Different Speeds (10/100/1000 Mbps).
4- Different Frames:
 Control Frame (VLAN and Pause Frame).
 Data Frame (Too short/Too long/ burst).
5- Multicast, Broadcast and Unicast Frames.
6- Different Errors: Length, FCS, Alignment and Missed Packet.
7- Collision detection.

5.4 DIRECTORY STRUCTURE

The directory structure for Ethernet IP Verification is shown below.

<IP_ROOT>
rtl - *.v
synthesis
fpga
scripts - *.csh *.tcl *.pl ( synthesis scripts )
run - *.csh ( tool directory )
constraints - *.ucf ( synthesis constraints )
gate - *.v ( gate netlist from DC )
reports – ( synthesis reports )
asic

scripts - *.csh *.tcl *.pl ( synthesis scripts )


run - *.csh ( tool directory )
constraints - *.tcl ( synthesis constraints )
db - *.db ( db files from DC )
gate - *.v ( gate netlist from DC )
postlayout - *.v ( postlayout netlist from DC )
testbench
toplevel
drivers - *.v *.e ( signal level drivers )
transactors - *.v *.e ( transaction level drivers )
models - *.v *.e ( bus functional models )
monitors - *.v *.e ( signal level monitors )
checkers - *.v *.e ( transaction level checkers )
top - *.v ( top level testbench files and clock/reset drivers )
stand_alone
<submodule> - e.g. rx, tx, dma
simulation
run - *.csh ( test run scripts )
tests - *.v *.e *.sv ( testcase stimulus )
config - *.cfg ( test configuration files )
drivers - *.v *.e ( signal level drivers )
transactors - *.v *.e ( transaction level drivers )
models - *.v *.e ( bus functional models )
monitors - *.v *.e ( signal level monitors )
checkers - *.v *.e ( transaction level checkers )
top - *.v ( top level testbench file and clock/reset
drivers )
simulation
run - *.csh ( test run scripts )
tests - *.v *.e *.sv ( testcase stimulus )
config - *.cfg ( test configuration files )
regression
run – makefile ( to run regression )
scripts - *.csh *.pl ( regression scripts )
config - *.cfg ( config files )
reports - *.rpt ( regression reports )
coverage
reports - *.rpt ( coverage reports )
docs
design
verification
methodology

5.5 COVERAGE ANALYSIS

Coverage analysis measurements are:

 Statement coverage
 Branch coverage
 Condition and Expression coverage
 Path coverage
 Toggle coverage
 Triggering coverage
 Signal-tracing coverage
 FSM coverage

The term coverage metrics is often used by design engineers to refer to the overall


set of coverage analysis measurements mentioned above. Although statement
coverage is the least powerful of all the coverage metrics, it is probably the easiest
to understand and use. It gives a very quick overview of which parts of the design
have failed to achieve 100% coverage and where extra verification effort is
needed.

Branch coverage is invaluable to the designer as it offers diagnostics as to


why certain sections of HDL code, containing zero or more assignment
statements, were not executed. This coverage metric measures how many times
each branch in an IF or CASE construct was executed and is particularly useful in
situations where a branch does not contain any executable statements.

Condition coverage verifies that the test bench has exercised all combinations
of the sub-expressions used in complex branches. If one complete
branching statement follows another branching statement in a sequential block of
HDL code, then a series of paths can occur between the blocks. Path coverage
measures how many of these combinations were actually executed during the
simulation phase.

In Verilog code, toggle coverage is known as variable toggle coverage.


It checks that each bit in the registers and nets of a module changes polarity
(toggles) and is not stuck at one particular level. It shows the amount of activity
within the design and helps to pinpoint areas that have not been adequately
verified by the test bench.

Triggering coverage is normally applied to designs written using the VHDL


hardware description language. It checks that every signal in the sensitivity list of
a PROCESS or a WAIT statement changes without any other signal changing at
the same time.

In Verilog code, signal-tracing coverage is known as variable trace coverage; it checks that variables (i.e. nets and registers) and combinations of variables take a range of values.

FSM Metrics are:

 Visited state coverage, ensures that every state of an FSM has been visited.
 Arc coverage, ensures that each arc between FSM states has been traversed.
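The two FSM metrics can be sketched as bookkeeping over a simulated state trace (the state and arc names below are hypothetical, chosen to resemble a MAC receiver, and are not taken from the thesis design):

```python
# Visited-state coverage: fraction of FSM states seen in the trace.
# Arc coverage: fraction of legal state-to-state transitions exercised.
def fsm_coverage(trace, states, arcs):
    visited = set(trace)
    taken = set(zip(trace, trace[1:]))  # consecutive pairs = arcs taken
    state_cov = len(visited & states) / len(states)
    arc_cov = len(taken & arcs) / len(arcs)
    return state_cov, arc_cov

states = {"IDLE", "PREAMBLE", "DATA", "FCS"}
arcs = {("IDLE", "PREAMBLE"), ("PREAMBLE", "DATA"),
        ("DATA", "FCS"), ("FCS", "IDLE"), ("DATA", "IDLE")}
trace = ["IDLE", "PREAMBLE", "DATA", "FCS", "IDLE"]
s, a = fsm_coverage(trace, states, arcs)
print(s, a)  # 1.0 0.8 -- the DATA->IDLE abort arc was never taken
```

A trace can therefore reach 100% state coverage while arcs remain uncovered, which is why both metrics are tracked separately.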

Statement, condition, branch, FSM and path coverage are important for behavioral designs. Triggering, toggle, statement, condition, branch, FSM and path coverage are important for RTL designs. Toggle coverage is essential for gate-level designs.

5.5.1 COVERAGE GOALS

Line coverage
The line coverage to be achieved is 100%.

Condition Coverage
The condition coverage to be achieved is 85%.

FSM Coverage
The FSM coverage to be achieved is 100%.

Toggle Coverage
The toggle coverage to be achieved is 90%.

The average coverage to be achieved is 90%.

5.5.2 COVERAGE TOOLS/METHODOLOGY FOLLOWED

VCS Coverage Metrics (vcm) is used for coverage.

5.5.3 COVERAGE OBTAINED

Line coverage
The line coverage obtained is 100%.

Condition Coverage
The condition coverage achieved is 90%.

FSM Coverage
The FSM coverage obtained is 95%.

Toggle Coverage
The toggle coverage obtained is 90%.

5.5.4 COVERAGE ANALYSIS

The average coverage achieved is around 90%. Because random testing was not performed, the coverage achieved is not close to 100%. The shortfall is largest in condition coverage and toggle coverage; this is expected, as not all combinations in a truth table will occur.
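The goal and obtained figures of Sections 5.5.1 and 5.5.3 can be compared mechanically; the sketch below (metric names are shorthand, not vcm report fields) reproduces the analysis:

```python
# Goals from Section 5.5.1 versus figures obtained in Section 5.5.3.
goals    = {"line": 100, "condition": 85, "fsm": 100, "toggle": 90}
obtained = {"line": 100, "condition": 90, "fsm": 95,  "toggle": 90}

average = sum(obtained.values()) / len(obtained)
shortfalls = [m for m in goals if obtained[m] < goals[m]]
print(average)     # 93.75 -- "around 90%" as reported
print(shortfalls)  # ['fsm'] is the only metric below its stated goal
```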

5.6 CHAPTER CONCLUSION

A detailed test plan is made and executed for the different stages of the MAC Receiver in the MAC Receiver environment. Test cases for GMII and MII modes in half-duplex and full-duplex operation are performed as per the IEEE 802.3 standard. The test cases are planned according to the specification as follows:

 Check the reset condition of the Receiver
 Check the Receiver enable and disable conditions
 Check the clock reception in GMII/MII mode
 Check the transfer from MII to GMII mode and vice versa
 Check the GMII mode

The directory structure for Ethernet IP verification is planned as per the requirement. Coverage analysis indicates the efficiency of the methodology: there is hardly any difference between the set goals and what is achieved. Toggle coverage is not as important as statement, FSM and condition coverage for behavioral designs.

CHAPTER 6

CONTRIBUTIONS AND FUTURE WORK

In this work we have discussed the design and verification of a reusable verification environment for verifying an Ethernet IP core. Verification reuse is made possible with Phy (an in-house development), reusable Verilog tasks and wrappers. A Specman-based verification methodology is used in this work. The advantages of the Specman ‘e’ language are exploited to write test benches and BFMs. Verilog tasks such as CRC are designed for reuse in different environments for different protocols. Wrappers are designed for reuse of existing IPs in different environments with different interfaces.

One of the major objectives of the proposed verification environment is that engineers who are not Specman-savvy should be able to use the environment easily to develop tests. This is achieved through Verilog tasks built in a layered way using Specman Elite macros; CRC is one such example of a reusable Verilog task.
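A reusable CRC task also lends itself to a reference model in any language. As a hedged illustration (a behavioral sketch, not the thesis's layered Verilog task), the Ethernet frame check sequence can be modeled bit-serially:

```python
# Illustrative Python model: the Ethernet FCS is the reflected CRC-32
# with polynomial 0xEDB88320, initial value 0xFFFFFFFF and a final
# inversion of the register.
def ethernet_crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# 0xCBF43926 is the standard CRC-32 check value for the ASCII
# string "123456789".
print(hex(ethernet_crc32(b"123456789")))  # 0xcbf43926
```

A model like this gives the test bench an independent expected value to score the DUT's FCS against, which is the role the reusable CRC task plays in the environment.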

The physical layer device (PHY) implements perhaps the most complex layer of the OSI architecture. The Ethernet eVC is built with the Phy as a separate eVC and the host as a task-driven Verilog bus functional model (BFM). This allows the creation of a virtual host environment using a combination of the Verilog BFM and the eVC.

Through the constraint mechanism it was a simple matter to direct the traffic to be, say, of one burst length. Different frames and different configurations of the core were tested. The complete verification suite in the Specman Elite environment comprised more than 56 test cases plus randomization of the packet. This is an extremely small number of tests given the complexity of the design, and it enabled us to obtain unparalleled coverage with a minimum number of tests and lines of code.

Team size and total design hours are small. The average coverage is around 90%. The shortfall is largest in condition coverage and toggle coverage; this is expected, as not all combinations in a truth table will occur.

Wrappers map interfaces to be consistent for reused IP. Transaction-level wrappers are used for simplicity, as the design is regular, with simple interfaces and a simple communication architecture. The coverage obtained with the proposed wrappers is quite close to that of the environment built with components such as MDIO and MII. We have demonstrated environment reuse with wrappers through examples for the host interface and the processor interface. All signal details and test benches are designed to test the design thoroughly with wrappers. The effectiveness of the design with wrappers is indicated by the coverage statistics.
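Outside HDL, the wrapper idea reduces to a thin adapter that renames and regroups fields so a reused component sees the interface it expects. The sketch below uses hypothetical signal names, not those of the thesis design:

```python
# A transaction-level wrapper simply remaps host-side fields onto the
# interface the reused IP core expects; no protocol logic is added.
def host_to_core_wrapper(host_txn: dict) -> dict:
    return {
        "tx_data":  host_txn["payload"],
        "tx_valid": host_txn["req"],
        "tx_last":  host_txn["eop"],
    }

txn = {"payload": 0xAB, "req": 1, "eop": 0}
print(host_to_core_wrapper(txn))
# {'tx_data': 171, 'tx_valid': 1, 'tx_last': 0}
```

Keeping the wrapper purely a renaming layer is what makes the underlying test benches and coverage collection reusable across host and processor interfaces.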

The extensibility of the e language, the macro facility and Specman Elite's built-in generator were key elements that enabled this approach to succeed in a short duration and with a smaller team effort compared to doing the same in a complete Verilog environment.

Two open resources that are of great help to academia and industry are listed in Appendices 4 and 5. The free tools are thoroughly evaluated and their performance is measured with proper parameters.

Beyond using verification components (VIPs), the next logical step in improving verification productivity is to reuse verification plans and the verification environments that carry them out. Given that the success of a verification project relies heavily on the completeness and accurate implementation of a verification plan, facilitating verification plan reuse improves the completeness of verification as well as speeding the actual process of carrying out verification activities.

Multiple factors must be considered in the implementation and delivery of this concept:

 Its usefulness across diverse architectures and different levels of abstraction
 Its usefulness moving from block-level to system-level verification
 The requirements it imposes on building VIPs

REFERENCES

1. Accellera Home: System Verilog 3.1a LRM: (2004): http://www.accellera.org/home

2. Adrian Evans, Allan Silburt, Gary Vrckovnik, Thane Brown, Mario Dufresne,
Geoffrey Hall, Tung Ho, Ying Liu, (1998) “ Functional Verification of Large
ASICs” -ACM-DAC98 - 06/98 San Francisco, CA USA

3. Ali Habibi, and Sofiène Tahar, (2006) “ Design and Verification of SystemC,
Transaction-Level Models”-IEEE transactions on Very Large Scale Integration
(VLSI) Systems, vol. 14, no. 1

4. Allon Adir, Eli Almog, Laurent Fournier, Eitan Marcus, Michal Rimon,
Michael Vinov, and Avi Ziv, IBM Research Lab, Haifa, (2004) “Genesys-Pro:
Innovations in Test Program Generation for Functional Processor Verification”
- Design & Test of Computers, IEEE ,Volume 21, Issue 2, Page(s): 84 - 93

5. Alon Gluska , (2003) “Coverage-Oriented Verification of Banias” - IEEE-


Design Automation Conference, Proceedings, 2-6 June 2003, Page(s): 280 - 285

6. Amir Hekmatpour, Alley, C.; Stempel, B.; Coulter, J.; Salehi, A.; Shafie, A.;
Palenchar, C.,(2005) “A heterogeneous functional verification platform”-
Custom Integrated Circuits Conference, Proceedings of the IEEE 2005 Volume ,
Issue , 18-21 Sept. 2005 Page(s): 63 – 66

7. Amir Hekmatpour, James Coulter, (2003) "Coverage-Directed Management and


Optimization of Random Functional Verification," itc, pp.148, International
Test Conference 2003 (ITC'03)

8. Ananth Dahan.; Geist, D.; Gluhovsky, L.; Pidan, D.; Shapir, G.; Wolfsthal, Y.;
Benalycherif, L.; Kamidem, R.; Lahbib, Y , (2005) “Combining system level
modeling with assertion based verification”.- Quality of Electronic Design,
ISQED 2005. Sixth International Symposium on Volume , Issue , 21-23 March
2005 Page(s): 310 - 315

9. Andrea Fedeli, Fummi, F. Pravadelli, G., STMicroelectron., Agrate, (2007)


“Properties Incompleteness Evaluation by Functional Verification”- IEEE
Transactions on Computers, April 2007,Volume: 56, Issue: 4, 528-544

10. Andrew Piziali, Cadence Design Systems, (2006)”Verification Planning to


Functional Closure of Processor-Based SoCs”, , DesignCon 2006

11. Aniruddha Baljekar, NXP Semiconductors India Pvt. Ltd., Bangalore, INDIA,
(2006), “Re-Use of Verification Environment for Verification of Memory
Controller”, www.design-reuse.com/articles/18329/verification-memory-
controller.html

12. Anjali Vishwanath, Ranga Kadambi Infineon Technologies Asia Pacific Pte Ltd
Singapore , (2007) “Verification Planning for Core based Designs” -
www.design-reuse.com/articles/16141/verification-planning-for-core-based-
designs.html

13. Bellon, C. Velazco, R., Laboratory "Circuits et Systemes" - Institut IMAG,


Saint Martin D'heres Cedex – France, (2006) “Taking into Account
Asynchronous Signals in Functional Test of Complex Circuits”, Design
Automation, 1984. 21st Conference on 25-27 June 1984,490- 496, Current
Version Published: 2006-02-06

14. Bentley, B., Intel Corp., Hillsboro, OR, USA, (2002) “High level validation of
next-generation microprocessors”- High-Level Design Validation and Test
Workshop, 2002. Seventh IEEE International, 27-29 Oct. 2002, 31- 35

15. Bob Bentley, (2001) "Validating the Intel Pentium 4 Microprocessor," dac,
pp.244-248, 38th Conference on Design Automation (DAC'01)

16. Brendan Mullane and Ciaran MacNamee, Circuits and System Research Centre
(CSRC), University of Limerick, Limerick, Ireland, (2006) “Developing a
Reusable IP Platform within a System-on-Chip Design Framework targeted
towards an Academic R&D Environment”: www.design-
reuse.com/ipbasedsocdesign/slides_2006-csrc_01.html

17. Byeong Min, Gwan Choi, (2001) "RTL Functional Verification Using
Excitation and Observation Coverage," hldvt, pp.58, Sixth IEEE International
High-Level Design Validation and Test Workshop (HLDVT'01)

18. Charles H. P. Wen, Li-C. Wang, Kwang-Ting Cheng, (2006) "Simulation-Based


Functional Test Generation for Embedded Processors"- IEEE Transactions on
Computers, vol. 55, no. 11, pp. 1335-1343, Nov. 2006

19. F. Corno, G. Cumani, M. Sonza Reorda, G. Squillero, (2003) "Fully Automatic


Test Program Generation for Microprocessor Cores," date, vol. 1, pp.11006,
Design, Automation and Test in Europe Conference and Exhibition (DATE'03)

20. Dejan Markovic, Chen Chang Richards, B. So, H. Nikolic, B. Brodersen,


R.W., California Univ., Los Angeles, (2007) “ASIC Design and Verification in
an FPGA Environment”- Custom Integrated Circuits Conference, CICC '07.
IEEE, 16-19 Sept. 2007, PP: 737-740

21. Dinos Moundanos, Abraham, J.A., Hoskote, Y.V., Comput. Eng. Res. Center,
Texas Univ., Austin, TX, (2002) “Abstraction techniques for validation
coverage analysis and test generation”- IEEE Transactions on Computers, Jan
1998, Volume: 47, Issue: 1, PP: 2-14

22. Dongwan Shin, Abdi, S. Gajski, D.D., University of California , (2004)


“Automatic generation of bus functional models from transaction level models”-
Design Automation Conference, Proceedings of the ASP-DAC 2004. Asia and
South Pacific, 27-30 Jan. 2004, pp: 756- 758

23. Eduard Cerny. Synopsys, Inc. Marlborough, USA , Dmitry Korchemny Intel
Corp , (2007) ”Using SystemVerilog Assertions for Creating Property-Based
Checkers”: www.eda.org/ovl/pages/pdfs/dvcon07_cerny.pdf

24. Eugene Zhang, E. Yogev, E. Cisco Syst. Inc., USA, (1997) “Functional
verification with completely self-checking tests”- Verilog HDL Conference,
IEEE International, 31 Mar-3 Apr 1997,pp: 2-9, Santa Clare, CA, USA, Current
Version Published: 2002-08-06

25. Fallah, F.; Devadas, S.; Keutzer, K., (2001) “OCCOM: efficient computation of
observability-based code coverage metrics for functional verification”-
Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions
on, Volume 20, Issue 8, Aug 2001, Page(s): 1003 – 1015

26. Falconeri, G.; Naifer, W.; Romdhane, N., (2005) “Common reusable
verification environment for BCA and RTL models Design”- Automation and
Test in Europe, Proceedings, Volume , Issue , 7-11 March 2005 Page(s): 272 -
277 Vol. 3

27. Graham Pullan, Whittle Laboratory, Cambridge University, (2008)
“Accelerating CFD with graphics hardware”:
www.dspace.cam.ac.uk/bitstream/1810/202012/1/Graham_Pullan.pdf

28. Gregg D. Lahti, Tim L. Wilson, Intel Corporation, (1999) “Designing


Procedural-Based Behavioral Bus Functional. Models for High Performance
Verification”: www.gateslinger.com/chiphead/snug99_vhd_bfm.pdf

29. J.P. Grossman, John K. Salmon, C. Richard Ho, Douglas J. Gerardo, (2007) “a
special-purpose machine for molecular dynamics simulation”- International
Symposium on Computer Architecture, Proceedings of the 34th annual
international symposium on Computer architecture, San Diego, California, USA,
SESSION: Special purpose to warehouse computers, Pages: 1 – 12

30. Han Ke; Deng Zhongliang; Shu Qiong, (2007) “Verification of AMBA Bus
Model Using SystemVerilog”- Electronic Measurement and Instruments, 2007.
ICEMI apos;07. 8th International Conference on, Volume , Issue , Aug. 16
2007-July 18 2007 Page(s):1-776 - 1-780

31. Hannes Froehlich, Verisity Design, (2002) “Increased Verification Productivity


through extensive Reuse”: www.design-reuse.com/articles/7355/increased-
verification-productivity-through-extensive-reuse.html

32. Hans Eveking, Braun, M. Schickel, M. Schweikert, M. Nimbler, V.,


Comput. Syst. Group, Darmstadt Univ. of Technol., Darmstadt, (2007) “Multi-
Level Assertion-Based Design Formal Methods and Models for Codesign”-
MEMOCODE 2007. 5th IEEE/ACM International Conference on, May 30
2007-June 2 2007, On page(s): 85-86

33. Hao Shen Yuzhuo Fu Sch. of Microelectron., Shanghai Jiao Tong Univ.,
China, (2005) “Priority directed test generation for functional verification using
neural networks”, Design Automation Conference, Proceedings of the ASP-
DAC 2005. Asia and South Pacific, 18-21 Jan. 2005, Volume: 2, On page(s):
1052- 1055 Vol. 2

34. Hekmatpour, A. Alley, C. Stempel, B. Coulter, J. Salehi, A. Shafie, A.


Palenchar, C., Embedded Processor Dev., IBM Microelectron., Research
Triangle Park, NC, USA, (2005) “A heterogeneous functional verification
platform”, Custom Integrated Circuits Conference, Proceedings of the IEEE
2005, 18-21 Sept. 2005, On page(s): 63- 66

35. IEEE, (2000) “Ethernet-Mac, CSMA/CD Access method and Physical Layer
specification”: IEEE Std 802.3, 2000 Edition

36. Indradeep Ghosh, Sekar, K. Boppana, v. Fujitsu Labs., America Inc.,


Sunnyvale, CA , (2002) “Design for verification at the register transfer level”-
Design Automation Conference, Proceedings of ASP-DAC 2002. 7th Asia and
South Pacific and the 15th International Conference on VLSI Design.
Proceedings. 2002, On page(s): 420-425

37. Ivan Petkov, Amblard, P. Hristov, M. Jerraya, A. TIMA Lab., IMAG,


Grenoble, France, (2005) “Systematic design flow for fast hardware/software
prototype generation from bus functional model for MPSoC”- Rapid System
Prototyping, (RSP 2005). The 16th IEEE International Workshop on,
Publication Date: 8-10 June 2005, On page(s): 218- 224

38. Jae-Gon Lee; Woong Hwangbo; Seonpil Kim; Chong-Min Kyung , (2005)
“Top-down implementation of pipelined AES cipher and its verification with
FPGA based simulation accelerator”, ASIC, ASICON 2005. 6th International
Conference On, Volume 1, Issue , 24-0 Oct. 2005 Page(s):68 - 72

39. Janick Bergeron, Qualis Design Corporation, (2003) “Writing Testbenches,


Functional Verification of HDL Models”- Kluwer Academic Publishers

40. Jason C. Chen, Synopsys Professional Services, Synopsys Inc,(2007)


“Applying CRV to Microprocessor Verification”:
http://www.snug-universal.org

41. Jayanta Bhadra, Magdy S. Abadir, Li-C. Wang, Sandip Ray, (2007) "A Survey
of Hybrid Techniques for Functional Verification," IEEE Design and Test of
Computers, vol. 24, no. 2, pp. 112-122, June 2007

42. Jean Yves Brunel, J.-Y.; Di Natale, M.; Ferrari, A.; Giusto, P.; Lavagno, L.,
( 2004) “SoftContract: an assertion-based software development process that
enables design-by-contract”- Design, Automation and Test in Europe
Conference and Exhibition, Proceedings, Volume 1, Issue , 16-20 Feb. 2004
Page(s): 358 - 363 Vol.1

43. John Penix, Baraona, P.; Alexander, P., (1995) “Classification and retrieval of
reusable components using semantic features”- Knowledge-Based Software
Engineering Conference, 1995. Proceedings., 10th, 12-15 Nov 1995,
Page(s): 131 – 138

44. John Penix, Alexander, P., System Sciences, (1998) “Using formal
specifications for component retrieval and reuse”- Proceedings of the Thirty-
First Hawaii International Conference on, Volume 3, Issue , 1998 Page(s):356 -
365 vol.3

45. Jun Yuan, (2002) “Symbolic Methods in Simulation-based Verification”-


Dissertation, Doctor of Philosophy, The University of Texas AT AUSTIN,
August 2002

46. Kai-Hui Chang, Yu-Chi Su, Wei-Ting Tu, Yi-Jong Yeh, and Sy-Yen Kuo,
Department of Electrical Engineering, National Taiwan University, Taipei,
Taiwan, (2003) “A PCI-X Verification Environment Using C and Verilog”-
VLSI Design/CAD Symposium, Taiwan,
www.eecs.umich.edu/~changkh/publication/pcisim.pdf

47. Kambiz Khalilian, Stephen Brain, Richard Tuck, Glenn Farrall, Infineon
Technologies, (2002) “Reusable Verification Infrastructure for A Processor
Platform to deliver fast SOC development “- www.design-reuse.com/

48. Kausik Datta, Das, P.P., Interra Syst. Private Ltd., xx, India, (2004), Assertion
based verification using HDVL- VLSI Design, 2004. Proceedings. 17th
International Conference , On page(s): 319- 325

49. Kenneth M. Butler, Kapur, R. Mercer, M.R. Ross, D.E., Texas Instruments,
Dallas, TX , (2006) “The roles of controllability and observability in design for
test”- VLSI Test Symposium, 1992. '10th Anniversary. Design, Test and
Application: ASICs and Systems-on-a-Chip', Digest of Papers., 1992 IEEE, 7-9
Apr 1992, On page(s): 211-216, Current Version Published: 2002-08-06

50. Kuang-Chien Chen, (2003) “Assertion-based verification for SoC designs”-


ASIC, 2003. Proceedings. 5th International Conference on, Volume 1, Issue ,
21-24 Oct. Page(s): 12 - 15 Vol.1

51. Laurent Fournier, Arbetman, Y., Levinger, M., Res. Lab., IBM Israel Sci. &
Technol. Center, Haifa, (1999) “Functional verification methodology for
microprocessors using the Genesys test-program generator. Application to the
x86 microprocessors family”- Design, Automation and Test in Europe
Conference and Exhibition 1999. Proceedings, On page(s): 434-441

52. Lukai Cai, Gajski, D., Center for Embedded Comput. Syst., California Univ.,
Irvine, CA, USA, (2003) “Transaction level modeling: an overview”-
Hardware/Software Codesign and System Synthesis, 2003. First
IEEE/ACM/IFIP International Conference on, Publication Date: 1-3 Oct. 2003,
On page(s): 19- 24

53. Marines Puig-Medina, Ezer, G. Konas, P., Tensilica, Inc, (2000) “Verification
of configurable processor cores”- Design Automation Conference, 2000.
Proceedings 2000. 37th, 2000, On page(s): 426-431

54. Mark Litterick, Verilab, (2005) “Using SystemVerilog Assertions for Functional
Coverage”: www.verilab.com/files/dac2005_mal_sva_cov_paper_a4.pdf

55. Michael Keating, Verisity White papers, (2003) “Verification Reuse


Methodology- Essential Elements for Verification Productivity Gains”

56. Michael Keating, Pierre Bricaud, (1999) “Reuse Methodology Manual for
System-On-A-Chip Designs”, KIuwer Academic Publishers

57. Michael Stuart and D. Dempster. Verification Methodology Manual for Code
Coverage in HDL Designs. Teamwork International, Hampshire, UK, 2000.

58. Michelle D, (2005)1500TM- IEEE Standard Testability Method for Embedded


Core-based Integrated Circuits

59. Min-An Song; Ting-Chun Huang; Sy-Yen Kuo, (2006) “A functional
verification environment for advanced switching architecture”- Electronic
Design, Test and Applications, DELTA 2006. Third IEEE International
Workshop on, 17-19 Jan. 2006, Page(s): 4 pp.

60. Mohammad Reza Kakoee, Hamid Shojaei, Hassan Ghasemzadeh, Marjan


Sirjani, Zainalabedin Navabi, Electr. & Comput. Eng., Tehran Univ, (2007) “A
New Approach for Design and Verification of Transaction Level Models”-
Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on,
27-30 May 2007, On page(s): 3760-3763

61. A. Molina and O.Cadenas, (2007) “Functional Verification: Approaches and


Challenges”- Latin American Applied Research, Lat. Am. Appl.
Res. v.37 n.1 Bahía Blanca ene./mar. , ISSN 0327-0793 versión impresa

62. Morel, B.; Alexander, P., (2004) “SPARTACAS: automating component reuse
and adaptation”- Software Engineering, IEEE Transactions on, Volume 30,
Issue 9, Sept. 2004 Page(s): 587 – 600

63. Narcizo Sabbatini, Jr. Brochi, A.M. Nunes, T.I., Motorola Semicond.
Products Sector, Jaguariuna, (2002) “Reuse issues on the verification of
embedded MCU cores”- Devices, Circuits and Systems, 2002. Proceedings of
the Fourth IEEE International Caracas Conference on, PP: C012-1- C012-6

64. Nick Heaton and Ed Flaherty, CommsDesign, (2001) “A Verification


Environment for an Embedded Processor”: www.commsdesign.com/

65. Nihar Shah, Sasan Iman, Santa Clara, CA, (2006) “Verification Plan Reuse.
Extending Verification Reuse to Verification Plan Definition and Verification
Environment Implementation”. Session # 1.10, CDN Live! 2006,
www.cdnusers.org/Portals/0/cdnlive/na2006/1.10/1.10_paper.pdf

66. Noppanunt Utamaphethai, Blanton, R.D., Shen, J.P., Dept. of Electr. &
Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, (2002) “Relating buffer-
oriented microarchitecture validation to high-level pipeline functionality”-
High-Level Design Validation and Test Workshop, 2001. Proceedings. Sixth
IEEE International, 2001, On page(s): 3-8

67. Oded Lachish, Eitan Marcus, Shmuel Ur, Avi Ziv, (2002) "Hole Analysis for
Functional Coverage Data," dac, pp.807, 39th Design Automation Conference
(DAC'02)

68. Olaf Luthje. CoWare, Inc. Aachen, Germany, (2004) “A Methodology for
Automated Test Generation for LISA. Processor Models”:
www.coware.com/PDF/SASIMI2004.PDF

69. O.Petlin, A. Genusov, L. Wakeman, (2000) “Methodology and Code Reuse in


the Verification of Telecommunication SOCs” -IEEE- ASIC/SOC Conference,
. Proceedings. 13th Annual IEEE International, Arlington, VA, USA ,187-191

70. Ping Yeung, PhD., Director of Verification Methodology, 0-In Design


Automation, (2005) “Assertion-Based Verification of ARM Core-Based
Designs”: www.arm.com/iqonline/currentissue/design.html

71. Prabhat Mishra, Dutt, N.; Krishnamurthy, N.; Ababir, M.S., (2004) “A top-
down methodology for microprocessor validation”- Design & Test of
Computers, IEEE, Volume 21, Issue 2, Mar-Apr 2004 Page(s): 122 – 131

72. Prabhat Mishra, Dutt, N., (2002) “Automatic functional test program generation
for pipelined processors using model checking”- High-Level Design Validation
and Test Workshop, 2002. Seventh IEEE International, 27-29 Oct. 2002,
Page(s): 99 – 103

73. Prabhat Mishra, Dutt, N., (2005) “Functional coverage driven test generation for
validation of pipelined processors”- Design, Automation and Test in Europe.
Proceedings, Volume , Issue , 7-11 March 2005 Page(s): 678 - 683 Vol. 2

74. Prabhat Mishra, Dutt, N., Center for Embedded Comput. Syst., California
Univ., Irvine, CA, USA, (2004) “Functional validation of programmable
architectures”-Digital System Design, 2004. DSD 2004. Euromicro Symposium
on, 31 Aug.-3 Sept., On page(s): 12- 19

75. Raghuram S. Tupuri, Krishnamachary, A.; Abraham, J.A., (1999) “Test
generation for gigahertz processors using an automatic functional constraint
extractor”- Design Automation Conference 1999. Proceedings. 36th,
Page(s): 647 – 652

76. Raghuram S. Tupuri, R.S.,Texas Microprocessor Division , (2002) “Automatic


Functional Test Generation- A Reality”-Test Conference, 1999. Proceedings.
International, On page(s): 1130-1130, Current Version Published: 2002-08-06

77. Rebeca P. Díaz Redondo, José J. Pazos Arias, (2001) "Reuse of Verification
Efforts and Incomplete Specifications in a Formalized, Iterative and Incremental
Software Process," icse, pp.0801, 23rd International Conference on Software
Engineering (ICSE'01).

78. Rindert Schutten and Tom Fitzpatrick, Senior Technical Marketing Manager at
Synopsys, (2003) “Design for verification methodology allows silicon
success”: www.eetimes.com

79. Rohit Kapur, Keller, B.; Koenemann, B.; Lousberg, M.; Reuter, P.; Taylor, T.;
Varma, P., (1999) “P1500-CTL: Towards a Standard Core Test Language”-
VLSI Test Symposium, 1999. Proceedings. 17th IEEE, Volume , Issue , 1999
Page(s):489 – 490

80. Rui Wang; Wenfa Zhan; Guisheng Jiang; Minglun Gao; Su Zhang, Computer
Supported Cooperative Work in Design, (2004) “Reuse issues in SoC
verification platform”- 2004 Proceedings. The 8th International Conference on,
Volume 2, Issue, 26-28 May 2004 Page(s): 685 - 688 Vol.2

81. Samir Palnitkar, (2004) “Design Verification with e” , Prentice Hall PTR

82. Sang-Heon Lee Jae-Gon Lee Seonpil Kim Woong Hwangbo Chong-Min
Kyung, Dept. of Electr. Eng., KAIST, Daejeon, South Korea , (2005) “SoC
design environment with automated configurable bus generation for rapid
prototyping”- ASIC, 2005. ASICON 2005. 6th International Conference On, 24-
27 Oct. 2005, Volume: 1, On page(s): 41- 45

83. Schmitt, S.; Rosenstiel, W., Design, (2004) “Verification of a microcontroller IP


core for system-on-a-chip designs using low-cost prototyping environments”-
Automation and Test in Europe Conference and Exhibition. Proceedings
Volume 3, Issue , 16-20 Feb. 2004 Page(s): 96 - 101 Vol.3

84. Scott Taylor, Quinn, M.; Brown, D.; Dohm, N.; Hildebrandt, S.; Huggins, J.;
Famey, C., (1998) “Functional verification of a multiple-issue, out-of-order,
superscalar Alpha processor-the DEC Alpha 21264 microprocessor”-Design
Automation Conference, 1998. Proceedings, Volume, Issue, 15-19 Jun 1998
Page(s): 638 – 643

85. Sigal Asaf, Marcus, E.; Ziv, A., (2004) “Defining coverage views to improve
functional coverage analysis”- Design Automation Conference, 2004.
Proceedings. 41st Volume , Issue , 2004 Page(s): 41 – 44

86. Shai Fine, Avi Ziv, (2003) "Coverage Directed Test Generation for Functional
Verification using Bayesian Networks," dac, pp.286, 40th Design Automation
Conference (DAC'03)

87. G. S. Spirakis, (2004) “Designing for 65nm and Beyond”, Keynote Address at
Design Automation and Test in Europe (DATE), 2004.

88. Sreekumar V. Kodakara, Deepak A. Mathaikutty, Ajit Dingankar, Sandeep


Shukla, David Lilja, (2007) "Model Based Test Generation for Microprocessor
Architecture Validation," -20th International Conference on VLSI Design held
jointly with 6th International Conference on Embedded Systems (VLSID'07),
vlsid, pp.465-472

89. St. Pierre, M., Yang, S.-W., Cassiday, D., Thinking Machines Corp.,
Cambridge, MA, (2002) “Functional VLSI design verification methodology for
the CM-5 massively parallel supercomputer”- Computer Design: VLSI in
Computers and Processors, 1992. ICCD '92. Proceedings., IEEE 1992
International Conference on, 11-14 Oct 1992, On page(s): 430-435

90. Stefan Schmechtig, IPextreme Inc., Joern Ungermann, NXP Semiconductors,


Christian Lipsky, IPextreme Inc., Munich, Germany, (2006) “e Verification
Environment for FlexRay Advanced Automotive Networks”:
http://www.us.design-reuse.com/ipsoc2006/

91. Stephen D’Onofrio verification architect engineer, Ning Guo, principal


consulting engineer Paradigm Works, (2008) “Building reusable verification
environments with OVM”- EDA Tech Forum Journal, September 2008

92. Steve Brown, director of Marketing, Enterprise Verification Process


Automation, Cadence Design Systems, (2005) “How to improve verification
planning”: www.design-reuse.com

93. Steve Ye (2004), “Best Practices for a Reusable Verification Environment”,


eetimes: URL: http://www.eetimes.com/showArticle.jhtml?articleID=22104451

94. L.Swarna Jyothi , Harish R Dr.A.S.Manjunath, (2008) “Reusable Verification


Environment for verification of Ethernet packet in Ethernet IP core, a
verification strategy- an analysis”- IJCSNS International Journal of Computer
Science and Network Security, VOL.8 No.11, November 2008

95. Team Work International, (2002) “Verification Architecture for Pre-Silicon


Validation”: http://www10.edacafe.com/book/parse_book.php?article=transeda%2Fch-10.html

96. Thomas Anderson, director of engineering in the Semiconductor IP Group at


Phoenix Technologies, (1999) “Using System-on Chip Design with Virtual
components”

97. Tom Schubert, DPG CPU Design Validation, Intel Corp., Hillsboro, OR, USA,
(2003) “High-level formal verification of next-generation microprocessors”;
Design Automation Conference, 2003. Proceedings, 2-6 June 2003, On page(s):
1- 6

98. Verisity ‘e’ Libraries: (1998- 2002)


www.ieee1647.org: www.ieee1647.org/downloads

99. Verisity ‘e’ Language Reference Manual: (1998- 2002):


www.ieee1647.org/downloads/prelim_e_lrm.pdf

100. Warren H. Debany, Jr., Gorniak, M.J., Macera, A.R., Kwiat, K.A.,
Dussault, H.B., Daskiewich, D.E., RL/ERDA, Griffiss AFB, NY, (1991)
“Shortening the Path from Specification to Prototype”- Rapid System
Prototyping, 1991. Second International Workshop on, 11-13 Jun 1991, On
page(s): 17-24

101. Yervant Zorian, (1997) "Test Requirements for Embedded Core-based


Systems and IEEE P1500," itc, pp.191, International Test Conference 1997
(ITC'97)

102. Yu Chi Su, Institute of Electrical Engineering, National Taiwan University,


(2003) “Integrating C and Verilog into a Simulation-Based Verification
Environment for PCI-X 2.0”-Thesis

103. Zhang Yuhong He Lenian Xu Zhihan Yan Xiaolang Wang Leyu , Inst. of
Digital Technol. & Instrum., Zhejiang Univ., Hangzhou, China, (2003) “A
system verification environment for mixed-signal SOC design based on IP
bus”- ASIC, 2003. Proceedings, 5th International Conference on, 21-24 Oct.
2003, Volume: 1, on page(s): 278- 281 Vol.1

104. Jack Horgan, (2004) “Structured ASICs”, EDACafe:


http://www10.edacafe.com/nbc/articles/view_weekly.php?articleid=209214
