
In electronic design automation, functional verification is the task of verifying that the logic design conforms to its specification.
The purpose of verification is to ensure that the result of some transformation is as intended or as expected.
RTL coding from a specification, insertion of a scan chain, synthesizing RTL code into a gate-level netlist, and layout of a gate-level netlist are some of the transformations performed in a hardware design project.
Designers deploy a verification environment using C/C++ and SystemC, following a clearly defined verification strategy, and simulate a complex SoC together with its embedded software.
Verification strategies
*************************
Testcases can be either white-box or black-box, depending on the visibility and knowledge you have of the internal implementation of each unit under verification.
With higher levels of abstraction, you have less control over the timing and coordination of the stimulus and response, but it is easier to generate a large amount of stimulus and observe the response over a long period of time. If detailed control is required to perform certain testcases, it may be necessary to work at a lower level of abstraction.
For example, verifying a processor interface can be accomplished at the level of individual read and write cycles, but that requires each testcase to have intimate knowledge of the memory-mapped registers and how to program them. The same interface could instead be driven at the device-driver level, where the testcase has access to a set of high-level procedural calls that perform complete operations.
You must plan how you will determine the expected response, then how to verify that the design provided the response you expected.
For example, verifying a graphics engine involves checking the output picture for expected content. A self-checking simulation is very good at verifying individual pixels in the picture, but a human is more efficient at recognizing a filled red circle.
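The pixel-level half of that self-checking approach can be sketched in C++. The tiny frame size and function names are hypothetical, chosen only to illustrate comparing a DUT's output picture against a known-good reference pixel by pixel:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical 4x4 grayscale frame buffer; a real engine's output is far larger.
using Frame = std::array<std::uint8_t, 16>;

// Self-checking comparison: count pixels that differ from the golden image.
// Returns the number of mismatching pixels (0 means the frame passed).
int compare_frames(const Frame& dut_out, const Frame& golden) {
    int mismatches = 0;
    for (std::size_t i = 0; i < dut_out.size(); ++i)
        if (dut_out[i] != golden[i]) ++mismatches;
    return mismatches;
}
```

A simulation would call this after every rendered frame; the "filled red circle" judgment the text mentions is exactly what this kind of check cannot express.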

BFM:
The Bus Functional Model (BFM) for a device interacts with the DUT by both driving and sampling the DUT signals. A bus functional model provides a task or procedural interface for specifying bus operations of a defined bus protocol. For a memory DUT, transactions usually take the form of read and write operations. Bus functional models are easy to use and provide good performance. A BFM has to follow the timing protocol of the DUT interface: it describes the functionality and provides a cycle-accurate interface to the DUT, modeling the external behavior of the device. For reusability, the implementation of the BFM functionality should be kept as independent as possible of the communication to the BFM.
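The procedural interface a BFM exposes can be sketched as follows. This is a minimal C++ illustration, assuming a memory-like DUT; the class and method names are invented, and the pin-level, cycle-accurate wiggling a real BFM performs is abstracted away behind the calls, which is precisely the service a BFM provides to the testcase:

```cpp
#include <cstdint>
#include <map>

// Sketch of a BFM-style procedural interface for a memory-like DUT.
// A real BFM would drive and sample DUT pins with correct cycle timing;
// here that detail is hidden behind write()/read() operations.
class MemoryBfm {
public:
    // One "bus write" operation: hides address/data/strobe sequencing.
    void write(std::uint32_t addr, std::uint32_t data) { mem_[addr] = data; }

    // One "bus read" operation: drives the read protocol and samples data.
    std::uint32_t read(std::uint32_t addr) const {
        auto it = mem_.find(addr);
        return it == mem_.end() ? 0 : it->second;  // unwritten locations read 0
    }

private:
    std::map<std::uint32_t, std::uint32_t> mem_;  // stands in for the DUT state
};
```

A testcase then works in terms of `write(addr, data)` and `read(addr)` rather than individual signal transitions, which is what keeps it independent of the bus timing.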
A protocol monitor does not drive any signals; it observes the DUT outputs, identifies all the transactions, and reports any protocol violations. Taking a packet protocol again as an example, the monitor extracts information from each packet, such as the packet length and the packet address.
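A passive monitor for such a packet protocol might look like the sketch below. The packet layout (byte 0 = length, byte 1 = address, then payload) and the function name are assumptions made up for illustration; the key property is that the monitor only samples bytes and flags violations, never driving anything:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Decoded header of one observed packet.
struct PacketInfo {
    std::uint8_t length;   // payload length claimed by the packet
    std::uint8_t address;  // destination address field
};

// Passive monitor: returns the decoded header, or nullopt when the observed
// bytes violate the (assumed) protocol rule that the length field must match
// the payload actually seen on the bus.
std::optional<PacketInfo> monitor_packet(const std::vector<std::uint8_t>& bytes) {
    if (bytes.size() < 2) return std::nullopt;                 // header incomplete
    PacketInfo info{bytes[0], bytes[1]};
    if (bytes.size() != 2u + info.length) return std::nullopt; // length violation
    return info;
}
```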


The following types of test bench are the most common:
Stimulus only: contains only the stimulus driver and DUT; does not contain any results verification.
Full test bench: contains a stimulus driver, known good results, and results comparison.
Simulator specific: the test bench is written in a simulator-specific format.
Hybrid test bench: combines techniques from more than one test bench style.
Fast test bench: a test bench written to get ultimate speed from simulation.
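The "full test bench" structure above (stimulus driver, known good results, results comparison) can be shown in miniature. The DUT here is a stand-in adder and all names (`dut_add`, `ref_add`, `run_full_testbench`) are hypothetical; the point is the three-part shape, not the arithmetic:

```cpp
#include <utility>
#include <vector>

// Stand-in for the design under test (in reality, RTL behind a BFM).
int dut_add(int a, int b) { return a + b; }

// Golden reference model producing the known good results.
int ref_add(int a, int b) { return a + b; }

// Full test bench in miniature: drive stimulus, compute the expected result,
// compare. Returns the number of stimulus vectors whose DUT result mismatched.
int run_full_testbench(const std::vector<std::pair<int, int>>& stimulus) {
    int failures = 0;
    for (auto [a, b] : stimulus)
        if (dut_add(a, b) != ref_add(a, b))  // results comparison
            ++failures;
    return failures;
}
```

A stimulus-only test bench would be the same loop with the comparison deleted, which is why it verifies nothing by itself.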

In its most common use, equivalence checking compares two netlists to ensure that some netlist post-processing, such as scan-chain insertion, clock-tree synthesis, or manual modification, did not change the functionality of the circuit.
Another popular use of equivalence checking is to verify that the netlist correctly implements the original RTL code.
Property checking goes further: for example, all state machines in a design could be checked for unreachable or isolated states, and a more powerful property checker may be able to determine whether deadlock conditions can occur.
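The core idea of equivalence checking, that two implementations must agree on every input, can be illustrated with a toy C++ check. Real tools do this formally on full netlists; here two hand-written variants of a 4-bit binary-to-Gray converter (illustrative names, not a real netlist format) are compared exhaustively over the whole input space:

```cpp
#include <cstdint>

// "Original" implementation: standard binary-to-Gray, masked to 4 bits.
std::uint8_t netlist_a(std::uint8_t x) {
    return static_cast<std::uint8_t>((x ^ (x >> 1)) & 0xF);
}

// "Post-processed" implementation: same function, restructured gate by gate,
// as scan insertion or manual edits might leave it.
std::uint8_t netlist_b(std::uint8_t x) {
    std::uint8_t g = x & 0x8;                                  // top bit passes through
    g |= static_cast<std::uint8_t>((x ^ (x >> 1)) & 0x7);      // lower three bits
    return g;
}

// Exhaustive equivalence check over the 4-bit input space.
bool equivalent() {
    for (std::uint8_t x = 0; x < 16; ++x)
        if (netlist_a(x) != netlist_b(x)) return false;
    return true;
}
```

Exhaustive enumeration only works for tiny input spaces; formal equivalence checkers reach the same conclusion symbolically, without enumerating inputs.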
Functional verification can prove the presence of bugs, but it cannot prove their absence.
Black box:
With a black-box approach, functional verification is performed without any knowledge of the actual implementation of a design. All verification is accomplished through the available interfaces, without direct access to the internal state of the design and without knowledge of its structure and implementation.
White box: a white-box approach has full visibility and controllability of the internal structure and implementation of the design being verified.
Grey box: grey-box verification is a compromise between the aloofness of black-box verification and the dependence on the implementation of white-box verification.
Verification approaches
************************
Top-down testing is an approach to integration testing where the top-level integrated modules are tested first, and each branch of the module is then tested step by step until the end of the related module is reached.
If coverage is missing, it usually indicates either unused code or incomplete tests.
Common coverage types: branch coverage, statement coverage, path coverage, expression coverage, and toggle coverage.
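One of those coverage types, toggle coverage, is simple enough to sketch: a bit of a signal is covered once it has been observed both rising and falling during simulation. The C++ class below is an illustrative model for an 8-bit signal, not any tool's actual implementation:

```cpp
#include <bitset>
#include <cstdint>

// Toggle-coverage collector for one 8-bit signal.
struct ToggleCoverage {
    std::uint8_t prev = 0;
    std::bitset<8> rose, fell;  // bits seen going 0->1 and 1->0
    bool first = true;

    // Sample the signal once per clock, recording transitions since last sample.
    void sample(std::uint8_t value) {
        if (!first) {
            rose |= std::bitset<8>(static_cast<std::uint8_t>(value & ~prev));
            fell |= std::bitset<8>(static_cast<std::uint8_t>(prev & ~value));
        }
        prev = value;
        first = false;
    }

    // Percentage of bits that have both risen and fallen.
    double percent() const {
        int hit = static_cast<int>((rose & fell).count());
        return 100.0 * hit / 8;
    }
};
```

A bit that only rises (or only falls) stays uncovered, which is exactly the kind of gap that points to unused code or incomplete tests.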
Bottom-up testing is an approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.

A co-simulation environment employs an ISS (instruction set simulator) that models the processor, together with a SystemC/HDL hardware simulator. A co-verification environment employs an FPGA-based hardware emulator and an actual embedded processor; it is used to verify the whole SoC integration before the fabrication of a target SoC.
A co-simulation environment mainly consists of two parts: an ISS and a hardware simulator. The ISS executes the software design of the target SoC; the hardware simulator implements the hardware part using SystemC and HDL.
A co-verification framework is then employed to validate the whole SoC system integrated in a single hardware platform, which contains an embedded processor core and a fast hardware emulator employing a hardware accelerator to reduce verification time.
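The ISS-plus-hardware-simulator arrangement boils down to a lockstep loop. The C++ sketch below is deliberately minimal and every name is invented: the "ISS" executes one toy instruction per step while the "hardware simulator" advances one cycle, keeping the two sides synchronized:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for an instruction set simulator: each "instruction" just adds
// an immediate to an accumulator. A real ISS decodes a full ISA.
struct Iss {
    std::uint32_t acc = 0;
    void step(std::uint32_t imm) { acc += imm; }
};

// Stand-in for the SystemC/HDL hardware simulator side.
struct HwSim {
    std::uint64_t cycles = 0;
    void advance() { ++cycles; }  // one simulated hardware clock per SW step
};

// Co-simulation loop: run the program to completion, advancing both
// simulators in lockstep, and return the final accumulator value.
std::uint32_t cosimulate(Iss& iss, HwSim& hw,
                         const std::vector<std::uint32_t>& program) {
    for (auto imm : program) {
        iss.step(imm);  // software side executes one instruction
        hw.advance();   // hardware side advances one cycle
    }
    return iss.acc;
}
```

Real co-simulation synchronizes on bus transactions rather than every instruction, precisely because lockstepping each cycle is slow; that cost is what the hardware accelerator mentioned above is meant to reduce.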
Co-verification has some advantages compared with verifying hardware and software separately. The key concept behind co-verification is to merge the respective debug environments used by the hardware and software teams into a single framework. It gives designers early access to both the hardware and software components of the design.
