
Process Variation Delay and Congestion Aware Routing Algorithm for Asynchronous Network on Chip Design

PROPOSED ABSTRACT:

In this work we present a new network-on-chip (NoC) that accurately localizes the faulty parts of the NoC. The implemented NoC design is based on new error detection mechanisms suited to dynamic NoCs, in which the number and position of processor elements or faulty blocks vary at runtime. We designed online detection of data packet errors and adaptive routing algorithm errors. Both mechanisms distinguish permanent from transient errors and accurately localize the faulty blocks (data bus, input port, output port) within the NoC routers, while preserving the throughput, the network load, and the data packet latency. We provide a localization capability analysis of the presented mechanisms, NoC performance evaluations, and field-programmable gate array synthesis results.

The proposed NoC is based on new error detection mechanisms suited to dynamic NoCs, in which the number and position of processor elements or faulty blocks vary at runtime. The trend in embedded systems has recently been moving toward multiprocessor systems-on-chip (MPSoCs) in order to meet the requirements of real-time applications. The complexity of these SoCs is increasing, and the communication medium is becoming a major issue in such MPSoCs. Integrating a network-on-chip into the SoC provides an effective means to interconnect several processor elements (PEs) or intellectual property (IP) blocks. The NoC medium features a high level of modularity, flexibility, and throughput. An NoC comprises routers and interconnections allowing communication between the PEs and/or IPs, and it relies on data packet exchange. The path a data packet takes between a source and a destination through the routers is defined by the routing algorithm. Therefore, the path a data packet is allowed to take in the network depends mainly on the adaptiveness permitted by the routing algorithm, which is applied locally in each router being crossed and to each data packet.
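As a concrete illustration of a locally applied routing decision (the entity name, the 4-bit addresses, the one-hot port encoding, and the fallback rule below are assumptions made for this sketch, not the algorithm proposed in this work), a per-router unit can prefer dimension-order routing and switch dimension when the preferred neighbor is flagged as faulty:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical per-router routing unit: prefers dimension-order (XY) routing
-- and falls back to the other dimension when the preferred output port is
-- flagged as faulty by a fault monitor.
entity route_compute is
  port (
    local_x, local_y : in  unsigned(3 downto 0);          -- this router's address
    dest_x,  dest_y  : in  unsigned(3 downto 0);          -- packet destination
    fault_in         : in  std_logic_vector(3 downto 0);  -- faulty flags, one per port
    out_dir          : out std_logic_vector(3 downto 0)   -- one-hot: N, E, S, W
  );
end entity;

architecture rtl of route_compute is
  constant N : integer := 3;  constant E : integer := 2;
  constant S : integer := 1;  constant W : integer := 0;
begin
  process (local_x, local_y, dest_x, dest_y, fault_in)
    variable dir : std_logic_vector(3 downto 0);
  begin
    dir := (others => '0');
    -- Preferred XY choice; all zeros means the packet has reached its router.
    if    dest_x > local_x then dir(E) := '1';
    elsif dest_x < local_x then dir(W) := '1';
    elsif dest_y > local_y then dir(N) := '1';
    elsif dest_y < local_y then dir(S) := '1';
    end if;
    -- Adaptive fallback: if the chosen port is faulty, try the Y dimension.
    if (dir and fault_in) /= "0000" then
      dir := (others => '0');
      if    dest_y > local_y then dir(N) := '1';
      elsif dest_y < local_y then dir(S) := '1';
      end if;
    end if;
    out_dir <= dir;
  end process;
end architecture;

In a complete router this decision would be combined with congestion information and output arbitration, which is why the text above stresses that the algorithm is applied locally, in each router, to each packet.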

With the increasing complexity and the evolving reliability of SoCs, MPSoCs are becoming more sensitive to phenomena that generate permanent, transient, or intermittent faults. These faults may generate data packet errors, or they may affect router behavior, leading to data packet losses or permanent routing errors. Indeed, a fault in the routing logic will often lead to packet routing errors and might even crash the router. To detect these errors, specific error detection blocks are required in the network to locate the faulty sources. Moreover, permanent errors must be distinguished from transient errors: the precise location of permanently faulty parts of the NoC must be determined so that they can be bypassed effectively by the adaptive routing algorithm. To protect data packets against errors, error correcting codes (ECCs) are implemented inside the NoC components. Among the well-known solutions, three are usually applied for NoC-based MPSoC communications. First, the end-to-end solution requires an ECC to be implemented in each input port of the IPs or PEs in the NoC. The main drawback of this solution is its inability to locate the faulty components in the NoC. Consequently, it is inadequate for dynamic NoCs,

where the faulty and unavailable zones must be bypassed. Second, switch-to-switch detection is based on the implementation of an ECC in each input port of the NoC switches. For instance, in a router with four communication directions, four ECC blocks are implemented. When a router receives a data packet from a neighbor, the ECC block analyzes its content to check the correctness of the data; this process detects and corrects data errors according to the effectiveness of the ECC being used. Third, another proposed solution is code disjoint. In this approach, routers include one ECC in each input and output data port. This solution localizes the error sources, which can be either in the switches or on the data links between routers. However, if an error source is localized inside a router, this mechanism disables the entire switch. These online detection mechanisms cannot disconnect only the faulty parts of the NoC, and hence do not give an accurate localization of the source of errors. As a result, the network throughput decreases while the network load and data packet latency increase. Moreover, they are not able to distinguish between permanent and transient errors. For all of these techniques, each ECC implemented in the routers of the network adds cost in terms of logic area, latency in data packet transmission, and power consumption.
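To make the distinction between transient and permanent errors more concrete, the following minimal sketch (an even-parity check with an error counter; the threshold, widths, and names are illustrative assumptions rather than the detection mechanism of this work) raises a transient-error flag for each corrupted flit and a permanent-fault flag only when errors persist:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative input-port checker (assumed, not the design's actual ECC):
-- flags a transient error for each bad flit and raises a permanent-fault
-- flag once errors persist beyond a threshold of consecutive bad flits.
entity port_error_monitor is
  generic (THRESHOLD : natural := 8);
  port (
    clk, rst      : in  std_logic;
    flit_valid    : in  std_logic;
    data          : in  std_logic_vector(31 downto 0);
    parity_bit    : in  std_logic;            -- even parity sent with the flit
    err_transient : out std_logic;
    err_permanent : out std_logic
  );
end entity;

architecture rtl of port_error_monitor is
  signal err_count : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
    variable parity : std_logic;
  begin
    if rising_edge(clk) then
      err_transient <= '0';
      if rst = '1' then
        err_count     <= (others => '0');
        err_permanent <= '0';
      elsif flit_valid = '1' then
        parity := '0';
        for i in data'range loop              -- recompute even parity locally
          parity := parity xor data(i);
        end loop;
        if parity /= parity_bit then          -- mismatch: an error was detected
          err_transient <= '1';
          if err_count < THRESHOLD then
            err_count <= err_count + 1;       -- count consecutive bad flits
          else
            err_permanent <= '1';             -- persistent errors: permanent fault
          end if;
        else
          err_count <= (others => '0');       -- an error-free flit clears the count
        end if;
      end if;
    end if;
  end process;
end architecture;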

We designed online detection of data packet errors and adaptive routing algorithm errors. Both mechanisms distinguish permanent from transient errors and accurately localize the faulty blocks (data bus, input port, output port) in the NoC routers, while preserving the throughput, the network load, and the data packet latency. We provide a localization capability analysis of the presented mechanisms, NoC performance evaluations, and field-programmable gate array synthesis results.

In the proposed design, an advanced routing algorithm is implemented in which data packets are sent using a high-speed, priority-based approach. The advantage of this approach is that it reduces the frequent checking of unconnected nodes and the resulting return-path delay. In the proposed system we implement several nodes and design a logic module inside each node; this logic module represents the processing implemented inside the integrated circuit. Moreover, we implement a self-replacement concept in the proposed system so that it handles errors or faults by itself. The implementation of the self-replacement concept yields an advantage in terms of time.
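The self-replacement idea can be pictured with a small sketch in which a node multiplexes its traffic from a primary logic module to an identical spare copy once a permanent fault is reported; the single-spare arrangement, the 32-bit width, and all names below are assumptions made only for illustration:

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative self-replacement wrapper: the spare module takes over the
-- node's traffic when the primary module is reported permanently faulty.
entity self_replacing_node is
  port (
    clk, rst      : in  std_logic;
    fault_primary : in  std_logic;                      -- from the error monitor
    data_in       : in  std_logic_vector(31 downto 0);
    data_out      : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of self_replacing_node is
  signal y_primary, y_spare : std_logic_vector(31 downto 0);
  signal use_spare          : std_logic := '0';
begin
  -- Two identical copies of the node's logic module (here just a registered
  -- pass-through as a stand-in for the real processing logic).
  primary : process (clk) begin
    if rising_edge(clk) then y_primary <= data_in; end if;
  end process;

  spare : process (clk) begin
    if rising_edge(clk) then y_spare <= data_in; end if;
  end process;

  -- Latch the replacement decision so recovery does not oscillate.
  process (clk) begin
    if rising_edge(clk) then
      if rst = '1' then
        use_spare <= '0';
      elsif fault_primary = '1' then
        use_spare <= '1';
      end if;
    end if;
  end process;

  data_out <= y_spare when use_spare = '1' else y_primary;
end architecture;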
SOFTWARE REQUIREMENT:
Design Environment: XILINX ISE

Language: VHDL

Simulation: MODELSIM / XILINX ISE Simulator

HARDWARE REQUIREMENT:

XILINX SPARTAN Development Board

Device: XC3S500E

Advantages

The data packets are sent using a high-speed, priority-based approach.
Power consumption is reduced by using only the required nodes at a time.
The self-repairing concept is implemented.

Disadvantages

In the existing design, all nodes are used, which increases the power consumption.

The fault detection capability is lower than that of the proposed design.

Algorithm:

Network on chip based on an adaptive routing algorithm.


BLOCK DIAGRAM

[Block diagram: an FPGA top level comprising the power supply, system clock, hardware reset, clock generator, clock divider, input buffer, receiver module, nodes, information source, transmitter module, output buffer, and the network.]
MODULE DESCRIPTION:

MODULE 1: DESIGN AND ANALYSIS OF DATA TRANSMISSION MODULE

MODULE DESCRIPTION:

The data transmission module is the basic module in the smart, reliable network on chip. In the transmission module, the data or information is first encrypted before transmission, and the encrypted data is then transmitted according to the enable signal.
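A minimal sketch of the described behavior is given below, assuming a placeholder XOR keystream as the "encryption" and a 16-bit data width (neither of which is specified by the design): the payload is scrambled with a key and driven onto the network only while the enable signal is asserted.

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative transmission module: data is XOR-scrambled with a key and
-- sent only when 'enable' is high; otherwise the output is held idle.
entity tx_module is
  port (
    clk      : in  std_logic;
    enable   : in  std_logic;
    key      : in  std_logic_vector(15 downto 0);
    data_in  : in  std_logic_vector(15 downto 0);
    tx_valid : out std_logic;
    tx_data  : out std_logic_vector(15 downto 0)
  );
end entity;

architecture rtl of tx_module is
begin
  process (clk) begin
    if rising_edge(clk) then
      if enable = '1' then
        tx_data  <= data_in xor key;   -- simple placeholder "encryption"
        tx_valid <= '1';
      else
        tx_data  <= (others => '0');
        tx_valid <= '0';
      end if;
    end if;
  end process;
end architecture;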

MODULE 2: DESIGN AND ANALYSIS OF NODES

MODULE DESCRIPTION:

In this module, the nodes of the reliable network are designed and analyzed. Each node in the network is designed to establish the link between the source and the destination. The nodes in the network carry different node values; before data is transmitted to the destination, all possible route values are calculated with respect to these node values and the most reliable path is chosen for networking.
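One possible reading of the route-value comparison is sketched below; the two-candidate comparison, the 8-bit cost encoding, and the entity name are assumptions introduced only to make the idea concrete: the node forwards on the cheaper, non-faulty candidate route.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative path selector: picks the candidate route with the lower
-- accumulated node value, skipping a candidate whose path is marked faulty.
entity path_select is
  port (
    cost_a, cost_b   : in  unsigned(7 downto 0);  -- route values of the candidates
    fault_a, fault_b : in  std_logic;              -- candidate path faulty flags
    select_b         : out std_logic               -- '0' -> route A, '1' -> route B
  );
end entity;

architecture rtl of path_select is
begin
  process (cost_a, cost_b, fault_a, fault_b) begin
    if fault_a = '1' then
      select_b <= '1';                 -- A unusable, fall back to B
    elsif fault_b = '1' then
      select_b <= '0';                 -- B unusable, fall back to A
    elsif cost_b < cost_a then
      select_b <= '1';                 -- both usable: prefer the cheaper route
    else
      select_b <= '0';
    end if;
  end process;
end architecture;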
MODULE 3: DESIGN AND ANALYSIS OF NETWORK

MODULE DESCRIPTION:

In this module, the network is designed and its performance is analyzed. The network is built by creating links between the active nodes in the design.

To achieve reliable operation we use the advanced adaptive routing algorithm, in which faulty nodes in the network are detected adaptively to ensure reliability.
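Purely as an assumption about how adaptive fault detection could be realized (the design may use a different mechanism), the sketch below uses a watchdog that declares a neighboring node faulty if it fails to acknowledge a request within a timeout:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative neighbor watchdog: if no acknowledgement arrives within
-- TIMEOUT cycles of a request, the neighbor is flagged as faulty.
entity neighbor_watchdog is
  generic (TIMEOUT : natural := 255);
  port (
    clk, rst : in  std_logic;
    req_sent : in  std_logic;   -- a request was sent to the neighbor
    ack_rcvd : in  std_logic;   -- the neighbor acknowledged
    faulty   : out std_logic
  );
end entity;

architecture rtl of neighbor_watchdog is
  signal timer   : unsigned(7 downto 0) := (others => '0');
  signal waiting : std_logic := '0';
begin
  process (clk) begin
    if rising_edge(clk) then
      if rst = '1' then
        timer <= (others => '0'); waiting <= '0'; faulty <= '0';
      elsif ack_rcvd = '1' then
        waiting <= '0'; timer <= (others => '0');      -- neighbor is alive
      elsif req_sent = '1' then
        waiting <= '1'; timer <= (others => '0');      -- start the timeout window
      elsif waiting = '1' then
        if timer = TIMEOUT then
          faulty <= '1';                               -- no answer in time
        else
          timer <= timer + 1;
        end if;
      end if;
    end if;
  end process;
end architecture;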

MODULE 4: INTEGRATION AND ANALYSIS OF SUB-MODULES

MODULE DESCRIPTION:

In this module we integrate all the sub-modules and analyze the overall performance.
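A minimal top-level integration sketch is shown below; it wires in the hypothetical tx_module from the earlier sketch using direct entity instantiation and assumes everything is compiled into the same work library, so it is a placeholder for the real top level rather than the project's actual integration:

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative top level wiring the hypothetical sub-modules together.
entity noc_top is
  port (
    clk      : in  std_logic;
    enable   : in  std_logic;
    key      : in  std_logic_vector(15 downto 0);
    data_in  : in  std_logic_vector(15 downto 0);
    tx_valid : out std_logic;
    tx_data  : out std_logic_vector(15 downto 0)
  );
end entity;

architecture structural of noc_top is
begin
  u_tx : entity work.tx_module
    port map (
      clk      => clk,
      enable   => enable,
      key      => key,
      data_in  => data_in,
      tx_valid => tx_valid,
      tx_data  => tx_data
    );
  -- Further instances (nodes, watchdogs, receiver, output buffer) would be
  -- added and interconnected here, after which the integrated design is
  -- analyzed for timing and resource usage.
end architecture;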
Architecture Diagram

Introduction

Network on a chip

Network on chip or network on a chip (NoC or NOC) is a communication subsystem on an integrated circuit (commonly called a "chip"), typically between IP cores in a system on a chip (SoC). NoCs can span synchronous and asynchronous clock domains or use unclocked asynchronous logic. NoC technology applies networking theory and methods to on-chip communication and brings notable improvements over conventional bus and crossbar interconnections. NoC improves the scalability of SoCs, and the power efficiency of complex SoCs compared to other designs.
Network on chip is an emerging paradigm for communications within
large VLSI systems implemented on a single silicon chip. In a NoC system, modules
such as processor cores, memories and specialized IP blocks exchange data using a
network as a "public transportation" sub-system for the information traffic. A NoC is
constructed from multiple point-to-point data links interconnected by switches (a.k.a.
routers), such that messages can be relayed from any source module to any destination
module over several links, by making routing decisions at the switches. A NoC is similar
to a modern telecommunications network, using digital bit-packet switching over
multiplexed links. Although packet switching is sometimes claimed to be a necessity for a NoC, there are several NoC proposals utilizing circuit-switching techniques. This router-based definition is usually interpreted to mean that a single shared bus, a single crossbar switch, or a point-to-point network is not a NoC, but that practically all other topologies are.

Parallelism and scalability

The wires in the links of the NoC are shared by many signals. A high
level of parallelism is achieved, because all links in the NoC can operate simultaneously
on different data packets. Therefore, as the complexity of integrated systems keeps
growing, a NoC provides enhanced performance (such as throughput) and scalability in
comparison with previous communication architectures (e.g., dedicated point-to-point
signal wires, shared buses, or segmented buses with bridges). Of course, the
algorithms must be designed in such a way that they offer large parallelism and can
hence utilize the potential of NoC.

Benefits of adopting NoCs

Traditionally, ICs have been designed with dedicated point-to-point connections, with one wire dedicated to each signal. For large designs, in particular, this has several limitations from a physical design viewpoint. The wires occupy much of the area of the chip, and in nanometer CMOS technology, interconnects dominate both performance and dynamic power dissipation, as signal propagation in wires across the chip requires multiple clock cycles.

NoC links can reduce the complexity of designing wires for predictable
speed, power, noise, reliability, etc. From a system design viewpoint, with the advent of
multi-core processor systems, a network is a natural architectural choice. A NoC can
provide separation between computation and communication, support modularity and IP
reuse via standard interfaces, handle synchronization issues, serve as a platform for
system test, and, hence, increase engineering productivity.
ALGORITHM FLOW
SOFTWARE INTRODUCTION

VLSI DESIGN:

VLSI stands for "Very Large Scale Integration". This is the field which involves packing more and more logic devices into smaller and smaller areas. Thanks to VLSI, circuits that would once have taken a whole board of space can now be put into a space a few millimeters across! This has opened up a big opportunity to do things that were not possible before. VLSI circuits are everywhere: your computer, your car, your brand new state-of-the-art digital camera, your cell phone, and so on. All this involves a lot of expertise on many fronts within the same field, which we will look at in later sections.
VLSI has been around for a long time; there is nothing new about it. But as a side effect of advances in the world of computers, there has been a dramatic proliferation of tools that can be used to design VLSI circuits. Alongside this, obeying Moore's law, the capability of an IC has increased exponentially over the years in terms of computation power, utilization of available area, and yield. The combined effect of these two advances is that people can now put diverse functionality into ICs, opening up new frontiers. Examples are embedded systems, where intelligent devices are put inside everyday objects, and ubiquitous computing, where small computing devices proliferate to such an extent that even the shoes you wear may actually do something useful like monitoring your heartbeat! These two fields are somewhat related, and getting into their description can easily lead to another article.

DEALING WITH VLSI CIRCUITS

Digital VLSI circuits are predominantly CMOS based. The way normal blocks like latches and gates are implemented is different from what students have seen so far, but the behavior remains the same. All this miniaturization brings new things to consider. A lot of thought has to go into actual implementations as well as design. Let us look at some of the factors involved.

1. Circuit Delays: Large, complicated circuits running at very high frequencies have one big problem to tackle: the problem of delays in the propagation of signals through gates and wires, even over areas just a few micrometers across. The operating frequencies are so high that, as the delays add up, they can actually become comparable to the clock period.

2. Power: Another effect of high operating frequencies is increased power consumption. This has a two-fold effect: devices drain batteries faster, and heat dissipation increases. Coupled with the fact that surface areas have decreased, heat poses a major threat to the stability of the circuit itself.

3. Layout: Laying out the circuit components is a task common to all branches of electronics. What is special in our case is that there are many possible ways to do it. The power dissipation and speed of a circuit present a trade-off; if we try to optimize one, the other is affected. The choice between the two is determined by the way we choose to lay out the circuit components. Layout can also affect the fabrication of VLSI chips, making it either easy or difficult to implement the components on the silicon.

MODELSIM SIMULATOR

Designers of digital systems are inevitably faced with the task of testing their
designs. Each design can be composed of many components, each of which has to be
tested in isolation and then integrated into a design when it operates correctly. To verify
that a design operates correctly we use simulation, which is a process of testing the
design by applying inputs to a circuit and observing its behavior. The output of a
simulation is a set of waveforms that show how a circuit behaves based on a given
sequence of inputs. The general flow of a simulation is shown in the figure below. There
are two main types of simulation: functional and timing simulation. The functional
simulation tests the logical operation of a circuit without accounting for delays in the
circuit. Signals are propagated through the circuit using logic and wiring delays of zero.
This simulation is fast and useful for checking the fundamental correctness of the
designed circuit. The second step of the simulation process is the timing simulation. It is
a more complex type of simulation, where logic components and wires take some time
to respond to input stimuli. In addition to testing the logical operation of the circuit, it
shows the timing of signals in the circuit. This type of simulation is more realistic than
the functional simulation; however, it takes longer to perform.
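As a hedged example of what such a functional simulation looks like in ModelSim or the ISE simulator, the following small self-checking testbench drives the hypothetical tx_module sketched earlier (not a file from the actual project) with a clock and one stimulus vector, then checks the scrambled output:

library ieee;
use ieee.std_logic_1164.all;

-- Minimal functional-simulation testbench for the hypothetical tx_module:
-- applies a clock and a stimulus vector, then checks one response.
entity tb_tx_module is
end entity;

architecture sim of tb_tx_module is
  signal clk      : std_logic := '0';
  signal enable   : std_logic := '0';
  signal key      : std_logic_vector(15 downto 0) := x"A5A5";
  signal data_in  : std_logic_vector(15 downto 0) := (others => '0');
  signal tx_valid : std_logic;
  signal tx_data  : std_logic_vector(15 downto 0);
begin
  clk <= not clk after 5 ns;    -- 100 MHz clock

  dut : entity work.tx_module
    port map (clk => clk, enable => enable, key => key,
              data_in => data_in, tx_valid => tx_valid, tx_data => tx_data);

  stimulus : process
  begin
    wait until rising_edge(clk);
    data_in <= x"1234";
    enable  <= '1';
    wait until rising_edge(clk);
    wait for 1 ns;              -- let the registered output settle
    assert tx_data = (x"1234" xor x"A5A5")
      report "unexpected scrambled value" severity error;
    enable <= '0';
    wait;                       -- end of test
  end process;
end architecture;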

THE SIMULATION FLOW:


VHDL

VHDL is an acronym which stands for VHSIC Hardware Description Language. VHSIC is yet another acronym which stands for Very High Speed Integrated Circuits. If you can remember that, then you're off to a good start. The language has been known to be somewhat complicated. The acronym does have a purpose, though; it is supposed to capture the entire theme of the language, which is to describe hardware much the same way we use schematics.

VHDL can wear many hats. It is being used for documentation, verification, and
synthesis of large digital designs. This is actually one of the key features of VHDL, since
the same VHDL code can theoretically achieve all three of these goals, thus saving a lot
of effort. In addition to being used for each of these purposes, VHDL can be used to
take three different approaches to describing hardware. These three different
approaches are the structural, data flow, and behavioral methods of hardware
description. Most of the time a mixture of the three methods is employed. The following
sections introduce you to the language by examining its use for each of these three
methodologies. There are also certain guidelines that form an approach to using VHDL
for synthesis.
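As a brief illustration of two of these styles (the 2-to-1 multiplexer is chosen only as a familiar example), the same function can be written as a dataflow assignment or as a behavioral process:

library ieee;
use ieee.std_logic_1164.all;

entity mux2 is
  port (a, b, sel : in std_logic; y : out std_logic);
end entity;

-- Dataflow style: the output is described as a concurrent expression.
architecture dataflow of mux2 is
begin
  y <= b when sel = '1' else a;
end architecture;

-- Behavioral style: the output is described by a process that reacts
-- to its inputs.
architecture behavioral of mux2 is
begin
  process (a, b, sel) begin
    if sel = '1' then y <= b; else y <= a; end if;
  end process;
end architecture;

A structural description would instead instantiate and interconnect pre-existing components, which is the style typically used at the top level of a design.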

VHDL is a standard (IEEE 1076) developed by the IEEE (Institute of Electrical and Electronics Engineers). The language has been through a few revisions, and you will come across this in the VHDL community. Currently, the most widely used version is the 1987 (STD 1076-1987) version, sometimes referred to as VHDL'87, but also just VHDL. However, there is a newer revision of the language referred to as VHDL'93. VHDL'93 (adopted in 1994, of course) is fairly new and is still in the process of replacing VHDL'87.

VHDL is an IEEE and U.S. Department of Defense standard for electronic system descriptions. It is also becoming increasingly popular in private industry as experience with the language grows and supporting tools become more widely available. Therefore, to facilitate the transfer of system description information, an understanding of VHDL will become increasingly important.

Design Flow:

As mentioned above, one of the major utilities of VHDL is that it allows the synthesis of a circuit or system onto a programmable device (PLD or FPGA) or an ASIC. The steps followed during such a project are summarized in the figure below. We start the design by writing the VHDL code, which is saved in a file with the extension .vhd and the same name as its ENTITY's name. The first step in the synthesis process is compilation.

Fig. 3: VHDL Design Flow


Compilation is the conversion of the high-level VHDL code, which describes the circuit at the Register Transfer Level (RTL), into a netlist at the gate level. The second step is optimization, which is performed on the gate-level netlist for speed or for area. At this stage, the design can be simulated. Finally, place-and-route (fitter) software will generate the physical layout for a PLD/FPGA chip or will generate the masks for an ASIC.

FPGA

Field-programmable gate arrays (FPGAs) arrived in 1984 as an alternative to programmable logic devices (PLDs) and ASICs. As their name implies, FPGAs offer the significant benefit of being readily programmable. Unlike their forebears in the PLD category, FPGAs can (in most cases) be programmed again and again, giving designers multiple opportunities to tweak their circuits. Innovative design often happens with FPGAs as an implementation platform. But there are some downsides to FPGAs as well. The economics of FPGAs force designers to balance their relatively high piece-part pricing compared to ASICs against the absence of high NREs and long development cycles. They're also available only in fixed sizes, which matters when you're determined to avoid unused silicon area.

FPGAs fill a gap between discrete logic and the smaller PLDs on the low end of the complexity scale and costly custom ASICs on the high end. They consist of an array of logic blocks that are configured using software. Programmable I/O blocks surround these logic blocks, and both are connected by programmable interconnects (Fig. 1). The programming technology in an FPGA determines the type of basic logic cell and the interconnect scheme. In turn, the logic cells and interconnection scheme determine the design of the input and output circuits as well as the programming scheme.
FPGAs offer all of the features needed to implement most complex designs. Clock management is facilitated by on-chip PLL (phase-locked loop) or DLL (delay-locked loop) circuitry. Dedicated memory blocks can be configured as basic single-port RAMs, ROMs, FIFOs, or CAMs. Data processing, as embodied in the device's logic fabric, varies widely. The ability to link the FPGA with backplanes, high-speed buses, and memories is afforded by support for various single-ended and differential I/O standards. Also found on today's FPGAs are system-building resources such as high-speed serial I/Os, arithmetic modules, embedded processors, and large amounts of memory.
The highest-capacity general-purpose logic chips available today are the traditional gate arrays, sometimes referred to as Mask-Programmable Gate Arrays (MPGAs). MPGAs consist of an array of pre-fabricated transistors that can be customized into the user's logic circuit by connecting the transistors with custom wires. Customization is performed during chip fabrication by specifying the metal interconnect; this means that, for a user to employ an MPGA, a large setup cost is involved and the manufacturing time is long. Although MPGAs are clearly not FPDs, they are mentioned here because they motivated the design of the user-programmable equivalent: Field-Programmable Gate Arrays (FPGAs). Like MPGAs, FPGAs comprise an array of uncommitted circuit elements, called logic blocks, and interconnect resources, but FPGA configuration is performed through programming by the end user. An illustration of a typical FPGA architecture appears in the figure below. As the only type of FPD that supports very high logic capacity, FPGAs have been responsible for a major shift in the way digital circuits are designed.
Commercially Available FPGAs:

As one of the largest growing segments of the semiconductor industry, the FPGA
market-place is volatile. As such, the pool of companies involved changes rapidly and it
is somewhat difficult to say which products will be the most significant when the industry
reaches a stable state. For this reason, and to provide a more focused discussion, we
will not mention all of the FPGA manufacturers that currently exist, but will instead focus
on those companies whose products are in widespread use at this time. In describing
each device we will list its capacity, nominally in 2-input NAND gates as given by the
vendor. Gate count is an especially contentious issue in the FPGA industry, and so the
numbers given in this paper for all manufacturers should not be taken too seriously.
Wags have taken to calling them "dog gates," in reference to the traditional ratio between human and dog years.

There are two basic categories of FPGAs on the market today:

1. SRAM-based FPGAs and

2. antifuse-based FPGAs.

In the first category, Xilinx and Altera are the leading manufacturers in terms of number of users, with the major competitor being AT&T. For antifuse-based products, Actel, QuickLogic, Cypress, and Xilinx offer competing products.

SRAM based FPGA:

The basic structure of Xilinx FPGAs is array-based, meaning that each chip comprises a two-dimensional array of logic blocks that can be interconnected via horizontal and vertical routing channels.
Spartan 3E family:

The Spartan-3E family of Field-Programmable Gate Arrays (FPGAs) is specifically designed to meet the needs of high-volume, cost-sensitive consumer electronics applications. The five-member family offers densities ranging from 100,000 to 1.6 million system gates. The Spartan-3E family builds on the success of the earlier Spartan-3 family by increasing the amount of logic per I/O, significantly reducing the cost per logic cell. New features improve system performance and reduce the cost of configuration. These Spartan-3E FPGA enhancements, combined with advanced 90 nm process technology, deliver more functionality and bandwidth per dollar than was previously possible, setting new standards in the programmable logic industry. Because of their exceptionally low cost, Spartan-3E FPGAs are ideally suited to a wide range of consumer electronics applications, including broadband access, home networking, display/projection, and digital television equipment. The Spartan-3E family is a superior alternative to mask-programmed ASICs. FPGAs avoid the high initial cost, the lengthy development cycles, and the inherent inflexibility of conventional ASICs. Also, FPGA programmability permits design upgrades in the field with no hardware replacement necessary, an impossibility with ASICs.

Features:

Very low cost, high-performance logic solution for high-volume, consumer-oriented applications

Proven advanced 90-nanometer process technology

Multi-voltage, multi-standard SelectIO interface pins

- Up to 376 I/O pins or 156 differential signal pairs

- LVCMOS, LVTTL, HSTL, and SSTL single-ended signal standards


Architectural Overview:

Spartan 3E Architecture:

The Spartan-3E family architecture consists of five fundamental programmable functional elements:

Configurable Logic Blocks (CLBs) contain flexible Look-Up Tables (LUTs) that implement logic plus storage elements used as flip-flops or latches. CLBs perform a wide variety of logical functions as well as store data.

Input/Output Blocks (IOBs) control the flow of data between the I/O pins and the internal logic of the device. Each IOB supports bidirectional data flow plus 3-state operation and a variety of signal standards, including four high-performance differential standards. Double Data-Rate (DDR) registers are included.

Block RAM provides data storage in the form of 18-Kbit dual-port blocks.

Multiplier Blocks accept two 18-bit binary numbers as inputs and calculate the product (see the sketch below).

Digital Clock Manager (DCM) Blocks provide self-calibrating, fully digital solutions for distributing, delaying, multiplying, dividing, and phase-shifting clock signals.
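As a short, hedged example of how two of these elements are commonly reached from VHDL (the actual mapping depends on the synthesis tool and constraints, so treat this only as a sketch), a registered 18x18 signed product is normally placed on a dedicated multiplier block, and a synchronously read and written 1K x 18 array can be inferred as an 18-Kbit block RAM:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch: the product below typically maps to a dedicated 18x18 multiplier,
-- and the registered memory array can be inferred as block RAM.
entity spartan3e_primitives_demo is
  port (
    clk  : in  std_logic;
    a, b : in  signed(17 downto 0);
    p    : out signed(35 downto 0);
    we   : in  std_logic;
    addr : in  unsigned(9 downto 0);
    din  : in  std_logic_vector(17 downto 0);
    dout : out std_logic_vector(17 downto 0)
  );
end entity;

architecture rtl of spartan3e_primitives_demo is
  type ram_t is array (0 to 1023) of std_logic_vector(17 downto 0);
  signal ram : ram_t;
begin
  process (clk) begin
    if rising_edge(clk) then
      p <= a * b;                          -- registered 18x18 multiply
      if we = '1' then
        ram(to_integer(addr)) <= din;      -- synchronous write
      end if;
      dout <= ram(to_integer(addr));       -- synchronous (read-first) read
    end if;
  end process;
end architecture;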
