PROPOSED ABSTRACT:
The proposed NoC is based on new error detection mechanisms suited to dynamic NoCs, where the number and position of processing elements or faulty blocks vary at runtime. Recently, the trend in embedded systems has been moving toward multiprocessor systems-on-chip (MPSoCs) in order to meet the requirements of real-time applications. The complexity of these SoCs is increasing, and the communication medium is becoming a major issue in these MPSoCs. Generally, integrating a network-on-chip (NoC) into the SoC provides an effective means to interconnect several processing elements (PEs) or intellectual property (IP) blocks. The NoC medium features a high level of modularity, flexibility, and throughput. An NoC comprises routers and interconnections that allow communication between the PEs and/or IPs, and it relies on data packet exchange. The path a data packet takes between a source and a destination through the routers is defined by the routing algorithm. Therefore, the path a data packet is allowed to take in the network depends mainly on the adaptiveness permitted by the routing algorithm, which is applied locally in each router being crossed and to each data packet.
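As an illustration of such locally applied routing, the sketch below models a simple deterministic XY routing decision in Python. This is only a behavioral model for illustration (the design itself is in VHDL), and the function names, port names, and coordinate scheme are our own assumptions, not taken from the proposed NoC.

```python
# Behavioral sketch of locally applied XY routing (illustrative names only).
# Each router decides the next hop from its own (x, y) coordinates and the
# packet's destination; the full path emerges from these local decisions.

def xy_route(router, dest):
    """Output port chosen locally by `router` for a packet heading to `dest`."""
    rx, ry = router
    dx, dy = dest
    if dx > rx:
        return "EAST"
    if dx < rx:
        return "WEST"
    if dy > ry:
        return "NORTH"
    if dy < ry:
        return "SOUTH"
    return "LOCAL"  # the packet has reached its destination router

def path(src, dest):
    """Follow the per-router decisions hop by hop to build the whole path."""
    hops, cur = [src], src
    while cur != dest:
        x, y = cur
        cur = {"EAST": (x + 1, y), "WEST": (x - 1, y),
               "NORTH": (x, y + 1), "SOUTH": (x, y - 1)}[xy_route(cur, dest)]
        hops.append(cur)
    return hops
```

For example, `path((0, 0), (2, 1))` first exhausts the X offset and then the Y offset, visiting (1, 0) and (2, 0) before (2, 1); an adaptive algorithm would instead pick among the permitted ports at each hop.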
With the increasing complexity of SoCs and the evolution of their reliability, MPSoCs are becoming more sensitive to phenomena that generate permanent, transient, or intermittent faults. These faults may generate data packet errors, or may affect router behavior, leading to data packet losses or permanent routing errors. Indeed, a fault in the routing logic will often lead to packet routing errors and might even crash the router. To detect these errors, specific error detection blocks are required in the network to locate the faulty sources. Moreover, permanent errors must be distinguished from transient errors: the precise location of permanently faulty parts of the NoC must be determined so that they can be bypassed effectively by the adaptive routing algorithm. To protect data packets against errors, error correcting codes (ECCs) are implemented inside the NoC components. Among the well-known solutions, three are usually applied for NoC-based MPSoC communications. First, the end-to-end solution requires an ECC to be implemented in each input port of the IPs or PEs in the NoC. The main drawback of this solution is its inability to locate the faulty components in the NoC; consequently, it is inadequate for dynamic NoCs, where the faulty and unavailable zones must be bypassed. Second, switch-to-switch detection is based on the implementation of an ECC in each input port of the NoC switches. For instance, in a router with four communication directions, four ECC blocks are implemented. Therefore, when a router receives a data packet from a neighbor, the ECC block analyzes its content to check the correctness of the data. This process detects and corrects data errors according to the effectiveness of the ECC being used. Third, another proposed solution is code disjoint. In this approach, routers include one ECC in each input and output data port. This solution localizes the error sources, which can be either in the switches or on the data links between routers. However, if an error source is localized inside a router, this mechanism disables the entire switch. These online detection mechanisms cannot disconnect just the faulty parts of the NoC, and hence do not give an accurate localization of the source of errors. The result is that the network throughput decreases while the network load and data packet latency increase. Moreover, they are not able to distinguish between permanent and transient errors. For all these techniques, each ECC implemented in the routers of the network adds cost in terms of logic area, latency in data packet transmission, and power consumption.
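To make the ECC idea concrete, the following Python sketch models a Hamming(7,4) code, a classic single-error-correcting code of the kind such ECC blocks can use. This is an illustrative behavioral model only; it is not claimed to be the coding scheme of the proposed NoC.

```python
# Illustrative Hamming(7,4) single-error-correcting code (behavioral model,
# not the VHDL ECC of the proposed design). Codeword layout (1-based
# positions): p1 p2 d1 p3 d2 d3 d4.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit; return (data, error_position or 0)."""
    c = c[:]                              # do not mutate the caller's codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # re-check positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # re-check positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3       # 1-based flipped position, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]], syndrome
```

A nonzero syndrome both corrects the bit and tells the router *which* bit was hit, which is the basic ingredient the per-port detection schemes above build on; what they cannot do by themselves is tell a permanent fault from a transient one.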
We designed online detection mechanisms for data packet errors and adaptive routing algorithm errors. Both presented mechanisms are able to distinguish permanent from transient errors and to accurately localize the position of the faulty blocks (data bus, input port, output port) in the NoC routers, while preserving the throughput, the network load, and the data packet latency. We provide a localization capacity analysis of the presented mechanisms, NoC performance evaluations, and field-programmable gate array synthesis results.
Language: VHDL
HARDWARE REQUIREMENT:
Device: XC3S500E
Advantages
The data packets are sent at high speed using a priority-based approach.
Power consumption is reduced by using only particular nodes at a time.
A self-repairing concept is implemented.
Disadvantages
In the existing design, if all nodes are used, the power consumption increases.
Algorithm:
[Block diagram: FPGA containing a clock generator, clock divider, input buffer, receiver module, nodes, transmitter module, and output buffer; an information source feeds the input buffer, and the output connects to the network.]
MODULE 1: DATA TRANSMISSION
MODULE DESCRIPTION:
The data transmission module is the basic module in the smart reliable network on chip. In the transmission module, we first encrypt the data or information before transmission, and then transmit the encrypted data according to the enable signal.
MODULE 2: DESIGN AND ANALYSIS OF NODES
MODULE DESCRIPTION:
In this module, we design and analyze the nodes in the reliable network. Each node in the network is designed to establish the link between the source and the destination. The nodes in the network have different node values; all possible route values are calculated before the data is transmitted to the destination, and, with respect to these node values, the most reliable path through the network is chosen.
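This route-value selection can be sketched behaviorally as follows. The Python model below is for illustration only: the example graph, node values, and function names are our own assumptions, not part of the VHDL design, and we treat "most reliable" as the path whose summed node values are highest.

```python
# Behavioral sketch of route-value-based path selection (illustrative only).
# `adj` maps each node to its neighbors; `node_value` holds the per-node
# reliability value mentioned in the module description.

def all_paths(adj, src, dst, seen=None):
    """Enumerate every simple (loop-free) path from src to dst."""
    seen = (seen or set()) | {src}
    if src == dst:
        yield [src]
        return
    for nxt in adj[src]:
        if nxt not in seen:
            for rest in all_paths(adj, nxt, dst, seen):
                yield [src] + rest

def best_path(adj, node_value, src, dst):
    """Score every possible route and keep the highest-valued one."""
    return max(all_paths(adj, src, dst),
               key=lambda p: sum(node_value[n] for n in p))
```

For example, with `adj = {"S": ["A", "B"], "A": ["D"], "B": ["D"], "D": []}` and node values `{"S": 1.0, "A": 0.9, "B": 0.5, "D": 1.0}`, the route through A scores 2.9 against 2.5 through B, so the S-A-D path is chosen.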
MODULE 3: DESIGN AND ANALYSIS OF NETWORK
MODULE DESCRIPTION:
In this module, we design the network and analyze its performance. The network is designed by making links between the active nodes in the design. To achieve reliable operation, we use an advanced adaptive routing algorithm: faulty nodes in the network are detected adaptively, ensuring reliability.
MODULE 4: INTEGRATION
MODULE DESCRIPTION:
In this module, we integrate all the sub-modules and analyze the performance.
Architecture Diagram
Introduction
Network on a chip
The wires in the links of the NoC are shared by many signals. A high
level of parallelism is achieved, because all links in the NoC can operate simultaneously
on different data packets. Therefore, as the complexity of integrated systems keeps
growing, a NoC provides enhanced performance (such as throughput) and scalability in
comparison with previous communication architectures (e.g., dedicated point-to-point
signal wires, shared buses, or segmented buses with bridges). Of course, the
algorithms must be designed in such a way that they offer large parallelism and can
hence utilize the potential of NoC.
NoC links can reduce the complexity of designing wires for predictable
speed, power, noise, reliability, etc. From a system design viewpoint, with the advent of
multi-core processor systems, a network is a natural architectural choice. A NoC can
provide separation between computation and communication, support modularity and IP
reuse via standard interfaces, handle synchronization issues, serve as a platform for
system test, and, hence, increase engineering productivity.
ALGORITHM FLOW
SOFTWARE INTRODUCTION
VLSI DESIGN:
VLSI stands for "Very Large Scale Integration". This is the field which involves packing more and more logic devices into smaller and smaller areas. Thanks to VLSI, circuits that would once have taken entire boards of space can now be put into a space a few millimeters across! This has opened up a big opportunity to do things that were not possible before. VLSI circuits are everywhere: your computer, your car, your brand new state-of-the-art digital camera, your cell phone, and so on. All this involves a lot of expertise on many fronts within the same field, which we will look at in later sections.
VLSI has been around for a long time; there is nothing new about it. But as a side effect of advances in the world of computers, there has been a dramatic proliferation of tools that can be used to design VLSI circuits. Alongside, obeying Moore's law, the capability of an IC has increased exponentially over the years in terms of computation power, utilization of available area, and yield. The combined effect of these two advances is that people can now put diverse functionality into ICs, opening up new frontiers. Examples are embedded systems, where intelligent devices are put inside everyday objects, and ubiquitous computing, where small computing devices proliferate to such an extent that even the shoes you wear may actually do something useful like monitoring your heartbeat! These two fields are closely related, and getting into their description could easily lead to another article.
Digital VLSI circuits are predominantly CMOS based. The way normal
blocks like latches and gates are implemented is different from what students have seen
so far, but the behavior remains the same. All the miniaturization involves new things to
consider. A lot of thought has to go into actual implementations as well as design. Let us
look at some of the factors involved.
1. Circuit Delays: Large, complicated circuits running at very high frequencies have one big problem to tackle: the problem of delays in the propagation of signals through gates and wires, even across areas only a few micrometers wide! Operating speeds are so high that, as the delays add up, they can actually become comparable to the clock period.
2. Layout: Laying out the circuit components is a task common to all branches of electronics; what is special in VLSI is the sheer scale involved. The power dissipation and speed of a circuit present a trade-off: if we try to optimize one, the other is affected. The choice between the two is determined by the way we choose to lay out the circuit components. Layout can also affect the fabrication of VLSI chips, making the components either easy or difficult to implement on the silicon.
MODELSIM SIMULATOR
Designers of digital systems are inevitably faced with the task of testing their
designs. Each design can be composed of many components, each of which has to be
tested in isolation and then integrated into a design when it operates correctly. To verify
that a design operates correctly we use simulation, which is a process of testing the
design by applying inputs to a circuit and observing its behavior. The output of a
simulation is a set of waveforms that show how a circuit behaves based on a given
sequence of inputs. The general flow of a simulation is shown in the figure below. There
are two main types of simulation: functional and timing simulation. The functional
simulation tests the logical operation of a circuit without accounting for delays in the
circuit. Signals are propagated through the circuit using logic and wiring delays of zero.
This simulation is fast and useful for checking the fundamental correctness of the
designed circuit. The second step of the simulation process is the timing simulation. It is
a more complex type of simulation, where logic components and wires take some time
to respond to input stimuli. In addition to testing the logical operation of the circuit, it
shows the timing of signals in the circuit. This type of simulation is more realistic than
the functional simulation; however, it takes longer to perform.
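The difference between the two simulation types can be sketched with a toy model. The Python code below is illustrative only: the circuit (out = (a AND b) OR c) and the unit gate delays are made-up examples, not device data or ModelSim internals.

```python
# Toy circuit out = (a AND b) OR c, simulated two ways (illustrative model).
GATE_DELAY = {"AND": 2, "OR": 3}  # assumed unit delays, not real device data

def functional_sim(a, b, c):
    """Functional simulation: zero-delay evaluation, logic correctness only."""
    return (a & b) | c

def timing_sim(a, b, c):
    """Timing simulation: same logic, but also report when the output settles."""
    t_and = GATE_DELAY["AND"]          # AND output is valid after 2 time units
    t_out = t_and + GATE_DELAY["OR"]   # the OR stage adds 3 more units
    return (a & b) | c, t_out
```

Both simulations agree on the logical value; only the timing simulation reveals that the output is not valid until 5 time units after the inputs change, which is exactly the extra information (at extra simulation cost) described above.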
VHDL can wear many hats. It is being used for documentation, verification, and
synthesis of large digital designs. This is actually one of the key features of VHDL, since
the same VHDL code can theoretically achieve all three of these goals, thus saving a lot
of effort. In addition to being used for each of these purposes, VHDL can be used to
take three different approaches to describing hardware. These three different
approaches are the structural, data flow, and behavioral methods of hardware
description. Most of the time a mixture of the three methods is employed. The following
sections introduce you to the language by examining its use for each of these three
methodologies. There are also certain guidelines that form an approach to using VHDL
for synthesis.
Design Flow:
As mentioned above, one of the major utilities of VHDL is that it allows the
synthesis of a circuit or system in a programmable device (PLD or FPGA) or in an ASIC.
The steps followed during such a project are summarized in the figure below. We start the design by writing the VHDL code, which is saved in a file with the extension .vhd and the same name as its ENTITY. The first step in the synthesis process is compilation.
FPGA
FPGAs fill a gap between discrete logic and the smaller PLDs on the low end of the complexity scale, and costly custom ASICs on the high end. They consist of an
array of logic blocks that are configured using software. Programmable I/O blocks
surround these logic blocks. Both are connected by programmable interconnects (Fig.
1). The programming technology in an FPGA determines the type of basic logic cell and
the interconnect scheme. In turn, the logic cells and interconnection scheme determine
the design of the input and output circuits as well as the programming scheme.
FPGAs offer all of the features needed to implement most complex designs.
Clock management is facilitated by on-chip PLL (phase-locked loop) or DLL (delay-
locked loop) circuitry. Dedicated memory blocks can be configured as basic single-port
RAMs, ROMs, FIFOs, or CAMs. Data processing, as embodied in the device's logic fabric, varies widely. The ability to link the FPGA with backplanes, high-speed buses, and memories is afforded by support for various single-ended and differential I/O standards. Also found on today's FPGAs are system-building resources such as high-speed serial I/Os, arithmetic modules, embedded processors, and large amounts of
memory.
The highest capacity general purpose logic chips available today are the
traditional gate arrays sometimes referred to as Mask-Programmable Gate Arrays
(MPGAs). MPGAs consist of an array of pre-fabricated transistors that can be
customized into the user's logic circuit by connecting the transistors with custom wires.
Customization is performed during chip fabrication by specifying the metal interconnect,
and this means that in order for a user to employ an MPGA a large setup cost is
involved and manufacturing time is long. Although MPGAs are clearly not FPDs, they
are mentioned here because they motivated the design of the user-programmable
equivalent: Field-Programmable Gate Arrays (FPGAs). Like MPGAs, FPGAs comprise
an array of uncommitted circuit elements, called logic blocks, and interconnect
resources, but FPGA configuration is performed through programming by the end user.
An illustration of a typical FPGA architecture appears in the figure below. As the only type of
FPD that supports very high logic capacity, FPGAs have been responsible for a major
shift in the way digital circuits are designed.
Commercially Available FPGAs:
As one of the largest growing segments of the semiconductor industry, the FPGA
market-place is volatile. As such, the pool of companies involved changes rapidly and it
is somewhat difficult to say which products will be the most significant when the industry
reaches a stable state. For this reason, and to provide a more focused discussion, we
will not mention all of the FPGA manufacturers that currently exist, but will instead focus
on those companies whose products are in widespread use at this time. In describing
each device we will list its capacity, nominally in 2-input NAND gates as given by the
vendor. Gate count is an especially contentious issue in the FPGA industry, and so the
numbers given in this paper for all manufacturers should not be taken too seriously.
Wags have taken to calling them "dog gates," in reference to the traditional ratio between human and dog years.
By programming technology, FPGAs fall into two categories:
1. SRAM-based FPGAs;
2. antifuse-based FPGAs.
In the first category, Xilinx and Altera are the leading manufacturers in terms of number of users, with the major competitor being AT&T. For antifuse-based products, Actel, QuickLogic, Cypress, and Xilinx offer competing products.
The basic structure of Xilinx FPGAs is array-based, meaning that each chip comprises a
two dimensional array of logic blocks that can be interconnected via horizontal and
vertical routing channels.
Spartan 3E family:
The Spartan-3E family of FPGAs is a cost-effective alternative to mask-programmed ASICs. FPGAs avoid the high initial cost, the lengthy development cycles, and the inherent inflexibility of conventional ASICs. Also, FPGA programmability permits design upgrades in the field with no hardware replacement necessary, an impossibility with ASICs.
Features:
Spartan 3E Architecture:
Configurable Logic Blocks (CLBs) contain flexible Look-Up Tables (LUTs) that
implement logic plus storage elements used as flip-flops or latches. CLBs perform a
wide variety of logical functions as well as store data.
Input/Output Blocks (IOBs) control the flow of data between the I/O pins and the internal logic of the device. Each IOB supports bidirectional data flow plus 3-state operation, and supports a variety of signal standards, including four high-performance differential standards. Double Data-Rate (DDR) registers are included.
Block RAM provides data storage in the form of 18-Kbit dual-port blocks.
Multiplier Blocks accept two 18-bit binary numbers as inputs and calculate the product.
Digital Clock Manager (DCM) Blocks provide self-calibrating, fully digital solutions for distributing, delaying, multiplying, dividing, and phase-shifting clock signals.
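As a behavioral illustration of the simplest clock-division case (divide-by-2, i.e. a single toggle flip-flop), consider the Python sketch below. It is a software model for illustration only, not the DCM circuitry, and the sample-list representation of the clock is our own assumption.

```python
# Behavioral model of a divide-by-2 clock divider (illustrative, not the DCM).
# The input clock is given as a list of 0/1 samples; the output toggles on
# each rising edge of the input, so its period is twice the input period.

def clock_div2(clk):
    """Return the divided clock, sample for sample, toggling on rising edges."""
    out, level, prev = [], 0, 0
    for s in clk:
        if prev == 0 and s == 1:  # rising edge detected on the input clock
            level ^= 1            # toggle the output (T flip-flop behavior)
        out.append(level)
        prev = s
    return out
```

Feeding in four full input periods, e.g. `clock_div2([0, 1, 0, 1, 0, 1, 0, 1])`, yields two full output periods, halving the frequency; cascading such stages divides by powers of two, while the DCM's fully digital logic supports the richer set of ratios and phase shifts listed above.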