
Assignment No: 01 Date: 14-09-2018

Group Members:
Name: Itlal Ahmed Reg No: FA15-BSE-008
Name: Yawar Abbas Reg No: FA15-BSE-080
Name: Imran Khan Reg No: FA15-BSE-142
Name: Hamza Ejaz Reg No: FA15-BSE-110
Name: Zawar Hussain Reg No: FA15-BSE-117
Name: Qaim Raza Reg No: FA15-BSE-041
Name: Shoaib Maqbool Reg No: FA15-BSE-053
Name: Arif Mehmood Reg No: FA15-BSE-070
Name: Hammad Ahmad Reg No: FA14-BCS-093

1. Harvard and Von Neumann Architecture:


Harvard Architecture:
The Harvard architecture is a computer architecture with physically
separate storage and signal pathways for instructions and data. The term originated
from the Harvard Mark I relay-based computer, which stored instructions on punched
tape (24 bits wide) and data in electro-mechanical counters. These early machines had
data storage entirely contained within the central processing unit, and provided no
access to the instruction storage as data. Programs needed to be loaded by an
operator; the processor could not initialize itself.
Today, most processors implement such separate signal pathways for performance
reasons, but actually implement a modified Harvard architecture, so they can support
tasks like loading a program from disk storage as data and then executing it.

Von Neumann Architecture:


Von Neumann architecture was first published by John von Neumann in 1945.
In the Von Neumann architecture, also known as the Von Neumann model, the
computer consists of a CPU, memory and I/O devices. The program is stored in
the memory. The CPU fetches one instruction from the memory at a time and executes it.
Thus, the instructions are executed sequentially, which is a slow process. Von Neumann
machines are called control-flow computers because instructions are executed
sequentially, as controlled by a program counter. To increase speed, parallel
computers have been developed, in which several CPUs are connected in parallel to solve
a problem. Even in parallel computers, the basic building blocks are Von Neumann
processors.
The von Neumann architecture is a design model for a stored-program digital
computer that uses a processing unit and a single separate storage structure to hold
both instructions and data. It is named after the mathematician and early computer
scientist John von Neumann. Such a computer implements a universal Turing machine and
is the common "referential model" for specifying sequential architectures, in contrast
with parallel architectures.
It uses one shared memory for instructions (program) and data, with one data bus and
one address bus between the processor and memory. Instructions and data have to be
fetched in sequential order (known as the Von Neumann bottleneck), limiting the
operation bandwidth. Its design is simpler than that of the Harvard architecture. It is
mostly used to interface to external memory.
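
As a minimal sketch of this fetch-execute cycle in C (the opcodes and memory layout
below are invented for illustration, not any real machine's instruction set), a single
array can hold both the program and its data, with a program counter stepping through
it sequentially:

#include <stdio.h>

/* Hypothetical opcodes; one shared memory holds program and data alike. */
enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void) {
    int mem[16] = {
        LOAD, 10,      /* acc = mem[10]   */
        ADD,  11,      /* acc += mem[11]  */
        STORE, 12,     /* mem[12] = acc   */
        HALT, 0,
        0, 0,
        [10] = 7, [11] = 5, [12] = 0   /* data shares the same memory */
    };
    int pc = 0, acc = 0;

    for (;;) {                    /* the sequential fetch-execute cycle */
        int op   = mem[pc++];     /* fetch the opcode...               */
        int addr = mem[pc++];     /* ...then its operand, in order     */
        if (op == HALT) break;
        if (op == LOAD)  acc = mem[addr];
        if (op == ADD)   acc += mem[addr];
        if (op == STORE) mem[addr] = acc;
    }
    printf("mem[12] = %d\n", mem[12]);   /* prints mem[12] = 12 */
    return 0;
}

Because code and operands share one memory and one bus, every operand access competes
with the next instruction fetch, which is exactly the bottleneck described above.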

Comparison of Von Neumann and Harvard Architecture:


Basics of Von Neumann and Harvard Architecture:
The Von Neumann architecture is a theoretical computer design based on the concept
of the stored program, where programs and data are stored in the same memory. The
concept was designed by the mathematician John von Neumann in 1945 and presently
serves as the basis of almost all modern computers. The Harvard architecture is based
on the original Harvard Mark I relay-based computer, which employed separate buses
for data and instructions.

Memory System of Von Neumann and Harvard Architecture:


The Von Neumann architecture has only one bus, used for both instruction fetches
and data transfers, so the two operations must be scheduled because they cannot
be performed at the same time. The Harvard architecture, on the other hand, has
separate memory spaces for instructions and data, with physically separate signals
and storage for code and data, which in turn makes it possible to access both
memory systems simultaneously.
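
The structural difference can be sketched in a few lines of C (the sizes and field
names are hypothetical, purely illustrative):

#include <stdio.h>

/* Von Neumann: one address space shared by code and data. */
struct vn_machine { int mem[256]; };

/* Harvard: two independent address spaces, each with its own bus, so an
   instruction fetch and a data access can be serviced in the same cycle. */
struct harvard_machine { int imem[256]; int dmem[256]; };

int main(void) {
    struct harvard_machine h = {{0}, {0}};
    h.imem[0] = 42;   /* address 0 of instruction memory...             */
    h.dmem[0] = 7;    /* ...and address 0 of data memory are different, */
                      /* physically separate locations                  */
    printf("imem[0]=%d dmem[0]=%d\n", h.imem[0], h.dmem[0]);
    return 0;
}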

Instruction Processing of Von Neumann and Harvard Architecture:


In Von Neumann architecture, the processing unit would need two clock cycles to
complete an instruction. The processor fetches the instruction from memory in the first
cycle and decodes it, and then the data is taken from memory in the second cycle. In
the Harvard architecture, the processing unit can complete an instruction in one cycle if
appropriate pipelining strategies are in place.
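
A toy cycle-count model (not cycle-accurate for any real processor) makes the
difference concrete: with one memory port, the instruction fetch and the data access
it triggers occupy two separate cycles, while with two ports they overlap once the
pipeline has filled:

#include <stdio.h>

int main(void) {
    int n = 100;   /* instructions, each needing one data access */

    /* Von Neumann: the single bus serializes fetch and data access. */
    int vn_cycles = 2 * n;

    /* Harvard, two-stage pipeline: instruction i+1 is fetched on the
       instruction bus while instruction i reads its data on the data
       bus, so steady state is one instruction per cycle, plus one
       cycle to fill the pipeline. */
    int harvard_cycles = n + 1;

    printf("Von Neumann: %d cycles, Harvard: %d cycles\n",
           vn_cycles, harvard_cycles);
    return 0;
}
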
Cost of Von Neumann and Harvard Architecture:
As instructions and data use the same bus system in the Von Neumann architecture, it
simplifies the design and development of the control unit, which keeps production
cost to a minimum. Development of the control unit in the Harvard architecture is
more expensive, because the architecture is more complex, employing two buses for
instructions and data.

Use of Von Neumann and Harvard Architecture:


Von Neumann architecture is used in almost every machine you see, from desktop
computers and notebooks to high-performance computers and workstations. Harvard
architecture is a fairly new concept used primarily in microcontrollers and digital
signal processing (DSP).

Internal vs. External Design:


Modern high performance CPU chip designs incorporate aspects of both Harvard and
von Neumann architecture. In particular, the "split cache" version of the modified
Harvard architecture is very common. CPU cache memory is divided into an instruction
cache and a data cache; the Harvard architecture is thus used when the CPU accesses the cache.
In the case of a cache miss, however, the data is retrieved from the main memory,
which is not formally divided into separate instruction and data sections, although it may
well have separate memory controllers used for concurrent access to RAM, ROM and
(NOR) flash memory.
Thus, while a von Neumann architecture is visible in some contexts, such as when data
and code come through the same memory controller, the hardware implementation gains
the efficiencies of the Harvard architecture for cache accesses and at least some
main memory accesses.
In addition, CPUs often have write buffers which let them proceed after writes to
non-cached regions. The von Neumann nature of memory is then visible when instructions
are written as data by the CPU: software must ensure that the caches (data and
instruction) and the write buffer are synchronized before trying to execute those
just-written instructions.
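
A concrete instance of this synchronization (a sketch assuming a POSIX system and the
GCC/Clang builtin __builtin___clear_cache; the machine-code bytes are x86-64, where
the builtin is nearly a no-op because the hardware keeps the instruction cache
coherent, whereas on ARM it performs the flush this paragraph describes):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* A writable and executable page; hardened systems may refuse this. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;

    memcpy(buf, code, sizeof code);   /* instructions written as data */

    /* Make the instruction side see the just-written bytes before
       executing them: flush the data cache and invalidate the
       instruction cache for this range where the hardware requires it. */
    __builtin___clear_cache((char *)buf, (char *)buf + sizeof code);

    int (*fn)(void) = (int (*)(void))buf;   /* treat the data as code */
    printf("%d\n", fn());                   /* prints 42 */
    return 0;
}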

Use of caches:
At higher clock speeds, caches are useful because memory speed is proportionally
slower. Harvard architectures tend to be targeted at higher-performance systems, and
so caches are nearly always used in such systems.
Von Neumann architectures usually have a single unified cache, which stores both
instructions and data. The proportion of each in the cache is variable, which may be
a good thing. It would in principle be possible to have separate instruction and data
caches, but this would not be very useful, as only one cache could ever be accessed
at a time.
Caches for Harvard architectures are very useful. Such a system has a separate cache
for each bus. Trying to use a shared cache on a Harvard architecture would be very
inefficient, since only one bus could then be fed at a time. Having two caches means
it is possible to feed both buses simultaneously, which is exactly what a Harvard
architecture needs.
This also allows a very simple unified memory system behind the caches, using the same
address space for both instructions and data. This gets around the problems of literal
pools and self-modifying code. What it does mean, however, is that when starting with
empty caches, instructions and data must be fetched from the single memory system at
the same time; two memory accesses are then needed before the core has everything it
requires, and performance is no better than a von Neumann architecture. As the caches
fill up, however, it becomes much more likely that the instruction or data value is
already cached, so only one of the two has to be fetched from memory. The other can be
supplied directly from its cache with no additional delay. The best performance is
achieved when both instructions and data are supplied by the caches, with no need to
access external memory at all.
This is the most sensible compromise, and it is the approach used by ARM's Harvard
processor cores. Two completely separate memory systems can perform better, but are
difficult to implement.
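
A sketch of the split-cache idea (a hypothetical direct-mapped cache, not the
organization of any particular ARM core): each bus gets its own small cache, and the
two can be probed in the same cycle because they are independent:

#include <stdbool.h>
#include <stdio.h>

#define LINES 64   /* hypothetical direct-mapped cache: 64 one-word lines */

struct cache {
    bool     valid[LINES];
    unsigned tag[LINES];
    int      data[LINES];
};

/* Probe one cache; returns true on a hit. */
static bool lookup(struct cache *c, unsigned addr, int *out) {
    unsigned idx = addr % LINES, tag = addr / LINES;
    if (c->valid[idx] && c->tag[idx] == tag) {
        *out = c->data[idx];
        return true;
    }
    return false;
}

int main(void) {
    static struct cache icache, dcache;   /* one cache per bus */
    int insn = 0, data = 0;

    /* In a split-cache (modified Harvard) core these two probes happen in
       the same cycle; a single unified cache could serve only one. */
    bool ihit = lookup(&icache, 0x100, &insn);
    bool dhit = lookup(&dcache, 0x200, &data);
    printf("I-cache %s, D-cache %s\n",
           ihit ? "hit" : "miss", dhit ? "hit" : "miss");
    return 0;
}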

Comparison of the two in running programs:


Because of its wider instruction word, the Harvard architecture supports more
instructions with less hardware. For example, a processor with a 24-bit instruction
length could have 2^24 = 16,777,216 instructions, far more than a 16-bit processor
(2^16 = 65,536). With the uniform bus width of the von Neumann architecture, the
processor would need correspondingly more hardware in its data path if it wanted a
24-bit instruction width. Secondly, two buses accessing memory simultaneously save
CPU time. The von Neumann processor has to perform each command in two steps: first
read the instruction, and then read the data that the instruction requires. The
Harvard architecture, by contrast, can read an instruction and its data at the same
time, with the two buses working in parallel. Evidently, the parallel method is faster
and more efficient, because it takes only one step per command. As a result, the
Harvard architecture is especially powerful in digital signal processing: most DSP
commands require data memory access, so the two-bus architecture saves considerable
CPU time.
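
The instruction-count arithmetic above is just powers of two; a few lines verify it:

#include <stdio.h>

int main(void) {
    /* An n-bit instruction word can encode 2^n distinct patterns. */
    printf("24-bit: %lu encodings\n", 1UL << 24);   /* 16777216 */
    printf("16-bit: %lu encodings\n", 1UL << 16);   /* 65536    */
    return 0;
}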

Comparison Chart:

Von Neumann: a theoretical design based on the stored-program computer concept.
Harvard: a modern computer architecture based on the Harvard Mark I relay-based computer model.

Von Neumann: uses the same physical memory addresses for instructions and data.
Harvard: uses separate memory addresses for instructions and data.

Von Neumann: the processor needs two clock cycles to execute an instruction.
Harvard: the processor needs one cycle to complete an instruction.

Von Neumann: simpler control unit design; development is cheap and fast.
Harvard: the control unit for two buses is more complicated, which adds to development cost.

Von Neumann: data transfers and instruction fetches cannot be performed simultaneously.
Harvard: data transfers and instruction fetches can be performed at the same time.

Von Neumann: used in personal computers, laptops and workstations.
Harvard: used in microcontrollers and signal processing.

Advantages and Disadvantages of Harvard and Von Neumann Architectures:
Advantages of Harvard Architecture:
1. Efficient pipelining: operand fetch and instruction fetch can be overlapped.
2. Separate buses for data and instructions.
3. Tailored towards an FPGA implementation.
4. Faster execution time, since it allows concurrent access to data and instructions.
5. Two or more internal data buses allow simultaneous access to both instructions
and data.

Disadvantages of Harvard Architecture:


1. Not widely used.
2. More difficult to implement.
3. More pins.

Advantages of Von Neumann Architecture:


1. One of the major advantages of the Von Neumann architecture is that separate
memories and buses are not needed for data and instructions.
2. This reduces hardware requirements and makes computers cheaper.
3. Data from memory and devices are accessed in the same way.
4. Memory organization is in the hands of the programmer.
5. The control unit gets data and instructions in the same way from one memory.
Disadvantages of Von Neumann Architecture:
1. Executes instructions serially.
2. Limited by the time it takes to process each instruction.
3. Some registers sit unused during the fetch-decode-execute cycle.
4. Parallel execution has to be simulated by the operating system.

2. CISC and RISC computer architecture:


1. CISC [Complex Instruction Set Computing]:

1. Very large instruction sets reaching up to and above three hundred separate
instructions.

2. Performance was improved by allowing the simplification of program compilers, as
the range of more advanced instructions available meant fewer refinements had to be
made during compilation.

3. More specialized addressing modes and registers are also implemented, with
variable-length instruction codes.

4. Instruction pipelining cannot be implemented easily.

5. Many complex instructions can access memory, such as a direct addition between
data in two memory locations (see the sketch after this list).

6. Mainly used in normal PCs, workstations and servers.

7. CISC systems shorten execution time by reducing the number of instructions per
program.

8. Examples of CISC Processors: Intel x86.
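
As an illustration of point 5 (a hypothetical memory-to-memory add, simulated in C
rather than taken from any real CPU's encoding), a single CISC-style instruction names
two memory locations, and the hardware performs both reads, the addition and the
write-back internally:

#include <stdio.h>

int mem[256];

/* One CISC-style instruction: mem[dst] += mem[src].
   A single opcode triggers two memory reads and one memory write. */
void add_mem_mem(int dst, int src) {
    mem[dst] = mem[dst] + mem[src];
}

int main(void) {
    mem[10] = 7;
    mem[11] = 5;
    add_mem_mem(10, 11);       /* one "instruction" does all the work */
    printf("%d\n", mem[10]);   /* prints 12 */
    return 0;
}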

TYPICAL CHARACTERISTICS OF CISC ARCHITECTURE:


· Complex instruction-decoding logic.
· Small number of general-purpose registers.
· Several special-purpose registers.
· Condition code register.

EXAMPLES OF CISC PROCESSORS:


1. IBM 370/168
Introduced in 1970, this CISC design is a 32-bit processor with 4 general-purpose and
4 64-bit floating-point registers.
2. VAX 11/780
This CISC design is again a 32-bit processor, from DEC (Digital Equipment
Corporation). It supports a large number of addressing modes and machine instructions.
3. Intel 80486
Launched in 1989, this CISC processor has instructions with lengths varying from
1 to 11 bytes, and it had 235 instructions.

2. RISC [Reduced Instruction Set Computing]:

1. Small set of instructions.

2. Simplified and reduced instruction set, numbering one hundred instructions or
fewer. Because of the simple instructions, RISC chips require fewer transistors,
making processors cheaper to produce. The reduced instruction set also means that the
processor can execute the instructions more quickly, potentially allowing for greater
speeds.

3. Addressing modes are simplified to four or fewer, and the length of the instruction
codes is fixed in order to allow standardization across the instruction set.

4. Instruction pipelining can be implemented easily.

5. Only LOAD/STORE instructions can access memory (see the sketch after this list).

6. Mainly used for real time applications.

7. RISC systems shorten execution time by reducing the clock cycles per instruction
(i.e. simple instructions take less time to interpret).

8. Examples of RISC Processors: Atmel AVR, PIC, ARM.
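
For contrast with the CISC sketch above (again with invented operations simulated in
C, not a real ISA), the same memory-to-memory add on a load/store machine becomes a
four-instruction sequence through registers, since only LOAD and STORE may touch
memory:

#include <stdio.h>

int mem[256];
int reg[8];   /* general-purpose register file */

/* RISC-style primitives: only these two operations touch memory. */
void load (int r, int addr)        { reg[r] = mem[addr]; }
void store(int r, int addr)        { mem[addr] = reg[r]; }
void add  (int rd, int ra, int rb) { reg[rd] = reg[ra] + reg[rb]; }

int main(void) {
    mem[10] = 7;
    mem[11] = 5;
    load(0, 10);               /* LOAD  r0, [10]   */
    load(1, 11);               /* LOAD  r1, [11]   */
    add(2, 0, 1);              /* ADD   r2, r0, r1 */
    store(2, 10);              /* STORE r2, [10]   */
    printf("%d\n", mem[10]);   /* prints 12 */
    return 0;
}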


TYPICAL CHARACTERISTICS OF RISC ARCHITECTURE:
a. Simple Instructions.
b. Few Data types
c. Simple Addressing Modes
d. Identical General Purpose Registers
e. Harvard Architecture

Examples of RISC Architecture:


1. Digital Equipment Corporation (DEC) - Alpha
2. Advanced Micro Devices (AMD) 29000
3. Advanced RISC Machine (ARM).
4. Atmel AVR.
5. Microprocessor without Interlocked Pipeline Stages (MIPS).
6. Precision Architecture – Reduced Instruction Set Computer (PA-RISC).
7. Performance Optimization With Enhanced RISC – Performance Computing (PowerPC).
8. SuperH.
9. Scalable Processor Architecture (SPARC).

Comparison of RISC and CISC:

1. RISC stands for Reduced Instruction Set Computer. CISC stands for Complex
Instruction Set Computer.
2. RISC processors have simple instructions that take about one clock cycle; the
average clock cycles per instruction (CPI) is 1.5. CISC processors have complex
instructions that take multiple clock cycles to execute; the average CPI is in the
range of 2 to 15.
3. RISC performance is optimized with more focus on software. CISC performance is
optimized with more focus on hardware.
4. RISC has no memory unit and uses separate hardware to implement instructions.
CISC has a memory unit to implement complex instructions.
5. RISC has a hard-wired control unit. CISC has a microprogramming unit.
6. The RISC instruction set is reduced, i.e. it has only a few instructions, many of
them very primitive. The CISC instruction set has a variety of different instructions
that can be used for complex operations.
7. RISC has few addressing modes. CISC has many different addressing modes and can
thus represent higher-level programming language statements more efficiently.
8. In RISC, complex addressing modes are synthesized in software. CISC supports
complex addressing modes directly.
9. RISC has multiple register sets. CISC has only a single register set.
10. RISC processors are highly pipelined. CISC processors are normally not pipelined,
or less pipelined.
11. The complexity of RISC lies with the compiler that translates the program. The
complexity of CISC lies in the microprogram.
12. RISC execution time is very low. CISC execution time is very high.
13. Code expansion can be a problem in RISC. Code expansion is not a problem in CISC.
14. Decoding of instructions is simple in RISC. Decoding of instructions is complex
in CISC.
15. RISC does not require external memory for calculations. CISC requires external
memory for calculations.
16. The most common RISC microprocessors are Alpha, ARC, ARM, AVR, MIPS, PA-RISC,
PIC, Power Architecture and SPARC. Examples of CISC processors are the System/360,
VAX, PDP-11, the Motorola 68000 family, and AMD and Intel x86 CPUs.
17. RISC architecture is used in high-end applications such as video processing,
telecommunications and image processing. CISC architecture is used in low-end
applications such as security systems, home automation, etc.
