
PROCESSORS USED IN SOCS
COMPLEX INSTRUCTION SET COMPUTER
A complex instruction set computer (CISC)
is a computer instruction set architecture (ISA)
in which a single instruction can execute several
low-level operations, such as a load from memory,
an arithmetic operation, and a memory store, all
in one instruction. The term was coined
retroactively, in contrast to reduced instruction
set computer (RISC).

Examples of CISC processor families are
System/360, PDP-11, VAX, 68000, and x86.
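
The contrast is easiest to see in what a compiler does with one C statement. A minimal sketch follows; the instruction sequences in the comments are typical of x86 (CISC) and MIPS-style (RISC) output, shown for illustration rather than as exact compiler listings.

/* One C statement, two instruction-set philosophies. The assembly in
 * the comments is illustrative, not an exact compiler listing. */
void increment(int *counter)
{
    *counter += 1;
    /* CISC (x86-64): a single read-modify-write instruction that
     * loads, adds, and stores through one memory operand:
     *     add dword ptr [rdi], 1
     * RISC (MIPS-style): separate load, arithmetic, and store:
     *     lw   t0, 0(a0)
     *     addi t0, t0, 1
     *     sw   t0, 0(a0)
     */
}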
REDUCED INSTRUCTION SET COMPUTER
The acronym RISC, for reduced instruction
set computer, denotes a CPU design strategy
built on the insight that simplified
instructions that "do less" can still deliver
higher performance, provided this simplicity is
exploited to make instructions execute very
quickly.

Well-known RISC families include Alpha,
Am29k, ARC, ARM, AVR, MIPS, PA-RISC,
Power Architecture (including PowerPC),
SuperH, and SPARC.
CHARACTERISTICS OF RISC
For any given level of general performance, a RISC chip
will typically have far fewer transistors dedicated to the
core logic, which originally allowed designers to enlarge
the register set and increase internal parallelism.
Other features typically found in RISC
architectures are:
Uniform instruction format, using a single word with the
opcode in the same bit positions in every instruction,
which demands less decoding;
Identical general-purpose registers, allowing any register to be
used in any context and simplifying compiler design (although
there are normally separate floating-point registers);
Simple addressing modes, with complex addressing performed via
sequences of arithmetic and/or load-store operations (see the
sketch after this list);
Few data types in hardware: some CISCs have byte-string
instructions or support complex numbers, which are so far
unlikely to be found on a RISC.
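
As a sketch of the addressing-mode point above: a scaled-index access that a CISC can fold into a single memory operand becomes an explicit address computation on a classic RISC. The sequences in the comments are illustrative, not exact compiler output.

/* Scaled-index addressing: one C expression, two encodings.
 * The assembly in the comments is illustrative only. */
int fetch(const int *table, long i)
{
    return table[i];
    /* CISC (x86-64): the address arithmetic is part of the operand:
     *     mov eax, [rdi + rsi*4]
     * RISC (classic 32-bit MIPS): the address is computed with
     * ordinary arithmetic, then a simple register+offset load:
     *     sll  t0, a1, 2        # i * 4
     *     addu t0, a0, t0       # base + offset
     *     lw   v0, 0(t0)
     */
}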
RISC
RISC designs are also more likely to feature a
Harvard memory model, where the instruction stream
and the data stream are conceptually separated; this
means that modifying the memory where code is held
might not have any effect on the instructions
executed by the processor (because the CPU has a
separate instruction and data cache), at least until a
special synchronization instruction is issued.
On the upside, this allows both caches to be accessed
simultaneously, which can often improve
performance.
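
A concrete consequence can be sketched in C. The snippet below assumes GCC or Clang, whose __builtin___clear_cache builtin emits whatever instruction-cache synchronization the target CPU needs (a no-op where the caches are coherent); patch_code and its arguments are hypothetical names for illustration.

#include <string.h>

/* Hypothetical helper: install new machine code into an executable
 * buffer. The memcpy goes through the data cache; with a split
 * instruction/data cache the CPU may still fetch the stale bytes,
 * so the range must be synchronized before the new code runs. */
void patch_code(unsigned char *code_buf, const unsigned char *new_insns,
                size_t len)
{
    memcpy(code_buf, new_insns, len);
    /* GCC/Clang builtin: make the written range visible to
     * subsequent instruction fetches. */
    __builtin___clear_cache((char *)code_buf, (char *)code_buf + len);
}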
Many early RISC designs also shared the
characteristic of having a branch delay slot: an
instruction position immediately following a jump or
branch. The instruction in that slot executes whether
or not the branch is taken, keeping the pipeline busy
while the branch resolves, as the sketch below illustrates.
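
A minimal sketch of the semantics: the toy fetch-execute loop below tracks both the current and the next program counter, so the instruction sitting in the delay slot still executes after a taken branch. The instruction set and program are invented for illustration.

#include <stdio.h>

enum op { NOP, PRINT, BRANCH };
struct insn { enum op op; int arg; };

int main(void)
{
    struct insn prog[] = {
        { PRINT,  1 },
        { BRANCH, 4 },   /* branch to index 4...                    */
        { PRINT,  2 },   /* ...but this delay-slot insn still runs  */
        { PRINT,  3 },   /* skipped                                 */
        { PRINT,  4 },
        { NOP,    0 },
    };
    int pc = 0, next_pc = 1;
    while (prog[pc].op != NOP) {
        struct insn cur = prog[pc];
        pc = next_pc;          /* the delay-slot insn is already queued */
        next_pc = pc + 1;
        if (cur.op == PRINT)  printf("%d\n", cur.arg);
        if (cur.op == BRANCH) next_pc = cur.arg;  /* applies after slot */
    }
    return 0;                  /* prints 1, 2, 4 */
}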
EARLY RISC
The first system that would today be known as RISC
was the CDC 6600 supercomputer, designed in 1964,
a decade before the term was invented.
The CDC 6600 had a load-store architecture with only
two addressing modes and 74 opcodes.
The best-known RISC designs, however, were the
results of university research programs run with
funding from the DARPA VLSI Program.
UC Berkeley's RISC project started in 1980.
The project delivered the RISC-I processor in
1982. Consisting of only 44,420 transistors,
RISC-I had just 32 instructions, yet it
outperformed every other single-chip design of the time.
It was followed in 1983 by the 40,760-transistor,
39-instruction RISC-II, which ran over three
times as fast as RISC-I.
VON NEUMANN ARCHITECTURE
The von Neumann architecture is a design
model for a stored-program digital computer
that uses a processing unit and a single
separate storage structure to hold both
instructions and data.

It is named after the mathematician and early
computer scientist John von Neumann.
VON NEUMANN BOTTLENECK
The separation between the CPU and memory leads
to the von Neumann bottleneck, the limited
throughput (data transfer rate) between the CPU and
memory compared to the amount of memory. In most
modern computers, throughput is much smaller than
the rate at which the CPU can work.
The performance problem can be alleviated to some
extent by several mechanisms: providing a cache
between the CPU and main memory, or providing
separate caches with separate access paths for data
and instructions. The rough sketch below illustrates
the effect of caching.
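
One rough way to observe the bottleneck on a desktop machine: the same number of additions runs noticeably faster when the operands fit in cache than when they must stream from main memory. The array sizes and the clock()-based timing below are illustrative assumptions; absolute numbers vary widely by machine.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum an array repeatedly; the total number of additions is kept
 * roughly equal for both sizes so only memory traffic differs. */
static double sum_passes(const double *a, size_t n, int passes)
{
    double s = 0.0;
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            s += a[i];
    return s;
}

static void bench(size_t n, int passes)
{
    double *a = malloc(n * sizeof *a);
    if (!a) return;
    for (size_t i = 0; i < n; i++) a[i] = 1.0;
    clock_t t0 = clock();
    double s = sum_passes(a, n, passes);
    printf("%9zu doubles: %.3f s (sum=%.0f)\n",
           n, (double)(clock() - t0) / CLOCKS_PER_SEC, s);
    free(a);
}

int main(void)
{
    bench(4 * 1024, 100000);      /* ~32 KiB: cache-resident     */
    bench(16 * 1024 * 1024, 25);  /* ~128 MiB: streams from DRAM */
    return 0;
}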
The problem can also be sidestepped somewhat by
using parallel computing, for example with the
NUMA architecture; this approach is commonly
employed by supercomputers.
EARLY VON NEUMANN-ARCHITECTURE
COMPUTERS
ORDVAC (U-Illinois) at Aberdeen Proving Ground,
Maryland (completed Nov 1951[13])
IAS machine at Princeton University (Jan 1952)
MANIAC I at Los Alamos Scientific Laboratory (Mar 1952)
ILLIAC at the University of Illinois, (Sept 1952)
AVIDAC at Argonne National Laboratory (1953)
ORACLE at Oak Ridge National Laboratory (Jun 1953)
JOHNNIAC at RAND Corporation (Jan 1954)
BESK in Stockholm (1953)
BESM-1 in Moscow (1952)
DASK in Denmark (1955)
PERM in Munich (1956?)
SILLIAC in Sydney (1956)
WEIZAC in Rehovoth (1955)
EARLY STORED-PROGRAM COMPUTERS
The IBM SSEC was a stored-program electromechanical computer and was
publicly demonstrated on January 27, 1948.
The Manchester SSEM was the first fully electronic computer to run a
stored program. It ran a factoring program for 52 minutes on June 21, 1948.
The ENIAC was modified to run as a primitive read-only stored-program
computer and was demonstrated as such on September 16, 1948.
The BINAC ran some test programs in February, March, and April 1949,
although it wasn't completed until September 1949.
The Manchester Mark 1 developed from the SSEM project. It wasn't
completed until October 1949.
The EDSAC ran its first program on May 6, 1949.
The EDVAC was delivered in August 1949, but it had problems that kept it
from being put into regular operation until 1951.
The CSIR Mk I ran its first program in November 1949.
The SEAC was demonstrated in April 1950.
The Pilot ACE ran its first program on May 10, 1950 and was demonstrated
in December 1950.
The SWAC was completed in July 1950.
The Whirlwind was completed in December 1950 and was in actual use in
April 1951.
The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was
installed in December 1950.
HARVARD ARCHITECTURE
The Harvard architecture is a computer architecture with
physically separate storage and signal pathways for
instructions and data.

The term originated from the Harvard Mark I relay-based
computer, which stored instructions on punched tape (24 bits
wide) and data in electro-mechanical counters. These early
machines had limited data storage, entirely contained within
the central processing unit, and provided no access to the
instruction storage as data.

Today, most processors implement such separate signal
pathways for performance reasons, but actually implement a
modified Harvard architecture, so they can support tasks like
loading a program from disk storage as data and then
executing it.
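
A POSIX sketch of that workflow: the bytes arrive from disk as ordinary data, and the mapping is then made executable so the same bytes can be fetched as instructions. payload.bin is a hypothetical file that would have to contain valid machine code for the host CPU; error handling is minimal, and the cache-synchronization builtin assumes GCC or Clang.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    FILE *f = fopen("payload.bin", "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);

    /* Map writable memory and read the code into it as plain data. */
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    fread(buf, 1, len, f);
    fclose(f);

    /* Flip the mapping to executable, synchronize the instruction
     * cache, and jump to the loaded bytes. */
    mprotect(buf, len, PROT_READ | PROT_EXEC);
    __builtin___clear_cache((char *)buf, (char *)buf + len);
    ((void (*)(void))buf)();   /* non-portable cast, common in loaders */
    return 0;
}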
MEMORY DETAILS
In a Harvard architecture, there is no need for the
two memories to share characteristics.

In particular, the word width, timing, implementation
technology, and memory address structure can differ.

In some systems, instructions can be stored in read-only
memory while data memory generally requires read-write
memory.

In some systems, there is much more instruction
memory than data memory, so instruction addresses are
wider than data addresses.
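
A concrete case can be sketched for AVR microcontrollers, a Harvard family that also appeared in the RISC list earlier. With avr-libc, a constant placed in instruction (flash) memory sits in a separate address space from SRAM, so it must be read through the dedicated program-memory load that pgm_read_byte() wraps; the table and function names below are illustrative.

#include <avr/pgmspace.h>  /* avr-libc; AVR is a Harvard machine */

/* The table lives in flash (instruction memory). Flash address 0 and
 * SRAM address 0 name different bytes, so a plain dereference would
 * read the wrong address space; the LPM instruction behind
 * pgm_read_byte() performs the program-memory load instead. */
static const unsigned char bit_table[4] PROGMEM = { 1, 2, 4, 8 };

unsigned char lookup(unsigned char i)
{
    return pgm_read_byte(&bit_table[i]);
}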
CONTRAST WITH VON NEUMANN
ARCHITECTURES
In a computer with the contrasting von Neumann
architecture, the CPU can be either reading an
instruction or reading/writing data from/to memory;
the two cannot occur at the same time, since
instructions and data use the same bus system.
In a computer using the Harvard architecture, the CPU
can both read an instruction and perform a data
memory access at the same time, even without a cache.
A Harvard architecture computer can thus be faster for
a given circuit complexity because instruction fetches
and data access do not contend for a single memory
pathway.
Also, a Harvard architecture machine has distinct code
and data address spaces: instruction address zero is not
the same as data address zero. Instruction address zero
might identify a twenty-four-bit value, while data
address zero might indicate an eight-bit byte that is not
part of that twenty-four-bit value.
