
BANARSIDAS CHANDIWALA INSTITUTE OF

INFORMATION TECHNOLOGY

Presentation Report
Information Technology
CISC & RISC ARCHITECTURE
SHONBIR SINGH TOMAR
NOV 7, 2008

A short description of the CISC and RISC architectures and of the different types of processors
that were built following each architecture, the advantages and disadvantages of each,
and some basic concepts behind the two architectures.
Contents:

• What is CISC?

• What is RISC?

• Advantages and disadvantages of CISC & RISC.

• Working of both architectures.


CISC (Complex Instruction Set Computer)


What is CISC?
CISC, which stands for Complex Instruction Set Computer, is a philosophy for designing
chips that are easy to program and which make efficient use of memory. Each instruction in a
CISC instruction set might perform a series of operations inside the processor. This reduces the
number of instructions required to implement a given program, and allows the programmer to
learn a small but flexible set of instructions.

Since the earliest machines were programmed in assembly language and memory was slow and
expensive, the CISC philosophy made sense, and was commonly implemented in such large
computers as the PDP-11 and the DEC system 10 and 20 machines.
Most common microprocessor designs --- including the Intel(R) 80x86 and Motorola 68K series
--- also follow the CISC philosophy.
As we shall see, recent changes in software and hardware technology have forced a re-
examination of CISC. But first, let's take a closer look at the decisions which led to CISC.
CISC philosophy 1: Use Microcode
The earliest processor designs used dedicated (hardwired) logic to decode and execute each
instruction in the processor's instruction set. This worked well for simple designs with few
registers, but made more complex architectures hard to build, as control path logic can be hard to
implement. So, designers switched tactics --- they built some simple logic to control the data
paths between the various elements of the processor, and used a simplified microcode instruction
set to control the data path logic. This type of implementation is known as a microprogrammed
implementation.

In a microprogrammed system, the main processor has some built-in memory (typically ROM)
which contains groups of microcode instructions which correspond with each machine-language
instruction. When a machine language instruction arrives at the central processor, the processor
executes the corresponding series of microcode instructions.
Because instructions could be retrieved up to 10 times faster from a local ROM than from main
memory, designers began to put as many instructions as possible into microcode. In fact, some
processors could be ordered with custom microcode which would replace frequently used but
slow routines in certain applications.
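To make the idea concrete, here is a minimal sketch in C of how a microprogrammed control unit can be modelled. The opcodes, micro-operations, and table contents are invented purely for illustration; real microcode drives data-path control signals rather than printing names.

#include <stdio.h>

/* Hypothetical micro-operations; the names are invented for illustration. */
typedef enum { FETCH_OPERANDS, READ_MEM, ALU_ADD, WRITE_REG, WRITE_MEM, DONE } MicroOp;

/* Microcode ROM: each machine-language opcode indexes a short sequence of micro-ops. */
static const MicroOp microcode_rom[][6] = {
    /* opcode 0: ADD register, register */ { FETCH_OPERANDS, ALU_ADD, WRITE_REG, DONE },
    /* opcode 1: ADD memory, register   */ { FETCH_OPERANDS, READ_MEM, ALU_ADD, WRITE_REG, DONE },
    /* opcode 2: ADD register, memory   */ { FETCH_OPERANDS, READ_MEM, ALU_ADD, WRITE_MEM, DONE },
};

static const char *micro_op_name[] = {
    "fetch-operands", "read-memory", "alu-add", "write-register", "write-memory"
};

/* Execute one machine instruction by stepping through its microcode sequence. */
static void execute(int opcode)
{
    for (const MicroOp *op = microcode_rom[opcode]; *op != DONE; op++)
        printf("  %s\n", micro_op_name[*op]);   /* real hardware would drive the data path here */
}

int main(void)
{
    execute(1);   /* one "complex" memory-to-register ADD expands into several micro-ops */
    return 0;
}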
There are some real advantages to a microcoded implementation:
• Since the microcode memory can be much faster than main memory, an instruction set can be
implemented in microcode without losing much speed over a purely hard-wired implementation.
• New chips are easier to implement and require fewer transistors than implementing the same
instruction set with dedicated logic.
• A microprogrammed design can be modified to handle entirely new instruction sets quickly.
Using microcoded instruction sets, the IBM 360 series was able to offer the same programming
model across a range of different hardware configurations.
Some machines were optimized for scientific computing, while others were optimized for
business computing. However, since they all shared the same instruction set, programs could be
moved from machine to machine without re-compilation (but with a possible increase or
decrease in performance depending on the underlying hardware.)
This kind of flexibility and power made microcoding the preferred way to build new computers
for quite some time.
CISC philosophy 2: Build "rich" instruction sets
One of the consequences of using a microprogrammed design is that designers could build more
functionality into each instruction. This not only cut down on the total number of instructions
required to implement a program, and therefore made more efficient use of a slow main memory,
but it also made the assembly-language programmer's life simpler.
Soon, designers were enhancing their instruction sets with instructions aimed specifically at the
assembly language programmer. Such enhancements included string manipulation operations,
special looping constructs, and special addressing modes for indexing through tables in memory.
For example:
ABCD Add Decimal with Extend
ADDA Add Address
ADDX Add with Extend
ASL Arithmetic Shift Left
CAS Compare and Swap Operands
NBCD Negate Decimal with Extend
EORI Logical Exclusive OR Immediate
TAS Test Operand and Set
CISC philosophy 3: Build high-level instruction sets
Once designers started building programmer-friendly instruction sets, the logical next step was to
build instruction sets which map directly from high-level languages. Not only does this simplify
the compiler writer's task, but it also allows compilers to emit fewer instructions per line of
source code.
Modern CISC microprocessors, such as the 68000, implement several such instructions,
including routines for creating and removing stack frames with a single call.
For example:
DBcc Test Condition, Decrement and Branch
ROXL Rotate with Extend Left
RTR Return and Restore Codes
SBCD Subtract Decimal with Extend
SWAP Swap register Words
CMP2 Compare Register against Upper and Lower Bounds
The rise of CISC
CISC Design Decisions:
• use microcode
• build rich instruction sets
• build high-level instruction sets

Taken together, these three decisions led to the CISC philosophy which drove all computer
designs until the late 1980s, and is still in major use today. (Note that "CISC" didn't enter the
computer designer's vocabulary until the advent of RISC --- it was simply the way that
everybody designed computers.)
The next section discusses the common characteristics that all CISC designs share, and how those
characteristics affect the operation of a CISC machine.
Characteristics of a CISC design

Introduction
While the chips that emerged from the 1970s and 1980s followed their own unique design paths,
most were bound by what we are calling the "CISC Design Decisions". These chips all have
similar instruction sets, and similar hardware architectures.
In general terms, the instruction sets are designed for the convenience of the assembly-language
programmer and the hardware designs are fairly complex.
Instruction sets
The design constraints that led to the development of CISC (small amounts of slow memory, and
the fact that most early machines were programmed in assembly language) give CISC instruction
sets some common characteristics:

• A 2-operand format, where instructions have a source and a destination. For example, the add
instruction "add #5, D0" would add the number 5 to the contents of register D0 and place the
result in register D0.
• Register to register, register to memory, and memory to register commands.
• Multiple addressing modes for memory, including specialized modes for indexing through
arrays.
• Variable-length instructions, where the length often varies according to the addressing mode.
• Instructions which require multiple clock cycles to execute. If an instruction requires
additional information before it can run (for example, if the processor needs to read in two
memory locations before operating on them), collecting the extra information will require extra
clock cycles. As a result, some CISC instructions will take longer than others to execute.

Hardware architectures
Most CISC hardware architectures have several characteristics in common:
• Complex instruction-decoding logic, driven by the need for a single instruction to support
multiple addressing modes.
• A small number of general-purpose registers. This is the direct result of having instructions
which can operate directly on memory and of the limited amount of chip space not dedicated to
instruction decoding, execution, and microcode storage.
• Several special-purpose registers. Many CISC designs set aside special registers for the stack
pointer, interrupt handling, and so on. This can simplify the hardware design somewhat, at the
expense of making the instruction set more complex.
• A "condition code" register which is set as a side effect of most instructions. This register
reflects whether the result of the last operation is less than, equal to, or greater than zero, and
records whether certain error conditions occur.

The ideal CISC machine


CISC processors were designed to execute each instruction completely before beginning the next
instruction. Even so, most processors break the execution of an instruction into several definite
stages; as soon as one stage is finished, the processor passes the result to the next stage:
• An instruction is fetched from main memory.
• The instruction is decoded: the controlling code from the microprogram identifies the type of
operation to be performed, where to find the data on which to perform the operation, and where
to put the result. If necessary, the processor reads in additional information from memory.
• The instruction is executed: the controlling code from the microprogram determines the
circuitry/hardware that will perform the operation.
• The results are written to memory.

In an ideal CISC machine, each complete instruction would require only one clock cycle (which
means that each stage would complete in a fraction of a cycle.) In fact, this is the maximum
possible speed for a machine that executes 1 instruction at a time.
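The stepping of stages can be pictured with a small sketch in C. It is purely illustrative and only prints the order in which a non-pipelined machine visits the four stages described above.

#include <stdio.h>

/* Toy model: every stage of one instruction finishes before the next
   instruction is fetched, as in a strictly sequential CISC machine.   */
static const char *stage_name[] = { "fetch", "decode", "execute", "write" };

int main(void)
{
    for (int instruction = 0; instruction < 3; instruction++)
        for (int stage = 0; stage < 4; stage++)
            printf("instruction %d: %s\n", instruction, stage_name[stage]);
    return 0;
}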

A realistic CISC machine


In reality, some instructions may require more than one clock cycle per stage. However, a CISC
design can tolerate this slowdown, since the idea behind CISC is to keep the total number of
cycles small by having complicated things happen within each cycle.

CISC and the Classic Performance Equation


The usual equation for determining performance sums, over all instructions executed, the cycles
each instruction takes multiplied by the clock cycle time:

execution time = sum over all instructions of (cycles per instruction * cycle time)

This gives you three different ways to speed up a processor --- use fewer instructions for a given
task, reduce the number of cycles some instructions take, or speed up the clock (decrease the
cycle time).
CISC tries to reduce the number of instructions for a program, and RISC tries to reduce the
cycles per instruction.
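As a concrete illustration of the trade-off, the C sketch below evaluates the equation for two made-up workloads: a CISC-like one with fewer instructions but more cycles each, and a RISC-like one with more instructions averaging close to one cycle each. The instruction counts, CPI values, and clock rate are invented, not measurements of any real processor.

#include <stdio.h>

/* Performance equation: execution time = instructions * average CPI * cycle time. */
static double exec_time(double instructions, double avg_cpi, double clock_hz)
{
    return instructions * avg_cpi * (1.0 / clock_hz);
}

int main(void)
{
    double clock_hz = 100e6;                        /* assume a 100 MHz clock for both      */
    double cisc = exec_time(1.0e6, 4.0, clock_hz);  /* fewer instructions, more cycles each */
    double risc = exec_time(2.5e6, 1.2, clock_hz);  /* more instructions, ~1 cycle each     */
    printf("CISC-like: %.4f s, RISC-like: %.4f s\n", cisc, risc);
    return 0;
}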
CISC Pros and Cons
The advantages of CISC
At the time of their initial development, CISC machines used available technologies to optimize
computer performance.

• Microprogramming is as easy as assembly language to implement, and much less expensive
than hardwiring a control unit.
• The ease of microcoding new instructions allowed designers to make CISC machines
upwardly compatible: a new computer could run the same programs as earlier computers, because
the new computer would contain a superset of the instructions of the earlier computers.
• As each instruction became more capable, fewer instructions could be used to implement a
given task. This made more efficient use of the relatively slow main memory.
• Because microprogram instruction sets can be written to match the constructs of high-level
languages, the compiler does not have to be as complicated.

The disadvantages of CISC


Still, designers soon realized that the CISC philosophy had its own problems, including:

• Earlier generations of a processor family were generally contained as a subset in every new
version, so the instruction set and chip hardware became more complex with each generation of
computers.
• So that as many instructions as possible could be stored in memory with the least possible
wasted space, individual instructions could be of almost any length. This means that different
instructions take different amounts of clock time to execute, slowing down the overall
performance of the machine.
• Many specialized instructions aren't used frequently enough to justify their existence:
approximately 20% of the available instructions are used in a typical program.
• CISC instructions typically set the condition codes as a side effect of the instruction. Not only
does setting the condition codes take time, but programmers have to remember to examine the
condition code bits before a subsequent instruction changes them.
RISC (Reduced Instruction Set Computer)

A RISC (reduced instruction set computer) is a microprocessor that is designed to perform a
smaller number of types of computer instructions, so that it can operate at a higher speed
(perform more millions of instructions per second, or MIPS). Since each instruction type that a
computer must perform requires additional transistors and circuitry, a larger list or set of
computer instructions tends to make the microprocessor more complicated and slower in
operation.

John Cocke of IBM Research in Yorktown, New York, originated the RISC concept in 1974 by
proving that about 20% of the instructions in a computer did 80% of the work. The first
computer to benefit from this discovery was IBM's PC/XT in 1980. Later, IBM's RISC
System/6000 made use of the idea. The term itself (RISC) is credited to David Patterson, a
professor at the University of California, Berkeley. The concept was used in Sun Microsystems'
SPARC microprocessors and led to the founding of what is now MIPS Technologies, part of
Silicon Graphics. DEC's Alpha microchip also uses RISC technology.
The RISC concept has led to a more thoughtful design of the microprocessor. Among design
considerations are how well an instruction can be mapped to the clock speed of the
microprocessor (ideally, an instruction can be performed in one clock cycle); how "simple" an
architecture is required; and how much work can be done by the microchip itself without
resorting to software help.
Besides performance improvement, some advantages of RISC and related design improvements
are:

• A new microprocessor can be developed and tested more quickly if one of its aims is to be less
complicated.
• Operating system and application programmers who use the microprocessor's instructions will
find it easier to develop code with a smaller instruction set.
• The simplicity of RISC allows more freedom to choose how to use the space on a
microprocessor.
• Higher-level language compilers produce more efficient code than formerly, because they have
always tended to use the smaller set of instructions found in a RISC computer.
RISC characteristics

• Simple instruction set. In a RISC machine, the instruction set contains simple, basic
instructions from which more complex instructions can be composed.
• Same-length instructions. Each instruction is the same length, so that it may be fetched in a
single operation.
• One-machine-cycle instructions. Most instructions complete in one machine cycle, which
allows the processor to handle several instructions at the same time. This pipelining is a key
technique used to speed up RISC machines.
Inside a RISC Machine

Pipelining: A key RISC technique


RISC designers are concerned primarily with creating the fastest chip possible, and so they use a
number of techniques, including pipelining.
Pipelining is a design technique where the computer's hardware processes more than one
instruction at a time, and doesn't wait for one instruction to complete before starting the next.
Remember the four stages in our typical CISC machine? They were fetch, decode, execute, and
write. These same stages exist in a RISC machine, but the stages are executed in parallel. As
soon as one stage completes, it passes on the result to the next stage and then begins working on
another instruction.
The performance of a pipelined system therefore depends on the time it takes for any one stage
to complete---not on the total time for all stages, as with non-pipelined designs.
In a typical pipelined RISC design, each instruction takes 1 clock cycle for each stage, so the
processor can accept 1 new instruction per clock. Pipelining doesn't improve the latency of
instructions (each instruction still requires the same amount of time to complete), but it does
improve the overall throughput.
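A minimal sketch of the arithmetic, assuming an idealized 4-stage pipeline in which every stage takes exactly one clock (real pipelines stall, as discussed next):

#include <stdio.h>

int main(void)
{
    int stages = 4;
    int instructions = 1000;

    int non_pipelined = instructions * stages;        /* finish each instruction before the next  */
    int pipelined     = stages + (instructions - 1);  /* after the pipe fills, 1 result per clock */

    printf("non-pipelined: %d cycles\n", non_pipelined);
    printf("pipelined:     %d cycles\n", pipelined);
    return 0;
}

In both cases each individual instruction still needs four cycles from fetch to write-back; only the rate at which instructions finish changes.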
As with CISC computers, the ideal is not always achieved. Sometimes pipelined instructions take
more than one clock to complete a stage. When that happens, the processor has to stall and not
accept new instructions until the slow instruction has moved on to the next stage.
Since the processor is sitting idle when stalled, both the designers and programmers of RISC
systems make a conscious effort to avoid stalls. To do this, designers employ several techniques,
as shown in the following sections.
Performance issues in pipelined systems
A pipelined processor can stall for a variety of reasons, including delays in reading information
from memory, a poor instruction set design, or dependencies between instructions. The following
pages examine some of the ways that chip designers and system designers are addressing these
problems.
Memory speed
Memory speed issues are commonly solved using caches. A cache is a section of fast memory
placed between the processor and slower memory. When the processor wants to read a location
in main memory, that location is also copied into the cache. Subsequent references to that
location can come from the cache, which will return a result much more quickly than the main
memory.

Caches present one major problem to system designers and programmers, and that is the problem
of coherency. When the processor writes a value to memory, the result goes into the cache
instead of going directly to main memory. Therefore, special hardware (usually implemented as
part of the processor) needs to write the information out to main memory before something else
tries to read that location or before re-using that part of the cache for some different information.
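As a rough illustration (not how any particular processor implements its cache), the C sketch below models a tiny direct-mapped cache: on a hit the value comes from the cache array, on a miss it is fetched from the slower backing store and kept for next time. The sizes and the memory model are invented.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINES 64

static uint32_t main_memory[4096];              /* stand-in for slow main memory */
static struct { bool valid; uint32_t tag; uint32_t data; } cache[LINES];

static uint32_t read_word(uint32_t addr)
{
    uint32_t index = addr % LINES;              /* which cache line the address maps to        */
    uint32_t tag   = addr / LINES;              /* identifies which address occupies that line */

    if (cache[index].valid && cache[index].tag == tag)
        return cache[index].data;               /* cache hit: fast path */

    /* cache miss: fetch from slow memory and keep a copy for next time */
    cache[index].valid = true;
    cache[index].tag   = tag;
    cache[index].data  = main_memory[addr];
    return cache[index].data;
}

int main(void)
{
    main_memory[100] = 42;
    printf("%u\n", (unsigned)read_word(100));   /* miss, then filled */
    printf("%u\n", (unsigned)read_word(100));   /* hit               */
    return 0;
}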
Instruction Latency
A poorly designed instruction set can cause a pipelined processor to stall frequently. Some of the
more common problem areas are:

• Highly encoded instructions---such as those used on CISC machines---which require complex
logic (and extra time) to decode.
• Variable-length instructions, which require multiple references to memory to fetch the entire
instruction.
• Instructions which access main memory (instead of registers), since main memory can be slow.
• Complex instructions which require multiple clocks for execution (many floating-point
operations, for example).
• Instructions which need to read and write the same register. For example, "ADD 5 to register
3" has to read register 3, add 5 to that value, then write the result back to the same register
(which may still be "busy" from the earlier read operation, causing the processor to stall until
the register becomes available).
• Dependence on single-point resources such as a condition code register. If one instruction sets
the bits in the condition code register and the following instruction tries to read those bits,
the second instruction may have to stall until the first instruction's write completes.
Dependencies
One problem that RISC programmers face is that the processor can be slowed down by a poor
choice of instructions. Since each instruction takes some amount of time to store its result, and
several instructions are being handled at the same time, later instructions may have to wait for
the results of earlier instructions to be stored. However, a simple rearrangement of the
instructions in a program (called Instruction Scheduling) can remove these performance
limitations from RISC programs.
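As an illustration of the idea only: a compiler's scheduler reorders machine instructions, not C statements, but the C-level sketch below (with hypothetical variables) shows how moving an independent operation between a load and the statement that uses the loaded value gives the load time to complete.

int unscheduled(const int *a, int d, int e, int *c)
{
    int t = a[0];      /* load from memory                              */
    int b = t + 1;     /* uses t immediately: may stall waiting on load */
    *c = d * e;        /* independent work done last                    */
    return b;
}

int scheduled(const int *a, int d, int e, int *c)
{
    int t = a[0];      /* load from memory                              */
    *c = d * e;        /* independent work fills the load delay         */
    int b = t + 1;     /* by now the loaded value is likely available   */
    return b;
}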

One common optimization involves "common subexpression elimination." A compiler which
encounters the commands:
B = 10 * (A / 3);
C = (A / 3) / 4;
might calculate (A/3) first, put that result into a temporary variable, and then use the temporary
variable in later calculations.
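Expressed in C with a hypothetical temporary, the transformed code looks roughly like this:

/* Hypothetical names; shows common-subexpression elimination applied
   to the two statements above.                                        */
void compute(int A, int *B, int *C)
{
    int temp = A / 3;     /* the shared subexpression is evaluated once */
    *B = 10 * temp;
    *C = temp / 4;
}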

Another optimization involves "loop unrolling." Instead of executing a sequence of instructions
inside a loop, the compiler may replicate the instructions multiple times. This eliminates the
overhead of calculating and testing the loop control variable.
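For instance, a hypothetical loop unrolled by a factor of four (assuming, for simplicity, that the trip count is a multiple of four):

/* Original loop: the counter is incremented and tested on every iteration. */
void clear(int *a, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = 0;
}

/* Unrolled by four: one test and one increment now cover four elements. */
void clear_unrolled(int *a, int n)
{
    for (int i = 0; i < n; i += 4) {
        a[i]     = 0;
        a[i + 1] = 0;
        a[i + 2] = 0;
        a[i + 3] = 0;
    }
}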
Compilers also perform function inlining, where a call to a small subroutine is replaced by the
code of the subroutine itself. This gets rid of the overhead of a call/return sequence.
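A small hypothetical example of inlining:

static int square(int x) { return x * x; }

int caller(int y)
{
    return square(y) + 1;      /* written as a call...                        */
}

int caller_inlined(int y)
{
    return (y * y) + 1;        /* ...but the compiler may expand it in place,
                                  removing the call/return overhead           */
}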
This is only a small sample of the optimizations which are available. Consult a good textbook on
compilers for other ideas on how compiled code may be optimized.

RISC Pros and Cons


The advantages of RISC
Implementing a processor with a simplified instruction set design provides several advantages
over implementing a comparable CISC design:

• Speed. Since a simplified instruction set allows for a pipelined, superscalar design, RISC
processors often achieve 2 to 4 times the performance of CISC processors using comparable
semiconductor technology and the same clock rates.
• Simpler hardware. Because the instruction set of a RISC processor is so simple, it uses up
much less chip space; extra functions, such as memory management units or floating-point
arithmetic units, can also be placed on the same chip. Smaller chips allow a semiconductor
manufacturer to place more parts on a single silicon wafer, which can lower the per-chip cost
dramatically.
• Shorter design cycle. Since RISC processors are simpler than corresponding CISC processors,
they can be designed more quickly, and can take advantage of other technological developments
sooner than corresponding CISC designs, leading to greater leaps in performance between
generations.
The hazards of RISC
The transition from a CISC design strategy to a RISC design strategy isn't without its problems.
Software engineers should be aware of the key issues which arise when moving code from a
CISC processor to a RISC processor.

Code Quality
The performance of a RISC processor depends greatly on the code that it is executing. If the
programmer (or compiler) does a poor job of instruction scheduling, the processor can spend
quite a bit of time stalling: waiting for the result of one instruction before it can proceed with a
subsequent instruction.
Since the scheduling rules can be complicated, most programmers use a high level language
(such as C or C++) and leave the instruction scheduling to the compiler.
This makes the performance of a RISC application depend critically on the quality of the code
generated by the compiler. Therefore, developers (and development tool suppliers such as Apple)
have to choose their compiler carefully based on the quality of the generated code.

Debugging
Unfortunately, instruction scheduling can make debugging difficult. If scheduling (and other
optimizations) are turned off, the machine-language instructions show a clear connection with
their corresponding lines of source. However, once instruction scheduling is turned on, the
machine language instructions for one line of source may appear in the middle of the instructions
for another line of source code.
Such an intermingling of machine-language instructions not only makes the code hard to read, it
can also defeat the purpose of using a source-level debugger, since single lines of code can no
longer be executed by themselves.
Therefore, many RISC programmers debug their code in an un-optimized, un-scheduled form
and then turn on the scheduler (and other optimizations) and hope that the program continues to
work in the same way.

Code expansion
Since CISC machines perform complex actions with a single instruction, whereas RISC machines
may require multiple instructions for the same action, code expansion can be a problem.
Code expansion refers to the increase in size that you get when you take a program that had been
compiled for a CISC machine and re-compile it for a RISC machine. The exact expansion
depends primarily on the quality of the compiler and the nature of the machine's instruction set.
Fortunately for us, the code expansion between a 68K processor used in the non-PowerPC
Macintoshes and the PowerPC seems to be only 30-50% on average, although size-optimized
PowerPC code can be the same size as (or smaller than) corresponding 68K code.
System Design
Another problem that faces RISC machines is that they require very fast memory systems to feed
them instructions. RISC-based systems typically contain large memory caches, usually on the
chip itself. This is known as a first-level cache.
