
Introduction to Concurrency
(based on slides from B. Meyer and S. Nanz, ETH Zurich)
A. Sterca

Outline:
Serial, Parallel and Concurrent Programming
The need for concurrent programming
Theoretical gain of concurrency
Taxonomy of parallel processor architectures
Communication Paradigms for Concurrent Processes
Formal Modelling of Concurrent Processes



Serial vs. Parallel vs. Concurrent Programming




Serial Programming
- traditional programming paradigm; an algorithm is a sequence of instructions, each instruction executed one after another by a CPU
- execution is serial, with at most branches (if) and repetitions (while, for); a single thread of execution and control
- matches the model of a Turing machine
- has stable programming methodologies: structured programming, data abstraction, object-oriented programming, design by contract
- strong methodologies for verifying the correctness of sequential programs: Petri nets, deterministic automata, labelled transition systems
- less efficient than concurrent and parallel programming

Parallel Programming

- a parallel algorithm decomposes a problem into independent tasks which are executed in parallel (see the sketch after this list)
- a natural consequence of multiprocessor and multi-core architectures, which evolved as a solution to the frequency-scaling wall in processor design (computer performance used to track CPU clock frequency, but increasing the clock frequency increases power consumption)
- writing parallel code does not have a clear recipe the way sequential code does
- increased performance with respect to sequential code
- pure parallel applications are rare; most of them are concurrent
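A minimal sketch (not from the slides) of task decomposition: the same reduction computed serially and in parallel. Java's parallel streams split the range into independent chunks processed by the common fork/join pool; the range bound is an arbitrary placeholder.

```java
import java.util.stream.LongStream;

// Serial vs. parallel computation of the same reduction.
public class ParallelSum {
    public static void main(String[] args) {
        long serial = LongStream.rangeClosed(1, 100_000_000).sum();
        long parallel = LongStream.rangeClosed(1, 100_000_000)
                                  .parallel() // decompose into independent chunks
                                  .sum();
        // Same result; on a multi-core machine the parallel version
        // typically finishes in a fraction of the wall-clock time.
        System.out.println(serial == parallel); // true
    }
}
```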


Concurrent Programming

- concurrent programs = parallel programs which share common data
- no single thread of execution and control
- more efficient than sequential programming, but harder to verify for correctness
- prone to specific problems such as race conditions, illustrated by the sketch below
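A minimal sketch (not from the slides) of a race condition: two threads increment a shared counter without synchronization, so updates are lost and the final value is usually below the expected 2,000,000.

```java
// Two unsynchronized threads racing on shared data.
public class RaceDemo {
    static int counter = 0; // shared, unprotected data

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("counter = " + counter); // rarely 2000000
    }
}
```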


Arguments for concurrency

Previously perceived as a very specialized topic: high-performance computing, systems programming, databases.

Reasons for introducing concurrency into programs:
- Efficiency
  - Time (load sharing)
  - Cost (resource sharing)
- Availability
  - Multiple access
- Convenience
  - Perform several tasks at once
- Modeling power
  - Describing systems that are inherently parallel


Modeling concurrent systems

Computer systems are used for modeling objects in the real world
⇒ Object-oriented programming

The world often includes parallel operation.

Typical examples:
- Limited number of seats on the same plane
- Several booking agents active at the same time

Multiprocessing, Parallelism
Many of today's computations can take advantage of multiple processing units (through multi-core processors).

Terminology:
- Multiprocessing: the use of more than one processing unit in a system
- Parallel execution: processes running at the same time


Multitasking, Concurrency
Even on systems with a single processing unit we may give the illusion that several programs run at once: the OS switches between executing different tasks.

Terminology:
- Interleaving: several tasks active, only one running at a time (see the sketch below)
- Multitasking: the OS runs interleaved executions
- Concurrency: multiprocessing, multitasking, or any combination thereof
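A minimal sketch (not from the slides) of interleaving: the scheduler decides when each thread runs, so the output order is nondeterministic and changes from run to run. Thread names and loop bounds are arbitrary.

```java
// Two tasks whose outputs interleave under the scheduler's control.
public class InterleavingDemo {
    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + ": " + i);
            }
        };
        new Thread(task, "A").start();
        new Thread(task, "B").start();
    }
}
```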

The end of Moore's Law?


Impact of Moore’s Law ending

- The end of Moore's Law as we knew it has important implications for the software construction process
- Computing is taking an irreversible step toward parallel architectures
- Hardware construction of ever-faster sequential CPUs has hit physical limits
- Clock speed no longer increases with every new processor generation
- Moore's Law now expresses itself as an exponentially increasing number of processing cores per chip
- If we want programs to run faster on the next processor generation, the software must exploit more concurrency


Amdahl’s Law
We go from 1 processor to n. What gain may we expect?

Amdahl's Law severely limits our hopes!

Define gain as:

    speedup = sequential execution time / parallel execution time

Not everything can be parallelized:

    speedup = 1 / ((1 - p) + p/n)

where:
- p = fraction of the program that is parallelizable
- n = number of processors
- 1 - p = sequential part
- p/n = parallel part

The sketch below evaluates this formula.
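A minimal sketch (not from the slides) that evaluates Amdahl's formula; it reproduces the numbers in the two examples that follow.

```java
// Evaluate Amdahl's Law: speedup(p, n) = 1 / ((1 - p) + p/n).
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        for (double p : new double[]{0.6, 0.8, 0.9, 0.99}) {
            System.out.printf("p = %.2f, n = 10: speedup = %.2f%n",
                              p, speedup(p, 10));
        }
        // Prints 2.17, 3.57, 5.26, 9.17 -- the values on the next slides.
    }
}
```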

Amdahl’s Law: Example 1

Assume 10 processing units. How close are we to a 10-fold speedup?

60% concurrent, 40% sequential:

    speedup = 1 / (1 - 0.6 + 0.6/10) = 2.17

80% concurrent, 20% sequential:

    speedup = 1 / (1 - 0.8 + 0.8/10) = 3.57


Amdahl’s Law: Example 2

90% concurrent, 10% sequential:

    speedup = 1 / (1 - 0.9 + 0.9/10) = 5.26

99% concurrent, 1% sequential:

    speedup = 1 / (1 - 0.99 + 0.99/10) = 9.17


Processor architectures for parallel computation

Flynn's taxonomy: a classification of computer architectures.
It considers the relationship of instruction streams to data streams:

                    Single Instruction    Multiple Instruction
    Single Data     SISD                  MISD
    Multiple Data   SIMD                  MIMD

- SISD: no parallelism (uniprocessor)
- SIMD: vector processor, GPU
- MIMD: multiprocessing (predominant today)


MIMD variants

- SPMD (Single Program Multiple Data): all processors run the same program, but at independent speeds; no lockstep as in SIMD
- MPMD (Multiple Program Multiple Data): often a manager/worker strategy: the manager distributes tasks, and the workers return results to the manager (see the sketch after this list)
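A minimal sketch (not from the slides) of the manager/worker strategy using a thread pool; the task (squaring integers) and the pool size are arbitrary placeholders.

```java
import java.util.concurrent.*;

// Manager distributes independent tasks to workers and collects results.
public class ManagerWorker {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        CompletionService<Integer> results =
                new ExecutorCompletionService<>(workers);

        // Manager: distribute 10 independent tasks.
        for (int i = 0; i < 10; i++) {
            final int task = i;
            results.submit(() -> task * task); // a worker computes the result
        }

        // Manager: collect results as workers finish (in completion order).
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += results.take().get();
        }
        workers.shutdown();
        System.out.println("sum of squares = " + sum); // 285
    }
}
```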


Communication through Shared Memory

All processors share a common memory; processes communicate by reading and writing shared data (shared-memory communication), as in the sketch below.
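A minimal sketch (not from the slides) of shared-memory communication done safely: the same counting workload as the race-condition example, but the shared cell is updated atomically.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Threads communicate through a shared memory cell, updated atomically.
public class SharedMemoryDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger shared = new AtomicInteger(0); // shared memory cell
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                shared.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(shared.get()); // always 2000000
    }
}
```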


Communication through Message Passing


Each processor has its own local memory, inaccessible to the others; processes communicate by exchanging messages. Message-passing communication is common for the SPMD architecture (see the sketch below).
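A minimal sketch (not from the slides) of message-passing style communication within one process: a producer and a consumer share nothing except the channel (a blocking queue); the messages and the "STOP" sentinel are arbitrary.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Producer and consumer communicate only by exchanging messages.
public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                channel.put("hello");
                channel.put("world");
                channel.put("STOP"); // sentinel: end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = channel.take()).equals("STOP")) {
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```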


Client-Server Paradigm
A specific case of the message-passing paradigm: clients send requests to a server, which sends back replies (see the sketch below).
Examples: database-centered systems, the World-Wide Web
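A minimal sketch (not from the slides) of the client-server paradigm: a server that answers one request per connection. The port number 9090 is arbitrary; a client can be simulated with e.g. `nc localhost 9090`.

```java
import java.io.*;
import java.net.*;

// A server that echoes one line back per client connection.
public class EchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                try (Socket client = server.accept(); // wait for a client request
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(
                             client.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine()); // reply to the request
                }
            }
        }
    }
}
```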
