NAME: MADHURIMA PATRA
ROLL NO: 14401062011
In the simplest sense, parallel programming is the simultaneous use of multiple compute resources to solve a computational problem.
Amdahl's law gives the potential speedup from parallelizing a fraction P of the code:

speedup = 1 / (1 - P)
• If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).
Introducing the number of processors N performing the parallel fraction of work, the relationship can be modeled by:

speedup = 1 / (P/N + S)

where P = parallel fraction, N = number of processors, and S = serial fraction (S = 1 - P).
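As a worked illustration, a short C snippet that evaluates Amdahl's law; the sample values P = 0.90 and N = 8 are hypothetical:

#include <stdio.h>

/* Amdahl's law: speedup for parallel fraction P on N processors,
   with serial fraction S = 1 - P. */
double amdahl_speedup(double P, int N) {
    double S = 1.0 - P;
    return 1.0 / (P / N + S);
}

int main(void) {
    /* 90% of the work parallelized on 8 processors. */
    printf("speedup = %.2f\n", amdahl_speedup(0.90, 8)); /* ~4.71 */
    return 0;
}

Even with 90% of the code parallelized, 8 processors yield a speedup of only about 4.7, which is why the serial fraction dominates scalability.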
Consider two threads that each increment a shared variable V:

Thread A                        Thread B
1A: Read variable V             1B: Read variable V
2A: Add 1 to variable V         2B: Add 1 to variable V
3A: Write back to variable V    3B: Write back to variable V

If instruction 1B executes between 1A and 3A, both threads read the same value and one increment is lost.
MUTUAL EXCLUSION
A collection of techniques for sharing resources so that different
uses do not conflict and cause unwanted interactions.
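A minimal sketch of mutual exclusion using POSIX threads (compile with -pthread); the variable V, the iteration count, and the thread bodies are illustrative. The mutex serializes the read-add-write sequence so no increment is lost:

#include <pthread.h>
#include <stdio.h>

/* Shared variable V and a mutex guarding it. Without the lock, the
   read/add/write sequences of the two threads can interleave and
   lose increments. */
static long V = 0;
static pthread_mutex_t v_lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&v_lock);   /* enter critical section */
        V = V + 1;                     /* read, add 1, write back */
        pthread_mutex_unlock(&v_lock); /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("V = %ld\n", V); /* always 200000 with the mutex */
    return 0;
}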
SYNCHRONISATION
The coordination of parallel tasks in real time, very often associated with
communications. Often implemented by establishing a synchronization point
within an application where a task may not proceed further until another task(s)
reaches the same or logically equivalent point.
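A barrier is one common form of synchronization point. A minimal POSIX-threads sketch, assuming a system that provides pthread_barrier_t (compile with -pthread); the task count and phase names are illustrative:

#include <pthread.h>
#include <stdio.h>

#define NTASKS 4

/* No thread proceeds past pthread_barrier_wait() until all NTASKS
   threads have reached it. */
static pthread_barrier_t barrier;

static void *task(void *arg) {
    long id = (long)arg;
    printf("task %ld: phase 1 done\n", id);
    pthread_barrier_wait(&barrier);  /* wait for every other task */
    printf("task %ld: phase 2 begins\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[NTASKS];
    pthread_barrier_init(&barrier, NULL, NTASKS);
    for (long i = 0; i < NTASKS; i++)
        pthread_create(&t[i], NULL, task, (void *)i);
    for (int i = 0; i < NTASKS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}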
PARALLEL SLOWDOWN
GRANULARITY
EMBARRASSINGLY PARALLEL
PARALLEL SLOWDOWN
When a task is split up into more and more threads, those threads spend an
ever-increasing portion of their time communicating with each other.
Eventually, the overhead from communication dominates the time spent
solving the problem, and further parallelization increases rather than
decreases the total time required to finish.
GRANULARITY
In parallel computing, granularity is a qualitative measure of the ratio of
computation to communication.
• Coarse: relatively large amounts of computational work are done between communication events
• Fine: relatively small amounts of computational work are done between communication events
EMBARRASSINGLY PARALLEL
A problem is embarrassingly parallel if its subtasks rarely or never have to communicate with one another.
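For example, applying an independent computation to each array element needs no communication between iterations. A small OpenMP sketch in C (compile with -fopenmp); the loop body is illustrative:

#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    /* Each element is computed independently: no iteration reads
       another's result, so the loop is embarrassingly parallel. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = (double)i * i;
    printf("a[N-1] = %.0f\n", a[N - 1]);
    return 0;
}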
LOAD BALANCING
Load balancing refers to the practice of distributing work among tasks so
that all tasks are kept busy all of the time. It can be considered a
minimization of task idle time.
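One way to approach this in OpenMP is dynamic scheduling, which hands iterations to threads as they become free instead of splitting the range statically. A sketch with a deliberately uneven, hypothetical work() function (compile with -fopenmp):

#include <stdio.h>

#define N 64

/* Hypothetical uneven workload: the cost grows with i, so a static
   split would leave the threads given small i mostly idle. */
static long long work(int i) {
    long long s = 0;
    for (long long k = 0; k < (long long)i * 100000; k++)
        s += k;
    return s;
}

int main(void) {
    long long total = 0;
    /* schedule(dynamic, 1): each thread grabs the next iteration as
       soon as it finishes its current one, minimizing idle time. */
    #pragma omp parallel for schedule(dynamic, 1) reduction(+:total)
    for (int i = 0; i < N; i++)
        total += work(i);
    printf("total = %lld\n", total);
    return 0;
}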
SIMD Single Instruction, Multiple Data Stream
[Diagram: one control unit (CU) issues a single instruction stream (IS) to processing units PU1-PUn; each PU handles its own data stream (DS1-DSn) against memory modules MM1-MMn through shared memory (SM).]
MIMD Multiple Instruction, Multiple Data Stream
[Diagram: control units CU1-CUn each issue their own instruction stream (IS1-ISn) to a processing unit PU1-PUn, each operating on its own data stream (DS1-DSn) against memory modules MM1-MMn through shared memory (SM).]
SHARED MEMORY
Shared memory parallel computers vary widely, but generally have in
common the ability for all processors to access all memory as global
address space.
Multiple processors can operate independently but share the same
memory resources.
Changes in a memory location effected by one processor are visible to
all other processors.
Shared memory machines can be divided into two main classes based
upon memory access times:
• UMA (Uniform Memory Access)
• NUMA (Non-Uniform Memory Access)
UNIFORM MEMORY ACCESS
Most commonly represented today by Symmetric Multiprocessor
(SMP) machines
Identical processors
Equal access and access times to memory
DISTRIBUTED MEMORY
HYBRID DISTRIBUTED SHARED MEMORY
PARALLEL PROGRAMMING MODELS
• Shared Memory
• Threads
• Message Passing
• Data Parallel
Although it might not seem apparent, these models are NOT specific
to a particular type of machine or memory architecture. In fact, any of
these models can (theoretically) be implemented on any underlying
hardware.
SHARED MEMORY MODEL
In the shared-memory programming model, tasks share a common
address space, which they read and write asynchronously.
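A minimal sketch of the model using POSIX threads (compile with -pthread): the writer and the main thread share one address space, and a mutex/condition pair coordinates their asynchronous accesses. The variable names are illustrative:

#include <pthread.h>
#include <stdio.h>

/* Every thread sees the same address space: the writer fills `data`
   and the reader consumes it, with no explicit message passing. */
static int data;
static int ready = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

static void *writer(void *arg) {
    pthread_mutex_lock(&m);
    data = 42;                /* write into the shared address space */
    ready = 1;
    pthread_cond_signal(&cv); /* wake the waiting reader */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_mutex_lock(&m);
    while (!ready)
        pthread_cond_wait(&cv, &m);  /* sleep until the writer signals */
    printf("read %d from shared memory\n", data);
    pthread_mutex_unlock(&m);
    pthread_join(w, NULL);
    return 0;
}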