
Speed-up Laws

Parallelism Profile in Programs [2]


Degree of Parallelism
For each time period, the number of processors used
to execute a program is defined as the degree of
parallelism (DOP).
The plot of the DOP as a function of time is called
the parallelism profile of a given program.
Fluctuation of the profile during an observation
period depends on the algorithmic structure,
program optimization, resource utilization, and
run-time conditions of a computer system.

Average Parallelism [2]

The average parallelism A is computed by:

A = \frac{\sum_{i=1}^{m} i \, t_i}{\sum_{i=1}^{m} t_i}

where:

m is the maximum parallelism in the profile
t_i is the total amount of time that DOP = i
\sum_{i=1}^{m} t_i = t_2 - t_1, the length of the observation period (t_1, t_2)

Example [2]

The parallelism profile of an example divide-and-conquer
algorithm increases from 1 to its peak value m = 8 and
then decreases to 0 during the observation period (t_1, t_2).

A = (1\cdot5 + 2\cdot3 + 3\cdot4 + 4\cdot6 + 5\cdot2 + 6\cdot2 + 8\cdot3) / (5 + 3 + 4 + 6 + 2 + 2 + 3) = 93/25 = 3.72
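
To double-check the arithmetic, the short sketch below recomputes the average parallelism from the (DOP, duration) pairs of this profile. The helper name average_parallelism and the list-of-pairs representation are illustrative, not part of the original slides.

def average_parallelism(profile):
    # profile: list of (dop, duration) pairs; A = sum(i * t_i) / sum(t_i)
    weighted = sum(dop * t for dop, t in profile)
    total = sum(t for _, t in profile)
    return weighted / total

# (DOP, duration) pairs of the divide-and-conquer example above
profile = [(1, 5), (2, 3), (3, 4), (4, 6), (5, 2), (6, 2), (8, 3)]
print(average_parallelism(profile))  # 93 / 25 = 3.72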

Amdahl's Law

In Amdahl's law, the computational workload W is fixed
while the number of processors that can work on W is
increased. We assume that the workload consists of a
sequential part \alpha W and a parallel part (1 - \alpha) W,
where \alpha is between 0 and 1.

The speedup of an n-processor system is defined as a ratio
of execution times, i.e.,

S_n = \frac{T_1}{T_n}

Substituting the execution times expressed in terms of W gives

S_n = \frac{W}{\alpha W + (1 - \alpha) W / n} = \frac{n}{1 + (n - 1)\alpha}    (1)

Eq. (1) is called Amdahl's law. If the number of
processors is increased to infinity, the speedup becomes

S_\infty = \lim_{n \to \infty} \frac{n}{1 + (n - 1)\alpha} = \frac{1}{\alpha}    (2)

Notice that the speedup can NOT be increased to infinity
even if the number of processors is increased to infinity.
Therefore, Eq. (2) is referred to as the sequential
bottleneck of multiprocessor systems.

\alpha can be interpreted as the probability that the
system operates in pure sequential mode (on a single
processor), and 1 - \alpha as the probability that the
system operates in fully parallel mode using n processors.
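
A minimal sketch of Eq. (1), assuming the derived form S_n = n / (1 + (n - 1)\alpha); the function name amdahl_speedup and the sample value of \alpha are illustrative.

# Amdahl's law for a fixed workload: S_n = n / (1 + (n - 1) * alpha),
# where alpha is the sequential fraction of the workload.

def amdahl_speedup(n, alpha):
    return n / (1 + (n - 1) * alpha)

alpha = 0.05  # example: 5% of the workload is strictly sequential
for n in (1, 4, 16, 64, 256, 1024):
    print(n, round(amdahl_speedup(n, alpha), 2))

# As n grows, the speedup approaches the sequential bottleneck 1 / alpha = 20.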

Example

A person must walk for 20 hours (a fixed portion that no
vehicle can shorten) and also cover 200 miles using one of
the following means of transport:

Walk: 4 miles/hour
Bike: 10 miles/hour
Car-1: 50 miles/hour
Car-2: 120 miles/hour
Car-3: 600 miles/hour

Example (continued)

With the 20 hours of walking fixed, the time to cover the
200 miles, the total travel time, and the speedup S
(relative to walking everything) are:

Walk (4 miles/hour):    200/4 = 50 hours;     50 + 20 = 70 hours;       S = 1
Bike (10 miles/hour):   200/10 = 20 hours;    20 + 20 = 40 hours;       S = 70/40 ≈ 1.8
Car-1 (50 miles/hour):  200/50 = 4 hours;     4 + 20 = 24 hours;        S = 70/24 ≈ 2.9
Car-2 (120 miles/hour): 200/120 ≈ 1.67 hours; 1.67 + 20 = 21.67 hours;  S = 70/21.67 ≈ 3.2

No matter how fast the 200 miles are covered, the total
time can never fall below the fixed 20 hours, so the
speedup is bounded by 70/20 = 3.5.
Amdahl's law for a fixed load

Gustafson's Law

This law says that increasing the problem size on large
machines can retain scalability with respect to the
number of processors.
Assume that the workload is scaled up on an n-node
machine as

W' = \alpha W + (1 - \alpha) n W

The speedup for the scaled-up workload is then

S'_n = \frac{W'}{W} = \frac{\alpha W + (1 - \alpha) n W}{W}    (3)

Simplifying Eq. (3) produces Gustafson's law:

S'_n = \alpha + (1 - \alpha) n = n - \alpha (n - 1)

Notice that if the workload is scaled up to maintain a
fixed execution time as the number of processors
increases, the speedup increases linearly. What
Gustafson's law says is that the true parallel power of a
large multiprocessor system is only achievable when a
large parallel problem is applied.
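
A minimal sketch of Gustafson's law in the form S'_n = \alpha + (1 - \alpha) n derived above, using the same illustrative sequential fraction as the Amdahl sketch.

# Gustafson's law for a scaled problem: S'_n = alpha + (1 - alpha) * n.
# The parallel part of the workload grows with n, keeping execution time fixed.

def gustafson_speedup(n, alpha):
    return alpha + (1 - alpha) * n

alpha = 0.05
for n in (1, 4, 16, 64, 256, 1024):
    print(n, round(gustafson_speedup(n, alpha), 2))

# Unlike the fixed-load case, the speedup grows almost linearly with n.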

Gustafson's law for a scaled problem

Sun and Ni's Law

This one is referred to as a memory-bound model. It turns
out that when the speedup is computed with the problem
size limited by the available memory in an n-processor
system, it leads to a generalization of Amdahl's and
Gustafson's laws.

For n nodes, assume the parallel portion of the workload
is increased G(n) times, reflecting the increase of
memory in the n-node system, i.e.,

W^* = \alpha W + (1 - \alpha) G(n) W

The memory-bound speedup is then given as

S^*_n = \frac{\alpha W + (1 - \alpha) G(n) W}{\alpha W + (1 - \alpha) G(n) W / n}

Simplification leads to Sun and Ni's law:

S^*_n = \frac{\alpha + (1 - \alpha) G(n)}{\alpha + (1 - \alpha) G(n) / n}

Depending on G(n), there are three cases (a sketch
covering all three follows below):

Case 1: G(n) = 1 (fixed workload). The speedup reduces to Amdahl's law:
S^*_n = \frac{1}{\alpha + (1 - \alpha)/n} = \frac{n}{1 + (n - 1)\alpha}

Case 2: G(n) = n (workload scaled with the number of nodes). The speedup reduces to Gustafson's law:
S^*_n = \alpha + (1 - \alpha) n

Case 3: G(n) > n. Let G(n) = m, where m > n. The speedup is then
S^*_n = \frac{\alpha + (1 - \alpha) m}{\alpha + (1 - \alpha) m / n}
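
A minimal sketch of the memory-bound speedup in the simplified form above; the sun_ni_speedup helper and the sample choices of G(n) and \alpha are illustrative.

# Sun and Ni's memory-bound speedup: the parallel workload grows by G(n),
# the factor by which the available memory grows with n nodes.

def sun_ni_speedup(n, alpha, g_of_n):
    g = g_of_n(n)
    return (alpha + (1 - alpha) * g) / (alpha + (1 - alpha) * g / n)

alpha, n = 0.05, 64
print(sun_ni_speedup(n, alpha, lambda n: 1))      # Case 1: G(n) = 1  -> Amdahl's law
print(sun_ni_speedup(n, alpha, lambda n: n))      # Case 2: G(n) = n  -> Gustafson's law
print(sun_ni_speedup(n, alpha, lambda n: n * n))  # Case 3: G(n) > n  -> higher speedup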

Memory-bound speedup model by Sun and Ni