
Analysis of Algorithms

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time. It transforms an input into an output.

Running Time (§1.1)

- Most algorithms transform input objects into output objects.
- The running time of an algorithm typically grows with the input size.
- Average-case time is often difficult to determine.
- We focus on the worst-case running time:
  - Easier to analyze
  - Crucial to applications such as games, finance and robotics

[Figure: best-case, average-case and worst-case running time (ms) versus input size]

Experimental Studies (§1.6)

- Write a program implementing the algorithm.
- Run the program with inputs of varying size and composition.
- Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time.
- Plot the results.

[Figure: measured running time (ms) versus input size]

Limitations of Experiments

- It is necessary to implement the algorithm, which may be difficult.
- Results may not be indicative of the running time on other inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used.
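The experimental approach above can be sketched in Java. This is a minimal illustration, not part of the slides; the class name and the choice of input composition are arbitrary, and the algorithm timed here is the arrayMax example used later in the deck.

```java
// Sketch of an experimental study: run the algorithm on inputs of
// varying size and time each run with System.currentTimeMillis().
public class TimingExperiment {
    // Algorithm under test (arrayMax, as introduced later in the slides).
    static int arrayMax(int[] a) {
        int currentMax = a[0];
        for (int i = 1; i < a.length; i++)
            if (a[i] > currentMax) currentMax = a[i];
        return currentMax;
    }

    public static void main(String[] args) {
        for (int n = 100_000; n <= 1_600_000; n *= 2) {
            int[] input = new int[n];                 // input of size n
            for (int i = 0; i < n; i++)
                input[i] = (i * 31) % n;              // arbitrary composition
            long start = System.currentTimeMillis();
            arrayMax(input);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(n + "\t" + elapsed + " ms");
        }
    }
}
```

Printing size/time pairs gives exactly the data one would plot, which also makes the limitations concrete: the numbers depend on this machine and these inputs.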

Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation.
- Characterizes running time as a function of the input size, n.
- Takes into account all possible inputs.
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment.

Pseudocode (§1.1)

- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array.

    Algorithm arrayMax(A, n)
      Input array A of n integers
      Output maximum element of A
      currentMax ← A[0]
      for i ← 1 to n − 1 do
        if A[i] > currentMax then
          currentMax ← A[i]
      return currentMax
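The arrayMax pseudocode translates almost line for line into Java; a sketch (in Java the array carries its own length, so the n parameter becomes A.length):

```java
public class ArrayMax {
    // Java rendering of the arrayMax pseudocode.
    // Precondition, as in the pseudocode: A is non-empty.
    public static int arrayMax(int[] A) {
        int currentMax = A[0];              // currentMax <- A[0]
        for (int i = 1; i < A.length; i++)  // for i <- 1 to n - 1 do
            if (A[i] > currentMax)          //   if A[i] > currentMax then
                currentMax = A[i];          //     currentMax <- A[i]
        return currentMax;                  // return currentMax
    }
}
```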
Pseudocode Details

- Control flow
  - if … then … [else …]
  - while … do …
  - repeat … until …
  - for … do …
  - Indentation replaces braces
- Method declaration
  - Algorithm method (arg [, arg…])
      Input …
      Output …
- Method call
  - var.method (arg [, arg…])
- Return value
  - return expression
- Expressions
  - ← Assignment (like = in Java)
  - = Equality testing (like == in Java)
  - n² Superscripts and other mathematical formatting allowed

The Random Access Machine (RAM) Model

- A CPU
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
- Memory cells are numbered, and accessing any cell in memory takes unit time.

Primitive Operations

- Basic computations performed by an algorithm
- Identifiable in pseudocode
- Largely independent from the programming language
- Exact definition not important (we will see why later)
- Assumed to take a constant amount of time in the RAM model
- Examples:
  - Evaluating an expression
  - Assigning a value to a variable
  - Indexing into an array
  - Calling a method
  - Returning from a method

Counting Primitive Operations (§1.1)

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size:

    Algorithm arrayMax(A, n)            # operations
      currentMax ← A[0]                 2
      for i ← 1 to n − 1 do             2 + n
        if A[i] > currentMax then       2(n − 1)
          currentMax ← A[i]             2(n − 1)
        { increment counter i }         2(n − 1)
      return currentMax                 1
                                Total   7n − 1
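The tally can be reproduced mechanically with an instrumented version of arrayMax. This is an illustrative sketch, not from the slides; it assumes the same accounting conventions as the table (two operations per indexed assignment or comparison, one per loop-guard test), and on a worst-case input, i.e. a strictly increasing array, the counter comes out to exactly 7n − 1.

```java
public class OpCount {
    // arrayMax instrumented to tally primitive operations using the
    // accounting conventions of the table above. On a strictly
    // increasing (worst-case) input the tally is 7n - 1.
    public static long countOps(int[] A) {
        int n = A.length;
        long ops = 0;
        int currentMax = A[0];
        ops += 2;                       // currentMax <- A[0]: index + assign
        ops += 2;                       // i <- 1: loop initialization
        int i = 1;
        while (true) {
            ops += 1;                   // loop-guard test i <= n - 1
            if (i > n - 1) break;
            ops += 2;                   // if A[i] > currentMax: index + compare
            if (A[i] > currentMax) {
                currentMax = A[i];
                ops += 2;               // currentMax <- A[i]: index + assign
            }
            ops += 2;                   // increment counter i
            i++;
        }
        ops += 1;                       // return currentMax
        return ops;
    }
}
```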

Estimating Running Time

- Algorithm arrayMax executes 7n − 1 primitive operations in the worst case. Define:
    a = time taken by the fastest primitive operation
    b = time taken by the slowest primitive operation
- Let T(n) be the worst-case time of arrayMax. Then
    a(7n − 1) ≤ T(n) ≤ b(7n − 1)
- Hence, the running time T(n) is bounded by two linear functions.

Growth Rate of Running Time

- Changing the hardware/software environment
  - affects T(n) by a constant factor, but
  - does not alter the growth rate of T(n).
- The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax.


Growth Rates

- Growth rates of functions:
  - Linear ≈ n
  - Quadratic ≈ n²
  - Cubic ≈ n³
- In a log-log chart, the slope of the line corresponds to the growth rate of the function.

[Figure: log-log plot of T(n) for linear, quadratic and cubic growth]

Constant Factors

- The growth rate is not affected by
  - constant factors or
  - lower-order terms.
- Examples:
  - 10²n + 10⁵ is a linear function
  - 10⁵n² + 10⁸n is a quadratic function

[Figure: log-log plot showing that constant factors do not change the slope]

Big-Oh Notation (§1.2)

- Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that
    f(n) ≤ cg(n) for n ≥ n₀
- Example: 2n + 10 is O(n)
  - 2n + 10 ≤ cn
  - (c − 2)n ≥ 10
  - n ≥ 10/(c − 2)
  - Pick c = 3 and n₀ = 10.

Big-Oh Example

- Example: the function n² is not O(n)
  - n² ≤ cn
  - n ≤ c
  - The above inequality cannot be satisfied, since c must be a constant.
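The witnesses c = 3 and n₀ = 10 from the example can be spot-checked mechanically. A sketch (the upper limit of the scan is an arbitrary check range, not a proof; the algebra above is the proof):

```java
public class BigOhCheck {
    // Spot-check big-Oh witnesses for f(n) = 2n + 10 and g(n) = n:
    // verify f(n) <= c * n for every n in [n0, limit].
    public static boolean holds(long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++)
            if (2 * n + 10 > c * n) return false;  // bound violated
        return true;
    }
}
```

Scanning from n₀ = 9 instead fails immediately (2·9 + 10 = 28 > 27), which shows the constants matter, not just the shape of the bound.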

More Big-Oh Examples

- 7n − 2 is O(n):
  need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀; this is true for c = 7 and n₀ = 1.
- 3n³ + 20n² + 5 is O(n³):
  need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀; this is true for c = 4 and n₀ = 21.
- 3 log n + log log n is O(log n):
  need c > 0 and n₀ ≥ 1 such that 3 log n + log log n ≤ c·log n for n ≥ n₀; this is true for c = 4 and n₀ = 2.

Big-Oh and Growth Rate

- The big-Oh notation gives an upper bound on the growth rate of a function.
- The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
- We can use the big-Oh notation to rank functions according to their growth rate:

                       f(n) is O(g(n))    g(n) is O(f(n))
    g(n) grows more    Yes                No
    f(n) grows more    No                 Yes
    Same growth        Yes                Yes
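The polynomial example's witnesses (c = 4, n₀ = 21) can be spot-checked the same way; a sketch, with an arbitrary scan limit:

```java
public class PolyBound {
    // Spot-check the witnesses from the examples above:
    // 3n^3 + 20n^2 + 5 <= 4 * n^3 for every n in [n0, limit].
    public static boolean holds(long n0, long limit) {
        for (long n = n0; n <= limit; n++)
            if (3 * n * n * n + 20 * n * n + 5 > 4 * n * n * n)
                return false;                       // bound violated
        return true;
    }
}
```

Note that n₀ = 21 is tight here: at n = 20 the left side is 32,005 while 4n³ is 32,000, so the bound fails by exactly the lower-order terms.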
Big-Oh Rules

- If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  - drop lower-order terms,
  - drop constant factors.
- Use the smallest possible class of functions
  - Say "2n is O(n)" instead of "2n is O(n²)".
- Use the simplest expression of the class
  - Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".

Asymptotic Algorithm Analysis

- The asymptotic analysis of an algorithm determines the running time in big-Oh notation.
- To perform the asymptotic analysis:
  - We find the worst-case number of primitive operations executed as a function of the input size.
  - We express this function with big-Oh notation.
- Example:
  - We determine that algorithm arrayMax executes at most 7n − 1 primitive operations.
  - We say that algorithm arrayMax "runs in O(n) time".
- Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.

Computing Prefix Averages

- We further illustrate asymptotic analysis with two algorithms for prefix averages.
- The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
    A[i] = (X[0] + X[1] + … + X[i])/(i + 1)
- Computing the array A of prefix averages of another array X has applications to financial analysis.

[Figure: bar chart of an array X and its prefix averages A]

Prefix Averages (Quadratic)

The following algorithm computes prefix averages in quadratic time by applying the definition:

    Algorithm prefixAverages1(X, n)        # operations
      Input array X of n integers
      Output array A of prefix averages of X
      A ← new array of n integers          n
      for i ← 0 to n − 1 do                n
        s ← X[0]                           n
        for j ← 1 to i do                  1 + 2 + … + (n − 1)
          s ← s + X[j]                     1 + 2 + … + (n − 1)
        A[i] ← s / (i + 1)                 n
      return A                             1
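A Java rendering of prefixAverages1, as a sketch (the averages are returned as doubles, whereas the pseudocode leaves the division's type unspecified):

```java
public class PrefixAverages1 {
    // Quadratic prefix averages: the inner loop re-sums X[0..i]
    // from scratch for every i, mirroring prefixAverages1 above.
    public static double[] prefixAverages1(int[] X) {
        int n = X.length;
        double[] A = new double[n];
        for (int i = 0; i < n; i++) {
            long s = X[0];                 // s <- X[0]
            for (int j = 1; j <= i; j++)
                s += X[j];                 // s <- s + X[j]
            A[i] = (double) s / (i + 1);   // A[i] <- s / (i + 1)
        }
        return A;
    }
}
```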

Arithmetic Progression

- The running time of prefixAverages1 is O(1 + 2 + … + n).
- The sum of the first n integers is n(n + 1)/2.
  - There is a simple visual proof of this fact.
- Thus, algorithm prefixAverages1 runs in O(n²) time.

Prefix Averages (Linear)

The following algorithm computes prefix averages in linear time by keeping a running sum:

    Algorithm prefixAverages2(X, n)        # operations
      Input array X of n integers
      Output array A of prefix averages of X
      A ← new array of n integers          n
      s←0                                  1
      for i ← 0 to n − 1 do                n
        s ← s + X[i]                       n
        A[i] ← s / (i + 1)                 n
      return A                             1

Algorithm prefixAverages2 runs in O(n) time.
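The linear version in Java, as a sketch; the only change from the quadratic version is that the running sum s survives across iterations, so each element is added exactly once:

```java
public class PrefixAverages2 {
    // Linear prefix averages: a running sum replaces the inner loop
    // of prefixAverages1, mirroring prefixAverages2 above.
    public static double[] prefixAverages2(int[] X) {
        int n = X.length;
        double[] A = new double[n];
        long s = 0;                        // s <- 0
        for (int i = 0; i < n; i++) {
            s += X[i];                     // s <- s + X[i]
            A[i] = (double) s / (i + 1);   // A[i] <- s / (i + 1)
        }
        return A;
    }
}
```

Both versions return the same array; only the growth rate of their running time differs, which is exactly the point of the two examples.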
Math you need to Review

- Summations (Sec. 1.3.1)
- Logarithms and exponents (Sec. 1.3.2)
  - Properties of logarithms:
      log_b(xy) = log_b x + log_b y
      log_b(x/y) = log_b x − log_b y
      log_b xᵃ = a log_b x
      log_b a = log_x a / log_x b
  - Properties of exponentials:
      a^(b+c) = aᵇaᶜ
      a^(bc) = (aᵇ)ᶜ
      aᵇ/aᶜ = a^(b−c)
      b = a^(log_a b)
      bᶜ = a^(c·log_a b)
- Proof techniques (Sec. 1.3.3)
- Basic probability (Sec. 1.3.4)

Relatives of Big-Oh

- big-Omega
  - f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀.
- big-Theta
  - f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n₀ ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n₀.
- little-oh
  - f(n) is o(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n₀.
- little-omega
  - f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀.

Intuition for Asymptotic Notation

- Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n).
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n).
- little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n).
- little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n).

Example Uses of the Relatives of Big-Oh

- 5n² is Ω(n²):
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀; let c = 5 and n₀ = 1.
- 5n² is Ω(n):
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀; let c = 1 and n₀ = 1.
- 5n² is ω(n):
  f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀; we need 5n₀² ≥ c·n₀, and given c, any n₀ ≥ c/5 satisfies this.
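The little-omega claim can also be spot-checked: unlike big-Omega, the bound must hold for every constant c, with an n₀ that is allowed to depend on c. A sketch, where n₀ = max(1, ⌈c/5⌉) is the witness derived above and the scan length is an arbitrary check range:

```java
public class OmegaCheck {
    // Spot-check that 5n^2 is omega(n): for any constant c, taking
    // n0 = max(1, ceil(c/5)) gives 5*n*n >= c*n for all n >= n0.
    public static boolean holdsFrom(long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++)
            if (5 * n * n < c * n) return false;   // bound violated
        return true;
    }

    public static long witnessN0(long c) {
        return Math.max(1, (c + 4) / 5);           // ceil(c/5), at least 1
    }
}
```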
