Analysis of Algorithms

Running Time
- Most algorithms transform input objects into output objects.
- The running time of an algorithm typically grows with the input size.
- Average case time is often difficult to determine.
- We focus on the worst case running time.
[Figure: best case, average case and worst case running time (ms) versus input size.]

Experimental Studies
- Run the program with inputs of varying size and composition.
- Use a function, like the built-in clock() function, to get an accurate measure of the actual running time.
- Plot the results.
[Figure: measured time (ms) versus input size.]

Limitations of Experiments
- Implementing the algorithm may be difficult.
- Results may not be indicative of the running time on other inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used.
Pseudocode Details

Control flow:
- if … then … [else …]
- while … do …
- repeat … until …
- for … do …
- Indentation replaces braces

Method declaration:
- Algorithm method (arg [, arg…])
- Input …
- Output …

Method/Function call: var.method (arg [, arg…])
Return value: return expression
Expressions:
- ← assignment (like = in C++)
- = equality testing (like == in C++)
- Superscripts and other mathematical formatting allowed (e.g. n^2)

The Random Access Machine (RAM) Model
- A CPU
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
- Memory cells are numbered, and accessing any cell in memory takes unit time.
Primitive Operations
- Basic computations performed by an algorithm
- Identifiable in pseudocode
- Largely independent from the programming language
- Exact definition not important (we will see why later)
- Assumed to take a constant amount of time in the RAM model
- Examples: evaluating an expression, assigning a value to a variable, indexing into an array, calling a method, returning from a method

Counting Primitive Operations (§3.4.1)
By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.

Algorithm arrayMax(A, n)              # operations
  currentMax ← A[0]                   2
  for i ← 1 to n − 1 do               2 + n
    if A[i] > currentMax then         2(n − 1)
      currentMax ← A[i]               2(n − 1)
    { increment counter i }           2(n − 1)
  return currentMax                   1
                              Total   7n − 1
Growth Rates
Growth rates of functions:
- Linear ≈ n
- Quadratic ≈ n^2
- Cubic ≈ n^3
In a log-log chart, the slope of the line corresponds to the growth rate of the function.
[Figure: log-log plot of T(n) versus n for the cubic, quadratic and linear functions.]

Constant Factors
The growth rate is not affected by constant factors or lower-order terms.
Examples:
- 10^2 n + 10^5 is a linear function
- 10^5 n^2 + 10^8 n is a quadratic function
[Figure: log-log plot of T(n) versus n showing each example function parallel to its leading term.]
Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n0.
Example: the function 2n + 10 is O(n):
  2n + 10 ≤ cn
  (c − 2) n ≥ 10
  n ≥ 10/(c − 2)
  Pick c = 3 and n0 = 10
Big-Oh Rules
- If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  1. Drop lower-order terms
  2. Drop constant factors
- Use the smallest possible class of functions:
  say "2n is O(n)" instead of "2n is O(n^2)"
- Use the simplest expression of the class:
  say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

Asymptotic Algorithm Analysis
The asymptotic analysis of an algorithm determines the running time in big-Oh notation.
To perform the asymptotic analysis:
- We find the worst-case number of primitive operations executed as a function of the input size
- We express this function with big-Oh notation
Example:
- We determine that algorithm arrayMax executes at most 7n − 1 primitive operations
- We say that algorithm arrayMax "runs in O(n) time"
Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.
Math You Need to Review
- Summations (Sec. 1.3.1)
- Logarithms and Exponents (Sec. 1.3.2)
  Properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x − log_b y
    log_b(x^a) = a log_b x
    log_b a = log_x a / log_x b
  Properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(log_a b)
    b^c = a^(c · log_a b)
- Proof techniques (Sec. 1.3.3)
- Basic probability (Sec. 1.3.4)

Relatives of Big-Oh
big-Omega:
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
big-Theta:
  f(n) is Θ(g(n)) if there are constants c′ > 0 and c′′ > 0 and an integer constant n0 ≥ 1 such that c′·g(n) ≤ f(n) ≤ c′′·g(n) for n ≥ n0
little-oh:
  f(n) is o(g(n)) if, for any constant c > 0, there is an integer constant n0 ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n0
little-omega:
  f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n0 ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n0