
UNIT- I

1. Algorithm:
An algorithm is a finite set of unambiguous instructions, occurring in some specific sequence,
that can be used to perform a certain task. For every valid set of inputs, the algorithm must
produce the required output in a finite amount of time.

1.1 Properties of Algorithm:


• Input: The input should be taken from a specified set of objects.
• Output: At least one quantity is produced, which has a specified relation to the inputs.
• Finiteness: An algorithm must always terminate after a finite number of steps.
• Definiteness: Each instruction must be clear and unambiguous.

1.2 Algorithm Correctness:


The correctness of an algorithm is related to its purpose.
The algorithm is correct with respect to the specification, if it produces the desired results for all
the inputs defined by the specification.

There are two notions of correctness for an algorithm.


1. Partial Correctness
2. Total Correctness

1.2.1 Partial Correctness: Partial correctness means that, for every legal input, if the algorithm
terminates then the result it produces is valid. The correctness is only "partial" because the
algorithm is not required to halt (terminate) for every input.

1.2.2 Total Correctness: Total correctness means that, for every legal input, the algorithm halts
and the output it produces is valid.

1.3 Analysis of Algorithm:


• The analysis of an algorithm is the determination of the amount of resources (such as space
and time) necessary to execute it.

• There are many algorithms that can solve a given problem. They will have different
characteristics that determine how efficiently each will operate. When we analyze an
algorithm, we first have to show that the algorithm properly solves the problem,
because if it does not, its efficiency is not important.

• Analyzing an algorithm determines the amount of "time" the algorithm takes to execute.
This is not really a number of seconds or any other clock measurement, but rather an
approximation of the number of operations that the algorithm performs. The number of
operations is related to the execution time, so we will sometimes use the word time to
describe an algorithm's computational complexity.

• The analysis will determine an equation that relates the number of operations a
particular algorithm performs to the size of its input. We can then compare two algorithms by
comparing the rate at which their equations grow.

1.4 Complexity of Algorithm:


1.4.1 Space Complexity: The space complexity of an algorithm is the amount of memory it
needs to run to completion.

1.4.2 Time Complexity: The time complexity of an algorithm is the amount of computer time
it needs to run to completion. The time complexity of an algorithm is analyzed in three ways:
i. Best Case Time Complexity
ii. Worst Case Time Complexity
iii. Average Case Time Complexity

1.4.2.1 Best Case Time Complexity: The Best-Case time complexity of an algorithm is the
minimum amount of computer time it needs to run to completion.

1.4.2.2 Worst Case Time Complexity: The Worst-Case time complexity of an algorithm is the
maximum amount of computer time it needs to run to completion.

1.4.2.3 Average Case Time Complexity: The Average-Case time complexity of an algorithm is
the average amount of computer time it needs to run to completion.

1.5 Design Approaches for Algorithms:


1. Incremental Approach
2. Divide and Conquer Approach

1.5.1 Incremental Approach: It is a simple algorithm design approach. In this design, we use
conditional statements and loop statements to solve the problem. It is a non-recursive approach.
Ex. Insertion Sort.

1.5.1.1 Loop Invariant: A loop invariant is a condition that is necessarily true immediately
before and immediately after each iteration of a loop. Loop invariants are used to help us
understand why an algorithm is correct. We must show three things about a loop invariant.

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show
that the algorithm is correct.

For the insertion sort of Section 1.5.1.2, for example, the invariant is that at the start of each
iteration of the outer loop, the subarray A[1 ... j-1] consists of the elements originally in
A[1 ... j-1], but in sorted order.

1.5.1.2 Insertion Sort:

INSERTION-SORT (A)
where A is an array of n numbers.

1. for j = 2 to length[A] do
2.     key = A[j]
3.     // insert A[j] into the sorted sequence A[1 ... j-1]
4.     i = j - 1
5.     while i > 0 and A[i] > key do
6.         A[i+1] = A[i]
7.         i = i - 1
8.     A[i+1] = key
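The following is a minimal, runnable Python sketch of the same procedure (0-indexed, unlike the
1-indexed pseudocode). The function name insertion_sort and the assertion checking the loop
invariant of Section 1.5.1.1 are illustrative additions, not part of the original pseudocode.

def insertion_sort(A):
    orig = list(A)  # kept only so the loop invariant can be checked below
    for j in range(1, len(A)):                 # pseudocode line 1 (j = 2 .. n)
        # Loop invariant: A[0 .. j-1] holds the elements originally in
        # A[0 .. j-1], in sorted order.
        assert A[:j] == sorted(orig[:j])
        key = A[j]                             # line 2
        i = j - 1                              # line 4
        while i >= 0 and A[i] > key:           # line 5 (i > 0 becomes i >= 0 when 0-indexed)
            A[i + 1] = A[i]                    # line 6
            i = i - 1                          # line 7
        A[i + 1] = key                         # line 8
    return A

insertion_sort([5, 2, 4, 6, 1, 3])   # returns [1, 2, 3, 4, 5, 6]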

Analysis of insertion sort:

• A constant amount of time is required to execute each line of our pseudocode. One line
may take a different amount of time than another line, but we shall assume that each
execution of the i-th line takes time ci, where ci is a constant.

No. of statement    Cost    Times

1                   c1      n
2                   c2      n-1
3                   0       n-1
4                   c4      n-1
5                   c5      Σ(j=2 to n) t_j
6                   c6      Σ(j=2 to n) (t_j - 1)
7                   c7      Σ(j=2 to n) (t_j - 1)
8                   c8      n-1

Running time of insertion sort: The running time of the algorithm is the sum of running times
for each statement executed.

T(n) = c1 n + c2 (n-1) + c4 (n-1) + c5 Σ(j=2 to n) t_j + c6 Σ(j=2 to n) (t_j - 1)
       + c7 Σ(j=2 to n) (t_j - 1) + c8 (n-1)


Best Case:

In the best case, the array is already sorted. For each j = 2, 3, ..., n, we then find that A[i] ≤ key
in line 5 when i has its initial value of j-1. Thus t_j = 1 for j = 2, 3, ..., n, and the best-case
running time is

T(n)= c1 n + c2(n-1) + c4 (n-1) + c5 (n-1) + c6 (0) + c7 (0) + c8 (n-1)

T(n)= c1 n + c2(n-1) + c4 (n-1) + c5 (n-1) + c8 (n-1)

= (c1 + c2 + c4 + c5 + c8) n – (c2 + c4 + c5 + c8)

This running time can be expressed as (an + b) for constants a and b. It is a linear function of n.
T(n) = O (n)

Worst Case:

In the worst case, the array is reverse sorted. We must compare each element A[j] with each
element in the entire sorted subarray A[1 ... j-1], so t_j = j for j = 2, 3, ..., n.
Σ(j=2 to n) j = n(n+1)/2 - 1

Σ(j=2 to n) (j - 1) = n(n-1)/2

T(n) = c1 n + c2 (n-1) + c4 (n-1) + c5 (n(n+1)/2 - 1) + c6 (n(n-1)/2) + c7 (n(n-1)/2) + c8 (n-1)

     = (c5/2 + c6/2 + c7/2) n² + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8) n - (c2 + c4 + c5 + c8)

This worst-case running time can be expressed as an² + bn + c for constants a, b, and c. It is a
quadratic function of n.
T(n) = O(n²)
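The linear-versus-quadratic behaviour can be observed directly by counting how often the
inner-loop body (statements 6-7) executes, i.e. Σ(t_j - 1). The Python sketch below is
illustrative (the helper name inner_loop_steps is not from the notes): under these assumptions,
an already-sorted input causes 0 shifts, while a reverse-sorted input of n elements causes
n(n-1)/2 shifts.

def inner_loop_steps(A):
    # Count executions of the inner-loop body (statements 6-7), i.e. the sum of (t_j - 1).
    A, steps = list(A), 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i, steps = i - 1, steps + 1
        A[i + 1] = key
    return steps

n = 100
print(inner_loop_steps(list(range(n))))          # best case (already sorted): 0
print(inner_loop_steps(list(range(n, 0, -1))))   # worst case (reverse sorted): 4950 = n(n-1)/2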

1.5.2 Divide-and-Conquer Approach (Recursive Algorithms): In this approach, the problem
is divided into several sub-problems that are similar to the original problem but smaller in size.
The sub-problems are then solved recursively, and their solutions are combined to create a
solution to the original problem.

• Divide the problem into a number of sub-problems.

• Conquer the sub-problems by solving them recursively.

• Combine the solutions to the sub-problems into the solution for the original problem.

Ex. Merge Sort.

1.5.2.1 Analyzing divide-and-conquer algorithm:


T(n) is the running time on a problem of size n. If the problem size is small (i.e. n<=c) for some
constant c, the straightforward solution takes constant time, which we write θ(1). Suppose our
problem is divided into a sub-problems each of which is 1/b the size of the original. If we take
D(n) time to divide the problem into sub-problems and C(n) time to combine the solutions to the
sub-problems into the solution to the original problem, we get the recurrence-

T(n) = θ(1)                          if n ≤ c
T(n) = a T(n/b) + D(n) + C(n)        otherwise

1.5.2.2 Merge Sort:

MERGE-SORT(A, p, r )

1. if p < r
2.    then q ← ⌊(p + r)/2⌋           // Divide
3.         MERGE-SORT(A, p, q)       // Conquer
4.         MERGE-SORT(A, q + 1, r)   // Conquer
5.         MERGE(A, p, q, r)         // Combine

MERGE(A, p, q, r)
1.  n1 ← q − p + 1
2.  n2 ← r − q
3.  create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
4.  for i ← 1 to n1
5.      do L[i] ← A[p + i − 1]
6.  for j ← 1 to n2
7.      do R[j] ← A[q + j]
8.  L[n1 + 1] ← ∞
9.  R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13.     do if L[i] ≤ R[j]
14.        then A[k] ← L[i]
15.             i ← i + 1
16.        else A[k] ← R[j]
17.             j ← j + 1
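A minimal runnable Python transcription of the two procedures above (0-indexed, with
float('inf') playing the role of the ∞ sentinels). The names merge and merge_sort and the
sample array are illustrative, not part of the original pseudocode.

def merge(A, p, q, r):
    # Merge the sorted subarrays A[p..q] and A[q+1..r] (both ends inclusive).
    L = A[p:q + 1] + [float('inf')]      # sentinel, i.e. L[n1 + 1] = ∞
    R = A[q + 1:r + 1] + [float('inf')]  # sentinel, i.e. R[n2 + 1] = ∞
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2            # Divide
        merge_sort(A, p, q)         # Conquer
        merge_sort(A, q + 1, r)     # Conquer
        merge(A, p, q, r)           # Combine

A = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(A, 0, len(A) - 1)        # A becomes [1, 2, 2, 3, 4, 5, 6, 7]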

Running time of Merge-Sort:

When n ≥ 2, the merge sort steps take the following amounts of time:

Divide: Just compute q as the average of p and r ⇒ D(n) = θ(1).


Conquer: Recursively solve 2 sub-problems, each of size n/2 ⇒2T (n/2).
Combine: MERGE on an n-element sub-array takes θ(n) time, so C(n) = θ(n).
T(n) = θ(1)                if n = 1
T(n) = 2T(n/2) + θ(n)      if n > 1

We can replace the θ terms by a single constant c:

T(n) = c                   if n = 1
T(n) = 2T(n/2) + cn        if n > 1

The recurrence T(n) = 2T(n/2) + cn can be solved by the master method.

Compare the recurrence with T(n) = a T(n/b) + f(n):

a = 2, b = 2, f(n) = cn

Now calculate n^(log_b a) = n^(log_2 2) = n¹ = n.

Then f(n) = cn = θ(n^(log_b a)), so case 2 of the master method applies
(if f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) lg n)).

Therefore,
T(n) = θ(n^(log_b a) lg n)
T(n) = θ(n lg n)
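As a quick sanity check, the recurrence can be unrolled numerically and compared with n lg n.
This is an illustrative sketch assuming c = 1 and n a power of two; in that case the recurrence
evaluates to exactly n lg n + n, consistent with θ(n lg n).

import math

def T(n, c=1):
    # Unroll T(n) = 2*T(n/2) + c*n with T(1) = c.
    return c if n <= 1 else 2 * T(n // 2, c) + c * n

for n in (2, 8, 64, 1024):
    print(n, T(n), n * math.log2(n) + n)   # the two values match for powers of two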

2. Growth of Function:
The order of growth of the running time of an algorithm gives a simple characterization of the
algorithm's efficiency and also allows us to compare the relative performance of alternative
algorithms.

Table 1. Rate of growth of standard functions.

n        lg n    n       n lg n   n²      n³      2ⁿ
5        3       5       15       25      125     32
10       4       10      40       100     10³     10³
100      7       100     700      10⁴     10⁶     10³⁰
1000     10      10³     10⁴      10⁶     10⁹     10³⁰⁰
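The entries in Table 1 are rounded, order-of-magnitude values; a short script such as the
following (illustrative, not part of the notes) reproduces the underlying numbers:

import math

for n in (5, 10, 100, 1000):
    print(f"n={n:<5} lg n={math.log2(n):5.2f}  n lg n={n * math.log2(n):10.1f}  "
          f"n^2={n**2:<8} n^3={n**3:<11} 2^n={2.0**n:.2e}")
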
2.1 Asymptotic Notation:
The notations we use to describe the asymptotic running time of an algorithm are defined in
terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}.

2.1.1 O (big oh) Notation (asymptotic upper bound)

For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes
just "oh of g of n") the set of functions

O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.

e.g. f(n) = 6n² + 5n + 4

The highest-order term of f(n) is n², so let g(n) = n².

According to O notation’s condition


f(n) ≤ c g(n)

6n² + 5n + 4 ≤ c n²
6 + 5/n + 4/n² ≤ c

For n = 1,  c = 6 + 5 + 4 = 15
For n = 2,  c = 6 + 2.5 + 1 = 9.5
For n = 3,  c = 6 + 1.66 + 0.44 = 8.1

As n increases, the required value of c decreases; the largest value needed is c = 15, at n = 1.

6n² + 5n + 4 ≤ 15n² for all n ≥ n0, with n0 = 1

f(n) = O(n²)

2.1.2 θ-notation: (asymptotic tight bound)


For a given function g(n), we denote by θ(g(n)) the set of functions

θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for
all n ≥ n0}.

e.g. f(n) = 6n² + 5n + 4

The highest-order term of f(n) is n², so let g(n) = n².

According to θ notation’s condition


c1 g(n) ≤ f(n) ≤ c2 g(n)

c1 n² ≤ 6n² + 5n + 4 ≤ c2 n²
c1 ≤ 6 + 5/n + 4/n² ≤ c2

For n = 1,  6 + 5 + 4 = 15
For n = 2,  6 + 2.5 + 1 = 9.5
For n = 3,  6 + 1.66 + 0.44 = 8.1

As n increases, the middle expression decreases; its largest value is 15, at n = 1, and as n → ∞
it approaches 6. So choose c1 = 6 and c2 = 15:

6n² ≤ 6n² + 5n + 4 ≤ 15n² for all n ≥ n0, with n0 = 1

f(n) = θ(n²)
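The constants chosen in the O and θ examples above can be spot-checked numerically. This is a
small illustrative sketch; the function name f is just a placeholder for f(n) = 6n² + 5n + 4.

def f(n):
    return 6 * n * n + 5 * n + 4

# c = 15, n0 = 1 for the O(n²) bound; c1 = 6, c2 = 15 for the θ(n²) bounds.
assert all(f(n) <= 15 * n * n for n in range(1, 10001))
assert all(6 * n * n <= f(n) <= 15 * n * n for n in range(1, 10001))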

2.1.3 Ω (big Omega)-notation: (asymptotic lower bound)

For a given function g(n), we denote by Ω(g(n)) the set of functions

Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0}.
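E.g. for f(n) = 6n² + 5n + 4, choosing c = 6 and n0 = 1 gives 0 ≤ 6n² ≤ 6n² + 5n + 4 for all
n ≥ 1, so f(n) = Ω(n²).
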
2.2 Asymptotic notation in equations and inequalities.
o-notation:

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight.
The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation
to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh
of g of n").

o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that
0 ≤ f(n) < c g(n) for all n ≥ n0}.

E.g. 2n = o(n²), but 2n² ≠ o(n²).


Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n
approaches infinity; that is,

lim (n→∞) f(n)/g(n) = 0
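For the example above: lim (n→∞) 2n/n² = lim (n→∞) 2/n = 0, confirming that 2n = o(n²); by
contrast, 2n²/n² = 2 does not tend to 0, so 2n² ≠ o(n²).
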
ω Notation:

ω-notation denotes a lower bound that is not asymptotically tight.

ω (g(n)) = {f(n): for any positive constant c>0, there exists a constant n0 >0 such that
0 ≤ cg(n) < f(n) for all n ≥ n0}.

E.g. n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n)) implies that

lim (n→∞) f(n)/g(n) = ∞

That is, when this limit exists, f(n) becomes arbitrarily large relative to g(n) as n approaches
infinity.
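For the example above: lim (n→∞) (n²/2)/n = lim (n→∞) n/2 = ∞, confirming that n²/2 = ω(n),
while (n²/2)/n² = 1/2 does not tend to infinity, so n²/2 ≠ ω(n²).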
