
Introduction to Theory of Algorithms

Assignment 1

Submitted by: RAHUL PAUL

1> Referring back to the searching problem, observe that if the sequence A is sorted, we
can check the midpoint of the sequence against v and eliminate half of the sequence from
further consideration. The binary search algorithm repeats this procedure, halving the size
of the remaining portion of the sequence each time. Write pseudocode, either iterative or
recursive, for binary search. Argue that the worst-case running time of binary search is
Θ(lg n).

Solution: Binary search works only on a sorted array. The procedure takes a sorted array A, a search
value v, and a range [low, high]; it compares v to the element at the midpoint of the range and then
decides which half of the range to search next.

ITERATIVE-BINARY-SEARCH(A, v, low, high)
    while low ≤ high
        mid = ⌊(low + high)/2⌋
        if v == A[mid]
            return mid
        elseif v > A[mid]
            low = mid + 1
        else
            high = mid - 1
    return NIL

RECURSIVE-BINARY-SEARCH(A, v, low, high)
    if low > high
        return NIL
    mid = ⌊(low + high)/2⌋
    if v == A[mid]
        return mid
    elseif v > A[mid]
        return RECURSIVE-BINARY-SEARCH(A, v, mid + 1, high)
    else
        return RECURSIVE-BINARY-SEARCH(A, v, low, mid - 1)

Each step does constant work and halves the remaining range, so the running time satisfies

T(n) = T(n/2) + Θ(1)
     = [T(n/4) + Θ(1)] + Θ(1)
     = [T(n/8) + Θ(1)] + Θ(1) + Θ(1)
     .
     .
     .
     = T(n/2^i) + i·Θ(1)

The recursion bottoms out when n/2^i = 1, i.e., 2^i = n, which gives i = lg n.
So there are lg n + 1 levels, each doing constant work, and
the solution is T(n) = Θ(lg n).
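As a sanity check, the iterative version above can be sketched in Python (a 0-indexed list and a `None` return for an absent value are conveniences not fixed by the pseudocode):

```python
def binary_search(a, v):
    """Iterative binary search on a sorted list; returns an index of v, or None."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2       # midpoint of the remaining range
        if a[mid] == v:
            return mid
        elif v > a[mid]:
            low = mid + 1             # discard the left half
        else:
            high = mid - 1            # discard the right half
    return None

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -> None
```

Each iteration of the loop halves `high - low`, matching the T(n) = T(n/2) + Θ(1) recurrence.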

2. Observe that the while loop of lines 5-7 of the insertion sort procedure in Section 2.1
uses a linear search to scan (backward) through the sorted subarray A[1..j-1]. Suppose we
could use a binary search instead of the linear search. Answer the following questions:
(1) How many element-moving operations (i.e., A[i+1] = A[i]) are required if the binary
search is used? How many element-moving operations are required if the linear search is
used? Justify your answer for the worst case.
(2) How many comparisons are required if the binary search is used? How many
comparisons are required if the linear search is used? Justify your answer for both the best
and worst cases.

Solutions

INSERTION-SORT-LINEAR(A)
    for j = 2 to A.length
        key = A[j]
        i = j - 1
        while i > 0 and A[i] > key
            A[i+1] = A[i]
            i = i - 1
        A[i+1] = key

INSERTION-SORT-BINARY(A)
    for j = 2 to A.length
        key = A[j]
        low = 1
        high = j
        while low < high                 // binary search for the insertion position
            mid = ⌊(low + high)/2⌋
            if key < A[mid]
                high = mid
            else
                low = mid + 1
        for k = j downto low + 1         // shift the larger elements one place right
            A[k] = A[k-1]
        A[low] = key

(1) Insertion sort shifts elements to the right in order to make room for the next element.
Binary search only locates the insertion position faster; it does not reduce the number of moves: once
the position is found, all elements after it must still be shifted right. If the array is sorted backwards,
inserting the i-th element requires (i - 1) shifts regardless of which search found the position. So the
outer loop runs (n - 1) times and, in the worst case, the i-th iteration performs i - 1 moves, giving
1 + 2 + ... + (n - 1) = O(n²) element-moving operations for both variants. The worst-case running time
of both variants is therefore O(n²).

(2) Worst case: with linear search, inserting the j-th element takes O(j) comparisons, so the total is
O(n²). With binary search, inserting the j-th element takes only O(lg j) comparisons, so the total is
O(lg 2 + lg 3 + ... + lg n) = O(n lg n).
Best case: on an already sorted array, the linear search makes a single comparison per element, for
O(n) comparisons overall, and the linear variant runs in O(n) time. The binary search, however, always
performs its full Θ(lg j) probes, so the binary variant makes Θ(n lg n) comparisons even in the best
case (although, with no moves needed, that is still fewer than the linear variant's worst case).
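A minimal Python sketch of the binary variant (0-indexed lists; the upper-bound search and slice-based shifting are implementation choices not fixed by the pseudocode) makes the move count easy to inspect:

```python
def insertion_sort_binary(a):
    """Insertion sort that finds each insertion point by binary search.
    The search costs O(lg j) comparisons, but the shift still costs O(j) moves."""
    for j in range(1, len(a)):
        key = a[j]
        low, high = 0, j
        while low < high:                # binary search for the insertion point
            mid = (low + high) // 2
            if key < a[mid]:
                high = mid
            else:
                low = mid + 1            # upper bound: insert after equal keys
        a[low + 1:j + 1] = a[low:j]      # shift larger elements one place right
        a[low] = key
    return a

print(insertion_sort_binary([5, 2, 4, 6, 1, 3]))   # -> [1, 2, 3, 4, 5, 6]
```

Using the upper bound (first position whose element exceeds the key) keeps the sort stable; the slice assignment performs exactly the A[k] = A[k-1] shifts the pseudocode writes as a loop.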

3> Given the triple merge-sort algorithm below:

TRIPLE-MERGE-SORT(A, p, r)
    if p < r
        q1 = ⌊(2p + r)/3⌋
        q2 = ⌊(p + 2r)/3⌋
        TRIPLE-MERGE-SORT(A, p, q1)
        TRIPLE-MERGE-SORT(A, q1 + 1, q2)
        TRIPLE-MERGE-SORT(A, q2 + 1, r)
        TRIPLE-MERGE(A, p, q1, q2, r)

Implement the triple merge algorithm and analyze the running time.

Solution: TRIPLE-MERGE can be expressed as two calls to the standard two-way MERGE: first merge
the sorted runs A[p..q1] and A[q1+1..q2], then merge the resulting run A[p..q2] with A[q2+1..r].

TRIPLE-MERGE(A, p, q1, q2, r)
    MERGE(A, p, q1, q2)
    MERGE(A, p, q2, r)

MERGE(A, p, q, r)
    n1 = q - p + 1
    n2 = r - q
    create arrays L[1..n1 + 1] and R[1..n2 + 1]
    for i = 1 to n1
        L[i] = A[p + i - 1]
    for j = 1 to n2
        R[j] = A[q + j]
    L[n1 + 1] = ∞
    R[n2 + 1] = ∞
    i = 1
    j = 1
    for k = p to r
        if L[i] ≤ R[j]
            A[k] = L[i]
            i = i + 1
        else
            A[k] = R[j]
            j = j + 1

The running time of this merge sort satisfies

T(n) = 3T(n/3) + O(n)

Solving the recurrence with the master theorem:

T(n) = a·T(n/b) + n^c   if n > 1
T(n) = d                if n = 1

Then, for n a power of b:

1. if log_b a < c, T(n) = Θ(n^c);
2. if log_b a = c, T(n) = Θ(n^c log n);
3. if log_b a > c, T(n) = Θ(n^(log_b a)).

In our case, a = 3, b = 3, and c = 1, so log_b a = 1 = c.
So, by case 2, T(n) = Θ(n lg n).
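One way to implement the three-way scheme in Python (a sketch; list slicing and the standard library's k-way merge `heapq.merge` stand in for the index-based pseudocode):

```python
from heapq import merge

def triple_merge_sort(a):
    """Three-way merge sort: split into thirds, sort each recursively,
    then merge the three sorted runs."""
    n = len(a)
    if n <= 1:
        return a
    q1, q2 = n // 3, 2 * n // 3               # split points for three parts
    runs = (triple_merge_sort(a[:q1]),
            triple_merge_sort(a[q1:q2]),
            triple_merge_sort(a[q2:]))
    return list(merge(*runs))                  # 3-way merge of the sorted runs

print(triple_merge_sort([9, 4, 7, 1, 8, 2, 6, 3, 5]))
```

Each level splits into three subproblems of size n/3 and merges in linear time, matching T(n) = 3T(n/3) + O(n).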

4> Let f(n), h(n), and g(n) be asymptotically nonnegative functions. Assume that
f(n) = O(h(n)) and g(n) = O(h(n)). Prove that f(n) + g(n) = O(h(n)).

Solution
The functions f(n) and g(n) are asymptotically nonnegative, meaning there is an n0 such that f(n) ≥ 0
and g(n) ≥ 0 for all n ≥ n0.

It follows that f(n) + g(n) ≥ f(n) ≥ 0 and f(n) + g(n) ≥ g(n) ≥ 0 for all n ≥ n0.
Since f(n) = O(h(n)) and g(n) = O(h(n)), there are positive constants c1, c2 and thresholds n1, n2 such
that f(n) ≤ c1·h(n) for all n ≥ n1 and g(n) ≤ c2·h(n) for all n ≥ n2.
Now let c = c1 + c2 and n3 = max(n0, n1, n2). Then
f(n) + g(n) ≤ c1·h(n) + c2·h(n) = c·h(n) for all n ≥ n3.
That is, f(n) + g(n) = O(h(n)).
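A quick numeric check of the argument with concrete functions (f, g, h and the constants below are illustrative choices, not part of the problem):

```python
# Illustrative check: f(n) = 2n = O(n) with c1 = 2, and g(n) = 3n + 5 = O(n)
# with c2 = 4 (valid for n >= 5), so f + g <= (c1 + c2) * h(n) = 6n for n >= 5.
f = lambda n: 2 * n
g = lambda n: 3 * n + 5
h = lambda n: n

c = 2 + 4                        # c = c1 + c2 bounds the sum
assert all(f(n) + g(n) <= c * h(n) for n in range(5, 10_000))
print("f(n) + g(n) <= c*h(n) holds for all sampled n >= 5")
```

Note that the sum generally needs the larger constant c1 + c2; a single ci would not suffice.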

5> Does the statement "the running time of algorithm A is at least O(n lg n)" make sense?
Why?

Solution
The statement doesn't make sense. Big-O notation expresses an upper bound on a function — the
maximum time an algorithm may need. It is defined as:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

Saying a running time is "at least O(n lg n)" uses an upper-bound notation to assert a lower bound,
which conveys no information: every running time is "at least" bounded above by something. A lower
bound should instead be stated with Ω-notation, e.g., "the running time of A is Ω(n lg n)". That's why
the given statement doesn't make sense.

6> Prove that

e^(1/n) ∉ O(n^t) for t < 0

Solution
Suppose, for contradiction, that e^(1/n) = O(n^t). Then there exist c > 0 and n0 > 0 with
e^(1/n) ≤ c·n^t for all n ≥ n0.
Taking logarithms on both sides: ln(e^(1/n)) ≤ ln(c·n^t)

=> (1/n)·ln e ≤ ln c + t·ln n

=> 1/n ≤ ln c + t·ln n

Now divide both sides by ln n (which is positive for n ≥ 2):

1/(n·ln n) ≤ (ln c)/(ln n) + t

As n → ∞, the left-hand side vanishes, since

lim_{n→∞} 1/(n·ln n) = 0,

and (ln c)/(ln n) → 0 as well, so taking the limit gives 0 ≤ t. This contradicts t < 0.
Hence no such c and n0 exist, and it is proved that e^(1/n) ∉ O(n^t) for any t < 0.
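A numeric illustration of why no constant works (t = -1 and c = 1000 are example choices): e^(1/n) tends to 1 while c·n^t tends to 0, so any fixed c is eventually exceeded:

```python
import math

c, t = 1000.0, -1.0              # illustrative constants: large c, negative t
for n in (10, 1000, 100_000, 10_000_000):
    lhs = math.exp(1 / n)        # tends to 1 as n grows
    rhs = c * n ** t             # tends to 0 as n grows
    print(n, lhs > rhs)          # eventually True: the claimed bound fails
```

However large c is chosen, the comparison flips once c·n^t has decayed below 1.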

7> Express the function

n²/100 - 100n - 50·lg n in terms of Θ-notation.

Solution: According to the definition of Θ, there must be positive constants c1, c2, and n0 such that
0 ≤ c1·g(n) ≤ n²/100 - 100n - 50·lg n ≤ c2·g(n) for all n ≥ n0.
As written, one can trivially say n²/100 - 100n - 50·lg n = Θ(n²/100 - 100n - 50·lg n).
A better rephrasing is to find all values of c for which n²/100 - 100n - 50·lg n = Θ(n^c). Consider

lim_{n→∞} (n²/100 - 100n - 50·lg n) / n^c

= 1/100 if c = 2,
= 0 if c > 2, and
= ∞ if c < 2.

The limit is a finite positive constant only for c = 2, so n²/100 - 100n - 50·lg n = Θ(n^c) if and only
if c = 2; that is, n²/100 - 100n - 50·lg n = Θ(n²).
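The limit can be checked numerically (plain floating-point evaluation, purely for illustration):

```python
import math

def f(n):
    """The function from the problem: n^2/100 - 100n - 50 lg n."""
    return n**2 / 100 - 100 * n - 50 * math.log2(n)

for n in (10**6, 10**8, 10**10):
    print(n, f(n) / n**2)        # ratio approaches 1/100 = 0.01
```

The lower-order terms -100n and -50 lg n become negligible against n²/100, which is why the ratio settles at 1/100.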

S-ar putea să vă placă și