
ASSIGNMENT I

Design and Analysis of Algorithms

SUBMITTED TO: Praveen Kumar
SUBMITTED BY: Khushboo Bhagchandani (1647220, IV MCA)
Que1. Consider one real-time application and explain how algorithms
are useful.

Ans. The applications considered here are Google PageRank and Facebook, both of which are built on link-analysis algorithms.

In the era of the Internet, the analysis of relationships between different entities is crucial. From search engines and social networks to marketing analysis tools, everyone is trying to uncover the real structure of the Internet as it changes over time.

Link analysis is arguably one of the algorithmic techniques surrounded by the most myths and confusion among the general public. The problem is that there are different ways to perform link analysis, and each algorithm has characteristics that make it a little different (which is what allows the algorithms to be patented), but at their core they are similar.

The idea behind link analysis is simple: you can represent the graph in matrix form, which turns the problem into an eigenvalue problem. These eigenvalues and their eigenvectors give a very good picture of the structure of the graph and of the relative importance of each node. An early algorithm of this kind was developed in 1976 by Gabriel Pinski and Francis Narin.
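As an illustration, the following minimal Python sketch computes PageRank-style scores by power iteration towards the dominant eigenvector of the link matrix. This is not Google's actual implementation; the 4-page link structure, the damping factor d = 0.85, and the function name are assumptions made only for the example.

# Minimal PageRank-style power iteration (illustrative sketch only).
def pagerank(links, d=0.85, iterations=50):
    """links[i] is the list of pages that page i links to."""
    n = len(links)
    rank = [1.0 / n] * n            # start with a uniform rank vector
    for _ in range(iterations):
        new_rank = [(1.0 - d) / n] * n
        for page, outgoing in enumerate(links):
            if not outgoing:        # dangling page: spread its rank evenly
                for q in range(n):
                    new_rank[q] += d * rank[page] / n
            else:
                share = d * rank[page] / len(outgoing)
                for q in outgoing:
                    new_rank[q] += share
        rank = new_rank             # one step of power iteration
    return rank

# Example: page 0 -> 1,2; page 1 -> 2; page 2 -> 0; page 3 -> 2
print(pagerank([[1, 2], [2], [0], [2]]))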

The same kind of algorithm is at work when Facebook shows you your news feed (this is why the Facebook news feed is not an algorithm but the result of one), in Google+ and Facebook friend suggestions, in LinkedIn suggestions for jobs and contacts, in Netflix and Hulu movie recommendations, in YouTube video recommendations, and so on. Each one has a different objective and different parameters, but the math behind each remains the same.

Que2. Explain Five Asymptotic Notations.

Ans. 1) Θ (Theta) Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
A simple way to get Theta notation of an expression is to drop low
order terms and ignore leading constants. For example, consider the
following expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping the lower-order terms is always fine because there will always be an n0 after which Θ(n^3) has higher values than Θ(n^2), irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }

The above definition means that if f(n) is Θ(g(n)), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0.
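For example, one valid (though not unique) choice of constants for the expression above is c1 = 3, c2 = 6009 and n0 = 1, since for all n >= 1:

3n^3 <= 3n^3 + 6n^2 + 6000 <= 3n^3 + 6n^3 + 6000n^3 = 6009n^3

which witnesses 3n^3 + 6n^2 + 6000 = Θ(n^3).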

Fig1: Graph of Θ(g(n))

2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
1. The worst-case time complexity of Insertion Sort is Θ(n^2).
2. The best-case time complexity of Insertion Sort is Θ(n).

Fig2: Graph of O(g(n))

The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }

3) Ω (Omega) Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.

Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. Since the best-case performance of an algorithm is generally not useful, the Omega notation is the least used of the three notations.

Fig3: Graph of Ω(g(n))


For a given function g(n), we denote by Ω(g(n)) the following set of functions:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }

Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.

4) o (little-o) Notation:
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n^2 = O(n^2) is asymptotically tight, but the bound 2n = O(n^2) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 <= f(n) < c*g(n) for all n >= n0 }

For example, 2n = o(n^2), but 2n^2 ≠ o(n^2).

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound 0 <= f(n) <= c*g(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 <= f(n) < c*g(n) holds for all constants c > 0. Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is, lim (n -> infinity) f(n)/g(n) = 0.
Fig4: Graph of all asymptotic notation

5) ω (little-omega) Notation:
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 >= 1 such that f(n) > c*g(n) >= 0 for every integer n >= n0.

Here f(n) has a strictly higher growth rate than g(n). The main difference between Big Omega (Ω) and little omega (ω) lies in their definitions: in the case of Big Omega, f(n) = Ω(g(n)) means the bound 0 <= c*g(n) <= f(n) holds for some constant c > 0, but in the case of little omega it must hold for all constants c > 0.
We use ω notation to denote a lower bound that is not asymptotically tight.
f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).
In mathematical terms, if f(n) ∈ ω(g(n)) then lim (n -> infinity) f(n)/g(n) = infinity.

Que3. Discuss the growth of the functions.

Ans. Growth of functions:


To characterize the time cost of algorithms, we focus on functions that
map input size to (typically, worst-case) running time. (Similarly for
space costs.) We are interested in precise notation for characterizing
running-time differences that are likely to be significant across
different platforms and different implementations of the algorithms.
This naturally leads to an interest in the "asymptotic growth" of functions. We focus on how the function behaves as its input grows large.
Asymptotic notation is a standard means for describing families of
functions that share similar asymptotic behavior.
Asymptotic notation allows us to ignore small input sizes, constant factors, and lower-order terms in polynomials.

Fig5: Growth of Function

Que4. Explain the below-mentioned sorting algorithms with space and time complexity.
a) Insertion Sort
b) Selection Sort
c) Merge Sort
d) Bubble Sort
e) Quick Sort
f) Heap Sort
g) Radix sort
h) Shell Sort

Ans.
Algorithm        Time Complexity                                   Space Complexity
                 Best           Average          Worst             Worst
Quick Sort       Ω(n log n)     Θ(n log n)       O(n^2)            O(log n)
Merge Sort       Ω(n log n)     Θ(n log n)       O(n log n)        O(n)
Heap Sort        Ω(n log n)     Θ(n log n)       O(n log n)        O(1)
Bubble Sort      Ω(n)           Θ(n^2)           O(n^2)            O(1)
Insertion Sort   Ω(n)           Θ(n^2)           O(n^2)            O(1)
Selection Sort   Ω(n^2)         Θ(n^2)           O(n^2)            O(1)
Shell Sort       Ω(n log n)     Θ(n (log n)^2)   O(n (log n)^2)    O(1)
Radix Sort       Ω(nk)          Θ(nk)            O(nk)             O(n+k)

a) Insertion Sort:

Insertion sort is a simple sorting algorithm that builds the final sorted
array (or list) one item at a time. It is much less efficient on large lists
than more advanced algorithms such as quicksort, heapsort, or merge
sort. However, insertion sort provides several advantages:

- Simple implementation: Jon Bentley shows a three-line C version, and a five-line optimized version.
- Efficient for (quite) small data sets, much like other quadratic sorting algorithms.
- More efficient in practice than most other simple quadratic (i.e., O(n^2)) algorithms such as selection sort or bubble sort.
- Adaptive, i.e., efficient for data sets that are already substantially sorted: the time complexity is O(nk) when each element in the input is no more than k places away from its sorted position.
- Stable, i.e., does not change the relative order of elements with equal keys.
- In-place, i.e., only requires a constant amount O(1) of additional memory space.
- Online, i.e., can sort a list as it receives it.
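A minimal Python sketch of insertion sort (the function name and sample data are illustrative, not part of the assignment):

def insertion_sort(arr):
    # Grow a sorted prefix one element at a time.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([5, 2, 9, 1, 5, 6]))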
b) Selection sort:

The selection sort algorithm starts by finding the minimum value in the array and moving it to the first position. This step is then repeated for the second-lowest value, then the third, and so on, until the array is sorted. Selection sort's computational complexity is O(n^2).
Although selection sort, bubble sort, insertion sort, and gnome sort
all have the same computational complexity, selection sort typically
performs better than bubble sort and gnome sort, but not as well as
insertion sort. Heapsort greatly improves on the selection sort
algorithm by using an implicit heap data structure to speed up finding
and removing the next lowest value. This enhancement has the
potential to change the computational complexity to O(n log n).
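A minimal Python sketch of the idea described above (the function name and sample data are illustrative):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Move it to the front of the unsorted suffix.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([29, 10, 14, 37, 13]))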

c) Merge Sort:

Merge sort is a recursive algorithm that continually splits a list in half. If the list is empty or has one item, it is sorted by definition (the base case). If the list has more than one item, we split the list and recursively invoke a merge sort on both halves. Once the two halves are sorted, the fundamental operation, called a merge, is performed. Merging is the process of taking two smaller sorted lists and combining them into a single, sorted, new list.
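A recursive Python sketch following the description above (names and sample data are illustrative):

def merge_sort(items):
    # Base case: an empty or single-item list is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))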

d) Bubble Sort:

Bubble sort is a sorting algorithm that works by repeatedly stepping through the list that needs to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. This pass is repeated until no swaps are required, indicating that the list is sorted. Bubble sort gets its name because smaller elements bubble toward the top of the list.

Bubble sort is also referred to as sinking sort; it is a simple comparison-based sort.
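A minimal Python sketch with the early-exit check described above (stop when a pass makes no swaps); names and sample data are illustrative:

def bubble_sort(arr):
    n = len(arr)
    for end in range(n - 1, 0, -1):
        swapped = False
        # Compare adjacent pairs; larger values bubble toward the end.
        for i in range(end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:   # no swaps means the list is already sorted
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))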


e) Quick Sort:

Quicksort is a popular sorting algorithm that is often faster in practice than other sorting algorithms. It uses a divide-and-conquer strategy to sort data items quickly by partitioning a large array into two smaller arrays. It was developed by Charles Antony Richard Hoare (commonly known as C.A.R. Hoare or Tony Hoare) in 1960 for a machine translation project at the National Physical Laboratory.
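A simple Python sketch of the divide-and-conquer idea. This version partitions around the last element (an assumption made for the example; it is not Hoare's original partition scheme):

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Partition around the last element as the pivot.
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1
        # Recursively sort the two smaller sub-arrays.
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)
    return arr

print(quick_sort([10, 7, 8, 9, 1, 5]))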

f) Heap Sort:

Heap sort is a sorting algorithm that works by first organizing the data to be sorted into a special type of binary tree called a heap. The heap itself, by definition, has the largest value at the top of the tree, so the heap sort algorithm must also reverse that order to produce an ascending result. It does this with the following steps:

1. Remove the topmost item (the largest) and replace it with the
rightmost leaf. The topmost item is stored in an array.

2. Re-establish the heap.

3. Repeat steps 1 and 2 until there are no more items left in the heap.

The sorted elements are now stored in an array. A heap sort is especially efficient for data that is already stored in a binary tree. In most cases, however, the quick sort algorithm is more efficient.
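A Python sketch following the steps above, using a max-heap built in place (the helper name sift_down and the sample data are illustrative):

def heap_sort(arr):
    n = len(arr)

    def sift_down(root, size):
        # Restore the max-heap property for the subtree rooted at `root`.
        while True:
            largest = root
            left, right = 2 * root + 1, 2 * root + 2
            if left < size and arr[left] > arr[largest]:
                largest = left
            if right < size and arr[right] > arr[largest]:
                largest = right
            if largest == root:
                return
            arr[root], arr[largest] = arr[largest], arr[root]
            root = largest

    # Build the heap, then repeatedly move the top (largest) to the end.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]   # step 1: remove the topmost item
        sift_down(0, end)                     # step 2: re-establish the heap
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))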

g) Radix Sort:

Radix Sort is an algorithm that sorts a list of numbers and falls under the category of distribution sorts. This sorting algorithm does not compare the numbers but distributes them. It works as follows:

1. Sorting takes place by distributing the list of numbers into buckets, passing through the individual digits of each number one by one, beginning with the least significant digit. There are ten buckets in total, which bear the key values 0 to 9.
2. After each pass, the numbers are collected from the buckets, keeping them in order.
3. Now recursively redistribute the numbers as in step 1 above, but taking into account the next most significant digit of the number, and then repeat step 2.
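A Python sketch of this least-significant-digit radix sort for non-negative integers, using ten buckets keyed on each decimal digit (the function name and sample data are illustrative):

def radix_sort(nums):
    # Assumes non-negative integers; sort by each decimal digit,
    # starting from the least significant one.
    if not nums:
        return nums
    place = 1
    while max(nums) // place > 0:
        buckets = [[] for _ in range(10)]     # ten buckets for digits 0-9
        for num in nums:
            buckets[(num // place) % 10].append(num)
        # Collect from the buckets in order, preserving stability.
        nums = [num for bucket in buckets for num in bucket]
        place *= 10
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))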

h) Shell Sort:
Shell Sort is mainly a variation of Insertion Sort. In insertion sort, we move elements only one position ahead; when an element has to be moved far ahead, many movements are involved. The idea of Shell Sort is to allow exchanges of far-apart items. In Shell Sort, we make the array h-sorted for a large value of h and keep reducing the value of h until it becomes 1. An array is said to be h-sorted if every sublist formed by taking every h-th element is sorted.
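A Python sketch of Shell Sort using the simple gap sequence h = n/2, n/4, ..., 1 (this gap sequence is an assumption for the example; other sequences exist):

def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    # Perform gapped insertion sorts, shrinking the gap each round.
    while gap > 0:
        for i in range(gap, n):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2
    return arr

print(shell_sort([12, 34, 54, 2, 3]))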
