/*
Notice: selectionSort does O(n^2) item comparisons but only O(n) swaps.
That makes it faster than bubbleSort and insertionSort when the array elements
are large chunks of data (e.g. long strings).
insertionSort is O(n^2) for both item comparisons and item assignments in the
average case, but O(n) in the best case, where the array is already sorted or
close to sorted.
*/
Analysis for selectionSort
The minIndex function does O(n) item comparisons but no item assignments.
minIndex is called O(n) times from selectionSort.
Therefore selectionSort does O(n^2) item comparisons.
Each call to minIndex is followed by one swap (= 3 assignments).
Therefore selectionSort does O(n) item assignments.
Even though each item comparison takes less time than an item assignment, the
time is not negligible.
As the array gets big, the bulk of the runtime will be taken by item comparisons,
not item assignments.
That's why selection sort is an O(n^2) algorithm, in both the best case and the
worst case.
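A minimal sketch consistent with the counting above; the helper name minIndex comes from the notes, but the exact signatures are my assumptions, not the course code:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Index of the smallest item in a[lo..hi]:
// O(hi - lo) item comparisons, no item assignments.
int minIndex(const std::vector<int>& a, int lo, int hi)
{
    int m = lo;
    for (int i = lo + 1; i <= hi; i++)
        if (a[i] < a[m])
            m = i;
    return m;
}

// n-1 calls to minIndex => O(n^2) comparisons total.
// One swap (3 assignments) per call => O(n) assignments total.
void selectionSort(std::vector<int>& a)
{
    int n = static_cast<int>(a.size());
    for (int i = 0; i < n - 1; i++)
    {
        int m = minIndex(a, i, n - 1);
        std::swap(a[i], a[m]);
    }
}
```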
Analysis for insertionSort
In the worst case the array items start out in descending order.
Therefore each insertion will involve O(n) item assignments (via the shifts).
Since there are O(n) insertions, the worst-case runtime is O(n^2).
In the best case the array items start out in ascending order.
Then only O(1) comparisons and item assignments are required for each insertion,
so insertionSort is O(n) in the best case. Also, when the array is "almost sorted",
in the sense that no value is ever far from its correct position, insertionSort will
have O(n) performance.
That makes insertionSort better in some cases than an O(n log n) algorithm like
quicksort.
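A sketch of insertionSort matching the shift-counting argument above (the signature is assumed); the comments mark where the worst-case and best-case counts come from:

```cpp
#include <cassert>
#include <vector>

// Insert a[i] into the sorted prefix a[0..i-1] by shifting larger items right.
// Worst case (descending input): the while loop does O(i) shifts  => O(n^2) overall.
// Best case (ascending input):   the while loop exits immediately => O(n) overall.
void insertionSort(std::vector<int>& a)
{
    for (int i = 1; i < static_cast<int>(a.size()); i++)
    {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];   // one shift = one item assignment
            j--;
        }
        a[j + 1] = key;
    }
}
```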
Nested loops
count = 0;
for (int i = 1; i < n; i++)
    for (int j = 0; j < i; j++)
    {
        // some constant time operation
        count++;   // count gets incremented i times per outer iteration
    }
cout << "The final value of count is " << count << endl;
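The inner body runs 1 + 2 + ... + (n-1) = n(n-1)/2 times, which is why the nested loop is O(n^2). A self-contained version you can check against that formula (the function name nestedCount is mine):

```cpp
#include <cassert>

// Runs the nested loop above and returns the final value of count.
// The body executes 1 + 2 + ... + (n-1) = n(n-1)/2 times => O(n^2).
int nestedCount(int n)
{
    int count = 0;
    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}
```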
Notice this is O(phi^n), where phi = (1 + sqrt(5))/2, since the leading term can be
written (1/sqrt(5))*phi^n and the term in (-phi)^-n vanishes as n gets big. So adding 1
to the input size should increase the runtime by a factor of approximately 1.6. You can
verify this experimentally.
Of course this means our algorithm is exponentially slow. Remember we took that result
on faith a few weeks ago.
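One way to verify the factor-of-1.6 claim experimentally: count the calls made by the naive doubly recursive Fibonacci (my assumption about which algorithm the notes refer to) and compare successive counts. The ratio approaches phi ~ 1.618.

```cpp
#include <cassert>

// Number of calls made by naive recursive fib(n):
// calls(n) = 1 + calls(n-1) + calls(n-2), which grows like phi^n.
long long fibCalls(int n)
{
    if (n < 2) return 1;            // base cases: a single call
    return 1 + fibCalls(n - 1) + fibCalls(n - 2);
}
```

Printing fibCalls(n+1) / fibCalls(n) for increasing n shows the ratio settling near 1.618, i.e. each extra unit of input multiplies the work by about phi.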
To gauge the complexity of a program we count the operations that are executed most frequently.
For large inputs these operations will eat up the bulk of the clock cycles devoted to running that
program. In fact, we can do our analysis even before implementing the program. By looking at
repeated operations in the algorithm we can get a measure of the efficiency of the algorithm
independent of the implementation or hardware.
Our operation count will generally be a function f that depends on the size of the algorithm's input.
This allows us to characterize an algorithm's complexity by looking at how fast f grows. If f is a linear
function, we say the algorithm is a linear-time algorithm. If f is a quadratic function we say the
algorithm has quadratic (a special case of polynomial) runtime. Here are some of the complexity
classes that an algorithm may belong to:
O(1)        Constant time. e.g. stack push, pop, peek (both array and linked
            implementations).
O(log n)    Log time. e.g. binary search.
O(sqrt n)   Square-root time. e.g. determine if n is prime. (This takes exponential
            time in terms of the number of bits in n though, so it depends what we
            mean by size.)
O(n)        Linear time. e.g. linear search. Any algorithm that loops through an array
            once and does a constant-time operation on each iteration.
O(n log n)  Linearithmic time (or "n log n time"). e.g. quicksort (average case),
            mergesort (all cases).
O(n^2)      Quadratic time. e.g. insertion sort, selection sort, bubble sort, quicksort
            worst case.
O(n^k)      Polynomial time. e.g. optimized slowsort is cubic time;
            Strassen's matrix multiplication is approximately n^2.8 time.
O(2^n), O(3^n), etc., and generally O(b^n) for b > 1. Exponential time. e.g. canMake()
            was O(3^n) in the worst case. Factoring a number, in terms of the number's
            length in bits.
O(n!)       Factorial time. e.g. find the shortest path visiting all nodes in a weighted
            graph (traveling salesman) using exhaustive search.
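To make the O(sqrt n) entry concrete, here is a trial-division primality test, a sketch of one standard way to get that bound (the function name isPrime and the use of trial division are my assumptions about what the notes have in mind):

```cpp
#include <cassert>

// Trial division up to sqrt(n): at most O(sqrt n) divisions in terms of the
// value of n. In terms of n's bit length this is still exponential, which is
// the caveat mentioned in the notes.
bool isPrime(long long n)
{
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; d++)   // stop once d exceeds sqrt(n)
        if (n % d == 0)
            return false;
    return true;
}
```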