Introduction
Algorithm analysis allows a programmer to quantify the running time of a particular piece of code without having to implement or
run it. However, to truly understand the significance of this powerful tool, we must also run experimental trials that
attempt to prove or disprove the analysis. This paper therefore focuses on analyzing multiple algorithms that obtain the
same result by different means – namely the Fibonacci sequence and a selection problem.
Approach No.1
Since shifting the digits of a number to the right accomplishes division by the base of that number system, raised to the number of
digits shifted, we can count how many times we have seen a one. For example, if the digits of 1000₂, the
equivalent of 8₁₀, are shifted one place to the right, the resulting number is 100₂, or 10₂ if shifted twice.
In conjunction with the shift operator, the bit-wise AND operator (&) allows the algorithm to systematically
count each bit until the number has been shifted down to zero. To accomplish this recursively, in the fashion that the textbook requires,
one simply supplies one base case: if n = 0, return 0; otherwise return 1 or 0 (depending on whether n AND 1 = 1) plus the
result of the recursive call with n shifted once to the right as its argument.
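The recursion described above can be sketched as follows (the class and method names are illustrative, not the lab's actual identifiers):

```java
// Counts the 1-bits in n recursively: test the low bit with AND,
// then shift right, until n reaches zero (the base case).
public class BitCounter {
    public static int countOnes(int n) {
        if (n == 0) return 0;                 // base case: no bits left
        return (n & 1) + countOnes(n >>> 1);  // low bit, plus the count of the rest
    }
}
```

For example, countOnes(8) follows the shifts 1000₂ → 100₂ → 10₂ → 1₂ → 0 and returns 1.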
Approach No.2
Since the number of ones in n is equal to the number of ones in n/2, plus one if the number is odd, we
can simply return the value of the recursive call, with n/2 as argument, plus one. If the number is even, however,
we return the recursive call, with n + 1 as argument, minus one. Adding one to n, in the recursion, turns
the right-most bit to 1, meaning that the intermediate result will be one unit too large; we subtract 1 from the result to account for this
discrepancy. Finally, when n reaches 1, we simply return 1 – hence, the base case.
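A minimal sketch of this arithmetic variant, assuming the base case from the text plus a guard for n = 0 (identifiers are illustrative):

```java
// Arithmetic version of the bit count: n/2 plays the role of the right shift,
// and the parity tests replace the AND operator.
public class BitCounterArithmetic {
    public static int countOnes(int n) {
        if (n == 0) return 0;                         // guard added for completeness
        if (n == 1) return 1;                         // base case from the text
        if (n % 2 != 0) return countOnes(n / 2) + 1;  // odd: right-most bit is 1
        return countOnes(n + 1) - 1;                  // even: n+1 sets the low bit; subtract the extra 1
    }
}
```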
Because dividing n by 2 accomplishes the same operation as shifting the bits to the right once, Approach No.2 rests on the same
premise as No.1 and differs only in semantics: the shift operator is the equivalent of dividing n by two, and
the additional conditional statements replace the functionality of the AND operator. Approach No.1 should be more efficient
due to the fact that the shift operator is faster than division, which at bottom amounts to repeated subtraction.
Selection sort – implement a selection-sort algorithm that orders the entire list in order to access the kth element. Performs at
O(n²).
Quick sort – implement a quick-sort algorithm that orders the list in order to access the kth element. Performs at O(n log n).
Modified quick sort – implement a variant of the quick-sort algorithm that keeps track of the sizes of the sub-arrays
constructed during the partitioning of the entire list. Once the sizes are known, the algorithm can zero in on the kth element
without needing to order the remaining elements.
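The modified quick sort described above is essentially quickselect. A minimal sketch, under the assumption of a simple last-element pivot (the lab's actual partitioning scheme may differ):

```java
// Quickselect sketch: partition the range, then recurse only into the side
// that contains the kth element, so the other side is never ordered.
public class Quickselect {
    // Returns the kth smallest element of a (k is 0-based).
    public static int select(int[] a, int lo, int hi, int k) {
        if (lo == hi) return a[lo];
        int p = partition(a, lo, hi);
        if (k == p) return a[p];
        if (k < p) return select(a, lo, p - 1, k);  // kth element lies in the left sub-array
        return select(a, p + 1, hi, k);             // ...or in the right one
    }

    // Lomuto partition around the last element; returns the pivot's final index.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }
}
```

On average this runs in O(n) because each recursive call works on only one of the two partitions.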
The entire program is structured in multiple classes that encapsulate each aspect of the lab. There are classes for sorting, solving the
selection problem, the Fibonacci sequence, exception handling, and for the benchmarks: the methods that tie everything together.
Each benchmark is encased in its own part. Exercise 1.5 is in Part1.java, the Fibonacci algorithms in Part2.java, and the selection problem in
Part3.java. The file Main.java organizes these parts in a menu that adequately handles exceptions and invalid inputs.
Both Part 2 and Part 3 display the running times of the algorithms. The user can choose to receive either the number of steps that
the algorithm completes or the number of milliseconds that elapse during execution.
To run the program, execute Run.bat in the root folder. For a demo session, execute Demo.bat instead. The demo version
of the program allows the user to insert the elements of the array (Part 3), instead of the program randomly generating them.
Experimental Results
-Fibonacci Algorithms
There were three algorithms to test: the closed-form, iterative, and recursive algorithms. Because the computer works too
fast, both the closed-form and the iterative/loop versions terminate virtually instantly. To keep track of the complexity of the algorithms, the
number of steps each algorithm took was recorded. Below are the results of all three algorithms at several inputs:
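For reference, the three variants can be sketched as follows (method names are illustrative, not the lab's actual identifiers):

```java
// Minimal sketches of the three Fibonacci variants compared in the report.
public class Fib {
    // Recursive: exponential number of calls, by far the slowest.
    public static long fibRecursive(int n) {
        if (n <= 1) return n;
        return fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative: O(n) steps through the sequence.
    public static long fibIterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    // Closed form (Binet's formula): constant time, accurate up to
    // floating-point precision for moderate n.
    public static long fibClosed(int n) {
        double phi = (1 + Math.sqrt(5)) / 2;
        return Math.round(Math.pow(phi, n) / Math.sqrt(5));
    }
}
```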
Figure 1.2 – Illustrates the running times of all three algorithms.
Table 1.4
Conclusion
The purpose of this laboratory was to learn how algorithm analysis can serve as a tool to avoid inefficient, resource-demanding code.
In addition, this tool allows the programmer to estimate an algorithm's running time without having to implement it.
In this project, however, we took a step further.
The algorithms were implemented and compared to each other based on their running times or the number of steps needed to complete.
Big-O notation proved to be an effective tool for predicting how fast an algorithm could execute. For instance, the quick sort terminated
considerably sooner than the selection sort, hence O(n log n) < O(n²). So the moral of the story is that one must not only
focus on how to resolve a problem while composing an algorithm, but must also be concerned with the resources at hand and how
much of them it is reasonable to invest.