
Running Head: MACHINE LEARNING ALGORITHMS

Machine Learning Algorithms

Michael Mace

Liberty High School

Abstract

Many open source machine learning frameworks have become available recently and may be utilized in a variety of applications. The main goal of this project is to demonstrate machine learning in the real world. A small three-wheeled robot is the test subject of our custom machine learning program, designed to allow the robot to perfect a circular path on a given surface.

Introduction

Machine learning is a process by which a program “learns” by making changes to itself based on experience. Machine learning algorithms can be used in a variety of ways; they can “identify objects in images, transcribe speech into text, match news items, posts or products with users’ interests, and select relevant results of search” (LeCun, 2015, p. 1). Machine learning techniques vary from neural networks to genetic algorithms. These two examples are great pieces for seeing exactly how machine learning works, as is the simple custom program we created to demonstrate it with a robot.

Neural Networks

Neural networks “attempt to model the relationship between a set of inputs and known outputs” (Olden, 2008, p. 2). Neural networks are a form of deep learning based upon, and representative of, biological networks: brains. They are artificial brains designed to consider different examples presented to them and to attempt a task after “learning” how to do it. They are used for tasks such as recognizing images or playing through a game, as done by MarI/O. Image processing is the simpler case to explain.

A neural network has three kinds of layers: the input layer, any number of hidden layers, and the output layer. Consider a neural network trying to identify whether an image is bright or dark. The input layer is self-explanatory: it takes inputs and passes them to the hidden layers for processing. In image processing this happens on a pixel-by-pixel basis; each pixel can be read as a color value and sent to the hidden layers. At the hidden layers these color values are compared with each other and condensed into smaller sets of numbers approaching, in this example, 0 or 1. The hidden layers combine the pixel values in a vicinity to determine the value of that larger vicinity, and they keep doing so until they determine the overall brightness of the picture as a value between 0 and 1, where 0 is dark and 1 is bright. At this point the output layer comes in. The output layer checks the final overall brightness value and determines from it whether the picture is bright or dark.

In order to get to this point, the network receives examples of dark pictures and bright pictures and is told which is which. It tries to determine the brightness value itself, determines where the errors lie, and changes how it combines values in the hidden layers and what values constitute bright and dark in the output layer. This is repeated until a human decides it has learned what is bright and what is dark.
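The training loop described above can be sketched in code. This is a minimal illustration, not the project’s actual program: the four-pixel “images”, the layer sizes, and the learning rate are all invented for the example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Invented training data: tiny 4-pixel grayscale "images" in [0, 1],
# labeled bright (1) or dark (0) by their average value, standing in
# for the human-labeled examples described above.
examples = []
for _ in range(200):
    pixels = [random.random() for _ in range(4)]
    examples.append((pixels, 1.0 if sum(pixels) / 4 > 0.5 else 0.0))

# One hidden layer of 3 neurons and one output neuron, all with biases.
w_h = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b_h = [0.0, 0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(3)]
b_o = 0.0

def forward(pixels):
    h = [sigmoid(sum(w * p for w, p in zip(w_h[i], pixels)) + b_h[i])
         for i in range(3)]
    out = sigmoid(sum(w * v for w, v in zip(w_o, h)) + b_o)
    return h, out

lr = 0.5
for epoch in range(300):
    for pixels, label in examples:
        h, out = forward(pixels)
        d_out = (out - label) * out * (1 - out)    # where the error lies
        for i in range(3):
            d_h = d_out * w_o[i] * h[i] * (1 - h[i])
            w_o[i] -= lr * d_out * h[i]            # adjust the output layer
            b_h[i] -= lr * d_h
            for j in range(4):
                w_h[i][j] -= lr * d_h * pixels[j]  # adjust the hidden layer
        b_o -= lr * d_out

_, bright = forward([0.9, 0.8, 0.95, 0.85])  # a mostly-bright image
_, dark = forward([0.1, 0.2, 0.05, 0.15])    # a mostly-dark image
```

After training, `bright` should land above 0.5 and `dark` below it, matching the 0-to-1 brightness scale described above.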

Genetic Algorithm

Genetic algorithms are quite different from neural networks. They are a subset of evolutionary computation (EC): “EC is based on the process of evolution in natural systems and was inspired by a direct analogy to sexual reproduction and Charles Darwin's principle of natural selection” (Olden, 2008, p. 14).



A good example here is a genetic algorithm trying to solve a virtual maze with 20 bots, each equipped with proximity sensors and the ability to turn and adjust movement speed. When a bot touches a wall of the maze, it stops and is considered dead. Each bot can react differently to proximity in which way it turns, how fast it turns, and how fast it goes. The bots start with shared base values, and each separately mutates one value in one manner. The top 10 are then decided by time spent without hitting a wall and by how close they get to the end of the maze, and the 10 that did the worst are removed. The top 10 are copied, and both copies mutate again. This repeats until bots make it through the maze, at which point the top 10 are decided by fastest time. It is a survival-of-the-fittest formula. With basic knowledge of these two types of machine learning, we can delve into the project conducted.
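The mutate-and-cull cycle above can be sketched as follows. The maze simulation itself is replaced here with a stand-in fitness function (closeness to a hypothetical ideal reaction), since the point of the sketch is the survival-of-the-fittest loop, not the maze physics; the genome layout and mutation size are also invented.

```python
import random

random.seed(1)

# Each bot's genome: [turn_direction, turn_speed, drive_speed].
# Stand-in for maze performance: fitness is how close a genome comes
# to a hypothetical "ideal" reaction that would clear the maze.
IDEAL = [0.3, 0.7, 0.5]

def fitness(genome):
    return -sum((g - i) ** 2 for g, i in zip(genome, IDEAL))

def mutate(genome):
    # Mutate one value in one manner, as described above.
    child = genome[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-0.1, 0.1)
    return child

# 20 bots sharing base values, each mutated separately.
BASE = [0.5, 0.5, 0.5]
population = [mutate(BASE) for _ in range(20)]

for generation in range(50):
    # Rank the bots, remove the worst 10, then copy and mutate the best 10.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(b) for b in survivors]

best = max(population, key=fitness)
```

Over the generations, `best` drifts toward the ideal reaction, which is exactly the selection pressure the maze applies to the real bots.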

The Tricycle

The robot created for displaying machine learning is aptly named The Tricycle. The Tricycle was created using old parts from Liberty’s FTC team. In addition to these parts, a phone is attached to act as the connection between robot and program and as our way of collecting acceleration data. The Tricycle is programmed to send a set velocity to the wheels in order to drive in a circle. We originally planned to use a PID (proportional-integral-derivative) controller for the machine learning algorithm. A PID controller works by taking the difference between the desired outcome and the process variable that accomplishes that outcome, then adjusting itself based upon proportional, integral, and derivative terms. The PID controller was too advanced for our programming abilities, however, so we had to enter a simpler realm of machine learning: one of the first neural networks.

The solution we found was the feedforward neural network. Feedforward is a concept not limited to machine learning; it is also applied in real-life situations and in processes within the brain. Feedforward is a system that changes based on instructions received, as opposed to feedback, which looks at the outcome in order to change (Baker, 2013, pp. 1-2). Our feedforward neural network was a simple one consisting of a single hidden layer (and therefore not a deep learning program, which contains at least two hidden layers). It took in data from the phone as input, had one hidden layer of variables, and sent out two outputs: the power given to the back and front wheels.
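A single forward pass through a network shaped like the one described might look like the sketch below. The layer sizes, weights, and sample accelerometer reading are illustrative assumptions, not the project’s actual network.

```python
import math
import random

random.seed(2)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical sizes: 3 acceleration readings in, 4 hidden units,
# and 2 outputs (back-wheel power and front-wheel power).
N_IN, N_HID, N_OUT = 3, 4, 2

w_hidden = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w_output = [[random.uniform(-1, 1) for _ in range(N_HID)] for _ in range(N_OUT)]

def feedforward(accel):
    """One forward pass: phone data in, two wheel powers out."""
    h = [sigmoid(sum(w * a for w, a in zip(ws, accel))) for ws in w_hidden]
    return [sigmoid(sum(w * v for w, v in zip(ws, h))) for ws in w_output]

back_power, front_power = feedforward([0.1, 9.8, 0.3])
```

Because the outputs pass through a sigmoid, both wheel powers come out between 0 and 1 and can be scaled to whatever range the motors expect.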

Our feedforward neural network controlled the power in the wheels of The Tricycle: the back two wheels and the single front wheel separately. After each attempt at driving in a circle, we calculated the percentage of error from the phone’s acceleration data, using an equation derived from the simple centripetal acceleration formula, a = v²/r. The network then received instructions from a backpropagation program on how to modify itself, and the process repeated.
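One plausible form of that error calculation, derived from the centripetal acceleration formula a = v²/r, compares the phone’s measured acceleration against the value the target circle implies. The radius, speed, and measured reading below are invented for illustration; the paper’s exact equation may differ.

```python
# Hypothetical values: target circle radius, wheel speed, and a
# centripetal acceleration measured by the phone's accelerometer.
radius = 0.5          # meters
speed = 0.8           # meters per second
measured_accel = 1.1  # m/s^2, from the phone

expected_accel = speed ** 2 / radius  # a = v^2 / r
percent_error = abs(measured_accel - expected_accel) / expected_accel * 100
```

For these numbers the target acceleration is 1.28 m/s², so the run scores roughly a 14% error, which is the kind of value fed back into the learning loop.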

Data & Analysis

The percent error was tracked over time across the training runs.



It was not a quick process to “perfect” its circle: data collection took place over the course of several days, and in the end the path was not close to perfect at all. The program was also very slow to improve itself. It landed at a percent error of around 13%. While this is still not a circular path, it is certainly better than the original 80%. Even if it wasn’t perfect, the results still accurately represent machine learning: the robot got better over time, and that was the ultimate goal of this project.

Conclusion

Overall, this project was nonoptimal for demonstrating machine learning. It was overly simple in a manner that made the time investment not worth the results; each test took a long time to go through a single generation. For future projects along machine learning lines, a more complicated process would actually be better for demonstrating how this technology can really be used.

Resources

Baker, D. J., & Zuvela, D. (2013). Feedforward strategies in the first-year experience of online and distributed learning environments. Assessment & Evaluation in Higher Education, 38(6), 687-697.



Bei, Y., Minhong, W., Kushniruk, A. W., & Jun, P. (2017). Deep learning towards expertise development in a visualization-based learning environment. Journal of Educational Technology & Society, 20(4), 233-246.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553),

436-444.

Olden, J. D., Lawler, J. J., & Poff, N. L. (2008). Machine learning methods without tears: A primer for ecologists. Quarterly Review of Biology, 83(2), 171-193.

Psonis, T. K., Nikolakopoulos, P. G., & Mitronikas, E. (2015). Design of a PID controller for a linearized magnetic bearing. International Journal of Rotating Machinery, 1-12.
