We’re trying to bring Machine Learning and Deep Learning to one and all,
irrespective of whether a user knows how to code.
Nov 1 · 8 min read
Intro:
Understanding what Artificial Intelligence is, and how Machine
Learning and Deep Learning power it, can be an overwhelming experience.
We are a group of self-taught engineers who have gone through that
experience, and we are sharing (in blogs) our understanding, and what
helped us, in simplified form, so that anyone who is new to this field can
easily start making sense of the technicalities of this technology.
. . .
Connections — A connection links a neuron in one layer to a neuron
in another layer, or in the same layer. A connection always has a weight value
associated with it. The goal of training is to update these weight values to
decrease the loss (error).
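The idea above can be sketched in a few lines of NumPy. The numbers here are made-up placeholders, not values from the article’s network; they just show how each connection’s weight multiplies its input:

```python
import numpy as np

# A single neuron receiving 3 inputs over 3 weighted connections.
# Training would adjust these weights to reduce the loss.
inputs = np.array([0.5, -1.0, 2.0])
weights = np.array([0.8, 0.2, -0.5])   # one weight per connection
bias = 0.1

# The neuron's pre-activation value is the weighted sum of its inputs.
z = np.dot(inputs, weights) + bias
print(z)  # -0.7 (before any activation function is applied)
```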
. . .
Activation Function (Transfer Function) — Activation functions are
used to introduce non-linearity into neural networks. An activation function
squashes values into a smaller range; e.g. the sigmoid activation function
squashes values into the range 0 to 1. Many activation functions are
used in the deep learning industry, and ReLU, SeLU and TanH are preferred
over the sigmoid activation function. In this article I have explained the
different activation functions available.
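Two of the functions mentioned above can be sketched directly in NumPy; this is a minimal illustration of their squashing behaviour, not a full survey:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Zero for negative inputs, identity for positive ones.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))   # all values fall strictly between 0 and 1
print(relu(x))      # [0. 0. 2.]
```

Note how ReLU does not squash positive values at all, which is one reason it avoids the vanishing-gradient problems that sigmoid can suffer from.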
. . .
Output Layer — This layer is the last layer in the network and receives its
input from the last hidden layer. With this layer we can obtain the desired
number of values, in a desired range. In this network we have 3
neurons in the output layer, and it outputs y1, y2 and y3.
. . .
Input Shape — It is the shape of the input matrix we pass to the input
layer. Our network’s input layer has 4 neurons, and it expects the 4 values of
1 sample. The desired input shape for our network is (1, 4, 1) if we feed it
one sample at a time. If we feed 100 samples, the input shape will be (100,
4, 1). Different libraries expect shapes in different formats.
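The shapes above are easy to check with NumPy. This sketch uses the (samples, 4, 1) convention described in this article; as noted, other libraries may expect the same data as, say, (samples, 4) instead:

```python
import numpy as np

# 1 sample of 4 values, shaped for the article's 4-neuron input layer.
one_sample = np.random.rand(1, 4, 1)

# A batch of 100 such samples.
batch = np.random.rand(100, 4, 1)

print(one_sample.shape)  # (1, 4, 1)
print(batch.shape)       # (100, 4, 1)
```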
. . .
Forward Propagation
Backward Propagation
. . .
Precision and Recall
. . .
In the field of machine learning, and specifically the problem of statistical
classification, a confusion matrix, also known as an error matrix, is a
specific table layout that allows visualization of the performance of an
algorithm, typically a supervised learning one (in unsupervised learning it
is usually called a matching matrix). Each row of the matrix represents
the instances in a predicted class, while each column represents the
instances in an actual class (or vice versa). The name stems from the fact
that it makes it easy to see whether the system is confusing two classes (i.e.
commonly mislabelling one as another).
Confusion Matrix
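A confusion matrix is easy to build by hand. This sketch uses made-up labels for a 3-class problem and puts actual classes on the rows and predictions on the columns (one common convention; as the text notes, the transpose is also used):

```python
import numpy as np

# Hypothetical labels for 7 samples across 3 classes.
actual    = np.array([0, 0, 1, 1, 2, 2, 2])
predicted = np.array([0, 1, 1, 1, 2, 0, 2])

n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
for a, p in zip(actual, predicted):
    cm[a, p] += 1   # row = actual class, column = predicted class

print(cm)
# [[1 1 0]
#  [0 2 0]
#  [1 0 2]]
# Off-diagonal entries show the confusion: cm[2, 0] == 1 means one
# class-2 sample was mislabelled as class 0.
```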
. . .
One epoch = one forward pass and one backward pass of all the
training examples.
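The definition above can be sketched as a training loop. All names here are hypothetical placeholders rather than any specific library’s API; the point is only that every training example gets one forward and one backward pass per epoch:

```python
training_examples = [1, 2, 3, 4]

def forward_pass(x):
    return x * 2          # placeholder computation

def backward_pass(x):
    pass                  # placeholder weight update

epochs = 3
passes = 0
for epoch in range(epochs):
    for x in training_examples:   # every sample seen once per epoch
        forward_pass(x)
        backward_pass(x)
        passes += 1

print(passes)  # 3 epochs x 4 examples = 12 forward/backward passes
```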
. . .
About
At Mate Labs we have built Mateverse, a Machine Learning Platform
where you can build customized ML models in minutes, without
writing a single line of code. Our platform enables everyone to easily
build and train Machine Learning models.
Tell us if you have any new suggestions. Our ears and eyes are always
open for something really exciting.
. . .