NETWORKS”
Project Report
17BEC0479 V.B.Srivathsava
Slot: G1+TG1
1. Abstract
2. Introduction
3. Methodology
4. Algorithms
5. Results
6. Inference
Abstract:
The intent of the classification process is to categorize all pixels in a digital
image into one of several land cover classes, or "themes". This categorized
data may then be used to produce thematic maps of the land cover present
in an image. Normally, multispectral data are used to perform the
classification and, indeed, the spectral pattern present within the data for
each pixel is used as the numerical basis for categorization (Lillesand and
Kiefer, 1994). The objective of image classification is to identify and
portray, as a unique gray level (or color), the features occurring in an image
in terms of the object or type of land cover these features actually represent
on the ground.
Deep learning is a vast field, so we'll narrow our focus a bit and take up the
challenge of solving an image classification project. We'll be using a
Convolutional Neural Network (CNN) to achieve a solid accuracy score.
You can consider the Python code we'll see in this project as a benchmark
for building image classification models.
Introduction:
This section introduces the terms we will be using while creating and
training our CNN.
Methodology:
Stage 1 & 2: Loading and pre-processing the data
Data is gold as far as deep learning models are concerned. Your image
classification model has a far better chance of performing well if you have a
good amount of images in the training set. Also, the shape of the data
varies according to the architecture/framework that we use.
But we are not quite there yet. In order to see how our model performs on
unseen data (and before exposing it to the test set), we need to create a
validation set. This is done by partitioning the training set data.
In short, we train the model on the training data and validate it on the
validation data. Once we are satisfied with the model’s performance on the
validation set, we can use it for making predictions on the test data.
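The partitioning described above can be sketched in plain NumPy. This is a minimal illustration, not the report's actual code: the data here is a synthetic stand-in for the real images, and the 80/20 split ratio is an assumption chosen for the example.

```python
import numpy as np

# synthetic stand-in data: 100 "images" of shape 32x32x3 with 3 classes
x_train = np.random.rand(100, 32, 32, 3)
y_train = np.random.randint(0, 3, size=100)

# shuffle once so the split is not biased by the original ordering
indices = np.random.permutation(len(x_train))
x_train, y_train = x_train[indices], y_train[indices]

# carve off the last 20% of the shuffled data as the validation set
split = int(0.8 * len(x_train))
x_tr, x_val = x_train[:split], x_train[split:]
y_tr, y_val = y_train[:split], y_train[split:]

print(x_tr.shape, x_val.shape)  # (80, 32, 32, 3) (20, 32, 32, 3)
```

The model is then fit on `(x_tr, y_tr)` and scored on `(x_val, y_val)`; the test set stays untouched until the end.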
Choices such as the batch size and the number of epochs are essentially the
hyperparameters of the model, and they play a MASSIVE part in deciding how
good the predictions will be. We also define the number of epochs in this
step; here we will run the model for up to 20 epochs (you can change the
number of epochs later).
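The relationship between epochs, batch size, and gradient-update steps can be sketched in plain Python. All numbers here are illustrative assumptions, not measurements from the report:

```python
import math

# illustrative numbers, not the report's actual data set size
num_examples = 15000   # training examples
batch_size = 256       # examples per gradient step
epochs = 20            # full passes over the training data

# one epoch = enough batches to cover every training example once
steps_per_epoch = math.ceil(num_examples / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 59 1180
```

Larger batch sizes mean fewer, noisier-averaged updates per epoch; more epochs mean more total updates but a greater risk of overfitting, which is why a validation set is monitored during training.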
Algorithms:
Task 1: Import Libraries
In [ ]:
import tensorflow as tf
import os
import numpy as np
# create a folder for saved model checkpoints
if not os.path.isdir('models'):
    os.mkdir('models')
Task 2: Preprocess Data
In [ ]:
def get_three_classes(x, y):
    # keep only the three classes used in this project (labels 0, 1 and 2)
    y = np.asarray(y).reshape(-1)
    indices = np.where(y < 3)[0]
    x = x[indices]
    y = y[indices]
    # shuffle the examples with one random permutation
    count = x.shape[0]
    indices = np.random.choice(range(count), count, replace=False)
    x = x[indices]
    y = y[indices]
    # one-hot encode the labels
    y = tf.keras.utils.to_categorical(y)
    return x, y
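The shuffle and one-hot encoding steps in the cell above can be illustrated on a tiny synthetic array, using pure NumPy with `np.eye` standing in for `tf.keras.utils.to_categorical` (the data here is made up for the example):

```python
import numpy as np

# tiny synthetic data set: 6 "images" with labels drawn from 3 classes
x = np.arange(6).reshape(6, 1)          # stand-ins for images
y = np.array([0, 1, 2, 0, 1, 2])

# shuffle x and y together with a single random permutation,
# so each image keeps its own label
count = x.shape[0]
indices = np.random.choice(range(count), count, replace=False)
x, y = x[indices], y[indices]

# one-hot encode: label k becomes row k of the 3x3 identity matrix
y_onehot = np.eye(3)[y]
print(y_onehot.shape)  # (6, 3)
```

Using the same `indices` array for both `x` and `y` is what keeps the image-label pairing intact through the shuffle.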
In [ ]:
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
Task 3: Visualize Examples
In [ ]:
import matplotlib.pyplot as plt

class_names = ['aeroplane', 'car', 'bird']

def show_random_examples(x, y, p):
    # pick ten random examples to display
    indices = np.random.choice(range(x.shape[0]), 10, replace=False)
    x = x[indices]
    y = y[indices]
    p = p[indices]
    plt.figure(figsize=(10, 5))
    for i in range(10):
        plt.subplot(2, 5, i + 1)
        plt.imshow(x[i])
        plt.xticks([])
        plt.yticks([])
        # green label when the prediction matches the true class, red otherwise
        col = 'green' if np.argmax(y[i]) == np.argmax(p[i]) else 'red'
        plt.xlabel(class_names[np.argmax(p[i])], color=col)
    plt.show()
In [ ]:
show_random_examples(x_test, y_test, y_test)
Task 4: Create Model
In [ ]:
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, Input, Dense

def create_model():
    model = tf.keras.models.Sequential()
    model.add(Input(shape=(32, 32, 3)))
    # convolutional blocks built from the Conv2D, MaxPooling2D,
    # BatchNormalization and Dropout layers imported above would
    # normally sit here, before the features are flattened
    model.add(Flatten())
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

model = create_model()
model.summary()
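Since `Conv2D`, `MaxPooling2D`, `BatchNormalization`, and `Dropout` are all imported, the model very likely contained convolutional blocks. Below is a plausible sketch of such a block structure; the filter counts, kernel sizes, and dropout rate are assumptions for illustration, not the report's actual architecture:

```python
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     Dropout, Flatten, Input, Dense)

def create_conv_model():
    # hypothetical architecture: two conv blocks, then a softmax classifier
    model = tf.keras.models.Sequential([
        Input(shape=(32, 32, 3)),
        Conv2D(32, 3, activation='relu', padding='same'),
        BatchNormalization(),
        MaxPooling2D(),          # 32x32 -> 16x16
        Conv2D(64, 3, activation='relu', padding='same'),
        BatchNormalization(),
        MaxPooling2D(),          # 16x16 -> 8x8
        Dropout(0.5),            # regularization before the classifier
        Flatten(),
        Dense(3, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

model = create_conv_model()
print(model.output_shape)  # (None, 3)
```

Each block doubles the filter count while halving the spatial resolution, a common pattern that trades spatial detail for richer features.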
Task 5: Train the Model
In [ ]:
%%time
h = model.fit(
    x_train/255., y_train,
    validation_data=(x_test/255., y_test),
    epochs=20, batch_size=256,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(monitor='val_accuracy',
                                         patience=2),
        tf.keras.callbacks.ModelCheckpoint(
            'models/model_{val_accuracy:.3f}.h5',
            save_best_only=True, save_weights_only=False,
            monitor='val_accuracy')
    ]
)
Task 6: Final Predictions
In [ ]:
losses = h.history['loss']
accs = h.history['accuracy']
val_losses = h.history['val_loss']
val_accs = h.history['val_accuracy']
epochs = len(losses)

plt.figure(figsize=(12, 4))
for i, metrics in enumerate(zip([losses, accs], [val_losses, val_accs],
                                ['Loss', 'Accuracy'])):
    plt.subplot(1, 2, i + 1)
    plt.plot(range(epochs), metrics[0], label='Training {}'.format(metrics[2]))
    plt.plot(range(epochs), metrics[1], label='Validation {}'.format(metrics[2]))
    plt.legend()
plt.show()
In [ ]:
model = tf.keras.models.load_model('models/model_0.913.h5')
preds = model.predict(x_test/255.)
In [ ]:
show_random_examples(x_test, y_test, preds)
Results:
Inference:
In this way, we are able to build a CNN and train it to perform image
classification.