
IMAGE SUPER RESOLUTION USING DCNN

A Major Project Report

In partial fulfilment of the degree

BACHELOR OF TECHNOLOGY
In

COMPUTER SCIENCE AND ENGINEERING

BY

15K41A0573 BUTHAM SANJAY KUMAR

15K41A0590 KATTAGANI ROHITH KUMAR

15K41A0513 GADDAM SANDEEP

15K41A05G1 PINGILI SRIYA

Under the guidance of


Mr D.Ramesh
Assistant Professor

Submitted to

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


S.R.ENGINEERING COLLEGE (A), ANANTHASAGAR, WARANGAL
(Affiliated to JNTUH, Accredited by NBA)
April, 2019.

DEPARTMENT OF COMPUTER SCIENCE AND


ENGINEERING

CERTIFICATE

This is to certify that the Major Project entitled “IMAGE SUPER RESOLUTION USING

DCNN” is a record of bonafide work carried out by the students B. Sanjay Kumar

(15K41A0573), K. Rohith Kumar (15K41A0590), G. Sandeep (15K41A0513) and P. Sriya

(15K41A05G1) during the academic year 2015-2019, in partial fulfilment of the requirements for

the award of the degree of Bachelor of Technology in COMPUTER SCIENCE AND

ENGINEERING by the Jawaharlal Nehru Technological University, Hyderabad.

SUPERVISOR HEAD OF THE DEPARTMENT

(Mr D.RAMESH) (Mr A.SRINIVAS)


EXTERNAL EXAMINER

ACKNOWLEDGEMENT

We wish to take this opportunity to express our sincere gratitude and deep sense of
respect to our beloved Principal, Dr. V. Mahesh, for his continuous support and guidance
in carrying out this project.
We express our heartfelt thanks to our Head of the Department of Computer Science and
Engineering, Mr A. Srinivas, Asst. Professor, for his encouragement and guidance from time to
time, and we are thankful to all faculty members and programmers of the CSE Department of SR
Engineering College for their help during our course.
We would like to express our sincere gratitude to our internal guide, Mr D. Ramesh,
Sr. Asst. Professor, Department of Computer Science and Engineering, for providing us with the
necessary infrastructure and encouragement throughout our work on this project; his guidance
and suggestions have motivated us to achieve goals we never thought possible. The time we have
spent working under him has truly been a pleasure.

Finally, we would like to thank all our B.Tech. classmates for their constant support during
our class work, and special thanks to our parents and family members for their support and
encouragement throughout the completion of the project.

B.Sanjay Kumar (15K41A0573)

K.Rohith Kumar (15K41A0590)


G.Sandeep (15K41A0513)

P.Sriya (15K41A05G1)

ABSTRACT

We propose a deep learning method for single image super-resolution (SR). Our method directly
learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a
deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the
high-resolution one. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art
restoration quality, and achieves fast speed for practical on-line usage. We explore different network
structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we
extend our network to cope with three color channels simultaneously, and show better overall
reconstruction quality.
INDEX

S.NO TITLE PAGE NO

1. INTRODUCTION

1.1 EXISTING SYSTEM

1.2 PROPOSED SYSTEM

2. LITERATURE SURVEY

2.1 RELATED WORK

2.2 SYSTEM STUDY

3. DESIGN

3.1 REQUIREMENT SPECIFICATION

3.1.1 HARDWARE SPECIFICATION

3.1.2 SOFTWARE SPECIFICATION

3.2 UML DIAGRAMS

3.3 E-R DIAGRAM

4. IMPLEMENTATION

4.1 MODULES

4.2 OVERVIEW OF TECHNOLOGY

5. TESTING

5.1 TEST CASES


5.2 TEST RESULT

6. RESULT

7. CONCLUSION

8. FUTURE SCOPE

9. BIBLIOGRAPHY

LIST OF FIGURES

S.NO FIGURE NAME PAGE NO.


1. FIG 1.2.2.1 ARCHITECTURE OF PROPOSED SYSTEM
2. FIG 3.2.1 USECASE DIAGRAM FOR ADMIN
3. FIG 3.2.2 USECASE DIAGRAM FOR USER
4. FIG 3.2.3 CLASS DIAGRAM
5. FIG 3.2.4 SEQUENCE DIAGRAM FOR ADMIN
6. FIG 3.2.5 SEQUENCE DIAGRAM FOR USER
7. FIG 3.2.6 COMPONENT DIAGRAM FOR ADMIN
8. FIG 3.2.7 COMPONENT DIAGRAM FOR USER
9. FIG 3.2.8 STATECHART DIAGRAM FOR ADMIN
10. FIG 3.2.9 STATECHART DIAGRAM FOR USER
11. FIG 3.2.10 DATAFLOW DIAGRAM
12. FIG 3.3 E-R DIAGRAM
LIST OF ACRONYMS

S.NO NAME OF ACRONYM PAGE NO.

1.

2.

3.

4.
LIST OF TABLES

S.NO TABLE NAME PAGE NO.

1.

2.

3.

4.

5.

1. INTRODUCTION
1.1 ABOUT PROJECT
An image consists of pixels (the smallest elements of an image). Pixels are stored in memory in
the form of a raster image or raster map, which is a 2-D array of small integers. When a low-resolution
image is up-scaled, it gets distorted, which is very unpleasant. To tackle this problem, we apply a
machine learning technique to improve the resolution of the image. This technique learns an end-to-end
mapping between training data and the corresponding target attributes.

State-of-the-art methods for this task fall into three categories [1]:

1. Self-similarity based

2. Dictionary learning based

3. Deep learning based

Self-similarity based methods work by pairing low-resolution and high-resolution patches to
improve the quality of a low-resolution image. Our method focuses on deep learning: we use a
convolutional neural network and show that deep-learning-based methods overcome the shortcomings
of the other state-of-the-art methods.

“The deeper the better” does not hold here. Our technique uses three convolutional layers and does
not show much improvement with four or five layers; we therefore harness the power of the GPU with
fewer layers, since training efficiency and storage must also be considered.

1.2 PROPOSED SYSTEM

We chose image super-resolution because of its vast applications in a number of fields such as
forensics, image information enhancement, surveillance, medical diagnosis and earth-observation
remote sensing.
Low-resolution images can be converted to high resolution without buying expensive
cameras.
It is one of the most active research areas at present and has shown a lot of potential over
the past three decades.

In the real world we face problems such as:

1. It is difficult to identify the vehicle number on a number plate in a low-resolution image taken by a
traffic camera.
2. When we zoom into an image taken by a satellite, we get a low-resolution image in which it is
difficult to identify objects.
We try to solve the above problems by improving the quality of an image using a deep convolutional
neural network, and we provide better results than the existing state-of-the-art methods.

1.3 PROBLEM DEFINITION


Single-image super-resolution (SISR) involves:

1. Approximating high-frequency information, such as edges and texture, that has been lost.
2. A severely ill-posed inverse problem: there is no unique solution, or there may be no solution in the
strict sense.
3. Algorithms should not exploit contextual information.
4. Most super-resolution algorithms work on a single color channel. For color images, we have to
work on three channels, i.e. RGB, which is first converted to YCbCr (a minimal conversion sketch
follows this list).
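
The sketch below shows only the color-space conversion step, assuming OpenCV (cv2) is available; the file names and variable names are illustrative placeholders and are not taken from this report.

import cv2

bgr = cv2.imread("input.png")                     # OpenCV loads images in BGR order
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)    # note OpenCV uses the YCrCb channel order
y, cr, cb = cv2.split(ycrcb)                      # super-resolve y; keep cr and cb as they are

# After processing the luminance channel, merge and convert back for saving.
restored = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("output.png", restored)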
2. LITERATURE SURVEY

2.1 RELATED WORK


2.1.1 Image Super Resolution

According to the image priors, single-image super-resolution algorithms can be categorized into
four types: prediction models, edge-based methods, image statistical methods and patch-based (or
example-based) methods. These methods have been thoroughly investigated and evaluated. Among them,
the example-based methods achieve state-of-the-art performance.

The internal example-based methods exploit the self-similarity property and generate exemplar
patches from the input image itself; several improved variants have been proposed to accelerate the
implementation. The external example-based methods learn a mapping between low/high-resolution
patches from external datasets. These studies vary in how they learn a compact dictionary or manifold
space to relate low/high-resolution patches, and in how representation schemes are conducted in such
spaces.

2.1.2 Convolutional Neural Networks

Convolutional neural networks (CNNs) date back decades, and deep CNNs have recently shown
explosive popularity, partially due to their success in image classification. They have also been
successfully applied to other computer vision fields, such as object detection, face recognition and
pedestrian detection. Several factors are of central importance in this progress: (i) the efficient training
implementation on modern powerful GPUs, (ii) the proposal of the Rectified Linear Unit (ReLU), which
makes convergence much faster while still giving good quality, and (iii) the easy access to an
abundance of data (such as ImageNet) for training larger models. Our method also benefits from this
progress.

2.1.3 Deep Learning for Image Restoration

There have been a few studies on using deep learning techniques for image restoration. The multi-
layer perceptron (MLP), whose layers are all fully connected (in contrast to convolutional), has been
applied to natural image denoising and post-deblurring denoising. More closely related to our work, the
convolutional neural network has been applied to natural image denoising and to removing noisy
patterns (dirt/rain). These restoration problems are more or less denoising-driven. Another line of work
proposes to embed auto-encoder networks in its super-resolution pipeline under the notion of the
internal example-based approach. That deep model is not specifically designed to be an end-to-end
solution, since each layer of the cascade requires independent optimization of the self-similarity search
process and the auto-encoder. In contrast, the proposed SRCNN optimizes an end-to-end mapping.
Further, the SRCNN is faster. It is not only a quantitatively superior method, but also a practically
useful one.

2.2 SYSTEM STUDY


The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system analysis, the feasibility study of
the proposed system is carried out to ensure that the proposed system is not a burden to the
company. For feasibility analysis, some understanding of the major requirements of the system is
essential.

Three key considerations involved in the feasibility analysis are

➢ ECONOMIC FEASIBILITY
➢ TECHNICAL FEASIBILITY
➢ SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development of the
system is limited, so the expenditure must be justified. The developed system is well within the
budget, and this was achieved because most of the technologies used are freely available. Only the
customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. A system must not place a high demand on the available technical resources, as this would
in turn place high demands on the client. The developed system therefore has modest requirements, and
only minimal or no changes are required to implement it.

SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the user. This includes the
process of training the user to use the system efficiently. The user must not feel threatened by the system,
but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods
that are employed to educate the user about the system and to make the user familiar with it.

3. DESIGN

3.1 REQUIREMENT SPECIFICATION


Performance is measured in terms of the output provided by the application. Requirement
specification plays an important part in the analysis of a system. Only when the requirement
specifications are properly given is it possible to design a system that fits into the required
environment. The requirements have to be known during the initial stage so that the
system can be designed according to them; it is very difficult to change the system once it
has been designed, and, on the other hand, a system that does not cater to the requirements of the
user is of no use.

The requirements specification for any system can be broadly stated as given below:

➢ The system should be able to interface with the existing system.
➢ The system should be accurate.
➢ The system should be better than the existing system.

3.1.1 HARDWARE SPECIFICATION

Hardware    Operating System    Processor        RAM     Graphics

System      Ubuntu              Intel Core i5    8 GB    2 GB (Nvidia GeForce)

3.1.2 SOFTWARE SPECIFICATION

Software and Tools    Requirements

Python                Basic language
CUDA                  For parallel training on the GPU
OpenCV                Patch extraction, non-linear mapping and reconstruction
Keras                 For training the model
TensorFlow            Backend for Keras

4. IMPLEMENTATION
Our technique uses three convolutional layers and does not show much improvement with four or five
layers; we therefore harness the power of the GPU with fewer layers, since training efficiency and
storage must be considered. Traditionally, the image is up-scaled using bi-cubic interpolation, but this
leads to loss of texture due to excessive smoothing, which produces an unnatural appearance. Instead,
we use the discrete wavelet transform for upscaling and a back-projection technique to enhance the
edges. The up-sampled image is then iteratively back-projected using a back-projection filter based on
the Laplacian of Gaussian (LoG); this gives a more natural texture in smooth areas of the image. The
method also removes spurious colors along the edges. A rough illustration of the back-projection idea
is sketched below.
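
The sketch below is only a simplified illustration of iterative back-projection, not the report's pipeline: it substitutes plain bi-cubic up-scaling for the discrete wavelet transform and a Gaussian-smoothed residual for a true LoG filter, and the iteration count and kernel size are illustrative choices.

import cv2
import numpy as np

def upscale_with_backprojection(lr, scale=2, iterations=5):
    # Initial up-scaling (the report uses a DWT-based up-scaling at this step).
    hr = cv2.resize(lr, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    for _ in range(iterations):
        # Simulate the low-resolution image from the current high-resolution estimate.
        simulated = cv2.resize(hr, (lr.shape[1], lr.shape[0]),
                               interpolation=cv2.INTER_AREA)
        # Back-project the residual: up-sample the error and add it back.
        error = lr.astype(np.float32) - simulated.astype(np.float32)
        error_up = cv2.resize(error, (hr.shape[1], hr.shape[0]),
                              interpolation=cv2.INTER_CUBIC)
        # Smooth the residual before adding it (the report applies a LoG-based
        # back-projection filter here instead of a plain Gaussian).
        error_up = cv2.GaussianBlur(error_up, (3, 3), 0)
        hr = np.clip(hr.astype(np.float32) + error_up, 0, 255).astype(np.uint8)
    return hr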

Assumption:

If the high-resolution patch representations are in the image domain:

    Reshape each representation to form the patch; we expect the filters to act like an averaging filter.

Else:

    We expect We3 to behave like

    I. projecting the coefficients onto the image domain, and

    II. then averaging.

Given a single low-resolution image, we first upscale it to the desired size using bi-cubic interpolation,
which is the only pre-processing we perform. Let us denote the up-scaled image as Y1.

We consider the up-scaled image Y1 as the input image on which we wish to learn a mapping F that
conceptually consists of three operations:

1. Patch Extraction and Representation

2. Non-Linear Mapping

3. Reconstruction
Fig.: An illustration of sparse-coding-based methods in the view of a convolutional neural network.

4.1 Patch Extraction and Representation

We extract patches from the image, which is a strategy used for image restoration, and apply the
operation below:

f1(Y1) = max(0, We1 * Y1 + b1),

where We1 corresponds to k1 filters of size ch x m1 x m1, with ch being the number of channels in the
image Y1. We1 therefore applies k1 convolutions, and b1 is a k1-dimensional vector of biases, one per
filter.

We apply the ReLU, max(0, x), to the filter responses; formally, the ReLU belongs to the second
operation.
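
A minimal Keras sketch of this first operation is given below (assuming a TensorFlow backend); the filter count k1 = 64 and filter size m1 = 9 are illustrative values and are not figures stated in this report.

from tensorflow.keras.layers import Conv2D, Input

ch = 1                                  # one luminance (Y) channel
Y1 = Input(shape=(None, None, ch))      # the bi-cubic up-scaled input image
# k1 = 64 filters of size ch x m1 x m1 with a ReLU, i.e. f1(Y1) = max(0, We1*Y1 + b1)
f1 = Conv2D(filters=64, kernel_size=9, padding='same',
            activation='relu', name='patch_extraction')(Y1)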

4.2 Non-Linear Mapping

We now have a k1-dimensional feature for each patch. We map these k1-dimensional vectors to
k2-dimensional ones with filters of spatial size 1 x 1 by applying the operation below:

f2(Y1) = max(0, We2 * f1(Y1) + b2),

where We2 contains k2 filters of size k1 x m2 x m2 (here m2 = 1) and b2 is a k2-dimensional vector.
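
Because the filters are 1 x 1, this mapping is simply a per-position matrix multiplication over the channel dimension followed by a ReLU. The small NumPy illustration below makes that explicit; it is not code from the report, and all sizes are made up.

import numpy as np

k1, k2, H, W = 64, 32, 8, 8              # illustrative sizes only
f1_out = np.random.rand(H, W, k1)        # k1-dimensional feature at each position
We2 = np.random.rand(k1, k2)             # k2 filters of size k1 x 1 x 1, flattened to a matrix
b2 = np.zeros(k2)

f2_out = np.maximum(0.0, f1_out @ We2 + b2)   # f2(Y1), shape (H, W, k2)
print(f2_out.shape)                           # (8, 8, 32)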

4.3 Reconstruction

By applying the operation below we construct the final image:

f(Y1) = We3 * f2(Y1) + b3,

where We3 contains ch filters of size k2 x m3 x m3 and b3 is a ch-dimensional vector.

Learning:

To learn the end-to-end mapping f, we need to estimate the network parameters
{We1, We2, We3, b1, b2, b3}. This is accomplished by minimizing the loss between the
reconstructed image and the corresponding ground-truth image over a set of training images.
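
Pulling the three operations of Sections 4.1-4.3 together, a minimal self-contained Keras training sketch is given below. The filter counts and sizes (64, 32; 9 x 9, 1 x 1, 5 x 5), the optimizer settings and the mean-squared-error loss are typical illustrative choices rather than values taken from this report, and y_lr_up / x_hr stand for NumPy arrays of bi-cubic up-scaled inputs and ground-truth images that are assumed to have been prepared separately.

from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

ch = 1
Y1 = Input(shape=(None, None, ch))
f1 = Conv2D(64, 9, padding='same', activation='relu')(Y1)   # patch extraction and representation
f2 = Conv2D(32, 1, padding='same', activation='relu')(f1)   # non-linear mapping (1 x 1 filters)
f3 = Conv2D(ch, 5, padding='same')(f2)                      # reconstruction

model = Model(Y1, f3)
# Learn {We1, We2, We3, b1, b2, b3} by minimizing the loss between the
# reconstructed image and the ground truth over the training set.
model.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')
# model.fit(y_lr_up, x_hr, batch_size=16, epochs=100)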
