
Lab no#01

Introduction to Digital Image Processing Using MATLAB


Lab Objectives
The objective of this lab is to understand
1. How to read an image in Matlab
2. How to show an Image in Matlab
3. How to access Image Pixels in Matlab
4. How to write Image in Matlab
5. Mirror Image generation.
6. Flipped Image generation

Reading an Image
To import an image from any supported graphics image file format, in any of the supported
bit depths, use the imread function.
Syntax
A = imread(filename,fmt)
Description
A = imread(filename, fmt) reads a greyscale or color image from the file specified by the
string filename, where the string fmt specifies the format of the file. If the file is not in the
current directory or in a directory in the MATLAB path, specify the full pathname of the
location on your system.

Display An Image
To display an image, use the imshow function.
Syntax
imshow(A)
Description
imshow(A) displays the image stored in array A.

Writing Image Data


Imwrite
Write image to graphics file
Syntax
imwrite(A,filename,fmt)
Example:
a=imread('pout.tif');
imwrite(a,gray(256),'b.bmp');
imshow('b.bmp') % imshow is used to display the image
Conversion of images
As we know, images are of 4 types:
1. Gray image
2. Binary image
3. Color image
4. Indexed image
In MATLAB an image can be converted from one form to another. The following table shows
the different image conversions in MATLAB:

Function    Use                     Format
ind2gray    Indexed to greyscale    y=ind2gray(x,map);
gray2ind    Greyscale to indexed    [y,map]=gray2ind(x);
rgb2gray    RGB to greyscale        y=rgb2gray(x);
gray2rgb    Greyscale to RGB        y=gray2rgb(x);
rgb2ind     RGB to indexed          [y,map]=rgb2ind(x,n);
ind2rgb     Indexed to RGB          y=ind2rgb(x,map);

Order of matrix:
An image is basically an array whose order is given by its rows and columns. The command
used to find the order of an image is size.
Syntax:
[r,c]=size(a)
r represents the number of rows and c represents the number of columns.
Accessing the Pixel data
There is a one-to-one correspondence between pixel coordinates and the coordinates MATLAB® uses
for matrix subscripting. This correspondence makes the relationship between an image’s data matrix
and the way the image is displayed easy to understand. For example, the data for the pixel in the fifth
row, second column is stored in the matrix element (5,2). You use normal MATLAB matrix
subscripting to access values of individual pixels. For example, the MATLAB code
A(2,15)
returns the value of the pixel at row 2, column 15 of the image A.
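A minimal sketch tying these commands together (pout.tif is one of the toolbox demo images;
the pixel coordinates are arbitrary):

a = imread('pout.tif');   % read a greyscale image
imshow(a)                 % display it
[r,c] = size(a);          % order (rows and columns) of the image matrix
p = a(2,15);              % value of the pixel at row 2, column 15
a(2,15) = 255;            % set that pixel to white
imwrite(a,'out.bmp');     % write the modified image to a file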

TASK 1
Write a MATLAB code that reads a gray scale image and generates the flipped image of original
image. Your output should be like the one given below
TASK 2
Write a MATLAB code that will do the following
1. Read any gray scale image.
2. Display that image.
3. Again display the image such that pixels having intensity values below 50 display as
black and pixels having intensity values above 150 display as white, while the pixels
between these display as they are.
Lab no#02
Reconstruction of Images using Interpolation
Lab Objectives
The objective of this lab is to understand
1. Changing the gray level effect on the quality of image.
2. Changing the spatial resolution on the quality of image.
Changing the Gray level
The quality of a gray level image is affected by its gray level resolution (i.e. increasing the
number of bits per pixel has a great effect in improving the quality of gray level images). A
higher number of gray levels gives a smoother transition along the details of the image and
hence improves its quality to the human eye.
EXAMPLE:
I=imread('cameraman.tif');
s=size(I); % 256x256 (cameraman.tif is already a greyscale image)
% A 256 gray-level image:
[I256,map256]=gray2ind(I,256);
% A 128 gray-level image:
[I128,map128]=gray2ind(I,128);
% A 64 gray-level image:
[I64,map64]=gray2ind(I,64);
figure(1)
subplot(221),subimage(I),title('I'),axis off
subplot(222),subimage(I256,map256),title('I256'),axis off
subplot(223),subimage(I128,map128),title('I128'),axis off
subplot(224),subimage(I64,map64),title('I64'),axis off
pause %press any key to cont.
% A 32 gray-level image:
[I32,map32]=gray2ind(I,32);
% A 16 gray-level image:
[I16,map16]=gray2ind(I,16);
% A 8 gray-level image:
[I8,map8]=gray2ind(I,8);
figure(2)
subplot(221),subimage(I),title('I'),axis off
subplot(222),subimage(I32,map32),title('I32'),axis off
subplot(223),subimage(I16,map16),title('I16'),axis off
subplot(224),subimage(I8,map8),title('I8'),axis off
pause
% A 4 gray-level image:
[I4,map4]=gray2ind(I,4);
% A 2 gray-level image:
[I2,map2]=gray2ind(I,2);
figure(3)
subplot(221),subimage(I),title('I'),axis off
subplot(222),subimage(I8,map8),title('I8'),axis off
subplot(223),subimage(I4,map4),title('I4'),axis off
subplot(224),subimage(I2,map2),title('I2'),axis off

Changing the spatial resolution:


Changing the spatial resolution of a digital image, by zooming or shrinking, are the operations
of oversampling and undersampling a digital image, respectively.
Zooming a digital image requires two steps:
- The creation of new pixel locations.
- Then assigning the gray level to that new location.
The assignment of gray levels to new pixel location is a great challenge.
There are three methods for assigning the gray level to a new pixel location (a small sketch of
the first method follows this list):
- Nearest Neighbor Interpolation: each pixel in the zoomed image is assigned the
gray level value of its closest pixel in the original image.

- Bilinear Interpolation: the value of each pixel in the zoomed image is a weighted
average of the gray level value of the pixels in the nearest 2-by-2 neighborhood in the
original image.
𝒗(𝒙,𝒚) = 𝒂𝒙 + 𝒃𝒚 + 𝒄𝒙𝒚 + 𝒅

- Bicubic Interpolation: the intensity value assigned to point (x,y) is obtained by the
following equation:

v(x,y) = ∑ᵢ₌₀³ ∑ⱼ₌₀³ aᵢⱼ xⁱ yʲ

The sixteen coefficients are determined by using the sixteen nearest neighbors.
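As mentioned above, a minimal sketch of nearest neighbour zooming by an integer factor
(the zoom factor k and the test image are assumptions):

I = imread('cameraman.tif');
k = 2;                           % integer zoom factor
rows = ceil((1:size(I,1)*k)/k);  % each new row index maps to its closest original row
cols = ceil((1:size(I,2)*k)/k);  % likewise for columns
J = I(rows,cols);                % nearest neighbour zoom
figure, imshow(J), title('Zoomed by nearest neighbour');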

EXAMPLE:
% Shrinking the image to 1/2
clc;
close all;
clear all;
I = imread('cameraman.tif');
K= imfinfo('cameraman.tif');
if(K.BitDepth ==24)
I=rgb2gray(I);
end
[r,c] = size(I);
I2(1:r/2, 1:c/2) = I(1:2:r, 1:2:c);
imshow(I);
figure
imshow(I2);

In the MATLAB there is an inbuilt function for spatial resolution:


b=imresize(A,M,method) %returns an image that is M times the size of A.
METHOD can be a string naming a general interpolation method:
▪ 'nearest' - nearest-neighbor interpolation
▪ 'bilinear' - bilinear interpolation
▪ 'bicubic' - cubic interpolation; the default method

EXAMPLE:
clc;
close all;
clear all;
i = imread('cameraman.tif');
h=imresize(i,0.5,'bicubic'); % half the original size
subplot(121),subimage(i)
subplot(122),subimage(h)

The code below shows how spatial resolution affects the quality of the image.
EXAMPLE:
%Spatial_resolution.m
clc
close all
clear all
% Reading the image and converting it to a gray-level image.
I=imread('cameraman.tif');
% Reducing the Size of I using Bicubic interpolation
I128=imresize(I,0.5); imshow(I128),pause
I64=imresize(I,0.25);close,imshow(I64),pause
I32=imresize(I,0.125);close,imshow(I32),pause
I16=imresize(I,0.0625);close,imshow(I16),pause
% Resizing the Reduced Images to the Original Size (256 X 256) and Compare them:
I16=imresize(I16,16);
I32=imresize(I32,8);
I64=imresize(I64,4);
I128=imresize(I128,2);
close,figure
subplot(121),subimage(I),title('I'),axis off
subplot(122),subimage(I128),title('I128'),axis off
pause,close
figure
subplot(221),subimage(I),title('I'),axis off
subplot(222),subimage(I64),title('I64'),axis off
subplot(223),subimage(I32),title('I32'),axis off
subplot(224),subimage(I16),title('I16'),axis off
pause
close all
% Reducing the Size of I using bilinear interpolation
I128_b=imresize(I,0.5,'bilinear');imshow(I128_b),pause;
I64_b=imresize(I,0.25,'bilinear');close,imshow(I64_b),pause
I32_b=imresize(I,0.125,'bilinear');close,imshow(I32_b),pause
I16_b=imresize(I,0.0625,'bilinear');close,imshow(I16_b),pause
% Resizing the Reduced Images to the Original Size (256 X 256) and Compare them:
I128_b=imresize(I128_b,2,'bilinear');
I64_b=imresize(I64_b,4,'bilinear');
I32_b=imresize(I32_b,8,'bilinear');
I16_b=imresize(I16_b,16,'bilinear');
close,figure
subplot(121),subimage(I),title('I'),axis off
subplot(122),subimage(I128_b),title('I128_b'),axis off
pause,close
figure
subplot(221),subimage(I),title('I'),axis off
subplot(222),subimage(I64_b),axis off,title('I64_b'),
subplot(223),subimage(I32_b),axis off,title('I32_b'),
subplot(224),subimage(I16_b),axis off,title('I16_b'),

Task 1
Reducing the Number of Gray Levels in an Image
Write a computer program capable of reducing the number of gray levels in an image from 256 to 2,
in integer powers of 2. The desired number of gray levels needs to be a variable input to your
program.

Task 2
Zooming and Shrinking Images by Nearest Neighbour
Write a computer program capable of zooming and shrinking an image by nearest neighbour
algorithm. Assume that the desired zoom/shrink factors are integers. You may ignore aliasing effects.

Task 3
Zooming and Shrinking Images by Bilinear Interpolation
Write a computer program capable of zooming and shrinking an image by bilinear interpolation. The
input to your program is the desired size of the resulting image in the horizontal and vertical direction.
You may ignore aliasing effects.
Lab no#03
To write and execute programs for image arithmetic & logical
operations
Lab Objectives
The objective of this lab is to understand

1. To write and execute programs for image arithmetic operations


2. To write and execute programs for image logical operations

Arithmetic operation:
Standard arithmetic operations can also be applied to images, to enhance or suppress
information in the image, to detect the differences between two or more images of the same
scene, etc.
Adding/Subtracting Images
These operations act by applying a simple function y=f(x) to each gray value in the image;
thus f(x) is a function which maps the range 0…255 onto itself. Simple functions include
adding or subtracting a constant value to each pixel:
y = x ± c
If there are two images I1 and I2 then the addition of the images can be given by:
I(x,y) = I1(x,y) + I2(x,y)
where I(x,y) is the resultant image due to the addition of the two images, and x and y are the
coordinates of the image. Image addition is performed pixel by pixel. The value of a pixel
should not cross the maximum allowed value, which is 255 for an 8 bit gray scale image.
When it exceeds 255, it should be clipped to 255. To increase the overall brightness of the
image, we can add some constant value depending on the brightness required.
In the example program we will add the value 50 to the image and compare the brightness of
the original and modified images. To decrease the brightness of an image, we can subtract a
constant value. In the example program, the value 100 is subtracted from every pixel. Care
should be taken that the pixel value does not fall below 0 (negative value). The subtract
operation can also be used to obtain the complement (negative) image, in which every pixel
is subtracted from the maximum value, i.e. 255 for an 8 bit greyscale image.
EXAMPLE (Addition and Subtraction of two images):
clear all
close all
clc
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif'})
P=imread(strcat(pathname,namefile));
K=imfinfo(strcat(pathname,namefile));
if(K.BitDepth ==24)
P=rgb2gray(P);
end
subplot(221)
imshow(P)
title('orignal image P')
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif'})
Q=imread(strcat(pathname,namefile));
K=imfinfo(strcat(pathname,namefile));
if(K.BitDepth ==24)
Q=rgb2gray(Q);
end
subplot(222)
imshow(Q)
title('orignal image Q')

% for addition

R=imadd(P,Q);
subplot(223)
imshow(R)
title(' R after addition P and Q')

% for subtraction
subplot(224)
S=imabsdiff(R,Q);
imshow(S)
title('taking the difference between R and Q')

EXAMPLE(Brightness of image):
close all
clear all
clc
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif'})
I1=imread(strcat(pathname,namefile));
K=imfinfo(strcat(pathname,namefile));
if(K.BitDepth ==24)
I1=rgb2gray(I1); % convert the selected image itself, not an undefined variable
end
subplot(221)
imshow(I1);
title('Original imageI1');
I=I1+50;
subplot(223);
imshow(I);
title('Bright Image I');
I=I1-100;
subplot(224);
imshow(I);
title('Dark Image I');

Multiplication/division of images:
The multiplication operation can be used to mask the image for obtaining a region of interest.
A black and white mask (binary image) containing 1s and 0s is multiplied with the image to
get the resultant image. To obtain a binary image the function im2bw() can be used. The
multiplication operation can also be used to increase the brightness of the image.
The division operation results in floating point numbers, so a floating point data type is
required. The result can be converted into an integer data type while storing the image.
Division can be used to decrease the brightness of the image.

EXAMPLE(for mask):

I1 = imread('cameraman.tif');
I2 = imread('rice.png');

figure;
subplot(2, 2, 1);
imshow(I1);
title('Original image I1');
subplot(2, 2, 2);
imshow(I2);
title('Original image I2');
I=I1*2;
subplot(2, 2, 3);
imshow(I);
title('Bright Image I');
I=I1/3;
subplot(2, 2, 4);
imshow(I);
title('Dark Image I');
%for mask
M=imread('rice.png');
M=im2bw(M); % Converts into binary image having 0s and 1s
I=uint8(I1).*uint8(M); %Type casting before multiplication
figure; % new figure so the mask results do not overwrite the 2x2 layout above
subplot(211);
imshow(M);
title('converted image I');
subplot(212);
imshow(I);
title('Masked Image I');

EXAMPLE:
close all;
clear all;
clc
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif'})
myimage=imread(strcat(pathname,namefile));
K=imfinfo(strcat(pathname,namefile));
if(K.BitDepth ==24)
myimage=rgb2gray(myimage);
end

[Rows, Cols] = size(myimage);


newimage=zeros(Rows,Cols);
k=1;
while k<5
    for i = 1:Rows
        for j = 1:Cols
            if k==1
                newimage(i,j)=myimage(i,j)-100;
            end
            if k==2
                newimage(i,j)=myimage(i,j)-50;
            end
            if k==3
                newimage(i,j)=myimage(i,j)+50;
            end
            if k==4
                newimage(i,j)=myimage(i,j)+100;
            end
        end
    end
    subplot(2,2,k);
    imshow(newimage,[]);
    k=k+1;
end
Logical operation:
The logical operations are binary in nature and most of the time operate on binary images as
well, but they can be tuned to work with grayscale images. The operators work on the same
principles as discrete gates in digital logic circuit designs.
The basic logical operations commonly used in image processing, and their typical
applications, are given below:

AND/NAND logical operations can be used for the following applications:
- Computing the intersection of two images
- Design of filter masks
- Slicing of gray scale images
OR/NOR logical operations can be used for the following applications:
- Merging of two images
XOR/XNOR operations can be used for the following applications:
- Detecting changes in gray level in the image
- Checking the similarity of two images
The NOT operation is used for:
- Obtaining the negative of an image
- Making some features clear

EXAMPLE:
close all
clc
myimageA=imread('C:\Program Files\MATLAB\MATLAB Production
Server\R2015a\toolbox\images\imdata\circle.png');
myimageA=rgb2gray(myimageA);
myimageB=imread('C:\Program Files\MATLAB\MATLAB Production
Server\R2015a\toolbox\images\imdata\PENTAGON.png');
myimageB=rgb2gray(myimageB);
subplot(3,2,1)
imshow(myimageA),title('Image A ');
% Display the Original Image B
subplot(3,2,2)
imshow(myimageB),title('Image B');
% Take a complement of Image A and Display it
subplot(3,2,3)
cimageA= ~myimageA ;
imshow(cimageA), title('Complement of Image A');
% Take a Ex-OR of Image A and Image B and Display it
subplot(3,2,4)
xorimage= xor(myimageA,myimageB);
imshow(xorimage), title('Image A XOR Image B');
% Take OR of Image A and Image B and Display it
subplot(3,2,5)
orimage= myimageA | myimageB;
imshow(orimage), title('Image A OR Image B ');
% Take AND of Image A and Image B and Display it
subplot(3,2,6)
andimage= myimageA & myimageB;
imshow(andimage), title('Image A AND Image B ');
Task 01
In the above program, use the function imadd() for addition, imsubtract() for subtraction and
immultiply() for multiplication operations. Use the imcomplement() function to get the
complement of an image. Write the program again using these functions in the following space.

Task 02
Write Program to read any image, resize it to 256x256. Apply square mask shown in
following figure so that only middle part of the image is visible.

Task 03
Write your own MATLAB function addbrightness() and use it to increase brightness of given
image.

Task 04
Prepare any two images of size 256x256 in paint. Save it in JPEG format 256 gray levels.
Perform logical NOR, NAND operations between two images. Write program and paste your
results.
Lab no#04
Some Basic Relationship of Pixels
Lab Objectives
The objective of this lab is to understand
1. Basic Relationship of Pixel i.e. Connectivity based on two method:
a. 4-Adjacency
b. 8-Adjacency
Neighbors of a pixel
1. N4(p): 4-neighbors of p.
• Any pixel p(x,y) has two vertical and two horizontal neighbours, given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
• This set of pixels is called the 4-neighbors of p, and is denoted by N4(p).
• Each of them is at a unit distance from p.
2. ND(p): diagonal neighbors of p.
• ND(p): the four diagonal neighbours of p have coordinates:
(x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1)
3. N8(p): 8-neighbors of p.
• N4(p) and ND(p) together are called the 8-neighbors of p, denoted by N8(p).
• N8(p) = N4(p) ∪ ND(p)
• Some of the points in N4, ND and N8 may fall outside the image when p lies on the
border of the image.

F(x-1, y-1) F(x-1, y) F(x-1, y+1)

F(x, y-1) F(x,y) F(x, y+1)

F(x+1, y-1) F(x+1, y) F(x+1, y+1)

Fig : Sub-image of size 3x3 of 8- neighbor

Connectivity
Two pixels are said to be connected if they are adjacent in some sense.
- They are neighbours (N4, ND, N8) and
- Their intensity values (gray levels) are similar.
Adjacency
Two pixels are adjacent if they are neighbours and their intensity levels satisfy some
specified criterion of similarity, i.e. both belong to a set V of allowed values.
e.g. V = {1}
V = {0, 2}
Binary image = {0, 1}
Gray scale image = {0, 1, 2, ……, 255}
- In binary images, 2 pixels are adjacent if they are neighbours and have the same intensity
value, either 0 or 1.
- In gray scale, the image contains more gray level values, in the range 0 to 255.

4-adjacency: Two pixels p and q with the values from set ‘V’ are 4-adjacent if q is in the set
of N4 (p). e.g. V = { 0, 1}

p in RED color q can be any value in GREEN color.

8-adjacency: Two pixels p and q with the values from set ‘V’ are 8-adjacent if q is in the set
of N8 (p). e.g. V = { 1, 2}

p in RED color q can be any value in GREEN color.
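For reference, MATLAB's bwlabel function labels connected components under either
adjacency; a small sketch with a hand-made binary matrix (the tasks below ask you to
implement the labelling yourself):

BW = logical([1 1 0 0; 0 1 0 1; 0 0 0 1; 1 0 1 1]);
L4 = bwlabel(BW,4);   % connected components under 4-adjacency
L8 = bwlabel(BW,8);   % connected components under 8-adjacency
disp(L4), disp(L8)    % compare the two labelings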


Task 1
Write a program to find the sets of connected components in binary image based on 4-
adjacent neighbours.

Task 2
Write a program to find the sets of connected components in a binary image based on 8-
adjacent neighbours.

Note: Binary images can be created in MS Paint or any other tool of your choice.
Lab no#05
Intensity Transformations
Lab Objectives
The objective of this lab is to understand
1. Image enhancement in spatial domain through Gray level Transformation function
2. Linear Transformation
a. Image Negation function
b. Identity function
3. Logarithmic Transformation
4. Power Law Transformation
5. Piece Wise Linear Transformation
BACKGROUND MATERIAL:
Image Enhancement in Spatial Domain - Basic Grey Level Transformations
Image enhancement is a very basic image processing task that enables us to have a better
subjective judgement over images. Image enhancement in the spatial domain (that is,
performing operations directly on pixel values) is the simplest approach. Enhanced
images provide better contrast of the details that images contain. Image enhancement is
applied in every field where images ought to be understood and analysed. For example,
Medical Image Analysis, Analysis of images from satellites, etc.
Image enhancement simply means transforming an image f into an image g using a
transformation T. The values of pixels in images f and g are denoted by r and s, respectively.
As said, the pixel values r and s are related by the expression,
s = T(r)
where T is a transformation that maps a pixel value r into a pixel value s. The results of this
transformation are mapped into the grey scale range as we are dealing here only with grey
scale digital images. So, the results are mapped back into the range [0, L-1], where L = 2^k, k
being the number of bits in the image being considered. So, for instance, for an 8-bit image
the range of pixel values will be [0, 255].
There are three basic types of functions (transformations) that are used frequently in image
enhancement. They are,
• Linear,
• Logarithmic,
• Power-Law.
The transformation map plot shown below depicts various curves that fall into the above
three types of enhancement techniques.
The Identity and Negative curves fall under the category of linear functions. Identity curve
simply indicates that input image is equal to the output image. The Log and Inverse-Log
curves fall under the category of Logarithmic functions and nth root and nth power
transformations fall under the category of Power-Law functions.

Image Negation
The negative of an image with grey levels in the range [0, L-1] is obtained by the negative
transformation shown in figure above, which is given by the expression,
s=L-1-r
This expression results in reversing of the grey level intensities of the image thereby
producing a negative like image. The output of this function can be directly mapped into the
grey scale look-up table consisting values from 0 to L-1.
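A minimal sketch of the negative transformation for an 8-bit image (the built-in
imcomplement function gives the same result):

I = imread('cameraman.tif');   % any 8-bit greyscale image
neg = 255 - I;                 % s = L-1-r with L = 256
figure
subplot(121), imshow(I), title('Original');
subplot(122), imshow(neg), title('Negative');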

Log Transformations
The log transformation curve shown in fig. A, is given by the expression,
s = c log(1 + r)
where c is a constant and it is assumed that r≥0. The shape of the log curve in fig. A tells that
this transformation maps a narrow range of low-level grey scale intensities into a wider range
of output values. And similarly maps the wide range of high-level grey scale intensities into a
narrow range of high level output values. The opposite of this applies for inverse-log
transform. This transform is used to expand values of dark pixels and compress values of
bright pixels.
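A minimal sketch of the log transformation, working in double precision and scaling the
result back for display (the choice c = 1 is an assumption):

I = im2double(imread('cameraman.tif'));  % r in [0,1]
c = 1;
s = c*log(1 + I);       % s = c log(1 + r)
s = s/max(s(:));        % map the result back to [0,1] for display
figure, imshow(s), title('Log transformed');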

Power-Law Transformations
The nth power and nth root curves shown in fig. A can be given by the expression,
s = crγ
This transformation function is also called gamma correction. For various values of γ,
different levels of enhancement can be obtained. If you notice, different display monitors
display images at different intensities and clarity. That means every monitor has built-in
gamma correction with certain gamma ranges, and so a good monitor automatically corrects
all the images displayed on it for the best contrast, to give the user the best experience.

The difference between the log-transformation function and the power-law functions is that
with the power-law function a family of possible transformation curves can be obtained just
by varying γ.

These are the three basic image enhancement functions for grey scale images that can be
applied easily to any type of image for better contrast and highlighting. Using the image
negation formula given above, it is not necessary for the results to be mapped into the grey
scale range [0, L-1]: the output of L-1-r automatically falls in the range [0, L-1]. But for the
log and power-law transformations the resulting values are often quite different, depending
upon control parameters like γ and the logarithmic scale. So, the results of these
transformations should be mapped back to the grey scale range to get a meaningful output
image. For example, the log function s = c log(1 + r) results in values between 0 and 2.41 for
r varying between 0 and 255, keeping c=1. So, the range [0, 2.41] should be mapped to
[0, L-1] to obtain a meaningful image.
Power Law Transform
MATLAB CODE
img=imread('pout.tif'); % renamed from "image" to avoid shadowing the built-in function
figure;
imshow(img);
img_double=im2double(img);
[r,c]=size(img_double);
cc=input('Enter the value for c==>');
ep=input('Enter the value for gamma==>');
imout=zeros(r,c); % preallocate the output image
for i=1:r
    for j=1:c
        imout(i,j)=cc*power(img_double(i,j),ep);
    end
end
figure,imshow(imout);
OUTPUT
Enter the value for c==>1
Enter the value for gamma==>.2  % for gamma values less than 1 you get a brighter image

Enter the value for c==>1
Enter the value for gamma==>5   % for gamma values greater than 1 you get a darker image
TASK 1
Implement negation transform.
TASK 2
Implement Logarithmic transform.
TASK 3
Implement Piece wise linear transform.
Lab no#06
Histogram Processing
Lab Objectives
The objective of this lab is to understand
1. Histogram Calculation
2. Histogram Equalization
3. Histogram Matching
4. Localized Histogram

Introduction:
A histogram is a bar graph which shows the distribution of gray levels in an image.
Mathematically it is represented by:
𝒉(𝒓𝒌)=𝒏𝒌

where rk is the kth gray level and nk is the number of pixels having that gray level.
The histogram of an image tells us whether the image was scanned properly or not.
Histogram equalization is applied to improve the appearance of the image. The histogram
also tells us about the objects in the image: an object in an image has similar gray levels, so
the histogram helps us to select a threshold value for object detection. It can also be used for
image segmentation.

Histogram Equalization

The idea behind histogram equalization is that we try to evenly distribute the occurrence of
pixel intensities so that the entire range of intensities is used more fully. We are trying to give
each pixel intensity equal opportunity; thus, equalization. Especially for images with a wide
range of values with detail clustered around a few intensities, histogram equalization will
improve the contrast in the image.

Histogram Matching

Histogram matching is a process where a time series, image, or higher dimension scalar data
is modified such that its histogram matches that of another (reference) dataset. A common
application of this is to match the images from two sensors with slightly different responses,
or from a sensor whose response changes over time.
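In MATLAB, histogram matching can be sketched with histeq by passing the desired
histogram as a second argument (a minimal sketch; the two demo images are arbitrary
choices):

A = imread('pout.tif');        % image to be modified
B = imread('cameraman.tif');   % reference image
hgram = imhist(B);             % desired (reference) histogram
C = histeq(A,hgram);           % match the histogram of A to that of B
figure
subplot(131), imshow(A), title('Input');
subplot(132), imshow(B), title('Reference');
subplot(133), imshow(C), title('Matched');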

Standard built in function:


- imhist function: computes the histogram of the given image and plots it
- histeq function: computes the histogram and equalizes it
Example (using built in function):
close all;
clear all;
clc
[filename,pathname]=uigetfile({'*.bmp;*.jpg;*.gif','Choose Poorly scanned Image'});
myimage=imread(strcat(pathname,filename));
if(size(myimage,3)==3)
myimage =rgb2gray(myimage);
end
imhist(myimage);
newimage= histeq(myimage);
figure
imshow(myimage);
title('Original image');
figure
imshow(newimage);
title('Histogram equalized image');

Example (without the built in function):

close all
clear all;
clc
[filename,pathname]=uigetfile({'*.bmp;*.jpg;*.gif','Choose Poorly scanned Image'});
myimage=imread(strcat(pathname,filename));
if(size(myimage,3)==3)
myimage =rgb2gray(myimage);
end
subplot(2,2,1);imshow(myimage); title('Original image');
[rows cols]=size(myimage);
myhist=zeros(1,256);
% Calculation of histogram
for i=1:rows
    for j=1:cols
        m=double(myimage(i,j));
        myhist(m+1)=myhist(m+1)+1;
    end
end
subplot(2,2,2);bar(myhist); title('Histogram of original image');
sum=0;
%Cumulative values
for i=0:255
sum=sum+myhist(i+1);
sum_of_hist(i+1)=sum;
end
area=rows*cols;
%Dm=input('Enter no. of gray levels in output image: ');
Dm=256;
for i=1:rows
    for j=1:cols
        n=double(myimage(i,j));
        myimage(i,j)=sum_of_hist(n+1)*(Dm-1)/area; % scale the CDF to [0, Dm-1]
    end
end
%Calculation of histogram for equalised image
myhist=zeros(1,256); % reset the counts before recomputing
for i=1:rows
    for j=1:cols
        m=double(myimage(i,j));
        myhist(m+1)=myhist(m+1)+1;
    end
end
subplot(2,2,3);bar(myhist);title('Equalised Histogram');
subplot(2,2,4);imshow(myimage); title('Image after histogram equalisation');

TASK 1
Plot the histogram of the image lab5.jpg. Although MATLAB has a histogram function
imhist, write your own code to calculate the histogram. Note:
1. Effects on histogram after changing gray levels and spatial resolution.
2. Effects on histogram after intensity transformation.

TASK 2
Write a program to equalize the histogram and repeat the task1.

TASK 3
Write a program to implement Histogram Matching Algorithm in MATLAB.
a. Exchange the histogram of images lab6q1a.tif and lab6q1b.tif
b. Specify the histogram for image lab6q1c.tif and lab6q1d.tif to enhance the viewer
interpretation.

TASK 4
Write a program for Local Histogram Equalization and note
a. Differences between local and global histogram equalization.
Lab no#07
Color Image Processing
Lab Objectives
The objective of this lab is to understand

Colour processing
For human beings, colour provides one of the most important descriptors of the world around us. The
human visual system is particularly attuned to two things: edges, and colour. We have mentioned that
the human visual system is not particularly good at recognizing subtle changes in grey values. In this
section we shall investigate colour briefly, and then some methods of processing colour images.

What is colour?
Colour study consists of

1. the physical properties of light which give rise to colour,

2. the nature of the human eye and the ways in which it detects colour,

3. the nature of the human vision centre in the brain, and the ways in which messages from the eye
are perceived as colour.

Physical aspects of colour


Visible light is part of the electromagnetic spectrum: radiation in which the energy takes the form of
waves of varying wavelength. These range from cosmic rays of very short wavelength, to electric
power, which has very long wavelength. Figure 7.1 illustrates this. The values for the wavelengths of
blue, green and red were set in 1931 by the CIE (Commission Internationale d’Eclairage), an
organization responsible for colour standards.

Perceptual aspects of colour


The human visual system tends to perceive colour as being made up of varying amounts of red, green
and blue. That is, human vision is particularly sensitive to these colours; this is a function of the cone
cells in the retina of the eye. These values are called the primary colours. If we add together any two
primary colours we obtain the secondary colours: magenta (red plus blue), cyan (green plus blue) and
yellow (red plus green).
The amounts of red, green, and blue which make up a given colour can be determined by a colour
matching experiment. In such an experiment, people are asked to match a given colour (a colour
source) with different amounts of the additive primaries red, green and blue. Such an experiment was
performed in 1931 by the CIE, and the results are shown in figure 7.2. Note that for some
wavelengths, various of the red, green or blue values are negative. This is a physical impossibility, but
it can be interpreted by adding the primary beam to the colour source, to maintain a colour match.
To remove negative values from colour information, the CIE introduced the XYZ colour model.
The values of X, Y and Z can be obtained from the corresponding R, G and B values by a linear
transformation:

[X]   [0.431 0.342 0.178] [R]
[Y] = [0.222 0.707 0.071] [G]
[Z]   [0.020 0.130 0.939] [B]

The inverse transformation is easily obtained by inverting the matrix:

[R]   [ 3.063 -1.393 -0.476] [X]
[G] = [-0.969  1.876  0.042] [Y]
[B]   [ 0.068 -0.229  1.069] [Z]

(This inverse matrix is the x2r matrix used in the plotrgb function below.)

The XYZ colour matching functions corresponding to the r, g and b curves of figure 7.2 are shown
in figure 7.3. The matrices given are not fixed; other matrices can be defined according to the
definition of the colour white. Different definitions of white will lead to different transformation
matrices.
function plotxyz()
%This function simply plots the colour matching curves for CIE XYZ (1931),
% obtaining the data from the file ciexyz31.txt
%
wxyz=load('ciexyz31.txt');
w=wxyz(:,1);
x=wxyz(:,2);
y=wxyz(:,3);
z=wxyz(:,4);
figure,plot(w,x,'-k',w,y,'.k',w,z,'--k')
text(600,1.15,'X','Fontsize',12)
text(550,1.1,'Y','Fontsize',12)
text(460,1.8,'Z','Fontsize',12)
Figure 7.4: A function for plotting the XYZ curves

function plotrgb()
%
% This function simply plots the colour matching curves for CIE RGB (1931),
% obtaining the original XYZ data from the file ciexyz31.txt
%
wxyz=load('ciexyz31.txt');
x2r=[3.063 -1.393 -0.476;-0.969 1.876 0.042;0.068 -0.229 1.069];
xyz=wxyz(:,2:4)';
rgb=x2r*xyz;
w=wxyz(:,1);
figure,plot(w,rgb(1,:)','-k',w,rgb(2,:)','.k',w,rgb(3,:)','--k')
text(450,2,'Blue','Fontsize',12)
text(530,1.7,'Green','Fontsize',12)
text(640,1.7,'Red','Fontsize',12)
Figure 7.5: A function for plotting the RGB curves
>> wxyz=load('ciexyz31.txt');
>> xyz=wxyz(:,2:4);
>> xy=xyz./(sum(xyz')'*[1 1 1]);
>> x=xy(:,1)';
>> y=xy(:,2)';
>> figure,plot([x x(1)],[y y(1)]),axis square

Here the matrix xyz consists of the second, third and fourth columns of the data, and plot is a
function which draws a polygon with vertices taken from the x and y vectors. The extra x(1) and
y(1) ensure that the polygon joins up. The result is shown in figure 7.6. The values of x and y
which lie within the horseshoe shape in figure 7.6 represent values which correspond to physically
realizable colours.

Colour models
A colour model is a method for specifying colours in some standard way . It generally consists
of a three dimensional coordinate system and a subspace of that system in which each colour is
represented by a single point. We shall in vestigate three systems.

RGB (Red, Green, Blue) Color model:


Basically, images are divided into three types. A binary image requires 1 bit/pixel and has 1 bit
plane. A gray scale image requires 8 bits/pixel and has 8 bitplanes. A color image has 3 planes
(red, green and blue); each plane requires 8 bits/pixel, so it has 24 bitplanes.
The RGB color model is based on a Cartesian coordinate system whose axes represent the three
primary colors of light (R, G, and B), usually normalized to the range [0, 1]. The eight vertices of
the resulting cube correspond to the three primary colors of light, the three secondary colors, pure
white and pure black.
HSV Model:
The RGB color model is an additive color model in which red, green, and blue light are added
together in various ways to reproduce a broad array of colors. The name of the model comes from the
initials of the three additive primary colors, red, green, and blue.

Hue, Saturation, Value or HSV is a color model that describes colors (hue or tint) in terms of their
shade (saturation or amount of gray) and their brightness (value or luminance).

1) Hue is expressed as a number from 0 to 360 degrees representing hues of red (starts at 0),
yellow (starts at 60), green (starts at 120), cyan (starts at 180), blue (starts at 240), and
magenta (starts at 300).
2) Saturation is the amount of gray (0% to 100%) in the color.
3) Value (or Brightness) works in conjunction with saturation and describes the brightness or
intensity of the color from 0% to 100%.

rgb2hsv command is used to convert RGB to HSV.


YIQ
The YIQ model is used for American NTSC television broadcasting: Y is the luminance (the
intensity seen by a monochrome display) and I and Q carry the colour information. MATLAB
converts between RGB and YIQ with the rgb2ntsc and ntsc2rgb functions, used below.

Color images in MATLAB


Since a colour image requires three separate items of information for each pixel, a (true) colour image
of size mxn is represented in MATLAB by an array of size mxnx3: a three dimensional array. We can
think of such an array as a single entity consisting of three separate matrices aligned vertically. Figure
7.11 shows a diagram illustrating this idea. Suppose we read in a RGB image:
>> x=imread('lily.tif');
>> size(x)
ans =
186 230 3
We can isolate each colour component by the colon operator:
x(:,:,1) The first, or red component
x(:,:,2) The second, or green component
x(:,:,3) The third, or blue component

These can all be viewed with imshow:


>> imshow(x)
>> figure,imshow(x(:,:,1))
>> figure,imshow(x(:,:,2))
>> figure,imshow(x(:,:,3))

These are all shown in figure 7.12. Notice how the colours with particular hues show up with
high intensities in their respective components. For the rose in the top right, and the flower in the
bottom left, both of which are predominantly red, the red component shows a very high intensity for
these two flowers. The green and blue components show much lower intensities. Similarly the green
leaves, at the top left and bottom right, show up with higher intensity in the green component than in
the other two.
We can convert to YIQ or HSV and view the components again:
>> xh=rgb2hsv(x);
>> imshow(xh(:,:,1))
>> figure,imshow(xh(:,:,2))
>> figure,imshow(xh(:,:,3))

and these are shown in figure 7.13. We can do precisely the same thing for the YIQ colour space:

>> xn=rgb2ntsc(x);
>> imshow(xn(:,:,1))
>> figure,imshow(xn(:,:,2))
>> figure,imshow(xn(:,:,3))

and these are shown in figure 7.14. Notice that the Y component of YIQ gives a better grey scale
version of the image than the Value of HSV. The top right rose is quite washed out in figure 7.13
(Value), but shows better contrast in figure 7.14 (Y). We shall see below how to put three matrices,
obtained by operations on the separate components, back into a single three dimensional array for
display.

Processing of colour images


There are two methods we can use:

1. we can process each R, G, B matrix separately,


2. we can transform the colour space to one in which the intensity is separated from the colour, and
process the intensity component only.
Schemas for these are given in figures 7.18 and 7.19.
We shall consider many different image processing tasks, and apply either of the above schema to
colour images.
Contrast enhancement
This is best done by processing the intensity component. Suppose we start with the image cat.tif,
which is an indexed colour image, and convert it to a truecolour (RGB) image.
>> [x,map]=imread('cat.tif');
>> c=ind2rgb(x,map);

Now we have to convert from RGB to YIQ, so as to be able to isolate the intensity component:
>> cn=rgb2ntsc(c);

Now we apply histogram equalization to the intensity component, and convert back to RGB for
display:

>> cn(:,:,1)=histeq(cn(:,:,1));
>> c2=ntsc2rgb(cn);
>> imshow(c2)

The result is shown in figure 7.20. Whether this is an improvement is debatable, but it has had its
contrast enhanced.
But suppose we try to apply histogram equalization to each of the RGB components:

Figure 7.18: RGB processing Figure 7.19: Intensity processing


Now we have to put them all back in to a single 3-dimensional array for use with imshow. The cat
function is what we want:
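A minimal sketch of these two steps (equalizing each channel, then rejoining them; c is the
truecolour image from above):

>> cr=histeq(c(:,:,1));
>> cg=histeq(c(:,:,2));
>> cb=histeq(c(:,:,3));
>> c3=cat(3,cr,cg,cb);
>> imshow(c3)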

The first variable to cat is the dimension along which we want our arrays to be joined. The result is
shown for comparison in figure 7.20. This is not acceptable, as some strange colours have been
introduced; the cat’s fur has developed a sort of purplish tint, and the grass colour is somewhat
washed out.

Figure 7.20: Histogram equalization of a colour image (intensity processing; using each RGB component)

Spatial filtering
It very much depends on the filter as to which schema we use. For a low pass filter, say a blurring
filter, we can apply the filter to each RGB component:
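One way to sketch this (c is the truecolour cat image, of class double; the 15x15 averaging
filter is an assumption):

>> a=fspecial('average',15);
>> cb=zeros(size(c));
>> for i=1:3, cb(:,:,i)=filter2(a,c(:,:,i)); end
>> imshow(cb)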

and the result is shown in figure 7.21. We could also obtain a similar effect by applying the filter to
the intensity component only. But for a high pass filter, for example an unsharp masking filter, we are
better off working with the intensity component only:
>> cn=rgb2ntsc(c);
>> a=fspecial('unsharp');
>> cn(:,:,1)=filter2(a,cn(:,:,1));
>> cu=ntsc2rgb(cn);
>> imshow(cu)
and the result is shown in figure 7.21. In general, we will obtain reasonable results using the
intensity component only. Although we can sometimes apply a filter to each of the RGB components,
as we did for the blurring example above, we cannot be guaranteed a good result. The problem is that
any filter will change the values of the pixels, and this may introduce unwanted colours.

Figure 7.21: Spatial filtering of a colour image (low pass filtering; high pass filtering)

Noise reduction
We can add noise, and look at the noisy image and its RGB components:

>> tw=imread('twins.tif');
>> tn=imnoise(tw,'salt & pepper');
>> imshow(tn)
>> figure,imshow(tn(:,:,1))
>> figure,imshow(tn(:,:,2))
>> figure,imshow(tn(:,:,3))

These are all shown in figure 7.22. It would appear that we should apply median filtering to each of
the RGB components. This is easily done:
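A minimal sketch (tn is the noisy truecolour image from above; working channel by channel
keeps the uint8 class):

>> trm=tn;
>> for i=1:3, trm(:,:,i)=medfilt2(tn(:,:,i)); end
>> imshow(trm)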

and the result is shown in figure 7.23. We can't in this instance apply the median filter to the intensity
component only, because the conversion from RGB to YIQ spreads the noise across all the YIQ
components. If we remove the noise from Y only:
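A minimal sketch of this attempt (tn is again the noisy image):

>> tnn=rgb2ntsc(tn);
>> tnn(:,:,1)=medfilt2(tnn(:,:,1));
>> ty=ntsc2rgb(tnn);
>> imshow(ty)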
Figure 7.22: Noise on a colour image (salt & pepper noise; the red, green and blue components)

we see, as shown in figure 7.23, that the noise has been slightly diminished, but it is still there.

Figure 7.23: Attempts at denoising a colour image (denoising each RGB component; denoising Y only)

Edge detection
An edge image will be a binary image containing the edges of the input. We can go about obtaining an
edge image in two ways:

1. we can take the intensity component only, and apply the edge function to it,

2. we can apply the edge function to each of the RGB components, and join the results.

To implement the first method, we start with the rgb2gray function:
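A sketch of this (assuming f holds the flowers truecolour image mentioned in the tasks below):

>> f=imread('flowers.tif');
>> fg=rgb2gray(f);
>> fe1=edge(fg);
>> imshow(fe1)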

Recall that edge with no parameters implements Sobel edge detection. The result is shown in figure
7.24. For the second method, we can join the results with the logical “or”:
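A sketch of the second method, applying edge to each component and joining the results with |:

>> f1=edge(f(:,:,1));
>> f2=edge(f(:,:,2));
>> f3=edge(f(:,:,3));
>> fe2=f1 | f2 | f3;
>> imshow(fe2)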
Figure 7.24: The edges of a colour image (fe1: edges after rgb2gray; fe2: edges of each RGB component)

and this is also shown in figure 7.24. The edge image fe2 is a much more complete edge image.
Notice that the rose now has most of its edges, where in image fe1 only a few were shown. Also note
that there are the edges of some leaves in the bottom left of fe2 which are completely missing from
fe1.

Lab task
Q. By hand, determine the saturation and intensity components of the following image, where the
RGB values are as given:

Q. Suppose the intensity component of an HSI image was thresholded to just two values. How
would this affect the appearance of the image?

Q. By hand, perform the conversions between RGB and HSV or YIQ, for the values:

You may need to normalize the RGB values


Q. Check your answers to the conversions in question 3 by using the Matlab functions rgb2hsv,
hsv2rgb, rgb2ntsc and ntsc2rgb.
Q. Threshold the intensity component of a colour image, say flowers.tif, and see if the result agrees
with your guess from question 2 above.
Q. The image spine.tif is an indexed colour image; however the colours are all very close to shades
of grey. Experiment with using imshow on the index matrix of this image, with varying colourmaps
of length 64.
Which colourmap seems to give the best results? Which colourmap seems to give the worst
results?

Q. View the image autumn.tif. Experiment with histogram equalization on:

(a) the intensity component of HSV, (b) the intensity component of YIQ.
Which seems to produce the best result?

Q. Create and view a random “patchwork quilt” with:

What RGB values produce (a) a light brown colour? (b) a dark brown colour? Convert
these brown values to HSV, and plot the hues on a circle.
Q. Using the flowers image, see if you can obtain an edge image from the intensity component
alone, that is as close as possible to the image fe2 in figure 7.24. What parameters to the edge
function did you use? How close to fe2 could you get?

Q. Add Gaussian noise to an RGB colour image x with

View your image, and attempt to remove the noise with


(a) average filtering on each RGB component,
(b) Wiener filtering on each RGB component.
Lab no#08
To write and execute program for geometric transformation of image
Translation, Scaling, Rotation, Shrinking, Zooming
Lab Objectives
The objective of this lab is to understand and implement geometric transformation
1. Translation
2. Scaling
3. Rotation
4. Shrinking
5. Zooming

Geometric Transformation:
The process of changing the spatial location of pixels in an image is called geometric
transformation.
Translation:
Translation is movement of the image to a new position. Mathematically, translation is represented by:

x' = x + tx   and   y' = y + ty

In matrix form (homogeneous coordinates), translation is represented by:

[x']   [1 0 tx] [x]
[y'] = [0 1 ty] [y]
[1 ]   [0 0  1] [1]
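A minimal sketch of translation using the same maketform/imtransform approach as the
program below (the 20 pixel shifts are arbitrary; note that maketform uses the row-vector
convention [x y 1]*T, so the translation terms sit in the last row):

x = imread('cameraman.tif');
tform = maketform('affine',[1 0 0; 0 1 0; 20 20 1]);  % tx = 20, ty = 20
y = imtransform(x,tform,'XData',[1 size(x,2)],'YData',[1 size(x,1)]);
figure, imshow(y), title('Translated by (20,20)');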
Scaling:

Scaling means enlarging or shrinking. Mathematically scaling can be represented by:

x' = x × Sx   and   y' = y × Sy

In matrix form scaling is represented by:

[x']   [Sx 0  0] [x]
[y'] = [0  Sy 0] [y]
[1 ]   [0  0  1] [1]
Rotation:

An image can be rotated by an angle θ. Mathematically:

x' = x·cosθ − y·sinθ   and   y' = x·sinθ + y·cosθ

In matrix form rotation is represented by:

[x']   [cosθ −sinθ 0] [x]
[y'] = [sinθ  cosθ 0] [y]
[1 ]   [0     0    1] [1]

If θ is substituted with −θ, this matrix rotates the image in the clockwise direction.
Shearing:
An image can be distorted (sheared) either in the x direction or the y direction. Shearing in the
x direction can be represented as:

x' = x + shx × y   and   y' = y

In matrix form:

[x']   [1 shx 0] [x]
[y'] = [0  1  0] [y]
[1 ]   [0  0  1] [1]

Shearing in the y direction can be given by:

x' = x   and   y' = y + shy × x

[x']   [1   0 0] [x]
[y'] = [shy 1 0] [y]
[1 ]   [0   0 1] [1]

Zooming:
Zooming of an image can be done by a process called pixel replication, or by interpolation.
Linear interpolation, or some non-linear interpolation such as cubic interpolation, can be
performed for better results.
Program:
clc
close all
filename=input('Enter File Name :','s');
x=imread(filename);
x=rgb2gray(x);
subplot(2,2,1); imshow(x); title('Original Image');
y=imrotate(x,45,'bilinear','crop');
subplot(2,2,2); imshow(y); title('Image rotated by 45 degree');
y=imrotate(x,90,'bilinear','crop');
subplot(2,2,3); imshow(y); title('Image rotated by 90 degree');
y=imrotate(x,-45,'bilinear','crop');
subplot(2,2,4); imshow(y); title('Image rotated by -45 degree');
x = imread('cameraman.tif');
tform = maketform('affine',[1 0 0; .5 1 0; 0 0 1]);
y = imtransform(x,tform);
figure;
subplot(2,2,1); imshow(x); title('Original Image');
subplot(2,2,2); imshow(y); title('Shear in X direction');
tform = maketform('affine',[1 0.5 0; 0 1 0; 0 0 1]);
y = imtransform(x,tform);
subplot(2,2,3); imshow(y); title('Shear in Y direction');
tform = maketform('affine',[1 0.5 0; 0.5 1 0; 0 0 1]);
y = imtransform(x,tform);
subplot(2,2,4); imshow(y); title('Shear in X-Y direction');
TASK
In above program, modify matrix for geometric transformation and use imtransform()
function for modified matrix. Show the results and your conclusions.
Lab no#09
To understand various image noise models and to write programs
for image restoration
(Remove salt and pepper noise, minimize Gaussian noise, median filter and Wiener filter)

Lab Objectives
The objective of this lab is to understand various image noise models and to write programs
for image restoration.

Introduction:
Image restoration is the process of removing or minimizing known degradations in the given
image. No imaging system gives perfect quality of recorded images due to various reasons.

Image restoration is used to improve the quality of an image by various methods which try to
reduce degradations and noise.

Degradation of images can occur due to many reasons. Some of them are as under:
Poor Image sensors
Defects of optical lenses in camera
Non-linearity of the electro-optical sensor;
Graininess of the film material which is utilised to store the image
Relative motion between an object and camera
Wrong focus of camera
Atmospheric turbulence in remote sensing or astronomy
Degradation due to temperature sensitive sensors like CCD
Poor light levels
Degradation of images causes:
Radiometric degradations;
Geometric distortions;
Spatial degradations.
Model of Image Degradation/restoration Process is shown in the following figure.

Spatial domain: g(x,y) = h(x,y) * f(x,y) + η(x,y)

Frequency domain: G(u,v) = H(u,v)F(u,v) + N(u,v)

g(x,y) is the spatial domain representation of the degraded image and G(u,v) is the frequency
domain representation of the degraded image; f is the original image, h is the degradation
function (* denotes convolution) and η is additive noise. Image restoration applies different
restoration filters to reconstruct the image, removing or minimizing the degradations.
Program :
clear all;
close all;
clc
[filename,pathname]=uigetfile({'*.bmp;*.jpg;*.gif','Choose Poorly scanned Image'});
A=imread(strcat(pathname,filename));
if(size(A,3)==3)
A=rgb2gray(A);
end
subplot(2,2,1);
imshow(A);
title('Original image');
% Add salt & pepper noise
B = imnoise(A,'salt & pepper', 0.1);
subplot(2,2,2);
imshow(B);
title('Image with salt & pepper noise');
% Remove Salt & pepper noise by median filters
K = medfilt2(B);
subplot(2,2,3);
imshow(K);
title('Remove salt & pepper noise by median filter' );
% Remove salt & pepper noise by Wiener filter
L = wiener2(B,[10 10]);
subplot(2,2,4);
imshow(L);
title('Remove salt & pepper noise by Wiener filter');

figure;
subplot(2,2,1);
imshow(A);
title('Original image');
% Add gaussian noise
M = imnoise(A,'gaussian',0,0.05);
subplot(2,2,2);
imshow(M);
title('Image with gaussian noise');
% Remove Gaussian noise by Wiener filter
L = wiener2(M,[15 15]);
subplot(2,2,3);
imshow(L);
title('Remove Gaussian noise by Wiener filter');
K = medfilt2(M);
subplot(2,2,4);
imshow(K);
title('Remove Gaussian noise by median filter');
Task 01
Draw conclusion from two figures in this experiment. Which filter is better to remove salt
and pepper noise ?

Task 02
Explain algorithm used in Median filter with example

Task 03
Write mathematical expression for arithmetic mean filter, geometric mean filter, harmonic
mean filter and contra-harmonic mean filter

Task 04
What is the basic idea behind adaptive filters?
Lab no#10
Sharpening Spatial Filtering
Lab Objectives
The objective of this lab is to understand and implement
1. Sharpening spatial filtering
2. The Laplacian
3. Use of Second Derivative for Image Enhancement: The Laplacian
4. Use of First Derivative for Image Enhancement: The Gradient

Sharpening:
• The term sharpening refers to techniques suited for enhancing intensity transitions.
• In images, the borders between objects are perceived because of the intensity change: the
crisper the intensity transitions, the sharper the image.
• The intensity transitions between adjacent pixels are related to the derivatives of the image.
• Hence, operators (possibly expressed as linear filters) able to compute the derivatives of a
digital image are very interesting.
Laplacian:
• Usually the sharpening filters make use of second order operators.
o A second order operator is more sensitive to intensity variations than a first order
operator.
• Besides, partial derivatives have to be considered for images.
o The derivative at a point depends on the direction along which it is computed.
• Operators that are invariant to rotation are called isotropic.
o Rotating and then differentiating (or filtering) has the same effect as differentiating
and then rotating.
• The Laplacian is the simplest isotropic derivative operator (wrt. the principal directions):

∇²f = ∂²f/∂x² + ∂²f/∂y²

Laplacian filter:
• In a digital image, the second derivatives wrt. x and y are computed as:

∂²f/∂x² = f(x+1,y) + f(x−1,y) − 2f(x,y)
∂²f/∂y² = f(x,y+1) + f(x,y−1) − 2f(x,y)

• Hence, the Laplacian results in:

∇²f = f(x+1,y) + f(x−1,y) + f(x,y+1) + f(x,y−1) − 4f(x,y)

• Also, the derivatives along the diagonals can be considered, adding the four diagonal
neighbours to the mask.
First derivative of an image


• Since the image is a discrete function, the traditional definition of derivative cannot be
applied.
• Hence, a suitable operator has to be defined such that it satisfies the main properties of
the first derivative:
o it is equal to zero in the regions where the intensity is constant;
o it is different from zero for an intensity transition;
o it is constant on ramps where the intensity transition is constant.
• The natural derivative operator is the difference between the intensity of neighbouring
pixels (spatial differentiation).
• For simplicity, the one-dimensional case can be considered:

∂f/∂x = f(x+1) − f(x)

• Since ∂f/∂x is defined using the next pixel:
o it cannot be computed for the last pixel of each row (and column);
o it is different from zero in the pixel before a step.
Second derivative of an image
• Similarly, the second derivative operator can be defined as:

∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)

• This operator satisfies the following properties:
o it is equal to zero where the intensity is constant;
o it is different from zero at the beginning of a step (or a ramp) of the intensity;
o it is equal to zero on the constant slope ramps.
• Since ∂²f/∂x² is defined using the previous and the next pixels:
o it cannot be computed with respect to the first and the last pixels of each row (and
column);
o it is different from zero in the pixel that precedes and in the one that follows a step.
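As an illustration before the tasks, a minimal sketch of Laplacian sharpening (the fspecial
mask and the subtraction convention are one possible choice):

I = im2double(imread('cameraman.tif'));
lap = fspecial('laplacian',0);      % 4-neighbour Laplacian mask [0 1 0; 1 -4 1; 0 1 0]
L = imfilter(I,lap,'replicate');    % second derivative of the image
sharp = I - L;                      % subtract, since the mask's centre coefficient is negative
figure
subplot(131), imshow(I), title('Original');
subplot(132), imshow(L,[]), title('Laplacian');
subplot(133), imshow(sharp), title('Sharpened');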
TASK 1
Write a program to implement “The Laplacian” and note the effects on given images.
TASK 2
Use “The Laplacian” to exercise “High Boost Filtering” and write down your observations.

TASK 3
Write a program to implement “Robert Cross Gradient Operator” and observe the changes on
image.

TASK 4
Write a program to implement “Sobel Operators” and observe the changes on image.
Lab no#11
Write and execute programs for image frequency domain
filtering
Lab Objectives
The objective of this lab is to understand and implement image frequency domain filtering

Introduction:
In spatial domain, we perform convolution of filter mask with image data. In frequency
domain we perform multiplication of Fourier transform of image data with filter transfer
function.
Fourier transform of an image f(x,y) of size M×N can be given by:

F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y) e^{-j2π(ux/M + vy/N)}

where u = 0,1,2,…,M-1 and v = 0,1,2,…,N-1.

Inverse Fourier transform is given by:

f(x,y) = (1/MN) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v) e^{j2π(ux/M + vy/N)}

where x = 0,1,2,…,M-1 and y = 0,1,2,…,N-1.


Basic steps for filtering in the frequency domain:
1. Pre-processing: multiply the input image f(x,y) by (−1)^(x+y) to centre the transform.
2. Compute the Discrete Fourier Transform F(u,v) of the input image f(x,y).
3. Multiply F(u,v) by the filter function H(u,v), giving H(u,v)F(u,v).
4. Compute the inverse DFT of the result.
5. Obtain the real part of the result.
6. Post-processing: multiply the result by (−1)^(x+y).

Program:
clc;
close all;
clear all;
% Read the image, resize it to 256 x 256
% Convert it to grey image and display it
[filename,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif','IMAGE Files
(*.bmp,*.tif,*.tiff,*.jpg,*.jpeg,*.gif)'},'Chose Image File');
myimg=imread(cat(2,pathname,filename));
if(size(myimg,3)==3)
myimg=rgb2gray(myimg);
end
myimg = imresize(myimg,[256 256]);
myimg = double(myimg);
subplot(2,2,1);
imshow(myimg,[]),title('Original Image');
[M,N] = size(myimg); % Find size
%Preprocessing of the image
for x=1:M
    for y=1:N
        myimg1(x,y)=myimg(x,y)*((-1)^(x+y));
    end
end
% Find FFT of the image
myfftimage = fft2(myimg1);
subplot(2,2,2);
imshow(myfftimage,[]); title('FFT Image');
% Define cut off frequency
low = 30;
band1 = 20;
band2 = 50;
%Define Filter Mask
mylowpassmask = ones(M,N);
mybandpassmask = ones(M,N);
% Generate values for the filter pass masks
for u = 1:M
    for v = 1:N
        tmp = ((u-(M/2))^2 +(v-(N/2))^2)^0.5;
        if tmp > low
            mylowpassmask(u,v) = 0;
        end
        if tmp > band2 || tmp < band1
            mybandpassmask(u,v) = 0;
        end
    end
end
% Apply the filter H to the FFT of the Image
resimage1 = myfftimage.*mylowpassmask;
resimage3 = myfftimage.*mybandpassmask;
% Apply the Inverse FFT to the filtered image
% Display the low pass filtered image
r1 = abs(ifft2(resimage1));
subplot(2,2,3);
imshow(r1,[]),title('Low Pass filtered image');
% Display the band pass filtered image
r3 = abs(ifft2(resimage3));
subplot(2,2,4);
imshow(r3,[]),title('Band Pass filtered image');
figure;
subplot(2,1,1);imshow(mylowpassmask);
subplot(2,1,2);imshow(mybandpassmask);

Task 01
Instead of the pre-processing step in the above program, use the fftshift function to shift the
FFT to the centre. See the changes in the result and write your conclusion.

%Preprocessing of the image
for x=1:M
    for y=1:N
        myimg1(x,y)=myimg(x,y)*((-1)^(x+y));
    end
end

Remove the above step and use the following commands:
myfftimage = fft2(myimg);
myfftimage = fftshift(myfftimage);

Task 02
Write a routine for high pass filter mask.

Lab no#12
Write a program in MATLAB for edge detection using different
edge detection mask
Lab Objectives
The objective of this lab is to understand edge detection using different edge detection masks.

Introduction:
Image segmentation subdivides an image into its component regions or objects.
Segmentation should stop when the objects of interest in an application have been isolated.
The basic purpose of segmentation is to partition an image into meaningful regions for a
particular application. The segmentation is based on measurements taken from the image,
which might be grey level, colour, texture, depth or motion.

There are basically two types of image segmentation approaches:


1. Discontinuity based: Identification of isolated points, lines or edges
2. Similarity based: Group pixels which have similar characteristics, by thresholding,
region growing, region splitting and merging
Edge detection is a discontinuity based image segmentation approach. Edges play a very
important role in many image processing applications. They provide the outline of an object.
In the physical plane, edges correspond to changes in material properties, intensity variations
and discontinuities in depth. Pixels on the edges are called edge points. Edge detection
techniques basically try to find out grey level transitions.

Edge detection can be done by first order derivative and second order derivative operators.
First order line detection 3x3 masks are (the same masks are used in the menu program below):

Horizontal:   [-1 -1 -1;  2  2  2; -1 -1 -1]
Vertical:     [-1  2 -1; -1  2 -1; -1  2 -1]
+45 degree:   [-1 -1  2; -1  2 -1;  2 -1 -1]
-45 degree:   [ 2 -1 -1; -1  2 -1; -1 -1  2]

Popular edge detection masks (Gx and Gy are the horizontal and vertical gradient masks):

Prewitt:  Gx = [-1 0 1; -1 0 1; -1 0 1]    Gy = [-1 -1 -1; 0 0 0; 1 1 1]
Sobel:    Gx = [-1 0 1; -2 0 2; -1 0 1]    Gy = [-1 -2 -1; 0 0 0; 1 2 1]

o The Sobel operator performs better for images with noise than the Prewitt operator,
because the Sobel operator performs averaging along with edge detection.
o Because the Sobel operator gives a smoothing effect, spurious edges will not be detected
by it.
Second derivative operators are sensitive to the noise present in the image, so they are not
directly used to detect edges, but they can be used to extract secondary information such as:
o whether a point is on the darker or the lighter side, depending on the sign of the result;
o zero crossings, which can be used to identify the exact location of an edge whenever
there is a gradual transition in the image.

MATLAB Code using standard function:


clear all;
[filename,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif','IMAGE Files (*.bmp,*.tif,*.tiff,*.jpg,*.jpeg,*.gif)'},'Choose Image');
A=imread(strcat(pathname,filename));
if(size(A,3)==3)
    A=rgb2gray(A);
end
imshow(A);
figure;
BW = edge(A,'prewitt');
subplot(3,2,1); imshow(BW); title('Edge detection with prewitt mask');
BW = edge(A,'canny');
subplot(3,2,2); imshow(BW); title('Edge detection with canny mask');
BW = edge(A,'sobel');
subplot(3,2,3); imshow(BW); title('Edge detection with sobel mask');
BW = edge(A,'roberts');
subplot(3,2,4); imshow(BW); title('Edge detection with roberts mask');
BW = edge(A,'log');
subplot(3,2,5); imshow(BW); title('Edge detection with log');
BW = edge(A,'zerocross');
subplot(3,2,6); imshow(BW); title('Edge detection with zerocross');

MATLAB Code for edge detection using convolution in spatial domain


clear all;
clc;
while 1
K = menu('Choose mask','Select Image File','Point Detect','Horizontal line detect','Vertical line detect','+45 Detect','-45 Detect','Rectangle Detect','Exit')
M=[-1 0 -1; 0 4 0; -1 0 -1;]; % Default mask
switch K
case 1,
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif','IMAGE Files (*.bmp,*.tif,*.tiff,*.jpg,*.jpeg,*.gif)'},'Choose GrayScale Image');
data=imread(strcat(pathname,namefile));
%data=rgb2gray(data);
imshow(data);
case 2,
M=[-1 -1 -1;-1 8 -1;-1 -1 -1]; % Mask for point detection
case 3,
M=[-1 -1 -1; 2 2 2; -1 -1 -1]; % Mask for horizontal edges
case 4,
M=[-1 2 -1; -1 2 -1; -1 2 -1]; % Mask for vertical edges
case 5,
M=[-1 -1 2; -1 2 -1; 2 -1 -1]; % Mask for +45 degree diagonal lines
case 6,
M=[2 -1 -1;-1 2 -1; -1 -1 2]; % Mask for -45 degree diagonal lines
case 7,
M=[-1 -1 -1;-1 8 -1;-1 -1 -1]; % Mask for rectangle detect
case 8,
break;
otherwise,
msgbox('Select proper mask');
end
outimage=conv2(double(data),double(M));
figure;
imshow(outimage,[]); % scale the display range of the convolution result
end
close all
%Write an image to a file
imwrite(mat2gray(outimage),'outimage.jpg','quality',99);

Task 01

Get the masks for “Prewitt”, “Canny” and “Sobel” from the literature and write a MATLAB program for edge detection using 2D convolution, as sketched below.
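As a starting point, here is a minimal sketch for the Sobel part of this task (the image name is only an example; note that Canny is not a single convolution mask, as it also needs non-maximum suppression and hysteresis thresholding after the gradient computation):

A = double(imread('cameraman.tif')); % any greyscale image will do
Gx = [-1 0 1; -2 0 2; -1 0 1]; % Sobel mask for vertical edges
Gy = [-1 -2 -1; 0 0 0; 1 2 1]; % Sobel mask for horizontal edges
Ex = conv2(A,Gx,'same');
Ey = conv2(A,Gy,'same');
E = sqrt(Ex.^2 + Ey.^2); % gradient magnitude
imshow(E,[]); title('Sobel edges via conv2');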
Lab no#13
Write and execute a program for image morphological operations
Lab Objectives
The objective of this lab is to understand
1. The fundamental morphological operations: erosion and dilation.
2. Opening and closing as combinations of erosion and dilation.
Introduction:
• Morphology is a branch of biology that deals with the form and structure of animals
and plants
• In image processing, we use mathematical morphology to extract image components
which are useful in representation and description of region shape such as …
Boundaries, Skeletons, Convex hull, Thinning, Pruning etc.
• Erosion and dilation are the two fundamental morphological operations.
• Dilation adds pixels to the boundaries of objects in an image, while erosion removes
pixels on object boundaries.
• Dilation operation: The value of the output pixel is the maximum value of all the
pixels in the input pixel’s neighbourhood. In a binary image, if any of the pixels is set
to the value 1, the output pixel is set to 1.
• Erosion operation: The value of the output pixel is the minimum value of all the
pixels in the input pixel’s neighbourhood. In a binary image, if any of the pixels is set
to 0, the output pixel is set to 0.
• Opening and closing operations can be performed by combining erosion and dilation
in different sequences, as the sketch below shows.
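Note that the Image Processing Toolbox also provides imopen and imclose, which perform these combinations directly; a minimal sketch (the image name is only an example):

A = imread('cameraman.tif'); % any greyscale image
B = strel('disk',9);
O = imopen(A,B); % erosion followed by dilation
C = imclose(A,B); % dilation followed by erosion
figure;
subplot(1,2,1); imshow(O); title('Opening');
subplot(1,2,2); imshow(C); title('Closing');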

Program:
clear all;
clc;
B = strel('disk', 9); % default structuring element; set once, outside the loop, so menu choices are not overwritten
%B = strel('disk', 5);
%B=[1 1 1;1 1 1;1 1 1;];
while 1
K = menu('Erosion and dilation demo','Choose Image','Choose 3x3 Mask','Choose 5x5 Mask','Choose Structure Image','Erosion','Dilation','Opening','Closing','EXIT')
switch K
case 1,
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif','IMAGE Files (*.bmp,*.tif,*.tiff,*.jpg,*.jpeg,*.gif)'},'Choose GrayScale Image');
A=imread(strcat(pathname,namefile));
%data=rgb2gray(data);
imshow(A);
case 2,
B=[1 1 1;1 1 1;1 1 1;]; % 3x3 structuring element
case 3,
B=[1 1 1 1 1;1 1 1 1 1;1 1 1 1 1;1 1 1 1 1;1 1 1 1 1;]; % 5x5 structuring element
case 4,
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif','IMAGE Files (*.bmp,*.tif,*.tiff,*.jpg,*.jpeg,*.gif)'},'Choose Structure Image');
B=imread(strcat(pathname,namefile));
%data=rgb2gray(data);
figure;
imshow(B);
case 5,
C=imerode(A,B);
figure;
imshow(C);
case 6,
C=imdilate(A,B);
figure;
imshow(C)
case 7,
C=imdilate(A,B);
D=imerode(C,B);
figure;
imshow(D)
case 8,
C=imerode(A,B);
D=imdilate(C,B);
figure;
imshow(D)
case 9,
break;
otherwise,
msgbox('Select proper mask');
end
end
close all
%Write the last result to a file (guarded, since no result exists until an operation is run)
if exist('C','var')
imwrite(mat2gray(C),'outimage.jpg','quality',99);
end

Task 01

What is cryptography?

Task 02

What is steganography?

Task 03

How does watermarking differ from cryptography and steganography?

Task 04

Execute the above program on a suitable image, cut and paste the resultant images on a separate page and write your conclusion.
Lab no#14
To write and execute a program for the wavelet transform on a given image and perform
the inverse wavelet transform to reconstruct the image.
Lab Objectives
The objective of this lab is to understand
1. Multi-level wavelet decomposition of an image.
2. Reconstruction of the image using the inverse wavelet transform.
Introduction:
The wavelet transform is a relatively recent signal and image processing tool with many
applications. The basis functions of the Fourier transform are sinusoids, while the basis
functions of the wavelet transform are wavelets.
• Wavelets are oscillatory functions that vanish outside a small interval; hence the name
wavelet (a small fraction of a wave).
• Wavelets are building blocks of the signal.
• Wavelets are functions which are well suited to the expansion of real, non-stationary
signals.
• Wavelets can be used to de-correlate the correlations present in real signals such as
speech/audio, video, biomedical and seismic signals.

General block diagram of two-dimensional wavelet transforms:

The following standard MATLAB functions are used in this experiment.


1. wavedec2: Function for multi-level decomposition of 2D data using wavelet
transform.
[C, S] = WAVEDEC2(X,N,'wname') returns the wavelet decomposition of the
matrix X at level N, using the wavelet named in string 'wname' (wavelet name can be
Daubechies wavelet db1, db2, db3 or any other wavelet).

Outputs are the decomposition vector C and the corresponding bookkeeping matrix S.
N is the level of decomposition, which must be a positive integer.

2. appcoef2: This function utilizes the structure developed by the wavelet decomposition
function and constructs the approximation coefficients for 2D data.
A = APPCOEF2(C,S,'wname',N) computes the approximation coefficients at level
N using the wavelet decomposition structure [C,S] which is generated by function
wavedec2.

3. detcoef2: This function utilizes the structure developed by the wavelet decomposition
function and generates the detail coefficients for 2D data.
D = DETCOEF2(O,C,S,N) extracts from the wavelet decomposition structure[C,S],
the horizontal, vertical or diagonal detail coefficients for O = 'h' or 'v' or 'd', for
horizontal, vertical and diagonal detail coefficients respectively.

4. waverec2: Multilevel wavelet 2D reconstruction (inverse wavelet transform).

WAVEREC2 performs a multilevel 2-D wavelet reconstruction using the wavelet named
in string 'wname' (the wavelet name can be Daubechies wavelet db1, db2, db3 or any
other wavelet).

X = WAVEREC2(C,S,'wname') reconstructs the matrix X based on the multi-level
wavelet decomposition structure [C,S] which is generated by wavedec2.

Program:
clc;
close;
[namefile,pathname]=uigetfile({'*.bmp;*.tif;*.tiff;*.jpg;*.jpeg;*.gif','IMAGE Files (*.bmp,*.tif,*.tiff,*.jpg,*.jpeg,*.gif)'},'Choose GrayScale Image');
X=imread(strcat(pathname,namefile));
if(size(X,3)==3)
X=rgb2gray(X);
end
imshow(X);
% Perform wavelet decomposition at level 2.
[c,s] = wavedec2(X,2,'db1');
figure;
imshow(c,[]); title('Wavelet decomposition data generated by wavedec2');
figure;
%Calculate first level approx. and detail components
ca1 = appcoef2(c,s,'db1',1);
subplot(2,2,1);imshow(ca1,[]);title('First level approx');
ch1 = detcoef2('h',c,s,1);
subplot(2,2,2);imshow(ch1,[]);title('First level horizontal detail');
cv1 = detcoef2('v',c,s,1);
subplot(2,2,3);imshow(cv1,[]);title('First level vertical detail');
cd1 = detcoef2('d',c,s,1);
subplot(2,2,4);imshow(cd1,[]);title('First level diagonal detail');
%Calculate second level approx. and detail components
figure;
ca2 = appcoef2(c,s,'db1',2);
subplot(2,2,1);imshow(ca2,[]);title('Second level approx');
ch2 = detcoef2('h',c,s,2);
subplot(2,2,2);imshow(ch2,[]);title('Second level horizontal detail');
cv2 = detcoef2('v',c,s,2);
subplot(2,2,3);imshow(cv2,[]);title('Second level vertical detail');
cd2 = detcoef2('d',c,s,2);
subplot(2,2,4);imshow(cd2,[]);title('Second level diagonal detail');
figure;
a0 = waverec2(c,s,'db1');
imshow(a0,[]);title('Reconstructed image using inverse wavelet transform');

Task 01
Execute the program given in this experiment on a suitable image, cut and paste the resultant
images on a separate page and write your conclusion.

Task 02
What is a wavelet?

Task 03
How is the wavelet transform used to remove noise from an image?
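A common approach is to threshold the detail coefficients and reconstruct the image from the modified coefficients; a minimal sketch using the toolbox helpers ddencmp and wdencmp (default threshold values; the image name is only an example):

X = double(imread('cameraman.tif')); % any greyscale image
[thr,sorh,keepapp] = ddencmp('den','wv',X); % default denoising parameters
Xd = wdencmp('gbl',X,'db1',2,thr,sorh,keepapp); % global thresholding at level 2
figure; imshow(Xd,[]); title('Wavelet denoised image');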

Task 04
Write the analysis low pass and high pass wavelet filter coefficients for the Daubechies-2, 4
and 8 wavelets.
(Hint: use the MATLAB function wfilters() to find the filter coefficients.)
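For example, a minimal sketch for the db2 wavelet:

[LoD,HiD,LoR,HiR] = wfilters('db2'); % analysis (LoD,HiD) and synthesis (LoR,HiR) filters
disp(LoD); % analysis low pass coefficients
disp(HiD); % analysis high pass coefficients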
