
CHAPTER 1

INTRODUCTION
The influence and impact of digital images on modern society is tremendous,
and image processing is now a critical component in science and technology. The rapid
progress in computerized medical image reconstruction, and the associated
developments in analysis methods and computer-aided diagnosis, has propelled medical
imaging into one of the most important sub-fields in scientific imaging. Imaging is an
essential aspect of medical science to visualize the anatomical structures of the human
body. Several new complex medical imaging modalities, such as X-ray, magnetic
resonance imaging (MRI), and ultrasound, strongly depend on computer technology to
generate or display digital images. With computer techniques, multidimensional digital
images of physiological structures can be processed and manipulated to help visualize
hidden diagnostic features that are otherwise difficult or impossible to identify using
planar imaging methods.
Image Segmentation is the process of partitioning a digital image into multiple
regions or sets of pixels which are similar with respect to some characteristic such as
color, texture or intensity. Adjacent regions are significantly different with respect to the
same characteristics. Segmentation produces a set of non-overlapping regions whose
union is the entire image. Segmentation algorithms for images are generally based on the
discontinuity and similarity of image intensity values. The discontinuity approach
partitions an image based on abrupt changes in intensity, while the similarity approach
partitions an image into regions that are similar according to a set of predefined
criteria. Therefore, the choice of image segmentation technique depends on the problem
being considered. In this paper, an attempt is made to segment an image based on
edge detection, to smoothen the image further using various types of low pass filters,
and to analyse their effectiveness.


CHAPTER 2
LITERATURE REVIEW
2.1. EDGE DETECTION TECHNIQUES FOR IMAGE SEGMENTATION
Image segmentation is an essential step in image analysis. Segmentation
separates an image into its component parts or objects. The level to which the separation
is carried depends on the problem being solved. The segmentation should stop once the
objects of interest in an application have been isolated. Segmentation
algorithms for images are generally based on the discontinuity and similarity of image
intensity values. The discontinuity approach partitions an image based on abrupt
changes in intensity, while the similarity approach partitions an image into regions that
are similar according to a set of predefined criteria. Thus the choice of image
segmentation technique depends on the problem being considered. Edge detection is a
part of image segmentation. The effectiveness of many image processing and computer
vision tasks depends on how well meaningful edges are detected. Edge detection is one
of the techniques for detecting intensity discontinuities in a digital image.
The process of identifying and locating sharp discontinuities in an image is called
edge detection. The discontinuities are abrupt changes in pixel intensity
which characterize the boundaries of objects in a scene. Classical methods of edge detection
involve convolving the image with an operator, which is constructed to be sensitive
to large gradients in the image while returning values of zero in uniform regions.
There is a very large number of edge detection techniques available, each technique
designed to be sensitive to certain types of edges. Variables involved in the selection
of an edge detection operator include edge orientation, edge structure and the noise
environment.
2.1.1 General Steps In Edge Detection
Generally, Edge detection contains three steps namely Filtering, Enhancement
and Detection.
i. Filtering: Some major classical edge detectors work fine with high quality pictures,
but often are not good enough for noisy pictures because they cannot distinguish edges
of different significance. Noise is unpredictable contamination on the original image.
There are various kinds of noise, but the two most widely studied are white noise
and salt-and-pepper noise. In salt-and-pepper noise, pixels in the image are very

different in color or intensity from their surrounding pixels; the defining characteristic is
that the value of a noisy pixel bears no relation to the color of surrounding pixels.
Generally this type of noise affects only a small number of image pixels. When
viewed, the image contains dark and white dots, hence the term salt-and-pepper noise. In
Gaussian noise, each pixel in the image is changed from its original value by a
small amount. The term random noise describes an unknown contamination added to an image.
To reduce the influence of noise, Marr suggested filtering the images with a Gaussian
filter before edge detection.
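As a small illustration of these noise models and of the Gaussian pre-filtering suggested by Marr, the sketch below (in Python with NumPy/SciPy, an assumption since the rest of this report works in MATLAB) corrupts a synthetic test image with both kinds of noise and then smooths the Gaussian-noisy version; the test image, noise levels and sigma are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                      # bright square on a dark background

# Gaussian (white) noise: every pixel shifts by a small random amount.
gaussian_noisy = img + rng.normal(0.0, 0.1, img.shape)

# Salt-and-pepper noise: a few pixels jump to the extreme values.
salt_pepper = img.copy()
mask = rng.random(img.shape)
salt_pepper[mask < 0.02] = 0.0               # "pepper" dots
salt_pepper[mask > 0.98] = 1.0               # "salt" dots

# Marr's suggestion: smooth with a Gaussian before detecting edges.
prefiltered = ndimage.gaussian_filter(gaussian_noisy, sigma=1.5)
```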
ii. Enhancement: Digital image enhancement techniques are concerned with improving
the quality of the digital image. The principal objective of enhancement techniques is to
produce an image which is better and more suitable than the original image for a
specific application. Linear filters have been used to solve many image enhancement
problems. Throughout the history of image processing, linear operators have been the
dominating filter class. Not all image sharpening problems can be satisfactorily
addressed through the use of linear filters. There is a need for nonlinear geometric
approaches, and selectivity in image sharpening is the key to their success. A powerful
nonlinear methodology that can successfully address the image sharpening problem is
mathematical morphology.
iii. Detection: Finally, some criterion must be applied to determine which points are edge
points and which are not.
2.1.2 Image Segmentation
Image Segmentation is the process of partitioning a digital image into multiple
regions or sets of pixels. Essentially, the partitions correspond to different objects in the
image which have the same texture or color. The image segmentation results are a set of
regions that together cover the entire image and a set of contours extracted from the
image. All of the pixels in a region are similar with respect to some characteristics such
as color, intensity, or texture. Adjacent regions are considerably different with respect to
the same characteristics. The different approaches are (i) finding boundaries between
regions based on discontinuities in intensity levels, (ii) thresholding based on the
distribution of pixel properties, such as intensity values, and (iii) finding the regions
directly. Thus the choice of image segmentation technique depends on the problem being
considered.
Region based methods are based on continuity. These techniques divide the
entire image into sub regions depending on some rules like all the pixels in one region

must have the same gray level. Region-based techniques rely on common patterns in
intensity values within a cluster of neighboring pixels. The cluster is referred to as the
region, and grouping the regions according to their anatomical or functional roles is
the goal of the image segmentation. Thresholding is the simplest way of segmentation.
Using the thresholding technique, regions can be classified on the basis of a range of
values applied to the intensity values of the image pixels. Thresholding is the
transformation of an input image into a segmented binary output image. Segmentation
methods based on finding the regions directly look for abrupt changes in the intensity
values. These methods are called edge or boundary based methods. Edge detection is a
problem of fundamental importance in image analysis. Edge detection techniques are
generally used for finding discontinuities in gray level images. Detecting significant
discontinuities in the gray level image is the most common approach in edge
detection. Image segmentation methods for detecting discontinuities are boundary based
methods.
2.1.3 Edge Detection Techniques
The edge representation of an image significantly reduces the quantity of data to
be processed, yet it retains essential information regarding the shapes of objects in the
scene. This representation of an image is easy to integrate into a large number of object
recognition algorithms used in computer vision along with other image processing
applications. The major property of an edge detection technique is its ability to extract
the exact edge line with good orientation, and a large literature on edge detection
has become available over the past three decades. On the other hand, there is not yet any
common performance index to judge the performance of edge detection
techniques. The performance of an edge detection technique is always judged
subjectively and separately, depending on its application.
Edge detection is a fundamental tool for image segmentation. Edge detection
methods transform original images into edge images by exploiting the changes of grey
tones in the image. In image processing, especially in computer vision, edge
detection treats the localization of significant variations of a gray level image and the
detection of the physical and geometrical properties of objects in the scene. It is a
fundamental process that detects and outlines objects and the boundaries between objects
and the background in the image. Edge detection is the most familiar approach for
detecting significant discontinuities in intensity values.


There are many edge detection techniques in the literature for image
segmentation. The most commonly used discontinuity based edge detection techniques
are reviewed in this section: Roberts edge detection, Sobel edge detection and Prewitt
edge detection.
2.1.3.1 Roberts edge detection
The Roberts edge detection was introduced by Lawrence Roberts (1965). It
performs a simple, quick to compute, 2-D spatial gradient measurement on an image.
This method emphasizes regions of high spatial frequency, which often correspond to
edges. In its most common usage, the input to the operator is a grayscale image, as is
the output. Pixel values at every point in the output represent the estimated absolute
magnitude of the spatial gradient of the input image at that point.
Gx:   +1   0        Gy:    0  +1
       0  -1             -1   0

Table 2.1 Roberts Edge Detection Kernels
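A minimal sketch of the Roberts cross operator under these kernels, assuming NumPy and SciPy are available: each 2x2 kernel is convolved with the image and the two responses are combined into an estimate of the gradient magnitude.

```python
import numpy as np
from scipy import ndimage

def roberts_magnitude(img):
    """Gradient magnitude estimated with the 2x2 Roberts cross kernels."""
    gx = ndimage.convolve(img.astype(float), np.array([[1.0, 0.0], [0.0, -1.0]]))
    gy = ndimage.convolve(img.astype(float), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    return np.hypot(gx, gy)   # sqrt(gx**2 + gy**2)
```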


2.1.3.2. Sobel edge detection
The Sobel edge detection method was introduced by Sobel in 1970 (Rafael
C. Gonzalez (2004)). The Sobel method of edge detection for image segmentation finds
edges using the Sobel approximation to the derivative. It returns edges at those
points where the gradient is highest. The Sobel technique performs a 2-D spatial
gradient measurement on an image and so highlights regions of high spatial frequency that
correspond to edges. In general it is used to find the estimated absolute gradient
magnitude at each point in an input grayscale image. In theory at least, the operator
consists of a pair of 3x3 convolution kernels, as shown in the table below. One kernel
is simply the other rotated by 90°. This is very similar to the Roberts Cross operator.
Gx:   -1   0  +1        Gy:   -1  -2  -1
      -2   0  +2               0   0   0
      -1   0  +1              +1  +2  +1

Table 2.2 Sobel Edge Detection Kernels
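The Sobel operator can be sketched the same way; the second kernel is the first rotated by 90° (here obtained as a transpose), in line with the description above.

```python
import numpy as np
from scipy import ndimage

SOBEL_GX = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)
SOBEL_GY = SOBEL_GX.T      # the same kernel rotated by 90 degrees

def sobel_magnitude(img):
    """2-D spatial gradient magnitude from the pair of 3x3 Sobel kernels."""
    gx = ndimage.convolve(img.astype(float), SOBEL_GX)
    gy = ndimage.convolve(img.astype(float), SOBEL_GY)
    return np.hypot(gx, gy)
```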


2.1.3.3. Prewitt edge detection


The Prewitt edge detection was proposed by Prewitt in 1970 (Rafael C. Gonzalez (2004)).
The Prewitt operator is a suitable way to estimate the magnitude and orientation of an
edge. While differential gradient edge detection needs a rather time consuming calculation
to estimate the direction from the magnitudes in the x and y directions, compass edge
detection obtains the direction directly from the kernel with the maximum response. It is
limited to 8 possible directions; however, experience shows that most direct direction
estimates are not much more accurate. This gradient based edge detector is estimated in a
3x3 neighborhood for eight directions. All eight convolution masks are calculated, and
the mask with the largest response (module) is then selected.

Gx:   -1   0  +1        Gy:   -1  -1  -1
      -1   0  +1               0   0   0
      -1   0  +1              +1  +1  +1

Table 2.3 Prewitt Edge Detection Kernels
Prewitt detection is slightly simpler to implement computationally than the
Sobel detection, but it tends to produce somewhat noisier results.
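A sketch of the compass variant described above, under the assumption that the eight kernels are generated by rotating the border ring of one Prewitt kernel in 45° steps; the maximum response over the eight kernels gives the edge strength, and the index of the winning kernel gives the direction.

```python
import numpy as np
from scipy import ndimage

BASE = np.array([[-1, -1, -1],
                 [ 0,  0,  0],
                 [ 1,  1,  1]], dtype=float)

# Border positions of a 3x3 kernel, listed clockwise from the top-left.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def compass_kernels():
    """Eight Prewitt compass kernels, one per 45-degree rotation."""
    vals = np.array([BASE[r, c] for r, c in RING])
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(RING, np.roll(vals, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def prewitt_compass(img):
    """Strongest response and its direction index over the 8 kernels."""
    responses = np.stack([ndimage.convolve(img.astype(float), k)
                          for k in compass_kernels()])
    return responses.max(axis=0), responses.argmax(axis=0)
```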
2.1.4 Different Approaches
2.1.4.1 Genetic algorithm approach
Genetic algorithms (GA) are random search algorithms based on the theory of
biological evolution. These algorithms require an initial population of individuals,
which are representatives of possible solutions of the problem being solved. The
population evolves by transformations applied to its individuals while a fitness function
is used to determine the strength of the elements in the population. The elements of the
population are known as chromosomes and are represented by strings of bits. The
fitness function is usually computed based on the values of those bits. An iteration
of the algorithm is basically equivalent to a generation in the evolutionary process.
A genetic algorithm mainly consists of three important operations: Selection,
Crossover and Mutation.
Selection: In fitness-proportional selection, the chromosome with the minimum fitness value
and another randomly chosen chromosome are selected from the parent pool to undergo
crossover and mutation.


Crossover: The crossover recombines two individuals to produce new ones which might be
better.
Mutation: The mutation procedure introduces random changes in the population in
order to steer the algorithm away from local minima that could prevent the discovery of
global solutions to the problem.
The GA starts with a population of abstract candidate solutions. These
solutions are represented by chromosomes (genotypes or genomes). Solutions from
one population are taken and used to form a new population, motivated by the hope
that the new population will be better than the old one. In each
generation, the fitness of each candidate solution is evaluated, multiple candidate
solutions are stochastically selected from the current solutions (based on their fitness),
and modified (recombined and/or mutated and/or subjected to other genetic operations)
to form a new population of candidate solutions. The new population is then used in the
next iteration of the algorithm.
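As a toy illustration of this loop (not the exact scheme described above), the sketch below evolves an 8-bit chromosome encoding a gray-level threshold. The fitness function (between-class variance of the histogram split) and all rates and sizes are assumptions chosen for demonstration, so the GA behaves like an evolutionary version of Otsu's method.

```python
import numpy as np

def ga_threshold(hist, pop_size=20, generations=50, pmut=0.1, seed=0):
    """Toy GA: evolve an 8-bit chromosome encoding a gray-level threshold.

    pop_size must be even for the pairwise crossover below.
    """
    rng = np.random.default_rng(seed)
    levels = np.arange(256)
    total = hist.sum()

    def fitness(t):
        t = int(t)
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            return 0.0
        m0 = (levels[:t] * hist[:t]).sum() / w0
        m1 = (levels[t:] * hist[t:]).sum() / w1
        return (w0 / total) * (w1 / total) * (m0 - m1) ** 2

    pop = rng.integers(0, 256, pop_size)               # initial population
    for _ in range(generations):
        scores = np.array([fitness(t) for t in pop], dtype=float)
        p = scores / scores.sum() if scores.sum() > 0 else None
        parents = rng.choice(pop, size=pop_size, p=p)  # selection
        children = []
        for i in range(0, pop_size, 2):                # single-point crossover
            a, b = int(parents[i]), int(parents[i + 1])
            mask = (0xFF << int(rng.integers(1, 8))) & 0xFF
            children += [(a & mask) | (b & ~mask & 0xFF),
                         (b & mask) | (a & ~mask & 0xFF)]
        pop = np.array(children)
        flip = rng.random(pop_size) < pmut             # mutation: flip one bit
        pop[flip] ^= 1 << rng.integers(0, 8, pop_size)[flip]
    return int(max(pop, key=fitness))
```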
2.1.4.2 Neural network
Neural networks are computer algorithms modeled on the way information is
processed in the nervous system. Neural networks differ from other artificial
intelligence techniques by means of their learning capacity. Digital images
are segmented using neural networks in a two-step process. The first step is pixel
classification, which decides, based on the value of each pixel, whether it is part of a
segment or not. The second step is edge detection, which detects all pixels on the
borders between different homogeneous areas and decides whether each is part of an
edge or not. Several neural networks are available in the literature for edge detection.
The potential base function for digital image processing can be created using
differential operators.
Generally, the neural network consists of three layers, namely an input layer, a
hidden layer and an output layer, as in the figure. Each layer consists of a fixed number
of neurons equal to the number of pixels in the image. The activation function of each
neuron is a multi-sigmoid. The major advantage of this technique is that it does not
require a priori information about the image. The number of objects in the image is
found automatically.
Wavelet neural network (WNN), based on wavelet transform theory, is a
novel, multiresolution, hierarchical artificial neural network, which combines
the good localization characteristics of wavelet transform theory with the adaptive


learning virtue of neural networks. The first module extracts the features and the second
is the WNN classifier, which is used to locate the positions of the edges.

Figure 2.1. Different Layers Of Neural Network


2.1.4.3 Morphology
We can organize images into two sets: (i) binary images and (ii) grey level and color
images. Binary images are two-valued, with pixel values of 0 or 1. In grey level
images, the pixel value may be any integer between 0 and some maximum value. A color
image is an extension of a grey level image where the value at each pixel is represented
by a vector of three elements corresponding to the red, green and blue components of the
color information.
The most basic operations are dilation and erosion which may be defined by
using union and intersection. Dilation increases the object and erosion shrinks the
object. Using the basic operators dilation and erosion, two more operators are defined.
They are Opening and Closing. Opening retains only those parts of the objects that can
fit in the structuring element. Closing fills up small holes and gulfs. Thus they both can
extract fine shape features that are narrower than the structuring element.
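A small sketch of these binary operations using SciPy's morphology routines (a library choice assumed here, since the text does not prescribe one): dilation grows the object, erosion shrinks it, opening removes a speck narrower than the structuring element, and closing fills a small hole.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((40, 40), dtype=bool)
img[10:30, 10:30] = True               # a solid square object
img[18:22, 18:22] = False              # a small hole inside it
img[5, 5] = True                       # an isolated speck

se = np.ones((3, 3), dtype=bool)       # structuring element

dilated = ndimage.binary_dilation(img, structure=se)   # grows the object
eroded  = ndimage.binary_erosion(img, structure=se)    # shrinks the object
opened  = ndimage.binary_opening(img, structure=se)    # removes the speck
closed  = ndimage.binary_closing(img, structure=se)    # fills the hole
```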
2.2 OPTIMIZED CLUSTERING METHOD FOR CT BRAIN
IMAGE SEGMENTATION
Over the last two decades, bio-image analysis and processing have occupied an
important position. Image segmentation is the process of distinguishing the objects and
background in an image. It is an essential preprocessing task for many applications that
depend on computer vision such as medical imaging, locating objects in satellite
images, machine vision, finger print and face recognition, agricultural imaging and
other many applications. The accuracy of image segmentation stage would have a great

impact on the effectiveness of subsequent stages of the image processing. The image
segmentation problem has been studied by many researchers for several years; however,
due to the characteristics of the images such as their different modal histograms, the
problem of image segmentation is still an open research issue and so further
investigation is needed. Image Segmentation is the process of partitioning a digital
image into multiple regions or sets of pixels. Partitions correspond to different objects in
the image which have the same texture or colour. All of the pixels in a region are similar with
respect to some characteristic or computed property, such as colour, intensity, or texture.
Adjacent regions are significantly different with respect to the same characteristics. The
critical step in image interpretation is separation of the image into object and
background.


CHAPTER 3
MORPHOLOGY
Morphological processing is constructed with operations on sets of pixels.
Binary morphology uses only set membership and is indifferent to the value, such as
gray level or color, of a pixel. Morphological image processing relies on the ordering of
pixels in an image and many times is applied to binary and gray scale images. Through
processes such as erosion, dilation, opening and closing, binary images can be modified
to the user's specifications. Binary images are images whose pixels have only two
possible intensity values. They are normally displayed as black and white. Numerically,
the two values are often 0 for black, and either 1 or 255 for white. Binary images are
often produced by thresholding a gray scale or color image, in order to separate an
object in the image from the background. The color of the object (usually white) is
referred to as the foreground color. The rest (usually black) is referred to as the
background color. However, depending on the image which is to be thresholded, this
polarity might be inverted, in which case the object is displayed with 0 and the
background with a non-zero value. Some morphological operators assume a certain
polarity of the binary input image so that if we process an image with inverse polarity
the operator will have the opposite effect. For example, if we apply a closing operator to
a black text on white background, the text will be opened.
3.1. IMAGE PRE-PROCESSING
The procedure for image pre-processing includes the following steps: acquiring
the necessary bio-medical image, specifically an MRI image; converting it to grayscale
if it is a colour image; and type-casting the image to uint8 to have a reference data-type.
The ROI within the brain image is extracted, as a patient case is considered here.
The use of color in image processing is motivated by two principal factors. First,
color is a powerful descriptor that often simplifies object identification and extraction
from a scene. Second, humans can discern thousands of color shades and intensities,
compared to only about two dozen shades of gray. In the RGB model, each color appears in
its primary spectral components of red, green and blue. This model is based on the
Cartesian coordinate system. Images represented in the RGB color model consist of three
component images, one for each primary color. When fed into an RGB monitor, these three

images combine on the phosphor screen to produce a composite color image. The
number of bits used to represent each pixel in RGB space is called the pixel depth.
Consider an RGB image in which each of the red, green and blue images is an 8-bit
image. Under these conditions, each RGB color pixel is said to have a depth of 24 bits.
3.2. IMAGE SEGMENTATION
To identify individual objects in an image, a segmentation operation is
performed. Both the thresholding technique and edge based segmentation methods like
the Sobel and Prewitt edge detector operators are applied.
The threshold value is obtained using the concept of the symmetrical structure of the
brain. It is a well-known fact that the human brain is symmetrical about its central axis,
and throughout this work it has been assumed that the tumor is either on the left or on the
right side of the brain.
If the histograms of the images corresponding to the two halves of the brain are
plotted, a symmetry between the two histograms should be observed due to symmetrical
nature of the brain along its central axis. On the other hand, if any asymmetry is
observed, the presence of the tumor is detected. The differences of the two histograms
are plotted and the peak of the difference is chosen as the threshold point.
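A sketch of this symmetry-based threshold selection, assuming an 8-bit grayscale image and NumPy; the gray level at which the absolute difference of the two half-histograms peaks is taken as the threshold point.

```python
import numpy as np

def symmetry_threshold(gray):
    """Threshold from the histogram difference of the two image halves."""
    rows, cols = gray.shape
    c = cols // 2                                   # floor-divide the columns
    h_left, _ = np.histogram(gray[:, :c], bins=256, range=(0, 256))
    h_right, _ = np.histogram(gray[:, c:], bins=256, range=(0, 256))
    diff = np.abs(h_left - h_right)                 # asymmetry per gray level
    return int(np.argmax(diff))                     # peak of the difference
```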
The proposed method of segmentation is implemented in two phases as follows:
3.2.1 Algorithm for Segmenting Image
1. In this work, a bio-medical image has been considered. Hence a brain image is
taken as the input image.
2. As it is a colour image, it is initially converted into a grayscale image. The
conversion to grayscale is performed as: grayscale image matrix
A(x, y) = (Red component * 0.3) + (Green component * 0.59) + (Blue
component * 0.11).
3. The point of interest is a particular area within the brain image, as it is of the
patient case. For this reason, the ROI was extracted for analysis of the
image. The ROI is evaluated by a statistical method from the pixels by considering
the boundary.
4. The desired threshold value was obtained using the following method:
a. The image pixels were stored as a variable (say I) where I showed the
values of the pixels in a 2D matrix (row - column) form.

b. The numbers of rows and columns were assigned to other variables (say
R and CL).
c. Floor division by 2 was performed on the column value and assigned
another variable name (say C).
d. For the left half of the image, a new matrix was formed using a for loop
with rows from 1:R as the outer loop and columns from 1:C as the inner loop.
e. For the other half of the image, the column range from C+1:CL is
considered as the inner loop.
f. The histogram of the resulting left half and right half image was
calculated.
g. The difference of the two histograms was then calculated and the
resultant difference is plotted using bar graph to select the threshold
point.
5. The image was then binarized using this threshold value by the following
method:
a. A zero matrix of the same size as the original image matrix (say F) was
considered.
b. Each pixel value of the image matrix was compared with the threshold
point.
c. If the value of a pixel is greater than the threshold, that pixel value in the
F matrix was assigned a value of 255; otherwise 0 was assigned.
d. This process was repeated till all the pixel values were compared to
the threshold point.
6. The unwanted boundary is removed.
7. The edge detection was performed initially using the Sobel operator, but the final
smoothed result is obtained with the following morphological operations:
a. Suitable structuring elements (SE) were created. The shape of the
structuring elements may be flat, linear or both. Different
structuring elements were selected for the erosion and dilation
operations. In order to have a basic link between the two operations, a
difference angle of 90° between the dilation angle and the erosion angle is
considered.
b. Dilate the image. Dilation of a gray-scale image A(x, y) by a gray-scale
structuring element B(s, t) can be performed by:

$(A \oplus B)(m, n) = \max_{(j,k) \in B} \{\, a[m-j, n-k] + b[j,k] \,\}$    (3.1)
c. Start by forming an array $X_0$ of zeros of the same size as the array
containing A, except at the locations in $X_0$ corresponding to the given

point in each hole, which is set to one. Then the following procedure fills all the
holes with ones: $X_k = (X_{k-1} \oplus B) \cap A^c$. The algorithm
terminates at iteration step $k$ if $X_k = X_{k-1}$.

d. Erode the image. Erosion of image A(x, y) by a gray-scale structuring
element B(s, t) can be performed by:

$(A \ominus B)(m, n) = \min_{(j,k) \in B} \{\, a[m+j, n+k] - b[j,k] \,\}$    (3.2)

e. Find the edges using the morphological operator for different
structuring elements:

$\mathrm{Edge}(A) = (A \oplus B) - (A \ominus B)$    (3.3)

where A represents the input image.

f. Closing of gray-scale image A(x, y) by gray-scale structuring element B(s, t)
is denoted as follows:

$A \bullet B = (A \oplus B) \ominus B$    (3.4)
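A sketch of the morphological operations in step 7, using SciPy's gray-scale morphology with a flat structuring element (an assumption; the angled line structuring elements described above are not reproduced here):

```python
import numpy as np
from scipy import ndimage

def morphological_edge(gray, size=(3, 3)):
    """Edge(A) = (A dilated) - (A eroded), as in equation 3.3."""
    a = gray.astype(float)
    dilated = ndimage.grey_dilation(a, size=size)   # equation 3.1
    eroded = ndimage.grey_erosion(a, size=size)     # equation 3.2
    return dilated - eroded

def grayscale_closing(gray, size=(3, 3)):
    """Closing: dilation followed by erosion, as in equation 3.4."""
    return ndimage.grey_closing(gray.astype(float), size=size)
```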

3.2.2 Algorithm for Smoothening


1. The image of size M*N was acquired.
2. The padding parameters P and Q were obtained, where P=2*M and Q=2*N.
3. The padded image of size P*Q was formed by appending the necessary number of
zeros to the input image.
4. The DFT of the padded image was computed.
5. The filter with the size P*Q was generated using the mathematical formula/filter
function.
6. The filter was multiplied with the image in the frequency domain.
7. The Inverse FFT of the resulting image was performed.
8. Final smoothed image was obtained by extracting the M*N region.
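The eight steps above can be sketched as follows (NumPy assumed; the fftshift centring is one possible convention). The filter_fn argument stands for any of the transfer functions H(u, v) defined in Chapter 4.

```python
import numpy as np

def smooth_frequency_domain(img, filter_fn):
    """Pad to P x Q = 2M x 2N, filter in the frequency domain, crop back.

    filter_fn(D) maps the distance-to-centre array D(u, v) to the
    transfer function H(u, v) (see the formulas in Chapter 4).
    """
    M, N = img.shape
    P, Q = 2 * M, 2 * N                          # step 2: padding parameters
    padded = np.zeros((P, Q))
    padded[:M, :N] = img                         # step 3: zero-padded image
    F = np.fft.fftshift(np.fft.fft2(padded))     # step 4: centred DFT
    u, v = np.meshgrid(np.arange(P), np.arange(Q), indexing="ij")
    D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)   # equation 4.1
    G = F * filter_fn(D)                         # steps 5-6: apply H(u, v)
    out = np.real(np.fft.ifft2(np.fft.ifftshift(G)))   # step 7: inverse FFT
    return out[:M, :N]                           # step 8: extract M x N region
```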

CHAPTER 4
FILTER
The resulting image needs to be smoothed. In smoothing, the data points of a
signal are modified so that individual points higher than the adjacent points (presumably
because of noise) are reduced, and points that are lower than the adjacent points are
increased, leading to a smoother signal. Smoothing may be used in two important ways
that can aid in data analysis: by being

able to extract more information from the data, as long as the assumption of smoothing
is reasonable, and by being able to provide analyses that are both flexible and
robust. Noise in an image tends to produce peak values which can be
reduced by using a low-pass filter. Different types of low pass filters are used to verify
the experimental result:
1. Butterworth filter
2. Bessel filter
3. Chebyshev filter
4. Gaussian filter

The basic steps for filtering in the frequency domain are shown below:

Input image f(x, y) → Fourier transform → Filter function H(u, v) → Inverse Fourier
transform → Post-processing → Enhanced image g(x, y)

Figure 4.1 Frequency Domain Filtering Operation

The $D_0$ value is fixed initially by taking 25% of the width of the Fourier transform.

Figure 4.2 Frequency Rectangle


Then the distance from a point (u, v) to the centre of the frequency rectangle, D(u, v),
is calculated from the mesh grid frequencies using the following formula:

$D(u, v) = \sqrt{(u - M/2)^2 + (v - N/2)^2}$    (4.1)

where M and N are the numbers of rows and columns of the image.


4.1 BUTTERWORTH FILTER
The Butterworth filter is a type of signal processing filter designed to have as flat
a frequency response as possible in the pass band. It is also referred to as a maximally
flat magnitude filter. The transfer function is:

$H(u, v) = \dfrac{1}{1 + [D(u, v)/D_0]^{2n}}$    (4.2)
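A sketch of this transfer function, suitable for the frequency-domain pipeline of section 3.2.2; the cut-off D0 and order n are illustrative assumptions. The Gaussian filter of section 4.4 would swap in its own formula in the same way.

```python
import numpy as np

def butterworth_lowpass(D, D0, n=2):
    """Butterworth low-pass transfer function of equation 4.2."""
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

# Illustrative use with the pipeline from section 3.2.2, D0 at 25% of
# the frequency-rectangle width (both values are assumptions):
# smoothed = smooth_frequency_domain(
#     img, lambda D: butterworth_lowpass(D, D0=0.25 * img.shape[1]))
```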


Figure 4.3 Output Using Butterworth Filter

4.2. BESSEL FILTER


Bessel's differential equation is defined as:

$x^2 y'' + x y' + (x^2 - n^2)\, y = 0$    (4.3)

where n is a non-negative real number. The solutions of this equation are called
Bessel functions of order n. The Bessel function, for integer values of n, has the integral
representation:

$J_n(x) = \dfrac{1}{2\pi} \int_{-\pi}^{\pi} e^{\,i(x \sin\tau - n\tau)} \, d\tau$    (4.4)

Figure 4.4 Output Using Bessel Filter


4.3. CHEBYSHEV FILTER

Chebyshev filters are filters having a steeper roll-off and more pass band ripple
(type I) or stop band ripple (type II) than Butterworth filters. Chebyshev filters have the
property that they minimize the error between the idealized and the actual filter
characteristic over the range of the filter, but with ripples in the pass band. Its transfer
function is:
$G_n(\omega) = |H_n(j\omega)| = \dfrac{1}{\sqrt{1 + \varepsilon^2\, T_n^2(\omega/\omega_0)}}$    (4.5)

where $\varepsilon$ is the ripple factor, $\omega_0$ is the cut-off frequency and $T_n$
is the nth order Chebyshev polynomial,

$T_n(x) = \cos(n \cos^{-1} x)$    (4.6)
Figure 4.5 Output Using Chebyshev Filter


4.4. GAUSSIAN FILTER
Gaussian filters have the property of having no overshoot to a step function
input while minimizing the rise and fall time. This behaviour is closely connected to the
fact that the Gaussian filter has the minimum possible group delay. The transfer function
of the Gaussian filter is:

$H(u, v) = e^{-D^2(u,v)/2D_0^2}$    (4.7)

Figure 4.6 Output Using Gaussian Filter

CHAPTER 5
QUALITY MEASUREMENT
5.1. MSE and PSNR
During the process of smoothing, the reconstructed image is subject to a wide
variety of distortions. Subjective evaluation emphasizes the visual image quality,
which is inconvenient, time consuming, and complex. Objective image quality
metrics like Peak Signal to Noise Ratio (PSNR) or Mean Squared Error (MSE) are
thought to be the best for image processing applications. The MSE metric is most
widely used because it is simple to calculate, has a clear physical interpretation and is
mathematically convenient. MSE is computed by averaging the squared intensity
difference of the reconstructed image I2 and the original image I1. The PSNR
is then calculated from it. Mathematically,
$\mathrm{MSE} = \dfrac{1}{MN} \sum_{m,n} [\, I_1(m, n) - I_2(m, n) \,]^2$    (5.1)

where M × N is the size of the image, assuming a gray scale image of 8 bits per
pixel (bpp).
PSNR is defined as,
$\mathrm{PSNR} = 10 \log_{10}\!\left(\dfrac{R^2}{\mathrm{MSE}}\right)$    (5.2)

where R is the maximum pixel value of the image.
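Both metrics are straightforward to compute; a minimal sketch for 8-bit images (R = 255):

```python
import numpy as np

def mse(i1, i2):
    """Mean squared error between original I1 and reconstruction I2."""
    diff = i1.astype(float) - i2.astype(float)
    return np.mean(diff ** 2)

def psnr(i1, i2, peak=255.0):
    """Peak signal-to-noise ratio in dB, equation 5.2; R = 255 for 8 bpp."""
    m = mse(i1, i2)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```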

Figure 5.1 Mean Squared Error

Figure 5.2 Peak Signal to Noise Ratio


5.2. SSIM

However, MSE and PSNR are not very well matched to human visual perception. It has
been observed that even for the same PSNR value, the visual quality of two reconstructed
images can differ at different bit rates. This is because the pixels of natural images are
highly structured and exhibit strong dependencies. The SSIM system separates the task of
similarity measurement into three comparisons: luminance, contrast and structure. The
overall similarity measure is defined as:
$S(x, x') = f\big( l(x, x'),\; c(x, x'),\; s(x, x') \big)$    (5.3)

where x is the original image, assumed to be completely perfect, and x' is the
reconstructed image, such that both are non-negative images.
l(x, x') = luminance comparison function, which in turn is a function of the mean
values of x and x'.
c(x, x') = contrast comparison function, a function of the standard deviations of
x and x'.
s(x, x') = structure comparison function.
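In practice the SSIM index is usually computed with a library routine; the usage sketch below relies on scikit-image's implementation (an assumption, since the report does not say how its SSIM values were obtained, and its parameters may differ).

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # stand-in image
reconstructed = np.clip(original.astype(int) + 2, 0, 255).astype(np.uint8)

score = structural_similarity(original, reconstructed, data_range=255)
```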

Sl. No.   Smoothing filter       SSIM
   1      Chebyshev filter       0.9995
   2      Butterworth filter     0.9996
   3      Bessel filter          0.9991
   4      Gaussian filter        0.9997

Table 5.1 Structural Similarity Index Measurement


CHAPTER 6
CONCLUSION
The proposed method can be applied to detect the region of a tumor for
further analysis. This technique can prove to be a handy tool for practitioners,
especially the physicians engaged in this field. Though the number of steps is large,
the method is simple to apply and the result is accurate. It has been implemented on
the MATLAB 8.2 platform. Alternative thresholding and smoothing methods may be
applied to improve the accuracy level, and optimization can be performed as
future work.

