
International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 7, July 2013

ISSN: 2231-2803 http://www.ijcttjournal.org Page 2178



Rotation and Illumination Invariant Texture
Classification for Image Retrieval using Local
Binary Pattern
Harshal S. Patil #1, Sandip S. Patil *2

#1 Research Scholar, *2 Associate Professor
Department of Computer Engineering
SSBT's College of Engineering & Technology, Bambhori, Jalgaon, M.S., India


Abstract - The continuous growth of digital image collections requires new
methods for sorting, browsing, and searching through huge image databases.
Texture classification is very important in image analysis: content-based
image retrieval, examination of surfaces, object detection by texture, and
document segmentation are only some examples where texture classification
plays a major role. This is the domain of Content-Based Image Retrieval
(CBIR) systems, which are database search engines for images. A user
typically submits a query image or series of images, and the CBIR system
tries to find and retrieve the most similar images from the database.
Optimally, the retrieved images should not be sensitive to the circumstances
of their acquisition. Unfortunately, the appearance of natural objects and
materials is highly illumination and viewpoint dependent. The classification
of texture images, especially those with different orientations and
illumination changes, is therefore a challenging and important problem in
image analysis. Here we propose an effective scheme for representing and
retrieving homogeneous images, called textures, under variable illumination
and texture rotation. For rotation and illumination invariant feature
extraction we use the Local Binary Pattern method, and for classification we
use the Support Vector Machine and k-nearest neighbour methods. The
experimental results are based on the Outex dataset with different rotations
and illuminations. Experiments demonstrate that SVM classification yields
better accuracy than KNN classification.

Keywords: LBP, Texture classification, Feature Extraction,
Pattern Recognition, SVM, KNN.

1 INTRODUCTION

Image texture is an important surface
characteristic used to identify and recognize
objects. Texture is difficult to define formally; it
may be described informally as a structure
composed of a large number of more or less
ordered, similar patterns or structures. Textures
provide an idea of the perceived smoothness,
coarseness or regularity of a surface.

Texture has played an increasingly important
role in diverse applications of image processing
such as in computer vision, pattern recognition,
remote sensing, industrial inspection and medical
diagnosis.
Many existing systems ignore variations in
acquisition conditions or handle them in a very
limited way. Recently, it has been demonstrated
that textural features can be successfully used for
image understanding if the variation of acquisition
circumstances is taken into account.
In texture analysis, rotation and illumination
invariance has received great attention. Much
research has been done on rotation and
illumination invariance, and various algorithms,
such as GLCM [1], Gabor filters [2], wavelet
transforms [3] and Markov random fields [4],
have already been proposed [5].
LBP is a relatively new rotation and illumination
invariant texture analysis method which is
theoretically simple but very powerful [6].
Many algorithms for texture classification are
not rotation and illumination invariant. The
effectiveness of a texture classification algorithm
can be increased by using a module for feature
extraction followed by classification. This is
particularly useful for very large images such as
those used in medical image processing,
remote-sensing applications, large content-based
image retrieval systems, forest analysis, the fabric
industry, etc. In this paper, we develop a system
for rotation and illumination invariant texture
classification that uses the Local Binary Pattern
method for feature extraction and two different
methods for classification: the Support Vector
Machine and the k-nearest neighbour. We
determine which classifier gives the better result.
For the experiments we use a standard dataset,
Outex, which contains various classes of texture
images with different rotations and illuminations.

2 PROPOSED METHODOLOGY
2.1 Background and Motivation
Texture classification is the process of
classifying different textures from given images.
Although the classification of textures may seem
meaningless in its own right, it can be applied to a
large variety of real-world problems involving the
specific textures of different objects.
Some real-world applications that involve
textured objects or surfaces include rock
classification, wood species recognition, face
detection, fabric classification and geographical
landscape segmentation. All these applications
allow the target subjects to be viewed as a specific
type of texture, and hence they can be solved
using texture classification techniques.
In the following, we describe all the methods
used in our project: first, the Local Binary Pattern
method, which is rotation and illumination
invariant, used for feature extraction; second, an
introduction to the classification methods, KNN
and SVM; and lastly, an introduction to the Outex
dataset used for our experiments.

2.2 Feature Extraction Method
(Rotation and Illumination Invariant LBP with
Sign-Magnitude)
The original Local Binary Pattern (LBP) was
proposed by Ojala and Pietikäinen in 1999. It is a
statistical method. The original LBP calculates a
value that reflects the relationships within a 3×3
neighbourhood by thresholding the neighbourhood
against the central pixel and multiplying the result
with the respective binomial weights. Since LBP
computes local features, it is often used for texture
segmentation problems, and it has become a very
popular method for texture classification [7,8,9].
Here we present the rotation and illumination
invariant texture feature extraction method, the
Local Binary Pattern (LBP) with the Local
Difference Sign-Magnitude Transform, whose
sign component is the same as CLBP_S.

Fig. 2.1: Central pixel and its P circularly and evenly spaced neighbours
with radius R.
Referring to Fig. 2.1, given a central pixel g_c and
its P circularly and evenly spaced neighbours g_p,
p = 0, 1, ..., P-1, we can simply calculate the
difference between g_c and g_p as d_p = g_p - g_c.
The local difference vector [d_0, ..., d_(P-1)]
characterizes the image local structure at g_c.
Because the central gray level g_c is removed,
[d_0, ..., d_(P-1)] is robust to illumination changes
and more efficient than the original image in
pattern matching. d_p can be further decomposed
into two components:

    d_p = s_p * m_p                                  (1)

where

    s_p = sign(d_p) = { 1,  d_p >= 0
                      { -1, d_p < 0,    m_p = |d_p|  (2)

s_p is the sign of d_p and m_p is the magnitude of
d_p. With Eq. (1), [d_0, ..., d_(P-1)] is transformed
into a sign vector [s_0, ..., s_(P-1)] and a
magnitude vector [m_0, ..., m_(P-1)].

We call Eq. (1) the local difference sign-
magnitude transform (LDSMT).

Fig. 2.2: (a) A 3×3 sample block; (b) the local differences; (c) the sign and
(d) magnitude components.
Obviously, [s_0, ..., s_(P-1)] and [m_0, ..., m_(P-1)]
are complementary, and the original difference
vector [d_0, ..., d_(P-1)] can be perfectly
reconstructed from them. Fig. 2.2 shows an
example. Fig. 2.2(a) is the original 3×3 local
structure with the central pixel being 25. The
difference vector (Fig. 2.2(b)) is
[3, 9, -13, -16, -15, 74, 39, 31]. After LDSMT, the
sign vector (Fig. 2.2(c)) is [1, 1, -1, -1, -1, 1, 1, 1]
and the magnitude vector (Fig. 2.2(d)) is
[3, 9, 13, 16, 15, 74, 39, 31]. It can be seen that
the original LBP uses only the sign vector to code
the local pattern as an 8-bit string "11000111"
(-1 is coded as 0).
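As a sketch, the decomposition of Eq. (1)-(2) and the worked example above can be reproduced with a few lines of NumPy. The clockwise neighbour ordering starting at the top-left is our assumption, chosen so the differences match the example's [3, 9, -13, -16, -15, 74, 39, 31]; the 3×3 block itself is hypothetical.

```python
import numpy as np

def ldsmt(block):
    """Local Difference Sign-Magnitude Transform of a 3x3 block (Eq. (1)-(2)).

    Returns the sign vector [s_0, ..., s_7] and magnitude vector
    [m_0, ..., m_7] for the P = 8 neighbours of the central pixel.
    """
    center = int(block[1, 1])
    # Assumed neighbour ordering: clockwise from the top-left corner.
    neighbours = np.array([block[0, 0], block[0, 1], block[0, 2],
                           block[1, 2], block[2, 2], block[2, 1],
                           block[2, 0], block[1, 0]], dtype=int)
    d = neighbours - center          # local differences d_p = g_p - g_c
    s = np.where(d >= 0, 1, -1)      # sign component s_p
    m = np.abs(d)                    # magnitude component m_p
    return s, m

def lbp_code(s):
    """Original LBP: code the sign vector as a bit string (-1 coded as 0)."""
    return ''.join('1' if v == 1 else '0' for v in s)

# Hypothetical 3x3 block whose differences reproduce the paper's example,
# with the central pixel equal to 25.
block = np.array([[28, 34, 12],
                  [56, 25,  9],
                  [64, 99, 10]])
s, m = ldsmt(block)
print(lbp_code(s))   # -> 11000111
```

Note that the sign vector alone yields the original LBP code, while the magnitudes are discarded; this is exactly the distinction between CLBP_S and CLBP_M discussed below.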
Several observations can be made for CLBP [7].
1. LBP is a special case of CLBP that uses
only CLBP_S.
2. The sign component preserves more image
local structural information than the
magnitude component, which explains why
the simple LBP (i.e. CLBP_S) operator
works much better than CLBP_M for
texture classification.
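To make the feature-extraction step concrete, here is a minimal sketch of a rotation-invariant LBP histogram. This is an illustrative simplification, not the paper's exact CLBP_SH feature: the radius, neighbour ordering and the minimum-over-bit-rotations trick are our assumptions for the sketch.

```python
import numpy as np

def ri_lbp_histogram(img, P=8):
    """Rotation-invariant LBP histogram (radius 1, P = 8 neighbours).

    Rotating the image only cyclically shifts the neighbour bits, so
    taking the minimum code over all bit rotations makes the code
    rotation invariant; thresholding against the centre and normalising
    the histogram add robustness to monotonic illumination changes.
    """
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    # Clockwise 8-neighbour offsets around the central pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(2 ** P)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            bits = [1 if img[i + di, j + dj] >= img[i, j] else 0
                    for di, dj in offsets]
            # Minimum over all cyclic bit rotations -> rotation invariance.
            code = min(int(''.join(map(str, bits[k:] + bits[:k])), 2)
                       for k in range(P))
            hist[code] += 1
    return hist / hist.sum()    # normalised histogram = feature vector
```

For a flat patch every neighbour ties with the centre, so all bits are 1 and the whole mass falls into a single bin; rotating the input image by 90° leaves the histogram unchanged.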

2.3 Classification Methods
Texture classification refers to the process of
grouping test samples of texture into classes,
where each resulting class contains related
samples according to some similarity criterion.
The goal of classification in general is to select the
most appropriate category for an unknown object,
given a set of known categories. While perfect
classification is frequently impossible, the
classification may also be performed by
determining the probability for each of the known
categories.
Three major groups of classifiers are popularly
used: k-Nearest Neighbors, Artificial Neural
Networks (ANN) and Support Vector Machines
(SVM). In our work we used the k-NN and SVM
classification methods, so here we introduce some
concepts of both classifiers.

2.3.1 k-Nearest Neighbors
In the method k-NN (k nearest neighbors
Fix and Hodges, 1951) is a supervised
classification method. The k-nearest neighbor (k-
NN) is an algorithm used in the recognition
of patterns for the classification of objects
(elements) based on training by examples in the
space near the elements. k-NN is a type of "Lazy
Learning", where function approximates only
locally and all computation is delayed to the
classification.


Fig. 2.3: Example of k-NN.


The figure shows an example of classification
by means of k-NN. The point under observation is
the green dot. The two classes are:
I. the red triangles;
II. the blue squares.
If k = 3 (i.e. we consider the three nearest
objects), the green dot is placed in the class of red
triangles because there are 2 triangles and 1
square. If k = 5, it is placed in the class of blue
squares, as there are 3 squares and 2 triangles.
In short, nearest neighbour algorithms are
simple classifiers that select the training samples
with the closest distance to the query sample.
These classifiers calculate the distance from the
query sample to every training sample and select
the neighbour or neighbours with the shortest
distance [12].
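The k-NN rule described above can be sketched as follows, using Euclidean distance and majority voting (the distance metric is our assumption; the toy points stand in for the figure's triangles and squares):

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Classify a query vector by majority vote among the k training
    samples at the smallest Euclidean distance."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    votes = train_labels[nearest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]         # majority class among the votes

# Toy 2-D feature space mimicking the figure: class 0 ("triangles")
# clusters near (1, 1), class 1 ("squares") near (3, 3).
train = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                  [3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
query = np.array([1.1, 1.0])
print(knn_classify(train, labels, query, k=3))   # -> 0
```

Because all computation happens at query time, the "training" step is just storing the feature matrix, which is why k-NN is called a lazy learner.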
2.3.2 SVM
The original SVM algorithm was invented
by Vladimir N. Vapnik, and the current standard
soft-margin formulation was proposed by Corinna
Cortes and Vapnik in 1995. SVM is a supervised
learning classifier and one of the newer trends in
machine learning; it has become popular in many
pattern recognition problems in recent years,
including texture classification. SVM is designed
to maximize the margin between classes, with
decision boundaries drawn using different kernels.
SVM works with two classes by determining the
hyperplane that divides them; this is done by
maximizing the margin from the hyperplane to the
two classes. The samples nearest to the margin
that are selected to determine the hyperplane are
known as support vectors. Multiclass
classification is also possible: a multiclass SVM is
built from multiple two-class SVMs, either
one-versus-all or one-versus-one, and the winning
class is determined by the highest output function
or the maximum number of votes, respectively.
SVM is considered a powerful classifier that has
been replacing ANN and has slowly evolved into
one of the most important mainstream classifiers.
It is now widely used in texture classification
research [13, 14].
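As a rough sketch of the margin-maximization idea (not the paper's actual SVM implementation), a linear soft-margin SVM can be trained by sub-gradient descent on the regularized hinge loss. The toy data, learning rate and regularization constant below are illustrative assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Minimal linear soft-margin SVM trained by sub-gradient descent on
    the regularized hinge loss; labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:    # margin violated: push hyperplane
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                        # margin satisfied: only shrink w
                w -= lr * lam * w
    return w, b

# Illustrative linearly separable toy data (two 2-D clusters).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [4.0, 4.0], [4.0, 5.0], [5.0, 4.0]])
y = np.array([-1, -1, -1, 1, 1, 1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)    # expected to recover y on this toy set
```

A kernel SVM replaces the inner products with kernel evaluations, and a multiclass SVM wraps several such two-class machines in a one-versus-all or one-versus-one scheme, as described above.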

2.4 TEXTURE DATASETS
There are a number of texture databases or
datasets that have been used in texture
classification experiments and are freely available
for comparing texture analysis algorithms, e.g. the
Brodatz texture album, the Outex dataset and the
CUReT texture dataset, which are the most widely
used. For our experimental purpose we used the
Outex database.

2.4.1 Outex Dataset
The Outex dataset includes variations of
illumination spectrum, illumination direction or
both. Some databases also include rotation or
viewpoint variation. The widest variation of
illumination spectra is provided by the Outex
database (Ojala et al., 2002a), which consists of
color texture images acquired under three
different illumination spectra and nine in-plane
rotations. Outex also defines several classification
tests, which differ in recognition conditions.
The collection of surface textures is growing
continuously; at present the database contains 320
surface textures, both macrotextures and
microtextures [15]. Sample images of the Outex
textures are shown in Figure 2.4.


Fig. 2.4: Sample images from the Outex database

2.5 Conceptual view of proposed work
When we study the above methods, we are able
to decide the flow of the proposed system. The
working steps are:
1. Obtain the LBP features of all images in the
Outex training database.
2. Store the LBP features of all Outex images in
the CLBP_SH matrix.
3. Pick a query image from the database of test
images (also drawn from the Outex database).
4. Obtain the LBP features of the query image.
5. Match the LBP features of the query image
with those of the dataset using the SVM and
KNN classifiers.
6. Show the similar images, which are invariant
to rotation and illumination.

Fig. 2.5: Block diagram of the proposed system

The above figure shows the flow of the proposed
image retrieval system. First, we split the Outex
database, using half of the images for testing and
half for training. We pick a query image from the
test images and extract its LBP features. In
parallel, we extract the LBP features of all Outex
training images and store them in the CLBP_SH
matrix. Using that matrix we train the SVM and
classify with the SVM classifier. We also use the
KNN classifier to find the nearest match of the
query image in the CLBP_SH matrix and obtain
the result.
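The matching step can be sketched as a simple nearest-match lookup against the stored feature matrix (playing the role of the CLBP_SH matrix above). The chi-square distance and the toy histograms are our assumptions; the system itself matches with KNN and SVM classifiers:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance, a common choice for comparing LBP histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nearest_match(query_hist, feature_matrix, labels):
    """1-NN match of a query histogram against the stored feature matrix;
    returns the best label and its distance."""
    dists = np.array([chi_square(query_hist, f) for f in feature_matrix])
    best = int(np.argmin(dists))
    return labels[best], dists[best]

# Toy feature matrix: three stored 4-bin histograms with class labels.
feats = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.7, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
names = np.array(["carpet", "canvas", "wood"])
label, dist = nearest_match(np.array([0.6, 0.2, 0.1, 0.1]), feats, names)
print(label)   # -> carpet
```

Because the features are rotation and illumination invariant histograms, a rotated or re-lit view of the same texture lands close to its stored counterpart, which is what makes the retrieval invariant.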

3. RESULTS AND DISCUSSION
We used six texture datasets of the Outex
database to evaluate the strength of the proposed
method: Outex_00000, Outex_00001,
Outex_00002, Outex_00003, Outex_00004 and
Outex_00005. Each dataset has a different number
of texture classes, and within a dataset each class
has the same number of textures. Half of the
textures are used for training and half for testing.
The textures of each column are considered to be
in the same class. Note that different datasets are
created to meet different challenges, e.g.
Outex_TC_00003 for rotation effects,
Outex_TC_00002 for resolution effects, and
Outex_00003 and Outex_00005 for different
illumination effects.

Table 3.1: Description of the Outex test suites used in the experiments

Test Suite ID   Textures   Window Size   Illuminants           Rotations
Outex_00000     24         128×128       Inca                  0°
Outex_00001     24         128×128       Inca                  0°
Outex_00002     24         128×128       Inca                  0°
Outex_00003     24         128×128       Horizon, Inca, TL84   0°, 5°, 10°, 15°, 30°, 45°, 60°, 75°, 90°
Outex_00004     68         128×128       Inca                  0°
Outex_00005     68         128×128       Horizon, Inca, TL84   0°










3.1 Results for Outex_00000

Fig. 3.1: Experimental results for 5 test images

In this figure, we take the query image
000402.ras of category carpet002, and using both
KNN and SVM classification we retrieve the
image 000405.ras of the same category. The time
required for classification using KNN is 1.6459 s
and using SVM is 0.24041 s, so the classification
time for SVM is less than that of KNN. The final
average result for 5 query images is: accuracy of
KNN 88.4583, accuracy of SVM 95.7944, time
for training the SVM 45.4275 s, average SVM
classification time 0.42038 s and average KNN
classification time 2.138 s. From this we conclude
that the accuracy of SVM is greater than the
accuracy of KNN.


Fig. 3.2: Accuracy of KNN and SVM

This graph shows the accuracy of KNN and SVM
for the 5 test images. It clearly shows that the
accuracy of SVM is greater than that of KNN.


Fig. 3.3: Time for KNN and SVM

This graph shows the time required for
classification using KNN and the time required
for training plus classification using SVM. The
total time for KNN on the 5 test images is less
than that for SVM, because the SVM training
time is large and here the SVM must be trained
for each image.

4 CONCLUSIONS AND FUTURE WORK
In this project, an algorithm for rotation and
illumination invariant texture classification for
image retrieval using the Local Binary Pattern is
implemented. The algorithm classifies texture
images using LBP for feature extraction, and the
KNN and SVM classifiers for classification. The
developed texture classification system uses
Outex dataset texture images; the datasets contain
24 or 68 classes of images of size 128×128.
Experimental results show that the accuracy of
SVM classification is greater than that of KNN
classification, but in terms of time SVM requires
more than k-NN. Since accuracy is the prime
factor, we conclude that, even though SVM
requires more time, it is the better classifier. In
future, the efficiency of the above approach may
be improved by using a multiclass SVM.


REFERENCES

[1] R.J. Bhiwani, S.M. Agrawal and M.A. Khan, "Texture Based Pattern
Classification," International Journal of Computer Applications, vol. 1,
no. 1, pp. 60-62, 2010.
[2] J.Y. Tou, Y.H. Tay, and P.Y. Lau, "Gabor Filters as Feature Images for
Covariance Matrix on Texture Classification Problem," ICONIP 2008,
vol. 5507, pp. 745-751, 2009.
[3] L. Semler and J. Furst, "Wavelet-Based Texture Classification of
Tissues in Computed Tomography," Proc. of 18th IEEE Symposium on
Computer-Based Medical Systems, pp. 265-270, 2005.
[4] H. Deng and D.A. Clausi, "Gaussian VZ-MRF Rotation-Invariant
Features for Image Classification," IEEE Trans. on Pattern Analysis and
Machine Intelligence, vol. 26, no. 7, pp. 951-955, 2004.
[5] Jing Yi Tou, Yong Haur Tay and Phooi Yee Lau, "Recent Trends in
Texture Classification: A Review," Symposium on Progress in Information
& Communication Technology, pp. 63-68, 2009.
[6] T. Mäenpää and M. Pietikäinen, "Texture Analysis with Local Binary
Patterns," Handbook of Pattern Recognition and Computer Vision,
pp. 197-216, 2005.
[7] T. Mäenpää, T. Ojala, M. Pietikäinen, and M. Soriano, "Robust Texture
Classification by Subsets of Local Binary Patterns," in Proc. 15th
International Conference on Pattern Recognition, pp. 947-950, 2000.
[8] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Gray Scale and Rotation
Invariant Texture Classification with Local Binary Patterns," Computer
Vision - ECCV 2000, pp. 404-420, 2000.
[9] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution Gray-Scale
and Rotation Invariant Texture Classification with Local Binary Patterns,"
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24,
no. 7, pp. 971-987, 2002.
[10] Zhenhua Guo, Lei Zhang and David Zhang, "A Completed Modeling
of Local Binary Pattern Operator for Texture Classification," IEEE
Transactions on Image Processing, vol. 19, pp. 1657-1663, 2010.
[11] Zhao et al., "Texture Classification Based on Completed Modelling of
Local Binary Pattern," IEEE International Conference on Computational
and Information Science, vol. 2, pp. 268-271, 2011.
[12] Padraig Cunningham and Sarah Jane Delany, "k-Nearest Neighbour
Classifiers," Technical Report UCD-CSI-2007-4, March 27, 2007.
[13] Marti A. Hearst, "Support Vector Machines," IEEE Intelligent
Systems, vol. 13, no. 4, pp. 18-28, July/August 1998.
[14] C. Chen, C. Chen and C. Chen, "A Comparison of Texture Features
Based on SVM and SOM," ICPR, vol. 2, pp. 630-633, 2006.
[15] http://www.outex.oulu.fi/temp/orig.html


About the Authors

Harshal S. Patil received the B.E. degree in
Computer Engineering in 2010 from North
Maharashtra University, Jalgaon (M.S.). She is
presently pursuing the M.E. in Computer Science
and Engineering at North Maharashtra University,
Jalgaon (M.S.). Her areas of interest are Pattern
Recognition and Image Processing.

Sandip S. Patil received the B.E. degree in
Computer Engineering in 2001 from North
Maharashtra University, Jalgaon (M.S.), and the
M.Tech. in Computer Science and Engineering
from Samrat Ashok Technological Institute,
Vidisha, in 2009. He is presently working as
Associate Professor in the Department of
Computer Engineering at S.S.B.T. College of
Engineering and Technology, Bambhori, Jalgaon
(India), and has 12 years of research experience.
His areas of interest are Pattern Recognition,
Machine Learning and Soft Computing. He
received the Promising Engineer Award 2011 and
the Young Engineer Award 2013 of I.E. India.
