Fig. 1: Batik motifs: (a) Parang, (b) Lereng, (c) Dutch, (d) Chinese, (e) Ceplokan, (f) Semen, (g) Lunglungan

2015 7th International Conference on Information Technology and Electrical Engineering (ICITEE), Chiang Mai, Thailand

I. INTRODUCTION

II. THE SIFT ALGORITHM
The Scale Invariant Feature Transform (SIFT) is an approach for extracting image features [6]. The features extracted
using this approach are invariant to image scaling and rotation
and are also partially robust against changes in illumination
and camera angle (in 3-dimensional space) [6]. The extracted
features are usually referred to as SIFT keys.
Due to the robustness of the SIFT keys, this approach has
been widely used in various applications. One such application
is in the detection of particular objects within an image
[6]. Another example is provided by [8], where the authors
proposed an approach to construct facial templates based
on SIFT features extracted from multiple facial images. The
authors in [9] use adapted SIFT features in a face recognition
application. The SIFT feature extraction is adapted so that the
scheme is much more robust against illumination changes. The
authors in [10] incorporates SIFT to extract features from video
frames that are used to detect whether a particular video has
duplicates within a certain set of videos. The ability of the
SIFT approach to detect robust points from an image has also
been used to combat desynchronization (geometric) attacks on
digital watermarking systems. Examples of such approaches
are presented in [11] and [12]. Despite the wide range of
applications of SIFT features, to the best of our knowledge
the use of SIFT features for automatic classification of batik
motifs is novel.
The SIFT approach essentially consists of four steps,
namely [6]:

1) Detection of scale-space extrema: Candidate keypoints are identified as the extrema of the difference-of-Gaussian function convolved with the input image I(x, y),

D(x, y, \sigma) = (G(x, y, k_S\sigma) - G(x, y, \sigma)) * I(x, y)   (1)

where G is the Gaussian

G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/(2\sigma^2)}   (2)

The parameter k_S defines the separation between two adjacent scales while \sigma controls the width of the Gaussian.

2) Localization of keypoints: This step determines the location and scale of each potential point identified in the first step. Keypoints are selected based on their stability.

3) Assignment of orientation: In this step, one (or more) orientations are assigned to each keypoint found in the previous step, based on the gradient magnitude and orientation of the Gaussian-smoothed image L:

m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}   (3)

\theta(x, y) = \tan^{-1}\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right)   (4)

The assigned scale, location and orientation are then used as a basis for all future operations. Specifically, images are first transformed relative to this basis prior to undergoing such operations. This is done to achieve invariance to scaling and rotation.

4) Generation of keypoint descriptors: A descriptor is computed for each keypoint from the local image gradients around it, measured at the keypoint's scale [6].

The rest of the paper is organized as follows. In Section 2 we provide a more detailed discussion of the SIFT algorithm. In Section 3 we discuss the feature moment combinations used to construct the feature vectors. In Section 4 we present our proposed system. In Section 5 we discuss the experimental setup and results. Finally, in Section 6 we provide our conclusions and pointers to our future work.
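Step 1 above can be illustrated with a small NumPy sketch that builds the Gaussian kernel of Eq. (2) and the difference-of-Gaussian of Eq. (1); the kernel size, the random test image and k_S = √2 are arbitrary choices for this example, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Discrete G(x, y, sigma) = 1/(2*pi*sigma^2) * exp(-(x^2+y^2)/(2*sigma^2)),
    normalized so that the samples sum to 1 (a practical discretization choice)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def difference_of_gaussian(image, sigma, k=np.sqrt(2), size=9):
    """D(x, y, sigma): convolve the image with G(x, y, k*sigma) - G(x, y, sigma)."""
    dog = gaussian_kernel(size, k * sigma) - gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # The kernel is symmetric, so correlation equals convolution here.
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * dog)
    return out

rng = np.random.default_rng(0)
image = rng.random((32, 32))          # stand-in for a grayscale batik image
response = difference_of_gaussian(image, sigma=1.6)
```

In a full SIFT implementation this response is computed at several adjacent scales and candidate keypoints are the local extrema across scale and space.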
III. FEATURE MOMENTS

A. Calculation of the feature moments

For a set of N values X = {X_1, X_2, ..., X_N} obtained from the SIFT keys, the following moments are computed:

\bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i   (7)

\sigma^2_X = \frac{\sum_{i=1}^{N} (X_i - \bar{X})^2}{N}   (8)

s_X = \sqrt{\frac{\sum_{i=1}^{N} (X_i - \bar{X})^2}{N - 1}}   (9)

\gamma_{3X} = \frac{E[(X - \bar{X})^3]}{\sigma_X^3}   (10)

\gamma_{4X} = \frac{E[(X - \bar{X})^4]}{\sigma_X^4}   (11)
After performing these calculations, we have 10 different feature moments for any given input
image. These feature moments are then combined to construct
the feature vectors. We choose to use the moments of the SIFT
features rather than the SIFT features directly because the
number of SIFT features generated from the batik cloth images
varies widely, making it very difficult to pick the best features
to use. The feature moments, on the other hand, give us the
general properties of the SIFT features extracted from each
image. In other words, these moments act like a digest or a
hash that represents the properties of the batik motif.
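As a sketch, Eqs. (7)-(11) can be implemented directly; the input list below is illustrative data rather than actual SIFT output, and which per-key quantities the ten moments are computed from is not restated here.

```python
import math

def feature_moments(values):
    """The five moments of Eqs. (7)-(11) for a list of values:
    mean, variance, sample standard deviation, skewness and kurtosis."""
    n = len(values)
    mean = sum(values) / n                                           # Eq. (7)
    var = sum((x - mean) ** 2 for x in values) / n                   # Eq. (8)
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))  # Eq. (9)
    sigma = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in values) / n / sigma ** 3     # Eq. (10)
    kurt = sum((x - mean) ** 4 for x in values) / n / sigma ** 4     # Eq. (11)
    return [mean, var, std, skew, kurt]

moments = feature_moments([1.0, 2.0, 3.0, 4.0, 5.0])
```

Applying this function to two such value sets yields the 10 feature moments used throughout the paper.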
B. Construction of the feature vectors
The feature vectors used in this paper are constructed
by combining the available feature moments into vectors of
varying length. The length of the vectors varies from 1 (using
a single moment) to 10 (using all available moments). For
example, the vector constructed by using 4 different moments
would have a length of 4. An example of such a vector is
V = { , c , c , 4 }.
The number of vectors of a given length \ell, constructed
from the 10 available feature moments, can be computed as
follows:

C_{10}^{\ell} = \frac{10!}{\ell!\,(10 - \ell)!}   (13)

Thus, for example, the number of feature vectors of length
\ell = 4 is C_{10}^{4} = 210.
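The count in Eq. (13) can be checked with a short script; the names m1-m10 are placeholders for the ten feature moments, used here only for illustration.

```python
from itertools import combinations
from math import comb  # comb(n, k) = n! / (k! * (n - k)!)

moments = [f"m{i}" for i in range(1, 11)]  # placeholder names for the 10 moments

# All feature vectors of length 4: there are C(10, 4) of them.
length_4_vectors = list(combinations(moments, 4))

# Total number of feature vectors over all lengths 1..10.
total_vectors = sum(comb(10, length) for length in range(1, 11))
```

The total of 1023 (that is, 2^10 - 1 non-empty subsets) matches the number of feature vectors constructed per image later in the paper.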
IV. PROPOSED SYSTEM
In this section we present the details of our proposed automatic batik motifs classification system, shown in Figure
2. Our proposed system consists of two main steps, namely
the training and testing steps. During the training step, we
construct the feature vectors based on our training images.
During the testing step, we generate feature vectors based on
the testing images. The training- and testing-vectors are then
fed into a k-NN algorithm to classify the batik motifs. The k-NN algorithm is one of the most basic classification algorithms.
We use this algorithm because our main interest in this paper
is to investigate the suitability of SIFT feature moments for
batik motif classification. By using a basic classifier, the
overall system performance will reflect the suitability of these
features for this particular application. Additionally, the k-NN
algorithm is suitable in a multi-class classification scenario.
More advanced algorithms, such as Support Vector
Machines (SVM), are basically 2-class classifiers [7],
and thus in this particular application we would need multiple
SVMs.
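A minimal k-NN classifier of the kind used here can be sketched in a few lines; the Euclidean distance, k = 3 and the toy training vectors are illustrative assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours of query."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Toy 2-D feature vectors with invented class labels.
train = [
    ((0.0, 0.0), "Parang"),
    ((0.0, 1.0), "Parang"),
    ((1.0, 0.0), "Parang"),
    ((5.0, 5.0), "Semen"),
    ((5.0, 6.0), "Semen"),
]
label = knn_classify(train, (0.2, 0.2), k=3)
```

In the actual system the training and testing vectors are the feature-moment vectors, and the labels are the seven batik motif classes.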
The steps to construct the feature vector of length \ell from
an input image I(x, y), following Figure 2, are:

1) Convert the input image to grayscale.
2) Calculate the SIFT keys of the grayscale image.
3) Calculate the feature moments of the SIFT keys.
4) Construct the feature vector of length \ell from the feature moments.
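The four steps can be strung together as a sketch. The grayscale weights, the stubbed SIFT-key extractor (a real system would use an actual SIFT implementation) and the assumption that the two moment sets come from the keys' scales and orientations are all illustrative choices, not details confirmed by the paper.

```python
import math
import random

def to_grayscale(rgb_pixels):
    # Step 1: grayscale conversion (ITU-R BT.601 luma weights, an assumed choice).
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def extract_sift_keys(gray):
    # Step 2 placeholder: stands in for a real SIFT implementation and returns
    # hypothetical (scale, orientation) pairs for each detected keypoint.
    random.seed(len(gray))
    return [(random.uniform(1, 8), random.uniform(0, 360)) for _ in range(50)]

def moments(xs):
    # Step 3 helper: mean, variance, sample std, skewness, kurtosis of a list.
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    std = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    sd = math.sqrt(var)
    skew = sum((x - m) ** 3 for x in xs) / n / sd ** 3
    kurt = sum((x - m) ** 4 for x in xs) / n / sd ** 4
    return [m, var, std, skew, kurt]

def feature_vector(rgb_pixels, indices):
    # Steps 1-4: grayscale, SIFT keys, ten feature moments, then pick the
    # moments selected by `indices` to form a vector of length len(indices).
    gray = to_grayscale(rgb_pixels)
    keys = extract_sift_keys(gray)
    scales = [s for s, _ in keys]
    orientations = [o for _, o in keys]
    ten_moments = moments(scales) + moments(orientations)
    return [ten_moments[i] for i in indices]

pixels = [((3 * i) % 256, (5 * i) % 256, (7 * i) % 256) for i in range(1024)]
vec = feature_vector(pixels, [0, 2, 5, 9])   # an example length-4 vector
```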
Fig. 2: Proposed batik motifs classification system. Training and testing images each undergo grayscale conversion, SIFT keys calculation (with parameters (k_S, \sigma)) and feature moments calculation; feature vectors are then constructed and fed into the k-NN classifier to produce the classification result.
V. EXPERIMENTAL SETUP AND RESULTS

The performance of the automatic batik motifs classification system is evaluated by performing classification on
seven different batik motifs, as shown in Figure 1. In our
experiments, the batik motif images are taken from the
collection of the Danar Hadi Batik Museum in Solo, Central
Java, Indonesia. These images are taken using a DSLR camera
on auto settings, using only the incandescent light available
in the museum (no additional lighting or flash is used).
The images are taken such that the position of the camera
is perpendicular to the batik cloth. All images used for the
experiments are resized to 500×331 pixels. For the training
image database, we use 10 images for each batik motif class
(a total of 70 images). For the testing image database we use
5 images for each batik motif class (35 images in total). For
each image, the number of feature vectors constructed, n_V, is
given by

n_V = \sum_{\ell=1}^{10} C_{10}^{\ell} = 1023   (14)

TABLE I: Overall classification accuracy rates for each feature vector length \ell

\ell    A (%)     A+ (%)
1       5.71      31.43
2       5.71      28.57
3       5.71      34.29
4       5.71      37.14
5       5.71      37.14
6       8.57      34.29
7       8.57      31.43
8       8.57      31.43
9       11.43     25.71
10      11.43     11.43
Table I shows that the highest overall classification accuracy rates are achieved by feature vectors with \ell = 4 and
\ell = 5. Specifically, the 4-dimensional vectors giving the highest accuracy rate are { , c , 3c , 4 } and { c , , 3c , 4 },
while for the 5-dimensional vectors the highest accuracy
rates are achieved by the vectors { , c , , 3 , 3c } and
{ c , , 3 , 3c , 4 }. The table also shows that the lowest
overall accuracy rate is achieved by the feature vector containing 10 feature moments.
Although the 4- and 5-dimensional feature vectors give
the highest overall accuracy rates, in both of these cases not
all batik motifs can be classified. Specifically, in both cases
the Semen motif has a zero accuracy rate. The next highest
overall accuracy rate is given by the 3- and 6-dimensional
feature vectors (34.29%). Again, in these cases not all batik
motifs can be classified. The 3-dimensional feature vector
that gives the highest accuracy rate, { c , 3c , 4 }, fails to
classify both the Chinese and Semen motifs, while the
6-dimensional feature vector with the highest accuracy rate,
{ , c , , 3 , 3c , 4 }, fails to classify the Semen motif.
The 7- and 8-dimensional vectors both give a lower maximum
average accuracy rate of 31.43%, but in these cases all batik
motifs can be classified (i.e., have non-zero classification
accuracy rates). Using a 1-dimensional vector also gives a maximum average accuracy rate of 31.43%, but in this case the
Chinese motif cannot be correctly classified. The 7- and 8-dimensional vectors that give the highest accuracy rates are
listed in Table II. Since both the 7- and 8-dimensional feature
vectors give the same accuracy rate, these results suggest that
using 7-dimensional feature vectors is enough for batik motif
classification using the proposed system.
Table III shows the overall average accuracy rates (for all
feature vectors) for each batik motif class.

TABLE II: The 7- and 8-dimensional feature vectors giving the highest accuracy rates

Vectors
{ , c , 2, , c , 3 , 4c }
{ , c , 2, , c , 3c , 4c }
{ , c , 2, , c , 4 , 4c }
{ , c , 2, c , 3 , 3c , 4c }
{ , c , 2, c , 3c , 4 , 4c }
{ , c , 2, c , 3 , 4 , 4c }
{c , 2, , c , 3 , 3c , 4c }
{c , 2, , c , 3 , 4 , 4c }
{c , 2, , c , 3c , 4 , 4c }
{c , 2, c , 3 , 3c , 4 , 4c }
{ , c , 2, , c , 3 , 3c , 4c }
{ , c , 2, , c , 3 , 4 , 4c }
{ , c , 2, , c , 3c , 4 , 4c }
{c , 2, , c , 3 , 3c , 4 , 4c }

TABLE III: Average accuracy rates for each batik motif class

Motif class
Parang
Lereng
Dutch
Chinese
Ceplokan
Semen
Lunglungan

[Figure: (a) Chinese #4, (b) Lereng #3, (c) Lunglungan #4]
VI. CONCLUSIONS

In this paper we present an automatic batik motif classification system. The feature vectors used for the classification
process are constructed from various combinations of SIFT
feature moments extracted from the batik cloth images. The
classification is performed using the k-NN method. Our
results show that the most suitable feature vectors for the
classification are the 7- and 8-dimensional vectors, yielding
an overall average classification accuracy of 31.43%. Higher
overall classification accuracy rates can be achieved using the
3-, 4-, 5-, and 6-dimensional feature vectors (yielding accuracy
rates of 34.29%, 37.14%, 37.14% and 34.29%, respectively).
[Figure: (a) Dutch #5, (b) Lunglungan #3, (c) Ceplokan #4, (d) Ceplokan #5, (e) Lereng #4, (f) Lereng #5, (g) Parang #4, (h) Parang #5, (i) Semen #2, (j) Semen #4]

[Table: average accuracy rates of the individual feature moments (1-dimensional feature vectors)]

Feature moment    Accuracy (%)
                  14.29
c                 31.43
2                 17.14
c2                5.71
                  17.14
c                 5.71
3                 14.29
3c                8.57
4                 14.29
4c                8.57
[1] UNESCO, Indonesian Batik, Inscribed in 2009 on the Representative List of the Intangible Cultural Heritage of Humanity, http://www.unesco.org/culture/ich/RL/00170.
[2] N. Suciati, W.A. Pratomo, and D. Purwitasari, Batik Motif Classification using Color-Texture-Based Feature Extraction and Backpropagation Neural Network, Proc. of IIAI 3rd Int. Conf. on Advanced Applied Informatics (IIAI-AAI), Kitakyushu, pp. 517-521, 2014.
[3] K.S. Loke and M. Cheong, Efficient Textile Recognition via Decomposition of Co-occurrence Matrices, Proc. IEEE Int. Conf. on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, pp. 257-261, 2009.
[4] I. Nurhaida, R. Manurung, and A.M. Arymurthy, Performance Comparison Analysis Features Extraction Methods for Batik Recognition, Proc. Int. Conf. on Advanced Computer Science and Information Systems (ICACSIS), Depok, pp. 207-212, 2012.
[5] A.E. Minarno, Y. Munarko, A. Kurniawardhani, F. Bimantoro and N. Suciati, Texture Feature Extraction Using Co-Occurrence Matrices of Sub-Band Image For Batik Image Classification, Proc. 2nd Int. Conf. on Information and Communication Technology (ICoICT), Bandung, pp. 249-254, 2014.
[6] D.G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, Int. J. Computer Vision 60:2, pp. 91-110, 2004.
[7] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, John Wiley & Sons, Inc., 2nd ed., 2001.
[8] A. Rattani, D.R. Kisku, A. Lagorio and M. Tistarelli, Facial Template Synthesis based on SIFT Features, Proc. IEEE Workshop on Automatic Identification Advanced Technologies, Alghero, Italy, pp. 69-73, 2007.