
Content-based image retrieval system with most relevant features among wavelet and color features

Abdolreza Rashno
Department of Computer Engineering,
Lorestan University
Khorramabad, Iran
ar.rashno@gmail.com

Elyas Rashno
Department of Computer Engineering,
Iran University of Science and Technology
Tehran, Iran
elyas.rashno@gmail.com

Abstract

Content-based image retrieval (CBIR) has become one of the most important research
directions in the domain of digital data management. In this paper, a new feature extraction
scheme, comprising the norms of the low-frequency components of the wavelet transform together
with color features in the RGB and HSV domains, is proposed as the representative feature vector
for the images in the database, followed by an appropriate similarity measure for each feature
type. In CBIR systems, retrieval results are highly sensitive to the chosen image features. We
address this problem by selecting the most relevant features from the complete feature set with
ant colony optimization (ACO)-based feature selection, which minimizes the number of features
while maximizing the F-measure of the CBIR system. To evaluate the performance of the proposed
CBIR system, it has been compared with three previously proposed systems. Results show that the
precision and recall of the proposed system are higher than those of the earlier systems for the
majority of image categories in the Corel database.
Key words: Content-based image retrieval, wavelet transform, color, ant colony optimization,
feature selection.

1. Introduction
Image retrieval is the task of retrieving relevant images from image databases, and it has
recently played an important role in medical, economic and other fields [7-12, 21]. The first
approaches retrieved images based on textual descriptors created by humans, without considering
the visual content of the images [10]. CBIR is a search method that uses the structural and
conceptual characteristics of the image itself. Owing to the increasing demand for efficient
retrieval of images from large image databases, research in this field has attracted considerable
attention. In general, an image retrieval system is divided into three parts: the user interface,
the database and the processing section. One of the most important achievements in image
retrieval is the CBIR system, and many methods have been proposed for CBIR [7, 10, 26, 51, 56,
57]. Prior to this work, we proposed CBIR systems based on color and texture features with
feature selection methods and neutrosophic (NS) theory [43-45]. We have since applied NS theory
in other applications [31-38], including speech processing and clustering [58, 59].
One of the most significant parts of a CBIR system is feature extraction [22, 57]. Color,
texture and shape are the three types of low-level features proposed for CBIR systems. Color is
widely used in CBIR systems because it is usually easy to extract and performs relatively well in
the retrieval task [17, 22, 26, 34]. For shape features, a comprehensive and exact definition has
not yet been presented; generally, methods such as aspect ratio, circularity and Fourier
descriptors are used to extract geometric shape features [8, 52]. Three classes of methods,
statistical, structural and spectral, are used to describe texture features [1, 29]. Furthermore,
wavelet features are a type of texture feature that has been used in many image processing
applications such as image compression [53], edge detection [54] and image retrieval [12].
Feature selection is a method that finds and removes redundant and irrelevant feature
components. As a result, low time complexity and high system accuracy can be achieved. Many
feature selection approaches have been proposed for different applications such as face
recognition [14], data mining and pattern recognition [16], automatic speaker verification
[41, 42, 50] and image processing [39, 40, 46, 47]. Recently, ACO has been applied to the feature
selection task [3, 20]. Feature selection has also been applied to CBIR systems [11, 49].
In this paper, two feature categories, texture and color, are extracted in the feature
extraction phase. For the texture features, the wavelet transform of the image is computed and
the Frobenius norms of the low-frequency components of the decomposed image are used as texture
features. For the color features, the dominant color descriptor is extracted from the RGB and HSV
color domains as a compact representation of the image colors; color statistic features and color
histogram features are extracted as well. Since the similarity measure must be adapted to the
feature type, an appropriate measure is applied to each. Finally, a new feature selection scheme
based on ant colony optimization is proposed to select the most relevant features among the
texture and color features, which leads to a CBIR system with fewer features and a higher
F-measure.
The rest of this paper is organized as follows: Section 2 describes the wavelet features. Color
features are explained in more detail in Section 3. The similarity measures for the proposed
features are described in Section 4. The proposed feature selection algorithm based on ACO is
presented in Section 5. The experimental setup and results are described in Sections 6 and 7,
respectively. Finally, conclusions and future work are discussed in Section 8.

2. Wavelet features
2.1. Wavelet transformation
The wavelet transform can be applied to images as 2-dimensional signals. To decompose an image
to level k, the transform is first applied to all rows up to level k while the columns of the
image are kept unchanged; the same procedure is then applied to the columns while keeping the
rows unchanged. In this manner the frequency components of the image are obtained up to level k.
These components are LL, the approximation of the image, and HL, LH and HH, the horizontal,
vertical and diagonal frequency details [6].
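As an illustration, such a decomposition can be obtained with the PyWavelets library. The sketch below is a minimal example, not the authors' implementation; the random input image and the choice of the Haar wavelet are assumptions for illustration.

```python
import numpy as np
import pywt

# Hypothetical grayscale image (in practice, one channel of an RGB image).
image = np.random.rand(256, 256)

# Single-level 2-D discrete wavelet transform with the Haar wavelet.
# Following the naming used in this paper, LL is the approximation and
# HL, LH and HH are the horizontal, vertical and diagonal details.
LL, (HL, LH, HH) = pywt.dwt2(image, 'haar')

# Further levels are obtained by decomposing the approximation again.
LL2, (HL2, LH2, HH2) = pywt.dwt2(LL, 'haar')
```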
These frequency components at various levels allow a better analysis of the original image or
signal. The 2-D wavelet transform can be defined as follows:

CWT(s, a, b) = \frac{1}{\sqrt{s}} \iint f(x, y)\, \Psi\!\left(\frac{x-a}{s}, \frac{y-b}{s}\right) dx\, dy \qquad (1)
For a sample image, the wavelet decomposition for 2 levels is shown in Figure 1.

Figure 1. Wavelet decomposition of a sample image for 2 levels: (a) original image, (b) first-level decomposition into LL1, LH1, HL1 and HH1, (c) second-level decomposition of LL1 into LL2, LH2, HL2 and HH2.

2.2. Wavelet coefficient features


Wavelet features are extracted based on the wavelet transformation described in the previous
section. Here we extract features using the approximation, vertical and horizontal frequency
components of the image. The decomposed image consists of LL, LH, HL and HH components at each
level. LL contains the low-frequency factors, while HL, LH and HH contain the high-frequency
factors in the horizontal, vertical and diagonal directions, respectively. These frequency
components are equal-sized matrices. The Frobenius norms of the rows of the LL and LH matrices
and the Frobenius norms of the columns of the HL matrix are used in the feature extraction phase,
as shown in Figure 2. The Frobenius norm of a vector is defined as follows:
M \in R^{n} \;\Rightarrow\; \|M\|_{F} = \left(\sum_{i=1}^{n} M_i^{2}\right)^{1/2} \qquad (2)

in which M = [M_1, M_2, ..., M_n] is an n-dimensional vector.
For images in the RGB space, each color component (R, G and B) is decomposed to the first level
with the Haar filter into LL, LH and HL components. The vectors a, h and v are then computed such
that their elements are the norms of the rows of LL, the norms of the rows of LH and the norms of
the columns of HL, respectively. The standard deviation and mean of a, h and v form the features
of the image. This method aims to reduce sensitivity to local differences within each group of
images in the data set, such as changes in the pose and position of objects; reducing this
sensitivity decreases distances within a single group and increases them between different
groups.
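A minimal sketch of this texture feature extraction is given below, assuming PyWavelets and NumPy; the function name and channel ordering are illustrative, not taken from the paper.

```python
import numpy as np
import pywt

def wavelet_texture_features(rgb_image):
    """Mean and std of the row norms of LL and LH and the column norms of HL,
    per R, G, B channel (2 statistics x 3 vectors x 3 channels = 18 features)."""
    features = []
    for c in range(3):                                   # R, G, B channels
        channel = rgb_image[:, :, c].astype(float)
        LL, (HL, LH, HH) = pywt.dwt2(channel, 'haar')    # first-level Haar decomposition
        a = np.linalg.norm(LL, axis=1)                   # norm of each row of LL
        h = np.linalg.norm(LH, axis=1)                   # norm of each row of LH
        v = np.linalg.norm(HL, axis=0)                   # norm of each column of HL
        for vec in (a, h, v):
            features.extend([vec.mean(), vec.std()])
    return np.array(features)                            # 18-dimensional feature vector
```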
Figure 2. Frequency components used for feature extraction: the row norms a_1, ..., a_n of the approximation (A1), the row norms h_1, ..., h_n of the horizontal detail (H1) and the column norms v_1, ..., v_n of the vertical detail (V1) of the decomposed image.

3. Color Features
Color features are the most intuitive and dominant low-level image features, and they are stable
and robust in comparison with other image features such as texture and shape. Since these
features are not sensitive to rotation, translation and scale changes, they are well suited to
CBIR systems. Moreover, the computation cost of color features is relatively lower than that of
other features. In this work we have used the following color-based features.

3.1. Dominant Color Descriptor (DCD)


DCD is one of the color descriptors approved in the MPEG-7 Final Committee Draft among several
histogram descriptors. Both the representative colors and the percentage of each color are
included in DCD. Moreover, DCD provides an effective and compact color representation that can
describe the color distribution of an image or a region of interest [15]. In DCD, the color space
is first divided into a number of coarse partitions, as depicted in Figure 3.

Figure 3. Color space partitioning

The center of each partition is calculated, and all points in the same partition are assumed to
be similar and close to each other. Partition centers are the average of all pixels in each
partition and are calculated as follows:
C_i = \frac{\sum_{p \in P_i} p}{\sum_{p \in P_i} 1} \qquad (3)
where P_i is the ith partition. In this research the DCD features are extracted in both the RGB
and HSV domains. Each pixel color value is replaced by the center value of its corresponding
partition. As a result, DCD quantizes the image so that the number of possible colors per pixel
equals the number of partitions. Figures 4 and 5 show the result of applying DCD to an image in
the RGB and HSV domains, respectively.
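A simplified sketch of this quantization for a single channel is shown below, assuming uniform coarse partitions; the MPEG-7 DCD allows other partitioning schemes, so the uniform split and the default of 8 partitions are assumptions for illustration.

```python
import numpy as np

def dominant_color_descriptor(channel, n_partitions=8):
    """Quantize one color channel into coarse partitions and return the
    partition centers (dominant colors) and the percentage of pixels in each."""
    values = channel.astype(float).ravel()
    edges = np.linspace(values.min(), values.max() + 1e-9, n_partitions + 1)
    labels = np.digitize(values, edges[1:-1])            # partition index per pixel
    centers, percentages = [], []
    for i in range(n_partitions):
        members = values[labels == i]
        # Partition center = mean of its pixels, as in Eq. (3); an empty
        # partition falls back to the midpoint of its range.
        centers.append(members.mean() if members.size else 0.5 * (edges[i] + edges[i + 1]))
        percentages.append(members.size / values.size)
    return np.array(centers), np.array(percentages)
```

The percentages returned here are the weights used later in the DCD similarity measure of Section 4.3.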

Figure 4. DCD of the image in RGB color space

Figure 5. DCD of the image in HSV color space

3.2. Color statistic features


Images are usually represented in both the RGB and HSV spaces, since there is a one-to-one
mapping between these spaces. For each image, the simplest statistical features, the first-order
moment (mean, denoted by M) and the second-order moment (standard deviation, denoted by STD), are
computed for the R, G, B, H, S and V channels and for the gray-level image. As a result, fourteen
features are generated per image with the following names:
R_M, R_STD, G_M, G_STD, B_M, B_STD, H_M, H_STD, S_M, S_STD, V_M, V_STD, Gray_M and Gray_STD.
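These statistics can be computed directly from the image arrays; the following is a minimal sketch, assuming OpenCV for the RGB-to-HSV and grayscale conversions (the function name is illustrative).

```python
import cv2
import numpy as np

def color_statistic_features(rgb_image):
    """Mean and standard deviation of the R, G, B, H, S, V and gray channels
    (2 statistics x 7 channels = 14 features per image)."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    channels = [rgb_image[:, :, i] for i in range(3)] \
             + [hsv[:, :, i] for i in range(3)] + [gray]
    features = []
    for ch in channels:
        features.extend([float(np.mean(ch)), float(np.std(ch))])
    return np.array(features)
```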

3.3. Color histogram features


The histogram is a graph that shows the number of color values falling into a number of
resolution ranges, or bins. From the image histogram a set of features can be extracted, which
are named color histogram features [48]. For each color channel (R, G, B, H, S and V), a color
histogram is built with respect to a bin size B (the number of intervals or range resolutions).
Each color channel therefore contributes B histogram features corresponding to the number of
pixels that fall into each interval.
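With a bin size of B = 8 per channel, for example, these features can be computed as in the sketch below; the normalization to pixel fractions matches the histogram similarity measure of Section 4.2, and the assumption of 8-bit channel values is ours.

```python
import numpy as np

def color_histogram_features(channel, n_bins=8):
    """Normalized histogram (fraction of pixels per bin) of one color channel."""
    hist, _ = np.histogram(channel.ravel(), bins=n_bins, range=(0, 256))
    return hist / channel.size      # fractions summing to 1
```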

4. Similarity Measures
After feature extraction, images are retrieved based on their similarity to the images in the
database. In this section the similarity measure for each feature type is presented.

4.1. Texture and color statistic features similarity measure


As discussed for the texture features, the standard deviation and mean of the norm vectors of
the wavelet decomposition components are computed as texture features, and the standard deviation
and mean of the color components are computed as color statistic features. The Euclidean distance
is used as the similarity measure for both of these feature types.

D(\mathrm{Image1}, \mathrm{Image2}) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2} \qquad (4)

where p_i and q_i are the ith corresponding features of Image1 and Image2, respectively.
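A direct implementation of Eq. (4) would look like the following sketch.

```python
import numpy as np

def euclidean_distance(p, q):
    """Euclidean distance between two feature vectors, as in Eq. (4)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((p - q) ** 2)))
```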

4.2. Color histogram similarity measure


Color histograms, which record the frequencies of colors in an image, are used as features to
describe images for retrieval. The following distance measure is applied when retrieving images
from the image database [2].
D_{hist}(\mathrm{Image1}, \mathrm{Image2}) = 1 - \sum_{k=1}^{N} \min(H_1(k), H_2(k)) \qquad (5)
where N is the number of histogram bins and H(k) is the percentage of pixels in bin k.
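Equation (5) is the histogram intersection distance; a minimal sketch, assuming the histograms are already normalized to percentages, follows.

```python
import numpy as np

def histogram_distance(h1, h2):
    """Histogram intersection distance of Eq. (5) for normalized histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 1.0 - float(np.sum(np.minimum(h1, h2)))
```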

4.3. DCD similarity measure


DCD features are the centers (dominant colors) of the partitioned color space. Since the number
of pixels belonging to each coarse partition is important, a weighted Euclidean distance measure
is used for these features: the distance between two corresponding centers is weighted by the
percentage of pixels assigned to that center.
D(\mathrm{Image1}, \mathrm{Image2}) = \sum_{i=1}^{n} w_i \sqrt{(C_{1i} - C_{2i})^2} \qquad (6)
where C_{1i} is the ith dominant color of Image1 and w_i = (p_{1i} + p_{2i})/2, in which p_{1i}
is the percentage of pixels assigned to the ith center of Image1; the sum of all w_i is 1.
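A sketch of this weighted distance is given below; note that \sqrt{(C_{1i} - C_{2i})^2} reduces to the absolute difference of the corresponding centers.

```python
import numpy as np

def dcd_distance(centers1, perc1, centers2, perc2):
    """Weighted Euclidean distance between dominant colors, as in Eq. (6)."""
    c1 = np.asarray(centers1, dtype=float)
    c2 = np.asarray(centers2, dtype=float)
    w = (np.asarray(perc1, dtype=float) + np.asarray(perc2, dtype=float)) / 2.0
    return float(np.sum(w * np.abs(c1 - c2)))
```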
5. Proposed feature selection based on ACO

ACO is an agent-based system that simulates the natural behavior of ants, combining mechanisms
of adaptation and cooperation. The ability of ants to find the shortest path is achieved by
depositing pheromone as they travel. Each ant probabilistically prefers to follow a direction
with more pheromone, and the pheromone decays over time. Consequently, the shortest paths
accumulate more pheromone over time and have a higher chance of being selected; these paths are
reinforced while the pheromone on other paths diminishes, until all ants follow the same path. In
this way the shortest path is obtained and the system converges to a single solution [9].

ACO can be reformulated to solve the feature selection problem. The main idea of ACO is to find
a minimum-cost path in a graph. ACO searches for the optimal feature subset by letting the ants
traverse the graph until a minimum number of nodes has been visited and the traversal stop
criterion is satisfied [28]. The graph is fully connected so that any feature can be selected at
the next step. With this reformulation of the graph representation, the pheromone update rule and
transition rule of the standard ACO algorithm can be used. In this case, the pheromone and
heuristic values are not associated with edges; instead, each feature has its own pheromone and
heuristic value, because the edges do not affect the optimal subset while the features do. The
probability that ant k selects feature i at time step t is [29]:

P_i^{k}(t) = \begin{cases} \dfrac{|\tau_i(t)|^{\gamma} |\eta_i|^{\delta}}{\sum_{u \in J^{k}} |\tau_u(t)|^{\gamma} |\eta_u|^{\delta}} & \text{if } i \in J^{k} \\ 0 & \text{otherwise} \end{cases} \qquad (7)

where J^k is the set of features that may still be added to the partial solution, i.e. those not
visited so far. \tau_i(t) and \eta_i are the pheromone value and heuristic desirability associated
with feature i, respectively, and \gamma and \delta are two parameters that determine the relative
importance of the pheromone value and the heuristic information [25]. After each ant has completed
its tour, its solution is generated and the ant deposits pheromone on the features that are part
of its tour. The amount of pheromone deposited by ant k on feature i at step t is proposed to be
defined as follows:

\Delta\tau(i, k) = \alpha \cdot \mathit{CBIR\_F\_measure}(k) + \beta \cdot \left(\frac{\mathit{FeaturesNumber} - \mathit{FeaturesNumber}(k)}{\mathit{FeaturesNumber}}\right) \qquad (8)

where CBIR_F_measure(k) is the F-measure of the CBIR system with the features found by ant k at
iteration t, and FeaturesNumber and FeaturesNumber(k) are the total number of features and the
number of features found by ant k, respectively. The parameters \alpha and \beta control the
importance of retrieval performance and feature subset length, respectively. After all ants have
completed their solutions, the pheromone trails are updated by the following relation [28]:
\tau_i(t+1) = (1 - \rho) \cdot \tau_i(t) + \sum_{k=1}^{m} \Delta\tau_i^{k}(t) + \Delta\tau_i^{g}(t) \qquad (9)

where \rho is an evaporation rate constant, m is the number of ants and g is the best ant of the
previous iteration. This relation means that all ants update the pheromone, and the ant with the
best solution deposits additional pheromone on its nodes; this keeps the search concentrated
around the best solution in subsequent iterations.
In this paper the stop criterion for the ants is a threshold, which is proposed to be defined as
follows:
\mathit{Ant\_Threshold} = \varphi \cdot e^{-\mathit{FN}/N} + \omega \cdot e^{\mathit{F\_measure}} \qquad (10)
where FN is the number of features selected by the ant so far, N is the total number of features,
and \varphi and \omega are parameters that control the effect of the feature subset size and the
F-measure, respectively, with the restriction \varphi + \omega = 1.
We apply ACO-based feature selection to the CBIR system: the ants select the most relevant
features based on their performance in the CBIR system. Here, the F-measure, a combination of the
precision and recall measures, is used for CBIR performance evaluation. The main advantage of
presenting only the most relevant features to the CBIR system is a system with low complexity and
high accuracy. The block diagram of the proposed feature selection scheme for CBIR systems is
shown in Figure 6.
Figure 6. Proposed ACO feature selection scheme in the CBIR system. The flowchart comprises the following steps:
1. Start from the complete image feature set (wavelet and color features).
2. Initialization: ant population size, node pheromone, maximum number of iterations and stop criteria.
3. Create a population of ants and select an initial feature for each ant.
4. Each ant selects the next feature according to its probability (7) and adds it to its path, until the ant threshold (10) is reached; repeat for the next ant until all ants have traversed their paths.
5. Compute the delta pheromone for all ants (8) and find the best ant, i.e. the one with the highest F-measure.
6. Update the pheromone with evaporation, the delta pheromone and the best ant's contribution (9).
7. If the maximum iteration or stop criterion is not reached, return to step 3; otherwise save the features visited by the ant with the highest F-measure.
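A condensed sketch of this procedure is given below. It follows equations (7)-(10), but the CBIR evaluation is abstracted into a user-supplied f_measure_of(subset) function, the way the ant threshold of Eq. (10) is compared against a stop value is our assumption (the text does not fully specify it), and all helper names and default parameters are illustrative.

```python
import numpy as np

def aco_feature_selection(n_features, f_measure_of, n_ants=20, n_iterations=30,
                          gamma=1.0, delta=2.0, alpha=0.7, beta=0.3, rho=0.2,
                          phi=0.2, omega=0.8, stop_value=2.0, seed=0):
    """ACO-based feature selection sketch.  f_measure_of(subset) must return
    the CBIR F-measure obtained when only the given features are used."""
    rng = np.random.default_rng(seed)
    tau = np.ones(n_features)                      # pheromone per feature
    eta = np.ones(n_features)                      # heuristic desirability per feature
    best_subset, best_f = None, -1.0

    for _ in range(n_iterations):
        subsets, scores = [], []
        for _ in range(n_ants):
            subset = [int(rng.integers(n_features))]          # initial feature
            while True:
                allowed = np.setdiff1d(np.arange(n_features), subset)
                f_now = f_measure_of(subset)
                # Ant threshold of Eq. (10); stopping once it exceeds a fixed
                # stop_value is an assumption made for this sketch.
                threshold = phi * np.exp(-len(subset) / n_features) + omega * np.exp(f_now)
                if allowed.size == 0 or threshold >= stop_value:
                    break
                # Transition probabilities of Eq. (7) over the allowed features.
                weights = (tau[allowed] ** gamma) * (eta[allowed] ** delta)
                subset.append(int(rng.choice(allowed, p=weights / weights.sum())))
            f_k = f_measure_of(subset)
            subsets.append(subset)
            scores.append(f_k)
            if f_k > best_f:
                best_subset, best_f = list(subset), f_k

        # Pheromone deposit of Eq. (8) for every ant, plus an extra deposit for
        # the best ant, followed by the evaporation update of Eq. (9).
        delta_tau = np.zeros(n_features)
        for subset, f_k in zip(subsets, scores):
            delta_tau[subset] += alpha * f_k + beta * (n_features - len(subset)) / n_features
        g = subsets[int(np.argmax(scores))]                   # best ant of this iteration
        delta_tau[g] += alpha * max(scores) + beta * (n_features - len(g)) / n_features
        tau = (1.0 - rho) * tau + delta_tau

    return best_subset, best_f
```

In the setting of this paper, n_features would be 126 and f_measure_of would evaluate the retrieval F-measure obtained with the candidate feature subset.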

6. Experimental setup
To show the utility of the proposed CBIR scheme, it is implemented on a machine with a 3.4 GHz
Core i7 CPU, 6 GB of RAM and Windows 7. For simplicity, all images are resized to 256x256 pixels.
For the texture features, since the images are of size 256x256, the wavelet decompositions LL, LH
and HL are all 128x128, and the norms of the rows of LL and LH and the norms of the columns of HL
are vectors of size 1x128. So, for each color component we obtain three standard-deviation
features and three mean features from these three vectors. Since only the RGB color space is used
for the wavelet features, a feature vector of size 18 is computed per image. The number of
dominant colors in the DCD feature extraction is 8, so 48 features are extracted as DCD features
from the R, G, B, H, S and V color components. Furthermore, 12 features are extracted as the
standard deviation and mean of the six color channels in RGB and HSV. Finally, the histogram bin
number is set to 8, so 48 histogram features are extracted for RGB and HSV. As a result, after
extracting all of the mentioned features, 126 features are obtained per image. Table 1 shows the
extracted features.
Table 1. Abbreviated names of all extracted features

Feature     Meaning
LL_meanL    Mean of the norms of the rows of LL
LH_meanL    Mean of the norms of the rows of LH
HL_meanL    Mean of the norms of the columns of HL
LL_stdL     Std of the norms of the rows of LL
LH_stdL     Std of the norms of the rows of LH
HL_stdL     Std of the norms of the columns of HL
DCDR_i      ith dominant color of R
DCDG_i      ith dominant color of G
DCDB_i      ith dominant color of B
DCDH_i      ith dominant color of H
DCDS_i      ith dominant color of S
DCDV_i      ith dominant color of V
R_m         Mean of the R component
G_m         Mean of the G component
B_m         Mean of the B component
H_m         Mean of the H component
S_m         Mean of the S component
V_m         Mean of the V component
R_s         Std of the R component
G_s         Std of the G component
B_s         Std of the B component
H_s         Std of the H component
S_s         Std of the S component
V_s         Std of the V component
HistR_i     ith color bin of R
HistG_i     ith color bin of G
HistB_i     ith color bin of B
HistH_i     ith color bin of H
HistS_i     ith color bin of S
HistV_i     ith color bin of V

Each wavelet feature in Table 1 is extracted for the R, G and B color components.
Finally, the parameters for ACO-based feature selection are set as:

γ = 1, δ = 2, α = 0.7, β = 0.3, ρ = 0.2, φ = 0.2 and ω = 0.8.

7. Experimental results
Our proposed system is evaluated on the Corel image database, which has been widely used by the
image processing and CBIR research communities and consists of 11,000 images in 110 categories;
the test classes in our evaluation are bus, horse, flower, dinosaur, building, elephant, people,
beach, scenery and dish. First, ACO feature selection is adapted to the CBIR system to obtain the
most relevant features among all features extracted from the images. In this experiment ACO
feature selection selects the top 42 features among all 126 features, a feature reduction of
about 66%. Although ACO feature selection is time-consuming, it is an offline task and is
performed only once. The features selected by the feature selection algorithm are LL_meanL_G, LL_meanL_B, LH_meanL_B,
HL_meanL_R, LL_stdL_G, LH_stdL_R, HL_stdL_R, HL_stdL_B, DCDR4, DCDR7,
DCDG2, DCDB4, DCDB6, DCDB7, DCDH3, DCDH8, DCDS5, DCDV5, DCDV6,
R m , R s , Bs , Hm , Sm ,Hs , Vs , HistR3, HistR8, HistG1, HistG5, HistG8, HistB1, HistB2,
HistH6, HistH7, HistH8, HistS2, HistS7, HistV4, HistV5, HistV7 and HistV8.
After feature selection, the indices of the selected features are used in the proposed CBIR
system, which is shown in Figure 7.
Figure 7. Proposed CBIR system

To evaluate the performance of the proposed system, it has been compared with the CBIR system
using color, texture and shape features [55], the co-occurrence matrix-based CBIR [24] and image
retrieval by texture similarity [30]. All images of the ten tested semantic classes in the Corel
database (bus, horse, flower, dinosaur, building, elephant, people, beach, scenery and dish) have
been used as query images, and the first 20 most similar images are retrieved for each query. For
each class, the average precision and recall are computed over all 100 query images. The results
for all image categories are shown in Table 2.

Table 2. Average precision and recall for proposed CBIR system and other systems

Image category    Proposed system      Color/texture/shape [55]   Co-occurrence matrix [24]   Texture similarity [30]
                  Precision  Recall    Precision  Recall          Precision  Recall            Precision  Recall
African people    0.583      0.116     0.522      0.104           0.453      0.115             0.424      0.126
Beach             0.489      0.097     0.462      0.092           0.398      0.121             0.446      0.113
Building          0.455      0.091     0.444      0.088           0.374      0.127             0.411      0.132
Buses             0.546      0.109     0.558      0.111           0.741      0.092             0.852      0.099
Dinosaurs         0.998      0.199     0.892      0.178           0.915      0.072             0.587      0.104
Elephants         0.469      0.093     0.442      0.088           0.304      0.132             0.426      0.119
Flowers           0.896      0.179     0.863      0.172           0.852      0.087             0.898      0.093
Horses            0.612      0.122     0.601      0.120           0.568      0.102             0.589      0.103
Mountains         0.398      0.079     0.446      0.089           0.293      0.135             0.268      0.152
Food              0.633      0.126     0.619      0.123           0.369      0.129             0.427      0.122
Average           0.6079     0.1211    0.5849     0.1165          0.527      0.111             0.533      0.116
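For reference, the per-query precision and recall at a cutoff of 20 retrieved images, and the F-measure used as the ACO fitness, can be computed as in the following sketch (function names are illustrative; 100 is the class size in the Corel subset used here).

```python
def precision_recall_at_k(retrieved_labels, query_label, class_size=100, k=20):
    """Precision and recall for one query when the top-k images are returned."""
    relevant = sum(1 for label in retrieved_labels[:k] if label == query_label)
    return relevant / k, relevant / class_size

def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
```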

The proposed CBIR system achieves better average precision and recall than the other three
models for the majority of image categories. Sample results for image retrieval by texture
similarity, the co-occurrence matrix-based CBIR, the CBIR system using color, texture and shape
features and our proposed system are shown in Figures 8, 9, 10 and 11, respectively. The image in
the top left corner is the query image and the other 20 images are the retrieval results.
Figure 8. Retrieved images for image retrieval by texture similarity
Figure 9. Retrieved images for the co-occurrence matrix-based CBIR
Figure 10. Retrieved images for the CBIR system using color, texture and shape features
Figure 11. Retrieved images for proposed CBIR system

8. Conclusion
In this paper, the Frobenius norms of the low-frequency components of the wavelet-decomposed
image are proposed as wavelet features, and DCD, color statistic and color histogram features are
proposed as color features. The feature extraction phase is designed so that the distance between
features of images of the same class is small while the distance between features of images of
different classes is large, which leads to a CBIR system with higher performance. Furthermore,
irrelevant and redundant features are dropped by ant colony optimization, which selects the most
relevant features from the complete feature set. Results showed that our proposed CBIR system has
higher precision and recall in comparison with three earlier CBIR systems.
For future work, the curvelet transform could be used to extract features at higher frequency
resolution, rather than only the low- and high-frequency components of the wavelet transform. In
addition, other feature selection methods such as particle swarm optimization and genetic
algorithms can be applied, and intrinsic properties of the data such as ReliefF weights can be
used in these feature selection methods to increase CBIR system performance.
References
[1] André Ricardo Backes, Alexandre Souto Martinez, Odemir Martinez Bruno, “Texture analysis based on
maximum contrast walker”, Pattern Recognition Letters 31 (2010) 1701–1707
[2] Cha, Sung-Hyuk, and Sargur N. Srihari. "On measuring the distance between histograms." Pattern
Recognition 35.6 (2002): 1355-1370.
[3] Chen, Bolun, Ling Chen, and Yixin Chen. "Efficient ant colony optimization for image feature
selection." Signal processing 93.6 (2013): 1566-1576.
[4] Chuen-Horng Lin, Der-Chen Huang, Yung-Kuan Chan, Kai-Hung Chen, Yen-Jen Chang,” Fast color-spatial
feature based image retrieval methods”, Expert Systems with Applications 38 (2011) 11412–11420
[5] Chuen-Horng Lin, Rong-Tai Chen, Yung-Kuan Chan, “A smart content-based image retrieval system based
on color and texture feature”, Image and Vision Computing 27 (2009) 658–665
[6] D. Lee Fugal,” Conceptual Wavelets in Digital Signal Processing”, Space & Signals Technologies, San
Diego, 2009
[7] Datta, R., Joshi, D., Li, J., and Wang, J. Z. 2008. Image retrieval: Ideas, influences, and trends of the new
age. ACM Comput. Surv. 40, 2, Article 5 (April 2008), 60 pages DOI = 10.1145/1348246.1348248 http://
doi.acm.org/10.1145/1348246.1348248
[8] Dengsheng Zhang, Guojun Lu, "Review of shape representation and description techniques", Pattern
Recognition 37 (2004) 1-19.
[9] Dorigo, Marco, and Christian Blum. "Ant colony optimization theory: A survey." Theoretical Computer
Science 344.2 (2005): 243-278.
[10] Dr. Fuhui Long, Dr. Hongjiang Zhang and Prof. David Dagan Feng, “Fundamentals of Content-Based
Image Retrieval”
[11] ElAlami, M. Esmel. "A novel image retrieval model based on the most relevant features." Knowledge-
Based Systems 24.1 (2011): 23-32.
[12] G. Quellec, M. Lamard, G. Cazuguel, B. Cochener, C. Roux, “Wavelet optimization for content-based
image retrieval in medical databases”, Medical Image Analysis 14 (2010) 227–241
[13] George Gagaudakis, Paul L. Rosin, “Incorporating shape into histograms for CBIR”, Pattern Recognition
35 (2002) 81–91
[14] Gundimada, S., Asari, V. K., & Gudur, N. (2010). Face recognition in multi-sensor images based on a
novel modular feature selection technique. Information Fusion, 11, 124-132
[15] ISO/IEC 15938-3/FDIS Information Technology—Multimedia Content Description Interface—Part 3
Visual Jul. 2001, ISO/IEC/JTC1/SC29/WG11 Doc. N4358.
[16] Jensen, R. (2005). Combining rough and fuzzy sets for feature selection. Ph.D. thesis, University of
Edinburgh.
[17] Jianlin Zhang, Wensheng Zou, “Content-Based Image Retrieval Using Color and Edge Direction Features”
[18] Jun Wei Han, Lei Guo, “A shape-based image retrieval method using salient edges”, Signal Processing:
Image Communication 18 (2003) 141–156
[19] Jun Yue, Zhenbo Li, Lu Liu, Zetian Fu, “Content-based image retrieval using color and texture fused
features”, Mathematical and Computer Modelling 54 (2011) 1121–1127
[20] Kashef, Shima, and Hossein Nezamabadi-pour. "An advanced ACO algorithm for feature subset
selection." Neurocomputing 147 (2015): 271-279.
[21] Keiji Yanai, Masaya Shindo, Kohei Noshita,” A fast image-gathering system from the World-Wide Web using a
PC cluster”, Image and Vision Computing 22 (2004) 59–71
[22] Khairul Muzzammil Saipullah · Deok-Hwan Kim, “A robust texture feature extraction using the localized
angular phase”, Multimed Tools Appl DOI 10.1007/s11042-011-0766-5
[23] Manesh Kokare, P.K. Biswas, B.N. Chatterji, “Texture image retrieval using rotated wavelet filters”,
Pattern Recognition Letters 28 (2007) 1240–1249
[24] N. Jhanwar et al., Content based image retrieval using motif co-occurrence matrix, Image and Vision
Computing 22 (2004) 1211–1220.
[25] Nemati, Shahla, and Mohammad Ehsan Basiri. "Text-independent speaker verification using ant colony
optimization-based selected features." Expert Systems with Applications 38.1 (2011): 620-630.
[26] Nidhi Singhai, Prof. Shishir K. Shandilya, “A Survey On: Content Based Image Retrieval Systems”,
International Journal of Computer Applications (0975 – 8887) Volume 4 – No.2, July 2010
[27] Noureddine Abbadeni, “Texture representation and retrieval using the causal autoregressive model”, J. Vis.
Commun. Image R. 21 (2010) 651–664.
[28] P.S. Suhasini, K. Sri Rama Krishna, I.V. Murali Krishna, "CBIR using color histogram processing",
Journal of Theoretical and Applied Information Technology.
[29] P.W. Huang, S.K. Dai, “Image retrieval by texture similarity”, Pattern Recognition 36 (2003) 665 – 679.
[30] P.W. Huang, S.K. Dai, Image retrieval by texture similarity, Pattern Recognition 36 (3) (2003) 665–679.
[31] Rashno, A., Koozekanani, D. D., Drayna, P. M., Nazari, B., Sadri, S., Rabbani, H., & Parhi, K. K. (2018).
Fully Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images With Diabetic
Macular Edema Using Neutrosophic Sets and Graph Algorithms. IEEE Transactions on Biomedical
Engineering, 65(5), 989-1001.
[32] Rashno, A., Nazari, B., Koozekanani, D. D., Drayna, P. M., Sadri, S., Rabbani, H., & Parhi, K. K. (2017).
Fully-automated segmentation of fluid regions in exudative age-related macular degeneration subjects: Kernel
graph cut in neutrosophic domain. PloS one, 12(10), e0186949.
[33] Parhi, K. K., Rashno, A., Nazari, B., Sadri, S., Rabbani, H., Drayna, P., & Koozekanani, D. D. (2017).
Automated fluid/cyst segmentation: A quantitative assessment of diabetic macular edema. Investigative
Ophthalmology & Visual Science, 58(8), 4633-4633.
[34] Kohler, J., Rashno, A., Parhi, K. K., Drayna, P., Radwan, S., & Koozekanani, D. D. (2017). Correlation
between initial vision and vision improvement with automatically calculated retinal cyst volume in treated dme
after resolution. Investigative Ophthalmology & Visual Science, 58(8), 953-953.
[35] Rashno, A., Parhi, K. K., Nazari, B., Sadri, S., Rabbani, H., Drayna, P., & Koozekanani, D. D. (2017).
Automated intra-retinal, sub-retinal and sub-rpe cyst regions segmentation in age-related macular degeneration
(amd) subjects. Investigative Ophthalmology & Visual Science, 58(8), 397-397.
[36] Salafian, B., Kafieh, R., Rashno, A., Pourazizi, M., & Sadri, S. (2018). Automatic segmentation of choroid
layer in edi oct images using graph theory in neutrosophic space. arXiv preprint arXiv:1812.01989.
[37] Rashno, A., Koozekanani, D. D., & Parhi, K. K. (2018, July). Oct fluid segmentation using graph shortest
path and convolutional neural network. In 2018 40th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society (EMBC) (pp. 3426-3429). IEEE.
[38] Heshmati, A., Gholami, M., & Rashno, A. (2016). Scheme for unsupervised colour–texture image
segmentation using neutrosophic set and non-subsampled contourlet transform. IET Image Processing, 10(6),
464-473.
[39] Rashno, A., Nazari, B., Sadri, S., & Saraee, M. (2017). Effective pixel classification of Mars images based
on ant colony optimization feature selection and extreme learning machine. Neurocomputing, 226, 66-79.
[40] Rashno, A., Saraee, M., & Sadri, S. (2015, April). Mars image segmentation with most relevant features
among wavelet and color features. In AI & Robotics (IRANOPEN), 2015 (pp. 1-7). IEEE.
[41] Rashno, A., SadeghianNejad, H., & Heshmati, A. (2013). Highly efficient dimension reduction for text-
independent speaker verification based on relieff algorithm and support vector machines. International Journal
of Signal Processing, Image Processing and Pattern Recognition, 6(1), 91-108.
[42] Rashno, A., Ahadi, S. M., & Kelarestaghi, M. (2015, March). Text-independent speaker verification with
ant colony optimization feature selection and support vector machine. In Pattern Recognition and Image
Analysis (IPRIA), 2015 2nd International Conference on (pp. 1-5). IEEE.
[43] Rashno, A., Sadri, S., & SadeghianNejad, H. (2015, March). An efficient content-based image retrieval
with ant colony optimization feature selection schema based on wavelet and color features. In Artificial
Intelligence and Signal Processing (AISP), 2015 International Symposium on (pp. 59-64). IEEE.
[44] Rashno, A., & Sadri, S. (2017, April). Content-based image retrieval with color and texture features in
neutrosophic domain. In Pattern Recognition and Image Analysis (IPRIA), 2017 3rd International Conference
on (pp. 50-55). IEEE.
[45] Rashno, A., Smarandache, F., & Sadri, S. (2017, November). Refined neutrosophic sets in content-based
image retrieval application. In Machine Vision and Image Processing (MVIP), 2017 10th Iranian Conference
on (pp. 197-202). IEEE.
[46] Rashno, A., Tabataba, F. S., & Sadri, S. (2014). Image restoration with regularization convex optimization
approach. Journal of Electrical Systems and Signals, 2(2), 32-36.
[47] Rashno, A., Tabataba, F. S., & Sadri, S. (2014, October). Regularization convex optimization method with
l-curve estimation in image restoration. In Computer and Knowledge Engineering (ICCKE), 2014 4th
International eConference on(pp. 221-226). IEEE.
[48] R. Gonzalez, R.E. Woods, Digital Image Processing and Scene Analysis, third ed., Prentice Hall, 2008.
[49] Rashedi, Esmat, Hossein Nezamabadi-Pour, and Saeid Saryazdi. "A simultaneous feature adaptation and
feature selection method for content-based image retrieval systems." Knowledge-Based Systems 39 (2013): 85-
94.
[50] Rashno Abdolreza, Hossein SadeghianNejad, and Abed Heshmati. "Highly Efficient Dimension Reduction
for Text-Independent Speaker Verification Based on Relieff Algorithm and Support Vector Machines."
International Journal of Signal Processing, Image Processing & Pattern Recognition 6.1 (2013).
[51] Remco C. Veltkamp, Mirela Tanase, “Content-Based Image Retrieval Systems: A Survey”,
[52] S.Arivazhagan, L.Ganesan, S.Selvanidhyananthan, “ Image Retrieval using Shape Feature”, international
journal of imaging science and engineering (IJISE),GA,USA,ISSN:1934-9955,VOL.1,NO.3, JULY 2007 .
[53] Song, M. "Wavelet image compression." Contemporary Mathematics 414 (2006): 41.
[54] Tu, Gang Jun, and Henrik Karstoft. "Logarithmic dyadic wavelet transform with its applications in edge
detection and reconstruction." Applied Soft Computing 26 (2015): 193-201.
[55] Wang, Xiang-Yang, Yong-Jian Yu, and Hong-Ying Yang. "An effective image retrieval scheme using
color, texture and shape features." Computer Standards & Interfaces 33.1 (2011): 59-68.
[56] Ying Liu, Dengsheng Zhang, Guojun Lu, Wei-Ying Ma, “A survey of content-based image retrieval with
high-level semantics”, Pattern Recognition 40 (2007) 262– 282.
[57] Rashno, E., Norouzi, S. S., Minaei-bidgoli, B., & Guo, Y. (2018). Certainty of outlier and boundary points
processing in data mining. arXiv preprint arXiv:1812.11045.
[58] Rashno, E., Minaei-Bidgolia, B., & Guo, Y. (2018). An effective clustering method based on data
indeterminacy in neutrosophic set domain. arXiv preprint arXiv:1812.11034.
[59] Rashno, E., Akbari, A., & Nasersharif, B. (2019). A Convolutional Neural Network model based on
Neutrosophy for Noisy Speech Recognition. arXiv preprint arXiv:1901.10629.
[60] Yong Rui and Thomas S. Huang, “Image Retrieval: Current Techniques, Promising Directions, and Open
Issues”, Journal of Visual Communication and Image Representation 10, 39–62 (1999).
