
2016 XI International Symposium on Telecommunications (BIHTEL)

October 24-26, 2016, Sarajevo, Bosnia and Herzegovina

Application of Neural Networks to Compression of CT Images

Emir Turajlić
Faculty of Electrical Engineering,
University of Sarajevo
Sarajevo, Bosnia and Herzegovina
emir.turajlic@etf.unsa.ba

Abstract—Efficient compression of medical images is needed to decrease the storage space and enable efficient image transfer over networks for access to electronic patient records. Since medical images contain diagnostically relevant information, it is necessary for the process of image compression to preserve high levels of image fidelity, especially when the images are compressed at low bit rates. This paper investigates the capacity of an artificial neural network framework for medical image compression. Specifically, the performance of the proposed image compression method is evaluated on a database of computed tomography images of lungs, where PSNR and MSE are used as the principal image quality metrics. The compressed image data is derived from the hidden layer outputs, where the artificial neural networks are trained to reconstruct the network input features. The results of image block segmentation are used as the network training features. The paper proposes the use of Kohonen's self-organizing maps for segmentation of the feature space and the use of multiple finely tuned multi-layer perceptrons to achieve improved compression performance. This paper presents a study on how the choice of block size, network architecture, and training method affects the compression performance. An attempt is made to optimize the artificial neural network framework for the compression of computed tomography lung images.

Keywords—Signal processing, Image processing, Artificial intelligence.

I. INTRODUCTION

For a long time, image compression has had an important role in the development of a range of multimedia computer services and telecommunication applications, including teleconferencing, digital broadcast codecs and video technology. With the recent significant growth in e-health, telemedicine, teleconsultation and teleradiology, there is an increasing research interest in the field of medical image compression [1]. Digital representation of medical images has an indispensable role in medical diagnostics, and thus, it is necessary for image compression to effectively preserve the resolution as well as the perceptual quality of medical images. In fact, the principal aim of image compression is to impose the least amount of degradation on the diagnostically relevant information, while enabling effective archiving and transfer of medical images with respect to the available communication and storage channels [2]. Another important issue that needs to be considered in medical image compression is the fact that the acceptable fidelity and compression ratios differ among various types of medical images, e.g. CT, MRI, etc.

Digital image compression methods can be broadly classified into two groups, lossy and lossless methods [3]. Lossless techniques achieve compression by exploiting statistical and/or spatial redundancies in an image and as such are able to recover the original image perfectly. On the other hand, lossy image compression methods can also reduce psychovisual image redundancies and can generally attain much higher compression ratios, at the cost of irreversibly degrading image quality.

Some examples of widely employed lossless image compression methods include arithmetic encoding [4], Huffman encoding [5] and run-length encoding [3]. Successful approaches to lossy image compression include fractal coding [6-8], vector quantization [9, 10], DCT transform coding [11-13], wavelet transforms [14-16] and neural network based compression [17-20].

Artificial neural networks have been successfully applied to a broad spectrum of applications, from speech processing [21], medicine [22] and power engineering [23], to finance [24]. This paper investigates the capacity of artificial neural networks for compression of computed tomography images of lungs. Artificial neural network based image compression commonly relies on adopting a feedforward neural network structure that is trained to reconstruct the input features at its output. Since there is a smaller number of neurons in the hidden layer compared to the input and output layers of the network, the outputs of the hidden layer in fact represent the compressed image data. Commonly, network training relies on block segmentation of images to generate the training data for the network.

This paper investigates how the image segmentation block size, network architecture, and training method influence the image compression performance. In addition, the paper proposes the use of Kohonen's Self-Organizing Maps (SOM) for feature space segmentation and the use of multiple finely tuned multi-layer perceptrons to improve the compression performance for computed tomography images of lungs. An attempt is made to optimize the artificial neural network based image compression method so as to attain the best CT image quality, as defined by various objective quality measures, at a given bit rate.

978-1-5090-2902-0/16/$31.00 ©2016 IEEE


Fig. 1. A multilayer perceptron

Fig. 2. Self-organizing maps

The remainder of this paper is organized as follows. In Section II, a review of the multilayer perceptron, along with the considered training methods, is presented. A brief overview of self-organizing maps is also presented in this section. The proposed system for compression of CT images of lungs is presented in Section III. Section IV presents and discusses the experimental results. Section V concludes the paper.

II. ARTIFICIAL NEURAL NETWORKS

A. A Multilayer Perceptron

Fig. 1 presents a schematic diagram of a three-layer multilayer perceptron consisting of an input layer with L inputs, a hidden layer with H neurons and an output layer with K neurons. Each neuron computes the weighted sum of its inputs and subsequently passes the result through an activation function to obtain the neuron response. Here, the input and output layers are fully connected to the hidden layer. The outputs of the hidden layer are passed to the decoder, which consists of the neural network structure that links the outputs of the hidden layer to the outputs of the neural network. In the process of image compression, the number of inputs, L, is equated to the number of outputs, K, and the neural network is trained to reconstruct the input features at its output. Thus, image compression is achieved by selecting a smaller number of neurons in the hidden layer, H, relative to the input layer size.

The network training data is obtained through the process of block image segmentation, where the entire image is divided into rectangular NxN blocks. Each block is vectorised to form a feature vector that is used both as an input and as a target vector during the training process. Thus, the image segmentation block size is directly related to the dimensionality of the input feature vectors. Generally, the most important aspects of artificial neural network design are the network architecture and the choice of training method. The choice of image segmentation block size and the choice of the number of neurons in the hidden layer have a direct effect on the attained compression ratios. Their effect on the quality of image reconstruction is studied further in this paper. This paper considers two neural network training methods, specifically backpropagation (gradient descent) and the Scaled Conjugate Gradient (SCG) algorithm. Supervised network learning is an iterative procedure of the general form:

w_{m+1} = w_m + \Delta w_m = w_m + u_m p_m, \quad m \geq 0    (1)

where w_m denotes the weight vector at the m-th iteration, and u_m and p_m represent the learning rate and the direction of weight adaptation, respectively. While the weight update under standard backpropagation is in the negative direction of the error function gradient, the conjugate gradient search uses a second-order approximation of the error function. Conjugate gradient methods are well suited to handling large-scale problems effectively [25, 26]. Scaled conjugate gradient adopts the Levenberg-Marquardt approach to scale the learning rate and has been demonstrated to be significantly faster than standard backpropagation and other conjugate gradient methods [26].
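To make the update rule in (1) concrete, the following minimal sketch (an illustration, not the authors' implementation; the random stand-in data, layer sizes and the plain gradient-descent choice p_m = -gradient are assumptions) trains a bottleneck network to reconstruct its input features:

    import numpy as np

    rng = np.random.default_rng(0)
    L, H = 64, 8                      # 8x8 blocks give L = 64 inputs; H hidden neurons
    X = rng.random((4096, L))         # stand-in for vectorised image blocks in [0, 1]

    W1 = rng.normal(0.0, 0.05, (L, H)); b1 = np.zeros(H)   # input -> hidden (encoder)
    W2 = rng.normal(0.0, 0.05, (H, L)); b2 = np.zeros(L)   # hidden -> output (decoder)
    u = 0.1                                                # learning rate u_m in eq. (1)

    for m in range(500):
        Z = np.tanh(X @ W1 + b1)      # hidden layer outputs: the compressed data
        R = Z @ W2 + b2               # reconstruction of the input features
        E = R - X
        # backpropagation: p_m is the negative gradient of the MSE error function
        gW2 = Z.T @ E / len(X);        gb2 = E.mean(axis=0)
        dZ = (E @ W2.T) * (1 - Z**2)   # tanh derivative
        gW1 = X.T @ dZ / len(X);       gb1 = dZ.mean(axis=0)
        # eq. (1): w_{m+1} = w_m + u_m * p_m, with p_m = -gradient
        W1 -= u * gW1; b1 -= u * gb1
        W2 -= u * gW2; b2 -= u * gb2

An SCG trainer would keep the same loop but replace the fixed learning rate and negative-gradient direction with the scaled conjugate directions of [26].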
B. Kohonen's Self-Organizing Maps

Kohonen's Self-Organizing Maps (SOM) correspond to a feed-forward artificial neural network with a single computational layer that adopts an unsupervised, competitive form of learning to produce a low-dimensional representation of the input space. SOMs are able to adaptively transform any incoming signal pattern of arbitrary dimension into a low-dimensional map and, in the process, preserve the topological ordering. Although higher-dimensional maps are also possible, Kohonen's self-organizing maps are typically used to produce one- or two-dimensional discrete maps. Fig. 2 illustrates a self-organizing map with a two-dimensional grid topology.

The process of self-organization involves four distinct elements: a) initialization, where small random values are assigned to the connection weights; b) competition, where a discriminant function is used to declare a single neuron the competition winner; c) cooperation, where a topological neighborhood, determined by the location of the winning neuron, provides the basis for cooperation among neighboring neurons; d) adaptation, where the connection weights are adjusted in order to decrease the corresponding discriminant function values in relation to the input pattern. In this paper, the Euclidean distance is adopted as the discriminant function, and the weight update rule is defined as:

\Delta w_{ji} = \eta(t) \, T_{j,I(x)}(t) \, (x_i - w_{ji})    (2)

where x_i represents the i-th element of the input pattern x. Here, w_{ji} denotes the network weight connecting the i-th input unit and the j-th neuron in the output layer, while T_{j,I(x)}(t) represents the size of the topological neighborhood as a function of the winning neuron index I(x) and time (epoch). Finally, the weight update uses an exponentially decreasing, time-dependent learning rate, denoted as \eta(t). In this paper, one-dimensional self-organizing maps are used to partition the input space into a small number of subspaces and, as such, facilitate the use of multiple finely tuned multilayer perceptrons to perform image compression.
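As an illustration of the self-organization steps a)-d) and the update rule (2), the sketch below (map size, decay schedules and the Gaussian neighborhood function are assumptions, not taken from the paper) trains a one-dimensional SOM and then routes each feature vector to a subspace:

    import numpy as np

    rng = np.random.default_rng(0)
    M, L = 2, 64                        # M map units (subspaces), L-dimensional inputs
    X = rng.random((1024, L))           # stand-in feature vectors
    W = rng.random((M, L)) * 0.01       # a) initialization: small random weights

    epochs, eta0, sigma0 = 20, 0.5, M / 2.0
    for t in range(epochs):
        eta = eta0 * np.exp(-t / epochs)                   # exponentially decreasing eta(t)
        sigma = max(sigma0 * np.exp(-t / epochs), 0.5)     # shrinking neighborhood width
        for x in X:
            d = np.linalg.norm(W - x, axis=1)              # b) competition: Euclidean discriminant
            win = int(np.argmin(d))                        # winning neuron index I(x)
            j = np.arange(M)
            T = np.exp(-(j - win) ** 2 / (2 * sigma**2))   # c) cooperation: T_{j,I(x)}(t)
            W += eta * T[:, None] * (x - W)                # d) adaptation: eq. (2)

    # partition: assign each vector to the subspace of its winning unit
    labels = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)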
III. IMAGE COMPRESSION

Fig. 3 illustrates the proposed image compression method. In the first stage of the image compression process, a computed tomography image of lungs is segmented into rectangular NxN blocks. The blocks are converted into vector form to produce the input and target patterns for supervised artificial neural network learning. The training of this artificial neural network, labeled ANN, involves the available dataset in its entirety. In the next stage, the outputs of this artificial neural network are passed to a Kohonen self-organizing map to partition the feature space and enable the training of multiple artificial neural networks in the subsequent stage of the image compression process. These multiple artificial neural networks, labeled ANN 1 to ANN M, are trained to process only the specific input subspaces defined by the trained self-organizing map. The design of each artificial neural network involves two stages. Firstly, all individual artificial neural networks in this hierarchical layer of the encoder are initialized as the previously trained ANN, for which the entire available dataset was used in training. Subsequently, each neural network is trained on a particular subset of the available feature vectors. This two-stage process ensures that each of the networks, ANN 1 to ANN M, represents a finely tuned version of the ANN network that is specifically designed to process a particular input subspace.

After the training of the unsupervised and supervised neural networks is completed, the process of image encoding is rather straightforward. Image segmentation and vectorization transform an image into a set of feature vectors. Each of these vectors is allocated to one particular finely tuned neural network according to the SOM outputs. That neural network processes the feature vector, and its hidden layer outputs are passed to the decoder. Since the number of hidden layer neurons is significantly smaller than the feature vector length, image compression is achieved.
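A minimal sketch of this encoding path is given below; the helper names (segment, encode_image, som_w, enc) are hypothetical, and the trained parameters are assumed to come from the procedures sketched in Section II:

    import numpy as np

    def segment(img, n):
        """Segment an image into n x n blocks and vectorise each block."""
        h, w = img.shape
        return img.reshape(h // n, n, w // n, n).swapaxes(1, 2).reshape(-1, n * n)

    def encode_image(img, n, som_w, enc):
        """som_w: (M, L) trained SOM weights; enc[i]: (W1_i, b1_i) hidden-layer
        parameters of the finely tuned ANN i. Returns the compressed codes."""
        V = segment(img, n).astype(np.float64) / 255.0
        # allocate each feature vector to a finely tuned network via the SOM
        labels = np.argmin(np.linalg.norm(V[:, None, :] - som_w[None], axis=2), axis=1)
        H = enc[0][0].shape[1]
        codes = np.empty((len(V), H))
        for i, (W1, b1) in enumerate(enc):
            sel = labels == i
            codes[sel] = np.tanh(V[sel] @ W1 + b1)   # hidden layer outputs = compressed data
        return codes

Note that only the hidden layer outputs are passed on; the network allocation is not transmitted, since the decoder re-derives it itself, as described next.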
Fig. 3. Block diagram of the encoder

The process of image reconstruction is illustrated in Fig. 4. In the decoder design, the principal challenge is presented by the fact that the decoder inputs can come from M different artificial neural networks in the encoder, and the decoder must ascertain which specific neural network an input comes from before it can make an accurate reconstruction of that specific image segment. Thus, the decoder functionality requires two distinct stages: the allocation of an encoder input to a specific neural network, and the reconstruction of the image segment associated with that encoder input.

In general, the reconstruction of image segments requires only the part of the ANN structure that links the hidden layer outputs to the ANN outputs. In order to make this distinction clear, the superscript H is used to denote the reduced neural network structure. Thus, ANN^H represents the reduced ANN network.

Initially, in the process of neural network identification, a coarse reconstruction of the feature vectors is performed. At this stage, when the encoder input is not yet associated with a particular finely tuned network, ANN constitutes the best means for feature vector reconstruction. This neural network can be thought of as the average of the set of finely tuned networks. The ANN^H outputs are passed to the SOM network, where the reconstructed feature vector is associated with the particular finely tuned neural network.

Fig. 4. Image reconstruction
In the second stage, an improved reconstruction of the feature vector can be obtained. If a particular encoder input is the result of some feature vector being processed by the i-th neural network, ANN i, at the encoder, then, by correctly identifying the i-th neural network at the decoder, ANN^H i can be used to obtain a more accurate reconstruction of that feature vector. Once the feature vectors are reconstructed, the inverse process of image segmentation and vectorization can be applied to reconstruct the image.

Note that the performance of image reconstruction relies on the assumption that neural network identification can be performed with high levels of accuracy. The results of the experimental evaluation presented later in this paper show that this is indeed the case.
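Under the same hypothetical names as the encoder sketch, the two-stage decoder could look as follows, with dec0 holding the reduced generic network ANN^H and dec[i] the reduced finely tuned networks ANN^H_i:

    import numpy as np

    def decode_codes(codes, som_w, dec0, dec):
        """dec0: (W2, b2) of ANN^H; dec[i]: (W2_i, b2_i) of ANN^H_i."""
        W2, b2 = dec0
        coarse = codes @ W2 + b2                  # stage 1: coarse reconstruction via ANN^H
        # identify the finely tuned network by passing the coarse vectors to the SOM
        labels = np.argmin(np.linalg.norm(coarse[:, None, :] - som_w[None], axis=2), axis=1)
        out = np.empty_like(coarse)
        for i, (W2i, b2i) in enumerate(dec):      # stage 2: refined reconstruction via ANN^H_i
            sel = labels == i
            out[sel] = codes[sel] @ W2i + b2i
        return out

    def desegment(vectors, n, h, w):
        """Inverse of block segmentation and vectorization."""
        return vectors.reshape(h // n, w // n, n, n).swapaxes(1, 2).reshape(h, w)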

IV. RESULTS AND DISCUSSION

The experiments are conducted on a subset of the ELCAP database that includes 20 CT images of lungs [27]. An example of a 512x512, 8-bit image from the selected database subset is presented in Fig. 5 a). The entire ELCAP public image database consists of 50 low-dose documented whole-lung CT scans, obtained in a single breath hold with a 1.25 mm slice thickness.

Fig. 5. a) An example of a CT lung image; the results of applying the proposed method: b) N=4, H=8, PSNR=32.93 dB; c) N=8, H=8, PSNR=28.31 dB; d) N=16, H=6, PSNR=25.16 dB
The quality of image reconstruction is reported in terms of objective image quality metrics, namely the mean square error (MSE) and the peak signal-to-noise ratio (PSNR). The mean square error of the reconstructed image is defined as

MSE = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left[ X(i,j) - R(i,j) \right]^2    (3)

where X and R denote the original and the reconstructed m x n images, respectively. On the other hand, the peak signal-to-noise ratio (in dB) is defined as

PSNR = 10 \log_{10} \left( \frac{X_{max}^2}{MSE} \right), \quad X_{max} = 2^B - 1    (4)

where X_{max} denotes the maximum possible pixel value of the image X, while B denotes the number of bits per sample. The reported results constitute the average PSNR and MSE across the database of CT images of lungs.
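In code, the two metrics amount to a few lines (a direct rendering of (3) and (4); for the 8-bit images used here, B = 8):

    import numpy as np

    def mse(x, r):
        """Eq. (3): mean square error between original x and reconstruction r."""
        x = np.asarray(x, dtype=np.float64)
        r = np.asarray(r, dtype=np.float64)
        return np.mean((x - r) ** 2)

    def psnr(x, r, bits=8):
        """Eq. (4): peak signal-to-noise ratio in dB, with x_max = 2**bits - 1."""
        x_max = 2.0 ** bits - 1.0
        return 10.0 * np.log10(x_max ** 2 / mse(x, r))

The reported figures are then the averages of these values over the 20 test images.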
One of the objectives of this paper is to study how the different options for the artificial neural network training method, the network architecture and the image segmentation block size affect the quality of image reconstruction, in terms of the defined objective quality measures. For this purpose, a single artificial neural network is used to perform image compression and image reconstruction. In the first experiment, the image compression performance for two neural network training algorithms is evaluated. Specifically, the scaled conjugate gradient algorithm is compared to backpropagation (gradient descent), which is commonly reported in the literature as the principal choice of network training method [18].

In this experiment, the image segmentation block size is kept constant at 8x8, while the number of hidden neurons is varied between 4 and 12 in increments of 2. The results are reported in Table I. The average improvement in image compression performance across the range of selected hidden layer neuron numbers is 0.89 dB. The biggest reported difference between the performances of the two algorithms is 0.96 dB, which corresponds to the neural network with 4 hidden layer neurons. These results clearly show that the scaled conjugate gradient is a more effective option for training the artificial neural network to perform image compression than the more commonly used backpropagation algorithm. The inherent complexity of the training process involved in applying artificial neural networks to image compression is the principal reason why the choice of network training algorithm can influence the compression performance so significantly.

TABLE I. IMAGE COMPRESSION PERFORMANCE (PSNR, dB) FOR DIFFERENT CHOICES OF NETWORK TRAINING ALGORITHM

Training Algorithm    H=4      H=6      H=8      H=10     H=12
Backpropagation       25.71    26.83    27.23    28.01    28.37
SCG                   26.67    27.76    28.04    28.88    29.25

Let us consider the previous example. Here, image segmentation is based on 8x8 blocks, and thus each feature vector consists of 64 elements. Furthermore, for a single 512x512 image, there are 4096 feature vectors in the training set. Considering the overall dimensionality of the training set, it is clear that neural network based image compression requires an efficient neural network learning algorithm.

Since it has been demonstrated that the scaled conjugate gradient outperforms the backpropagation algorithm in ANN applications to CT image compression, in all the remaining experiments only the scaled conjugate gradient algorithm is used to perform supervised neural network learning.
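The dimensionality quoted above (4096 feature vectors of 64 elements each for a single 512x512 image under 8x8 segmentation) is easy to verify with a stand-in array; the reshape below is one common way to extract non-overlapping blocks:

    import numpy as np

    img = np.zeros((512, 512), dtype=np.uint8)   # stand-in for a 512x512 CT slice
    n = 8
    V = img.reshape(512 // n, n, 512 // n, n).swapaxes(1, 2).reshape(-1, n * n)
    print(V.shape)                               # (4096, 64): 4096 vectors, 64 elements each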
In the next experiment, various neural network architectures are examined. The number of elements in the input layer is directly related to the size of the blocks used to perform image segmentation. In order to avoid image truncation or padding, only 4x4, 8x8 and 16x16 blocks are considered. For each considered block size, the number of hidden layer neurons is varied in a range between 4 and 16. Recall that, at the neural network level, the image compression ratio is equated with the ratio between the size of the network input layer and the size of the hidden layer. Thus, the highest considered compression ratio is 64:1, while the lowest considered CR value is 1.33:1.
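For concreteness, the network-level compression ratio follows directly from the block size N and the hidden layer size H, and the two quoted extremes work out as:

CR = \frac{N^2}{H}, \qquad \frac{16^2}{4} = \frac{256}{4} = 64{:}1, \qquad \frac{4^2}{12} = \frac{16}{12} \approx 1.33{:}1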
TABLE II. PSNR VALUES (dB) FOR DIFFERENT CHOICES OF BLOCK SIZE AND THE NUMBER OF HIDDEN NEURONS

NxN Block Size    H=4      H=6      H=8      H=10     H=12     H=16
4x4               29.72    31.22    32.63    34.28    36.14    36.80
8x8               26.67    27.76    28.04    28.88    29.25    29.84
16x16             24.22    24.83    25.90    26.46    26.70    27.55

TABLE III. MSE VALUES FOR DIFFERENT CHOICES OF BLOCK SIZE AND THE NUMBER OF HIDDEN NEURONS

NxN Block Size    H=4       H=6       H=8       H=10      H=12      H=16
4x4               69.36     49.10     35.49     24.27     15.82     13.59
8x8               139.98    108.91    102.11    84.16     77.28     67.50
16x16             246.08    213.84    167.14    146.91    139.02    114.30

The PSNR and MSE results are reported in Table II and Table III, respectively. These results demonstrate that, irrespective of the image segmentation block size and given that everything else is equal, the quality of the reconstructed image increases with an increasing number of hidden layer neurons. This is expected, as any increase in the number of hidden layer neurons lowers the compression ratio. It is worth noting that, for the same choice of hidden layer neuron number, the use of 16x16 segmentation blocks implies a 4 times higher compression ratio compared to 8x8 blocks, and a 16 times higher compression ratio compared to 4x4 blocks. This has to be taken into consideration when interpreting the presented PSNR and MSE results. The PSNR and MSE values associated with the 16x16 block size and H=16 are comparable to those obtained for the 8x8 block size and H=4. Similarly, the results associated with the 8x8 block size and H=16 are comparable to those obtained when the 4x4 block size and H=4 hidden layer neurons are used.

Thus, one can conclude that, for the same compression ratio, marginally better PSNR and MSE performances can be attained when the size of the image segmentation blocks is larger. However, a further increase in block size, e.g. to 32x32 and higher, significantly increases the feature vector dimensionality, which imposes a severe constraint on the network's ability to learn and exploit the spatial correlation present in CT images of lungs. Conversely, a small image segmentation block size, e.g. 4x4, places a strong constraint on the compression ratios that can be achieved, since the hidden layer of a neural network must contain a sufficient number of neurons for meaningful image reconstruction.

In the final experiment, the proposed method is evaluated for a range of image segmentation block sizes and a range of hidden neuron numbers. The proposed method is realized on the basis of two finely tuned ANN networks performing image compression. The corresponding PSNR results are reported in Table IV.

TABLE IV. PROPOSED METHOD: PSNR VALUES (dB) FOR DIFFERENT CHOICES OF BLOCK SIZE AND THE NUMBER OF HIDDEN NEURONS

NxN Block Size    H=4      H=6      H=8      H=10     H=12     H=16
4x4               29.90    31.53    32.93    34.56    36.44    37.17
8x8               26.96    28.13    28.31    29.27    29.49    30.22
16x16             24.53    25.16    26.23    26.83    27.05    27.89

When comparing the performance of the proposed method with the image compression performance based on a single neural network, the proposed method offers an average improvement of 0.32 dB across the image segmentation block sizes and hidden neuron numbers.

The results of applying the proposed method to an example CT lung image for three different neural network architectures are shown in Fig. 5. Visual inspection of the reconstructed images confirms the objective image quality ratings. When 16x16 image segmentation is employed in conjunction with a hidden layer size of 6 neurons, to attain a compression ratio of approximately 43:1, a blocking effect becomes noticeable around the edge separating the lung CT image from the black background.

It should be noted that any further fragmentation of the input space does not yield any notable improvement in the image compression performance. Thus, these results are not reported.

As previously discussed in Section III, an important aspect of the decoder performance is the ability to correctly relate the encoder inputs to the hidden layer outputs of a specific finely tuned neural network. On average, the evaluated network identification accuracy is 99.3%. The ability to correctly identify a finely tuned neural network is also evident in the overall improved image compression performance.
V. CONCLUSION

This paper studies how different options for the image segmentation block size, the network architecture, and the training method affect the image compression performance. An attempt is made to optimize the performance of neural network based compression of computed tomography images of lungs. The use of an unsupervised form of learning, specifically Kohonen's self-organizing maps, to perform the segmentation of the feature space, together with the use of multiple neural network architectures, is also examined. The presented experiments are performed on a database of 20 computed tomography images of lungs, and the results are reported in terms of the attained compression ratios and objective image quality metrics, namely PSNR and MSE. The results demonstrate that the scaled conjugate gradient is a more effective training algorithm than the backpropagation algorithm in the application of artificial neural networks to image compression. The results of evaluating the image compression performance across a range of image segmentation block sizes and hidden neuron numbers show that, for the same compression ratios, up to a certain point, an increasing size of the image segmentation blocks offers better performance. Furthermore, it has been found that small image segmentation blocks place a constraint on the ANN's ability to achieve high compression ratios, while overly large image segmentation blocks increase the feature space dimensionality and thus impair the capacity of an artificial neural network to learn. The proposed algorithm offers performance improvements in the compression of computed tomography images of lungs across the range of considered experimental setups.

REFERENCES

[1] J. S. Duncan and N. Ayache, "Medical Image Analysis: Progress over Two Decades and the Challenges Ahead," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 85-106, 2000.
[2] P. Cosman and R. Gray, "Evaluating Quality of Compressed Medical Images: SNR, Subjective Rating, and Diagnostic Accuracy," Proceedings of the IEEE, vol. 82, pp. 919-932, 1994.
[3] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Pearson Education, 2011.
[4] S. Nigar Sulthana and Mahesh Chandra, "Image Compression with Adaptive Arithmetic Coding," International Journal of Computer Applications, 2011.
[5] T. Bruylants, A. Munteanu and P. Schelkens, "Wavelet based volumetric medical image compression," Elsevier, December 2014.
[6] Jianji Wang and Nanning Zheng, "A Novel Fractal Image Compression Scheme with Block Classification and Sorting Based on Pearson's Correlation Coefficient," IEEE Transactions on Image Processing, vol. 22, no. 9, September 2013.
[7] J. H. Jeng, C. C. Tseng and J. G. Hsieh, "Study on Huber fractal image compression," IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 995-1003, 2009.
[8] S. Bhavani and K. G. Thanushkodi, "Comparison of fractal coding methods for medical image compression," IET Image Processing, vol. 7, pp. 686-693, 2013.
[9] Huiyan Jiang, Zhiyuan Ma, Yang Hu, Benqiang Yang and Libo Zhang, "Medical image compression based on vector quantization with variable block sizes in wavelet domain," Computational Intelligence and Neuroscience, Special Issue on Computational Intelligence in Biomedical Science and Engineering, 2012.
[10] W. Xu, A. K. Nandi and J. Zhang, "Novel fuzzy reinforced learning vector quantisation algorithm and its application in image compression," IEE Proceedings - Vision, Image and Signal Processing, vol. 150, no. 5, pp. 292-298, 2003.
[11] Yen-Yu Chen, "Medical image compression using DCT-based subband decomposition and modified SPIHT data organization," International Journal of Medical Informatics, vol. 76, pp. 717-725, 2007.
[12] Yung-Gi Wu and Shen-Chuan Tai, "Medical image compression by discrete cosine transform spectral similarity strategy," IEEE Transactions on Information Technology in Biomedicine, vol. 5, pp. 236-243, 2001.
[13] S. G. Miaou, F. S. Ke and S. C. Chen, "A Lossless Compression Method for Medical Image Sequences Using JPEG-LS and Interframe Coding," IEEE Transactions on Information Technology in Biomedicine, vol. 13, pp. 818-821, 2009.
[14] T. Bruylants, A. Munteanu and P. Schelkens, "Wavelet based volumetric medical image compression," Signal Processing: Image Communication, vol. 31, pp. 112-133, 2015.
[15] N. Sriraam and R. Shyamsunder, "3-D medical image compression using 3-D wavelet coders," Digital Signal Processing (Elsevier), vol. 21, pp. 100-109, 2013.
[16] C. L. Chang and B. Girod, "Direction-adaptive discrete wavelet transform for image compression," IEEE Transactions on Image Processing, vol. 16, pp. 1289-1302, 2007.
[17] J. Stark, "Iterated function systems as neural networks," Neural Networks, vol. 4, no. 5, pp. 679-690, 1992.
[18] M. Mougeot, R. Azencott and B. Angeniol, "Image compression with back propagation: improvement of the visual restoration using different cost functions," Neural Networks, vol. 4, no. 4, pp. 467-476, 1991.
[19] F. Hussain and J. Jeong, "Efficient Deep Neural Network for Digital Image Compression Employing Rectified Linear Neurons," Journal of Sensors, pp. 1-7, 2016.
[20] C. Amerijckx et al., "Image Compression by Self-Organized Kohonen Maps," IEEE Transactions on Neural Networks, vol. 9, no. 3, May 1998.
[21] E. Turajlic and O. Bozanovic, "Neural network based speaker verification for security systems," Telecommunications Forum (TELFOR), pp. 740-743, 2012.
[22] E. Turajlic, Dzenan Softic and Ehsan Eydi, "ECG Diagnostics based on the Filter-Bank Signal Processing and ANN/SVM Classification," Proceedings of the 7th European Computing Conference (ECC '13), pp. 185-191, 2013.
[23] E. Turajlic and D. Softic, "Classification of Power Quality Disturbances Using Artificial Neural Networks and a Logarithmically Compressed S-Transform," Neural Information Processing (ICONIP 2012), Lecture Notes in Computer Science, vol. 7664, pp. 608-615, 2012.
[24] A. Bahrammirzaee, "A Comparative Survey of Artificial Intelligence Applications in Finance: Artificial Neural Networks, Expert System and Hybrid Intelligent Systems," Neural Computing & Applications, vol. 19, pp. 1165-1195, 2010.
[25] P. E. Gill, W. Murray and M. H. Wright, Practical Optimization, New York: Academic Press, 1980.
[26] M. F. Moller, "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning," Neural Networks, vol. 6, no. 4, pp. 525-533, 1993.
[27] ELCAP public image database, http://www.via.cornell.edu/lungdb.html
