
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
Web Site: www.ijettcs.org   Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 2, Issue 2, March-April 2013   ISSN 2278-6856


NOVEL TECHNIQUE FOR IMPROVING THE METRICS OF JPEG COMPRESSION SYSTEM

N. Baby Anusha, K. Deepika and S. Sridhar

JNTUK, Lendi Institute of Engineering & Technology,
Dept. of Electronics and Communication, India

Abstract: JPEG (Joint Photographic Experts Group) is an international compression standard for continuous-tone still images, both grayscale and color. The JPEG standard supports two basic compression methods: the DCT-based lossy method and the prediction-based lossless method. The DCT-based lossy method is widely used today in a large number of applications; it converts a signal into its elementary frequency components. This work involves the design and implementation of a JPEG encoder and decoder for the compression of grayscale as well as color images. Compression of the input image applies the Discrete Cosine Transform and quantization, followed by a zigzag scan and run-length encoding, while decompression performs the same operations in reverse order.
Keywords: Discrete Cosine Transform (DCT), JPEG Image Compression, Quantization, Run Length Coding (RLC), Zigzag Scan.
1. INTRODUCTION
Image compression deals with the application of data compression techniques to digital images. Digital representation of analog signals requires a huge amount of storage, and it has always been a great challenge to transfer such files within the constraints of limited bandwidth and storage.
Unlike all of the other compression methods, JPEG is
not a single algorithm. Instead, it may be thought of as a
toolkit of image compression methods that may be altered
to fit the needs of the user. JPEG may be adjusted to
produce very small, compressed images that are of
relatively poor quality in appearance but still suitable for
many applications. Conversely, JPEG is capable of
producing very high-quality compressed images that are
still far smaller than the original uncompressed data.
2. IMAGE COMPRESSION
FUNDAMENTALS
The need for image compression becomes apparent when the number of bits per image resulting from typical sampling rates and quantization methods is computed.
2.1 PRINCIPLES BEHIND COMPRESSION
The number of bits required to represent the information in an image can be minimized by removing the redundancy present in it. There are three types of redundancy: (i) spatial redundancy, which is due to the correlation or dependence between neighboring pixel values; (ii) spectral redundancy, which is due to the correlation between different color planes or spectral bands; and (iii) temporal redundancy, which is due to the correlation between successive frames in an image sequence. Image compression research aims to reduce the number of bits required to represent an image by removing the spatial and spectral redundancies as much as possible.
Data redundancy is a central issue in digital image compression. If n1 and n2 denote the number of information-carrying units in the original and compressed image respectively, then the compression ratio is defined as CR = n1/n2 and the relative data redundancy of the original image as RD = 1 - 1/CR.
Three possibilities arise here:
(1) If n1 = n2, then CR = 1 and RD = 0, which implies that the original image contains no redundancy between pixels.
(2) If n1 >> n2, then CR is very large and RD approaches 1, which implies a considerable amount of redundancy in the original image.
(3) If n1 << n2, then CR approaches 0 and RD becomes large and negative, which indicates that the "compressed" image contains more data than the original image.
2.2 IMAGE COMPRESSION
Image compression is very important for efficient transmission and storage of images. Demand for communicating multimedia data through telecommunications networks and for accessing multimedia data over the Internet is growing explosively. With the widespread use of digital cameras, the requirements for storage, manipulation, and transfer of digital images have also grown rapidly. These image files can be very large and occupy a lot of memory: a 256 x 256 grayscale image has 65,536 elements to store, and a typical 640 x 480 color image has nearly a million. Downloading such files from the Internet can be a very time-consuming task. Image data comprises a significant portion of multimedia data and occupies the major portion of the communication bandwidth in multimedia communication. Therefore, the development of efficient techniques for image compression has become quite necessary. A common characteristic of most images is
that the neighboring pixels are highly correlated and
therefore contain highly redundant information. The basic
objective of image compression is to find an image
representation in which pixels are less correlated. The
two fundamental principles used in image compression are redundancy reduction and irrelevancy reduction. Redundancy reduction removes duplication from the signal source, while irrelevancy reduction omits pixel information that is not noticeable to the human eye. JPEG and JPEG 2000 are two important techniques used for image compression.
Many committees have been formed to produce de jure standards (such as JPEG), while several commercially successful initiatives have effectively become de facto standards (such as GIF).
Image compression standards bring about many benefits,
such as:
(1) easier exchange of image files between different
devices and applications;
(2) reuse of existing hardware and software for a wider
array of products;
(3) existence of benchmarks and reference data sets for
new and alternative developments.
As our use of and reliance on computers continues to grow, so too does our need for efficient ways of storing large amounts of data. For example, someone with a web page or online catalog that uses dozens or perhaps hundreds of images will more than likely need some form of image compression to store those images, because the amount of space required to hold unadulterated images can be prohibitively large in terms of cost. Fortunately, several methods of image compression are available today. These fall into two general categories: lossless and lossy image compression.
The JPEG standard is a collaboration among the International Telecommunication Union (ITU), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC). Its official name is "ISO/IEC 10918-1 Digital compression and coding of continuous-tone still images" and "ITU-T Recommendation T.81". The JPEG process is a widely used form of lossy image compression that centers around the Discrete Cosine Transform.
JPEG has the following modes of operation:
(a) Lossless mode: the image is encoded so as to guarantee exact recovery of every pixel of the original image, even though the compression ratio is lower than with the lossy modes.
(b) Sequential mode: the image is compressed in a single left-to-right, top-to-bottom scan.
(c) Progressive mode: the image is compressed in multiple scans. When the transmission time is long, the image is displayed progressively, from an indistinct to a clear appearance.
(d) Hierarchical mode: the image is compressed at multiple resolutions, so that a lower-resolution version can be accessed without decompressing the image at full resolution.
The last three modes (b, c, and d) are DCT-based and lossy, because the limited precision of the DCT computation and the quantization process introduce distortion in the reconstructed image. The lossless mode uses a predictive method and has no quantization step. The hierarchical mode can optionally use either DCT-based or predictive coding. The most widely used mode in practice is the baseline JPEG system, which is based on the sequential mode, DCT-based coding, and Huffman coding for entropy encoding.
2.3 COMPRESSION TECHNIQUES
Image compression techniques are broadly classified into two categories depending on whether or not an exact replica of the original image can be reconstructed from the compressed image. They are:
1. Lossy Image Compression
2. Lossless Image Compression
2.3.1 LOSSY IMAGE COMPRESSION
Lossy schemes provide much higher compression ratios than lossless schemes and are widely used, since the quality of the reconstructed images is adequate for most applications. With this scheme, the decompressed image is not identical to the original image, but is reasonably close to it. A transformation is first applied to the original image. The quantization process that follows results in loss of information; the entropy coding after the quantization step, however, is lossless. Decoding is the reverse process: entropy decoding is applied to the compressed data to obtain the quantized data, reverse quantization is applied to it, and finally the inverse transformation yields the reconstructed image. Major performance considerations of a lossy compression scheme include:
- Compression ratio
- Signal-to-noise ratio
- Speed of encoding and decoding.
Lossy compression techniques include the following schemes:
1. Transformation coding
2. Vector quantization
3. Fractal coding
4. Block Truncation Coding
5. Sub band coding

2.3.1.1 VECTOR QUANTIZATION
The basic idea in this technique is to develop a dictionary of fixed-size vectors, called code vectors. A vector is usually a block of pixel values. A given image is then partitioned into non-overlapping blocks (vectors) called image vectors. For each image vector, the closest matching code vector in the dictionary is determined, and its index in the dictionary is used as the encoding of the original image vector. Thus, each image is represented by a sequence of indices that can be further entropy coded.
2.3.2 LOSSLESS IMAGE COMPRESSION
In lossless compression techniques, the original image can be perfectly recovered from the compressed (encoded) image. These techniques are also called noiseless, since they do not add noise to the signal (image). They are also known as entropy coding, since they use statistical and decomposition techniques to eliminate or minimize redundancy. Lossless compression is used only for a few applications with stringent requirements, such as medical imaging.
Lossless compression techniques include the following schemes:
1. Run length encoding
2. Huffman encoding
3. LZW coding
4. Area coding
2.3.2.1 RUN LENGTH ENCODING
This is a very simple compression method used for sequential data and is very useful for repetitive data. The technique replaces sequences of identical symbols (pixels), called runs, by shorter codes. The run-length code for a grayscale image is represented by a sequence {Vi, Ri}, where Vi is the intensity of a pixel and Ri is the number of consecutive pixels with intensity Vi. If both Vi and Ri are represented by one byte, a span of 12 pixels containing four runs is coded using eight bytes, yielding a compression ratio of 1.5.
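A minimal Python sketch of this {Vi, Ri} coding; the sample scan line is hypothetical and chosen to contain four runs over 12 pixels:

    # Run-length encoding of a grayscale scan line as (value, run) pairs.
    def rle_encode(pixels):
        pairs = []
        i = 0
        while i < len(pixels):
            run = 1
            while i + run < len(pixels) and pixels[i + run] == pixels[i]:
                run += 1
            pairs.append((pixels[i], run))   # (Vi, Ri)
            i += run
        return pairs

    row = [65, 65, 65, 65, 65, 70, 70, 70, 72, 72, 75, 75]   # 12 pixels in 4 runs
    print(rle_encode(row))   # [(65, 5), (70, 3), (72, 2), (75, 2)] -> 8 bytes for 12 pixels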
3. PROPOSED ARCHITECTURE
The proposed architecture of JPEG image compression using the DCT for grayscale images is shown below.

Fig.3.1: Proposed Architecture of JPEG Image Compression using DCT for Grayscale Images

Each block in the above block diagram is discussed in detail below:
DCT (Discrete cosine transform)
Quantization
Zigzag Scan
RLC (Run length coding)
Inverse RLC
Inverse Zigzag
Reverse Quantization
IDCT (Inverse Discrete Cosine Transform)
3.1 DISCRETE COSINE TRANSFORM
1. ONE-DIMENSIONAL DCT
The most common DCT definition of a 1-D sequence x[n] of length N is

    X[k] = α(k) Σ_{n=0}^{N-1} x[n] cos[ (2n+1)kπ / 2N ],   k = 0, 1, ..., N-1,

and the corresponding inverse transform is

    x[n] = Σ_{k=0}^{N-1} α(k) X[k] cos[ (2n+1)kπ / 2N ],   n = 0, 1, ..., N-1.

In both equations above, α(k) is defined as

    α(0) = √(1/N),   α(k) = √(2/N) for k ≠ 0.

The basis sequences of the 1-D DCT are real, discrete-time sinusoids defined by

    c_k[n] = α(k) cos[ (2n+1)kπ / 2N ].

Each element of the transformed list X[k] in the forward DCT is the inner (dot) product of the input list x[n] and a basis vector. The constant factors are chosen so that the basis vectors are orthogonal and normalized. The DCT can therefore be written as the product of a vector (the input list) and the N x N orthogonal matrix whose rows are the basis vectors.
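This matrix formulation translates directly into a few lines of NumPy (an illustrative sketch, not the paper's implementation; the input list is hypothetical):

    import numpy as np

    def dct_matrix(N):
        # Rows are the orthonormal 1-D DCT basis vectors c_k[n].
        k = np.arange(N).reshape(-1, 1)
        n = np.arange(N).reshape(1, -1)
        C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
        C[0, :] *= np.sqrt(1.0 / N)
        C[1:, :] *= np.sqrt(2.0 / N)
        return C

    x = np.array([52., 55., 61., 66., 70., 61., 64., 73.])   # sample input list
    X = dct_matrix(8) @ x          # forward 1-D DCT
    x_rec = dct_matrix(8).T @ X    # the matrix is orthogonal, so its transpose inverts it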
2. TWO-DIMENSIONAL DCT
The two-dimensional discrete cosine transform (2-D DCT) is used for processing two-dimensional signals such as images. The 2-D DCT resembles the 1-D DCT in that it is a separable linear transformation: the two-dimensional transform is equivalent to a one-dimensional DCT performed along one dimension followed by a one-dimensional DCT along the other dimension. For example, for an n x m matrix S, the 2-D DCT is computed by applying the 1-D DCT to each row of S and then to each column of the result. Because the 2-D DCT can be computed by applying 1-D transforms separately to the rows and columns, it is said to be separable in the two dimensions.
The 2-D DCT is similar to a Fourier transform but uses purely real arithmetic; its transform-domain coefficients are real, and it incorporates only positive frequencies. The 2-D DCT is equivalent to a DFT of roughly twice the length, operating on real data with even symmetry, where in some variants the input and/or output data are shifted by half a sample. Since the 2-D DCT is simpler to evaluate than the Fourier transform, it has become the transform of choice in image compression standards such as JPEG.
The 2-D DCT represents an image as a sum of sinusoids of varying magnitudes and frequencies. It has the property that, for a typical image, most of the visually significant information about the image is concentrated in just a few DCT coefficients.
The mathematical definition of the forward 2-D DCT for an 8 x 8 block is

    S(u,v) = (1/4) C(u) C(v) Σ_{x=0}^{7} Σ_{y=0}^{7} s(x,y) cos[ (2x+1)uπ/16 ] cos[ (2y+1)vπ/16 ],

where C(n) = 1/√2 for n = 0 and C(n) = 1 otherwise.

The above equation is called the analysis formula or the forward transform.
Because the DCT uses cosine functions, the resulting matrix depends on the horizontal, diagonal, and vertical frequencies. Therefore, an image block with a lot of variation has a very random-looking coefficient matrix, while an image block of a single uniform color has a resulting matrix with a large value for the first element and zeros for all the other elements. Mathematically, the DCT is perfectly reversible, and there is no loss of image definition until the coefficients are quantized.
The pixels in the DCT image describe the proportion of each two-dimensional basis function present in the image. Each basis matrix is characterized by a horizontal and a vertical spatial frequency. The matrices are arranged from left to right and top to bottom in order of increasing frequency. The top-left function (brightest pixel) is the basis function of the "DC" coefficient, with frequency {0,0}; it represents zero spatial frequency, is the average of the pixels in the input, and is typically the largest coefficient in the DCT of "natural" images. Along the top row the basis functions have increasing horizontal spatial frequency content; down the left column they have increasing vertical spatial frequency content.
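The separability described above can be sketched with SciPy's 1-D DCT routines (an illustrative example assuming an 8 x 8 block of level-shifted pixel values; it is not the hardware implementation discussed in this paper):

    import numpy as np
    from scipy.fft import dct, idct

    def dct2(block):
        # 1-D DCT along the columns, then along the rows (separability).
        return dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

    def idct2(coeffs):
        return idct(idct(coeffs, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

    block = np.arange(64, dtype=float).reshape(8, 8) - 128   # hypothetical level-shifted block
    D = dct2(block)       # D[0, 0] is the DC coefficient (scaled block average)
    back = idct2(D)       # recovers the block up to floating-point error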
3.2 QUANTIZATION
The block of 8 x 8 DCT coefficients is divided element-wise by an 8 x 8 quantization table. In quantization, the small DCT coefficients at the high frequencies are discarded; quantization thus allows further compression in the entropy-encoding stage by neglecting insignificant coefficients.
The DCT implies that many of the higher frequencies of an image can be discarded without any perceived degradation of image quality. In lossy compression, quantization exploits this by scaling the DCT coefficients to levels that result in the zeroing of most of the higher frequencies while maintaining most of the image's energy.
The 8 x 8 block of DCT coefficients is now ready for
compression by quantization. A remarkable and highly
useful feature of the JPEG process is that in this step,
varying levels of image compression and quality are
obtainable through selection of specific quantization
matrices. This enables the user to decide on quality levels
ranging from 1 to 100, where 1 gives the poorest image
quality and highest compression, while 100 gives the best
quality and lowest compression. As a result, the
quality/compression ratio can be tailored to suit different
needs.
Subjective experiments involving the human visual
system have resulted in the JPEG standard quantization
matrix. With a quality level of 50, this matrix renders
both high compression and excellent decompressed image
quality.
If, however, another level of quality and compression is desired, scalar multiples of the JPEG standard quantization matrix may be used. For a quality level greater than 50 (less compression, higher image quality), the standard quantization matrix is multiplied by (100 - quality level)/50; for a quality level less than 50 (more compression, lower image quality), it is multiplied by 50/quality level. The scaled quantization matrix is then rounded and clipped to positive integer values in the range 1 to 255.
Quantization is achieved by dividing each element of the transformed image matrix D by the corresponding element of the quantization matrix Q, and then rounding to the nearest integer value:

    C(i,j) = round( D(i,j) / Q(i,j) ).
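A hedged sketch of the quality scaling and quantization step described above; the 8 x 8 table is the standard JPEG luminance quantization matrix (quality level 50), and the function names are illustrative:

    import numpy as np

    Q50 = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

    def scaled_q_table(quality):
        # Scale factor as described in Sec. 3.2, clipped to integers in 1..255.
        scale = (100 - quality) / 50.0 if quality > 50 else 50.0 / quality
        return np.clip(np.round(Q50 * scale), 1, 255)

    def quantize(D, quality):
        # C(i, j) = round( D(i, j) / Q(i, j) )
        return np.round(D / scaled_q_table(quality)).astype(int)

At quality 50 the scale factor is 1 and the standard table is used unchanged; lower quality levels enlarge the divisors and therefore zero out more of the high-frequency coefficients.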
3.3 ZIGZAG SCAN
After applying the 8 x 8 DCT and quantization to a block, we have a new 8 x 8 block that represents the original block in the frequency domain. The values must then be reordered into one-dimensional form in order to encode them. The AC terms are scanned in a zigzag manner. The reason for this zigzag traversal is that the 8 x 8 DCT coefficients are visited in order of increasing spatial frequency, so we obtain a vector sorted by spatial frequency. After traversing the 8 x 8 matrix in zigzag order, we have a vector of 64 coefficients (0, 1, ..., 63).

Fig 3.2: Zigzag Scan
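The traversal in Fig 3.2 can be generated programmatically; the following sketch (illustrative only) builds the zigzag order for an 8 x 8 block and flattens a quantized block into the 64-element vector:

    import numpy as np

    def zigzag_order(n=8):
        # Anti-diagonals d = r + c; odd diagonals are scanned top-to-bottom,
        # even diagonals bottom-to-top, which gives the JPEG zigzag traversal.
        idx = [(r, c) for r in range(n) for c in range(n)]
        return sorted(idx, key=lambda rc: (rc[0] + rc[1],
                                           rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

    def zigzag_scan(block):
        return np.array([block[r, c] for r, c in zigzag_order(len(block))])

    quantized = np.zeros((8, 8), dtype=int)
    quantized[0, 0], quantized[0, 1], quantized[1, 0] = 57, 45, -30   # hypothetical values
    vector = zigzag_scan(quantized)   # 64 coefficients: [57, 45, -30, 0, 0, ...]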
3.4 RUN-LENGTH CODING
Run-length encoding (RLE) is a very simple form of data
compression in which runs of data (that is, sequences in
which the same data value occurs in many consecutive
data elements) are stored as a single data value and count,
rather than as the original run. This is most useful on
data that contains many such runs: for example, simple
graphic images such as icons, line drawings, and
animations. It is not useful with files that don't have many
runs as it could greatly increase the file size.
We now have the one-dimensional quantized vector, which contains many consecutive zeros, and can process it by run-length coding those zeros. Consider first the 63 AC coefficients of the 64-element quantized vector. For example, suppose we have:
57, 45, 0, 0, 0, 0, 23, 0, -30, -16, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, ..., 0
For each nonzero value, we encode the number of consecutive zeros preceding it together with the value itself. The RLC (run-length coding) is:
(0,57) ; (0,45) ; (4,23) ; (1,-30) ; (0,-16) ; (2,1) ; EOB

The EOB (End of Block) is a special coded value. If we reach a position in the vector from which only zeros remain until the end, we mark that position with EOB and finish the RLC of the quantized vector. Note that if the quantized vector does not end with zeros (the last element is not 0), we do not add the EOB marker. EOB is equivalent to (0,0), so the coded sequence is:
(0,57) ; (0,45) ; (4,23) ; (1,-30) ; (0,-16) ; (2,1) ; (0,0)
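A small sketch of this zero-run coding with the EOB marker (it reproduces the example above but ignores the 16-zero ZRL case used in the full JPEG standard):

    def rlc_ac(ac):
        # ac: the 63 AC coefficients in zigzag order.
        pairs, zeros = [], 0
        for v in ac:
            if v == 0:
                zeros += 1
            else:
                pairs.append((zeros, v))   # (number of preceding zeros, value)
                zeros = 0
        if zeros:                          # trailing zeros collapse into End-of-Block
            pairs.append((0, 0))           # EOB
        return pairs

    ac = [57, 45, 0, 0, 0, 0, 23, 0, -30, -16, 0, 0, 1] + [0] * 50
    print(rlc_ac(ac))
    # [(0, 57), (0, 45), (4, 23), (1, -30), (0, -16), (2, 1), (0, 0)]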

3.5 RUN-LENGTH DECODER
The 4-bit run-length is a count of the number of zero data values between the last non-zero data value and the current one. The 4-bit data-length is the number of bits following this 8-bit word that make up the actual non-zero data value. A data-length of 0 signifies either the end of a data block or, if the run-length is 15, a run of 16 consecutive zero data values.
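Ignoring the bit-level packing described above, the inverse run-length step of the decoder can be sketched as the expansion of the (zero-run, value) pairs back into the 64-coefficient zigzag vector (an illustrative simplification; the DC value shown is hypothetical):

    def inverse_rlc(dc, pairs):
        # Rebuild the 64-element zigzag vector from the DC term and the AC (run, value) pairs.
        coeffs = [dc]
        for zeros, value in pairs:
            if (zeros, value) == (0, 0):          # EOB: the rest of the block is zero
                break
            coeffs.extend([0] * zeros + [value])
        coeffs.extend([0] * (64 - len(coeffs)))   # pad the trailing zeros
        return coeffs

    block_vector = inverse_rlc(dc=-26, pairs=[(0, 57), (0, 45), (4, 23), (1, -30),
                                              (0, -16), (2, 1), (0, 0)])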
3.6 INVERSE ZIGZAG
The one-dimensional coefficient vector is mapped back into the 8 x 8 frequency matrix by reversing the zigzag ordering, as illustrated in Figure 3.3.

Fig 3.3: After Inverse Zigzag
3.7 REVERSE QUANTIZATION
The reverse quantization stage requests data values from its input, multiplies these values by the corresponding entries of the quantization table, and then places them in the appropriate locations of the 8 x 8 JPEG data block. During JPEG encoding, the frequency components of the data block are ordered so that the low-frequency components come first and the higher-frequency components follow.
This data block is then passed on to the Inverse Discrete Cosine Transform unit. Because the reverse quantization block sits in the middle of the JPEG decoder pipeline and is relatively simple, it requests data from the Huffman decoder, assembles a data block from that data, and asks the Inverse Discrete Cosine Transform unit to decode it. This allows almost all of the operations of the quantization unit to be carried out while the Huffman decoder, which is slow because it must make a decision at every bit, is running.
Reconstruction of the image begins by decoding the bit stream representing the quantized matrix C. Each element of C is then multiplied by the corresponding element of the quantization matrix Q originally used:

    R(i,j) = Q(i,j) x C(i,j).

The IDCT is next applied to matrix R, and the result is rounded to the nearest integer.
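A brief sketch of this reconstruction path (illustrative; Q is whatever scaled quantization table was used during encoding, and the SciPy IDCT stands in for the hardware IDCT unit described in the next section):

    import numpy as np
    from scipy.fft import idct

    def dequantize_and_idct(C, Q):
        R = C * Q                                         # R(i, j) = Q(i, j) x C(i, j)
        block = idct(idct(R, type=2, norm='ortho', axis=0),
                     type=2, norm='ortho', axis=1)        # 2-D IDCT via two 1-D passes
        return np.round(block).astype(int)                # rounded to the nearest integer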
3.8 INVERSE DISCRETE COSINE TRANSFORM
The Inverse Discrete Cosine Transform unit is by far the most complex unit in the JPEG decoder. The IDCT requires many multiplications and additions of irrational values and is computationally intensive. Since a floating-point ALU is very difficult to design, very large, and very slow, floating-point arithmetic is generally not done in custom hardware designs except in the data path of a microprocessor, where it can be shared among many different uses.
The equation for the 2-dimensional 8 x 8 Inverse Discrete Cosine Transform is:
    s(x,y) = (1/4) Σ_{u=0}^{7} Σ_{v=0}^{7} C(u) C(v) S(u,v) cos[ (2x+1)uπ/16 ] cos[ (2y+1)vπ/16 ],

where C(n) = 1/√2 for n = 0 and C(n) = 1 for n > 0.
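A direct, unoptimized implementation of this equation is shown below for illustration only; a practical decoder would instead use a fast factorization or fixed-point arithmetic, as discussed above:

    import numpy as np

    def idct_8x8(S):
        # s(x, y) = 1/4 * sum_u sum_v C(u) C(v) S(u, v) cos((2x+1)u*pi/16) cos((2y+1)v*pi/16)
        C = lambda n: 1 / np.sqrt(2) if n == 0 else 1.0
        s = np.zeros((8, 8))
        for x in range(8):
            for y in range(8):
                s[x, y] = 0.25 * sum(C(u) * C(v) * S[u, v]
                                     * np.cos((2 * x + 1) * u * np.pi / 16)
                                     * np.cos((2 * y + 1) * v * np.pi / 16)
                                     for u in range(8) for v in range(8))
        return s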

4. DESIGN METRICS
4.1 MEAN SQUARE ERROR
The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias. For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, the MSE has the same units of measurement as the square of the quantity being estimated. In analogy to the standard deviation, taking the square root of the MSE yields the root mean square error or root mean square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, i.e. the standard deviation.
If Ŷ is a vector of n predictions and Y is the vector of the true values, then the MSE of the predictor is:

    MSE = (1/n) Σ_{i=1}^{n} ( Ŷ_i - Y_i )²

4.2 PEAK SIGNAL TO NOISE RATIO
Peak Signal-to-Noise Ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, the PSNR is usually expressed on the logarithmic decibel scale.
The PSNR is most easily defined via the mean squared error (MSE). Given a noise-free m x n monochrome image I and its noisy approximation K, the MSE is defined as

    MSE = (1/(m n)) Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} [ I(i,j) - K(i,j) ]²

The PSNR is then defined as

    PSNR = 10 log10( MAX_I² / MSE ) = 20 log10( MAX_I / √MSE ),

where MAX_I is the maximum possible pixel value of the image (255 for 8-bit samples).
4.3 COMPRESSION RATIO
The compression ratio reported here is obtained by dividing the size of the compressed image by the size of the original image, subtracting this value from 1, and expressing the result as a percentage. This ratio gives an indication of how much compression is achieved for a particular image. Most algorithms have a typical range of compression ratios that they can achieve over a variety of images; because of this, it is usually more useful to look at an average compression ratio for a particular method.
The compression ratio typically affects picture quality: generally, the higher the compression ratio, the poorer the quality of the resulting image. The trade-off between compression ratio and picture quality is an important consideration when compressing images.

    Compression ratio = [1 - (Compressed image size / Original image size)] x 100%
4.4 BITS PER PIXEL
Bits per pixel is the number of bits of information stored per pixel of an image or displayed by a graphics adapter. The more bits there are, the more colors can be represented, but the more memory is required to store or display the image.

    bpp = total number of bits / number of pixels
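As defined in Sections 4.3 and 4.4, these last two metrics can be computed from the file sizes alone (the sizes below are hypothetical, for illustration only):

    original_bytes = 256 * 256        # 8-bit, 256 x 256 grayscale image
    compressed_bytes = 3400           # hypothetical size of the JPEG output

    cr = (1 - compressed_bytes / original_bytes) * 100   # percentage saving, as in Sec. 4.3
    bpp = compressed_bytes * 8 / (256 * 256)             # bits per pixel, as in Sec. 4.4
    print(f"CR = {cr:.1f}%, bpp = {bpp:.2f}")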
5. EXPERIMENTAL ANALYSIS

RESULTS FOR GRAY SCALE IMAGES

For Baboongray image

Quality Level   Size of the image   PSNR (dB)   MSE        CR
10              256 x 256           22.3139     381.6680   94.8%
40              256 x 256           26.0883     160.0485   67.9%
60              256 x 256           27.9275     104.7921   49.9%
80              256 x 256           31.7255     43.7046    15.0%

Fig 5.1: Quality level vs Design metrics for Baboongray image

For Lenagray image
Quality Level   Size of the image   PSNR (dB)   MSE        CR
10              256 x 256           23.0388     322.9979   95.8%
40              256 x 256           27.9638     103.9202   78.5%
60              256 x 256           29.8806     66.8377    66.6%
80              256 x 256           33.4960     29.0724    40.5%

Fig 5.2: Quality level vs Design metrics for Lenagray image
For Rosesgray image





Quality Level   Size of the image   PSNR (dB)   MSE        CR
10              256 x 256           24.3135     240.8423   96.7%
40              256 x 256           29.1767     78.5970    83.8%
60              256 x 256           31.1642     49.7344    74.2%
80              256 x 256           34.5213     22.9587    53.2%

Fig 5.3: Quality level vs Design metrics for Rosesgray image

6. CONCLUSION

- For the grayscale images, objective fidelity quality metrics, namely Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), and Compression Ratio (CR), have been evaluated.
- The findings for the grayscale images are:
  For the Rosesgray image, the PSNR value is 34.5213, which is higher than for the other input grayscale images.
  For the Rosesgray image, the CR value is 96.7%, which is higher than for the other input grayscale images.
  For the Rosesgray image, the MSE value is 22.9587, which is lower than for the other input grayscale images.
- Finally, the proposed JPEG algorithm can be extended to a Region of Interest (ROI) segmentation-based image compression technique, which involves dividing the image into two parts, namely the front (region of interest) portion and the back portion of the image.
- The back portion of the image consists largely of redundant data, and so it can be compressed to the maximum extent.
- This does not affect the front portion of the image, so stronger compression can be applied only to the desired area.
