
Review Article ISSN 2277-9140

Received on November 2011, Published on February 2012


INTERNATIONAL JOURNAL OF ADVANCES IN
COMPUTING AND INFORMATION
TECHNOLOGY
An International online open access peer reviewed journal


A Brief Introduction of Digital Image Processing

Suman Chaudhry*, Kulwinder Singh

ECE department, BMS College of Engineering & Technology, Muktsar, India


doi:10.6088/ijacit.12.10010



ABSTRACT

The demand for images, video sequences and computer animations has increased drastically over the years. As a result, image and video compression has become an important means of reducing the cost of data storage and transmission. JPEG is currently the accepted industry standard for still image compression, but alternative methods are also being explored. Fractal Image Compression (FIC) is one of them. This scheme encodes an image by partitioning it into blocks and using contractive mappings to map range blocks to domain blocks. The encoding step of fractal image compression has high computational complexity, whereas the decoding step starts from an all-zeros image and repeatedly applies the stored self-transformations until the result converges to an image close to the original.

Keywords: image compression, source encoder and decoder, quantizer, lossy compression.

1. Introduction

Digital images are numerical representations of images (e.g., radiographs) or objects (e.g., patients), whereas image compression refers to reducing this numerical data. The digital image must first be converted to an analog signal before it can be viewed on a monitor. The digital image can be stored and archived on magnetic data carriers (magnetic tapes and/or disks) and laser optical disks. The computer then uses special digital image software to manipulate the data for different purposes. There are five fundamental classes of digital image processing:
a) Image enhancement
b) Image restoration
c) Image analysis
d) Image compression
e) Image synthesis.

1.1 Image compression
Image compression uses various algorithms to remove data from an image. The underlying basis of the reduction is the removal of redundant data: data that carries no relevant information or simply restates what is already known. An image containing such data is said to contain data redundancy. In digital image compression, three basic data redundancies can be identified and exploited:
a) Coding redundancy
b) Interpixel redundancy
c) Psychovisual redundancy
Data compression is achieved when one or more of these redundancies are reduced or eliminated. The purpose of
compressing digital images is to reduce the size of the image to decrease transmission time and reduce storage
space requirements.
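As a hedged illustration (not part of the original paper), coding redundancy in an 8-bit grayscale image can be estimated by comparing the 8 bits/pixel of a fixed-length code with the image's first-order entropy, which lower-bounds the average code length any variable-length code could achieve. The Python sketch below assumes NumPy and an image supplied as a 2-D uint8 array.

```python
import numpy as np

def coding_redundancy(img):
    """Estimate coding redundancy (bits/pixel) of an 8-bit image.

    Fixed-length natural binary coding spends 8 bits/pixel; the
    first-order entropy of the gray-level histogram lower-bounds what
    a variable-length code (e.g., Huffman) could spend instead.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist[hist > 0] / img.size           # probabilities of used levels
    entropy = -np.sum(p * np.log2(p))       # bits/pixel lower bound
    return 8.0 - entropy

rng = np.random.default_rng(0)
uniform = rng.integers(0, 256, (64, 64)).astype(np.uint8)         # flat histogram
peaked = rng.normal(128, 6, (64, 64)).clip(0, 255).astype(np.uint8)
print(coding_redundancy(uniform))    # near 0: little to gain from recoding
print(coding_redundancy(peaked))     # several redundant bits per pixel
```

A heavily peaked histogram means most gray levels are rare, so a variable-length code can remove several bits per pixel; a flat histogram leaves almost nothing for recoding to exploit.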

2. Image Compression Model

The general image compression model, consisting of two major building blocks, is shown in Figure 1(a).

Fig.1a The general Compression Model

It shows that a compression system consists of two distinct structural blocks: an encoder and a decoder. An input image f(x, y) is fed into the encoder, which creates a set of symbols from the input data; after transmission over the channel, this set is fed to the decoder, where a reconstructed image f'(x, y) is generated. The source encoder is responsible for reducing or eliminating any coding, interpixel, or psychovisual redundancies in the input image.

2.1 Source Encoder and Decoder
As shown in Fig. 1(b), in the first stage of the source encoding process the mapper transforms the input into a format designed to reduce interpixel redundancies in the input image. This process is reversible and may or may not directly reduce the amount of data required to represent the image.



In the second stage, the quantizer block reduces the accuracy of the mapper's output in accordance with some pre-established fidelity criterion. This stage reduces the psychovisual redundancies of the input image. The operation is irreversible and is omitted when error-free compression is required.
In the third and final stage of the source encoding process, the symbol encoder creates a fixed- or variable-length code to represent the quantizer output and maps the output in accordance with the code. It assigns the shortest code words to the most frequently occurring output values and thus reduces coding redundancy. The source decoder shown in Fig. 1(c) contains only two blocks: a symbol decoder and an inverse mapper. These blocks perform, in reverse order, the inverse operations of the source encoder's symbol encoder and mapper blocks. Because quantization results in irreversible information loss, an inverse quantization block is not included in the general source decoder model of Fig. 1(c).
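The three encoder stages and the two decoder blocks can be sketched as follows. This is a minimal illustration using horizontal differencing as the mapper, uniform quantization, and a raw-byte stand-in for the symbol encoder; all three choices are illustrative assumptions rather than the specific blocks of Fig. 1.

```python
import numpy as np

def mapper(img):
    """Stage 1 (reversible): horizontal differencing reduces interpixel
    redundancy; the first column is kept verbatim."""
    res = img.astype(np.int16)
    res[:, 1:] -= img[:, :-1].astype(np.int16)
    return res

def quantizer(res, step=4):
    """Stage 2 (irreversible): uniform quantization of the mapper output;
    omitted entirely when error-free compression is required."""
    return np.round(res / step).astype(np.int16)

def symbol_encoder(levels):
    """Stage 3: stand-in for an entropy coder; a real one would assign
    the shortest code words to the most frequent levels."""
    return levels.tobytes()

def source_decoder(data, shape, step=4):
    """Symbol decoder followed by the inverse mapper, in reverse order.
    Rescaling by `step` is not true inverse quantization: the rounding
    loss introduced by the quantizer cannot be recovered."""
    levels = np.frombuffer(data, dtype=np.int16).reshape(shape)
    return np.cumsum(levels * step, axis=1).clip(0, 255).astype(np.uint8)

img = np.tile(np.arange(0, 256, 4, dtype=np.uint8), (8, 1))  # smooth rows
data = symbol_encoder(quantizer(mapper(img)))
approx = source_decoder(data, img.shape)  # exact here: diffs are multiples of step
```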




In general, there are two forms of image compression: lossy and lossless.

2.2 Lossy and Lossless Compression Techniques


Fig. 2 Compression techniques

Compressing an image is significantly different from compressing raw binary data. Of course, general-purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties that can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space.
In lossless, or reversible, compression there is no loss of information in the image (i.e., detail is not compromised) when it is decompressed. In various applications, such as medical or business documents, satellite imagery, digital radiography, etc., error-free compression is the only acceptable means of data reduction. The commonly used lossless compression techniques are:
a) Variable-length Coding
b) LZW Coding
c) Bit-Plane Coding
d) Lossless Predictive Coding
e) Huffman Coding (see the sketch below)
f) Arithmetic Coding
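As a hedged sketch of item e, the function below builds a Huffman code table, assigning the shortest code words to the most frequent symbols; it covers code construction only, not the packing of the resulting bits into a file.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code table {symbol: bitstring} for a sequence.

    Greedily merges the two least frequent subtrees; each merge prefixes
    one subtree's codes with '0' and the other's with '1'."""
    freq = Counter(data)
    # Heap entries: (frequency, tie-breaker, partial code table).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol input
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "this is an example of huffman coding"
codes = huffman_code(text)
bits = "".join(codes[ch] for ch in text)
print(len(bits), "bits vs", 8 * len(text), "bits at fixed length")
```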
A lossy compression method is one where compressing data and then decompressing it retrieves data that may well be different from the original, but is close enough to be useful in some way. Lossy compression is useful in situations where it is not necessary to preserve the exact details of the original image. Lossy methods are most often used for compressing sound, images or video, because these types of data are intended for human interpretation, where the mind can easily "fill in the blanks" or see past very minor errors or inconsistencies; ideally, lossy compression is transparent (imperceptible). The most commonly used lossy compression techniques are:
a) Lossy Predictive Coding
b) Transform Coding (sketched below)
c) Wavelet Coding
Lossy compression formats suffer from generation loss: repeatedly compressing and decompressing the file will
cause it to progressively lose quality. The advantage of lossy methods over lossless methods is that in some cases
a lossy method can produce a much smaller compressed file than any known lossless method, while still meeting
the requirements of the application.
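To make transform coding (item b above) concrete, here is a hedged, JPEG-like sketch on a single 8×8 block; it assumes SciPy's DCT routines and uses a single uniform quantization step rather than the standard JPEG quantization tables.

```python
import numpy as np
from scipy.fft import dctn, idctn   # 2-D DCT/IDCT, as in JPEG-style coders

def transform_code_block(block, q=16.0):
    """Lossy transform coding of one 8x8 block: the forward DCT packs
    the block's energy into a few coefficients, quantization (the
    irreversible step) zeros most of the rest, and the inverse DCT
    reconstructs an approximation of the block."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    levels = np.round(coeffs / q)                    # mostly zeros
    recon = idctn(levels * q, norm="ortho") + 128.0
    return levels, recon.clip(0, 255).astype(np.uint8)

block = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))  # 8x8 ramp
levels, recon = transform_code_block(block)
print(np.count_nonzero(levels), "of 64 coefficients survive quantization")
```

Raising q zeroes more coefficients, shrinking the data at the cost of detail; repeating the compress/decompress cycle compounds this loss, which is the generation loss described above.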


3. What is Fractal Image Compression?

Imagine a special type of photocopying machine that reduces the image to be copied by half and reproduces it three times on the copy (see Figure 3). What happens when we feed the output of this machine back as input? Figure 4 shows several iterations of this process on an input image. We can observe that all the copies seem to converge to the same final image, the rightmost one in Figure 4. Since the copying machine reduces the input image, any initial image placed on the copying machine will be reduced to a point as we repeatedly run the machine; in fact, it is only the position and the orientation of the copies that determine what the final image looks like.


Figure 3: A copy machine that makes three reduced copies of the input image

The way the input image is transformed determines the final result when running the copy machine in a feedback loop. However, we must constrain these transformations: they must be contractive, that is, a given transformation applied to any two points in the input image must bring them closer together in the copy. This technical condition is quite logical, since if points in the copy were spread out, the final image would have to be of infinite size. Apart from this condition, the transformations can have any form. In practice, choosing transformations of the form of eq. (1) below is sufficient to generate interesting transformations, called affine transformations of the plane. Each can skew, stretch, rotate, scale and translate an input image. A common feature of such transformations run in a feedback loop is that, for a given initial image, each image is formed from transformed (and reduced) copies of itself, and hence must have detail at every scale. That is, the images are fractals. This and more information about the various ways of generating such fractals can be found in books by Barnsley and by Fisher [4].


[Panels, left to right: Initial Image, First Copy, Second Copy, Third Copy]

Figure 4: The first three copies generated on the copying machine of Figure 3

In matrix form, each such affine transformation of the plane can be written as

w_i \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a_i & b_i \\ c_i & d_i \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} e_i \\ f_i \end{bmatrix} \qquad (1)


Barnsley suggested that perhaps storing images as collections of transformations could lead to image compression. His argument went as follows: the fern in Figure 5 looks complicated, yet it is generated from only four affine transformations.


Figure 5: Fractal Fern


Each transformation w_i is defined by six numbers, a_i, b_i, c_i, d_i, e_i, and f_i (see eq. (1)), which do not require much memory to store on a computer (4 transformations × 6 numbers/transformation × 32 bits/number = 768 bits). Storing the image as a collection of pixels, however, requires much more memory (at least 65,536 bits for the resolution shown in Figure 5). So if we wish to store a picture of a fern, we can do it by storing the numbers that define the affine transformations and simply generating the fern whenever we want to see it. Now suppose that we were given an arbitrary image, say a face. If a small number of affine transformations could generate that face, then it too could be stored compactly. The trick is finding those numbers.
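To make this concrete, the sketch below generates a fern by the chaos game: it repeatedly applies one of four affine maps of the form of eq. (1), chosen at random. The coefficients are the standard published Barnsley fern values, which may differ from the exact numbers behind Figure 5.

```python
import random

# Four affine maps (a, b, c, d, e, f) of eq. (1) plus selection weights:
# 24 stored coefficients in all, matching the storage argument above.
MAPS = [
    (( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00), 0.01),  # stem
    (( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60), 0.85),  # successive fronds
    (( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60), 0.07),  # left leaflet
    ((-0.15,  0.28,  0.26, 0.24, 0.0, 0.44), 0.07),  # right leaflet
]
WEIGHTS = [w for _, w in MAPS]

def fern(n=50_000):
    """Chaos game: the orbit of a randomly driven contractive IFS fills
    in the attractor (the fern), which has detail at every scale."""
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        (a, b, c, d, e, f), _ = random.choices(MAPS, WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

Plotting the returned points reproduces the fern at any resolution; only the coefficients above need to be stored.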

4. Contractive Transformations

A transformation w is said to be contractive if, for any two points p_1, p_2,

d(w(p_1), w(p_2)) < s \, d(p_1, p_2)

for some s < 1, where d denotes distance. This formula says that applying a contractive map always brings points closer together (by some factor less than 1).
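As a hedged numerical check of this definition, the snippet below measures the ratio d(w(p_1), w(p_2)) / d(p_1, p_2) for an example affine map whose linear part scales everything by 0.5:

```python
import math

def w(p):
    """An example affine map; its linear part halves all distances."""
    x, y = p
    return (0.5 * x + 1.0, 0.5 * y + 2.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p1, p2 = (3.0, -1.0), (-7.0, 4.0)
print(dist(w(p1), w(p2)) / dist(p1, p2))   # 0.5 < 1, so w is contractive
```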

4.1 The Contractive Mapping Fixed Point Theorem
This theorem says something that is intuitively obvious: if a transformation is contractive, then when it is applied repeatedly starting from any initial point, we converge to a unique fixed point.
If X is a complete metric space and W : X → X is contractive, then W has a unique fixed point in X.
This simple-looking theorem tells us how we can expect a collection of transformations to define an image.
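A hedged numerical illustration of the theorem, reusing the example map w above: since w is contractive with s = 0.5, iterating it from two very different starting points converges to the same fixed point, here (2, 4), obtained by solving p = w(p).

```python
def w(p):
    """Same example contractive map as above (s = 0.5)."""
    x, y = p
    return (0.5 * x + 1.0, 0.5 * y + 2.0)

for start in [(0.0, 0.0), (1000.0, -500.0)]:
    p = start
    for _ in range(60):          # repeated application of w
        p = w(p)
    print(start, "->", p)        # both converge to (2.0, 4.0)
```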

5. Applications

This type of compression can be applied in medical imaging, where doctors need to focus on image details, and in surveillance systems, when trying to get a clear picture of an intruder or of the cause of an alarm. Because a fractal code is resolution independent, the decoded image can be rendered at any magnification; this is a clear advantage over the Discrete Cosine Transform algorithms such as those used in JPEG or MPEG.

6. Motivation of the Proposed Work

In this paper, various types of fractal-based image coding techniques, which are lossy image compression methods, are investigated. Many schemes are available for spatial fractal and fractal-wavelet image compression. The aim here is to find a way to make lossy fractal image compression nearly lossless while keeping encoding and decoding fast; that is the basis of this research.


References

[1] A. Jacquin, "Image coding based on a fractal theory of iterated contractive image transformations," IEEE Trans. Image Processing, vol. 1, pp. 18–30, Jan. 1992.
[2] M. F. Barnsley and A. Jacquin, "Application of recurrent iterated function systems to images," Proc. SPIE, vol. 1001, pp. 122–131, 1988.
[3] A. Jacquin, "Fractal image coding based on a theory of iterated contractive image transformations," Proc. SPIE Conf. Visual Communications and Image Processing, pp. 227–239, 1990.
[4] Y. Fisher, "Fractal image compression with quadtrees," in Fractal Compression: Theory and Application to Digital Images, Y. Fisher, Ed. New York: Springer-Verlag, 1994.
[5] D. Saupe, "Accelerating fractal image compression by multidimensional nearest neighbor search," in Proc. Data Compression Conf., Snowbird, UT, Mar. 1995, pp. 222–231.
[6] B. Hurtgen and T. Hain, "On the convergence of fractal transforms," in Proc. ICASSP, 1994, vol. 5, pp. 561–564.
[7] J. Kominek, "Convergence of fractal encoded images," in Proc. Data Compression Conf., Snowbird, UT, Mar. 1995, pp. 242–251.
[8] G. M. Davis, "A wavelet-based analysis of fractal image compression," IEEE Trans. Image Processing, vol. 7, pp. 141–154, Feb. 1998.
[9] H. Hartenstein et al., "Region-based fractal image compression," IEEE Trans. Image Processing, vol. 9, pp. 1171–1184, Jul. 2000.
[10] K. Belloulata and J. Konrad, "Fractal image compression with region-based functionality," IEEE Trans. Image Processing, vol. 11, pp. 351–362, Apr. 2002.
[11] J.-H. Jeng, C.-C. Tseng, and J.-G. Hsieh, "Study on Huber fractal image compression," IEEE Trans. Image Processing, vol. 18, pp. 995–1003, May 2009.
