
Image Compression Technique Using Lossless

and Lossy Compression

J Component Project Report for ECE4007 - Information Theory and Coding

SLOT – B2 + TB1

Bachelor of Technology in ECE with Specialization in

Internet of Things and Sensors

by
16BIS0108 Aayush Gupta


16BIS0107 Anurag Jaiswal
16BIS0010 Shivam Virmani

Under the guidance of

Prof. Kavitha K.V.N.


School of Electronics Engineering

Vellore Institute of Technology, Vellore-632014

WIN 2018-19
ABSTRACT
Image compression is an application of data compression that encodes the original image using
fewer bits. The purpose of image compression is to reduce the redundancy and irrelevance of
the image data so that it can be stored or transmitted in an efficient form; image compression
therefore decreases transmission time over a network and raises the effective transmission
speed. In the lossless technique of image compression, no data are lost during compression.
Various techniques are used for image compression, which raises two questions: how is image
compression performed, and which type of technology is used? For this reason, two common
classes of approach, known as lossless and lossy image compression, are explained here. These
techniques are simple in their application and consume very little memory.
This paper proposes a novel image compression scheme based on the Huffman coding and Run-Length
coding techniques. Image files contain some redundant and irrelevant information, and image
compression addresses the problem of reducing the amount of data required to represent an
image. Wavelets provide a mathematical way of encoding information so that it is layered
according to level of detail. This layering facilitates approximations at various intermediate
stages, and these approximations can be stored using far less space than the original data.
Huffman encoding and decoding are very easy to implement and reduce memory complexity. This
paper also elaborates a low-complexity 2D image compression method that uses wavelets (the
Haar wavelet) as the basis functions and then applies Run-Length Encoding (RLE) to compress
the image. The major goal of this paper is to provide practical ways of exploring the Huffman
coding and Run-Length coding techniques using MATLAB.
INTRODUCTION
Image compression is important for many applications that involve huge data storage,
transmission and retrieval such as for multimedia, documents, videoconferencing, and
medical imaging. Uncompressed images require considerable storage capacity and
transmission bandwidth. The objective of image compression technique is to reduce
redundancy of the image data in order to be able to store or transmit data in an efficient form.
This results in the reduction of file size and allows more images to be stored in a given
amount of disk or memory space. Image compression can be lossy or lossless. Lossless methods
generally do not provide compression ratios as high as lossy methods, but lossless image
compression is particularly useful in image archiving, as in the storage of legal or medical
records. Methods for lossless image compression include entropy coding, Huffman coding,
bit-plane coding, run-length coding and LZW (Lempel-Ziv-Welch) coding.

EXISTING SYSTEM
To compress the image data, the JPEG image compression standard uses the DCT (Discrete Cosine
Transform). The discrete cosine transform is a fast transform and a widely used, robust method
for image compression. It has excellent energy compaction for highly correlated data, has fixed
basis images, and gives a good compromise between information-packing ability and computational
complexity. The JPEG 2000 image compression standard makes use of the DWT (Discrete Wavelet
Transform). The DWT can be used to reduce the image size without losing much of the resolution:
the coefficients are computed, and values less than a pre-specified threshold are discarded.
This reduces the amount of memory required to represent the given image. A block-level sketch
of the DCT idea follows.
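
As an illustration of the energy-compaction idea (not the report's own code, whose platform was MATLAB), here is a minimal Python sketch of the JPEG-style processing of one 8x8 block using SciPy's DCT routines; the random test block and the threshold of 10 are assumptions chosen for demonstration:

```python
import numpy as np
from scipy.fft import dctn, idctn

# An assumed 8x8 test block standing in for one block of a greyscale image.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

coeffs = dctn(block, norm="ortho")             # 2D DCT compacts energy into few coefficients
coeffs[np.abs(coeffs) < 10] = 0                # discard low-magnitude coefficients
reconstructed = idctn(coeffs, norm="ortho")    # approximate inverse transform
print(np.abs(block - reconstructed).max())     # small per-pixel reconstruction error
```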

PROPOSED SYSTEM
The proposed compression technique prunes coefficients based on the discrete wavelet transform
(DWT). The technique first decomposes an image into coefficients called sub-bands and then
compares the resulting coefficients with a threshold. Coefficients below the threshold are set
to zero. Finally, the coefficients above the threshold value are encoded with a lossless
compression technique, as sketched below.
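
A minimal sketch of this pipeline, assuming the PyWavelets library and illustrative values for the wavelet, decomposition level and threshold (the report's own implementation was in MATLAB):

```python
import pywt  # PyWavelets

def dwt_prune(img, wavelet="haar", level=2, thresh=20.0):
    # Decompose the image into an approximation band plus detail sub-bands.
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Hard-threshold: detail coefficients below the threshold are set to zero.
    pruned = [tuple(pywt.threshold(band, thresh, mode="hard") for band in bands)
              for bands in details]
    # The surviving coefficients would then be passed to a lossless coder
    # (Huffman or run-length encoding, as described later in this report).
    return [approx] + pruned
```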

BLOCK DIAGRAM
Image compression

Image compression is minimizing the size in bytes of a graphics file without degrading the
quality of the image to an unacceptable level. The reduction in file size allows more images
to be stored in a given amount of disk or memory space. It also reduces the time required for
images to be sent over the Internet or downloaded from Web pages.

There are several different ways in which image files can be compressed. For Internet use,
the two most common compressed graphic image formats are the JPEG format and
the GIF format. The JPEG method is more often used for photographs, while the GIF method
is commonly used for line art and other images in which geometric shapes are relatively
simple.

Other techniques for image compression include the use of fractals and wavelets. These
methods have not gained widespread acceptance for use on the Internet as of this writing.
However, both methods offer promise because they offer higher compression ratios than the
JPEG or GIF methods for some types of images. Another new method that may in time
replace the GIF format is the PNG format.

A text file or program can be compressed without the introduction of errors, but only up to a
certain extent. This is called lossless compression. Beyond this point, errors are introduced.
In text and program files, it is crucial that compression be lossless because a single error can
seriously damage the meaning of a text file, or cause a program not to run. In image
compression, a small loss in quality is usually not noticeable. There is no "critical point" up
to which compression works perfectly, but beyond which it becomes impossible. When there
is some tolerance for loss, the compression factor can be greater than it can when there is no
loss tolerance. For this reason, graphic images can be compressed more than text files or
programs.

TYPES OF IMAGE COMPRESSION TECHNIQUES

There are two categories of image compression: lossless and lossy compression. Lossless
compression is used for artificial images and typically operates at a low bit rate. In lossy
compression techniques, there is the possibility of losing some information during the process,
while lossless compression is preferred for medical and military images, owing to the lesser
possibility of loss of information. These methods are explained below.

Lossy Compression:
In a lossy compression technique, accuracy during compression and decompression is important:
some loss of data is possible, but it should stay within a tolerable limit and remain good
enough for the image-processing application. This kind of compression is used for sharing,
transmitting or storing multimedia data, where some loss of data or image quality is allowed.
JPEG is an example of a lossy processing method. When the receiver is the human eye, lossy
data is acceptable, because the human eye can tolerate some imperfection in the data. Some
related coding concepts are as follows. A memoryless source is an information source whose
symbols are independently distributed; that is, the value of the current symbol does not
depend on the values of the previously appeared symbols. Instead of assuming a memoryless
source, Run-Length Coding (RLC) exploits memory present in the information source. The
rationale for RLC: if the information source has the property that symbols tend to form
continuous groups, then each such symbol and the length of its group can be coded.

Lossless Compression:
Lossless compression is a class of data compression algorithms that allows the original data to
be perfectly reconstructed from the compressed data. By contrast, lossy compression permits
reconstruction only of an approximation of the original data, though this usually improves
compression rates (and therefore reduces file sizes). Lossless data compression is used in many
applications. For example, it is used in the ZIP file format and in the GNU tool gzip. It is
also often used as a component within lossy data compression technologies (e.g., lossless
mid/side joint stereo pre-processing by the LAME MP3 encoder and other lossy audio encoders).
Lossless and lossy compression are terms that describe whether or not, in the compression of
a file, all original data can be recovered when the file is uncompressed. With lossless
compression, every single bit of data that was originally in the file remains after the file is
uncompressed; all of the information is completely restored. This is generally the technique
of choice for text or spreadsheet files, where losing words or financial data could pose a
problem. The Graphics Interchange Format (GIF) is an image format used on the Web that
provides lossless compression.

On the other hand, lossy compression reduces a file by permanently eliminating certain
information, especially redundant information. When the file is uncompressed, only a part of
the original information is still there (although the user may not notice it). Lossy compression
is generally used for video and sound, where a certain amount of information loss will not be
detected by most users. The JPEG image file, commonly used for photographs and other
complex still images on the Web, is an image format that uses lossy compression. Using JPEG
compression, the creator can decide how much loss to introduce and make a trade-off
between file size and image quality.

LOSSLESS AND LOSSY


Lossy compression is a compression technique that does not decompress digital data back to
100% of the original. Lossy methods can provide high degrees of compression and result in
smaller compressed files, but some number of the original pixels, sound waves or video frames
are removed forever. Examples are the widely used JPEG image, MPEG video and MP3 audio formats.

The greater the compression, the smaller the file. However, a high image compression loss
can be observed in photos printed very large, and people with excellent hearing can notice a
huge difference between MP3 music and high-resolution audio files (see audiophile).
Typically, the moving frames of video can tolerate a greater loss of pixels than still images.

Lossy compression is never used for business data and text, which demand a perfect
restoration (see lossless compression). See data compression, codec
examples, JPEG, MPEG and MP3.

In information technology, lossy compression or irreversible compression is the class


of data encoding methods that uses inexact approximations and partial data discarding to
represent the content. These techniques are used to reduce data size for storing, handling, and
transmitting content. Higher degrees of approximation create coarser images as more detail is
removed. This is opposed to lossless data compression (reversible data compression), which
does not degrade the data. The amount of data reduction possible using lossy compression is
much higher than through lossless techniques.
Well-designed lossy compression technology often reduces file sizes significantly before
degradation is noticed by the end-user. Even when noticeable by the user, further data
reduction may be desirable (e.g., for real-time communication, to reduce transmission times,
or to reduce storage needs).

Lossy compression is most commonly used to compress multimedia data (audio, video,
and images), especially in applications such as streaming media and internet telephony. By
contrast, lossless compression is typically required for text and data files, such as bank
records and text articles. It can be advantageous to make a master lossless file which can then
be used to produce additional copies from. This allows one to avoid basing new compressed
copies off of a lossy source file, which would yield additional artifacts and further
unnecessary information loss.
Huffman

Huffman coding is a method of data compression that is independent of the data type, that is,
the data could represent an image, audio or spreadsheet. This compression scheme is used in
JPEG and MPEG-2. Huffman coding works by looking at the data stream that makes up the
file to be compressed. Those data bytes that occur most often are assigned a short code to
represent them (certainly shorter than the data bytes being represented). Data bytes that occur
the next most often are assigned a slightly longer code. This continues until all of
the unique pieces of data are assigned unique code words. For a given character distribution,
by assigning short codes to frequently occurring characters and longer codes to infrequently
occurring characters, Huffman's minimum-redundancy encoding minimizes the average
number of bits required to represent the characters in a text. Static Huffman encoding uses a
fixed set of codes, based on a representative sample of data, for processing texts. Although
encoding is achieved in a single pass, the data on which the compression is based may bear
little resemblance to the actual text being compressed. Dynamic Huffman encoding, on the
other hand, reads each text twice; once to determine the frequency distribution of the
characters in the text and once to encode the data. The codes used for compression are
computed on the basis of the statistics gathered during the first pass with compressed texts
being prefixed by a copy of the Huffman encoding table for use with the decoding process.
By using a single-pass technique, where each character is encoded on the basis of the
preceding characters in a text, Gallager's adaptive Huffman encoding avoids many of the
problems associated with either the static or dynamic method

1. Build a Huffman Tree (a Python sketch follows this list):

(a) Combine the two lowest-probability leaf nodes into a new node.
(b) Replace the two leaf nodes by the new node and sort the nodes according to the new
probability values.
(c) Continue steps (a) and (b) until we get a single node with probability value 1.0. We
will call this node the root.
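
A compact Python sketch of these steps (the report's implementation used MATLAB); the probability table in the example is an assumption for illustration:

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """probs: symbol -> probability. Returns symbol -> bitstring."""
    tiebreak = count()   # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # (a) pop the two lowest-probability nodes,
        p1, _, codes1 = heapq.heappop(heap)
        p2, _, codes2 = heapq.heappop(heap)
        # (b) merge them, prefixing 0 in one subtree and 1 in the other,
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        # and push the combined node back; (c) stop at a single root.
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

# Example with an assumed probability table: the most probable
# symbol receives the shortest code, e.g. {'a': '0', 'b': '10', ...}.
print(huffman_codes({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))
```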

RUN LENGTH

The goal of image compression is to remove redundancy by minimizing the number of
bits required to represent an image. It reduces redundancy by avoiding duplicate data, and it
also reduces the storage memory needed to load an image. An image compression algorithm can
be lossy or lossless. In this paper, DWT-based image compression algorithms have been
implemented using the MATLAB platform. Then, the improvement of image compression
through Run-Length Encoding (RLE) has been achieved. Three images, namely Baboon, Lena
and Pepper, have been taken as test images for implementing the techniques. Various objective
image metrics, namely compression ratio, PSNR and MSE, have been calculated (see the sketch
after this paragraph). It has been observed from the results that RLE-based image compression
achieves a higher compression ratio compared with DWT-based image compression algorithms.
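
For reference, the objective metrics named above can be computed as follows. This is a Python sketch assuming 8-bit greyscale images held as NumPy arrays, not the report's MATLAB code:

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    # Ratio of original to compressed file size.
    return original_bytes / compressed_bytes

def mse(original, reconstructed):
    # Mean squared error between the original and reconstructed images.
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB (infinite for a perfect copy).
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```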

Run-length encoding is the simplest form of lossless image compression: long sequences of the
same data are represented in a shorter form. Long runs of redundant data are stored as a single
data value and a count. It can be even more efficient if the data uses only two symbols (for
example 0 and 1) in its bit pattern and one symbol is more frequent than the other. Images with
repeating grey values along rows (or columns) can be compressed by storing "runs" of identical
grey values in the format (grey value, run length), as in the sketch below:
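
A minimal run-length encoder and decoder for one row of grey values, as a Python sketch; the sample row is an assumption for illustration:

```python
def rle_encode(row):
    # Store each run of identical grey values as (grey value, run length).
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([value, 1])     # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)    # expand each run back out
    return out

print(rle_encode([255, 255, 255, 0, 0, 17]))   # [(255, 3), (0, 2), (17, 1)]
```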

Using optimized run-length coding to compress MRI medical images has helped greatly reduce
the size of the compressed image; the algorithm is used to transform the data or image. Above,
the image compression techniques and the comparison between run-length coding and the discrete
wavelet transform were outlined: two different categories of compression were discussed,
enlarging on their advantages and disadvantages, together with a brief overview of some medical
image compression techniques and a descriptive comparison between them. Both the compression
performance and the computational complexity can be enhanced.
EZW coding
An EZW encoder is an encoder specially designed for use with wavelet transforms, which
explains why it has the word wavelet in its name. The EZW encoder was originally designed
to operate on images (2D signals), but it can also be used on signals of other dimensions. The
EZW encoder is based on progressive encoding, compressing an image into a bitstream with
increasing accuracy. This means that as more bits are added to the stream, the decoded
image will contain more detail, a property similar to JPEG-encoded images. It is also similar
to the representation of a number like π: every digit we add increases the accuracy of the
number, but we can stop at any accuracy we like. Progressive encoding is also known as
embedded encoding, which explains the E in EZW. This leaves us with the Z; this letter is a
bit more complicated to explain, and it is covered in the next section. Coding an
image using the EZW scheme, together with some optimizations, results in a remarkably
effective image compressor with the property that the compressed data stream can have any
bit rate desired. Any bit rate is only possible if there is information loss somewhere, so
the compressor is lossy. However, lossless compression is also possible with an EZW
encoder, but of course with less spectacular results.

Zerotree coding
The EZW encoder is based on two important observations:
1. Natural images in general have a low pass spectrum. When an image is wavelet
transformed the energy in the subbands decreases as the scale decreases (low scale means
high resolution), so the wavelet coefficients will, on average, be smaller in the higher
subbands than in the lower subbands. This shows that progressive encoding is a very natural
choice for compressing wavelet transformed images, since the higher subbands only add
detail.
2. Large wavelet coefficients are more important than small wavelet coefficients.
These two observations are exploited by the EZW encoding scheme by coding the
coefficients in decreasing order, in several passes. For every pass a threshold is chosen
against which all the coefficients are measured. If a wavelet coefficient is larger than the
threshold it is encoded and removed from the image, if it is smaller it is left for the next pass.
When all the wavelet coefficients have been visited the threshold is lowered and the image is
scanned again to add more detail to the already encoded image. This process is repeated until
all the wavelet coefficients have been encoded completely or another criterion has been
satisfied (maximum bit rate for instance). The trick is now to use the dependency between the
wavelet coefficients across different scales to efficiently encode large parts of the image
which are below the current threshold. It is here where the zerotree enters.
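
To make the pass structure concrete, here is a deliberately simplified Python sketch of the successive-approximation passes: it emits significant coefficients and halves the threshold each pass, but omits the zerotree symbols that give EZW its efficiency. The function name and pass count are assumptions:

```python
import numpy as np

def significance_passes(coeffs, num_passes=4):
    # Successive approximation: start at the largest power-of-two
    # threshold not exceeding the largest coefficient magnitude.
    c = coeffs.astype(float).copy()          # assumes a nonzero 2D array
    threshold = 2.0 ** np.floor(np.log2(np.abs(c).max()))
    stream = []
    for _ in range(num_passes):
        # Encode and "remove" every coefficient at or above the threshold.
        for idx in zip(*np.nonzero(np.abs(c) >= threshold)):
            stream.append((idx, np.sign(c[idx]) * threshold))
            c[idx] -= np.sign(c[idx]) * threshold   # keep only the residual
        threshold /= 2.0                            # next pass adds finer detail
    return stream
```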
A wavelet transform transforms a signal from the time domain to the joint time-scale domain.
This means that the wavelet coefficients are two-dimensional. If we want to compress the
transformed signal we have to code not only the coefficient values, but also their position in
time. When the signal is an image then the position in time is better expressed as the position
in space. After wavelet transforming an image we can represent it using trees because of the
subsampling that is performed in the transform. A coefficient in a low subband can be
thought of as having four descendants in the next higher subband (see figure 1). The four
descendants each also have four descendants in the next higher subband, and we see a quadtree
emerge: every root has four leaves. The index arithmetic is sketched below.
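
The parent-child relation in this quadtree is simple index arithmetic; a sketch, assuming the usual convention that the coefficient at (r, c) in one subband has four children in the next finer subband:

```python
def children(r, c):
    # Four descendants of the coefficient at (r, c) in the next finer subband.
    return [(2 * r,     2 * c), (2 * r,     2 * c + 1),
            (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
```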
LITERATURE REVIEW
The objective of image compression is to reduce the storage space required to store digital
images. Digital images are used in several fields and sometimes need to be compressed for
various applications; image compression techniques are chosen according to the requirements
of the application. The objective of compression is to reduce the number of bits as much as
possible, while keeping the visual quality of the reconstructed image close to the original
image. In [7], the authors proposed a compression technique using the two lossless
methodologies Huffman coding and Lempel-Ziv-Welch coding to compress the image. First the
image is compressed with Huffman coding, producing the Huffman tree and Huffman code
words. The Huffman code words are then concatenated together and compressed using
Lempel-Ziv-Welch coding. Finally, the Retinex algorithm is used on the compressed image to
enhance the contrast and improve the quality of the image. The amount of compression achieved
depends to a great extent upon the characteristics of the source. It was noted that higher
data redundancy helps to achieve more compression.

The reproduced image and the original image are equal in quality when the Retinex algorithm
is used, as it enhances the image contrast using MSR (multi-scale Retinex). In [8], a lossless
image compression based on the Huffman algorithm is presented. The image is converted into an
array using the Delphi image control tool. The Huffman coding method is used to remove
redundant codes from the image and compress a BMP image file.

Huffman coding is a coding technique that attempts to reduce the number of bits required to
represent a string of symbols. This image compression scheme is well suited for grey-scale
(black and white) bitmap images. Huffman coding suffers from the fact that the decompressor
needs some knowledge of the probabilities of the symbols in the compressed files; if this
information is unavailable, more bits are needed to encode the file.

Huffman coding requires knowledge of the probabilities of the source sequence. If this
knowledge is not available, Huffman coding becomes a two-pass operation: in the first pass
statistics are collected, and in the second pass the source is encoded. In order to turn this
algorithm into a one-pass procedure, adaptive algorithms were developed. This method can be
used both for lossy and lossless compression. It provides better compression ratios compared
with other lossless coding methods such as the LZW coding method and JPEG lossless
compression, and its performance increases with better predictive methods. M. Mozammel
Hoque Chowdhury suggests an image compression scheme based on the discrete wavelet
transform (DWT). This reduces the redundancy of the image data so that the data can be
stored or transmitted in an efficient form. It was noted that the discrete wavelet transform
offers less computational complexity without any sacrifice in image quality.

First the image is decomposed into sub-bands and then the resulting coefficients are compared
with a threshold. Coefficients below the threshold are taken as zero. Finally, the coefficients
above the threshold value are selected and encoded with a lossless compression technique. He
also noted that wavelets are well suited to time-limited data and that the wavelet-based image
compression technique maintains better image quality with fewer errors. Monika Rathee presents
the discrete Haar wavelet transform (DWT) for image compression. She states that the DWT can
be used to reduce the image size without losing much of the resolution. The Haar transform is
a very fast transform. The discrete wavelet transform (DWT) represents an image as a sum of
wavelet functions on different resolution levels. There exists a large choice of wavelet
families depending on the wavelet function, and the choice of wavelet family depends on the
application. A Haar mother wavelet function and its scaling function have also been described.
Compression is done by first digitizing the source image into a signal and then decomposing
the signal into a sequence of wavelet coefficients. A threshold value is used to modify the
wavelet coefficients. Quantization is performed to convert a sequence of floating-point numbers
into a sequence of integers. Entropy encoding is applied to change an integer sequence into a
shorter sequence whose numbers are 8-bit integers. This technique considerably improves the
time performance of the system.
The wavelet transform is one of the important methods used for image compression.
Information about a wavelet image compression technique that fulfills the requirements of
image compression, namely reducing data storage capacity or transmission bandwidth, was
presented by Rasika N Khatke. The method uses a wavelet transform technique to generate
transform coefficients of an input image. Furthermore, the method generates and encodes an
efficient tree structure of the transform coefficients that are obtained. The transform
coefficients are quantized based on the quantizing interval to produce quantized transform
coefficients. The modified tree list along with the quantized transform coefficients is
arithmetically coded. The wavelet transform analyzes a signal in time and scale. It offers
multi-resolution capability and provides improvements in picture quality at higher compression
ratios.

In [13], A. M. Raid presented the use of the wavelet-based image compression algorithm
Embedded Zerotree Wavelet (EZW). It is an effective image compression algorithm.
Progressive encoding is a common option for compressing wavelet-transformed images, since
the details are concentrated in the higher subbands. Zerotree coding provides compact binary
maps of the significant wavelet coefficients. The trees maintain a parent-child
relationship among the coefficients of subbands having the same spatial orientation. These
parent-child dependencies contribute excellent performance to the zerotree coders. It was
noted that EZW is fast, robust and efficient enough to be implemented on still and complex
images with significant image compression.

Security can be given to the image along with effective compression. Ch. Naveen, in his paper,
discussed the role of EZW in providing additional security to an image along with its main
function of compression. The process starts by compressing the image using EZW, which
generates four different data vectors, one of which is the coded sequence. The coded sequence
is taken and converted into a 2D sequence. On the 2D data, a chaos-based scrambling method is
applied using two initial conditions (keys), for the rows and columns respectively. The user
must provide the same keys at the time of descrambling and reconstruction of the image. To
reconstruct the image using the decoding process, the encoded bit stream is required in the
same order as at the time of generation. This helps in making the algorithm more robust.
ZHANG Wei noted that a joint EZW and Huffman encoding algorithm can reduce the number of
digits needed for coding. The average code length will be shorter due to the repetition in the
output stream if joint Huffman coding is applied to it, which can improve the compression
ratio. Huffman encoding is a lossless coding method, so in theory it does not affect the image
recovery. This joint technique of EZW with a Huffman coding scheme provides a method with
better compression ratio and coding efficiency.

Set partitioning in hierarchical trees (SPIHT) is a wavelet-based algorithm which is
computationally very fast and offers a good compression ratio. It is an extension of the
embedded zerotree wavelet (EZW) coding method. It is based on spatial orientation trees and
makes use of a set-partitioning sorting algorithm. SPIHT defines parent-children relationships
between similar sub-bands to establish spatial orientation trees. The SPIHT algorithm encodes
the image file using three lists: the LIP (list of insignificant pixels), the LIS (list of
insignificant sets) and the LSP (list of significant pixels). The LIP contains the individual
coefficients that have magnitudes smaller than the threshold values. The LIS contains the sets
of wavelet coefficients that are defined in the tree structure with magnitudes smaller than the
threshold values. The LSP is the set of pixels having magnitudes greater than the threshold
value. A sorting process and a refinement process are carried out to select the coefficients
that are important. Precise rate control is an important characteristic of the SPIHT algorithm.

Charles D. Creusere proposed a wavelet-based image compression algorithm that achieves
robustness to transmission errors by partitioning the transform coefficients into groups [20].
These groups are independently processed using an embedded coder. Thus, a bit error in one
group does not affect the others, allowing more information to reach the decoder correctly. The
basic idea of this robust EZW image compression algorithm is to divide the wavelet
coefficients into S groups. Quantization and coding are done on each of them independently so
that S different embedded bit streams are created. By coding the wavelet coefficients with
independent bit streams, a single bit error truncates only one of the streams; the others are
still received completely. Thus robustness to transmission errors is added to an embedded
image compression algorithm without any appreciable increase in its complexity.

The authors also discussed the problems that arise with the normal EZW method. The
main issue in EZW is that a single bit error in the string can lead to the entire bit stream
being reconstructed incorrectly: the bits decoded after the error bit become useless for
reconstruction of the image, affecting the reconstruction quality of the whole image. To
overcome this problem the authors proposed a block-based EZW. The advantage of this method
is that a single bit error in the bit stream only affects the reconstruction quality of that
particular block; other blocks can be reconstructed without any problem.

An improvement or modification to the block-based EZW was proposed by Ch. Naveen [22].
The suggested modifications further improve the compression ratio. The proposed method
forces the maximum value in each block to the lowest maximum value of all the blocks in the
image. At the encoder, all the blocks are first scaled down to the same maximum value and
then encoded using the EZW technique. To reconstruct the original image, the scaled-down
values of all the blocks are scaled back up to their original maximum values at the receiver.
Thus the number of passes applied to each block will be equal to the lowest number of passes
taken by any one of the blocks in the image. This scaling-down approach reduces the number of
bits used for encoding the image, which in turn increases the compression ratio (a sketch of
the scaling step follows).
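
A Python sketch of the scaling step as described above, not the authors' actual code; the function and variable names are assumptions, and the per-block scale factors would have to be transmitted so the decoder can undo the scaling:

```python
import numpy as np

def scale_blocks(blocks):
    # Scale every block down so that all blocks share the lowest per-block
    # maximum; EZW then needs the same number of passes for each block.
    maxima = [np.abs(b).max() for b in blocks]
    target = min(maxima)
    scales = [m / target if target > 0 else 1.0 for m in maxima]
    scaled = [b / s for b, s in zip(blocks, scales)]
    return scaled, scales   # the scales let the receiver restore each block
```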
RESULTS
Lossless Image Compression
Huffman and Run-Length Encoding algorithms are used for lossless image compression.
Algorithm:

Input: the header and data read from the encoded file.
Output: image file.

Process:
Step 1: Find the grey-level probabilities for the image by finding its histogram.
Step 2: Order the input probabilities (histogram magnitudes) from smallest to largest.
Step 3: Combine the smallest two by addition.
Step 4: GO TO Step 2 until only two probabilities are left.
Step 5: Working backward along the tree, generate the code by alternating assignments of 0
and 1.
Input image: 25 KB
Output image: 10 KB

Input image: 573 KB
Output image: 52 KB

Lossy Image Compression


In the lossy image compression technique we used the Embedded Zerotree Wavelet (EZW)
technique.

Algorithm:
Input image: 253 KB
Output image: 80 KB

Input image: 768 KB
Output image: 65 KB

Input image: 253 KB

Here we can see that the file size under the lossy compression technique is greatly reduced
compared to lossless compression, but more of the information in the image is lost.
APPLICATIONS

Image Compression Techniques for MRI Brain Image:

Compression:
Compression is a method that reduces the size of files. The aim of compression is to
remove the bits that are not required to represent the data and to decrease the
transmission time. Compression is achieved by encoding the data, and the data is decompressed
to its original form by decoding. Common compressed file extensions are .sit, .tar and .zip,
which indicate the different types of software used to compress files.

Decompression:
The compressed file is first decompressed and then used. There are many software tools
used for decompression, and the choice depends on the type of compressed file; for example,
the WinZip software is used to decompress .zip files.

MEDICAL IMAGE COMPRESSION:


Most hospitals store medical image data in digital form using picture archiving and
communication systems, owing to the extensive digitization of data and increasing use of
telemedicine. Inside medical image processing, many medical issues are treated through this
processing, comprising subjects related to the heart, brain, lungs, kidneys, stomach, cancer
diagnosis, etc. An effort has been made to provide effective storage of medical images with
the patient's medical record for future use, and also for effective transfer between hospitals
and health care centers. In the following, the most important medical image compression
techniques that have been proposed are reviewed.
IMAGE COMPRESSION SYSTEM FOR MOBILE COMMUNICATION:
Mobile communication has great potential for users, fulfilling the dream of real-time
multimedia communication with voice, image, and text. The large amount of redundancy in still
image data should be removed using a suitable image compression algorithm (ICA) before
transmission over a wireless channel. Thus, an ICA should be adaptive, simple, cost-effective
and suitable for feasible implementation. Hardware implementation of the different algorithms
has improved using modern, fast, and cost-effective technologies. Recently there has been
increasing interest in multimedia communication over wireless channels using information such
as data, image, and video, and numerous types of portable communication devices are becoming
popular. Wireless image transmission is one of the most wanted features of multimedia
communication. However, a mobile communication (MC) system is susceptible to fading, a
phenomenon that is extremely random and creates problems for image transmission over wireless
channels. Image transmission is more challenging than in a fixed-line system.

In point-to-point wireless communication, each subscriber is allocated a given bandwidth
that is determined according to the required quality of the reconstructed image data. The
increasing availability of very low-bandwidth digital communication channels, coupled with
multimedia applications that require high-quality service, means that the development of more
powerful image compression and transmission techniques remains a substantial topic of
interest. One promising way to increase the efficiency of image data compression and
transmission is to make an interface between the human user and the displayed image,
exploiting an important property of the human visual system. Considering the size and
weight of portable devices, image communication faces several restrictions, such as
limits on energy, image processing power, computation delay and memory requirements.

Image compression in the retail store and government agency:


Image compression is also useful to any organization that requires the viewing and storing of
images to be standardized, such as a chain of retail stores or a federal government agency. In
the retail store example, the introduction and placement of new products or the removal of
discontinued items can be much more easily completed when all employees receive, view and
process images in the same way. Federal government agencies that standardize their image
viewing, storage and transmitting processes can eliminate large amounts of time spent in
explanation and problem solving. The time they save can then be applied to issues within the
organization, such as the improvement of government and employee programs.

Image compression in the security industry:

In the security industry, image compression can greatly increase the efficiency of recording,
processing and storage. However, in this application it is imperative to determine whether one
compression standard will benefit all areas. For example, in a video networking or closed-
circuit television application, several images at different frame rates may be required. Time is
also a consideration, as different areas may need to be recorded for various lengths of time.
Image resolution and quality also become considerations, as does network bandwidth, and the
overall security of the system.
In today's world of growing technology, security is of utmost concern, and with the increase in
cybercrime, providing only network security is not sufficient. Security provided to images such
as blueprints of company projects, or secret images of concern to the army or of interest to a
company, using image steganography and stitching is beneficial. As the text message is
encrypted using the AES algorithm and embedded in a part of the image, the text message is
difficult to find. Moreover, since the secret image is broken down into parts and then sent to
the receiver, it is difficult for trespassers to get access to all the parts of the image at
once, increasing security to a much needed higher level and making it highly difficult for an
intruder to detect and decode the document. There is no limitation on the image format:
anything from a BMP to a GIF image can be used, and the images can be grey-scale or coloured.
The size of the message needs to be only 140 characters.

Image compression in museums:

Museums and galleries consider the quality of reproductions to be of the utmost importance.
Image compression, therefore, can be very effectively applied in cases where accurate
representations of museum or gallery items are required, such as on a Web site. Detailed images
that offer short download times and easy viewing benefit all types of visitors, from the student
to the discriminating collector. Compressed images can also be used in museum or gallery
kiosks for the education of that establishment’s visitors. In a library scenario, students and
enthusiasts from around the world can view and enjoy a multitude of documents and texts
without having to incur travelling or lodging costs to do so.
CONCLUSION

A picture can say more than a thousand words. However, storing an image can cost more than
a million words. This is not always a problem because now computers are capable enough to
handle large amounts of data. However, it is often desirable to use the limited resources more
efficiently. For instance, digital cameras often have a totally unsatisfactory amount of memory
and the internet can be very slow. In these cases, the importance of image compression
is greatly felt. The rapid increase in the range and use of electronic imaging justifies
attention to the systematic design of an image compression system and to providing the image
quality needed in different applications. Wavelets can be used effectively for this purpose. A
low-complexity 2D image compression method using Haar wavelets as the basis functions, along
with quality measurement of the compressed images, has been presented here. As further work,
we propose to use the Multiwavelet Transform or the Wavelet Packet Transform, which may
achieve a higher compression ratio.

REFERENCES
[1] Mrs.Bhumika Gupta, “Study Of Various Lossless Image Compression Technique”, IJETTCS, volume 2, issue
4, July-August 2013

[2] Harpreet Kaur, Rupinder Kaur, Navdeep Kumar, “Review of Various Techniques for Medical Image
Compression”, International Journal of Computer Applications, Volume 123, No.4, August 2015

[3] Bhonde Nilesh, Shinde Sachin, Nagmode Pradip, D.B. Rane, “Image Compression Using Discrete Wavelet
Transform”, IJCTEE, Volume 3, March-April 2013.

[4] Malwinder Kaur, Navdeep Kaur, “A Literature Survey on Lossless Image Compression”, International Journal
of Advanced Research in Computer and Communication Engineering, Vol. 4, Issue 3, March 2015.

[5] A. Alarabeyyat, S. Al-Hashemi, T. Khdour, M. Hjouj Btoush, S. Bani-Ahmed, R. Al-Hashemi, “Lossless
Image Compression Technique Using Combination Methods”, Journal of Software Engineering and Applications,
2012.

[6] Richa Goyal, Jasmeen Jaura, “A Review of Various Image Compression Techniques”, International Journal
of Advanced Research in Computer Science and Software Engineering, Volume 4, Issue 7, July 2014

[7] Dalvir Kaur, Kamaljit Kaur, “Huffman Based LZW Lossless Image Compression Using Retinex Algorithm”,
International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 8,
August 2013.

[8] Mridul Kumar Mathur, Seema Loonker, Dr. Dheeraj Saxena, “Lossless Huffman Coding Technique For Image
Compression And Reconstruction Using Binary Trees”, IJCTA, Vol 3 , Jan-Feb 2012.

[9] Jagadeesh B, Ankitha Rao, “An approach for Image Compression Using Adaptive Huffman Coding”,
International Journal of Engineering Research & Technology (IJERT), Vol. 2 Issue 12, December – 2013.

[10] M. Mozammel Hoque Chowdhury, Amina Khatun, “Image Compression Using Discrete Wavelet
Transform”, International Journal of Computer Science Issues, Vol. 9, Issue 4, No 1, July 2012.

[11] Monika Rathee, Alka Vij, “ Image compression Using Discrete Haar Wavelet Transforms”, International
Journal of Engineering and Innovative Technology (IJEIT), Volume 3, Issue 12, June 2014.
[12] Ms. Rasika N Khatke, “Image Compression Using Wavelet Transform”, Imperial Journal of Interdisciplinary
Research (IJIR), Vol-2, Issue-9, 2016.

[13] A.M.Raid, W.M.Khedr, M.A. El-dosuky, Wesam Ahmed, “Image Compression Using Embedded Zerotree
Wavelet”, Signal & Image Processing: An International Journal (SIPIJ), Vol.5, No.6, December 2014.

[14] Ch. Naveen, T Venkata Sainath Gupta, V.R. Satpute, A.S Gandhi, “A Simple and Efficient Approach for
Medical Image Security Using Chaos on EZW”, IEEE 2015.

[15] Zhang Wei, “An Improved Image Encoding Algorithm Based on EZW and Huffman Joint Encoding”, IEEE
2014.

[16] S. NirmalRaj, “SPIHT: A Set Partitioning in Hierarchical Trees Algorithm for Image Compression”,
Contemporary Engineering Sciences, Vol. 8, 2015.

[17] Ritu Chourasiya, Prof. Ajit Shrivastava, “A Study Of Image Compression Based Transmission Algorithm
Using SPIHT for Low Bit Rate Application”, Advanced Computing: An International Journal (ACIJ), Vol.3, No.6,
November 2012.

[18] Kazi Rafiqul Islam, Md. Anwarul Abedin, Masuma Akter, Rupam Deb, “High Speed ECG Image
Compression Using Modified SPIHT”, International Journal of Computer and Electrical Engineering, Vol. 3, No.
3, June 2011.

[19] Sure. Srikanth, Sukadev Meher, “Compression Efficiency for Combining Different Embedded Image
Compression Techniques with Huffman Encoding”, IEEE, April 2013.

[20] Charles D. Creusere, “A New Method of Robust Image Compression Based on the Embedded Zerotree
Wavelet Algorithm”, IEEE Trans., Vol. 6, No. 10, October 1997.

[21] Jen-Chang Liu, Wen-Liang Hwang, Wen-Jyi Hwang, Ming-Syan Chen, "Robust Block-Based EZW Image
Compression with Channel Noise Optimized Rate-Distortion Functions", Proceedings of 1999 International
Conference on Image Processing, ICIP 99, 24-28 October 1999, pp. 560-564.

[22] Ch. Naveen, V.R. Satpute, A.G. Keskar, “An Efficient Low Dynamic Range Image Compression using
Improved Block Based EZW”, IEEE, pg: 1-6, 2015.
