
Jonathan Medenblik

Math 547 Section 001


Research paper
Jeremy Marzuola
Compression and Using SVD to Compress Images
Overview of the paper's main topic
One way linear algebra can be applied is in image compression. An image is stored as a matrix, where each entry records the darkness of the corresponding pixel. In a black and white image this is simple: each pixel is given a value from 0 to 255 (2^8 shades) that indicates how dark that pixel is. A color image takes three times as much data, as there is a separate matrix for the red, green, and blue components of the image. Image compression is the act of taking these matrices and making them storable in a smaller amount of space. While there are different methods of doing this, compression algorithms actually become less efficient when the image must be preserved exactly; the less precisely the image has to be reproduced, the less space is needed to store the compressed image. Also, the smaller the rank of the image matrix, the more easily it is compressed. One method of image compression uses the Singular Value Decomposition (SVD) (Prasantha). Compression is one of the reasons that image files vary in size: if you take a photo on a camera and don't change any of the settings, the next picture you take will still not be the same size in storage. This is what the different image formats are doing, using different algorithms to store the image more efficiently than a raw value from 0 to 255 for every pixel.
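As a concrete illustration of how an image becomes matrices (not part of the compression itself), here is a minimal sketch using Java's standard BufferedImage class to pull the red, green, and blue channels into separate 0-255 matrices; the file name is an arbitrary placeholder.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ChannelExtractor {
    public static void main(String[] args) throws Exception {
        // Load an image from disk (the path is a placeholder).
        BufferedImage img = ImageIO.read(new File("photo.png"));
        int h = img.getHeight(), w = img.getWidth();
        int[][] red = new int[h][w], green = new int[h][w], blue = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);        // packed 0xAARRGGBB value
                red[y][x]   = (rgb >> 16) & 0xFF;  // each channel is 0..255
                green[y][x] = (rgb >> 8) & 0xFF;
                blue[y][x]  = rgb & 0xFF;
            }
        }
        System.out.println("Top-left red value: " + red[0][0]);
    }
}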
History and background of compression
Image compression is not a new concept, and has been around almost as long as people have been saving images. With a large number of images and only limited amounts of storage, compression has always been an important part of storing images. However, most types of compression are what are called lossy compressions, which essentially means that some of the image is lost during compression. JPEG is one of the most common lossy image formats and is used on many websites. The part that is lost is normally not evident to average viewers, as these algorithms try to remove only the parts that are least important to the human eye. By removing information that the eye deems unimportant, the file size can be reduced while the image still contains enough of its original content to satisfy viewers. Not every method of storing images is lossy, however, as many fields of work cannot afford to have important image information accidentally removed. Medical fields especially will not use lossy image compression, because images such as x-rays or MRI scans need to be precise: small imperfections in the body that the image captures could be removed by a lossy compression method. Such errors are unacceptable, so lossy compression is not used there. Lossless compression is less common for images and video on the internet and in streaming, but more common among medical professionals. It does not usually save as much space as lossy compression, since all of the original data must remain in some form. Lossless compression is easiest when data is repeated. For example, the number sequence (2, 2, 2, 2) can be represented by a 2 and a 4 that records the number of twos (a minimal sketch of this idea follows at the end of this section). Such repetitions occur in many types of files, from text files to some image formats. GIF and PNG are two examples of lossless image formats that retain all of their original data. Depending on the image, neither approach is automatically the better compression format; surprisingly, in some circumstances a losslessly compressed image can even be stored using less space. A BMP image, by contrast, will be larger than any of its counterparts because it compresses nothing; it retains all the original data at full size, with the file size depending only on the image dimensions. On Windows computers this format was commonly used in various places in Microsoft's operating system (Lossless vs. Lossy).
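To make the run-length idea above concrete, here is a minimal sketch of run-length encoding on a sequence of numbers; it illustrates the general idea only, not the actual algorithms used by GIF or PNG.

import java.util.ArrayList;
import java.util.List;

public class RunLengthEncoder {
    // Encodes a sequence as (value, count) pairs, e.g. 2, 2, 2, 2 -> (2, 4).
    public static List<int[]> encode(int[] data) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < data.length) {
            int value = data[i], count = 0;
            while (i < data.length && data[i] == value) { count++; i++; }
            runs.add(new int[]{value, count});
        }
        return runs;
    }

    public static void main(String[] args) {
        for (int[] run : encode(new int[]{2, 2, 2, 2, 7, 7})) {
            System.out.println("value " + run[0] + " repeated " + run[1] + " times");
        }
    }
}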
Other uses of compression
Compression like this is not just helpful for saving storage space; it is also necessary for certain applications. Video is just a collection of images, normally displayed at anywhere from 24 frames per second up to 60 frames per second or more. Without compression, even a short video would make it very difficult for streaming websites such as YouTube or Vimeo to succeed, as many Internet connections would not be able to download the video faster than it plays, so viewers would either have to download the video first or wait for buffering whenever the player needs more of it. Websites such as these therefore rely heavily on different forms of compression. Video compression can sometimes be seen in pixelation. This is most easily noticed when the video player lags or briefly buffers, but it can also be seen, more subtly, when a video is played at a resolution below the highest one available, especially on a large monitor or a projector. The picture to the side is an example from Vimeo of how the bitrate would affect an image at a given point.
SVD construction
A = U S V^T

Above is the equation that we will use to compute the Singular Value Decomposition, where U and V are orthogonal matrices (V^T denotes the transpose of V) and S is a diagonal matrix. For a matrix A, we first compute S. This is done using the eigenvalues of A A^T and A^T A. The eigenvalues are found from the equations det(A A^T − λ I_n) = 0 and det(A^T A − λ I_n) = 0, where λ represents an eigenvalue and n is the size of the identity matrix, i.e. the number of columns of our square matrix. For example, when n = 2 the matrix I_n is

[ 1  0 ]
[ 0  1 ].

The determinants are then calculated from the difference of the two matrices (for both products). This produces a polynomial in λ, the characteristic polynomial, which can be solved for the two values of λ on a 2x2 matrix.
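For a 2x2 matrix M such as A A^T, the characteristic polynomial is λ^2 − trace(M) λ + det(M) = 0 and can be solved with the quadratic formula. The sketch below illustrates only this step; the class and method names are my own.

public class Eigen2x2 {
    // Returns the eigenvalues of a 2x2 matrix [[a, b], [c, d]], largest first,
    // by solving lambda^2 - (a + d)*lambda + (a*d - b*c) = 0.
    public static double[] eigenvalues(double a, double b, double c, double d) {
        double trace = a + d;
        double det = a * d - b * c;
        double disc = Math.sqrt(trace * trace - 4 * det); // real for symmetric M
        return new double[]{ (trace + disc) / 2, (trace - disc) / 2 };
    }

    public static void main(String[] args) {
        // Eigenvalues of A*A^T for A = [[2, 2], [-1, 1]] (used in the example below).
        double[] lambdas = eigenvalues(8, 0, 0, 2);
        System.out.println(lambdas[0] + " and " + lambdas[1]); // prints 8.0 and 2.0
    }
}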
Singular Values
The singular values are simply the square roots of the eigenvalues that were just computed. For a square n x n matrix there can be up to n of them. σ_i is used to denote a singular value, where σ_i is the i-th largest of all the singular values, so that σ_i ≥ σ_{i+1}. In our formula, the matrix S is created by filling the diagonal of an n x n matrix with these singular values, so that S_ij = σ_i whenever i = j and 0 otherwise. For a 2x2 matrix, this would be

[ σ_1   0  ]
[  0   σ_2 ].
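As a small illustration (with my own method name), S can be constructed directly from the eigenvalues once they are sorted from largest to smallest:

public class DiagonalS {
    // Builds the diagonal matrix S from eigenvalues sorted largest first:
    // S[i][i] = sqrt(lambda_i), all other entries are 0.
    public static double[][] buildS(double[] eigenvalues) {
        int n = eigenvalues.length;
        double[][] s = new double[n][n];
        for (int i = 0; i < n; i++) {
            s[i][i] = Math.sqrt(eigenvalues[i]);
        }
        return s;
    }

    public static void main(String[] args) {
        double[][] s = buildS(new double[]{8, 2}); // eigenvalues from the example below
        System.out.println(s[0][0] + " " + s[1][1]); // prints sqrt(8) and sqrt(2)
    }
}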
Eigenvectors
We must also compute the eigenvectors. These are calculated using part of the expression used for the determinants. We use the equations (A A^T − λ I_n) v = 0 and (A^T A − λ I_n) v = 0 to calculate U and V, respectively. For each eigenvalue λ found above we solve for the vector v, convert it into a unit vector, and build U from the resulting unit vectors; V is built in a similar fashion from the solutions of the second equation.
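Continuing the 2x2 illustration, the sketch below solves (M − λI)v = 0 for a unit eigenvector by taking a vector perpendicular to a nonzero row of M − λI; the names are my own and the code handles only the 2x2 case.

public class EigenVector2x2 {
    // Returns a unit eigenvector of M = [[a, b], [c, d]] for eigenvalue lambda.
    public static double[] unitEigenvector(double a, double b, double c, double d, double lambda) {
        // Rows of (M - lambda*I); the eigenvector is orthogonal to any nonzero row.
        double r1x = a - lambda, r1y = b;
        double r2x = c,          r2y = d - lambda;
        double vx, vy;
        if (r1x != 0 || r1y != 0)      { vx = -r1y; vy = r1x; } // perpendicular to row 1
        else if (r2x != 0 || r2y != 0) { vx = -r2y; vy = r2x; } // perpendicular to row 2
        else                           { vx = 1;    vy = 0;   } // M = lambda*I: any vector works
        double norm = Math.sqrt(vx * vx + vy * vy);
        return new double[]{ vx / norm, vy / norm };
    }

    public static void main(String[] args) {
        // Eigenvector of A^T*A = [[5, 3], [3, 5]] for eigenvalue 8 (from the example below).
        double[] v = unitEigenvector(5, 3, 3, 5, 8);
        // Prints approximately -0.707, -0.707 (a valid eigenvector; the sign is arbitrary).
        System.out.println(v[0] + ", " + v[1]);
    }
}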
What do we do now?
Now that we have all the parts of the equation, we can manipulate it to compress the image and make it smaller. We have A = U S V^T. To compress the image, we can remove parts of the matrices. We will call k the value that we change to compress the image by different amounts. U is reduced to an n x k matrix, V^T is reduced to a k x n matrix, and S becomes a k x k matrix; in other words, the last (n − k) rows and columns of S (where 0 < k < n) are removed. Keep in mind that the smallest singular values are the ones cut off here, so when we re-create the image it will lose some data, but not all of it. Ideally this removes only the least important parts of the image. As k gets smaller, the amount of compression increases and the file size decreases, but the quality of the image is reduced as well, since more of the original image is lost.

These pictures show how the k value affects the apparent quality of the image: the smaller k is, the lower the quality. In this example the image on the left was produced with a higher k value than the image on the right.
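Here is a minimal sketch of the truncation step on plain 2D arrays, assuming U, S, and V^T have already been computed and the matrices are square; the helper names are my own.

public class TruncateSVD {
    // Copies the top-left rows x cols block of a matrix.
    public static double[][] truncate(double[][] m, int rows, int cols) {
        double[][] out = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                out[i][j] = m[i][j];
        return out;
    }

    // Keeps the first k columns of U, the top-left k x k block of S,
    // and the first k rows of V^T, giving the rank-k approximation U_k S_k Vt_k.
    public static double[][][] rankK(double[][] u, double[][] s, double[][] vt, int k) {
        int n = u.length;
        double[][] uK  = truncate(u,  n, k); // n x k
        double[][] sK  = truncate(s,  k, k); // k x k
        double[][] vtK = truncate(vt, k, n); // k x n
        return new double[][][]{ uK, sK, vtK };
    }
}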
Very Quick Example of SVD on a small matrix
Let's take the 2x2 matrix

A = [  2  2 ]
    [ -1  1 ].

We find that

A A^T = [ 8  0 ]          A^T A = [ 5  3 ]
        [ 0  2 ],                 [ 3  5 ].

Using these we find the eigenvalues, which are 8 and 2 for both products. From them, the eigenvectors of A A^T are (1, 0) and (0, 1), and the eigenvectors of A^T A are (1/√2, 1/√2) and (-1/√2, 1/√2). Using those values, we finally have our SVD of matrix A:

[  2  2 ]   [ 1  0 ] [ √8   0 ] [  1/√2  1/√2 ]
[ -1  1 ] = [ 0  1 ] [  0  √2 ] [ -1/√2  1/√2 ].

If we wanted to compress this "image" with k = 1, we would keep only the first column of U, the largest singular value √8, and the first row of V^T, and remove the rest of each matrix. Of course, very few images are actually this small, and on a 2x2 matrix this removes a significant amount of the information.
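As a sanity check on the example above, this small self-contained sketch multiplies U, S, and V^T back together and prints the result, which should reproduce A up to floating-point rounding.

public class VerifySVD {
    // Plain matrix multiplication for rectangular double arrays.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, p = b.length;
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int t = 0; t < p; t++)
                    out[i][j] += a[i][t] * b[t][j];
        return out;
    }

    public static void main(String[] args) {
        double r = 1 / Math.sqrt(2);
        double[][] u  = { {1, 0}, {0, 1} };
        double[][] s  = { {Math.sqrt(8), 0}, {0, Math.sqrt(2)} };
        double[][] vt = { {r, r}, {-r, r} };
        double[][] a = multiply(multiply(u, s), vt);
        // Prints 2.0 2.0 and -1.0 1.0 (up to rounding), matching the original A.
        for (double[] row : a) System.out.println(row[0] + " " + row[1]);
    }
}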
Here are two example programs I wrote in Java. This code is abstracted from what a full program would contain, and not all of the functions used here are defined.
This first class is for colored images. It doesn't do much other than pass the grayscale conversions of the red, green, and blue matrices to the next class.
import SVDCompression.ImageCompression;
import MyLibraries.Image;  // Assumed image type from the same arbitrary library.
import MyLibraries.Images; // Arbitrary image class with many image functions.

public class ColoredImageCompression {
    Image red;
    Image green;
    Image blue;
    Image newRed;
    Image newGreen;
    Image newBlue;
    Image full;
    ImageCompression imRed;
    ImageCompression imGreen;
    ImageCompression imBlue;

    public ColoredImageCompression(Image r, Image g, Image b) {
        red = r;
        green = g;
        blue = b;
        compress(red, green, blue);
        full = Images.makeColored(newRed, newGreen, newBlue);
    }

    // Same as the constructor, but usable on an existing object; returns the recombined image.
    public Image coloredCompress(Image r, Image g, Image b) {
        red = r;
        green = g;
        blue = b;
        compress(red, green, blue);
        full = Images.makeColored(newRed, newGreen, newBlue);
        return full;
    }

    // Compresses each channel separately with the black and white SVD class.
    private void compress(Image r, Image g, Image b) {
        imRed = new ImageCompression(r);
        imGreen = new ImageCompression(g);
        imBlue = new ImageCompression(b);
        newRed = imRed.getSVDImage();
        newGreen = imGreen.getSVDImage();
        newBlue = imBlue.getSVDImage();
    }

    public Image getCompressed() {
        return full;
    }
}
Here is the general class for doing SVD and compressing a black and white image.
import MyLibraries.Image;           // Assumed image type from the same arbitrary library.
import MyLibraries.Images;          // Arbitrary image class with many image functions.
import MyLibraries.Array2D;         // Arbitrary 2D array/matrix class.
import MyLibraries.Algebra;         // Class that would contain linear algebra functions.
import MyLibraries.MatrixFunctions; // Computes simple matrix functions.

public class ImageCompression {
    Image image;
    Image newImage;
    Array2D matrix, v, u, s, nV, nU, nS;
    int k = 10; // k value = 10 by default. Use setK(int k) to change it.

    public ImageCompression(Image p) {
        if (p != null) {
            image = p;
            matrix = new Array2D(image.height, image.width);
            construct(image);
            svd(matrix);
            compressByK();
            reconstructImage();
        }
    }

    // Copies the darkness value of every pixel into the matrix, row by row.
    public void construct(Image image) {
        for (int i = 0; i < image.height; i++) {
            for (int j = 0; j < image.width; j++) {
                matrix.set(i, j, Images.pixelAt(image, j, i));
            }
        }
    }

    // Builds U, S and V from the eigenvalues and eigenvectors of A*A^T and A^T*A.
    private void svd(Array2D bim) {
        Array2D aATrans = Algebra.multiply(bim, Algebra.transpose(bim)); // A * A^T
        Array2D aTransA = Algebra.multiply(Algebra.transpose(bim), bim); // A^T * A
        double ata0 = Algebra.eigen0(aTransA); // This method returns the first eigenvalue of the matrix.
        double ata1 = Algebra.eigen1(aTransA);
        double aat0 = Algebra.eigen0(aATrans);
        double aat1 = Algebra.eigen1(aATrans);
        // A loop should be here to create as many ata# and aat# values as needed.
        u = Algebra.eVector(aATrans, aat0, aat1); // columns of U: unit eigenvectors of A*A^T
        v = Algebra.eVector(aTransA, ata0, ata1); // columns of V: unit eigenvectors of A^T*A
        s = Algebra.constructDiagonal(Math.sqrt(ata0), Math.sqrt(ata1));
    }

    // Keeps the first k columns of U and V and the top-left k x k block of S.
    // The MatrixFunctions helpers are assumed to truncate down to the first k columns/rows.
    private void compressByK() {
        nU = MatrixFunctions.removeLastColumns(u, k);
        nV = MatrixFunctions.removeLastColumns(v, k);
        nS = MatrixFunctions.removeLastColumns(s, k);
        nS = MatrixFunctions.removeLastRows(nS, k);
    }

    // Rebuilds the (lossy) image from the truncated factors, i.e. U_k * S_k * V_k^T.
    private void reconstructImage() {
        newImage = MatrixFunctions.createFromSVD(nU, nS, nV);
    }

    public void setK(int k) {
        this.k = k;
    }

    public Image getSVDImage() { // Precondition: the constructor has run, so newImage is not null.
        return newImage;
    }
}
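Finally, here is a minimal sketch of how these two classes might be used together. Images.load, the channel helpers, Images.save, and the file names are hypothetical placeholders on the same arbitrary library.

import MyLibraries.Image;
import MyLibraries.Images;

public class CompressionDemo {
    public static void main(String[] args) {
        // Hypothetical helpers: load a photo and split it into its R, G and B channel images.
        Image photo = Images.load("photo.png");
        Image r = Images.redChannel(photo);
        Image g = Images.greenChannel(photo);
        Image b = Images.blueChannel(photo);

        // Compress each channel with the default k and recombine them.
        ColoredImageCompression cic = new ColoredImageCompression(r, g, b);

        // Hypothetical save helper to write the compressed result back to disk.
        Images.save(cic.getCompressed(), "photo_compressed.png");
    }
}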

References:
Aase, S. O.; Husoy, J. H.; Waldemar, P., "A critique of SVD-based image coding systems," Proceedings of the 1999 IEEE International Symposium on Circuits and Systems (ISCAS '99), vol. 4, pp. 13-16, Jul. 1999. doi: 10.1109/ISCAS.1999.779931.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=779931&isnumber=16886

Andrews, Harry C.; Patterson, C., III, "Singular Value Decomposition (SVD) Image Coding," IEEE Transactions on Communications, vol. 24, no. 4, pp. 425-432, Apr. 1976. doi: 10.1109/TCOM.1976.1093309.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1093309&isnumber=23865

Bretscher, Otto. Linear Algebra with Applications. 5th ed. Upper Saddle River, New Jersey: Pearson Education. Print.

"Lossless vs Lossy." MaximumCompression. Web. 19 Apr. 2013.
http://www.maximumcompression.com/lossless_vs_lossy.php

Mei Tian; Si-Wei Luo; Ling-Zhi Liao, "An investigation into using singular value decomposition as a method of image compression," Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, vol. 8, pp. 5200-5204, 18-21 Aug. 2005. doi: 10.1109/ICMLC.2005.1527861.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1527861&isnumber=32630

Prasantha, H. S.; Shashidhara, H. L.; Balasubramanya Murthy, K. N., "Image Compression Using SVD," International Conference on Computational Intelligence and Multimedia Applications, vol. 3, pp. 143-145, 13-15 Dec. 2007. doi: 10.1109/ICCIMA.2007.386.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4426357&isnumber=4426319
