
Abstract

Transform theory plays a fundamental role in image processing, as working with the
transform of an image instead of the image itself may give us more insight into the
properties of the image. Two dimensional transforms are applied to image enhancement,
restoration, encoding and description.
1. UNITARY TRANSFORMS
1.1 One dimensional signals
For a one dimensional sequence $\{f(x),\ 0 \le x \le N-1\}$ represented as a vector
$\mathbf{f} = [\,f(0)\ f(1)\ \cdots\ f(N-1)\,]^T$
of size $N$, a transformation may be written as
$g(u) = \sum_{x=0}^{N-1} T(u,x)\, f(x), \qquad 0 \le u \le N-1, \qquad \text{i.e.}\ \mathbf{g} = T\,\mathbf{f}$
where $g(u)$ is the transform (or transformation) of $f(x)$, and $T(u,x)$ is the so-called forward transformation kernel. Similarly, the inverse transform is the relation
$f(x) = \sum_{u=0}^{N-1} I(x,u)\, g(u), \qquad 0 \le x \le N-1$
or, written in matrix form,
$\mathbf{f} = I\,\mathbf{g} = T^{-1}\mathbf{g}$
where $I(x,u)$ is the so-called inverse transformation kernel.
If
$I = T^{-1} = T^{*T}$
the matrix $T$ is called unitary, and the transformation is called unitary as well. It can be proven (How?) that the columns (or rows) of an $N \times N$ unitary matrix are orthonormal and therefore form a complete set of basis vectors in the $N$-dimensional vector space.
In that case
$\mathbf{f} = T^{*T}\mathbf{g}, \qquad f(x) = \sum_{u=0}^{N-1} T^{*}(u,x)\, g(u)$
The columns of $T^{*T}$, that is, the vectors
$\mathbf{T}_u^{*} = [\,T^{*}(u,0)\ T^{*}(u,1)\ \cdots\ T^{*}(u,N-1)\,]^T$
are called the basis vectors of $T$.

1.2 Two dimensional signals (images)

As a one dimensional signal can be represented by an orthonormal set of basis vectors, an image can also be expanded in terms of a discrete set of basis arrays called basis images through a two dimensional (image) transform.
For an $N \times N$ image $f(x,y)$ the forward and inverse transforms are given below
$g(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} T(u,v,x,y)\, f(x,y)$
$f(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} I(x,y,u,v)\, g(u,v)$
where, again, $T(u,v,x,y)$ and $I(x,y,u,v)$ are called the forward and inverse transformation kernels, respectively.
The forward kernel is said to be separable if
$T(u,v,x,y) = T_1(u,x)\, T_2(v,y)$
It is said to be symmetric if $T_1$ is functionally equal to $T_2$, such that
$T(u,v,x,y) = T_1(u,x)\, T_1(v,y)$
The same comments are valid for the inverse kernel.
If the kernel $T(u,v,x,y)$ of an image transform is separable and symmetric, then the transform
$g(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} T(u,v,x,y)\, f(x,y) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} T_1(u,x)\, T_1(v,y)\, f(x,y)$
can be written in matrix form as follows
$\mathbf{g} = T_1\, \mathbf{f}\, T_1^{T}$
where $\mathbf{f}$ is the original image of size $N \times N$, and $T_1$ is an $N \times N$ transformation matrix with elements $t_{ij} = T_1(i,j)$. If, in addition, $T_1$ is a unitary matrix, then the transform is called separable unitary and the original image is recovered through the relationship
$\mathbf{f} = T_1^{*T}\, \mathbf{g}\, T_1^{*}$
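As a quick illustration of the separable unitary formulation (not part of the original notes), the following NumPy sketch builds an arbitrary real orthonormal matrix $T_1$ by QR factorisation, applies the forward transform as two matrix products, and recovers the image through the relationship above.

```python
# Minimal NumPy sketch (illustration only): a separable unitary image transform
# g = T1 f T1^T and its inverse f = T1^{*T} g T1^*.  T1 is an arbitrary real
# orthonormal matrix here; any unitary matrix (DFT, DCT, ...) could be used.
import numpy as np

N = 8
rng = np.random.default_rng(0)

T1, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthonormal rows and columns
f = rng.standard_normal((N, N))                    # an N x N "image" block

g = T1 @ f @ T1.T                                  # forward separable transform
f_rec = T1.conj().T @ g @ T1.conj()                # inverse transform

print(np.allclose(f_rec, f))                       # True: the image is recovered exactly
```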
1.3 Fundamental properties of unitary transforms
1.3.1 The property of energy preservation
In the unitary transformation
$\mathbf{g} = T\,\mathbf{f}$
it is easily proven (try the proof by using the relation $T^{-1} = T^{*T}$) that
$\|\mathbf{g}\|^2 = \|\mathbf{f}\|^2$
Thus, a unitary transformation preserves the signal energy. This property is called the energy preservation property.
This means that every unitary transformation is simply a rotation of the vector $\mathbf{f}$ in the $N$-dimensional vector space.
For the 2-D case the energy preservation property is written as
$\sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \left| f(x,y) \right|^2 = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \left| g(u,v) \right|^2$
1.3.2 The property of energy compaction

Most unitary transforms pack a large fraction of the energy of the image into relatively few of the transform coefficients. This means that relatively few of the transform coefficients have significant values, and these are the coefficients that are close to the origin (small index coefficients).
This property is very useful for compression purposes (Why?).
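The energy preservation property is easy to check numerically. The sketch below (NumPy, using a randomly generated unitary matrix rather than any specific transform from these notes) compares $\|\mathbf{f}\|^2$ with $\|\mathbf{g}\|^2$ for $\mathbf{g} = T\mathbf{f}$.

```python
# Minimal NumPy sketch: a unitary transform is a rotation, so ||g||^2 = ||f||^2.
import numpy as np

N = 16
rng = np.random.default_rng(1)

# a random complex unitary matrix obtained from a QR factorisation
T, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
f = rng.standard_normal(N)

g = T @ f
print(np.sum(np.abs(f) ** 2), np.sum(np.abs(g) ** 2))  # the two energies agree
```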
2. THE TWO DIMENSIONAL FOURIER TRANSFORM
2.1 Continuous space and continuous frequency
The Fourier transform is extended to a function $f(x,y)$ of two variables. If $f(x,y)$ is continuous and integrable and $F(u,v)$ is integrable, the following Fourier transform pair exists:
$F(u,v) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(x,y)\, e^{-j(ux+vy)}\, dx\, dy$
$f(x,y) = \frac{1}{(2\pi)^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} F(u,v)\, e^{j(ux+vy)}\, du\, dv$
In general $F(u,v)$ is a complex-valued function of two real frequency variables $u,v$ and hence it can be written as
$F(u,v) = R(u,v) + jI(u,v)$
The amplitude spectrum, phase spectrum and power spectrum, respectively, are defined as follows:
$|F(u,v)| = \left[ R^2(u,v) + I^2(u,v) \right]^{1/2}$
$\phi(u,v) = \tan^{-1}\!\left[ \frac{I(u,v)}{R(u,v)} \right]$
$P(u,v) = |F(u,v)|^2 = R^2(u,v) + I^2(u,v)$
2.2 Discrete space and continuous frequency
For the case of a discrete sequence $f(x,y)$ of infinite duration we can define the 2-D discrete space Fourier transform pair as follows
$F(u,v) = \sum_{x=-\infty}^{+\infty} \sum_{y=-\infty}^{+\infty} f(x,y)\, e^{-j(xu+yv)}$
$f(x,y) = \frac{1}{(2\pi)^2} \int_{u=-\pi}^{\pi} \int_{v=-\pi}^{\pi} F(u,v)\, e^{j(xu+yv)}\, du\, dv$
$F(u,v)$ is again a complex-valued function of two real frequency variables $u,v$ and it is periodic with a period $2\pi \times 2\pi$, that is to say
$F(u,v) = F(u+2\pi, v) = F(u, v+2\pi)$
The Fourier transform of $f(x,y)$ is said to converge uniformly when $F(u,v)$ is finite and
$\lim_{N_1 \to \infty} \lim_{N_2 \to \infty} \sum_{x=-N_1}^{N_1} \sum_{y=-N_2}^{N_2} f(x,y)\, e^{-j(xu+yv)} = F(u,v)$
for all $u,v$.

When the Fourier transform of $f(x,y)$ converges uniformly, $F(u,v)$ is an analytic function and is infinitely differentiable with respect to $u$ and $v$.
2.3 Discrete space and discrete frequency: The two dimensional Discrete Fourier Transform (2-D DFT)
If $f(x,y)$ is an $M \times N$ array, such as that obtained by sampling a continuous function of two dimensions at dimensions $M$ and $N$ on a rectangular grid, then its two dimensional Discrete Fourier Transform (DFT) is the array given by
$F(u,v) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi(ux/M + vy/N)}$
$u = 0, \ldots, M-1, \qquad v = 0, \ldots, N-1$
and the inverse DFT (IDFT) is
$f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v)\, e^{j2\pi(ux/M + vy/N)}$
When images are sampled in a square array, $M = N$ and
$F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi(ux+vy)/N}$
$f(x,y) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} F(u,v)\, e^{j2\pi(ux+vy)/N}$
It is straightforward to prove that the two dimensional Discrete Fourier Transform is separable, symmetric and unitary.
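For concreteness, the square-array DFT above can be implemented directly and compared with a library FFT. The sketch below (NumPy, not from the notes) keeps the 1/N scaling on the forward transform as in the definition, whereas np.fft.fft2 applies no scaling on the forward side.

```python
# Minimal NumPy sketch: the N x N DFT with the 1/N factor on the forward
# transform, as defined above, built from the separable kernel matrix W.
import numpy as np

N = 8
rng = np.random.default_rng(2)
f = rng.standard_normal((N, N))

x = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(x, x) / N)  # W[u, x] = exp(-j 2 pi u x / N)

F = (W @ f @ W) / N                           # forward 2-D DFT (separable)
print(np.allclose(F, np.fft.fft2(f) / N))     # True: matches the unscaled FFT up to 1/N

f_rec = np.real(W.conj() @ F @ W.conj()) / N  # inverse 2-D DFT
print(np.allclose(f_rec, f))                  # True: the image is recovered
```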
2.3.1 Properties of the 2-D DFT
Most of them are straightforward extensions of the properties of the 1-D Fourier Transform. Consult any introductory book on Image Processing.
2.3.2 The importance of the phase in the 2-D DFT. Image reconstruction from amplitude or phase only.
The Fourier transform of a sequence is, in general, complex-valued, and the unique representation of a sequence in the Fourier transform domain requires both the phase and the magnitude of the Fourier transform. In various contexts it is often desirable to reconstruct a signal from only partial domain information. Consider a 2-D sequence $f(x,y)$ with Fourier transform $F(u,v) = \mathcal{F}\{f(x,y)\}$, so that
$F(u,v) = \mathcal{F}\{f(x,y)\} = |F(u,v)|\, e^{j\phi_f(u,v)}$
It has been observed that a straightforward signal synthesis from the Fourier transform phase $\phi_f(u,v)$ alone often captures most of the intelligibility of the original image $f(x,y)$ (why?). A straightforward synthesis from the Fourier transform magnitude $|F(u,v)|$ alone, however, does not generally capture the original signal's intelligibility. The above observation is valid for a large number of signals (or images). To illustrate this, we can synthesise the phase-only signal $f_p(x,y)$ and the magnitude-only signal $f_m(x,y)$ by
$f_p(x,y) = \mathcal{F}^{-1}\!\left[ 1 \cdot e^{j\phi_f(u,v)} \right]$
$f_m(x,y) = \mathcal{F}^{-1}\!\left[ |F(u,v)|\, e^{j0} \right]$
and observe the two results (Try this exercise in MATLAB).
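The notes suggest doing this in MATLAB; an equivalent NumPy sketch is given below. The 2-D float array f holding a grayscale image is assumed to be available already, so only a synthetic test array is used here.

```python
# Minimal NumPy sketch of the phase-only and magnitude-only syntheses above.
# f is assumed to be a 2-D float array containing a grayscale image.
import numpy as np

def phase_and_magnitude_only(f):
    F = np.fft.fft2(f)
    f_p = np.real(np.fft.ifft2(np.exp(1j * np.angle(F))))  # unit magnitude, original phase
    f_m = np.real(np.fft.ifft2(np.abs(F)))                  # original magnitude, zero phase
    return f_p, f_m

# A synthetic test "image": f_p typically shows the edges/structure of f,
# while f_m bears little resemblance to the original.
f = np.zeros((64, 64)); f[16:48, 24:40] = 1.0
f_p, f_m = phase_and_magnitude_only(f)
```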
An experiment which more dramatically illustrates the observation that phase-only signal synthesis captures more of the signal intelligibility than magnitude-only synthesis can be performed as follows. Consider two images $f(x,y)$ and $g(x,y)$. From these two images we synthesise two other images $f_1(x,y)$ and $g_1(x,y)$ by mixing the amplitudes and phases of the original images as follows:
$f_1(x,y) = \mathcal{F}^{-1}\!\left[ |G(u,v)|\, e^{j\phi_f(u,v)} \right]$
$g_1(x,y) = \mathcal{F}^{-1}\!\left[ |F(u,v)|\, e^{j\phi_g(u,v)} \right]$
In this experiment $f_1(x,y)$ captures the intelligibility of $f(x,y)$, while $g_1(x,y)$ captures the intelligibility of $g(x,y)$ (Try this exercise in MATLAB).
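Again as a rough sketch (NumPy rather than MATLAB; two equally sized grayscale arrays f and g are assumed to be available), the amplitude/phase mixing experiment can be written as follows.

```python
# Minimal NumPy sketch of the amplitude/phase mixing experiment above:
# f1 takes the magnitude of G with the phase of F, and g1 the reverse.
import numpy as np

def mix_phase_magnitude(f, g):
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    f1 = np.real(np.fft.ifft2(np.abs(G) * np.exp(1j * np.angle(F))))
    g1 = np.real(np.fft.ifft2(np.abs(F) * np.exp(1j * np.angle(G))))
    return f1, g1  # f1 resembles f, g1 resembles g
```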
3. THE DISCRETE COSINE TRANSFORM (DCT)
3.1 One dimensional signals
This is a transform that is similar to the Fourier transform in the sense that the new independent variable again represents frequency. The DCT is defined below.
$C(u) = a(u) \sum_{x=0}^{N-1} f(x) \cos\!\left[ \frac{(2x+1)u\pi}{2N} \right], \qquad u = 0, 1, \ldots, N-1$
with $a(u)$ a parameter that is defined below
$a(u) = \begin{cases} \sqrt{1/N} & u = 0 \\ \sqrt{2/N} & u = 1, \ldots, N-1 \end{cases}$
The inverse DCT (IDCT) is defined below
$f(x) = \sum_{u=0}^{N-1} a(u)\, C(u) \cos\!\left[ \frac{(2x+1)u\pi}{2N} \right], \qquad x = 0, 1, \ldots, N-1$

3.2 Two dimensional signals (images)
For 2-D signals the DCT is defined as
$C(u,v) = a(u)\, a(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\!\left[ \frac{(2x+1)u\pi}{2N} \right] \cos\!\left[ \frac{(2y+1)v\pi}{2N} \right]$
$f(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} a(u)\, a(v)\, C(u,v) \cos\!\left[ \frac{(2x+1)u\pi}{2N} \right] \cos\!\left[ \frac{(2y+1)v\pi}{2N} \right]$
$a(u)$ is defined as above and $u,v = 0, 1, \ldots, N-1$.
3.3 Properties of the DCT transform
The DCT is a real transform. This property makes it attractive in comparison to the Fourier transform.
The DCT has excellent energy compaction properties. For that reason it is widely used in image compression standards (as for example the JPEG standards).
There are fast algorithms to compute the DCT, similar to the FFT for computing the DFT.
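A direct (O(N^2)) implementation of the 1-D DCT/IDCT pair of Section 3.1 is sketched below in NumPy. It follows the a(u) normalisation given above, so it should agree with an orthonormal library DCT-II (e.g. scipy.fft.dct with norm='ortho'), although that comparison is left as an assumption to verify.

```python
# Minimal NumPy sketch of the 1-D DCT and IDCT defined in Section 3.1.
import numpy as np

def dct_1d(f):
    N = len(f)
    a = np.full(N, np.sqrt(2.0 / N)); a[0] = np.sqrt(1.0 / N)
    u, x = np.arange(N)[:, None], np.arange(N)[None, :]
    K = np.cos((2 * x + 1) * u * np.pi / (2 * N))  # K[u, x] = cos((2x+1) u pi / 2N)
    return a * (K @ f)

def idct_1d(C):
    N = len(C)
    a = np.full(N, np.sqrt(2.0 / N)); a[0] = np.sqrt(1.0 / N)
    x, u = np.arange(N)[:, None], np.arange(N)[None, :]
    K = np.cos((2 * x + 1) * u * np.pi / (2 * N))  # K[x, u] = cos((2x+1) u pi / 2N)
    return K @ (a * C)

f = np.random.default_rng(3).standard_normal(8)
print(np.allclose(idct_1d(dct_1d(f)), f))  # True: the pair is consistent
```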
4. WALSH TRANSFORM (WT)
4.1 One dimensional signals
This transform is slightly different from the transforms you have met so far. Suppose we have a function $f(x),\ x = 0, \ldots, N-1$, where $N = 2^n$, and its Walsh transform $W(u)$.
If we use binary representation for the values of the independent variables $x$ and $u$, we need $n$ bits to represent them. Hence, for the binary representation of $x$ and $u$ we can write
$(x)_{10} = \left( b_{n-1}(x)\, b_{n-2}(x) \cdots b_0(x) \right)_2, \qquad (u)_{10} = \left( b_{n-1}(u)\, b_{n-2}(u) \cdots b_0(u) \right)_2$
with $b_i(x) = 0$ or $1$ for $i = 0, \ldots, n-1$.

Example
If $f(x),\ x = 0, \ldots, 7$ (8 samples), then $n = 3$ and, for $x = 6$, $6 = (110)_2$, so $b_2(6) = 1$, $b_1(6) = 1$, $b_0(6) = 0$.
We now define the 1-D Walsh transform as
$W(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x) \prod_{i=0}^{n-1} (-1)^{b_i(x)\, b_{n-1-i}(u)}$
or
$W(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, (-1)^{\sum_{i=0}^{n-1} b_i(x)\, b_{n-1-i}(u)}$
The array formed by the Walsh kernels is again a symmetric matrix having orthogonal rows and columns. Therefore, the Walsh transform is unitary and its elements are of the form
$T(u,x) = \frac{1}{N} \prod_{i=0}^{n-1} (-1)^{b_i(x)\, b_{n-1-i}(u)}$
You can immediately observe that $T(u,x) = 1/N$ or $-1/N$ depending on the values of $b_i(x)$ and $b_{n-1-i}(u)$.
If the Walsh transform is written in matrix form
$\mathbf{W} = T\,\mathbf{f}$
the rows of the matrix $T$, which are the vectors $[\,T(u,0)\ T(u,1) \cdots T(u,N-1)\,]$, have the form of square waves. As the variable $u$ (which represents the index of the transform) increases, the corresponding square wave's "frequency" increases as well. For example, for $u = 0$ we see that $(u)_{10} = \left( b_{n-1}(u)\, b_{n-2}(u) \cdots b_0(u) \right)_2 = (00 \cdots 0)_2$ and hence $b_{n-1-i}(u) = 0$ for any $i$. Thus, $T(0,x) = 1/N$ and
$W(0) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)$
We see that the first element of the Walsh transform is the mean of the original function $f(x)$ (the DC value), as is the case with the Fourier transform.
The inverse Walsh transform is defined as follows
$f(x) = \sum_{u=0}^{N-1} W(u) \prod_{i=0}^{n-1} (-1)^{b_i(x)\, b_{n-1-i}(u)}$
or
$f(x) = \sum_{u=0}^{N-1} W(u)\, (-1)^{\sum_{i=0}^{n-1} b_i(x)\, b_{n-1-i}(u)}$
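A direct implementation of the 1-D Walsh pair is sketched below (NumPy, illustration only). It builds the kernel from the bit products $b_i(x)\, b_{n-1-i}(u)$, so it is an O(N^2) illustration rather than the fast algorithm mentioned in Section 4.3.

```python
# Minimal NumPy sketch of the 1-D Walsh kernel
# T(u, x) = (1/N) prod_i (-1)^{b_i(x) b_{n-1-i}(u)} and the transform pair.
import numpy as np

def walsh_kernel(N):
    n = N.bit_length() - 1                                # N = 2^n
    bits = (np.arange(N)[:, None] >> np.arange(n)) & 1    # bits[x, i] = b_i(x)
    exponent = bits[:, ::-1] @ bits.T                     # exponent[u, x] = sum_i b_{n-1-i}(u) b_i(x)
    return (-1.0) ** exponent / N

N = 8
T = walsh_kernel(N)
f = np.random.default_rng(4).standard_normal(N)

W = T @ f                                      # forward Walsh transform
f_rec = (N * T) @ W                            # inverse kernel: same array without the 1/N factor
print(np.allclose(f_rec, f), np.isclose(W[0], f.mean()))  # True True (W(0) is the mean)
```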
4.2 Two dimensional signals
The Walsh transform is defined as follows for two dimensional signals
$W(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \prod_{i=0}^{n-1} (-1)^{\left( b_i(x)\, b_{n-1-i}(u) + b_i(y)\, b_{n-1-i}(v) \right)}$
or
$W(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, (-1)^{\sum_{i=0}^{n-1} \left( b_i(x)\, b_{n-1-i}(u) + b_i(y)\, b_{n-1-i}(v) \right)}$
The inverse Walsh transform is defined as follows for two dimensional signals
$f(x,y) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} W(u,v) \prod_{i=0}^{n-1} (-1)^{\left( b_i(x)\, b_{n-1-i}(u) + b_i(y)\, b_{n-1-i}(v) \right)}$
or
$f(x,y) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} W(u,v)\, (-1)^{\sum_{i=0}^{n-1} \left( b_i(x)\, b_{n-1-i}(u) + b_i(y)\, b_{n-1-i}(v) \right)}$
4.3 Properties of the Walsh Transform
Unlike the Fourier transform, which is based on trigonometric terms, the Walsh transform consists of a series expansion of basis functions whose values are only $+1$ or $-1$ and which have the form of square waves. These functions can be implemented more efficiently in a digital environment than the exponential basis functions of the Fourier transform.
The forward and inverse Walsh kernels are identical except for a constant multiplicative factor of $1/N$ for 1-D signals.
The forward and inverse Walsh kernels are identical for 2-D signals. This is because the array formed by the kernels is a symmetric matrix having orthogonal rows and columns, so its inverse array is the same as the array itself.
The concept of frequency exists also in Walsh transform basis functions. We can think of frequency as the number of zero crossings or the number of transitions in a basis vector, and we call this number sequency. The Walsh transform exhibits the property of energy compaction, as do all the transforms that we are currently studying (why?).
For the fast computation of the Walsh transform there exists an algorithm called the Fast Walsh Transform (FWT). This is a straightforward modification of the FFT. Consult any introductory book for your own interest.
5. HADAMARD TRANSFORM (HT)
5.1 Definition
In a form similar to the Walsh transform, the 2-D Hadamard transform is defined as follows.
Forward
$H(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \prod_{i=0}^{n-1} (-1)^{\left( b_i(x)\, b_i(u) + b_i(y)\, b_i(v) \right)}, \qquad N = 2^n$
or
$H(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, (-1)^{\sum_{i=0}^{n-1} \left( b_i(x)\, b_i(u) + b_i(y)\, b_i(v) \right)}$
Inverse
$f(x,y) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} H(u,v) \prod_{i=0}^{n-1} (-1)^{\left( b_i(x)\, b_i(u) + b_i(y)\, b_i(v) \right)}$
etc.
5.2 Properties of the Hadamard Transform
Most of the comments made for the Walsh transform are valid here.
The Hadamard transform differs from the Walsh transform only in the order of its basis functions. The order of the basis functions of the Hadamard transform does not allow its fast computation by a straightforward modification of the FFT. An extended version of the Hadamard transform is the Ordered Hadamard Transform, for which a fast algorithm called the Fast Hadamard Transform (FHT) can be applied.
An important property of the Hadamard transform is that, letting $H_N$ represent the matrix of order $N$, the recursive relationship is given by the expression
$H_{2N} = \begin{bmatrix} H_N & H_N \\ H_N & -H_N \end{bmatrix}$
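The recursive construction lends itself to a few lines of code. The sketch below (NumPy, illustration only) builds $H_N$ for $N = 2^n$ starting from the 1 x 1 matrix $H_1 = [1]$ and verifies that its rows are orthogonal.

```python
# Minimal NumPy sketch of the Hadamard recursion H_{2N} = [[H_N, H_N], [H_N, -H_N]],
# starting from the 1 x 1 matrix H_1 = [1].
import numpy as np

def hadamard(N):
    assert N > 0 and (N & (N - 1)) == 0, "N must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(8)
print(np.allclose(H8 @ H8.T, 8 * np.eye(8)))  # True: the rows are orthogonal
```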
6. KARHUNEN-LOEVE (KLT) or HOTELLING TRANSFORM
The Karhunen-Loeve Transform or KLT was originally introduced as a series expansion for continuous random processes by Karhunen and Loeve. For discrete signals Hotelling first studied what was called a method of principal components, which is the discrete equivalent of the KL series expansion. Consequently, the KL transform is also called the Hotelling transform or the method of principal components. The term KLT is the most widely used.
6.1 The case of many realisations of a signal or image (Gonzalez/Woods)
The concepts of eigenvalue and eigenvector are necessary to understand the KL transform.
If $C$ is a matrix of dimension $n \times n$, then a scalar $\lambda$ is called an eigenvalue of $C$ if there is a nonzero vector $\mathbf{e}$ in $R^n$ such that
$C\mathbf{e} = \lambda \mathbf{e}$
The vector $\mathbf{e}$ is called an eigenvector of the matrix $C$ corresponding to the eigenvalue $\lambda$.
(If you have difficulties with the above concepts consult an elementary linear algebra book.)
Consider a population of random vectors of the form
$\mathbf{x} = [\,x_1\ x_2\ \cdots\ x_n\,]^T$
The mean vector of the population is defined as
$\mathbf{m}_x = E\{\mathbf{x}\}$
The operator $E$ refers to the expected value of the population, calculated theoretically using the probability density functions (pdf) of the elements $x_i$ and the joint probability density functions between the elements $x_i$ and $x_j$.

The covariance matrix of the population is defined as
$C_x = E\{ (\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^T \}$
Because $\mathbf{x}$ is $n$-dimensional, $C_x$ and $(\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^T$ are matrices of order $n \times n$. The element $c_{ii}$ of $C_x$ is the variance of $x_i$, and the element $c_{ij}$ of $C_x$ is the covariance between the elements $x_i$ and $x_j$. If the elements $x_i$ and $x_j$ are uncorrelated, their covariance is zero and, therefore, $c_{ij} = c_{ji} = 0$.

For $M$ vectors from a random population, where $M$ is large enough, the mean vector and covariance matrix can be approximately calculated from the vectors by using the following relationships, where all the expected values are approximated by summations
$\mathbf{m}_x = \frac{1}{M} \sum_{k=1}^{M} \mathbf{x}_k$
$C_x = \frac{1}{M} \sum_{k=1}^{M} \mathbf{x}_k \mathbf{x}_k^T - \mathbf{m}_x \mathbf{m}_x^T$
It can easily be seen that $C_x$ is real and symmetric. In that case a set of $n$ orthonormal (at this point you are familiar with that term) eigenvectors always exists. Let $\mathbf{e}_i$ and $\lambda_i$, $i = 1, 2, \ldots, n$, be this set of eigenvectors and corresponding eigenvalues of $C_x$, arranged in descending order so that $\lambda_i \ge \lambda_{i+1}$ for $i = 1, 2, \ldots, n-1$. Let $A$ be a matrix whose rows are formed from the eigenvectors of $C_x$, ordered so that the first row of $A$ is the eigenvector corresponding to the largest eigenvalue, and the last row is the eigenvector corresponding to the smallest eigenvalue.
Suppose that $A$ is a transformation matrix that maps the vectors $\mathbf{x}$ into vectors $\mathbf{y}$ by using the following transformation
$\mathbf{y} = A(\mathbf{x} - \mathbf{m}_x)$
The above transform is called the Karhunen-Loeve or Hotelling transform. The mean of the $\mathbf{y}$ vectors resulting from the above transformation is zero (try to prove that)
$\mathbf{m}_y = \mathbf{0}$
the covariance matrix of the $\mathbf{y}$ vectors is (try to prove that)
$C_y = A C_x A^T$
and $C_y$ is a diagonal matrix whose elements along the main diagonal are the eigenvalues of $C_x$ (try to prove that)
$C_y = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}$
The off-diagonal elements of the covariance matrix are $0$, so the elements of the $\mathbf{y}$ vectors are uncorrelated.
Let's try to reconstruct any of the original vectors $\mathbf{x}$ from its corresponding $\mathbf{y}$. Because the rows of $A$ are orthonormal vectors (why?), $A^{-1} = A^T$, and any vector $\mathbf{x}$ can be recovered from its corresponding vector $\mathbf{y}$ by using the relation
$\mathbf{x} = A^T \mathbf{y} + \mathbf{m}_x$
Suppose that instead of using all the eigenvectors of $C_x$ we form the matrix $A_K$ from the $K$ eigenvectors corresponding to the $K$ largest eigenvalues, yielding a transformation matrix of order $K \times n$. The $\mathbf{y}$ vectors would then be $K$-dimensional, and the reconstruction of any of the original vectors would be approximated by the following relationship
$\hat{\mathbf{x}} = A_K^T \mathbf{y} + \mathbf{m}_x$
The mean square error between the perfect reconstruction $\mathbf{x}$ and the approximate reconstruction $\hat{\mathbf{x}}$ is given by the expression
$e_{ms} = \sum_{j=1}^{n} \lambda_j - \sum_{j=1}^{K} \lambda_j = \sum_{j=K+1}^{n} \lambda_j$
By using $A_K$ instead of $A$ for the KL transform we achieve compression of the available data.
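All the steps of Section 6.1 (sample mean and covariance, eigen-decomposition, the transform $\mathbf{y} = A(\mathbf{x} - \mathbf{m}_x)$, and the truncated reconstruction with $A_K$) fit in a short NumPy sketch; the population below is synthetic and serves only to illustrate the procedure.

```python
# Minimal NumPy sketch of the KL / Hotelling transform of Section 6.1.
import numpy as np

rng = np.random.default_rng(5)
n, M, K = 8, 1000, 3

# synthetic population of correlated n-dimensional vectors (one sample per column)
L = np.linalg.cholesky(np.eye(n) + 0.9 * np.ones((n, n)))
X = L @ rng.standard_normal((n, M))

m_x = X.mean(axis=1, keepdims=True)            # sample mean vector
C_x = (X @ X.T) / M - m_x @ m_x.T              # sample covariance matrix

lam, E = np.linalg.eigh(C_x)                   # eigenvalues in ascending order
order = np.argsort(lam)[::-1]                  # re-order them descending
lam, A = lam[order], E[:, order].T             # rows of A are the eigenvectors

Y = A @ (X - m_x)                              # KL / Hotelling transform
X_hat = A[:K].T @ Y[:K] + m_x                  # reconstruction from K components only

mse = np.mean(np.sum((X - X_hat) ** 2, axis=0))
print(mse, lam[K:].sum())                      # equal up to rounding: e_ms = sum of discarded eigenvalues
```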
6.2 The case of one realisation of a signal or image
The derivation of the KLT for the case of one image realisation assumes that the two dimensional signal (image) is ergodic. This assumption allows us to calculate the statistics of the image using only one realisation. Usually we divide the image into blocks and we apply the KLT in each block. This is reasonable because the 2-D field is likely to be ergodic within a small block, whereas the nature of the signal changes across the whole image. Let us suppose that $\mathbf{f}$ is a vector obtained by lexicographic ordering of the pixels $f(x,y)$ within a block of size $M \times M$ (placing the rows of the block sequentially).
The mean of the random field inside the block is a scalar that is estimated by the approximate relationship
$m_f = \frac{1}{M^2} \sum_{k=1}^{M^2} f(k)$
and the covariance matrix of the 2-D random field inside the block is $C_f$, with elements estimated by
$c_{ii} = \frac{1}{M^2} \sum_{k=1}^{M^2} f(k)\, f(k) - m_f^2$
and
$c_{ij} = c_{ji} = \frac{1}{M^2} \sum_{k=1}^{M^2} f(k)\, f(k+i-j) - m_f^2$
Knowing how to calculate the matrix $C_f$, the KLT for the case of a single realisation is then carried out exactly as described above.
6.3 Properties of the Karhunen-Loeve transform
Despite its favourable theoretical properties, the KLT is not used in practice for the following reasons.
Its basis functions depend on the covariance matrix of the image, and hence they have to be recomputed and transmitted for every image.
Perfect decorrelation is not possible, since images can rarely be modelled as realisations of ergodic fields.
There are no fast computational algorithms for its implementation.
REFERENCES
[1] Digital Image Processing by R. C. Gonzalez and R. E. Woods, Addison-Wesley Publishing Company, 1992.
[2] Two-Dimensional Signal and Image Processing by J. S. Lim, Prentice Hall, 1990.
