
Robust Identification and Robust Control with Applications to Texture Image Processing

(Comprehensive Exam Proposal)

Tao Ding
Dept. of Electrical Engineering
Penn State University

Aug. 2005

ABSTRACT
The two fields of control theory and texture image processing share many common methods in system analysis, design and development. The progress of these two areas also shows that techniques developed to solve the problems of one area often find applications in the other.
This proposal is an attempt to set up and solve texture image processing problems from the viewpoint of control, especially through the study of robust identification and robust control theory. The work focuses on texture modelling, synthesis, recognition and classification. A novel image modelling and model reduction approach is introduced. We will show how recently developed robust identification techniques can be applied to find models for textures that are capable of image compression and reconstruction. The work on extending the current 1-D approach to 2-D, which removes the column- and row-oriented limitation, is the main subject of that chapter.
Furthermore, based on the models discussed in the prior chapter, we apply a semi-blind model (in)validation method to address the problem of texture recognition and classification. This work was motivated by recent progress in model (in)validation in robust control. An LMI-based convex relaxation of the original NP-hard minimization problem was introduced and an efficient solution obtained. Under the proposed framework, the texture recognition and classification problem is recast into a robust model (in)validation form and solved by introducing a similar convex relaxation.
Moreover, a Hankel operator based approach to the problems of texture modelling and inpainting is introduced. Textured images are modelled as the output of an unknown, periodic, linear shift invariant operator in response to a suitable input. This approach can be applied to reconstructing missing portions, finding textons and synthesizing textures from the same family. It has great potential for the 2-D extension using the 2-D models and Hankel matrices indicated in this proposal.
Keywords: Robust Control, Robust Identification, Model (In)Validation, Texture Synthesis, Classification and Recognition, LMI, Hankel Operator.

Contents

1 Introduction
  1.1 Organization of the proposal
  1.2 Notations
2 Texture Image Modelling and Synthesis
  2.1 Model Reduction for a Class of Neutrally Stable Systems
  2.2 Texture Modelling Using 2-D Recursive Filters SVD Realization
    2.2.1 Texture Modelling Using 2-D Filters SVD Realization
    2.2.2 Examples
  2.3 2-D Discrete State-Space Image Model
    2.3.1 Roesser's discrete state-space model for linear image processing
    2.3.2 Procedure to Get Roesser's Model by Algorithms 1 and 2
    2.3.3 Examples
3 Model (In)Validation and Texture Classification
  3.1 Semi-blind model (in)validation problem set-up
  3.2 Problem transformation and main results
  3.3 Applications to texture classification
    3.3.1 The detailed procedure and problems
    3.3.2 Examples
  3.4 Texture Classification Using 2-D Filters SVD Realization Model
    3.4.1 Examples
    3.4.2 Special problems and the solutions
4 A Hankel Operator Approach to Texture Modelling and Inpainting
  4.1 Texture modelling
  4.2 Finding textons
  4.3 Texture Inpainting
5 Conclusions and Future Work

Introduction
Control theory is a comprehensive research area whose tools apply to a wide range of fields. Robust control and robust identification, the theories which highlight and deal with the model uncertainty problem, have received considerable attention and developed quickly during recent decades. Signal processing is also an exciting area of research; much recent progress there has been driven by applications involving speech and audio processing, image processing and others. The research in this proposal arises in the context of many practical problems in different fields where the use of control-theoretic tools can bring great benefit. It provides effective solutions to many problems, such as image restoration, texture recognition and texture inpainting, from a novel control-theoretic viewpoint. The following chapters explain the theories and applications in more detail.

1.1 Organization of the proposal

This proposal shows that recently developed robust identification and control system techniques can be brought to bear on problems of texture image processing involving image modelling, texture synthesis, and texture recognition and classification.
The proposal is organized as follows.
Part 1 gives the general introduction and the organization of the proposal. The notations used throughout the proposal are also given in this part.
Part 2 summarizes the ongoing research progress in texture modelling. A finite horizon model reduction method is introduced. Two methods are presented in this chapter to extend the finite horizon modelling and reduction method from 1-D to 2-D: a 2-D filter SVD realization algorithm, and a new realization method for Roesser's image processing model.
In Part 3, the semi-blind model (in)validation theory is introduced. The problem set-up, its transformation and an efficient relaxation for applying the model (in)validation method to texture synthesis are discussed. Several examples, with discussion, shed light on the efficiency and performance of this algorithm.
In Part 4, a Hankel operator approach to texture modelling and inpainting is introduced. The image restoration and inpainting problem is converted to a rank minimization problem.
Finally, the conclusions and the future work leading to my Ph.D. degree are discussed in Part 5.

1.2 Notations
Below we summarize the notation used in this proposal:
Z, R, C: sets of integer, real and complex numbers, respectively.
x, x*: complex-valued column vector and its conjugate transpose (a row vector).
||x||: Euclidean norm of a vector x ∈ C^m.
||x||_p: p-norm of a vector, ||x||_p = (Σ_{k=1}^{m} |x_k|^p)^{1/p}.
A*: conjugate transpose of matrix A.
σ̄(A): maximum singular value of matrix A.
A > 0 (A ≤ 0): A = A* is positive definite (negative semi-definite).
I, 0: identity and null matrices of compatible dimensions (when omitted).
B_X(γ): γ-ball in a normed space X: B_X(γ) = {x ∈ X : ||x||_X ≤ γ}.
B_X: (closed) unit ball in X.
l_2^m: Hilbert space of vector-valued sequences {x_i}_{i∈Z}, equipped with the inner product <x, y> = Σ_{i∈Z} x_i* y_i and the norm ||x||_2 = <x, x>^{1/2}.
L_∞: Lebesgue space of complex-valued matrix functions X(z) essentially bounded on the unit circle, equipped with the norm ||X||_∞ = ess sup_{|z|=1} σ̄(X(z)).
H_∞: subspace of functions in L_∞ with bounded analytic continuation inside the unit disk, equipped with the norm ||X||_∞ = ess sup_{|z|<1} σ̄(X(z)).
RL_∞ (RH_∞): subspace of rational functions in L_∞ (H_∞).
B_{H_∞}(γ): γ-ball in H_∞.
B_{H_∞}: unit ball in H_∞.
L_2^m: Hilbert space of Lebesgue square integrable vector functions x(ω), equipped with the norm ||x||_2^2 = (1/2π) ∫_0^{2π} trace[x(ω) x*(ω)] dω.
x̂(e^{jω}): Fourier transform of a real-valued sequence in l_2^m: x̂(e^{jω}) = Σ_{i∈Z} x_i e^{-jωi}.
X(z): z-transform of a real-valued matrix sequence {X_i}_{i∈Z}: X(z) = Σ_{i∈Z} X_i z^{-i}.
M ★ Δ: upper linear fractional transformation: M ★ Δ = M_{21} Δ (I - M_{11} Δ)^{-1} M_{12} + M_{22}.

Texture Image Modelling and Synthesis

There has been much work on texture modelling and synthesis, a long-standing problem in computer vision [1-5] with applications to widely dissimilar areas.
The first general texture model was proposed by Julesz [6], who suggested that texture perception could be explained by extracting k-th order statistics. However, the amount of data contained in the high order statistics natural for human visual systems is huge and difficult to handle with computer algorithms. Most recent statistical approaches are based on two well established areas. The first is the theory of Markov random fields, used to model statistical local interactions within small neighborhoods [7-11]. The second is the use of multiple linear kernels at different scales and orientations, followed by a non-linear procedure, motivated by the multi-channel filtering mechanisms generally accepted in neurophysiology [12-17].
Texture synthesis algorithms can be classified as procedural or image-based. Most procedural approaches are based on the statistical approaches described above; however, the cost of tuning the model parameters can be a limiting factor when synthesizing large sets of textures. Image-based algorithms, on the other hand, work by directly copying pixels or patches from a sample texture image and stitching them together in the synthesized image.
Efros and Leung [18] grow texture by copying pixels from a sample image using a non-parametric statistical model based on the distribution of brightness values in the neighborhood of the pixel. Much work has been done to optimize the basic algorithm, such as Wei and Levoy [19], Harrison [20] and Ashikhmin [21]. Rane et al. [22] address the problem of filling in blocks of missing data in wireless image transmission, where the correlation between the lost data and its neighbors is used to choose between an inpainting [23] and a texture synthesis [18] algorithm.
Pixel-based algorithms are greedy and can fail to capture the overall structure of the texture. This problem can be alleviated by quilting patches instead. Xu et al. [24] use tiles whose boundaries are obscured by pasting and blending random overlapping blocks. Efros and Freeman [25] stitch together patches, which can have arbitrary shape and dimension, in a way that minimizes the overlap error between adjacent patches. Liu et al. [26], [27] generate near-regular textures by first finding the underlying lattice of a sample patch using concepts from crystallographic symmetry groups.
The following sections introduce a novel model reduction approach, its application to texture synthesis, and the work extending it.

2.1 Model Reduction for a Class of Neutrally Stable Systems

Model reduction of stable LTI systems is by now a well understood problem, and a number of efficient algorithms are available [28-32]. However, existing approaches to model reduction of unstable systems cannot guarantee that certain structural properties, such as periodicity of the impulse response, are preserved: a key requirement in the application of special interest here, texture synthesis. Motivated by some well known results on realization theory [33, 34], a different approach to model reduction is proposed by Sznaier [36]. The main idea is to address the problem by working directly with the constructed Hankel matrix. Since the impulse response is periodic, the Hankel matrix of the system under consideration is circulant, and this structural property can be exploited to obtain balanced realizations in an efficient way.
The following is a brief introduction to the model reduction method and its application.
Consider a system G_n with k-th Markov parameter M_k = C A^{k-1} B ∈ R^{p×m}. The state-space realization of G_n has the form A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and D ∈ R^{p×m}, where A^n = I. The following algorithm is presented in [36] to get an r-th order reduced approximation G_r such that ||G_n - G_r|| is minimized over the finite interval [0, n-1].
Algorithm 1:
1. Given the system with M_k = C A^{k-1} B ∈ R^{p×m}, form the np × nm block-circulant matrix:

   H_n = [ M_1  M_2  ...  M_n
           M_2  M_3  ...  M_1
           ...
           M_n  M_1  ...  M_{n-1} ]

2. Perform a singular value decomposition:

   H_n = [ U  U_o ] [ S  0 ; 0  0 ] [ V^T ; V_o^T ],  S = diag(σ_1, ..., σ_n),  σ_i ≥ σ_j for i ≤ j

3. Form the reduced order model, with S_r = diag(σ_1, ..., σ_r), r ≤ n:

   A_r = S_r^{-1/2} U_r^T P U_r S_r^{1/2},  B_r = S_r^{1/2} V_r^{(1)},  C_r = U_r^{(1)} S_r^{1/2},  D_r = D

   where P is the block cyclic-shift matrix

   P = [ 0    I_p  0    ...  0
         0    0    I_p  ...  0
         ...
         I_p  0    ...       0 ]

U_r (V_r) denotes the sub-matrix formed by the first r columns (rows) of U (V^T), and U_r^{(1)} and V_r^{(1)} denote the first p × r block of U_r and the first r × m block of V_r, respectively.
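For the scalar case (p = m = 1), the three steps of Algorithm 1 can be sketched numerically as follows. This is only an illustrative sketch: the helper name `circulant_hankel_reduction` is ours, and the cyclic-shift matrix P is taken as the permutation consistent with the circulant structure above.

```python
import numpy as np

def circulant_hankel_reduction(markov, r):
    """Sketch of Algorithm 1 for scalar Markov parameters M_1..M_n
    (period n): build the circulant Hankel matrix, take an SVD and
    return an r-th order realization (Ar, Br, Cr)."""
    n = len(markov)
    # Step 1: circulant Hankel matrix, H[i, j] = M_{(i+j) mod n}
    H = np.array([[markov[(i + j) % n] for j in range(n)] for i in range(n)])
    # Step 2: singular value decomposition
    U, s, Vt = np.linalg.svd(H)
    # Cyclic shift matrix P: (P H)[i, :] = H[(i+1) mod n, :]
    P = np.roll(np.eye(n), -1, axis=0)
    # Step 3: truncate to the r largest singular values
    Ur, sq, Vtr = U[:, :r], np.sqrt(s[:r]), Vt[:r, :]
    Ar = (Ur.T @ P @ Ur) * np.outer(1.0 / sq, sq)  # S^{-1/2} U_r^T P U_r S^{1/2}
    Br = (sq * Vtr[:, 0]).reshape(-1, 1)           # S^{1/2} V_r^{(1)}
    Cr = (Ur[0, :] * sq).reshape(1, -1)            # U_r^{(1)} S^{1/2}
    return Ar, Br, Cr
```

For r = n the realization is exact and satisfies A^n = I (since P^n = I), so the reconstructed impulse response C A^{k-1} B is periodic with period n, as required.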

Algorithm 1 can be applied to the non-trivial problems of texture synthesis and recognition through Algorithm 2 below. The main idea is to model images as the (periodic) impulse response of a non-necessarily causal LTI system and use the proposed method to identify the corresponding model. Partial images can be extended, and additional realizations of the same texture can be obtained, by simply driving the corresponding model with a suitable input.
Algorithm 2:
1. Given an n × m image, let R_i^T denote its i-th row, and form the block-circulant matrix:

   H_n = [ R_1  R_2  ...  R_n
           R_2  R_3  ...  R_1
           ...
           R_n  R_1  ...  R_{n-1} ]

2. Use Algorithm 1 to obtain a balanced state-space realization.
3. Obtain a reduced order realization by truncating the modes corresponding to the smallest n - r singular values of S.
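A minimal sketch of Algorithm 2 (the function name is ours): the image rows become vector-valued Markov parameters, the circulant block Hankel matrix is factored by SVD, and the same shift-invariance construction as in Algorithm 1 recovers a state-space model whose impulse response reproduces the rows.

```python
import numpy as np

def image_circulant_realization(img, r):
    """Algorithm 2 sketch: realize an n-by-m image whose i-th row R_i is
    treated as the i-th (vector) Markov parameter of a periodic system."""
    n, m = img.shape
    # Step 1: block-circulant Hankel matrix; block row i is
    # [R_{i+1}, R_{i+2}, ..., R_i] with indices taken mod n
    H = np.vstack([np.hstack([img[(i + j) % n] for j in range(n)])
                   for i in range(n)])
    # Step 2: SVD-based balanced realization (as in Algorithm 1, p = 1)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    P = np.roll(np.eye(n), -1, axis=0)             # cyclic shift of block rows
    # Step 3: truncate to the r largest singular values
    Ur, sq, Vtr = U[:, :r], np.sqrt(s[:r]), Vt[:r, :]
    Ar = (Ur.T @ P @ Ur) * np.outer(1.0 / sq, sq)
    Br = sq[:, None] * Vtr[:, :m]                  # first r-by-m block of S^{1/2} V^T
    Cr = (Ur[0, :] * sq)[None, :]
    return Ar, Br, Cr
```

Driving this model with an impulse regenerates the rows, C A^{k-1} B giving the k-th row, and the full-order periodicity A^n = I means the texture can be extended past the image boundary.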

2.2 Texture Modelling Using 2-D Recursive Filters SVD Realization

In recent years there has been much interest in the problem of two-dimensional (2-D) recursive filtering, with particular application to images. The operator theoretic approach of [36] considers images exhibiting a given static texture as realizations of one period of an infinite 2-D periodic signal, and then models this signal as the output of an unknown, periodic, linear shift invariant operator driven by a suitable input. However, this technique yields different models in the row and column directions. Here we introduce a 2-D recursive filter SVD realization to obtain a 2-D image model and remove this limitation. The 2-D periodic signal is considered as the output of a 2-D linear-phase IIR filter realized by an SVD-based structure. This structure yields a set of 1-D sub-filters which represent the model in the two directions respectively.
2.2.1 Texture Modelling Using 2-D Filters SVD Realization

Consider an n × m image as one period of an infinite 2-D signal with period (n, m). The intensity value I(i, j) at a given pixel therefore satisfies I(i, j) = I(i - qn, j - qm) for any integer q. For simplicity, the length and width of the image are assumed equal, n = m; it can be shown that the realization approach is easily extended to the case n ≠ m.
Furthermore, this 2-D image signal is assumed to be the output of a 2-D IIR filter. Without loss of generality, we can assume the image is the response to the specific input u_k = δ(k), i.e., the impulse response of the 2-D IIR filter. With these assumptions, the image modelling problem becomes the realization of the impulse response of a 2-D linear phase filter.
Motivated by the FIR filter realization algorithm [37], under the proposed image modelling framework, for an image intensity matrix I(i, j), 1 ≤ i, j ≤ n, the SVD of I is

I = U S V* = [ U1  U2 ] [ S  0 ; 0  0 ] [ V1* ; V2* ],  S = diag(σ1, ..., σl),  σ1 ≥ σ2 ≥ ... ≥ σl > 0

where U and V are unitary matrices and S is a diagonal matrix with the singular values on its diagonal; V* is the conjugate transpose of V.
Rewrite the SVD in the form

I = Σ_{i=1}^{l} σi ui vi^T = Σ_{i=1}^{l} ui^1 vi^{1T},  ui^1 = √σi ui,  vi^1 = √σi vi.

Therefore the 2-D FIR filter can be realized by l parallel 2-D sub-filters. Each 2-D sub-filter consists of two 1-D FIR filters, one in the row direction and one in the column direction, with impulse responses ui^1 and vi^1.
This SVD technique can be used for model reduction by discarding the states associated with the small singular values of I. The reduced SVD structure can then be expressed as I ≈ Σ_{i=1}^{lr} ui^1 vi^{1T}, where we choose lr < l to obtain a proper number of 2-D sub-filters in the filter realization.
For each 1-D FIR filter, under the periodicity assumption, we can use Algorithm 2 to get an IIR realization. The IIR realization will be applied in texture synthesis and other texture research.
Algorithm 3:
1. Given an n × m image I(i, j), 1 ≤ i, j ≤ n, perform the SVD of I:

   I = U S V* = [ U1  U2 ] [ S  0 ; 0  0 ] [ V1* ; V2* ],  S = diag(σ1, ..., σl)

2. Rewrite the SVD in the form

   I = Σ_{i=1}^{l} σi ui vi^T = Σ_{i=1}^{l} ui^1 vi^{1T},  ui^1 = √σi ui,  vi^1 = √σi vi

3. Realize the 2-D FIR filter by lr parallel 2-D sub-filters, in the form I ≈ Σ_{i=1}^{lr} ui^1 vi^{1T}.

4. Apply Algorithms 2 and 1 to each 1-D sub-filter to obtain the reduced order realization.
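The SVD step above amounts to splitting the image into rank-one (separable) terms. A small sketch (names ours), where each column of u1 and v1 is the impulse response of one 1-D sub-filter:

```python
import numpy as np

def svd_subfilters(img, lr):
    """Steps 1-3 of Algorithm 3: decompose an image into lr separable
    2-D sub-filters, I ≈ sum_i u1_i v1_i^T with u1_i = sqrt(s_i) u_i
    and v1_i = sqrt(s_i) v_i."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    sq = np.sqrt(s[:lr])
    u1 = U[:, :lr] * sq              # vertical 1-D filters (columns)
    v1 = (sq[:, None] * Vt[:lr]).T   # horizontal 1-D filters (columns)
    return u1, v1                    # image approximation: u1 @ v1.T
```

If the image has rank l ≤ lr the reconstruction is exact; otherwise u1 @ v1.T is the best rank-lr approximation in the Frobenius norm, which is what justifies discarding the small singular values.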
2.2.2 Examples

Example 1:
For a 192 × 192 color image, we use the 2-D filter SVD realization approach to get texture models with different numbers of sub-filters, lr. The original image and the outputs of 2-D filters of different rank are shown in Figure 2.1. For 60 sub-filters, the output is only slightly different from the original image. Through this comparison we can conclude that the reduced 2-D SVD filter realization model describes the image efficiently.

Figure 2.1 Original image; outputs of 40, 60 and 80 2-D sub-filters

Example 2:
For different color and gray value images, the 2-D IIR filter model is used to reconstruct partial images. Figure 2.2 shows the efficiency of the proposed algorithm in texture synthesis for specific images.

Figure 2.2 Partial and reconstructed images

From the above examples we can conclude that the modelling method leads to low order parsimonious models capable of generating images with the desired texture. It can be used both to restore partial images and to classify an unknown sample.

2.3 2-D Discrete State-Space Image Model

As mentioned before, new approaches are needed to remove the column- and row-oriented limitation. Motivated by Roesser's discrete state-space model for linear image processing [38], a new image modelling approach is proposed here. We first give an introduction to Roesser's model [38].
2.3.1 Roesser's discrete state-space model for linear image processing

Roesser's model considers an image of two spatial dimensions as a generalization of a temporal signal. The model has two space coordinates, i and j, instead of time t; hence two state sets conveying information vertically and horizontally are introduced to replace the single state set. The definitions for the model are:

i: an integer-valued vertical coordinate.
j: an integer-valued horizontal coordinate.
{R}: a set of real n1-vectors which convey information vertically.
{S}: a set of real n2-vectors which convey information horizontally.
{u}: a set of real p-vectors that act as inputs.
{y}: a set of real m-vectors that act as outputs.

The state-space model is represented by the following matrix equations:

R(i+1, j) = A1 R(i, j) + A2 S(i, j) + b1 u(i, j)
S(i, j+1) = A3 R(i, j) + A4 S(i, j) + b2 u(i, j)
y(i, j) = c1 R(i, j) + c2 S(i, j) + D u(i, j),  i, j ≥ 0
Introducing the vectors and matrices

x(i, j) = [ R(i, j) ; S(i, j) ],  x'(i, j) = [ R(i+1, j) ; S(i, j+1) ],
A = [ A1  A2 ; A3  A4 ],  B = [ b1 ; b2 ],  C = [ c1  c2 ],

the state-space model takes the form

x'(i, j) = A x(i, j) + B u(i, j)
y(i, j) = C x(i, j) + D u(i, j)    (1)
Roesser [38] and Takao [39] have shown that the solution of the model is given by

x(i, j) = Σ_{k=0}^{j} A^{(i, j-k)} [ R(0, k) ; 0 ] + Σ_{r=0}^{i} A^{(i-r, j)} [ 0 ; S(r, 0) ] + Σ_{0<k≤i, 0<r≤j} M(k, r) u(i-k, j-r)

where

M(k, r) = A^{(k-1, r)} b^{(1,0)} + A^{(k, r-1)} b^{(0,1)}
b^{(1,0)} = [ b1 ; 0 ],  b^{(0,1)} = [ 0 ; b2 ]
A^{(1,0)} = [ A1  A2 ; 0  0 ],  A^{(0,1)} = [ 0  0 ; A3  A4 ]
A^{(i,j)} = A^{(1,0)} A^{(i-1,j)} + A^{(0,1)} A^{(i,j-1)} for (i, j) > (0, 0);  A^{(0,0)} = I

With the assumption that A2 is a null matrix, which implies that the corresponding transfer function has a separable denominator, the Markov parameters of the model can be generated by

w_{i0} = c1 A1^{i-1} b1
w_{0j} = c2 A4^{j-1} b2
w_{ij} = c2 A4^{j-1} A3 A1^{i-1} b1

By taking the z-transform, we can compute the system function H(z1, z2) = Y(z1, z2)/X(z1, z2), given by

H(z1, z2) = [ c1  c2 ] ( [ z1 I  0 ; 0  z2 I ] - [ A1  A2 ; A3  A4 ] )^{-1} [ b1 ; b2 ] + D

There are still problems in using a single Hankel matrix form to obtain A, B, C, D directly. In [36], the algorithm that extracts the model uses the special form of the Markov parameters, but for the 2-D model the Markov parameters have a much more complex form: they cannot be expressed as M_k = C A^{k-1} B, and generating future outputs requires not only the current states but also the boundary conditions.
We obtain a system realization if and only if the system model generates the same Markov parameter sequence. Hence, applying the system modelling algorithm of [36] to Roesser's state-space model, we can solve the problem and obtain a 2-D model that generates the given Markov parameter sequence.
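The Markov parameter formulas above can be checked by simulating the Roesser recursion directly. A sketch (the function name is ours) with zero boundary states and a unit impulse at the origin; with A2 = 0 the outputs match w_{i0} = c1 A1^{i-1} b1, w_{0j} = c2 A4^{j-1} b2 and w_{ij} = c2 A4^{j-1} A3 A1^{i-1} b1.

```python
import numpy as np

def roesser_impulse_response(A1, A2, A3, A4, b1, b2, c1, c2, D, n):
    """Impulse response y(i, j), 0 <= i, j < n, of the Roesser model,
    with zero boundary states R(0, j) = 0 and S(i, 0) = 0."""
    n1, n2 = A1.shape[0], A4.shape[0]
    R = np.zeros((n + 1, n, n1))   # vertical state R(i, j)
    S = np.zeros((n, n + 1, n2))   # horizontal state S(i, j)
    Y = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            u = 1.0 if i == j == 0 else 0.0      # unit impulse at the origin
            R[i + 1, j] = A1 @ R[i, j] + A2 @ S[i, j] + b1 * u
            S[i, j + 1] = A3 @ R[i, j] + A4 @ S[i, j] + b2 * u
            Y[i, j] = c1 @ R[i, j] + c2 @ S[i, j] + D * u
    return Y
```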
2.3.2 Procedure to Get Roesser's Model by Algorithms 1 and 2

Under the assumption that A2 is a null matrix, system (1) has the solutions

A^{(i,0)} = {A^{(1,0)}}^i = [ A1^i  0 ; 0  0 ],
A^{(0,j)} = {A^{(0,1)}}^j = [ 0  0 ; A4^{j-1} A3  A4^j ],  i, j ≥ 1
A^{(i,j)} = [ 0  0 ; A4^{j-1} A3 A1^i  0 ],  i, j ≥ 1
M(i, 0) = [ A1^{i-1} b1 ; 0 ],  M(0, j) = [ 0 ; A4^{j-1} b2 ]
M(i, j) = [ 0 ; A4^{j-1} A3 A1^{i-1} b1 ],  i, j ≥ 1.
The results in [40], [41], [39] show that the Hankel matrix generated by the realized system can be written as the product of its observability and controllability matrices, H_{nm} = Γ_{n,m} Ω_{n,m}, and that it is built from the two block sequences

W^{(m)}_i = C_m A1^{i-1} b1 = [ w_{i0}, w_{i1}, ..., w_{im} ]^T
W^{(n)}_j = c2 A4^{j-1} B_n = [ w_{0j}, w_{1j}, ..., w_{nj} ]

through the Hankel matrices

H^{(m)}_{nn} = [ W^{(m)}_1  W^{(m)}_2  ...  W^{(m)}_n
                 W^{(m)}_2  W^{(m)}_3  ...  W^{(m)}_{n+1}
                 ...
                 W^{(m)}_n  W^{(m)}_{n+1}  ...  W^{(m)}_{n+n-1} ]

and similarly H^{(n)}_{mm} built from the W^{(n)}_j. H^{(m)}_{nn} and H^{(n)}_{mm} are referred to as the horizontal and vertical Hankel matrices, respectively. Since the given sequences are assumed to have N as the least upper bound on their index, n and m can always be taken as N. Similar to the technique in [36], we then form the horizontal and vertical Hankel matrices in circulant form:

H^{(N)}_{NN} = [ W^{(N)}_1  W^{(N)}_2  ...  W^{(N)}_N
                 W^{(N)}_2  W^{(N)}_3  ...  W^{(N)}_1
                 ...
                 W^{(N)}_N  W^{(N)}_1  ...  W^{(N)}_{N-1} ]

one for each of the two sequences.

Then, using the algorithm of [36], we obtain system realizations from the horizontal and vertical Hankel matrices respectively. The realization algorithm gives a realization solution set including C_N, A1, b1, c2, A4 and B_N. Moreover, C_N and B_N can be expressed in the following forms:

C_N = [ c2 A3 ; c2 A4 A3 ; ... ; c2 A4^{N-1} A3 ] = [ c2 ; c2 A4 ; ... ; c2 A4^{N-1} ] A3 = Q̃_N A3

B_N = [ A3 b1, A3 A1 b1, ..., A3 A1^{N-1} b1 ] = A3 [ b1, A1 b1, ..., A1^{N-1} b1 ] = A3 P_N

Given c2 and A4 we can form Q̃_N, and given b1 and A1 we can form P_N. Then a solution for A3 follows from

A3 = Q̃_N^{-1} C_N    or    A3 = B_N P_N^{-1}

For a reduced realization Q̃_N is not a square matrix, and A3 is obtained through the least-squares equations

A3 = (Q̃_N' Q̃_N)^{-1} Q̃_N' C_N    or    A3 = B_N P_N' (P_N P_N')^{-1}
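The least-squares recovery of the coupling terms can be sketched as follows (the function name is ours); it assumes the two 1-D realizations (A1, b1) and (c2, A4) are already available, forms Q̃_N and P_N, and solves the corresponding equations with numpy's least-squares routine.

```python
import numpy as np

def recover_coupling(c2, A4, b1, A1, CN, Wi0, W0j, N):
    """Solve A3 = (Qt' Qt)^{-1} Qt' CN, and recover b2 from Wi0 and c1
    from W0j, where Qt = [c2; c2 A4; ...; c2 A4^{N-1}] and
    P = [b1, A1 b1, ..., A1^{N-1} b1].
    Shapes: c2 (1, n2), A4 (n2, n2), b1 (n1, 1), A1 (n1, n1),
    CN (N, n1), Wi0 (N, 1), W0j (1, N)."""
    Qt = np.vstack([c2 @ np.linalg.matrix_power(A4, k) for k in range(N)])
    P = np.hstack([np.linalg.matrix_power(A1, k) @ b1 for k in range(N)])
    A3 = np.linalg.lstsq(Qt, CN, rcond=None)[0]        # coupling matrix
    b2 = np.linalg.lstsq(Qt, Wi0, rcond=None)[0]       # from first-column data
    c1 = np.linalg.lstsq(P.T, W0j.T, rcond=None)[0].T  # c1 = W0j P'(P P')^{-1}
    return A3, b2, c1
```

When N exceeds the state dimensions, `lstsq` implements exactly the pseudo-inverse forms above, so the same code covers both the square and the reduced (tall) case.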

For b2 and c1 we can use the following equations:

W_{i0} = [ w_{10} ; w_{20} ; ... ; w_{N0} ] = [ c2 b2 ; c2 A4 b2 ; ... ; c2 A4^{N-1} b2 ] = Q̃_N b2  ⟹  b2 = Q̃_N^{-1} W_{i0}

For the reduced model realization,

b2 = (Q̃_N' Q̃_N)^{-1} Q̃_N' W_{i0}.

Similarly,

W_{0j} = [ w_{01}, w_{02}, ..., w_{0N} ] = c1 [ b1, A1 b1, ..., A1^{N-1} b1 ] = c1 P_N  ⟹  c1 = W_{0j} P_N^{-1}

For the reduced model realization,

c1 = W_{0j} P_N' (P_N P_N')^{-1}.

Then we get a realization of the discrete state-space system (1), where

A = [ A1  0 ; A3  A4 ],  B = [ b1 ; b2 ],  C = [ c1, c2 ],  D = w_{00}

2.3.3 Examples

Example 1:
As a simple example, suppose the image of interest is 10 by 10, with gray value matrix

W = [  52  91 128  80  72  60  82  62  64  48
      158 165 146  95  72  49  53  65  59  60
      134  80 132 176  97  55  76  65  69  84
       95  96  83 101 166 122  86 116  47  54
       51  28  90  79 100 117 161  85 152 141
       78  45  73  40 106 111 181 169 162 144
       71  52  36  47 189 135  93 102 102  97
       46  52  88 100 153 124 101  64  57  69
       57  76  53 108 159 107 145  60  56  41
      114 133 147 157 153  64  73  55  89  74 ]

W_{i0} = [ 158 134 95 51 78 71 46 57 114 ]^T,  i = 1, ..., 9
W_{0j} = [ 91 128 80 72 60 82 62 64 48 ],  j = 1, ..., 9

Form the horizontal and vertical Hankel matrices respectively, and apply the algorithms of [36] to get the realization of C_N, A1, b1, c2, A4 and B_N.

Applying the algorithms of [36] to these Hankel matrices yields ninth-order realizations for the two directions, with A1 ∈ R^{9×9}, A4 = A1', and in particular

b1 = [ 0 0 0 0 1.9199 0 0.0029825 0.028229 7.3255 ]^T
c2 = [ 0 0 0 0 1.3843 0.57259 0.27945 0 0.5717 ]

Calculating Q̃_N and solving A3 = Q̃_N^{-1} C_N, b2 = Q̃_N^{-1} W_{i0} and c1 = W_{0j} P_N^{-1} then gives A3 ∈ R^{9×9} and

b2 = [ 4.4632 0.95866 1.2996 0.79898 2.5584 0.20378 1.0313 1.4697 0.1878 ]^T
c1 = [ 5.204 1.1418 3.723 3.073 1.3843 0.57259 0.27945 0.60903 0.5717 ]

Then we get a realization, where

A = [ A1  0 ; A3  A4 ] ∈ R^{18×18},  B = [ b1 ; b2 ] ∈ R^{18×1},  C = [ c1, c2 ] ∈ R^{1×18},  D = w_{00} = 52

Verify the realization result. The system can generate the identical Markov sequences to the
given image.
wi0 = c1 Ai1
1 b1
j1
woj = c2 A4 b2
i1
wij = c2 Aj1
4 A3 A1 b1
Here we get the full realization for the state-space model, and we can get reduced realization by
choose proper sub-matrix size r. If we choose r = 5, 7, 9, we get the following output:

Woriginal =

52 91 128 80 72 60 82 62 64 48
158 165 146 95 72 49 53 65 59 60

134 80 132 176 97 55 76 65 69 84

95 96 83 101 166 122 86 116 47 54

51 28 90 79 100 117 161 85 152 141

78 45 73 40 106 111 181 169 162 144

71 52 36 47 189 135 93 102 102 97

46 52 88 100 153 124 101 64 57 69

57 76 53 108 159 107 145 60 56 41


114 133 147 157 153 64 73 55 89 74
F ull realization(r = 9).
17

iT

Wr=8 =

Wr=5 =

52
155.76
135.83
93.806
51.415
78.415
69.806
47.829
54.756
116.39

88.874
141.25
97.702
81.86
52.036
42.145
71.856
54.255
57.134
130.51

129.73
170.21
120.29
84.561
86.426
55.176
41.674
65.222
85.88
137.07

78.869 72.393 60.393


120.44 70.984 42.633
147.84 98.512 62.098
126.02 165.03 115.88
68.203 97.51 118.6
50.267 106.44 108.91
58.999 183.54 130.22
86.72 156.04 128.3
123.74 157.78 102.74
121.71 158.84 74.301
where, r = 8

80.869
47.62
76.46
88.058
164.74
186.91
98.649
104.01
136.47
67.03

63.733
57.71
84.505
90.391
101.27
141.04
106.87
52.863
71.808
73.031

61.874
67.009
51.763
66.769
147.24
179.36
112.66
56.336
51.814
61.847

50.263
85.399
67.832
63.905
122.42
139.03
83.934
59.455
66.445
73.7

52
158.16
139.12
84.927
61.296
72.533
70.685
48.541
56.35
112.39

93.759
130.32
100.52
70.966
70.303
74.118
57.218
44.105
70.087
116.54

115.39
159.62
137.93
94.165
58.268
40.902
38.609
54.567
93.625
139.68

94.197 63.498 62.955


137.17 83.191 50.488
139.62 105.19 70.799
119.92 127.17 115.82
82.428 117.17 135.79
60.836 104.94 131.27
75.729 126.61 136.49
107.62 157.49 150.28
126.76 147.71 134.49
131.56 104.07 84.247
where, r = 5

78.207
54.196
59.115
101.35
140.29
139.76
114.63
99.681
95.291
78.187

71.743
64.984
57.844
90.759
142.69
151.99
106.98
63.533
61.373
73.46

50.655
68.386
54.83
78.912
136.09
158.87
115.7
60.284
51.062
70.148

56.596
87.131
64.821
67.621
107.44
130.45
100.45
57.83
57.127
84.329

Example 2:
For a 51 × 51 image, we use the full and reduced realization algorithms to extract the discrete state-space model, and compare the results of image synthesis with the respective models. The pictures in Figure 2.3 are, respectively, the fully realized image (equal to sub-matrix size r = 51) and the reduced images with r = 40, r = 30, r = 25, r = 15 and r = 10.

Figure 2.3 Original image (r = 51); reduced realizations with r = 40, 30, 25, 15 and 10.

The approximation error analysis and further theoretical analysis will be part of the future work.

Model (In)Validation and Texture Classification

Model (in)validation of LTI systems in a robust control setting has been extensively addressed in the past decade. The problem of semi-blind frequency-domain (in)validation of discrete-time, Linear Time Invariant (LTI) models, which is discussed in this proposal, can be formally stated as follows: given (i) a priori information consisting of a candidate model and set descriptions of the measurement noise (N), the model uncertainty and the experimental inputs (U), and (ii) experimental data consisting of frequency-domain measurements of the response, corrupted by additive noise, to an unknown input in U, find whether the a posteriori experimental data is consistent with the a priori information; that is, whether admissible uncertainty, input and noise could have generated this data.
In the case of a completely known input and unstructured LTI uncertainty entering the plant as an LFT, model (in)validation reduces to an LMI feasibility problem that can be solved efficiently. However, this framework cannot be directly applied here, where only a set description of the input is available. This situation arises in many practical cases; examples are the validation of plants subject to unknown time delays, or cases where the only information available about the input is its spectral power density.
In [35], this semi-blind (in)validation problem is shown to be a (generically NP-hard) Bilinear Matrix Inequality minimization problem, and an efficient convex relaxation is obtained by recasting the problem into a structured invalidation form with two uncertainty blocks. The application of this framework to the problem of texture classification will also be introduced. This chapter mainly includes examples of the application of semi-blind model (in)validation to texture processing.

3.1

Semi-blind model (in)validation problem set-up


Consider the problem of invalidating a model of the form shown on the left of Figure 3.1, consisting of the upper linear fractional interconnection of a discrete-time, causal, stable, LTI candidate model P:

q(e^{jω}) = P11(e^{jω})p(e^{jω}) + P12(e^{jω})u(e^{jω})
s(e^{jω}) = P21(e^{jω})p(e^{jω}) + P22(e^{jω})u(e^{jω}) + z(e^{jω})

and an unstructured uncertainty block Δ ∈ BH∞.
Under the assumption that both signals (u, s) are the impulse responses of discrete-time, causal, stable, LTI, rational systems, and since the magnitude Su(e^{jω}) of the input u(e^{jω}) = Su(e^{jω})e^{jφ(ω)} is known, the system can be reformulated as follows:


q(e^{jω}) = M11(e^{jω})p(e^{jω}) + M12(e^{jω})
z(e^{jω}) = M21(e^{jω})p(e^{jω}) + M22(e^{jω})

where

M11(e^{jω}) = P11(e^{jω}),   M21(e^{jω}) = P21(e^{jω})
M12(e^{jω}) = P12(e^{jω})Su(e^{jω})e^{jφ(ω)}
M22(e^{jω}) = (1/γ)[s(e^{jω}) - P22(e^{jω})Su(e^{jω})e^{jφ(ω)}]

with γ denoting the noise bound.
The model (in)validation set-up takes the form shown in Figure 3.1, as in [35].

Figure 3.1 Sample Model Invalidation Set-up

In this framework, the semi-blind model (in)validation problem can be precisely stated as follows.
Problem 1: Given the output s(e^{jω}) and the admissible sets of inputs U and noise N, determine whether there exists at least one pair z ∈ N, Δ ∈ BH∞ and a scalar function φ(ω) so that the system equations hold; or equivalently, whether:

ζ = min_{Δ, φ} ||MΔ||_2 ≤ 1

The difficulty in solving Problem 1 stems from the fact that the problem is not jointly convex in Δ and φ. To solve this problem, a tight convex relaxation is introduced and a sufficient condition for solving Problem 1 is proposed.

3.2 Problem transformation and main results

In [35], the optimization problem is expressed equivalently as follows:

||(P ⋆ Δ)Su e^{jφ(ω)} - s||_2 = ||(P ⋆ Δ)Su - s e^{-jφ(ω)}||_2 = ||Maug Δaug||_2

where

Maug = [ 0                0               1
         0                P11(e^{jω})     P12(e^{jω})Su(e^{jω})
         (1/γ)s(e^{jω})   -P21(e^{jω})    -(1/γ)P22(e^{jω})Su(e^{jω}) ]

Δaug = [ e^{jφ(ω)}   0
         0           Δ ]


And Maug can be expressed in partitioned form as

Maug = [ M11  M12 ]
       [ M21  M22 ]

with M11 = [ 0  0 ; 0  P11(e^{jω}) ], M12 = [ 1 ; P12(e^{jω})Su(e^{jω}) ], M21 = [ (1/γ)s(e^{jω})  -P21(e^{jω}) ] and M22 = -(1/γ)P22(e^{jω})Su(e^{jω}).

In terms of this augmented structure, Problem 1 can be recast as a constrained optimization problem with structured uncertainty:

ζ = min_{Δaug ∈ Δaug} ||Maug Δaug||_2

Δaug = { diag(Δ1(e^{jω}), Δ2) : |Δ1| = |e^{jφ(ω)}| = 1, Δ2 ∈ BH∞ }

Note that the set Δaug is not convex, due to the constraint |Δ1| = 1. To address this difficulty and obtain a tractable optimization problem, we relax the constraint to ||Δ1||∞ ≤ 1. This leads to the following model (in)validation problem with 2-block LTI structured uncertainty:

ζst = min_{Δst ∈ Δst} ||Maug Δst||_2

Δst = { diag(Δ1(e^{jω}), Δ2(e^{jω})) : ||Δi||∞ ≤ 1 }
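The relaxation step can be seen with a two-line numerical check (values chosen for illustration): the phase set {e^{jφ}} is not convex, since the average of two unit-modulus numbers generally lies strictly inside the unit circle, while the relaxed set ||Δ1||∞ ≤ 1 is its convex hull.

```python
import numpy as np

# Two admissible phase uncertainties Delta_1 = e^{j*phi}
d1, d2 = np.exp(1j * 0.3), np.exp(1j * 2.1)
mid = (d1 + d2) / 2                 # a convex combination of the two

assert np.isclose(abs(d1), 1.0) and np.isclose(abs(d2), 1.0)
assert abs(mid) < 1.0               # leaves the set |Delta_1| = 1: not convex
assert abs(mid) <= 1.0              # but stays in the relaxed unit disk
```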
Figure 3.2 shows the transformation of the semi-blind model (in)validation problem.

Figure 3.2 Sample Model Invalidation Set-up Transformation

The necessary and sufficient condition equivalent to ζst > 1 is given in [35] as the following.
Theorem 1: Consider a system M(z) ∈ RH∞ and the 2-block structured uncertainty set Δst = {diag(Δ1, Δ2) : ||Δi||∞ ≤ 1}. Then the following conditions are equivalent:
(i) inf_{Δ ∈ Δst} ||MΔ||_2^2 > 1
(ii) There exist a Hermitian matrix X(ω) ≥ 0, with X(ω) = diag(x1(ω)I1, x2(ω)I2), and a real transfer function y(ω) ≥ 0, such that for all ω in [0, 2π) the following two inequalities hold:

L(X, y) := M*(e^{jω}) diag(X(ω), 1) M(e^{jω}) - diag(X(ω), y(ω)) ≤ 0

(1/2π) ∫₀^{2π} y(ω) dω > 1

3.3 Applications to texture classification


Texture classification has been the subject of intense research in the computer vision and image processing communities, with applications ranging from medical diagnosis to object recognition and image database retrieval. As discussed in the prior chapters, the image is considered as the output of a linear shift-invariant operator S driven by white noise or, in a deterministic setting, by a signal u(e^{jω}) ∈ l2 with |u(e^{jω})| = 1/(2π). This leads to the set-up shown in Figure 3.3, where G(z) represents a nominal model of a particular texture, y and s denote the ideal and actual images, respectively, and the (unknown) operator Δ describes the mismatch between these two images.
Under the proposed framework, we apply Theorem 1 to the texture recognition set-up. The problem is converted to the following form: find for G(z) an input u, |u(e^{jω})| = 1, and an admissible uncertainty operator of minimum size δopt:

δopt = min_{Δ, u} { ||Δ|| : s = (Δ + I)Gu + z }

where ||·|| denotes some norm of interest.
We consider δopt as the criterion for texture classification: a small δopt means a small mismatch, and vice versa.
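A toy numerical sketch of δopt as a classification score, under strong simplifying assumptions not made above (known input phase, negligible noise z, H∞ norm as the size measure); the transfer functions reuse G(z) from Example 1 below, and the "other texture" model H is purely illustrative:

```python
import numpy as np

def delta_opt_estimate(G, s_resp, u_mag=1.0, n_grid=512):
    """Toy estimate of delta_opt = ||Delta||_inf for s = (Delta + I) G u,
    assuming the input phase is known (zero) and the noise z is negligible.
    G and s_resp return the frequency responses G(e^{jw}) and s(e^{jw})."""
    w = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    z = np.exp(1j * w)
    # Delta(e^{jw}) = s/(G u) - 1; its peak magnitude approximates ||Delta||_inf
    Delta = s_resp(z) / (G(z) * u_mag) - 1.0
    return np.max(np.abs(Delta))

# A texture matching the nominal model: small mismatch Delta
G = lambda z: 2.0 / (z**2 + 0.1 * z - 0.12)            # nominal model (Example 1)
Delta_true = lambda z: 0.05 / (z**2 + 0.1 * z - 0.12)  # small mismatch operator
s_match = lambda z: (Delta_true(z) + 1.0) * G(z)
d_match = delta_opt_estimate(G, s_match)

# A texture from a different model: large mismatch, large delta_opt
H = lambda z: 1.0 / (z**2 - 0.5 * z + 0.3)             # illustrative other texture
d_other = delta_opt_estimate(G, lambda z: H(z))
assert d_match < d_other   # small delta_opt <=> same texture class
```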
3.3.1 The detailed procedure and problems

For the texture recognition problem, the image is considered as the output of a single-input, multi-output (SIMO) model. Texture recognition under this condition is an (in)validation procedure for a SIMO system.

Figure 3.3 The Texture Recognition Set-up and Problems Conversion

Then we have:

q(e^{jω}) = P11(e^{jω})p(e^{jω}) + P12(e^{jω})u(e^{jω})
s(e^{jω}) = P21(e^{jω})p(e^{jω}) + P22(e^{jω})u(e^{jω}) + z(e^{jω})

Assume the image model has dimension Nd; then we have

P11 = 0 ∈ R^{Nd×Nd},    P12 = G(e^{jω}) ∈ C^{Nd×1},
P21 = I ∈ R^{Nd×Nd},    P22 = G(e^{jω}) ∈ C^{Nd×1}.

And the transformation of this problem can be expressed by

Maug = [ M11  M12 ]
       [ M21  M22 ]

with M11 = [ 0  0 ; 0  P11(e^{jω}) ], M12 = [ 1 ; P12(e^{jω})Su(e^{jω}) ], M21 = [ (1/γ)s(e^{jω})  -P21(e^{jω}) ] and M22 = -(1/γ)P22(e^{jω})Su(e^{jω}).

M11, M12, M21, M22 can be expressed as M11 = a11 + jb11, M12 = a12 + jb12, M21 = a21 + jb21 and M22 = a22 + jb22. Based on the discussion of the complex LMI problem, we obtain the following result for the general case:

L(x) := M*(e^{jω}) diag(X(ω), 1) M(e^{jω}) - diag(X(ω), y(ω)) < 0

In the Matlab implementation of the LMI algorithm, the function LMITERM requires that its coefficient values not be complex. For this reason, the forms above must be adjusted to meet this requirement.

"

Lemma: Assume A = {a + jb}, then A < 0

a b
b a

<0

Proof : From the definition of definitely negative, we can get


.
.
A = {a + jb} < 0 X = {x + jy}, X AX < 0
X AX < 0 (xT jy T )(a + jb)(x + jy) < 0
T
T
T
= (xT ax
" + y ay#+ y ax x ay) < 0
a b
=
< 0. The end.
b a
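A quick numerical check of the lemma with numpy (the test matrix is randomly generated and shifted to be negative definite purely for illustration):

```python
import numpy as np

def real_embedding(A):
    """Real symmetric embedding [[a, -b], [b, a]] of a Hermitian A = a + jb."""
    a, b = A.real, A.imag
    return np.block([[a, -b], [b, a]])

rng = np.random.default_rng(0)
# Random Hermitian matrix, shifted so that it is negative definite
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2 - 10 * np.eye(4)

E = real_embedding(A)
assert np.allclose(E, E.T)                    # embedding is real symmetric
assert np.max(np.linalg.eigvalsh(A)) < 0      # A < 0 ...
assert np.max(np.linalg.eigvalsh(E)) < 0      # ... iff its real embedding < 0
# each eigenvalue of A appears twice in the embedding
assert np.allclose(np.sort(np.repeat(np.linalg.eigvalsh(A), 2)),
                   np.sort(np.linalg.eigvalsh(E)))
```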
From the lemma, we can get

L(x) < 0  <=>  [ Re{L(x)}  -Im{L(x)} ]
               [ Im{L(x)}   Re{L(x)} ] < 0,    with Im{L(x)} = -Im{L(x)}'

where

Re{L(x)} = [ a11'Xa11 + b11'Xb11 + a21'a21 + b21'b21 - X      a11'Xa12 + b11'Xb12 + a21'a22 + b21'b22
             a12'Xa11 + b12'Xb11 + a22'a21 + b22'b21          a12'Xa12 + b12'Xb12 + a22'a22 + b22'b22 - y(ω) ]

Im{L(x)} = [ a11'Xb11 - b11'Xa11 + a21'b21 - b21'a21          a11'Xb12 - b11'Xa12 + a21'b22 - b21'a22
             a12'Xb11 - b12'Xa11 + a22'b21 - b22'a21          a12'Xb12 - b12'Xa12 + a22'b22 - b22'a22 ]

3.3.2 Examples

Example 1:
Assume G(z) = 2/(z² + 0.1z - 0.12) and Δ(z) = 0.05/(z² + 0.1z - 0.12). Then we have:

q(e^{jω}) = P11(e^{jω})p(e^{jω}) + P12(e^{jω})u(e^{jω})
s(e^{jω}) = P21(e^{jω})p(e^{jω}) + P22(e^{jω})u(e^{jω}) + z(e^{jω})

where P11 = 0, P12 = G(z), P21 = 1, P22 = G(z).
Based on the generation of z and Δ, we have γ = 0.03. For an input with unknown time delay τ, we apply the proposed algorithm to transform the problem, and use Theorem 1 and its equivalent form to compute the criterion and check the efficiency of δopt. The results of the computer simulation are listed in Table 3.1.
Table 3.1: γ and the results of I(ȳ)

γ          τ = 0       τ = 5      τ = 20    τ = 30    τ = 55
0.0001     2.1927      2.4214     2.2744    2.2034    2.4982
0.0056444  1.5666      1.7963     1.6498    1.6142    1.8203
0.011189   1.0596      1.2918     1.1464    1.1458    1.2754
0.016733   0.66545     0.8878     0.75612   0.77667   0.84857
0.022278   0.37802     0.5725     0.47521   0.49066   0.52319
0.027822   0.18754     0.33985    0.28768   0.27957   0.28768
0.033367   0.08281     0.18215    0.16787   0.13641   0.13984
0.038911   0.031768    0.086101   0.097067  0.054631  0.062837
0.044456   0.0083003   0.031474   0.055328  0.016178  0.024155
0.05       0.00043026  0.0073614  0.030491  0.002356  0.0054036
δopt       0.011       0.014      0.012     0.012     0.013

Example 2:
For the samples in the following figure, we apply the proposed algorithm to compute δopt and use it for texture classification.


Figure 3.4 The Texture Recognition Samples 1-9

Table 3.2: γ and the results of I(ȳ)

γ      I^{1,2}(ȳ)  I^{1,3}(ȳ)  I^{1,4}(ȳ)  I^{1,5}(ȳ)  I^{1,6}(ȳ)  I^{1,7}(ȳ)  I^{1,8}(ȳ)
0.04   1.4         1.5497      1.5769      49099       76394       41229       39230
0.06   0.94892     1.0636      1.1234      46615       71840       39354       37373
0.08   0.67078     0.77038     0.83756     44241       67455       37552       35606
0.1    0.48445     0.5856      0.64198     41988       63206       35835       33897
δopt   0.057       0.061       0.07        0.94        0.97        0.92        0.92

From the above results, we can see that δopt can serve as a criterion for texture recognition. Figure 3.4 shows figure 1 corrupted by additive noise with ||·||_2 < 5. Figures 3.5 and 3.6 show parts, at different positions, of the same type of texture background as figure 1.

Figure 3.5 The Texture Background

Example 3:
To test the algorithm further, we shift the sampled area over the same background as in Figure 3.5 in different horizontal and vertical directions, and compute δopt in each case to check the result. Assume the noise is additive with ||·||_2 < 5; the results are listed in Tables 3.3 and 3.4.
First let's consider the vertical shift condition.


Table 3.3: γ and the results of I(ȳ)

γ      I^{1,2}(ȳ)  I^{1,3}(ȳ)  I^{1,4}(ȳ)  I^{1,5}(ȳ)  I^{1,6}(ȳ)  I^{1,7}(ȳ)
0.04   0.088986    0.24264     0.43166     0.6129      0.98534     1.4239
0.06   0.046072    0.1356      0.27629     0.39362     0.66173     0.97687
0.08   0.027926    0.081752    0.18808     0.267       0.46939     0.70071
0.1    0.018476    0.053295    0.13633     0.19479     0.34955     0.52469
δopt   0.01        0.015       0.02        0.03        0.038       0.059

γ      I^{1,8}(ȳ)  I^{1,9}(ȳ)  I^{1,10}(ȳ)  I^{1,11}(ȳ)  I^{1,12}(ȳ)  I^{1,13}(ȳ)
0.04   1.7557      2.0081      2.373        2.7161       3.157        3.7224
0.06   1.2309      1.4193      1.6936       1.968        2.3733       2.8596
0.08   0.89328     1.0364      1.2492       1.4599       1.7906       2.2186
0.1    0.67204     0.77355     0.93463      1.1043       1.3607       1.7468
δopt   0.065       0.078       0.097        0.11         0.12         0.13

Figure 3.6 The Texture Recognition Samples (Vertical Shift) 1-13

Then let's consider the horizontal shift condition.

Table 3.4: γ and the results of I(ȳ)

γ      I^{1,2}(ȳ)  I^{1,3}(ȳ)  I^{1,4}(ȳ)  I^{1,5}(ȳ)  I^{1,6}(ȳ)  I^{1,7}(ȳ)
0.04   14730       12512       13034       13123       13908       13425
0.06   13882       11660       12250       12290       13026       12658
0.08   13099       10870       11528       11527       12206       11945
0.1    12359       10130       10852       10821       11446       11278
δopt   0.95        0.95        0.952       0.95        0.95        0.95

Figure 3.7 The Texture Recognition Samples (Horizontal Shift) 1-7


We can see that under the vertical shift condition, the δopt criterion works well. Under the horizontal shift condition, however, it did not work, because the system model is set up for the vertical direction. If we extended the theorem to 2-D and transformed the problem into a 2-D setting, it might work better.
Example 4:
In these tests, the images have been made zero mean and normalized to ||·||_2 = 1. The images are as in Figure 3.6, the noise level was set to 0.05, and the results are listed in Table 3.5.
Table 3.5: γ and the results of I(ȳ)

γ      I^{1,2}(ȳ)  I^{1,3}(ȳ)  I^{1,4}(ȳ)  I^{1,5}(ȳ)  I^{1,6}(ȳ)  I^{1,7}(ȳ)
0.04   0.11848     0.26697     0.43647     0.5769      0.77889     1.0027
0.06   0.088952    0.21973     0.37158     0.49201     0.67288     0.87922
0.08   0.067472    0.18425     0.32074     0.42736     0.58972     0.78091
0.1    0.052033    0.1566      0.27923     0.37536     0.52213     0.69937
δopt   0.001       0.01        0.016       0.02        0.03        0.04

γ      I^{1,8}(ȳ)  I^{1,9}(ȳ)  I^{1,10}(ȳ)  I^{1,11}(ȳ)  I^{1,12}(ȳ)  I^{1,13}(ȳ)
0.04   1.173       1.3167      1.5262       1.633        1.7965       2.017
0.06   1.0336      1.1727      1.3635       1.4653       1.6138       1.8166
0.08   0.92049     1.0545      1.2286       1.3232       1.4596       1.645
0.1    0.82436     0.95299     1.1137       1.2009       1.326        1.4959
δopt   0.06        0.08        0.11         0.12         0.13         0.15

The image size is 64 × 10. In Figure 3.6, every vertical shift step shifts the image by 5 pixels. From Table 3.5, we can see that the largest allowable shift is 30 pixels when we require δopt < 0.05.
For different texture types, the test sample images are shown in Figure 3.8, and the invalidation results for δopt are listed in Table 3.6.

Figure 3.8 The Texture Recognition Test II: Samples 1-7


Table 3.6: γ and the results of I(ȳ)

γ        I^{1,2}(ȳ)  I^{1,3}(ȳ)  I^{1,4}(ȳ)  I^{1,5}(ȳ)  I^{1,6}(ȳ)  I^{1,7}(ȳ)
0.04     340.93      493.07      424.33      400.49      520.88      364.44
0.34333  138.28      221.22      187.22      169.06      239.07      152.12
0.64667  31.073      59.164      49.913      42.808      67.059      36.17
0.95     0.29715     1.0318      0.85273     0.7315      1.264       0.5331
δopt     0.9         0.95        0.94        0.94        0.97        0.91

3.4 Texture Classification Using the 2-D Filter SVD Realization Model

In Chapter 2.1.1, a 2-D filter SVD realization algorithm was presented to obtain a model of the image. The following applies the (in)validation algorithm to the 2-D filter image model; the results show the efficiency of the model and of the validation algorithm.

3.4.1 Examples

Example 1:
The images are chosen at different positions of the whole texture background. This image has a somewhat periodic structure with noise. We choose the noise level as 5% of the input energy (in ||·||_2). The size of the image is 94 × 94. For model reduction, we choose lr = 3; it can be shown that for lr = 3 the image can be reconstructed well. The results are shown in Figure 3.9 and Table 3.7. Iv^{1,n}(ȳ) and Ih^{1,n}(ȳ) denote the vertical and horizontal filter models, respectively.
Table 3.7: γ and the results of I(ȳ)

γ      Iv^{1,2}(ȳ)  Ih^{1,2}(ȳ)  Iv^{1,3}(ȳ)  Ih^{1,3}(ȳ)  Iv^{1,4}(ȳ)  Ih^{1,4}(ȳ)  Iv^{1,5}(ȳ)  Ih^{1,5}(ȳ)
0.01   0.036999     0.032742     0.085412     0.09073      0.16259      0.11996      0.22965      0.2662
0.105  0.013094     0.016105     0.035724     0.03841      0.082398     0.068086     0.098114     0.12419
0.2    0.006261     0.0093659    0.018607     0.01941      0.052301     0.047621     0.058963     0.091537

γ      Iv^{1,6}(ȳ)  Ih^{1,6}(ȳ)  Iv^{1,7}(ȳ)  Ih^{1,7}(ȳ)  Iv^{1,8}(ȳ)  Ih^{1,8}(ȳ)  Iv^{1,9}(ȳ)  Ih^{1,9}(ȳ)
0.01   0.25676      0.34653      0.3588       0.44825      0.46859      0.8556       0.56763      0.6451
0.105  0.12485      0.16924      0.17578      0.2484       0.2653       0.40442      0.23936      0.29128
0.2    0.083035     0.12733      0.11544      0.18254      0.17821      0.23445      0.16509      0.19548

γ      Iv^{1,10}(ȳ)  Ih^{1,10}(ȳ)  Iv^{1,11}(ȳ)  Ih^{1,11}(ȳ)  Iv^{1,12}(ȳ)  Ih^{1,12}(ȳ)
0.01   0.57209       0.62187       0.67865       0.79029       0.75834       0.61524
0.105  0.21921       0.25599       0.25571       0.34455       0.25891       0.30155
0.2    0.15502       0.15626       0.17614       0.1967        0.18127       0.20498


Figure 3.9 The Sample Textures 1-12

From Table 3.7, we can see that all δopt < 0.01. The results are better than those obtained before taking the SVD. The reason may be that after SVD decomposition and model reduction, only the key information of the texture structure remains.
Example 2:
This image has a somewhat periodic structure with noise. We again choose the noise level as 5% of the input energy (in ||·||_2). The size of the image is 64 × 64. For model reduction, we choose lr = 8; it can be shown that for lr = 8 the image can be reconstructed well. The results are shown in Figure 3.10 and Table 3.8.
Table 3.8: γ and the results of I(ȳ)

γ      Iv^{1,2}(ȳ)  Ih^{1,2}(ȳ)  Iv^{1,3}(ȳ)  Ih^{1,3}(ȳ)  Iv^{1,4}(ȳ)  Ih^{1,4}(ȳ)  Iv^{1,5}(ȳ)  Ih^{1,5}(ȳ)
0.01   0.055463     0.066322     0.29882      0.41178      0.43431      0.32709      0.47972      0.52293
0.105  0.0072827    0.0089614    0.10738      0.12704      0.12664      0.10518      0.027096     0.042722
0.2    0.0021096    0.0032282    0.059401     0.072031     0.069125     0.056889     0.012982     0.012937

γ      Iv^{1,6}(ȳ)  Ih^{1,6}(ȳ)  Iv^{1,7}(ȳ)  Ih^{1,7}(ȳ)  Iv^{1,8}(ȳ)  Ih^{1,8}(ȳ)  Iv^{1,9}(ȳ)  Ih^{1,9}(ȳ)
0.01   0.51972      0.56625      0.83892      0.8488       1.3121       1.3122       1.3715       1.4422
0.105  0.039081     0.058679     0.17797      0.12633      0.15372      0.23267      0.18303      0.22995
0.2    0.019593     0.020106     0.10507      0.050167     0.083288     0.070809     0.10223      0.077837

γ      Iv^{1,10}(ȳ)  Ih^{1,10}(ȳ)  Iv^{1,11}(ȳ)  Ih^{1,11}(ȳ)  Iv^{1,12}(ȳ)  Ih^{1,12}(ȳ)
0.01   1.6523        1.6452        2.1183        2.0547        1.6783        1.5033
0.105  0.2423        0.27469       0.37516       0.42783       0.40574       0.35879
0.2    0.10756       0.061891      0.11816       0.076685      0.19141       0.16596

Figure 3.10 The Sample Textures 1-12


From Table 3.8, we can see that all δopt < 0.1. These textures have the same type of texture construction.
Example 3:
For different texture types, the results are listed below. For comparison, we choose the noise as 10% of the input energy. The sample images are shown in Figure 3.11, and the results are listed in Table 3.9.
Table 3.9: γ and the results of I(ȳ)

γ      Iv^{1,2}(ȳ)  Ih^{1,2}(ȳ)  Iv^{1,3}(ȳ)  Ih^{1,3}(ȳ)  Iv^{1,4}(ȳ)  Ih^{1,4}(ȳ)  Iv^{1,5}(ȳ)  Ih^{1,5}(ȳ)
0.1    0.071633     0.060475     5.7204       5.9574       18.75        21.018       28.058       24.288
0.55   0.012426     0.012054     0.89536      0.9928       5.2887       5.6141       7.7955       6.977
0.9    0.00019605   0.00021195   0.0019302    0.003285     0.12652      0.15009      0.20527      0.3314
δopt   <0.01        <0.01        0.5          0.5          0.70         0.75         0.85         0.83

Figure 3.11 The Sample Textures 1-12

3.4.2 Special problems and their solutions

One problem is worth mentioning from the analysis of the experimental results. A possible cause of unexpected, abnormal results is the +/- sign ambiguity of the SVD. As is well known, changing the signs of u and v in the SVD does not change the reconstruction of the original matrix:

I = Σ_{i=1}^{l} σi ui vi' = Σ_{i=1}^{l} σi (-ui)(-vi)' = Σ_{i=1}^{l} σi ui¹ (vi¹)',   with ui¹ = -ui, vi¹ = -vi

Because of this, the validation result can be dramatically affected. Some examples were tried to test this assumption; the results are listed below.
Take I1 = [u1, u2, u3] ∈ R^{94×3}, and consider three conditions, I2, I3, I4 ∈ R^{94×3}, each obtained from I1 by changing the signs of some of its column vectors.

Table 3.10: γ and the results of I(ȳ)

γ      I^{1,2}(ȳ)  I^{1,3}(ȳ)  I^{1,4}(ȳ)
0.1    0           254.32      0
0.4    0           113.16      0
0.7    0           28.307      0
δopt   0           0.95        0

Applying the semi-blind model validation algorithm, we can conclude from Table 3.10 that the sign differences greatly affect the invalidation result. To solve this problem, I use the following procedure:
1. Normalize the input data.
2. Take the SVD and calculate the mean values of each corresponding pair of vectors.
3. If the corresponding mean values have the same signs and are large enough, keep the original data. If the signs are different and the mean values are large enough, multiply one vector by -1.
4. If the mean values are very small, mark the pair, try the two different signs respectively, and choose the better result.
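A minimal numpy sketch of the sign ambiguity and of steps 1-3 above (the mean-sign rule; the function name, threshold and test matrix are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
I1 = rng.standard_normal((94, 3)) + 2.0    # toy "image" with nonzero-mean columns
U, s, Vt = np.linalg.svd(I1, full_matrices=False)

# Flipping the sign of a (u_i, v_i) pair leaves the reconstruction unchanged...
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1.0
Vt2[0, :] *= -1.0
assert np.allclose(U2 @ np.diag(s) @ Vt2, I1)

# ...but flipping u_i alone changes the factors the validation step sees.
U3 = U.copy()
U3[:, 0] *= -1.0
assert not np.allclose(U3 @ np.diag(s) @ Vt, I1)

def fix_signs(U, Vt, tol=1e-6):
    """Mean-sign rule: make the mean of each u_i (and the paired v_i) positive."""
    U, Vt = U.copy(), Vt.copy()
    for i in range(U.shape[1]):
        m = U[:, i].mean()
        if abs(m) > tol and m < 0:   # mean large enough but negative: flip the pair
            U[:, i] *= -1.0
            Vt[i, :] *= -1.0
        # a very small mean would be marked and both signs tried (step 4)
    return U, Vt

Uf, Vtf = fix_signs(U2, Vt2)
Ug, Vtg = fix_signs(U, Vt)
assert np.allclose(Uf, Ug) and np.allclose(Vtf, Vtg)   # sign ambiguity removed
```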


A Hankel Operator Approach to Texture Modelling and Inpainting

The approach discussed in this chapter was proposed in Sznaier's paper [46]; it models images exhibiting a given texture as realizations of a second-order stationary stochastic process. It considers the intensity values I(k, :) of the k-th row of the image as the output of a discrete, linear shift-invariant, not necessarily causal, system driven by white noise. Moreover, as proved in [46], finding textons and completing missing portions of a textured image both reduce to rank minimization problems.
The framework is introduced in the following; it is the basis of my future work extending the current inpainting work using Roesser models and the 2-D Hankel matrix.

4.1 Texture modelling

In [46], a textured image with intensity values I(k, :) in the k-th row of the n × m image is modelled as the output, at step k, of a discrete linear shift-invariant system driven by white noise u:

I(k, :) = Σ_{j=1, j≠k}^{n} [ a_{n-j} I(j, :) + b_{n-j} u_j ]

where the ai, bi are unknown parameters to be extracted from the image data.
The issue of causality can be addressed by considering the given n × m image as one period of an infinite 2-D signal with period (n, m). Thus it admits a state-space realization of the form:

x_{k+1} = A x_k + B u_k
y_k = C x_k,   with Aⁿ = I
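The condition Aⁿ = I is what makes the realization periodic: the Markov parameters C A^{k-1} B then repeat with period n. A small numpy sketch (the particular A, B, C are chosen purely for illustration):

```python
import numpy as np

n = 4  # period
# A = cyclic shift matrix, so A^n = I and the model is n-periodic
A = np.roll(np.eye(n), 1, axis=0)
B = np.array([[1.0], [0.0], [0.0], [0.0]])
C = np.array([[2.0, -1.0, 0.5, 3.0]])

assert np.allclose(np.linalg.matrix_power(A, n), np.eye(n))

# Markov parameters R_{k+1} = C A^k B repeat with period n
R = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(3 * n)]
assert np.allclose(R[:n] * 3, R)   # the impulse response is periodic
```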
To extract the texture model parameters from noisy images, the finite-horizon model reduction method (Algorithm 1) is applied here, based on the SVD of a circulant Hankel matrix constructed from the image data:

H_I = [ R1   R2   ...  Rn
        R2   R3   ...  R1
        ...
        Rn   R1   ...  R_{n-1} ]

where Ri' denotes the i-th row of the image matrix I ∈ R^{n×m}; the rows form the nm × n block-circulant Hankel matrix H_I.
4.2 Finding textons

A texton is the smallest sub-image that can reproduce the original image. In [46], the problem of finding textons is solved by finding regions of the image corresponding to local minima of the rank of the associated Hankel matrices. The main results are the following:
1. Given an n × m image I(x, y), let (A, B, C) denote the state-space matrices of the corresponding model. Assume that the image I contains at least one complete texton, that is, there exists some r < min{m, n} such that Ri = C A^{i-1} B, where A ∈ R^{r×r}, A^r = I and A^k ≠ I for any 1 ≤ k < r.
For an ideal image, uncorrupted by noise, form the following matrix in this case:

H_{nk} = [ R1      R2   ...  R_{nk}
           R2      R3   ...  R1
           ...
           R_{nk}  R1   ...  R_{nk-1} ]

where nk = (N - 1)r + k; it satisfies rank(H_{(N-1)r+k}) > r.
Textons can be found by considering the rank of a sequence of Hankel matrices, starting with k = n and decreasing k, searching for relative minima of rank(H_k).
2. For the more realistic case of an ideal texture corrupted by additive noise v, the Hankel matrices of the actual (Y_k) and ideal (H_k) images are related by

Y_k = H_k + H_v

and the equivalent problem is:

min r over k, subject to:  σ̄(H_v) = σ̄(Y_k - H_k) ≤ ε,  H_k ∈ H,  rank(H_k) = r

where H denotes the set of all block-circulant Hankel matrices.

This approach is illustrated in Figure 4.1, where it was used to (i) find a texton, (ii) extract the corresponding model, and (iii) expand the original image.

Figure 4.1 Examples of finding textons through rank minimization.
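A 1-D numpy sketch of the rank-based period search, with a scalar sequence standing in for the image rows (function names and sizes are illustrative):

```python
import numpy as np

def circ_hankel(R, k):
    """k x k circulant Hankel matrix of the sequence R truncated to length k."""
    return np.array([[R[(i + j) % k] for j in range(k)] for i in range(k)])

rng = np.random.default_rng(3)
texton = rng.standard_normal(4)          # true period r = 4
R = np.tile(texton, 6)                   # observed sequence of length 24

ranks = {k: np.linalg.matrix_rank(circ_hankel(R, k)) for k in range(2, 13)}
# rank(H_k) collapses to r whenever k is a multiple of the period...
assert ranks[8] == 4 and ranks[12] == 4
# ...and exceeds it otherwise, so the period shows up as a relative minimum
assert ranks[7] > 4 and ranks[9] > 4
```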


4.3 Texture Inpainting

The texture inpainting problem arises in the context of restoring a textured image where some pixels are missing or corrupted. It can also be used to remove unwanted elements from an image or to correct errors. As shown in [46], this problem can be recast as a rank minimization problem. The main result is the following:
1. Given an n × m image I(x, y), let H and (A, B, C) denote the associated Hankel matrix and state-space model, respectively. Assume that the image I contains at least one complete period, that is, A ∈ R^{r×r}, with A^r = I and r < min{m, n}.
Assume now that R1, the first row of the image, is missing or corrupted. The corresponding Hankel matrix is given by:

H(x) = [ x    R2   ...  Rr
         R2   R3   ...  x
         ...
         Rr   x    ...  R_{r-1} ]

that is, the block-circulant Hankel matrix with the unknown row x substituted for R1 wherever R1 appears, where x denotes the missing pixels.


The missing pixels x can be found by solving the following problem:

min r over x, subject to:  σ̄(Y - H(x)) ≤ ε,  H(x) ∈ H,  rank(H(x)) = r

where Y and H(x) denote the corresponding image and the underlying low-rank Hankel operator.
Reducing the Computational Complexity: A potential difficulty with the approach outlined above stems from the fact that rank minimization problems are known to be generically NP-hard. Computationally tractable convex relaxations are given in [46], exploiting the specific structure of the problem.
If the Hankel matrix H(x) has rank r, then so does the Toeplitz matrix

T(x) = [ R1       x    ...
         R2       R1   x
         ...           ...
         R_{n-1}  ...  R1 ]

with first column (R1, R2, ..., R_{n-1}) and x on the first superdiagonal. As proven in [46], one can then attempt to solve the above NP-hard problem by solving instead the optimization problem:

min_x Σ_i log(σ_i(T(x))² + δ)
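A toy numpy sketch of this log-det surrogate on a 1-D periodic sequence with one missing sample; the brute-force grid search and the constant δ are purely illustrative stand-ins for the actual optimization:

```python
import numpy as np

def circ_hankel(c):
    k = len(c)
    return np.array([[c[(i + j) % k] for j in range(k)] for i in range(k)])

def logdet_surrogate(H, delta=1e-6):
    """Smooth surrogate for rank(H): sum_i log(sigma_i^2 + delta)."""
    s = np.linalg.svd(H, compute_uv=False)
    return np.sum(np.log(s**2 + delta))

# Period-2 sequence of length 6 with sample 0 missing: the true value 1.0
# minimizes the surrogate, since it makes the Hankel matrix rank-deficient.
true_seq = np.array([1.0, 5.0, 1.0, 5.0, 1.0, 5.0])
candidates = np.linspace(-2.0, 4.0, 121)
scores = []
for x in candidates:
    seq = true_seq.copy()
    seq[0] = x
    scores.append(logdet_surrogate(circ_hankel(seq)))
best = candidates[int(np.argmin(scores))]
assert abs(best - 1.0) < 0.05
```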


Texture Inpainting from an Image-based Viewpoint: As discussed before, the image-based algorithms, another very important class of texture synthesis algorithms, work by directly copying sample image pixels or patches and stitching them together to restore or synthesize an image. We can solve the inpainting problem by searching for the proper image patches with the rank minimization criterion above. It follows that the missing pixels x can be found by solving the following problem:

min r over x ∈ X, subject to:  σ̄(Y - H(x)) ≤ ε,  H(x) ∈ H,  rank(H(x)) = r

where X denotes the set of sample patches.
Here we treat the rank minimization problem with a patch-wise optimization approach, which is the main difference from the pixel-wise methods indicated before. This image-based approach is very fast compared to the other methods mentioned, and is also very efficient for the inpainting problem. The small patches, taken as a whole, can also better preserve the inherent properties of textures of the same class. Figure 4.2 gives an illustration of this.

Figure 4.2 Original, Corrupted and Restored Images


Conclusions and Future Work

Conclusions:
This proposal approaches the problems of texture modelling, analysis and synthesis from a viewpoint of control and operator theory, where images exhibiting a given texture are viewed as the output of an unknown system with periodic impulse response, corrupted by noise. Many problems of practical interest require addressing the issues of identification and model reduction of systems having a periodic impulse response. Currently available techniques are not well suited for solving these problems, since they cannot guarantee the key structural periodicity properties.
In the proposal, I introduced the finite-horizon model reduction method and extended it from 1-D to 2-D, motivated by the idea of 2-D IIR filter realization and Roesser's 2-D state-space model. Besides, many experiments have been done on the applications of semi-blind (in)validation theory to image recognition and classification. The results have shown the efficiency, the drawbacks and the future work orientation of the method. Finally, a Hankel operator approach is applied to image restoration and inpainting by conversion to a rank minimization problem. The discussion and experiments were done for both procedural-based and image-based algorithms.
Future Work:
Based on the work done in this proposal, the following lists my planned future work for my Ph.D. degree, with a rough time schedule.
1. Aug. 05 - Sep. 05
Summarization of the finished work into papers, including the work on the applications of semi-blind (in)validation theory to texture classification and the work on the 2-D extension of the Hankel operator texture modelling approach. There will be additional work on theoretical and experimental analysis.
2. Sep. 05 - Nov. 05
Completing the current 2-D texture image processing framework. The approximation error should be analyzed. A more efficient modelling method is still needed, such as using a special single Hankel matrix to obtain the system realization. Much work is needed to extend many applications from 1-D to 2-D, such as image inpainting.
3. Oct. 05 - Dec. 05
Extending the current work to dynamic textures. As we can see in the proposal and other related papers, the proposed algorithms can be modified to incorporate the time evolution of the state representation, potentially providing efficient ways of modelling and recognizing dynamic textures. Dynamic texture analysis is currently receiving considerable attention.
4. Nov. 05 - Feb. 06
Generalizing the current semi-blind (in)validation results to cases involving time-varying and slowly time-varying uncertainty structures. Developing necessary and sufficient invalidation conditions for mixed LTI/LTV and LTI/SLTV structures. Applying the new results to static and dynamic texture processing.
5. Mar. 06 - Jun. 06
Other unscheduled work related to the research and the thesis.


References
[1] L. Van Gool, P. Dewaele, and A. Oosterlinck, Texture analysis anno 1983, CVGIP, vol. 29, pp. 336-357, 1985
[2] M. Tuceryan and A. K. Jain, Texture analysis, in Handbook of Pattern Recognition and
Computer Vision,. H. Chen, L. F. Pau, and P. S. Wang, Eds. World Science Publishing,
1993, pp. 235-276.
[3] T. Ojala, M. Pietikainen, and D. Harwood, A comparative study of texture measures
with classification based on feature distributions, Pattern Recognition, vol. 29, no. 1, pp.
51-59, 1996
[4] T. Randen and J. H. Husoy, Filtering for texture classification: A comparative study, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 21, no. 4, pp. 291-310, 1999
[5] J. Zhang and T. Tan, Brief review of invariant texture analysis methods, Pattern Recognition, vol. 35, pp. 735-747, 2002.
[6] B. Julesz, Visual pattern discrimination, IRE Trans. on Information Theory, vol. IT-8, pp. 84-92, 1962
[7] G. R. Cross and A. K. Jain, Markov random field texture models, IEEE Trans. on Pattern
Analysis and Machine Intelligence, vol. 5, pp. 713-718, 1983
[8] J. Mao and A. K. Jain, Texture classification and segmentation using multi-resolution simultaneous autoregressive models, Pattern Recognition, vol. 25, pp. 173-188, 1992
[9] R. Chellappa and R. Kashyap, Texture synthesis using 2d noncausal autoregressive models, IEEE Trans. on Acoustics, Speech and Signal Processing, vol. assp-33, no. 1, pp.
194-199, February 1985
[10] Y. Q. Xu, B. Guo, and H. Shum, Chaos mosaic: Fast and memory efficient texture synthesis, Microsoft Research, Tech. Rep. MSR-TR-2000-32, April 2000.
[11] S. Geman and D. Geman, Stochastic relaxation, gibbs distributions and the bayesian
restoration of images, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 6,
pp. 721-741, 1984
[12] D. Cano and T. H. Minh, Texture synthesis using hierarchical linear transforms, Signal
Processing, vol. 15, pp. 131-148, 1988
[13] M. Porat and Y. Y. Zeevi, Localized texture processing in vision: Analysis and synthesis
in gaborian space, IEEE Trans. Biomedical Engineering, vol. 36, no. 1, pp. 115-129, 1989
[14] J. D. Bonet and P. Viola, A non-parametric multi-scale statistical model for natural images. MIT Press, 1997, ch. 9
[15] D. Heeger and J. Bergen, Pyramid-based texture analysis/synthesis, in ACM SIGGRAPH, 1995
[16] K. Popat and R. W. Picard, Cluster-based probability model and its application to image
and texture processing, IEEE Trans. on Image Processing, vol. 6, no. 2, pp. 268-284, 1997
[17] J. Portilla and E. P. Simoncelli, A parametric texture model based on joint statistics of
complex wavelet coefficients, Int. Journal of Computer Vision, vol. 40, no. 1, pp. 49-71,
2000
[18] A. Efros and T. Leung, Texture synthesis by non-parametric sampling, in ICCV, 1999

[19] L. Y. Wei and M. Levoy, Order-independent texture synthesis, Stanford University,


Computer Science Department, Tech. Rep. TR-2002-01, April 2002
[20] P. Harrison, A non-hierarchical procedure for re-synthesis of complex textures, in WSCG,
2001, pp. 190-197
[21] M. Ashikhmin, Synthesizing natural textures, in Symposium on Interactive 3D Graphics,
2001, pp. 217-226.
[22] S. D. Rane, G. Sapiro, and M. Bertalmio, Structure and texture filling-in of missing image
blocks in wireless transmission and compression applications, IEEE Trans. on Image Processing, vol. 12, no. 3, pp. 296-303,
March 2003.
[23] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, Image inpainting, in Siggraph
2000, Computer Graphics Proceedings, K. Akeley, Ed. ACM PressWesley Longman, 2000,
pp. 417-424.
[24] Y. Q. Xu, B. Guo, and H. Shum, Chaos mosaic: Fast and memory efficient texture synthesis, Microsoft Research, Tech. Rep. MSR-TR-2000-32, April 2000.
[25] A. A. Efros and W. T. Freeman, Image quilting for texture synthesis and transfer, in
Proceeding of the 28th annual conference on Computer graphics and interactive techniques.
ACM Press, 2001, pp. 341-346
[26] Y. Liu, Y. Tsin, and W. C. Lin, The promise and perils of near-regular texture, International Journal of Computer Vision, vol. 62, no. 1-2, pp. 145-159, April 2005
[27] Y. Liu, R. Collins, and Y. Tsin, A computational model for periodic pattern perception
based on frieze and wallpaper groups, IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 26, no. 3, pp. 354-371, March 2004
[28] B. C. Moore, Principal component analysis in linear systems: Controllability, observability
and model reduction, IEEE Trans. Aut. Control, vol. 26, no. 1, pp. 17-32, 1981
[29] U. M. Al-Saggaf, On Model Reduction and Control of Discrete Time Systems. Ph.D.
Thesis, Stanford University, 1986
[30] K. Glover, All optimal Hankel norm approximations of linear multivariable systems and
their L∞-error bounds, Int. J. of Control, vol. 39, no. 6, pp. 1115-1193, 1984
[31] R. Sanchez Pena and M. Sznaier, Robust Systems Theory and Applications. Wiley & Sons,
Inc., 1998
[32] K. Zhou, J.C. Doyle, and K. Glover, Robust and Optimal Control. Prentice Hall, 1996
[33] A. J. Tether, Construction of minimal linear state variable models from finite input/output
data, IEEE Trans. Aut. Control, vol. 15, pp. 427-436, 1981
[34] H. P. Zeiger and A. J. McEwen, Approximate linear realizations of given dimension via
Ho's algorithm, IEEE Trans. Aut. Control, vol. 19, p. 153, 1974
[35] M. Sznaier, M. C. Mazzaro, O. Camps, Semi-blind model (in)validation with applications
to texture classification.
[36] M. Sznaier, O. Camps and M. C. Mazzaro, Finite horizon model reduction of a class of
neutrally stable systems with applications to texture synthesis and recognition, Proc. 2004
IEEE CDC, to appear, Dec. 2004.
[37] Wei-Ping Zhu, Omair Ahmad. Realization of 2-D linear-phase FIR filters by using the
singular value decomposition, IEEE Transactions on Signal Processing, vol. 45, pp. 1349-1358, 1999

[38] Robert P. Roesser, A discrete state-space model for linear image processing, IEEE Transactions on Automatic Control, vol AC-20, no. 1, February 1975.
[39] Takao Hinamoto, Frederick W. Fairman, Separable-denominator state-space realization of
two-dimensional filters using a canonic form, IEEE Transactions on Acoustics, Speech,
and Signal Processing. vol ASSP-29, no. 4, Aug. 1981.
[40] B. L. Ho, R. E. Kalman, Effective construction of linear state variable models from inputoutput functions, Proc. 3rd Annu. Alberton Conf. Circuits and Syst. Theory, 1965, pp.
449-459.
[41] A. J. Tether, Construction of minimal linear state-variable models from finite input-output
data, IEEE Trans. Automat. Contr., vol. AC-15, pp.427-436, Aug. 1970
[42] P. Van Overschee and B. De Moor, Subspace algorithms for the stochastic identification
problem, Automatica, vol. 29, no. 3, pp. 649-660, May 1993
[43] J. Chen and G. Gu, Control Oriented System Identification: An H∞ Approach. New York:
John Wiley, 2000
[44] H. C. Lin, L. L. Wang, and S. N. Yang, Extracting periodicity of a regular texture based on autocorrelation functions, Pattern Recognition Letters, vol. 18, pp. 433-443, 1997
[45] S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in Systems
and Control Theory. Philadelphia: SIAM Studies in Applied Mathematics, 1994
[46] M. Sznaier, A Hankel operator approach to texture modelling and inpainting.

