
250 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 62, NO. 1, JANUARY 2017

LaSalle-Type Theorem and Its Applications to Infinite Horizon Optimal Control of Discrete-Time Nonlinear Stochastic Systems

Weihai Zhang, Senior Member, IEEE, Xiangyun Lin, and Bor-Sen Chen, Life Fellow, IEEE

Abstract—Based on discrete martingale theory, the LaSalle-type theorem for general discrete-time stochastic systems is obtained, and almost sure stability is in turn discussed. As applications, the infinite horizon nonlinear optimal regulator is investigated, and a dynamic programming equation called the Hamilton-Jacobi-Bellman equation is also derived for discrete-time nonlinear stochastic optimal control.

Index Terms—Hamilton-Jacobi-Bellman equations, LaSalle's theorem, Lyapunov function, martingale, optimal stabilizing control.

Manuscript received August 5, 2015; revised December 15, 2015 and December 18, 2015; accepted March 14, 2016. Date of publication April 22, 2016; date of current version December 26, 2016. This work was supported in part by the NSF of China under Grant 61573227, Grant 11271009, Grant 61170054, and Grant 61633014; by the Research Fund for the Taishan Scholar Project of Shandong Province of China; by the SDUST (Shandong University of Science and Technology, China) Research Fund under Grant 2015TDJH105; and by the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (North China Electric Power University, China) under Grant LAPS16011. Recommended by Associate Editor Q.-S. Jia.
W. Zhang is with the College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China (e-mail: w_hzhang@163.com).
X. Lin is with the College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China (e-mail: lxy9393@sina.com).
B.-S. Chen is with the Department of Electrical Engineering, National Tsing Hua University, Hsin Chu 30013, Taiwan (e-mail: bschen@ee.nthu.edu.tw).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TAC.2016.2558044
0018-9286 © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

I. INTRODUCTION

IN 1892, Lyapunov established the well-known Lyapunov second method for the stability of ordinary differential equations (ODEs) [1]. Since then, Lyapunov's second method has been greatly developed and applied in various fields, with many fruitful results having appeared. On the other hand, with the development of the theory of stochastic differential equations (SDEs), Lyapunov's method has also been extended to deal with stochastic stability [2]-[5]. Among all results of Lyapunov stability, LaSalle's theorem [6] is the most important milestone, which locates the limit sets of non-autonomous systems described by ODEs. In 1999, Mao [3] first generalized the classical LaSalle theorem to stochastic Itô systems, and Mao's results were then extended to stochastic functional differential equations [7]-[9]. For the LaSalle theorem of deterministic discrete-time systems, we refer the reader to Hurt [10] and LaSalle [11]. For discrete-time stochastic or uncertain systems, Taniguchi [12] obtained some stochastic stability theorems based on a comparison theorem for difference inequalities. Costa and Fragoso [13] discussed Lyapunov stability for systems with Markovian jumping parameters. Oliveira et al. [14] presented a stability condition for uncertain discrete-time systems with convex polytopic uncertainty, based on a parameter-dependent Lyapunov function. In recent years, Kelly et al. [15] derived a condition guaranteeing almost sure instability of the equilibrium of a class of stochastic difference equations, based on the convergence of nonnegative martingale sequences. However, as far as we know, the stochastic version of LaSalle's theorem for discrete-time stochastic systems with multiplicative noise has not appeared up to now, which motivates this study.

In this paper, we first establish the LaSalle-type stability theorem for the following discrete-time stochastic system:

    x_{k+1} = F_k(x_k, ξ_k),  x_0 ∈ R^n    (1)

where F_k : R^n × R^d → R^n is a measurable function and {ξ_k}_{k∈N} is a sequence of independent R^d-valued random variables defined on a given complete probability space (Ω, F, P). In order to obtain our main results, we introduce a new definition for the Lyapunov function sequence {V_k(x)}, which satisfies an easily tested inequality involving the expectation of {ξ_k} but not depending on the solution {x_k} of (1) [see Theorem 3.1 and (5)]. It turns out that {V_k(x_k), F_k} is a nonnegative super-martingale. By the martingale convergence theorem, we prove the stochastic version of LaSalle's theorem. In addition, in our study, we find that the infinite horizon optimal control problem for general stochastic discrete-time systems must turn to the LaSalle-type theorem to prove asymptotic stability.

As applications of the given LaSalle-type theorem, the nonlinear optimal regulator problem is studied: under the constraint of

    x_{k+1} = F_k(x_k, u_k, ξ_k)    (2)

minimize the cost functional given by

    min_{u∈U} J(u, x_0) := E Σ_{k=0}^∞ l_k(x_k, u_k)    (3)
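Problem (2), (3) involves an infinite sum inside an expectation. In practice, for a fixed feedback law, J(u, x_0) is often gauged by truncating the horizon and averaging sample paths. The sketch below is purely illustrative: the dynamics, the stage cost, the feedback gain, and all constants are hypothetical choices, not taken from this paper.

```python
import random

# Hypothetical scalar instance of (2)-(3): F_k(x, u, xi) = 0.5*x + u + 0.1*xi,
# stage cost l_k(x, u) = x**2 + u**2, and the fixed feedback u_k = -0.3*x_k.
# J(u, x0) is approximated by truncating the infinite sum at K steps and
# averaging over Monte Carlo sample paths (all constants are illustrative).
def estimate_cost(x0, K=200, paths=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        x, cost = x0, 0.0
        for _ in range(K):
            u = -0.3 * x                                 # feedback control u_k(x_k)
            cost += x * x + u * u                        # accumulate l_k(x_k, u_k)
            x = 0.5 * x + u + 0.1 * rng.gauss(0.0, 1.0)  # next state x_{k+1}
        total += cost
    return total / paths

print(estimate_cost(1.0))
```

Because the closed loop here is contractive and the truncated tail is small, the estimate stabilizes quickly as K and the number of paths grow.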
where F_k : R^n × U × R^d → R^n are measurable functions, u = {u_k} is the control with u_k taking values in U ⊂ R^{n_u}, x^u = {x_k^u} is the corresponding solution, x_0 ∈ R^n is the initial value, and U is the set of admissible controls.

Up to now, there are few results on the infinite horizon general nonlinear stochastic optimal control problem (2), (3). For the discrete-time stochastic system

    x_{k+1} = f(x_k) + g(x_k)u_k + h(x_k)ξ_k

Elvira-Ceja and Sanchez [16] presented some results on inverse optimal stabilizing control, and an integrated optimal control algorithm for finite horizon optimal control was given by Kek et al. [17]. Hernandez-Gonzalez and Basin [18] gave a method to obtain the solution of the optimal control problem for stochastic polynomial systems over linear observations and a quadratic criterion. Our objective is to find a control ū = {ū_k, k ∈ N} which guarantees that system (2) is almost surely asymptotically stable and which minimizes J(u, x_0) simultaneously, i.e.,

    J(ū, x_0) = min_{u∈U} J(u, x_0).

For convenience, we adopt the following notation: R: the set of all real numbers; R^n: the set of all real n-dimensional vectors; N: the set of all nonnegative integers; 1_{|x|≤N}: the indicator function of the set {x ∈ R^n : |x| ≤ N} (N a given positive integer), defined as 1_{|x|≤N} = 1 when |x| ≤ N and 1_{|x|≤N} = 0 when |x| > N; I_d: the d × d identity matrix; A^T or x^T: the transpose of a matrix A or vector x; n_u: the dimension of the vector u; S^n: the set of all real symmetric n × n matrices; S^n_+: the set of all positive definite symmetric matrices; δ_ij: the Kronecker delta, i.e., δ_ij = 1 when i = j and δ_ij = 0 when i ≠ j; λ_max(P) = max_{1≤i≤n}{λ_i} and λ_min(P) = min_{1≤i≤n}{λ_i}, where {λ_i, 1 ≤ i ≤ n} are the eigenvalues of P ∈ S^n; ||u||²_A := u^T A u, where A is a positive definite symmetric matrix; L²(Ω, F_k; R^n): the set of all F_k-measurable R^n-valued random variables X with E|X|² < ∞.

II. PRELIMINARIES

Let (Ω, F, P) be a complete probability space and let {ξ_k}_{k∈N} be independent R^d-valued random variables. Denote by F_k the σ-field generated by ξ_0, ξ_1, ..., ξ_{k−1}, i.e.,

    F_k = σ{ξ_0, ξ_1, ..., ξ_{k−1}},  k ∈ N

and F_0 = {∅, Ω} (∅ is the empty set, Ω is the sample space). Obviously, F_{k−1} ⊂ F_k. Set F = {F_k}_{k=0}^∞. In the following discussion, without loss of generality, we suppose F = F_∞ = σ(∪_{k=0}^∞ F_k). From the definition of system (1), it is easy to see that the solution x_k is F_k-adapted.

Now, we first review some results on conditional expectation and martingale theory. The following lemma is a special case of [19, Th. 6.4].

Lemma 2.1: If the R^d-valued random variable η is independent of the σ-field G ⊂ F, and the R^n-valued random variable ζ is G-measurable, then, for every bounded function f : R^n × R^d → R,

    E[f(ζ, η)|G] = E[f(x, η)]_{x=ζ}  a.s.

The following lemma is the convergence theorem for discrete martingales (see [20, Th. 2.2]).

Lemma 2.2: If {X_k, F_k}_{k∈N} is a super-martingale such that

    sup_k E|X_k| < ∞

then {X_k}_{k∈N} converges almost surely to a limit X_∞ with E|X_∞| < ∞.

The following is the well-known Doob decomposition theorem (see [19, Lemma 7.10]).

Lemma 2.3: Suppose {Y_k, F_k}_{k∈N} is a super-martingale. Then there exist an increasing predictable sequence {A_k}_{k∈N} with A_0 = 0 and a martingale sequence {M_k}_{k∈N} such that

    Y_k = M_k − A_k  a.s.    (4)

Moreover, if {Y_k, F_k}_{k∈N} is a nonnegative super-martingale, then A_k converges to A_∞ as k → ∞ and A_∞ is integrable.

III. LASALLE-TYPE THEOREM

In this section we consider the stochastic version of LaSalle's theorem for the discrete-time system (1). To this end, we first introduce the following definition.

Definition 3.1: A sequence of positive measurable functions V_k : R^n → R_+, k ≥ 0, is called a Lyapunov function sequence if there exist a deterministic real-valued sequence {γ_k ≥ 0, k ∈ N} and a nonnegative function W : R^n → R_+ such that

    E[V_{k+1}(F_k(x, ξ_k))] − V_k(x) ≤ γ_k − W(x),  ∀x ∈ R^n, k ∈ N.    (5)

Remark 3.1: For system (1), we denote the left-hand side of (5) by

    ΔV_k(x) := E[V_{k+1}(F_k(x, ξ_k))] − V_k(x),  x ∈ R^n, k ∈ N.

Then (5) can be written as

    ΔV_k(x) ≤ γ_k − W(x),  ∀x ∈ R^n, k ∈ N.

Similarly, for system (2) with control u, we denote

    Δ^u V_k(x) := E[V_{k+1}(F_k(x, u, ξ_k))] − V_k(x),  x ∈ R^n, u ∈ R^{n_u}, k ∈ N.

For a state-feedback control ū = {ū_k(x)}, if there exist V_k : R^n → R_+, {γ_k ≥ 0, k ∈ N} and W : R^n → R_+ such that

    Δ^ū V_k(x) := E[V_{k+1}(F_k(x, ū_k(x), ξ_k))] − V_k(x) ≤ γ_k − W(x),  ∀x ∈ R^n, k ∈ N

we call the function sequence {V_k}_{k∈N} the Lyapunov functions for system (2) with control ū; this will be used in Section IV.

In the following discussion, we suppose all the functions are measurable, and random variables such as V_k(x_k), W(x_k) are members of L²(Ω, F_k; R). It is easy to see that the solutions {x_k, k ∈ N} of system (1) are {F_k, k ∈ N}-adapted, i.e., x_k is F_k-measurable for every k ∈ N. We first give the following lemma on the convergence property of super-martingales.
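The defining inequality (5) can be checked numerically for simple systems before it is put to work below. The following sketch uses a hypothetical scalar system x_{k+1} = a·x_k + (σ/(k+1))·ξ_k with standard normal noise and V_k(x) = x²; the values of a and σ, and the choices W(x) = (1 − a²)x² and γ_k = σ²/(k+1)² (a summable sequence), are illustrative assumptions for which (5) in fact holds with equality.

```python
import random

# Definition 3.1 asks for E[V_{k+1}(F_k(x, xi_k))] - V_k(x) <= gamma_k - W(x).
# Hypothetical instance: F_k(x, xi) = a*x + (sigma/(k+1))*xi, xi ~ N(0, 1),
# V_k(x) = x**2.  Then the left-hand side equals (a**2 - 1)*x**2 + sigma**2/(k+1)**2,
# so W(x) = (1 - a**2)*x**2 and gamma_k = sigma**2/(k+1)**2 (summable) work.
a, sigma = 0.8, 0.5
V = lambda x: x * x
W = lambda x: (1 - a * a) * x * x
gamma = lambda k: (sigma / (k + 1)) ** 2

def delta_V(x, k, samples=100000, seed=1):
    """Monte Carlo estimate of E[V(F_k(x, xi))] - V(x)."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(samples):
        xi = rng.gauss(0.0, 1.0)
        s += V(a * x + (sigma / (k + 1)) * xi)
    return s / samples - V(x)

for x in (-2.0, 0.5, 3.0):
    for k in (0, 1, 5):
        # inequality (5), up to Monte Carlo error
        assert delta_V(x, k) <= gamma(k) - W(x) + 0.05
print("inequality (5) holds on the tested grid")
```

Since the bound holds with equality for this instance, the slack 0.05 only absorbs sampling noise.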
Lemma 3.1: Suppose {Y_k}_{k∈N} is a nonnegative super-martingale. Then

    Y_k − E[Y_{k+1}|F_k] → 0  a.s.  (as k → ∞).    (6)

Proof: By the Doob decomposition theorem (Lemma 2.3), we know that Y_k can be written as

    Y_k = M_k − A_k

where M_k is a martingale sequence and A_k is an increasing predictable sequence with A_0 = 0. So

    0 ≤ Y_k − E[Y_{k+1}|F_k] = E[A_{k+1}|F_k] − A_k ≤ E[A_∞|F_k] − A_k.    (7)

Since

    lim_{k→∞} E[A_∞|F_k] = E[A_∞|F_∞] = A_∞  a.s.

letting k → ∞ on both sides of (7), we obtain (6). □

The following lemma shows the convergence of V_k(x_k) and W(x_k).

Lemma 3.2: Suppose V_k : R^n → R_+, k ≥ 0, are Lyapunov functions satisfying (5) and Σ_{k=0}^∞ γ_k < ∞. Let {x_k}_{k∈N} be the solution of (1). Then

    lim_{k→∞} E[V_k(x_k)] exists and is finite
    lim_{k→∞} E[W(x_k)] = 0.

Proof: For every positive integer N, denote V_k^{(N)}(x) = V_k(x) 1_{|x|≤N}; then V_k^{(N)} is bounded. For each x_0 ∈ R^n and k ∈ N, we have

    E[V_{k+1}^{(N)}(x_{k+1})|F_k] = E[V_{k+1}^{(N)}(F_k(x_k, ξ_k))|F_k].

Since x_k is F_k-measurable and ξ_k is independent of F_k, by Lemma 2.1, we have

    E[V_{k+1}^{(N)}(x_{k+1})|F_k] = E[V_{k+1}^{(N)}(F_k(x, ξ_k))]_{x=x_k}.

Letting N → ∞, we have

    E[V_{k+1}(x_{k+1})|F_k] = E[V_{k+1}(F_k(x, ξ_k))]_{x=x_k}.

By (5), we have

    E[V_{k+1}(F_k(x, ξ_k))]_{x=x_k} ≤ [V_k(x) + γ_k − W(x)]_{x=x_k} = V_k(x_k) + γ_k − W(x_k).

We obtain

    E[V_{k+1}(x_{k+1})|F_k] ≤ V_k(x_k) + γ_k − W(x_k).    (8)

Taking expectations on both sides of (8), we obtain

    E[V_{k+1}(x_{k+1})] ≤ E[V_k(x_k)] + γ_k − E[W(x_k)].    (9)

With W(x) ≥ 0 and accordingly E[W(x_k)] ≥ 0 in mind, we have

    E[V_{k+1}(x_{k+1})] ≤ E[V_k(x_k)] + γ_k.    (10)

By iteration, we have

    E[V_k(x_k)] ≤ V_0(x_0) + Σ_{i=0}^{k−1} γ_i.    (11)

Since Σ_{i=0}^∞ γ_i is convergent, we obtain

    sup_k E[V_k(x_k)] < ∞.    (12)

Denote

    η_k = E[V_k(x_k)] + Σ_{i=k}^∞ γ_i.

By (10), we can obtain

    η_{k+1} ≤ η_k    (13)

i.e., {η_k}_{k∈N} is a positive and decreasing sequence, so lim_{k→∞} η_k exists. Note that

    E[V_k(x_k)] = η_k − Σ_{i=k}^∞ γ_i

so E[V_k(x_k)] is convergent. By iterating the inequality (9), it follows that

    E[V_{k+1}(x_{k+1})] + Σ_{i=0}^k E[W(x_i)] ≤ E[V_0(x_0)] + Σ_{i=0}^k γ_i.

From the above discussion, we see that Σ_{k=0}^∞ E[W(x_k)] is convergent, which implies

    lim_{k→∞} E[W(x_k)] = 0

and the proof is hence completed. □

Remark 3.2: From the proof of Lemma 3.2, using similar techniques, we can see that, for every fixed integer i,

    lim_{k→∞} E[V_k(x_k)|F_i] exists and is finite
    lim_{k→∞} E[W(x_k)|F_i] = 0  a.s.

The following theorem is called the LaSalle-type theorem for the discrete-time stochastic system (1).

Theorem 3.1: Suppose V_k : R^n → R_+, k ≥ 0, are Lyapunov functions satisfying (5), Σ_{k=0}^∞ γ_k < ∞ and

    lim inf_{|x|→∞} inf_{k∈N} V_k(x) = ∞.    (14)

If {x_k}_{k∈N} is the solution of (1), then

    lim_{k→∞} V_k(x_k) exists and is finite almost surely
    lim_{k→∞} W(x_k) = 0  a.s.

Proof: Set

    Y_k := V_k(x_k) + Σ_{i=k}^∞ γ_i

then from (8) we have

    E[Y_{k+1}|F_k] ≤ Y_k − W(x_k).    (15)
From the above inequality, it is easy to show that Y_k is a nonnegative super-martingale. By Lemma 3.2, we know that

    sup_{k∈N} E|Y_k| < ∞.    (16)

By Lemma 2.2, Y_k converges to Y_∞ almost surely as k → ∞, and E|Y_∞| < ∞. Since Σ_{k=0}^∞ γ_k is convergent, V_k(x_k) is also convergent and the limit is finite almost surely.

As for W(x_k) → 0 almost surely, it can be shown by the inequality

    0 ≤ W(x_k) ≤ Y_k − E[Y_{k+1}|F_k]

and Lemma 3.1. This ends the proof. □

Denote

    G := {x : W(x) = 0} ⊂ R^n    (17)

and assume

    lim inf_{|x|→∞} W(x) = ∞.    (18)

In this case, x_∞ is the limit point of {x_k}_{k∈N} and

    P(x_∞ ∈ G) = 1.    (19)

Furthermore, if we consider another case

    lim inf_{|x|→∞} W(x) = 0    (20)

and, associated with this, there exists an increasing subsequence {k_i} ⊂ N such that

    P(ω : lim_{i→∞} |x_{k_i}(ω)| = ∞) > 0

then (19) should be replaced uniformly by

    P(x_∞ ∈ G_∞) = 1    (21)

where

    G_∞ = G ∪ {∞}.    (22)

In other words, the set G (or G_∞) gives the possible values that the limit points of x_k may take. So, in order to describe the limit point x_∞ more precisely, we hope the set G is as small as possible, which depends on the selection of the functions V_k and W. In particular, if G includes only one point, this unique point is just the limit of x_k.

Proposition 3.1: If there exist functions V_k : R^n → R_+, k ≥ 0, and a real number c ∈ (0, 1) satisfying (14) and

    E[V_{k+1}(F_k(x, ξ_k))] ≤ cV_k(x),  ∀x ∈ R^n, k ∈ N    (23)

and {x_k}_{k∈N} is the solution of (1), then

    lim_{k→∞} V_k(x_k) = 0  a.s.    (24)

Proof: Since the inequality (23) can be written as

    E[V_{k+1}(F_k(x, ξ_k))] − V_k(x) ≤ −(1 − c)V_k(x)    (25)

similarly to the proof of Theorem 3.1, it is easy to show that V_k(x_k) → 0 as k → ∞. □

Remark 3.3: Obviously, if we can choose a function V(x) such that

    E[V(F_k(x, ξ_k))] − V(x) ≤ γ_k − W(x),  ∀x ∈ R^n, k ∈ N    (26)

or

    E[V(F_k(x, ξ_k))] ≤ cV(x),  ∀x ∈ R^n, k ∈ N    (27)

where 0 < c < 1 and

    lim inf_{|x|→∞} V(x) = ∞    (28)

then, for V_k(x) ≡ V(x), (26)-(28) reduce to (5), (23), and (14), respectively. So Theorem 3.1 and Proposition 3.1 still hold. Furthermore, if F_k(x, y) has an equilibrium point, for convenience, we assume

    F_k(0, y) ≡ 0,  ∀y ∈ R^d, k ≥ 0

and if W(x) in Theorem 3.1 and V_k(x) = V(x) in Proposition 3.1 are also continuous, positive definite (V(0) = 0, V(x) > 0 for x ≠ 0), and proper (i.e., for each ε > 0, V^{−1}([0, ε]) is compact), we are in a position to obtain the following corollary.

Corollary 3.1: If V satisfies (27), (28), V(x) is a positive definite and proper function, and {x_k}_{k∈N} is the solution of (1), then

    lim_{k→∞} x_k = 0  a.s.    (29)

Proof: By Proposition 3.1, we have V(x_k) → 0 almost surely as k → ∞. So there exists an F-measurable set N with probability 0 such that, for any ε > 0 and ω ∈ N^c, there exists a positive integer K(ω) such that, when k ≥ K(ω), there always holds

    V(x_k(ω)) ∈ [0, ε].

So

    x_k(ω) ∈ V^{−1}([0, ε]) ⊂ R^n.

Since V^{−1}([0, ε]) is compact, {x_k(ω)} has limit points. Suppose x̄ is a limit point of {x_k(ω)}, i.e., there exists a subsequence {x_{k_i}(ω)} such that

    lim_{i→∞} x_{k_i}(ω) = x̄.

By the continuity of V, we have

    0 = lim_{i→∞} V(x_{k_i}(ω)) = V(x̄).

Since V is positive definite, we obtain x̄ = 0. This proves that 0 ∈ R^n is the unique limit point of x_k(ω), so we have x_k → 0 almost surely. □
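As a numerical companion to Proposition 3.1 and Corollary 3.1, the sketch below simulates a hypothetical system x_{k+1} = x_k·ξ_k with ξ_k uniform on (−1, 1). For V(x) = |x| (positive definite and proper), E[V(x·ξ)] = 0.5·|x|, so (27) holds with c = 0.5 and every path should collapse to the equilibrium; the system and all constants are illustrative assumptions.

```python
import random

# Proposition 3.1 / Corollary 3.1 sketch: hypothetical system x_{k+1} = x_k * xi_k
# with xi_k ~ Uniform(-1, 1).  For V(x) = |x| (positive definite and proper),
# E[V(x * xi)] = 0.5 * |x|, i.e. (27) holds with c = 0.5, so x_k -> 0 a.s.
def simulate(x0, steps=100, seed=2):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x *= rng.uniform(-1.0, 1.0)   # one step of the contraction dynamics
    return x

final = [abs(simulate(5.0, seed=s)) for s in range(50)]  # 50 independent paths
assert max(final) < 1e-6   # every path has contracted essentially to 0
print("all 50 paths converged to 0")
```

Note that the contraction here is pathwise extremely fast because log|ξ_k| has negative mean; the almost sure statement of the corollary is what the 50-path check illustrates.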
Remark 3.4: In particular, for the deterministic difference system

    x_{k+1} = f_k(x_k)    (30)

(5) reduces to

    ΔV_k(x) := V_{k+1}(f_k(x)) − V_k(x) ≤ γ_k − W(x).    (31)

If V_k satisfies (14), and W satisfies (18) and is also positive definite and proper, then the solution of the difference equation (30) converges to zero.

In the following, we give the definition of almost sure stability of system (1), which is expected to be useful in nonlinear H∞ control and regulator problems. Another concept, called asymptotic mean square stability, is often used in linear stochastic control; see [25], [26].

Definition 3.2: The system (1) is said to be almost surely asymptotically stable if the solutions {x_k} of (1) satisfy

    lim_{k→∞} x_k = 0  a.s.

Remark 3.5: If system (1) satisfies the conditions of Corollary 3.1, the corresponding solutions {x_k} are almost surely asymptotically stable.

Now, we consider the following autonomous stochastic system:

    x_{k+1} = F(x_k, ξ_k),  x_0 ∈ R^n    (32)

where F : R^n × R^d → R^n is a measurable function, and ξ_0, ξ_1, ..., ξ_k, ... are independent. We choose a Lyapunov function V : R^n → R_+ which satisfies

    E[V(F(x, ξ_k))] − V(x) ≤ γ_k − W(x),  k ∈ N.    (33)

Generally speaking, we also assume that

    lim inf_{|x|→∞} V(x) = ∞.    (34)

Theorem 3.2: Suppose V : R^n → R_+ is a Lyapunov function satisfying (33) and {x_k}_{k∈N} is the solution of (32). Then lim_{k→∞} V(x_k) exists and is finite almost surely, and

    lim_{k→∞} W(x_k) = 0  a.s.

If {ξ_k}_{k∈N} are independent and identically distributed, then

    E[V(F(x, ξ_k))] = E[V(F(x, ξ_0))],  ∀k ∈ N

and the left-hand side of (33) can be replaced by

    ΔV(x) := E[V(F(x, ξ_0))] − V(x).    (35)

The following corollary shows that our result includes that of [11].

Corollary 3.2: If {ξ_k} are independent and identically distributed, V : R^n → R_+ is a Lyapunov function satisfying

    ΔV(x) ≤ 0    (36)

and {x_k}_{k∈N} is the solution of (32), then lim_{k→∞} V(x_k) exists and is finite almost surely. Moreover, if ΔV(x) is a continuous function and the random variable ϑ is a limit point of {x_k}_{k∈N}, then

    P{ϑ ∈ G_Δ} = 1

where

    G_Δ = {x : ΔV(x) = 0}.

Proof: The convergence of {V(x_k)} can be obtained directly from Theorem 3.2 with γ_k = 0 and W(x) = −ΔV(x) ≥ 0. Moreover, we also have

    lim_{k→∞} ΔV(x_k) = −lim_{k→∞} W(x_k) = 0  a.s.

By the continuity of ΔV(x), we have

    ΔV(ϑ) = 0  a.s.

which implies ϑ ∈ G_Δ. This ends the proof. □

Remark 3.6: The continuity of ΔV(x) is necessary in Corollary 3.2 because, in general, the continuity of V(x) and F(x, ξ_0) (w.r.t. x) does not necessarily imply that E[V(F(x, ξ_0))] is also continuous. For example, let ξ_0 be a geometrically distributed random variable with probability

    P{ξ_0 = m} = 1/2^m,  m = 1, 2, ...

and let F(x, ξ_0) be defined as

    F(x, m) = 2^m/(1 + |x|)^{m−1} − 2^m/(1 + |x|)^m

when ξ_0 = m, m = 1, 2, .... Let V(x) = |x|; then

    E[V(F(x, ξ_0))] = 1 for x ≠ 0,  and 0 for x = 0.

Obviously, E[V(F(x, ξ_0))] is not continuous at x = 0.

Remark 3.7: Corollary 3.2 contains the classical LaSalle theorem as a special case. This is because, if F is a deterministic function (see [11]), then ΔV(x) given by (35) is equivalent to

    ΔV(x) = V(F(x)) − V(x).

IV. INFINITE HORIZON NONLINEAR OPTIMAL CONTROL

In this section, we consider the infinite horizon nonlinear optimal control problem (2), (3). Denote by U the set of all admissible controls that ensure the cost functional J(u, x_0) takes finite values, i.e.,

    U = { u = {u_k}_{k∈N} : u_k is an F_k-measurable random variable taking values in U ⊂ R^{n_u} such that −∞ < J(u, x_0) < +∞, ∀x_0 ∈ R^n }.
Now, we consider state-feedback controls and assume that the admissible control u = {u_k} in system (2) is a function of x_k, i.e., u_k has the form u_k = u_k(x_k). Denote by Ū the set of all admissible state-feedback controls

    Ū = { u = {u_k(x_k)}_{k∈N} : F_k(0, u_k(0), ξ_k) ≡ 0 and {u_k(x_k)}_{k∈N} ∈ U }

and x^u = {x_k^u}_{k∈N} is the solution of system (2) under the admissible control u = {u_k} ∈ Ū.

Definition 4.1: The admissible control ū = {ū_k(x_k)}_{k∈N} ∈ Ū is called an optimal (state-feedback) stabilizing control if it satisfies the following two conditions:

1) Stabilization: The system (2) with the given control ū is almost surely asymptotically stable, i.e.,

    lim_{k→∞} x_k^ū = 0  a.s.

2) Optimization: It minimizes the cost functional J(u, x_0) given by (3), i.e.,

    J(ū, x_0) = min_{u∈U} J(u, x_0),  ∀x_0 ∈ R^n.

Our purpose is to find sufficient conditions for the existence of a state-feedback optimal stabilizing control. For simplicity, in the following discussion, we denote the cost functional with finite horizon starting at k by

    J_k^N(u, x_k) = Σ_{i=k}^N E[l_i(x_i, u_i)|F_k]

and, for the infinite horizon case,

    J_k(u, x_k) = Σ_{i=k}^∞ E[l_i(x_i, u_i)|F_k]

for which the admissible control u ∈ U satisfies −∞ < J_k(u, x_k) < +∞. Let V_k : R^n → R_+ be a sequence of positive functions. For every x ∈ R^n and u ∈ U ⊂ R^{n_u}, denote

    Δ^u V_k(x) = E[V_{k+1}(F_k(x, u, ξ_k))] − V_k(x).

Theorem 4.1: Suppose there exist a sequence of continuous positive functions V_k : R^n → R_+, k ∈ N, a continuous positive definite proper function W : R^n → R_+, ū_k(x) ∈ U, and c_1, c_2 > 0, satisfying

    sup_{k∈N} V_k(x) ≤ c_1 W(x),  inf_{k∈N} l_k(x, u) ≥ c_2 W(x)    (37)
    Δ^ū V_k(x) + l_k(x, ū_k(x)) ≤ 0,  ∀x ∈ R^n    (38)
    Δ^u V_k(x) + l_k(x, u) ≥ 0,  ∀u ∈ U ⊂ R^{n_u}, x ∈ R^n.    (39)

Then ū = {ū_k(x_k)}_{k∈N} is an optimal stabilizing control law. Moreover, we have

    V_k(x_k^ū) = J_k(ū, x_k^ū) = min_{u∈U} J_k(u, x_k),  k = 0, 1, 2, ....    (40)

In particular

    V_0(x_0) = min_{u∈U} J(u, x_0).    (41)

Proof: For every admissible state-feedback control u = {u_k(x)}, by the smoothing property of conditional expectation, we get

    E[V_{i+1}(x_{i+1}^u)|F_i] − V_i(x_i^u)
      = E[V_{i+1}(F_i(x_i^u, u_i, ξ_i)) − V_i(x_i^u)|F_i]
      = [Δ^u V_i(x) + l_i(x, u)]_{x=x_i^u, u=u_i} − l_i(x_i^u, u_i).    (42)

Setting u = {ū_k(x)} in (42) and considering (38), it follows that (ū_i = ū_i(x_i))

    l_i(x_i^ū, ū_i) ≤ V_i(x_i^ū) − E[V_{i+1}(x_{i+1}^ū)|F_i].    (43)

Taking conditional expectation and summing on both sides of (43) for i from k to N, we have

    Σ_{i=k}^N E[l_i(x_i^ū, ū_i)|F_k]
      ≤ Σ_{i=k}^N E[ V_i(x_i^ū) − E[V_{i+1}(x_{i+1}^ū)|F_i] | F_k ]
      = Σ_{i=k}^N ( E[V_i(x_i^ū)|F_k] − E[ E[V_{i+1}(x_{i+1}^ū)|F_i] | F_k ] )
      = Σ_{i=k}^N ( E[V_i(x_i^ū)|F_k] − E[V_{i+1}(x_{i+1}^ū)|F_k] )
      = V_k(x_k^ū) − E[V_{N+1}(x_{N+1}^ū)|F_k]

which implies

    J_k^N(ū, x_k^ū) ≤ V_k(x_k^ū) − E[V_{N+1}(x_{N+1}^ū)|F_k].    (44)

Combining the second inequality of (37) with (38), the following is derived:

    −∞ < Δ^ū V_k(x) ≤ −c_2 W(x).

By the LaSalle-type Theorem 3.1 and the positive definite proper condition on W, the solution sequence {x_k^ū}_{k∈N} converges almost surely. Furthermore, by Lemma 3.2 and Remark 3.2,

    lim_{N→∞} E[W(x_N^ū)|F_k] = 0.

In view of the first inequality of (37), it yields that

    lim_{N→∞} E[V_{N+1}(x_{N+1}^ū)|F_k] = 0  a.s.

Letting N → ∞ in (44) leads to

    J_k(ū, x_k^ū) < ∞,  J_k(ū, x_k^ū) ≤ V_k(x_k^ū).

Combined with the reverse inequality established next, this proves (40).

Below, we prove that ū is the optimal control. For each admissible control u ∈ U, using the same technique as in deriving (44) and taking the inequality (39) in mind, it is deduced that

    J_k^N(u, x_k^u) ≥ V_k(x_k^u) − E[V_{N+1}(x_{N+1}^u)|F_k].
If u is an admissible control such that J_k(u, x_k^u) < ∞, then

    lim_{N→∞} E[V_N(x_N^u)|F_k] = 0.

Letting N → ∞, we get

    J_k(u, x_k^u) ≥ V_k(x_k^u).

In particular, for k = 0 and x_0 ∈ R^n, we have

    J(u, x_0) ≥ V_0(x_0) = J(ū, x_0).

This proves that ū is the optimal stabilizing control of the optimization problem (2), (3). □

Remark 4.1: Combining (38) with (39), we obtain the following so-called Hamilton-Jacobi-Bellman (HJB) equation:

    Δ^ū V_k(x) + l_k(x, ū_k(x)) = min_{u∈U} [Δ^u V_k(x) + l_k(x, u)] = 0.    (45)

Theorem 4.1 may be viewed as a discrete dynamic programming principle, which asserts that, under condition (37), the optimal value function must satisfy the HJB equation (45). A maximum principle for discrete stochastic optimal control can be found in [21].

Remark 4.2: The HJB equation (45) gives the relationship between the optimal value functions in one period and in the next period, as in [22]-[24]. As far as we know, for the discrete-time nonlinear optimization problem (2), (3), Theorem 4.1 and the HJB equation (45) appear to be new.

In particular, consider the LQ problem with the time-varying state equation

    x_{k+1} = A_k x_k + B_k u_k + Σ_{j=1}^d (A_k^j x_k + B_k^j u_k) ξ_k^j,  x_0 ∈ R^n    (46)

and the cost functional

    J(u, x_0) = E Σ_{k=0}^∞ (x_k^T Q_k x_k + u_k^T R_k u_k)    (47)

where A_k, A_k^j ∈ R^{n×n}, B_k, B_k^j ∈ R^{n×n_u}, Q_k > 0, R_k > 0, E[ξ_k] = 0_{d×1} and E[ξ_k^i ξ_k^j] = δ_ij, i, j = 1, ..., d, k ∈ N. Denote

    H(P_k) := A_k^T P_{k+1} A_k + Σ_{j=1}^d (A_k^j)^T P_{k+1} A_k^j + Q_k − P_k
              − ( A_k^T P_{k+1} B_k + Σ_{j=1}^d (A_k^j)^T P_{k+1} B_k^j )
                × ( R_k + B_k^T P_{k+1} B_k + Σ_{j=1}^d (B_k^j)^T P_{k+1} B_k^j )^{−1}
                × ( B_k^T P_{k+1} A_k + Σ_{j=1}^d (B_k^j)^T P_{k+1} A_k^j )

then we have the following results.

Proposition 4.1: Suppose there exists a matrix sequence {P_k}_{k∈N} with P_k > 0 satisfying

    sup_{k∈N} λ_max(P_k) < ∞,  inf_{k∈N} λ_min(Q_k) > 0    (48)
    H(P_k) = 0    (49)

then

    ū_k = −( R_k + B_k^T P_{k+1} B_k + Σ_{j=1}^d (B_k^j)^T P_{k+1} B_k^j )^{−1} ( B_k^T P_{k+1} A_k + Σ_{j=1}^d (B_k^j)^T P_{k+1} A_k^j ) x_k    (50)

is the optimal stabilizing control for the LQ problem (46), (47).

Proof: Let

    V_k(x) = x^T P_k x,  l_k(x, u) = x^T Q_k x + u^T R_k u.

By the first condition of (48), we have

    V_k(x) ≤ c_1 |x|²

and by the second condition of (48), we have

    l_k(x, u) ≥ c_2 |x|²

where the positive constants c_1 = sup_{k∈N} λ_max(P_k) and c_2 = inf_{k∈N} λ_min(Q_k).

Moreover, similarly to the derivations of [25], applying the completing-squares method to Δ^u V_k(x) + l_k(x, u), we have

    Δ^u V_k(x) + l_k(x, u) = x^T H(P_k)x + ||u − ū_k(x)||²_{R̃_k}

where R̃_k = R_k + B_k^T P_{k+1} B_k + Σ_{j=1}^d (B_k^j)^T P_{k+1} B_k^j. Taking condition (49) in mind, we obtain

    Δ^u V_k(x) + l_k(x, u) = ||u − ū_k(x)||²_{R̃_k} ≥ 0,  ∀u ∈ R^{n_u}
    Δ^ū V_k(x) + l_k(x, ū_k(x)) = 0.

By Theorem 4.1, we can prove that ū given by (50) is the optimal stabilizing control for the LQ problem (46), (47). □

Furthermore, if ξ_k, k ∈ N, are independent and identically distributed random variables, and the functions F and l do not depend on k, then system (2) becomes the time-invariant control system

    x_{k+1} = F(x_k, u_k, ξ_k)    (51)

and the cost functional is

    J(u, x_0) = E Σ_{k=0}^∞ l(x_k^u, u_k).    (52)

Denote

    Δ^u V(x) = E[V(F(x, u, ξ_k))] − V(x).
By Theorem 4.1, we have the following.

Proposition 4.2: Suppose there exist a continuous positive function V : R^n → R_+ and ū = ū(x) satisfying

    Δ^ū V(x) + l(x, ū(x)) ≤ 0    (53)
    Δ^u V(x) + l(x, u) ≥ 0    (54)

where l(x, ū(x)) is a continuous positive function. Then ū(x) is the optimal stabilizing control for the system (51). Moreover, we also have

    V(x_0) = min_{u∈U} J(u, x_0).    (55)

Similarly, consider the time-invariant LQ problem with the system equation

    x_{k+1} = Ax_k + Bu_k + Σ_{j=1}^d (A^j x_k + B^j u_k) ξ_k^j,  x_0 ∈ R^n    (56)

and the cost functional

    J(u, x_0) = E Σ_{k=0}^∞ (x_k^T Q x_k + u_k^T R u_k)    (57)

where A, A^j ∈ R^{n×n}, B, B^j ∈ R^{n×n_u}, Q > 0, R > 0, E[ξ_k] = 0_{d×1} and E[ξ_k^i ξ_k^j] = δ_ij, i, j = 1, ..., d, k ∈ N. We have the following result.

Proposition 4.3: Suppose there exists a symmetric matrix P > 0 satisfying the following generalized algebraic Riccati equation (GARE):

    A^T P A + Σ_{j=1}^d (A^j)^T P A^j + Q − P
      − ( A^T P B + Σ_{j=1}^d (A^j)^T P B^j )( R + B^T P B + Σ_{j=1}^d (B^j)^T P B^j )^{−1}( B^T P A + Σ_{j=1}^d (B^j)^T P A^j ) = 0    (58)

then

    ū = −( R + B^T P B + Σ_{j=1}^d (B^j)^T P B^j )^{−1}( B^T P A + Σ_{j=1}^d (B^j)^T P A^j ) x

is the optimal feedback stabilizing control for the LQ problem (56), (57).

Remark 4.3: Zhang et al. [26] discussed the well-posedness of the indefinite LQ problem for system (56) and gave an LMI-based approach to solve the GARE (58).

Generally speaking, once the conditions of Theorem 4.1 are given, the main difficulty is to check or solve the HJB equation (45). The following theorem shows that, under some proper conditions, this problem can be solved technically as well. In order to keep our notation simple, we denote

    H_k(x, u) = Δ^u V_k(x) + l_k(x, u)

and call H_k(x, u) the Hamilton function of problem (2), (3). In the following discussion, we also suppose H_k is twice continuously differentiable with respect to u ∈ U = R^{n_u}.

Theorem 4.2: Suppose {V_k(x)} and {l_k(x, u)} satisfy the conditions of (37) and the R^{n_u}-valued function ψ_k = ψ_k(x) satisfies
1) H_k(x, ψ_k(x)) = 0, ∀x ∈ R^n;
2) (∂H_k/∂u)(x, u)|_{u=ψ_k(x)} = 0, ∀x ∈ R^n;
3) (∂²H_k/∂u²)(x, u) ≥ 0 for all x ∈ R^n, u ∈ R^{n_u}.

Then {ū_k = ψ_k(x)}_{k∈N} is the feedback stabilizing control sequence for the optimization problem (2), (3).

Proof: For any x ∈ R^n, taking the Taylor series expansion of H_k at u_k = ψ_k(x), we have (0 < θ < 1)

    H_k(x, u) = H_k(x, ψ_k(x)) + ⟨ (∂H_k/∂u)|_{u=ψ_k(x)}, u − ψ_k(x) ⟩
              + (1/2)[u − ψ_k(x)]^T (∂²H_k/∂u²)|_{u=ψ_k(x)+θ(u−ψ_k(x))} [u − ψ_k(x)].

Applying condition 3), we obtain

    H_k(x, u) ≥ H_k(x, ψ_k(x)) + ⟨ (∂H_k/∂u)|_{u=ψ_k(x)}, u − ψ_k(x) ⟩.

By conditions 1)-2), we have

    H_k(x, u) ≥ 0    (59)

for all u ∈ R^{n_u} and x ∈ R^n. By Theorem 4.1, this theorem is immediately obtained. □

V. SIMULATIONS AND EXAMPLES

In this section, we give some examples to demonstrate the effectiveness of the obtained results.

Example 5.1: Consider the following one-dimensional second-order linear difference equation with white noise:

    x_{k+1} = ax_k + bx_{k−1} + k^{−1} ξ_k,  x_0, x_1 ∈ R, k = 1, 2, ...    (60)

where {ξ_k} is an independent random variable sequence with E[ξ_k] = 0, E[ξ_k²] = 1, k = 1, 2, .... Introducing another variable y_{k+1} = x_k, (60) can be transformed into the 2-D first-order system

    x_{k+1} = ax_k + by_k + k^{−1} ξ_k
    y_{k+1} = x_k    (61)
    x_0, x_1 ∈ R, y_1 = x_0, k = 1, 2, ....
Fig. 1. The trajectory of x_k for (60) with a = b = 0.4.

If |a| + |b| < 1, we take the positive number c in (d₁², d₂²), with

    d₁² = max{ b², [1 + b² − a² − √((1 + b² − a²)² − 4b²)]/2 }
    d₂² = min{ 1 − a², [1 + b² − a² + √((1 + b² − a²)² − 4b²)]/2 }

and let V(x, y) = x² + cy²; then

    E[V(ax + by + (1/k)ξ_k, x)] − V(x, y) = (1/k²) − W(x, y),  ∀(x, y) ∈ R², k ∈ N

where W(x, y) = (1 − a² − c)x² − 2abxy + (c − b²)y² ≥ 0 and G = {(x, y) : W(x, y) = 0} = {(0, 0)}, i.e., W(x, y) is a positive definite function. Since Σ_{k=1}^∞ (1/k²) < ∞, by Theorem 3.1 we know that x_k → 0 almost surely as k → ∞; see Fig. 1 for the simulation.

If |a| + |b| ≥ 1, the solution of (60) is not necessarily convergent (see Fig. 2).

Fig. 2. Trajectory of x_k for system (60) with |a| + |b| ≥ 1. (a) a = b = 0.707; (b) a = 0, b = 1.

Remark 5.1: The following is the derivation of the constraint conditions |a| + |b| < 1 and c ∈ (d₁², d₂²) in Example 5.1. The quadratic form W(x, y) can be written as

    W(x, y) = (1 − a² − c)x² − 2abxy + (c − b²)y² = (x, y) M (x, y)^T,
    M = [ 1 − a² − c   −ab  ]
        [   −ab      c − b² ].

It is well known that W(x, y) is a positive definite quadratic form if and only if (iff)

    1 − a² − c > 0    (62)
    c − b² > 0    (63)
    det M > 0.    (64)

The inequality (64) yields

    c² − (b² − a² + 1)c + b² < 0.    (65)

A necessary and sufficient condition for the solvability of (65) is

    Δ := (b² − a² + 1)² − 4b² > 0.

By (62) and (63), 1 − a² > c > b² ≥ 0, which implies b² − a² + 1 > 0. Therefore, from Δ > 0, we have

    b² − a² + 1 > 2|b|

i.e.,

    (|b| − 1)² > |a|².    (66)

Again, by (62) and (63), it is easy to see |b| < 1. So, from (66), we have

    1 − |b| > |a|

i.e., |a| + |b| < 1 is a necessary condition for W(x, y) > 0.
Solving the inequality (65), we have


 $ 
1 + b2 a2 (1 + b2 a2 )2 4b2 /2 < c
 $ 
< 1 + b2 a2 + (1 + b2 a2 )2 4b2 /2. (67)

By (62) and (63), we obtain

b 2 < c < 1 a2 . (68)

Combining (67) and (68), c must satisfy

c (d2 , d2 )
$
where d2 = max{b2 , [1 + b2 a2 $ (1 + b2 a2 )2 4b2 ]/2},
Fig. 3. Trajectory of xk for system (69).
and d2 = min{1a2, [1+b2 a2 + (1+b2 a2 )2 4b2 ]/2}.
Example 5.2: Consider the following n-order stochastic difference system:

x_{k+1} = a_k x_k + b_k min_{1≤i≤n} {|x_{k−i+1}|},  x_0, x_1, ..., x_{n−1} ∈ R₊, k = n − 1, n, ...   (69)

which appeared in [27] for the deterministic case. However, in this paper, we suppose {a_k}, {b_k} are independent identically distributed random variable sequences with a_k, b_k ≥ 0 and a_k + b_k < 1, and set a = E[a_k], b = E[b_k] (so a + b < 1). We can transform (69) into an n-dimensional first-order stochastic system as follows:

y_{k+1}^{(1)} = a_k y_k^{(1)} + b_k min_{1≤i≤n} |y_k^{(i)}|
y_{k+1}^{(2)} = y_k^{(1)}
  ⋮
y_{k+1}^{(n)} = y_k^{(n−1)}
y_0^{(1)} = x_{n−1}, y_0^{(2)} = x_{n−2}, ..., y_0^{(n)} = x_0 ∈ R₊, k = 0, 1, ....   (70)

We choose a fixed positive number α ∈ (1/(1 − a), 1/b) and define V(y) = α|y^{(1)}| + Σ_{2≤i≤n} |y^{(i)}|, y = (y^{(1)}, y^{(2)}, ..., y^{(n)})ᵀ ∈ Rⁿ. Then

E[V(F_k(y))] − V(y) ≤ −W(y)

where

F_k(y) := (a_k y^{(1)} + b_k min_{1≤i≤n} |y^{(i)}|, y^{(1)}, ..., y^{(n−1)})ᵀ

and

W(y) = (α(1 − a) − 1)|y^{(1)}| + (1 − αb)|y^{(n)}|.

By Theorem 3.1, we know lim_{k→∞} V(y_k) exists and lim_{k→∞} W(y_k) = 0 almost surely. Because lim_{k→∞} W(y_k) = 0 almost surely implies lim_{k→∞} y_k^{(1)} = 0 almost surely, and by the definition of y_k^{(1)} we have y_k^{(1)} = x_{k+n−1}, 0 is the limit of {x_k} (see Fig. 3).

Remark 5.2: A detailed derivation of the constrained condition α ∈ (1/(1 − a), 1/b) in Example 5.2 is as follows. For V(y) = α|y^{(1)}| + Σ_{2≤i≤n} |y^{(i)}|, y = (y^{(1)}, y^{(2)}, ..., y^{(n)})ᵀ ∈ Rⁿ, we have

E[V(F_k(y))] − V(y)
  = E[ α|a_k y^{(1)} + b_k min_{1≤i≤n} |y^{(i)}|| ] + Σ_{1≤i≤n−1} |y^{(i)}| − α|y^{(1)}| − Σ_{2≤i≤n} |y^{(i)}|
  = E[ α|a_k y^{(1)} + b_k min_{1≤i≤n} |y^{(i)}|| ] + |y^{(1)}| − α|y^{(1)}| − |y^{(n)}|
  ≤ αa|y^{(1)}| + αb min_{1≤i≤n} |y^{(i)}| + |y^{(1)}| − α|y^{(1)}| − |y^{(n)}|
  = (αa + 1 − α)|y^{(1)}| + αb min_{1≤i≤n} |y^{(i)}| − |y^{(n)}|
  ≤ (αa + 1 − α)|y^{(1)}| + αb|y^{(n)}| − |y^{(n)}|.

If we let

W(y) = (α(1 − a) − 1)|y^{(1)}| + (1 − αb)|y^{(n)}|

then we have the following form:

E[V(F_k(y))] − V(y) ≤ −W(y).

So W(y) ≥ 0 for all y ∈ Rⁿ is needed. This implies the following two inequalities:

α(1 − a) − 1 > 0   (71)

1 − αb > 0.   (72)

Combining inequalities (71) and (72), we see that

α ∈ (1/(1 − a), 1/b).
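As a numerical illustration of Example 5.2 (not from the paper), the sketch below simulates (69) with n = 3 and the illustrative choice of i.i.d. coefficients a_k, b_k uniform on [0, 0.4], so that a_k + b_k < 1 pointwise and a = b = 0.2 with a + b < 1; the trajectory decays to 0, as the LaSalle-type argument predicts.

```python
import random

def simulate_min_system(n=3, steps=400, seed=1):
    """Simulate x_{k+1} = a_k*x_k + b_k*min(x_k, ..., x_{k-n+1}) from (69),
    with a_k, b_k i.i.d. Uniform(0, 0.4) (an illustrative choice, so that
    a_k + b_k < 1 pointwise and E[a_k] + E[b_k] = 0.4 < 1)."""
    rng = random.Random(seed)
    window = [1.0 + i for i in range(n)]   # positive initial data x_0, ..., x_{n-1}
    for _ in range(steps):
        a_k, b_k = rng.uniform(0.0, 0.4), rng.uniform(0.0, 0.4)
        x_next = a_k * window[-1] + b_k * min(window)
        window = window[1:] + [x_next]     # keep only the last n states
    return window[-1]

print(simulate_min_system())   # essentially 0: x_k -> 0 almost surely
```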

Example 5.3: Consider the following optimal control problem: under the constraint

x₁(k + 1) = (1/2)|x₁(k) + x₂(k)| + u₁(k) + u₂(k)ω₁(k)
x₂(k + 1) = (1/2)|x₁(k) − x₂(k)| + u₂(k) + ((1/2)|x₁(k)| + u₁(k))ω₂(k)   (73)

minimize the cost functional

J(x, u) = Σ_{k=0}^∞ [ |u₁(k)|² + |u₂(k)|² + (1/2)|x₁(k)|² + (2/3)|x₂(k)|² + (1/6)|x₁(k)||x₁(k) + x₂(k)| ]   (74)

where (ω₁(k), ω₂(k))ᵀ is an independent random vector-valued sequence with E[(ω₁(k), ω₂(k))ᵀ] = 0_{2×1} and E[ωᵢ(k)ωⱼ(k)] = δᵢⱼ, i, j = 1, 2, k = 0, 1, 2, .... Take V(x₁, x₂) = x₁² + x₂²; then, from (45), the Hamilton function simplifies to

H_k(x, u) = 3u₁² + 3u₂² − |x₁ + x₂|u₁ − |x₁|u₁ + |x₁ − x₂|u₂ + (1/4)x₁² + (1/6)x₂² + (1/6)|x₁||x₁ + x₂|.

It is easy to check that the control ū = (ū₁, ū₂)ᵀ with ū₁ = (1/6)(|x₁| + |x₁ + x₂|) and ū₂ = −(1/6)|x₁ − x₂| satisfies the conditions of Theorem 4.2, so ū is the optimal stabilizing control for (73), (74) (see Fig. 4).

Fig. 4. Trajectories of the optimal control u and state x for optimal control problems (73), (74).

VI. CONCLUSION

We have obtained a stochastic version of LaSalle's theorem for discrete-time stochastic systems and, based on it, Lyapunov-type stability criteria. The obtained LaSalle-type theorem is a powerful tool for studying the stability of stochastic discrete-time difference equations. As applications, we have applied it to the infinite horizon nonlinear optimal regulator for discrete-time stochastic control systems with multiplicative noise. Another potential application of the LaSalle-type theorem is nonlinear discrete-time stochastic H∞ control, as has been done in the continuous-time case [28].

REFERENCES

[1] A. M. Lyapunov, The General Problem of the Stability of Motion (in Russian, 1892), A. T. Fuller, Ed. New York, NY, USA: Taylor & Francis, 1992.
[2] R. Z. Hasminskii, Stochastic Stability of Differential Equations. Germantown, MD, USA: Sijthoff and Noordhoff, 1980.
[3] X. Mao, "Stochastic versions of the LaSalle theorem," J. Differential Eq., vol. 153, pp. 175–195, 1999.
[4] P. Cheng, F. Deng, and Y. Peng, "Global exponential stability of impulsive stochastic functional differential systems," Stat. Prob. Lett., vol. 80, pp. 1854–1862, 2010.
[5] P. Cheng and F. Deng, "Robust exponential stabilization and delayed-state-feedback stabilization of uncertain impulsive stochastic systems with time-varying delay," Commun. Nonlinear Sci. Numer. Simul., vol. 17, pp. 4740–4752, 2012.
[6] J. P. LaSalle, "Stability theory of ordinary differential equations," J. Differential Eq., vol. 4, pp. 57–65, 1968.
[7] Y. Shen, Q. Luo, and X. Mao, "The improved LaSalle-type theorems for stochastic functional differential equations," J. Math. Anal. Appl., vol. 318, pp. 134–154, 2006.
[8] F. Wu and S. Hu, "The LaSalle-type theorem for neutral stochastic functional differential equations with infinite delay," Discrete Cont. Dyn. Syst., vol. 32, pp. 1065–1094, 2012.
[9] X. Zhao, F. Deng, and X. Zhong, "LaSalle-type theorem for general nonlinear stochastic functional differential equations by multiple Lyapunov functions," Abstract Appl. Anal., vol. 2014, pp. 1–11, 2014.
[10] J. Hurt, "Some stability theorems for ordinary difference equations," SIAM J. Numer. Anal., vol. 4, pp. 582–596, 1967.
[11] J. P. LaSalle, The Stability and Control of Discrete Processes. New York, NY, USA: Springer-Verlag, 1986.
[12] T. Taniguchi, "Stability theorems of stochastic difference equations," J. Math. Anal. Appl., vol. 147, pp. 81–96, 1990.
[13] O. L. V. Costa and M. D. Fragoso, "Stability results for discrete-time linear systems with Markovian jumping parameters," J. Math. Anal. Appl., vol. 179, pp. 154–178, 1993.
[14] M. C. de Oliveira, J. Bernussou, and J. C. Geromel, "A new discrete-time robust stability condition," Syst. Control Lett., vol. 37, pp. 261–265, 1999.
[15] C. Kelly, P. Palmer, and A. Rodkina, "Almost sure instability of the equilibrium solution of a Milstein-type stochastic difference equation," Comput. Math. Appl., vol. 66, pp. 2220–2230, 2013.
[16] S. Elvira-Ceja and E. N. Sanchez, "Inverse optimal control for discrete-time stochastic nonlinear systems stabilization," in Proc. 2013 Amer. Control Conf., Washington, DC, USA, 2013, pp. 4689–4693.
[17] S. L. Kek, K. L. Teo, and A. A. M. Ismail, "An integrated optimal control algorithm for discrete-time nonlinear stochastic system," Int. J. Control, vol. 83, pp. 2536–2545, 2010.
[18] M. Hernandez-Gonzalez and M. V. Basin, "Discrete-time optimal control for stochastic nonlinear polynomial systems," Int. J. Gen. Syst., vol. 43, pp. 359–371, 2014.
[19] O. Kallenberg, Foundations of Modern Probability. New York, NY, USA: Springer-Verlag, 2002.
[20] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion. New York, NY, USA: Springer-Verlag, 1999.
[21] X. Lin and W. Zhang, "A maximum principle for optimal control of discrete-time stochastic systems with multiplicative noise," IEEE Trans. Autom. Control, vol. 60, no. 4, pp. 1121–1126, Apr. 2015.
[22] R. Bellman, "On the theory of dynamic programming," Proc. Nat. Acad. Sci., vol. 38, pp. 716–719, 1952.

[23] R. Bellman, "The theory of dynamic programming," in Proc. Summer Meet. Soc., Laramie, 1954, pp. 503–515.
[24] R. Bellman, T. T. Soong, and R. Vasudevan, "On the moment behavior of a class of stochastic difference equations," J. Math. Anal. Appl., vol. 40, pp. 286–299, 1972.
[25] Y. Huang, W. Zhang, and H. Zhang, "Infinite horizon linear quadratic optimal control for discrete-time stochastic systems," Asian J. Control, vol. 10, pp. 608–615, 2008.
[26] W. Zhang, Y. Li, and X. Liu, "Infinite horizon indefinite stochastic linear quadratic control for discrete-time systems," Control Theory Tech., vol. 13, no. 3, pp. 230–237, 2015.
[27] E. Liz, "Stability of non-autonomous difference equations: Simple ideas leading to useful results," J. Difference Eq. Appl., vol. 17, pp. 203–220, 2011.
[28] W. Zhang and B. S. Chen, "State feedback H∞ control for a class of nonlinear stochastic systems," SIAM J. Control Optim., vol. 44, pp. 1973–1991, 2006.

Weihai Zhang (SM'16) received the M.S. degree from Hangzhou University, Hangzhou, China, in 1994, and the Ph.D. degree from Zhejiang University, Hangzhou, China, in 1998.
He is currently a Professor with the Shandong University of Science and Technology, Qingdao, China, and a Taishan Scholar of Shandong Province. He is an Associate Editor of the Asian Journal of Control and an Associate Editor of the Conference Editorial Board of the IEEE Control Systems Society. He has published more than 100 journal papers. His representative paper in SIAM J. Control and Optimization has been selected as a featured fast-moving front paper by Thomson Reuters Science Watch, and as the most cited paper in the research area of mathematics by Essential Science Indicators from Thomson Reuters. His research interests include linear and nonlinear stochastic optimal control, robust H∞ control and estimation, and stochastic stability and stabilization.

Xiangyun Lin received the M.S. degree from the Shandong University of Science and Technology, Qingdao, China, in 2003, and the Ph.D. degree from Shandong University, Jinan, China, in 2013.
He is currently an Associate Professor at the Shandong University of Science and Technology. His main research interests include stochastic optimal control, stochastic analysis, and robust H∞ control.

Bor-Sen Chen (F'01–LF'14) received the B.S. degree from the Tatung Institute of Technology, Taipei, Taiwan, in 1970, the M.S. degree from National Central University, Chungli, Taiwan, in 1973, and the Ph.D. degree from the University of Southern California, Los Angeles, in 1982.
He was a Lecturer, Associate Professor, and Professor at the Tatung Institute of Technology from 1973 to 1987. He is currently the Tsing Hua University Professor of Electrical Engineering and Computer Science at the National Tsing Hua University, Hsinchu, Taiwan. His current research interests are in control engineering, signal processing, and systems biology.
Dr. Chen is a Research Fellow of the National Science Council of Taiwan and holds the excellent scholar Chair in engineering. He was a recipient of the Distinguished Research Award from the National Science Council of Taiwan four times. He was also a recipient of the Automatic Control Medal from the Automatic Control Society of Taiwan in 2001. He was an Associate Editor of the IEEE TRANSACTIONS ON FUZZY SYSTEMS from 2001 to 2006 and Editor of the Asian Journal of Control. He is a Member of the Editorial Advisory Board of Fuzzy Sets and Systems and the International Journal of Control, Automation and Systems. He was the Editor-in-Chief of the International Journal of Fuzzy Systems from 2005 to 2008.
