A. NOVIKOV
$$\tau_b = \inf\{t > 0\colon X_t > b\}, \qquad b > X_0.$$
Different approaches were used for studying this problem: integral equations (see, e.g., [1], [10], and [21]); martingale techniques ([7], [8], and [9]), etc. In this paper, we also apply the martingale technique, namely a special parametric family of martingales (Theorem 1). We find the Laplace transform of $\tau_b$ provided that the Lévy process $Y_t$ possesses negative jumps only (Theorem 2; under somewhat less general conditions this result is known from [7] and [8]; see also Remark 3).
When the process $Y_t$ has positive jumps, the Laplace transform of $\tau_b$, as well as its moments, is unknown, except for the cases of exponentially distributed positive jumps [10] or uniformly distributed ones [22].
Received by the editors January 23, 2003. This work was supported by ARC Large Grant
A0010474.
http://www.siam.org/journals/tvp/482/98040.html
Steklov Mathematical Institute RAN, Gubkin St. 8, 119991 Moscow, Russia, and Department
of Mathematical Sciences, University of Technology, P.O. Box 123, Broadway, Sydney, NSW 2007,
Australia (Alex.Novikov@uts.edu.au).
MARTINGALES AND FIRST-PASSAGE TIMES
In this paper, we prove that the distribution of $\tau_b$ is exponentially bounded for all $b$ under the assumption that the process $Y_t$ has a diffusion component or positive jumps (Theorem 3). This result is known from [9] under the additional assumption of finiteness of the mathematical expectation of $Y_t$. Moreover, with the help of the moment Wald identity (section 4), we derive a lower bound for the expectation $E(\tau_b)$.
Notice that the moment Wald identity can also be used for deriving asymptotic expansions of $E\tau_b$ as $b \to \infty$ (see [22]). In section 5, we use this moment identity for deriving the moment inequalities for $\sup_{t \le \tau}(X_t)$ even for an arbitrary stopping time $\tau$ provided that $Y_t$ obeys a one-sided stable distribution, i.e., in the absence of positive jumps (Theorem 5). The proof of this result uses techniques from [11], where the moment inequalities for $\sup_{t \le \tau}|X_t|$ are given for the Gaussian OU-process.
2. An exponential martingale family. We assume that the Lévy process $Y_t$ and all other random objects are defined on a probability space $(\Omega, F, P)$ supplied with a filtration satisfying the usual conditions (right-continuity, etc.).
Recall that any Lévy process admits the representation

$$Y_t = mt + \sigma W_t + Z_t, \quad (2)$$

where $m$ and $\sigma$ are constants, $W_t$ is a standard Brownian motion, and $Z_t$ is a discontinuous process with independent homogeneous increments and paths from the Skorokhod space.
A (unique) solution of (1) has the following representation in terms of stochastic integrals with respect to $W_t$ and $Z_t$:

$$X_t = X_0 e^{-\lambda t} + e^{-\lambda t}\int_0^t e^{\lambda s}\,dY_s = \frac{m}{\lambda} + \Big(X_0 - \frac{m}{\lambda}\Big)e^{-\lambda t} + \sigma e^{-\lambda t}\int_0^t e^{\lambda s}\,dW_s + e^{-\lambda t}\int_0^t e^{\lambda s}\,dZ_s. \quad (3)$$
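For the purely Gaussian case ($Z_t \equiv 0$), representation (3) gives the mean and variance of $X_t$ in closed form, which allows a quick numerical cross-check of (1) against (3). The sketch below is an illustration only (the parameter values, step sizes, and sample sizes are arbitrary choices, not from the paper); it compares an Euler discretization of (1) with the exact moments implied by (3).

```python
import numpy as np

rng = np.random.default_rng(0)
lam, m, sigma, x0, t = 1.0, 0.5, 1.0, 0.0, 1.0
n_steps, n_paths = 2000, 20000
dt = t / n_steps

# Euler scheme for dX = -lam*X dt + dY with Y_t = m*t + sigma*W_t (no jumps)
x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -lam * x * dt + m * dt + sigma * dw

# Exact moments from representation (3):
#   E X_t   = m/lam + (x0 - m/lam) e^{-lam t}
#   Var X_t = sigma^2 (1 - e^{-2 lam t}) / (2 lam)
mean_exact = m / lam + (x0 - m / lam) * np.exp(-lam * t)
var_exact = sigma**2 * (1 - np.exp(-2 * lam * t)) / (2 * lam)
print(x.mean(), mean_exact, x.var(), var_exact)
```

The Euler scheme carries an $O(\Delta t)$ bias, so the agreement is only up to Monte Carlo noise plus discretization error.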
It is well known that the jump component $Z_t$ of the Lévy process can be represented in terms of integrals with respect to a Poisson random measure $p(dx, ds)$ (generated by the jumps of $Y_t$) and a Lévy canonical measure of jumps $\nu(dx)$:

$$Z_t = \int_0^t\!\!\int x\,I\{|x| < 1\}\,\big(p(dx, ds) - \nu(dx)\,ds\big) + \int_0^t\!\!\int x\,I\{|x| \ge 1\}\,p(dx, ds). \quad (4)$$

Here $I\{\cdot\}$ is an indicator function and the Lévy measure $\nu(dx)$ must satisfy the following condition:

$$\int \min(x^2, 1)\,\nu(dx) < \infty. \quad (5)$$
In what follows we define a class of martingales as a parametric family of functions of $X_t$. Besides (5) we assume that

$$\int e^{ux}\,I\{x > 1\}\,\nu(dx) < \infty \quad \text{for all } u > 0, \quad (6)$$

so that $Ee^{uY_t} = \exp\{t\kappa(u)\} < \infty$ for all $u \ge 0$, where

$$\kappa(u) = mu + \frac{\sigma^2 u^2}{2} + \int \big(e^{ux} - uxI\{|x| < 1\} - 1\big)\,\nu(dx),$$

and that

$$E\log\big(1 + (Y_1)^-\big) < \infty \quad (7)$$

(henceforth, $a^+ = \max(a, 0)$, $a^- = (-a)^+$). We shall see that this condition is sufficient and necessary for finiteness of the following function $\Lambda(u)$, which will be used in a definition of martingales below:
$$\Lambda(u) = \frac{1}{\lambda}\int_0^u v^{-1}\kappa(v)\,dv = \frac{1}{\lambda}\Big(mu + \frac{\sigma^2 u^2}{4} + I_1(u) + I_2(u)\Big), \qquad u \ge 0, \quad (8)$$
where

$$I_1(u) = \int_0^u v^{-1}\Big[\int \big(e^{vx} - vxI\{|x| < 1\} - 1\big)\,I\{x > -1\}\,\nu(dx)\Big]\,dv,$$

$$I_2(u) = \int_0^u v^{-1}\Big[\int \big(e^{vx} - 1\big)\,I\{x \le -1\}\,\nu(dx)\Big]\,dv.$$
By conditions (5) and (6) the integral $I_1(u)$ is well defined and finite. It is convenient to express the integrand in $I_2(u)$ as follows (here $x \le -1$, and we substitute $y = -vx$):

$$\int_0^u v^{-1}\big(e^{vx} - 1\big)\,dv = -\int_0^{|x|u} y^{-1}\big(1 - e^{-y}\big)\,dy = -\int_0^1 y^{-1}\big(1 - e^{-y}\big)\,dy - \int_1^{|x|u} y^{-1}\big(1 - e^{-y}\big)\,dy$$
$$= -\mathrm{EulerGamma} - \log(|x|u) + \int_{|x|u}^\infty y^{-1}e^{-y}\,dy,$$

where EulerGamma is the Euler constant (in the notation of the package Mathematica [23]):

$$\mathrm{EulerGamma} = \int_0^1 y^{-1}\big(1 - e^{-y}\big)\,dy - \int_1^\infty y^{-1}e^{-y}\,dy$$

(see also [12, formula 8.367.12]). Therefore,
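The two integrals defining EulerGamma are elementary to evaluate numerically; the sketch below (pure standard library; `simpson` is a small helper defined here, not a library routine, and the upper limit $50$ replaces $\infty$ since the tail is below $e^{-50}$) reproduces the Euler constant $\gamma \approx 0.5772156649$.

```python
import math

def simpson(f, a, b, n=100_000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# EulerGamma = int_0^1 (1 - e^{-y})/y dy - int_1^inf e^{-y}/y dy;
# the first integrand extends continuously to y = 0 with value 1.
f1 = lambda y: 1.0 if y == 0 else (1 - math.exp(-y)) / y
f2 = lambda y: math.exp(-y) / y
euler_gamma = simpson(f1, 0.0, 1.0) - simpson(f2, 1.0, 50.0)
print(euler_gamma)
```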
$$I_2(u) = -D - \int\Big[\log(u) - \int_{|x|u}^\infty y^{-1}e^{-y}\,dy\Big]\,I\{x \le -1\}\,\nu(dx), \quad (9)$$

where $D = \int \big[\mathrm{EulerGamma} + \log|x|\big]\,I\{x \le -1\}\,\nu(dx)$; by condition (7) the constant $D$ is finite. Set also

$$H(\beta, x) = \int_0^\infty e^{ux - \Lambda(u)}\,u^{\beta - 1}\,du, \qquad \beta > 0.$$

Lemma 1. Let conditions (6) and (7) hold. If

$$\sigma > 0 \quad \text{or} \quad \nu\big((0, \infty)\big) > 0, \quad (10)$$

then

$$\lim_{u \to \infty} \frac{\Lambda(u)}{u} = \infty; \quad (11)$$

the same conclusion holds if $\sigma = 0$, $\nu((0, \infty)) = 0$, and $\int |x|\,I\{-1 < x < 0\}\,\nu(dx) = \infty$. If

$$\sigma = 0, \qquad \nu\big((0, \infty)\big) = 0, \qquad \int |x|\,I\{-1 < x < 0\}\,\nu(dx) < \infty, \quad (12)$$

then

$$\lim_{u \to \infty} \frac{\Lambda(u)}{u} = \frac{1}{\lambda}\Big(m + \int |x|\,I\{-1 < x < 0\}\,\nu(dx)\Big). \quad (13)$$
Proof. From (9) it follows that

$$I_2(u) = -\log(u)\,\nu\big((-\infty, -1]\big) + O(1), \qquad u \to \infty. \quad (14)$$

By the inequality $e^z - zI\{z < 1\} - 1 \ge z^2 I\{z > 0\}/2$, we find that

$$I_1(u) \ge \frac{u^2}{4}\int x^2\,I\{x > 0\}\,\nu(dx),$$

and, hence,

$$\lambda\Lambda(u) \ge mu + \frac{u^2}{4}\Big(\sigma^2 + \int x^2\,I\{x > 0\}\,\nu(dx)\Big) + O\big(\log(u)\big).$$

So, if $\sigma > 0$ or $\nu((0, \infty)) > 0$, then by this lower bound we obtain (11).
Now assume (12). Then

$$\lambda\Lambda(u) = mu + \int\Big[\int_0^u \frac{e^{vx} - vx - 1}{v}\,dv\Big]\,I\{-1 < x < 0\}\,\nu(dx) + I_2(u).$$

By virtue of (14), the integral $I_2(u)$ is of the order $O(\log(u))$. Note that for $x < 0$ and $v > 0$ the following inequalities hold:

$$0 \le \frac{e^{vx} - vx - 1}{v} \le -x.$$

Taking into account the assumption $\int |x|\,I\{-1 < x < 0\}\,\nu(dx) < \infty$, the dominated convergence theorem, and the l'Hospital rule, we find

$$\lim_{u \to \infty} \frac{1}{u}\int\Big[\int_0^u \frac{e^{vx} - vx - 1}{v}\,dv\Big]\,I\{-1 < x < 0\}\,\nu(dx) = \int |x|\,I\{-1 < x < 0\}\,\nu(dx).$$

Thus, (13) holds.
If $\int |x|\,I\{-1 < x < 0\}\,\nu(dx) = \infty$, the same arguments lead to the following estimate with any $\varepsilon > 0$:

$$\liminf_{u \to \infty} \frac{\Lambda(u)}{u} \ge \frac{1}{\lambda}\Big(m + \int |x|\,I\{-1 < x < -\varepsilon\}\,\nu(dx)\Big).$$

Letting here $\varepsilon \to 0$, we obtain (11). Lemma 1 is proved.
Notice that by Lemma 1 the function $H(\beta, x)$ is finite for any real $x$ if condition (10) holds or if

$$\sigma = 0, \qquad \nu\big((0, \infty)\big) = 0, \qquad m + \int |x|\,I\{-1 < x < 0\}\,\nu(dx) > \lambda x. \quad (15)$$
Remark 1. If

$$\sigma = 0, \qquad \nu\big((0, \infty)\big) = 0, \qquad m + \int |x|\,I\{-1 < x < 0\}\,\nu(dx) \le \lambda X_0, \quad (16)$$

then for $b > X_0$ the stopping time $\tau_b = \infty$.
Indeed, the condition $\nu((0, \infty)) = 0$ implies that the process $X_t$ does not have any positive jumps at all. So, since the continuous part of $X_t$ is not smaller than $X_t$ itself, by (3), (4), and (16) we have the following deterministic upper bound:

$$X_t \le \frac{m}{\lambda} + \Big(X_0 - \frac{m}{\lambda}\Big)e^{-\lambda t} + e^{-\lambda t}\int_0^t e^{\lambda s}\Big[\int (-x)\,I\{-1 < x < 0\}\,\nu(dx)\Big]\,ds$$
$$= \frac{1}{\lambda}\Big(m + \int |x|\,I\{-1 < x < 0\}\,\nu(dx)\Big) + \Big[X_0 - \frac{1}{\lambda}\Big(m + \int |x|\,I\{-1 < x < 0\}\,\nu(dx)\Big)\Big]e^{-\lambda t} \le X_0.$$

Thus, under $b > X_0$ we have $\sup_{t > 0} X_t < b$ and so $\tau_b = \infty$.
Theorem 1. Let conditions (6) and (7) hold. Further, assume that (10) or (15) with $x = X_0$ holds. Then

$$\big\{e^{-\lambda\beta t}\,H(\beta, X_t),\ t \ge 0\big\}, \qquad \beta > 0,$$

is a martingale.
Proof. Using standard tools of stochastic analysis (see, e.g., [1] or [13]) we obtain that, under conditions (6) and (7), the process

$$\Psi_t(u) = \exp\Big\{u e^{\lambda t} X_t - \int_0^t \kappa\big(u e^{\lambda s}\big)\,ds\Big\} \quad (17)$$

is a martingale. The fact that this process is a local martingale can be checked using representation (3) and the generalized Itô formula. The uniform integrability (under assumption (6)), and, hence, the validity of the martingale property, is a consequence of the exponential identity

$$E\exp\Big\{\int_0^t f(s)\,dY_s - q_t\Big\} = 1,$$

where

$$q_t = m\int_0^t f(s)\,ds + \frac{\sigma^2}{2}\int_0^t f^2(s)\,ds + \int_0^t\!\!\int \big(e^{f(s)x} - f(s)xI\{|x| < 1\} - 1\big)\,\nu(dx)\,ds$$

and $f(s)$ is a bounded deterministic function.
Note that

$$\int_0^t \kappa\big(ue^{\lambda s}\big)\,ds = \frac{1}{\lambda}\int_u^{ue^{\lambda t}} \frac{\kappa(v)}{v}\,dv = \Lambda\big(ue^{\lambda t}\big) - \Lambda(u).$$
Since $E(\Psi_t(u)) = \Psi_0(u) = \exp\{uX_0\}$, from the latter formula it follows that

$$E\exp\{uX_t\} = \exp\big\{uX_0 e^{-\lambda t} + \Lambda(u) - \Lambda\big(ue^{-\lambda t}\big)\big\}. \quad (18)$$
Applying the Fubini theorem and then introducing a new variable $z = ue^{-\lambda t}$, we obtain

$$E\big(H(\beta, X_t)\big) = \int_0^\infty E\big(e^{uX_t - \Lambda(u)}\big)\,u^{\beta-1}\,du = \int_0^\infty e^{uX_0e^{-\lambda t} - \Lambda(ue^{-\lambda t})}\,u^{\beta-1}\,du = e^{\lambda\beta t}\,H(\beta, X_0) < \infty. \quad (19)$$

The finiteness of the function $H(\beta, X_0)$ is due to Lemma 1 and condition (10) or (15) with $x = X_0$.
By (17), for all $s \le t$ we have

$$E\big(\Psi_t(u) \mid F_s\big) = \Psi_s(u) \quad \text{a.s.}$$
Now, integrating both sides of the above equality with respect to $Q(du) = e^{-\Lambda(u)}u^{\beta-1}\,du$ ($\beta > 0$) over the interval $(0, \infty)$, we obtain

$$\int_0^\infty E\big(\Psi_t(u) \mid F_s\big)\,e^{-\Lambda(u)}u^{\beta-1}\,du = \int_0^\infty \Psi_s(u)\,e^{-\Lambda(u)}u^{\beta-1}\,du$$
$$= e^{-\lambda\beta s}\int_0^\infty e^{ue^{\lambda s}X_s - \Lambda(ue^{\lambda s})}\,\big(ue^{\lambda s}\big)^{\beta-1}\,d\big(ue^{\lambda s}\big) = e^{-\lambda\beta s}\,H(\beta, X_s). \quad (20)$$
Hence, by the Fubini theorem, applied to the left-hand side of (20), we obtain the required martingale property

$$E\big(e^{-\lambda\beta t}\,H(\beta, X_t) \mid F_s\big) = e^{-\lambda\beta s}\,H(\beta, X_s) \quad \text{a.s.}$$

Theorem 1 is proved.
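In the Gaussian case ($\nu \equiv 0$), where $\lambda\Lambda(u) = mu + \sigma^2u^2/4$ and $Ee^{uX_t}$ is available in closed form from (18), identity (19) — and hence the constancy of $E[e^{-\lambda\beta t}H(\beta, X_t)]$ — can be verified by deterministic quadrature. A sketch (all parameter values are illustrative choices; the substitution $w = u^\beta$ removes the integrable singularity of $u^{\beta-1}$ at zero):

```python
import numpy as np

lam, m, sigma, x0, beta, t = 1.0, 0.3, 1.0, -0.2, 0.7, 0.8

def trapezoid(y, x):
    # version-proof trapezoid rule (avoids numpy.trapz/trapezoid naming)
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def Lam(u):
    # Lambda(u) in the Gaussian case: (m u + sigma^2 u^2 / 4) / lambda
    return (m * u + sigma**2 * u**2 / 4) / lam

w = np.linspace(1e-12, 10.0, 500_001)
u = w ** (1.0 / beta)  # substitution w = u^beta

# H(beta, x) = int_0^inf e^{ux - Lambda(u)} u^{beta-1} du = (1/beta) int ... dw
H = lambda x: trapezoid(np.exp(u * x - Lam(u)), w) / beta

# E H(beta, X_t) via the Gaussian transition law X_t ~ N(mu_t, v_t)
mu_t = x0 * np.exp(-lam * t) + (m / lam) * (1 - np.exp(-lam * t))
v_t = sigma**2 * (1 - np.exp(-2 * lam * t)) / (2 * lam)
EH = trapezoid(np.exp(u * mu_t + u**2 * v_t / 2 - Lam(u)), w) / beta

lhs, rhs = EH, np.exp(lam * beta * t) * H(x0)
print(lhs, rhs)  # identity (19): E H(beta, X_t) = e^{lam beta t} H(beta, X_0)
```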
Remark 2. The idea of constructing a special parametric martingale family is not new. A similar method was used in the papers [14] and [15] for boundary crossing problems related to a Brownian motion, and in the papers [7] and [16] for boundary crossing problems related to a stable Lévy process.
By the optional stopping theorem and Theorem 1 we have the following identity: for any stopping time $\tau$ and fixed $t < \infty$,

$$E\big[e^{-\lambda\beta\min(\tau, t)}\,H\big(\beta, X_{\min(\tau, t)}\big)\big] = H(\beta, X_0), \qquad \beta > 0. \quad (21)$$

We apply (21) to derive an explicit formula for the Laplace transform of $\tau_b$ provided that the process $Y_t$ does not have positive jumps. Before proving this formula we have to introduce further notation. Set
$$\tilde{H}(\beta, x) = \int_0^\infty e^{ux - \tilde{\Lambda}(u)}\,u^{\beta - 1}\,du, \qquad \beta > 0,$$

with

$$\lambda\tilde{\Lambda}(u) = mu + \frac{\sigma^2 u^2}{4} + I_1(u) - \int\Big[\log(u) - \int_{|x|u}^\infty \frac{e^{-y}}{y}\,dy\Big]\,I\{x \le -1\}\,\nu(dx).$$
If (7) holds, then, by (9) and (8),

$$H(\beta, x) = \exp\big\{D\lambda^{-1}\big\}\,\tilde{H}(\beta, x),$$

where the constant $D$ is defined above. Notice also that $\tilde{H}(\beta, x)$ is finite even if (7) fails.
Theorem 2. Let $\nu((0, \infty)) = 0$. If

$$\sigma > 0 \quad \text{or} \quad m + \int |x|\,I\{-1 < x < 0\}\,\nu(dx) > \lambda b, \quad (22)$$

then $P\{\tau_b < \infty\} = 1$ and

$$Ee^{-\lambda\beta\tau_b} = \frac{\tilde{H}(\beta, X_0)}{\tilde{H}(\beta, b)}, \qquad \beta > 0.$$
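For the Gaussian OU-process ($\nu \equiv 0$, $\sigma > 0$), $\tilde H$ coincides with $H$ up to a constant factor that cancels in the ratio, and the formula of Theorem 2 can be checked by simulation. The sketch below uses illustrative parameters; the Euler time-stepping of the crossing time carries a small discretization bias (the barrier can only be tested on the grid), hence the loose tolerance. It estimates $Ee^{-\lambda\beta\tau_b}$ by Monte Carlo and compares it with $H(\beta, X_0)/H(\beta, b)$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, m, sigma, x0, b, beta = 1.0, 0.0, 1.0, 0.0, 0.5, 0.5
dt, t_max, n_paths = 1e-3, 30.0, 4000

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def H(x):
    # H(beta, x) = int_0^inf e^{ux - Lambda(u)} u^{beta-1} du, via w = u^beta
    w = np.linspace(1e-12, 10.0, 200_001)
    u = w ** (1.0 / beta)
    return trapezoid(np.exp(u * x - (m * u + sigma**2 * u**2 / 4) / lam), w) / beta

# Monte Carlo first crossing of level b by the Gaussian OU-process
x = np.full(n_paths, x0)
tau = np.full(n_paths, t_max)
alive = np.ones(n_paths, dtype=bool)
for k in range(int(t_max / dt)):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = np.where(alive, x - lam * x * dt + m * dt + sigma * dw, x)
    crossed = alive & (x > b)
    tau[crossed] = (k + 1) * dt
    alive &= ~crossed
    if not alive.any():
        break

mc = np.exp(-lam * beta * tau).mean()
exact = H(x0) / H(b)
print(mc, exact)
```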
Proof. First we assume that (7) holds and consider identity (21) with $\tau = \tau_b$. By Lemma 1 and (22), $H(\beta, x) < \infty$, $x \le b$. Under the absence of positive jumps, on the set $\{\tau_b < t\}$ we have $X_{\tau_b} = b$ and, on the set $\{\tau_b \ge t\}$, we have $X_t \le b$ for all $t \ge 0$. Hence, $H(\beta, X_{\min(\tau_b, t)}) \le H(\beta, b) < \infty$ and, by the Fatou lemma, we can pass to the limit as $t \to \infty$ under the expectation in the left-hand side of (21). As the result, we get

$$E\big[I\{\tau_b < \infty\}\,e^{-\lambda\beta\tau_b}\big] = \frac{H(\beta, X_0)}{H(\beta, b)}, \qquad \beta > 0. \quad (23)$$
It is easy to verify, integrating by parts, that

$$\lim_{\beta \to 0} \beta H(\beta, x) = 1. \quad (24)$$

Passing to the limit as $\beta \to 0$ in (23), by the Fatou lemma we get $P\{\tau_b < \infty\} = 1$.
Since

$$\frac{H(\beta, X_0)}{H(\beta, b)} = \frac{\tilde{H}(\beta, X_0)}{\tilde{H}(\beta, b)}, \qquad \beta > 0,$$

then, by (23), the statement of Theorem 2 is valid under the imposed assumption (7). If (7) fails, we may consider the OU-process $X^N_t$, which solves (1) with $Y_t$ replaced by the truncated Lévy process

$$Y^N_t = mt + \sigma W_t + Z^N_t,$$
where

$$Z^N_t = \int_0^t\!\!\int x\,I\{-1 < x < 0\}\,\big(p(dx, ds) - \nu(dx)\,ds\big) + \int_0^t\!\!\int x\,I\{-N \le x \le -1\}\,p(dx, ds).$$
Denote by $\tau^N_b$ the corresponding crossing time of the level $b$ and by $\tilde{H}^N(\beta, x)$ the corresponding martingale function. Obviously, $\tau^N_b \uparrow \tau_b$ a.s. as $N \to \infty$. Notice that (7) holds for $Y^N_t$ and we have

$$Ee^{-\lambda\beta\tau^N_b} = \frac{\tilde{H}^N(\beta, X_0)}{\tilde{H}^N(\beta, b)}, \qquad \beta > 0,$$
where the function $\tilde{H}^N(\beta, x)$ is defined above. Now it is easy to check that

$$\lim_{N \to \infty} \tilde{H}^N(\beta, x) = \tilde{H}(\beta, x) \quad \text{for any } x \le b.$$

Theorem 2 is proved.
Remark 3. In the case when $\nu(dx) \equiv 0$ and $\sigma > 0$ the process $X_t$ is Gaussian and, of course, the result of Theorem 2 for this case is well known (see, e.g., [17]). Note also that for this special case it is possible to derive an analytical inversion of the Laplace transform of $\tau_b$ based on the representation for the function $H(\beta, x)$ in terms of the parabolic cylinder function.

The next theorem shows that the distribution of $\tau_b$ is exponentially bounded; the analogous result for the two-sided exit time

$$\tau_{a,b} = \inf\{t > 0\colon X_t > b \text{ or } X_t < a\}, \qquad b > X_0 > a,$$

is established too (Corollary 1).

Theorem 3. Let condition (10) or (15) with $x = b$ hold. Assume also that

$$E\big((Y_1)^-\big)^\delta < \infty \quad \text{for some } \delta \in (0, 1]. \quad (25)$$

Then $Ee^{\gamma\tau_b} < \infty$ for some $\gamma > 0$.
Proof. Assuming (6), we shall use an analytical continuation of the martingale family $e^{-\lambda\beta t}H(\beta, X_t)$, $\beta > 0$, to $\beta \in (-\delta, 0)$ with $\delta$ involved in (25). For $\beta \in (-\delta, 0)$, set

$$H(\beta, x) = \int_0^\infty \big(e^{ux - \Lambda(u)} - 1\big)\,u^{\beta - 1}\,du. \quad (26)$$

By (10) and Lemma 1, $H(\beta, x) < \infty$ if and only if

$$\int_0^1 |\Lambda(u)|\,u^{\beta - 1}\,du < \infty.$$
Owing to (5) and (8), for $\beta > -1$ we have

$$\int_0^1 |\Lambda(u)|\,u^{\beta - 1}\,du \le C + C\int_0^1 |I_2(u)|\,u^{\beta - 1}\,du$$

(hereafter $C$ is a positive generic constant).
The inequality

$$1 - e^{-z} \le C z^\delta, \qquad z \ge 0, \quad (27)$$

implies

$$\int_0^1 |I_2(u)|\,u^{\beta - 1}\,du \le C\int |x|^\delta\,I\{x \le -1\}\,\nu(dx)\int_0^1 u^{-1+\delta+\beta}\,du,$$

in which the latter integral in the right-hand side is finite for any $\beta \in (-\delta, 0)$, whereas (25) is equivalent to

$$\int |x|^\delta\,I\{x \le -1\}\,\nu(dx) < \infty.$$

Similarly to (24), integration by parts shows that

$$\lim_{\beta \uparrow 0} \beta H(\beta, x) = 1. \quad (28)$$

The martingale property of Theorem 1 extends to $\beta \in (-\delta, 0)$, and the optional stopping theorem gives, for any fixed $t < \infty$,

$$E\big[e^{-\lambda\beta\min(\tau_b, t)}\,H\big(\beta, X_{\min(\tau_b, t)}\big)\big] = H(\beta, X_0), \qquad \beta \in (-\delta, 0). \quad (29)$$

Since for $\beta < 0$ (first let $\nu((0,\infty)) = 0$, so that $X_{\min(\tau_b, t)} \le b$) we have $H(\beta, X_{\min(\tau_b, t)}) \le H(\beta, b) = H(\beta, X_0) + \int_0^\infty (e^{ub} - e^{uX_0})\,u^{\beta-1}e^{-\Lambda(u)}\,du$, identity (29) yields

$$E\,e^{-\lambda\beta\min(\tau_b, t)}\Big[\int_0^\infty \big(e^{ub} - e^{uX_0}\big)\,u^{\beta - 1}e^{-\Lambda(u)}\,du + H(\beta, X_0)\Big] \ge H(\beta, X_0). \quad (30)$$
Further, whereas

$$0 < \int_0^\infty \big(e^{ub} - e^{uX_0}\big)\,u^{\beta - 1}e^{-\Lambda(u)}\,du \le \int_0^\infty \big(e^{ub} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du < \infty$$

as $\beta \uparrow 0$, there exists $\beta_0 \in (-\delta, 0)$ such that for any $\beta \in (\beta_0, 0)$

$$\beta\int_0^\infty \big(e^{ub} - e^{uX_0}\big)\,u^{\beta - 1}e^{-\Lambda(u)}\,du > -0.8.$$
On account of (28), there is $\beta_1 \in (\beta_0, 0)$ such that $0.9 < \beta_1 H(\beta_1, X_0) < 1.1$. So, (30) provides

$$E\,e^{-\lambda\beta_1\min(\tau_b, t)} \le \frac{\beta_1 H(\beta_1, X_0)}{\beta_1\int_0^\infty \big(e^{ub} - e^{uX_0}\big)\,u^{\beta_1 - 1}e^{-\Lambda(u)}\,du + \beta_1 H(\beta_1, X_0)} < \frac{1.1}{0.1}.$$

This bound is valid for any $t \ge 0$. Hence, by the Fatou lemma, the statement of Theorem 3 holds true for $\gamma = -\lambda\beta_1$.
If $\nu((0, \infty)) > 0$, then, choosing a positive constant $A$ with $\nu((0, A]) > 0$, we introduce the OU-process $X^A_t$, generated by the Lévy process $Y^A_t$ with positive jumps truncated by $A$, and the level crossing time $\tau^A_b$, and notice that $\tau^A_b \ge \tau_b$. Applying identity (29) with $\tau_b$ replaced by $\tau^A_b$ and properly defined functions $\Lambda^A(u)$ and $H^A(\beta, X_0)$, and taking into account $X^A_{\min(\tau^A_b, t)} \le b + A$ and $\beta < 0$, we get the bound

$$E\,e^{-\lambda\beta\min(\tau^A_b, t)}\Big[\int_0^\infty \big(e^{u(A+b)} - e^{uX_0}\big)\,u^{\beta - 1}e^{-\Lambda^A(u)}\,du + H^A(\beta, X_0)\Big] \ge H^A(\beta, X_0).$$

The last part of the proof is similar to that for the case $\nu((0, \infty)) = 0$. Theorem 3 is proved.
Corollary 1. Let $\sigma > 0$ or $\nu((-\infty, \infty)) > 0$. Assume

$$E|Y_1|^\delta < \infty \quad \text{for some } \delta \in (0, 1].$$

Then $Ee^{\gamma\tau_{a,b}} < \infty$ for some $\gamma > 0$.

Proof. Denote $\tau_a = \inf\{t > 0\colon X_t < a\}$ and note that $\tau_{a,b} = \min(\tau_b, \tau_a)$. By Theorem 3, applied to $\tau_b$ and $\tau_a$, the desired result holds.
4. The moment Wald identity. The theorem below generalizes Theorem 2 of [8].

Theorem 4. Denote $T = \inf\{t \ge 0\colon X_t \ge f(t)\}$, $X_0 < f(0)$, where $f(t)$ is a continuous deterministic function such that $\sup_{t \ge 0} f(t) = M < \infty$. Let conditions (6) and (25) hold. Further, assume condition (10) or (15) with $x = M$ holds. Then

$$\lambda ET = E\int_0^\infty \big(e^{uX_T} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du < \infty. \quad (31)$$
Proof. First we shall show that under (10) or (15) with $x = M$ the process

$$\Big\{\int_0^\infty \big(e^{uX_t} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du - \lambda t,\ t \ge 0\Big\} \quad (32)$$

is a martingale. Indeed, since $\{e^{-\lambda\beta t}H(\beta, X_t),\ t \ge 0\}$ is a martingale for $\beta > 0$, we have

$$e^{-\lambda\beta t}\,E\big[H(\beta, X_t) - H(\beta, X_0) \mid F_s\big] + \big(e^{-\lambda\beta t} - 1\big)\,H(\beta, X_0)$$
$$= e^{-\lambda\beta s}\big(H(\beta, X_s) - H(\beta, X_0)\big) + \big(e^{-\lambda\beta s} - 1\big)\,H(\beta, X_0) \quad \text{a.s.} \quad (33)$$
Under the conditions of Theorem 4

$$\lim_{\beta \to 0}\big[H(\beta, z) - H(\beta, X_0)\big] = \int_0^\infty \big(e^{uz} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du$$

for any $z$, if (10) holds, or for any $z < M$, if (15) holds with $x = M$. Further, due to (24), we have

$$\lim_{\beta \to 0}\big(e^{-\lambda\beta t} - 1\big)\,H(\beta, X_0) = -\lambda t.$$
Applying now the dominated convergence theorem, we can interchange the limit as $\beta \to 0$ and the conditional expectation on the left side of (33). Thus, passing to the limit as $\beta \to 0$ in both parts of (33), we obtain the martingale property of (32).

By the optional stopping theorem for martingales we find that

$$\lambda E\min(\tau, t) = E\int_0^\infty \big(e^{uX_{\min(\tau, t)}} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du \quad (34)$$

is valid for any stopping time $\tau$ and fixed $t < \infty$.

To complete the proof, it remains only to verify that the computation of $\lim_{t \to \infty}$ on the right-hand side of (34) with $\tau = T$ can be carried out under the expectation symbol. We verify that as follows. By Theorem 3,

$$ET < \infty. \quad (35)$$
Further, (34) with $\tau = T$ is equivalent to

$$\lambda E\min(T, t) = E\,I\{T \le t\}\int_0^\infty \big(e^{uX_T} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du + e_t,$$

where

$$e_t = E\,I\{T > t\}\int_0^\infty \big(e^{uX_t} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du.$$

Assume for a moment that

$$\lim_{t \to \infty} e_t = 0. \quad (36)$$

Then, by the dominated convergence theorem, (31) holds true.
For a verification of (36), we notice that $X_t \le M$ on the set $\{T > t\}$. Therefore, applying (27), we find that

$$|e_t| = \Big|E\,I\{T > t,\ X_t \ge X_0\}\int_0^\infty \big(e^{uX_t} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du - E\,I\{T > t,\ X_t < X_0\}\int_0^\infty e^{uX_0}\big(1 - e^{u(X_t - X_0)}\big)\,u^{-1}e^{-\Lambda(u)}\,du\Big|$$
$$\le P\{T > t,\ X_t \ge X_0\}\int_0^\infty \big(e^{uM} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du + E\Big[I\{T > t,\ X_t < X_0\}\,C\int_0^\infty e^{uX_0}\,u^{-1+\delta}e^{-\Lambda(u)}\,du\ |X_t - X_0|^\delta\Big]$$
$$\le C_1\,P\{T > t\} + C_2\,E\big[I\{T > t\}\,|X_t - X_0|^\delta\big], \quad (37)$$

where

$$C_1 = \int_0^\infty \big(e^{uM} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du, \qquad C_2 = C\int_0^\infty e^{uX_0}\,u^{-1+\delta}e^{-\Lambda(u)}\,du.$$
Set

$$\hat{Y}_t = \sigma W_t + \int_0^t\!\!\int x\,I\{|x| < 1\}\,\big(p(dx, ds) - \nu(dx)\,ds\big), \qquad \check{Y}_t = \int_0^t\!\!\int x\,I\{|x| \ge 1\}\,p(dx, ds).$$
Since by (3) and (4),

$$X_t - X_0 = \big(\lambda^{-1}m - X_0\big)\big(1 - e^{-\lambda t}\big) + e^{-\lambda t}\int_0^t e^{\lambda s}\,d\hat{Y}_s + e^{-\lambda t}\int_0^t e^{\lambda s}\,d\check{Y}_s,$$

with the help of the inequality $|a + b + c|^\delta \le |a|^\delta + |b|^\delta + |c|^\delta$, $0 < \delta \le 1$, we obtain

$$|X_t - X_0|^\delta \le \big|\lambda^{-1}m - X_0\big|^\delta + |\hat{X}_t|^\delta + |\check{X}_t|^\delta, \quad (38)$$

where $\hat{X}_t = e^{-\lambda t}\int_0^t e^{\lambda s}\,d\hat{Y}_s$ and $\check{X}_t = e^{-\lambda t}\int_0^t e^{\lambda s}\,d\check{Y}_s$. In view of (35) and (37), to verify (36) it is enough to check that, for bounded stopping times $\tau$,

$$E\,|\check{X}_\tau|^\delta \le C\,E\tau, \qquad E\,|\hat{X}_\tau|^\delta \le \big(E\hat{X}_\tau^2\big)^{\delta/2} \le C\,(E\tau)^{\delta/2}. \quad (39)$$

By the property of the stochastic integral (see, e.g., [1] or [13]), for any stopping time $\tau$ we have

$$E\sum_{s \le \tau} |\Delta\check{Y}_s|^\delta = E\int_0^\tau\!\!\int |x|^\delta\,I\{|x| \ge 1\}\,p(dx, ds) = E\tau\int |x|^\delta\,I\{|x| \ge 1\}\,\nu(dx),$$

where, by (6) and (25), $\int |x|^\delta\,I\{|x| \ge 1\}\,\nu(dx) < \infty$; since $\delta \le 1$, this yields the first bound in (39). For the second one, set $M_t = \int_0^t e^{\lambda s}\,d\hat{Y}_s$ and find that

$$\hat{X}_t^2 = \int_0^t e^{-2\lambda s}\,\big(2M_{s-}\,dM_s\big) + \int_0^t e^{-2\lambda s}\,d\big([M, M]_s - \langle M, M\rangle_s\big) + \int_0^t M_s^2\,de^{-2\lambda s} + t\Big(\sigma^2 + \int x^2\,I\{|x| < 1\}\,\nu(dx)\Big).$$

Here, the first and second integral terms are martingales, and the third one is negative. For any bounded stopping time $\tau$, these facts provide

$$E\hat{X}_\tau^2 \le E(\tau)\Big(\sigma^2 + \int x^2\,I\{|x| < 1\}\,\nu(dx)\Big).$$

So, for $\delta \le 2$, (39) is provided by Lenglart's domination principle (see, e.g., [18, p. 156]). Theorem 4 is proved.
Remark 4. Since $X_{\tau_b} \ge b$, Theorem 4 provides

$$\lambda E\tau_b \ge \int_0^\infty \big(e^{ub} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du.$$
Note that Theorems 1 and 4 involve condition (6), which does not hold for exponentially distributed positive jumps, as well as for other positive jump distributions of a similar type. However, the truncation technique (applied to large positive jumps) allows one to obtain a lower bound in this case as well. So, instead of (6) we assume

$$K = \sup\big\{u \ge 0\colon Ee^{uY_t} = \exp\{t\kappa(u)\} < \infty\big\} < \infty$$

and define the function

$$\Lambda(u) = \frac{1}{\lambda}\int_0^u v^{-1}\kappa(v)\,dv, \qquad u < K.$$

Then, repeating the steps of the proof of Theorem 4, first for the case with truncated jumps and then passing to the limit as the parameter of truncation increases to infinity, we obtain the lower bound

$$\lambda E\tau_b \ge \int_0^K \big(e^{ub} - e^{uX_0}\big)\,u^{-1}e^{-\Lambda(u)}\,du.$$
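For the Gaussian OU-process the bound of Remark 4 in fact holds with equality (the paths are continuous, so $X_{\tau_b} = b$ and Theorem 4 applies with $f \equiv b$), which makes it easy to test numerically. A sketch with arbitrary illustrative parameters, comparing the integral with a Monte Carlo estimate of $E\tau_b$ (the Euler crossing detection slightly overestimates $\tau_b$, hence the loose tolerance):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, m, sigma, x0, b = 1.0, 0.0, 1.0, 0.0, 0.5
dt, t_max, n_paths = 1e-3, 30.0, 4000

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

# lam * E tau_b = int_0^inf (e^{ub} - e^{u x0}) u^{-1} e^{-Lambda(u)} du,
# with Lambda(u) = (m u + sigma^2 u^2 / 4) / lam in the Gaussian case
u = np.linspace(1e-8, 30.0, 600_001)
integrand = (np.exp(u * b) - np.exp(u * x0)) / u \
    * np.exp(-(m * u + sigma**2 * u**2 / 4) / lam)
bound = trapezoid(integrand, u) / lam

# Monte Carlo estimate of E tau_b
x = np.full(n_paths, x0)
tau = np.full(n_paths, t_max)
alive = np.ones(n_paths, dtype=bool)
for k in range(int(t_max / dt)):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = np.where(alive, x - lam * x * dt + m * dt + sigma * dw, x)
    crossed = alive & (x > b)
    tau[crossed] = (k + 1) * dt
    alive &= ~crossed
    if not alive.any():
        break

print(bound, tau.mean())
```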
Remark 5. Identity (31) might also be used for creating corresponding bounds for two-sided stopping times $\tau_{a,b}$. If, for example, $Y_t$ is a process with a symmetric distribution, $X_0 = 0$, and (6) holds, then (34) holds for $X_t$ and $(-X_t)$ as well; that is, for any stopping time $\tau$

$$\lambda E\min(\tau, t) = E\int_0^\infty \big(e^{-uX_{\min(\tau, t)}} - 1\big)\,u^{-1}e^{-\Lambda(u)}\,du.$$

Hence, averaging the two identities,

$$\lambda E\min(\tau, t) = E\int_0^\infty \big(\cosh\big(uX_{\min(\tau, t)}\big) - 1\big)\,u^{-1}e^{-\Lambda(u)}\,du. \quad (41)$$

Note that a similar identity is used in [11] for the derivation of maximal inequalities for the Gaussian OU-process. Since the Gaussian OU-process is continuous, from (41), as $t \to \infty$, it follows that

$$\lambda E\tau_{-b,b} = \int_0^\infty \big(\cosh(ub) - 1\big)\,u^{-1}e^{-\Lambda(u)}\,du < \infty, \qquad \Lambda(u) = \frac{\sigma^2 u^2}{4\lambda}.$$
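The last formula is straightforward to evaluate numerically. As a sanity check (a sketch with arbitrary parameter values): for small $b$ one has $\cosh(ub) - 1 \approx u^2b^2/2$, so the integral behaves like $\lambda b^2/\sigma^2$, i.e., $E\tau_{-b,b} \approx b^2/\sigma^2$, the expected exit time of a Brownian motion from $(-b, b)$.

```python
import numpy as np

lam, sigma, b = 1.0, 1.0, 0.05

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

# lam * E tau_{-b,b} = int_0^inf (cosh(ub) - 1) u^{-1} e^{-sigma^2 u^2/(4 lam)} du
u = np.linspace(1e-8, 40.0, 400_001)
integral = trapezoid(
    (np.cosh(u * b) - 1.0) / u * np.exp(-sigma**2 * u**2 / (4 * lam)), u
)
e_tau = integral / lam

# For small b the OU-process exits (-b, b) essentially like a Brownian motion,
# whose expected exit time is b^2 / sigma^2.
print(e_tau, b**2 / sigma**2)
```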
5. Maximal inequalities for stable OU-processes. We consider now a spectrally negative stable process $Y_t$ (see [1] or [19]) with

$$Ee^{uY_1} = \exp\big\{\alpha^{-1}u^\alpha\big\}, \qquad u \ge 0, \quad 1 < \alpha \le 2. \quad (42)$$

This process $Y_t$, and in turn $X_t$, does not have positive jumps. Moreover,

$$\Lambda(u) = \frac{u^\alpha}{\lambda\alpha^2}.$$

By (18),

$$E\exp\{uX_t\} = \exp\Big\{uX_0e^{-\lambda t} + \frac{\big(1 - e^{-\alpha\lambda t}\big)\,u^\alpha}{\lambda\alpha^2}\Big\}.$$
If $\alpha = 2$, $X_0 = 0$, the process $X_t$ is Gaussian. Then by [11] the following remarkable inequality is valid: for any stopping time $\tau$

$$C_1\,E\sqrt{\log(1 + \tau)} \le E\Big[\max_{t \le \tau} |X_t|\Big] \le C_2\,E\sqrt{\log(1 + \tau)}, \quad (43)$$

where $C_1 \ge \frac{1}{3}$, $C_2 \le 3.3795$.
We prove here an analogue of (43).

Theorem 5. Let (42) hold and let $X_t$ solve (1) with $X_0 \le 0$. Then for any stopping time $\tau$ and all $p > 0$

$$c_p\,E\Big[\big(\log(1 + \tau)\big)^{p(1 - 1/\alpha)}\Big] \le E\Big[\Big(\sup_{t \le \tau} X_t\Big)^p\Big] \le a_p + C_p\,E\Big[\big(\log(1 + \tau)\big)^{p(1 - 1/\alpha)}\Big], \quad (44)$$

where the positive constants $a_p$, $c_p$, and $C_p$ do not depend on $\tau$.
For the proof of inequality (43), Graversen and Peskir [11] apply Wald's moment identity with a formula for $E\tau$ similar to (41). Here, we apply identity (34), which is the one-sided analogue of the above-mentioned formula from [11]. We also use the following simple consequence of Lenglart's domination principle.
Lemma 2. Let $Q_t$ be a nonnegative, right-continuous process and let $A_t$ be an increasing continuous process, $A_0 = 0$. Assume that for all bounded stopping times $\tau$

$$EQ_\tau \le EA_\tau. \quad (45)$$

Then for all $p > 0$ and for all bounded stopping times $\tau$ there exist constants $c_p$ and $C_p$ such that

$$E\Big[\Big(\log\Big(1 + \sup_{t \le \tau} Q_t\Big)\Big)^p\Big] \le c_p + C_p\,E\big(\big[\log(1 + A_\tau)\big]^p\big). \quad (46)$$
Proof. By Lenglart's principle, for any increasing continuous function $H(x)$ with $H(0) = 0$, (45) provides

$$E\Big[\sup_{t \le \tau} H(Q_t)\Big] \le E\big[\tilde{H}(A_\tau)\big], \quad (47)$$

where

$$\tilde{H}(x) = x\int_x^\infty s^{-1}\,dH(s) + 2H(x).$$

Set $H(x) = (\log(1 + x))^p$, $x \ge 0$. By l'Hospital's rule, $\lim_{x \to 0} \tilde{H}(x) = 0$ and

$$\lim_{x \to \infty} \frac{x}{H(x)}\int_x^\infty s^{-1}\,dH(s) = 0.$$

Hence, $\lim_{x \to \infty} \big[\tilde{H}(x)/H(x)\big] = 2$ and there are constants $c_p$ and $C_p$ such that $\tilde{H}(x) \le c_p + C_p H(x)$. Therefore, (46) is implied by (47) with $H(x) = (\log(1 + x))^p$.
Proof of Theorem 5. Denote $\bar{X}_t = \sup_{s \le t} X_s$. In the absence of positive jumps for $X_t$, the process $\bar{X}_t$ is increasing and continuous. Then, (34) provides the following inequality, valid for any bounded stopping time $\tau$:

$$\lambda E\tau \le E\int_0^\infty \big(e^{u\bar{X}_\tau} - e^{uX_0}\big)\,u^{-1}e^{-u^\alpha/(\lambda\alpha^2)}\,du = E\big[G(\bar{X}_\tau) - G(X_0)\big],$$

where

$$G(y) = \int_0^\infty \big(e^{uy} - 1\big)\,u^{-1}e^{-u^\alpha/(\lambda\alpha^2)}\,du.$$
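The exponential growth of $G$ (cf. (48) below) can be observed numerically: by the Laplace method, $\log G(y)$ is close to $\sup_{u>0}\{uy - u^\alpha/(\lambda\alpha^2)\} = C\,y^{\alpha/(\alpha-1)}$ for large $y$. A sketch with $\alpha = 1.5$ (illustrative values; the integration range is chosen so the integrand has fully decayed and no overflow occurs):

```python
import numpy as np

lam, alpha, y = 1.0, 1.5, 6.0

def trapezoid(vals, grid):
    return float(np.sum((vals[1:] + vals[:-1]) * (grid[1:] - grid[:-1])) / 2.0)

u = np.linspace(1e-6, 300.0, 1_500_001)
Lam = u**alpha / (lam * alpha**2)

# G(y) = int_0^inf (e^{uy} - 1) u^{-1} e^{-Lambda(u)} du
G = trapezoid((np.exp(u * y - Lam) - np.exp(-Lam)) / u, u)

# Laplace method: log G(y) ~ sup_u (uy - Lambda(u)) = C y^{alpha/(alpha-1)}
sup_exponent = float((u * y - Lam).max())
print(np.log(G), sup_exponent)
```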
Hence, (45) is valid for $Q_t = \lambda t$ and $A_t = G(\bar{X}_t) - G(X_0)$, a continuous increasing process with $A_0 = 0$. By Lemma 2,

$$E\Big[\big(\log(1 + \lambda\tau)\big)^{p(1 - 1/\alpha)}\Big] \le c_p + C_p\,E\Big[\Big(\log\big(1 + G(\bar{X}_\tau) - G(X_0)\big)\Big)^{p(1 - 1/\alpha)}\Big].$$
Thus, the lower bound in (44) will hold if the inequality

$$\log\big(1 + G(y)\big) \le C_1 + C_2\,y^{\alpha/(\alpha - 1)}, \qquad y > 0,$$

is valid. The latter bound readily follows from the well-known asymptotic relation

$$G(y) = \exp\big\{C\,y^{\alpha/(\alpha - 1)}\big(1 + o(1)\big)\big\}, \qquad y \to \infty, \quad (48)$$

(see, e.g., [20, Chap. 3, Exercise 7.3]). The boundedness requirement for the stopping time $\tau$ is easily removed by applying the localization technique.
The upper bound in (44) is derived with the help of (34), which, jointly with the obvious equality $e^x = e^{x^+} - 1 + e^{-x^-}$, implies, for any bounded stopping time $\tau$,

$$E\big[G\big(X^+_\tau\big)\big] \le G(X_0) + \lambda E\tau + E\int_0^\infty \big(1 - e^{-uX^-_\tau}\big)\,u^{-1}e^{-u^\alpha/(\lambda\alpha^2)}\,du.$$

Since

$$E\int_0^\infty \big(1 - e^{-uX^-_\tau}\big)\,u^{-1}e^{-u^\alpha/(\lambda\alpha^2)}\,du \le E\big(X^-_\tau\big)\int_0^\infty e^{-u^\alpha/(\lambda\alpha^2)}\,du$$

and $EX^-_\tau \le |X_0| + C\,E\tau$ (see also the proof of Theorem 4), we find the following estimate:

$$E\big[G\big(X^+_\tau\big)\big] \le c + C\,E\tau.$$
Thus, (45) is valid with $A_t = c + Ct$ and $Q_t = G(X^+_t)$, where $Q_t$ is a nonnegative right-continuous process. By Lemma 2 (noting that $\sup_{t \le \tau} Q_t = G(\bar{X}^+_\tau)$, since $G$ is increasing),

$$E\Big[\Big(\log\big(1 + G(\bar{X}^+_\tau)\big)\Big)^{p(1 - 1/\alpha)}\Big] \le c_p + C_p\,E\Big[\big(\log(1 + \tau)\big)^{p(1 - 1/\alpha)}\Big],$$

and it remains only to notice that (48) provides the following bound:

$$C + \log\big(1 + G(y)\big) \ge C\,y^{\alpha/(\alpha - 1)}, \qquad y > 0.$$

Theorem 5 is proved.
Remark 6. For $\alpha = 2$ and $X_0 = 0$, the application of (44) to $\max_{t \le \tau} X_t$ and $\max_{t \le \tau}(-X_t)$ leads to (43) without specification of the constants $c_p$ and $C_p$.
Acknowledgments. The author is thankful to R. Elliot, B. Ergashev, R. Liptser,
E. Shinjikashvili, and A. N. Shiryaev for useful comments.
REFERENCES
[1] A. V. Skorohod, Random Processes with Independent Increments, Kluwer Academic Publishers Group, Dordrecht, 1991.
[2] K. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge, UK, 1999.
[3] D. Perry, W. Stadje, and S. Zacks, First-exit times for Poisson shot noise, Stoch. Models, 17 (2001), pp. 25–37.
[4] M. Grigoriu, Applied Non-Gaussian Processes, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[5] O. E. Barndorff-Nielsen and N. Shephard, Modelling by Lévy processes for financial econometrics, in Lévy Processes, Birkhäuser Boston, Boston, MA, 2001, pp. 283–318.
[6] M. O. Cáceres and A. A. Budini, The generalized Ornstein–Uhlenbeck process, J. Phys. A, 30 (1997), pp. 8427–8444.
[7] D. I. Hadjiev, The first passage problem for generalized Ornstein–Uhlenbeck processes with nonpositive jumps, in Séminaire de Probabilités, XIX, Lecture Notes in Math. 1123, Springer, Berlin, 1985, pp. 80–90.
[8] A. A. Novikov, On the first exit time of an autoregressive process beyond a level and an application to the change-point problem, Theory Probab. Appl., 35 (1990), pp. 269–279.
[9] A. A. Novikov and B. A. Èrgashev, Limit theorems for the time of crossing a level by an autoregressive process, Proc. Steklov Inst. Math., 4 (1994), pp. 169–186.
[10] A. Tsurui and S. Osaki, On a first-passage problem for a cumulative process with exponential decay, Stochastic Processes Appl., 4 (1976), pp. 79–88.
[11] S. E. Graversen and G. Peskir, Maximal inequalities for the Ornstein–Uhlenbeck process, Proc. Amer. Math. Soc., 128 (2000), pp. 3035–3041.
[12] I. Gradshteyn and I. Ryzhik, Table of Integrals, Series, and Products, Academic Press, New York, 1980.
[13] J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, Springer-Verlag, Berlin, 1987.
[14] L. A. Shepp, Explicit solutions to some problems of optimal stopping, Ann. Math. Statist., 40 (1969), pp. 993–1010.
[15] H. Robbins and D. Siegmund, Boundary crossing probabilities for the Wiener process and sample sums, Ann. Math. Statist., 41 (1970), pp. 1410–1429.
[16] A. A. Novikov, The martingale approach in problems on the time of the first crossing of nonlinear boundaries, Trudy Mat. Inst. Steklov, 158 (1981), pp. 130–152, 230 (in Russian).
[17] D. A. Darling and A. J. F. Siegert, The first-passage problem for a continuous Markov process, Ann. Math. Statist., 24 (1953), pp. 624–639.
[18] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, Springer-Verlag, Berlin, 1999.
[19] V. M. Zolotarev, One-dimensional Stable Distributions, Translations of Mathematical Monographs 65, American Mathematical Society, Providence, RI, 1986.
[20] F. W. J. Olver, Asymptotics and Special Functions, Academic Press, New York, 1974.
[21] K. Borovkov and A. Novikov, On a piecewise deterministic Markov process model, Statist. Probab. Lett., 53 (2001), pp. 421–428.
[22] A. Novikov, R. Melchers, E. Shinjikashvili, and N. Kordzakhia, First Passage Time of Filtered Poisson Processes with Exponential Shape Function, Research paper 109, Quantitative Finance Research Center, University of Technology, Sydney, Australia, 2003. Available online at http://www.business.uts.edu.au/qfrc/research papers/rp109.pdf.
[23] S. Wolfram, The Mathematica Book, 4th ed., Wolfram Media, Inc., Champaign, IL; Cambridge University Press, Cambridge, UK, 1999.