
Rate-optimal Tests for Jumps in Diffusion Processes

Taesuk Lee(a) and Werner Ploberger(b)
(a) Department of Economics, University of Auckland
(b) Department of Economics, Washington University in St. Louis

January 2011

Abstract

Suppose one has given a sample of high-frequency intra-day discrete observations of a continuous-time random process (e.g. stock market data) and wants to test for the presence of jumps. We show that the power of any test of this hypothesis depends on the frequency of observation. In particular, we show that if the process is observed at intervals of length $1/n$ and the instantaneous volatility of the process is given by $\sigma_t$, at best one can detect jumps of height no smaller than $\sigma_t\sqrt{2\log(n)/n}$. We construct a test which achieves this rate in the case of diffusion-type processes.

Keywords: High Frequency Data, Jump, Likelihood Test.

Further author information:
Taesuk Lee: Lecturer, E-mail: taesuk.lee@auckland.ac.nz
Werner Ploberger: Thomas H. Eliot Distinguished Professor, E-mail: wernerp@artsci.wustl.edu
1 Introduction

Continuous diffusion models have provided a simple, flexible, and powerful tool for analyzing economic and financial data since before high frequency data became available. Because data are only observed at discrete times, we do not have full information on the trajectory of the process. Therefore we may have modeling errors due to the discreteness of the observations, but this problem can be significantly mitigated with the high frequency data that technological progress has now made available.

High frequency data, however, generate their own challenges: we cannot be sure that the data generating process is continuous; there may be jumps. Since continuous diffusion models do not capture jumps, researchers must know whether the data contain jumps or not. Furthermore, many datasets (such as when returns are measured over short intervals - say 1 to 5 minutes) contain some contamination commonly called "market microstructure noise". Our aim in this paper is to propose an optimal test for the null hypothesis of continuous diffusion models against an alternative hypothesis of jump diffusion models while allowing for the presence of market microstructure noise in the data. Though several tests (Barndorff-Nielsen and Shephard (2006), hereafter BNS; Ait-Sahalia and Jacod (2009), hereafter AJ; and Lee and Mykland (2008), hereafter LM) have already been introduced, their power properties were not explicitly considered. In this paper, we derive a rate-optimal test valid under fairly general assumptions about the data generating process. Furthermore, we compare the power of our test with that of other competing tests.
2 Local Power Bound

As the null, we consider the usual (i.e., purely continuous) diffusion model:
$$dX_t = \mu_t\,dt + \sigma_t\,dW_t, \qquad (1)$$
where $W = (W_t : t \in [0,1])$ is a Wiener process and $\mu_t$ and $\sigma_t$ are non-anticipating random processes fulfilling the usual requirements of Ito calculus. Later on we will impose additional assumptions on $\mu$ and $\sigma$. (For example, we will assume them to be smooth to a certain extent, so that the process $X_t$ specified in equation (1) has certain "nice" properties.)

As the alternative, we want to consider jumps present in the diffusion model:
$$dX_t = \mu_t\,dt + \sigma_t\,dW_t + J_t\,d\nu_t,$$
where $J_t$ is a non-zero random variable whose absolute value specifies the jump size and $\nu_t$ is a counting process governing whether there is a jump or not at time $t$.

The problem, however, is that in empirical practice we cannot observe the entire process. Instead, it is observed only at the discrete times $t = i/n$, where $n$ is a positive integer denoting the "sample size" and $0 \le i \le n$.
The first test for this testing problem - and still the "gold standard" for all tests - was developed by Barndorff-Nielsen and Shephard (2002). Since the problem of detecting jumps is of enormous practical importance, further research has occurred in this field. An alternative test was developed by Ait-Sahalia and Jacod (2009), and an informal "testing procedure" was provided by Lee and Mykland (2008).

To the best of our knowledge, however, none of these contributions discussed the power of the tests. In this paper, we establish the following two results.
1. Clearly, when $n$, the number of observations, increases, we should expect our test to have "better" power. In particular, we want to consider the power against local alternatives for the jump processes. So we consider for each $n$ the alternative
$$J^{(n)}_t = c_n,$$
where we assume that $c_n$ is a sequence converging to zero, and assume the process $\nu_t$ remains (uniformly) bounded. (So we assume there is only a maximum number of jumps.) Let $\varepsilon > 0$ be arbitrary. Then we show that - even if we know that $\sigma_t = \sigma$ - it is impossible to construct tests that have nontrivial power against alternatives with
$$c_n = \sigma\,\frac{\sqrt{2(1-\varepsilon)\log n}}{\sqrt{n}}. \qquad (2)$$

2. As a second result, we analyze a class of tests very similar to Lee and Mykland (2008) and show that (even in the general case) with
$$c_n = \sigma_t\,\frac{\sqrt{2(1+\varepsilon)\log n}}{\sqrt{n}},$$
the power of the test converges to one. So in a certain way, our tests attain the "optimal rate". The main difference between our tests and the ones in Lee and Mykland (2008) is a different construction of the estimator for the volatility. We only use information from narrow time intervals, which might be useful when the volatility varies a lot.
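For intuition about the magnitudes this bound implies, the following sketch (our illustration, not part of the paper's formal analysis) evaluates the smallest detectable jump height $\sigma\sqrt{2\log(n)/n}$ for the sampling frequencies used in the simulations of Section 5, borrowing $\sigma^2 = 0.513$ from Model 1:

```python
import math

# Detection bound sigma * sqrt(2*log(n)/n) from equation (2) (with epsilon = 0),
# using sigma^2 = 0.513, the variance level of Model 1 in Section 5.
sigma = math.sqrt(0.513)

for n in [72, 288, 1440, 2880, 8640]:
    bound = sigma * math.sqrt(2 * math.log(n) / n)
    print(f"n = {n:5d}: smallest detectable jump ~ {bound:.4f}")
```

Doubling the sampling frequency shrinks the bound by slightly less than $\sqrt{2}$, because of the $\log n$ factor in the numerator.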
This is an advantage over the classical BNS or AJ tests: their local alternatives shrink with the order $n^{-1/4}$ (or - in the case of AJ - with the order of $n^{-1/2+1/p}$, where $p$ is a positive number determining the test statistic). Moreover, our test is able to deal with simple mis-specification due to microstructure noise.

Let us first deal with our first assertion. Assume we are in the simplest case, namely $\mu_t = 0$ and $\sigma_t = 1$, so our underlying process $X_t$ is a Wiener process. Then let us assume that - under the alternative - we only have one jump, and the time of the jump is distributed uniformly on the interval $[0,1]$. We first will show that even under these rather ideal conditions we will be unable to construct tests with nontrivial power if the $c_n$ follow (2).
Theorem 1 We want to test the null of $X_t$ being a Wiener process $W_t$ of known variance against the alternative of
$$X_t = W_t + c_n\,1\{t \ge \tau\},$$
where $\tau$ is an independent random variable following a uniform distribution. Suppose we observe the process $X_t$ only at the time points $0, 1/n, 2/n, \ldots, 1$. Suppose $c_n$ follows (2) (or is smaller than this bound). Then it is impossible to construct nontrivial tests.
Proof. We did assume the variance of the Wiener process $W$ to be known. Without loss of generality, we can assume this variance to be 1. Let $P_n$ be the probability measure of $(X_0, X_{1/n}, X_{2/n}, \ldots, X_1)$ under the null, and $Q_n$ be the measure under the alternative. Let the $z_i$ be defined as
$$z_i = \left(X_{i/n} - X_{(i-1)/n}\right)\sqrt{n}.$$
Then we can easily see that the $z_i$ are i.i.d. standard normal, and that
$$\frac{dQ_n}{dP_n} = \frac{1}{n}\sum_{i=1}^{n}\exp\left((c_n\sqrt{n})\,z_i - \frac{1}{2}(c_n\sqrt{n})^2\right).$$
Since each of the $z_i$ is standard normal, the expectation of each $\exp\left((c_n\sqrt{n})\,z_i - \frac{1}{2}(c_n\sqrt{n})^2\right)$ equals one.
Since we are interested in small $\varepsilon > 0$ (the smaller we choose $\varepsilon$, the bigger are our jumps), we can maintain the assumption that
$$\varepsilon < \frac{2}{3}. \qquad (3)$$
We are convinced that our result holds for larger $\varepsilon$, too. Assuming (3), however, greatly simplifies one part of the proof.

Let us first show that
$$\frac{dQ_n}{dP_n} \to 1 \qquad (4)$$
in probability. The density $\frac{dQ_n}{dP_n}$ is a nonnegative random variable. Hence we can show (4) by showing that its Laplace transform obeys
$$E\left[\exp\left(-s\,\frac{dQ_n}{dP_n}\right)\right] \to \exp(-s)$$
for all positive $s$, which is equivalent to
$$\log E\left[\exp\left(-s\,\frac{dQ_n}{dP_n}\right)\right] \to -s.$$
With (2), we can simplify the expression for the density. One can easily see that
$$\frac{dQ_n}{dP_n} = \frac{1}{n^{2-\varepsilon}}\sum_{i=1}^{n}\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right).$$
Since each of the $z_i$ is standard normal, and the $z_i$ are independent and identically distributed, we can simplify the Laplace transform:
$$\log E\left[\exp\left(-s\,\frac{dQ_n}{dP_n}\right)\right] = n\,\log E\left[\exp\left(-\frac{s}{n^{2-\varepsilon}}\,\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right)\right)\right].$$
Hence we have to show that
$$n\,\log E\left[\exp\left(-\frac{s}{n^{2-\varepsilon}}\,\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right)\right)\right] \to -s. \qquad (5)$$
It is well known that
$$\lim_{x\to 1}\frac{\log x}{x-1} = 1.$$
Using that result, it can easily be established that (5) is equivalent to
$$\frac{n}{s}\,E\left[1 - \exp\left(-\frac{s}{n^{2-\varepsilon}}\,\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right)\right)\right] \to 1,$$
which we will prove to be correct. Let $\Phi(\cdot)$ be the cumulative distribution function (for short, cdf) of the standard normal. Then the cdf of
$$\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right)$$
equals
$$\Phi\left(\frac{\log x}{\sqrt{2(1-\varepsilon)\log n}}\right).$$
Hence
$$E\left[1 - \exp\left(-\frac{s}{n^{2-\varepsilon}}\,\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right)\right)\right] = \int_0^\infty\left(1 - \exp\left(-\frac{s}{n^{2-\varepsilon}}\,x\right)\right)d\Phi\left(\frac{\log x}{\sqrt{2(1-\varepsilon)\log n}}\right)$$
$$= \frac{1}{\sqrt{2\pi}}\int_0^\infty\left(1 - \exp\left(-\frac{s}{n^{2-\varepsilon}}\,x\right)\right)\exp\left(-\frac{(\log x)^2}{4(1-\varepsilon)\log n}\right)\frac{1}{x}\,\frac{1}{\sqrt{2(1-\varepsilon)\log n}}\,dx.$$
Let us re-scale this expression and define $S_n$ by
$$S_n = \frac{n}{s}\,E\left[1 - \exp\left(-\frac{s}{n^{2-\varepsilon}}\,\exp\left(z_i\sqrt{2(1-\varepsilon)\log n}\right)\right)\right]$$
$$= \frac{1}{\sqrt{2\pi}}\,\frac{n^{-1+\varepsilon}}{\sqrt{2(1-\varepsilon)\log n}}\int_0^\infty\frac{1-\exp\left(-n^{\varepsilon-2}s\,x\right)}{n^{\varepsilon-2}s\,x}\,\exp\left(-\frac{(\log x)^2}{4(1-\varepsilon)\log n}\right)dx.$$
Then we have to show that
$$S_n \to 1. \qquad (6)$$
Substituting
$$y = n^{\varepsilon-2}s\,x$$
yields
$$S_n = \frac{1}{\sqrt{2\pi}}\,\frac{n^{-1+\varepsilon}}{\sqrt{2(1-\varepsilon)\log n}}\int_0^\infty\frac{1-\exp(-y)}{y}\,\exp\left(-\frac{\left(\log y + (2-\varepsilon)\log n - \log s\right)^2}{4(1-\varepsilon)\log n}\right)\frac{n^{2-\varepsilon}}{s}\,dy$$
$$= \frac{1}{\sqrt{2\pi}}\,\frac{n}{\sqrt{2(1-\varepsilon)\log n}}\,\frac{1}{s}\int_0^\infty\frac{1-\exp(-y)}{y}\,\exp\left(-\frac{\left(\log y + (2-\varepsilon)\log n - \log s\right)^2}{4(1-\varepsilon)\log n}\right)dy.$$
Hence we will have to evaluate
$$\int_0^\infty\frac{1-\exp(-y)}{y}\,\exp\left(-\frac{\left(\log y + (2-\varepsilon)\log n - \log s\right)^2}{4(1-\varepsilon)\log n}\right)dy. \qquad (7)$$
Let us first deal with the second factor. We can split it up into three factors:
$$\exp\left(-\frac{\left(\log y + (2-\varepsilon)\log n - \log s\right)^2}{4(1-\varepsilon)\log n}\right) = \exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)\exp\left(-\frac{(\log y)\left((2-\varepsilon)\log n - \log s\right)}{2(1-\varepsilon)\log n}\right)\exp\left(-\frac{\left((2-\varepsilon)\log n - \log s\right)^2}{4(1-\varepsilon)\log n}\right).$$
The last factor is a constant, so when evaluating (7) we can take it outside the integral. Moreover, we can easily see that with
$$C_n = \exp\left(-\frac{\left((2-\varepsilon)\log n - \log s\right)^2}{4(1-\varepsilon)\log n}\right)$$
we can simplify the expression to
$$C_n = n^{-\frac{(2-\varepsilon)^2}{4(1-\varepsilon)}}\;s^{\frac{2-\varepsilon}{2(1-\varepsilon)}}\;\exp\left(-\frac{(\log s)^2}{4(1-\varepsilon)\log n}\right).$$
As for the second factor, we can see that it equals $y^{-D_n}$, where
$$D_n = \frac{2-\varepsilon}{2(1-\varepsilon)} - \frac{\log s}{2(1-\varepsilon)\log n}.$$
So
$$S_n = \frac{1}{\sqrt{2\pi}}\,\frac{n}{\sqrt{2(1-\varepsilon)\log n}}\,\frac{1}{s}\,C_n\int_0^\infty\frac{1-\exp(-y)}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy. \qquad (8)$$
Hence we have to evaluate
$$\int_0^\infty\frac{1-\exp(-y)}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy \qquad (9)$$
$$= \int_0^\infty\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy \qquad (10)$$
$$\;+\;\int_0^\infty y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy. \qquad (11)$$
The second part can be evaluated rather easily: substitute
$$z = \frac{1}{\sqrt{2(1-\varepsilon)\log n}}\,\log y$$
and therefore
$$dz = \frac{1}{\sqrt{2(1-\varepsilon)\log n}}\,\frac{dy}{y}.$$
Then
$$\int_0^\infty y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy \qquad (12)$$
$$= \sqrt{2(1-\varepsilon)\log n}\int_{-\infty}^{\infty}\exp\left(z\,(-D_n+1)\sqrt{2(1-\varepsilon)\log n}\right)\exp\left(-\frac{z^2}{2}\right)dz.$$
Then we have
$$\int_{-\infty}^{\infty}\exp\left(z\,(-D_n+1)\sqrt{2(1-\varepsilon)\log n}\right)\exp\left(-\frac{z^2}{2}\right)dz$$
$$= \exp\left(\left((-D_n+1)\sqrt{2(1-\varepsilon)\log n}\right)^2\Big/2\right)\int_{-\infty}^{\infty}\exp\left(-\left(z - (-D_n+1)\sqrt{2(1-\varepsilon)\log n}\right)^2\Big/2\right)dz$$
$$= \sqrt{2\pi}\,\exp\left((-D_n+1)^2\,(1-\varepsilon)\log n\right).$$
Since
$$(-D_n+1)^2(1-\varepsilon) = \left(1 - \frac{2-\varepsilon}{2(1-\varepsilon)} + \frac{\log s}{2(1-\varepsilon)\log n}\right)^2(1-\varepsilon) = \left(-\frac{\varepsilon}{2(1-\varepsilon)} + \frac{\log s}{2(1-\varepsilon)\log n}\right)^2(1-\varepsilon)$$
$$= \frac{\varepsilon^2}{4(1-\varepsilon)} - \frac{\log s}{\log n}\,\frac{\varepsilon}{2(1-\varepsilon)} + \frac{(\log s)^2}{(\log n)^2}\,\frac{1}{4(1-\varepsilon)},$$
we have
$$\exp\left((-D_n+1)^2(1-\varepsilon)\log n\right) = n^{\frac{\varepsilon^2}{4(1-\varepsilon)}}\;s^{-\frac{\varepsilon}{2(1-\varepsilon)}}\;\exp\left(\frac{1}{\log n}\,\frac{(\log s)^2}{4(1-\varepsilon)}\right).$$
Hence (using (12)) we can conclude that for all $s > 0$,
$$\lim_{n\to\infty}\frac{\displaystyle\int_0^\infty y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy}{n^{\frac{\varepsilon^2}{4(1-\varepsilon)}}\;s^{-\frac{\varepsilon}{2(1-\varepsilon)}}\;\sqrt{2\pi}\,\sqrt{2(1-\varepsilon)\log n}} = 1. \qquad (13)$$
We now return to our original task, namely the analysis of (9), which is the sum of (10) and (11). We can easily see from (13) that for every fixed $s > 0$, (11) diverges to infinity as $n \to \infty$. We now will show that (10) remains $O(1)$. First of all let us observe that
$$D_n \to \frac{2-\varepsilon}{2(1-\varepsilon)}. \qquad (14)$$
As $\varepsilon > 0$,
$$\frac{2-\varepsilon}{2(1-\varepsilon)} > 1.$$
Our assumption (3), namely that $\varepsilon < 2/3$, implies that
$$\frac{2-\varepsilon}{2(1-\varepsilon)} < 2.$$
So the limit of the $D_n$ lies between 1 and 2. Hence we can find constants $\alpha$, $\beta$ with
$$1 < \alpha < \beta < 2$$
so that for all but finitely many $n$,
$$\alpha < D_n < \beta. \qquad (15)$$
Without loss of generality we can assume that (15) holds for all $n$. Let us now analyze the integrand in (10):
$$\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right). \qquad (16)$$
The first factor,
$$\psi(y) = \frac{1-\exp(-y)-y}{y},$$
is easily seen to be uniformly bounded for all nonnegative real $y$. So there exists an $M$ such that
$$|\psi(y)| \le M$$
for all nonnegative $y$. Using the power series representation for $\exp(-y)$, one can easily see that $\psi(\cdot)$ is an analytic function for all $y$ and
$$\psi(0) = 0.$$
Since any analytic function has derivatives bounded on any compact set, we may conclude that for $0 \le y \le 1$,
$$|\psi(y)| \le C\,y$$
for some universal constant $C$. The third factor is easily seen to have absolute value smaller than 1.
Let us now distinguish two cases. For $y \le 1$,
$$\left|\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)\right| \le C\,y^{1-\beta}.$$
Since $\beta < 2$,
$$\left|\int_0^1\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy\right| \le C\int_0^1 y^{1-\beta}\,dy = C\,\frac{1}{2-\beta}. \qquad (17)$$
For $y \ge 1$,
$$\left|\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)\right| \le M\,y^{-\alpha},$$
and since $\alpha > 1$,
$$\left|\int_1^\infty\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy\right| \le M\int_1^\infty y^{-\alpha}\,dy = M\,\frac{1}{\alpha-1}.$$
Hence we may conclude that
$$\left|\int_0^\infty\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy\right|$$
remains uniformly bounded in $n$. Therefore for all $s > 0$,
$$\lim_{n\to\infty}\frac{\displaystyle\int_0^\infty\frac{1-\exp(-y)-y}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy}{\sqrt{2(1-\varepsilon)\log n}\;\sqrt{2\pi}\;n^{\frac{\varepsilon^2}{4(1-\varepsilon)}}\;s^{-\frac{\varepsilon}{2(1-\varepsilon)}}} = 0.$$
Now we can combine this result with (13) and (9) and conclude that
$$\lim_{n\to\infty}\frac{\displaystyle\int_0^\infty\frac{1-\exp(-y)}{y}\;y^{-D_n}\,\exp\left(-\frac{(\log y)^2}{4(1-\varepsilon)\log n}\right)dy}{\sqrt{2(1-\varepsilon)\log n}\;\sqrt{2\pi}\;n^{\frac{\varepsilon^2}{4(1-\varepsilon)}}\;s^{-\frac{\varepsilon}{2(1-\varepsilon)}}} = 1.$$
We can now use this limit to characterize $S_n$ defined in (8):
$$\lim_{n\to\infty}\frac{S_n}{\dfrac{1}{\sqrt{2\pi}}\,\dfrac{n}{s\,\sqrt{2(1-\varepsilon)\log n}}\,C_n\;\sqrt{2(1-\varepsilon)\log n}\;\sqrt{2\pi}\;n^{\frac{\varepsilon^2}{4(1-\varepsilon)}}\;s^{-\frac{\varepsilon}{2(1-\varepsilon)}}} = \lim_{n\to\infty}\frac{S_n}{n^{1+\frac{\varepsilon^2}{4(1-\varepsilon)}}\;s^{-1-\frac{\varepsilon}{2(1-\varepsilon)}}\;C_n} = 1. \qquad (18)$$
Since the $C_n$ were defined as
$$C_n = n^{-\frac{(2-\varepsilon)^2}{4(1-\varepsilon)}}\;s^{\frac{2-\varepsilon}{2(1-\varepsilon)}}\;\exp\left(-\frac{(\log s)^2}{4(1-\varepsilon)\log n}\right),$$
we can easily see that the denominator of the expression in (18) converges to one. But this implies that
$$\lim_{n\to\infty}S_n = 1,$$
which is exactly (6), the result we wanted to prove. Hence the Laplace transforms
$$E\left[\exp\left(-s\,\frac{dQ_n}{dP_n}\right)\right]$$
of the density ratios converge to $\exp(-s)$, which is the Laplace transform of a measure concentrated at 1. We therefore have shown that the density ratios
$$\frac{dQ_n}{dP_n}$$
converge in distribution to a constant, namely 1 (Theorem 2, p. 431, Feller (1971)). Therefore they converge in probability to this constant, too. Hence for an arbitrary $\eta > 0$,
$$P_n\left[\left|\frac{dQ_n}{dP_n}-1\right| > \eta\right] \to 0.$$
Now let $A_n$ be a sequence of events. Then we have
$$(1-\eta)\,P_n(A_n) - P_n\left[\left|\frac{dQ_n}{dP_n}-1\right| > \eta\right] < Q_n(A_n) < (1+\eta)\,P_n(A_n) + P_n\left[\left|\frac{dQ_n}{dP_n}-1\right| > \eta\right].$$
Since $\eta$ was arbitrary, we can conclude that
$$P_n(A_n) - Q_n(A_n) \to 0.$$
Since $A_n$ is an arbitrary sequence of events, we can conclude that the total variation distance between $P_n$ and $Q_n$ converges to zero; hence for all measurable functions $\varphi_n$ with $0 \le \varphi_n \le 1$ we have
$$\int\varphi_n\,dP_n - \int\varphi_n\,dQ_n \to 0.$$
But this is exactly what we wanted to show: for every sequence of tests, the power under the null ($P_n$) is the same as under the alternative ($Q_n$).
Now we want to present a test statistic for which we will show that its local power attains this bound. The preceding result indicates that for fixed $c_n$ our best statistic is an exponential sum of the $z_i$. We should, however, keep in mind that our $z_i$ are increments over shorter and shorter time intervals. In order to consider relevant alternatives, we might be interested in alternatives where $c_n\sqrt{n}$ becomes "large". In this case, the test statistic gives more and more influence to bigger values. Continuing with this line of reasoning, it may be a good idea to look mainly at the "largest" absolute values of the increments of $X_t$: the standard theory of diffusion processes guarantees that, when divided by $\sigma_t$, these increments are approximately normal. Next, because we do not know $\sigma_t$, we have to estimate it. As $\sigma_t$ is time-varying, a moving average of the squares of the increments seems to be a natural candidate estimator. This leads us to propose the following test statistic: define (for an arbitrary $n$) the return $r_i = r_{i,n}$ by
$$r_i = r_{i,n} = X_{i/n} - X_{(i-1)/n}.$$
Then choose an integer $k$ (the "length of the window") and reject the null whenever
$$t_n = \sup_i\frac{r_i^2}{\left(r_{i-1}^2 + r_{i-2}^2 + \cdots + r_{i-k}^2\right)/k}$$
is "too large". This is quite analogous to the test statistics of Lee and Mykland (2008): we standardize the returns by an estimator for $\sigma^2(t, X_t)$. We use the usual quadratic estimator instead of the bipower estimator. One might argue that jumps could unduly affect the properties of our estimator. We think, however, that the much simpler form is justified, essentially for the following two reasons:
1. We assume that the jumps are separated events: before the first jump occurs, our estimator for $\sigma^2(t, X_t)$ is not influenced by it.

2. We only use a window of length $k$ to estimate $\sigma^2(t, X_t)$. So a jump will only influence a small number of estimated values of $\sigma^2(t, X_t)$. Our test would only get distorted if we had two (or more) jumps within an interval of length $k/n$, an event whose probability we assume converges to zero.

The main reason, however, for using this specific estimator is convenience. Specifically, only Lemma 6 is essential to prove the rate-optimality of our proposed test. We think analogous results will hold for more general classes of estimators.
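A minimal code sketch of this construction may help fix ideas; the window length $k$, the seed, and the jump size below are our illustrative choices, not values prescribed by the text:

```python
import numpy as np

def lp_statistic(r, k):
    """Proposed statistic: each squared return is standardized by the
    average of the k preceding squared returns; the maximum of these
    ratios over i = k+1, ..., n is returned."""
    r2 = np.asarray(r, dtype=float) ** 2
    # moving-average variance estimate built from the k returns before index i
    sigma2_hat = np.convolve(r2, np.ones(k) / k, mode="valid")[:-1]
    return (r2[k:] / sigma2_hat).max()

# toy example: i.i.d. N(0, 1/n) returns, then one jump added
rng = np.random.default_rng(0)
n, k = 2880, 16
r = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)
print(lp_statistic(r, k))                    # no jump: on the order of 2*log(n)
r[1500] += 5 * np.sqrt(2 * np.log(n) / n)    # jump well above the optimal rate
print(lp_statistic(r, k))                    # jump: orders of magnitude larger
```

Because the window only looks backward, a jump inflates the variance estimate of at most the $k$ subsequent indices, matching reason 2 above.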
3 Critical Values and Local Power of Our Test

Let $z_i$, $i = 1, \ldots, n$ be a sequence of independent, identically distributed standard normal random variables. Assume that for each $n$ we have given a $k = k(n)$, and let us denote by $\mathcal{F}_i$ the $\sigma$-algebra generated by $z_i, z_{i-1}, \ldots$. Then let us define
$$u_i = \sum_{j=1}^{k} z_{i-j}^2, \qquad \hat\sigma_i^2 = u_i/k,$$
and
$$t_i = z_i^2\big/\hat\sigma_i^2.$$
For the computation of the critical values the following lemma is very helpful.
Lemma 2 Suppose that
$$k = o(n),$$
but also
$$k \ge 2\log n.$$
Define for each $\alpha > 0$ a $B_n = B_n(\alpha)$ such that
$$2\,E\left[\exp\left(-B_n\hat\sigma_i^2/2\right)\Big/\sqrt{2\pi B_n\hat\sigma_i^2}\right] = \alpha/n. \qquad (19)$$
Then
$$P\left[\max_{i=k+1,\ldots,n} t_i > B_n\right] \to 1 - \exp(-\alpha) \quad\text{as } n \to \infty.$$
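Equation (19) defines $B_n$ only implicitly, but it is easy to solve numerically: under the null $k\hat\sigma_i^2$ is $\chi^2_k$, and the left-hand side of (19) is decreasing in $B_n$, so bisection against a Monte Carlo estimate of the expectation works. A sketch with illustrative values of $n$, $k$, and $\alpha$ (note that at such moderate sample sizes the resulting $B_n$ still sits well above its asymptotic level $2\log n$ mentioned later in this section):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, alpha = 2880, 16, 0.05

# draws of sigma_hat^2: k * sigma_hat^2 is chi-square with k degrees of freedom
sigma2 = rng.chisquare(k, size=200_000) / k

def lhs(B):
    # Monte Carlo estimate of the left-hand side of (19)
    return np.mean(2 * np.exp(-B * sigma2 / 2) / np.sqrt(2 * np.pi * B * sigma2))

# bisection: lhs is decreasing in B, so bracket and halve
lo, hi = 1.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if lhs(mid) > alpha / n else (lo, mid)

B_n = (lo + hi) / 2
print(B_n, 2 * np.log(n))
```

The gap between $B_n$ and $2\log n$ at moderate $k$ comes from the left tail of the $\chi^2_k$ variance estimate, which dominates the expectation in (19).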
Proof. First of all we observe that $P\left(\max_{i=k+1,\ldots,n} t_i > B_n\right) = 1 - P\left(\max_{i=k+1,\ldots,n} t_i \le B_n\right)$ and
$$P\left[\max_{i=k+1,\ldots,n} t_i \le B_n\right] = P\left[\bigcap_{i=k+1,\ldots,n}\left\{t_i \le B_n\right\}\right].$$
It can immediately be seen that the $t_i$ are $\mathcal{F}_i$-measurable. We will repeatedly apply the optional sampling theorem for various stopping times. Let $\varepsilon > 0$ be arbitrary, and let $\Lambda(\varepsilon)$ be defined as in Appendix A ((28), (29), (30)).

Let us define the stopping time $\tau$ as follows: let $\tau$ be the first index $m \le n-1$ such that
$$-\sum_{j=k+1}^{m+1}\log P\left[t_j \le B_n \mid \mathcal{F}_{j-1}\right] > \alpha(1+\varepsilon)^3 \qquad\text{or}\qquad \hat\sigma_{m+1}^2 < \Lambda(\varepsilon)^2\big/B_n,$$
and $\tau = n$ if no such $m$ exists.

Observe that $\tau$ is indeed a stopping time adapted to $\mathcal{F}_i$: since for $i \le m+1$ the probabilities $P\left[t_i \le B_n \mid \mathcal{F}_{i-1}\right]$ as well as $\hat\sigma_{m+1}^2$ are $\mathcal{F}_m$-measurable, the event
$$\{\tau = m\} \in \mathcal{F}_m.$$
We contend that
$$\lim_{n\to\infty}P\left[\tau = n\right] = 1. \qquad (20)$$
To demonstrate (20), it is sufficient to first show that
$$P\left[\inf_i\hat\sigma_i^2 > \Lambda(\varepsilon)^2\big/B_n\right] \to 1 \qquad (21)$$
and then, because $\log P\left(t_j \le B_n \mid \mathcal{F}_{j-1}\right) \le 0$, that
$$P\left[\left\{-\sum_{j=k+1}^{n}\log P\left(t_j \le B_n \mid \mathcal{F}_{j-1}\right) \le \alpha(1+\varepsilon)^3\right\}\cap\left\{\inf_i\hat\sigma_i^2 > \Lambda(\varepsilon)^2\big/B_n\right\}\right] \to 1. \qquad (22)$$
Equation (21) is an immediate consequence of Lemma 6, which shows that
$$P\left[\inf_i\hat\sigma_i^2 \le \Lambda(\varepsilon)^2\big/B_n\right] \le n\,P\left[\hat\sigma_i^2 \le \Lambda(\varepsilon)^2\big/B_n\right] \to 0.$$
To prove that (22) is valid, first observe that
$$P\left[t_i \le B_n \mid \mathcal{F}_{i-1}\right] = 2\,\Phi\left(\sqrt{B_n\hat\sigma_i^2}\right) - 1.$$
If $\hat\sigma_i^2 > \Lambda(\varepsilon)^2/B_n$, we can use inequality (30) to conclude that
$$-\log\left(2\,\Phi\left(\sqrt{B_n\hat\sigma_i^2}\right)-1\right) \le 2(1+\varepsilon)^2\,\exp\left(-B_n\hat\sigma_i^2/2\right)\Big/\sqrt{2\pi B_n\hat\sigma_i^2}.$$
Hence
$$P\left[\left\{-\sum_{j=k+1}^{n}\log P\left[t_j \le B_n \mid \mathcal{F}_{j-1}\right] > \alpha(1+\varepsilon)^3\right\}\cap\left\{\inf_i\hat\sigma_i^2 > \Lambda(\varepsilon)^2\big/B_n\right\}\right]$$
$$\le P\left[\left\{2(1+\varepsilon)^2\sum_{j=k+1}^{n}\exp\left(-B_n\hat\sigma_j^2/2\right)\Big/\sqrt{2\pi B_n\hat\sigma_j^2} > \alpha(1+\varepsilon)^3\right\}\cap\left\{\inf_i\hat\sigma_i^2 > \Lambda(\varepsilon)^2\big/B_n\right\}\right].$$
Since we already know that $P\left[\inf_i\hat\sigma_i^2 > \Lambda(\varepsilon)^2/B_n\right] \to 1$, it is sufficient to show that
$$P\left[2\sum_{j=k+1}^{n}\exp\left(-B_n\hat\sigma_j^2/2\right)\Big/\sqrt{2\pi B_n\hat\sigma_j^2} \le \alpha(1+\varepsilon)\right] \to 1. \qquad (23)$$
Let us now introduce the terms $E_j$ by
$$E_j = 2\,\exp\left(-B_n\hat\sigma_j^2/2\right)\Big/\sqrt{2\pi B_n\hat\sigma_j^2}.$$
Then we can easily see that (23) is fulfilled if
$$\sum_{j=k+1}^{n}E_j \to \alpha \qquad (24)$$
in probability. By our definition of $B_n$, $E[E_j] = \alpha/n$. Moreover, we know that $\hat\sigma_j^2$ is distributed according to a scaled $\chi^2$ distribution with $k$ degrees of freedom. Hence it is an elementary exercise to show that $E[E_j^2] = O(1/n^2)$, and that $E_j$ and $E_l$ are independent if
$$|j - l| > k + 1.$$
As $k/n \to 0$, we can easily see that the variance of $\sum E_j$ converges to zero. This establishes (20).
Now it is rather easy to establish the claim stated in our lemma: we have to show that
$$P\left[\bigcap_{i=k+1,\ldots,n}\left\{t_i \le B_n\right\}\right] \to \exp(-\alpha).$$
Using (20), it is sufficient to show
$$P\left[\bigcap_{i\le\tau}\left\{t_i \le B_n\right\}\right] \to \exp(-\alpha).$$
Trivially,
$$E\left[\frac{1\left\{t_i \le B_n\right\}}{P\left(t_i \le B_n \mid \mathcal{F}_{i-1}\right)}\,\Bigg|\,\mathcal{F}_{i-1}\right] = 1.$$
A straightforward argument, perfectly analogous to the optional sampling theorem, yields
$$E\left[\frac{\prod_{i\le\tau}1\left\{t_i \le B_n\right\}}{\prod_{i\le\tau}P\left[t_i \le B_n \mid \mathcal{F}_{i-1}\right]}\right] = 1. \qquad (25)$$
According to the definition of $\tau$,
$$(1+\varepsilon)^2\sum_{j=k+1}^{\tau}E_j \;\ge\; -\log\prod_{i\le\tau}P\left[t_i \le B_n \mid \mathcal{F}_{i-1}\right] \;\ge\; (1-\varepsilon)^2\sum_{j=k+1}^{\tau}E_j \qquad (26)$$
and
$$-\log\prod_{i\le\tau}P\left[t_i \le B_n \mid \mathcal{F}_{i-1}\right] \le \alpha(1+\varepsilon)^3. \qquad (27)$$
Moreover, (20) implies that $P\left[\sum_{j=k+1}^{\tau}E_j = \sum_{j=k+1}^{n}E_j\right] \to 1$. Therefore $\sum_{j=k+1}^{\tau}E_j \to \alpha$ as well. Hence it can be seen that (27) and (26) allow us to deduce from (25) that
$$\exp\left(-(1+\varepsilon)^2\alpha\right) \le \liminf P\left[\bigcap_{i\le\tau}\left\{t_i \le B_n\right\}\right] \le \limsup P\left[\bigcap_{i\le\tau}\left\{t_i \le B_n\right\}\right] \le \exp\left(-(1-\varepsilon)^2\alpha\right).$$
Now one can easily see that (20) allows us to replace $\tau$ with $n$ in the preceding inequalities; since $\varepsilon > 0$ was arbitrary, this proves our lemma.
So far, we have computed the distribution of our test statistic for a very specific case, namely when $\mu_t = 0$ and $\sigma_t = 1$. We now have to show that the general case given by (1) can be reduced to the specific case discussed above. To achieve this goal, we have to impose some stronger assumptions on $\mu_t$ and $\sigma_t$.

Theorem 3 Suppose $\mu_t$ and $\log\sigma_t$ are diffusion processes with a.s. uniformly bounded diffusion coefficients. Then - provided that the conditions of Lemma 2 are satisfied and that $k_n/\log n$ converges to a constant different from 0 - the difference between the test statistic applied to $X_t$ and to $W_t$ converges to zero in probability.

Proof. The proof is rather technical and is provided in Appendix B.
Lemma 2 and Theorem 3 establish that our construct - rejecting when $\max_i t_i$ is larger than $B_n$ - is indeed a test. Moreover, it is an easy but tedious exercise to establish the order of magnitude of $B_n$: because the distribution of $\hat\sigma_i^2$ is a scaled $\chi^2$ distribution, the left-hand side of equation (19) can be evaluated using the gamma function. It is then an easy task to show that
$$B_n\big/(2\log n) \to 1.$$
Finally, it is elementary to establish our assertion that the test is consistent against jumps of the order
$$\sigma_t\,\frac{\sqrt{2(1+\varepsilon)\log n}}{\sqrt{n}}.$$
4 Power of Some Competing Tests

As mentioned in the introduction, several tests for this testing problem have already been published. Two of the best-known ones are the tests of BNS and of AJ. These tests are based on the following test statistics:
Definition 4 (BNS test statistic, Barndorff-Nielsen and Shephard (2006))
$$t^{lin}_{BNS} = \frac{\sqrt{n}\left(RV - BPV\right)}{\sqrt{\theta\displaystyle\int_0^1\sigma_u^4\,du}}, \qquad t^{ratio}_{BNS} = \frac{\sqrt{n}\left(1 - BPV/RV\right)}{\sqrt{\theta\,\max\left(1,\;\displaystyle\int_0^1\sigma_u^4\,du\Big/\Big(\displaystyle\int_0^1\sigma_u^2\,du\Big)^2\right)}},$$
where $\theta = \pi^2/4 + \pi - 5$,
$$RV = \sum_{j=1}^{1/\Delta}r_{t+j\Delta}^2 \qquad\text{and}\qquad BPV = \mu_1^{-2}\sum_{j=2}^{1/\Delta}\left|r_{t+j\Delta}\right|\left|r_{t+(j-1)\Delta}\right|,$$
with $\mu_1 = E|Z|$, $Z \sim N(0,1)$.
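For concreteness, here is a small sketch of the two ingredients $RV$ and $BPV$ (our simplified illustration, not the exact finite-sample implementation of BNS):

```python
import numpy as np

def realized_var(r):
    return np.sum(r ** 2)

def bipower_var(r):
    # mu_1 = E|Z| = sqrt(2/pi); the mu_1^{-2} factor makes BPV an estimator
    # of integrated variance that is robust to a single jump
    mu1 = np.sqrt(2 / np.pi)
    return np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2

rng = np.random.default_rng(2)
n = 2880
r = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)   # sigma = 1, no drift
print(realized_var(r), bipower_var(r))           # both near the true value 1
r[100] += 0.5                                    # add one jump
print(realized_var(r) - bipower_var(r))          # RV absorbs the jump, BPV much less
```

The difference $RV - BPV$ is what the linear BNS statistic scales up by $\sqrt{n}$: it is near zero for a continuous path and picks up the squared jump otherwise.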
Another alternative test has recently been proposed by Ait-Sahalia and Jacod. This test is based on the $p$th power variation of the process $X_t$, and it compares the estimates of this variation for different time scales.

Definition 5 (AJ test statistic, Ait-Sahalia and Jacod (2009))
$$t^{AJ}_{p,k,\Delta} = \left(k^{p/2-1} - \hat S(p,k,\Delta)\right)\Big/\sqrt{\hat V_{p,k}},$$
where $p > 3$, $k \ge 2$,
$$\hat S(p,k,\Delta) = \hat B(p,k\Delta)\big/\hat B(p,\Delta), \qquad \hat B(p,\Delta)_t = \sum_{i=1}^{[t/\Delta]}\left|r_{t+i\Delta}\right|^p,$$
and $\hat V_{p,k}$ is the variance of $\hat S(p,k,\Delta)$ under the null.
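The scale-comparison idea behind $\hat S(p,k,\Delta)$ can be sketched as follows (our simplified illustration, omitting the truncation and the variance estimation of the full AJ procedure; $p$, $k$ and the jump size are illustrative):

```python
import numpy as np

def power_var(r, p):
    return np.sum(np.abs(r) ** p)

def aj_ratio(r, p=4, k=2):
    """B(p, k*Delta)/B(p, Delta): tends to k^{p/2-1} for a continuous path,
    but to 1 when a single large jump dominates both sums."""
    r_coarse = r.reshape(-1, k).sum(axis=1)   # returns on the coarser time scale
    return power_var(r_coarse, p) / power_var(r, p)

rng = np.random.default_rng(3)
n, p, k = 2880, 4, 2
r = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)
print(aj_ratio(r, p, k))   # near k^{p/2-1} = 2 without a jump
r[100] += 2.0              # one dominating jump
print(aj_ratio(r, p, k))   # pulled toward 1 by the jump
```

The jump survives aggregation to the coarser scale unchanged, so it contributes the same amount to both power variations and drags their ratio toward 1.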
The behavior of both sets of test statistics under our alternatives can easily be analyzed. We just add a specific jump to one of the returns. One can easily see that if the jump is of order $o\left(n^{-1/4}\right)$, the difference between the BNS test statistic under the null and under the alternative converges to 0. Hence their test will be much less powerful against the alternatives we consider. The same is true for the AJ tests: after some calculations, we can see that the corresponding bound is $o\left(n^{-1/2+1/p}\right)$. Despite the fact that both tests have "low" power against "our" alternatives, it should be noted that there are situations where the BNS and AJ tests have large advantages over our test. For instance, the relevant alternative could be the occurrence of many jumps in the sample. Assume there is not only one jump, but many. So let us assume that we have $R$ jumps of size $J$ (rather evenly distributed), and let us assume that the number of intervals between successive jumps is always greater than 1. Then it is easily seen from the definition of the BNS statistics that the tests are consistent (i.e., their power converges to 1) if
$$\sqrt{n}\,R\,J^2 \to \infty.$$
1
For

V
p;k
; they suggest two estimators:

V
c
p;k
=
nM(p;k)
b
A(2p;n)
t
b
A(p;n)
2
t
;

V
c
p;k
=
nM(p;k)
e
A(
p
p+1
;2p+2;n)
t
e
A(
p
p+1
;p+1;n)
2
t
where
M (p; k) =
1
m
2
p
_
k
p2
(1 +k) m
2p
+k
p2
(k 1) m
2
p
2k
p=21
m
k;p
_
;
m
p
= E ([Z
1
[
p
) =
1=2
2
p=2

_
p+1
2
_
;
m
k;p
= E
_
[Z
1
[
p

Z
1
+
_
k 1Z
2

p
_
,
Z
i
~
iid
N (0; 1) ;

A(p;
n
)
t
=

1p=2
n
mp

i=1
[
n
i
X[
p
1 [
n
i
X[ _
$
n
, $ (0; 1=2) ;

A(r; q;
n
)
t
=

1qr=2
n
m
q
r

i=1

q
j=1

n
i+j1
X

r
;

n
i
X = X
in
X
(i1)n
18
An analogous result holds for the AJ tests. These results are easily explained by taking into account that both test statistics are constructed from sums: the contributions of several small jumps can cumulate, whereas our test does not allow for this. So one should employ their tests not as tests against the alternative of the occurrence of a single jump, but as tests against Levy-type alternatives. It might be worthwhile to investigate the power of their tests against alternatives of this specific type.
5 Simulations

First, we consider a very simple ideal process for returns:
$$r_{i/n} = \int_{(i-1)/n}^{i/n}dp_t = \int_{(i-1)/n}^{i/n}\sigma\,dW_t + \int_{(i-1)/n}^{i/n}J\,d\nu_t, \qquad\text{with } \sigma^2 = 0.513. \qquad\text{(Model 1)}$$
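Model 1 is straightforward to simulate; the following sketch (the jump-variance choice and seed are ours, purely for illustration) generates one path of returns with a single jump at a random interval:

```python
import numpy as np

def simulate_model1(n, sigma2=0.513, jump_var=0.0, rng=None):
    """Model 1: constant-volatility returns over intervals of length 1/n,
    plus at most one normally distributed jump at a random interval."""
    rng = rng or np.random.default_rng()
    r = rng.normal(0.0, np.sqrt(sigma2 / n), size=n)
    if jump_var > 0:
        r[rng.integers(n)] += rng.normal(0.0, np.sqrt(jump_var))
    return r

rng = np.random.default_rng(4)
r = simulate_model1(2880, jump_var=0.2 * 0.513, rng=rng)   # a "20% jump" case
print(r.shape, np.sum(r ** 2))   # realized variance includes the jump contribution
```

Feeding such paths to the statistics sketched earlier reproduces the qualitative size/power comparisons discussed below.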
Second, the stochastic volatility model of Barndorff-Nielsen, Hansen, Lunde, and Shephard (2008) is considered:
$$r_{i\Delta} = \int_{(i-1)\Delta}^{i\Delta}dp_t = \int_{(i-1)\Delta}^{i\Delta}\mu_t\,dt + \int_{(i-1)\Delta}^{i\Delta}\sigma_t\,dW_t + J\,d\nu(t), \qquad\text{(Model 2)}$$
$$\log\sigma_{i\Delta} - \log\sigma_{(i-1)\Delta} = \int_{(i-1)\Delta}^{i\Delta}\left(\beta_0 + \beta_1\tau_t\right)dt, \qquad \tau_{i\Delta} - \tau_{(i-1)\Delta} = \int_{(i-1)\Delta}^{i\Delta}\alpha\,\tau_t\,dt + \int_{(i-1)\Delta}^{i\Delta}dB_t, \qquad\text{and}\qquad Corr\left(dW_t, dB_t\right) = \phi.$$
We set $\mu = 0.03$, $\beta_1 = 0.125$, $\alpha = -0.025$, $\phi = -0.3$ and $\beta_0 = \beta_1^2/(2\alpha)$.
Third, we considered the stochastic volatility model devised by Barndorff-Nielsen and Shephard (2002):
$$dp(t) = \sigma(t)\,dW(t) + J\,d\nu(t), \qquad\text{with}\qquad \sigma^2(t) = \sum_k\sigma_k^2(t), \qquad\text{(Model 3)}$$
$$\sigma_k^2(t) = -\int_0^t\lambda_k\left(\sigma_k^2(s) - \xi_k\right)ds + \int_0^t\omega_k\,\sigma_k(s)\,dB_k(s), \qquad \xi_k = p_k\,\xi_0, \qquad \sum_k p_k = 1.$$
For the one-factor model, we set $\xi_0 = 0.513$, $\lambda_1 = 1.44$, $p_1 = 1$, and $\omega_1^2 = 2.1$. For the two-factor model, we set $\xi_0 = 0.509$, $\lambda_1 = 0.0429$, $\lambda_2 = 3.74$, $p_1 = 0.218$, $p_2 = 1 - p_1$. We consider $\left(\omega_1^2, \omega_2^2\right) = (0.0169, 5.2978)$.
We track the performance of four test statistics: Lee-Ploberger (LP), Barndorff-Nielsen & Shephard (BNS), Ait-Sahalia & Jacod (AJ), and Lee-Mykland (LM). We assume $J \sim N(0, \sigma_J^2)$ and consider three cases for jump sizes: no jump ($\sigma_J^2 = 0$), a 20% jump ($\sigma_J^2 = 0.2\,\xi_0$), and a $2\log(n)/n$ jump ($\sigma_J^2 = 2\log(n)/n\cdot\xi_0$). We impose one jump at a random time. The sample sizes considered are 72, 288, 1440, 2880, and 8640; these sample sizes correspond to sampling interval lengths of 20 minutes, 5 minutes, 1 minute, 30 seconds, and 10 seconds, respectively, over a 24-hour trading day. We repeat this simulation 5,000 times. Tables 1-6 show the results of small simulation experiments varying the size of jumps. In those tables, we compare the empirical rejection probabilities of the various tests at the 5% level of significance.
Let us consider rejection probabilities under the null, i.e., the no-jump case. In many cases, our tests have quite precise sizes even with 5-minute data, although our tests over-reject under the null if the volatility process moves too sharply and/or stays around the zero level for a long time (Tables 3, 4). In those cases, we could reduce the degree of those size distortions by changing the averaging window for instantaneous volatility. Our tests assume some continuity of the volatility process within the averaging window. If the volatility process moves too sharply, then our assumptions may not be fulfilled in a finite sample even though the underlying process is continuous. To achieve more robust size properties in finite samples, we will consider data-adaptive rules for the averaging window.

Next, we consider the power of the tests. Our tests have better power than other tests after controlling for size distortions. In our simulations, we consider one jump and check whether the tests detect the jump or not. In that set-up, the LM tests show a large size distortion even with 10-second data, so their power is significantly reduced by the size adjustment. Although our tests sometimes over-reject the null, our tests have the best or comparable power in many cases if we adjust the critical values to remove size distortions. In particular, in many cases our tests have non-trivial power even with a jump of order $\log(n)/n$, where $n$ is the sample size, whereas the BNS and AJ tests have trivial power in that case. The LM tests also have non-trivial power, but their sizes are not as reliable as ours and their averaging-window requirement is more restrictive than ours.
References

Ait-Sahalia, Y. (2004): "Disentangling diffusion from jumps," Journal of Financial Economics, 74(3), 487-528.

Ait-Sahalia, Y., and J. Jacod (2009): "Testing for jumps in a discretely observed process," Annals of Statistics, 37(1), 184-222.

Andersen, T., T. Bollerslev, and F. Diebold (2007): "Roughing it Up: Including Jump Components in the Measurement, Modeling and Forecasting of Return Volatility," Review of Economics and Statistics, 89(4), 701-720.

Andersen, T., T. Bollerslev, and D. Dobrev (2007): "No-arbitrage semi-martingale restrictions for continuous-time volatility models subject to leverage effects, jumps and iid noise: Theory and testable distributional implications," Journal of Econometrics, 138(1), 125-180.

Bandi, F., and J. Russell (2006): "Separating microstructure noise from volatility," Journal of Financial Economics, 79(3), 655-692.

Barndorff-Nielsen, O., P. Hansen, A. Lunde, and N. Shephard (2008): "Designing realized kernels to measure the ex post variation of equity prices in the presence of noise," Econometrica, 76(6), 1481-1536.

Barndorff-Nielsen, O., and N. Shephard (2002): "Econometric Analysis of Realized Volatility and Its Use in Estimating Stochastic Volatility Models," Journal of the Royal Statistical Society, Series B (Statistical Methodology), 64(2), 253-280.

(2003): "Realised power variation and stochastic volatility models," Bernoulli, 9(2), 243-265.

(2004): "Econometric Analysis of Realized Covariation: High Frequency Based Covariance, Regression, and Correlation in Financial Economics," Econometrica, 72(3), 885-925.

(2006): "Econometrics of Testing for Jumps in Financial Economics Using Bipower Variation," Journal of Financial Econometrics, 4(1), 1-30.

Boehmer, E., G. Saar, and L. Yu (2005): "Lifting the veil: An analysis of pre-trade transparency at the NYSE," Journal of Finance, 60(2), 783-815.

Bollerslev, T., T. Law, and G. Tauchen (2008): "Risk, jumps, and diversification," Journal of Econometrics, 144(1), 234-256.

Delattre, S., and J. Jacod (1997): "A Central Limit Theorem for Normalized Functions of the Increments of a Diffusion Process, in the Presence of Round-Off Errors," Bernoulli, 3(1), 1-28.

Drost, F. C., T. E. Nijman, and B. J. M. Werker (1998): "Estimation and Testing in Models Containing Both Jump and Conditional Heteroscedasticity," Journal of Business & Economic Statistics, 16(2), 237-43.

Eraker, B. (2004): "Do Stock Prices and Volatility Jump? Reconciling Evidence from Spot and Option Prices," Journal of Finance, 59(3), 1367-1404.

Eraker, B., M. Johannes, and N. Polson (2003): "The Impact of Jumps in Volatility and Returns," Journal of Finance, 58(3), 1269-1300.

Feller, W. (1971): An Introduction to Probability Theory and Its Applications, vol. II. John Wiley & Sons, 2nd edn.

Huang, X., and G. Tauchen (2005): "The Relative Contribution of Jumps to Total Price Variance," Journal of Financial Econometrics, 3(4), 456-499.

Jiang, G., and R. Oomen (2008): "Testing for jumps when asset prices are observed with noise - a swap variance approach," Journal of Econometrics, 144(2), 352-370.

Lee, S., and P. Mykland (2008): "Jumps in Financial Markets: A New Nonparametric Test and Jump Dynamics," Review of Financial Studies, 21(6), 2535-2563.

Oomen, R. (2005): "Properties of bias corrected realized variance in calendar time and business time," Journal of Financial Econometrics, 3(4), 555-577.

Todorov, V. (2009): "Estimation of continuous-time stochastic volatility models with jumps using high-frequency data," Journal of Econometrics, 148(2), 131-148.

Zhang, L. (2006): "Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach," Bernoulli, 12(6), 1019-1043.

Zhang, L., P. Mykland, and Y. Ait-Sahalia (2005): "A tale of two time scales: Determining integrated volatility with noisy high-frequency data," Journal of the American Statistical Association, 100(472), 1394-1411.
A  Normal and $\chi^2$ distributions for small and large values

It is well known that for the standard normal distribution function $\Phi(x)$
\[
\lim_{x\to\infty} \left(1-\Phi(x)\right)\sqrt{2\pi}\,x\exp\left(x^2/2\right) = 1,
\]
or, equivalently,
\[
\lim_{x\to\infty} \left(-\log\left(2\Phi(x)-1\right)\right)\sqrt{2\pi}\,x\exp\left(x^2/2\right)\big/\,2 = 1.
\]
Hence for every $\varepsilon>0$ we can define $\lambda(\varepsilon)$ as the smallest value so that for all
\[
x \geq \lambda(\varepsilon) \tag{28}
\]
we have
\[
(1-\varepsilon) \;\leq\; \sqrt{2\pi}\,x\exp\left(x^2/2\right)\left(1-\Phi(x)\right) \;\leq\; (1+\varepsilon) \tag{29}
\]
and
\[
-2(1+\varepsilon) \;\leq\; \log\left(2\Phi(x)-1\right)\sqrt{2\pi}\,x\exp\left(x^2/2\right) \;\leq\; -2(1-\varepsilon). \tag{30}
\]
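These relations are easy to verify numerically. The following is a minimal Python sketch (the function names are ours, not from the paper), using the identity $1-\Phi(x)=\operatorname{erfc}(x/\sqrt{2})/2$; the normalized tail approaches 1 from below as $x$ grows, which is exactly what makes $\lambda(\varepsilon)$ well defined:

```python
import math

def normal_upper_tail(x):
    # 1 - Phi(x) computed via the complementary error function:
    # 1 - Phi(x) = erfc(x / sqrt(2)) / 2
    return math.erfc(x / math.sqrt(2.0)) / 2.0

def normalized_tail(x):
    # sqrt(2*pi) * x * exp(x^2/2) * (1 - Phi(x)); tends to 1 as x -> infinity,
    # so for x >= lambda(eps) it stays within [1 - eps, 1 + eps]
    return math.sqrt(2.0 * math.pi) * x * math.exp(x * x / 2.0) * normal_upper_tail(x)

for x in (2.0, 4.0, 8.0, 16.0):
    print(x, normalized_tail(x))
```

The printed values increase monotonically toward 1, matching the classical expansion $1-1/x^2+O(x^{-4})$ of the normalized tail.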
Lemma 6. Let us now choose an arbitrary $\varepsilon > 0$, and let $k$, $n$, $u_i$ be defined as in the main section of the paper. If
\[
k \geq 2\log n
\]
and
\[
\lambda(\varepsilon)^2 > 2,
\]
then
\[
p_n = P\left[u_i \geq k\lambda(\varepsilon)^2/(1-\varepsilon)\right] = o\left(n^{-1}\right).
\]

Proof. Since $u_i$ is distributed according to a $\chi^2$ distribution with $k$ degrees of freedom, we have
\[
p_n = \frac{1}{2^{k/2}\Gamma(k/2)} \int_{k\lambda(\varepsilon)^2/(1-\varepsilon)}^{\infty} x^{k/2-1}\exp(-x/2)\,dx.
\]
Write
\[
C = \frac{\lambda(\varepsilon)^2}{1-\varepsilon} > 2.
\]
Since $x^{k/2-1}\exp(-x/4)$ is decreasing for $x \geq 2k$, and $kC \geq 2k$, we obtain
\[
p_n \leq \frac{4\,(kC)^{k/2-1}}{2^{k/2}\Gamma(k/2)}\exp\left(-\frac{kC}{2}\right),
\]
and therefore
\[
\log p_n \leq \frac{k}{2}\log\frac{kC}{2} - \frac{kC}{2} - \log\Gamma(k/2) + O(\log k).
\]
The well-known formula of Stirling implies that for $k\to\infty$ (with $m = k/2 - 1$)
\[
\log\Gamma(k/2) - \left(m\log m - m + \log\sqrt{2\pi m}\right) \to 0.
\]
One can easily see that $\frac{k}{2}\log\frac{kC}{2} - m\log m = \frac{k}{2}\log C + O(\log k)$, so the terms linear in $k$ dominate the right-hand side of the inequality, and
\[
\log p_n \leq \frac{k}{2}\left(\log C - C + 1\right) + O(\log k).
\]
Moreover, as $\lambda(\varepsilon)$ is fixed with $C > 1$, the coefficient $\log C - C + 1$ is negative, so this dominating term becomes negative. Therefore it can immediately be seen that
\[
\limsup_{k\to\infty} \frac{\log p_n}{k/2} \;\leq\; \log C - C + 1 \;\leq\; -\left(\lambda(\varepsilon)^2 - 1\right),
\]
which implies $p_n \leq \exp(-k/2)^{\lambda(\varepsilon)^2-1}$; together with $k \geq 2\log n$ and $\lambda(\varepsilon)^2 > 2$ this yields $p_n = o(n^{-1})$.
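The geometric decay in $k$ that drives Lemma 6 is easy to see numerically. The sketch below is not the proof's construction; it simply compares the exact $\chi^2_k$ tail at a threshold $kc$ (closed form for even $k$) with the standard Chernoff bound $\exp\!\big(-(k/2)(c-1-\log c)\big)$, under the illustrative choice $c=3$:

```python
import math

def chi2_sf_even(x, k):
    # Exact survival function P(chi^2_k >= x) for even k, via the
    # Poisson-sum identity: exp(-x/2) * sum_{j=0}^{k/2-1} (x/2)^j / j!
    assert k % 2 == 0 and k > 0
    term, total = 1.0, 1.0
    for j in range(1, k // 2):
        term *= (x / 2.0) / j
        total += term
    return math.exp(-x / 2.0) * total

def chernoff_bound(c, k):
    # Standard Chernoff bound P(chi^2_k >= k*c) <= exp(-(k/2)(c - 1 - log c)), c > 1
    return math.exp(-(k / 2.0) * (c - 1.0 - math.log(c)))

# The tail probability at threshold k*c decays geometrically in k once c > 1,
# which is what makes p_n = o(n^{-1}) as soon as k grows like 2*log(n).
for k in (10, 20, 40):
    print(k, chi2_sf_even(k * 3.0, k), chernoff_bound(3.0, k))
```

Doubling $k$ roughly squares the tail probability's smallness, which is the mechanism exploited when $k \geq 2\log n$.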
B  The proof of Theorem 3

Our proof of Theorem 3 is based on the following lemma.

Lemma 7. Suppose we are given a standard Wiener process $W$, an adapted process $\phi$, and a constant $L$ so that
\[
\int_a^b \phi_t^2\,dt \leq L.
\]
Then, where $\int_a^b \phi\,dW$ is the usual Itô integral,
\[
P\left[\left|\int_a^b \phi\,dW\right| \geq C\right] \leq 2\exp\left(-\frac{C^2}{2L}\right).
\]

Proof. Novikov's theorem guarantees that for all $u$
\[
E\left[\exp\left(u\int_a^b \phi\,dW - \frac{u^2}{2}\int_a^b \phi_t^2\,dt\right)\right] = 1.
\]
Hence
\[
E\left[\exp\left(u\int_a^b \phi\,dW - \frac{u^2}{2}L\right)\right] \leq 1,
\]
and therefore
\[
\exp\left(uC - \frac{u^2}{2}L\right) P\left[\int_a^b \phi\,dW \geq C\right] \leq 1.
\]
Setting
\[
u = \frac{C}{L}
\]
and repeating the same idea with $-\int_a^b \phi\,dW$ proves our proposition.
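As a quick sanity check of Lemma 7, take $\phi \equiv 1$ on $[a,b]=[0,1]$, so that $\int \phi\,dW = W_1 \sim N(0,1)$ and $L = 1$; the lemma then reduces to the familiar sub-Gaussian bound $P(|N(0,1)| \geq C) \leq 2\exp(-C^2/2)$, which the following sketch verifies directly (an illustration, not part of the proof):

```python
import math

def two_sided_tail(c):
    # P(|W_1| >= c) with W a standard Wiener process, i.e. P(|N(0,1)| >= c),
    # computed exactly via the complementary error function
    return math.erfc(c / math.sqrt(2.0))

def lemma7_bound(c, L=1.0):
    # The sub-Gaussian bound of Lemma 7 with int phi^2 dt <= L
    return 2.0 * math.exp(-c * c / (2.0 * L))

for c in (0.5, 1.0, 2.0, 3.0):
    print(c, two_sided_tail(c), lemma7_bound(c))
```

The exact two-sided tail always sits below the exponential bound, with the gap being only polynomial in $1/C$.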
We now prove Theorem 3 by applying Lemma 7.

Proof of Theorem 3. We have
\[
dp_t = A_t\,dt + \sigma_t\,dW^{(1)}_t,
\]
\[
d(\log\sigma_t) = C_t\,dt + D_t\,dW^{(2)}_t,
\]
where $A_t$, $C_t$, $D_t$ are continuous processes and $W^{(1)}_t$, $W^{(2)}_t$ are (standard) Wiener processes. First of all let us demonstrate that without loss of generality we can assume that $A_t$, $C_t$, $D_t$ as well as $p_t$ and $\log\sigma_t$ are uniformly bounded.

Since the processes $A_t$, $C_t$, $D_t$ and $p_t$, $\log\sigma_t$ are continuous, for every $\varepsilon > 0$ there exists an $M = M(\varepsilon)$ so that
\[
P\left[\sup_t|A_t|,\ \sup_t|C_t|,\ \sup_t|D_t|,\ \sup_t|p_t|,\ \sup_t|\log\sigma_t| < M(\varepsilon)\right] \geq 1-\varepsilon.
\]
Let now the stopping time $\tau^{(\varepsilon)}$ be defined as the first time one of the absolute values of $A_t$, $C_t$, $D_t$, $p_t$, $\log\sigma_t$ becomes larger than $M(\varepsilon)$, or $\tau^{(\varepsilon)} = 1$ if the absolute values of the processes remain below $M(\varepsilon)$ all the time. Then
\[
P\left[\tau^{(\varepsilon)} = 1\right] \geq 1-\varepsilon. \tag{31}
\]
Let $s_{i,n} = X_{i/n} - X_{(i-1)/n}$ and $z_{i,n} = W^{(1)}_{i/n} - W^{(1)}_{(i-1)/n}$. Then let
\[
\rho_n = \sup_i \frac{s_i^2}{\left(s_{i-1}^2 + s_{i-2}^2 + \cdots + s_{i-k}^2\right)/k},\qquad
\rho_n^{(\varepsilon)} = \sup_{i/n \leq \tau^{(\varepsilon)}} \frac{s_i^2}{\left(s_{i-1}^2 + s_{i-2}^2 + \cdots + s_{i-k}^2\right)/k},
\]
\[
\eta_n = \sup_i \frac{z_i^2}{\left(z_{i-1}^2 + z_{i-2}^2 + \cdots + z_{i-k}^2\right)/k},\qquad
\eta_n^{(\varepsilon)} = \sup_{i/n \leq \tau^{(\varepsilon)}} \frac{z_i^2}{\left(z_{i-1}^2 + z_{i-2}^2 + \cdots + z_{i-k}^2\right)/k}.
\]
Then, by definition, $\rho_n$ and $\eta_n$ are our test statistics applied to $X_{i/n}$ and $W^{(1)}_{i/n}$, respectively.
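In code, the max-ratio statistic is simply the largest squared increment relative to the average of the $k$ squared increments preceding it. The following sketch is purely illustrative (the sample size, volatility scale, seed, and jump placement are arbitrary choices, not the paper's Monte Carlo design); it evaluates the statistic on simulated Brownian increments before and after adding a single jump of the critical order $\sqrt{2\log(n)/n}$:

```python
import math
import random

def max_ratio_statistic(increments, k):
    # rho_n = sup_i s_i^2 / ((s_{i-1}^2 + ... + s_{i-k}^2) / k):
    # each squared increment is compared with the average of the
    # k squared increments immediately preceding it
    best = 0.0
    for i in range(k, len(increments)):
        local_avg = sum(s * s for s in increments[i - k:i]) / k
        best = max(best, increments[i] ** 2 / local_avg)
    return best

random.seed(0)
n = 2000
k = int(2 * math.log(n)) + 1                       # window of order 2 log n
s = [random.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]

stat_null = max_ratio_statistic(s, k)              # pure diffusion, no jump
s[n // 2] += 5.0 * math.sqrt(2.0 * math.log(n) / n)  # one jump of the critical order
stat_jump = max_ratio_statistic(s, k)

print(stat_null, stat_jump)
```

With this seed the statistic on the jump-contaminated path is markedly larger than the no-jump value, consistent with jumps of order $\sqrt{2\log(n)/n}$ sitting at the detection boundary discussed in the paper.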
Moreover, (31) guarantees that
\[
P\left[\rho_n = \rho_n^{(\varepsilon)}\right] \geq 1-\varepsilon
\]
and
\[
P\left[\eta_n = \eta_n^{(\varepsilon)}\right] \geq 1-\varepsilon
\]
as well. Hence it is sufficient to show that for all $\varepsilon > 0$ the difference between $\rho_n^{(\varepsilon)}$ and $\eta_n^{(\varepsilon)}$ converges to zero. To show this, let us first observe that
\[
\min_{1\leq h\leq k}\frac{\sigma^2_{(i-h)/n}}{\sigma^2_{i/n}}
\;\leq\;
\frac{s_i^2\Big/\Big(\big(s_{i-1}^2+s_{i-2}^2+\cdots+s_{i-k}^2\big)/k\Big)}
{\sigma^2_{i/n}z_i^2\Big/\Big(\big(\sigma^2_{(i-1)/n}z_{i-1}^2+\sigma^2_{(i-2)/n}z_{i-2}^2+\cdots+\sigma^2_{(i-k)/n}z_{i-k}^2\big)/k\Big)}
\;\leq\;
\max_{1\leq h\leq k}\frac{\sigma^2_{(i-h)/n}}{\sigma^2_{i/n}}.
\]
For analyzing the difference between the left- and right-hand sides of the above inequality and one, it is sufficient to consider
\[
\sup_{1\leq h\leq k}\left|\log\frac{\sigma^2_{(i-h)/n}}{\sigma^2_{i/n}}\right|.
\]
Now observe that
\[
\log\left(\sigma^2_{i/n}\right) - \log\left(\sigma^2_{(i-h)/n}\right) = \int_{(i-h)/n}^{i/n} C_t\,dt + D_t\,dW^{(2)}_t.
\]
For $i/n \leq \tau^{(\varepsilon)}$,
\[
\left|\int_{(i-h)/n}^{i/n} C_t\,dt\right| \leq \frac{kM}{n}.
\]
Moreover, we have due to Lemma 7
\[
P\left[\left|\int_{(i-h)/n}^{i/n} D_t\,dW^{(2)}_t\right| \geq 2\sqrt{\frac{Mk\log n}{n}}\right] \leq \frac{1}{n^2},
\]
and hence
\[
P\left[\sup_{i/n\leq\tau^{(\varepsilon)},\,h}\left|\int_{(i-h)/n}^{i/n} D_t\,dW^{(2)}_t\right| \geq 2\sqrt{\frac{Mk\log n}{n}}\right] \leq \frac{k}{n} \to 0.
\]
Hence we can conclude that
\[
P\left[\sup_{h}\left|\log\frac{\sigma^2_{(i-h)/n}}{\sigma^2_{i/n}}\right| \geq 4\sqrt{\frac{Mk\log n}{n}}\right] \to 0.
\]
Since
\[
\sup_i \frac{s_i^2}{\left(s_{i-1}^2+s_{i-2}^2+\cdots+s_{i-k}^2\right)/k} = O(\log n),
\]
we can conclude that the difference between
\[
\sup_i \frac{s_i^2}{\left(s_{i-1}^2+s_{i-2}^2+\cdots+s_{i-k}^2\right)/k}
\quad\text{and}\quad
\sup_i \frac{\sigma^2_{i/n}z_i^2}{\left(\sigma^2_{(i-1)/n}z_{i-1}^2+\sigma^2_{(i-2)/n}z_{i-2}^2+\cdots+\sigma^2_{(i-k)/n}z_{i-k}^2\right)/k}
\]
converges to zero.
It now remains to show that the differences
\[
\left|s_{i,n} - \sigma_{(i-1)/n}z_{i,n}\right|
\]
remain small. Now observe that
\[
\left|s_{i,n} - \sigma_{(i-1)/n}z_{i,n}\right|
= \left|\int_{(i-1)/n}^{i/n}\left(A_t\,dt + \sigma_t\,dW^{(1)}_t - \sigma_{(i-1)/n}\,dW^{(1)}_t\right)\right|
\leq \left|\int_{(i-1)/n}^{i/n} A_t\,dt\right| + \left|\int_{(i-1)/n}^{i/n}\left(\sigma_u - \sigma_{(i-1)/n}\right)dW^{(1)}_u\right|
\leq \max_t|A_t|\,\frac{1}{n} + \left|\int_{(i-1)/n}^{i/n}\left(\sigma_u - \sigma_{(i-1)/n}\right)dW^{(1)}_u\right|.
\]
For the analysis of
\[
\left|\int_{(i-1)/n}^{i/n}\left(\sigma_u - \sigma_{(i-1)/n}\right)dW^{(1)}_u\right|
\]
we will apply Lemma 7. Since $\sigma_u$ is a diffusion process whose drift and diffusion coefficients were assumed to be bounded, we can conclude that for all $\delta > 0$ there exists an $M$ so that
\[
P\left[\left|\sigma_u - \sigma_{(i-1)/n}\right| \leq M\left|u - (i-1)/n\right|^{1/2-\delta}\ \text{for all } i \text{ and } (i-1)/n \leq u \leq i/n\right] \geq 1-\varepsilon.
\]
Hence
\[
P\left[\int_{(i-1)/n}^{i/n}\left(\sigma_u - \sigma_{(i-1)/n}\right)^2 du \leq 2M n^{-2+\delta}\right] \geq 1-\varepsilon.
\]
To apply Lemma 7, however, we need a uniform bound on the integral $\int_{(i-1)/n}^{i/n}\left(\sigma_u - \sigma_{(i-1)/n}\right)^2 du$. This can easily be achieved by using a stopping time. We stop the process at time $S$, with $(i-1)/n \leq S \leq i/n$, if for the first time
\[
\int_{(i-1)/n}^{S}\left(\sigma_u - \sigma_{(i-1)/n}\right)^2 du = 2M n^{-2+\delta};
\]
otherwise we set $S = 1$. Obviously the definition of $M$ guarantees that
\[
P(S = 1) \geq 1-\varepsilon.
\]
Hence if we define
\[
\sigma^*_u = \begin{cases} \sigma_u & \text{for } u \leq S, \\ \sigma_S & \text{otherwise,} \end{cases}
\]
we have
\[
P\left[\sigma^*_u = \sigma_u \text{ for all } u\right] \geq 1-\varepsilon.
\]
Hence it is sufficient to give estimates for $\int_{(i-1)/n}^{i/n}\left(\sigma^*_u - \sigma^*_{(i-1)/n}\right)dW^{(1)}_u$. For this task, however, we can apply Lemma 7 and conclude that
\[
P\left[\left|\int_{(i-1)/n}^{i/n}\left(\sigma^*_u - \sigma^*_{(i-1)/n}\right)dW^{(1)}_u\right| \geq \sqrt{8M n^{-2+\delta}\log n}\right] \leq \frac{2}{n^2}.
\]
Since $\delta > 0$ was arbitrary, we can conclude that for arbitrary $\beta > 0$
\[
P\left[\sup_i\left|\int_{(i-1)/n}^{i/n}\left(\sigma^*_u - \sigma^*_{(i-1)/n}\right)dW^{(1)}_u\right| \geq n^{-1+\beta}\right] \to 0,
\]
which demonstrates that these terms are negligible.
B.1  Tables and Figures

The following tables report rejection probabilities at the 5% nominal significance level.
Model 1-1: Pure diffusion, no jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 |  3.04   3.74  11.14   8.26   6.42 |  3.24   3.70 | 48.64 | 17.72  48.64
  288 |  4.82   4.56   8.40   6.98   6.18 |  3.84   3.68 | 43.38 | 28.52  66.40
 1440 |  4.76   4.90   5.84   5.30   5.16 |  4.28   4.30 | 28.76 | 39.44  85.96
 2880 |  5.06   5.64   6.02   5.56   5.30 |  5.10   4.74 | 22.82 | 45.60  92.62
 8640 |  5.12   5.18   4.92   4.64   4.58 |  5.06   4.68 | 16.46 | 52.90  96.54

Model 1-2: Pure diffusion with 20% jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 33.24  25.44  32.74  28.50  26.36 |  6.78   9.98 | 68.32 | 49.60  68.32
  288 | 59.32  52.58  48.90  46.96  46.06 | 20.46  36.26 | 78.62 | 72.92  87.60
 1440 | 78.80  75.92  65.28  64.56  64.44 | 34.76  68.26 | 86.66 | 88.94  97.74
 2880 | 84.38  81.54  71.26  70.94  70.84 | 39.62  76.96 | 88.94 | 92.32  98.84
 8640 | 90.88  88.78  78.36  78.14  78.08 | 43.32  85.62 | 93.12 | 96.12  99.78

Model 1-3: Pure diffusion with LN(2N)/N jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 22.22  16.44  24.82  20.92  18.58 |  5.36   7.70 | 62.32 | 39.28  62.32
  288 | 25.84  19.14  18.62  16.72  15.68 |  8.22  12.68 | 60.38 | 49.24  76.54
 1440 | 26.80  20.52  12.60  11.64  11.36 |  9.14  13.76 | 49.96 | 57.78  90.68
 2880 | 27.26  21.76  11.30  10.66  10.40 |  9.52  14.38 | 46.98 | 62.76  94.80
 8640 | 27.74  21.94   7.74   7.44   7.32 |  9.02  13.40 | 41.70 | 67.12  97.58

Table 1: Simulated rejection probability of Model 1 (pure diffusion). Column groups: LP (4LN, 2LN, LIN, Ratio, ADJ), BNS (QV, BIP), AJ (SQRT), LM (4LN, 2LN).
Model 2-1: log SV diffusion, no jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 |  3.42   3.76  10.58   8.22   6.38 |  2.92   3.34 | 47.24 | 18.12  47.24
  288 |  4.58   4.02   7.72   6.56   5.84 |  3.62   3.80 | 42.98 | 28.44  65.58
 1440 |  4.30   4.96   6.00   5.52   5.22 |  4.30   4.50 | 27.86 | 38.76  86.52
 2880 |  4.72   4.70   5.86   5.46   5.28 |  4.14   4.30 | 23.40 | 45.06  93.20
 8640 |  5.14   5.58   5.26   5.08   4.98 |  4.96   5.06 | 16.10 | 54.20  96.82

Model 2-2: log SV diffusion with 20% jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 |  5.82   5.30  12.26   9.70   8.12 |  2.66   3.52 | 49.62 | 21.24  49.62
  288 | 13.30   9.98  12.72  11.08  10.40 |  4.40   7.62 | 50.60 | 37.44  70.30
 1440 | 32.14  26.40  18.64  17.84  17.52 | 11.08  19.92 | 52.08 | 59.58  91.36
 2880 | 42.44  36.76  22.52  21.78  21.60 | 15.68  28.50 | 58.36 | 69.46  96.06
 8640 | 59.94  54.92  32.36  31.96  31.84 | 23.24  44.96 | 68.44 | 83.22  98.98

Model 2-3: log SV diffusion with LN(2N)/N jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 |  4.12   4.16  10.84   8.48   6.82 |  2.72   3.24 | 48.02 | 19.22  48.02
  288 |  4.90   4.28   8.20   6.76   6.02 |  3.66   3.98 | 43.50 | 29.04  65.94
 1440 |  4.64   5.14   6.10   5.58   5.32 |  4.38   4.66 | 28.34 | 39.20  86.60
 2880 |  5.00   4.92   5.98   5.52   5.34 |  4.18   4.42 | 23.82 | 45.48  93.22
 8640 |  5.42   5.82   5.42   5.26   5.12 |  5.04   5.20 | 16.70 | 54.54  96.90

Table 2: Simulated rejection probability of Model 2 (log SV model). Column groups: LP (4LN, 2LN, LIN, Ratio, ADJ), BNS (QV, BIP), AJ (SQRT), LM (4LN, 2LN).
Model 3-1-1: 1 factor CIR SV diffusion, no jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 25.30  16.86  14.84  10.96   9.64 |  3.30   6.02 | 71.24 | 47.94  71.24
  288 | 32.24  20.38   9.60   7.84   7.70 |  4.18   4.90 | 72.64 | 65.44  83.80
 1440 | 33.02  19.64   6.82   6.12   6.12 |  5.02   5.20 | 65.28 | 70.06  92.78
 2880 | 30.64  17.66   6.60   6.04   6.04 |  5.52   5.10 | 61.68 | 72.98  96.00
 8640 | 28.22  16.38   6.04   5.72   5.72 |  5.22   5.12 | 56.02 | 74.00  97.86

Model 3-1-2: 1 factor CIR SV diffusion with 20% jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 55.18  45.76  41.92  37.24  35.96 |  3.48   9.52 | 83.96 | 70.96  83.96
  288 | 72.46  64.86  52.58  50.46  50.38 | 13.34  25.28 | 89.60 | 86.84  93.76
 1440 | 85.48  80.68  66.64  65.94  65.92 | 31.90  63.98 | 93.02 | 93.88  98.82
 2880 | 88.94  85.12  72.42  72.10  72.10 | 36.80  74.16 | 94.00 | 95.92  99.36
 8640 | 93.36  91.42  78.80  78.52  78.52 | 41.38  83.82 | 95.78 | 97.42  99.80

Model 3-1-3: 1 factor CIR SV diffusion with LN(2N)/N jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 40.58  32.46  27.00  22.38  20.68 |  3.28   7.60 | 78.60 | 60.60  78.60
  288 | 46.08  35.20  18.66  16.22  16.08 |  4.72   7.64 | 78.70 | 72.88  87.42
 1440 | 47.64  35.18  12.60  11.60  11.60 |  6.52   8.94 | 73.22 | 77.24  94.64
 2880 | 44.20  32.92  10.36   9.78   9.78 |  6.98   9.14 | 69.18 | 78.48  96.72
 8640 | 43.68  33.24   8.18   7.94   7.94 |  6.56   8.24 | 65.22 | 79.28  98.36

Table 3: Simulated rejection probability of Model 3-1 (1 factor CIR SV). Column groups: LP (4LN, 2LN, LIN, Ratio, ADJ), BNS (QV, BIP), AJ (SQRT), LM (4LN, 2LN).
Model 3-2-1: 2 factor CIR SV diffusion, no jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 18.90  10.74  14.84  11.18  10.06 |  3.54   7.20 | 65.80 | 43.08  65.80
  288 | 19.56   9.42  10.92   9.38   9.30 |  4.54   6.26 | 65.02 | 56.48  78.32
 1440 | 12.10   6.64   7.22   6.42   6.42 |  5.02   5.48 | 51.42 | 57.62  90.10
 2880 |  8.76   5.88   6.46   6.00   6.00 |  5.40   5.16 | 42.06 | 58.94  94.48
 8640 |  6.96   5.70   6.28   5.74   5.74 |  5.40   4.68 | 30.06 | 60.32  96.96

Model 3-2-2: 2 factor CIR SV diffusion with 20% jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 49.28  37.20  38.12  33.76  32.46 |  4.14  10.50 | 81.36 | 68.28  81.36
  288 | 67.74  59.18  49.80  47.74  47.58 | 13.82  26.86 | 88.04 | 84.82  92.54
 1440 | 81.76  77.46  65.10  64.46  64.46 | 32.06  61.76 | 91.28 | 92.46  98.34
 2880 | 85.90  83.06  70.62  70.02  70.02 | 37.18  72.00 | 92.42 | 94.68  99.20
 8640 | 91.40  90.08  77.50  77.34  77.34 | 42.20  82.20 | 94.28 | 97.00  99.74

Model 3-2-3: 2 factor CIR SV diffusion with LN(2N)/N jump

    n |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
   72 | 32.22  21.84  24.28  19.86  18.68 |  3.44   8.28 | 73.90 | 54.96  73.90
  288 | 34.84  23.34  17.06  14.86  14.68 |  5.62   8.52 | 74.02 | 67.48  84.04
 1440 | 29.28  20.84  10.18   9.34   9.34 |  6.24   7.98 | 64.08 | 68.88  93.28
 2880 | 27.86  20.94   8.92   8.30   8.30 |  6.54   7.62 | 57.54 | 70.10  96.04
 8640 | 26.38  20.76   7.74   7.36   7.36 |  6.44   6.84 | 48.32 | 71.28  98.06

Table 4: Simulated rejection probability of Model 3-2 (2 factor CIR SV). Column groups: LP (4LN, 2LN, LIN, Ratio, ADJ), BNS (QV, BIP), AJ (SQRT), LM (4LN, 2LN).
Rejection probability

Jump/MeanVol  Meaning   |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
0.000         No jump   | 33.04  21.40  10.60   8.64   8.56 |  4.20   5.46 | 72.92 | 66.40  84.30
0.022         LN(2N)/N  | 45.64  34.08  17.90  15.78  15.60 |  4.84   7.08 | 78.14 | 72.70  87.56
0.107                   | 63.70  53.90  40.92  38.26  38.14 |  9.84  17.54 | 86.44 | 83.42  91.22
0.192                   | 71.38  63.24  51.54  49.28  49.18 | 13.52  24.26 | 88.88 | 86.04  94.06
0.277                   | 76.02  69.10  57.94  56.16  56.02 | 15.78  29.72 | 90.94 | 88.42  94.82
0.362                   | 78.16  72.14  62.18  60.48  60.40 | 16.92  31.76 | 91.64 | 89.64  95.16
0.447         20%       | 80.36  73.80  64.60  63.08  63.04 | 18.66  34.96 | 92.96 | 90.74  95.72
Size distortion         | 28.04  16.40   5.60   3.64   3.56 | -0.80   0.46 | 67.92 | 61.40  79.30

Size-adjusted power

Jump/MeanVol  Meaning   |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
0.000         No jump   |  5.00   5.00   5.00   5.00   5.00 |  5.00   5.00 |  5.00 |  5.00   5.00
0.005         LN(2N)/N  | 17.60  17.68  12.30  12.14  12.04 |  5.64   6.62 | 10.22 | 11.30   8.26
0.093                   | 35.66  37.50  35.32  34.62  34.58 | 10.64  17.08 | 18.52 | 22.02  11.92
0.182                   | 43.34  46.84  45.94  45.64  45.62 | 14.32  23.80 | 20.96 | 24.64  14.76
0.270                   | 47.98  52.70  52.34  52.52  52.46 | 16.58  29.26 | 23.02 | 27.02  15.52
0.359                   | 50.12  55.74  56.58  56.84  56.84 | 17.72  31.30 | 23.72 | 28.24  15.86
0.447         20%       | 52.32  57.40  59.00  59.44  59.48 | 19.46  34.50 | 25.04 | 29.34  16.42

Table 5: Simulated rejection probability of Model 3-1 (1 factor CIR SV model) with 5-minute frequency. Column groups: LP (4LN, 2LN, LIN, Ratio, ADJ), BNS (QV, BIP), AJ (SQRT), LM (4LN, 2LN).
Rejection probability

Jump/MeanVol  Meaning   |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
0.000         No jump   | 19.94   9.12  10.48   8.74   8.56 |  4.54   5.54 | 65.28 | 57.40  78.50
0.022         LN(2N)/N  | 33.32  21.58  16.62  13.96  13.80 |  5.44   7.16 | 73.60 | 67.20  83.22
0.107                   | 58.20  47.60  39.22  36.68  36.58 | 10.50  18.76 | 84.02 | 80.34  90.36
0.192                   | 66.96  58.80  50.24  48.08  48.04 | 13.08  25.62 | 88.14 | 85.10  92.68
0.277                   | 72.10  63.94  55.48  53.66  53.54 | 16.04  29.64 | 89.34 | 86.56  93.72
0.362                   | 74.82  67.16  60.60  58.54  58.52 | 17.58  33.54 | 90.18 | 87.68  94.26
0.447         20%       | 77.40  71.36  63.86  62.14  62.10 | 17.90  34.94 | 91.38 | 89.44  95.04
Size distortion         | 14.94   4.12   5.48   3.74   3.56 | -0.46   0.54 | 60.28 | 52.40  73.50

Size-adjusted power

Jump/MeanVol  Meaning   |   4LN    2LN    LIN  Ratio    ADJ |    QV    BIP |  SQRT |   4LN    2LN
0.000         No jump   |  5.00   5.00   5.00   5.00   5.00 |  5.00   5.00 |  5.00 |  5.00   5.00
0.005         LN(2N)/N  | 18.38  17.46  11.14  10.22  10.24 |  5.90   6.62 | 13.32 | 14.80   9.72
0.093                   | 43.26  43.48  33.74  32.94  33.02 | 10.96  18.22 | 23.74 | 27.94  16.86
0.182                   | 52.02  54.68  44.76  44.34  44.48 | 13.54  25.08 | 27.86 | 32.70  19.18
0.270                   | 57.16  59.82  50.00  49.92  49.98 | 16.50  29.10 | 29.06 | 34.16  20.22
0.359                   | 59.88  63.04  55.12  54.80  54.96 | 18.04  33.00 | 29.90 | 35.28  20.76
0.447         20%       | 62.46  67.24  58.38  58.40  58.54 | 18.36  34.40 | 31.10 | 37.04  21.54

Table 6: Simulated rejection probability of Model 3-2 (2 factor CIR SV model) with 5-minute frequency. Column groups: LP (4LN, 2LN, LIN, Ratio, ADJ), BNS (QV, BIP), AJ (SQRT), LM (4LN, 2LN).
Figure 1: A time path of the pure diffusion model under the null

Figure 2: A time path of the 1 factor CIR SV model under the null

Figure 3: A time path of the 2 factor CIR SV model under the null