
Department of Statistics and Operations Research,
University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark, www.stat.ku.dk

Niels Richard Hansen

December 12, 2001

Renewal Theory.
A Markov Chain Approach

In this note we will introduce the basic concepts from renewal theory and prove the
renewal convergence theorem.

Renewals and the forward recurrence time chain

Let $a = (a(n))_{n \in \mathbb{N}}$ and $p = (p(n))_{n \in \mathbb{N}}$ be probability distributions on $\mathbb{N}$ and let
$Y = (Y_n)_{n \in \mathbb{N}_0}$ be independent stochastic variables with
\[ Y_0 \sim a \]
and for $n \in \mathbb{N}$
\[ Y_n \sim p. \]
We will think of the stochastic process $Y$ as random waiting times between the
occurrences of "something", and this something will be referred to as a renewal.
Initially we wait $Y_0$ before the first renewal takes place, and if $n$ renewals have taken
place, we wait $Y_n$ before the next renewal occurs.
The total waiting time before the $(n+1)$'th renewal is
\[ S_n = \sum_{i=0}^{n} Y_i. \]
The process $(S_n)_{n \in \mathbb{N}_0}$ is called a delayed renewal process with delay distribution $a$ and increment distribution $p$, and with this process we associate the forward recurrence time chain
\[ V_n^+ = \inf\{S_m - n \mid S_m > n\}, \qquad n \in \mathbb{N}_0. \]

This process will at each renewal jump to the waiting time before the next renewal,
and then deterministically go down by one until the next renewal occurs. It is a
Markov chain on $\mathbb{N}$ with initial distribution $a$ and transition probability matrix

\[
P = \begin{pmatrix}
p(1) & p(2) & p(3) & p(4) & \cdots \\
1 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\]
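As a concrete illustration, the following minimal Python sketch simulates the delayed renewal process $(S_n)$ and the corresponding forward recurrence time chain $(V_n^+)$; the particular delay and increment distributions, and all function names, are arbitrary choices made for this example.

import random

def simulate_forward_chain(a, p, horizon):
    """Simulate renewal times S_0 < S_1 < ... and the forward recurrence
    time chain V_n^+ = inf{S_m - n | S_m > n} for n = 0, ..., horizon."""
    def draw(dist):
        values, probs = zip(*sorted(dist.items()))
        return random.choices(values, weights=probs)[0]

    renewals = [draw(a)]                  # S_0 = Y_0 ~ a
    while renewals[-1] <= horizon:        # S_{n+1} = S_n + Y_{n+1}, Y_{n+1} ~ p
        renewals.append(renewals[-1] + draw(p))

    V, m = [], 0
    for n in range(horizon + 1):
        while renewals[m] <= n:           # advance to the first renewal after n
            m += 1
        V.append(renewals[m] - n)
    return renewals, V

if __name__ == "__main__":
    a = {1: 0.5, 2: 0.5}                  # example delay distribution
    p = {1: 0.2, 2: 0.5, 3: 0.3}          # example increment distribution
    S, V = simulate_forward_chain(a, p, horizon=20)
    print("renewal times up to 20:", [s for s in S if s <= 20])
    print("forward recurrence times:", V)

One can check in the output that $V_n^+$ jumps to the next waiting time at each renewal and otherwise decreases by one, as described above.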


Now put $m_p = \sum_{j=1}^{\infty} j\, p(j)$, the mean value of the increment distribution. If
$m_p < \infty$ we can define a probability measure
\[ \pi(n) = \frac{1}{m_p} \sum_{j=n}^{\infty} p(j), \qquad n \in \mathbb{N}, \]
and it is easily verified that
\[ \pi = \pi P, \]
thus $\pi$ is an invariant measure for the Markov chain $(V_n^+)_{n \in \mathbb{N}_0}$.
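For completeness, the invariance can be checked directly from the structure of $P$: from state $1$ the chain moves to $n$ with probability $p(n)$, while from state $n+1$ it moves to $n$ with probability $1$, so
\[
(\pi P)(n) = \pi(1)\, p(n) + \pi(n+1)
= \frac{p(n)}{m_p} + \frac{1}{m_p} \sum_{j=n+1}^{\infty} p(j)
= \frac{1}{m_p} \sum_{j=n}^{\infty} p(j) = \pi(n),
\]
and $\sum_{n \geq 1} \pi(n) = \frac{1}{m_p} \sum_{j \geq 1} j\, p(j) = 1$, so $\pi$ is indeed a probability measure.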

The renewal theorem

We will study the long term behavior of this process, more precisely the probability
for a time to be a renewal time, and show that in the long run this is approximately
$m_p^{-1}$ and thus independent of the delay distribution. This is done via a so-called
coupling argument.
Let $(S_n)_{n \in \mathbb{N}_0}$ and $(\tilde{S}_n)_{n \in \mathbb{N}_0}$ be two independent renewal processes with the same
increment distribution $p$ but with different delay distributions $a$ and $b$, and let $(V_n^+)$
and $(\tilde{V}_n^+)$ be the corresponding forward recurrence time chains. Put
\[ \tau_{(1,1)} = \inf\{n \geq 1 \mid V_n^+ = 1,\ \tilde{V}_n^+ = 1\}. \]
Lemma 2.1 With the notation as above $V^* = (V_n^+, \tilde{V}_n^+)_{n \in \mathbb{N}_0}$ is a Markov chain on
$\mathbb{N}^2$. If $p$ is aperiodic, i.e. $\gcd\{n \mid p(n) > 0\} = 1$, and $m_p < \infty$ then
\[ P(\tau_{(1,1)} < \infty) = 1. \]

Proof: Clearly $V^*$ is a Markov chain with $\pi \otimes \pi$ as an invariant measure. We will
assume that $p$ has infinite support and show that $V^*$ is irreducible. Then $V^*$ is
positive recurrent and especially $P(\tau_{(1,1)} < \infty) = 1$. If the support is finite, the
chain must be restricted to a finite set, but the argument is similar.
We start by showing that Markov chains with transition probabilities $P$ are aperiodic. If $x, y \in \mathbb{N}$ with $p(x) > 0$ and $p(y) > 0$ we get $P^x(x,x) = p(x) > 0$ and by the
Chapman-Kolmogorov equations $P^{x+y}(x,x) \geq p(y)p(x) > 0$, thus
\[ N = \{n \mid P^n(x,x) > 0\} \]
contains $x$ and all numbers of the form $x + y$ for $p(y) > 0$. Since $p$ is aperiodic
$\gcd(N) = 1$, and the Markov chains are aperiodic by definition.
Since $N$ is stable under addition, Lemma 2.2 below shows that we can find (large)
$r, s \in N$ with $\gcd(r,s) = 1$. Then there are $n$ and $m$ such that
\[ nr = ms + 1, \]
and we can assume $n, m \geq 1$. For $i, j \in \mathbb{N}$ with $i \geq j$ it follows that
\[ j + (i-j)nr = i + (i-j)ms. \]
Starting $V^*$ in $(i,j)$ it can jump to $(1,1)$ in the following way: Let $V^+$ jump to $x$
after it reaches 1 for the first time (this takes $i$ steps), and then in $(i-j)ms$ steps let
it jump from $x$ to $x$. Similarly let $\tilde{V}^+$ jump to $x$ the first time it hits 1 (this takes
$j$ steps), and then in $(i-j)nr$ steps let it jump from $x$ to $x$. Then they both end
up in $x$ after $j + (i-j)nr$ steps, and $V^*$ gets to $(1,1)$ after $x - 1$ further steps. All this
happens with probability $p(x)^2\, P^{(i-j)nr}(x,x)\, P^{(i-j)ms}(x,x) > 0$. On the other hand,
if $V^*$ starts in $(1,1)$ find a $y \geq i$ with $p(y) > 0$ and let the chain jump to $(y,y)$.
Choose according to aperiodicity $r$ so large that $P^r(y,y) > 0$ and $P^{r+(i-j)}(y,y) > 0$,
and let $V^+$ jump from $y$ to $y$ in $r + (i-j)$ steps and $\tilde{V}^+$ jump from $y$ to $y$ in $r$ steps.
Then after $y - i$ further steps $V^*$ ends up in $(i,j)$. This happens with probability
$p(y)^2\, P^r(y,y)\, P^{r+(i-j)}(y,y) > 0$, and irreducibility follows.

In the proof we used the following lemma:

Lemma 2.2 If $N$ is a set of integers closed under addition and with $\gcd(N) = d$,
then there is an $n_0 \in \mathbb{N}$ such that for all $n \geq n_0$, $nd \in N$.

Now define the coupling time for the two renewal processes to be
\[ T_{ab} = \inf\{n \mid S_i = \tilde{S}_j = n \text{ for some } i, j \in \mathbb{N}_0\}, \]
and notice that $\tau_{(1,1)} + 1 = T_{ab}$. From Lemma 2.1 we immediately get the following
corollary.

Corollary 2.3 For two independent renewal processes with the same aperiodic increment distribution but with perhaps different delays $a$ and $b$, the coupling time is
almost surely finite, i.e. $P(T_{ab} < \infty) = 1$.
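To illustrate the corollary, the following Python sketch simulates two independent renewal processes with a common increment distribution and reports the first common renewal time, i.e. an empirical value of $T_{ab}$; the distributions and the cut-off are arbitrary choices for the example.

import random

def renewal_set(delay, increment, horizon):
    """Return the set of renewal times of a delayed renewal process up to horizon."""
    def draw(dist):
        values, probs = zip(*sorted(dist.items()))
        return random.choices(values, weights=probs)[0]
    times, t = set(), draw(delay)
    while t <= horizon:
        times.add(t)
        t += draw(increment)
    return times

if __name__ == "__main__":
    p = {1: 0.2, 2: 0.5, 3: 0.3}   # common aperiodic increment distribution
    a = {1: 1.0}                   # delay of the first process
    b = {5: 1.0}                   # delay of the second process
    common = renewal_set(a, p, 10_000) & renewal_set(b, p, 10_000)
    print("coupling time T_ab:", min(common) if common else "not observed")

By Corollary 2.3 the coupling time is almost surely finite, so for a large enough horizon a common renewal time is observed.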

Since $p$ is a probability distribution on $\mathbb{N}$, $p^{*j}(n) = 0$ for $j > n$, hence
\[ u(n) = \sum_{j=0}^{\infty} p^{*j}(n), \qquad n \geq 0, \]
is a well defined sum and $u$ is called the renewal function. Here convolution is
interpreted as convolution of probabilities on $\mathbb{N}_0$. Thus $p^{*0} = \varepsilon_0$, the one point
measure at 0, and $u(0) = 1$. Furthermore, all probability measures previously defined
are extended if necessary to have mass 0 at 0. If the renewal process $(S_n)$ has delay
$a$, we see by conditioning on the time for the first renewal that

\[ P(V_{n-1}^+ = 1) = P(\exists\, j : S_j = n) = a * u(n), \]
i.e. the probability that $n$ is a renewal time is $a * u(n)$.
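To make the renewal function concrete, note that conditioning on the last increment gives the recursion $u(0) = 1$ and $u(n) = \sum_{k=1}^{n} p(k)\, u(n-k)$ for $n \geq 1$. The following Python sketch computes $u$ and $a * u$ this way; the increment and delay distributions below are arbitrary choices for the example.

def renewal_function(p, horizon):
    """u(n) = sum_j p^{*j}(n), computed by the recursion u(n) = sum_k p(k) u(n-k)."""
    u = [1.0] + [0.0] * horizon
    for n in range(1, horizon + 1):
        u[n] = sum(p.get(k, 0.0) * u[n - k] for k in range(1, n + 1))
    return u

def delayed_renewal_prob(a, p, horizon):
    """a * u(n): the probability that n is a renewal time under delay a."""
    u = renewal_function(p, horizon)
    return [sum(a.get(k, 0.0) * u[n - k] for k in range(1, n + 1))
            for n in range(horizon + 1)]

if __name__ == "__main__":
    p = {1: 0.2, 2: 0.5, 3: 0.3}       # example increment distribution (aperiodic)
    a = {1: 0.5, 3: 0.5}               # example delay distribution
    m_p = sum(j * pj for j, pj in p.items())
    au = delayed_renewal_prob(a, p, horizon=50)
    print("a*u(50) =", round(au[50], 4), "  1/m_p =", round(1 / m_p, 4))

Running this, $a * u(n)$ is already close to $1/m_p$ for moderate $n$, which is exactly the content of the renewal theorem proved below.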


We are now ready to prove the main theorem about convergence of the renewal
function.


Theorem 2.4 Suppose $a$, $b$ and $p$ are distributions on $\mathbb{N}$ and $p$ is aperiodic with
$m_p < \infty$, then

(i) $|a * u(n) - b * u(n)| \to 0$ for $n \to \infty$,

(ii) $|a * u(n) - m_p^{-1}| \to 0$ for $n \to \infty$.

Proof: We start with (i), which is proved using the coupling time. Define a new
"coupled" forward recurrence time chain
\[ \hat{V}_n^+ = \begin{cases} V_n^+ & \text{if } n < T_{ab} \\ \tilde{V}_n^+ & \text{if } n \geq T_{ab} \end{cases} \]
The strong Markov property shows that $V^+$ and $\hat{V}^+$ have the same distribution.
From this it follows that

\begin{align*}
|a * u(n) - b * u(n)| &= |P(V_{n-1}^+ = 1) - P(\tilde{V}_{n-1}^+ = 1)| \\
&= |P(\hat{V}_{n-1}^+ = 1) - P(\tilde{V}_{n-1}^+ = 1)| \\
&= |P(V_{n-1}^+ = 1, T_{ab} > n-1) - P(\tilde{V}_{n-1}^+ = 1, T_{ab} > n-1)| \\
&\leq P(T_{ab} > n-1)
\end{align*}
From Corollary 2.3 the last term tends to zero for $n \to \infty$ and this shows (i).
Since $\pi$ is invariant for the forward recurrence time chain, $(V_n^+)_{n \in \mathbb{N}_0}$ is stationary if
we use $\pi$ as delay distribution, and we see that
\[ \pi * u(n) = P(V_{n-1}^+ = 1) = P(V_0^+ = 1) = \pi * u(1) = \pi(1). \]

Thus $\pi * u$ is constant and equals
\[ \pi(1) = \frac{1}{m_p} \sum_{j=1}^{\infty} p(j) = \frac{1}{m_p}, \]
so (ii) follows by (i) using $b = \pi$.
