Module 2
Sayan Gupta
Stochastic Processes
We define a stochastic process (or a random process) as one in which a random variable evolves with time.
Figure 1: schematic of a random process X(t, ω) — the horizontal axis is the time parameter t and the vertical axis spans the sample space Ω, with one sample function per outcome ω.
When both t and ω are fixed, X(t = t_k, ω = ω_i) = x_ki is just a deterministic number.
- If X(t, ω) is a discrete random variable, then X(t, ω) is a random process with a discrete state space.
- On the other hand, if X(t, ω) is a continuous random variable, the state space of the random process is continuous.
- If the parameter t is discrete, then X(t, ω) is a discrete-parameter random process.
- Conversely, X(t, ω) is a continuous-parameter random process when the parameter t is continuous.

Thus, random processes can be classified into four types, according to whether the state space and the parameter space are each discrete or continuous.
The parameter of a random process could be t, indicating that the process evolves in time, or x, indicating that the process evolves in space. Usually, a process that evolves in space is called a random field.

A random process/random field could have multiple parameters. For example, a random field defined over a plane evolves along both the x and y coordinates.

For example, the wind force acting on a wind turbine (Fig. 2) is a two-parameter random process that evolves in space (the variation along the height of the wind turbine mast) as well as in time, and can be denoted as F(x, t).

Figure 2: wind force on a wind turbine mast.
A complete probabilistic description of a random process requires specifying the joint pdfs of all orders; this is too stringent a requirement and is not feasible. In practice, a process is instead characterized through its low-order moments.

The mean of the process is

μ_X(t) = E[X(t)] = ∫_{-∞}^{∞} x p_X(x; t) dx    ...(8.1)

The auto-correlation function is

R_XX(t1, t2) = E[X(t1)X(t2)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x1 x2 p_XX(x1, x2; t1, t2) dx1 dx2    ...(8.2)

When t1 = t2 = t,

R_XX(t, t) = E[X(t)²]    ...(8.3)

which is the mean-square value of the process.
Note that:

- in general, the statistical properties (mean, auto-correlation function and variance) are time varying;
- the averaging here is carried out across the ensemble (the sample space Ω) and not across time.
The auto-covariance function is

C_XX(t1, t2) = E[(X(t1) − μ_X(t1))(X(t2) − μ_X(t2))]
             = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x1 − μ_X(t1))(x2 − μ_X(t2)) p_XX(x1, x2; t1, t2) dx1 dx2    ...(8.4)

Expanding the product gives

C_XX(t1, t2) = R_XX(t1, t2) − μ_X(t1) μ_X(t2)    ...(8.5)

and setting t1 = t2 = t gives the variance

σ_X²(t) = C_XX(t, t)    ...(8.6)
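As an aside (not part of the original notes), the identity in Eq. (8.5) is easy to check numerically. The process below, X(t) = Bt + N with Gaussian B and N, is an arbitrary illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) process: X(t) = B*t + N, with random slope B ~ N(1, 1)
# and random intercept N ~ N(0, 1). Each pair (B, N) is one realization.
n = 300_000
B = rng.normal(1.0, 1.0, size=n)
N = rng.normal(0.0, 1.0, size=n)

def X(t):
    return B * t + N

t1, t2 = 0.5, 2.0
mu1, mu2 = np.mean(X(t1)), np.mean(X(t2))
R = np.mean(X(t1) * X(t2))                  # auto-correlation, Eq. (8.2)
C = np.mean((X(t1) - mu1) * (X(t2) - mu2))  # auto-covariance, Eq. (8.4)

# Eq. (8.5): C_XX(t1, t2) = R_XX(t1, t2) - mu_X(t1) * mu_X(t2)
print(abs(C - (R - mu1 * mu2)))   # ~0 up to floating-point error
```

For sample moments the identity even holds exactly, since the sample covariance expands algebraically into the sample correlation minus the product of sample means.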
Consider two random processes, X(t) and Y(t), such that

μ_X(t) = E[X(t)] = ∫_{-∞}^{∞} x p_X(x; t) dx    ...(8.7)

μ_Y(t) = E[Y(t)] = ∫_{-∞}^{∞} y p_Y(y; t) dy    ...(8.8)

The cross-correlation function is

R_XY(t1, t2) = E[X(t1)Y(t2)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x y p_XY(x, y; t1, t2) dx dy    ...(8.9)

The cross-covariance function is

C_XY(t1, t2) = E[(X(t1) − μ_X(t1))(Y(t2) − μ_Y(t2))]
             = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x − μ_X(t1))(y − μ_Y(t2)) p_XY(x, y; t1, t2) dx dy    ...(8.10)
             = R_XY(t1, t2) − μ_X(t1) μ_Y(t2)    ...(8.11)
Remarks

1. X(t) and Y(t) are mutually orthogonal if R_XY(t1, t2) = 0 ∀ t1, t2.
2. X(t) and Y(t) are uncorrelated if C_XY(t1, t2) = 0 ∀ t1, t2.
Example Problems

Problem 1:
Let X(t) = f(t), where f(t) is a deterministic function. Find the mean and the auto-correlation function of the process.

Solution:

μ_X(t) = E[X(t)] = ∫_{-∞}^{∞} f(t) p_X(x; t) dx = f(t)

Hence, the mean of the process is equal to the deterministic function f(t).

The auto-correlation function is given by

R_XX(t1, t2) = E[X(t1)X(t2)] = f(t1) f(t2)
Problem 2:
Let X(t) = A cos ωt, where A is a random variable and ω is a deterministic constant. Find the mean and the auto-correlation function of the process.

Solution:

μ_X(t) = E[X(t)] = E[A cos ωt] = E[A] cos ωt = μ_A cos ωt

where μ_A = E[A] is the mean value of the random variable A.

R_XX(t1, t2) = E[X(t1)X(t2)]
             = E[A² cos ωt1 cos ωt2]
             = E[A²] cos ωt1 cos ωt2
             = (σ_A² + μ_A²) cos ωt1 cos ωt2

Here, we use the property σ_A² = E[(A − μ_A)²] = E[A²] − μ_A².
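The two results can be checked with a quick Monte Carlo simulation. This sketch (not from the original notes) assumes, purely for illustration, A ~ N(2, 1) and ω = 3:

```python
import numpy as np

rng = np.random.default_rng(0)

omega = 3.0                 # deterministic frequency (assumed value)
mu_A, sigma_A = 2.0, 1.0    # assumed distribution: A ~ N(2, 1)
A = rng.normal(mu_A, sigma_A, size=200_000)

t = 0.7
mean_mc = np.mean(A * np.cos(omega * t))   # ensemble average of X(t)
mean_exact = mu_A * np.cos(omega * t)      # mu_A * cos(omega*t)

t1, t2 = 0.4, 1.1
R_mc = np.mean((A * np.cos(omega * t1)) * (A * np.cos(omega * t2)))
R_exact = (sigma_A**2 + mu_A**2) * np.cos(omega * t1) * np.cos(omega * t2)
```

With 200 000 realizations the sample averages agree with μ_A cos ωt and (σ_A² + μ_A²) cos ωt1 cos ωt2 to within a few parts in a thousand.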
Problem 3:
Let X(t) = A cos(ωt + Φ), where A is a random variable with pdf p_A(a), and Φ is a random variable distributed uniformly in (0, 2π), independent of A. Find the mean and the auto-correlation function of the process.

Solution:

μ_X(t) = E[A cos(ωt + Φ)]
       = cos ωt E[A cos Φ] − sin ωt E[A sin Φ]    ...(8.12)

Since A and Φ are independent,

E[A cos Φ] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} a cos s p_AΦ(a, s) da ds
           = ∫_{-∞}^{∞} ∫_{-∞}^{∞} a cos s p_A(a) p_Φ(s) da ds
           = E[A] E[cos Φ]    ...(8.13)
Here,

E[cos Φ] = ∫_{-∞}^{∞} cos s p_Φ(s) ds = (1/2π) ∫_0^{2π} cos s ds = 0.

Similarly,

E[sin Φ] = (1/2π) ∫_0^{2π} sin s ds = 0.

Substituting in Eq. (8.12), μ_X(t) = 0.
The auto-correlation function is

R_XX(t1, t2) = E[X(t1)X(t2)]
             = E[A² cos(ωt1 + Φ) cos(ωt2 + Φ)]
             = E[A²] E[cos(ωt1 + Φ) cos(ωt2 + Φ)]    ...(8.14)
             = (1/2) E[A²] E[cos{ω(t1 − t2)} + cos{ω(t1 + t2) + 2Φ}]
             = (1/2) A̅² cos{ω(t1 − t2)} + (1/2) A̅² E[cos{ω(t1 + t2) + 2Φ}]    ...(8.15)

where A̅² = E[A²].
Now

E[cos{ω(t1 + t2) + 2Φ}] = E[cos{ω(t1 + t2)} cos 2Φ − sin{ω(t1 + t2)} sin 2Φ]
                        = cos[ω(t1 + t2)] E[cos 2Φ] − sin[ω(t1 + t2)] E[sin 2Φ] = 0.    ...(8.16)

Substituting the results from Eq. (8.16) into Eq. (8.15), we get

R_XX(t1, t2) = (1/2) A̅² cos[ω(t1 − t2)].

When t1 = t2 = t,

R_XX(t, t) = (1/2) A̅² = σ_X²

(since μ_X(t) = 0, the mean square equals the variance).

Note that for this example, both the mean and the variance of the process are independent of time.
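Again as a numerical sanity check (not part of the notes): with an assumed A ~ Uniform(0, 2) and Φ ~ Uniform(0, 2π), the simulated mean is ≈ 0, and the simulated auto-correlation matches (1/2) E[A²] cos ω(t1 − t2) and is unchanged when both time instants are shifted by the same amount:

```python
import numpy as np

rng = np.random.default_rng(1)

omega = 2.0
n = 400_000
A = rng.uniform(0.0, 2.0, size=n)            # assumed p_A: Uniform(0, 2)
Phi = rng.uniform(0.0, 2.0 * np.pi, size=n)  # Phi ~ Uniform(0, 2*pi)

def X(t):
    return A * np.cos(omega * t + Phi)

EA2 = 4.0 / 3.0                              # E[A^2] for Uniform(0, 2)
t1, t2, c = 0.3, 1.0, 5.0

mean_mc = np.mean(X(t1))                     # should be ~ 0
R_mc = np.mean(X(t1) * X(t2))                # ~ (1/2) E[A^2] cos(omega*(t1 - t2))
R_exact = 0.5 * EA2 * np.cos(omega * (t1 - t2))
R_shift = np.mean(X(t1 + c) * X(t2 + c))     # same lag, shifted instants
```

The agreement of R_mc and R_shift illustrates that the auto-correlation depends only on the lag t2 − t1, which anticipates the discussion of stationarity below.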
Stationary Processes

A random process is said to be stationary with respect to a statistic if that statistic is invariant under a shift of the time origin. For example, if for a random process X(t),

E[X(t)] = E[X(t + c)]

for any value of c, then X(t) is said to be stationary in mean.

If E[X(t)] = E[X(t + c)] for any value of c, it implies that no matter at what time instant the averaging is carried out across the ensemble, the mean remains the same, i.e., the mean is a constant and is independent of time.
Similarly, if E[X²(t)] = E[X²(t + c)] for any value of c, then the process is stationary in second moment.

If E[{X(t) − μ_X(t)}²] = E[{X(t + a) − μ_X(t + a)}²] for any value of a, i.e., σ_X²(t) = σ_X²(t + a) for any value of a, then the process is stationary in variance.

Similarly, one can define stationarity in terms of the 1st order pdf, i.e., if p_X(x; t) = p_X(x; t + c) for any value of c, then the 1st order pdf of the process does not depend on t and the process X(t) is said to be first order strong sense stationary (1st order SSS).
1st order SSS:

p_X(x; t) = p_X(x; t + c) = p_X(x)  ∀ c

nth order SSS:

p_X(x1, ..., xn; t1, ..., tn) = p_X(x1, ..., xn; t1 + c, ..., tn + c) = p_X(x1, ..., xn)  ∀ c

where X is an n-dimensional vector. Note that this implies that stationarity is valid for all orders m ≤ n.

This is a stringent condition for stationarity and hence is termed strict (or strong) sense stationarity (SSS).
The following are examples of the conditions under which a process X(t) is said to be weak sense stationary:

μ_X(t) = μ_X(t + c) = μ_X  ∀ c

R_XX(t1, t2) = E[X(t1)X(t1 + τ)] = E[X(t1 + c)X(t1 + τ + c)]  ∀ c

Here, it is obvious that R_XX(t1, t2) does not depend on (t1, t2) individually but only on the difference τ = t2 − t1:

R_XX(t1, t2) = R_XX(t2 − t1) = R_XX(τ)    ...(8.17)
For a process stationary in correlation, τ in Eq. (8.17) refers to the lag, as shown in Fig. 4. Here, τ is the distance between two time instants, i.e., the distance between the two red lines.

The ensemble average for a process stationary in correlation remains the same as long as the distance between the two time instants remains the same.

Figure 4
A typical plot of the variation of R_XX(τ) with respect to τ for a process that is stationary in correlation is shown in Fig. 5. For a positively correlated process, it is reasonable to expect that as the distance between the two time instants increases, the correlation decreases.

Here, τ_c is the critical value of τ such that, if τ > τ_c, X(t1) and X(t1 + τ) can be assumed to be uncorrelated. In such situations, τ_c is referred to as the correlation length of the process.

Figure 5

A stochastic process is called a δ-correlated process if its auto-correlation function is proportional to the Dirac delta function,

R_XX(τ) = R_0 δ(τ),    ...(8.18)

i.e., the process is uncorrelated for any non-zero lag.
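The exponential-type decay sketched in Fig. 5 can be illustrated with a simple simulated process. The discrete-time AR(1) recursion below is an illustrative choice (not from the notes); for it, C_XX(τ) = σ_e²/(1 − a²) · a^|τ|, so the correlation length is set by the coefficient a:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1): x[k] = a * x[k-1] + e[k], e[k] ~ N(0, sigma_e^2)
a, sigma_e = 0.9, 1.0
n = 500_000
e = rng.normal(0.0, sigma_e, size=n)
x = np.empty(n)
x[0] = 0.0
for k in range(1, n):
    x[k] = a * x[k - 1] + e[k]

var_x = sigma_e**2 / (1 - a**2)                      # stationary variance
lags = [0, 1, 5, 20]
C_est = [np.mean(x[:n - L] * x[L:]) for L in lags]   # sample covariance at lag L
C_exact = [var_x * a**L for L in lags]               # exponential decay a^|tau|
```

The estimated covariances drop from var_x at lag 0 toward zero by lag 20; a lag beyond which the covariance is negligible plays the role of τ_c here.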
The mean and variance of a 1st order SSS random process are

μ_X(t) = ∫_{-∞}^{∞} x p_X(x; t) dx = ∫_{-∞}^{∞} x p_X(x) dx = μ_X    ...(8.19)

Similarly,

σ_X²(t) = ∫_{-∞}^{∞} (x − μ_X(t))² p_X(x; t) dx = ∫_{-∞}^{∞} (x − μ_X)² p_X(x) dx = σ_X²    ...(8.20)

i.e., both are constants, independent of time.
Note that a second order SSS process is also first order SSS, since stationarity of order n implies stationarity of all orders m ≤ n.

The covariance of a second order SSS process is expressed as

C_XX(t1, t2) = E[(X(t1) − μ_X)(X(t2) − μ_X)]
             = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x1 − μ_X)(x2 − μ_X) p_XX(x1, x2; t2 − t1) dx1 dx2
             = C_XX(t2 − t1)    ...(8.21)
Remarks:

- X(t) is said to be a second order WSS process if
  - the mean of the process is independent of time, i.e., μ_X(t) = μ_X, and
  - the covariance function depends only on the time lag, i.e., C_XX(t1, t2) = C_XX(t2 − t1).
- The default notion of stationarity of a random process is 2nd order WSS, implying the above two conditions.
- Stationarity in random fields is referred to as homogeneity, and such fields are called homogeneous fields.
Ergodic Process

Figure 6

The temporal average of a single realization of X(t) over the interval (−T, T) is

T_ave[X(t)] = (1/2T) ∫_{-T}^{T} X(t) dt    ...(8.22)

The ensemble average is

E[X(t)] = ∫_{-∞}^{∞} x p_X(x; t) dx    ...(8.23)

With respect to Fig. 6, we see that the ensemble average is averaging along the ensemble (green line) while the temporal average is averaging along time (red line).

From Eq. (8.22), it is obvious that the temporal mean, which we denote T_ave[X(t)] = η_T, is a random variable, since it takes a different value for each realization. Its expectation is

E[η_T] = (1/2T) ∫_{-T}^{T} E[X(t)] dt = E[X(t)] = μ_X    ...(8.24)

where the last step assumes the process is stationary in mean.
The variance of the random variable η_T is

σ_{η_T}² = (1/4T²) ∫_{-T}^{T} ∫_{-T}^{T} E[(X(t1) − μ_X)(X(t2) − μ_X)] dt1 dt2    ...(8.25)
         = (1/4T²) ∫_{-T}^{T} ∫_{-T}^{T} C_XX(t2 − t1) dt1 dt2
         = (1/4T²) ∫_{-2T}^{2T} (2T − |τ|) C_XX(τ) dτ
         = (1/T) ∫_0^{2T} (1 − τ/2T) C_XX(τ) dτ → 0 as T → ∞    ...(8.26)

Here, the last equality uses the evenness of C_XX(τ), and the limit holds provided C_XX(τ) decays sufficiently fast, e.g., is absolutely integrable.
The result of Eq. (8.26) implies that as the sample time history of a realization of the random process becomes infinitely long, the statistical fluctuation of the temporal mean goes to zero. In other words, the temporal mean ceases to be a random variable.

Furthermore, from Eqs. (8.23) and (8.24), it follows that the temporal mean and the ensemble mean are equal. Thus, the process X(t) is said to be ergodic in mean.

Similarly, ergodicity in the 2nd moment implies

T_ave[X²(t)] = E[X²(t)]

and ergodicity in auto-correlation implies

T_ave[X(t)X(t + τ)] = E[X(t)X(t + τ)] = R_XX(τ)
Remarks:

- Ergodicity of a random process with respect to a statistical parameter implies equivalence of the temporal and ensemble averages of that parameter.
- All ergodic processes are necessarily stationary; however, stationary processes need not all be ergodic.
- The practical implication of ergodicity is that if a long sample time history of an ergodic random process is available, one can, in principle, obtain all the relevant statistical information by computing temporal statistics in lieu of ensemble statistics.
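To make the distinction concrete, here is a small illustrative simulation (not from the notes): for X(t) = cos(ωt + Φ) with Φ ~ Uniform(0, 2π), the time average of one long realization matches the ensemble average (both ≈ 0), so the process is ergodic in mean; the comments note a stationary process that is not:

```python
import numpy as np

rng = np.random.default_rng(3)
omega = 2.0

# Temporal mean, Eq. (8.22), from ONE long realization of X(t) = cos(omega*t + Phi)
phi = rng.uniform(0.0, 2.0 * np.pi)
T = 1000.0
t = np.linspace(-T, T, 2_000_001)
temporal_mean = np.mean(np.cos(omega * t + phi))        # -> 0 as T grows

# Ensemble mean, Eq. (8.23), at one fixed time across many realizations
phis = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
ensemble_mean = np.mean(np.cos(omega * 1.234 + phis))   # also ~ 0

# Contrast: X(t) = A (a random constant in time) is stationary, but the temporal
# mean of any one realization equals that realization's A, not E[A] -- so that
# process is NOT ergodic in mean.
```

The single-realization average and the ensemble average agree here, which is exactly the practical benefit of ergodicity noted in the remarks above.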