Sunteți pe pagina 1din 4

Probability Basics

c 20032006 Communication Theory Group, ETH Zurich SVN Revision: 1767, January 24, 2007
Discrete Random Variables of X given Y = y is then defined as
Basic Probability Discrete Random Variables [1, 3.13.2].
Z x
fX,Y (, y)
FX | Y (x | y) = d
Sample Space and Sigma-Field [1, 1.2.5]. P(A B) = P(A) + P(B) P(B A). The probability mass function (PMF) of a dis- fY (y)
The sample space is the set of all outcomes, Let A1 , A2 , . . . be an increasing sequence crete RV X is the function f : R [0, 1] for any y such that fY (y) > 0. The conditional
or elementary events, of a random experiment. of events, so that A1 A2 , and given by fX (x) = P(X = x). The joint PMF of density function is given by
The power set of contains all subsets of write for their limit a random vector X is defined analogously.
fX,Y (x, y)
and is written as {0, 1} . A collection F of [ The PMF of a discrete RV satisfies fX | Y (x | y) =
A = Ai = lim Ai . The set of x such that fX (x) , 0 is count- fY (y)
subsets of is called a -field if it satisfies
the following conditions i=1
i
able. for any y such that fY (y) > 0.
i fX (xi ) = 1.
P
F. The P(A) = limi P(Ai ). The same Expectation [1, 4.3]. The expectation of a
If A1 , . . . An F then ni=1 Ai F , where holds for a decreasing sequence of events Discrete RVs X1 , X2 , . . . .XN are independent continuous RV X is
S
n = 1, 2, . . . can be infinite. and their intersection. if the events {X1 = x1 }, {X2 = x2 }, . . . , {XN = Z
If A F then Ac F . xN } are independent for all x1 , x2 , . . . , xN . E[X] := x fX (x)dx
Conditional Probability [1, 1.4].
A is called an event; -fields are closed under If X and Y are independent and g, h :
If P(B) > 0 then the conditional probabil- R R, then g(X) and h(X) are indepen- whenever the integral exists.
countable intersections.
ity that A occurs given that B occurs is dent also. If X and g(X) are continuous random
Probability Space [1, 1.3.1]. A proba- defined to be X1 , X2 , . . . , XN are independent variables then Z
bility measure P on (, F ) is a function P(A B) iff fX2 ,X2 ,...,XN (x1 , x2 , . . . , xN ) =

P : F [0, 1] satisfying P(A | B) := . 
g(X)

= g(x) fX (x)dx.
P(B) fX1 (x1 ) fX2 (x2 ) . . . fXN (xN ) for all E
P() = 0, P() = 1. For any events A and B such that 0 < x1 , x2 , . . . , xN R.

If A1 , A2 , . . . is a collection of disjoint If X has PDF fX with fX (x) = 0 when x <
P(B) < 1, 0, and distribution function FX , then
members of F , in that Ai A j = for all Expectation [1, 3.3]. The expectation of the
P(A) = P(A | B) P(B) + P(A | Bc ) P(Bc ) RV X with PMF fX is defined as
Z
pairs i, j with i , j, then
More generally, let B1 , B2 , . . . , Bn be a X E[X] = (1 FX (x))dx.
E[X] := x fX (x)
[ X partitioning of such that P(Bi ) > 0 for 0
P Ai =
P(Ai ). all i. Then x: fX (x)>0 If g : RN R is an F -measurable func-
i=1 i=1 X n
whenever the sum is absolutely convergent. tion then
The triple (, F , P) is called a probability P(A) = P(A | Bi ) P(Bi ). E g(X1 , X2 , . . . , XN ) =
 
The expectation of random vectors is de-
space. i=1 fined element wise. (
An event A is called null if P(A) = 0. Bayes Rule: let Bi and A be as before, If X has PMF fX and g : RN R, then g(x1 , x2 , . . . , xN )
An event B is said to occur almost surely P(A) > 0, then  X
if P(B) = 1. E g(X) =

g(x) fX (x) fX1 ,X2 ,...,XN (x1 , x2 , . . . , xN )dx1 dx2 . . . dxN
P(A | Bi ) P(Bi )
A probability space (, F , P) is called P(Bi | A) = Pn . x Z
complete if all subsets of null sets, i.e., i=1 P(A | Bi ) P(Bi ) whenever the sum is element-wise abso- = g(x) fX (x)dx.
events of zero probability, are events Independence [1, 1.5]. A family {A : i lutely convergent.
i
themselves. I} is called independent if If X 0 then E[X] 0. The expectation is linear: E[aX + bY] =
If a, b R then E[aX + bY] = a E[X] + a E[X] + b E[Y], for all a, b C.
Properties of a Probability Space [1,

\ Y b E[Y] (linearity).
1.3]. P Ai = P(Ai ) The random variable 1, taking the value 1 Functions of Random Variables [1, 4.7,
P(Ac ) = 1 P(A). iJ iJ always, has expectation E[1] = 1. 4.8]. Let X1 and X2 be RVs with joint den-
If B A then P(B) = P(A) + P(B \ A) for all finite subsets J of I. If X and Y are independent, then sity function fX1 ,X2 , and let T : (x1 , x2 )
P(A). E[XY] = E[X] E[Y]. (y1 , y2 ) be a one-to-one mapping taking
some domain D R2 onto some range
Sums of discrete RVs [1, 3.8]. The proba- R R2 . If g : R2 R and T maps the set
Random Variables bility of the sum of two RVs X and Y having A D onto the set B R, then
Basics random vector [X Y] has the following joint PMF fX,Y is given by "
g(x1 , x2 )dx1 dx2
X
properties, which hold analogously for N- P(X + Y = z) = fX,Y (x, z x).
Random Variables and Distribution Func- dimensional random vectors:
tions [1, 2.1]. A random variable (RV) is x "
A
lim FX,Y (x, y) = 0, lim FX,Y (x, y) = 1.
a function X : R with the property If X and Y are independent, then = g x1 (y1 , y2 ), x2 (y1 , y2 )

x,y x,y
that { : X() x} F for each x R. If [x1 y1 ] [x2 y2 ] then FX,Y (x1 , y1 )
X B
fX+Y (z) = P(X + Y = z) = fX (x) fY (z x).
FX,Y (x2 , y2 ).

Such a function is said to be F -measurable.
x
J(y1 , y2 ) dy1 dy2 ,
The (cumulative) distribution function (CDF) FX,Y is continuous from above in that
of a RV X is the function FX : R [0, 1] where J denotes the Jacobian of the transform
FX,Y (x + u, y + v) FX,Y (x, y) x1 x2
given by FX (x) := P(X x). A distribution
J(y1 , y2 ) = det y
1 y1
function has the following properties
as u, v 0. Continuous Random Variables x1 x2
.
The marginal distribution functions of X y2 y2

lim FX (x) = 0, lim FX (x) = 1. and Y are Density Functions [1, 4.1,4.5]. If X is a
x x
If x < y then FX (x) FX (y). lim FX,Y (x, y) = FX (x), continuous RV, its CDF FX = P(X x) can Then the pair Y1 , Y2 , given by (Y1 , Y2 ) =
y be expressed as T(X1 , X2 ) has joint density function
The CDF FX is right-continuous, that is,
fY1 ,Y2 (y1 , y2 ) =
Z x
FX (x + h) FX (x) as h 0. lim FX,Y (x, y) = FY (y).
P(X > x) = 1 FX (x).
x FX (x) = fX ()d.  
written as FX (x) = FX,Y (x, ) and FY (y) = fX1 ,X2 x1 (y1 , y2 ), x2 (y1 , y2 ) J(y1 , y2 )
P(x < X y) = FX (y) FX (x). The function fX is called the (probability) if (y , y ) R(T) and 0 otherwise.
P(X = x) = FX (x) lim FX (y). FX,Y (, y), respectively. 1 2
yx The definitions of discrete, continuous, and density function of the continuous RV X. If the transformation is not one-to-one
R
The RV X is called continuous [2, 4.2] if mixed RVs extend to random vectors. fX (x)dx = 1. but piecewise one-to-one and sufficiently
its CDF FX is continuous; in that case, Relationship Between Real-Valued and P(X = x) = 0 for all x R. smooth, the more general transformation
FX (x ) = FX (x) x, and P(X = x) = 0. It is dis- Complex-Valued Operations [3, 5]. A Rb rule is the following: Let I1 , . . . , IN be
crete if it takes values in some countable sub- Complex RV U = X + jY can be treated as P(a X b) = a fX (x)dx. intervals which partition R, and suppose
set {x1 , x2 , . . . } of R; in that case, FX (x) is con- a random vector [X Y]. Consider arbitrary The random variables X and Y are jointly Y = g(X) is strictly monotone and continu-
stant except for a finite number of jump dis-
complex vectors u, v CN and a complex continuous with joint PDF fX,Y : R2 [0, ) ously differentiable on every In . For each n,
continuities, and P(X = x) = FX (x) FX (x ). MN if the function g : In R is invertible on
It is of mixed type if FX (x) is piecewise con- M N matrix A C . Define uR := <{u} Z y Z x g(In ) and we write hn for the inverse func-
tinuous with a finite number of jump dis- and u I := ={u} and the real 2N-dimensional FX,Y (x, y) = fX,Y (, )dd tion. Then,
continuities. vector = = N
<{u}
  
u

The indicator function IA : R is defined
X
u = uR = ={u} .
  0
I
for each x, y R. If FX,Y is sufficiently dif- fY (y) = fX hn (y) hn (y) ,
as the binary RV e ferentiable then n=1
Then the complex-valued linear operation
1 if A, 2
(
v = Au can be expressed in terms of the fX,Y (x, y) = FX,Y (x, y). with the convention that the nth summand
IA () = xy
0 if Ac . real quantities as v = Au, where a matrix A is 0 if hn is not defined at y.
satisfying v = Aueexists eeand is given by e The marginal densities are given as If X and Y have joint density function fX,Y ,
Independence [1, 4.2]. Random vari-
Z then the sum of RVs X + Y has density func-
A <{A} ={A}

A
e ee   
ables X and Y (discrete or continuous) are A = AR A I = ={A} <{A} . fX (x) = fX,Y (x, y)dy tion Z
I R
called independent if {X x} and {Y y} e Z fX+Y (z) = fX,Y (x, z x)dx.
are independent events for all x, y R, i.e., Let B be another complex matrix, then
FX,Y (x, y) = FX (x)FY (y) for all x, y R. AB = AB. fY (y) = fX,Y (x, y)dx.

g+ B e If X and Y are independent, this simplifies


Let g, h : R R. The functions g(X) and A =eA + B.
For continuous RVs, independence is equiva- to
h(Y) map into R. Suppose that g(X) and A = ATe
^ H
. e
lent to requiring that fX,Y = fX (x) fY (y) when-
Z
h(Y) are random variables, i.e. they are F - A f1 = eA1 . fX+Y (z) = fX (x) fY (z x)dx.
ever FX,Y is differentiable at (x, y).
measurable. If X and Y are independent,
det(A) = |det A|2 = det(AAH ). The above properties hold analogously
g e
then so are g(X) and h(Y).
u + v = u + v.
e for higher-dimensional continuous random Inverse Transform [1, 4.11.1]. Let F be a
Random Vectors [1, 2.5]. The joint dis- Au ]= Au e. e vectors. distribution function and let Y be uniformly
tribution function of a random real-valued fn H eoe T The probabil- distributed on the interval [0, 1].
< u v = u v. Conditioning [1, 4.6].
vector X := [X1 X2 . . . XN ]T on the probabil- ity P(X x | Y = y) is undefined because If F is a continuous function, the RV X =
If A CNN is unitary, then A R2N2N
ee
ity space (, F , P) is the function FX : RN P(Y = y) = 0 for continuous RVs. Hence, F1 (Y) has distribution function F.
[0, 1] given by FX (x) := P(X x) for x RN , is orthogonal. e
conditioning has to be understood as the Let F be the distribution function of a
NN
where the ordering x y means that xi yi If A C is positive semidefinite, then limit of P(X x | y Y y + dy) for RV taking on non-negative integer val-
for each i = 1, 2, . . . . so is A R2N2N ; moreover, uH Au = dy 0. The conditional distribution function ues. The RV X given by X = k iff
The joint distribution function FX,Y of the uT Au.e F(k 1) < U < F(k) has distribution
e ee function F.
Order Statistics [2, 7.1]. Consider covariance matrices
the N-dimensional random vector X = KUR VR := Cov[UR , VR ] , Jointly Gaussian Random Vectors
[X1 X2 . . . XN ]T . Ordering the elements of KUR VI := Cov[UR , VI ] ,
the vector for each outcome from smallest Gaussian Random Variables [4, 2.2]. A matrix, the PDF is
to largest yields a new random vector Y. KUI VR := Cov[UI , VR ] , continuous RV X R is Gaussian or normal
 
exp 21 (x X )T K1 (x X )
The kth element of Y is called the kth-order KUI VI := Cov[UI , VI ] , distributed if it has the PDF fX (x) =
X
.
(x X )2
!
statistic. If the Xn are i.i.d. with CDF FX and or the two complex-valued covariance ma- 1 (2)N/2 det KX
PDF fX , then the PDF of the kth statistik Yk trices fX (x) = q exp ,
22X 22X For zero-mean JGRVs, independence and
is given as   H 
N! KUV := E U E[U] V E[V] , uncorrelatedness are equivalent.
fk (y) = Fk1 (y) X R, X 0. Pairwise independence of the elements
(k 1)!(N k)! X
 T 
of a JGRV implies overall independence

JUV := E U E[U] V E[V] . It is denoted X N(X , 2X ).
 Nk of the elements of the vector.
1 FX (y) fX (y). The moment generating function of X
KUV is called covariance matrix, while JUV is
referred to as pseudo-covariance matrix. The N(X , X ) is given as
2 Covariance Matrices [4, 2.5]. The matrix
KX is a covariance matrix of a real JGRV
Variance and Covariance following relations hold: s X
2 2 !
iff it is positive semidefinite. In particular,
KUV = KUR VR + KUI VI , gX (s) = exp sX + .
2 KX is the covariance matrix of X = AW,
The following properties hold for both dis- + i(KUI VR KUR VI ),
crete and continuous random variables and The moments of the zero mean Gaussian RV where A is the unique square root matrix
vectors. JUV = KUR VR KUI VI , X N(0, X ) are A = Q1/2 QT arising from the spectral de-
Moments [1, 3.3]. If k N, the kth moment + i(KUI VR + KUR VI ), h i (2k)!X 2k composition of KX , and W N(0, I). For
of the real RV X is defined as E X2k = . any covariance matrix K, a zero mean JGRV
h i and k!2k X N(0, K) exists and can be expressed as
mk = E Xk . KUR VR = 12 <{KUV + JUV } , The odd moments of X are zero. X = AW, where AAT = K.
The kth central moment
h is i KUI VI = 12 <{KUV JUV } , The Q-Function [5, 2.2]. The integral Conditional Probabilities [4, 2.7]. Let X
k := E (X mk ) .
k
K = 1 ={K + J } , over the tail of the Gaussian PDF is called and Y be zero mean, jointly Gaussian and
UI V R 2 UV UV
As a special case, the variance is defined as the Q-function jointly non-singular. Then the dependence
KUR VI = 1
2 ={KUV + JUV } . 1
Z
of X on Y and vice versa can be stated ex-
u2 /2
h i
Var[X] := 2 = E (X E[X])2 , U and V are said to be uncorrelated if the Q(x) = e du. plicitly as
2 x X = GY + V,
also denoted by Var[X] = 2 . The following four real covariance matrices above van- The Q-function is related to the com- with
properties hold: ish. It follows that they are uncorrelated iff
plementary error function erfc(x) =
h i K UV = JUV = 0. R 2 G = KXY K1
Var[X] = E X2 E[X]2 . (2/ ) x eu du according to
Y
Complex Covariance Matrices. KV = KX GKY GT = KX KXY K1 Y KXY ,
T
Var[aX] = a2 Var[X] , for a R. h i 1 x
!
KU = KUU = E UU H E[U] E[U]H . Q(x) = erfc . and the conditional PDF of X given Y is
Conditional  Expectation [1, 3.7]. Let 2 2
 
exp 12 (x Gy)T K1
i= |a| hVar[U]i?, for a C.
2
(Y) = E X | Y = y . Then (Y) is called Var[aU] V (x Gy)

h Q(x) can be bounded for x > 0 as fX | Y (x | y) = .
the conditional expectation of X given Y, writ- E U1 U? = E U2 U? = (2)N/2 det KV
1 ex /2 ex /2
2 2
2 1  
ten as E[X | Y]. Conditioning for contin- ? < Q(x) < ,
uous random variables always has to be Cov[U 1 , U2 ] = Cov[U 2 , U1 ] . 1 2 Here, V N(0, KV ), independent of Y; it is
x 2x 2x
understood in the limit dy 0. The con- Proper Complex Random Vectors [3, 4]. sometimes called the innovation.
and
ditional variance is defined as Var[X | Y] := A complex random vector U is called 1 2 Jointly Complex Gaussian Random Vec-
Q(x) ex /2 .
h i
E (X E[X | Y])2 | Y . proper if its pseudo-covariance JU vanishes. 2 tors [3, 6]. Let U CN be a proper com-
The conditional expectation satisfies The complex vectors U and V are called plex Gaussian random vector with mean U
jointly proper if the composite random vec- Gaussian Random Vectors [4, 2.32.5]. and nonsingular covariance matrix KU .
EX [EY [Y | X]] = E[Y]. 
EX [EY [Y | X]g(X)] = E Yg(X) .
 tor [U T V T ]T is proper. X = [X1 X2 . . . XN ]T is defined to be a jointly Then the PDF is given by
EX2 [EY [Y | X1 , X2 ] | X1 ] = E[Y | X1 ] . Any subvector of a proper random vec- Gaussian random vector (JGRV) if, for all real
 
exp (u U )H K1 U (u U )
EY [Yg(X) | X]] = g(X) EY [Y | X] for any tor is also proper. vectors s = [s1 s2 . . . sN ] , the linear combi- fU (u) = fU (u) =
T
.
suitable function g(x). Two jointly proper complex random vec- nation sT X = s1 X1 + s2 X2 + + sN XN is a e e N det(KU )
The conditional variance satisfies tors U and V are uncorrelated iff their Gaussian RV. Two jointly proper Gaussian random vec-
Var[Y] = EX [Var[Y | X]] + Var[EY [Y | X]].
covariance matrix KUV vanishes. A JGRV is completely characterized by the tors U and V are independent iff KUV = 0.
A complex random vector U is proper iff mean Xh := E[X] and the covariance matrix Analogously to the real case, the covari-
KUR = KUI and KUI UR = KTUI UR . Hence, KX = E (X X )(X X )T . X is a JGRV ance matrix KU is Hermitian and positive
i
Covariance [1, 3.6]. The covariance of the KUI UR is zero on the main diagonal; thus semidefinite.
X N(X , KX ) iff its MGF is
RVs X and Y is defined as the real and imaginary parts of each com- Circularly Symmetric Random Vectors.
sT KX s
!
Cov[X, Y] = E[(X E[X])(Y E[Y])] ponent of U are uncorrelated. gX (s) = exp sT X + . A complex random vector U is called circu-
and the correlation coefficient as A real random vector is proper iff it is 2 larly symmetric if fU (ei u) does not depend
constant. For JGRVs with a non-singular covariance on R. For zero mean Gaussian random
Cov[X, Y]
(X, Y) := Any random vector V obtained from U
vectors, circular symmetry is equivalent to
Var[X] Var[Y] by an affine transformation is also proper,
properness.
as long as the variances are non-zero. i.e., V = AU + b is proper for all A
Cov[X, Y] = E[XY] E[X] E[Y]. CMN and b CM . Then, U and V are
X and Y are called uncorrelated jointly proper. Inequalities and Limit Theorems
if Cov[X, Y] = 0. Let U and V be two independent com-
The correlation coefficient satisfies plex random vectors, and let U be proper. Modes of Convergence [1, 7.2]. Let Markov inequality: if X is a RV with finite

(X, Y) 1, with equality iff P(aX+bY = Then the linear combination aU + bV, X1 , X2 , . . . be a sequence of RVs on some mean , then
.

with a, b C, b , 0 is proper iff V is also probability space We say:
c) = 1 fore some a, b, c R. P(|X| > ) .
proper. Xn X almmost surely, with probability 1, 
If X and Y are uncorrelated, then a.s.
Chernoff bound: if X is a RV with finite
or almost everywhere, written Xn X, if
Var[X + Y] = Var[X] + Var[Y].
Characteristic Functions P({ : Xn () X() as n }) = mean, then
 h i
Covariance Matrices [4, 2]. The covari- 1. P(X v) min esv E esX .
ance matrix of a random vector X is defined Characteristic Function [2, 5.5]. The char- Xn X in theh rth imean, r 1, written s0
r
as X, if E Xn < for all n, and Cauchy-Schwarz inequality: if X and Y are
acteristic function cX of a RV X is defined r
h i Xn
KX = E (X E[X])(X E[X])T . as  r RVs with finite second moments, then
Z E |Xn X| 0 as n .  2 h i h i
h i 2 2
Properties of covariance matrices: cX () = fX (x)e dx = E e
jx jX
. Xn X in probability, written Xn
P
X, if E[XY] E X E Y
KX is symmetric. P(|Xn X| > ) 0 as n for all  > with equality iff P(aX + bY) = 1 for some
KX positive semidefinite, i.e., aT KX a 0 |cX ()| cX (0) = 1 0. real a and b, at least one of which is
for every vector a, its eigenvalues are cX () is uniformly continuous on R. Xn X in distribution, also termed weak non-zero.
nonnegative, and it can be written as cPX () is nonnegative definite, i.e., convergence or convergence in law, written
?
Limit Theorems [1, 5.10,7.4]. Let
KX = AT A for some matrix A. j,k cX (i j )ai a j 0 for all real Xn
D
X, if P(Xn x) P(X x) as n X1 , X2 , . . . be i.i.d. RVs.
The elements of X are uncorrelated if the 1 , 2 , . . . , N and complex a1 , a2 , . . . , aN . for all points x at which the function
covariance matrix is diagonal. Weak law of large numbers: if E[X1 ] = ,
For Y = aX + b and a, b R: cY () = FX (x) is continuous. then
Every positive semidefinite matrix is a
e jb cX (a). The following implications hold 1 X
N
covariance matrix. Xn
D
.
If X and Y are independent, then a.s. P
The cross-covariance matrix between the ran- cX+Y () = cX ()cY (). (X n X) (X n X). N
n=1
dom vectors X r P
h and Y is defined as i RVs X and Y are independent iff (Xn X) (Xn X) for any r 1. Strong law of large numbers: if E[|X1 |] < ,
KXY = E (X E[X])(Y E[Y])T , cX,Y (, ) = cX ()cY (). P D then
(Xn X) (Xn X). N
matrix is defined as RX = 1 X
h correlation
the i Moment Generating Function [1, 5.7]. If r > s 1, then (Xn r
X) (Xn
s
X).
a.s.
Xn E[X1 ] .
E XX T . The moment generating function (MGF) gX N
of a RV X is defined Z as Inequalities. Let  > 0 arbitrary. n=1
Central limit theorem: Let SN = N
P
KX = RX E[X] E[X]T . h i Chebyshev inequality: if X is a RV with n=1 Xn .
gX (s) = fX esx dx = E esX . If E[X1 ] = < and 2 = Var[X1 ] , 0 <
mean and variance , then 2
2 < , then
Complex Extension 2
For Y = aX + b and a, b R: gY () =

P( X ) 2 . SN N D
Complex-Valued Random Vectors [3, 2]. e jbs gX (as). 
N(0, 1).
Let U = UR + iUI and V = VR + iVI be N2
Taylor expansion of the MGF within its
complex random vectors. The expectation is circle of convergence yields
given as E Xk
h i
E[U] = E[UR ] + i E[UI ] .
X
gX (s) = sk .
The second order statistics are completely k!
k=0
characterized either by the four real-valued (n)
The nth derivative of the MGF is gX (s) =
h i
E Xn esX . Therefore,
(n)
gX (0) = E[Xn ] = mn .
Random Processes
Random Process [1, 8.1],[2, 9.1]. A ran- Let L denote a linear time invariant (LTI)
dom process X(t) is a family {X(t) : t T } system, i.e., L satisfies
of random variables that map the sample L[x(t) + y(t)] = L[x(t)] +
space into some set S. L[y(t)], , C.
A random process is called a discrete-time If y(t) = L[x(t)], then y(t + c) = L[x(t + x)],
random process if T is a finite set. c R.
It is called a continuous-time process if T Consider an LTI system L with impulse re-
is uncountable. sponse h() and the random process V(t) =
A realization, or sample path, is a collection L[U(t)]. Then,
{X(t, ) : t T } for a fixed . E[L[U(t)]] = RL[E[U(t)]].
The first order distribution of X(t) is de-
fined as RUV (t1 , t2 ) = RU (t1 , t2 )h? ()d.
R
FX(t) (x, t) = P(X(t) x). RV (t1 , t2 ) = RUV (t1 , t2 )h? ()d.
The n-th order distribution is defined as Let L be a differentiator, i.e., V(t) = U0 (t).
FX(t) (x1 , x2 , . . . , xn , t1 , t2 , . . . , tn ) Then
= P(X(t1 ) x1 , X(t2 ) t2 , . . . , X(tn ) tn ). RUU0 (t1 , t2 ) = RU (t1 , t2 )/t2 .
A random process is completely specified RU0 (t1 , t2 ) = 2 RU (t1 , t2 )/t1 t2 .
if a joint distribution is given for any finite If the input to a (not necessarily linear) mem-
subset of T . oryless system g(x) is a SSS process U(t), the
resulting output V(t) is also SSS.
Covariance and Correlation [1, 9]. The For a WSS process in an LTI system, the second
autocorrelation function of a complex-valued order properties of the output can be com-
random process U(t) is defined as puted explicitly: consider V(t) = L[U(t)],
RU (t, t0 ) = E U(t)U? (t0 ) , where U(t) is WSS and L LTI with impulse
 
and RU (t, t) is called the average power of response h() and transfer function H. Then
the process. The autocorrelation function is RUV () = RU () ? h? ().
positive semidefinite, i.e., for any ai , a j , SUV ( f ) = SU ( f )H? ( f )
X RV () = RUV () ? h().
RU (ti , t j )ai a?j 0. 2
i, j
SV ( f ) = SUV ( f )H( f ) = SU ( f ) H( f ) .
The autocovariance function is defined as Let L be a differentiator and U WSS. Then
RUU0 () = R0U ().
KU (t, t0 ) = E (U(t) U (t))(U(t0 ) U (t0 ))? ,
 
SUU0 ( f ) = j2 f SU ( f ).
where U (t) = E[U(t)]. Covariance and cor- RU0 () = R00 ().
relation functions are related according to U
SU0 ( f ) = 4 f SU ( f ).
KU (t, t0 ) = RU (t, t0 ) U (t)?U (t0 ).
Gaussian Processes [1, 9.6]. A real-
The variance of the process is 2 (t) = KU (t, t). valued continuous-time process X(t) is
The pseudocovariance function is defined as called a Gaussian process if each finite-
h i
JU (t, t0 ) = E (U(t) U (t))2 . dimensional vector [X(t1 ) X(t2 ) . . . X(tN )]T
The cross-correlation of two processes U(t) is a JGRV. A complex-valued continuous-
and V(t) is defined as time process U(t) is called a complex Gaus-
sian process if each finite-dimensional vector
RUV (t, t0 ) = E U(t)V ? (t0 ) = R?VU (t0 , t),
 
[U(t1 ) U(t2 ) . . . U(tN )]T is a proper complex
and the cross-covariance is JGVR.
KXY (t, t0 ) = RXY (t, t0 ) X (t)?Y (t0 ). A Gaussian process (real or complex) is
The two processes are called uncorrelated if completely specified through its mean
KUV (t, t0 ) = JUV (t, t0 ) = 0 for every t and t0 . and autocovariance function.
Real and complex Gaussian processes
Stationarity [1, 8.1][2, 9.1]. The process are (strict-sense) stationary iff they are
X(t) is called (strongly) stationary, or strict WSS.
sense stationary (SSS) if the families
{X(t1 ), X(t2 ), . . . , X(tn )} Linear Functionals of Random Processes.
and If X(t) is a continuos-time random process
with continuous covariance function, and
{X(t1 + c), X(t2 + c), . . . , X(tn + c)} g(t) is a continuous function, nonzero only
have the same joint distribution for all over a finite time interval, then the linear
t1 , t2 , . . . , tn and c R. functional Z
The process is called wide sense stationary
(WSS), or weakly stationary, if, for all t1 , t2 Y = g, X =


g(t)X(t)dt
and c,

X (t1 ) = X (t2 ) = X , is a random variable


Z with mean

KX (t1 , t2 ) = KX (t1 + c, t2 + c) E[Y] = g(t) E[X(t)] dt
= KX (t1 t2 ) = KX ().
For a complex-valued process U(t), it is also and variance"

required that JU (t1 , t2 ) = JU (t1 + c, t2 + c) =
JU (t1 t2 ). Two processes U(t) and V(t) are Var[Y] = g(t)KX (t, t0 )g(t0 )dtdt0 .

called jointly WSS if each is WSS and their
cross-correlation depends only on = tt0 . If X(t) is a Gaussian process, Y is also Gaus-
?
sian.
KU () = KU ().
= RU (0).
2 White Gaussian Noise. A zero-mean sta-
|RU ()| RU (0) . tionary process W(t) is called white, if
the co-
variance of any linear functional Y = gi , W

If RU (1 ) = RU (0) for some 1 , 0, then
satisfies
RU () is periodic with period 1 . i "
R2UV () RU (0)RV (0).
h
E Yi Y j = g(t)KW (t )g(t0 )dtdt0
Power Spectral Density [2, 9.3]. The Z
power spectral density (PSD) of a WSS pro- = gi (t)g j (t)dt.
cess U(t) is given by the Fourier transform
Z Such a process W(t) is not a well-defined
SU ( f ) = RU ()ej2 f d. random process, but functionals of this
process are; therefore, WGN is a general-
The cross-PSD SUV ( f ) of two jointly WSS ized random process. Formally, the covari-
processes U(t) and V(t) is the Fourier trans- ance function is written as KW = (N0 /2)().
form of RUV (). If W(t) is Gaussian, called white Gaussian
SU ( f ) is real. noise (WGN), then W(t1 ) and W(t2 ) are inde-
For X(t) real, SX ( f ) is real and even. pendent for every t1 , t2 . The PSD of WGN
SU ( f ) 0. is SW ( f ) = N0 /2.
SUV ( f ) is complex, and SUV ( f ) = S?VU ( f ). Let W(t) be WGN, and let {i (t)} be a set
of orthogonal functions. Then the random
Random Processes in Systems [2, 9.2]. variables Yi = hW, i i are independent.
Common Distributions and Densities
Tabular Overview Including Moments and Characteristic Functions [1, 2].

Name PMF/PDF Domain Mean Variance Skewness Characteristic Function


qp
Bernoulli f (1) = 0, f (0) q = 1 p {0, 1} p pq
pq q + peit
eit (1eiNt )
discrete Uniform 1
N {1, . . . , N} 1
2 (N + 1) 1
12 (N
2
1) 0 N(1eit )
Binomial N k
k p (1 p)(Nk) {0, 1, . . . , N} Np Np(1 p) 12p (1 p + peit )N
Np(1p)

e k!
k
n o
Poisson k = 0, 1, 2, . . . 1 exp (eit 1)

eibt eiat
continuous Unifrom 1
ba [a, b] 1
2 (a + b) 1
12 (b a)2 0 it(ba)

Exponential e x
[0, ), > 0 1

1
2
2 it
 
(x)2
eit 2 t
1 2
Normal N(, 2 ) 1 exp 22 R 2 0
22
exp{ 12 (x)T K1 (x))}  T

Multivariat Normal N(, K) Rn K exp itT t 2Kt
2 det(K)

Cauchy C(, ) 2 +(x)2

R eit e|t|
2
q  q  22
x
2 2 (3) t
x 22
(2 2 )2 1 + i 2 t e 2 (?)
2
Rayleigh 2
e [0, ) 2 (4)3/2

"
x +a 2 2

   
x 22
Rice 2
e I0 ax
2
R 2
(1 + r) I0 2r
 i n o
a2
r= 2 2 +rI1 2r exp 2r

(ln x)2

2
2
e+ 2 e +2 (e 1) e 1(2 + e )
2 2 2
Log-normal 1 exp (0, )
x 22 22
N
q
x 2 1
Central Chi-square 2N (N/2)2N/2
ex/2 [0, ) N 2N 2
N
1
(1i2t)N/2
x+ N1
e 2 x 2 2 2(3+N)
Non-Central Chi-Square 2(x)N/4
I N 1 ( x) [0, ] +N 2(2 + N) (2+N)3/2
2

  h    i 3
2 ( 1+ 1 )3(1+ 1 )(1+ 2 )(1+ 3 )
Weibull x1 e(x/) [0, ) 1 + 1 2 1 + 2 2 1 + 1 3/2
[(1+ 2 )2 (1+ 1 )]
2 !
1 (m+ 2 )
 m q  1
x2m1 (m+ 2 )
1

Nakagami-m 2
(m)
m
m 2 (0, ) (m) m 1 m (m)
ex

References [3] I. E. Telatar, Mathematical preliminaries, Mar. 2002, lecture Notes for Wireless
Communication and Mobility, EPFL.
[1] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, 3rd ed. Oxford,
U.K.: Oxford Univ. Press, 2001. [4] R. G. Gallager, Stochastic processesa conceptual approach, Aug. 2001, University
of California at Berkeley, EE226A Class Reader, Fall 2001.
[2] A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, 4th ed.
Boston, MA, U.S.A.: McGraw-Hill, 2002. [5] S. G. Wilson, Digital Modulation and Coding. Upper Saddle River, NJ, U.S.A.: Prentice
Hall, 1996.

S-ar putea să vă placă și