Lecture 2
The Capacity of Wireless Channels
Channel Capacity:
C = B log2(1 + SNR)
I(X; Y) = Σ p(x, y) log2 [ p(x, y) / (p(x) p(y)) ],
where the sum is taken over all possible input and output pairs x ∈ X and
y ∈ Y, for X and Y the input and output alphabets.
Mutual information can also be written in terms of the entropy in the
channel output y and conditional output y|x as
I(X; Y) = H(Y) − H(Y|X).
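As a concrete numerical check of these two equivalent expressions, the sketch below (a hypothetical example, not from the lecture) evaluates both for a binary symmetric channel with crossover probability 0.1 and a uniform input, where I(X; Y) = H(Y) − H(Y|X) = 1 − h2(0.1):

```python
import math

def h2(p):
    # binary entropy function in bits
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(p_x, p_y_given_x):
    # I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    ny = len(p_y_given_x[0])
    p_y = [sum(p_x[x] * p_y_given_x[x][y] for x in range(len(p_x))) for y in range(ny)]
    I = 0.0
    for x in range(len(p_x)):
        for y in range(ny):
            p_xy = p_x[x] * p_y_given_x[x][y]
            if p_xy > 0:  # sum runs only over pairs with positive probability
                I += p_xy * math.log2(p_xy / (p_x[x] * p_y[y]))
    return I

eps = 0.1  # crossover probability of the hypothetical BSC
I = mutual_information([0.5, 0.5], [[1 - eps, eps], [eps, 1 - eps]])
# for a uniform input on a BSC: H(Y) = 1 and H(Y|X) = h2(eps), so I = 1 - h2(eps)
```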
where
F(x) = P[X ≤ x]
and f(x) = (d/dx) F(x) is the density function.
Let S = {x : f(x) > 0} be the support set. Then
h(X) = −∫_S f(x) log2 f(x) dx.
Since we integrate over only the support set, there is no worry about log 0.
For a Gaussian X with E[X²] = σ²,
h(X) = (1/2) log2(2πeσ²) bits.
Note: this is a function only of the variance σ², not the mean. Why?
So entropy of a Gaussian is monotonically related to the variance.
For any density f with the same variance as the Gaussian density g, the
nonnegativity of relative entropy gives
h(g) − h(f) = D(f ∥ g) ≥ 0.
Therefore, the Gaussian distribution maximizes the entropy over all
distributions with the same variance.
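This can be illustrated with closed-form differential entropies at a common variance; the uniform and Laplace comparisons and the value of σ below are arbitrary choices for illustration, not from the lecture:

```python
import math

sigma = 1.7  # an arbitrary common standard deviation

# closed-form differential entropies (bits) of three zero-mean densities,
# each scaled so its variance is exactly sigma^2
h_gauss   = 0.5 * math.log2(2 * math.pi * math.e * sigma**2)   # N(0, sigma^2)
h_uniform = math.log2(2 * math.sqrt(3) * sigma)                # uniform on [-a, a], a = sigma*sqrt(3)
h_laplace = math.log2(2 * math.e * sigma / math.sqrt(2))       # Laplace with scale b = sigma/sqrt(2)

# the Gaussian entropy dominates both, as the relative-entropy argument predicts
assert h_gauss > h_uniform and h_gauss > h_laplace
```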
2017/3/7 Lecture 2: Capacity of Wireless Channels 13
Capacity of Gaussian Channels
Definition: The information capacity of the Gaussian channel with
power constraint P is
C = max_{f(x): E[X²] ≤ P} I(X; Y).
Since there are 2B samples each second, the capacity of the channel
can be rewritten as
C = B log2(1 + P/(N0 B)) (bits/s, bps)
(This equation is one of the most famous formulas of information
theory. It gives the capacity of a bandlimited Gaussian channel with
noise spectral density N0 /2 watts/Hz and power P watts.)
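A minimal sketch of this formula; the link parameters below are hypothetical values chosen so the received SNR P/(N0 B) is exactly 1, making the answer easy to verify by hand:

```python
import math

def awgn_capacity(B, P, N0):
    # C = B log2(1 + P / (N0 * B)), in bits per second
    return B * math.log2(1 + P / (N0 * B))

# hypothetical link: 1 MHz bandwidth, 1 mW received power, N0 = 1e-9 W/Hz,
# so SNR = P/(N0*B) = 1 and C = B * log2(2) = 1 Mbps
C = awgn_capacity(B=1e6, P=1e-3, N0=1e-9)
```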
If we let B → ∞ in the above capacity formula, we obtain
C = (P/N0) log2 e
(for infinite-bandwidth channels, the capacity grows linearly with the power).
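The approach to this limit can be checked numerically; the power and noise-density values below are assumptions for illustration:

```python
import math

P, N0 = 1e-3, 1e-9                      # hypothetical power and noise spectral density
C_inf = (P / N0) * math.log2(math.e)    # infinite-bandwidth capacity limit

# capacity grows monotonically with B but saturates at the power-limited value C_inf
caps = [B * math.log2(1 + P / (N0 * B)) for B in (1e6, 1e8, 1e10, 1e12)]
```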
Capacity of Flat-Fading Channels
Assume a discrete-time channel with stationary and ergodic time-
varying gain √g[i], g[i] ≥ 0, and AWGN n[i], as shown in Figure 4.1.
The channel power gain g[i] follows a given distribution p(g), e.g.
for Rayleigh fading p(g) is exponential. The channel gain g[i] can
change at each time i, either as an i.i.d. process or with some
correlation over time.
By Jensen's inequality,
E[B log2(1 + γ)] = ∫ B log2(1 + γ) p(γ) dγ ≤ B log2(1 + E[γ]) = B log2(1 + γ̄),
where γ̄ is the average SNR on the channel.
Here we see that the Shannon capacity of a fading channel with receiver
CSI only is less than the Shannon capacity of an AWGN channel with the
same average SNR.
In other words, fading reduces Shannon capacity when only the receiver
has CSI.
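A Monte Carlo sketch of this comparison for Rayleigh fading, where the SNR γ is exponentially distributed; the average SNR of 10 and the sample count are arbitrary choices for illustration:

```python
import math
import random

random.seed(7)
B, avg_snr = 1.0, 10.0   # normalized bandwidth and an assumed average SNR

# Rayleigh fading: the power gain, and hence the SNR, is exponentially distributed
snr = [random.expovariate(1 / avg_snr) for _ in range(200_000)]

C_fading = sum(B * math.log2(1 + g) for g in snr) / len(snr)  # E[B log2(1 + gamma)]
C_awgn = B * math.log2(1 + avg_snr)                           # AWGN at the same average SNR

# Jensen's inequality: fading reduces capacity when only the receiver has CSI
assert C_fading < C_awgn
```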
C = max_{P(γ): ∫ P(γ)p(γ)dγ = P̄} ∫ B log2(1 + γP(γ)/P̄) p(γ) dγ    (4.9)
Solving for P ( ) with the constraint that P ( ) > 0 yields the optimal
power adaptation that maximizes (4.9) as
P(γ)/P̄ = 1/γ0 − 1/γ,   γ ≥ γ0
P(γ)/P̄ = 0,            γ < γ0        (4.12)
If γ[i] is below this cutoff then no data is transmitted over the ith time
interval, so the channel is only used at time i if γ[i] ≥ γ0.
Substituting (4.12) into (4.9) then yields the capacity formula:
C = ∫_{γ0}^∞ B log2(γ/γ0) p(γ) dγ    (4.13)
Capacity of Flat-Fading Channels
The multiplexing nature of the capacity-achieving coding strategy
indicates that (4.13) is achieved with a time-varying data rate, where the
rate corresponding to instantaneous SNR γ is B log2(γ/γ0).
Note that the optimal power allocation policy (4.12) depends on the
fading distribution p(γ) only through the cutoff value γ0. This cutoff
value is found from the power constraint.
Rearranging the power constraint and replacing the inequality with
equality (since using the maximum available power is always optimal)
yields the power constraint
∫0^∞ (P(γ)/P̄) p(γ) dγ = 1.
By using the optimal power allocation (4.12), we have
∫_{γ0}^∞ (1/γ0 − 1/γ) p(γ) dγ = 1.
The water-filling terminology refers to the fact that the curve 1/γ
sketches out the bottom of a bowl, and power P(γ)/P̄ is poured into the
bowl to a constant water level of 1/γ0.
We first obtain γ0 assuming that all channel states are used, i.e.
assume γ0 ≤ min_i γi, and then check whether the resulting cutoff value is
below that of the weakest channel. If not, we have an inconsistency and must
redo the calculation assuming at least one of the channel states is not used.
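This cutoff-search procedure can be sketched for a discrete SNR distribution. The helper `waterfill` below is illustrative (not from the text); it is applied to the three-state SNR distribution used in the examples of this lecture:

```python
import math

def waterfill(gammas, probs, B=1.0):
    # Find the cutoff gamma0 of the power adaptation (4.12) for a discrete
    # SNR distribution, then evaluate the capacity (4.13). Following the
    # procedure above: assume all states are used; if the resulting cutoff
    # exceeds the weakest retained state, drop that state and recompute.
    states = sorted(zip(gammas, probs))
    while states:
        p_sum = sum(p for _, p in states)
        inv_sum = sum(p / g for g, p in states)
        gamma0 = p_sum / (1 + inv_sum)   # solves sum_g p(g)(1/gamma0 - 1/g) = 1
        if gamma0 <= states[0][0]:       # consistent with using the weakest state
            C = sum(p * B * math.log2(g / gamma0) for g, p in states)
            return gamma0, C
        states = states[1:]              # weakest state unused; redo the calculation
    raise ValueError("no feasible cutoff")

# three-state SNR distribution from the example in this lecture, B = 30 kHz;
# the first pass is inconsistent, so the weakest state is dropped
gamma0, C = waterfill([0.8333, 83.33, 333.33], [0.1, 0.5, 0.4], B=30e3)
```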
Applying (4.17) to our channel model yields
E[1/γ] =
The outage capacity is defined as the maximum data rate that can be
maintained in all non-outage channel states times the probability of non-
outage.
The outage capacity associated with a given outage probability pout and
corresponding cutoff γ0 is given by
C(pout) = B log2(1 + 1/E_{γ0}[1/γ]) P[γ ≥ γ0].
This maximum outage capacity will still be less than Shannon capacity
(4.13) since truncated channel inversion is a suboptimal transmission
strategy.
Example 4.6: Assume the same channel as in the previous example, with
a bandwidth of 30 kHz and three possible received SNRs: γ1 = .8333 with
p(γ1) = .1, γ2 = 83.33 with p(γ2) = .5, and γ3 = 333.33 with p(γ3) = .4.
Find the outage capacity of this channel and associated outage
probabilities for cutoff values γ0 = .84 and γ0 = 83.4. Which of these
cutoff values yields a larger outage capacity?
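A sketch of the computation for this example, assuming E_{γ0}[1/γ] denotes the expectation truncated to the non-outage states (γ ≥ γ0), as in the outage capacity formula above:

```python
import math

B = 30e3
gammas = [0.8333, 83.33, 333.33]
probs = [0.1, 0.5, 0.4]

def outage_capacity(gamma0):
    # truncated channel inversion: transmit only when gamma >= gamma0;
    # E_{gamma0}[1/gamma] is the expectation truncated to non-outage states
    p_on = sum(p for g, p in zip(gammas, probs) if g >= gamma0)
    e_inv = sum(p / g for g, p in zip(gammas, probs) if g >= gamma0)
    C = B * math.log2(1 + 1 / e_inv) * p_on
    return C, 1 - p_on  # outage capacity and outage probability

C1, pout1 = outage_capacity(0.84)   # outage only in the weakest state
C2, pout2 = outage_capacity(83.4)   # outage in the two weaker states
```

With these values the lower cutoff γ0 = .84 yields the larger outage capacity, at the cost of a smaller non-outage rate per use.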
where A^H is the transpose of the matrix A with each element replaced by its
complex conjugate, and A^t is just the transpose of A.
Note that in general the covariance matrix K of the complex random
vector x by itself is not enough to specify the full second-order statistics
of x. Indeed, since K is Hermitian, i.e., K^H = K, the diagonal elements are
real and the elements in the lower and upper triangles are complex
conjugates of each other.
Circular Complex Gaussian Vectors
In wireless communication, we are almost exclusively interested in
complex random vectors that have the circular symmetry property:
x is circularly symmetric if e^{jθ} x has the same distribution as x
for every θ.    (10.2)
E[yy^H]    (10.6)
C = max_{Rx: Tr(Rx) = ρ} B log2 det(IMr + H Rx H^H),    (10.8)
(10.10)
where ρi = Pi/σn² and γi = σi² P/σn² is the SNR associated with the ith
channel at full power.
Solving the optimization leads to a water-filling power allocation for the
MIMO channel:
Pi/P = 1/γ0 − 1/γi,   γi ≥ γ0
Pi/P = 0,             γi < γ0        (10.11)
where γi = σi² P/σn² and RH is the number of nonzero
singular values of H.
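The water-filling rule (10.11) can be sketched over the singular values of H. The helper `mimo_waterfill` below is illustrative (not from the text), taking γi = σi² ρ as the full-power SNR of the ith eigenchannel, with ρ the total SNR:

```python
import numpy as np

def mimo_waterfill(H, rho, B=1.0):
    # Water-filling (10.11) over the eigenchannels of H;
    # gamma_i = sigma_i^2 * rho is the full-power SNR of the ith eigenchannel
    s = np.linalg.svd(H, compute_uv=False)  # singular values, descending order
    gam = (s[s > 1e-12] ** 2) * rho         # the R_H nonzero eigenchannel SNRs
    for k in range(len(gam), 0, -1):
        g = gam[:k]                          # strongest k eigenchannels
        gamma0 = k / (1 + np.sum(1 / g))     # solves sum_i (1/gamma0 - 1/gamma_i) = 1
        if gamma0 <= g[-1]:                  # cutoff consistent with using all k
            break
    C = B * np.sum(np.log2(g / gamma0))      # capacity of the retained eigenchannels
    return gamma0, C

# sanity check on a 2x2 identity channel at rho = 10: equal eigenchannel SNRs
# (gamma_1 = gamma_2 = 10), so the power splits evenly between them
gamma0, C = mimo_waterfill(np.eye(2), rho=10.0)
```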
The mutual information of the MIMO channel (10.13) depends on the
specific realization of the matrix H, in particular on its singular values {σi}.
(10.14)
Note that for fixed Mr, under the ZMSW (Zero-Mean Spatially White)
model the law of large numbers implies that
lim_{Mt→∞} (1/Mt) H H^H = IMr.    (10.15)
Substituting this into (10.13) yields that the mutual information in the
asymptotic limit of large Mt becomes a constant equal to
C = Mr B log2(1 + ρ)   (this maximum rate can be achieved by Massive MIMO!)
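This large-Mt behavior can be checked by simulation; the receive antenna count, SNR, and random seed below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Mr, rho, B = 4, 10.0, 1.0   # fixed receive antennas, assumed SNR, unit bandwidth

def mutual_info(Mt):
    # ZMSW model: i.i.d. CN(0,1) entries, equal power rho/Mt per transmit antenna
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    M = np.eye(Mr) + (rho / Mt) * (H @ H.conj().T)
    return B * np.log2(np.linalg.det(M).real)

# as Mt grows, (1/Mt) H H^H approaches I_Mr, so the mutual information
# approaches the constant Mr * B * log2(1 + rho)
C_limit = Mr * B * np.log2(1 + rho)
```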
MIMO Channel Capacity
We can make two important observations from the results in (10.14)
and (10.15):
As SNR grows large, capacity also grows linearly with M = min{Mt, Mr}
for any Mt and Mr.
At very low SNRs, transmit antennas are not beneficial: capacity only
scales with the number of receive antennas, independent of the number of
transmit antennas.
Fading Channels
Channel Known at Transmitter: Water-Filling
C = max_{Rx: Tr(Rx) = ρ} EH[ B log2 det(IMr + H Rx H^H) ]    (10.16)
where the expectation is with respect to the distribution on the channel matrix
H, which for the ZMSW model is i.i.d. zero-mean circularly symmetric unit
variance.
As in the case of scalar channels, the optimum input covariance matrix
that maximizes ergodic capacity for the ZMSW model is the scaled
identity matrix (ρ/Mt) IMt. Thus the ergodic capacity is given by:
C = EH[ B log2 det(IMr + (ρ/Mt) H H^H) ]    (10.19)
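A Monte Carlo sketch of this ergodic capacity for an assumed 2×2 ZMSW channel at SNR ρ = 10, with the capacity-achieving input covariance Rx = (ρ/Mt) I; the trial count and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
Mt, Mr, rho, B = 2, 2, 10.0, 1.0   # assumed antenna counts and SNR

# Monte Carlo estimate of (10.19): average B log2 det(I + (rho/Mt) H H^H)
# over i.i.d. CN(0,1) channel realizations (the ZMSW model)
vals = []
for _ in range(5000):
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    M = np.eye(Mr) + (rho / Mt) * (H @ H.conj().T)
    vals.append(B * np.log2(np.linalg.det(M).real))
C_ergodic = float(np.mean(vals))

# Jensen upper bound: since E[H H^H] = Mt * I, C_ergodic <= Mr * B * log2(1 + rho)
```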