Transition probabilities; memoryless:
- the output at time i depends only on the input at time i
- the input and output alphabets are finite
[Figure: error-source model of the BSC. The input X and the error source E are added modulo 2 to give the output Y = X ⊕ E; equivalently, 0 and 1 are received correctly with probability 1−p and flipped with probability p.]
E is the binary error sequence s.t. P(1) = 1 − P(0) = p
X is the binary information sequence
Y is the binary output sequence
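This additive model is easy to simulate; a minimal sketch in Python (the helper name bsc and the values p = 0.1, n = 100000 are illustrative, not from the slides):

    import numpy as np

    rng = np.random.default_rng(0)

    def bsc(x: np.ndarray, p: float) -> np.ndarray:
        """Y = X xor E, with E the binary error sequence, P(E=1) = p."""
        e = (rng.random(x.shape) < p).astype(x.dtype)  # error sequence E
        return x ^ e                                   # additive (mod-2) channel

    x = rng.integers(0, 2, 100_000)    # binary information sequence X
    y = bsc(x, p=0.1)                  # binary output sequence Y
    print("empirical error rate:", np.mean(x != y))    # close to 0.1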
Other models
[Figure: the binary erasure channel, P(X=0) = P0: an input is received correctly with probability 1−e and erased (output E) with probability e.]
[Figure: the binary error-and-erasure channel, P(0) = 1 − P(1): an input is received correctly with probability 1−p−e, flipped with probability p, and erased (output E) with probability e.]
Burst error model:
In the bad state the output is random: P(0 | state = bad) = P(1 | state = bad) = 1/2.
In the good state errors are rare: P(0 | state = good) = 1 − P(1 | state = good) = 0.999.
[Figure: two-state diagram (good/bad) with transition probabilities Pgg, Pgb, Pbg, Pbb.]
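A two-state burst channel of this kind can be sketched as follows (the transition probabilities p_gb = 0.01 and p_bg = 0.1 are illustrative assumptions; the per-state error probabilities follow the slide):

    import numpy as np

    rng = np.random.default_rng(1)

    def burst_errors(n: int, p_gb: float, p_bg: float) -> np.ndarray:
        """Error sequence of a two-state model: P(error|good) = 0.001, P(error|bad) = 0.5."""
        e = np.zeros(n, dtype=int)
        bad = False
        for i in range(n):
            e[i] = rng.random() < (0.5 if bad else 0.001)
            # state transitions: good -> bad w.p. p_gb, bad -> good w.p. p_bg
            bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
        return e

    e = burst_errors(100_000, p_gb=0.01, p_bg=0.1)
    print("overall error rate:", e.mean())   # errors arrive clustered in bursts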
Channel capacity: C = max over P(X) of I(X;Y), where
I(X;Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)   (Shannon 1948)
[Figure: H(X) enters the channel; H(X|Y) is the uncertainty about X that remains after observing Y.]
Note: the maximization is over the input probabilities, because the transition probabilities are fixed by the channel.
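Since the transition matrix is fixed, I(X;Y) is a function of the input distribution alone; a small sketch (the function names entropy and mutual_information are mine, not from the slides):

    import numpy as np

    def entropy(p: np.ndarray) -> float:
        """Entropy in bits; zero-probability entries are ignored."""
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def mutual_information(p_x: np.ndarray, p_ygx: np.ndarray) -> float:
        """I(X;Y) = H(Y) - H(Y|X) for input distribution p_x, transition matrix p_ygx."""
        h_y = entropy(p_x @ p_ygx)                                  # H(Y)
        h_ygx = sum(px * entropy(row) for px, row in zip(p_x, p_ygx))
        return h_y - h_ygx

    # Check on the BSC with p = 0.1 and uniform input: I(X;Y) = 1 - h(0.1) ~ 0.531
    bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
    print(mutual_information(np.array([0.5, 0.5]), bsc))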
[Figure: one of 2^k possible messages is transmitted over the channel and received with errors.]
Channel capacity
Definition: The rate R of a code is the ratio k/n, where
k is the number of information bits transmitted in n channel uses.
Shannon showed that: for R ≤ C, encoding methods exist with decoding error probability → 0 (as n → ∞).
Sketch of the argument for the BSC: a received word has about 2^(n·h(p)) typical error patterns out of 2^n possible words (see Appendix). For 2^k randomly chosen codewords,
P(≥ 1 other codeword explains the received word) ≤ (2^k − 1) · 2^(n·h(p)) · 2^(−n),
which vanishes as n → ∞ whenever R = k/n < 1 − h(p).
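A quick numeric look at this bound (a sketch; the values n = 1000 and p = 0.1 are illustrative, and the capacity is then 1 − h(0.1) ≈ 0.53):

    import math

    def h(p: float) -> float:
        """Binary entropy function, in bits."""
        return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

    n, p = 1000, 0.1
    for R in (0.4, 0.5, 0.6):                 # capacity here is 1 - h(0.1) ~ 0.531
        k = int(R * n)
        log2_bound = k + n*h(p) - n           # log2 of (2^k - 1) 2^{nh(p)} 2^{-n}, with 2^k - 1 ~ 2^k
        print(f"R = {R}: log2(union bound) ~ {log2_bound:.0f}")
        # negative and large for R < C (bound tiny); positive for R > C (bound useless)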
For the BSC:
[Figure: the BSC with crossover probability p.]
H(Y|X) = P(X=0)·h(p) + P(X=1)·h(p) = h(p)
so I(X;Y) = H(Y) − h(p) ≤ 1 − h(p), with equality for uniform input; hence C_BSC = 1 − h(p).
[Figure: plot of C = 1 − h(p) versus the bit error probability p: capacity 1.0 at p = 0, dropping to 0 at p = 0.5.]
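The capacity curve is straightforward to tabulate (a minimal sketch):

    import math

    def h(p: float) -> float:
        return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

    for p in (0.0, 0.05, 0.1, 0.25, 0.5):
        print(f"p = {p:<5}: C = {1 - h(p):.3f} bits per transmission")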
Z-channel (only input 1 is received in error, with probability p), P(X=0) = P0:
H(Y) = h(P0 + p(1 − P0)); H(Y|X) = (1 − P0)·h(p)
For capacity, maximize I(X;Y) = H(Y) − H(Y|X) over P0.
For the erasure channel (erasure probability e, P(X=0) = P0):
[Figure: the binary erasure channel with erasure output E.]
I(X;Y) = H(X) − H(X|Y) = h(P0) − e·h(P0) = (1 − e)·h(P0), maximized by P0 = 1/2.
Thus C_erasure = 1 − e.
(Check! Draw the capacity curves and compare with the BSC and the Z-channel.)
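One way to do the comparison numerically rather than graphically (a sketch; the brute-force grid search over P0 for the Z-channel is my choice, not the slides'):

    import math

    def h(q: float) -> float:
        return 0.0 if q in (0.0, 1.0) else -q*math.log2(q) - (1-q)*math.log2(1-q)

    def c_z(p: float, steps: int = 10_000) -> float:
        """Z-channel: maximize I = H(Y) - H(Y|X) over P0 by brute force."""
        return max(h(p0 + p*(1 - p0)) - (1 - p0)*h(p)
                   for p0 in (i/steps for i in range(steps + 1)))

    for q in (0.05, 0.1, 0.2):
        print(f"q = {q}: BSC {1 - h(q):.3f}, erasure {1 - q:.3f}, Z {c_z(q):.3f}")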
[Figure: the binary error-and-erasure channel with transition probabilities 1−p−e, p, and e.]
Example
Consider the following ternary channel: inputs 0 and 2 are received without error; input 1 is received as 0, 1, or 2, each with probability 1/3.
For P(0) = P(2) = p, P(1) = 1 − 2p:
H(Y) = h(1/3 − 2p/3) + (2/3 + 2p/3); H(Y|X) = (1 − 2p)·log2 3
Q: maximize H(Y) − H(Y|X) as a function of p.
Q: is this the capacity?
Hint: use log2 x = ln x / ln 2 and d(ln x)/dx = 1/x.
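A numeric answer to the first question (a sketch; the grid search over p in [0, 0.5] is illustrative):

    import math

    def h(q: float) -> float:
        return 0.0 if q <= 0.0 or q >= 1.0 else -q*math.log2(q) - (1-q)*math.log2(1-q)

    def i_xy(p: float) -> float:
        """I(X;Y) = H(Y) - H(Y|X) for the ternary example, with P(0) = P(2) = p."""
        return h(1/3 - 2*p/3) + (2/3 + 2*p/3) - (1 - 2*p)*math.log2(3)

    best_p = max((i/2000 for i in range(1001)), key=i_xy)   # p in [0, 0.5]
    print(f"max I(X;Y) ~ {i_xy(best_p):.4f} bits at p ~ {best_p:.3f}")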
[Figure: a general channel with inputs x1, x2, … and outputs y1, y2, …]
The statistical behavior of the channel is completely defined by the channel transition probabilities P(j|i) = P_Y|X(yj | xi).
* Clue:
I(X;Y) is convex ∩ (concave) in the input probabilities, i.e. finding the maximum is simple.
[Figure: plot of the error probability Pe versus the rate k/n: Pe can be made arbitrarily small for k/n < C and is bounded away from 0 for k/n > C.]
Converse: consider n uses of a memoryless channel with inputs Xi and outputs Yi.
[Figure: X1 … Xn → channel → Y1 … Yn.]
I(X^n; Y^n) = H(Y^n) − H(Y^n | X^n)
            ≤ Σ_{i=1..n} H(Yi) − Σ_{i=1..n} H(Yi | Xi)
            = Σ_{i=1..n} I(Xi; Yi)
            ≤ nC
[Figure: message M → encoder → X^n → channel → Y^n → decoder.]
Converse (continued): let M be the message, with k = H(M) and R := k/n. Then
k = H(M) = I(M; Y^n) + H(M | Y^n)
  ≤ I(X^n; Y^n) + 1 + k·Pe     (X^n is a function of M; Fano: H(M|Y^n) ≤ 1 + k·Pe)
  ≤ nC + 1 + k·Pe
so that Pe ≥ 1 − C/R − 1/(nR).
Hence: for large n and R > C, the probability of error Pe is bounded away from 0.
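The final bound is easy to evaluate (a sketch; the values R = 0.7 and C = 0.531, i.e. a BSC with p = 0.1, are illustrative):

    def pe_lower_bound(rate: float, capacity: float, n: int) -> float:
        """Fano-based converse bound: Pe >= 1 - C/R - 1/(nR)."""
        return max(0.0, 1.0 - capacity/rate - 1.0/(n*rate))

    for n in (100, 1_000, 10_000):
        print(f"n = {n}: Pe >= {pe_lower_bound(rate=0.7, capacity=0.531, n=n):.3f}")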
[Figure: a cascade X → channel 1 → Y → channel 2 → Z, with mutual informations I(X;Y), I(Y;Z) and overall I(X;Z).]
The overall transmission rate I(X;Z) for the cascade cannot be larger than I(Y;Z), that is:
I(X;Z) ≤ I(Y;Z)
Appendix:
Assume a binary sequence with P(0) = 1 − P(1) = 1 − p, and let t be the number of 1s in the sequence. Then for n → ∞ and any ε > 0, the weak law of large numbers gives
P( |t/n − p| > ε ) → 0,
i.e. with high probability we expect about p·n 1s in the sequence.
Appendix (consequences):
1. n(p − ε) < t < n(p + ε) with high probability
2. Σ_{t = n(p−ε)}^{n(p+ε)} C(n, t) ≈ 2εn · C(n, pn) ≈ 2εn · 2^(n·h(p))
3. lim_{n→∞} (1/n) · log2 C(n, pn) = h(p)
(C(n, t) denotes the binomial coefficient n-choose-t.)
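Consequence 3 can be checked numerically (a minimal sketch using exact binomial coefficients; p = 0.3 is illustrative):

    import math

    def h(p: float) -> float:
        return -p*math.log2(p) - (1-p)*math.log2(1-p)

    p = 0.3
    for n in (100, 1_000, 10_000):
        lhs = math.log2(math.comb(n, round(p*n))) / n
        print(f"n = {n}: (1/n) log2 C(n, pn) = {lhs:.4f}   h(p) = {h(p):.4f}")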
Homework: prove the approximation using ln N! ≈ N·ln N for large N, or use the Stirling approximation:
N! ≈ √(2πN) · N^N · e^(−N)
Binary entropy:
[Figure: plot of the binary entropy function h(p) versus p: h(0) = h(1) = 0, maximum h(0.5) = 1.]
Gaussian channel:
[Figure: additive noise channel, Output Y = Input X + Noise.]
Input X is Gaussian with power spectral density (psd) S/2W; the noise is Gaussian with psd σ²_noise. The output Y is then Gaussian with psd σ²_y = S/2W + σ²_noise.
For Gaussian channels: σ²_y = σ²_x + σ²_noise.

Cap = ½ log2( 2πe (σ²_x + σ²_noise) ) − ½ log2( 2πe σ²_noise ) bits/trans.
    = ½ log2( (σ²_noise + σ²_x) / σ²_noise ) bits/trans.

With 2W transmissions per second (sampling theorem):

Cap = W log2( (σ²_noise + S/2W) / σ²_noise ) bits/sec.

Here ½ log2(2πe σ²_z) is the differential entropy of a Gaussian variable with density
p(z) = (1/√(2πσ²_z)) · e^(−z²/(2σ²_z)).
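Evaluating the bits/sec formula for some illustrative numbers (a sketch; the values S = 1, W = 3000 Hz and σ²_noise = 1e-4 are mine, not from the slides):

    import math

    def cap_bits_per_sec(s: float, w: float, noise_var: float) -> float:
        """Cap = W log2((noise_var + S/2W) / noise_var), in bits/sec."""
        return w * math.log2((noise_var + s/(2*w)) / noise_var)

    print(f"{cap_bits_per_sec(s=1.0, w=3000.0, noise_var=1e-4):.0f} bits/sec")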
Fritzman model: multiple good states G and only one bad state B; closer to an actual real-world channel.
[Figure: state diagram with good states G1, …, Gn, each with error probability 0, and one bad state B with error probability h; transition probabilities such as 1−p label the branches.]
Interleaving: block
Channel models are difficult to derive:
- how should a burst be defined?
- how to treat mixed random and burst errors?
For practical reasons, interleaving converts burst errors into random-like errors: the data are written into a matrix and transmitted ("read in") row-wise.

1 0 0 1 1
0 1 0 0 1
1 0 0 0 0
0 0 1 1 0
1 0 0 1 1
De-interleaving: block
The received symbols are written back into the matrix row-wise, in transmission order; a burst of six erasures (e) is then spread over the columns when the data are read out:

1 0 0 1 1
0 1 0 0 1
1 e e e e
e e 1 1 0
1 0 0 1 1
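A sketch of block (de-)interleaving as an array transpose, showing how a burst of six erasures is spread out (the marker value -1 for an erasure and the 5x5 block size are illustrative):

    import numpy as np

    def interleave(data: np.ndarray, rows: int, cols: int) -> np.ndarray:
        """Write column-wise into a rows x cols matrix, read out row-wise."""
        return data.reshape(cols, rows).T.ravel()

    def deinterleave(data: np.ndarray, rows: int, cols: int) -> np.ndarray:
        """Inverse: write row-wise, read out column-wise."""
        return data.reshape(rows, cols).T.ravel()

    data = np.arange(25)                  # 25 symbols, one 5x5 block
    tx = interleave(data, 5, 5)
    rx = tx.copy()
    rx[11:17] = -1                        # burst of 6 erasures on the channel
    out = deinterleave(rx, 5, 5)
    print(out.reshape(5, 5))              # erasures spread out: at most 2 per row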
Interleaving: convolutional
The input is split over m branches used in turn: input sequence 0 gets no delay, input sequence 1 a delay of b elements, …, input sequence m−1 a delay of (m−1)·b elements; the de-interleaver applies the complementary delays.
Example: b = 5, m = 3.
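A sketch of such a convolutional interleaver (assuming branch i, used round-robin, delays its symbols by i·b branch uses; the class name and the fill symbol are mine):

    from collections import deque

    class ConvInterleaver:
        """Convolutional interleaver: branch i (round-robin) delays by i*b symbols."""
        def __init__(self, m: int, b: int, fill=0):
            # branch i is a FIFO holding i*b symbols (branch 0 has no delay)
            self.lines = [deque([fill] * (i * b)) for i in range(m)]
            self.i = 0
        def push(self, symbol):
            line = self.lines[self.i]
            self.i = (self.i + 1) % len(self.lines)
            if not line:              # branch 0: pass through undelayed
                return symbol
            line.append(symbol)
            return line.popleft()     # symbol delayed by i*b uses of this branch

    ilv = ConvInterleaver(m=3, b=5)   # the slide's example: b = 5, m = 3
    out = [ilv.push(k) for k in range(18)]
    print(out)   # a symbol on branch i reappears i*b*m channel positions later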