Source symbol set S: {s1, s2, ..., sn} → encoder → code word set W: {W1, W2, ..., Wn}, built from the channel symbol set A: {a1, a2, ..., aq}.

It can be seen that the encoder establishes a one-to-one mapping that converts each symbol of the original source set S into a code word of W, which is composed of symbols from the channel symbol set A.
Obviously, the set of uniquely decodable codes contains the set of instantaneously decodable codes. That is to say, an instantaneously decodable code must be uniquely decodable, but a uniquely decodable code may not be instantaneously decodable.
Suppose the code word lengths are L1, L2, ..., Ln respectively. Kraft inequality:

Σ_{i=1}^{n} q^{-Li} ≤ 1
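The inequality is easy to check numerically. A minimal Python sketch (the lengths used are those of the Shannon-Fano code constructed later in this section):

def kraft_sum(lengths, q=2):
    # Kraft sum of q^(-Li) over all code word lengths Li
    return sum(q ** -L for L in lengths)

s = kraft_sum([2, 2, 3, 3, 4, 4, 4, 4], q=2)
print(s, s <= 1)  # 1.0 True: an instantaneous code with these lengths exists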
[Figure: two binary code trees; starting from the root, the branches are labeled 0 and 1, and the leaves give two different assignments of the code words W1–W4]
It can be seen that the instantaneous code constructed by this method is not unique.
The average code length is

L̄ = Σ_{i=1}^{n} p(si) Li
Without coding, the information transmission rate R (the entropy rate) is equal to the source entropy H(S). Passing through the encoder, the original source is converted to the code word set W: {W1, W2, ..., Wn}, and the information transmission rate per channel symbol becomes

R = H(W)/L̄ = H(S)/L̄ = H(A)

When the original source is fixed, the shorter the average code length, the higher the information transmission rate, that is, the higher the coding efficiency. So the coding efficiency can be described by the average code length.
The capacity of a discrete noiseless channel is

C = r·Hmax(S)

where r is the number of symbols transmitted per second. If the actual source entropy is H(S), then the actual entropy rate of this discrete noiseless channel is

R = r·H(A)

Here we define the ratio of the entropy rate to the channel capacity as the transmission efficiency. It can be seen that the transmission efficiency is the ratio of the actual information transmission capability to the maximum information transmission capability of a communication system (channel), that is

η = R/C = H(A)/Hmax(S)
Since Hmax(S) = log n,

η = R/C = H(A)/log n

After encoding with a q-ary channel symbol set,

η = R/C = H(A)/Hmax(A) = H(S)/(L̄·log q)
η = H(S)/L̄ = 72.19%
Example: a single-symbol discrete memoryless source is as follows:

si:     s1    s2    s3    s4    s5    s6    s7    s8
p(si):  0.1   0.18  0.4   0.05  0.06  0.1   0.07  0.04
Solution:
The entropy of the original source is

H(S) = −Σ_{i=1}^{8} p(si) log p(si) = 2.55 bits/symbol

Hmax(S) = log 8 = 3

η = R/C = H(S)/Hmax(S) = 2.55/3 = 85%
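The numbers above can be reproduced directly. A minimal Python sketch (the function name is illustrative):

import math

def entropy(p):
    # H(S) = -sum of p(si) * log2 p(si), in bits per symbol
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = [0.1, 0.18, 0.4, 0.05, 0.06, 0.1, 0.07, 0.04]
H = entropy(p)              # ~2.55 bits/symbol
H_max = math.log2(len(p))   # log 8 = 3
print(f"H(S) = {H:.2f}, efficiency = {H / H_max:.0%}")  # H(S) = 2.55, efficiency = 85%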
si   p(si)   1st grouping   2nd grouping   3rd grouping   4th grouping   Wi     Li
s3   0.40    0              0                                            00     2
s2   0.18    0              1                                            01     2
s1   0.10    1              1              0                             110    3
s6   0.10    1              1              1                             111    3
s7   0.07    1              0              0              0              1000   4
s5   0.06    1              0              0              1              1001   4
s4   0.05    1              0              1              0              1010   4
s8   0.04    1              0              1              1              1011   4
η = R/C = H(A)/Hmax(A) = H(S)/L̄ = 2.55/2.64 = 96.6%
[Figure: binary code tree of this Shannon-Fano code; the code words W1–W8 are the leaves reached by following the 0/1 branch labels]
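A minimal sketch of the Shannon-Fano splitting procedure (names are illustrative; as noted earlier, the instantaneous code built this way is not unique, so the exact bits may differ from the table above, but the code lengths, and hence L̄ = 2.64, agree):

def shannon_fano(symbols):
    # symbols: iterable of (name, probability) pairs
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in group)
        acc, k, best = 0.0, 1, float("inf")
        # choose the split whose upper-part probability is closest to total/2
        for i in range(1, len(group)):
            acc += group[i - 1][1]
            if abs(acc - total / 2) < best:
                best, k = abs(acc - total / 2), i
        split(group[:k], prefix + "0")
        split(group[k:], prefix + "1")

    split(sorted(symbols, key=lambda x: -x[1]), "")
    return codes

probs = {"s3": 0.40, "s2": 0.18, "s1": 0.10, "s6": 0.10,
         "s7": 0.07, "s5": 0.06, "s4": 0.05, "s8": 0.04}
codes = shannon_fano(probs.items())
L_bar = sum(probs[s] * len(w) for s, w in codes.items())
print(codes, L_bar)  # lengths 2,2,3,3,4,4,4,4 -> L-bar = 2.64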
Example: a single-symbol discrete memoryless source is as follows:

si:     s1    s2    s3    s4    s5    s6    s7    s8
p(si):  1/4   1/4   1/8   1/8   1/16  1/16  1/16  1/16
Solution: The entropy of the original source is

H(S) = −Σ_{i=1}^{8} p(si) log p(si) = 2.75 bits/symbol

Hmax(S) = log 8 = 3

η = R/C = H(S)/Hmax(S) = 2.75/3 = 91.7%
si   p(si)   1st grouping   2nd grouping   3rd grouping   4th grouping   Wi     Li
s1   1/4     0              0                                            00     2
s2   1/4     0              1                                            01     2
s3   1/8     1              0              0                             100    3
s4   1/8     1              0              1                             101    3
s5   1/16    1              1              0              0              1100   4
s6   1/16    1              1              0              1              1101   4
s7   1/16    1              1              1              0              1110   4
s8   1/16    1              1              1              1              1111   4

η = R/C = H(A)/Hmax(A) = H(S)/L̄ = 2.75/2.75 = 1
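Here every probability is a negative power of 2, so each code length is exactly Li = −log2 p(si), L̄ equals H(S), and the efficiency reaches 1. A quick check:

import math

p = [1/4, 1/4, 1/8, 1/8, 1/16, 1/16, 1/16, 1/16]
lengths = [int(-math.log2(pi)) for pi in p]          # 2, 2, 3, 3, 4, 4, 4, 4
H = -sum(pi * math.log2(pi) for pi in p)             # H(S) = 2.75
L_bar = sum(pi * li for pi, li in zip(p, lengths))   # L-bar = 2.75
print(H, L_bar, H / L_bar)  # 2.75 2.75 1.0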
Example: the same source as before, now encoded with the Huffman method:

si:     s1    s2    s3    s4    s5    s6    s7    s8
p(si):  0.1   0.18  0.4   0.05  0.06  0.1   0.07  0.04
Solution: The entropy of the original source is

H(S) = −Σ_{i=1}^{8} p(si) log p(si) = 2.55 bits/symbol

η = R/C = H(S)/Hmax(S) = 2.55/3 = 85%
[Figure: Huffman code tree. The symbols, in descending probability order s3 (0.4), s2 (0.18), s1 (0.1), s6 (0.1), s7 (0.07), s5 (0.06), s4 (0.05), s8 (0.04), are merged pairwise from the two smallest probabilities upward, producing the intermediate node probabilities (0.09), (0.13), (0.19), (0.23), (0.37), (0.6), and (1.0); the branches are labeled 0/1]

The resulting code words are

W3 = 1, W2 = 001, W1 = 011, W6 = 0000, W7 = 0100, W5 = 0101, W4 = 00010, W8 = 00011
The average code length is

L̄ = Σ_{i=1}^{8} p(si) Li = 2.61
η = R/C = H(A)/Hmax(A) = H(S)/L̄ = 2.55/2.61 = 97.8%
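A minimal Huffman-coding sketch using Python's heapq (a different tie-breaking order can give a different but equally optimal code than the tree above; the average length L̄ = 2.61 is the same):

import heapq, itertools

def huffman(probs):
    # probs: {symbol: probability}; returns {symbol: code word}
    tie = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # the two least probable nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

probs = {"s1": 0.1, "s2": 0.18, "s3": 0.4, "s4": 0.05,
         "s5": 0.06, "s6": 0.1, "s7": 0.07, "s8": 0.04}
codes = huffman(probs)
L_bar = sum(probs[s] * len(w) for s, w in codes.items())
print(codes, round(L_bar, 2))  # L-bar = 2.61 -> efficiency 2.55/2.61 = 97.8%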
Homework
[Homework 4-1]
A single-symbol discrete memoryless source is as follows:

X: {x1, x2, x3, x4, x5, x6, x7}

Please show both the Shannon-Fano and the Huffman coding results and calculate the coding efficiency.
[Figure: block diagram of a communication system: Input → Source Coding (compression) → Channel Coding → Send → Channel with Noise → Receive → Channel Decoding → Source Decoding (decompression) → Output]
[Figure: binary symmetric channel; each bit is flipped with probability p1 = 0.01 and transmitted correctly with probability p = 1 − p1 = 0.99]
Use the repetition code X1 = 000, X2 = 111 over this channel (p1 = 0.01, p = 1 − p1 = 0.99). The outputs are

Y1 = 000, Y2 = 001, Y3 = 010, Y4 = 011, Y5 = 100, Y6 = 101, Y7 = 110, Y8 = 111

Decoding (majority rule):

F(Y1) = F(Y2) = F(Y3) = F(Y5) = X1 = 000
F(Y4) = F(Y6) = F(Y7) = F(Y8) = X2 = 111

The probability of error decoding is Pe,min ≈ 3×10⁻⁴
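The figure follows from the majority rule: decoding fails only when at least two of the three bits are flipped. A quick check of the arithmetic:

p1 = 0.01  # bit error probability of the channel

# majority decoding of the three-bit repetition code fails
# when 2 or 3 of the 3 transmitted bits are flipped
P_e = 3 * p1**2 * (1 - p1) + p1**3
print(P_e)  # ~2.98e-4, i.e. approximately 3 * p1^2 = 3e-4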
Let α = [a1, a2, ..., aN] and β = [b1, b2, ..., bN], with ai, bi ∈ {0, 1}. The Hamming distance between α and β is

d(α, β) = Σ_{i=1}^{N} (ai ⊕ bi)

with 0 ≤ d ≤ N.
The minimum code distance is dmin = min{d(α, β) : α ≠ β, where α and β are allowable code words}.
d(α, 0) = W(α), i.e., the distance from α to the all-zero code word equals the Hamming weight of α.
[Figure: 3-dimensional binary cube; each vertex is a code word, labeled with its Hamming weight: 000 → 0, 001 → 1, 100 → 1, 101 → 2, 110 → 2, 111 → 3]

Hamming distance: the number of edges on a path from one vertex to another.
Hamming weight: the number of 1s in the code word.
Allowable code words       Code number   Minimum code distance
000, 111                   2             3
000, 011, 101, 110         4             2
000, 001, 100, 010         4             1
000, 001, 111              3             1
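The distances and weights are easy to compute. A minimal sketch that reproduces the minimum code distances in the table above:

from itertools import combinations

def hamming_distance(a, b):
    # number of positions in which two equal-length words differ
    return sum(x != y for x, y in zip(a, b))

def hamming_weight(a):
    # number of 1s, i.e. the distance from the all-zero word
    return a.count("1")

def d_min(code):
    # minimum distance over all pairs of distinct code words
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

for code in [["000", "111"],
             ["000", "011", "101", "110"],
             ["000", "001", "100", "010"],
             ["000", "001", "111"]]:
    print(code, d_min(code))  # d_min = 3, 2, 1, 1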
[Figure: binary symmetric channel with crossover probability p = 3/4; a bit is received correctly with probability 1 − p = 1/4]
1. Decoding rule: receive 0 → decode 0; receive 1 → decode 1. The probability of correct decoding is 1 − p = 1/4.
2. Decoding rule: receive 0 → decode 1; receive 1 → decode 0. The probability of correct decoding is p = 3/4: the reliability is improved simply by choosing a better decoding rule.
Consider a channel with input symbol set X: {x1, x2, ..., xn}, output symbol set Y: {y1, y2, ..., ym}, and transition probabilities p(yj/xi).
[Figure: the binary symmetric channel above, with p = 3/4 and 1 − p = 1/4]

Possible decoding functions:

B: {F(0) = 0; F(1) = 1}
C: {F(0) = 1; F(1) = 0}
D: {F(0) = 1; F(1) = 1}
The probability of correct decoding given yj is

Prj = P{F(yj) = xi / yj}
The probability of decoding error given yj is

Pej = P{e/yj} = 1 − P{F(yj) = xi / yj}

where e denotes the collection of all source symbols other than xi. Taking the average over all yj, the average probability of correct decoding is

Pr = Σ_{j=1}^{m} p(yj) Prj = Σ_{j=1}^{m} p(yj) P{F(yj) = xi / yj}
Similarly, the average probability of decoding error is

Pe = Σ_{j=1}^{m} p(yj) P{e/yj} = Σ_{j=1}^{m} p(yj) {1 − P[F(yj) = xi / yj]}

All communication systems regard the average decoding error probability as an important indicator of system reliability.
The minimum average decoding error probability is

Pe,min = Σ_{j=1}^{m} p(yj) [1 − max_i p(xi / yj)] = 1 − Σ_{j=1}^{m} max_i [p(xi) p(yj / xi)]
Compare the transition probabilities p(yj/x1), p(yj/x2), ..., p(yj/xn); if p(yj/x*) is the largest, then decode yj into x*. The above method is called the maximum likelihood decoding criterion.

When p(xi) = 1/n, the maximum a posteriori probability criterion is equivalent to the maximum likelihood decoding criterion. The maximum likelihood decoding criterion uses the channel transition probabilities directly.
[Figure: binary channel with transition probabilities p(y1/x1) = 3/4, p(y2/x1) = 1/4, p(y1/x2) = 1/3, p(y2/x2) = 2/3]
The source is

[X, P]:  X:    x1     x2     ...  xn
         P(X): p(x1)  p(x2)  ...  p(xn)

and the channel matrix is

      | p(y1/x1)  p(y2/x1)  ...  p(ym/x1) |
[P] = | p(y1/x2)  p(y2/x2)  ...  p(ym/x2) |
      | ...       ...       ...  ...      |
      | p(y1/xn)  p(y2/xn)  ...  p(ym/xn) |
Example: the binary channel above, with p(y1/x1) = 3/4, p(y2/x1) = 1/4, p(y1/x2) = 1/3, p(y2/x2) = 2/3.
Among the a posteriori probabilities p(x1/yj), p(x2/yj), ..., p(xn/yj) there will be a maximum; set it as p(x*/yj), so that

p(x*/yj) ≥ p(xi/yj)   (for each i)

This indicates that the input is considered to be x* after receiving the symbol yj, so the decoding function is

F(yj) = x*   (j = 1, 2, ..., m)
Pe,min = Σ_{j=1}^{m} p(yj) [1 − max_i p(xi/yj)]

In this expression, the minimum value is associated with the prior probabilities p(xi).
For the binary channel above, assuming equal priors p(x1) = p(x2) = 1/2:

[P(Y/X)] = | 3/4  1/4 |
           | 1/3  2/3 |

[P(X;Y)] = | 3/8  1/8 |
           | 1/6  2/6 |

[P(Y)] = [13/24, 11/24]

[P(X/Y)] = | 9/13  3/11 |
           | 4/13  8/11 |
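These matrices can be reproduced step by step; a minimal sketch with exact fractions (the equal priors p(x1) = p(x2) = 1/2 are the assumption that makes [P(X;Y)] come out as shown):

from fractions import Fraction as F

P_yx = [[F(3, 4), F(1, 4)],   # rows: x1, x2; columns: y1, y2
        [F(1, 3), F(2, 3)]]
p_x = [F(1, 2), F(1, 2)]      # assumed equal priors

# joint distribution [P(X;Y)]: p(xi) * p(yj/xi)
P_xy = [[p_x[i] * P_yx[i][j] for j in range(2)] for i in range(2)]
# output distribution [P(Y)]: column sums of the joint matrix
P_y = [P_xy[0][j] + P_xy[1][j] for j in range(2)]  # [13/24, 11/24]
# posterior [P(X/Y)]: p(xi/yj) = p(xi, yj) / p(yj)
P_post = [[P_xy[i][j] / P_y[j] for j in range(2)] for i in range(2)]
print(P_post)  # [[9/13, 3/11], [4/13, 8/11]]

# maximum a posteriori decoding: F(yj) is the xi maximizing p(xi/yj)
for j in range(2):
    best = max(range(2), key=lambda i: P_post[i][j])
    print(f"F(y{j + 1}) = x{best + 1}")  # F(y1) = x1, F(y2) = x2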
Since p(x1/y1) = 9/13 > p(x2/y1) = 4/13 and p(x2/y2) = 8/11 > p(x1/y2) = 3/11, the decoding functions given by the maximum a posteriori probability criterion are F(y1) = x1 and F(y2) = x2.
Homework 5

A channel has input symbol set X: {x1, x2, x3}, output symbol set Y: {y1, y2, y3}, and transition probabilities p(yj/xi).