[Figure: communication system block diagram: sender side, channel with additive noise n(t), receiver side (demodulator, decoder, output signal); transmitted signal s(t), received signal r(t)]
Random processes
A random variable is the result of a single measurement. A random process is an indexed collection of random variables, or equivalently a nondeterministic signal that can be described by a probability distribution. Noise can be modeled as a random process.
White noise: the power spectral density of n(t) is flat, i.e. it has equal power in all frequency bands.
WGN continued
When an additive noise channel has a white Gaussian noise source, we call it an AWGN channel
It is the most frequently used model in communications. Reasons why we use this model:
It is easy to understand and compute.
It applies to a broad class of physical channels.
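A minimal sketch of this model (signal and noise-power values assumed, not from the slides): the received signal is the transmitted signal plus zero-mean Gaussian noise.

import numpy as np

# AWGN channel: r(t) = s(t) + n(t), with n(t) zero-mean Gaussian (white in discrete time)
def awgn_channel(s, noise_power, rng=np.random.default_rng(0)):
    n = rng.normal(0.0, np.sqrt(noise_power), size=s.shape)  # noise samples n(t)
    return s + n                                              # received signal r(t)

t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 5 * t)          # example transmitted signal s(t)
r = awgn_channel(s, noise_power=0.1)   # received signal r(t)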
Signal energy: $E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt$
Signal power: $P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2 \, dt$
Most signals are either finite energy and zero power, or infinite energy and finite power
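A quick numerical check of these definitions (example signal assumed): a finite-energy pulse keeps a finite energy while its average power shrinks toward zero as the observation window T grows.

import numpy as np

# Approximate E_x = integral of |x(t)|^2 dt and P_x = (1/T) * that integral over a window of length T
T = 20.0
t = np.linspace(-T / 2, T / 2, 200001)
dt = t[1] - t[0]
x = np.exp(-t ** 2)                    # finite-energy pulse
energy = np.sum(np.abs(x) ** 2) * dt   # ~1.25 (finite energy)
power = energy / T                     # tends to 0 as T grows (zero power)
print(energy, power)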
Digital system
A relatively small amount of noise will cause no harm at all. Too much noise will make decoding of the received signal impossible.
Symbol (S): an element of the alphabet A
String: a sequence of symbols from A
Codeword: a sequence of symbols representing a coded string
0110010111101001010
The symbol probabilities sum to one: $\sum_{i=1}^{n} p_i = 1$
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." -Shannon, 1944
Measure of Information
Information content of symbol si (in bits): $I(s_i) = -\log_2 p(s_i)$
Examples
A symbol with p(si) = 1 carries no information. The smaller p(si) is, the more information the symbol carries, as it is unexpected or surprising.
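For instance (assumed probabilities): a symbol with p(si) = 1/8 carries -log2(1/8) = 3 bits, while a symbol with p(si) = 1/2 carries only 1 bit.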
Entropy
Weigh information content of each source symbol by its probability of occurrence:
The resulting value is called entropy (H):
$H = -\sum_{i=1}^{n} p(s_i) \log_2 p(s_i)$
Entropy gives a lower bound on the number of bits needed to represent the information with code words.
Entropy Example
Alphabet = {A, B}
p(A) = 0.4; p(B) = 0.6
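Working this out with the entropy formula:
H = -0.4 * log2 0.4 - 0.6 * log2 0.6 ≈ 0.529 + 0.442 ≈ 0.971 bits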
Entropy definitions
Shannon entropy
Binary entropy formula
Differential entropy
Properties of entropy
Can be defined as the expectation of -log p(x), i.e. H(X) = E[-log p(x)]
Is not a function of the variable's values; it is a function of the variable's probabilities
Usually measured in bits (using logs of base 2) or nats (using logs of base e)
Maximized when all values are equally likely (i.e. a uniform distribution)
Equal to 0 when only one value is possible (see the sketch below)
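A small sketch (function name assumed) computing Shannon entropy in bits and checking two of these properties:

import math

# H(X) = -sum over x of p(x) * log2 p(x); zero-probability outcomes contribute nothing
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 values -> 2.0 bits (the maximum)
print(entropy([1.0]))                     # only one value possible -> 0.0 bits
print(entropy([0.4, 0.6]))                # the {A, B} example -> ~0.971 bits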
Mutual information
Mutual information is how much information about X can be obtained by observing Y
For any rate R < C, we can transmit information with arbitrarily small probability of error
For a binary symmetric channel with crossover probability p, the capacity is C = 1 - H(p, 1-p)
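As a worked illustration with an assumed crossover probability p = 0.1:
C = 1 - H(0.1, 0.9) = 1 - (-0.1 * log2 0.1 - 0.9 * log2 0.9) ≈ 1 - 0.469 ≈ 0.531 bits per channel use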
Coding theory
Information theory only gives us an upper bound on the communication rate. We need coding theory to find practical methods that approach that rate. Two types:
Source coding: compresses the source data to a smaller size
Channel coding: adds redundant bits to make transmission across a noisy channel more robust
Coding Intro
Assume alphabet K of {A, B, C, D, E, F, G, H}
In general, if we want to distinguish n different symbols, we need log2 n bits per symbol (rounded up), i.e. 3 bits for n = 8. We can code alphabet K as:
A 000  B 001  C 010  D 011  E 100  F 101  G 110  H 111
Coding Intro
BACADAEAFABBAAAGAH is encoded as the string of 54 bits 001000010000011000100000101000001001000000000110000111 (fixed-length code)
Coding Intro
With this coding:
A 0     B 100   C 1010  D 1011
E 1100  F 1101  G 1110  H 1111
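For comparison, a quick count: the same 18-symbol string BACADAEAFABBAAAGAH then takes 9(1) + 3(3) + 6(4) = 42 bits instead of 54.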
Huffman Tree
Huffman Encoding
Use probability distribution to determine how many bits to use for each symbol
higher-frequency symbols are assigned shorter codes
an entropy-based, block-variable coding scheme
Huffman Encoding
Produces a code that uses the minimum average number of bits per symbol:
no other code-word assignment can represent the same sequence using fewer bits per symbol
optimal among code-word schemes, though the average length may differ slightly from the theoretical lower limit (entropy)
lossless
Goal: find a prefix-free binary code with minimum weighted path length
prefix-free means no codeword is a prefix of any other codeword
Huffman Algorithm
Construct a binary tree of codes
leaf nodes represent symbols to encode
interior nodes represent cumulative probability
edges are assigned 0 or 1 to form the output code (see the sketch below)
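A minimal Python sketch of this construction (function name assumed; the probabilities are those of the example that follows): repeatedly merge the two lowest-probability nodes, then read each code off the root-to-leaf path.

import heapq
import itertools

def huffman_code(probs):
    counter = itertools.count()                  # tie-breaker so the heap never compares trees
    heap = [(p, next(counter), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)        # two lowest-probability nodes
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):              # interior node: cumulative probability
            walk(node[0], prefix + "0")          # edge labeled 0
            walk(node[1], prefix + "1")          # edge labeled 1
        else:                                    # leaf node: a source symbol
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_code({"A": 0.25, "B": 0.30, "C": 0.12, "D": 0.15, "E": 0.18}))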
Huffman Example
Construct the Huffman coding tree (in class)
Symbol (S)  P(S)
A           0.25
B           0.30
C           0.12
D           0.15
E           0.18
Characteristics of Solution
The lowest-probability symbol is always furthest from the root.
Assignment of 0/1 to child edges is arbitrary: other solutions are possible, but the code lengths remain the same.
If two nodes have equal probability, either can be selected.

Symbol (S)  Code
A           11
B           00
C           010
D           011
E           10
Example Encoding/Decoding
Encode BEAD: 00 10 11 011 = 001011011
Decode 0101100: 010 11 00 = CAB

Symbol (S)  Code
A           11
B           00
C           010
D           011
E           10
Symbol (S)  P(S)   Code
A           0.25   11
B           0.30   00
C           0.12   010
D           0.15   011
E           0.18   10

H = -.25 * log2 .25 - .30 * log2 .30 - .12 * log2 .12 - .15 * log2 .15 - .18 * log2 .18 = 2.24 bits

L = .25(2) + .30(2) + .12(3) + .15(3) + .18(2) = 2.27 bits
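A quick sketch (values copied from the table above) that reproduces the H = 2.24 and L = 2.27 figures:

import math

# Entropy H and average code length L for the example code
table = {"A": (0.25, "11"), "B": (0.30, "00"), "C": (0.12, "010"),
         "D": (0.15, "011"), "E": (0.18, "10")}
H = -sum(p * math.log2(p) for p, _ in table.values())
L = sum(p * len(code) for p, code in table.values())
print(round(H, 2), round(L, 2))  # 2.24 2.27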
$H = -\sum_{i=1}^{n} p(s_i) \log_2 p(s_i)$
Huffman reaches entropy limit when all probabilities are negative powers of 2
i.e., 1/2; 1/4; 1/8; 1/16; etc.
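For instance, with assumed probabilities {1/2, 1/4, 1/8, 1/8} Huffman assigns code lengths 1, 2, 3, 3 and
L = H = .5(1) + .25(2) + .125(3) + .125(3) = 1.75 bits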
Example
H = -.01 * log2 .01 - .99 * log2 .99 = 0.08 bits
L = .01(1) + .99(1) = 1 bit

Symbol (S)  P(S)   Code
A           0.01   1
B           0.99   0
Exercise
Compute entropy (H)
Build the Huffman tree
Compute the average code length
Code the string BCCADE

Symbol (S)  P(S)
A           0.1
B           0.2
C           0.4
D           0.2
E           0.1
Solution
Entropy: H = 2.1 bits

Symbol (S)  P(S)   Code
A           0.1    111
B           0.2    100
C           0.4    0
D           0.2    101
E           0.1    110

Average code length: L = .1(3) + .2(3) + .4(1) + .2(3) + .1(3) = 2.2 bits
BCCADE codes as 100 0 0 111 101 110 = 10000111101110
Limitations
Diverges from the entropy lower limit when the probability of a particular symbol becomes high, because it always uses an integer number of bits per symbol.