
Measure of Information

Principles of Communications

Prof. Mao Wang


Measure of Information
➢ Communication systems are designed to transmit information
➢ An information source produces the message
➢ The communication system delivers the message to the destination
➢ How do we quantify the information content of a message?
➢ Related to the “uncertainty” of the message
➢ The more “surprising” the message is, the more information it contains



Measure of Information (Cont’d)
➢ Discrete
➢ The uncertainty can be described using the probability P(x)
➢ P(x) – probability that the message x is produced by the info source
➢ I(x) – information content of x
➢ Depends on the probability of the message and not on the value of the message
➢ P(x) ↑ ⇒ I(x) ↓
➢ P(x) ↓ ⇒ I(x) ↑
➢ P(x) = 1 ⇒ I(x) = 0
➢ P(x) = 0 ⇒ I(x) = ∞
➢ We define I(x) = log₂(1/P(x)) = −log₂ P(x) bits
➢ Average information (per message) of an information source with messages xᵢ:
➢ H(X) = Σᵢ P(xᵢ)·I(xᵢ) = −Σᵢ P(xᵢ)·log₂ P(xᵢ)
➢ Known as the entropy of the source
➢ Measures the uncertainty of the source output (a short numerical sketch follows this list)
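As a quick check of these definitions, here is a minimal Python sketch (assuming NumPy is available; the function names are illustrative, not part of the course material) that computes I(x) = −log₂ P(x) and H(X) for a discrete source:

```python
import numpy as np

def self_information(p):
    """I(x) = -log2 P(x): information content of a message with probability p."""
    return -np.log2(p)

def entropy(probs):
    """H(X) = -sum_i P(x_i) * log2 P(x_i); zero-probability messages contribute 0."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# P(x) down => I(x) up; a certain message (P = 1) carries no information.
print(self_information(0.5))    # 1.0 bit
print(self_information(0.125))  # 3.0 bits
print(self_information(1.0))    # 0.0 bits

# Entropy: average information per message.
print(entropy([0.5, 0.5]))      # 1.0 bit/message (fair binary source)
print(entropy([0.9, 0.1]))      # ~0.47 bits/message (less uncertain source)
```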

➢ Continuous
➢ H(X) = −∫ f(x) logₐ f(x) dx
➢ f(x) – probability density of x (a numerical check of this formula follows)
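A small numerical sanity check, assuming a real-valued Gaussian density, log base 2, and NumPy (the grid limits and σ below are arbitrary choices): integrating −f(x)·log₂ f(x) on a fine grid should reproduce the known closed form ½·log₂(2πeσ²).

```python
import numpy as np

sigma = 2.0
x = np.linspace(-40, 40, 200001)          # fine grid, +/- 20 standard deviations
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

h_numeric = -np.sum(f * np.log2(f)) * dx  # -∫ f(x) log2 f(x) dx (Riemann sum)
h_closed = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)
print(h_numeric, h_closed)                # both ≈ 3.05 bits
```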


Example
➢ An information source, {ℌ, ℑ, ℵ, ℘}, contains 4 symbols; the probabilities that each symbol appears at the output of the source are 3/8, 1/4, 1/4, and 1/8, respectively
➢ H = −(3/8)·log₂(3/8) − (1/4)·log₂(1/4) − (1/4)·log₂(1/4) − (1/8)·log₂(1/8) ≈ 1.9 bits/symbol

➢ Assuming the transmission rate is 1000 symbols/sec, the total information transmitted per second is
➢ 1.9 bits/symbol × 1000 symbols/sec = 1900 bits/sec

➢ If the transmission lasts for 10 sec, what is the amount of information that has been transmitted? (see the worked computation below)
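The example's arithmetic, reproduced as a short sketch in Python (NumPy assumed); the last line also gives the straightforward answer to the 10-second question:

```python
import numpy as np

# 4-symbol source with P = 3/8, 1/4, 1/4, 1/8, as in the example
probs = np.array([3/8, 1/4, 1/4, 1/8])
H = -np.sum(probs * np.log2(probs))
print(H)                       # ≈ 1.906 bits/symbol (≈ 1.9 on the slide)

symbol_rate = 1000             # symbols/sec, as assumed in the example
info_rate = H * symbol_rate
print(info_rate)               # ≈ 1906 bits/sec (≈ 1900 with the rounded entropy)

# Information transmitted in 10 seconds
print(info_rate * 10)          # ≈ 19,060 bits (≈ 19,000 bits using H ≈ 1.9)
```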



Channel Capacity – The capability of a
channel for conveying information

Principles of Communications

Prof. Mao Wang


➢ What is the maximum amount of information we can send through a pulse over a
channel?
➢ If the channel is noiseless, i.e., the input is always faithfully reproduced at the
output, the maximal amount of information that can be conveyed over the
channel equals the source entropy – the channel has unlimited capacity
➢ If the channel is noisy, there is uncertainty between the input and the output of the channel, and the answer is non-trivial



Channel Model
➢ A channel is given by
➢ An input alphabet 𝒳 ≡ {x₁, x₂, ⋯, xₘ}
➢ An output alphabet 𝒴 ≡ {y₁, y₂, ⋯, yₙ}
➢ 𝒳 and 𝒴 are complex
➢ A conditional probability P_{Y|X}(yⱼ|xᵢ), which specifies the probability of observing Y = yⱼ at the output given that X = xᵢ is transmitted
➢ A channel with input X ∈ 𝒳 and output Y ∈ 𝒴 is entirely specified by the set of conditional probabilities P_{Y|X}(yⱼ|xᵢ)
➢ We can regard the action of a channel as transferring the message X into the output Y according to the conditional probability law P_{Y|X}(yⱼ|xᵢ) (a transition-matrix sketch follows below)
➢ Both the input and the output of the channel are uncertain (random) in nature

X X 
PY X y j xi  Y Y

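One convenient way to work with such a channel numerically is to store P_{Y|X}(yⱼ|xᵢ) as a row-stochastic matrix and draw each output from the row selected by the input. A minimal sketch, assuming NumPy and a binary symmetric channel with crossover probability 0.1 as an illustrative example (not a value from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix P[i, j] = P(Y = y_j | X = x_i); each row sums to 1.
# Illustrative binary symmetric channel with crossover probability 0.1.
P_Y_given_X = np.array([[0.9, 0.1],
                        [0.1, 0.9]])

def channel(x_symbols, P):
    """Pass a sequence of input symbol indices through the channel P(y|x)."""
    return np.array([rng.choice(P.shape[1], p=P[x]) for x in x_symbols])

x = rng.integers(0, 2, size=10)   # random binary input sequence
y = channel(x, P_Y_given_X)
print(x)
print(y)                          # y differs from x roughly 10% of the time
```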


Channel Model
➢ The information gain after receiving Y – the mutual information
➢ I(X;Y) ≡ H(X) − H(X|Y)
➢ H(X|Y) ≡ −Σⱼ₌₁ⁿ P(yⱼ) Σᵢ₌₁ᵐ P(xᵢ|yⱼ)·log₂ P(xᵢ|yⱼ) – the conditional entropy
➢ The amount of information lost due to noise
➢ For a noiseless channel, H(X|Y) = 0
➢ I(X;Y) = H(X)
➢ The information gain from receiving Y is H(X)
➢ For a very noisy channel, X and Y are independent, so H(X|Y) = H(X)
➢ I(X;Y) = 0
➢ The information gain from receiving Y is 0 (both extreme cases are verified numerically below)
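A minimal numerical check of the two extreme cases, assuming NumPy and a binary input; the “noiseless” and “very noisy” transition matrices below are illustrative choices:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p_x, P_Y_given_X):
    """Return (I(X;Y), H(X), H(X|Y)) for input distribution p_x and channel P(y|x)."""
    joint = p_x[:, None] * P_Y_given_X            # P(x_i, y_j)
    p_y = joint.sum(axis=0)                       # P(y_j)
    H_X = entropy(p_x)
    # H(X|Y) = -sum_j P(y_j) sum_i P(x_i|y_j) log2 P(x_i|y_j)
    H_X_given_Y = 0.0
    for j in range(len(p_y)):
        if p_y[j] > 0:
            H_X_given_Y += p_y[j] * entropy(joint[:, j] / p_y[j])
    return H_X - H_X_given_Y, H_X, H_X_given_Y

p_x = np.array([0.5, 0.5])                        # equiprobable binary input

noiseless = np.eye(2)                             # output always equals input
very_noisy = np.full((2, 2), 0.5)                 # output independent of input
print(mutual_information(p_x, noiseless))         # I = H(X) = 1 bit, H(X|Y) = 0
print(mutual_information(p_x, very_noisy))        # I = 0, H(X|Y) = H(X) = 1 bit
```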

[Figure: Venn diagrams of H(X), H(Y), H(X|Y), H(Y|X), and I(X;Y) – left: noiseless channel, I(X;Y) = H(X); right: very noisy channel, I(X;Y) = 0]


Channel Capacity: Definition
➢ Channel capacity
➢ The amount of information a channel can convey per use, maximized over all possible input distributions,
➢ C = max_{P(X)} I(X;Y) = max_{P(X)} [H(X) − H(X|Y)]  (bits/symbol)
➢ Since the maximum symbol (pulse) rate is W symbols/sec,
➢ C = W · max_{P(X)} I(X;Y) = W · max_{P(X)} [H(X) − H(X|Y)]  (bits/sec) (a numerical example follows)
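A brute-force way to see the maximization at work: sweep the input distribution of a binary symmetric channel (crossover probability 0.1, an assumed example) and take the largest I(X;Y); the result should match the known closed form 1 − H₂(0.1). The sketch below uses the equivalent form I(X;Y) = H(Y) − H(Y|X) and assumes NumPy:

```python
import numpy as np

def mutual_information(p_x, P):
    """I(X;Y) = H(Y) - H(Y|X) for input distribution p_x and channel P(y|x)."""
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    p_y = p_x @ P                                        # output distribution
    H_Y_given_X = np.sum([p_x[i] * H(P[i]) for i in range(len(p_x))])
    return H(p_y) - H_Y_given_X

eps = 0.1                                                # assumed crossover probability
P = np.array([[1 - eps, eps], [eps, 1 - eps]])

# Sweep the binary input distribution and keep the maximum mutual information.
qs = np.linspace(0.001, 0.999, 999)
C = max(mutual_information(np.array([q, 1 - q]), P) for q in qs)
print(C)                                                        # ≈ 0.531 bits/symbol
print(1 + eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps))    # closed form 1 - H2(0.1)
```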



Gaussian Channel Capacity
➢ An important special case occurs when the noise Z is zero-mean additive complex Gaussian and is independent of X,
➢ Y = X + Z
➢ When the transmitted symbol is limited to a certain power 𝑃, the received
signal has an average power of 𝑃 + 𝑁, where 𝑁 is the average noise power

➢ C = max_{P(X)} I(X;Y) = max_{P(X)} [H(X) − H(X|Y)]
   = max_{P(X)} [H(Y) − H(Y|X)]
   = max_{P(X)} [H(Y) − H(Z)]   (since Y = X + Z and Z is independent of X, H(Y|X) = H(Z))
   = max_{P(X)} H(Y) − log₂(πeN)   (bits/symbol)
[Figure: Venn diagram relating H(X), H(Y), H(X|Y), I(X;Y), and H(Z)]

➢ It can be shown that max_{P(Y)} H(Y) = log₂(πe(P+N)), achieved iff Y is Gaussian (a quick numerical comparison follows)
➢ Since Z is Gaussian, H(Y) is maximized as long as X is Gaussian (the sum of two independent Gaussian variables is still Gaussian)
➢ max_{P(X)} H(Y) = log₂(πe(P+N))
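The “Gaussian maximizes entropy for a given power” claim can be illustrated with closed forms. The slides use complex symbols, for which the maximum is log₂(πe(P+N)); the sketch below uses the real-valued analogue, ½·log₂(2πeσ²), and compares a Gaussian and a uniform density with the same (arbitrarily chosen) variance:

```python
import numpy as np

# For a fixed variance, the Gaussian density has the largest differential entropy.
var = 1.0

h_gauss = 0.5 * np.log2(2 * np.pi * np.e * var)   # ≈ 2.047 bits (real Gaussian)

# Uniform on [-a, a] with the same variance: a^2/3 = var  =>  a = sqrt(3*var)
a = np.sqrt(3 * var)
h_uniform = np.log2(2 * a)                        # ≈ 1.792 bits

print(h_gauss, h_uniform)                         # the Gaussian entropy is larger
```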



Gaussian Channel Capacity

➢ The capacity of a Gaussian channel is achieved when X is Gaussian


➢ C = log₂(πe(P+N)) − log₂(πeN) = log₂(1 + P/N)  (bits/symbol)
➢ Recall that the maximum symbol rate for a bandlimited channel is W (symbols/sec)
➢ C = W · log₂(1 + P/N)  (bits/sec) (example numbers below)
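Plugging illustrative numbers (assumed, not from the slides) into the formula:

```python
import numpy as np

# C = W * log2(1 + P/N) with an assumed bandwidth and SNR.
W = 1e6                      # symbol rate / bandwidth: 1e6 symbols/sec
snr_db = 20.0                # P/N = 100
snr = 10 ** (snr_db / 10)

C = W * np.log2(1 + snr)
print(C)                     # ≈ 6.66e6 bits/sec

# Doubling the transmit power adds less than 1 bit/symbol at high SNR:
print(W * np.log2(1 + 2 * snr) - C)   # ≈ 9.9e5 bits/sec (< W bits/sec)
```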

