# Information Theory MCQ || Information Theory Questions and Answers

1. Discrete source S1 has 4 equiprobable symbols while discrete source S2 has 16 equiprobable symbols. When the entropy of these two sources is compared, the entropy of:

1. S1 is greater than S2
2. S1 is less than S2
3. S1 is equal to S2
4. Depends on rate of symbols/second

Answer.2. S1 is less than S2

Explanation

Entropy H(X) of a discrete source:

$H\left( X \right) = \mathop \sum \nolimits_{i = 1}^n p\left( {{x_i}} \right).{\log _2}\frac{1}{{p\left( {{x_i}} \right)}} = {\log _2}m$

[for ‘m’ equiprobable symbols]

Calculation:

Case-I: For source S1;

No. of symbols: m = 4

H(X) = log2 m = log2 4

= 2 bits/symbol

Case-II: For source S2;

No. of symbols: m = 16

H(X) = log2 m = log2 16

= 4 bits/symbol

∴ The entropy of S1 is less than that of S2.
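The two entropies can be checked numerically. A minimal Python sketch (the function name is illustrative):

```python
from math import log2

def entropy_equiprobable(m: int) -> float:
    """H(X) = log2(m) bits/symbol for a source with m equiprobable symbols."""
    return log2(m)

h1 = entropy_equiprobable(4)   # source S1 -> 2.0 bits/symbol
h2 = entropy_equiprobable(16)  # source S2 -> 4.0 bits/symbol
```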

2. A (7, 4) block code has a generator matrix as shown.

$G = \left[ {\begin{array}{*{20}{c}} 1&0&0&0&1&1&0\\ 0&1&0&0&0&1&1\\ 0&0&1&0&1&1&{1}\\ 0&0&0&1&1&0&1 \end{array}} \right]$

If there is an error in the 7th bit, then the syndrome will be:

1. 001
2. 010
3. 100
4. 011

Answer.1. 001

Explanation

The generator Matrix is given by

$G = \left[ {{I_K}{P^T}} \right]$

${P^T} = \left[ {\begin{array}{*{20}{c}} 1&1&0\\ 0&1&1\\ 1&1&1\\ 1&0&1 \end{array}} \right]$

The parity check matrix is given by:

$H = \left[ {P\;\;{I_{n - k}}} \right]$

The syndrome is given by:

$S = e{H^T}$

${H^T} = \left[ {\begin{array}{*{20}{c}} {{P^T}}\\ {{I_{n - k}}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 1&1&0\\ 0&1&1\\ 1&1&1\\ 1&0&1\\ 1&0&0\\ 0&1&0\\ 0&0&1 \end{array}} \right]$

For an error in the 7th bit, the error vector is:

e = [0 0 0 0 0 0 1]

$S = \left[ {0\;0\;0\;0\;0\;0\;1} \right]\left[ {\begin{array}{*{20}{c}} 1&1&0\\ 0&1&1\\ 1&1&1\\ 1&0&1\\ 1&0&0\\ 0&1&0\\ 0&0&1 \end{array}} \right]$

S = [ 0 0 1]
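The syndrome computation can be reproduced in a few lines. A Python sketch over GF(2), using the H^T derived above (the function name is illustrative):

```python
# Rows of H^T for this (7, 4) code: the first four rows are P^T,
# the last three rows are the 3x3 identity.
H_T = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]

def syndrome(e):
    """Compute s = e . H^T with all arithmetic modulo 2."""
    return [sum(e[i] * H_T[i][j] for i in range(7)) % 2 for j in range(3)]

print(syndrome([0, 0, 0, 0, 0, 0, 1]))  # error in the 7th bit -> [0, 0, 1]
```

A single-bit error in position i simply picks out row i of H^T, which is why each single-bit error produces a distinct nonzero syndrome.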

3. An ideal power-limited communication channel with additive white Gaussian noise has a 4 kHz bandwidth and a signal-to-noise ratio of 255. The channel capacity is:

1. 8-kilo bits/sec
2. 9.63 kilo bits/sec
3. 16 kilo bits/sec
4. 32 kilo bits/sec

Answer.4. 32 kilo bits / sec

Explanation

Shannon’s channel capacity is the maximum rate at which bits can be transmitted with an arbitrarily low probability of error. Mathematically, it is defined as:

$C = B~{\log _2}\left( {1 + \frac{S}{N}} \right)\;{\rm{bits/sec}}$

B = Bandwidth of the channel

S/N = signal-to-noise ratio (linear)

Calculation:

Given B = 4 kHz and SNR = 255

Channel capacity will be:

${\rm{C}} = 4~{\log _2}\left( {1 + 255} \right) = 4~{\log _2}256\;{\rm{kbits/sec}}$

C = 4 × 8 = 32 kbits/sec
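The same arithmetic in a short Python sketch (the function name is illustrative):

```python
from math import log2

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + S/N) in bits/sec."""
    return bandwidth_hz * log2(1 + snr_linear)

print(channel_capacity(4e3, 255))  # -> 32000.0 bits/sec
```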

4. Let (X1, X2) be independent random variables. X1 has mean 0 and variance 1, while X2 has mean 1 and variance 4. The mutual information I(X1; X2) between X1 and X2 in bits is:

1. 0
2. 2
3. 4
4. 5

Answer.1. 0

Explanation

Mutual information measures how much knowing one random variable tells us about the other.

It is mathematically defined as:

I(X1; X2) = H(X1) – H(X1|X2)

Application:

Since X1 and X2 are independent, knowing X2 tells us nothing about X1:

H(X1|X2) = H(X1)

I(X1; X2) = H(X1) – H(X1) = 0
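Although the question involves continuous Gaussian variables, the same fact is easy to illustrate with a discrete sketch: when the joint distribution factors into the product of the marginals, every term of I(X; Y) vanishes. The marginal distributions below are arbitrary illustrative choices:

```python
from math import log2

def mutual_information(px, py, pxy):
    """I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x)p(y)) )."""
    total = 0.0
    for i, pi in enumerate(px):
        for j, pj in enumerate(py):
            p = pxy[i][j]
            if p > 0:
                total += p * log2(p / (pi * pj))
    return total

px = [0.5, 0.5]
py = [0.25, 0.75]
pxy = [[pi * pj for pj in py] for pi in px]  # independence: joint = product
print(mutual_information(px, py, pxy))  # -> 0.0
```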

5. Which of the following statements are true?

(A) A parity check code can detect and correct single-bit error

(B) The efficiency of the Huffman code is linearly proportional to the average entropy

(C) Coding increases the channel bandwidth

(D) Coding increases the information rate

(E) A code dictionary with a minimum distance of 2 is not capable of error correction.

Choose the correct answer from the options given below:

(1) (A), (B), (D) only

(2) (B), (D), (E) only

(3) (A), (C), (D) only

(4) (B), (C), (D) only

1. 4
2. 3
3. 2
4. 1

Answer.3. 2

Explanation

Types of error correction

Codes that can both detect and correct errors are called error-correcting codes. Error-correction techniques address two kinds of errors:

• Single-bit errors
• Burst errors

The Hamming code (based on the Hamming distance) is one of the most widely used error-correcting codes in communication networks and digital systems. A simple parity check code, by contrast, can only detect a single-bit error; it cannot locate or correct it.

Statement A is false.

Coding efficiency

The coding efficiency (η) of an encoding scheme is expressed as the ratio of the source entropy H(z) to the average codeword length L(z) and is given by

η = H(z)/L(z)

Since L(z) ≥ H(z) according to Shannon’s coding theorem, and both L(z) and H(z) are positive, 0 ≤ η ≤ 1.

If m(z) is the minimum of the average codeword length obtained out of different uniquely decipherable coding schemes, then as per Shannon’s theorem, we can state that m(z) ≥ H(z)

Statement B is true.

Channel coding

Channel coding always involves a compromise between energy efficiency and bandwidth efficiency. Low-rate codes (those with high redundancy) have two drawbacks:

• They require more bandwidth to transmit the data.
• Their decoders are more complex.

At the same time, codes with greater redundancy correct errors more effectively, and as the error-correcting capability of a code increases:

• The system can operate at a lower transmit power and over longer distances.
• It is more tolerant of interference and can therefore be used with small antennas.
• It can transmit at a higher data rate.

Hence Statement C is false while Statement D is true.

Error correction and detection

Any code capable of correcting ‘t’ errors must satisfy:

dmin ≥ 2t + 1

Any code capable of detecting ‘t’ errors must satisfy:

dmin ≥ t + 1

With dmin = 2, the correction condition gives t = 0: such a code can detect a single error but cannot correct any.

Statement E is true.
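The two minimum-distance conditions translate directly into code (function names are illustrative):

```python
def correctable_errors(d_min: int) -> int:
    """Largest t satisfying d_min >= 2t + 1."""
    return (d_min - 1) // 2

def detectable_errors(d_min: int) -> int:
    """Largest t satisfying d_min >= t + 1."""
    return d_min - 1

# A code with minimum distance 2 (statement E):
print(correctable_errors(2))  # -> 0: no error correction possible
print(detectable_errors(2))   # -> 1: single-error detection only
```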

6. An event has two possible outcomes with probabilities P1 = 1/2 and P2 = 1/64. The rate of information with 16 outcomes per second is:

1. (38/4) bits/sec
2. (38/64) bits/sec
3. (38/2) bits/sec
4. (38/32) bits/sec

Answer.1. (38/4) bits/sec

Explanation

Entropy: the average amount of information per outcome is called the entropy.

$H = \;\mathop \sum \limits_i {P_i}{\log _2}\left( {\frac{1}{{{P_i}}}} \right)\;bits/symbol$

Rate of information = r.H

Calculation:

Given: r = 16 outcomes/sec

$P_1=\frac{1}{2}$, and $P_2=\frac{1}{64}$

$H = \frac{1}{2}\log_2 2 + \frac{1}{64}\log_2 64 = \frac{1}{2} + \frac{6}{64}$

$H=\frac{19}{32} \ bits/outcome$

∴ Rate of information = r·H

R = 16 × 19/32

R = 19/2 = 38/4 bits/sec
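The same calculation as a Python sketch, using the two probabilities exactly as given in the question:

```python
from math import log2

def entropy(probs):
    """H = sum p * log2(1/p), in bits per outcome."""
    return sum(p * log2(1 / p) for p in probs)

h = entropy([1 / 2, 1 / 64])  # 19/32 bits/outcome
rate = 16 * h                 # outcomes/sec * bits/outcome
print(rate)  # -> 9.5 bits/sec, i.e. 19/2 = 38/4
```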

7. If the probability of a message is 1/4, then the information in bits is:

1. 8 bit
2. 4 bit
3. 2 bit
4. 1 bit

Answer.3. 2 bit

Explanation

The information associated with an event grows as its probability of occurrence shrinks: rarer events carry more information.

Mathematically, this is defined as:

$I = {\log _2}\left( {\frac{1}{P}} \right)\;bits$

where P and I represent the probability of the event and the information associated with it.

Calculation:

With P = 1/4, the information associated with it will be:

$I = {\log _2}\left( {\frac{1}{1/4}} \right)\;bits$

$I = {\log _2}\left( {4} \right)\;bits$

I = log2 (22) bits

Since logx yn = n logx y, the above can be written as:

I = 2 log2(2)

I = 2 bits
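As a one-line check in Python (the function name is illustrative):

```python
from math import log2

def self_information(p: float) -> float:
    """I = log2(1/p) bits for an event with probability p."""
    return log2(1 / p)

print(self_information(1 / 4))  # -> 2.0 bits
```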

8. If the SNR of an 8 kHz band-limited white Gaussian channel is 25 dB, the channel capacity is:

1. 2.40 kbps
2. 53.26 kbps
3. 66.47 kbps
4. 26.84 kbps

Answer.3. 66.47 kbps

Explanation

The capacity of a band-limited AWGN channel is given by the formula:

$C = B~{\log _2}\left( {1 + \frac{S}{N}} \right)$

C = Maximum achievable data rate (in bits/sec)

B = channel bandwidth

S/N = signal-to-noise power ratio (dimensionless)

Given

B.W. = 8 kHz, S/N = 25 dB

Since the S/N ratio is in dB,

$10~log_{10}(\frac{S}{N})=25$

$log_{10}(\frac{S}{N})=2.5$

$\frac{S}{N}=10^{2.5}=316.22$

The channel capacity will be:

$C = (8×10^3)~{\log _2}( {1 + 316.22})$

C = 66.47 kbps
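The dB conversion and the capacity formula can be combined in a short Python sketch (function names are illustrative):

```python
from math import log2

def db_to_linear(db: float) -> float:
    """Convert a power ratio from dB to a linear ratio."""
    return 10 ** (db / 10)

def capacity_kbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity in kbits/sec for an SNR given in dB."""
    return bandwidth_hz * log2(1 + db_to_linear(snr_db)) / 1e3

print(round(capacity_kbps(8e3, 25), 2))  # -> 66.47
```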

9. _______ is also called vertical redundancy check, one of the types of error detection in communications.

1. Longitudinal check
2. Sum technique
3. Parity checking
4. Cyclic check

Answer.3. Parity checking

Explanation

Vertical redundancy check (VRC) is an error-checking method used on an eight-bit ASCII character.

In VRC, a parity bit is attached to each byte of data, which is then tested to determine whether the transmission is correct.
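A VRC parity bit is simple to compute. A Python sketch using even parity (function names are illustrative):

```python
def parity_bit(byte: int) -> int:
    """Even-parity bit: 1 if the byte has an odd number of 1-bits."""
    return bin(byte).count("1") % 2

def vrc_check(byte: int, p: int) -> bool:
    """True if byte plus parity bit has an even number of 1-bits."""
    return (bin(byte).count("1") + p) % 2 == 0

p = parity_bit(0b1000001)       # ASCII 'A': two 1-bits -> parity 0
print(vrc_check(0b1000001, p))  # -> True (no error)
print(vrc_check(0b1000101, p))  # -> False (a single flipped bit is detected)
```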


10. The channel capacity is measured in terms of:

1. Bits per channel
2. Number of input channels connected
3. Calls per channel
4. Number of output channels connected

Answer.1. Bits per channel

Explanation

Channel capacity C is measured in bits per channel use. It is the theoretical upper bound on the rate at which information can be communicated with an arbitrarily low error rate, given an average received signal power S, over an analog channel subject to additive white Gaussian noise (AWGN) of power N.

The capacity of a band-limited AWGN channel is given by the formula:

$C = B{\log _2}\left( {1 + \frac{S}{N}} \right)$

C = maximum achievable data rate, expressed in bits/sec when the input and output are continuous in time; for a discrete-use channel, bits per channel use is the natural unit.

B = channel bandwidth

S/N = signal-to-noise power ratio (dimensionless)
