1. Discrete source S_{1} has 4 equiprobable symbols, while discrete source S_{2} has 16 equiprobable symbols. When the entropies of the two sources are compared, the entropy of:

3. An ideal power-limited communication channel with additive white Gaussian noise has a 4 kHz bandwidth and a signal-to-noise ratio of 255. The channel capacity is:

8 kilo bits/sec

9.63 kilo bits/sec

16 kilo bits/sec

32 kilo bits/sec

Answer.4. 32 kilo bits/sec

Explanation

Shannon’s channel capacity is the maximum rate at which bits can be transferred over the channel error-free. Mathematically, it is defined as:

$C = B\log_2\left(1 + \frac{S}{N}\right)~\text{bits/sec}$

Application:

$C = (4 \times 10^3)\log_2(1 + 255) = 4000 \times \log_2(256) = 4000 \times 8 = 32000~\text{bits/sec} = 32~\text{kilo bits/sec}$
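As a quick numeric check of the formula, the sketch below evaluates the capacity in plain Python (`channel_capacity` is an illustrative helper name, not from the question):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Q3: B = 4 kHz, S/N = 255 (a linear power ratio, not dB)
c = channel_capacity(4e3, 255)
print(c)  # 32000.0 bits/sec = 32 kilo bits/sec
```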

4. Let (X_{1}, X_{2}) be independent random variables. X_{1} has mean 0 and variance 1, while X_{2} has mean 1 and variance 4. The mutual information I(X_{1}; X_{2}) between X_{1} and X_{2} in bits is:

0

2

4

5

Answer.1. 0

Explanation

Mutual information of two random variables is a measure of how much knowing one random variable tells us about the other.

It is mathematically defined as:

I(X_{1}; X_{2}) = H(X_{1}) – H(X_{1}|X_{2})

Application:

Since X_{1} and X_{2} are independent, we can write:

H(X_{1}|X_{2}) = H(X_{1})

Therefore:

I(X_{1}; X_{2}) = H(X_{1}) – H(X_{1}) = 0
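The same definition can be checked numerically for discrete distributions via the equivalent form I(X; Y) = Σ p(x,y) log₂[p(x,y)/(p(x)p(y))]. A minimal sketch (the toy marginals below are assumptions for illustration, not the Gaussian variables from the question) shows that independence forces every term, and hence the sum, to zero:

```python
import math
from itertools import product

def mutual_information(px, py, pxy):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x)p(y)) )."""
    total = 0.0
    for x, y in product(px, py):
        p = pxy.get((x, y), 0.0)
        if p > 0:
            total += p * math.log2(p / (px[x] * py[y]))
    return total

# Independent variables: the joint is the product of the marginals,
# so every log term is log2(1) = 0.
px = {0: 0.5, 1: 0.5}
py = {0: 0.25, 1: 0.75}
pxy = {(x, y): px[x] * py[y] for x in px for y in py}
print(mutual_information(px, py, pxy))  # 0.0
```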

5. Which of the following statements are true?

(A) A parity check code can detect and correct single-bit error

(B) The efficiency of the Huffman code is linearly proportional to the average entropy

(C) Coding increases the channel bandwidth

(D) Coding increases the information rate

(E) A code dictionary with a minimum distance of 2 is not capable of error correction.

Choose the correct answer from the options given below:

(1) (A), (B), (D) only

(2) (B), (D), (E) only

(3) (A), (C), (D) only

(4) (B), (C), (D) only

4

3

2

1

Answer.3. 2, i.e. (B), (D), (E) only

Explanation

Types of error correction

Codes that are used for both error detection and error correction are called “error-correcting codes”. Error correction techniques are of two types:

Single-bit error correction

Burst error correction

The Hamming code (Hamming distance code) is one of the most widely used error-correcting codes in communication networks and digital systems.

A simple parity check code, however, can only detect a single-bit error; it cannot correct it, because the parity bit gives no information about which bit was flipped. Statement A is false.

Coding efficiency

The coding efficiency (η) of an encoding scheme is expressed as the ratio of the source entropy H(z) to the average codeword length L(z) and is given by

η = H(z)/L(z)

Since L(z) ≥ H(z) according to Shannon’s source coding theorem, and both L(z) and H(z) are positive, 0 ≤ η ≤ 1.

If m(z) is the minimum of the average codeword length obtained out of different uniquely decipherable coding schemes, then as per Shannon’s theorem, we can state that m(z) ≥ H(z)

Statement B is true.
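The efficiency formula can be illustrated with a small worked example. The sketch below builds Huffman codeword lengths with a heap (a standard construction, assumed here rather than taken from the text) and computes η = H(z)/L(z); for dyadic probabilities the Huffman code is exactly optimal and η = 1:

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword length of each symbol under Huffman coding (heap-based)."""
    # Each heap entry: (probability, tie-breaker, symbols in this subtree)
    heap = [(p, i, [sym]) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in probs}
    counter = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:      # merging pushes these symbols one level deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
H = sum(p * math.log2(1 / p) for p in probs.values())  # source entropy H(z)
lengths = huffman_lengths(probs)
L = sum(probs[s] * lengths[s] for s in probs)          # average codeword length L(z)
print(H, L, H / L)  # 1.75 1.75 1.0  -> efficiency eta = 1 for dyadic probabilities
```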

Channel coding
When it comes to channel coding, there is always a trade-off between energy efficiency and bandwidth efficiency.
Low-rate codes are less bandwidth-efficient because:

They carry large overheads.

They require more bandwidth to transmit the data.

Their decoders are more complex.

Codes with greater redundancy are more effective at correcting errors, and as the error-correcting capability of the code increases:

The system can operate at a lower transmit power and transmit over longer distances.

It is also more tolerant of interference and can therefore be used with smaller antennas.

It can transmit at a higher data rate.

Hence Statement C is false while Statement D is true.

Error correction and detection

Any code capable of correcting ‘t’ errors must satisfy the following condition:

d_{min} ≥ 2t + 1

Any code capable of detecting ‘t’ errors must satisfy the following condition:

d_{min} ≥ t + 1

For d_{min} = 2, the detection condition allows t = 1 (one error detectable), but the correction condition 2t + 1 ≤ 2 has no solution for t ≥ 1, so no error can be corrected. Statement E is true.
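Both conditions can be checked on a concrete code dictionary. The sketch below uses an assumed even-parity code on 3 data bits (an illustrative choice, not from the text), computes its minimum Hamming distance, and derives the detection/correction capability, confirming the claim about d_{min} = 2:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    """Minimum pairwise Hamming distance of a code dictionary."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# Even-parity code: 3 data bits plus one parity bit appended to each word.
parity_code = [w + str(w.count('1') % 2)
               for w in (format(i, '03b') for i in range(8))]

d = min_distance(parity_code)
t_correct = (d - 1) // 2   # errors correctable: d_min >= 2t + 1
t_detect = d - 1           # errors detectable:  d_min >= t + 1
print(d, t_correct, t_detect)  # 2 0 1  -> detects one error, corrects none
```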

6. An event has two possible outcomes with probabilities P_{1} = 1/2 and P_{2} = 1/64. The rate of information with 16 outcomes per second is:

(38/4) bits/sec

(38/64) bits/sec

(38/2) bits/sec

(38/32) bits/sec

Answer.1. (38/4) bits/sec

Explanation

Entropy: The average amount of information per outcome is called the entropy.

The information per outcome is:

$H = \sum_i P_i \log_2\left(\frac{1}{P_i}\right) = \frac{1}{2}\log_2(2) + \frac{1}{64}\log_2(64) = \frac{1}{2} + \frac{6}{64} = \frac{19}{32}~\text{bits}$

With r = 16 outcomes per second, the information rate is:

$R = rH = 16 \times \frac{19}{32} = \frac{19}{2} = \frac{38}{4}~\text{bits/sec}$
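The derivation can be verified numerically. A minimal sketch following the question’s figures (summing only over the two listed outcomes, as the question intends):

```python
import math

# Q6: outcomes with probabilities 1/2 and 1/64, at r = 16 outcomes/sec
probs = [1/2, 1/64]
H = sum(p * math.log2(1/p) for p in probs)  # information per outcome (bits)
r = 16
R = r * H                                   # information rate (bits/sec)
print(H, R)  # 0.59375 9.5  (= 19/32 bits and 38/4 bits/sec)
```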

8. If the SNR of an 8 kHz band-limited white Gaussian channel is 25 dB, the channel capacity is:

2.40 kbps

53.26 kbps

66.47 kbps

26.84 kbps

Answer.3. 66.47 kbps

Explanation

The capacity of a band-limited AWGN channel is given by the formula:

$C = B~{\log _2}\left( {1 + \frac{S}{N}} \right)$

C = Maximum achievable data rate (in bits/sec)

B = channel bandwidth

S/N = signal-to-noise power ratio (dimensionless)

Given

B.W. = 8 kHz, S/N = 25 dB

Since the S/N ratio is in dB,

$10~log_{10}(\frac{S}{N})=25$

$log_{10}(\frac{S}{N})=2.5$

$\frac{S}{N}=10^{2.5}\approx 316.23$

The channel capacity will be:

$C = (8\times 10^3)~\log_2(1 + 316.23)$

C = 66.47 kbps
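The dB-to-linear conversion is the step most easily gotten wrong, so a small numeric check helps. A sketch (the helper name `capacity_from_db` is illustrative):

```python
import math

def capacity_from_db(bandwidth_hz, snr_db):
    """AWGN channel capacity with SNR given in dB.

    The dB value must first be converted to a linear power ratio:
    S/N = 10^(dB/10).
    """
    snr_linear = 10 ** (snr_db / 10)  # 25 dB -> 10^2.5 ~ 316.23
    return bandwidth_hz * math.log2(1 + snr_linear)

c = capacity_from_db(8e3, 25)
print(c)  # ~66475 bits/sec, i.e. ~66.47 kbps
```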

9. _______ is also called vertical redundancy check, one of the types of error detection in communications.

Longitudinal check

Sum technique

Parity checking

Cyclic check

Answer.3. Parity checking

Explanation

Vertical redundancy check (VRC) is an error-checking method used on an eight-bit ASCII character.

In VRC, a parity bit is attached to each byte of data, which is then tested to determine whether the transmission is correct.

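The VRC mechanism can be sketched in a few lines, assuming even parity on a 7-bit ASCII value (the even-parity convention is an assumption here; odd parity works the same way):

```python
def add_vrc_bit(byte_7bit):
    """Append an even-parity bit to a 7-bit ASCII value (VRC)."""
    parity = bin(byte_7bit).count('1') % 2
    return (byte_7bit << 1) | parity

def vrc_check(byte_8bit):
    """True if the received 8-bit word has even parity (no error detected)."""
    return bin(byte_8bit).count('1') % 2 == 0

word = add_vrc_bit(ord('A'))         # 'A' = 0b1000001, two 1s -> parity bit 0
print(vrc_check(word))               # True: transmission looks correct
print(vrc_check(word ^ 0b00000100))  # False: a single flipped bit is detected
```

Note that flipping two bits would pass the check, which is exactly why a distance-2 parity code detects single-bit errors but cannot correct them.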

10. The channel capacity is measured in terms of:

Bits per channel

Number of input channels connected

Calls per channel

Number of output channels connected

Answer.1. Bits per channel

Explanation

Channel capacity C is measured in bits per channel (use). It is the theoretical upper bound on the rate at which information can be communicated at an arbitrarily low error rate, using an average received signal power S, through an analog communication channel subject to additive white Gaussian noise (AWGN) of power N.

The capacity of a band-limited AWGN channel is given by the formula:

$C = B{\log _2}\left( {1 + \frac{S}{N}} \right)$

C = maximum achievable data rate. Its units are bits/sec when the channel input and output are continuous in time; for a discrete (per-use) input/output, bits per channel use is the natural unit.