# Sorting in Data Structure MCQ

1. How many passes does an insertion sort algorithm consist of?

a) N

b) N-1

c) N+1

d) N^{2}

Answer: b

An insertion sort consists of N-1 passes when an array of N elements is given.
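As a concrete illustration, here is a minimal Python sketch of insertion sort that counts its passes (the function name and the pass counter are mine, added for illustration):

```python
def insertion_sort(arr):
    """Sort arr in place and return the number of passes made."""
    passes = 0
    # Outer loop: one pass for each element after the first, i.e. N-1 passes.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Inner loop: shift larger elements right to open a slot for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
        passes += 1
    return passes

data = [14, 12, 16, 6, 3, 10]
print(insertion_sort(data))  # 5 passes for N = 6
print(data)                  # [3, 6, 10, 12, 14, 16]
```

Note the loop structure: an outer for loop over the elements and an inner while loop that shifts elements.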

2. Which of the following algorithm implementations is similar to that of an insertion sort?

a) Binary heap

b) Quick sort

c) Merge sort

d) Radix sort

Answer: a

Insertion sort is similar to a binary heap algorithm in that both use a temporary variable to swap elements.

3. What is the average case running time of an insertion sort algorithm?

a) O(N)

b) O(N log N)

c) O(log N)

d) O(N^{2})

Answer: d

A tight bound on the average-case running time of insertion sort is mathematically shown to be O(N^{2}).

4. Any algorithm that sorts by exchanging adjacent elements requires O(N^{2}) time on average.

a) True

b) False

Answer: a

Each swap of adjacent elements removes exactly one inversion, and an average array contains N(N-1)/4 inversions, so quadratic work is required on average.

5. What is the average number of inversions in an array of N distinct numbers?

a) N(N-1)/4

b) N(N+1)/2

c) N(N-1)/2

d) N(N-1)/3

Answer: a

The total number of pairs in a list L is N(N-1)/2. Thus, an average list has half this amount, or N(N-1)/4 inversions.
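The N(N-1)/4 figure can be checked by brute force over every permutation of a small N (a Python sketch; the helper name is mine):

```python
from itertools import permutations

def count_inversions(seq):
    # An inversion is a pair (i, j) with i < j but seq[i] > seq[j].
    n = len(seq)
    return sum(1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j])

n = 4
perms = list(permutations(range(n)))
avg = sum(count_inversions(p) for p in perms) / len(perms)
print(avg)              # 3.0
print(n * (n - 1) / 4)  # N(N-1)/4 = 3.0, matching the average
```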

6. What is the running time of an insertion sort algorithm if the input is pre-sorted?

a) O(N^{2})

b) O(N log N)

c) O(N)

d) O(M log N)

Answer: c

If the input is pre-sorted, the running time is O(N), because the test in the inner loop always fails immediately and the algorithm runs quickly.

7. What will be the number of passes to sort the elements using insertion sort?

14, 12, 16, 6, 3, 10

a) 6

b) 5

c) 7

d) 1

Answer: b

The number of passes to sort the elements using insertion sort is given by N-1.

Here, N=6.

Therefore, 6-1=5 passes.

8. For the following array, what will the elements look like after the second pass?

34, 8, 64, 51, 32, 21

a) 8, 21, 32, 34, 51, 64

b) 8, 32, 34, 51, 64, 21

c) 8, 34, 51, 64, 32, 21

d) 8, 34, 64, 51, 32, 21

Answer: d

After the second pass (inserting 64, which is already in position relative to the sorted prefix), the array is 8, 34, 64, 51, 32, 21.
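A quick way to check this is to record a snapshot of the array after each insertion pass (a Python sketch; the helper name is mine):

```python
def insertion_sort_trace(arr):
    """Return the array's state after each insertion pass."""
    a = list(arr)
    snapshots = []
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        snapshots.append(list(a))
    return snapshots

passes = insertion_sort_trace([34, 8, 64, 51, 32, 21])
print(passes[1])  # after the second pass: [8, 34, 64, 51, 32, 21]
```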

9. Which of the following real-time examples is based on insertion sort?

a) arranging a pack of playing cards

b) database and distributed scenarios

c) arranging books on a library shelf

d) real-time systems

Answer: a

Arranging a pack of cards mimics an insertion sort. Database and distributed scenarios are examples for merge sort, arranging books on a shelf resembles a stack, and real-time systems use quick sort.

10. In C, what are the basic loops required to perform an insertion sort?

a) do-while

b) if else

c) for and while

d) for and if

Answer: c

To perform an insertion sort in C, we use two basic loops: an outer for loop and an inner while loop.

11. Binary search can be used in an insertion sort algorithm to reduce the number of comparisons.

a) True

b) False

Answer: a

Binary search can be used in an insertion sort algorithm to reduce the number of comparisons. This is called a Binary insertion sort.
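A binary insertion sort can be sketched in Python using the standard `bisect` module to find each insertion point; note that the O(log n) search does not remove the O(n) shifting cost:

```python
from bisect import bisect_right

def binary_insertion_sort(arr):
    """Insertion sort that finds each insertion point by binary search."""
    for i in range(1, len(arr)):
        key = arr[i]
        # O(log i) comparisons to locate where key belongs in arr[:i].
        # bisect_right keeps equal elements in order, preserving stability.
        pos = bisect_right(arr, key, 0, i)
        # Shifting still costs O(i) moves, so the worst case stays O(n^2).
        arr[pos + 1:i + 1] = arr[pos:i]
        arr[pos] = key
    return arr

print(binary_insertion_sort([8, 1, 4, 9, 6, 3, 5, 2, 7, 0]))
```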

12. Which of the following options contain the correct feature of an insertion sort algorithm?

a) anti-adaptive

b) dependable

c) stable, not in-place

d) stable, adaptive

Answer: d

Insertion sort is a sorting algorithm in which the elements are transferred one at a time to the right position. An insertion sort is stable, adaptive, in-place, and incremental in nature.

13. Which of the following sorting algorithms is the fastest for sorting small arrays?

a) Quick sort

b) Insertion sort

c) Shell sort

d) Heap sort

Answer: b

For sorting small arrays, insertion sort runs even faster than quick sort. However, it is impractical for sorting large arrays.

14. For the best case input, the running time of an insertion sort algorithm is?

a) Linear

b) Binary

c) Quadratic

d) Depends on the input

Answer: a

For the best case input (an already sorted array), insertion sort runs in linear time, O(N).

15. Which of the following examples represent the worst-case input for an insertion sort?

a) array in sorted order

b) array sorted in reverse order

c) normal unsorted array

d) a large array

Answer: b

The worst-case input for an insertion sort algorithm will be an array sorted in reverse order and its running time is quadratic.

1. Which of the following is correct about insertion sort?

a) insertion sort is stable and it sorts In-place

b) insertion sort is unstable and it sorts In-place

c) insertion sort is stable and it does not sort In-place

d) insertion sort is unstable and it does not sort In-place

Answer: a

During insertion sort, the relative order of elements is not changed. Therefore, it is a stable sorting algorithm. And insertion sort requires only O(1) of additional memory space. Therefore, it sorts In-place.

2. Which of the following sorting algorithm is best suited if the elements are already sorted?

a) Heap Sort

b) Quick Sort

c) Insertion Sort

d) Merge Sort

Answer: c

The best case running time of the insertion sort is O(n). The best case occurs when the input array is already sorted. As the elements are already sorted, only one comparison is made on each pass, so that the time required is O(n).

3. The worst-case time complexity of insertion sort is O(n^{2}). What will be the worst-case time complexity of insertion sort if the correct position for inserting element is calculated using binary search?

a) O(nlogn)

b) O(n^{2})

c) O(n)

d) O(logn)

Answer: b

The use of binary search reduces the time of finding the correct position from O(n) to O(logn). But the worst case of insertion sort remains O(n^{2}) because of the series of swapping operations required for each insertion.

4. Insertion sort is an example of an incremental algorithm.

a) True

b) False

Answer: a

In incremental algorithms, the complicated structure on n items is built by first building it on n − 1 items, and then making the necessary changes to add the last item. Insertion sort builds the sorted sequence one element at a time; therefore, it is an example of an incremental algorithm.

6. Which of the following is good for sorting arrays having less than 100 elements?

a) Quick Sort

b) Selection Sort

c) Merge Sort

d) Insertion Sort

Answer: d

Insertion Sort is good for sorting arrays having less than 100 elements.

The insertion sort is good for sorting small arrays. It sorts smaller arrays faster than any other sorting algorithm.

7. Consider an array of length 5, arr[5] = {9,7,4,2,1}. What are the steps of insertions done while running insertion sort on the array?

a) 7 9 4 2 1 → 4 7 9 2 1 → 2 4 7 9 1 → 1 2 4 7 9

b) 9 7 4 1 2 → 9 7 1 2 4 → 9 1 2 4 7 → 1 2 4 7 9

c) 7 4 2 1 9 → 4 2 1 9 7 → 2 1 9 7 4 → 1 9 7 4 2

d) 7 9 4 2 1 → 2 4 7 9 1 → 4 7 9 2 1 → 1 2 4 7 9

Answer: a

The steps performed while running insertion sort on the given array are:

Initial: 9 7 4 2 1 (key = 7)

Pass 1: 7 9 4 2 1 (key = 4)

Pass 2: 4 7 9 2 1 (key = 2)

Pass 3: 2 4 7 9 1 (key = 1)

Pass 4: 1 2 4 7 9

In each step, the key is the element that is compared with the elements to its left.

8. Statement 1: In insertion sort, after m passes through the array, the first m elements are in sorted order.

Statement 2: And these elements are the m smallest elements in the array.

a) Both the statements are true

b) Statement 1 is true but statement 2 is false

c) Statement 1 is false but statement 2 is true

d) Both the statements are false

Answer: b

In insertion sort, after m passes through the array, the first m elements are in sorted order, but they are simply whatever the first m elements were in the unsorted array; they are not necessarily the m smallest elements.

9. In insertion sort, the average number of comparisons required to place the 7th element into its correct position is __________

a) 9

b) 4

c) 7

d) 14

Answer: b

On average (k + 1) / 2 comparisons are required to place the kth element into its correct position. Therefore, average number of comparisons required for 7th element = (7 + 1)/2 = 4.

10. Which of the following is not an exchange sort?

a) Bubble Sort

b) Quick Sort

c) Partition-exchange Sort

d) Insertion Sort

Answer: d

In Exchange sorts, we compare each element of an array and swap those elements that are not in their proper position. Bubble Sort and Quick Sort are exchange sorts. Quick Sort is also called a Partition-exchange Sort. Insertion sort is not an exchange sort.

1. What is an in-place sorting algorithm?

a) It needs O(1) or O(logn) memory to create auxiliary locations

b) The input is already sorted and in-place

c) It requires additional storage

d) It requires additional space

Answer: a

An in-place sorting algorithm rearranges the items within the storage occupied by the original array, using only O(1) or O(log n) auxiliary memory for temporarily holding data.

2. In the following scenarios, when will you use selection sort?

a) The input is already sorted

b) A large file has to be sorted

c) Large values need to be sorted with small keys

d) Small values need to be sorted with large keys

Answer: c

Selection is based on keys, and selection sort performs few swaps, hence a file with large values and small keys can be efficiently sorted with selection sort.

3. What is the worst-case complexity of selection sort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: d

Selection sort maintains a sub-list: everything to the left of the current ‘min’ position is already sorted, and everything to the right is yet to be sorted.

Starting with the first element, this boundary moves toward the final element. The worst-case complexity of selection sort is O(n^{2}).

5. What is the advantage of selection sort over other sorting techniques?

a) It requires no additional storage space

b) It is scalable

c) It works best for inputs that are already sorted

d) It is faster than any other sorting technique

Answer: a

- Selection sort is a simple sorting algorithm.
- This sorting algorithm is an in-place comparison-based algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted part at the right end.
- Since selection sort is an in-place sorting algorithm, it does not require additional storage.

6. What is the average case complexity of the selection sort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: d

The average case complexity of the selection sort is O(n^{2}).

In the average case, even if the input is partially sorted, the selection sort behaves as if the entire array is not sorted. The selection sort is insensitive to input.

7. What is the disadvantage of selection sort?

a) It requires auxiliary memory

b) It is not scalable

c) It can be used for small keys

d) It takes linear time to sort the elements

Answer: b

- Selection sort is a simple sorting algorithm.
- This sorting algorithm is an in-place comparison-based algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted part at the right end.
- As the input size increases, the performance of the selection sort decreases.

8. The given array is arr = {3,4,5,2,1}. The number of iterations in bubble sort and selection sort respectively are __________

a) 5 and 4

b) 4 and 5

c) 2 and 4

d) 2 and 5

Answer: a

Since the input array is not sorted, bubble sort takes 5 iterations, while selection sort always takes n-1 = 4 iterations.

9. The given array is arr = {1,2,3,4,5}. (bubble sort is implemented with a flag variable)The number of iterations in selection sort and bubble sort respectively are __________

a) 5 and 4

b) 1 and 4

c) 0 and 4

d) 4 and 1

Answer: d

Selection sort is insensitive to input, hence it makes n-1 = 4 iterations. Bubble sort iterates only once, as the input is already sorted and the flag stops the algorithm after the first pass.
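Both iteration counts above can be checked with short sketches (the function names are mine; the bubble version uses the swap flag described in the question):

```python
def bubble_passes(arr):
    """Flagged bubble sort: stops after the first pass with no swaps."""
    a, passes = list(arr), 0
    while True:
        passes += 1
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            return passes

def selection_passes(arr):
    """Selection sort always makes n-1 passes, whatever the input."""
    a, n = list(arr), len(arr)
    for i in range(n - 1):
        m = min(range(i, n), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return n - 1

print(bubble_passes([3, 4, 5, 2, 1]), selection_passes([3, 4, 5, 2, 1]))  # 5 4
print(bubble_passes([1, 2, 3, 4, 5]), selection_passes([1, 2, 3, 4, 5]))  # 1 4
```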

10. What is the best case complexity of selection sort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: d

The best, average, and worst case complexities of selection sort are O(n^{2}).

(n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2 ≈ n^{2}/2.
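The comparison count above can be confirmed by instrumenting selection sort (a Python sketch; the sample array is arbitrary):

```python
def selection_sort_comparisons(arr):
    """Selection sort that also counts key comparisons."""
    a, n, comparisons = list(arr), len(arr), 0
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1            # one comparison per candidate
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a, comparisons

a, c = selection_sort_comparisons([64, 25, 12, 22, 11])
n = 5
print(c, n * (n - 1) // 2)  # 10 10, i.e. the count matches n(n-1)/2 for any input
```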

1. What is an external sorting algorithm?

a) Algorithm that uses tape or disk during the sort

b) Algorithm that uses main memory during the sort

c) Algorithm that involves swapping

d) Algorithms that are considered ‘in place’

Answer: a

External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. As the name suggests, the external sorting algorithm uses external memory like tape or disk.

2. What is an internal sorting algorithm?

a) Algorithm that uses tape or disk during the sort

b) Algorithm that uses main memory during the sort

c) Algorithm that involves swapping

d) Algorithms that are considered ‘in place’

Answer: b

An internal sort is any data sorting process that takes place entirely within the main memory of a computer. As the name suggests, the internal sorting algorithm uses internal main memory.

3. What is the worst-case complexity of bubble sort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: d

Bubble sort works by starting from the first element and swapping the elements if required in each iteration. The worst-case complexity of bubble sort is O(n^{2}).

5. What is the average case complexity of bubble sort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: d

Bubble sort works by starting from the first element and swapping the elements if required in each iteration even in the average case. The average case complexity of bubble sort is O(n^{2}).

6. Which of the following is not an advantage of optimized bubble sort over other sorting techniques in the case of sorted elements?

a) It is faster

b) Consumes less memory

c) Detects whether the input is already sorted

d) Consumes less time

Answer: b

Optimized Bubble sort is one of the simplest sorting techniques and perhaps the only advantage it has over other techniques is that it can detect whether the input is already sorted.

It is faster than the others in the case of a sorted array and takes less time to detect whether the input array is sorted. However, it consumes the same memory as other sorting techniques, so consuming less memory is not an advantage.

7. The given array is arr = {1, 2, 4, 3}. Bubble sort is used to sort the array elements. How many iterations will be done to sort the array?

a) 4

b) 2

c) 1

d) 0

Answer: a

Even though the first two elements are already sorted, bubble sort needs 4 iterations to sort the given array.

9. What is the best case efficiency of bubble sort in the improvised version?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: c

The best case efficiency of the improvised bubble sort is O(n). If the list is already sorted, the flag detects that no swaps occurred and the remaining iterations are skipped, hence efficiency improves to O(n).

10. The given array is arr = {1,2,4,3}. Bubble sort is used to sort the array elements. How many iterations will be done to sort the array with an improvised version?

a) 4

b) 2

c) 1

d) 0

Answer: b

The given array is arr = {1,2,4,3}. Only the last two elements are out of order: the first pass swaps them, and the second pass detects no swaps and stops. Hence only 2 iterations are required.

1. Merge sort uses which of the following technique to implement sorting?

a) backtracking

b) greedy algorithm

c) divide and conquer

d) dynamic programming

Answer: c

Merge sort uses divide and conquer in order to sort a given array. This is because it divides the array into two halves, applies the merge sort algorithm to each half individually, and then merges the two sorted halves together.

2. What is the average case time complexity of merge sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The average case time complexity of merge sort is O(n log n).

The recurrence relation for merge sort is given by T(n) = 2T(n/2) + n. It is found to be equal to O(n log n) using the master theorem.
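A minimal top-down merge sort in Python, matching the recurrence above (a sketch, not a production implementation):

```python
def merge_sort(arr):
    """Top-down merge sort: T(n) = 2T(n/2) + n, i.e. O(n log n)."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    # Merge step: O(n) time and O(n) auxiliary space.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```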

3. What is the auxiliary space complexity of merge sort?

a) O(1)

b) O(log n)

c) O(n)

d) O(n log n)

Answer: c

The auxiliary space complexity of merge sort is O(n). An additional space of O(n) is required in order to merge two sorted arrays. Thus merge sort is not an in-place sorting algorithm.

4. Merge sort can be implemented using O(1) auxiliary space.

a) true

b) false

Answer: a

Standard merge sort requires O(n) space to merge two sorted arrays. We can optimize this merging process so that it takes only constant space. This version is known as the in-place merge sort.

5. What is the worst-case time complexity of merge sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The worst case does not affect merge sort's time complexity, as its algorithm performs the same number of steps in any case. So its time complexity remains O(n log n).

6. Which of the following method is used for sorting in merge sort?

a) merging

b) partitioning

c) selection

d) exchanging

Answer: a

The merge sort algorithm divides the array into two halves and applies the merge sort algorithm to each half individually after which the two sorted halves are merged together. Thus its method of sorting is called merging.

7. What will be the best case time complexity of merge sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

Merge sort's time complexity is not affected in any case, as its algorithm performs the same number of steps. So its time complexity remains O(n log n) even in the best case.

8. Which of the following is not a variant of merge sort?

a) in-place merge sort

b) bottom-up merge sort

c) top-down merge sort

d) linear merge sort

Answer: d

In-place, top-down, and bottom-up merge sort are different variants of merge sort. Linear merge sort is not a possible variant: merge sort is a comparison-based sort, and the lower bound for any comparison-based sort is Ω(n log n).

9. Choose the incorrect statement about merge sort from the following?

a) it is a comparison-based sort

b) it is an adaptive algorithm

c) it is not an in-place algorithm

d) it is a stable algorithm

Answer: b

Merge sort is not an adaptive sorting algorithm. This is because it takes O(n log n) time complexity irrespective of any case.

10. Which of the following is not in place sorting algorithm by default?

a) merge sort

b) quick sort

c) heap sort

d) insertion sort

Answer: a

Quick sort, heap sort, and insertion sort are in-place sorting algorithms, whereas merge sort requires an additional O(n) space to merge two sorted arrays.

Even though we have a variation of merge sort (to do in-place sorting), it is not the default option. So, among the given choices, merge sort is the most appropriate answer.

11. Which of the following is not a stable sorting algorithm?

a) Quick sort

b) Cocktail sort

c) Bubble sort

d) Merge sort

Answer: a

Out of the given options, quick sort is the only algorithm that is not stable. Merge sort is a stable sorting algorithm.

12. Which of the following stable sorting algorithm takes the least time when applied to an almost sorted array?

a) Quick sort

b) Insertion sort

c) Selection sort

d) Merge sort

Answer: d

Insertion sort takes linear time to sort a partially sorted array. Though merge sort and quick sort take O(n log n) time to sort, merge sort is stable. Hence, merge sort takes less time to sort a partially sorted array.

13. Merge sort is preferred for arrays over linked lists.

a) true

b) false

Answer: b

Merge sort is preferred for linked lists over arrays. In a linked list, nodes can be relinked in O(1) time with O(1) extra space, so the merge operation does not need the O(n) auxiliary array that merging two halves of an array requires.

14. Which of the following sorting algorithm makes use of merge sort?

a) Tim sort

b) intro sort

c) BOGO sort

d) quick sort

Answer: a

Tim sort is a hybrid sorting algorithm as it uses more than one sorting algorithm internally. It makes use of merge sort and insertion sort.

16. Which of the following sorting algorithm does not use recursion?

a) quick sort

b) merge sort

c) heap sort

d) bottom-up merge sort

Answer: d

Bottom-up merge sort uses the iterative method in order to implement sorting. It begins by merging pairs of adjacent arrays of size 1 each, then merges arrays of size 2 each in the next step, and so on.

1. Merge sort uses which of the following algorithm to implement sorting?

a) backtracking

b) greedy algorithm

c) divide and conquer

d) dynamic programming

Answer: c

Merge sort uses the technique of divide and conquer in order to sort a given array. It divides the array into two halves and applies the merge sort algorithm to each half individually after which the sorted versions of these halves are merged together.

2. What is the average case time complexity of standard merge sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The average case time complexity of the standard merge sort is O(nlogn).

The recurrence relation for merge sort is given by T(n) = 2T(n/2) + n. This can be solved using the master theorem and is found to be equal to O(n log n).

3. What is the auxiliary space complexity of standard merge sort?

a) O(1)

b) O(log n)

c) O(n)

d) O(n log n)

Answer: c

The merging of two sorted arrays requires an additional space of n due to which the auxiliary space requirement of merge sort is O(n). Thus merge sort is not an in-place sorting algorithm.

4. What is the space complexity of in-place merge sort?

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: c

The space complexity of the in-place version of merge sort is O(log n) which is used for storing call stack formed due to recursion.

Note that algorithms with O(log n) space complexity also qualify as in-place algorithms, since log n grows very slowly compared to n.

6. Merge sort uses which of the following method to implement sorting?

a) merging

b) partitioning

c) selection

d) exchanging

Answer: a

Merge sort implements sorting by merging the sorted versions of smaller parts of the array. Thus its method of sorting is called merging.

7. In-place merge sort has the same time complexity as standard merge sort.

a) true

b) false

Answer: b

The in-place version of merge sort has a greater time complexity than the standard version. This is because merging in the in-place version takes O(n^{2}) time, whereas in the standard version it takes O(n) time.

8. In-place merge sort is a stable sort.

a) true

b) false

Answer: a

In-place merge sort like standard merge sort is a stable sort. This implies that the relative position of equal valued elements in the input and sorted array remains the same.

9. Choose the incorrect statement about merge sort from the following?

a) both standard merge sort and in-place merge sort are stable

b) standard merge sort has greater time complexity than in-place merge sort

c) standard merge sort has greater space complexity than in-place merge sort

d) in place merge sort has O(log n) space complexity

Answer: b

The time complexity of the standard merge sort is O(n log n) and that of the in-place merge sort is O(n^{2}). So the time complexity of the in-place merge sort is more than that of the standard merge sort.

1. Merge sort uses which of the following algorithm to implement sorting?

a) backtracking

b) greedy algorithm

c) divide and conquer

d) dynamic programming

Answer: c

Merge sort uses the technique of divide and conquer in order to sort a given array. It divides the array into two halves and applies the merge sort algorithm to each half individually, after which the sorted versions of these halves are merged together.

2. What is the average case time complexity of standard merge sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The recurrence relation for merge sort is given by T(n) = 2T(n/2) + n. This can be solved using the master theorem and is found to be equal to O(n log n).

3. What is the auxiliary space complexity of standard merge sort?

a) O(1)

b) O(log n)

c) O(n)

d) O(n log n)

Answer: c

The merging of two sorted arrays requires an additional space of n due to which the auxiliary space requirement of merge sort is O(n). Thus merge sort is not an in-place sorting algorithm.

4. What is the auxiliary space complexity of the bottom-up merge sort?

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: b

The auxiliary space complexity of bottom-up merge sort is the same as that of standard merge sort, as both use the same algorithm for merging sorted arrays, which takes O(n) space. However, bottom-up merge sort does not need to maintain a call stack.

5. What is the average time complexity of the bottom-up merge sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The merge function in bottom-up merge sort takes O(n) time and is placed inside a for loop that runs O(log n) times; thus, the overall time complexity of the code is O(n log n).

6. Merge sort uses which of the following method to implement sorting?

a) merging

b) partitioning

c) selection

d) exchanging

Answer: a

Merge sort implements sorting by merging the sorted versions of smaller parts of the array. Thus its method of sorting is called merging.

7. Bottom-up merge sort uses recursion.

a) true

b) false

Answer: b

Bottom-up merge sort uses the iterative method in order to implement sorting. It begins by merging a pair of adjacent arrays of size 1 each and then merging arrays of size 2 each in the next step and so on.
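An iterative bottom-up merge sort can be sketched as follows (the function name is mine); runs of width 1, 2, 4, … are merged without any recursion:

```python
def bottom_up_merge_sort(arr):
    """Iterative merge sort: no recursion, hence no call-stack space."""
    a = list(arr)
    width = 1
    # O(log n) rounds: merge runs of width 1, then 2, then 4, ...
    while width < len(a):
        for lo in range(0, len(a), 2 * width):
            mid = min(lo + width, len(a))
            hi = min(lo + 2 * width, len(a))
            left, right = a[lo:mid], a[mid:hi]
            i = j = 0
            for k in range(lo, hi):
                if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                    a[k] = left[i]; i += 1
                else:
                    a[k] = right[j]; j += 1
        width *= 2
    return a

print(bottom_up_merge_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]
```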

8. Bottom-up merge sort is a stable sort.

a) true

b) false

Answer: a

Bottom-up merge sort like standard merge sort is a stable sort. This implies that the relative position of equal valued elements in the input and sorted array remains the same.

9. Choose the correct statement about bottom-up merge sort from the following?

a) bottom-up merge sort has greater time complexity than standard merge sort

b) bottom-up merge sort has lesser time complexity than standard merge sort

c) bottom-up merge sort saves auxiliary space required on the call stack

d) bottom-up merge sort uses recursion.

Answer: c

Bottom-up merge sort unlike standard merge sort uses an iterative algorithm for sorting. Thus, it saves auxiliary space required by the call stack.

1. Which of the following sorting algorithms is the fastest?

a) Merge sort

b) Quick sort

c) Insertion sort

d) Shell sort

Answer: b

- Quicksort is a popular sorting algorithm that is often faster in practice compared to other sorting algorithms.
- It utilizes a divide-and-conquer strategy to quickly sort data items by dividing a large array into two smaller arrays.
- Quick sort is the fastest known sorting algorithm because of its highly optimized inner loop.

2. Quick sort follows the Divide-and-Conquer strategy.

a) True

b) False

Answer: a

Quicksort is a popular sorting algorithm that is often faster in practice compared to other sorting algorithms. In quick sort, the array is divided into sub-arrays, and then it is sorted (divide-and-conquer strategy).

3. What is the worst-case time complexity of a quick sort algorithm?

a) O(N)

b) O(N log N)

c) O(N^{2})

d) O(log N)

Answer: c

Quicksort is a popular sorting algorithm that is often faster in practice compared to other sorting algorithms. The worst-case performance of a quick sort algorithm is mathematically found to be O(N^{2}).

4. Which of the following methods is the most effective for picking the pivot element?

a) first element

b) the last element

c) median-of-three partitioning

d) random element

Answer: c

Median-of-three partitioning is the best method for choosing an appropriate pivot element. Picking the first, last, or a random element as the pivot is not as effective.

5. Find the pivot element from the given input using the median-of-three partitioning method.

8, 1, 4, 9, 6, 3, 5, 2, 7, 0.

a) 8

b) 7

c) 9

d) 6

Answer: d

Using the median-of-three partitioning method:

Left element = 8, right element = 0,

Center element = arr[(left + right)/2] = arr[4] = 6.

The median of {8, 0, 6} is 6, so the pivot is 6.
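The calculation above can be reproduced directly (a Python sketch; the helper name is mine):

```python
def median_of_three(arr):
    """Pivot = median of the first, middle, and last elements."""
    left, right = 0, len(arr) - 1
    center = (left + right) // 2          # index 4 for a 10-element array
    candidates = [arr[left], arr[center], arr[right]]
    return sorted(candidates)[1]          # the middle value of the three

print(median_of_three([8, 1, 4, 9, 6, 3, 5, 2, 7, 0]))  # median of {8, 6, 0} = 6
```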

6. Which is the safest method to choose a pivot element?

a) choosing a random element as a pivot

b) choosing the first element as a pivot

c) choosing the last element as a pivot

d) median-of-three partitioning method

Answer: a

Choosing a random element as a pivot is the safest method to choose the pivot element since it is very unlikely that a random pivot would consistently provide a poor partition.

7. What is the average running time of a quick sort algorithm?

a) O(N^{2})

b) O(N)

c) O(N log N)

d) O(log N)

Answer: c

The best case and average case analysis of a quick sort algorithm are mathematically found to be O(N log N).

8. Which of the following sorting algorithms is used along with quick sort to sort the sub-arrays?

a) Merge sort

b) Shell sort

c) Insertion sort

d) Bubble sort

Answer: c

- Insertion sort is used along with quick sort to sort the sub-arrays. It is used only at the end.
- Insertion sort is a sorting algorithm in which the elements are transferred one at a time to the right position.

9. Quick sort uses join operation rather than merge operation.

a) true

b) false

Answer: a

Quicksort is a popular sorting algorithm that is often faster in practice compared to other sorting algorithms. Quick sort uses the join operation since join is a faster operation than merge.

10. How many sub-arrays does the quick sort algorithm divide the entire array into?

a) one

b) two

c) three

d) four

Answer: b

The entire array is divided into two partitions, 1st sub-array containing elements less than the pivot element and 2nd sub-array containing elements greater than the pivot element.

11. Which is the worst method of choosing a pivot element?

a) first element as pivot

b) last element as pivot

c) median-of-three partitioning

d) random element as pivot

Answer: a

Choosing the first element as the pivot is the worst method because if the input is pre-sorted or in reverse order, then the pivot provides a poor partition.

12. Which among the following is the best cut-off range to perform insertion sort within a quick sort?

a) N=0-5

b) N=5-20

c) N=20-30

d) N>30

Answer: b

A good cut-off range is anywhere between N=5 and N=20 to avoid nasty degenerate cases.

1. Quick sort is a __________

a) greedy algorithm

b) divide and conquer algorithm

c) dynamic programming algorithm

d) backtracking algorithm

Answer: b

Quick sort is a divide and conquer algorithm. Quick sort first partitions a large array into two smaller sub-arrays and then recursively sorts the sub-arrays.

2. What is the worst-case time complexity of the Quick sort?

a) O(nlogn)

b) O(n)

c) O(n^{3})

d) O(n^{2})

Answer: d

The worst-case running time for Quick sort is O(n^{2}). In Quick sort, the worst-case behavior occurs when the partitioning routine produces two sub-arrays, one with n – 1 elements and the other with 0 elements.

3. Apply Quick sort on a given sequence 7 11 14 6 9 4 3 12. What is the sequence after the first phase, the pivot is the first element?

a) 6 4 3 7 11 9 14 12

b) 6 3 4 7 9 14 11 12

c) 7 6 14 11 9 4 3 12

d) 7 6 4 3 9 14 11 12

Answer: b

Let’s apply Quick sort on the given sequence. For the first phase, pivot = 7 (the first element). The pointer i scans right for an element greater than the pivot and j scans left for an element smaller than the pivot; such pairs are swapped until i and j cross, after which the pivot is swapped into its final position:

7 11 14 6 9 4 3 12 (i stops at 11, j stops at 3)

7 3 14 6 9 4 11 12 (swap 11 and 3; i stops at 14, j stops at 4)

7 3 4 6 9 14 11 12 (swap 14 and 4; i stops at 9, j stops at 6, so i and j have crossed)

6 3 4 7 9 14 11 12 (swap the pivot 7 with 6)
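The phase above can be reproduced with a small partition routine (a sketch of one classic first-element-pivot scheme; details such as the scan conditions vary between textbooks):

```python
def partition_first_pivot(a, low, high):
    """Partition a[low..high] around the pivot a[low].

    i scans right for an element >= pivot, j scans left for one <= pivot;
    out-of-place pairs are swapped until i and j cross, then the pivot
    is swapped into its final position j.
    """
    pivot = a[low]
    i, j = low + 1, high
    while True:
        while i <= high and a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1
    a[low], a[j] = a[j], a[low]
    return j

seq = [7, 11, 14, 6, 9, 4, 3, 12]
partition_first_pivot(seq, 0, len(seq) - 1)
print(seq)  # [6, 3, 4, 7, 9, 14, 11, 12]
```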

4. The best case behaviour occurs for quick sort is, if partition splits the array of size n into __________

a) n/2 : (n/2) – 1

b) n/2 : n/3

c) n/4 : 3n/2

d) n/4 : 3n/4

Answer: a

The best case analysis of quick sort occurs when the partition splits the array into two subarrays, each of size no more than n/2 since one is of size n/2 and one of size (n/2) – 1.

5. Quick sort is a stable sorting algorithm.

a) True

b) False

Answer: b

In a stable sorting algorithm the records with equal keys appear in the same order in the sorted sequence as they appear in the input unsorted sequence. Quick sort does not preserve the relative order of equal sort items. Therefore, the Quick sort is not a stable sort.

6. Consider the Quick sort algorithm in which the partitioning procedure splits elements into two sub-arrays and each sub-array contains at least one-fourth of the elements. Let T(n) be the number of comparisons required to sort an array of n elements. Then T(n)<=?

a) T(n) <= 2 T(n/4) + cn

b) T(n) <= T(n/4) + T(3n/4) + cn

c) T(n) <= 2 T(3n/4) + cn

d) T(n) <= T(n/3) + T(3n/4) + cn

Answer: b

If one sub-array contains n/4 elements, T(n/4) comparisons are needed for it, T(3n/4) comparisons are required for the remaining 3n/4 elements, and cn is the time required for partitioning about the pivot. If one sub-array contains more than n/4 elements, the other contains fewer than 3n/4 elements, and the total is still bounded by T(n/4) + T(3n/4) + cn.

7. Consider the Quick sort algorithm which sorts elements in ascending order using the first element as a pivot. Then which of the following input sequence will require a maximum number of comparisons when this algorithm is applied to it?

a) 22 25 56 67 89

b) 52 25 76 67 89

c) 22 25 76 67 50

d) 52 25 89 67 76

Answer: a

If the input sequence is already sorted, worst-case behavior occurs for the Quick sort algorithm that uses the first element as the pivot. Therefore, the sorted input sequence 22 25 56 67 89 requires the maximum number of comparisons.

8. A machine needs a minimum of 200 sec to sort 1000 elements by Quick sort. The minimum time needed to sort 200 elements will be approximately __________

a) 60.2 sec

b) 45.54 sec

c) 31.11 sec

d) 20 sec

Answer: c

Quick sort requires about n log₂ n comparisons in the best case, where n is the size of the input array. So 1000 × log₂ 1000 ≈ 9000 comparisons are required to sort 1000 elements, which takes 200 sec. To sort 200 elements, a minimum of 200 × log₂ 200 ≈ 1400 comparisons is required. This takes 200 × 1400 / 9000 ≈ 31.11 sec.
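The estimate can be checked numerically. The sketch below (the helper name `scaled_time` is illustrative, not from any library) uses exact n log₂ n values rather than the rounded counts 9000 and 1400, which is why it gives about 30.7 sec rather than 31.11:

```python
from math import log2

# Scale a measured time by the ratio of n*log2(n) comparison counts.
def scaled_time(t_known, n_known, n_new):
    return t_known * (n_new * log2(n_new)) / (n_known * log2(n_known))

t = scaled_time(200, 1000, 200)  # ≈ 30.7 sec with exact logarithms
```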

9. Which one of the following sorting algorithms is best suited to sort an array of 1 million elements?

a) Bubble sort

b) Insertion sort

c) Merge sort

d) Quick sort

Answer: d

The Quick sort is best suited to sort the array of 1 million elements. The practical implementations of Quick sort use a randomized version.

In practice, randomized Quick sort algorithms rarely show worst-case behavior and are almost always O(nlogn). And Quick sort requires little additional space and exhibits good cache locality.

10. Quick sort is a space-optimized version of _________

a) Bubble sort

b) Selection sort

c) Insertion sort

d) Binary tree sort

Answer: d

Quick sort is a space-optimized version of binary tree sort. In binary tree sort, the elements are inserted sequentially into a binary search tree, whereas Quick sort organizes the elements into a tree that is implied by the recursive calls.

1. QuickSort can be categorized into which of the following?

a) Brute Force technique

b) Divide and conquer

c) Greedy algorithm

d) Dynamic programming

Answer: b

QuickSort can be categorized as Divide and conquer: the array is divided (partitioned) about a pivot element, and the two parts are then sorted recursively.

3. What is the worst-case complexity of QuickSort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: d

The worst-case complexity of QuickSort is O(n^{2}); with the first (or last) element as the pivot, it occurs when the input array is already sorted.

4. What is a randomized QuickSort?

a) The leftmost element is chosen as the pivot

b) The rightmost element is chosen as the pivot

c) Any element in the array is chosen as the pivot

d) A random number is generated which is used as the pivot

Answer: c

QuickSort is randomized by choosing a random element of the array as the pivot (or, equivalently, by shuffling the input into a random order before sorting).

6. What is the best case complexity of QuickSort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: a

The array is partitioned into equal halves, using the Divide and Conquer master theorem, the complexity is found to be O(nlogn).

7. The given array is arr = {2,3,4,1,6}. What are the pivots that are returned as a result of subsequent partitioning?

a) 1 and 3

b) 3 and 1

c) 2 and 6

d) 6 and 2

Answer: a

For the given array arr = {2,3,4,1,6}, the pivots returned by the subsequent partitioning steps are 1 and 3.

8. What is the average case complexity of QuickSort?

a) O(nlogn)

b) O(logn)

c) O(n)

d) O(n^{2})

Answer: a

The position of the partition (split) is unknown, hence all n possibilities are considered, and the average is found by summing over all of them and dividing by n. The average case complexity of QuickSort is O(nlogn).

9. The given array is arr = {2,6,1}. What are the pivots that are returned as a result of subsequent partitioning?

a) 1 and 6

b) 6 and 1

c) 2 and 6

d) 1

Answer: d

Only one pivot is returned before the array is sorted; that pivot is 1.

10. Which of the following is not true about QuickSort?

a) in-place algorithm

b) pivot position can be changed

c) adaptive sorting algorithm

d) can be implemented as a stable sort

Answer: b

Once a pivot is chosen, and its position is finalized in the sorted array, it cannot be modified.

1. Quick sort uses which of the following algorithm to implement sorting?

a) backtracking

b) greedy algorithm

c) divide and conquer

d) dynamic programming

Answer: c

Quick sort uses the technique of divide and conquer to sort a given array. It divides the array into two parts about a pivot and then applies quick sort to both parts.

2. What is a randomized quick sort?

a) quick sort with random partitions

b) quick sort with a random choice of pivot

c) quick sort with random output

d) quick sort with random input

Answer: b

Randomized quick sort chooses a random element as a pivot. It is done so as to avoid the worst case of the quick sort in which the input array is already sorted.
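A minimal (deliberately not in-place) sketch of randomized quick sort; the random pivot choice is the only difference from a plain quick sort:

```python
import random

def randomized_quick_sort(a):
    # Returns a new sorted list. A uniformly random pivot makes the
    # worst case (e.g. an already-sorted input) very unlikely.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quick_sort(less) + equal + randomized_quick_sort(greater)
```

An in-place implementation would instead swap a random element into the pivot position and partition as usual.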

3. What is the purpose of using randomized quick sort over standard quick sort?

a) to avoid worst-case time complexity

b) to avoid worst-case space complexity

c) to improve the accuracy of the output

d) to improve the average case time complexity

Answer: a

Randomized quick sort helps in avoiding the worst-case time complexity of O(n^{2}), which occurs when the input array is already sorted. However, the average case and best case time complexities remain unaltered.

4. What is the auxiliary space complexity of randomized quick sort?

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: c

The auxiliary space complexity of randomized quick sort is O(log n), used for the call stack formed by recursion. Note that algorithms with O(log n) space complexity also qualify as in-place algorithms, since log n grows very slowly.

5. What is the average time complexity of randomized quick sort?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The average case time complexity of randomized quick sort is the same as that of standard quick sort as randomized quick sort only helps in preventing the worst case. It is equal to O(n log n).

6. Quick sort uses which of the following method to implement sorting?

a) merging

b) partitioning

c) selection

d) exchanging

Answer: b

Quick sort makes partitions of the input array about the pivot in order to implement sorting. Thus its method of sorting is called partitioning.

7. Randomized quick sort is an in-place sort.

a) true

b) false

Answer: a

In-place algorithms require constant or very little auxiliary space. Quick sort qualifies as an in-place sorting algorithm as it has a very low auxiliary space requirement of O(log n).

8. Randomized quick sort is a stable sort.

a) true

b) false

Answer: b

Randomized quick sort like standard quick sort is also not a stable sorting algorithm. It is because the elements with the same values are not guaranteed to appear in the same relative order in the output sorted array.

9. What is the best case time complexity of randomized quick sort?

a) O(log n)

b) O(n log n)

c) O(n^{2})

d) O(n^{2} log n)

Answer: b

Best case time complexity is given in the case when there is equal partitioning of the array about the pivot. It is given by the relation T(n) = 2T(n/2) + n which gives the result O(n log n).

10. Which of the following is incorrect about randomized quicksort?

a) it has the same time complexity as a standard quick sort

b) it has the same space complexity as standard quick sort

c) it is an in-place sorting algorithm

d) it cannot have a time complexity of O(n^{2}) in any case.

Answer: d

Randomized quick sort prevents the worst-case complexity of O(n^{2}) in most cases. But in some rare cases, the time complexity can become O(n^{2}). The probability of such a case is however very low.

12. What is the worst-case time complexity of randomized quicksort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(n^{2} log n)

Answer: c

Randomized quicksort prevents the worst-case complexity of O(n^{2}) in most cases. But in some rare cases, the time complexity can still become O(n^{2}). The probability of such a case is, however, very low.

1. Quick sort uses which of the following algorithm to implement sorting?

a) backtracking

b) greedy algorithm

c) divide and conquer

d) dynamic programming

Answer: c

Quick sort uses the technique of divide and conquer to sort a given array. It divides the array into two parts about a pivot and then applies quick sort to both parts.

2. What is the median-of-three technique in quick sort?

a) quick sort with random partitions

b) quick sort with a random choice of pivot

c) choosing median element as pivot

d) choosing the median of the first, last, and middle element as a pivot

Answer: d

In the median-of-three technique, the median of the first, last, and middle elements is chosen as the pivot. It is done so as to avoid the worst case of quick sort, in which the time complexity shoots to O(n^{2}).
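A small sketch of the pivot-selection step only (the helper name `median_of_three` is illustrative, not from any standard library):

```python
def median_of_three(a, lo, hi):
    # Return the index of the median of a[lo], a[mid], a[hi];
    # that element is then used as the pivot for partitioning.
    mid = (lo + hi) // 2
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]
```

A sorted or reverse-sorted input no longer triggers the worst case, because the median of the endpoints and the middle is close to the true median for such inputs.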

3. What is the purpose of using a median of three quick sorts over a standard quick sort?

a) to avoid worst-case time complexity

b) to avoid worst-case space complexity

c) to improve the accuracy of the output

d) to improve the average case time complexity

Answer: a

The median-of-three quick sort helps in avoiding the worst-case time complexity of O(n^{2}), which occurs when the input array is already sorted. However, the average case and best case time complexities remain unaltered.

4. What is the auxiliary space complexity of a median of three quick sorts?

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: c

The auxiliary space complexity of median-of-three quick sort is O(log n), used for the call stack formed by recursion. Note that algorithms with O(log n) space complexity also qualify as in-place algorithms, since log n grows very slowly.

5. What is the average time complexity of the median of three quick sorts?

a) O(n log n)

b) O(n^{2})

c) O(n^{2} log n)

d) O(n log n^{2})

Answer: a

The average case time complexity of median-of-three quick sort is the same as that of standard quick sort, as the median-of-three rule only helps in preventing the worst case. It is equal to O(n log n).

6. Quick sort uses which of the following method to implement sorting?

a) merging

b) partitioning

c) selection

d) exchanging

Answer: b

Quick sort makes partitions of the input array about the pivot in order to implement sorting. Thus its method of sorting is called partitioning.

7. Median of three quick sorts is an in-place sort.

a) true

b) false

Answer: a

In-place algorithms require constant or very little auxiliary space. The median-of-three quick sort qualifies as an in-place sorting algorithm; it has a very low auxiliary space requirement of O(log n).

8. Median of three quick sorts is a stable sort.

a) true

b) false

Answer: b

The median-of-three quick sort, like standard quick sort, is not a stable sorting algorithm, because elements with equal values are not guaranteed to appear in the same relative order in the sorted output.

9. What is the best case time complexity of median-of-three quick sort?

a) O(log n)

b) O(n log n)

c) O(n^{2})

d) O(n^{2} log n)

Answer: b

Best case time complexity is given in the case when there is equal partitioning of the array about the pivot. It is given by the relation T(n) = 2T(n/2) + n which gives the result O(n log n).

1. What is the other name for a shell sort algorithm?

a) Diminishing increment sort

b) Diminishing decrement sort

c) Insertion sort

d) Selection sort

Answer: a

The other name for the shell sort algorithm is diminishing increment sort, as the distance (increment) between comparisons decreases as the algorithm runs until the last phase.

2. The worst-case running time of shell sort, using Shell’s increments is?

a) O(N)

b) O(N log N)

c) O(log N)

d) O(N^{2})

Answer: d

The worst-case running time of shell sort using Shell’s increments is O(N^{2}). This bound is tight, as certain inputs force quadratic running time.

3. Who invented the shell sort algorithm?

a) John Von Neumann

b) Donald Shell

c) Tony Hoare

d) Alan Shell

Answer: b

The shell sort algorithm was invented by Donald Shell. Merge sort was invented by John von Neumann, and quick sort by Tony Hoare.

4. Shell sort algorithm is the first algorithm to break the quadratic time barrier.

a) True

b) False

Answer: a

Shell sort broke the quadratic time barrier as it works by comparing distant elements.

5. Shell sort algorithm is an example of?

a) External sorting

b) Internal sorting

c) In-place sorting

d) Bottom-up sorting

Answer: b

Shell sort is an example of internal sorting because all of the elements are held in main memory (an array) during sorting.

6. Shell sort uses a sequence called an incrementing sequence to sort the elements.

a) True

b) False

Answer: a

Shell sort uses an increment sequence h1, h2, h3, … and any such sequence works as long as h1 = 1.

7. Which of the following sorting algorithms is closely related to shell sort?

a) Selection sort

b) Merge sort

c) Insertion sort

d) Bucket sort

Answer: c

Shell sort is a generalized version of the insertion sort algorithm. It first sorts elements that are far apart from each other and successively reduces the interval between the elements to be sorted.

Shell sort performs an insertion sort on hk independent sub-arrays, so it is mainly a variation of insertion sort.

8. Why is Shell sort called a generalization of Insertion sort?

a) Shell sort allows an exchange of far items whereas insertion sort moves elements by one position

b) Improved lower bound analysis

c) Insertion is more efficient than any other algorithms

d) Shell sort performs internal sorting

Answer: a

Shell sort is a generalized version of the insertion sort algorithm.

It first sorts elements that are far apart from each other and successively reduces the interval between the elements to be sorted.

Shell sort is an extension of insertion sort because it swaps elements at far distances and at a faster rate.

9. Given an array of the following elements 81,94,11,96,12,35,17,95,28,58,41,75,15.

What will be the sorted order after 5-sort?

a) 11,12,15,17,28,35,41,58,75,81,94,95,96

b) 28,12,11,35,41,58,17,94,75,81,96,95,15

c) 35,17,11,28,12,41,75,15,96,58,81,94,95

d) 12,11,15,17,81,94,85,96,28,35,41,58,75

Answer: c

The general strategy of hk-sorting is: for each position i in hk, hk+1, …, N−1, place the element in the correct spot among positions i, i−hk, i−2hk, etc.

The sorted order after the 5-sort is 35,17,11,28,12,41,75,15,96,58,81,94,95.
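A single gap-h pass can be reproduced with a short sketch (the helper name `h_sort` is illustrative): it runs an insertion sort over every h-th sub-array.

```python
def h_sort(a, h):
    # One gap-h pass of shell sort: insertion-sort each of the h
    # interleaved sub-arrays a[0], a[h], a[2h], ... in place.
    for i in range(h, len(a)):
        v, j = a[i], i
        while j >= h and a[j - h] > v:
            a[j] = a[j - h]
            j -= h
        a[j] = v
    return a
```

Running it with h = 5 on the question’s array reproduces option c.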

10. Which of the following statements is the basic for loop for a shell sort algorithm?

a) for(increment=N/2;increment>0;increment/=2)

b) for(i=1;i<n;i++)

c) for(i=n/2;i>=0;i- -)

d) for(i=0;i< n;i++;numelements- -)

Answer: a

The basic for loop for a shell sort algorithm is for(increment=N/2; increment>0; increment/=2).

for(increment=N/2; increment>0; increment/=2) represents shell sort, for(i=1; i<n; i++) represents insertion sort, for(i=n/2; i>=0; i--) represents heap sort, and for(i=0; i<n; i++; numelements--) represents merge sort.
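A minimal shell sort sketch built around exactly that loop (translated to Python):

```python
def shell_sort(a):
    n = len(a)
    increment = n // 2
    while increment > 0:  # for(increment=N/2; increment>0; increment/=2)
        # One gap-`increment` insertion-sort pass over the array.
        for i in range(increment, n):
            v, j = a[i], i
            while j >= increment and a[j - increment] > v:
                a[j] = a[j - increment]
                j -= increment
            a[j] = v
        increment //= 2
    return a
```

The final pass with increment = 1 is an ordinary insertion sort, which is fast because the earlier passes have left the array nearly sorted.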

11. On how many increment sequences does the worst-case analysis of shell sort depend?

a) one

b) two

c) three

d) four

Answer: c

The worst-case analysis of shell sort depends on three increment sequences: Shell’s increments, Hibbard’s increments, and Sedgewick’s increments.

12. What is the worst-case running time of shell sort using Hibbard’s increments?

a) O(N)

b) O(N^{2})

c) O(N^{1/2})

d) O(N^{3/2})

Answer: d

Mathematically, the worst-case running time of shell sort using Hibbard’s increments is O(N^{3/2}).

13. What is the general form of Shell’s increments?

a) 1,2,3,…,n

b) 1, 3, 7, …, 2^{k}−1

c) 1, 3, 5, 7, …, k−1

d) 1, 5, 10, 15, …, k−1

Answer: b

Shell’s increments are of the form 1, 3, 7, …, 2^{k}−1. The key difference is that consecutive elements have no common factors.

14. What is the worst-case analysis of shell sort using Shell’s increments?

a) O(N)

b) O(N^{2})

c) O(N^{1/2})

d) O(N^{3/2})

Answer: b

The worst-case analysis is mathematically found to be O(N^{2}). The proof is rather complicated.

15. What is the worst-case analysis of Shell sort using Sedgewick’s increments?

a) O(N^{2})

b) O(N^{3/2})

c) O(N^{4/3})

d) O(N^{5/4})

Answer: c

The worst-case running time of Shell sort using Sedgewick’s increments is mathematically calculated to be O(N^{4/3}).

1. Shell sort is also known as _____________

a) diminishing decrement sort

b) diminishing increment sort

c) partition exchange sort

d) diminishing insertion sort

Answer: b

Shell sort is also known as diminishing increment sort since each pass is defined by an increment h such that only the records which are h units apart will be sorted.

2. Statement 1: Shell sort is a stable sorting algorithm.

Statement 2: Shell sort is an in-place sorting algorithm.

a) Both statements are true

b) Statement 2 is true but statement 1 is false

c) Statement 2 is false but statement 1 is true

d) Both statements are false

Answer: b

In Shell sort, the relative order of elements with equal values may change. Therefore, it is not a stable sorting algorithm. Shell sort is an in-place sorting algorithm as it requires O(1) auxiliary space.

3. Shell sort is applied on the elements 27 59 49 37 15 90 81 39 and the chosen decreasing sequence of increments is (5,3,1). The result after the first iteration will be

a) 27 59 49 37 15 90 81 39

b) 27 59 37 49 15 90 81 39

c) 27 59 39 37 15 90 81 49

d) 15 59 49 37 27 90 81 39

Answer: c

With increment 5, only elements five positions apart are compared; the single out-of-order pair (49, 39) is swapped, giving 27 59 39 37 15 90 81 49.

4. In a Shell sort implementation (code not shown) where span is the current increment, y is the element being inserted, and k is its current position, which condition will correctly implement the while loop?

a) k >= j && y < elements[k- span]

b) k >= span || y < elements[k + span]

c) k >= j || y < elements[k + span]

d) k >= span && y < elements[k- span]

Answer: d

In Shell sort, for increment = h we sort the sub-arrays that start at an arbitrary element and include every hth element.

So, if h = 4 the algorithm sorts:

Sub-array formed with elements at positions 1, 5, 9, 13 … in the original array

Sub-array formed with elements at positions 2, 6, 10, 14 … in the original array

Sub-array formed with elements at positions 3, 7, 11, 15 … in the original array

Sub-array formed with elements at positions 4, 8, 12, 16 … in the original array

Therefore, the condition k >= span && y < elements[k - span] will implement the while loop correctly.

5. Shell sort is an improvement on ________

a) insertion sort

b) selection sort

c) binary tree sort

d) quick sort

Answer: a

Shell sort is an improvement on insertion sort that allows the exchange of elements that are far apart. The shell sort algorithm sorts separate sub-arrays of the original input array. These sub-arrays contain every hth element of the original array.

6. An array that is first 7-sorted, then 5-sorted becomes _________

a) 7-ordered

b) 5-ordered

c) both 2-ordered and 5-ordered

d) both 7-ordered and 5-ordered

Answer: d

An array that is 7-sorted, becomes 7-ordered. And an array that is 5-sorted, becomes 5-ordered. If the k-ordered array is h-sorted, it remains k-ordered. Thus, an array that is first 7-sorted, then 5-sorted becomes both 7-ordered and 5-ordered.

7. If Hibbard increments (h1 = 1, h2 = 3, h3 = 7, …, hk = 2^{k}−1) are used in a Shell sort implementation, then the best case time complexity will be ________

a) O(nlogn)

b) O(n)

c) O(n^{2})

d) O(logn)

Answer: a

The best case occurs when the array is already sorted. In the best case, the number of comparisons for each of the increment-based insertion sorts is equal to the length of the array.

Here 2^{k}−1 < n, where n is the number of records, so k < log(n+1). Thus the sorting time in the best case is less than n × log(n+1), and the best case time complexity is therefore O(nlogn).

8. Records R1, R2, R3,.. RN with keys K1, K2, K3,.. KN are said to be h-ordered, if ________

a) Ki <= Ki+h for 1<= i*h <= N

b) Kh <= Ki+h for 1<= i <= N

c) Ki <= Kh for 1<= i <= h

d) Ki <= Ki+h for 1<= i <= N-h

Answer: d

Records are h-ordered if every hth element (starting anywhere) yields a sorted array. Therefore, given records with keys K1, K2, K3, and KN are said to be h-ordered if Ki <= Ki+h for 1<= i <= N-h.

9. Shell sort is more efficient than insertion sort if the length of input arrays is small.

a) True

b) False

Answer: b

Insertion sort is more efficient than Shell sort if the length of the array is small because insertion sort is quite simple to program and involves very few actions other than comparisons and replacements on each pass.

10. Which of the following is true about Shell sort?

a) Shell sort’s passes completely sort the elements before going on to the next-smallest gap while Comb sort’s passes do not completely sort the elements

b) Shell sort’s passes do not completely sort the elements before going on to the next-smallest gap like in Comb sort

c) Comb sort’s passes completely sort the elements before going on to the next-smallest gap like in Shell sort

d) Shell sort’s passes do not completely sort the elements before going on to the next-smallest gap while Comb sort’s passes completely sort the elements

Answer: a

Both Shell sort and Comb sort have repeated sorting passes with decreasing gaps. Unlike Comb sort, in Shell sort, the array is sorted completely in each pass before going on to the next-smallest gap.

1. The heap sort algorithm is based on which of the following?

a) Fibonacci heap

b) Binary tree

c) Priority queue

d) FIFO

Answer: c

Heap sort is based on the priority queue data structure, and it gives the best sorting time.

2. In what time can a binary heap be built?

a) O(N)

b) O(N log N)

c) O(log N)

d) O(N^{2})

Answer: a

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The basic strategy is to build a binary heap of N elements which takes O(N) time.
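Floyd’s bottom-up construction is one standard way to achieve the O(N) build; a minimal sketch (helper names are illustrative) for a max-heap:

```python
def sift_down(a, i, n):
    # Move a[i] down until both children are no larger than it.
    while 2 * i + 1 < n:
        child = 2 * i + 1
        if child + 1 < n and a[child + 1] > a[child]:
            child += 1                      # pick the larger child
        if a[i] >= a[child]:
            break
        a[i], a[child] = a[child], a[i]
        i = child

def build_max_heap(a):
    n = len(a)
    # Sift down every internal node, deepest first. Most nodes sit
    # near the leaves and move only a little, so the total work is O(N).
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    return a
```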

3. Heap sort is faster than Shell sort.

a) true

b) false

Answer: b

Heap sort is slower than Shell sort in practice because Shell sort can use Sedgewick’s increment sequence.

4. Consider the following heap after buildheap phase. What will be its corresponding array?

a) 26,53,41,97,58,59,31

b) 26,31,41,53,58,59,97

c) 26,41,53,97,31,58,59

d) 97,53,59,26,41,58,31

Answer: d

Constructing a max heap using the elements 97,53,59,26,41,58,31 will cause the heap to look like that.

5. In what position does the array for heap sort contain data?

a) 0

b) 1

c) -1

d) anywhere in the array

Answer: a

The array for heap sort contains data starting at position 0, whereas for a binary heap the array begins at position 1. This indexing difference slightly complicates the implementation.

6. In heap sort, after deleting the last minimum element, the array will contain elements in?

a) increasing sorting order

b) decreasing sorting order

c) tree inorder

d) tree preorder

Answer: b

By this logic, after repeatedly deleting the minimum element, the array will contain the elements in decreasing sorted order. We can change this by altering the ordering property (i.e., by using a max-heap).

7. What is the typical running time of a heap sort algorithm?

a) O(N)

b) O(N log N)

c) O(log N)

d) O(N^{2})

Answer: b

The total running time of a heap sort algorithm is mathematically found to be O(N log N).

8. How many arrays are required to perform deletion operations in a heap?

a) 1

b) 2

c) 3

d) 4

Answer: b

To perform the deletion operation in this heap sort scheme, we require 2 arrays, which occupy extra memory space and hence increase the running time.

9. What is the time taken to perform a delete min operation?

a) O(N)

b) O(N log N)

c) O(log N)

d) O(N^{2})

Answer: c

The time taken to perform a delete-min operation is mathematically found to be O(log N).
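As an illustration, Python’s heapq module maintains a binary min-heap in a plain list: heapify builds the heap in O(N), and heappop performs the O(log N) delete-min (remove the root, then sift down).

```python
import heapq

h = [53, 26, 97, 41, 59, 31, 58]
heapq.heapify(h)             # O(N) bottom-up heap construction
smallest = heapq.heappop(h)  # delete-min: root removal + O(log N) sift-down
```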

10. Heap sort is an extremely stable algorithm.

a) true

b) false

Answer: a

Heap sort’s running time is O(N log N) regardless of the input, so its performance is extremely stable. (Note that this refers to predictable running time; heap sort is not a stable sort in the equal-keys sense.)

11. What is the average number of comparisons used in a heap sort algorithm?

a) N log N-O(N)

b) O(N log N)-O(N)

c) O(N log N)-1

d) 2N log N + O(N)

Answer: d

The average number of comparisons in a heapsort algorithm is mathematically found to be 2N log N + O(N).

12. What is the time taken to copy elements to and from two arrays created for deletion?

a) O(N)

b) O(N log N)

c) O(log N)

d) O(N^{2})

Answer: a

The time taken to copy elements to and from the main array and extra array is found to be O(N).

13. What is the average number of comparisons used to heap sort a random permutation of N distinct items?

a) 2N log N-O(N)

b) 2N log N-O(N log N)

c) 2N log N-O(N log log N)

d) 2N log N-O(log N)

Answer: c

According to a theorem, the average number of comparisons used to heap sort a random permutation of N distinct items is found to be 2N log N-O(N log log N).

1. Heap sort is an implementation of ____________ using a descending priority queue.

a) insertion sort

b) selection sort

c) bubble sort

d) merge sort

Answer: b

Heap sort is an implementation of selection sort using the input array as a heap representing a descending priority queue. The heap sort algorithm is divided into two phases. In the first phase the max-heap is created and the second phase (selection phase) deletes the elements from the priority queue using the siftdown operation.
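The two phases can be sketched as follows (an in-place max-heap version; the helper name `sift_down` is illustrative):

```python
def heap_sort(a):
    n = len(a)

    def sift_down(i, end):
        # Restore the max-heap property for the sub-heap rooted at i,
        # considering only a[0:end].
        while 2 * i + 1 < end:
            child = 2 * i + 1
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1
            if a[i] >= a[child]:
                break
            a[i], a[child] = a[child], a[i]
            i = child

    # Phase 1: build the max-heap (the descending priority queue).
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Phase 2 (selection): repeatedly move the max to the end, then sift down.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```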

2. Which one of the following is false about heap sort?

a) Heap sort is an in-place algorithm

b) Heap sort has O(nlogn) average case time complexity

c) Heap sort is a stable sort

d) Heap sort is a comparison-based sorting algorithm

Answer: c

Heap sort is a comparison-based sorting algorithm and has time complexity O(nlogn) in the average case. Heap sort is an in-place algorithm as it needs O(1) of auxiliary space. Heap sort uses a heap and operations on the heap can change the relative order of items with the same key values. Therefore, the heap sort is not a stable sort.

3. The essential part of Heap sort is the construction of a max-heap. Consider the tree shown below, the node 24 violates the max-heap property. Once the heapify procedure is applied to it, which position will it be in?

a) 4

b) 5

c) 8

d) 9

Answer: d

In a max-heap, the element at each node is smaller than or equal to the element at its parent node. On applying the heapify procedure to the item at position 2, it ends up in position 9.

4. The descending heap property is ___________

a) A[Parent(i)] = A[i]

b) A[Parent(i)] <= A[i]

c) A[Parent(i)] >= A[i]

d) A[Parent(i)] > 2 * A[i]

Answer: c

The max-heap is also known as descending heap. Max-heap of size n is an almost complete binary tree of n nodes such that the element at each node is less than or equal to the element at its parent node.

The descending heap property is A[Parent(i)] >= A[i].

5. What is the worst-case time complexity of Heap sort?

a) O(nlogn)

b) O(n^{2}logn)

c) O(n^{2})

d) O(n^{3})

Answer: a

In Heap sort, the call to procedure build_Max-heap takes O(n) time and each of O(n) calls to the function max_Heapify takes O(logn) time. So the worst-case complexity of Heap sort is O(nlogn).

6. On average, Quick sort is more efficient than Heap sort.

a) True

b) False

Answer: a

Quick sort is more efficient than Heap sort on average: experiments indicate that Heap sort requires about twice as much time as Quick sort for randomly ordered input.

8. Which one of the following is a variation of Heap sort?

a) Comb sort

b) Smooth sort

c) Binary tree sort

d) Shell sort

Answer: b

Smooth sort is a variation of Heap sort. Smooth sort has O(nlogn) worst case time complexity like Heap sort. But Smooth sort takes O(n) time to sort the nearly sorted input array.

9. Introsort algorithm is combination of _____________

a) Quick sort and Heap sort

b) Quick sort and Shell sort

c) Heap sort and Merge sort

d) Heap sort and insertion sort

Answer: a

Introsort is a hybrid sorting algorithm that combines Quick sort and Heap sort to retain the advantages of both. It has the worst case speed of Heap sort and average case speed of Quick sort.

10. How many elements can be sorted in O(logn) time using Heap sort?

a) O(1)

b) O(n/2)

c) O(logn/log(logn))

d) O(logn)

Answer: c

The time complexity of Heap sort is O(klogk) for k input elements. For

k = logn/log(logn),

O(klogk) = O(logn/log(logn) × log(logn/log(logn)))

∴ O(klogk) = O(logn/log(logn) × (log(logn) − log(log(logn)))) = O(logn)

Hence the correct option is O(logn/log(logn)).

1. Which of the following sorting algorithms is used internally by C++?

a) quicksort

b) introsort

c) merge sort

d) heap sort

Answer: b

Introsort is the in-built sorting algorithm used by C++. It is an example of a hybrid sorting algorithm which means it uses more than one sorting algorithm as a routine.

2. Which of the following sorting algorithms is not a constituent of introsort?

a) selection sort

b) quicksort

c) insertion sort

d) heap sort

Answer: a

Introsort is a hybrid sorting algorithm which means it uses more than one sorting algorithm as a routine. It may use the quick sort or heap sort or insertion sort depending on the given situation.

3. Introsort begins sorting the given array by using which of the following sorting algorithms?

a) selection sort

b) quick sort

c) insertion sort

d) heap sort

Answer: b

Introsort begins sorting any given array by using quick sort. Then it may switch to heap sort or insertion sort or may stick to quick sort depending upon the size of the partition.

4. Which of the following sorting algorithms is NOT stable?

a) Introsort

b) Brick sort

c) Bubble sort

d) Merge sort

Answer: a

Out of the given options, introsort is the only algorithm that is not stable, as it may use quick sort (itself not stable) to perform the sorting. Thus the stability of introsort is not guaranteed.

5. Which of the following sorting algorithms is in-place?

a) intro sort

b) merge sort

c) counting sort

d) radix sort

Answer: a

Introsort may use quick sort, heap sort, or insertion sort internally in order to sort the given input. All three algorithms are in-place, thus making introsort an in-place sorting algorithm.

6. Introsort is a comparison-based sort.

a) true

b) false

Answer: a

Quicksort, heap sort, and insertion sort are all comparison-based sorts. Thus, overall, introsort is also a comparison-based sort.

7. What is the best case time complexity of introsort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

Introsort is mainly governed by heap sort and quick sort. As the best case of both heap sort and quick sort is O(n log n) so the best case of introsort also becomes O(n log n).

8. What is the worst-case time complexity of introsort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The worst-case time complexity of quicksort is avoided when we implement introsort: it switches to heap sort when there is a possibility of crossing the maximum depth limit, so the worst case remains O(n log n).

9. What is the average time complexity of introsort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The average time complexity of introsort remains to be O(n log n) as for most of the cases quick sort and heap sort are used which have O(n log n) time complexity for an average case.

10. What is the auxiliary space requirement of introsort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: d

Introsort is a hybrid of quick sort, heap sort, and insertion sort. So, like quick sort, it may use O(log n) auxiliary space on the recursion stack.

11. Why is heap sort preferred over merge sort for introsort implementation?

a) Because heap sort is faster

b) Because heap sort requires less space

c) Because heap sort is easy to implement

d) Because heap sort is easy to understand

Answer: b

Both heap sort and merge sort have the same time complexity, but heap sort is an in-place sorting algorithm whereas merge sort requires O(n) auxiliary space, which makes heap sort the preferable option.

12. Why is insertion sort preferred over other sorting algorithms (like selection sort, bubble sort, etc.) for introsort implementation?

a) Because insertion sort is faster and adaptive

b) Because insertion sort requires less space

c) Because insertion sort is easy to implement

d) Because insertion sort is easy to understand

Answer: a

When small arrays need to be sorted then insertion sort proves to be the best choice. Also, it is adaptive so it performs better than others when the given array is fully/partially sorted.

13. What is the cut-off for switching from quick sort to insertion sort in the implementation of introsort?

a) 4

b) 8

c) 16

d) 32

Answer: c

When small arrays need to be sorted then insertion sort proves to be the best choice. So when the size of the partition is less than 16 introsort switches to insertion sort. This particular value has been deduced experimentally.

14. What is the cut-off for switching from quick sort to heap sort in the implementation of introsort?

a) 16

b) n^{2}

c) n log(n)

d) 2 log(n)

Answer: d

Quicksort has a worst-case time complexity of O(n^{2}), which is not preferable. To avoid this worst case, introsort switches to heap sort when the recursion depth exceeds 2 log(n). This particular value has been deduced experimentally.

15. Which of the following sorting algorithm will be preferred when the size of the partition is between 16 and 2 log(n) while implementing introsort?

a) quick sort

b) insertion sort

c) heap sort

d) merge sort

Answer: a

Quicksort proves to be the best sorting algorithm for mid-sized arrays as it has low space and time complexity. Thus quick sort is preferred when the size of the partition is between 16 and 2 log(n).

1. Which of the following is Python’s standard sorting algorithm?

a) quick sort

b) introsort

c) merge sort

d) Tim sort

Answer: d

Tim sort has been Python's standard sorting algorithm since version 2.3. It is an example of a hybrid sorting algorithm, which means it uses more than one sorting algorithm as a routine.

2. Which of the following sorting algorithm is a constituent of Tim sort?

a) selection sort

b) quick sort

c) merge sort

d) heap sort

Answer: c

Tim sort is a hybrid sorting algorithm, which means it uses more than one sorting algorithm as a routine. It is derived from insertion sort and merge sort.

3. Tim sort begins sorting the given array by using which of the following sorting algorithms?

a) selection sort

b) quick sort

c) insertion sort

d) merge sort

Answer: c

Tim sort begins sorting any given array by using insertion sort on each run. For this purpose, the array is divided into smaller parts, each with a size equal to the value of the run. These small parts, called runs, are then merged to obtain the sorted array.
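This run-then-merge scheme can be sketched as follows. This is a simplified illustration, not CPython's actual implementation: a fixed run size of 32 is assumed, whereas real Tim sort computes an adaptive "minrun" and merges runs using a stack with galloping.

```python
RUN = 32  # simplified fixed run size (real Tim sort computes a "minrun")

def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] in place by insertion.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge(left, right):
    # Standard stable two-way merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def tim_sort(a):
    n = len(a)
    # Phase 1: insertion-sort each run of size RUN.
    for lo in range(0, n, RUN):
        insertion_sort(a, lo, min(lo + RUN - 1, n - 1))
    # Phase 2: repeatedly merge neighbouring runs until one remains.
    width = RUN
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            a[lo:hi] = merge(a[lo:mid], a[mid:hi])
        width *= 2
    return a
```

Both phases preserve the relative order of equal elements, which is why Tim sort is stable.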

4. Which of the following sorting algorithm is stable?

a) Tim sort

b) Introsort

c) Quick sort

d) Heap sort

Answer: a

Of the given options, Tim sort is the only stable algorithm. Both constituents of Tim sort (i.e. insertion sort and merge sort) are stable, so Tim sort is stable as well.

5. Which of the following sorting algorithm is not in place?

a) insertion sort

b) Tim sort

c) quick sort

d) intro sort

Answer: b

Tim sort is not an in-place sorting algorithm, as it requires auxiliary space: merging two sorted runs requires a third array whose size equals the sum of the sizes of the two runs.

6. Tim sort is a comparison-based sort.

a) true

b) false

Answer: a

Merge sort and insertion sort are comparison-based sorts. Thus overall Tim sort also becomes a comparison-based sort.

7. What is the best case time complexity of Tim sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

The best case time complexity of the Tim sort occurs when the input array is already sorted. In such a case only one run will be required. The best case time complexity of Tim sort is O(n).

8. What is the worst-case time complexity of Tim sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The worst-case time complexity of Tim sort is O(n log n). It is because the worst complexity of merge sort is O(n log n) and insertion sort is only applied for small arrays.

9. What is the average time complexity of the Tim sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The average time complexity of the Tim sort remains to be O(n log n). It is the same as the average case complexity of merge sort.

10. What is the auxiliary space requirement of Tim sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

Tim sort is a hybrid of merge sort and insertion sort. Merging two sorted runs requires a third array whose size equals the sum of the sizes of the two runs, so in the worst case the auxiliary space requirement is O(n).

11. Which of the following algorithms is implemented internally in Java when we use the function Arrays.sort()?

a) intro sort

b) quick sort

c) Tim sort

d) merge sort

Answer: c

Java uses Tim sort internally to implement Arrays.sort() for object arrays. This is mainly due to its speed in comparison to other comparison-based sorts.

12. Why is insertion sort preferred over other sorting algorithms (like selection sort, bubble sort, etc.) for Tim sort implementation?

a) Because insertion sort is faster and adaptive

b) Because insertion sort requires less space

c) Because insertion sort is easy to implement

d) Because insertion sort is easy to understand

Answer: a

When small arrays need to be sorted then insertion sort proves to be the best choice. Also, it is adaptive so it performs better than others when the given array is fully/partially sorted.

13. In which case Tim sort will work as an insertion sort?

a) when no. of elements is less than 64

b) when no. of elements is greater than 64

c) when no. of elements is less than the size of the run

d) when no. of elements is less than 32

Answer: c

Tim sort will work as an insertion sort when no. of elements is less than the size of the run.

Tim sort is a hybrid of insertion sort and merge sort. It reduces to insertion sort when the size of the array is less than the size of the run, as insertion sort is efficient at sorting small arrays.

14. What is the usual size of a run in Tim sort?

a) 32

b) less than 32

c) 32-64 depending on the size of the array

d) 64

Answer: c

Usually, the size of the run is chosen somewhere between 32 and 64, depending on the size of the array. The run size is preferably chosen so that the array length divided by it is close to a power of 2, in order to keep the merging of the sorted runs balanced.

1. Which of the following is an example of a parallel sorting technique?

a) BOGO sort

b) sleep sort

c) cube sort

d) merge sort

Answer: c

Cube sort is a parallel sorting algorithm. It builds self-balancing multi-dimensional arrays from the input keys.

2. What is the worst-case time complexity of cube sort?

a) O(n)

b) O(log n)

c) O(n log n)

d) O(n^{2})

Answer: c

The worst-case performance of cube sort is O(n log n). This is the fastest possible complexity with a comparison-based sort.

3. What is the auxiliary space requirement of cube sort?

a) O(n)

b) O(1)

c) O(log n)

d) O(n log n)

Answer: a

Cubesort is a parallel sorting algorithm that builds a self-balancing multi-dimensional array from the keys to be sorted. Cube sort requires an auxiliary space of O(n). This is the worst case of auxiliary space complexity.

4. What is the best case time complexity of cube sort?

a) O(n^{2})

b) O(n)

c) O(n log n)

d) O(1)

Answer: b

The best case time complexity of cube sort occurs when the input array is almost sorted. So in such a case only O(n) time is required for sorting.

5. What is the average case time complexity of cube sort?

a) O(n^{2})

b) O(n log n)

c) O(log n)

d) O(n)

Answer: b

The average case performance of cube sort is O(n log n). This is the fastest possible complexity with a comparison-based sort.

6. Which of the following algorithm is stable?

a) heap sort

b) cube sort

c) quick sort

d) Bogo sort

Answer: b

Out of the given algorithms, only cube sort is stable. This implies that the relative position of equal valued elements in the input and sorted array remains the same.

7. Cube sort is an in-place sorting algorithm.

a) true

b) false

Answer: b

Cube sort has an auxiliary space complexity of O(n). So it does not qualify to be an in-place sort.

8. Which of the following is a disadvantage of cube sort?

a) high memory overhead for small data

b) high memory overhead for any data

c) balancing is slow

d) Iteration is slow

Answer: a

In a general case the memory overhead of cube sort is low. But when the data set is small then in that case the memory overhead becomes high.

9. Cube sort is a comparison-based sort.

a) true

b) false

Answer: a

Cube sort is a comparison-based sorting algorithm. This is because it compares elements in order to sort them.

10. Which of the following sorting algorithm uses the method of insertion?

a) cube sort

b) bubble sort

c) quick sort

d) selection sort

Answer: a

Of the given algorithms, cube sort is the only one that uses the method of insertion. Insertion sort, which is not listed here, also sorts by insertion.

1. Consider the original array 17 8 12 4 26. How many comparisons are needed to construct the BST on the original array?

a) 5

b) 4

c) 7

d) 10

Answer: d

The original array is 17 8 12 4 26. Inserting these elements in order builds a BST with 17 as the root, 8 and 26 as its left and right children, and 4 and 12 as the left and right children of 8.

To build the BST, we travel down the tree until a leaf is reached. Therefore, for every element, we compare the element with the internal nodes until the leaves and then again compare the element with its parent to decide whether it is the right child or left child. So, for a given array we need to perform 10 comparisons to build the BST.

2. In binary tree sort, we first construct the BST, and then we perform _______ traversal to get the sorted order.

a) inorder

b) postorder

c) preorder

d) level order

Answer: a

Binary tree sort is a sorting algorithm in which a binary search tree is built from the elements to be sorted, and then an inorder traversal of the BST yields the elements in sorted order.
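A minimal sketch of binary tree sort, under the usual convention that duplicates go to the right subtree:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Walk down to a leaf position and attach the new node there.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root, out):
    # Left subtree, node, right subtree -> ascending order.
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)

def tree_sort(a):
    root = None
    for x in a:                 # build the BST element by element
        root = insert(root, x)
    out = []
    inorder(root, out)          # traverse it to read off sorted order
    return out
```

Running it on the array from question 1, `tree_sort([17, 8, 12, 4, 26])`, returns `[4, 8, 12, 17, 26]`.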

3. What is the worst-case time complexity of the binary tree sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

For binary tree sort, the worst case occurs when the constructed BST is unbalanced. The BST becomes unbalanced when the elements are already sorted.

So, in the worst case, O(n^{2}) time is required to build the BST and O(n) time to traverse the tree. Therefore, the worst-case time complexity is O(n^{2}) + O(n) = O(n^{2}).

5. What is the best case time complexity of the binary tree sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The best case occurs when the BST is balanced. When the tree is balanced, we require O(n log n) time to build the tree and O(n) time to traverse it. So, the best case time complexity of binary tree sort is O(n log n).

6. Binary tree sort is not an in-place sorting algorithm.

a) True

b) False

Answer: a

In binary tree sort it is required to reserve one tree node for each array element. Its implementation requires two pointer variables for each node. So, it requires extra memory. The worst-case space complexity of binary tree sort is Θ(n). Therefore, binary tree sort is not an in-place sorting algorithm.

7. Which of the following is false about binary tree sort?

a) Binary tree sort and quick sort have the same running time

b) Binary tree sort used BST as a work area

c) As the number of elements to sort gets larger, binary tree sort gets more and more efficient

d) Both quick sort and binary tree are in place sorting algorithms

Answer: d

Binary tree sort and quick sort have the same running time, i.e. O(n log n) in the average case and O(n^{2}) in the worst case. Binary tree sort is not an in-place sorting algorithm.

8. Which of the following sorting algorithms can be considered an improvement to the binary tree sort?

a) Heap sort

b) Quick sort

c) Selection sort

d) Insertion sort

Answer: a

Heap sort is basically an improvement to the binary tree sort. Heap sort builds a heap on the input element by adjusting the position of the elements within the original array, rather than creating nodes as in binary tree sort.

9. Consider the following statements related to the binary tree sort.

I. Elements can be added gradually as they become available

II. It needs extra memory space

a) Statement I is true but Statement II is false

b) Both Statement I and Statement II are false

c) Both Statement I and Statement II are true

d) Statement II is true but Statement I is false

Answer: c

Binary tree sort is a dynamic sorting technique, so elements can be added gradually as they become available. It requires extra memory space; its worst-case space complexity is Θ(n).

1. Which of the following is an example of an unstable sorting algorithm?

a) cycle sort

b) insertion sort

c) bubble sort

d) merge sort

Answer: a

Cycle sort is an unstable sorting algorithm. This implies that the relative position of equal valued elements in the input and sorted array does not remain the same when we use cycle sort.

2. What is the worst-case time complexity of cycle sort?

a) O(n)

b) O(log n)

c) O(n log n)

d) O(n^{2})

Answer: d

The worst-case performance of cycle sort is O(n^{2}). It is because it has to make n comparisons for each element of the array.

3. What is the auxiliary space requirement of cycle sort?

a) O(n)

b) O(1)

c) O(log n)

d) O(n log n)

Answer: b

Cycle sort requires an auxiliary space of O(1). So this makes it an in-place sorting algorithm.

4. What is the best case time complexity of cycle sort?

a) O(n^{2})

b) O(n)

c) O(n log n)

d) O(1)

Answer: a

The best case time complexity of cycle sort is O(n^{2}). This shows that cycle sort is not an adaptive sorting algorithm and thus makes it undesirable when the given array is already almost sorted.

5. What is the average case time complexity of cycle sort?

a) O(n^{2})

b) O(n log n)

c) O(log n)

d) O(n)

Answer: a

The average case performance of cycle sort is O(n^{2}). It is because it has to make n comparisons for each element of the array.

6. Cycle sort is an adaptive sorting algorithm.

a) true

b) false

Answer: b

The time complexity of cycle sort is O(n^{2}) in any case. So cycle sort algorithm is not adaptive, as it will take the same time to sort an already sorted array and any other randomly arranged array.

7. Which of the following sorting algorithm is in place?

a) Merge sort

b) Cycle sort

c) Counting sort

d) Radix sort

Answer: b

Cycle sort has an auxiliary space complexity of O(1), so it qualifies as an in-place sort. None of the other given options is an in-place sorting algorithm.

8. Which of the following is an advantage of cycle sort?

a) it can sort large arrays efficiently

b) it has a low time complexity

c) it requires minimal write operations

d) it is an adaptive sorting algorithm

Answer: c

Cycle sort is a slow sorting algorithm as compared to quick sort, merge sort, etc., and is also not adaptive. But it offers an advantage in that it performs minimal write operations which can be very useful in a situation where the write operation is expensive.

9. Cycle sort is a comparison-based sort.

a) true

b) false

Answer: a

Cycle sort is a comparison-based sorting algorithm. This is because it compares elements in order to sort them.

10. Which of the following sorting algorithm uses the method of insertion?

a) cycle sort

b) bubble sort

c) quick sort

d) selection sort

Answer: a

Cycle sort is the only algorithm from the given ones that uses the method of insertion. Other than this, insertion sort also uses the method of insertion for sorting.

11. How many write operations will be required to sort the array arr={2,4,3,5,1} using cycle sort?

a) 4

b) 5

c) 6

d) 3

Answer: a

Cycle sort requires a minimum number of write operations in order to sort a given array. So in this case, only 4 write operations will be required.
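A sketch of cycle sort that counts its writes illustrates this. This is a simplified Python version; the `writes` counter is added purely for demonstration.

```python
def cycle_sort(a):
    """Sort a in place; return the number of write operations performed."""
    writes = 0
    for start in range(len(a) - 1):
        item = a[start]
        # Find where item belongs by counting smaller elements after it.
        pos = start
        for i in range(start + 1, len(a)):
            if a[i] < item:
                pos += 1
        if pos == start:          # already in place: zero writes
            continue
        while item == a[pos]:     # skip over duplicates
            pos += 1
        a[pos], item = item, a[pos]
        writes += 1
        # Rotate the rest of the cycle, one write per element placed.
        while pos != start:
            pos = start
            for i in range(start + 1, len(a)):
                if a[i] < item:
                    pos += 1
            while item == a[pos]:
                pos += 1
            a[pos], item = item, a[pos]
            writes += 1
    return writes
```

For `arr = [2, 4, 3, 5, 1]` the cycle 2 → 4 → 5 → 1 → 2 accounts for 4 writes, while 3 is already in place and is never written, so `cycle_sort(arr)` returns 4.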

12. Which of the following algorithm is best suited for the case where swap operation is expensive?

a) bubble sort

b) cycle sort

c) cocktail sort

d) merge sort

Answer: b

Cycle sort is a slow sorting algorithm but it requires a minimum number of write operations in order to sort a given array. So it is useful when the write/swap operation is expensive.

1. Which of the following is a disadvantage of library sort when compared to insertion sort?

a) library sort has greater time complexity

b) library sort has greater space complexity

c) library sort makes more comparisons

d) it has no significant disadvantage

Answer: b

Library sort or gapped insertion sort is a sorting algorithm that uses an insertion sort, but with gaps in the array to accelerate subsequent insertions. Library sort has a disadvantage in that it takes up O(n) auxiliary space whereas insertion sort takes constant auxiliary space.

2. Library sort is an online sorting algorithm.

a) true

b) false

Answer: a

Library sort does not require the entire input data at the beginning itself in order to sort the array. It rather creates a partial solution in every step, so future elements are not required to be considered. Hence it is an online sorting algorithm like insertion sort.

3. Library sort is a modified version of which of the following sorting algorithm?

a) Bubble sort

b) selection sort

c) insertion sort

d) quick sort

Answer: c

Library sort or gapped insertion sort is a sorting algorithm that uses an insertion sort, but with gaps in the array to accelerate subsequent insertions. Library sort requires the use of Insertion sort and binary search in its code. So it is a modified version of insertion sort.

4. Which of the following sorting algorithm is stable?

a) Selection sort

b) Quick sort

c) Library sort

d) Heap sort

Answer: c

Library sort or gapped insertion sort is a sorting algorithm that uses an insertion sort, but with gaps in the array to accelerate subsequent insertions.

Of the given options, library sort is the only stable algorithm. The elements with identical values appear in the same order in the output array as they did in the input array.

5. Which of the following sorting algorithm requires the use of binary search in their implementation?

a) radix sort

b) library sort

c) odd-even sort

d) bead sort

Answer: b

Library sort makes use of a binary search algorithm. It is used to find the correct index in the array where the element should be inserted. Then after the insertion of the element re-balancing of the array takes place.
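The binary-search step can be illustrated with a plain binary insertion sort. This is a simplified sketch of just that step: real library sort leaves gaps in the array so that the shifting caused by `insert` below becomes cheap, which this version deliberately omits.

```python
import bisect

def binary_insertion_sort(a):
    """Insertion sort that finds each insertion point by binary search."""
    out = []
    for x in a:
        # O(log n) binary search for the insertion index;
        # bisect_right keeps equal elements in input order (stable).
        idx = bisect.bisect_right(out, x)
        # O(n) shift on insertion; library sort's gaps amortize this away.
        out.insert(idx, x)
    return out
```

Using `bisect_right` rather than `bisect_left` is what makes the routine stable, matching the stability claim in question 4.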

6. Library sort is a comparison-based sort.

a) true

b) false

Answer: a

In library sort, we need to compare elements in order to insert them at the correct index. So we can say that it uses comparisons in order to sort the array. Thus it qualifies as a comparison-based sort.

7. What is the average case time complexity of library sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

Library sort uses binary search in order to insert elements in the sorted segment of the array which reduces its time complexity. So the average time complexity of library sort is O(n log n).

8. What is the best case time complexity of library sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

The best case time complexity of library sort is O(n). It occurs in the case when the input is already/almost sorted.

9. What is the worst-case time complexity of library sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The worst-case time complexity of library sort is the same as that of insertion sort. The worst-case time complexity is O(n^{2}).

10. Which of the following is an alternate name of library sort?

a) gapped insertion sort

b) binary insertion sort

c) recursive insertion sort

d) binary gap sort

Answer: a

Library sort is also known as gapped insertion sort because it uses insertion sort with gaps in between successive elements. These gaps allow for fast insertion.

11. What is the advantage of library sort over insertion sort?

a) Library sort has a better average time complexity

b) Library sort has a better space complexity

c) Library sort has better best case time complexity

d) Library has better worst-case time complexity

Answer: a

Library sort has a better average time complexity as compared to insertion sort because it uses the binary search for finding the index where the element has to be inserted in the sorted array. This makes the process faster.

12. What is the auxiliary space complexity of library sort?

a) O(n)

b) O(1)

c) O(n log n)

d) O(n^{2})

Answer: a

The auxiliary space required by library sort is O(n). This space is taken up by the gaps present in between successive elements.

13. Which of the following is an adaptive sorting algorithm?

a) library sort

b) merge sort

c) heap sort

d) selection sort

Answer: a

Library sort is an adaptive algorithm. It is because the time complexity of the algorithm improves when the input array is almost sorted.

14. Which of the following sorting algorithm is not in place?

a) library sort

b) quick sort

c) heap sort

d) gnome sort

Answer: a

Library sort or gapped insertion sort is a sorting algorithm that uses an insertion sort, but with gaps in the array to accelerate subsequent insertions. Of the given options, library sort is the only algorithm that is not in place, because it requires O(n) auxiliary space.

15. Which of the following is not true about library sort?

a) it uses binary search and insertion sort in its implementation

b) gaps are created between successive elements in order to ensure faster insertion

c) array needs to be re-balanced after every insertion

d) it is an in-place sorting algorithm

Answer: d

Library sort is not an in-place sorting algorithm as it requires O(n) auxiliary space. The array needs to be re-balanced after every insertion so as to ensure that gaps are present between every successive element.

1. Which one of the following sorting algorithms requires recursion?

a) pigeonhole sort

b) strand sort

c) insertion sort

d) counting sort

Answer: b

- Strand sort is a recursive sorting algorithm that sorts items of a list into increasing order. It has O(n²) worst time complexity which occurs when the input list is reverse sorted.
- Strand sort requires the use of recursion for implementing its algorithm. On the other hand, the sorting algorithms given in the remaining options use iterative methods.

2. Strand sort is most efficient for data stored in?

a) linked list

b) arrays

c) trees

d) graphs

Answer: a

- Strand sort is a recursive sorting algorithm that sorts items of a list into increasing order. It has O(n²) worst time complexity which occurs when the input list is reverse sorted.
- Strand sort is most efficient when data is stored in a linked list as it involves many insertions and deletions which is performed quite efficiently with the help of a linked list. Using an array would increase time complexity significantly.
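A list-based sketch of strand sort follows. Python lists are used here for clarity; the repeated pops and merges below are exactly the operations a linked list would make cheaper.

```python
def strand_sort(a):
    a = list(a)          # working copy of the unsorted input
    result = []          # grows by merging in one sorted strand at a time
    while a:
        # Pull an increasing "strand" out of the remaining input.
        strand = [a.pop(0)]
        i = 0
        while i < len(a):
            if a[i] >= strand[-1]:
                strand.append(a.pop(i))
            else:
                i += 1
        # Merge the strand into the sorted result (standard two-way merge).
        merged, i, j = [], 0, 0
        while i < len(result) and j < len(strand):
            if result[i] <= strand[j]:
                merged.append(result[i])
                i += 1
            else:
                merged.append(strand[j])
                j += 1
        result = merged + result[i:] + strand[j:]
    return result
```

When the input is already sorted, the very first strand consumes the whole list, which is why the best case is linear.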

3. In which of the following case strands sort is the most efficient?

a) when the input array is already sorted

b) when the input array is reverse sorted

c) when the input array is large

d) when the input array has randomly spread elements

Answer: a

- Strand sort is a recursive sorting algorithm that sorts items of a list into increasing order. It has O(n²) worst time complexity which occurs when the input list is reverse sorted.
- The best case of strand sort occurs when the input array is already sorted. In this case, it has linear time complexity.

4. What is the auxiliary space complexity of strand sort?

a) O(n)

b) O(1)

c) O(log n)

d) O(n log n)

Answer: a

- The auxiliary space complexity of strand sort is O(n). It is because a sub-list of size n has to be maintained.

5. Which of the following sorting algorithm is not in place?

a) quick sort

b) strand sort

c) cycle sort

d) heap sort

Answer: b

- Strand sort has an auxiliary space complexity of O(n). So it is not an in-place sorting algorithm.

6. Strand sort is a comparison-based sorting algorithm.

a) true

b) false

Answer: a

- Strand sort is an example of a comparison-based sorting algorithm. This is because it compares the values of elements present in a list in order to sort them.

7. Strand sort is a stable sorting algorithm.

a) true

b) false

Answer: a

- Strand sort is an example of a stable sorting algorithm. It is because the elements with identical values appear in the same order in the output array as they were in the input array.

8. What is the average time complexity of strand sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(n^{2} log n)

Answer: c

- Strand sort is a recursive sorting algorithm that sorts items of a list into increasing order.
- The average case time complexity of strand sort is O(n^{2}). So it is not as efficient as quick sort or merge sort.

9. What is the best case time complexity of strand sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(n^{2} log n)

Answer: a

- Best case time complexity of strand sort is O(n). It occurs in the case where the input array is already sorted.

10. What is the worst-case time complexity of strand sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(n^{2} log n)

Answer: c

Worst-case time complexity of strand sort is O(n^{2}). It occurs in the case where the input array is in reverse sorted order.

11. Strand sort uses which of the following methods for sorting a list?

a) merging

b) selection

c) insertion

d) partitioning

Answer: b

- Strand sort uses the method of selection for sorting any given list. This is because it selects the strands of elements from the input list which are sorted.

12. Which of the following is an adaptive sorting algorithm?

a) heap sort

b) strand sort

c) selection sort

d) merge sort

Answer: b

- Strand sort is an adaptive sorting algorithm. This is because it gives a better performance when the input list is almost sorted.

1. Cocktail sort is also known as ____________

a) bidirectional sort

b) bubble sort

c) brick sort

d) ripple sort

Answer: d

Cocktail sort is a variation of bubble sort which traverses the list in both directions alternately. Cocktail sort is also known by the name ripple sort. It also goes by other names such as bidirectional bubble sort, cocktail shaker sort, shuttle sort, and shuffle sort.

2. Cocktail sort is a variation of _____________

a) Bubble sort

b) Selection sort

c) Insertion sort

d) Gnome sort

Answer: a

Cocktail sort is very similar to bubble sort. It works by traversing an array in both directions alternatively. It compares the adjacent elements in each iteration and swaps the ones which are out of order.
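The two alternating passes can be sketched as follows. This is a simplified Python version; the `iterations` counter (one forward-plus-backward sweep per iteration) is added only for illustration.

```python
def cocktail_sort(a):
    """Return (sorted copy, number of bidirectional iterations)."""
    a = list(a)
    swapped = True
    start, end = 0, len(a) - 1
    iterations = 0
    while swapped:
        iterations += 1
        swapped = False
        for i in range(start, end):            # forward pass
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
        end -= 1                               # largest element is placed
        swapped = False
        for i in range(end - 1, start - 1, -1):  # backward pass
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        start += 1                             # smallest element is placed
    return a, iterations
```

On the array from question 11, `cocktail_sort([2, 3, 4, 5, 1])` finishes in 2 iterations: the backward pass of the first iteration carries the 1 all the way to the front.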

3. Auxiliary space requirement of cocktail sort is _____________

a) O(n)

b) O(log n)

c) O(1)

d) O(n^{2})

Answer: c

Cocktail sort is a variation of bubble sort which traverses the list in both directions alternately. In cocktail sort, manipulation is done on the input array itself, so no extra space is required to perform sorting. Thus it requires constant auxiliary space.

4. Which of the following sorting algorithm is NOT stable?

a) Quick sort

b) Cocktail sort

c) Bubble sort

d) Merge sort

Answer: a

Of the given options, quick sort is the only algorithm that is not stable. Cocktail sort is a variation of bubble sort, which traverses the list in both directions alternately, and is stable.

5. Which of the following sorting algorithm is in place?

a) cocktail sort

b) merge sort

c) counting sort

d) radix sort

Answer: a

- Cocktail sort is the variation of Bubble Sort, which traverses the list in both directions alternatively.
- Cocktail sort is an in-place sorting technique, as it only requires constant auxiliary space for manipulating the input array. None of the other options is in place.

6. Cocktail sort is a comparison-based sort.

a) True

b) False

Answer: a

Cocktail sort is the variation of Bubble Sort, which traverses the list in both directions alternatively. Cocktail sort compares the value of different elements in the array for sorting. Thus, it is a comparison-based sort.

7. Cocktail sort uses which of the following methods for sorting the input?

a) selection

b) partitioning

c) merging

d) exchanging

Answer: d

- Cocktail sort is the variation of Bubble Sort, which traverses the list in both directions alternatively.
- Cocktail sort uses a method of exchanging as it swaps the elements which are out of order. This swapping is done in two phases i.e. forward phase and the backward phase.

8. What is the worst-case time complexity of cocktail sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

- Cocktail sort is the variation of Bubble Sort, which traverses the list in both directions alternatively.
- Worst-case complexity is observed when the input array is reverse sorted. This is the same as the worst-case complexity of bubble sort. The worst-case time complexity of cocktail sort is O(n^{2}).

9. What is the best case time complexity of cocktail sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

- Best case complexity is observed when the input array is already sorted. This is the same as the best case complexity of bubble sort.

10. What is the average case time complexity of cocktail sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

- Cocktail sort takes O(n^{2}) time on average as it keeps applying bubble-sort passes on the elements in two phases until they are sorted. This is the same as the average time complexity of bubble sort.

11. How many iterations are required to sort the array arr={2,3,4,5,1} using bubble sort and cocktail sort respectively?

a) 4,2

b) 5,3

c) 5,2

d) 4,3

Answer: a

Cocktail sort applies bubble sort in two phases until the array gets sorted. So here bubble sort will take 4 iterations to sort the given array whereas cocktail sort takes only 2 iterations. This shows cocktail sort has a comparatively better performance.

13. Bubble sort performs better as compared to cocktail sort.

a) True

b) False

Answer: b

- Both bubble sort and cocktail sort have the same time complexities, but cocktail sort has comparatively better performance in practice.

1. Comb sort is an improved version of _______

a) Selection sort

b) Bubble sort

c) Insertion sort

d) Merge sort

Answer: b

Comb sort compares two elements at a variable gap from each other in each iteration, unlike bubble sort where the gap remains 1. This reduces the average time complexity of comb sort.
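A minimal sketch of comb sort, using the conventional shrink factor of 1.3:

```python
def comb_sort(a):
    a = list(a)
    gap = len(a)          # initial gap equals the array length
    shrink = 1.3          # experimentally derived shrink factor
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / shrink))   # shrink the gap each iteration
        swapped = False
        # One bubble-style pass comparing elements a gap apart.
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a
```

Once the gap reaches 1, the loop degenerates into ordinary bubble sort passes, but by then small elements near the end ("turtles") have already moved most of the way toward the front.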

2. The gap between two elements being compared shrinks by a factor of _______ after every iteration.

a) 1.1

b) 1.2

c) 1.3

d) 1.4

Answer: c

It has been found experimentally that the gap between the two elements should shrink by a factor of 1.3 after each iteration for the most efficient sorting. Comb Sort improves on Bubble Sort by using a gap of the size of more than 1.

3. The initial gap between two elements being compared _______

a) is equal to the number of elements in the array

b) is equal to 1.3

c) depends on the number of iterations

d) depends on the compiler being used

Answer: a

The initial gap is taken to be equal to the number of elements in the array and shrinks by a factor of 1.3 in each iteration. The initial gap is independent of the number of iterations and of the compiler being used.

4. What is the worst-case time complexity of comb sort?

a) O(n^{2})

b) O(n log n)

c) O(n)

d) O(n^{2}/2^{p}) (p = number of increments)

Answer: a

Comb Sort improves on Bubble Sort by using a gap of the size of more than 1. The worst-case time complexity of comb sort is O(n^{2}).

Worst-case complexity is observed when the input array is reverse sorted. This is the same as the worst-case complexity of bubble sort.

5. The gap value after 3 iterations in an array with 6 elements will be _______

a) 4

b) 3

c) 2

d) 1

Answer: c

Comb Sort improves on Bubble Sort by using a gap of the size of more than 1.

Initially, the gap value is 6. It shrinks to 6/1.3 ≈ 4 after the first iteration, then 4/1.3 ≈ 3 after the second, and 3/1.3 ≈ 2 after the third (the result is truncated to an integer each time).
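The shrinking-gap mechanism can be sketched as below; this is a minimal Python version written for illustration. Each pass compares elements a gap apart, and the gap is divided by the shrink factor 1.3 until it reaches 1, after which the algorithm behaves like bubble sort.

```python
def comb_sort(arr):
    """Comb sort: bubble-style swaps with a gap that shrinks
    by a factor of 1.3 each pass until it reaches 1."""
    a = list(arr)
    gap = len(a)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))   # shrink factor 1.3, truncated
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a
```

For an array of 6 elements, the successive gap values produced by `int(gap / 1.3)` are 4, 3, 2, 1, matching the sequence worked out above.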

6. Auxiliary space used by comb sort is _______

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: a

Comb Sort improves on Bubble Sort by using a gap of the size of more than 1. Auxiliary space used by comb sort is O(1) as it does not use any extra space for manipulating the input.

7. The given array is arr={7,4,5,8,1,2}. The number of iterations required to sort the array using comb sort and bubble sort respectively will be _______

a) 7 and 8

b) 5 and 6

c) 5 and 5

d) 4 and 5

Answer: d

- Comb Sort improves on Bubble Sort by using a gap of the size of more than 1.
- Comb sort takes 1 iteration less as compared to bubble sort as it makes use of variable gap value whereas bubble sort uses a constant gap value of 1.

8. Comb sort is a stable sorting algorithm.

a) True

b) False

Answer: b

Comb Sort improves on Bubble Sort by using a gap of the size of more than 1.

Comb sort is not a stable sorting algorithm as it does not sort the repeated elements in the same order as they appear in the input.

9. What is the best case time complexity of comb sort and bubble sort respectively?

a) O(n^{2}) and O(n log n)

b) O(n log n) and O(n)

c) O(n) and O(n2)

d) O(n^{2}/2^{p}) (p = number of increments) and O(n^{2})

Answer: b

- Comb Sort improves on Bubble Sort by using a gap of the size of more than 1.
- The best case complexity for comb sort and bubble sort is O(n log n) and O(n) respectively. It occurs when the input array is already sorted.

10. What is the advantage of comb sort over merge sort?

a) Comb sort is an in-place sorting algorithm

b) Comb sort is a stable sorting algorithm

c) Comb sort is more efficient

d) It has no advantage

Answer: a

Comb sort does not require auxiliary space for manipulating input so it is an in-place sorting algorithm but merge sort does require O(n) of auxiliary space which makes comb sort better in terms of space complexity.

1. Gnome sort is also called __________

a) Smart sort

b) Stupid sort

c) Bogo sort

d) Special sort

Answer: b

Gnome Sort, also called stupid sort, is based on the concept of a garden gnome sorting his flower pots. It was originally named stupid sort but was later renamed gnome sort.

2. How many loops are required to implement the gnome sorting algorithm?

a) Single loop

b) 2 nested loops

c) 3 nested loops

d) It does not require any loop

Answer: a

- Gnome Sort also called Stupid sort is based on the concept of a Garden Gnome sorting his flower pots.
- In this sorting algorithm, the variable representing the index number is not incremented in case the adjacent pair of elements are out of place.
- In such a case its value is decremented instead. Thus it can implement sorting using a single loop.
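The single-loop behaviour described above can be sketched as follows; this is a minimal Python illustration. The index moves forward while adjacent pairs are in order, and steps back after each swap instead of using a nested loop.

```python
def gnome_sort(arr):
    """Single-loop gnome sort: step forward while in order,
    swap and step back when a pair is out of place."""
    a = list(arr)
    i = 0
    while i < len(a):
        if i == 0 or a[i - 1] <= a[i]:
            i += 1                          # pair in order: move forward
        else:
            a[i - 1], a[i] = a[i], a[i - 1]
            i -= 1                          # swapped: step back
    return a
```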

3. Which of the following pair of sorting algorithms are stable?

a) gnome sort and quick sort

b) merge sort and selection sort

c) gnome sort and merge sort

d) heap sort and merge sort

Answer: c

- Gnome Sort also called Stupid sort is based on the concept of a Garden Gnome sorting his flower pots.
- Gnome sort and merge sort are stable sorting algorithms as the elements with identical values appear in the same order in the output array as they were in the input array when any of these sorting algorithms are implemented.

4. Auxiliary space used by gnome sort is _________

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: a

- Gnome Sort also called Stupid sort is based on the concept of a Garden Gnome sorting his flower pots.
- Auxiliary space used by gnome sort is O(1) as it does not use any extra space for manipulating the input. Thus it also qualifies as an in-place sorting algorithm.

5. The given array is arr = {1,2,4,3,5}.The number of iterations required to sort the array using gnome sort will be _________

a) 5

b) 6

c) 7

d) 8

Answer: b

- 6 iterations will be required as one pair of elements i.e. {4,3} is out of place which causes the loop to take one step backward.

6. Gnome sort uses which of the following method to implement sorting?

a) Merging

b) Partitioning

c) Selection

d) Exchanging

Answer: d

- Gnome sort implements sorting by exchanging the adjacent elements which are out of order. Thus its method of sorting is called exchanging.

7. What is the best case time complexity of gnome sort?

a) O(n)

b) O(n^{2})

c) O(n log n)

d) O(log n)

Answer: a

- When the input array is already sorted then in that case there will be no need to decrease the value of the index variable at any stage.
- So only O(n) time is required in this case as we keep on increasing its value after each iteration.

9. What is the worst-case time complexity of gnome sort?

a) O(n)

b) O(n^{2})

c) O(n log n)

d) O(log n)

Answer: b

Worst case occurs when the input array is reverse sorted as it will have the maximum number of decrements while sorting. This causes the algorithm to have a time complexity of O(n^{2}) in this case.

10. What is the average case time complexity of gnome sort?

a) O(n)

b) O(n^{2})

c) O(n log n)

d) O(log n)

Answer: b

- In gnome sort, the loop takes one step back every time an adjacent pair of elements is out of place, which causes it to have a time complexity of O(n^{2}) on average.

1. Which of the following header file is a must to implement the sleep sort algorithm?

a) string.h

b) math.h

c) bios.h

d) windows.h

Answer: d

- The sleep sort algorithm works by starting a separate task for each item to be sorted, where each task sleeps for an interval corresponding to the item’s sort key, then emits the item.
- To implement the sleep sort algorithm we need functions like WaitForMultipleObjects() and _beginthread(). These are included in the header file windows.h.

2. Sleep sort does not work for ___________

a) negative numbers

b) large numbers

c) small numbers

d) positive numbers

Answer: a

- The sleep sort algorithm works by starting a separate task for each item to be sorted, where each task sleeps for an interval corresponding to the item’s sort key, then emits the item.
- The sleep sort algorithm does not work for negative numbers. It is because the thread cannot sleep for a negative amount of time.

3. In how many comparisons does the array arr={1,4,2,3,5} get sorted if we use sleep sort?

a) 5

b) 3

c) 1

d) 0

Answer: d

- The sleep sort algorithm works by starting a separate task for each item to be sorted, where each task sleeps for an interval corresponding to the item’s sort key, then emits the item.
- Sleep sort makes different elements of the array to sleep for an amount of time that is proportional to its magnitude. So it does not require performing any comparison to sort the array.

4. Sleep sort works by ___________

a) making elements sleep for a time that is proportional to their magnitude

b) making elements to sleep for a time that is inversely proportional to their magnitude

c) partitioning the input array

d) dividing the value of input elements

Answer: a

In sleep sort, each element is made to sleep for a time that is proportional to its magnitude. Then the elements are printed in the order in which they wake up.
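The document discusses a C implementation using windows.h; as a language-neutral sketch of the same idea, here is a threaded Python version (the function name and `scale` parameter are my own). Each value spawns a thread that sleeps proportionally to its magnitude, so smaller values wake up and emit themselves first. As noted elsewhere in this section, the output is only reliable for small non-negative numbers with sufficiently spaced delays.

```python
import threading
import time

def sleep_sort(values, scale=0.1):
    """Each value sleeps for a time proportional to its magnitude,
    then appends itself; smaller values wake up first."""
    result = []
    threads = [threading.Thread(target=lambda v=v: (time.sleep(v * scale),
                                                    result.append(v)))
               for v in values]
    for t in threads:
        t.start()
    for t in threads:     # wait until every element has woken up
        t.join()
    return result
```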

5. Sleep sort code cannot compile online because ___________

a) it has a very high time complexity

b) it has very high space complexity

c) it requires a multithreading process

d) online compilers are not efficient

Answer: c

Sleep sort requires a multithreading process for making the elements sleep. This process happens in the background at the core of the OS and so cannot be compiled on an online compiler.

6. Time complexity of sleep sort can be approximated to be ___________

a) O(n + max(input))

b) O(n^{2})

c) O(n log n + max(input))

d) O(n log n)

Answer: c

- The OS schedules the sleeping threads using a priority queue, and inserting n threads takes O(n log n) time. The output is complete only when all the elements wake up.
- That waking time is proportional to max(input). So the time complexity is approximately O(n log n + max(input)).

7. Sleep sort can be preferred over which of the following sorting algorithms for a large number of input elements?

a) Quick sort

b) Bubble sort

c) Selection sort

d) No sorting algorithm is preferred

Answer: d

Sleep sort is not preferred over any of the given sorting algorithms as sleep sort does not guarantee a correct output every time. So sleep sorting is not a reliable sorting technique.

8. Auxiliary space requirement of sleep sort is ___________

a) O(n)

b) O(1)

c) O(max(input))

d) O(log n)

Answer: b

- The auxiliary space requirement of sleep sort is O(1).
- All the major processes involved in sleep sort take place internally in the OS. So it does not require any auxiliary space to sort the elements.

9. Sleep sort gives a correct output when ___________

a) any input element is negative

b) input array is reverse sorted

c) any input element is positive

d) when there is a very small number to the left of a very large number

Answer: c

Sleep sort gives a sorted output when the array elements are positive. But when any other case than this occurs out of the above-given cases then we may not see a correct output. This makes sleep sort a very unreliable sorting technique.

10. Which of the following sorting algorithm is most closely related to the OS?

a) gnome sort

b) sleep sort

c) radix sort

d) BOGO sort

Answer: b

- Sleep sort works by starting a separate task for each item to be sorted, where each task sleeps for an interval corresponding to the item’s sort key, then emits the item.
- Sleep sort is most closely related to the operating system. It is because most of the major steps of this algorithm take place at the core of OS.

11. Sleep sort is an in-place sorting technique.

a) True

b) False

Answer: a

Sleep sort is an in-place sorting technique as most of its major steps take place in the background. So it does not require auxiliary space to sort the input.

1. How many comparisons will be made to sort the array arr={1,5,3,8,2} using pigeonhole sort?

a) 5

b) 7

c) 9

d) 0

Answer: d

Pigeonhole sorting is a sorting algorithm that is suitable for sorting lists of elements where the number of elements and the number of possible key values are approximately the same.

As pigeonhole sort is an example of a non-comparison sort so it is able to sort an array without making any comparison. So 0 comparisons are required.

2. Which of the following is a non-comparison sort?

a) heap sort

b) quick sort

c) merge sort

d) pigeonhole sort

Answer: d

Heap sort, merge sort, and quick sort are examples of comparison sorts, as they need to compare array elements in order to sort an array. Pigeonhole sort, by contrast, is a non-comparison-based sort.

3. In which of the following cases pigeonhole sort is most efficient?

a) when the range of input is less than the number of elements

b) when the range of input is more than the number of elements

c) when the range of input is comparable to the number of elements

d) when the given array is almost sorted

Answer: c

Pigeonhole sorting is a sorting algorithm that is suitable for sorting lists of elements where the number of elements and the number of possible key values are approximately the same.

Pigeonhole sort is a non-comparison-based sort. It is most efficient in the case where the number of elements is comparable to the input range.

4. What is the space complexity of pigeonhole sort (k=range of input)?

a) O(n*k)

b) O(n)

c) O(k)

d) O(n+k)

Answer: d

The pigeonhole sort algorithm requires two arrays. The first one is required to store the input elements so its size is n. The second one is the pigeonhole array and has a size equal to range k. Overall space complexity becomes O(n+k).

5. The auxiliary array used in pigeonhole sorting is called ______________

a) bucket

b) pigeon

c) hole

d) pigeonhole

Answer: d

Pigeonhole sorting is a sorting algorithm that is suitable for sorting lists of elements where the number of elements and the number of possible key values are approximately the same.

The auxiliary array used in pigeonhole sorting is called pigeonhole. It is used to store every element in its corresponding hole.

6. Pigeonhole sort is a stable sorting algorithm.

a) true

b) false

Answer: a

Pigeonhole sort is an example of a stable sorting algorithm. It is because the elements with identical values appear in the same order in the output array as they were in the input array.

7. Pigeonhole sort is an in-place sorting algorithm.

a) true

b) false

Answer: b

Pigeonhole sort requires space of O(n+k). So it does not qualify to be an in-place sorting algorithm.

8. What is the average time complexity of pigeonhole sort (k=range of input)?

a) O(n)

b) O(n+k)

c) O(n^{2})

d) O(n*k)

Answer: b

The time complexity of pigeonhole sort is O(n+k). It has two loops. One of the loops runs from 0 to range(k) and the other one runs from 0 to n so the time complexity becomes O(n+k).
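The two loops described above can be sketched as follows; this is a minimal Python illustration of pigeonhole sort. Each element goes into the hole matching its key, and the holes are then read back in order.

```python
def pigeonhole_sort(arr):
    """Place each element into the hole matching its value,
    then read the holes back in order. Uses O(n + k) space,
    where k is the range of the input."""
    lo, hi = min(arr), max(arr)
    holes = [[] for _ in range(hi - lo + 1)]   # pigeonhole array, size k
    for x in arr:                              # loop over the n elements
        holes[x - lo].append(x)
    out = []
    for hole in holes:                         # loop over the k holes
        out.extend(hole)
    return out
```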

9. The complexity of which of the following sorting algorithms remains to be the same in its best, average, and worst case?

a) quick sort

b) insertion sort

c) pigeonhole sort

d) bubble sort

Answer: c

The time complexity of the pigeonhole remains unvaried in all three cases. It is given by O(n+k). But it is efficient only when the number of elements is comparable to the input range.

10. Choose the correct statement of the pigeonhole Sort.

a) pigeonhole sort is a comparison-based sort

b) any comparison based sorting can be made stable

c) quick sort is not a comparison-based sort

d) any comparison based sort requires at least O(n^{2}) time

Answer: b

Any comparison-based sorting technique can be made stable by considering the position as a criterion while making comparisons. Pigeonhole sort is a stable sort.

11. What is the advantage of pigeonhole sort over merge sort?

a) pigeonhole sort has lesser time complexity when the range is comparable to the number of input elements

b) pigeonhole sort has lesser space complexity

c) counting sort is not a comparison-based sorting technique

d) pigeonhole sort is adaptive

Answer: a

Pigeonhole sort is efficient in the cases where the range is comparable to a number of input elements as it performs sorting in linear time. Whereas merge sort takes O(n log n) in any case.

13. Which of the following algorithm takes a linear time for sorting?

a) pigeonhole sort

b) heap sort

c) comb sort

d) cycle sort

Answer: a

Pigeonhole sort has a linear time complexity of O(n+k). Whereas all other given options have a non-linear time complexity. But it should be noted that pigeonhole sort may not be efficient in every case.

1. Which of the following is the distribution sort?

a) Heap sort

b) Smooth sort

c) Quick sort

d) LSD radix sort

Answer: d

In the Distribution sort, the inputted values are distributed to multiple intermediate structures which are then combined and placed on the output. And LSD radix sort distributes values into buckets based on the digits within values, so it is a distribution sort.

2. What is the worst-case time complexity of LSD radix sort?

a) O(nlogn)

b) O(wn)

c) O(n)

d) O(n + w)

Answer: b

The time complexity of LSD radix sort depends upon the word size and the number of items. It runs in O(wn) time in the worst case, where n is the number of inputted elements and w is the number of digits in the largest number.

3. LSD radix sort requires ________ passes to sort N elements.

a) (w/logR)

b) N(w/logR)

c) (w/log(RN))

d) (wN/log(N))

Answer: a

LSD radix sort sorts the N elements in (w/logR) passes, where w is the number of bits in the largest key and R is the radix, i.e. the number of possible digit values per pass (which also determines the extra space required for the buckets).

4. Which of the following is false about LSD radix sort?

a) LSD radix sort is an integer sorting algorithm

b) LSD radix sort is a comparison sorting algorithm

c) LSD radix sort is a distribution sort

d) LSD radix sort uses bucket sort

Answer: b

LSD radix sort uses bucket sort for grouping the keys based on the digits at that value. And as it grouped the keys based on the digits at that value, it is an integer sorting algorithm.
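The digit-by-digit bucketing described above can be sketched as follows; this is a minimal Python version for non-negative integers with radix 10. Each pass stably distributes the keys into buckets by the current digit, starting from the least significant one.

```python
def lsd_radix_sort(arr):
    """Sort non-negative integers digit by digit, least significant
    first, using stable bucketing by the current digit (radix 10)."""
    a = list(arr)
    digit = 1
    while any(x // digit > 0 for x in a):
        buckets = [[] for _ in range(10)]      # one bucket per digit 0-9
        for x in a:
            buckets[(x // digit) % 10].append(x)
        a = [x for b in buckets for x in b]    # stable concatenation
        digit *= 10
    return a
```

Because each pass keeps equal keys in their existing order, the overall sort is stable, which is exactly why LSD radix sort works at all.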

5. Which of the following sorting algorithm is stable?

a) Heap sort

b) Selection sort

c) In-place MSD radix sort

d) LSD radix sort

Answer: d

Radix sort is a non-comparative sorting algorithm. In LSD radix sort after sorting the multiple elements with the same key will be in the same order as they were in the input array. So LSD radix sort is a stable sort.

6. LSD radix sort is faster than comparison sorts.

a) True

b) False

Answer: b

LSD radix sort is faster than comparison sorts when the word size w is less than log n. But LSD radix sort runs slowly for elements with larger word sizes and a smaller radix.

7. Which of the following should be used to sort a huge database on a fixed-length key field?

a) Insertion sort

b) Merge sort

c) LSD radix sort

d) Quick sort

Answer: c

LSD radix requires only w passes to sort a fixed-length string, where w is the length of the strings. So, LSD radix sort is best suited to sort a huge database on a fixed-length key field.

8. Which of the following is a combination of LSD and MSD radix sorts?

a) Forward radix sort

b) 3-way radix quick sort

c) Trie-based radix sort

d) Flash sort

Answer: a

Radix sort is the linear sorting algorithm that is used for integers. Forward radix sort combines the advantages of LSD and MSD radix sort. Forward radix sort inspects a complete horizontal strip at a time just like LSD radix sort.

9. Which of the following is true for the LSD radix sort?

a) works best for variable length strings

b) accesses memory randomly

c) inner loop has less instructions

d) sorts the keys in left-to-right order

Answer: b

LSD radix sort sorts the keys in right-to-left order, working with Least Significant Digit first. The inner loop has a lot of instructions and LSD radix sort is used to sort fixed-length strings.

10. LSD radix sort is an in-place sorting algorithm.

a) False

b) True

Answer: a

LSD radix is not an in-place sorting algorithm. It needs extra memory to store the elements in a bucket and its worst-case space complexity is O(w + R).

1. How many comparisons will be made to sort the array arr = {1, 5, 3, 8, 2} using MSD radix sort?

a) 5

b) 7

c) 9

d) 0

Answer: d

As MSD radix sort is an example of the non-comparison sort so it is able to sort an array without making any comparison. So the answer should be 0.

The idea of MSD radix sorting is to divide all of the digits with an equal value into their own bucket, then do the same thing with all of the buckets until the array is sorted.

2. What is the full form of MSD in MSD radix sort?

a) most significant digit

b) many significant digits

c) more significant digit

d) must significant digit

Answer: a

- MSD stands for Most Significant Digit. It is named so because in this algorithm the processing begins from the most significant digit.
- The idea of MSD radix sorting is to divide all of the digits with an equal value into their own bucket, then do the same thing with all of the buckets until the array is sorted.

3. Which of the following combines qualities of MSD radix sort and LSD radix sort?

a) in-place MSD radix sort

b) stable MSD radix sort

c) 3-way radix quick sort

d) forward radix sort

Answer: d

Forward radix sort combines the qualities of MSD and LSD radix sort. The sorting is done by separating the strings into groups.

4. Which of the following is the most suitable definition of radix sort?

a) It is a non-comparison-based integer sort

b) It is a comparison-based integer sort

c) It is a non-comparison based non-integer sort

d) It is a comparison-based non-integer sort

Answer: a

- Radix sort is a sorting algorithm that sorts the elements by first grouping the individual digits of the same place value.
- Radix sort is a non-comparison-based integer sort. It sorts the given data by grouping keys that share the same significant position value.

5. Which of the following is an alternate name for MSD radix sort?

a) bottom-up radix sort

b) top-down radix sort

c) forward radix sort

d) backward radix sort

Answer: b

Top-down radix sort is an alternate name for MSD radix sort. It is because in this algorithm the processing starts from the most significant digit and ends with the least significant digit.

6. Which of the following is not true about MSD radix sort?

a) its processing starts from the most significant digit

b) it is not a stable sort

c) it is an in-place sorting algorithm

d) it is a non-comparison-based sort

Answer: c

- MSD radix sort takes non-constant time for sorting the input data. So it is not an in-place sorting algorithm.
- The idea of MSD radix sorting is to divide all of the digits with an equal value into their own bucket, then do the same thing with all of the buckets until the array is sorted.

7. MSD radix sort should be preferred over LSD radix sort when we have to maintain the original relative order.

a) True

b) False

Answer: b

MSD radix sort is not a stable sort whereas LSD radix sort is stable. So when we require to preserve the original order then in that case we should prefer LSD radix sort.

8. What is the average time complexity of MSD radix sort (w= bits required to store each key)?

a) O(n + w)

b) O(n.w)

c) O(n^{2})

d) O(n log n)

Answer: b

The time complexity of radix sort is O(n.w). It performs better than quick sort when we have log n bits for every digit.

9. MSD radix sort is an in-place sorting algorithm.

a) True

b) False

Answer: b

MSD radix sort takes non-constant time for sorting the input data. So it is not an in-place sorting algorithm.

10. Which of the following statement is not a stable sorting algorithm?

a) LSD radix sort

b) MSD radix sort

c) Counting sort

d) Pigeonhole sort

Answer: b

MSD radix sort is not a stable sort. It is because the elements with identical values do not appear in the same order in the output array as they were in the input array.

11. Which of the following is not true about the radix sort?

a) Radix sort performs better than quick sort when we have log n bits for every digit

b) Radix sort has better cache performance than quick sort

c) Radix sort has higher values of a constant factor in asymptotic notation

d) Radix sort takes more space than quick sort

Answer: b

Quick sort has a better cache performance than radix sort. Radix sort also takes more space as compared to quick sort.

12. What is the advantage of radix sort over quick sort?

a) radix sort performs better than quick sort when we have log n bits for every digit

b) radix sort has lesser space complexity

c) radix sort is not a comparison-based sorting technique

d) radix sort has better cache performance than quick sort

Answer: a

Radix sort performs better than quick sort when we have log n bits for every digit. But radix sort takes more space as compared to quick sort.

13. What will be the order of elements of the array arr = {23, 67, 143, 654, 43} after the first iteration of MSD sort is complete?

a) 23, 43, 67, 143, 654

b) 23, 67, 43, 143, 654

c) 23, 67, 143, 654, 43

d) 23, 143, 43, 654, 67

Answer: b

In the first iteration, the array is sorted according to the most significant digit, i.e. the hundreds place. So the order of elements will be 23, 67, 43, 143, 654.
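That first pass can be sketched as a single stable grouping by the hundreds digit; the helper below (name and `width` parameter are my own) is a Python illustration, not a full MSD radix sort, which would then recurse into each bucket.

```python
def msd_first_pass(arr, width=3):
    """Group numbers stably by their most significant digit position
    (here, the hundreds place for numbers of up to `width` digits)."""
    buckets = [[] for _ in range(10)]
    for x in arr:
        buckets[x // 10 ** (width - 1) % 10].append(x)
    return [x for b in buckets for x in b]
```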

1. How many comparisons will be made to sort the array arr={1,5,3,8,2} using counting sort?

a) 5

b) 7

c) 9

d) 0

Answer: d

0 comparisons will be made to sort the array arr={1,5,3,8,2} using counting sort.

As counting sort is an example of the non-comparison sort so it is able to sort an array without making any comparison.

2. Which of the following is not an example of a non-comparison sort?

a) bubble sort

b) counting sort

c) radix sort

d) bucket sort

Answer: a

Bubble sort is not an example of non-comparison sort as it needs to compare array elements in order to sort an array.

3. Which of the following sorting techniques is most efficient if the range of input data is not significantly greater than the number of elements to be sorted?

a) selection sort

b) bubble sort

c) counting sort

d) insertion sort

Answer: c

The time complexity of the counting sort is given as O(n+k) where n is the number of input elements and k is the range of input. So if the range of input is not significantly larger than the number of elements in the array then it proves to be very efficient.

4. What is the auxiliary space requirement of counting sort?

a) O(1)

b) O(n)

c) O(log n)

d) O(n+k) k=range of input

Answer: d

Counting sort uses two extra arrays to get the input array sorted. The first array is required to store the count of all the elements which fall in the range of input data elements, so its size is k. The second array is required to store the input elements in a sorted manner, so its size is n. Thus overall auxiliary space required becomes O(n+k).
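The two auxiliary structures described above can be sketched as follows; this is a minimal Python version. Offsetting by the minimum element at index 0 also handles negative inputs, as discussed in the next question.

```python
def counting_sort(arr):
    """Count occurrences over the input range, then emit values in
    order. Uses a counts array of size k and an output of size n."""
    lo, hi = min(arr), max(arr)
    counts = [0] * (hi - lo + 1)     # size k = range of input
    for x in arr:                    # O(n): tally each value
        counts[x - lo] += 1
    out = []                         # size n output
    for v, c in enumerate(counts):   # O(k): emit values in order
        out.extend([v + lo] * c)
    return out
```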

5. It is possible to implement counting sort when any of the input elements have a negative value.

a) True

b) False

Answer: a

It is possible to extend the counting sort algorithm for negative numbers as well. In such a case we store the minimum element at the 0th index.

6. Which of the following sorting techniques is stable?

a) quick sort

b) counting sort

c) heap sort

d) selection sort

Answer: b

Counting sort is an example of a stable sorting algorithm as the elements with identical values appear in the same order in the output array as they were in the input array.

7. Which of the following uses the largest amount of auxiliary space for sorting?

a) Bubble sort

b) Counting sort

c) Quick sort

d) Heap sort

Answer: b

Counting sort requires auxiliary space of O(n+k) whereas quick sort, bubble sort, and heap sort are in place sorting techniques. Thus counting sort requires most auxiliary space.

8. What is the average time complexity of counting sort?

a) O(n)

b) O(n+k) k=range of input

c) O(n^{2})

d) O(n log n)

Answer: b

The time complexity of counting sort is O(n+k) as counting the occurrence of each element in the input range takes k time and then finding the correct index value of each element in the sorted array takes n time.

9. The complexity of which of the following sorting algorithms remains to be the same in its best, average, and worst case?

a) quick sort

b) insertion sort

c) counting sort

d) gnome sort

Answer: c

- Counting sort is a sorting algorithm that sorts the elements of an array by counting the number of occurrences of each unique element in the array.
- The time complexity of counting sort remains unvaried in all three cases. It is given by O(n+k).

10. Which of the following statement is true about comparison-based sorting?

a) counting sort is a comparison-based sort

b) any comparison based sorting can be made stable

c) bubble sort is not a comparison-based sort

d) any comparison based sort requires at least O(n^{2}) time

Answer: b

- Comparison-based sorting techniques include bubble sort, selection sort, insertion sort, merge sort, quick sort, and heap sort.
- Any comparison-based sorting technique can be made stable by considering a position as a criterion while making comparisons.

11. Counting sort is often used as a sub-routine for radix sort.

a) true

b) false

Answer: a

Counting sort is used as a sub-routine for radix sort as it is a stable and non-comparison-based sorting algorithm.

12. What is the advantage of counting sort over quick sort?

a) counting sort has lesser time complexity when the range is comparable to the number of input elements

b) counting sort has lesser space complexity

c) counting sort is not a comparison-based sorting technique

d) it has no advantage

Answer: a

- Counting sort is a sorting algorithm that sorts the elements of an array by counting the number of occurrences of each unique element in the array.
- Counting sort is very efficient in cases where the range is comparable to a number of input elements as it performs sorting in linear time.

14. What is the disadvantage of counting sort?

a) counting sort has large time complexity

b) counting sort has large space complexity

c) counting sort is not a comparison-based sorting technique

d) counting sort cannot be used for arrays with non-integer elements

Answer: d

- Counting sort is a sorting algorithm that sorts the elements of an array by counting the number of occurrences of each unique element in the array.
- Counting sort can only be used for arrays with integer elements because otherwise an array of frequencies cannot be constructed.

15. Which of the following algorithm takes a non-linear time for sorting?

a) counting sort

b) quick sort

c) bucket sort

d) radix sort

Answer: b

Quick sort requires O(n log n) time for sorting, so it takes non-linear time, whereas counting sort, bucket sort, and radix sort run in linear time.

1. How many comparisons will be made to sort the array arr={1, 5, 3, 8, 2} using bucket sort?

a) 5

b) 7

c) 9

d) 0

Answer: d

As bucket sort is an example of a non-comparison sort so it is able to sort an array without making any comparison. So the answer should be 0.

2. What is the alternate name of bucket sort?

a) group sort

b) radix sort

c) bin sort

d) uniform sort

Answer: c

Bucket sort is mainly useful when input is uniformly distributed over a range. Bucket sort is also known as bin sort. It is an example of a non-comparison sort.

3. Which of the following non-comparison sort can also be considered a comparison-based sort?

a) counting sort

b) MSD radix sort

c) bucket sort

d) pigeonhole sort

Answer: c

- Bucket sort is mainly useful when input is uniformly distributed over a range.
- Bucket sort can also be considered as a comparison-based sort. It is because it can also be implemented with comparisons.

4. Which of the following is not true about bucket sort?

a) It is a non-comparison-based integer sort

b) It is a distribution sort

c) It can also be considered as a comparison-based sort

d) It is in place sorting algorithm

Answer: d

A bucket sort is a non-comparison-based integer sort. It sorts the given data by distributing the array of elements into a number of buckets. It is not an in-place sorting technique.

5. Which of the following doesn’t affect the time complexity of bucket sort?

a) algorithm implemented for sorting individual buckets

b) number of buckets used

c) distribution of input

d) input values

Answer: d

The time complexity of bucket sort is affected by several factors: the algorithm used to sort the individual buckets, the number of buckets, and the distribution of the input. It does not depend on the input values themselves, which may be positive, negative, or zero.

6. Bucket sort is most efficient in the case when __________

a) the input is non uniformly distributed

b) the input is uniformly distributed

c) the input is randomly distributed

d) the input range is large

Answer: b

Bucket sort works by distributing the array of elements into a number of buckets. So bucket sort is most efficient in the case when the input is uniformly distributed.
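The distribution step described above can be sketched as follows; this is a minimal Python illustration (the bucket count `k` is an arbitrary choice). Values are spread into buckets by magnitude, each bucket is sorted, and the buckets are concatenated.

```python
def bucket_sort(arr, k=5):
    """Distribute values into k buckets by magnitude, sort each
    bucket, then concatenate. Efficient when the input is
    uniformly distributed over its range."""
    lo, hi = min(arr), max(arr)
    width = (hi - lo) / k or 1          # bucket width; avoid zero width
    buckets = [[] for _ in range(k)]
    for x in arr:
        idx = min(int((x - lo) / width), k - 1)
        buckets[idx].append(x)
    return [x for b in buckets for x in sorted(b)]
```

Note that the stability and worst-case behaviour depend on the algorithm chosen to sort the individual buckets (here Python's built-in sort, used purely for illustration).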

7. Bucket sort is a generalization of which of the following sort?

a) LSD radix sort

b) Pigeonhole sort

c) Counting sort

d) MSD radix sort

Answer: b

- Pigeonhole sort is a particular case of bucket sort. Bucket sort is also closely related to MSD radix sort.

8. What is the worst-case time complexity of bucket sort (k = number of buckets)?

a) O(n + k)

b) O(n.k)

c) O(n^{2})

d) O(n log n)

Answer: c

The time complexity of bucket sort is O(n^{2}) in the worst case. So if bucket sort does not get the inputs in the desired manner then it becomes very inefficient.

9. What is the best time complexity of bucket sort (k= number of buckets)?

a) O(n + k)

b) O(n.k)

c) O(n^{2})

d) O(n log n)

Answer: a

The time complexity of bucket sort is O(n+k). It performs better than any comparison-based sort if there is a uniform input distribution.

10. Which of the following is not necessarily a stable sorting algorithm?

a) bucket sort

b) counting sort

c) merge sort

d) pigeonhole sort

Answer: a

Bucket sort is not necessarily a stable sorting algorithm. This is because its stability depends on the stability of the algorithm that we have used to sort the individual buckets.

11. Bucket sort is not an in-place sorting algorithm.

a) True

b) False

Answer: a

Bucket sort is not an in-place sorting algorithm. It requires an auxiliary space of O(n.k) in the worst case.

12. What is the worst space complexity of bucket sort (k = number of buckets)?

a) O(n + k)

b) O(n.k)

c) O(n^{2})

d) O(n log n)

Answer: b

The worst-case space complexity of bucket sort is O(n.k). So it is not an in-place sorting algorithm.

1. Bead sort is also known as _________

a) gravity sort

b) strand sort

c) abacus sort

d) counting sort

Answer: a

Bead sort is also known as gravity sort. It is because this algorithm was designed by keeping the natural phenomenon of falling objects in mind.

2. Which of the following sorting algorithm was inspired by the natural phenomenon of falling objects?

a) BOGO sort

b) heap sort

c) bead sort

d) strand sort

Answer: c

The algorithm of bead sort was inspired by the natural phenomenon of falling objects. Thus, it is also known by the name of gravity sort.

3. Which of the following sorting algorithm is only applicable to positive integers?

a) quick sort

b) heap sort

c) bead sort

d) strand sort

Answer: c

The bead sort algorithm is only applicable to positive integers. This is because it works by placing a number of beads equal to the key value in each row.
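
The row-of-beads idea can be sketched as follows. This is a minimal illustration for positive integers only; the function name `bead_sort` and the variable names are my own, not from the source.

```python
def bead_sort(arr):
    """Illustrative gravity sort: poles[j] counts the beads resting on
    pole j, i.e. how many input values exceed j after the beads fall."""
    if not arr:
        return []
    poles = [0] * max(arr)
    for value in arr:
        for j in range(value):   # one bead per unit of the key value
            poles[j] += 1
    # Row i from the bottom ends up holding one bead for every pole with
    # more than i beads. Reading rows bottom-up yields descending order,
    # so reverse for ascending output.
    rows = [sum(1 for count in poles if count > i) for i in range(len(arr))]
    return rows[::-1]
```

Note that no element comparisons occur anywhere: the beads fall by counting alone.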

4. What is the auxiliary space complexity of bead sort?

a) O(n)

b) O(1)

c) O(n^{2})

d) O(n log n)

Answer: c

The auxiliary space complexity of bead sort is O(n^{2}). It is because an array of size maximum_element*n (maximum_element is the maximum element that is present in the array and n is the size of the array) has to be maintained.

5. Which of the following sorting algorithm is not in place?

a) quick sort

b) bead sort

c) cycle sort

d) heap sort

Answer: b

Bead sort has an auxiliary space complexity of O(n^{2}). So it is not an in-place sorting algorithm.

6. Bead sort is a comparison-based sorting algorithm.

a) true

b) false

Answer: b

Bead sort is an example of a non-comparison-based sorting algorithm. This is because it does not compare the value of elements present in a list in order to sort them.

7. How many comparisons will be required to sort the array arr={5, 4, 7, 1, 9} using bead sort?

a) 5

b) 4

c) 6

d) 0

Answer: d

Zero comparisons will be required to sort the array arr={5, 4, 7, 1, 9} using bead sort.

Bead sort is an example of a non-comparison-based sorting algorithm. So no comparison is required to be performed in order to sort the array.

8. What is the average time complexity of bead sort (S = sum of input elements)?

a) O(n)

b) O(S)

c) O(n^{2})

d) O(n log n)

Answer: b

The average case time complexity of bead sort is O(S). It is because we drop each bead as a separate operation.

9. What is the best case time complexity of bead sort (S = sum of input elements)?

a) O(n)

b) O(S)

c) O(n^{2})

d) O(n log n)

Answer: a

The best case time complexity of bead sort is O(n). It occurs when each row of beads is dropped as a single distinct operation; since the number of rows equals n, the complexity is O(n).

10. What is the worst-case time complexity of bead sort (S= sum of input elements)?

a) O(n)

b) O(S)

c) O(n^{2})

d) O(n log n)

Answer: b

The worst-case time complexity of bead sort is O(S). It is because we drop each bead as a separate operation.

12. Bead sort is only applicable to positive integers.

a) true

b) false

Answer: a

The bead sort algorithm is only applicable to positive integers. This is because it works by placing the number of beads equal to the key value, in each row.

1. What is the time complexity for a given pancake sort given it undergoes “n” flip operations?

a) O(n)

b) O(n^{2})

c) O(n^{3})

d) O(2^{n})

Answer: b

Most sorting algorithms try to sort using the fewest comparisons, but in pancake sort, we try to sort using as few flips (prefix reversals) as possible. Since each flip can take O(n) time and O(n) flips are performed in total, the overall time complexity is O(n^{2}).

2. Which operation is most essential to the process of pancake sort?

a) Flip the given data

b) Find the largest of given data

c) Finding the least of given data

d) Inserting something into the given data

Answer: a

When we use pancake sort, we scan the array to find the largest element and then flip the array at that point to bring that value to the top of the pancake stack; a second flip then moves it to the bottom. The size of the portion that we are dealing with is then reduced and the process continues. The flip operation is the most important function in the pancake sort technique.
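
This process can be sketched as follows; the names `pancake_sort` and `flip` are illustrative, not from the source.

```python
def pancake_sort(arr):
    """Illustrative pancake sort: for each suffix position, flip the
    largest remaining value to the front, then flip it into place."""
    arr = list(arr)

    def flip(k):
        arr[:k + 1] = arr[k::-1]  # reverse the prefix arr[0..k]

    for size in range(len(arr), 1, -1):
        m = arr.index(max(arr[:size]))  # largest of the unsorted prefix
        if m != size - 1:
            flip(m)         # bring the largest element to the top
            flip(size - 1)  # flip it down to its final position
    return arr
```

Each element placed costs at most two flips, which is the basis of the 2n-3 upper bound discussed in question 4.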

4. How many flips does the simplest of the pancake sorting techniques require?

a) 3n-3 flips

b) 2n-4 flips

c) 2n-3 flips

d) 3n-2 flips

Answer: c

The minimum number of flips required to sort any stack of n pancakes has been shown to lie between about 1.087n and 1.636n, so no simple closed-form expression gives the true optimum.

The answer 2n-3 is the upper bound for the simplest technique: each successively largest element is brought to the top with one flip and then flipped down into its correct place with a second flip, which gives at most 2n-3 flips in total.

5. Pancake Sorting appears in which of the following?

a) Frequency Scaling

b) Storage Virtualization

c) Parallel Processing

d) Neural Networking

Answer: c

Pancake sorting finds application in educational use, as well as in parallel processing, where pancake networks provide optimal routing algorithms between processors.

6. In addition to the pancake sorting problem, there is the case of the burnt pancake problem in which we are dealing with pancakes (discs) that are burnt on one side only. In this case, it is taken that the burnt side must always end up _______

a) Faced down

b) Faced up

c) It doesn’t matter

d) Both sides are burnt

Answer: a

A variation of this problem is with burnt pancakes. Here each pancake has a burnt side, and all pancakes must, in addition, end up with the burnt side on the bottom. It is a more difficult version of the regular pancake problem.

7. In a computational complexity theory, a problem with decision-making is said to be NP-complete when it is both NP and NP-hard. What does NP mean?

a) Non-Polynomial time

b) Non-deterministic Probabilistic

c) Non-deterministic Polynomial time

d) Non Probabilistic time

Answer: c

Although any given solution to an NP-complete problem can be validated quickly in polynomial time, there is no known way to efficiently locate a solution in the first place.

The unique characteristic of NP-complete problems is that no fast solution to them is known and hence NP-complete problems are said to be non-deterministic polynomial time.

8. In a specific implementation of a pancake sort algorithm, every move, in which we find the largest element of the current portion of the array and flip, can be modeled through __________

a) Combinations

b) Exponential functions

c) Logarithmic functions

d) Permutations

Answer: d

When we flip the array or stack, we must take care to preserve the order of the list so that the sorting does not become invalid. Hence we use permutations, since with permutations we are ensuring that order matters.

9. The Pancake Problems (1975, 1979, 1973) did NOT involve which of the following people?

a) Bill Gates

b) Jacob Goodman

c) Christos Papadimitriou

d) John Goodman

Answer: d

John Goodman was not involved in the Pancake Problems. Jacob Goodman (1975), Bill Gates, and Christos Papadimitriou (1979) were.

1. Odd-even sort is also known as ____________

a) stupid sort

b) smart sort

c) brick sort

d) BOGO sort

Answer: c

Odd-even sort is also known by the name of a brick sort. This algorithm was first proposed by Habermann in 1972 and was initially invented for parallel computation of local interconnection.

2. Odd-even sort is a variation of ___________

a) Bubble sort

b) Selection sort

c) Insertion sort

d) Gnome sort

Answer: a

Odd-even sort is very similar to bubble sort. It works by applying bubble sort in two phases, i.e. the odd phase and the even phase. In the odd phase, bubble sort is applied to odd indexed elements, and in the even phase, bubble sort is applied to even indexed elements.
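
The two alternating phases can be sketched like this. This is illustrative code; the function name `odd_even_sort` is my own.

```python
def odd_even_sort(arr):
    """Illustrative brick sort: alternate compare-and-swap passes over
    odd-indexed pairs and even-indexed pairs until no swap occurs."""
    arr = list(arr)
    n = len(arr)
    done = False
    while not done:
        done = True
        for start in (1, 0):  # odd phase first, then even phase
            for i in range(start, n - 1, 2):
                if arr[i] > arr[i + 1]:
                    arr[i], arr[i + 1] = arr[i + 1], arr[i]
                    done = False
    return arr
```

Because neighboring pairs never overlap within a phase, all comparisons of one phase can run in parallel, which is why the algorithm was designed for parallel computation.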

3. Auxiliary space requirement of odd-even sort is ___________

a) O(n)

b) O(log n)

c) O(1)

d) O(n^{2})

Answer: c

In odd-even sort manipulation is done on the input array itself. So no extra space is required to perform sorting. Thus it requires constant auxiliary space.

4. Which of the following sorting algorithm is NOT stable?

a) Quick sort

b) Brick sort

c) Bubble sort

d) Merge sort

Answer: a

Out of the given options, quick sort is the only algorithm that is not stable. Brick sort, like bubble sort, is a stable sorting algorithm.

5. Which of the following sorting algorithm is in place?

a) brick sort

b) merge sort

c) counting sort

d) radix sort

Answer: a

Brick sort is an in-place sorting technique as it only requires constant auxiliary space for manipulating the input array.

6. Odd-even sort is a comparison-based sort.

a) true

b) false

Answer: a

Odd-even sort compares the value of different elements in the array for sorting. Thus, it is a comparison-based sort.

7. Brick sort uses which of the following methods for sorting the input?

a) selection

b) partitioning

c) merging

d) exchanging

Answer: d

Brick sort uses the method of exchanging as it swaps the elements which are out of order. This swapping is done in two phases i.e. odd phase and the even phase.

8. What is the worst-case time complexity of the odd-even sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The worst-case time complexity of the odd-even sort is O(n^{2}).

Worst-case complexity is observed when the input array is reverse sorted. This is the same as the worst-case complexity of bubble sort.

9. What is the best case time complexity of the odd-even sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

The best case time complexity of the odd-even sort is O(n).

Best case complexity is observed when the input array is already sorted. This is the same as the best case complexity of bubble sort.

10. What is the average case time complexity of the odd-even sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

Odd-even sort takes O(n^{2}) time on average as it keeps on applying bubble sort on the elements in two phases until they are sorted.

11. How many odd and even phases, respectively, are required to sort the array arr={3, 2, 3, 8, 5, 6, 2, 1} using odd-even sort?

a) 3,3

b) 4,4

c) 3,4

d) 4,3

Answer: b

Odd-even sort applies bubble sort in two phases until the array gets sorted. So here 8 phases will be required in totality to sort the array. Out of these 4 phases will be an odd phase and the other 4 will be an even phase.

1. Which one of the following sorting algorithms requires recursion?

a) odd-even sort

b) stooge sort

c) selection sort

d) counting sort

Answer: b

Stooge sort requires the use of recursion for implementing its algorithm. On the other hand, the sorting algorithms given in the remaining options use iterative methods.

2. What is the recurrence relation for stooge sort?

a) T(n) = 2T(2/3n) + O(n)

b) T(n) = 2T(2/3n) + O(1)

c) T(n) = 3T(2/3n) + O(n)

d) T(n) = 3T(2/3n) + O(1)

Answer: d

In stooge sort, recursion is applied to a 2/3 portion of the array 3 times. The rest of the code has constant time complexity. So the overall recurrence relation becomes T(n) = 3T(2/3n) + O(1).

3. In which of the following case stooge sort is most efficient (in terms of time complexity)?

a) when the input array is already sorted

b) when the input array is reverse sorted

c) when the input array is large

d) it has the same time complexity in any case

Answer: d

Stooge sort is a recursive sorting algorithm. It is not very efficient, but it is an interesting sorting algorithm.

Stooge sort has the same time complexity under any case. It is given by the recurrence relation T(n) = 3T(2/3n) + O(1).

4. What is the space complexity of stooge sort?

a) O(n)

b) O(1)

c) O(log n)

d) O(n log n)

Answer: a

Stooge Sort is a recursive sorting algorithm. The space complexity of the stooge sort is O(n). It is used to store the input array.

5. What is the first step in the algorithm of stooge sort(after base case)?

a) apply stooge sort on the first 2/3 elements of the array

b) apply stooge sort on the last 2/3 elements of the array

c) apply stooge sort on the first 1/3 elements of the array

d) compare the first and last elements of the array

Answer: d

The first step in the algorithm of stooge sort is to compare the first and last element of the array and switch them if found out of order. In the second step, stooge sort is applied to the first 2/3 elements of the array.
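
These steps can be sketched as follows; this is an illustrative implementation, and the name `stooge_sort` is my own.

```python
def stooge_sort(arr, lo=0, hi=None):
    """Illustrative stooge sort: swap the ends if out of order, then
    recursively sort the first 2/3, the last 2/3, and the first 2/3."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    if arr[lo] > arr[hi]:             # step 1: fix the endpoints
        arr[lo], arr[hi] = arr[hi], arr[lo]
    if hi - lo + 1 > 2:
        third = (hi - lo + 1) // 3
        stooge_sort(arr, lo, hi - third)   # first 2/3
        stooge_sort(arr, lo + third, hi)   # last 2/3
        stooge_sort(arr, lo, hi - third)   # first 2/3 again
    return arr
```

The three recursive calls on 2/3-sized subarrays are exactly what produces the recurrence T(n) = 3T(2/3n) + O(1).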

6. Stooge sort is a comparison-based sorting algorithm.

a) true

b) false

Answer: a

Stooge sort is an example of a comparison-based sorting algorithm. This is because it compares the value of elements present in a list in order to sort them.

7. Stooge sort is an unstable sorting algorithm.

a) true

b) false

Answer: a

Stooge sort is not a stable sorting algorithm. It is because the elements with identical values do not appear in the same order in the output array as they were in the input array.

8. What is the average time complexity of stooge sort?

a) O(n^{2})

b) O(n^{3})

c) O(n^{2.6})

d) O(n^{2.7})

Answer: d

The recurrence relation of stooge sort is given as T(n) = 3T(2/3n) + O(1). It is found to be equal to O(n^{2.7}) using the master theorem.

9. How many recursive statements are used in the algorithm of stooge sort?

a) 0

b) 1

c) 2

d) 3

Answer: d

The algorithm of stooge sort uses 3 recursive statements in its algorithm. The first and third recursive statement applies stooge sort to the first 2/3 elements of the array and the second recursive statement applies stooge sort to the last 2/3 elements of the array.

10. Which of the following sorting algorithm has the same time complexity in every case?

a) stooge sort

b) strand sort

c) quick sort

d) bubble sort

Answer: a

- Stooge sort is a recursive sorting algorithm in which we swap the first and last elements of the array.
- Stooge sort has the same time complexity of O(n^{2.7}) in any case. This also shows that it is not an adaptive sorting algorithm.

11. Which of the following sorting algorithm is the worst in terms of time complexity?

a) bubble sort

b) selection sort

c) insertion sort

d) stooge sort

Answer: d

Stooge sort has a time complexity of O(n^{2.7}) which is the worst out of the given options. This shows that stooge sort is even less efficient than bubble sort which is itself considered to be a very inefficient sort.

12. Which of the following is not an adaptive sorting algorithm?

a) insertion sort

b) strand sort

c) stooge sort

d) bubble sort

Answer: c

Stooge sort is not an adaptive sorting algorithm. This is because it does not perform better in the case when the array is already/almost sorted.

1. Which of the following is not an alternative name of permutation sort?

a) stupid sort

b) BOGO sort

c) donkey sort

d) monkey sort

Answer: c

Permutation sort is also known by names like stupid sort, monkey sort, BOGO sort, slow sort, and shotgun sort. These names were chosen because of the algorithm's inefficiency.

2. Permutation sort works by __________

a) generating random permutations of its input

b) partitioning the array

c) dividing the value of input elements

d) generating permutations according to the value of the first element of the array

Answer: a

The permutation sort algorithm successively generates permutations of its input. This process is repeated until the sorted version of the array is found.
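
A minimal sketch of this generate-and-check loop, using random shuffles to produce permutations (the name `bogo_sort` is my own); it is purely illustrative and should only ever be run on tiny inputs.

```python
import random

def bogo_sort(arr):
    """Illustrative permutation sort: shuffle until the list happens to
    be sorted. Expected running time is O(n * n!)."""
    arr = list(arr)

    def is_sorted(a):
        # checking a candidate permutation costs O(n)
        return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

    while not is_sorted(arr):
        random.shuffle(arr)
    return arr
```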

3. What is the auxiliary space requirement of permutation sort?

a) O(n)

b) O(1)

c) O(log n)

d) O(n log n)

Answer: b

The permutation sort algorithm does not require any extra space for sorting the input array. Thus its auxiliary space requirement is O(1).

4. What is the best case time complexity of permutation sort?

a) O(n^{2})

b) O(n)

c) O(n log n)

d) O(1)

Answer: b

The best case time complexity of permutation sort occurs when the input array is already sorted. So in such a case we only need to check whether all the elements are sorted which can be done in O(n) time.

5. What is the worst-case time complexity of permutation sort?

a) O(n^{2})

b) O(n*n!)

c) O(infinity)

d) O(n log n)

Answer: c

There is no upper bound to the worst case of this algorithm. It can go on to take a very large amount of time if the array has many elements. So the worst case of this algorithm can be taken as O(infinity).

6. Which of the following sorting algorithm is not stable __________

a) insertion sort

b) bubble sort

c) merge sort

d) bogosort

Answer: d

Out of the given algorithms, only bogosort is not stable. This is because it creates permutations of the input array in order to obtain the sorted version. So there is no guarantee that the sorted version obtained by such a method gives a stable output.

7. Which of the following is an in-place sorting algorithm?

a) Merge sort

b) Permutation sort

c) Radix sort

d) Counting sort

Answer: b

Permutation sort is an in-place sorting algorithm. It is because the permutation sort algorithm does not require any extra space for sorting the input array.

8. Sleep sort should be preferred over permutation sort as it has better time complexity.

a) true

b) false

Answer: b

If we sort an array using sleep sort then there is no guarantee that the output we get is correctly sorted. So even though sleep sort has a better time complexity than bogosort, it cannot be preferred due to its inaccuracy.

9. What is the average case time complexity of permutation sort?

a) O(n^{2})

b) O(n*n!)

c) O(infinity)

d) O(n log n)

Answer: b

For calculating the average, we first need to count the possible permutations of an array of size n, which is n!. Each permutation also needs to be checked for whether it is sorted, which takes another O(n) time. Thus the overall time complexity becomes O(n*n!).

1. Which of the following is an advantage of recursive bubble sort over its iterative version?

a) it has better time complexity

b) it has better space complexity

c) it is easy to implement

d) it has no significant advantage

Answer: d

Recursive bubble sort has no significant advantage over iterative bubble sort. It is just a different way to implement the same.
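
As an illustration of that reimplementation (the function name is my own), one bubble pass places the largest element at the end, and the recursion then handles the remaining prefix:

```python
def recursive_bubble_sort(arr, n=None):
    """Illustrative recursive bubble sort: one pass bubbles the largest
    element to index n-1, then the first n-1 elements are sorted
    recursively."""
    if n is None:
        n = len(arr)
    if n <= 1:
        return arr
    for i in range(n - 1):
        if arr[i] > arr[i + 1]:
            arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return recursive_bubble_sort(arr, n - 1)
```

The pass costs n work and the recursion shrinks the problem by one, matching the recurrence T(n) = T(n-1) + n discussed in question 3.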

2. Bubble sort is also known as ___________

a) stupid sort

b) ship sort

c) sinking sort

d) shell sort

Answer: c

Bubble sort is also referred to as sinking sort. It continuously compares the value of adjacent elements as it traverses through an array and swaps the elements which are out of order.

3. What will be the recurrence relation of the code of recursive bubble sort?

a) T(n) = 2T(n/2) + n

b) T(n) = 2T(n/2) + c

c) T(n) = T(n-1) + n

d) T(n) = T(n-1) + c

Answer: c

The recurrence relation of the code of recursive bubble sort is T(n) = T(n-1) + n. It can be solved by the method of substitution and is found to be equal to O(n^{2}).

4. Which of the following sorting algorithm is stable?

a) Selection sort

b) Quick sort

c) Bubble sort

d) Heap sort

Answer: c

Out of the given options, bubble sort is the only stable algorithm. It is because the elements with identical values appear in the same order in the output array as they were in the input array.

5. Which of the following is a variation of bubble sort?

a) selection sort

b) odd-even sort

c) cocktail sort

d) stupid sort

Answer: b

Odd-even sort is a variation of bubble sort. But unlike bubble sort, odd-even sort traverses the array in two phases: the odd phase and the even phase.

6. Recursive bubble sort is a comparison-based sort.

a) true

b) false

Answer: a

In bubble sort, we compare adjacent elements in each iteration and swap them if they are out of order. So we can say that it uses comparisons in order to sort the array. Thus it qualifies as a comparison-based sort.

7. What is the average case time complexity of recursive bubble sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The overall recurrence relation of recursive bubble sort is given by T(n) = T(n-1) + n. It is found to be equal to O(n^{2}).

8. What is the best case time complexity of recursive bubble sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

The best case time complexity of recursive bubble sort is O(n). It occurs in the case when the input is already/almost sorted.

9. What is the worst-case time complexity of recursive bubble sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The overall recurrence relation of recursive bubble sort is given by T(n) = T(n-1) + n. It is found to be equal to O(n^{2}).

10. What is the number of swaps required to sort the array arr={1, 2, 4, 3, 7, 5, 6} using recursive bubble sort?

a) 0

b) 1

c) 2

d) 3

Answer: d

The first swap takes place between 4 and 3 then the second swap takes place between 7 and 5 and then finally 6 and 7 are swapped which sorts our array.

12. What is the auxiliary space complexity of recursive bubble sort?

a) O(n)

b) O(1)

c) O(n log n)

d) O(n^{2})

Answer: b

The auxiliary space required by recursive bubble sort is O(1). So it qualifies as an in-place sorting algorithm.

13. Bubble sort is an adaptive sorting algorithm.

a) true

b) false

Answer: a

Bubble sort is an adaptive algorithm. It is because the time complexity of the algorithm improves when the input array is almost sorted.

14. Which of the following sorting algorithm is in place?

a) recursive bubble sort

b) merge sort

c) radix sort

d) counting sort

Answer: a

Out of the given options, recursive bubble sort is the only algorithm that is in place. It is because the auxiliary space required by recursive bubble sort is O(1).

1. Which of the following is an advantage of binary insertion sort over its standard version?

a) it has better time complexity

b) it has better space complexity

c) it makes fewer comparisons

d) it has no significant advantage

Answer: c

Binary insertion sort makes fewer comparisons than the standard version of insertion sort. In the worst case, binary insertion sort makes O(log n) comparisons to place each element, compared to up to n comparisons per element in the standard version.

2. Binary Insertion sort is an online sorting algorithm.

a) True

b) False

Answer: a

Binary Insertion sort does not require the entire input data at the beginning itself in order to sort the array. It rather creates a partial solution in every step, so future elements are not required to be considered. Hence it is an online sorting algorithm.

3. How many comparisons will be made in the worst case when an array of size n will be sorted by using a binary insertion sort algorithm?

a) n

b) 1

c) log n

d) n log n

Answer: c

Binary insertion sort makes log n comparisons per insertion in the worst case, whereas the standard version makes up to n comparisons per insertion.

4. Which of the following sorting algorithm is stable?

a) Selection sort

b) Quick sort

c) Binary insertion sort

d) Heap sort

Answer: c

Out of the given options, binary insertion sort is the only stable algorithm. It is because the elements with identical values appear in the same order in the output array as they were in the input array.

5. Which of the following sorting algorithm uses a binary search?

a) radix sort

b) binary insertion sort

c) odd-even sort

d) bead sort

Answer: b

Binary insertion sort makes use of a binary search algorithm. It is used to find the correct index in the array where the element should be inserted.
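
A minimal sketch of this idea, using Python's standard `bisect_right` for the binary search; the function name `binary_insertion_sort` is my own.

```python
from bisect import bisect_right

def binary_insertion_sort(arr):
    """Illustrative binary insertion sort: binary-search the sorted
    prefix for each element's position, then shift and insert."""
    arr = list(arr)
    for i in range(1, len(arr)):
        key = arr[i]
        # bisect_right places equal elements after their duplicates,
        # which keeps the sort stable
        pos = bisect_right(arr, key, 0, i)
        arr[pos + 1:i + 1] = arr[pos:i]  # shift the tail one slot right
        arr[pos] = key
    return arr
```

Note that only the comparisons are reduced to O(log n) per element; the shifting still costs up to O(n) per element, so the overall time complexity stays O(n^{2}).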

6. Binary insertion sort is a comparison-based sort.

a) true

b) false

Answer: a

In insertion sort, we need to compare elements in order to find the correct position of each element in each iteration. So we can say that it uses comparisons in order to sort the array. Thus it qualifies as a comparison-based sort.

7. What is the average case time complexity of binary insertion sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The time complexity does not change when we use binary insertion sort in place of standard insertion sort. So the average case time complexity is O(n^{2}).

8. What is the best case time complexity of binary insertion sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

The best case time complexity of binary insertion sort is O(n). It occurs in the case when the input is already/almost sorted.

9. What is the worst-case time complexity of binary insertion sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The time complexity does not change when we use binary insertion sort in place of standard insertion sort. So the worst-case time complexity is O(n^{2}).

10. Choose the correct statement regarding binary insertion sort?

a) It has a better time complexity as compared to the standard version

b) It has a better space complexity as compared to the standard version

c) it takes fewer comparisons in the best case as compared to the standard version

d) it takes fewer comparisons in the worst case as compared to the standard version

Answer: d

Binary insertion sort has the advantage that it takes fewer comparisons in the worst case as compared to the standard version. In the best case, both standard insertion sort and binary insertion sort make only 1 comparison per insertion.

12. What is the auxiliary space complexity of binary insertion sort?

a) O(n)

b) O(1)

c) O(n log n)

d) O(n^{2})

Answer: b

The auxiliary space required by a binary insertion sort is O(1). So it qualifies as an in-place sorting algorithm.

13. Which of the following is an adaptive sorting algorithm?

a) binary insertion sort

b) merge sort

c) heap sort

d) selection sort

Answer: a

Binary insertion sort is an adaptive algorithm. It is because the time complexity of the algorithm improves when the input array is almost sorted.

14. Which of the following sorting algorithm is in place?

a) binary insertion sort

b) merge sort

c) radix sort

d) counting sort

Answer: a

In binary insertion sort, we divide the array into two subarrays: sorted and unsorted.

Out of the given options, binary insertion sort is the only algorithm that is in place. It is because the auxiliary space required by binary insertion sort is O(1).

1. Which of the following is an advantage of recursive insertion sort over its iterative version?

a) it has better time complexity

b) it has better space complexity

c) it is easy to implement

d) it has no significant advantage

Answer: d

Recursive insertion sort has no significant advantage over iterative insertion sort. It is just a different way to implement the same.

2. Insertion sort is an online sorting algorithm.

a) true

b) false

Answer: a

Insertion sort does not require the entire input data at the beginning itself in order to sort the array. It rather creates a partial solution in every step, so future elements are not required to be considered. Hence it is an online sorting algorithm.

3. What will be the recurrence relation of the code of recursive insertion sort?

a) T(n) = 2T(n/2) + n

b) T(n) = 2T(n/2) + c

c) T(n) = T(n-1) + n

d) T(n) = T(n-1) + c

Answer: c

The recurrence relation of the code of recursive insertion sort is T(n) = T(n-1) + n. It can be solved by the method of substitution and is found to be equal to O(n^{2}).
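
The recurrence corresponds to a structure like the following sketch (illustrative names, not from the source): sort the first n-1 elements recursively, then insert the last element, which costs up to n steps.

```python
def recursive_insertion_sort(arr, n=None):
    """Illustrative recursive insertion sort: T(n) = T(n-1) + n, since
    we sort the first n-1 elements and then insert arr[n-1]."""
    if n is None:
        n = len(arr)
    if n <= 1:
        return arr
    recursive_insertion_sort(arr, n - 1)
    key = arr[n - 1]
    j = n - 2
    while j >= 0 and arr[j] > key:
        arr[j + 1] = arr[j]  # shift larger elements one slot right
        j -= 1
    arr[j + 1] = key
    return arr
```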

4. Which of the following sorting algorithm is stable?

a) Selection sort

b) Quick sort

c) Insertion sort

d) Heap sort

Answer: c

Out of the given options, insertion sort is the only stable algorithm. It is because the elements with identical values appear in the same order in the output array as they were in the input array.

5. Which of the following is a variant of insertion sort?

a) selection sort

b) shell sort

c) odd-even sort

d) stupid sort

Answer: b

Shell sort is a generalized version of the insertion sort algorithm. It first sorts elements that are far apart from each other and successively reduces the interval between the elements to be sorted.

Shell sort is a variation of insertion sort. It has a better running time in practical applications.

6. Recursive insertion sort is a comparison-based sort.

a) True

b) False

Answer: a

In insertion sort, we need to compare elements in order to find the correct position of each element in each iteration. So we can say that it uses comparisons in order to sort the array. Thus it qualifies as a comparison-based sort.

7. What is the average case time complexity of recursive insertion sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The overall recurrence relation of recursive insertion sort is given by T(n) = T(n-1) + n. It is found to be equal to O(n^{2}).

8. What is the best case time complexity of recursive insertion sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: a

The best case time complexity of recursive insertion sort is O(n). It occurs in the case when the input is already/almost sorted.

9. What is the worst-case time complexity of recursive insertion sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The overall recurrence relation of recursive insertion sort is given by T(n) = T(n-1) + n. It is found to be equal to O(n^{2}).

10. How many swaps will be required in the worst case to sort an array having n elements using binary insertion sort?

a) n

b) 1

c) n * log n

d) log n

Answer: d

In normal insertion sort, up to n comparisons are required to place each element. But if we also implement the concept of binary search in insertion sort, then each element can be placed using only **log n** comparisons.

12. What is the auxiliary space complexity of recursive insertion sort?

a) O(n)

b) O(1)

c) O(n log n)

d) O(n^{2})

Answer: b

The auxiliary space required by recursive insertion sort is O(1). So it qualifies as an in-place sorting algorithm.

13. Which of the following is an adaptive sorting algorithm?

a) recursive insertion sort

b) merge sort

c) heap sort

d) selection sort

Answer: a

Insertion sort is an adaptive algorithm. It is because the time complexity of the algorithm improves when the input array is almost sorted.

14. Which of the following sorting algorithm is in place?

a) recursive insertion sort

b) merge sort

c) radix sort

d) counting sort

Answer: a

Out of the given options, recursive insertion sort is the only algorithm that is in place. It is because the auxiliary space required by recursive insertion sort is O(1).

1. Which of the following data structure is required for the implementation of tree sort?

a) any ordinary tree

b) balanced tree

c) binary search tree

d) unbalanced tree

Answer: c

Tree sort uses a binary search tree for sorting the given elements. The tree does not need to be balanced; tree sort can also be performed using an unbalanced binary search tree.

2. Tree sort is an online sorting algorithm.

a) True

b) False

Answer: a

Tree sort does not require the entire input data at the beginning itself in order to sort the array. It rather creates a partial solution in every step, so future elements are not required to be considered. Hence it is an online sorting algorithm.

3. Which of the following traversal in a binary search tree results in a sorted output?

a) in order traversal

b) pre order traversal

c) post order traversal

d) breadth-first traversal

Answer: a

Tree sort uses a binary search tree for sorting the given elements. First, a BST is formed by using the input data elements and then the BST is traversed in an in-order fashion which gives a sorted output.
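As a concrete illustration, here is a minimal (unbalanced) tree sort in Python: build the BST from the input, then read it back with an in-order traversal. The code is a sketch for this explanation, not from the source:

```python
class _Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def _insert(node, key):
    """Ordinary (non-balancing) BST insertion."""
    if node is None:
        return _Node(key)
    if key < node.key:
        node.left = _insert(node.left, key)
    else:
        # Duplicates go to the right subtree, so equal elements come
        # out in insertion order -- this is what makes tree sort stable.
        node.right = _insert(node.right, key)
    return node

def _inorder(node, out):
    """In-order traversal: left subtree, node, right subtree."""
    if node is not None:
        _inorder(node.left, out)
        out.append(node.key)
        _inorder(node.right, out)

def tree_sort(items):
    root = None
    for x in items:
        root = _insert(root, x)
    out = []
    _inorder(root, out)
    return out
```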

4. Which of the following sorting algorithm is stable?

a) Selection sort

b) Quick sort

c) Tree sort

d) Heap sort

Answer: c

Among the given options, tree sort is the only stable algorithm: elements with identical values appear in the same order in the output array as they did in the input array.

5. Which of the following sorting algorithm uses a binary search tree?

a) radix sort

b) tree sort

c) odd-even sort

d) bead sort

Answer: b

Tree sort makes use of a binary search tree. It is because whenever a BST is traversed in an in-order fashion it gives a sorted output.

6. Which of the following is a comparison-based sort?

a) tree sort

b) radix sort

c) counting sort

d) pigeonhole sort

Answer: a

In tree sort, we need to compare elements as we insert them into the binary search tree. Thus it qualifies as a comparison-based sort.

7. What is the average case time complexity of tree sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

On average, inserting an element into a binary search tree takes O(log n) time, so inserting all n elements takes O(n log n) time on average.

8. What is the best case time complexity of tree sort?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The best case time complexity of tree sort is the same as its average-case complexity. So best case time complexity is O(n log n).

9. What is the worst-case time complexity of tree sort (when implemented with a balanced tree)?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: b

The worst-case time complexity of tree sort depends on whether the tree used in the implementation is balanced or not. If the tree is balanced then the worst-case complexity is O(n log n).

10. What is the worst-case time complexity of tree sort (when implemented with an unbalanced tree)?

a) O(n)

b) O(n log n)

c) O(n^{2})

d) O(log n)

Answer: c

The worst-case time complexity of tree sort depends on whether the tree used in the implementation is balanced or not. If the tree is unbalanced then the worst-case complexity is O(n^{2}).
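The degenerate case is easy to see: feeding already-sorted keys into an ordinary BST makes every insertion go right, producing a chain of height n, so the total insertion cost is 1 + 2 + … + n = O(n^{2}). A small self-contained sketch (helper names are ours, for illustration):

```python
def bst_insert(root, key):
    """Ordinary (non-balancing) BST insertion, nodes as dicts."""
    if root is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < root["key"] else "right"
    root[side] = bst_insert(root[side], key)
    return root

def bst_height(root):
    """Number of nodes on the longest root-to-leaf path."""
    if root is None:
        return 0
    return 1 + max(bst_height(root["left"]), bst_height(root["right"]))

def build(keys):
    root = None
    for k in keys:
        root = bst_insert(root, k)
    return root

# Sorted input degenerates into a linked list: height equals n, so each
# insertion walks the whole chain. A mixed-order input stays much shallower.
```

A self-balancing BST (AVL, red-black) avoids this by keeping the height O(log n) regardless of insertion order.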

11. What is the auxiliary space complexity of tree sort?

a) O(1)

b) O(n)

c) O(log n)

d) O(n log n)

Answer: b

Tree sort requires auxiliary space for maintaining a binary search tree. So the auxiliary space complexity of tree sort is O(n).

12. In which of the following case does a tree sort become adaptive?

a) when implemented with an unbalanced tree

b) when implemented with a balanced tree

c) when implemented with a splay tree as BST

d) when implemented with AVL tree as BST

Answer: c

Tree sort becomes an adaptive sort when it is implemented with a splay tree as the BST. In such a case the best case time complexity is better than O(n log n).

13. Which of the following is not true about tree sort?

a) it is not an in-place sorting algorithm

b) every implementation is adaptive

c) it requires in order traversal of BST for sorting input elements

d) it is a stable sort

Answer: b

Every implementation of tree sort is not adaptive. It becomes adaptive only when implemented with a splay tree as BST.

14. Which of the following sorting algorithm is not in place?

a) insertion sort

b) quick sort

c) tree sort

d) gnome sort

Answer: c

Tree sort is a sorting algorithm based on the Binary Search Tree data structure. Among the given options, it is the only algorithm that is not in place, because it requires O(n) auxiliary space for the tree.

15. The worst-case time complexity of tree sort remains unaffected when implemented with an unbalanced tree or a balanced tree.

a) True

b) False

Answer: b

The time complexity of tree sort is affected when implemented with an unbalanced tree or a balanced tree. In the case of a balanced tree, it is O(n log n) and in the case of an unbalanced tree, it is O(n^{2}).

16. Which of the following is not an advantage of tree sort?

a) it has a low space complexity

b) it has good time complexity for balanced BST

c) it is an online sorting algorithm

d) it is a stable sorting algorithm

Answer: a

Tree sort has a linear auxiliary space complexity, O(n), which makes it space-inefficient. This is the main reason why in-place sorting algorithms like quick sort, heap sort, etc. are preferred over it.

17. Which of the following version of tree sort will have the highest worst-case time complexity?

a) using AVL tree as BST

b) using the red-black tree as BST

c) using the splay tree as BST

d) using ordinary BST

Answer: d

Tree sort has the highest worst-case time complexity with ordinary BST as it may not be balanced in every case. Whereas all other options have self-balancing BST.