Recall the following intuitive descriptions of asymptotic notation:
          5
        /   \
       3     9
      / \   / \
     1   2 4   6
    index    1   2   3   4   5   6   7
    value  |___|___|___|___|___|___|___|
 1  Bubble-Sort (A, n)
 2      swaps_done = True
 3      while swaps_done do
 4          swaps_done = False
 5          for i in 2..n do
 6              if A[i] < A[i-1] then
 7                  exchange A[i] with A[i-1]
 8                  swaps_done = True
 9              end if
10          end for
11      end while

The running time of the algorithm can be characterized by the number of comparisons made, i.e., the number of times line 6 is executed.
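For concreteness, here is a minimal runnable transcription of the pseudocode in Python. The function name bubble_sort and the comparison counter are illustrative additions, and indices are shifted to Python's 0-based convention; "line 6" still refers to the numbered pseudocode above.

    def bubble_sort(a):
        """Sort a in place with bubble sort; return the number of comparisons made."""
        comparisons = 0
        swaps_done = True
        while swaps_done:
            swaps_done = False
            # Pseudocode indices 2..n correspond to Python indices 1..len(a)-1.
            for i in range(1, len(a)):
                comparisons += 1                     # the comparison on line 6
                if a[i] < a[i - 1]:
                    a[i - 1], a[i] = a[i], a[i - 1]  # exchange A[i] with A[i-1]
                    swaps_done = True
        return comparisons

    data = [5, 3, 9, 1, 2, 4, 6]
    print(bubble_sort(data), data)   # prints the comparison count and the sorted array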
Professor Fubar claims that there is a worst-case lower bound of Ω(n ln n) on the number of array accesses needed to solve this problem.
We know that this is true for a comparison sort, but it's not clear whether this extends to every sorting algorithm.
Professor Emacs refutes the claim with this argument: Arrange the integers as a list of strings of digits, using leading zeros when needed, then use radix sort. We know radix sort makes Θ(c n) array accesses, where c is the number of digits per item. If c is a constant, then only Θ(n) array accesses are needed.
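For reference, a minimal sketch of the kind of LSD radix sort the argument appeals to, assuming non-negative integers with at most c decimal digits; the function names counting_sort_by_digit and radix_sort are illustrative, not taken from the problem.

    def counting_sort_by_digit(a, exp):
        """Stable counting sort of a by the decimal digit selected by exp (1, 10, 100, ...)."""
        count = [0] * 10
        for x in a:
            count[(x // exp) % 10] += 1
        for d in range(1, 10):              # prefix sums give each digit class its block
            count[d] += count[d - 1]
        out = [0] * len(a)
        for x in reversed(a):               # reversed traversal keeps the sort stable
            d = (x // exp) % 10
            count[d] -= 1
            out[count[d]] = x
        a[:] = out

    def radix_sort(a, c):
        """LSD radix sort of non-negative integers with at most c decimal digits."""
        exp = 1
        for _ in range(c):                  # c passes, each making Theta(n) array accesses
            counting_sort_by_digit(a, exp)
            exp *= 10

    data = [329, 457, 657, 839, 436, 720, 355]
    radix_sort(data, 3)
    print(data)    # [329, 355, 436, 457, 657, 720, 839]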
Is this a valid refutation of the claim? Explain your answer in two or three brief sentences. Remember, we are interested only in this refutation, so don't try to think of a different algorithm.
 1  Sort (A, i, n)
 2      if i = n then
 3          for j in 2..n do
 4              if A[j-1] > A[j] then return False
 5          end for
 6          return True
 7      else
 8          for j in i..n do
 9              exchange A[i] with A[j]
10              if Sort (A, i+1, n) then return True
11              exchange A[i] with A[j]
12          end for
13          return False
14      end if

Note that the value of i increases from 1 to n through the recursive calls (see line 10). The recursion stops when i reaches n; at this point, the array is checked to see if it is in order. If it isn't, False is returned and the algorithm continues. If it is, True is returned, all of the recursion "unwinds", and the algorithm stops, leaving the array in sorted order. When i isn't equal to n, the algorithm tries exchanging every element in A[i..n] with A[i], seeing if the resulting array is (recursively) sorted (and if so, returning True), then restoring the array to its original order to try again. This just generates every possible permutation of A[1..n], testing each one to see if it is sorted.
Give upper and lower bounds on the number of times line 4 is executed. Your bounds should be within a factor of n of each other. You may assume that, on average, half of the permutations must be checked before the sorted one is found.
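For reference, a direct Python transcription of this permutation sort, with indices adapted to Python's 0-based convention; "line 4" and "line 10" above still refer to the numbered pseudocode, and the names is_sorted and permutation_sort are illustrative.

    def is_sorted(a):
        """Lines 3-6 of the pseudocode: one pass of adjacent comparisons."""
        for j in range(1, len(a)):
            if a[j - 1] > a[j]:              # the comparison counted as "line 4"
                return False
        return True

    def permutation_sort(a, i=0):
        """Try permutations of a[i:] until the whole array is sorted."""
        if i >= len(a) - 1:                  # 0-based analogue of the base case i = n
            return is_sorted(a)
        for j in range(i, len(a)):
            a[i], a[j] = a[j], a[i]          # exchange A[i] with A[j]
            if permutation_sort(a, i + 1):   # recursively try the suffix
                return True
            a[i], a[j] = a[j], a[i]          # undo the exchange and try again
        return False

    data = [3, 1, 2]
    permutation_sort(data)
    print(data)    # [1, 2, 3]; the running time grows factorially, so keep inputs tiny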
The binomial coefficient satisfies the recurrence

    C(n, k) = C(n-1, k-1) + C(n-1, k),  if 0 < k < n
    C(n, k) = 1,                        otherwise

Write a memoized recursive function that computes the binomial coefficient. What is a tight bound on the running time of your algorithm? On the space required for your algorithm?
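One possible memoized sketch in Python, using functools.lru_cache as the memo table; the name binom is illustrative, the recurrence's "otherwise" case is taken literally, and inputs are assumed to satisfy 0 <= k <= n.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def binom(n, k):
        """C(n, k) via the additive recurrence; assumes 0 <= k <= n."""
        if not (0 < k < n):                  # the "otherwise" case of the recurrence
            return 1
        # Each distinct (n, k) pair is evaluated once and then served from the cache,
        # so at most O(n k) subproblems are ever computed and stored.
        return binom(n - 1, k - 1) + binom(n - 1, k)

    print(binom(10, 4))    # 210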