Review Questions

These problems are similar in content and format to those that will appear on the midterm exam.

Recall the following intuitive descriptions of asymptotic notation:

  f = O(g): f grows no faster than g (asymptotic upper bound).
  f = Ω(g): f grows at least as fast as g (asymptotic lower bound).
  f = Θ(g): f grows at the same rate as g (asymptotically tight bound).
  f = o(g): f grows strictly more slowly than g (strict upper bound).

True or False

Questions of this type require only a T or F answer.
  1. n^2 = O(n^3)
  2. n^3 = O(n^2)
  3. 3n = o(n)
  4. n/3 = O(n)
  5. n! = O(n lg n)
  6. 3^n = Θ(2^n)
  7. n^2 + 3n + sin n = Θ(n^2)
  8. 2^n = Θ(2^(n+4))
  9. log_a n = Θ(log_b n), for constants a and b with a != b
  10. n^a = Θ(n^b), for constants a and b with a != b
  11. It is possible to do a general purpose comparison sort on n items using O(n) comparisons.
  12. In the worst case, access to the maximum element of a heap takes Θ(ln n) time.
  13. Merge sort is asymptotically faster than bubble sort in the best case.
  14. Radix sort is a stable sort if the columns are sorted using a stable sort.

Short Answer

Answer in one or two brief sentences.
  1. What is the least number of nodes possible in an almost complete binary tree of height h?
  2. What is the purpose of the Partition procedure in Quicksort?
  3. What is the heap property?
  4. What can be done to decrease the likelihood that Quicksort will exhibit its worst-case behavior?
  5. What is "memoizing"?
  6. The memoized recursive algorithm for computing Fibonacci numbers given in class required an extra array of size Θ(n) to compute the nth Fibonacci number. The original algorithm did not use this array. Clearly, the original algorithm requires more time than the memoized one, but does it require less space (i.e., memory)? Why or why not? (A sketch of both versions appears after this list.)
  7. Give an example of a situation where bubble sort is faster than Quicksort.
  8. Is it possible that some triangulations of an n-sided polygon may be better than others because they have fewer triangles?
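
For question 6, here is a minimal Python sketch (not the exact code given in class) contrasting the two versions; the names fib and fib_memo are illustrative only:

    # Original recursive algorithm: exponential time, but its recursion
    # depth -- and hence its stack space -- is still Θ(n).
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    # Memoized version: Θ(n) time, using an extra Θ(n) table in addition
    # to the same Θ(n) recursion stack.
    def fib_memo(n, table=None):
        if table is None:
            table = {}          # stands in for the extra array of size Θ(n)
        if n < 2:
            return n
        if n not in table:
            table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table)
        return table[n]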

Drawing Pictures

  1. Consider the following binary tree:
                                 5
                               /   \
                              3     9
                             / \   / \
                            1  2  4   6
    

Analysis of Algorithms

  1. Consider the following algorithm, Bubble-Sort. It accepts an array A[1..n] of numbers and sorts the array into ascending order. swaps_done is a Boolean variable.
    1	Bubble-Sort (A, n)
    2		swaps_done = True
    3		while swaps_done do
    4			swaps_done = False
    5			for i in 2..n do
    6				if A[i] < A[i-1] then 
    7					exchange A[i] with A[i-1]
    8					swaps_done = True
    9				end if
    10			end for
    11		end while
    
    The running time of the algorithm can be characterized by the number of comparisons made, i.e., the number of times line 6 is executed. (A runnable Python version appears after this problem set.)
  2. Consider the problem of sorting an array A[1..n] of unique positive integers. That is, we are guaranteed there are no duplicates.

    Professor Fubar claims that there is a worst-case lower bound of Ω(n ln n) on the number of array accesses needed to solve this problem.

    We know that this is true for a comparison sort, but it's not clear whether this extends to every sorting algorithm.

    Professor Emacs refutes the claim with this argument: Arrange the integers as a list of strings of digits, using leading zeros when needed, then use radix sort. We know radix sort makes Θ(cn) array accesses, where c is the number of digits per item. If c is a constant, then only Θ(n) array accesses are needed.

    Is this a valid refutation of the claim? Explain your answer in two or three brief sentences. Remember, we are interested only in this refutation, so don't try to think of a different algorithm.

  3. Consider the following recursive algorithm for sorting a subarray A[i..n] of unique items. It is initially invoked as Sort (A, 1, n) to sort the entire array. It returns a truth value that is True if the array is sorted and False otherwise; this value is used only within the algorithm, to decide whether to continue sorting.
    1	Sort (A, i, n)
    2		if i = n then 
    3			for j in 2..n do
    4				if A[j-1] > A[j] then return False
    5			end for
    6			return True
    7		else
    8			for j in i..n do
    9				exchange A[i] with A[j]
    10			if Sort (A, i+1, n) then return True
    11				exchange A[i] with A[j]
    12			end for
    13			return False
    14		end if
    
    Note that the value of i increases from 1 to n across the recursive calls (see line 10). The recursion stops when i reaches n; at this point, the array is checked to see whether it is in order. If it isn't, False is returned and the algorithm continues. If it is, True is returned, all the recursion "unwinds" and stops, and the array is left in sorted order. When i isn't equal to n, the algorithm tries exchanging each element of A[i..n] with A[i], checks whether the resulting array is (recursively) sorted (and if so, returns True), then restores the array to its original order and tries again. In effect, this generates every possible permutation of A[1..n] and tests each one to see if it is sorted. (A Python rendering of this algorithm also appears after this problem set.)

    Give upper and lower bounds on the number of times line 4 is executed. Your bounds should be within a factor of n of each other. You may assume that, on average, half of the permutations must be checked before the sorted one is found.
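
As a concrete reference for problem 1, here is a straightforward Python translation of Bubble-Sort; this is a sketch only, using 0-based indexing, so the pseudocode's line 6 test becomes the a[i] < a[i - 1] comparison:

    def bubble_sort(a):
        # Direct translation of the pseudocode above, 0-based.
        swaps_done = True
        while swaps_done:
            swaps_done = False
            for i in range(1, len(a)):       # pseudocode's "for i in 2..n"
                if a[i] < a[i - 1]:          # the comparison counted in line 6
                    a[i], a[i - 1] = a[i - 1], a[i]
                    swaps_done = True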
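And for problem 3, one possible Python rendering of Sort (the name perm_sort is illustrative); again a 0-based sketch, so the pseudocode's base case i = n becomes i = n - 1:

    def perm_sort(a, i=0):
        n = len(a)
        if i == n - 1:                       # base case: one element left
            # Pseudocode lines 2-6: scan for an out-of-order pair.
            return all(a[j - 1] <= a[j] for j in range(1, n))
        for j in range(i, n):
            a[i], a[j] = a[j], a[i]          # try a[j] in position i
            if perm_sort(a, i + 1):
                return True                  # sorted; unwind, leaving a sorted
            a[i], a[j] = a[j], a[i]          # restore and try the next j
        return False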

Writing Algorithms

  1. The binomial coefficient, written in text as n C k (read "n choose k"), is the number of combinations of n things taken k at a time. One definition of this is recursive:
    n C k =
    [(n-1) C (k-1)] + [(n-1) C k], if 0 < k < n
    1, otherwise
    Write a memoized recursive function that computes the binomial coefficient. What is a tight bound on the running time of your algorithm? On the space it requires? (One possible sketch appears after this list.)
  2. Write an algorithm that sorts an array A[1..n] of integers in the range 1..1000 in descending order (i.e., larger numbers come first). Give the asymptotic running time of your algorithm in terms of the number of array accesses or some other realistic metric. The better the running time, the more points you get. (A sketch of one approach appears after this list.)
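
For problem 1, a minimal memoized sketch in Python, following the recursive definition above; the dictionary memo stands in for a Θ(nk)-entry table:

    def binomial(n, k, memo=None):
        # Memoized n C k; Θ(n*k) time and Θ(n*k) space for the table.
        if memo is None:
            memo = {}
        if not (0 < k < n):
            return 1                          # the "1, otherwise" case
        if (n, k) not in memo:
            memo[(n, k)] = (binomial(n - 1, k - 1, memo)
                            + binomial(n - 1, k, memo))
        return memo[(n, k)]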
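And for problem 2, the fixed range 1..1000 invites a counting sort, which needs only Θ(n + 1000) = Θ(n) array accesses; one possible sketch:

    def sort_descending(a):
        # Counting sort over the known range 1..1000, emitting larger
        # values first; Θ(n) array accesses overall.
        counts = [0] * 1001                   # index 0 unused
        for x in a:
            counts[x] += 1
        out = []
        for v in range(1000, 0, -1):          # largest value down to 1
            out.extend([v] * counts[v])
        return out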