Programming with
Data Structures and Algorithms

Amortized Analysis

Part 1

Suppose we want an ordered, variable-size data structure that gives quick access to all elements.

Specifically, we need to be able to:

  1. insert an element at the end
  2. get the kth element

Note that Scheme lists support insertion (and deletion) at one end in constant time, but getting the kth element takes O(k) time, so lists alone won't be sufficient. Arrays (which Scheme calls vectors) support accessing elements in constant time, but are fixed in size.

The fixed size of arrays is a major limitation. Once an array is full, we can no longer insert elements. To solve this problem, we can start out with an array (or vector) of a certain size (you may assume the starting size is 1) and replace it with a bigger array when it is full. Here is a pseudocode description of this flexible array:

makeFlexibleArray():
    currentArray = new array[1]
insertAtEnd(elem):
    if currentArray is full:
        newArray = new array[length of currentArray + 1]
        copy each element of currentArray to newArray
        add elem to the first free position in newArray
        currentArray = newArray
    else:
        add elem to the first free position in currentArray
get(k):
    return the kth element of currentArray
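
For concreteness, here is one way this pseudocode might be rendered in Python. This is only a sketch, and the class and method names are our own; a Python list that we never resize in place stands in for a raw fixed-size array.

class FlexibleArray:
    """Flexible array that grows by one slot whenever it is full.

    A raw fixed-size array is simulated by a Python list that is only
    ever replaced wholesale, never resized in place.
    """

    def __init__(self):
        self.array = [None]      # backing "array", starting size 1
        self.count = 0           # number of elements actually stored

    def insert_at_end(self, elem):
        if self.count == len(self.array):                # array is full
            new_array = [None] * (len(self.array) + 1)   # one slot bigger
            for i in range(self.count):                  # copy each element
                new_array[i] = self.array[i]
            self.array = new_array
        self.array[self.count] = elem                    # first free position
        self.count += 1

    def get(self, k):
        return self.array[k]                             # constant-time access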

Answer the following questions:

  1. What is the worst-case time complexity of each operation?
  2. Is the amortized complexity of a sequence of k insert operations any better?

Part 2

Fortunately, there is a way to improve the amortized cost per operation of the flexible array. We change insertAtEnd so that the array doubles in size when it is full. Again, you may assume that the starting size of the array is 1.

insertAtEnd(elem):
    if currentArray is full:
        newArray = new array[length of currentArray * 2]
        copy each element of currentArray to newArray
        add elem to the first free position in newArray
        currentArray = newArray
    else:
        add elem to the first free position in currentArray
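
Continuing the hypothetical Python sketch from Part 1, only the growth step changes (the subclass name is again our own):

class DoublingFlexibleArray(FlexibleArray):
    def insert_at_end(self, elem):
        if self.count == len(self.array):                # array is full
            new_array = [None] * (len(self.array) * 2)   # double the size
            for i in range(self.count):                  # copy each element
                new_array[i] = self.array[i]
            self.array = new_array
        self.array[self.count] = elem                    # first free position
        self.count += 1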

Answer the following questions:

  1. What is the new amortized cost per operation of k insert operations?

Part 3

Suppose we want to extend our data structure to also support removing elements at the end. Specifically, we add the following operation:

deleteLast():
    delete last element of currentArray

To conserve memory, we may also want to shrink the array if it gets too empty. Here are two choices:

  1. When the array is half-full, shrink its size to half of what it was.
  2. When the array is one-fourth full, shrink its size to half of what it was.

Shrinking the array takes constant time, but you should assume that it notifies the operating system that the array now occupies less space, so the OS is free to hand out the memory beyond the end of the shrunk region. That is, if the array later needs to grow again, it must allocate a new region and copy all of its elements, as before.
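
As a sketch only, deleteLast with shrinking might look as follows in Python, continuing the earlier hypothetical classes. The shrink threshold is left as a parameter of our own devising, so the same code covers both options (1/2 for option 1, 1/4 for option 2); which threshold is the right choice is exactly the question below.

class ShrinkingFlexibleArray(DoublingFlexibleArray):
    SHRINK_AT = 0.5   # fill ratio that triggers a shrink: 1/2 or 1/4

    def delete_last(self):
        assert self.count > 0, "array is empty"
        self.count -= 1
        self.array[self.count] = None    # clear the vacated slot
        if len(self.array) > 1 and self.count <= len(self.array) * self.SHRINK_AT:
            # Per the problem statement, shrinking is constant time (the OS
            # is merely told the region is smaller); the slice below just
            # simulates the smaller region.
            self.array = self.array[: len(self.array) // 2]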

Answer the following questions:

  1. Which option is better, and why? Give an example of a sequence of operations and the corresponding amortized analysis to support your claim.

Part 4

Suppose we need a slightly different functionality from our data structure. Instead of looking up the kth element, we need to be able to search for elements.

Specifically, we need to be able to:

  1. insert an element
  2. search for an element

One way to do this is to keep an array of elements in sorted order. Constant-time access lets us find elements with binary search in O(log n) time, but insertion is slow, O(n), because the array must remain sorted: when we add an element, every element after it must shift one array index to the right.
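
In Python, for instance, the standard bisect module shows exactly this trade-off on a sorted list:

import bisect

xs = [2, 5, 7, 8, 10, 12, 20, 42]       # sorted array

i = bisect.bisect_left(xs, 10)          # binary search: O(log n)
found = i < len(xs) and xs[i] == 10     # True

bisect.insort(xs, 6)                    # insert in order: O(n), because
                                        # 7, 8, 10, ... all shift one slot right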

Here is a better design for a data structure that provides this functionality: Keep a list of arrays, numbered 0 through n, where array k is of size 2^k. We will maintain the invariant that all arrays are either empty or full, and all full arrays are in sorted order.

For example,

Array 0: [3]
Array 1: [1, 6]
Array 2: [_, _, _, _]
Array 3: [2, 5, 7, 8, 10, 12, 20, 42]

To insert an element, we first create an array of size 1 containing only that element. Then we iterate through the list of arrays, starting with the smallest. If array k is empty, we replace it with the full new array and we are done. Otherwise, we merge array k into the new array, doubling its size. Then we set array k to be empty and move on to array k+1.

So to insert the element 4 into the example above, we would create the array [4], then merge [4] with [3] to get [3, 4], then merge that with [1, 6] to get [1, 3, 4, 6], and finally place [1, 3, 4, 6] in position 2, resulting in the following configuration:

Array 0: [_]
Array 1: [_, _]
Array 2: [1, 3, 4, 6]
Array 3: [2, 5, 7, 8, 10, 12, 20, 42]

The pseudocode looks like this:

makeSortedDict():
    listOfArrays = empty
search(elem):
    foreach array in listOfArrays:
        binary search for elem in array
        if found, return it
insert(elem):
    newArray = [elem]
    foreach array in listOfArrays:
        if array is empty:
            replace array with newArray in listOfArrays
            return
        else:
            newArray = merge array with newArray
            array = empty
    add newArray to end of listOfArrays
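
Here is one possible Python rendering of this pseudocode; it is a sketch, with names of our choosing, using None for an empty slot and the bisect module for the binary searches.

import bisect

def merge(a, b):
    """Standard merge of two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

class SortedDict:
    def __init__(self):
        # arrays[k] is either None (empty) or a sorted list of length 2**k
        self.arrays = []

    def search(self, elem):
        for arr in self.arrays:
            if arr is None:
                continue
            i = bisect.bisect_left(arr, elem)    # binary search in one array
            if i < len(arr) and arr[i] == elem:
                return True
        return False

    def insert(self, elem):
        new_array = [elem]                       # full array of size 1
        for k, arr in enumerate(self.arrays):
            if arr is None:
                self.arrays[k] = new_array       # slot k was empty: done
                return
            new_array = merge(arr, new_array)    # size doubles
            self.arrays[k] = None                # slot k becomes empty
        self.arrays.append(new_array)            # every slot was full

Running insert(4) on the example configuration above reproduces the merges shown: [4] with [3], then [3, 4] with [1, 6], then [1, 3, 4, 6] lands in slot 2.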

Answer the following questions:

  1. What is the time complexity of search?
  2. What is the amortized cost per operation of k insert operations?
  3. Can you think of a data structure you already know that provides this functionality with a better time complexity? Naming the structure is fine; we don't expect a detailed response.

What to turn in: