Review of last time

Linked lists: draw some diagrams, go over some operations. Do the “reversing a list” problem, because it leads into stacks.

Inductive lists: write out the inductive definition, write the struct, write a couple of list operations.

Variations on lists

Ordered lists: like the ordered array from the first assignment. The insert and remove operations preserve sorted-ness of the list.

Circular lists: the “tail” element of the list has its next pointer point, not to nullptr, but back to the head of the list. (In a doubly-linked circular list, prev of the head of the list points to the tail.) Loops that want to walk the entire list, instead of checking for nullptr, have to check for the node that they started at. This allows you to start at any node and iterate over the entire list.
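
For example, a loop over a circular singly-linked list might look like this (a minimal sketch, assuming the usual node struct):

#include <iostream>

struct node {
    int value;
    node* next; // In a circular list, this is never nullptr
};

// Visit every node, starting from *any* node in the cycle.
void print_all(node* start) {
    node* n = start;
    do {
        std::cout << n->value << "\n";
        n = n->next;
    } while(n != start); // Stop once we come back around to the start
}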

List applications

Today we’re going to look at a number of data structures that can be built on top of lists (they can be built on top of other data types, like vector, too, with certain tradeoffs).

Stacks

A stack is a data structure where elements can be added, removed, and possibly accessed only at one end. E.g., you can think of it as a list where we can only use the operations push_front, pop_front, and maybe head (substituting back for front would work, too, but working from the head guarantees that all these operations take constant time).

We usually visualize a stack vertically, with the accessible end on top:

|  |
|1 |
|2 |
|3 |
+--+

1 is the top element, 3 is the bottom. Assuming we only used push_front, what is the order in which these elements were added?

push_front(3);
push_front(2);
push_front(1);

In the reverse order from which they are displayed; in particular, the top element is always the most recently added. The bottom element is always the oldest.

What operations do we need to do to get access to 3? We have to pop everything on top of it off:

pop_front(); // removes 1 
pop_front(); // removes 2
// now 3 is on top

Note that elements are removed in the reverse of the order they are added. That is, the last element added is the first to be removed. Because of this, stacks are sometimes called “last-in-first-out” structures or LIFO.

Usually, the stack operations are just called push and pop. We can build a minimal stack implementation on top of the standard forward_list class (the built-in singly-linked list class):

#include <forward_list>

template<typename T>
class stack : private std::forward_list<T> {
  public: 

    void push(T value) { push_front(value); }

    T pop() {
        T ret = front();
        pop_front();
        return ret;
    }

    T peek() {
        return front();
    }

    using std::forward_list<T>::empty;

  private:
    using std::forward_list<T>::push_front;
    using std::forward_list<T>::pop_front;
    using std::forward_list<T>::front;
};

(Sometimes stacks add a third operation peek, for looking at the top element of the stack without removing it. This is equivalent to popping it off and then pushing it back on. We also add a member to check whether the stack is empty.)

We’re using private inheritance to build a stack out of the existing list class. (The private using declarations pull in the methods we need from the forward_list class, so we can refer to them without qualification.)

If you try to pop an element off an empty stack, this is an error known as stack underflow. Some stack implementations (particularly array-based ones) may impose a maximum size on the stack (i.e., the stack can “get full”); trying to push an element onto a full stack is known as stack overflow. Our list-based implementation has no artificial restriction on its maximum size.

An easy use for stacks is solving the reversing problem I gave you earlier. Just push the elements of the list onto a stack in order, and then pop them off into a different list; they will naturally be reversed, due to the LIFO property of stacks.
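
For example (a minimal sketch, using the stack class above):

#include <forward_list>

// Reverse a list by pushing its elements onto a stack, then popping them
// off into a new list; the LIFO order does the reversal for us.
template<typename T>
std::forward_list<T> reverse_list(const std::forward_list<T>& in) {
    stack<T> st;
    for(const T& e : in)
        st.push(e);                              // First element ends up on the bottom

    std::forward_list<T> out;
    auto tail = out.before_begin();
    while(!st.empty())
        tail = out.insert_after(tail, st.pop()); // Pops come out reversed
    return out;
}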

A use for stacks: matching parentheses

One basic usage for stacks is checking an arithmetic expression to see whether the parentheses are balanced. E.g., in

( ( ) ) ( )

the parens are balanced, but in

( ( ) ) (
( ( ) ) )

they are not (too many opening and closing parentheses, respectively). We can use a stack to check whether the parentheses are balanced:

  1. When we see an opening parenthesis, push a “marker” onto the stack (it doesn’t matter what).

  2. When we see a closing parenthesis, pop the stack (if stack underflow occurs, STOP: there were too many closing parens).

  3. If, when we’ve read the entire expression, the stack is empty, then the parens were balanced; if the stack is not empty, then STOP, there were too many opening parentheses.

You might notice that in this trivial example, we could just count the number of opening parentheses seen so far, and subtract one for every closing paren. In fact, we are only using the height of the stack. That’s true, but let’s look at a more complicated example:

Suppose we want to match not just parentheses, but also square brackets, and curly braces, mixed together:

( { } ) [ ]  // balanced
( { ) [ } ]  // NOT balanced

Now just counting is not sufficient, as the second example illustrates: the numbers of the different types of delimiters are balanced, but their positions are not. For this, we need a stack, with a slightly modified procedure:

  1. Whenever we see an opening delimiter, push it onto the stack.

  2. Whenever we see a closing delimiter, pop the top element off the stack and check whether it matches (is of the same type as) the closing delimiter we just saw. If it does, continue; if not, STOP, the expression is unbalanced.

  3. If, when we reach the end of the expression, the stack is empty, then the expression was balanced. If the stack is not empty, then STOP, the expression was not balanced.
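
A sketch of this procedure, using the standard std::stack:

#include <stack>
#include <string>

// Returns true if every (, [, and { in s is matched by the right closer.
bool balanced(const std::string& s) {
    std::stack<char> st;

    for(char c : s) {
        if(c == '(' || c == '[' || c == '{')
            st.push(c);                  // 1. Push opening delimiters
        else if(c == ')' || c == ']' || c == '}') {
            if(st.empty())
                return false;            // Underflow: too many closers
            char open = st.top(); st.pop();
            if((c == ')' && open != '(') ||
               (c == ']' && open != '[') ||
               (c == '}' && open != '{'))
                return false;            // 2. Mismatched delimiter types
        }
    }
    return st.empty();                   // 3. Leftovers mean too many openers
}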

Stacks are useful whenever we need to “save our place” and then pick it up later. In this case, we “save” the fact that we are inside some mixture of parens/brackets/braces and, when we see a closing symbol, pick up where we were before the most recent opening delimiter.

Applications of stacks: function calls

We’re actually already using a stack: the computer uses a stack to “keep track of where we left off” whenever we call a function. Consider the following code:

void f() {
    ...
    g();
    ...
}

When g is called, it will need some space for its local variables, etc., but on the other hand, f probably has its own local variables. It would not be good if the call to g totally clobbered f’s variables, so we need a way of saving our place within f (which includes the state of all local variables, as well as what point within the body of f we were executing) when we enter g. Similarly, when g returns, we need to resume f at exactly that point.

Every function, when it is called, creates an activation record on the computer’s stack. This record contains the function’s arguments, its local variables, and the return address: the point within the caller at which execution should resume when the function returns.

When a function returns, it pops all its “stuff” off the stack, pushes its return value, and then jumps to the return address. (Return values are sometimes handled on the stack as well: a function may push its return value onto the stack, with the expectation that the calling function will pop it off.)

Unrolling recursion

Because the computer uses a stack to implement function calls, we can rewrite any function that uses recursion to use an “explicit” stack.

E.g., consider

int rec_fib(int n) {
  if(n == 0 || n == 1)
    return 1;
  else
    return rec_fib(n-1) + rec_fib(n-2);
}

The humble recursive Fibonacci. In order to manually stack-ify the recursion, we need to consider what information is stored on the stack for each call to rec_fib: the parameter n, the return value, and an “instruction pointer” (IP) recording how far through the body we’ve gotten.

In the base case, the function returns immediately, so there’s no need for an IP there. In the recursive case, however, the function can be in one of three “stages”: before running rec_fib(n-1), after rec_fib(n-1) but before rec_fib(n-2), or after both recursive calls have returned. We need to store this stage information so that we know where to resume a function when one of its calls returns. We’ll create a struct for activation records to record this information:

struct actrec {
  int n;         // Parameter
  int ret;       // Return value    
  int stage = 0; // "Instruction pointer"
};

(Storing all this in one record is a little unrealistic; in reality, each of these would be pushed/popped onto the stack separately, and we’d have to keep track of what order to push them when calling a function, or pop them when returning. This order is part of the calling convention of a system: what, exactly, needs to happen in order to call a function?)

If stage == 0 then the function has just been called. If stage == 1 then the first recursive call has completed, but the second hasn’t been started yet. If stage == 2 then both recursive calls are finished, so we should return the result. This leads to the following manual-stack implementation:

#include <stack>

int stack_fib(int n) {
  std::stack<actrec> st;
  st.push(actrec{n, 0, 0});            // Call fib(n) in stage 0

  while(true) {
    actrec a = st.top();               // Run function on top of stack

    if(a.stage == 0) {                 // Just starting
      if(a.n == 0 || a.n == 1) {
        st.pop();                      // n == 0,1; return 1
        if(st.empty())
          return 1;                    // The initial call was a base case
        st.top().ret += 1;             // Send return value up the stack
        st.top().stage++;              // Advance IP for calling function
      }
      else
        st.push(actrec{a.n-1, 0, 0});  // n > 1; recursive call n-1
    }
    else if(a.stage == 1) {            // First recursive call done
      st.push(actrec{a.n-2, 0, 0});    // Recursive call n-2
    }
    else if(a.stage == 2) {            // Both recursive calls done
      st.pop();                        // Return

      if(!st.empty()) {
        st.top().ret += a.ret;         // Pass return value up the stack
        st.top().stage++;              // Advance IP for calling function
      }
      else
        return a.ret;                  // Initial function call, we're done!
    }
  }
}

Note that it is never possible for a function to be in stage 1 or 2 when n == 0,1, because the first branch of the if makes it return (pop) immediately in that case.
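
As a quick sanity check, the stack version should agree with the recursive one (a small test, assuming both definitions above are in scope):

#include <cassert>

int main() {
    for(int n = 0; n < 20; ++n)
        assert(stack_fib(n) == rec_fib(n)); // The explicit stack mirrors the call stack
}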

Stack machines

Most computers these days have a stack, registers, and main memory (which can be used both for read-only program code and for read-write data), giving us three places to store the data we are working with. Some early computers, however, had only a stack (plus a read-only memory holding the code of the running program). Surprisingly, we can still write real programs on such a system. Here we’ll build a stack-based calculator. (This kind of calculation is sometimes called Reverse Polish Notation or RPN; it was used on some HP calculators.) Our calculator will accept lines of input from the user, break them up into space-separated tokens, and then evaluate each token individually. The token types are

Token    Description
number   Pushes number onto the stack
add      Pops the top two elements, adds them, and pushes the result
sub      Pops the top two elements, subtracts the first from the second, and pushes the result
mul      Pops the top two elements, multiplies them, and pushes the result
div      Pops the top two elements, divides the first by the second, and pushes the result

After every command line, the system will print the value on the top of the stack (via peek).

To compute, for example, (1 + 2) * 3 we would type

1 2 add 3 mul

(Demonstrate)

To compute 1 + 2 * 3 we would type

2 3 mul 1 add

(Demonstrate)

Note that the order in which we enter commands corresponds to the order of operations in the expression: the multiplication is performed first, so it is entered first.
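
A minimal sketch of the evaluator (with no underflow checking, for brevity; the function name is ours):

#include <sstream>
#include <stack>
#include <string>

// Evaluate one line of space-separated RPN tokens; return the top of the
// stack, which is what the calculator prints after each line.
double eval_line(const std::string& line) {
    std::stack<double> st;
    std::istringstream tokens(line);
    std::string tok;

    while(tokens >> tok) {
        if(tok == "add" || tok == "sub" || tok == "mul" || tok == "div") {
            double b = st.top(); st.pop();   // First pop: top of the stack
            double a = st.top(); st.pop();   // Second pop: the one below it
            if(tok == "add")      st.push(a + b);
            else if(tok == "sub") st.push(a - b);
            else if(tok == "mul") st.push(a * b);
            else                  st.push(a / b);
        }
        else
            st.push(std::stod(tok));         // A number: push it
    }
    return st.top();                         // The "peek" we print
}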

The Forth programming language extends this paradigm into a full programming language, with procedures, variables, etc. See https://en.wikipedia.org/wiki/Forth_(programming_language) for more info.

Stack implementations

A singly-linked list (without the tail pointer; it is unnecessary here) is the most natural implementation, but an array-based implementation with a fixed maximum size (called a “bounded stack”) is also relatively easy to write, may offer better performance, and is more efficient in terms of space used.

The array-based implementation just keeps track of where the “top” element of the stack is (the bottom is always at array index 0). If top == -1 then the stack is empty. A sketch of an implementation looks like this:

#include <stdexcept>

template<typename T>
class array_stack {
  public:
    array_stack(int size) : top(-1), st(new T[size]), size(size) { }
    ~array_stack() { delete[] st; }

    // Copy ctor, overloaded assignment omitted

    void push(T value) {
        if(top == size-1)
            throw std::out_of_range("stack overflow");

        st[++top] = value;
    }

    T pop() {
        if(top < 0)
            throw std::out_of_range("stack underflow");

        return st[top--];
    }

    T peek() {
        if(top < 0)
            throw std::out_of_range("stack underflow");

        return st[top];
    }

    bool empty() {
        return top < 0;
    }

  private:
    int top;  // Index of the top element (-1 when empty)
    T* st;    // The stack itself
    int size; // Maximum capacity of the stack
};

Queues

While a stack only allows you to access its elements at one end, a queue adds elements at one end (the “back”) and removes/accesses them only at the opposite end (the “front”). A queue can be visualized as a line of people at the grocery store: the first person to get in line is the first person served. Queues are thus sometimes called “first-in-first-out” or FIFO structures.

Similar to stacks, it is an error (“queue underflow”) to try to remove elements from an empty queue. Likewise, some implementations have a maximum size; trying to add an element to a full queue results in a “queue overflow” error.

While stacks model a “stop and pick up where you left off” kind of scheme, queues can be thought of as an “everyone takes turns” kind of scheme. Stacks are good for scenarios where we start a new task and must wait for it to finish before we can continue. Queues are good for scenarios where new tasks are independent of the current one; we only care that all tasks eventually finish.

Queue implementation

Once again, we can build a queue class on top of a standard list class. We’ll use the head of the list for the “front” of the queue (where elements are removed), because we can remove elements from the head of a linked list in \(O(1)\) time; new elements are added at the tail. Note that std::forward_list provides no push_back (appending to a singly-linked list without a tail pointer would take \(O(n)\) time), so here we build on the doubly-linked std::list instead:

#include <list>

template<typename T>
class queue : private std::list<T> {
  public:

    void enqueue(T value) {
        push_back(value);
    }

    T dequeue() {
        T ret = front();
        pop_front();
        return ret;
    }

    T peek() {
        return front();
    }

    using std::list<T>::empty;

  private:

    using std::list<T>::push_back;
    using std::list<T>::front;
    using std::list<T>::pop_front;
};

The two fundamental queue operations are called enqueue and dequeue. As with stacks, we’ve added convenience methods peek (for looking at the soon-to-be-dequeued element) and empty (for testing for the empty queue). Note that unlike a stack, we cannot implement peek in terms of the fundamental queue operations: if we dequeue an element and immediately re-enqueue it, it goes to the back, and has to wait behind everything else before it reappears at the front.

(Show an example, with linked-list notation.)

Queue applications

We’ll see lots of applications later on, where queues form the backbone of many breadth-first graph algorithms, but there are some useful applications we can look at now.

Suppose we have several processes going at once in our program and we wish to switch between them, giving each one a “fair slice” of the program’s time. Suppose also that processes can spawn other, new processes. We’ll model a process as an abstract base class:

class process {
  public:

    // Returns false if the process is finished
    virtual bool run(queue<process*>& processes) = 0;
};

We pass the run method a reference to the queue of processes, so that it can “spawn” new processes by enqueuing them.

Our “process loop” will look like this:

queue<process*> processes; // Pointers, since process is an abstract class

while(!processes.empty()) {
    process* p = processes.dequeue();

    if(p->run(processes))
        processes.enqueue(p); // This process wants another turn
}

Notice what happens when a process runs, and returns true: it was at the front of the queue, but now it is at the end of the queue. It won’t get another run until every other process has had its turn.
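
For example, a toy process might look like this (a hypothetical counter that just wants a fixed number of turns):

// A toy process: runs for `turns` time slices, then reports that it's done.
class counter : public process {
  public:
    counter(int turns) : remaining(turns) { }

    bool run(queue<process*>& processes) override {
        // ... one slice of work would go here; a process that wanted to
        // spawn a child would enqueue it on `processes` ...
        return --remaining > 0;   // False once our turns are used up
    }

  private:
    int remaining;
};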

A process queue like this is actually how the operating system schedules the different programs running on it (with some variations). We don’t want to allow one program to hog the CPU, so after a program has run through an allotted amount of time (called its quantum) control is given to the next process. The difference is that the OS has the ability to interrupt processes and forcibly give control to another process; all we can do is hope that each process doesn’t do too much in its run method. (What we have actually implemented here is a simple form of “green threads”: threads that live in “userland” rather than being managed by the OS.)

Queues are often used in simulations, models of real-world situations in which agents must wait to access some shared resource. A whole sub-field of applied mathematics, queuing theory, deals with problems of this sort.

Queues for simulation

Often, queues (the data structure) are used to simulate actual queues in the real world, situations where entities have to “wait their turn” to access some limited but shared resource. The main questions that go into developing a simulation like this are: how often do new entities arrive, and how long does each one take to serve?

Sometimes we might have actual data telling us exactly the times at which entities arrive, and how long their particular tasks take. This makes the simulation easier, so we’re going to assume we have it (if we don’t, we can easily generate random data according to the averages and feed that in).

We might then want to ask questions like: how long does the average entity wait before being served? How long does the queue get?

A sketch of a simulator for something like this: we keep a clock, a queue of waiting events, and the event currently being served. On each tick of the clock, we enqueue any newly-arriving events, and if the current event has finished, we dequeue the next one. This gives us something like the following:


#include <cstddef>
#include <vector>

struct event {
    int start_time; // Arrival time
    int duration;   // How long this task takes to serve
};

void simulate(const std::vector<event>& events) {
    int time = 0;
    std::size_t next = 0;        // Next event to arrive (events sorted by start_time)
    queue<event> q;

    event current_task = {0, 0}; // A task already finished when the simulation starts
    int service_start = 0;       // When the current task began being served

    // Run until every event has arrived, waited, and been served
    while(next < events.size() || !q.empty() ||
          service_start + current_task.duration > time) {

        // Enqueue all events that arrive at this time
        while(next < events.size() && events[next].start_time == time)
            q.enqueue(events[next++]);

        // If the current task is complete, start serving the next one
        if(service_start + current_task.duration <= time && !q.empty()) {
            current_task = q.dequeue();
            service_start = time;
        }

        // Advance the clock
        time++;
    }
}

As an example, suppose we have these events:

Start time   Duration
15           5
17           10
18           3
21           4

(Walk through example)

Of course, we’d want to be collecting statistics on how long each event waited to be processed, the number of events processed, etc. It’s also rather inefficient to scan through time like this, since there may be large stretches of time where nothing happens. A better way would be to find the time at which the next interesting thing happens (a new event arrives, or the current event finishes) and advance the clock directly to that point.

Stacks vs. Queues for searching

Suppose we have a maze, and we want to find a path through it.

(Draw abstract maze).

There are two ways we can approach this problem: keep the positions we still need to explore in a stack, or keep them in a queue.

(Walk through both as examples)

These two strategies are called “depth-first” and “breadth-first”. In the stack-based implementation, we pursue a single path all the way to its end, before exploring any other. In the queue-based implementation, we explore all available options a little bit at a time. Both methods have their pros and cons.
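
A sketch of the stack-based (depth-first) version, assuming a grid maze of '#' walls and '.' open cells; swapping the stack for a queue turns it into the breadth-first version:

#include <stack>
#include <string>
#include <utility>
#include <vector>

// Can we get from (sr,sc) to (tr,tc)? The maze is passed by value so that
// we can mark visited cells by overwriting them with '#'.
bool reachable(std::vector<std::string> maze, int sr, int sc, int tr, int tc) {
    std::stack<std::pair<int,int>> frontier; // Positions left to explore
    frontier.push({sr, sc});

    while(!frontier.empty()) {
        auto [r, c] = frontier.top();
        frontier.pop();

        if(r < 0 || r >= int(maze.size()) ||
           c < 0 || c >= int(maze[r].size()) || maze[r][c] != '.')
            continue;                        // Wall, out of bounds, or visited

        if(r == tr && c == tc)
            return true;                     // Found the target

        maze[r][c] = '#';                    // Mark this cell visited

        frontier.push({r+1, c}); frontier.push({r-1, c});
        frontier.push({r, c+1}); frontier.push({r, c-1});
    }
    return false;                            // Explored everything reachable
}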

Array-based queues

It’s possible to build a fixed-size (“bounded”) queue that is backed by an array, rather than a list. With a stack this was easy, because the stack started at one end of the array and grew towards the other. With a queue, every enqueue/dequeue causes the queue to “crawl” towards the other end of the array. What do we do when we have no room left to enqueue new elements? The trick is to treat the array as a circular buffer, that is, if you move off one end, it wraps around to the other end.

(Illustrate with 10 element queue/array)

We store the starting and ending positions of the queue, but bear in mind that the end might be before the start, if the queue has wrapped around. So we can’t just subtract to figure out the number of elements; we have to store the count separately:

#include <stdexcept>

class array_queue {
  public:
    array_queue(int ms) {
        max_size = ms;
        count = 0;
        start = end = 0;
        buffer = new int[ms];
    }
    ~array_queue() { delete[] buffer; }

    void enqueue(int e);
    int dequeue();
    int size() { return count; }

  private:
    int* buffer;
    int count;    // Current number of elements
    int max_size; // Capacity of the buffer
    int start;    // Where the next enqueue will occur
    int end;      // Where the next dequeue will occur
};

The enqueue operation looks like this:

void array_queue::enqueue(int e) {
    if(count == max_size)
        throw std::overflow_error("queue overflow");

    buffer[start] = e;
    start = (start + 1) % max_size; // Wrap around
    ++count;
}

The dequeue operation looks like this:

int array_queue::dequeue() {
    if(count == 0)
        throw std::underflow_error("queue underflow");

    int e = buffer[end];
    end = (end + 1) % max_size; // Wrap around
    --count;
    return e;
}

And of course the size() method just returns the stored count.

This implementation has \(O(1)\) enqueue and dequeue, and is more efficient in terms of space than the list version. The downside is that it has a fixed maximum size. (Actually, it’s possible to build a vector-array-queue hybrid which uses the vector grow-and-copy algorithm to expand the queue if we ever try to enqueue an element when it’s full. That gives us amortized \(O(1)\) time enqueues, with unlimited size.)
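
A sketch of that grow step, assuming we add a private grow() helper to the class above, called by enqueue when count == max_size:

// Hypothetical helper: double the buffer, copying the elements in queue order.
void array_queue::grow() {
    int* bigger = new int[2 * max_size];

    // The element at `end` is the front of the queue; copy the elements,
    // in order, to the start of the new buffer.
    for(int i = 0; i < count; ++i)
        bigger[i] = buffer[(end + i) % max_size];

    delete[] buffer;
    buffer = bigger;
    end = 0;        // Front is now at index 0
    start = count;  // Next enqueue goes right after the last element
    max_size *= 2;
}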

Priority queues

I mentioned that the OS uses something like a queue to schedule different processes. In reality, some processes are more important than others, and should thus have a larger slice of the CPU’s time. In order to accommodate this, we use a priority queue. A priority queue is just a queue where, when you enqueue something, you get to specify a priority for it. High-priority items are not placed at the end of the queue, but closer to the front of it (so that they will be dequeued sooner).

It’s fairly easy to (inefficiently) implement a priority queue on top of a list or vector: just add new elements at whichever end is cheapest. When removing (“dequeuing”), search for the highest-priority element, swap it to that end, and remove it. This makes adding new elements \(O(1)\), while removing elements is \(O(n)\), due to the need to search for the highest-priority element.

(An alternate method is to keep the contents of the queue sorted in priority order; this makes adding new elements \(O(n)\), but removing an element is only \(O(1)\).)

Later, we’ll see a new data structure called a heap that lets us do this more efficiently (heaps also support changing the priority of an item already in the queue, to make it run sooner or later).

A problem with priority queues, which normal queues don’t have, is that a steady stream of high-priority tasks can prevent low-priority items from ever being dequeued. E.g., suppose the queue contains tasks with priorities

1 1 1 1

and new tasks with priority 2 keep arriving. Each priority-2 task will be dequeued before any of the priority-1 tasks, and thus the priority-1 tasks may never run.

Here’s a sketch of the unsorted vector-based implementation described above:

#include <utility> // std::swap
#include <vector>

class priority_queue {
  public:

    void enqueue(int v, int p) {
      q.push_back(queue_elem{v, p});
    }

    int dequeue() {
      // Find the highest-priority element (no underflow check, for brevity)
      std::size_t highest = 0;
      for(std::size_t i = 0; i < q.size(); ++i)
        if(q[i].priority > q[highest].priority)
          highest = i;

      // Swap it to the back and remove it.
      std::swap(q.back(), q[highest]);
      int v = q.back().value;
      q.pop_back();
      return v;
    }

  private:
    struct queue_elem {
      int value;
      int priority;
    };

    std::vector<queue_elem> q;
};

It’s possible, although complex, to implement a priority queue based on a circular array, keeping the contents sorted by priority; this would eliminate the \(O(n)\) cost of dequeue above (shifting it onto enqueue instead).

Deques

A double-ended queue is called a deque; it supports adding and removing elements from both ends, but no random access to elements in the middle. An implementation on top of a doubly-linked list is relatively easy.
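
Following the same pattern as our stack and queue classes, a minimal sketch on top of std::list:

#include <list>

template<typename T>
class deque : private std::list<T> {
  public:
    void push_front(T value) { std::list<T>::push_front(value); }
    void push_back(T value)  { std::list<T>::push_back(value); }

    T pop_front() {
        T ret = std::list<T>::front();
        std::list<T>::pop_front();
        return ret;
    }

    T pop_back() {
        T ret = std::list<T>::back();
        std::list<T>::pop_back();
        return ret;
    }

    using std::list<T>::empty;
};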

There aren’t as many applications for deques as for stacks and queues. Some scheduling systems for machines with multiple CPUs use them, in a technique called “work stealing”: each CPU gets its own deque of processes; when a process creates a new process, it is added at the front, and when a CPU “steals” a process from another CPU, it takes it from the rear of that CPU’s deque and adds it to the front of its own.

As with stacks and queues, deques can be implemented on top of a doubly-linked list (doubly-linked, because we need to be able to push/pop in constant time at both the front and the back), or on top of an array used as a circular buffer. Unlike a queue, an array-based deque should start placing new elements in the middle of the array rather than at one end, because the deque may grow in either direction. The same “array looping” technique applies, with the modification that the indices can now run off either end of the array (above the top, or below 0).