Justin Sybrandt

In the last post, we learned the absolute basics of Amortized Analysis. To make a long story short (or more specifically, to split a long story up between a series of smaller incremental posts), amortized analysis is a way to look at an algorithm's typical cost rather than its strict worst-case run time.

So how does one determine this typical cost in an actually meaningful way?

Turns out there are two main methods, the first of which, the Banker's Paradigm, we will discuss here.

To motivate the Banker's Paradigm we will be analyzing the C++ implementation of a vector. In C++, a vector is not "a quantity represented by an arrow with both direction and magnitude" (regardless of what cartoon supervillains might lead you to believe). Instead, in C++, a vector is an easily resizable array.

Items can be added to or removed from the vector, and the structure is smart enough to expand or contract its memory usage as the number of items it contains changes. These items are stored in an internal array, and new items are tacked onto the first free space available. If someone tries to add an element to a vector whose internal array is full, that vector must first make a larger array and copy all of its previous contents over. The following illustration should make this clearer.

The above picture shows what happens to the size of the vector (the shaded row) as new items are added (values 1 to 8). This should seem really similar to the binary counter problem from the previous blog post.

So is it true that adding an element to a vector is an O(n) operation? Well, yes, adding the n-th element could result in copying n-1 elements to a new array. But that only happens lg n times over the course of n insertions. Normally, adding a new element is constant. This sounds exactly like a problem we can amortize. In order to understand the amortization of this problem, let's consider a short thought experiment.

Imagine 8 people want to go on a cruise. The cruise line is a bit weird, and they only have a ship that carries one person, but they can get ships that carry 2, 4, 8, 16, or 32 people. They only get a new ship if they see there are enough passengers to warrant it. Now imagine the ships are also weird, and they will only let someone on if they pay the ship \$1. How should the cruise line collect money from its passengers?

The first thought might be to charge each passenger \$1 every time they board a new ship. That sounds alright, but passengers would not be happy. In the following diagram, an orange cell corresponds to every time a passenger would be charged \$1, and at the bottom is each passenger's total cost.

So the first guy to show up is paying four times as much as the last guy! Obviously, this isn't a good idea.

So what if, instead, the cruise line charged each person \$3 to enter? Then, they could keep all the extra money in a pot which they would use to pay the costs of moving between ships. This would actually work! To demonstrate, we can look at those first four passengers, and track what the pot of money looks like as people enter and move between ships.

The idea is that each person's bill covers their first entry, their first transfer, and the next transfer for an element which was already on board. From the looks of it, the shipping company gets an extra dollar in the pot from the first case (where the first passenger doesn't have a "buddy" to pay for when he first transfers) but the company will certainly never be in debt.

Because we can charge each passenger the same amount of money, we can think of the ship costs as constant. Similarly, when data is transferred between arrays in a vector, we can think of vector insertion as "constant." Although there are times when every piece of data has to move to a new array, most of the time a new element can just fall in easily.

The logical tool we used, wherein we thought of each data element as paying some amount of money (or tokens), is called the Banker's Paradigm. Each operation can be thought of as costing a single token, and the question is: what is the smallest amount of money we can charge each element and still complete the whole task?

Although this example was only conceptual, this method has been applied very rigorously to many data structures and operations to find amortized times. Hopefully, this allows you to at least understand the motivation and conceptual model of the Banker's Paradigm. In the next post we will discuss a more formulaic approach, referred to as the Physicist's Paradigm.

To Be Continued