Our next topic in amortized analysis parallels the banker's paradigm. This new approach, the physicist's paradigm, is a little more mathematically rigorous, but ultimately should produce the same answer (or same type of answer) as its money-grubbing counterpart. Although both techniques should produce the same result, it is worth understanding both, because problems are often more easily understood when phrased as one or the other.
So what is it?
Think back to your last physics class, remember gravity? If you took any introduction to physics you likely learned about gravitational potential energy, which is the amount of energy stored in an object's distance from the earth. If you take a one-kilogram ball and raise it one meter off the ground, you have bestowed it with 9.8 joules of energy. Releasing the ball also releases that energy, which can be seen when the ball falls to the ground. Now, if you remember this, you might also remember that it doesn't matter what path the ball takes when you lift it up one meter. All that matters for the calculation of gravitational potential energy is the initial and final position.
How do we use it?
So what does this have to do with algorithms? Well, let's look at each step of an algorithm through the lens of energy functions. We are going to use this short function to determine how much work is done by our algorithm at each step:

a_i = t_i + phi_i - phi_{i-1}

Here, a_i is the amortized cost required to get us from step i-1 to step i, while t_i is the actual cost of doing step i. Phi is a potential energy value for the data structure, and we look at the change in potential between the previous step and the current one. Now, if you are reading this and have no clue how to calculate the potential energy of a data structure, well, you are in the right place.
Phi should be something that is extremely easy to compute and that represents something fundamental about the state of the data structure. For example, if our structure is a binary tree, then we might say phi is the number of rows, or leaves, or nodes in total. Unfortunately, if picking phi were obvious, then no one would pay computer scientists to do amortized analysis. In fact, the only hard part of the physicist's paradigm is picking this phi.
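To make "easy to compute" concrete, here is a sketch of those three candidate potentials for a binary tree. The Node class and function names are purely illustrative, not part of any standard library:

```python
# A minimal binary tree node, just for illustrating candidate potentials.
class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def num_nodes(root):
    # Candidate phi: total number of nodes in the tree.
    if root is None:
        return 0
    return 1 + num_nodes(root.left) + num_nodes(root.right)

def num_leaves(root):
    # Candidate phi: number of leaves (nodes with no children).
    if root is None:
        return 0
    if root.left is None and root.right is None:
        return 1
    return num_leaves(root.left) + num_leaves(root.right)

def height(root):
    # Candidate phi: number of rows (the tree's height).
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))
```

Each candidate is cheap to compute and depends only on the current state of the structure, which is exactly what a potential function needs.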
So let's look at an example:
Last time, when we talked about amortized analysis with the banker’s paradigm we started with binary addition. Because I already drew most of those graphics, I’m going to start there again.
So here we have our old example: from left to right you can read the binary representations of the numbers one through sixteen. Across the bottom you can see the number of flips required to get from the previous integer to the current one (for example, it takes 3 flips to go from the number 3 to the number 4).
So how do we pick phi here? Well, if we look at a single binary number, what are some things we might pick? Let's imagine the number eleven in binary: 00001011. What could we calculate easily? The simplest ideas are typically the best places to start, so let's say that phi is the number of 1s in the binary number. In this case, phi_11 = 3.
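As a quick sanity check, this choice of phi is one line of (illustrative) Python:

```python
def phi(n):
    # Potential: count the 1 bits in the binary representation of n.
    return bin(n).count("1")

print(phi(11))  # eleven is 0b1011, which has three 1s, so this prints 3
```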
Next, we need to know t. How much work does it take to go from eleven (00001011) to twelve (00001100) in binary? Well, the two lowest-order bits change from 1s to 0s, while the third bit changes from a 0 to a 1. Overall, it takes three flips. Notice how in the above graphic the number of flips is listed across the bottom.
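We can compute t the same way, since the bits that differ between two consecutive values are exactly the bits that flip. Again, this is just a sketch for checking the numbers, not part of any counter implementation:

```python
def flips(a, b):
    # XOR leaves a 1 exactly where a and b differ; those are the flipped bits.
    return bin(a ^ b).count("1")

print(flips(11, 12))  # 0b1011 -> 0b1100 flips three bits, so this prints 3
```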
So now we can calculate our amortized cost. Because we already studied a binary counter using the banker's paradigm, we know that a binary counter's amortized cost should be constant. For the step from eleven to twelve we have t = 3, phi_12 = 2, and phi_11 = 3, so a = 3 + (2 - 3) = 2.
And luckily, our amortized value is a constant 2. It is not the exact same constant we got from the banker's paradigm, but this is okay. What matters is not whether we get a 2 or a 3, but whether we get a constant value at all. Additionally, because our amortized constant cost makes sense (it's what we were expecting), we have a good sense that our choice of phi was correct too!
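If you want to check this yourself, a short loop can verify that every increment from 1 through 16 has an amortized cost of exactly 2 under this phi. The helper names here are mine, not standard:

```python
def phi(n):
    # Potential: number of 1 bits in n's binary representation.
    return bin(n).count("1")

def actual_flips(a, b):
    # Bits that differ between a and b are exactly the bits flipped.
    return bin(a ^ b).count("1")

for i in range(1, 17):
    t = actual_flips(i - 1, i)
    a = t + phi(i) - phi(i - 1)  # amortized cost of the i-th increment
    assert a == 2, (i, a)
```

The reason this always lands on 2: an increment flips some run of k trailing 1s down to 0 and one 0 up to 1, so t = k + 1 while the potential changes by 1 - k, and the two k's cancel.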
So now we know what amortized analysis is, as well as the two main ways to reason about a data structure's amortized cost. These tools, the banker's paradigm and the physicist's paradigm, will form a backbone for our later discussion of data structures designed specifically for good amortization.