[Raw images from cs.princeton.edu/~wayne/teaching/fibonacci-heap.pdf]

In the last section we discussed binomial queues and how they use a linked list of binomial trees to implement an efficient priority queue. While the binomial queue works hard to give us an amortized constant-time insert operation, it sacrifices performance in other areas, such as its O(log n) find-min operation. In this section we will explore another data structure that keeps a priority queue as a linked list of trees, making its own performance trade-offs. In particular, the Fibonacci heap maintains a less rigidly structured set of trees than its binomial counterpart, but in exchange for the maintenance overhead, it provides several amortized constant-time operations. Because of this carefully constructed internal organization, the Fibonacci heap is one of the best-performing heap-like data structures known today.

There are two main differences between the Fibonacci heap and the binomial queue. First, while the binomial queue requires that at most one tree of each order exist at any time, the Fibonacci heap may contain multiple trees of the same rank. Second, while the binomial queue consists solely of binomial trees, the Fibonacci heap contains trees of many shapes and sizes. Note that in both structures, every tree must satisfy the min-heap property: a parent node's key is never greater than any of its children's.

The main component of a Fibonacci heap is a circular doubly linked list of tree roots. When a new element is inserted, it is simply added to this root list as a single-node tree. Trees may eventually be combined, but not until an operation runs that requires more structure. This data structure gets its performance benefits by being really lazy.
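This lazy insertion can be sketched as follows. A minimal sketch, assuming a circular doubly linked root list and a running min pointer; the names (`Node`, `FibHeap`) are illustrative, not from the text:

```python
# Minimal sketch: lazy O(1) insertion into a Fibonacci heap's circular
# doubly linked root list. `Node` and `FibHeap` are illustrative names.

class Node:
    def __init__(self, key):
        self.key = key
        self.degree = 0        # number of children
        self.marked = False
        self.parent = None
        self.child = None
        self.left = self       # circular doubly linked list pointers
        self.right = self

class FibHeap:
    def __init__(self):
        self.min = None        # pointer into the circular root list
        self.n = 0

    def insert(self, key):
        """O(1): splice a one-node tree into the root list."""
        node = Node(key)
        if self.min is None:
            self.min = node
        else:
            # splice `node` in to the right of the current minimum
            node.left = self.min
            node.right = self.min.right
            self.min.right.left = node
            self.min.right = node
            if key < self.min.key:
                self.min = node
        self.n += 1
        return node

    def find_min(self):
        """O(1): just follow the min pointer."""
        return self.min.key
```

Notice that insert does no merging at all; it only splices a node into the root list and possibly updates the min pointer, which is exactly why it runs in constant time.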

**Fibonacci Heap (amortized running times):**

- Find-min O(1)
- Delete-min O(log n)
- Insert O(1)
- Decrease-key O(1)
- Merge O(1)

For this analysis we are going to use the physicist's method of amortized analysis. This, of course, means we need a potential function that lets us charge each operation's cost against changes in the data structure's state. For a Fibonacci heap we will use

**Φ = #Trees + 2(#Marks)**
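The potential function is simple enough to write out directly. A minimal sketch, assuming the tree and mark counts are passed in; the name `potential` is illustrative:

```python
def potential(num_trees, num_marks):
    """Phi = #Trees + 2 * (#Marks)."""
    return num_trees + 2 * num_marks
```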

So the obvious question is, “what’s a mark?”

Marks:

Each node in a Fibonacci heap is either “marked” or “unmarked.” This is typically just kept as a Boolean associated with each node. All nodes are initially inserted unmarked and later become marked as heap methods are run. These marked nodes are then used to try and flatten the heap structure, making later operations run faster.

The general rule is that a node becomes marked if one of its children is “cut” from beneath it. In this context, cutting a subtree means removing the subtree from its parent and adding it to the root list.

Later, if a child of a marked node is to be cut, the marked parent node is cut as well. In this manner, the cut recursively travels up the tree until an unmarked node is found or the root is reached. Roots are treated as a special case: although you could mark and cut a root, removing a root node just to add it back into the root list accomplishes nothing. So, to keep the heap efficient, root nodes are never marked.

This raises the question: when is a node actually cut?

The main culprit is decrease-key, because a tree needs to be reconfigured if a node's key becomes smaller than its parent's. Instead of doing complicated rotations, as in a red-black tree, the node is simply cut, its parent marked, and the node added to the root list. If the parent was already marked, then, as just described, the parent is cut as well and the process continues up the tree. The following animation describes this process.

[Raw images from cs.princeton.edu/~wayne/teaching/fibonacci-heap.pdf]
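The cut and cascading-cut logic just described can be sketched as follows. For readability, this sketch stores children and the root list as plain Python lists rather than the circular linked lists a real implementation would use; all names (`FibHeap`, `_cut`, `decrease_key`) are illustrative, not from the text:

```python
# Simplified sketch of decrease-key's cut / cascading-cut logic.

class Node:
    def __init__(self, key):
        self.key = key
        self.marked = False
        self.parent = None
        self.children = []

class FibHeap:
    def __init__(self):
        self.roots = []
        self.min = None

    def insert(self, key):
        node = Node(key)
        self.roots.append(node)
        if self.min is None or key < self.min.key:
            self.min = node
        return node

    def _cut(self, node):
        """Move `node` out from under its parent into the root list."""
        parent = node.parent
        parent.children.remove(node)
        node.parent = None
        node.marked = False          # roots are never marked
        self.roots.append(node)
        return parent

    def decrease_key(self, node, new_key):
        assert new_key <= node.key
        node.key = new_key
        parent = node.parent
        if parent is not None and node.key < parent.key:
            p = self._cut(node)
            # cascade: keep cutting marked ancestors
            while p.parent is not None and p.marked:
                p = self._cut(p)
            if p.parent is not None:
                p.marked = True      # first unmarked non-root gets marked
        if node.key < self.min.key:
            self.min = node
```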

So far we have only described how single elements are added and how trees are split apart; what we have yet to describe is the process by which trees are combined. Whenever a delete-min operation occurs, the heap must be cleaned up. This can be done with a single pass through the root list, combining any trees that have the same degree. The following animation describes this.

[Raw images from cs.princeton.edu/~wayne/teaching/fibonacci-heap.pdf]
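The consolidation pass can be sketched as below. This is a simplification, assuming roots are handed over as a plain list and children are stored in lists; `link` and `consolidate` are illustrative names, not from the text:

```python
# Simplified sketch of the consolidation pass run during delete-min:
# repeatedly link pairs of roots of equal degree (rank) until all
# surviving roots have distinct degrees.

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []

    @property
    def degree(self):
        return len(self.children)

def link(a, b):
    """Make the larger-keyed root a child of the smaller (min-heap order)."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def consolidate(roots):
    """One pass through the root list, merging equal-degree trees."""
    by_degree = {}                       # degree -> the one tree of that degree
    for node in roots:
        d = node.degree
        while d in by_degree:            # collision: link the two and retry
            node = link(node, by_degree.pop(d))
            d = node.degree
        by_degree[d] = node
    return list(by_degree.values())
```

The dictionary keyed by degree is what makes this a single pass: each collision links two trees, strictly increasing the winner's degree, so every tree is linked at most once per degree value.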

Now that we understand marked nodes, we can understand the amortized analysis of Fibonacci heaps. The following derivations show the logic for determining the amortized cost of three of the more involved operations.

T_b and T_f are the number of trees before and after the operation, while M_b and M_f are the number of marked nodes before and after, respectively. ΔΦ is the change in the potential function as a result of the operation.

Insert:

Adds a new node to the root list.

T_f = T_b + 1

M_f = M_b

ΔΦ = [(T_b + 1) + 2(M_b)] - [T_b + 2(M_b)] = 1

Delete-Min:

When the minimum element is removed, each of its children is added to the root list as a new root. Then, roots are merged together until no two roots have the same rank (number of children).

R = rank of the largest-ranked node after delete-min finishes.

T_f ≤ R + 1

M_f = M_b

ΔΦ = [T_f + 2(M_f)] - [T_b + 2(M_b)] ≤ R + 1 - T_b ≤ R

Decrease Key:

A selected element is assigned a value less than its parent, causing a set of nodes to be cut and added to the root list.

C = # of cuts performed during this operation

T_f = T_b + C

M_f ≤ M_b - C + 2

ΔΦ ≤ C + 2(-C + 2) = 4 - C ≤ 4
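The algebra in this last bound is easy to sanity-check numerically. A small illustrative check, assuming C cuts change the tree count by exactly C and the mark count by at most -C + 2 (each cascading cut unmarks a node, and at most one new mark is added at the end):

```python
def decrease_key_delta_phi_bound(cuts):
    """Upper bound on the potential change after `cuts` cuts."""
    delta_trees = cuts            # T_f - T_b = C
    delta_marks = -cuts + 2       # M_f - M_b <= -C + 2
    return delta_trees + 2 * delta_marks

for c in range(11):
    assert decrease_key_delta_phi_bound(c) == 4 - c
    assert decrease_key_delta_phi_bound(c) <= 4
```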