Often when designing data structures, trade-offs are made to make certain operations more computationally efficient at the cost of other, hopefully less-used, operations. The binomial queue is no exception.

Typically, five operations matter for priority queues: find-min, delete-min, insert, decrease-key, and merge. If we look at a binary min heap, a standard data structure in which the first element is the smallest in the collection, we see mediocre performance on most of these operations.

**Heap Performance:**

- find-min - O(1)
- delete-min - O(log n)
- insert - O(log n)
- decrease-key - O(log n)
- merge - O(n)

The binomial queue improves upon insertion and merge at the expense of find-min. The rationale is that many applications store more data than they actually query; in other words, there will likely be more insertions than find-min calls. Binomial queues achieve this trade-off by replacing the heap's internal binary tree with a circular list of binomial trees, each enforcing the min-heap property that a root node is smaller than all of its descendant nodes. More on this shortly.

**Binomial Queue Performance:**

- find-min - O(log n)
- delete-min - O(log n)
- insert - O(1) (amortized)
- decrease-key - O(log n)
- merge - O(log n)

**Binomial Trees:**

In contrast to a binary tree, where each node has at most one parent and two children, a binomial tree grows by allowing a variable number of children per node. The number of children under the root of a binomial tree is called that tree's order, and a full binomial tree of order n contains 2^{n} nodes. The children of the root are themselves binomial trees of orders 0 through n-1. Shown below are the trees of order 0 to 3; note how the largest tree is composed of all the lower-order trees.

Note that the tree definition also obeys the min-heap property: the root of the tree is the smallest element of the overall tree, and, likewise, each child is the smallest element of its respective subtree. Finding an element is straightforward and practically the same as in a traditional heap. All that is needed is to scan through each node's children list; the query can only exist in a subtree under a child whose value is less than or equal to the query, so larger subtrees can be skipped.
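As a concrete sketch in Python (the `Node` and `contains` names are illustrative, not from the text), a min-heap-ordered binomial tree and this pruned search might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of a binomial tree; its children are themselves
    binomial trees of strictly smaller order."""
    key: int
    children: list = field(default_factory=list)

def contains(root, query):
    """Search a binomial tree, pruning any subtree whose root
    already exceeds the query (allowed by the min-heap property)."""
    if root.key == query:
        return True
    return any(contains(c, query) for c in root.children if c.key <= query)
```

For example, an order-2 tree rooted at 1 with children `Node(2, [Node(4)])` and `Node(3)` holds 2^2 = 4 nodes, and `contains(root, 0)` returns immediately without visiting any child, since every child's key exceeds 0.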

Additionally, trees can be merged. If two trees of the same order need to be combined, the larger of the two roots is simply added to the children list of the smaller, producing one tree of the next order up. This means merging two trees is O(1).
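A minimal sketch of this O(1) link step, assuming a node type that tracks its key, order, and child list (the names here are assumptions, not the text's own):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    key: int
    order: int = 0                          # number of direct children
    children: list = field(default_factory=list)

def link(a, b):
    """Merge two binomial trees of equal order in O(1):
    the larger root is appended to the smaller root's children,
    yielding a single tree of order one higher."""
    if b.key < a.key:
        a, b = b, a                         # ensure `a` has the smaller root
    a.children.append(b)
    a.order += 1
    return a
```

Linking `Node(5)` and `Node(2)` yields a tree rooted at 2 with 5 as its only child, preserving the min-heap property by construction.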

**Binomial Queue:**

So we can use binomial trees to build the overall data structure. We will store our binomial queue as a linked list of binomial trees, keeping at most one tree of each order, with the list holding a reference to each tree's root. Finding the minimum of the whole queue then requires searching through our list of roots, which is O(log n). Adding a new element is equivalent to adding a tree of order 0; we then merge trees together until there is at most one tree of each order.

If you remember the sections on amortized analysis, the binomial queue's insertion process should look familiar. The merging process, wherein trees "overflow" and combine into larger 2^{n} trees, is identical to the binary addition problem, so the amortized cost of insertion is O(1). Similarly, removing the minimum splits a larger tree into a set of smaller ones, which are then merged back in. Long story short, every operation boils down to a small number of linked list traversals or tree merges.
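The carry-style insertion above can be sketched as follows. For brevity this represents the root list as a dictionary keyed by order rather than a linked list; the `forest`, `link`, `insert`, and `find_min` names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    key: int
    order: int = 0
    children: list = field(default_factory=list)

def link(a, b):
    """O(1) merge of two equal-order trees: larger root becomes a child."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    a.order += 1
    return a

def insert(forest, key):
    """Add an order-0 tree, then carry upward exactly like binary addition:
    while a tree of the carry's order already exists, link them and retry
    at the next order."""
    carry = Node(key)
    while carry.order in forest:
        carry = link(forest.pop(carry.order), carry)
    forest[carry.order] = carry

def find_min(forest):
    """Scan the O(log n) roots for the smallest key."""
    return min(t.key for t in forest.values())
```

After inserting five elements, the forest holds trees of orders 2 and 0, mirroring the binary representation of five (101), and `find_min` only inspects those two roots.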