2.04. The Ackermann Function
The Ackermann function is a famous fast-growing function that demonstrates how googological functions can be extremely simple yet still produce numbers that are very large by any reasonable standard. In this page I will first discuss the slightly unusual history behind that function, and then the function as we know it today. Then I will use induction to show how to translate the Ackermann function into the more straightforward up-arrows.
History of the Ackermann function
The Original Ackermann Function
The two-argument Ackermann function A(x,y) as we know it today was not the original Ackermann function. The original Ackermann function was devised by German mathematician Wilhelm Ackermann. It was not made for the sake of largeness, but to make a point in mathematics: he used the function as an example of a computable function that is not primitive recursive. A primitive recursive function is, roughly speaking, one whose growth rate falls short of Knuth's up-arrows.
The original Ackermann function was denoted with the Greek letter phi (φ), in an article in the German research journal "Mathematische Annalen" (Mathematical Annals). The monthly journal has run from 1869 to the present, and it's published in German, English, and French.
Ackermann defined his function like so:
φ(x,y,0) = x+y
φ(x,0,z) = α(x,z-1) (α is the Greek letter alpha)
φ(x,y,z) = φ(x,φ(x,y-1,z),z-1) otherwise
I call this 3-argument function the original Ackermann function.
The two-argument alpha function used in the case of φ(x,0,z) is a strange function, defined like so:
α(x,y) = 0 if y = 0
α(x,y) = 1 if y = 1
α(x,y) = x if y > 1
Ackermann seems to have chosen the alpha function such that his phi function can directly translate into the well-established multiplication and exponentiation. His usage of an alpha function is a little peculiar, but it clearly shows that his function was not just made for the sake of largeness.
If we forget about the weird alpha function, we can define the function a little less confusingly:
φ(x,y,0) = x+y
φ(x,y,1) = x*y
φ(x,y,2) = x^y
φ(x,0,z) = x where z > 2
φ(x,y,z) = φ(x,φ(x,y-1,z),z-1) where z > 2 and y > 0
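The simplified definition above is easy to transcribe directly. Here is a quick sketch in Python (the function name phi is mine; the alpha function's three cases are folded into the y = 0 branch):

```python
# Ackermann's original 3-argument phi function, with the alpha
# function folded in: phi(x,0,1) = 0, phi(x,0,2) = 1, phi(x,0,z) = x for z > 2.
def phi(x, y, z):
    if z == 0:
        return x + y
    if y == 0:
        return {1: 0, 2: 1}.get(z, x)
    return phi(x, phi(x, y - 1, z), z - 1)

print(phi(3, 4, 0))  # 7  (3+4)
print(phi(3, 4, 1))  # 12 (3*4)
print(phi(3, 4, 2))  # 81 (3^4)
print(phi(2, 1, 3))  # 4  (2^^2)
```

Note that anything much bigger than the last example (z = 3 with small arguments) will blow past Python's recursion limit, which is itself a hint at how fast the function grows.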
Ackermann's phi function is very reminiscent of the definition of up-arrows (especially under the alternate definition I gave), as you may have noticed. For example, let's try evaluating an expression with 3 as the last argument:
φ(3,3,3)
= φ(3,φ(3,2,3),2)
= φ(3,7625597484987,2) — φ(3,2,3) works out to 3^27 = 7625597484987. I would normally write that number with commas as 7,625,597,484,987, but here the commas can be confused with commas separating arguments.
= 3^7,625,597,484,987 — that's equal to 3^^4, the 3.6-trillion-digit number we encountered in the article on up-arrows.
It can actually be shown that:
φ(x,y,3) = x^^(y+1)
φ(x,y,z) ~ x↑^(z-1)(y+1) where z > 3 (here ↑^n means n up-arrows)
That means that 3 is the largest z in Ackermann's function such that φ(x,y,z) can be compactly translated to Knuth's up-arrows.
Peter's Ackermann Function
How did the two-argument "Ackermann function" noted A(x,y) come to be? It started in a 1935 paper by Rózsa Péter that was about recursive functions, just like Ackermann's paper, and appeared in Mathematische Annalen, also just like Ackermann's. The function was still not denoted A(x,y), but φx(y), still with the Greek letter phi. Péter defined it like so:
φ0(y) = 2y+1
φx(0) = φx-1(1)
φx(y) = φx-1(φx(y-1))
I call this 2-argument phi function the Ackermann-Peter function, although the Ackermann-Peter function more often refers to the Ackermann function as we know it today. That is erroneous though, since the Ackermann function as we know it today is still a bit different from Peter's version. This is what brings us to Robinson's variant of the Ackermann function.
Robinson's Ackermann Function
Robinson's version of the Ackermann function was defined by Raphael Robinson in a 1948 paper which is mostly a revised version of Peter's paper. Robinson defines his function Gx(y) like so:
G0(y) = y+1
Gx(0) = Gx-1(1)
Gx(y) = Gx-1(Gx(y-1))
Robinson said that this G function was a simpler version of Peter's function (which in turn was a simpler version of Ackermann's function), and cited Ackermann's and Peter's work in the paper. It's the function the name "Ackermann-Peter function" usually refers to, but I call it the Ackermann-Robinson function, since Robinson was the one who defined it precisely as above.
Although Robinson's function is essentially the same as the Ackermann function as we know it today, the function as we know it today is denoted a bit differently.
The modern Ackermann function
Some writers in mathematics decided to call Robinson's function simply the Ackermann function, and decided to denote it a little differently:
A(x,y) = Gx(y).
There are many different other variants of the Ackermann function. Check out this page by Robert Munafo if you want to know more about the various recursive functions which have been named the "Ackermann function" at some point.
Anyway, this 2-argument design allows for a nice, compact, and exceptionally simple definition:
A(0,y) = y+1
A(x,0) = A(x-1,1)
A(x,y) = A(x-1,A(x,y-1))
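This three-rule definition is short enough to transcribe directly. A minimal Python sketch (the memoization via lru_cache is my addition to keep small cases fast; it is not part of the definition):

```python
from functools import lru_cache

# The modern two-argument Ackermann function, transcribed rule by rule.
# Only safe for very small x: the recursion depth explodes quickly.
@lru_cache(maxsize=None)
def A(x, y):
    if x == 0:
        return y + 1       # base rule
    if y == 0:
        return A(x - 1, 1)  # edge rule
    return A(x - 1, A(x, y - 1))  # double recursion

print(A(1, 5))  # 7
print(A(2, 2))  # 7
print(A(3, 3))  # 61
```

Even A(4,2), a number with 19,729 digits, is out of reach for this naive recursion without raising Python's recursion limit enormously.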
Though maybe not immediately obvious, the function can easily produce very large values. In fact, none of the three articles I cited that devised variant Ackermann functions even mentioned their potential for making large numbers! Part of why the function's capability isn't obvious is that the function is difficult to work with by itself. To demonstrate this, let's try putting 2 as both of the arguments and reducing step by step.
Hmm. It takes 28 steps to reduce A(2,2) to 7. Think about it: 28 steps. How are we supposed to even estimate A(3,3) or A(3,4) or higher? How will we EVER know when the function starts to explode?
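To see how laborious the reduction is without doing it by hand, here's a sketch that tallies rule applications while evaluating A(2,2). (The names are mine, and the exact count depends on what you consider a single step, so it lands near, rather than exactly at, the 28 hand-reduction steps.)

```python
# Count one "step" per recursive call, i.e. per rule application.
def A_counted(x, y, counter):
    counter[0] += 1
    if x == 0:
        return y + 1
    if y == 0:
        return A_counted(x - 1, 1, counter)
    return A_counted(x - 1, A_counted(x, y - 1, counter), counter)

steps = [0]
value = A_counted(2, 2, steps)
print(value, steps[0])  # 7 27
```

Twenty-seven rule applications just to reach 7; by this bookkeeping A(3,3) already takes over two thousand.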
Believe it or not, this two-argument Ackermann function translates reasonably nicely to up-arrows; the next section of the article will use induction to show how. If you don't know, induction is a method of proof that works like so: you first prove something true in the first possible case, and then you prove that if it's true in the xth case, then it's true in the (x+1)th case. Induction is an essential tool in mathematical proofs, and as we'll see it's an efficient way to prove things.
Translating the Ackermann function to up-arrows
Now I'll use induction to show how we can translate the Ackermann function to up-arrows and standard math.
First off, let's do A(0,n):
A(0,n) = n+1 by definition
A(1,n): here we must look for patterns.
A(1,0) = A(0,1) = 2
A(1,1) = A(0,A(1,0)) = A(0,2) = 3
A(1,2) = A(0,A(1,1)) = A(0,3) = 4
It seems that A(1,n) = n+2. We need to prove this with the two steps of induction.
The first possible case is A(1,0): A(1,0) = A(0,1) = 1+1 = 2, which is correct.
Now for the second step:
Assume that for some x, A(1,x) = x+2.
Then, what is A(1,x+1)?
A(1,x+1) = A(0,A(1,x)) = A(1,x)+1 = (x+2)+1 = x+3, which is equal to (x+1)+2.
Now we have proven that if A(1,x) = x+2, then A(1,x+1) = (x+1)+2. We've used induction to quickly prove something!
Now for A(2,n):
A(2,0) = A(1,1) = 3 (by our proof that A(1,x) = x+2)
A(2,1) = A(1,A(2,0)) = A(1,3) = 5
A(2,2) = A(1,A(2,1)) = A(1,5) = 7
It appears that A(2,x) = 2x+3. But we need to use induction to prove that:
A(2,0) = 3, as we saw, = 2*0+3
If A(2,x) = 2x+3, then A(2,x+1) = A(1,A(2,x)) = A(1,2x+3) = 2x+3+2 = 2x+5 = 2(x+1)+3
Once again we've proven a formula for A(2,n)! Now let's try generalizing it:
When trying some values, the values for A(x,y) when x > 0 appear to be 2↑^(x-2)(y+3) - 3, where ↑^n means n up-arrows, with ↑^(-1) meaning + and ↑^0 meaning *. Let's try it, first with y = 0:
A(x,0) = A(x-1,1) by definition = 2↑^(x-3)(1+3) - 3 = 2↑^(x-3)4 - 3 = 2↑^(x-3)(2↑^(x-2)2) - 3 (since 2↑^n 2 = 4 at every arrow level) = 2↑^(x-2)3 - 3 = 2↑^(x-2)(0+3) - 3. This matches the formula.
Assume A(a,y) = 2↑^(a-2)(y+3) - 3 whenever a = x or a = x-1.
Then A(x,y+1) = A(x-1,A(x,y))
and: A(x-1,A(x,y)) = 2↑^(x-3)((2↑^(x-2)(y+3) - 3) + 3) - 3
= 2↑^(x-3)(2↑^(x-2)(y+3)) - 3
= (by the definition of up-arrows) 2↑^(x-2)(y+4) - 3
= 2↑^(x-2)((y+1)+3) - 3. This also matches the formula.
Now we have proven the formulas for Ackermann to up-arrows. We have:
A(0,x) = x+1
A(1,x) = x+2
A(2,x) = 2x+3
A(3,x) = 2^(x+3) - 3
A(4,x) = 2^^(x+3) - 3
A(5,x) = 2^^^(x+3) - 3
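As a sanity check, the translation we just proved can be verified numerically for small arguments. Below is a sketch with a small hyperoperation helper hyper(a, n, b) = a↑^n b (the helper and its name are mine, extended so that n = -1 means + and n = 0 means *, matching the convention used above):

```python
# hyper(a, n, b) computes a "up-arrow-n" b, with + and * as levels -1 and 0.
def hyper(a, n, b):
    if n == -1:
        return a + b
    if n == 0:
        return a * b
    if b == 0:
        return 1  # a up-arrow-n 0 = 1 for n >= 1
    return hyper(a, n - 1, hyper(a, n, b - 1))

def A(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

# Check A(x,y) = 2 up-arrow-(x-2) (y+3) - 3 for small arguments.
for x in range(1, 4):
    for y in range(4):
        assert A(x, y) == hyper(2, x - 2, y + 3) - 3
assert A(4, 0) == hyper(2, 2, 3) - 3  # both are 13
```

Anything much beyond A(4,0) overflows the recursion, which is why the inductive proof, not brute computation, is what establishes the general formula.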
Now let's try some values of the function.
Values and Properties of the Ackermann Function
Now we have proven how to translate the Ackermann function to up-arrows. With that in mind we should list some outputs of the function. Here's a table of some outputs (rows are values of x, columns are values of y):

        y=0   y=1    y=2          y=3              y=4
x=0     1     2      3            4                5
x=1     2     3      4            5                6
x=2     3     5      7            9                11
x=3     5     13     29           61               125
x=4     13    65533  2^65536 - 3  2^(2^65536) - 3  2^(2^(2^65536)) - 3
There are several interesting things to note about this table.
First off, each row grows at a specific rate. Row 0 (for x = 0) grows at a linear rate, as does row 1; row 2 grows at a faster linear rate, row 3 grows at an exponential rate, and row 4 grows at a tetrational rate. What is a tetrational growth rate? It's a growth rate similar to the numbers tetration produces, which we examined in the article on Knuth's up-arrows, so you should be familiar with it by now.
Row 5, then, grows at pentational rates, 6 at hexational rates, 7 at heptational rates, etc. This is actually a nice correspondence: row x at the table grows at a rate defined with the xth operator (in the sequence addition, multiplication, exponentiation, tetration, etc). So the Ackermann function grows at a similar rate to up-arrow notation.
A second thing to note is that each row after the first is an "nth term sequence" of the previous row. An nth term sequence is a sequence defined as follows:
f(0) = k, where k can be anything
f(n) = g(f(n-1)), where g(x) is any function
If we make k equal 1 and g(x) the xth prime number, we get a sequence called the prime recursion sequence. It's the best known example of an nth term sequence, and it starts 1, 2, 3, 5, 11, 31, 127, 709... 2 is the 1st prime number, 3 is the 2nd prime number, 5 is the 3rd prime number, 11 is the 5th prime number, and so on: each term is the (previous term)th prime. Hopefully this makes the idea of nth term sequences clearer. The Ackermann function might be considered the quintessential nth term sequence, because it consists of the successor function, the successor function's nth term sequence, that sequence's nth term sequence, and so on.
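The prime recursion sequence is easy to generate. A small sketch (the naive trial-division helper nth_prime is mine, and is far from the fastest way to find primes):

```python
# f(0) = 1, f(n) = the f(n-1)-th prime: the "prime recursion sequence".
def nth_prime(n):
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # trial division up to the square root
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

seq = [1]
for _ in range(7):
    seq.append(nth_prime(seq[-1]))
print(seq)  # [1, 2, 3, 5, 11, 31, 127, 709]
```

The same f(n) = g(f(n-1)) skeleton, with g swapped for a faster-growing function, is exactly what each row of the Ackermann function does to the row before it.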
Don't be shocked at the growth rate of such an innocent-looking function. Its definition is notably similar to Knuth's up-arrows, which is exactly what I'll discuss next in this article.
The idea of googological functions
Now, to demonstrate the idea of googological functions, I'll compare the Ackermann function to Knuth's arrows and show how they mirror each other. To show this I'll put the definitions side by side:

Ackermann function:              Up-arrows:
A(0,y) = y+1                     a↑b = a^b
A(x,y) = A(x-1,A(x,y-1))         a↑^n(b) = a↑^(n-1)(a↑^n(b-1))
A(x,0) = A(x-1,1)                a↑^n(1) = a
The first rules of both functions are called the base rule, the rule upon which the entire function is built. The base rule of a function in googology is usually based upon either addition or exponentiation. In the case of the Ackermann function addition is used, while in the case of up-arrows exponentiation is used. Addition is used if you want a function to have a natural non-arbitrary start, while exponentiation (which I prefer as a base rule) is used if you want a function to directly start off generating large numbers.
What about the second rule? The second rule in both cases is called the catastrophic rule, the rule that tells you how to decompose an expression. You can tell that a rule is a catastrophic rule if it feeds the original expression into one of its own arguments with another argument decreased. That rule is important because it is what makes iteration work, and as we have seen, iteration is vital in googological functions.
Finally, the third rule in both functions is called the prime rule, the rule that tells you when to stop using the catastrophic rule. The prime rule is important because without it you would have to apply the catastrophic rule forever, which would not allow you to generate a finite number.
Nathan Ho, the founder of Googology Wiki, once gave an analogy of these three essential rules to a car:
The Base Rule is the fuel, the Catastrophic Rule is the engine, and the Prime Rule is the brake. All parts of the system are necessary:
Without the fuel, the engine can't run.
Without the engine, the fuel and brake are useless.
Without the brake, the engine will just keep going forever (violating the law of infinity).
Those rules (which Nathan Ho poetically referred to as the Trinity of Iterative Functions) are present in almost every googological function. Bowers explicitly uses these names in his array notation, which is where the names originate. The Ackermann function and up-arrows have those rules as well. The fast-growing hierarchy has them in disguise, as does Extensible-E, Conway's chain arrow notation (which has two prime rules), Steinhaus-Moser notation (where one rule serves as both the base rule and the prime rule), and pretty much everything else.
I think the Ackermann function serves as a great example of how simple googologisms can get, and a great way to discuss the vital ideas of googological functions. This time I also decided to throw in a bit of history, because the Ackermann function has a somewhat unusual past: the function as we know it today was not invented by its namesake, but was based on something he invented.
Up next we'll examine some large number prefixes and suffixes to name numbers, then a few more notations.
Mathematische Annalen, volume 99, pages 118-133, "Zum Hilbertschen Aufbau der reellen Zahlen" ("On Hilbert's Construction of the Real Numbers"), 1928 paper by Wilhelm Ackermann where he defined his 3-argument phi function. Info came from page 120.
Mathematische Annalen, volume 111, pages 42-60, "Konstruktion nichtrekursiver Funktionen" ("Construction of Non-Recursive Functions"), 1935 paper by Rózsa Péter where she defined what I call the Ackermann-Peter function. Info came from page 56.
Bulletin of the American Mathematical Society, volume 54, number 10, pages 987-993, "Recursion and Double Recursion", 1948 paper by Raphael M. Robinson where he defined the Ackermann function as we know it today, which I call the Ackermann-Robinson function. Info came from page 988.