# Hyperoperation

In mathematics, the hyperoperation sequence[nb 1] is an infinite sequence of arithmetic operations (called hyperoperations) that starts with the unary operation of successor (n = 0), then continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3), after which the sequence proceeds with further binary operations that extend beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence was named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration (n = 4), pentation (n = 5), hexation (n = 6), etc.) and can be written using n - 2 arrows in Knuth's up-arrow notation (when n ≥ 3). Each hyperoperation may be understood recursively in terms of the previous one by:

${\begin{matrix}a\uparrow ^{m}b&=&\underbrace {a\uparrow ^{m-1}(a\uparrow ^{m-1}(a\uparrow ^{m-1}(...(a\uparrow ^{m-1}(a\uparrow ^{m-1}a))...)))} \\&&b{\mbox{ copies of }}a\end{matrix}}$ (m ≥ 0)

Each hyperoperation may also be defined using only the recursion rule from its definition, as in Knuth's up-arrow version of the Ackermann function:

$a\uparrow ^{m}b=a\uparrow ^{m-1}(a\uparrow ^{m}(b-1))$ (m ≥ -1)
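This recursion rule transcribes directly into code. Below is a minimal Python sketch of the up-arrow operator (the function name `up` is illustrative, not standard; zero arrows are taken to denote multiplication, an assumption chosen so the recursion bottoms out):

```python
def up(a, m, b):
    """Compute a (m arrows) b via the rule a↑^m b = a↑^(m-1)(a↑^m (b-1)).

    m = 1 is exponentiation (one arrow); m = 0 is taken to be
    multiplication so the recursion terminates.
    """
    if m == 0:
        return a * b   # zero arrows: multiplication
    if b == 0:
        return 1       # empty iteration
    return up(a, m - 1, up(a, m, b - 1))
```

For example, `up(2, 2, 3)` computes 2↑↑3 = 2^(2^2) = 16. Only slightly larger arguments exceed any practical memory, which is precisely the point of the notation.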

This notation makes it easy to express numbers far larger than scientific notation can, such as Skewes' number and googolplexplex, but there are some numbers which even it cannot easily express, such as Graham's number and TREE(3).

This recursion rule is common to many variants of hyperoperations (see below).

## Definition

$H_{n}(a,b)={\begin{cases}b+1&{\text{if }}n=0\\a&{\text{if }}n=1,b=0\\0&{\text{if }}n=2,b=0\\1&{\text{if }}n\geq 3,b=0\\H_{n-1}(a,H_{n}(a,b-1))&{\text{otherwise}}\end{cases}}\,\!$

(Note that for n = 0, the binary operation essentially reduces to a unary operation (the successor function) by ignoring the first argument.)

For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as

$H_{0}(a,b)=b+1\,\!,$
$H_{1}(a,b)=a+b\,\!,$
$H_{2}(a,b)=a\cdot b\,\!,$
$H_{3}(a,b)=a^{b}\,\!,$

and for n ≥ 4 it extends these basic operations beyond exponentiation to what can be written in Knuth's up-arrow notation as

$H_{4}(a,b)=a\uparrow \uparrow {b}\,\!,$ $H_{5}(a,b)=a\uparrow \uparrow \uparrow {b}\,\!,$ ...
$H_{n}(a,b)=a\uparrow ^{n-2}b{\text{ for }}n\geq 3\,\!,$ ...

Knuth's notation could be extended to negative indices ≥ -2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

$H_{n}(a,b)=a\uparrow ^{n-2}b{\text{ for }}n\geq 0.\,\!$

The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, and so on. Noting that

$a+b=1+(a+(b-1)),\qquad a\cdot b=a+(a\cdot (b-1)),\qquad a^{b}=a\cdot \left(a^{b-1}\right),$

the relationship between the basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation term; so a is the base, b is the exponent (or hyperexponent), and n is the rank (or grade).

In common terms, the hyperoperations are ways of compounding numbers, each growing faster than the last through iteration of the previous hyperoperation. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations; the successor operation (producing x + 1 from x) is the most primitive; addition specifies the number of times the successor is to be applied to a number; multiplication specifies the number of times a number is to be added to itself; and exponentiation specifies the number of times a number is to be multiplied by itself.

## Examples

This is a list of the first seven (0th to 6th) hyperoperations. (Note that in this article, we define $0^{0}$ as 1.)

| n | Operation, $H_{n}(a,b)$ | Name |
|---|---|---|
| 0 | $b+1$ | successor |
| 1 | $a+b$ | addition |
| 2 | $a\cdot b$ | multiplication |
| 3 | $a^{b}$ | exponentiation |
| 4 | $a\uparrow \uparrow b$ | tetration |
| 5 | $a\uparrow \uparrow \uparrow b$ | pentation |
| 6 | $a\uparrow \uparrow \uparrow \uparrow b$ | hexation |

## Special cases

Hn(0, b) =

- 0, when n = 2; or n = 3 and b ≥ 1; or n ≥ 4 and b odd (≥ -1)
- 1, when n = 3 and b = 0; or n ≥ 4 and b even (≥ 0)
- b, when n = 1
- b + 1, when n = 0

Hn(a, 0) =

- 0, when n = 2
- 1, when n = 0, or n ≥ 3
- a, when n = 1

Hn(a, -1) =[nb 2]

- 0, when n = 0, or n ≥ 4
- a - 1, when n = 1
- -a, when n = 2
- ${\frac {1}{a}}$, when n = 3
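For b ≥ 0 these special cases can be checked mechanically against the recursive definition. A small self-contained Python check follows (the b = -1 values arise from running the recursion backwards, so they are not tested here):

```python
def H(n, a, b):
    # Recursive definition from the Definition section.
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return H(n - 1, a, H(n, a, b - 1))

# H_n(0, b): 0 when n = 2, or n = 3 with b >= 1, or n >= 4 with b odd
assert H(2, 0, 5) == 0 and H(3, 0, 4) == 0 and H(4, 0, 3) == 0
# H_n(0, b): 1 when n = 3 with b = 0, or n >= 4 with b even
assert H(3, 0, 0) == 1 and H(4, 0, 2) == 1
# H_n(a, 0): 0, 1, or a depending on n
assert H(2, 7, 0) == 0 and H(5, 7, 0) == 1 and H(1, 7, 0) == 7
```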

## History

One of the earliest discussions of hyperoperations was that of Albert Bennett in 1914, who developed some of the theory of commutative hyperoperations (see below). About 12 years later, Wilhelm Ackermann defined the function $\phi (a,b,n)\,\!$ which somewhat resembles the hyperoperation sequence.

In his 1947 paper, R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, etc.). As a three-argument function, e.g., $G(n,a,b)=H_{n}(a,b)$, the hyperoperation sequence as a whole is seen to be a version of the original Ackermann function $\phi (a,b,n)\,\!$ (recursive but not primitive recursive), as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.

The original three-argument Ackermann function $\phi \,\!$ uses the same recursion rule as Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, $\phi (a,b,n)\,\!$ defines a sequence of operations starting from addition (n = 0) rather than from the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Second, the initial conditions for $\phi \,\!$ result in $\phi (a,b,3)=a\uparrow \uparrow (b+1)\,\!$, thus differing from the hyperoperations beyond exponentiation. The significance of the b + 1 in this expression is that $\phi (a,b,3)\,\!$ = $a^{a^{\cdot ^{\cdot ^{\cdot ^{a}}}}}\,\!$, where b counts the number of operators (exponentiations) rather than the number of operands ("a"s), as the b in $a\uparrow \uparrow b\,\!$ does, and so on for the higher-level operations. (See the Ackermann function article for details.)
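The shift in indexing and initial conditions can be observed by running φ directly. Here is a hedged Python transcription (the base cases for n = 1 and n = 2 are the natural ones, a·0 = 0 and a^0 = 1, an assumption made so that multiplication and exponentiation come out correctly):

```python
def phi(a, b, n):
    """Ackermann's original 3-argument function:
    n = 0 is addition, n = 1 multiplication, n = 2 exponentiation."""
    if n == 0:
        return a + b
    if b == 0:
        return 0 if n == 1 else (1 if n == 2 else a)
    return phi(a, phi(a, b - 1, n), n - 1)
```

With these conditions, `phi(2, 2, 3)` gives 16 = 2↑↑3, illustrating that φ(a, b, 3) equals a↑↑(b + 1) rather than a↑↑b.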

## Notations

This is a list of notations that have been used for hyperoperations:

- Hyperoperation notation, $H_{n}(a,b)$ (used in this article)
- Knuth's up-arrow notation, $a\uparrow ^{n-2}b$ (for n ≥ 3)
- Goodstein's notation, $G(n,a,b)$
- Square bracket notation, $a[n]b$
- Conway chained arrow notation, $a\rightarrow b\rightarrow (n-2)$ (for n ≥ 3)

### Variant starting from a


In 1928, Wilhelm Ackermann defined a 3-argument function $\phi (a,b,n)$ which gradually evolved into the 2-argument function now known as the Ackermann function. The original Ackermann function $\phi$ was less similar to modern hyperoperations, because his initial conditions start with $\phi (a,0,n)=a$ for all n > 2. He also assigned addition to n = 0, multiplication to n = 1, and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond.

Another initial condition that has been used is $A(0,b)=2b+1$ (where the base is constant $a=2$ ), due to Rózsa Péter, which does not form a hyperoperation hierarchy.

### Variant starting from 0

In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows. Since then, many other authors have renewed interest in applying hyperoperations to floating-point representation. While discussing tetration, Clenshaw et al. assumed the initial condition $F_{n}(a,0)=0$, which makes yet another hyperoperation hierarchy. Just as in the previous variant, the fourth operation is very similar to tetration, but offset by one.

### Commutative hyperoperations

Commutative hyperoperations were considered by Albert Bennett as early as 1914; this is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule

$F_{n+1}(a,b)=\exp(F_{n}(\ln(a),\ln(b)))$

which is symmetric in a and b, meaning all hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy.
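Bennett's recursion is easy to experiment with numerically. A short Python sketch follows, under the assumption that level 1 is ordinary addition (so level 2 recovers multiplication, since exp(ln a + ln b) = ab):

```python
import math

def F(n, a, b):
    """Commutative hyperoperations: F_{n+1}(a, b) = exp(F_n(ln a, ln b)).

    Assumes F_1 is addition; a and b must be positive for n >= 2.
    """
    if n == 1:
        return a + b
    return math.exp(F(n - 1, math.log(a), math.log(b)))
```

Level 3 gives exp(ln a · ln b), which is symmetric in a and b but differs from exponentiation, in keeping with the remark that the sequence does not contain exponentiation.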