Asaf Karagila
I don't have much choice...

Factorial algorithms and recursive thoughts


Recursion: (see recursion). As the joke goes. But that's actually a misnomer, since that would be an ill-founded definition, which is exactly the point where you can't do a recursive definition. I'm not here to analyse that joke, though. I'm here to talk about something else.

Some time ago, I was looking for something, and I couldn't even tell you what, and I came across an algorithm for computing the factorial of a number by recursion. Let's review the standard recursive algorithm first:

\[\operatorname{factorial}_1(n)=\begin{cases} 1 & n=0\\ n\cdot\operatorname{factorial}_1(n-1) & n>0\end{cases}\]

This is the naive and most straightforward way to do it. This is the classical example of factorial in programming, and it's fine. It works, and if you're limited to "standard integers" in programming (so \(2^{64}\) or less, what with the year being 2020), then you can't even go beyond \(20!\) anyway, so that's efficient enough.
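For concreteness, here is a minimal Python sketch of this definition (the function name is mine, just for illustration):

    def factorial1(n):
        # naive linear recursion: one stack frame per multiplication
        if n == 0:
            return 1
        return n * factorial1(n - 1)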

In some compilers, or even languages, e.g. Steel Bank Common Lisp, where tail-call optimisation is available, we can slightly modify the algorithm to get better results:

\[\operatorname{factorial}_2(n)=\operatorname{fac}(n,1)\]

\[\operatorname{fac}(n,k)=\begin{cases} k & n=0\\\operatorname{fac}(n-1,n\cdot k) & n>0\end{cases}\]

The compiler will recognise this and turn it into a loop, making the run much more efficient. But what about languages without that sweet-sweet tail-call optimisation? Well, nothing much has changed from the first attempt.
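In the same vein, here is a Python sketch of the accumulator version; note that Python itself does not perform tail-call optimisation, so the benefit only materialises in implementations, like SBCL, that do:

    def factorial2(n):
        return fac(n, 1)

    def fac(n, k):
        # the multiplication happens before the recursive call, so a compiler
        # with tail-call optimisation can reuse the current stack frame
        if n == 0:
            return k
        return fac(n - 1, n * k)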

So, what happens if you're using a language that has "native support" for arbitrarily large natural numbers, and we want to compute something like \(10{,}000!\)? It has \(35{,}660\) decimal digits, so it's certainly not a problem to do that (memory-wise). But suddenly the depth of the recursion becomes more pertinent. And here is the algorithm I saw.

\[\operatorname{factorial}_3(n)=\operatorname{IP}(1,n)\]

\[\operatorname{IP}(i,j)=\begin{cases} i & i=j\\\operatorname{IP}(i,k)\cdot\operatorname{IP}(k+1,j) & i<j\text{, where }k=\lfloor(i+j)/2\rfloor\text{ is the midpoint}\end{cases}\]

The idea is to break the set \(\{1,\dots,n\}\) in half, compute the product over each half, combine, and do that recursively until the intervals are "small enough". The implementation I saw was actually mixed, and for short enough intervals resorted to a loop instead (or: pre-optimised tail recursion, if you want). The idea here is to have at most \(\log_2 n\) stack frames at any given moment. So instead of calling the same function \(10{,}000\) times, we are never more than \(15\) or so steps into the stack, since \(\lceil\log_2 10{,}000\rceil=14\). Suddenly, this becomes a big deal.
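A minimal Python sketch of that idea might look as follows; the cut-off of 8 is an arbitrary choice of mine, standing in for whatever "small enough" threshold the implementation I saw actually used:

    def factorial3(n):
        return product(1, n)

    def product(i, j):
        # product of the integers in the interval [i, j]
        if j - i < 8:                  # "small enough": fall back to a plain loop
            result = i
            for m in range(i + 1, j + 1):
                result *= m
            return result
        k = (i + j) // 2               # the midpoint of the interval
        return product(i, k) * product(k + 1, j)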

So instead of defining the recursion linearly, we do it on a tree. And this is why we came here. We didn't come here to talk about algorithm efficiency, I'm a set theorist, if it runs in finite time you should be quiet and thank me. We came here to talk about recursion and induction. Let's turn our heads to induction for a moment, another confusing matter for first year students.

Axiom. Suppose that \(S\) is a set of natural numbers such that \(0\in S\), and whenever \(n\in S\), \(n+1\in S\) as well. Then \(S=\Bbb N\).

This is a standard formulation of induction. Once we have it we can use it, and the proofs are at first kind of funny, since we need to define \(S\) explicitly. For example:

Proposition. For every natural number \(n>3\), \(n!>2^n\).

Proof. Let \(S\) denote the set \(\{n\in\Bbb N\mid (n+4)!>2^{n+4}\}\). We check that \(0\in S\): \[(0+4)!=4!=24>16=2^4=2^{0+4}.\] Suppose now that \(n\in S\), then \[(n+1+4)!=(n+5)\cdot(n+4)!>2\cdot(n+4)!>2\cdot 2^{n+4}=2^{n+1+4},\] so \(n+1\in S\) as well. Therefore by the induction axiom, \(S=\Bbb N\), as wanted. \(\square\)

It gets better as one understands how this template fits the one we were taught in high school, and maybe even clears up some misconceptions about how it works. But that's the easy part.

Axiom. Suppose that \(S\) is a set of natural numbers such that for every \(n\), if \(k\in S\) for every \(k < n\), then \(n\in S\) as well. Then \(S=\Bbb N\).

This is the standard formulation of strong or complete induction. This is utter magic for students. Is there a base case? Is there a successor case? How do you use it?

Proposition. Every natural number greater than \(1\) is divisible by a prime number.

Proof. Define \(S\) to be the set of natural numbers that are divisible by a prime, with \(0\) and \(1\) added in (vacuously). Suppose that \(n\) is any natural number such that for every \(k < n\), \(k\in S\). If \(n=0\) or \(n=1\), then \(n\in S\) by vacuity; if \(n\) is a prime, then it is divisible by itself, a prime; and finally, if \(n\) is not a prime, write it as \(p\cdot q\) where \(1 < p,q < n\). Since \(p\) is already in \(S\) by the induction hypothesis, \(p\) is divisible by some prime, and therefore so is \(n\). In every case \(n\in S\), so by the axiom \(S=\Bbb N\), as wanted. \(\square\)

This proof is the first example of a complete induction, and it's a nice one, since it is really unclear how to prove this by the "usual induction". How would knowing anything about \(n\) help us with \(n+1\) in this proof? It simply wouldn't. But it's still not quite clear: if there's no base case, why do we separate out the primes in a seemingly trivial way, much like what would normally be the base case in a standard induction? Well. The answer lies in generality.

Recall that a relation is well-founded if every non-empty subset has a minimal element. And it turns out that well-foundedness is exactly what we need in order to do recursion (and proofs by induction). So when you see induction or recursion, your first thought should always be well-foundedness. And in this proof we really utilise this fact. Of course we need to reformulate induction.

Axiom. Suppose that \((P,\leq)\) is a well-founded relation and \(S\subseteq P\) is such that for every \(p\in P\), if \(\{q\in P\mid q < p\}\subseteq S\), then \(p\in S\). Then \(S=P\).

Theorem. \((\Bbb N\setminus\{1\},\preceq)\) is well-founded, where \(n\preceq m\iff\exists k(n\cdot k=m)\).

Proof. Note that if \(n\in\Bbb N\) is non-zero and \(k\preceq n\), then \(k\leq n\). Therefore below each non-zero natural number there are only finitely many points. Now, if \(A\) is a non-empty set, either \(A=\{0\}\), in which case \(0\) is trivially minimal, or we can pick some non-zero \(n\in A\) and consider \(A_n=\{k\in A\mid k\preceq n\}\). Since \(A_n\) is finite and non-empty, it has a minimal element, \(k\). If \(k\) were not minimal in \(A\), there would be some \(m\in A\) such that \(m\prec k\), but then \(m\preceq n\), so \(m\in A_n\), contradicting the minimality of \(k\) in \(A_n\). \(\square\)

Corollary. Every natural number greater than \(1\) is divisible by a prime.

Proof. Note that the prime numbers are exactly the minimal elements of \((\Bbb N\setminus\{1\},\preceq)\). Let \(n\) be any natural number greater than \(1\); then \(D_n=\{k\in\Bbb N\setminus\{1\}\mid k\preceq n\}\) is a non-empty, downwards closed set, so by well-foundedness it has a minimal element \(p\). Since \(D_n\) is downwards closed, \(p\) is minimal in the whole partial order, not just in \(D_n\). So by definition \(p\) is a prime, and since \(p\in D_n\), it divides \(n\). \(\square\)
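The same descent can be phrased as a recursion along the divisibility order itself. Here is a hypothetical Python sketch of that (mine, not anything from the corollary): to find a prime divisor we keep stepping down to a proper divisor, and well-foundedness is exactly what guarantees that the descent terminates.

    def prime_divisor(n):
        # recursion on the divisibility order: step down to a proper divisor
        # until we reach a minimal element, i.e. a prime (assumes n > 1)
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return prime_divisor(d)   # d is strictly below n in the order
        return n                          # no proper divisor: n is prime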

This corollary's proof can be made even shorter if one observes that this is really a fact about well-founded orders: every element lies above a minimal element. This makes the proof ridiculously simple, to the point that you might be confused about why it's even a proof.

So really, what we're doing here is an inductive proof on a partial order different from \(\Bbb N\) with its usual ordering. Anyone who has taught a basic logic course has done similar things when proving something like unique readability: proofs "by induction on the length/complexity of the term/formula" are really just proofs by induction on the "sub-formula" order.

And this takes us back to the factorial algorithm. Instead of running the recursion on \(\{1,\dots,n\}\) linearly to compute \(n!\), we instead run it on a binary tree of depth \(\lceil\log_2 n\rceil\) (growing downwards, of course). And what we gain is not just less overhead, but also the understanding that recursion shouldn't always be linear. It is beneficial to look sideways, downwards, diagonally, and upside-down, and find different structures on which we can perform recursion and induction more clearly.
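To make the difference in stack usage concrete, here is a small, hypothetical experiment using the sketches above (run as a plain script; the recursion limit of 100 is an artificial choice of mine):

    import math
    import sys

    sys.setrecursionlimit(100)      # a deliberately tight stack for the demonstration

    # the tree-shaped recursion is only about 14 frames deep for n = 10,000
    assert factorial3(10_000) == math.factorial(10_000)

    try:
        factorial1(10_000)          # the linear recursion needs about 10,000 frames
    except RecursionError:
        print("the linear recursion blew the stack")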

If you tell that to someone who is teaching first-year students a basic course where induction is covered, they will likely say something like "Yes, you're right, but we just don't have time to dwell on that". But that's just wrong. Induction and recursion are so basic, and if you're training people to be able to solve abstract problems, or even one day become developers, then they will only benefit from the ability to look at a problem from a different angle.

(For two years now I've been planning to write an introductory paper about induction and recursion. One step at a time, I suppose. One day I will do it.)



Want to comment? Send me an email!