Monday, February 18, 2008

Scheme For Enlightenment Part Eighteen - Ackermann

The Ackermann function is often used as a benchmark. It beats the crap out of your recursion system. Even so, it isn't a very good benchmark. Mainly, it's tough to scale it to a reasonable run time on various systems. You can't just feed it new arguments and extrapolate the times, though it could be put into a simple loop. The function itself is pretty simple. It only uses addition and subtraction. Yet its run time is astonishing. There are tons of implementations out there. Here's one.

(define (a n m)
  (if (eqv? n 0) (+ m 1)
      (if (eqv? m 0)
          (a (- n 1) 1)
          (a (- n 1) (a n (- m 1))))))

The function a takes two arguments, n and m. If n is zero, it returns m + 1. This is the base condition, and every call chain eventually grinds its way down to this ground state. If n isn't zero but m is zero, it calls itself with n - 1 and 1. Finally, if neither n nor m is zero, it returns a funny nested call: it calls itself with n - 1, but the second argument is the return value of another call to itself. This last bit is interesting, as the return value of the Ackermann function can be really big. So it isn't obvious that this function will ever return. It does. Eventually. If your machine is big enough. And if your Universe has been around long enough.
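For example, here's a hand trace of (a 1 1), showing how the nested calls unwind:

(a 1 1)
= (a 0 (a 1 0))    ; n and m are both nonzero
= (a 0 (a 0 1))    ; in the inner call, m is zero
= (a 0 2)          ; (a 0 1) returns 1 + 1 = 2
= 3                ; n is zero, so return 2 + 1 = 3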

If you explore this function, you find that it seems to hit a brick wall. The enormous difference in speed between LispMe on the Palm and C on the desktop (146,000x) amounts to essentially nothing. You run into this wall right around (a 3 13).

int ack(int n, int m)
{
    if (n == 0) {
        return m + 1;                      /* base case */
    }
    if (m == 0) {
        return ack(n - 1, 1);
    } else {
        return ack(n - 1, ack(n, m - 1));  /* the deeply nested case */
    }
}

So if you feed this C program arguments like (a 1 1), (a 1 13), (a 2 1), (a 2 13), (a 3 1), and even (a 3 11), you get answers quickly (in less than 20 seconds). The argument (a 3 12) takes more like a minute, and (a 3 13) takes 3 1/2 minutes. And yet (a 3 14) stubbornly refuses to run. What i get is a stack overflow crash, but only after quite some time goes by.
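For what it's worth, here's a minimal sketch of the kind of test driver that can be wrapped around the ack function above. It isn't the exact one used here; the names are illustrative, and the timings and the stack limit will vary by machine and compiler.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int ack(int n, int m);   /* the function above, compiled in the same file */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s n m\n", argv[0]);
        return 1;
    }
    int n = atoi(argv[1]);
    int m = atoi(argv[2]);

    clock_t start = clock();
    int result = ack(n, m);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("(a %d %d) = %d  [%.1f seconds]\n", n, m, result, secs);
    return 0;
}

Compile it together with ack and feed it the arguments on the command line, something like ./ack 3 11.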

The Wikipedia article explains what's going on. While you read that, i'll write tomorrow's installment.

2 comments:

Anonymous said...

The Wikipedia article on the Ackermann function made my brain hurt.

Stephen said...

The Table of Values on the Wiki page shows that in row 3, that is (a 3 1), (a 3 2), and so on, each entry is roughly twice the previous one. Since the function does its work by counting up one step at a time, the run time also doubles, more or less. So a machine with twice as much memory might be expected to get one more entry in row 3. The formula for (a 3 n) is 2^(n+3)-3, so an unsigned 32 bit integer can store the answer up to n = 29. However, the stack space needed to compute it is much, much larger than the address space of a 32 bit machine.
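For example, (a 3 13) from the post comes out to 2^(13+3) - 3 = 65533.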

In row 4, though, the values go up much, much faster. (a 4 0) is 13, (a 4 1) is 65533, but (a 4 2) has 19,729 digits.
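Following the same pattern, (a 4 2) is 2^65536 - 3, and 2^65536 has 65536 times log10(2), or about 19,729, digits.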

Of course, Ackermann's function doesn't DO ANYTHING of value. So my interest in using heroics to compute larger values lies mostly in demonstrating heroics. It gets messy quickly.