Kevin Wampler wrote:
> I'm not Darren, and he'd probably be more familiar with this than I, but
> here's what I recall.
>
> The class of computable functions (also known as recursive functions)
> are something that you're familiar with.
Er... no.
> Considered in their most basic
> context, these are all functions which take a natural number as input,
> produce a natural number as output, and which you could write a computer
> program to compute. If there's a program which can compute this
> function for each possible natural number (in other words, it always
> halts), then you have a total computable function.
>
> Way back in the early days of people starting to understand this sort
> of stuff, before diagonalization proofs were used for this kind of
> thing, there was a conjecture that this class of total computable
> functions was equivalent to the class of what are called primitive
> recursive functions. You can think of a primitive recursive function,
> basically (iirc), as any function you could code up if you were only
> allowed loops whose bounds can't change during their execution (no
> recursion, and obviously no while loops either).
>
> While it seems "obvious", since you're familiar with diagonalization-
> type arguments, that this class is strictly smaller than the class of
> total computable functions, you can get a bit of an idea of why people
> made this mistake if you try to actually think of a function which is
> total computable but not primitive recursive. Almost all the functions
> you normally deal with (addition, subtraction, division, factorial,
> exponentiation, etc.) are primitive recursive.
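(For concreteness, here's what that restriction looks like as a
Haskell sketch. The names are mine, purely for illustration: primRec
runs its step function exactly n times, a loop whose bound is fixed
up front, and addition and multiplication fall out of it with nothing
else.)

  -- f 0 = base; f (n+1) = step n (f n)
  primRec :: a -> (Integer -> a -> a) -> Integer -> a
  primRec base _    0 = base
  primRec base step n = step (n - 1) (primRec base step (n - 1))

  -- Addition and multiplication, built from primitive recursion alone:
  add :: Integer -> Integer -> Integer
  add m = primRec m (\_ acc -> acc + 1)

  mul :: Integer -> Integer -> Integer
  mul m = primRec 0 (\_ acc -> add m acc)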
>
> Here's where the Ackermann function comes in. It's obviously a total
> computable function, yet you can show that it cannot be primitive
> recursive. The essence of this argument is that the growth rate of the
> Ackermann function is larger than what could be achieved by any
> primitive recursive function. I believe the Ackermann function was
> the first concrete example of a well-defined total computable function
> which was not primitive recursive -- hence the interest in it.
I see... Thank you for the illumination. Now tell me what a
diagonalization argument is, and how you pronounce it without sounding
stupid. ;-)
> If you're curious as to why it grows so quickly, I suggest you code it
> up and examine its behavior for a bit. It's an enjoyable exercise in
> experimentally poking around in the guts of a rather odd mathematical
> function.
Note how the Wikipedia article explicitly mentions a Haskell
implementation because it's so easy. ;-) Of course, if you want to
*observe* its behaviour rather than just get the answer, you'd code it a
bit differently...
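Something like this, perhaps. The textbook definition first, then a
variant of my own devising that also counts the recursive calls, so
you can watch the explosion directly:

  ack :: Integer -> Integer -> Integer
  ack 0 n = n + 1
  ack m 0 = ack (m - 1) 1
  ack m n = ack (m - 1) (ack m (n - 1))

  -- Same function, but the second component of the result is the
  -- total number of calls made along the way.
  ackC :: Integer -> Integer -> (Integer, Integer)
  ackC 0 n = (n + 1, 1)
  ackC m 0 = let (r, c) = ackC (m - 1) 1 in (r, c + 1)
  ackC m n = let (r1, c1) = ackC m (n - 1)
                 (r2, c2) = ackC (m - 1) r1
             in  (r2, c1 + c2 + 1)

Try ackC 2 3, then ackC 3 3, then ackC 3 6; the call count runs away
far faster than the answer does.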
> Fun Fact! The union-find data structure has, for many of its
> operations, an amortized running time proportional to the *inverse*
> of the Ackermann function! That's the slowest-growing non-constant
> big-O bound of any algorithm I'm aware of (though I've never
> explicitly looked for others).
Fun fact: This is how I found out about "Ackermann function". Just now.
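For the curious, here's a rough Haskell sketch of that structure,
using mutable arrays, with the two tricks (path compression and union
by rank) whose combination gives the inverse-Ackermann bound. Names
and details are mine; a sketch, not a definitive implementation:

  import Data.Array.IO
  import Control.Monad (when)

  data UF = UF { parent :: IOArray Int Int, rank :: IOArray Int Int }

  newUF :: Int -> IO UF
  newUF n = UF <$> newListArray (0, n - 1) [0 .. n - 1]
               <*> newArray (0, n - 1) 0

  -- find with path compression: point every node on the path
  -- directly at the root.
  find :: UF -> Int -> IO Int
  find uf x = do
    p <- readArray (parent uf) x
    if p == x
      then return x
      else do
        root <- find uf p
        writeArray (parent uf) x root
        return root

  -- union by rank: hang the shallower tree under the deeper one.
  union :: UF -> Int -> Int -> IO ()
  union uf x y = do
    rx <- find uf x
    ry <- find uf y
    when (rx /= ry) $ do
      kx <- readArray (rank uf) rx
      ky <- readArray (rank uf) ry
      case compare kx ky of
        LT -> writeArray (parent uf) rx ry
        GT -> writeArray (parent uf) ry rx
        EQ -> do writeArray (parent uf) ry rx
                 writeArray (rank uf) rx (kx + 1)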
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*