Ah yes, it is impossible to write a computer program that can analyse
any possible computer program and tell you whether it halts in a finite
number of steps.
After reading about Cantor's diagonal argument, the cardinality of the
continuum, Turing degrees, and other mind-blowing stuff, it seems that
the *reason* the Halting Problem is undecidable boils down to this:
If there were a Magic() subroutine that could take the object code of
your whole program and tell you whether it halts or not, then you could
write a BlackMagic() subroutine that calls Magic() on BlackMagic()'s own
object code and then purposely does the opposite of whatever Magic()
claims it's going to do. Hence, it is impossible for Magic() to yield
the correct answer for BlackMagic().
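In Python-flavoured pseudocode (magic() here stands for the hypothetical
oracle, so naturally none of this can actually run; the names are mine,
purely for illustration):

    def black_magic():
        if magic(black_magic):   # oracle says black_magic() halts...
            while True:          # ...so spite it by looping forever
                pass
        else:                    # oracle says it never halts...
            return               # ...so spite it by halting at once

Whatever magic() answers about black_magic(), black_magic() promptly
makes that answer wrong.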
Interestingly, even if you build some kind of hyper-computer that can
solve the Halting Problem, then by the argument above you could define a
new Hyper-Halting Problem which the hyper-computer would be just as
powerless to solve.
Looking the other way, the argument above doesn't work if the program to
be analysed isn't allowed to call Magic(), or perhaps one of the
operations used within Magic(). Which seems to just restate the fact
that a system can only prove results about systems weaker than itself.
Oddly enough:
http://mathworld.wolfram.com/HaltingProblem.html
"The halting problem is solvable for machines with less than four states."
That doesn't quite sound right...
The Halting Problem cannot be solved. You cannot write a program which
can decide halting for *every possible program*. But can you write one
for "the kind of programs that human beings tend to write"?
Well, first of all, good luck finding a rigorous definition for "the
kind of programs that human beings tend to write". But, assuming you
find one, the argument above still seems to apply. In other words, if it
*is* somehow possible to write a program that can determine whether
hand-written programs halt, the solution program cannot be hand-written. (!)
Another puzzling thing: Deciding whether a program halts within, say,
1000 steps *is* trivially solvable. Just run the program for (up to)
1000 steps, and see if it halts.
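As a concrete sketch (my own toy model, nothing standard): represent a
program as a Python generator that yields once per simulated step, and
the bounded decider is just a loop.

    def halts_within(program, bound=1000):
        # Run `program` (a zero-argument generator function) for at
        # most `bound` steps and report whether it finished in time.
        steps = program()
        for _ in range(bound):
            try:
                next(steps)
            except StopIteration:
                return True      # finished within the bound
        return False             # still going after `bound` steps

    def quick():                 # halts after three steps
        for _ in range(3):
            yield

    def forever():               # never halts
        while True:
            yield

    print(halts_within(quick))   # True
    print(halts_within(forever)) # False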
It's quite obvious that that ought to work. However, it looks like the
argument above should still apply, making it impossible to solve this
problem.
So what happens if you implement Magic() this way? Does BlackMagic()
halt or not halt? It seems that BlackMagic() would immediately call
Magic(), which would immediately begin simulating BlackMagic(), which
would immediately call Magic(), which would start simulating
BlackMagic()... Eventually, at some point, the top-level Magic() would
give up on its simulation and report back to the top-level BlackMagic()
that BlackMagic() takes more than 1000 steps to execute. Whereupon...
...Magic() has already used up thousands of steps itself, and therefore
BlackMagic() can't be contrary and halt in fewer than 1000 steps to
invalidate the analysis. Ah, trixy!
So it seems that the impossibility of solving the Halting Problem lies
in Magic() being expected to predict the outcome of an unbounded number
of cycles of work using only a finite number of cycles of processing.
Interesting...
That suggests another interesting question: Can you write a program that
takes any possible terminating program, and produces an equivalent
program that yields the same result in strictly fewer steps? (E.g., if
program P yields a result in K steps, Q transforms P into a program that
yields the same result in only K-1 steps.)
Unlike the Halting Problem, nobody would seriously expect the answer to
be yes. Which is interesting, because when you think about it, they're
very similar programs really.
If Magic(P) computes the same thing as P but takes exactly one fewer
step, then Magic(Magic(P)) computes the same thing in two fewer steps.
By induction, applying Magic() enough times lets any result be computed
in zero steps, which is impossible. Hence, Magic() does not exist (or at
least, doesn't work properly).
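The same induction, sketched in code (magic() again being the
hypothetical speed-up subroutine, which is exactly what can't exist):

    def speedup(p, k):
        # If magic(p) really returned an equivalent program that ran
        # one step faster, iterating it would shave off k whole steps:
        for _ in range(k):
            p = magic(p)         # hypothetical; cannot actually exist
        return p

Feed speedup() a program together with its own step count and you'd get
an equivalent program that runs in zero steps. Absurd.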
Here we're talking about *computation* compression, which is not unlike
the *data* compression case. If a compression algorithm could always
compress data by at least 1 bit, you could compress anything into zero
bits. And, conversely, decompress zero bits into anything. The latter is
obviously impossible.
The way the theorem is usually phrased is that "K bits can represent 2^K
possibilities, but K-1 bits can only decompress into 2^(K-1)
possibilities, which is *half* as many". I suppose you could relate the
number of cycles that a program runs for to the number of possible
outputs it can generate and prove it that way...
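You can at least sanity-check the counting with plain Python arithmetic:
there are 2^K inputs of length K, but only 2^K - 1 strings of *any*
strictly shorter length put together, so even the most generous
compressor comes up one output short.

    k = 10
    inputs  = 2 ** k                          # bit strings of length k
    shorter = sum(2 ** i for i in range(k))   # all strictly shorter ones
    print(inputs, shorter)                    # 1024 1023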
I note with some interest that if it's impossible for a program to not
halt, then the Halting Problem becomes [trivially] solvable. That's
cute. But why is it true? Because BlackMagic() tries to not halt if
Magic() says it will halt. If non-termination is impossible, then
BlackMagic() cannot have its evil way.
In fact, it seems that Magic() can function correctly so long as you
can't implement BlackMagic(), or something equivalent to it. So if you
want to know how to make the Halting Problem tractable, maybe you just
need to look at what components are required to implement BlackMagic()
[assuming that Magic() already exists].
For example, it appears as if it ought to be possible to implement
Magic() as a partial function that is defined only for programs that do
not call Magic(), nor contain an instruction sequence isomorphic to it.
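At the interface level, such a partial Magic() might look something like
this sketch (entirely my own invention; the refusal test, just scanning
the source for the oracle's own name, is a very crude stand-in for
"contains an instruction sequence isomorphic to Magic()"):

    import inspect

    def partial_magic(program):
        # Hypothetical partial halting oracle: refuse any program whose
        # source mentions the oracle itself, so that BlackMagic() can't
        # be built against it.
        if "partial_magic" in inspect.getsource(program):
            return None          # outside the domain; no answer given
        # For everything else we'd somehow have to decide halting --
        # which part, of course, nobody knows how to write.
        raise NotImplementedError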
In reality, I guess that might be quite a big limitation. For example,
there is Rule 30. This cellular automaton can be computed by the logic
function Y' = X xor (Y or Z). So I imagine with just a few bitwise (or
even integer arithmetic) instructions, you could implement Rule 30. And
its equally tiny cousin Rule 110 is known to be Turing-complete. Oh
dears...
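For what it's worth, here's that formula chewing through a few
generations on the bits of a Python integer (my own sketch; boundary
handling is ignored, since Python's unbounded integers just grow as the
pattern spreads):

    def rule30_step(row):
        # New cell = left XOR (centre OR right). Higher bits print
        # further left, so the left neighbour is the next-higher bit.
        return (row >> 1) ^ (row | (row << 1))

    row = 1 << 20                # start from a single live cell
    for _ in range(8):
        print(format(row, '042b'))
        row = rule30_step(row)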