POV-Ray : Newsgroups : povray.off-topic : Musing on the Halting Problem : Re: Musing on the Halting Problem
From: Darren New
Date: 7 Oct 2010 11:59:11
Message: <4cadee4f$1@news.povray.org>
Invisible wrote:
> the *reason* the Halting Problem is undecidable boils down to this:

That's just one of the proofs. The diagonalization proof is another. There 
are many programs (most?) that you can prove you can't analyze.
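The Magic()/BlackMagic() construction the quoted post alludes to can be made concrete. A minimal Python sketch (the function names come from the post; everything else is hypothetical): given ANY candidate halting decider, you can build a program that defeats it.

```python
def make_black_magic(magic):
    """Given a purported decider magic(prog) -> bool ("does prog halt?"),
    build a program that does the opposite of whatever magic predicts."""
    def black_magic():
        if magic(black_magic):   # decider says "halts"...
            while True:          # ...so loop forever instead
                pass
        # decider says "loops forever" -- so halt immediately
    return black_magic

# Any concrete decider is wrong about its own adversary. For instance,
# a decider that answers "never halts" for everything:
pessimist = lambda prog: False
adversary = make_black_magic(pessimist)
adversary()   # halts immediately, contradicting pessimist's verdict
```

No actual decider is called out as wrong here except the toy one; the point is that the same adversary construction works against any magic() you plug in.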

> Looking the other way, the argument above doesn't work if the program to 
> be analysed isn't allowed to call Magic(), or perhaps one of the 
> operations used within Magic(). Which seems to just restate the fact 
> that a system can only prove results about systems weaker than itself.

Yes. There are a number of statements about sets of integers that a given 
system cannot prove, but that a more powerful system can both prove true and 
prove unprovable in the weaker system. I.e., Gödel's incompleteness doesn't 
apply only to Gödel strings.

> That doesn't quite sound right...

To translate it out of Wolfram self-congratulation-speak, he's saying that 
there are no three-state Turing machines that are UTMs.

> The Halting Problem cannot be solved. 

It *can*, if your machine isn't sophisticated enough to represent the integers.

Can you solve the halting problem for all computer languages with no 
indefinite loops?  Of course - they all halt.

> can decide for *every possible program*. But can you write one for "the 
> kind of programs that human beings tend to write"?

I don't know why you think a machine with fewer than four states represents 
the programs humans tend to write.

> Well, first of all, good luck finding a rigorous definition for "the 
> kind of programs that human beings tend to write". But, assuming you 
> find one, the argument above still seems to apply. In other words, if it 
> *is* somehow possible to write a program that can determine whether 
> hand-written programs halt, the solution program cannot be hand-written. 
> (!)

This is nonsense. I have no idea what you're trying to say. Where did 
"hand-written" come into it?  You're attacking a straw man.

> Another puzzling thing: Deciding whether a program halts within, say, 
> 1000 steps *is* trivially solvable. Just run the program for (up to) 
> 1000 steps, and see if it halts.

Funnily enough, infinities make things much more complicated. That's why 
virtually every bit of math deals with infinities.

> It's quite obvious that that ought to work. However, it looks like the 
> argument above should still apply, making it impossible to solve this 
> problem.

What argument? The one about programs written by hand?

> So what happens if you implement Magic() this way? 

What way? By asking how many steps? No, you have a completely different 
function.  Figuring out if something halts in 1000 steps is done completely 
differently from figuring out if it halts ever, exactly because if the 
answer is "no" then you can't simulate the thing you're analyzing long 
enough to figure it out.
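The bounded check really is just simulation. A minimal sketch, representing a program as a Python generator where each yield counts as one step (that encoding is an assumption for illustration, not anything standard):

```python
def halts_within(program, max_steps):
    """program: a zero-argument generator function; each yield is one step.
    Returns True if it finishes within max_steps steps, False if we had to
    cut it off -- in which case it might still halt later, or never."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return True          # ran to completion within the budget
    return False                 # budget exhausted: no verdict on "ever"

def counts_to_ten():
    for i in range(10):
        yield

def loops_forever():
    while True:
        yield

halts_within(counts_to_ten, 1000)   # True
halts_within(loops_forever, 1000)   # False -- but only "not yet"
```

Note the asymmetry: a True answer is definitive, while a False answer tells you nothing about halting in general, which is exactly the point above.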

> So it seems that the fallacy of solving the Halting Problem is in 
> Magic() being able to predict the outcome of an infinite number of 
> cycles of work with some finite number of cycles of processing. 
> Interesting...

Yes, exactly.

> That suggests another interesting question: Can you write a program that 
> takes any possible terminating program, and yields the same result in 
> strictly fewer steps? (E.g., if program P yields a result in K steps, 
> program Q given program P as input yields the same result in only K-1 
> steps.)

No.

> Unlike the Halting Problem, nobody would seriously expect the answer to 
> be no.

I suspect, given your later writing, you meant "yes" in that sentence.

> The way the theorem is usually phrased is that "K bits can represent 2^K 
> possibilities, but K-1 bits can only decompress into 2^(K-1) 
> possibilities, which is *half* as many". I suppose you could relate the 
> number of cycles that a program runs for to the number of possible 
> outputs it can generate and prove it that way...

Especially on a TM.  A TM that runs for 1000 cycles can't write to 1001 
cells. Which is, incidentally, why you only get "unbounded" output from a 
TM, not infinite output. If you give a TM an infinite tape, the tape will 
never all get used up.
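A quick sanity check of that bound with a toy TM simulator (the encoding of transitions here is a hypothetical sketch, not any standard one): a machine run for n steps writes at most one cell per step, so at most n distinct cells.

```python
def cells_written(transitions, start_state, steps):
    """Run a toy one-tape Turing machine for `steps` steps and count the
    distinct tape cells it writes. `transitions` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); blank cells read as 0."""
    tape, head, state = {}, 0, start_state
    written = set()
    for _ in range(steps):
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        written.add(head)
        head += move
    return len(written)

# A one-state machine that writes 1 and marches right forever --
# the fastest possible tape consumer:
runner = {('A', 0): (1, +1, 'A'), ('A', 1): (1, +1, 'A')}
print(cells_written(runner, 'A', 1000))   # 1000 -- never more than the step count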

> I note with some interest that if it's impossible for a program to not 
> halt, then the Halting Problem becomes [trivially] solvable. That's 
> cute. But why is it true? Because BlackMagic() tries to not halt if 
> Magic() says it should halt. If non-termination is impossible, then 
> BlackMagic() cannot have its evil way.

Sounds like you're on your way to Rice's Theorem.
http://en.wikipedia.org/wiki/Rice%27s_theorem

> In fact, it seems that Magic() can function correctly so long as you 
> can't implement BlackMagic(), or something equivalent to it. So if you 
> want to know how to make the Halting Problem tractable, maybe you just 
> need to look at what components are required to implement BlackMagic() 
> [assuming that Magic() already exists].

The halting problem only makes sense in the context of partial functions.

> For example, it appears as if it ought to be possible to implement 
> Magic() as a partial function that is defined only for programs that do 
> not call Magic(), nor contain an instruction sequence isomorphic to it.

Not really.

> In reality, I guess that might be quite a big limitation. For example, 
> there is Rule 30. This cellular automaton can be computed by the logic 
> function Y' = X xor (Y or Z). So I imagine with just a few bitwise (or 
> even integer arithmetic) instructions, you could implement Rule 30. And 
> it's known to be Turing-complete. Oh dears...

Right. Exactly. There are an infinite number of programs isomorphic to 
BlackMagic, and you can't tell which ones they are. Plus, even if you disallow 
BlackMagic, you still haven't proven the existence of Magic. You've just 
ruined one proof of its impossibility.
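The Rule 30 update quoted above (Y' = X xor (Y or Z), reading X, Y, Z as the left, center, and right neighbors) really does fit in a couple of bitwise operations. A sketch using a Python int as an unbounded row of cells:

```python
def rule30_step(row):
    """One Rule 30 update: each new cell is left XOR (center OR right).
    The row is encoded as a Python int (bit i = one cell); the coordinate
    frame shifts by one cell per step so the pattern has room to grow."""
    return row ^ ((row << 1) | (row << 2))

r = 1                  # a single live cell
r = rule30_step(r)     # 0b111
r = rule30_step(r)     # 0b11001
r = rule30_step(r)     # 0b1101111
```

Since Python ints are arbitrary-precision, the row can grow without bound, which is exactly the property that lets this tiny update rule support universal computation.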

Look back over the diagonalization argument. There's no mention in there of 
what gets computed for whom or what. It (basically) shows there are more 
partial functions than total functions, so no total function will be able to 
list all the partial functions.
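That counting argument can be made concrete: against any claimed enumeration of all total functions, diagonalization produces a total function the list must have missed. A hypothetical Python sketch:

```python
def diagonalize(enum):
    """enum(i) returns the i-th function in a claimed list of ALL total
    functions on the naturals. The diagonal function below is total, but
    differs from enum(i) at input i -- so it can't be anywhere in the list."""
    def d(n):
        return enum(n)(n) + 1
    return d

# Example: suppose someone claims the list f_i(n) = i * n is everything.
claimed = lambda i: (lambda n, i=i: i * n)
d = diagonalize(claimed)
d(3)   # 10, whereas the 3rd listed function gives f_3(3) = 9
```

The halting-problem version is the same trick one level down: a decider would let you computably enumerate behaviors, and the diagonal program escapes the enumeration.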

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."

