POV-Ray : Newsgroups : povray.off-topic : Musing on the Halting Problem
  Musing on the Halting Problem (Message 1 to 10 of 30)
From: Invisible
Subject: Musing on the Halting Problem
Date: 7 Oct 2010 10:43:35
Message: <4caddc97$1@news.povray.org>
Ah yes, it is impossible to write a computer program that can analyse 
any possible computer program and tell you whether it halts in a finite 
number of steps.

After reading about Cantor's diagonal argument, the cardinality of the 
continuum, Turing degrees, and other mind-blowing stuff, it seems that 
the *reason* the Halting Problem is undecidable boils down to this:

If there were a Magic() subroutine that could take the object code of 
your whole program and tell you whether it halts or not, then you could 
write a BlackMagic() subroutine that calls Magic() on BlackMagic()'s own 
object code and then purposely does the opposite of whatever Magic() 
claims it's going to do. Hence it is impossible for Magic() to yield the 
correct answer, since BlackMagic() will always do the opposite of 
whatever Magic() claims it will do.
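The self-reference can be sketched in a few lines of Python (hypothetical 
names; the `magic` argument stands in for the oracle, which of course 
cannot actually exist):

```python
def black_magic(magic):
    """Do the opposite of whatever the claimed halting-oracle `magic`
    predicts about black_magic itself."""
    if magic(black_magic):      # oracle claims "it halts"...
        while True:             # ...so refuse to halt
            pass
    return "halted"             # oracle claims "it never halts", so halt

# Whichever answer a candidate oracle gives, it is wrong about
# black_magic. E.g. an oracle that answers "never halts" is refuted:
print(black_magic(lambda program: False))   # prints "halted"
```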



Interestingly, even if you build some kind of hyper-computer that can 
solve the Halting Problem, then by the argument above you could define a 
new Hyper-Halting Problem which the hyper-computer would be just as 
powerless to solve.

Looking the other way, the argument above doesn't work if the program to 
be analysed isn't allowed to call Magic(), or perhaps one of the 
operations used within Magic(). Which seems to just restate the fact 
that a system can only prove results about systems weaker than itself.



Oddly enough:

http://mathworld.wolfram.com/HaltingProblem.html

"The halting problem is solvable for machines with less than four states."

That doesn't quite sound right...



The Halting Problem cannot be solved. You cannot write a program which 
can decide halting for *every possible program*. But can you write one 
for "the kind of programs that human beings tend to write"?

Well, first of all, good luck finding a rigorous definition for "the 
kind of programs that human beings tend to write". But, assuming you 
find one, the argument above still seems to apply. In other words, if it 
*is* somehow possible to write a program that can determine whether 
hand-written programs halt, the solution program cannot be hand-written. (!)



Another puzzling thing: Deciding whether a program halts within, say, 
1000 steps *is* trivially solvable. Just run the program for (up to) 
1000 steps, and see if it halts.
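As a sketch, modelling a "program" as a Python generator that yields once 
per step (my own convention, purely for illustration):

```python
def halts_within(program, n):
    """Bounded halting is trivially decidable: run the program for at
    most n steps and see whether it finishes."""
    g = program()
    for _ in range(n):
        try:
            next(g)
        except StopIteration:
            return True        # finished within the step budget
    return False               # still running after n steps

def short_program():           # halts after a few steps
    for _ in range(3):
        yield

def forever_program():         # never halts
    while True:
        yield

print(halts_within(short_program, 1000))    # True
print(halts_within(forever_program, 1000))  # False
```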

It's quite obvious that that ought to work. However, it looks like the 
argument above should still apply, making it impossible to solve this 
problem.

So what happens if you implement Magic() this way? Does BlackMagic() 
halt or not halt? It seems that BlackMagic() would immediately call 
Magic(), which would immediately begin simulating BlackMagic(), which 
would immediately call Magic(), which would start simulating 
BlackMagic()... Eventually, at some point, the top-level Magic() would 
give up on its simulation and report back to the top-level BlackMagic() 
that BlackMagic() takes more than 1000 steps to execute. Whereupon...

...Magic() has already used up thousands of cycles, and therefore 
BlackMagic() can't be contrary and halt in less than 1000 cycles to 
invalidate the analysis. Ah, trixy!

So it seems that the fallacy of solving the Halting Problem is in 
Magic() being able to predict the outcome of an infinite number of 
cycles of work with some finite number of cycles of processing. 
Interesting...



That suggests another interesting question: Can you write a program that 
takes any possible terminating program, and yields the same result in 
strictly fewer steps? (E.g., if program P yields a result in K steps, 
program Q given program P as input yields the same result in only K-1 
steps.)

Unlike the Halting Problem, nobody would seriously expect the answer to 
be no. Which is interesting, because when you think about it, they're 
very similar programs really.

If Magic(P) computes the same thing as P but takes exactly 1 fewer 
steps, then Magic(Magic(P)) computes the same thing in 2 fewer steps. 
And by induction, by applying Magic() enough times, any result can be 
computed in zero steps, which is impossible. Hence, Magic() does not 
exist (or at least, doesn't work properly).

Here we're talking about *computation* compression, which is not unlike 
the *data* compression case. If a compression algorithm could always 
compress data by at least 1 bit, you could compress anything into zero 
bits. And, conversely, decompress zero bits into anything. The latter is 
obviously impossible.

The way the theorem is usually phrased is that "K bits can represent 2^K 
possibilities, but K-1 bits can only decompress into 2^(K-1) 
possibilities, which is *half* as many". I suppose you could relate the 
number of cycles that a program runs for to the number of possible 
outputs it can generate and prove it that way...
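The counting half of that argument is easy to check mechanically (a 
throwaway sketch):

```python
# Count K-bit inputs versus all strictly shorter outputs.
K = 8
inputs = 2 ** K                            # 256 possible K-bit strings
outputs = sum(2 ** j for j in range(K))    # 255 strings of length < K
print(inputs, outputs)
# Pigeonhole: there are more K-bit strings than shorter strings, so some
# pair of inputs would have to share a compressed form -- lossless
# "always shrink by at least 1 bit" is impossible.
```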



I note with some interest that if it's impossible for a program to not 
halt, then the Halting Problem becomes [trivially] solvable. That's 
cute. But why is it true? Because BlackMagic() tries to not halt if 
Magic() says it should halt. If non-termination is impossible, then 
BlackMagic() cannot have its evil way.

In fact, it seems that Magic() can function correctly so long as you 
can't implement BlackMagic(), or something equivalent to it. So if you 
want to know how to make the Halting Problem tractable, maybe you just 
need to look at what components are required to implement BlackMagic() 
[assuming that Magic() already exists].

For example, it appears as if it ought to be possible to implement 
Magic() as a partial function that is defined only for programs that do 
not call Magic(), nor contain an instruction sequence isomorphic to it.

In reality, I guess that might be quite a big limitation. For example, 
there is Rule 30. This cellular automaton can be computed by the logic 
function Y' = X xor (Y or Z). So I imagine with just a few bitwise (or 
even integer arithmetic) instructions, you could implement Rule 30. And 
it's conjectured to be computationally universal (its relative, Rule 110, 
is proven Turing-complete). Oh dears...
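For what it's worth, that update rule really is a one-liner (a quick 
sketch with made-up helper names, cells beyond each edge assumed 0):

```python
def rule30_step(cells):
    """One step of Rule 30 on a row of 0/1 cells: each new cell is
    x XOR (y OR z), where x, y, z are the left, centre, and right
    neighbours of the old cell."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

# A single 1 grows into the familiar Rule 30 triangle: 111, 11001, ...
print(rule30_step([0, 0, 1, 0, 0]))   # [0, 1, 1, 1, 0]
print(rule30_step([0, 1, 1, 1, 0]))   # [1, 1, 0, 0, 1]
```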



From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 7 Oct 2010 11:59:11
Message: <4cadee4f$1@news.povray.org>
Invisible wrote:
> the *reason* the Halting Problem is undecidable boils down to this:

That's just one of the proofs. The diagonalization proof is another. There 
are many programs (most?) that you can prove you can't analyze.

> Looking the other way, the argument above doesn't work if the program to 
> be analysed isn't allowed to call Magic(), or perhaps one of the 
> operations used within Magic(). Which seems to just restate the fact 
> that a system can only prove results about systems weaker than itself.

Yes. There are a number of problems about sets of integers that you can 
prove are true and can prove are unprovable in more powerful mathematical 
systems. I.e., Godel's incompleteness doesn't apply only to Godel strings.

> That doesn't quite sound right...

To translate it out of Wolfram self-congratulation-speak, he's saying that 
there are no three-state Turing machines that are UTMs.

> The Halting Problem cannot be solved. 

It *can*, if your machine isn't sophisticated enough to represent the integers.

Can you solve the halting problem for all computer languages with no 
indefinite loops?  Of course - they all halt.
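A toy illustration of that (invented names): if the only loop construct 
has a fixed finite bound, the "oracle" degenerates into a constant 
function.

```python
def run_bounded(body, n):
    """A 'language' whose only control structure is a loop with a fixed,
    finite bound. Every program in it halts by construction."""
    for _ in range(n):
        body()
    return "halted"

def halts(program):
    """The halting oracle for this language: trivially, always True."""
    return True

steps = []
print(run_bounded(lambda: steps.append(1), 5), len(steps))
```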

> can decide for *every possible program*. But can you write one for "the 
> kind of programs that human beings tend to write"?

I don't know why you think a machine with less than four states represents 
programs humans tend to write.

> Well, first of all, good luck finding a rigorous definition for "the 
> kind of programs that human beings tend to write". But, assuming you 
> find one, the argument above still seems to apply. In other words, if it 
> *is* somehow possible to write a program that can determine whether 
> hand-written programs halt, the solution program cannot be hand-written. 
> (!)

This is nonsense. I have no idea what you're trying to say. Where did 
"hand-written" come into it?  You're attacking a straw man.

> Another puzzling thing: Deciding whether a program halts within, say, 
> 1000 steps *is* trivially solvable. Just run the program for (up to) 
> 1000 steps, and see if it halts.

Funny enough, infinities make things much more complicated. That's why 
virtually every bit of math deals with infinities.

> It's quite obvious that that ought to work. However, it looks like the 
> argument above should still apply, making it impossible to solve this 
> problem.

What argument? The one about programs written by hand?

> So what happens if you implement Magic() this way? 

What way? By asking how many steps? No, you have a completely different 
function.  Figuring out if something halts in 1000 steps is done completely 
differently from figuring out if it halts ever, exactly because if the 
answer is "no" then you can't simulate the thing you're analyzing long 
enough to figure it out.

> So it seems that the fallacy of solving the Halting Problem is in 
> Magic() being able to predict the outcome of an infinite number of 
> cycles of work with some finite number of cycles of processing. 
> Interesting...

Yes, exactly.

> That suggests another interesting question: Can you write a program that 
> takes any possible terminating program, and yields the same result in 
> strictly fewer steps? (E.g., if program P yields a result in K steps, 
> program Q given program P as input yields the same result in only K-1 
> steps.)

No.

> Unlike the Halting Problem, nobody would seriously expect the answer to 
> be no.

I suspect, given your later writing, you meant "yes" in that sentence.

> The way the theorem is usually phrased is that "K bits can represent 2^K 
> possibilities, but K-1 bits can only decompress into 2^(K-1) 
> possibilities, which is *half* as many". I suppose you could relate the 
> number of cycles that a program runs for to the number of possible 
> outputs it can generate and prove it that way...

Especially on a TM.  A TM that runs for 1000 cycles can't write to 1001 
cells. Which is, incidentally, why you only get "unbounded" output from a 
TM, not infinite output. If you give a TM an infinite tape, it will never 
all get used up.

> I note with some interest that if it's impossible for a program to not 
> halt, then the Halting Problem becomes [trivially] solvable. That's 
> cute. But why is it true? Because BlackMagic() tries to not halt if 
> Magic() says it should halt. If non-termination is impossible, then 
> BlackMagic() cannot have its evil way.

Sounds like you're on your way to Rice's Theorem.
http://en.wikipedia.org/wiki/Rice%27s_theorem

> In fact, it seems that Magic() can function correctly so long as you 
> can't implement BlackMagic(), or something equivalent to it. So if you 
> want to know how to make the Halting Problem tractable, maybe you just 
> need to look at what components are required to implement BlackMagic() 
> [assuming that Magic() already exists].

The halting problem only makes sense in the context of partial functions.

> For example, it appears as if it ought to be possible to implement 
> Magic() as a partial function that is defined only for programs that do 
> not call Magic(), nor contain an instruction sequence isomorphic to it.

Not really.

> In reality, I guess that might be quite a big limitation. For example, 
> there is Rule 30. This cellular automaton can be computed by the logic 
> function Y' = X xor (Y or Z). So I imagine with just a few bitwise (or 
> even integer arithmetic) instructions, you could implement Rule 30. And 
> it's known to be Turing-complete. Oh dears...

Right. Exactly. There are an infinite number of programs isomorphic to 
BlackMagic, and you can't tell which they are. Plus, even if you disallow 
BlackMagic, you still haven't proven the existence of Magic. You've just 
ruined one proof of it.

Look back over the diagonalization argument. There's no mention in there of 
what gets computed for who or what. It (basically) shows there are more 
partial functions than there are complete functions, so no complete function 
will be able to list all the partial functions.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: Musing on the Halting Problem
Date: 10 Oct 2010 03:57:59
Message: <4cb17206@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> If there was a Magic() subroutine that could take the object code to 
> your whole program and tell you whether it halts or not, then you could 
> write a BlackMagic() subroutine that calls Magic(BlackMagic()) and then 
> purposely does the opposite of whatever Magic() claims that it's going 
> to do. Hence, it is impossible for Magic() to yield the correct answer, 
> since BlackMagic() will do the opposite of whatever Magic() claims it 
> will do.

  That's just the simplest proof. It doesn't mean it's the only one, or
that the only way of producing an undecidable case is to make the program
call the halting problem solver itself.

  While this is not a proof of anything, it is, however, a marvelously simple
example of why solving the halting problem is extremely difficult:

  "A program that goes through all even numbers and checks whether one of
them is a counter-example to Goldbach's conjecture. The program terminates
when it finds the first such number."

  An algorithm which solves the halting problem would prove (or disprove)
Goldbach's conjecture with its answer: if the answer is "it never
halts", that proves the conjecture, while the answer "it halts"
disproves it (and the disproof would be valid even if no
counter-example could be given).
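  For concreteness, here is one way that program might look in Python
rather than Haskell (my own helper names; unbounded integers assumed):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_goldbach_sum(n):
    """Is the even number n a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n))

def goldbach_counterexample():
    """Halts if and only if Goldbach's conjecture is false."""
    n = 4
    while True:
        if not is_goldbach_sum(n):
            return n           # counter-example found: halt
        n += 2                 # conjecture holds so far: keep searching

# A halting oracle applied to goldbach_counterexample would settle the
# conjecture. Without one, we can only ever check up to a finite bound:
print(all(is_goldbach_sum(n) for n in range(4, 1000, 2)))   # True
```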

  How could an algorithm resolve whether that program will ever halt or not?
Top mathematicians have struggled with that problem for over 250 years, to
no avail. A program which solves the halting problem for any given program
would make settling basically *all* mathematical conjectures trivial.

  While, as said, this is not a proof that it's not possible, it gives a good
idea why it would be a tremendously difficult problem to solve.

-- 
                                                          - Warp



From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 10 Oct 2010 10:25:57
Message: <4cb1ccf5@news.povray.org>
Warp wrote:
>   An algorithm which solves the halting problem would prove (or disprove)
> the Goldbach's conjecture with its answer

FWIW, Rice's Theorem basically says *any* property that some but not all 
functions have can be turned into the halting problem. So if there's 
anything non-trivial you want to find out about what a function computes, 
it's just as impossible to compute as the halting problem is.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: Musing on the Halting Problem
Date: 10 Oct 2010 12:35:04
Message: <4cb1eb37@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> >   An algorithm which solves the halting problem would prove (or disprove)
> > the Goldbach's conjecture with its answer

> FWIW, Rice's Theorem basically says *any* property that some but not all 
> functions have can be turned into the halting problem. So if there's 
> anything non-trivial you want to find out about what a function computes, 
> it's just as impossible to compute as the halting problem is.

  One interesting thing about the example I mentioned is that the program
in question is quite short. I'm sure you could write the program in 2 or 3
lines of Haskell (of course it has to assume integers of unlimited size and
an unlimited amount of RAM). Yet proving whether this extremely short program
will ever terminate has puzzled top mathematicians for over 250 years.

-- 
                                                          - Warp



From: Tim Cook
Subject: Re: Musing on the Halting Problem
Date: 10 Oct 2010 20:34:30
Message: <4cb25b96@news.povray.org>
On 2010-10-07 09:43, Invisible wrote:
> Ah yes, it is impossible to write a computer program that can analyse
> any possible computer program and tell you whether it halts in a finite
> number of steps.

How does a person go about deciding the same problem?  Or is that 
impossible, too?

-- 
Tim Cook
http://empyrean.freesitespace.net



From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 10 Oct 2010 21:29:07
Message: <4cb26863$1@news.povray.org>
Tim Cook wrote:
> On 2010-10-07 09:43, Invisible wrote:
>> Ah yes, it is impossible to write a computer program that can analyse
>> any possible computer program and tell you whether it halts in a finite
>> number of steps.
> 
> How does a person go about deciding the same problem?  Or is that 
> impossible, too?

That's impossible too.

Is this true or false:   "I am lying."

That's just one of the sorts of problems.

Note that there are many individual problems where you *can* figure this 
out. It's just that there's no way to figure it out for *all* problems, and 
no way of categorizing in advance which ones you can figure it out for and 
which you can't.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Invisible
Subject: Re: Musing on the Halting Problem
Date: 11 Oct 2010 04:11:39
Message: <4cb2c6bb$1@news.povray.org>
On 10/10/2010 08:57 AM, Warp wrote:

>    While this is not a proof of anything, it is, however, a marvelously simple
> example of why solving the halting problem is extremely difficult:
>
>    "A program that goes through all even numbers and checks if one of them
> is a counter-example to the Goldbach's conjecture. The program terminates
> when it finds the first such number."
>
>    An algorithm which solves the halting problem would prove (or disprove)
> the Goldbach's conjecture with its answer (if the answer is "it never
> halts", then it proves the conjecture, while the answer "it halts"
> disproves it, and this disproval would be valid even if it couldn't give
> the counter-example).
>
>    How could an algorithm resolve whether that program will ever halt or not?
> Top mathematicians have struggled with that problem for over 250 years, to
> no avail. A program which solves the halting problem for any given program
> would make proving basically *all* mathematical problems trivial.
>
>    While, as said, this is not a proof that it's not possible, it gives a good
> idea why it would be a tremendously difficult problem to solve.

My favourite one is determining whether a point belongs to the 
Mandelbrot set. The program to do this is about 3 lines long. Depending 
on which point you choose to investigate, the program will either loop 
forever (indicating that the point belongs to the M set) or halt after 
finitely many iterations (indicating that the point belongs to the 
complement of M).
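Something like this (a sketch; `max_iter` is an artificial cut-off that 
the true membership test doesn't have):

```python
def mandelbrot_escapes(c, max_iter=1000):
    """Iterate z -> z*z + c from z = 0.  Escaping (|z| > 2) proves c is
    *outside* the Mandelbrot set; staying bounded for max_iter steps
    proves nothing -- the real membership test has no iteration bound."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i           # escaped: definitely outside M
    return None                # undecided after max_iter iterations

print(mandelbrot_escapes(1 + 0j))   # escapes quickly
print(mandelbrot_escapes(0j))       # None: 0 is in M, never escapes
```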

The trouble is, the most celebrated single property of the M set is that 
it's absurdly complicated.

When I was a teenager, I used to think that maybe instead of blindly 
iterating the equations, you could use the geometric properties of the 
set to figure out the answer faster. But now I realise that since the 
geometric forms repeat to infinity as well, there are still points which 
would be just as slow to categorise that way.

In fact, I've seen how there are different ways of representing real 
numbers (ratios, continued fractions, radicals, etc.), and for each such 
scheme, almost all real numbers have no finite representation. It seems 
reasonable that no matter what algorithm you use for determining 
membership of M, there will still be cases requiring infinitely many 
iterations.

No matter which way round you look at it, it's just not solvable in 
finite time, *every* time.

And this is just one of the possible computer programs you could write. 
We haven't even considered any of the more complicated generalised sets 
one might contemplate. An algorithm that knows the special properties of 
every single mathematically definable set, so that they can all be 
resolved in finite time? I doubt it...



From: Invisible
Subject: Re: Musing on the Halting Problem
Date: 11 Oct 2010 11:58:05
Message: <4cb3340d$1@news.povray.org>
On 07/10/2010 04:59 PM, Darren New wrote:
> Invisible wrote:
>> the *reason* the Halting Problem is undecidable boils down to this:
>
> That's just one of the proofs. The diagonalization proof is another.

The proof involving BlackMagic() is only a very slight variation on 
diagonalisation. (That argument seems to work by constructing a binary 
function over the program and its input, and then constructing a 
function from it which is different by construction from any possible 
computable function, demonstrating that the halting function isn't 
computable. Or something like that.)

>> Looking the other way, the argument above doesn't work if the program
>> to be analysed isn't allowed to call Magic(), or perhaps one of the
>> operations used within Magic(). Which seems to just restate the fact
>> that a system can only prove results about systems weaker than itself.
>
> Yes. There are a number of problems about sets of integers that you can
> prove are true and can prove are unprovable in more powerful
> mathematical systems. I.e., Godel's incompleteness doesn't apply only to
> Godel strings.

Well, let's face it, who actually understands what the hell Godel's 
incompleteness theorems actually say?

>> That doesn't quite sound right...
>
> To translate it out of Wolfram self-congratulation-speak, he's saying
> that there are no three-state turing machines that are UTMs.

Oh, right. That's quite trivial...

>> The Halting Problem cannot be solved.
>
> It *can*, if your machine isn't sophisticated enough to represent the
> integers.
>
> Can you solve the halting problem for all computer languages with no
> indefinite loops? Of course - they all halt.

Which is the conclusion I also came to a few lines down. ;-)

>> can decide for *every possible program*. But can you write one for
>> "the kind of programs that human beings tend to write"?
>
> I don't know why you think a machine with less than four states
> represents programs humans tend to write.

No, this is a new line of thinking, not an extension of the previous one.

> This is nonsense. I have no idea what you're trying to say. Where did
> "hand-written" come into it? You're attacking a straw man.

I'm thinking "you can't solve the Halting Problem for every possible 
program, but can you solve it for *useful* programs?" But, evidently, no, 
you can't.

> Funny enough, infinities make things much more complicated. That's why
> virtually every bit of math deals with infinities.

Really? I had no idea that was true.

>> It's quite obvious that that ought to work. However, it looks like the
>> argument above should still apply, making it impossible to solve this
>> problem.
>
> What argument? The one about programs written by hand?

No, the one about using BlackMagic() to break Magic().

>> So it seems that the fallacy of solving the Halting Problem is in
>> Magic() being able to predict the outcome of an infinite number of
>> cycles of work with some finite number of cycles of processing.
>> Interesting...
>
> Yes, exactly.

When you state it like that, it becomes "well *duh*, of *course* you 
can't do that!" And yet, stated the original way, it looks like figuring 
out whether a program halts ought to be "really easy" for some reason...

>> That suggests another interesting question: Can you write a program
>> that takes any possible terminating program, and yields the same
>> result in strictly fewer steps?
>
> No.

And nobody is surprised about this. (As I intended to type...)

>> The way the theorem is usually phrased is that "K bits can represent
>> 2^K possibilities, but K-1 bits can only decompress into 2^(K-1)
>> possibilities, which is *half* as many". I suppose you could relate
>> the number of cycles that a program runs for to the number of possible
>> outputs it can generate and prove it that way...
>
> Especially on a TM. A TM that runs for 1000 cycles can't write to 1001
> cells. Which is, incidentally, why you only get "unbounded" output from
> a TM, not infinite output. If you give a TM an infinite tape, it will
> never all get used up.

Never? Or only after infinite time? *Is* there an "*after* infinite time"?

Now I know why mathematics generally tries to avoid infinity whenever 
possible...

>> I note with some interest that if it's impossible for a program to not
>> halt, then the Halting Problem becomes [trivially] solvable. That's
>> cute. But why is it true? Because BlackMagic() tries to not halt if
>> Magic() says it should halt. If non-termination is impossible, then
>> BlackMagic() cannot have its evil way.
>
> Sounds like you're on your way to Rice's Theorem.

Yeah, I still don't quite understand Rice's theorem. It seems to say 
that *everything* is impossible, despite the fact that we actually *do* 
it almost every single day. So that can't quite be right...

> Look back over the diagonalization argument. There's no mention in there
> of what gets computed for who or what. It (basically) shows there are
> more partial functions than there are complete functions, so no complete
> function will be able to list all the partial functions.

If that's actually how it works, then the fact that there are more 
partial than total functions is so jaw-droppingly self-evident that it's 
quite a small proof.



From: Warp
Subject: Re: Musing on the Halting Problem
Date: 11 Oct 2010 12:34:58
Message: <4cb33cb2@news.povray.org>
Tim Cook <z99### [at] gmailcom> wrote:
> How does a person go about deciding the same problem?  Or is that 
> impossible, too?

  There are some programs such that, if you can prove whether they will 
halt or not, you will receive 1 million dollars.

-- 
                                                          - Warp




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.