On 07/10/2010 04:59 PM, Darren New wrote:
> Invisible wrote:
>> the *reason* the Halting Problem is undecidable boils down to this:
>
> That's just one of the proofs. The diagonalization proof is another.
The proof involving BlackMagic() is only a very slight variation on
diagonalisation. (That argument seems to work by taking the two-argument
halting function over a program and its input, and using it to construct
a one-argument function which, by construction, differs from every
possible computable function on at least one input. Since that diagonal
function would itself be computable if the halting function were, the
halting function can't be computable. Or something like that.)
>> Looking the other way, the argument above doesn't work if the program
>> to be analysed isn't allowed to call Magic(), or perhaps one of the
>> operations used within Magic(). Which seems to just restate the fact
>> that a system can only prove results about systems weaker than itself.
>
> Yes. There are a number of problems about sets of integers that you can
> prove are true and can prove are unprovable in more powerful
> mathematical systems. I.e., Godel's incompleteness doesn't apply only to
> Godel strings.
Well, let's face it, who actually understands what the hell Godel's
incompleteness theorems actually say?
>> That doesn't quite sound right...
>
> To translate it out of Wolfram self-congratulation-speak, he's saying
> that there are no three-state turing machines that are UTMs.
Oh, right. That's quite trivial...
>> The Halting Problem cannot be solved.
>
> It *can*, if your machine isn't sophisticated enough to represent the
> integers.
>
> Can you solve the halting problem for all computer languages with no
> indefinite loops? Of course - they all halt.
Which is the conclusion I also came to a few lines down. ;-)
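To make that concrete, here's a throwaway Python sketch. The toy
straight-line "language" and all the names are invented purely for
illustration; the point is just that for a language with no loops, the
halting decider is a constant:

def halts(program):
    # The halting decider for a loop-free language: every program halts.
    return True

def run(program):
    # A "program" is a list of straight-line assignments, e.g.
    # ["x = 1", "y = x + 2"]. Execution always terminates after
    # len(program) steps, because there is nothing to loop on.
    env = {}
    for statement in program:
        exec(statement, {}, env)
    return env

print(halts(["x = 1", "y = x + 2"]))   # True (as for every program)
print(run(["x = 1", "y = x + 2"]))     # {'x': 1, 'y': 3}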
>> can decide for *every possible program*. But can you write one for
>> "the kind of programs that human beings tend to write"?
>
> I don't know why you think a machine with less than four states
> represents programs humans tend to write.
No, this is a new line of thinking, not an extension of the previous one.
> This is nonsense. I have no idea what you're trying to say. Where did
> "hand-written" come into it? You're attacking a straw man.
I'm thinking "you can't solve the Halting Problem for every possible
program, but can you solve it for *useful* programs?" But, evidently, no
you can't.
> Funny enough, infinities make things much simpler. That's why
> virtually every bit of math deals with infinities.
Really? I had no idea that was true.
>> It's quite obvious that that ought to work. However, it looks like the
>> argument above should still apply, making it impossible to solve this
>> problem.
>
> What argument? The one about programs written by hand?
No, the one about using BlackMagic() to break Magic().
>> So it seems that the fallacy of solving the Halting Problem is in
>> Magic() being able to predict the outcome of an infinite number of
>> cycles of work with some finite number of cycles of processing.
>> Interesting...
>
> Yes, exactly.
When you state it like that, it becomes "well *duh*, of *course* you
can't do that!" And yet, stated the original way, it looks like figuring
out whether a program halts ought to be "really easy" for some reason...
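By contrast, the *finite* version of the question is perfectly
decidable. A quick sketch, where I'm modelling a "program" as a Python
generator that yields once per step of work (that modelling choice is
mine, not anything from the thread):

def halts_within(program, n_steps):
    gen = program()
    for _ in range(n_steps):
        try:
            next(gen)          # simulate one step of work
        except StopIteration:
            return True        # it halted within the budget
    return False               # still running; says nothing about "forever"

def finite_loop():
    for i in range(10):
        yield

def infinite_loop():
    while True:
        yield

print(halts_within(finite_loop, 100))    # True
print(halts_within(infinite_loop, 100))  # False, but it never halts either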
>> That suggests another interesting question: Can you write a program
>> that takes any possible terminating program, and yields the same
>> result in strictly fewer steps?
>
> No.
And nobody is surprised about this. (As I intended to type...)
>> The way the theorem is usually phrased is that "K bits can represent
>> 2^K possibilities, but K-1 bits can only decompress into 2^(K-1)
>> possibilities, which is *half* as many". I suppose you could relate
>> the number of cycles that a program runs for to the number of possible
>> outputs it can generate and prove it that way...
>
> Especially on a TM. A TM that runs for 1000 cycles can't write to 1001
> cells. Which is, incidentally, why you only get "unbounded" output from
> a TM, not infinite output. If you give a TM an infinite tape, it will
> never all get used up.
Never? Or only after infinite time? *Is* there an "*after* infinite time"?
Now I know why mathematics generally tries to avoid infinity whenever
possible...
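Incidentally, the counting argument quoted above is easy to check by
brute force. A small sketch; the compress() function here is a
deliberately silly stand-in, since *any* map from K bits to K-1 bits
runs into the same wall:

from itertools import product

K = 4

def compress(bits):
    # Pretend "compressor" from K bits to K-1 bits: just drop the last bit.
    return bits[:-1]

inputs = ["".join(b) for b in product("01", repeat=K)]   # 2^K = 16 strings
outputs = {compress(s) for s in inputs}                  # at most 2^(K-1) = 8

# With more inputs than possible outputs, some inputs must collide, so
# they can't all be reconstructed: lossless compression of *every* K-bit
# string into K-1 bits is impossible.
print(len(inputs), "inputs ->", len(outputs), "distinct outputs")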
>> I note with some interest that if it's impossible for a program to not
>> halt, then the Halting Problem becomes [trivially] solvable. That's
>> cute. But why is it true? Because BlackMagic() tries to not halt if
>> Magic() says it should halt. If non-termination is impossible, then
>> BlackMagic() cannot have its evil way.
>
> Sounds like you're on your way to Rice's Theorem.
Yeah, I still don't quite understand Rice's theorem. It seems to say
that *everything* is impossible, despite the fact that we actually *do*
it almost every single day. So that can't quite be right... Presumably
the catch is that it only rules out a single algorithm that works for
*every* program, not working it out for some *particular* program.
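For what it's worth, the usual proof of Rice's theorem reduces the
Halting Problem to whatever non-trivial semantic property you pick. A
hedged Python sketch, with all the names invented for illustration:

def is_always_zero(program):
    # Assumed decider for the semantic property "returns 0 on every input".
    raise NotImplementedError("assumed decider, for the sake of argument")

def would_halt(program, program_input):
    # Build a new program that first runs program(program_input), throws
    # the result away, and then returns 0.
    def gadget(x):
        program(program_input)   # runs forever iff program doesn't halt
        return 0
    # gadget() computes "always 0" exactly when program(program_input)
    # halts, so deciding the property would decide halting. Contradiction.
    return is_always_zero(gadget)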
> Look back over the diagonalization argument. There's no mention in there
> of what gets computed for who or what. It (basically) shows there are
> more partial functions than there are complete functions, so no complete
> function will be able to list all the partial functions.
If that's actually how it works, then the fact that there are more
partial functions than total functions is so jaw-droppingly self-evident
that it makes for quite a small proof.
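If I've understood the listing argument, the diagonal step is the same
trick again: given any claimed complete listing of functions, build one
that can't be in it. A purely illustrative sketch; enumerate_functions
is a hypothetical name:

def enumerate_functions(n):
    # Assumed n-th entry f_n in a complete computable listing
    # f_0, f_1, f_2, ... of total functions.
    raise NotImplementedError("assumed complete listing")

def diagonal(n):
    # Differs from f_n at input n, so diagonal() can't appear anywhere
    # in the listing, even though it would be computable if the listing
    # were. So no such listing can exist.
    return enumerate_functions(n)(n) + 1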