POV-Ray : Newsgroups : povray.off-topic : Musing on the Halting Problem
  Musing on the Halting Problem (Message 21 to 30 of 30)
From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 11 Oct 2010 20:04:33
Message: <4cb3a611$1@news.povray.org>
Kevin Wampler wrote:
> `uninteresting' when you restrict yourself to a finite set of 
> *programs*, not functions.

Well, Rice's theorem, yes. I was speaking of mathematics in general.

> 2) I don't agree with your view on the use of infinity in mathematics. 
> On the contrary, infinity (or rather unboundedness, which I assume is 
> that you meant) is often used as a way to *simplify* things. 

Well, the existence of infinities in your inputs complicates things more than 
a system where everything is finite. A Turing machine with a fixed-length 
tape is far less interesting to prove things about than one with an unbounded 
tape.

Stuff like big-O notation only uses infinities because you're again assuming 
your functions have infinities in them. Every program on a finite machine 
finishes in O(1) time.

But yes, certainly I didn't mean to imply using infinities is always more 
complex than trying to avoid them in a problem where it's appropriate.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 11 Oct 2010 20:07:28
Message: <4cb3a6c0$1@news.povray.org>
Kevin Wampler wrote:
> Assuming that you were talking about "more programs which compute 
> partial functions than programs which compute total functions" it still 
> seems wrong, since I think it's pretty easy to show that there are 
> countably infinitely many of each.

That's a point.  I probably don't understand it as well as I think I do. :-)
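The countability claim quoted above is easy to make concrete: programs are finite strings over a finite alphabet, so they can be listed in length-then-lexicographic order, and every program appears at some finite position in the list. A minimal sketch (the two-symbol alphabet is just for illustration):

```python
from itertools import count, product

def all_programs(alphabet="01"):
    """Yield every finite string over `alphabet` in length-then-lex order.

    Every program text shows up at some finite position in this
    enumeration, so the set of all programs -- and hence any subset,
    such as the programs computing total functions -- is at most
    countably infinite.
    """
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# The first few entries of the enumeration:
gen = all_programs()
first = [next(gen) for _ in range(7)]
# "", "0", "1", "00", "01", "10", "11"
```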

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 11 Oct 2010 20:09:29
Message: <4cb3a739$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> It also is interesting to consider things like Chaitin's number or "the 
>> smallest uninteresting number" or other finite descriptions that name a 
>> number but you can't tell which number they name.
> 
>   Well, not all numerical concepts have an actual numerical value because
> there simply isn't a number that represents the concept. For example,
> "the smallest real number which is larger than zero" does not exist.

Sure. But Chaitin's number *does* have a value. It's just defined in a way 
that you can't compute it.  (Basically, "the probability that an arbitrarily 
selected program will halt", along with of course all the definitions and 
such to make that meaningful mathematically.)

And "the smallest uninteresting number" doesn't exist because the definition 
is self-contradictory.
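For reference, the usual definition of Chaitin's constant (relative to a prefix-free universal machine U) can be written as:

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

where |p| is the length of program p in bits; prefix-freeness guarantees the sum converges to a real number in (0, 1). The value exists, but no algorithm can compute its digits.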

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Invisible
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 04:13:44
Message: <4cb418b8$1@news.povray.org>
On 11/10/2010 05:56 PM, Darren New wrote:
> Invisible wrote:
>> In fact I've seen how there are different ways of representing real
>> numbers (ratios, continued fractions, radicals, etc.), and for almost
>> every one, there are some real numbers that do not have a finite
>> representation.
>
> For *every* one, there are some real numbers that do not have a finite
> representation. That's what makes them irrational.

No, not having a finite ratio representation makes a number 
"irrational". Not having a finite polynomial representation makes it 
"transcendental". And so on and so forth.
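To make the hierarchy concrete: Sqrt(2) is irrational yet still algebraic, because it is a root of a polynomial with rational coefficients, while pi satisfies no such polynomial at all:

```latex
\sqrt{2} \notin \mathbb{Q}, \quad \text{yet} \quad \sqrt{2} \ \text{is a root of} \ x^2 - 2 = 0;
\qquad \text{whereas no nonzero } p(x) \in \mathbb{Q}[x] \ \text{has} \ p(\pi) = 0.
```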



From: Invisible
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 04:15:52
Message: <4cb41938$1@news.povray.org>
>> For *every* one, there are some real numbers that do not have a finite
>> representation. That's what makes them irrational.
>
>    "Some" is quite an understatement.
>
>    The set of real numbers with a finite representation is countable, while
> the set of real numbers without a finite representation is uncountable,
> making the latter set much "larger".

On the other hand, the distinction is typically irrelevant. Usually you 
don't actually "need" the precise value of (say) Sqrt(2); some rational 
approximation is sufficient.

Perhaps for similar reasons, you don't often come across real numbers 
that aren't compactly representable...
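The point about rational approximations sufficing is easy to illustrate: the continued-fraction convergents of Sqrt(2) = [1; 2, 2, 2, ...] close in on it very quickly. A quick sketch using the standard convergent recurrence:

```python
import math
from fractions import Fraction

def sqrt2_convergents(n):
    """First n convergents of sqrt(2) = [1; 2, 2, 2, ...].

    Uses the standard recurrence h_k = a_k * h_{k-1} + h_{k-2}
    (same for denominators), with every partial quotient a_k = 2.
    """
    h_prev, h = 1, 1  # numerators, seeded with h_{-1} = 1, h_0 = a_0 = 1
    k_prev, k = 0, 1  # denominators, seeded with k_{-1} = 0, k_0 = 1
    approx = []
    for _ in range(n):
        h_prev, h = h, 2 * h + h_prev
        k_prev, k = k, 2 * k + k_prev
        approx.append(Fraction(h, k))
    return approx

convs = sqrt2_convergents(5)   # 3/2, 7/5, 17/12, 41/29, 99/70
errors = [abs(float(c) - math.sqrt(2)) for c in convs]
# Each convergent's error shrinks rapidly; 99/70 is already within 1e-4.
```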



From: Invisible
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 11:35:49
Message: <4cb48055$1@news.povray.org>
On 11/10/2010 05:34 PM, Warp wrote:

>    There are some programs that if you can prove if they will halt or not,
> you will receive 1 million dollars.

http://www.zetagrid.net/

These guys spent years trying to prove the Riemann Hypothesis by seeing 
if a computer program ever halts...
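The ZetaGrid idea generalizes: for many open conjectures you can write a program that halts exactly when a counterexample exists, so deciding halting for that one specific program would settle the conjecture. A sketch using Goldbach's conjecture as a simpler stand-in for the Riemann Hypothesis (the bounded search is illustrative only; the naive primality test is an assumption of this sketch):

```python
def is_prime(n):
    """Naive trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample(limit):
    """Search even numbers >= 4 up to `limit` for one that is NOT a sum
    of two primes.  Drop the `limit` and you have a program that halts
    if and only if Goldbach's conjecture is false -- so deciding whether
    *that* program halts would settle the conjecture.
    """
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n
    return None
```

No counterexample exists below 1000 (nor, per published exhaustive searches, anywhere near it), so the bounded search returns None.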



From: Invisible
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 11:36:18
Message: <4cb48072@news.povray.org>
>>>> That doesn't quite sound right...
>>>
>>> To translate it out of Wolfram self-congratulation-speak, he's saying
>>> that there are no three-state turing machines that are UTMs.
>>
>> Oh, right. That's quite trivial...
>
> Not *that* trivial. Proving that isn't particularly easy, I think. But
> once you prove it, yeah, it's pretty straightforward. That's the Wolfram
> self-congratulation-speak of which I spoke. :-)

Well, sure. Proving Fermat's Last Theorem is highly non-trivial. The 
result itself, though, is quite simple to explain.

>> I'm thinking "you can't solve the Halting Problem for any possible
>> program, but can you solve it for *useful* programs?" But, evidently,
>> no you can't.
>
> No, you can't. Plus, useful programs are often interactive, so there's
> really not even any way to analyze most of them properly.

Ah yes - it's a wetware issue...

>>> Funny enough, infinities make things much more complicated. That's why
>>> virtually every bit of math deals with infinities.
>>
>> Really? I had no idea that was true.
>
> Sure. Almost all functions of interest have domains of infinite size,
> for example. Otherwise, you'd just list out what the function returns
> for each possible input and there wouldn't be any interesting abstract
> questions left about it.

Not really. It's just that usually the function's definition is a much 
more compact way to represent it. For example, AES-256 maps a 256-bit 
key and a 128-bit block to a 128-bit output. It's totally finite. You 
*could* write down a table of all possible input/output combinations. 
All 2^384 of them.

Actually, screw that. Apparently there are about 10^80 atoms in the 
observable universe, which is only around 2^266. So no, technically you 
cannot "write down" the function table, because the universe isn't big 
enough...
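A quick sanity check on those magnitudes (taking the usual ~10^80 figure for atoms in the observable universe, which is itself only a ballpark estimate):

```python
import math

# Express the atom count as a power of two for easy comparison
# against truth-table sizes like 2^384 or 2^512.
atoms_bits = math.log2(10 ** 80)
# atoms_bits ~= 265.75, i.e. roughly 2^266 atoms.  Any truth table
# indexed by hundreds of input bits therefore exceeds the universe's
# storage capacity by an astronomical margin, even at one entry per atom.
```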

>>>> So it seems that the fallacy of solving the Halting Problem is in
>>>> Magic() being able to predict the outcome of an infinite number of
>>>> cycles of work with some finite number of cycles of processing.
>>>> Interesting...
>>>
>>> Yes, exactly.
>>
>> When you state it like that, it becomes "well *duh*, of *course* you
>> can't do that!"
>
> Except it really isn't "well duh" either. Would you say you can compile
> a program in a finite number of steps that you can't run in a finite
> number of steps? Of course. How different is "compiling" from "figuring
> out what the program does"?

More interestingly, if I add together infinitely many numbers of the form 
1/n^2, the answer appears to require an infinite amount of computation. 
And yet Wolfram Alpha tells me, in finite time, that the answer is 
about 1.645. How impressive is that?

Mathematics is impressive partly because it sometimes lets you do 
seemingly impossible things like work out the answer to an infinite 
number of calculations.
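The series in question is the Basel problem: the infinite sum of 1/n^2 equals exactly pi^2/6 ≈ 1.6449, so the "infinite computation" collapses into a closed form. The partial sums show the convergence:

```python
import math

def basel_partial(n):
    """Partial sum of 1/k^2 for k = 1..n; converges to pi^2 / 6."""
    return sum(1.0 / k ** 2 for k in range(1, n + 1))

exact = math.pi ** 2 / 6        # ~1.6449..., Euler's closed form
approx = basel_partial(10_000)  # already within about 1/10000 of the limit
```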

> In contrast, how long does it take an instance of Conway's Life to
> rewrite an infinite number of cells?

Given that there are CAs which are Turing-complete, you have to wonder if 
you could wire together a handful of logic gates for each cell and make 
some kind of crazy supercomputer. Unfortunately, the problem then isn't 
going to be computing power, it'll be signal propagation speed...
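Rule 110 (proven Turing-complete by Matthew Cook) is about the smallest such CA, and it shows how little per-cell hardware the idea needs: one update step is a three-cell table lookup, easily a handful of gates. A minimal sketch:

```python
def rule110_step(cells):
    """One synchronous update of an elementary CA under Rule 110.

    Each new cell is the bit of the number 110 selected by the 3-bit
    neighborhood (left, center, right).  Cells beyond the ends are
    treated as 0.
    """
    padded = [0] + list(cells) + [0]
    out = []
    for i in range(1, len(padded) - 1):
        pattern = 4 * padded[i - 1] + 2 * padded[i] + padded[i + 1]
        out.append((110 >> pattern) & 1)
    return out

row = [0, 0, 1, 0, 0]
nxt = rule110_step(row)  # a lone 1 grows leftward: [0, 1, 1, 0, 0]
```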

>> Now I know why mathematics generally tries to avoid infinity whenever
>> possible...
>
> I don't think that's quite right. :-)

It's no secret that infinity has several extremely weird properties, and 
that you want to avoid using it unless you really can't avoid it.

>> Yeah, I still don't quite understand Rice's theorem. It seems to say
>> that *everything* is impossible, despite the fact that we actually
>> *do* it almost every single day. So that can't quite be right...
>
> No. It's saying "if some programs/functions satisfy condition X, and
> some don't, then you can't write a program/function to figure out which
> satisfy X and which don't, because any such X can be turned into an
> instance of the halting program. The only ones you can test are ones
> where either every program does X or no program does X."

Maybe it's like the Halting Problem though. You cannot determine whether 
every program does or does not halt. But you *can* determine whether 
*some* programs halt or not. Sometimes very easily, in fact.
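That's exactly the right reading: Rice's theorem rules out a decider that is both sound and *complete*, but a sound-but-incomplete checker is easy. A toy sketch (my own illustration, not a standard tool) that says "halts" only for straight-line Python snippets with no loops, calls, or function definitions, which trivially terminate, and "unknown" for everything else:

```python
import ast

# Constructs that could make termination non-obvious.
LOOPY = (ast.While, ast.For, ast.AsyncFor, ast.Call,
         ast.FunctionDef, ast.AsyncFunctionDef)

def halts(source):
    """Sound but incomplete halting check.

    Straight-line code with no loops, calls, or function definitions
    executes each statement exactly once, so it always halts.  For
    anything else we give up -- Rice's theorem only forbids an analyzer
    that never gives up.
    """
    tree = ast.parse(source)
    if any(isinstance(node, LOOPY) for node in ast.walk(tree)):
        return "unknown"
    return "halts"

# halts("x = 1 + 2; y = x * 3")  ->  "halts"
# halts("while True: pass")      ->  "unknown"
```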

>> If that's actually how it works, then the fact that there are more
>> partial than total functions is so jaw-droppingly self-evident that
>> it's quite a small proof.
>
> I'm glad you're so brilliant. :-) I'm glad you can intuitively determine
> whether one infinity is bigger than another infinity that way.

It seems intuitively obvious that one thing with X possibilities and 
another thing with X+1 possibilities shouldn't be the same size. Then 
again, Aleph0 + 1 = Aleph0, so what do I know?
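The intuition fails precisely because an explicit bijection can absorb the extra element. Adjoin one new element ★ to the naturals and define:

```latex
f : \{\star\} \cup \mathbb{N} \to \mathbb{N}, \qquad f(\star) = 0, \quad f(n) = n + 1.
```

This f is a bijection, so the enlarged set has the same cardinality as the naturals, which is exactly the statement Aleph0 + 1 = Aleph0.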



From: Kevin Wampler
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 20:21:08
Message: <4cb4fb74$1@news.povray.org>
>
> Well, the existence of infinities in your inputs complicate things more
> than a system where everything is finite. A turing machine with a
> fixed-length tape is far less interesting to prove things about than
> otherwise.

This is generally true if you have actual infinities as inputs. But for 
the case where you merely have arbitrarily large (but always finite) 
inputs, I'm actually having trouble thinking of a case where it holds 
that doesn't involve only "small" finite values (or that allows some 
unbounded quantities but not others, as in your comment about finite 
programs being O(1)).  Perhaps you had an example in mind?

I'm more used to seeing "arbitrarily large" or "arbitrarily small" 
values used to simplify things.

> Stuff like big-O notation only uses infinities because you're again
> assuming your functions have infinities in them. Every program on a
> finite machine finishes in O(1) time.
>

My example with big-O notation was actually slightly different from 
that.  If we really restrict ourselves to only bounded quantities, then 
it's impossible to even define big-O notation, since it's defined for 
"sufficiently large inputs".  Thus saying that every program on a finite 
machine finishes in O(1) time is sort of disallowing infinity in one 
area while letting it slip by in another.
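For reference, the standard definition, with its built-in "sufficiently large" quantifier:

```latex
f(n) = O\!\left(g(n)\right) \iff \exists\, c > 0,\ \exists\, n_0,\ \forall\, n \ge n_0 :\ f(n) \le c \cdot g(n)
```

On a machine whose inputs are bounded by some fixed N, the "for all n >= n_0" clause is vacuous beyond N, which is exactly why every program there is trivially O(1).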

My point with big-O notation was instead that allowing 
arbitrarily large inputs in the definition actually massively 
simplifies the problem.  Without it we'd probably be restricted to an 
analysis where constant multiples in the running times really do matter, 
making it much harder to analyze the running times of programs in a 
general way.

Thus I think it's misleading to say that "big-O notation only uses 
infinities because you're again assuming your functions have infinities 
in them".  Rather, big-O notation uses infinities because doing so greatly 
simplifies the analysis of programs (and in practice it turns out that 
the analysis for the case with arbitrarily large inputs carries over 
pretty well to the real-world case of large but bounded inputs).


I'm not entirely sure that we actually have different viewpoints on 
this, and it's possible that I'm just misunderstanding your argument. 
I'm mostly basing my assumptions about your position on your comment:

> Funny enough, infinities make things much more complicated. That's why
> virtually every bit of math deals with infinities.

Which seems to run counter to my experience, where I'm used to seeing 
unbounded values allowed as a simplifying assumption, and see them as 
common because it's such a powerful tool for allowing general statements 
about whatever branch of math you're considering.



From: Darren New
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 21:01:41
Message: <4cb504f5$1@news.povray.org>
Kevin Wampler wrote:
> inputs I'm actually having trouble thinking of a case where this is true 

I don't know what "this" you're talking about. Your comments don't make 
sense in the context of the bit you quoted.

> My example with big-O notation was actually slightly different than 
> that.  

Right.

> Thus saying that every program on a finite 
> machine finishes in O(1) times is sort of disallowing infinity in one 
> area while letting it slip by in another.

Basically, yes.

> My point with big-O notation was instead that the allowance of 
> arbitrarily large inputs in the definition is actually massively 
> simplifies the problem.  

I didn't mean to say it didn't. I thought I had only said it was more 
interesting with infinities.

> I'm not entirely sure that we actually have different viewpoints on 
> this, and it's possible that I'm just misunderstanding your argument. 
> I'm mostly basing my assumptions about your position on your comment:
> 
>> Funny enough, infinities make things much more complicated. That's why
>> virtually every bit of math deals with infinities.
> 
> Which seems to run counter to my experience, 

In many cases where everything is finite, the math becomes trivial, because 
math doesn't care how big something is if it's bounded.

I probably should have added an "often" in there. Infinities often make 
things more complicated.

> where I'm used to seeing 
> unbounded values allowed as a simplifying assumption, and see them as 
> common because it's such a powerful tool for allowing general statements 
> about whatever branch of math you're considering.

Sure. I didn't mean using infinities in the analysis always makes things 
more complicated. I meant using infinities in the problem statement makes 
things interestingly complicated, as opposed to problems where everything is 
finite, everything halts, etc.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Kevin Wampler
Subject: Re: Musing on the Halting Problem
Date: 12 Oct 2010 22:20:06
Message: <4cb51756$1@news.povray.org>
On 10/12/2010 6:01 PM, Darren New wrote:
> Kevin Wampler wrote:
>> inputs I'm actually having trouble thinking of a case where this is true
>
> I don't know what "this" you're talking about. Your comments don't make
> sense in the context of the bit you quoted.
>

Yeah, the question I was attempting to ask was only tangentially related 
to the bit I quoted (and not very well thought out anyway).

To rephrase, I was wondering if you had any examples of an unbounded 
mathematical structure and an associated finite restriction where the 
kinds of questions it's "natural" to ask about the finite version are 
easier to answer than in the unbounded version?  In retrospect, 
finite-resource computation is an obvious example you already mentioned, 
but I'm having trouble thinking of others.


On an only partially related note, I was reading a blog entry on 
mathematical techniques for simplifying problems and ran across this 
relevant snippet:

"Make the Objects Really Big: One tool to better understand a problem 
that mathematicians use and we rarely seem to is: replace “finite” by 
various notions of “infinite.” The study of finite groups is extremely 
difficult: one reason is that the interplay between small primes and 
each other can be very complex and subtle. Studying instead infinite 
groups is not easy, but there are some cases where the theory does 
become much easier. For example, studying Lie groups is not easy, but it 
does avoid much of the “accidental” number issues that arise in finite 
groups. One way to see this is their classification occurred first and 
is simpler than the classification of finite simple groups."

I'm not really trying to prove a point with that, but it seemed 
interesting and relevant, and is one of the examples I had in mind of a 
case where infinities made things easier (hence my curiosity for more 
examples of the opposite).




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.