"Warp" <war### [at] tag povray org> wrote in message
news:4a676772@news.povray.org...
> somebody <x### [at] y com> wrote:
> > Even so, one has to be careful, for it's possible to determine if any
> > such finite program will halt or not - just run the program until all
> > possible states would be exhausted.
> A finite program might never exhaust all possible states with unbounded
> memory.
True, in that Turing machines do have unbounded memory. Of course no real
machine does or will.
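(To make that concrete - a minimal sketch in Python, assuming a
deterministic machine whose whole configuration can be snapshotted into
something hashable; halts(), step() and is_halted() are invented for
illustration:)

# Deciding halting for a machine with *finitely many* configurations:
# a deterministic machine that revisits a configuration is stuck in a
# cycle forever, so a visited-set settles the question either way.
def halts(config, step, is_halted):
    seen = set()
    while not is_halted(config):
        if config in seen:
            return False          # configuration repeated: eternal loop
        seen.add(config)
        config = step(config)     # deterministic successor configuration
    return True

print(halts(0, lambda c: (c + 3) % 7, lambda c: c == 5))  # True: 0,3,6,2,5
print(halts(0, lambda c: (c + 2) % 6, lambda c: c == 5))  # False: 0,2,4,0,...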
"andrel" <a_l### [at] hotmail com> wrote in message
news:4A6### [at] hotmail com...
> On 22-7-2009 20:30, somebody wrote:
> > "Warp" <war### [at] tag povray org> wrote in message
> > news:4a671eb6@news.povray.org...
> >> Invisible <voi### [at] dev null> wrote:
> >
> >>> It's common knowledge that if you write an arbitrary program in a
> >>> Turing-complete programming language, it is impossible to determine
> >>> whether the program will ever halt.
> >
> >> That's not stated correctly. It's impossible to create an algorithm
> >> which would tell for all possible programs whether they terminate or not.
> >
> > Even so, one has to be careful, for it's possible to determine if any
> > such finite program will halt or not - just run the program until all
> > possible states would be exhausted.
> Why would the number of states be finite and why do you assume
> determinism?
The Turing model does not provide for true randomness. And with
determinism plus finite resources, the number of states will be finite.
somebody <x### [at] y com> wrote:
> True in that Turing machines do have unbounded memory. Of course no real
> machine does or will.
That doesn't make solving the halting problem any easier.
It's like the computational complexity of algorithms: Since an algorithm
will always be run on a computer with limited resources, technically speaking
that means that all (terminating) algorithms are O(1).
Of course that's not useful information. It doesn't tell us anything.
The unbounded case is much more informative and useful.
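(For a feel of what the unbounded view captures - a quick Python
comparison of growth rates, nothing machine-specific:)

# Asymptotics describe growth, and growth bites long before any real
# resource limit does: compare n*log2(n) against n^2.
import math

for n in (10, 1_000, 1_000_000):
    print(f"n={n:,}: n*log2(n) ~ {n * math.log2(n):,.0f}  vs  n^2 = {n**2:,}")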
--
- Warp
On 22-7-2009 23:21, somebody wrote:
> "Warp" <war### [at] tag povray org> wrote in message
> news:4a676772@news.povray.org...
>> somebody <x### [at] y com> wrote:
>
>>> Even so, one has to be careful, for it's possible to determine if any
>>> such finite program will halt or not - just run the program until all
>>> possible states would be exhausted.
>
>> A finite program might never exhaust all possible states with unbounded
>> memory.
>
> True in that Turing machines do have unbounded memory. Of course no real
> machine does or will.
A Turing machine with unspecified memory size will do. Aside: Your cell
phone's state space far exceeds the number of particles in the universe
multiplied by the age of the universe measured in Planck times. What was
your point?
Warp wrote:
> somebody <x### [at] y com> wrote:
>> True in that Turing machines do have unbounded memory. Of course no real
>> machine does or will.
>
> That doesn't make solving the halting problem any easier.
Sure it does, if you have a machine to test it on that has exponentially
more memory than the machine it runs on.
If you have a pseudo-Turing machine with 5 states and a binary tape only 10
cells long, and it runs more than 51,200 steps (5 states x 2^10 tape
contents x 10 head positions), it's never going to halt.
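(Counting the configurations, to show where that number comes from - a
sketch that treats a configuration as control state plus tape contents
plus head position:)

# Total distinct configurations of the toy machine above:
states, cells, alphabet = 5, 10, 2
configs = states * alphabet**cells * cells   # 5 * 1024 * 10
print(configs)   # 51200 - run longer than this and it must be looping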
The halting problem is specifically about Turing machines with unbounded
memory, if that's what you mean. (I know you know this, so I must be
misunderstanding what you mean.)
Maybe you mean it doesn't make practical program analysis significantly
easier to know it's only going to run on a machine with a few gigabytes of
memory?
> It's like the computational complexity of algorithms: Since an algorithm
> will always be run on a computer with limited resources, technically speaking
> that means that all (terminating) algorithms are O(1).
Good thing that's not how O() is defined. :-)
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
andrel wrote:
>>> So no GOTO then?
>> Forward goto, sure. :-)
> define forward
Such that you never branch to an address you've already executed.
> Perhaps also things that are decidable are not really useful to know.
Nah. It's useful to know you haven't violated the type system, and that's
decidable.
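(In that spirit, a toy sketch of a decidable checker in Python - the
little expression language is invented for illustration:)

# A type checker that always terminates: one structural pass over a
# finite syntax tree. Expressions are ints, strs, or ("+", left, right).
def typecheck(expr):
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, str):
        return "str"
    op, left, right = expr
    lt, rt = typecheck(left), typecheck(right)
    if op == "+" and lt == rt:        # allow int+int and str+str only
        return lt
    raise TypeError(f"cannot apply {op} to {lt} and {rt}")

print(typecheck(("+", 1, ("+", 2, 3))))   # int
print(typecheck(("+", "a", "b")))         # str
# typecheck(("+", 1, "a"))                # TypeError, caught statically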
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
On 22-7-2009 23:50, Darren New wrote:
> andrel wrote:
>>>> So no GOTO then?
>>> Forward goto, sure. :-)
>> define forward
>
> Such that you never branch to an address you've already executed.
That is a rather useful definition, but it may imply that the code that
you jump to is not reachable in any other way. Or perhaps that is what
you want.
>> Perhaps also things that are decidable are not really useful to know.
>
> Nah. It's useful to know you haven't violated the type system, and
> that's decidable.
>
I seem to remember that some type systems have the same complexity as,
or might even be, Turing machines in themselves, but I assume you know
better.
Darren New <dne### [at] san rr com> wrote:
> > It's like the computational complexity of algorithms: Since an algorithm
> > will always be run on a computer with limited resources, technically speaking
> > that means that all (terminating) algorithms are O(1).
> Good thing that's not how O() is defined. :-)
I think it is. The problem "sort 10 elements" can be solved in O(1).
Likewise the problems "sort 1 million elements" and "sort 10^100000 elements"
are both O(1). Any problem on a computer with limited resources is O(1).
Larger O functions come into play only when the resources are unlimited.
However, it's not at all useful to think of algorithms as running within
limited resources. That doesn't give us any useful information. Computational
complexity analysis only gives useful information when assuming that resources
are unbounded.
--
- Warp
Warp wrote:
> Darren New <dne### [at] san rr com> wrote:
>>> It's like the computational complexity of algorithms: Since an algorithm
>>> will always be run on a computer with limited resources, technically speaking
>>> that means that all (terminating) algorithms are O(1).
>
>> Good thing that's not how O() is defined. :-)
>
> I think it is. [...]
> Larger O functions come into play only when the resources are unlimited.
I'm pretty sure O() is defined as some function whose value exceeds the
measurement of your algorithm as n approaches infinity, with n being the
size of the input (measured in some particular way). So O(n^2) means some
measurement of your algorithm will eventually be and stay below k*n^2 as n
grows without bound.
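(The textbook version of that, with f the measurement of the algorithm
and g the bounding function, in LaTeX notation:)

f(n) \in O(g(n)) \iff \exists k > 0 \; \exists n_0 \; \forall n > n_0 : f(n) \le k \cdot g(n)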
So if your input isn't unbounded, then O() doesn't make much sense. If your
input is unbounded, then you have an unlimited amount of resources going
into the problem, as input is one of your resources. Certainly you can't
sort an unbounded number of elements without an unbounded amount of resources.
I.e., the way O(n) is usually defined counts "n" in memory, and "n" has to
grow without bound. (I.e., O(n) usually assumes "offline" work.)
> However, it's not at all useful to think of algorithms as running within
> limited resources. That doesn't give us any useful information. Computational
> complexity analysis only gives useful information when assuming that resources
> are unbounded.
Certainly computational complexity analysis gives useful information for
bounded workloads in the sense that an O(n lg n) sort is better than an O(n^2)
sort[1], even on smaller workloads. It's not really useful to say sorting 10
million text lines is O(1) because you know you only have 10 million of them.
I think we're still talking silly definitions, tho. We both know what's
going on here. :-)
[1] Modulo ridiculous constants, of course.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
andrel wrote:
> That is a rather useful definition, but it may have to imply that the
> code that you jump to is not reachable in any other way. Or perhaps that
> is what you want.
As long as you haven't executed it before you goto it, you're OK. If you
skip forward over something, then branch back to it, then fall into what
you've already executed, you'll execute the same bit of code twice. But you
can't execute it *three* times without branching to something you've already
executed.
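(That rule as a runtime check - the mini-interpreter and instruction
format below are invented for illustration; "forward" is enforced as
"never goto an already-executed address", exactly as above:)

# Programs are lists of ("goto", target) or ("print", text). A goto may
# target a numerically earlier address as long as that address has never
# executed; plain fall-through into already-run code is allowed.
def run(program):
    executed = set()
    pc = 0
    while pc < len(program):
        executed.add(pc)
        op, arg = program[pc]
        if op == "goto":
            if arg in executed:
                raise RuntimeError(f"backward goto to address {arg}")
            pc = arg
        else:
            print(arg)
            pc += 1

# Skips ahead, then gotos *back* to the skipped (still fresh) code:
run([("goto", 3), ("print", "A"), ("goto", 4), ("goto", 1)])  # prints A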
> I seem to remember that some type systems have the same complexity as,
> or might even be, Turing machines in themselves, but I assume you know
> better.
It depends on the type system. Some are. Most aren't. I'm talking about the
ones that aren't. :-)
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."