http://en.wikipedia.org/wiki/Turing_completeness
"""
A computer with access to an infinite tape of data is sometimes more
powerful than a Turing machine, because the tape can in principle contain
the solution to the halting problem, or some other undecidable problem. An
infinite tape of data is called a Turing oracle. Even a Turing oracle with
random data is not computable (with probability 1), since there are only
countably many computations but uncountably many oracles. So a computer with
a random Turing oracle can compute things that a Turing machine cannot.
"""
Ha! I knew there was a reason I thought the argument that Rule 110 was
turing complete was a bit flakey. Initializing an infinite tape for a
turing machine at least makes it more powerful than one with an
uninitialized tape. And rule 110 needs an infinitely initialized tape.
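For concreteness, here is a minimal sketch (in Python, purely illustrative) of a single Rule 110 update step. Each cell looks only at itself and its two neighbours; the universality construction additionally assumes the line is pre-filled with an infinite periodic background, which is exactly the initialization being questioned here.

# Rule 110: the next state of each cell is a fixed function of the
# (left, centre, right) triple; 110 = 0b01101110 encodes this table.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Advance a finite window by one step (the two edge cells are held fixed)."""
    n = len(cells)
    return [cells[i] if i in (0, n - 1)
            else RULE_110[(cells[i - 1], cells[i], cells[i + 1])]
            for i in range(n)]

row = [0] * 30 + [1] + [0] * 30   # a single live cell on a blank background
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)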
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 23.12.2010 00:07, Darren New wrote:
> Ha! I knew there was a reason I thought the argument that Rule 110 was
> turing complete was a bit flakey. Initializing an infinite tape for a
> turing machine at least makes it more powerful than one with an
> uninitialized tape. And rule 110 needs an infinitely initialized tape.
That's not a problem in cellular automata, as they - by definition -
have an infinitely initialized "tape" (or, rather, grid of cells).
Also note that rule 110 was proven to be /Turing complete/, not /Turing
equivalent/.
clipka wrote:
> On 23.12.2010 00:07, Darren New wrote:
>
>> Ha! I knew there was a reason I thought the argument that Rule 110 was
>> turing complete was a bit flakey. Initializing an infinite tape for a
>> turing machine at least makes it more powerful than one with an
>> uninitialized tape. And rule 110 needs an infinitely initialized tape.
>
> That's not a problem in cellular automata, as they - by definition -
> have an infinitely initialized "tape" (or, rather, grid of cells).
Well, there's a problem with that, tho. Let's initialize the grid to be 1 if
the program represented by the X coordinate halts with the input represented
by the Y coordinate, and 0 if it doesn't. Bingo - just solved the halting
problem.
I'm not convinced. I'm not sure either way, mind. It just seems flakey and
not something you can assume.
Yes, a CA does an infinite amount of computation in each step. It's not
obvious to me that doing an infinite amount of computation before you
*start* makes it legal. Doing it with a multi-stage CA would work, but then
it wouldn't be rule 110 any more.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 04.01.2011 19:47, Darren New wrote:
> clipka wrote:
>> On 23.12.2010 00:07, Darren New wrote:
>>
>>> Ha! I knew there was a reason I thought the argument that Rule 110 was
>>> turing complete was a bit flakey. Initializing an infinite tape for a
>>> turing machine at least makes it more powerful than one with an
>>> uninitialized tape. And rule 110 needs an infinitely initialized tape.
>>
>> That's not a problem in cellular automata, as they - by definition -
>> have an infinitely initialized "tape" (or, rather, grid of cells).
>
> Well, there's a problem with that, tho. Let's initialize the grid to be
> 1 if the program represented by the X coordinate halts with the input
> represented by the Y coordinate, and 0 if it doesn't. Bingo - just
> solved the halting problem.
You just added an oracle.
> I'm not convinced. I'm not sure either way, mind. It just seems flakey
> and not something you can assume.
Why not? Both CA and Turing machine are theoretical constructs anyway,
so you can assume just about anything.
> Yes, a CA does an infinite amount of computation in each step. It's not
> obvious to me that doing an infinite amount of computation before you
> *start* makes it legal. Doing it with a multi-stage CA would work, but
> then it wouldn't be rule 110 any more.
Actually you don't even need pre-initialized cells (or infinitely many
cells, for that matter) any more than you do with a Turing machine, if
you go for a CA with more than just two states.
All you need is a set of special states to represent some
"initialization area" which you place on both sides of your "payload
data", defined in such a way that it moves outward at 1 cell per cycle
while auto-generating the desired "background pattern".
There - you just eliminated the need for any pre-calculation. As the
maximum propagation speed of any pattern in a CA is 1 cell per cycle,
you are guaranteed to have the "background pattern" wherever you need
it. Flaw fixed.
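A rough sketch of that idea in Python, with a made-up periodic pattern and made-up builder-state names, just to illustrate the scheme (it is not the actual Rule 110 construction): left and right "builder" cells deposit the next symbol of the background and move outward one cell per cycle, so the background only ever exists where the computation could reach it.

PATTERN = [0, 0, 1, 0, 1, 1]       # some periodic background (purely hypothetical)
P = len(PATTERN)

def is_builder(s):
    return isinstance(s, tuple)     # builder states are ('L', phase) or ('R', phase)

def step(cells):
    """One synchronous update; the line grows by one fresh cell at each end."""
    cells = [None] + list(cells) + [None]           # never-touched cells
    out = []
    for i, c in enumerate(cells):
        left = cells[i - 1] if i > 0 else None
        right = cells[i + 1] if i + 1 < len(cells) else None
        if c is None and is_builder(right) and right[0] == 'L':
            out.append(('L', (right[1] + 1) % P))   # left builder advances outward
        elif c is None and is_builder(left) and left[0] == 'R':
            out.append(('R', (left[1] + 1) % P))    # right builder advances outward
        elif is_builder(c):
            out.append(PATTERN[c[1]])               # leave the background symbol behind
        else:
            # Payload cells; the real construction would also apply the
            # Rule 110-style local rule to these. Here they are just kept.
            out.append(c)
    return out

row = [('L', 0), 1, 1, 0, 1, ('R', 0)]              # payload flanked by builders
for _ in range(4):
    row = step(row)
    print(row)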
On 04.01.2011 21:08, clipka wrote:
>
> Actually you don't even need pre-initialized cells (or infinitely many
> cells, for that matter) any more than you do with a Turing machine, if
> you go for a CA with more than just two states.
>
> All you need is a set of special states to represent some
> "initialization area" which you place on both sides of your "payload
> data", defined in such a way that it moves outward at 1 cell per cycle
> while auto-generating the desired "background pattern".
>
> There - you just eliminated the need for any pre-calculation. As the
> maximum propagation speed of any pattern in a CA is 1 cell per cycle,
> you are guaranteed to have the "background pattern" wherever you need
> it. Flaw fixed.
... oh, and to overcome the need for infinitely many cells you can of
course place a "cell factory" at each end of the string of cells, each
of which adds one more cell per cycle.
clipka wrote:
> You just added an oracle.
Yep. That's kind of the point I'm making.
>> I'm not convinced. I'm not sure either way, mind. It just seems flakey
>> and not something you can assume.
>
> Why not? Both CA and Turing machine are theoretical constructs anyway,
> so you can assume just about anything.
Why not? Because if I allow for infinite initialization, I can calculate
things with a CA that I can't calculate with a TM. You just pointed that out
above.
> There - you just eliminated the need for any pre-calculation. As the
> maximum propagation speed of any pattern in a CA is 1 cell per cycle,
> you are guaranteed to have the "background pattern" wherever you need
> it. Flaw fixed.
It's only "flaw fixed" if you want to claim that what you just created is
rule 110. That's like arguing that a 1-state TM is universal, because all
you have to do is add a few more states.
I'm not arguing that a CA can't be Turing complete. I'm arguing that
*Wolfram's* CA requires an infinite amount of initialization *before* it can
function as a UTM.
Given that a CA does an infinite amount of computation each step, I just
don't know if that's problematic or not. Naturally Wolfram says it isn't,
but I haven't heard it addressed anywhere, and I've looked.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 04.01.2011 21:56, Darren New wrote:
>> There - you just eliminated the need for any pre-calculation. As the
>> maximum propagation speed of any pattern in a CA is 1 cell per cycle,
>> you are guaranteed to have the "background pattern" wherever you need
>> it. Flaw fixed.
>
> It's only "flaw fixed" if you want to claim that what you just created
> is rule 110. That's like arguing that a 1-state TM is universal, because
> all you have to do is add a few more states.
No, not really. Your argument why the Rule 110 proof is flawed is based on
the assumption that its initialization /may/ be some kind of oracle. My
argument shows that this is not necessarily the case - and in particular
that it is /never/ the case if the CA is pre-initialized with a regular
pattern, as in that case you can always build a CA that simulates the
original CA without requiring infinite initialization; as the proof for
the Turing-completeness of Rule 110 uses such a regular pattern, there
is no oracle involved in the setup.
> I'm not arguing that a CA can't be Turing complete. I'm arguing that
> *Wolfram's* CA requires an infinite amount of initialization *before* it
> can function as a UTM.
And that's no problem, because CA are allowed to have that /by definition/.
Yes, that might make a CA /more/ powerful than a UTM. But that doesn't
mean it can't /simulate/ a UTM (thus making it Turing-complete), or that
its setup procedure /necessarily/ makes it any more difficult to actually
simulate in real life than a UTM, or whatever problem it is you have
with that infinite initialization. In some cases it might be, but in the
case of simulating a UTM it isn't.
> Given that a CA does an infinite amount of computation each step, I just
> don't know if that's problematic or not. Naturally Wolfram says it
> isn't, but I haven't heard it addressed anywhere, and I've looked.
Note that even if you kind of have infinite "ROM" in a CA, within
limited time you can only access a limited amount of it (at most 2*N+1
cells in a 1D automaton, where N is the number of steps you compute),
because the information needs to propagate to the place where the output
is to end up. So effectively you're only initializing a limited number
of cells, unless the CA runs infinitely without ever entering a
repeating sequence.
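A small Python sketch of that light-cone point, using Rule 110 just as a concrete radius-1 rule: the value of one cell after N steps depends on at most the 2N+1 initial cells centred on it, so the "infinite ROM" can be queried lazily, cell by cell. The periodic background used here is arbitrary.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def cell_after(initial, x, n):
    """State of cell x after n steps, reading only initial cells x-n .. x+n."""
    window = [initial(i) for i in range(x - n, x + n + 1)]        # 2n+1 cells
    for _ in range(n):
        window = [RULE_110[(window[i - 1], window[i], window[i + 1])]
                  for i in range(1, len(window) - 1)]             # shrinks by 2
    return window[0]

background = lambda i: (0, 1, 1)[i % 3]    # an "infinite" periodic initialization
print(cell_after(background, x=5, n=10))   # only 21 cells were ever looked at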
clipka wrote:
> And that's no problem, because CA are allowed to have that /by definition/.
That right there is what I'm asking about. I've never seen a definition that
allows for a CA to have an infinite initialization other than to a default
state. Do you have a citation to a definition that describes this? I am
having trouble coming up with a trivial way to define what would be the
allowable initialization and what would be the disallowed initialization
without making reference to something outside the realm of CAs. Perhaps if
you can describe its state based on a recursively enumerable (RE) function of its index or something?
Note that a TM that just gets its initial tape initialized to purely random
symbols is also strictly more powerful than a TM that gets its tape
initialized to a constant. The thing about the TM is you can show that if
you initialize one cell per step, or if you keep a single extra register
with the index of the highest-read cell, you can avoid doing an infinite
initialization to start. (I.e., this would be the equivalent of the
tack-initializers-on-the-ends approach you were talking about for the linear CA.)
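A minimal Python sketch of that lazy-initialization trick; the Tape class and its default rule are made up for illustration. As long as the infinite initial contents are given by a computable rule, e.g. a constant or a periodic pattern, cells can be filled in the first time the head reaches them, so no infinite up-front work is ever needed.

class Tape:
    """A two-way-infinite tape whose untouched cells follow a default rule."""
    def __init__(self, default=lambda i: 0):
        self.cells = {}            # only cells that were actually written
        self.default = default     # computable rule for "pre-initialized" content

    def read(self, i):
        return self.cells.get(i, self.default(i))   # initialized on demand

    def write(self, i, symbol):
        self.cells[i] = symbol

tape = Tape(default=lambda i: (0, 1, 1, 0)[i % 4])   # an "infinite" periodic tape
print(tape.read(10**9))        # no infinite up-front initialization ever happened
tape.write(3, 1)
print(tape.read(3), tape.read(4))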
> unless the CA runs infinitely without ever entering a
> repeating sequence.
You mean, like a UTM might? ;-)
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 1/4/2011 12:56 PM, Darren New wrote:
>
> Given that a CA does an infinite amount of computation each step, I just
> don't know if that's problematic or not. Naturally Wolfram says it
> isn't, but I haven't heard it addressed anywhere, and I've looked.
>
Right after the rule 110 UTM result was announced I read something
talking about exactly your argument:
http://cs.nyu.edu/pipermail/fom/2007-October/012156.html (response from
Tod Rowland here
http://forum.wolframscience.com/showthread.php?s=&threadid=1472)
I haven't read the proof, so I can't comment on it with any actual
knowledge, but this is the internet so I'll go ahead and say that my
impression is that "smallest universal machine" wasn't defined precisely
enough in the first place, and it seems, not
surprisingly, that you can "abuse" the initialization of the cells in a
CA. How one measures the "complexity" of such a machine seems to be a
matter of social convention more than anything else. I'd personally be
inclined not to allow such initializations, but clearly Wolfram is of a
different mind.
Kevin Wampler wrote:
> On 1/4/2011 12:56 PM, Darren New wrote:
>>
>> Given that a CA does an infinite amount of computation each step, I just
>> don't know if that's problematic or not. Naturally Wolfram says it
>> isn't, but I haven't heard it addressed anywhere, and I've looked.
>>
>
> Right after the rule 110 UTM result was announced I read something
> talking about exactly your argument:
> http://cs.nyu.edu/pipermail/fom/2007-October/012156.html (response from
> Tod Rowland here
> http://forum.wolframscience.com/showthread.php?s=&threadid=1472)
Cool. Altho I think we're talking more about the Cawley post on that page. I
understand the abstract, but I'm not sure what the implications are of
having CAs whose behavior is decidable on all finite initializations and
undecidable on an infinite repetitive initialization.
> How one measures the "complexity" of such a machine seems to be a
> matter of social convention more than anything else.
Well, I think it's more that one needs to define what is meant.
If you can prove that an infinite repetitive initialization doesn't ever
give you more power than a constant initialization (i.e., in a similar way
as you can prove that extra tapes don't help a TM be more universal), then
I'd accept the 110 proof. But I didn't know if the Cawley paper makes that
implication or not.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."