On 04.01.2011 21:08, clipka wrote:
>
> Actually you don't even need pre-initialized cells (or infinitely many
> cells, for that matter) any more than you do with a Turing machine, if
> you go for a CA with more than just two states.
>
> All you need is a set of special states to represent some
> "initialization area" which you place on both sides of your "payload
> data", defined in such a way that it moves outward at 1 cell per cycle
> while auto-generating the desired "background pattern".
>
> There - you just eliminated the need for any pre-calculation. As the
> maximum propagation speed of any pattern in a CA is 1 cell per cycle,
> you are guaranteed to have the "background pattern" wherever you need
> it. Flaw fixed.
... oh, and to overcome the need for infinitely many cells you can of
course place a "cell factory" at each end of the string of cells, each
of which adds one more cell per cycle.
clipka wrote:
> You just added an oracle.
Yep. That's kind of the point I'm making.
>> I'm not convinced. I'm not sure either way, mind. It just seems flakey
>> and not something you can assume.
>
> Why not? Both CA and Turing machine are theoretical constructs anyway,
> so you can assume just about anything.
Why not? Because if I allow for infinite initialization, I can calculate
things with a CA that I can't calculate with a TM. You just pointed that out
above.
> There - you just eliminated the need for any pre-calculation. As the
> maximum propagation speed of any pattern in a CA is 1 cell per cycle,
> you are guaranteed to have the "background pattern" wherever you need
> it. Flaw fixed.
It's only "flaw fixed" if you want to claim that what you just created is
rule 110. That's like arguing that a 1-state TM is universal, because all
you have to do is add a few more states.
I'm not arguing that a CA can't be Turing complete. I'm arguing that
*Wolfram's* CA requires an infinite amount of initialization *before* it can
function as a UTM.
Given that a CA does an infinite amount of computation each step, I just
don't know if that's problematic or not. Naturally Wolfram says it isn't,
but I haven't heard it addressed anywhere, and I've looked.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 04.01.2011 21:56, Darren New wrote:
>> There - you just eliminated the need for any pre-calculation. As the
>> maximum propagation speed of any pattern in a CA is 1 cell per cycle,
>> you are guaranteed to have the "background pattern" wherever you need
>> it. Flaw fixed.
>
> It's only "flaw fixed" if you want to claim that what you just created
> is rule 110. That's like arguing that a 1-state TM is universal, because
> all you have to do is add a few more states.
No, not really. Your argument that the Rule 110 proof is flawed rests on
the assumption that its initialization /may/ be some kind of oracle. My
argument shows that this is not necessarily the case, and in particular
that it is /never/ the case when the CA is pre-initialized with a regular
pattern: in that case you can always build a CA that simulates the
original one without requiring infinite initialization. Since the proof
of the Turing-completeness of Rule 110 uses such a regular pattern, there
is no oracle involved in the setup.
> I'm not arguing that a CA can't be Turing complete. I'm arguing that
> *Wolfram's* CA requires an infinite amount of initialization *before* it
> can function as a UTM.
And that's no problem, because CA are allowed to have that /by definition/.
Yes, that might make a CA /more/ powerful than a UTM. But that doesn't
mean it can't /simulate/ a UTM (thus making it Turing-complete), or that
its setup procedure /necessarily/ makes it any harder than a UTM to
actually simulate in real life, or whatever your problem with the
infinite initialization is. In some cases it might; in the case of
simulating a UTM it doesn't.
> Given that a CA does an infinite amount of computation each step, I just
> don't know if that's problematic or not. Naturally Wolfram says it
> isn't, but I haven't heard it addressed anywhere, and I've looked.
Note that even if you kind of have infinite "ROM" in a CA, within
limited time you can only access a limited amount of it (at most 2*N+1
cells in a 1D automaton, where N is the number of steps you compute),
because the information needs to propagate to the place where the output
is to end up. So effectively you're only initializing a limited number
of cells, unless the CA runs infinitely without ever entering a
repeating sequence.
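The 2*N+1 bound above is easy to check numerically. The following sketch (my own, not from the thread; tape size and observed cell are arbitrary choices) runs Rule 110 from two random initial tapes that agree only on the 2*N+1 cells around position p, and confirms that cell p holds the same value in both after N steps.

```python
import random

RULE = 110

def step(cells):
    """One synchronous Rule 110 update on a fixed-length tape. Cells
    beyond the ends are treated as 0, which is harmless here because
    the observed cell's light cone never reaches the boundary."""
    n = len(cells)
    return [(RULE >> ((cells[i - 1] if i > 0 else 0) * 4
                      + cells[i] * 2
                      + (cells[i + 1] if i < n - 1 else 0))) & 1
            for i in range(n)]

N, p = 10, 40                       # steps to run, cell we observe
random.seed(1)
a = [random.randint(0, 1) for _ in range(81)]
b = [random.randint(0, 1) for _ in range(81)]
for i in range(p - N, p + N + 1):   # agree only on 2*N+1 cells around p
    b[i] = a[i]
for _ in range(N):
    a, b = step(a), step(b)
assert a[p] == b[p]  # p's final value depends only on its light cone
```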
clipka wrote:
> And that's no problem, because CA are allowed to have that /by definition/.
That right there is what I'm asking about. I've never seen a definition
that allows a CA to have an infinite initialization other than to a
default state. Do you have a citation for a definition that describes
this? I'm having trouble coming up with even a trivial way to define
which initializations are allowable and which are disallowed without
making reference to something outside the realm of CAs. Perhaps you can
describe each cell's state as an RE function of its index or something?
Note that a TM that just gets its initial tape initialized to purely
random symbols is also strictly more powerful than a TM that gets its
tape initialized to a constant. The thing about the TM is that you can
show that if you initialize one cell per step, or if you keep a single
extra register with the index of the highest cell read so far, you can
avoid doing an infinite initialization up front. (I.e., this would be
the equivalent of the tack-initializers-on-the-ends you were talking
about for the linear CA.)
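The lazy-initialization trick described above can be sketched like this (illustrative names only, not any standard construction): the background pattern is materialized per cell only when the head first reads it, so no infinite up-front initialization is ever performed.

```python
class LazyTape:
    """Tape initialized to a background pattern "on demand": a cell's
    background value is computed from its index the first time it is
    read, so only finitely many cells are ever touched."""
    def __init__(self, pattern):
        self.pattern = pattern   # background: cell index -> symbol
        self.written = {}        # cells the machine has overwritten

    def read(self, i):
        return self.written.get(i, self.pattern(i))

    def write(self, i, symbol):
        self.written[i] = symbol

# An "infinite" periodic background (period 3), plus one explicit write:
tape = LazyTape(lambda i: i % 3)
tape.write(5, 9)
```

After any finite number of steps the machine has only ever looked at finitely many cells, which is exactly why this is equivalent to the pre-initialized infinite tape.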
> unless the CA runs infinitely without ever entering a
> repeating sequence.
You mean, like a UTM might? ;-)
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 1/4/2011 12:56 PM, Darren New wrote:
>
> Given that a CA does an infinite amount of computation each step, I just
> don't know if that's problematic or not. Naturally Wolfram says it
> isn't, but I haven't heard it addressed anywhere, and I've looked.
>
Right after the rule 110 UTM result was announced I read something
talking about exactly your argument:
http://cs.nyu.edu/pipermail/fom/2007-October/012156.html (response from
Tod Rowland here
http://forum.wolframscience.com/showthread.php?s=&threadid=1472)
I haven't read the proof, so I can't comment on it with any actual
knowledge, but this is the internet, so I'll go ahead and say that my
impression is that "smallest universal machine" wasn't defined precisely
enough in the first place, and it seems, not surprisingly, that you can
"abuse" the initialization of the cells in a CA. How one measures the
"complexity" of such a machine seems to be a matter of social convention
more than anything else. I'd personally be inclined not to allow such
initializations, but clearly Wolfram is of a different mind.
Kevin Wampler wrote:
> On 1/4/2011 12:56 PM, Darren New wrote:
>>
>> Given that a CA does an infinite amount of computation each step, I just
>> don't know if that's problematic or not. Naturally Wolfram says it
>> isn't, but I haven't heard it addressed anywhere, and I've looked.
>>
>
> Right after the rule 110 UTM result was announced I read something
> talking about exactly your argument:
> http://cs.nyu.edu/pipermail/fom/2007-October/012156.html (response from
> Tod Rowland here
> http://forum.wolframscience.com/showthread.php?s=&threadid=1472)
Cool. Although I think we're talking more about the Cawley post on that
page. I understand the abstract, but I'm not sure what the implications
are of having CAs whose behavior is decidable on all finite
initializations and undecidable on an infinite repetitive initialization.
> How one measures the "complexity" of such a machine seems to be a
> matter of social convention more than anything else.
Well, I think it's more that one needs to define what is meant.
If you can prove that an infinite repetitive initialization doesn't ever
give you more power than a constant initialization (i.e., in a similar
way to how you can prove that extra tapes don't make a TM any more
powerful), then I'd accept the 110 proof. But I didn't know if the
Cawley paper makes that implication or not.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
On 04.01.2011 23:48, Darren New wrote:
> clipka wrote:
>> And that's no problem, because CA are allowed to have that /by
>> definition/.
>
> That right there is what I'm asking about. I've never seen a definition
> that allows for a CA to have an infinite initialization other than to a
> default state. Do you have a citation to a definition that describes
> this? I am having trouble coming up with a trivial way to define what
> would be the allowable initialization and what would be the disallowed
> initialization without making reference to something outside the realm
> of CAs. Perhaps if you can describe its state based on a RE function of
> its index or something?
"An initial state (time t=0) is selected by assigning a state for each
cell."
(Wikipedia)
Isn't that plain enough? You assign a state for /each/ cell. CAs don't
seem to place any limitations on this.
Just like you "create" a TM tape that is initially empty except for a
limited section, you "create" a CA with an explicit initial state for
/each/ cell. How you pick that state per cell is purely a CA design
decision, and it doesn't matter whether you pull that state out of thin
air, already providing (part of) your solution. Just name /any/ rule for
that pattern. I guess the only limitation is that if you define the
initial state at time t=0 as a function of any state at time t>0, you
have a CA with an oracle, and may in fact find yourself running straight
into a paradox.
On 22/12/2010 11:07 PM, Darren New wrote:
> Initializing an infinite tape for a
> turing machine at least makes it more powerful than one with an
> uninitialized tape.
OK.
> And rule 110 needs an infinitely initialized tape.
I'm still not convinced that this is actually necessary.
In other words, I suspect that rule 110 *is* Turing complete, it's just
that the proof provided is a bit weak.
clipka wrote:
> "An initial state (time t=0) is selected by assigning a state for each
> cell."
>
> (Wikipedia)
>
> Isn't that plain enough? You assign a state for /each/ cell. CA's don't
> seem to make any limitations on this.
Well, from that same page:
"""
It is usually assumed that every cell in the universe starts in the same
state, except for a finite number of cells in other states, often called a
configuration. More generally, it is sometimes assumed that the universe
starts out covered with a periodic pattern, and only a finite number of
cells violate that pattern. The latter assumption is common in
one-dimensional cellular automata.
"""
So you "generally" don't start with an infinite aperiodic pattern.
What I was really interested in was a proof that starting with a periodic
pattern (except a finite number of cells) didn't give more computing power
to the CA than starting with a single state (except a finite number of
cells). Because Wolfram is arguing about the computational power of Rule 110.
> Just name /any/ rule for that pattern
I don't think that works, because then it lets the CA calculate stuff a TM
cannot, and I think that would lead to a trivial disproof of the
Church-Turing thesis.
Now, a pattern that differs from a regular repeating pattern in only a
finite number of places? Sure, that's likely good.
But if Wikipedia says a repeating pattern for linear CAs is common, I'll
assume that's sufficient to quell my disquiet with the Rule 110 proof in
that regard. I'd still like to see a proof that having the repeating
pattern doesn't add computational power, which is what the Cawley paper
was apparently discussing. I'll have to take time to look at that closer.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."
Invisible wrote:
> In other words, I suspect that rule 110 *is* Turing complete, it's just
> that the proof provided is a bit weak.
I suspect you can build a linear CA that's Turing complete. I'm not
convinced that 110 without the infinite initialization can do that.
--
Darren New, San Diego CA, USA (PST)
Serving Suggestion:
"Don't serve this any more. It's awful."