>> 1. It seems a logical and intuitive result.
>
> Uh, no?
Intuitive is in the eye of the beholder, is it not? Besides, you just
said yourself that there are CAs which are known to be Turing-complete;
why not this particular one?
>> 2. Nobody has disproved it.
>
> That's not how math works. "Hey, P=NP! Well, nobody disproved it."
Nobody has proved it either.
A question which hasn't been proven one way or the other is an open problem.
A question which has a proof one way, and would require only a single
counter-example to disprove the other way, I think can be considered proven.
Unless you're talking about the 4-colour problem...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> Intuitive is in the eye of the beholder, is it not? Besides, you just
> said yourself that there are CAs which are known to be Turing-complete;
> why not this particular one?
Because the CAs that are turing complete prove they are so by exhibiting
building blocks for (for example) XOR and NOT gates, from which we already
know how to assemble universal circuitry. In addition, the CAs that are known to be turing
complete don't require an infinite grid to run in but only an unbounded one.
Wolfram's CA requires an initialization of an infinite amount of data before
you start the CA in order for it to emulate a turing machine, as well as
doing an infinite amount of processing each step during the emulation.
Now, granted, a CA theoretically operates on an infinite amount of data each
step. I'd agree the rule is turing complete if (a) someone much smarter than
I agreed that an infinite amount of initialization is acceptable, or (b)
the rule did its own infinite initialization before it started computation.
>>> 2. Nobody has disproved it.
>>
>> That's not how math works. "Hey, P=NP! Well, nobody disproved it."
>
> Nobody has proved it either.
Right. And as far as I'm concerned, nobody proved Rule 30 is turing complete
either.
I mean, it's fine to say "I think it is possible to prove it even tho we
didn't yet", or "I think your objection based on infinite amounts of
initialization is not valid", or "I just have this gut feeling that it's
correct." But "because nobody has disproved it" doesn't add to any of
these. :-)
> A question which has a proof one way, and would require only a single
> counter-example to disprove the other way, I think can be considered
> proven.
Uh, no, there is no proof. There's a flawed proof. But (AFAIK) what he
proved the CA does isn't equivalent to a Turing machine. A proof that's
wrong, or that proves something other than what the author intended, isn't
"proof without a disproof". It's nothing.
Maybe I'm mistaken, and an infinite amount of initialization isn't
problematic to the proof. If so, some sort of citation to why would be handy.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
I'm a bit puzzled here. The thread starts with ellipses and then, all
of a sudden and out of nowhere, some completely obscure and unexplained
references are made about some mysterious "Rule 30" and something called
a "CA", whatever that means, without anybody explaining what they mean or
what they are referring to.
For all I know CA could refer to Captain America. Maybe he is Turing
Complete. (Don't know what "Rule 30" refers to, though.)
--
- Warp
Warp wrote:
> I'm a bit puzzled here. The thread starts with ellipses and then, all
> of a sudden and out of nowhere, some completely obscure and unexplained
> references are made about some mysterious "Rule 30" and something called
> a "CA", whatever that means, without anybody explaining what they mean or
> what they are referring to.
I'm pretty sure we discussed it before, but unless you're into this sort of
math, I can see where it would be confusing.
CA = cellular automata.
Rule 30 is an encoding of a linear CA's rules as a binary number. You can
take the rules of a linear CA and express them as a truth table, with one
output bit for each possible input combination the CA will consider. Read
those output bits off as a single binary number and you get the rule
number; for this rule, that number happens to be 30.
For similar examples, NOT is rule 2, AND is rule 1, OR is rule 7, and
XOR is rule 6.
XOR:
0 x 0 = 0
0 x 1 = 1
1 x 0 = 1
1 x 1 = 0
rule 6 -^  (read the output column top-to-bottom as binary: 0110 = 6)
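The encoding just described is only a few lines of Python; here's a hedged sketch using the same bit ordering as the table above (inputs listed in ascending order, output column read top-to-bottom as a binary number; note that Wolfram's own elementary-CA numbering lists the neighborhoods in descending order instead):

```python
def rule_number(outputs):
    """Encode a gate's truth table as a rule number by reading the
    output column top to bottom as a binary number (inputs listed
    in ascending order, as in the table above)."""
    n = 0
    for bit in outputs:
        n = (n << 1) | bit
    return n

# Output columns for inputs 00, 01, 10, 11 (or 0, 1 for NOT):
print(rule_number([0, 1, 1, 0]))  # XOR -> 6
print(rule_number([0, 0, 0, 1]))  # AND -> 1
print(rule_number([0, 1, 1, 1]))  # OR  -> 7
print(rule_number([1, 0]))        # NOT -> 2
```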
Wolfram constructed a pattern that uses rule 30 to emulate a turing machine
if you read the results down the page as the linear CA calculates across the
page. That's cool, except he has to put an infinite number of "clock bit"
patterns on the line first. I'm not sure that's legit, since rule 30 doesn't
actually generate its own clock pulses. If he had a rule that created the
clock pulses on an otherwise uniform starting state and which then changed
over to rule 30 afterwards, I'd have no complaints.
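As an aside, the "read the results down the page" picture is easy to reproduce with a minimal elementary-CA stepper (a sketch only, using Wolfram's standard numbering in which the neighborhood, read as a 3-bit value, indexes the rule number's bits; this is not the universality construction itself):

```python
def step(row, rule):
    """Advance a finite row of an elementary CA one generation,
    assuming a zero background beyond the edges.  Each cell's
    neighborhood (left, center, right) is read as a 3-bit value
    that selects the corresponding bit of the rule number."""
    padded = [0] + row + [0]
    return [(rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# "Read the results down the page": each printed line is one step.
row = [0] * 16 + [1] + [0] * 16
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row, 110)
```

Swapping 110 for 30 in the last line shows the chaotic class III behaviour that comes up later in the thread.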
But it's like saying "I have this Turing machine that emulates in only 2
states all other turing machines. But only if you start out on a tape where
each prime numbered cell has a non-empty symbol on it first."
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Darren New wrote:
> But it's like saying "I have this Turing machine that emulates in only 2
> states all other turing machines. But only if you start out on a tape
> where each prime numbered cell has a non-empty symbol on it first."
More specifically, it's like calling that two-state machine a "universal"
turing machine. All you need is an infinite amount of computation to prime
it. :-)
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Orchid XP v8 wrote:
>>> 1. It seems a logical and intuitive result.
>>
>> Uh, no?
>
> Intuitive is in the eye of the beholder, is it not? Besides, you just
> said yourself that there are CAs which are known to be Turing-complete;
> why not this particular one?
Are you *sure* that rule 30 is the right CA here? As far as I'm aware
(or could find via Google), rule 30 is *not* believed to be Turing
complete. It's entirely possible that I missed something, but it's a
class III automaton rather than a class IV CA, so there's a reason I'm
doubting this.
It seems more likely that the proof you're talking about was done for
rule 110, which is the CA that Wolfram proposes to be Turing complete,
and there was a later proof by Matthew Cook establishing this provided
proper tape initialization and processing is done.
If you do have a reference giving the proof for rule 30 I'd be
interested to see it since it doesn't strike me as an intuitive result
and thus I'd probably be able to learn something from it.
Kevin Wampler wrote:
> Are you *sure* that rule 30 is the right CA here?
No, actually. I thought that sounded wrong, and 110 sounds much more like
what I remember, but I wasn't about to dig out the 30-pound tome to check. :-)
> and there was a later proof by Matthew Cook establishing this provided
> proper tape initialization and processing is done.
Right. There's a proper proof *providing* the initialization of the first
state is done. That's what I'm not sure is valid. I'm not saying it isn't,
but I'm not convinced it is. AFAIR, the definition of a turing machine
requires anything past a finite initial portion of the tape to be all
initialized to the same symbol.
Sorry for any confusion.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Darren New wrote:
> Kevin Wampler wrote:
>> Are you *sure* that rule 30 is the right CA here?
>
> No, actually. I thought that sounded wrong, and 110 sounds much more
> like what I remember, but I wasn't about to dig out the 30 pound tome to
> check. :-)
I see. I had asked partially since if rule 30 isn't Turing complete it
provides a nice counterexample to the "it's a CA with non-trivial
behavior so it's probably Turing complete" viewpoint (which you were
also arguing against).
>> and there was a later proof by Matthew Cook establishing this provided
>> proper tape initialization and processing is done.
>
> Right. There's a proper proof *providing* the initialization of the
> first state is done. That's what I'm not sure is valid. I'm not saying
> it isn't, but I'm not convinced it is. AFAIR, the definition of a turing
> machine requires anything past a finite initial portion of the tape to
> be all initialized to the same symbol.
I swear I'd read a nice blog post about this at some point but I cannot
seem to find it again. As far as I remember the main gist was "you have
to be careful with the initialization since it's possible to combine two
non-Turing complete components and get a Turing complete result."
Ahh, I *think* this is what I was remembering:
http://forum.wolframscience.com/showthread.php?s=&threadid=1472
As near as I can tell, maybe it's best to view rule 110 as "weakly
universal" since the universality proof requires an infinite (and AFAIK
non-repetitive) initialization? That seems to convey the idea that it
has some interesting properties with respect to Turing completeness, but
that it's not quite the full-blown form of Turing completeness you'd
normally expect.
Darren New wrote:
> Wolfram constructed a pattern that uses rule 30 to emulate a turing
> machine
I should note that the proof (for rule 110) wasn't actually done by
Wolfram. It was done by Matthew Cook who was interning for Wolfram
Research at the time. Here's a snippet which will hopefully explain why
I think this is worth mentioning:
"The real problem with this result, however, is that it is not
Wolfram's. He didn't invent cyclic tag systems, and he didn't come up
with the incredibly intricate construction needed to implement them in
Rule 110. This was done rather by one Matthew Cook, while working in
Wolfram's employ under a contract with some truly remarkable provisions
about intellectual property. In short, Wolfram got to control not only
when and how the result was made public, but to claim it for himself. In
fact, his position was that the existence of the result was a trade
secret. Cook, after a messy falling-out with Wolfram, made the result,
and the proof, public at a 1998 conference on CAs. (I attended, and was
lucky enough to read the paper where Cook goes through the construction,
supplying the details missing from A New Kind of Science.) Wolfram, for
his part, responded by suing or threatening to sue Cook (now a penniless
graduate student in neuroscience), the conference organizers, the
publishers of the proceedings, etc."
From here: http://www.cscs.umich.edu/~crshalizi/reviews/wolfram/
Kevin Wampler wrote:
> I see. I had asked partially since if rule 30 isn't Turing complete it
> provides a nice counterexample to the "it's a CA with non-trivial
> behavior so it's probably Turing complete" viewpoint (which you were
> also arguing against).
Fair enough.
> I swear I'd read a nice blog post about this at some point but I cannot
> seem to find it again. As far as I remember the main gist was "you have
> to be careful with the initialization since it's possible to combine two
> non-Turing complete components and get a Turing complete result"
Yes, that was my concern. In particular, a TM with an infinite pattern on
the initial tape isn't obviously as simple as a similar TM with a finite
pattern on the initial tape. It's not clear to me whether the same is true
of a CA that does an infinite computation at every step to start with.
How much of the human's understanding of what the pattern means is really
involved? How much of the infinite pattern being unimportant is just the
human's understanding that "oh, that's just the clock pulse". How much of
that infinite complexity is dismissed only because of "understanding" which
is specifically denied to the formalism?
For example, a TM with any finite initial pattern on the tape can be
trivially turned into a finite TM with a completely blank tape to start,
simply by prepending a series of instructions to write the appropriate
pattern to the tape. This isn't true of an infinite pattern, even if it's
repetitive. It *is* true of a CA, but you'd have to change the rule of the
CA in the middle of evaluation, and there's no obvious way in the formalism
to do that, since CAs don't keep any sort of state (corresponding to the
head position and state number) outside the actual "tape." I just never got
into the whole deal enough to know the answers to such questions, and I'm
not sure it even makes a difference. It would be interesting to know whether
there's a proof or a disproof of whether an infinite repetitive initial tape
for a TM makes a difference in what you can compute.
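The "prepend instructions to write the pattern" trick can be sketched concretely. This is my own illustration, not anything from the thread: the transition-table format, state names, and helper functions are invented for the example, and it assumes the pattern contains no blank symbols so the rewind phase can find the pattern's left edge.

```python
def blank_start(tm, start, init, blank=0):
    """Given a TM as a transition table {(state, sym): (state', sym', move)}
    with start state `start`, build an equivalent machine that begins on an
    all-blank tape by prepending writer states that lay down the finite
    pattern `init` and then rewind to its left end.  Moves: +1 right, -1
    left.  Assumes `init` contains no blank symbols."""
    new = dict(tm)
    for i, sym in enumerate(init):
        # Writer states lay the pattern down, moving right.
        new[("write%d" % i, blank)] = ("write%d" % (i + 1), sym, +1)
    # Past the pattern: turn around and walk back over it.
    new[("write%d" % len(init), blank)] = ("rewind", blank, -1)
    for sym in set(init):
        new[("rewind", sym)] = ("rewind", sym, -1)
    # One cell left of the pattern: step right and hand off to `start`.
    new[("rewind", blank)] = (start, blank, +1)
    return new, "write0"

def run(tm, state, steps, tape=None):
    """Minimal runner: the tape is a dict from position to symbol;
    halting is modeled as having no transition for the current key."""
    tape, pos = dict(tape or {}), 0
    for _ in range(steps):
        key = (state, tape.get(pos, 0))
        if key not in tm:
            break  # no transition: halt
        state, tape[pos], move = tm[key]
        pos += move
    return tape

# Toy machine: in state "s", overwrite each 1 with a 2, moving right.
tm = {("s", 1): ("s", 2, +1)}
tm2, start2 = blank_start(tm, "s", [1, 1, 1])
tape = run(tm2, start2, 50)
print([tape.get(i, 0) for i in range(3)])  # -> [2, 2, 2]
```

Exactly as the paragraph above says, nothing like this works when the initial pattern is infinite: the writer phase would never terminate.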
> Ahh, I *think* this is what I was remembering:
> http://forum.wolframscience.com/showthread.php?s=&threadid=1472
That seems to be talking about TMs, not CAs. Two different beasts when
talking about infinite tapes, since a CA does infinite computation at every
step, while a TM does not. I'm also almost certain that the Rule 110
universal machine starts with a repeating pattern (other than the "program")
so would fall under the "computational properties of linear cellular
automata on configurations that differ from spatially periodic ones in only
finitely many places" rule. Since the author seems to claim (in the
abstract) that computation "is the same on these configurations as on the
space of finite configurations", it would seem they think they proved that
Rule 110 is indeed turing complete in spite of needing infinite
initialization.
The 2,3 TM is interesting only in that it's tiny. But if you're going to
argue that simple systems can make really complex results, the fact that a
2,3 TM can do so doesn't seem that much more surprising than a 3,5 TM for
example. Especially since all the TMs require sophisticated interpretation
of their results to support any of these proofs - you already have all kinds
of encoding issues outside the TM to interpret what's on the tape to figure
out the isomorphisms with other machines.
I think Wolfram's wanking with his smallest-TM contests in a way that
doesn't really add anything to his basic premise.
Interesting links tho. I'll have to read them in my copious spare time. ;-)
> As near as I can tell, maybe it's best to view rule 110 as "weakly
> universal" since the universality proof requires an infinite (and AFAIK
> non-repetitive) initialization? That seems to convey the idea that it
> has some interesting properties with respect to Turing completeness, but
> that it's not quite the full-blown form of Turing completeness you'd
> normally expect.
Sure. I just didn't find it interesting, given there *are* CAs that are
known to be (and easily provable to be) turing complete. OK, so now there's
a linear one. So? Only Wolfram crowing about how trivial it is made it
interesting, and I didn't consider it to be particularly trivial if it
needed an infinite amount of initialization. The Conway CAs that are indeed
turing complete have initial states that are scaled relative to the size of
the turing machine they're emulating, so the fact that they're "big" in some
absolute measure isn't all that interesting. They're still only 4 rules, at
absolute worst a 512-range CA instead of a 256-range (or was Wolfram's a
128-range?).
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?