Warp wrote:
> Darren New <dne### [at] san rr com> wrote:
>> Warp wrote:
>>> I don't think so. The very definition of "deterministic" is predictability.
>
>> That's where we disagree. It's completely possible to be both deterministic
>> and unpredictable. Indeed, that's exactly what the halting problem is all about.
>
> But a deterministic program will always behave the same way. It doesn't
> change its behavior from one execution to another. Thus it behaves predictably.
> "It will do the same thing it did the last time."
Only if presented with the exact same input. And you have to run it at
least once to find out what that result is. And, in the real world, once
you've run it, its previous choice has become part of the input for the next
run. You cannot present a human with the same choice twice.
A Turing machine you couldn't start in a known state would be both
deterministic and unpredictable.
Now, I guess you could investigate free will if you could travel in time and
see whether the same events lead to the same results, but that would require
there being no randomness either.
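A minimal Python sketch of this distinction (the Collatz iteration is my illustrative choice, not something from the thread): the function below is perfectly deterministic, since the same input always produces the same step count, yet no known method predicts that count short of actually running the iteration.

```python
def collatz_steps(n: int) -> int:
    """Deterministic: the same n always yields the same count.
    Unpredictable: no known way to compute the count without
    actually running the iteration."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Re-running with the same input always gives the same answer.
assert collatz_steps(27) == collatz_steps(27)
```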
>>> The very word itself is saying so. It's the opposite of "non-deterministic",
>>> which is unpredictability.
>
>> Also not quite true. Non-deterministic Turing machines are very predictable
>> in their behavior.
>
> If it's non-deterministic, you cannot say how it will behave the next time
> it will be executed.
You can if it has exactly the same input. (Assuming there's only one
shortest path to the halting state, of course.)
I'm just pointing out that "non-deterministic" doesn't technically mean it
behaves differently each time. It simply means it's unpredictable *before*
you run the program. With the same input, an NDTM will run the same steps
each time. You just can't tell what those steps are before you run it.
>>> A chain of events is deterministic if it happens in a certain way because
>>> there's no other way it could have happened. If the exact same initial setup
>>> can be replicated, then the chain of events will happen in the exact same
>>> way again, completely predictably. That's the very definition of
>>> deterministic.
>
>> Yet, oddly, NDTMs behave that way. ;-)
>
> If it always behaves the same way, isn't it by definition deterministic?
Nope.
An NDTM has instructions that could do either X or Y when they're in the
same state with the same input. That's the "nondeterministic" part. But the
machine is required to execute the X (or the Y, whichever) that will cause
it to halt in the fewest steps. Which will it follow? You can't
tell, because you can't solve the halting problem. If it *does* halt, and
you feed it the same input again, will it follow the same steps again? Yes.
(Unless there are two paths that each lead to halting in the same number of
steps, at which point you really don't *care*, by the definition of the
math.) I.e., an NDTM will factor the number 18372819 the same way every
time: it'll guess that number's factors, and print them to the tape. You
just can't tell by looking at the machine what it'll guess.
I don't think you can tell if something's deterministic or not if you can't
examine its inner workings and you can't feed it the same input twice. Both
of these are true for people.
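The NDTM semantics described above can be sketched in Python (a toy model of my own; `ndtm_run` and the little numeric machine are invented for illustration): nondeterministic choice is resolved by searching all branches breadth-first and taking the shortest path to a halting state, so the same input yields the same run every time, even though you can't name that run without doing the search.

```python
from collections import deque

def ndtm_run(start, successors, is_halting):
    """Breadth-first search over the nondeterministic choices.
    Returns a shortest path to a halting state, which is what
    the NDTM is defined to 'execute'."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if is_halting(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # this input never halts

# Toy machine: from n, nondeterministically pick n-1 or n//2; halt at 0.
succ = lambda n: [n - 1, n // 2]
halt = lambda n: n == 0
run1 = ndtm_run(18, succ, halt)
run2 = ndtm_run(18, succ, halt)
assert run1 == run2  # same input, same steps every time
```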
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
> Mass and energy are quantized, and thus there's only a finite number of
> ways the balls can act. It doesn't matter how many possibilities there
> are, they are still finite.
Down at that level of detail, when stuff collides there is some random noise
due to the unpredictability of where sub-atomic particles will be and at
what velocity. Unless you can predict that (currently I believe it is
thought to be impossible) then you have no chance of exactly predicting the
outcome of a collision between two balls. The best bet you have will be to
say that within a certain percentage confidence the ball will be in a
certain range of positions, and as there are more collisions and the game
goes on, the range of the expected position will grow exponentially. It
probably means that even with perfect knowledge of the starting positions,
after 10 shots with multiple collisions the best you could say is that
you're 99% sure the balls will be on the table :-)
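The exponential-growth point above can be put in numbers with a toy iteration (the amplification factor and initial error below are assumed for illustration, not measured): if each collision with a convex ball multiplies the angular uncertainty by some factor k > 1, the error after n collisions is k**n times the initial one, and it swamps the table after a handful of shots.

```python
# Illustrative only: the amplification factor and initial error
# are assumed values, not physical measurements.
AMPLIFY = 30.0             # per-collision error amplification (assumed)
initial_error_rad = 1e-10  # tiny initial angular uncertainty (assumed)

error = initial_error_rad
for collision in range(1, 11):
    error *= AMPLIFY
    print(f"after collision {collision:2d}: ~{error:.1e} rad")
# After 7 collisions the error already exceeds 1 radian: the
# trajectory is effectively unpredictable even from near-perfect
# initial data.
```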
Warp wrote:
> You are not deluded into thinking that something happened even though it
> didn't. You are deluded into thinking that it happened because you chose
> for it to happen, rather than it happening just as a consequence of previous
> events. If it happened as a consequence, it's not a choice.
>
Snort. Tell me something I don't know. It's not like I, at least, am not
perfectly aware that studies on how the brain thinks have shown
"awareness" as a post hoc attempt to invent justification for an
action which was already beginning even before we became consciously
aware of it. Sentience is a deterministic machine, basing its actions on
thousands of independent *processes*, all of which take place before
awareness, lying to itself about why it did something. It's a useful lie,
since sometimes the more integrated "conscious" mind can filter multiple
alternate paths which might have been chosen, and ignored data, and
tell the action, which is already in progress, to halt, on the basis of
higher-level reasoning, including recognition that the action in
progress is dangerous or unwanted. The fiction is necessary, to make
sure that false actions, which can harm the individual or things it
values, do not cause the harm which was not directly perceptible on the
*simpler* levels. Duh!
--
void main () {
    if version = "Vista" {
        call slow_by_half();
        call DRM_everything();
        call functional_code();
    }
    else {
        call crash_windows();
    }
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>
Patrick Elliott wrote:
> perfectly aware that studies on how the brain thinks have shown
> "awareness" as a post hoc attempt to invent justification for an
> action,
Which isn't at odds with the supposition that there's something supernatural
involved in helping people make "free will" choices. Perhaps the
supernatural part is what starts the chain of events.
> Sentience is a deterministic machine,
I don't think you know that either. :-) Certainly there's room for quantum
effects, even if you leave out the supernatural.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
scott wrote:
> collisions and the game goes on, the range of the expected position will
> grow exponentially.
In particular, it's caused to a great extent by the balls being convex,
thereby amplifying any difference between predicted and actual.
> It probably means that even with perfect knowledge
> of the starting positions, after 10 shots with multiple collisions the
> best you could say is that you're 99% sure the balls will be on the
> table :-)
The calculation I remember was after 15 shots you can't tell if any are on
the table. :-)
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Warp wrote:
> Maybe free will cannot exist, and is just an invented, artificial and
> ultimately false notion. An illusion.
Actually, I had a bit of an insight while food shopping this morning.
You're defining free will as something supernatural, and asking whether
that's an illusion.
I prefer to define free will as the illusion, and then assert "of course
we have free will."
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Darren New <dne### [at] san rr com> wrote:
> How complex can the computer get before you'd say it's a sentient being
> making a choice based on what it wants? If we had SciFi levels of AI
> around, would you claim they're not sentient, not really making choices?
One (maybe not completely physical) possibility would be if the
computer/brain is able to make decisions as a closed system. In other
words, it's capable of processing and changing information, and making
decisions without those decisions being the direct and inevitable
consequence of external input or quantum randomness. The decisions may
be *based* on the external input, but they are not the inevitable and
deterministic consequence of it. The computer/brain might be able to
use its own internal logic to make choices based on the input, but in a
way that from the outside it's impossible to predict which choices will
be made.
Can such a closed system exist in the physical world? Could that idea break
some laws of physics (eg. something along the lines that new information
cannot be generated in a closed system or something)?
--
- Warp
On 14 Sep 2009 15:03:06 -0400, Warp <war### [at] tag povray org> wrote:
>
>> IMO it matters not a jot if we have free will or not. My stance is that we
>> should act as if we do have free will. The other way of living is a good excuse
>> for bad behaviour, again IMO.
>
> Why would it be a good excuse? Regardless of whether you have free will or
> not, you can still feel pain. If you do bad things, you are very probably
> going to suffer some physical or mental pain as a consequence (eg. sent to
> jail or whatever). It makes no sense to go to jail just because you think
> you have no free will.
Now I was thinking about certain religious sects who believed that they were
the "chosen ones" and would go to heaven no matter what they did. It was
predestined.
But to answer your post: not all people would feel bad because they did bad
things. If they could justify their actions by saying that that was just the
way they were, then it was OK. Sad, but not their fault.
You may not have met people like that but, unfortunately, I have.
--
Regards
Stephen
Warp wrote:
> Darren New <dne### [at] san rr com> wrote:
>> How complex can the computer get before you'd say it's a sentient being
>> making a choice based on what it wants? If we had SciFi levels of AI
>> around, would you claim they're not sentient, not really making choices?
>
> One (maybe not completely physical) possibility would be if the
> computer/brain is able to make decisions as a closed system. In other
> words, it's capable of processing and changing information, and making
> decisions without those decisions being the direct and inevitable
> consequence of external input or quantum randomness.
I understand what you're saying. I'm not seeing how that addresses any of
those questions I asked.
Basically, I was trying to investigate what might be the cause of the
(assumed) presence of this non-physical mechanism that's present in humans
but not in rocks.
> The decisions may
> be *based* on the external input, but they are not the inevitable and
> deterministic consequence of it. The computer/brain might be able to
> use its own internal logic to make choices based on the input, but in a
> way that from the outside it's impossible to predict which choices will
> be made.
Sure. But for it to meet your definition, not just "impossible to predict"
but "supernatural." I.e., not arising from physical processes, right? I
mean, "based on quantum randomness" is also impossible to predict, but you
don't want to take that into account.
So you want basically for the mind to be able to react to something
unassociated with the brain. So those questions about an AI are
investigating what you mean by "mind" in the case it's not necessarily a
human or advanced animal.
> Can such a closed system exist in the physical world? Could that idea break
> some laws of physics (eg. something along the lines that new information
> cannot be generated in a closed system or something)?
Normally a "closed system" means something different than what you're
talking about - in particular, you wouldn't be able to observe the behavior
of a person that's a "closed system", whereas the result of the person
making the choice is obvious to people outside. If I choose to go to the
store today, the other shoppers are going to know that, so I'm no longer a
closed system. Unless I'm misunderstanding what you're trying to express.
Also, yes, I *think* the amount of information in a closed system can
neither go up nor down, hence the people worrying about the quantum effects
of black holes, holographic universes, and stuff like that. But that's
really beyond my understanding of QM.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Darren New <dne### [at] san rr com> wrote:
> > One (maybe not completely physical) possibility would be if the
> > computer/brain is able to make decisions as a closed system. In other
> > words, it's capable of processing and changing information, and making
> > decisions without those decisions being the direct and inevitable
> > consequence of external input or quantum randomness.
> I understand what you're saying. I'm not seeing how that addresses any of
> those questions I asked.
> Basically, I was trying to investigate what might be the cause of the
> (assumed) presence of this non-physical mechanism that's present in humans
> but not in rocks.
If the decisions made by a sentient being are
1) not random (in the quantum-mechanical sense), and
2) not a result of external influences, ie. not predictable
because the decisions are being made in a closed system rather than as a
consequence of the entire universe, then one could consider that sentient
being as having a will of its own, with choices which are not just a direct
consequence of external events, and this without necessarily having to
ascend above physics. (But, as I said, I'm not sure if this would break
some laws of physics regarding closed systems and what they can do.)
Of course if we examine the decisions from *inside* this closed system,
then we might find out that it is still completely bound to deterministic
and random consequences. However, from the *outside* it may be exactly as
if it were a being with true unbounded free will. (In other words, from
the outside it's impossible to say whether the decisions are being made by
supernatural or natural means.)
This would make the sentient being different from a rock, which does
not have such an internal closed decision-making system.
This might be somewhat similar to what you already wrote in some of your
replies, and maybe this is just your point sinking in.
> > The decisions may
> > be *based* on the external input, but they are not the inevitable and
> > deterministic consequence of it. The computer/brain might be able to
> > use its own internal logic to make choices based on the input, but in a
> > way that from the outside it's impossible to predict which choices will
> > be made.
> Sure. But for it to meet your definition, not just "impossible to predict"
> but "supernatural."
I didn't really require free choice to be supernatural. I only required
that it not be bound to previous events or randomness (else it wouldn't
really be free choice at all).
If a closed system I described is physically possible, then (I think) it
would perfectly *emulate* supernatural free will, even if it isn't really.
> > Can such a closed system exist in the physical world? Could that idea break
> > some laws of physics (eg. something along the lines that new information
> > cannot be generated in a closed system or something)?
> Normally a "closed system" means something different than what you're
> talking about - in particular, you wouldn't be able to observe the behavior
> of a person that's a "closed system", whereas the result of the person
> making the choice is obvious to people outside. If I choose to go to the
> store today, the other shoppers are going to know that, so I'm no longer a
> closed system. Unless I'm misunderstanding what you're trying to express.
Wouldn't it be a closed system if the internal decision process is
impossible to observe from the outside, no matter what kind of stimulus
is being applied? In other words, the responses are completely unpredictable,
without necessarily being random (in the quantum-mechanical sense).
--
- Warp