POV-Ray : Newsgroups : povray.off-topic : Weekly calibration
  Weekly calibration (Message 31 to 40 of 106)  
From: Darren New
Subject: Re: Weekly calibration
Date: 20 Apr 2009 17:08:50
Message: <49ece462$1@news.povray.org>
Mueen Nawaz wrote:
> 	OK. Not how I was interpreting it...

That's how the people in cryptography define it. :-) Basically, you can't 
have a random sequence. You can only have a random process. It's impossible 
to look at a sequence that has already been generated and decide if it's 
random or not.

> 	I know - just thought I'd point out it could be confusing.

Fair enough.

> 	Anyway, your original assertion was that a truly random sequence is
> necessarily a normal sequence. That doesn't match up with your
> definition of normal.

I didn't define the term "normal sequence", except by reference to "normal 
numbers", via Wikipedia.

> 	If I have a sequence derived from a Gaussian distribution, then it is
> truly random by the way you defined it. Given the whole sequence, up to
> a point, it doesn't tell me anything about what the next element could
> be (other than what is obvious - that it follows a Gaussian
> distribution).

Correct.

> However, for this sequence "every block of a particular
> length occurs with equal probability" does not hold.

I think we've been conflating "random" and "normal" in our 
conversation about distributions here. Not every random sequence is normal, 
and not all normal sequences are random. Sorry if I confused "the kind of 
random you assume idealized monkeys will generate on keyboards" with "any 
sort of random distribution".

To have a "normal number", you need linear distribution, basically. I.e., 
all substrings of a given length appear with equal asymptotic probability as 
the number of symbols you look at gets large. *Given* that, every sequence 
of any given length will appear if you let N be infinity.
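A quick numerical sketch of that claim (my own illustration, not from the post; the four-letter alphabet and block length are arbitrary):

```python
# Empirical check (illustration only): in a uniformly random symbol stream,
# all blocks of a given length appear with roughly equal frequency as the
# stream grows.
import random
from collections import Counter

random.seed(1)
alphabet = "abcd"
n = 200_000
stream = "".join(random.choices(alphabet, k=n))

# Count overlapping blocks of length 2; there are 4**2 = 16 possible blocks.
counts = Counter(stream[i:i + 2] for i in range(n - 1))
freqs = sorted(c / (n - 1) for c in counts.values())

# Each block's frequency should be near 1/16 = 0.0625.
print(len(counts), freqs[0], freqs[-1])
```

Running it shows all 16 frequencies clustered around 0.0625, and the cluster tightens as n grows.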

In other words, in order to avoid Shakespeare appearing, you would have to 
have a non-random distribution *because* the sequence is infinite and not 
just unbounded. In exactly the same way that 0*X is zero no matter how big X 
gets, until X is actually infinite.

(lim(X->inf) 0*X) =/= (0*inf)

You can't reason about it based on "no matter how big it gets, ..."

-- 
   Darren New, San Diego CA, USA (PST)
   There's no CD like OCD, there's no CD I knoooow!



From: Mueen Nawaz
Subject: Re: Weekly calibration
Date: 20 Apr 2009 17:54:01
Message: <49eceef9$1@news.povray.org>
Darren New wrote:
>>     If I have a sequence derived from a Gaussian distribution, then it is
>> truly random by the way you defined it. Given the whole sequence, up to
>> a point, it doesn't tell me anything about what the next element could
>> be (other than what is obvious - that it follows a Gaussian
>> distribution).
> 
> Correct.
> 
>> However, for this sequence "every block of a particular
>> length occurs with equal probability" does not hold.
> 
> I think we've been conflating "random" and "normal" in our
> conversation about distributions here. Not every random sequence is
> normal, and not all normal sequences are random. Sorry if I confused
> "the kind of random you assume idealized monkeys will generate on
> keyboards" with "any sort of random distribution".

	Not sure I'm getting it. Just to clarify, I'm using the word normal in
the way you originally meant it (normal numbers). I'm using normal
sequence in the same sense.

	I was merely pointing out that you originally said:

"Given that truly random sequences are normal, and in a normal sequence
every block of a particular length occurs with equal probability"

	I was giving you an example of a truly random sequence that was _not_
normal.

> To have a "normal number", you need linear distribution, basically.
> I.e., all substrings of a given length appear with equal asymptotic
> probability as the number of symbols you look at gets large. *Given*
> that, every sequence of any given length will appear if you let N be
> infinity.

	Agree - other than with the use of the word "will".<G> What I mean is
that what you say is how I understand what probability theory says.

> In other words, in order to avoid Shakespeare appearing, you would have
> to have a non-random distribution *because* the sequence is infinite and

	I think I get the general idea, but to nitpick, you can avoid it even
with a random distribution. If I have a distribution where the
probability of typing the letter 'e' is zero, it's still random
(not "truly" random).

	I suppose you may object to my referring to it as random, but it is
consistent with probability theory: A uniform distribution from 0 to 1
is a valid random distribution - even though you've excluded all numbers
greater than 1.


> not just unbounded. In exactly the same way that 0*X is zero no matter
> how big X gets, until X is actually infinite.
> 
> (lim(X->inf) 0*X) =/= (0*inf)

	That's really not helpful. I get quite fussy when people use infinity,
and insist on a lot of rigor (which I may not understand<G>) before I
accept anything said by it. The LHS is _always_ equal to 0 (assuming x
is a real number). It's not equal to the RHS because the RHS is
meaningless. It's undefined.

-- 
"Now we all know map companies hire guys who specialize in making map
folding a physical impossibility" - Adult Kevin Arnold in "Wonder Years"


                    /\  /\               /\  /
                   /  \/  \ u e e n     /  \/  a w a z
                       >>>>>>mue### [at] nawazorg<<<<<<
                                   anl



From: Darren New
Subject: Re: Weekly calibration
Date: 20 Apr 2009 19:03:36
Message: <49ecff48$1@news.povray.org>
Mueen Nawaz wrote:
> 	Not sure I'm getting it. Just to clarify, I'm using the word normal in
> the way you originally meant it (normal numbers). I'm using normal
> sequence in the same sense.
> 
> 	I was merely pointing out that you originally said:
> 
> "Given that truly random sequences are normal, and in a normal sequence
> every block of a particular length occurs with equal probability"
> 
> 	I was giving you an example of a truly random sequence that was _not_
> normal.

Yes. I was phrasing it sloppily there, since we hadn't talked about other 
distributions at that point in the conversation. To clarify, a "normal 
sequence" in the way I'm using it means all possible subsequences have the 
same probability distribution in the asymptote. Hence, you can't take a 
Gaussian distribution of symbols and expect to get a normal sequence from them.

> 
>> To have a "normal number", you need linear distribution, basically.
>> I.e., all substrings of a given length appear with equal asymptotic
>> probability as the number of symbols you look at gets large. *Given*
>> that, every sequence of any given length will appear if you let N be
>> infinity.
> 
> 	Agree - other than with the use of the word "will".<G> What I mean is
> that what you say is how I understand what probability theory says.

OK. I don't think I know enough math to convince someone else of it. I'm 
just taking it on authority. It would seem to be the sort of thing amenable 
to proof, so when a math text says it's so, I'll take their word for it. :-) 
Especially when they point out a bunch of other hypotheses where they say 
"We think so but haven't proven it."

>> In other words, in order to avoid Shakespeare appearing, you would have
>> to have a non-random distribution *because* the sequence is infinite and
> 
> 	I think I get the general idea, but to nitpick, you can avoid it even
> with a random distribution. If I have a distribution where the
> probability of typing the letter 'e' is zero, it's still random
> (not "truly" random).

No, it's not random. If you go an infinite number of symbols and "e" doesn't 
appear, then "e" doesn't have the same frequency in the asymptote as 
something that *does* appear an infinite number of times.

What you do in that case is remap the output to encode fewer letters of 
English per bit, and interpret it that way. That's the equivalent of reading 
the number in some other base (binary/hex/etc).  I'm dealing with English as 
base-26 or base-127 or whatever character set you want to use.

> 	I suppose you may object to my referring to it as random, but it is
> consistent with probability theory: A uniform distribution from 0 to 1
> is a valid random distribution - even though you've excluded all numbers
> greater than 1.

Sure. But we're starting with the assumption that all the letters of the 
alphabet can be typed. If you give the monkeys a typewriter with no "e" key, 
of course they're not going to type out Shakespeare.

And you can't say "they might never type the word 'the'", because if they 
typed "th" and you knew they never typed "the", then you again wouldn't have 
a random sequence: you could predict each remaining letter with probability 
1/25 instead of 1/26.

> It's not equal to the RHS because the RHS is
> meaningless. It's undefined.

It's not meaningless. It's merely not a number. That doesn't mean you can't 
work with it. Just like you can work with math where you have functions that 
are uncomputable, corresponding to functions that get stuck in infinite 
loops in a programming language.

In any case, therefore, the statement still holds, because 0 is defined and 
0*inf is not.

But I think I've exhausted my ability to convince you that Shakespeare 
necessarily appears in the output, if the output is infinite and making the 
other normal assumption that each possible character is typed with equal 
probability. :-)

-- 
   Darren New, San Diego CA, USA (PST)
   There's no CD like OCD, there's no CD I knoooow!



From: Darren New
Subject: Re: Weekly calibration
Date: 20 Apr 2009 19:15:48
Message: <49ed0224$1@news.povray.org>
Darren New wrote:
> Yes. I was phrasing it sloppily there, since we hadn't talked about 
> other distributions at that point in the conversation. To clarify, a 
> "normal sequence" in the way I'm using it means all possible 
> subsequences have the same probability distribution in the asymptote. 

To clarify more, this includes sequences of length 1. So the mathematical 
proof I read a long time ago (and which I don't think I really followed at 
the time) basically said "if you have an equal probability of picking each 
symbol at random, and you string together an infinite number of those 
symbols, then you have an equal probability of picking any particular 
substring of any particular length." Hence, since there are an infinite 
number of substrings of length (size of Shakespeare) and *something* in 
there must appear an infinite number of times, Shakespeare too must appear 
an infinite number of times.
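A toy illustration of the same idea (my addition, not part of Darren's cited proof; the target word and stream length are arbitrary): in a long uniform stream over a 26-letter alphabet, a short target turns up at a rate close to 26^-3 per position, so its expected count grows linearly with the stream length.

```python
# Illustration only: count occurrences of a short target in a uniformly
# random letter stream and compare with the predicted rate of 26**-3.
import random
import string

random.seed(7)
n = 2_000_000
stream = "".join(random.choices(string.ascii_lowercase, k=n))

target = "cat"  # "cat" cannot overlap itself, so str.count is exact here
count = stream.count(target)
expected = n * 26.0 ** -len(target)  # about 114 for n = 2,000,000
print(count, round(expected, 1))
```

The observed count lands near the prediction, and doubling n roughly doubles both.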

The heights of people aren't "truly random" even tho they might be 
statistically distributed. It depends on what you ignore when you do the 
measurements.

The difference between arbitrary math (unrelated to the universe) and 
statistics and science (related to the universe) that you described can be 
attributed to what you ignore when you take your measurements. It's actually 
kind of fascinating to think on. Someone once convinced me that subatomic 
particles like electrons aren't just fungible but actually identical, but I 
don't recall what the proof was. It was just logic and relativity and stuff 
like that when you got down to it.

-- 
   Darren New, San Diego CA, USA (PST)
   There's no CD like OCD, there's no CD I knoooow!



From: Mueen Nawaz
Subject: Re: Weekly calibration
Date: 20 Apr 2009 20:29:38
Message: <49ed1372@news.povray.org>
Darren New wrote:
> Yes. I was phrasing it sloppily there, since we hadn't talked about
> other distributions at that point in the conversation. To clarify, a
> "normal sequence" in the way I'm using it means all possible
> subsequences have the same probability distribution in the asymptote.
> Hence, you can't take a Gaussian distribution of symbols and expect to
> get a normal sequence from them.

	OK - Agreed.

>>     Agree - other than with the use of the word "will".<G> What I mean is
>> that what you say is how I understand what probability theory says.
> 
> OK. I don't think I know enough math to convince someone else of it. I'm
> just taking it on authority. It would seem to be the sort of thing

	There's really nothing to convince. I'm agreeing with you that
probability theory states the same thing that you state. I have no
_mathematical_ argument against what you're saying (other than the
nitpicking - I agree with your math). I'm just uncomfortable with this
application of probability theory to the "real" world.

	See, in mathematics, everywhere I've seen infinity used with rigor (and
where I understood it), behind all the formalism is basically a
definition of what they mean. Sometimes it is intuitive, but there's no
guarantee that it will conform to reality. Indeed, it is often
impossible to compare it to reality, given the absence of infinities (or
at least our ability to measure them). We can say the cardinality of the
set of natural numbers is that of the rationals, but that's not a
universal truth in the "real" world - it's just a consequence of how we
define cardinality (and numbers).

	It's because of stuff like this that you get the Banach-Tarski paradox
(among a bunch of paradoxes). It's due to the Axiom of Choice, which is
quite intuitive to most people. If the BT paradox seems impossible in
the real world, you'll have to give up on the A of C, which seems
counterintuitive.

	Now in probability theory, it's really just mathematics where we're
manipulating numbers and symbols. For cases where we deal with finite
and discrete quantities, probability theory seems to agree with reality.
If it states the probability of something is 0, we interpret that to
mean it will definitely _not_ happen (unless our modeling was off).
Likewise, if it states it is 1, we interpret that to mean it _will_ happen.

	When you now go to continuous distributions, picking a point is, in a
sense, meaningless to the theory. It can only give nonzero probabilities
to intervals, or collections of intervals, etc.** Likewise, probability
theory gives you a 0 for flipping a coin every second forever and never
getting a tails.

	The (usual) interpretation when applied to the real world is that
because it gave me a 0, such a thing can never happen.

	Can it? Can't it? I don't know. I can never know, because I can never
test it. Because of that, I'm wary of results probability theory gives
me for stuff like infinite sequences. I _can_ test it with the real
world for finite events. I'm not saying probability theory is
inconsistent or the math behind it is bad. I'm just saying I shouldn't
blindly accept it as being consistent with the real world just because
it works for finite cases.

	I won't accept that I can't get a forever continuous string of heads
from a coin unless someone can give me a _physical_ reason. There isn't
any - there's only a mathematical one.

<snipped a whole bunch of stuff because I'd just nitpick further>

>> It's not equal to the RHS because the RHS is
>> meaningless. It's undefined.
> 
> It's not meaningless. It's merely not a number. That doesn't mean you

	Well, I guess "meaningless" is vague. It _is_ undefined. About as
useful as dividing by 0.

> But I think I've exhausted my ability to convince you that Shakespeare
> necessarily appears in the output, if the output is infinite and making

	Well, perhaps we're just playing semantic games.

	As I said, I agree that probability theory agrees with you.

	I'm simply wary of using a _purely_ mathematical argument to make
statements about the real world. It invokes no aspect of the real world,
and no laws of physics that I'm aware of. It's just like saying that
yes, you _can_ execute the BT paradox in the real world...

	(As you can guess by now, I'm one of those who feel that the theorems
in mathematics are not necessarily tied to the physical laws of the
universe - other than in the manner that we can think them and
presumably our brains conform to the physical laws...).

** The reason, I suspect, is that theories of integration generally
don't have the integral changing if you happen to remove a point or any
set of measure 0 from the domain of integration.

-- 
"Now we all know map companies hire guys who specialize in making map
folding a physical impossibility" - Adult Kevin Arnold in "Wonder Years"


                    /\  /\               /\  /
                   /  \/  \ u e e n     /  \/  a w a z
                       >>>>>>mue### [at] nawazorg<<<<<<
                                   anl



From: triple r
Subject: Re: Weekly calibration
Date: 20 Apr 2009 20:35:00
Message: <web.49ed13deb9c54b9363a1b7c30@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> http://news.bbc.co.uk/1/3013959.stm

> "...and mostly typing the letter "s"."

For a second, I thought maybe they had discovered radio communication, but I
guess it was the humans that set up the radio link.  Too bad.

 - Ricky



From: Kenneth
Subject: Re: Weekly calibration
Date: 21 Apr 2009 01:35:00
Message: <web.49ed5a9db9c54b93f50167bc0@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> http://news.bbc.co.uk/1/3013959.stm


Getting grant money seems to be easier than I thought!

KW



From: scott
Subject: Re: Weekly calibration
Date: 21 Apr 2009 02:03:30
Message: <49ed61b2@news.povray.org>
> http://news.bbc.co.uk/1/3013959.stm
>
> WTF-O-Meter: 2.8

Hehe, I guess for some people it doesn't take much to "learn an awful lot" 
:-)



From: scott
Subject: Re: Weekly calibration
Date: 21 Apr 2009 02:13:12
Message: <49ed63f8@news.povray.org>
>  A true evenly-distributed random number generator and an infinite amount
> of time is a lot, lot closer to fulfilling the claim, and the probability
> of the works coming up is unlimitedly high, but there's still no absolute
> guarantee.
>
>  (Many people think that in this last case the works *will* eventually
> appear with absolute certainty, but that's just the gambler's fallacy.)

Isn't it mathematical fact that the probability of the works not appearing 
is zero in the limit condition?

Like if you ask what is the probability of getting no heads when a coin is 
tossed N times, it is 2^-N, which is *equal* to zero in the limit as N tends 
to infinity.  That's what I was taught at school/university anyway.
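That's easy to check numerically; a tiny sketch (mine, not from the post):

```python
# Sketch only: the probability of getting no heads in n fair tosses is
# 2**-n, which tends to 0 as n grows -- but is nonzero for every finite n.
def p_no_heads(n):
    return 0.5 ** n

print(p_no_heads(10))    # 1/1024
print(p_no_heads(1000))  # tiny, but still strictly greater than 0
```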



From: scott
Subject: Re: Weekly calibration
Date: 21 Apr 2009 02:28:12
Message: <49ed677c$1@news.povray.org>
>  Since there is an infinite amount of different finite sequences of
> letters, the probability of one specific sequence (in this case the works
> of Shakespeare) to appear is, mathematically speaking, zero.

Sorry Warp, but that's just bad math.

One way to explain why you are wrong is that yes, there are an infinite 
number of finite length sequences of letters, but then there are also an 
infinite number of finite length sequences that *contain* the works (plus 
other junk).  So the probability calculation is infinity/infinity, not 
1/infinity.

If you restrict the problem down to finite sequences of the same length as 
"the works", then there are only a finite number of possibilities, and 
exactly 1 of them will be "the works".

Say "the works" are W letters long; then the probability of a random 
sequence R of length W exactly matching is 64^-W (assuming a 64-character 
alphabet).  Then, the probability of R *not* matching is (1-64^-W). 
If we take N sequences of random letters, then the probability of finding 
"the works" is given by  1-(1-64^-W)^N, which *equals* 1 in the limit of N 
tending to infinity.
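A quick numeric sketch of that limit (my addition; the 64-character alphabet follows the post, but W=4 is a deliberately tiny stand-in for "the works" so the convergence is visible):

```python
# Sketch only: probability that at least one of n independent w-length
# random blocks exactly matches the target is 1 - (1 - 64**-w)**n,
# which tends to 1 as n grows.
def p_found(w, n):
    return 1.0 - (1.0 - 64.0 ** -w) ** n

w = 4  # 64**-4 is about 6e-8 per block
for n in (10**6, 10**8, 10**10):
    print(n, p_found(w, n))
```

The printed probabilities climb from a few percent toward 1 as n increases.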




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.