POV-Ray : Newsgroups : povray.off-topic : Prehistoric dust
  Prehistoric dust (Message 46 to 55 of 145)  
From: Warp
Subject: Re: Prehistoric dust
Date: 18 May 2010 14:31:05
Message: <4bf2dce9@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> >>>   There were no children many hundred billion years ago.
> > 
> >> I was waiting for that one... *sigh*
> > 
> >   It's a so-called mathematician's answer.

> Technically, 1 hundred billion years is about 7.3 times the estimated 
> age of the universe

  Depends on whether you are using American billions or European billions.

-- 
                                                          - Warp



From: Orchid XP v8
Subject: Re: Prehistoric dust
Date: 18 May 2010 14:35:53
Message: <4bf2de09$1@news.povray.org>
>>>>>   There were no children many hundred billion years ago.
>>>> I was waiting for that one... *sigh*
>>>   It's a so-called mathematician's answer.
> 
>> Technically, 1 hundred billion years is about 7.3 times the estimated 
>> age of the universe
> 
>   Depends on whether you are using American billions or European billions.

NAAAAAARGH!! >_<

I never did like Americans... (Then again, they probably don't like me 
either.)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: nemesis
Subject: Re: Dusty
Date: 18 May 2010 14:37:51
Message: <4bf2de7f@news.povray.org>
Orchid XP v8 wrote:
>>> 1997? o_O
>>
>> just in time for the first version of GHC... :-)
> 
> Actually about ten years *after* the first version of GHC.
> 
> Yes, I realise that sounds utterly absurd...

Timing was just as inaccurate as the original post.  All for the sake of 
a good joke. :)

-- 
a game sig: http://tinyurl.com/d3rxz9



From: Orchid XP v8
Subject: Re: Dusty
Date: 18 May 2010 14:40:02
Message: <4bf2df02$1@news.povray.org>
>>>> 1997? o_O
>>>
>>> just in time for the first version of GHC... :-)
>>
>> Actually about ten years *after* the first version of GHC.
>>
>> Yes, I realise that sounds utterly absurd...
> 
> Timing was just as inaccurate as the original post.  All for the sake of 
> a good joke. :)

Hey, *I* had to look it up.

It still slightly frightens me that Haskell is actually this old... Just 
think how much better the world could be today if its ideas had caught 
on back then?

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: nemesis
Subject: Re: Dusty
Date: 18 May 2010 14:41:28
Message: <4bf2df58$1@news.povray.org>
Orchid XP v8 wrote:
>>> It still somewhat blows my mind that you could do anything useful 
>>> with so little memory. Presumably for processing large datasets, most 
>>> of the data at any one time would be in secondary storage?
>>
>> Large datasets then were also very tiny compared to large datasets of 
>> today. :)
> 
> Sure. But 1MB is such a tiny amount of memory, it could only hold a few 
> thousand records (depending on their size). It would almost be faster to 
> process them by hand than go to all the trouble of punching cards and 
> feeding them through a computer.

that's not quite what Hollerith found with the American 1890's census. ;)

-- 
a game sig: http://tinyurl.com/d3rxz9



From: nemesis
Subject: Re: Dusty
Date: 18 May 2010 14:48:48
Message: <4bf2e110@news.povray.org>
Orchid XP v8 wrote:
>>>>> 1997? o_O
>>>>
>>>> just in time for the first version of GHC... :-)
>>>
>>> Actually about ten years *after* the first version of GHC.
>>>
>>> Yes, I realise that sounds utterly absurd...
>>
>> Timing was just as inaccurate as the original post.  All for the sake 
>> of a good joke. :)
> 
> Hey, *I* had to look it up.
> 
> It still slightly frightens me that Haskell is actually this old... Just 
> think how much better the world could be today if its ideas had caught 
> on back then?

hey, Lisp was born before C or even Algol and that didn't help either! :P

-- 
a game sig: http://tinyurl.com/d3rxz9



From: Orchid XP v8
Subject: Re: Dusty
Date: 18 May 2010 14:51:35
Message: <4bf2e1b7@news.povray.org>
>> It still slightly frightens me that Haskell is actually this old... 
>> Just think how much better the world could be today if its ideas had 
>> caught on back then?
> 
> hey, Lisp was born before C or even Algol and that didn't help either! :P

Oh well. I guess as in all other aspects of technology, the least 
effective technology always wins.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Clarence1898
Subject: Re: Dusty
Date: 18 May 2010 14:55:01
Message: <web.4bf2e17aecb621efaba2b8dc0@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> >> So the concept of a filesystem storing named files already existed at
> >> this time?
> >
> > Generally, yes. But you usually wound up pre-allocating files, and they
> > were contiguous on disk.
>
> OK.
>
> Files only on disk? Or on tape too? (From what I've seen, punch cards
> didn't have this level of abstraction. It wouldn't be terribly necessary
> I guess...)

Data on punch cards is treated no differently than data on mag tape, paper tape,
or disk.  It is a sequential file with a fixed record length of 80 bytes.
A program only knows it reads 80-byte records; it doesn't care what physical
device it's on. Just don't try to read the punch card data backwards.
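The device independence described above is easy to sketch in Python: a reader of fixed 80-byte records works the same whether the byte stream behind it is a card deck, tape, or disk file. (The stream contents here are invented for illustration.)

```python
import io

CARD_LEN = 80  # one punched card = one 80-byte record

def read_records(stream, record_len=CARD_LEN):
    """Yield fixed-length records from any byte stream.

    The caller never learns whether the bytes came from cards,
    tape, or disk -- only that each record is record_len bytes.
    """
    while True:
        record = stream.read(record_len)
        if len(record) < record_len:
            break  # end of file (ignore any short trailing fragment)
        yield record

# A "card deck" simulated as an in-memory stream:
deck = io.BytesIO(b"A" * 80 + b"B" * 80)
records = list(read_records(deck))
```

Swapping `io.BytesIO` for an `open(path, "rb")` file handle changes nothing in the program, which is exactly the point.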

> >> Interesting. So the system actually "knows" where each field of a
> >> record is then?
> >
> > Records were fixed size, so it was trivial to calculate.
>
> OK. But does the system know where the *fields* in a record are? Or just
> what size the records are?

The file system does not understand fields.  It understands file organization
(sequential, indexed, random access), record type (fixed, variable, undefined),
and maximum record length and block size.  Interpretation of data at the field
level is done by the program.  If the file is a part of a database, the database
manager handles field definition.
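That division of labour can be illustrated with a toy fixed-width layout: the file system hands the program an opaque 80-byte record, and the program alone decides where the fields are. (The field names and offsets below are entirely made up.)

```python
# Hypothetical record layout, known only to the program:
# bytes 0-19 a name, 20-29 an account number, 30-39 a balance in cents.
FIELDS = {"name": (0, 20), "account": (20, 30), "balance": (30, 40)}

def parse_record(record: bytes) -> dict:
    """Interpret an opaque fixed-length record at the field level."""
    return {field: record[start:end].decode("ascii").strip()
            for field, (start, end) in FIELDS.items()}

# Build one 80-byte "card" and parse it:
card = (b"DOE JOHN".ljust(20) + b"0000123456" + b"0000009950").ljust(80)
row = parse_record(card)
```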

>
> >> Really? I didn't think anybody had mainframes any more... just big
> >> server farms.
> >
> > The people who want to do lots of I/O have machines where instead of
> > GPUs they have IOPs.  An 800,000-line phone switch, for example, is
> > pretty much all IOP, with something like a 68000 running the actual
> > switching part.
> >
> > Of course, what one might call a "PC" nowadays has a terabyte of RAM and
> > 96 quad-core processor chips, so the lines blur.
>
> Yeah, I think the term "mainframe" is probably obsolete now. There are
> probably more exact ways to describe what type of computer you mean.
>

There is a lot of overlap between a "mainframe" and a "PC". There are some PC
configurations that are more powerful than some mainframes.  So what is or is
not a mainframe can be debatable. There are very few things a modern mainframe
can't do that a PC can and vice versa.  You can run Linux on an IBM z/series
machine, just like on a PC.  As a user it would be difficult to tell the
difference.  It's all a matter of scale.

> --
> http://blog.orphi.me.uk/
> http://www.zazzle.com/MathematicalOrchid*

Isaac



From: Clarence1898
Subject: Re: Dusty
Date: 18 May 2010 15:00:01
Message: <web.4bf2e35fecb621efaba2b8dc0@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> >> It still somewhat blows my mind that you could do anything useful with
> >> so little memory. Presumably for processing large datasets, most of
> >> the data at any one time would be in secondary storage?
> >
> > Large datasets then were also very tiny compared to large datasets of
> > today. :)
>
> Sure. But 1MB is such a tiny amount of memory, it could only hold a few
> thousand records (depending on their size). It would almost be faster to
> process them by hand than go to all the trouble of punching cards and
> feeding them through a computer. So it must have been possible to
> process larger datasets than that somehow.

You never read the entire dataset into memory. You process it a record at a
time.
The only limit on file size is the media, not memory.  There is no difference in
memory consumption between processing 10 records or 10 million records.
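Record-at-a-time processing with constant memory looks like this in Python, where a generator stands in for a tape drive producing records on demand (the records themselves are invented):

```python
def running_total(records):
    """Accumulate a total one record at a time.

    Memory use is O(1): only the current record and the running
    total are ever held, whether the input has 10 records or
    10 million -- the same shape as classic tape processing.
    """
    total = 0
    for amount in records:
        total += amount
    return total

# The generator yields records lazily; the full dataset never
# exists in memory at once.
result = running_total(x for x in range(10))
```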

>
> > see the revolution that were programs like ed (and its successor vi) in
> > bringing flexible terminal text editing rather than wasting tons of
> > paper... :)
>
> ...not to mention card...
>
> --
> http://blog.orphi.me.uk/
> http://www.zazzle.com/MathematicalOrchid*

Isaac



From: Orchid XP v8
Subject: Re: Dusty
Date: 18 May 2010 15:14:32
Message: <4bf2e718$1@news.povray.org>
>>>> It still somewhat blows my mind that you could do anything useful with
>>>> so little memory. Presumably for processing large datasets, most of
>>>> the data at any one time would be in secondary storage?
>>> Large datasets then were also very tiny compared to large datasets of
>>> today. :)
>> Sure. But 1MB is such a tiny amount of memory, it could only hold a few
>> thousand records (depending on their size). It would almost be faster to
>> process them by hand than go to all the trouble of punching cards and
>> feeding them through a computer. So it must have been possible to
>> process larger datasets than that somehow.
> 
> You never read the entire dataset into memory. You process it a record at a
> time.

Right. That's what I figured.

> The only limit on file size is the media, not memory.  There is no difference in
> memory consumption between processing 10 records or 10 million records.

If you're trying to, say, sort data into ascending order, how do you do 
that? A bubble sort? (Requires two records in memory at once - but, more 
importantly, requires rewritable secondary storage.)
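(For the record, the classic answer to this question is an external merge sort: sort batches that fit in memory, write each sorted batch out as a "run" on secondary storage, then merge the runs record by record, holding only one record per run in memory. A minimal sketch, with the run size chosen arbitrarily and lists standing in for scratch tapes:)

```python
import heapq

def external_sort(records, run_size=3):
    """External merge sort sketch.

    Phase 1: read run_size records at a time, sort each batch in
    memory, and save it as a sorted run (a list standing in for a
    scratch tape or disk file).
    Phase 2: merge all runs; heapq.merge keeps only one record per
    run in memory at any moment.
    """
    records = iter(records)
    runs = []
    while True:
        batch = []
        for _ in range(run_size):
            try:
                batch.append(next(records))
            except StopIteration:
                break
        if not batch:
            break
        runs.append(sorted(batch))
    return list(heapq.merge(*runs))

sorted_out = external_sort([5, 2, 9, 1, 7, 3, 8])
```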

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.