>>>> It still somewhat blows my mind that you could do anything useful with
>>>> so little memory. Presumably for processing large datasets, most of
>>>> the data at any one time would be in secondary storage?
>>> Large datasets then were also very tiny compared to large datasets of
>>> today. :)
>> Sure. But 1MB is such a tiny amount of memory, it could only hold a few
>> thousand records (depending on their size). It would almost be faster to
>> process them by hand than go to all the trouble of punching cards and
>> feeding them through a computer. So it must have been possible to
>> process larger datasets than that somehow.
>
> You never read the entire dataset into memory. You process it a record at a
> time.
Right. That's what I figured.
> The only limit on file size is the media, not memory. There is no difference in
> memory consumption between processing 10 records or 10 million records.
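To make that concrete, here's a minimal sketch in C of record-at-a-time
processing. The fixed-size record layout and the file name "master.dat"
are invented for illustration; real layouts varied:

    #include <stdio.h>

    /* Hypothetical fixed-size record. */
    struct record {
        char key[8];
        char data[72];
    };

    int main(void)
    {
        struct record r;   /* only ONE record in memory at a time */
        long count = 0;

        FILE *in = fopen("master.dat", "rb");
        if (!in) return 1;

        /* Memory use is constant whether the file holds
           10 records or 10 million. */
        while (fread(&r, sizeof r, 1, in) == 1)
            count++;

        fclose(in);
        printf("%ld records\n", count);
        return 0;
    }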
If you're trying to, say, sort data into ascending order, how do you do
that? A bubble sort? (Requires two records in memory at once - but, more
importantly, requires rewritable secondary storage.)
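For what it's worth, one bubble-sort pass really can be done holding just
two records in memory, given a rewritable file. A hypothetical sketch in C
(same invented record layout as above; bubble_pass is my own name):

    #include <stdio.h>
    #include <string.h>

    struct record {
        char key[8];
        char data[72];
    };

    /* One bubble-sort pass over a rewritable file, holding only two
       records (a and b) in memory. Returns nonzero if any swap
       occurred, so the caller repeats until a pass comes back clean. */
    int bubble_pass(FILE *f)
    {
        struct record a, b;
        int swapped = 0;

        rewind(f);
        if (fread(&a, sizeof a, 1, f) != 1)
            return 0;                        /* empty file */

        while (fread(&b, sizeof b, 1, f) == 1) {
            if (memcmp(a.key, b.key, sizeof a.key) > 0) {
                /* Out of order: write the pair back swapped. */
                fseek(f, -2L * (long)sizeof a, SEEK_CUR);
                fwrite(&b, sizeof b, 1, f);
                fwrite(&a, sizeof a, 1, f);
                fseek(f, 0L, SEEK_CUR);      /* required between write and read */
                swapped = 1;                 /* 'a' keeps the larger record */
            } else {
                a = b;                       /* advance: 'b' becomes previous */
            }
        }
        return swapped;
    }

The caller opens the file in "r+b" mode and repeats the pass until no
swaps occur:

    FILE *f = fopen("master.dat", "r+b");
    if (f) {
        while (bubble_pass(f))
            ;   /* repeat until a pass makes no swaps */
        fclose(f);
    }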
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*