Subject: Re: How far we have come
From: Darren New
Date: 13 Jul 2008 18:08:45
Message: <487a7ced$1@news.povray.org>
Warp wrote:
>> Ada?
>   1983.

Ada-95, then? ;-)

>> FORTH?
>   1970's.

Oh. Yeah, OK, it was around a lot longer than I knew. Standardized a lot 
later, tho.  Thanks. :-)

>> Sure, languages like Java waste memory if you allocate each integer as a 
>> separate object, so you make an array of integers to hold your bitmap, 
>> and deal with the minor pain that architectural decision entails.
> 
>   What if you want objects with two or three integers inside them?
> Or an integer and a floating point value?

You still put them in an array, or in two or three parallel arrays. Or 
use C# instead. Don't attribute Java's limitations to every safe/GCed 
language :-)
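
Something like this, in Java - untested sketch, made-up names:

    // Instead of a million tiny objects (each paying object-header and
    // alignment overhead), keep the fields in parallel primitive arrays;
    // "object" i is just index i across all three.
    final class Points {
        final int[] x;
        final int[] y;
        final double[] weight;

        Points(int n) {
            x = new int[n];
            y = new int[n];
            weight = new double[n];
        }

        void set(int i, int px, int py, double w) {
            x[i] = px;
            y[i] = py;
            weight[i] = w;
        }
    }

C# skips the workaround entirely: declare the point as a struct, and an 
array of them is laid out inline, no per-element object header.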

I think the basic problem is that after the early 80's, people weren't 
designing programming languages based on 8-bit processors with 16-bit 
address spaces. I mean, C and FORTRAN were considered terribly wasteful 
of resources back when 1K of RAM was a lot of memory. :-)

I guess most people doing that stuff figured memory-restricted languages 
were pretty much a solved problem. Pick from C, Fortran, COBOL, Ada, 
FORTH, Assembler, Pascal, BASIC, APL, LISP, Algol, PL/I, C++, ... as you 
like. Why make another like that? What's missing? All those languages 
ran comfortably in and could do considerable (for the time) computation 
on a 64K machine. If you don't have the memory to run Excel, write it in 
APL. If you don't have the memory to run Perl, write it in COBOL. If 
Prolog is too big, use LISP.

It seems obvious to me that if one is interested in developing new 
languages, it's a good idea to target the machines with new 
capabilities. You want to solve in your language the kinds of problems 
that don't come up when you only have small machines, like how to 
efficiently organize terabytes of data spread over a dozen cities in a 
way that you never, ever have an outage.

Of course, you also have things like befunge, brainfuck, intercal, and 
all the other joke languages which can nevertheless be interpreted with 
a handful of memory. :-)  I suspect this isn't what you meant, tho.
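
For instance, the whole of brainfuck fits in a page of Java. Quick 
sketch, with input (',') left out for brevity:

    import java.util.ArrayDeque;
    import java.util.Deque;

    final class Bf {
        // 30K byte cells, one data pointer, the eight commands.
        static void run(String prog) {
            byte[] tape = new byte[30000];
            int dp = 0;                          // data pointer
            Deque<Integer> loops = new ArrayDeque<>();
            for (int ip = 0; ip < prog.length(); ip++) {
                switch (prog.charAt(ip)) {
                    case '>': dp++; break;
                    case '<': dp--; break;
                    case '+': tape[dp]++; break;
                    case '-': tape[dp]--; break;
                    case '.': System.out.print((char) tape[dp]); break;
                    case '[':
                        if (tape[dp] != 0) loops.push(ip);
                        else                     // skip to matching ']'
                            for (int depth = 1; depth > 0; )
                                switch (prog.charAt(++ip)) {
                                    case '[': depth++; break;
                                    case ']': depth--; break;
                                }
                        break;
                    case ']': ip = loops.pop() - 1; break;
                }
            }
        }

        public static void main(String[] args) {
            run("++++++++[>+++++++++<-]>.+.");   // prints "HI"
        }
    }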

And of course you have problems that push the limits of big machines 
too. Video games, artificial intelligence, physics simulations (of 
various accuracies), data mining (say, Google), etc. But a lot of those 
kinds of problems can be broken into a more-core part that you write in 
a difficult, efficient language and a less-core part that you write in 
a more powerful, less efficient language. It's kind of the same all the 
way down the stack - C is a lot easier than VHDL to get some piece of 
functionality out of, but you might need a dozen instructions' worth of 
C to do what one custom VHDL blob of gates could do in a couple of 
clock cycles. 
Why is software less reliable than hardware? One reason is that software 
is used to do the stuff that is far too complex to do in hardware. Why 
do people build languages like Java or Erlang or Haskell that compile 
down to C? To build the systems you couldn't build in C because it's too 
low-level conceptually, lacking vital flexibility.
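
JNI makes that split explicit, for instance - the hot kernel in C, 
everything around it in Java. Sketch, with made-up library and 
function names:

    // High-level side: orchestration, safety, GC - the "less-core" part.
    public final class Convolver {
        static {
            // Loads libconvolve.so / convolve.dll, implemented in C.
            System.loadLibrary("convolve");
        }

        // The "more-core" part: a tight loop written in C for speed.
        private static native void convolve(float[] src, float[] kernel,
                                            float[] dst);

        public static float[] blur(float[] image, float[] kernel) {
            float[] out = new float[image.length];
            convolve(image, kernel, out);
            return out;
        }
    }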

-- 
Darren New / San Diego, CA, USA (PST)
  Helpful housekeeping hints:
   Check your feather pillows for holes
    before putting them in the washing machine.

