Orchid XP v8 wrote:
>>> - PostScript was invented 10 years before laser printers existed. (It
>>> was apparently designed specifically with laser printers in mind, as
>>> I had always believed.)
>>
>> You're wrong here: The first laser printer dates back to 1969, while
>> even the roots of PostScript date no further back than 1976.
>> Furthermore, the language was initially targeted at the offset
>> printing industry to drive Computer-to-Film imagesetters, and was only
>> later adapted to laser printers.
>
> Wikipedia suggests that it was developed specifically for laser
> printing. (I may be wrong on the date that laser printers were
> *invented*, but they did not become common until very, very much later.
> Not unlike C++, apparently...)
Well, /my/ Wikipedia claims otherwise - see
http://en.wikipedia.org/wiki/PostScript#History:
"Warnock left with Chuck Geschke and founded Adobe Systems in December
1982. They created a simpler language, similar to InterPress, called
PostScript, which went on the market in 1984. At about this time they
were visited by Steve Jobs, who urged them to /adapt PostScript to be
used as the language for driving laser printers/."
(emphasis added)
Laser printers were originally invented not for quality, but for sheer
speed (I guess they were the first printers to feature only rotating
parts, with no linear movement whatsoever, so no acceleration was needed
when "geared up" for printing), and were found at data centers for quite
a while.
And, as mentioned, there were other areas of use for PostScript before that.
>>> - Perl predates the Internet by half a decade. (WTF?) I can only imagine
>>> it began life as a Unixy text-munging system in the style of awk,
>>> sed, etc.
>>
>> You surely mean it predates the /World Wide Web/ by half a decade.
>
> Before the WWW, nobody outside the military knew the Internet existed.
Many university students did, and certainly virtually all informatics
students. The Internet had long grown beyond its roots in the ARPANET
into the scientific world, as a tool for file and e-mail transfer as
well as remote access to other universities' data centers. The first
commercial use of the Internet dates back to 1988. Usenet became part of the
Internet before the WWW era, too. Porn was exchanged via the Internet on a
more-or-less regular basis years before the first HTTP server was set up.
>>> - JavaScript predates Java. (WTF?!)
>>
>> ... under the titles "Mocha" and later "LiveScript", yes. The name
>> JavaScript wasn't coined until December 1995 - when Java was already
>> released to the public (not in 1996, as your chart implies) - probably
>> in an attempt to benefit from the Java hype of those days.
>
> What do you mean "probably"? ;-) The language is utterly unrelated to
> Java in any way...
Well, "probably" in the sense of "sounds pretty likely, even though
nobody can give proof".
> I mean the Internet becoming known by the general public. ("The
> Internet" can be traced back to a classified military project which
> was probably around for *decades* before this, knowing the US
> military...)
That would have been the ARPANET, with the first data link being
established in 1969.
However, that's just the root of the /technology/. The first nucleus of
the actual network that later came to be known as the Internet - the
NSFNET - was established in 1985 (with a 56 kBit/s backbone - imagine
that!), linking 6 university computing centers.
Invisible wrote:
>> In any case, PostScript /is/ a full-fledged programming language, and
>> was always intended to be.
>
> According to Wikipedia, it was always designed to be a page-description
> language.
I'd rather say, a "page-description programming language".
The whole structure of the language indicates that Turing-completeness
wasn't just something added later - it quite clearly seems to have been
in there right from the start.
/Conceptually/ it has always been a full-fledged programming language,
even though the initial /use case/ was of course page description.
Invisible wrote:
>> My uncle said that back then, they created ad-hoc, file-based database
>> management systems by themselves. People were much bolder back then. :)
>
> But what did they *store* these files on? Punch cards?!
Exchangeable hard disk drives (around since 1956)? Magnetic tape (used in
the computing world since the 1950s, after having already seen decades
of service for analog signal recording)?
You're underestimating the historic arsenal of data storage - and
possibly overestimating the volume of data processed back then.
AFAIK punch cards and punched tape were used primarily for data input and
output, not data storage (though of course they could double as a backup
of the input or output data). Data storage was instead typically done on
magnetic tape.
clipka wrote:
> Many university students did, and certainly virtually all informatics
> students.
ftp://ftp.rfc-editor.org/in-notes/rfc1.txt
By 1969, there were already open standards processes talking about how to
improve the Internet. Universities, telephone companies, and computer
manufacturers alike knew about it.
Now, you needed a special dedicated computer just to keep up with a
56-kilobit connection, but that doesn't mean nobody knew about it.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Invisible wrote:
>> C was basically portable assembly for the original Unix.
>
> ...and yet, it doesn't make it especially easy to do low-level stuff.
It doesn't?
>> Smalltalk is actually from about 1977 or something, isn't it?
>
> Yes. Very ancient, as the chart shows.
Please don't call me ancient. :P
>>> - SQL existed 15 years before high-capacity storage devices appeared.
>>
>> My uncle said that back then, they created ad-hoc, file-based database
>> management systems by themselves. People were much bolder back then. :)
>
> But what did they *store* these files on? Punch cards?!
Magnetic tapes, in the 1970s. Magnetic tape, in the form of cassette
tapes, was also available for home use, as MSX and perhaps C64 owners
may remember.
>> Are you talking about Miranda? Yes, it was the spiritual basis for
>> Haskell.
>
> Haskell 1.0 is from 1990. I guess green screens is stretching it a little, but
> lots of people were still using MS-DOS regularly long after that date.
> (And writing stuff in QBASIC and similar.)
>
> Miranda, of course, is even older. (See the chart.)
Your concept of old reminds me of my childhood. I was a youth in 1990, so
it doesn't seem that old to me. :)
--
a game sig: http://tinyurl.com/d3rxz9
>>> C was basically portable assembly for the original Unix.
>>
>> ...and yet, it doesn't make it especially easy to do low-level stuff.
>
> It doesn't?
I mean, sure, it has support for twiddling bits and stuff. But you'd
think if you were doing low-level work, you would have a way to
explicitly say how many bits you want to use. Yet C provides no such
facility. Every CPU I know of provides an instruction to check for
signed overflow, but C ignores overflows by default, and provides no way
to check for them if you want to, short of manually testing the operands
before doing the operation.
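(To illustrate: the manual operand test looks roughly like this - a
minimal sketch, and the helper name is made up:)
#include <limits.h>
#include <stdio.h>
/* Hypothetical helper: returns 1 if a + b would overflow a signed int.
   The test has to happen *before* the addition, because signed overflow
   itself is undefined behaviour in C. */
static int add_would_overflow(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return 1;  /* would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b) return 1;  /* would drop below INT_MIN */
    return 0;
}
int main(void)
{
    int a = INT_MAX, b = 1;
    if (add_would_overflow(a, b))
        printf("%d + %d would overflow\n", a, b);
    else
        printf("sum = %d\n", a + b);
    return 0;
}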
> Please don't call me ancient. :P
Ancient. :-P
>>>> - SQL existed 15 years before high-capacity storage devices appeared.
>>>
>>> My uncle said that back then, they created ad-hoc, file-based
>>> database management systems by themselves. People were much bolder
>>> back then. :)
>>
>> But what did they *store* these files on? Punch cards?!
>
> Magnetic tapes, in the 1970s. Magnetic tape, in the form of cassette
> tapes, was also available for home use, as MSX and perhaps C64 owners
> may remember.
The concept of performing a multi-table join where the tables are all
stored on magnetic tape scares me. o_O
My God, it could take months...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
"Orchid XP v8" <voi### [at] dev null> wrote in message
news:4ae0be3d$1@news.povray.org...
> I mean, sure, it has support for twiddling bits and stuff. But you'd think
> if you were doing low-level work, you would have a way to explicitly say
> how many bits you want to use. Yet C provides no such facility.
Wouldn't bit fields do what you're describing? As in:
struct PackBits {
    unsigned int field1 : 2;
    unsigned int field2 : 1;
    unsigned int field3 : 4;
    unsigned int field4 : 1;
};
I can't recall if that's in K&R, but I'm sure it's at least ANSI C.
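For what it's worth, a minimal usage sketch (the struct is repeated so
the snippet compiles on its own; the values are arbitrary):
#include <stdio.h>
struct PackBits {
    unsigned int field1 : 2;  /* 2 bits: holds 0..3  */
    unsigned int field2 : 1;  /* 1 bit:  holds 0..1  */
    unsigned int field3 : 4;  /* 4 bits: holds 0..15 */
    unsigned int field4 : 1;
};
int main(void)
{
    struct PackBits p = {3, 1, 9, 0};
    p.field3 = 15;  /* the largest value a 4-bit field can hold */
    /* Bit-fields promote to int under the default promotions, so %d. */
    printf("%d %d %d %d\n", p.field1, p.field2, p.field3, p.field4);
    /* How tightly the fields pack is implementation-defined;
       most compilers use one unsigned int here and print 4. */
    printf("sizeof = %u\n", (unsigned)sizeof p);
    return 0;
}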
Captain Jack wrote:
> Wouldn't bit fields do what you're describing?
Only if they're all aligned to byte boundaries, and you know what order the
fields are in. I.e., yes, but in no way portably. There's no way to portably
(for example) lay a C structure for an IP datagram header onto the header
and just use it.
Contrast with (say) Ada, where you can say
type FatTable is array (0 .. 1000) of Integer range 0 .. 4095;
pragma Pack (FatTable);
and automatically get instructions that pack and unpack 12-bit entries in
your array of memory.
Ada also supports context switching, interrupt handling, defined ways of
pointing to particular areas of memory (i.e., so you can tell the linker
to put the machine registers at a particular place), and arbitrary ranges
for integers, floats, and decimal numbers (i.e., you tell Ada what
range/precision you need, and it picks the best representation, instead
of you trying to find the best fit amongst the things the compiler
offers).

It supports prioritized interrupt handling, including blocking
lower-level interrupts while a higher-level one is running, handling
priority inversion, and scheduling threads based on the same priorities
as interrupts.

It also supports volatile variables (which might be changed by hardware),
atomic operations (where you can guarantee that a 2-byte integer won't be
stored using two 1-byte store instructions, which is also important for
hardware), and "protected" operations that take advantage of hardware
instructions for blocking multiple threads (i.e., hardware locks).
I don't think C handles *any* of that. About the closest it comes is
volatile (sort of) and undefined behavior that *often* does what you'd
expect when using addresses, unless your memory model is too far different
from C's.
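About the closest C idiom is a volatile pointer aimed at a memory-mapped
register - a sketch with a made-up register address; it compiles, but
nothing about it is guaranteed by the standard, and it only makes sense
on hardware that actually has such a register:
#include <stdint.h>   /* C99; older code would use unsigned long */
/* Hypothetical memory-mapped status register - the address is made up. */
#define STATUS_REG (*(volatile uint32_t *)0xFFFF0040u)
/* Spin until the device sets its 'ready' bit.  'volatile' forces the
   compiler to re-read the register on every pass instead of hoisting
   the load out of the loop - but that's as far as C's guarantees go. */
void wait_until_ready(void)
{
    while ((STATUS_REG & 0x1u) == 0)
        ;  /* spin */
}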
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
"Darren New" <dne### [at] san rr com> wrote in message
news:4ae0c668$1@news.povray.org...
> Captain Jack wrote:
>> Wouldn't bit fields do what you're describing?
>
> Only if they're all aligned to byte boundaries, and you know what order
> the fields are in. I.e., yes, but in no way portably. There's no way to
> portably (for example) lay a C structure for an IP datagram header onto
> the header and just use it.
That's true... I know the specification calls for packing the bits as
tightly as possible, but there's no standard spec for alignment.
I used to use Borland's Turbo C (v2, IIRC) way back when on DOS machines. I
remember that it had custom pre-processor directives for controlling byte
and word alignment, but I'm sure those weren't in any way standard.
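(Their modern descendant is #pragma pack - still non-standard, but widely
supported by MSVC, GCC and others. A rough sketch, with a made-up layout:)
#include <stdio.h>
/* Non-standard but widely supported: request 1-byte alignment so the
   compiler inserts no padding between members. */
#pragma pack(push, 1)
struct WireHeader {
    unsigned char  type;     /* 1 byte  */
    unsigned short length;   /* 2 bytes */
    unsigned int   address;  /* 4 bytes */
};
#pragma pack(pop)
int main(void)
{
    /* With pack(1) this prints 7 on targets where short is 16 bits and
       int is 32; without the pragma it would typically print 8. */
    printf("sizeof = %u\n", (unsigned)sizeof(struct WireHeader));
    return 0;
}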
I also used to make use of its "asm" keyword which would let me insert x86
assembler code into the middle of my C code, and I'd often use that to
squeeze some extra bits out of my memory usage. Contrast that with my
current job, where we use .NET, and I don't even keep track of what I've
allocated and deallocated, or how much I've used. I seem to have grown fat
and lazy on the backs of the developers at Redmond. 8D
> I don't think C handles *any* of that. About the closest it comes is
> volatile (sort of) and undefined behavior that *often* does what you'd
> expect when using addresses, unless your memory model is too far different
> from C's.
But that was what's so great about pure C... nothing lets you shoot yourself
in the foot with confidence the way C does. <g>
Captain Jack wrote:
> That's true... I know the specification calls for packing the bits as
> tightly as possible, but there's no standard spec for alignment.
I also believe (but I am too lazy to look it up right now) that it's
impossible to portably know whether
struct PackBits alpha = {0, 1, 0, 0};
struct PackBits beta = {0, 0, 0, 1};
alpha or beta will yield the larger number when their bits are reinterpreted
as an int. I.e., I don't think the standard even says whether the fields are
allocated MSB-first or LSB-first.
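A quick way to see what any particular compiler does is to overlay the
fields on a plain unsigned int (reusing the PackBits struct from
upthread; the result is implementation-defined, which is rather the
point):
#include <stdio.h>
struct PackBits {
    unsigned int field1 : 2;
    unsigned int field2 : 1;
    unsigned int field3 : 4;
    unsigned int field4 : 1;
};
/* Type-punning through a union exposes which bits each field actually
   landed in - the answer differs between compilers. */
union Peek {
    struct PackBits bits;
    unsigned int    raw;
};
int main(void)
{
    /* static, so bits not covered by the fields start out zeroed */
    static union Peek alpha = { {0, 1, 0, 0} };
    static union Peek beta  = { {0, 0, 0, 1} };
    /* A compiler that allocates fields LSB-first (e.g. GCC on x86)
       prints 4 and 128; an MSB-first compiler gives other values. */
    printf("alpha = %u, beta = %u\n", alpha.raw, beta.raw);
    return 0;
}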
> I also used to make use of its "asm" keyword which would let me insert x86
> assembler code into the middle of my C code, and I'd often use that to
> squeeze some extra bits out of my memory usage.
Yep. When you really need to talk to the machine directly, C falls down.
That was the point of asking "why is C better?" It was only better for
portability compared to the other languages of the time.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".