POV-Ray : Newsgroups : povray.off-topic : Complicated (Message 11 to 20 of 52)
From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 13:40:09
Message: <4ded10f9@news.povray.org>
On 06/06/2011 04:59 PM, Darren New wrote:
> On 6/3/2011 7:45, Invisible wrote:
>> (By contrast, a *real* 32-bit chip like the Motorola 68000 has
>> registers A0
>> through A7 and D0 through D7, and when you do an operation,
>
> Which is fine if you're not trying to be assembly-language compatible.

That's amusing. The 68000 is a 16-bit processor, which is 
forwards-compatible with the 68020. You run the same code, but now all 
the 32-bit operations automatically go twice as fast. It doesn't /get/ 
much more compatible than that. (Admittedly later processors did 
introduce a few new instructions that won't work on the older models.)

>> In short, they kept kludging more and more stuff in. Having a stack-based
>> FPU register file is a stupid, stupid idea.
>
> Not when your FPU is a separate chip from your CPU.

In what way does having a bizarre machine model help here?

>> But now all our software depends on this arrangement,
>
> Not any longer. For example, I believe gcc has a command-line switch to
> say "use x87 instructions" instead of loading floats via the MMX
> instructions.

Yeah, but every OS will have to support the old arrangement forever 
more, every VM product will have to support it forever more, and every 
processor design will have to support it forever more.

>> Aliasing the MMX registers to the FPU registers was stupid,
>
> No, it saved chip space.

It's quite clear that the design motivation behind this was not chip 
space but OS support. Compared to the space taken up by huge caches, a 
piffling 8 registers is nothing...

>> The list goes on...
>
> It would be nice if it was practical to throw out all software and start
> over every time we had a new idea, wouldn't it? But then, everything
> would be as successful as Haskell. ;-)

This is precisely why Haskell's unofficial motto is "avoid success at 
all costs". (I.e., once you are successful, you have to *care* about 
backwards compatibility.)

I would argue that backwards compatibility is about balance. On the one 
hand, if you change your entire platform every three minutes, nobody 
will build anything for it. (For example, Java.) On the other hand, if 
you support everything forever, you end up with an unmanageable mess. 
(For example, IA32.)

I keep hoping that some day somebody will come up with a chip design 
that runs crappy old 16-bit MS-DOS stuff under software emulation, but 
runs real Big Boy software that people might give a damn about on a 
platform with a modern, coherent design. But apparently I'm just dreaming...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:07:09
Message: <4ded174d$1@news.povray.org>
>> Right.. There would need to be some sort of controller that could
>> transfer
>> data from Block A to Block B. Addressing would be quite ...
>> interesting in
>> this scheme.
>
> This is a well known problem with lots of good solutions, depending on
> how you implement the interconnections. The problem is that lots of
> algorithms are inherently sequential.

Sometimes I start thinking like this:

Why do we have computers in the first place?

To do the sorts of calculations that humans suck at.

What sorts of calculations do humans suck at?

Calculations which aren't easily parallelisable.

If this line of logic is correct... um... we may have a problem here.
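
The "inherently sequential" point quoted above is easy to make concrete.
Here's a minimal Haskell sketch (purely illustrative; the function names
are mine, not anyone's real code). In the first function every output
depends only on its own input, so the work could be split across any
number of cores; in the second, each step needs the previous result, so
extra hardware doesn't shorten the dependency chain at all.

-- Illustrative sketch: trivially parallel vs. inherently sequential work.

-- Every output depends only on its own input; the list could be split
-- across any number of cores without changing the result.
parallelFriendly :: [Double] -> [Double]
parallelFriendly = map (\x -> sqrt (x * x + 1))

-- A logistic-map recurrence: step n needs the result of step n-1, so
-- the chain of dependencies is as long as the computation itself.
inherentlySequential :: Int -> Double -> Double
inherentlySequential 0 x = x
inherentlySequential n x = inherentlySequential (n - 1) (3.7 * x * (1 - x))

main :: IO ()
main = do
  print (parallelFriendly [1 .. 5])
  print (inherentlySequential 1000 0.4)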

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Darren New
Subject: Re: Complicated
Date: 6 Jun 2011 14:12:24
Message: <4ded1888$1@news.povray.org>
On 6/6/2011 10:40, Orchid XP v8 wrote:
> On 06/06/2011 04:59 PM, Darren New wrote:
>> On 6/3/2011 7:45, Invisible wrote:
>>> (By contrast, a *real* 32-bit chip like the Motorola 68000 has
>>> registers A0
>>> through A7 and D0 through D7, and when you do an operation,
>>
>> Which is fine if you're not trying to be assembly-language compatible.
>
> That's amusing. The 68000 is a 16-bit processor, which is
> forwards-compatible with the 68020.

But that's because it has the same machine code. I'm talking about the 8086 
being assembler-language compatible with the 8080.

>>> In short, they kept kludging more and more stuff in. Having a stack-based
>>> FPU register file is a stupid, stupid idea.
>>
>> Not when your FPU is a separate chip from your CPU.
>
> In what way does having a bizarre machine model help here?

First, it's not bizarre; it's pretty much how many (for example) VMs define 
their machine language. It's called a zero-address machine. Second, it's 
because the op codes don't need to have register numbers in them, so they 
can be smaller and hence faster to transfer. Most mathematics involving FP 
that you are actually willing to pay extra to speed up winds up being larger 
expressions, I'd wager. Plus, the intermediate registers were 80 bits.
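
To make "zero-address" concrete, here's a minimal Haskell sketch
(illustrative only; it's nothing like the real x87 encoding). The
arithmetic opcodes below carry no register operands at all, because each
one implicitly pops its inputs from the stack and pushes its result back,
which is exactly why they can be encoded so compactly.

-- Illustrative zero-address (stack) machine; not the actual x87 ISA.
-- Note that Add and Mul name no registers at all.
data Op = Push Double | Add | Mul

eval :: [Op] -> [Double] -> [Double]
eval []             stack           = stack
eval (Push x : ops) stack           = eval ops (x : stack)
eval (Add : ops)    (a : b : stack) = eval ops (b + a : stack)
eval (Mul : ops)    (a : b : stack) = eval ops (b * a : stack)
eval _              _               = error "stack underflow"

-- (2 + 3) * 4 in postfix form: only the pushes carry operands.
example :: [Double]
example = eval [Push 2, Push 3, Add, Push 4, Mul] []   -- [20.0]

main :: IO ()
main = print example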

> Yeah, but every OS will have to support the old arrangement forever more,
> every VM product will have to support it forever more, and every processor
> design will have to support it forever more.

That's not all our software. That's a pretty tiny part of a context switch.

>>> Aliasing the MMX registers to the FPU registers was stupid,
>> No, it saved chip space.
>
> It's quite clear that the design motivation behind this was not chip space
> but OS support.

True, in this case. But why would you say "the old OS works with new 
software to take advantage of this feature" is stupid?

> This is precisely why Haskell's unofficial motto is "avoid success at all
> costs". (I.e., once you are successful, you have to *care* about backwards
> compatibility.)

That's why I mentioned Haskell. Unfortunately, real-world companies building 
billion-dollar semiconductor fabs don't get to actively avoid success.

> I keep hoping that some day somebody will come up with a chip design that
> runs crappy old 16-bit MS-DOS stuff under software emulation, but runs real
> Big Boy software that people might give a damn about on a platform with a
> modern, coherent design.

They do. Intel chips are RISCs interpreting IA32 instructions. :-)


-- 
Darren New, San Diego CA, USA (PST)
   "Coding without comments is like
    driving without turn signals."


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 14:14:34
Message: <4ded190a$1@news.povray.org>
Am 06.06.2011 19:40, schrieb Orchid XP v8:

> I keep hoping that some day somebody will come up with a chip design
> that runs crappy old 16-bit MS-DOS stuff under software emulation, but
> runs real Big Boy software that people might give a damn about on a
> platform with a modern, coherent design. But apparently I'm just
> dreaming...

That's what Intel tried with the Itanium.

"I'm making a note here: Huge Success."

So for the record, it was AMD who convinced people that a backward-compatible 
64-bit processor would be a much better idea. And it was the 
consumers who bought that message.

In the end it was the users' fear of <Insert Your Favorite 32-bit PC 
Video Game Here> running slower than on their older system that won out 
over any rational argument.


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:27:27
Message: <4ded1c0f@news.povray.org>
On 06/06/2011 07:14 PM, clipka wrote:
> Am 06.06.2011 19:40, schrieb Orchid XP v8:
>
>> I keep hoping that some day somebody will come up with a chip design
>> that runs crappy old 16-bit MS-DOS stuff under software emulation, but
>> runs real Big Boy software that people might give a damn about on a
>> platform with a modern, coherent design. But apparently I'm just
>> dreaming...
>
> That's what Intel tried with the Itanium.
>
> "I'm making a note here: Huge Success."

As far as I can tell, the trouble with Itanium is that it had very, very 
poor performance. This is not much incentive to switch.

(Which is surprising really, because on paper it looks like a really 
good design...)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 14:33:33
Message: <4ded1d7d$1@news.povray.org>
Am 06.06.2011 20:07, schrieb Orchid XP v8:

>> This is a well known problem with lots of good solutions, depending on
>> how you implement the interconnections. The problem is that lots of
>> algorithms are inherently sequential.
>
> Sometimes I start thinking like this:
>
> Why do we have computers in the first place?
>
> To do the sorts of calculations that humans suck at.
>
> What sorts of calculations do humans suck at?
>
> Calculations which aren't easily parallelisable.
>
> If this line of logic is correct... um... we may have a problem here.

Fortunately it isn't.

Humans suck at /any/ calculations requiring a higher degree of precision 
than rule-of-thumb estimates. The human brain is "designed" to work 
/despite/ uncertainties rather than avoid or eliminate them.

However, humans also suck at understanding systems, and are much better 
at understanding single entities working on a problem sequentially. At 
least that's typically true for men - maybe the next generation of 
computers needs women as software developers.


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:39:46
Message: <4ded1ef2@news.povray.org>
>> If this line of logic is correct... um... we may have a problem here.
>
> Fortunately it isn't.
>
> Humans suck at /any/ calculations requiring a higher degree of precision
> than rule-of-thumb estimates. The human brain is "designed" to work
> /despite/ uncertainties rather than avoid or eliminate them.
>
> However, humans also suck at understanding systems, and are much better
> at understanding single entities working on a problem sequentially. At
> least that's typically true for men - maybe the next generation of
> computers needs women as software developers.

I don't think I agree with any of this.

Pick any two locations in London. Ask a London cabbie how to get from 
one to the other. I guarantee they can do it faster than any satnav 
computer.

Pick up a picture of Harrison Ford. Show it to a bunch of people. Almost 
all of them will instantly be able to tell you who it's a picture of. 
Now try getting a computer to figure that out. Good luck with that.

The human brain is really very, very good at certain tasks. Quite 
astonishingly good, when you actually think about it. But it's very bad 
at certain other tasks.

I think of it as being a bit like GPGPU. The brain is a special-purpose 
computational device which is absurdly good at the tasks it's designed 
for, and quite bad at everything else. To get good performance on other 
problems, you have to artificially transform them into a problem that 
"looks like" one it's good at. (A bit like the way GPGPU originally 
meant encoding your data as video textures before you could process it.) 
There are people in the Guinness Book of Records who can do crazy things 
like compute the 72nd root of a 15-digit number in their head in under 
10 seconds. It's just that most people can't do that.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:42:56
Message: <4ded1fb0@news.povray.org>
>> In what way does having a bizarre machine model help here?
>
> First, it's not bizarre; it's pretty much how many (for example) VMs
> define their machine language. It's called a zero-address machine.
> Second, it's because the op codes don't need to have register numbers in
> them, so they can be smaller and hence faster to transfer. Most
> mathematics involving FP that you are actually willing to pay extra to
> speed up winds up being larger expressions, I'd wager. Plus, the
> intermediate registers were 80 bits.

I suppose if you had a really deep stack, this might make sense. You 
could just keep subexpressions on the stack in their natural order, and 
everything would be fine. Unfortunately, 8 deep is nowhere near enough. 
You end up needing to constantly rearrange the data to avoid spilling 
registers back to main memory. (The only explanation I can come up with 
is that if RAM is faster than CPU, spilling is no biggie.)
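
As a rough illustration of why eight slots run out so fast (this is my
own sketch, not anything from Intel's documentation): the minimum stack
depth needed to evaluate an expression tree without spilling is the
classic Ershov number, and for a balanced tree it grows with the tree's
depth, so a balanced expression over just 256 operands already wants
nine slots.

-- Illustrative only: minimum stack depth (Ershov number) needed to
-- evaluate an expression tree without spilling anything to memory.
data Expr = Leaf Double | Node Expr Expr

minDepth :: Expr -> Int
minDepth (Leaf _) = 1
minDepth (Node l r)
  | dl == dr  = dl + 1      -- equally deep subtrees: one extra slot needed
  | otherwise = max dl dr   -- evaluate the deeper subtree first
  where
    dl = minDepth l
    dr = minDepth r

-- A perfectly balanced tree with 2^n leaves needs n+1 slots, so 256
-- operands already want 9, one more than the x87's 8 registers.
balanced :: Int -> Expr
balanced 0 = Leaf 1
balanced n = Node (balanced (n - 1)) (balanced (n - 1))

main :: IO ()
main = print (minDepth (balanced 8))   -- prints 9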

>> It's quite clear that the design motivation behind this was not chip
>> space
>> but OS support.
>
> True, in this case. But why would you say "the old OS works with new
> software to take advantage of this feature" is stupid?

Kludging the design in a way which will haunt us forever just to get a 
product to market a few months faster sounds pretty stupid to me.

>> This is precisely why Haskell's unofficial motto is "avoid success at all
>> costs". (I.e., once you are successful, you have to *care* about
>> backwards compatibility.)
>
> That's why I mentioned Haskell. Unfortunately, real-world companies
> building billion-dollar semiconductor fabs don't get to actively avoid
> success.

Fortunately, Haskell gave up avoiding success some time ago...

>> I keep hoping that some day somebody will come up with a chip design that
>> runs crappy old 16-bit MS-DOS stuff under software emulation, but runs
>> real
>> Big Boy software that people might give a damn about on a platform with a
>> modern, coherent design.
>
> They do. Intel chips are RISCs interpreting IA32 instructions. :-)

In other words, they are RISC chips with none of the advantages of RISC.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 15:02:40
Message: <4ded2450@news.povray.org>
Am 06.06.2011 20:27, schrieb Orchid XP v8:
> On 06/06/2011 07:14 PM, clipka wrote:
>> Am 06.06.2011 19:40, schrieb Orchid XP v8:
>>
>>> I keep hoping that some day somebody will come up with a chip design
>>> that runs crappy old 16-bit MS-DOS stuff under software emulation, but
>>> runs real Big Boy software that people might give a damn about on a
>>> platform with a modern, coherent design. But apparently I'm just
>>> dreaming...
>>
>> That's what Intel tried with the Itanium.
>>
>> "I'm making a note here: Huge Success."
>
> As far as I can tell, the trouble with Itanium is that it had very, very
> poor performance. This is not much incentive to switch.
>
> (Which is surprising really, because on paper it looks like a really
> good design...)

Surprising?

The concept relied heavily on compile-time optimization, and compiler 
technology may still have had plenty of room for improvement back then.

The CPU was designed to work with Rambus memory, which turned out to have 
serious initial performance problems on the RAM chip side, and AFAIK 
never managed to catch up with improvements in the DDR protocol.

The CPU was a fairly fresh start, so there may have been significantly 
more room for improvement in the chip design than in contemporary x86 
designs.

I'm also not sure whether it was "very, very" poor performance.


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 15:28:15
Message: <4ded2a4f$1@news.povray.org>
Am 06.06.2011 20:39, schrieb Orchid XP v8:
>>> If this line of logic is correct... um... we may have a problem here.
>>
>> Fortunately it isn't.
>>
>> Humans suck at /any/ calculations requiring a higher degree of precision
>> than rule-of-thumb estimates. The human brain is "designed" to work
>> /despite/ uncertainties rather than avoid or eliminate them.
>>
>> However, humans also suck at understanding systems, and are much better
>> at understanding single entities working on a problem sequentially. At
>> least that's typically true for men - maybe the next generation of
>> computers needs women as software developers.
>
> I don't think I agree with any of this.
>
> Pick any two locations in London. Ask a London cabbie how to get from
> one to the other. I guarantee they can do it faster than any satnav
> computer.
>
> Pick up a picture of Harrison Ford. Show it to a bunch of people. Almost
> all of them will instantly be able to tell you who it's a picture of.
> Now try getting a computer to figure that out. Good luck with that.

That's not a contradiction of my point; note that /those/ types of 
"calculations" require almost exactly the /opposite/ of precision, which 
is the domain /computers/ suck at.

For instance, the answers from the people to whom you show the Harrison 
Ford photograph will probably often contain phrases such as "I /think/ 
that's Harrison Ford": They fail at identifying him beyond any doubt 
(i.e. /exactly/), and instead identify him with a certain "error margin".

Likewise, the London cabbie will /not/ pick /the/ fastest route. He'll 
just pick a "sufficiently fast" route. Based not on parameters that can 
be exactly quantified, but on experience and intuition. And not because 
he /knows/ the route to be fast, but because he's /sufficiently 
convinced/ it is.

> The human brain is really very, very good at certain tasks. Quite
> astonishingly good, when you actually think about it. But it's very bad
> at certain other tasks.

Exactly. And among those "certain other tasks" is virtually anything 
involving precise computations.

> There are people in the Guinness Book of Records who can do crazy things
> like compute the 72nd root of a 15-digit number in their head in under
> 10 seconds. It's just that most people can't do that.

Yes, you /can/ train a human to do high-precision calculations. But 
you'd need a huge number of such people (and a REALLY HUGE supply of 
coffee :-)) to perform even the simplest multi-step calculations that way.

