POV-Ray : Newsgroups : povray.off-topic : Complicated
  Complicated (Message 13 to 22 of 52)  
From: Darren New
Subject: Re: Complicated
Date: 6 Jun 2011 14:12:24
Message: <4ded1888$1@news.povray.org>
On 6/6/2011 10:40, Orchid XP v8 wrote:
> On 06/06/2011 04:59 PM, Darren New wrote:
>> On 6/3/2011 7:45, Invisible wrote:
>>> (By contrast, a *real* 32-bit chip like the Motorola 68000 has
>>> registers A0
>>> through A7 and D0 through D7, and when you do an operation,
>>
>> Which is fine if you're not trying to be assembly-language compatible.
>
> That's amusing. The 68000 is a 16-bit processor, which is
> forwards-compatible with the 68020.

But that's because it has the same machine code. I'm talking about the 8086 
being assembler-language compatible with the 8080.

>>> In short, they kept kludging more and more stuff in. Having a stack-based
>>> FPU register file is a stupid, stupid idea.
>>
>> Not when your FPU is a separate chip from your CPU.
>
> In what way does having a bizarre machine model help here?

First, it's not bizarre; it's pretty much how many VMs, for example, define 
their machine language. It's called a zero-address machine. Second, it's 
because the opcodes don't need to carry register numbers, so they can be 
smaller and hence faster to transfer. Most FP mathematics that you are 
actually willing to pay extra to speed up winds up being larger 
expressions, I'd wager. Plus, the intermediate registers were 80 bits.
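
If "zero-address machine" sounds abstract, here's a toy sketch in Haskell 
(my own illustration, not the actual x87 instruction set): every arithmetic 
instruction implicitly operates on the top of the stack, so the opcode 
carries no register fields at all.

-- Toy model of a zero-address (stack) machine; opcodes name no registers.
-- Operands are implicitly the top entries of the evaluation stack.
data Instr = Push Double   -- roughly FLD: push a value
           | Add           -- roughly FADDP: pop two, push the sum
           | Mul           -- roughly FMULP: pop two, push the product
  deriving Show

-- Run a program, threading the operand stack through it.
run :: [Instr] -> [Double]
run = foldl step []
  where
    step st       (Push x) = x : st
    step (a:b:st) Add      = (b + a) : st
    step (a:b:st) Mul      = (b * a) : st
    step _        _        = error "stack underflow"

-- (2 + 3) * 4 in postfix order; no instruction mentions a register.
example :: [Double]
example = run [Push 2, Push 3, Add, Push 4, Mul]   -- [20.0]

The point is just that Add and Mul encode as nothing but an opcode; a 
three-address design has to spend bits naming two sources and a destination 
in every instruction.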

> Yeah, but every OS will have to support the old arrangement forever more,
> every VM product will have to support it forever more, and every processor
> design will have to support it forever more.

That's not all of our software, though; supporting the old arrangement is a 
pretty tiny part of a context switch.

>>> Aliasing the MMX registers to the FPU registers was stupid,
>> No, it saved chip space.
>
> It's quite clear that the design motivation behind this was not chip space
> but OS support.

True, in this case. But why would you say "the old OS works with new 
software to take advantage of this feature" is stupid?

> This is precisely why Haskell's unofficial motto is "avoid success at all
> costs". (I.e., once you are successful, you have to *care* about backwards
> compatibility.)

That's why I mentioned Haskell. Unfortunately, real-world companies building 
billion-dollar semiconductor fabs don't get to actively avoid success.

> I keep hoping that some day somebody will come up with a chip design that
> runs crappy old 16-bit MS-DOS stuff under software emulation, but runs real
> Big Boy software that people might give a damn about on a platform with a
> modern, coherent design.

They do. Intel chips are RISCs interpreting IA32 instructions. :-)


-- 
Darren New, San Diego CA, USA (PST)
   "Coding without comments is like
    driving without turn signals."


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 14:14:34
Message: <4ded190a$1@news.povray.org>
On 06.06.2011 19:40, Orchid XP v8 wrote:

> I keep hoping that some day somebody will come up with a chip design
> that runs crappy old 16-bit MS-DOS stuff under software emulation, but
> runs real Big Boy software that people might give a damn about on a
> platform with a modern, coherent design. But apparently I'm just
> dreaming...

That's what Intel tried with the Itanium.

"I'm making a note here: Huge Success."

So, for the record, it was AMD who convinced people that a 
backward-compatible 64-bit processor would be a much better idea. And it 
was the consumers who bought that message.

In the end, it was the users' fear of <Insert Your Favorite 32-bit PC 
Video Game Here> running slower than on their older system that won out 
over any rational argument.


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:27:27
Message: <4ded1c0f@news.povray.org>
On 06/06/2011 07:14 PM, clipka wrote:
> On 06.06.2011 19:40, Orchid XP v8 wrote:
>
>> I keep hoping that some day somebody will come up with a chip design
>> that runs crappy old 16-bit MS-DOS stuff under software emulation, but
>> runs real Big Boy software that people might give a damn about on a
>> platform with a modern, coherent design. But apparently I'm just
>> dreaming...
>
> That's what Intel tried with the Itanium.
>
> "I'm making a note here: Huge Success."

As far as I can tell, the trouble with Itanium is that it had very, very 
poor performance. This is not much incentive to switch.

(Which is surprising really, because on paper it looks like a really 
good design...)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 14:33:33
Message: <4ded1d7d$1@news.povray.org>
On 06.06.2011 20:07, Orchid XP v8 wrote:

>> This is a well known problem with lots of good solutions, depending on
>> how you implement the interconnections. The problem is that lots of
>> algorithms are inherently sequential.
>
> Sometimes I start thinking like this:
>
> Why do we have computers in the first place?
>
> To do the sorts of calculations that humans suck at.
>
> What sorts of calculations do humans suck at?
>
> Calculations which aren't easily parallelisable.
>
> If this line of logic is correct... um... we may have a problem here.

Fortunately it isn't.

Humans suck at /any/ calculation requiring a higher degree of precision 
than rule-of-thumb estimates. The human brain is "designed" to work 
/despite/ uncertainties, rather than to avoid or eliminate them.

However, humans also suck at understanding systems, and are much better 
at understanding single entities working on a problem sequentially. At 
least that's typically true for men - maybe the next generation of 
computers needs women as software developers.


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:39:46
Message: <4ded1ef2@news.povray.org>
>> If this line of logic is correct... um... we may have a problem here.
>
> Fortunately it isn't.
>
> Humans suck at /any/ calculation requiring a higher degree of precision
> than rule-of-thumb estimates. The human brain is "designed" to work
> /despite/ uncertainties, rather than to avoid or eliminate them.
>
> However, humans also suck at understanding systems, and are much better
> at understanding single entities working on a problem sequentially. At
> least that's typically true for men - maybe the next generation of
> computers needs women as software developers.

I don't think I agree with any of this.

Pick any two locations in London. Ask a London cabbie how to get from 
one to the other. I guarantee they can do it faster than any satnav 
computer.

Pick up a picture of Harrison Ford. Show it to a bunch of people. Almost 
all of them will instantly be able to tell you who it's a picture of. 
Now try getting a computer to figure that out. Good luck with that.

The human brain is really very, very good at certain tasks. Quite 
astonishingly good, when you actually think about it. But it's very bad 
at certain other tasks.

I think of it as being a bit like GPGPU. The brain is a special-purpose 
computational device which is absurdly good at the tasks it's designed 
for, and quite bad at everything else. To get good performance on other 
problems, you have to artificially transform them into a problem that 
"looks like" one it's good at. (A bit like the way GPGPU originally 
meant encoding your data as video textures before you could process it.) 
There are people in the Guinness Book of Records who can do crazy things 
like compute the 72nd root of a 15-digit number in their head in under 
10 seconds. It's just that most people can't do that.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Orchid XP v8
Subject: Re: Complicated
Date: 6 Jun 2011 14:42:56
Message: <4ded1fb0@news.povray.org>
>> In what way does having a bizarre machine model help here?
>
> First, it's not bizarre; it's pretty much how many VMs, for example,
> define their machine language. It's called a zero-address machine.
> Second, it's because the opcodes don't need to carry register numbers,
> so they can be smaller and hence faster to transfer. Most FP
> mathematics that you are actually willing to pay extra to speed up
> winds up being larger expressions, I'd wager. Plus, the intermediate
> registers were 80 bits.

I suppose if you had a really deep stack, this might make sense. You 
could just keep subexpressions on the stack in their natural order, and 
everything would be fine. Unfortunately, eight deep is nowhere near 
enough. You end up needing to constantly rearrange the data to avoid 
spilling registers back to main memory. (The only explanation I can come 
up with is that back when RAM wasn't much slower than the CPU, spilling 
was no biggie.)
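
To put a rough number on that (my own toy sketch in Haskell, nothing 
x87-specific): the stack depth an expression needs if you just evaluate it 
left to right grows with its nesting, whereas being free to reorder 
operands keeps it small; that reordering is exactly the shuffling I mean.

-- Expression trees: leaves are values, internal nodes are binary FP ops.
data Expr = Leaf Double | Node Expr Expr

-- Stack slots needed when we always evaluate the left operand first
-- (what naive code generation does).
naiveDepth :: Expr -> Int
naiveDepth (Leaf _)   = 1
naiveDepth (Node l r) = max (naiveDepth l) (1 + naiveDepth r)

-- Stack slots needed when we're free to evaluate the deeper operand
-- first (the classic Sethi-Ullman numbering).
bestDepth :: Expr -> Int
bestDepth (Leaf _) = 1
bestDepth (Node l r)
  | dl == dr  = dl + 1
  | otherwise = max dl dr
  where
    dl = bestDepth l
    dr = bestDepth r

-- A right-leaning chain such as a + (b * (c + (d * ...))).
chain :: Int -> Expr
chain 0 = Leaf 1
chain n = Node (Leaf 1) (chain (n - 1))

-- naiveDepth (chain 9) == 10, already past the x87's eight slots,
-- while bestDepth (chain 9) == 2 if the operand order is flipped.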

>> It's quite clear that the design motivation behind this was not chip
>> space
>> but OS support.
>
> True, in this case. But why would you say "the old OS works with new
> software to take advantage of this feature" is stupid?

Kludging the design in a way which will haunt us forever just to get a 
product to market a few months faster sounds pretty stupid to me.

>> This is precisely why Haskell's unofficial motto is "avoid success at all
>> costs". (I.e., once you are successful, you have to *care* about
>> backwards compatibility.)
>
> That's why I mentioned Haskell. Unfortunately, real-world companies
> building billion-dollar semiconductor fabs don't get to actively avoid
> success.

Fortunately, Haskell gave up avoiding success some time ago...

>> I keep hoping that some day somebody will come up with a chip design that
>> runs crappy old 16-bit MS-DOS stuff under software emulation, but runs
>> real
>> Big Boy software that people might give a damn about on a platform with a
>> modern, coherent design.
>
> They do. Intel chips are RISCs interpreting IA32 instructions. :-)

In other words, they are RISC chips with none of the advantages of RISC.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 15:02:40
Message: <4ded2450@news.povray.org>
On 06.06.2011 20:27, Orchid XP v8 wrote:
> On 06/06/2011 07:14 PM, clipka wrote:
>> On 06.06.2011 19:40, Orchid XP v8 wrote:
>>
>>> I keep hoping that some day somebody will come up with a chip design
>>> that runs crappy old 16-bit MS-DOS stuff under software emulation, but
>>> runs real Big Boy software that people might give a damn about on a
>>> platform with a modern, coherent design. But apparently I'm just
>>> dreaming...
>>
>> That's what Intel tried with the Itanium.
>>
>> "I'm making a note here: Huge Success."
>
> As far as I can tell, the trouble with Itanium is that it had very, very
> poor performance. This is not much incentive to switch.
>
> (Which is surprising really, because on paper it looks like a really
> good design...)

Surprising?

The concept relied heavily on compile-time optimization, and the 
compilers of the day may still have had plenty of room for improvement.

The CPU was designed to work with Rambus memory, which turned out to have 
serious initial performance problems on the RAM chip side, and AFAIK 
never managed to catch up with the improvements in the DDR protocol.

The CPU was a fairly fresh start, so there may have been significantly 
more room for improvement in the chip design than in contemporary x86 
designs.

I'm also not sure whether it was "very, very" poor performance.


From: clipka
Subject: Re: Complicated
Date: 6 Jun 2011 15:28:15
Message: <4ded2a4f$1@news.povray.org>
On 06.06.2011 20:39, Orchid XP v8 wrote:
>>> If this line of logic is correct... um... we may have a problem here.
>>
>> Fortunately it isn't.
>>
>> Humans suck at /any/ calculation requiring a higher degree of precision
>> than rule-of-thumb estimates. The human brain is "designed" to work
>> /despite/ uncertainties, rather than to avoid or eliminate them.
>>
>> However, humans also suck at understanding systems, and are much better
>> at understanding single entities working on a problem sequentially. At
>> least that's typically true for men - maybe the next generation of
>> computers needs women as software developers.
>
> I don't think I agree with any of this.
>
> Pick any two locations in London. Ask a London cabbie how to get from
> one to the other. I guarantee they can do it faster than any satnav
> computer.
>
> Pick up a picture of Harrison Ford. Show it to a bunch of people. Almost
> all of them will instantly be able to tell you who it's a picture of.
> Now try getting a computer to figure that out. Good luck with that.

Not a contradiction to my point; note that /those/ types of 
"calculations" require almost exactly the /opposite/ of precision. Which 
is the domain /computers/ suck at.

For instance, the answers from the people to whom you show the Harrison 
Ford photograph will probably often contain phrases such as "I /think/ 
that's Harrison Ford": They fail at identifying him beyond any doubt 
(i.e. /exactly/), and instead identify him with a certain "error margin".

Likewise, the London cabbie will /not/ pick /the/ fastest route. He'll 
just pick a "sufficiently fast" route. Based not on parameters that can 
be exactly quantified, but on experience and intuition. And not because 
he /knows/ the route to be fast, but because he's /sufficiently 
convinced/ it is.

> The human brain is really very, very good at certain tasks. Quite
> astonishingly good, when you actually think about it. But it's very bad
> at certain other tasks.

Exactly. And among those "certain other tasks" is virtually anything 
involving precise computations.

> There are people in the Guinness Book of Records who can do crazy things
> like compute the 72nd root of a 15-digit number in their head in under
> 10 seconds. It's just that most people can't do that.

Yes, you /can/ train a human to do high-precision calculations. But 
you'd need a huge number of such people (and a REALLY HUGE supply of 
coffee :-)) to perform even the simplest multi-step calculations that way.


From: Invisible
Subject: Re: Complicated
Date: 7 Jun 2011 04:37:14
Message: <4dede33a$1@news.povray.org>
>> I don't think I agree with any of this.
>>
>> Pick any two locations in London. Ask a London cabbie how to get from
>> one to the other. I guarantee they can do it faster than any satnav
>> computer.
>>
>> Pick up a picture of Harrison Ford. Show it to a bunch of people. Almost
>> all of them will instantly be able to tell you who it's a picture of.
>> Now try getting a computer to figure that out. Good luck with that.
>
> Not a contradiction to my point; note that /those/ types of
> "calculations" require almost exactly the /opposite/ of precision. Which
> is the domain /computers/ suck at.

>> The human brain is really very, very good at certain tasks. Quite
>> astonishingly good, when you actually think about it. But it's very bad
>> at certain other tasks.
>
> Exactly. And among those "certain other tasks" is virtually anything
> involving precise computations.

To quote myself again: I don't agree with any of this.

Try walking across the room. Trust me, this requires some pretty damned 
precise computations. Don't believe me? Drink some alcohol, and try the 
same task. Hard, isn't it? You know robotics engineers have struggled 
for years to make a bipedal robot that can perform the same apparently 
trivial task?

Balance is hard. A few grams one side or the other and you fall over. 
That sounds pretty precise to me. And yet people ride bicycles. How many 
robots have you seen ride a bicycle?

You say people can't recognise a familiar face with /certainty/, but I 
reject that. A computer might look at a face and rate the probability of 
it belonging to various people, but when I see a person I know, I *know* 
exactly who I'm looking at. Instantly. What's more, when I see a 
computer-generated image of Davy Jones, I recognise it as resembling 
Bill Nighy. How many computer facial recognition programs can do that? 
Again, sounds pretty damned accurate to me.

There are computer programs that compute a "fingerprint" of a piece of 
music, and can supposedly recognise the same recording even after it has 
been altered slightly. But a human listener can recognise the same piece 
of music performed by a completely different group, in a totally 
unrelated style. No computer can do that. And it's not just that humans 
use more "fuzzy" and less "precise" matching to do their recognition; if 
that were the case, the matching would just be really unreliable. But 
actually, humans are REALLY FRIGGING GOOD at this stuff.
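
For concreteness, here's a toy sketch (in Haskell, purely my own 
illustration of the general idea, nowhere near what real fingerprinting 
software does) of the sort of program I mean: take the dominant frequency 
of each fixed-size window of samples and compare those sequences. It 
tolerates small alterations to the same recording, but it obviously has no 
hope with a different performance.

import Data.Complex (Complex(..), cis, magnitude)
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Toy fingerprint: the dominant frequency bin of each fixed-size window.
-- (Illustration only; real acoustic fingerprinting is far more elaborate.)

-- Magnitude of the naive DFT of one window at frequency bin k.
dftMag :: [Double] -> Int -> Double
dftMag xs k = magnitude (sum terms)
  where
    n     = length xs
    terms = [ (x :+ 0) * cis (-2 * pi * fromIntegral k * fromIntegral j
                               / fromIntegral n)
            | (j, x) <- zip [(0 :: Int) ..] xs ]

-- Split a signal into fixed-size windows, dropping the ragged tail.
windows :: Int -> [a] -> [[a]]
windows n = takeWhile ((== n) . length) . map (take n) . iterate (drop n)

fingerprint :: Int -> [Double] -> [Int]
fingerprint n = map peak . windows n
  where peak w = maximumBy (comparing (dftMag w)) [1 .. n `div` 2]

-- Fraction of windows whose dominant bin agrees.
similarity :: [Int] -> [Int] -> Double
similarity a b
  | null a || null b = 0
  | otherwise        = fromIntegral matches / fromIntegral total
  where
    matches = length (filter id (zipWith (==) a b))
    total   = min (length a) (length b)

-- A slightly altered copy (quieter, plus a little noise) still matches.
main :: IO ()
main = do
  let signal  = [ sin (2 * pi * 5 * t / 256) | t <- [0 .. 2047] ] :: [Double]
      altered = [ 0.5 * x + 0.01 * sin (7 * t)
                | (t, x) <- zip [(0 :: Double) ..] signal ]
  print (similarity (fingerprint 256 signal) (fingerprint 256 altered))
  -- prints 1.0 for this pair of "recordings"

Note that it only ever matches the same recording against itself; change 
the key, tempo or instrumentation and the peaks move, which is my point.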


From: Darren New
Subject: Re: Complicated
Date: 7 Jun 2011 09:37:03
Message: <4dee297f$1@news.povray.org>
On 6/7/2011 1:37, Invisible wrote:
> But a human listener can recognise the same piece of music
> performed by a completely different group, in a totally unrelated style. No
> computer can do that.

This is incorrect. I watched our server listen to someone singing 
"Yesterday" on American Idol, and our computer correctly recognized it as 
a Beatles song.

-- 
Darren New, San Diego CA, USA (PST)
   "Coding without comments is like
    driving without turn signals."

