POV-Ray : Newsgroups : povray.off-topic : Mac Plus vs AMD Dual Core
  Mac Plus vs AMD Dual Core (Message 91 to 100 of 170)  
From: Jim Henderson
Subject: Re: Mac Plus vs AMD Dual Core
Date: 24 Oct 2007 17:37:34
Message: <471fbb1e$1@news.povray.org>
On Wed, 24 Oct 2007 21:14:54 +0100, Orchid XP v7 wrote:

> Jim Henderson wrote:
>> On Wed, 24 Oct 2007 20:22:30 +0100, Orchid XP v7 wrote:
>> 
>>> Arguably it would be faster to start again from scratch rather than
>>> try and learn the OO codebase... :-S
>> 
>> Doubt it, otherwise new coders coming into the project would never get
>> started (into any project, for that matter).
> 
> I always wondered about this... How *do* you get started on a new
> project with millions of lines of code and no documentation?

You spend lots of time reading code and asking questions.  :-)

Or do what I did: spend lots of time reading code and asking questions, 
though not as much as a programmer learning to contribute to the project 
would, and concentrate on just the pieces I was interested in.

I also came to it with practical knowledge from implementing the 
product, and spent a fair amount of time using source code analysis tools 
to look at call stacks taken from running systems.  That didn't give me a 
comprehensive view of what was going on, but I was able to suggest 
feasible fixes for a couple of issues to the developers.

Jim



From: Darren New
Subject: Re: Mac Plus vs AMD Dual Core
Date: 24 Oct 2007 20:38:30
Message: <471fe586$1@news.povray.org>
Warp wrote:
>   So the implicit conversion rules of C for signed and unsigned integers
> are weird? Care to suggest better conversion rules?

No, they make perfect sense when you consider that two different 
ranges of values have to fit into the same number of bits. I'm merely 
pointing out that the errors you describe come from treating "int"s as 
"integers". There's no such thing as an "unsigned integer", only an 
"unsigned int".  It was a nit, nothing more. :-)

I.e., the problem isn't the conversion rules, but the fact that people 
code in C without constantly keeping in mind the limitations caused by 
the low-level nature of the values and the lack of error checking.

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?



From: Darren New
Subject: Re: Mac Plus vs AMD Dual Core
Date: 24 Oct 2007 20:47:08
Message: <471fe78c@news.povray.org>
Warp wrote:
>   Having automatic unbounded arithmetic types is fine as long as you don't
> care about efficiency.

Ah yes. The "it's better to be fast than correct" philosophy. It serves 
Microsoft so well, after all. ;-)

>   Integral types which support unlimited precision can *not* be as efficient
> as CPU-register-sized integers

Sure they can. I've used many computers where the unbounded arithmetic 
was just as fast as the bounded arithmetic when you stayed within bounds.

> unless you explicitly tell the compiler
> that "this variable will always stay within these limits", in which case
> you are already stuck with the same limitation as the C integral variables.

Nope. Ada, for example, allows you to specify an upper size on variables 
and then actually *enforces* it, rather than just failing in bizarre 
ways like leaving you with the sum of two positives being negative. And 
Ada isn't known for its inefficiencies.

> If you don't tell the compiler those limits and it has to make sure that
> in case of overflow it switches to unlimited precision, then those integral
> types simply cannot be as fast as the bounded ones. It's just physically
> impossible.  If nothing else, the compiler will have to add an overflow
> check after each single operation done with those integers, thus adding
> clock cycles and code size (filling code caches faster).

Errr, no. I've used plenty of CPUs that provide for traps on overflow, 
so the normal case is fast and when the value overflows, it traps out 
and switches to the slower bignums rather than giving the wrong answers.

>   I'm also sure that being forced to prepare for unlimited precision math
> makes many compiler optimizations impossible (which would be possible with
> register-sized integers).

Maybe. I would think this is more a CPU design issue than anything. If 
the trap carried enough information, I bet you could handle it in the 
compiler.

>   (Also, in the general case a compiler cannot deduce by examining a piece
> of code that a variable will never have a value outside certain boundaries.
> I'm certain this kind of check would be equivalent to the halting problem.)

Yah, probably. On the other hand, you expect humans to be able to do 
this? If the compiler can't figure it out, neither can the programmer.

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?



From: scott
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 02:59:54
Message: <47203eea$1@news.povray.org>
>> Microsoft are not going to employ 50 people for a few months to go 
>> through and optimise for RAM usage just to make you feel better.
>
> Indeed no - their job is to research new techniques for slowing software 
> down as much as possible to boost sales of expensive new hardware. 
> (Presumably this is why the hardware vendors love them so much...)

Their job is to make money for the company.  50 people writing Word 2010 
will make more money than 50 people optimising Word 2003.  They can't help 
it; it's the customers, who would rather pay for a buggy, memory-hungry 
Word 2010 than for a lean and mean Word 2003.

If you want to change something, you need to convince the majority of 
computer users NOT to buy the latest software from MS...  Good luck.

> Now I'm puzzled - when I bought 1 GB of RAM for my PC, I had to pay 
> several hundred pounds for it... Am I living in an alternate reality or 
> something?

You're living in the past :-)  The price of computer hardware drops pretty 
quickly - it's always surprising if you haven't looked for a while.  I 
bought a 64MB memory stick for £40 a few years back - now I'm surprised 
to find that a 1GB one is under £10.

>>> I mean, 20 *years* ago computers could do that instantaneously with a 
>>> fraction of the RAM and CPU power. Why are we not coding like that any 
>>> more??
>>
>> Because we (well, most of us) have better computers than we did 20 years 
>> ago?
>
> And that's just it, isn't it?
>
> Why bother fixing the problem when you can just throw more hardware at it.

I think it's more the case of the software writers taking advantage of the 
hardware improvements.  If MS were still only offering an uber-streamlined 
version of some old Win NT to run on our 3 GHz dual-core machines, I think 
Linux and MacOS would be doing pretty well :-)



From: scott
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 03:04:18
Message: <47203ff2$1@news.povray.org>
> I'm still surprised I can get a shaded textured 3D scene with shadows 
> calculated refreshed faster than my monitor sync rate, but PowerPoint 
> can't smoothly scroll text onto the screen without tearing it. :-?

Is that on XP? - I think that's because in the XP desktop you can't redraw 
in sync with the monitor.  This is fixed in Vista because (IIRC) everything 
is double-buffered before being shown on the screen, just like in a 
full-screen game that uses the 3D card.



From: Warp
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 07:14:00
Message: <47207a78@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> I'm merely 
> pointing out that the errors you describe come from treating "int"s as 
> "integers". There's no such thing as an "unsigned integer", only an 
> "unsigned int".  It was a nit, nothing more. :-)

  I didn't understand that.

-- 
                                                          - Warp



From: Warp
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 07:28:54
Message: <47207df5@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> >   Integral types which support unlimited precision can *not* be as efficient
> > as CPU-register-sized integers

> Sure they can. I've used many computers where the unbounded arithmetic 
> was just as fast as the bounded arithmetic when you stayed within bounds.

  The only way that can work is if the CPU has explicit support. In other
words, the CPU itself makes the bounds checks at no additional cost in
clock cycles, instead of the compiler having to add extra opcodes to
perform the checks.

  AFAIK x86 processors do not have such a feature. The only thing an integer
overflow will do is to set a flag, which can then be checked with additional
opcodes, at the cost of additional clock cycles.

> > unless you explicitly tell the compiler
> > that "this variable will always stay within these limits", in which case
> > you are already stuck with the same limitation as the C integral variables.

> Nope. Ada, for example, allows you to specify an upper size on variables 
> and then actually *enforces* it, rather than just failing in bizarre 
> ways like leaving you with the sum of two positives being negative. And 
> Ada isn't known for its inefficiencies.

  This would require even more CPU support than simply checking for a
register-sized integer overflow, as you would have to check whether the
value goes outside arbitrarily-set boundaries (instead of just overflowing
or underflowing). In other words, you would have to be able to tell
the CPU, at no clock cycle cost, "if this register ever gets a value
smaller than this or larger than this, throw an exception". I have a hard
time believing any existing CPU has such a feature.

  With normal CPUs the compiler would have to put a comparison and a
conditional jump after every single operation on the variable, at the
cost of additional clock cycles. (Such a conditional jump might even mess
up the CPU's branch prediction logic, making it even less efficient.)

  Unless you give me an understandable and logical explanation of how these
kinds of checks could be implemented without making the code slower, I just
cannot believe it's possible.

> Errr, no. I've used plenty of CPUs that provide for traps on overflow, 
> so the normal case is fast and when the value overflows, it traps out 
> and switches to the slower bignums rather than giving the wrong answers.

  I must admit that I'm not 100% sure whether the x86 architecture supports
integer register overflow exceptions, but I'm quite certain it doesn't.

  Sure, there are some other CPUs which do, but in practical terms that
doesn't help the average programmer much.

-- 
                                                          - Warp



From: Orchid XP v7
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 14:18:00
Message: <4720ddd8@news.povray.org>
scott wrote:

> Their job is to make money for the company.  50 people writing Word 2010 
> will make more money than 50 people optimising Word 2003.  They can't 
> help it; it's the customers, who would rather pay for a buggy, 
> memory-hungry Word 2010 than for a lean and mean Word 2003.
> 
> If you want to change something, you need to convince the majority of 
> computer users NOT to buy the latest software from MS...  Good luck.

Well, I don't stand a chance against the M$ marketing machine. Even 
Apple doesn't - and they have money.

Sadly, M$ has managed to convince the general population that it is 
"normal" for computers to not work properly. If you bought a washing 
machine and it didn't work properly, you'd take it back and demand a 
refund. But when people buy a computer and the software on it doesn't 
quite work properly, they just think this is "normal" and 
"acceptable". This, truly, is M$'s contribution to the field of 
computer science.

(It really makes me angry that M$ are allowed to broadcast adverts on TV 
telling everybody how "good" they are. In my mind, this should be 
illegal under the trade descriptions act. But anyway...)

>> Now I'm puzzled - when I bought 1 GB of RAM for my PC, I had to pay 
>> several hundred pounds for it... Am I living in an alternate reality 
>> or something?
> 
> You're living in the past :-)  The price of computer hardware drops 
> pretty quickly - it's always surprising if you haven't looked for a 
> while.  I bought a 64MB memory stick for £40 a few years back - now 
> I'm surprised to find that a 1GB one is under £10.

I *was* going to sell my old CPU on ebay. I mean, it's moderately old 
now, but I paid about £250 for it when I got it.

However, this was before I discovered that you can buy it new (exact 
same model, clock speed, socket, everything) for £21.

£21. Retail boxed. With a warranty.

Who the hell is going to buy a second hand one?

>> And that's just it, isn't it?
>>
>> Why bother fixing the problem when you can just throw more hardware at 
>> it.
> 
> I think it's more the case of the software writers taking advantage of 
> the hardware improvements.  If MS were still only offering an 
> uber-streamlined version of some old Win NT to run on our 3 GHz 
> dual-core machines, I think Linux and MacOS would be doing pretty well :-)

There's a difference between "taking advantage of" and "wasting".

Do you even remember when WinXP first came out? And how everybody was 
utterly horrified by the minimum hardware requirements needed to make it 
function acceptably? It's been around so long now that everybody seems 
to have forgotten that XP takes four times as much hardware to do the 
same thing as older OSes managed to do quite happily...

(And then there's the sad fact that M$ doesn't know the difference 
between "operating system" and "entertainment system". Even in the "pro" 
version of XP, you still get lots of silly toys like games and video 
players and so forth that I have to spend ages uninstalling. Surely what 
most businesses actually want is a tiny OS to run their *real* 
applications on top of...)



From: Warp
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 17:29:37
Message: <47210ac1@news.povray.org>
Orchid XP v7 <voi### [at] devnull> wrote:
> Sadly, M$ has managed to convince the general population that it is 
> "normal" for computers to not work properly. If you bought a washing 
> machine and it didn't work properly, you'd take it back and demand a 
> refund. But when people buy a computer and the software on it doesn't 
> quite work properly, they just think this is "normal" and 
> "acceptable". This, truly, is M$'s contribution to the field of 
> computer science.

  To be fair, though, there's no such thing as a bug-free (production
scale) operating system or piece of software. All major operating systems
in the history of computing have had security patches, bug fixes and
upgrades.

  Sure, some operating systems have more security holes discovered annually
than others. However, a big part of that is due to the popularity of those
operating systems: more popular -> more people use it -> more bugs are
discovered, and more people try to hack those systems. Obscure
unix-flavoured operating systems used in some obscure mainframes in the
cellars of a few dozen obscure companies surely have quite a few security
holes as well, but they are not discovered because not many people use
them or try to hack them.

  OTOH, I must admit that there are some operating systems which, while
relatively popular, have surprisingly low security hole counts. MacOS X
is a good example, and FreeBSD is probably another. Linux does not score
so well on this scale, but that is, again, at least in part due to the
popularity of Linux among hackers. (IOW Linux may appear to have more
security holes than eg. FreeBSD, but that may be partly because fewer
people try to hack FreeBSD than Linux. In a way this is actually a good
thing for Linux, because more security holes are discovered and patched
that way.)

> Do you even remember when WinXP first came out? And how everybody was 
> utterly horrified by the minimum hardware requirements needed to make it 
> function acceptably? It's been around so long now that everybody seems 
> to have forgotten that XP takes four times as much hardware to do the 
> same thing as older OSes managed to do quite happily...

  I wonder if you could even install XP on a 386, in any shape or form.
Any modern linux distro should be installable on a 386. Even X might work
if you use a superlight window manager, so you would not even be confined
to the command prompt.

  (Why would anyone even want to install linux on a 386? Well, if you
have one lying around, it makes a supercheap firewall or small-scale ftp
server, for instance.)

-- 
                                                          - Warp



From: Jim Henderson
Subject: Re: Mac Plus vs AMD Dual Core
Date: 25 Oct 2007 19:19:35
Message: <47212487@news.povray.org>
On Thu, 25 Oct 2007 17:29:37 -0400, Warp wrote:

> Any modern linux distro should be installable on a 386.

Should be, but many aren't because the installation kernels are compiled 
for Pentium or later processors.

(Spent time doing such an install - put the hard drive in a "modern" 
computer for the install, then moved it back to the 386)

Oh, and on the security front, NetWare pre-4.x.  IIRC, not one security 
patch ever needed to be issued, and it was very widespread back then (2.x 
and 3.x days).

Jim




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.