POV-Ray : Newsgroups : povray.general : Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys? (Messages 21 to 30 of 68)
From: Warp
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 1 Aug 2006 20:14:49
Message: <44cfee79@news.povray.org>
On a side note, there are a few applications which truly benefit from
a 64-bit architecture and would be much slower on a 32-bit one.

  One excellent example is computer chess programs. The best chess
computers in the world have always been 64-bit, and for a good reason:

  Bitmasks of the chess board are an efficient technique for speeding up
calculations. For instance, one bitmask could tell to which squares
a certain piece can move. Performing a logical AND of this bitmask and
another that gives the location of another piece tells the program very
quickly (in one clock cycle or less, depending on the processor) whether
the latter can be captured by the former. This technique is used a lot in
the best chess software out there.

  Now, a bitmask of the chess board naturally requires 64 bits (because
that's the number of squares on the board). On a 64-bit processor that is
exactly the size of a register, so performing operations on these bitmasks
is very fast. On a 32-bit processor, two registers (and thus two operations)
would have to be used for each bitmask and each operation performed on it,
making it considerably slower.
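
  As a minimal sketch of the idea (illustrative C++ only, not from any
actual chess engine; the bit patterns in main() are made up):

    #include <cstdint>
    #include <cstdio>

    // A bitboard: one bit per square of the 8x8 board, 64 bits in all.
    using Bitboard = std::uint64_t;

    // On a 64-bit CPU this test is a single AND of two registers.
    bool can_capture(Bitboard attacks, Bitboard targets)
    {
        return (attacks & targets) != 0;
    }

    // On a 32-bit CPU the bitboard has to be split into two halves, so
    // the same test needs two ANDs plus an OR to combine the results.
    bool can_capture_32(std::uint32_t attacks_lo, std::uint32_t attacks_hi,
                        std::uint32_t targets_lo, std::uint32_t targets_hi)
    {
        return ((attacks_lo & targets_lo) | (attacks_hi & targets_hi)) != 0;
    }

    int main()
    {
        Bitboard knight_attacks = 0x0000000A1100110AULL; // made-up attack set
        Bitboard enemy_pieces   = 0x0000000000100000ULL; // made-up occupancy
        std::printf("capture possible: %d\n",
                    (int)can_capture(knight_attacks, enemy_pieces));
    }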

  Unfortunately these special cases are rather rare, and for example
POV-Ray is not a program which would benefit much from 64-bit integers
(except for the added address space if there's more than 4 gigabytes
of memory in the computer).

-- 
                                                          - Warp



From: Tim Cook
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 00:49:37
Message: <44d02ee1$1@news.povray.org>
So what's all the hubbub over 32-bit processors?  Aren't they slower 
than 16-bit CPUs in exactly the same way as 64s are to 32s?  It just 
takes longer to assign anything, and nothing needs the precision other 
than some specialized applications.  Besides, nobody needs more than 640 
kB of RAM...

*wink*

-- 
Tim Cook
http://home.bellsouth.net/p/PWP-empyrean

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GFA dpu- s: a?-- C++(++++) U P? L E--- W++(+++)>$
N++ o? K- w(+) O? M-(--) V? PS+(+++) PE(--) Y(--)
PGP-(--) t* 5++>+++++ X+ R* tv+ b++(+++) DI
D++(---) G(++) e*>++ h+ !r--- !y--
------END GEEK CODE BLOCK------



From: Stefan Viljoen
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 02:19:06
Message: <44d043d9@news.povray.org>
Mike Sobers spake:


> I wasn't arguing that the 64-bit processors should be twice as fast.  The
> answer to that question is clearly no.  I was trying to explain why
> reasonably intelligent people might ask the question.  It's because some
> integer operations may be doubled up using the extra register space.  As
> you've pointed out, due to all the bit shifting and underflow/overflow
> error detection, there is clearly no speed increase, and very few
> operations could actually benefit from this approach anyway.  However, I
> think it was an excellent question to ask, and the resulting discussion
> here has been enlightening.  Hopefully we've all learned a lot more about
> how 64-bit computing works, which was the intent to begin with.  Thanks
> for your contribution to the discussion (no sarcasm here).
> 
> Mike

Hi Mike

Thanks, that's a rational view. It seems like sometimes in here stupid
people (like me) aren't allowed to ask stupid questions about stuff they
don't understand, because they promptly get beaten over the head with their
own question. :)

I've always liked the Pov group, because you get so few elitist "you're
stupid for even asking that! How can you even -think- that? Explain your
stupidity RIGHT NOW!" type answers in here. Guess that's changing.

But then, that's the POINT of asking a question - why ask a question (or
answer one) if you are omniscient and already know everything about
everything? Must be boring knowing everything about everything - nothing
new to learn or discover...

I just thought 64 is twice 32, so it's got to be good for -something- and
do -something- better than a 32-bit processor. As you point out, it seemed
logical that -something- is twice -something- else - OK, it's the register
sizes. So what? Now I know. No harm done.

-- 
Stefan Viljoen
Software Support Technician / Programmer
Polar Design Solutions



From: Warp
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 04:35:11
Message: <44d063be@news.povray.org>
Stefan Viljoen <spamnot@ <removethis>polard.com> wrote:
> Thanks, that's a rational view. It seems like sometimes in here stupid
> people (like me) aren't allowed to ask stupid questions about stuff they
> don't understand, because they promptly get beaten over the head with their
> own question. :)

  I was just asking, out of curiosity, what it is that makes some people
believe that a 64-bit system should/might be twice as fast (or just faster
to some degree) than an equivalent 32-bit system.

  It just feels like people never stop to think rationally about these
things: "What does it actually mean that it's a 64-bit system?" After
one thinks about that question and comes to a rational answer, one may
realize that there's no logical reason why a 64-bit system should be
twice as fast as a 32-bit system.

  Perhaps one reason might be that the popularization of 64-bit desktop
systems (64-bitness has always been something only popular in obscure
big servers until now) as well as the popularization of dual-core desktop
systems (again, multiple processors have been something only used in
obscure big servers until now) have coincided, and thus people might
get those two things mixed up and think that they are somewhat related
(even though they really aren't; their popularization in desktop systems
at the same time is just coincidence).

  Most people are also probably too young to remember the shift from
16-bit systems to 32-bit systems in Intel-based computers and have never
experienced first-hand the speed difference between a 16-bit binary and
a 32-bit binary (if they do the same thing, there's basically no speed
difference, except when large amounts of memory are needed or when
32-bit arithmetic is a very relevant part of the program's calculations).

> I just thought 64 is twice 32

  Did you stop to think about *what* it is that is "twice"? What could it
possibly be?

-- 
                                                          - Warp



From: Ger
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 05:27:43
Message: <44d0700e@news.povray.org>
Warp wrote:

> Stefan Viljoen <spamnot@ <removethis>polard.com> wrote:
>> Thanks, that's a rational view. It seems like sometimes in here stupid
>> people (like me) aren't allowed to ask stupid questions about stuff they
>> don't understand, because they promptly get beaten over the head with
>> their own question. :)
> 
>   I was just asking, out of curiosity, what is it that makes some people
> believe that a 64-bit system should/might be twice as fast (or just faster
> by some degrees) than an equivalent 32-bit system.
> 
>   It just feels that people never stop to think rationally about these
> things. "What does it actually mean that it's a 64-bit system?" After
> one thinks about that question and comes to a rational answer then one
> should/may perhaps realize that there's no logical reason why a 64-bit
> system should be twice as fast as a 32-bit system.
> 
>   Perhaps one reason might be that the popularization of 64-bit desktop
> systems (64-bitness has always been something only popular in obscure
> big servers until now) as well as the popularization of dual-core desktop
> systems (again, multiple processors have been something only used in
> obscure big servers until now) have coincided, and thus people might
> get those two things mixed up and think that they are somewhat related
> (even though they really aren't; their popularization in desktop systems
> at the same time is just coincidence).
> 
>   Most people are also probably too young to remember the shift from
> 16-bit systems to 32-bit systems in Intel-based computers and have never
> experienced first-hand the speed difference between a 16-bit binary
> compared to a 32-bit binary (if they do the same thing there's basically
> no speed difference except when big amounts of memory are needed or if
> 32-bit arithmetic is a very relevant part of the program's calculations).
> 
>> I just thought 64 is twice 32
> 
>   Did you stop to think *what* is it that is "twice"? What it could
> possibly be?
> 

On the other hand, did you stop to think that not everybody is familiar with
the inner workings of a computer? That there are actually very few people,
compared to the number who use them, who are knowledgeable about those inner
workings?
Looking back, I can see the validity in Stefan's reasoning.
-- 
Ger



From: Vincent Le Chevalier
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 05:34:45
Message: <44d071b5$1@news.povray.org>

> Perhaps one reason might be that the popularization of 64-bit desktop
>  systems (64-bitness has always been something only popular in
> obscure big servers until now) as well as the popularization of
> dual-core desktop systems (again, multiple processors have been
> something only used in obscure big servers until now) have coincided,
> and thus people might get those two things mixed up and think that
> they are somewhat related (even though they really aren't; their
> popularization in desktop systems at the same time is just
> coincidence).
> 

Implying that 64-bit systems are generally better or faster than
32-bit ones might also be related to the habit of measuring the
performance, or rather the generation, of game consoles in terms of bits. I
remember clearly the time when every review spoke of 8-bit,
16-bit, 32-bit, etc. Hey, I even think the first time I heard of bits
was actually in this context :-)
Each generation roughly doubled the "bits", so in the end the number was
associated with performance. Maybe some remnants of that habit among the
people who played consoles explain the misunderstanding...

On a related note, I wonder whether raster graphics are sped up when there
are more bits in the registers? The evolution of game consoles would
suggest so, but I'm not sure where GPUs are in terms of bitness; they
could be working at a different bitness than the processor...

In the light of this discussion, one has to wonder why the average
desktop user would need a 64-bit system anyway. Are we going to eat up 4 GB
of RAM using word processors and browsing the web? Talk about bloat :-)

-- 
Vincent



From: Nicolas George
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 05:35:29
Message: <44d071e1$1@news.povray.org>
Warp  wrote in message <44cfee79@news.povray.org>:
> (except for the added address space if there's more than 4 gigabytes
> of memory in the computer).

Actually, you do not need more than 4 GB of physical memory to get the
advantages of 64-bit addressing: it is enough to have more than 4 GB
(actually 3 or so on most modern operating systems, because the kernel
keeps some address space for itself) of _virtual_ memory.

Sometimes, it is faster to let the system handle swapping on a few page
faults than put tests and indirections everywhere in the program to check
for the availability of such or such data structure.

Sometimes, it is much slower, agreed. That depends entirely on the
program's memory access pattern, and it is something very hard to
predict. But it is always much _simpler_ to rely on the system's virtual
memory.
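
A minimal sketch of that, assuming a POSIX system and a hypothetical data
file "big.dat" that is larger than physical RAM: the process maps the whole
file once and simply indexes it, and the kernel does all the paging. A
32-bit process could not create a mapping this large in one piece.

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        int fd = open("big.dat", O_RDONLY);            // assumed huge file
        if (fd < 0) { std::perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);                                 // e.g. 8 GB

        // One flat, read-only mapping of the entire file.
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
        const std::uint8_t* data = static_cast<const std::uint8_t*>(p);

        // Any byte can now be indexed directly; pages are faulted in on
        // demand instead of being checked and loaded by hand.
        std::uint64_t sum = 0;
        for (off_t i = 0; i < st.st_size; i += 4096)
            sum += data[i];
        std::printf("%llu\n", (unsigned long long)sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }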



From: Warp
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 08:39:55
Message: <44d09d1b@news.povray.org>
Nicolas George <nicolas$george@salle-s.org> wrote:
> Sometimes, it is faster to let the system handle swapping on a few page
> faults than put tests and indirections everywhere in the program to check
> for the availability of such or such data structure.

  I didn't understand this. What "such or such data structure" are you
talking about, and what does virtual memory have to do with that?

-- 
                                                          - Warp



From: Warp
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 08:44:49
Message: <44d09e40@news.povray.org>
Vincent Le Chevalier <gal### [at] libertyallsurfspamfr> wrote:
> In the light of this discussion, one has to wonder why would the average
> desktop user need a 64-bit system anyway. Are we going to eat up 4GB of
> RAM using word processors and browsing the web ? Talk about bloat :-)

  Remember the time when a 100 MB hard disk was HUGE? Probably many thought
back then, "why would the average user need such a huge hard disk?"
  This has probably been the case with every major development in computers.
And history shows us that even though we can't think of anything really
necessary *now*, there *will* be something. Resources create the needs.

-- 
                                                          - Warp



From: Nicolas George
Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
Date: 2 Aug 2006 10:03:17
Message: <44d0b0a5$1@news.povray.org>
Warp  wrote in message <44d09d1b@news.povray.org>:
>   I didn't understand this. What "such or such data structure" are you
> talking about, and what does virtual memory has to do with that?

Let us take a simple case: assume you have a simple but big list of big
objects. First, let us assume that all the objects fit in memory. Then you
only have to use a big array of pointers to the objects, and that is all
there is to it.

But if the objects do not all fit in memory, you will need to store some of
them on hard disk while you are not using them. For each object, either the
pointer is not null, and gives the address of the object, or it is null, and
the object is not present in memory but saved in a file called
objXXXXXXXX.swap (where XXXXXXXX is the number of the object). Each time you
want to access an object, you have to check whether the pointer is null; if
it is null, you have to pick another object and save it to disk to make room,
and then load the object you want.

Saving and loading objects is a painful task. Checking for null pointers
everywhere makes your code more complex and slower. Furthermore, if your
data structure is more complex than an array of big objects, you have to
somehow store the status of each swappable part so it is always available,
which means using a pointer to a pointer-or-null instead of a direct pointer.
Last of all, if your program is multithreaded, you have to lock even
read-only data structures to prevent them from being swapped out by another
thread. And you also need to keep usage statistics for your objects, to avoid
swapping out an object you will need again a few milliseconds later.
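
A minimal sketch of that "by hand" scheme (illustrative C++; the file
naming and the deliberately naive eviction policy are just assumptions
for the example):

    #include <cstdio>
    #include <cstddef>
    #include <string>
    #include <vector>

    struct BigObject { char payload[1 << 20]; };        // some large object

    // A null pointer means the object lives in objXXXXXXXX.swap on disk.
    static std::vector<BigObject*> objects(100, nullptr);

    static std::string swap_name(std::size_t i)
    {
        char buf[32];
        std::snprintf(buf, sizeof buf, "obj%08zu.swap", i);
        return buf;
    }

    static void swap_out(std::size_t i)                 // save object i, free it
    {
        if (objects[i] == nullptr) return;              // already on disk
        if (std::FILE* f = std::fopen(swap_name(i).c_str(), "wb")) {
            std::fwrite(objects[i], sizeof(BigObject), 1, f);
            std::fclose(f);
        }
        delete objects[i];
        objects[i] = nullptr;
    }

    static BigObject* get_object(std::size_t i)         // the check on every access
    {
        if (objects[i] == nullptr) {
            if (i != 0) swap_out(0);                    // naive: always evict object 0
            objects[i] = new BigObject();
            if (std::FILE* f = std::fopen(swap_name(i).c_str(), "rb")) {
                std::fread(objects[i], sizeof(BigObject), 1, f);
                std::fclose(f);
            }
        }
        return objects[i];
    }

    int main()
    {
        objects[0] = new BigObject();                   // object 0 starts resident
        get_object(0)->payload[0] = 'x';
        get_object(7);                                  // pushes object 0 out to disk
    }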

Good news: all this is in fact built into modern processors and operating
systems. For all memory references, processors are able to use the address
not as a direct reference to physical memory, but as a reference into a
translation table; they can suspend the course of the program and start
specific processing if the translation table shows that the target is not
available; and they can maintain very basic statistics about accesses to
these references. Operating systems use these facilities, filling the
translation table and handling unavailable references, to provide their
processes with a virtual memory potentially much bigger than the physical
memory of the computer.

For example, on a host with 16 MB of memory and more than 2 GB of hard
drive, a process could run as if it had 2 GB of memory, with the OS copying
chunks of memory to and from the hard drive as needed. Depending on the
process's memory access pattern, the result can be almost as fast as
if the computer actually had 2 GB of memory, or awfully slow.

In the cases where the result is bad, doing the swapping "by hand" as I
described earlier may give better results, because knowledge of the
algorithm can be used to better select which parts of the data structure to
swap out or pre-fetch. But most of the time it will not do much good,
because some algorithms simply cannot be swapped. Thus, it is often better
to just let the OS do its work.

Anyway, such a mechanism is limited by the size of the input to the
translation mapping, compared to the size of the target objects. When the
swapping is done explicitly, both can be chosen to match exactly the needs
of the program. On the other hand, when relying on the processor facilities,
everything is limited by the pointer size. So on a 32-bit processor, a
process can never see more than 4 GB of (virtual) memory at once. Everything
beyond must be done explicitly.
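
Concretely: a 32-bit pointer can distinguish 2^32 = 4,294,967,296 byte
addresses, i.e. 4 GB of virtual address space, while a 64-bit pointer can
in principle address 2^64 bytes, about 16 exabytes.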

In a time when a computer with 32 MB of memory and 4 GB of hard drive was a
very powerful computer, that was quite OK. But now that any supermarket
computer has 1 GB of memory and 200 GB of hard drive, it is time
to step forward.


