Subject: Re: Real benefit of a 64 bit Pov binary on a 64 bit CPU in a 64 bit opsys?
From: Nicolas George
Date: 2 Aug 2006 10:03:17
Message: <44d0b0a5$1@news.povray.org>
Warp  wrote in message <44d09d1b@news.povray.org>:
>   I didn't understand this. What "such or such data structure" are you
> talking about, and what does virtual memory have to do with that?

Let us take a simple case: assume you have a simple but big list of big
objects. First, suppose that all the objects fit in memory. Then you only
need a big array of pointers to the objects, and that is all there is to
it.

But if all the objects do not fit in memory, you will need to store some of
them on the hard disk while they are not in use. For each object, either the
pointer is not null and gives the address of the object, or it is null and
the object is not present in memory but saved in a file called
objXXXXXXXX.swap (where XXXXXXXX is the number of the object). Each time you
want to access an object, you have to check whether the pointer is null; if
it is null, you have to pick another object and save it to disk to make
room, then load the object you want.

Saving and loading objects is a painful task. Checking for null pointers
everywhere makes your code more complex and slower. Furthermore, if your
data structure is more complex than an array of big objects, you have to
somehow store the status of each swappable part so it is always available,
which means using a pointer to a pointer-or-null instead of a direct
pointer. Finally, if your program is multithreaded, you have to lock even
read-only data structures to prevent them from being swapped out by another
thread. And you also need to keep usage statistics for your objects, to
avoid swapping out an object you will need a few milliseconds later.

Good news: all this is in fact built into modern processors and operating
systems. For all memory references, processors are able to use the address
not as a direct reference to physical memory, but as a reference into a
translation table; they can suspend the course of the program and start a
specific processing if the translation table shows that the target is not
available; they can also maintain very basic statistics about accesses to
these references. Operating systems use these facilities, filling the
translation table and handling unavailable references, to provide their
processes with a virtual memory potentially much bigger than the physical
memory of the computer.

For example, on a host with 16 MB of memory and more than 2 GB of hard
drive, a process could run as if it had 2 GB of memory, the OS copying
chunks of memory to and from hard drive as needed. Depending on the pattern
of the memory accesses of the process, the result can be almost as fast as
if the computer actually had 2 GB of memory, or awfully slow.

In the cases where the result is bad, doing the swapping "by hand" as I
described earlier may give better results, because it is possible to use
knowledge of the algorithm to better select which parts of the data
structure to swap out or prefetch. But most of the time it will not do much
good, because some algorithms simply do not lend themselves to swapping.
Thus, it is often better to just let the OS do its work.

Anyway, such a mechanism is limited by the size of the input to the
translation mapping, compared to the size of the target objects. When the
swapping is done explicitly, both can be chosen to match exactly the needs
of the program. When relying on the processor's facilities, on the other
hand, everything is limited by the pointer size. So on a 32 bit processor, a
process can never see more than 4 GB of (virtual) memory at once; everything
beyond that must be handled explicitly.

In a time when a computer with 32 MB of memory and 4 GB of hard drive was a
very powerful computer, that was quite OK. But in a time when any
supermarket computer has 1 GB of memory and 200 GB of hard drive, it is time
to step forward.

