Subject: Re: Found the dynamic optimization things...
From: Darren New
Date: 23 Sep 2008 14:38:40
Message: <48d937b0$1@news.povray.org>
Warp wrote:
>   It's not a question of whether it's dynamically loaded or not. It's
> a question of whether programs which are running at the same time and
> share a library use only one instance of that library, or is that library
> loaded separately for both programs.

I would think the read-only parts of the code could certainly be shared. 
There's no good reason why things like Java .class files couldn't be 
loaded into shared memory.
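
Mapping the file read-only is enough for the kernel to share the physical 
pages between processes. A rough POSIX sketch (the file name is made up, 
and a real class loader would obviously do more than just map the bytes):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // Hypothetical file name; any read-only data works the same way.
    int fd = open("Foo.class", O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { std::perror("fstat"); return 1; }

    // Read-only and never written: the kernel backs every mapping of
    // this file with the same page-cache pages, so N processes mapping
    // it pay for the bytes once in physical RAM.
    void *p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}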

>   There may be speed optimization advantages when loading the library on
> a per-application basis, but at the cost of enormously increased memory
> consumption. If you have a couple of hundreds of programs using the same
> big library, if the library is loaded for every single one of them, the
> system will require huge amounts of extra RAM.

Yep. Although I would think you'd need to actually measure how much of 
the shared code is used and shared in modern applications.
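
On Linux, /proc/<pid>/smaps gives a rough answer. A quick-and-dirty 
sketch that just totals the shared vs. private counters for one process 
(resident pages only, so it's only an approximation):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char **argv)
{
    if (argc != 2) { std::cerr << "usage: smapsum <pid>\n"; return 1; }

    std::ifstream smaps("/proc/" + std::string(argv[1]) + "/smaps");
    if (!smaps) { std::cerr << "can't open smaps\n"; return 1; }

    long shared_kb = 0, private_kb = 0;
    std::string line;
    while (std::getline(smaps, line)) {
        std::istringstream iss(line);
        std::string key, unit;
        long kb;
        // Only the "Name:   123 kB" lines parse; mapping headers don't.
        if (!(iss >> key >> kb >> unit)) continue;
        if (key == "Shared_Clean:" || key == "Shared_Dirty:")
            shared_kb += kb;
        else if (key == "Private_Clean:" || key == "Private_Dirty:")
            private_kb += kb;
    }
    std::cout << "shared:  " << shared_kb << " kB\n"
              << "private: " << private_kb << " kB\n";
}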

>   Loading a dynamically loadable library does not change the program being
> executed (other than it's told where the functions it wants to call are
> located).

Um, yes, it does. There are places you can branch to after loading the 
code that you couldn't branch to before.

Or, to put it another way, say you have a C++ class, and nowhere in any 
of your code do you reference the public integer field "xyz" that's in 
that class. (Including nowhere do you invoke any methods of the class 
that reference xyz.) The compiler Singularity uses would simply not 
allocate space for that field in instances of that class. Now, if you 
dynamically loaded code and passed it an instance of that class that 
*did* reference the field, you'd be broken.
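
To make that concrete, a sketch (the names are made up; the field 
removal itself is the whole-program optimization described above, not 
anything these lines do on their own):

// Under a sealed-process, whole-program compile, a field that nothing
// in the program touches can be dropped from the object layout.
class Widget {
public:
    int xyz;    // public, but never referenced anywhere in the program
    int count;  // referenced, so it survives
};

// All the statically compiled code only ever touches 'count'...
int use(Widget &w) { return w.count; }

// ...so the optimizer may lay Widget out without 'xyz'. A dynamically
// loaded plug-in compiled against the original declaration would still
// expect 'xyz' at its old offset, and would be broken:
int plugin_entry(Widget &w) { return w.xyz; }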

So, yes, dynamically loading code changes the executable *process*.

>   Inlining kernel code into the application makes sense only if that
> doesn't increase the memory footprint of the application considerably.

Right. You'd have to measure it. Welcome to time/space tradeoffs. ;-)

>> Check out the Singularity OS. Real live working OS, with one of the 
>> fundamental assumptions being that once you start a process, *nothing* 
>> in the executable can change. No dynamic loading, no plug-ins, etc.
> 
>   Does that mean that every application has all the system libraries
> statically linked to them, and this system library code exists in all
> of them and are loaded into memory for all of them?

Depends on what you consider to be "system library code". The system is 
broken into a large number of small processes. So there isn't really a 
whole lot of "system library code" going on there. Stuff like "open()" 
isn't in the kernel, so it's not really a library. The runtime is linked 
in statically, yes.

Each process does have its own garbage collector, but that's good 
because each process can use a different garbage collector, depending on 
the types of garbage it collects.

I think one of the things they're looking at is indeed shared-memory 
libraries, but I don't think they've gone very far in that direction.

It did, yes, seem somewhat wasteful, and it's a research project, so 
they may not be too worried about it.  But I can't think of any 
theoretical reason why sharing large blocks of executable code amongst 
multiple processes would be especially difficult in Singularity. They 
just haven't done it yet.

-- 
Darren New / San Diego, CA, USA (PST)

