Warp wrote:
> In a typical modern OS there may be hundreds of processes running even
> if the system has just booted up and the user has not started any program
> of his own. There are all kinds of drivers, services, task managers,
> window managers, firewalls... you name it. Every single one of them uses
> the same system libraries (eg. typically libc plus a few others in linux).
I understand that. How much of libc do a majority of those programs
actually use in common? What sorts of things do you think are in libc
that are used by all of them, other than (let's see) I/O and perhaps the
floating-point stuff (which probably isn't used by too many device
drivers, task managers, or firewalls :-)?
In any case, like I said, that system leaves open the possibility of
making libraries like that available at certain shared physical
addresses if you wanted, or of setting them up as services in their own
process, for large discrete-functionality packages. Indeed, that is
exactly how the kernel itself is set up. They just don't do it for
other packages yet, as far as I know.
> Just because the user has not started any program doesn't mean there
> isn't a big bunch of programs running.
Yes. And far more in Singularity, because all those things actually are
separate processes.
--
Darren New / San Diego, CA, USA (PST)
Warp wrote:
> I faintly remember asking about this subject in the other group once.
> I think it was about how Haskell manages (or if it manages at all) to
> create precompiled (possibly dynamically-loadable) libraries which
> nevertheless work with any user-defined type and user-defined functions.
For statically-linked libraries, GHC needs both the object code and an
"interface file" which tells it everything it could need to know about
the stuff exported from the module (e.g., how many bytes of space does
this exported type take up?) For dynamically-linked libraries, it's more
tricky.
> In object-oriented programming this is typically achieved with dynamic
> binding (ie. virtual functions): The precompiled library doesn't need
> to know the user-defined type as long as it has been inherited from a
> base class defined in the library. The virtual table attached to objects
> of that class allows the precompiled library to call the proper user-defined
> function. (In a way you could say that the user code is telling the library
> which function to call for each of the virtual functions in the base class.)
>
> However, how does haskell manage to do this? Suppose you have something
> like this precompiled into a dynamically loadable library:
>
> foldl1 (+) someList
>
> Also assume the user has defined his own element type for the list and
> the (+) function for that element type, and then calls the dynamically
> loaded library by giving it a list with elements of that type. How does
> the library know which (+) function to call? How does it know that a (+)
> function exists for that element type in the first place?
The foldl1 function expects a list and a function that can operate on
the elements of that list. In this case, all that happens is that the
*caller* passes foldl1 a pointer to the appropriate implementation of
(+) for the data type in question. The foldl1 function itself knows
nothing about this; it's up to the caller. Since the caller presumably
"knows about" this user-defined data type, that's no problem.
A more interesting example might be if somebody does
sum someList
Now the library function "sum" is being passed a list. The type checker
will ensure that the list can be summed, but how does the precompiled
"sum" function know what the hell function to sum it with?
The answer is that under the covers, the compiled "sum" function
actually takes an extra parameter pointing to a virtual table - exactly
like in an OOP language. The caller has to provide a pointer to the
correct table for the type in question.
And what if the caller doesn't know the type either? Well then the
caller also takes a pointer as an extra hidden argument. And so on and
so forth until we reach a point in the code where the type *is*
statically known.
To summarise: if a type is statically known, the correct function is
looked up at compile time. If a type is unknown at compile time, a
virtual table pointer is secretly passed in.
(Actually, "sum" is a tiny little function, so it's rather likely to be
inlined. If the function it's immediately inlined into knows the type
statically, all the vtable lookups get optimised out. Alternatively,
"sum" is still optimised to perform only 1 vtable lookup, not 1 on each
iteration.)
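Here's a sketch of what that translation looks like, with the hidden
table written out by hand (NumDict, sumWith and intDict are hypothetical
names for illustration, not what GHC actually generates):

```haskell
-- A hand-rolled "virtual table" mirroring part of the Num class.
data NumDict a = NumDict
  { dPlus :: a -> a -> a
  , dZero :: a
  }

-- Roughly what "sum :: Num a => [a] -> a" compiles to:
-- the class constraint becomes an ordinary extra argument.
sumWith :: NumDict a -> [a] -> a
sumWith d = foldl (dPlus d) (dZero d)

-- At a call site where the type is statically known (here: Int),
-- the compiler picks the right table at compile time.
intDict :: NumDict Int
intDict = NumDict { dPlus = (+), dZero = 0 }

main :: IO ()
main = print (sumWith intDict [1, 2, 3 :: Int])
```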
Orchid XP v8 wrote:
> I'll see if I can figure out what the current status of this feature
> is... I'm curious myself.
Building libraries as DLLs used to work, but is currently broken.
You can still build a whole Haskell program as a DLL rather than an EXE,
but that's all. (And why the hell would you want to? Load up 5 Haskell
DLLs and you have 5 copies of the RTS and several copies of the Haskell
libraries statically linked in... urgh!)
It was working until roughly GHC 6.4 (~2005), and then it broke.
Apparently it's due to be back (on all platforms) in the next version of
GHC - which is now actually overdue for release. (There was supposed to
be an RC out by now.) The various developer docs are unclear as to
whether this feature really will be present or not.
Darren New <dne### [at] sanrrcom> wrote:
> > But doesn't that make each program selfish? In other words, it only takes
> > care of itself not running out of memory but completely disregards any other
> > program running in the system at the same time?
> Yes? And how is that different from, say, Windows or Linux? Maybe I'm
> not understanding your point.
When a C/C++ program frees a sufficiently large block of memory on Windows
or Linux, that memory is also returned to the system, and becomes available
to other programs.
If a GC'd program never runs the GC, it will keep all that memory reserved
even though it doesn't use it. Moreover, a GC'd system often allocates a
lot more memory than it really needs (because "freed" memory cannot be
reused until the GC is run).
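For what it's worth, GHC at least lets a program request a collection
explicitly via System.Mem.performGC (this is GHC-specific behaviour; a
minimal sketch, not a recommendation):

```haskell
import System.Mem (performGC)

main :: IO ()
main = do
  -- Allocate a large structure, then drop all references to it.
  let xs = [1 .. 1000000 :: Int]
  print (sum xs)
  -- Without a collection, the runtime may hang on to the heap it
  -- grew to; performGC asks the RTS to collect now rather than
  -- "eventually".
  performGC
  putStrLn "collected"
```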
--
- Warp
Invisible <voi### [at] devnull> wrote:
> > However, how does haskell manage to do this? Suppose you have something
> > like this precompiled into a dynamically loadable library:
> >
> > foldl1 (+) someList
> >
> > Also assume the user has defined his own element type for the list and
> > the (+) function for that element type, and then calls the dynamically
> > loaded library by giving it a list with elements of that type. How does
> > the library know which (+) function to call? How does it know that a (+)
> > function exists for that element type in the first place?
> The foldl1 function expects a list and a function that can operate on
> the elements of that list. In this case, all that happens is that the
> *caller* passes foldl1 a pointer to the appropriate implementation of
> (+) for the data type in question. The foldl1 function itself knows
> nothing about this; it's up to the caller. Since the caller presumably
> "knows about" this user-defined data type, that's no problem.
You didn't understand me. The line "foldl1 (+) someList" is *in* the
precompiled library. It was not written by the user.
When you are compiling the library you don't know what the type of
elements are in the list nor what is the correct (+) function to call
for those types.
> To summarise: if a type is statically known, the correct function is
> looked up at compile time. If a type is unknown at compile time, a
> virtual table pointer is secretly passed in.
Exactly where is this virtual table pointer stored?
--
- Warp
Warp wrote:
> You didn't understand me. The line "foldl1 (+) someList" is *in* the
> precompiled library. It was not written by the user.
Ah, I see.
Well "foldl1 (+)" is the implementation of "sum", which is indeed in the
Haskell standard library (and hence precompiled). So what you're asking
is, "what happens if I call 'sum' on some custom datatype that I just
wrote?"
> Exactly where is this virtual table pointer stored?
Each time you create a datatype that supports (+), (-), etc., the
compiler generates a table pointing to the appropriate implementations
of these functions for that datatype.
Each time you compile a function that accepts an arbitrary numeric type,
the compiler secretly adds an extra pointer argument to that function.
When a caller calls that function, the compiler secretly adds a pointer
to the correct table into the function call. (And as I noted, if the
caller doesn't know the type, it must have received a pointer itself in
the same way, so it just passes that on.)
If that makes sense?
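Concretely (the Cents type here is just an invented example): writing
the instance declaration is what generates the table, and a call to a
precompiled polymorphic function like "sum" passes that table along as
the hidden argument:

```haskell
-- The user's type and its Num instance. Behind the scenes, this
-- instance declaration is what generates the "table" of function
-- pointers for Cents.
newtype Cents = Cents Int
  deriving (Show, Eq)

instance Num Cents where
  Cents a + Cents b = Cents (a + b)
  Cents a * Cents b = Cents (a * b)
  Cents a - Cents b = Cents (a - b)
  abs (Cents a)     = Cents (abs a)
  signum (Cents a)  = Cents (signum a)
  fromInteger n     = Cents (fromInteger n)

-- Precompiled library code like "sum" now just works: the compiled
-- call site passes the Cents table in as a hidden extra argument.
main :: IO ()
main = print (sum [Cents 25, Cents 50, Cents 25])
```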
Warp wrote:
> When a C/C++ program frees at least a certain amount of memory in Windows
> and Linux, that memory is also freed from the system, and becomes available
> to other programs.
True. On the other hand, if you have a lot of small allocations and you
can't condense them, you can wind up wasting a lot of memory because you
have a few bytes allocated on each of dozens of pages. With a compacting
GC, this situation doesn't occur.
> If a GC'd program never runs the GC, it will keep all that memory reserved
> even though it doesn't use it.
I imagine in this case you'd need a GC triggering policy that behaves
well. For example, you might force a collection after every N new pages
are allocated.
Now you have me curious - I'll have to look at the code to see when the
GC gets triggered. It'll be interesting to see how easy that is to find.
> Moreover, a GC'd system often allocates a
> lot more memory than it really needs (because "freed" memory cannot be
> reused until the GC is run).
On the other hand, a compacting GC doesn't have wasted space in the
pages where the data is stored after a GC runs.
Certainly, a GC'd program that doesn't run the GC often enough will
waste memory, just like a Windows program that doesn't deallocate its
memory resources when it's finished will use more memory than it needs
to. If your video game doesn't clean up the structures for dead aliens
until you get to the end of the level, you're likely using up more
memory than you need to.
Doctor, Doctor, it hurts when I do this.
--
Darren New / San Diego, CA, USA (PST)