  Re: Tell me it isn't C  
From: Invisible
Date: 22 Jul 2009 10:15:13
Message: <4a671ef1$1@news.povray.org>
>> True. But presumably if you #include a file that defines a function 
>> prototype, you also need to compile and link whatever file it is that 
>> contains the source code for that function?
> 
>   The dependencies are not for deciding what should be compiled and linked
> into the final executable, or how the compilation should be done. The
> dependencies exist simply to make compilation faster when changes are made
> to the source files. If you have a 10-million-line program and you modify
> one line, you don't want to re-compile everything. You only want
> to re-compile the minimum amount necessary to get the updated executable.
> That's what dependencies are for. This is all done automatically by the IDE
> (or makefile).

I see. So you still have to manually define what needs to be linked somehow?

>> (Except for the OS header 
>> files; I have literally *no clue* how that works...)
> 
>   System library implementations have already been pre-compiled and come
> with the system (or the compiler in some cases). Often they are dynamically
> loadable, but you can link them statically to your executable if you want.
> 
>   Dependencies on most system libraries are automatically added to the
> program by the compiler without you having to do anything about it. When
> the dependency is not there by default, you simply have to specify which
> system library you want to use in your IDE or by telling the compiler
> (on the Unix side, that would be those -l command-line parameters).
> 
>   To put it in simple terms: on the Windows side, that's where a program
> depending on a DLL file comes from. A DLL is simply a precompiled library
> (which the OS loads dynamically when a program needs it).
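
So if I follow that correctly, on the Unix side it comes down to something 
like this (assuming GCC and the standard maths library):

  /* main.c - uses sqrt() from the system maths library */
  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      printf("%f\n", sqrt(2.0));
      return 0;
  }

  /* The header only *declares* sqrt(); the precompiled library supplies
     the actual code, and -lm tells the linker to pull it in:

       gcc main.c -lm -o main
  */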

As I understand it (i.e., probably wrongly), it works something like 
this (there's a code sketch after the list):

- You write foo.c that contains a function called foo().

- You write foo.h, which simply states that a function called foo() 
exists somewhere.

- main.c does a #include "foo.h"

- When main.c is compiled, it produces main.o, which contains several 
references to a foo() that should exist somewhere, but we don't know 
where yet.

- The linker takes main.o and foo.o, and replaces every jump to foo() 
with a jump to an actual machine address. The resulting program is 
actually runnable.
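
In code, something like this (the names are mine; the gcc commands assume 
a Unix-ish setup):

  /* foo.h - just declares that a function called foo() exists somewhere */
  int foo(int x);

  /* foo.c - the actual definition */
  int foo(int x) { return x + 1; }

  /* main.c - uses foo() without knowing where it lives */
  #include <stdio.h>
  #include "foo.h"

  int main(void)
  {
      printf("%d\n", foo(41));
      return 0;
  }

  /* Compile each file separately, then let the linker patch up the
     unresolved reference to foo() in main.o:

       gcc -c foo.c              (produces foo.o)
       gcc -c main.c             (produces main.o)
       gcc foo.o main.o -o main
  */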

What I can't figure out is what happens if foo() is actually a function 
somewhere in the OS kernel. Presumably in each version of the kernel, 
the base address of this function is going to be different... so how the 
hell does the linker know what it is?

The way this works on the Amiga is that you can't just *call* an OS 
function. First you have to look up its base address in a table. 
Basically, memory address 0x00000004 contains a pointer to a table. 
Every OS function has a unique ID, which is an offset into this table 
that lets you look up the function's base pointer. And then you can 
jump to it. All of which takes about two dozen assembly 
instructions... I have literally no idea how the hell you'd do it 
from C, though.
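
(The nearest equivalent I can find in C on the Unix side is POSIX's 
dlopen()/dlsym(), where you do the same sort of lookup by hand. A rough 
sketch, assuming Linux (where the maths library is libm.so.6); this 
isn't what the Amiga actually does:)

  /* Look up a function's address by name at run time (POSIX dlopen). */
  #include <stdio.h>
  #include <dlfcn.h>

  int main(void)
  {
      void *lib = dlopen("libm.so.6", RTLD_LAZY);   /* open the library */
      if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      /* Ask for the address of cos() and call through the pointer. */
      double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
      if (!cosine) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      printf("%f\n", cosine(0.0));
      dlclose(lib);
      return 0;
  }

  /* On Linux this needs -ldl:  gcc lookup.c -ldl -o lookup */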

>> True... But I'm still wondering how VS does this in the absence of 
>> header files.
> 
>   Does what?

Figures out which parts of a source file should be accessible from 
other files.

Like, if aux.c contains foo(), bar() and baz(), which of these should be 
available from elsewhere? Normally, if you only wanted foo() to be 
public, you'd only put foo() in the header file. (And for God's sake, 
remember to update the header file when foo() changes its type signature!)
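
As far as I can tell, the C way to keep bar() and baz() private is to mark 
them static, something like:

  /* aux.h - the public interface: only foo() is advertised here */
  int foo(void);

  /* aux.c */
  #include "aux.h"

  /* static gives bar() and baz() internal linkage, so their names never
     appear to other .o files; only foo() can be linked to from outside. */
  static int bar(void) { return 1; }
  static int baz(void) { return 2; }

  int foo(void) { return bar() + baz(); }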

