So here's a question: how does linking actually work? The C compiler
allows you to write functions that call functions that don't exist. This
leaves you with an object file containing unresolved references. The
linker then resolves these references.
[...the machine they designed this for "only has 2KB of memory
implemented as a mercury delay line" or some such stupidity...]
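
For concreteness, here's roughly what "calling a function that doesn't exist" looks like. The file names and the `square` function are made up for illustration; in real use the two halves would be separate files, and compiling caller.c alone (`cc -c caller.c`) gives you an object file where `nm caller.o` reports the call as an unresolved reference (`U square`).

```c
/* Both halves live in one file here so the example is self-contained;
 * in practice they would be separate translation units. */

/* caller.c: the compiler happily accepts this knowing only the
 * declaration; the actual address of square() is the linker's problem. */
extern int square(int x);

int use_square(int x) {
    return square(x) + 1;  /* call site emitted with a placeholder target */
}

/* callee.c: the definition the linker eventually binds the call to. */
int square(int x) { return x * x; }
```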

But how does that actually *work*? The C compiler transforms C source
code into object code. Basically, the object file contains (among other
things) raw machine code, which the processor knows how to execute.
Calling a subroutine is implemented as a call op-code — essentially an
unconditional jump that also saves a return address. If you know the
jump target, then by all means, fill in the target address. But if the
target hasn't been resolved yet... how do you fit the entire symbol
name into 32 bits?

(32 bits is only 4 characters. And C++ in particular seems determined to
transform even the most trivial function call into an 8-mile symbol name!)
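
As far as I understand it, the answer is: you don't put the name in the instruction at all. The call site gets a placeholder, and a separate *relocation table* pairs each patch location with an index into a symbol table, where the name (however long) is stored exactly once. Here's a toy model of that idea — the struct layout is invented, not any real format's:

```c
/* Toy model of relocations: the "machine code" holds a 32-bit
 * placeholder at each unresolved call site, and a relocation table
 * pairs the patch offset with a symbol-table index. The symbol name
 * itself lives once in a string table, no matter how long it is. */
#include <stdint.h>
#include <string.h>

struct reloc {
    uint32_t offset;     /* where in the code to patch */
    uint32_t sym_index;  /* which symbol the patch refers to */
};

/* Write a resolved address into the code buffer at every relocation
 * that refers to symbol sym_index; returns how many were patched. */
int apply_relocs(uint8_t *code, const struct reloc *relocs, int n,
                 uint32_t sym_index, uint32_t address) {
    int patched = 0;
    for (int i = 0; i < n; i++) {
        if (relocs[i].sym_index == sym_index) {
            memcpy(code + relocs[i].offset, &address, sizeof address);
            patched++;
        }
    }
    return patched;
}
```

So the 8-mile C++ symbol name only ever exists once per object file, and the code just carries small fixed-size holes.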

Presumably the object file contains some metadata too. Stuff that tells
you it *is* an object file, what the target processor is, what symbols
it exposes publicly, etc. But I'm not sure how unresolved function calls
are implemented.
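
The metadata guess is right: ELF object files, for example, open with a magic number, a machine type, and offsets to the symbol and relocation tables. A sketch of that kind of header — the field names here are invented, not ELF's actual layout:

```c
/* A toy object-file header, loosely modelled on the sort of metadata
 * ELF carries; "TOBJ" and these field names are made up. */
#include <stdint.h>
#include <string.h>

struct obj_header {
    char     magic[4];        /* marks the file as an object file */
    uint16_t machine;         /* target processor (x86-64, ARM, ...) */
    uint32_t symtab_offset;   /* where the symbol table lives */
    uint32_t symtab_count;    /* how many symbols it exposes */
    uint32_t reloc_offset;    /* where the relocation table lives */
    uint32_t reloc_count;
};

/* Reject anything that doesn't start with the expected magic bytes —
 * the same kind of sanity check a real linker does first. */
int looks_like_object(const struct obj_header *h) {
    return memcmp(h->magic, "TOBJ", 4) == 0;
}
```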

For that matter, how does the linker sort all this out? Does it actually
load the entire final binary into memory while it untangles it? Or does
it somehow manage to incrementally build the file on disk? [I guess it's
perhaps implementation-defined...] My Linux box is sitting here with
16GB of RAM, and can probably handle holding the whole 0.2MB program in
RAM at once. The original 2KB system that C was designed for? Not so much.
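
My understanding is that the classic answer is two passes, precisely so the whole binary never has to sit in memory: pass 1 walks every input object and records the final address of each defined symbol; pass 2 streams the code out, patching call sites as it goes. A sketch of the symbol-table half (names and the fixed table size are illustrative, not from any real linker):

```c
/* Pass 1 records where each defined symbol will land in the final
 * image; pass 2 looks references up one at a time while writing
 * output, so only the symbol table needs to stay resident. */
#include <stdint.h>
#include <string.h>

#define MAX_SYMS 64

struct symbol {
    const char *name;
    uint32_t    address;  /* final address chosen in pass 1 */
};

static struct symbol symtab[MAX_SYMS];
static int nsyms = 0;

/* Pass 1: note a definition and the address assigned to it. */
void define_symbol(const char *name, uint32_t address) {
    symtab[nsyms].name = name;
    symtab[nsyms].address = address;
    nsyms++;
}

/* Pass 2: look up a reference. Returning 0 here is exactly the case
 * where a real linker prints "undefined reference". */
int resolve_symbol(const char *name, uint32_t *out) {
    for (int i = 0; i < nsyms; i++) {
        if (strcmp(symtab[i].name, name) == 0) {
            *out = symtab[i].address;
            return 1;
        }
    }
    return 0;
}
```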

For that matter, is it possible to store *data* in an object file? (As
opposed to executable code.)
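
It is — object files carry data sections alongside the code. In ELF terms (other formats name these differently): initialized globals land in `.data`, constants in `.rodata`, and zero-initialized globals in `.bss`, which records only a size rather than storing the zeros themselves. A small illustration:

```c
/* Where each of these ends up, in ELF terms: */

int counter = 42;                 /* .data: initialized, writable;
                                   * the 42 is stored in the file */
const char greeting[] = "hello";  /* .rodata: read-only constant data */
int scratch[1024];                /* .bss: zero-initialized; the file
                                   * stores only "4096 bytes of zeros",
                                   * not the bytes themselves */
```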