Warp wrote:
>> In Ada, you don't calculate the checksums of every source. You calculate the
>> checksum of the file you're compiling when you compile it. In C#, your file
>> is either associated with a project (i.e., with the equivalent of a
>> makefile) or it's installed in the global assembly cache with a version
>> number and a cryptographic checksum on it you can check against.
>
> So if you have thousands of files in your project, the compiler will
> calculate the checksums of every single one of them every time you want
> to compile? And it was you who complained how creating makefile rules for
> C files is inefficient... Right.
Read the second sentence again. You calculate the checksum of the file when
you compile it. You then store that in the object code.
If you have library A and B, and program C that uses them, then you compile
the headers for A and B, generating two checksums in the object code. Then
you compile the body of library A, which puts the checksum for the header
for A into the body of A, along with the checksum for the body of A. Same
with B. Then you compile C against the headers for A and B, generating a
checksum for C into C's object code, along with the checksums from A's and
B's header object code.
When you link, the linker checks that every reference to A's object code has
the same checksum, that every reference to B's object code has the same
checksum, etc.
If you change A's header file and recompile it, you'll have a different
checksum in A's header file object code, and you'll have to recompile both
A's body and C before you can relink. You can, however, recompile A's
implementation without having to recompile C.
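To make that concrete, here's a rough sketch of the link-time check in C.
The unit names and checksum values are made up, and no real compiler's
object format looks like this; the point is just the shape of the test:
every object records the checksums of the interfaces it was compiled
against, and any two objects that disagree about the same unit refuse to link.

#include <stdio.h>
#include <string.h>

struct dep { const char *unit; unsigned long sum; };

/* Hypothetical checksums a compiler might have embedded in each object. */
struct dep a_body[] = { {"a_header", 0x1111}, {0, 0} };
struct dep c_body[] = { {"a_header", 0x1111}, {"b_header", 0x2222}, {0, 0} };

/* Link-time check: any unit recorded in both objects must agree. */
static int consistent(const struct dep *x, const struct dep *y)
{
    const struct dep *p;
    for (; x->unit; x++)
        for (p = y; p->unit; p++)
            if (strcmp(x->unit, p->unit) == 0 && x->sum != p->sum)
                return 0;  /* same unit, different checksum: stale object */
    return 1;
}

int main(void)
{
    puts(consistent(a_body, c_body) ? "link ok" : "recompile before relinking");
    return 0;
}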
> Yeah, and if your hard drive dies, you also get a failure. Also a big
> difference. I still don't get your point.
That you *know* when the code you're running isn't the code you have the
source for.
> You talk as if accidentally touching an object file is a relatively
> common happenstance.
So you've never updated a library that you'd already compiled against? What
do you think "DLL Hell" is?
If I compile, today, against September's libraries, then I install
November's libraries, stuff doesn't recompile.
> in the fifteen or so years I have been using
> makefiles, guess how many times that has happened to me.
It *can* work OK, if everything is done carefully and you're not using
pre-compiled stuff for half of what you're doing. C has always been pretty
good as long as you have all the source code and everyone working on it is
careful.
>>> There are like a million scenarios where you accidentally do something to
>>> your files which will cause a malfunction (regardless of your IDE).
>
>> Sure. You're apparently trying to miss my point here, which is that the idea
>> that basing dependency information and content-change information solely on
>> file timestamps is iffy.
>
> The situations where it causes problem are so extremely rare that it just
> isn't worth the price of the compiler calculating checksums of every single
> file in the project every time you want to do a quick compile.
Nobody does that, tho. Nobody *needs* to, because type information and
declarations are actually stored in object files. Neither Ada nor C#
calculates any checksums except when it compiles the code it's
calculating the checksum for. Given that calculating the checksum is
undoubtedly faster than even -M on gcc, and you only do it when you actually
recompile something, I'm not sure why you think it would be a bad idea.
You can't do it in C because there's no place to store the checksum from a
.h file, because a .h file doesn't create code. (Generally speaking, of course.)
> Imagine that
> every time you want to compile your project, you had to wait a minute for
> the compiler to check all the checksums, when with the current scheme it
> can do it in a few seconds.
Yeah, that would suck. Good thing nobody works it that way.
Of course, the projects I've worked on spent at least that much time
recursing into makefiles just to check, so that's no biggy.
> Nevertheless, you are still blaming the programming language for a
> defect you see in the compiling tools. All this has nothing to do with C.
See above. C doesn't generate object files for .h files, so *if* you wanted
to use checksums, then yes, you *would* have to do it this sucky way. But
the languages that don't use #include don't have to do it this sucky way.
You have pointed out exactly what my point in this part of the discussion
is, and exactly why C (and other source-include macro languages) sucks in
this particular respect.
>>> Of course the makefile assumes that you have specified all the necessary
>>> dependencies. It cannot read your mind.
>
>> I have never seen a makefile that lists as a dependency of my code all the
>> things that stdio.h includes.
>
> And that makes it impossible to create such a makefile (especially if you
> are using a dependency-generation tool)?
No. I'm just saying that in practice, it doesn't happen, so in practice,
there are still times when you have to either do a make-clean or risk having
broken code.
> Anyways, exactly why would you want to recompile your program if stdio.h
> changes? You see, stdio.h is standardized, and if it was changed in such
> way that programs would need to be recompiled for them to work, it would
> mean that the C standard has been changed to be backwards-incompatible and
> it would break basically all programs in existence.
Nope. All I need to do is redefine O_RDWR to have a different value or
something like that. (OK, not stdio.h, because that's very modularized.)
Or, if you want, errno.h or any of the other less-well-thought-out include
files.
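Concretely, something like this, with a hypothetical MODE_RW constant
standing in for O_RDWR:

/* flags.h, November's release -- the value was 1 in September's */
#define MODE_RW 2

/* lib.c -- compiled in September; lib.o still has "mode == 1" baked in */
#include "flags.h"
int is_rw(int mode) { return mode == MODE_RW; }

/* main.c -- compiled today, against the new header */
#include "flags.h"
extern int is_rw(int);
int main(void) { return is_rw(MODE_RW); }  /* passes 2; lib.o compares to 1 */

The timestamp on lib.o looks perfectly healthy, so make happily links the
mismatch. A checksum of flags.h recorded in lib.o would have caught it at
link time.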
Of course, if someone comes along and does a #define in some include file
that conflicts with something in stdio.h, then you're screwed too. Ever
spend three days trying to figure out why putting
extern int write(int,char*,int);
at the top of your source code so you could print some debugging info throws
a compiler error saying "too many close braces"?
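(Reconstructing that failure with a made-up vendor macro, since I don't
remember the original one, it looks something like this:)

/* buried in some vendor header you transitively #included: */
#define write(fd, buf, len) _sys_write((fd), (buf), (len))

/* ...so this innocent debugging declaration... */
extern int write(int, char*, int);

/* ...expands into the unparseable
       extern int _sys_write((int), (char*), (int));
   and the diagnostic lands nowhere near the culprit. */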
>>> If you are using a tool to create the dependency lists and that tool
>>> fails for some reason, blame the tool, not make.
>
>> If I'm using a tool to create makefiles, then sure, but that's just saying
>> makefiles are so useless that I have to automate their creation. Why would
>> I use a tool to create makefiles instead of just using a tool that does
>> the job properly?
>
> You could make the same argument for any program that doesn't do everything,
> and needs to work in conjunction with another program in order to perform
> some task. Your argument is moot.
No. I'm saying that make only does one thing, and it does it so poorly that
almost nobody actually uses it manually. I'm not against independent tools.
> As for makefiles, not all things that can be done with makefiles can be
> automated.
No. And the things like that, where the makefile can't be generated
automatically, shouldn't be using makefiles to start with, because make is
probably the wrong tool.
>> There's really only one task going on here - dependency checking. That's the
>> one and only thing Make does.
>
> Make can't know if eg. a code-generating program depends on some data
> files unless you tell it.
Yep! And there's the problem.
Right now, for example, I have an XML file I use to specify the levels for
my game. In those XML files I include the lists of animations going on in
different places, the textures, etc etc etc.
Know what? Make doesn't handle that. Know what else? The tool I'm using
does, because as I compile that XML file into the internal format, I'm
telling the build system "By the way, the results of compiling this XML also
depends on that texture, that sound effect, and those two animations."
If I were to try to do this with Make, I'd either be repeating all the
information manually, or I'd have to write a program to parse that XML file
and pull out the appropriate dependencies, except output them in a different
way.
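With made-up asset names, and a hypothetical xml2level standing in for my
actual tool, that hand-maintained rule would look like:

# everything after the colon is already named inside level1.xml itself;
# the makefile would have to repeat all of it by hand:
level1.dat: level1.xml grass.png lava.png door.wav hero.anim boss.anim
	xml2level level1.xml level1.dat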
> I gave an example in my other post of an actual project I have been
> working in where makefiles are useful for something else besides purely
> compiling C++.
Yes, I think we crossed in the mail.
>> I'm sorry, but if anyone in the world asked me "how do I invoke the gcc
>> compiler", I'd say "use /usr/bin/gcc" or something. I would *not* add "but
>> be careful not to pass the -M flag, because that doesn't invoke the compiler."
>
> Actually I'm invoking the C preprocessor. I could call it directly, but
> gcc does it automatically when you specify the -M parameter. The calls
> "gcc -M test.cc" and "cpp -M test.cc" produce the exact same result.
> I could simply use the latter if I wanted to be pedantic.
I'll just let this one drop, if you're actually going to argue that "C
preprocessor" isn't a part of the compiler.
> Instead, we have "project files" in IDEs which do... what? Well, exactly
> the same thing as makefiles.
No. They solve the same problem that makefiles are supposed to solve. They
don't do the same thing as makefiles. And that's the distinction I'm making.
It's like saying "C++ does exactly the same thing as C: It takes a fairly
machine-oriented source file and turns it into machine code." But you would
give me grief for saying that without pointing out that C++ does it much
*better* than C, and solves some of the problems that C simply can't handle,
right?
> In fact, "project files" are just makefiles,
No, they're not.
> using some other IDE-specific syntax. (They might have features that 'make'
> doesn't support, but in principle they are the same thing.)
And in principle, C++ is the same thing as C, it just has some features that
C doesn't support, right?
> The only difference is that most IDEs generate the dependency rules for
> source code files automatically so you don't have to write the few magic
> lines into the project file yourself.
No. Most of them have a different way of both specifying and evaluating
dependency rules.
> However, again, in principle it's
> no different from a generic makefile that generates dependency rules
> automatically (like the one I pasted in an earlier post).
No, it is different, and that's exactly what I'm saying. Did you even read
what I wrote about what's in the "makefile" of a C# project? In what sense
is that "the same thing as a
makefile", other than it solves the problem of only recompiling what needs
to be recompiled?
In C#'s project files there are no dependency lists at all. The only file
names present are "here's the list of files to compile into this project".
You don't have to specify other files you depend on. You don't have a list
saying "this file depends on that file and the other file".
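From memory (so take the exact tags as approximate), the interesting part of
a .csproj is little more than:

<ItemGroup>
  <Compile Include="Game.cs" />
  <Compile Include="Level.cs" />
</ItemGroup>
<ItemGroup>
  <Reference Include="System.Xml" />
</ItemGroup>

That's the entire dependency story: which files are in the project, and
which assemblies it references. Nothing anywhere says which file depends on
which.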
> You still can't get past the fact that in unix different tasks are
> separated into different tools. Fine, you hate that. Move on.
You're completely ignoring what I'm saying. Fine, move on.
Actually, now that I think about it, yes, this is part of the problem.
problem. The problem is that the thing that actually *depends* on those
dependencies is indeed independent of the build system, and that means you
have to put the same dependency information in two different places.
Thinking on it (see the end of this mess), this is exactly how the Visual
Studio build system gets around the problem, regardless of the complexity of
the dependency chain. The first time you compile the program, it saves what
other files it depended on. So, basically, the first "make" creates the
makefile for you. It also, incidentally, knows what output files were
created, and can safely delete them when you clean without accidentally
clobbering something else.
I prefer the DRY principle, where my dependencies are only held in one
place, especially when that place is checked automatically. Your suggesting
that -M is sufficient is simply making the thing ass-backwards, building the
dependency list for the compiler instead of vice versa. You're basically
running the code thru the compiler to figure out what the dependencies are,
then using those dependencies to run the code through the compiler *again*.
Other build systems eliminate that first step, which is one less set of
duplicated data you have to manually keep up to date right there.
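Spelled out, with main.c as a stand-in, the -M dance really is two trips
through the compiler's front end, with the makefile -include'ing the
generated main.d in between:

gcc -M main.c > main.d   # trip 1: preprocess everything just to learn
                         #   "main.o: main.c foo.h /usr/include/stdio.h ..."
gcc -c main.c -o main.o  # trip 2: preprocess it all again, compiling for real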
>> Now, go regenerate the rules without
>> the -M flag.
>
> And why exactly would I want to do that? You might as well ask "compile
> the program without using the compiler".
And you'd complain if one of the steps of compiling the program was to tell
the compiler what machine code to generate for different parts of your code,
right?
>> I'm not saying that generating the dependencies, *today*, isn't
>> easy. I'm saying that the job Make was created to handle, namely recording
>> and evaluating dependencies, is obsolete.
>
> And how do you propose to handle dependencies on things that the compiler
> does not handle, such as creating data files from some input files using
> some tools (and perhaps recompile the program only if those data files
> change)?
Yet, oddly enough, that's what I've been doing all week. Funny, that.
Certainly I say which compiler handles each kind of source file: I can't run
a PNG file through gcc. But that's just setting a default rule for a
particular kind of file.
>> Makefile dependency lists are, for
>> the most part, created by other tools
>
> Only program source code dependencies. If you have other types of
> dependencies you still need to specify them by hand because no automated
> tool can read your mind.
No, this is just factually incorrect. I have no need to specify by hand
that my animation as specified in my XML depends on those six files. (Well,
obviously, I'm specifying it in the XML just like I'd be writing #include
into my C sources. I'm not specifying it anywhere outside my own data files,
tho.)
Note also that this means that when I change compiler options (like whether
I have debugging or optimization turned on, etc.), the right code gets
recompiled, because the compiler has the opportunity to store a dependency
on its own command-line arguments, or at least on the ones that might make
a difference in the object code.
--
Darren New, San Diego CA, USA (PST)
"How did he die?" "He got shot in the hand."
"That was fatal?"
"He was holding a live grenade at the time."