  Unix shell (Message 21 to 30 of 88)  
From: Warp
Subject: Re: Unix shell
Date: 29 Jan 2011 16:34:42
Message: <4d4487f1@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Besides, I thought you don't like C either.

  The issue was the #include scheme, which you seem to hate and I really
can't understand why.

> >   Also, in this particular case, what was the point? It did not add anything
> > relevant or interesting to the thread, nor did it answer the original
> > question.

> Sorry you didn't understand the point.  The point is that most of the other 
> languages out there don't have the sort of nested include behavior that C 
> has, and hence don't need makefiles that are anything more than simply the 
> list of source files that compile into each object library or executable.

  Firstly, that doesn't answer the original question. It's not like Andrew
was making a choice of programming language for his next project. He just
wanted an easy way to compile a bunch of programs. Your answer doesn't help
his problem in any way.

  Secondly, what is it exactly that allows other languages to compile only
what has changed, rather than everything every time? Right: The compiler
keeps track of changed files and compiles only what has changed.

  In a unix system this tracking has been separated into its own set of
tools, rather than being part of the compiler. *That* is essentially what
you are complaining about. In other words, the C/C++ compilers don't do
everything out-of-the-box, but instead you need additional tools.

  Well, if you don't like it, then don't. It's just how the unix ideology is.
You might disagree with the ideology and argue that it's antiquated, but it's
not something that bothers everybody. Some people like one tool doing one
thing, rather than one program trying to do everything.

  You are, hence, barking up the wrong tree. It's not the programming
language that is the "problem". It's the separation of compiler and
dependency file system that irks you. You are just blaming the programming
language for it because you love to hate it.

> In C# for example, the equivalent of the makefile says "here's the source 
> files to compile. Here's the name of the DLL to create. Here's the global 
> DLLs to link against (specified by content, not path). Here's the list of 
> makefiles for other possibly-uncompiled sources I depend on. Here's the 
> #define symbols to define and other compiler options." There's no list of 
> chunks of source code unrelated to the object code that this compilation 
> depends on. There's no big long list of which libraries link against other 
> libraries, or what order to search them in, or what order to compile things 
> in, or which source files use which types, or a 5000-character long compile 
> line specifying the source code locations of every declaration used anywhere 
> in your program.  (And yes, I've had individual makefile compilation lines 
> that spanned several pages.)

  Can you guess how many source code dependency definitions I have ever
written when I have used Microsoft's Visual C++ or Apple's Xcode?

  That's right. You are putting the blame on the programming language,
when what really irks you are the unix compiling tools. You just love
to hate C, so you just can't miss an opportunity to bash it.

  Yes, I hate C too, but I don't blame it for flaws that it doesn't have.

>  > (this is, afaik, what gcc does when you run it with the -M option)

> BTW, why not just build this into the compiler? Then you wouldn't need 
> makefiles.

  Sometimes the dependencies are more complicated. For example you could
have a program in your project which generates code, which then gets
compiled into the actual executable. That code-generation program in turn
may depend on some input files from which it generates the code.

  You only want to compile the code generating program if its source files
change. You only want to run that program if the data files change. You
only want to recompile your actual executable if either of those (or its
own source files) change.

  Obviously a compiler cannot know all this if you simply give it a list
of all the source code files. It needs slightly more elaborate rules.
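
  As a rough sketch (all file names invented for illustration), the makefile
rules for that scenario would look something like this:

    # the generator is rebuilt only when its own source changes
    gen: gen.c
            cc -o gen gen.c

    # the code is re-generated only when the generator or its data change
    tables.c: gen rules.dat
            ./gen rules.dat > tables.c

    # the executable is rebuilt only when any of its inputs change
    prog: main.c tables.c
            cc -o prog main.c tables.c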

  Most IDEs support creating these types of dependencies (eg. Xcode does,
Visual Studio probably does too, even though I have so far not needed to
do something like that with it). Obviously you can achieve this with a
makefile as well (after all, it's exactly what it is designed to do).
And yes, I have needed to do this, several times.

  A makefile / project file can be useful for even more than compiling. For example,
you could have a rule to create a distribution package from your program.
The creation of this distribution package ought to have dependencies on
all the files which need to be included. Hence if you eg. change one of
those data files and then build the distribution package, the executable
binary will be compiled first, so that it's up to date. You can't do this
with a simple shell script. (And yes, I also have needed to do this.)
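
  A sketch of such a packaging rule (again with invented file names):

    dist.tar.gz: prog data1.dat data2.dat
            tar czf dist.tar.gz prog data1.dat data2.dat

  Because 'prog' is listed as a prerequisite, asking for dist.tar.gz
rebuilds the executable first whenever anything it depends on has changed.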

> >> When someone asked why the C 
> >> standard limited global names to six characters and I said it was because of 
> >> FORTRAN-based linkers, you didn't seem to mind.
> > 
> >   Perhaps because it was a relevant answer to a presented question?

> Sure. As was this.

  How did it answer the original question?

-- 
                                                          - Warp



From: Darren New
Subject: Re: Unix shell
Date: 29 Jan 2011 17:22:42
Message: <4d449332$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> If you change environment variables, it fails.
> 
>   I'm not exactly sure which environment variables a compiler depends on
> which would affect the compilation process. Usually command-line arguments
> (to the compiler) are used to affect the compilation.

http://www.gnu.org/software/hello/manual/make/Implicit-Variables.html#Implicit-Variables

Stuff like CFLAGS and LDFLAGS and LIBS and INCLUDEPATH, all of which tend 
to come in from the environment rather than from the makefile itself. Sure, 
you *can* get around it, but in practice people don't.
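
The failure mode is easy to demonstrate with make's built-in rule for 
turning .c into .o, which is roughly:

    %.o: %.c
            $(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<

Run "CFLAGS=-O2 make", then "CFLAGS=-g make": the second run recompiles 
nothing, because make compares only file timestamps and no file changed.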

>   Well, I suppose you could make the makefile use some environment variables
> to affect the compilation. Seems rather contrived and counter-productive
> (because you are now putting part of your project settings outside of your
> "project file", which is usually were they should be).
> 
>   If you want changes to the makefile itself to cause everything to be
> recompiled, you could add the makefile to the dependency rules.

Unless it's in a different project, sure. If you want your kernel module to 
recompile any time the C runtime library or the different bits of the kernel 
it depends on changes, you have a bit of tracking down to do.

Besides, have you *ever* seen someone actually do this in a hand-generated 
makefile?
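
(For the record, the trick Warp describes is a one-liner -- a sketch, with 
invented object names:

    OBJS = foo.o bar.o
    $(OBJS): Makefile    # editing the makefile now forces a recompile

-- I just don't see it in the wild.)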

>> If you restore an 
>> older version of an include file, it fails.
> 
>   I'm pretty certain most IDEs don't eg. calculate checksums of every single
> source file to see if they have changed from the last time that it was
> compiled. 

In languages where it's important to be correct, like Ada, yes they do. In 
C#, yes they do, because they don't *look* at include files, they look at 
object files. C# doesn't *have* include files. That's the point. If you say 
"use the types from *that* assembly" and then later you replace the assembly 
with an incompatible version, your code will get recompiled (or will refuse 
to start if you try to run it without recompiling).

In Ada, you don't calculate the checksums of every source. You calculate the 
checksum of the file you're compiling when you compile it. In C#, your file 
is either associated with a project (i.e., with the equivalent of a 
makefile) or it's installed in the global assembly cache with a version 
number and a cryptographic checksum on it you can check against.

> In a project with thousands of files it could take a while to
> calculate checksums for all of them.

When it's important, you compile the header file separately, like in Ada. 
That's my problem with the way C does #include. Ada header files aren't 
chunks of arbitrary source code that you include nested in other chunks of 
source code. Ada external type declarations aren't a byproduct of compiling 
something unrelated to the type you're declaring. They're a separate piece 
of code that can be version controlled and dependency analyzed on its own. 
If you recompile a header file and don't recompile everything that depends 
on it in Ada, your code won't link, regardless of date stamps.

>> If you change the source file 
>> then accidentally change the object file it fails.
> 
>   Yeah, and if you accidentally delete the makefile, it also fails. Duh.

No, if you delete the makefile you get an error. If you touch the object 
code after changing the source, you get a silent failure. There's a big 
difference there.

> There are like a million scenarios where you accidentally do something to
> your files which will cause a malfunction (regardless of your IDE).

Sure. You're apparently trying to miss my point here, which is that the idea 
that basing dependency information and content-change information solely on 
file timestamps is iffy.  Yes, there's a lot of ways to break code, but 
those aren't relevant to the discussion we're having about Make.  There 
aren't a whole lot of scenarios where you accidentally do something to your 
Ada code and wind up with an executable that doesn't match the sources you 
compiled it from.
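
(For what it's worth, faking content-based checking in make takes a 
well-known stamp-file contortion -- a sketch, with invented names:

    foo.o: foo.md5
            cc -c -o foo.o foo.c
    foo.md5: FORCE
            @md5sum foo.c | cmp -s - $@ || md5sum foo.c > $@
    FORCE:

The stamp's timestamp changes only when the *contents* of foo.c change, so 
foo.o gets rebuilt on content changes rather than on date stamps.)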

>   Of course the makefile assumes that you have specified all the necessary
> dependencies. It cannot read your mind.

I have never seen a makefile that lists as a dependency of my code all the 
things that stdio.h includes.

>   If you are using a tool to create the dependency lists and that tool
> fails for some reason, blame the tool, not make.

If I'm using a tool to create makefiles, then sure, but that's just saying 
makefiles are so useless that I have to automate their creation. Why would I 
use a tool to create makefiles instead of using a tool that just does the job properly?

>   There's nothing wrong in using two tools to perform a task rather than
> one. Do you also blame 'ls' for not supporting everything that 'sed'
> supports?

There's really only one task going on here - dependency checking. That's the 
one and only thing Make does.

>   'make' is not used only to compile C/C++ programs. It can be used for
> quite many tasks, including things that have nothing to do with compiling
> a program. 'make' is designed to be a versatile tool for these tasks.

That's what I asked earlier. I just never found anything that *actually* 
uses the dependency analysis of make to build something other than C-like 
code or libraries therefrom.

Do you have things in your makefile for processing the graphics for your 
games? Does it reprocess them only when the source graphics change?
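
(Such a rule is certainly writable -- a sketch, with an invented converter 
tool:

    textures/%.dds: art/%.png
            pngtodds $< $@    # hypothetical asset converter

-- I've just never seen anyone actually maintain one.)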

It's not like "make test" only runs if you recompiled something (or, better, 
only tests the modules you changed), or "make clean" doesn't delete *.o 
instead of going through each file to see if it's there first.

>   Many IDEs also support this, but they agglomerate everything into one
> single program. 

Well, no, not really. In simple situations (like a directory full of Java 
source), they just give it to the compiler to figure out, because the 
compiler already knows all the dependencies. In complex situations, they 
have to do more than just look at timestamps.

>> So you invoke the compiler to create the makefile. You're splitting hairs 
>> here.
> 
>   I'm not. gcc does not "compile" the program when I run it with the -M
> parameter. It just runs a simplified C preprocessor to track all the
> included files. The major difference is, obviously, speed.

Sure. But you're invoking the compiler, and with the same sets of 
command-line options and environment variable expansions as when you do the 
full compilation.

I'm sorry, but if anyone in the world asked me "how do I invoke the gcc 
compiler", I'd say "use /usr/bin/gcc" or something. I would *not* add "but 
be careful not to pass the -M flag, because that doesn't invoke the compiler."

The point is that you're using C-specific code written by the same people 
who wrote the compiler to figure out what the C-specific dependencies are. 
The speed is irrelevant. What's relevant is that the C nested includes are 
sufficiently complex that you need a tool to generate the input to the next 
tool, where the "next tool" was supposed to stand alone on its own in 
simpler times.  In more complex times, the stand-alone tool is mostly 
useless. We have gotten to the point where any time "make" saves you enough 
compilation time to be worthwhile, generating the makefiles is too complex 
to be easily maintainable.  Make lives on not because it's good, but because 
it's as ubiquitous as C. But environments that don't use C don't use make 
either, because it's just not a very good tool.

>> If I can't easily create the makefile by looking at the source code 
>> I'm trying to compile, it's baroque. :-)
> 
>   I just can't see what's so wrong in using a tool to create the makefile
> dependency rules. Do you really *want* to be writing makefiles by hand?

I have no trouble at all writing makefile rules by hand when it's *not* C 
that I'm compiling. That's exactly my point. C's dependencies are 
sufficiently opaque that it's actually problematic just to figure out what 
they are, to the point where one feels the need to actually write a number 
of tools to figure it out, including modifying the compiler to help you out.

But even back in the MS-DOS 3 days, I was writing editor macros to suck 
#include declarations out of C sources to generate makefiles, simply because 
of the need for *correctness* rather than complexity.

>>>   Besides, you only have to create the rules once. If the dependencies
>>> don't change, you don't need to create them every time you want to compile.
> 
>> You have to create the rules every time you change anything that might 
>> affect the compilation, whether that includes the .c file you're compiling 
>> or not.
> 
>   Generating the rules is pretty fast, and the need doesn't present itself
> very often, and re-generating the rules is quite easy. I just can't see
> the problem here.

Regenerating the rules is only quite easy because someone added a flag to 
GCC to make it regenerate the rules. Now, go regenerate the rules without 
the -M flag. I'm not saying that generating the dependencies, *today*, isn't 
easy. I'm saying that the job Make was created to handle, namely recording 
and evaluating dependencies, is obsolete. Makefile dependency lists are, for 
the most part, created by other tools, and they're inadequate to actually 
capture all the dependencies that people have in modern code. It was a fine 
tool back when you had a dozen C files and a half-dozen include files in 
your project, and your choice was whether you wanted the compiler to 
optimize or not. Nowadays, it looks more like this:
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#Optimize-Options
http://www.boost.org/doc/libs/1_45_0/libs/config/doc/html/boost_config/rationale.html#boost_config.rationale.the_problem

You're arguing that the dependency rules for C's #include directive aren't 
complicated because the compiler can figure it out for you. In your words, 
"well, duh."  That doesn't make the rules easy. It just means that the 
complexity is encapsulated in the compiler. And that means if you're 
compiling a different language with different dependency rules, the sorts of 
things Make does just aren't the best way of doing it. C# needs something 
more than timestamps to determine dependencies. Java doesn't need makefiles 
at all for the code itself. Ada needs a completely different process, not 
infrequently involving a database. All the interpreted languages have their 
own thing going on.

(It also ignores the fact that you have to figure such crap out when you 
write the code, too. I mean, really, three #include files to invoke "open"? 
How would you even guess which include files the various #defines were in 
if it weren't a system library documented all over the place? It's entirely 
possible for someone to give you a blob of source code from which you have 
no way of even knowing how to correctly generate the dependencies: for 
example, it wasn't unusual at the last place I worked to have a bunch of 
different include files with different values for the same #define in 
different directories, and to spend a couple of days figuring out which 
include path to use to get the code to link correctly against the 
pre-compiled libraries that came with it. But that's due to the nature of 
not putting type information into object code files as much as to #include 
per se.)

-- 
Darren New, San Diego CA, USA (PST)
  "How did he die?"   "He got shot in the hand."
     "That was fatal?"
          "He was holding a live grenade at the time."



From: Darren New
Subject: Re: Unix shell
Date: 29 Jan 2011 17:56:40
Message: <4d449b28$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> Besides, I thought you don't like C either.
> 
>   The issue was the #include scheme, which you seem to hate and I really
> can't understand why.

Experience! :-)

I have explained it numerous times, with specific examples and general 
descriptions, so I'm going to assume it's one of those things you just won't 
ever understand. I think sometimes you fail to understand things that you 
disagree with.

>   Firstly, that doesn't answer the original question. It's not like Andrew
> was making a choice of programming language for his next project. He just
> wanted an easy way to compile a bunch of programs. Your answer doesn't help
> his problem in any way.

It does.  "Make is not appropriate for your task." :-)

Granted, the conversation has progressed since then.  But the basic comment 
was "by the time you need make, make will be inadequate to the task." Then I 
explained why, which is what we're arguing about now.

>   Secondly, what is it exactly that allows other languages to compile only
> what has changed, rather than everything every time? Right: The compiler
> keeps track of changed files and compiles only what has changed.

Sometimes. Other times, not. When it *is* the compiler doing it, it's using 
a dependency analysis more sophisticated than just file timestamps, which is 
all that make supports.

>   In a unix system this tracking has been separated into its own set of
> tools, rather than being part of the compiler. 

Except for, you know, that pesky -M switch. That's what I'm talking about. 
The tracking *is* built into the compiler. It would be a pain in the butt to 
use make to track dependencies without it being built into the compiler.

> *That* is essentially what
> you are complaining about. In other words, the C/C++ compilers don't do
> everything out-of-the-box, but instead you need additional tools.

Nope. Other languages have separate tools too. It's just built into the 
compiler for simple cases.

>   Well, if you don't like it, then don't. 

And *that*, my friend, is how this discussion started. Andrew asked about 
make, I said "You won't like it, so use something else", and you got upset 
that I explained why he won't like it. :-)

>   You are, hence, barking up the wrong tree. It's not the programming
> language that is the "problem". It's the separation of compiler and
> dependency file system that irks you.

No, it's the inability to correctly figure out dependencies based on the 
source files that's really the cause of the troubles.

>   That's right. You are putting the blame on the programming language,
> when what really irks you are the unix compiling tools. 

We were talking about make. I said "make was designed to deal with C's 
#include mess." The fact that there are other tools that deal with C's 
#include mess better than make does doesn't mean C's #include mess is less 
messy, nor does it mean that make is the right tool for dealing with other 
dependency issues. Which is what we're discussing.

 > You just love
> to hate C, so you just can't miss an opportunity to bash it.

I actually like C. It just has a number of legacy flaws, one of which is 
using #include instead of having actual compiled declarations stored anywhere.

>   Obviously a compiler cannot know all this if you simply give it a list
> of all the source code files. It needs slightly more elaborate rules.

Yet, oddly enough, that's precisely what the XNA content pipeline does, and 
you just give it a list of all the source files, including the graphics and 
audio and stuff. Sure, you mention the name of the code used to compile it, 
but there's enough information from that to track down where the executable 
and such is. It also makes it possible for the thing that's compiling your 
collection of frames into an animation to say "the resultant animation 
depends on this list of frames as well as the text file specifying the list 
of frames", which you can't do with a makefile. (At least, not without 
writing the -M equivalent for your personal processors and remembering to 
run it when you change the list of frames.)

You can achieve it with a makefile, yes, but primarily not by having the 
dependency stuff going on, but by building them as different projects. That 
is, one tends to have a top-level makefile that says "first run make on the 
content generator, then run make on the content, then run make on the 
packaging directory."

Maybe it's just because every single makefile I've ever seen in my entire 
life really, really sucks and doesn't work right either that I'm down on make.

-- 
Darren New, San Diego CA, USA (PST)
  "How did he die?"   "He got shot in the hand."
     "That was fatal?"
          "He was holding a live grenade at the time."



From: Darren New
Subject: Re: Unix shell
Date: 29 Jan 2011 18:23:06
Message: <4d44a15a$1@news.povray.org>
Warp wrote:
>   The issue was the #include scheme, which you seem to hate and I really
> can't understand why.

Let's see if I can express it simply and clearly enough:

1) Every source file should generate object code when compiled. In other 
words, declarations should generate output when you pass them to the 
compiler. Not necessarily *executable* statements, mind, but some sort of .o 
file.

2) It should be possible to deduce from object code what types that object 
code implements, without reference to the source code, well enough to 
determine if two object files can inter-operate (i.e., can be correctly 
linked together). This implies that linking incompatible compilations of the 
same type can also be noticed - i.e., if the caller passes packed structures 
to a callee that expects unpacked structures, this should be caught.

When you're lacking property #1, there's no clear and obvious "type" in your 
system. All you have is declarations which may or may not have been changed 
and which may or may not match your definitions. (Otherwise, people would 
not need to be encouraged to include xyz.h in their xyz.c file.)


C lacks property #1, which leads to #include files because manually 
repeating the code is too error-prone. This leads to the need to track 
dependencies between source files that come from different authors.

Without property #1, a consumer of a C-enabled library has to track 
dependencies on *source* code written by the provider of the library, and 
that *source* code is (by the time it's on the consumer's machine) unrelated 
in any way to the library code the consumer is trying to consume. I.e., 
there's no practical way for the consumer to use the library without a .h 
file, but that .h file is not part of the actual library code that gets 
deployed. It's entirely possible that the .h source code doesn't match the 
.o/.lib code you wind up linking against.

When you're lacking property #2, you not only need to have the object code 
library, but you need to know how to compile your code to make it work with 
that object code library. So now you have three things to harmonize: the .h 
file for the library, the .so file for the library, and the compile-time 
arguments for the library, and you have to pick compile time arguments for 
your own compilation that are compatible.

Most implementations of C also lack property #2, which means you actually 
have to track down the correct library source code and compilation options 
to use an already-compiled library. Interestingly enough, one of the 
criticisms of C++ is that each compiler tends to implement (to the extent it 
does) property #2 somewhat differently.

In other words, with #includes, using a library means you have to figure out 
how to correctly compile someone else's code, in an environment that the 
library author never tried compiling in, just to invoke "already compiled" code.

Also:

In *shitty* code, the fact that #include's aren't restricted to type 
declarations but rather can arbitrarily change the syntax of the language 
you're using is also a problem. Languages should strive to make it difficult 
to write shitty code, *especially* shitty code that makes people who write 
*good* code have a harder job of it. Again, this isn't a problem in 
languages where you don't need to compile someone else's code in order to 
use pre-compiled code.



I hope that answers it. You may *disagree* it's a problem, but I hope you 
understand it.

-- 
Darren New, San Diego CA, USA (PST)
  "How did he die?"   "He got shot in the hand."
     "That was fatal?"
          "He was holding a live grenade at the time."



From: Patrick Elliott
Subject: Re: Unix shell
Date: 29 Jan 2011 23:30:52
Message: <4d44e97c$1@news.povray.org>
On 1/29/2011 1:21 PM, Darren New wrote:
> Warp wrote:
>>> I'm not. I'm just pointing out why Make is so baroque.
>>
>> That claim doesn't make any sense. All IDE project files are like
>> makefiles
>> (just with a different format). It's not like a makefile is somewhat
>> special
>> and different from any project file.
>
> There are lots of ways in which make is inferior to most of the other
> build tools. For one, the only out-dated-ness information it tracks is
> file timestamps. If you change environment variables, it fails. If you
> restore an older version of an include file, it fails. If you change the
> source file then accidentally change the object file it fails. If you
> change a file your compilation depends on that isn't listed as a
> dependency (like stdio.h say) it fails.
>
>>> The fact that all the stuff in the makefile is something you can't
>>> automate,
>>
>> What do you mean you can't automate? Of course you can. There are tons of
>> tools to automatically create makefiles (and makefile rules).
>
> The existence of tools to generate a makefile is exactly the point I'm
> making. At that point, you're no longer using Make. You have a makefile
> as an intermediate step in your compilation. That's not automating
> something with make. That's automating something with a makefile generator.
>
Yeesh... I remember a damn mess with some MUD I was trying to run at one 
point. The only versions I could find pre-compiled for Windows all had ANSI 
disabled, and, worse, unlike some newer ones, anything you added had to be 
compiled into the core. The makefile was for something like VC++ 4.0, which 
VC++ 5.0 didn't like, but I didn't even know that initially, because I 
first had to find a copy of Bison for Windows, because the makefile made 
another makefile, as one of its steps, along with a bunch of 
libraries/changes, to compile under Windows instead of *nix, and... well, 
things just went increasingly bad from there.. lol

Trying to work out what went wrong, and why, from the damn code, and the 
code that built the code, and the code that ran the code, that built the 
code... o.O

-- 
void main () {
   If Schrödingers_cat is alive or version > 98 {
     if version = "Vista" {
       call slow_by_half();
       call DRM_everything();
     }
     call functional_code();
   }
   else
     call crash_windows();
}




From: Darren New
Subject: Re: Unix shell
Date: 30 Jan 2011 00:31:41
Message: <4d44f7bd$1@news.povray.org>
Patrick Elliott wrote:
> Trying to work out what went wrong, and why, from the damn code, and the 
> code that built the code, and the code that ran the code, that built the 
> code... o.O

Precisely. When the configuration file to create the build tools is based on 
turing-complete macro processing, you're gonna just be screwed if there's 
anything wrong.

-- 
Darren New, San Diego CA, USA (PST)
  "How did he die?"   "He got shot in the hand."
     "That was fatal?"
          "He was holding a live grenade at the time."



From: Warp
Subject: Re: Unix shell
Date: 30 Jan 2011 03:26:48
Message: <4d4520c8@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> >   I'm pretty certain most IDEs don't eg. calculate checksums of every single
> > source file to see if they have changed from the last time that it was
> > compiled. 

> In languages where it's important to be correct, like Ada, yes they do. In 
> C#, yes they do, because they don't *look* at include files, they look at 
> object files. C# doesn't *have* include files. That's the point. If you say 
> "use the types from *that* assembly" and then later you replace the assembly 
> with an incompatible version, your code will get recompiled (or will refuse 
> to start if you try to run it without recompiling).

> In Ada, you don't calculate the checksums of every source. You calculate the 
> checksum of the file you're compiling when you compile it. In C#, your file 
> is either associated with a project (i.e., with the equivalent of a 
> makefile) or it's installed in the global assembly cache with a version 
> number and a cryptographic checksum on it you can check against.

  So if you have thousands of files in your project, the compiler will
calculate the checksums of every single one of them every time you want
to compile? And it was you who complained how creating makefile rules for
C files is inefficient... Right.

> >> If you change the source file 
> >> then accidentally change the object file it fails.
> > 
> >   Yeah, and if you accidentally delete the makefile, it also fails. Duh.

> No, if you delete the makefile you get an error. If you touch the object 
> code after changing the source, you get a silent failure. There's a big 
> difference there.

  Yeah, and if your hard drive dies, you also get a failure. Also a big
difference. I still don't get your point.

  You talk as if accidentally touching an object file is a relatively
common happenstance. Well, in the fifteen or so years I have been using
makefiles, guess how many times that has happened to me. (In fact, a HD
dying has happened to me more often than that.)

> > There are like a million scenarios where you accidentally do something to
> > your files which will cause a malfunction (regardless of your IDE).

> Sure. You're apparently trying to miss my point here, which is that the idea 
> that basing dependency information and content-change information solely on 
> file timestamps is iffy.

  The situations where it causes problems are so extremely rare that it just
isn't worth the price of the compiler calculating checksums of every single
file in the project every time you want to do a quick compile. Imagine that
every time you want to compile your project, you had to wait a minute for
the compiler to check all the checksums, when with the current scheme it
can do it in a few seconds.

  Nevertheless, you are still blaming the programming language for a
defect you see in the compiling tools. All this has nothing to do with C.

> >   Of course the makefile assumes that you have specified all the necessary
> > dependencies. It cannot read your mind.

> I have never seen a makefile that lists as a dependency of my code all the 
> things that stdio.h includes.

  And that makes it impossible to create such a makefile (especially if you
are using a dependency-generation tool)?

  Anyways, exactly why would you want to recompile your program if stdio.h
changes? You see, stdio.h is standardized, and if it were changed in such a
way that programs would need to be recompiled for them to work, it would
mean that the C standard has been changed to be backwards-incompatible and
it would break basically all programs in existence.

  Yeah, very likely to happen.

  Even if stdio.h were changed in a backwards-compatible way (eg. a
new non-standard function declaration is added), you would not need to
recompile any existing programs. It would not affect them.

  If you are making an extremely rare program which truly needs to be
recompiled every time stdio.h changes, the solution to this is rather
trivial.

> >   If you are using a tool to create the dependency lists and that tool
> > fails for some reason, blame the tool, not make.

> If I'm using a tool to create makefiles, then sure, but that's just saying 
> makefiles are so useless that I have to automate their creation. Why would I 
> use a tool to create makefiles instead of using a tool that just does the job properly?

  You could make the same argument for any program that doesn't do everything,
and needs to work in conjunction with another program in order to perform
some task. Your argument is moot.

  As for makefiles, not all things that can be done with makefiles can be
automated.

> >   There's nothing wrong in using two tools to perform a task rather than
> > one. Do you also blame 'ls' for not supporting everything that 'sed'
> > supports?

> There's really only one task going on here - dependency checking. That's the 
> one and only thing Make does.

  Make can't know if eg. a code-generating program depends on some data
files unless you tell it.

> >   'make' is not used only to compile C/C++ programs. It can be used for
> > quite many tasks, including things that have nothing to do with compiling
> > a program. 'make' is designed to be a versatile tool for these tasks.

> That's what I asked earlier. I just never found anything that *actually* 
> uses the dependency analysis of make to build something other than C-like 
> code or libraries therefrom.

> Do you have things in your makefile for processing the graphics for your 
> games? Does it reprocess them only when the source graphics change?

  I gave an example in my other post of an actual project I have been
working in where makefiles are useful for something else besides purely
compiling C++.

> >> So you invoke the compiler to create the makefile. You're splitting hairs 
> >> here.
> > 
> >   I'm not. gcc does not "compile" the program when I run it with the -M
> > parameter. It just runs a simplified C preprocessor to track all the
> > included files. The major difference is, obviously, speed.

> Sure. But you're invoking the compiler, and with the same sets of 
> command-line options and environment variable expansions as when you do the 
> full compilation.

> I'm sorry, but if anyone in the world asked me "how do I invoke the gcc 
> compiler", I'd say "use /usr/bin/gcc" or something. I would *not* add "but 
> be careful not to pass the -M flag, because that doesn't invoke the compiler."

  Actually I'm invoking the C preprocessor. I could call it directly, but
gcc does it automatically when you specify the -M parameter. The calls
"gcc -M test.cc" and "cpp -M test.cc" produce the exact same result.
I could simply use the latter if I wanted to be pedantic.
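
  The output of either is nothing more than a ready-made makefile rule,
along these lines (abridged; the exact list depends on the system):

    test.o: test.cc /usr/include/stdio.h /usr/include/features.h \
      /usr/include/bits/types.h ...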

> The point is that you're using C-specific code written by the same people 
> who wrote the compiler to figure out what the C-specific dependencies are. 
> The speed is irrelevant. What's relevant is that the C nested includes are 
> sufficiently complex that you need a tool to generate the input to the next 
> tool, where the "next tool" was supposed to stand alone on its own in 
> simpler times.  In more complex times, the stand-alone tool is mostly 
> useless. We have gotten to the point where any time "make" saves you enough 
> compilation time to be worthwhile, generating the makefiles is too complex 
> to be easily maintainable.  Make lives on not because it's good, but because 
> it's as ubiquitous as C. But environments that don't use C don't use make 
> either, because it's just not a very good tool.

  Instead, we have "project files" in IDEs which do... what? Well, exactly
the same thing as makefiles. In fact, "project files" are just makefiles,
using some other IDE-specific syntax. (They might have features that 'make'
doesn't support, but in principle they are the same thing.)

  Yes, you can use project files for things other than purely compiling
the program, such as running command-line tools to build data files and
such. Exactly what makefiles do. (And yes, I have done this at least with
Xcode project files.)

  The only difference is that most IDEs generate the dependency rules for
source code files automatically so you don't have to write the few magic
lines into the project file yourself. However, again, in principle it's
no different from a generic makefile that generates dependency rules
automatically (like the one I pasted in an earlier post).
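
  (For reference, one common shape for such a generic makefile -- not
necessarily the exact one I posted earlier -- is:

    SRCS = $(wildcard *.cc)
    OBJS = $(SRCS:.cc=.o)

    prog: $(OBJS)
            $(CXX) -o $@ $(OBJS)

    %.o: %.cc
            $(CXX) -MMD -MP -c -o $@ $<

    -include $(SRCS:.cc=.d)

  The -MMD flag makes the compiler write out a .d dependency file as a side
effect of each compile, and the -include line pulls those rules back in on
the next run.)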

> >> If I can't easily create the makefile by looking at the source code 
> >> I'm trying to compile, it's baroque. :-)
> > 
> >   I just can't see what's so wrong in using a tool to create the makefile
> > dependency rules. Do you really *want* to be writing makefiles by hand?

> I have no trouble at all writing makefile rules by hand when it's *not* C 
> that I'm compiling. That's exactly my point. C's dependencies are 
> sufficiently opaque that it's actually problematic just to figure out what 
> they are, to the point where one feels the need to actually write a number 
> of tools to figure it out, including modifying the compiler to help you out.

  You still can't get past the fact that in unix different tasks are
separated into different tools. Fine, you hate that. Move on.

> >   Generating the rules is pretty fast, and the need doesn't present itself
> > very often, and re-generating the rules is quite easy. I just can't see
> > the problem here.

> Regenerating the rules is only quite easy because someone added a flag to 
> GCC to make it regenerate the rules.

  Actually to the C preprocessor.

> Now, go regenerate the rules without 
> the -M flag.

  And why exactly would I want to do that? You might as well ask "compile
the program without using the compiler".

> I'm not saying that generating the dependencies, *today*, isn't 
> easy. I'm saying that the job Make was created to handle, namely recording 
> and evaluating dependencies, is obsolete.

  And how do you propose to handle dependencies on things that the compiler
does not handle, such as creating data files from some input files using
some tools (and perhaps recompile the program only if those data files
change)?

> Makefile dependency lists are, for 
> the most part, created by other tools

  Only program source code dependencies. If you have other types of
dependencies you still need to specify them by hand because no automated
tool can read your mind.

-- 
                                                          - Warp



From: Warp
Subject: Re: Unix shell
Date: 30 Jan 2011 03:47:08
Message: <4d45258c@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> > Darren New <dne### [at] sanrrcom> wrote:
> >> Besides, I thought you don't like C either.
> > 
> >   The issue was the #include scheme, which you seem to hate and I really
> > can't understand why.

> Experience! :-)

  Or lack thereof, it seems.

> I have explained it numerous times, with specific examples and general 
> descriptions, so I'm going to assume it's one of those things you just won't 
> ever understand. I think sometimes you fail to understand things that you 
> disagree with.

  It just looks to me that whenever there is a problem with something,
you just love to blame it on C/C++ if it's somehow involved.

> >   Firstly, that doesn't answer the original question. It's not like Andrew
> > was making a choice of programming language for his next project. He just
> > wanted an easy way to compile a bunch of programs. Your answer doesn't help
> > his problem in any way.

> It does.  "Make is not appropriate for your task." :-)

> Granted, the conversation has progressed since then.  But the basic comment 
> was "by the time you need make, make will be inadequate to the task." Then I 
> explained why, which is what we're arguing about now.

  And which tool, exactly, did you recommend instead? I remember you
mentioning shell scripts. Yeah, those can track dependencies just fine.

> >   In a unix system this tracking has been separated into its own set of
> > tools, rather than being part of the compiler. 

> Except for, you know, that pesky -M switch. That's what I'm talking about. 
> The tracking *is* built into the compiler. It would be a pain in the butt to 
> use make to track dependencies without it being built into the compiler.

  It's actually in the C preprocessor, which is a separate tool.

  You *could* have a separate dependency rule creation tool that does
nothing else. On the other hand, it was easier to add that to the C
preprocessor, so they did that.

> > *That* is essentially what
> > you are complaining about. In other words, the C/C++ compilers don't do
> > everything out-of-the-box, but instead you need additional tools.

> Nope. Other languages have separate tools too. It's just built into the 
> compiler for simple cases.

  So when other languages have dependency rule generation built in, it's ok,
but when the C compiler has it, it's somehow not ok.

> >   Well, if you don't like it, then don't. 

> And *that*, my friend, is how this discussion started.

  In other words, you didn't answer his question, which is my point.

> Andrew asked about 
> make, I said "You won't like it, so use something else", and you got upset 
> that I explained why he won't like it. :-)

  And what exactly is that "something else"? How was your answer helpful
in any way?

> >   You are, hence, barking up the wrong tree. It's not the programming
> > language that is the "problem". It's the separation of compiler and
> > dependency file system that irks you.

> No, it's the inability to correctly figure out dependencies based on the 
> source files that's really the cause of the troubles.

  So now you are changing the argument. First the problem was that you
need a separate tool to generate the dependency rules, but now the problem
is "the inability to *correctly* figure out dependencies", whatever that
means.

  Which tool incorrectly figures out the dependencies?

  Why does it feel like you are desperately trying to pile more and more
perceived problems onto the issue?

>  > You just love
> > to hate C, so you just can't miss an opportunity to bash it.

> I actually like C.

  You are certainly doing a very good job of hiding it.

> >   Obviously a compiler cannot know all this if you simply give it a list
> > of all the source code files. It needs slightly more elaborate rules.

> Yet, oddly enough, that's precisely what the XNA content pipeline does, and 
> you just give it a list of all the source files, including the graphics and 
> audio and stuff. Sure, you mention the name of the code used to compile it, 
> but there's enough information from that to track down where the executable 
> and such is. It also makes it possible for the thing that's compiling your 
> collection of frames into an animation to say "the resultant animation 
> depends on this list of frames as well as the text file specifying the list 
> of frames", which you can't do with a makefile.

  I don't get it. That's exactly what you do with make, which was exactly
my point (and exactly why I'm comparing it to the "project files" of other
IDEs).

  You seem to be desperately trying to find flaws that just aren't there.

  Maybe what irks you is that with makefiles you need to write a bit more
syntax to get the same effect.

> You can achieve it with a makefile, yes, but primarily not by having the 
> dependency stuff going on, but by building them as different projects. That 
> is, one tends to have a top-level makefile that says "first run make on the 
> content generator, then run make on the content, then run make on the 
> packaging directory."

  I suppose you could do that, but I don't think it's necessary.

-- 
                                                          - Warp



From: Warp
Subject: Re: Unix shell
Date: 30 Jan 2011 03:54:58
Message: <4d452761@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Patrick Elliott wrote:
> > Trying to work out what went wrong, and why, from the damn code, and the 
> > code that built the code, and the code that ran the code, that built the 
> > code... o.O

> Precisely. When the configuration file to create the build tools is based on 
> turing-complete macro processing, you're gonna just be screwed if there's 
> anything wrong.

  So let me get this straight: He was trying to compile a program with
Visual C++, the project did not have a Visual C++ project file, he was
trying to compile it in Windows which is not Unix and hence does not use
the same core tools, and consequently he had big problems in compiling
the program. Yet this is still somehow a problem with makefiles.

  It seems to me that anybody could make a post with any random problem
they had, and if a makefile was somehow involved, you would immediately
rush to agree with a "precisely!" answer regardless of what was the real
cause of the problem.

-- 
                                                          - Warp



From: Darren New
Subject: Re: Unix shell
Date: 30 Jan 2011 15:08:20
Message: <4d45c534@news.povray.org>
Warp wrote:
>> In Ada, you don't calculate the checksums of every source. You calculate the 
>> checksum of the file you're compiling when you compile it. In C#, your file 
>> is either associated with a project (i.e., with the equivalent of a 
>> makefile) or it's installed in the global assembly cache with a version 
>> number and a cryptographic checksum on it you can check against.
> 
>   So if you have thousands of files in your project, the compiler will
> calculate the checksums of every single one of them every time you want
> to compile? And it was you who complained how creating makefile rules for
> C files is inefficient... Right.

Read the second sentence again. You calculate the checksum of the file when 
you compile it. You then store that in the object code.

If you have library A and B, and program C that uses them, then you compile 
the headers for A and B, generating two checksums in the object code. Then 
you compile the body of library A, which puts the checksum for the header 
for A into the body of A, along with the checksum for the body of A. Same 
with B. Then you compile C against the headers for A and B, and generate a 
checksum for C into C, along with storing the checksums for A and B's header 
object code.

When you link, the linker checks that every reference to A's object code has 
the same checksum, that every reference to B's object code has the same 
checksum, etc.

If you change A's header file and recompile it, you'll have a different 
checksum in A's header file object code, and you'll have to recompile both 
A's body and C before you can relink.  You can, however, recompile A's 
implementation without having to recompile C.
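
A crude way to picture the mechanism with everyday shell tools (this only 
mimics the idea -- the real toolchain stores the sums inside the object 
code; all file names invented):

    md5sum a.ads | cut -d' ' -f1 > a_body.uses   # written when A's body is compiled
    md5sum a.ads | cut -d' ' -f1 > c.uses        # written when C is compiled
    # at "link" time, every recorded sum of the header must agree:
    cmp -s a_body.uses c.uses || echo "stale compile: recompile against the new header"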

>   Yeah, and if your hard drive dies, you also get a failure. Also a big
> difference. I still don't get your point.

That you can end up running code that isn't the code you have the source 
for, without knowing it.

>   You talk as if accidentally touching an object file is a relatively
> common happenstance. 

So you've never updated a library that you'd already compiled against? What 
do you think "DLL Hell" is?

If I compile, today, against September's libraries, then I install 
November's libraries, stuff doesn't recompile.

 > in the fifteen or so years I have been using
> makefiles, guess how many times that has happened to me.

It *can* work OK, if everything is done carefully and you're not using 
pre-compiled stuff for half of what you're doing. C has always been pretty 
good as long as you have all the source code and everyone working on it is 
careful.

>>> There are like a million scenarios where you accidentally do something to
>>> your files which will cause a malfunction (regardless of your IDE).
> 
>> Sure. You're apparently trying to miss my point here, which is that the idea 
>> that basing dependency information and content-change information solely on 
>> file timestamps is iffy.
> 
>   The situations where it causes problems are so extremely rare that it just
> isn't worth the price of the compiler calculating checksums of every single
> file in the project every time you want to do a quick compile. 

Nobody does that, tho. Nobody *needs* to, because type information and 
declarations are actually stored in object files.  Neither Ada nor C# 
calculate any checksums except when they compile the code they're 
calculating the checksums for. Given that calculating the checksum is 
undoubtedly faster than even -M on gcc, and you only do it when you actually 
recompile something, I'm not sure why you think it would be a bad idea.

You can't do it in C because there's no place to store the checksum from a 
.h file, because a .h file doesn't create code. (Generally speaking, of course.)

 > Imagine that
> every time you want to compile your project, you had to wait a minute for
> the compiler to check all the checksums, when with the current scheme it
> can do it in a few seconds.

Yeah, that would suck. Good thing nobody works it that way.

Of course, the projects I've worked on spent at least that much time 
recursing into makefiles just to check, so that's no biggy.

>   Nevertheless, you are still blaming the programming language for a
> defect you see in the compiling tools. All this has nothing to do with C.

See above.  C doesn't generate object files for .h files, so *if* you wanted 
to use checksums, then yes, you *would* have to do it this sucky way. But 
the languages that don't use #include don't have to do it this sucky way. 
You have pointed out exactly what my point in this part of the discussion 
is, and exactly why C (and other source-include macro languages) sucks in 
this particular respect.

>>>   Of course the makefile assumes that you have specified all the necessary
>>> dependencies. It cannot read your mind.
> 
>> I have never seen a makefile that lists as a dependency of my code all the 
>> things that stdio.h includes.
> 
>   And that makes it impossible to create such a makefile (especially if you
> are using a dependency-generation tool)?

No. I'm just saying that in practice, it doesn't happen, so in practice, 
there are still times when you have to either do a make-clean or risk having 
broken code.

>   Anyways, exactly why would you want to recompile your program if stdio.h
> changes? You see, stdio.h is standardized, and if it was changed in such
> way that programs would need to be recompiled for them to work, it would
> mean that the C standard has been changed to be backwards-incompatible and
> it would break basically all programs in existence.

Nope. All I need to do is redefine O_RDWR to have a different value or 
something like that. (OK, not stdio.h, because that's very modularized.)

Or, if you want, errno.h or any of the other less-well-thought-out include 
files.

Of course, if someone comes along and does a #define in some include file 
that conflicts with something in stdio.h, then you're screwed too. Ever 
spend three days trying to figure out why putting

    extern int write(int,char*,int);

at the top of your source code so you could print some debugging info throws 
a compiler error saying "too many close braces"?

>>>   If you are using a tool to create the dependency lists and that tool
>>> fails for some reason, blame the tool, not make.
> 
>> If I'm using a tool to create makefiles, then sure, but that's just saying 
>> makefiles are so useless that I have to automate their creation. Why would I 
>> use a tool to create makefiles instead of using a tool that just does the job properly?
> 
>   You could make the same argument for any program that doesn't do everything,
> and needs to work in conjunction with another program in order to perform
> some task. Your argument is moot.

No. I'm saying that make only does one thing, and it does it so poorly that 
almost nobody actually uses it manually. I'm not against independent tools.

>   As for makefiles, not all things that can be done with makefiles can be
> automated.

No. And those things that can be done with makefiles like that shouldn't be 
using makefiles to start with, because it's probably the wrong tool.

>> There's really only one task going on here - dependency checking. That's the 
>> one and only thing Make does.
> 
>   Make can't know if eg. a code-generating program depends on some data
> files unless you tell it.

Yep! And there's the problem.

Right now, for example, I have an XML file I use to specify the levels for 
my game. In those XML files I include the lists of animations going on in 
different places, the textures, etc etc etc.

Know what? Make doesn't handle that. Know what else? The tool I'm using 
does, because as I compile that XML file into the internal format, I'm 
telling the build system "By the way, the results of compiling this XML also 
depends on that texture, that sound effect, and those two animations."

If I were to try to do this with Make, I'd either be repeating all the 
information manually, or I'd have to write a program to parse that XML file 
and pull out the appropriate dependencies, except output them in a different 
way.
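
That is, I'd end up writing the -M equivalent for my own tool. A 
hypothetical sketch:

    level1.bin: level1.xml
            xmlcompile level1.xml level1.bin
            xmldeps level1.xml > level1.d   # emits "level1.bin: tex1.png walk.anim ..."
    -include level1.d

where xmlcompile and xmldeps are both programs I'd have to write myself.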

>   I gave an example in my other post of an actual project I have been
> working in where makefiles are useful for something else besides purely
> compiling C++.

Yes, I think we crossed in the mail.

>> I'm sorry, but if anyone in the world asked me "how do I invoke the gcc 
>> compiler", I'd say "use /usr/bin/gcc" or something. I would *not* add "but 
>> be careful not to pass the -M flag, because that doesn't invoke the compiler."
> 
>   Actually I'm invoking the C preprocessor. I could call it directly, but
> gcc does it automatically when you specify the -M parameter. The calls
> "gcc -M test.cc" and "cpp -M test.cc" produce the exact same result.
> I could simply use the latter if I wanted to be pedantic.

I'll just let this one drop, if you're actually going to argue that "C 
preprocessor" isn't a part of the compiler.

>   Instead, we have "project files" in IDEs which do... what? Well, exactly
> the same thing as makefiles. 

No. They solve the same problem that makefiles are supposed to solve. They 
don't do the same thing as makefiles. And that's the distinction I'm making.

It's like saying "C++ does exactly the same thing as C: It takes a fairly 
machine-oriented source file and turns it into machine code." But you would 
give me grief for saying that without pointing out that C++ does it much 
*better* than C, and solves some of the problems that C simply can't handle, 
right?

 > In fact, "project files" are just makefiles,

No, they're not.

> using some other IDE-specific syntax. (They might have features that 'make'
> doesn't support, but in principle they are the same thing.)

And in principle, C++ is the same thing as C, it just has some features that 
C doesn't support, right?

>   The only difference is that most IDEs generate the dependency rules for
> source code files automatically so you don't have to write the few magic
> lines into the project file yourself.

No. Most of them have a different way of both specifying and evaluating 
dependency rules.

> However, again, in principle it's
> no different from a generic makefile that generates dependency rules
> automatically (like the one I pasted in an earlier post).

No, that's exactly what I'm saying. Did you even read what I wrote is in the 
"makefile" of a C# project? In what sense is that "the same thing as a 
makefile", other than it solves the problem of only recompiling what needs 
to be recompiled?

In C#'s project files, there are names of neither source code files nor 
object code files, other than "here's the list of files to compile into this 
project". They're just not in there. You don't have to specify other files 
you depend on. You don't have a list saying "this file depends on that file 
and the other file".

>   You still can't get past the fact that in unix different tasks are
> separated into different tools. Fine, you hate that. Move on.

You're completely ignoring what I'm saying. Fine, move on.

Actually, now that I think about it, yes, this actually is part of the 
problem. The problem is that the thing that actually *depends* on those 
dependencies is indeed independent of the build system, and that means you 
have to put the same dependency information in two different places.

Thinking on it (see the end of this mess), this is exactly how the Visual 
Studio build system gets around the problem, regardless of the complexity of 
the dependency chain. The first time you compile the program, it saves what 
other files it depended on. So, basically, the first "make" creates the 
makefile for you. It also, incidentally, thus knows what the output files 
are that were created and can safely delete them when you clean without 
accidentally clobbering something else.

I prefer the DRY principle, where my dependencies are only held in one 
place, especially when that place is checked automatically. Your suggesting 
that -M is sufficient is simply making the thing ass-backwards, building the 
dependency list for the compiler instead of vice versa. You're basically 
running the code thru the compiler to figure out what the dependencies are, 
then using those dependencies to run the code through the compiler *again*. 
Other build systems eliminate that first step, which is one less set of 
duplicated data you have to manually keep up to date right there.

>> Now, go regenerate the rules without 
>> the -M flag.
> 
>   And why exactly would I want to do that? You might as well ask "compile
> the program without using the compiler".

And you'd complain if one of the steps of compiling the program was to tell 
the compiler what machine code to generate for different parts of your code, 
right?

>> I'm not saying that generating the dependencies, *today*, isn't 
>> easy. I'm saying that the job Make was created to handle, namely recording 
>> and evaluating dependencies, is obsolete.
> 
>   And how do you propose to handle dependencies on things that the compiler
> does not handle, such as creating data files from some input files using
> some tools (and perhaps recompile the program only if those data files
> change)?

Yet, oddly enough, that's what I've been doing all week. Funny, that. 
Certainly I say what compiler handles each kind of source code: I can't run 
a PNG file through gcc. But that's just setting a default rule for a 
particular piece of code.

>> Makefile dependency lists are, for 
>> the most part, created by other tools
> 
>   Only program source code dependencies. If you have other types of
> dependencies you still need to specify them by hand because no automated
> tool can read your mind.

No, this is just factually incorrect.  I have no need to specify by hand 
that my animation as specified in my XML depends on those six files. (Well, 
obviously, I'm specifying it in the XML just like I'd be writing #include 
into my C sources. I'm not specifying it anywhere outside my own data files, 
tho.)

Note also that this means when I change compiler options (like whether I 
have debugging or optimization turned on, etc) then I can recompile the 
code, because the compiler has the opportunity to store a dependency on its 
own command-line arguments, or at least the command-line arguments that 
might make a difference in whether the object code will be different.

-- 
Darren New, San Diego CA, USA (PST)
  "How did he die?"   "He got shot in the hand."
     "That was fatal?"
          "He was holding a live grenade at the time."



