> Day #1: Compiling, linking and running.
> Day #2: Program structure. Functions.
> Day #3: Variables and constants. Assignments. Printing results.
> Day #4: Statements, blocks and expressions. Branching.
> Day #5: Writing functions.
Well, I wasn't expecting great things. And I'm not /finding/ great
things. :-P
Day 1 contains almost no code. Which is fair enough. It begins by
explaining how all complex programs inherently have to be written using
OO techniques or they will be too difficult. It continues by explaining
how somebody took C, the best programming language in the world, and
added OO features to it to create C++, the new best programming language
in the world.
On one point it is clear: Should I learn C before learning C++? The
answer comes back as "no". Which seems quite correct to me.
On the subject of Java, it made me laugh. (Remember when this is
published: 1999.) Apparently "all the C++ programmers who left for Java
are now coming back to C++". That made me chuckle. As if C++ and Java
target the same problem domain or something. But the real giggler was this:
"In any case, C++ and Java are so similar that learning one involves
learning 90% of the other."
If that doesn't make you laugh out loud, I don't know what will. ;-)
Then we are told that "more than any other programming language, C++
demands that you /design/ a program before you build it". Followed by
some mumbo-jumbo about how programmer time is more expensive than
machine time, and how design mistakes are expensive, etc.
Half way through, we get to type in Hello World and run it. As you'd expect.
Then there's a cryptic reference to the fact that iostream and
iostream.h aren't the same, but the differences are "subtle, exotic, and
beyond the scope of an introductory primer". Yes, the 879-page monolith
of a book is only an "introductory primer".
(Incidentally, the table of contents runs to 21 pages. I always feel
that if the table of contents /itself/ merits another table of contents,
you're doing it wrong.)
Anyway, the book decides that it's going to stick with iostream.h,
possibly for compatibility reasons - it isn't really elaborated on much.
The chapter closes out with "You should feel confident that learning C++
is the right decision for anyone interested in programming in the next
decade."
Because, let's face it, C++ is the only programming language in use,
isn't it?
(Some comment about "interested in programming" versus "interested in
getting paid to program" could perhaps be made here...)
Day 2 continues to supply the laughs, opening with such gems as
"Every time you run the compiler, the preprocessor runs. The
preprocessor is discussed on day 21."
Right. So you've just brought up the fact that this thing called a
preprocessor exists, and in the next breath told me you're not going to
actually explain that at all until /the final chapter of the book/?
The entire book is riddled with stuff like this. Basically sentences
saying "you won't understand this for another two weeks". Yeah, great,
thanks for that. Mind you, if the order wasn't quite so scrambled up, it
would be easier. And I'm fairly sure you don't need an entire "day" just
to comprehend Hello World. (Does somebody have pages to fill?)
Anyway, apparently main() must always return int. It is against the ANSI
standard for it to return void. (No mention of it having parameters.
Indeed, even the return value is described as an "obscure feature which
we won't make use of".)
In keeping with the above, we are told that the notation "\n" won't be
explained until day 17 (although actually it's explained on day 3, in
complete contradiction to this statement), that cout is an object and we
won't find out what the heck that means until day 6, and cout is also a
stream and that isn't explained until day 16. (Why are you bothering to
tell me about all the things you aren't going to tell me about? Can't
you /summarise/ the salient points quickly or something?)
Then there's some mumbo-jumbo about how you can use cout "to do
addition", and how when you do cout << 8+5, then "8+5 is passed to
cout, but 13 is what is actually printed". Nice, clear explanation,
that. :-P
There's a little bit about comments and how they work. The author
suggests that "comments should not explain /what/ is happening, but
/why/ it is happening".
And then the book insists that we have to learn about functions,
"because they're used constantly". So we learn how to define new
functions, how to call them, how to pass arguments into them and return
values out of them. Notice that we haven't learned about what variables
are or how to use them yet. We also haven't learned about types yet. But
the book insists that functions come first. It then goes on to not
define any function except main() for the next several chapters. :-P
In an example demonstrating all this, cin is casually dropped in but not
mentioned anywhere. Presumably we're supposed to /guess/ this detail.
"The difficulty of programming is that so much of what you have to learn
depends on everything else." Pfft. I think you just suck at explaining
it. :-P
Day 3: ...and /now/ we get to learn what variables are, how to assign to
them, and what types there are and how to use them. Because, hey, we
couldn't possibly have done that /before/ learning about passing
arguments to functions, no?
There's some garbled mumbo about how "memory" and "RAM" are different.
It doesn't really make sense.
And then it starts talking about types. Apparently C++ adds a new "bool"
type, which is supposed to be 1 byte. Apparently "char" doesn't mean
character at all, it means a 1-byte integer (and by default, a signed one).
Now, this I did not know, but: There is a long int, and a short int. And
then there's just int. I had always thought these were three different
sizes of integer. But it appears that actually, long int is one size,
short int is another size, and plain int refers to whichever one the
compiler writer chose on a whim. So there's only actually two integer
sizes, and plain int means "I don't care".
The book helpfully points out that on a "Pentium" system, int = long
int, whereas on older PCs int = short int. (Remember when this was
published? 1999?)
The book suggests "never use int, always use short int or long int". If
the table is to be believed, long int is 32-bits. Christ knows what you
do if you want more bits than that...
But hey, at least double and float work in a sane way, right?
The book casually mentions something which /seems/ to be claiming that
variables are not initialised to anything in particular unless you
specifically request this. That's interesting; I didn't know that.
We are shown how to do a typedef. And then we get a nice little program
to print out (the printable parts of) the ASCII character set. (Notice
that we haven't covered for-loops yet, and the program consists of a
single for-loop.)
On page 50, I get a chuckle when next to an example listing, they feel
the need to point out that "*" denotes multiplication. Even though we've
already used it to perform multiplication in a dozen examples so far.
Hmm, might wanna put that earlier in the book if you feel it's
necessary. :-P
It tells you how to use enum. Looks simple enough.
Apparently assigning a double to an int is only a /warning/, not a
compile-time error. The book helpfully fails to specify how the
conversion is actually done. (From the examples, it appears to round
towards negative infinity... but it would be kinda useful to have that
/stated/.)
Alright, Day 4! And the first thing we learn is this: Every C++
statement is also an expression.
This is obviously an extremely sick and twisted idea, and whoever
thought of it was either mentally disturbed or merely failed to
comprehend what a horrifying thing he just did.
On one hand, it means that
x = y = z;
is a perfectly valid statement in C++. On the other hand, it also means that
while (x[i] = y[i--]) ;
is perfectly valid. You sick, sick people.
Apparently every L-value can be used as an R-value, but not vice versa.
Which is fair enough, I guess.
In most programming languages, performing division promotes integers to
reals. But not in C++, apparently. (Again, it is unspecified exactly how
integer division works. Whether the author actually doesn't /know/ how
it works or merely thought it unimportant is unclear.) At this point, we
are told that there's two ways to convert something to a double:
double x = (double)y;
double x = static_cast<double>(y);
Apparently the former is bad, and the latter is good. No indication as
to why, it just is.
More giggles: There's a list of the self-modification operators. Check
it out:
+= Self-addition.
= Self-subtraction.
*= Self-multiplication.
/= Self-division.
Notice something missing there?
Then we get the ++ and -- operators explained to us. (Even though we've
already seen them used several times.) This being C++, some insane wacko
thought that having c++ and ++c would be a good idea, and wouldn't be
confusing in any way.
Next we learn that false=0, and true /= 0. (Christ, I'm /never/ going to
remember that!) Apparently there's a new bool type, but details are thin
as to what the exact significance of this is.
Then we learn about if/then, if/then/else, and finally ?: is
demonstrated, without once mentioning how it's different from
if/then/else. (I.e., it only works on expressions, not statements. Oh,
but wait! Silly me, statements /are/ expressions...)
Day 5 begins by explaining, all over again, what functions are, why
that's useful, and how you use them. We learn how to pass values in, and
return results out. So... /why/ did we need this in day 2, in half as
much detail and then never used again?
Apparently every function must be declared before it can be used. (This
constantly trips me up...) It seems a function prototype doesn't need to
include argument names, just their types. (If only Java was like this!)
Bizarrely, if you don't specify a return type, it defaults to int. Not,
say, void. :-o
We've got a source code listing where one line of text is randomly
indented by a different amount than the rest of the surrounding code.
Clearly this book has been through some tough QA. I'm only 92 pages into
it - that's about 10% of the way through.
There's a side-box that shows you the syntax for a function prototype
[even though the text already explained this]. But, for reasons unknown,
the diagram /actually/ shows a normal function definition, but with a
stray semicolon at the end of the prototype... In short, somebody seems
to have got their stuff mixed up.
It appears that in C++ it is legal for a function to overwrite its
arguments. In fact, there's an example of implementing a swap()
function, to demonstrate that it completely fails to swap its arguments
in the caller. Having done an extensive demonstration to call our
attention to this fact, we learn that we won't be told how to write
swap() correctly until day 8. Thanks for that. Couldn't you have brought
this example up, say, on day 8?
It seems that C++ allows variables to be declared anywhere inside a
block, and that variable is then local to the inner-most block. (Not
that the text says it this clearly.) And it appears that nested
functions are not allowed. Which is fine.
Then we learn about global variables. Apparently if you write a variable
outside of any block, it's global. No word on exactly when it's
initialised, or precisely what "global" actually means. For example, I'm
/guessing/ the variable is only in-scope /below/ the line where it's
defined. That's how functions work, after all.
The book then warns about the dangers of using global variables. To
quote the book itself:
"In C++, global variables are legal, but they are almost never used.
C++ grew out of C, and in C global variables are a dangerous but
necessary tool."
"Globals are dangerous because they are shared data, and one function
can change a global variable in a way that is invisible to another
function. This can and does create bugs that are very difficult to find."
"On Day 14 you'll see a powerful alternative to global variables that
C++ offers, but that is unavailable in C."
So now you understand why global variables are bad, right?
No, I didn't think so.
As somebody who first learned to program in BASIC, where /all/ variables
are global variables, I know all about how unsafe global variables are.
But this book makes no real attempt to explain what the problem is,
other than that "a function could change a global variable in a way that
is invisible to another function". WTF is /that/ supposed to even /mean/?!
Seriously. It's /slightly important/ to understand why using a global
variable is a bad idea. This is basic bread-and-butter programming
knowledge. Way to completely fail to explain it. :-P
Next, we learn that function arguments can have default values. But get
this:
"Any or all of the function's parameters can be assigned default
values. The one restriction is this: If any of the parameters does not
have a default value, no previous parameter may have a default value."
...um, what?
So you're saying that you can freely choose which ones have defaults,
except that if a parameter doesn't have a default, nothing to the left
can either?
So, aren't you basically saying all of the non-default parameters have
to come before all of the default parameters? That doesn't sound like
"any or all of the function's parameters can be assigned default
values". That sounds like the parameters are split into two chunks. Why
not just /say/ that? :-P
We are now on page 108, and that's where I stopped reading. So far none
of the burning questions I want answered have been addressed. But the
next chapter looks promising. I just hope it doesn't try to explain that
metaphor about an object being like a spark plug again... >_<
On 16/04/2012 at 22:11, Orchid Win7 v1 wrote:
>> Day #1: Compiling, linking and running.
>> Day #2: Program structure. Functions.
>> Day #3: Variables and constants. Assignments. Printing results.
>> Day #4: Statements, blocks and expressions. Branching.
>> Day #5: Writing functions.
>
> Well, I wasn't expecting great things. And I'm not /finding/ great
> things. :-P
>
>
>
> Day 1 contains almost no code. Which is fair enough. It begins by
> explaining how all complex programs inherently have to be written using
> OO techniques or they will be too difficult. It continues by explaining
> how somebody took C, the best programming language in the world, and
> added OO features to it to create C++, the new best programming language
> in the world.
>
> On one point it is clear: Should I learn C before learning C++? The
> answer comes back as "no". Which seems quite correct to me.
Well, there is nothing really to learn in C, once you have learned
assembly programming of course.
>
> On the subject of Java, it made me laugh.
> "In any case, C++ and Java are so similar that learning one involves
> learning 90% of the other."
Right & wrong. Classes, references, and direct inheritance are similar;
complex inheritance, exception handling, and pointers are not.
The traditional Java vs. C++ war... not interesting.
> Then we are told that "more than any other programming language, C++
> demands that you /design/ a program before you build it". Followed by
> some mumbo-jumbo about how programmer time is more expensive than
> machine time, and how design mistakes are expensive, etc.
Well, they should look at some Ada or Smalltalk... but it's right. Think
before you start coding.
> Then there's a cryptic reference to the fact that iostream and
> iostream.h aren't the same, but the differences are "subtle, exotic, and
> beyond the scope of an introductory primer". Yes, the 879-page monolith
> of a book is only an "introductory primer".
I.e.: the author has no clue for you.
>
> Day 2 continues to supply the laughs, opening with such gems as
>
> Anyway, apparently main() must always return int. It is against the ANSI
> standard for it to return void.
The problem is that you'd have to explain Unix: the return value of
main() becomes the exit status of the program... and utilities like
"make" use that result to check for success or failure.
> Then there's some mumbo-jumbo about how you can use cout "to do
> addition", and how when you do cout << 8+5, then "8+5 is passed to
> cout, but 13 is what is actually printed". Nice, clear explanation,
> that. :-P
Oh great, it's not cout that does the addition anyway! So now you have
a false explanation.
>
> There's a little bit about comments and how they work. The author
> suggests that "comments should not explain /what/ is happening, but
> /why/ it is happening".
At least a sensible sentence.
What is happening, at low level, is in the code.
Why it is happening is of higher level, and should be put in the comment.
> In an example demonstrating all this, cin is casually dropped in but not
> mentioned anywhere. Presumably we're supposed to /guess/ this detail.
Guess what: it's an input stream (whereas cout is an output stream).
> Day 3:
> And then it starts talking about types. Apparently C++ adds a new "bool"
> type, which is supposed to be 1 byte. Apparently "char" doesn't mean
> character at all, it means a 1-byte integer (and by default, a signed one).
The definitions of the types are inherited from C (and they f****d it
up large).
There was no bool in C.
A char is enough bits to store a native glyph, and you can assume that
the number of bits is at least 8.
It can be 9. It can be 16. It can even be 32. (The 9-bit case only
shows up on architectures with 9 bits per byte on the hardware bus.)
One warning: sizeof() returns sizes in units of char, not octets.
Second warning: the signedness of char is up to the compiler.
>
> Now, this I did not know, but: There is a long int, and a short int. And
> then there's just int. I had always thought these were three different
> sizes of integer. But it appears that actually, long int is one size,
> short int is another size, and plain int refers to whichever one the
> compiler writer chose on a whim. So there's only actually two integer
> sizes, and plain int means "I don't care".
In 1999, there were only 3 integer types to worry about. It was the
32-bit era.
On today's 64-bit systems, there is also a new one, "long long int"!
As inherited from C, it's total garbage; you can only assume:
char <= short <= int <= long <= long long
unsigned char can store at least up to 255
unsigned short can store at least up to 65535
unsigned long can store at least up to 2^32-1
unsigned long long can store at least up to 2^64-1
Notice that it is possible that "char = long"
>
> The book suggests "never use int, always use short int or long int". If
> the table is to be believed, long int is 32-bits. Christ knows what you
> do if you want more bits than that...
>
> But hey, at least double and float work in a sane way, right?
Do not rely on that. I'm not sure C mandates that double & float
obey the IEEE-754 rules, or even fixes their size, their
representation, or their mantissa/exponent bit counts.
Usually, float is 32 bits and double is 64 bits. But some compilers
are known to perform the computations in 80-bit registers...
>
> The book casually mentions something which /seems/ to be claiming that
> variables are not initialised to anything in particular unless you
> specifically request this. That's interesting; I didn't know that.
Yep, variables are allocated, but whatever was in the memory at that
time is their value. You'd better set one.
> Apparently assigning a double to an int is only a /warning/, not a
> compile-time error. The book helpfully fails to specify how the
> conversion is actually done. (From the examples, it appears to round
> towards negative infinity... but it would be kinda useful to have that
> /stated/.)
The problem is they have not yet introduced promotion/implicit casts.
(The conversion is in fact specified: the fractional part is simply
discarded, i.e. truncation toward zero.)
> Alright, Day 4! And the first thing we learn is this: Every C++
> statement is also an expression.
>
> This is obviously an extremely sick and twisted idea, and whoever
> thought of it was either mentally disturbed or merely failed to
> comprehend what a horrifying thing he just did.
>
> On one hand, it means that
>
> x = y = z;
>
> is a perfectly valid statement in C++. On the other hand, it also means
> that
>
> while (x[i] = y[i--]) ;
>
> is perfectly valid. You sick, sick people.
It came from C. Put the blame on C.
>
> Apparently every L-value can be used as an R-value, but not vice versa.
> Which is fair enough, I guess.
>
> In most programming languages, performing division promotes integers to
> reals. But not in C++, apparently. (Again, it is unspecified exactly how
> integer division works. Whether the author actually doesn't /know/ how
> it works or merely thought it unimportant is unclear.)
Operations on integers are performed with integers.
Integer division is performed as usual: 14/5 is 2.
> At this point, we
> are told that there's two ways to convert something to a double:
>
> double x = (double)y;
>
> double x = static_cast<double>(y);
>
> Apparently the former is bad, and the latter is good. No indication as
> to why, it just is.
Casts are bad. A cast with () is bad.
A cast with static_cast<> is bad too, but at least you can see it.
(It's bad by design: if you need a double, pass a double.)
I'm in favour of calling the constructor instead... but you have not
seen that yet:
double x(y);
> This being C++, some insane wacko
> thought that having c++ and ++c would be a good idea, and wouldn't be
> confusing in any way.
They are just explaining C so far.
>
> Next we learn that false=0, and true /= 0. (Christ, I'm /never/ going to
> remember that!) Apparently there's a new bool type, but details are thin
> as to what the exact significance of this is.
Once again, C did not have bool; C++ does.
>
> Then we learn about if/then, if/then/else, and finally ?: is
> demonstrated, without once mentioning how it's different from
> if/then/else. (I.e., it only works on expressions, not statements. Oh,
> but wait! Silly me, statements /are/ expressions...)
Not really. An expression becomes a statement when you put a ; at the end.
And now for a mad trick: if you follow an expression with a , (comma),
its value is replaced by the value of the expression after the comma...
You only need one statement per block, so you can write dirty mad
functions like:
int foo()
{
return bar(),zoo()?kel():azbe(),foobar();
}
And it works in assignments & tests too...
>
>
>
> Day 5 begins by explaining, all over again, what functions are, why
> that's useful, and how you use them. We learn how to pass values in, and
> return results out. So... /why/ did we need this in day 2, in half as
> much detail and then never used again?
>
> Apparently every function must be declared before it can be used. (This
> constantly trips me up...) It seems a function prototype doesn't need to
> include argument names, just their types. (If only Java was like this!)
> Bizarrely, if you don't specify a return type, it defaults to int. Not,
> say, void. :-o
Yep. C++ is maniacal about this: you must provide the prototype before
using the function.
You can include the parameter names in the prototype if you like.
> There's a side-box that shows you the syntax for a function prototype
> [even though the text already explained this]. But, for reasons unknown,
> the diagram /actually/ shows a normal function definition, but with a
> stray semicolon at the end of the prototype... In short, somebody seems
> to have got their stuff mixed up.
Nope.
int main(int,char**); // is enough for the compiler
int main(int argc, char** argv); // is more similar to the definition
Both are ok.
>
> It appears that in C++ it is legal for a function to overwrite its
> arguments. In fact, there's an example of implementing a swap()
> function, to demonstrate that it completely fails to swap its arguments
> in the caller. Having done an extensive demonstration to call our
> attention to this fact, we learn that we won't be told how to write
> swap() correctly until day 8. Thanks for that. Couldn't you have brought
> this example up, say, on day 8?
>
> It seems that C++ allows variables to be declared anywhere inside a
> block, and that variable is then local to the inner-most block. (Not
> that the text says it this clearly.) And it appears that nested
> functions are not allowed. Which is fine.
Variables can be declared anywhere, including inside the parentheses
of a for().
They are usable from their declaration down to the end of the
innermost block that contains them.
(In the case of the for(), they can be used inside the () and the {}
block of the for, but not further.)
>
> Then we learn about global variables. Apparently if you write a variable
> outside of any block, it's global. No word on exactly when it's
> initialised, or precisely what "global" actually means. For example, I'm
> /guessing/ the variable is only in-scope /below/ the line where it's
> defined. That's how functions work, after all.
Global variables are either scoped to the file they appear in (with the
static prefix), or visible to the whole program.
(You haven't seen namespaces yet.)
Their position is mostly irrelevant, but it works like function
prototypes: you need a declaration before using the variable. The
definition of a global variable also serves as its declaration, but the
whole program may contain only one definition.
Declaration (at the top of the file, or in a header file):
extern int foo;
Definition (anywhere, usually at the top so no separate declaration is
needed):
int foo = 5;
> So now you understand why global variables are bad, right?
>
> No, I didn't think so.
> Seriously. It's /slightly important/ to understand why using a global
> variable is a bad idea. This is basic bread-and-butter programming
> knowledge. Way to completely fail to explain it. :-P
>
> Next, we learn that function arguments can have default values. But get
> this:
>
> "Any or all of the function's parameters can be assigned default
> values. The one restriction is this: If any of the parameters does not
> have a default value, no previous parameter may have a default value."
>
> ...um, what?
In foo(a,b,c,d,e), if you do not provide a default value for c, you
cannot provide one for a or b either.
At some point in the argument list you must start providing default
values, for every argument to the right of that point.
>
> So you're saying that you can freely choose which ones have defaults,
> except that if a parameter doesn't have a default, nothing to the left
> can either?
Yes.
>
> So, aren't you basically saying all of the non-default parameters have
> to come before all of the default parameters? That doesn't sound like
> "any or all of the function's parameters can be assigned default
> values". That sounds like the parameters are split into two chunks. Why
> not just /say/ that? :-P
Because, in usage, you supply the leading arguments and get the default
values for the rest.
With a prototype of foo(a, b, c=, d=, e=), the call foo(x, y, z)
supplies a, b and c, while d and e take their defaults. Arguments bind
strictly left to right; there is no way to skip c and supply d.
(You should understand "foo(a,b,c,d,e)" as "foo(type_a a, type_b b,
type_c c, type_d d, type_e e)", and so on for the defaults.)
>> On one point it is clear: Should I learn C before learning C++? The
>> answer comes back as "no". Which seems quite correct to me.
>
> Well, there is nothing really to learn in C, once you have learned
> assembly programming of course.
Except that assembly generally follows a simple and consistent syntax,
whereas C is written in C. :-P
>> On the subject of Java, it made me laugh.
>
>> "In any case, C++ and Java are so similar that learning one involves
>> learning 90% of the other."
>
> Right & wrong. Class, reference, direct inheritance are similar. complex
> inheritance, exception handling, pointer are not.
>
> Traditional Java C++ war... not interesting.
War nothing. The two languages bear a superficial resemblance to each
other, but beyond that they're /radically different/. To the degree that
learning any OO language helps you learn any other OO language, C++ and
Java are similar. Beyond that, they're two totally different languages.
Learning Java teaches you maybe 10% of C++, not 90%. And vice versa.
>> Then we are told that "more than any other programming language, C++
>> demands that you /design/ a program before you build it".
>
> Well, they should look at some Ada or smalltalk... but it's right. Think
> before you start coding.
Try Haskell. A typical coding session involves coming up with a
breakthrough mathematical insight, followed by some ingenious
programming. Certainly you can't just hack on it until it works.
> I.e: the author has no clue for you.
Yeah, that's pretty much the conclusion I reached too.
It says in the preface that the author is the president of some company
that supplies C++ training. It doesn't say anywhere that he's actually a
C++ programmer. :-P
>> Anyway, apparently main() must always return int. It is against the ANSI
>> standard for it to return void.
>
> The problem is to explain Unix: the return of main() is used as the
> return of the called program... and some utility like "make" use that
> result to check success or failure.
Not just make; a whole host of programs make use of a program's return
code. (And not just on Unix either; Windows does the same.) But this is
apparently an "obscure feature" which you don't need to know about. :-P
>> Then there's some mumbo-jumbo about how you can use cout "to do
>> addition", and how when you do cout << 8+5, then "8+5 is passed to
>> cout, but 13 is what is actually printed". Nice, clear explanation,
>> that. :-P
>
> Oh great, it's not cout that does the addition any way! So you now have
> a false explanation.
With this level of conceptual muddling about something of fundamental
importance, you can see why my opinion of the book is so low. :-P
>> "comments should not explain /what/ is happening, but
>> /why/ it is happening".
>
> At least a sensible sentence.
Yeah. The purpose of comments is to explain stuff which /isn't/ obvious
from reading the source code itself. (I'm also of the opinion that less
is more. I dislike code that's so littered with comments that you can't
see the code properly. Then again, maybe that's because I don't have a
syntax highlighter...)
>> In an example demonstrating all this, cin is casually dropped in but not
>> mentioned anywhere. Presumably we're supposed to /guess/ this detail.
>
> Guess what: it's an input stream (whereas cout is output stream)
Oh, *I* know that. But no thanks to the book that's supposed to be
teaching me. :-P
> The definition of type are inherited from C (and they f****d it large).
> There was no bool in C.
> A char is enough bits to store a native glyph. and you can assume that
> the number of bit is at least 8.
> It can be 9. It can be 16. It can even be 32. (the 9 are met only on
> architecture with 9 bits per bytes in the hardware bus)
That's pretty messed up.
> One warning: sizeof() will return the size in char, not bytes.
Oh, that's fun. Let us hope that char *is* one byte, otherwise... damn!
I guess this is why autoconf exists. :-P
> Second warning: the signedness of char is local to the compiler.
Really? The book seems to say that all integer types default to signed,
unless you request unsigned. (AFAIK, there isn't a way to request signed.)
> In 1999, there was only issue with 3 integers type. It was 32 bits time.
Well, the book still talks about 16-bit code.
> On nowadays 64 bits, there is also a new one "long long int" !
long long int? Woah, that's special...
> As inherited from C, it's a total garbage, you can only assume:
>
> char <= short <= int <= long <= long long
> Notice that it is possible that "char = long"
So much for C and C++ being for system-level programming. :-P
>> But hey, at least double and float work in a sane way, right?
>
> Do not rely on that. I'm not sure C mandate that the double & float
> should obey the IEEE-754 rules, or even their size, nor their
> representation and mantissa/offset bitcount.
Well, no, it'll be whatever the underlying platform supports. Which on
any sane x86 system is going to be IEEE-754. If I was trying to program
a microcontroller, I might be worried. On a desktop PC, I'm not too
concerned.
> Usually, float is 32 bits and double is 64 bits. But some compilers
> are known to perform the computation in 80-bit registers...
Is there a way to explicitly request 80 bits?
> Yep, variables are allocated, but whatever was in the memory at that
> time is their value. You'd better set one.
Right. So if I request a 2GB array, it won't actually sit there trying
to zero all 2GB of RAM, right before my for-loop rolls over it to
initialise it /again/.
>> Apparently assigning a double to an int is only a /warning/, not a
>> compile-time error. The book helpfully fails to specify how the
>> conversion is actually done.
>
> Issue: it might be a local architecture/compiler choice.
Oh God.
>> while (x[i] = y[i--]) ;
>>
>> is perfectly valid. You sick, sick people.
>
> It came from C. Put the blame on C.
C, the PDP assembly language that thinks it's a high-level language. :-P
(I forgot who wrote that...)
>> In most programming languages, performing division promotes integers to
>> reals. But not in C++, apparently. (Again, it is unspecified exactly how
>> integer division works. Whether the author actually doesn't /know/ how
>> it works or merely thought it unimportant is unclear.)
>
> Operations on integers are performed with integers.
> Integer division is performed as usual: 14/5 is 2.
So, it always rounds downwards?
>> Next we learn that false=0, and true /= 0. (Christ, I'm /never/ going to
>> remember that!) Apparently there's a new bool type, but details are thin
>> as to what the exact significance of this is.
>
> Once again: C did not have bool; C++ does.
I notice that "true" and "false" seem to be valid names now, which is
useful...
> And now for a mad trick: if you follow an expression with a , (comma),
> its value will be replaced by the expression after the comma...
Oh that's sick. So comma really /is/ an operator? Wow, that's evil.
I wonder if brackets are operators too? Perhaps you can do some crazy
code obfuscation with that as well. :-P
>> Apparently every function must be declared before it can be used. (This
>> constantly trips me up...) It seems a function prototype doesn't need to
>> include argument names, just their types. (If only Java was like this!)
>> Bizarrely, if you don't specify a return type, it defaults to int. Not,
>> say, void. :-o
>
> Yep. C++ is maniacal about this: you must provide the prototype
> before using the function.
> You can include the parameter names in the prototype if you like.
I was referring more to the fact that it's legal to not specify the
return type, and if you don't, it defaults to something absurd.
>> There's a side-box that shows you the syntax for a function prototype
>> [even though the text already explained this]. But, for reasons unknown,
>> the diagram /actually/ shows a normal function definition, but with a
>> stray semicolon at the end of the prototype... In short, somebody seems
>> to have got their stuff mixed up.
>
> Nope.
>
> int main(int,char**); // is enough for the compiler
> int main(int argc, char** argv); // is more similar to the definition
>
> Both are ok.
To be clear: The diagram claims that a "function prototype" looks like this:
double foo(int, int, int) ;
{
    statement1;
    statement2;
    statement3;
}
This is neither a valid prototype nor a valid definition. It's garbage.
If you take out the function body, it becomes a valid prototype. If you
take out the semicolon and add some argument names, it becomes a valid
definition. But as written, it's garbage.
>> Then we learn about global variables. Apparently if you write a variable
>> outside of any block, it's global. No word on exactly when it's
>> initialised, or precisely what "global" actually means. For example, I'm
>> /guessing/ the variable is only in-scope /below/ the line where it's
>> defined. That's how functions work, after all.
>
> Global variables are scoped either to the file they appear in (with
> the static prefix) or to the whole program.
> (You haven't seen namespaces yet.)
> Their position is irrelevant, but it works like prototypes and
> definitions: you need a declaration before using the variable. The
> definition of a global variable can also serve as its declaration,
> but you can only have one definition.
>
> declaration (at the head of the file, or in a header file):
> extern int foo;
>
> definition (anywhere, usually at the head to avoid a separate
> declaration):
> int foo = 5;
You just explained more in a handful of sentences than the book
explained in an entire chapter. :-P
>> Next, we learn that function arguments can have default values. But get
>> this:
>>
>> "Any or all of the function's parameters can be assigned default
>> values. The one restriction is this: If any of the parameters does not
>> have a default value, no previous parameter may have a default value."
>>
>> ...um, what?
>
> Once you give one parameter a default value, every parameter to the
> right of it must have a default value too.
...which makes infinitely more sense than what the book said. :-P
Perhaps more alarming than anything I read in a book: Apparently in C++
it is legal for a function to not have a return statement.
As in, I declared my function as returning a value. I ran the program
and printed out the result of the function. I got a segfault. I changed
the code a bit, and ran it again. This time, it printed garbage, and
/then/ it segfaulted.
When I looked at my function, I found I'd written "it->second;" rather
than "return it->second;". Not only is this apparently legal, it doesn't
even generate a compile-time /warning/.
(Contrast this with Java. If the compiler cannot statically prove that
every single possible program branch ends with a return or a throw, it
point-blank refuses to compile your class. It gives you warnings if code
is unreachable. It even complains if you have a void function that uses
return just to exit early, and there's nothing afterwards...)
On 16/04/2012 09:11 PM, Orchid Win7 v1 wrote:
> Day 5
>
> We are now on page 108, and that's where I stopped reading.
OK. So the book bothers to go to all the trouble of explaining what an
inline function is, and why that's good. And in the next sentence, it says:
"Note that inline functions can bring a heavy cost. If the function
is called 10 times, the inline code is copied into the calling functions
each of those 10 times. The tiny improvement in speed you might achieve
is more than swamped by the increase in the size of the executable
program. Even the speed increase might be illusory. First, today's
optimizing compilers do a terrific job on their own, and declaring a
function inline almost never results in a big gain. More important, the
increased size brings its own performance cost."
In other words "now that we've explained how to use this feature, you
should never, ever use it". Great. Thanks for that...
Now, I'm used to programming languages where the decision to inline
something or not is down to the compiler. It's not something an
application programmer would ever have to worry about. And it seems that
the inline directive is only a "hint" in C++ anyway, so I have to
wonder whether this particular directive is now obsolete.
The book has this to say on the subject of recursion:
"Both types of recursion, direct and indirect, come in two varieties:
those that eventually end and produce an answer, and those that never
end and produce a runtime failure. Programmers think that the latter is
quite funny (when it happens to somebody else)."
Um... WTF?
Anyway, this is followed by an exponential-time Fibonacci function. I
kid you not. Because, let's face it, obscure mathematical problems with
no connection to reality are the only reason anybody would ever use
recursion, right? :-D
I love the dire warnings that you should use a number less than 15,
because otherwise the program might consume a vast amount of memory.
(For goodness' sake, how much RAM does 15 stack frames take up?!)
"Some compilers have difficulty with the use of operators in a cout
statement. If you receive a warning on line 28, place parentheses around
the subtraction operation."
I'm going to go out on a limb and guess that this stopped being a
problem ten years ago. :-P
Anyway, like the previous segment, we wrap up with
"Recursion is not used often in C++ programming, but it can be a
powerful and elegant tool for certain needs. Recursion is a tricky part
of advanced programming. It is presented here because it can be useful
to understand the fundamentals of how it works, but don't worry too much
if you don't fully understand all the details."
In other words, yet again, "now you know how this works, you don't need
to actually use it".
Now we get a tale about how function calling works.
"Most introductory books don't try to answer these questions, but
without understanding this information, you'll find that programming
remains a fuzzy mystery."
Uh... yeah, whatever.
"Few programmers bother with any level of detail below the idea of
values in RAM. [...] You do need to understand how memory is organized,
however. Without a reasonably strong mental picture of where your
variables are when they are created and how values are passed among
functions, it will all remain an unmanageable mystery."
I'm not sure what's so "unmanageable" about it all. Provided you don't
try to reach behind the abstraction, there's no particular need to
understand how it works. Then again, this is a low-level language, and
if you're bothering to code in it, that probably means that you're
intending to go behind the abstractions at some point...
"As a C++ programmer, you'll often be concerned with the global name
space, the free store, the registers, the code space, and the stack."
OK.
"Registers are a special area of memory built right into the CPU."
Erm...
"They take care of internal housekeeping."
...actually...
"A lot of what goes on in the registers is beyond the scope of this
book, but what we are concerned with is the set of registers responsible
for pointing, at any given moment, to the next line of code. We'll call
these registers, together, the instruction pointer."
So I'm guessing an architecture exists where the instruction pointer
/isn't/ a single register then? :-P
I'm loving how they have to explain the idea of a stack using a crude
drawing of a smiling, happy man putting a plate onto the top of a stack
of plates. Because saying that in a sentence just doesn't bring it home,
does it?
I'm looking at the description of function calls now. I'm loving how in
step #3, the base address of the function is loaded into the instruction
pointer, and then in step #5 the function's arguments are pushed onto
the stack. Are you /sure/ it happens in that order??
Still, it does answer something I've always wondered about: What *is*
the C calling convention? Apparently it's to pass all function arguments
and function results via the stack, and also to store all local
variables on the stack. (By contrast, if you're coding in assembly,
usually you pass everything in registers. Apparently C doesn't do this.)
Ho hum. On to day 6, where hopefully I'm going to start learning
something useful...
On 17/04/2012 11:16, Invisible wrote:
> Perhaps more alarming than anything I read in a book: Apparently in C++
> it is legal for a function to not have a return statement.
>
> As in, I declared my function as returning a value. I ran the program
> and printed out the result of the function. I got a segfault. I changed
> the code a bit, and ran it again. This time, it printed garbage, and
> /then/ it segfaulted.
>
> When I looked at my function, I found I'd written "it->second;" rather
> than "return it->second;". Not only is this apparently legal, it doesn't
> even generate a compile-time /warning/.
It's when I read things like that that I'm glad I found C# :-)
From: Invisible
Subject: Re: Teach yourself C++ in 21 strange malfunctions
Date: 17 Apr 2012 06:55:49
Message: <4f8d4c35@news.povray.org>
>> When I looked at my function, I found I'd written "it->second;" rather
>> than "return it->second;". Not only is this apparently legal, it doesn't
>> even generate a compile-time /warning/.
>
> It's when I read things like that that I'm glad I found C# :-)
It's when I see things like this that I'm glad I use Haskell, not some
low-level performance-oriented language that doesn't mind giving you
garbage results if it makes the code 0.02% faster...
>> It's when I read things like that that I'm glad I found C# :-)
>
> It's when I see things like this that I'm glad I use Haskell, not some
> low-level performance-oriented language that doesn't mind giving you
> garbage results if it makes the code 0.02% faster...
...and then I feel depressed because I'm the only person in the UK who
programs in Haskell, and the rest of the industry /does/ use C++,
regardless of what I think about it. :-(
On 17/04/2012 12:47, Invisible wrote:
>
> Still, it does answer something I've always wondered about: What *is*
> the C calling convention? Apparently it's to pass all function arguments
> and function results via the stack, and also to store all local
> variables on the stack. (By contrast, if you're coding in assembly,
> usually you pass everything in registers. Apparently C doesn't do this.)
C cannot assume a single processor, hence cannot define a fixed set of
registers for arguments.
(On an M68000, what would you do if your function has more than 16
arguments (D0-D7 and A0-A7)? And on a small i286, more than 4 16-bit
ones?)
Not only does everything go on the stack, but the alignment on the
stack is also sensitive to the compiler (and the ABI mode).
(And in C, the stacking of arguments sometimes differs depending on
whether a K&R or an ANSI prototype was provided (or neither, in which
case K&R is assumed); C++ does not have that issue, as prototypes are
mandatory.)
"You make a new type by declaring a class."
Wrong.
"A class is just a collection of variables - often of different types -
combined with a set of related functions."
It might be more accurate to say that an /object/ is a collection of
variables. At any rate, "collection" usually means something more
specific than that.
"A class enables you to encapsulate, or bundle, these variables parts
and various functions into one collection, which is called an object."
Where do I start?
I'm pretty sure "encapsulate" doesn't mean what you think it means. :-P
A "collection" usually refers to a group of objects, which isn't what
you're talking about.
And finally, this is perhaps the most vague definition of an object I've
ever heard. :-P
The text then goes on to explain that classes are good because everything
related to a thing is in one place. (Like you couldn't just write all
the code together if it wasn't for classes.) Obviously this /isn't/
what's so special about classes. The /actual/ benefits don't appear to
be mentioned.
I'm going to stop now, because otherwise I'll just go on forever... >_<