  Teach yourself C++ in 21 days (Message 21 to 30 of 168)  
From: scott
Subject: Re: Teach yourself C++ in 21 strange malfunctions
Date: 17 Apr 2012 08:44:05
Message: <4f8d6595$1@news.povray.org>
> It's when I see things like this that I'm glad I use Haskell, not some
> low-level performance-oriented language that doesn't mind giving you
> garbage results if it makes the code 0.02% faster...

Has Haskell ever been used for something where the execution speed is 
actually important? :-)

If Haskell had a decent IDE and easy documented access to the rest of 
the system I might consider using it.  But then F# exists already and I 
haven't paid much attention to that so far...

I think you need to be working with very specific types of problems to 
warrant using Haskell, which isn't to say nobody uses it.



From: Warp
Subject: Re: Days 1-5
Date: 17 Apr 2012 09:40:45
Message: <4f8d72dd@news.povray.org>
Orchid Win7 v1 <voi### [at] devnull> wrote:
> Then there's a cryptic reference to the fact that iostream and 
> iostream.h aren't the same, but the differences are "subtle, exotic, and 
> beyond the scope of an introductory primer".

  One is standard, the other isn't. What more is there to explain?

> (Incidentally, the table of contents runs to 21 pages. I always feel 
> that if the table of contents /itself/ merits another table of contents, 
> you're doing it wrong.)

  I have sometimes seen the opposite extreme: A doorstopper book with
a laughably small table of contents which isn't very helpful.

> Anyway, apparently main() must always return int. It is against the ANSI 
> standard for it to return void. (No mention of it having parameters. 
> Indeed, even the return value is described as an "obscure feature which 
> we won't make use of".)

  I don't think it would have been very hard to say something like "the
return value is used by the operating system as a success/failure value"
and then explain that 0 means success and other values can be used as a
failure code. (It is actually important to know this especially when
creating a command-line program. The return value of main() is
significant.)
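
  For illustration, a minimal sketch of how that return value is typically
used (EXIT_SUCCESS/EXIT_FAILURE come from <cstdlib>; the condition is made up):

    #include <cstdlib>

    int main()
    {
        bool something_went_wrong = false;  // made-up condition
        if (something_went_wrong)
            return EXIT_FAILURE;  // non-zero: reported to the OS as failure
        return EXIT_SUCCESS;      // 0: success (visible as the exit status,
                                  // eg. via "echo $?" in a shell)
    }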

> Then there's some mumbo-jumbo about how you can use cout "to do 
> addition", and how when you do cout << 8+5, then "8+5 is passed to 
> cout, but 13 is what is actually printed". Nice, clear explanation, 
> that. :-P

  Actually "8+5" is not being passed to cout. cout is getting one single
integral value. (Basically invariably this value will be calculated by
the compiler at compile time.) Anyways, it's not like it's important.

> And then it starts talking about types. Apparently C++ adds a new "bool" 
> type, which is supposed to be 1 byte.

  I don't think the standard defines the size of bool. Most compilers
probably make it 1.

> Apparently "char" doesn't mean 
> character at all, it means a 1-byte integer (and by default, a signed one).

  The standard only guarantees that sizeof(char) is 1, but not that char
is 1 byte. (It is possible for char to be larger in some systems. It just
means that everything else is then a multiple of that size.)

  The standard also leaves the signedness of char implementation-defined.
(In most practical systems it's signed.)

  (Curiously, "char" and "signed char" are two *different* types in C and
C++ even if "char" is signed, while for example "int" and "signed int" are
completely synonymous and the "signed" keyword is redundant in the latter
case. This comes from the prehistoric dark ages of C.)

> Now, this I did not know, but: There is a long int, and a short int. And 
> then there's just int. I had always thought these were three different 
> sizes of integer. But it appears that actually, long int is one size, 
> short int is another size, and plain int refers to whichever one the 
> compiler writer chose on a whim. So there's only actually two integer 
> sizes, and plain int means "I don't care".

  Nope. The standard does not say that. The only thing the standard says
is that:

    sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

and that sizeof(char) = 1 (which does not mean "1 byte", but "1 indexable
memory unit".)

  (Yes, it is possible and standard-conforming for char to be as large
as long.)

  In practice in most 32-bit systems short will be 16 bits and int/long
will be 32 bits. In most 64-bit systems it will be the same except that
long will be 64 bits.

  Curiously, the new standard specifically states that the new "long long"
type has to be at least 64 bits. (The new standard also defines official
typedefs for integrals of specific sizes.)

> The book suggests "never use int, always use short int or long int". If 
> the table is to be believed, long int is 32-bits. Christ knows what you 
> do if you want more bits than that...

  "Never use int" is a silly thing to say.

  If you need an integral that's the natural word size of the target system,
use int. If you need an integral with a specific number of bits, use one
of the new standard typedefs. Use "long long" for an integral of at least
64 bits (at least if your compiler supports C++11). There's no standard
support for larger integrals (because they are not supported by CPUs).

> The book casually mentions something which /seems/ to be claiming that 
> variables are not initialised to anything in particular unless you 
> specifically request this. That's interesting; I didn't know that.

  C hackers don't want the extra clock cycle to initialize a variable
which will be immediately assigned a new value anyways.

  As a side effect you get all kinds of weird behavior if you forget to
initialize a variable.

> Apparently assigning a double to an int is only a /warning/, not a 
> compile-time error. The book helpfully fails to specify how the 
> conversion is actually done.

  The integral part of the value is assigned to the int.
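
  In other words the fractional part is simply discarded (truncation towards
zero), eg.:

    #include <iostream>

    int main()
    {
        int a = 3.9;   // a == 3
        int b = -3.9;  // b == -3 (truncated towards zero, not "rounded down" to -4)
        std::cout << a << " " << b << "\n";  // prints "3 -3"
        return 0;
    }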

> is a perfectly valid statement in C++. On the other hand, it also means that

>    while (x[i] = y[i--]) ;

> is perfectly valid. You sick, sick people.

  I think that's Undefined Behavior because the same variable is being
modified and referenced more than once in the same expression.

  Even if it isn't UB, better not do it like that.

> In most programming languages, performing division promotes integers to 
> reals. But not in C++, apparently.

  C is a wrapper around assembly, and assembly supports integral division.
Hence dividing integers performs integral division. Oftentimes you *want*
integral division (in which case promoting to floating point would be an
enormous waste of resources).

  If you really want promotion to floating point, then you have to cast
explicitly.
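
  A small illustration of the difference:

    #include <iostream>

    int main()
    {
        int a = 7, b = 2;
        std::cout << a / b << "\n";                       // 3   (integral division)
        std::cout << static_cast<double>(a) / b << "\n";  // 3.5 (one operand cast,
                                                          // so the division is done in double)
        return 0;
    }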

> At this point, we 
> are told that there's two ways to convert something to a double:

>    double x = (double)y;

>    double x = static_cast<double>(y);

> Apparently the former is bad, and the latter is good. No indication as 
> to why, it just is.

  The former casts *anything* into a double, while the latter casts only
compatible types. (Which means that the latter will give you a compiler
error if you try to cast from an incompatible type by mistake.)
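
  For concreteness (the struct names here are made up):

    struct Apple  { int a; };
    struct Orange { int o; };

    int main()
    {
        Orange  orange;
        Orange* op = &orange;

        Apple* p1 = (Apple*)op;                  // compiles: the C-style cast silently
                                                 // reinterprets the pointer
        // Apple* p2 = static_cast<Apple*>(op);  // compiler error: unrelated types

        double d  = 3.5;
        int    i1 = (int)d;               // for compatible types both forms work;
        int    i2 = static_cast<int>(d);  // static_cast just refuses the nonsensical ones

        (void)p1; (void)i1; (void)i2;
        return 0;
    }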

> This being C++, some insane wacko 
> thought that having c++ and ++c would be a good idea, and wouldn't be 
> confusing in any way.

  Who gets confused? I don't.

  There's certainly a big difference between eg:

    while(c++ < 10) std::cout << c << "\n";

and

    while(++c < 10) std::cout << c << "\n";

  If you don't want to make the increment "inline" like that, the
alternative is more verbose.
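
  For example, assuming c starts at 0, those two loops don't even print the
same values:

    #include <iostream>

    int main()
    {
        int c = 0;
        while (c++ < 10) std::cout << c << " ";  // prints 1 2 ... 10 (compare the old value, then increment)
        std::cout << "\n";

        c = 0;
        while (++c < 10) std::cout << c << " ";  // prints 1 2 ... 9 (increment first, then compare)
        std::cout << "\n";
        return 0;
    }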

> Next we learn that false=0, and true /= 0. (Christ, I'm /never/ going to 
> remember that!) Apparently there's a new bool type, but details are thin 
> as to what the exact significance of this is.

  You are never going to remember that 0 is false?

> Then we learn about if/then, if/then/else, and finally ?: is 
> demonstrated, without once mentioning how it's different from 
> if/then/else. (I.e., it only works on expressions, not statements. Oh, 
> but wait! Silly me, statements /are/ expressions...)

  ?: cannot contain blocks of code, and both branches must evaluate to
the same type. No such limitation for if/then/else.
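
  A quick illustration of where the difference shows up:

    #include <iostream>
    #include <string>

    int main()
    {
        int x = 7;

        // ?: is an expression, so it can be used directly as an initializer:
        std::string label = (x % 2 == 0) ? "even" : "odd";

        // the if/else equivalent is a statement and needs a separate assignment:
        std::string label2;
        if (x % 2 == 0)
            label2 = "even";
        else
            label2 = "odd";

        std::cout << label << " " << label2 << "\n";  // prints "odd odd"
        return 0;
    }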

> Bizarrely, if you don't specify a return type, it defaults to int. Not, 
> say, void. :-o

  Not in standard C++ (where it's an error).

> Then we learn about global variables. Apparently if you write a variable 
> outside of any block, it's global. No word on exactly when it's 
> initialised, or precisely what "global" actually means. For example, I'm 
> /guessing/ the variable is only in-scope /below/ the line where it's 
> defined. That's how functions work, after all.

  If you define a variable (or function) in the global namespace, it will
be visible in the entire program. (Yes, you can access it from a different
compilation unit. You just need to declare it with "extern" in that other
compilation unit to do that.)
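
  A minimal two-file sketch (file names made up):

    // globals.cpp -- the one and only definition
    int counter = 0;

    // main.cpp -- a different compilation unit using it
    #include <iostream>

    extern int counter;  // declaration only; the definition lives in globals.cpp

    int main()
    {
        ++counter;
        std::cout << counter << "\n";  // prints 1
        return 0;
    }

    // build, eg.: g++ main.cpp globals.cpp -o demo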

  The problem with global variables is, obviously, name clashes. You'll
start getting weird linker errors about duplicate names if you make a habit
of using lots of globals (and then happen to use the same name in two
compilation units).

  (Also, from a design point of view, global variables decrease abstraction,
which is bad.)

-- 
                                                          - Warp



From: Invisible
Subject: Re: Teach yourself C++ in 21 strange malfunctions
Date: 17 Apr 2012 09:47:19
Message: <4f8d7467$1@news.povray.org>
On 17/04/2012 01:44 PM, scott wrote:
>> It's when I see things like this that I'm glad I use Haskell, not some
>> low-level performance-oriented language that doesn't mind giving you
>> garbage results if it makes the code 0.02% faster...
>
> Has Haskell ever been used for something where the execution speed is
> actually important? :-)

Well... The Haskell compiler itself. The Darcs source control system. 
Several web frameworks / web servers. (If you believe the hype, some of 
these web servers rival Apache in terms of performance.)

If there's one thing Haskell lacks, it's a "killer application". There's 
nothing I can really point at which demonstrates either the speed or 
usefulness of Haskell. The best I can do is some microbenchmarks on the 
language shootout.

> If Haskell had a decent IDE and easy documented access to the rest of
> the system I might consider using it. But then F# exists already and I
> haven't paid much attention to that so far...

Haskell has /an/ IDE... I'm not sure it warrants the description of 
"decent" yet though. Last time I tried it, it had reached the stage of 
"kinda clunky, but basically usable".

http://antimatroid.files.wordpress.com/2010/07/leksah.png

If by "easy access to the rest of the system" you mean "it works great 
on POSIX systems", then yeah, sure. :-) Oh, wait, you meant Windows, the 
platform that 98% of the entire world uses? Sorry, no luck.

(OK, that's a slight exaggeration. The basic system works flawlessly on 
all popular platforms. But if you want to talk to the outside world, you 
end up needing to know very low level system calls.)

If you want /documentation/... then yeah, I see why you're not using 
Haskell.

In short, in terms of how nicely it's all packaged up, Haskell just 
can't compete with the might of Java or F# or Erlang or whatever. 
Haskell is a hobby project put together by a few dozen open-source 
developers from around the world. It's got nothing on Microsoft or 
Oracle or Ericsson.

> I think you need to be working with very specific types of problems to
> warrant using Haskell, which isn't to say nobody uses it.

What, you mean like programs involving complex data manipulations? 
Problems where you want to actually get the correct answer? Problems 
where you'd like to be able to still understand the code in two years' 
time? Because, to me, that sounds like a pretty /huge/ problem domain. ;-)

It's sad, really. You can use C or Java or whatever, which is horrible 
to code in but lets you actually get stuff done. Or you can code in 
Haskell, which is a joyful celebration of how programming should be, but 
then it's a nightmare to actually interact with the outside world... I 
keep hoping that some day this situation will be fixed. But I doubt it.



From: Warp
Subject: Re: Days 5-
Date: 17 Apr 2012 10:00:19
Message: <4f8d7772@news.povray.org>
Invisible <voi### [at] devnull> wrote:
>    "Note that inline functions can bring a heavy cost. If the function 
> is called 10 times, the inline code is copied into the calling functions 
> each of those 10 times. The tiny improvement in speed you might achieve 
> is more than swamped by the increase in the size of the executable 
> program.

  He seems to think that the 'inline' keyword forces the compiler to inline.
It doesn't. The compiler is completely free to not inline it, if it decides
that it would be detrimental.

  As for increasing the size of the executable, who cares? If the executable
gets a few hundred bytes larger, big deal.

  The 'inline' keyword has a secondary, but much more important role,
though: It's an instruction for the compiler that says, basically, "if
this function appears in more than one compilation unit, don't give me
a linker error; instead merge them into one".

  Anyways, 'inline' should usually only be used for very short functions
that are absolutely crucial for speed. Otherwise there's little benefit.

> Now, I'm used to programming languages where the decision to inline 
> something or not is down to the compiler. It's not something an 
> application programmer would ever have to worry about. And it seems that 
> the inline directive is only a "hint" in C++ anyway, so I have to 
> wonder, whether this particular directive is now obsolete.

  Some newer compilers (such as the newest version of gcc) are able to
inline functions between compilation units. However, this is a rare,
quite advanced feature. (AFAIK only gcc implements this so far.)

  Traditionally, if you need a short function to be as fast as possible,
you need to define it in the module's header file, and then it *must* be
declared 'inline' (or else you'll get linker errors).
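
  Roughly like this (file names made up):

    // length.h -- a short function defined in the header so every caller sees the body
    #ifndef LENGTH_H
    #define LENGTH_H

    inline double squared_length(double x, double y)
    {
        // 'inline' mainly tells the linker: identical copies of this function
        // in several compilation units are fine, merge them into one.
        return x * x + y * y;
    }

    #endif

    // a.cpp and b.cpp can now both #include "length.h" and call
    // squared_length() without a multiple-definition linker error.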

> I love the dire warnings that you could use a number less than 15, 
> because otherwise the program might consume a vast amount of memory. 
> (For goodness' sake, how much RAM does 15 stack frames take up?!)

  That would probably be 15^2. The author might have been using a 16-bit
DOS compiler.

>    "Recursion is not used often in C++ programming, but it can be a 
> powerful and elegant tool for certain needs. Recursion is a tricky part 
> of advanced programming. It is presented here because it can be useful 
> to understand the fundamentals of how it works, but don't worry too much 
> if you don't fully understand all the details."

> In other words, yet again, "now you know how this works, you don't need 
> to actually use it".

  Recursion is often handy to implement algorithms that are very recursive
in nature, but one should be careful to not use too much stack space.
(Generally speaking C++ compilers do not perform tail recursion
optimizations. I'm not completely sure why.)

  If your recursion depth is O(log n), then it's usually safe. If it starts
being O(n), then it's more worrisome. (Well, depending on the expected
amount of data, of course.)

  Usually recursion is also slower than an equivalent iterative solution
(*especially* if the iterative solution works with O(1) extra memory.)
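
  For example, summing 1..n the recursive way needs O(n) stack frames, while
the loop needs O(1) extra memory:

    // illustrative only: two ways to sum 1..n
    long sum_rec(long n)
    {
        if (n == 0) return 0;
        return n + sum_rec(n - 1);  // recursion depth is O(n): fine for small n,
    }                               // but a big enough n will blow the stack

    long sum_iter(long n)
    {
        long total = 0;
        for (long i = 1; i <= n; ++i)  // O(1) extra memory, no deep call chain
            total += i;
        return total;
    }

    int main()
    {
        return sum_rec(1000) == sum_iter(1000) ? 0 : 1;
    }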

>    "Registers are a special area of memory built right into the CPU."

> Erm...

  Even C programmers stopped worrying about CPU registers somewhere in the
early 80's. In C++ they have never been an issue. It's a headache of the
compiler, not the programmer.

> Still, it does answer something I've always wondered about: What *is* 
> the C calling convention?

  It actually depends on the OS.

  You don't need to worry about calling conventions unless you are
implementing something really, *really* low-level. Machine-code level.
(One example would be if you are writing a compiler.)

-- 
                                                          - Warp



From: Warp
Subject: Re: Teach yourself C++ in 21 strange malfunctions
Date: 17 Apr 2012 10:03:32
Message: <4f8d7834@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> When I looked at my function, I found I'd written "it->second;" rather 
> than "return it->second;". Not only is this apparently legal, it doesn't 
> even generate a compile-time /warning/.

  What compiler would that be?

    int test()
    {
    }

test.cc:3:13: warning: no return statement in function returning non-void

-- 
                                                          - Warp



From: Invisible
Subject: Re: Days 1-5
Date: 17 Apr 2012 10:04:34
Message: <4f8d7872$1@news.povray.org>
On 17/04/2012 02:40 PM, Warp wrote:
> Orchid Win7 v1<voi### [at] devnull>  wrote:
>> Then there's a cryptic reference to the fact that iostream and
>> iostream.h aren't the same, but the differences are "subtle, exotic, and
>> beyond the scope of an introductory primer".
>
>    One is standard, the other isn't. What is there more to explain?

The book doesn't actually mention this fact. It seems to suggest that 
you just use whichever one you fancy. And then says something about "but 
we're going to use iostream.h for compatibility reasons".

So I'm presuming iostream is the standards-compliant one?

>> Anyway, apparently main() must always return int. It is against the ANSI
>> standard for it to return void. (No mention of it having parameters.
>> Indeed, even the return value is described as an "obscure feature which
>> we won't make use of".)
>
>    I don't think it would have been very hard to say something like "the
> return value is used by the operating system as a success/failure value"
> and then explain that 0 means success and other values can be used as a
> failure code. (It is actually important to know this especially when
> creating a command-line program. The return value of main() is
> significant.)

Yeah, an exit code is not exactly an "obscure feature". I can understand 
the book not using it, but that's not the same as it being useless.

>> Then there's some mumbo-jumbo about how you can use cout "to do
>> addition", and how when you do cout << 8+5, then "8+5 is passed to
>> cout, but 13 is what is actually printed". Nice, clear explanation,
>> that. :-P
>
>    Actually "8+5" is not being passed to cout.

Quite. The text completely fails to explain what's /actually/ happening. 
A bit later on it starts talking about how "the compiler" calls XYZ when 
you do ABC. Obviously, the compiler doesn't call anything; it just 
generates code. :-P

> Anyways, it's not like it's important.

It is if you want to form an understanding of what the stuff you're 
typing in actually /means/. The text makes it sound like adding is 
somehow special to cout, which clearly it isn't. You can do this with 
/any/ function call.

>> And then it starts talking about types. Apparently C++ adds a new "bool"
>> type, which is supposed to be 1 byte.
>
>    I don't think the standard defines the size of bool. Most compilers
> probably make it 1.

I have a sinking feeling you're right.

>> Apparently "char" doesn't mean
>> character at all, it means a 1-byte integer (and by default, a signed one).
>
>    The standard only guarantees that sizeof(char) is 1, but not that char
> is 1 byte. (It is possible for char to be larger in some systems. It just
> means that everything else is then a multiple of that size.)
>
>    The standard also leaves the signedness of char implementation-defined.
> (In most practical systems it's signed.)

Oh... goodie.

>    (Curiously, "char" and "signed char" are two *different* types in C and
> C++ even if "char" is signed, while for example "int" and "signed int" are
> completely synonymous and the "signed" keyword is redundant in the latter
> case. This comes from the prehistoric dark ages of C.)

Wait... I didn't even know you could /do/ that! o_O

>    Nope. The standard does not say that. The only thing the standard says
> is that:
>
>      sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
>
> and that sizeof(char) = 1 (which does not mean "1 byte", but "1 indexable
> memory unit".)
>
>    (Yes, it is possible and standard-conforming for char to be as large
> as long.)
>
>    In practice in most 32-bit systems short will be 16 bits and int/long
> will be 32 bits. In most 64-bit systems it will be the same except that
> long will be 64 bits.
>
>    Curiously, the new standard specifically states that the new "long long"
> type has to be at least 64 bits. (The new standard also defines official
> typedefs for integrals of specific sizes.)

Ouch. My head.

>    "Never use int" is a silly thing to say.
>
>    If you need an integral that's the natural word size of the target system,
> use int. If you need an integral with a specific number of bits, use one
> of the new standard typedefs.

Now /that/ actually makes sense. Where are these typedefs located? Do 
you have to include them specifically, or...?

> Use "long long" for an integral of at least
> 64 bits (at least if your compiler supports C++11). There's no standard
> support for larger integrals (because they are not supported by CPUs).

Not supported _yet_. ;-)

>    C hackers don't want the extra clock cycle to initialize a variable
> which will be immediately assigned a new value anyways.

I figured.

>    As a side effect you get all kinds of weird behavior if you forget to
> initialize a variable.

Yeah, it's great, isn't it? ;-)

I had hoped for maybe a compile-time warning. I haven't actually tried 
it to see if I get one though...

(Java actually refuses to compile your code if it can't satisfy itself 
that /every/ variable is initialised. Which gets annoying when every 
variable /is/ initialised before use, but the compiler can't prove it...)

>> Apparently assigning a double to an int is only a /warning/, not a
>> compile-time error. The book helpfully fails to specify how the
>> conversion is actually done.
>
>    The integral part of the value is assigned to the int.

So... it always rounds towards zero?

>> is a perfectly valid statement in C++. On the other hand, it also means that
>
>>     while (x[i] = y[i--]) ;
>
>> is perfectly valid. You sick, sick people.
>
>    I think that's Undefined Behavior because the same variable is being
> modified and referenced more than once in the same expression.

Aren't expressions guaranteed to execute left-to-right?

>    Even if it isn't UB, better not do it like that.

But that's just it. All C programmers always code in this kind of style. 
It's part of what makes C so terrifying to try to read.

>> In most programming languages, performing division promotes integers to
>> reals. But not in C++, apparently.
>
>    C is a wrapper around assembly, and assembly supports integral division.
> Hence dividing integers performs integral division. Oftentimes you *want*
> integral division

Most languages seem to provide a separate operator for that. But sure, OK.

>> At this point, we
>> are told that there's two ways to convert something to a double:
>
>>     double x = (double)y;
>
>>     double x = static_cast<double>(y);
>
>> Apparently the former is bad, and the latter is good. No indication as
>> to why, it just is.
>
>    The former casts *anything* into a double, while the latter casts only
> compatible types. (Which means that the latter will give you a compiler
> error if you try to cast from an incompatible type by mistake.)

Now why the hell they couldn't just /say/ that I don't know...

>> Next we learn that false=0, and true /= 0. (Christ, I'm /never/ going to
>> remember that!) Apparently there's a new bool type, but details are thin
>> as to what the exact significance of this is.
>
>    You are never going to remember that 0 is false?

I know that zero is one thing, and everything else is the other thing. 
But I always struggle to remember whether zero means true or false. The 
solution, of course, is to never ever use integers as bools.

>> Then we learn about if/then, if/then/else, and finally ?: is
>> demonstrated, without once mentioning how it's different from
>> if/then/else. (I.e., it only works on expressions, not statements. Oh,
>> but wait! Silly me, statements /are/ expressions...)
>
>    ?: cannot contain blocks of code, and both branches must evaluate to
> the same type. No such limitation for if/then/else.

Quite. The book doesn't bother mentioning any of that.

>> Bizarrely, if you don't specify a return type, it defaults to int. Not,
>> say, void. :-o
>
>    Not in standard C++ (where it's an error).

Oh thank god!

>> Then we learn about global variables. Apparently if you write a variable
>> outside of any block, it's global. No word on exactly when it's
>> initialised, or precisely what "global" actually means. For example, I'm
>> /guessing/ the variable is only in-scope /below/ the line where it's
>> defined. That's how functions work, after all.
>
>    If you define a variable (or function) in the global namespace, it will
> be visible in the entire program. (Yes, you can access it from a different
> compilation unit. You just need to declare it with "extern" in that other
> compilation unit to do that.)
>
>    The problem with global variables is, obviously, name clashes. You'll
> start getting weird linker errors about duplicate names if you make a habit
> of using lots of globals (and then happen to use the same name in two
> compilation units).

From what I'm seeing, just having multiple compilation units is a 
nightmare. (That's one of the reasons I'm reading this book. I'm hoping 
it's going to explain how to do this stuff properly.)

>    (Also, from a design point of view, global variables decrease abstraction,
> which is bad.)

From a design point of view, global variables are a terrible idea. Not 
that the book bothers to explain why.



From: Invisible
Subject: Re: Teach yourself C++ in 21 strange malfunctions
Date: 17 Apr 2012 10:09:13
Message: <4f8d7989$1@news.povray.org>
On 17/04/2012 03:03 PM, Warp wrote:
> Invisible<voi### [at] devnull>  wrote:
>> When I looked at my function, I found I'd written "it->second;" rather
>> than "return it->second;". Not only is this apparently legal, it doesn't
>> even generate a compile-time /warning/.
>
>    What compiler would that be?
>
>      int test()
>      {
>      }
>
> test.cc:3:13: warning: no return statement in function returning non-void

#include <iostream>
#include <string>

std::string test()
{
}

int main()
{
   std::cout << test();
   return 0;
}

orphi@linux-z30b:~/Work/Logic-01> make Test
g++     Test.cpp   -o Test
orphi@linux-z30b:~/Work/Logic-01> ./Test
Segmentation fault
orphi@linux-z30b:~/Work/Logic-01> g++ --version
g++ (SUSE Linux) 4.5.1 20101208 [gcc-4_5-branch revision 167585]
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.



From: Invisible
Subject: Re: Days 5-
Date: 17 Apr 2012 10:17:54
Message: <4f8d7b92$1@news.povray.org>
On 17/04/2012 03:00 PM, Warp wrote:
> Invisible<voi### [at] devnull>  wrote:
>>     "Note that inline functions can bring a heavy cost. If the function
>> is called 10 times, the inline code is copied into the calling functions
>> each of those 10 times. The tiny improvement in speed you might achieve
>> is more than swamped by the increase in the size of the executable
>> program.
>
>    He seems to think that the 'inline' keyword forces the compiler to inline.
> It doesn't. The compiler is completely free to not inline it, if it decides
> that it would be detrimental.

He even /says/ that a few paragraphs later. "Inline is a hint, which the 
compiler may ignore."

>    As for increasing the size of the executable, who cares? If the executable
> gets a few hundreds of bytes larger, big deal.

Yeah, that's the really puzzling bit. I mean, unless the function is 
huge [which is a bad idea anyway], or is called from a bazillion places, 
why would inlining it "bring its own performance costs"? I don't get that.

>    The 'inline' keyword has a secondary, but much more important role,
> though: It's an instruction for the compiler that says, basically, "if
> this function appears in more than one compilation unit, don't give me
> a linker error; instead merge them into one".

OK, /that/ sounds rather more significant.

>    Anyways, 'inline' should usually only be used for very short functions
> that are absolutely crucial for speed. Otherwise there's little benefit.

The book claims that if you write a function body inside a class 
definition, that makes the method inline. Is this true? I thought there 
was no difference either way...

>> Now, I'm used to programming languages where the decision to inline
>> something or not is down to the compiler. It's not something an
>> application programmer would ever have to worry about. And it seems that
>> the inline directive is only a "hint" in C++ anyway, so I have to
>> wonder, whether this particular directive is now obsolete.
>
>    Some newer compilers (such as the newest version of gcc) are able to
> inline functions between compilation units. However, this is a rare,
> quite advanced feature. (AFAIK only gcc implements this so far.)

OK. Presumably /within/ a single compilation unit it's already going to 
inline anything it thinks is worth it though. (?)

>    Traditionally, if you need a short function to be as fast as possible,
> you need to define it in the module's header file, and then it *must* be
> declared 'inline' (or else you'll get linker errors).

Right.

>> I love the dire warnings that you could use a number less than 15,
>> because otherwise the program might consume a vast amount of memory.
>> (For goodness' sake, how much RAM does 15 stack frames take up?!)
>
>    That would probably be 15^2. The author might have been using a 16-bit
> DOS compiler.

He does mention DOS several times, yes.

Even so, there might be 15^2 function calls, but only at most 15 of them 
will be /active/ simultaneously - which means only 15 stack frames at 
once. No?

>> In other words, yet again, "now you know how this works, you don't need
>> to actually use it".
>
>    Recursion is often handy to implement algorithms that are very recursive
> in nature, but one should be careful to not use too much stack space.

>    Usually recursion is also slower than an equivalent iterative solution
> (*especially* if the iterative solution works with O(1) extra memory.)

Well, yes. As I understand it, by default C++ doesn't reserve a whole 
lot of stack space, so you're likely to overflow it quite quickly. And 
as you say, being non-recursive means you skip all the overhead of 
jumping into and out of functions, which is faster. (It probably enables 
more compiler optimisations too, I wouldn't wonder...)

>> Still, it does answer something I've always wondered about: What *is*
>> the C calling convention?
>
>    It actually depends on the OS.
>
>    You don't need to worry about calling conventions unless you are
> implementing something really, *really* low-level. Machine-code level.
> (One example would be if you are writing a compiler.)

I'm a curious soul. I like to have some idea of how things work, even if 
I don't know all of the details... ;-)



From: Warp
Subject: Re: Days 1-5
Date: 17 Apr 2012 10:26:39
Message: <4f8d7d9f@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> Now /that/ actually makes sense. Where are these typedefs located? Do 
> you have to include them specifically, or...?

  IIRC they are in <cstdint>.

> I had hoped for maybe a compile-time warning. I haven't actually tried 
> it to see if I get one though...

  Some compilers will give you a warning if they think that a variable
is being used uninitialized, but given that the general problem of
determining that is unsolvable, it's not fool-proof.

  There are some tools (such as valgrind) that will tell you at runtime.
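
  For example, with gcc something like this usually gets flagged when you
compile with -Wall and some optimization enabled, but only as a best-effort
guess:

    // uninit.cc -- compile with eg.: g++ -Wall -O1 uninit.cc
    #include <iostream>

    int main()
    {
        int x;                   // never initialized
        if (std::cin.good())
            x = 1;               // assigned on only one path...
        std::cout << x << "\n";  // ...so this may read an indeterminate value
        return 0;
    }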

> Aren't expressions guaranteed to execute left-to-right?

  No, side-effects within an expression can happen in any order. The compiler
is free to optimize the evaluation however it likes (the result is only
guaranteed to stay the same as long as the expression doesn't have multiple
side-effects on the same variable).

  For example, if you do a "table[i++] = i;" the compiler can decide to
increment the 'i' before it assigns it to that location, or after the
entire statement is done. The general solution to that is: Don't do it.

> >    Even if it isn't UB, better not do it like that.

> But that's just it. All C programmers always code in this kind of style. 
> It's part of what makes C so terrifying to try to read.

  C programmers (at least competent ones) do not have multiple side-effects
on the same variable inside the same expression. They may have multiple
side-effects on *different* variables, which is ok. For example:

    while(*dest++ = *src++) {}

  There are two side-effects there, but they are operating on different
variables, so there's no ambiguity.

> I know that zero is one thing, and everything else is the other thing. 
> But I always struggle to remember whether zero means true or false. The 
> solution, of course, is to never ever use integers as bools.

  What do you think this will print?

    bool b = true;
    std::cout << b << std::endl;

-- 
                                                          - Warp



From: scott
Subject: Re: Teach yourself C++ in 21 strange malfunctions
Date: 17 Apr 2012 10:30:50
Message: <4f8d7e9a$1@news.povray.org>
>> I think you need to be working with very specific types of problems to
>> warrant using Haskell, which isn't to say nobody uses it.
>
> What, you mean like programs involving complex data manipulations?
> Problems where you want to actually get the correct answer? Problems
> where you'd like to be able to still understand the code in two years'
> time? Because, to me, that sounds like a pretty /huge/ problem domain. ;-)

Give a specific example then, of a real commercial problem where 
selecting Haskell would be the best choice.  I'm not saying none exist, 
just wondering exactly what type of software problem Haskell would excel 
at. Presumably it's not making a 3D game or an application that is 
mostly GUI.

> It's sad, really. You can use C or Java or whatever, which is horrible
> to code in but lets you actually get stuff done. Or you can code in
> Haskell, which is a joyful celebration of how programming should be, but
> then it's a nightmare to actually interact with the outside world... I
> keep hoping that some day this situation will be fixed. But I doubt it.

In the end companies that are trying to make money will just use 
whichever system lets them solve the problems as quickly as possible. 
This results in all manner of different languages being used (even 
Haskell!).  But I think the reason you see C++/Java/C# being so 
popular is that the huge software markets don't match well with Haskell. 
Desktop software needs a familiar GUI and easy documented access to 
APIs.  Apps for phones need good access to hardware (and usually the OS 
maker has chosen a language for you).  Games need high performance and 
good access to hardware APIs.



