On 23/05/2013 19:33, Orchid Win7 v1 wrote:
> Today, I spent about 2 hours trying to figure out why the hell all the
> tests pass when I compile in Debug mode, but two of them fail if I
> compile in Release mode.
>
> Literally, every time I run in Debug mode, all the tests pass. It only
> fails in Release mode. In particular, that means I can't fire up the
> debugger to see why it's failing. You can only debug in Debug mode, and
> in Debug mode it works perfectly.
>
> Apparently Google is your friend. After some minimal amount of effort, I
> came across a very well-written forum post which helpfully explains that
> in Debug mode all variables are guaranteed to be initialised to default
> values, whereas in Release mode variables take on whatever random
> gibberish happens to be in memory, unless you remember to explicitly
> initialise them to something sane.
>
> Ouch. >_<
>
That's why options like -Wall -Wextra -pedantic -Wold-style-cast are my
friends with g++.
I do not know MSVC well enough, but it might have that kind of warning
too (on explicit demand, of course).
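As a minimal sketch (variable names invented for illustration), this is
the sort of uninitialized read those warning flags usually catch at
compile time:

  #include <iostream>

  int main()
  {
      int count;                       // no initializer: value is indeterminate
      if (count > 0)                   // reading it is undefined behaviour;
          std::cout << "positive\n";   // g++ -Wall warns it may be used uninitialized,
                                       // MSVC typically reports C4700 here

      int total = 0;                   // the fix: give locals a sane value up front
      std::cout << total << '\n';
      return 0;
  }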
> Now I guess I understand why Java, C#, et al make such a fuss about
> insisting on explicit initialisation and / or explicitly specifying
> default initialisation rules...
>
>
>
> The other fun thing is that C++ allows you to "throw" absolutely
> anything. Several places in the code throw std::string, presumably in
> the hope that this will result in some helpful error message being printed.
>
> It doesn't. ;-)
with gdb, "catch throw" is wonderful before "run" in such cases.
Now, someone needs to explore the standard exception mechanism, as there
are already a lot of derived types (and many accept a string parameter
which is incorporated in the what() return).
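For instance, a minimal sketch (the load_config function is invented for
illustration) of throwing one of those derived types instead of a bare
std::string, and getting the message back from what():

  #include <iostream>
  #include <stdexcept>

  void load_config(bool found)
  {
      if (!found)
          throw std::runtime_error("config file not found");   // message is stored for what()
  }

  int main()
  {
      try {
          load_config(false);
      }
      catch (const std::exception& e) {    // catches runtime_error and the other derived types
          std::cerr << "error: " << e.what() << '\n';
      }
      return 0;
  }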
--
Just because nobody complains does not mean all parachutes are perfect.
Post a reply to this message
Orchid Win7 v1 <voi### [at] devnull> wrote:
> Apparently Google is your friend. After some minimal amount of effort, I
> came across a very well-written forum post which helpfully explains that
> in Debug mode all variables are guaranteed to be initialised to default
> values, whereas in Release mode variables take on whatever random
> gibberish happens to be in memory, unless you remember to explicitly
> initialise them to something sane.
Initializing variables takes clock cycles, which is why C hackers don't
want them being initialized in situations where they are going to be
assigned to anyway... (As if this matters at all in 99.999% of the cases.)
Many compilers will analyze the code and give a warning about variables
being used uninitialized, but this analysis is inevitably limited
(because, as you may know, proving that e.g. a variable is used
uninitialized is, in general, an undecidable problem.)
There are some external tools that can be used to analyze the program
while it runs, and will detect things like this (as well as memory leaks
and accessing freed memory or out-of-bound accesses.) The only free one
I know of is valgrind. However, it only works on Linux and Mac OS X. No
such luck in Windows.
There are commercial programs that do the same (and more.) One that I know
of is AQtime.
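As a rough sketch (a deliberately broken program, names invented), this
is the kind of thing valgrind's memcheck reports when the binary is run
under it with --leak-check=full:

  #include <iostream>

  int main()
  {
      int* values = new int[4];          // allocated but never initialized, never freed
      std::cout << values[0] << '\n';    // reported as depending on an uninitialised value
      return 0;                          // the array shows up as "definitely lost" at exit
  }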
--
- Warp
Post a reply to this message
Orchid Win7 v1 <voi### [at] devnull> wrote:
> This would be acceptable if VC offered some way to find out WHAT
> TEMPLATE it was attempting to expand when the error occurred. But no, it
> just shows you the source code for the STL (or Boost or whatever) and
> points at the line on which the problem was actually detected. There
> seems to be no way that I can discover what line of which file resulted
> in this template being invoked in the first place - and THAT is surely
> where the problem is, not in the STL itself.
All compilers I know of will give you the full chain of calls that
ended up in the erroneous line. (And usually looking at the first error
message that refers to your code will immediately reveal the problem.)
I would be quite surprised if VC didn't do this.
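A minimal sketch of that situation (the Point type is invented for
illustration): the hard error appears deep inside <algorithm>, but the
instantiation notes ("required from ..." on g++/clang, "see reference to
... being compiled" on MSVC) lead straight back to the std::sort call in
user code.

  #include <algorithm>
  #include <vector>

  struct Point { int x, y; };            // no operator< defined

  int main()
  {
      std::vector<Point> pts(3);
      std::sort(pts.begin(), pts.end()); // error is reported inside the standard library,
                                         // but the instantiation chain ends at this line
      return 0;
  }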
--
- Warp
Post a reply to this message
On 24/05/2013 06:00 PM, Warp wrote:
> All compilers I know of will give you the full chain of calls that
> ended up in the erroneous line. (And usually looking at the first error
> message that refers to your code will immediately reveal the problem.)
>
> I would be quite surprised if VC didn't do this.
...which probably just means I need to go discover where in the UI this
information is hidden. It's probably quite simple once you know where to
look.
Post a reply to this message
>> in Debug mode all variables are guaranteed to be initialised to default
>> values, whereas in Release mode variables take on whatever random
>> gibberish happens to be in memory, unless you remember to explicitly
>> initialise them to something sane.
>>
>> Ouch. >_<
>
> That's why options like -Wall -Wextra -pedantic -Wold-style-cast are my
> friends with g++.
>
> I do not know MSVC enough, but it might have that kind of warnings too
> (on explicit demand, of course)
I think MSVC actually *does* output warnings... It's just that every
time you compile the sources, it generates many hundred lines of
"stuff", and any warning messages are swamped by all the other output. I
think I've seen a warning or two flash past, but it would be quite
time-consuming to actually go read them all. (And most warnings are just
"printf is deprecated; please use printf_s instead".)
Post a reply to this message
On 24/05/2013 05:58 PM, Warp wrote:
> Initializing variables takes clock cycles, which is why C hackers don't
> want them being initialized in situations where they are going to be
> assigned to anyway... (As if this matters at all in 99.999% of the cases.)
I was thinking about this the other day. AFAIK, C was designed to run on
the PDP line of minicomputers. Back in those days, the path to maximising
performance was to minimise the number of opcodes to be executed. That's
why we had CISC; the more work done per opcode, the fewer opcodes and
hence the fewer fetch / decode cycles wasted.
Today, it appears to me that the number of opcodes is nearly moot. If
you do 20 unnecessary arithmetic operations, on today's super-scalar
architectures with deep pipelining, it'll probably run most of that lot
in parallel anyway. But if your code causes a CACHE MISS or a BRANCH
MISPREDICTION... it will cost you HUNDREDS of compute cycles.
In summary, it seems that doing work twice is no longer expensive.
Accessing memory in the wrong order and doing indirect jumps are the
expensive things now. (So... I guess that makes dynamic dispatch really
expensive then?)
> Many compilers will analyze the code and give a warning about variables
> being used uninitialized, but this analysis is inevitably limited
> (because, as you may know, proving that e.g. a variable is used
> uninitialized is, in general, an undecidable problem.)
Yeah, I think VC might actually be giving me warnings, but they're
getting lost in the miles of other output.
Part of the problem is probably also that I don't completely understand
how variable initialisation works in C++. (E.g., class constructors get
called whether you want them to or not, so if it has a nullary
constructor, it should be initialised to something sane...)
> There are some external tools that can be used to analyze the program
> while it runs, and will detect things like this (as well as memory leaks
> and accessing freed memory or out-of-bound accesses.) The only free one
> I know of is valgrind. However, it only works on Linux and Mac OS X. No
> such luck in Windows.
Oh, really? I wasn't aware valgrind didn't work on Windows... (Then
again, it's not like I've looked into it. I doubt I could even figure
out how to work such a complex tool.)
Post a reply to this message
Orchid Win7 v1 <voi### [at] devnull> wrote:
> I think MSVC actually *does* output warnings... It's just that every
> time you compile the sources, it generates many hundred lines of
> "stuff", and any warning messages are swamped by all the other output. I
> think I've seen a warning or two flash past, but it would be quite
> time-consuming to actually go read them all.
In principle you should get 0 warnings for a properly-written program.
> (And most warnings are just
> "printf is deprecated; please use printf_s instead".)
That particular warning can be turned off in VC. A google search should
tell you how.
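As far as I know, the usual way is to define _CRT_SECURE_NO_WARNINGS
before any CRT header is included (or pass it via /D on the command
line); the warning in question is C4996. A minimal sketch:

  #define _CRT_SECURE_NO_WARNINGS   // silences the "printf is deprecated, use printf_s" family (C4996)
  #include <cstdio>

  int main()
  {
      std::printf("no deprecation warning for this call\n");
      return 0;
  }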
--
- Warp
Post a reply to this message
Orchid Win7 v1 <voi### [at] devnull> wrote:
> On 24/05/2013 05:58 PM, Warp wrote:
> > Initializing variables takes clock cycles, which is why C hackers don't
> > want them being initialized in situations where they are going to be
> > assigned to anyway... (As if this matters at all in 99.999% of the cases.)
> I was thinking about this the other day. AFAIK, C was designed to run on
> the PDP line of minicomputers. Back in those days, the path to maximising
> performance was to minimise the number of opcodes to be executed. That's
> why we had CISC; the more work done per opcode, the fewer opcodes and
> hence the fewer fetch / decode cycles wasted.
C was also developed in a time when compilers did almost zero
optimization. Most C programs were "hand-optimized" for something
like 20 years, before compiler technology caught up, making such
manual optimization almost completely moot.
(Just as a concrete example, "i * 2" would for quite a long time produce
an actual multiplication opcode, which was extremely slow especially back
in those days, which is why it was usually written as "i << 1" by C hackers,
which produces a bit shift opcode that's much faster. Nowadays compilers
will detect both situations and use whatever is faster in the target
architecture, making the whole manual optimization completely moot.)
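A tiny sketch of that point (function names invented): with a modern
optimizer, say g++ -O2, both of these compile to the same machine code,
so the hand-written shift buys nothing.

  unsigned twice_mul(unsigned i)   { return i * 2;  }   // the optimizer emits a shift (or lea) anyway
  unsigned twice_shift(unsigned i) { return i << 1; }   // same generated code as the line above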
> In summary, it seems that doing work twice is no longer expensive.
> Accessing memory in the wrong order and doing indirect jumps are the
> expensive things now. (So... I guess that makes dynamic dispatch really
> expensive then?)
Calling a virtual function in C++ is no slower in practice than calling
a regular function. That additional indirection level is a minuscule
overhead compared to everything else that's happening with a function call.
It might make some difference in rare cases with very small functions
that are called in a really tight inner loop that runs for millions of
iterations, especially if said function can be inlined by the compiler.
However, it's rare to need dynamic dispatch in such situations anyway.
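For reference, a minimal sketch of the dynamic dispatch in question
(class names invented): the call goes through the vtable, which is the
one extra indirection being discussed.

  #include <iostream>
  #include <memory>

  struct Shape
  {
      virtual ~Shape() {}
      virtual double area() const = 0;     // resolved at run time through the vtable
  };

  struct Square : Shape
  {
      explicit Square(double s) : side(s) {}
      double area() const { return side * side; }
      double side;
  };

  int main()
  {
      std::unique_ptr<Shape> shape(new Square(3.0));
      std::cout << shape->area() << '\n';  // indirect call; the indirection itself is a tiny cost
      return 0;
  }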
> Part of the problem is probably also that I don't completely understand
> how variable initialisation works in C++. (E.g., class constructors get
> called whether you want them to or not, so if it has a nullary
> constructor, it should be initialised to something sane...)
Basic types do not get implicitly initialized (except in some
circumstances), user-defined types do. In other words, if you have
an int and a std::string as members of a class, the int won't be
implicitly initialized and you have to explicitly initialize it in
the constructor. The std::string will, because it's a class, and thus
doesn't need to be explicitly initialized.
A raw pointer is a basic type and thus will likewise not be implicitly
initialized. std::shared_ptr is a class and will always be initialized
(to null.)
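A minimal sketch of those rules (the Widget class is invented for
illustration):

  #include <memory>
  #include <string>

  struct Widget
  {
      int                  count;    // basic type: NOT implicitly initialized
      int*                 raw;      // raw pointer: NOT implicitly initialized, points to garbage
      std::string          name;     // class type: default-constructed to "" automatically
      std::shared_ptr<int> shared;   // class type: default-constructed to null automatically

      Widget() : count(0), raw(nullptr) {}   // the basic-type members have to be set explicitly
  };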
--
- Warp
Post a reply to this message
>> I was thinking about this the other day. AFAIK, C was designed to run on
>> the PDP line of minicomputers. Back in those days, the path to maximising
>> performance was to minimise the number of opcodes to be executed. That's
>> why we had CISC; the more work done per opcode, the fewer opcodes and
>> hence the fewer fetch / decode cycles wasted.
>
> C was also developed in a time when compilers did almost zero
> optimization. Most C programs were "hand-optimized" for something
> like 20 years, before compiler technology caught up, making such
> manual optimization almost completely moot.
From what I can tell, C was developed at a time when you actually ran
CPP by feeding in your source files and header files on several tapes,
and having CPP write the final, combined result onto another tape. You
then unloaded CPP, loaded CC, and it read the tape back in and spewed out
machine code as it went. (Which is why you need forward declarations and
stuff; the source code was literally too large to hold in memory all at
once.)
When 16K was a huge amount of RAM, these kinds of gyrations were
necessary. On my dev box, with 4GB of RAM, it seems kinda silly...
(Having said that, if you have a microcontroller with 2K of RAM and 4K
of ROM, then C is about the only language that can target it.)
> (Just as a concrete example, "i * 2" would for quite a long time produce
> an actual multiplication opcode, which was extremely slow especially back
> in those days, which is why it was usually written as "i << 1" by C hackers,
> which produces a bit shift opcode that's much faster. Nowadays compilers
> will detect both situations and use whatever is faster in the target
> architecture, making the whole manual optimization completely moot.)
Curiously, the Haskell compiler does heaps and heaps of really
high-level optimisations like removing redundant computations, inlining
functions, transforming nested conditional tests and so on. But it
utterly fails to perform trivial low-level optimisations like replacing
arithmetic with bitshifts. Partly because that stuff obviously varies
somewhat per-platform - and partly because it's not very "exciting".
Design a radical new optimisation pass and you can publish a paper on
it. Implement mundane stuff that other compilers have done for years and
nobody will care.
(This is in part why there's now an LLVM backend. Hopefully LLVM will do
this kind of stuff for you...)
>> In summary, it seems that doing work twice is no longer expensive.
>> Accessing memory in the wrong order and doing indirect jumps are the
>> expensive things now. (So... I guess that makes dynamic dispatch really
>> expensive then?)
>
> Calling a virtual function in C++ is no slower in practice than calling
> a regular function. That additional indirection level is a minuscule
> overhead compared to everything else that's happening with a function call.
It's not so much the jump itself, it's not being able to start prefetching
the instructions at the other end until the target address has been
computed, leading to a pipeline bubble.
That said, if you're running JIT-compiled code with garbage collection
and whatnot, the overhead of a few extra jumps is probably moot. (E.g.,
if your code is Java or C# or Python or something.)
>> Part of the problem is probably also that I don't completely understand
>> how variable initialisation works in C++.
>
> Basic types do not get implicitly initialized (except in some
> circumstances), user-defined types do.
Yeah, for some reason I had it in my head that it's whether the variable
is a class member or just a local variable. What you said makes more sense.
> A raw pointer is a basic type and thus will likewise not be implicitly
> initialized.
So it points to random memory?
> std::shared_ptr is a class and will always be initialized
That is what I thought.
> (to null.)
That's the bit I failed to anticipate.
Post a reply to this message
>> A raw pointer is a basic type and thus will likewise not be implicitly
>> initialized.
>
> So it points to random memory?
>
Yes. C++ still allows you to shoot yourself in the foot.
--
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/* flabreque */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/* @ */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/* gmail.com */}camera{orthographic location<6,1.25,-6>look_at a }
Post a reply to this message