POV-Ray : Newsgroups : povray.off-topic : Adventures with C++ : Re: Adventures with C++
  Re: Adventures with C++  
From: Orchid Win7 v1
Date: 24 May 2013 17:08:53
Message: <519fd6e5$1@news.povray.org>
>> I was thinking about this the other day. AFAIK, C was designed to run on
>> the PDP line of mainframes. Back in those days, the path to maximising
>> performance was to minimise the number of opcodes to be executed. That's
>> why we had CISC; the more work done per opcode, the fewer opcodes and
>> hence the fewer fetch / decode cycles wasted.
>
> C was also developed in a time where compilers did almost zero
> optimization. Most C programs were "hand-optimized" for something
> like 20 years, before compiler technology caught up, making such
> manual optimization almost completely moot.

From what I can tell, C was developed at a time when you actually ran 
CPP by feeding in your source files and header files on several tapes, 
and having CPP write the final, combined output onto another tape. You 
then unloaded CPP, loaded CC, and it read the tape back in, spewing out 
machine code as it went. (Which is why you need forward declarations 
and such: the source code was literally too large to hold in memory 
all at once.)

When 16K was a huge amount of RAM, these kinds of gyrations were 
necessary. On my dev box, with 4GB of RAM, it seems kinda silly...

(Having said that, if you have a microcontroller with 2K of RAM and 4K 
of ROM, then C is about the only language that can target it.)

> (Just as a concrete example, "i * 2" would for a quite long time produce
> an actual multiplication opcode, which was extremely slow especially back
> in those days, which is why it was usually written as "i<<  1" by C hackers,
> which produces a bit shift opcode that's much faster. Nowadays compilers
> will detect both situations and use whatever is faster in the target
> architecture, making the whole manual optimization completely moot.)

Curiously, the Haskell compiler does heaps and heaps of really 
high-level optimisations like removing redundant computations, inlining 
functions, transforming nested conditional tests and so on. But it 
utterly fails to perform trivial low-level optimisations like replacing 
arithmetic with bitshifts. Partly because that stuff obviously varies 
somewhat per-platform - and partly because it's not very "exciting". 
Design a radical new optimisation pass and you can publish a paper on 
it. Implement mundane stuff that other compilers have done for years and 
nobody will care.

(This is in part why there's now an LLVM backend. Hopefully LLVM will do 
this kind of stuff for you...)

>> In summary, it seems that doing work twice is no longer expensive.
>> Accessing memory in the wrong order and doing indirect jumps are the
>> expensive things now. (So... I guess that makes dynamic dispatch really
>> expensive then?)
>
> Calling a virtual function in C++ is no slower in practice than calling
> a regular function. That additional indirection level is a minuscule
> overhead compared to everything else that's happening with a function call.

It's not so much the jump, it's the not being able to start prefetching 
the instructions at the other end until after the target address has 
been computed, leading to a pipeline bubble.

That said, if you're running JIT-compiled code with garbage collection 
and whatnot, the overhead of a few extra jumps is probably moot. (E.g., 
if your code is Java or C# or Python or something.)

>> Part of the problem is probably also that I don't completely understand
>> how variable initialisation works in C++.
>
> Basic types do not get implicitly initialized (except in some
> circumstances), user-defined types do.

Yeah, for some reason I had it in my head that it's whether the variable 
is a class member or just a local variable. What you said makes more sense.

> A raw pointer is a basic type and thus will likewise not be implicitly
> initialized.

So it points to random memory?

> std::shared_ptr is a class and will always be initialized

That is what I thought.

> (to null.)

That's the bit I failed to anticipate.


