In article <403897c7$1@news.povray.org>, "Tek" <tek### [at] evilsuperbrain com>
wrote:
> Couldn't compilers inline functions 10 years ago?
Not as intelligently.
> Anyway, my point isn't just the function call overhead, it's the fact
> that it's not an operator internally supported by the compiler with a
> bunch of rules, it's a series of instructions of unspecified length
> elsewhere.
Which you have to call no matter what. It being done through an operator
function rather than through an explicit member function call makes
exactly zero difference.
> Well, it can lose the function call overhead, but not the overhead of
> it being a function (i.e. several instructions) rather than a simple
> instruction (which is really my point).
Zero overhead. None. Nada. Not one extra instruction.
> What I'm saying is I want "+" to produce the same amount of assembler code
> whenever I use it.
Well, you can't. Adding some things takes more work. Using member
functions doesn't help this at all.
> > For example, suppose you have this (in C++):
>
> Aha! but you'd never define an object for a type inherently supported by the
> compiler, only for types which require something more complex.
Actually, iterators often do exactly this, being a thin wrapper over a
pointer.
> So you end up with "+" becoming different amounts of compiled code
> depending on the context in which it's used. This makes it harder to
> optimise code, since it is harder to keep track of where the more
> complex + functions are being invoked.
Just be aware of what you're adding. It's no harder than keeping track
of a bunch of methods named add(), mult(), etc., and the code is much
easier to read.
> I think you've missed my point, I'm not saying operator overloading does
> anything different to function calls, I'm saying addition functions for two
> matrices are different to addition functions for the processor's inbuilt
> types, and it is useful when optimising to keep track of this difference.
How is this so? If you need to add two matrices, you need to add two
matrices. You can't do this with the same code for adding two integers.
> Operator overloading is great if you want to code at a higher level without
> being bogged down by thinking about what's happening at the lowest level, but
> when optimising code you want to do the opposite! Heck, if it were even
> remotely feasible we'd write everything in assembler...
No...then you run into the "can't see the forest for the trees" problem.
You'll spend too much time optimizing tiny little things, and completely
miss larger optimizations that can have a huge impact. And I still
haven't seen an argument against including operator overloading in a
language...even if this were true, it would apply only to specific
projects.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: <chr### [at] tag povray org>
http://tag.povray.org/