POV-Ray : Newsgroups : povray.off-topic : Questionable optimizations : Re: Questionable optimizations
From: clipka
Date: 19 Jul 2009 09:20:00
Message: <web.4a631cc82c54829feecd81460@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> But I'm trying to figure out why you would add optimizations
> specifically to improve the performance of code you are optimizing *because*
> you know it is flawed.

No, optimizers don't optimize code *because* it's flawed, just maybe *although*
it's flawed.

Or, perhaps more fitting the intention in this case: although it is *redundant*.

After all, that's a crucial part of optimization: it allows programmers to write
source code that is more self-descriptive and more self-defensive, yet still runs
at the speed of hand-optimized code.

For instance, one might write:

    if (i <= LOWER_BOUND) {
        return ERR_TOO_LOW;
    }
    // very long code here
    if (i > LOWER_BOUND && i <= UPPER_BOUND) {
        // (1)
        return OK;
    }

The test for i>LOWER_BOUND is, of course, perfectly redundant. But explicitly
stating it may help...

(a) to make it easier for the casual reader to see that at (1), i>LOWER_BOUND is
always true; after all, they may not have read the part before the very long
code, or may have forgotten about the i<=LOWER_BOUND test by then; and

(b) to make sure that at (1), i>LOWER_BOUND is true even when someone tampers
with the remainder of the code and happens to remove the first check.


> Right.  I think the comments on the report of it that I read implied that
> the assignment was added later, and the problem is that in C, the
> declarations all had to be at the top.
>
> So someone wrote
>     blah * xyz;
>     if (!xyz) return failure;
>     ... use xyz ...
>
> Someone else came along and added a declaration
>     blah * xyz;
>     another pdq = xyz->something;
>     if (!xyz) ....

That may indeed explain how the code came to be there in the first place.

> Instead, they should have said
>     blah * xyz;
>     another pdq;
>     if (!xyz) ...
>     pdq = xyz->something;

Yesss. Absolutely.

I must confess that I used to do it otherwise, too. It was PC-lint that taught
me not to, by constantly pestering me about it. And developing embedded
software for the automotive industry, I had to obey: MISRA rules forbid using
*any* language constructs that officially lead to unspecified behavior.


> Agreed. The problem is that the compiler optimized out good code based on
> bogus code. My only question is whether I'm missing something here, because
> such an optimization seems really a bad idea.

Optimizers aren't designed to detect bogus code - they're designed to speed
things up.

Even most compilers do a pretty poor job of detecting bogus code.

That's why you need static code-analysis tools, specifically designed to
identify such bogosities.


> > There's even some reason to argue that if the programmer happily dereferences a
> > pointer without checking it for NULL, why shouldn't the compiler assume that
> > other provisions have been made that it cannot be NULL in the first place?
>
> That's exactly what the compiler did. It looked, saw the dereference of the
> pointer, then the check that the pointer isn't null, and optimized out the
> check for the pointer being null. But the fact that the programmer said to
> check for NULL at a point where all code paths have already dereferenced the
> pointer would seem to be at least a warning, not an "oh good, here's an
> optimization I can apply."

As I already pointed out above, it may also have been code that the developer
left in there just for clarity, to be removed later, or whatever, *expecting*
the compiler to optimize it away.


> I'm surprised and dismayed when the kernel gets code checked in that's
> syntactically invalid,

At this point, in a good commercial project the developer would already get his
head chopped off - by colleagues who *do* their homework and therefore hit this
stumbling block when trying to compile their own changes for their own module
testing (and who, being good developers, first suspected themselves of having
done something wrong, spent hours tracking down the problem, and were *not*
amused when they ultimately identified their colleague as the culprit).

Checking in code you never actually compiled yourself? Hey, haven't done our
homework, have we?!?

> then released,

... and at this point it would be the build manager's head to roll if it's a
single-platform project or the particular code applies to all projects - or the
test team leader's head if it's intended to be a portable thing and the code is
activated only on certain target platforms (unless of course the code is a fix
for an exotic platform that isn't available in-house).

In free-software projects with all contributors being volunteers, unfortunately
there's no authority to chop off some heads. After all, if you treat the
volunteers too harshly, off they go to someplace more relaxed.


