Questionable optimizations (Message 11 to 20 of 44)
From: clipka
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 17:20:00
Message: <web.4a623b5e2c54829f707a5c110@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> The exploit was a bug in the kernel that dereferenced a pointer before
> checking for null, and the compiler silently optimized out the later check
> for null. If you can get the first dereference to work (by mapping some
> valid memory to the address associated with the null pointer value) then you
> skip over code people thought they wrote into their program and which the
> compiler removed.

I'd rather say it was a flaw in the kernel, forgetting to check for null
*before* dereferencing it.

In a "mission-critical" piece of software such as a kernel, *all* that
comes in from untrusted sources should *always* be checked thoroughly before
doing *anything* with it. "Oh, if this pointer happens to be NULL, the software
will core dump anyway" is a *very* questionable mode of operation.

Code dereferencing a possibly-NULL pointer *is* bogus - it's nothing you can
simply blame on the compiler. And mission-critical code should *never* be
bogus.

There's even some reason to argue that if the programmer happily dereferences a
pointer without checking it for NULL, why shouldn't the compiler assume that
other provisions have been made that it cannot be NULL in the first place?

I'm also surprised that this flaw wasn't found and fixed earlier. There are
tools like PC-lint (taken as an example here because I'm a bit familiar with it)
that do a pretty good job of complaining when you try to dereference a
potentially-NULL pointer. I mean, you'd expect *someone* to run *something*
like PC-lint on code as widely used as the Linux kernel?!

Well, maybe that's a problem with free-software projects: PC-lint costs money.
Companies developing commercial software can afford to buy a few licenses. The
individual parties in a free-software project possibly can't, or refuse to pay
anything for their tools, or refuse to be the only ones paying for something
that benefits all.



From: Warp
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 18:05:38
Message: <4a624732@news.povray.org>
clipka <nomail@nomail> wrote:
> Code dereferencing a possibly-NULL pointer *is* bogus - it's nothing you can
> simply blame on the compiler. And mission-critical code should *never* be
> bogus.

  I didn't get the impression that they were dereferencing a null pointer
deliberately in order to get an exception, or that it was a mistake. (The
only "mistake" was not realizing what kind of optimization gcc would do.)

-- 
                                                          - Warp



From: Darren New
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 19:20:15
Message: <4a6258af$1@news.povray.org>
clipka wrote:
> I'd rather say it was a flaw in the kernel, forgetting to check for null
> *before* dereferencing it.

Yes, exactly. But I'm trying to figure out why you would add optimizations 
specifically to improve the performance of code you are optimizing *because* 
you know it is flawed.

Note that this isn't just general dead-code elimination. This is 
specifically checking for "test for null after already dereferencing", as 
there's an explicit compiler switch to turn that off.
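
If I'm remembering the name correctly, the switch in question is gcc's
-fdelete-null-pointer-checks; something along these lines should turn it off
(flag name quoted from memory, file name just a placeholder - check the gcc
docs to be sure):

    # assumed invocation: -fno-delete-null-pointer-checks tells gcc not to
    # drop null tests on pointers that have already been dereferenced
    gcc -O2 -fno-delete-null-pointer-checks -c somefile.c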

> In a "mission-critical" piece of software such as a kernel, *all* that
> comes in from untrusted sources should *always* be checked thoroughly before
> doing *anything* with it. 

Right.  I think the comments on the report of it that I read implied that 
the assignment was added later, and the problem is that in C, the 
declarations all had to be at the top.

So someone wrote
    blah * xyz;
    if (!xyz) return failure;
    ... use xyz ...

Someone else came along and added a declaration
    blah * xyz;
    another pdq = xyz->something;
    if (!xyz) ....

Instead, they should have said
    blah * xyz;
    another pdq;
    if (!xyz) ...
    pdq = xyz->something;
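
To make that concrete, here's a minimal, compilable sketch of the same pattern 
(invented names, not the actual kernel code):

    /* null_check_demo.c - minimal sketch of the pattern discussed above.
     * With optimization (e.g. gcc -O2), the compiler may take the dereference
     * in the initializer as proof that dev cannot be NULL and silently delete
     * the later "if (!dev)" test; building with
     * -fno-delete-null-pointer-checks keeps the test in place. */
    #include <stdio.h>

    struct device {
        int flags;
    };

    static int poll_device(struct device *dev)
    {
        int flags = dev->flags;   /* dereference happens first...           */

        if (!dev)                 /* ...so this check may be optimized away */
            return -1;

        return flags;
    }

    int main(void)
    {
        struct device d = { 42 };
        printf("%d\n", poll_device(&d));    /* prints 42 */
        /* poll_device(NULL) is the dangerous case: undefined behavior -
         * typically a crash, or worse if address 0 can be mapped. */
        return 0;
    }

The third variant above is the fix: do the check before any dereference.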


 > "Oh, if this pointer happens to be NULL, the software
> will core dump anyway" is a *very* questionable mode of operation.

Exactly.

> Code dereferencing a possibly-NULL pointer *is* bogus - it's nothing you can
> simply blame on the compiler. And mission-critical code should *never* be
> bogus.

Agreed. The problem is that the compiler optimized out good code based on 
bogus code. My only question is whether I'm missing something here, because 
such an optimization seems really a bad idea.

> There's even some reason to argue that if the programmer happily dereferences a
> pointer without checking it for NULL, why shouldn't the compiler assume that
> other provisions have been made that it cannot be NULL in the first place?

That's exactly what the compiler did. It looked, saw the dereference of the 
pointer, then the check that the pointer isn't null, and optimized out the 
check for the pointer being null. But the fact that the programmer said to 
check for NULL at a point where all code paths have already dereferenced the 
pointer would seem to be at least a warning, not an "oh good, here's an 
optimization I can apply."

The only reason I can think of for that would be code generated from some 
other language, at which point I'd think you'd want a compiler flag to turn 
*on* that optimization, with it being off by default.

> I'm also surprised that this flaw wasn't found and fixed earlier. 

I'm surprised and dismayed when the kernel gets code checked in that's 
syntactically invalid, then released, then when people complain about it 
they say "Oh, I didn't know how that code builds" and someone else says "try 
changing the ownership of the source file to be root", and then a few weeks 
later they have a patch on some website somewhere unassociated with the 
actual original project that makes it compile.

I'm beyond being surprised at how bad the free stuff can be if you're 
actually a programmer. </rant>

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Darren New
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 19:21:44
Message: <4a625908@news.povray.org>
Slime wrote:
> Of course this is a contrived example, but it's probably not too far from a 
> real use case.

Ah, that makes sense. Thank you for clarifying that.

It *is* code generation. Just from the same language you're compiling. That 
didn't occur to me. :-)

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Doctor John
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 08:45:18
Message: <4a63155e@news.povray.org>
Darren New wrote:
> Doctor John wrote:
>> At a guess that would be:
>> http://www.theregister.co.uk/2009/07/17/linux_kernel_exploit/
> 
> Yes, but since that wasn't the point of my question and people might
> reasonably assume I'm bashing on Linux if I posted a link to it, I
> thought I'd omit it.  The question was why the compiler felt the need
> for *that* optimization in the first place. When would it ever be a good
> idea?
> 

Actually, I was going to ask the group the same question but you just
beat me to it :-) and you could hardly assume that I was in any way
anti-Linux.
What bothers me (apart from the unwanted optimisation) is why Torvalds
et al have chosen to remain silent on this.
Another thing to remember is that it is not only the kernel code that is
the problem here, it's also gcc - so comments from the Stallman camp
would also be appropriate.

John
-- 
"Eppur si muove" - Galileo Galilei



From: clipka
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 09:20:00
Message: <web.4a631cc82c54829feecd81460@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> But I'm trying to figure out why you would add optimizations
> specifically to improve the performance of code you are optimizing *because*
> you know it is flawed.

No, optimizers don't optimize code *because* it's flawed, just maybe *although*
it's flawed.

Or, possibly fitting this particular case better: although it is *redundant*.

After all, that's a crucial part of optimization. It allows programmers to write
source code that is more self-descriptive and more self-defensive, yet still
runs at the speed of hand-optimized code.

For instance, one might write:

    if (i <= LOWER_BOUND) {
        return ERR_TOO_LOW;
    }
    // very long code here
    if (i > LOWER_BOUND && i <= UPPER_BOUND) {
        // (1)
        return OK;
    }

The test for i>LOWER_BOUND is, of course, perfectly redundant. But explicitly
stating it may help...

(a) to make it easier for the casual reader to see that at (1), i>LOWER_BOUND
is always true; after all, they may not have read the part before the very long
code, or may have forgotten about the i<=LOWER_BOUND test by then; and

(b) to make sure that at (1), i>LOWER_BOUND is true even when someone tampers
with the remainder of the code and happens to remove the first check.


> Right.  I think the comments on the report of it that I read implied that
> the assignment was added later, and the problem is that in C, the
> declarations all had to be at the top.
>
> So someone wrote
>     blah * xyz;
>     if (!xyz) return failure;
>     ... use xyz ...
>
> Someone else came along and added a declaration
>     blah * xyz;
>     another pdq = xyz->something;
>     if (!xyz) ....

That may indeed explain how the code came to be there in the first place.

> Instead, they should have said
>     blah * xyz;
>     another pdq;
>     if (!xyz) ...
>     pdq = xyz->something;

Yesss. Absolutely.

I must confess that I used to do it the other way, too. It was PC-lint that
taught me not to, by constantly pestering me about it. And when developing
embedded software for the automotive industry, I had to obey: MISRA rules forbid
the use of *any* language construct that officially leads to unspecified
behavior.


> Agreed. The problem is that the compiler optimized out good code based on
> bogus code. My only question is whether I'm missing something here, because
> such an optimization seems really a bad idea.

Optimizers aren't designed to detect bogus code - they're designed to speed up
things.

Even most compilers do a pretty poor job at detecting bogus code.

That's why you need static code-analysis tools, specifically designed to
identify such bogosities.


> > There's even some reason to argue that if the programmer happily dereferences a
> > pointer without checking it for NULL, why shouldn't the compiler assume that
> > other provisions have been made that it cannot be NULL in the first place?
>
> That's exactly what the compiler did. It looked, saw the dereference of the
> pointer, then the check that the pointer isn't null, and optimized out the
> check for the pointer being null. But the fact that the programmer said to
> check for NULL at a point where all code paths have already dereferenced the
> pointer would seem to be at least a warning, not an "oh good, here's an
> optimization I can apply."

As I already pointed out above, it may also have been code that the developer
left in there just for clarity, to be removed later, or whatever, *expecting*
the compiler to optimize it away.


> I'm surprised and dismayed when the kernel gets code checked in that's
> syntactically invalid,

At this point, in a good commercial project the developer would already get his
head chopped off - by colleagues who *do* their homework and therefore hit this
stumbling block when trying to compile their own changes for their own module
testing (and who, being good developers, first suspected themselves of having
done something wrong, spent hours tracking down the problem, and were *not*
amused when they ultimately identified their colleague as the culprit).

Checking in code you never actually compiled yourself? Hey, haven't done our
homework, have we?!?

> then released,

... and at this point it would be the build manager's head to roll if it's a
single-platform project or the particular code applies to all projects - or the
test team leader's head if it's intended to be a portable thing and the code is
activated only on certain target platforms (unless of course the code is a fix
for an exotic platform that isn't available in-house).

In free-software projects with all contributors being volunteers, unfortunately
there's no authority to chop off some heads. After all, if you treat the
volunteers too harshly, off they go to someplace more relaxed.



From: clipka
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 09:50:00
Message: <web.4a6324022c54829feecd81460@news.povray.org>
Doctor John <joh### [at] homecom> wrote:
> What bothers me (apart from the unwanted optimisation) is why Torvalds
> et al have chosen to remain silent on this.

What? And lose ground on their "safest operating system on earth" territory?

I did believe that, too, but hearing this news and thinking about the
background and how the "top hats" dealt with the issue, I guess it's
time to start thinking out of the box once again: The Linux developer
community, too, want to "sell" their products in a way. Both their kernel, and
their FSF ideals.

Now, after always claiming that free software is the superior development
approach, they prove that some of its drawbacks are potentially more serious
than they would like them to be.



From: Warp
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 10:11:48
Message: <4a6329a4@news.povray.org>
clipka <nomail@nomail> wrote:

> Doctor John <joh### [at] homecom> wrote:
> > What bothers me (apart from the unwanted optimisation) is why Torvalds
> > et al have chosen to remain silent on this.

> What? And lose ground on their "safest operating system on earth" territory?

> I did believe that, too, but hearing this news and thinking about the
> background and how the "top hats" dealt with the issue, I guess it's
> time to start thinking out of the box once again: The Linux developer
> community, too, want to "sell" their products in a way. Both their kernel, and
> their FSF ideals.

> Now, after always claiming that free software is the superior development
> approach, they prove that some of its drawbacks are potentially more serious
> than they would like them to be.

  Aren't you exaggerating a bit?

  Do you know how many serious security holes are found each year in Linux?
I think the number is in the *hundreds*. A few of these bugs each year provide
a way to get root privileges by running a program exploiting the bug. It has
happened dozens of times in the past, and it will happen in the future. This
is just another one of those.

  (And before anyone says anything, no, Windows is not better. Windows is
year after year always at the top of the list of most security flaws found
during the year.)

  You *can't* expect the OS to be completely safe.

-- 
                                                          - Warp



From: Darren New
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 11:32:42
Message: <4a633c9a$1@news.povray.org>
Warp wrote:
>   Do you know how many serious security holes are found each year in Linux?

And I don't think this one is even in a kernel that was released into major 
distributions, for that matter. :-)

>   (And before anyone says anything, no, Windows is not better. Windows is
> year after year always at the top of the list of most security flaws found
> during the year.)

Is this still true?  I think there's more 3rd-party flakey code, but not 
stuff that comes with Windows necessarily. Certainly not as much as it used 
to be.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Darren New
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 11:40:39
Message: <4a633e77$1@news.povray.org>
clipka wrote:
> Optimizers aren't designed to detect bogus code - they're designed to speed up
> things.

Sure, but it sounded like an optimization that only worked on bogus code, 
which seemed like a waste of time for the person writing the optimizer.

Macros make a good explanation why, tho.
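
To spell out what I mean with a made-up sketch (not Slime's actual example): 
generic macros check their arguments defensively, and after expansion those 
checks often land right after a dereference, where they're provably redundant:

    /* Sketch of how macro expansion yields "redundant" null checks in
     * perfectly reasonable code - invented example. */
    #include <stdio.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Generic, defensive accessor: always checks its argument. */
    #define NODE_VALUE(n) ((n) != NULL ? (n)->value : 0)

    static int head_value_doubled(struct node *head)
    {
        int v = head->value;    /* head is already dereferenced here...     */

        /* ...so after expansion, the "(head) != NULL" test inside NODE_VALUE
         * is provably redundant and the optimizer can drop it - the very
         * optimization being discussed, applied to perfectly sane code. */
        return v + NODE_VALUE(head);
    }

    int main(void)
    {
        struct node n = { 7, NULL };
        printf("%d\n", head_value_doubled(&n));   /* prints 14 */
        return 0;
    }

Nobody writes that by hand, but expanded macros (and inlined functions) look 
like that all the time, which is presumably why the optimization earns its keep.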

> As I already pointed out above, it may also have been code that the developer
> left in there just for clarity, to be removed later, or whatever, *expecting*
> the compiler to optimize it away.

I think this is different. Your cases, sure.

> At this point, in a good commercial project the developer would already get his
> head chopped off

Well, to be fair, they all know it's bogus. It's been at 0.9.x for like five 
years. :-) I'm always amused at open source folks who won't recognise that 
the first thing they give to the public is 1.0 regardless of how you number it.

> Checking in code you never actually compiled yourself? Hey, haven't done our
> homework, have we?!?

Crap like this would have been completely untenable before Google, really.

> test team leader's head if it's intended to be a portable thing and the code is
> activated only on certain target platforms (unless of course the code is a fix
> for an exotic platform that isn't available in-house).

This happened to be some mips-specific assembly. Not exactly exotic, but 
then why are you changing that file if you don't have a mips chip to test it 
on in the first place?

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."


