clipka wrote:
> I'd rather say it was a flaw in the kernel, forgetting to check for null
> *before* dereferencing.
Yes, exactly. But I'm trying to figure out why you would add an optimization
specifically to improve the performance of code you already know is flawed.
Note that this isn't just general dead-code elimination. This is
specifically looking for "test for null after already dereferencing", as
there's an explicit compiler flag to turn that off.
> In a "mission-critical" piece of software such as a kernel, *all* that
> comes in from untrusted sources should *always* be checked thoroughly before
> doing *anything* with it.
Right. I think the comments on the report I read implied that
the assignment was added later, and the problem is that in pre-C99 C, the
declarations all had to be at the top of the block.
So someone wrote

    blah *xyz = ...;
    if (!xyz) return failure;
    ... use xyz ...

Someone else came along and added a declaration whose initializer
dereferences the pointer:

    blah *xyz = ...;
    another pdq = xyz->something;
    if (!xyz) ...

Instead, they should have written

    blah *xyz = ...;
    another pdq;
    if (!xyz) ...
    pdq = xyz->something;
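To make that concrete, here's roughly the same pattern as compilable C (the
names are invented by me; the real kernel case involved the tun driver, if I
remember right):

    struct thing { int field; };

    int use_thing(struct thing *t)
    {
        int val = t->field;   /* dereference happens first; the compiler
                                 concludes t cannot be NULL here */
        if (!t)               /* ...so this test is provably dead and
                                 gets deleted under the optimization */
            return -1;
        return val;
    }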
> "Oh, if this pointer happens to be NULL, the software
> will core dump anyway" is a *very* questionable mode of operation.
Exactly.
> Code dereferencing a possibly-NULL pointer *is* bogus - it's nothing you can
> simply blame on the compiler. And mission-critical code should *never* be
> bogus.
Agreed. The problem is that the compiler optimized out good code based on
bogus code. My only question is whether I'm missing something here, because
such an optimization seems like a really bad idea.
> There's even some reason to argue that if the programmer happily dereferences a
> pointer without checking it for NULL, why shouldn't the compiler assume that
> other provisions have been made that it cannot be NULL in the first place?
That's exactly what the compiler did. It saw the dereference of the
pointer, then the subsequent check that the pointer isn't null, and
optimized the check out. But the fact that the programmer asked to
check for NULL at a point where all code paths have already dereferenced the
pointer would seem to merit at least a warning, not an "oh good, here's an
optimization I can apply."
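In other words, after the optimizer runs, the function I sketched above
effectively becomes this (my reconstruction of the effect, not actual
compiler output):

    int use_thing(struct thing *t)
    {
        return t->field;   /* the if (!t) branch is gone */
    }

If t is NULL, you now take the fault at the dereference with no chance for
the "return -1" path to ever run.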
The only reason I can think of for that would be code generated from some
other language, at which point I'd think you'd want a compiler flag to turn
*on* that optimization, with it being off by default.
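For what it's worth, the knob does exist, just in the other direction: as I
understand it, GCC's flag is -fdelete-null-pointer-checks, on by default when
optimizing, and you suppress it like so (the file name is just my example; I
believe the kernel build started passing this flag after the incident):

    $ gcc -O2 -S use_thing.c                                  # check deleted
    $ gcc -O2 -fno-delete-null-pointer-checks -S use_thing.c  # check kept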
> I'm also surprised that this flaw wasn't found and fixed earlier.
I'm surprised and dismayed when the kernel gets code checked in that's
syntactically invalid and then released; then, when people complain about it,
they say "Oh, I didn't know how that code builds," someone else says "try
changing the ownership of the source file to root," and a few weeks later
there's a patch on some website unassociated with the actual project that
makes it compile.
I'm beyond being surprised at how bad the free stuff can be if you're
actually a programmer. </rant>
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."