Warp wrote:
> Darren New <dne### [at] san rr com> wrote:
>> Someone recently found a bug whereby a piece of C code read
>
>> xyz->pdq = some value;
>> ....
>> if (!xyz) return (error code);
>> ...
>> do_something_dangerous();
>
>
>> Someone mapped a page into 0, invoked the bit of the kernel that had the
>> code, and did something dangerous in spite of xyz being null. It turns out
>> GCC looks at the test for NULL in the second line, decides that since xyz has
>> already been dereferenced in the first line it can't be NULL (or the dereference
>> would have segfaulted), and eliminates that test.
>
> Why couldn't the compiler assume that if nothing modifies xyz between
> the assignment and the test, that it doesn't get modified? It seems to me
> like a rather valid assumption.
There's no assumption that xyz didn't get modified. The assumption is that since
the first line dereferenced xyz, the test on the second line can never be true,
because a NULL dereference on the first line would already have dumped core.
I.e., if the code had been
int pdq = *xyz;
if (!xyz) do_something();
then the compiler would omit the call to do_something(), as well as the test.
No assignment to xyz there, just a dereference.
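To make that concrete, here's a minimal compilable sketch of the pattern (the
struct and function names are made up for illustration, not the actual kernel
code). Built with optimization (e.g. gcc -O2), GCC is permitted to delete the
!xyz test and the early return, because the dereference on the line above lets
it assume xyz is non-null:

#include <stdio.h>

struct widget { int pdq; };

int use_widget(struct widget *xyz)
{
    int pdq = xyz->pdq;   /* dereference happens first...           */
    if (!xyz)             /* ...so the compiler may treat this test */
        return -1;        /* as dead code and remove it entirely    */
    printf("pdq = %d\n", pdq);
    return 0;
}

int main(void)
{
    struct widget w = { 42 };
    return use_widget(&w);   /* fine; the trouble starts when a caller passes NULL */
}

With a NULL argument the dereference is undefined behavior, and that is exactly
the license the optimizer uses to drop the check.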
I'm trying to figure out what possible benefit such an optimization could
have. I.e., this code is obviously *wrong* - why would you bother to include
optimizations to improve the performance of erroneous code?
> (Btw, AFAIK the C standard specifically says that a null pointer cannot
> contain a valid value. Thus the compiler can assume that it can't contain
> a valid value.)
Yep.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."