Darren New <dne### [at] san rr com> wrote:
> Someone recently found a bug whereby a piece of C code read
> xyz->pdq = some value;
> ....
> if (!xyz) return (error code);
> ...
> do_something_dangerous();
> Someone mapped a page into 0, invoked the bit of the kernel that had the
> code, and did something dangerous in spite of xyz being null. It turns out
> GCC looks at the test for NULL in the second line, decides that since it has
> already been dereferenced in the first line it can't be NULL (or a segfault
> would have thrown), and eliminates that test.
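
For reference, the pattern above looks roughly like this in self-contained form
(the names are made up for illustration, not the actual kernel code). Compiled
with optimizations, GCC may delete the null check because xyz has already been
dereferenced on the line before it:

  struct thing { int pdq; };

  extern void do_something_dangerous(void);

  int handle(struct thing *xyz, int some_value)
  {
      xyz->pdq = some_value;   /* dereference: from here on GCC assumes xyz != NULL */
      if (!xyz)                /* dead under that assumption, so the test may be removed */
          return -1;           /* "error code" */
      do_something_dangerous();
      return 0;
  }
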
Why couldn't the compiler assume that if nothing modifies xyz between
the assignment and the test, it doesn't get modified? It seems to me
like a rather valid assumption.
The only exception is if xyz is not local to the function, in which case
it might get modified by another thread. But in that case you have to tell
the compiler that it may be modified at any time (at the very least by
declaring xyz volatile, although it's better to guard access to it with
a mutex lock).
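
Something like this (hypothetical names, just a sketch of the two options):

  #include <pthread.h>

  struct thing { int pdq; };

  /* volatile tells the compiler the pointer may change at any time,
     so it cannot cache it across the null test... */
  extern volatile struct thing *shared_xyz;

  /* ...but guarding all access with a mutex is the better option: */
  extern struct thing *guarded_xyz;
  extern pthread_mutex_t xyz_lock;

  int read_pdq(void)
  {
      int value = -1;
      pthread_mutex_lock(&xyz_lock);
      if (guarded_xyz)                 /* test and use under the same lock */
          value = guarded_xyz->pdq;
      pthread_mutex_unlock(&xyz_lock);
      return value;
  }
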
(Btw, AFAIK the C standard specifically says that a null pointer cannot
point to any valid object. Thus once the pointer has been dereferenced,
the compiler can assume it wasn't null.)
--
- Warp