POV-Ray : Newsgroups : povray.off-topic : Questionable optimizations
From: Darren New
Date: 17 Jul 2009 19:50:34
Message: <4a610e4a$1@news.povray.org>
Someone recently found a bug whereby a piece of C code read:

    xyz->pdq = some_value;       /* dereference first */
    ...
    if (!xyz) return error_code; /* null test second */
    ...
    do_something_dangerous();


Someone mapped a page at address 0, invoked the part of the kernel containing 
this code, and did something dangerous in spite of xyz being NULL. It turns 
out GCC looks at the test for NULL on the second line, decides that since 
xyz has already been dereferenced on the first line it can't be NULL (or a 
segfault would already have occurred), and eliminates the test.
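To make the pattern concrete, here is a minimal compilable sketch of the shape of code described above. The struct, field, and function names (other than xyz and pdq, which come from the post) are made up for illustration; the call below passes a valid pointer so the program runs safely, but passing NULL would be undefined behavior, which is exactly the license GCC uses to delete the check:

    #include <stdio.h>
    #include <stddef.h>

    struct dev { int pdq; };

    /* Hypothetical routine with the buggy ordering. Because xyz is
     * dereferenced on the first line, GCC at -O2 may infer that xyz
     * cannot be NULL and delete the if (!xyz) test entirely. */
    int handle(struct dev *xyz)
    {
        int saved = xyz->pdq;   /* dereference happens first */
        if (!xyz)               /* compiler: "provably non-NULL" -> removed */
            return -1;
        printf("doing something dangerous with pdq=%d\n", saved);
        return 0;
    }

    int main(void)
    {
        struct dev d = { .pdq = 42 };
        handle(&d);   /* safe call; handle(NULL) would be undefined behavior */
        return 0;
    }

With a non-NULL argument the program behaves identically whether or not the check was eliminated, which is why the bug hides until someone arranges for a page to be mapped at address 0.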

Now the question I have is: why in the world would you optimize that test 
away? Is using a value and then testing it for null afterwards so frequent 
that you need to throw the test away? And if so, wouldn't it be better to 
simply warn that it probably isn't what you intend, just as happens when 
comparing an unsigned value to a negative number? I just can't imagine a 
situation where the compiler can prove that xyz has been dereferenced 
before being checked, yet the later null test isn't indicative of a 
programming error.
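For comparison, here is a sketch of the safe ordering, using the same made-up struct as an assumption: with the test placed before the first dereference, the compiler has nothing to infer and the check survives. (GCC also provides -fno-delete-null-pointer-checks to disable this optimization outright, which is what the Linux kernel build turned on after this incident.)

    #include <stdio.h>
    #include <stddef.h>

    struct dev { int pdq; };

    /* Safe ordering: the null test precedes any dereference, so the
     * compiler cannot conclude xyz is non-NULL and the check remains. */
    int handle_fixed(struct dev *xyz)
    {
        if (!xyz)
            return -1;      /* reached even when a NULL is passed in */
        xyz->pdq = 42;
        return xyz->pdq;
    }

    int main(void)
    {
        struct dev d = { .pdq = 0 };
        printf("%d %d\n", handle_fixed(&d), handle_fixed(NULL));
        return 0;
    }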

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."


