From: Darren New
Subject: Questionable optimizations
Date: 17 Jul 2009 19:50:34
Message: <4a610e4a$1@news.povray.org>
Someone recently found a bug whereby a piece of C code read

xyz->pdq = some value;
....
if (!xyz) return (error code);
...
do_something_dangerous();


Someone mapped a page into 0, invoked the bit of the kernel that had the 
code, and did something dangerous in spite of xyz being null.  It turns out 
GCC looks at the test for NULL in the second line, decides that since it has 
already been dereferenced in the first line it can't be NULL (or a segfault 
would have thrown), and eliminates that test.
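
Here's a minimal, self-contained sketch of the pattern (the names are made 
up, not the actual kernel code); the GCC switch that licenses the deletion 
appears to be -fdelete-null-pointer-checks, which is enabled by default:

void do_something_dangerous(void);

struct thing { int pdq; };

int handle(struct thing *xyz)
{
    xyz->pdq = 42;   /* dereference happens first...                 */
    if (!xyz)        /* ...so GCC concludes xyz can't be NULL here   */
        return -1;   /*    and deletes this test and branch entirely */
    do_something_dangerous();
    return 0;
}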

Now the question I have is, why in the world would you optimize that test 
away? Is using a value and then testing it for null later so frequent that 
you need to throw away that test? And if so, wouldn't it be better to simply 
warn that it probably isn't what you intend, just like happens with 
comparing an unsigned value to a negative number?  I just can't imagine a 
situation where the compiler can prove that xyz was dereferenced before 
being checked, and yet the later null test isn't indicative of a 
programming error.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Darren New
Subject: Re: Questionable optimizations
Date: 17 Jul 2009 21:33:38
Message: <4a612672@news.povray.org>
Darren New wrote:
> Someone recently found a bug whereby a piece of C code read

... Which should be interpreted as "I recently found a discussion of a bug 
that someone else found..." :-)  Just to clear that up.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Doctor John
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 07:43:35
Message: <4a61b567$1@news.povray.org>
Darren New wrote:
> Darren New wrote:
>> Someone recently found a bug whereby a piece of C code read
> 
> .... Which should be interpreted as "I recently found a discussion of a
> bug that someone else found..." :-)  Just to clear that up.
> 

At a guess that would be:
http://www.theregister.co.uk/2009/07/17/linux_kernel_exploit/

John
-- 
"Eppur si muove" - Galileo Galilei



From: Warp
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 08:00:02
Message: <4a61b942@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Someone recently found a bug whereby a piece of C code read

> xyz->pdq = some value;
> ....
> if (!xyz) return (error code);
> ...
> do_something_dangerous();


> Someone mapped a page into 0, invoked the bit of the kernel that had the 
> code, and did something dangerous in spite of xyz being null.  It turns out 
> GCC looks at the test for NULL in the second line, decides that since it has 
> already been dereferenced in the first line it can't be NULL (or a segfault 
> would have thrown), and eliminates that test.

  Why couldn't the compiler assume that if nothing modifies xyz between
the assignment and the test, it doesn't get modified? That seems to me
like a rather valid assumption.

  The only exception is if xyz is not local to the function, in which case
it might get modified by another thread. But in that case you have to tell
the compiler that it may be modified at any time (by at the very least
specifying that xyz is volatile, although it's better that access to it
is guarded by a mutex lock).
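
  For instance, something along these lines (just a sketch of the snapshot
idiom, with the volatile qualifier on the pointer object itself):

struct foo { int pdq; };

struct foo * volatile xyz;     /* may be changed by another thread */

void worker(void)
{
    struct foo *local = xyz;   /* read the shared pointer once...  */
    if (!local)
        return;
    local->pdq = 42;           /* ...and use only that snapshot    */
}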

  (Btw, AFAIK the C standard specifically says that a null pointer cannot
contain a valid value. Thus the compiler can assume that it can't contain
a valid value.)

-- 
                                                          - Warp



From: Darren New
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 15:48:20
Message: <4a622704$1@news.povray.org>
Doctor John wrote:
> At a guess that would be:
> http://www.theregister.co.uk/2009/07/17/linux_kernel_exploit/

Yes, but since that wasn't the point of my question and people might 
reasonably assume I'm bashing on Linux if I posted a link to it, I thought 
I'd omit it.  The question was why the compiler felt the need for *that* 
optimization in the first place. When would it ever be a good idea?

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Darren New
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 15:52:01
Message: <4a6227e1$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> Someone recently found a bug whereby a piece of C code read
> 
>> xyz->pdq = some value;
>> ....
>> if (!xyz) return (error code);
>> ...
>> do_something_dangerous();
> 
> 
>> Someone mapped a page into 0, invoked the bit of the kernel that had the 
>> code, and did something dangerous in spite of xyz being null.  It turns out 
>> GCC looks at the test for NULL in the second line, decides that since it has 
>> already been dereferenced in the first line it can't be NULL (or a segfault 
>> would have thrown), and eliminates that test.
> 
>   Why couldn't the compiler assume that if nothing modifies xyz between
> the assignment and the test, that it doesn't get modified? It seems to me
> like a rather valid assumption.

There's no assumption about whether xyz got modified. The assumption is that 
since you dereferenced it in the first line, the test in the second line can 
never succeed, because the first line would already have dumped core.  I.e., 
if the first line had been

   int pdq = *xyz;             /* a dereference, but no assignment to xyz */
   if (!xyz) do_something();   /* test and call both eliminated */

then the compiler would omit the code to invoke do_something(), as well as 
the test. No assignment to xyz, just a dereference.

I'm trying to figure out what possible benefit such an optimization could 
have. I.e., this code is obviously *wrong* - why would you bother to include 
optimizations to improve the performance of erroneous code?

>   (Btw, AFAIK the C standard specifically says that a null pointer cannot
> contain a valid value. Thus the compiler can assume that it can't contain
> a valid value.)

Yep.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Tim Attwood
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 15:54:32
Message: <4a622878$1@news.povray.org>
>  (Btw, AFAIK the C standard specifially says that a null pointer cannot
> contain a valid value. Thus the compiler can assume that it can't contain
> a valid value.)

The problem is that in C a null pointer is represented
by 0, but on some systems 0 is also a valid memory
address. So when you have a valid pointer to address 0,
the optimizer thinks you are checking for null, not for
address 0.

It does seem bad to me to have so much bloat
in the control switches for the GCC optimizer.
Take a look...
http://gcc.gnu.org/onlinedocs/gcc-4.4.0/gcc/Optimize-Options.html#Optimize-Options
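
If I'm reading that page right, the switch behind this particular
transformation is -fdelete-null-pointer-checks, so a build that can't
live with it can opt out:

gcc -O2 -fno-delete-null-pointer-checks file.c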



From: Darren New
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 16:07:40
Message: <4a622b8c$1@news.povray.org>
Tim Attwood wrote:
> The problem is that in C a null pointer is represented
> by 0,

Depends on the architecture, really, but in most cases, yes.

> but 0 is a valid memory address. So when you
> have a valid pointer to address 0, then the optimizer
> thinks you are checking for null, not for address 0.

The exploit was a bug in the kernel that dereferenced a pointer before 
checking it for null, and the compiler silently optimized out the later 
null check. If you can get the first dereference to work (by mapping some 
valid memory at the address associated with the null pointer value), then 
you skip right over a check that people thought they had written into 
their program but which the compiler removed.
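
Roughly, the attacker's side looks like this (a sketch; assumes an old
kernel, before the mmap_min_addr restriction got in the way):

#include <sys/mman.h>

int map_page_zero(void)
{
    /* Make address 0 valid, writable user memory, so the kernel's
       stray NULL dereference no longer faults. */
    void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    return p == (void *)0;   /* nonzero means page zero is mapped */
}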

> It does seem bad to me to have a so much bloat
> in the control switches for the GCC optimizer.

Tell me about it. Just wait till you have the fun of cross-compiling the 
compiler. :-)

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Slime
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 16:13:43
Message: <4a622cf7$1@news.povray.org>
Let's say I wrote a macro that does something that happens to be useful to 
me, along these lines:

#define DO_USEFUL_STUFF( OBJECT ) if ( OBJECT ) OBJECT->usefulMethod();

Now, I go and use this macro in different places. In some places, the object 
passed is likely to be NULL. In others, it's not. I would be glad that the 
compiler is optimizing out the unnecessary checks for me, while still 
letting me benefit from the general usefulness of my macro.

Of course this is a contrived example, but it's probably not too far from a 
real use case.
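
For instance, at a call site like this (names made up), the test inside
the macro is provably redundant, and the compiler can fold it away:

obj->prepare();           /* obj already dereferenced here...          */
DO_USEFUL_STUFF( obj );   /* ...so the macro's "if ( OBJECT )" is dead */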

 - Slime
 [ http://www.slimeland.com/ ]



From: Warp
Subject: Re: Questionable optimizations
Date: 18 Jul 2009 16:58:56
Message: <4a623790@news.povray.org>
Tim Attwood <tim### [at] anti-spamcomcastnet> wrote:
> >  (Btw, AFAIK the C standard specifically says that a null pointer cannot
> > contain a valid value. Thus the compiler can assume that it can't contain
> > a valid value.)

> The problem is that in C a null pointer is represented
> by 0, but 0 is a valid memory address. So when you
> have a valid pointer to address 0, then the optimizer
> thinks you are checking for null, not for address 0.

  Well, dereferencing a null pointer is undefined behavior, so from the point
of view of the standard the compiler can do whatever it wants. Thus gcc is
not doing anything wrong here (because after a null pointer dereference there
is no "wrong" to be done).

  If someone is to be blamed, it's the designers of the kernel, who decided
that a null pointer can point to a valid address, against the C standard, and
then implemented the kernel in C (using a standard-conforming C compiler).

-- 
                                                          - Warp


