POV-Ray : Newsgroups : povray.off-topic : Questionable optimizations
  Questionable optimizations (Message 35 to 44 of 44)
From: Darren New
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 17:42:09
Message: <4a639331$1@news.povray.org>
clipka wrote:
> flawed code which I assume would have been discovered earlier in a
> commercial environment 

Interestingly, I can imagine this particular flaw might be easier to find by 
a bad guy in proprietary code. You can look at the machine code to see 
there's no check for NULL in that routine.

If the source code is available, how many people are really going to look at 
the generated machine code to see if security checks were optimized out by 
the compiler?  Obviously someone did, or came across it by accident, or 
something. (I didn't read the original original report.)

Just a thought...

How many routines in Linux look like they check for buffer overrun but don't 
because the compiler did something wrong or unexpected? Of those, how many 
will people notice, compared to the legions of people single-stepping thru 
IE.exe with a debugger looking for flaws? :-)

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Doctor John
Subject: Re: Questionable optimizations
Date: 20 Jul 2009 14:22:23
Message: <4a64b5df@news.povray.org>
And, of course, it's now fixed.

> The Linux folks have meanwhile:
> 
> - Fixed the actual bug.  ;)  (CVE-2009-1897)
>   Only affects 2.6.30,2.6.30.1.
> 
>   2.6.30.2 release soon.
> 
> - Added -fno-delete-null-pointer-checks to their Makefiles
> 
>   Also in 2.6.30.2 and 2.
> 
> - fixed the personality - PER_CLEAR_ON_SETTID inheritance issue (CVE-2009-1895)
>   to work around mmap_min_addr protection.
>   Affects 2.6.23-2.6.30.1
> 
>   2.6.30.2 and 2.6.27.x releases soon.
> 
> I am not sure about the SELinux policy error he used to 
> exploit the RHEL 5.? Beta.
> 
> Ciao, Marcus

I'm quoting from an email sent to me; I have no reason to distrust the
source

John
-- 
"Eppur si muove" - Galileo Galilei



From: clipka
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 09:20:00
Message: <web.4a65bfb32c54829f537313280@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> clipka wrote:
> > flawed code which I assume would have been discovered earlier in a
> > commercial environment
>
> Interestingly, I can imagine this particular flaw might be easier to find by
> a bad guy in proprietary code. You can look at the machine code to see
> there's no check for NULL in that routine.

It may be interesting news to you that examining a compiled piece of software is
just as easy with open-source software as it is with closed-source software...
;)

And knowing this particular compiler behavior, the bad guy's job has become a
whole lot easier with open-source software: Just get a good static
code-analysis tool and have it grind the code for places where pointers are
de-referenced without checking for NULL first.

*Then* dig through the compiled code to see if the compiler optimized away a
later check for NULL.

So however you toss and turn it: Breaking into any piece of software is easier
if it's open-source than if it's closed-source.


But also note that of course the *good* guys' job *should* be a lot easier with
open-source, too: Just get a good static code-analysis tool... (You know the
rest.) That's the very paradigm on which the alleged superior safety of open
source is founded: More eyes looking at the code will spot more of the bugs.

The problem - at least in this case - is that the good guys obviously didn't do
it. Or they didn't listen to one another when some of them did.

If that's how it typically works in reality, then as far as security is
concerned the whole superiority of the open source concept crumbles, leaving
only its disadvantages in this respect. Making software open source doesn't
improve security (or quality in general) *per se* - the good guys need to do
their homework, too.


> How many routines in Linux look like they check for buffer overrun but don't
> because the compiler did something wrong or unexpected? Of those, how many
> will people notice,

Yet how many more could a static code-analysis tool notice? Quite a lot, I bet.
Unless the compiler is outright buggy of course, but that would surface sooner
or later, too.

The major problems are code constructs that lead to undefined behavior according
to the C (or C++) standard specifications - because "undefined behavior" *by
definition* includes the potential for security breaches. Static code analysis
tools do a great job at identifying the use of such constructs.



From: Darren New
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 11:51:02
Message: <4a65e3e6$1@news.povray.org>
clipka wrote:
> It may be interesting news to you that examining compiled piece of software is
> just as easy with open-source software as it is with closed-source software...

Oh, I know that. I was just saying that many might not even look for 
(essentially) compiler errors if they have the source.

> And knowing this particular compiler behavior, the bad guy's job has become a
> whole lot easier with open-source software: Just get a good static
> code-analysis tool and have it grind the code for places where pointers are
> de-referenced without checking for NULL first.

True.  If you think of it.

> So however you toss and turn it: Breaking into any piece of software is easier
> if it's open-source than if it's closed-source.

Yes. Perhaps the word "easier" should have been "more likely."

> Yet how many more could a static code-analysis tool notice? Quite a lot, I bet.

Hopefully, people are running such static analysis tools on their 
proprietary software too. :-)

> Unless the compiler is outright buggy of course, but that would surface sooner
> or later, too.

I understand the good folks at JPL actually *do* disassemble the machine 
code the compiler generated and check that it does what they think it does. 
When you're sending something to Mars, it's probably worth it.

> Static code analysis
> tools do a great job at identifying the use of such constructs.

Well, not so good, no.  At least, not in C. Otherwise, buffer overruns 
wouldn't be the black hat's attack of choice for C programs.

You can make a language where it's a lot easier to find such things, tho, 
even without a lot of runtime overhead.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: clipka
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 13:00:00
Message: <web.4a65f2f62c54829f537313280@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Hopefully, people are running such static analysis tools on their
> proprietary software too. :-)

Hopefully, yes. And hopefully the QA guys running those tools are heard.

But given the cost of typical static code-analysis tools, you may be more likely
to find them in companies than in open-source projects.


> I understand the good folks at JPL actually *do* disassemble the machine
> code the compiler generated and checks that it does what they think it does.
> When you're sending something to Mars, it's probably worth it.

Definitely so: It's probably cheaper to have hordes of expensive experts inspect
every line of source code and every byte of compiled code - both with automatic
tools *and* manually - than it would be to send a "beta" satellite first. Let
alone that some missions are single-chance: a favorable planetary alignment
like the one the Voyager probes used won't come again anytime soon, for instance.

Maybe one problem with open source is also that its value is underestimated;
something like, "if it doesn't cost anything to code, how can it be worth
investing any money into QA?".

> > Static code analysis
> > tools do a great job at identifying the use of such constructs.
>
> Well, not so good, no.  At least, not in C. Otherwise, buffer overruns
> wouldn't be the black hat's attack of choice for C programs.

Granted, some variants of these are indeed hard to identify even with static
code analysis tools. Other variants, though, are darn easy for these tools.

Anyway, we're talking about a thing here that is very simple to identify
automatically. And even *that* wasn't detected, or people who detected it
weren't listened to.


> You can make a language where it's a lot easier to find such things, tho,
> even without a lot of runtime overhead.

Sure, no argument here. One of the very few valid reasons for using C is that it
is extremely widespread; but this widespread use has significant side effects:

- It makes it (comparatively) easy to port code to virtually any target
platform. (Of course there's a circular thing here: C is the language of choice
when maximum portability is needed, because there's C compilers for virtually
all platforms; and there's C compilers for virtually all platforms because C is
the language of choice when maximum portability is needed. Still that's the way
it happens to be.) For a project like the Linux kernel that is aimed at high
portability, C/C++ therefore seems to be the *only* reasonable choice.

- The most common C compilers for the most common platforms are used so heavily
that even compiler bugs related to unconventional cases are still quite likely
to manifest soon.

Another reason is speed, of course, but just like with Assembler it could be
argued that only the most heavily-used portions of a project should resort to C
for speed these days.



From: Darren New
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 13:39:52
Message: <4a65fd68$1@news.povray.org>
clipka wrote:
> Anyway, we're talking about a thing here that is very simple to identify
> automatically. 

Certainly. And for that matter, it *was* identified automatically. It's just 
that the wrong solution was taken. :-)

> For a project like the Linux kernel that is aimed at high portability, 

Well, no.  The Linux kernel wasn't started aimed at high portability. Just 
the opposite, really.

> - The most common C compilers for the most common platforms are used so heavily
> that even compiler bugs related to unconventional cases are still quite likely
> to manifest soon.

True.

> Another reason is speed, of course, but just like with Assembler it could be
> argued that only the most heavily-used portions of a project should resort to C
> for speed these days.

Mmmm.... Debatable. :-)  It really depends on the rest of the system. If 
you're writing in some language fundamentally different from C, you might 
spend more time translating into C data structures than you do to run it. 
Nobody says "Wow, this SQL query is really slow. Let's rewrite it in C."

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: clipka
Subject: Re: Questionable optimizations
Date: 22 Jul 2009 01:00:01
Message: <web.4a669bd12c54829f785322500@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> > For a project like the Linux kernel that is aimed at high portability,
>
> Well, no.  The Linux kernel wasn't started aimed at high portability. Just
> the opposite, really.

Yeah, but what is the *current* aim?


> > Another reason is speed, of course, but just like with Assembler it could be
> > argued that only the most heavily-used portions of a project should resort to C
> > for speed these days.
>
> Mmmm.... Debatable. :-)  It really depends on the rest of the system. If
> you're writing in some language fundamentally different from C, you might
> spend more time translating into C data structures than you do to run it.
> Nobody says "Wow, this SQL query is really slow. Let's rewrite it in C."

I said, it *could* be argued. And BTW, *some* SQL queries are indeed easier to
do by just pumping the data into a C program and doing the filtering & sorting
manually (though generally you'd first have a look to see if you can somehow
improve performance by tweaking the DB, rephrasing the query, or the like).

And in general, serious languages tend to have *some* generic interface to
libraries written in C (though then again, some of these interfaces seem to be
pathological, as with Perl for instance; seen a colleague once struggling to
pass a NULL pointer or some such into a C API function).



From: Darren New
Subject: Re: Questionable optimizations
Date: 22 Jul 2009 10:18:54
Message: <4a671fce@news.povray.org>
clipka wrote:
> Yeah, but what is the *current* aim?

From *my* experience, the current aim is to dick around with it for fun so 
it takes several weeks of intense concentration and trial-and-error to get 
each new point release to even compile.

> I said, it *could* be argued. And BTW, *some* SQL queries are indeed easier to
> do by just pumping the data into a C program and doing the filtering & sortng
> manually (though generally you'd first have a look to see if you can somehow
> improve performance by tweaking the DB, rephrasing the query, or the like).

Exactly. My point was that nobody rewrites a chunk of SQL in C. They pull 
the data and manipulate it in C, which is rather different from cracking 
open the RDBMS and modifying it to add their query as a SQL primitive. :-)

> And in general, serious languages tend to have *some* generic interface to
> libraries written in C 

Sure, but that's because C is close to assembler. Anything with a reasonable 
calling convention can do that, assuming the CPU is designed to run C. Most 
of them also have some generic interface to Windows COM as well. I'm not 
sure what the point is. :-)

LISP machines, Smalltalk machines, and FORTH machines all lack generic 
interfaces to libraries written in C. :-)

> (though then again, some of these interfaces seem to be
> pathological, as with Perl for instance; seen a colleague once struggling to
> pass a NULL pointer or some such into a C API function).

Precisely my point. The farther from C your language gets, the less likely 
you can easily invoke C.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Darren New
Subject: Re: Questionable optimizations
Date: 27 Jul 2009 21:46:11
Message: <4a6e5863@news.povray.org>
Here's a whole description of the whole chain of events that led up to it 
breaking.  I don't know if it got posted before, but it's pretty interesting.

http://lwn.net/SubscriberLink/342330/f66e8ace8a572bcb/

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Warp
Subject: Re: Questionable optimizations
Date: 28 Jul 2009 08:18:44
Message: <4a6eeca4@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Here's a whole description of the whole chain of events that led up to it 
> breaking.  I don't know if it got posted before, but it's pretty interesting.

> http://lwn.net/SubscriberLink/342330/f66e8ace8a572bcb/

"But Herbert's patch added a line which dereferences the pointer prior
to the check. That, of course, is a bug."

  That's what I thought. You can't blame a compiler optimization for
"screwing up" buggy code, even though without the optimization the bug
would perhaps not have been symptomatic by pure chance.

-- 
                                                          - Warp




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.