Questionable optimizations (Message 31 to 40 of 44)
From: clipka
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 15:40:00
Message: <web.4a6375962c54829f69d21dbe0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
>   You are twisting the whole thing in a really strange way.

I'm not twisting the thing, I'm just twisting the perspective. And I wouldn't
call it strange, but rather just unconventional.


>   It doesn't change the fact that Linux is more secure for the average
> user than Windows is, for the simple reason that Linux is not targeted
> as much as Windows is.

I'm not saying "Linux kernel needs more attention because it is less secure".

I'm saying "Linux kernel needs more attention because a breach of the Linux
kernel poses a higher security risk". Seen from a larger perspective than just
the rather egocentric "how safe is *my* individual computer" perspective.



From: clipka
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 15:55:00
Message: <web.4a63793d2c54829f69d21dbe0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> clipka <nomail@nomail> wrote:
> > Warp <war### [at] tagpovrayorg> wrote:
> > >   (And before anyone says anything, no, Windows is not better. Windows is
> > > year after year always at the top of the list of most security flaws found
> > > during the year.)
>
> > True, but the superiority of Linux crumbles in my eyes, if the responsible
> > people brush aside security holes that easily.
>
>   Then the answer is rather simple, isn't it: Don't use Linux.

No, the answer is rather, "Stop proclaiming that Linux' security is superior",
or "Get back to making Linux as secure as it is claimed to be".


> > And knowing (through obvious proof) that the Linux kernel code isn't checked
> > with professional tools
>
>   Define "professional tool".

In this sense: roughly speaking, anything that a professional software development
outfit would be willing(!) to pay more money for than the average hobbyist
would be willing to spend.

Note that free software might qualify in this sense, too.


> > I'm not saying "they're worse than Microsoft" - all I'm saying is "they're no
> > better".
>
>   That's BS. Basically every time a security hole is found in the linux kernel,
> a patch appears in a matter of *hours*.

Did you verify that assumption, or are you just repeating hearsay?

I now see a security hole which the top brass apparently weren't even *willing*
to fix, and flawed code which I assume would have been discovered earlier in a
commercial environment - so I'm throwing that hearsay overboard right now,
because *that* seems to be BS.


>   How soon do you get security patches for Windows when security flaws are
> found? Certainly not within hours. At best within days, at worst within
> months (yes, it has happened).
>
>   So yes, the linux community *is* better in security than MS is.

By now I seriously doubt it.



From: clipka
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 16:15:01
Message: <web.4a637dd92c54829f69d21dbe0@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
>  > email virii
>
> I saw a great rant from someone who actually knows Latin about "virii".
> "Virii" is apparently the plural of some completely unrelated latin word,
> like "voice" or "people" or something.  "Virus" is apparently already a mass
> noun not unlike "stuff".

Actually, the word "virus" is taken from Latin, originally meaning
"slime/poison/venom"; the correct Latin plural would be "vira", while "viri"
would be the genitive singular; a double-i form ("virii") does not exist.

There's also a Latin word having the plural form "viri" - that would be "vir",
meaning "man/hero"; again, no double-i form here.

If there existed a Latin word with the plural form "virii", its singular would
have to be "virius". To the best of my knowledge, there is no such word in
Latin.



From: andrel
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 17:22:36
Message: <4A638E9C.8070404@hotmail.com>
On 19-7-2009 21:17, Darren New wrote:
> Warp wrote:
>>   It doesn't change the fact that Linux is more secure for the average
>> user than Windows is, for the simple reason that Linux is not targeted
>> as much as Windows is.
> 
> I think he's saying the average Linux user isn't the same as the average 
> Windows user, and the average Linux user's machine is more valuable to 
> attack. You're just measuring two different ways.
> 
A third difference is that Linux administrators of those more critical 
machines are generally more aware of threats and more knowledgeable.
<back to lurk mode for this thread>



From: Darren New
Subject: Re: Questionable optimizations
Date: 19 Jul 2009 17:42:09
Message: <4a639331$1@news.povray.org>
clipka wrote:
> flawed code which I assume would have been discovered earlier in a
> commercial environment 

Interestingly, I can imagine this particular flaw might be easier to find by 
a bad guy in proprietary code. You can look at the machine code to see 
there's no check for NULL in that routine.
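
To make that concrete: a minimal userspace sketch of the pattern at issue
(names simplified from drivers/net/tun.c; this is not the verbatim kernel
code):

#include <stddef.h>

struct sock { int refcnt; };
struct tun_struct { struct sock *sk; };

/* Compile with "gcc -O2 -S sketch.c" and read the assembly. */
int tun_chr_poll(struct tun_struct *tun)
{
        struct sock *sk = tun->sk; /* dereference first: from here on GCC
                                      may assume tun != NULL, because a
                                      NULL dereference would be undefined
                                      behavior anyway...                  */

        if (tun == NULL)           /* ...so at -O2 it may delete this
                                      check as provably dead code.        */
                return -1;

        return sk->refcnt;
}

In the optimized assembly the test-and-branch for the NULL check is simply
absent - exactly what someone reading the binary would spot.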

If the source code is available, how many people are really going to look at 
the generated machine code to see if security checks were optimized out by 
the compiler?  Obviously someone did, or came across it by accident, or 
something. (I didn't read the original report.)

Just a thought...

How many routines in Linux look like they check for buffer overrun but don't 
because the compiler did something wrong or unexpected? Of those, how many 
will people notice, compared to the legions of people single-stepping thru 
IE.exe with a debugger looking for flaws? :-)
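
One real-world shape of that problem, as an illustration rather than a claim
about any specific kernel routine: an overrun guard written via pointer
wraparound, which the compiler may silently discard because pointer overflow
is undefined behavior in C:

#include <stddef.h>

/* Looks like a guard against 'len' wrapping the pointer past the end
 * of the address space - but pointer overflow is undefined behavior,
 * so an optimizing compiler is entitled to treat the condition as
 * always false and drop the check entirely. */
int copy_checked(char *buf, size_t len)
{
        if (buf + len < buf)    /* intended wraparound check; may be
                                   folded away to "false"             */
                return -1;

        /* ... copy len bytes into buf ... */
        return 0;
}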

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Doctor John
Subject: Re: Questionable optimizations
Date: 20 Jul 2009 14:22:23
Message: <4a64b5df@news.povray.org>
And, of course, it's now fixed:

> The Linux folks have meanwhile:
> 
> - Fixed the actual bug.  ;)  (CVE-2009-1897)
>   Only affects 2.6.30,2.6.30.1.
> 
>   2.6.30.2 release soon.
> 
> - Added -fno-delete-null-pointer-checks to their Makefiles
> 
>   Also in 2.6.30.2 and 2.
> 
> - fixed the personality / PER_CLEAR_ON_SETID inheritance issue (CVE-2009-1895),
>   which could be used to work around the mmap_min_addr protection.
>   Affects 2.6.23-2.6.30.1
> 
>   2.6.30.2 and 2.6.27.x releases soon.
> 
> I am not sure about the SELinux policy error he used to 
> exploit the RHEL 5.? Beta.
> 
> Ciao, Marcus

I'm quoting from an email sent to me; I have no reason to distrust the
source
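
For the record, the fix to the driver itself boils down to not touching the
pointer before the check; a minimal userspace sketch of the corrected pattern
(simplified names, not the verbatim drivers/net/tun.c patch):

#include <stddef.h>

struct sock { int refcnt; };
struct tun_struct { struct sock *sk; };

int tun_chr_poll(struct tun_struct *tun)
{
        struct sock *sk;

        if (tun == NULL)        /* check first...                      */
                return -1;

        sk = tun->sk;           /* ...dereference only afterwards, so
                                   there is nothing for the optimizer
                                   to "prove" and delete.              */
        return sk->refcnt;
}

And if I read the mail right, the Makefile change amounts to a line along the
lines of "KBUILD_CFLAGS += $(call cc-option,-fno-delete-null-pointer-checks,)",
which switches that class of optimization off kernel-wide.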

John
-- 
"Eppur si muove" - Galileo Galilei



From: clipka
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 09:20:00
Message: <web.4a65bfb32c54829f537313280@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> clipka wrote:
> > flawed code which I assume would have been discovered earlier in a
> > commercial environment
>
> Interestingly, I can imagine this particular flaw might be easier to find by
> a bad guy in proprietary code. You can look at the machine code to see
> there's no check for NULL in that routine.

It may be interesting news to you that examining a compiled piece of software is
just as easy with open-source software as it is with closed-source software...
;)

And knowing this particular compiler behavior, the bad guy's job has become a
whole lot easier with open-source software: Just get a good static
code-analysis tool and have it grind the code for places where pointers are
de-referenced without checking for NULL first.

*Then* dig through the compiled code to see if the compiler optimized away a
later check for NULL.
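
A hypothetical session illustrating that two-step recipe (file and function
names made up): compile the same code with and without the optimization, then
diff the assembly to see where a later check was thrown away.

#include <stddef.h>

/* deref.c - compile twice and compare:
 *
 *   gcc -O2 -S -o with-opt.s deref.c
 *   gcc -O2 -fno-delete-null-pointer-checks -S -o no-opt.s deref.c
 *   diff with-opt.s no-opt.s
 *
 * In the first listing the test/branch for "p == NULL" is gone,
 * because "*p" on the line above already let GCC assume p != NULL. */
int read_flags(int *p)
{
        int flags = *p;         /* dereference before the check */

        if (p == NULL)          /* deleted at -O2               */
                return -1;

        return flags;
}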

So however you toss and turn it: Breaking into any piece of software is easier
if it's open-source than if it's closed-source.


But also note that of course the *good* guys' job *should* be a lot easier with
open-source, too: Just get a good static code-analysis tool... (You know the
rest.) That's the very paradigm on which the alleged superior safety of open
source is founded: More eyes looking at the code will spot more of the bugs.

The problem - at least in this case - is that the good guys obviously didn't do
it. Or they didn't listen to one another when some of them did.

If that's how it typically works in reality, then as far as security is
concerned the whole superiority of the open source concept crumbles, leaving
only its disadvantages in this respect. Making software open source doesn't
improve security (or quality in general) *per se* - the good guys need to do
their homework, too.


> How many routines in Linux look like they check for buffer overrun but don't
> because the compiler did something wrong or unexpected? Of those, how many
> will people notice,

Yet how many more could a static code-analysis tool notice? Quite a lot, I bet.
Unless the compiler is outright buggy of course, but that would surface sooner
or later, too.

The major problems are code constructs that lead to undefined behavior according
to the C (or C++) standard specifications - because "undefined behavior" *by
definition* includes the potential for security breaches. Static code analysis
tools do a great job at identifying the use of such constructs.
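
Another textbook member of that family, for illustration: a signed-overflow
test that commits the very undefined behavior it tries to detect, so the
optimizer may reduce it to a constant - the kind of construct both static
analyzers and GCC's own -Wstrict-overflow warning can point out:

#include <limits.h>

/* Intended: detect that x + 1 would overflow. But signed overflow is
 * undefined behavior in C, so GCC may assume it cannot happen and
 * fold the whole expression to 0 (false). */
int increment_would_overflow(int x)
{
        return x + 1 < x;       /* UB when x == INT_MAX; may be
                                   folded to a constant 0        */
}

/* A well-defined rewrite that the compiler must not optimize away: */
int increment_would_overflow_ok(int x)
{
        return x == INT_MAX;
}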



From: Darren New
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 11:51:02
Message: <4a65e3e6$1@news.povray.org>
clipka wrote:
> It may be interesting news to you that examining a compiled piece of software is
> just as easy with open-source software as it is with closed-source software...

Oh, I know that. I was just saying that many might not even look for 
(essentially) compiler errors if they have the source.

> And knowing this particular compiler behavior, the bad guy's job has become a
> whole lot easier with open-source software: Just get a good static
> code-analysis tool and have it grind the code for places where pointers are
> de-referenced without checking for NULL first.

True.  If you think of it.

> So however you toss and turn it: Breaking into any piece of software is easier
> if it's open-source than if it's closed-source.

Yes. Perhaps the word "easier" should have been "more likely."

> Yet how many more could a static code-analysis tool notice? Quite a lot, I bet.

Hopefully, people are running such static analysis tools on their 
proprietary software too. :-)

> Unless the compiler is outright buggy of course, but that would surface sooner
> or later, too.

I understand the good folks at JPL actually *do* disassemble the machine 
code the compiler generated and check that it does what they think it does. 
When you're sending something to Mars, it's probably worth it.
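
A toy version of that kind of spot check, for anyone curious (hypothetical
file name, and of course nothing to do with JPL's actual tooling):

#include <stddef.h>

/* verify.c - compile and read the disassembly by hand:
 *
 *   gcc -O2 -c verify.c
 *   objdump -d verify.o
 *
 * Then inspect the listing for guarded_read and confirm that the
 * test/branch implementing the NULL check is really present. */
int guarded_read(int *p)
{
        if (p == NULL)          /* check precedes the dereference,
                                   so it must survive optimization */
                return -1;

        return *p;
}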

> Static code analysis
> tools do a great job at identifying the use of such constructs.

Well, not so good, no.  At least, not in C. Otherwise, buffer overruns 
wouldn't be the black hat's attack of choice for C programs.

You can make a language where it's a lot easier to find such things, tho, 
even without a lot of runtime overhead.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: clipka
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 13:00:00
Message: <web.4a65f2f62c54829f537313280@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Hopefully, people are running such static analysis tools on their
> proprietary software too. :-)

Hopefully, yes. And hopefully the QA guys running those tools are heard.

But given the cost of typical static code analysis tools, you may be more likely
to find them in companies than in open-source projects.


> I understand the good folks at JPL actually *do* disassemble the machine
> code the compiler generated and check that it does what they think it does.
> When you're sending something to Mars, it's probably worth it.

Definitely so: It's probably cheaper to have hordes of expensive experts inspect
every line of source code and every byte of compiled code - both with automatic
tools *and* manually - than it would be to send a "beta" satellite first. Let
alone that some missions are single-chance: a favorable planetary alignment like
the one the Voyager probes exploited won't come again anytime soon, for instance.

Maybe one problem with open source is also that its value is underestimated;
something like, "if it doesn't cost anything to code, how can it be worth
investing any money into QA?".

> > Static code analysis
> > tools do a great job at identifying the use of such constructs.
>
> Well, not so good, no.  At least, not in C. Otherwise, buffer overruns
> wouldn't be the black hat's attack of choice for C programs.

Granted, some variants of these are indeed hard to identify even with static
code analysis tools. Other variants, though, are darn easy for these tools.
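
To make "darn easy" concrete, an illustrative pair: the first variant is
flagged by practically any checker, while the second tends to stay under the
radar because the bound is no longer visible at the point of the copy:

#include <string.h>

/* The easy variant: fixed-size buffer, unbounded copy. Virtually
 * every static analysis tool flags this immediately. */
void greet_easy(const char *name)
{
        char buf[8];
        strcpy(buf, name);      /* overflows whenever strlen(name) >= 8 */
}

/* The hard variant: the length is computed elsewhere, possibly across
 * module boundaries, so the analyzer cannot prove whether 'n' fits. */
void greet_hard(const char *name, size_t n)
{
        char buf[8];
        memcpy(buf, name, n);   /* safe only if every caller keeps n <= 8 */
}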

Anyway, we're talking about a thing here that is very simple to identify
automatically. And even *that* wasn't detected, or people who detected it
weren't listened to.


> You can make a language where it's a lot easier to find such things, tho,
> even without a lot of runtime overhead.

Sure, no argument here. One of the very few valid reasons for using C is that it
is extremely widespread; but this widespread use has significant side effects:

- It makes it (comparatively) easy to port code to virtually any target
platform. (Of course there's a circular thing here: C is the language of choice
when maximum portability is needed, because there's C compilers for virtually
all platforms; and there's C compilers for virtually all platforms because C is
the language of choice when maximum portability is needed. Still that's the way
it happens to be.) For a project like the Linux kernel that is aimed at high
portability, C/C++ therefore seems to be the *only* reasonable choice.

- The most common C compilers for the most common platforms are used so heavily
that even compiler bugs related to unconventional cases are still quite likely
to manifest soon.

Another reason is speed, of course, but just like with Assembler it could be
argued that only the most heavily-used portions of a project should resort to C
for speed these days.



From: Darren New
Subject: Re: Questionable optimizations
Date: 21 Jul 2009 13:39:52
Message: <4a65fd68$1@news.povray.org>
clipka wrote:
> Anyway, we're talking about a thing here that is very simple to identify
> automatically. 

Certainly. And for that matter, it *was* identified automatically. It's just 
that the wrong solution was taken. :-)

> For a project like the Linux kernel that is aimed at high portability, 

Well, no.  The Linux kernel wasn't started aimed at high portability. Just 
the opposite, really.

> - The most common C compilers for the most common platforms are used so heavily
> that even compiler bugs related to unconventional cases are still quite likely
> to manifest soon.

True.

> Another reason is speed, of course, but just like with Assembler it could be
> argued that only the most heavily-used portions of a project should resort to C
> for speed these days.

Mmmm.... Debatable. :-)  It really depends on the rest of the system. If 
you're writing in some language fundamentally different from C, you might 
spend more time translating into C data structures than you do to run it. 
Nobody says "Wow, this SQL query is really slow. Let's rewrite it in C."

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."


