POV-Ray : Newsgroups : povray.off-topic : Linux really costs a _lot_ more than $40
  Linux really costs a _lot_ more than $40 (Message 150 to 159 of 189)
From: Jim Henderson
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 00:03:38
Message: <4928e42a$1@news.povray.org>
On Sat, 22 Nov 2008 09:46:38 -0800, Darren New wrote:

> Jim Henderson wrote:
>> why would you not be able to update it consistently?
> 
> You write the .h file. While trying to write the .so file, you A) don't
> have permissions to do so,

The updater requires root rights to run, so it has permissions to do that.

> B) find that it's open by someone else as a shared text segment, 

I don't believe that would matter, again, it's the inode that's open, and 
when the file is overwritten a new inode is created and the old one is 
destroyed.

> C) run out of disk space,

That would create other problems as well - as would running out of system 
file handles or memory.  The latter is interesting to watch on *nix 
systems: back in college I wrote a program to malloc() all the memory on a 
machine, to get students off of workstations when they were playing the 
local MUD while other students had actual work to do and no machines were 
available.  Once the malloc() program ran they couldn't even run ps to 
find out what had happened - they just had to Stop-A the system and 
reboot.  It was especially interesting on the diskless Sun SLC 
workstations, because they'd swap over the network to the server in the 
server room - which is invariably where we ran it from, so we could watch 
all the disk activity.

> D) have your process killed,
> E) have the power fail,
> ....

Yeah, those could happen and could introduce problems - so you just 
reinstall the packages and that makes things consistent.  Come to think 
of it, I've had to do that once or twice.

Jim



From: Darren New
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 00:42:10
Message: <4928ed32@news.povray.org>
Jim Henderson wrote:
> On Sat, 22 Nov 2008 09:46:38 -0800, Darren New wrote:
> 
>> Jim Henderson wrote:
>>> why would you not be able to update it consistently?
>> You write the .h file. While trying to write the .so file, you A) don't
>> have permissions to do so,
> 
> The updater requires root rights to run, so it has permissions to do that.

Not always. See network-mounted file systems, file systems mounted 
read-only, write-protect tabs, etc.

>> B) find that it's open by someone else as a shared text segment, 
> 
> I don't believe that would matter,

You're mistaken.

> again, it's the inode that's open, and 
> when the file is overwritten a new inode is created and the old one is 
> destroyed.

Only if you unlink the old file then creat() a new one.  If you actually 
open the current file for writing (and it's executing or sticky in swap, 
and it has a shared text segment), you get an error, even as root.

>> C) run out of disk space,
> 
> That would create other problems as well

Yes? So?  We know it's bad and you should avoid it, yes.

Note that nobody expects "I ran out of disk space" (or "I ran out of 
file handles or swap space") to result in their root directory getting 
truncated, for example. So it's not so bad that it should leave other 
parts of your system in an inconsistent state.

I also left out "Linux out-of-memory killer randomly decided to nuke my 
process because someone else grabbed a bunch of memory."

>> D) have your process killed,
>> E) have the power fail,
>> ....
> 
> Yeah, those could happen and could introduce problems - so you just 
> reinstall the packages and that makes things consistent.

And until you do, things are broken. Which is kind of my point.

It's also not an orthogonal solution. Everyone has to reinvent the wheel 
for themselves when trying to update multiple files consistently.

And if the package manager updates the database *before* it updates the 
files, you might never know it.

-- 
Darren New / San Diego, CA, USA (PST)



From: Jim Henderson
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 14:50:11
Message: <4929b3f3@news.povray.org>
On Sat, 22 Nov 2008 21:42:12 -0800, Darren New wrote:

> Jim Henderson wrote:
>> On Sat, 22 Nov 2008 09:46:38 -0800, Darren New wrote:
>> 
>>> Jim Henderson wrote:
>>>> why would you not be able to update it consistently?
>>> You write the .h file. While trying to write the .so file, you A)
>>> don't have permissions to do so,
>> 
>> The updater requires root rights to run, so it has permissions to do
>> that.
> 
> Not always. See network-mounted file systems, file systems mounted
> read-only, write-protect tabs, etc.

You'd have to configure things pretty weird to get part of the relevant 
filesystems mounted in a way that was inconsistent with the requirements 
- such as being able to update development header files but not shared 
libraries.  You'd almost have to be trying to create a situation where 
the updater doesn't work - and sure, you could do that, but what would be 
the point other than to break things?

>>> B) find that it's open by someone else as a shared text segment,
>> 
>> I don't believe that would matter,
> 
> You're mistaken.

Well, it's something I've never run into, and I've only been running 
Linux for about 12 years...

>> again, it's the inode that's open, and when the file is overwritten a
>> new inode is created and the old one is destroyed.
> 
> Only if you unlink the old file then creat() a new one.  If you actually
> open the current file for writing (and it's executing or sticky in swap,
> and it has a shared text segment), you get an error, even as root.

Well, the installer does just that - it unlinks and creates new files, as 
near as I can tell.  But even in a case like that, the updater will 
report an error saying that a file it needs can't be updated, and it will 
give you the option of aborting or attempting to continue.  It shows the 
same kind of error message when it tries to auto-refresh after making a 
network change (which I think is stupid, because that takes the network 
connection down and ALWAYS errors once).

>>> C) run out of disk space,
>> 
>> That would create other problems as well
> 
> Yes? So?  We know it's bad and you should avoid it, yes.

My point is that you would probably notice this before running out of 
disk space.  Thinking about it, though, I have run into this before, and 
the integrity didn't get messed up.

> Note that nobody expects "I ran out of disk space" (or "I ran out of
> file handles or swap space") to result in their root directory getting
> truncated, for example. So it's not so bad that it should leave other
> parts of your system in an inconsistent state.
> 
> I also left out "Linux out-of-memory killer randomly decided to nuke my
> process because someone else grabbed a bunch of memory."
> 
>>> D) have your process killed,
>>> E) have the power fail,
>>> ....
>> 
>> Yeah, those could happen and could introduce problems - so you just
>> reinstall the packages and that makes things consistent.
> 
> And until you do, things are broken. Which is kind of my point.
> 
> It's also not an orthogonal solution. Everyone has to reinvent the wheel
> for themselves when trying to update multiple files consistently.
> 
> And if the package manager updates the database *before* it updates the
> files, you might never know it.

If you never know it, though, then things are working as expected - if 
they're not, you can run a comparison of the database to the filesystem 
using the RPM options to do so.

Jim



From: Nicolas Alvarez
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 15:07:36
Message: <4929b808@news.povray.org>
Jim Henderson wrote:
> I don't believe that would matter, again, it's the inode that's open, and
> when the file is overwritten a new inode is created and the old one is
> destroyed.

Uh, it's not. Depends on your definition of "overwritten".

Open an existing file (inode), write into it. Did the inode change?
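Nicolas's question can be answered with a few lines of Python (a sketch using a throwaway temp file): opening an existing file and writing into it keeps the same inode, so only unlink-and-recreate (or rename-over) produces a new one.

```python
import os
import tempfile

# Open an existing file and write into it: the inode does not change.
fd, path = tempfile.mkstemp()
os.close(fd)

before = os.stat(path).st_ino
with open(path, "w") as f:  # truncates and rewrites in place
    f.write("patched in place")
after = os.stat(path).st_ino

print(before == after)  # True: same inode, contents overwritten
```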



From: Darren New
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 15:19:26
Message: <4929bace$1@news.povray.org>
Jim Henderson wrote:
> You'd have to configure things pretty weird to get part of the relevant 
> filesystems mounted in a way that was inconsistent with the requirements 

Sure.  However, you're looking at it from the narrow context of one 
particular program under Linux.  Being able to deal with such problems 
in a generic way is a benefit.

>>>> B) find that it's open by someone else as a shared text segment,
>>> I don't believe that would matter,
>> You're mistaken.
> 
> Well, it's something I've never run into, and I've only been running 
> Linux for about 12 years...

http://www.derkeiler.com/Mailing-Lists/FreeBSD-Security/2002-07/10992.html

It's not really common unless you're trying to update executables while 
they're running - but it's not hard to make it happen. We've been over 
that here, a few weeks ago.

> Well, the installer does just that - it unlinks and creates new files, 
> near as I can tell.  

Most likely. Probably because (d'oh) you can't write to a file that's 
being executed, yes? ;-)

>>>> C) run out of disk space,
>>> That would create other problems as well
>> Yes? So?  We know it's bad and you should avoid it, yes.
> 
> My point is that you would notice this probably before running out of 
> disk space.

Unless the update is what runs you out of space. Or (just maybe) there's 
more than one person using the computer? Or some background job is doing 
something like receiving email?

>> And if the package manager updates the database *before* it updates the
>> files, you might never know it.
> 
> If you never know it, though, then things are working as expected

And if "as expected" means "still has the security holes in it that the 
update was supposed to patch", then yes, that's as expected. Not what 
you want, but as expected. :-)


But yes, if you're only talking about package management per se (which 
is indeed where we started) then there are obviously solutions in place. 
It's just a shame they didn't generalize it so you can make it work for 
other people too, or over the network, or in spite of failures, etc.

-- 
Darren New / San Diego, CA, USA (PST)



From: Jim Henderson
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 16:10:16
Message: <4929c6b8@news.povray.org>
On Sun, 23 Nov 2008 12:19:30 -0800, Darren New wrote:

> Jim Henderson wrote:
>> You'd have to configure things pretty weird to get part of the relevant
>> filesystems mounted in a way that was inconsistent with the
>> requirements
> 
> Sure.  However, you're looking at it from the narrow context of one
> particular program under Linux.  Being able to deal with such problems
> in a generic way is a benefit.

Well, sure, but dealing with every possible and conceivable exception 
gets into areas that even professional development teams decide not to 
go into.  It's good to deal with common exceptions, but obscure and 
unlikely possibilities aren't any more likely to be dealt with in OSS 
development than in commercial software development.

About the only place I've ever seen 100% exception handling expected is 
in ACM programming contests.

>>>>> B) find that it's open by someone else as a shared text segment,
>>>> I don't believe that would matter,
>>> You're mistaken.
>> 
>> Well, it's something I've never run into, and I've only been running
>> Linux for about 12 years...
> 
> http://www.derkeiler.com/Mailing-Lists/FreeBSD-Security/2002-07/10992.html

FreeBSD != Linux, but you knew that already. ;-)

> It's not really common, unless you're trying to update executables while
> they're running. It's not hard to make it happen. We've been over that
> here, a few weeks ago.

And in such a case IME the updater errors and tells you there was a 
problem.

>> Well, the installer does just that - it unlinks and creates new files,
>> near as I can tell.
> 
> Most likely. Probably because (d'oh) you can't write to a file that's
> being executed, yes? ;-)

Well, yeah, that would fall under the category of "handling the 
exception", no? ;-)

>>>>> C) run out of disk space,
>>>> That would create other problems as well
>>> Yes? So?  We know it's bad and you should avoid it, yes.
>> 
>> My point is that you would notice this probably before running out of
>> disk space.
> 
> Unless the update is what runs you out of space, or (just maybe) there's
> more than one person using the computer, like? Or some background job
> that's maybe doing something like receiving email?

Sure, and as I said, I've had that happen to me - didn't need to 
reinstall the system, had to clean a few things up, but this sort of 
thing can happen with any OS, not just Linux.  Ever try writing to the 
Windows registry with C: full?  (I knew you were waiting for me to bring 
Windows into the discussion - there you go. ;-) )

>>> And if the package manager updates the database *before* it updates
>>> the files, you might never know it.
>> 
>> If you never know it, though, then things are working as expected
> 
> And if "as expected" means "still has the security holes in it that the
> update was supposed to patch", then yes, that's as expected. Not what
> you want, but as expected. :-)

So you need to run a validation - but again as I've said a few times, my 
experience has been that an *error* condition like you describe stops the 
upgrade process, it doesn't silently say "well, that's OK" and continue 
without notifying the user.

> But yes, if you're only talking about package management per se (which
> is indeed where we started) then there are obviously solutions in place.
> It's just a shame they didn't generalize it so you can make it work for
> other people too, or over the network, or in spite of failures, etc.

"or over the network" - yeah, that works, but you have to follow special 
procedures.  Anyone who sets a system up with remote /usr or /var 
partitions knows that - or should know that.

"in spite of failures" - huh?  Update a system by writing files that 
cannot be written because of a full disk (say) even though the disk is 
full?  I'm not quite sure how you'd achieve the feat of writing to a full 
disk without causing corruption.  But again, that's why the updater 
displays an error message rather than silently failing. :-)

Jim



From: Jim Henderson
Subject: Re: Linux really costs a _lot_ more than $40
Date: 23 Nov 2008 16:11:09
Message: <4929c6ed$1@news.povray.org>
On Sun, 23 Nov 2008 18:07:45 -0200, Nicolas Alvarez wrote:

> Jim Henderson wrote:
>> I don't believe that would matter, again, it's the inode that's open,
>> and when the file is overwritten a new inode is created and the old one
>> is destroyed.
> 
> Uh, it's not. Depends on your definition of "overwritten".
> 
> Open an existing file (inode), write into it. Did the inode change?

That's not the situation we're talking about - we're talking about 
replacing something like a shared library - I've *never* seen shared 
libraries updated by opening them and writing a change into them, then 
saving them out again.

Jim



From: Darren New
Subject: Re: Linux really costs a _lot_ more than $40
Date: 24 Nov 2008 01:57:31
Message: <492a505b$1@news.povray.org>
Jim Henderson wrote:
> Well, sure, but dealing with every possible and conceivable exception 
> gets into areas where even in professional development it's decided not 
> to go. 

No, I  mean you can make it generic. Instead of looking at each of the 
30+ possible causes of error, you can say "Any error rolls back the 
changes."

You don't write a database transaction and say "What if the power fails? 
What if the process is killed? What if I run out of space? What if ...?"
Instead, you just build a transaction, and if anything fails, you roll 
back the transaction.

That's what I'm talking about. Then you don't have to build things like 
"check the package database against the file system to make sure it's 
consistent."
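The transaction model being described can be sketched with Python's sqlite3 module (hypothetical table and rows): nothing enumerates the 30+ failure causes; any exception inside the transaction rolls back every change at once.

```python
import sqlite3

# Rollback-on-any-error: you don't name the failure modes, you just
# wrap the whole update in one transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, version INTEGER)")
conn.execute("INSERT INTO files VALUES ('app', 1), ('libfoo.so', 1)")
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on any error
        conn.execute("UPDATE files SET version = 2 WHERE name = 'app'")
        raise OSError("disk full")  # simulated mid-update failure
except OSError:
    pass

# The partial update was rolled back: both rows are still at version 1.
versions = [v for (v,) in conn.execute("SELECT version FROM files")]
print(versions)  # [1, 1]
```

The design point is that the consistency check ("compare the database to the filesystem") becomes unnecessary, because an interrupted update never leaves a mixed state behind.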

> FreeBSD != Linux, but you knew that already. ;-)

Except this error has been out there since UNIX v7 at least. But you 
knew that already.

> Well, yeah, that would fall under the category of "handling the 
> exception", no? ;-)

Not really. It's correcting an error you detected. If the error 
correction step also fails, you're pretty screwed.

For example, to make things work, you need to update the executable and 
three dynamic libraries. You update the executable and one of the 
libraries, and then your network connection to the machine hosting the 
files fails. You can't handle that exception by rolling back your 
changes manually.

> Sure, and as I said, I've had that happen to me - didn't need to 
> reinstall the system, had to clean a few things up, but this sort of 
> thing can happen with any OS, not just Linux.  Ever try writing to the 
> Windows registry with C: full?  (I knew you were waiting for me to bring 
> Windows into the discussion - there you go. ;-) )

Well, yes. That's why you have file system transactions in Windows. 
That's kinda exactly my point. If you start a kernel transaction, copy 
some files, update the registry, then bomb out, your changes get rolled 
back automatically. Just like any other database system, and regardless 
of why you bombed out or over which network the files are mounted.

> So you need to run a validation - but again as I've said a few times, my 
> experience has been that an *error* condition like you describe stops the 
> upgrade process, it doesn't silently say "well, that's OK" and continue 
> without notifying the user.

Unless the error is "you pulled the plug" or "the RAID fell over" or 
something like that.

> "in spite of failures" - huh?  Update a system by writing files that 
> cannot be written because of a full disk (say) even though the disk is 
> full?

No. Not having half-written files caused by the disk being full. Or 
having only (say) two of the files updated because the disk got full 
while you were writing the third one.
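One common per-file defence against exactly this (a sketch, not what any particular package manager does) is to write the new contents under a temporary name and rename into place, so a failure mid-write leaves the original untouched:

```python
import os
import tempfile

def replace_file(path, data):
    """Write data under a temp name, then atomically rename it into
    place. A crash mid-write leaves the original file untouched."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # bytes reach the disk before the rename
    os.replace(tmp, path)     # atomic on POSIX

path = os.path.join(tempfile.mkdtemp(), "libfoo.so")
with open(path, "w") as f:
    f.write("version 1")

replace_file(path, "version 2")  # clean update

try:
    # Simulate the disk filling up halfway through the next update:
    # a partial ".tmp" file exists, but the rename never happens.
    with open(path + ".tmp", "w") as f:
        f.write("vers")
        raise OSError("No space left on device")
except OSError:
    os.unlink(path + ".tmp")

print(open(path).read())  # "version 2": never a half-written file
```

Note that this only protects each file individually; a set of files (an executable plus three libraries, say) is still not updated atomically as a group, which is the gap a real transaction would close.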

-- 
Darren New / San Diego, CA, USA (PST)



From: Darren New
Subject: Re: Linux really costs a _lot_ more than $40
Date: 24 Nov 2008 02:02:09
Message: <492a5171$1@news.povray.org>
Darren New wrote:
> files fails. You can't handle that exception by rolling back your 
> changes manually.

By which I mean, you can't handle that exception by writing code to 
inspect errno and automatically undoing the changes you made. Of course 
you can handle it "manually" as in a human intervening.

-- 
Darren New / San Diego, CA, USA (PST)



From: Darren New
Subject: Re: Linux really costs a _lot_ more than $40
Date: 24 Nov 2008 02:15:00
Message: <492a5474$1@news.povray.org>
Jim Henderson wrote:
> I've *never* seen shared 
> libraries updated by opening them and writing a change into them, then 
> saving them out again.

So, you haven't used gcc much, then? :-)

-- 
Darren New / San Diego, CA, USA (PST)




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.