Subject: Re: Linux really costs a _lot_ more than $40
From: Jim Henderson
Date: 23 Nov 2008 14:50:11
Message: <4929b3f3@news.povray.org>
On Sat, 22 Nov 2008 21:42:12 -0800, Darren New wrote:

> Jim Henderson wrote:
>> On Sat, 22 Nov 2008 09:46:38 -0800, Darren New wrote:
>> 
>>> Jim Henderson wrote:
>>>> why would you not be able to update it consistently?
>>> You write the .h file. While trying to write the .so file, you A)
>>> don't have permissions to do so,
>> 
>> The updater requires root rights to run, so it has permissions to do
>> that.
> 
> Not always. See network-mounted file systems, file systems mounted
> read-only, write-protect tabs, etc.

You'd have to configure things pretty strangely to get the relevant 
filesystems mounted in a way that's inconsistent with the updater's 
requirements - say, being able to update development header files but not 
the shared libraries.  You'd almost have to be trying to create a 
situation where the updater doesn't work - and sure, you could do that, 
but what would be the point other than to break things?

>>> B) find that it's open by someone else as a shared text segment,
>> 
>> I don't believe that would matter,
> 
> You're mistaken.

Well, it's something I've never run into, and I've only been running 
Linux for about 12 years...

>> again, it's the inode that's open, and when the file is overwritten a
>> new inode is created and the old one is destroyed.
> 
> Only if you unlink the old file then creat() a new one.  If you actually
> open the current file for writing (and it's executing or sticky in swap,
> and it has a shared text segment), you get an error, even as root.

Well, the installer does just that - it unlinks and creates new files, as 
near as I can tell.  But even in a case like that, the updater will error 
out and say that a file it needs can't be updated, and give you the 
option of aborting or attempting to continue - the same kind of error you 
get when it tries to auto-refresh after making a network change (which I 
think is stupid, because changing the network takes the connection down, 
so it ALWAYS errors once).

>>> C) run out of disk space,
>> 
>> That would create other problems as well
> 
> Yes? So?  We know it's bad and you should avoid it, yes.

My point is that you would probably notice the problem before actually 
running out of disk space.  Thinking about it, though, I have run into 
this before, and the system's integrity didn't get messed up.

> Note that nobody expects "I ran out of disk space" (or "I ran out of
> file handles or swap space") to result in their root directory getting
> truncated, for example. So it's not so bad that it should leave other
> parts of your system in an inconsistent state.
> 
> I also left out "Linux out-of-memory killer randomly decided to nuke my
> process because someone else grabbed a bunch of memory."
> 
>>> D) have your process killed,
>>> E) have the power fail,
>>> ....
>> 
>> Yeah, those could happen and could introduce problems - so you just
>> reinstall the packages and that makes things consistent.
> 
> And until you do, things are broken. Which is kind of my point.
> 
> It's also not an orthogonal solution. Everyone has to reinvent the wheel
> for themselves when trying to update multiple files consistently.
> 
> And if the package manager updates the database *before* it updates the
> files, you might never know it.

If you never know it, though, then things are working as expected - and 
if they're not, you can compare the database against the filesystem using 
RPM's verify options.
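
For example, on an RPM-based system (the package name here is just 
illustrative):

rpm -V zlib    # verify one package against the database
rpm -Va        # verify every installed package

Each output line flags what no longer matches what the database 
recorded - "S" for size, "M" for mode/permissions, "5" for the file 
digest, "T" for mtime, and so on.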

Jim


