Who was looking for message-passing OS examples?

From: Darren New
Subject: Re: OS what-ifs
Date: 14 Aug 2008 20:25:13
Message: <48a4cce9@news.povray.org>
Jim Henderson wrote:
> Um, I think you'll find that Linux is a derivative of Minix, not UNIX.  
> At best it's Unix-like.

I don't know that you'd call it a "derivative" of either, really. Clearly the 
whole thing is very UNIX-like, and since I'm only talking about the 
design of the OS (the UI, the API, the file system layout, etc), it 
doesn't really matter either way, since Minix and Unix both share the 
whole *ix bit.

-- 
Darren New / San Diego, CA, USA (PST)



From: Darren New
Subject: Re: Singularity
Date: 15 Aug 2008 13:17:35
Message: <48a5ba2f$1@news.povray.org>
Warp wrote:
>   If you are doing a simple linear traversal, maybe, but if it's any more
> complicated than that...

Hmmm... Apparently there are mechanisms in place (which I don't really 
understand) that let you interactively prove to the compiler that some 
sequence of operations is safe, and then that gets recorded and the 
compiler can take advantage of it.  I.e., if you can prove that you 
never go out of bounds, even if the proof is "non-obvious" to the 
machine, then the compiler will omit the run-time checks.  Awesome.  :-)

Personally, I can't imagine how you go about doing such a thing, except 
maybe adding stuff to the code describing what/why you think it's true 
and running it through the compiler again, which doesn't seem like 
"interactive" to me. But I'm not finding anything online that isn't 
either "it's really cool" or "here's 40 pages of mathematics describing 
how it works."


Also, I imagine you could put in appropriate assertions, such that if 
you say (for example)

void flog(int[] myints, int startinx) {
   assert myints.length > 500;                 // checked once, on entry
   assert startinx > 100 && startinx < 400;    // so 51 <= i <= 448 below
   // Given the asserts, i+10 <= 458 < myints.length, so the loop body
   // itself needs no per-access bounds check.
   for (int i = startinx - 50; i < startinx + 50; i++)
      myints[i] = myints[i+10];
}

then the compiler could track the possible ranges of values, and you'd 
get runtime checks at the entry to the function but not inside the loop, 
as an example.

But yeah, figuring out which next bit of object to bounce a ray off of 
is obviously going to take some run-time checks.

But honestly, I've never seen code where which element gets accessed 
next is obvious to a programmer but not to the compiler. I've never seen 
code where you could prove to a person's satisfaction that it was 
correctly accessing the array but couldn't prove it in a formal way 
given what's in the code itself, assuming you have all the code in front 
of you, of course.

Do you have any examples of that? I'm sure there must be some out there, 
but I don't do that sort of programming, I think. I think the closest 
I've gotten is knowing that the program that generated the file put 
things in it such that the program reading the file doesn't have to 
check. (E.g., the writer of the file never puts more than 80 chars per 
line, so the reader doesn't have to check, and that's because I wrote 
them both myself.)

-- 
Darren New / San Diego, CA, USA (PST)



From: Orchid XP v8
Subject: Re: Singularity
Date: 15 Aug 2008 13:45:55
Message: <48a5c0d3$1@news.povray.org>
Jim Henderson wrote:

> I've always said that Microsoft is outstanding at producing software 
> that's "just good enough" - ie, it is buggy, but it's good *enough* that 
> people aren't flocking away.

And I've always said that M$'s greatest achievement is in *redefining* 
what people will consider to be "good enough".

Not so many years ago, software that wasn't 100% crash-free was 
unacceptable. Today this is considered "normal". And it's all due to M$.

> That doesn't mean that they wouldn't/couldn't/shouldn't go through a 
> process of striving for continuous improvement in their development 
> processes.  And clearly that's something they do (I've known people who 
> have worked in MS Engineering, so this isn't conjecture on my part - it's 
> based on conversations with former colleagues who worked at MS in that 
> capacity).

Really? It's actually to their best advantage economically to make their 
software as inefficient as possible. (Although making it work 
*correctly* would be beneficial to them, making it work *efficiently* 
would cause them to lose money.)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Jim Henderson
Subject: Re: OS what-ifs
Date: 15 Aug 2008 13:58:31
Message: <48a5c3c7$1@news.povray.org>
On Thu, 14 Aug 2008 17:25:13 -0700, Darren New wrote:

> Jim Henderson wrote:
>> Um, I think you'll find that Linux is a derivative of Minix, not UNIX.
>> At best it's Unix-like.
> 
> I don't know that you'd call it a "derivative" of either, really. Clearly the
> whole thing is very UNIX-like, and since I'm only talking about the
> design of the OS (the UI, the API, the file system layout, etc), it
> doesn't really matter either way, since Minix and Unix both share the
> whole *ix bit.

That's more of a POSIX thing IIRC.  Tanenbaum would say that they're 
different as well, but Linux started as a free MINIX (since MINIX was 
distributed under a restricted license at the time).

Jim



From: Jim Henderson
Subject: Re: Singularity
Date: 15 Aug 2008 14:02:30
Message: <48a5c4b6@news.povray.org>
On Fri, 15 Aug 2008 18:46:00 +0100, Orchid XP v8 wrote:

> And I've always said that M$'s greatest achievement is in *redefining*
> what people will consider to be "good enough".
> 
> Not so many years ago, software that wasn't 100% crash-free was
> unacceptable. Today this is considered "normal". And it's all due to M$.

Well, again, fair play to Microsoft - computing has gotten a lot more 
complex over the last 20 years.

>> That doesn't mean that they wouldn't/couldn't/shouldn't go through a
>> process of striving for continuous improvement in their development
>> processes.  And clearly that's something they do (I've known people who
>> have worked in MS Engineering, so this isn't conjecture on my part -
>> it's based on conversations with former colleagues who worked at MS in
>> that capacity).
> 
> Really? It's actually to their best advantage economically to make their
> software as inefficient as possible. (Although making it work
> *correctly* would be beneficial to them, making it work *efficiently*
> would cause them to lose money.)

Are you old enough to be *that* cynical? ;-)

There is something to what you say, though; one of the factors that I've 
seen (and heard discussed) that caused the decline of NetWare was that it 
was *too* stable.  People installed the server and forgot about it.  Look 
at the rather well-known story about the school that actually sealed a 
NetWare 2.x server in a closet because they forgot about it.  Not an 
urban legend, this actually happened (University of North Carolina IIRC).

There were other factors as well that contributed to the decline of 
NetWare, including some really bad missteps on Novell's part, like 
rebranding it to "IntraNetWare" (which I consider one of the biggest 
blunders the company has made, *and* one it hasn't necessarily learned 
from as well as it should have).  Having a bit of instability keeps the 
system in mind, and MS 
does an outstanding job of keeping people on the "upgrade treadmill".

Jim



From: Darren New
Subject: Re: Singularity
Date: 15 Aug 2008 14:16:46
Message: <48a5c80e$1@news.povray.org>
Orchid XP v8 wrote:
> Not so many years ago, software that wasn't 100% crash-free was 
> unacceptable. 

Nonsense.  I'm guessing it actually crashed at a higher rate, but 
nowadays you have orders of magnitude more people using software.

Or do you forget "sad mac" and "guru meditation" and "kernel panic"? Of 
course all these things are common terms in the industry because they 
never, ever happened.

-- 
Darren New / San Diego, CA, USA (PST)



From: Warp
Subject: Re: Singularity
Date: 15 Aug 2008 14:22:41
Message: <48a5c971@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> But honestly, I've never seen code where which element gets accessed 
> next is obvious to a programmer but not to the compiler. I've never seen 
> code where you could prove to a person's satisfaction that it was 
> correctly accessing the array but couldn't prove it in a formal way 
> given what's in the code itself, assuming you have all the code in front 
> of you, of course.

  Of course everything can be proven formally. The problem is how hard it
is to prove it. If you need to be a mathematician to formally prove that
certain values always keep within certain boundaries, that can be pretty
demanding for a programmer.

  One practical situation where it may be hard to prove you never access
out of bounds is when you have a dynamic hash table, where the elements
at a certain position form a linked list, but all the elements are still
stored in the same dynamic array. Each element is a pair: The data, and an
index to the next element in the linked list. When you search for a value,
you calculate the hash index (which can already be hard to prove to be
inside the dynamically-sized hash table), and check if the value is at that
position. If it isn't, you take from that position the index to the next
position and check that, etc.

  Assume that elements can be removed from the hash table, and that the
hash table array size can be reduced if enough elements are freed
appropriately. Naturally you would now have to prove that all the indices
left in the hash table point to valid indices in the now-smaller hash
table. In other words, it's not enough to prove that indices are valid
when you insert a new element in the hash table, you also have to prove
that the existing indices keep valid when the size of the hash table is
reduced.

  I would be pretty impressed by the programming language where you are
even able to express that.

  Intuitively proving this to a human could be much easier (although
it would probably not be a formal proof).

-- 
                                                          - Warp



From: Warp
Subject: Re: Singularity
Date: 15 Aug 2008 14:32:11
Message: <48a5cbab@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> Not so many years ago, software that wasn't 100% crash-free was 
> unacceptable. Today this is considered "normal". And it's all due to M$.

  OTOH, 20 years ago most serious operating systems (and consequently other
software) were running huge servers with hundreds, if not thousands of
users.

  This was rather common especially at universities and other similar
academies: You were lucky if you had access to an actual desktop computer.
Usually all you got was a dumb terminal connected to the main server,
which was used by hundreds of others at the same time.

  Back then it was completely unthinkable that the server would crash
randomly. It was more or less assumed that the server could handle,
without problems, hundreds of simultaneous users logged in, each of them
running half a dozen programs, and most of them using the internet
(or some other network).

  Today the situation is more or less reversed at those same places:
You are lucky if you get to log in to a big server. Instead, you are
usually stuck with a crappy desktop computer (usually running Windows,
but if you are really lucky, Linux).

  AFAIK, in many of those places they run weekly - if not nightly -
restores on all the desktop computers automatically (i.e. wipe the system
clean and put a fresh copy back from a network disk, or whatever) rather
than wait for them to rot. (Usually student files are stored on a network
disk rather than on the computer itself.)

-- 
                                                          - Warp



From: Darren New
Subject: Re: Singularity
Date: 15 Aug 2008 14:47:08
Message: <48a5cf2c$1@news.povray.org>
Warp wrote:
>   Of course everything can be proven formally. The problem is how hard it
> is to prove it. If you need to be a mathematician to formally prove that
> certain values always keep within certain boundaries, that can be pretty
> demanding for a programmer.

Fair enough. I guess the research is in making it easier to prove. 
Apparently the tooling for proving Ada code (or rather the subset 
called "SPARK") valid lets you do proofs by induction and by contradiction, 
which seems pretty ... intense.

>   One practical situation where it may be hard to prove you never access

An excellent example. Thanks.

I'm wondering if it's really as hard as it sounds, *given the code*. 
Obviously the hash is going to have to be taken modulo the table size, 
for example, and storing the link into the hash table is going to have 
to calculate the link somehow. I think by leaving out the details, it 
might look harder to prove than it is.  Given the code for calculating 
what the next link would be, and where to store it into the table, it 
might be pretty obvious you can't run off the end.

Of course, shrinking the table might also be either really hard, or less 
hard, to prove, depending on the actual code that decides when to shrink 
the table. How does the code know it's time to shrink the table? The 
same code that checks there aren't pointers off the end of the table 
would be able to prove that it's safe to shrink the table.

If you actually have such code you could show, that would be cool. 
Otherwise, yah, it's probably a good example. Something like B-tree 
maintenance could be interesting to look at.

>   I would be pretty impressed by the programming language where you are
> even able to express that.

Here's an example out of the Spec# document, for inserting into a 
variable-length array:

one can specify the postconditions of Insert as follows:
   void Insert(int index, object value)
     ensures Count == old(Count) + 1;
     ensures value == this[index];
     ensures Forall{int i in 0 : index; old(this[i]) == this[i]};
     ensures Forall{int i in index : old(Count);
         old(this[i]) == this[i + 1]};
     { ... }

(Which basically says the array is one larger, the element you passed in 
shows up where you said, the ones before didn't change, and the ones 
after moved up one.)

So stating that all the next-index fields of the entries in the hash
table are less than the size of the table is just a Forall expression:

Forall{int i in 0 : hasharray.size; hasharray[i].next < hasharray.size};

Now, of course, proving that without a runtime check is the hard part.

>   Intuitively proving this to a human could be much easier (although
> it would probably not be a formal proof).

Hmmmm... Hard to say, really. Certainly it would be easier to be less 
precise, but I'm not sure it would be easier to *prove* it to a human than 
it would be to prove it to a machine. For example, it's pretty easy for 
a compiler to prove a binary search is safe and to remove the runtime 
checks from that.

Correctness, yes. But safety?  I'm still unconvinced.

As in, it could be much easier to prove for a text editor you'll never 
overflow the screen buffer than it is to prove that the sequence of 
operations you do to insert a line of text on the screen actually leaves 
the screen looking how you'd expect it to. The types of things you want 
to prove for a safe language are actually rather constrained compared to 
"proving this program matches the specs."

Personally, my take is that if you can't formally prove your code 
doesn't have holes in it, the run-time check is probably worthwhile at 
that point, if you're actually interested in the robustness of your 
code. If performance is a problem, put in enough asserts that the 
runtime checks happen outside the hot spots instead of inside the hot 
spots.

-- 
Darren New / San Diego, CA, USA (PST)



From: Darren New
Subject: Re: Singularity
Date: 15 Aug 2008 15:02:19
Message: <48a5d2bb$1@news.povray.org>
Warp wrote:
>   OTOH, 20 years ago most serious operating systems (and consequently other
> software) were running huge servers with hundreds, if not thousands of
> users.

... but still running a fairly small collection of software compared to 
what tends to run on machines today. The Linux boot partition today is 
multiple times the size of the biggest hard drives you could put on 
"desktop" computers 20 years ago, and orders of magnitude bigger than 
the disk space available on a mainframe 40 years ago.  10 years ago I 
was impressed when the boss brought in a Windows machine with 128Meg in 
it. Nowadays you can't even boot Windows in that, let alone do something 
useful.

And of course, the hardware is flakier too. That's why NASA puts 
20-year-old technology into satellites and space ships. You read the 
google papers, and you see they run some job and it takes on the average 
1000 machines, 7 minutes, and during that time on average two machines 
will have some sort of hardware failure. Your software can seem really 
flaky when your swap space has an unreliable sector in it - btdt.

Plus, software changes much faster today.  People in a university aren't 
going to use a compiler last updated (except for bug fixes) 10 years 
ago. Nobody these days uses a C++ compiler last updated before the STL 
was created. Nobody uses a database engine that has been stable for 10 
years, or even an OS that has been stable for 10 years.

It's easy to make a system that doesn't crash. Get it working, then stop 
dicking around with it. Windows servers are quite reliable, for example, 
because people don't install new stuff and then take it off, plug in 
random USB devices, try out new screen savers, etc. They're very careful 
with their upgrades, just like they are for Linux.

>   This was rather common especially at universities and other similar
> academies: You were lucky if you had access to an actual desktop computer.

Given the power of desktop computers 20 years ago, I'd disagree with the 
"lucky" part of that. ;-)  The stupidest thing the CIS department did at 
my undergrad school was to start teaching assembler on Commodore PETs 
instead of on the mainframe.

-- 
Darren New / San Diego, CA, USA (PST)



