POV-Ray : Newsgroups : povray.off-topic : I am convinced...
  I am convinced... (Message 21 to 30 of 43)  
From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 13:04:10
Message: <4d10ec1a@news.povray.org>
scott wrote:
> Because there are a tiny number of unix users who would follow 
> instructions such as "you must run this attachment as admin to regain 
> access to your bank account" from a random email. 

http://en.wikipedia.org/wiki/Christmas_Tree_EXEC

Enough that it made international news at the time. Granted, this was IBM 
big iron, not unix, but the principle is the same.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 13:38:31
Message: <4d10f427@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>>>   If the very first version of DOS had had a similar account/password
>>> system as unixes,
> 
>> ... then it wouldn't have run on an 8086, and MS would be broke.
> 
>   Or maybe we would have much better PCs today because they would not be
> based on (and mostly backwards-compatible with) a totally antiquated and
> obsolete architecture designed by IBM and Intel.

Maybe. But the machines had to be cheap enough for individuals to buy. More 
importantly, they had to be cheap enough to fit into the right spot in a 
corporate budget.

The only thing that made PCs "cheap" at the time was the clone makers. It's 
hard to say whether things would have taken off or not, but it's not like 
there weren't powerful UNIX-based machines with well-designed CPUs in the 
mix at the time.

>   Think about how the game industry has boosted the development of graphics
> cards. Imagine if the same boost would have been done to the PC architecture
> by OS vendors.

They had things like that. The Dolphin running Smalltalk. The Amiga's 
specialized chips. LISP machines. IBM made an APL luggable computer I worked 
on for a while. Heck, even the Mac was in the competition. And at the low 
end you had dozens of Radio Shack, Apple ][, Vector Graphic, Tektronix, 
Kaypro, Commodore, and a dozen other brands of machines, many of which were 
mostly compatible with each other at the software level via CP/M.

And the 8086/8088 was designed to run Pascal, which also didn't take off.

So... what happened?

>> It's hard to say. Most of the other systems of the day didn't have it either.
> 
>   Multi-user unix systems were certainly being used in many environments
> (eg. at universities with thousands of students) back when Windows95 didn't
> even exist. Back then things like logins, passwords and access rights were
> a given in those systems. Yes, I have personal experience.

Sure. But the other computers you'd buy yourself, for one user, didn't have 
any password stuff. I'm talking mostly the 8-bit computers that the IBM PC 
wound up replacing.  Sure, I used Solaris machines and even z8000-based UNIX 
machines. They weren't something you'd buy for a secretary, tho.

And when you have a computer with thousands of users, you have someone 
knowledgeable taking care of it, and everyone with access has a basic 
understanding of computers or they wouldn't have access. It wasn't a general 
purpose tool - it was a computing tool, and to use it, you needed a 
fundamental understanding of how the computer worked.

>> Contrast with something like Singularity, where you explicitly list every 
>> program you're going to run
> 
>   I never said that unix is the perfect system. I just said that it's
> *better* (in terms of safety) because the fundamental design is different
> (namely, it's intended to be a multi-user system).

Well, Windows is too, *now*. You can blame Windows users for the problems, 
and blame Microsoft for Windows users. But having actually dealt with people 
who don't know what they're doing, I have come to the conclusion that if you 
put a general purpose tool in the hands of someone who has *no* idea how it 
works, you're going to get scripted behaviors (i.e., people who take notes 
on the steps they have to go thru to send a picture to their grandchild) 
with no understanding or desire for understanding of the implications for 
anything except "did it work?"

In other words, I don't think that having had logins from the beginning 
would have taught people that mail can be forged. I mean, heck, do you think 
having logins on the computer would teach people not to fall for Nigerian 
scams? Do you think it would teach people not to fall for phishing scams? 
Why would you think it would teach people not to fall for any other sort of 
forged mail?

>   The point is that if operating systems had had the proper design from
> the start, things like computer viruses wouldn't exist 

I disagree. That's exactly why I listed all the security flaws that UNIX 
fixed over time. Such OSes *would* have bugs, *did* have bugs, and they'd 
continue to have bugs as new capabilities were introduced. Nobody had a 
Morris worm before SMTP. Nobody had UUCP appending mail to /etc/passwd 
before UUCP was around. Nobody stole passwords by connecting to a coworker's 
X terminal before X was invented. Every operating system has had viruses 
and worms and such, including those that had multi-user access controls built 
in to start with. Granted, it's hard to know exactly how many, especially 
given the explosive growth in the number of machines in use and the 
explosive growth in the number of computer-naive users. But, too, when a bug 
was found in UNIX, it wasn't valuable to avoid reporting it, so bugs usually 
got fixed instead of exploited. Except by Kevin Mitnick, who also made 
international news by stealing stuff not from Windows machines, but from 
UNIX machines.

Now we're in a different world, where it's actually valuable to find and 
exploit flaws, rather than reporting them when you come across them in 
normal usage. (Sort of like now suddenly we need to protect airplanes from 
terrorists, and not just the passengers. :-)

Basically, history does not bear out your statement, and the disproof is why 
I listed all the UNIX flaws that had been fixed over time.

Heck, by all estimates, Mac OS X has more security holes in it than Windows 
does (per installed unit, obviously), and it's based on an OS that has 
always had logins.

That said, yes, certainly a system that has always had multi-user 
authentication (and, more importantly, a separation of administrative duties 
from daily operations) is superior to one that doesn't. But when every 
system now has multi-user controls, and people try to deliver applications 
over the internet yadda yadda, you wind up with viruses that don't need 
administrative privs to propagate. I suspect that's the vast majority of 
active viruses now - those that steal personal information or add you to a 
bot-net, neither of which need (or even want) admin privileges.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 13:41:21
Message: <4d10f4d1@news.povray.org>
Warp wrote:
> I wouldn't be surprised if that was the case today as well.

I saw an interesting article a couple weeks ago (on LWN?) where the author 
looked at the security bugs fixed, and tracked back thru the archives to 
figure out where they'd been introduced, to see if the number of security 
holes in Linux is going up or down. Apparently, the number is going down, 
very slowly, but the trend is close to the measurement noise. (It's hard to 
tell where a bug was introduced when it's several interacting subsystems 
that combine to cause it, for example.)

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: I am convinced...
Date: 21 Dec 2010 13:42:35
Message: <4d10f51b@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> I don't think it applies any longer, tho. Indeed, in many ways I think 
> Windows might have a more secure architecture than UNIX nowadays, even if in 
> practice it's not quite up to snuff and in practice it gets attacked more.

  Has the number of viruses, worms and especially malware declined with
Windows Vista and 7?

-- 
                                                          - Warp



From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 14:03:34
Message: <4d10fa06$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> I don't think it applies any longer, tho. Indeed, in many ways I think 
>> Windows might have a more secure architecture than UNIX nowadays, even if in 
>> practice it's not quite up to snuff and in practice it gets attacked more.
> 
>   Has the number of viruses, worms and especially malware declined with
> Windows Vista and 7?

Compared to XP? I would certainly think so. More importantly, they've moved 
to attacking things *other* than the OS. Like java in web pages, or trojans, 
or phishing scams. I don't think that the number of malware attacks says 
much about the security of the systems nowadays, as much as the popularity.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Orchid XP v8
Subject: Re: I am convinced...
Date: 21 Dec 2010 15:38:41
Message: <4d111051$1@news.povray.org>
On 20/12/2010 06:41 PM, Warp wrote:

>    On the subject of virus scanners in particular, I'd say that the very
> need to have such scanners is a symptom of fundamentally bad OS design.

I'd go along with that.

>    The unix philosophy of OS design has always been a step or two closer
> to the safer design (with respect to computer viruses and other malware)
> than the typical DOS/Windows (and other similar OS's in the past) design.
> The reason for this is that unixes have always been designed to be
> multi-user operating systems while DOS/Windows has been designed to be
> a single-user OS with no regard to security.

That's pretty much it, right there.

>    The DOS/Windows design always took basically the exact opposite approach:
> Whatever the user wants to run or do, the OS allows.

For a machine which is physically incapable of being networked, this is 
a perfectly reasonable way to proceed. The Commodore 64 and the ZX 
Spectrum had exactly the same "flaw" in their design. Only if a computer 
is *capable* of being used by more than one person does any question of 
"security" even exist.

> Unfortunately it took over 20
> years for Microsoft to rid itself of this mentality (for some reason MS
> has always been very slow to adopt certain ideas).

*This* is where it all went wrong.

Remind me: Is this or is this not the same Microsoft that famously took 
the attitude of "oh, this 'Internet' thingy is just a trendy fad; it'll 
never last"? For a long, long time they seemed to believe that 
networking was somehow "unimportant".

Either way, when they realised the true situation, they should have 
changed their design practices. Radically.

> NT had security, but it wasn't even intended for normal users.

No, it wasn't. Which is a pity, because it was quite a good OS.

Still, the main OS kernel lives on in some form. The Win9x kernel is 
gone, and 2000, XP, Vista and 7 are all based (increasingly loosely) on 
the NT kernel series.

In some ways, NT actually has *more* security features than Unix. For 
example, take file security. Under Unix, I can set permissions for one 
user, one group, and everybody else. Under NT, I can set permissions for 
as many users and user groups as I damned well like. And this isn't a 
theoretical ability; I use it extensively in my day job. Not only that, 
but where Unix has only umask, NT actually allows you to set the default 
file permissions on a per-folder basis, and to reset the permissions on 
all the files in a folder easily. Hell, I can even control who is allowed 
to *see* the permissions on a file or folder.
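For the Unix side of that comparison, here's a minimal sketch in Python (stdlib only; the temporary file is just for illustration):

```python
import os
import stat
import tempfile

# Unix collapses file security into exactly three principals --
# owner, group, and everyone else -- nine mode bits in total.
fd, path = tempfile.mkstemp()
os.close(fd)

# rw- for the owner, r-- for the group, nothing for others.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640

# There is no room in the mode bits for "this one extra user may also
# write" -- that is the per-principal granularity NT ACLs (or POSIX
# ACLs, via setfacl) add on top.
os.remove(path)
```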

On top of that, you can make the OS log a record every single time a 
particular action is performed on a given file. (Unix, on the other 
hand, doesn't even provide a way for a process to be *notified* when a 
file changes, much less to log such changes at the OS level.) Again, I 
can control this at the file level, so I only get log entries for files 
that I actually care about, not every single file in the entire 
filesystem. And not for every action, and not for every user.

And that's just files. You can also set permissions controlling who is 
allowed to kill a given process. Or clear the print jobs in a given 
queue. And more besides. (Sadly, the OS GUI doesn't expose most of this 
very useful control, and there are certainly no CLI tools for this 
either. Indeed, if you want to set file permissions, you can *only* do 
it via the GUI. Well, no, now there's PowerShell which can probably do 
it...)

> It wasn't until XP that some
> *semblance* of security was introduced (yet, nevertheless, the mentality
> of the regular user being by default the superuser was still there

NT was where the big security changes happened. Before NT, any talk of 
"security" was a nonsense. XP is just the first really well-known 
consumer OS to feature these changes. (Windows 2000 didn't really get 
noticed much, for whatever reason. Note that Windows 2000 /= Windows ME.)

Unfortunately, it turns out that 99% of all Windows software still 
*assumes* that it has permission to do everything. And thus, you come up 
with stupid glitches like, for example,

   Nero claims that you do not possess a CD burner, *unless* you are 
logged in as a superuser.

One could argue that it's not so much /Microsoft/ that is slow to adapt, 
or even its /users/. No, it's the people writing Windows software. If I 
had a penny for every time I've had to do something stupid just to make 
the buggy, barely-functional device driver for some crappy cheap-arse 
piece of hardware work...



Of course, Microsoft don't really help themselves sometimes. They have 
an almost obsessive tendency of making everything as scriptable, 
programmable and customisable as possible. I guess because all those 
extra features look good on the tin?

For example, if you use Outlook (not Outlook Express) you can create an 
"email" which is actually a full-fledged application, in effect. (This 
might also require Exchange, I'm not sure.) It ranges from simple stuff 
like sending out appointments and questionnaires, right up to building 
complex fillable forms, and having the server receive the responses and 
do non-trivial processing with them, possibly producing additional 
emails in response. Stuff like that.

All of which is very *powerful* and everything. But the net result is 
that everything from emails to Word documents to spreadsheets, 
databases, presentations, and so forth can have arbitrary executable 
code embedded in it, and more often than not that code is executed 
immediately, as soon as you touch the item, often without you even 
realising it. This enables you to create very "rich" documents. For 
example, the other day I saw a PowerPoint presentation which is like a 
hyperlinked, browseable product catalogue. Very impressive stuff. And 
it's no secret that you can use Access to build what amounts to a 
desktop application.

If you think in terms of shiny features, all of this sounds fantastic. 
If you think in terms of computer security, all of this sounds like a 
catastrophe just waiting to happen. I mean, who thought that a *word 
processor document* being able to alter the local filesystem was a good 
idea?! Most users have no clue that you can do this. But I promise you, 
you can. (Or, you could. These days as soon as you open a Word document, 
you have to OK a dozen messages just to make it open, regardless of 
whether it does anything even mildly dangerous.)

Of course, having access to the local filesystem allows a Word document 
to be part of a big happy desktop application, developed using just MS 
Office. Why write a GUI application when you can just customise Word a 
little, and use a flat XML file as your database?
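That "flat XML file as your database" pattern is easy to picture. A toy sketch in Python (all element and attribute names invented):

```python
import xml.etree.ElementTree as ET

# A "database" that is just a flat XML document -- the kind of thing an
# Office macro might read and write instead of talking to a real DBMS.
doc = ET.Element("customers")
ET.SubElement(doc, "customer", id="1", name="Alice")
ET.SubElement(doc, "customer", id="2", name="Bob")

# A "query": find a record by attribute value, no SQL engine required.
hit = doc.find("customer[@name='Bob']")
print(hit.get("id"))  # prints "2"
```

Convenient for a macro, but of course there's no access control, no locking, and no integrity checking anywhere in the loop.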

Now, if the system had been designed with security in mind from the 
start, nobody would have done anything so stupid. But now all these 
features exist (and, weirdly, continue to be designed), and somehow you 
have to make it secure. MS's idea of "make it secure" isn't "disable 
local filesystem access unless there's a damned good reason to enable 
it". It isn't even "check whether the macro tries to access the local 
filesystem" (which is trivially checkable). No, it's "slap a big, fat, 
flashing warning on top of EVERYTHING, dangerous or not". Because, let's 
face it, that's very easy to code.

And after the 198th time the user sees this annoying, unnecessary error 
message, they just stop paying attention. And then when a *real* threat 
comes along, the user will blindly and automatically click "yes, please 
run this unsigned code". Because they've had to click it a thousand 
times before in order to get stuff done.

This is not "security". The whole XP Security Center is nothing more 
than a nag screen that constantly whines at you. "Turn on updates. 
Install a pricey AV product. Turn on the firewall." It does nothing to 
actually increase security. (Automatic updates themselves may do. But 
Linux has copied that idea too now...)



Really, there are several reasons why Windows (and related MS products) 
are less secure than their Unix counterparts:

- Microsoft failed to realise that networks would become "important" 
(and hence, security would be necessary).

- Once they did decide that networking actually was the future, they 
implemented lots of security stuff but failed to really make full use of it.

- There is now a huge codebase of cheap, buggy, unsupported software 
which people expect to work on Windows. If you start actually doing 
things in a secure way, most of this software will break. (This is 
technically a GOOD THING, but it doesn't sound very good to the people 
who just want to do their stuff.)

- Microsoft thinks that endless rafts of whizzy features are more 
important than computer security. (That's quite a serious problem, right 
there.)

- Unix is the OS for computer experts. Windows is the OS for idiots so 
stupid that arguably they shouldn't be let near a computer in the first 
place. Wanna guess which one has the biggest security problems?

- Windows systems outnumber Unix systems 10^4 : 1. Wanna guess which one 
most people spend their time trying to attack?

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 15:55:22
Message: <4d11143a$1@news.povray.org>
Orchid XP v8 wrote:
> Either way, when they realised the true situation, they should have 
> changed their design practices. Radically.

Really, few other people understood the security flaws at the time either. 
HTTP wasn't invented with encryption in mind. SMTP had no controls on who 
could send mail. Basically, there wasn't even a standard encryption suite 
until long after Windows was networked and HTTP started being used for 
commercial applications.

 > Indeed, if you want to set file permissions, you can *only* do
 > it via the GUI.

cacls is so old it's deprecated in favor of icacls.

And no, Windows doesn't tend to provide command-line tools for things you 
can do via scripting. That's what scripting calls are for, and why 
PowerShell is looked on so favorably by Windows admins.

When your fundamental OS interface is essentially object-oriented and 
API-based, you don't get a whole lot of non-programming command-line tools 
to manage stuff.  It's only when most of your configuration is stored as 
text files that you get text-based tools to manage it.

> Unfortunately, it turns out that 99% of all Windows software still 
> *assumes* that it has permission to do everything.

I don't believe that's true any more. It might have been when XP first came out.

> One could argue that it's not so much /Microsoft/ that is slow to adapt, 
> or even its /users/. No, it's the people writing Windows software. If I 
> had a penny for every time I've had to do something stupid just to make 
> the buggy, barely-functional device driver for some crappy cheap-arse 
> piece of hardware work...

Exactly.

> Of course, Microsoft don't really help themselves sometimes. They have 
> an almost obsessive tendency of making everything as scriptable, 
> programmable and customisable as possible. I guess because all those 
> extra features look good on the tin?

It's in part because they have big customers who need that stuff. It's not 
like UNIX isn't scriptable, programmable, and customizable.

> All of which is very *powerful* and everything. But the net result is 
> that everything from emails to Word documents to spreadsheets, 
> databases, presentations, and so forth all can have arbitrary executable 
> code embedded in it,

And *that* is exactly why malware is still around in multi-user systems. And 
it happens in browsers and everything too, so it's not restricted to 
Microsoft code.

> And it's no secret that you 
> can use Access to build what ammounts to a desktop application.

Well, that's kind of what it's for.

> And after the 198th time the user sees this annoying, unnecessary error 
> message, they just stop paying attention. And then when a *real* threat 
> comes along, the user will blindly and automatically click "yes, please 
> run this unsigned code". Because they've had to click it a thousand 
> times before in order to get stuff done.

That's a part of the problem. You can't ask users to be making that sort of 
decision, either at the "access the file system" level or at the more 
fine-grained level. The Android mechanism makes a lot of sense to users. But 
in a system where you don't declare what code you're going to run or what 
privileges you need (which applies to both UNIX and Windows and most other 
systems), this doesn't work.

And that's why I thought the new types of hardware coming out (portables, 
games, phones, and custom server hardware) might provide a chance to get 
away from these legacy problems.

> - Microsoft failed to realise that networks would become "important" 
> (and hence, security would be necessary).

I disagree this was unique to Microsoft.

> - There is now a huge codebase of cheap, buggy, unsupported software 
> which people expect to work on Windows. If you start actually doing 
> things in a secure way, most of this software will break. (This is 
> technically a GOOD THING, but it doesn't sound very good to the people 
> who just want to do their stuff.)

Exactly.

> - Microsoft thinks that endless rafts of whizzy features are more 
> important than computer security. (That's quite a serious problem, right 
> there.)

I believe starting around the time of XP SP2 they realized their lack of 
security was actually hurting their business.

> - Unix is the OS for computer experts. Windows is the OS for idiots so 
> stupid that arguably they shouldn't be let near a computer in the first 
> place. Wanna guess which one has the biggest security problems?
> 
> - Windows systems outnumber Unix systems 10^4 : 1. Wanna guess which one 
> most people spend their time trying to attack?

Yes and yes.


-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Orchid XP v8
Subject: Re: I am convinced...
Date: 22 Dec 2010 04:55:52
Message: <4d11cb28$1@news.povray.org>
On 20/12/2010 05:53 PM, Darren New wrote:
> that traditional OSes hold back the more sophisticated (as in, far from
> machine language) languages.

Perhaps. I guess it kind of depends on what you're trying to do. If you 
have an application that just does its computations in its own little 
world, then it's fine. If, on the other hand, the application wants to 
do arbitrary stuff with the whole machine, then... yeah.

Wasn't there an OS written in Smalltalk at some point? (Or was that only 
for special hardware?) Certainly you can see that if the Smalltalk 
runtime was running on the bare metal, it would have all sorts of 
interesting consequences. (For example, since *everything* can be 
changed, you could invent some new bizarre concept of files.)

Speaking of which...

   http://halvm.com/

That's Haskell running on bare metal. (Well, OK, no it isn't, it's 
running under a Xen hypervisor. But that's /almost/ the same thing.) 
Apparently the guys at Galois actually use this thing, so that (say) 
your web server runs in one VM, all by itself, and your DB runs in 
another VM, and so on. But it would be interesting to see what happens 
if you tried to write an actual "operating system" in Haskell, rather 
than just an application.

> http://www.artima.com/lejava/articles/azul_pauseless_gc.html

That contains a lot of "look at us, aren't we clever!" and not very much 
technical detail. It seems the only novel detail is that they're using 
the memory protection hardware to trap and remap accesses to moved 
objects. That's a neat detail, and one which (they assert) would be too 
inefficient with a normal OS. But that seems to be just about it.

Haskell has a rather interesting invariant: since *most* objects are 
immutable, old objects can never point to new objects. It seems like 
this should make some kind of interesting GC algorithm possible.
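Concretely, that invariant lets a collector trace the young generation while ignoring the old one entirely, since old objects can never reach young ones. A toy mark phase under that assumption, sketched in Python (all names invented; not how any real Haskell collector is implemented):

```python
# Toy heap: each object records its allocation order and its references.
# Immutability guarantees refs only point at *older* objects.
class Obj:
    counter = 0
    def __init__(self, *refs):
        Obj.counter += 1
        self.birth = Obj.counter
        self.refs = list(refs)  # all strictly older than self

def mark_young(roots, gen_boundary):
    """Mark live objects allocated after gen_boundary. Because old
    objects never point young, the old heap never needs scanning and
    no remembered set (write barrier) is needed."""
    live = set()
    stack = [o for o in roots if o.birth > gen_boundary]
    while stack:
        o = stack.pop()
        if id(o) in live:
            continue
        live.add(id(o))
        # Any reference into the old generation is simply ignored.
        stack.extend(r for r in o.refs if r.birth > gen_boundary)
    return live

old = Obj()            # old generation
a = Obj(old)           # young, points only old-ward
b = Obj(a)             # young root
garbage = Obj(a)       # young, unreachable
live = mark_young([b], gen_boundary=old.birth)
print(len(live))       # a and b survive; prints 2
```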

Personally, I thought the article on GC avoiding paging was more 
interesting.

> Traditional file system interfaces probably do too.

Now, do you mean "file systems", or do you mean "interfaces to file 
systems"?

> It's interesting that this sort of stuff is starting to get to the point
> where people will be willing to break with compatibility at some level.
> Phones, game consoles, set-top boxes, and eventually probably
> "enterprise" or "cloud" type servers will all be willing to consider a
> different operating system that puts limits on compatibility with
> previous languages and libraries.

Well, we'll see what happens.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Darren New
Subject: Re: I am convinced...
Date: 22 Dec 2010 13:42:35
Message: <4d12469b$1@news.povray.org>
Orchid XP v8 wrote:
> Perhaps. I guess it kind of depends on what you're trying to do. If you 
> have an application that just does its computations in its own little 
> world, then it's fine. If, on the other hand, the application wants to 
> do arbitrary stuff with the whole machine, then... yeah.

Not necessarily. There are memory models that really just don't mix, for 
example. GCed memory with non-GCed external resources is the classic. This 
paper describes how if you assume you have a C-like/Pascal-like workload, 
using thousands of page faults a second to do GC isn't going to be 
efficient. Etc.

> Wasn't there an OS written in Smalltalk at some point? (Or was that only 
> for special hardware?)

Yes. The "Dolphin" computer. It had programmable microcode, so when you 
booted a Smalltalk image, the machine code *was* Smalltalk bytecodes. When 
you booted a COBOL program, the machine code was appropriate for COBOL.

> Certainly you can see that if the Smalltalk 
> runtime was running on the bare metal, it would have all sorts of 
> interesting consequences. (For example, since *everything* can be 
> changed, you could invent some new bizare concept of files.)

They had what you'd consider a bizarre concept for files. That's the point. 
You didn't have to worry about "destructors" for files, precisely because 
"file" was built into the language, not the OS. So to speak.

> if you tried to write an actual "operating system" in Haskell, rather 
> than just an application.

But that's the point. When your application runs on the bare metal, there's 
no distinction between OS and application.

>> http://www.artima.com/lejava/articles/azul_pauseless_gc.html
> 
> That contains a lot of "look at us, aren't we clever!" and not very much 
> technical detail. It seems the only novel detail is that they're using 
> the memory protection hardware to trap and remap accesses to moved 
> objects. That's a neat detail, and one which (they assert) would be too 
> inefficient with a normal OS. But that seems to be just about it.

That's the only neat technical trick they used to implement their other 
tricks. I think it's more interesting that they're actually getting away 
from the linear memory model that C and Pascal foisted on us.

In other words, they're using an OO memory model. Objects are contiguous 
blobs, but it doesn't really matter beyond that where they are in the 
address space. So instead of doing a compacting garbage collection in a 
linear space, they do a compacting garbage collection on a per-page basis. 
*That* is really the innovation.

> Haskell has a rather interesting invariant: since *most* objects are 
> immutable, old objects can never point to new objects. It seems like 
> this should make some kind of interesting GC algorithm possible.

Erlang takes great advantage of that too, yes.

> Personally, I thought the article on GC avoiding paging was more 
> interesting.

Yeah. Again, it's the sort of thing you have a hard time doing in an OS 
tuned for something with a memory model like C's.

>> Traditional file system interfaces probably do too.
> 
> Now, do you mean "file systems", or do you mean "interfaces to file 
> systems"?

Interfaces. The reality of file systems nowadays isn't anywhere near what 
the interface looks like. Internally, it's much closer to Multics' file 
system, yet it's generally presented with UNIX file system semantics.

(In Multics, the basic way of getting to a file was essentially memmap. 
That's why UNIX files are all represented as an array of bytes, with no 
records, no insert/delete bytes, etc. It's modeling multics' interface 
without the convenience of either having it in memory *or* having the OS 
provide you useful services.)
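On a modern system the nearest equivalent of that "memmap" interface is the mmap call: the file is mapped into the address space and edited as if it were a byte array. A minimal Python sketch (stdlib only; the temporary file is just for illustration):

```python
import mmap
import os
import tempfile

# Create a small file, then map it into memory.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, multics")

with mmap.mmap(fd, 0) as m:
    # The file now *is* a byte array: no read()/write() calls, just
    # indexing and slicing -- the flat-bytes model UNIX files kept,
    # minus the convenience of actually being in memory.
    m[0:5] = b"HELLO"

os.lseek(fd, 0, os.SEEK_SET)
print(os.read(fd, 14))  # b'HELLO, multics'
os.close(fd)
os.remove(path)
```

Note there is no notion of records or of inserting/deleting bytes in the middle: overwrites in place are all the model gives you.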

> Well, we'll see what happens.

Yeah, sadly, I don't have a whole lot of hope. Altho the thing with each 
application running in its own Xen hypervisor slot sounds interesting. I'll 
have to look at the implications of that a little more closely.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Orchid XP v8
Subject: Re: I am convinced...
Date: 22 Dec 2010 14:10:25
Message: <4d124d21$1@news.povray.org>
On 22/12/2010 06:42 PM, Darren New wrote:
> Orchid XP v8 wrote:
>> Perhaps. I guess it kind of depends on what you're trying to do. If
>> you have an application that just does its computations in its own
>> little world, then it's fine. If, on the other hand, the application
>> wants to do arbitrary stuff with the whole machine, then... yeah.
>
> Not necessarily. There are memory models that really just don't mix, for
> example. GCed memory with non-GCed external resources is the classic.

And yet, code written in languages assuming GC management and languages 
assuming manual management can not only interoperate, but even be linked 
together in the same process image.

> This paper describes how if you assume you have a C-like/Pascal-like
> workload, using thousands of page faults a second to do GC isn't going
> to be efficient. Etc.

True enough.

>> Wasn't there an OS written in Smalltalk at some point? (Or was that
>> only for special hardware?)
>
> Yes. The "dolphin" computer. It had programmable microcode, so when you
> booted a Smalltalk image, the machine code *was* smalltalk bytecodes.
> When you booted a COBOL program, the machine code was appropriate for
> COBOL.

That sounds moderately mental. I'd hate to think what happens if you get 
the microcode wrong...

>> Certainly you can see that if the Smalltalk runtime was running on the
>> bare metal, it would have all sorts of interesting consequences. (For
>> example, since *everything* can be changed, you could invent some new
>> bizarre concept of files.)
>
> They had what you'd consider a bizarre concept for files. That's the
> point. You didn't have to worry about "destructors" for files, precisely
> because "file" was built into the language, not the OS. So to speak.

Not even built into the language - built into a library. And, 
presumably, by changing that library, you can change how the filesystem 
works...

(Does anybody remember how "Longhorn" was supposed to ship with a 
filesystem that was actually a relational database? *Really* glad they 
scrapped that idea!)

>> if you tried to write an actual "operating system" in Haskell, rather
>> than just an application.
>
> But that's the point. When your application runs on the bare metal,
> there's no distinction between OS and application.

Well, to some extent there /is/. By definition, an application is 
something that solves a real-world problem, and an OS is something that 
provides services to applications. Although in the case of something 
like a Smalltalk or Haskell OS, the "OS" might be statically linked into 
the application image - i.e., the "OS" might just be a runtime library.

> In other words, they're using an OO memory model. Objects are contiguous
> blobs, but it doesn't really matter beyond that where they are in the
> address space. So instead of doing a compacting garbage collection in a
> linear space, they do a compacting garbage collection on a per-page
> basis. *That* is really the innovation.

Sure. If the virtual address space supports it, why not? (Other than 
"because most OS designs assume that you would never want to do such a 
thing".)

>>> Traditional file system interfaces probably do too.
>>
>> Now, do you mean "file systems", or do you mean "interfaces to file
>> systems"?
>
> Interfaces. The reality of file systems nowadays isn't anywhere near
> what the interface looks like. Internally, it's much closer to Multics'
> file system, yet it's generally presented with UNIX file system semantics.
>
> (In Multics, the basic way of getting to a file was essentially memmap.
> That's why UNIX files are all represented as an array of bytes, with no
> records, no insert/delete bytes, etc. It's modeling multics' interface
> without the convenience of either having it in memory *or* having the OS
> provide you useful services.)

OK, now I must know... WTF is this "memmap" that I keep hearing about?

>> Well, we'll see what happens.
>
> Yeah, sadly, I don't have a whole lot of hope. Altho the thing with each
> application running in its own Xen hypervisor slot sounds interesting.
> I'll have to look at the implications of that a little more closely.

Remember that Galois mainly does work for people who want high assurance 
software. Stuff with mathematical guarantees of correctness, and so 
forth. For that kind of insanity, being able to guarantee that the OS 
isn't going to do something weird (because there isn't one) is probably 
quite useful. I'm more dubious about how useful it is for everyone else. 
The idea of experimental OS design using Haskell is quite interesting, 
however.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.