I am convinced... (Message 24 to 33 of 43)
From: Warp
Subject: Re: I am convinced...
Date: 21 Dec 2010 13:42:35
Message: <4d10f51b@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> I don't think it applies any longer, tho. Indeed, in many ways I think 
> Windows might have a more secure architecture than UNIX nowadays, even if in 
> practice it's not quite up to snuff and in practice it gets attacked more.

  Has the amount of viruses, worms and especially malware declined with
Windows Vista and 7?

-- 
                                                          - Warp


From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 14:03:34
Message: <4d10fa06$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> I don't think it applies any longer, tho. Indeed, in many ways I think 
>> Windows might have a more secure architecture than UNIX nowadays, even if in 
>> practice it's not quite up to snuff and in practice it gets attacked more.
> 
>   Has the amount of viruses, worms and especially malware declined with
> Windows Vista and 7?

Compared to XP? I would certainly think so. More importantly, they've moved 
to attacking things *other* than the OS. Like java in web pages, or trojans, 
or phishing scams. I don't think that the number of malware attacks says 
much about the security of the systems nowadays, as much as the popularity.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."


From: Orchid XP v8
Subject: Re: I am convinced...
Date: 21 Dec 2010 15:38:41
Message: <4d111051$1@news.povray.org>
On 20/12/2010 06:41 PM, Warp wrote:

>    On the subject of virus scanners in particular, I'd say that the very
> need to have such scanners is a symptom of fundamentally bad OS design.

I'd go along with that.

>    The unix philosophy of OS design has always been a step or two closer
> to the safer design (with respect to computer viruses and other malware)
> than the typical DOS/Windows (and other similar OS's in the past) design.
> The reason for this is that unixes have always been designed to be
> multi-user operating systems while DOS/Windows has been designed to be
> a single-user OS with no regard to security.

That's pretty much it, right there.

>    The DOS/Windows design always took basically the exact opposite approach:
> Whatever the user wants to run or do, the OS allows.

For a machine which is physically incapable of being networked, this is 
a perfectly reasonable way to proceed. The Commodore 64 and the ZX 
Spectrum had exactly the same "flaw" in their design. Only if a computer 
is *capable* of being used by more than one person does any question of 
"security" even exist.

> Unfortunately it took over 20
> years for Microsoft to rid itself of this mentality (for some reason MS
> has always been very slow to adopt certain ideas).

*This* is where it all went wrong.

Remind me: Is this or is this not the same Microsoft that famously took 
the attitude of "oh, this 'Internet' thingy is just a trendy fad; it'll 
never last"? For a long, long time they seemed to believe that 
networking was somehow "unimportant".

Either way, when they realised the true situation, they should have 
changed their design practices. Radically.

> NT had security, but it wasn't even intended for normal users.

No, it wasn't. Which is a pity, because it was quite a good OS.

Still, the main OS kernel lives on in some form. The Win9x kernel is 
gone, and 2000, XP, Vista and 7 are all based (increasingly loosely) on 
the NT kernel series.

In some ways, NT actually has *more* security features than Unix. For 
example, take file security. Under Unix, I can set permissions for one 
user, one group, and everybody else. Under NT, I can set permissions for 
as many users and user groups as I damned well like. And this isn't a 
theoretical ability; I use it extensively in my day job. Not only that, 
but where Unix has only a single process-wide umask, NT allows you to set 
the default file permissions on a per-folder basis. And to reset the permissions on all 
the files in a folder easily. Hell, I can even control who is allowed to 
*see* the permissions on a file or folder.
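
To make it concrete, the entire Unix vocabulary fits in a dozen lines of 
C. A throwaway sketch using the standard POSIX calls (illustration only, 
nothing NT-specific in here, and the file name is just a placeholder):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      umask(027);                /* new files lose group-write and all "other" bits */

      /* Ask for rw-rw-rw-; the umask quietly turns it into rw-r----- (640). */
      int fd = open("demo.txt", O_CREAT | O_WRONLY, 0666);
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      fstat(fd, &st);
      printf("created with mode %03o\n", (unsigned)(st.st_mode & 0777));

      chmod("demo.txt", 0640);   /* owner/group/other, rwx each -- and that's it */
      close(fd);
      return 0;
  }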

On top of that, you can make the OS log a record every single time a 
particular action is performed on a given file. (Unix, on the other 
hand, doesn't even provide a way for a process to be *notified* when a 
file changes, much less to log such changes at the OS level.) Again, I 
can control this at the file level, so I only get log entries for files 
that I actually care about, not every single file in the entire 
filesystem. And not for every action, and not for every user.

And that's just files. You can also set permissions controlling who is 
allowed to kill a given process. Or clear the print jobs in a given 
queue. And more besides. (Sadly, the OS GUI doesn't expose most of this 
very useful control, and there are certainly no CLI tools for this 
either. Indeed, if you want to set file permissions, you can *only* do 
it via the GUI. Well, no, now there's PowerShell which can probably do 
it...)

> It wasn't until XP that some
> *semblance* of security was introduced (yet, nevertheless, the mentality
> of the regular user being by default the superuser was still there

NT was where the big security changes happened. Before NT, any talk of 
"security" was a nonesense. XP is just the first really well-known 
consumer OS to feature these changes. (Windows 2000 didn't really get 
noticed much, for whatever reason. Note that Windows 2000 /= Windows ME.)

Unfortunately, it turns out that 99% of all Windows software still 
*assumes* that it has permission to do everything. And thus, you come up 
with stupid glitches like, for example,

   Nero claims that you do not possess a CD burner, *unless* you are 
logged in as a superuser.

One could argue that it's not so much /Microsoft/ that is slow to adapt, 
or even its /users/. No, it's the people writing Windows software. If I 
had a penny for every time I've had to do something stupid just to make 
the buggy, barely-functional device driver for some crappy cheap-arse 
piece of hardware work...



Of course, Microsoft don't really help themselves sometimes. They have 
an almost obsessive tendency of making everything as scriptable, 
programmable and customisable as possible. I guess because all those 
extra features look good on the tin?

For example, if you use Outlook (not Outlook Express) you can create an 
"email" which is actually a full-fledged application, in effect. (This 
might also require Exchange, I'm not sure.) It ranges from simple stuff 
like sending out appointments and questionnaires, right up to building 
complex fillable forms, and having the server receive the responses and 
do non-trivial processing with them, possibly producing additional 
emails in response. Stuff like that.

All of which is very *powerful* and everything. But the net result is 
that everything from emails to Word documents to spreadsheets, 
databases, presentations, and so forth all can have arbitrary executable 
code embedded in them, and more often than not executed immediately as 
soon as you touch the thing, often without you even realising it. This 
enables you to create very "rich" documents. For example, the other day 
I saw a PowerPoint presentation which is like a hyperlinked, browseable 
product catalogue. Very impressive stuff. And it's no secret that you 
can use Access to build what amounts to a desktop application.

If you think in terms of shiny features, all of this sounds fantastic. 
If you think in terms of computer security, all of this sounds like a 
catastrophe just waiting to happen. I mean, who thought that a *word 
processed document* being able to alter the local filesystem was a good 
idea?! Most users have no clue that you can do this. But I promise you, 
you can. (Or, you could. These days as soon as you open a Word document, 
you have to OK a dozen messages just to make it open, regardless of 
whether it does anything even mildly dangerous.)

Of course, having access to the local filesystem allows a Word document 
to be part of a big happy desktop application, developed using just MS 
Office. Why write a GUI application when you can just customise Word a 
little, and use a flat XML file as your database?

Now, if the system had been designed with security in mind from the 
start, nobody would have done anything so stupid. But now all these 
features exist (and, weirdly, continue to be designed), and somehow you 
have to make it secure. MS's idea of "make it secure" isn't "disable 
local filesystem access unless there's a damned good reason to enable 
it". It isn't even "check whether the macro tries to access the local 
filesystem" (which is trivially checkable). No, it's "slap a big, fat, 
flashing warning on top of EVERYTHING, dangerous or not". Because, let's 
face it, that's very easy to code.

And after the 198th time the user sees this annoying, unnecessary error 
message, they just stop paying attention. And then when a *real* threat 
comes along, the user will blindly and automatically click "yes, please 
run this unsigned code". Because they've had to click it a thousand 
times before in order to get stuff done.

This is not "security". The whole XP Security Center is nothing more 
than a nag screen that constantly whines at you. "Turn on updates. 
Install a pricey AV product. Turn on the firewall." It does nothing to 
actually increase security. (Automatic updates themselves may do. But 
Linux has copied that idea too now...)



Really, there are several reasons why Windows (and related MS products) 
are less secure than their Unix counterparts:

- Microsoft failed to realise that networks would become "important" 
(and hence, security would be necessary).

- Once they did decide that networking actually was the future, they 
implemented lots of security stuff but failed to really make full use of it.

- There is now a huge codebase of cheap, buggy, unsupported software 
which people expect to work on Windows. If you start actually doing 
things in a secure way, most of this software will break. (This is 
technically a GOOD THING, but it doesn't sound very good to the people 
who just want to do their stuff.)

- Microsoft thinks that endless rafts of whizzy features are more 
important than computer security. (That's quite a serious problem, right 
there.)

- Unix is the OS for computer experts. Windows is the OS for idiots so 
stupid that arguably they shouldn't be let near a computer in the first 
place. Wanna guess which one has the biggest security problems?

- Windows systems outnumber Unix systems 10^4 : 1. Wanna guess which one 
most people spend their time trying to attack?

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Darren New
Subject: Re: I am convinced...
Date: 21 Dec 2010 15:55:22
Message: <4d11143a$1@news.povray.org>
Orchid XP v8 wrote:
> Either way, when they realised the true situation, they should have 
> changed their design practices. Radically.

Really, few other people understood the security flaws at the time either. 
HTTP wasn't invented with encryption in mind. SMTP had no controls on who 
could send mail. Basically, there wasn't even a standard encryption suite 
until long after Windows was networked and HTTP started being used for 
commercial applications.

 > Indeed, if you want to set file permissions, you can *only* do
 > it via the GUI.

cacls is so old it's deprecated in favor of icacls.

And no, Windows doesn't tend to provide command-line tools for things you 
can do via scripting. That's what scripting calls are for, and why 
powershell is looked on so favorably by windows admins.

When your fundamental OS interface is essentially object-oriented and 
API-based, you don't get a whole lot of non-programming command-line tools 
to manage stuff.  It's only when most of your configuration is stored as 
text files that you get text-based tools to manage it.

> Unfortunately, it turns out that 99% of all Windows software still 
> *assumes* that it has permission to do everything.

I don't believe that's true any more. It might have been when XP first came out.

> One could argue that it's not so much /Microsoft/ that is slow to adapt, 
> or even its /users/. No, it's the people writing Windows software. If I 
> had a penny for every time I've had to do something stupid just to make 
> the buggy, barely-functional device driver for some crappy cheap-arse 
> piece of hardware work...

Exactly.

> Of course, Microsoft don't really help themselves sometimes. They have 
> an almost obsessive tendency of making everything as scriptable, 
> programmable and customisable as possible. I guess because all those 
> extra features look good on the tin?

It's in part because they have big customers who need that stuff. It's not 
like UNIX isn't scriptable, programmable, and customizable.

> All of which is very *powerful* and everything. But the net result is 
> that everything from emails to Word documents to spreadsheets, 
> databases, presentations, and so forth all can have arbitrary executable 
> code embedded in it,

And *that* is exactly why malware is still around in multi-user systems. And 
it happens in browsers and everything too, so it's not restricted to 
microsoft code.

> And it's no secret that you 
> can use Access to build what amounts to a desktop application.

Well, that's kind of what it's for.

> And after the 198th time the user sees this annoying, unnecessary error 
> message, they just stop paying attention. And then when a *real* threat 
> comes along, the user will blindly and automatically click "yes, please 
> run this unsigned code". Because they've had to click it a thousand 
> times before in order to get stuff done.

That's a part of the problem. You can't ask users to make that sort of 
decision, either at the "access the file system" level or at the more 
fine-grained level. The Android mechanism makes a lot of sense to users. But 
in a system where you don't declare what code you're going to run or what 
privileges you need (which applies to both UNIX and Windows and most other 
systems), this doesn't work.

And that's why I thought the new types of hardware coming out (portables, 
games, phones, and custom server hardware) might provide a chance to get 
away from these legacy problems.

> - Microsoft failed to realise that networks would become "important" 
> (and hence, security would be necessary).

I disagree this was unique to Microsoft.

> - There is now a huge codebase of cheap, buggy, unsupported software 
> which people expect to work on Windows. If you start actually doing 
> things in a secure way, most of this software will break. (This is 
> technically a GOOD THING, but it doesn't sound very good to the people 
> who just want to do their stuff.)

Exactly.

> - Microsoft thinks that endless rafts of whizzy features are more 
> important than computer security. (That's quite a serious problem, right 
> there.)

I believe starting around the time of XP SP2 they realized their lack of 
security was actually hurting their business.

> - Unix is the OS for computer experts. Windows is the OS for idiots so 
> stupid that arguably they shouldn't be let near a computer in the first 
> place. Wanna guess which one has the biggest security problems?
> 
> - Windows systems outnumber Unix systems 10^4 : 1. Wanna guess which one 
> most people spend their time trying to attack?

Yes and yes.


-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."


From: Orchid XP v8
Subject: Re: I am convinced...
Date: 22 Dec 2010 04:55:52
Message: <4d11cb28$1@news.povray.org>
On 20/12/2010 05:53 PM, Darren New wrote:
> that traditional OSes hold back the more sophisticated (as in, far from
> machine language) languages.

Perhaps. I guess it kind of depends on what you're trying to do. If you 
have an application that just does its computations in its own little 
world, then it's fine. If, on the other hand, the application wants to 
do arbitrary stuff with the whole machine, then... yeah.

Wasn't there an OS written in Smalltalk at some point? (Or was that only 
for special hardware?) Certainly you can see that if the Smalltalk 
runtime was running on the bare metal, it would have all sorts of 
interesting consequences. (For example, since *everything* can be 
changed, you could invent some new bizarre concept of files.)

Speaking of which...

   http://halvm.com/

That's Haskell running on bare metal. (Well, OK, no it isn't, it's 
running under a Xen hypervisor. But that's /almost/ the same thing.) 
Apparently the guys at Galois actually use this thing, so that (say) 
your web server runs in one VM, all by itself, and your DB runs in 
another VM, and so on. But it would be interesting to see what happens 
if you tried to write an actual "operating system" in Haskell, rather 
than just an application.

> http://www.artima.com/lejava/articles/azul_pauseless_gc.html

That contains a lot of "look at us, aren't we clever!" and not very much 
technical detail. It seems the only novel detail is that they're using 
the memory protection hardware to trap and remap accesses to moved 
objects. That's a neat detail, and one which (they assert) would be too 
inefficient with a normal OS. But that seems to be just about it.
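
The trapping half of the trick is easy enough to play with from user 
space, mind you. A toy C sketch of my own (nothing to do with Azul's 
actual code, and calling mprotect() inside a signal handler isn't 
strictly kosher, but it shows the mechanism): protect a page, catch the 
fault, fix things up, and let the access retry.

  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static char *page;
  static long  pagesize;

  /* A real collector would fix up the reference to the object's new home
     here; we just unprotect the page so the faulting access can retry. */
  static void on_fault(int sig, siginfo_t *info, void *ctx)
  {
      (void)sig; (void)ctx;
      char *addr = (char *)info->si_addr;
      if (addr >= page && addr < page + pagesize)
          mprotect(page, pagesize, PROT_READ | PROT_WRITE);
      else
          _exit(1);               /* a genuine crash, not our trap */
  }

  int main(void)
  {
      pagesize = sysconf(_SC_PAGESIZE);
      page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      struct sigaction sa;
      memset(&sa, 0, sizeof sa);
      sa.sa_flags = SA_SIGINFO;
      sa.sa_sigaction = on_fault;
      sigemptyset(&sa.sa_mask);
      sigaction(SIGSEGV, &sa, NULL);

      mprotect(page, pagesize, PROT_NONE);  /* "objects on this page have moved" */
      page[0] = 42;                         /* traps; handler unprotects; store retries */
      printf("access succeeded after the trap: %d\n", page[0]);
      return 0;
  }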

Haskell has a rather interesting invariant: since *most* objects are 
immutable, old objects can never point to new objects. It seems like 
this should make some kind of interesting GC algorithm possible.

Personally, I thought the article on GC avoiding paging was more 
interesting.

> Traditional file system interfaces probably do too.

Now, do you mean "file systems", or do you mean "interfaces to file 
systems"?

> It's interesting that this sort of stuff is starting to get to the point
> where people will be willing to break with compatibility at some level.
> Phones, game consoles, set-top boxes, and eventually probably
> "enterprise" or "cloud" type servers will all be willing to consider a
> different operating system that puts limits on compatibility with
> previous languages and libraries.

Well, we'll see what happens.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Darren New
Subject: Re: I am convinced...
Date: 22 Dec 2010 13:42:35
Message: <4d12469b$1@news.povray.org>
Orchid XP v8 wrote:
> Perhaps. I guess it kind of depends on what you're trying to do. If you 
> have an application that just does its computations in its own little 
> world, then it's fine. If, on the other hand, the application wants to 
> do arbitrary stuff with the whole machine, then... yeah.

Not necessarily. There are memory models that really just don't mix, for 
example. GCed memory with non-GCed external resources is the classic. This 
paper describes how if you assume you have a C-like/Pascal-like workload, 
using thousands of page faults a second to do GC isn't going to be 
efficient. Etc.

> Wasn't there an OS written in Smalltalk at some point? (Or was that only 
> for special hardware?)

Yes. The "dolphin" computer. It had programmable microcode, so when you 
booted a Smalltalk image, the machine code *was* smalltalk bytecodes. When 
you booted a COBOL program, the machine code was appropriate for COBOL.

> Certainly you can see that if the Smalltalk 
> runtime was running on the bare metal, it would have all sorts of 
> interesting consequences. (For example, since *everything* can be 
> changed, you could invent some new bizare concept of files.)

They had what you'd consider a bizarre concept for files. That's the point. 
You didn't have to worry about "destructors" for files, precisely because 
"file" was built into the language, not the OS. So to speak.

> if you tried to write an actual "operating system" in Haskell, rather 
> than just an application.

But that's the point. When your application runs on the bare metal, there's 
no distinction between OS and application.

>> http://www.artima.com/lejava/articles/azul_pauseless_gc.html
> 
> That contains a lot of "look at us, aren't we clever!" and not very much 
> technical detail. It seems the only novel detail is that they're using 
> the memory protection hardware to trap and remap accesses to moved 
> objects. That's a neat detail, and one which (they assert) would be too 
> inefficient with a normal OS. But that seems to be just about it.

That's the only neat technical trick they used to implement their other 
tricks. I think it's more interesting that they're actually getting away 
from the linear memory model that C and Pascal foisted on us.

In other words, they're using an OO memory model. Objects are contiguous 
blobs, but it doesn't really matter beyond that where they are in the 
address space. So instead of doing a compacting garbage collection in a 
linear space, they do a compacting garbage collection on a per-page basis. 
*That* is really the innovation.

> Haskell has a rather interesting invariant: since *most* objects are 
> immutable, old objects can never point to new objects. It seems like 
> this should make some kind of interesting GC algorithm possible.

Erlang takes great advantage of that too, yes.
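
It's easy to see what the invariant buys you if you spell out a 
generational collector's write barrier. (A made-up sketch, nothing like 
GHC's or BEAM's actual machinery.) The only way an old object can end up 
pointing at a young one is a mutating store, so every such store gets 
recorded; with immutable data that remembered set stays essentially 
empty, and the young generation can be collected from the roots alone.

  #include <stddef.h>

  typedef struct Obj {
      int gen;                   /* 0 = young, 1 = old */
      struct Obj *field;
  } Obj;

  #define MAX_REMEMBERED 1024
  static Obj   *remembered[MAX_REMEMBERED];  /* old objects pointing at young ones */
  static size_t n_remembered;

  /* Every mutating pointer store goes through the barrier. */
  static void store_field(Obj *dst, Obj *src)
  {
      if (dst->gen > src->gen && n_remembered < MAX_REMEMBERED)
          remembered[n_remembered++] = dst;  /* record the old-to-young pointer */
      dst->field = src;
  }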

> Personally, I thought the article on GC avoiding paging was more 
> interesting.

Yeah. Again, it's the sort of thing you have a hard time doing in an OS 
tuned for something with a memory model like C's.

>> Traditional file system interfaces probably do too.
> 
> Now, do you mean "file systems", or do you mean "interfaces to file 
> systems"?

Interfaces. The reality of file systems nowadays isn't anywhere near what 
the interface looks like. Internally, it's much closer to Multics' file 
system, yet it's generally presented with UNIX file system semantics.

(In Multics, the basic way of getting to a file was essentially memmap. 
That's why UNIX files are all represented as an array of bytes, with no 
records, no insert/delete bytes, etc. It's modeling Multics' interface 
without the convenience of either having it in memory *or* having the OS 
provide you useful services.)

> Well, we'll see what happens.

Yeah, sadly, I don't have a whole lot of hope. Altho the thing with each 
application running in its own Xen hypervisor slot sounds interesting. I'll 
have to look at the implications of that a little more closely.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."


From: Orchid XP v8
Subject: Re: I am convinced...
Date: 22 Dec 2010 14:10:25
Message: <4d124d21$1@news.povray.org>
On 22/12/2010 06:42 PM, Darren New wrote:
> Orchid XP v8 wrote:
>> Perhaps. I guess it kind of depends on what you're trying to do. If
>> you have an application that just does its computations in its own
>> little world, then it's fine. If, on the other hand, the application
>> wants to do arbitrary stuff with the whole machine, then... yeah.
>
> Not necessarily. There are memory models that really just don't mix, for
> example. GCed memory with non-GCed external resources is the classic.

And yet, code written in languages assuming GC management and languages 
assuming manual management can not only interoperate, but even be linked 
together in the same process image.

> This paper describes how if you assume you have a C-like/Pascal-like
> workload, using thousands of page faults a second to do GC isn't going
> to be efficient. Etc.

True enough.

>> Wasn't there an OS written in Smalltalk at some point? (Or was that
>> only for special hardware?)
>
> Yes. The "dolphin" computer. It had programmable microcode, so when you
> booted a Smalltalk image, the machine code *was* smalltalk bytecodes.
> When you booted a COBOL program, the machine code was appropriate for
> COBOL.

That sounds moderately mental. I'd hate to think what happens if you get 
the microcode wrong...

>> Certainly you can see that if the Smalltalk runtime was running on the
>> bare metal, it would have all sorts of interesting consequences. (For
>> example, since *everything* can be changed, you could invent some new
>> bizarre concept of files.)
>
> They had what you'd consider a bizarre concept for files. That's the
> point. You didn't have to worry about "destructors" for files, precisely
> because "file" was built into the language, not the OS. So to speak.

Not even built into the language - built into a library. And, 
presumably, by changing that library, you can change how the filesystem 
works...

(Does anybody remember how "Longhorn" was supposed to ship with a 
filesystem that was actually a relational database? *Really* glad they 
scrapped that idea!)

>> if you tried to write an actual "operating system" in Haskell, rather
>> than just an application.
>
> But that's the point. When your application runs on the bare metal,
> there's no distinction between OS and application.

Well, to some extent there /is/. By definition, an application is 
something that solves a real-world problem, and an OS is something that 
provides services to applications. Although in the case of something 
like a Smalltalk or Haskell OS, the "OS" might be statically linked into 
the application image - i.e., the "OS" might just be a runtime library.

> In other words, they're using an OO memory model. Objects are contiguous
> blobs, but it doesn't really matter beyond that where they are in the
> address space. So instead of doing a compacting garbage collection in a
> linear space, they do a compacting garbage collection on a per-page
> basis. *That* is really the innovation.

Sure. If the virtual address space supports it, why not? (Other than 
"because most OS designs assume that you would never want to do such a 
thing".)

>>> Traditional file system interfaces probably do too.
>>
>> Now, do you mean "file systems", or do you mean "interfaces to file
>> systems"?
>
> Interfaces. The reality of file systems nowadays isn't anywhere near
> what the interface looks like. Internally, it's much closer to Multics'
> file system, yet it's generally presented with UNIX file system semantics.
>
> (In Multics, the basic way of getting to a file was essentially memmap.
> That's why UNIX files are all represented as an array of bytes, with no
> records, no insert/delete bytes, etc. It's modeling Multics' interface
> without the convenience of either having it in memory *or* having the OS
> provide you useful services.)

OK, now I must know... WTF is this "memmap" that I keep hearing about?

>> Well, we'll see what happens.
>
> Yeah, sadly, I don't have a whole lot of hope. Altho the thing with each
> application running in its own Xen hypervisor slot sounds interesting.
> I'll have to look at the implications of that a little more closely.

Remember that Galois mainly does work for people who want high-assurance 
software. Stuff with mathematical guarantees of correctness, and so 
forth. For that kind of insanity, being able to guarantee that the OS 
isn't going to do something weird (because there isn't one) is probably 
quite useful. I'm more dubious about how useful it is for everyone else. 
The idea of experimental OS design using Haskell is quite interesting, 
however.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Darren New
Subject: Re: I am convinced...
Date: 22 Dec 2010 16:28:00
Message: <4d126d60@news.povray.org>
Orchid XP v8 wrote:
> And yet, code written in languages assuming GC management and languages 
> assuming manual management can not only interoperate, but even be linked 
> together in the same process image.

Sure. But then you get things like "finalizers" to clean up unmanaged 
resources, the "using" statement to enforce the use of finalizers, all kinds 
of odd rules in the GC engine that slow it down because it has to deal with 
finalizers, etc.

Sort of the same way that when a process in UNIX exits, there's a list of 
things the kernel cleans up for it on its way out.

> That sounds moderately mental. I'd hate to think what happens if you get 
> the microcode wrong...

The same as if you get any other program wrong. What do you mean? It's 
probably far easier than writing a modern bytecode interpreter like the JVM.

> Not even built into the language - built into a library. And, 
> presumably, by changing that library, you can change how the filesystem 
> works...

Right. That's the idea. Basically, what's a file? It's a data structure you 
access. Why is a file different from an array of bytes held by a process 
that just never exits?

> Well, to some extent there /is/. By definition, an application is 
> something that solves a real-world problem, and an OS is something that 
> provides services to applications.

Where do you draw the distinction, if there's exactly one application 
running on the custom OS?

>> In other words, they're using an OO memory model. Objects are contiguous
>> blobs, but it doesn't really matter beyond that where they are in the
>> address space. So instead of doing a compacting garbage collection in a
>> linear space, they do a compacting garbage collection on a per-page
>> basis. *That* is really the innovation.
> 
> Sure. If the virtual address space supports it, why not? (Other than 
> "because most OS designs assume that you would never want to do such a 
> thing".)

Well, that's exactly the point. Most computer languages completely ignore 
virtual memory. Only managed OO languages come close to having a concept of 
memory as something other than an array of bytes, and even that is more 
segments than pages.

I used to use an HP mainframe that was incapable of running C, because it 
was a segmented chip. It was really old, but it would be ideal for running 
an OO language nowadays, because you had tons of segments available, etc. 
Basically, pointers pointed to segments, not bytes, so it was in hindsight a 
very OO kind of architecture.

The Burroughs B series had typed machine code. Each memory address also had 
tag bits saying what type was stored there, so the machine code just had an 
"add" instruction, not an "add integers" or "add floats". The operands said 
what the type was. And array accesses had to go through an array pointer, 
which included the bounds of the array.  Again, another machine you couldn't 
run C on.
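
Spelled out in C, purely as an illustration (on the Burroughs machines 
the check lived in the instruction set itself, which is exactly why raw 
C pointer arithmetic didn't map onto them), a descriptor-style array 
access looks something like this:

  #include <stdio.h>
  #include <stdlib.h>

  /* Made-up sketch of a bounds-carrying array descriptor. */
  typedef struct {
      int    *base;
      size_t  length;
  } ArrayDesc;

  static int load(ArrayDesc a, size_t i)
  {
      if (i >= a.length) {       /* the hardware would trap here */
          fprintf(stderr, "bounds violation: index %zu, length %zu\n",
                  i, a.length);
          abort();
      }
      return a.base[i];
  }

  int main(void)
  {
      int data[4] = {10, 20, 30, 40};
      ArrayDesc a = { data, 4 };
      printf("%d\n", load(a, 2));   /* fine */
      printf("%d\n", load(a, 7));   /* aborts instead of reading garbage */
      return 0;
  }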

> OK, now I must know... WTF is this "memmap" that I keep hearing about?

http://lmgtfy.com/?q=what+is+memmap

> I'm more dubious about how useful it is for everyone else. 

As I've said, I think that the very small and the very large will both be 
using this sort of thing more and more. If you can cut $5 off the cost of 
building a TiVO by using a language other than C, you'll see that happening. 
If you can cache 20% more stuff on a memcached-like server by ditching 
Linux, you'll see Amazon starting to do that I expect. Most large servers 
are single-purpose nowadays anyway.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."


From: Invisible
Subject: Re: I am convinced...
Date: 31 Jan 2011 08:28:04
Message: <4d46b8e4$1@news.povray.org>
On 22/12/2010 09:27 PM, Darren New wrote:

> Sort of the same way that when a process in UNIX exits, there's a list
> of things the kernel cleans up for it on its way out.

I didn't know it actually did that. I thought that (for example) if a 
Unix process exits without freeing the memory it allocated, that memory 
remains allocated forever. (Presumably it gets paged out to disk fairly 
soon, but even so...)

>> That sounds moderately mental. I'd hate to think what happens if you
>> get the microcode wrong...
>
> The same as if you get any other program wrong. What do you mean? It's
> probably far easier than writing a modern bytecode interpreter like the
> JVM.

I was thinking more that if you get the microcode wrong, you might 
physically fry the processor.

>> Well, to some extent there /is/. By definition, an application is
>> something that solves a real-world problem, and an OS is something
>> that provides services to applications.
>
> Where do you draw the distinction, if there's exactly one application
> running on the custom OS?

I suppose to some extent it's a bit arbitrary. But if you have, say, a 
library whose only purpose is to take graphics requests and poke the 
necessary hardware registers to make pixels change colour, you could 
call that part of an OS.

The "first" OS was of course the Disk Operating System, remember. ;-)

> I used to use an HP mainframe that was incapable of running C, because
> it was a segmented chip. It was really old, but it would be ideal for
> running an OO language nowadays, because you had tons of segments
> available, etc. Basically, pointers pointed to segments, not bytes, so
> it was in hindsight a very OO kind of architecture.

I'm not sure I see how.

> The Burroughs B series had typed machine code. Each memory address also
> had tag bits saying what type was stored there, so the machine code just
> had an "add" instruction, not an "add integers" or "add floats". The
> operands said what the type was. And array accesses had to go through an
> array pointer, which included the bounds of the array. Again, another
> machine you couldn't run C on.

That sounds much more interesting.

>> OK, now I must know... WTF is this "memmap" that I keep hearing about?
>
> http://lmgtfy.com/?q=what+is+memmap

I'm not sure I completely understand.

So, a memory-mapped file is a region of virtual memory which contains 
the same data as a file on disk? And when you access anything in that 
region, the necessary pages are read from disk? (And, presumably, saved 
back to that file rather than being copied to swap when physical memory 
is required.)

So... how do you change the size of the file then?


From: Darren New
Subject: Re: I am convinced...
Date: 31 Jan 2011 13:56:53
Message: <4d4705f5$1@news.povray.org>
Invisible wrote:
> On 22/12/2010 09:27 PM, Darren New wrote:
> 
>> Sort of the same way that when a process in UNIX exits, there's a list
>> of things the kernel cleans up for it on its way out.
> 
> I didn't know it actually did that. I thought that (for example) if a 
> Unix process exits without freeing the memory it allocated, that memory 
> remains allocated forever. (Presumably it gets pages out to disk fairly 
> soon, but even so...)

No, that would be AmigaOS (intentionally, it turns out). :-)

Linux closes files, deallocates memory, reclaims disk space for unlinked 
files that this process had the last open on, unlocks files, closes the file 
holding the executable code, frees up page maps, and releases certain kinds 
of semaphores. And probably more, nowadays.

>>> That sounds moderately mental. I'd hate to think what happens if you
>>> get the microcode wrong...
>>
>> The same as if you get any other program wrong. What do you mean? It's
>> probably far easier than writing a modern bytecode interpreter like the
>> JVM.
> 
>> I was thinking more that if you get the microcode wrong, you might 
> physically fry the processor.

Oh. Not in any microcode I've ever seen (not that I've seen much), but I 
imagine it's possible.

>> I used to use an HP mainframe that was incapable of running C, because
>> it was a segmented chip. It was really old, but it would be ideal for
>> running an OO language nowadays, because you had tons of segments
>> available, etc. Basically, pointers pointed to segments, not bytes, so
>> it was in hindsight a very OO kind of architecture.
> 
> I'm not sure I see how.

Because you couldn't have pointers into the middle of segments. The memory 
model was a bunch of "objects" in the sense that they were atomic lumps of 
memory that you could move around without adjusting pointers everywhere.

I only read about the assembly language of the thing, tho, so I don't really 
know any more details than that.

>> The Burroughs B series had typed machine code. Each memory address also
>> had tag bits saying what type was stored there, so the machine code just
>> had an "add" instruction, not an "add integers" or "add floats". The
>> operands said what the type was. And array accesses had to go through an
>> array pointer, which included the bounds of the array. Again, another
>> machine you couldn't run C on.
> 
> That sounds much more interesting.

That too. You can look up the details with google.

> 
>>> OK, now I must know... WTF is this "memmap" that I keep hearing about?
>>
>> http://lmgtfy.com/?q=what+is+memmap
> 
> I'm not sure I completely understand.
> 
> So, a memory-mapped file is a region of virtual memory which contains 
> the same data as a file on disk?

You know how swap space works, right? The page file?

memmap is using the exact same mechanism, except it pages out to the file 
you specify instead of "the swap space".

Or, viewed another way, all of memory is memmapped, and that system call 
lets you pick which file it's memmapped into instead of the default.
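
In code it's about this much. A minimal POSIX sketch (untested, the file 
name is a placeholder, and the Windows flavor, CreateFileMapping and 
MapViewOfFile, spells it all differently, but the shape is the same):

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("data.bin", O_RDWR);   /* assumes the file already exists */
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      fstat(fd, &st);

      /* The file's pages now back this region: reads fault them in,
         dirty pages get written back to the file rather than to swap. */
      char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      memcpy(p, "hello", 5);     /* "writing the file" without write();
                                    assumes it's at least 5 bytes long */
      msync(p, st.st_size, MS_SYNC);

      munmap(p, st.st_size);
      close(fd);
      return 0;
  }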

> So... how do you change the size of the file then?

I don't know that you do, if that's how you're accessing it. I haven't used 
it in so long that it's all probably changed by now.

-- 
Darren New, San Diego CA, USA (PST)
  "How did he die?"   "He got shot in the hand."
     "That was fatal?"
          "He was holding a live grenade at the time."

