On Wed, 13 Aug 2008 14:27:48 -0700, Darren New wrote:
>>>>> Or "I can't let you install telnet[1] until you have some sort of
>>>>> TCP/IP stack installed."
>>>>
>>>> Isn't this what RPM does?
>>>
>>> No.
>>
>> Really? I thought that was the entire *point* of package managers.
>
> To some extent. Package managers tell you which dynamic libraries are
> needed for which programs. They don't enforce anything, and you cannot
> (for example) look at an RPM without installing it and know if it'll
> work right once you're done installing it.
Well, RPMs aren't package *managers*, they're packages. RPM is a package
manager, and it does a reasonably good job of enforcing dependencies -
you can override with --nodeps, but IME it does a good job for those who
need dependencies enforced, while letting those who know whether a
dependency is really needed override it.
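For instance (hypothetical package name, but these are the standard rpm
options):

  # ask a package what it declares it needs, without installing it
  rpm -qp --requires telnet-client-1.2-3.i386.rpm
  # install it; rpm refuses with a failed-dependencies error if anything is missing
  rpm -ivh telnet-client-1.2-3.i386.rpm
  # override the dependency check when you know better
  rpm -ivh --nodeps telnet-client-1.2-3.i386.rpm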
Jim
On Thu, 14 Aug 2008 09:13:53 -0700, Darren New wrote:
> I think it's not so much that Linus T said "If I make it look like UNIX,
> I can use all the tools." I think it was probably at least as much "If
> I make it look like UNIX, I won't have to figure out how an OS *should*
> work." Hence, it starts out with all the brokenness of UNIX, then
> slowly piles on even more patches to try to make it useful, as long as
> you're not trying to maintain binary compatibility anyway.
Um, I think you'll find that Linux is a derivative of Minix, not UNIX.
At best it's Unix-like.
Jim
Jim Henderson wrote:
> it does a reasonably good job of enforcing dependencies -
There are relatively few kinds of dependencies it can enforce, and
nothing enforces that the dependency declarations are correct, for
example. If the package doesn't allow it, the package manager isn't
going to be able to do it.
You cannot, for example, look at the list of packages installed on the
system and tell whether another package will install correctly - there
may be unwritable files in the way that aren't tracked by the package
manager, for example. The RPM may install files not listed in the
manifest, and may not install every file listed in the manifest. If the
package needs to add a user to the FTP server or something, there's
nothing in the RPM that lets you look at it and tell automatically that
adding that user will be necessary and needs to succeed before the
package is installed. There is nothing in an RPM, as far as I know, that
says which system services need to be enabled before you can start this
one. (Sure, it's in the init.d script, but that's not in the package
manifest, AFAIK.)
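To make that concrete, here's roughly what that looks like in a
hypothetical spec file - the "add a user" step is just an opaque shell
scriptlet, so nothing machine-readable in the package says that it has to
happen or has to succeed:

  # fragment of a hypothetical .spec file
  Requires(pre): shadow-utils

  %pre
  # create the ftp service account if it doesn't already exist;
  # the manifest itself knows nothing about this requirement
  getent passwd ftpuser >/dev/null || useradd -r -s /sbin/nologin ftpuser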
The Singularity package manager doesn't have this flaw, because the
manifest controls what gets installed. There's no shell script in the
package.
I'm not sure what you were trying to say with
> Well, RPMs aren't package *managers*, they're packages. RPM is a package
> manager,
I know that. That's what I was talking about. Nothing I said conflicts
with this, as far as I can see.
--
Darren New / San Diego, CA, USA (PST)
On Wed, 13 Aug 2008 14:57:25 +0100, Invisible wrote:
> Does anybody *else* find it ironic that Micro$oft - the corporation
> internationally renowned for its poor quality, buggy products - is
> interested in methods of producing high-quality software?
No. Well, at least, I don't.
I'm not a fan of Microsoft, *however* it doesn't surprise me that they
would look for ways to improve code quality without needing to invest
massive amounts of energy, time, and money to do so. Better production
methods are one way of accomplishing this goal.
I've always said that Microsoft is outstanding at producing software
that's "just good enough" - ie, it is buggy, but it's good *enough* that
people aren't flocking away.
That doesn't mean that they wouldn't/couldn't/shouldn't go through a
process of striving for continuous improvement in their development
processes. And clearly that's something they do (I've known people who
have worked in MS Engineering, so this isn't conjecture on my part - it's
based on conversations with former colleagues who worked at MS in that
capacity).
Jim
Jim Henderson wrote:
> Um, I think you'll find that Linux is a derivative of Minix, not UNIX.
> At best it's Unix-like.
I don't know that you'd call it a "derivative" of either, really. Clearly the
whole thing is very UNIX-like, and since I'm only talking about the
design of the OS (the UI, the API, the file system layout, etc), it
doesn't really matter either way, since Minix and Unix both share the
whole *ix bit.
--
Darren New / San Diego, CA, USA (PST)
Warp wrote:
> If you are doing a simple linear traversal, maybe, but if it's any more
> complicated than that...
Hmmm... Apparently there are mechanisms in place (which I don't really
understand) that let you interactively prove to the compiler that some
sequence of operations is safe, and then that gets recorded and the
compiler can take advantage of it. I.e., if you can prove that you
never go out of bounds, even if the proof is "non-obvious" to the
machine, then the compiler will omit the run-time checks. Awesome. :-)
Personally, I can't imagine how you go about doing such a thing, except
maybe adding stuff to the code describing what/why you think it's true
and running it through the compiler again, which doesn't seem like
"interactive" to me. But I'm not finding anything online that isn't
either "it's really cool" or "here's 40 pages of mathematics describing
how it works."
Also, I imagine you could put in appropriate assertions, such that if
you say (for example)
void flog(int[] myints, int startinx) {
    assert myints.length > 500;
    assert startinx > 100 && startinx < 400;
    for (int i = startinx - 50; i < startinx + 50; i++)
        myints[i] = myints[i + 10];
}
then the compiler could track the possible ranges of values, and you'd
get runtime checks at the entry to the function but not inside the loop,
as an example.
But yeah, figuring out which next bit of object to bounce a ray off of
is obviously going to take some run-time checks.
But honestly, I've never seen code where which element gets accessed
next is obvious to a programmer but not to the compiler. I've never seen
code where you could prove to a person's satisfaction that it was
correctly accessing the array but couldn't prove it in a formal way
given what's in the code itself, assuming you have all the code in front
of you, of course.
Do you have any examples of that? I'm sure there must be some out there,
but I don't do that sort of programming, I think. I think the closest
I've gotten is knowing that the program that generated the file put
things in it such that the program reading the file doesn't have to
check. (E.g., the writer of the file never puts more than 80 chars per
line, so the reader doesn't have to check, and that's because I wrote
them both myself.)
--
Darren New / San Diego, CA, USA (PST)
Jim Henderson wrote:
> I've always said that Microsoft is outstanding at producing software
> that's "just good enough" - ie, it is buggy, but it's good *enough* that
> people aren't flocking away.
And I've always said that M$'s greatest achievement is in *redefining*
what people will consider to be "good enough".
Not so many years ago, software that wasn't 100% crash-free was
unacceptable. Today this is considered "normal". And it's all due to M$.
> That doesn't mean that they wouldn't/couldn't/shouldn't go through a
> process of striving for continuous improvement in their development
> processes. And clearly that's something they do (I've known people who
> have worked in MS Engineering, so this isn't conjecture on my part - it's
> based on conversations with former colleagues who worked at MS in that
> capacity).
Really? It's actually to their best advantage economically to make their
software as inefficient as possible. (Although making it work
*correctly* would be beneficial to them, making it work *efficiently*
would cause them to lose money.)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
On Thu, 14 Aug 2008 17:25:13 -0700, Darren New wrote:
> Jim Henderson wrote:
>> Um, I think you'll find that Linux is a derivative of Minix, not UNIX.
>> At best it's Unix-like.
>
> I don't know that you'd call it a "derivative" of either, really. Clearly the
> whole thing is very UNIX-like, and since I'm only talking about the
> design of the OS (the UI, the API, the file system layout, etc), it
> doesn't really matter either way, since Minix and Unix both share the
> whole *ix bit.
That's more of a POSIX thing IIRC. Tanenbaum would say that they're
different as well, but Linux started out as a free replacement for MINIX
(since MINIX was distributed under a restricted license at the time).
Jim
On Fri, 15 Aug 2008 18:46:00 +0100, Orchid XP v8 wrote:
> And I've always said that M$'s greatest achievement is in *redefining*
> what people will consider to be "good enough".
>
> Not so many years ago, software that wasn't 100% crash-free was
> unacceptable. Today this is considered "normal". And it's all due to M$.
Well, again, fair play to Microsoft - computing has gotten a lot more
complex over the last 20 years.
>> That doesn't mean that they wouldn't/couldn't/shouldn't go through a
>> process of striving for continuous improvement in their development
>> processes. And clearly that's something they do (I've known people who
>> have worked in MS Engineering, so this isn't conjecture on my part -
>> it's based on conversations with former colleagues who worked at MS in
>> that capacity).
>
> Really? It's actually to their best advantage economically to make their
> software as inefficient as possible. (Although making it work
> *correctly* would be beneficial to them, making it work *efficiently*
> would cause them to lose money.)
Are you old enough to be *that* cynical? ;-)
There is something to what you say, though; one of the factors that I've
seen (and heard discussed) that caused the decline of NetWare was that it
was *too* stable. People installed the server and forgot about it. Look
at the rather well-known story about the school that actually sealed a
NetWare 2.x server up in a closet because they forgot about it. Not an
urban legend, this actually happened (University of North Carolina IIRC).
There were other factors as well that contributed to the decline of
NetWare, including some really bad missteps on Novell's part (such as
rebranding it to "IntraNetWare", which I consider one of the biggest
blunders the company has made, *and* one it hasn't necessarily learned
from as well as it should have). Having a bit of instability keeps the
system in mind, and MS does an outstanding job of keeping people on the
"upgrade treadmill".
Jim
Orchid XP v8 wrote:
> Not so many years ago, software that wasn't 100% crash-free was
> unacceptable.
Nonsense. I'm guessing it actually crashed at a higher rate, but
nowadays you have orders of magnitude more people using software.
Or do you forget "sad mac" and "guru meditation" and "kernel panic"? Of
course all these things are common terms in the industry because they
never, ever happened.
--
Darren New / San Diego, CA, USA (PST)