POV-Ray : Newsgroups : povray.off-topic
Is this the end of the world as we know it? (Message 181 to 190 of 545)
From: Orchid XP v8
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 07:53:42
Message: <4e9039c6$1@news.povray.org>
>> I've never figured out how to get out of dependency hell in Linux. Like,
>> you ask it to install one tiny application, and it wants to install an
>> entire ecosystem to support that.
>
> In Windows, you have the entire ecosystem to support it.  It's called
> "Windows".

Yeah, that's basically what it comes down to.

"Windows" is a product. You install it, or you don't install it. And 
that's about all there is to it.

"Linux" is a huge soup of different applications and programs written by 
hundreds of people over the course of several decades. There are so many 
features and options. There are a dozen different ways to accomplish 
every task. And every user-level program will use a different one of 
those subsystems, so you have to redundantly install and configure 
almost all of them.

> As a friend of mine who works for Microsoft said when I complained about
> Windows 7's insane use of disk space for 'caching' OS install files and
> the whole MSOCache, "What's the problem?  You can buy a 2 TB drive for
> under a hundred bucks - what's 30 GB of space to cache these install
> files?"

See, to some of us "a hundred bucks" is actually quite a lot of money. 
My current PC has 4 drives in it totalling less than 1 TB altogether. 
If I was going to go to all the expense of buying a terabyte of 
storage, it would be because I want to store a terabyte of *useful 
data*. Not just so that Windows will get out of bed. Sheesh...

> If you want a simple editor, look at nano, vi, or joe.  Small footprint,
> small dependency list.

Yeah, and really awkward to operate.

Of course, it was just an example. It doesn't really matter which 
program you're talking about; if you have KDE and want to run a GNOME 
application (or vice versa), you're going to have to install two entire 
WMs, even though you only ever use one of them.

>> I've never tried to install a Windows application and had to download 8
>> GB of data,
>
> That's because in Windows you have one desktop environment, and one set
> of dependencies.  Choice comes with a cost.  If you don't want the
> choices, use Windows.  Or Mac.

Oddly enough, I do use Windows. (I've never actually seen a physical Mac 
except in a shop.)

>> or had my entire Windows installation completely cease
>> functioning to the point where I have to reinstall.
>
> "Orchid XP v8" - you once said that the "v8" indicated how many times you
> had reinstalled Windows XP.  So I call BS. ;)

I've never had software break my PC so badly that reinstalling was the 
only way to get it to work again. I've had /plenty/ of software refuse 
to uninstall cleanly, or install stuff I didn't want. Now and then I 
end up reinstalling Windows just to keep it tidy. But I've never 
been /forced/ to reinstall. It's always been something I decided to do 
voluntarily.

>> About the worst
>> thing that can happen is that you need to install the .NET runtime.
>> (Obviously, this problem is because .NET exists. If we could get rid of
>> that, the problem would go away.)
>
> It seems you'd be happier with statically linked executables.

Well, that way you would only be downloading the libraries that the 
program actually /uses/...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Jim Henderson
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 08:05:55
Message: <4e903ca3$1@news.povray.org>
On Sat, 08 Oct 2011 12:53:41 +0100, Orchid XP v8 wrote:

> "Windows" is a product. You install it, or you don't install it. And
> that's about all there is to it.
> 
> "Linux" is a huge soup of different applications and programs written by
> hundreds of people over the course of several decades. There are so many
> features and options. There are a dozen different ways to accomplish
> every task. And every user-level program will use a different one of
> those subsystems, so you have to redundantly install and configure
> almost all of them.

Well, technically, "Linux" is the kernel.  GNU/Linux is the system, and a 
distribution is GNU/Linux + applications.

>> As a friend of mine who works for Microsoft said when I complained
>> about Windows 7's insane use of disk space for 'caching' OS install
>> files and the whole MSOCache, "What's the problem?  You can buy a 2 TB
>> drive for under a hundred bucks - what's 30 GB of space to cache these
>> install files?"
> 
> See, to some of us "a hundred bucks" is actually quite a lot of money.
> My current PC has 4 drives in it totalling less than 1 TB altogether.
> If I was going to go to all the expense of buying a terabyte of
> storage, it would be because I want to store a terabyte of *useful
> data*. Not just so that Windows will get out of bed. Sheesh...

Well, yes, and that was my point to my friend.  I don't have an extra 
hundred bucks kicking around right now because I'm currently seeking a 
job (yes, 5 months now I've been looking).

I've got a 2 TB external USB drive, but try running a Win7 VM over a USB2 
connection.  Performance is going to suck, pretty much guaranteed.  I 
also need more memory for this system.

>> If you want a simple editor, look at nano, vi, or joe.  Small
>> footprint, small dependency list.
> 
> Yeah, and really awkward to operate.

Convenience comes at a price, but nano isn't terribly awkward to 
operate.  No more so than Edit on MS-DOS was, IIRC.  (I might be thinking 
of joe).

> Of course, it was just an example. It doesn't really matter which
> program you're talking about; if you have KDE and want to run a GNOME
> application (or vice versa), you're going to have to install two entire
> WMs, even though you only ever use one of them.

If you install a GNOME application, you're using the GNOME libraries (a 
key part of the window manager) and GTK+ widgets.

>>> or had my entire Windows installation completely cease functioning to
>>> the point where I have to reinstall.
>>
>> "Orchid XP v8" - you once said that the "v8" indicated how many times
>> you had reinstalled Windows XP.  So I call BS. ;)
> 
> I've never had software break my PC so badly that reinstalling was the
> only way to get it to work again. I've had /plenty/ of software refuse
> to uninstall cleanly, or install stuff I didn't want. Now and then I
> usually end up reinstalling Windows just to keep it tidy. But I've never
> been /forced/ to reinstall. It's always been something I decided to do
> voluntarily.

Same here with Linux.  In fact, upgrading my laptop to openSUSE 12.1 beta 
1 right now.  My choice, and I may take it back to 11.4 as I need it 
working on Tuesday-Friday next week.

>>> About the worst
>>> thing that can happen is that you need to install the .NET runtime.
>>> (Obviously, this problem is because .NET exists. If we could get rid
>>> of that, the problem would go away.)
>>
>> It seems you'd be happier with statically linked executables.
> 
> Well, that way you would only be downloading the libraries that the
> program actually /uses/...

Well, no, you wouldn't be, because they'd be in the actual program.  But 
then you get into poor code reuse and duplication of shareable code, 
which eats up disk space.
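The tradeoff is easy to put numbers on with a toy sketch (the application names, library names, and sizes below are all invented for illustration, not real measurements):

```python
# Toy model of the static-vs-dynamic disk-space tradeoff. Every name and
# size here is made up; the point is only the shape of the arithmetic.
apps = {
    "editor":  {"libgtk", "libc"},
    "browser": {"libgtk", "libc", "libssl"},
}
lib_mb = {"libgtk": 40, "libc": 20, "libssl": 10}

# Dynamic linking: each shared library lives on disk exactly once.
dynamic_cost = sum(lib_mb[lib] for lib in set().union(*apps.values()))

# Static linking: every program carries private copies of its libraries.
static_cost = sum(lib_mb[lib] for libs in apps.values() for lib in libs)

print(dynamic_cost, static_cost)  # 70 130 with these made-up numbers
```

Any library used by more than one program gets counted (and stored) once per program under static linking, which is exactly the duplication described above.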

Jim



From: Orchid XP v8
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 08:18:16
Message: <4e903f88$1@news.povray.org>
On 08/10/2011 01:05 PM, Jim Henderson wrote:
> On Sat, 08 Oct 2011 12:53:41 +0100, Orchid XP v8 wrote:
>
>> "Windows" is a product. You install it, or you don't install it. And
>> that's about all there is to it.
>>
>> "Linux" is a huge soup of different applications and programs written by
>> hundreds of people over the course of several decades. There are so many
>> features and options. There are a dozen different ways to accomplish
>> every task. And every user-level program will use a different one of
>> those subsystems, so you have to redundantly install and configure
>> almost all of them.
>
> Well, technically, "Linux" is the kernel.  GNU/Linux is the system, and a
> distribution is GNU/Linux + applications.

Strictly speaking, that is of course true. However, that's not what the 
word means in common usage.

>> See, to some of us "a hundred bucks" is actually quite a lot of money.
>
> Well, yes, and that was my point to my friend.

This is something Microsoft has historically never seemed to 
understand.

>> Of course, it was just an example. It doesn't really matter which
>> program you're talking about; if you have KDE and want to run a GNOME
>> application (or vice versa), you're going to have to install two entire
>> WMs, even though you only ever use one of them.
>
> If you install a GNOME application, you're using the GNOME libraries (a
> key part of the window manager) and GTK+ widgets.

I understand /why/ this happens. It's just frustrating, is all. I don't 
see why I should need to install Samba. Why can't I just install, you 
know, the GTK+ widgets? It seems to me that Linux dependency chains are 
just /way/ too coarse.
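To put the complaint concretely: installing one package means computing its transitive dependency closure, so a single coarse edge in the graph drags in everything behind it. A minimal sketch, with a completely made-up dependency graph (these are not any real distro's package names or metadata):

```python
# Transitive dependency resolution in miniature. The graph is invented;
# the coarse edge "gnome-libs" -> "samba-client" is what makes installing
# one GUI application pull in file-sharing support too.
deps = {
    "gnome-app":    ["gtk+", "gnome-libs"],
    "gnome-libs":   ["gvfs", "samba-client"],
    "gtk+":         ["glib"],
    "glib":         [],
    "gvfs":         [],
    "samba-client": [],
}

def closure(pkg, seen=None):
    """Return every package that installing `pkg` would pull in."""
    seen = set() if seen is None else seen
    for dep in deps[pkg]:
        if dep not in seen:
            seen.add(dep)
            closure(dep, seen)
    return seen

print(sorted(closure("gnome-app")))
# ['glib', 'gnome-libs', 'gtk+', 'gvfs', 'samba-client']
```

Finer-grained packaging would mean splitting "gnome-libs" so that the Samba edge only exists for the programs that actually need it.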

>> I've never had software break my PC so badly that reinstalling was the
>> only way to get it to work again.
>
> Same here with Linux.  In fact, upgrading my laptop to openSUSE 12.1 beta
> 1 right now.  My choice, and I may take it back to 11.4 as I need it
> working on Tuesday-Friday next week.

OK, that's astonishing. Every attempt I've ever made at upgrading an 
existing Linux install from one distro release to another has /always/ 
ended in massive breakage, usually to the point that when I boot the 
system the kernel just panics and stops. You would have thought clicking 
"upgrade now" and waiting for the progress bar to finish would work, but 
noooo...

>>> It seems you'd be happier with statically linked executables.
>>
>> Well, that way you would only be downloading the libraries that the
>> program actually /uses/...
>
> Well, no, you wouldn't be, because they'd be in the actual program.  But
> then you get into poor code reuse and duplication of shareable code,
> which eats up disk space.

Yeah, there's a down-side too.

Really, I'd just be happier if I could install just the functionality 
that's strictly necessary, rather than installing everything even 
remotely related. Linux package managers seem to do a really poor job of 
dependency management. (Don't get me started on when one random program 
decides it wants a different version of the Linux kernel or something...)

Still, the problem escalates to a whole new level if you try to install 
something /not/ available from your distro's package manager. Everybody 
raves about how great it is that you can install everything from a big 
old list. But you can't, of course. There will be packages that aren't 
in the list.

Under Windows, if you want to install something, you just download it 
and install it. Under Linux, you probably have to download a tarball, 
work out how to unzip and untar it, figure out where the "install me 
now" script is, and then watch as it directs you to install a different 
version of gcc, asks where the kernel header files are, tries to 
auto-detect the stuff it needs... It almost never works. To the point 
where which Linux I use on my VM depends mostly on which one has VMware 
driver packages provided.
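For what it's worth, the "unzip and untar" part at least is a single step these days. A self-contained sketch using Python's stdlib `tarfile` (the archive, the `foo-1.0` name, and `install.sh` are fabricated on the spot, since there's no real source tarball here; real projects continue with `./configure && make && make install`):

```python
# Fabricate and then unpack a tiny "source tarball" -- the moral
# equivalent of `tar xzf foo-1.0.tar.gz`. All names are hypothetical.
import os
import tarfile
import tempfile

os.chdir(tempfile.mkdtemp())

# Build a fake source archive so the example is runnable anywhere:
os.makedirs("foo-1.0")
with open("foo-1.0/install.sh", "w") as f:
    f.write("echo installed\n")
with tarfile.open("foo-1.0.tar.gz", "w:gz") as tar:
    tar.add("foo-1.0")

# The unpack step itself: gunzip and untar happen in one call.
with tarfile.open("foo-1.0.tar.gz") as tar:
    tar.extractall("unpacked")

print(sorted(os.listdir("unpacked/foo-1.0")))  # ['install.sh']
```

It's everything after the unpack step (the compiler versions, kernel headers, and auto-detection) that goes wrong, which no amount of tar convenience fixes.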

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 08:40:09
Message: <4e9044a9$1@news.povray.org>
>> 1.
>>
>> Unix was explicitly designed to be an OS for computer experts. It
>> assumes you know what you're doing. It provides no help or assistance of
>> any kind. Commands print status messages only in the event of a problem.
>> Success is indicated by silence. There are few warning prompts or
>> confirmation messages.
>
> Well, yes and no.  It was designed for people who work with computers,
> yes.  It does have plenty of warning prompts and confirmation messages,
> but it does vary from program to program.

I'm told part of this also dates back to the time before video displays, 
when command responses actually got printed on paper, and the 
communication was over a slow serial link.

>> Windows is explicitly designed to be used by morons. The kind of people
>> who shouldn't even be let near anything more complicated than a
>> doorbell.
>
> Ironically, Windows until recently has done a pretty piss poor job of
> actually keeping people from injuring themselves.  Why?  Because Gates'
> initial specifications were for no security.

Well, in the days before computer networks, security was pretty much a 
moot point. The only way to compromise a computer was to gain physical 
access. If somebody has physical access, there's not much you can do in 
software.

> Windows since WinNT has had
> to deal with backwards compatibility, and is *finally* getting to a point
> where it's harder for a user to aim the bazooka at their foot and pull
> the trigger.

Yeah, they do tend to prioritise ease of use higher than security, which 
isn't particularly to my liking. But hey...

> UAC is a pain in the ass for advanced users.  It's a necessary component
> for the average user.

What's UAC? Is that new in Windows 7 or something? (I've only used Vista.)

>> 2.
>>
>> To configure a Unix program, you must manually edit text files. To
>> configure a Windows program, you use an actual options screen, which
>> does things like prevent you selecting invalid combinations of features,
>> referring to non-existent paths, and so on. In the main, this is /easier/
>> than editing text files. You don't have to worry about mistyping things,
>> for example.
>
> No.  Many UNIX programs have GUIs now.  You may well have heard of CDE,
> KDE, GNOME, LXDE, Unity - all GUIs.  You may also have heard of YaST,
> Webmin, and other GUI-based (and web-based) admin tools that don't
> require you edit text files and do let you select from a predefined list.

...until you want to configure something that the GUI doesn't have an 
option for. Or you run some script which auto-configured your Apache, 
and now the shiny Apache front-end can't understand the config file any 
more and gets all confused. Or you do anything slightly advanced.

See, the configuration files are still the primary interface for 
configuring most stuff. The new GUI front-ends that most Linux 
distributions ship with now are exactly that - front-ends that make the 
thing look a bit more pretty. It's all too easy to break them though, or 
to end up with cryptic error messages and need to look under the hood to 
find the "real" error and how to fix it.

Windows programs tend to be designed around the GUI first and foremost. 
(Which isn't without drawbacks; it makes scripting harder, for example.)

> Why do I want to configure something from the CLI?  My server is
> headless, and dedicating memory to the GUI sucks resources.  So I have a
> server that I run without a GUI at all.  CLI editing is faster *if you
> know what you're doing*.

Right. That isn't something that is going to worry the average home user 
who's just trying to surf the 'net. That's something only a computer 
expert would care about. And there are various tools for doing it. 
(E.g., apparently Computer Management works remotely now. And you can 
script it.)

> CMD.EXE is quite useful.  I still use it on Windows Server 2008R2 and
> Windows 7 to perform filesystem operations.  Why?  Because I can do many
> of those things faster with a command prompt than with a mouse.  Removing
> my hand from the keyboard and using the mouse slows me down.  Time is
> money.  Wasted time is lost money.

The CLI /is/ superior for certain tasks. That's why it exists, after 
all. I won't dispute that. (Although CMD.EXE is a fairly weak 
implementation of the concept.)

>> It is perfectly possible to programmatically edit the Registry. Indeed,
>> it's /easier/ than manipulating text files. You don't have to figure out
>> where the hell the file is stored and learn /yet another/ file format.
>> You just issue a couple of Win32 calls. All programs store their
>> configuration data in a single, common format - the registry.
>
> Hmmm, so writing a program to edit the registry is faster than going to /
> etc/apache2 and editing httpd.conf?  I don't think so.

Firing up RegEdit and going to the appropriate key is roughly as easy as 
opening up a text editor on your program's configuration file. The only 
real difference is that configuration files usually have actual 
documentation, whereas the registry typically doesn't.
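For comparison, programmatic edits on the text-file side are also only a few standard-library calls away, at least for INI-style files. This sketch uses Python's `configparser` with a made-up config file and keys (on Windows, the equivalent registry calls live in the stdlib `winreg` module):

```python
# Programmatically edit an INI-style configuration file -- the text-file
# counterpart of "a couple of Win32 calls" into the registry. The file
# name, section, and keys are invented for the example.
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[server]\nport = 80\nlogfile = /var/log/demo.log\n")

cfg["server"]["port"] = "8080"   # change one setting in place

with open("demo.conf", "w") as f:
    cfg.write(f)                 # serialise back out, format handled for us

print(cfg["server"]["port"])     # 8080
```

The catch, of course, is that this only works because INI is a single common format; most Unix daemons each have their own.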

>> On the down side, registry settings tend to be completely undocumented.
>
> Oh, and more than that, Microsoft routinely warns people *not* to edit
> the registry if they don't know what they're doing, and most of the
> Technet articles I've seen that include editing the registry include a
> "proceed at your own risk" disclaimer in case you totally fuck the system
> over with your change.
>
> I have yet to see a text file change on a Linux system that can hork the
> system up as badly as Microsoft wants you to believe Windows can be
> messed up with a single registry change.

Looking at the registry is effectively like looking at every 
configuration file on your entire Linux box. Sure, /most/ settings that 
you could change won't do any harm. However, since the harmless ones are 
right next to the utterly critical ones, one wrong step can totally 
floor the system. Possibly instantly. (Another thing about the registry: 
changes can take effect immediately. How many Linux programs "watch" 
their configuration file(s)?)
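A program that did "watch" its configuration file only needs to poll a couple of stat fields. A minimal sketch (the file name is made up, and real daemons typically use inotify or reload on SIGHUP rather than a polling loop):

```python
# Minimal config-file watcher: reload when the file's mtime or size
# changes. Polling stat() is the crude version of what inotify does.
import os

CONF = "app.conf"
with open(CONF, "w") as f:
    f.write("colour = blue\n")

def fingerprint(path):
    st = os.stat(path)
    return (st.st_mtime_ns, st.st_size)

last = fingerprint(CONF)

def reload_if_changed():
    global last
    now = fingerprint(CONF)
    if now != last:
        last = now               # a real program would re-read CONF here
        return True
    return False

with open(CONF, "a") as f:       # simulate a user editing the file
    f.write("size = 12\n")

print(reload_if_changed())       # True -- the edit is noticed immediately
```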

> Because with Linux it's pretty rare to have to reboot to affect the
> change.  It's sometimes easier, but to this day, I continue to be amazed
> at how frequently a Windows system has to be restarted.  Twice during
> installation, and if you're applying system updates, sometimes multiple
> times to get everything current (certainly with XP, later versions are
> somewhat better).

Ubuntu seems to constantly want me to reboot when I install updates too. 
I think the problem is more that Windows requires updating more often.

>> There are even tools to automate some of this. With just a factory
>> default install of a Windows server OS, I can press a few buttons in a
>> GUI and apply configuration settings to every Windows machine on the
>> network. With Unix, I'd have to go off and script something.
>
> Wrong.
>
> With openSUSE, for example, I can do an installation and at the end of
> the installation, the installer asks if I want to create an autoyast file
> so I can clone the system or do identical installs for multiple servers.
>
> Trivial.  No scripting required.

You mean WITH ONE PARTICULAR LINUX it's trivial.

That's just it. Windows is one product, with one set of management 
tools. The original Unix, as best as I can tell, has almost no 
management features at all. You're supposed to roll your own. So every 
major distro builder has built their own independent system of 
management tools.

If you wanted to compare how easy this is, you can't really compare 
"Windows" to "Linux". You'd have to compare "Windows" to "Debian", 
"Ubuntu", "OpenSUSE", "Fedora", ...

>> For example, where I work:
>> - Every PC has the screensaver set to come on after 2 minutes. You must
>> use a password to unlock it.
>>
>> In addition, at the touch of a button, the guys at HQ can make every PC
>> on the network (or just certain groups of them) install a specific piece
>> of software.
>>
>> If you wanted to do any of this with Linux, you would have a whole bunch
>> of scripting ahead of you.
>
> Wrong, again on the Linux front.  I personally know people who administer
> *thousands* of Linux servers.  I worked for a company that has a product
> to apply updates on a schedule to remote Linux systems.

A company that "makes a product" to enable you to do this.

Yes, almost /any/ OS can have software written for it that makes remote 
management easy. The question is how widely available that is.

>> In summary:
>>
>> You can configure Windows just as easily as, if not /more/ easily than a
>> Unix system.
>
> Nope, and to state this is pretty much an uninformed opinion based on a
> deep(er) knowledge of Windows and a lack of knowledge about modern Linux
> systems and how they're deployed in corporate environments.

I think we can summarise as follows:

Unix gives you standard tools for building any kind of management 
infrastructure you can imagine. But it doesn't actually provide such an 
infrastructure by default.

Windows gives you one standard set of management tools, out of the box. 
If those tools don't quite cover what you want, you have a slightly 
harder problem than you would with Unix, but it's hardly intractable.

And, I would imagine, various individual Linux distros probably provide 
their own unique management tools. I doubt any of these work for more 
than one distro, however...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Jim Henderson
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 08:40:50
Message: <4e9044d2@news.povray.org>
On Sat, 08 Oct 2011 13:18:14 +0100, Orchid XP v8 wrote:

> On 08/10/2011 01:05 PM, Jim Henderson wrote:
>> On Sat, 08 Oct 2011 12:53:41 +0100, Orchid XP v8 wrote:
>>
>>> "Windows" is a product. You install it, or you don't install it. And
>>> that's about all there is to it.
>>>
>>> "Linux" is a huge soup of different applications and programs written
>>> by hundreds of people over the course of several decades. There are so
>>> many features and options. There are a dozen different ways to
>>> accomplish every task. And every user-level program will use a
>>> different one of those subsystems, so you have to redundantly install
>>> and configure almost all of them.
>>
>> Well, technically, "Linux" is the kernel.  GNU/Linux is the system, and
>> a distribution is GNU/Linux + applications.
> 
> Strictly speaking, that is of course true. However, that's not what is
> usually meant in common usage.

And, truth be told, most of the preconfiguration that's done during 
installation is sufficient for the majority of users.  So no, no need to 
configure "all of them".

>>> See, to some of us "a hundred bucks" is actually quite a lot of money.
>>
>> Well, yes, and that was my point to my friend.
> 
> This is something Microsoft has historically never seemed to
> understand.

Well, in defense of my friend at Microsoft, he was in the consulting 
organisation, and they ordered *15,000* laptops from a particular 
manufacturer just for their consultants.

It's hard to understand why people have trouble affording a single hard 
drive when you buy in such bulk quantities.

But he's a funny guy - actually quite cynical about the tech industry as 
a whole.  He's called the whole thing a 'scam' for years.

>>> Of course, it was just an example. It doesn't really matter which
>>> program you're talking about; if you have KDE and want to run a GNOME
>>> application (or vice versa), you're going to have to install two
>>> entire WMs, even though you only ever use one of them.
>>
>> If you install a GNOME application, you're using the GNOME libraries (a
>> key part of the window manager) and GTK+ widgets.
> 
> I understand /why/ this happens. It's just frustrating, is all. I don't
> see why I should need to install Samba. Why can't I just install, you
> know, the GTK+ widgets? It seems to me that Linux dependency chains are
> just /way/ too coarse.

That's because you've never spent time looking at those interdependencies.

After all, on Windows, you have CIFS/SMB available on all systems by 
default.  You take it for granted on Windows, but for the rest of the 
world, there is a choice.

>>> I've never had software break my PC so badly that reinstalling was the
>>> only way to get it to work again.
>>
>> Same here with Linux.  In fact, upgrading my laptop to openSUSE 12.1
>> beta 1 right now.  My choice, and I may take it back to 11.4 as I need
>> it working on Tuesday-Friday next week.
> 
> OK, that's astonishing. Every attempt I've ever made at upgrading an
> existing Linux install from one distro release to another has /always/
> ended in massive breakage, usually to the point that when I boot the
> system the kernel just panics and stops. You would have thought clicking
> "upgrade now" and waiting for the progress bar to finish would work, but
> noooo...

I've upgraded openSUSE from 11.0->11.1->11.2->11.4 (I gave 11.3 a miss).  
This upgrade I'm running now is a beta, so I expected problems.  And I 
got them, the upgrade failed and gave me the very helpful error message 
"An error occurred during the upgrade".  Nice.  That's getting reported.

Booted the system and GRUB thinks it's still 11.4 (as the grub update 
didn't apparently run) and there were a few hundred packages still to 
update.  Not sure why it failed, and the log went when I rebooted it.

So, booted the system manually, replaced the repos with the proper repos, 
and am doing a "zypper dup" to upgrade it.

Fortunately, I backed up the old partitions with partimage, so I can 
restore them if necessary.

The worst upgrade hell I've ever heard of, though, was MS' own corporate 
upgrade from Windows Server 2000 to Windows Server 2003.  I was told they 
upgraded to each incremental pre-release alpha, beta, and release 
candidate on several of their internal servers.  It was a nightmare, and 
the basis of their recommendation to do rip-and-replace upgrades rather 
than in-place upgrades.

>>>> It seems you'd be happier with statically linked executables.
>>>
>>> Well, that way you would only be downloading the libraries that the
>>> program actually /uses/...
>>
>> Well, no, you wouldn't be, because they'd be in the actual program. 
>> But then you get into poor code reuse and duplication of shareable
>> code, which eats up disk space.
> 
> Yeah, there's a down-side too.

There are always tradeoffs.

> Really, I'd just be happier if I could install just the functionality
> that's strictly necessary, rather than installing everything even
> remotely related. Linux package managers seem to do a really poor job of
> dependency management. (Don't get me started on when one random program
> decides it wants a different version of the Linux kernel or
> something...)

Programs usually don't care about the kernel version, unless they're 
kernel modules (or provide them).

RPM does a pretty good job of dependency management, but you have to take 
care not to add too many repositories, and don't mix and match repo 
versions.  That'll break things quite quickly.

For openSUSE, it's generally recommended you have 4 repos and no more:

1.  OSS
2.  Non-OSS
3.  Update
4.  Packman

And that's it.  Anything more and - if you're inexperienced - you'll end 
up shooting yourself in the foot, and probably take the other foot off 
for good measure along with 3 fingers on your left hand.

But, in true Linux fashion, you'll get to choose the 2 remaining 
fingers. ;)

> Still, the problem escalates to a whole new level if you try to install
> something /not/ available from your distro's package manager. Everybody
> raves about how great it is that you can install everything from a big
> old list. But you can't, of course. There will be packages that aren't
> in the list.

Actually, with openSUSE's Open Build Service, you can.  If you don't find 
something you need and it's OSS, ask in the forums if someone can build 
it - if it isn't already there under someone's home project in OBS, 
there's a guy on staff (malcolmlewis) who has been happy to get the 
package built.

Oh, and OBS?  Builds packages not just for openSUSE.  It can build for 
RedHat, Fedora, Debian, Ubuntu, and a few others.

> Under Windows, if you want to install something, you just download it
> and install it. Under Linux, you probably have to download a tarball,
> work out how to unzip and untar it, figure out where the "install me
> now" script is, and then watch as it directs you to install a different
> version of gcc, asks where the kernel header files are, tries to
> auto-detect the stuff it needs... It almost never works. 

Certainly if you don't know what you're doing, it almost never works.  If 
you know what you're doing, then it almost never fails (and when it does, 
it's usually a dependency version issue or a bug in the code that 
prevents the compile from happening).

Again, OBS solves this problem for a lot of distros, not just openSUSE.  
It even builds the packages for you on a server farm located "out there" 
somewhere.  Multiple architectures, too.  It's pretty slick.

> To the point
> where which Linux I use on my VM depends mostly on which one has VMware
> driver packages provided.

VMware provides their own tools, but there are free (as in OSS) tools as 
well.  ISTR they're included with openSUSE, in fact.

Jim



From: Jim Henderson
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 09:15:07
Message: <4e904cdb@news.povray.org>
On Sat, 08 Oct 2011 13:40:07 +0100, Orchid XP v8 wrote:

>>> Unix was explicitly designed to be an OS for computer experts. It
>>> assumes you know what you're doing. It provides no help or assistance
>>> of any kind. Commands print status messages only in the event of a
>>> problem. Success is indicated by silence. There are few warning
>>> prompts or confirmation messages.
>>
>> Well, yes and no.  It was designed for people who work with computers,
>> yes.  It does have plenty of warning prompts and confirmation messages,
>> but it does vary from program to program.
> 
> I'm told part of this also dates back to the time before video displays,
> where command responses actually got printed on paper, and the
> communication was over a slow serial link.

Sure, but as the technology advances, some things have been updated when 
it was deemed appropriate.

>>> Windows is explicitly designed to be used by morons. The kind of
>>> people who shouldn't even be let near anything more complicated than a
>>> doorbell.
>>
>> Ironically, Windows until recently has done a pretty piss poor job of
>> actually keeping people from injuring themselves.  Why?  Because Gates'
>> initial specifications were for no security.
> 
> Well, in the days before computer networks, security was pretty much a
> moot point. The only way to compromise a computer was to gain physical
> access. If somebody has physical access, there's not much you can do in
> software.

Yes and no.  I worked in a computer lab at university that had boot 
diskettes for the network.  I can tell you viruses we had didn't spread 
because of the network.  They couldn't; they didn't understand INT2F 
redirection.

>> Windows since WinNT has had
>> to deal with backwards compatibility, and is *finally* getting to a
>> point where it's harder for a user to aim the bazooka at their foot and
>> pull the trigger.
> 
> Yeah, they do tend to prioritise ease of use higher than security, which
> isn't particularly to my liking. But hey...

Ease of use and security often need to be balanced.  More secure = harder 
to use.  Less secure = easier to use.

>> UAC is a pain in the ass for advanced users.  It's a necessary
>> component for the average user.
> 
> What's UAC? Is that new in Windows 7 or something? (I've only used
> Vista.)

User Account Control, introduced in Vista IIRC.

>>> To configure a Unix program, you must manually edit text files. To
>>> configure a Windows program, you use an actual options screen, which
>>> does things like prevent you selecting invalid combinations of
>>> features, refering to non-existent paths, and so on. In the main, this
>>> is /easier/ than editing text files. You don't have to worry about
>>> mistyping things, for example.
>>
>> No.  Many UNIX programs have GUIs now.  You may well have heard of CDE,
>> KDE, GNOME, LXDE, Unity - all GUIs.  You may also have heard of YaST,
>> Webmin, and other GUI-based (and web-based) admin tools that don't
>> require you edit text files and do let you select from a predefined
>> list.
> 
> ...until you want to configure something that the GUI doesn't have an
> option for. Or you run some script which auto-configured your Apache,
> and now the shiny Apache front-end can't understand the config file any
> more and gets all confused. Or you do anything slightly advanced.

Just like with Windows and the registry.  You can configure any settings 
that you want, as long as the UI has them.  The minute it doesn't, out 
comes regedit.

Sorry, that's a straw man, and you know it.

> See, the configuration files are still the primary interface for
> configuring most stuff. The new GUI front-ends that most Linux
> distributions ship with now are exactly that - front-ends that make the
> thing look a bit more pretty. It's all too easy to break them though, or
> to end up with cryptic error messages and need to look under the hood to
> find the "real" error and how to fix it.

Hmm, and GUI front-ends for Windows configuration aren't anything more 
than ways of modifying the registry (or, in the now rare case, an INI file 
somewhere)?  Sorry, again, you've constructed a straw man.

> Windows programs tend to be designed around the GUI first and foremost.
> (Which isn't without drawbacks; it makes scripting harder, for example.)

Well, Microsoft programs tend to be.  Windows programs as a whole - some 
are, some aren't.  I've seen some pretty crappy designs for Windows UIs 
in my time.  Blackberry Desktop comes to mind immediately.

>> Why do I want to configure something from the CLI?  My server is
>> headless, and dedicating memory to the GUI sucks resources.  So I have
>> a server that I run without a GUI at all.  CLI editing is faster *if
>> you know what you're doing*.
> 
> Right. That isn't something that is going to worry the average home user
> who's just trying to surf the 'net. That's something only a computer
> expert would care about. And there are various tools for doing it.
> (E.g., apparently Computer Management works remotely now. And you can
> script it.)

You know, the average home user surfing the net doesn't have to do diddly 
with a Linux box either.  It isn't rocket science to install a Linux 
distribution these days (certainly not like when I started using it back 
in the 90's), and if the system is preconfigured the way a Windows one is 
(i.e., installed by someone who knows what the hell they're doing), the 
average computer user can be trained how to launch Firefox or Chrome and 
surf the web.

>> CMD.EXE is quite useful.  I still use it on Windows Server 2008R2 and
>> Windows 7 to perform filesystem operations.  Why?  Because I can do
>> many of those things faster with a command prompt than with a mouse. 
>> Removing my hand from the keyboard and using the mouse slows me down. 
>> Time is money.  Wasted time is lost money.
> 
> The CLI /is/ superior for certain tasks. That's why it exists, after
> all. I won't dispute that. (Although CMD.EXE is a fairly weak
> implementation of the concept.)

But for many Windows users, it's what they're most familiar with.

>>> It is perfectly possible to programmatically edit the Registry.
>>> Indeed, it's /easier/ than manipulating text files. You don't have to
>>> figure out where the hell the file is stored and learn /yet another/
>>> file format. You just issue a couple of Win32 calls. All programs
>>> store their configuration data in a single, common format - the
>>> registry.
>>
>> Hmmm, so writing a program to edit the registry is faster than going to
>> / etc/apache2 and editing httpd.conf?  I don't think so.
> 
> Firing up RegEdit and going to the appropriate key is roughly as easy as
> opening up a text editor on your program's configuration file. The only
> real difference is that configuration files usually have actual
> documentation, whereas the registry typically doesn't.

Firing up regedit isn't "programming".  Navigating through the hives to 
find the right key is a freaking nightmare.  Even when you know the key 
you want to navigate to.

Give me a text editor and a config file *any day*.  Most of those config 
files have documentation in the comments.  Show me a full description for 
what any given registry key does *within regedit*, and then I *might* 
believe that it's "as easy as editing a config file on Linux".
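For example, a typical stretch of an sshd_config documents itself inline 
(an illustrative excerpt, not a complete file):

```
# What port sshd listens on (commented lines show the default)
#Port 22

# Whether root may log in over SSH
PermitRootLogin no

# Set to "no" to require key-based authentication
PasswordAuthentication yes
```

Every option sits next to a comment explaining it; regedit gives you a 
key name and a value, full stop.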

>>> On the down side, registry settings tend to be completely
>>> undocumented.
>>
>> Oh, and more than that, Microsoft routinely warns people *not* to edit
>> the registry if they don't know what they're doing, and most of the
>> Technet articles I've seen that include editing the registry include a
>> "proceed at your own risk" disclaimer in case you totally fuck the
>> system over with your change.
>>
>> I have yet to see a text file change on a Linux system that can hork
>> the system up as badly as Microsoft wants you to believe Windows can be
>> messed up with a single registry change.
> 
> Looking at the registry is effectively like looking at every
> configuration file on your entire Linux box. Sure, /most/ settings that
> you could change won't do any harm. However, since the harmless ones are
> right next to the utterly critical ones, one wrong step can totally
> floor the system. 

You're simply wrong about this.  Been using Linux since the 90s, pretty 
much daily, and *never* have brought a Linux system down *instantly* by 
changing a config file.  N.E.V.E.R.

> Possibly instantly. 

On Linux?  Highly unlikely.

> (Another thing about the registry:
> changes can take effect immediately. How many Linux programs "watch"
> their configuration file(s)?)

Several do.  If I change the configuration file for vsftpd, for example, 
or sshd, the change comes into play the next time a user connects to it.  
xinetd is a wonderful thing.
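That works because xinetd-style services are launched fresh for each 
connection, so the config is re-read every time.  A toy sketch of the 
pattern (the greeting.conf file and serve_once function are made up for 
illustration; this is not a real xinetd setup):

```shell
# Hypothetical config; stands in for /etc/vsftpd.conf and friends
echo "greeting=hello" > greeting.conf

# xinetd-style: each "connection" spawns the service anew,
# so it always sees the current config
serve_once() {
    . ./greeting.conf
    echo "$greeting"
}

serve_once                              # prints: hello
echo "greeting=goodbye" > greeting.conf
serve_once                              # prints: goodbye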

>> Because with Linux it's pretty rare to have to reboot to affect the
>> change.  It's sometimes easier, but to this day, I continue to be
>> amazed at how frequently a Windows system has to be restarted.  Twice
>> during installation, and if you're applying system updates, sometimes
>> multiple times to get everything current (certainly with XP, later
>> versions are somewhat better).
> 
> Ubuntu seems to constantly want me to reboot when I install updates too.
> I think the problem is more that Windows requires updating more often.

Only if there's a kernel update.  Ubuntu may prompt more frequently 
because it's more convenient and what users coming from Windows are used 
to.

I've spent a fair amount of time recently installing Windows Server 
2008R2 and SQL Server 2008R2 for some work I've been doing.  The install 
is smoother than Server 2000 and 2003, I'll grant.  But still, it's in 
the stone ages compared to Linux in terms of reboots.

>>> There are even tools to automate some of this. With just a factory
>>> default install of a Windows server OS, I can press a few buttons in a
>>> GUI and apply configuration settings to every Windows machine on the
>>> network. With Unix, I'd have to go off and script something.
>>
>> Wrong.
>>
>> With openSUSE, for example, I can do an installation and at the end of
>> the installation, the installer asks if I want to create an autoyast
>> file so I can clone the system or do identical installs for multiple
>> servers.
>>
>> Trivial.  No scripting required.
> 
> You mean WITH ONE PARTICULAR LINUX it's trivial.

Fedora also has a similar tool.  I'm sure the Ubuntu Customization Kit 
also is capable of something similar.
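An AutoYaST control file, for reference, is just XML.  A heavily trimmed, 
purely illustrative fragment (a real profile has many more sections; 
treat the element names here as approximate):

```xml
<?xml version="1.0"?>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <software>
    <patterns config:type="list">
      <pattern>base</pattern>
    </patterns>
  </software>
  <networking>
    <dns>
      <hostname>clone01</hostname>
    </dns>
  </networking>
</profile>
```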

> That's just it. Windows is one product, with one set of management
> tools. The original Unix, as best as I can tell, has almost no
> management features at all. You're supposed to roll your own. So every
> major distro builder has built their own independent system of
> management tools.

The *original* Unix was built in the 60's, and much of what was true for 
that is simply not true today.  That would be like me saying Windows was 
totally insecure because it was with Windows 1.0.  Such a statement would 
be complete bullshit; so is making statements about Linux based on Unix 
developed in the 60's.

> If you wanted to compare how easy this is, you can't really compare
> "Windows" to "Linux". You'd have to compare "Windows" to "Debian",
> "Ubuntu", "OpenSUSE", "Fedora", ...

In reality, that is certainly true.  Because we're talking about a 
complete system, and "Linux" isn't.  A distribution is.

But then a distribution includes things that Microsoft doesn't include - 
office suites, several gigabytes of other applications, and so on.

>> Wrong, again on the Linux front.  I personally know people who
>> administer *thousands* of Linux servers.  I worked for a company that
>> has a product to apply updates on a schedule to remote Linux systems.
> 
> A company that "makes a product" to enable you to do this.
> 
> Yes, almost /any/ OS can have software written for it that makes remote
> management easy. The question is how widely available that is.

Sorry, I thought the Windows management software for managing thousands 
of servers, i.e. the "product made by Microsoft to make managing 
large-scale Windows deployments easier", was a paid-for product called 
SMS, combined with some built-in directory features like GPOs (which of 
course require Windows Server - but wait, that's not free, is it?).

Oh, and that company I worked for?  Also makes a tool for managing 
Windows servers that's actually quite a lot better than the Windows 
native tools (including arguably many of the paid-for add-ons).

>> Nope, and to state this is pretty much an uninformed opinion based on a
>> deep(er) knowledge of Windows and a lack of knowledge about modern
>> Linux systems and how they're deployed in corporate environments.
> 
> I think we can summarise as follows:
> 
> Unix gives you standard tools for building any kind of management
> infrastructure you can imagine. But it doesn't actually provide such an
> infrastructure by default.

Again, wrong.  And if you want to talk about Unix, Unix and Linux, while 
having some common tools, are not actually the same thing.

> Windows gives you one standard set of management tools, out of the box.
> If those tools don't quite cover what you want, you have a slightly
> harder problem then you would with Unix, but it's hardly intractable.

And Unix/Linux management is hardly intractable either.  But to listen to 
you, it's freakin' impossible - because if you don't know it, it MUST be 
impossible, right?

> And, I would imagine, various individual Linux distros probably provide
> their own unique management tools. I doubt any of these work for more
> than one distro, however...

Webmin is an example of one that's cross-distribution.  Oh, and the 
cost?  It's free - OSS.  Works the same regardless of which supported 
distribution it's put on.

Jim



From: Orchid XP v8
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 09:44:09
Message: <4e9053a9$1@news.povray.org>
>> This is something Microsoft has always historically not seemed to
>> understand.
>
> Well, in defense of my friend at Microsoft, he was in the consulting
> organisation, and they ordered *15,000* laptops from a particular
> manufacturer just for their consultants.
>
> It's hard to understand why people have trouble affording a single hard
> drive when you buy in such bulk quantities.

Yeah, I guess that's what it comes down to.

> But he's a funny guy - actually quite cynical about the tech industry as
> a whole.  He's called the whole thing a 'scam' for years.

He's right...

(And I don't just mean Windows, or Linux. I mean the entire software 
industry.)

>> I understand /why/ this happens. It's just frustrating, is all. I don't
>> see why I should need to install Samba. Why can't I just install, you
>> know, the GTK+ widgets? It seems to me that Linux dependency chains are
>> just /way/ too coarse.
>
> That's because you've never spent time looking at those interdependencies.
>
> After all, on Windows, you have CIFS/SMB available on all systems by
> default.  You take it for granted on Windows, but for the rest of the
> world, there is a choice.

I still don't see why it's necessary to install a network protocol just 
to run a text editor.

>> OK, that's astonishing. Every attempt I've ever made at upgrading an
>> existing Linux install from one distro release to another has /always/
>> ended in massive breakage, usually to the point that when I boot the
>> system the kernel just panics and stops. You would have thought clicking
>> "upgrade now" and waiting for the progress bar to finish would work, but
>> noooo...
>
> I've upgraded openSUSE from 11.0->11.1->11.2->11.4 (I gave 11.3 a miss).

Me and my dad tried updating OpenSUSE one time. After several days of 
hell, we decided never to attempt this ever again.

> The worst upgrade hell I've ever heard of, though, was MS' own corporate
> upgrade from Windows Server 2000 to Windows Server 2003.  I was told they
> upgraded to each incremental pre-release alpha, beta, and release
> candidate on several of their internal servers.  It was a nightmare, and
> the basis of their recommendation to do rip-and-replace upgrades rather
> than in-place upgrades.

Uh, yeah. Updating Windows in-place isn't something I'd recommend 
*either*...

>> Really, I'd just be happier if I could install just the functionality
>> that's strictly necessary, rather than installing everything even
>> remotely related. Linux package managers seem to do a really poor job of
>> dependency management. (Don't get me started on when one random program
>> decides it wants a different version of the Linux kernel or
>> something...)
>
> Programs usually don't care about the kernel version, unless they're
> kernel modules (or provide them).

Or use features that are built into the kernel. (Stuff like 
cryptographic primitives, sound support, file change monitoring...)

> RPM does a pretty good job of dependency management

Well, some distros use RPM, some use .deb, some use something else 
entirely. I've yet to see a package manager where it's entirely clear 
what the heck is going on, or why selecting one small application 
requires a 2GB download.
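The 2GB download usually comes from transitive dependencies: the package 
you asked for names a few direct dependencies, each of which names more. 
A toy model with made-up package names (real resolvers read the same kind 
of table out of the package metadata):

```shell
# Hypothetical dependency table
deps_of() {
    case "$1" in
        tiny-editor) echo "gtk" ;;
        gtk)         echo "glib pango" ;;
        pango)       echo "glib fontconfig" ;;
    esac
}

# Depth-first transitive closure: everything "install tiny-editor" pulls in
resolve() {
    for dep in $(deps_of "$1"); do
        echo "$dep"
        resolve "$dep"
    done
}

resolve tiny-editor | sort -u    # prints fontconfig glib gtk pango, one per line
```

You asked for one package and got four; scale the table up to a real 
desktop stack and the numbers get silly fast.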

> but you have to take care not to add too many repositories

I don't even know how to do that.

> But, in true Linux fashion, you'll get to choose the 2 remaining
> fingers. ;)

LOL.

>> Still, the problem escalates to a whole new level if you try to install
>> something /not/ available from your distro's package manager. Everybody
>> raves about how great it is that you can install everything from a big
>> old list. But you can't, of course. There will be packages that aren't
>> in the list.
>
> Actually, with openSUSE's Open Build Service, you can.

That's a nice idea. I can't comment on whether it works or not (given 
that I've never heard of it before). I guess it doesn't help if you have 
time pressure - but hey, it's free...

>> Under Windows, if you want to install something, you just download it
>> and install it. Under Linux, you probably have to download a tarball,
>> work out how to unzip and untar it, figure out where the "install me
>> now" script is, and then watch as it directs you to install a different
>> version of gcc, asks where the kernel header files are, tries to
>> auto-detect the stuff it needs... It almost never works.
>
> Certainly if you don't know what you're doing, it almost never works.  If
> you know what you're doing, then it almost never fails (and when it does,
> it's usually a dependency version issue or a bug in the code that
> prevents the compile from happening).

Last time I tried this with VMware tools, it went something like this:
- Where are the kernel headers?
- No, the headers for the *running* kernel?
- OK, now install gcc please.
- No, the version of gcc that the kernel was compiled with.
At that point, I discovered that the version of gcc in question isn't 
available for this release of Ubuntu. WTF?
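To be fair, the extract step itself is mechanical. A self-contained 
sketch (the "tarball" here is fabricated on the spot so the example can 
run anywhere; a real one would come from a download, and the install step 
is usually ./configure && make && make install rather than one script):

```shell
# Fabricate a stand-in "source tarball" so the example is self-contained
mkdir -p hello-1.0
printf 'echo "installing hello-1.0..."\n' > hello-1.0/install.sh
tar -czf hello-1.0.tar.gz hello-1.0
rm -r hello-1.0

# The usual dance: one tar invocation both gunzips and untars
tar -xzf hello-1.0.tar.gz
cd hello-1.0
sh install.sh                 # prints: installing hello-1.0...
```

It's everything *after* that point - compilers, kernel headers, version 
mismatches - where it all goes wrong.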

>> To the point
>> where which Linux I use on my VM depends mostly on which one has VMware
>> driver packages provided.
>
> VMware provides their own tools, but there are free (as in OSS) tools as
> well.  ISTR they're included with openSUSE, in fact.

Yeah, I tried several distros, and some of them just had no VMware 
support at all, some of them you could install packages for VMware as an 
option [not that it explains WTF each package does], and some of them 
automatically installed a bunch of VMware stuff without me even asking. 
It's as if the software somehow "knows" it's running in a VM...

VMware Tools comes with a script that's supposed to compile and install 
the necessary kernel modules, but I have never, ever seen it work. It 
always fails. Not that I blame them; there are such radical differences 
between distros that targeting all of them looks like a hopelessly 
difficult task.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 10:05:46
Message: <4e9058ba$1@news.povray.org>
>> Yeah, they do tend to prioritise ease of use higher than security, which
>> isn't particularly to my liking. But hey...
>
> Ease of use and security often need to be balanced.  More secure = harder
> to use.  Less secure = easier to use.

Yeah, I'm aware of that. The other issue is backwards compatibility. If 
you make the system more secure, old software tends to stop working. 
(E.g., to this day Nero only works if you're an Administrator. No 
reason, it's just poorly written.)

>>> UAC is a pain in the ass for advanced users.  It's a necessary
>>> component for the average user.
>>
>> What's UAC? Is that new in Windows 7 or something? (I've only used
>> Vista.)
>
> User Account Control, introduced in Vista IIRC.

Oh, OK. Maybe I just haven't run across that one yet then. I don't use 
my Vista machine all that much.

>>> No.  Many UNIX programs have GUIs now.
>>
>> ...until you want to configure something that the GUI doesn't have an
>> option for. Or you run some script which auto-configured your Apache,
>> and now the shiny Apache front-end can't understand the config file any
>> more and gets all confused. Or you do anything slightly advanced.
>
> Just like with Windows and the registry.  You can configure any settings
> that you want, as long as the UI has them.  The minute it doesn't, out
> comes regedit.
>
> Sorry, that's a straw man, and you know it.

>> See, the configuration files are still the primary interface for
>> configuring most stuff.
>
> Hmm, and GUI front-ends for Windows configuration aren't anything more
> than ways of modifying the registry (or, in the now rare case, an INI file
> somewhere)?  Sorry, again, you've constructed a straw man.

It's a mentality difference, not a technological one.

Under Unix, the primary way to control most software is through 
configuration files. These days Linux has added pretty front-ends to 
some of these systems, but they tend to be designed only for the people 
who aren't smart enough to use the "real" interface - i.e., edit the 
text files manually.

Under Windows, the GUI is the "real" interface. The configuration data 
is stored in the registry, but you're not supposed to edit it directly. 
You're supposed to go to the GUI first. Therefore, much more effort is 
put into making the GUI cover everything. (And far less effort is put 
into making the registry data human-readable, or even documented.)

It's just about where the developers focus their attention. Under Unix, 
the configuration file is the definitive interface. Under Windows, the GUI 
is. (Or possibly the COM interface. But I don't know much about that one...)

>> Windows programs tend to be designed around the GUI first and foremost.
>
> Well, Microsoft programs tend to be.  Windows programs as a whole - some
> are, some aren't.  I've seen some pretty crappy designs for Windows UIs
> in my time.  Blackberry Desktop comes to mind immediately.

Oh yeah, but /all/ platforms have crappy software. Heck, when I tried 
KLogic, its simulations sometimes GAVE THE WRONG ANSWER. And frequently, 
just tweaking a few lines would crash the program. So much for OSS 
always being of high quality... In truth, crap software exists 
everywhere. The interesting question is where the /good/ software is.

>>> Why do I want to configure something from the CLI?  My server is
>>> headless, and dedicating memory to the GUI sucks resources.
>>
>> Right. That isn't something that is going to worry the average home user
>> who's just trying to surf the 'net.
>
> You know, the average home user surfing the net doesn't have to do diddly
> with a Linux box either.

Sure. But that isn't the point. The point is that how powerful the CLI 
interface is only matters to power users. Which isn't who Windows is 
primarily aimed at.

>> The CLI /is/ superior for certain tasks. That's why it exists, after
>> all. I won't dispute that. (Although CMD.EXE is a fairly weak
>> implementation of the concept.)
>
> But for many Windows users, it's what they're most familiar with.

Granted.

>> Firing up RegEdit and going to the appropriate key is roughly as easy as
>> opening up a text editor on your program's configuration file.
>
> Firing up regedit isn't "programming".  Navigating through the hives to
> find the right key is a freaking nightmare.  Even when you know the key
> you want to navigate to.

No more nightmarish than navigating to a particular file. You just click 
on a tree view. Just like a file browser.

> Give me a text editor and a config file *any day*.  Most of those config
> files have documentation in the comments.  Show me a full description for
> what any given registry key does *within regedit*, and then I *might*
> believe that it's "as easy as editing a config file on Linux".

Like I said, it's not the primary interface. You're supposed to use it 
only as a last resort. I'll admit I'd like it a lot more if there was 
more documentation for the registry.

>>> I have yet to see a text file change on a Linux system that can hork
>>> the system up as badly as Microsoft wants you to believe Windows can be
>>> messed up with a single registry change.
>>
>> Looking at the registry is effectively like looking at every
>> configuration file on your entire Linux box. Sure, /most/ settings that
>> you could change won't do any harm. However, since the harmless ones are
>> right next to the utterly critical ones, one wrong step can totally
>> floor the system.
>
> You're simply wrong about this.  Been using Linux since the 90s, pretty
> much daily, and *never* have brought a Linux system down *instantly* by
> changing a config file.  N.E.V.E.R.

You've misparsed what I wrote. I meant that getting the Windows registry 
wrong can down Windows instantly. I very much doubt you could do 
anything similar to Linux.

>> (Another thing about the registry:
>> changes can take effect immediately. How many Linux programs "watch"
>> their configuration file(s)?)
>
> Several do.  If I change the configuration file for vsftpd, for example,
> or sshd, the change comes into play the next time a user connects to it.
> xinetd is a wonderful thing.

Interesting. I'm pretty sure I had to send SIGHUP (or whatever it is) 
to sshd to get it to notice that I just turned off password 
authentication...
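(For reference, the reload convention I mean, sketched against this very 
shell rather than a real sshd; the actual daemon gets signalled with 
something like `kill -HUP` on its PID:)

```shell
# Pretend sshd config; a local file so the sketch is harmless
echo "PasswordAuthentication yes" > sshd_config.demo

# On SIGHUP, re-read the config - the daemon-reload convention
trap 'conf=$(cat sshd_config.demo)' HUP

conf=$(cat sshd_config.demo)      # initial read at "startup"
echo "PasswordAuthentication no" > sshd_config.demo

kill -HUP $$                      # signal ourselves; the trap fires
echo "$conf"                      # now shows the new value
```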

[Let's not even get into the fact that the registry is transactional, 
while text files aren't. Or that it supports storing binary blobs 
relatively efficiently...]

>> Ubuntu seems to constantly want me to reboot when I install updates too.
>> I think the problem is more that Windows requires updating more often.
>
> Only if there's a kernel update.  Ubuntu may prompt more frequently
> because it's more convenient and what users coming from Windows are used
> to.

That's just ironic. Doing something defective because that's how Windows 
does it. Ha!

> I've spent a fair amount of time recently installing Windows Server
> 2008R2 and SQL Server 2008R2 for some work I've been doing.  The install
> is smoother than Server 2000 and 2003, I'll grant.  But still, it's in
> the stone ages compared to Linux in terms of reboots.

AFAIK, you boot the CD, do the text-mode bit, reboot into GUI mode, 
reboot one final time, and you're done. That's, like, 2 reboots. Hardly 
excessive...

>> That's just it. Windows is one product, with one set of management
>> tools. The original Unix, as best as I can tell, has almost no
>> management features at all. You're supposed to roll your own. So every
>> major distro builder has built their own independent system of
>> management tools.
>
> The *original* Unix was built in the 60's, and much of what was true for
> that is simply not true today.  That would be like me saying Windows was
> totally insecure because it was with Windows 1.0.  Such a statement would
> be complete bullshit; so is making statements about Linux based on Unix
> developed in the 60's.

It's also true that people write software that targets "Unix". It 
expects standard Unix tools like make, patch, cc and so forth, and it 
builds from source. The original Unix flavour provides all these tools, 
but it doesn't provide much in the way of pre-built, widely standardised 
management features. (Partly, as I presume you're hinting, because when 
Unix was new, PCs didn't exist yet. If you have one computer, what do 
you need remote management for?)

>> If you wanted to compare how easy this is, you can't really compare
>> "Windows" to "Linux". You'd have to compare "Windows" to "Debian",
>> "Ubuntu", "OpenSUSE", "Fedora", ...
>
> In reality, that is certainly true.  Because we're talking about a
> complete system, and "Linux" isn't.  A distribution is.
>
> But then a distribution includes things that Microsoft doesn't include -
> office suites, several gigabytes of other applications, and so on.

That I will grant you. Originally Windows was literally just an OS with 
a text editor. If you wanted to get /anything/ done, you had to pay 
money to install more software. (That's slowly changing of course. Now 
you have a web browser and a movie player and even video editing built 
in, and everybody screaming "monopoly!"...)

>> Windows gives you one standard set of management tools, out of the box.
>> If those tools don't quite cover what you want, you have a slightly
>> harder problem then you would with Unix, but it's hardly intractable.
>
> And Unix/Linux management is hardly intractable either.  But to listen to
> you, it's freakin' impossible - because if you don't know it, it MUST be
> impossible, right?

That isn't what I'm trying to say.

You said "Windows stores everything in the registry, which means you 
can't do any management stuff on it like you can with Linux". I'm 
demonstrating that, no, that's not the case at all. You might not be 
able to grep a text file and run sed over it to effect a configuration 
change, but you also don't /need/ to with Windows. There are other ways 
to reach the same goal - many of them easier than Unix shell scripting...
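(For concreteness, the grep-and-sed style of change I mean, run here 
against a local copy rather than a real /etc/ssh/sshd_config:)

```shell
# Work on a local copy so the sketch is harmless
cat > sshd_config.demo <<'EOF'
# Authentication
PasswordAuthentication yes
PermitRootLogin no
EOF

# The classic scripted config change: match the line, rewrite it in place
sed -i 's/^PasswordAuthentication yes$/PasswordAuthentication no/' sshd_config.demo

grep '^PasswordAuthentication' sshd_config.demo   # prints: PasswordAuthentication no
```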

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Jim Henderson
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 10:32:14
Message: <4e905eee@news.povray.org>
On Sat, 08 Oct 2011 15:05:45 +0100, Orchid XP v8 wrote:

>>> Yeah, they do tend to prioritise ease of use higher than security,
>>> which isn't particularly to my liking. But hey...
>>
>> Ease of use and security often need to be balanced.  More secure =
>> harder to use.  Less secure = easier to use.
> 
> Yeah, I'm aware of that. The other issue is backwards compatibility. If
> you make the system more secure, old software tends to stop working.
> (E.g., to this day Nero only works if you're an Administrator. No
> reason, it's just poorly written.)

Yep, backwards compatibility does become an issue as well.

>>>> UAC is a pain in the ass for advanced users.  It's a necessary
>>>> component for the average user.
>>>
>>> What's UAC? Is that new in Windows 7 or something? (I've only used
>>> Vista.)
>>
>> User Account Control, introduced in Vista IIRC.
> 
> Oh, OK. Maybe I just haven't run across that one yet then. I don't use
> my Vista machine all that much.

IIRC, it's kinda forced upon you.  In Win7, you can disable it pretty 
easily, but it's pretty hard to miss.

>> Hmm, and GUI front-ends for Windows configuration aren't anything more
>> than ways of modifying the registry (or, in the now rare case, an INI
>> file somewhere)?  Sorry, again, you've constructed a straw man.
> 
> It's a mentality difference, not a technological one.

Yes, that is certainly true.

> Under Unix, the primary way to control most software is through
> configuration files. These days Linux has added pretty front-ends to
> some of these systems, but they tend to be designed only for the people
> who aren't smart enough to use the "real" interface - i.e., edit the
> text files manually.

Well, yes and no.  Users of SUSE products (openSUSE and SLE*) often do 
know how to do the manual edits, but prefer using YaST anyways.

> Under Windows, the GUI is the "real" interface. The configuration data
> is stored in the registry, but you're not supposed to edit it directly.

Except for when there's no other way.

> You're supposed to go to the GUI first. Therefore, much more effort is
> put into making the GUI cover everything. (And far less effort is put
> into making the registry data human-readable, or even documented.)

Well, again, yes and no.  On Linux, some developers do put a lot of 
effort into hiding the backend.  Look at compizconfig, for example (not 
simple-ccsm, but the full configuration tool).  It can write its 
configuration to the flat file config file, or to a gconf backend, which 
in some ways mimics a registry.  The intention is that users modify the 
config through the tool and not by editing the files directly.

In openSUSE and SLE, there are in fact several files that are explicitly 
commented with "DO NOT EDIT THIS FILE DIRECTLY".

> It's just about where the developers focus their attention. Under Unix,
> the configuration file is the definitive interface. Under Windows, the GUI
> is. (Or possibly the COM interface. But I don't know much about that
> one...)

Well, again, on Linux it depends.

Just like on Windows it used to depend on whether the developer wrote an 
INI file or used the registry.

>>> Windows programs tend to be designed around the GUI first and
>>> foremost.
>>
>> Well, Microsoft programs tend to be.  Windows programs as a whole -
>> some are, some aren't.  I've seen some pretty crappy designs for
>> Windows UIs in my time.  Blackberry Desktop comes to mind immediately.
> 
> Oh yeah, but /all/ platforms have crappy software. Heck, when I tried
> KLogic, its simulations sometimes GAVE THE WRONG ANSWER. And frequently,
> just tweaking a few lines would crash the program. So much for OSS
> always being of high quality... In truth, crap software exists
> everywhere. The interesting question is where the /good/ software is.

Well, look at OpenOffice or LibreOffice.  Those are not programs designed 
for the geek, they're designed for the casual user.  You can't lump all 
programs on Linux in one category and all programs on Windows in the 
other - there's crossover.

>>>> Why do I want to configure something from the CLI?  My server is
>>>> headless, and dedicating memory to the GUI sucks resources.
>>>
>>> Right. That isn't something that is going to worry the average home
>>> user who's just trying to surf the 'net.
>>
>> You know, the average home user surfing the net doesn't have to do
>> diddly with a Linux box either.
> 
> Sure. But that isn't the point. The point is that how powerful the CLI
> interface is only matters to power users. Which isn't who Windows is
> primarily aimed at.

Yes, but you're arguing about the average home user there.  Detailed 
configuration isn't something they need to worry about on *either* 
platform.

>>> Firing up RegEdit and going to the appropriate key is roughly as easy
>>> as opening up a text editor on your program's configuration file.
>>
>> Firing up regedit isn't "programming".  Navigating through the hives to
>> find the right key is a freaking nightmare.  Even when you know the key
>> you want to navigate to.
> 
> No more nightmarish than navigating to a particular file. You just click
> on a tree view. Just like a file browser.

No need for a file browser with CLI in Linux (though if you want, you can 
use something like mc).  I've navigated the /etc directory on Linux, and 
I've navigated the registry on several versions of Windows (including the 
most recent non-beta releases).  I'll take the /etc directory any day.

>> Give me a text editor and a config file *any day*.  Most of those
>> config files have documentation in the comments.  Show me a full
>> description for what any given registry key does *within regedit*, and
>> then I *might* believe that it's "as easy as editing a config file on
>> Linux".
> 
> Like I said, it's not the primary interface. You're supposed to use it
> only as a last resort. I'll admit I'd like it a lot more if there was
> more documentation for the registry.

And with openSUSE, editing the files directly is a last resort as well, 
with YaST being the preferred and recommended tool.

>> You're simply wrong about this.  Been using Linux since the 90s, pretty
>> much daily, and *never* have brought a Linux system down *instantly* by
>> changing a config file.  N.E.V.E.R.
> 
> You've misparsed what I wrote. I meant that getting the Windows registry
> wrong can down Windows instantly. I very much doubt you could do
> anything similar to Linux.

OK, I guess I did.  Hey, it was 7:15 AM here and I've been up all 
night. ;)

>> Several do.  If I change the configuration file for vsftpd, for
>> example, or sshd, the change comes into play the next time a user
>> connects to it. xinetd is a wonderful thing.
> 
> Interesting. I'm pretty sure I had to send SIG_HUP (or whatever it is)
> to sshd to get it to notice that I just turned off password
> authentication...

Just like in Windows, it depends on the program, and on how long ago 
that was.  You may have noticed that Linux development isn't exactly 
stagnant.
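
For what it's worth, the reload-on-SIGHUP convention is easy to show in 
miniature.  Here's a sketch of the pattern, using a throwaway /tmp file 
rather than any real daemon's config:

```shell
#!/bin/sh
# Toy illustration of the reload-on-SIGHUP pattern daemons like sshd use.
# CONFIG is a throwaway demo file, not a real daemon's config.
CONFIG=/tmp/sighup-demo.conf
echo 'greeting=hello' > "$CONFIG"

reload() {
    # Re-read the config file; a real daemon re-parses its settings here.
    . "$CONFIG"
}

trap reload HUP   # install the handler
reload            # initial load
echo "before: $greeting"

echo 'greeting=goodbye' > "$CONFIG"   # change the config on disk...
kill -HUP $$                          # ...then signal the process, as in:
                                      #   kill -HUP "$(pidof sshd)"
echo "after: $greeting"
```

That prints "before: hello" and then "after: goodbye".  On systemd-based 
distros, `systemctl reload <service>` wraps the same idea (the exact 
service name varies by distro).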

> [Let's not even get into the fact that the registry is transactional,
> while text files aren't. Or that it supports storing binary blobs
> relatively efficiently...]

Transactionality is a function of the filesystem, and I use a journaled 
filesystem.

And I've yet to see a more effective way to store a binary blob than as 
a plain file.  It's just that binary data defeats the primary purpose of 
configuration files: staying readable rather than obscured.
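
And as for transactional updates of text files: the standard Unix idiom 
is to write the new contents to a temporary file and rename it over the 
old one, because a rename within a single filesystem is atomic.  A 
throwaway sketch:

```shell
# Atomic config update: stage the new contents in a temp file, then
# rename it over the target.  rename(2) within a single filesystem is
# atomic, so a reader sees the old file or the new one, never a mix.
CONF=/tmp/atomic-demo.conf
echo 'colour=red' > "$CONF"

printf 'colour=blue\n' > "$CONF.tmp"   # stage the new version
mv -f "$CONF.tmp" "$CONF"              # atomic replace

cat "$CONF"   # now shows colour=blue
```

Package managers and editors that care about crash safety use exactly 
this pattern (often with an fsync before the rename for good measure).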

>>> Ubuntu seems to constantly want me to reboot when I install updates
>>> too. I think the problem is more that Windows requires updating more
>>> often.
>>
>> Only if there's a kernel update.  Ubuntu may prompt more frequently
>> because it's more convenient and what users coming from Windows are
>> used to.
> 
> That's just ironic. Doing something defective because that's how Windows
> does it. Ha!

Sometimes distros choose that route because it's just easier than 
educating the user.  I would prefer if they educated the user instead.

>> I've spent a fair amount of time recently installing Windows Server
>> 2008R2 and SQL Server 2008R2 for some work I've been doing.  The
>> install is smoother than Server 2000 and 2003, I'll grant.  But still,
>> it's in the stone ages compared to Linux in terms of reboots.
> 
> AFAIK, you boot the CD, do the text-mode bit, reboot into GUI mode,
> reboot one final time, and you're done. That's, like, 2 reboots. Hardly
> excessive...

On an openSUSE installation, you boot the DVD, do the installation, and 
then it boots the installed kernel.  In most situations, it doesn't POST 
the machine again before it's up and running.

But then start applying patches on Windows.  To get 2008R2 current, 
that's probably 2-3 more reboots.

Doing the same on my openSUSE boxes, it's one reboot.  Period.  *If* 
there's a kernel update.

>>> That's just it. Windows is one product, with one set of management
>>> tools. The original Unix, as best as I can tell, has almost no
>>> management features at all. You're supposed to roll your own. So every
>>> major distro builder has built their own independent system of
>>> management tools.
>>
>> The *original* Unix was built in the 60's, and much of what was true
>> for that is simply not true today.  That would be like me saying
>> Windows was totally insecure because it was with Windows 1.0.  Such a
>> statement would be complete bullshit; so is making statements about
>> Linux based on Unix developed in the 60's.
> 
> It's also true that people write software that targets "Unix". It
> expects standard Unix tools like make, patch, cc and so forth, and it
> builds from source. The original Unix flavour provides all these tools,
> but it doesn't provide much in the way of pre-built, widely standardised
> management features. (Partly, as I presume you're hinting, because when
> Unix was new, PCs didn't exist yet. If you have one computer, what do
> you need remote management for?)

You clearly haven't looked at how Linux code development is done with 
modern package management.

>>> If you wanted to compare how easy this is, you can't really compare
>>> "Windows" to "Linux". You'd have to compare "Windows" to "Debian",
>>> "Ubuntu", "OpenSUSE", "Fedora", ...
>>
>> In reality, that is certainly true.  Because we're talking about a
>> complete system, and "Linux" isn't.  A distribution is.
>>
>> But then a distribution includes things that Microsoft doesn't include
>> - office suites, several gigabytes of other applications, and so on.
> 
> That I will grant you. Originally Windows was literally just an OS with
> a text editor. If you wanted to get /anything/ done, you had to pay
> money to install more software. (That's slowly changing of course. Now
> you have a web browser and a movie player and even video editing built
> in, and everybody screaming "monopoly!"...)

Slightly different situation when the manufacturer extorts OEMs into 
pre-installing Windows on every machine they ship (and charges for a 
license regardless of whether they ship Windows or not).  That actually 
is an abuse of monopoly power; the US antitrust trial found as much, and 
so did the EC's investigation.

>>> Windows gives you one standard set of management tools, out of the
>>> box. If those tools don't quite cover what you want, you have a
>>> slightly harder problem then you would with Unix, but it's hardly
>>> intractable.
>>
>> And Unix/Linux management is hardly intractable either.  But to listen
>> to you, it's freakin' impossible - because if you don't know it, it
>> MUST be impossible, right?
> 
> That isn't what I'm trying to say.
> 
> You said "Windows stores everything in the registry, which means you
> can't do any management stuff on it like you can with Linux". I'm
> demonstrating that, no, that's not the case at all. You might not be
> able to grep a text file and run sed over it to effect a configuration
> change, but you also don't /need/ to with Windows. There are other ways
> to reach the same goal - many of them easier than Unix shell
> scripting...

You don't seem to understand that Unix shell scripting is just one of a 
variety of tools available.

If you know bash better than PowerShell, then how exactly is bash more 
difficult than PowerShell?

(The converse is also true.)
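
To make the grep-and-sed point concrete, this is the sort of one-liner 
change being talked about.  It runs against a scratch copy in /tmp here, 
not a live /etc file, and note that in-place editing with -i is a GNU 
sed feature:

```shell
# Flip one setting in an sshd_config-style file with sed.
conf=/tmp/sed-demo.conf
printf 'Port 22\nPasswordAuthentication yes\n' > "$conf"

# The pattern anchors on the option name at the start of the line,
# so only that one setting gets rewritten.
sed -i 's/^PasswordAuthentication .*/PasswordAuthentication no/' "$conf"

grep '^PasswordAuthentication' "$conf"
```

The same two-liner scales to a whole fleet of machines with a for loop 
and ssh, which is what makes the text-file approach attractive for 
scripted administration.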

BTW, those management tools we were talking about my former employer 
selling?  Turns out there's actually a free suite as well.  Relatively 
recent release, so I don't know all the details.  But it is cross-
distribution too IIRC.

Jim



From: Jim Henderson
Subject: Re: Is this the end of the world as we know it?
Date: 8 Oct 2011 10:42:47
Message: <4e906167@news.povray.org>
On Sat, 08 Oct 2011 14:44:07 +0100, Orchid XP v8 wrote:

>>> This is something Microsoft has always historically not seemed to
>>> understand.
>>
>> Well, in defense of my friend at Microsoft, he was in the consulting
>> organisation, and they ordered *15,000* laptops from a particular
>> manufacturer just for their consultants.
>>
>> It's hard to understand why people have trouble affording a single hard
>> drive when you buy in such bulk quantities.
> 
> Yeah, I guess that's what it comes down to.

That's probably a significant part of it.

>> But he's a funny guy - actually quite cynical about the tech industry
>> as a whole.  He's called the whole thing a 'scam' for years.
> 
> He's right...
> 
> (And I don't just mean Windows, or Linux. I mean the entire software
> industry.)

He didn't mean just Windows or Linux either.  And he's been around the 
software industry for longer than either of us.

>>> I understand /why/ this happens. It's just frustrating, is all. I
>>> don't see why I should need to install Samba. Why can't I just
>>> install, you know, the GTK+ widgets? It seems to me that Linux
>>> dependency chains are just /way/ too coarse.
>>
>> That's because you've never spent time looking at those
>> interdependencies.
>>
>> After all, on Windows, you have CIFS/SMB available on all systems by
>> default.  You take it for granted on Windows, but for the rest of the
>> world, there is a choice.
> 
> I still don't see why it's necessary to install a network protocol just
> to run a text editor.

That's because you're not grokking the similarities between Windows and 
Linux.

Seriously.

Try installing Notepad on Windows without installing Windows Networking.

Oh, you can't do that.  Why?  Because Windows Networking is an integrated 
component of the operating system.

Guess what - it's also an integrated component of GNOME, because 
interoperability matters.

>>> OK, that's astonishing. Every attempt I've ever made at upgrading an
>>> existing Linux install from one distro release to another has /always/
>>> ended in massive breakage, usually to the point that when I boot the
>>> system the kernel just panics and stops. You would have thought
>>> clicking "upgrade now" and waiting for the progress bar to finish
>>> would work, but noooo...
>>
>> I've upgraded openSUSE from 11.0->11.1->11.2->11.4 (I gave 11.3 a
>> miss).
> 
> Me and my dad tried updating OpenSUSE one time. After several days of
> hell, we decided never to attempt this ever again.

It's a shame you didn't come over to the forums and ask for some help.

>> The worst upgrade hell I've ever heard of, though, was MS' own
>> corporate upgrade from Windows Server 2000 to Windows Server 2003.  I
>> was told they upgraded to each incremental pre-release alpha, beta, and
>> release candidate on several of their internal servers.  It was a
>> nightmare, and the basis of their recommendation to do rip-and-replace
>> upgrades rather than in-place upgrades.
> 
> Uh, yeah. Updating Windows in-place isn't something I'd recommend
> *either*...

I generally wouldn't recommend it for any OS, but it can be a bit easier 
with Linux if your /home partition is separate from the rest of the 
system.  Worst case, you do a fresh install of the root partition and 
leave the /home data alone.

>>> Really, I'd just be happier if I could install just the functionality
>>> that's strictly necessary, rather than installing everything even
>>> remotely related. Linux package managers seem to do a really poor job
>>> of dependency management. (Don't get me started on when one random
>>> program decides it wants a different version of the Linux kernel or
>>> something...)
>>
>> Programs usually don't care about the kernel version, unless they're
>> kernel modules (or provide them).
> 
> Or use features that are built into the kernel. (Stuff like
> cryptographic primitives, sound support, file change monitoring...)

Depends on the ABI in question.  Many of them are fairly stable, but some 
are not.

>> RPM does a pretty good job of dependency management
> 
> Well, some distros use RPM, some use .deb, some use something else
> entirely. I've yet to see a package manager where it's entirely clear
> what the heck is going on, or why selecting one small application
> requires a 2GB download.

Well, again, it comes down to understanding the interdependencies, rather 
than throwing your hands up in the air and saying "it's too damned 
complex for anyone to understand."

>> but you have to take care not to add too many repositories
> 
> I don't even know how to do that.

In openSUSE:

sudo yast2 repositories

>> But, in true Linux fashion, you'll get to choose the 2 remaining
>> fingers. ;)
> 
> LOL.

I figured some humour was called for. :)

>>> Still, the problem escalates to a whole new level if you try to
>>> install something /not/ available from your distro's package manager.
>>> Everybody raves about how great it is that you can install everything
>>> from a big old list. But you can't, of course. There will be packages
>>> that aren't in the list.
>>
>> Actually, with openSUSE's Open Build Service, you can.
> 
> That's a nice idea. I can't comment on whether it works or not (given
> that I've never heard of it before). I guess it doesn't help if you have
> time pressure - but hey, it's free...

It's free, and you can be damned sure it works.  It's been out there for 
a few years.

>>> Under Windows, if you want to install something, you just download it
>>> and install it. Under Linux, you probably have to download a tarball,
>>> work out how to unzip and untar it, figure out where the "install me
>>> now" script is, and then watch as it directs you to install a
>>> different version of gcc, asks where the kernel header files are,
>>> tries to auto-detect the stuff it needs... It almost never works.
>>
>> Certainly if you don't know what you're doing, it almost never works. 
>> If you know what you're doing, then it almost never fails (and when it
>> does, it's usually a dependency version issue or a bug in the code that
>> prevents the compile from happening).
> 
> Last time I tried this with VMware tools, it went something like this:
> - Where are the kernel headers?
> - No, the headers for the *running* kernel?
> - OK, now install gcc please.
> - No, the version of gcc that the kernel was compiled with.
>
> At that point, I discovered that the version of gcc in question isn't
> available for this release of Ubuntu. WTF?

I can't speak to Ubuntu.  openSUSE has a pretty strict "no kernel 
upgrades" policy within a particular version.  (That doesn't mean "no 
updates" - security updates are backported by the openSUSE kernel team, 
and important enhancements frequently are as well AFAIK).  That means 
it's incredibly rare to have to deal with something like that with VMware 
once it's working.

>>> To the point
>>> where which Linux I use on my VM depends mostly on which one has
>>> VMware driver packages provided.
>>
>> VMware provides their own tools, but there are free (as in OSS) tools
>> as well.  ISTR they're included with openSUSE, in fact.
> 
> Yeah, I tried several distros, and some of them just had no VMware
> support at all, some of them you could install packages for VMware as an
> option [not that it explains WTF each package does], and some of them
> automatically installed a bunch of VMware stuff without me even asking.
> It's as if the software somehow "knows" it's running in a VM...
> 
> VMware Tools comes with a script that's supposed to compile and install
> the necessary kernel modules, but I have never, ever seen it work. It
> always fails. Not that I blame them; there are such radical differences
> between distros that targeting all of them looks like a hopelessly
> difficult task.

I've seen it fail, but I've seen it succeed more often than not.

It's a shame you don't ask questions in the Linux forums related to the 
distribution you use.  Those issues are often easily resolved, and 
novices can get help instead of just bitching "this damned stuff never 
works right!"

Jim




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.