On 06.08.2021 at 15:43, jr wrote:
>> My *NIX expertise is very low, that's why I'm not doing much for *NIX.
>>
>> That doesn't mean I don't care. It just means I can't do.
>
> glad to read this. (very) probably I read too much into things. (like your
> post re declare=X=Y syntax, where you wrote using a "faux Windows command line";
> cannot imagine a shortage of (computing) resource on your side, so wondered why
> isn't he using a VM with a BSD or some Linux, then a "real" command-line would
> be at hand)
I did resort to working with a VM for some time for testing code for
*NIX compatibility, but given my low knowledge level of the *NIX world I
found it rather a hassle compared to my Windows jockey mouse-pusher
comfort zone. Particularly the process of transferring the
version-to-test to the test VM kept bothering me.
Nowadays I have the Windows Subsystem for Linux at my disposal if needs
be, which feels a bit more integrated. For instance, I can pop up an
Ubuntu terminal from my Windows explorer, taking me straight to a
specific directory on my Windows file system.
It still is a thing I try to avoid though.
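To illustrate that integration (the exact behaviour depends on the WSL version installed, and the directory path below is a made-up example, not one from my setup):

```
REM Launch the default WSL distro directly in a given Windows directory
REM (supported by recent versions of wsl.exe):
wsl.exe --cd "C:\Users\me\projects\povray"

REM Alternatively, typing `wsl` into the Windows Explorer address bar
REM opens a shell in the folder currently being viewed.
```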
So if you see an error with POV-Ray's *NIX command line interface, why
wouldn't I test it with the Windows pseudo-command line first? If the
error is there as well, chances are I can fix it from within my comfort
zone, and don't need to add stress to my life by taking another stroll
in the *NIX world.
Even if the error I find should turn out to be located in the
Windows-specific portion of the code, chances are the problem with the
Unix side of things is very similar in nature. So even if I can't avoid
a detour through the *NIX world entirely, I'll spend less time
there because I'll already know what to expect.
>> And a developers' manual, just because we could. I had already set up a
>> few scripts and configs for that purpose years ago for my own use, but
>> never got around to setting up a channel for publishing the generated
>> documents.
>
> moot, of course, but wonder whether publishing that to the wiki would have been
> so very different (time+effort-wise).
Abso-bloody-lutely.
The Wiki is primarily designed to host content entered manually via the
Wiki interface itself.
I presume there are also channels for bulk uploads of content, but
they'd have to be in a format supported by the Wiki.
The developers' manual is currently a guesstimated 90% automatically
generated from the source code, and 10% manually edited information,
compiled by a dedicated tool that generates either a suite of
full-fledged HTML pages or a single PDF.
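The tool itself isn't named here, but just to illustrate the kind of setup, a Doxygen-style configuration can target both output formats from one source tree. All settings below are assumptions for illustration, not the project's actual configuration:

```
# Hypothetical Doxygen-style settings; the actual POV-Ray tooling may differ.
EXTRACT_ALL     = YES   # pull API documentation out of the source code itself
GENERATE_HTML   = YES   # emit a suite of cross-linked HTML pages
GENERATE_LATEX  = YES   # emit LaTeX sources ...
USE_PDFLATEX    = YES   # ... which can then be compiled into a single PDF
```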
To cram that content into the Wiki would have required writing import
scripts. And I have 0 - zero, zilch - knowledge about the Wiki import
format, while presumably Jim has just as much knowledge about the format
generated by said tool (beyond the fact that it's HTML, of course).
Also, the Wiki is not really designed to handle information that may
change over time.
That's already a bit of a hassle when it comes to changes to the user
manual as new features are added to the scene language, limitations
are lifted, or some other such change is made.
The information in the developers' manual can be even more
version-specific, especially in times where I'm again doing one of my
refactoring sprees where I re-arrange parts of the internal
architecture, or throw away and re-write entire sections of the code.
>> There's no fault in being somewhat suspicious of 3rd party services like
>> GitHub. But let that not blind you to their benefits.
>
> personally, my main "beef" with such sites is that in order to orient myself and
> look at some info, my browser has to divulge all sorts of info about the system
> it's running on.
Does it though? Or is it just set to freely do that, and the other end
takes the liberty to make use of that?
If you were to bar your browser from divulging that information, would
you really hit a brick wall?
> and a requirement to create an account just to comment/add an
> issue? when a captcha to confirm it's not a bot would do. (the IP address is
> logged anyway)
It's been ages since I've seen an issue reporting page that doesn't
require you to at least disclose your e-mail address. Which is all for
the better in my opinion, because as a developer I want a chance to
contact the issue reporters, in case I have further questions.
And a system hosting hundreds - nay, thousands upon thousands - of
projects, some of which carry big names? Nope, just a captcha won't cut
it. Even if you can effectively protect against spam that way (which I
doubt for sites of a certain magnitude), you couldn't protect against
trolling. And in such a scenario, trolling would happen, period. Maybe
not to all projects, but to some. Especially the high-profile ones.
(Also, I for one am repulsed by sites that make me count traffic lights
or bridges or whatever just to get access to them.)
>> Division of labour has been one of the most efficient strategies in the
>> history of humanity, and this is just another application of that principle.
>
> sure. true also of alternatives, like eg 'fossil'.
Now I think you're confusing the technological platform (such as Git)
with a particular service based on that platform (such as GitHub).
Back when we made the decision, we had our reasons to pick Git as the
technological platform, and once that had been settled, to choose GitHub
as the hosting service. I can elaborate why those two specifically, if
you are interested.
The main point though is that the decision has been made. Whether it is
the _perfect_ choice is a moot question: Unless some pressing need crops
up, the hassle of migrating even to a different hosting service - let
alone an entirely different technological platform - far outweighs any
benefit such a move might provide in the long run. Not to mention that
other hosting services and platforms tend to have downsides of their own.