"jr" <cre### [at] gmailcom> wrote:
> ...
> ah, found the 'README.*'s -- once I looked in the correct directory. (will get
> back on these in a couple of days)
the 'NEWS' file makes it clear the release is source only, so 'README.bin' could
simply be deleted. the 'README' and 'README.md' files are, essentially,
identical. suggest updating the 'Dependencies' and 'Generating configure..'
sections in 'README.md', and rename that to 'README'. the 'README.unix' too
needs some updating (from v3.6); I think that file will become (more) relevant
again (cf X support).
the 'ChangeLog', 'AUTHORS', 'COPYING', 'NEWS', and 'README*' files ought to
"migrate" to the archive's top-level directory.
regards, jr.
On 05.08.2021 at 12:05, jr wrote:
> the 'ChangeLog', 'AUTHORS', 'COPYING', 'NEWS', and 'README*' files ought to
> "migrate" to the archive's top-level directory.
That's where they currently are in the tarball, are they not?
hi,
clipka <ano### [at] anonymousorg> wrote:
> On 05.08.2021 at 12:05, jr wrote:
>
> > the 'ChangeLog', 'AUTHORS', 'COPYING', 'NEWS', and 'README*' files ought to
> > "migrate" to the archive's top-level directory.
>
> That's where they currently are in the tarball, are they not?
I was looking at the /unix/ files (only). duplicates can just be deleted. the
'README.md' in the top-level dir is newer, still suggest that should become
"the" new 'README'; then only 'README.unix' needs moving up.
regards, jr.
On 05.08.2021 at 16:31, jr wrote:
> 'README.md' in the top-level dir is newer, still suggest that should become
> "the" new 'README'; then only 'README.unix' needs moving up.
No, not really. `README.md` is specifically aimed at someone looking at
the entire repository package (or, even more to the point, someone
looking at the repository on GitHub).
The `README` in the Unix-specific package should be aimed specifically
at someone looking at that particular tarball.
hi,
clipka <ano### [at] anonymousorg> wrote:
> On 05.08.2021 at 16:31, jr wrote:
>
> > 'README.md' in the top-level dir is newer, still suggest that should become
> > "the" new 'README'; then only 'README.unix' needs moving up.
>
> No, not really. `README.md` is specifically aimed at someone looking at
> the entire repository package (or, even more to the point, someone
> looking at the repository on GitHub).
then, surely, it should be on github, and not in the archive; a paragraph in the
'README' with repository url would suffice (to my thinking).
> The `README` in the Unix-specific package should be aimed specifically
> at someone looking at that particular tarball.
agree. and that tarball will already have been downloaded. so I'd be looking
for intro/overview + general instructions - only.
anyway, the whole thing makes me wonder why you .. bother. *NIX-ness is not a
high priority for you, I feel, so why even have tarballs? would just "git
clone" not be preferential?
(this rant is "tainted" -- probably -- by your mentioning that even the POV-Ray
development code resides "in the cloud" now rather than on own(ed) server(s))
regards, jr.
On 05.08.2021 at 18:02, jr wrote:
> anyway, the whole thing makes me wonder why you .. bother. *NIX-ness is not a
> high priority for you, I feel, so why even have tarballs? would just "git
> clone" not be preferential?
I think that's a severe misunderstanding.
My *NIX expertise is very low, that's why I'm not doing much for *NIX.
That doesn't mean I don't care. It just means I can't do.
> (this rant is "tainted" -- probably -- by your mentioning that even the POV-Ray
> development code resides "in the cloud" now rather than on own(ed) server(s))
It has been ever since we moved to a Git-based solution, right at the
time of the 3.7.0.0 release.
And to be frank, the GitHub infrastructure around the repo has been
quite an asset in the development work ever since. With the manpower
available to us, it would have been impossible to set up (let alone
maintain!) anything even remotely like it on our own turf.
As a matter of fact, *NIX-ness might have been the feature to benefit
most. With the dev team stocked pretty much with pure Windows jockeys,
automated test builds were the only thing that had us on our toes
regarding *NIX-incompatibilities. Setting up such facilities on our own
would have required quite the effort.
Automated test builds also helped a lot to get us through the times when
C++11 and clang both started to see widespread use, sending additional
ripples across the boost library and opening up new portability pitfalls
due to incompatibilities between ever-shifting boost versions, certain
constructs that turned out to no longer work in C++11, and the like.
With us Windows jockeys having no (or only cumbersome) access to a truly
C++11-compatible (let alone C++11-strict) development environment back
then, the availability of not just one but three(!) independent
free-for-open-source hosted build test services was invaluable in keeping
POV-Ray compatible with the fast-changing world of C++, both the
established and the emerging.
We also got some feedback and contributions via GitHub that we might not
have gotten otherwise. Certainly not on as large a scale as in CompuServe
times, but still.
Among those who got in touch with us were the folks who maintain the
"homebrew" packages to provide *NIX software for MacOS. Which put
official MacOS compatibility back on the menu, after it had already
dropped off the back of the truck in the years prior.
The issue tracker also proved useful, if only because it meant we no
longer had to waste time keeping spammers out of our self-hosted bug
tracker.
And the fact that *NIX tarballs are back on the menu is also courtesy of
GitHub: now that we have migrated all the automated build tests from
3rd-party services to GitHub's own new offering, we found that we could
easily do additional stuff whenever we auto-build Windows binaries. Even
stuff that requires a *NIX machine to run. (Or a MacOS machine, for that
matter, but that's not a thing that has manifested so far.) So we added
*NIX tarballs to that build process.
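Conceptually, the tarball step itself is trivial; it boils down to
something like this (a sketch, mind you, not our actual workflow):

    # run on a GitHub-hosted Linux machine once the build tests pass
    git archive --format=tar.gz --prefix=povray-src/ -o povray-src.tar.gz HEAD
    # then upload povray-src.tar.gz as a release asset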
And a developers' manual, just because we could. I had already set up a
few scripts and configs for that purpose years ago for my own use, but
never got around to setting up a channel for publishing the generated
documents.
Which is another boon of GitHub: It is so much easier to set up a new
release there, with any arbitrary set of associated downloadables, than
it would be on our own web server.
Which is what has gotten you folks each and every alpha release since
v3.7.0.0. I have no access to the web server to bundle up and publish
releases there, and I wouldn't have dared to bother Chris with anything
other than betas or better. Let alone that it would have taken a couple
of days minimum (if not weeks) for each such release to eventually make
it onto some download page.
I wouldn't even have seen the benefit of such releases in the first
place. It was more a matter of, "hey, we can do this on a regular basis
with almost zero effort, so why not."
And I won't even mention the occasional experimental build, such as the
OpenType support builds.
Even whether beta.1 would be out yet, without GitHub's ease of deploying
software, is anybody's guess. It might still be in the pipeline between
me and Chris Cason. Or I might still be procrastinating about even
actually running the build process on my local machine. Having GitHub
run it is so much easier and leaves far less room for PEBCAK errors,
once the process has been set up.
There's no fault in being somewhat suspicious of 3rd party services like
GitHub. But let that not blind you to their benefits.
Division of labour has been one of the most efficient strategies in the
history of humanity, and this is just another application of that principle.
hi,
clipka <ano### [at] anonymousorg> wrote:
> On 05.08.2021 at 18:02, jr wrote:
>
> > anyway, the whole thing makes me wonder why you .. bother. *NIX-ness is not a
> > high priority for you, I feel, so why even have tarballs? would just "git
> > clone" not be preferential?
>
> I think that's a severe misunderstanding.
>
> My *NIX expertise is very low, that's why I'm not doing much for *NIX.
>
> That doesn't mean I don't care. It just means I can't do.
glad to read this. (very) probably I read too much into things. (like your
post re declare=X=Y syntax, where you wrote using a "faux Windows command line";
cannot imagine a shortage of (computing) resources on your side, so wondered:
why isn't he using a VM with a BSD or some Linux, where a "real" command line
would be at hand?)
> > (this rant is "tainted" -- probably -- by your mentioning that even the POV-Ray
> > development code resides "in the cloud" now rather than on own(ed) server(s))
>
> It has been ever since we moved to a Git-based solution, right at the
> time of the 3.7.0.0 release.
> ...
> We also got some feedback and contributions via GitHub that we might not
> have gotten otherwise. Certainly not on as large a scale as in CompuServe
> times, but still.
>
> Among those who got in touch with us were the folks who maintain the
> "homebrew" packages to provide *NIX software for MacOS. Which put
> official MacOS compatibility back on the menu, after it had already
> dropped off the back of the truck in the years prior.
additional channels of communication are useful.
> ...
> And a developers' manual, just because we could. I had already set up a
> few scripts and configs for that purpose years ago for my own use, but
> never got around to setting up a channel for publishing the generated
> documents.
moot, of course, but wonder whether publishing that to the wiki would have been
so very different (time+effort-wise).
> ...
> There's no fault in being somewhat suspicious of 3rd party services like
> GitHub. But let that not blind you to their benefits.
personally, my main "beef" with such sites is that in order to orient myself and
look at some info, my browser has to divulge all sorts of info about the system
it's running on. and a requirement to create an account just to comment/add an
issue? when a captcha to confirm it's not a bot would do. (the IP address is
logged anyway)
> Division of labour has been one of the most efficient strategies in the
> history of humanity, and this is just another application of that principle.
sure. true also of alternatives, like eg 'fossil'.
regards, jr.
On 06.08.2021 at 15:43, jr wrote:
>> My *NIX expertise is very low, that's why I'm not doing much for *NIX.
>>
>> That doesn't mean I don't care. It just means I can't do.
>
> glad to read this. (very) probably I read too much into things. (like your
> post re declare=X=Y syntax, where you wrote using a "faux Windows command line";
> cannot imagine a shortage of (computing) resources on your side, so wondered:
> why isn't he using a VM with a BSD or some Linux, where a "real" command line
> would be at hand?)
I did resort to working with a VM for some time for testing code for
*NIX compatibility, but given my low knowledge level of the *NIX world I
found it rather a hassle compared to my Windows jockey mouse-pusher
comfort zone. Particularly the process of transferring the
version-to-test to the test VM kept bothering me.
Nowadays I have the Windows Subsystem for Linux at my disposal if need
be, which feels a bit more integrated. For instance, I can pop up an
Ubuntu terminal from Windows Explorer, taking me straight to a
specific directory on my Windows file system.
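To illustrate (paths made up): WSL mounts the Windows drives under /mnt,
so launching it from a source directory drops me right there:

    C:\src\povray> wsl
    $ pwd
    /mnt/c/src/povray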
It still is a thing I try to avoid though.
So if you report an error with POV-Ray's *NIX command line interface, why
wouldn't I test it with the Windows pseudo-command line first? If the
error is there as well, chances are I can fix it from within my comfort
zone, and don't need to add stress to my life by taking another stroll
in the *NIX world.
Even if the error I find should turn out to be located in the
Windows-specific portion of the code, chances are the problem with the
Unix side of things is very similar in nature. So even if I can't avoid
a detour through the *NIX world entirely, I'll need to spend less time
there because I already know what I'm about to encounter there.
>> And a developers' manual, just because we could. I had already set up a
>> few scripts and configs for that purpose years ago for my own use, but
>> never got around to setting up a channel for publishing the generated
>> documents.
>
> moot, of course, but wonder whether publishing that to the wiki would have been
> so very different (time+effort-wise).
Abso-bloody-lutely.
The Wiki is primarily designed to host content entered manually via the
Wiki interface itself.
I presume there are also channels for bulk uploads of content, but
they'd have to be in a format supported by the Wiki.
The developers' manual is currently a guesstimated 90% automatically
generated from the source code, and 10% manually edited information,
compiled by a dedicated tool that generates either a suite of
full-fledged HTML pages or a single PDF.
To cram that content into the Wiki would have required writing import
scripts. And I have 0 - zero, zilch - knowledge about the Wiki import
format, while presumably Jim has just as much knowledge about the format
generated by said tool (beyond the fact that it's HTML, of course).
Also, the Wiki is not really designed to handle information that may
change over time.
That's already a bit of a hassle when it comes to changes to the user
manual as new features are added to the scene language, or limitations
are lifted, or some such.
The information in the developers' manual can be even more
version-specific, especially in times where I'm again doing one of my
refactoring sprees where I re-arrange parts of the internal
architecture, or throw away and re-write entire sections of the code.
>> There's no fault in being somewhat suspicious of 3rd party services like
>> GitHub. But let that not blind you to their benefits.
>
> personally, my main "beef" with such sites is that in order to orient myself and
> look at some info, my browser has to divulge all sorts of info about the system
> it's running on.
Does it though? Or is it just set to freely do that, and the other end
takes the liberty to make use of that?
If you were to bar your browser from divulging that information, would
you really hit a brick wall?
> and a requirement to create an account just to comment/add an
> issue? when a captcha to confirm it's not a bot would do. (the IP address is
> logged anyway)
It's been ages since I've seen an issue reporting page that doesn't
require you to at least disclose your e-mail address. Which is all for
the better in my opinion, because as a developer I want a chance to
contact the issue reporters, in case I have further questions.
And a system hosting hundreds - nay, thousands upon thousands - of
projects, some of which carry big names? Nope, just a captcha won't cut
it. Even if you can effectively protect against spam that way (which I
doubt for sites of a certain magnitude), you couldn't protect against
trolling. And in such a scenario, trolling would happen, period. Maybe
not to all projects, but to some. Especially the high-profile ones.
(Also, I for one am repulsed by sites that make me count traffic lights
or bridges or whatever just to get access to them.)
>> Division of labour has been one of the most efficient strategies in the
>> history of humanity, and this is just another application of that principle.
>
> sure. true also of alternatives, like eg 'fossil'.
Now I think you're confusing the technological platform (such as Git)
with a particular service based on that platform (such as GitHub).
Back when we made the decision, we had our reasons to pick Git as the
technological platform, and once that had been settled, to choose GitHub
as the hosting service. I can elaborate why those two specifically, if
you are interested.
The main point though is that the decision has been made. Whether it is
the _perfect_ choice is a moot question: Unless some pressing need crops
up, the hassle of migrating even to a different hosting service - let
alone an entirely different technological platform - far outweighs any
benefit such a move might provide in the long run. Let alone that other
hosting services and platforms also tend to have their downsides.
hi,
clipka <ano### [at] anonymousorg> wrote:
> On 06.08.2021 at 15:43, jr wrote:
> ...
>> [not] using a VM with a BSD or some Linux...
>
> I did resort to working with a VM for some time for testing code for
> *NIX compatibility, but given my low knowledge level of the *NIX world I
> found it rather a hassle compared to my Windows jockey mouse-pusher
> comfort zone. Particularly the process of transferring the
> version-to-test to the test VM kept bothering me.
directory trees/drives can be shared.
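e.g. with VirtualBox (one option of several; the VM name and paths here
are made up):

    VBoxManage sharedfolder add "test-vm" --name src --hostpath "C:\src\povray"
    # then, inside the guest (needs the guest additions installed):
    sudo mount -t vboxsf src /mnt/src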
> Nowadays I have the Windows Subsystem for Linux at my disposal if need
> be, which feels a bit more integrated. For instance, I can pop up an
> Ubuntu terminal from Windows Explorer, taking me straight to a
> specific directory on my Windows file system.
>
> It still is a thing I try to avoid though.
</shakes-head> :-)
> ... So even if I can't avoid
> a detour through the *NIX world entirely, I'll need to spend less time
> there because I already know what I'm about to encounter there.
don't really agree. for instance, there never has been a functional, let alone
conceptual, equivalent of X on MS Windows. (was glad to read (in 'INSTALL'?)
that the preview X window proper may be brought back)
>>> And a developers' manual...
>> moot, of course, but wonder whether publishing that to the wiki would have been
>> so very different (time+effort-wise).
>
> Abso-bloody-lutely.
>
> The Wiki is primarily designed to host content entered manually via the
> Wiki interface itself.
>
> I presume there are also channels for bulk uploads of content, but
> they'd have to be in a format supported by the Wiki.
>
> The developers' manual is currently a guesstimated 90% automatically
> generated from the source code, and 10% manually edited information,
> compiled by a dedicated tool that generates either a suite of
> full-fledged HTML pages or a single PDF.
>
> To cram that content into the Wiki would have required writing import
> scripts. And I have 0 - zero, zilch - knowledge about the Wiki import
> format, while presumably Jim has just as much knowledge about the format
> generated by said tool (beyond the fact that it's HTML, of course).
well, adding a link to the "suite of full-fledged HTML pages" certainly would
not be too .. taxing.
agree on editing aspects etc, though that doesn't apply to generated stuff.
>> ... my browser has to divulge all sorts of info about the system
>> it's running on.
>
> Does it though? Or is it just set to freely do that, and the other end
> takes the liberty to make use of that?
>
> If you were to bar your browser from divulging that information, would
> you really hit a brick wall?
is beside the point. I think that the onus is on the site to only ask for what
is needed. personally, I take what I need, I do not grab more/everything just
because I can, and expect (of course) the same from others.
my "solution" to this is to use a dedicated machine[*] for all browsing.
[*] a Chromebook, so I have an option to "powerwash" if needed.
>> ... captcha ...
>
> It's been ages since I've seen an issue reporting page that doesn't
> require you to at least disclose your e-mail address. Which is all for
> the better in my opinion, because as a developer I want a chance to
> contact the issue reporters, in case I have further questions.
sure, email would be ok (even with verification ;-)).
>> sure. true also of alternatives, like eg 'fossil'.
>
> Now I think you're confusing the technological platform (such as Git)
> with a particular service based on that platform (such as GitHub).
perhaps, though it seems comparable to me.
regards, jr.
On 07.08.2021 at 09:23, jr wrote:
>>>> And a developers' manual...
...
>> To cram that content into the Wiki would have required writing import
>> scripts. And I have 0 - zero, zilch - knowledge about the Wiki import
>> format, while presumably Jim has just as much knowledge about the format
>> generated by said tool (beyond the fact that it's HTML, of course).
>
> well, adding a link to the "suite of full-fledged HTML pages" certainly would
> not be too .. taxing.
It still requires putting that army of HTML pages _somewhere_ first.
Did I mention that I do not have access to the POV-Ray web server? Nor
do I really care to get such access. It's not the kind of work I want to
put on my plate. I'm no good at it.
>>> sure. true also of alternatives, like eg 'fossil'.
>>
>> Now I think you're confusing the technological platform (such as Git)
>> with a particular service based on that platform (such as GitHub).
>
> perhaps, though it seems comparable to me.
Not really.
Technically, they might not differ that much.
In terms of how widespread they are though (and therefore how familiar
potential contributors might be with them, and how easy it might be to
find a compatible tool that they would feel comfortable using), Git and
Fossil are worlds apart.
You can't take two steps in open source development these days without
stumbling across Git. There's not a single modern development tool out
there without at least one Git plug-in (except for tools that have Git
support already built in, or don't provide any plug-in interface at
all). And the choice of tools for Git is so plentiful that it does
include quite a few that cater to people who don't have their brain
wired "the proper git way".
Fossil? The first time I've ever heard of it was when you just brought
it up.