On 2/8/2016 9:33 PM, Jim Holsenback wrote:
> On 2/1/2016 8:52 AM, Stephen wrote:
>> There has been some talk about how the search function in the Help
>> file is broken.
>> I had forgotten about this workaround until I used it without thinking.
>> When you search for a keyword and Help takes you to the top of a large
>> page, left-click on the right-hand pane and press Ctrl+F to bring up a
>> search box. There you can search for your keyword.
>>
>
> Just to be clear: I'm responding to the head of this thread so as not to
> have it taken that I'm commenting on anyone in particular.
>
> I'm absolutely certain the problem is all the indexentry tags that DID
> NOT get converted to the new format imposed at the last minute.
>
> I'm going to go ahead and step back from this.
Attached is a gzip archive which contains PHP scripts, notes and results
that show the scope of the problem. No images are included, to reduce the
payload of the archive.
createSubIndex.php scans the files in the directory tagged (these files are
un-post-processed Windows docs pulled from the wiki) and writes to the
directory indexed. The .txt files are the break-outs. Of special interest is
progress.txt: as you can see, there are more than 1500 tags that need to be
converted in the wiki mark-up.
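The scan that createSubIndex.php performs isn't reproduced here, but its core idea can be sketched roughly as follows — in Python rather than the original PHP, with the tag pattern and the "tagged" directory taken from this thread, and the function names and tally shape being assumptions:

```python
import re
from pathlib import Path

# Illustrative stand-in for part of createSubIndex.php: find the
# {{#indexentry:...}} tags in wiki mark-up and tally them per file.
TAG_RE = re.compile(r"\{\{#indexentry:(.*?)\}\}")

def find_index_tags(text):
    """Return the raw body of every indexentry tag in one document."""
    return TAG_RE.findall(text)

def tally_directory(tagged_dir):
    """Map each file name under tagged_dir to its tag count -- the kind
    of per-file break-out that progress.txt summarizes."""
    return {p.name: len(find_index_tags(p.read_text()))
            for p in Path(tagged_dir).iterdir() if p.is_file()}
```

Summing the values of the dictionary that `tally_directory` returns would give the kind of total (> 1500) reported above.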
Attachments:
Download 'subject-index.tar.gz' (1442 KB)
On 2/13/2016 1:02 PM, Jim Holsenback wrote:
> On 2/8/2016 9:33 PM, Jim Holsenback wrote:
> createSubIndex.php scans the files in the directory tagged (these files are
> un-post-processed Windows docs pulled from the wiki) and writes to the
> directory indexed. The .txt files are the break-outs. Of special interest is
> progress.txt: as you can see, there are more than 1500 tags that need to be
> converted in the wiki mark-up.
Several more things ...
- Unpack the archive, preserving the directory structure, and work in
subject-index. I've "grouped" the tag types. createSubIndex.php has some
hints (comments in the code), and so do the .txt filenames that I used.
- I figured that if I could filter the tags, I could convert them to a landing
spot from the index. The simpler (and some of the compound) tag forms have
been done, minus the conversion to a landing place.
- Compound tags are a single landing place for more than one topic from
the index.
- I've discovered that some of the compound tags (in the mark-up) are
malformed; that is, they aren't formatted correctly, or their intent is
not clear. Some of the more complex tags can be made simpler without
sacrificing intent. Finding them and cleaning them up (on the wiki) is
the biggest task.
- Once the mark-up has been fixed and all tag types can be parsed
correctly, it's easy enough to reformat them into a landing place.
... I'm sure there's more.
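To illustrate the kind of filtering described above, here is a hypothetical check for malformed compound tags. The real malformation patterns are documented in the .txt break-outs in the archive, so the rules below are only guesses, and the sketch is Python rather than the project's PHP:

```python
def classify_tag(body):
    """Classify the body of one {{#indexentry:...}} tag.

    Assumed rules, for illustration only: a pipe-separated body is a
    compound tag, and any empty keyword (a leading, trailing, or doubled
    pipe, or an empty body) marks the tag as malformed.
    """
    parts = [p.strip() for p in body.split("|")]
    if any(p == "" for p in parts):
        return "malformed"
    return "compound" if len(parts) > 1 else "simple"
```

Tags flagged as "malformed" by a filter like this would be the ones worth checking by hand on the wiki.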
On 13.02.2016 at 19:02, Jim Holsenback wrote:
>> I'm going to go ahead and step back from this.
>
> Attached is a gzip archive which contains PHP scripts, notes and results
> that show the scope of the problem. No images are included, to reduce the
> payload of the archive.
I'm afraid that to me, who has virtually no PHP experience whatsoever,
the contents of that archive don't give many clues.
First of all, what is the nature of the work to do? I previously had the
impression that it was a matter of modifying the Wiki content to fit an
existing conversion process, but this seems more like modifying an
existing conversion process to fit the Wiki content, right?
But somehow I think I fail to see the actual problem. After all, it
seems to be just a matter of replacing everything that matches
`{{#indexentry:ANY TEXT}}` with `<a name="ndxntry_NUMBER"
id="ndxntry_NUMBER"></a>`, while compiling a list(*) of mappings between
keywords (as specified by the `ANY TEXT` portion) and the respective
replacement hypertext anchors (as given by `ndxntry_NUMBER`).
(* Or so I presume; I haven't been able to identify that generated list
yet. I'd have expected the process to generate a .hhk file, but that
doesn't seem to be the case.)
Analyzing the `ANY TEXT` portion to catch the special cases seems to be
the tricky part, but even that doesn't look too difficult. At present I
only see the `Keyword1|Keyword2|...` and `Keyword, Section` cases that
need handling. The former should be easy: Just generate multiple
separate entries in the mapping file. The latter -- well, that depends
on the format of the file that needs to be generated.
Am I missing something fundamental here?
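A rough sketch of that substitution (in Python; the tag pattern and the `ndxntry_NUMBER` anchor scheme come from the post above, while the function shape and the mapping's format are assumptions — the `Keyword, Section` case is deliberately left as a single entry, pending the format of the file to be generated):

```python
import re

TAG_RE = re.compile(r"\{\{#indexentry:(.*?)\}\}")

def convert(text, start=0):
    """Replace each indexentry tag with a numbered anchor, collecting
    (keyword, anchor-id) pairs along the way.

    The Keyword1|Keyword2|... case gets one mapping entry per keyword;
    a "Keyword, Section" body is kept whole here, since its final shape
    depends on the index-file format.
    """
    mapping = []
    counter = [start]

    def repl(match):
        anchor = "ndxntry_%d" % counter[0]
        counter[0] += 1
        for keyword in match.group(1).split("|"):
            mapping.append((keyword.strip(), anchor))
        return '<a name="%s" id="%s"></a>' % (anchor, anchor)

    return TAG_RE.sub(repl, text), mapping
```

The collected mapping is the raw material for whatever index file (a .hhk or otherwise) the post-processing step needs.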
On 2/23/2016 9:11 PM, clipka wrote:
> On 13.02.2016 at 19:02, Jim Holsenback wrote:
>
>>> I'm going to go ahead and step back from this.
>>
>> Attached is a gzip archive which contains PHP scripts, notes and results
>> that show the scope of the problem. No images are included, to reduce the
>> payload of the archive.
>
> I'm afraid that to me, who has virtually no PHP experience whatsoever,
> the contents of that archive don't give many clues.
>
> First of all, what is the nature of the work to do? I previously had the
> impression that it was a matter of modifying the Wiki content to fit an
> existing conversion process, but this seems more like modifying an
> existing conversion process to fit the Wiki content, right?
>
>
> But somehow I think I fail to see the actual problem. After all, it
> seems to be just a matter of replacing everything that matches
> `{{#indexentry:ANY TEXT}}` with `<a name="ndxntry_NUMBER"
> id="ndxntry_NUMBER"></a>`, while compiling a list(*) of mappings between
> keywords (as specified by the `ANY TEXT` portion) and the respective
> replacement hypertext anchors (as given by `ndxntry_NUMBER`).
>
> (* Or so I presume; I haven't been able to identify that generated list
> yet. I'd have expected the process to generate a .hhk file, but that
> doesn't seem to be the case.)
>
> Analyzing the `ANY TEXT` portion to catch the special cases seems to be
> the tricky part, but even that doesn't look too difficult. At present I
> only see the `Keyword1|Keyword2|...` and `Keyword, Section` cases that
> need handling. The former should be easy: Just generate multiple
> separate entries in the mapping file. The latter -- well, that depends
> on the format of the file that needs to be generated.
>
>
> Am I missing something fundamental here?
>
I was NEVER responsible for the Windows documentation (CHM version); I
just produced an HTML version (from the wiki) that contained the indexentry
tags as they appeared in the wiki mark-up and passed it on to Chris. He
did some post-processing that converted it to CHM and also produced the
searchable index. In the process of attempting to unravel this, I discovered
that the markup has more than a few indexentry tags that need to be
checked / fixed. The main PHP script in the archive I attached (earlier)
was just a tool to help quantify the problem and identify where the
offending tags are in the markup. Once that has been addressed, I'd
imagine the post-processing code would need to be looked at as well.
That brings to mind the question of why CHM at all ... it's been obsolete
for some time and only exists as a legacy format now. At any rate, I got
tired of being the only person doing /any/ of the grunt work necessary to
get this fixed. I don't use the Windows version, so this just dropped off my
radar. I'm getting plenty of mileage out of the stand-alone Unix-version
docs. When I want to find something quickly, I just go to the 3.3.1.2
Keywords section.
On 24.02.2016 at 14:41, Jim Holsenback wrote:
> On 2/23/2016 9:11 PM, clipka wrote:
>> Am I missing something fundamental here?
>>
>
> I was NEVER responsible for the Windows documentation (CHM version); I
> just produced an HTML version (from the wiki) that contained the indexentry
> tags as they appeared in the wiki mark-up and passed it on to Chris. He
> did some post-processing that converted it to CHM and also produced the
> searchable index.
That does sound fundamental indeed.
> That brings to mind the question of why CHM at all ... it's been obsolete
> for some time and only exists as a legacy format now.
That's pretty simple to answer: It's what POV-Ray for Windows has been
using for its inbuilt help for quite a while; switching to any
alternative would require changing the program code -- and in the case
of the official alternative the input to the help file compiler would be
exactly the same, so it wouldn't solve the problem at all.
As a matter of fact, the newest official "alternative" to CHM seems to
be to have no context-sensitive help at all, or to roll your own.
> At any rate, I got tired of being the only
> person doing /any/ of the grunt work necessary to get this fixed.
If the goal is to repair stuff in the Wiki, I wonder whether there is
anyone else /able/ to do that "grunt work" with reasonable effort. After
all, it seems like something that can be automated -- but that certainly
requires "bulk" access to the Wiki, which you seem to have. I presume
Chris has this level of access, too, but we all know he doesn't have
much time to spare.
> I
> don't use the Windows version, so this just dropped off my radar. I'm
> getting plenty of mileage out of the stand-alone Unix-version docs. When I
> want to find something quickly, I just go to the 3.3.1.2 Keywords section.
Still not as elegant as placing the cursor on a keyword and pressing "F1".
On 2/24/2016 8:41 AM, Jim Holsenback wrote:
> That brings to mind the question of why CHM at all ... it's been obsolete
> for some time and only exists as a legacy format now. At any rate, I got
> tired of being the only person doing /any/ of the grunt work necessary to
> get this fixed. I don't use the Windows version, so this just dropped off
> my radar. I'm getting plenty of mileage out of the stand-alone Unix-version
> docs. When I want to find something quickly, I just go to the 3.3.1.2
> Keywords section.
CHM is pretty great. It loads quickly compared to PHP. You can search all
help files or just within one page. It has a nested, alphabetical index with
more stuff than just key terms, and a TOC that is always visible, within easy
reach, and automatically shows/highlights the page you are on. Plus
context-sensitive help, as clipka mentioned.
I don't think anyone here is going to roll an alternative that is just
as good. Just because an OSS alternative is /possible/ doesn't mean
anyone will get around to actually making it.
Mike
On 2/24/2016 10:05 PM, Mike Horvath wrote:
> On 2/24/2016 8:41 AM, Jim Holsenback wrote:
>> That brings to mind the question of why CHM at all ... it's been obsolete
>> for some time and only exists as a legacy format now. At any rate, I got
>> tired of being the only person doing /any/ of the grunt work necessary to
>> get this fixed. I don't use the Windows version, so this just dropped off
>> my radar. I'm getting plenty of mileage out of the stand-alone Unix-version
>> docs. When I want to find something quickly, I just go to the 3.3.1.2
>> Keywords section.
>
> CHM is pretty great. It loads quickly compared to PHP. You can search all
> help files or just within one page. It has a nested, alphabetical index with
> more stuff than just key terms, and a TOC that is always visible, within
> easy reach, and automatically shows/highlights the page you are on. Plus
> context-sensitive help, as clipka mentioned.
You've misunderstood ... PHP is ONLY used to pull from the wiki and format
back into the standalone HTML packages (Win, *nix and Mac).
>
> I don't think anyone here is going to roll an alternative that is just
> as good. Just because an OSS alternative is /possible/ doesn't mean
> anyone will get around to actually making it.
>
>
> Mike
On 2/25/2016 9:12 AM, Jim Holsenback wrote:
> On 2/24/2016 10:05 PM, Mike Horvath wrote:
>> On 2/24/2016 8:41 AM, Jim Holsenback wrote:
>>> That brings to mind why chm ... it's been obsolete for sometime now and
>>> only exists as legacy now. At any rate I got tired of being the only
>>> person doing /any/ of the grunt work necessary to get this fixed. I
>>> don't use windows version so this just dropped off my radar. I getting
>>> plenty of mileage out of stand-alone unix version docs. When I want to
>>> find something quickly I just go 3.3.1.2 Keywords section.
>>
>> CHM is pretty great. It loads quickly compared to PHP. You can search all
>> help files or just within one page. It has a nested, alphabetical index
>> with more stuff than just key terms, and a TOC that is always visible,
>> within easy reach, and automatically shows/highlights the page you are on.
>> Plus context-sensitive help, as clipka mentioned.
>
> You've misunderstood ... PHP is ONLY used to pull from the wiki and format
> back into the standalone HTML packages (Win, *nix and Mac).
>
>>
>> I don't think anyone here is going to roll an alternative that is just
>> as good. Just because an OSS alternative is /possible/ doesn't mean
>> anyone will get around to actually making it.
>>
>>
>> Mike
>
I didn't mention PHP.
Mike
On 25.02.2016 at 21:18, Mike Horvath wrote:
> On 2/25/2016 9:12 AM, Jim Holsenback wrote:
>> On 2/24/2016 10:05 PM, Mike Horvath wrote:
...
>>> CHM is pretty great. Quick loading when compared to PHP. Can search all
...
>> You've misunderstood ... PHP is ONLY used to pull from the wiki and format
>> back into the standalone HTML packages (Win, *nix and Mac).
...
> I didn't mention PHP.
You didn't?
My eyes must be getting bad then. ;)
On 2/25/2016 10:17 PM, clipka wrote:
> On 25.02.2016 at 21:18, Mike Horvath wrote:
>> On 2/25/2016 9:12 AM, Jim Holsenback wrote:
>>> On 2/24/2016 10:05 PM, Mike Horvath wrote:
> ...
>>>> CHM is pretty great. Quick loading when compared to PHP. Can search all
> ...
>>> You've misunderstood ... PHP is ONLY used to pull from the wiki and
>>> format back into the standalone HTML packages (Win, *nix and Mac).
> ...
>> I didn't mention PHP.
>
> You didn't?
> My eyes must be getting bad then. ;)
>
Oops! Well, I meant that some people have advocated having only a wiki
and no standalone docs. I would prefer keeping the CHM docs to that.
Mike