clipka <ano### [at] anonymousorg> wrote:
> Am 25.06.2018 um 17:12 schrieb Kenneth:
> > So I assume that NO one has had any luck trying to #include a UTF-8-encoded
> > file, regardless of the text-editing app that was used.
>
> UTF-8 encoded files /without/ a signature are fine(*). Unfortunately,
> Notepad and WordPad can't write those.
Ah yes, of course. (I did see that tidbit of information in my research, but it
didn't sink in.) All clear now.
>
> (* For certain definitions of "fine", that is; the `global_settings {
> charset FOO }` mechanism isn't really useful for mixing files with
> different encodings. Also, the editor component of POV-Ray for Windows
> doesn't do UTF-8.)
[snip]
> Now proper handling
> of text encoding can be as simple as supporting UTF-8 with and without
> signature - no need to even worry about plain ASCII, being merely a
> special case of UTF-8 without signature.
>
Going forward:
When you have finished your work on restructuring the parser, and someone wants
to write an #include file using UTF-8 encoding (with or without a BOM): which of
the following two constructs is the proper way to code the scene-file/#include
file combo?
A)
Scene file:
global_settings{... charset utf8}
#include "MY FILE.txt" // encoded as UTF-8 but with no charset keyword
OR B):
Scene file:
global_settings{...} // no charset keyword
#include "MY FILE.txt" // encoded as UTF-8 and with its *own*
                       // global_settings{charset utf8}
I'm still a bit confused as to which is correct, although B) looks like the
logical choice(?). The documentation about 'charset' seems to imply this.
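For concreteness, here is construct B) spelled out as a pair of files, as a sketch only; the file names, the text-object contents, and the assumed_gamma setting are placeholder assumptions, not from the thread:

```pov
// Scene file (saved as plain ASCII, i.e. UTF-8 without signature):
global_settings { assumed_gamma 1.0 }  // note: no charset keyword here
#include "MY FILE.txt"

// MY FILE.txt (saved as UTF-8, with or without BOM), declaring
// its own charset before any non-ASCII string literals appear:
global_settings { charset utf8 }
#declare Caption = text {
    ttf "timrom.ttf" "Grüße" 0.25, 0  // non-ASCII text needs utf8 in effect
    pigment { rgb 1 }
}
```

Whether the parser honors a charset declared inside the include file for that file's own strings is exactly the question being asked; the sketch merely shows what B) would look like on disk.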