On 07.07.2021 at 04:06, Bald Eagle wrote:
> Oh, and in case you're trying this with the "ANSI_GDT" font, version
> 1.0, from 1998: That one also seems to be borked, and is not helping.
>
> No idea.
> I tried with the 3 shipped fonts, and then moved on.
You mean the fonts that come with POV-Ray?
Noooooooo - they don't have much in terms of support for non-ASCII
characters at all, let alone Unicode.
> As for the character codes given on that web page:
>
> - The Alt+X codes _should_ directly translate to `\uXXXX` codes.
> Provided the borkedness of POV-Ray's TrueType handling doesn't get in
> the way, and `charset` is set to `utf8` or `sys`.
>
> They do not. But I found a page that provides a mapping.
?? - That's quite a surprise there. The Alt+X input method in Word
(4-digit/letter hexadecimal code followed by Alt+X) is _specifically_
designed to enter Unicode codepoints using hexadecimal notation, and
`\uXXXX` is _specifically_ designed to specify Unicode codepoints using
hexadecimal notation. If they deviate, something is wrong.
(At least for 4-digit/character codes. Those with more digits/letters
are another matter.)
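The claimed equivalence is easy to check outside of either program: both Word's Alt+X input and a `\uXXXX` escape are just hexadecimal spellings of the same Unicode codepoint. A minimal Python sketch of that mapping (the helper names are my own, not from Word or POV-Ray):

```python
# Both Word's Alt+X input and a \uXXXX escape name a Unicode codepoint
# in hexadecimal, so a 4-digit Alt+X code should map 1:1 to a \uXXXX
# escape -- which is why a deviation indicates a bug somewhere.

def alt_x_to_escape(hex_code: str) -> str:
    """Turn a 4-digit Alt+X hex code into the equivalent \\uXXXX escape."""
    codepoint = int(hex_code, 16)
    if codepoint > 0xFFFF:
        raise ValueError("\\uXXXX escapes only cover U+0000..U+FFFF")
    return f"\\u{codepoint:04X}"

def escape_to_char(hex_code: str) -> str:
    """The character both notations refer to."""
    return chr(int(hex_code, 16))

# The Euro sign: Alt+X code 20AC <-> \u20AC <-> the same character.
assert alt_x_to_escape("20AC") == r"\u20AC"
assert escape_to_char("20AC") == "\u20AC" == "€"
```

Codes longer than four hex digits (codepoints beyond U+FFFF) fall outside what a single `\uXXXX` escape can express, which is the "another matter" above.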
> "We are the Bork. You will be dissimilated, and your assstinktiveness will be
> retained in the source code."
For v3.8, yes.
For later versions (which we've decided to call v4.0, while postponing
other, more radical overhauls to some future v5.0), things will be
cleaned up.
> I downloaded FontForge and that seems to work very nicely.
>
> What that will tell you is that lots of fonts don't have very much content - and
> there are tons of unicode entries "left blank".
No surprise there. Providing glyphs for every Unicode codepoint defined
so far would be a ton of work: you'd want all of them in the same
style, and you'd need at least some basic knowledge of the typography
of every script in the world to get them right. (For example, many
fonts get the Euro sign typographically wrong, and fonts that provide
an uppercase variant of the German sz-ligature often get that one wrong
as well.) It would also require a lot of storage space. So fonts
typically cater to a particular set of languages and leave all other
codepoints empty.
Software that does a lot of text rendering (most notably browsers)
solves this problem with a system of fallback fonts: none of them
covers all of Unicode individually, but together they do, and any
character that isn't available in a page's primary font is drawn from
one of the fallbacks.
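The fallback mechanism boils down to a per-character lookup along an ordered font list. A toy sketch (the fonts and the `covers` predicate are made up for illustration; real renderers also handle shaping, styles, and `.notdef` boxes):

```python
# Sketch of fallback-font resolution: try the primary font first, then
# walk the fallback list until some font covers the character.
# Font data here is hypothetical -- just codepoint sets with names.

def covers(font: dict, char: str) -> bool:
    """True if the (toy) font provides a glyph for char."""
    return ord(char) in font["glyphs"]

def pick_font(char: str, primary: dict, fallbacks: list) -> dict:
    for font in [primary] + fallbacks:
        if covers(font, char):
            return font
    return fallbacks[-1]  # last resort: would render a .notdef box

latin = {"name": "LatinOnly", "glyphs": set(range(0x20, 0x180))}
symbols = {"name": "SymbolFallback", "glyphs": {0x20AC, 0x2713}}

assert pick_font("A", latin, [symbols])["name"] == "LatinOnly"
assert pick_font("€", latin, [symbols])["name"] == "SymbolFallback"
```

POV-Ray, by contrast, uses exactly one font per `text` object, so any codepoint missing from that font is simply missing.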
> lucida-sans-unicode.ttf is a fairly complete font for these purposes, but better
> is:
>
> arial-unicode-ms.ttf Which has quite a lot of the goodies.
Yes, those two fonts are my go-to as well when it comes to
"Unicode-richness".
> . . . but absent that, the character mapping for the
> \uNNNN codes seems to be fairly reliable and consistent across the handful of
> fonts I sifted through.
>
> Other than that, POV-Ray seems to have handled everything fairly nicely. Some
> of the glyphs are a wee bit spindly, but that's the font, not POV-Ray. I just
> deleted the whole charset and utf8, and so now there's no parse warning, and
> everything works great anyway.
There are scenarios though where perfectly valid fonts would end up
garbled by POV-Ray. Fortunately they seem to be rare with high-quality
fonts.