clipka <ano### [at] anonymousorg> wrote:
> - The v3.7 implementation simply swallowed any contiguous sequence of
> non-ASCII bytes at the start of a scene file and just /presumed/ them
> to be a UTF-8 signature BOM. The v3.8.0-x.tokenizer implementation
> actually checks whether the non-ASCII byte sequence matches the UTF-8
> signature BOM.
This was actually a feature. Originally the code did check, but it turned out
that, at least at the time, several editors wrote incorrect BOMs...