Am 01.06.2018 um 23:15 schrieb Thorsten Froehlich:
> clipka <ano### [at] anonymousorg> wrote:
>> - The v3.7 implementation simply swallowed any contiguous sequence of
>> non-ASCII bytes at the start of a scene file, and just /presumed/ them
>> to be a UTF-8 signature BOM. The v3.8.0-x.tokenizer implementation
>> actually checks whether the non-ASCII byte sequence matches the UTF-8
>> signature BOM.
> This was actually a feature. Originally it checked, but it turned out that at
> least at the time several editors created incorrect BOMs...
Thanks for the info. Should the change prompt any issue reports, I'll
know what to do. For now, I'll just take the chance.
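
For anyone curious about the difference, here is a minimal sketch of the two behaviours described above (hypothetical helper functions, not the actual POV-Ray tokenizer code): a strict check that the file starts with the exact UTF-8 signature BOM bytes (EF BB BF), versus the old lenient behaviour of swallowing any leading run of non-ASCII bytes.

```cpp
#include <cstddef>
#include <cstdint>

// Strict v3.8.0-x.tokenizer-style check: does the buffer begin with the
// exact three-byte UTF-8 signature BOM (EF BB BF)?
// (Hypothetical helper for illustration only.)
bool HasUtf8SignatureBom(const std::uint8_t* buf, std::size_t len)
{
    return len >= 3 && buf[0] == 0xEF && buf[1] == 0xBB && buf[2] == 0xBF;
}

// Lenient v3.7-style behaviour: skip (and thus "swallow") any contiguous
// leading run of non-ASCII bytes, presuming it to be a BOM.
// Returns the number of bytes skipped.
std::size_t SkipLeadingNonAscii(const std::uint8_t* buf, std::size_t len)
{
    std::size_t i = 0;
    while (i < len && buf[i] >= 0x80)
        ++i;
    return i;
}
```

Note how the lenient version would also happily skip, say, a stray UTF-16 BOM (FF FE) or any malformed editor output, which is why it tolerated the broken BOMs Thorsten mentions, while the strict version rejects anything but the genuine UTF-8 signature.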