clipka <ano### [at] anonymousorg> wrote:
> Am 09.12.2018 um 16:09 schrieb jr:
> > it's quite a bit slower than the previous x.tokenizer version (from 5576 to 6095
> > seconds for the first test), also shows a slight increase in "K tokens"
> > processed for same scene.
> The token count thing worries me a bit. I couldn't care less if I
> inadvertently introduced a slight change to the rules for how the number
> of tokens is counted as a side effect of some other sensible change, but
> I can't think of any recent change that might have such an effect; are
> you sure it's not simply an artifact of the way you observe this number?
there's more than one way to read it? :-)
> And are you sure we are talking about the same reference version
> (x-tokenizer.9945666)? If so, can you narrow down the scene language
> construct for which the values differ?
yes, and not really. I used 9945627 as the "baseline", then ran the same scene
with 9945666 and 9960461. all three were run "remote" on an otherwise idling
machine. as for the language construct, I can't even guess; I've never
really looked at the POV-Ray sources. essentially, nested loops calling a macro
that uses inside() to probe, i.e. a scan of a volume.
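roughly this kind of thing, if it helps (a minimal sketch only; the object,
macro name, grid bounds and step size are all made up for illustration, and
#for needs 3.7 or later):

// hypothetical volume scan: nested loops calling a macro that probes
// a test object with inside()
#declare TestObj = sphere { 0, 1 };

#macro Probe(P)
  // inside() returns non-zero if point P lies inside the object
  #local Hit = inside(TestObj, P);
  // ... record / act on Hit ...
#end

#for (X, -1, 1, 0.1)
  #for (Y, -1, 1, 0.1)
    #for (Z, -1, 1, 0.1)
      Probe(<X, Y, Z>)
    #end
  #end
#end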
curiously, the alpha reports the "no objects in scene" warning one line earlier
than the x.tokenizer versions.
> That said, the main focus of this version is on furthering my
> understanding of what's left of the legacy parser code. To that end,
> I've peppered the code with checks to verify some assumptions that may
> or may not hold true, and what I'm really interested in right now is
> reports of cases where they don't. Those should manifest as parse errors.
ah, I might not be much help here: I have no code requiring versions earlier
than 3.6. still, I'll keep using the latest x.tokenizer as my day-to-day version.