POV-Ray : Newsgroups : povray.pov4.discussion.general : SDL2 : Re: SDL2
From: William F Pokorny
Date: 16 Jan 2025 14:42:38
Message: <6789612e$1@news.povray.org>
On 1/16/25 11:48, Bald Eagle wrote:
> Ingo:  Thanks for the reminder about Nim.
> There's some pretty good ideas in there.
> I'm hoping we can begin to figure something out in this coming year.
> 
> 
> https://pbrt.org/
> 
> https://pbr-book.org/3ed-2018/contents
> 
> https://pbr-book.org/3ed-2018/Scene_Description_Interface
> I found this quite interesting, especially the part discussing immediate mode vs
> retained mode style.
> 
> William Pokorny: you might find some of the stuff in "Utilities" useful.
> 

Thanks for the reminder. I spent some time years back looking over that 
book and code, but confess I haven't gone back to it in five-plus years. 
There are certainly good ideas therein.

> 
> Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
> whatever we use after the scene has been parsed and lexed?

Disclaimer. I am no parser / (POV-Ray parsing) expert...

... I suppose the code could be hacked up to export something for, say, 
the expression parsing alone.

Beyond that, my bet is it would be difficult to do without a significant 
investment of time, and the result wouldn't be clean. I certainly don't 
have anything in hand. Maybe others have made runs at such work and have 
something more?

Today's POV-Ray / yuqk parsers are convoluted and tangled with the 
raytracing code itself. I see the current parsers as more like 
semi-direct text to scene translators.
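To make the "expression parsing alone" idea concrete, here is a minimal, hypothetical C++ sketch (not actual POV-Ray code) of a stand-alone recursive-descent expression parser that emits its AST as an s-expression string. A real export would hang off POV-Ray's own expression grammar, but the overall shape would be similar.

```cpp
// Hypothetical sketch: parse "+ - * /" arithmetic over numbers and
// parentheses, returning the AST in s-expression form. None of these
// names come from the POV-Ray source.
#include <cctype>
#include <cstddef>
#include <string>

namespace {
struct Parser {
    const std::string& src;
    std::size_t pos = 0;
    explicit Parser(const std::string& s) : src(s) {}

    void skip() {
        while (pos < src.size() &&
               std::isspace(static_cast<unsigned char>(src[pos]))) ++pos;
    }
    bool eat(char c) {
        skip();
        if (pos < src.size() && src[pos] == c) { ++pos; return true; }
        return false;
    }
    // primary := number | '(' expr ')'
    std::string primary() {
        skip();
        if (eat('(')) { std::string e = expr(); eat(')'); return e; }
        std::size_t start = pos;
        while (pos < src.size() &&
               (std::isdigit(static_cast<unsigned char>(src[pos])) ||
                src[pos] == '.')) ++pos;
        return src.substr(start, pos - start);
    }
    // term := primary (('*' | '/') primary)*
    std::string term() {
        std::string lhs = primary();
        for (;;) {
            if (eat('*'))      lhs = "(* " + lhs + " " + primary() + ")";
            else if (eat('/')) lhs = "(/ " + lhs + " " + primary() + ")";
            else return lhs;
        }
    }
    // expr := term (('+' | '-') term)*
    std::string expr() {
        std::string lhs = term();
        for (;;) {
            if (eat('+'))      lhs = "(+ " + lhs + " " + term() + ")";
            else if (eat('-')) lhs = "(- " + lhs + " " + term() + ")";
            else return lhs;
        }
    }
};
} // namespace

// Parse an arithmetic expression and return its AST as an s-expression.
std::string expr_to_sexpr(const std::string& s) {
    Parser p(s);
    return p.expr();
}
```

The point of the s-expression output is that any downstream tool (a new parser being compared against the old one, say) can diff the trees textually.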

> 
> Comparing and contrasting a new parser with one that's being developed would be
> a useful debugging tool, and a way to measure progress and completion.
> 

In offline discussions, jr and I covered some approaches to a brand-new 
parser for a POV-Ray v4.0 (or v5.0). There is value in thinking hard 
about what a Scene Description Language 2.0 should be, and perhaps in 
writing potential parsers for it that interface to something. I'd lean 
toward that 'something' being a limited subset of today's SDL while we 
work out what we want any language implementation to be. (Perhaps 
flattened / un-nested SDL using very few language directives.)
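As a rough illustration of what "flattened / un-nested SDL" might mean, here is a sketch using only today's #declare mechanism (identifier names are made up). Nested style as commonly written today:

```pov
sphere {
  0, 1
  texture { pigment { color rgb <1,0,0> } finish { phong 0.6 } }
}
```

and the same scene un-nested, with each component declared flat before use:

```pov
#declare Pig01 = pigment { color rgb <1,0,0> }
#declare Fin01 = finish { phong 0.6 }
#declare Tex01 = texture { pigment { Pig01 } finish { Fin01 } }
sphere { 0, 1 texture { Tex01 } }
```

A flat form like this is far easier for a simple translation layer to target, since every block is a one-level assignment.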

Much of the POV-Ray source code needs, or could use, refining. This 
includes the core functionality itself, and the core feature set is 
where I've been focused for the better part of a decade now.

I'm working inside out, functionality-wise, and deal with yuqk's parser 
derivative only to the degree I must to support this clean-up (a move 
to more solid functionality).

I don't have the bandwidth mentally / physically to do much more than 
what I'm doing already with core features and yuqk parser 'adjustments'. 


My hope is that some of what I'm doing will be useful directly as code, 
and will help make clearer what the parsing / functionality should be 
for any v4.0 / v5.0; think of yuqk as a resource / reference.

Stuff popping into my head
--------------------------
Aside: Somewhat near term, I plan to move the global_settings radiosity 
(and later photon) setup out of the parser altogether. I think much of 
that setup should always have been handled as INI items, having nothing 
to do with the parser!
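Purely as a hypothetical sketch of what that might look like on the INI side (none of these option names exist today; they are made up for illustration):

```ini
; Hypothetical INI-side radiosity setup, replacing the
; global_settings { radiosity { ... } } SDL block.
Radiosity=on
Radiosity_Count=200
Radiosity_Error_Bound=0.5
Radiosity_Recursion_Limit=2
```

The appeal is that these are render-quality knobs, not scene content, so they arguably belong with the other per-render options.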

Aside: I think we could pick up a chunk of performance by adding a step 
between parsing and rendering in which we re-allocate 'related stuff' 
into contiguous memory. This already happens in a few places today. 
Meshes, for example, do an internal re-allocation after the parse-time 
allocations; it's one of the lesser reasons meshes are fast. Determining 
what counts as 'related stuff' would probably require information passed 
along from the parser, but how to do that is only a VERY vague idea in 
my head.
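A toy C++ sketch of the re-allocation idea (all names are hypothetical, not from the POV-Ray source): the parser leaves primitives scattered across individual heap allocations, and a post-parse pass copies the related ones back-to-back into one contiguous buffer that the render loop then walks cache-friendly, much as mesh data already is.

```cpp
// Hypothetical sketch of a post-parse "compaction" step.
#include <memory>
#include <vector>

struct Triangle {              // stand-in for a parsed scene primitive
    double v0[3], v1[3], v2[3];
};

// Parser output: primitives scattered across individual heap allocations.
using ScatteredScene = std::vector<std::unique_ptr<Triangle>>;

// Post-parse step: copy the related primitives into one contiguous
// buffer (a single allocation) for the render loop to iterate.
std::vector<Triangle> compact(const ScatteredScene& scattered) {
    std::vector<Triangle> packed;
    packed.reserve(scattered.size());  // one contiguous allocation
    for (const auto& t : scattered)
        packed.push_back(*t);          // copy into the flat buffer
    return packed;
}
```

The real problem, as noted above, is deciding *which* objects are "related" enough to pack together; that grouping information would have to come from the parser.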
---

Anyhow... I agree we should work out how to actively push toward what 
v4.0 / v5.0 should be, and I believe ideas must be tried before any 
final path/result can be settled. I can hear my Dad chiding me: "The 
job doesn't get done by looking at it." :-)

Bill P.

