POV-Ray : Newsgroups : povray.pov4.discussion.general : SDL2
  SDL2 (Message 1 to 10 of 15)  
From: jr
Subject: SDL2
Date: 13 Jan 2025 11:30:00
Message: <web.67853e8e1b740bc5f3dafc4d6cde94f1@news.povray.org>
hi,

the SDL2 design needs introspection facilities comparable to Tcl's, or ideally
better, I think.  one feature we're all (I think) keen to see would be a means
to find out a variable's type.

say I wanted to write a macro or procedure which takes one argument that could
be, for a colour, a 3-vector, a 4-vector, or a 5-vector.

ideally we'd have keywords, of course, but even a simple boolean function which
takes two arguments, a reference type and the variable to check, would do.

something like:

if (isType(<0,0,0,0,0>, arg))
  // full colour.
elseif (isType(<0,0,0,0>, arg))
  // rgbf or rgbt
....

with keywords it would be neater, of course:

if (isType(v5type, arg))
  // full colour.
elseif (isType(v4type, arg))
  // rgbf or rgbt
....

(I'm not a language designer, as you can tell :-))
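fwiw, the same dispatch sketched in Python (function and names made up, purely
to illustrate the idea):

```python
# hypothetical sketch of the proposed isType() dispatch, done by length;
# the names here are invented for illustration only
def classify_colour(arg):
    if isinstance(arg, tuple) and len(arg) == 5:
        return "full colour"            # rgbft
    elif isinstance(arg, tuple) and len(arg) == 4:
        return "rgbf or rgbt"
    elif isinstance(arg, tuple) and len(arg) == 3:
        return "rgb"
    return "not a colour vector"

print(classify_colour((1, 0, 0, 0, 0)))   # full colour
```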


regards, jr.



From: Bald Eagle
Subject: Re: SDL2
Date: 13 Jan 2025 13:20:00
Message: <web.678558a5f5bf97863018e75b25979125@news.povray.org>
"jr" <cre### [at] gmailcom> wrote:
> one feature we're all (I think) keen to see would be
> a means to find out a variable's type.

I think this would best be illustrated by a .pov file trying to #read from a
data file of some sort.

This thread will quickly become a collection of disparate ideas that will need
to be sorted and prioritized.

I'd suggest creating a document (or better, a spreadsheet - for reasons I can
expand on) to append and annotate, thus keeping everything in one place.

Columns could include: author; feature type (object, attribute [texture,
pigment pattern], flow control, introspective keyword, syntactic sugar, etc.);
and priority (core, wanted, maybe-nice-to-have).

The more granular we make it, the easier it will be to deal with everything, and
make appropriate recommendations and changes.

Examples from existing languages would help easily illustrate syntax and usage
without having to write everything out from the outset.

Approaching this from a "unix-style" way of thinking: everything has a single
purpose, and the result is achieved by assembling the functional parts into a
cohesive whole.  This way, any time something has to be fixed/updated, the input
and the output remain unchanged, and it's only how things are handled internally
that changes.  Old versions ought to be deprecated and fully retained with
commentary, so as not to lose ideas and historical explanations about WHY things
were done.  Every keyword/feature ought to have a version number, and the last
working version ought to be retained in full (perhaps as keyword_lw so that the
newer version of the full software can still be used without rollbacks).

- BW



From: Bald Eagle
Subject: Re: SDL2
Date: 13 Jan 2025 14:30:00
Message: <web.67856984f5bf97863018e75b25979125@news.povray.org>
At this juncture, we really need a functional flowchart showing exactly HOW
POV-Ray goes from the SDL in .pov file to the final rendered image.

The only people that I know of who might have such knowledge (and whom we've
seen post here "recently") are (in alphabetical order):

Chris Cason
Thorsten Frohlich
Jerome Grimbert
William Pokorny
Yvo Smellenburgh

If they could help lay things out such that we can get a head start on
understanding what really needs to happen to potentially rewrite everything from
scratch, that would be a big help.
Also, if they are in contact with other past members of the POV-Ray development
team, perhaps they can solicit some commentary from people that we haven't seen
in a while.

As far as I can tell (off the top of my head / rushed):
We have the parsing phase:
We have the lexer/tokenizer that nibbles away at the SDL character by
character, identifying keywords and such, and converts them into (numerical?)
tokens.
All of that has to get filtered through error-handling routines.
Then the parser consumes those tokens.
(guessing / speculating)
for every object, some sort of hierarchy is constructed
a bounding box is calculated
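To make the tokenizing step concrete, here's a toy lexer in Python -
emphatically not POV-Ray's actual code, just the general "characters in,
tokens out" idea:

```python
import re

# toy lexer sketch: split an SDL-like snippet into tagged tokens.
# NOT how POV-Ray does it - just an illustration of the tokenizing step.
TOKEN_SPEC = [
    ("NUMBER", r"-?\d+(?:\.\d+)?"),
    ("IDENT",  r"[A-Za-z_]\w*"),    # keywords like sphere, pigment, ...
    ("LBRACE", r"\{"),
    ("RBRACE", r"\}"),
    ("LANGLE", r"<"),
    ("RANGLE", r">"),
    ("COMMA",  r","),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]

print(tokenize("sphere { <0, 1, 2>, 0.5 }"))
```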

Then we have the render phase:
A ray gets shot from the camera to the coordinates corresponding to the current
screen pixel
The ray gets tested against "all" the bounding boxes (presumably there is some
kind of tree optimization)
If a bounding box is hit, then the ray gets tested against "all" objects in a
CSG (let's just call everything a CSG for now) and a solver gets invoked to
determine the ray-object intersection.
Once the intersection point is determined, the normal, texture, finish, etc.
get handled.  This is likely the most complicated part, since we can have
media, transparent objects, ior, reflections, etc.
Stuff gets done with antialiasing
Pixel gets assigned an rgb color.
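The bounding-box test above can be sketched with the classic slab method (my
own toy Python, not POV-Ray's implementation):

```python
# slab-method ray vs. axis-aligned bounding box test - a sketch of the
# kind of check described above, not POV-Ray's actual code
def ray_hits_box(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:        # parallel to and outside this slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far or t_far < 0.0:
            return False
    return True

# ray from the origin along +z toward a unit box centred at <0, 0, 5>
print(ray_hits_box((0, 0, 0), (0, 0, 1), (-1, -1, 4), (1, 1, 6)))   # True
```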

Please fill in the blanks and expand upon the generalizations, and correct the
errors.

- BW



From: Bald Eagle
Subject: Re: SDL2
Date: 14 Jan 2025 15:45:00
Message: <web.6786cbdcf5bf97863018e75b25979125@news.povray.org>
Look what/who I found:

https://www.martinjules.com/projects/single-project?id=5



From: ingo
Subject: Re: SDL2
Date: 15 Jan 2025 01:35:00
Message: <web.6787561df5bf978617bac71e8ffb8ce3@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
>
> This thread will quickly become a collection of disparate ideas that will need
> to be sorted and prioritized.
>

As an example, here is a SQL parser that produces an Abstract Syntax Tree,
written in Nim, a readable language.

The parser's code:
https://github.com/nim-lang/Nim/blob/version-2-2/lib/pure/parsesql.nim

The parser's doc:
https://nim-lang.org/docs/parsesql.html

Nim documentation:
https://nim-lang.org/documentation.html

ingo



From: Bald Eagle
Subject: Re: SDL2
Date: 16 Jan 2025 11:50:00
Message: <web.67893871f5bf97865e04e68c25979125@news.povray.org>
Ingo:  Thanks for the reminder about Nim.
There are some pretty good ideas in there.
I'm hoping we can begin to figure something out in this coming year.


https://pbrt.org/

https://pbr-book.org/3ed-2018/contents

https://pbr-book.org/3ed-2018/Scene_Description_Interface
I found this quite interesting, especially the part discussing immediate mode vs
retained mode style.

William Pokorny: you might find some of the stuff in "Utilities" useful.


Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
whatever we use after the scene has been parsed and lexed?

Comparing and contrasting a new parser with one that's being developed would be
a useful debugging tool, and a way to measure progress and completion.
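By way of analogy (Python's toolchain, not POV-Ray's): Python exposes its own
parser's output as an inspectable tree, and something equivalent for POV-Ray's
post-parse scene data is what I'm imagining:

```python
import ast

# analogy only: Python lets you dump its parser's output as a tree;
# an equivalent dump of POV-Ray's post-parse scene data is the idea here
tree = ast.parse("x = 1 + 2")
print(ast.dump(tree))
```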



From: William F Pokorny
Subject: Re: SDL2
Date: 16 Jan 2025 14:42:38
Message: <6789612e$1@news.povray.org>
On 1/16/25 11:48, Bald Eagle wrote:
> Ingo:  Thanks for the reminder about Nim.
> There's some pretty good ideas in there.
> I'm hoping we can begin to figure something out in this coming year.
> 
> 
> https://pbrt.org/
> 
> https://pbr-book.org/3ed-2018/contents
> 
> https://pbr-book.org/3ed-2018/Scene_Description_Interface
> I found this quite interesting, especially the part discussing immediate mode vs
> retained mode style.
> 
> William Pokorny: you might find some of the stuff in "Utilities" useful.
> 

Thanks for the reminder. I spent some time years back looking over that 
book and code, but confess to not having gone back to it for five years 
plus. Certainly good ideas therein.

> 
> Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
> whatever we use after the scene has been parsed and lexed?

Disclaimer. I am no parser / (POV-Ray parsing) expert...

... Suppose maybe code could be hacked and whacked to export something 
for, say, the expression parsing alone.

Beyond that, my bet is it would be difficult to do without a significant 
investment of time - and the result won't be clean. I certainly don't 
have anything in hand. Maybe others have made runs at such work and have 
something more?

Today's POV-Ray / yuqk parsers are convoluted and tangled with the 
raytracing code itself. I see the current parsers as more like 
semi-direct text to scene translators.

> 
> Comparing and contrasting a new parser with one that's being developed would be
> a useful debugging tool, and a way to measure progress and completion.
> 

In offline discussions with jr, we discussed some approaches to a brand 
new parser for a POV-Ray 4.0 (or 5.0). There is value to thinking 
hard about what Scene Description Language 2.0 should be and perhaps 
writing potential parsers for it that interface to something. I'd lean 
toward that 'something' being a limited set of today's SDL while working 
up what we want any language implementation to be. (Perhaps flattened / 
un-nested SDL using very few language directives)

Much in the POV-Ray source code needs/could-use work/refining. This 
includes the core functionality itself - and the core feature set is 
where I've been focused for the better part of a decade now.

I'm working inside out, functionality-wise, and deal with yuqk's parser 
derivative only to the degree I must to support this clean up (move to 
more solid functionality) push.

I don't have the bandwidth mentally / physically to do much more than 
what I'm doing already with core features and yuqk parser 'adjustments'. 


My hope is that some of what I'm doing will be useful directly as code, and 
will help make clearer what the parsing / functionality should be for any 
v4.0 / v5.0 - yuqk as a resource / reference.

Stuff popping into my head
--------------------------
Aside: Somewhat near term I plan to move the global_settings radiosity 
(and later photon) set up out of the parser altogether. I think much 
about these features should always have been ini set up items - not 
having anything to do with the parser!

Aside: I think we could pick up a chunk of performance by adding a step 
between the parser and rendering where we re-allocate 'related stuff' in 
contiguous memory. This happens in a few places today. Meshes, for example, 
do an internal re-allocation after the initial memory allocation during 
parsing; it's one lesser reason why meshes are fast. The 'related stuff' 
determination would probably include information passed from the parser - 
but how to do this is only a VERY vague idea in my head.
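(The kind of repacking meant here, sketched very loosely in Python - the 
layout is invented:)

```python
from array import array

# sketch: per-object data that was scatter-allocated during parsing gets
# repacked into one contiguous buffer before rendering (layout invented)
objects = [{"center": (float(i), 0.0, 0.0), "radius": 0.2} for i in range(4)]

packed = array("d")                  # one contiguous block of doubles
for obj in objects:
    packed.extend(obj["center"])     # cx, cy, cz
    packed.append(obj["radius"])     # r

# each object now occupies 4 consecutive doubles: cx, cy, cz, r
print(len(packed))                   # 16
```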
---

Anyhow... I agree with working out how to actively push forward toward 
what v4.0 / v5.0 should be, and I believe ideas must be tried to work out 
any final path/result. I can hear my Dad chiding me: "The job doesn't 
get done by looking at it." :-)

Bill P.



From: Bald Eagle
Subject: Re: SDL2
Date: 16 Jan 2025 15:30:00
Message: <web.67896bc6f5bf97865e04e68c25979125@news.povray.org>
William F Pokorny <ano### [at] anonymousorg> wrote:

> > Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
> > whatever we use after the scene has been parsed and lexed?
>
> Disclaimer. I am no parser / (POV-Ray parsing) expert...
>
> ... Suppose maybe code could be hacked and whacked to export something
> for, say, the expression parsing alone.
>
> Beyond that, my bet is it would be difficult to do without a significant
> investment of time - and the result won't be clean. I certainly don't
> have anything in hand. Maybe others have made runs at such work and have
> something more?

Right, but at the moment we really don't have anything, so even the most
imperfect / incomplete thing would be a step forward.

Do you know if the AST is - - - assembled into some kind of data structure in
memory right before rendering, and that's what the render phase operates upon?
Perhaps that data structure could just be barfed out so that we could take a
gander at it.

> Today's POV-Ray / yuqk parsers are convoluted and tangled with the
> raytracing code itself. I see the current parsers as more like
> semi-direct text to scene translators.

Right - this is the part that I don't understand.  How do the parser and
raytracing code get entangled like that?  Even a small example where the two
are intertwined would help.  Just point to line #'s nnn-NNN in such-and-such a
file.

> > Comparing and contrasting a new parser with one that's being developed would be
> > a useful debugging tool, and a way to measure progress and completion.
> >
>
> In offline discussions with jr, we discussed some approaches to a brand
> new parser for a POV-Ray 4.0 (or 5.0). There is value to thinking
> hard about what Scene Description Language 2.0 should be and perhaps
> writing potential parsers for it that interface to something. I'd lean
> toward that 'something' being a limited set of today's SDL while working
> up what we want any language implementation to be. (Perhaps flattened /
> un-nested SDL using very few language directives)

Likewise, we've had several discussions and brainstorming sessions.
However, before we start talking about a new SDL, I think we need to
understand the parsing/lexing part.
I actually feel confident enough to write a lot of the actual raytracing code
myself - and luckily, I don't think there will need to be a lot of it
(re)written.
By flattened, un-nested SDL, you're suggesting something akin to writing a
scene in another language and having it generate every instance of a thing
that would be in a loop in current SDL.  So if I had a loop that instantiated
100 spheres, what we'd be doing is writing out the code for all 100 spheres
individually to be parsed.
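For instance (geometry made up, just to illustrate), a few lines of Python
could unroll such a loop into flat SDL text:

```python
# sketch: "unroll" a 100-sphere loop into flat, directive-free SDL text
# (positions and radius invented for illustration)
lines = []
for i in range(100):
    x = i * 0.5
    lines.append(f"sphere {{ <{x:.1f}, 0, 0>, 0.2 }}")
flat_sdl = "\n".join(lines)

print(flat_sdl.splitlines()[0])      # sphere { <0.0, 0, 0>, 0.2 }
print(len(flat_sdl.splitlines()))    # 100
```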

> Much in the POV-Ray source code needs/could-use work/refining. This
> includes the core functionality itself - and the core feature set is
> where I've been focused for the better part of a decade now.

So you've added features to SDL.
We are all curious about exactly what needs to happen to fully accomplish such a
task.
See such an explanation by Leigh Orf at:
https://dl.acm.org/doi/fullHtml/10.5555/1029015.1029017
Apparently there are specific things that need to be done in several files to
make this happen.

- BW



From: Bald Eagle
Subject: Re: SDL2
Date: 16 Jan 2025 15:40:00
Message: <web.67896db2f5bf97865e04e68c25979125@news.povray.org>
https://web.archive.org/web/20060116142746/http://research.orf.cx/lj2004.html



From: Bald Eagle
Subject: Re: SDL2
Date: 16 Jan 2025 15:45:00
Message: <web.67896f46f5bf97865e04e68c25979125@news.povray.org>
Also, in the above referenced book, they discuss how most commercial /
production raytracers convert everything to triangles before rendering.

I think it would be a useful feature to have, and it's certainly a
long-requested one.

It would enable folks to save files as meshes, and allow 3D printing of scenes.

Obviously, this would be difficult or impossible for things like fractals,
media, etc.
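As a toy example of that kind of conversion (a naive UV-sphere tessellation;
the resolution and names are mine):

```python
import math

# toy sketch: tessellate a unit sphere into triangles, the way a
# "convert everything to a mesh" pass might (very naive UV sphere)
def sphere_triangles(n_lat=8, n_lon=16):
    def vertex(i, j):
        theta = math.pi * i / n_lat           # latitude angle
        phi = 2.0 * math.pi * j / n_lon       # longitude angle
        return (math.sin(theta) * math.cos(phi),
                math.cos(theta),
                math.sin(theta) * math.sin(phi))
    tris = []
    for i in range(n_lat):
        for j in range(n_lon):
            v00, v01 = vertex(i, j), vertex(i, j + 1)
            v10, v11 = vertex(i + 1, j), vertex(i + 1, j + 1)
            if i > 0:                         # skip degenerate top-cap tris
                tris.append((v00, v10, v01))
            if i < n_lat - 1:                 # skip degenerate bottom-cap tris
                tris.append((v01, v10, v11))
    return tris

print(len(sphere_triangles()))                # triangle count
```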

I also found this:
http://news.povray.org/povray.binaries.utilities/thread/%3C41605076@news.povray.org%3E/

in case anyone wanted to try a marching cubes utility!  :)

- BW

(Who knows what else lurks in the archives)




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.