Ingo: Thanks for the reminder about Nim.
There are some pretty good ideas in there.
I'm hoping we can begin to figure something out in this coming year.
https://pbrt.org/
https://pbr-book.org/3ed-2018/contents
https://pbr-book.org/3ed-2018/Scene_Description_Interface
I found this quite interesting, especially the part discussing immediate mode vs
retained mode style.
William Pokorny: you might find some of the stuff in "Utilities" useful.
Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
whatever we use after the scene has been parsed and lexed?
Comparing and contrasting a new parser with one that's being developed would be
a useful debugging tool, and a way to measure progress and completion.
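To make the "export the tree" idea concrete, here is a toy sketch of what a dumped scene tree might look like once serialized. Everything here is invented for illustration: the node names, the nested-dict representation, and the `dump_ast` helper are hypothetical and do not reflect POV-Ray's actual internal structures.

```python
import json

# Hypothetical scene-tree representation -- these node names and this
# shape are invented for illustration, not POV-Ray's real internals.
scene = {
    "type": "scene",
    "children": [
        {"type": "sphere",
         "center": [0.0, 1.0, 0.0],
         "radius": 1.0,
         "texture": {"pigment": {"color": [1.0, 0.0, 0.0]}}},
        {"type": "light_source",
         "position": [10.0, 10.0, -10.0],
         "color": [1.0, 1.0, 1.0]},
    ],
}

def dump_ast(node: dict) -> str:
    """Serialize the (hypothetical) scene tree for offline inspection."""
    return json.dumps(node, indent=2)

print(dump_ast(scene))
```

A dump in some such neutral format is what would let two parsers be diffed against each other.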
On 1/16/25 11:48, Bald Eagle wrote:
> Ingo: Thanks for the reminder about Nim.
> There's some pretty good ideas in there.
> I'm hoping we can begin to figure something out in this coming year.
>
>
> https://pbrt.org/
>
> https://pbr-book.org/3ed-2018/contents
>
> https://pbr-book.org/3ed-2018/Scene_Description_Interface
> I found this quite interesting, especially the part discussing immediate mode vs
> retained mode style.
>
> William Pokorny: you might find some of the stuff in "Utilities" useful.
>
Thanks for the reminder. I spent some time years back looking over that
book and code, but confess to not having gone back to it for five-plus
years. Certainly good ideas therein.
>
> Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
> whatever we use after the scene has been parsed and lexed?
Disclaimer: I am no parser / (POV-Ray parsing) expert...
I suppose the code could be hacked and whacked to export something
for, say, the expression parsing alone.
Beyond that, my bet is it would be difficult to do without a significant
investment of time - and the result won't be clean. I certainly don't
have anything in hand. Maybe others have made runs at such work and have
something more?
Today's POV-Ray / yuqk parsers are convoluted and tangled with the
raytracing code itself. I see the current parsers as more like
semi-direct text to scene translators.
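A toy sketch of the distinction being drawn here, using an invented one-line mini-syntax (not real SDL): a "semi-direct translator" builds render-ready objects while parsing, whereas an AST-style parser emits neutral data that any backend could consume later.

```python
# "Semi-direct translator" style: each parsed token immediately becomes
# an engine-specific object, so parser and renderer are entangled.
class RenderSphere:
    def __init__(self, radius):
        self.radius = radius        # stands in for precomputed render data

def parse_direct(src):
    return [RenderSphere(float(line.split()[1]))
            for line in src.splitlines() if line.startswith("sphere")]

# AST style: output is plain data, exportable and backend-independent.
def parse_to_ast(src):
    return [("sphere", float(line.split()[1]))
            for line in src.splitlines() if line.startswith("sphere")]

src = "sphere 1.0\nsphere 2.5"
print(parse_to_ast(src))    # neutral tree: comparable, dumpable
```

Untangling the first style into the second is, roughly, the "significant investment of time" being described.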
>
> Comparing and contrasting a new parser with one that's being developed would be
> a useful debugging tool, and a way to measure progress and completion.
>
In offline discussions, jr and I talked over some approaches to a brand-new
parser for a POV-Ray 4.0 (or 5.0). There is value to thinking
hard about what Scene Description Language 2.0 should be and perhaps
writing potential parsers for it that interface to something. I'd lean
toward that 'something' being a limited set of today's SDL while working
up what we want any language implementation to be. (Perhaps flattened /
un-nested SDL using very few language directives)
Much in the POV-Ray source code needs, or at least could use, refining. This
includes the core functionality itself - and the core feature set is
where I've been focused for the better part of a decade now.
I'm working inside out, functionality-wise, and deal with yuqk's parser
derivative only to the degree I must to support this clean up (move to
more solid functionality) push.
I don't have the bandwidth mentally / physically to do much more than
what I'm doing already with core features and yuqk parser 'adjustments'.
My hope is some of what I'm doing will be useful directly as code and
help make clearer what the parsing / functionality should be for any
v4.0 / v5.0. yuqk as a resource / reference.
Stuff popping into my head
--------------------------
Aside: Somewhat near term I plan to move the global_settings radiosity
(and later photon) set up out of the parser altogether. I think much
about these features should always have been ini set up items - not
having anything to do with the parser!
Aside: I think we could pick up a chunk of performance by adding a step
between the parser and rendering where we re-allocate 'related stuff' in
contiguous memory. This already happens in a few places today. Meshes, for
example, do an internal re-allocation after the initial memory allocation
during parsing; it's one lesser reason why meshes are fast. The 'related
stuff' determination would probably need information passed from the
parser - but how to do this is only a VERY vague idea in my head.
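A minimal sketch of the re-allocation idea, under invented data shapes: per-object records scattered across separate allocations get packed into one flat, contiguous buffer (a structure-of-arrays layout) before rendering begins.

```python
from array import array

# Invented parse output: each object is its own small allocation.
parsed = [{"center": (0.0, 1.0, 0.0), "radius": 1.0},
          {"center": (3.0, 0.0, 2.0), "radius": 0.5}]

def pack(objs):
    """Pack 'related stuff' into one contiguous block of doubles."""
    buf = array("d")
    for o in objs:
        buf.extend(o["center"])     # x, y, z
        buf.append(o["radius"])     # stride of 4 doubles per object
    return buf

buf = pack(parsed)
# Render code would index fixed-stride records instead of chasing pointers.
assert list(buf[0:4]) == [0.0, 1.0, 0.0, 1.0]
```

In C++ the cache-locality win is the point; the sketch only shows the packing step itself.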
---
Anyhow... I agree with working out how to actively push forward toward
what v4.0 / v5.0 should be and I believe ideas must be tried to work out
any final path/result. Hearing my Dad chide me with: "The job doesn't
get done by looking at it." :-)
Bill P.
William F Pokorny <ano### [at] anonymousorg> wrote:
> > Do we think that there's a way to export POV-Ray's "Abstract Syntax Tree" or
> > whatever we use after the scene has been parsed and lexed?
>
> Disclaimer. I am no parser / (POV-Ray parsing) expert...
>
> ... Suppose maybe code could be hacked and whacked to export something
> for, say, the expression parsing alone.
>
> Beyond that, my bet is it would be difficult to do without a significant
> investment of time - and the result won't be clean. I certainly don't
> have anything in hand. Maybe others have made runs at such work and have
> something more?
Right, but at the moment we really don't have anything, so even the most
imperfect / incomplete thing would be a step forward.
Do you know if the AST is assembled into some kind of data structure in
memory right before rendering, and whether that's what the render phase
operates upon? Perhaps that data structure could just be barfed out so that
we could take a gander at it.
> Today's POV-Ray / yuqk parsers are convoluted and tangled with the
> raytracing code itself. I see the current parsers as more like
> semi-direct text to scene translators.
Right - this is the part that I don't understand. How do the parser and the
raytracing code get entangled like that? Even a small example where the two
are intertwined would help. Just point to line #'s nnn-NNN in such-and-such a
file.
> > Comparing and contrasting a new parser with one that's being developed would be
> > a useful debugging tool, and a way to measure progress and completion.
> >
>
> In offline discussions with jr, we discussed some approaches to a brand
> new parser and with a POV-Ray 4.0 (or 5.0). There is value to thinking
> hard about what Scene Description Language 2.0 should be and perhaps
> writing potential parsers for it that interface to something. I'd lean
> toward that 'something' being a limited set of today's SDL while working
> up what we want any language implementation to be. (Perhaps flattened /
> un-nested SDL using very few language directives)
Likewise, we've had several discussions and brainstorming sessions.
However, before we start talking about a new SDL, I think we need to
understand the parsing / lexing part.
I actually feel confident enough to write a lot of the actual raytracing code
myself - and luckily, I don't think there will need to be a lot of it
(re)written.
By flattened, un-nested SDL, you're suggesting something akin to writing a
scene in another language and having it generate every instance of a thing
that would be in a loop in current SDL? So if I had a loop that instantiated
100 spheres, what we'd be doing is writing out the code for all 100 spheres
individually, to be parsed.
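The 100-spheres case can be sketched directly: a generator in some host language emits one literal statement per sphere, so the downstream parser never sees a loop. The SDL-ish output text here is illustrative only.

```python
# Sketch of "flattened / un-nested SDL": a loop over 100 spheres is
# replaced, before parsing, by 100 literal sphere statements.
def unroll_spheres(n):
    lines = []
    for i in range(n):
        lines.append(f"sphere {{ <{i}, 0, 0>, 0.5 }}")
    return "\n".join(lines)

flat = unroll_spheres(100)
print(flat.splitlines()[0])    # sphere { <0, 0, 0>, 0.5 }
assert len(flat.splitlines()) == 100
```

The flat form is trivial to parse, and trivial to diff between an old and a new parser.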
> Much in the POV-Ray source code needs/could-use work/refining. This
> includes the core functionality itself - and the core feature set is
> where I've been focused for the better part of a decade now.
So you've added features to SDL.
We are all curious about exactly what needs to happen to fully accomplish such a
task.
See such an explanation by Leigh Orf at:
https://dl.acm.org/doi/fullHtml/10.5555/1029015.1029017
Apparently there are specific things that need to be done in several files to
make this happen.
- BW
https://web.archive.org/web/20060116142746/http://research.orf.cx/lj2004.html
Also, in the above-referenced book, they discuss how most commercial /
production raytracers convert everything to triangles before rendering.
I think it would be a useful feature to have, and it's certainly a
long-requested one.
It would enable folks to save files as meshes, and allow 3D printing of scenes.
Obviously, this would be difficult or impossible for things like fractals,
media, etc.
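For the simplest case of such a conversion, here is a sketch of turning an analytic unit sphere into a triangle mesh via latitude/longitude tessellation. This is a generic textbook approach, not POV-Ray code; the function name and parameters are invented.

```python
import math

def tessellate_sphere(n_lat, n_lon):
    """Approximate a unit sphere as triangles (UV tessellation)."""
    def vert(i, j):
        theta = math.pi * i / n_lat        # 0..pi, pole to pole
        phi = 2 * math.pi * j / n_lon      # 0..2*pi around the axis
        return (math.sin(theta) * math.cos(phi),
                math.cos(theta),
                math.sin(theta) * math.sin(phi))
    tris = []
    for i in range(n_lat):
        for j in range(n_lon):
            a, b = vert(i, j), vert(i + 1, j)
            c, d = vert(i + 1, j + 1), vert(i, j + 1)
            if i > 0:                 # skip degenerate north-pole triangles
                tris.append((a, b, d))
            if i < n_lat - 1:         # skip degenerate south-pole triangles
                tris.append((b, c, d))
    return tris

print(len(tessellate_sphere(8, 16)), "triangles")
```

A mesh-export feature would do the analogous thing per primitive (and, as noted, some primitives like media have no sensible triangle form).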
I also found this:
http://news.povray.org/povray.binaries.utilities/thread/%3C41605076@news.povray.org%3E/
in case anyone wanted to try a marching cubes utility! :)
- BW
(Who knows what else lurks in the archives)
"Bald Eagle" <cre### [at] netscapenet> wrote:
> Also, in the above referenced book, they discuss how most commercial /
> production raytracers convert everything to triangles before rendering.
The ability to mesh-ify a scene, objects, etc. is extremely useful. It would
explicitly break POV-Ray into two parts: an SDL and a raytracer. The SDL side
could then feed other rendering back ends, OpenGL previews or complete
renderings, 3D printing, etc. One could use a kind of hybrid style of SDL
coding: start solid, go to mesh, and do subdivisions and other mesh wizardry.
For writing, syntax, and usability testing of a new SDL language, one could
write a small language, parser, and SDL1 transpiler. With a subset of SDL a
lot can be tested. The output can be the simplest form of current POV-Ray SDL
with loops unrolled, etc.
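The transpiler idea can be sketched end to end in a few lines. The input syntax here (`repeat N: sphere r=R`) is entirely invented, and the output is illustrative flat SDL with the loop already unrolled.

```python
import re

# Toy "small language -> SDL1" transpiler sketch.  Input syntax is made up;
# output is flat, loop-free SDL-style text.
def transpile(src):
    out = []
    for line in src.strip().splitlines():
        m = re.match(r"repeat (\d+): sphere r=([\d.]+)", line.strip())
        if m:
            n, r = int(m.group(1)), m.group(2)
            for i in range(n):
                out.append(f"sphere {{ <{i}, 0, 0>, {r} }}")
        else:
            out.append(line)       # pass unrecognized lines through untouched
    return "\n".join(out)

print(transpile("repeat 3: sphere r=0.5"))
```

Because the output is a subset of today's SDL, the existing parser serves as the test oracle while the new front end is iterated on.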
ingo
hi,
pushed for time currently, sorry for omission(s).
"ingo" <nomail@nomail> wrote:
thx for the Nim ref.
> ...
> For writing, syntax and usability testing, of a new SDL language one could write
> a small language, parser and SDL1 transpiler. With a subset of SDL a lot can be
> testet. The output can be the simplest form of current POV-Ray SDL with loops
> enrolled etc.
and Bald Eagle wrote:
> At this juncture, we really need a functional flowchart showing exactly HOW
> POV-Ray goes from the SDL in .pov file to the final rendered image.
diagrams and charts, _yes_. to add.
I do think the "new POV-Ray" should ship with built-in "schizophrenia" ;-). if
a version number of less than n.n is given, or the version is missing, the job
goes to the old parser/POV-Ray as of 3.8 (?), else it'll be a version indicating
"SDL2" and the new code takes over.
we need to start with the outline of the "backend", the render engine. aiui, a
(LL) virtual machine is (was?) supposed to be the "platform". as the design of
the LLVM firms up, we'll get the (first cut at the) "API".
once there's a specialised rendering "API", we can start thinking about language
features for, I hope, a SDL which compiles[*] to the LLVM low-level
instructions.
[*] many scripting languages, including PHP and Tcl, compile "just-in-time".
with a bit of luck almost all of this can (and should) be "transparent" to us
"end-users".
regards, jr.
"jr" <cre### [at] gmailcom> wrote:
> diagrams and charts, _yes_. to add.
Attachments:
Download 'raytracing flowchart 1.png' (63 KB)
https://nim-lang.org/blog/2020/06/30/ray-tracing-in-nim.html
https://github.com/jaafersheriff/CPU-Ray-Tracer