hi,
ingo <ing### [at] nomail com> wrote:
> Why is POV-Ray (development) in decline? Lack of developers? Really?
I like Mr's take (for which thanks) -- "...train is steadily moving forwards..."
> Look at the wealth of include files.
> What if we can compile an include file, use it as a dll? What if after
> some time it gets integrated in the core?
> ...
> A glimpse of the goal:
>
> Scene SDL
> │
> ├--> Raytracer backend (analytical intersection, BVH/R-tree) ← reference
> ├--> Path tracer backend (physically based)
> ├--> OpenGL / Vulkan backend (realtime preview, all geometry meshed)
> ├--> STL / mesh backend (3D printing, watertight meshes)
> └--> Audio backend (acoustic simulation)
ideally, yes. however
> ... English bridges the gap between my Dutch and your French. What
> if there is no gap at all, in coding language? Front end, back end
> middle ware all in the same language. Coders and users speak the same
> language, they all "code". That removes "friction". That can enhance the
> product.
what happened to "variety is the spice of life" ?! different languages (both
computer and human) all have their strengths and weaknesses, and I see nothing
"wrong" with "multi-language" projects.
@Bald Eagle.
> ...the sor {} object...
yes, "unfinished business", right.
regards, jr.
Post a reply to this message
On 2026-05-04 21:09, jr wrote:
> what happened to "variety is the spice of life" ?! different languages (both
> computer and human) all have their strengths and weaknesses, and I see nothing
> "wrong" with "multi-language" projects.
>
but but, but the raytracer's job is to populate the database. The trace
data is the render. Everything else is SQL. Image is a query, depthmap
is a query, normalmap is a query ...
-- depth map as a query, not a render pass
SELECT
  pixel_x,
  pixel_y,
  MIN(t) AS depth  -- closest hit along each ray
FROM bounces
WHERE frame = 42
GROUP BY pixel_x, pixel_y
to take it to the extreme, the render relay should probably be the only
thing that actually traces rays. Everything else — depth, normals,
object IDs, albedo — is a material relay or a post-process query.
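The depth-map query above can be exercised as-is against a toy hit table. Below is a minimal, self-contained sketch using Python's sqlite3; the `bounces` schema and the sample rows are illustrative assumptions, not taken from any actual project code:

```python
import sqlite3

# Assumed schema: the tracer writes one row per ray/surface hit.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE bounces
               (frame INTEGER, pixel_x INTEGER, pixel_y INTEGER, t REAL)""")
con.executemany("INSERT INTO bounces VALUES (?, ?, ?, ?)", [
    (42, 0, 0, 5.0), (42, 0, 0, 2.5),   # two hits along the same ray
    (42, 1, 0, 7.1),
])

# The depth map is not a render pass, just a query: nearest hit per pixel.
depth = con.execute("""
    SELECT pixel_x, pixel_y, MIN(t) AS depth
    FROM bounces
    WHERE frame = 42
    GROUP BY pixel_x, pixel_y
    ORDER BY pixel_y, pixel_x
""").fetchall()
print(depth)  # [(0, 0, 2.5), (1, 0, 7.1)]
```

Normal maps, object-ID passes, etc. would be further SELECTs over the same table, differing only in which columns they aggregate.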
ingo
ingo <ing### [at] nomail com> wrote:
> .... Everything else is SQL. Image is a query, depthmap
> is a query, normalmap is a query ...
Perhaps you could take a bit of time to explain how the parser gets handled.
clipka's major concern was fixing/refactoring the parser.
I know that we have a special, hand-written parser that's unlike those of most
languages.
If I understand all of it correctly, we have:
- a raw tokenizer
- a tokenizer
- a scanner
- a parser
and then handling of objects, symbol tables, functions, textures, etc.
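As a toy illustration of just the first of those stages (not POV-Ray's actual code; the function name and token categories are made up), a tokenizer is essentially a regex-driven loop that turns raw text into tagged lexemes for the later stages to consume:

```python
import re

# One alternation per lexeme class: number, identifier, or single symbol.
TOKEN_RE = re.compile(r"\s*(?:(\d+\.?\d*)|(\w+)|(\S))")

def tokenize(src: str):
    """Turn raw SDL-ish text into (kind, value) lexemes."""
    tokens = []
    for num, word, sym in TOKEN_RE.findall(src):
        if num:
            tokens.append(("NUMBER", float(num)))
        elif word:
            tokens.append(("IDENT", word))
        else:
            tokens.append(("SYMBOL", sym))
    return tokens

print(tokenize("sphere { 1.5 }"))
# [('IDENT', 'sphere'), ('SYMBOL', '{'), ('NUMBER', 1.5), ('SYMBOL', '}')]
```

The scanner and parser stages would then group these lexemes into nested object blocks; the split between stages is exactly the kind of thing worth documenting in a dedicated thread.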
I don't really understand all the grammar, lexemes, syntax vs semantics, symbol
tables, etc. that go into building a language, and I think it would help if we
dedicated a few threads to discussing these aspects of the source, so that we
can better understand what we HAVE, and plan out a path to where we want/need to
be.
Writing out some code that just goes through the motions but doesn't implement
anything would give people an idea of the overall roadmap of how the raytracer
works, and I think a high-level pseudocode version would be a great way to help
organize all of the various parts.
We have redundant code, things intertwined in the parser that ought not to be
there, and various algorithms that get applied to different shapes as part of
the bounding (BSP, cylinder bounding, AABB, sphere bounding).
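As an illustration of one of those bounding schemes, the textbook "slab test" for a ray against an axis-aligned bounding box fits in a few lines. This is a generic sketch, not POV-Ray's actual implementation; `inv_dir` holds the precomputed reciprocal of the ray direction per axis:

```python
# Slab-test ray/AABB intersection (textbook version, for illustration).
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    t_near, t_far = -float("inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0          # keep the slab interval ordered
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    # Hit if the slab intervals overlap and the box is not behind the ray.
    return t_near <= t_far and t_far >= 0.0

# Ray from the origin along -z toward a unit box centred at (0, 0, -3).
# (1e9 stands in for 1/0 on the axes the ray does not travel along.)
print(ray_hits_aabb((0, 0, 0), (1e9, 1e9, -1.0),
                    (-0.5, -0.5, -3.5), (0.5, 0.5, -2.5)))  # True
```

Having the different bounding strategies side by side in commented reference form would make it much easier to discuss which shape should use which.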
If we rearranged the code at a high level, and made extensive comments about WHY
the parts of the code ought to be in the places we're putting them, that would
be a big step forward in filling in the place-holder stuff with actual code.
We already have the code base, so we can just copy-paste a lot of what already
exists, and then it would be a matter of making all of the various parts work
together. We can make it modular, add much-needed comments and references,
standardize some of the math and constants used (epsilons), etc.
This would also be a good class on writing patches and compiling source code, so
that more people can understand how to make POV-Ray vX.Y.Z.
- BW
On 2026-05-05 15:02, Bald Eagle wrote:
> Perhaps you could take a bit of time to explain how the parser gets handled.
When using Nim, as I do, there are two options and two half-options.
- Just use Nim, as I have done now (could be cleaner); there is no need to
write a parser, as we use Nim's parser & compiler.
- A half-option adds a Domain Specific Language on top, created using
Nim's macro system (AST rewriting at compile time). It is still pure Nim,
and the DSL could handle the scene description while ordinary Nim code is
used for the rest (functions etc.). Look at it as a syntax-sugar
layer:
scene:
  camera perspective:
    pos = vec3(0, 1, 2)
    lookAt = vec3(0, 0, -3)
    fov = 60
  sphere:
    center = vec3(0, 0, -3)
    radius = 1.0
    material = red
  pointLight:
    pos = vec3(5, 8, 2)
    intensity = 100
Squiggles can probably be added to make it look more like POV-Ray.
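For readers who don't know Nim: what such a macro layer buys is purely notation. In rough Python terms (a loose analogue, all names hypothetical), the declarative block above desugars into plain constructor calls on a scene object:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    items: list = field(default_factory=list)

    # After "macro expansion", each DSL block is one ordinary method call.
    def camera(self, pos, look_at, fov):
        self.items.append(("camera", pos, look_at, fov))
        return self

    def sphere(self, center, radius, material):
        self.items.append(("sphere", center, radius, material))
        return self

    def point_light(self, pos, intensity):
        self.items.append(("pointLight", pos, intensity))
        return self

# The desugared form of the scene block.
sc = (Scene()
      .camera(pos=(0, 1, 2), look_at=(0, 0, -3), fov=60)
      .sphere(center=(0, 0, -3), radius=1.0, material="red")
      .point_light(pos=(5, 8, 2), intensity=100))
print([kind for kind, *_ in sc.items])  # ['camera', 'sphere', 'pointLight']
```

The DSL and the expanded form are interchangeable, which is why the sugar layer costs nothing at runtime.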
- Another half-option would be to use Nim's scripting language. That would
require no compilation, but it does not (yet?) fully / seamlessly
integrate with the compiled version. It still looks and feels the same
as Nim, though.
- A full option would be to write a complete scene parser. I have not
arrived at that stage, but it feels "limiting" to me.
Nim 3 is in the making; part of it is a Lisp-like language that could
bridge a parser and the Nim compiler, if I understand it all correctly:
https://github.com/nim-lang/nifspec/blob/master/README.md
https://github.com/nim-lang/nifspec/blob/master/doc/nif-spec.md
A front end parses and converts "POV-Ray SDL" and outputs:
(.nif27)
(.lang "raytracer")
(stmts
  (camera@2,1 (vec3 0.0 1.0 2.0) (vec3 0.0 0.0 -3.0) 60.0)
  (sphere@3,1 (vec3 0.0 0.0 -3.0) 1.0 (material :red.0.materials))
)
Nim parses it, compiles and renders.
But what becomes clear in the discussion so far is that we are trying to
solve different problems.
ingo
hi,
ingo <ing### [at] nomail com> wrote:
> On 2026-05-04 21:09, jr wrote:
> > what happened to "variety is the spice of life" ?! ...
> but but, but the raytracer's job is to populate the database. The trace
> data is the render. Everything else is SQL. Image is a query, depthmap
> is a query, normalmap is a query ...
> -- depth map as a query, not a render pass
> SELECT
> pixel_x,
> pixel_y,
> MIN(t) AS depth -- closest hit along each ray
> FROM bounces
> WHERE frame = 42
> GROUP BY pixel_x, pixel_y
you sure know how "to take the wind out of my sails" </grin>. that really is an
interesting perspective. heavy..
> to take it to the extreme, ...
I don't have the required background(s) but am still left with a "nagging
feeling". a fully spec'd "application binary interface", as you've alluded to
with the dynamic-link libraries, ought to be enough. (unless I misunderstood ?)
regards, jr.
I suddenly feel like I was off topic: seeing no attachment and reading too
fast, I missed the line with the repo link, so I did not realize there was
something to try out already. Sorry! But is it actually POV-Ray? Or might
the discussion be better posted in a more explicit category such as
pov4.discussion.general?
The concept of Nim sounds good to me, but your repo reads as more obscure to
me than some POV code... Sorry to be thick at this late hour. So, is it
elements of a new renderer itself? If so, what would the syntax invoking
them look like?
On 2026-05-06 02:45, Mr wrote:
> I suddenly feel like I was off topic as, [...]
No problem.
> [...] what would the syntax invoking them look like?
>
Currently it's not all very well organized. For example, in scene.nim
there are mixed responsibilities. The whole intersection / traversal logic
there should go to the raytracer file. Also, the loop in main should go
there.
The PPM generation is only "instant gratification"; the database will be
the result, and it will contain HDR data. An image could be extracted
with something that does:
proc exportPPM*(db: DbConn, frame: int, filename: string) =
  let rows = db.allRows(sql"""
    SELECT pixel_x, pixel_y,
           tonemap_reinhard(sum(rgb_r)) AS r,
           tonemap_reinhard(sum(rgb_g)) AS g,
           tonemap_reinhard(sum(rgb_b)) AS b
    FROM (
      SELECT pixel_x, pixel_y,
             spectral_to_rgb(contribution, wavelength) AS (r,g,b)
      FROM final_pixel_values
      WHERE frame = ?
    ) GROUP BY pixel_x, pixel_y
    """, frame)
  writePPM(filename, rows, width, height)
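For reference, the classic Reinhard operator that a `tonemap_reinhard` SQL function would presumably implement (an assumption; the actual function may differ) is just x / (1 + x) per channel, compressing unbounded HDR values into [0, 1):

```python
# Minimal Reinhard tone-map sketch, applied per colour channel.
def tonemap_reinhard(x: float) -> float:
    return x / (1.0 + x)   # maps [0, inf) into [0, 1)

hdr = [0.0, 0.5, 1.0, 4.0, 100.0]
ldr = [round(tonemap_reinhard(v), 3) for v in hdr]
print(ldr)  # [0.0, 0.333, 0.5, 0.8, 0.99]
```

Registering such a scalar function with the database would keep the whole image pipeline inside SQL, as the query above suggests.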
this is currently the whole scene description:
----
var sc = initScene()

# Materials
let matFloor = sc.addMaterial(Material(colour: vec3(0.8f, 0.8f, 0.8f)))
let matSphere = sc.addMaterial(Material(colour: vec3(0.8f, 0.2f, 0.2f)))
let matBox = sc.addMaterial(Material(colour: vec3(0.2f, 0.6f, 0.2f)))

# Shapes
discard sc.addShape(plane(vec3(0f, 1f, 0f), -1f, matFloor))
discard sc.addShape(sphere(vec3(0f, 0f, -3f), 1f, matSphere))
discard sc.addShape(box(vec3(-2f, -1f, -4f), vec3(-0.5f, 0.5f, -2f), matBox))

initPerspectiveCamera(
  relays = relays,
  pos = vec3(0f, 1f, 2f),
  lookAt = vec3(0f, 0f, -3f),
  up = vec3(0f, 1f, 0f),
  fovDeg = 60f,
  width = Width,
  height = Height
)

initRaytracerRenderer(relays, sc)
addPointLight(relays, vec3(5f, 8f, 2f), vec3(1f, 1f, 1f), 100f)
addPointLight(relays, vec3(-3f, 4f, 0f), vec3(0.4f, 0.6f, 1f), 40f)
initColourMaterial(relays, sc)
initBlueBackground(relays)
----
I do not like that!
initScene() annoyed me. But then, one can put multiple scenes in one file
and render them in order. Scene 2 can then re-use, or abuse, data from
scene 1. (Contrived, crazy example: scene one can be an animation of a
ball bouncing against a wall many times, denting it a bit and leaving
some colour. Train a little neural net with it. Scene two then uses the
net for texture and displacement-map generation.)
Currently an order in the scene is required. Not nice; maybe things can
be ordered after a "parsing" step.
Initializing the relays can be "hidden", done lazily: when adding a
light source, check whether a render relay is initialized; if not, do it.
Also, shapes and materials do not look nice; I just chose the easy way to
get the whole thing going. And certainly there is a lot I've not thought
about yet.
ingo
On 2026-05-05 23:18, jr wrote:
> "nagging feeling"
Hah, I have a 2000-line "design" document; there are quite a few lines in
it that evoke that feeling, so it's in constant flux. Might be an awful
lot of work to find yet another dead end street, but we can hope that
there is a good pub at the end of it then.
ingo
ingo <ing### [at] nomail com> wrote:
> Might be an awful
> lot of work to find yet another dead end street, but we can hope that
> there is a good pub at the end of it then.
Having worked on many projects at many different levels, it is just as important
to know and document what DID NOT work. (And what was learned along the way)
So much so, that governments here even steal our wages to manufacture and
install metal signs that read "DEAD END". ;)
- BW
On 2026-05-06 14:07, Bald Eagle wrote:
> So much so, that governments here even steal our wages to manufacture and
> install metal signs that read "DEAD END". ;)
Is the situation that dire out there at the moment? :(
I did product development in the past. Keeping an "engineer's
diary" was extremely valuable.
ingo