POV-Ray as WASM module (Messages 1 to 6 of 6)
From: yesbird
Subject: POV-Ray as WASM module
Date: 9 Nov 2025 20:01:32
Message: <6911396c$1@news.povray.org>
I have an idea of how to combine a web interface and ray-tracing calculations
using WASM technology: https://webassembly.org/. With WASM it will be possible
to use the POV-Ray API from JavaScript/TypeScript, and the scene can be stored
in JSON format.

Having ray tracing right inside the browser will open the way to creating an
online editor with many pleasant features, such as a spline editor, for
example:
https://threejs.org/examples/?q=spline#webgl_geometry_spline_editor
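
To make the binding idea concrete, here is a minimal sketch of how a render
entry point might be exposed to JavaScript via Emscripten's embind. The
function and module names (RenderSceneJson, pov_wasm) are placeholders of my
own invention, not part of any existing POV-Ray API:

```cpp
// pov_wasm.cpp -- hypothetical glue layer, assuming POV-Ray's core could be
// compiled with Emscripten and driven by a JSON scene description.
#include <emscripten/bind.h>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Placeholder: parse the JSON scene, trace it, and return raw RGBA pixels.
// A real implementation would call into a refactored POV-Ray core here.
std::vector<uint8_t> RenderSceneJson(const std::string& sceneJson,
                                     int width, int height)
{
    std::vector<uint8_t> rgba(static_cast<std::size_t>(width) * height * 4, 0);
    // ... build the scene from sceneJson, run the tracer, fill rgba ...
    return rgba;
}

EMSCRIPTEN_BINDINGS(pov_wasm) {
    emscripten::register_vector<uint8_t>("ByteVector");
    emscripten::function("renderSceneJson", &RenderSceneJson);
}
```

From JavaScript one would then call Module.renderSceneJson(json, w, h) and
copy the returned bytes into a canvas ImageData to display the render.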

I know this will involve a huge amount of work, and before diving into this
inferno I would like to ask you about possible approaches to start with. If I
understand correctly from reading previous posts about related refactoring,
the main problem is separating the scene description and tracing parts from
the parser.

Could you please point me to the modules I should examine first in order to
create a work plan?
--
YB



From: Bald Eagle
Subject: Re: POV-Ray as WASM module
Date: 10 Nov 2025 13:40:00
Message: <web.6912311dd8d72d8f7f81dbac25979125@news.povray.org>
yesbird wrote:
> I have an idea of how to combine a web interface and ray-tracing calculations
> using WASM technology: https://webassembly.org/. With WASM it will be possible
> to use the POV-Ray API from JavaScript/TypeScript, and the scene can be stored
> in JSON format.
>
> Having ray tracing right inside the browser will open the way to creating an
> online editor with many pleasant features, such as a spline editor, for
> example:
> https://threejs.org/examples/?q=spline#webgl_geometry_spline_editor
>
> I know this will involve a huge amount of work, and before diving into this
> inferno I would like to ask you about possible approaches to start with. If I
> understand correctly from reading previous posts about related refactoring,
> the main problem is separating the scene description and tracing parts from
> the parser.
>
> Could you please point me to the modules I should examine first in order to
> create a work plan?
> --
> YB

Thank you, Sergei.
I'm really glad that you joined us and are also thinking about these things -
and DOING them.

I had one of the demonic AIs look it over and give me its summary, because
blindly picking my way through tens of thousands of lines of opaque code was
beyond my present capabilities.

https://news.povray.org/povray.pov4.discussion.general/thread/%3Cweb.6900faee20e0e8051af05c025979125%40news.povray.org%3E/

(I can keep hammering on this, if we think we can assemble the breadcrumbs it
drops into a map...)

I know that Chris Cason mentioned that he had some plans of his own, so maybe he
might chime in on this.

If there's some way to scan through the old newsgroup posts to get a summary of
what clipka said about disentangling the parser from the raytracer, then that's
an avenue I'd explore as well.  I think Jim Henderson has some sort of fu that
applies in that area.


- BW



From: Bald Eagle
Subject: Re: POV-Ray as WASM module
Date: 10 Nov 2025 14:30:00
Message: <web.69123ce7d8d72d8f7f81dbac25979125@news.povray.org>
"Can you identify where parsing, scene description, and raytracing get
interwined in the povray source code https://github.com/POV-Ray/povray and
suggest methods to separate them?"



Here's how parsing, scene description, and raytracing are intertwined in the
POV-Ray codebase, along with suggestions for separating them into cleaner,
modular layers:

---

## 📦 Where Parsing, Scene Description & Raytracing Intersect

### 1. **Parsing Initiates Scene Construction**
- The parsing pipeline is implemented in **source/parser/**:
  - `scanner.cpp` and `rawtokenizer.cpp` handle **lexical analysis**,
  - `parser.cpp`, `parser_expressions.cpp`, etc., build a detailed **scene
representation** (objects, materials, lights, cameras) in memory.

### 2. **Parser Pulls in Raytracing Code**
- `parser.cpp` directly includes raytracing and rendering headers like:
  ```cpp
  #include "core/lighting/lightsource.h"
  #include "core/bounding/boundingsphere.h"
  ```
  This indicates that the parser is aware of, and directly uses, classes from
the core rendering engine.

### 3. **Scene Graph & Core Raytracing Tied Together**
- As the parser builds the scene graph (objects, primitives), it instantiates
core structures such as light sources and bounds, which are immediately used
by the raytracer.
- This blending makes it hard to swap or reuse the scene representation
independently of rendering.

---

## 🛠 Suggestions for Modular Separation

### A. **Isolate Parsing into a Dedicated API Layer**
- Extract parsing into a standalone module exposing a clean interface, e.g.:
  ```cpp
  Scene *ParseScene(const std::string &filename);
  ```
- Internally use parser/scanner/tokenizer, but externally depend only on a
minimal scene structure.



### B. **Introduce a Neutral Scene Graph**
- Define an intermediate scene representation that has no dependencies on
rendering abstractions.
- Adapt the parser to build this neutral graph.
- Develop adapter layers to feed core raytracer components from this graph.

### C. **Use Adapter Patterns to Insert Decoupling Layers**
1. **Parser → SceneGraph Adapter**
   - Parser outputs SceneGraph objects, not `core::lightsource`.
2. **SceneGraph → Raytracer Adapter**
   - Convert graph nodes into concrete raytracer types within a separate
module.
   - Benefits: easier to reuse graph for other backends (e.g. viewport preview,
real-time path tracing).

### D. **Decouple Symbol Table & Expression Evaluation**

- Implement two roles:
  - **Compile-time AST** building,
  - **Runtime evaluator** invoked during rendering, possibly embedded in the
scene description rather than the renderer.
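
As a sketch of this split (the names below are invented for illustration, not
taken from the POV-Ray sources), the parser would build the AST once at parse
time, and the renderer would call Evaluate() at render time:

```cpp
// Hypothetical expression AST: built once at parse time, evaluated
// repeatedly at render time, with no renderer dependencies.
#include <memory>
#include <utility>

struct Expr {
    virtual ~Expr() = default;
    virtual double Evaluate(double clock) const = 0;  // runtime role
};

struct Constant : Expr {
    double value;
    explicit Constant(double v) : value(v) {}
    double Evaluate(double) const override { return value; }
};

struct Add : Expr {
    std::unique_ptr<Expr> lhs, rhs;
    Add(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
        : lhs(std::move(l)), rhs(std::move(r)) {}
    double Evaluate(double clock) const override {
        return lhs->Evaluate(clock) + rhs->Evaluate(clock);
    }
};

// Parse time:  auto e = std::make_unique<Add>(
//                  std::make_unique<Constant>(1.0),
//                  std::make_unique<Constant>(2.0));
// Render time: double v = e->Evaluate(clock);  // 3.0
```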

### E. **Refactor Code Responsibilities**
- Maintain clear boundaries:
  - **Parser module**: handles parsing, tokenization, AST building.
  - **Scene builder**: constructs neutral scene graph.
  - **Raytracer**: consumes scene graph for rendering.
- This requires removing the parser's `#include` dependencies on the core
lighting and bounding headers.

---

## ✅ Summary of Actions

| Layer          | Responsibilities                                | Proposed Refactoring              |
|----------------|-------------------------------------------------|-----------------------------------|
| **Parser**     | Lexing, parsing, AST construction               | Outputs to a SceneGraph interface |
| **SceneGraph** | Neutral representation of objects and settings | No rendering dependencies         |
| **Adapters**   | Convert SceneGraph to core types                | One-directional mapping layer     |
| **Raytracer**  | Rendering based on core objects                 | Independent from the parser       |

This structure isolates parsing, scene representation, and raytracing into
cleanly separated, independently testable modules.


---



From: Bald Eagle
Subject: Re: POV-Ray as WASM module
Date: 10 Nov 2025 14:35:00
Message: <web.69123e2fd8d72d8f7f81dbac25979125@news.povray.org>
No idea if any of this mechanical parroting will make any sense or yield fruit,
but I have a little time before heading out for the night.

"Show a code sketch of the decoupling and Suggest tests for each module"


Here is a code sketch of decoupling the parser, scene graph, adapter, and
raytracer in POV-Ray, followed by test suggestions for each module.

Code Sketch: Modular Decoupling
1. SceneGraph Module (Neutral Representation)

```cpp
// scenegraph/Scene.h
#pragma once
#include <vector>
#include <memory>
#include "Object.h"
#include "Camera.h"
#include "Light.h"

class Scene {
public:
    std::vector<std::shared_ptr<Object>> objects;
    std::vector<std::shared_ptr<Light>> lights;
    std::shared_ptr<Camera> camera;

    void AddObject(std::shared_ptr<Object> obj);
    void AddLight(std::shared_ptr<Light> light);
    void SetCamera(std::shared_ptr<Camera> cam);
};
```


2. Parser Module (No Rendering Dependencies)

```cpp
// parser/SceneParser.h
#pragma once
#include <string>
#include <memory>
#include "../scenegraph/Scene.h"

class SceneParser {
public:
    static std::shared_ptr<Scene> ParseFromFile(const std::string& filename);
};
```

3. Adapter Module (SceneGraph → Core Raytracer)

```cpp
// adapter/SceneAdapter.h
#pragma once
#include <memory>
#include "../scenegraph/Scene.h"
#include "../core/RenderScene.h"

class SceneAdapter {
public:
    static std::shared_ptr<RenderScene> ConvertToRenderScene(
        const std::shared_ptr<Scene>& scene);
};
```
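
A possible implementation sketch for the adapter follows; the Make* conversion
helpers are hypothetical and shown as declarations only:

```cpp
// adapter/SceneAdapter.cpp -- illustrative one-directional mapping from the
// neutral scene graph to renderer types.
#include "SceneAdapter.h"

// Hypothetical per-type conversion helpers (declarations only in this sketch).
RenderableObject MakeRenderable(const Object& obj);
LightSource      MakeLightSource(const Light& light);

std::shared_ptr<RenderScene> SceneAdapter::ConvertToRenderScene(
    const std::shared_ptr<Scene>& scene)
{
    auto rs = std::make_shared<RenderScene>();
    for (const auto& obj : scene->objects)
        rs->renderables.push_back(MakeRenderable(*obj));
    for (const auto& light : scene->lights)
        rs->lights.push_back(MakeLightSource(*light));
    // Camera conversion would go here; in a real refactor the neutral and
    // core Camera types would need distinct names or namespaces.
    return rs;
}
```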

4. Raytracer Module (No Parser Dependencies)

```cpp
// core/RenderScene.h
#pragma once
#include <vector>
#include "RenderableObject.h"
#include "LightSource.h"
#include "Camera.h"

class RenderScene {
public:
    std::vector<RenderableObject> renderables;
    std::vector<LightSource> lights;
    Camera camera;

    void Render();
};
```

Suggested Tests for Each Module

🔹 Parser Module

Unit Tests
- Parse a valid scene file → returns the correct SceneGraph structure.
- Parse invalid syntax → throws a descriptive error.
- Parse edge cases (empty file, nested macros, etc.).

Mock Tests
- Mock file input and assert the SceneGraph output structure.

🔹 SceneGraph Module

Unit Tests
- Add/remove objects and lights.
- Validate camera assignment.
- Ensure deep copies and shared_ptrs behave correctly.

Integration Tests
- Construct a Scene manually and verify its structure.

🔹 Adapter Module

Unit Tests
- Convert a SceneGraph with known contents → a RenderScene with the expected
renderables.
- Handle a missing camera or lights gracefully.

Mock Tests
- Mock a SceneGraph and assert the correct RenderScene output.

🔹 Raytracer Module

Unit Tests
- RenderScene::Render() with a minimal scene → produces an image buffer.
- Test bounding volume hierarchy, intersection logic, and shading.

Performance Tests
- Render large scenes and measure frame time.
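
To make the parser tests concrete, here is a minimal assert-based sketch
against the SceneParser API sketched above; the scene-file syntax and the
expected counts are invented for illustration, and a real suite might use
GoogleTest or Catch2 instead:

```cpp
// tests/test_scene_parser.cpp
#include <cassert>
#include <exception>
#include <fstream>
#include "../parser/SceneParser.h"

int main()
{
    // Write a tiny scene file (the syntax here is purely illustrative).
    {
        std::ofstream f("minimal.scene");
        f << "camera { }\nlight { }\nsphere { }\n";
    }

    // A valid file should yield the expected neutral scene graph.
    auto scene = SceneParser::ParseFromFile("minimal.scene");
    assert(scene != nullptr);
    assert(scene->objects.size() == 1);
    assert(scene->lights.size() == 1);
    assert(scene->camera != nullptr);

    // A missing or malformed file should fail with a descriptive error.
    bool threw = false;
    try {
        SceneParser::ParseFromFile("no_such_file.scene");
    } catch (const std::exception&) {
        threw = true;
    }
    assert(threw);
    return 0;
}
```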


From: Bald Eagle
Subject: Re: POV-Ray as WASM module
Date: 10 Nov 2025 14:45:00
Message: <web.69123fe2d8d72d8f7f81dbac25979125@news.povray.org>
"Can you compare the structure of POV-Ray's source code to that of other
raytracers and summarize the similarities and differences that might inspire a
concrete plan for refactoring POV-Ray's codebase to separate the parser from
the other raytracing functions?"


Copilot said:



Comparing POV-Ray's structure with that of PBRT and LuxCoreRender yields
insights that can directly guide a concrete refactor plan to cleanly separate
parsing from rendering responsibilities in POV-Ray.

🧭 POV‑Ray Architecture

Monolithic structure: parser, scene, and renderer modules are tightly
interwoven under source/, with the parser directly including core renderer
headers.

Parser builds "final" core objects: the scene description parser creates
instances like core::lightsource, boundingsphere, etc., effectively merging
parsing and rendering payload construction.

No abstract scene interface: there is no neutral, shared representation (scene
graph); the parser writes directly into renderer-specific structures.

🧱 PBRT (Physically Based Rendering Toolkit)

Clear parser-renderer separation:
- parser.cpp reads scene files and invokes methods on a ParserTarget interface.
- BasicSceneBuilder, implementing ParserTarget, translates parsing events into
a neutral scene format, independent of rendering.

Modular architecture:
- Parsing → abstract operations → scene builder → later delivery to the
rendering engine.
- The rendering system consumes scene data only through this abstract
interface.

Advantages:
- Dramatically easier to test parsing separately.
- Enables preview tools, scene validation, and modular pipeline extensibility.
⚙️ LuxCoreRender

Explicit modular structure:
- luxparser/: scene description parser module.
- slg/: scene layout generator (scene object assembly).
- luxrays/: low-level raytracing engine.
- luxcore/: high-level API layering.

Strict role separation:
- Parsers feed into scene structure objects.
- The raytracer consumes the high-level scene structure without any parser
dependencies.
- Interfaces provide clear boundaries and prevent coupling.

Benefits:
- Enables multiple front-ends (e.g., a Blender exporter).
- Clean CI with parser tests, unit tests, and renderer benchmarks.
📊 Comparison Overview

| Feature                   | POV‑Ray                  | PBRT                                 | LuxCoreRender              |
|---------------------------|--------------------------|--------------------------------------|----------------------------|
| Parser-renderer coupling  | Tight (direct includes)  | ✅ Loose via ParserTarget interface  | ✅ Loose via scene modules |
| Neutral scene builder     | ❌ None                  | ✅ BasicSceneBuilder                 | ✅ Yes, slg layer          |
| Architecture              | Monolithic               | Modular                              | Modular                    |
| Multiple front-ends       | ❌                       | ✅                                   | ✅ API, exporter           |

🔧 Concrete Recommendations for POV‑Ray Refactoring

1. Extract a ParserTarget-like interface: define abstract methods (Shape(),
Light(), Camera(), etc.) instead of constructing core objects directly (see
the sketch after this list).
2. Introduce a SceneBuilder layer: implement a new SceneBuilder that receives
parser events and constructs an intermediate, neutral graph (objects, lights,
cameras).
3. Create a core-adapter layer: convert SceneBuilder output into core::
rendering types such as LightSource, RenderableObject, etc.
4. Decouple the parser from core headers: parser/ should include only parser
and scenebuilder headers; it must not depend on core/.
5. Refactor core/ to consume only the neutral scene format: move all rendering
initialization to accept the neutral structure.
6. Adopt a modular testing strategy:
   - Parser: syntactic/semantic SDL tests invoking the parser against various
inputs.
   - SceneBuilder: tests checking neutral-graph correctness.
   - Adapter: tests verifying the translation to renderer types.
   - Renderer: tests rendering minimal scenes and checking scene-consistency
semantics.
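
A minimal sketch of the interface from step 1; the names (IParserTarget,
ParamSet) and the string-based parameter bag are assumptions for illustration,
not existing POV-Ray code:

```cpp
// Hypothetical parser-output interface, modeled loosely on PBRT's
// ParserTarget: the parser reports what it parsed and never constructs
// core renderer objects itself.
#pragma once
#include <map>
#include <string>

// Simplified parameter bag (name -> textual value) for the sketch.
using ParamSet = std::map<std::string, std::string>;

class IParserTarget {
public:
    virtual ~IParserTarget() = default;
    virtual void Shape(const std::string& type, const ParamSet& params) = 0;
    virtual void Light(const std::string& type, const ParamSet& params) = 0;
    virtual void Camera(const ParamSet& params) = 0;
};

// A SceneBuilder implementing IParserTarget would accumulate the neutral
// scene graph; a separate adapter would later convert it to core types.
```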
✅ Steering Toward a Targeted Refactor

- Initial step: define the IParserTarget interface for parser outputs.
- Next: build a SceneBuilder implementing this interface to collect parsed
elements.
- Then: implement an adapter to convert to core types.
- Finally: adjust the core renderer to consume adapter-produced objects,
removing all SDL-specific includes from core/.

The result is independence, modularity, testability, and flexibility for
future front-ends.



From: Bald Eagle
Subject: Re: POV-Ray as WASM module
Date: 10 Nov 2025 15:05:00
Message: <web.69124338d8d72d8f7f81dbac25979125@news.povray.org>
When I was actively working on reproducing Steven Marschner's
"Simulating the structure and texture of solid wood", I noticed that he used
the Mitsuba renderer.
https://www.mitsuba-renderer.org/

So it seemed to me that if it was easy for his team to add the custom BRDFs
and other parts of the wood texture to the renderer, then that would be
something we'd be seriously interested in drawing inspiration from.

Now, granted, the AI word-soup is awash with generalities; however, coupled
with the Mitsuba GitHub repository, it might give us concrete ideas about some
actionable plans for refactoring POV-Ray.

"can you provide an overview of how the mitsuba renderer works?
https://github.com/mitsuba-renderer/mitsuba3 "


Here's an overview of Mitsuba 3's architecture and core systems:

---

## 🏗️ High-Level Architecture

**Mitsuba 3** is a modular, research-oriented renderer implemented in C++17,
centered around the **Dr.Jit** Just-In-Time (JIT) compiler. It supports
multiple execution variants (CPU vs. GPU, RGB vs. spectral, differentiable
rendering).

### Key Components:
- **Core System**: Fundamental math types, structures, utilities.
- **Build System**: Manages plugins and selectable variants via CMake .
- **Plugins**: Encapsulate BSDFs, shapes, emitters, sensors, integrators.
- **Python Bindings**: Full-featured Python interface and JIT compilation,
enabling dynamic scripting and differentiability .

---

## 🌳 Scene Representation

The **Scene** class serves as a unified container for geometry, materials,
emitters, sensors, media, and their acceleration structures.

### Responsibilities:
- Hosts collections of shapes, emitters, sensors, media.
- Provides functions for ray intersection and visibility testing:
  - `ray_intersect()`, `ray_test()`, and `ray_intersect_preliminary()` for
different use cases.
- Works with multiple acceleration backends (CPU KD-tree, Embree; GPU OptiX).

---

## 🔦 Rendering Pipeline

The core rendering workflow comprises the following:

1. **Integration**: Integrators perform Monte Carlo sampling and solve light
transport.
2. **Sampling & Ray Generation**: Samplers and sensors generate rays through
pixels.
3. **Ray Tracing**: Ray-scene intersections via the Scene class.
4. **Shading**: BSDFs handle surface-material responses; emitters add radiance.
5. **Accumulation**: Results are aggregated in a Film/ImageBlock and
post-processed.
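
To make these five stages concrete, here is a hedged skeleton of such a loop;
every type and function in it is invented for illustration and is not
Mitsuba's actual C++ API:

```cpp
// Illustrative integrate/sample/trace/shade/accumulate skeleton.
#include <cstddef>
#include <vector>

struct Ray   { /* origin, direction */ };
struct Hit   { bool valid = false; /* position, normal, material ... */ };
struct Color { double r = 0, g = 0, b = 0; };

// Stubbed stages so the sketch compiles; real code would do the work.
Ray   GenerateCameraRay(int /*x*/, int /*y*/, int /*sample*/) { return {}; }
Hit   Intersect(const Ray&)         { return {}; }
Color Shade(const Hit&, const Ray&) { return {0.5, 0.5, 0.5}; }

// Stage 1 (integration) is the Monte Carlo loop itself.
void Render(int width, int height, int spp, std::vector<Color>& film)
{
    film.assign(static_cast<std::size_t>(width) * height, Color{});
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int s = 0; s < spp; ++s) {
                Ray ray = GenerateCameraRay(x, y, s); // 2. ray generation
                Hit hit = Intersect(ray);             // 3. ray tracing
                Color c = hit.valid ? Shade(hit, ray) // 4. shading
                                    : Color{};
                Color& px = film[static_cast<std::size_t>(y) * width + x];
                px.r += c.r / spp;                    // 5. accumulation
                px.g += c.g / spp;
                px.b += c.b / spp;
            }
}
```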

### Notable Integrator Types:
- `PathIntegrator`, `VolumetricPathIntegrator`, `DirectIntegrator`,
`AOVIntegrator`, etc.

---

## 🛰️ Ray Tracing Subsystem

- The **Scene** class provides core intersection queries:
  - Full surface interactions (`SurfaceInteraction3f`).
  - Visibility-only checks (`ray_test()`).
  - Preliminary intersection data (`PreliminaryIntersection3f`).
- Supports SIMD/vectorized execution and mask-based coherence for performance.

---

## 🧩 Plugin & Variant System

- **Plugins** cover a wide array of BSDFs, media, sensors, etc., each separately
compiled.
- **Variants** enable multiple execution modes: CPU vs. GPU, RGB vs. spectral,
differentiable rendering, etc.

---

## ✅ Summary Diagram

```text
[Dr.Jit Variant System]
          ↓
[Parser & Scene System] ← [Plugins: BSDFs, Shapes, Emitters, Sensors, ...]
          ↓
[Composed Scene Object: geometry, materials, lights, cameras, media]
          ↓
[Scene: acceleration + intersection API]
          ↓
[Integrators + Samplers + Sensors]
          ↓
[Ray Tracing]
          ↓
[Shading & Sampling]
          ↓
[Rendering Output]
          ↓
[Film/ImageBlock → Image File]
```

---

## 💡 Why it Inspires Refactoring

- **Clear modularization**: Core scene, plugins, and rendering separated.
- **Abstraction layer**: Normalized scene description before processing.
- **Extensibility**: Plugins and variants enable flexible composition.


Applied to POV-Ray, this suggests:

1. Introduce a **variant/backend system** for flexibility (CPU/GPU, plain vs.
differentiable).
2. Separate **scene representation** from raytracing entirely.
3. Organize features as **plugins**, e.g. SDL parser, BSDFs, shapes,
integrators.
4. Provide **clear interfaces** between parsing, scene assembly, raytracing, and
image output.

---

This modular, plugin-centric organization combined with variant-based
flexibility makes Mitsuba 3 a strong reference model.


