---
## High‑level flowchart
```
Start
  │
  ├─► 1) Read CLI / INI options
  │       (input file, libs, image size, AA, output type, etc.)
  │
  ├─► 2) Load scene sources
  │       (main .pov + #include files via library paths)
  │
  ├─► 3) Parse SDL
  │       a) Scan & tokenize
  │       b) Parse syntax → build scene data
  │       c) Evaluate expressions/macros/functions
  │
  ├─► 4) Build render structures
  │       (materials, textures, lights, camera, objects,
  │        bounding/acceleration, symbol tables)
  │
  ├─► 5) Prepasses (optional)
  │       a) Radiosity pretrace
  │       b) Photon shooting/sorting
  │       c) Media setup
  │
  ├─► 6) Render view (multi-threaded)
  │       a) Partition image into rectangles/tiles
  │       b) For each pixel: primary ray(s)
  │       c) Intersections, shading, secondary rays
  │       d) AA sampling & filtering
  │
  ├─► 7) Post & output
  │       (gamma/display; write PNG/TGA/BMP/PPM; alpha/bit depth)
  │
  └─► End
```
---
## Expanded outline (what each stage does and where to look)
### 1) Read command‑line / INI options
- POV‑Ray merges options from the command line and `.ini` files:
input file (`+I`), library paths (`+L`), image size (`+W`, `+H`), display
depth, alpha, partial regions, and more. These options drive both parsing and
rendering behavior.
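As a rough illustration, the same settings can be given as `+` switches on the command line or as INI options; a minimal fragment (file and path names are placeholders) might look like:
```
; equivalent switches: povray +Iscene.pov +Linclude +W800 +H600
Input_File_Name=scene.pov   ; +I
Library_Path=include        ; +L
Width=800                   ; +W
Height=600                  ; +H
```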
### 2) Load scene sources
- The main scene file is opened, and any `#include` files are resolved using the
current working directory and configured library paths. Parsing options also
honor version switches affecting language compatibility.
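For example, a scene header such as the following is resolved against the current directory and each configured library path, while `#version` selects the language compatibility level (the include files shown are the standard ones shipped with POV‑Ray):
```
#version 3.7;           // language-compatibility switch honored by the parser
#include "colors.inc"   // resolved via the current directory or a +L / Library_Path entry
#include "textures.inc"
```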
### 3) Parse SDL into an internal scene
**Lexing & tokenization**
- **Scanner** reads raw characters (handles encoding, whitespace, comments) and
emits lexemes.
- **Tokenizer** converts lexemes into tokens (identifiers, numbers, strings,
keywords, operators).
**Syntax & semantic parsing**
- The **Parser** consumes tokens and constructs the in‑memory scene:
objects, CSG, transforms, textures/pigments/finishes, lights, cameras, media,
and control structures (`#declare`, `#local`, `#macro`, conditionals, loops). It
manages symbol tables and evaluates expressions; user‑defined functions
are compiled to an internal representation for evaluation.
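As a small, hypothetical example, the following SDL fragment exercises several of those constructs (a symbol declaration, a macro, and a parse-time loop):
```
#declare Radius = 0.4;                // entered into the symbol table
#macro Ball(Pos, Col)                 // user-defined macro
  sphere { Pos, Radius pigment { color Col } }
#end
#for (I, 0, 4)                        // loop expanded during parsing
  Ball(<I, 0, 0>, rgb <I/4, 0.2, 1 - I/4>)
#end
```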
> Where to look in the tree: `source/parser/*` for scanner/tokenizer/parser;
`reservedwords.*` for language tokens.
### 4) Build render‑time structures
- Parsed scene data are organized into runtime structures used by the renderer
(scene graph, object instances, materials). Spatial data structures and bounding
information are prepared to accelerate intersection tests; views and task queues
are initialized for multi‑threaded rendering. (See the rendering engine
overview and scene/view/task classes.)
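This stage is largely automatic, but a few INI options influence it; the values below are illustrative only:
```
Bounding=on            ; enable automatic bounding/acceleration
Bounding_Method=2      ; 3.7: BSP tree instead of the classic bounding hierarchy
Bounding_Threshold=3   ; only bound scenes with more than this many objects
```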
### 5) Optional prepasses: global illumination & volumetrics
- **Radiosity** pretrace: samples indirect diffuse lighting and populates a
cache before (and during) final rendering. Controlled via `global_settings {
radiosity { ... } }` (pretrace steps, sample count, error bound, options to
save/load data); a combined example follows this list.
- **Photon mapping**: shoots photons from light sources, stores them
(surface/media photon maps), then sorts/builds lookup structures used during
shading for caustics and volumetric effects.
- **Media setup**: configures participating media (scattering/absorption) so that
volumetric lighting and fog can be integrated by ray marching during the render
stage. (For general capabilities, see the integration docs and engine overview.)
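An illustrative `global_settings` block that enables both prepasses might look like this (values are indicative only; objects additionally opt in to photons via their own `photons { target ... }` blocks):
```
global_settings {
  radiosity {
    pretrace_start 0.08   // resolution of the first pretrace pass
    pretrace_end   0.01   // resolution of the last pretrace pass
    count 150             // rays per radiosity sample
    error_bound 0.5
    recursion_limit 2
  }
  photons {
    count 20000           // photons shot from the light sources
    autostop 0
    jitter 0.4
  }
}
```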
### 6) Rendering the image (core ray tracing loop)
**Block/rectangle scheduling (multi‑threaded)**
- The renderer splits the image into rectangles (tiles). Worker threads pull
tiles from a shared queue and render them independently, which improves core
utilization and cache behavior.
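In 3.7 the number of worker threads and the tile size can be tuned from the INI file or command line, for example:
```
Work_Threads=8          ; +WT8, number of rendering threads
Render_Block_Size=32    ; +BS32, edge length of each tile in pixels
```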
**Per‑pixel workflow**
1. Generate primary camera ray(s) through the pixel.
2. Traverse acceleration/bounding to find the nearest intersection.
3. Shade the hit point:
- Direct lighting (light sampling, shadows/attenuation).
- **Radiosity** (indirect diffuse) lookup/update if enabled.
- **Photon map** estimates for caustics/volumetrics if enabled.
- Reflections/refractions, Fresnel, dispersion, normal/bump mapping,
procedural/textured pigments, media marching, etc.
4. Spawn secondary rays as needed (reflection/refraction/shadow/GI) with
recursion limits.
5. **Anti‑aliasing**: depending on the method, a pixel is adaptively supersampled
when the contrast between neighbouring samples exceeds the threshold, using
non‑recursive supersampling for `+AM1` or recursive subdivision for `+AM2`;
jitter is optional. The corresponding INI options appear below.
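For example (threshold and depth values are illustrative):
```
Antialias=on              ; +A
Antialias_Threshold=0.3   ; supersample where contrast exceeds this value
Sampling_Method=2         ; +AM2 recursive subdivision (1 = non-recursive)
Antialias_Depth=3         ; +R3, sampling/recursion depth
Jitter=on                 ; +J
```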
### 7) Post‑processing & file output
- Apply display/gamma handling as configured; update interactive preview if
enabled. Output is then encoded in the requested format (e.g., PNG, TGA,
BMP/SYS, PPM) with chosen bit depth and optional alpha, controlled by
`Output_File_Type`, `Bits_Per_Color`, `Output_Alpha`, and `Output_to_File`.
Partial‑region rendering and resume/abort are also supported through
options.
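Put together, a typical output configuration using the options named above could be:
```
Output_to_File=on
Output_File_Type=N    ; PNG (T/C = TGA, P = PPM, B = BMP, S = system-specific)
Bits_Per_Color=16     ; per-channel bit depth where the format supports it
Output_Alpha=on       ; include an alpha channel
```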
---
## Quick cross‑reference (docs & source)
- **Official documentation hub** (3.7/3.6): tables of all runtime options &
language features.
- **Parsing options** (input file, includes, versioning)
- **Output file options** (formats, bit depth, alpha)
- **Display/preview & gamma**
- **Anti‑aliasing** (threshold, methods, jitter)
- **Radiosity** (reference/how‑to)
- **Photon mapping** (implementation overview)
- **Rendering engine internals** (view, tiles, threads, trace pixel)
- **Repo root / status**
---