POV-Ray : Newsgroups : povray.pov4.discussion.general : <no subject>
From: clipka
Subject: <no subject>
Date: 29 Nov 2008 13:05:00
Message: <web.493183f8c23a6d9fba16390@news.povray.org>
Reading the feature request threads, I get the impression that there are two
differing opinions out here that are NOT necessarily contradictory:

- Some people argue that a rendering engine should be "lean & mean"

- Other people argue that optimized implementations of some features are a must,
and that the scene description language should remain easy enough for the 3d
artist

Where's the contradiction?

Both can be done IF a good extension mechanism is built into BOTH the core
engine AND the scene scripting language.

One plug-in could provide generic objects / patterns / shaders / what-have-you,
to be scripted via functions to get all the bells & whistles at the fingertips
of the 3d scene language "coder".

Another plug-in (or set of plug-ins) could provide primitive objects and some
standard patterns / shaders / etc., easily configurable via a few parameters,
exposed by the scene language (but defined by the plug-in) for the 3d scene
"artist" who doesn't want to be bothered with too much coding - and/or for
people who appreciate optimized code for such things.

And the very hard-core coders could write their own objects using C++, or maybe
even a programming language of their choice, to get optimized code for highly
specialized needs, avoid long scene file parsing times or whatever.


So, basically, an object could, for example, expose:
- A standard interface to get its bounding box
- A standard interface to test for a ray-to-object intersection, and retrieve
the associated parameters: location, unperturbed normal, and perturbed normal
(for mesh-like objects)
- A standard interface to build a "generic" object from itself (I'd suggest an
isosurface)
- Maybe one or two things I'm missing here because I'm not that much of a
raytracing expert
(This API could be extended in the future if some features turned out to be of
great benefit and easily optimizable for some objects; for "old" plug-in
objects, those features could be handled using a non-optimized isosurface
proxy.)
- PLUS a list of parameters, each with information such as:
    - parameter name
    - type
    - optional yes / no
    - address of function to call for set/get

For example, a sphere would have the following parameters:
    center / vector          / optional (might default to <0,0,0>)
    radius / positive scalar / optional (might default to 1)

(Any other parameter is, of course, not the job of the sphere object; the pure
geometric properties are sufficiently described.)
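
To make that a bit more tangible, here's a rough sketch of how such an
interface could look in C++. All names here are invented just for illustration
- this is not meant to be the actual API, only the general shape of it:

#include <cmath>
#include <functional>
#include <string>
#include <vector>

struct Vector3     { double x, y, z; };
struct Ray         { Vector3 origin, direction; };  // direction normalized
struct BoundingBox { Vector3 min, max; };

struct Intersection {
    double  distance;
    Vector3 location;
    Vector3 unperturbed_normal;
    Vector3 perturbed_normal;   // differs only for mesh-like objects
};

// One entry of the parameter list an object publishes:
// name, type, optional flag, and the set/get hooks.
struct ParamInfo {
    std::string name;                        // e.g. "radius"
    std::string type;                        // e.g. "positive scalar"
    bool optional;
    std::function<void(double)> set_scalar;  // a real API would also cover
    std::function<double()>     get_scalar;  // vectors, colors, etc.
};

// The standard interface every plug-in object would implement.
class PluginObject {
public:
    virtual ~PluginObject() = default;
    virtual BoundingBox Bounds() const = 0;
    virtual bool Intersect(const Ray& ray, Intersection& out) const = 0;
    // Fallback: express the shape as an isosurface function, so "old"
    // plug-ins keep working when new API features appear.
    virtual std::function<double(const Vector3&)> AsIsosurface() const = 0;
    virtual std::vector<ParamInfo> Parameters() = 0;
};

class Sphere : public PluginObject {
    Vector3 center{0, 0, 0};  // optional, defaults to <0,0,0>
    double  radius = 1.0;     // optional, defaults to 1
public:
    BoundingBox Bounds() const override {
        return { { center.x - radius, center.y - radius, center.z - radius },
                 { center.x + radius, center.y + radius, center.z + radius } };
    }
    bool Intersect(const Ray& ray, Intersection& out) const override {
        // standard ray/sphere quadratic, nearest hit only
        Vector3 oc{ ray.origin.x - center.x, ray.origin.y - center.y,
                    ray.origin.z - center.z };
        double b = oc.x * ray.direction.x + oc.y * ray.direction.y
                 + oc.z * ray.direction.z;
        double c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
        double disc = b * b - c;
        if (disc < 0.0) return false;
        double t = -b - std::sqrt(disc);
        if (t < 0.0) return false;
        out.distance = t;
        out.location = { ray.origin.x + t * ray.direction.x,
                         ray.origin.y + t * ray.direction.y,
                         ray.origin.z + t * ray.direction.z };
        out.unperturbed_normal = { (out.location.x - center.x) / radius,
                                   (out.location.y - center.y) / radius,
                                   (out.location.z - center.z) / radius };
        out.perturbed_normal = out.unperturbed_normal;  // no bump map here
        return true;
    }
    std::function<double(const Vector3&)> AsIsosurface() const override {
        Vector3 c = center;
        double  r = radius;
        return [c, r](const Vector3& p) {
            double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
        };
    }
    std::vector<ParamInfo> Parameters() override {
        return {
            { "radius", "positive scalar", true,
              [this](double r) { radius = r; },
              [this]() { return radius; } },
            // "center" would publish vector-typed hooks the same way
        };
    }
};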

The SDL would then expose these parameters to the user, say like this:

    object {
        shape = sphere {
            center = <3,2,0>;
            radius = 2 * radius;
        };
        material = diffuse_material {
            ...
        };
    };

(just as an example of how it COULD look), meaning:

    - create an object container.
    - create a sphere and add it to the container as its shape
    - set the center of the sphere to <3,2,0>
    - get the current radius of the sphere, multiply it by 2,
      and set it as the new radius
      (just a trivial example of how scripting might work)
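
In terms of the hypothetical parameter table sketched above, that last step -
"radius = 2 * radius" - would boil down to a lookup in the object's published
parameter list, a get, and a set:

// Continuing the sketch above: how an interpreter might execute the
// SDL line "radius = 2 * radius" against the published parameters.
void DoubleRadius(PluginObject& obj) {
    for (ParamInfo& p : obj.Parameters()) {
        if (p.name == "radius") {
            p.set_scalar(2.0 * p.get_scalar());  // get, double, set
            break;
        }
    }
}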


This approach would have some advantages, all of which boil down to more clearly
defining the various components of the ray tracer.

So the essential idea is to have very generic interfaces for everything that
needs to be done during raytracing - yet leave it up to other parts of the
software (or even the user himself) to decide whether to actually expose all
that genericity to the user, or instead encapsulate a good deal of it for
various common cases in order to gain speed and ease of use.



From: Reactor
Subject: Re: <no subject>
Date: 29 Nov 2008 15:55:00
Message: <web.4931ab586f92e856108f18ce0@news.povray.org>
"clipka" <nomail@nomail> wrote:
> Reading the feature request threads, I get the impression that there are two
> differing opinions out here that are NOT necessarily contradictory...


This is something I've been mulling over for a while, and I agree in concept,
but I think of it as having three major parts.  The most obvious one would be the
rendering engine, which, of course, will doubtless have many subcomponents, but
I am lumping all calculating tasks that are directly required for image output
under 'rendering engine.'  The next would be the parser, which is responsible
for all parsing tasks, including those that may not necessarily result in
rendering.  The parser would also have a standardized interface that allows
plugins to access and add scene objects before handing the parsed scene over to
the rendering engine.
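
Purely as an illustration, such a parser hook might look something like this in
C++ (names invented, nothing official, in the spirit of the plug-in interface
sketched in the previous post):

#include <memory>
#include <vector>

class PluginObject {  // stand-in for an object interface as sketched before
public:
    virtual ~PluginObject() = default;
};

struct ParsedScene {
    std::vector<std::unique_ptr<PluginObject>> objects;
    // camera, lights, global settings, ... omitted
};

// A parser plug-in gets to inspect and add scene objects after parsing,
// before the scene is handed over to the rendering engine.
class ParserPlugin {
public:
    virtual ~ParserPlugin() = default;
    virtual void OnSceneParsed(ParsedScene& scene) = 0;
};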

The last is the environment, which would contain an IDE-style interface for
direct use, but is not required for remote use.  The environment would have the
typical IDE niceties, like the option of having the syntax checked as the user
types (by having the parser do a non-rendering, partial read), but would
otherwise just be a colorful interface that calls the parser and renderer via
the command line as required.

I favor the idea of keeping them completely separate, so that changes to one
will not affect the other, and could simply be drop-in changes.  Some of the
things brought up, such as the idea of macros and objects being able to inspect
what is happening in the scene, fit well with this.  The parsing engine
could have an XML DTD that tells it what is valid syntax for a given #version.
The parser could populate a DOM as the user enters their scene (or shortly
before rendering), and the parser could recognize (or be told) what it needs to
'come back to.'  For example: Let's say we want a sphere to be green if the
camera is looking up (above the horizon), but red if the camera is looking
down, and we want the sphere to always be in the middle of the frame.

// begin code:
sphere{
 scene.camera.look_at, 1
  // even though the camera look_at vector has not yet been set, this is valid,
  //    because the parser knows where to get that property.

 pigment{ color rgb <1,1,1> }
// it doesn't matter what this is, we will change it below
 name "mySphere"
// new, optional name property that can be applied to any object
}


camera{
 location <0,5,-10>
 look_at <0,2,0>
}

#if( scene.camera.look_at.y > scene.camera.location.y ) // camera is looking above the horizon
 scene.objects.mySphere.pigment.color = <0,1,0>;
#else
 scene.objects.spheres[0].pigment.color = <1,0,0>;
 // also a valid reference, alternate syntax
 //  allows un-named objects to be referenced
#end

// end code

We could set variables and have the camera and sphere both respond to them,
but to maximize backward scene compatibility, I would prefer something similar
to the above.  With that, you could take old scenes and includes and rework them
with far less effort by being able to add a few commands in a single place using
the fully qualified names.
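
Under the hood, the "come back to" bookkeeping could be as simple as recording
unresolved references and patching them once the referenced property has a
value. A rough C++ sketch (invented names, vector properties only, error
handling omitted):

#include <array>
#include <functional>
#include <map>
#include <string>
#include <vector>

using Vec3 = std::array<double, 3>;

// One unresolved forward reference, e.g. a sphere center that names
// "scene.camera.look_at" before the camera block has been parsed.
struct DeferredRef {
    std::string path;                        // property being referenced
    std::function<void(const Vec3&)> apply;  // writes the value into place
};

class DeferredResolver {
    std::vector<DeferredRef> pending_;
public:
    // Called when the parser hits a reference it cannot evaluate yet.
    void Defer(std::string path, std::function<void(const Vec3&)> apply) {
        pending_.push_back({ std::move(path), std::move(apply) });
    }
    // Called once the whole scene has been read and every property has a
    // value; a still-missing property would be a parse error.
    void ResolveAll(const std::map<std::string, Vec3>& properties) {
        for (const DeferredRef& ref : pending_)
            ref.apply(properties.at(ref.path));
        pending_.clear();
    }
};

So the sphere in the example above would register a deferral for
"scene.camera.look_at", and the value would be written into its center as soon
as the camera block has been read.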

It almost goes without saying that this is also fully compatible with an
object/scene browser within the IDE that would allow one to track what objects
reference what other objects/macros/includes.

The above example is mostly at the parser/environment level of what happens
before a scene is parsed with intent to render.  It would also be nice to have
the ability to reference things that will be calculated by the rendering
engine, perhaps by using an asterisk to let the parser know that it will not be
able to determine the exact value until after a rendering task of some sort.

Someone used the example, a while ago, of a macro that would grow moss in dark
corners of an object as determined by the radiosity pretrace data.  That is a
good example of the type of data that could be referenced by a scene element,
but cannot be determined by the parser itself.  Only the rendering engine would
know.

I think accessing this data is facilitated by splitting the rendering engine
into separate parts, even separate applications - a radiosity-calculating one,
a photon one, etc. - that take the scene data, do their part, and return the
result (which could, in turn, be accessed by code within the scene before
the next engine is called).

This requires, of course, a method of specifying what order the engines are
called in and what is done with the result, which I would allow the user to
specify in the ini file.  In theory, if one wanted to have the moss grow in
dark corners created by other moss, then they could specify the radiosity step
twice, and ignore (or save) the first result from an image output standpoint
(this means that one could do a low quality radiosity pretrace, have the scene
respond to it, then do a higher quality trace).
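
As a sketch of what I mean by separate engines (hypothetical C++, invented
names; the real division of labor would of course need much more thought):

#include <memory>
#include <string>
#include <vector>

struct Scene { /* parsed scene data, omitted */ };

struct StageResult {
    std::string kind;  // e.g. "radiosity", "photons", "image"
    // payload omitted
};

// Each engine - radiosity, photons, the final trace - is one stage.
class EngineStage {
public:
    virtual ~EngineStage() = default;
    virtual StageResult Run(Scene& scene) = 0;
};

// The stage order comes from user configuration (the ini file); listing
// the radiosity stage twice gives the moss-on-moss effect above.
void RunPipeline(Scene& scene,
                 const std::vector<std::unique_ptr<EngineStage>>& stages) {
    for (const auto& stage : stages) {
        StageResult result = stage->Run(scene);
        // Hand "result" back to scene-level code here, so the scene can
        // react (grow moss, say) before the next stage runs.
        (void)result;
    }
}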


There are many other things that need to be addressed, of course - issues such
as object access modifiers (to prevent a macro from accidentally changing
objects elsewhere in a scene) and object storage classes (to tell the parser
whether or not to reparse an object between engine calls or even frames of an
animation) - but I do have ideas for those, too...

-Reactor



From: clipka
Subject: Re: <no subject>
Date: 1 Dec 2008 10:05:00
Message: <web.4933fc626f92e856f55cbdff0@news.povray.org>
"Reactor" <rea### [at] hotmailcom> wrote:
> The parser could populate a DOM as the user enters their scene (or shortly
> before rendering), and the parser could recognize (or be told) what it needs to
> 'come back to.'  For example: Let's say we want a sphere to be green if the
> camera is looking up (above the horizon), but red if the camera is looking
> down, and we want the sphere to always be in the middle of the frame.
>
> // begin code:
> sphere{
>  scene.camera.look_at, 1
>   // even though the camera look_at vector has not yet been set, this is valid,
>   //    because the parser knows where to get that property.
>
>  pigment{ color rgb <1,1,1> }
> // it doesn't matter what this is, we will change it below
>  name "mySphere"
> // new, optional name property that can be applied to any object
> }
>
>
> camera{
>  location <0,5,-10>
>  look_at <0,2,0>
> }
>
> #if( scene.camera.look_at.y > scene.camera.location.y ) // camera is looking above the horizon
>  scene.objects.mySphere.pigment.color = <0,1,0>;
> #else
>  scene.objects.spheres[0].pigment.color = <1,0,0>;
>  // also a valid reference, alternate syntax
>  //  allows un-named objects to be referenced
> #end
>
> // end code

This is a hybrid approach which won't work out:

- By referencing the camera location from within an object defined before the
camera actually is, you imply that the scene file is something static, where
forward references can always be made because "the parser knows where to get
that property".

- The code afterwards, which dynamically changes the sphere's pigment, implies
that the scene file is something dynamic, which can change during parsing.

The following code shows why that can't work:

sphere { scene.camera.look_at, 1 }
camera { location <0,5,-10> look_at <0,2,0> }
scene.camera.look_at += <0.1,0.1,0.1>;

Which "version" of the camera location should the sphere reference?

And if you do something like this, you're totally screwed:

sphere MySphere { scene.camera.look_at, 1 }
camera { location <0,5,-10> look_at scene.objects[MySphere].center + <1,0,0> }

Now the sphere's center references the camera's look_at, while the camera's
look_at references the sphere's center - a circular dependency with no
consistent way to resolve it.

So it needs to be decided whether the approach will be a static "scene markup
language", or a scripting language.

You might go for a mix like in dynamic HTML, but that's actually two separate
languages that "feed back" on each other, which doesn't make the combined
concept any easier to grasp.

I guess a scripting language will be perfectly fine, and the concept will
actually be much easier to grasp for the average user. If you ever did XSLT with
its static approach, you probably know what I mean - at least it gave me brain
haemorrhaging, virtually speaking.

> This requires, of course, a method of specifying what order the engines are
> called in and what is done with the result, which I would allow the user to
> specify in the ini file.

How about an approach like this - in a script-oriented world it should work:

sphere { <0,0,0>, 1 }
camera { location <2,0,0> look_at <0,0,0> }
render
result.save { png, "my_scene_x.png" }
camera { location <0,2,0> look_at <0,0,0> }
sphere { <0,1,0>, 0.5 color result.average_color }
render
result.save { png, "my_scene_y.png" }

The question here is how to make this user-friendly, so that the user doesn't
have to type the standard commands for every simple scene that is just supposed
to be rendered, and that's it.

Maybe something like:

process {
  sphere { <0,0,0>, 1 }
  camera { location <2,0,0> look_at <0,0,0> }
  engine.raytrace
  engine.result.save { png, "my_scene_x.png" }
  camera { location <0,2,0> look_at <0,0,0> }
  sphere { <0,1,0>, 0.5 color result.average_color }
  engine.do_photons
  engine.do_radiosity
  engine.raytrace
  engine.result.save { png, "my_scene_y.png" }
}

If the scene doesn't contain a "process{}" statement, a default one is
constructed (or simply implied) around it:

process {
  // actual scene file content
  engine.do_photons
  engine.do_radiosity
  engine.raytrace
  engine.result.save { ... }
}


So what we'd be doing this way is constructing a language to script the
individual components of the renderer: scene builder, photon shooter, radiosity
gatherer, output file writer, and so on.

