From: Eugene Arenhaus
Subject: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 08:18:43
Message: <3C3D9297.6641E0E4@avalon-net.co.il>
Hello.

Here's my two cents. 

Comments, discussion, corrections, additions are welcome.

"Waa, who needs [X]" and "Blasphemy!!!" are not welcome. :)

So, without much further noise:

-----

What to look for in PoV 4: A design approach

By Eugene Arenhaus

Persistence of Vision 3, as it is currently being pulled together into its
final state, is already fit to handle many tasks, and is truly a
universal and flexible tool. 

However, in certain places its flexibility is inherently limited by its
design, and as a result the user is forced to resort to awkward
tweaking in order to achieve the desired effect. (A prominent example is
using file I/O to circumvent the effects of macro parameter
precalculation.) 

We believe that the artists should not be forced to struggle with their
tools; therefore, efforts should be made to remove the existing and
potential limitations to creativity. 

PoV4 should therefore be made as flexible as possible, because it is the
very type of tool that users constantly apply in more and more creative
ways, making it do things that no one had thought possible. We must
encourage this and make it easy to follow.


We believe that basing PoV4 internal design on the following four
principles should be sufficient to achieve this goal:

1.	Unified, interchangeable scene object model
2.	Complete accessibility of object properties from scene description
language
3.	Instancing support
4.	Full scripting to replace macros


Explanation of the principles follows. (For those who are interested in
how the scene description language may look with these principles in
full action, there's an outline in the Appendix.)


Principle I. Unified, interchangeable scene object model

The principle: 

All objects that make the scene should be as interchangeable as
possible, with as few features unique to a single object as possible.

How PoV3 violates it:

Example 1. 
PoV3 has a camera object that is used to render the scene. However, it
cannot be used in any other role in the scene, so making a scene that has
a television screen viewing the same scene requires trickery with image
maps and double-pass rendering, or even recreating copies of the scene.

Example 2.
PoV3 has dedicated texture maps that can be combined to create patterns
on objects' surfaces. However, geometry cannot be used for that; it is
not possible to draw three red circles in a row on the surface of a white
cylinder unless you use an image map (which causes artifacts) - not
even if you use CSG to merge three red cylinders with one white one and
then clip it all with another cylinder - because CSG has side effects on
color. Geometry-controlled pigments are simply not possible in general.

Example 3.
PoV3 has a flat height field object whose shape is calculated from an
external image map. It does not, however, allow generation of height
fields from built-in procedural textures, so when such a need arises, it
has to be met by rendering an image with the texture first, then
using the saved image to control the height field - and the height field
uses its own image loader, different from the one used by the standard
image map texture object and supporting a different set of file formats.
Nor does it support height field shapes other than flat.

Why that is bad:

It increases confusion in interfaces; it destroys the uniformity of the
scene language, making it harder to learn with dozens of differently
syntaxed commands; it provokes redundancy in implementation; it limits
possible uses of the software; and it impedes creativity, instigating
voodooist trickery to achieve desired effects.

Why with Principle I it shall work better:

By making geometry, cameras, textures et al. use a single interface, we
can achieve unprecedented flexibility in procedural scene building.
Everything becomes interchangeable: geometry might be used to define
objects' textures, and textures would be useful for controlling anything
from surface colors to geometry (in the form of height fields, for
instance, or in the more complex form of hypertextures, or to control
media density). More importantly, it will offer more creative freedom by
lifting artificial limits, and make the scene language easier to learn
by making everything more uniform and consistent.

In addition, there will be the benefit of reduced redundancy, so
programming becomes less voluminous and there is less code to debug,
meaning faster development and fewer bugs. The latter advantage,
however minor it may seem, is drastic: once the few required interfaces
are finished and debugged, the system becomes robust, and adding new
features becomes a matter of implementing just the new functionality.
The goal here is an efficient and universal object framework, and the
fewer interfaces are in use, the more stable and lightweight such a
framework is going to be.


Principle II. Complete accessibility of object properties from scene
description language

The principle: 

All properties of objects in the scene should be accessible from the
scene description language, both for reference and assignment.

How PoV3 violates it:

There is no way to access an object's properties from the scene
description language.

Why that is bad:

No access to objects' properties means that: 

1.	It is impossible to borrow parameters from other objects, so things
like positioning objects relative to other objects have to be done
exclusively through the macro language, using external variables. In many
cases lifting this limitation would result in cleaner scene code and a
wider range of effects. Iterative compound objects and animations would
benefit from this in particular.
2.	It is impossible to assign parameters to objects in any way other
than declaring new objects, which is especially crippling for parametric
animations, which still have to be re-parsed for every frame, wasting
time and hardware resources.
3.	It impedes many possible creative uses of the software that would be
available if this feature were present.

Why with Principle II it shall work better:

Making all object parameters available to the programmer would make it
possible to make all object parameters relative, not limited to
pre-declared ones. This would not only allow complete, truly parametric
control of animation, but also make parametric creation of scenes much
easier, and allow many effects that otherwise cannot be done without
heavy trickery, or at all; for instance, it would enable the artist to
use texture values to control the size and positioning of objects, where
currently this has to be done using either random numbers or external
programs. (That would be useful in modeling grass, dust, asteroid
fields, and other particle effects - and that's just one possible and
rather obvious use of the feature.)



Principle III. Instancing support

The principle: 

It should be possible to use references to objects instead of object
copies.

How PoV3 violates it:

There is no instancing or referencing except limited referencing in
triangle meshes. Every object is a separate and independent copy.

Why it is bad:

Any scene with enough complex objects becomes challenging in terms of
available memory. (A single tree with twigs and leaves may already be
big enough, but a forest quickly becomes unmanageable, because every
tree is a separate, complete set of objects.) 

Why with Principle III it shall work better:

It would allow the software to store only one copy of an object while
showing possibly hundreds of thousands in the scene, without wasting
memory on hundreds of thousands of redundant copies. A reference or
"clone" in its simplest form need only store a pointer to the original
object and the clone's transformation data; even that would make it
possible to populate a huge forest with clones of three or five
different tree models in varied transformation states, at the memory
cost of one transform matrix per tree instead of a complete set of
twigs. Referencing would also make it much easier to control multiple
instances of objects (like textures reused in the scene) and enable full
control over the use of world versus local coordinates. Referencing
would also be invaluable for controlling particle systems.



Principle IV. Full scripting to replace macros

The principle:

The scene description language should be or include a full-fledged
scripting language.

How PoV3 violates it:

PoV3 only has limited macro support, with parameters strictly
precalculated and no support for true functions.

Why this is bad:

This significantly impedes parametric generation of scenes and
animation, inducing ugly trickery, down to abusing the macro language's
file I/O capabilities to circumvent the lack of true function
declarations. Even with all the trickery, it is frequently extremely
difficult to achieve the desired effect.

Why with Principle IV it shall work better:

It shall remove this obstacle and allow complete programmatic control
over the scene, boosting the productivity of animators first and
foremost, but benefiting everyone else as well.



Interlude


Above, we have talked a lot about streamlining the ways of doing things,
making things behave uniformly, giving more control, replacing tricks
with consistent features, and so on, presenting these as benefits.
Clearly, this view of what would benefit the future PoV comes from some
underlying logic, and that logic we'll clarify now.

The PoV raytracer is used by creative people to do creative work;
therefore, the more creative freedom it offers the user, the better it
performs its function. 

The PoV raytracer is primarily an artistic tool; therefore, its value as
a tool depends on the same qualities that all artistic tools share:
ease of use, control, efficiency, and versatility. 
In addition, PoV is a universal tool, which means it can be used in many
ways, including innovative ones. 

We hold with these values, and we believe that the direction of
improvement for tools like PoV is to make them more consistent, easier
to use, more universal, and less restricting. The tool should allow,
tolerate, and encourage its use in innovative and even unconventional
ways, so that more creative energy can be spent on actual work instead
of on inventing clever tricks to circumvent the tool's limitations. We
also trust the creative user to make good use of the freedom provided by
such a tool, not to be intimidated by it.

Hence this attempt to outline the design of the raytracing framework
that would be universal, robust, consistent, and provide as much
creative freedom as possible.



Approach to coding


All this will do any good only if we can really make everything in the
scene programmatically compatible with everything else. This is where
interchangeability comes into play: 

1.	Unified scene object interface
2.	Unified ray casting procedure
3.	Unified data type

(Naturally, the standard OOP principles like encapsulation remain valid
as well.)


Unified scene object interface

The first step to full interchangeability is making all scene objects
share the same universal interface. "All" here means "all". Geometry,
textures, shaders, and even cameras should be accessed through one set
of routines, ensuring that everything can be used in place of everything
else - or at least almost everything (since some functionality cannot be
easily substituted, like the formation of the output image by a camera
object). The easiest way to achieve this is to define the interface as a
single C++ class and make all scene objects its descendants. Care should
be taken to design that interface to be as simple and universal as
possible; all implementation details ought to be left to the actual
objects. The exact design is open to discussion, but we believe it
should have at least one capability: to form a scene tree out of single
objects; and at least one function: Trace. 


Unified ray casting procedure

The crucial point of the whole raytracer will be distributed ray
casting. Every scene object will implement the small part of raytracing
relevant to it in a single Trace routine.

Trace's input will be a single ray.

Trace's output will be a single ray.

In fact, it will be one and the same ray object.

The ray object would carry the information on at least two important
things: direction vector and color. Direction vector is the "input" part
of the ray: we cast the ray somewhere by specifying its direction. Color
is the "output" part: the object may change it according to its
properties. Additional data may also be carried with the ray, anything
at all: normal vector, reflectivity, IOR, slope, photon density, etc.
etc. etc.
The Trace function would receive the ray and decide what to do with it,
checking its direction and setting its color. What happens inside is a
black box: the raytracer does not care what the object does; it casts
the initial ray (feeds it to the scene tree), lets the objects play
with it, and when the calculation is over, it collects the color. That's
all.

Of course, inside every object the function may do whatever it pleases,
as simple or as complex as needed. A texture object's Trace may simply
treat the ray's origin as a sampling point and return its computed value
at the specified coordinates. A geometry object's Trace, on the other
hand, would calculate the intersection point of the ray with its
geometry, then feed the intersection point to its shader as another (or
even the same) ray. A shader's Trace might take that ray, feed it to its
texture parameter object, collect the returned color value, then, using
the normal value (passed to it by the geometry object as extra data in
the ray object and still present there), cast several more rays (by
calling the scene object's upper-level Trace for each one with new ray
objects), calculate the resulting complex color, and return it. Note
that you could safely "ask" a complete object with geometry and shader
to return its color inside its volume; it would do so by bypassing the
normal shader and passing the "ray" (a sampling point, in this case)
directly to the texture. (Such "sampling" behavior may be signaled by
setting the ray's direction vector to zero length, for instance.)

Implementing the Trace function in every scene object would allow using
them all in all possible roles. Use a camera as a texture? A texture for
a height field? A sphere as a texture? Of course - just call its Trace
function. But to ensure that all objects interpret the data carried by
the ray correctly, we need to make them all aware of what the ray
carries. In comes the unified data type.


Unified data type

We suggest that the sole data type used by PoV4 should be the vector.

If we look at PoV3, we'll see that some things are already represented
in the scene as vectors: coordinates and colors at the very least. Some
things also make use of arrays, like gradient definitions and meshes'
triangle lists. Since a vector would be implemented as an array of
floating-point numbers, it would be wise to make that the principal
unified data type of PoV4 - even when representing single numbers. The
ray would essentially be a vector as well, only with special meaning
attached to its components. The exact meaning of the ray vector's
components should be consistent throughout the raytracer, so that the
tracer routines don't get confused, but flexible at the programming
level, so that adding features that need to pass extra data with the ray
would not cause unneeded complications. 
That could be done by accessing the components indirectly by name rather
than by address, with the aid of a central "component registry" object
or something similar.

For instance, the ray vector may have the following basic structure:

	<oX,oY,oZ, dX,dY,dZ, R,G,B,A, n>

with oX,oY,oZ being the point of origin (which is also the sampling
point when the direction part is zero), dX,dY,dZ specifying the vector's
direction, R,G,B being the RGB color components, A being alpha, and n
the surface normal. Then if we additionally need to pass the surface
slope, we can extend the ray data by "asking" the component registry to
handle an additional Sl component, then PhD for photon density, and so
on - the ray vector will grow, but the growth will never interfere with
functionality implemented before. Not all components will be needed or
modified by all tracer routines, of course - so a tracer would either
leave them intact or substitute a default value; both are easy to
implement in an object-oriented language like C++. For example, a
texture would not modify the n (surface normal) component, as it has no
surface, while a geometry primitive would.

The unified vector ray type will enable universal communication between
the raytracer's parts, enabling any number of possible interactions: if
you want to use a texture as geometry, or geometry as a texture, you
will be able to, thanks to the uniformity of data types and object
interfaces. Adding new functionality also becomes extremely easy,
ensuring that PoV4 will always stay up to date and incorporate the
latest technological inventions.



Yes, but can it be done?

I (let's drop the formal "we" now) realize that all of the above may
sound outlandish, or too theoretical, or plain impossible to implement.
I rely on your capacity as a reader to understand, and on your patience
not to rush into battle against some feature or principle that seems
impossible or blasphemous. I do not think that PoV3 is bad; it is
already excellent, but there is much room for improvement still - there
always is. Please look twice.

I claim no authority, of course, and welcome scrutiny, but my experience
with software design suggests that this is indeed a practical way of
building a complicated piece of software; that it can be implemented -
more, that it can save a lot of development time and effort and ensure
that the project stays manageable and free of internal clashes for a
very long time; and that it remains clear and extensible. The design
principles I used in this proposal were tested on many projects of a
similar nature (all were complex hierarchies of various objects sharing
a common interface), and their viability is clear. They do require very
careful, meticulous, and precise planning in the early stages, but once
their application to a project has been designed, the implementation
becomes easy (and late refinements to the design, though they do happen,
are seldom big and rarely affect vast parts of the project).

I would gladly answer questions, explain points, and correct mistakes if
asked. Together we can make the ultimate raytracing tool to date - if we
take it seriously and commit ourselves to it.

Dixi.



Appendix. Scene language outline.

The scene description language syntax is a major challenge because of
the need to maintain backward compatibility with PoV3. 

The bright point is that strict compatibility may not be required, since
the PoV team has already announced that the language of PoV4 may not be
fully compatible with that of PoV3.

However, there is a difference between "completely incompatible" and
"somewhat different". There is still the issue of the learning curve:
users of PoV3 should be able to make an easy transition to version 4.
And there is no point in throwing out the huge body of work already done
for PoV3: just as a lot of existing PoV3 source code may well be reused
in PoV4, old scene files should also be reusable with the new
incarnation of the tracer, with as little adjustment as possible.  

Therefore, we suggest making the PoV4 scene description language similar
to that of PoV3 in its principal syntax, while allowing some liberties.

The PoV4 scene language would use the same object-based approach as in
version 3:

	sphere { <0,0,0> 1 }

However, we suggest using (possibly optional) named parameters and
objects:

	sphere "MySphere" { center <0,0,0> radius 1 }

and optional use of semicolons for separators:

	sphere "MySphere" { center <0,0,0>; radius 1 }

This would make for significantly better readability and allow
parameters to be given in any order and in readable fashion:

	sphere { radius 1; center <0,0,0> }
	sphere { center <0,1,0>; radius 1 }

or even allow multiple alternative ways to declare objects:

	sphere  { center <0,0,0>; radius 0.5 }
	sphere  { center <0,0,0>; diameter 1 }
	sphere  { from <-0.5, -0.5, -0.5> to <0.5, 0.5, 0.5> }

Referencing objects by name should also be allowed in declarations. For
instance, this statement would create a new sphere object with all the
parameters of UnitSphere, but in a different place:

	sphere "UnitSphere" { center <0,0,0>; diameter 1 };
	UnitSphere  { center <1,1,1> }

We can also declare and use new named objects derived from UnitSphere,
to make management of complex objects easier:

	UnitSphere  "MySphere" { radius 2.0 }

In cases where we want to use derivative objects but do not want the
original to be present in the scene, we could use a prototype
declaration:

	prototype sphere "UnitSphere" { diameter 1.0 }

And in cases where we want not a new object derived from an existing
one, but just a reference to an existing one, we can use the clone
declaration:

	clone MySphere { center  <1,0,0> };
	clone MySphere { center <2,0,0> };
	clone MySphere { center <3,0,0> };

Such cloned objects would consist only of transformation data and a
reference to the original object; therefore they probably won't be fully
modifiable - but they will be very memory-conserving.

Access to the parameters of scene objects would use conventional dot
syntax:

	sphere "UnitSphere" { center <0,0,0>; diameter .5 }
	sphere "DoublesizeSphere" { center <0,0,0>; radius UnitSphere.radius*2 }

In the above example, DoublesizeSphere is always going to be double the
size of UnitSphere, no matter how big we declare UnitSphere. One object
may be controlled by another this way, or it may just be used as a
convenience for easier tweaking.

In addition, it should be possible to assign parameter values directly
and on the fly, outside of declarations. This is going to serve
animators best of all:

	MySphere.radius <- MySphere.radius * 1.1;

Objects should offer all declarable parameters for such access, and may
offer additional calculated parameters as well. The most obvious such
parameters are color, slope, etc.:

	MyTexture.ColorAt(<0,0,0>)

For specifying colors, textures, and other non-geometric properties of
objects, we propose the concept of channels. A channel is simply a way
to specify some variable parameter by borrowing it from an additional
object, usually a shader or texture object, much like the use of another
named parameter:

	shader GreenMarble { .... } ;
	UnitSphere { radius 2; texture GreenMarble }

The parameter named texture that we used here is essentially the finish
and pigment commands of PoV3 combined; the UnitSphere object, being
placed at the scene level, is responsible for geometry, while the
GreenMarble shader object, being placed in the texture channel of
UnitSphere, becomes responsible for its surface's color properties, but
not its reflective properties. (Note that combining finish and texture
into a single shader is only a proposal that we feel would be consistent
with the proposed object model and would facilitate scene housekeeping;
using separate channels for finish, color, and bump mapping, as in PoV3,
may be retained for better backward compatibility.)

And this is where interchangeability starts to come into play. We could
write:

	shader "RedCircle"  union { 
	  shader GreenMarble;
	  sphere { center <0,0,0>; diameter 1; shader RedMetal }
	}

and get a three-dimensional texture that is green marble with an
embedded ball of red metal in the middle. Here geometry controls the
appearance. If you assign such a shader to a box smaller than the red
ball inside it, you get a green marble box with a red metal circle on
each side:

	box { center <0,0,0>; dimensions 0.7; shader RedCircle }

Replace the ball with a text object, and the box will sport a metallic
inscription seamlessly inset into marble.

This works simply because textures also have geometric properties in our
proposed scheme. A texture is an infinite space (think of the infinite
plane in PoV3; this is basically the same paradigm). As rays hit the
box, the box object requests the surface properties at the intersection
points from the shader, which in turn checks into which of its two
components (the infinite space-filling texture or the sphere) the
intersection point falls, then requests the surface properties from that
component, and so on until it actually gets the final answer.

The input and output of a channel is simply an n-dimensional numeric
vector. It should be made as independent of n as possible: channels that
need only one number should still be able to receive input from
procedures that return a three-component vector, and vice versa. In the
above example, the dimensions parameter would normally receive three
numbers, for width, height, and depth; but it is satisfied with the
single 0.7, which gets expanded to <0.7, 0.7, 0.7>. There may also be
special objects to expand or narrow output to match a channel's
expectation: for example, a UV map object that creates a
three-dimensional texture from a two-dimensional image would be such an
object:

	UnitSphere { 
	 texture  uvmap { spherical; image MyImage }  
	}

Just as well it could wrap on a sphere an image not stored in a file but
produced by a camera:

	UnitSphere { 
	 texture uvmap { 
	  spherical; 
	  image MyCamera { look_at MyBox.center }  
	 }
	}

Channels are introduced as a convenient method of controlling any
parameter of a scene object. In fact, it is advisable that every
parameter be controllable by another object, i.e. that everything be a
channel. The beauty of this solution is that it lets geometry control
textures, textures reshape geometry, animation follow a texture's
shapes, and endless other combinations. 
Here the animated object's position in space is controlled by the red
component of a texture:

	MyObject { 
	 center <- (MyAnimPos +  ATexture.redAt(MyAnimPos) - .5 )  
	}

In the above example, the position calculated by MyAnimPos (which is
just another object, perhaps a controller that calculates the position
on a path from each frame's animation timer value) is modified by
sampling the red component of the texture called ATexture at that
position and adding it to the position (offset by .5 so the modified
value differs from the original by between -0.5 and +0.5). The object
would jitter randomly on its way, but this randomness would be
controlled by the parameters of the texture. Replace noise with a wood
texture, and you'll get a regular wavelike motion. Here is another
example, in which the shader is controlled by geometry:

	MyHeightfield {
	 shader gradient {
	   [0.0: shader BrightIce];
	   [0.2: shader BareRock];
	   [1.0: shader GreenGrass];
	   controller MyHeightfield.slope
	  } 
	}

We might just as well use the slope parameter of MyHeightfield to
control something else: for instance, the placement of tree objects on
the height field (e.g. by sampling the green color component of the
shader's output: the greener, the higher the chance of a tree there).

Finally, scripting comes into play. We've already used an expression
like:

	sphere "DoubleSize" { 
	 center <0,0,0>; 
	 radius UnitSphere.radius*2 
	} 

to make the radius of a sphere an exact double of some other sphere's
radius.

However, we may rewrite it using scripting as:

	sphere "DoubleSize" { 
	 center <0,0,0>; 
	 radius <- UnitSphere.radius*2 
	} 

Here the expression in the radius parameter (much unlike PoV3) gets
incorporated into the scene as an actual expression, not a number.
Rather than precomputing UnitSphere.radius*2 at the parsing stage and
feeding that to our sphere as its radius, PoV4 would attach this
expression to the sphere's radius parameter as scripting code. This
ensures that no matter how we change the first sphere's radius, our
computed one will always remain its double. Animators would especially
appreciate this degree of dynamic control over the scene.

Of course, the script language would contain the usual programming
functionality besides expressions: conditionals, loops, blocks,
variables, procedures, etc., replacing the macro language of PoV3. The
important point, however, is that a macro language is suitable only for
building the scene, whereas a script language works inside the scene.
Macros are precomputed; script is post-computed. This opens many
advantages: primarily, it allows complete procedural control over the
scene. For example, randomizing the positions of objects would be an
easy task with script:
	for (i = 0; i < 100; i++)  { 
	 sphere { 
	  radius .5; 
	  center < rnd(-.1,.1),  rnd(-.1,.1), i> 
	 };
	}

This would produce 100 spheres in a row along the Z axis, jittered by up
to 0.2 in the plane perpendicular to Z. Of course, this can be done with
macros even in PoV3; but suppose we want to reuse such a thing:

	function SphereRow(count, rad, xoffs, yoffs, zstep) {
	 for (i = 0; i < count; i++)  { 
	  sphere { radius rad; center <xoffs, yoffs, i * zstep> };
	 }
	}

and then want to call it with random offsets, i.e. jitter:

	SphereRow(100, .5, <- rnd(-.1,.1), <- rnd(-.1,.1), 1 );

-- then we would immediately see the difference. The script (as
proposed) would produce a completely different squiggle every time it is
called. PoV3 scene language macros produce copies of the same straight
row of spheres, because the macro parameters are always precalculated
once. With script, we can pass small snippets of code instead (note the
arrow glyphs in the call) and extend the flexibility greatly. Of course,
with script we can still pass precalculated parameters just as before.

Of course, this outlines only the style and feel of the language. 
Exact syntax, names, and symbols are left open to discussion and
development. For example, instead of the "C-style" function calls shown
above, it may be desirable to make them blend with the scene description
syntax:

	SphereRow {
	 count 100;
	 rad .5;
	 xoffs <- rnd(-.1,.1);
	 yoffs <- rnd(-.1,.1);
	 zstep 1;
	}

May the possibilities never end!



From: Ron Parker
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 08:40:58
Message: <slrna3r6fb.tih.ron.parker@fwi.com>
On Thu, 10 Jan 2002 15:09:43 +0200, Eugene Arenhaus wrote:
> Hello.
> 
> Here's my two cents. 
> 
> Comments, discussion, corrections, additions are welcome.
> 
> "Waa, who needs [X]" and "Blasphemy!!!" are not welcome. :)

What about "where's your working, tested, and debugged source code for
these patches again?"

--
#macro R(L P)sphere{L __}cylinder{L P __}#end#macro P(_1)union{R(z+_ z)R(-z _-z)
R(_-z*3_+z)torus{1__ clipped_by{plane{_ 0}}}translate z+_1}#end#macro S(_)9-(_1-
_)*(_1-_)#end#macro Z(_1 _ __)union{P(_)P(-_)R(y-z-1_)translate.1*_1-y*8pigment{
rgb<S(7)S(5)S(3)>}}#if(_1)Z(_1-__,_,__)#end#end Z(10x*-2,.2)camera{rotate x*90}



From:
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 08:45:42
Message: <of5r3u8nmsovrb46njc9t4orun4dvrgj8c@4ax.com>
On Thu, 10 Jan 2002 15:09:43 +0200, Eugene Arenhaus <eug### [at] avalon-netcoil>
wrote:

> Here's my two cents. 

It's rather two thousands.

> Comments, discussion, corrections, additions are welcome.

Not too much

> We believe that basing PoV4 internal design on the following four
> principles should be sufficient to achieve this goal:

Who is "we"?

> 1.	Unified, interchangeable scene object model

Unified? Is the 3ds format somehow unified? If you want a POV reader, just
write it. The specification is available.

> 2.	Complete accessibility of object properties from scene description language

Smart scripting already gives you that

> 3.	Instancing support

?
Are you talking about render farms? That was discussed many times.

> 4.	Full scripting to replace macros

I only partially understand what this means, but probably yes.

> Example 1. 
> Example 2.
> Example 3.

Do you know the POV 3.5 features? Please study them.

> I (let's drop the formal "we" now)

Finally, I don't like choirs ;-)

> Therefore, we suggest

Again ?

> May the possibilities never end!

It seems you have deep knowledge of languages. Try writing a parser for your
syntax and a converter to the old syntax; if everything is possible, then that
should be too. What you suggest is a very radical change to the SDL, and imo it
is a different language. If it is good enough and gives the same possibilities,
why force so many povers to learn all the things/tricks again? Why break all
the exporters, making POV 4 rendering impossible? Why break all the old include
files? Some people still work with POV 2 scripts.

ABX
--
#declare _=function(a,b,x){((a^2)+(b^2))^.5-x}#default {pigment{color rgb 1}}
union{plane{y,-3}plane{-x,-3}finish{reflection 1 ambient 0}}isosurface{ //ABX
function{_(x-2,y,1)&_((x+y)*.7,z,.1)&_((x+y+2)*.7,z,.1)&_(x/2+y*.8+1.5,z,.1)}
contained_by{box{<0,-3,-.1>,<3,0,.1>}}translate z*15finish{ambient 1}}//POV35



From: Rick [Kitty5]
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 09:02:26
Message: <3c3d9ef2$1@news.povray.org>
> If it is good enough and gives the same possibilities why force so
> many povers to learn all things/tricks again ? Why break all exporters to
> disallow POV 4 rendering ? Why break all old include files ? Some people
> Some people still
> work with POV 2 scripts.

seeing as every incarnation of POV selectively breaks previous versions of the
SDL, why should it be a concern?


--

Rick

Kitty5 WebDesign - http://Kitty5.com
POV-Ray News & Resources - http://Povray.co.uk
TEL : +44 (01270) 501101 - FAX : +44 (01270) 251105 - ICQ : 15776037

PGP Public Key
http://pgpkeys.mit.edu:11371/pks/lookup?op=get&search=0x231E1CEA



From:
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 09:19:57
Message: <0b8r3u8v963josbslh1dv19l0alubiv2m6@4ax.com>
On Thu, 10 Jan 2002 14:00:04 -0000, "Rick [Kitty5]" <ric### [at] kitty5com> wrote:
> seeing as every incarnation of POV selectively breaks previous versions of the
> SDL, why should it be a concern?

afaik it doesn't break, it extends

you can still use poly and quadric even though the same is possible with isosurface
you can still write "declare" instead of "#declare"
you still have mesh{} even though mesh2{} gives you the same object
you can still play with a union of triangles instead of a mesh
you can still play with spherical, onion and other patterns instead of the new
function pattern (which allows redoing almost all the old patterns)
the only big removal I remember is halo{} - and I'm sure there was a reason

ABX



From: Christoph Hormann
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 11:15:09
Message: <3C3DBDEF.FC5751AB@gmx.de>
Eugene Arenhaus wrote:
> 
> Hello.
> 
> Here's my two cents.
> 
> Comments, discussion, corrections, additions are welcome.
> 
> "Waa, who needs [X]" and "Blasphemy!!!" are not welcome. :)
> 
> [...]

Since you seem to have put quite a lot of work into this, I wonder why you
did not include any information about yourself and the reason for your
interest in POV.  A lot of what you write indicates that you don't have
much practical experience with POV-Ray and don't know much about the
recent feature discussions and development.

Some points:

- there are features for using object geometry in patterns (object
pattern) and using patterns directly for objects (function image type and
isosurfaces)

- 'instancing' (the clone/refer patch) has been discussed and planned before,
but it isn't as simple as it might seem.

- your language change suggestions involve a lot of additional keywords
and much longer scene files, which makes learning the language and writing
scenes no easier.

Christoph

-- 
Christoph Hormann <chr### [at] gmxde>
IsoWood include, radiosity tutorial, TransSkin and other 
things on: http://www.schunter.etc.tu-bs.de/~chris/



From: marabou
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 11:38:05
Message: <3c3dc36d@news.povray.org>
Eugene Arenhaus wrote:

> Hello.
> 
> Here's my two cents.
> 
> Comments, discussion, corrections, additions are welcome.
> 
> "Waa, who needs [X]" and "Blasphemy!!!" are not welcome. :)
> 
should this become a discussion about art or technique?



From: Warp
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 11:56:39
Message: <3c3dc7c7@news.povray.org>
Eugene Arenhaus <eug### [at] avalon-netcoil> wrote:
: Example 2.
: PoV3 has dedicated texture maps that can be combined to create patterns
: on objects' surfaces. However, geometry cannot be used for that; it's
: not possible to draw three red circles in a row on the surface of white
: cylinder, unless you use an image map (which causes artifacts) - not
: even if you use CSG to merge the three red cylinders and one white and
: then clip it all with another cylinder - because CSG has side effects on
: color. Geometry-controlled pigments are just not generally possible.

  Not true.
  You can create an extremely wide variety of geometrical patterns with
functions, the object pattern, or a combination of both (for example, to
transform the shape of the latter with the former).
  It is perfectly possible to make three red circles in a row on the surface
of a white cylinder with functions.
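As a rough illustration of the kind of per-point evaluation a function pattern performs, here is a hypothetical Python stand-in that stamps three circles along a unit cylinder's surface. The function name and the (theta, z) parameterization are assumptions for the sketch, not POV-Ray syntax:

```python
import math

def circle_row_pigment(theta, z, centers_z, radius=0.3):
    """Return 'red' if the surface point (angle theta, height z) of a
    unit cylinder lies inside one of the circles stamped along the
    theta = 0 line at heights centers_z, else 'white'.  This mimics
    what a function-based pattern computes at each shading point."""
    # Arc distance around the cylinder from the circles' centre line,
    # wrapped into [-pi, pi] so the pattern is seamless.
    x = math.atan2(math.sin(theta), math.cos(theta))
    for cz in centers_z:
        if math.hypot(x, z - cz) < radius:
            return "red"
    return "white"

centers = [-1.0, 0.0, 1.0]  # three circles in a row along the axis
```

In POV-Ray terms this is the body one would give to a `function` pattern driving a two-entry color map; geometry controls the pigment without any image map.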

: Example 3.
: PoV3 has a flat height field object whose shape is calculated from an
: external image map. It does not, however, allow generation of height
: fields from built-in procedural textures

  Not true. It does.

: Principle IV. Full scripting to replace macros

  I really don't like the word "replace".

: The scene description language should be or include a full-fledged
: scripting language.

  The POV-Ray SDL is Turing-complete. What else do you need?

  Of course shortcuts and support for features which make some things easier
are nice, but the current SDL is not as bad as you are trying to make it
sound.
  I have made a raytracer with the POV-Ray SDL. Beat that.

: PoV3 only has limited macro support, with parameters strictly
: precalculated and support of true functions absent.

  What is a "true function", and how does it differ from #macros or functions?

: This significantly impedes parametric generation of scenes and
: animation, inducing ugly trickery down to abusing the macro language's
: file I/O capabilities in order to circumvent lack of true function
: declarations. Even with all the trickery, it is frequently extremely
: difficult to achieve the desired effect.

  I didn't understand this paragraph at all. How do #macros and functions
"impede parametric generation of scenes and animation"?

  As for your OOP language suggestion, that has been discussed countless times
before. It's not as trivial an issue as you seem to think.

: We suggest that the only and sole data type used by PoV4 should be a
: vector.

  Why? So no more strings or floats?
  How do you print some text with a vector?

: Therefore, we suggest

  By the way, if this text is "By Eugene Arenhaus", why do you speak in
plural? Are you a member of a royal family or something?

-- 
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}//  - Warp -



From: Simon Adameit
Subject: Re: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 13:12:39
Message: <3c3dd997@news.povray.org>
> Hello.
>
> Here's my two cents.
>

Do you know POV-Ray 3.5?
With it you can do many of the things that you say are impossible to do in
POV.



From: Patrick Elliott
Subject: Re: ATT: POV team and everyone - POV4 design proposal
Date: 10 Jan 2002 16:50:42
Message: <1103_1010699555@selliot>
On Thu, 10 Jan 2002 15:09:43 +0200, Eugene Arenhaus <eug### [at] avalon-netcoil> wrote:
> Trace's input will be a single ray.
> 
> Trace's output will be a single ray.
> 
> In fact, it will be one and the same ray object.
> 


Umm... I seem to remember a few features that make this pointless. For
instance, unless I am mistaken, a partially transparent object splits an
existing ray in two, passing one through the object and bouncing the other
off it. How the $%# do you do that with a trace routine which only returns
one ray? I suggest you first learn 'why' things work the way they do before
making suggestions about how to 'fix' them. :p
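The branching Patrick describes, where one incoming ray at a surface that is both reflective and partially transparent spawns two outgoing rays, can be sketched as follows. The `Hit` and `shade` names are invented for the illustration and do not reflect POV-Ray's internals:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    reflectance: float    # 0..1, fraction of light bounced off the surface
    transmittance: float  # 0..1, fraction passed through the surface

def shade(hit, depth, max_depth=5):
    """Count the secondary rays a single primary ray spawns at a surface.
    A real tracer would recurse into trace() once per spawned ray; here
    we just tally the branching to show why a one-in/one-out trace
    routine cannot model it."""
    if depth >= max_depth:
        return 0  # recursion cut-off, as in any real tracer
    rays = 0
    if hit.reflectance > 0:
        rays += 1  # reflected ray
    if hit.transmittance > 0:
        rays += 1  # refracted/transmitted ray
    return rays

glass = Hit(reflectance=0.1, transmittance=0.9)   # splits the ray in two
mirror = Hit(reflectance=1.0, transmittance=0.0)  # one outgoing ray
```

Since `shade` can legitimately need to follow two rays at a single hit, an interface where trace both takes and returns exactly one ray object cannot express the recursion tree.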




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.