  Re: My personal wishlist  
From: Peter Popov
Date: 26 Feb 2000 12:26:11
Message: <kfK3OIjBjxu=lEMtvnZip=3GMUqx@4ax.com>
On Thu, 24 Feb 2000 21:01:48 -0500, Chris Huff
<chr### [at] yahoocom> wrote:

>In article <bCW1OIjrykoVBR6nY5iI6VMKZ3UL@4ax.com>, Peter Popov 
><pet### [at] usanet> wrote:
>
>> : df3 output, dfc file
>...
>> My idea is that a new file type triggered by a command-line (or INI)
>> option is introduced. An additional depth parameter to output
>> resolution has to be introduced, too. The program would then sample
>> the unit cube at width*height*depth points and save the clipped
>> brightness directly to a density file. Brightness should only be
>> affected by pigments and media density.
>
>Maybe something could be added to my object pattern... my proximity 
>pattern might also benefit from this (compute the density pattern once 
>using high settings for proximity, then use the density pattern instead 
>of the proximity).

Actually, this is one of the major benefits of this idea. The proximity
pattern is dreadfully slow in media (and probably in isosurfaces too). If
it could be calculated only once, render times would be cut down greatly.

A significant reduction in render times can be achieved if complex media
statements are replaced with precomputed density patterns. My current
project uses 200 scattering media statements in a single blob container
and takes more than nine hours to render at 640x480. If it were a single
df3 media, render time could be cut dramatically (by a factor of ten to
a hundred, depending on the scene).
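
For reference, this is roughly how a precomputed density file already
plugs into a single media statement (standard df3 syntax; the container
and the file name are just placeholders):

  // one precomputed density file instead of 200 media statements
  box {
    <0, 0, 0>, <1, 1, 1>
    pigment { rgbt 1 }  // fully transparent container
    hollow
    interior {
      media {
        scattering { 1, rgb 0.5 }
        density {
          density_file df3 "precomputed.df3"
          interpolate 1  // smooth interpolation between voxels
        }
      }
    }
  }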

>> A new type of density file (for example dfc), one that stores
>> three-dimensional RGB data, would also be of use. The RGB data could
>> then be used as a default color map for the density_file pattern based
>> on this file if no color map is specified. Combined with df3 or dfc
>> output one could easily render a glowing media version of any object,
>> while preserving the colors.
>
>Where would you get the color data from a CSG with multiple textures? 
>Just add the color contribution from every object? And you would have to 
>restrict yourself to the pigment, since the other features (normal, 
>finish) kind of depend on having a surface.

The problem with overlapping interiors may or may not be solvable by
proper CSG. Maybe if the objects were tested in a specific order (for
example, the order in which they appear in the CSG) and the first match
returned, the problem would go away. And yes, only pigments would be
used, but that should be enough for this kind of effect.
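
To make the proposal concrete, here is a minimal sketch of how it might
be used in a scene, assuming a hypothetical dfc keyword parallel to the
existing df3 one (none of this syntax exists yet):

  // hypothetical: dfc stores RGB per voxel, acting as a built-in color_map
  media {
    emission 1
    density {
      density_file dfc "object_colors.dfc"
      interpolate 1
    }
  }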

>> : polygonal spotlight
>> 
>> Although unrealistic, these have their applications. Combined with an
>> area light, they can be used to simulate, for example, light passing
>> through a window, a keyhole or the like.
>
>This is an interesting idea, but couldn't it be done with the 
>projected_through patch?

No, because the falloff (for spotlights) would still be circular and
not polygonal.
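
For illustration, such a light might look something like this (purely
hypothetical syntax; the points keyword is invented for this sketch):

  // hypothetical: a spotlight whose cross-section is a polygon, not a circle
  light_source {
    <0, 10, 0>, rgb 1
    spotlight
    point_at <0, 0, 0>
    points 4, <-1, -1>, <1, -1>, <1, 1>, <-1, 1>  // window-shaped beam
    falloff 0.1  // falloff measured outward from the polygon's edge
  }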

>> : chop pieces off with bounding

>> If a bounding object is smaller than what it bounds, the results are
>> unpredictable. I think a more logical behaviour would be the one seen
>> in isosurfaces, i.e. the bounding object should restrict the bounded
>> object, exposing the intersection surface and texture. Of course, it
>> would still bound the object for ray-shape intersection tests.
>
>This might be a good idea. On the other hand, it would reinforce the bad 
>habit of using bounded_by as a kind of CSG... (remember that with some 
>settings POV can override this statement and use its own bounding)

I admit it is probably a bad idea anyway, especially because it would
duplicate the functionality of the CSG texture patch if that gets
implemented. The idea for both modifications came to me when I had a
complex CSG of objects, each with its own texture, and I wanted to make
a cross-section of it. Although I succeeded in doing it with bounding,
rotating the object or the camera led to unpredictable results. So I am
proposing both patches because I can't tell right now which one would be
easier/simpler/safer to implement.
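
For what it's worth, the cross-section effect can already be approximated
with clipped_by, which genuinely removes geometry (leaving the object
open at the cut) instead of producing the undefined results an undersized
bounded_by gives. A minimal sketch:

  // clipped_by removes the part outside the clipping object;
  // bounded_by is only a hint for ray-intersection tests
  sphere {
    <0, 0, 0>, 1
    pigment { rgb <1, 0, 0> }
    clipped_by { plane { x, 0 } }  // keep only the half with x < 0
  }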

>> : leave original textures when CSGing
>> 
>> When performing a CSG difference or intersection, any object with a
>> non-specified texture is assumed to have a black pigment and ambient
>> 0. I think a more reasonable behaviour would be to ignore its texture
>> and use the texture of whatever object the intersection point is in.
>> Of course this could lead to "coincident interiors" problems but I
>> think with proper CSGing they can be avoided.
>
>Actually, I am pretty sure it uses the default texture, which you can 
>set with the #default keyword. 

I guess so, but it still poses the same problem.
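
For completeness, the fallback Chris mentions can be set explicitly with
the standard #default mechanism, which at least makes the behaviour
visible in the scene file:

  // standard syntax: untextured objects inherit this instead of black
  #default {
    texture {
      pigment { rgb <0, 1, 0> }  // untextured CSG operands show up green
    }
  }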

>And I think it would be extremely difficult to determine which object 
>to choose the texture from... what if the point is enclosed in more 
>than one object?

If more than one object encloses the point, use the texture of the one
that appears first (or last) in the CSG.
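
Under that rule the choice would be deterministic, something like this
(the inheritance behaviour is the proposal, not current POV-Ray):

  // proposed: the untextured box would pick up the sphere's pigment,
  // because the sphere appears first in the CSG
  intersection {
    sphere { <0, 0, 0>, 1.2 pigment { rgb <1, 0, 0> } }
    box { <-1, -1, -1>, <1, 1, 1> }  // no texture of its own
  }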

>> : seeded randomness for area_light and aa jitter
>> 
>> This one is pretty straightforward. It would make animation people's
>> lives easier.

>I don't think this is possible. Providing a seed would allow you to 
>specify the sequence of random numbers used, but you still can't decide 
>when they are used. If a point that gets covered up doesn't need a 
>random number computed, or if something causes a pixel to need more 
>numbers than it did before, everything gets unsynchronized and you end 
>up with the same "flying pixels" or static.

I didn't mean that the user should specify the seed, rather that the
seed should be constant for each pixel (for example X+Y). This would
eliminate the "flying pixels" static.
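
The same idea can be played with in plain SDL using seed() and rand():
a stream seeded from the pixel coordinates is reproducible no matter how
many numbers any other pixel consumes. (X, Y and the 640 width below are
just example values; a real patch would do this inside the sampler.)

  // a per-pixel seed makes the jitter sequence frame-independent
  #declare X = 10;
  #declare Y = 20;
  #declare R = seed(X + Y*640);  // e.g. X + Y*width, to avoid collisions
  #debug concat("first jitter sample: ", str(rand(R), 0, 6), "\n")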

>I will have to look at those sites...

My heart fills with hope... :)

>Other features I can think of:
>Simple shaders: really just an extended isosurface function capable of 
>handling vectors and colors. Not necessarily as full-featured as "real" 
>shaders, but potentially quite useful.

Ditto. Combine this with custom BRDFs and you have infinite
possibilities.
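
A rough sketch of what such a simple shader might look like, assuming a
hypothetical color-valued function syntax invented for this example
(real isosurface functions only return a single float):

  // hypothetical: a function returning an RGB vector per point
  #declare Rust =
    shader {
      function { <0.6 + 0.3*sin(x*20), 0.3, 0.2> }
    }
  sphere { <0, 0, 0>, 1 texture { Rust } }  // hypothetical usage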

>Patterned light_source's: Have the ability to apply a patterned 
>color_map to a light source, with the option to use ray direction and 
>depth instead of point position. (This would kind of simulate light 
>coming through a colored, transparent sphere surrounding the light 
>source, but with depth.)
>light_source {blah, blah
>   PATTERN
>   color_map {...}
>   use_depth_for_z on/off
>}
>
>If the "use_depth_for_z" feature (somebody please come up with a better 
>keyword!) is turned on, the angles of the initial emission from the 
>light source will be used for the x and y axes, and the distance the 
>ray has travelled for the z axis. This should make things like 
>interference patterns possible. (I wonder what this would look like 
>combined with scattering media and photons!)

I've raised the question of simulating wave properties (wavelength,
phase and polarization) a couple of times myself, but it was overlooked.
Your idea is certainly more versatile, and I see good uses for it
already.
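
Fleshing out your sketch with concrete values, it might read like this
(still entirely hypothetical syntax, with bozo standing in for any
standard pattern):

  // hypothetical: light color varies with emission angle (x, y) and,
  // with use_depth_for_z on, with the distance travelled (z)
  light_source {
    <0, 5, 0>, rgb 1
    bozo
    color_map {
      [0.0 rgb <1.0, 0.8, 0.6>]
      [1.0 rgb <0.6, 0.8, 1.0>]
    }
    use_depth_for_z on  // depth sweeps the map: interference-like rings
  }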


Peter Popov
pet### [at] usanet
ICQ: 15002700

