POV-Ray : Newsgroups : povray.unofficial.patches : My personal wishlist
  My personal wishlist (Message 21 to 30 of 77)  
From: PoD
Subject: Re: My personal wishlist
Date: 26 Feb 2000 10:51:54
Message: <38B7FCCF.A533B213@merlin.net.au>
Chris Huff wrote:
> 
> In article <38B6E4BC.977FD8A0@merlin.net.au>, PoD <pod### [at] merlinnetau>
> wrote:
> 
> > > This might be a good idea. On the other hand, it would reinforce the bad
> > > habit of using bounded_by as a kind of CSG...(remember that with some
> > > settings POV can override this statement and use its own bounding)
> >
> > No no no... bounding has nothing to do with CSG and shouldn't be
> > confused with it.
> 
> That is what I was trying to get at: if bounding boxes chop off pieces of
> the object instead of behaving as they currently do, people will mistake
> them for a type of CSG more often. This is a bad thing. I don't want it
> to happen.

Oh, OK, it looks like we agree on this.
> 
> > > If the "use_depth_for_z"(somebody please come up with a better keyword!)
> > > feature is turned on, the angles of the initial emission from the light
> > > source will be used for the x and y axis, and the distance the ray has
> > > travelled for the z axis. This should make things like interference
> > > patterns possible.(I wonder what this would look like combined with
> > > scattering media and photons!)
> >
> > Interesting, but I think it should be done the same as sky_sphere
> > i.e. the colour comes from the intersection of the ray and a unit
> > sphere.
> 
> But that would make it a lot less flexible...you wouldn't be able to
> simulate the wave-like characteristics of light, for example. It would
> be the same as placing a small-scaled sphere around the light source.
> 
> Hmm, you could even do a cheap imitation of laser beams without photons:
> use the object pattern with two cylinders meeting at a mirror as the
> pigment for the light, and use scattering media with isotropic
> scattering (this would be important, because although the light would
> seem to be constrained to the two cylinders, its direction would still
> be away from the light_source). This would be messed up by shadows, but
> it is an interesting possible use.

I didn't mean not to include the z-depth option, just that using planar
mapping would make a lot of uses difficult.  Maybe reuse the mapping
keyword here; I could see at least spherical, planar and cylindrical
mappings being useful.
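
Something like this hypothetical syntax, say (none of these keywords
work this way in current POV-Ray; the pattern and map_type here are
just stand-ins for whatever the patch would actually use):

```pov
// Hypothetical sketch only: a patterned light_source with a
// selectable mapping.  Stock POV-Ray does not accept a pattern,
// color_map or map_type inside light_source; this just shows how
// the "reuse the mapping keyword" suggestion might read.
light_source {
  <0, 5, 0>
  rgb 1
  gradient z
  color_map { [0.0 rgb <1, 0, 0>] [1.0 rgb <0, 0, 1>] }
  map_type 1  // 0 = planar, 1 = spherical, 2 = cylindrical
}
```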

Here's something if you had z-depth.  Use a density file as the pattern
and project 'solid' holograms into media with shadowing from intervening
objects :)

PoD.



From: Chris Huff
Subject: Re: My personal wishlist
Date: 26 Feb 2000 11:23:36
Message: <chrishuff_99-FBFD91.11250426022000@news.povray.org>
In article <38B7FCCF.A533B213@merlin.net.au>, PoD <pod### [at] merlinnetau> 
wrote:

> I didn't mean not to include the z-depth option, just that using planar
> mapping would make a lot of uses difficult.  Maybe reuse the mapping
> keyword here, I could see at least spherical, planar and cylindrical
> being useful.

Not planar mapping, spherical mapping. Distance would be z, the 
vertical/horizontal angles would be x and y, and the center would be the 
light_source position. It would be like wrapping the pigment around a 
sphere, except the color also varies depending on its distance from the 
center of the sphere. You would have to tile it in some cases (many times 
the tiling wouldn't be visible), but I don't see any other way of 
mapping 3D space around a point. Maybe also cylindrical mapping.


> Here's something if you had z-depth.  Use a density file as the pattern
> and project 'solid' holograms into media with shadowing from intervening
> objects :)

That is an idea...although I guess it could also be done with scattering 
media using the density file and a plain colored light, at least as long 
as no other lights interact with it.

-- 
Chris Huff
e-mail: chr### [at] yahoocom
Web page: http://chrishuff.dhs.org/



From: Peter Popov
Subject: Re: My personal wishlist
Date: 26 Feb 2000 12:26:11
Message: <kfK3OIjBjxu=lEMtvnZip=3GMUqx@4ax.com>
On Thu, 24 Feb 2000 21:01:48 -0500, Chris Huff
<chr### [at] yahoocom> wrote:

>In article <bCW1OIjrykoVBR6nY5iI6VMKZ3UL@4ax.com>, Peter Popov 
><pet### [at] usanet> wrote:
>
>> : df3 output, dfc file
>...
>> My idea is that a new file type triggered by a command-line (or INI)
>> option is introduced. An additional depth parameter to output
>> resolution has to be introduced, too. The program would then sample
>> the unit cube at width*height*depth points and save the clipped
>> brightness directly to a density file. Brightness should only be
>> affected by pigments and media density.
>
>Maybe something could be added to my object pattern...my proximity 
>pattern might also benefit from this(compute the density pattern once 
>using high settings for proximity, then use the density pattern instead 
>of the proximity).

Actually this is one of the major benefits of this idea. The proximity
pattern is dreadfully slow in media (and probably isosurfaces). If it
could be calculated only once, render times would be greatly cut down.

Significant reduction of render times can be achieved if complex media
statements are replaced with precomputed density patterns. My current
project involves the use of 200 scattering media statements in a
single blob container. It takes more than nine hours to render at
640x480. If it were a single df3 media, render times could be
dramatically decreased (by a factor of ten to a hundred, depending on
the scene).
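
For comparison, the single-df3 version would be something like this
(just a sketch; "precomputed.df3" stands for a density file generated
offline):

```pov
// Sketch: one precomputed df3 density in a single scattering media,
// in place of hundreds of separate media statements.
// "precomputed.df3" is a placeholder file name.
box {
  <0, 0, 0>, <1, 1, 1>
  pigment { rgbt 1 }  // fully transparent container
  hollow
  interior {
    media {
      scattering { 1, rgb 0.8 }
      density {
        density_file df3 "precomputed.df3"
        interpolate 1  // interpolate between voxels
      }
    }
  }
}
```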

>> A new type of density file (for example dfc), one that stores
>> three-dimensional RGB data, would also be of use. The RGB data could
>> then be used as a default color map for the density_file pattern based
>> on this file if no color map is specified. Combined with df3 or dfc
>> output one could easily render a glowing media version of any object,
>> while preserving the colors.
>
>Where would you get the color data from a CSG with multiple textures? 
>Just add the color contribution from every object? And you would have to 
>restrict yourself to the pigment, since the other features(normal, 
>finish) kind of depend on having a surface.

The problem with overlapping interiors may or may not be solvable by
proper CSG. Maybe if the objects are tested in a specific order (for
example the order they appear in the CSG) and the first match is
returned, the problem would go away. And yes, only pigments would be
used, but that should be enough to model things such as
>> : polygonal spotlight
>> 
>> Although unrealistic, these have their applications. Combined with an
>> area light, they can be used to simulate, for example, light passing
>> through a window or keyhole or similar.
>
>This is an interesting idea, but couldn't it be done with the 
>projected_through patch?

No, because the falloff (for spotlights) would still be circular and
not polygonal.

>> : chop pieces off with bounding

>> If a bounding object is smaller than what it bounds, the results are
>> unpredictable. I think a more logical behaviour would be the one seen
>> in isosurfaces, i.e. the bounding object should restrict the bounded
>> object, exposing the intersection surface and texture. Of course, it
>> would still bound the object for ray-shape intersection tests.
>
>This might be a good idea. On the other hand, it would reinforce the bad 
>habit of using bounded_by as a kind of CSG...(remember that with some 
>settings POV can override this statement and use its own bounding)

I admit it is probably a bad idea anyway, especially because it would
duplicate the functionality of the CSG texture patch if it gets
implemented. The idea for both modifications came to me when I had a
complex CSG of objects with their own textures and I wanted to make a
cross-section of it. Although I succeeded in doing it with bounding,
rotating the object or camera led to unpredictable results. So I am
proposing both patches because I can't tell right now which one would
be easier/simpler/safer to implement.

>> : leave original textures when CSGing
>> 
>> When performing a CSG difference or intersection, any object with a
>> non-specified texture is assumed to have a black pigment and ambient
>> 0. I think a more reasonable behaviour would be to ignore its texture
>> and use the texture of whatever object the intersection point is in.
>> Of course this could lead to "coincident interiors" problems but I
>> think with proper CSGing they can be avoided.
>
>Actually, I am pretty sure it uses the default texture, which you can 
>set with the #default keyword. 

I guess so, but it still poses the same problem.

>And I think it would be extremely 
>difficult to determine which object to choose the texture from...what if 
>the point is enclosed in more than one object?

If more than one object encloses the point, use the texture of the
one which appears first (or last) in the CSG.

>> : seeded randomness for area_light and aa jitter
>> 
>> This one is pretty straight-forward. It would make animation people's
>> lives easier.

>I don't think this is possible. Providing a seed would allow you to 
>specify the sequence of random numbers used, but you still can't decide 
>when they are used. If a point that gets covered up doesn't need a random 
>number computed, or if something causes a pixel to need more numbers than 
>it did before, everything gets unsynchronized and you end up with the 
>same "flying pixels" or static.

I didn't mean the user should specify the seed, rather that the seed
should be constant for each pixel (for example X+Y). This would
eliminate the "flying pixels" static.

>I will have to look at those sites...

My heart fills with hope... :)

>Other features I can think of:
>Simple shaders: really just an extended isosurface function capable of 
>handling vectors and colors. Not necessarily as full-featured as "real" 
>shaders, but potentially quite useful.

Ditto. Combine this with custom BRDFs and you have infinite
possibilities.

>Patterned light_sources: Have the ability to apply a patterned 
>color_map to a light source, with the option to use ray direction and 
>depth instead of point position (which would kind of simulate light 
>coming through a colored, transparent sphere surrounding the light 
>source, but with depth).
>light_source {blah, blah
>   PATTERN
>   color_map {...}
>   use_depth_for_z on/off
>}
>
>If the "use_depth_for_z"(somebody please come up with a better keyword!) 
>feature is turned on, the angles of the initial emission from the light 
>source will be used for the x and y axis, and the distance the ray has 
>travelled for the z axis. This should make things like interference 
>patterns possible.(I wonder what this would look like combined with 
>scattering media and photons!)

I've raised the question of simulating wave properties (wavelength,
phase and polarization) a couple of times myself but it was
overlooked. Your idea is certainly more versatile and I see good uses
for it already.


Peter Popov
pet### [at] usanet
ICQ: 15002700



From: Peter Popov
Subject: Re: My personal wishlist
Date: 26 Feb 2000 12:26:12
Message: <kge4OC18QaRIBS8wsrWMYSZKpa9E@4ax.com>
On Sat, 26 Feb 2000 06:53:24 +1030, PoD <pod### [at] merlinnetau> wrote:

>No no no... bounding has nothing to do with CSG and shouldn't be
>confused with it.

I know, I know, but there are certain limitations in the current CSG
model that could be avoided this way.


Peter Popov
pet### [at] usanet
ICQ: 15002700



From: Peter Popov
Subject: Re: My personal wishlist
Date: 26 Feb 2000 12:26:30
Message: <5we4OFqs+4Dl3UdLMdfIlHVIiN86@4ax.com>
On Fri, 25 Feb 2000 10:16:09 +0200, "Gail Shaw" <gsh### [at] monotixcoza>
wrote:

>>: chop pieces off with bounding
>>
>>If a bounding object is smaller than what it bounds, the results are
>>unpredictable. I think a more logical behaviour would be the one seen
>>in isosurfaces, i.e. the bounding object should restrict the bounded
>>object, exposing the intersection surface and texture. Of course, it
>>would still bound the object for ray-shape intersection tests.
>
>
>Isn't that better done with clipped by? Or do you want to merge the two?

Neither. I want to see the texture of an arbitrary object where it
intersects its (manual!) bounding object. Chris and PoD (and probably
others) already pointed out the major problem, that some users might
be led into believing that bounding is a type of CSG, and this is a
Real Bad Thing(tm). I am giving up this idea in favour of the next one
(which duplicates this functionality anyway).

>>: leave original textures when CSGing
>>
>>When performing a CSG difference or intersection, any object with a
>>non-specified texture is assumed to have a black pigment and ambient
>>0. I think a more reasonable behaviour would be to ignore its texture
>>and use the texture of whatever object the intersection point is in.
>>Of course this could lead to "coincident interiors" problems but I
>>think with proper CSGing they can be avoided.
>
>
>That would make my life SO much easier. Difficult to
>implement though (Think of the following:
>difference {
> union {
>  sphere {
>    <0,0.5,0>,1
>    pigment {Red}
>   }
>   sphere {
>    <0,-0.5,0>,1
>    pigment {Blue}
>   }
>  }
>  box {
>   <0,-2,-2>,<2,2,2>
>  }
> }
>
>What colour should the overlapped area of the spheres be?
>Red, Blue or Purple?

Replace it with this:
difference {
 union {
  intersection { sphere { 0.5*y, 1 } plane { -y, 0 } pigment { Red } }
  intersection { sphere { -0.5*y, 1 } plane { y, 0 } pigment { Blue } }
 }
 box { <0,-2,-2>, <2,2,2> }
}

and you'll see what I mean by "proper CSG" :)

>Another I would like :
> The ability to layer a texture over a complex CSG when each
>part has its own texture (If this is possible without using two copies of
>the object, please let me know)

This would certainly be useful. It can be done without a separate copy
of the object if each piece of the CSG has the desired texture layered
over its own.
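
For example, something along these lines (a quick sketch, with the top
layer made partially transparent so the base textures show through):

```pov
// Sketch: layering one common "overcoat" texture on top of each CSG
// part's own base texture, without duplicating the object.  In POV-Ray,
// consecutive texture blocks on an object form layers, and every layer
// above the bottom one must be at least partially transparent.
#declare Overcoat =
  texture {
    pigment { rgbf <1, 1, 1, 0.9> }  // mostly filtering, so lower layers show
    finish { specular 0.6 roughness 0.01 }
  }

union {
  sphere { -x, 1
    texture { pigment { rgb <1, 0, 0> } }  // this part's own texture
    texture { Overcoat }                   // shared layer on top
  }
  sphere { x, 1
    texture { pigment { rgb <0, 0, 1> } }
    texture { Overcoat }
  }
}
```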


Peter Popov
pet### [at] usanet
ICQ: 15002700



From: Peter Popov
Subject: Re: My personal wishlist
Date: 26 Feb 2000 12:26:32
Message: <fAq4OOaezNjs7k2x7JyUJuMX7ct9@4ax.com>
On 25 Feb 2000 08:13:05 -0500, Nieminen Juha
<war### [at] sarakerttunencstutfi> wrote:

>: If a bounding object is smaller than what it bounds, the results are
>: unpredictable.
>
>  Nope. They are very predictable. Only a ray hitting the bounding object
>can hit the bounded object.
>
>  A bounding box is a bounding box, not a clipping box. The current
>implementation is the correct one. It should be documented better to
>avoid confusion, but it's correct.

Suppose the bounding box is smaller than the object itself. If and
only if a ray hits the box, it will look for the first intersection
with the object itself _along_the_ray_, and that intersection could
easily be outside the bounding box! So if you have a sphere bounded by
a smaller box, the result would look as if you'd rendered the sphere
and then applied to the resulting image an alpha channel based on the
box. This, in my dictionary, is not correct.

Of course, specifying a bounding box smaller than the bounded object is
incorrect in the first place :) but there are a few tricks (besides
vampires and anti-vampires) that can be done with it.
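
A minimal scene that shows the effect (you may need to keep POV-Ray
from discarding the manual bounds, e.g. with Remove_Bounds=off in the
INI file):

```pov
// Sketch of the undersized-bounding trick under discussion: rays that
// miss the small box never test the sphere, so the sphere is only
// visible "through" the box's silhouette -- view-dependent, not CSG.
camera { location <0, 0, -6> look_at 0 }
light_source { <10, 10, -10> rgb 1 }

sphere { 0, 2
  pigment { rgb <1, 0.3, 0.3> }
  bounded_by { box { -1, 1 } }  // deliberately smaller than the sphere
}
```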


Peter Popov
pet### [at] usanet
ICQ: 15002700



From: Simon de Vet
Subject: Re: My personal wishlist
Date: 26 Feb 2000 12:35:54
Message: <38B80F59.9FB6022E@istar.ca>
Peter Popov wrote:

> With time I've compiled a list of features I would be happy to see in
> POV-Ray. I think it wouldn't hurt if I posted it here.
> : Additional BRDF models
>
> I recently used a renderer which supported 11 different BRDF
> (highlight) models. Now that's cool! The authors were kind enough to
> share some of their sources with me:
>
> http://www-graphics.stanford.edu/~smr/cs348c/surveypaper.html
> http://www.cs.columbia.edu/CAVE/curet/
> http://www.ciks.nist.gov/appmain.htm
>
> Some of them have properties very useful for specific tasks, such as
> anisotropy (for a brushed metal look).

I just took a quick look at the site. The velvet rendering is incredible!



From: Chris Huff
Subject: Re: My personal wishlist
Date: 26 Feb 2000 13:03:30
Message: <chrishuff_99-26EBDA.13045726022000@news.povray.org>
In article <kfK3OIjBjxu=lEMtvnZip=3GMUqx@4ax.com>, Peter Popov 
<pet### [at] usanet> wrote:

> Significant reduction of render times can be achieved if complex media
> statements are replaced with precomputed density patterns. My current
> project involves the usage of 200 scattering media statements in a
> single blob container. It takes more than nine hours to render at
> 640x480. If it were a single df3 media, render times could be
> dramatically decreased (by an order of ten to a hundred, depending on
> the scene)

Have you tried the blob pattern/pigments? Those may help...my original 
idea was that they would be good density patterns for media.


> No, because the falloff (for spotlights) would still be circular and
> not polygonal.

Hmm, shouldn't an area light work with projected_through to get that?


> >> : leave original textures when CSGing
> >> 
> >> When performing a CGS difference or intersection, any object with a
> >> non-specified texture is assumed to have a black pigment and ambient
> >> 0. I think a more reasonable behaviour would be to ignore its texture
> >> and use the texture of whatever object the intersection point is in.
> >> Of course this could lead to "coincident interiors" problems but I
> >> think with proper CGSing they can be avoided.
> >
> >Actually, I am pretty sure it uses the default texture, which you can 
> >set with the #default keyword. 
> 
> I guess so, but it still poses the same problem.

> If more then one objects enclose the point, use the texture of that
> one which appears first (or last) in the CSG.




> I didn't mean the user should specify the seed, rather that the seed
> should be constant for each pixel (for example X+Y). This would
> eliminate the "flying pixels" static.

Hmm, that might work...


> I've raised the question of simulating wave properties (wavelength,
> phase and polarization) a couple of times myself but it was
> overlooked. Your idea is certainly more versatile and I see good uses
> for it already.

I actually have a very early version done already. It doesn't work with 
photons (I have to figure out how to get the evaluation point in the 
right part of the code before that will work), and there might be some 
other oddities I haven't found yet, but the straight mapping works. I 
haven't added spherical and cylindrical mappings yet, so it doesn't 
have depth-dependent effects yet.

-- 
Chris Huff
e-mail: chr### [at] yahoocom
Web page: http://chrishuff.dhs.org/



From: Nieminen Juha
Subject: Re: My personal wishlist
Date: 26 Feb 2000 13:14:43
Message: <38b81813@news.povray.org>
There are three similar things: bounding, clipping and the intersection CSG.
Their functions are, however, very different and should not be confused.
  What you are talking about, actually, is this:

  intersection
  { object { MyObject }
    object { BoundingObject }
    bounded_by { BoundingObject }
  }

  Since it can be done, and it's very intuitive, it should not be changed.
It should, however, be documented better.
  For some reason I have never had any confusion about these three things.

-- 
main(i,_){for(_?--i,main(i+2,"FhhQHFIJD|FQTITFN]zRFHhhTBFHhhTBFysdB"[i]
):5;i&&_>1;printf("%s",_-70?_&1?"[]":" ":(_=0,"\n")),_/=2);} /*- Warp -*/



From: Nathan Kopp
Subject: Re: My personal wishlist
Date: 26 Feb 2000 13:16:34
Message: <38b81882@news.povray.org>
Peter Popov <pet### [at] usanet> wrote...
> The problem with overlapping interiors may or may not be possible to
> be solved by proper CSG. Maybe if the objects are tested in a specific
> order (for example the order they appear in the CSG) and the first
> match is returned, the problem would go away. And yes, only pigments
> would be used, but that should be enough to model things such as

Another possibility would be to use an average of textures, in the same way
as multi-textured blobs or the new vertex-textured meshes.  It would
probably just be a straight average (as opposed to a weighted average) to
avoid any proximity testing (which can be really slow, as we've seen).

Unfortunately, this is difficult to implement currently because of the way
that CSG intersections are handled.  It may not be that bad, though... I'll
have to look at it more closely and let you know.

> >> : polygonal spotlight
> >
> >This is an interesting idea, but couldn't it be done with the
> >projected_through patch?
>
> No, because the falloff (for spotlights) would still be circular and
> not polygonal.

Well, that changes things (and makes them much more difficult to implement).

>
> >> : chop pieces off with bounding
>
> I admit it is probably a bad idea anyway, especially because it would
> duplicate the functionality of the CSG texture patch if it gets
> implemented. [clip] So I am
> proposing both patches because I can't tell right now which one would
> be easier/simlper/safer to possibly implement.

Understood.

> Ditto. Combine this with custom BRDFs and you have infinite
> possibilities.

Custom BRDFs sounds really fun (though they can be slow).

> I've raised the question of simulating wave properties (wavelength,
> phase and polarization) a couple of times myself but it was
> overlooked. Your idea is certainly more versatile and I see good uses
> for it already.

Actually, it wasn't overlooked, just avoided due to perceived complexity.

-Nathan

(I am speaking for myself, not the POV-Team.)




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.