POV-Ray : Newsgroups : povray.unofficial.patches : Your wanted features
(Message 21 to 30 of 30)
From: Ray Gardener
Subject: Re: Your wanted features
Date: 19 Jun 2003 13:24:56
Message: <3ef1f1e8@news.povray.org>
> This may be useful for scanlining, but what I'm after is a method for
> generating meshes for general use.

Yeah, that's brutal. If a recursive dicer were used,
one would then have to merge small triangles
afterwards, basically a mesh LOD-reduction algorithm.
Granted, the memory requirement can be steep, but the
worst case applies only to the primitive being diced,
and any persistent geometry afterwards is in the
optimized LOD form. If one used a Bezier patch
instead of triangles as the atomic dicing unit,
the tessellation memory use would be worse but the
final optimized mesh would be more compact.
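
To sketch the merge step (not code from any patch; a flat triangle
list and grid snapping are just the simplest way to show the idea),
a vertex-clustering pass would look something like this:

// Minimal vertex-clustering decimation: quantize vertices onto a grid,
// merge all vertices that fall into the same cell, and drop triangles
// that become degenerate. Names and structures are illustrative only.
#include <array>
#include <cmath>
#include <map>
#include <vector>

struct Vec3 { double x, y, z; };
struct Tri  { int v[3]; };

struct Mesh {
    std::vector<Vec3> verts;
    std::vector<Tri>  tris;
};

Mesh decimate_by_clustering(const Mesh& in, double cell_size)
{
    std::map<std::array<long,3>, int> cell_to_new;  // grid cell -> new vertex index
    std::vector<int> remap(in.verts.size());
    Mesh out;

    // Merge vertices that land in the same grid cell.
    for (size_t i = 0; i < in.verts.size(); ++i) {
        const Vec3& p = in.verts[i];
        std::array<long,3> cell = {
            (long)std::floor(p.x / cell_size),
            (long)std::floor(p.y / cell_size),
            (long)std::floor(p.z / cell_size) };
        auto it = cell_to_new.find(cell);
        if (it == cell_to_new.end()) {
            it = cell_to_new.emplace(cell, (int)out.verts.size()).first;
            out.verts.push_back(p);   // representative vertex for this cell
        }
        remap[i] = it->second;
    }

    // Keep only triangles that still have three distinct vertices.
    for (const Tri& t : in.tris) {
        Tri n = {{ remap[t.v[0]], remap[t.v[1]], remap[t.v[2]] }};
        if (n.v[0] != n.v[1] && n.v[1] != n.v[2] && n.v[0] != n.v[2])
            out.tris.push_back(n);
    }
    return out;
}

A larger cell size gives a coarser LOD; an edge-collapse scheme would
preserve shape better but is more work to get right.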


> ...It may not be
> necessary though...things like proximity and glow effects (two other
> things I plan to use tessellation with) may work fine with simple
> quick-and-dirty removal of triangles that are entirely contained in
> another object.

Entirely possible. The method is nice in that it offers a
fine-grained speed-versus-quality trade-off.
One need not be forced to compute a perfect CSG op every time.


> What I actually meant was: how are you tessellating and drawing the
> objects? Did you add an object method to draw in OpenGL, or one to
> tessellate to a mesh which is used by drawing code elsewhere? Or
> something else?

Each POV primitive has a wireframe method which emits
abstracted OpenGL calls (daylon_glBeginLineStrip, daylon_glVertex3dv, etc.).
The tessellation is done in a Wireframe_<primitive> routine
residing in the related module (spheres.cpp for spheres, etc.).
A primitive is treated as a mesh of line strips along the U and V axes,
and there is no retained geometry (for each line strip, vertices
are simply built up, issued, and then discarded).
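
As a rough illustration only (the wrapper bodies and step counts here
are assumptions, not the actual patch source), a Wireframe_Sphere()
along those lines could look like:

#include <GL/gl.h>   // assumes desktop OpenGL 1.x headers
#include <cmath>

// The wrappers named above, shown here as assumed thin pass-throughs
// to immediate-mode OpenGL.
inline void daylon_glBeginLineStrip()           { glBegin(GL_LINE_STRIP); }
inline void daylon_glVertex3dv(const double* v) { glVertex3dv(v); }
inline void daylon_glEnd()                      { glEnd(); }

void Wireframe_Sphere(const double center[3], double radius,
                      int u_steps = 16, int v_steps = 16)
{
    const double pi = 3.14159265358979323846;

    // One line strip per constant-V latitude ring (poles skipped).
    for (int j = 1; j < v_steps; ++j) {
        double phi = pi * j / v_steps;
        daylon_glBeginLineStrip();
        for (int i = 0; i <= u_steps; ++i) {
            double theta = 2.0 * pi * i / u_steps;
            double p[3] = {
                center[0] + radius * std::sin(phi) * std::cos(theta),
                center[1] + radius * std::sin(phi) * std::sin(theta),
                center[2] + radius * std::cos(phi) };
            daylon_glVertex3dv(p);   // issued immediately, never retained
        }
        daylon_glEnd();
    }

    // One line strip per constant-U longitude arc.
    for (int i = 0; i < u_steps; ++i) {
        double theta = 2.0 * pi * i / u_steps;
        daylon_glBeginLineStrip();
        for (int j = 0; j <= v_steps; ++j) {
            double phi = pi * j / v_steps;
            double p[3] = {
                center[0] + radius * std::sin(phi) * std::cos(theta),
                center[1] + radius * std::sin(phi) * std::sin(theta),
                center[2] + radius * std::cos(phi) };
            daylon_glVertex3dv(p);
        }
        daylon_glEnd();
    }
}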

It's one of those lucky coincidences that POV's primitives'
Trans->matrix member happens to match the row/column order
and precision of OpenGL matrices; one can feed them directly
to glMultMatrixd(). Given how fortuitous that is, it almost
seems criminal not to integrate OpenGL. :)
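
Concretely, the hand-off would be something like this sketch (the
TRANSFORM shown here is simplified, and whether the element order
really matches is exactly the claim above rather than something the
snippet verifies):

#include <GL/gl.h>

typedef double MATRIX[4][4];            // 4x4 transform storage, as in POV
struct TRANSFORM { MATRIX matrix; MATRIX inverse; };   // simplified

void push_object_transform(const TRANSFORM* trans)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    if (trans != nullptr)
        glMultMatrixd(&trans->matrix[0][0]);   // 16 contiguous doubles
    // ...emit the primitive's line strips here, then glPopMatrix().
}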

I had some (currently disabled) code that captured bbox drawing to disk for
later display in Leveller, to test wireframe exports for interactive
scene navigation, but that's something running outside of POV-Ray,
or even at another time or on another machine. I thought of
embedding a scene navigator in the WinPOV GUI, but it might be
better (and more flexible) to just have it exec a navigation app.
You'd place the cursor within a camera statement and issue a
"render for nav" command; a scene wireframe file would be
generated with the camera block included, and the navigator
would be run. When it wrapped up, it would update the camera
in that file, and WinPOV would tell CodeMax to replace the
camera statement with it (optionally commenting out or
#if(false)'ing the old camera statement). Actually, the
navigator would be better as a DLL, since it can be hard to
tell when another app has finished running.

What gets kind of interesting is that a navigator DLL could
also double as a scene interaction previewer. For example,
you could implement a navigator that uses Quake-style walking/running,
and another that provides a flight-simulator point of view.
If you wanted to be really intense, a DLL could interpret
the wireframe (model) file as a game level, and let you
literally "play" that level on the spot. It means that the
POV SDL can be a game level modeler (among other things),
but why not? There's no reason the SDL has to be tightly
coupled to a particular rendering system. Once you have
a way to emit the geometry into something easier to parse
than SDL syntax, some neat possibilities open up.
Geometry emission is actually a whole other patch, but
it happens to be easy to do in the OpenGL patch as a side effect.

Normally one would just use a traditional modeler, but
the SDL is handy for those times when you want to quickly
make a bunch of objects procedurally, or you find
it easier to express an object with SDL than by drawing it.
Modelers do have scripting of their own, like Rhino's textual
command line above its view windows, but this would
let POV users leverage their SDL knowledge -- it's kind
of bothersome to have to learn a different language for
each app in the toolchain. If you had some SDL code that
did something clever, it's nice to be able to just take
that and deploy it to feed geometry to another app,
and let the computer handle the translation.

Ray



From: Warp
Subject: Re: Your wanted features
Date: 19 Jun 2003 17:15:10
Message: <3ef227de@news.povray.org>
Christopher James Huff <cja### [at] earthlinknet> wrote:
> One of the other things that needs to be done is to do clipping after 
> the antialiasing. An extremely bright object should contribute more to a 
> pixel than just a white one. However, because simple clipping is used, 
> this makes super-bright objects appear unantialiased...

  A more (photo)realistic effect would be for this kind of ultrabright
spot to bleed brightness into the surrounding pixels.
  However, the algorithm for implementing this might be neither trivial
nor fast...

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -



From: Christopher James Huff
Subject: Re: Your wanted features
Date: 19 Jun 2003 23:46:28
Message: <cjameshuff-10FDF0.22362719062003@netplex.aussie.org>
In article <3ef227de@news.povray.org>, Warp <war### [at] tagpovrayorg> 
wrote:

> > One of the other things that needs to be done is to do clipping after 
> > the antialiasing. An extremely bright object should contribute more to a 
> > pixel than just a white one. However, because simple clipping is used, 
> > this makes super-bright objects appear unantialiased...
> 
>   A more (photo)realistic effect would be for this kind of ultrabright
> spot to bleed brightness into the surrounding pixels.

This would be more photorealistic than an ordinary antialiased image, 
but is a separate effect from antialiasing.


>   However, the algorithm for implementing this might be neither trivial
> nor fast...

Actually, it was implemented as a post-process filter in the old 
MegaPOV, and there has been quite a bit of discussion about it recently. 
Basically, you make a blurred version of the image using the unclipped 
values and combine it with the unblurred version. The blurred version 
simulates the light scattered within the eye (or camera film); the 
unblurred version is the unscattered light. You just weight them 
appropriately and average them.
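
A minimal sketch of that combine step (not the MegaPOV filter itself;
the box blur, radius, and weight are placeholder choices):

#include <algorithm>
#include <vector>

struct RGB { float r, g, b; };

// Separable box blur, horizontal then vertical, as a cheap stand-in
// for a proper Gaussian.
static std::vector<RGB> blur(const std::vector<RGB>& img, int w, int h, int radius)
{
    std::vector<RGB> tmp(img.size()), out(img.size());
    auto pass = [&](const std::vector<RGB>& src, std::vector<RGB>& dst,
                    int dx, int dy) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                RGB sum = {0, 0, 0}; int n = 0;
                for (int k = -radius; k <= radius; ++k) {
                    int xx = x + k * dx, yy = y + k * dy;
                    if (xx < 0 || xx >= w || yy < 0 || yy >= h) continue;
                    const RGB& p = src[yy * w + xx];
                    sum.r += p.r; sum.g += p.g; sum.b += p.b; ++n;
                }
                dst[y * w + x] = { sum.r / n, sum.g / n, sum.b / n };
            }
    };
    pass(img, tmp, 1, 0);   // horizontal
    pass(tmp, out, 0, 1);   // vertical
    return out;
}

// Combine the unclipped image with its blurred copy, then clip for display.
std::vector<RGB> apply_glow(const std::vector<RGB>& hdr, int w, int h,
                            int radius = 8, float glow_weight = 0.15f)
{
    std::vector<RGB> glow = blur(hdr, w, h, radius);
    std::vector<RGB> out(hdr.size());
    for (size_t i = 0; i < hdr.size(); ++i) {
        out[i].r = std::min(1.0f, (1 - glow_weight) * hdr[i].r + glow_weight * glow[i].r);
        out[i].g = std::min(1.0f, (1 - glow_weight) * hdr[i].g + glow_weight * glow[i].g);
        out[i].b = std::min(1.0f, (1 - glow_weight) * hdr[i].b + glow_weight * glow[i].b);
    }
    return out;
}

The key point is that the blur reads the unclipped (>1.0) values, so a
superbright spot spreads energy into its neighbors before clipping.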

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Warp
Subject: Re: Your wanted features
Date: 20 Jun 2003 07:29:33
Message: <3ef2f01d@news.povray.org>
Christopher James Huff <cja### [at] earthlinknet> wrote:
> Actually, it was implemented as a post-process filter in the old 
> MegaPOV, and there has been quite a bit of discussion about it recently. 

  Did it really use values >1.0 for calculating the soft glow?

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -



From: Christopher James Huff
Subject: Re: Your wanted features
Date: 20 Jun 2003 10:21:34
Message: <cjameshuff-974393.09112220062003@netplex.aussie.org>
In article <3ef2f01d@news.povray.org>, Warp <war### [at] tagpovrayorg> 
wrote:

> > Actually, it was implemented as a post-process filter in the old 
> > MegaPOV, and there has been quite a bit of discussion about it recently. 
> 
>   Did it really use values >1.0 for calculating the soft glow?

I assume so... those values were available, so it would only make sense 
to use them. If not, there was another filter to load the unclipped 
colors into the image buffer.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Christopher James Huff
Subject: Re: Your wanted features
Date: 20 Jun 2003 11:06:53
Message: <cjameshuff-AD4D63.09564120062003@netplex.aussie.org>
In article <3ef1f1e8@news.povray.org>,
 "Ray Gardener" <ray### [at] daylongraphicscom> wrote:

> Entirely possible. The method is nice in that it offers a
> fine-grained speed-versus-quality trade-off.
> One need not be forced to compute a perfect CSG op every time.

There are actually one or two constraints I would use with this 
method: only triangles with maximum edge lengths and/or areas below a 
certain threshold would be removed; triangles that are too big would be 
subdivided and the same test applied to their children. This should keep 
the new-triangle generation down to a reasonable level, while still 
allowing low-triangle-count meshes to be used reasonably.
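
Roughly like this (a sketch, not working patch code; the midpoint split
and the all-vertices-inside test are simplifications, and the sphere
stands in for the real inside-object test):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 center; double radius; };

// Stand-in for the real inside-object test; a sphere keeps the sketch simple.
static bool inside(const Vec3& p, const Sphere& s)
{
    double dx = p.x - s.center.x, dy = p.y - s.center.y, dz = p.z - s.center.z;
    return dx * dx + dy * dy + dz * dz < s.radius * s.radius;
}

static Vec3 mid(const Vec3& a, const Vec3& b)
{ return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 }; }

static double max_edge(const Vec3& a, const Vec3& b, const Vec3& c)
{
    auto len = [](const Vec3& p, const Vec3& q) {
        double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    };
    return std::max({ len(a, b), len(b, c), len(c, a) });
}

// Append the surviving pieces of triangle (a,b,c) to 'out' as vertex triples.
void clip_triangle(const Vec3& a, const Vec3& b, const Vec3& c,
                   const Sphere& clip, double threshold, int max_depth,
                   std::vector<Vec3>& out)
{
    if (max_edge(a, b, c) > threshold && max_depth > 0) {
        // Too big to discard outright: split at the edge midpoints and recurse.
        Vec3 ab = mid(a, b), bc = mid(b, c), ca = mid(c, a);
        clip_triangle(a,  ab, ca, clip, threshold, max_depth - 1, out);
        clip_triangle(ab, b,  bc, clip, threshold, max_depth - 1, out);
        clip_triangle(ca, bc, c,  clip, threshold, max_depth - 1, out);
        clip_triangle(ab, bc, ca, clip, threshold, max_depth - 1, out);
        return;
    }
    // Small enough: drop it only if it is entirely inside the other object.
    if (inside(a, clip) && inside(b, clip) && inside(c, clip))
        return;
    out.push_back(a); out.push_back(b); out.push_back(c);
}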

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: sascha
Subject: Re: Your wanted features
Date: 20 Jun 2003 13:19:07
Message: <3ef3420b$1@news.povray.org>
Christopher James Huff wrote:

 > ...maybe even using ray differentials.

This would be a great feature, because it would theoretically allow 
filtering the high frequencies out of pigments - as antialiased shaders do 
in the RenderMan Shading Language - making supersampling necessary only 
for edges.
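
Something in the spirit of those antialiased shaders, as a sketch only
(the sine "noise" is a stand-in for a real pigment, and the fade rule
is a guess, not anything from RenderMan or a patch):

#include <algorithm>
#include <cmath>

// Band-limited procedural pattern: octaves whose wavelength falls below
// the ray footprint are faded out instead of sampled, so the pattern
// itself never aliases and only edges still need supersampling.
double filtered_pattern(double x, double y, double z,
                        double footprint,   // ray-differential width at the hit
                        int octaves = 6)
{
    double sum = 0.0, amp = 0.5, freq = 1.0;
    for (int o = 0; o < octaves; ++o) {
        double wavelength = 1.0 / freq;
        // Keep octaves much coarser than the footprint, drop ones finer
        // than it, and blend smoothly in between.
        double fade = std::clamp((wavelength - footprint) / wavelength, 0.0, 1.0);
        if (fade <= 0.0)
            break;
        sum += amp * fade * std::sin(freq * (12.9898 * x + 78.233 * y + 37.719 * z));
        amp  *= 0.5;
        freq *= 2.0;
    }
    return 0.5 + 0.5 * sum;   // remap to [0,1] like a pigment channel
}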

-sascha



From: Greg M  Johnson
Subject: Re: Your wanted features
Date: 23 Jun 2003 15:25:15
Message: <3ef7541b$1@news.povray.org>
"Rohan Bernett" <rox### [at] yahoocom> wrote in message
news:web.3edf5ebd7e2b69fbaa7c54710@news.povray.org...

> Let's make a nice big list of all the features people would like to see in
> an official/unofficial version of POVRay!
>


1) To be an ass and a non-team player,   I'll say    ^

2)  Is there some way to fix the specular to allow color-specific
highlights?
I'm thinking about a possible (and extremely simple) toon-shader just based
on a high specular finish.
See:  http://news.povray.org/3ef7334c%241%40news.povray.org



From: Christopher James Huff
Subject: Re: Your wanted features
Date: 24 Jun 2003 18:48:34
Message: <cjameshuff-4306E0.17371424062003@netplex.aussie.org>
In article <3ef3420b$1@news.povray.org>,
 sascha <sas### [at] userssourceforgenet> wrote:

> Christopher James Huff wrote:
> 
>  > ...maybe even using ray differentials.
> 
> This would be a great feature, because it would theoretically allow 
> filtering the high frequencies out of pigments - as antialiased shaders do 
> in the RenderMan Shading Language - making supersampling necessary only 
> for edges.

Yes... and similar filtering could be used on the pigments to 
strengthen small details. However, the pigments would still need to be 
supersampled... it could just be done separately from the edges. Maybe 
something like this: sample the surface coverage, compute an average 
location and normal for each surface hit by the pixel, and then compute 
the texturing once with that information and the ray footprint. It may 
be too much work for the possible savings.
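
As a sketch of what I mean (hypothetical names throughout, and the
coverage weighting is only one way it might be combined):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Hit  { Vec3 point; Vec3 normal; };
struct RGB  { double r, g, b; };

// Placeholder texture evaluation: just visualizes the normal. In a real
// patch this would be the (footprint-filtered) pigment/texture code.
static RGB evaluate_texture(const Vec3& /*point*/, const Vec3& n, double /*footprint*/)
{
    return { 0.5 + 0.5 * n.x, 0.5 + 0.5 * n.y, 0.5 + 0.5 * n.z };
}

// Average the sub-sample hits that landed on one surface within a pixel,
// evaluate the texture once with that average and the pixel footprint,
// and weight the result by how much of the pixel the surface covers.
RGB shade_surface_once(const std::vector<Hit>& hits_on_surface,
                       double pixel_footprint, int samples_per_pixel)
{
    if (hits_on_surface.empty())
        return { 0.0, 0.0, 0.0 };

    Vec3 p = { 0, 0, 0 }, n = { 0, 0, 0 };
    for (const Hit& h : hits_on_surface) {
        p.x += h.point.x;  p.y += h.point.y;  p.z += h.point.z;
        n.x += h.normal.x; n.y += h.normal.y; n.z += h.normal.z;
    }
    double inv = 1.0 / hits_on_surface.size();
    p = { p.x * inv, p.y * inv, p.z * inv };

    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0)
        n = { n.x / len, n.y / len, n.z / len };   // renormalize averaged normal

    double coverage = double(hits_on_surface.size()) / samples_per_pixel;
    RGB c = evaluate_texture(p, n, pixel_footprint);
    return { c.r * coverage, c.g * coverage, c.b * coverage };
}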

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: sascha
Subject: Re: Your wanted features
Date: 5 Jul 2003 03:23:40
Message: <3f067cfc@news.povray.org>
Christopher James Huff wrote:
 > something like this: sample the surface coverage, compute an average
 > location and normal for each surface hit by the pixel, and then
 > compute the texturing once with that information and the ray footprint.

Sounds a bit like MIP-mapping to me. I once posted a question about 
MIP-mapping to p.u.patches - it needs the surface derivatives too - and 
Nathan Kopp posted a link to a paper about ray differentials in 
raytracing: http://graphics.stanford.edu/papers/trd/


-sascha



