Christoph Hormann <chr### [at] gmxde> wrote:
> No, POV-Ray supports NURBS as much as most other renderers support them
> - by rendering meshes generated from them.
But I suppose that one big difference between supporting NURBS directly
and "supporting" them only by being able to render a triangle mesh
generated from the NURBS is that when rendering NURBS directly, all
kinds of speed/memory optimizations can be made (e.g. tessellate on demand,
that is, tessellate only the part currently being rendered, and when it's
done, throw away those triangles and go on to the next part...).
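To illustrate the idea, here is a minimal Python sketch of tessellate-on-demand (the tile/patch structures and the tessellate function are my own hypothetical stand-ins, not POV-Ray internals): the image is rendered tile by tile, and a patch's triangles exist only while their tile is being rendered.

```python
# Sketch of tessellate-on-demand (hypothetical data structures, not
# actual POV-Ray internals).

def tessellate(patch, steps):
    """Stand-in for real patch tessellation: returns dummy triangles."""
    return [("tri", patch, i) for i in range(2 * steps * steps)]

def render_on_demand(patches, tiles, steps, overlaps):
    """Return the peak number of triangles alive at any one time."""
    peak = 0
    for tile in tiles:
        # Tessellate only the patches overlapping the current tile...
        live = [t for p in patches if overlaps(p, tile)
                for t in tessellate(p, steps)]
        peak = max(peak, len(live))
        # ...ray-trace the tile against `live`, then discard the triangles.
    return peak

patches = list(range(16))
tiles = list(range(4))
overlaps = lambda p, tile: p // 4 == tile   # each tile sees 4 of 16 patches
full_count = sum(len(tessellate(p, 8)) for p in patches)
peak_count = render_on_demand(patches, tiles, 8, overlaps)
print(full_count, peak_count)   # 2048 triangles up front vs. 512 at peak
```

The trade-off is exactly the one described above: triangles near tile borders may be tessellated more than once, but the live set stays small.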
--
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}// - Warp -
Sascha Ledinsky wrote:
>
>> No. No algorithm commonly used to directly raytrace NURBS would allow
>> displacement mapping. So the above statement does not make sense.
>
>
> I meant the tessellation approach (as used in POV-Ray for
> bezier patches). Theoretically it should be possible to add
> displacement mapping to the bezier-patch tessellation code.
Why would you want to do this if it is a mesh anyway and you could
displace the mesh (any mesh, not just patches)?
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 23 Sep. 2004 _____./\/^>_*_<^\/\.______
delle wrote:
> Is there someone working on "Displacement Mapping" in Povray (as a Patch)?
>
> Best regards.
>
> Delle.
>
>
If you have a mesh in a 3D model file you can use PoseRay to read the
mesh, subdivide it, and displace it. Then you can either export it or
render it in POV-Ray directly from PoseRay.
http://user.txcyber.com/~sgalls/
The displacement is not adaptive to the gradient but it works.
later,
FlyerX
Christoph Hormann wrote:
>> I meant the tesselation approach (as used in POV-Ray for
>> bezier-patches). Theoretically it should be possible to add
>> displacement mapping to the bezier-patch tesselation code.
>
>
> Why would you want to do this if it is a mesh anyway and you could
> displace the mesh (any mesh, not just patches)?
I think there is a misconception of what displacement mapping does. What
I understand as displacement mapping is in effect quite similar to bump
mapping (POV's normal { ... } feature). The difference is that true
displacement mapping does not simply change the surface normals (that's
only a side effect), but actually changes the geometry. Thus, the
displaced geometry will self-shadow, and the displacement itself is visible.
Here is an example image of a renderman displacement shader applied to a
sphere: http://jrman.sourceforge.net/image.php?img=bumpBalls.png
Compared to a bump-mapped version
http://www.povray.org/documentation/images/tutorial/pic1.png
the true displacement has several advantages: The displacement itself is
visible, there is correct self shadowing and there is no "shadow line
artifact".
While I totally agree that the displaced sphere can be done in POV-Ray
using iso-surfaces, I don't think that it makes much sense to use
iso-surfaces for e.g. character animation.
Back to your question:
I'd agree if we're talking about very-high-resolution triangle meshes
only. Displacing the vertices has the same effect as applying a
"displacement shader". But I disagree as far as NURBS, other patch
meshes, or parametric surfaces in general are concerned (currently POV
supports bicubic bezier patches only, and that not very well).
Displacing just the control points of a NURBS mesh certainly does not
have the desired effect!
For example, take a simple cylinder-like shape, made of four bicubic
patches (48 unique control points). Displacing these 48 control points
will change the shape and maybe add some bulges - but you couldn't apply
displacements that would give it (for example) the appearance of a
brick wall that way.
Now, when POV-Ray parses the patches, it turns each patch into a
triangle mesh (e.g. of 128 or more triangles, depending on u_steps and
v_steps). Displacing this triangle grid is very close to what a true
displacement shader would do.
Of course, using an external application (or SDL macro) to tessellate the
parametric surface into a high-density triangle mesh and THEN applying
displacement to the tessellated grid would have the same effect - but I
would neither claim that POV supports NURBS, nor that it supports
displacement mapping then...
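To make the distinction concrete, here is a minimal Python sketch (my own illustration, not POV-Ray code) of displacing a tessellated grid: each vertex is moved along its normal by a height function, so the geometry itself changes rather than just the shading normals. The circle, ridge count, and height function are all arbitrary example values.

```python
import math

def displace(vertices, normals, height):
    """True displacement: move each vertex along its normal by height(v)."""
    return [(vx + h * nx, vy + h * ny)
            for (vx, vy), (nx, ny) in zip(vertices, normals)
            for h in [height(vx, vy)]]

# Tessellate a unit circle (a cylinder cross-section) into 256 vertices.
n = 256
verts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
         for i in range(n)]
norms = verts[:]   # on a unit circle the outward normal equals the position

# A high-frequency displacement: 16 square ridges around the circumference.
# 256 tessellated vertices can carry this detail; the handful of control
# points of a coarse patch representation could not.
ridges = lambda x, y: 0.05 * math.copysign(1.0, math.sin(16 * math.atan2(y, x)))

displaced = displace(verts, norms, ridges)
```

Every displaced vertex ends up at radius 0.95 or 1.05, i.e. the brick-wall-like ridges really exist in the geometry.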
> What would be interesting is support for adaptive render
> time subdivision of meshes with displacement.
>
Now, I would be interested to know how you could achieve that!
Honestly, I'm afraid you would need to turn each triangle into a
bounding volume and then perform the actual subdivision 'on intersection'.
To me, it sounds like just a trade of memory for time.
(OK, the bigger the initial triangles, the fewer costly top-level
intersections you have, but you are going to need more subdivision on the
fly. And from my personal point of view, I like to have all allocations
fully done once the scene has been parsed.)
I may be totally misled.
But you have the idea - give it a try!
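For what it's worth, here is a toy Python sketch of that 'on intersection' idea (entirely hypothetical, not a patch): each top-level triangle keeps only a bounding volume, and sub-triangles are generated when a ray hits that bound and discarded right after the test.

```python
# Sketch of subdivide-on-intersection (hypothetical, for illustration).

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tri):
    """Split one triangle into four by its edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def intersect(tri, ray_hits_bound, depth, stats):
    """On a bound hit, subdivide on the fly; leaves are tested, then freed."""
    if not ray_hits_bound(tri):
        return
    if depth == 0:
        stats["leaf_tests"] += 1    # the real ray-triangle test would go here
        return
    for child in subdivide(tri):    # temporary allocation, gone after the loop
        intersect(child, ray_hits_bound, depth - 1, stats)

stats = {"leaf_tests": 0}
tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
intersect(tri, lambda t: True, 3, stats)    # a ray that hits every bound
print(stats["leaf_tests"])                   # 4**3 = 64 leaf tests, yet only
                                             # ~4 sub-triangles alive at a time
```

It is indeed a memory-for-time trade: the subdivision work is repeated per ray unless results are cached.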
--
clothing and shoes as well as makeup and accessories.
Sascha Ledinsky wrote:
>>
>> Why would you want to do this if it is a mesh anyway and you could
>> displace the mesh (any mesh, not just patches)?
>
>
>
> Here is an example image of a renderman displacement shader applied to a
> sphere: [...]
Renderman uses scanline rendering techniques, so this is beside the
point here. And of course there is no self-shadowing in the image you
used as an example.
> Back to your question:
> I'd agree if we're talking about very-high-resolution triangle meshes
> only. Displacing the vertices has the same effect as applying a
> "displacement shader". But I disagree as far as NURBS, other patch
> meshes, or parametric surfaces in general are concerned (currently POV
> supports bicubic bezier patches only, and that not very well).
> Displacing just the controlpoints of a NURBS mesh has certainly not the
> desired effect!
I'd suggest you reread what I wrote. This remark shows you have
completely misunderstood it.
Christoph
Le Forgeron wrote:
>
>> What would be interesting is support for adaptive render
>>time subdivision of meshes with displacement.
>>
>
> Now, I would be interested in knowing how you can achieve that!
>
> Sincerely, I'm afraid you would need to transform each triangle into a
> bounding volume and then perform 'on intersection', the actual subdivision.
>
> For me, it sound just as a trade of memory for time.
Sure, but as you know, time always scales linearly, while you can't use
more memory than you have (at least not much more).
It has already been demonstrated that this is possible and that it also
works quite efficiently.
Christoph
Christoph Hormann wrote:
> Renderman is using scanline rendering techniques, this has nothing to do
> here.
Renderman is an interface, not an implementation, so it does not prescribe
any rendering technique at all. There are a lot of Renderman
implementations that use ray tracing.
The REYES algorithm "dices" the primitives into a micropolygon grid -
POV-Ray tessellates into a triangle mesh (I am talking about bicubic
patches only - and I assume that, if they're implemented someday,
NURBS and SDS will work the same way).
So there's no real difference in how parametric surfaces are handled.
REYES applies displacement to the micropolygon grid, and POV-Ray could
apply displacements to the tessellated triangle mesh.
> And there of course is no self shadowing in the image you used as
> example.
That's right... bad example. But it would self-shadow if shadows were
turned on :-)
> I'd suggest you reread what i wrote. This remark shows you have
> completely misunderstood it.
Hmmm...
Christoph Hormann wrote:
>Why would you want to do this if it is a mesh anyway and you could
>displace the mesh (any mesh, not just patches)?
Reread. Maybe I misunderstand. How would you displace a NURBS or patch mesh?
>What would be interesting is support for adaptive render time
>subdivision of meshes with displacement.
That's exactly what I was trying to say - sorry if the term
"tessellating" was wrong (I'm not a native English speaker); still, I feel
that it is more correct than "subdivision". You can subdivide a polygon
mesh to get a finer polygon mesh (that's how SDS works), but subdividing
a patch will only yield more patches...
-Sascha
You are mixing things here that don't belong together and that's also
the source of your misunderstanding. Try first to understand the terms
you are using here and you will see that it does not make sense.
To sum it up:
- it does not make any sense to add a displacement mapping feature to
any higher level shape in POV-Ray that is internally rendered as a mesh.
- it is not possible to add a displacement feature to any shape POV-Ray
renders analytically.
==> the only shape that could profit from a displacement mapping feature
is the mesh but apart from render time subdivision with displacement
this is already possible to implement in SDL (or external programs).
Christoph
Christoph Hormann <chr### [at] gmxde> wrote:
> - it is not possible to add a displacement feature to any shape POV-Ray
> renders analytically.
Change "possible" to "efficient" and then you may be right. (And it
probably shouldn't hurt putting a "probably" somewhere.)
> ==> the only shape that could profit from a displacement mapping feature
> is the mesh but apart from render time subdivision with displacement
> this is already possible to implement in SDL (or external programs).
"Is possible" and "is efficient" are two completely different things.
The idea in on-the-fly subdivision and displacement of meshes is that
it does not consume outrageous amounts of memory because the subdivision
and displacement can be done only to the part being currently rendered.
This way you can subdivide a mesh so that each triangle is split into
100 parts without the memory consumption growing noticeably. However,
if you subdivided the entire mesh so that each triangle was divided
into 100 parts and then rendered this entire mesh, the memory consumption
would skyrocket.
On-the-fly subdivision and displacement can't be done by modelling: It has
to be done by the renderer itself.
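The point can be put in rough numbers; the per-triangle size and mesh size below are assumptions for illustration only.

```python
# Back-of-envelope comparison (all sizes are assumptions, purely
# illustrative): subdividing every triangle 100-fold up front versus
# subdividing only the base triangle currently being rendered.

TRIANGLE_BYTES = 100        # rough storage per triangle (assumed)
base_triangles = 1_000_000  # mesh size (assumed)
split_factor = 100          # each triangle split into 100 parts

# Subdivide the whole mesh before rendering:
upfront_bytes = base_triangles * split_factor * TRIANGLE_BYTES

# On the fly: the base mesh, plus one triangle's sub-triangles at a time:
on_the_fly_bytes = (base_triangles + split_factor) * TRIANGLE_BYTES

print(upfront_bytes // 2**20, "MiB vs", on_the_fly_bytes // 2**20, "MiB")
# roughly 9536 MiB vs 95 MiB
```

The render time grows either way, but only the up-front variant can exhaust memory.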
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -