In article <3d777099@news.povray.org>,
"Niki Estner" <nik### [at] freenetde> wrote:
A similar idea, but much faster: use the normal perturbation pattern to
perturb the intersection distance. The outline of the object and its
shadow would be unaffected, but it would make a difference in CSG and
when another object is partially penetrating it, like a sphere embedded
in a plane. It would probably be too obviously fake to be worth
anything, though.
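To make the idea concrete, a rough sketch of what perturbing the intersection distance could look like (everything here is invented for illustration -- the noise function is just a stand-in for a POV-Ray pattern like leopard, not actual POV-Ray internals):

```python
import math

def noise(p):
    # Stand-in for a scalar pattern such as "leopard"; any field in [0, 1] works.
    x, y, z = p
    return 0.5 * (math.sin(x * 12.9898) * math.sin(y * 78.233)
                  * math.sin(z * 37.719) + 1.0)

def perturbed_intersection(ray_origin, ray_dir, t_hit, amount):
    # Shift the intersection distance along the ray by the pattern value
    # at the unperturbed hit point. The outline and shadow boundary stay
    # put (the ray still hits or misses exactly as before), but depths
    # change, which shows up in CSG and interpenetrating objects.
    hit = tuple(o + t_hit * d for o, d in zip(ray_origin, ray_dir))
    return t_hit + amount * (noise(hit) - 0.5)
```

With amount set to zero this degenerates to the unperturbed hit, which is one reason the trick is cheap: it is a single extra pattern evaluation per intersection.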
> I was just thinking how media works: a ray hits an object, several samples
> along the ray are taken, and the light along the ray is calculated. This
> takes time, but with GHz PCs...
> Just thinking aloud: A ray hits an object (say, a sphere). A second sphere
> which is a little smaller is tested against the same ray. Now I know a
> finite segment of the ray that is inside the "boundary" of the sphere. I'd
> apply a root solver to e.g. the leopard pattern plus the distance to the
> inner sphere along that finite segment (I'm afraid this isn't even close to
> being mathematically correct, I hope you can see what I mean...). So now I
> have the intersection of a sphere with a "leopard" hyper-texture. Of course
> this is slow, but currently I'm using millions of spheres, which is probably
> slower (at least as soon as Windows starts swapping).
Hmm...how is this different from an isosurface? Aside from the more
flexible container.
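To illustrate the overlap: the quoted scheme is more or less what an isosurface solver already does inside its container -- find the segment of the ray inside the container, then root-solve the field along it. A toy sketch (the field and container here are made up, and the ray direction is assumed to be unit length):

```python
import math

def sphere_intersect(o, d, center, r):
    # Entry/exit distances of a ray against the container sphere,
    # or None on a miss. Assumes d is unit length.
    oc = [o[i] - center[i] for i in range(3)]
    b = sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - r * r
    disc = b * b - c
    if disc < 0:
        return None
    s = math.sqrt(disc)
    return (-b - s, -b + s)

def field(p):
    # Toy "hypertexture" field: distance from the origin plus a ripple.
    r = math.sqrt(sum(x * x for x in p))
    return r + 0.1 * math.sin(10 * r)

def march_root(o, d, t0, t1, threshold, steps=200):
    # Sample the field along the segment inside the container, then
    # bisect the first sign change -- essentially what an isosurface
    # solver does inside its contained_by shape.
    def g(t):
        p = [o[i] + t * d[i] for i in range(3)]
        return field(p) - threshold
    prev_t, prev_v = t0, g(t0)
    for k in range(1, steps + 1):
        t = t0 + (t1 - t0) * k / steps
        v = g(t)
        if prev_v * v <= 0:
            lo, hi = prev_t, t
            for _ in range(40):
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev_t, prev_v = t, v
    return None
```

The only real difference from a stock isosurface is that the container is here a slightly shrunken copy of the object rather than a box or sphere.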
> Yes, close. Pigments like granite or ripples aren't interpreted either.
> There's one thing isosurfaces can't do: you can't wrap them around a
> height_field.
I think you might be able to warp the height field function in such a
way as to do this...maybe. But other objects are more of a problem: meshes,
CSG, julia fractals, etc. Deforming an isosurface is just not the same
thing as deforming a mesh: deforming the surface in a direction parallel
to the normal is easy for spheres or planes, but much harder for other
shapes, and some shapes just can't be represented as an isosurface (you
could probably do some bezier patches, but it would take some very
clever clipping, and wouldn't work for all patches).
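For the sphere case, displacing along the normal really is trivial, because the gradient of the implicit function is radial: you just subtract the pattern from the radius. A tiny sketch (the pattern function is a made-up stand-in for something like bozo):

```python
import math

def pattern(p):
    # Hypothetical bump pattern in [0, 1]; stands in for a real POV pattern.
    return 0.5 + 0.5 * (math.sin(p[0] * 7) * math.sin(p[1] * 7)
                        * math.sin(p[2] * 7))

def displaced_sphere(p, radius=1.0, amount=0.1):
    # For a sphere, the implicit function's gradient points along the
    # normal everywhere, so normal displacement is just an offset of |p|.
    # Negative inside, positive outside, zero on the displaced surface.
    r = math.sqrt(sum(x * x for x in p))
    return r - radius - amount * pattern(p)
```

For an arbitrary shape there is no such clean radial coordinate, which is exactly why this trick doesn't generalize.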
> I also think wrapping an isosurface around the boundary of a primitive should
> be faster because the boundary is smaller than the whole object, so less
> space has to be tested against some function etc.
So just make a way to specify more complex container shapes. I don't
think they have to be as limited as they are.
I just don't think the concept of a hypertexture applies to anything but
a surface-based rendering engine, using meshes, spline surfaces, etc.
It seems to me that the best alternative for POV is to add tessellation
capability for all objects. Add a general algorithm like marching
tetrahedra that will work for anything (though you will have to place
limits on it for infinite shapes), and more refined methods for spheres,
cones, etc. (for example, tessellating a triangle is a bit redundant, and
the general algorithms often just won't work well).
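For anyone curious, a bare-bones version of the general algorithm: minimal marching tetrahedra over a uniform grid, with a unit-sphere inside test standing in for an arbitrary object's. Triangle winding and degenerate cases aren't handled; this just shows why the method works for any shape with an inside/outside test.

```python
import itertools

def field(p):
    # Unit sphere as an implicit function; any object's inside test
    # (negative inside, positive outside) could stand in here.
    return sum(x * x for x in p) - 1.0

# Split each cube (corners indexed 0-7) into 6 tetrahedra sharing
# the 0-6 diagonal.
TETS = [(0,5,1,6),(0,1,2,6),(0,2,3,6),(0,3,7,6),(0,7,4,6),(0,4,5,6)]

def lerp(a, b, fa, fb):
    # Point where the field crosses zero on edge a-b (fa, fb differ in sign).
    t = fa / (fa - fb)
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

def tessellate(lo, hi, n):
    # Marching tetrahedra over an n^3 grid spanning [lo, hi]^3.
    tris = []
    h = (hi - lo) / n
    for ix, iy, iz in itertools.product(range(n), repeat=3):
        corners = [(lo + (ix + dx) * h, lo + (iy + dy) * h, lo + (iz + dz) * h)
                   for dx, dy, dz in [(0,0,0),(1,0,0),(1,1,0),(0,1,0),
                                      (0,0,1),(1,0,1),(1,1,1),(0,1,1)]]
        vals = [field(c) for c in corners]
        for tet in TETS:
            pts = [corners[i] for i in tet]
            fs = [vals[i] for i in tet]
            inside = [i for i in range(4) if fs[i] < 0]
            outside = [i for i in range(4) if fs[i] >= 0]
            if len(inside) in (1, 3):
                # One vertex separated from the other three: one triangle.
                a = inside[0] if len(inside) == 1 else outside[0]
                v = [lerp(pts[a], pts[b], fs[a], fs[b])
                     for b in range(4) if b != a]
                tris.append(tuple(v))
            elif len(inside) == 2:
                # Two-and-two split: a quad, emitted as two triangles.
                a, b = inside
                c, d = outside
                v = [lerp(pts[a], pts[c], fs[a], fs[c]),
                     lerp(pts[a], pts[d], fs[a], fs[d]),
                     lerp(pts[b], pts[d], fs[b], fs[d]),
                     lerp(pts[b], pts[c], fs[b], fs[c])]
                tris.append((v[0], v[1], v[2]))
                tris.append((v[0], v[2], v[3]))
    return tris
```

A real implementation would snap vertices along the field gradient (or run a few bisection steps per edge) to tighten the triangles onto the exact surface; that is where the shape-specific refinements would pay off.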
Then add support for performing complex operations on meshes:
deformations, subdivision, etc. Maybe add the capability to do subdivision
at render-time, so you don't have to store so many triangles, if you can
stand the slower rendering (similar to the bezier patch primitive).
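The simplest building block for that is 1-to-4 midpoint subdivision, applied per triangle at render time instead of stored up front (smoothing schemes like Loop would sit on top of this; the sketch only shows the splitting step):

```python
def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tri):
    # Split one triangle into four by connecting edge midpoints.
    # Done lazily per visible triangle, the dense mesh never needs
    # to exist in memory all at once.
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```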
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/