In article <3F1A7B17.DD5DBD0D@gmx.de>,
Christoph Hormann <chr### [at] gmx de> wrote:
> Well, it looks like a good start. Illustrate it with some images, some
> diagrams and formulas of the falloff functions, some render times for
> comparison, a syntax summary and you already have a quite helpful addition
> for the user.
Demanding, aren't we?
Considering the lack of interest I'm seeing in this, I'm putting this
patch on the back burner; there won't be another MP+ release until I have
more patches moved over. When that happens, there will be documentation
of the syntax, some more sample scenes, and improved documentation of the
source code. In the meantime, I have other projects I need to attend to
first.
> Well, in functions you could do the same on a point basis instead of ray
> basis and add some caching if the next point is near the old one. Surely
> it will be slower but as i said having it in isosurfaces would also have
> some serious advantages.
Caching is useless here...at least, I see no way of applying it that
doesn't just add overhead. And without ray info, you can't do the same
thing. The point of doing it on a per-ray basis is that you can take a
fairly expensive computation (the bounds test/component collection
stage) and use it to optimize a large number of expensive computations
(all the point evaluations needed to find the intersections with the
ray) that would otherwise be many times the first calculation. Drop that
and you're back to looking at every single component for every point
evaluated. You could still derive some benefit from a hierarchical
bounding scheme, but so could the existing algorithm.
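The two-stage scheme described above can be sketched roughly as follows. This is an illustrative toy, not POV-Ray's actual code: the names, the spherical bounds, and the quadratic falloff are all invented for the sketch. Each component is a `(center, radius, strength)` tuple.

```python
def ray_hits_sphere(origin, direction, center, radius):
    # One-time bounds test: does this ray pass within `radius` of `center`?
    # `direction` is assumed to be unit length.
    oc = [c - o for o, c in zip(origin, center)]
    t = sum(a * b for a, b in zip(oc, direction))        # closest-approach parameter
    closest = [o + t * d for o, d in zip(origin, direction)]
    d2 = sum((a - b) ** 2 for a, b in zip(closest, center))
    return d2 <= radius * radius

def active_components(origin, direction, components):
    # Stage 1 (once per ray): the moderately expensive bounds test /
    # component collection -- keep only components this ray can touch.
    return [comp for comp in components
            if ray_hits_sphere(origin, direction, comp[0], comp[1])]

def density(point, components):
    # Stage 2 (once per sample point): sum falloffs over the surviving
    # subset only, instead of over every component in the scene.
    total = 0.0
    for center, radius, strength in components:
        d2 = sum((p - c) ** 2 for p, c in zip(point, center))
        if d2 < radius * radius:
            total += strength * (1.0 - d2 / (radius * radius)) ** 2
    return total
```

The point is that stage 1 runs once per ray, while stage 2 runs for every sample point the root solver needs; pruning up front pays for itself many times over. A point-based function evaluator never sees the ray, so it cannot do stage 1 at all.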
There will be a blob2 pattern. This will not be able to use these
optimizations either, but like any other pattern, you will be able to
use it in isosurfaces. But I have good reasons for not doing it this way
for the blob2 primitive.
> Note that handcoded isosurface functions for blobbing or CSGing many
> components scale extremely badly, even if an internal function for this
> would not be as fast as your new shape it would be ways faster than the
> manual approach.
By scaling, you appear to mean performance with increasing numbers of
components. Removing these optimizations would make the order of the
algorithm equal to that of hand-coded functions: performance would
deteriorate linearly as the number of components increases. You would
only get the benefit of compiled functions (plus the fact that the
functions themselves are more optimized).
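The scaling difference can be put in back-of-the-envelope terms. This is a hypothetical cost model with made-up numbers, just to illustrate the argument: with N components and S sample points per ray, a hand-coded function evaluates every component at every point, while the per-ray scheme pays N bounds tests once and then only evaluates the A components the ray actually crosses.

```python
def handcoded_cost(n_components, samples):
    # Every component evaluated at every sample point.
    return n_components * samples

def per_ray_cost(n_components, active, samples):
    # One bounds test per component, then only the active subset per point.
    return n_components + active * samples

# e.g. 100 components, 50 samples per ray, 3 components near the ray:
# 100 * 50 = 5000 falloff evaluations versus 100 + 3 * 50 = 250.
```

Both are linear in N for a fixed ray, but the per-ray scheme's per-point term depends on A, which stays small no matter how many components the scene contains.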
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/