Wasn't it Orchid XP v3 who wrote:
>>> (I read somewhere that dividing your function by a big number makes
>>> max_gradient lower, but doesn't make it trace any faster. But this trick
>>> makes the big numbers small, but leaves the small numbers alone so
>>> POV-Ray doesn't go around taking tiny steps there any more.)
>>
>> That sounds like me
>>
>> http://www.econym.demon.co.uk/isotut/dont.htm
>
>Yeah.
>
>Ooo... "you can't use arrays in an isosurface function". Um... well... I
>just did. :-.
I didn't say you can't use arrays. I said
you can't index an array inside an isosurface function.
You can use individual fixed elements of an array, like A[3], but you
can't use the function variables to index the array, as in A[floor(x)].
However, you can do this:
http://www.econym.demon.co.uk/isotut/arrays.htm
--
Mike Williams
Gentleman of Leisure
> However, since it's the isosurface of an 18th degree polynomial, I've had
> to set max_gradient=exp(18) to get it to render correctly.
POV-Ray isosurfaces are fastest when the actual gradient does not differ
too much from max_gradient. Therefore I always try to code isosurface
functions such that their value is the actual distance from the surface.
Of course, in general this is only approximately and/or locally true.
In your case I would basically wrap an 18th root around the function.
(You still have to take some care with small and negative values.
Assuming a threshold of 1, you could wrap pow(max(1/2, ... ),1/18).)
Of course this is similar to your own idea of wrapping a logarithm.
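A quick numeric sanity check of the root-wrapping idea (my own toy example, not the actual 18th degree scene): with a threshold of 1, f(t) = t^18 and pow(max(1/2, f), 1/18) cross the threshold at the same t, but the wrapped version's gradient bound collapses from millions to about 1.

```python
# Toy check of the 18th-root wrapper (hypothetical function, threshold 1):
# f crosses threshold 1 at t = 1; g = pow(max(1/2, f), 1/18) crosses there too.
f = lambda t: t ** 18
g = lambda t: max(0.5, f(t)) ** (1.0 / 18)

def max_slope(fn, lo, hi, n=10000):
    """Crude numeric bound on |d fn / dt| over [lo, hi]."""
    h = (hi - lo) / n
    return max(abs(fn(lo + (i + 1) * h) - fn(lo + i * h)) / h
               for i in range(n))

print(max_slope(f, 0.0, 2.0))   # roughly 18 * 2**17: in the millions
print(max_slope(g, 0.0, 2.0))   # about 1: g(t) = t wherever f(t) > 1/2
```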
> And so, I've been sitting here thinking about all the ways POV-Ray could
> arrive at its result faster. The trouble is, I can think of quite a few
> that would work for *this* shape, but would fail in special cases. It
> just so happens that the shape I'm trying to trace has a huge max
> gradient, but is actually a very simple shape.
We all learned at school how to take derivatives of functions. In fact
this is an algorithm on the symbolic representation of the functions
(as opposed to numerics) and could be incorporated into POV-Ray (maybe
except for some inbuilt pattern functions, I don't know about those...).
So instead of assuming the gradient to be max_gradient, the isosurface
algorithm could compute the actual gradient. Of course, in order to know
how far this computed gradient can be trusted, the *second* derivative
must be bounded or computed. And if it is computed, we need to know a
bound or compute the third, and so on. So this would amount to deferring
the max_gradient specification until some later derivative which hopefully
does not change too much. In your case that would be the 19th derivative,
with a bound of 0.
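The "differentiation is an algorithm on the symbolic representation" point can be sketched in one variable (a sketch of mine, nothing POV-Ray actually does): represent a polynomial by its coefficient list and differentiate it mechanically. For an 18th degree polynomial the 18th derivative is a constant and the 19th is identically zero.

```python
# Differentiate a polynomial given as a coefficient list: coeffs[i] is the
# coefficient of t**i. Purely symbolic - no numerics involved.
def deriv(coeffs):
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

p = [0] * 18 + [1]          # t**18
d = p
for _ in range(18):
    d = deriv(d)
print(d)                    # [18!] - the 18th derivative is a constant
print(deriv(d))             # [0]  - and the 19th is identically zero
```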
I am unsure, though, whether this actually has merit: with multiplications the
derivatives can be a lot more lengthy than the original function. Also,
as the functions depend on 3 variables, there are 3^n derivatives of
order n to be considered. This can get out of hand rather faster than it
yields any benefit.
Mark Weyer
Mark Weyer <nomail@nomail> wrote:
> POV-Ray isosurfaces are fastest when the actual gradient does not differ
> too much from max_gradient.
You mean an isosurface which renders *correctly* is fastest when its
gradient and the max_gradient are close to each other?
Naturally a lower max_gradient will always produce a faster isosurface,
but it won't render correctly if it's too low, of course.
--
- Warp
Warp <war### [at] tagpovrayorg> wrote:
> Mark Weyer <nomail@nomail> wrote:
> > POV-Ray isosurfaces are fastest when the actual gradient does not differ
> > too much from max_gradient.
>
> You mean an isosurface which renders *correctly* is fastest when its
> gradient and the max_gradient are close to each other?
My statement was meant as a rule of thumb and is imprecise in several ways.
Furthermore, it is only based on how I think it works.
My point is that I have found it worthwhile to get rid of 'nonlinearities'
in iso functions. Extra wrappers of sqrt or the like might take more time per
sample, but they make the algorithm require far fewer samples. Of course I am
only interested in correct renderings.
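As a toy illustration of the fewer-samples point (my own sketch of a max_gradient-style stepping rule, not POV-Ray's actual solver): marching along one axis with steps of value/max_gradient, the sqrt-wrapped function reaches its root in a single step while the raw squared function needs hundreds.

```python
import math

# Toy 1-D "march": step forward by value/max_gradient until the function is
# nearly zero - a crude stand-in for how max_gradient throttles sampling.
def march(fn, max_grad, t=0.0, eps=1e-3, limit=10**6):
    steps = 0
    while fn(t) > eps and steps < limit:
        t += fn(t) / max_grad
        steps += 1
    return steps

raw = lambda t: (5.0 - t) ** 2         # root at t = 5, max gradient 10 on [0,5]
wrapped = lambda t: math.sqrt(raw(t))  # same root, max gradient 1

print(march(raw, 10.0), march(wrapped, 1.0))  # hundreds of steps vs. just 1
```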
> Naturally a lower max_gradient will always produce a faster isosurface,
> but it won't render correctly if it's too low, of course.
In this sense an incorrectly rendered iso renders fastest when it is omitted
(or maybe replaced by its contained_by -- that can even speed things up by
eclipsing complex objects) ;-)
I recall some SCC entries that use incorrect iso rendering on purpose.
But I am not aware of a 'serious' application of this practice...
Mark Weyer
On Mon, 28 Aug 2006 15:58:02 +0100, Orchid XP v3 wrote:
> I just had a brainwave!
>
> I changed fn(x, y, z) to log(fn(x, y, z)), and changed the max_gradient
> to 4. I'm now getting several thousand pixels/second.
>
> Of course - the logarithm function makes the crazy vertical sides less
> "vertical", while not flattening out the parts near the solutions so
> POV-Ray still has some contours to go on.
>
> (I read somewhere that dividing your function by a big number makes
> max_gradient lower, but doesn't make it trace any faster. But this trick
> makes the big numbers small, but leaves the small numbers alone so
> POV-Ray doesn't go around taking tiny steps there any more.)
Okay, so I'm making the mistake of reading this group even though I'm not
really an expert at POV-Ray (I have made a couple of moderately
complex scenes, but compared to what's on the IRTC I'm a novice), and I'm
finding this isosurface stuff fascinating. So I'm going to attempt to
contribute with my knowledge of maths and some educated guesses about how
the isosurface stuff works. Feel free to correct or flame me.
AIUI, the isosurface stuff works by assuming that the function's value
won't change by more than x/max_gradient over a distance of x, so it uses
this and an algorithm sorta like newton's method or the sturmian root
solver to find points that lie on the surface. This all gets ugly if the
function, like, has an order greater than 1, or has an exponential or 1/x
in it, etc, since these functions have derivatives that increase
indefinitely in certain regions.
But all of these functions can be mapped to functions where there is a
one-to-one mapping between the zero values (or the values that we want
mapped to the surface), and whose derivatives are bounded. For example,
x^2 = 10
Its derivative is 2x, which increases indefinitely, but if we go
sqrt(x^2) = sqrt(10)
x = sqrt(10)
Then we end up with a function whose derivative is bounded and whose 0
locus (sorry if I'm not using correct terminology here) is still the same.
This is what Orchid effectively did with his exponential function. By
taking the log of the whole function he created a new function with the
same zero set but a much better-behaved derivative.
For the function x^2 this is trivial - it produces a new function which is
easily used in an isosurface anyway. Orchid's function was less obvious.
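The x^2 = 10 example in numbers (my sketch): a strictly increasing wrapper like sqrt moves the threshold but not the solution, while flattening the derivative to a constant.

```python
import math

# Same solution, tamer derivative: solving f(x) = 10 and sqrt(f(x)) = sqrt(10)
# pick out exactly the same x, but the wrapped slope is bounded.
f = lambda x: x * x
g = lambda x: math.sqrt(f(x))          # g(x) = |x|, slope magnitude 1

x0 = math.sqrt(10)                     # the common solution
print(abs(f(x0) - 10.0))               # essentially 0
print(abs(g(x0) - math.sqrt(10.0)))    # essentially 0

h = 1e-6
print((f(3 + h) - f(3)) / h)           # about 6: grows with x
print((g(3 + h) - g(3)) / h)           # about 1: bounded everywhere
```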
I'm curious, could this kind of thing theoretically be done automatically
by povray? What if it were to use log(function) or sqrt(function) or
1/function to find the surface points rather than the actual function, and
the max_gradient were to apply to that function instead of the original?
Mick.
--
Remove the -news from my email address.
http://mickworld.knobbits.org/
> AIUI, the isosurface stuff works by assuming that the function's value
> won't change by more than x/max_gradient over a distance of x, so it uses
> this and an algorithm sorta like newton's method or the sturmian root
> solver to find points that lie on the surface.
[Actually it's x TIMES max_gradient, but you've got the idea.]
POV-Ray samples the function more densely when it gets "close" to a
zero, and less densely when it's nowhere near one. To know what "near"
is, it must know the maximum gradient. It doesn't actually need to know
multiple derivatives or anything; it just needs an upper bound on how
densely it needs to sample to avoid missing any zeros.
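A sketch of why that upper bound matters (my own toy stepping rule, not the real solver): with an honest bound the march lands on the root, while with a bound that is too low the steps are too long and the root is skipped.

```python
# Step by |value|/max_grad; this is only safe if max_grad truly bounds the slope.
def find_root(fn, max_grad, t=0.0, t_max=10.0, n_max=10000):
    for _ in range(n_max):
        v = fn(t)
        if abs(v) < 1e-6:
            return t                   # landed on the root
        t += abs(v) / max_grad
        if t > t_max:
            return None                # stepped past the domain: root missed
    return t

f = lambda t: 5.0 - t                  # root at t = 5, true |slope| = 1

print(find_root(f, 1.0))               # 5.0: honest bound finds the root
print(find_root(f, 0.5))               # None: bound too low, root overshot
```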
> This all gets ugly if the
> function, like, has an order greater than 1, or has an exponential or 1/x
> in it, etc, since these functions have derivatives that increase
> indefinitely in certain regions.
More specifically, it gets ugly if the function contains discontinuities
or poles. Beyond that, it works for arbitrary functions. (Which is why
POV-Ray uses it. Other methods work only for polynomials or only for a
function you have the derivative of, etc. But this method works for
*anything*, provided it has "nice" numerical properties.)
> For example,
>
> x^2 = 10
>
> Its derivative is 2x which increases indefinitely
True - but POV-Ray only ever renders A FINITE SECTION of the function.
;-) Within that section, the derivative will be bounded.
This doesn't alter the fact that the derivative could well be
excessively big, causing POV-Ray to take drastically more samples than
are really necessary near the zeros (where the derivative is actually
much smaller).
> but if we go
>
> x = sqrt(10)
>
> Then we end up with a function whose derivative is bounded and whose 0
> locus (sorry if I'm not using correct terminology here) is still the same.
Correct. The surface to be rendered hasn't changed, but the derivative
doesn't vary so wildly any more. (In fact, in this trivial case it's
constant.)
> This is what Orchid effectively did with his exponential function. By
> taking the log of the whole function he created a new function with the
> same properties.
My particular function was something like x^18 + ... = 0. It has almost
vertical sides, yet the "interesting" part has a tiny derivative. By
using the log() function I can "magnify" the interesting bit, and shrink
the unimportant vertical walls that are confusing the poor root solver.
However, note that you must make sure the function is NEVER NEGATIVE,
since log(-1) is a complex number, and POV-Ray won't like that. This
sometimes requires you to translate the function in the Y-direction,
take the log, and then translate the result back again:
f2(x) = k + log(f1(x) - k)
Selecting the correct k turns out to be very critical. The log function
shrinks large features, but it can also "magnify" small ones. If you get
the "wrong" value for k, you can end up creating a big vertical side RIGHT
NEXT TO THE ZEROS. That means you MUST use a high max_gradient or
POV-Ray will overshoot the zeros - the exact thing we were trying to
avoid...
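Rough numbers for the log trick on a toy function (my own sketch; it assumes the function stays positive, with the threshold at 1 before the log and at 0 after):

```python
import math

# f crosses threshold 1 at t = 1; log(f) crosses threshold 0 there too,
# but its slope is 18/t instead of 18*t**17.
f = lambda t: t ** 18
g = lambda t: math.log(f(t))

def max_slope(fn, lo, hi, n=10000):
    """Crude numeric bound on |d fn / dt| over [lo, hi]."""
    h = (hi - lo) / n
    return max(abs(fn(lo + (i + 1) * h) - fn(lo + i * h)) / h
               for i in range(n))

print(max_slope(f, 1.0, 2.0))   # about 18 * 2**17: in the millions
print(max_slope(g, 1.0, 2.0))   # at most 18, attained near t = 1
```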
(The problem gets even more fun if you start taking double-logs!)
> I'm curious, could this kind of thing theoretically be done automatically
> by povray? What if it were to use log(function) or sqrt(function) or
> 1/function to find the surface points rather than the actual function, and
> the max_gradient were to apply to that function instead of the original?
For every function for which this is faster, there is another one for
which it is slower. It would probably take more time to detect which
case applies than to just use the "wrong" variant. (See above.)
OTOH, it might be worth making a *keyword* to do this kind of
optimisation automatically instead of altering the function by hand...
On Wed, 27 Sep 2006 20:44:05 +0100, Orchid XP v3 wrote:
> My particular function was something like x^18 + ... = 0. It has almost
> vertical sides, yet the "interesting" part has a tiny derivative. By
> using the log() function I can "magnify" the interesting bit, and zoom
> out the unimportant vertical walls that are confusing the poor root solver.
>
> However, note that you must make sure the function is NEVER NEGATIVE,
> since log(-1) is a complex number, and POV-Ray won't like that. This
> sometimes requires you to translate the function in the Y-direction,
> take the log, and then translate the result back again:
>
> f2(x) = k + log(f1(x) - k)
>
> Selecting the correct k turns out to be very critical. The log function
> shrinks large features, but it can also "magnify" small ones. If you get
> the "wrong" value for k, you can end up creating a big vertical side RIGHT
> NEXT TO THE ZEROS. That means you MUST use a high max_gradient or
> POV-Ray will overshoot the zeros - the exact thing we were trying to
> avoid...
Yeah, the magnification of small values (which also applies to, say,
sqrt(x)) is something I forgot about. I guess real-world functions would
be less trivial to "map" in the sense I was talking about.
> (The problem gets even more fun if you start taking double-logs!)
Why would you want to do that for a function other than a
double-exponential? Masochism? Maths fetish? :)
>> I'm curious, could this kind of thing theoretically be done
>> automatically by povray? What if it were to use log(function) or
>> sqrt(function) or 1/function to find the surface points rather than the
>> actual function, and the max_gradient were to apply to that function
>> instead of the original?
>
> For every function for which this is faster, there is another one for
> which it is slower. It would probably take more time to detect which
> case applies than to just use the "wrong" variant. (See above.)
>
> OTOH, it might be worth making a *keyword* to do this kind of
> optimisation automatically instead of altering the function by hand...
Also, I just realised that most functions with ever-increasing
gradients could be reined in by putting them inside a min() or max() function,
or both. Would that work with your exp function?
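A hedged numeric check of the clamping idea (my own toy example, not an answer from the thread): capping f = t^18 with min() just above its threshold of 1 keeps the crossing in place and slashes the gradient bound, at the cost of a flat plateau away from the surface.

```python
# Cap the toy function just above its threshold of 1 (all numbers made up).
f = lambda t: t ** 18
clamped = lambda t: min(f(t), 2.0)

def max_slope(fn, lo, hi, n=10000):
    """Crude numeric bound on |d fn / dt| over [lo, hi]."""
    h = (hi - lo) / n
    return max(abs(fn(lo + (i + 1) * h) - fn(lo + i * h)) / h
               for i in range(n))

print(clamped(1.0))                    # 1.0: the threshold crossing is untouched
print(max_slope(f, 0.0, 2.0))          # in the millions
print(max_slope(clamped, 0.0, 2.0))    # a few tens: the slope of t**18 where f = 2
```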
Mick.
--
Remove the -news from my email address.
http://mickworld.knobbits.org/
Michael (Micksa) Slade wrote on 28/09/2006 09:20:
> On Wed, 27 Sep 2006 20:44:05 +0100, Orchid XP v3 wrote:
>
>> My particular function was something like x^18 + ... = 0. It has almost
>> vertical sides, yet the "interesting" part has a tiny derivative. By
>> using the log() function I can "magnify" the interesting bit, and zoom
>> out the unimportant vertical walls that are confusing the poor root solver.
>>
>> However, note that you must make sure the function is NEVER NEGATIVE,
>> since log(-1) is a complex number, and POV-Ray won't like that. This
>> sometimes requires you to translate the function in the Y-direction,
>> take the log, and then translate the result back again:
>>
>> f2(x) = k + log(f1(x) - k)
>>
>> Selecting the correct k turns out to be very critical. The log function
>> shrinks large features, but it can also "magnify" small ones. If you get
>> the "wrong" value for k, you can end up creating a big vertical side RIGHT
>> NEXT TO THE ZEROS. That means you MUST use a high max_gradient or
>> POV-Ray will overshoot the zeros - the exact thing we were trying to
>> avoid...
>
> Yeah, the magnification of small values (which also applies to, say,
> sqrt(x)), is something I forgot about. I guess real world functions would
> be less trivial to "map" in the sense I was talking about.
>
>> (The problem gets even more fun if you start taking double-logs!)
>
> Why would you want to do that for a function other than a
> double-exponential? Masochism? Maths fetish? :)
>
> Mick.
>
You can come across functions that have even more extreme gradients, where a single
log still gives you a max_gradient in the thousands - a 20th degree polynomial or
higher, or an x^(10x) kind of case. You then may want to use double logs. In that case, it
can be useful to use log(max(0, log(YourFunction))) to clip negative values.
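For a doubly-exponential toy case (my numbers, not Alain's): a single log still leaves a gradient in the thousands over a modest range, while a second log flattens it completely.

```python
import math

# f(t) = exp(exp(t)) overflows quickly if evaluated directly; one log
# leaves exp(t), a second log leaves plain t.
one_log = lambda t: math.exp(t)        # log(f)
two_log = lambda t: t                  # log(log(f))

def max_slope(fn, lo, hi, n=10000):
    """Crude numeric bound on |d fn / dt| over [lo, hi]."""
    h = (hi - lo) / n
    return max(abs(fn(lo + (i + 1) * h) - fn(lo + i * h)) / h
               for i in range(n))

print(max_slope(one_log, 0.0, 8.0))    # about exp(8), i.e. in the thousands
print(max_slope(two_log, 0.0, 8.0))    # about 1
```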
--
Alain
-------------------------------------------------
If That Phone Was Up Your Butt, Maybe You Could Drive A Little Better!
I suppose you could try using the evaluate keyword instead of just giving
max_gradient... go with evaluate 1e+17,3,0.1 or so, and maybe it'd run at
a decent speed (if you're lucky) without mucking about with renormalizing
the function.
> I suppose you could try using the evaluate keyword instead of just giving
> max_gradient... go with evaluate 1e+17,3,0.1 or so, and maybe it'd run at
> a decent speed (if you're lucky) without mucking about with renormalizing
> the function.
The evaluate keyword is only for figuring out what max_gradient should
be optimally set to. It doesn't change the need to make the function
*reasonably* well behaved to get good performance...