POV-Ray : Newsgroups : povray.binaries.scene-files : A poly 16 file for Le_Forgeron : Re: A poly 16 file for Le_Forgeron
  Re: A poly 16 file for Le_Forgeron  
From: Le Forgeron
Date: 20 Dec 2010 15:19:38
Message: <4d0fba5a@news.povray.org>
On 20/12/2010 18:02, Jaap Frank wrote:
>> "Le_Forgeron" wrote in message news:4d0ef88b@news.povray.org...
>> That does not give me a clue about the issue in the solver, maybe
>> someone with more insight on the subject could help.
> 
> I thought about it and came up with the following reasoning:
> As I remember, the ray direction is substituted into the equation.

It all happens in Poly::intersect().
Well, the start and direction of the ray are substituted into the equation.
From x,y,z and the polynomial of order N in x,y,z we get a polynomial
of the same (or lower) order in t, with x = S.x+D.x*t (and likewise for
y and z; S being the start of the ray, D its direction).

Then the solver is called, and it stacks the result in the Depth stack.
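As a rough sketch of that substitution (not POV-Ray's actual code; `Vec3`, `substitute_sphere`, and the layout are illustrative), here is the simplest non-trivial case, a sphere x^2 + y^2 + z^2 - r^2 = 0:

```cpp
#include <array>

// Illustrative only: a minimal Vec3, not POV-Ray's vector type.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Substitute the ray P(t) = S + D*t into x^2 + y^2 + z^2 - r^2 = 0.
// The order-2 polynomial in x,y,z becomes an order-2 polynomial in t:
//   (D.D) t^2 + 2 (S.D) t + (S.S - r^2) = 0
// Coefficients are returned constant-first.
std::array<double, 3> substitute_sphere(const Vec3& S, const Vec3& D, double r) {
    return { dot(S, S) - r * r,   // constant term
             2.0 * dot(S, D),     // coefficient of t
             dot(D, D) };         // coefficient of t^2
}
```

For example, a ray starting at (0,0,-2) aimed along +z against a unit sphere yields t^2 - 4t + 3, with roots t = 1 and t = 3, the two surface hits.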

> We get noise as the distance gets bigger, so larger values make the
> solver go astray.
> The distance vector can be normalised without changing its meaning, so
> would it help to normalise the direction vector first,
> or is that already done? I can't remember whether the position vector is
> substituted as well, but normalising that vector would change its
> meaning, and I can't think of a reason to do so. But who am I :-)

The normalisation of the direction is done in Poly::All_Intersections()
(which then calls Poly::intersect()).
When the stack of t values comes back, any t smaller than DEPTH_TOLERANCE
(1.0e-4, in polynomial space, measured from the start of the ray) is
rejected, presumably to avoid spurious hits at the ray origin.
Each t[k] is also compared against t[0]..t[k-1] to avoid duplicate roots.
When t[k] is ok, the point is converted back into scene space and checked
against the clipping object. If it still passes, t[k] (and its point in
scene space) is pushed onto the actual intersection stack, undoing the
normalisation of the direction by using t[k]/len, len being the original
length of the direction vector.
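A hedged sketch of that acceptance pass might look like this (the names are mine and the clipping-object test is elided; this is not the real Poly::All_Intersections()):

```cpp
#include <cstddef>
#include <vector>

constexpr double DEPTH_TOLERANCE = 1.0e-4;

// Sketch of the root-acceptance pass: reject roots too close to the ray
// start, skip duplicates, then rescale by the original direction length
// to undo the normalisation.  Illustrative code, not POV-Ray's.
std::vector<double> accept_roots(const std::vector<double>& t_values, double len) {
    std::vector<double> depths;                  // scene-space depths
    for (std::size_t k = 0; k < t_values.size(); ++k) {
        double t = t_values[k];
        if (t < DEPTH_TOLERANCE) continue;       // too close to the ray start
        bool duplicate = false;
        for (std::size_t j = 0; j < k; ++j)      // search t[0]..t[k-1]
            if (t_values[j] == t) { duplicate = true; break; }
        if (duplicate) continue;
        // clipping-object test would go here in the real code
        depths.push_back(t / len);               // undo direction normalisation
    }
    return depths;
}
```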


> 
> Is it possible for you to send me the newest poly.c and possibly
> your binary of POV, so I can study the solver again and can experiment
> with the poly16? I recently retired from work and
> have plenty of time now to crunch my brain on it ;-)

My binary is for linux/ubuntu amd64; would that work for you?
More than just poly.cpp is impacted; hopefully it should appear
soon (or would a diff/patch file be ok for you?).


> 
> Do you have a clue for me about the mathematics of the solver?

It's all in math/polysolv.cpp... but even the three FUDGE_FACTOR
constants seem like magic to me.
If the coefficient of the lowest power (as updated by the transformation
into t) is less than ROOT_TOLERANCE (1.0e-4), the order is reduced by 1
(because t = 0 is an obvious solution of any t·(anything) = 0).

I wonder if it would not be better to iterate that order reduction in a
loop instead of doing it only once.
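Such a looped reduction might be sketched like this (illustrative code, not a patch against polysolv.cpp):

```cpp
#include <cmath>
#include <vector>

constexpr double ROOT_TOLERANCE = 1.0e-4;

// Sketch of the iterated order reduction: while the lowest-power
// coefficient is (nearly) zero, t = 0 is a root, so divide the
// polynomial by t, i.e. drop the constant term.
// Coefficients are stored constant-first: c[0] + c[1]*t + c[2]*t^2 + ...
std::vector<double> reduce_order(std::vector<double> c) {
    while (c.size() > 1 && std::fabs(c.front()) < ROOT_TOLERANCE)
        c.erase(c.begin());    // divide by t: shift coefficients down
    return c;
}
```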

Then (in polysolve) the order of the coefficients is reversed and each
one is divided by the leading coefficient (so the last one of the new
sequence is 1, if I'm right; the constant term is first and the highest
power is last now).
Black magic about the Sturm sequence... and then bisection is used to
find the roots (it may be recursive).

If only one root remains, the regula falsi method is tried; if it fails,
bisection continues:

For far points, if the interval divided by its midpoint value would be
smaller than RELERROR, the root is set to the midpoint.

For near points, if the interval itself would be smaller than RELERROR,
the root is set to the midpoint.

(RELERROR is 1.0e-12)
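A sketch of those two stopping tests follows; the cut-over between "far" and "near" midpoints is my assumption, the real code may decide differently:

```cpp
#include <cmath>

constexpr double RELERROR = 1.0e-12;

// Sketch of the two convergence tests described above: a relative test
// for midpoints far from zero, an absolute test for midpoints near zero.
// The |mid| > 1 cut-over is illustrative, not taken from polysolv.cpp.
bool interval_converged(double lo, double hi) {
    double mid   = 0.5 * (lo + hi);
    double width = hi - lo;
    if (std::fabs(mid) > 1.0)                    // far point: relative test
        return width / std::fabs(mid) < RELERROR;
    return width < RELERROR;                     // near point: absolute test
}
```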

Regula falsi (when there is only one root to find):
Evaluate the polynomial at each end.
If both ends have the same sign, failure.
If one end is small enough (less than SMALL_ENOUGH: 1.0e-10), select that
end as the root.
Otherwise iterate: compute x as the linear interpolation between both
ends (assuming the curve is a straight line) and evaluate the polynomial
at x.
  If the value is far from 0 (> RELERROR) but value/x < RELERROR,
x is the root. (huh ???)

  If the value is near 0 (< RELERROR), x is the root.

  Otherwise, replace the end whose value has the same sign as the value
at x with x (and divide the value of the other end by 2 in some/most
cases; it seems to happen when two consecutive end replacements occur on
the same side, probably meaning the influence of the other (fixed) end
is too strong; this halving is the classic Illinois modification of
regula falsi).
  If the new interval to consider is small enough (< RELERROR), exit the
loop (x is the root anyway; it's the fresh end).
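The whole procedure can be sketched as follows; the halving step is the Illinois modification, which keeps one endpoint from getting stuck. All names here are illustrative, not POV-Ray's actual code:

```cpp
#include <cmath>
#include <vector>

constexpr double RELERROR     = 1.0e-12;
constexpr double SMALL_ENOUGH = 1.0e-10;

// Evaluate a polynomial stored constant-first, using Horner's method.
double eval(const std::vector<double>& c, double x) {
    double v = 0.0;
    for (auto it = c.rbegin(); it != c.rend(); ++it)
        v = v * x + *it;
    return v;
}

// Sketch of regula falsi with the Illinois modification: when the same
// end is replaced twice in a row, halve the stored value of the other
// (stale) end so it stops dominating the interpolation.
bool regula_falsa(const std::vector<double>& c, double a, double b, double& root) {
    double fa = eval(c, a), fb = eval(c, b);
    if (fa * fb > 0.0) return false;                  // same sign: no bracketed root
    if (std::fabs(fa) < SMALL_ENOUGH) { root = a; return true; }
    if (std::fabs(fb) < SMALL_ENOUGH) { root = b; return true; }
    int last_side = 0;
    for (int i = 0; i < 100; ++i) {
        double x  = (fb * a - fa * b) / (fb - fa);    // linear interpolation
        double fx = eval(c, x);
        if (std::fabs(fx) < RELERROR) { root = x; return true; }
        if (fa * fx < 0.0) {                          // root now in [a, x]
            b = x; fb = fx;
            if (last_side == -1) fa *= 0.5;           // Illinois: shrink stale end
            last_side = -1;
        } else {                                      // root now in [x, b]
            a = x; fa = fx;
            if (last_side == +1) fb *= 0.5;
            last_side = +1;
        }
        if (std::fabs(b - a) < RELERROR) { root = x; return true; }
    }
    return false;
}
```

For example, the root of t^2 - 2 bracketed by [0, 2] converges to sqrt(2).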


In the Sturm magic: it seems to compute the coefficients of the
derivative (dropping the constant term, scaling each coefficient by
its power divided by the maximum order;
so 1 + x + x^2 + x^3 gives 1/3 + 2x/3 + 3x^2/3).
Then it boggles my mind, because it goes down one more level (and
repeats) by computing the remainder of the first equation divided by the
second (its derivative), removing new coefficients smaller than
SMALL_ENOUGH (1.0e-10) on the high-power side in the process.
And before computing the next remainder (derivative divided by the first
remainder), it reverses the sign and normalises the highest power (so it
is -1).
At the last stage (where the result is only a constant), the coefficient
only has its sign reversed.
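The textbook construction behind that magic is the Sturm sequence f0 = p, f1 = p', f_{k+1} = -(f_{k-1} mod f_k). A minimal sketch, without POV-Ray's rescaling of the leading coefficient (all names illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Polynomials stored constant-first: c[0] + c[1]*x + c[2]*x^2 + ...
using Poly = std::vector<double>;

Poly derivative(const Poly& p) {
    Poly d;
    for (std::size_t i = 1; i < p.size(); ++i)
        d.push_back(p[i] * static_cast<double>(i));
    return d;
}

// Remainder of a divided by b, by polynomial long division; tiny leading
// coefficients are trimmed, echoing the SMALL_ENOUGH filtering above.
Poly remainder(Poly a, const Poly& b) {
    while (!a.empty() && a.size() >= b.size()) {
        double q = a.back() / b.back();
        std::size_t shift = a.size() - b.size();
        for (std::size_t i = 0; i < b.size(); ++i)
            a[i + shift] -= q * b[i];
        a.pop_back();                        // leading term cancels exactly
    }
    while (a.size() > 1 && std::fabs(a.back()) < 1.0e-10)
        a.pop_back();                        // drop tiny leading coefficients
    return a;
}

// Textbook Sturm chain: p, p', then negated remainders down to a constant.
std::vector<Poly> sturm_chain(const Poly& p) {
    std::vector<Poly> chain{ p, derivative(p) };
    while (chain.back().size() > 1) {
        Poly r = remainder(chain[chain.size() - 2], chain.back());
        for (double& c : r) c = -c;          // negate the remainder
        chain.push_back(r);
    }
    return chain;
}
```

For p = x^2 - 1 this produces the chain {x^2 - 1, 2x, 1}.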

Using the array of (Sturm) equations, it computes the number of roots
between +infinity and 0.

numchanges & visible_roots seem to be doing the same kind of job,
with +infinity replaced by MAX_DISTANCE (1.0e7).
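Counting roots then reduces to counting sign changes of the chain at the two interval ends: by Sturm's theorem, changes(a) - changes(b) is the number of distinct real roots in (a, b]. A sketch of what numchanges does (names illustrative):

```cpp
#include <vector>

using Poly = std::vector<double>;   // constant-first coefficients

// Evaluate a polynomial stored constant-first (Horner's method).
double eval(const Poly& c, double x) {
    double v = 0.0;
    for (auto it = c.rbegin(); it != c.rend(); ++it)
        v = v * x + *it;
    return v;
}

// Count sign changes along the Sturm chain evaluated at x, skipping
// exact zeros as the theorem requires.
int sign_changes(const std::vector<Poly>& chain, double x) {
    int changes = 0;
    double prev = 0.0;
    for (const Poly& f : chain) {
        double v = eval(f, x);
        if (v != 0.0) {
            if (prev != 0.0 && (v < 0.0) != (prev < 0.0))
                ++changes;
            prev = v;
        }
    }
    return changes;
}
```

With the chain {x^2 - 1, 2x, 1}, changes(-2) - changes(2) = 2, matching the two roots of x^2 - 1 in (-2, 2].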


> I will search the internet as well, for thirteen years ago versus now
> makes a very big difference.

I would question the choice made for the far points (line 530+ of
polysolv.cpp): interval/mean < RELERROR?

> 
> Jaap


