Christoph Hormann wrote:
> Samuel Benge wrote:
>>>
>>> BTW, we're not talking about what _looks_ better but what matches the
>>> actual surface better. I'm saying this because of the different looks
>>> of the grainy cylinder. Seems the grain size is in the order of the
>>> accuracy.
>
> No, we are talking about what looks better. Of course it has to
> resemble the actual model to look good but even some quite significant
> difference is acceptable as long as it looks reasonable.
>
That is probably an attitude shaped by LOTW and the like.
People who want to plot physical/mathematical functions (like me)
may see it differently...
> And what you are referring to as 'grain' is simply the pattern function
> used.
>
Actually, "grain" was just meant to indicate which part of the image I was
talking about. Of course I knew that the surface was perturbed by the
granite pattern.
> The appearance of the cylinder is completely correct in all
> versions and does not profit much from the patch (the random differences
> are mostly due to aliasing).
>
If you look carefully, you can see that in the unpatched version, part of
the surface appears to be farther from the camera.
Which is, of course, to be expected.
> As clearly visible the patch only interpolates the t value (the distance
> of the intersection from the camera) it does nothing more!
>
[ There is nothing new in this statement. Everybody who reads the patch
code should immediately see this... :p ]
> Therefore
> most artefacts are still there. Note the black areas are - as assumed
> previously - mostly caused by the use of the high accuracy values for
> normal calculation as well. If i just use 1/10 of the value for the
> normal they are gone:
>
As I mentioned earlier, I too first thought that "my" black points were caused
by incorrect normal calculation; this is something that could still be
improved.
The current normal calculation algorithm has two principal problems:
(1) It does not use a symmetric ("leap-frog" / central) difference like
    (f(x+h)-f(x-h))/(2h) but the one-sided (f(x)-f(x-h))/h, which has the
    advantage that two fewer function evaluations are needed, but the
    disadvantage that the error is of order O(h) instead of O(h^2).
    If we use central differencing, the image already looks better, as can
    be seen here:
http://www.cip.physik.uni-muenchen.de/~wwieser/tmp/files/test-central.png
(2) The value h is chosen more or less arbitrarily to equal the accuracy.
    This is not necessarily a good approach: the bisection solver can reach
    very small accuracies, but leading-digit cancellation will increasingly
    spoil the finite-difference gradient used for the normal calculation.
    If we keep the accuracy at 0.1 but use h=0.01 for the normal
    calculation, the image looks like this:
http://www.cip.physik.uni-muenchen.de/~wwieser/tmp/files/test-0.1.png
Wolfgang