I seem to get smoother results with a high error_bound - is this normal? With
low error_bound it looks as though invisible three-legged salamanders are
shining torches at my walls....
Radiosity settings are (this error_bound is okay, 0.4 is blotchy):
radiosity{count 500 error_bound 0.99 recursion_limit 1 brightness 1 normal true}
ambient_light 0.01
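For clarity, the same settings laid out in the global_settings block they sit in (values copied from the line above):

global_settings {
  ambient_light 0.01
  radiosity {
    count 500
    error_bound 0.99   // 0.4 gives blotchy results here
    recursion_limit 1
    brightness 1
    normal true
  }
}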
--
#macro G(D,E,F)#local I=array[3]{D,E,F}#local B=0;triangle{#while(
B<3)#while(I[B])A[mod(I[B],10)]+#local I[B]=div(I[B],10);#end<-5,-
2,9>#local B=B+1;#end}#end #local A=array[7]{x,x*2,x*4,y,y*2,y*4,z
}light_source{-x*6-z*9,1}mesh{G(105,10,146)G(105,246,10)G(105,56,
146)G(105,1256,246)G(1256,126,220)G(22156,2216,201)pigment{rgb 1}}//TM
Tom Melly wrote:
>
> I seem to get smoother results with a high error_bound - is this normal? With
> low error_bound it looks as though invisible three-legged salamanders are
> shining torches at my walls....
>
> Radiosity settings are (this error_bound is okay, 0.4 is blotchy):
>
> radiosity{count 500 error_bound 0.99 recursion_limit 1 brightness 1 normal true}
> ambient_light 0.01
>
That's perfectly normal; if you decrease error_bound, you will usually have
to use a higher count to get equally smooth results.
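For example (the numbers are only illustrative guesses, not tested values):

global_settings {
  radiosity {
    count 1600        // raised from the original 500 to compensate for...
    error_bound 0.4   // ...the lower, more accurate bound
    recursion_limit 1
    brightness 1
  }
}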
Christoph
--
Christoph Hormann <chr### [at] gmxde>
IsoWood include, radiosity tutorial, TransSkin and other
things on: http://www.schunter.etc.tu-bs.de/~chris/
Christoph Hormann <chr### [at] gmxde> wrote:
: That's perfectly normal, if you decrease error_bound you will usually have
: to use higher count to get equally smooth results.
Yes, but what I think is not normal is that with lower error_bound values
you get graininess/blotchiness, and you usually just can't get rid of it.
This might be a bug or design mistake or something similar.
--
#macro N(D,I)#if(I<6)cylinder{M()#local D[I]=div(D[I],104);M().5,2pigment{
rgb M()}}N(D,(D[I]>99?I:I+1))#end#end#macro M()<mod(D[I],13)-6,mod(div(D[I
],13),8)-3,10>#end blob{N(array[6]{11117333955,
7382340,3358,3900569407,970,4254934330},0)}// - Warp -
"Warp" <war### [at] tagpovrayorg> wrote in message news:3bbc74d9@news.povray.org...
>
> Yes, but what I think is not normal is that with lower error_bound values
> you get graininess/blotchiness, and you usually just can't get rid of it.
> This might be a bug or design mistake or something similar.
It certainly seems extreme, and even very high counts don't seem to affect it
much - have I got it right that low error bounds are meant to be more accurate?
My non-programming guess was that the higher accuracy means that the generalised
averaging takes place in smaller areas, hence the greater discrepancy. But what
do I know?
Warp wrote:
>
> Yes, but what I think is not normal is that with lower error_bound values
> you get graininess/blotchiness, and you usually just can't get rid of it.
> This might be a bug or design mistake or something similar.
>
That would be nice (at least if it's possible to fix), but I doubt it.
Christoph
--
Christoph Hormann <chr### [at] gmxde>
IsoWood include, radiosity tutorial, TransSkin and other
things on: http://www.schunter.etc.tu-bs.de/~chris/
Tom Melly <tom### [at] tomandlucouk> wrote:
: I have got it right that low error bounds are meant to be more accurate?
Yes. As far as I know, the idea of the error_bound value is to act as a kind
of "threshold" for deciding whether the current point is too far from (or
something similar to) previously calculated lighting values. If the point is
close enough, the lighting is calculated just by interpolating; if it's too
far away, rays are shot and a new value is calculated at that point.

In an optimal, perfectly working algorithm this would (at least theoretically)
change the size of the lighting "spots" in the scene. That is, with a higher
error_bound you get bigger and fewer spots (with uniform lighting change), and
with a smaller error_bound you get smaller and more densely distributed spots.
The lighting would always be very smooth in either case, but the change of
lighting across the scene is more accurate with a smaller error_bound. You
need a small error_bound to get, for example, accurate lighting in room
corners.

However, there's something wrong either in the algorithm or in its
implementation. Instead of getting smooth but accurate lighting with a small
error_bound, the lighting starts to get noisy. That is, you start to get
abrupt lighting changes where only smooth lighting transitions are expected.

If there's a bug in the implementation, I personally would first look at the
interpolation code. If the lighting interpolation is done the wrong way, it
could cause those artifacts. It might be that the closest lighting values are
not truly weighted by their distance to the current point, or that this
distance is calculated in the wrong way.
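For concreteness, a rough SDL sketch of that inverse-distance weighting idea
(purely illustrative - the real code is C inside POV-Ray, and these sample
positions and values are made up):

// Three made-up cached sample points and their stored brightness values,
// interpolated by the inverse of their distance to the query point.
#declare SamplePos = array[3] {<0,0,0>, <2,0,0>, <0,2,0>}
#declare SampleVal = array[3] {0.8, 0.5, 0.3}
#macro InterpolateLight(P)
  #local Sum  = 0;
  #local WSum = 0;
  #local I = 0;
  #while (I < 3)
    // weight each cached value by the inverse of its distance to P
    #local W = 1 / (vlength(P - SamplePos[I]) + 0.000001);
    #local Sum  = Sum + SampleVal[I] * W;
    #local WSum = WSum + W;
    #local I = I + 1;
  #end
  (Sum / WSum)
#end
// usage: a grey level interpolated from the cached samples
sphere { <1,1,0>, 0.5 pigment { rgb InterpolateLight(<1,1,0>) } }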
If the bug is not there, then I don't know where to look next...
--
#macro N(D,I)#if(I<6)cylinder{M()#local D[I]=div(D[I],104);M().5,2pigment{
rgb M()}}N(D,(D[I]>99?I:I+1))#end#end#macro M()<mod(D[I],13)-6,mod(div(D[I
],13),8)-3,10>#end blob{N(array[6]{11117333955,
7382340,3358,3900569407,970,4254934330},0)}// - Warp -
Christoph Hormann <chr### [at] gmxde> wrote:
: That would be nice (at least if it's possible to fix) but i doubt it.
Basically the same algorithm is used in Radiance, and there it works like
a charm.
--
#macro N(D,I)#if(I<6)cylinder{M()#local D[I]=div(D[I],104);M().5,2pigment{
rgb M()}}N(D,(D[I]>99?I:I+1))#end#end#macro M()<mod(D[I],13)-6,mod(div(D[I
],13),8)-3,10>#end blob{N(array[6]{11117333955,
7382340,3358,3900569407,970,4254934330},0)}// - Warp -
"Warp" <war### [at] tagpovrayorg> wrote in message news:3bbc7a79@news.povray.org...
<snip>
Hmm, could it be a problem to do with no overlap between the areas interpolated?
The defects appear pretty circular.
As an analogy, if I "average" a bunch of pixels by taking each individual
pixel on its own, the result won't be very smooth, even though it is accurate.
If I get the average by averaging all the pixels together, the result will be
smooth but inaccurate. To achieve both accuracy and smoothness, I ought to
average the pixels in small blocks (say 2x2) and just offset by one before
taking the next average, so the blocks overlap? e.g.
abcd
average ab then bc then cd not ab then cd
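In made-up numbers, something like this (nothing to do with the real
radiosity code, just the averaging idea):

#declare V = array[4] {0.1, 0.3, 0.7, 0.9}  // four made-up "pixel" values
// overlapping pairs ab, bc, cd give a smooth ramp: 0.20, 0.50, 0.80
#declare I = 0;
#while (I < 3)
  #debug concat("overlapping: ", str((V[I] + V[I+1]) / 2, 0, 2), "\n")
  #declare I = I + 1;
#end
// disjoint pairs ab, cd jump straight from 0.20 to 0.80
#debug concat("disjoint: ", str((V[0] + V[1]) / 2, 0, 2), " ", str((V[2] + V[3]) / 2, 0, 2), "\n")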
Needless to say, I have no idea what I'm talking about....
Tom Melly wrote:
>
> I seem to get smoother results with a high error_bound - is this normal? With
> low error_bound it looks as though invisible three-legged salamanders are
> shining torches at my walls....
Lowering pretrace_end too should help.
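For example (the exact values are just guesses; a smaller pretrace_end makes
the pretrace go down to finer steps, so more samples are gathered before the
final pass):

global_settings {
  radiosity {
    count 500
    error_bound 0.4
    recursion_limit 1
    brightness 1
    pretrace_start 0.08
    pretrace_end   0.005   // lowered to give a finer pretrace
  }
}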
_____________
Kari Kivisalo
Kari Kivisalo wrote in message <3BBCB8AF.ED6A93B9@engineer.com>...
>
>Lowering pretrace_end too should help.
>
Thanks for the tip.