POV-Ray : Newsgroups : povray.advanced-users : Radiosity rendering depends on camera direction?
From: Tamas Gunda
Subject: Radiosity rendering depends on camera direction?
Date: 6 Apr 2019 15:10:00
Message: <web.5ca8f9228c4dc5b364274a210@news.povray.org>
I created 3D cube images with POV-Ray - spherical images assembled from 6 images,
each corresponding to one face of a cube. Without radiosity there are no
problems. However, if radiosity is turned on, the six images do not fit exactly;
there are differences in the brightness of the faces. The code is exactly the
same for all faces; only the camera is rotated by 90 degrees. It seems that the
result of rendering with radiosity turned on depends on the camera direction.

To better understand, have a look at www.gunda.hu/cube/povray

Another perfect example, without using radiosity, is at www.gunda.hu/cube


Post a reply to this message

From: Alain
Subject: Re: Radiosity rendering depends on camera direction?
Date: 6 Apr 2019 16:38:37
Message: <5ca90e4d$1@news.povray.org>
On 2019-04-06 at 15:09, Tamas Gunda wrote:
> I created 3D cube images with POV-Ray - spherical images assembled from 6 images,
> each corresponding to one face of a cube. Without radiosity there are no
> problems. However, if radiosity is turned on, the six images do not fit exactly;
> there are differences in the brightness of the faces. The code is exactly the
> same for all faces; only the camera is rotated by 90 degrees. It seems that the
> result of rendering with radiosity turned on depends on the camera direction.
> 
> To better understand, have a look at www.gunda.hu/cube/povray
> 
> Another perfect example, without using radiosity, is at www.gunda.hu/cube
> 
> 
> 

Strange; radiosity has no effect on the geometry and should be 
independent of the camera's location and orientation.

What can happen is a difference in which areas radiosity samples. If 
that's the case, increasing the count value should help; using the 
two-value variant also often helps:
count 175 25000

You may also want to reduce the value of pretrace_end. The default is 
0.04; try 0.01, 0.005 or 0.0025.

Looking at the dark end, you really need more samples. Quadrupling 
your count value may not be quite enough. You may also need to increase 
the recursion_limit value to 4 or maybe 5. Reducing minimum_reuse 
could also help; the default is 0.015. Try 0.01 down to 0.004.
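
Put together, that might look something like this (untested values, only a 
starting point; what works is scene-dependent):

global_settings {
  radiosity {
    count 175 25000     // two-value variant: samples taken, pool of directions
    pretrace_end 0.005  // down from the 0.04 default
    recursion_limit 4   // or 5 for deep, dark areas
    minimum_reuse 0.008 // down from the 0.015 default
  }
}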

Adding some roundness to the edges can do a lot to reduce the 
artefacts in the darker areas. A clipped cylinder will do the trick; see 
the sketch below.
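
For example, one vertical edge of a unit box could be rounded like this (a 
sketch only; the radius and coordinates are placeholders to adapt to the 
actual walls):

// Round the vertical edge at x=1, z=1 of a unit box with a quarter cylinder.
#declare R = 0.1; // assumed edge radius
merge {
  box { <0, 0, 0>, <1 - R, 1, 1> }  // slab shortened along x
  box { <0, 0, 0>, <1, 1, 1 - R> }  // slab shortened along z
  cylinder { <1 - R, 0, 1 - R>, <1 - R, 1, 1 - R>, R } // fills the corner, rounded
  pigment { rgb 0.9 }
}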

If you can post the source of your scene, it'll be easier to diagnose 
the issue and help you.



Alain


Post a reply to this message

From: Tamas Gunda
Subject: Re: Radiosity rendering depends on camera direction?
Date: 7 Apr 2019 09:30:00
Message: <web.5ca9f7bd7b3f92fb64274a210@news.povray.org>
Alain <kua### [at] videotronca> wrote:
> Strange; radiosity has no effect on the geometry and should be
> independent of the camera's location and orientation.
>
> What can happen is a difference in which areas radiosity samples. If
> that's the case, increasing the count value should help; using the
> two-value variant also often helps:
> count 175 25000
>
> You may also want to reduce the value of pretrace_end. The default is
> 0.04; try 0.01, 0.005 or 0.0025.
>
> Looking at the dark end, you really need more samples. Quadrupling
> your count value may not be quite enough. You may also need to increase
> the recursion_limit value to 4 or maybe 5. Reducing minimum_reuse
> could also help; the default is 0.015. Try 0.01 down to 0.004.
>
> Adding some roundness to the edges can do a lot to reduce the
> artefacts in the darker areas. A clipped cylinder will do the trick.
>
> If you can post the source of your scene, it'll be easier to diagnose
> the issue and help you.
>
>
>
> Alain

Thanks for the reply. I systematically tried many variations. That helped to
reduce the artefacts; however, the main problem could not be solved. The result
is influenced mostly by the radiosity brightness and the finish/diffuse values
of the walls, but the effect of a given setting differs between faces. Inspect
www.gunda.hu/cube/povray again: in the third picture some of the differences
between the cube sides disappeared (the upper and the right), while others
became stronger. In other words, it seems to me that nearly every setting
influences the brightness of the generated pictures to some degree, and this
influence somehow depends on the direction of the camera.


The radiosity settings used:

#if (radio)
 global_settings {
    ambient_light 0
    radiosity {
      pretrace_start  64/image_width
      pretrace_end    4/image_width  //8
      count 250 25000  //150
      nearest_count 10 //
      error_bound 0.33 //0.5
      recursion_limit 4 //3
      low_error_factor 0.5  //0.5
      gray_threshold 0.5  //0.5
      minimum_reuse 0.004 //0.025
      maximum_reuse 0.2 //0.2
      brightness 1.3 //1.5
      adc_bailout 0.01 //0.01
      media m_media
      //max_sample 1
      //normal on
    }
  }

#else
 global_settings {ambient_light .1}

#end


Tamas


Post a reply to this message

From: clipka
Subject: Re: Radiosity rendering depends on camera direction?
Date: 8 Apr 2019 08:51:49
Message: <5cab43e5$1@news.povray.org>
On 06.04.2019 at 21:09, Tamas Gunda wrote:
> I created 3D cube images with POV-Ray - spherical images assembled from 6 images,
> each corresponding to one face of a cube. Without radiosity there are no
> problems. However, if radiosity is turned on, the six images do not fit exactly;
> there are differences in the brightness of the faces. The code is exactly the
> same for all faces; only the camera is rotated by 90 degrees. It seems that the
> result of rendering with radiosity turned on depends on the camera direction.

This is to be expected to some degree.

What happens is that radiosity takes a number of samples of indirect 
lighting and interpolates between them. The locations of those samples 
are determined pseudo-randomly, with a heavy influence from the camera 
perspective (as well as other factors).

The interpolation between samples introduces subtle artifacts (very low 
"frequency" and thus difficult to see under normal circumstances); the 
camera-dependent pseudo-randomness causes the artifacts to differ 
significantly between renders, even with only minor variations in camera 
perspective or radiosity settings. This means that in a 1:1 comparison, 
or when stitching images without soft blending between them, the 
artifacts will become evident.

In official POV-Ray, the only way to solve this is to either use very 
high-quality radiosity settings, so that enough samples are taken for the 
seams to remain within the limits of human perception, or somehow 
introduce high-"frequency" noise so that the seams "drown" in it.
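
One crude way to introduce such noise, for example, would be the crand 
finish parameter, which subtracts a small random amount from the surface 
colour per sample (my own suggestion; whether it masks these particular 
seams is untested):

// Adds high-"frequency" grain to a surface; keep the value small,
// as larger values look like heavy film grain.
finish { diffuse 0.7 crand 0.02 }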

In UberPOV, you could choose radiosity settings such that instead of 
interpolating between a limited number of samples it would compute 
indirect lighting for each surface point separately. This changes the 
type of artifacts to high-frequency noise, which will automatically hide 
the seams, and arbitrary trade-offs between quality and render time can 
easily be made via stochastic anti-aliasing. However, this also 
increases render time significantly, especially so if you aim to reduce 
the noise below the threshold of human perception. Another drawback is 
that UberPOV only supports v3.7.0 syntax, not that of v3.8.0-alpha 
currently in development.

Another alternative would be to use MCPov, but it is more difficult to 
set up (you probably need to tamper with the scene itself, not just 
global settings), has a systematic error in brightness computations that 
needs working around, is limited to v3.6 syntax, and does not support 
multi-core operation "out of the box". The approach would be more or 
less the same as with UberPOV, but it may be able to achieve the same 
quality with less CPU time.


Post a reply to this message

From: Alain
Subject: Re: Radiosity rendering depends on camera direction?
Date: 8 Apr 2019 13:06:06
Message: <5cab7f7e$1@news.povray.org>
On 2019-04-07 at 09:28, Tamas Gunda wrote:
> Alain <kua### [at] videotronca> wrote:
>> Strange; radiosity has no effect on the geometry and should be
>> independent of the camera's location and orientation.
>>
>> What can happen is a difference in which areas radiosity samples. If
>> that's the case, increasing the count value should help; using the
>> two-value variant also often helps:
>> count 175 25000
>>
>> You may also want to reduce the value of pretrace_end. The default is
>> 0.04; try 0.01, 0.005 or 0.0025.
>>
>> Looking at the dark end, you really need more samples. Quadrupling
>> your count value may not be quite enough. You may also need to increase
>> the recursion_limit value to 4 or maybe 5. Reducing minimum_reuse
>> could also help; the default is 0.015. Try 0.01 down to 0.004.
>>
>> Adding some roundness to the edges can do a lot to reduce the
>> artefacts in the darker areas. A clipped cylinder will do the trick.
>>
>> If you can post the source of your scene, it'll be easier to diagnose
>> the issue and help you.
>>
>>
>>
>> Alain
> 
> Thanks for the reply. I systematically tried many variations. That helped to
> reduce the artefacts; however, the main problem could not be solved. The result
> is influenced mostly by the radiosity brightness and the finish/diffuse values
> of the walls, but the effect of a given setting differs between faces. Inspect
> www.gunda.hu/cube/povray again: in the third picture some of the differences
> between the cube sides disappeared (the upper and the right), while others
> became stronger. In other words, it seems to me that nearly every setting
> influences the brightness of the generated pictures to some degree, and this
> influence somehow depends on the direction of the camera.
> 
> 
> The radiosity settings used:
> 
> #if (radio)
>   global_settings {
>      ambient_light 0
>      radiosity {
>        pretrace_start  64/image_width
>        pretrace_end    4/image_width  //8
>        count 250 25000  //150
>        nearest_count 10
Using a larger value, up to 20, can help smooth some rough edges.
>        error_bound 0.33 //0.5
>        recursion_limit 4 //3
Try 5 to bring some more light down the hallway.
>        low_error_factor 0.5  //0.5
Reducing this may eliminate some artefacts.
>        gray_threshold 0.5  //0.5
This should be kept at the default of 0.
>        minimum_reuse 0.004 //0.025
>        maximum_reuse 0.2 //0.2
>        brightness 1.3 //1.5
This should be left at the default of 1.
>        adc_bailout 0.01 //0.01
Try something smaller, like 1/256. It can help when you have very bright 
and very dark areas. (The suggestions above are pulled together into one 
block after the quote below.)
>        media m_media
>        //max_sample 1
>        //normal on
>      }
>    }
> 
> #else
>   global_settings {ambient_light .1}
> 
> #end
> 
> 
> Tamas
> 
> 
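
Pulling these inline suggestions together, the whole block might look like 
this (a sketch only; the values remain scene-dependent, and m_media is 
whatever value the original scene declares):

global_settings {
  ambient_light 0
  radiosity {
    pretrace_start  64/image_width
    pretrace_end    4/image_width
    count 250 25000
    nearest_count 20      // up from 10, to smooth some rough edges
    error_bound 0.33
    recursion_limit 5     // brings more light down the hallway
    low_error_factor 0.3  // reduced, to cut some artefacts
    gray_threshold 0      // back to the default
    minimum_reuse 0.004
    maximum_reuse 0.2
    brightness 1          // back to the default
    adc_bailout 1/256     // helps with very bright and very dark areas
    media m_media
  }
}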

Just an idea: try rendering it all in one go, then project onto a box or 
sphere.

First, render using this camera:

camera{
  spherical
  angle 360
  up y
  right x*2
}
Render with a 2:1 aspect ratio with radiosity.
The image will look a little strange, with both ends of the hallway 
visible at the same time.
You may want to do a test without radiosity first to get a feel for how 
it works.
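
For example, such a render could be started like this (scene.pov and 
cube.png are placeholder names, and the resolution is only an assumption; 
see the sizing note later in the thread):

povray +Iscene.pov +Ocube.png +W8192 +H4096 +A0.3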

Next, project the resulting image onto a box:

box{ -1, 1
  pigment{ image_map{ png "ImageName.png" map_type 1 scale <-1,1,1> } }
  finish{ emission 1 diffuse 0 ambient 0 }
}

Or a sphere:

sphere{ 0, 1
  pigment{ image_map{ png "ImageName.png" map_type 1 scale <-1,1,1> } }
  finish{ emission 1 diffuse 0 ambient 0 }
}

Render with a normal camera located at <0,0,0>, with no lights.
If the image is too bright or too dim, adjust emission to compensate.
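
For example, one cube face could be extracted with a camera like this (my 
own sketch; rotate it in 90-degree steps about y, and plus or minus 90 
degrees about x, for the other five faces):

camera {
  perspective
  location <0, 0, 0>
  look_at  <0, 0, 1>
  angle 90      // 90 degrees horizontally = exactly one cube face
  up y
  right x       // square aspect; render at equal width and height
}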

As the whole scene was rendered as a single image, there can be no 
seams, allowing you to use more relaxed radiosity settings.
The final renders will be fast, as the geometry is simple, there are no 
lights, and there is only an image_map to evaluate.


Alain


Post a reply to this message

From: Tamas Gunda
Subject: Re: Radiosity rendering depends on camera direction?
Date: 8 Apr 2019 15:15:00
Message: <web.5cab9cb67b3f92fb64274a210@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:

>
> This is to be expected to some degree.
> ...
> ...
> ...
> In official POV-Ray, the only way to solve this is to either use very
> high-quality radiosity settings, so that enough samples are taken for the
> seams to remain within the limits of human perception, or somehow
> introduce high-"frequency" noise so that the seams "drown" in it.
>

Indeed, I have now used very high radiosity settings (with a rendering time
about 20x longer), and although the result is still not perfect, it is much
better. See the third variation at the above URL now.

Tamas


Post a reply to this message

From: clipka
Subject: Re: Radiosity rendering depends on camera direction?
Date: 9 Apr 2019 06:25:17
Message: <5cac730d$1@news.povray.org>
On 08.04.2019 at 19:06, Alain wrote:

> Just an idea: try rendering it all in one go, then project onto a box or
> sphere.

That's actually a pretty smart idea.
It's important though to use a high enough resolution, or at least use 
interpolation for the projected image.

Alternatively, the image could be rendered directly to a single cube map 
image using a v3.8.0-alpha user-defined camera (or a v3.7.0 mesh camera, 
but that's more of a hassle to set up). If 6 separate images are needed, 
that could then simply be achieved by cutting the output image into 
pieces in any image processing software.


Post a reply to this message

From: Alain
Subject: Re: Radiosity rendering depends on camera direction?
Date: 9 Apr 2019 20:07:55
Message: <5cad33db@news.povray.org>
On 2019-04-09 at 06:25, clipka wrote:
> On 08.04.2019 at 19:06, Alain wrote:
> 
>> Just an idea: try rendering it all in one go, then project onto a box or
>> sphere.
> 
> That's actually a pretty smart idea.
> It's important though to use a high enough resolution, or at least use 
> interpolation for the projected image.
> 
That's why I suggested doing some tests without radiosity first, to get a 
feel for it.

The horizontal resolution should be roughly the sum of the horizontal 
resolutions of the individual images.


Post a reply to this message

From: Tamas Gunda
Subject: Re: Radiosity rendering depends on camera direction?
Date: 10 Apr 2019 14:40:05
Message: <web.5cae37317b3f92fb92733c3b0@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> On 08.04.2019 at 19:06, Alain wrote:
>
> > Just an idea: try rendering it all in one go, then project onto a box or
> > sphere.
>
> That's actually a pretty smart idea.
> It's important though to use a high enough resolution, or at least use
> interpolation for the projected image.
>
> Alternatively, the image could be rendered directly to a single cube map
> image using a v3.8.0-alpha user-defined camera (or a v3.7.0 mesh camera,
> but that's more of a hassle to set up). If 6 separate images are needed,
> that could then simply be achieved by cutting the output image into
> pieces in any image processing software.

Bingo!

Thanks, Alain. Rendering with a 360-degree spherical camera in one go, then
projecting onto a sphere and cutting into cubic images, has yielded a seamless
spherical image. The resolution of the first rendering should be at least twice
that of the final image.
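
To illustrate that rule of thumb with assumed numbers: each cube face spans
90 degrees, one quarter of the 360-degree width, so for 1024-pixel faces the
spherical render should be at least 2 x 4 x 1024 = 8192 pixels wide (4096
high at 2:1).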

http://www.gunda.hu/cube/povray/index3.html

Tamas


Post a reply to this message

From: Alain
Subject: Re: Radiosity rendering depends on camera direction?
Date: 10 Apr 2019 16:47:03
Message: <5cae5647@news.povray.org>
On 2019-04-10 at 14:34, Tamas Gunda wrote:
> clipka <ano### [at] anonymousorg> wrote:
>> On 08.04.2019 at 19:06, Alain wrote:
>>
>>> Just an idea: try rendering it all in one go, then project onto a box or
>>> sphere.
>>
>> That's actually a pretty smart idea.
>> It's important though to use a high enough resolution, or at least use
>> interpolation for the projected image.
>>
>> Alternatively, the image could be rendered directly to a single cube map
>> image using a v3.8.0-alpha user-defined camera (or a v3.7.0 mesh camera,
>> but that's more of a hassle to set up). If 6 separate images are needed,
>> that could then simply be achieved by cutting the output image into
>> pieces in any image processing software.
> 
> Bingo!
> 
> Thanks, Alain. Rendering with a 360-degree spherical camera in one go, then
> projecting onto a sphere and cutting into cubic images, has yielded a seamless
> spherical image. The resolution of the first rendering should be at least twice
> that of the final image.
> 
> http://www.gunda.hu/cube/povray/index3.html
> 
> Tamas
> 
> 

Glad that it worked for you.


Post a reply to this message

