POV-Ray : Newsgroups : povray.advanced-users : Re: Radiosity rendering depends on camera direction?
From: clipka
Date: 8 Apr 2019 08:51:49
Message: <5cab43e5$1@news.povray.org>
Am 06.04.2019 um 21:09 schrieb Tamas Gunda:
> I created 3D cube images with Povray - spherical images assembled from 6 images,
> each of the images corresponding to one face of a cube. Without using radiosity
> there are no problems. However, if radiosity is turned on, the six images do not
> fit exactly, there are differences in brightness of the faces. The code is
> exactly the same for all faces, only the camera is rotated by 90 degs. Seems
> that the result of rendering with radiosity turned on depends on the camera
> direction.

This is to be expected to some degree.

What happens is that radiosity takes a couple of samples of indirect 
lighting, and interpolates between those. The location of those samples 
is determined pseudo-randomly, with a heavy influence from the camera 
perspective (as well as other factors).

The interpolation between samples introduces subtle artifacts (very low 
"frequency" and thus difficult to see under normal circumstances); the 
camera-dependent pseudo-randomness causes the artifacts to differ 
significantly between renders, even with only minor variations in camera 
perspective or radiosity settings. This means that in a 1:1 comparison, 
or when stitching images without soft blending between them, the 
artifacts will become evident.

In official POV-Ray, the only way to solve this is to use either very 
high-quality radiosity settings, or to somehow introduce high-"frequency" 
noise: either enough samples are taken that the accuracy of the image is 
high enough for the seams to remain below the threshold of human 
perception, or the seams "drown" in the noise.
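As an illustration, pushing the sampling density up would mean a radiosity block along these lines (the specific values here are assumptions for the sake of example, not tuned recommendations; all keywords are standard POV-Ray 3.7 radiosity settings):

```pov
// Illustrative high-quality radiosity settings; values are
// examples only and must be tuned per scene.
global_settings {
  radiosity {
    count 400           // many sample rays per sample point
    error_bound 0.25    // tighter bound -> denser sample placement
    recursion_limit 2
    nearest_count 10    // interpolate over more nearby samples
    low_error_factor 0.5
    pretrace_start 0.08
    pretrace_end 0.002  // very fine final pretrace pass
  }
}
```

The trade-off is render time: lower `error_bound` and `pretrace_end` values in particular can increase it dramatically.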

In UberPOV, you could choose radiosity settings such that instead of 
interpolating between a limited number of samples it would compute 
indirect lighting for each surface point separately. This changes the 
type of artifacts to high-frequency noise, which will automatically hide 
the seams, and arbitrary trade-offs between quality and render time can 
easily be made via stochastic anti-aliasing. However, this also 
increases render time significantly, especially so if you aim to reduce 
the noise below the threshold of human perception. Another drawback is 
that UberPOV only supports v3.7.0 syntax, not that of v3.8.0-alpha 
currently in development.
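A sketch of that approach (keyword and option names here are from memory of UberPOV's extensions and should be checked against its documentation before use):

```pov
// UberPOV scene file: opt in to the unofficial syntax extensions
#version unofficial patch 3.7;
global_settings {
  radiosity {
    no_cache   // assumed UberPOV keyword: evaluate indirect lighting
               // per surface point instead of interpolating a cache
    count 10   // few rays per point; noise is averaged out by AA
  }
}
```

combined with stochastic anti-aliasing on the command line, e.g. `+AM3 +A0.05 +AC0.99`, where the anti-aliasing threshold and confidence control the quality/time trade-off.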

Another alternative would be to use MCPov, but it is more difficult to 
set up (you probably need to tamper with the scene itself, not just 
global settings), has a systematic error in brightness computations that 
needs working around, is limited to v3.6 syntax, and does not support 
multi-core operation "out of the box". The approach would be more or 
less the same as with UberPOV, but it may be able to achieve the same 
quality with less CPU time.
