Newsgroups: povray.unofficial.patches
Subject: Re: Is anyone working on a distributed / stochastic / Monte Carlo ray-tracing patch for 3.7?
From: clipka
Date: 22 Jul 2014 08:26:33
Message: <53ce5879$1@news.povray.org>
On 22.07.2014 09:11, scott wrote:
>> A less artifact-prone replacement for the effect that official POV-Ray
>> uses radiosity for (so-called Global Illumination)? Raise your hand, and
>> I'll make it happen in UberPOV.
> I don't know what you have in mind for this, if it's not the render over
> and over again approach?

Essentially that, yes.

At the moment, for Global Illumination UberPOV still uses the same 
approach as POV-Ray: on an object's surface it picks some points more 
or less at random, computes indirect illumination there with high 
precision(*), and caches the results; for any point in between, 
indirect illumination is interpolated from nearby cached samples. This 
interpolation can lead to systematic artifacts that are difficult to get 
rid of - obviously it's not enough to just sample the image over and 
over again with the same set of cached samples.
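To make the caching idea concrete, here is a minimal sketch of sample caching with interpolation. The cache structure, the search radius, and the inverse-distance weighting are all invented for illustration; POV-Ray's actual radiosity cache is considerably more sophisticated.

```python
import math

cache = []  # list of (position, irradiance) pairs computed at high precision

def cache_sample(pos, irradiance):
    """Store an expensively computed sample for later reuse."""
    cache.append((pos, irradiance))

def lookup(pos, radius=2.0):
    """Interpolate irradiance from cached samples within `radius`
    by inverse-distance weighting; None means a new sample is needed."""
    num = den = 0.0
    for p, e in cache:
        d = math.dist(pos, p)
        if d < radius:
            w = 1.0 / (d + 1e-6)   # closer samples dominate
            num += w * e
            den += w
    return num / den if den else None

cache_sample((0.0, 0.0), 0.2)
cache_sample((1.0, 0.0), 0.8)
print(lookup((0.5, 0.0)))  # midway between the two cached values
```

Whatever the weighting scheme, every uncached point reuses the same small set of samples, which is why re-rendering with an unchanged cache reproduces the same interpolation artifacts.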

(*) Computation of indirect illumination is done simply by shooting a 
number of secondary rays and seeing what light comes in from the 
objects they hit; the more rays are shot, the more precise the 
result will be.
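The footnote amounts to a plain Monte Carlo estimator: average the light returned by N random secondary rays. In this sketch, `incoming_light` is a made-up stand-in for tracing a full secondary ray, not an actual POV-Ray routine.

```python
import random

def incoming_light(direction):
    """Stand-in for tracing a secondary ray into the scene."""
    return 1.0  # pretend the surroundings are uniformly lit

def estimate_indirect(n_rays, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        # pick a random direction over the upper hemisphere (crudely)
        direction = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1))
        total += incoming_light(direction)
    return total / n_rays  # more rays -> lower variance of this estimate

print(estimate_indirect(100))  # -> 1.0 for this uniform environment
```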

To avoid the typical POV-Ray radiosity artifacts, MCPov (re-)computes 
indirect illumination for each and every point on an object's surface, 
and doesn't cache the results at all. Usually this is done with a 
comparatively low precision, which also leads to artifacts; however, 
they manifest as random noise that can be reduced by oversampling pixels 
over and over again.
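The key property of that random noise is that it averages out: the error of a pixel's mean falls off roughly as 1/sqrt(N) as passes accumulate. A toy demonstration, with an invented noise model that has nothing to do with MCPov's actual sampling:

```python
import random
import statistics

def noisy_pixel(rng):
    """One low-precision pass: true value 0.5 plus random shot noise."""
    return 0.5 + rng.gauss(0.0, 0.2)

def averaged(n_passes, seed=1):
    """Average the same pixel over n_passes independent passes."""
    rng = random.Random(seed)
    return statistics.fmean(noisy_pixel(rng) for _ in range(n_passes))

# A few passes leave visible noise; thousands converge on the true value.
print(abs(averaged(4) - 0.5), abs(averaged(4096) - 0.5))
```

This is exactly why uncached per-point sampling tolerates low precision: the artifacts are unbiased noise, so more passes always help, whereas interpolation artifacts from a fixed cache never average away.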

>> Render over and over again while you sit and watch, until you're happy
>> with the result? Not supported in UberPOV yet; it's on the agenda, but
>> may require quite some intrusion into the POV-Ray architecture. What
>> UberPOV does already support, however, is an "anti-aliasing" mode (or,
>> more to the point, an oversampling mode) that allows you to specify what
>> quality you'd be happy with, and UberPOV will do the
>> rendering-over-and-over-again in the background until the quality
>> criteria are met.
> This seems like a better approach, as with MCPov you're often left
> re-rendering the entire image waiting for just a small dark area to
> smooth out.

It's not all that bad; MCPov does spend more time on areas of the image 
that prove to really need the additional work.
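That adaptive behaviour can be sketched as: keep re-sampling each pixel until its estimated standard error drops below a target, so quiet areas converge after a handful of samples while noisy (e.g. dark) regions get many more. The noise model and thresholds below are invented for illustration.

```python
import random

def sample(pixel, rng):
    """One noisy sample of a pixel described as (true_value, sigma)."""
    true_val, sigma = pixel
    return true_val + rng.gauss(0.0, sigma)

def adaptive_render(pixels, target_stderr=0.01, seed=7):
    """Return how many samples each pixel needed to reach target_stderr."""
    rng = random.Random(seed)
    counts = []
    for pixel in pixels:
        _, sigma = pixel
        n, total = 0, 0.0
        # keep sampling until the standard error sigma/sqrt(n) is small enough
        while n == 0 or sigma / n**0.5 > target_stderr:
            total += sample(pixel, rng)
            n += 1
        counts.append(n)
    return counts  # noisier pixels receive far more samples

quiet, noisy = (0.5, 0.01), (0.1, 0.2)
print(adaptive_render([quiet, noisy]))
```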

> So if you implement the less artefact-prone GI and combine it with the
> per-pixel oversampling it should be something better than MCPov? Sounds
> good to me! Hand firmly raised!

I'm not sure that'll suffice to actually be better than MCPov; time 
will tell. A definite advantage will be the full support for 

Two other things that have always bothered me about MCPov are that it 
doesn't allow the use of classic light sources (which would be far less 
noisy, and hence much faster, than using bright spheres), and that it has 
a factor-2 error in its diffuse computations that makes it necessary to 
use different finish settings. Both make it excessively difficult to 
create scenes that render essentially identically (except for artifacts 
or noise) in both POV-Ray and MCPov. Needless to say, UberPOV is 
intended to do a better job in that respect.

