Warp wrote:
> A BRDF is a function relating incoming light (from all possible directions)
> to outgoing light (to all possible directions).
>
> In other words (in the ideal case), at each point light coming from all
> possible directions is taken into account, and this light can be reflected
> in all possible directions, with a factor given by the BRDF.
>
> Naturally in order to calculate this you would have to shoot an infinite
> number of rays from each point, and when those rays intersect other surfaces,
> again an infinite amount of rays would have to be sent from those points,
> ad infinitum.
>
> In practice this is, of course, impossible. However, brute-force renderers
> approximate it by simply shooting rays in random directions, lots and
> lots of them. The more rays are sent, the closer the final result gets
> to the ideal. The image is very grainy at first, because the number of
> samples is nowhere near enough; but as more and more rays are traced,
> the result slowly converges.
>
> When the BRDFs are properly designed to simulate the behavior of
> real-world materials, the results can be quite realistic.
>
> Or this is how I have understood it.
OK. Thanks.
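The brute-force idea Warp describes can be sketched in a few lines. This is just a toy (my own illustration, not anybody's renderer): a Monte Carlo estimate of the irradiance at a point under a uniform sky, where the analytic answer for constant radiance L is pi * L. Few samples give a noisy ("grainy") result; many samples converge.

```python
import math
import random

def estimate_irradiance(n_samples, radiance=1.0, seed=1):
    """Monte Carlo estimate of irradiance under a uniform sky.

    Shoots random rays over the hemisphere; each sample is the incoming
    radiance weighted by cos(theta) and divided by the sampling pdf.
    For constant radiance L the exact answer is pi * L.
    """
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)   # pdf of uniform hemisphere sampling
    total = 0.0
    for _ in range(n_samples):
        # For directions uniform on the hemisphere, cos(theta) is
        # uniform in [0, 1] (Archimedes' hat-box theorem).
        cos_theta = rng.random()
        total += radiance * cos_theta / pdf
    return total / n_samples

# A handful of samples is noisy; 100000 samples land close to pi.
print(estimate_irradiance(100))
print(estimate_irradiance(100000))
```

A real path tracer does the same averaging per pixel, recursing at each bounce, which is exactly why the infinite-recursion problem above bites.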
> POV-Ray's stochastic global illumination resembles this, although it's
> quite limited (and doesn't use BRDFs).
Also doesn't recompute irradiance at each point, but reuses samples
from nearby points [in an effort to reduce the insane number of ray
intersection tests required]. Hence all that parameter-fiddling business.
As best I can tell, the algorithm described just sounds like POV-Ray's
radiosity with an infinitely low error_bound. (I.e., always resample.)
But applied to *all* terms, not just diffuse illumination...
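For anyone following along, the parameters in question live in POV-Ray's radiosity block; something like this (values are illustrative, not a recommendation):

```pov
global_settings {
  radiosity {
    count 200          // rays shot from each sample point
    error_bound 0.5    // lower = reuse cached samples less, resample more
    recursion_limit 3  // depth of diffuse interreflection
  }
}
```

Driving error_bound toward zero is the "always resample" limit described above: every point gets a fresh gather, at enormous cost.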
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*