POV-Ray : Newsgroups : povray.beta-test : Radiosity Status: Giving Up...
  Radiosity Status: Giving Up... (Message 25 to 34 of 194)
From: nemesis
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 15:30:01
Message: <web.49593282cd9d1e75180057960@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Chambers <ben### [at] pacificwebguycom> wrote:
> > Just a quick note: a few quick calculations will show that 1600 samples
> > is not enough for things like a lightbulb across the room, or the sun in
> > the sky, to be effectively used for radiosity.
>
>   OTOH, why not make the lightbulb or the sun a light source?
>
>   I don't really understand people's obsession in making radiosity-only
> scenes. Radiosity calculates diffuse lighting only. If you use only radiosity
> to calculate the lighting of the scene, the entire scene will consist of
> purely diffusely lighted surfaces. There will be no specular component.
>
>   When a surface is purely diffuse, without any specular component to it,
> it's like a 100% matte surface, with no reflective component. There will
> be no highlights caused by light sources, which will make all surfaces
> matte and dull.
>
>   It's the specular component of lighting which makes surfaces look vivid,
> lively and brilliant, which make them have that "sparky" look. Purely matte
> surfaces are flat and dull.

Indeed.  But specular is itself a fake, a rough shortcut to the real deal:
specular is really diffuse reflection on very smooth surfaces.  Those
highlights also did not match the shape of the light sources -- at least until
you corrected that for area lights -- which is also why they look funky
from almost tangent angles...

I've seen people advocating the use of (glossy) reflection to more realistically
achieve that effect.



From: clipka
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:05:00
Message: <web.4959496bcd9d1e75ab169ede0@news.povray.org>
andrel <a_l### [at] hotmailcom> wrote:
> Ok, I'll try to read it. If only I could find the link.
> Could you post that again, I clicked through most of your posts here and
> I cannot find it. :(

No surprise - I never posted a link to it ;)

I don't remember where I got my copy from. I just googled it up ("A Ray Tracing
Solution for Diffuse Interreflection" and "Gregory J. Ward" should do fine); I
found a few pages that looked like they would have charged for access to the
document, but there's also a PDF out there with the text (alas not the
diagrams), and a series of TIFFs from a scanned-in copy (including the
diagrams).

> I'd be surprised if there is not already an efficient random generator
> that does that all in one go, bypassing the need for trigonometry. Maybe
> we only need the help of a resident google expert.

I don't think so.

Generating good pseudo-random number sequences is an art in itself, even without
specific target distributions, so at the core there will almost always be an
algorithm generating uniformly distributed scalar (i.e. 1-D) numbers.

The desired distribution is usually achieved through some smart (non-linear)
transformation of the generated numbers (which in our case would involve
trigonometrics I guess, or at least squares and roots). See POV-ray's random
numbers include file (don't recall the exact name right now) for examples.

Alternatively, it can be done through Monte-Carlo rejection, i.e. pulling some
random number from a uniform-distribution RNG, computing the desired probability
for that particular value, and then randomly discarding the pulled number
according to this probability (by comparing the probability against another
uniformly-distributed random number), repeating the whole process until a value
is accepted. However, you'll usually try to avoid this, as it may become quite
slow depending on your desired distribution, and you'll inevitably end up with
unpredictable execution times. (The POV random numbers include file implements
this, for example, to generate a random point inside an object.)
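A minimal sketch of that rejection scheme, here for the example of picking a point uniformly inside the unit sphere; `uniform01()` is a hypothetical stand-in for whatever uniform [0,1) generator is at hand, not POV-Ray's actual include-file code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Hypothetical stand-in for a uniform [0,1) generator.
static double uniform01() {
    return std::rand() / (RAND_MAX + 1.0);
}

struct Vec3 { double x, y, z; };

// Monte-Carlo rejection: keep pulling points from the bounding cube
// [-1,1]^3 until one lands inside the unit sphere. The acceptance rate
// is pi/6 (about 52%), so the number of iterations is unpredictable,
// which is exactly the drawback described above.
Vec3 random_point_in_unit_sphere() {
    Vec3 p;
    do {
        p.x = 2.0 * uniform01() - 1.0;
        p.y = 2.0 * uniform01() - 1.0;
        p.z = 2.0 * uniform01() - 1.0;
    } while (p.x * p.x + p.y * p.y + p.z * p.z > 1.0);
    return p;
}
```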

So even though there are RNGs out there that do produce particular
distributions, I reckon they'll typically just be wrappers around a
uniform-distribution RNG for your convenience, so that you don't have to come
up with all the math yourself (which may be a hassle because it involves
integrals); in the end your CPU will still need to do trigonometrics.



From: clipka
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:10:00
Message: <web.49594a2bcd9d1e75ab169ede0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
>   I wonder if adaptive supersampling (similar to what is used in antialiasing
> method 2 and adaptive area lights) could be used in the stochastic sampling:
> If the brightness of two adjacent samples differ more than a given threshold,
> take an additional sample between them.

Shame on you! Still haven't read the paper yet, have you? :)

The original paper by Greg Ward et al. explicitly mentions the possibility of
using an adaptive approach. And I actually think about implementing it. Not
this year though, I guess :)



From: clipka
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:30:01
Message: <web.49594f41cd9d1e75ab169ede0@news.povray.org>
Severi Salminen <sev### [at] NOTTHISsaunalahtifiinvalid> wrote:
> 1. Choose a random point on a disc. (You can do this quickly by first
> choosing a random point on square and then check if it is inside a circle).
> 2. Project it on a hemisphere.
>
> Voila! There you have a cosine weighted random point on a hemisphere. No
> need to do the cosine weighting anymore.

.... aside from the fact that what you describe *is* cosine weighting (unless
your approach would be wrong, that is) :)

Thinking about it (didn't check the math behind it yet), it's not all that bad:

- take two random numbers [-1..1] (well, ]-1..1] should do as well)
- compare sum of squares against 1.0
- repeat if necessary

doesn't sound too bad (however, performance should be compared with taking two
random numbers and transforming them to a circle distribution using trigonometry
instead of the Monte-Carlo approach).

Projecting onto the hemisphere (I guess you're talking about a parallel
projection along the surface normal) just needs to take the sum of squares we
already have, subtract it from 1, and take the square root as the additional
co-ordinate.

So yes, that should do: A loop typically taking less than two iterations (the
most expensive part probably being the RNG) and a square root. Quite
inexpensive after all.

So it's definitely less expensive than going via a uniform distribution on a
sphere in any case. Don't know whether that can be achieved without
trigonometry, and at least it will involve a root somewhere as well - plus the
math to get either a cosine weighting or a cosine weighted distribution.


Thanks, this helped me a lot - mostly because it also provides a good basis for
how to properly do adaptive supersampling in a cosine-weighted world.



From: clipka
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:45:01
Message: <web.49595273cd9d1e75ab169ede0@news.povray.org>
"Chambers" <ben### [at] pacificwebguycom> wrote:
> Just a quick note: a few quick calculations will show that 1600 samples
> is not enough for things like a lightbulb across the room, or the sun in
> the sky, to be effectively used for radiosity.

I feared there would be some catch :}

> > implement some adaptive algorithm. I also think it would be a good idea
> > to flag objects as "radiosity targets", like it is done with photons, to
> > inform the sampling algorithm about small but bright objects so it can
> > shoot a few more rays in that direction.
>
> That would be a great way to deal with what would otherwise be extremely
> large sample sets.

That's why I'm thinking about it :)

Another idea I had was to do such flagging automatically. Say we do the first
samples with a very high sample count. We use some techniques borrowed from
adaptive sampling (though we're not actually going adaptive yet), and if some
close sample rays detect a high gradient (either in color or distance) the area
is somehow marked as "hot".

An area getting less than a certain threshold of "hot" markers will get less
attention during subsequent passes, while areas getting particularly high
numbers of "hot" markers will get increased attention.

However, I'm not sure yet about how to store this type of information,
especially since the perception of what area is "hot" will gradually change
from point to point. Maybe some information attached to each sample, and any
new sample will first check whether there are any samples nearby that know
something about hotspots already.

Just a bunch of wild ideas at the moment.



From: clipka
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:50:01
Message: <web.4959538bcd9d1e75ab169ede0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
>   I don't really understand people's obsession in making radiosity-only
> scenes. Radiosity calculates diffuse lighting only. If you use only radiosity
> to calculate the lighting of the scene, the entire scene will consist of
> purely diffusely lighted surfaces. There will be no specular component.

That's where the HDRI and reflections with micronormals come into play :)

Honestly, this combination really rocks. Your classic specular highlights are
nothing compared to what you can get out of this combo.

Takes some rendering time though.



From: Severi Salminen
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:53:55
Message: <49595503$1@news.povray.org>
> .... aside from the fact that what you describe *is* cosine weighting (unless
> your approach would be wrong, that is) :)

Yea, I just meant that now we don't have to take the cosine factor into
account anymore when figuring the radiance (or whatever) contribution, as the
ray distribution handles it by itself. Both give the exact same result.

> doesn't sound too bad (however, performance should be compared with taking two
> random numbers and transforming them to circle distribution using trigonometry
> instead of the monte-carlo approach).

In my Monte-Carlo path tracing renderer the above method was faster than
those using polar coordinates (a random phi and theta describing the
angles); it was very clearly faster. I have a Core2Duo, so things might be
different on other platforms or with other compilers. I use GCC.
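For reference, the polar-coordinate variant being compared against can be sketched like this (`uniform01()` is a hypothetical stand-in for a uniform [0,1) generator; the square root on the radius keeps the disc density uniform per unit area):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Hypothetical stand-in for a uniform [0,1) generator.
static double uniform01() {
    return std::rand() / (RAND_MAX + 1.0);
}

// Cosine-weighted hemisphere direction via polar coordinates: pick the
// disc point directly from a random angle phi and radius r, then
// parallel-project up. No rejection loop, but two trig calls per sample,
// which is where the measured speed difference comes from.
void cosine_weighted_dir(double& x, double& y, double& z) {
    const double PI = 3.14159265358979323846;
    double phi = 2.0 * PI * uniform01();
    double r   = std::sqrt(uniform01()); // sqrt => uniform density per area
    x = r * std::cos(phi);
    z = r * std::sin(phi);
    y = std::sqrt(1.0 - r * r);          // projection onto the hemisphere
}
```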

Severi



From: Warp
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 17:58:24
Message: <49595610@news.povray.org>
clipka <nomail@nomail> wrote:
> So yes, that should do: A loop typically taking less than two iterations (the
> most expensive part probably being the RNG) and a square root. Quite
> inexpensive after all.

  When you project the point to the hemisphere, you'll probably need three
multiplications and a square root. That's going to be much more expensive
than the RNG. (High-quality RNGs are very fast. They are faster than an LCG,
which consists of one integer multiplication and addition.)
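For reference, an LCG of the kind mentioned is a one-liner; this sketch uses the classic Numerical Recipes constants (an arbitrary choice of full-period parameters, not anything POV-Ray-specific):

```cpp
#include <cassert>
#include <cstdint>

// Linear congruential generator: state' = a * state + c (mod 2^32).
// One integer multiplication and one addition per number, as noted above.
// Constants 1664525 / 1013904223 are the well-known Numerical Recipes
// pair; any full-period (a, c) would do.
struct Lcg {
    std::uint32_t state;
    std::uint32_t next() {
        state = 1664525u * state + 1013904223u;
        return state;
    }
};
```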

> So it's definitely less expensive than going via a uniform distribution on a
> sphere in any case. Don't know whether that can be achieved without
> trigonometry, and at least it will involve a root somewhere as well - plus the
> math to get either a cosine weighting or a cosine weighted distribution.

  The whole idea of the given algorithm (i.e. get a random point evenly
distributed on a disc and then parallel-project it onto the hemisphere
corresponding to that disc) is that you don't need any trigonometry
anymore, and the points will be automatically distributed on the
hemisphere according to the cosine function. (The square root and
multiplications take the place of the sin/cos calls.)
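The parenthetical above can be spelled out with a short calculation. Writing theta for the angle from the surface normal, a point uniform on the unit disc has density 1/pi per unit area, and the parallel projection maps a direction at angle theta to disc radius r = sin(theta), so:

```latex
dA_{\text{disc}} = r\,dr\,d\phi
  = \sin\theta\,\cos\theta\,d\theta\,d\phi
  = \cos\theta \underbrace{\sin\theta\,d\theta\,d\phi}_{=\,d\omega}
\quad\Longrightarrow\quad
p(\omega) = \frac{\cos\theta}{\pi}
```

That is, the probability per unit solid angle is proportional to cos(theta): exactly the cosine weighting, with no trigonometric calls needed at sampling time.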

-- 
                                                          - Warp



From: clipka
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 18:00:00
Message: <web.495955e2cd9d1e75ab169ede0@news.povray.org>
"nemesis" <nam### [at] gmailcom> wrote:
> Indeed.  But specular is itself a fake, a rough shortcut to the real deal:
> specular is really diffuse reflection on very smooth surfaces.

If I may correct you: They are in fact *specular* reflections on a comparatively
*rough* surface...

> I've seen people advocating the use of (glossy) reflection to more realistically
> achieve that effect.

.... which is why that advocated approach works to very good effect.

The idea is simple: Make all light sources actually visible. Make all your
surfaces somewhat reflective, and rough them up using a "bump" normal, scaled
to dimensions much smaller than two adjacent pixels on screen. Render the
thing, making sure antialiasing or focal blur takes many samples per pixel.
Voila: Blurred highlights done just the way mother nature does it.

Works great with HDR light probes for sky or backdrop.

Takes a lot of samples though to get flat and comparatively rough surfaces to
look good, so be prepared for overnight renders.



From: Severi Salminen
Subject: Re: Radiosity Status: Giving Up...
Date: 29 Dec 2008 18:19:29
Message: <49595b01$1@news.povray.org>
And this is the code I use:


    double x, y, z;

    // Pick a point uniformly inside the unit disc by rejection: pull
    // (x,z) from the square [-1,1]^2 and retry until it falls inside
    // the circle. randomNumberClosed() returns a uniform value in [0,1].
    do {
        x = 2.0 * rNG.randomNumberClosed() - 1.0;
        z = 2.0 * rNG.randomNumberClosed() - 1.0;
    } while (x*x + z*z > 1.0);

    // Parallel-project up onto the unit hemisphere (sqrt from <cmath>).
    y = sqrt(1.0 - (x*x + z*z));


There you have it: a cosine-weighted random unit vector on the hemisphere.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.