So, I wonder if you guys have ever come across the Lightcuts Global
Illumination algorithm?
http://www.cs.cornell.edu/~kb/projects/lightcuts/
If you're on broadband, check out the video too. It contains some
amazing animations, with complex geometry and lighting handled very
gracefully.
I wonder how feasible an implementation of it for POV-Ray would be,
perhaps as a replacement for radiosity. One apparent advantage is sharp
shadows, as seen in the last seconds of the video. It doesn't seem to do
caustics, though...
It was presented at SIGGRAPH 2005 and is currently being integrated into
Blender:
http://unclezeiv.kerid.org/
nemesis <nam### [at] nospamgmailcom> wrote:
> So, I wonder if you guys have ever come across the Lightcuts Global
> Illumination algorithm?
>
> http://www.cs.cornell.edu/~kb/projects/lightcuts/
Looks very, very, *very* promising when it comes to handling large
numbers of point or area lights.
I'm not sure whether it can be integrated into POV-Ray to do something
radiosity-like, though.
It might be possible with a multi-iteration approach, though:
- Set up all classic light sources (point, area, spot etc. lights)
- Compile virtual light sources from all objects having an emissive term
(objects with an ambient finish and - guess what! - media; hey, this may
become really good after all!); as it seems, the number of light sources
generated is *not* an issue, so we can go to extremes here
That's our initial setting; to get radiosity-like effects, we can
proceed like this:
- Compile more virtual light sources from all objects having a *diffuse*
term (objects with a diffuse surface, and simple scattering media), by
sampling all the incoming light from the initial set-up (using
lightcuts, of course!) and multiplying by the object's diffuse color.
- Repeatedly re-compile all the virtual "diffuse-term light sources", now taking
into account the light coming from other diffuse sources, until we're either
confident that they are "stable", or until we hit some configurable hard
recursion limit.
Caustics could be done by running a classic photon pass first (or multiple times
in between), and creating a light source for every photon that hits a diffuse
surface. I guess it may even be possible to modify the photon pass in a similar
fashion.
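In rough code, the iteration might look something like this (an
illustrative C++ sketch only; every type and helper name here - Scene,
SampleEmissiveSources, GatherWithLightcuts and so on - is a made-up
placeholder, not anything from the actual POV-Ray source):

#include <vector>

// Placeholder types - nothing here is real POV-Ray code.
struct Vec3  { double x, y, z; };
struct Color { double r, g, b; };
struct VirtualLight { Vec3 position; Vec3 normal; Color intensity; };
struct SurfacePoint { Vec3 position; Vec3 normal; Color diffuseColor; };
struct Scene;

// Hypothetical helpers, declared but not defined here:
std::vector<VirtualLight> SampleEmissiveSources(const Scene&);
std::vector<SurfacePoint> SampleDiffuseSurfaces(const Scene&);
Color GatherWithLightcuts(const SurfacePoint&,
                          const std::vector<VirtualLight>&);
Color Modulate(const Color& light, const Color& surface);
bool  Converged(const std::vector<VirtualLight>& prev,
                const std::vector<VirtualLight>& next);

std::vector<VirtualLight> BuildVirtualLights(const Scene& scene,
                                             int maxBounces)
{
    // Initial setting: virtual lights compiled from everything with
    // an emissive term (ambient finish, emissive media); the classic
    // light sources would be added alongside these.
    std::vector<VirtualLight> lights = SampleEmissiveSources(scene);

    std::vector<VirtualLight> previous = lights;
    for (int bounce = 0; bounce < maxBounces; ++bounce)
    {
        // Re-compile the "diffuse-term light sources": gather the
        // incoming light from the previous generation (this is where
        // lightcuts keeps the cost down) and re-emit it scaled by
        // the surface's diffuse color.
        std::vector<VirtualLight> next;
        for (const SurfacePoint& p : SampleDiffuseSurfaces(scene))
        {
            Color incoming = GatherWithLightcuts(p, previous);
            next.push_back({ p.position, p.normal,
                             Modulate(incoming, p.diffuseColor) });
        }
        lights.insert(lights.end(), next.begin(), next.end());
        if (Converged(previous, next))  // "stable" - stop early;
            break;                      // else the hard limit stops us
        previous = next;
    }
    return lights;
}

The photon-based caustics idea would then just be one more way of
filling the same lights vector.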
Just a bunch of thoughts though. Maybe I'm missing some serious obstacles.
Would be a typical MegaPOV patch.... :-)
Thomas
Thomas de Groot wrote:
> Would be a typical MegaPOV patch.... :-)
But why MegaPOV? AFAIK, it's based on an old fork of the POV codebase.
How many MegaPOV patches ever find their way back into POV? I'm still
waiting for AOI or HDRI.
I think this is the right time to re-evaluate some things.
Radiosity is an old algorithm, predating photon mapping (which serves as
a general GI algorithm in renderers other than POV), and it's very
limited in POV: besides the limited bounce count, you can't, for
instance, save the radiosity data and then take a shot of the scene from
another angle, because areas occluded from the original angle will look
awkward.
Worse: it seems as if POV's radiosity is simply a 2D projection that is
later applied over a raytraced scene from the same angle. Besides, it
didn't fare too well in the new multiprocessing core.
So, how about letting it die and either going all photon mapping or
trying out this new lightcuts idea? Its strong point seems to be quickly
rendering scenes made of sheer masses of lights, which are approximated
and /cut/ down in the final scene, yet with amazing fidelity to the
original.
And, yes, advocates of unbiased rendering might want to give it a
thought too...
nemesis <nam### [at] nospamgmailcom> wrote:
> Radiosity is an old algorithm, predating photon mapping
Photon mapping global illumination has already been tried, and it didn't
work too well. The results were much worse than the stochastic technique
currently used.
> you can't, for instance, save the radiosity data and then take a shot
> of the scene from another angle, because areas occluded from the
> original angle will look awkward.
The whole idea of the stochastic algorithm used in POV-Ray is that you
*can* save the results and then render from a different angle and have
POV-Ray only calculate the necessary additional samples for that angle.
If the save/load feature is not doing that, then maybe file a bug report?
The specific implementation in POV-Ray might be slightly lacking in some
areas, for whatever reason, but that doesn't mean the algorithm itself is
not sound.
You make it sound like it's a bad idea to calculate GI samples only for
the visible scene. I'd say it's the exact opposite: that's exactly what
we *want* to do. If you calculated GI for the entire scene, the Sun
would probably die before it finished, if the scene is large enough.
(Besides, how would you limit the GI calculations, given that POV-Ray
supports infinite surfaces?)
A renderer called Radiance successfully implemented the exact algorithm
which is also used in POV-Ray, and Radiance has been used for architectural
lighting simulations and other such applications which require accuracy.
If you search for Radiance pictures, you'll find out that they look quite
good.
I'd say that rather than drop the algorithm, what's needed is a complete
rewrite. Read the original paper by Greg Ward, and study how Radiance did
it correctly and what POV-Ray is doing incorrectly, and fix the problems.
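For reference, the heart of Ward's scheme is a simple sample-reuse
criterion. Roughly (an illustrative C++ sketch; the names are my own,
taken neither from the Radiance nor the POV-Ray source):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

double Dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

double Dist(const Vec3& a, const Vec3& b)
{ Vec3 d{a.x-b.x, a.y-b.y, a.z-b.z}; return std::sqrt(Dot(d, d)); }

struct CachedSample {
    Vec3   position, normal;
    double harmonicMeanDist; // R_i: harmonic mean distance to geometry
    double irradiance;       // E_i (a single channel, for brevity)
};

// Ward's weight: large when the cached sample is nearby and its normal
// agrees with ours; it drops as we move away or the surface curves.
double Weight(const CachedSample& s, const Vec3& p, const Vec3& n)
{
    double d = Dist(p, s.position) / s.harmonicMeanDist;
    double r = 1.0 - Dot(n, s.normal);
    return 1.0 / (d + std::sqrt(r > 0.0 ? r : 0.0));
}

// Returns true if the cache can answer the query; false means a new
// sample must be computed - which is exactly what should happen at a
// new camera angle for parts of the scene the cache has never seen.
bool LookupIrradiance(const std::vector<CachedSample>& cache,
                      const Vec3& p, const Vec3& n,
                      double errorTolerance, double& outE)
{
    double wSum = 0.0, eSum = 0.0;
    for (const CachedSample& s : cache) {
        double w = Weight(s, p, n);
        if (w > 1.0 / errorTolerance) { // sample is "close enough"
            wSum += w;
            eSum += w * s.irradiance;
        }
    }
    if (wSum <= 0.0)
        return false;
    outE = eSum / wSum;
    return true;
}

Nothing in that scheme is tied to the camera position, which is why
saving the samples and reusing them from another angle should work.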
--
- Warp
Warp wrote:
> Photon mapping global illumination has already been tried, and it didn't
> work too well. The results were much worse than the stochastic technique
> currently used.
Hmm, it's the method used by high-end renderers like Mental Ray.
> The whole idea of the stochastic algorithm used in POV-Ray is that you
> *can* save the results and then render from a different angle and have
> POV-Ray only calculate the necessary additional samples for that angle.
> If the save/load feature is not doing that, then maybe file a bug report?
Hmm, perhaps calculating the additional samples for the other angle
means leaving always_sample on in the second phase? I never did that,
which is how I discovered that areas hidden from the original angle
looked terrible from another. It also seems that messing with the
camera's sky messes with the radiosity projection onto the raytraced
scene.
> You make it sound like it's a bad idea to calculate GI samples only for
> the visible scene.
No, I was just wondering why I couldn't reuse a saved file that took
ages to compute and render the scene from another angle with
always_sample off. That seemed to defeat the whole purpose...
> A renderer called Radiance successfully implemented the exact algorithm
> which is also used in POV-Ray
Yes, I know that. "POV-Ray's radiosity is based on an idea by Greg Ward".
I always wanted to use Radiance, but its RIB-like input language is
horrid compared to POV's, and there are too many specialized
command-line tools for my liking, with no front-end of any kind.
> I'd say that rather than drop the algorithm, what's needed is a complete
> rewrite. Read the original paper by Greg Ward, and study how Radiance did
> it correctly and what POV-Ray is doing incorrectly, and fix the problems.
Why stick with old ideas when new and exciting ideas building on
previous failures appear to be successful?
nemesis <nam### [at] nospamgmailcom> wrote:
> > I'd say that rather than drop the algorithm, what's needed is a complete
> > rewrite. Read the original paper by Greg Ward, and study how Radiance did
> > it correctly and what POV-Ray is doing incorrectly, and fix the problems.
> Why stick with old ideas when new and exciting ideas building on
> previous failures appear to be successful?
Because the new ideas usually only work for polygonized scenes or, in
the case of unbiased rendering, are much slower (although they may
result in better-looking images).
Unless I understood incorrectly, the "lightcuts" algorithm is not about
global illumination, but about speeding up illumination from hundreds of
thousands of point light sources, which is a rather different thing. (Global
illumination is about light reflecting from surfaces and illuminating other
surfaces, in turn reflecting from them and illuminating yet other surfaces,
and so on. "Lightcuts" looked like an algorithm for direct illumination only.)
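As far as I can tell, the core of the paper is a refinement loop over a
binary tree built over all the point lights. A minimal sketch of how I
read it (illustrative C++ only; EstimateCluster and ErrorBound are left
as stubs, since computing those bounds cheaply is the real substance of
the paper):

#include <queue>
#include <utility>
#include <vector>

// A node clusters the lights below it; leaves are individual lights.
struct LightNode {
    double     clusterIntensity = 0.0; // summed intensity of the cluster
    LightNode* left  = nullptr;
    LightNode* right = nullptr;
    // representative light, bounding volume etc. omitted
};

// Contribution of the cluster, estimated by shading its representative
// light and scaling by the total cluster intensity (stub only).
double EstimateCluster(const LightNode& n /*, shading point */);

// Conservative upper bound on the error of that estimate, derived from
// the cluster's bounding volume (the paper bounds the material,
// geometry and visibility terms separately; stub only).
double ErrorBound(const LightNode& n /*, shading point */);

double ShadeWithLightcuts(LightNode* root, double relErrorThreshold)
{
    // The "cut" is the set of nodes whose estimates we currently sum.
    // Keep refining the node with the worst error bound until every
    // bound is below a fraction (the paper uses ~2%) of the total.
    using Entry = std::pair<double, LightNode*>;
    std::priority_queue<Entry> cut; // max-heap on the error bound

    double total = EstimateCluster(*root);
    cut.push({ ErrorBound(*root), root });

    while (!cut.empty() && cut.top().first > relErrorThreshold * total)
    {
        LightNode* n = cut.top().second;
        cut.pop();
        if (!n->left)  // leaf: a single light, evaluated exactly,
            continue;  // so there is nothing left to refine
        // Replace the cluster's estimate by its two children's.
        total -= EstimateCluster(*n);
        total += EstimateCluster(*n->left) + EstimateCluster(*n->right);
        cut.push({ ErrorBound(*n->left),  n->left  });
        cut.push({ ErrorBound(*n->right), n->right });
    }
    return total;
}

If the bounds are any good, the loop stops after a few hundred nodes
even with hundreds of thousands of lights in the tree, hence the
reported near-logarithmic scaling. But note that nothing in this loop
bounces light between surfaces, which is why I say it is direct
illumination only.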
--
- Warp
nemesis <nam### [at] nospamgmailcom> wrote:
> But why MegaPOV? AFAIK, it's based on an old fork of the POV codebase.
> How many MegaPOV patches ever find their way back into POV? I'm still
> waiting for AOI or HDRI.
Guess what: HDRI actually *is* in the beta :)
Nobody seems to have made much fuss about it, but it works. It just
seems to handle gamma a bit differently than the MegaPOV implementation,
but that may simply be due to the general gamma handling change.
And AFAIK some things like radiosity and/or photons actually made their way into
official POV via MegaPOV.
Aiming for it to be integrated into MegaPOV seems the right way to me
(and besides: Who uses standard POV anyway? :)); at least as long as
we're talking about 3.7 - for throwing old things overboard, 4.0 would
be a much better candidate.
4.0 is still at its very basics, and is intended to head for radical changes
anyway (new SDL and all).
So getting Lightcuts into MegaPOV - as proof that it *can* be integrated
into existing concepts, that the *POV* development community can
implement it, that it *does* give the expected performance benefits, and
that it can be done *without* sacrificing other features like
reflections, caustics or all the types of media - might give it a chance
to be selected as the *main* lighting model for 4.0. If it's never
integrated into a POV patch, I guess it will at best make it into 4.0 as
an experimental feature.
So yes, going for a Lightcuts-patched 3.6 (or 3.7) - ideally as part of
MegaPOV - might be *the* key to actually getting it into official POV at
all.
Doing it as fast as possible might be crucial on both sides: as of now,
no final decision has been made about the main lighting model for 4.0;
some advocate sticking with the existing model, while others suggest
other approaches, but the discussion seems to have come to a standstill.
An approach with such extraordinary runtime characteristics as promised
by Lightcuts - if proven to be feasible in the POV world - would most
likely tip the scales. A sub-linearly scaling algorithm - and not only
that, it looks like we're talking very close to logarithmic here - where
do you get *that* in 3D rendering, where sometimes you'd be happy if
things scaled sub-quadratically?!
Warp wrote:
> Unless I understood incorrectly, the "lightcuts" algorithm is not about
> global illumination, but about speeding up illumination from hundreds of
> thousands of point light sources, which is a rather different thing. (Global
> illumination is about light reflecting from surfaces and illuminating other
> surfaces, in turn reflecting from them and illuminating yet other surfaces,
> and so on. "Lightcuts" looked like an algorithm for direct illumination only.)
Yes, I was in doubt too. But on the webpage for the paper they say:
"It handles arbitrary geometry, non-diffuse materials, and illumination
from a wide variety of sources including point lights, area lights,...
and indirect illumination." They can't just be plain lying.
So, I did a test using the Blender implementation:
http://img440.imageshack.us/img440/1255/lightcutstestzg0.jpg
The only light source is an area light outside and above the window. So,
indeed, it handles indirect illumination automatically...
It has a lightcuts panel with quite a few parameters, among them the
number of point lights generated from an area light. The default is
immense, so I fearfully turned it down to a modest 128, with an indirect
light factor of about the same. I think such "low" settings account for
the splotches and "cuts" you can see.
OTOH, it took 44 minutes on a P4 2.66. Not as fast as I had hoped for... :P
Next, I'll try exporting to POV and playing around with radiosity. I
didn't like the radiosity in Blender; even with quite a lot of
subdivision it still looks rather ugly, not on par with the lightcuts
picture...
clipka wrote:
> Guess what: HDRI actually *is* in the beta :)
Great! Now only a few more years until mechsim, AOI, displace warps and
some others...
> besides: Who uses standard POV anyway? :))
I never used MegaPOV.
> 4.0 is still at its very basics, and is intended to head for radical changes
> anyway (new SDL and all).
Is it anything other than a hope at this point?
> be feasible in the POV world - would most likely tip the scales. A
> sub-linearly scaling algorithm - and not only that, it looks like
> we're talking very close to logarithmic here - where do you get *that*
> in 3D rendering, where sometimes you'd be happy if things scaled
> sub-quadratically?!
Well, it does indeed seem to handle a huge number of point lights
automatically generated from quite a few light sources. But it seems
about as slow as any other GI method. OTOH, there doesn't seem to be any
radiosity count limit. And I wonder if it'd be more amenable to
multiprocessing than radiosity...