I've been trying to get a basic understanding of this topic for a while now (as
far as it concerns lighting for CGI) and WOW is it tough going, largely
because of the maths involved.
What sparked my interest was reading about its use (apparently to GREAT effect)
in AVATAR.
I'm *beginning* to understand some of the basics-- mainly that it's a technique
for saving render time and memory in *real-time* rendering of games. But I'm
wondering if it has any advantages for our usual raytracing, or how it's even
applicable to non-real-time rendering. (In some ways, it seems to be a kind of
environment-mapping technique that uses GPU power to approximate what we get
from raytracing/radiosity and/or HDRI lighting. But is it baked into textures?
I haven't a clue.) The 'overall' mechanism of what it does makes *some*
sense-- it effectively creates a very blurry environment map for lighting a
scene. (That's my simpleton's understanding of it!) And it does so with very
low computational overhead, which I guess makes a lot of sense in the gaming
world.
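If I've pieced the mechanism together correctly, here's a toy Python sketch of
the core trick (entirely my own made-up example, not taken from AVATAR or from
the article below; the 'sky' function and all its numbers are invented purely
for illustration): project the environment onto 9 spherical-harmonic
coefficients per colour channel, scale them by a cosine kernel, and then the
diffuse lighting for any surface normal is just a 9-term dot product.

import numpy as np

# Real spherical-harmonic basis, bands 0-2 (9 terms), for a unit direction.
def sh_basis(d):
    x, y, z = d
    return np.array([
        0.282095,                 # Y(0, 0)
        0.488603 * y,             # Y(1,-1)
        0.488603 * z,             # Y(1, 0)
        0.488603 * x,             # Y(1, 1)
        1.092548 * x * y,         # Y(2,-2)
        1.092548 * y * z,         # Y(2,-1)
        0.315392 * (3*z*z - 1),   # Y(2, 0)
        1.092548 * x * z,         # Y(2, 1)
        0.546274 * (x*x - y*y),   # Y(2, 2)
    ])

# Made-up environment: a bright 'sun' overhead plus dim bluish ambient.
def sky(d):
    up = max(d[2], 0.0)
    return np.array([0.2, 0.2, 0.4]) + up**4 * np.array([3.0, 2.8, 2.0])

# Project the environment onto the 9 SH coefficients (per colour channel)
# by Monte Carlo integration over the sphere of directions.
rng = np.random.default_rng(0)
n = 20000
coeffs = np.zeros((9, 3))
for _ in range(n):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)        # uniform random direction on the sphere
    coeffs += np.outer(sh_basis(v), sky(v))
coeffs *= 4 * np.pi / n           # (sphere area / sample count)

# Scale each band by the clamped-cosine kernel (Ramamoorthi & Hanrahan,
# "An Efficient Representation for Irradiance Environment Maps", 2001),
# turning radiance coefficients into irradiance coefficients.
A = np.array([3.141593,
              2.094395, 2.094395, 2.094395,
              0.785398, 0.785398, 0.785398, 0.785398, 0.785398])
irr_coeffs = coeffs * A[:, None]

# Diffuse irradiance for any unit normal is now just a 9-term dot product
# (multiply by albedo/pi for a Lambertian surface's outgoing colour).
def irradiance(normal):
    return sh_basis(normal) @ irr_coeffs

print(irradiance(np.array([0.0, 0.0, 1.0])))   # normal facing the 'sun'
print(irradiance(np.array([0.0, 0.0, -1.0])))  # normal facing away

The upshot (if I've understood it) being: all the expensive integration over
the environment happens once, up front; after that, lighting any point is just
a handful of multiplies and adds, which I suppose is exactly why it's so cheap
at run-time.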
Here's a good intro to the subject, as good as any other; the page has links to
some of the original academic papers as well...
http://imdoingitwrong.wordpress.com/2011/04/14/spherical-harmonics-wtf/