> I'm *beginning* to understand some of the basics-- mainly that it's a
> render-time-and-memory-saving technique for use in *real-time* rendering of
> games. But I'm wondering if it has any advantages for our usual raytracing, or
> how it's even applicable to non-real-time rendering. (In some ways, it seems to
> be a kind of environment-mapping technique that uses GPU power to emulate(?)
> what we get from raytracing/radiosity and/or HDRI lighting. But is it baked into
> textures? I haven't a clue.) The 'overall' mechanism of what it does makes
> *some* sense-- it creates a blurry environment map for lighting a scene. (That's
> my simpleton's understanding of it!) And with low overhead as far as
> computation goes, which I guess makes a lot of sense in the gaming world.
I played about with this a bit back in the DirectX 6/7(?) days when it
was first included. From what I remember, it helps with lighting diffuse
surfaces and with blurred reflections in real time. Without SH you'd need
to sample the environment at a number of points and do a weighted average
to come up with the result, which is both costly and noisy. SH is a way
to perform a (costly) one-off transformation of the environment map so
that smooth, "realistic-looking" diffuse lighting can be computed with
very little overhead per frame. It might be a good speedup for rendering
animations in POV if you can approximate your background as a
texture-mapped sphere; it might even give results similar to, or smoother
than, radiosity for a given render time.
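The two-phase idea described above (an expensive one-off projection of the environment, then a cheap per-frame lookup) can be sketched in Python. This is only an illustration, not anything from DirectX or POV: it uses the standard 9-term real SH basis and the well-known Lambertian convolution weights (pi, 2*pi/3, pi/4); the function and parameter names are made up for the example.

```python
import math

def sh_basis(x, y, z):
    """First 9 real spherical-harmonic basis functions (bands 0..2),
    evaluated for a unit direction (x, y, z)."""
    return [
        0.282095,                  # Y_0^0
        0.488603 * y,              # Y_1^-1
        0.488603 * z,              # Y_1^0
        0.488603 * x,              # Y_1^1
        1.092548 * x * y,          # Y_2^-2
        1.092548 * y * z,          # Y_2^-1
        0.315392 * (3*z*z - 1),    # Y_2^0
        1.092548 * x * z,          # Y_2^1
        0.546274 * (x*x - y*y),    # Y_2^2
    ]

def project_environment(env, n_theta=32, n_phi=64):
    """The costly one-off step: integrate env(direction) against each
    basis function over the sphere (simple Riemann sum)."""
    coeffs = [0.0] * 9
    for i in range(n_theta):
        theta = math.pi * (i + 0.5) / n_theta
        for j in range(n_phi):
            phi = 2 * math.pi * (j + 0.5) / n_phi
            x = math.sin(theta) * math.cos(phi)
            y = math.sin(theta) * math.sin(phi)
            z = math.cos(theta)
            # Solid-angle element for this sample.
            d_omega = math.sin(theta) * (math.pi / n_theta) * (2 * math.pi / n_phi)
            radiance = env(x, y, z)
            for k, b in enumerate(sh_basis(x, y, z)):
                coeffs[k] += radiance * b * d_omega
    return coeffs

# Per-band weights that convolve the environment with the cosine
# (Lambertian) lobe: pi, 2*pi/3, pi/4 for bands 0, 1, 2.
A = [math.pi, 2 * math.pi / 3, math.pi / 4]
BAND = [0, 1, 1, 1, 2, 2, 2, 2, 2]

def irradiance(coeffs, nx, ny, nz):
    """The cheap per-frame step: diffuse irradiance for surface normal n,
    just a 9-term dot product instead of many environment samples."""
    b = sh_basis(nx, ny, nz)
    return sum(A[BAND[k]] * coeffs[k] * b[k] for k in range(9))
```

For example, projecting a hypothetical "bright sky overhead" environment like `lambda x, y, z: max(z, 0.0)` once, then calling `irradiance()` per shading point, gives an upward-facing normal noticeably more light than a downward-facing one, with no per-frame sampling of the environment at all.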